I’ve side-eyed most major newspapers in recent years, including The New York Times, for how they’ve handled reporting on everything from Orange Indictment Guy, to Black Lives Matter, to the pandemic. But a free, robust press is a must for a healthy democracy, and so while I don’t love everything they publish, I’m still glad the NYT exists. It feels as if journalism is being attacked on all sides: declining trust in the press, the death of print, misinformation everywhere, hostility toward journalists, and now AI.

AI is yet another threat to journalism and an independent press because it has no guardrails and spits out false information. A recent Stanford study showed that while ChatGPT improves in some areas, it’s getting worse over time at certain math problems and at answering nuanced questions. But AI might just have an Achilles’ heel: copyright infringement. ChatGPT and other large language models have “scraped” huge swathes of content from The New York Times (and other publications) without consent. Now, the NYT is considering suing OpenAI, the company that created ChatGPT.
The New York Times and OpenAI could end up in court.
Lawyers for the newspaper are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions.
For weeks, the Times and the maker of ChatGPT have been locked in tense negotiations over reaching a licensing deal in which OpenAI would pay the Times for incorporating its stories in the tech company’s AI tools, but the discussions have become so contentious that the paper is now considering legal action.
The individuals who confirmed the potential lawsuit requested anonymity because they were not authorized to speak publicly about the matter.
A lawsuit from the Times against OpenAI would set up what could be the most high-profile legal tussle yet over copyright protection in the age of generative AI.
The possible case against OpenAI: So-called large language models like ChatGPT have scraped vast parts of the internet to assemble data that inform how the chatbot responds to various inquiries. The data-mining is conducted without permission. Whether hoovering up this massive repository is legal remains an open question.
If OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case.
In other words, if a federal judge finds that OpenAI illegally copied the Times’ articles to train its AI model, the court could order the company to destroy ChatGPT’s dataset, forcing the company to recreate it using only work that it is authorized to use.
Federal copyright law also carries stiff financial penalties, with violators facing fines up to $150,000 for each infringement “committed willfully.”
“If you’re copying millions of works, you can see how that becomes a number that becomes potentially fatal for a company,” said Daniel Gervais, the co-director of the intellectual property program at Vanderbilt University who studies generative AI. “Copyright law is a sword that’s going to hang over the heads of AI companies for several years unless they figure out how to negotiate a solution.”
[From NPR]
You know what? I hope the NYT takes OpenAI to the damn cleaners. I hope those tech bros are forced to fold, taking the entire AI industry with them. The case against them seems relatively cut-and-dried to me, but I’m not an attorney. I’m sure OpenAI will come up with some slippery defense about the AI not being capable of “willfully” copying the NYT because it’s not human.

There are so many ethical problems with AI, and copyright infringement is just one of them. There’s evidence that AI is promoting eating disorders and sending people “thinspo,” which is heinous. Tech companies aren’t stopping it from doing that. It’s bad for the environment because it takes massive amounts of energy and water to run. And AI still needs people to run it, and it turns out a lot of those people, mostly based in the Global South, are not earning a living wage for continuing to train it.

I’ve been obsessed with the movie Oppenheimer this summer (I’ve seen it three times), and while it’s obviously about nuclear weapons, its themes also map neatly onto AI. I feel that AI has the potential to be more destructive than we realize. Just because a technology is powerful does not mean it is good. And just because a new technology is an impressive scientific discovery, that doesn’t make it ethical, or wise, to use it.
Photo credits: Marco Lenti and Emiliano Vittoriosi on Unsplash and Matheus Bertelli on Pexels