OpenAI Has Little Legal Recourse Against DeepSeek, Tech Law Experts Say


- OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under intellectual property and contract law.
- OpenAI's terms of use might apply but are largely unenforceable, they say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.

In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with questions and hoovered up the resulting data trove to rapidly and cheaply train a model that's now nearly as good.

The Trump administration's top AI czar said this training process, called "distilling," amounted to intellectual property theft. OpenAI, meanwhile, told Business Insider and other outlets that it's investigating whether "DeepSeek may have inappropriately distilled our models."
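
For readers unfamiliar with the term, the process being described works roughly like this: send many prompts to an existing chatbot, record its answers, and use those prompt-and-answer pairs as training data for a new model. The sketch below is purely illustrative of that idea; the function names are hypothetical placeholders, not OpenAI's or DeepSeek's actual code or APIs.

```python
# Conceptual sketch of "distillation" as described above. The two helper
# functions are hypothetical stand-ins for a real chatbot API call and a
# real fine-tuning pipeline.

def query_teacher(prompt: str) -> str:
    """Stand-in for sending a prompt to a hosted chatbot and getting its answer."""
    return f"(teacher model's answer to: {prompt})"

def finetune_student(pairs: list[tuple[str, str]]) -> None:
    """Stand-in for fine-tuning a smaller 'student' model on collected pairs."""
    print(f"fine-tuning student model on {len(pairs)} prompt/answer pairs")

# 1. Bombard the teacher with questions and collect its answers.
prompts = ["Explain photosynthesis simply.", "Summarize the French Revolution."]
training_pairs = [(p, query_teacher(p)) for p in prompts]

# 2. Train the student on the teacher's outputs instead of raw web data.
finetune_student(training_pairs)
```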

OpenAI is not saying whether the company plans to pursue legal action, instead promising what a spokesperson described as "aggressive, proactive countermeasures to protect our technology."

But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds OpenAI was itself sued on in an ongoing copyright suit filed in 2023 by The New York Times and other news outlets?

BI posed this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.

OpenAI would have a hard time proving an intellectual property or copyright claim, these lawyers said.

"The concern is whether ChatGPT outputs" - indicating the answers it creates in action to inquiries - "are copyrightable at all," Mason Kortz of Harvard Law School stated.

That's because it's unclear whether the answers ChatGPT spits out count as "creativity," he said.

"There's a teaching that states creative expression is copyrightable, but realities and concepts are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, said.

"There's a substantial question in copyright law right now about whether the outputs of a generative AI can ever make up imaginative expression or if they are necessarily unguarded truths," he included.

Could OpenAI roll those dice anyway and claim that its outputs are protected?

That's unlikely, the attorneys said.

OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permitted "fair use" exception to copyright protection.

If they do a 180 and tell DeepSeek that training is not a fair use, "that may come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"

There might be a difference between the Times and DeepSeek cases, Kortz added.

"Maybe it's more transformative to turn news short articles into a design" - as the Times accuses OpenAI of doing - "than it is to turn outputs of a model into another design," as DeepSeek is stated to have done, Kortz stated.

"But this still puts OpenAI in a pretty predicament with regard to the line it's been toeing regarding fair use," he added.

A breach-of-contract lawsuit is more likely

A breach-of-contract claim is much likelier than an IP-based suit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.


The terms of service for Big Tech chatbots like those made by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.

"So perhaps that's the claim you might possibly bring - a contract-based claim, not an IP-based claim," Chander stated.

"Not, 'You copied something from me,' but that you took advantage of my design to do something that you were not permitted to do under our agreement."

There might be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for claims "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."

There's a bigger hitch, however, experts said.

"You should know that the dazzling scholar Mark Lemley and a coauthor argue that AI regards to usage are likely unenforceable," Chander stated. He was referring to a January 10 paper, "The Mirage of Expert System Regards To Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for Infotech Policy.

To date, "no design developer has in fact tried to enforce these terms with financial charges or injunctive relief," the paper says.

"This is most likely for good reason: we believe that the legal enforceability of these licenses is doubtful," it includes. That remains in part because model outputs "are mostly not copyrightable" and due to the fact that laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "deal limited option," it says.

"I think they are likely unenforceable," Lemley told BI of OpenAI's terms of service, "since DeepSeek didn't take anything copyrighted by OpenAI and since courts normally will not impose contracts not to compete in the lack of an IP right that would avoid that competitors."

Lawsuits between parties in different nations, each with its own legal and enforcement systems, are always challenging, Kortz said.

Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.

Here, OpenAI would be at the mercy of another extremely complex area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that goes back to before the founding of the US.

"So this is, a long, made complex, filled procedure," Kortz added.

Could OpenAI have protected itself better from a distilling attack?

"They might have used technical measures to block repetitive access to their website," Lemley said. "But doing so would also hinder normal customers."

He added: "I don't believe they could, or should, have a valid legal claim against the browsing of uncopyrightable details from a public site."

Representatives for DeepSeek did not immediately respond to a request for comment.

"We know that groups in the PRC are actively working to use methods, including what's understood as distillation, to attempt to duplicate sophisticated U.S. AI designs," Rhianna Donaldson, an OpenAI representative, told BI in an emailed statement.