ChatGPT, OpenAI, Napster: AI is the future, and so is litigation

Artificial intelligence has gone from science fiction to novelty to a seemingly inevitable part of the future, very, very fast.

One easy way to measure the change is in the headlines: announcements that Microsoft is investing $10 billion in OpenAI, the company behind the eye-popping ChatGPT text generator, followed by news of other AI startups looking for big money. Or stories about school districts desperately trying to cope with students using ChatGPT to write term papers. Or stories about digital publishers like CNET and BuzzFeed admitting, or bragging, that they are using AI to create some of their content, and investors rewarding them for it.

“Until very recently, these were science experiments that nobody cared about,” said Mathew Dryhurst, co-founder of the AI startup Spawning.ai. “[They] have become economically important projects.”

Then there is another leading indicator: lawsuits filed against OpenAI and similar companies, claiming that their AI engines illegally use other people’s work to build their platforms and products. The suits are aimed directly at the current boom in generative AI, software like ChatGPT that uses existing text, images, or code to create new work.

Last fall, a group of anonymous copyright holders sued OpenAI and Microsoft, which owns the GitHub software platform, for allegedly violating the rights of developers who have contributed software to GitHub. Microsoft and OpenAI jointly built GitHub Copilot, a service that uses AI to help write code.

And in January, the same attorneys filed a similar class action suit against Stability AI, the creator of the AI art generator Stable Diffusion, alleging copyright infringement. Meanwhile, Getty Images, the UK-based photo and art library, says it will also sue Stability AI for using its images without a license.

It’s easy to reflexively dismiss legal filings as an inevitable marker of a tech boom: where there’s hype and money, lawyers will follow. But these fights are about the nature of intellectual property and the pros and cons of racing full speed into a new tech environment before anyone knows the rules of the road. Yes, generative AI now seems inevitable. These battles could shape how we use it and how it affects business and culture.

We’ve seen versions of this story play out before. Just ask the music industry, which spent years fighting the migration from CDs to digital songs, or the publishers who vehemently opposed Google’s move to digitize books.

The AI boom is “provoking a familiar reaction among the people we consider creators: ‘My stuff is being stolen,’” says Lawrence Lessig, a Harvard law professor who spent years fighting the music labels and argued that music owners were using copyright rules to squash creativity.

In the early 2000s, the debate over digital rights and copyright was a sideshow that concerned a relatively small portion of the population. But now everyone is online, and even people who don’t consider themselves “creators” may find that what they write and share becomes part of an AI engine, used in ways they never imagined.

And the tech giants leading the AI effort, not just Microsoft but also Google and Facebook, both of which have invested heavily in the industry without yet revealing much of it to the public, are far bigger and more powerful than their dot-com boom counterparts. That means they have more to lose in court challenges, and the resources to fight and delay legal outcomes until those outcomes are beside the point.

The data meals that power AI

The technology behind AI is a complicated black box, and many of the claims and predictions about its power may be overstated. Yes, some AI software appears able to pass parts of MBA and medical licensing exams, but it’s not going to replace your doctor or your CFO yet. And no matter what a bewildered Googler might say, it’s not sentient.

But the basic idea is relatively simple: engines like the ones OpenAI has built ingest giant data sets, which they use to train software that can make recommendations or even generate code, art, or text.
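To make that concrete, here is a deliberately tiny Python sketch of the ingest-and-train idea: a toy bigram model that “learns” from a few sentences and then generates new text. It is a minimal illustration, not how production engines work; real systems train neural networks on billions of documents, and the corpus here is invented.

```python
import random
from collections import defaultdict

# Toy version of the generative-AI loop: ingest text, learn
# statistics from it, then generate new text. Real engines train
# neural networks on billions of documents; this bigram model
# just counts which word tends to follow which.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# "Training": record every word-to-next-word transition in the data.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": emit new text by sampling the learned transitions.
def generate(start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat ..."
```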

Engines often scour the web for these data sets, much the way Google’s search crawlers do so that Google can learn what’s on web pages and catalog them for search queries. In some cases, such as Meta’s, AI engines have access to huge proprietary data sets built in part from the text, photos, and videos users have posted on the company’s own platforms (though Meta says that data goes toward its own services, not toward building ChatGPT-esque AI products). Engines can also license data, as Meta and OpenAI have done with the photo library Shutterstock.
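For a sense of what that web-scouring looks like in miniature, here is a hedged sketch of a breadth-first crawler that collects page text for a training corpus. It assumes the third-party requests and beautifulsoup4 packages are installed, the start URL is a placeholder, and real crawlers add politeness delays, deduplication, and robots.txt checks at vastly larger scale.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 10) -> list[str]:
    """Fetch pages breadth-first and collect their visible text."""
    queue, seen, documents = deque([start_url]), {start_url}, []
    while queue and len(documents) < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(response.text, "html.parser")
        documents.append(soup.get_text(separator=" ", strip=True))
        # Queue up links found on this page for further crawling.
        for link in soup.find_all("a", href=True):
            absolute = urljoin(url, link["href"])
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return documents

corpus = crawl("https://example.com")  # placeholder start URL
print(f"Collected {len(corpus)} documents for training.")
```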

Unlike the music piracy lawsuits of the turn of the century, nobody is claiming that AI engines make bit-for-bit copies of the data they use and distribute them under the same name. For now, the legal questions tend to be about how the data got into the engines in the first place and who has the right to use it.

AI proponents argue that 1) engines can learn from existing data sets without permission, because there is no law against learning, and 2) transforming one set of data into something else entirely is protected by law, a point affirmed by Google’s lengthy and ultimately successful battle against authors and publishers who sued the company over its book index, which cataloged and excerpted a huge number of books.

The arguments against the engines may be even simpler. Getty, for example, says it is happy to license its images to AI engines, but that Stable Diffusion builder Stability AI hasn’t paid up. In the OpenAI/Microsoft/GitHub suit, attorneys argue that Microsoft and OpenAI violated the rights of developers who contributed code to GitHub by ignoring the open source software licenses that govern the commercial use of that code.

And in the Stability AI suit, those same attorneys argue that the image engine really does create copies of artists’ work, even when its output is not a mirror image of the original, and that its output competes with the artists’ ability to make a living.

“I’m not anti-AI. Nobody’s anti-AI. We just want it to be fair and ethical, to be done right,” says Matthew Butterick, an attorney representing the plaintiffs in both suits.

And the questions about data can change depending on who’s asking. Elon Musk, an early investor in OpenAI, has said that now that he owns Twitter, he doesn’t want OpenAI crawling Twitter’s database.

What can the past tell us about the future of AI?

First, remember that the Next Big Thing doesn’t always pan out. Remember when people like me were earnestly trying to figure out what Web3 really meant, when Jimmy Fallon was promoting Bored Ape NFTs, and when FTX was paying millions for Super Bowl ads? That was a year ago.

Still, as the AI hype bubble inflates, I’ve been thinking a lot about the parallels with the music-versus-tech fights of more than 20 years ago.

Briefly: “file-sharing” services blew up the music industry almost overnight, because they let anyone with a broadband connection download any music they wanted for free instead of paying $15 for a CD. The music industry responded by suing the owners of services like Napster, as well as ordinary users, like a 66-year-old grandmother. Over time, the labels won their battles against Napster and its ilk, and in some cases their investors. But they also generated plenty of ill will among music listeners, who bought less and less music, and the value of the music labels plummeted.

But after a decade spent futilely trying to revive CD sales, the music labels eventually made peace with the likes of Spotify, which offered users unlimited listening for a monthly subscription fee. Those fees now exceed what the average listener used to spend per year on CDs, and music rights, and the people who own them, are once again worth a lot of money.

So you can imagine one outcome here: eventually, groups of people who put things on the internet will collectively bargain with tech entities over the value of their data, and everyone wins. Of course, that scenario could also mean that individuals who post things online discover that their personal photos, tweets, or sketches have very little value to an AI engine that trains on billions of inputs.

It’s also possible that the courts, or regulators increasingly interested in taking on tech, especially in the EU, will enforce rules that make operations like OpenAI’s very difficult, or retroactively punish them for taking data without consent. Some tech executives say they’re wary of using AI engines for fear of eventually being sued, or of being asked to unwind whatever work they created with them. We’ve been there before.

But the fact that Microsoft, which is certainly aware of the dangers of punitive regulators, has just poured another $10 billion into OpenAI suggests that the tech industry believes the rewards outweigh the risks, and that any legal or regulatory resolution will arrive long after the AI winners and losers have been sorted out.

For now, a likely compromise is that people who know and care about this stuff will take the time to tell AI engines to leave them alone, much the way anyone who knows how can use a “robots.txt” file to tell Google not to crawl their site.
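For reference, robots.txt is just a plain-text file served from a site’s root. A minimal sketch might look like the following, where “ExampleAIBot” is a hypothetical crawler name used purely for illustration:

```
# robots.txt, served from https://example.com/robots.txt
# Ask every crawler to skip one directory...
User-agent: *
Disallow: /private/

# ...and ask one (hypothetical) AI crawler to skip the whole site.
User-agent: ExampleAIBot
Disallow: /
```

Compliance is voluntary: well-behaved crawlers check the file before fetching pages, but nothing technically stops a crawler from ignoring it.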

Spawning.ai created “Have I Been Trained,” a simple tool for finding out whether your artwork has been consumed by an AI engine, and it offers a way to tell engines not to ingest your work in the future. Spawning co-founder Dryhurst says the tool won’t work for everyone or for every engine, but it’s a start. More important, it’s a placeholder for a way to collectively register what we do and don’t want AI engines to do.

“This is a dress rehearsal and an opportunity to establish habits that will be important in the decades to come,” he told me via email.


