Artificial intelligence: Is it art, or was it AI – and is it legal?

Hair like a wet mop, eyes as big as plates bulging out from under it – Sarah Andersen could even drop the ever-present striped sweater, and readers of her webcomics would still recognize her main character. The strip is semi-autobiographical, about the life of a young woman and her cat in the USA, about the small things of everyday life, with a feminist touch. The American right discovered her comics first: they swapped out the texts in the speech bubbles and republished the strips online. Someone even created a computer font based on her handwriting, making it even easier and quicker to infuse her comics with messages completely different from what the artist intended. That was disturbing.

But what came after that had a completely different impact. Art is something deeply personal, Andersen wrote a few weeks ago in the New York Times, “and artificial intelligence just took the humanity out of it and reduced my life’s work to an algorithm”. Sarah Andersen’s comics were among the billions of training images fed to AI software. Anyone who wants an Andersen-style comic – however one should put it – now only needs to type a sentence into the software, such as: “Draw me a comic in the style of Sarah Andersen in which this and that happens.”

No consent, no attribution, certainly no money

Like Andersen, many artists are now wondering how to respond to the new technology. For Matthew Butterick, the matter is clear. The Los Angeles lawyer, typographer, programmer, and writer is one of the attorneys in two class action lawsuits against the makers of programs like Copilot and Stable Diffusion. Many artists, he writes on his blog, are “concerned that AI systems are being trained on massive amounts of copyrighted work – without consent, without attribution and without compensation”. One of the plaintiffs in the more recent class action lawsuit: Sarah Andersen.

So is it even legal to work with such software? The matter has not yet been decided – it may well have to be fought all the way up to the highest courts, because legally it is genuinely difficult.

Who should be considered the creator of such a work of art? The machine? The person who pressed the button? The developers of the software? Or the artists whose works served as the basis for training? And do those who work with AI image generators risk claims for damages from artists?

That would also be a problem for manufacturers of software for creative professionals. A problem that Adobe, one of the leading manufacturers in this field with programs like Photoshop and Illustrator, does not want to let arise in the first place. The AI-based image generation software that the American company presented this Tuesday was trained exclusively on Adobe’s own collection of so-called stock photos. These are pictures produced in advance that mostly show generic situations, such as a man and a woman in an office. In addition, images were included to which no copyright applies, either because of appropriate licenses or because the period during which copyright can be asserted has expired.

Security for creative professionals

Adobe’s generative AI was “designed to be safe in a commercial environment,” said Ely Greenfield, chief technology officer for digital media at Adobe. Existing software like Stable Diffusion does amazing things. “But when we talk to people in the creative industries, there are always a lot of questions about possible copyright issues.”

It is clear that a company like Adobe cannot simply ignore developments such as AI-based image generation. Back in the fall, the company presented a whole range of software tools that can, for example, turn a summer landscape into a winter one. Or a program that turns an ordinary photo into a video in which the person depicted dances.

Such new possibilities raise other legal questions. A fake winter is unlikely to be a problem in a promotional photo, but it certainly is in a news photo. Adobe wants to counter this by marking each AI-generated image as such in its meta information. Adobe also founded the so-called Content Authenticity Initiative, which includes other industry giants such as Microsoft, through which information about authorship can be attached to images.

Does anyone really look at the metadata of a photo?

But the question is whether many people, for example on social media, will really bother to check this information – quite apart from the fact that this information can also simply be deleted. The same applies to another function Adobe wants to introduce: an entry in the meta information stating that an image may not be used to train AIs.
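The weakness described here can be made concrete with a small sketch. The field names below (`ai_generated`, `no_ai_training`) are purely illustrative and not Adobe’s actual metadata schema; the point is only that any label or opt-out stored in an image’s metadata vanishes the moment a platform strips that metadata on upload:

```python
# Hypothetical sketch: why metadata-based AI labels and training
# opt-outs are fragile. Field names are invented for illustration,
# not taken from the Content Authenticity Initiative's real schema.

def is_labeled_ai(metadata: dict) -> bool:
    """Return True if the metadata declares the image AI-generated."""
    return bool(metadata.get("ai_generated", False))

def may_train_on(metadata: dict) -> bool:
    """Return True unless the metadata opts the image out of AI training."""
    return not metadata.get("no_ai_training", False)

# An image as published, carrying both the label and the opt-out:
original = {"ai_generated": True, "no_ai_training": True}

# The same image after a platform re-encodes it and drops metadata:
stripped = {}

print(is_labeled_ai(original))  # True: the label is present
print(is_labeled_ai(stripped))  # False: the label is gone
print(may_train_on(stripped))   # True: the opt-out is gone too
```

In other words, both protections only hold as long as every intermediary preserves the metadata, which today’s social platforms routinely do not.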

In principle, it could already be argued today that the mass harvesting of images is illegal. However, some of the companies have used a trick: they did not collect the images themselves; research institutions did, and these pass their work on under license. The technologist Andy Baio, former head of technology at the crowdfunding platform Kickstarter, calls this “AI data laundering”.

And, as is so often the case, case law is lagging behind reality. However, the Supreme Court, the highest court in the USA, is soon to rule on the fair use of third-party content. The case: Andy Warhol, who turned photographs of Prince into his own works of art. Warhol died in 1987, Prince in 2016, but the ruling could be so fundamental that it also covers AI image generation.
