Artificial intelligence creates “photos” based only on a description

It sounds almost uncanny, but it now works surprisingly well: a person describes to a computer what kind of picture they want, and the computer generates it fully automatically. Machine-learning algorithms make this possible.

What the services can do

The capabilities of the artificial intelligence are quite impressive: you can describe almost any real scene, and many fictitious ones, to services like Dall-E and receive a finished picture within seconds. Nor are you limited to photos; the services can also reproduce other styles such as drawings, paintings, or even the characteristic styles of well-known artists such as van Gogh and da Vinci.

Several providers are new to the market. One of the best known, and among the most impressive in terms of quality, is the Dall-E service, developed by OpenAI, the research lab co-founded by Elon Musk. Midjourney, among others, competes with OpenAI. In addition to these commercial providers, there is also Stable Diffusion, a largely free and open-source alternative.
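Because Stable Diffusion is open source, the model can even be run on your own hardware. A minimal sketch, assuming the Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint (the model name and prompt below are illustrative, and a reasonably powerful GPU is assumed):

# Minimal sketch: running Stable Diffusion locally with the diffusers library.
# Assumes `pip install diffusers transformers torch` and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# The checkpoint name is illustrative; any published Stable Diffusion weights work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The prompt is the text description the model turns into an image.
prompt = "an oil painting of a lighthouse in a storm, in the style of van Gogh"
image = pipe(prompt).images[0]
image.save("lighthouse.png")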

For a long time, OpenAI’s Dall-E was available only as a beta, for which you had to sign up on a waiting list. That restriction was lifted a few days ago: any user can now register with OpenAI and generate a few dozen images per month free of charge. Anyone who wants more, or who wants to integrate the service into their own applications, has to pay or switch to Stable Diffusion.
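Integrating Dall-E into your own application goes through OpenAI’s paid API. A minimal sketch, assuming the official OpenAI Python library and an API key from your own OpenAI account (the prompt, key placeholder, and image size below are illustrative):

# Minimal sketch: requesting an image from OpenAI's image-generation endpoint.
# Assumes `pip install openai` and a valid API key from your OpenAI account.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code real keys

response = openai.Image.create(
    prompt="a photorealistic cat astronaut floating inside a space station",
    n=1,               # number of images to generate
    size="1024x1024",  # square output; smaller sizes are also supported
)

print(response["data"][0]["url"])  # URL where the generated image can be downloaded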

Artificial intelligence can be used in many meaningful ways

The possible applications for services such as Dall-E are wide-ranging and, for now, largely unexplored. Many users are still using the services just for fun, without any concrete plans for the generated images. In the long term, such images could appear wherever suitable photos are scarce or cannot be licensed cheaply.

Possible uses include advertising and the illustration of blog posts. At the same time, computer-generated images create a new form of digital art and raise the question: who is actually the author?

The services could become problematic once they produce images that can hardly be distinguished from reality and are used for disinformation. Dall-E already imposes one admittedly soft safeguard: the “prompts”, i.e. the text descriptions given to the AI, must not mention prominent people. This barrier can be circumvented, but doing so can at least get users banned from the service.

Links: Dall-E, Stable Diffusion, Midjourney

tvm
