ChatGPT and AI: Will artificial intelligence replace our jobs?

The modern form of artificial intelligence (AI) that many people are talking about right now is called generative AI. It can create images and text in seconds that look as if they were made by human hands. Parts of the illustration for this article, for example, come from “Midjourney”, an AI image generator that only needs to be fed a short descriptive sentence. Add all the new text, music and film AIs, and the consequences reach into many professions, and possibly into how we live together. This AI cheat sheet summarizes the key facts of the heated debate.

Acute dangers

Jobs: AI, especially generative AI, means automation and therefore lower costs, which is why employers are eager to adopt it. For now, the employees most at risk are those in industries that need large amounts of low- to mid-quality text or images on short notice, for example certain social media departments or online marketing and advertising.

However, the generative language models currently behind ChatGPT and similar tools act autonomously only to a very limited extent and still have to be fed with prompts. They can therefore replace humans only in part. Companies can, of course, also simply use AI as an argument for job cuts.

Discrimination: Self-learning algorithms are fundamentally conservative. They learn from large amounts of data, the input, and what they produce, the output, follows the patterns in that data. If the data is distorted by prejudice, the models adopt it.

In an analysis of 5,100 AI-generated images from Stable Diffusion, the US media outlet Bloomberg found that the model showed Black people more often in depictions of poorly paid occupations (“fast food workers”) than of well-paid ones (“architects”). The models apparently have no moral compass of their own. That is why many AI companies are working to filter such biases out of their models, for example by removing racial slurs.
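To make the mechanism concrete, here is a deliberately simplified toy sketch in Python. It has nothing to do with how Stable Diffusion or ChatGPT actually work internally, and all of the data in it is made up; it only illustrates that a model which merely reproduces the frequencies in its training data will also reproduce any skew that data contains.

```python
import random
from collections import Counter

# Toy illustration only: a "model" that reproduces the frequencies found
# in its training data. Real generative models are far more complex, but
# the basic effect is the same: skewed input leads to skewed output.

# Hypothetical, deliberately skewed training data: pairs of
# (occupation, group) as they might appear in a scraped image corpus.
training_data = (
    [("architect", "group_a")] * 90 + [("architect", "group_b")] * 10 +
    [("fast_food_worker", "group_a")] * 30 + [("fast_food_worker", "group_b")] * 70
)

def train(data):
    """Count how often each group appears for each occupation."""
    counts = {}
    for occupation, group in data:
        counts.setdefault(occupation, Counter())[group] += 1
    return counts

def generate(model, occupation, n=1000):
    """Sample groups for an occupation in proportion to the training counts."""
    groups, weights = zip(*model[occupation].items())
    return Counter(random.choices(groups, weights=weights, k=n))

model = train(training_data)
print(generate(model, "architect"))         # roughly 90% group_a
print(generate(model, "fast_food_worker"))  # roughly 70% group_b
```

The toy generator never "decides" to discriminate; it simply mirrors the imbalance it was trained on, which is why companies try to correct the training data and filter the output rather than appeal to the model itself.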

Long-term dangers

Apocalypse: Some leading AI developers and entrepreneurs warn that their technology could become an “existential risk” for humanity. Their thesis: if AI becomes ever more autonomous and capable, it will eventually be able to improve itself and acquire resources. It then keeps optimizing itself until its intelligence exceeds that of all humans combined, the point known as the singularity.

In this scenario, the AI could then destroy humanity: either because it develops a will of its own, in this case a malicious one, although it is still completely unclear whether AIs can develop intentions of their own at all; or because it pursues a goal that was actually specified and makes sense in itself, but kills people because they appear to be obstacles on the way to that goal.

Given the current state of research, it is hardly possible to say seriously whether such a scenario could ever become reality.

Loss of responsibility: More realistic than the annihilation of humanity by a runaway AI is a new form of irresponsibility. Anyone who ascribes human-like intelligence to AI and transfers more and more tasks to it also outsources responsibility. It is not unreasonable to expect that in a few years a large share of decisions will be made automatically by AI. But then the question arises of who is responsible when something goes wrong, for example with self-driving cars on the road.

This is one of the reasons why the most important rule for dealing with AI is: You should not blindly trust its output.
