Elon Musk and 1000 tech experts call for a break in the development of artificial intelligence
The rapid development of the chatbot ChatGPT is fueling concerns about a loss of control: tech figures such as Steve Wozniak and Elon Musk are now calling for a moratorium to weigh the potential harms of artificial intelligence. Governments are also being called upon to act.
Several top tech experts, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have called for a moratorium on the rapid development of powerful new artificial intelligence tools. In an open letter published on Wednesday, the signatories call for a pause of at least six months.
The pause should give the industry time to set safety standards for AI development and to avert possible harm from the riskiest AI technologies. If the companies cannot agree on this, governments would have to step in and impose a moratorium. The six months should therefore be used, among other things, to develop safety protocols.
The call was published by the non-profit Future of Life Institute, where Musk acts as an external adviser. In addition to the Tesla boss, more than 1,000 people signed the manifesto, including Emad Mostaque, head of the AI company Stability AI, and several developers from Google’s AI subsidiary DeepMind. Supporters also include AI pioneers Stuart Russell, Gary Marcus and Yoshua Bengio.
In the manifesto, they warn of the dangers of so-called generative AI, such as the technology behind the chatbot ChatGPT or the image generator DALL-E from OpenAI. These AI tools can simulate human interaction and create text or images from a few keywords.
Musk was once involved with OpenAI
Musk originally co-founded the AI start-up OpenAI as a non-profit organization. He later withdrew after OpenAI CEO Sam Altman pushed through his plans to advance the development of AI software through large-scale financial and technical partnerships, above all with Microsoft.
Since the release of ChatGPT last November, the topic of artificial intelligence has been in the public eye.

Google was forced to abandon its rather restrictive course in releasing AI tools and countered the OpenAI offensive with its own chatbot, Bard.
In Germany, the TÜV association welcomed the open letter. “The appeal shows the need for political action on clear legal regulation of artificial intelligence,” said Joachim Bühler, Managing Director of the TÜV Association. Only in this way, he argued, can the risks of particularly powerful AI systems be brought under control.
There is a threat of fake news, propaganda and unemployment
Bühler emphasized that the experts warned of a flood of propaganda and fake news, the destruction of many jobs and a general loss of control. “At the same time, it is clear that AI systems are increasingly being used in medicine, in vehicles or other safety-critical areas. Malfunctions can have fatal consequences.” Legal guidelines are necessary in these areas. “This creates trust and promotes innovative offers instead of slowing them down.”
Since the launch of ChatGPT in November 2022, numerous large corporations, above all Microsoft and Google, have been in a race for technological leadership in AI. New applications are presented almost every week, because companies expect substantial profits from the technology.
Some countries like China consider AI to be strategically important and want to give developers great freedom.