AI: ChatGPT forces debate over rules
February 24, 2023

Who should write the rules for the age of artificial intelligence (AI)? Pose this question to one of today's best-known AI systems, ChatGPT, and it will tell you that, well, it's complicated: "There are many different perspectives on the matter," responds the chatbot developed by San Francisco-based tech company OpenAI.
"As an AI language model, I do not have personal opinions, beliefs, or biases," it adds. But efforts should "include input from a diverse range of stakeholders, including experts in the field of AI, representatives from other relevant fields such as law, ethics, and policy, affected communities, civil society, and government."
Lilian Edwards, a professor of law, innovation, and society at the UK's Newcastle University, told DW she "doesn't disagree" with the chatbot's assessment. "But I would add that rules ought to be binding," she said.
Within a few months, ChatGPT has risen to become the most prominent of a new generation of "generative AI" systems; others are called LaMDA, DALL-E, or Stable Diffusion. The programs produce text, code, images or even video footage from scratch. The results are so convincing that it's often impossible to tell they weren't created by a human.
Generative AI is expected to revolutionize how people work or find information online. But it has also raised fears that it could put millions out of work or be abused for disinformation, prompting a debate over what rules are necessary.
"We do need regulation," law professor Lilian Edwards said. This would mean both applying existing legislation and drafting new laws for AI. At the same time, she warned that although programs like ChatGPT dominate headlines, they "aren't the online thing happening in AI."
"There is a real danger that they are sucking up the air in the room," she said.
The rise of AI
Research into artificial intelligence dates back to the 1950s. But it wasn't until the early 2010s that engineers started building AI into day-to-day applications. Over the last couple of years, the systems have also become better and better at creating content from scratch.
Those breakthroughs, however, went largely unnoticed by the general public until OpenAI released the prototype of ChatGPT in late November 2022. It is easy to use: All you need to do is type a prompt into a text bar. Ask, for instance, for a summary of Johann Wolfgang von Goethe's Faust in the style of a 14-year-old, and the chatbot spits out a text that reads like it was written by a teenager. Ask the system to write the same text in the style of Aldous Huxley, and you get it in 20th-century prose.
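For readers curious what such a prompt looks like when sent programmatically rather than through the website, here is a minimal sketch using OpenAI's Python library as it existed in early 2023. The model name, API key placeholder, and parameters are illustrative assumptions, not a description of what powers the free ChatGPT site:

```python
# A minimal sketch of prompting a generative language model via
# the openai Python library (v0.x API). Model name and parameters
# are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # an OpenAI text model available in early 2023
    prompt="Summarize Goethe's Faust in the style of a 14-year-old.",
    max_tokens=200,            # cap the length of the generated summary
    temperature=0.7,           # higher values make the output more varied
)

print(response["choices"][0]["text"].strip())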
The release marked the first time a sophisticated AI tool was made available on a free website, and it prompted a debate over what the rise of generative AI might mean for human creativity. Professors bemoaned "the death of the college essay." Newspapers announced that they would use the software to help their reporters write news.
Joanna Bryson, a professor of ethics and technology at the Berlin-based Hertie School, stressed the importance of raising awareness of how AI systems work, for instance by including the topic in school curricula, and of helping people develop an understanding of how to communicate with technology that appears to be human but isn't.
"I'd like to see kids learn that at school, and then go home and tell their parents about it," she told DW.
Dorothee Feller, education minister of North Rhine-Westphalia, Germany's most populous state, announced on Thursday that her ministry is planning concrete steps. Instead of trying to ban AI from schools, the ministry is working on guidance for teachers to take a "constructively critical view" of the possibilities such software offers for teaching.
The stuff of science fiction?
But the initial euphoria soured in February, when US tech giant Microsoft announced that it had outfitted its Bing search engine with an advanced version of ChatGPT and allowed a group of testers to play around with it. Soon after, they published screenshots of creepy exchanges in which the chatbot became combative, referred to itself as "Sydney," or expressed a wish to be human.
Experts were quick to explain that the bot was not sentient. Instead, the underlying technology has been trained by analyzing vast amounts of text. That makes it incredibly good at guessing which word should follow the previous one — so good it sounds like an emotional human being.
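To illustrate the principle behind that guessing on a toy scale, here is a sketch of the simplest possible version of the idea: counting which word tends to follow which in a tiny corpus, then always predicting the most frequent successor. This is an assumption-laden simplification; systems like ChatGPT learn the same next-word objective with neural networks trained on billions of words:

```python
# A toy illustration of next-word prediction: count which word
# follows which in a tiny corpus, then predict the most common
# successor (a simple bigram model).
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Record, for every word, how often each other word follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the')
print(predict_next("sat"))  # 'on'
```

Scale that counting up by many orders of magnitude, replace the table with a neural network, and the output starts to read like fluent, even emotional, human prose, without the system understanding or feeling anything.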
Nonetheless, the incidents sparked a public outcry on social media. People have realized that it is becoming increasingly difficult to distinguish human creations from those of AI systems, ethics professor Joanna Bryson said. And this experience is "something totally new."
And yet she feels, "the rules that we have been recommending for about a decade are still enough."
Since the mid-2010s, hundreds of institutions, from the world's tech giants to the Catholic Church and international organizations like the UN, have released non-binding guidelines for how to develop and use AI responsibly. Texts like these were most likely among the material that informed the chatbot's answer quoted at the beginning of this article.
But as the technology progresses rapidly, there is now a consensus that voluntary guidelines alone will not be enough, according to legal scholar Lilian Edwards.
For years, governments around the world have been working on legislation. An overview by the Organisation for Economic Co-operation and Development (OECD) lists policy initiatives in over 65 countries, from Argentina to Uzbekistan. Yet so far, only a few have followed through with hard laws. China is a notable exception, having released legislation on AI last year.
In the West, all eyes are on the European Union. As early as 2018, the EU — often considered a frontrunner in regulation — began work on what's now called the "AI Act." Five years later, the bloc's institutions are set to start final discussions which could lead to laws forcing companies to disclose to customers whenever they're actually talking to software like ChatGPT.
But experts like Edwards caution that generative AI is only the tip of the iceberg. Companies and public institutions have long been using similar machine-learning technology to automate decisions in other fields, from surveillance to criminal justice, where the risk of lasting harm is even greater.
Tellingly, the CEO of the company behind ChatGPT himself issued a similar warning. OpenAI's Sam Altman wrote on Twitter in mid-February that, "although current-generation AI tools aren't very scary, I think we are potentially not that far away from potentially scary ones."
In the EU, the bloc's upcoming AI rulebook will include particularly strict rules for "high-risk applications" to make sure, among other things, that AI does not discriminate against vulnerable minorities.
Officials in Brussels hope that the laws will become a "gold standard" for AI regulation and will be copied by governments worldwide.
Edited by: Rina Goldenberg