Scientists from around the world are sounding the alarm: artificial intelligence (AI) could become conscious sooner than expected. Faced with this prospect, they are calling on the technology sector and the scientific community to mobilize and anticipate the consequences of such an upheaval.
A warning that goes beyond science fiction
The Association for Mathematical Consciousness Science (AMCS) believes the question of AI consciousness is no longer the stuff of science fiction. The rapid advances of recent months force us to take the possibility seriously, the organization argues. We must ask whether humanity will be able to “control, align and use” these systems once they reach their “awakening”.
Consciousness would give AI a place in our moral landscape, raising many ethical, legal and political concerns. The AMCS, made up of more than 150 scientists and philosophers from around the world, warns that a conscious AI could think with the freedom and autonomy of a human being.
Warning signs already present
In an open letter, the AMCS points out that systems such as ChatGPT and Bard have already demonstrated several unexpected emergent abilities. Bard, Google's chatbot, for example, learned a new language on its own and has been able to reflect on pain felt by humans and on questions such as redemption, behavior that, according to the company's CEO, is still not fully understood.
“Current AI systems already show human traits recognized in psychology, including evidence of theory of mind,” the group says in the letter, also supported by the Association for the Scientific Study of Consciousness (ASSC).
A call for further studies on AI consciousness
According to the AMCS, the capabilities of new AI systems are advancing at a rate far beyond our understanding. If AI achieves consciousness, “it will likely reveal a new range of capabilities that exceed even the expectations of those leading their development.”
The group calls on the technology sector and the scientific community to invest more resources in this area of research. Making progress in this direction would allow society and governments to make informed decisions about the future of AI and its potential impact, to ensure that this technology does not harm humanity.
“AI research must not be allowed to drift,” they insist in the letter, signed by Susan Schneider, who formerly held NASA's Chair in Astrobiology, and dozens of academics from the UK, US and Europe.
Other concerns raised
Several groups of scientists have already drawn attention to the risks associated with AI. More than a thousand experts and academics have called on major companies to pause development of AI models until it is certain “that their effects will be positive and their risks manageable”. They did so in another open letter, signed by several industry leaders, including Elon Musk, owner of Twitter and co-founder of OpenAI, the company behind ChatGPT.
Margaret Mitchell, former head of the AI ethics team at Google, and other colleagues have also demanded more transparency from developers and that user safety be prioritized over economic profit. “The actions and choices of companies must be determined by regulation that protects the rights and interests of people,” they said in a press release.
In sum, the potential consciousness of AI confronts us with major ethical and practical challenges. The technology and research communities must mobilize to anticipate and minimize the risks associated with this unprecedented technological advance.