Geoffrey Hinton is known in the artificial intelligence industry as the “godfather” of AI. He was awarded the Turing Award for his advances in neural networks, a key technology behind modern AI. Until recently he worked at Google, but he resigned so he could speak freely about the dangers of the technology.
He says companies should not continue to develop AI without knowing if they can control it. His warning is not just for Google, but for the entire industry, which has seen a mad rush of AI launches since the release of ChatGPT in November last year.
AI: more danger than benefit?
Geoffrey Hinton is concerned about the risks associated with AI. He says the technology could cause major disruption in the job market and facilitate misinformation campaigns. He admits that AI can be beneficial in taking over drudge work, but he worries it may eventually take away far more than that. Hinton acknowledges that AI has come a long way in recent years, in some areas even surpassing human capabilities. He warns that future AI models could become a threat to humanity, especially if these systems become autonomous. Hinton is not alone in his concern about the risks of AI. Other technology experts have signed an open letter calling for a slowdown in AI development until its benefits clearly outweigh its risks. Signatories include Elon Musk, owner of Twitter and co-founder of OpenAI.
Geoffrey Hinton and the development of AI
Geoffrey Hinton played a key role in the development of neural network technology. He began working on it in the 1970s, and in 2012 he and two of his students created a neural network that could analyze thousands of photos to identify commonly used objects. Google eventually bought their startup for $44 million. However, Hinton changed his views on AI as Google and OpenAI began building large-scale systems trained on massive amounts of data. He now believes the risks outweigh the benefits. Other Google employees share this concern: two AI product reviewers tried to block the launch of Bard out of fear that the chatbot would generate false or dangerous content.