Geoffrey Hinton, famously known as the “Godfather of AI,” has voiced growing concerns over the unchecked pace of artificial intelligence development and the failure of major tech companies to properly acknowledge or address its potential dangers. In a recent appearance on the One Decision podcast, Hinton stated that many industry leaders are fully aware of the risks posed by AI but choose to downplay them publicly. He noted that only a few, such as Demis Hassabis of DeepMind, genuinely understand these risks and are actively seeking ways to mitigate them.
Hinton, who was awarded the 2024 Nobel Prize in Physics alongside John J. Hopfield for their pioneering work on artificial neural networks, has been instrumental in shaping the foundations of modern AI. His decades of research underpin many of the breakthroughs seen today. However, he now warns that AI systems are evolving rapidly in ways that even scientists struggle to fully comprehend. According to him, the speed and efficiency with which these systems are advancing have exceeded expectations, making it harder to predict their behavior or future impact.
Reflecting on his own career, Hinton expressed regret for not recognizing and addressing these risks earlier. He admitted that he previously believed the threats were far in the future and wishes he had focused on safety from the beginning. In 2023, Hinton left Google after more than ten years, a move initially interpreted by many as a protest against the company’s aggressive pursuit of AI development. However, during the podcast, Hinton clarified that this was a media myth. He explained that he left because he was 75 years old and could no longer program effectively. More importantly, he wanted the freedom to speak openly about the risks associated with AI without being bound by corporate interests.
Hinton added that remaining at Google would have forced him to practice a form of self-censorship, noting, “You can’t take their money and then not be influenced by what’s in their own interest.” His departure has since allowed him to more freely advocate for responsible AI development.
In the same podcast, Hinton lauded Demis Hassabis as one of the few tech leaders who not only understands the dangers but is also committed to preventing misuse. Hassabis, who co-founded DeepMind and sold it to Google in 2014, now leads the company's AI research division, Google DeepMind. Despite being at the forefront of AI development, he has repeatedly voiced concerns about the ethical implications of advanced AI systems falling into the wrong hands.
Earlier in 2024, Hassabis reiterated these worries in an interview with CNN. While he is not overly concerned about AI replacing jobs, he emphasized the danger of powerful AI tools being used for malicious purposes. “A bad actor could repurpose those same technologies for a harmful end,” he warned. He stressed the importance of restricting bad actors' access to advanced AI systems while enabling responsible users to leverage the technology for beneficial outcomes.
Together, Hinton and Hassabis highlight the urgent need for ethical safeguards, regulatory oversight, and a deeper public dialogue around the long-term impact of artificial intelligence.