Geoffrey Hinton
Summary
Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023 he divided his time between Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.
With David Rumelhart and Ronald J. Williams, Hinton co-authored a highly cited 1986 paper that popularised the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach. Hinton is viewed as a leading figure in the deep learning community. AlexNet, the image-recognition network designed with his students Alex Krizhevsky and Ilya Sutskever for the 2012 ImageNet challenge, was a breakthrough in the field of computer vision.
Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of Deep Learning" and have continued to give public talks together.
Notable former PhD students and postdoctoral researchers from his group include Peter Dayan, Sam Roweis, Max Welling, Richard Zemel, Brendan Frey, Radford M. Neal, Yee Whye Teh, Ruslan Salakhutdinov, Ilya Sutskever, Yann LeCun, Alex Graves, and Zoubin Ghahramani.
Biography
Geoffrey Hinton began his university education at King's College, Cambridge, where he switched between several subjects, including natural sciences, history of art, and philosophy, before graduating in 1970 with a Bachelor of Arts in experimental psychology.
He went on to the University of Edinburgh, where he earned his PhD in artificial intelligence in 1978 under the supervision of Christopher Longuet-Higgins. He subsequently held positions at the University of Sussex, the University of California, San Diego, and Carnegie Mellon University, leaving Britain in part because of difficulty securing research funding there. He later became the founding director of the Gatsby Computational Neuroscience Unit at University College London. He is currently a professor in the computer science department at the University of Toronto, where he holds a Canada Research Chair in Machine Learning and advises the Learning in Machines & Brains program at the Canadian Institute for Advanced Research.
Geoffrey Hinton has made foundational contributions to artificial intelligence, particularly in neural networks. He is widely recognized for his work on backpropagation, the algorithm that made training multi-layer neural networks practical and that now underpins most of deep learning. His research spans over 200 peer-reviewed publications on machine learning, memory, perception, and symbol processing. He joined Google in 2013 after the company acquired his startup, DNNresearch Inc., and announced his resignation from Google in May 2023.
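The core idea of backpropagation is compact enough to sketch. Below is a minimal NumPy illustration, written for exposition rather than taken from the 1986 paper: a two-layer network learns XOR by propagating error derivatives backward through its layers and applying gradient descent.

```python
# Minimal backpropagation sketch: a two-layer network learning XOR,
# a task a single-layer network cannot solve.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: apply the chain rule layer by layer, output to input.
    d_out = (out - y) * out * (1 - out)   # error at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated to the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]] after training
```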
Published Work
In 2022, Hinton published "The Forward-Forward Algorithm: Some Preliminary Investigations," which proposes training neural networks without a backward pass. He also explored generating discrete data with diffusion models in "Analog bits: Generating discrete data using diffusion models with self-conditioning."
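In the forward-forward algorithm, each layer is trained with a purely local objective, its "goodness" (the sum of squared activations), which should be high for positive (real) data and low for negative data. The NumPy sketch below illustrates that local objective for a single layer; the toy data, layer sizes, threshold, and learning rate are assumptions of this example, not the paper's setup.

```python
# A hedged sketch of one forward-forward layer: train its "goodness"
# (sum of squared activations) above a threshold for positive data
# and below it for negative data, with no backward pass through layers.
import numpy as np

rng = np.random.default_rng(0)

def layer_forward(W, b, x):
    return np.maximum(0.0, x @ W + b)       # ReLU activations

def goodness(h):
    return (h ** 2).sum(axis=1)             # per-example goodness

def train_layer(W, b, x_pos, x_neg, lr=0.03, steps=500, theta=2.0):
    for _ in range(steps):
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            h = layer_forward(W, b, x)
            g = goodness(h)
            # Logistic loss pushing goodness above (pos) / below (neg) theta.
            p = 1.0 / (1.0 + np.exp(-sign * (g - theta)))
            dg = -(1.0 - p) * sign          # d(loss)/d(goodness)
            dh = 2.0 * h * dg[:, None] * (h > 0)  # chain rule through ReLU
            W -= lr * x.T @ dh / len(x)
            b -= lr * dh.mean(axis=0)
    return W, b

# Toy positive/negative data, standing in for real vs. corrupted inputs.
x_pos = rng.normal(1.0, 0.5, (64, 8))
x_neg = rng.normal(-1.0, 0.5, (64, 8))
W = rng.normal(0, 0.1, (8, 16)); b = np.zeros(16)
W, b = train_layer(W, b, x_pos, x_neg)
print(goodness(layer_forward(W, b, x_pos)).mean(),   # should be high
      goodness(layer_forward(W, b, x_neg)).mean())   # should be low
```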
In the same period, he addressed scaling forward gradients with local losses in "Scaling Forward Gradient With Local Losses" and, with collaborators, presented "A unified sequence interface for vision tasks" and "A generalist framework for panoptic segmentation of images and videos."
His continued interest in energy-based models is evident in "Gaussian-Bernoulli RBMs Without Tears," which revisits the training of Gaussian-Bernoulli Restricted Boltzmann Machines (RBMs). He also explored the ability of AI models to infer wholes from ambiguous parts in "Testing GLOM's ability to infer wholes from ambiguous parts."
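For context, a Gaussian-Bernoulli RBM couples real-valued visible units to binary hidden units and is classically trained with contrastive divergence. The sketch below shows one CD-1 update with visible variances fixed at 1; it is an illustrative baseline for this model family, not the stabilized training recipe the paper proposes.

```python
# A hedged sketch of one contrastive-divergence (CD-1) update for a
# Gaussian-Bernoulli RBM with unit visible variances.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, a, b, v0, lr=0.01):
    """W: (visible x hidden) weights; a: visible bias; b: hidden bias."""
    # Positive phase: hidden probabilities given the data.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to the Gaussian visible units.
    v1 = a + h0 @ W.T + rng.normal(size=v0.shape)  # unit-variance Gaussians
    ph1 = sigmoid(v1 @ W + b)
    # CD-1 approximation to the log-likelihood gradient.
    n = len(v0)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

# Toy usage with random data standing in for real inputs.
v = rng.normal(size=(32, 6))
W = rng.normal(0, 0.1, (6, 12)); a = np.zeros(6); b = np.zeros(12)
for _ in range(100):
    cd1_update(W, a, b, v)
```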
In 2021, Hinton published "How to represent part-whole hierarchies in a neural network," which introduced the GLOM architecture. He co-authored "Neural Additive Models: Interpretable Machine Learning with Neural Nets," emphasizing interpretable machine learning, and, with Yann LeCun and Yoshua Bengio, presented "Deep Learning for AI" in the Communications of the ACM, a review of deep learning's role in the AI landscape.
Hinton's exploration of capsules continued with "Canonical Capsules: Unsupervised Capsules in Canonical Pose" and "Unsupervised part representation by Flow Capsules," both contributing to the development of unsupervised learning methodologies.
In 2020, he co-authored "NASA: Neural Articulated Shape Approximation," on modelling articulated shapes, and "Subclass distillation," a technique for improving knowledge distillation. His work on self-supervised and contrastive learning in "Big Self-Supervised Models are Strong Semi-Supervised Learners" and "A Simple Framework for Contrastive Learning of Visual Representations" (SimCLR) demonstrated the potential of these approaches for improving model performance when labels are scarce.
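The core of SimCLR's contrastive objective can be sketched briefly: two augmented views of each image are embedded, and an NT-Xent loss pulls the two views of the same image together while pushing apart all other pairs in the batch. The NumPy sketch below computes that loss on random stand-in embeddings; a real pipeline would produce them with an encoder network and data augmentation.

```python
# Minimal sketch of the NT-Xent contrastive loss used in SimCLR.
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (N, d) L2-normalised embeddings of two views of N images."""
    z = np.concatenate([z1, z2], axis=0)          # (2N, d)
    sim = z @ z.T / tau                           # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    n = len(z1)
    # For row i, the positive is the other view of the same image.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16)); z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + 0.1 * rng.normal(size=(8, 16))          # noisy second "view"
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(nt_xent(z1, z2))  # low loss, since the two views nearly match
```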
Lastly, Hinton and colleagues explored the intersection of neuroscience and AI in "Backpropagation and the Brain," which asks whether the brain could implement learning rules similar to backpropagation. In "Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions," they addressed the challenge of identifying and mitigating adversarial images in machine learning systems.
Vision
Geoffrey Hinton's vision for the field of artificial intelligence revolves around achieving human-level understanding and reasoning through the advancement of neural networks and machine learning. He envisions a future where AI systems possess not just impressive pattern recognition capabilities but also a deeper comprehension of context, enabling them to make sense of complex data and make decisions in a manner more akin to human thinking. Hinton's research focus on neural networks and deep learning reflects his belief that these technologies hold the key to unlocking AI's potential for human-level cognition.
Furthermore, Hinton advocates pushing the boundaries of AI research, emphasizing the importance of exploring unconventional and innovative approaches to machine learning. He believes that achieving true artificial intelligence requires moving beyond shallow learning algorithms toward more sophisticated models inspired by the human brain. His vision extends to AI systems that can generalize knowledge, understand causal relationships, and adapt to novel situations, a pursuit that aligns with his commitment to developing AI responsibly so that it benefits society while minimizing risks.
Recognition and Awards
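Hinton's honours include the 2018 Turing Award, shared with Yoshua Bengio and Yann LeCun for their work on deep learning, and his election as a Fellow of the Royal Society in 1998.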
References
- Geoffrey Hinton | Mathematics Genealogy Project
- The story of the British 'Godfather of AI' | Sky News
- Hugh Christopher Longuet-Higgins. 11 April 1923 – 27 March 2004: Elected FRS 1958 | Biographical Memoirs of Fellows of the Royal Society
- A minimum description length framework for unsupervised learning (PhD thesis) | University of Toronto
- Bayesian networks for pattern classification, data compression, and channel coding (PhD thesis) | University of Toronto
- Bayesian learning for neural networks (PhD thesis) | University of Toronto
- Bethe free energy and contrastive divergence approximations for undirected graphical models | University of Toronto
- Learning deep generative models (PhD thesis) | University of Toronto
- Training Recurrent Neural Networks | University of Toronto
- Hinton, Prof. Geoffrey Everest | Who's Who (online ed.), Oxford University Press
- Deep learning pioneer Geoffrey Hinton quits Google | MIT Technology Review
- The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI | Wired
- Geoffrey E. Hinton – Google AI | Google AI
- Learning representations by back-propagating errors | Nature
- Geoffrey Hinton was briefly a Google intern in 2012 because of bureaucracy | TechCrunch
- Progress in AI seems like it's accelerating, but here's why it could be plateauing | MIT Technology Review
- How U of T's 'godfather' of deep learning is reimagining AI | University of Toronto News
- 'Godfather' of deep learning is reimagining AI | Phys.org
- Neural Networks for Machine Learning | University of Toronto
- U of T neural networks start-up acquired by Google | University of Toronto News
- The Forward-Forward Algorithm: Some Preliminary Investigations | arXiv
- Architects of Intelligence: The truth about AI from the people building it | Amazon
- A learning algorithm for Boltzmann machines | Wiley
- Stories by Geoffrey E. Hinton in Scientific American | Scientific American
- Matrix capsules with EM routing | OpenReview