The Infinity Machine
Say “AI”, and people respond “good”, “bad”, and sometimes “both”. The only certainty is that the biggest transformation of the modern age is computational. The way to understand where it is going, and why it progresses so inexorably, is to know more about its past.
I recently narrated a book called ‘The Infinity Machine’ by Sebastian Mallaby (Penguin Random House Audio). It gives an eye-opening view into the development of AI, LLMs, generative AI and AGI, with far more depth and eloquence than I can.
Here are my reflections on the day of its release in the UK.
This book is an exploration of one part of the AI story - Google DeepMind and its leader, Demis Hassabis. It also gave me some fascinating insights into other players in the game (primarily OpenAI) and the differences in their approaches over the years. The depth of research and the obvious wealth of interviews give the prose a rich sense of reality, dwelling not just on the highlights of the journey but also on the long stretches of struggle, and on the momentous advances that never reached the public consciousness.
What I took away from this, quite apart from the insight it provides into the major characters at DeepMind and OpenAI, is twofold. First, the frequent (but not universal) similarity of computational intelligence to the human biological intelligence on which it is based. Second, and more significantly, the idea that if anything is repeated, a way can be devised to measure it; and with enough of these measurements, data can be organised so that patterns form within it, or can be found in unexpected places. These patterns are analysed, understood, linked to the real world, and then reverse-engineered to change how we act on reality.
And this can happen in so many fields! This book has really added another dimension to my views on learning and creativity.
The only things I don’t think have been reached are emotion and consciousness. Given how things have gone so far, though, AI may well reach those sooner than I anticipate. I’m fairly certain emotions will be quantified one day, because everyone in the world has them most of the time, but true consciousness may be trickier. And I wonder whether those with more than a fleeting insight into complete, abounding consciousness would choose to aid in the coding of it.
But that’s all conjecture. For now, what remains missing in the way AI interacts with the majority of active users is any motive of the machine’s own: the computer is just an agent of the user’s will. Nothing generated by an AI has any significance for the computer itself unless some reward signal is written into its code by a human or another computer. A push to do just that, identifying ever more subtle reward signals, also seems inevitable to me, to the point where the simulation of emotions and motivations becomes so convincing that telling the difference will be impossible.
Does that seem impossible to achieve? So did the Turing test, until, in the last five years, it was steamrollered through. And of course, the issue of alignment is ever present in these discussions.
I’m glad that new cures for disease are indisputably being enabled (or at least sped up by orders of magnitude) by AI, but I have always been puzzled by why you would want to make an AI too much like humans. There is hardly a lack of people; and if we keep trying to replicate human behaviour, I worry that humanity itself, the human spirit, will become undervalued compared to the productivity of AI-assisted work.
I felt last year that the only thing I may be good for in the coming decades is being a human being. It has certainly taken a shift in perspective to be at peace with the new wave of developments just around the corner (five to ten years).
Save your verdict on the current state of AI until you’ve read this book. Or heard the audiobook! It was a pleasure to narrate, and I hope it gives you a deeper insight into these technologies, and shows that it is never a simple good-versus-bad computation.