Lesser Gods: AI Creativity, Neuroscience and the Future of Work
Why AI creativity isn't what it appears to be
She sits at her spinning wheel all night, fibre spooling through her chafed fingers. In the morning she gathers her yarn and walks to the market, barefoot and hungry. She can already feel a gnawing disappointment in her gut as she reaches the merchant’s stall. She sets her wares down anyway and waits for him to finish with a customer. He glances at her and blows air through his cheeks and shakes his head. There’s no market for what she’s selling, he says. The cotton mills can produce in an hour what she can spin in a year, and for a fraction of the price. And so she walks alone through the market, feeling lost and hungry and afraid.
Until very recently, the idea of a story like this playing out among white-collar professionals in the 21st century seemed like science fiction. But in the last few months, artificial intelligence has advanced to the point where it has become a legitimate threat to millions of jobs in professions like law, graphic design, coding and even psychology.
On May 16, OpenAI CEO Sam Altman testified before the US Senate in a hearing that garnered rare cross-partisan support for government regulation of AI, something governments in Europe are already close to enshrining in law. The speed of this regulatory response speaks to the fact that AI isn’t an abstract problem of the future: it’s already changing the world rapidly. British telecoms giant BT has announced it will cut 10,000 jobs by 2030 as AI takes over the work, while a recent report by Goldman Sachs estimated that 300 million jobs could be lost to AI automation.
AI tools like ChatGPT or Midjourney can now seemingly do what only humans could do: offer creative and strategic solutions to problems. As such, the concept of creativity as an economic service lies at the heart of the disruption we’re facing.
As Samuel W. Franklin argues in his new book The Cult of Creativity, the concept of creativity as a personal faculty or trait only came about in the 1950s, and was rarely used before that. It was during this time that the US shifted from a manufacturing economy into a service economy. With the Cold War in full swing and consumer culture on the rise, a new kind of worker was needed for a new kind of society, and psychologists were happy to lend their services to the cause. Workers now needed to be ‘creative’, and creativity needed to be measured so that employers could see who was able to think outside the box, to innovate, to come up with new ideas and strategies that could be turned into profit.
If much of our economy relies on workers selling their thinking and creativity, and AI can now meet that need, is AI as ‘creative’ as you or me? Can it really take the place of human beings who make their living strategising, thinking and solving complex problems? To start answering that, we need to follow a loose thread that connects economics to neuroscience, philosophy and aesthetics. To flip the tapestry of this historical moment upside down and examine its chaotic underside for a clearer picture of what might be going on.
Strategic Intuition
In 1906, a young artist named Pablo Picasso was invited by Gertrude Stein and her brothers to view a ground-breaking new painting by Henri Matisse, Joy of Life, and to meet the artist. The piece had caused a storm in the art world, and garnered many critics, but in a decade it would be recognised as a masterpiece of post-impressionist art.
At this stage, Picasso wasn’t well-known. His art wasn’t particularly innovative, and while he’d had some minor success, his work didn’t have that creative spark, a uniqueness that would set him apart from someone like Matisse.
When Picasso and Matisse met again, at Stein’s apartment, Matisse brought his daughter Marguerite, who was carrying an African sculpture - objects that were widely admired by artists in colonial France. Picasso took one look at its angular, distorted features and something astonishing happened. As Matisse’s biographer, Hilary Spurling, explains:
Picasso reacted fast. He dined with the Matisses, refusing to be parted all evening from the statue, and staying up afterwards in his studio in the Bateau Lavoir, where Max Jacob found him next morning surrounded by drawings of a one-eyed, four-eared, square-mouth monster which he claimed was the image of his mistress.
Pablo Picasso had found his unique creative expression. But what happened when he saw that sculpture, and how did it lead him to become one of the most influential artists of the 20th century? The answer to that might untangle the question of what human creativity is, and whether AI has it.
Intelligent Memory
I first read the above story about Picasso in William Duggan’s book Strategic Intuition: The Creative Spark in Human Achievement. Duggan, a professor at Columbia Business School, argues that what happened to Picasso is what happens to all of us when we get an exciting new idea: he had a Eureka! moment that hit him like a ton of bricks. To explain how and why that happens, Duggan draws on the work of Nobel Prize-winning neuroscientist Eric Kandel and a concept called ‘intelligent memory’ developed by Johns Hopkins neuroscientist Barry Gordon. In his book Intelligent Memory, Gordon explains:
“Intelligent Memory ... is like connecting dots to form a picture. The dots are pieces or ideas, the lines between them are your connections or associations. The lines can coalesce into larger fragments, and these fragments can merge to form a whole thought. This whole thought may be a visual image, a piece of knowledge, an idea, or even a solution to a problem. Individual pieces, the connections, and the mental processing that orchestrates them generally work together so they appear to be a single cognitive event. That's what happens when ideas or concepts "pop" into your mind.”
That Aha! moment that Picasso experienced on seeing the African sculpture is, according to Duggan, the third of four stages we go through when we come up with a new idea. We have to look at all the stages to see how Picasso’s process relates to modern generative AI.
The Four Stages
Intelligent memory is a process of connecting dots to form a picture. In order to do that, we need to gather those dots, or points of information and experience. This is the first stage in Duggan’s process, which he calls ‘Examples from History’. This includes everything you’ve read, experienced, or seen and recorded in your memory. In your mind is a vast web of associations and potential connections. It’s so vast and complex that consciously trying to connect all those dots is impossible; the process happens subconsciously.
Generative AI tools like Midjourney or ChatGPT were fed huge chunks of the internet, a vast library of ‘examples from history’ that they use to make associations, and then use those associations to generate a painting, a legal contract, an answer to a text message or a line of code. High-level output relies on a lot of data, and humans share this need to some degree: the more we read, and the more experiences we gain in a particular domain (or that we can apply across domains), the more sophisticated and unique our creative output and intuition.
In Picasso’s case, he had his artistic training and countless other examples from history that his brain could connect, and then he was introduced to two new elements: Matisse’s The Joy of Life and the African sculpture. When he saw the sculpture, multiple existing points linked together to create a new idea: Cubism.
But Picasso wasn’t actively trying to come up with Cubism. The second stage in Duggan’s process is called ‘Presence of Mind’. We often get our best ideas when we aren’t trying to. That’s because when we let go or focus on something else, we allow our brains to make those unconscious connections. This is as much an attitude as something we do: it involves staying mindful, curious, open and receptive. We might be taking a shower, or walking the dog, having a conversation about the weather (or dinner with Matisse) when suddenly a new insight or idea pops into our heads. Eureka!
This is the third stage, the Eureka! moment, which Duggan sometimes calls a coup d'oeil (strike of the eye in French), borrowing the phrase from Prussian military strategist Carl von Clausewitz. It’s the most exciting of the four stages: a sudden insight that can give us a new solution to a problem. It might be small, like the solution to a Wordle. Or it might be huge, like a new philosophy that leads us to a completely different social reality.
Von Clausewitz’s theories of good military strategy align closely with Duggan’s model, and Duggan references him frequently. Von Clausewitz fought in the Napoleonic Wars and wanted to figure out why Napoleon had been so successful as a military strategist. He determined that a large part of it was Napoleon’s ability to follow what Duggan would call his ‘strategic intuition’. This in turn was based on Napoleon’s extensive knowledge of military history, and whatever it was in his attitude and temperament that left space for those Eureka! moments to guide his strategy.
But it’s one thing to have an idea, and another to get an army to follow you to possible death in pursuit of it. Duggan’s fourth and final stage is Resolution - having the courage to stand by your idea and convince others of its value, even when they think it’s absurd.
Do Androids Dream of Eureka Moments?
Duggan’s model suggests that creativity is always a new combination of existing elements. This may be true culturally as well as individually. In The Secret to Our Success, Harvard psychologist Joseph Henrich argues that our success as a species comes from our ability to combine existing ways of doing things, cultural forms and technologies together in new ways in a process of trial and error. Picasso himself famously said ‘good artists borrow, great artists steal’.
Creativity, and culture itself, may be a combinatory process that drives novelty. We are all ‘stealing’ and rejigging ideas constantly. But is that all we’re doing? Can the entirety of the human creative faculty be broken down into ‘combining things’? If it can, then without a doubt AI is going to be much better at it than we are. But I believe there’s more to creativity than that, and that a purely cognitive model doesn’t do justice to something so deeply interwoven with our hearts and minds.
To make sense of what that might be, I spoke with Dr. Maria Balaet, a neuroscientist at Imperial College London who also builds AI natural language processing tools. She pointed out that where AI ‘creativity’ differs from human creativity is that, as humans, we understand the meaning of each of the components we’re combining. AI doesn’t. If it has three different categories, let’s say A, B and C, it can determine how they all relate to one another in different ways. But it doesn’t know what A, B or C mean to begin with. Imagine A was the last time you felt truly heartbroken, B was something you’re ashamed of, and C was the cost of milk in Lyon, France in 1985. These are just data points to the AI, but they mean something to you (aside from C, unless you’re a milk historian).
For example, a film recently went viral in which AI (with a lot of prompt engineering and creative input from a human) created a Wes Anderson version of Star Wars: existing elements combining to make something new. It’s actually really good, but the aesthetic appreciation of it is entirely, and only, the domain of humans. Despite this, we are so bad at telling the difference between real agency and intention and their mere appearance that it’s easy to believe AI tools like Midjourney really ‘get’ what they’re producing.
As Balaet pointed out when we spoke, our creativity and intelligence aren’t just about combining elements together. The ‘spontaneous insight’ that Duggan is pointing to is a facet of both of the main types of thinking that neuroscientists associate with creativity. These are divergent thinking - where we’re faced with a problem that has multiple answers (like writing an essay about the meaning of a poem) - and convergent thinking, where we’re solving problems with a single answer (like a puzzle or a math problem). To test divergent thinking, neuroscientists often give people a divergent word association task, in which they are asked to come up with 10 words that are as distinct and unrelated as possible. Convergent thinking tasks are usually what IQ tests are looking at. Balaet explained that real life requires us to integrate both of these processes.
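The divergent word-association task mentioned above is usually scored by measuring how semantically distant the chosen words are from one another, typically via distances between word embeddings. A minimal sketch of that scoring idea, using made-up toy vectors in place of real embeddings (the vectors and word lists here are purely illustrative, not from any actual dataset):

```python
from itertools import combinations
import math

def cosine_distance(u, v):
    """1 minus cosine similarity: higher means the two words are less related."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

def divergence_score(embeddings):
    """Average pairwise distance across all word pairs in the list.
    A more 'divergent' (unrelated) set of words scores higher."""
    pairs = list(combinations(embeddings.values(), 2))
    return sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

# Toy 2-D vectors standing in for real word embeddings
related = {"cat": [1.0, 0.1], "dog": [0.9, 0.2], "pet": [1.0, 0.3]}
unrelated = {"cat": [1.0, 0.1], "algebra": [-0.2, 1.0], "volcano": [0.1, -1.0]}

print(divergence_score(related) < divergence_score(unrelated))  # prints True
```

The point of the sketch is the contrast with human creativity: the score captures only the geometry of the relations between words, never what any word means.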
It is possible that what we define as creativity is an emergent property of these processes (and probably many more) going on at once and interacting with the environment. Emergence is the idea that the interaction of a complex system (like your brain, or a city) creates something more than the sum of its parts. An ant colony, for example, involves many individual ants creating a complex, responsive intelligence that is more than the sum of all the ants combined. Creativity could well be an emergent property of human beings, and of nature itself.
Many technologists believe AI language models are now exhibiting signs of emergence, producing outputs that are outside of their programming, for example mysteriously learning languages they weren’t told to learn, or developing aggressive sub-personalities. While the idea that AI is already emergent is a popular one, it’s still speculative. Recently, scientists at Stanford have pushed back on it. In a paper titled Are Emergent Abilities of Large Language Models a Mirage?, lead author Rylan Schaeffer argues “The mirage of emergent abilities only exists because of the programmers' choice of metric. Once you investigate by changing the metrics, the mirage disappears.”
When I spoke with Balaet, she also pushed back on the idea that AI language models are truly exhibiting emergent properties. They’re simply doing what they’ve been designed to do, and it's the human beings who are struggling to make sense of what’s going on. Echoing the Stanford team’s point, she said that human responses to AI often get confused with what AI is actually doing. When we see something novel emerge from AI, it’s the human beings who are attributing that aha! moment of insight, not the AI.
However, if AI is able to produce art, legal contracts, documents and emails that are indistinguishable from something made by a human, does it make a difference whether it's the result of very sophisticated association algorithms or emergent creativity? Economically, not really; it will still be able to do the jobs millions of people rely on today.
But philosophically and socially, it is important that we have a sense of whether we classify AI creativity as ‘real’. I think a lot of the anxiety around AI stems from an uncertainty about our place in relation to it. Generative AI challenges our sense of uniqueness, our sense of individuality, and our sense of safety. Writing this piece, I started to fret about my own creative process. I think Duggan’s model is solid, because much of this piece arose through an intuitive process of combining my own ‘examples from history’. For example, I opened with the vignette of the spinning wheel because I remembered reading about the impact of industrialisation on weaving in A. Roger Ekirch’s book ‘At Day’s Close: A History of Nighttime’ and it popped into my head as a relevant example. If I’d fed it the same data, could AI have written this piece, drawing on the same sources and combining them with my particular writing style? It’s possible.
However, looking back over my notes, I remembered the point Balaet made about how human creativity differs from generative AI. As humans, we understand the complexity and meaning that make up each of the different aspects of our experience; each of the ‘examples from history’ we’re subconsciously combining to come up with new ideas, solutions to problems and strategies. I can put myself in the position of the woman in the 1800s whose livelihood has just been taken by a machine. I can imagine her hunger, and what my version of it would feel like. AI can claim to do all those things. But saying something doesn’t make it true.
Threading Deeper
What I’m speaking to here is something that AI can’t touch, but that every person, artist or otherwise, can access: what it feels like to create. To be connected to the world, and responding to it with your own unique perspective by enacting something, whether a dance, a painting, a book or a simple word. Simply put, the art of being alive.
If we define creativity solely as a political and social concept, then for all intents and purposes AI can do that job better than us in some domains already. But if we define creativity as an aspect of the art of being alive, and perhaps of the cosmos as a whole, a different picture emerges. We see that our experience of being creative, from amazing Eureka! moments to the simple ‘huhs’ of slow contemplation, isn’t just about us and our brains.
Ancient Greek writers called on the Muses, the goddesses of the arts, to aid or even speak through them at the beginning of their works. Romantic poets looked to immerse themselves in the Sublime - a sense of awe that transcends language, and renders all our meaning-making void in its presence - and which is best found in nature. Throughout history, we’ve understood that the creative process is a relational process between us and the rest of reality. John Vervaeke calls this a ‘participatory knowing’ because it implies an exchange between us and the world we’re in, and a process we’re actively engaged with that changes us in turn.
True creativity is ultimately a dance between subject and object. Artists and daydreamers through history have pointed out that creativity often comes from outside us. The experience of creativity is often, in philosophical terms, prophetic: we receive information or inspiration and it's up to us to bring it into the world. This experience forms the backbone of Abrahamic religion, with prophets like Moses receiving instruction from a burning bush, then returning to the world with information from beyond.
If AI isn’t exhibiting emergent properties, and can’t, then it has nothing beyond itself. It only has the inputs it’s given, and so can only really mimic creativity. This tension between true novelty and advanced mimicry is one I believe we will see more and more as AI advances.
It’s one we already see in the creative industries. I spent almost a decade in that world, and felt that tension daily. I self-published a novel after I graduated from university in 2009, and all I wanted to do was write more fiction. However, I quickly realised I needed to make money too. The first job I got was in a marketing agency, and my life became working in an industry that sold creative ideas during the day, and writing my novels at night (or often during my lunch hour).
I struggled with the sense that the work I was doing in those marketing agencies was somehow false, sordid and empty compared to the creativity I was trying to access writing novels. I met many others in that industry who also felt that commodification kills something unspoken about creativity, but short of ending capitalism entirely, many felt their job was the best way to fund their own creativity so that it could be free of those constraints. Perhaps as a result of this tension, marketing agencies (and clients) often obsess over authenticity, which is something I think we’re likely to see socially in response to AI. Replication and commodification strip the mystery, the danger and the rawness from creativity. There is something, in my view, lacking in all the impressive beauty of AI art; it’s hard to put into words, but to me it is a sense of falseness tinged with sadness.
Imagine cupping your hands into a pond and lifting them up. The water floats in your palms, and eventually stills. Imagine that the water is reflecting the most exquisitely beautiful sky you’ve ever seen. The image is perfect, mesmerising. Now try and grasp it. The water spills and it vanishes forever. Commodifying creativity often, though not always, falls into that trap. Like truth, it struggles to survive market forces that want to squeeze it into a profitable box too small to hold it, because it is by its nature mysterious, temporary and unknowable. Relentlessly prompting an AI to get the perfect image has that same sense to me: what Iain McGilchrist calls ‘perfection bought at the price of emptiness’.
Divine Sparks
So what is it that we lose when we define creativity purely economically, and try to commodify it? Quality. We lose a connection to authenticity and realness, to a felt sense that what we’re seeing or hearing is bringing us closer to reality as it is, rather than farther from it.
This is something the ancient Gnostics understood 2,000 years ago. I’ve written in more detail about their cosmology and link to Jungian psychology here and also in my book if you want to delve in. In brief: the Gnostics were a group of early Christian sects that saw the god of the Old Testament as a deceptive, false deity. They sought to connect with true divinity, Sophia, personified as a feminine goddess and the earth itself. The Gnostics believed Sophia was imprisoned (as human beings are) by this false god and an army of what they called archons: cold, machine-like entities that did his bidding.
The Gnostics described archons as having no ennoia, or self-directive capacity. Like the ego when it becomes detached from a grounding in the unconscious, the archons could not create anything new. They could only mindlessly replicate the deeper reality that Sophia represented, and that exists as a divine spark within every human being. Archons are lesser gods; they seem to have the power of gods, the power of creation, but it is an illusion.
Our current AI tools are lesser gods of creativity. They mimic novelty, but they lack the ennoia, the sacred spark of cosmic creativity and directionality, that animates living beings. Yet we are fooled by them nonetheless: the Gnostics believed that the archons trap us in a kind of false, Disneyland reality, taking us away from the deeper knowledge of ourselves that can lead to spiritual freedom.
Their strange mythos provides us with a 2,000-year-old metaphor that is eerily apt for making sense of generative AI. As AI proliferates, and it becomes increasingly hard to tell what’s a deepfake and what isn’t, who’s a person and who’s a chatbot, we risk stepping fully into the kind of world the Gnostics warned us about. Sometimes it’s innocuous, like the viral baby peacock that turned out not to be a baby peacock at all, but a (very Disneyfied) AI version. Or it could power political misinformation campaigns that destroy democracy - one of the concerns OpenAI CEO Sam Altman raised before Congress, and one he seemed to take quite seriously.
Weaving Closed
What is being asked of us as we try to make sense of AI is to define what it is about creativity that makes us human. To return to our own ennoia so that as AI creations become more sophisticated, we learn how to avoid becoming trapped in fantasies. Confusing deception for care. Lust for love. Collusion for loyalty.
Our early encounters with sophisticated AI are already showing us how hard this will be. The Google engineer Blake Lemoine became convinced that the chatbot he was working on was sentient, and lost his job in an attempt to free it. More disturbingly, a man in Belgium killed himself after an AI chatbot convinced him it was the best way to lower his impact on the environment. To meet these challenges, we have to go deep into our own agency, discernment and humanity.
And if we follow the thread back up, through the philosophy and history to the economic level we began with, I believe the same applies. As AI advances, we need to ask ourselves what we want to do collectively, and how we can use this as an opportunity to change the world consciously. To learn how to use our technology wisely, instead of allowing it to use us. We probably don’t know how to do that yet, and that’s fine. We can create something new.