How Can We Define the Human Element in the Age of AI?
At a recent writer’s group meeting here in Wellington, the discussion took a turn toward the inevitable: the use of AI in writing.
Cue the John Williams music as the table eyed each other up.
The workshop’s host casually inquired, “Who among you uses AI for writing?” The question hung in the air, loaded with a morality we haven’t really defined. I couldn’t resist asking for clarification. Were we talking about grammar correction, idea generation, or the actual task of writing itself?
She said all.
I owned up to the fact that I use AI for grammar correction, as well as idea generation. Though here’s the twist: I often have AI generate ideas precisely so I can avoid them. After all, sometimes seeing what is bad sparks better ideas. Small as the concession was, this admission raised a larger question: what else could AI eventually dominate? How much of the creative process can machines handle before something essential is lost?
And what is that something essential?
The conversation made me anxious, for it reflected a broader unease that’s been brewing. As automation and AI increasingly encroach upon tasks once considered uniquely human, what then? As technology evolves, I believe we should ask one question: What exactly is the human element?
As the group explored the dilemma around the use of AI in writing, I learnt what the actual fear is. The group, myself included, was grappling with the fear of becoming replaceable and expendable. Because when that happens, we become disposable.
The human element, as I’ll attempt to define it, might just be our last stand.
What Is The Human Element?
The human element is something we all know. It’s the part of us that feels like, well, us. You might say it’s the creativity that bursts forth when we’re inspired, or the emotional intelligence that lets us understand others on a deeper level. Maybe it’s the moral compass that guides us through ethical dilemmas, or the self-awareness that reminds us we’re more than just machines. But as clear as it seems when we talk about it, defining it is another story.
It’s why Doctor Andrew Huberman preaches what he studies.
It’s what drove Georges Méliès to make films.
It’s why Doctor Mike Israetel believes so strongly in his practice and science.
It’s why Edgar Allan Poe wrote the way that he did.
So here is my attempt at defining what the human element is.
The human element encompasses a unique blend of traits: creativity, emotional intelligence, moral reasoning, and self-awareness, with an undercurrent of flawedness.
However, despite listing these qualities, it feels like something is missing. There’s a fear that if we can’t define what makes us human, we are nothing more than machines. This exploration of the human element, then, isn’t just an academic exercise. It’s a bit of a ramble, sure, but it’s also a plea.
A plea to understand ourselves before it’s too late.
Am I The First In This?
No.
I’m not the first. Many wiser and greater individuals have made significant contributions in this area. The question of what constitutes the “human element” really has me intrigued. And worried. I feel we are not quite there yet.
One of the most discussed aspects of the human element is creativity. Creativity involves not just the generation of new ideas but the ability to connect disparate concepts in novel ways. It is driven by deep emotional and cognitive experiences. I wrote an article about the speed of creativity, and while it needs a lot of work, it’s a start.
Margaret Boden, a renowned cognitive scientist, has made significant contributions to our understanding of creativity. Her work breaks down creativity into three distinct categories: combinational, exploratory, and transformational.
Combinational creativity is like a mental remix. It’s when we take existing ideas and mash them up in unexpected ways. Think of it as the intellectual equivalent of fusion cuisine: familiar ingredients combined to create something entirely new.

Exploratory creativity, on the other hand, is about pushing the boundaries of established frameworks. It’s akin to a jazz musician improvising within the structure of a well-known tune. The rules are there, but they’re being stretched to their limits.

Things truly get interesting with transformational creativity. This is the realm of paradigm shifts and revolutionary ideas. It’s when someone looks at the rules and says, “What if we threw these out entirely?” Think Einstein and relativity, or Picasso and cubism.
Boden’s analysis wasn’t only about humans. She has also delved into how artificial intelligence measures up. While AI has shown some impressive capabilities, Boden argues it’s missing something.
I feel that something missing is the human element.
Human creativity isn’t just about producing new ideas. It’s intertwined with our personal experiences, emotions, and cultural contexts. When we create, we’re not just mashing concepts together; we’re drawing on who we are. This is where AI falls short. It can mimic certain aspects of creativity, but it lacks the intentionality that drives human innovation.
Daniel Goleman is the man who made emotional intelligence (EQ) a household term. His work showcased the significance of emotional intelligence for life and work. Now, he didn’t actually talk about AI in his book. But if we look at his thesis, it’s pretty interesting to think about how AI stacks up in the EQ department.
Sure, AI can spot when someone’s smiling or frowning with image recognition. It might even generate a comforting phrase if programmed well. However, AI does not relate to those feelings. It’s like a really sophisticated parrot, mimicking emotions without understanding them.
When your friend is going through a tough time, you don’t just recognize they’re sad. You understand their sadness, sympathize, and know how to offer support. That’s emotional intelligence in action, and it’s something AI just can’t replicate. So while it may play at emotions, it’s missing the secret sauce that makes human emotional intelligence so powerful.
Wendell Wallach’s work shows us that human ethics are like a tangled web of cultural norms, personal experiences, and gut feelings. You can’t simply write rules and be done. Google has tried, and we got the problematic Gemini of 2024.
Sure, we can program AI to make decisions based on ethical frameworks. But Wallach points out that this misses the mark. It’s like trying to teach someone to be a talented chef by just giving them a recipe book. When we humans make ethical choices, we’re not just following a flowchart. We’re drawing on our conscience, our emotions, and our understanding of the world. We empathize, analyze, and occasionally defy rules for the greater good.
AI is stuck following whatever ethical guidelines we give it. It can’t feel the weight of a moral dilemma or understand the real-world impact of its decisions.
So, broadly speaking, there seems to be some degree of the “human element” in emotionality, creativity, and ethical reasoning. It’s nice to know my definition of the human element is at least pointing in the right direction.
These aspects are deeply intertwined with our lived experiences, cultural contexts, and personal connections, something AI cannot replicate… yet.
We Are So Flawed
Flawedness is perhaps the most unique part of the human element. Our capacity for error is not just a weakness, but a distinctive trait. While AI minimizes mistakes, humans are defined by theirs.
A 2011 study by psychologist Jason Moser and his colleagues, published in Psychological Science, examined how the brain responds to mistakes. Moser recorded event-related brain potentials (EEG signals) while people monitored their own performance and failures. The team focused on error-related neural activity, signals thought to originate in the anterior cingulate cortex (ACC), which detect errors and flag the need for cognitive control adjustments. This error-related activity was linked to subsequent behavioral improvements. It suggests that recognizing and processing errors is a critical aspect of learning and adaptation.
Here, our flawedness is crucial to the human element.
Shakespeare’s approach to language shows how flawedness can lead to significant linguistic innovation. He peppered his works with neologisms, words and phrases that he coined or popularized. Many of these innovations were born out of his experimentation with language, a process involving both deliberate and accidental creative acts.
After all, he is credited with introducing over 1,700 words into the English language. Shakespeare’s linguistic creativity often involved bending or blending existing words, or inventing entirely new terms. A notable example is the word “swagger,” which Shakespeare used in Henry V. The term was likely an experiment in combining the sense of “strut” or “show off” with an added flair. This willingness to play with language, slips and all, reflects a flawed inventiveness that only we humans can have.
It’s a widely known fact that the book title Coraline originated from a mistake. Neil Gaiman, while writing his novel, initially intended to name his protagonist “Caroline.” However, a typographical error led him to type “Coraline” instead. Loving the unique sound of the name, which was close yet distinct from his original choice, Gaiman kept it for the character.
And it’s a damn good name.
Unlike AI, we function through our mistakes.
One word I had to resist while writing this article is “yet.” I feel it holds a threatening potential. But even as AI continues to develop, it has yet to achieve the full spectrum of traits I’ve described.
Sure, AI can simulate creativity, mimic emotional responses, and follow ethical guidelines, but these are all imitations. They are reproductions without the underlying consciousness that gives these traits depth and meaning. This “yet” hangs over us like a promise or a threat, depending on how you look at it. Is the human element something that machines will eventually replicate, or is it an ineffable quality that will always set us apart?
The “yet” is a reminder that the clock is ticking, and the need to define and protect the human element is more urgent than ever.
Despite its importance, the human element remains a slippery concept. I honestly believe we have to protect it in a world where automation is now a requirement.
But we will struggle to protect it if we cannot define it.
From my recent discussions in the writer’s group, it’s clear that the real fear isn’t just about AI taking over tasks. It’s about becoming replaceable and expendable.
This anxiety goes deeper because being left behind and forgotten is just a hair’s breadth away from being replaced.
Would you consider a follow?
https://asymmetriccreativity.medium.com/
Bibliography
Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge, 2004.
Goleman, Daniel. Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books, 1995.
Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2009.
Moser, Jason S., Hans S. Schroder, Carrie Heeter, Tim P. Moran, and Yu-Hao Lee. “Mind Your Errors: Evidence for a Neural Mechanism Linking Growth Mind-Set to Adaptive Posterror Adjustments.” Psychological Science 22, no. 12 (2011): 1484–1489.