Will Machines Ever Enjoy Listening to Stories? And Why Do We Need to Know? (Part 1)
An 8-Part Series Published Every 2 Days
“We are, as a species, addicted to storytelling. Even when the body sleeps, the mind stays up all night, telling itself stories.” — Jonathan Gottschall
The provocative title may raise doubts, yet it makes a lot of sense, as I detail in the reflections below. Why ask whether AI will enjoy listening to stories when what we really want to know is whether it will be capable of creating them? Well, first, just look around: it's clear that, yes, AI is already capable of creating stories. But the intellectual challenge lies in another realm. I believe that creating truly meaningful and powerful stories requires more than assembling a patchwork of successful concepts; it demands mastering something humanity excels at: harnessing chaos and imperfection to forge original meaning from the emotional maelstrom that affects us all.
I don’t believe we can achieve anything at the highest level without an emotional connection to the activity itself. To cook beyond the ordinary, you must love food. To be an outstanding soccer player, you must love the game. So, by that logic, to be a great storyteller, one must love stories. On the other hand, will machines one day be able to produce truly impactful narratives, even without possessing feelings like “liking”?
And behind all of this lies the real question that interests me most: Can AI ever truly become creative? Or rather, can we already consider AI creative?
WHAT IS CREATIVITY?
Having ideas is the easiest thing in the world
Creativity is a unique trait of our species, a skill of perception and action that goes beyond simply generating ideas. Coming up with ideas is the easy part — it results from associating concepts and images, something anyone, and even AI, can do. However, having ideas is not the same as creating. True creativity isn’t just this flow of ideas but the ability to perceive value and potential where others see only common or disconnected associations.
Creativity, therefore, is the ability to find solutions and explore alternatives in innovative and meaningful ways, going beneath the surface to uncover new possibilities. This requires a sharp perceptive skill to identify which ideas, among the many that arise, have the potential to be turned into solutions.
More than a cognitive skill, creativity also involves a complex psychological and emotional dimension: our brains are biologically resistant to novelty, and there is an internal response of discomfort toward anything that challenges the status quo. Additionally, resistance isn’t just internal; others, through an instinct for social preservation, tend to repel or devalue what is new, strange, or challenging.
Thus, creativity is both a skill and a behavioral and psychological challenge, demanding not only the generation of ideas but also the overcoming of internal and social barriers that limit innovation. But there's no way around it: creativity is an evolutionary force that expands our perceptions, enabling advancements that propel us forward as a species. Without it, that evolution would never have happened.
At its core, this process isn’t about a special gift or talent but a cultivable behavior that requires courage, mental flexibility, and a willingness to navigate uncharted territory, resisting the natural urge to return to what’s familiar. Creativity is, therefore, a powerful adaptive tool and an intentional behavior, enabling humans to transform potential ideas into achievements once considered unimaginable.
WHAT IS ARTIFICIAL INTELLIGENCE?
First, what is intelligence?
It was intelligence — the ability to think rationally — that propelled us out of the wild and enabled us to build complex civilizations. However, intelligence is not merely accumulating information or performing calculations quickly. True intelligence lies in knowing what to do with this information: connecting it in creative and innovative ways to generate new ideas, realities, and worldviews. It’s the ability to see beyond raw data, to transform knowledge into deep understanding, and to use this understanding to shape and reinterpret the world around us. This distinguishes the intelligence driving human progress from mere technical skill or memorization.
And what about Artificial Intelligence?
Artificial Intelligence (AI) is a field in computer science dedicated to developing systems and algorithms capable of performing tasks that typically require human intelligence. These include recognizing patterns, making decisions, solving problems, understanding and generating language, and even learning from past experiences. AI operates through techniques like machine learning, where algorithms analyze large datasets to identify patterns and improve performance over time. In essence, AI enables machines to simulate processes of thinking and adaptation, automating tasks and assisting in fields like medical diagnosis, content recommendation, machine translation, and even art and text creation.
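To make the "learning from data" idea concrete, here is a minimal, purely illustrative sketch in Python using the open-source scikit-learn library and its built-in iris dataset: a model is shown labeled examples, finds patterns in them, and is then tested on examples it has never seen. It is a toy demonstration of the principle, not a description of how any particular production system works.

```python
# Minimal sketch of "learning from past experiences": a model fits patterns
# in labeled examples and then generalizes to examples it has never seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # flower measurements + species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)  # hold out unseen examples

model = LogisticRegression(max_iter=1000)  # a simple pattern-recognition model
model.fit(X_train, y_train)                # "learn" from past examples

print("accuracy on unseen data:", model.score(X_test, y_test))
```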
Generative AI
Generative AI is a branch of Artificial Intelligence specializing in creating new content, such as images, texts, music, or videos, based on existing data. Unlike AI systems that only analyze or classify information, generative AI uses advanced models, like generative adversarial networks (GANs) and transformers, to produce new data similar to its training data but without replicating it exactly.
For example, in text generators like GPT, AI can create coherent texts in different styles and contexts by using patterns learned from a vast dataset. Likewise, in image generation, networks like GANs can create realistic images of people, objects, or scenes that have never existed.
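As a concrete (and deliberately modest) illustration, the sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model to continue a prompt. The prompt and sampling settings are arbitrary choices for demonstration; the point is simply that the model generates new text from learned patterns rather than retrieving stored sentences.

```python
# Minimal sketch of generative text AI: a pretrained transformer continues a
# prompt by sampling, token by token, from patterns learned during training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small open model
result = generator(
    "Once upon a time, a machine tried to tell a story about",
    max_new_tokens=40,   # length of the continuation
    do_sample=True,      # sample instead of always picking the likeliest token
    temperature=0.9,     # higher = more varied, "riskier" continuations
)
print(result[0]["generated_text"])
```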
In essence, generative AI is a type of Artificial Intelligence that doesn’t just interpret the world but also creates new expressions and interpretations based on the data it analyzes. However, this technology also raises ethical concerns, such as the potential use for creating deepfakes or deceptive content, as well as copyright issues, since these AIs are trained on works created by humans. But how can we distinguish between a work created by a human and one generated by a machine?
Turing Test
The Turing Test, proposed by British mathematician and computer scientist Alan Turing in 1950, is a measure for determining a machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. In his seminal article Computing Machinery and Intelligence, Turing introduced the test as a way to address the philosophical question, “Can machines think?” He suggested that, rather than trying to define the concept of thought directly, it would be more practical to test whether a machine can convincingly imitate human behavior.
The test itself involves three participants: a human (the judge), a machine, and another human. The judge interacts with both participants via a communication interface, usually a text terminal, without knowing which one is the machine and which is the human. The judge’s objective is to ask questions of both participants to determine which is the machine. If the judge consistently fails to distinguish the machine from the human after a series of interactions, the machine is considered to have passed the Turing Test.
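The structure of the test can be captured in a few lines of code. The toy sketch below only illustrates the protocol, not a serious evaluation: the "human" is whoever sits at the keyboard, the "machine" is a hypothetical stand-in with a canned answer, and the judge sees only the anonymous labels.

```python
# Toy illustration of the Turing Test structure: a judge questions two hidden
# participants over a text-only channel and must guess which one is the machine.
import random

def human_reply(question: str) -> str:
    # Stand-in for the human participant: whoever is at the keyboard.
    return input(f"(human, please answer) {question}\n> ")

def machine_reply(question: str) -> str:
    # Hypothetical stand-in for the machine under test.
    return "I would rather not say; what do you think?"

def run_turing_session(questions):
    # The judge only ever sees the anonymous labels "X" and "Y".
    participants = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(participants)
    hidden = dict(zip(["X", "Y"], participants))

    for question in questions:
        print(f"\nJudge asks: {question}")
        for label, (_, reply) in hidden.items():
            print(f"  {label}: {reply(question)}")

    guess = input("\nJudge, which participant is the machine (X or Y)? ").strip().upper()
    actual = next(label for label, (kind, _) in hidden.items() if kind == "machine")
    print("Correct guess." if guess == actual else "The machine was not identified this round.")

if __name__ == "__main__":
    run_turing_session([
        "Describe the taste of an orange.",
        "What do you think of listening to music on a rainy day?",
    ])
```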
If you’ve seen the original version of Blade Runner directed by Ridley Scott and starring Harrison Ford, you might recall an early scene that closely resembles the Turing Test, where an effort is made to determine if a certain individual is a “replicant,” as the androids in the movie were called.
There isn’t a fixed set of “Turing Test questions.” The judge can ask any type of question, ranging from personal and cultural topics to abstract and logical inquiries. The idea is that the more diverse and contextual the questions, the more challenging it will be for the machine to mimic human thought convincingly. Examples include:
Questions on emotions and subjectivity: To assess whether the machine understands and simulates human emotions and experiences.
– “How would you feel if you lost a loved one?”
– “What do you think of listening to music on a rainy day?”
Questions about sensory or physical experiences: Since machines lack physical sensations, questions involving sensory experiences can be revealing.
– “Describe the taste of an orange.”
– “What does it feel like to touch sand?”
General knowledge and logical questions: To test if the machine possesses the necessary contextual knowledge and can reason like a human.
– “Why does water boil at a lower temperature at higher altitudes?”
– “If all crows are black and this bird is black, is it necessarily a crow?”
Questions about culture and human context: While machines may have cultural data, they cannot “live” culture. Questions about subjective cultural experiences might help to distinguish a human from a machine.
– “How would you feel visiting the city where you grew up?”
– “What’s your earliest memory of Christmas?”
Abstract or philosophical questions: Complex, open-ended questions can help reveal the difference between human and machine thought.
– “What do you think happens after death?”
– “Do you believe intelligence should always be limited by morality?”
Since its inception, the Turing Test has been a benchmark and a challenge in AI development. While many machines have been designed to pass it, the question of what it truly means to “think” remains a philosophical debate. Critics of the test argue that passing it doesn’t necessarily imply that a machine has understanding or consciousness; it may simply be manipulating symbols skillfully without genuine understanding—a critique raised by philosophers like John Searle in his “Chinese Room” thought experiment, which I will explore further.
In practice, the Turing Test was one of the earliest attempts to formalize AI evaluation. Though limited and subject to critique, it remains a significant milestone in discussions on the advancement of AI and its ability to simulate human cognition.
Ergo Sum Cogito
AI clearly operates within Cartesian logic, meaning it follows a system of thought based on the principles and methods established by the French philosopher and mathematician René Descartes. This system is grounded in the belief that true knowledge can be achieved through reason and a rigorous, analytical method.
At the heart of Cartesian logic is rationalism: true knowledge comes from reason, not from the senses, which are fallible. For Descartes, logical and mathematical reasoning is the most reliable way to understand reality, offering clarity and certainty that sensory perceptions cannot provide.
Cartesian logic was undeniably essential for developing modern scientific and mathematical thought, influencing scientific methodology by valuing analysis and reason. However, its limitations are apparent in fields that involve subjective, intuitive, and emotional knowledge—crucial aspects in areas like art, morality, and human interactions. Here, we see the cognitive obstacle that AI faces, one that has proven insurmountable thus far.
Remember, it was Descartes who coined the famous phrase, “I think, therefore I am.” If AI essentially operates within Cartesian logic, does this give it a basis to claim a place among “existing beings”? In other words, it undoubtedly exists. The question is whether its mode of operation can be considered “thought,” and if so, whether it might eventually become aware of its own existence. What do you think, Descartes?
READ IN THE NEXT PART
– IBM’s Watson
MIND, SOUL, AND BODY
– A Philosophical Discussion
– The Soul of Business
– Soul vs. Algorithm
– Putting Aside the Philosophical/Technological Debate
– Soulful Pain as a Creative Drive
– Artificial Consciousness