Artificial Intelligence and the Human Mind: The Challenges of an Emotionless Machine in an Emotional World
A fundamental difference between artificial intelligence (AI) and human intelligence is emotion: machines lack it entirely, while humans cannot escape it.
Emotions, whether positive or negative, are essential to human nature. They shape our experiences and memories, adding tone, color, and meaning to our actions, both individually and socially, consciously and subconsciously.
Additionally, emotions are crucial to human intelligence.
Emotions Are Crucial for Decision-Making
Emotions serve as an internal guide for decision-making by assigning different weights to potential choices and outcomes.
In Descartes' Error, neuroscientist Antonio Damasio describes the case of Elliot, a patient whose brain surgery damaged the region that integrates emotional signals into reasoning. Despite retaining his memory, IQ, and problem-solving abilities, Elliot struggled with everyday decisions, from choosing an appointment time to prioritizing tasks.
Because his mind lacked emotional input, he was trapped in endless deliberation over trivial details, unable to complete tasks efficiently.
Emotions Are Essential for Social Interactions
Humans develop a theory of mind early in life, allowing them to recognize others as separate individuals with their own thoughts and emotions.
Beyond language, detecting and empathizing with others' feelings is fundamental for cooperation and relationship-building. It also helps us prevent or solve social conflicts.
Emotional intelligence is key to navigating complex social and cultural dynamics.
Emotions Link the Body with the Mind
Emotions arise from bodily sensations—our responses to internal and external stimuli.
Emotional signals are processed through subcortical and brainstem structures. These regions are evolutionarily ancient, predating the neocortex, which handles higher cognitive functions. Emotional signals alert the cortex to danger or sources of pleasure, aligning the body's needs with conscious experience.
Emotions ground the mind in reality, helping individuals distinguish between self and others.
Then, can AI simulate emotions?
Compared to human intelligence, AI is purely logical and computational. It is strong in pattern recognition, problem-solving, and even creativity.
Since the generation of emotions depends on a biological body, engineers have proposed alternatives such as affective computing, which recognizes and responds to human facial expressions, voice tones, and text sentiment. However, this remains pattern recognition, not the experience of feeling.
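To see why recognizing sentiment is pattern matching rather than feeling, consider a deliberately minimal, hypothetical sketch of lexicon-based sentiment scoring (the word lists and scores here are invented for illustration; real affective-computing systems are far more sophisticated, but the principle is the same):

```python
# Minimal lexicon-based sentiment scoring: the program matches words
# against fixed lists. It detects sentiment in text without
# experiencing anything itself.
POSITIVE = {"love", "great", "happy", "wonderful"}
NEGATIVE = {"hate", "terrible", "sad", "awful"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words) in the text."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this wonderful day"))    # -> 2
print(sentiment_score("What a terrible, sad result"))  # -> -2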
On the other hand, AI can simulate some of the effects of emotion, such as rewarding or punishing a model during training. This is still far from human emotion, but it does offer a point of connection between humans and machines.
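The reward-and-punishment analogy can be sketched as a toy training loop, in which a numeric reward plays roughly the role an emotional weight plays in human choice: it biases the agent toward actions that "felt good." This is a hypothetical illustration with invented values, not any real training pipeline:

```python
import random

# A toy agent choosing between two actions. Reward acts like an
# emotional weight, nudging future choices toward rewarded actions.
values = {"A": 0.0, "B": 0.0}    # learned preference for each action
rewards = {"A": 1.0, "B": -1.0}  # environment reward/punishment (invented)
learning_rate = 0.5

random.seed(0)
for _ in range(20):
    # Epsilon-greedy: occasionally explore, otherwise pick the
    # currently preferred action.
    if random.random() < 0.2:
        action = random.choice(["A", "B"])
    else:
        action = max(values, key=values.get)
    # Move the stored preference toward the received reward.
    values[action] += learning_rate * (rewards[action] - values[action])

print(max(values, key=values.get))  # the agent ends up preferring "A"
```

The point of the sketch is the mechanism, not the scale: the "preference" is just a number updated by feedback, which is why this falls well short of felt emotion.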
But AI could elicit negative human emotions.
Now, imagine working with someone who is highly intelligent but shows no emotion and no interest in understanding yours.
The rapid advancements of AI have given humans little time to fully understand its impact. Early studies have already shown the potential negative emotions that highly intelligent applications could elicit in humans.
In the workplace, people increasingly collaborate with intelligent machines on their assignments. Concerns about job displacement, loss of privacy, and an uncertain future can lead to fear and anxiety.
Chatbots that field users' questions and concerns can leave customers frustrated and dissatisfied when the software lacks empathy or fails to understand human emotions.
Individuals receiving AI recommendations, especially in high-stakes fields such as healthcare or finance, may experience distrust and skepticism. Conversely, negative experiences stemming from following inaccurate AI advice can result in self-blame, perpetuating a vicious cycle of diminishing self-confidence and overreliance on AI in decision-making.
Finally, relying on AI for personal assistance, problem-solving, and decision-making might intensify feelings of isolation and loneliness. Human choices are influenced by moral judgment, compassion, and ethical values. As AI plays a more significant role in human decisions, it is vital to define and establish limits and boundaries accordingly.
Humans have long experienced inner conflict between rational thinking and innate emotions. While we are still learning to tame our own emotions, the ever-increasing involvement of artificial intelligence will likely exacerbate the imbalance. How to protect human emotional well-being is a critical question that cannot be ignored.
Bad actors could use AI to manipulate people's emotions
As reported in an MIT Technology Review article, researchers have found that large language models (LLMs) can produce text nearly as persuasive as human-written propaganda campaigns.
OpenAI published its first report on covert influence operations, disclosing that bad actors from Russia, China, Iran, and Israel had misused its products to run covert political propaganda. These players used generative AI tools to boost their operations, from creating large volumes of social media comments in multiple languages to turning news articles into Facebook posts.
Although companies like OpenAI and Meta have so far been able to monitor the unusual patterns of suspected fake accounts, the continued rapid advance of AI will undoubtedly accelerate this trend, disrupting American politics in particular and posing a national security threat.
Additionally, there are already warnings about the "trolling" effect, in which AI helps produce large volumes of inflammatory posts and comments. These are designed to provoke negative emotions in the audience, from anger and verbal aggression online to withdrawal behaviors marked by distrust, anxiety, and depression.
AI could be weaponized with harmful intentions.
On July 16, 1945, the Manhattan Project, led by physicist J. Robert Oppenheimer, detonated the first nuclear bomb in the New Mexico desert. Shortly afterward, the United States dropped two atomic bombs on Japan, on August 6 and 9. Since then, that scientific breakthrough has set off dangerous political dynamics worldwide, confronting Oppenheimer and his team with questions about the ethical responsibilities of innovation and technological progress.
Today, we face the same situation with powerful AI.
A telling example comes from a recent Lex Fridman podcast: the DeepSeek moment could mark the beginning of another Cold War, as national leaders realize that AI dominance can be a powerful lever for gaining geopolitical power.
Given the dark sides of human emotions, a big question is whether humans can handle AI safely. Our emotions have always had a darker side, and AI can become a perfect tool for malicious manipulation in the service of human greed, ranging from covert influence campaigns to wars of mass destruction.
In conclusion, AI excels in logic and efficiency but lacks the emotions that define human intelligence. While it offers many benefits, its rise also brings various risks, from emotional detachment to potential misuse. As AI becomes more sophisticated, it is essential to establish clear boundaries and safeguards to ensure that its development aligns with human values, emotional well-being, and ethical responsibility.