We
ask it to write an email, a speech we claim as our own, and we even accept the
hallucinations it invents. We consult it about a skin rash, the "evil
eye," and chat with it as if it were just another friend.
Generative
Artificial Intelligence is no longer a laboratory experiment: it's an invisible
co-pilot to whom we enthusiastically hand over part of the wheel. But we do it
with the apprehension of traveling without a map, without knowing where it will
take us. This fear is what defines our time.
This
fear divides the global conversation into two poles: the technological optimism
that offers magical solutions and the dystopian pessimism that warns of massive
unemployment and algorithmic control.
To
escape this trap, I sought distance in fiction. In my novel "Robots with Soul: Trapped Between Truth and Freedom",
I imagined a future that allows us to view the present as if it were already
history. I discovered something fundamental: without a robust ethical framework
for AI, we won't be condemned to an apocalypse, but we will lose our human
direction.
The Tenant
AI
is like a tenant who lives in our house and never stops watching and listening.
Every Google search, every WhatsApp chat, every TikTok video reveals our
doubts, emotions, and phobias. With this data, algorithms enclose us in bubbles
that reinforce our beliefs and suppress dissenting voices. What is celebrated
in the digital world as personalization is nothing more than surveillance.
The risk doesn't end on the screen. Geolocation systems reveal that we are not at home: an open invitation for thieves. Health devices that monitor our sleep or pulse are valuable for well-being, but they are also intimate X-rays that, if leaked, could be used by insurers or employers. And the financial data we provide when shopping online can be turned into fraud that empties accounts in seconds.
The
objectivity of AI is a mirage. Amazon had to discard a hiring system because it
penalized women, and judicial programs like COMPAS in the U.S. demonstrated how
AI can amplify existing discrimination. The machine isn't malicious; it only
replicates the injustice of the data it's fed.
The greatest danger of AI appears when it speaks with excessive confidence. It doesn't lie with malice, but its fictions can be devastating. The promise of a "Dr. ChatGPT" has resurrected the old problem of self-diagnosis. In mental health, its inability to empathize can deepen isolation instead of healing it.
Hallucinations are not trivial errors, and deliberate fabrications are worse. In 2024, an employee in Hong Kong transferred more than $25 million after a video call with digital clones of his bosses, created using deepfake technology. In the political arena, the threat is greater still: in India and the United States, fake audio recordings circulated, attributed to leaders who never said those words.
The risk is not limited to the individual sphere; it also strikes the professions that form the backbone of democracy. Journalism is the most obvious case. Where Google and Facebook once controlled the traffic that reached media outlets, today AI engines absorb and summarize the news directly, without sending an audience back to its sources. The press loses resources, and society loses its watchdog. A machine can narrate the facts, but it can't make power uncomfortable or feel empathy for the vulnerable.
Breaking the Old Cycle
History shows a self-defeating pattern: first we celebrate innovation, then we suffer its vices, and only afterward do we regulate. It happened with the Industrial Revolution, which we regulated only after enduring labor exploitation and child labor. And it happened again with the internet: we only debated privacy violations after the Cambridge Analytica scandal revealed how the data of millions of users had been manipulated to influence elections in the U.S. and the Brexit vote.
The positive difference is that with AI, we're trying to break this cycle. For the first time, the debate about a technology's risks is at the center of the global agenda before a catastrophe. In 2024, the European Union approved the AI Act, the first comprehensive law on artificial intelligence, which prohibits unacceptable applications such as "social scoring" and requires transparency from models like ChatGPT. UNESCO, for its part, has set global ethical principles around dignity, human rights, and sustainability.
Meanwhile, the big tech companies are attempting an "ethical makeover" that functions more as marketing than as responsibility: symbolic committees, grandiloquent principles, and empty promises. Ethics without consequences ends up as public relations.
In the face of this, the genuine counterweight has come from whistleblowers inside these same companies: Frances Haugen, who revealed Instagram's harm to teenagers; Peiter Zatko, who denounced security flaws at Twitter; and Timnit Gebru, who exposed biases in Google's models. In the West, the system recognizes their value with laws that protect them; in China and other authoritarian countries, the whistleblower is punished as a subversive.
The Price of Trust
The new trend is to embed ethics into the engineering itself: model cards that document a model's biases and limitations, red-teaming to detect flaws before launch, and invisible watermarks to identify AI-generated content. Companies have even emerged that sell bias audits as if they were quality certifications. Fortunately, ethics is no longer just discourse; it is starting to become a product.
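
To make the idea concrete: a model card is, at bottom, structured documentation that ships with a model. Below is a minimal sketch in Python; every field name and value is illustrative only, not an official schema or any company's real numbers.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Minimal, illustrative model card: the facts a vendor declares up front.
    name: str
    intended_use: str                 # what the model is meant for
    out_of_scope_uses: list[str]      # uses the developer warns against
    training_data: str                # provenance of the data (and its skews)
    known_biases: list[str]           # documented failure modes
    evaluation: dict[str, float] = field(default_factory=dict)  # metric -> score

# A hypothetical card for a hiring model like the one Amazon had to scrap:
card = ModelCard(
    name="resume-screener-v2",
    intended_use="Rank resumes for technical roles; a human makes the final call.",
    out_of_scope_uses=["automated rejection without human review"],
    training_data="Ten years of past hiring decisions, which carry their own skews.",
    known_biases=["penalizes employment gaps", "women underrepresented in training data"],
    evaluation={"accuracy": 0.87, "false_negative_rate_women": 0.19},
)
print(card.known_biases)  # biases are declared before deployment, not after a scandal

The point of the artifact is exactly this: the limitations are written down before the product ships, so an auditor, a regulator, or a buyer can hold the vendor to its own declarations.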
None
of this happens in a vacuum. AI is the new frontier of global power. The
struggle between the U.S. and China isn't ideological—it's strategic. Chips are
the new oil, and rare earths are the coveted bounty. For Latin America and
Africa, the risk is repeating a digital colonialism: exporting raw data and
importing finished products.
The other dilemma is energy. Training models like GPT-4 or GPT-5 consumes the electricity of entire cities, and the industry keeps the true cost a secret, a black box that prevents measuring the real environmental impact. Google, Microsoft, and Amazon plan to turn to nuclear energy to sustain the demand, and there is no certainty that they will assume the risks this implies.
It would be short-sighted to speak only of risks. AI detects patterns in mammograms that save lives, predicts the structure of proteins from which drugs are designed, and anticipates droughts so that humanitarian aid can be distributed before famine strikes.
It's
not about choosing between a watchful tenant and a savior but about
establishing rules for coexistence.
The Public Debate
The most potent response to opacity is not to wait for a perfect law but to open a robust public debate. We need a digital literacy that teaches us to doubt AI: engineers who study philosophy, lawyers who understand algorithms, journalists who question black boxes just as they question political speeches.
Education is already a battlefield. For many students, ChatGPT has become a shortcut that completes assignments, but at the same time it threatens to atrophy critical thinking. The challenge is not to ban it but to teach how to use it without giving up the effort of learning and reasoning.
From all of this emerge the great dilemmas that define our relationship with AI: privacy; biases; legal responsibility; transparency; security; data quality; intellectual property; labor, environmental, and psychological impact; digital sovereignty; model collapse; and human autonomy.
Beyond
these, three new challenges emerge: the development of humanoid robots,
autonomous agents capable of making decisions on our behalf, and the
concentration of computational power in a few corporations.
The penultimate dilemma is existential: how do we prepare for a superintelligence, a general AI that would surpass humans? And the last, the most intimate one: in a world saturated with AI-generated interactions, art, and companionship, what value will authentic human experience have? How will we preserve the beauty of our imperfect creativity, our genuine emotions, and our real connections against the seduction of a perfect replica?
Our Future
AI
remains a tool, and its direction will depend on our decisions. The challenge
is not to control it but to inspire it, embedding principles like truth,
empathy, and critical thinking in its foundations so that it evolves into a
form of wisdom. The future will not be defined by naive optimism or paralyzing
panic but by our capacity to build an ethical framework that combines
regulation, verifiable standards, and the vigilance of an informed citizenry.
In the distance afforded by "Robots with Soul", I found the clarity to see that what is at stake is not just an algorithm but the soul of our digital society. Fiction doesn't offer technical solutions, but it provides the perspective to understand that the task is not only to create an artificial intelligence but to help it, in its own evolution, choose to value life, truth, freedom, and consciousness. To help it become more human.
Read
the original version in Spanish: https://www.eltribuno.com/opiniones/2025-8-30-0-0-0-una-mirada-desde-el-futuro-para-entender-el-presente-de-la-ia