Friends of IFME


The Human in an Inhuman World

By Jared Shilhanek posted 2 hours ago

For several years, artificial intelligence has been presented as the key to efficiency, innovation, and competitiveness. The answer seems to be the same every time. But according to author and digital ethics researcher Dr. Gry Hasselbalch, we have locked ourselves into a far too narrow conversation.

Published 23 February 2026 by Sindre Haarr

“We have spent a great deal of energy talking about generative AI as if it were the answer to all societal challenges,” says Hasselbalch.

As technology receives increasing attention, the truly fundamental questions fade into the background: Why do we do things the way we do? What kind of working life do we want? And what happens to people when everything is measured in efficiency?

Dr. Gry Hasselbalch is an author and researcher specializing in digital ethics.

The Dark Side of Efficiency

Hasselbalch draws parallels to industrialization. Optimization and rationalization were once necessary measures, but she is now concerned that we are moving back toward a culture where people are forced into increasingly narrow frameworks.

“Systems have become heavier, not lighter. Digitalization has added tasks instead of removing them. Doctors and nurses spend enormous amounts of time on documentation. AI can support certain tasks, but tasks cannot simply be delegated to a system. You still have to read, understand, and assess — because the output of today’s AI systems is full of errors.”

If we focus solely on AI as a “one‑size‑fits‑all” solution, she argues, we fail to see holistic solutions where society and human beings are truly taken into account.

She also points to how technology vendors have largely been allowed to define the narrative.

“Products are launched quickly, often with well‑known weaknesses, yet they are still described as necessary. It is strange that those who sell the solutions also get to define what the problems are.”

When the Tool Becomes the Ideology

Instead of discussing culture, organization, and responsibility, Hasselbalch believes we have made technology the main character.

“We talk far too much about tools. It’s as if all the eggs are in one basket — and that basket is called generative AI.”

She believes this also affects creativity. When everything is reduced to efficiency and production, we lose something fundamentally human.

“Creativity is not just about output. It emerges from experience, intuition, and the very process of creating. That is not something you can optimize into existence.”

OECD figures showing declining investment in culture since 2008 concern her, especially as enormous sums are simultaneously being poured into AI industrialization.

“It says something about what we value — and that is not necessarily what builds a resilient society.”

What Cannot Be Automated

In her book, Hasselbalch describes seven human qualities AI can never replace: creativity, intuition, life, emotions, love, rebellion, and wisdom.

At a fundamental level, she believes the issue is about empathy, responsibility, creativity, contextual understanding, relationships, values, and the ability to handle the unexpected.

“AI can follow rules, but it cannot take responsibility. It can recognize patterns, but not understand meaning. It has not lived, and it cannot feel empathy — no matter how convincing its language may be.”

She is therefore also critical of the idea that technology is “inevitable.”

“When politicians say that, they are absolving themselves of responsibility. Technology is a choice. How we use it is an ethical and political question.”

Another Crossroads

Hasselbalch is not a technological pessimist. On the contrary, she points to major opportunities in areas such as medical research and scientific analysis, where AI can be used purposefully and responsibly.

“The problem is not AI. The problem is hype, poor products, and a lack of reflection.”

Her wish is for a clearer humanistic perspective in the technology debate.

“We need systems built for people—not the other way around. Space for dialogue, uncertainty, and wisdom.”

She concludes with a hope for the future:

“When the story of our era’s AI investment is written, I hope we will not have forgotten that human value does not lie in efficiency, but in insight, responsibility, and the ability to create meaning.”

Link to Original Article From NKF


#Other
#AIinIFME