The Biggest Risks of AI Are Not Covered by Regulations

Note: The debate article below is an English translation of an article originally published in Swedish on DI Debatt in Dagens Industri on January 23, 2026 under the title “De största riskerna med AI täcks inte av regelverken.”

As large language models and generative AI change the conditions for communication, texts and other content can be produced quickly, cheaply, and at industrial scale. For public discourse, the challenge this poses is more fundamental than speed or cost, writes Joakim Lind, PhD in Business Administration and communications consultant at Cloudberry.

When language can be generated without clear human involvement, it becomes more difficult to determine what – and who – carries experience, perspective, and responsibility. At the same time, the information environment is changing, with a growing proportion of circulating content being synthetically generated. For a democratic society – built on recognizable voices, the ability to scrutinize power and influence, and the capacity to hold actors accountable – this is not a marginal change but a structural challenge to how trust is created and maintained.

What previously gave human actors credibility – education, professional experience, position, and contextual understanding – can now be simulated through a few quick instructions (prompts). The results are often accurate, sometimes impressive. But while linguistic quality can be automated, responsibility cannot. Ownership and authorship become diffuse. When even expressions of expertise and authority can be generated on command, the boundary between earned trust and simulated credibility begins to shift. Who stands behind an argument? What experiences support it? And who can be held responsible when something has consequences?

Here a more subtle but decisive shift takes place. The narrative signatures that once helped us identify professions and roles – the researcher’s theory and method, the doctor’s expertise, the journalist’s investigative depth, the lawyer’s argumentation, or the writer’s voice – risk being flattened into a linguistic middle ground that may be technically correct but context-free.

Editorial responsibility, source criticism, and professional judgment are thereby weakened. When linguistic processes become detached from individuals, roles, and contexts, perspectives become harder to trace. The material underlying public and organizational decisions is expanding rapidly in both scope and complexity. For decision-makers, this makes it increasingly difficult to determine what has actually been analyzed, tested, and grounded in expertise, and what is merely well formulated. In this hybrid of human judgment and machine-generated production, the conditions under which responsibility can be identified and enforced are also being reshaped.

If we look at the fundamental qualities of synthetic language, this is not primarily a question of style, but of how meaning arises – through narrative. Narratives should be understood here as frameworks of meaning through which experiences are organized and given direction over time, rather than as individual stories. When these frameworks become blurred, our collective ability to orient ourselves weakens.

Generative AI can strengthen our cognitive capacities. It can structure information, identify patterns, and generate analytical material at a scale far beyond human capacity. However, it cannot be held responsible for the conclusions in that material. As text, analysis, and decision support become cheap, human judgment becomes more expensive – a scarce resource. In many organizations, this is already visible in what has begun to be called workslop: polished AI-generated texts, reports, and presentations that appear complete but may lack substance, shifting the burden of evaluation and quality assurance to the next recipient. As a result, roles such as reviewers, editors, and decision-makers take on greater strategic importance as guarantors of quality and responsibility.

The historian Yuval Noah Harari has described this as a central challenge for modern knowledge systems. In his book Nexus, he highlights the importance of self-correcting systems: structures that both produce information and allow it to be tested, questioned, and revised over time. Crucially, however, such systems require active human involvement. They depend on sustained practices of review and critical reassessment. Science is one such example: it offers no guarantee of truth, but it is organized around institutionalized doubt, open criticism, and responsibility. The same principles are equally vital for organizations, media institutions, and companies that increasingly rely on synthetic language.

Responsibility, therefore, cannot be reduced to compliance with individual regulations or technical risk classifications. Regulations such as the EU’s AI Act play an important role in ensuring security, transparency, and accountability in specific use cases. Yet the most far-reaching impacts and potential risks of generative AI often arise outside the domains where they can be clearly defined and measured. They emerge in everyday language use, in analysis, communication, and storytelling, in the layers of text and data that gradually shape how reality is described, understood, and evaluated.

As fundamental narratives about society, the economy, and human agency are increasingly produced and reinforced by machine systems, not only are individual decisions affected, but so are the frameworks of meaning that guide collective beliefs. The risk, therefore, lies less in individual errors than in a gradual shift in what is perceived as reasonable, relevant, and legitimate.

Here, the need for narrative resilience becomes clear. By this, I mean the ability of organizations and societies to preserve, test, and develop frameworks of meaning grounded in human judgment, responsibility, and professional integrity, even when language itself can be automated. For businesses and policymakers, this is not about slowing technology, but about strengthening the institutions, roles, and processes that give it direction.

Generative AI thus requires not only new regulations but also a deeper sense of responsibility for how meaning is produced and managed. In a synthetic information environment where the volumes of text and data continue to grow, narrative resilience becomes both a strategic asset and a necessity – for media, organizations, companies, and ultimately for democracy.

Joakim Lind
PhD in Business Administration and communications consultant