The Biggest Risks of AI Are Not Covered by Regulations

Note: The debate article below is an English translation of an article originally published in Swedish on DI Debatt in Dagens Industri on January 23, 2026 under the title “De största riskerna med AI täcks inte av regelverken.”

As large language models and generative AI change the conditions for communication, texts and other content can be produced quickly, cheaply, and at an industrial scale. For public discourse, this represents a fundamental challenge, writes Joakim Lind, PhD in Business Administration and communications consultant at Cloudberry.

When language can be generated without clear human involvement, it becomes more difficult to determine what – and who – carries experience, perspective, and responsibility. At the same time, the information environment is changing, with a growing proportion of circulating content being synthetically generated. For a democratic society—built on recognizable voices, the ability to scrutinize power and influence, and the capacity to hold actors accountable—this is not a marginal change but a structural challenge to how trust is created and maintained.

What previously gave human actors credibility – education, professional experience, position, and contextual understanding – can now be simulated through a few quick instructions (prompts). The results are often accurate, sometimes impressive. But while linguistic quality can be automated, responsibility cannot. Ownership and authorship become diffuse. When even expressions of expertise and authority can be generated on command, the boundary between earned trust and simulated credibility begins to shift. Who stands behind an argument? What experiences support it? And who can be held responsible when something has consequences?

Here a more subtle but decisive shift takes place. The narrative signatures that once helped us identify professions and roles – the researcher’s theory and method, the doctor’s expertise, the journalist’s investigative depth, the lawyer’s argumentation, the writer’s voice – risk being flattened into a linguistic middle ground that may be technically correct but context-free.

Editorial responsibility, source criticism, and professional judgment are thereby weakened. When linguistic processes become detached from individuals, roles, and contexts, perspectives become harder to trace. The material underlying public and organizational decisions is expanding rapidly in both scope and complexity. For decision-makers, this makes it increasingly difficult to determine what has actually been analyzed, tested, and grounded in expertise, and what is merely well formulated. In this hybrid of human judgment and machine-generated production, the conditions under which responsibility can be identified and enforced are also being reshaped.

If we look at the fundamental qualities of synthetic language, this is not primarily a question of style, but of how meaning arises—through narrative. Narratives should here be understood as frameworks of meaning through which experiences are organized and given direction over time, rather than as individual stories. When these frameworks become blurred, our collective ability to orient ourselves weakens.

Generative AI can strengthen our cognitive capacities. It can structure information, identify patterns, and generate analytical material at a scale far beyond human capacity. However, it cannot be held responsible for the conclusions in that material. As text, analysis, and decision support become cheap, human judgment becomes more expensive—a scarce resource. In many organizations, this is already visible in what has begun to be called workslop: polished AI-generated texts, reports, and presentations that appear complete but may lack substance, shifting the burden of evaluation and quality assurance to the next recipient. As a result, roles such as reviewers, editors, and decision-makers take on greater strategic importance as guarantors of quality and responsibility.

The historian Yuval Noah Harari has described this as a central challenge for modern knowledge systems. In his book Nexus, he highlights the importance of self-correcting systems: structures that both produce information and allow it to be tested, questioned, and revised over time. Crucially, however, such systems require active human involvement. They depend on practices of review and critical reassessment. Science is one such example: no guarantee of truth, but organized around institutionalized doubt, open criticism, and responsibility. The same principles are equally vital for organizations, media institutions, and companies that increasingly rely on synthetic language.

Responsibility, therefore, cannot be reduced to compliance with individual regulations or technical risk classifications. Regulations such as the EU’s AI Act play an important role in ensuring security, transparency, and accountability in specific use cases. Yet the most far-reaching impacts and potential risks of generative AI often arise outside the domains where risks are clearly defined and measurable. They emerge in everyday language use, in analysis, communication, and storytelling, in the layers of text and data that gradually shape how reality is described, understood, and evaluated.

As fundamental narratives about society, the economy, and human agency are increasingly produced and reinforced by machine systems, not only are individual decisions affected, but also the frameworks of meaning that guide collective beliefs. The risk, therefore, lies less in individual errors than in a gradual shift in what is perceived as reasonable, relevant, and legitimate.

Here, the need for narrative resilience becomes clear. By this, I mean the ability of organizations and societies to preserve, test, and develop frameworks of meaning grounded in human judgment, responsibility, and professional integrity, even when language itself can be automated. For businesses and policymakers, this is not about slowing technology, but about strengthening the institutions, roles, and processes that give it direction.

Generative AI thus requires not only new regulations but also a deeper sense of responsibility for how meaning is produced and managed. In a synthetic information environment where the volumes of text and data continue to grow, narrative resilience becomes both a strategic asset and a necessity—for media, organizations, companies, and ultimately for democracy.

Joakim Lind
PhD in Business Administration and communications consultant

A Renaissance of the Human in the Age of Synthetic Language

Note: The essay below is an English translation of an article originally published in Swedish in Dagens Arena on January 19, 2026, under the title “En renässans för det mänskliga i en tid av syntetiska språk.”

For a long time, Western societies have been sustained by a fairly stable understanding of the nature of knowledge: that it ultimately rests on human experience, scrutiny, and responsibility. Facts could be questioned, and interpretations could compete. Still, there were institutions, practices, and norms that made it possible to determine who was speaking, on the basis of which experiences, and with what responsibility.

Knowledge has never been unambiguous, but it has been rooted in the human.

Today, this order is being tested. Not because we know less, but because language, analysis, and representation are increasingly detached from human understanding. In his book Nexus, the historian Yuval Noah Harari describes this as a shift in our knowledge systems: information flows are outpacing our collective capacity to interpret, evaluate, and correct them. The problem is not a lack of data, but a lack of processes that anchor meaning and responsibility. When these processes weaken, knowledge systems risk reproducing themselves without sufficient human scrutiny.

The result is an imbalance

The philosopher Shannon Vallor develops a similar ethical perspective. In The AI Mirror, she describes how our technological capabilities have advanced more quickly than our moral and epistemic practices. The result is not necessarily collapse, but imbalance: we can generate answers before we have had time to formulate the questions that make them meaningful. It is within this tension—between production and understanding—that the idea of an epistemological and ontological challenge has taken hold.

Against this background, generative AI should be understood not merely as a new tool but as a technology that alters the conditions under which knowledge is expressed, circulated, and perceived. When language, analyses, and textual representations can be produced synthetically on an industrial scale, not only does the pace of public discourse change, but also its readability: what can be interpreted as experience, judgment, or a considered standpoint.

The question of knowledge infrastructure is not new. In recent decades, we have seen how technological developments have gradually lowered the thresholds of public discourse, understood as the arenas where texts, interpretations, and narratives are shaped, circulated, and tested before others. Digital platforms have made it possible for more people to participate in these processes, but have also, step by step, reshaped the conditions for trust, authority, and attention. They influence which voices gain traction, how arguments spread, and which forms of expression are rewarded. It is this dual experience that makes it meaningful to look back.

When blogs and social media emerged in the mid-2000s, many spoke of a democratization of public discourse. New technological tools made it possible for more people to participate with their own voices, without passing through established editorial filters. Barriers to entry were lowered, grassroots engagement grew, and perspectives that had previously struggled to find a place became visible alongside traditional media. For many, this represented a genuine broadening of the public sphere, both in terms of who could speak and which experiences could be shared.

At the same time, it gradually became clear that the very infrastructures that enabled this broadening also produced new constraints. Platforms that initially appeared as open arenas evolved into algorithmically governed ecosystems in which commercial logics gained increasing influence. Editorial ideals and slower forms of scrutiny were often displaced by click optimization, polarization, and the dynamics of the attention economy. Public discourse became fragmented, and quality risked becoming a side effect rather than an explicit goal. The democratization of expression thus coincided with a professionalization of attention.

With generative AI, we are entering another technological shift. Once again, there is talk of democratization, this time of writing, analysis, creativity, and knowledge production. And in many ways this is true. Generative AI is already widely used for writing and text processing, image and music creation, design, voice synthesis, and other creative processes, as well as for administration, programming, research, data analysis, and strategic decision-making. It is no longer a specialized tool but a general productivity technology with far-reaching consequences for how knowledge is produced and circulated.

Interpretation is outsourced as well

Yet this technology is also embedded within social and commercial structures. In marketing and communication, many are now searching for ways to optimize their content for the logic of language models. Where ranking high in search engines was once the goal, top placement in the response fields of language models now appears to be the new objective—whether to make products visible, promote ideas, or shape how questions themselves are formulated and answered.

Here, a new form of attention economy emerges, where not only visibility but also interpretation is effectively outsourced to commercially driven systems.

The models that generate these responses do not arise in a neutral space. They are trained on historical data, shaped by processes of selection and weighting, and developed within institutional, cultural, and ideological frameworks that influence what appears reasonable, relevant, and plausible. As these models are increasingly trained on synthetic data, they risk drifting from their original references, making human review even more central.

This sharpens the question of what happens to judgment, meaning, and responsibility when more and more expressions, analyses, and representations are built on synthetic data.

It is tempting to view generative AI as a continuation of the democratization of writing and knowledge production. But the comparison is misleading. Previous digital shifts lowered the barriers to participation while leaving the act of expression fundamentally human. Generative AI, by contrast, risks reinforcing a superficial homogenization, in which more and more people communicate through language, forms, and expressions that are not their own.

What once lent credibility – education, professional experience, deep expertise, and contextual understanding – can now be simulated through a few brief prompts. This does not mean that the results are necessarily incorrect. Much of what is produced is accurate, sometimes impressively so. But it becomes harder to determine who stands behind an argument, which experiences support it, and who can ultimately be held responsible.

Ownership becomes more diffuse or disappears altogether.

Harder-to-read foundations

The narrative distinctions that once helped us identify individuals, professions, disciplines, or cultures – the researcher’s method, the lawyer’s reasoning, the artist’s expression – become less distinct. Even the journalist’s scrutiny and editorial judgment risk being reduced to just another stylistic register. The narrative themes of institutions and interest groups become harder to trace when expressions are detached from identifiable individuals and communities. The underlying material for public voices becomes not only more extensive but also more difficult to interpret: rich in text, yet poorer in recognizable perspectives.

Narratives should not be understood here as individual stories, but as the frames of meaning and processes through which experiences are organized, interpreted, and given direction over time. Through these processes, actions can be understood in relation to values, and responsibility can be articulated beyond the immediate moment.

Philosophers such as Paul Ricoeur have shown how narrative, through this movement, connects time, action, and responsibility. By experiencing, articulating, and rearticulating, contexts emerge in which the actions of both individuals and institutions can be understood, questioned, and renegotiated. The psychologist Jerome Bruner similarly demonstrated how narrative understanding differs from analytical explanation: it does not reduce complexity but makes it possible to live with it.

When language and texts are increasingly reduced to models and algorithms, this narrative movement risks being shortened or interrupted. Not because such systems lack structure – human thinking itself is shaped by frameworks, traditions, and concepts – but because they remain confined to the structures embedded in their models, platforms, and training data, without the capacity to question or transcend them. They can generate new combinations, but cannot reflect on the conditions that make those combinations meaningful. What remains are often well-formulated texts that are easy to absorb but difficult to situate within a broader context of meaning.

Generative AI can strengthen our cognitive capacities. It can help structure complex information, identify patterns, simulate scenarios, and explore ideas more rapidly than before. But it cannot determine what is relevant, reasonable, or desirable. It can produce analytical material, but it cannot take responsibility for the conclusions drawn from it. The philosopher Michael Polanyi described this distinction as the difference between explicit and tacit knowledge: between what can be formalized and what can only be developed through experience and judgment.

Human judgment still matters

The distinction between automated production and meaningful analysis, therefore, becomes crucial. We readily accept that AI can compile facts, summarize research, or produce technical instructions. But when it comes to interpreting context, weighing perspectives, or making normatively charged decisions, we still turn to human judgment—not out of nostalgia, but for functional reasons. Nobel laureate Daniel Kahneman described this as the difference between fast and slow thinking: efficient processing on the one hand, reflective deliberation on the other.

One concrete consequence of this development is the content inflation already visible around us. Texts, images, analyses, and proposals can now be produced at previously unimaginable speed and at previously unimaginably low cost. Speculations about the “dead internet” should be treated cautiously, but they point toward a genuine experience: it is becoming increasingly difficult to discern human presence, intention, and responsibility in digital environments – not because the content is necessarily false, but because it often lacks any relation to an identifiable author.

Economists such as Robert J. Shiller have shown how narratives spread as social phenomena with real consequences. When narratives can be mass-produced without corresponding processes of scrutiny, the risk of rapid diffusion without responsibility increases. This applies to economic expectations, political moods, and cultural perceptions alike.

Here, it may be useful to look toward the scientific process as an example of what Harari calls a self-correcting system. Science offers no guarantee of truth, but it is organized around institutionalized doubt: a social process in which claims must be defended before others with relevant expertise. Disagreement is not a failure but a prerequisite for the development of knowledge.

At the same time, this system is itself under pressure. The number of scientific publications is growing rapidly, something to which AI-assisted writing also contributes, while the availability of qualified reviewers remains limited. As the production of texts accelerates and the cost of generating them declines, the burden on the processes that determine what is relevant, robust, and reliable increases. What was once considered background work – selection, interpretation, and evaluation – becomes both more demanding and more crucial. In this respect, the challenges facing academia mirror those of society at large.

Taken together, these developments point toward the possibility of a renaissance of human qualities. Not as resistance to new technology, but as a prerequisite for using it wisely. In a public sphere where more and more can be produced without friction, value shifts from production to understanding, from visibility to relevance.

But such a renaissance of the human is not inevitable. It does not arise automatically from technological progress or commercial forces. If generative AI is to deepen understanding rather than reinforce homogenization, attention must be paid to the narrative conditions through which technology is given meaning and direction. Which perspectives are made visible? Which assumptions are embedded as self-evident? And which practices of discernment and scrutiny are we prepared to preserve as the pace continues to accelerate?

Perhaps it is precisely here, within the tension between rapid generation and slow understanding, that the decisive questions of the future begin to take shape. Not as technical problems with predetermined solutions, but as cultural and intellectual challenges whose outcomes remain open.

Joakim Lind
PhD in Business Administration, Åbo Akademi University
Communications consultant

From Midnight Sun to Rebecka Martinsson — How Crime Drama Is Shaping Place Value in Kiruna

How Two TV Series Generated Economic and Regional Value in Kiruna and Norrbotten

Yesterday in Kiruna, I had the opportunity to present our newly released report on the economic effects and regional value creation linked to two major TV productions filmed in Norrbotten: Midnattssol (Midnight Sun / Jour Polaire) and Rebecka Martinsson. The seminar coincided with the premiere of Midnattssol, making the discussion particularly timely.

The study, commissioned by Filmpool Nord and conducted between April and October 2016, examines not only the direct and indirect economic effects of large-scale TV productions, but also the broader place-related values generated by film narratives. Beyond production spending, employment, and regional turnover, film and television increasingly function as narrative infrastructures for places, shaping perceptions, strengthening regional identities and, in some cases, stimulating film-induced tourism.

Later in the evening, at the gala dinner, I had the pleasure of meeting the series’ creators and directors, Måns Mårlind and Björn Stein, as well as artist and cultural ambassador Sofia Jannok. It was a timely reminder that successful place-based storytelling is always the result of close interaction between creative talent, local culture, and regional production ecosystems.

One of the key insights from the report is that the long-term value of film production cannot be reduced to short-term visitor numbers alone. Even when the effects of tourism remain uncertain, strong place-based storytelling can contribute to sustained visibility, cultural capital, and strategic positioning for regions such as Kiruna and Norrbotten, both nationally and internationally.

With the premiere now behind us, the coming months will be particularly interesting as audience reception, international distribution, and longer-term effects begin to unfold.

Sweden beyond the Millennium and Stieg Larsson

Stieg Larsson’s Millennium trilogy, comprising “The Girl with the Dragon Tattoo”, “The Girl Who Played with Fire”, and “The Girl Who Kicked the Hornets’ Nest”, has not only captivated a global audience but also significantly impacted the perception of Sweden worldwide. Selling over 64 million copies in more than 50 countries, these books have made a mark far beyond their original Swedish setting.

The story, centered around investigative journalist Mikael Blomkvist and hacker Lisbeth Salander, delves deep into the underbelly of Swedish society, exploring themes of corruption, abuse of power, and social injustices. This dark and gripping narrative contrasts sharply with the stereotypical image of Sweden as a country known for its safe cars, efficient governance, and corporate responsibility.

The worldwide success of the Millennium series has sparked a media frenzy and inspired various adaptations, including both Swedish and Hollywood film versions. Although Larsson died before seeing the phenomenal success of his works, his legacy continues to thrive.

Larsson’s portrayal of complex characters and socially charged themes resonates with readers globally, challenging the idealistic image of Sweden and presenting a multi-dimensional, sometimes darker view of the nation. His depiction of strong, unconventional female characters like Lisbeth Salander has particularly struck a chord, offering new perspectives on feminism and societal roles.

Moreover, the trilogy has significantly contributed to Swedish tourism, with fans flocking to Stockholm to trace the steps of their beloved characters. Larsson’s narrative, blending fiction with a realistic depiction of modern Sweden, has also sparked a deeper interest in Swedish culture, politics, and history.

In summary, the Millennium trilogy has not only been a literary and commercial triumph but has also played a pivotal role in reshaping the global perception of Sweden, highlighting the country’s complexities beyond its idyllic facade.