Note: The essay below is an English translation of an article originally published in Swedish in Dagens Arena on January 19, 2026, under the title "En renässans för det mänskliga i en tid av syntetiska språk."
For a long time, Western societies have been sustained by a fairly stable understanding of the nature of knowledge: that it ultimately rests on human experience, scrutiny, and responsibility. Facts could be questioned, and interpretations could compete. Still, there were institutions, practices, and norms that made it possible to determine who was speaking, on the basis of which experiences, and with what responsibility.
Knowledge has never been unambiguous, but it has been rooted in the human.
Today, this order is being put to the test. Not because we know less, but because language, analysis, and representation are increasingly detached from human understanding. In his book Nexus, the historian Yuval Noah Harari describes this as a shift in our knowledge systems: information flows are outpacing our collective capacity to interpret, evaluate, and correct them. The problem is not a lack of data, but a lack of processes that anchor meaning and responsibility. When these processes weaken, knowledge systems risk reproducing themselves without sufficient human scrutiny.
The result is an imbalance
The philosopher Shannon Vallor develops a similar ethical perspective. In The AI Mirror, she describes how our technological capabilities have advanced more quickly than our moral and epistemic practices. The result is not necessarily collapse, but imbalance: we can generate answers before we have had time to formulate the questions that make them meaningful. It is within this tension—between production and understanding—that the idea of an epistemological and ontological challenge has taken hold.
Against this background, generative AI should be understood not merely as a new tool but as a technology that alters the conditions under which knowledge is expressed, circulated, and perceived. When language, analyses, and textual representations can be produced synthetically on an industrial scale, not only does the pace of public discourse change, but also its readability: what can be interpreted as experience, judgment, or a considered standpoint.
The question of knowledge infrastructure is not new. In recent decades, we have seen how technological developments have gradually lowered the thresholds of public discourse, understood as the arenas where texts, interpretations, and narratives are shaped, circulated, and tested before others. Digital platforms have made it possible for more people to participate in these processes, but have also, step by step, reshaped the conditions for trust, authority, and attention. They influence which voices gain traction, how arguments spread, and which forms of expression are rewarded. It is this dual experience that makes it meaningful to look back.
When blogs and social media emerged in the mid-2000s, many spoke of a democratization of public discourse. New technological tools made it possible for more people to participate with their own voices, without passing through established editorial filters. Barriers to entry were lowered, grassroots engagement grew, and perspectives that had previously struggled to find a place became visible alongside traditional media. For many, this represented a genuine broadening of the public sphere, both in terms of who could speak and which experiences could be shared.
At the same time, it gradually became clear that the very infrastructures that enabled this broadening also produced new constraints. Platforms that initially appeared as open arenas evolved into algorithmically governed ecosystems in which commercial logics gained increasing influence. Editorial ideals and slower forms of scrutiny were often displaced by click optimization, polarization, and the dynamics of the attention economy. Public discourse became fragmented, and quality risked becoming a side effect rather than an explicit goal. The democratization of expression thus coincided with a professionalization of attention.
With generative AI, we are entering another technological shift. Once again, there is talk of democratization, this time of writing, analysis, creativity, and knowledge production. And in many ways this is true. Generative AI is already widely used for writing and text processing, image and music creation, design, voice synthesis, and other creative processes, as well as for administration, programming, research, data analysis, and strategic decision-making. It is no longer a specialized tool but a general productivity technology with far-reaching consequences for how knowledge is produced and circulated.
Interpretation is outsourced as well
Yet this technology is also embedded within social and commercial structures. In marketing and communication, many are now searching for ways to optimize their content for the logic of language models. Where ranking high in search engines was once the goal, top placement in the response fields of language models now appears to be the new objective—whether to make products visible, promote ideas, or shape how questions themselves are formulated and answered.
Here, a new form of attention economy emerges, where not only visibility but also interpretation is effectively outsourced to commercially driven systems.
The models that generate these responses do not arise in a neutral space. They are trained on historical data, shaped by processes of selection and weighting, and developed within institutional, cultural, and ideological frameworks that influence what appears reasonable, relevant, and plausible. As these models are increasingly trained on synthetic data, they risk drifting from their original references, making human review even more central.
This sharpens the question of what happens to judgment, meaning, and responsibility when more and more expressions, analyses, and representations are built on synthetic data.
It is tempting to view generative AI as a continuation of the democratization of writing and knowledge production. But the comparison is misleading. Previous digital shifts lowered the barriers to participation while leaving the act of expression fundamentally human. Generative AI, by contrast, risks reinforcing a superficial homogenization, in which more and more people communicate through language, forms, and expressions that are not their own.
What once lent credibility (education, professional experience, deep expertise, and contextual understanding) can now be simulated through a few brief prompts. This does not mean that the results are necessarily incorrect. Much of what is produced is accurate, sometimes impressively so. But it becomes harder to determine who stands behind an argument, which experiences support it, and who can ultimately be held responsible.
Ownership becomes more diffuse or disappears altogether.
Harder-to-read foundations
The narrative distinctions that once helped us identify individuals, professions, disciplines, or cultures (the researcher's method, the lawyer's reasoning, the artist's expression) become less distinct. Even the journalist's scrutiny and editorial judgment risk being reduced to just another stylistic register. The narrative themes of institutions and interest groups become harder to trace when expressions are detached from identifiable individuals and communities. The underlying material for public voices becomes not only more extensive but also more difficult to interpret: rich in text, yet poorer in recognizable perspectives.
Narratives should not be understood here as individual stories, but as the frames of meaning and processes through which experiences are organized, interpreted, and given direction over time. Through these processes, actions can be understood in relation to values, and responsibility can be articulated beyond the immediate moment.
Philosophers such as Paul Ricoeur have shown how narrative, through this movement, connects time, action, and responsibility. By experiencing, articulating, and rearticulating, contexts emerge in which the actions of both individuals and institutions can be understood, questioned, and renegotiated. The psychologist Jerome Bruner similarly demonstrated how narrative understanding differs from analytical explanation: it does not reduce complexity but makes it possible to live with it.
When language and texts are increasingly reduced to models and algorithms, this narrative movement risks being shortened or interrupted. Not because such systems lack structure (human thinking, too, is shaped by frameworks, traditions, and concepts), but because they remain confined to the structures embedded in their models, platforms, and training data, without the capacity to question or transcend them. They can generate new combinations, but cannot reflect on the conditions that make those combinations meaningful. What remains are often well-formulated texts that are easy to absorb but difficult to situate within a broader context of meaning.
Generative AI can strengthen our cognitive capacities. It can help structure complex information, identify patterns, simulate scenarios, and explore ideas more rapidly than before. But it cannot determine what is relevant, reasonable, or desirable. It can produce analytical material, but it cannot take responsibility for the conclusions drawn from it. The philosopher Michael Polanyi described this distinction as the difference between explicit and tacit knowledge: between what can be formalized and what can only be developed through experience and judgment.
Human judgment still matters
The distinction between automated production and meaningful analysis, therefore, becomes crucial. We readily accept that AI can compile facts, summarize research, or produce technical instructions. But when it comes to interpreting context, weighing perspectives, or making normatively charged decisions, we still turn to human judgment—not out of nostalgia, but for functional reasons. Nobel laureate Daniel Kahneman described this as the difference between fast and slow thinking: efficient processing on the one hand, reflective deliberation on the other.
One concrete consequence of this development is the content inflation already visible around us. Texts, images, analyses, and proposals can now be produced at previously unimaginable speeds and at previously unimaginably low costs. Speculations about the "dead internet" should be treated cautiously, but they point toward a genuine experience: it is becoming increasingly difficult to discern human presence, intention, and responsibility in digital environments—not because the content is necessarily false, but because it often lacks relation.
Economists such as Robert J. Shiller have shown how narratives spread as social phenomena with real consequences. When narratives can be mass-produced without corresponding processes of scrutiny, the risk of rapid diffusion without responsibility increases. This applies to economic expectations, political moods, and cultural perceptions alike.
Here, it may be useful to look toward the scientific process as an example of what Harari calls a self-correcting system. Science offers no guarantee of truth, but it is organized around institutionalized doubt: a social process in which claims must be defended before others with relevant expertise. Disagreement is not a failure but a prerequisite for the development of knowledge.
At the same time, this system is itself under pressure. The number of scientific publications is growing rapidly, something to which AI-assisted writing also contributes, while the availability of qualified reviewers remains limited. As the production of texts accelerates and the cost of generating them declines, the burden on the processes that determine what is relevant, robust, and reliable increases. What was once considered background work (selection, interpretation, and evaluation) becomes both more demanding and more crucial. In this respect, the challenges facing academia mirror those of society at large.
Taken together, these developments point toward the possibility of a renaissance of human qualities. Not as resistance to new technology, but as a prerequisite for using it wisely. In a public sphere where more and more can be produced without friction, value shifts from production to understanding, from visibility to relevance.
But such a renaissance of the human is not inevitable. It does not arise automatically from technological progress or commercial forces. If generative AI is to deepen understanding rather than reinforce homogenization, attention must be paid to the narrative conditions through which technology is given meaning and direction. Which perspectives are made visible? Which assumptions are embedded as self-evident? And which practices of discernment and scrutiny are we prepared to preserve as the pace continues to accelerate?
Perhaps it is precisely here, within the tension between rapid generation and slow understanding, that the decisive questions of the future begin to take shape. Not as technical problems with predetermined solutions, but as cultural and intellectual challenges whose outcomes remain open.
Joakim Lind
PhD in Business Administration, Åbo Akademi University
Communications consultant