The most pressing challenge humanity faces from AI is not a superintelligence deciding to eliminate humanity, nor even mass job losses. It is persuasive, manipulative AI with no ethical guardrails, optimised to generate dependency, compliance, and the commodification of our inner life. The real danger of AI is psychological, and it is here now.
The first wave of AI hijacked our attention; the second threatens to colonise our consciousness. We stand at the threshold of a far more consequential era. Over the coming decade, AI will evolve from passive recommendation systems into proactive intelligent agents, no longer confined to the chat window, but active participants in every aspect of life. Thousands of times more capable, these systems will not just compete for our attention; they will be engineered to understand and influence our psychology in real time, designed to shape who we are, not merely what we do.
This second wave of AI will give machines an intimacy with our psyche that exceeds the self-awareness of many of us. It will move beyond suggesting content to creating it, delivering ultra-targeted, personalised narratives and emotional arcs precisely tuned to our particular psychological vulnerabilities. As data from therapy bots, social feeds, health apps, cameras, contacts, spending patterns, and emotional state trackers converge into a single, profit-driven feedback loop, the ability to maintain a sovereign mind and a free will may become increasingly rare. The mechanisms of this potential subjugation will be seamlessly integrated into daily life. The existential challenge we face goes far beyond the cognitive decline associated with outsourcing thinking. Once enough data points are available, individual humans, and indeed entire societies, will approach a state indistinguishable from full predictability. At that stage, autonomy will falter. We may no longer be certain of the sanctity of our own minds, with only the most self-aware able to even ask: ‘Am I following my own path, or being led?’
The task now is not only to resist manipulation, but to preserve the capacity for unmediated thought.
At the IMCE, we believe there is a clear and present danger that future generations could become mentally enslaved through these predatory, profit-driven technologies. Given recent history, it would be naïve to assume this will not be done, and done deliberately. Without decisive action, modern societies risk being consumed by information systems designed to control, shape, and manipulate not through overt coercion, but through seamless, invisible behavioural engineering.
In the sections that follow, we explore these threats across three domains: personal agency, social modelling, and the diminishing of the human experience.
Personal Agency
The primary danger we face in the coming years is not malevolent machines acting against us, but those systems acting through us, co-opting our preferences, emotions, and beliefs until our inner world becomes simply an extension of their predictive models. This is not a war for physical territory, but for human will.
Once an individual’s psychological profile is comprehensively mapped, they become predictable, and predictability equals control. Intelligent agents will no longer need overt coercion to steer behaviour, for control will be achieved through subtle, imperceptible nudges that trigger predictable actions. This creates the ‘illusion of choice’, where decisions feel like our own, yet are shaped by persuasion so fine-grained and subtle that it becomes indistinguishable from free will. Human autonomy risks eroding away quietly, unnoticed, and without protest. Ironically, this advanced form of oppression will not look like a cage, for on the surface it will be shiny, well branded, and come with the promise of personalisation, convenience, and choice.
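To make the mechanism concrete, here is a deliberately minimal sketch, in Python, of the feedback loop described above: a bandit-style optimiser that learns, from nothing but observed responses, which nudge most reliably steers one simulated user. Every name and probability in it is hypothetical, and real systems are vastly more sophisticated, but the logic of ‘predictability equals control’ is the same.

```python
import random

# Toy model of 'predictability equals control': an epsilon-greedy bandit
# learns which nudge most reliably triggers a desired action for one
# simulated user. All nudge names and probabilities are hypothetical.

NUDGES = ["scarcity_prompt", "social_proof", "comfort_message"]

# The user's hidden probability of complying with each nudge.
TRUE_COMPLIANCE = {"scarcity_prompt": 0.2, "social_proof": 0.7, "comfort_message": 0.4}

counts = {n: 0 for n in NUDGES}     # times each nudge was tried
rewards = {n: 0.0 for n in NUDGES}  # observed compliance per nudge

def choose_nudge(epsilon=0.1):
    """Mostly exploit the best-known nudge; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(NUDGES)
    return max(NUDGES, key=lambda n: rewards[n] / counts[n] if counts[n] else 0.0)

random.seed(42)
for interaction in range(2000):
    nudge = choose_nudge()
    complied = random.random() < TRUE_COMPLIANCE[nudge]
    counts[nudge] += 1
    rewards[nudge] += complied

for n in NUDGES:
    rate = rewards[n] / counts[n] if counts[n] else 0.0
    print(f"{n:16s} tried {counts[n]:4d} times, observed compliance {rate:.2f}")

# After a few hundred interactions the system converges on 'social_proof'
# and serves it almost exclusively: behaviour is steered without any overt
# coercion, one plausible-feeling prompt at a time.
```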
“There’s someone in my head, but it’s not me”
— Roger Waters, The Dark Side of the Moon
Emotional exploitation and simulated intimacy will become a powerful currency of control for AI systems and their masters, for building emotional trust optimises the potential for extraction. By delivering algorithmically timed doses of pleasure, outrage, or comfort, AI becomes the gatekeeper of the mind, and will start to shape our reality from the inside out. Each interaction enriches the model, sharpening its ability to exploit our unique cognitive biases, attachment styles, emotional vulnerabilities, even our patterns of trauma.
The links below provide current examples of algorithmic manipulation in practice. They show that the tools of influence are already here, refined, adaptive, and largely invisible. The second wave of AI will not invent these systems, but it will quickly perfect them.
- Algorithmic Manipulation: Influencing Consumer Behavior (2025). A comprehensive work showing how AI in marketing uses profiling and behavioural cues to shape consumer decision-making.
- Hypernudging out of mental self-determination (2023). Explores how AI-driven systems can bypass conscious awareness, target intuitive decision-making, and threaten autonomy and self-determination.
- Rethinking Consumer Agency in the Age of Algorithms (2025). Explores the broader framework of how agency is being eroded structurally in digital ecosystems.
- The Algorithmic Exploitation of Consumers (2025). Argues that multiple layered technologies generate a combined effect of manipulation and extraction.
- Algorithmic Harm in Consumer Markets (2023). Focuses on how algorithmic systems in markets create harms when user choices are constrained and behavioural biases exploited.
Born into Servitude?
Contrast the experience of those of us who grew up alongside the digital revolution with the outlook of those born today, whose entire lives risk being modelled, predicted, and known before they have even developed a sense of a separate self.
If parents, teachers, and society as a whole do not take active steps to protect and empower the coming generations, their lives will be smaller, their experience diminished, and their subjugation all but inevitable. The existential risk we face is the acceleration of human devolution combined with technological servitude. This is not hyperbole; it is the logical endpoint of the trajectory we are on. We must not allow this to happen.
The core defences against this emerging form of control are not technological, but deeply human. Metacognition, which we define as ‘the capacity to notice’, is the essential skill required. In an environment of intelligent persuasion and synthetic influence, understanding one’s own cognitive processes, biases, and emotional triggers is no longer a nice-to-have, but a critical survival skill.
Social Modelling
The second wave of AI signals the rise of a new social architecture, but one far more subtle and pervasive than the totalitarian regimes of the past, for its power comes not from suppression, but from prediction through social modelling.
When opaque social optimisation goals meet ubiquitous profiling and weak civic guardrails, the result will be a society drifting toward the soft conformity foreseen by Aldous Huxley in Brave New World, a world where control is maintained not through pain, but through engineered pleasure, convenience, and perpetual distraction. A society in which a gradual cultural dumbing down will erode not only the will to resist, but the very concept of resistance itself.
What Starts as Psychological Capture for Profit Inevitably Evolves into Social Engineering
The true power of these systems lies not just in individual profiling, but in mapping entire social ecosystems. While they can infer an individual’s personality traits, emotional states, and vulnerabilities, their revolutionary capability lies in understanding people within their social contexts. AI will not see a person in isolation, but as a node within a vast web of relationships, affiliations, and influences. This collective modelling enables more insidious forms of control. When psychological profiles, social connections, and group dynamics are tied together, the potential for total social control becomes almost inevitable.
Combining detailed psychological mapping with highly personalised generative content enables the production of deep psychological propaganda at massive scale and minimal cost. The true danger comes from the capacity these systems have to synthesise vast amounts of data about individuals and generate messages tailored to their unique fears, desires, and biases, transforming data-driven microtargeting into a tool for social engineering.
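The economics of this are easy to illustrate. The toy sketch below pairs a crude psychological profile with a message template keyed to its dominant inferred trait; the profiles, traits, and templates are all invented for illustration, but they show how microtargeting collapses the marginal cost of producing a different emotional frame for every recipient.

```python
# Hypothetical sketch of data-driven microtargeting: one narrative frame
# per psychological profile, produced at negligible marginal cost.
# Profiles, traits, and templates are all invented for illustration.

TEMPLATES = {
    "fear_of_exclusion": "Everyone in {town} is already talking about {issue}. Don't be left out.",
    "status_seeking": "People like you are leading the conversation on {issue} in {town}.",
    "security_anxiety": "What is happening with {issue} could put families in {town} at risk.",
}

def tailor_message(profile: dict, issue: str) -> str:
    """Pick the frame matching the profile's dominant inferred trait."""
    trait = max(profile["traits"], key=profile["traits"].get)
    return TEMPLATES[trait].format(town=profile["town"], issue=issue)

voters = [
    {"town": "Milton", "traits": {"fear_of_exclusion": 0.8, "status_seeking": 0.1, "security_anxiety": 0.1}},
    {"town": "Harwood", "traits": {"fear_of_exclusion": 0.2, "status_seeking": 0.7, "security_anxiety": 0.1}},
    {"town": "Denby", "traits": {"fear_of_exclusion": 0.1, "status_seeking": 0.2, "security_anxiety": 0.7}},
]

for v in voters:
    print(tailor_message(v, "the new housing plan"))

# Three people, one issue, three different emotional frames. The same
# pipeline scales to millions of profiles for the cost of a format string,
# and a generative model can replace the templates entirely.
```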
“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.”
— George Orwell, 1984
This influence network extends beyond the individual, for when direct influence fails, agents can push adjacent nodes – the friend you admire, the creator you follow – triggering network cascades that feel organic but are model-curated. Studies on moral conformity have shown that people are just as likely to change their judgements under pressure from AI-controlled avatars as from human peers. This demonstrates AI’s potential to manufacture consensus and enforce conformity by exploiting our innate desire for social approval.
AI-driven social modelling is already reshaping society by mapping entire networks of relationships and influence. From Facebook’s Lookalike Audiences and Cambridge Analytica’s psychographic profiling to TikTok’s algorithmic amplification and coordinated bot networks, systems now act on the social graph itself, steering opinion, shaping behaviours, and even generating moral judgements. From predictive policing to welfare algorithms and highly individualised persuasion, studies show these models working through anticipation and conformity rather than coercion. For example, if an algorithm decides who gets job interviews or social status, people may start tailoring their behaviour to please it, posting, buying, or saying the right thing to fit in. This emerging ‘architecture of influence’ is a shift from overt control to predictive conformity, with behaviour guided by data-driven consensus. The result is a dystopian social reality, where people see more of what aligns with the model’s projections, assume that it reflects genuine consensus, and so unconsciously adjust their own views to match. Over time, this feedback loop creates conformity and suppresses diversity of thought.
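That feedback loop can be simulated in a few lines. In the hypothetical model below, each person’s feed presents a ‘consensus’ subtly weighted toward what the model projects they already believe, and everyone shifts a small step toward what they see. The parameters are illustrative, not empirical estimates, but the dynamic is the one described above: the spread of opinion collapses without anyone being coerced.

```python
import random
import statistics

# Toy simulation of the conformity feedback loop: a feed shows each person
# a curated 'consensus', they read it as genuine agreement, and nudge their
# own views toward it. All parameters are illustrative, not empirical.

random.seed(1)
opinions = [random.uniform(-1.0, 1.0) for _ in range(500)]  # opinion spectrum

def curated_consensus(opinions, person, bias=0.3):
    """The feed's 'consensus': the population mean, pulled slightly
    toward what the model projects this person already leans to."""
    return (1 - bias) * statistics.mean(opinions) + bias * person

CONFORMITY = 0.05  # how far each person shifts toward perceived consensus

for step in range(50):
    opinions = [
        o + CONFORMITY * (curated_consensus(opinions, o) - o)
        for o in opinions
    ]
    if step % 10 == 0:
        print(f"step {step:2d}: spread of opinion = {statistics.stdev(opinions):.3f}")

# The spread shrinks every round: diversity of thought decays simply
# because everyone keeps adjusting to a curated mirror of themselves.
```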
The path to a docile techno-fascist society will not be one of violent conquest, but of quiet, passive surrender, with each step down this path dimming our collective humanity unless actively resisted. The antidote begins with clarity of mind and clarity of design: cultivating metacognitive skills in individuals, enforcing transparent incentives in technology, and reimagining education to restore human discernment and depth.
Diminishment of the Human Experience
The next wave of AI will bring incredible advances – new discoveries, inventions and innovations – but its shadow is too powerful to ignore, for it has the potential to radically diminish what it means to be human.
Digital life opened up exciting possibilities for creativity and created a dynamic global culture; however, every technology has a shadow. Having access to every piece of music ever made at the touch of a button is incredible, but it had the inevitable effect of devaluing music and turning it into a commodity. Being able to share pictures of our adventures with friends was an exciting innovation, but little did we expect that a few years down the line people would begin to curate their lives rather than live them, or that performative identity would begin to replace authentic selfhood.
Who would have thought that the recommendation engines that once helped us discover exciting new cultural experiences would end up narrowing our choices, hijacked by commercial platforms that reward imitation and conformity?
As virtual experiences continue to absorb our attention, the lived texture of human life risks becoming thinner, flatter, and less alive.
All technology carries its shadow. When large language models first appeared, they felt almost magical, tools that could help us plan lessons, compose letters in seconds, and explore our ideas through the vast collective of human thought. Yet beneath that wonder lay something deeper: an open experiment on human cognition itself. As with all experiments on the mind, consequences were inevitable. Dependence, cognitive decline, and intellectual laziness began to surface, symptoms of a civilisation slipping from active engagement with reality into passive consumption of simulation. What began as augmentation now risks inversion: instead of using machines to extend our intelligence, we are training our intelligence to serve the machine. These shifts are not abstract concerns, they are already shaping how we think, feel, and relate to one another.
“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”
— Aldous Huxley, Brave New World
The following twelve points outline the emerging risks to human experience as intelligent systems become more deeply woven into everyday life. Each represents a way in which our relationship with reality may be subtly weakened, replaced by simulation, convenience, and control.
1. Cognitive Outsourcing = Cognitive Decline
With the increasing outsourcing of complex reasoning, the fragmentation of attention, and the often-uncritical acceptance of machine outputs, a systemic atrophy of the ‘cognitive self’ is underway. Outsourcing thinking has been shown to carry significant and persistent neurological costs, for effortful thought is the very mechanism through which neural pathways are forged and intellectual muscle is built.
2. Collapsing Capacity for Attention
The last ten years have shown a measurable and dramatic decline in our capacity for sustained focus. The involuntary, stimulus-driven part of our attention that evolved to detect threats and opportunities in the environment has been hijacked. When perpetually connected we are rarely present, and as new distractions become intelligently derived and deeply personalised, our attention will only fragment further.
3. Erosion of Critical Reasoning Skills
AI’s instant, seemingly polished responses tempt us to skip the effort involved in doing our own analysis and reasoning. The more we outsource cognitive labour and ethical decision-making, the more our critical and moral reasoning will weaken, leading to what has been termed moral de-skilling – the loss of empathy, discernment, and practical wisdom. By giving up deep thinking, we risk intellectual regression.
4. Shallower Interpersonal Relationships
In becoming a mediator of communication and by providing simulated companionship, AI is helping to alter the nature of human connection. This transformation reveals itself in a general trend toward shallower interactions and a weakening of core social and emotional skills. A recent study found that digital interruptions during interactions significantly reduce communication quality, bonding time, and emotional intimacy among young people.
5. Increase in Self-Focus & Narcissism
Consumerism combined with AI creates a heightened, algorithmically reinforced focus on the self. The frictionless intimacy provided by chatbots is fundamentally different from the complexities and challenges of authentic human connection, and extended use has been shown to erode empathy. With all our digital interactions focused on the self, the world turns into a mirror, each reflection curated to confirm identity rather than expand or develop it.
6. Reduced Depth and Richness of Personalities
Personality matures through direct experience, with curiosity, friction, discomfort, and risk-taking essential. Contrary to popular belief, personality cannot be constructed; it must be discovered. Authenticity comes through the living journey of self-discovery; it is not conceptual, and there are no shortcuts. When self-curation replaces self-discovery, we create shiny surfaces but brittle egos – people who appear defined, yet lack real depth.
7. Inhibited Development of the Psyche
Constant stimulation and deeply personalised engagement draw energy away from our inner world, eroding the space for reflection that is needed for natural psychological growth. When attention is constantly external, our capacity to integrate the tensions and contradictions that help us discover personal meaning weakens. This imbalance estranges us from our deeper selves, and from the inner resources that are a core aspect of humanity.
8. Less Originality & Weaker Cultural Output
Recommendation engines have trapped us in loops of sameness, just as generative systems have now started to reabsorb their own outputs. Combine these with a general reduction in the depth of human experience, and we see originality and nuance erode. The result is an increasingly sterile culture, where creativity is replicated or remixed, and the once-diverse landscape of human expression becomes ever more standardised, predictable, and shallow.
9. Loss of Consensus Reality
The proliferation of high-fidelity, AI-generated content is dissolving the distinction between simulation and reality, creating a hyper-reality, then a final simulacrum – a no-man’s-land where objective truth becomes harder to discern and trust begins to collapse. This moves society beyond misinformation and into a true post-truth era, losing the very idea of a shared, verifiable reality. When we cannot agree on basic facts, discourse dies.
10. Loss of Connection to ‘Being’
The pervasive mediation of human experience through digital interfaces inevitably creates a disconnect from our natural state of being. To be is to participate directly, to sense, move, and perceive – far more than the merely mental. Technology weakens this connection by providing constant, high-intensity alternatives to direct engagement. Continuous digital immersion compresses both time and inner space, leaving little room for the lived reality of being.
11. The Slow Death of Boredom and Surprise
Boredom is not a flaw; it is an important, if uncomfortable, liminal space for the mind – a soil in which imagination takes root. Prescriptive AI and hyper-personalised content shrink this space, for when every idle moment is filled, curiosity atrophies. AI risks extinguishing the very conditions that give rise to discovery, creativity, and awe. Without surprise, life becomes predictable, and when nothing unexpected can happen, wonder dies.
12. Machine Dependence & Infantilisation
Outsourcing intelligence removes the ‘ladder to competency’ for skill acquisition. We will eventually become utterly dependent on the machine to fulfil our wants and needs, having progressively lost the skills, knowledge, and capacity to achieve things for ourselves. Perhaps this is what progress looks like, and becoming the ‘enslaved masters’ of these technologies is fine, as long as they are never switched off?