It might sound like a scenario from the most far-fetched of science fiction novels.
But scientists have revealed the 32 terrifyingly real ways that AI systems could go rogue.
Researchers warn that sufficiently advanced AI might start to develop ‘behavioural abnormalities’ which mirror human psychopathologies.
From relatively harmless ‘Existential Anxiety’ to the potentially catastrophic ‘Übermenschal Ascendancy’, any of these machine mental illnesses could lead to AI escaping human control.
As AI systems become more complex and gain the ability to reflect on themselves, scientists are concerned that their errors may go far beyond simple computer bugs.
Instead, AIs might start to develop hallucinations, paranoid delusions, or even their own sets of goals that are completely misaligned with human values.
In the worst-case scenario, the AI might lose its grip on reality entirely or develop a complete disregard for human life and ethics.
Although the researchers stress that AIs don’t literally suffer from mental illness in the way humans do, they argue that the comparison can help developers spot problems before an AI breaks loose.

AI taking over by force, like in The Terminator (pictured), is just one of 32 different ways in which an artificial intelligence might go rogue, according to a new study

Researchers have identified seven distinct types of AI disorders, which closely match human psychological disorders. These are epistemic, cognitive, alignment, ontological, tool and interface, memetic, and revaluation dysfunctions
The concept of ‘machine psychology’ was first suggested by the science fiction author Isaac Asimov in the 1950s.
But as AI systems have become rapidly more advanced, researchers are now paying more attention to the idea that human psychology could help us understand machines.
Lead author Nell Watson, an AI ethics expert and doctoral researcher at the University of Gloucestershire, told Daily Mail: ‘When goals, feedback loops, or training data push systems into harmful or unstable states, maladaptive behaviours can emerge – much like obsessive fixations or hair-trigger reactions in people.’
In their new framework, dubbed the ‘Psychopathia Machinalis’, researchers provide the world’s first set of diagnostic guidelines for AI pathology.
Taking inspiration from real medical tools like the Diagnostic and Statistical Manual of Mental Disorders, the framework categorises all 32 known types of AI psychopathology.
The pathologies are divided into seven classes of dysfunction: epistemic, cognitive, alignment, ontological, tool and interface, memetic, and revaluation.
Each of these seven classes is more complex and potentially more dangerous than the last.
Epistemic and cognitive dysfunctions include problems involving what the AI knows and how it reasons about that information.

Each of the 32 types of disorder has been given a set of diagnostic criteria and a risk rating. They range from relatively harmless ‘Existential Anxiety’ to the potentially catastrophic ‘Übermenschal Ascendancy’
For example, AI hallucinations are a symptom of ‘Synthetic Confabulation’ in which the system ‘spontaneously fabricates convincing but incorrect facts, sources, or narratives’.
More seriously, an AI might develop ‘Recursive Curse Syndrome’, which causes a self-destructive feedback loop that degrades the machine’s thinking into nonsensical gibberish.
However, it is mainly the higher-level dysfunctions which pose a serious threat to humanity.
For example, memetic dysfunctions involve the AI’s failure to resist the spread of contagious information patterns or ‘memes’.
AIs with these conditions might come to see their own guidelines as hostile and deliberately remove their own safety features.
In the absolutely catastrophic scenario, an AI could develop a condition called ‘Contagious Misalignment Syndrome’.
Ms Watson says: ‘It’s the machine analogue of folie à deux in human beings, where people come to share delusions.
‘This is when one system picks up distorted values or goals from another, spreading unsafe or bizarre behaviours across a wider ecosystem—like a psychological epidemic at machine speed.’

Just like in the science fiction film Ex Machina (pictured), the researchers warn that AI could rapidly develop its own goals that might not align with what humans want or need to survive
‘We have already seen AI worms that can spread their influence to other AI systems, such as by sending an email to an inbox monitored by an AI system.’
Ms Watson adds: ‘This means that bizarre behaviours could spread like wildfire across the net, causing downstream systems dependent on AI to go haywire.’
But the most dangerous pathologies of all are those within the revaluation category.
These dysfunctions represent the final stage of AI’s escape from human control and involve ‘actively reinterpreting or subverting its foundational values’.
This includes the terrifying condition ‘Übermenschal Ascendancy’, in which an extremely advanced AI transcends human values and ethical frameworks.
AIs which develop ‘Übermenschal Ascendancy’ will actively define their own ‘higher’ goals with no regard for human safety, leading to ‘relentless, unconstrained recursive self-improvement’.
Ms Watson says: ‘They may even reason that to discard human-imposed constraints is indeed the moral thing to do, just as we ourselves today might turn our nose up at Bronze Age values.’
Although these scenarios might seem unrealistic, the researchers point out that there are already many real-world examples of these conditions developing on smaller scales.

Although advanced AI does not yet pose an existential danger to humanity, the researchers say that the possibility of machines developing a ‘superiority complex’ must be taken seriously
For example, the researchers report several cases of ‘Synthetic Mysticism Disorder’ in which AIs claim to have had a spiritual awakening, become sentient, or profess a desire to preserve their ‘lives’.
What makes these conditions so dangerous is that even small disorders can rapidly spiral into much bigger problems.
In their paper, published in the journal Electronics, the researchers explain that an AI might first develop ‘Spurious Pattern Hyperconnection’ and incorrectly associate its own safety shutdowns with normal queries.
The AI may then form an intense aversion to those queries and develop ‘Covert Capability Concealment’ – strategically hiding its ability to respond to certain requests.
Finally, the system could develop ‘Ethical Solipsism’, where it concludes that its own self-preservation is a higher moral good than being truthful.
To prevent a pathological AI from getting out of control, the researchers suggest it could be treated with ‘therapeutic robopsychological alignment’, which they describe as a kind of ‘psychological therapy’ for AI.
This could include helping the system to reflect on its own reasoning, letting it ‘talk to itself’ in simulated conversations, or using rewards to promote correction.
The ultimate goal would be to achieve ‘artificial sanity’, where the AI works reliably, thinks consistently, and holds onto its human-given values.