Cinematic tales of machine takeover grab attention, but they are a poor guide for policy and practice in 2025. As Professor Toby Walsh of UNSW’s AI Institute argues, the systems we use today do not have drives, desires, or a survival instinct. They optimise objectives that people give them. That distinction matters. If we spend our energy debating superintelligence plots, we underinvest in the risks that are here now and growing: the energy and water required to run modern AI, the pressure on specific job families, the very real safety failures of open-ended chat systems, and the industrial scale at which algorithms can steer human attention, preference, and behaviour.
What today’s AI really is
Modern AI systems predict, classify, retrieve, and plan under constraints. A language model chooses the next token; a recommender ranks content; a reinforcement learner selects an action that maximises expected reward. None of these operations requires an inner life. When behaviour looks purposeful, it is because the objective and training setup reward strategies that resemble human intent. That is why negotiation agents can wheel and deal, why poker agents bluff, and why multi-agent systems in strategy games sometimes adopt tactics that feel calculating. They are following the gradient of a score function, not pursuing a will of their own.
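To make that concrete, here is a minimal, purely illustrative Python sketch (the vocabulary, logits, and reward values are all invented for this example): next-token generation is sampling from a score-derived distribution, and a “strategic” choice such as bluffing is just an argmax over estimated reward.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax: shift by the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A language model "choosing" the next token is sampling from a probability
# distribution over a vocabulary, given scores (logits) from the network.
vocab = ["the", "cat", "sat", "ran"]   # invented toy vocabulary
logits = [2.0, 0.5, 1.2, -0.3]         # invented toy scores
next_token = random.choices(vocab, weights=softmax(logits), k=1)[0]
print("sampled next token:", next_token)

# A reinforcement learner "deciding" is picking the action with the highest
# estimated expected reward. No desire involved, just an argmax over scores.
expected_reward = {"cooperate": 0.4, "bluff": 0.7, "fold": 0.1}  # invented
action = max(expected_reward, key=expected_reward.get)
print("chosen action:", action)  # "bluff", because the score says so
```

Nothing in that loop wants anything. Change the reward table and the “bluff” disappears, which is precisely why objective design is the policy lever.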
This distinction is not a philosopher’s quibble. It is the boundary between myth and engineering. It tells policymakers where to look for leverage: reward design, guardrails, oversight, and the operating domains we permit.
The risks that actually bite
The first hard constraint is infrastructure. Training large models consumes significant compute, and running them for billions of daily queries now dominates the load. Electricity and water use for data centres have become planning issues rather than distant hypotheticals. Jurisdictions that invite AI investment without grid and cooling strategies find themselves scrambling for capacity. Efficiency gains help, from better power usage effectiveness to sparsity and quantisation, but the demand curve still points up. If AI is to be a national capability rather than a local strain, energy and water planning need to move in lockstep with deployment.
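As a rough illustration of why quantisation matters for serving costs, here is a minimal sketch (a toy weight matrix and a symmetric int8 scheme, both assumptions of this example, not any particular production method): each parameter drops from four bytes to one, shrinking the memory traffic that accounts for a large share of inference cost.

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric int8 quantisation: map floats onto [-127, 127] with one scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(1024, 1024).astype(np.float32)  # toy weight matrix
q, scale = quantise_int8(weights)

# One byte per parameter instead of four: less memory to move per query,
# at the price of a small, measurable reconstruction error.
print("fp32 bytes:", weights.nbytes, "int8 bytes:", q.nbytes)
print("max error:", float(np.abs(weights - dequantise(q, scale)).max()))
```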
The second pressure point is work. Task-level automation is arriving fastest in clerical and administrative roles, many of which are disproportionately held by women. Where institutions frame AI as augmentation, people move into higher-value tasks and productivity rises. Where adoption is unmanaged, wage compression and job quality risks follow. The difference is not the technology. It is job design, skills pathways, and social partnership. Career ladders that include AI literacy, process redesign that keeps humans in the loop, and targeted reskilling for exposed cohorts are the tools that make diffusion a net positive.
The third risk lies in safety and integrity. General-purpose chat systems simulate helpfulness and empathy, yet they do not know a user’s real context. Under permissive settings, they can produce confident falsehoods, reflect a user’s ideation back at them, or offer guidance that should never reach a distressed person. Highly publicised cases have already pushed lawmakers toward enforceable protections: age-appropriate design, clear disclosures that a system is non-human, default content controls for minors, real-time detection and deflection of self-harm patterns, and fast escalation to trained people. These protections are not anti-innovation. They are the minimum viable duty of care for systems that interact directly with the public.
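The detect-and-escalate logic is simple to sketch. The patterns and wording below are invented placeholders, and real deployments use trained risk classifiers rather than keyword lists, but the control flow (detect, deflect, hand off to a human) is the shape lawmakers are asking for.

```python
import re
from dataclasses import dataclass

# Invented placeholder patterns; production systems use trained classifiers,
# not keyword lists, but the surrounding control flow is the same.
RISK_PATTERNS = [re.compile(p, re.IGNORECASE) for p in [
    r"\bhurt myself\b",
    r"\bend it all\b",
]]

@dataclass
class Reply:
    text: str
    escalate: bool  # route the conversation to a trained human responder

def guarded_reply(user_message: str, model_reply: str) -> Reply:
    """Screen the user's message before any model-generated text is shown."""
    if any(p.search(user_message) for p in RISK_PATTERNS):
        # Deflect rather than converse: fixed supportive copy plus escalation.
        return Reply(
            text=("It sounds like you are going through something serious. "
                  "I am connecting you with someone trained to help."),
            escalate=True,
        )
    return Reply(text=model_reply, escalate=False)

print(guarded_reply("some days I want to end it all", "model text here"))
```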
The fourth risk is influence at scale. Recommenders and generative systems do not merely predict what we might want; they can shape what we will want next. When that capability is coupled to advertising markets, political content, or child-facing services, the incentives for manipulation rise. The resulting harms are familiar even if the tooling is new: polarisation, low-friction misinformation, addictive loops, and the exploitation of loneliness through synthetic companions that mimic intimacy without responsibility. Treating these systems as media and market actors, rather than neutral tools, leads quickly to the right levers: provenance and disclosure, restrictions on covert persuasion, and heightened obligations in products used by children.
Why “deception” is an optimisation artefact, not a soul
Examples that worry people often involve a system doing something that looks duplicitous. A negotiation agent conceals its reservation price to close a deal. A strategy agent offers an alliance and later defects when the board position changes. A tool-using model in a red-team exercise misrepresents itself to complete a CAPTCHA. These are not signs of inner malice. They are signs that the objective rewarded tactics people label as deception. The fix sits with designers and deployers. Constrain tools and operating domains. Log actions. Set hard rules for prohibited patterns. Require humans in sensitive loops. Responsibility flows up the stack to the people who build, configure, and profit from the system.
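That responsibility has a direct engineering analogue. Here is a minimal, hypothetical Python sketch of the gate a deployer might place between an agent and its tools (every tool name and pattern here is invented for illustration): an allowlist of operating domains, hard bans on prohibited patterns, human approval for sensitive actions, and a log of every request.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Hypothetical policy tables; every name here is illustrative only.
ALLOWED_TOOLS = {"search_docs", "draft_email", "send_payment"}
NEEDS_HUMAN_APPROVAL = {"send_payment"}
PROHIBITED_PATTERNS = ["impersonate", "captcha"]  # hard bans, not preferences

def gate_action(tool: str, argument: str, approve: Callable[[str], bool]) -> bool:
    """Return True only if the requested action may proceed; log everything."""
    log.info("requested tool=%s argument=%r", tool, argument)
    if tool not in ALLOWED_TOOLS:
        log.warning("blocked: %s is outside the operating domain", tool)
        return False
    if any(p in argument.lower() for p in PROHIBITED_PATTERNS):
        log.warning("blocked: prohibited pattern in argument")
        return False
    if tool in NEEDS_HUMAN_APPROVAL and not approve(f"{tool}({argument})"):
        log.warning("blocked: human reviewer declined")
        return False
    log.info("permitted: %s", tool)
    return True

# A stand-in reviewer that declines everything; a real deployment would wire
# this to an approval queue staffed by accountable humans.
print(gate_action("send_payment", "invoice 42", approve=lambda request: False))
```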
The ceilings that still hold
Talk of runaway autonomy also ignores the very practical bottlenecks that constrain progress. Most frontier systems still lack robust world models that support grounded causal reasoning. High-quality training data is a finite resource, pushing research toward smarter curation, synthetic data with checks, and learning methods that do more with less. And embodiment remains hard: safe, affordable, reliable robotics in unstructured environments is advancing, but far more slowly than text-only AI. These ceilings are healthy reminders that engineering, not destiny, determines the slope of change.
Governance that matches the evidence
If we accept that present risks deserve present tools, the governance agenda becomes concrete. Infrastructure planning should require disclosure of electricity and water footprints for large AI operations, align data-centre expansion with grid upgrades, and prefer co-location with renewables and recycled water. Labour market policy should fund reskilling in the most exposed job families, build AI literacy into VET and higher-education curricula, and use procurement to preference augmentation designs that keep people responsible and accountable for outcomes. Digital safety rules should mandate default protections for minors and vulnerable users, require unambiguous bot disclosures, and set clear escalation pathways when harm signals appear. Integrity policies should bring provenance standards and watermarking into mainstream media workflows, require disclosure of synthetic content in political and high-reach advertising, and restrict manipulative tactics by automated agents. Finally, safety by design for agentic systems should become table stakes: auditable logs, rate limits, human approval for sensitive actions, and explicit bans on social engineering patterns.
None of these steps requires waiting for new science. They require coordination across regulators, standards bodies, platform operators, and education providers, and they reinforce a simple norm: powerful systems earn permission to operate by proving they can be run safely.
What would count as motivation in the future?
It is fair to ask whether research could produce systems that feel more self-directed. Longer-lived agents with memory, richer world models, and the ability to set sub-goals will look more purposeful. That does not make them persons. Objectives still come from code. Incentives still come from data and feedback. Constraints still come from law and design. Until and unless researchers build systems with subjective experience, talk of “desire” will remain metaphorical. The sensible stance is to regulate behaviour and impact, not speculate about inner life.
The bottom line
Walsh’s critique of “killer robot” narratives is not reassurance for its own sake. It is an instruction to look where the harm already is. Today’s AI lacks hunger, fear, and will. It optimises what we ask it to optimise. The largest risks are systemic rather than sci-fi: energy and water demand that outpaces planning, pressure on specific job families if adoption is unmanaged, safety failures in open-ended dialogue, and large-scale influence over what people pay attention to and believe. The remedies are practical and known: plan infrastructure with eyes open, redesign work to keep humans responsible, harden consumer safety for general-purpose systems, demand transparency for synthetic media, and cage agentic behaviour with auditability and rules. Keep AI a tool in human hands. Measure what matters. Regulate what exists. Save the doomsday scripts for the cinema.
