
The brain implant revolution is here. Why is its inventor Tom Oxley terrified?

His invention has the potential to enhance our humanity or obliterate it entirely. Can Tom Oxley safeguard us from those with malicious intent who seek to control our thoughts? 

NATASHA ROBINSON 

 June 27, 2025 

“It’s just blowing me away, what is coming,” says Australian neurologist Tom Oxley, the co-inventor of the world’s most innovative brain-computer interface (BCI), a technology at the forefront of the global push towards cognitive artificial intelligence. “It’s phenomenal. The next couple of decades are going to be very hard to predict. And every day, I’m increasingly thinking that BCIs are going to have more of an impact than anyone realises.” 
Brain-computer interfaces are tiny devices implanted in or near the brain, where they pick up electrical signals and transmit them to an external computer or device, which decodes them algorithmically. The subject of a cover story in this Magazine in 2023, a BCI called the Stentrode, developed at the University of Melbourne by Oxley’s company Synchron, is delivered to the brain without open surgery, via the jugular vein. 
In 2022, Synchron, which initially received funding from the US Defense Advanced Research Projects Agency (DARPA) and the Australian Government, and later attracted investment from the likes of Bill Gates and Jeff Bezos, became the first company in the world to be approved by the US Food and Drug Administration to conduct a human trial of its BCI in the US – outpacing Elon Musk’s company Neuralink, which is operating in the same space. Since then the Stentrode has been implanted in 10 people with neurodegenerative disease, enabling them to control devices such as computers and phones with their thoughts. 
While Oxley and his company co-founder Nicholas Opie’s vision for the company remains dedicated to restoring functionality in those with paralysis, Oxley is realistic that the technology will in coming years have wider application and demand: an era of radical human enhancement. 
A seismic development in Synchron’s evolution occurred in March, when Oxley announced a partnership between the company and chipmaking giant Nvidia to build an AI brain foundation model that learns directly from neural data. The model, dubbed Chiral, connects Synchron’s BCI – developed in Melbourne – with Nvidia’s AI computing platform Holoscan, which allows developers to build AI streaming apps that can be displayed on Apple’s Vision Pro spatial computer, the tech giant’s early foray into extended reality. 
“A core human drive, encoded in our DNA, is to improve our condition,” says Oxley, a professorial fellow at the University of Melbourne’s department of medicine and now based in New York City. “For patients with neurological injury, this means restoring function. In the future, it seems inevitable that it will include enhancement [in the wider population]. BCIs will enable us to go beyond our physical limitations, to express, connect and create better than ever before. Neurotechnology should be a force for wellbeing, expanding human potential and improving quality of life.”
But the collision of the development of BCIs with the now-supercharged development of AI has ramifications almost beyond imagining. Currently, AI systems such as ChatGPT learn from data: large language models are neural networks trained on text drawn from across the internet and digitised books. 
The prospect of AI platforms accessing data streams directly out of the brain opens up a future in which our private thoughts could be made transparent. While the US Food and Drug Administration is tightly controlling the application of AI in the BCIs it will assess and approve, the prospect of these devices directly accessing neural data nevertheless opens up great potential for surveillance, commercial exploitation, and even the loss of what it means to be human. 
“Liberal philosophers John Stuart Mill and John Locke and others, but even back further to ancient Eastern philosophers and ancient Western philosophers, wrote about the importance of the inner self, of cultivating the inner self, of having that private inner space to be able to grow and develop,” says Professor Nita Farahany, a leading scholar on the ethical, legal and social implications of emerging technologies.
She is working closely with Oxley on establishing an ethical framework for the future of ­neurotechnology. “It’s always been one of the cornerstones of the concept of liberty. The core concept of autonomy, I think, can be deeply ­enabled by neurotechnology and AI, but it also can be incredibly eroded.
“On the one hand, I think it’s incredible to enable somebody with neurodegenerative disease – who is non-verbal, or has locked-in syndrome – to reclaim their cognitive liberty and their self-determination, and to be able to speak again. I think that’s incredibly exciting. On the other hand, I find it terrifying. 
“How do we make sure the AI interface is acting with fidelity and truth to the user and their preferences?” 
Two decades ago, American inventor and futurist Ray Kurzweil predicted a moment in human history that he dubbed the “singularity”: a time when AI would reach such a point of advancement that a merger of human brains and the vast data within cloud-based computers would create a superhuman species. Kurzweil has predicted the year 2029 as the point at which AI will reach the level of human intelligence. The combination of natural and artificial intelligence will be made possible by BCIs which will ultimately function as nanobots, Kurzweil recently said in an interview; he reckons human intelligence will be expanded “a millionfold”, profoundly deepening awareness and consciousness. 
Billionaire Elon Musk – whose company Neuralink is also developing a BCI – believes AI may surpass human intelligence within the next two years. Musk, who has previously described AI as humanity’s biggest existential threat, has warned of catastrophic consequences if AI gets out of control. He has stressed that AI must align with human values, and is now positioning BCIs as a way to mitigate the risks of artificial superintelligence. He believes BCIs hold the key to ensuring that the new era of AI – in which the supertechnology could become sentient and even menacing – does not destroy humanity. Musk’s vision for Neuralink’s BCI is to enhance humankind to offset the existential risks of artificial intelligence – a theory dubbed “AI alignment”. It’s an outlook in step with transhumanist philosophy, which holds that neurotechnology is the gateway to human evolution, and that technology should be used to transcend our physical and mental limitations.

But Oxley is at odds with Musk on AI alignment – and believes that using BCIs as a vehicle to ­attempt to match the power of AI is ethically problematic. He’s focused instead on laying the groundwork to ensure the future of AI does not undermine fundamental human liberty.
“BCIs can’t solve AI alignment,” Oxley says. “The problem isn’t bandwidth, it’s behavioural control. AI is on an exponential trajectory, while human cognition – no matter how enhanced – remains biologically constrained. AI safety depends on governance and oversight, not plugging into our brains. Alignment must be addressed in a paradigm where humans will never fully comprehend every model output or decision. This represents the grand challenge of our time, yet it is not one that BCIs will fix.”
Almost two years after I first reported on the development of Synchron’s pioneering, non-invasive BCI, I’m sitting down with Oxley at a cafe in Sydney; he’s on a brief trip home from New York to see family. It’s difficult to reconcile his achievements with the unassuming, youthful 44-year-old sitting opposite, as he grapples with the enormous weight of responsibility he now feels around his invention. 
“Starting to understand that there are going to be mechanisms of subconscious thought process detection enabled by BCIs has made me realise that there is a danger with the technology,” Oxley says. “I am cautiously optimistic about the trajectory in the US, which I think is going to be gated by the FDA [Food and Drug Administration], which is kind of playing a global role [in regulating safety]. But there’s work to be done. Algorithms already manipulate human cognition. Integrating them directly into our brains puts us at risk of AI passively shaping our thoughts, desires and decisions, at a level we may not even perceive.
“I think this technology is just as likely to make us vulnerable as it is to help us, because you expose your cognitive processes that up until this point have been considered sacrosanct and very private. The technology is going to enable us to do things that we couldn’t previously do, but it’s going to come with risk.” 
The magnitude of that risk, and the burden of conscience and intellect that comes with being a protagonist in opening up what AI pessimists fear could be a dystopian future, has prompted Oxley to shift gear from entrepreneur and inventor to ethical steward of a cutting-edge tech company. He’s at the forefront of worldwide efforts to embed the right to cognitive liberty within a set of governing principles for the future of neurotechnology. It’s an extraordinary change of direction for the neurologist, whose career as an inventor was initially focused purely on improving the lives of patients who were paralysed. Now he finds himself leading what is essentially a burgeoning tech company valued at about $US1 billion. 
“I did have a sense starting out that what we were doing was going to be hugely impactful,” he says. “I was looking to commit my intellectual, academic life to something that I thought was going to be impactful on a big scale. But the way it’s morphing and evolving now is quite humbling and exciting.
“I had an epiphany a couple of months ago that probably the most important thing I can do right now is to try and get the ethics of all of this right. That’s where I find myself right now. It’s in my dreams. It’s in my subconscious. It’s become probably the most important thing that I want to do.”
Cognitive liberty is a term popularised by Farahany, who says the concept of rights and freedoms embedded within liberal philosophy and democratic governance must be urgently updated and reimagined in the digital era. 
“The brain is the final frontier of privacy. It has always been presumed to be a space of freedom of thought, a private inner sphere, a secure entity,” Farahany says. “If you think about what the concept of liberty has meant over time, that privacy and the importance of the cultivation of self is at the core of the concept of human autonomy. 
“The right to cognitive liberty in the digital age is both the right to maintain mental privacy and freedom of thought, and the right to access and change our brains if we choose to do so. If we have structures in place, like a base layer that’s just reading neural data and a guardian layer that is adhering to the principles of cognitive liberty, we can align technologies to be acting consistent with enabling human flourishing. But if we don’t, that private inner space that was held sacred from the earliest philosophical writings to today – the capacity to form the self – I think will collapse over time.” 
The future of AI-powered neurotechnology is already moving apace. Nvidia – which makes the chips that power AI systems worldwide, including OpenAI’s, and which now has a market capitalisation of $A5.47 trillion, closely rivalling Microsoft at the top of the leaderboard of the world’s largest companies – in January announced its predictions for the future of AI in healthcare. It named digital health, digital biology including genomics, and digital devices including robotics and BCIs as the most significant emerging technologies. That reflected bets already placed by the market: the BCI sector is now powered by at least $33 billion in private investment. 
Neural interface technologies are already hitting the consumer market, even before BCIs come to fruition. Apple has patented a next-generation AirPods Sensor System that integrates electroencephalogram (EEG) brain sensors into its earphones. The devices’ ability to detect electrical signals generated by neuronal activity, which would be transmitted to an iPhone or computer, opens up the ability to interact with technology through thought control, and would give users insights direct from the brain into their own mental health, productivity and mood. Meta is working on wristwatch-embedded devices that use AI to interpret nerve impulses via electromyography, which would enable the wearer to learn about, adapt and interact with their own mental state. 
But the prospect of AI accessing neural data directly via BCIs is a whole new ball game. Transmitting neural data direct from the brain to supercomputers means an individual’s every thought – even subconscious thoughts one is not even aware of – could be made transparent, akin to uploading the mind. Beyond that, our thoughts could be manipulated by powerful algorithms that open up the possibility of a terrifying new era of surveillance capitalism or even coercive state control. “Our last fortress of privacy is in jeopardy,” writes Farahany in her seminal book The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. “Our concept of liberty is in dire need of being updated.” 
Farahany describes the early neurotech devices that are beginning to hit the market as “harbingers of a future where the sanctity of our innermost thoughts may become accessible to others, from employers to advertisers, and even government actors”.
“This is how we find ourselves at a moment when we must be asking not just what these technologies can do, but what they mean for the unseen, unspoken parts of our existence,” Farahany writes in her book. “This is about more than preventing unwanted mental intrusions; it is a guiding principle for human flourishing on the road ahead. We should move quickly to affirm broader interpretations of self-determination, privacy and freedom of thought as core components of cognitive liberty.” 
The rise of social media, with its rampant algorithm-enabled commercial exploitation, surveillance without consent and devastating impacts on human mental states, has already provided a glimpse of the consequences if the world does not strike a critical balance between the positive potential of AI-powered neurotechnology and its risks. Human attention spans have been shredded by social media models that exploit dopamine-driven addiction to likes and attention; the mental health of many young people has deteriorated as a consequence, and data has been harvested and monetised on a massive scale. Oxley is determined not to let BCIs go in the same direction. 
“The dopaminergic drive within a human makes us very vulnerable,” says Oxley. “And if AI opens up to market forces and is able to prey on the weakness of humans, then we’ve got a real problem. There is a duty of care with this technology.” 
Oxley is now co-chairing, with Farahany, the newly formed Global Future Council on Neurotechnology, which convenes more than 700 experts from academia, business, government, civil society and international organisations as a time-bound think-tank. The Council – an initiative of the World Economic Forum – is concerned with ensuring the responsible development, integration and deployment of neurotechnologies, including BCIs, to unlock new avenues for human advancement, medical treatment, communication and cognitive augmentation.

UNESCO is also drafting a set of cognitive AI principles, while some Latin American countries have already moved to direct legislative regulation. 

Oxley has now put forward his own vision for addressing the existential risks to human autonomy, privacy and the potential for discrimination. He has structured his neurotechnology ethical philosophy around three pillars: Human Flourishing, Cognitive Sovereignty and Cognitive Pluralism.
“Innovation should prioritise human agency, fulfilment, and long-term societal benefits, ensuring that advancements uplift rather than diminish human dignity,” Oxley wrote earlier this year in a LinkedIn post outlining his ideas. “Regulation should enable responsible progress without imposing unnecessary restrictions that limit personal autonomy or access to life-enhancing technologies. If we get it right, BCIs would become a tool for human expression, connection and productivity, enabling humans to transcend physical limitations.
“Individuals must have absolute control over their own cognitive processes, free from external manipulation or coercion. Privacy and security are paramount: users must own and control their brain data, ensuring it is protected from exploitation by corporations, governments, or AI-driven algorithms. BCIs must prevent subconscious or direct co-option and safeguard against covert or overt AI influence in commerce and decision-making. This may require decentralised, user-controlled infrastructure to uphold cognitive autonomy. Above all, BCIs should enhance personal agency, not erode it.” 
If cognitive sovereignty cannot be guaranteed, AI-driven coercion and persuasion looms as a menacing prospect. “Advanced algorithms could exploit subconscious processes, subtly shaping thoughts, decisions and emotions for commercial, political or ideological agendas,” Oxley says. Rather, BCIs should enhance human agency, ensuring AI is “assistive, not intrusive… empowering individuals without shaping their decisions or subconscious cognition”. 
Neither Oxley nor Farahany is in favour of centralised regulation. They favour “decentralised cognitive autonomy ... a user-controlled, secure ecosystem [which] ensures that thoughts, choices and mental experiences remain free from corporate or governmental influence.” 
Oxley is also wary of the rise of “a singular model of intelligence, perception or cognition” that could promote tiered class systems, the rise of a “cognitive elite”, or deepen social inequalities.
“Cognitive diversity, much like neurodiversity, must be protected and upheld,” he says. “This includes addressing cultural discrimination between users and non-users of neurotechnology, particularly as enhancements become more widespread. Access to neurotechnologies must be democratised, ensuring that enhancements do not become a tool of exclusion but a potential means of empowerment for all.
“BCIs will either empower individuals or risk becoming tools of control. By prioritising human flourishing, cognitive sovereignty and cognitive pluralism, we can help ensure they enhance autonomy and creativity. There is much work ahead,” Oxley says. 
That work must begin, says Farahany, with a worldwide collective effort to reshape core notions of liberty for the modern age. 
“Having an AI that auto-completes our thoughts, that changes the way we express ourselves, changes our understanding of ourselves as well,” she says. “The systems that are sitting at the interface between this merger of AI and BCIs don’t have our empathy, don’t have our history, don’t have our cultural context and don’t have our brains, which have been built to be social and in relation to each other. And so I worry very much about how much of what it means to be human will remain as we go forward in this space. 
“How much of what it means to be human will remain is up to us, and how we design the technology and the safeguards that we put into place to really focus on enhancing and enabling human self-determination. But I think that unless we’re thoughtful, that isn’t an inevitable outcome. When our private inner sphere becomes just as transparent as everything else about us, you know, will we simply become the Instagram versions of ourselves?” 
Oxley remains confident that we can keep the radical advancements that he is facilitating in check. “I think that if you look back at history, humanity has been through multiple periods of revolution and there was always this fear that things were about to go downhill, and they didn’t,” he says. “I think we stand on the precipice of the potential to expand the human experience in an incredibly powerful way. The thing that I’m most excited about with this technology is that it could help us overcome a lot of pain and suffering, and especially the human challenge of expressing our own experience. I think BCIs will ultimately enhance what it means to be human.”