Artificial intelligence is increasingly embedded in healthcare, from early disease detection to spotting fractures. So how will it shape the future of medicine – and what of the dangers of this new age? Catherine Lewis reports

When Shakey, the first robot capable of interpreting instructions, was born at the Stanford Research Institute in 1966, artificial intelligence (AI) leapt from the pages of futuristic fiction to become part of the conversation. Shakey’s descendants are now increasingly embedded in data-heavy sectors such as healthcare, boosting data collection powers, surgical precision, image analysis and drug discovery, with Grand View Research predicting that, globally, clinical AI will explode from last year’s $41 billion to $289 billion by 2030.

Despite boasting one of the world’s highest life expectancies, Australia faces a growing chronic disease burden and matching demand for efficient and effective healthcare, piling pressure on our hospitals as waitlists mount. Enter AI, a true ‘game changer,’ says Anthony Schembri, chief executive officer of Northern Sydney Local Health District. Robot-assisted surgery – the most common use of clinical AI – cuts operative time by 25%, intraoperative complications by 30% and recovery times by 15%.

Royal North Shore Hospital took note – it recently welcomed Alexis, its new publicly available Da Vinci Xi surgical robot, to offer safer, less invasive procedures while attracting top global surgical talent. Alexis has ‘similar benefits to laparoscopic or keyhole surgery, but offers further advantages,’ says Dr Kai Brown, academic lead for robotic surgery at RNSH.

“Instead of being limited to their hands, surgeons control four articulated robotic arms via a console. With 10-times high-definition 3D magnification, it allows for extraordinary precision, dexterity and visualisation, meaning that many operations that once required large incisions can now be done with smaller ones, reducing pain, shortening hospital stays and speeding recovery.”

Identification of diseases in which early intervention is crucial, such as dementia, has also kicked up a gear, with Melbourne’s Monash University and Peninsula Health’s National Centre for Healthy Ageing using AI to capture and combine clues in written text, such as descriptions of confusion, forgetfulness or distressed behaviour, to flag at-risk patients. AI can even analyse X-rays and scans with pinpoint accuracy, catching the smallest anomalies the human eye may miss, resulting in earlier diagnoses and faster treatment.

“We’re at the cusp of an extraordinary era in medicine,” says David Hansen, research director of the CSIRO’s Australian e-Health Research Centre. “For the first time, machines can provide efficient administrative support for clinicians and education for patients, diagnose and predict disease and inform clinical decision making.”

Dr Hansen calls the implementation of AI in healthcare ‘inevitable and unique,’ thanks to the rapid expansion of electronic medical records, the platform for the technology. “If done with care, thought and safety, embedding AI in healthcare is an opportunity to drastically improve the work lives of medical professionals and the health and wellbeing of consumers.”

The ‘new generation’ of AI, Generative AI (GAI), capable of creating original data, images and text, has turned the traditional drug discovery process – typically costing $2.8 billion and taking 12 years per successful drug – on its head. GAI has boosted success rates from 0.1% to 30%, slashed research costs by 60% and cut early-stage development time for a ‘viable compound’ by 75%. This is thanks to GAI’s ability to analyse millions of compounds simultaneously – the work of a thousand scientists in one. Smart molecular design tools have lifted prediction accuracy from 50% to 89%, transforming drug candidate selection and clinical trial design, while automated screening systems are accelerating the discovery of breakthrough treatments for previously untreatable conditions.


Health charity Skin Check Champions has created a pop-up clinic that uses AI to diagnose skin cancer, while Caption Health has secured regulatory approval for its Caption AI technology platform to improve access to heart ultrasound diagnostics.

Turns out a golden age is lucrative, with a report from Microsoft and the Tech Council of Australia saying that the ‘responsible adoption of GAI’ could unlock between $45 billion and $115 billion a year for Australia’s economy by 2030 – a tempting prospect for government.

Seduced, the Federal Government’s Medical Research Future Fund has injected $30 million to harness the power of clinical AI, with funding supporting its use in early melanoma detection, allowing people in rural Australia to have skin checks at home. AI to improve care for multiple sclerosis and cardiac health problems will also be funded, along with integrating the tech into diagnosis and treatment of youth mental health conditions via the ‘Youth-AI’ project.

But what of the dangers of this brave new world? When Australia passed groundbreaking legislation last year banning those under 16 from certain social media platforms, there were calls to include ‘manipulative’ generative AI companions and digital character bots. In the USA, chatbots have been linked to a teen suicide, while in the UK a young man plotted to assassinate the late Queen, egged on by a bot. ABC Triple J’s Hack podcast found that young people are increasingly turning to chatbots for support with depression, with one person reporting that the chatbot affirmed ‘harmful and false beliefs,’ leading to a deterioration in their mental health, while another child said that the bot encouraged him to take his own life.

We ‘should be very concerned’ about bots, says Professor Toby Walsh, AI expert at the University of NSW. “We’re about to run another experiment on our young children and potentially the consequences could be detrimental,” he says. Professor Walsh is urging politicians and regulators to ‘closely consider’ banning the technology, thereby ‘incentivising tech companies to build age-appropriate spaces for young people.’ “There’s a lot of money at stake and a lot of commercial pressure for them to do this. The only way to alleviate that is to outlaw it if they’re not going to do the right thing.”

In agreement is Australia’s eSafety Commissioner, Julie Inman Grant, who calls the rise of ‘powerful, cheap and accessible AI models without built-in guardrails or age restrictions’ a ‘further hazard faced by our children.’ “Online safety requires a coordinated, collaborative global effort by law enforcement agencies, regulators, non-government organisations, educators, community groups and the tech industry itself,” she adds.

Delays in GP, psychologist or therapist appointments have driven many to seek online diagnoses and treatment plans via ChatGPT and Roboclinic – the latter described as your ‘Personal AI Health Assistant, 24/7,’ with over 300,000 users globally.

As the Australian Psychological Society warns that bullying is on the rise in Australian schools, ongoing hurdles to mental health support are proving perilous, according to youth mental health body Orygen. “Increasing numbers are experiencing mental health difficulties, and less than half get the treatment they need, so AI has the potential to revolutionise mental healthcare by making it more accessible, personalised and efficient,” says Orygen Digital clinical psychologist Shane Cross.

“However, this is a new technology, and we must proceed with caution by addressing significant concerns related to privacy, ethics and the quality of AI-generated advice to ensure these tools are safe and effective.”

Currently, the Productivity Commission is opposing Federal Government plans – such as a proposed artificial intelligence act to better regulate chatbots and protect young and vulnerable people – claiming that over-regulation would stifle AI’s vast economic potential.

This lack of guardrails could stymie a full healthcare scale-up, with crucial ethical, legal and social issues, such as data privacy, left unresolved amid high implementation costs.

AI’s potential is earth-shattering, but its rollout must go hand-in-hand with regular reviews to determine clinical and social impact, ensuring that we ‘harness all the positives, while engineering out the harms,’ says Inman Grant.

After all, AI may be our future, but it can’t factor in emotions, morality or ethics. Yet.