
The rise of generative AI has prompted AI ethicists to propose a framework to mitigate the risks of using the evolving technology in healthcare. This coincides with the CEO of OpenAI, the maker of ChatGPT, urging US lawmakers to start regulating AI to keep humans safe.
Science fiction author Isaac Asimov introduced his Three Laws of Robotics in his 1942 short story “Runaround.” He died in 1992, long before the rise of generative artificial intelligence in recent years.
Generative AI includes algorithms such as ChatGPT or DALL-E, which can create new content, including text, images, audio, video, and computer code, from the data they have been trained on. Large language models (LLMs), a key component of generative AI, are neural networks trained on large amounts of unlabeled text using self-supervised or semi-supervised learning.
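To make the self-supervised idea concrete, here is a minimal Python sketch: the “label” for each token is simply the token that follows it in raw, unlabeled text, so no human annotation is required. The toy bigram counter below is an illustrative assumption for this article, not how a production LLM works (real systems train transformer networks at vastly larger scale).

```python
# Minimal sketch of self-supervised learning from unlabeled text:
# each token's training target is simply the next token in the corpus,
# so no human-written labels are needed.
from collections import Counter, defaultdict

corpus = "the patient was treated and the patient recovered"
tokens = corpus.split()

# Count next-token statistics: each token's "label" is its successor.
transitions = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    transitions[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent next token observed during 'training'."""
    candidates = transitions.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(predict_next("the"))      # -> "patient" (seen twice)
print(predict_next("patient"))  # -> "was" (ties break by first occurrence)
```

The same principle, predicting the next token from raw text, scales up to the models behind ChatGPT; only the network architecture and the volume of data change.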
The capabilities of generative AI are growing exponentially. In healthcare, it has been used to predict patient outcomes by learning from large patient datasets, diagnose rare diseases with incredible accuracy, and pass the United States Medical Licensing Examination (USMLE), achieving a score of 60% with no prior study.
The potential for AI to enter healthcare and replace doctors, nurses, and other health professionals has prompted AI ethicist Stefan Harrer to propose a framework for using generative AI in medicine.
Harrer, chief innovation officer at the Digital Health Cooperative Research Centre (DHCRC) and a member of the Coalition for Health AI (CHAI), said the problem with using generative AI is its ability to generate content that is convincingly false, inappropriate, or dangerous.
“The essence of effective knowledge retrieval is asking the right questions, and the art of critical thinking rests on one’s ability to probe responses by assessing their validity against models of the world,” said Harrer, who is based in Melbourne, Australia. “LLMs cannot perform these tasks.”
Harrer thinks generative AI has the potential to transform healthcare, but it hasn’t yet. To that end, he proposed an ethics-based regulatory framework with 10 principles that he said could mitigate the risks of generative AI in healthcare:
- Design AI as a complementary tool that augments the capabilities of human decision makers but does not replace them.
- Design AI to generate metrics on performance, usage, and impact to explain when and how AI is being used to assist decision making and scan for potential bias.
- Design AI that is based on and will adhere to the value system of the target user group.
- State the purpose and use of AI from the very beginning of concept or development work.
- Disclose all data sources used to train the AI.
- Design AI to clearly and transparently label AI-generated content (a minimal sketch of such labeling follows this list).
- Regularly audit AI against data privacy, security, and performance standards.
- Record and share audit results, help users understand AI capabilities, limitations, and risks, and improve AI performance by retraining and updating algorithms.
- When hiring human developers, ensure fair work and safe work standards are applied.
- Establish legal precedents that clearly define when data can be used for AI training, and establish copyright, liability, and accountability frameworks to manage the implications of training data, AI-generated content, and human decisions made using that data.
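As an illustration of the labeling principle above, the sketch below shows one way AI output could carry a machine-readable provenance label. The LabeledContent record and label_generated_content helper are hypothetical names invented for this example; Harrer’s framework prescribes the principle, not any particular implementation.

```python
# Hypothetical illustration of transparently labeling AI-generated content:
# model output is wrapped in a record that discloses its machine origin.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledContent:
    """Hypothetical record pairing AI output with a provenance label."""
    text: str
    model_id: str
    generated_at: str
    ai_generated: bool = True
    disclaimer: str = "This content was generated by an AI system."

def label_generated_content(text: str, model_id: str) -> LabeledContent:
    """Attach a machine-readable, human-visible label to AI output."""
    return LabeledContent(
        text=text,
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
    )

record = label_generated_content("Draft patient summary ...", "demo-llm-1")
print(f"[{record.disclaimer}] model={record.model_id} at={record.generated_at}")
```

Attaching the label at the point of generation, rather than asking downstream users to add it, is one design choice that would make the disclosure hard to strip out accidentally.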
Interestingly, Harrer’s framework coincides with OpenAI CEO Sam Altman’s call for US lawmakers to introduce government regulation to head off the potential risks AI poses to humans. Altman, who co-founded OpenAI in 2015 with backing from Elon Musk, recommends that governments introduce licensing and testing requirements before more powerful AI models are released.
In Europe, an AI bill is due to be voted on in the European Parliament next month. If passed, the legislation could ban biometric surveillance, emotion recognition, and some artificial intelligence systems used in policing.
Harrer’s fairly general framework could be applied to many workplaces where there is a risk that AI will replace humans. It seems to come at a time when people, even those responsible for creating technology, are asking the world to pause.
Is healthcare at greater risk than other employment sectors? Would such a framework be beneficial, and more importantly, would it actually reduce risk given the rate at which AI is improving? Only time will provide us with answers to these questions.
The paper was published in the journal eBioMedicine.