09/09/2025
📢Meet the authors of our latest editorial!📢
Dr. Claudia Giorgetti
✔️SHORT BIO
https://ojs.umw.edu.pl/files/acem/CV_Giorgetti-19_250828_170538.pdf
Prof. Arianna Giorgetti
✔️SHORT BIO
https://ojs.umw.edu.pl/files/acem/CV%20AG%20%281%29.pdf
Prof. Rafael Boscolo-Berto
✔️SHORT BIO
https://ojs.umw.edu.pl/files/acem/CV%20RBB%20%281%29.pdf
✔️ READ THE EDITORIAL
Establishing new boundaries for medical liability: The role of AI as a decision-maker
https://advances.umw.edu.pl/en/ahead-of-print/208596/
✔️MINI-INTERVIEW
1. Assigning authorship to an AI tool is not practiced in the case of scientific papers, while – as you clearly outline in your paper – there is such a need in the case of clinical decision support systems (CDSSs). What are the key differences between these two instances of AI use?
Authorship of scientific works is grounded in the recognition of creativity: since AI is generally not considered capable of genuine creativity, authorship is not attributed to it, so human authors remain fully responsible for the content. CDSSs, by contrast, can directly influence clinical decisions with a significant and direct impact on patient health. Here, the issue is not authorship in the sense of intellectual credit or copyright, but rather the need to attribute responsibility to the developers of CDSSs, where appropriate, to ensure trust, safety, and accountability. The goal is to delineate roles, responsibilities, and liability where AI contributes to or influences clinical decision-making.
2. In your opinion, is it a real danger that medical facilities’ managers will push medical professionals to rely more and more on AI to enhance efficiency at the expense of real oversight over AI tools?
Medical facilities’ managers are undoubtedly under pressure to reduce costs, especially given the growing demand for healthcare services and the shortage of professionals, so it seems likely that, where possible, they may encourage greater reliance on AI. On the other hand, maintaining robust oversight might be prioritized if a significant risk of liability is perceived. Improper over-reliance on AI by inexperienced and undertrained healthcare professionals certainly represents a concrete risk that can lead to serious health consequences. To prevent the negative effects of AI tools introduced into the healthcare system, education and training of professionals is a key element, one that may make it possible to gain time efficiency without compromising adequate oversight.
3. What is the role of forensic medicine in cases of suspected malpractice or error involving the use of AI in medicine?
The role of forensic medicine in any case of suspected malpractice, including cases involving the use of AI in medicine, is to determine whether an error was committed, i.e., whether it was preventable and avoidable or not, and whether harm or death was causally linked to that error. The involvement of AI introduces greater complexity into the landscape of liability because AI functions as a "new actor" whose responsibility may be full or partial, complicating how accountability is attributed among healthcare providers, AI developers, and others. Moreover, legal frameworks are still evolving to address these nuanced responsibilities, emphasizing the need for interdisciplinary collaboration and transparency in AI use.