Artificial Intelligence in Healthcare

Dialogue Meeting for Patients and Relatives

On the 6th of November 2025, the Centre for Clinical Artificial Intelligence (CAI-X), on behalf of the EU projects AICE, EUCanScreen, and PREMIO COLLAB, held a dialogue meeting for patients and relatives about artificial intelligence (AI) in healthcare. The purpose of the meeting was to gather participants’ perspectives, expectations, and concerns regarding the use of AI in the future of cancer diagnostics and treatment. Below are the key points from the dialogue meeting.

1. Safety and Trust: What should AI be able to explain, and who should take responsibility?

  • Need for transparency – Patients and relatives want to know when AI is being used.
  • Explainability builds trust – Many emphasized the importance of explaining, in accessible language, how AI contributes to decisions.
  • Clear allocation of responsibility – The ultimate responsibility must remain with the doctor, and AI should not independently make clinical decisions.
  • Communicate concretely – It is essential to avoid technical jargon and instead describe what the technology actually does in practice.
  • Trust through quality and validation – Patients want assurance that AI solutions are tested and quality-assured under Danish and European conditions.

2. Data: How can consent and control be made real and understandable?

  • Genuine consent requires clarity – Consent must be communicated in clear and understandable language, including information about the purpose of data use and who has access to the data.
  • Differentiated consent – There should be options to choose between different levels of consent, e.g., data use for treatment only or also for research.
  • Cross-border and project sharing – Patients should be informed about, and given insight into, how Danish data is used in EU collaborations.
  • Data security and responsibility – There is significant trust in hospitals as guarantors of data security, even when AI is used.
  • Balancing benefit and protection – Patients expressed a willingness to share data if AI can clearly improve diagnostics and treatment.

3. Understanding: What is the ideal balance between AI and clinicians?

  • The clinician’s role is central – AI should be seen as a support tool, not a replacement for the clinician.
  • Combining technology and humanity is seen as ideal – AI can enhance quality and accelerate treatment, while the clinician ensures empathy and understanding.
  • Patient-friendly AI responses should be physician-verified – Physician approval of AI-generated responses provides reassurance and builds credibility.
  • Familiarity fosters trust – Understanding the purpose of AI reduces skepticism and increases trust.
  • Humans and machines as partners – AI can ease the workload of clinicians and free up time for patients when used thoughtfully.

Summary and Next Steps

Patients and relatives expressed both hope and trust that AI can contribute to earlier diagnoses, more precise treatments, and improved patient journeys, as long as the technology remains human-centered. CAI-X will incorporate participants’ input into ongoing work on developing and implementing AI solutions that empower patients and ensure transparency, accountability, and comprehensible communication throughout the healthcare system. The insights from the dialogue meeting will inform future efforts within the EU projects.

Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Health and Digital Executive Agency (HADEA). Neither the European Union nor HADEA can be held responsible for them.

This project has received funding from the European Union’s EU4HEALTH Programme under Grant Agreement No. 101162959.