WHO Issues Guidance on Healthcare AI Ethics

The World Health Organization (WHO) has published comprehensive recommendations on the ethical governance of emerging large multi-modal models (LMMs), which are increasingly used in healthcare through applications such as symptom-analysis chatbots. Their rapid, largely uncontrolled adoption makes establishing proper oversight urgent.

Key Healthcare Application Areas

WHO identified five broad LMM application areas: clinical diagnosis and care, patient self-assessment of symptoms, clerical and administrative healthcare tasks, medical education via simulated encounters, and scientific research and development.

Caution Against Risks

Despite promising use cases, LMMs pose risks such as producing false, inaccurate, or biased statements that could improperly guide health decisions. Poor-quality or unrepresentative training data may also perpetuate existing disparities.

Other pressing considerations include affordability, over-reliance on automation, and cybersecurity threats, given the sensitivity of the patient data these systems handle.

Stakeholder Collaborations Needed

WHO advocated collaborative governance among health-system stakeholders, technology firms, civil society, and patients across all stages of LMM development and deployment.

Government Regulations and Investments

It advised governments to enact laws ensuring that LMMs used in healthcare meet ethical obligations, and to fund public computing infrastructure that adheres to fairness principles. It also recommended assigning oversight bodies to vet AI intended for healthcare use.

Responsible Industry Practices

For developers, the principal recommendations include engaging diverse potential users and stakeholders early in the design process and deliberately designing AI systems to perform well-defined tasks with the necessary accuracy and explainability.

Core Ethical Principles

WHO’s broader guidance on the ethics and governance of AI for health highlights six foundational principles for the sector:

  • Protecting autonomy
  • Promoting wellbeing/safety
  • Ensuring transparency
  • Fostering accountability
  • Enabling inclusivity/equity
  • Furthering sustainability

Global Risks Identified

The World Economic Forum’s Global Risks Report ranked AI-driven misinformation and media manipulation among the most pressing short-term societal risks, especially ahead of upcoming elections. Regulation is vital, as roughly three billion people worldwide are due to vote in the coming years.

UN experts also warn that AI may deprive lower-income countries of its productivity gains and may disproportionately affect women. Proactive measures to democratize access are therefore imperative.

EU Leads in AI Governance

The EU has recently passed an AI Act that enforces compliance with fundamental rights, including protections for election integrity. Global policy coherence is essential to balance innovation with responsibility.

Cyber Threats

In addition to artificial intelligence (AI), the World Economic Forum report identifies quantum computing as another potentially disruptive technology. It raises particular concern about “harvest attacks” (also described as “harvest now, decrypt later”), in which attackers stockpile currently encrypted data so they can decrypt it once quantum computers powerful enough to break today’s public-key encryption schemes emerge. This underscores the pressing need to upgrade security and data protections now, before quantum technologies enabling such attacks become available.
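
To make the threat model concrete, the sketch below is a minimal, hedged illustration (written in Python with the third-party cryptography package and hypothetical data; it is not taken from the WEF report) of why encrypted data is worth harvesting today: the ciphertext and the public key can be archived cheaply now and decrypted retroactively once quantum hardware can break RSA.

```python
# Minimal sketch of the "harvest now, decrypt later" threat model, using the
# third-party `cryptography` package (an illustrative choice, not one named
# in the WEF report); the patient record below is hypothetical.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A hospital encrypts a small patient record with RSA-2048 today.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

ciphertext = public_key.encrypt(
    b"patient-id=4711; diagnosis=hypertension",
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# An eavesdropper cannot read the ciphertext now, but can cheaply archive it
# along with the public key that produced it.
harvested = {
    "ciphertext": ciphertext,
    "public_key": public_key.public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    ),
}

# Years later, a sufficiently large quantum computer running Shor's algorithm
# could factor the RSA modulus in the archived public key, recover the private
# key, and read the ciphertext retroactively. This is why migrating long-lived
# sensitive data to quantum-resistant schemes is urged well before such
# hardware exists.
```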

