Recentering responsible and explainable artificial intelligence research on patients: implications in perinatal psychiatry. Academic Article

Overview

abstract

  • In the setting of underdiagnosed and undertreated perinatal depression (PD), artificial intelligence (AI) solutions are poised to help predict and treat PD. In the near future, perinatal patients may interact with AI during clinical decision-making, in their patient portals, or through AI-powered chatbots delivering psychotherapy. The increase in potential AI applications has led to discussions regarding responsible AI (RAI) and explainable AI (XAI). Current discussions of RAI, however, are limited in their consideration of the patient as an active participant with AI. Therefore, we propose a patient-centered, rather than a patient-adjacent, approach to RAI and XAI that identifies autonomy, beneficence, justice, trust, privacy, and transparency as core concepts to uphold for health professionals and patients. We present empirical evidence that these principles are strongly valued by patients. We further suggest possible design solutions that uphold these principles and acknowledge the pressing need for research into practical applications that realize them.

publication date

  • January 18, 2024

Identity

PubMed Central ID

  • PMC10832054

Digital Object Identifier (DOI)

  • 10.3389/fpsyt.2023.1321265

PubMed ID

  • 38304402

Additional Document Info

volume

  • 14