Abstract
Effective implementation of artificial intelligence in behavioral healthcare delivery depends on overcoming challenges that are pronounced in this domain. Self and social stigma contribute to under-reported symptoms, and under-coding worsens ascertainment. Health disparities contribute to algorithmic bias. Lack of reliable biological and clinical markers hinders model development, and model explainability challenges impede trust among users. In this perspective, we describe these challenges and discuss design and implementation recommendations to overcome them in intelligent systems for behavioral and mental health.
Original language | English |
---|---|
Pages (from-to) | 9-15 |
Number of pages | 7 |
Journal | JAMIA Open |
Volume | 3 |
Issue number | 1 |
DOIs | |
State | Published - 2020 |
Bibliographical note
Publisher Copyright: © The Author(s) 2020.
Funding
Authors’ efforts were partially supported by the following grants: grant #W81XWH-10-2-0181 and R01 MH116269-01 (CGW); the National Institute of General Medical Sciences of the National Institutes of Health under grant #P20 GM103424-17 (PD); the U.S. National Center for Advancing Translational Sciences under grant #UL1TR001998 (RK); and the National Science Foundation under grant #1838745 (VS).
Funders | Funder number |
---|---|
National Science Foundation (NSF) | 1838745 |
National Institutes of Health (NIH) | P20 GM103424-17 |
National Institute of Mental Health | R01 MH116269 |
National Institute of General Medical Sciences | |
National Center for Advancing Translational Sciences (NCATS) | UL1TR001998 |
Keywords
- Algorithms
- Artificial intelligence
- Behavioral health
- Ethics
- Health disparities
- Mental health
- Precision medicine
- Predictive modeling
ASJC Scopus subject areas
- Health Informatics