Artificial Intelligence in Relation to Accurate Information and Tasks in Gynecologic Oncology and Clinical Medicine—Dunning–Kruger Effects and Ultracrepidarianism

Edward J. Pavlik, Jamie Land Woodward, Frank Lawton, Allison L. Swiecki-Sikora, Dharani D. Ramaiah, Taylor A. Rives

Research output: Contribution to journal › Review article › peer-review

1 Scopus citation

Abstract

Publications from 2023–2024 on the application of artificial intelligence (AI) to many situations, including those in clinical medicine, are reviewed here. Because of the short time frame covered, an exhaustive analysis of the kind performed in meta-analyses or systematic reviews is not possible. Consequently, this narrative literature review presents an examination of AI's application to contemporary topics in clinical medicine. The findings reviewed here span 254 papers published in 2024 reporting on AI in medicine, of which 83 articles are considered in the present review because they contain evidence-based findings. In particular, the cases considered deal with AI accuracy in initial differential diagnoses, cancer treatment recommendations, board-style exams, and performance in various clinical tasks, including clinical imaging. Importantly, summaries of the validation techniques used to evaluate AI findings are presented. This review focuses on AI systems whose clinical relevance is evidenced by application and evaluation in clinical publications; this relevance speaks to both what various AI systems have promised and what they have delivered. Readers will be able to recognize when a generative AI is expressing views without having the necessary information (ultracrepidarianism) or is responding as if it had expert knowledge when it does not. A lack of awareness that AIs may deliver inadequate or confabulated information can result in incorrect medical decisions and inappropriate clinical applications (Dunning–Kruger effect). As a result, in certain cases a generative AI system might underperform and provide results that greatly overstate their medical or clinical validity.

Original language: English
Article number: 735
Journal: Diagnostics
Volume: 15
Issue number: 6
DOIs
State: Published - Mar 2025

Bibliographical note

Publisher Copyright:
© 2025 by the authors.

Keywords

  • accuracy
  • artificial intelligence
  • chatbot
  • clinical medicine
  • generative AI
  • gynecologic oncology
  • large language models

ASJC Scopus subject areas

  • Clinical Biochemistry
