Local Large Language Models for Complex Structured Medical Tasks

V. K. Cody Bumgardner, Aaron Mullen, Sam Armstrong, Caylin Hickey, Jeff Talbert

Research output: Working paper › Preprint


Abstract

This paper introduces an approach that combines the language reasoning capabilities of large language models (LLMs) with the benefits of local training to tackle complex, domain-specific tasks. Specifically, the authors demonstrate their approach by extracting structured condition codes from pathology reports. The proposed approach utilizes local LLMs, which can be fine-tuned to respond to specific generative instructions and provide structured outputs. The authors collected a dataset of over 150k uncurated surgical pathology reports, containing gross descriptions, final diagnoses, and condition codes. They trained different model architectures, including LLaMA, BERT, and Longformer, and evaluated their performance. The results show that the LLaMA-based models significantly outperform BERT-style models across all evaluated metrics, even at extremely reduced numerical precision. The LLaMA models performed especially well with large datasets, demonstrating their ability to handle complex, multi-label tasks. Overall, this work presents an effective approach for utilizing LLMs to perform domain-specific tasks on accessible hardware, with potential applications in the medical domain, where complex data extraction and classification are required.
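The preprint itself does not include source code, but the approach summarized above can be illustrated with a minimal sketch. The snippet below shows one plausible way to prompt a locally hosted, instruction-fine-tuned LLaMA model to emit condition codes for a pathology report using the Hugging Face transformers library. The checkpoint path, prompt template, and output format are assumptions made for illustration only, not the authors' actual pipeline.

```python
# Illustrative sketch only: the checkpoint path, prompt template, and output
# format below are assumptions, not the authors' published pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/fine-tuned-llama"  # hypothetical local checkpoint


def build_prompt(gross_description: str, final_diagnosis: str) -> str:
    """Format a surgical pathology report as a generative instruction,
    one plausible way to elicit structured condition codes from an
    instruction-tuned local LLM."""
    return (
        "### Instruction:\n"
        "Extract the condition codes for the following surgical pathology report.\n\n"
        "### Gross description:\n" + gross_description + "\n\n"
        "### Final diagnosis:\n" + final_diagnosis + "\n\n"
        "### Condition codes:\n"
    )


tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

prompt = build_prompt(
    gross_description="Received in formalin is a 2.1 cm skin ellipse ...",
    final_diagnosis="Skin, left forearm, excision: basal cell carcinoma ...",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# The fine-tuned model is expected to continue the prompt with a delimited
# list of condition codes, which can then be parsed into multi-label outputs.
new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```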
Original language: Undefined/Unknown
State: Published - Aug 3, 2023

Bibliographical note

12 pages. Preprint of an article submitted for consideration in Pacific Symposium on Biocomputing © 2024 World Scientific Publishing Company, https://www.worldscientific.com/

Keywords

  • cs.CL
  • cs.AI

