Local Large Language Models for Complex Structured Tasks

V K Cody Bumgardner, Aaron Mullen, Samuel E Armstrong, Caylin Hickey, Victor Marek, Jeff Talbert

Research output: Contribution to journal › Article › peer-review

Abstract

This paper introduces an approach that combines the language reasoning capabilities of large language models (LLMs) with the benefits of local training to tackle complex language tasks. The authors demonstrate their approach by extracting structured condition codes from pathology reports. The proposed approach utilizes local, fine-tuned LLMs to respond to specific generative instructions and provide structured outputs. Over 150k uncurated surgical pathology reports containing gross descriptions, final diagnoses, and condition codes were used. Different model architectures were trained and evaluated, including LLaMA, BERT, and Longformer. The results show that the LLaMA-based models significantly outperform BERT-style models across all evaluated metrics. LLaMA models performed especially well with large datasets, demonstrating their ability to handle complex, multi-label tasks. Overall, this work presents an effective approach for utilizing LLMs to perform structured generative tasks on domain-specific language in the medical domain.
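
To illustrate the general pattern described in the abstract (a locally hosted, instruction-tuned LLaMA-style model producing structured condition codes from a pathology report), here is a minimal sketch using the Hugging Face transformers API. The checkpoint name, prompt template, and example report are illustrative assumptions, not the authors' actual code or data.

```python
# Minimal sketch (not the authors' implementation): prompt a local,
# instruction-tuned LLaMA-style model to emit condition codes for a
# surgical pathology report as a parseable, comma-separated list.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # hypothetical base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# Illustrative report text with the gross description and final diagnosis
# fields mentioned in the abstract.
report = (
    "GROSS DESCRIPTION: Received in formalin is a 2.1 cm skin ellipse...\n"
    "FINAL DIAGNOSIS: Basal cell carcinoma, margins negative."
)

# Hypothetical instruction format: constrain the response to codes only so
# the structured output can be parsed deterministically downstream.
prompt = (
    "### Instruction:\n"
    "Extract the condition codes for the following surgical pathology "
    "report. Respond with a comma-separated list of codes only.\n\n"
    f"### Report:\n{report}\n\n### Codes:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated tokens, then split into individual codes.
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
codes = [c.strip() for c in completion.split(",") if c.strip()]
print(codes)
```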

Original language: English
Pages (from-to): 105-114
Number of pages: 10
Journal: AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science
Volume: 2024
State: Published - 2024

Bibliographical note

©2024 AMIA - All rights reserved.
