Systematic Error Removal Using Random Forest for Normalizing Large-Scale Untargeted Lipidomics Data

Sili Fan, Tobias Kind, Tomas Cajka, Stanley L. Hazen, W. H. Wilson Tang, Rima Kaddurah-Daouk, Marguerite R. Irvin, Donna K. Arnett, Dinesh K. Barupal, Oliver Fiehn

Research output: Contribution to journal › Article › peer-review


Abstract

Large-scale untargeted lipidomics experiments involve the measurement of hundreds to thousands of samples. Such data sets are usually acquired on one instrument over days or weeks of analysis time. These extended acquisition runs introduce a variety of systematic errors, including batch differences, longitudinal drifts, and even instrument-to-instrument variation. Such technical variance can obscure the true biological signal and hinder biological discoveries. To combat this issue, we present a novel normalization approach, systematic error removal using random forest (SERRF), which uses quality control (QC) pool samples to eliminate unwanted systematic variation in large sample sets. We compared SERRF with 15 other commonly used normalization methods using six lipidomics data sets from three large cohort studies (832, 1162, and 2696 samples). SERRF reduced the average technical error in these data sets to 5% relative standard deviation. We conclude that SERRF outperforms the other existing methods and can significantly reduce unwanted systematic variation, revealing the biological variance of interest.
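The core idea of QC-based normalization is easy to sketch: pooled QC samples are chemically identical, so any variation among QC injections is by definition technical; a regression model fit on the QCs can therefore estimate the systematic trend, which is then divided out of every sample. The Python sketch below illustrates this with a scikit-learn random forest. It is a simplified stand-in, not the published SERRF implementation (which, for each compound, builds its random-forest predictors from other correlated compounds); all function and variable names here are illustrative assumptions, and only injection order and batch are used as predictors.

# A minimal, hypothetical sketch of QC-based drift correction in the spirit
# of SERRF. NOT the authors' implementation: this toy version predicts the
# technical trend from injection order and batch label only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def qc_rf_normalize(intensities, injection_order, batch, is_qc):
    """Normalize one compound's intensities across an acquisition run.

    intensities     : (n_samples,) raw peak intensities for one compound
    injection_order : (n_samples,) acquisition index of each injection
    batch           : (n_samples,) integer batch label of each injection
    is_qc           : (n_samples,) boolean mask marking pooled QC injections
    """
    X = np.column_stack([injection_order, batch])
    # Fit the systematic (non-biological) trend on QC injections only,
    # since pooled QCs should be identical apart from technical error.
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X[is_qc], intensities[is_qc])
    # The prediction for every injection estimates its systematic error.
    trend = rf.predict(X)
    # Divide out the trend, then rescale to the QC median so normalized
    # values stay on the original intensity scale.
    return intensities / trend * np.median(intensities[is_qc])

def rsd(x):
    # Relative standard deviation (%), the quality metric used to compare
    # normalization methods: compute it on the QCs before and after.
    return np.std(x, ddof=1) / np.mean(x) * 100

In practice one would apply such a function compound by compound and confirm that the QC relative standard deviation drops (the paper reports an average of 5% after SERRF), while the biological samples retain their variance of interest.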

Original language: English
Pages (from-to): 3590-3596
Number of pages: 7
Journal: Analytical Chemistry
Volume: 91
Issue number: 5
DOIs
State: Published - Mar 5, 2019

Bibliographical note

Publisher Copyright:
© 2019 American Chemical Society.

ASJC Scopus subject areas

  • Analytical Chemistry

