PSMGD: Periodic Stochastic Multi-Gradient Descent for Fast Multi-Objective Optimization

Research output: Contribution to journal › Conference article › peer-review

1 Scopus citation

Abstract

Multi-objective optimization (MOO) lies at the core of many machine learning (ML) applications that involve multiple, potentially conflicting objectives. Despite the long history of MOO, recent years have witnessed a surge in interest within the ML community in the development of gradient manipulation algorithms for MOO, thanks to the availability of gradient information in many ML problems. However, existing gradient manipulation methods for MOO often suffer from long training times, primarily due to the need for computing dynamic weights by solving an additional optimization problem to determine a common descent direction that can decrease all objectives simultaneously. To address this challenge, we propose a new and efficient algorithm called Periodic Stochastic Multi-Gradient Descent (PSMGD) to accelerate MOO. PSMGD is motivated by the key observation that dynamic weights across objectives exhibit small changes under minor updates over short intervals during the optimization process. Consequently, our PSMGD algorithm is designed to periodically compute these dynamic weights and utilizes them repeatedly, thereby effectively reducing the computational overhead. Theoretically, we prove that PSMGD can achieve state-of-the-art convergence rates for strongly convex, general convex, and non-convex functions. Additionally, we introduce a new computational complexity measure, termed backpropagation complexity, and demonstrate that PSMGD can achieve an objective-independent backpropagation complexity. Through extensive experiments, we verify that PSMGD can provide comparable or superior performance to state-of-the-art MOO algorithms while significantly reducing training time.
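The core idea summarized above, refreshing the dynamic (min-norm) weights only once every few iterations and reusing the cached weights in between, can be illustrated with a minimal sketch for two objectives. The function names, the closed-form two-objective min-norm weight, and all hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mgda_weight_2obj(g1, g2):
    # Closed-form min-norm weight for two objectives: the lam that
    # minimizes ||lam*g1 + (1-lam)*g2||^2, clipped to [0, 1].
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return 0.5
    lam = ((g2 - g1) @ g2) / denom
    return float(np.clip(lam, 0.0, 1.0))

def psmgd_sketch(grads, x0, lr=0.1, period=5, steps=50):
    # grads: list of two callables, each returning one objective's
    # gradient at x. Weights are recomputed only every `period` steps
    # and reused in between, saving the extra optimization solve.
    x = np.asarray(x0, dtype=float)
    lam = 0.5  # cached dynamic weight
    for t in range(steps):
        g1, g2 = grads[0](x), grads[1](x)
        if t % period == 0:
            lam = mgda_weight_2obj(g1, g2)  # periodic weight refresh
        d = lam * g1 + (1.0 - lam) * g2     # common descent direction
        x = x - lr * d
    return x
```

For two quadratic objectives such as ||x - a||^2 and ||x - b||^2, the sketch drives x toward the Pareto set (the segment between a and b), decreasing both objectives; a full min-norm solve per step is replaced by one solve per `period` steps.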

Original language: English
Pages (from-to): 21770-21778
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 20
DOIs
State: Published - Apr 11 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: Feb 25 2025 - Mar 4 2025

Bibliographical note

Publisher Copyright:
© 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Funding

JL acknowledges funding from NSF grants CAREER CNS-2110259, CNS-2112471, and IIS-2324052; DARPA YFA grant D24AP00265; ONR grant N00014-24-1-2729; and AFRL grant PGSC-SC-111374-19s. HY acknowledges funding support from the AI Seed Funding and GWBC Award at RIT.

Funders and funder numbers:

• Rochester Institute of Technology
• Air Force Research Laboratory: PGSC-SC-111374-19s
• National Science Foundation Arctic Social Science Program: IIS-2324052, CNS-2112471, CNS-2110259
• Defense Advanced Research Projects Agency: YFA D24AP00265
• Office of Naval Research Naval Academy: N00014-24-1-2729

ASJC Scopus subject areas

• Artificial Intelligence
