As an adaptive optimal control approach, Q-learning-based supervisory control for hybrid electric vehicle (HEV) energy management has rarely been studied with respect to its adaptability. In real-world driving, conditions such as vehicle load, road grade, and traffic may vary; if the supervisory control does not adapt to these changes, the resulting fuel economy may be suboptimal. To the best of our knowledge, this study is the first to investigate the adaptability of Q-learning-based supervisory control for HEVs. A comprehensive analysis interprets adaptability with respect to three varying factors: driving cycle, vehicle load condition, and road grade. A parallel HEV architecture is considered, and Q-learning is used as the reinforcement learning algorithm to control the torque split between the engine and the electric motor. Model predictive control, the equivalent consumption minimization strategy, and a thermostatic control strategy are implemented for comparison. The Q-learning-based supervisory control shows strong adaptability and achieves the best fuel economy of the four supervisory controls under all three varying conditions.
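The torque-split control described in the abstract can be sketched as a small tabular Q-learning loop. This is a minimal illustration, not the paper's actual formulation: the state discretization (SOC and power-demand buckets), the candidate engine torque fractions, the hyperparameters, and the toy surrogate plant are all assumptions chosen for readability.

```python
import random

# Hypothetical discretization: state = (SOC bucket, power-demand bucket);
# actions = candidate fractions of demanded torque supplied by the engine.
SOC_BUCKETS = 5
DEMAND_BUCKETS = 5
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

Q = {((s, d), a): 0.0
     for s in range(SOC_BUCKETS)
     for d in range(DEMAND_BUCKETS)
     for a in range(len(ACTIONS))}

def choose_action(state, rng):
    """Epsilon-greedy selection over engine torque-split fractions."""
    if rng.random() < EPSILON:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard off-policy Q-learning (temporal-difference) update."""
    best_next = max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def toy_env_step(state, action, rng):
    """Toy surrogate plant: reward penalizes fuel use (engine share of the
    torque demand) plus deviation of battery SOC from mid-range."""
    soc, demand = state
    split = ACTIONS[action]
    fuel_cost = split * (demand + 1)                 # more engine torque -> more fuel
    soc_next = min(SOC_BUCKETS - 1, max(0, soc + (1 if split > 0.5 else -1)))
    soc_penalty = abs(soc_next - SOC_BUCKETS // 2)   # charge-sustaining incentive
    next_state = (soc_next, rng.randrange(DEMAND_BUCKETS))
    return -(fuel_cost + soc_penalty), next_state

rng = random.Random(0)
state = (SOC_BUCKETS // 2, rng.randrange(DEMAND_BUCKETS))
for _ in range(5000):
    a = choose_action(state, rng)
    r, s2 = toy_env_step(state, a, rng)
    update(state, a, r, s2)
    state = s2
```

In a real HEV implementation the surrogate plant would be replaced by the vehicle model (or the vehicle itself), and the reward would come from measured fuel consumption and SOC; the adaptability studied in the paper comes from continuing these updates as load, grade, and driving cycle change.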
Number of pages: 10
Journal: IEEE Transactions on Intelligent Transportation Systems
State: Published - Jul 1 2022
Bibliographical note
Funding Information:
This work was supported in part by the National Natural Science Foundation of China under Grant 51875054 and Grant 51705044 and in part by the Chongqing Natural Science Foundation for Distinguished Young Scholars, Chongqing Science and Technology Bureau, China, under Grant cstc2019jcyjjq0010.
Keywords
- Hybrid electric vehicle
- Real-time implementation
- Reinforcement learning
- Supervisory control
ASJC Scopus subject areas
- Automotive Engineering
- Mechanical Engineering
- Computer Science Applications