Deep learning (DL) models are increasingly popular for estimating the State of Charge (SOC) of batteries. Because these models learn complex patterns directly from data, they do not require a full physics-based understanding of battery behavior, which makes them easier to implement than model-based methods. Within DL, activation functions are pivotal: they introduce non-linearity, enabling the network to capture complex relationships in the data. This study systematically examines the impact of various activation functions on the performance of a DL model, specifically a Deep LSTM, and reveals notable differences in performance depending on the activation chosen. The Mean Absolute Error (MAE) is 1.91%, 1.99%, and 2.03% for models trained with SELU, Leaky ReLU, and Tanh activations, respectively, so the SELU-trained model achieves the highest accuracy. However, the Tanh model significantly outperforms the others in computational efficiency per step, particularly in GPU-enabled environments: it requires only 4 ms/step, approximately 71% faster than its nearest counterpart. This efficiency is critical because the model must be retrained periodically to accommodate battery aging effects on SOC predictions over time, and reduced training time lowers computational and deployment costs.
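The following is a minimal sketch, not the paper's exact architecture: a stacked ("deep") LSTM regressor for SOC in which the cell activation is a hyperparameter, mirroring the SELU / Leaky ReLU / Tanh comparison described above. The layer widths, sequence length, and input features are illustrative assumptions.

```python
# Hypothetical Deep LSTM for SOC estimation with a swappable activation.
# Layer sizes, timesteps, and features are assumptions, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_deep_lstm(activation="selu", timesteps=100, n_features=3):
    """Stacked LSTM; `activation` replaces the default tanh inside each cell."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),  # e.g. voltage, current, temperature
        layers.LSTM(64, activation=activation, return_sequences=True),
        layers.LSTM(64, activation=activation),
        layers.Dense(1, activation="sigmoid"),  # SOC constrained to [0, 1]
    ])
    model.compile(optimizer="adam", loss="mae")  # MAE matches the reported metric
    return model

# Train one model per candidate activation, then compare MAE and time per step.
for act in ["selu", tf.nn.leaky_relu, "tanh"]:
    model = build_deep_lstm(activation=act)
```

If the model is implemented in Keras, one plausible explanation for the Tanh model's speed advantage on GPU is that the fused cuDNN LSTM kernel is used only when the cell keeps its default tanh activation; any other choice falls back to a slower generic implementation.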
Impact of Activation Functions in Deep Learning Based State of Charge Estimation for Batteries
31.07.2024
Conference paper
Electronic resource
English
Elsevier | 2024