TY - JOUR
T1 - Oversampling techniques for imbalanced data in regression
AU - Belhaouari, Samir Brahim
AU - Islam, Ashhadul
AU - Kassoul, Khelil
AU - Al-Fuqaha, Ala
AU - Bouzerdoum, Abdesselam
N1 - Publisher Copyright:
© 2024 The Author(s)
PY - 2024/10/15
Y1 - 2024/10/15
N2 - Our study addresses the challenge of imbalanced regression data in Machine Learning (ML) by introducing tailored methods for different data structures. We adapt K-Nearest Neighbor Oversampling-Regression (KNNOR-Reg), originally designed for imbalanced classification, to address imbalanced regression in low-population datasets, and extend it to KNNOR-Deep Regression (KNNOR-DeepReg) for high-population datasets. For tabular data, we also present the Auto-Inflater neural network, which utilizes an exponential loss function for Autoencoders. For image datasets, we employ Multi-Level Autoencoders, consisting of Convolutional and Fully Connected Autoencoders. For such high-dimensional data, our approach outperforms the Synthetic Minority Oversampling Technique for Regression (SMOTER) algorithm on the IMDB-WIKI and AgeDB image datasets. For tabular data, we conducted a comprehensive experiment using various models trained on both augmented and non-augmented datasets, followed by performance comparisons on test data. The outcomes revealed a positive impact of data augmentation, with a success rate of 83.75% for the Light Gradient Boosting Machine (LightGBM) and 71.57% for the 18 other regressors employed in the study. This success rate is determined by the frequency of instances in which models performed better with augmented data than without augmentation. The comparative code is available on GitHub.
KW - AutoInflaters
KW - Data augmentation
KW - Imbalanced data
KW - Machine learning
KW - Nearest neighbor
UR - https://www.scopus.com/pages/publications/85193484036
U2 - 10.1016/j.eswa.2024.124118
DO - 10.1016/j.eswa.2024.124118
M3 - Article
AN - SCOPUS:85193484036
SN - 0957-4174
VL - 252
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 124118
ER -