TY - JOUR
T1 - MADL: A multilevel architecture of deep learning
AU - Belhaouari, Samir Brahim
AU - Raissouli, Hafsa
N1 - Publisher Copyright:
© 2021 The Authors. Published by Atlantis Press B.V.
PY - 2021
Y1 - 2021
N2 - Deep neural networks (DNNs) are a powerful tool used in many real-life applications. Solving complicated real-life problems requires deeper and larger networks and, hence, a larger number of parameters to optimize. This paper proposes a multilevel architecture of deep learning (MADL) that breaks the optimization down into different levels and steps in which networks are trained and optimized separately. Two approaches for passing the features from level i to level i + 1 are discussed. The first approach uses the output layer of level i as input to level i + 1; the second introduces an additional fully connected layer and passes the features from it directly to the next level. The experiments showed that the second approach, i.e., using the features from the additional fully connected layer, yields a greater improvement. The paper also discusses an advanced customizable activation function whose performance is comparable to that of the rectified linear unit (ReLU). MADL is evaluated on CIFAR-10 and exhibits an improvement of 0.84% over a single network, resulting in an accuracy of 98.04%.
KW - Advanced activation function
KW - CIFAR-10
KW - MADL
KW - Convolutional neural network
KW - Multilevel architecture of deep learning
UR - https://www.scopus.com/pages/publications/85101281705
U2 - 10.2991/ijcis.d.201216.003
DO - 10.2991/ijcis.d.201216.003
M3 - Article
AN - SCOPUS:85101281705
SN - 1875-6891
VL - 14
SP - 693
EP - 700
JO - International Journal of Computational Intelligence Systems
JF - International Journal of Computational Intelligence Systems
IS - 1
ER -