MADL: A multilevel architecture of deep learning

Samir Brahim Belhaouari*, Hafsa Raissouli

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep neural networks (DNNs) are a powerful tool used in many real-life applications. Solving complicated real-life problems requires deeper and larger networks, and hence a larger number of parameters to optimize. This paper proposes a multilevel architecture of deep learning (MADL) that breaks the optimization down into different levels and steps, where networks are trained and optimized separately. Two approaches to passing the features from level i to level i + 1 are discussed. The first approach uses the output layer of level i as input to level i + 1; the second introduces an additional fully connected layer and passes its features directly to the next level. The experiments showed that the second approach, i.e., using the features from the additional fully connected layer, gives a higher improvement. The paper also discusses an advanced customizable activation function that is comparable in performance to the rectified linear unit (ReLU). MADL is evaluated on CIFAR-10 and exhibits an improvement of 0.84% over a single network, resulting in an accuracy of 98.04%.
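The two feature-passing approaches described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper trains convolutional networks on CIFAR-10, whereas the toy levels, layer sizes, and class names below (`LevelNet`, `feat_dim`, etc.) are hypothetical stand-ins chosen only to show how level i + 1 can consume either the output layer or the additional fully connected layer of level i.

```python
import torch
import torch.nn as nn

class LevelNet(nn.Module):
    """One level of a multilevel architecture: a small classifier with an
    extra fully connected feature layer placed before the output layer
    (the layer used by the second feature-passing approach)."""
    def __init__(self, in_dim, feat_dim, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.feature_fc = nn.Linear(64, feat_dim)   # additional FC layer
        self.output = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = torch.relu(self.feature_fc(self.backbone(x)))
        return self.output(feats), feats

# Level 1 is trained first (training loop omitted here), then kept fixed
# while level 2 is trained separately on the features it passes forward.
level1 = LevelNet(in_dim=32, feat_dim=16, num_classes=10)
level2_from_outputs  = LevelNet(in_dim=10, feat_dim=16, num_classes=10)  # approach 1
level2_from_features = LevelNet(in_dim=16, feat_dim=16, num_classes=10)  # approach 2

x = torch.randn(4, 32)
with torch.no_grad():                 # level 1 is frozen during level-2 training
    out1, feats1 = level1(x)

logits_a, _ = level2_from_outputs(out1)      # approach 1: pass the output layer
logits_b, _ = level2_from_features(feats1)   # approach 2: pass the FC features
```

Note that approach 2 hands the next level a wider, richer representation (the FC features) than approach 1's class scores, which is consistent with the abstract's finding that it yields the larger improvement.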

Original language: English
Pages (from-to): 693-700
Number of pages: 8
Journal: International Journal of Computational Intelligence Systems
Volume: 14
Issue number: 1
DOIs
Publication status: Published - 2021

Keywords

  • Advanced activation function
  • CIFAR-10
  • MADL
  • Convolutional neural network
  • Multilevel architecture of deep learning

