Abstract
Deep neural networks (DNNs) are a powerful tool used in many real-life applications. Solving complicated real-life problems requires deeper and larger networks and hence a larger number of parameters to optimize. This paper proposes a multilevel architecture of deep learning (MADL) that breaks the optimization down into separate levels and steps in which networks are trained and optimized independently. Two approaches for passing features from level i to level i + 1 are discussed: the first uses the output layer of level i as the input to level i + 1, while the second introduces an additional fully connected layer and passes its features directly to the next level. Experiments showed that the second approach, i.e., using the features of the additional fully connected layer, yields the greater improvement. The paper also presents an advanced customizable activation function whose performance is comparable to that of the rectified linear unit (ReLU). MADL is evaluated on CIFAR-10 and exhibits an improvement of 0.84% over a single network, resulting in an accuracy of 98.04%.
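The two feature-passing approaches described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's architecture: the `Level` class, all layer widths, and the random weights are illustrative assumptions; real levels would be trained convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class Level:
    """A hypothetical tiny 'level': an additional fully connected (FC)
    feature layer followed by a 10-unit output (class-score) layer."""
    def __init__(self, in_dim, fc_dim=64, n_classes=10):
        self.w_fc = rng.standard_normal((in_dim, fc_dim)) * 0.01
        self.w_out = rng.standard_normal((fc_dim, n_classes)) * 0.01

    def forward(self, x):
        fc = relu(x @ self.w_fc)   # features of the additional FC layer
        out = fc @ self.w_out      # output layer
        return fc, out

x = rng.standard_normal((4, 32))   # a batch of 4 inputs with 32 features

level1 = Level(in_dim=32)
fc1, out1 = level1.forward(x)

# Approach 1: level i's output layer feeds level i + 1.
level2a = Level(in_dim=out1.shape[1])
_, out2a = level2a.forward(out1)

# Approach 2 (reported as the better one): the additional FC layer's
# features feed level i + 1 directly, bypassing the output layer.
level2b = Level(in_dim=fc1.shape[1])
_, out2b = level2b.forward(fc1)

print(out2a.shape, out2b.shape)
```

Each level is trained and optimized separately; only the chosen feature tensor is handed to the next level, so the narrower output layer (approach 1) passes less information forward than the wider FC layer (approach 2).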
| Original language | English |
|---|---|
| Pages (from-to) | 693-700 |
| Number of pages | 8 |
| Journal | International Journal of Computational Intelligence Systems |
| Volume | 14 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2021 |
Keywords
- Advanced activation function
- CIFAR-10
- MADL
- Convolutional neural network
- Multilevel architecture of deep learning