A Neural Network pruning approach based on Compressive Sampling

Jie Yang*, Abdesselam Bouzerdoum, Son Lam Phung

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

13 Citations (Scopus)

Abstract

The trade-off between computational complexity and architecture size is a bottleneck in the development of Neural Networks (NNs). An architecture that is too large or too small strongly affects performance, in terms of both generalization and computational cost. In the past, saliency analysis has been employed to determine the most suitable structure; however, it is time-consuming and its performance is not robust. In this paper, a family of new algorithms for pruning elements (weights and hidden neurons) in Neural Networks is presented, based on Compressive Sampling (CS) theory. The proposed framework makes it possible to locate the significant elements, and hence find a sparse structure, without computing their saliency. Experimental results are presented which demonstrate the effectiveness of the proposed approach.
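The abstract does not give the paper's exact algorithm, but the core CS idea it invokes can be illustrated: treat the network's dense weight vector as a sparse signal, take a small number of random linear measurements of its action, and recover the few significant weights by sparse reconstruction rather than per-weight saliency. The sketch below is a minimal, hypothetical illustration using Orthogonal Matching Pursuit (OMP), a standard CS recovery routine; the measurement matrix, dimensions, and weight values are invented for the example.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with A @ x ~ y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Greedily pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Refit by least squares on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Hypothetical setup: a dense weight vector where only a few entries matter.
rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                      # weights, measurements, sparsity
w_true = np.zeros(n)
w_true[[4, 17, 33]] = [1.5, -2.0, 0.8]   # the "significant" weights
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ w_true                           # compressed measurements
w_pruned = omp(A, y, k)                  # sparse structure, no saliency needed
print("retained weight indices:", np.nonzero(w_pruned)[0])
```

Note that only m = 25 measurements are used to pick the significant entries out of n = 50 candidates; saliency analysis, by contrast, would evaluate the effect of removing each element individually.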

Original language: English
Title of host publication: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Pages: 3428-3435
Number of pages: 8
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 2009 International Joint Conference on Neural Networks, IJCNN 2009 - Atlanta, GA, United States
Duration: 14 Jun 2009 - 19 Jun 2009

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Conference

Conference: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Country/Territory: United States
City: Atlanta, GA
Period: 14/06/09 - 19/06/09
