TY - JOUR
T1 - Memristor-Based Design of Sparse Compact Convolutional Neural Network
AU - Wen, Shiping
AU - Wei, Huaqiang
AU - Yan, Zheng
AU - Guo, Zhenyuan
AU - Yang, Yin
AU - Huang, Tingwen
AU - Chen, Yiran
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2020/7/1
Y1 - 2020/7/1
N2 - Memristors have been widely studied for the hardware implementation of neural networks due to their nanometer size, low power consumption, fast switching speed, and functional similarity to biological synapses. However, it is difficult to realize memristor-based deep neural networks because general structures such as LeNet and FCN contain a large number of network parameters. To mitigate this problem, this paper designs a memristor-based sparse compact convolutional neural network (MSCCNN) to reduce the number of memristors. We first use an average pooling layer and a 1×1 convolutional layer to replace the fully connected layers. Meanwhile, depthwise separable convolution is utilized in place of traditional convolution to further reduce the number of parameters. Furthermore, a network pruning method is adopted to remove redundant memristor crossbars from the depthwise separable convolutional layers. Therefore, a more compact network structure is obtained while the recognition accuracy remains unchanged. Simulation results show that the designed model achieves superior accuracy while greatly reducing the scale of the hardware circuit. Compared with traditional designs of memristor-based CNNs, the proposed model has a smaller area and lower power consumption.
AB - Memristors have been widely studied for the hardware implementation of neural networks due to their nanometer size, low power consumption, fast switching speed, and functional similarity to biological synapses. However, it is difficult to realize memristor-based deep neural networks because general structures such as LeNet and FCN contain a large number of network parameters. To mitigate this problem, this paper designs a memristor-based sparse compact convolutional neural network (MSCCNN) to reduce the number of memristors. We first use an average pooling layer and a 1×1 convolutional layer to replace the fully connected layers. Meanwhile, depthwise separable convolution is utilized in place of traditional convolution to further reduce the number of parameters. Furthermore, a network pruning method is adopted to remove redundant memristor crossbars from the depthwise separable convolutional layers. Therefore, a more compact network structure is obtained while the recognition accuracy remains unchanged. Simulation results show that the designed model achieves superior accuracy while greatly reducing the scale of the hardware circuit. Compared with traditional designs of memristor-based CNNs, the proposed model has a smaller area and lower power consumption.
KW - Memristor
KW - convolutional neural network
KW - depthwise separable convolution
KW - network pruning
UR - https://www.scopus.com/pages/publications/85090956616
U2 - 10.1109/TNSE.2019.2934357
DO - 10.1109/TNSE.2019.2934357
M3 - Article
AN - SCOPUS:85090956616
SN - 2327-4697
VL - 7
SP - 1431
EP - 1440
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
IS - 3
M1 - 8798700
ER -