TY - JOUR
T1 - Memristor-Based Edge Computing of Blaze Block for Image Recognition
AU - Ran, Huanhuan
AU - Wen, Shiping
AU - Li, Qian
AU - Yang, Yin
AU - Shi, Kaibo
AU - Feng, Yuming
AU - Zhou, Pan
AU - Huang, Tingwen
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2022/5/1
Y1 - 2022/5/1
N2 - In this article, a novel edge computing system is proposed for image recognition via a memristor-based blaze block circuit, which includes a memristive convolutional neural network (MCNN) layer, two single-memristive blaze blocks (SMBBs), four double-memristive blaze blocks (DMBBs), a global average pooling (GAP) layer, and a memristive fully connected (MFC) layer. SMBBs and DMBBs mainly utilize the depthwise separable convolutional neural network (DwCNN), which can be implemented with a much smaller memristor crossbar (MC). In the backward propagation, we use batch normalization (BN) layers to accelerate the convergence. In the forward propagation, this circuit combines DwCNN/CNN layers with nonseparate BN layers, which means that the required number of operational amplifiers is cut by half, along with a greatly reduced power consumption. A diode is added after the rectified linear unit (ReLU) layer to limit the output of the circuit below the threshold voltage V_t of the memristor, making the circuit more stable. Experiments show that the proposed memristor-based circuit achieves an accuracy of 84.38% on the CIFAR-10 data set with advantages in computing resources, calculation time, and power consumption. Experiments also show that, when the number of multistate conductance levels is 2^8 and the quantization bit width of the data is 8, the circuit achieves its best balance between power consumption and production cost.
AB - In this article, a novel edge computing system is proposed for image recognition via a memristor-based blaze block circuit, which includes a memristive convolutional neural network (MCNN) layer, two single-memristive blaze blocks (SMBBs), four double-memristive blaze blocks (DMBBs), a global average pooling (GAP) layer, and a memristive fully connected (MFC) layer. SMBBs and DMBBs mainly utilize the depthwise separable convolutional neural network (DwCNN), which can be implemented with a much smaller memristor crossbar (MC). In the backward propagation, we use batch normalization (BN) layers to accelerate the convergence. In the forward propagation, this circuit combines DwCNN/CNN layers with nonseparate BN layers, which means that the required number of operational amplifiers is cut by half, along with a greatly reduced power consumption. A diode is added after the rectified linear unit (ReLU) layer to limit the output of the circuit below the threshold voltage V_t of the memristor, making the circuit more stable. Experiments show that the proposed memristor-based circuit achieves an accuracy of 84.38% on the CIFAR-10 data set with advantages in computing resources, calculation time, and power consumption. Experiments also show that, when the number of multistate conductance levels is 2^8 and the quantization bit width of the data is 8, the circuit achieves its best balance between power consumption and production cost.
KW - Blaze block
KW - Data quantization
KW - Image recognition
KW - Memristor-based edge computing
KW - Multistate conductance
UR - https://www.scopus.com/pages/publications/85099112590
U2 - 10.1109/TNNLS.2020.3045029
DO - 10.1109/TNNLS.2020.3045029
M3 - Article
C2 - 33373307
AN - SCOPUS:85099112590
SN - 2162-237X
VL - 33
SP - 2121
EP - 2131
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -