TY - JOUR
T1 - DROPc: Dynamic Resource Optimization for Convolution Layer
AU - Akbar, Muhammad Ali
AU - Wang, Bo
AU - Belhaouari, Samir Brahim
AU - Bermak, Amine
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/6/30
Y1 - 2025/6/30
N2 - The computational complexity of convolutional neural networks (CNNs) is challenging for resource-constrained hardware devices. The convolution layer dominates the overall CNN architecture, performing the expensive multiply-and-accumulate operations. Therefore, designing a hardware-efficient convolution layer effectively improves the overall performance of a CNN. In this research, we propose a dynamic resource optimization (DROP) approach to improve the power consumption and delay of the convolution layer. The proposed approach controls the computational path in accordance with interrupts that depend on the non-zero-bit pattern. With a single interrupt, our solution achieves 42.5% power and 36.7% delay improvements over the standard bit-serial-parallel approach. Moreover, eight parallel functioning blocks consume 27.7% less power than the traditional bit-parallel approach.
AB - The computational complexity of convolutional neural networks (CNNs) is challenging for resource-constrained hardware devices. The convolution layer dominates the overall CNN architecture, performing the expensive multiply-and-accumulate operations. Therefore, designing a hardware-efficient convolution layer effectively improves the overall performance of a CNN. In this research, we propose a dynamic resource optimization (DROP) approach to improve the power consumption and delay of the convolution layer. The proposed approach controls the computational path in accordance with interrupts that depend on the non-zero-bit pattern. With a single interrupt, our solution achieves 42.5% power and 36.7% delay improvements over the standard bit-serial-parallel approach. Moreover, eight parallel functioning blocks consume 27.7% less power than the traditional bit-parallel approach.
KW - Convolution
KW - Convolutional neural network (CNN)
KW - Hardware accelerator
UR - https://www.scopus.com/pages/publications/105010339417
U2 - 10.3390/electronics14132658
DO - 10.3390/electronics14132658
M3 - Article
AN - SCOPUS:105010339417
SN - 2079-9292
VL - 14
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 13
M1 - 2658
ER -