Computing Reviews

Block Walsh-Hadamard transform based binary layers in deep neural networks
Pan H., Badawi D., Cetin A. ACM Transactions on Embedded Computing Systems 1(1):1-26, 2022. Type: Article
Date Reviewed: 01/17/23

Convolution is a mathematical operation that computes the integral of the product of two functions or signals, with one of them flipped. It is the core operation of modern deep neural networks. By the convolution theorem, convolution in the spatial domain corresponds to elementwise multiplication in the Fourier transform (FT) domain, so convolution layers can be implemented in a transform domain. In this paper, the authors propose replacing the Fourier transform with the binary block Walsh–Hadamard transform (WHT), whose transform matrix contains only the entries +1 and -1, to implement convolution-like layers in neural networks.
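Readers unfamiliar with the WHT may find a concrete sketch useful (this is illustrative, not code from the paper): the fast WHT has the same butterfly structure as the fast FT but uses only additions and subtractions, which is what makes it attractive for embedded deep learning.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (Hadamard order), illustrative sketch.

    Uses only additions and subtractions -- no multiplications --
    unlike the FFT. Input length must be a power of two. The WHT is
    its own inverse up to a factor of n: fwht(fwht(x)) / len(x) == x.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: sum and difference
        h *= 2
    return x  # unnormalized; divide by n for the inverse
```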

The paper is divided into the following sections: (1) an introduction; (2) a literature review, covering work related to neural networks and the fast WHT; (3) methodology; (4) results; and (5) conclusion. Sections 2, 3, and 4 should help readers understand the work’s contribution to the field. The experimental part of the paper shows how the authors implemented “1D and 2D WHT-based binary layers to replace 1 x 1 and 3 x 3 convolution layers in deep neural networks.”
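To make the contribution concrete, here is a minimal, hypothetical PyTorch sketch of a WHT-based drop-in for a 1 x 1 convolution, assuming (as the paper describes in general terms) a channel-wise WHT, a trainable soft-threshold nonlinearity in the transform domain, and an inverse WHT; the threshold form, initialization, and names here are assumptions for illustration, not the authors’ exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WHTChannelLayer(nn.Module):
    """Illustrative stand-in for a 1 x 1 convolution: transform the channel
    vector at each pixel with a WHT, soft-threshold the coefficients with a
    trainable per-coefficient threshold, and transform back."""

    def __init__(self, channels):
        super().__init__()
        assert channels & (channels - 1) == 0, "channels must be a power of two"
        self.channels = channels
        self.register_buffer("H", self._hadamard(channels))
        # Trainable thresholds (assumed form; initialized at zero).
        self.thresh = nn.Parameter(torch.zeros(channels))

    @staticmethod
    def _hadamard(n):
        # Build the n x n Hadamard matrix by Sylvester's construction.
        H = torch.ones(1, 1)
        while H.shape[0] < n:
            H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
        return H

    def forward(self, x):             # x: (N, C, H, W)
        x = x.permute(0, 2, 3, 1)     # channels last
        y = x @ self.H                # WHT along the channel axis
        y = torch.sign(y) * F.relu(y.abs() - self.thresh)  # soft-threshold
        y = (y @ self.H) / self.channels                   # inverse WHT
        return y.permute(0, 3, 1, 2)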

The paper is well structured and presents important information. Formulas and algorithms are presented and explained to help readers use the binary block WHT layer. This work can be recommended to researchers looking for an alternative to 1 x 1 and 3 x 3 convolution layers and squeeze-and-excitation layers in deep neural networks.

Reviewer: Thierry Edoh. Review #: CR147538 (2303-0033)
