Memristor Hadamard Multiplier: Compressed vector-matrix multiplication for memristor-based ensemble neural networks

Category: hardware

Author: Phan Anh VU

Published: December 2, 2024

Using an ensemble of neural networks is an effective means of quantifying the uncertainty of an output prediction. However, the memory cost of storing a large ensemble of neural networks quickly becomes prohibitive and limits their applicability. This paper details a three-stage in-memory computing circuit that performs analog-domain vector-matrix multiplication between an input voltage vector and a rank-1 compressed weight ensemble stored in the conductances of three memristor arrays. For wide layers (thousands of neurons) and large ensemble sizes (hundreds to thousands of models), this circuit reduces the required number of memristors by two to three orders of magnitude relative to an uncompressed ensemble. Compared to a single neural network, the increase in the number of memristors is less than two-fold. We report SPICE simulations of the circuit and observe that \(75\%\) of the total error deviates by no more than \(25\%\) from the ideal value. A statistical analysis of the circuit explains these observations and offers insights into how the circuit may be improved.
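To illustrate the kind of compression the abstract describes, the sketch below shows a rank-1 ensemble factorization in NumPy. This is a hedged, software-level analogy only: it assumes a BatchEnsemble-style scheme in which each ensemble member's weight matrix is the shared matrix modulated by the outer product of two per-member vectors, so a member's forward pass never needs its full matrix. The paper's exact factorization and circuit mapping may differ; all variable names here are illustrative.

```python
import numpy as np

# Assumed rank-1 compression scheme (BatchEnsemble-style, illustrative):
# member k's weights are W_k = W * outer(r_k, s_k), a Hadamard product of
# a shared matrix W with a rank-1 matrix. Each member then costs only two
# vectors (n_in + n_out values) instead of a full n_in x n_out matrix,
# which is the source of the memristor savings for wide layers and large
# ensembles.

rng = np.random.default_rng(0)
n_in, n_out = 8, 5

W = rng.standard_normal((n_in, n_out))  # shared weights (one array)
r = rng.standard_normal(n_in)           # per-member input-side vector
s = rng.standard_normal(n_out)          # per-member output-side vector
x = rng.standard_normal(n_in)           # input vector (voltages, in circuit terms)

# Explicit member weights: materialize W_k and multiply.
W_k = W * np.outer(r, s)
y_explicit = x @ W_k

# Factored computation: scale the input by r, use the shared W,
# then scale the output by s. W_k is never materialized.
y_factored = ((x * r) @ W) * s

assert np.allclose(y_explicit, y_factored)
```

The identity holds because \(\sum_i x_i (W_{ij} r_i s_j) = s_j \sum_i (x_i r_i) W_{ij}\): the per-member scaling commutes out of the matrix product, which is what lets the shared matrix sit in one crossbar while the per-member vectors occupy much smaller arrays.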

Paper to be published at the International Conference on Rebooting Computing 2024.

https://github.com/phanav/memham/blob/main/reboot_compute_2024_FINAL.pdf