Abstract
C-017
Approximately Quantizing Algorithm for In-memory Machine Learning Classifier
カテイ セキ・鶴 隆介・山内寛行(Fukuoka Inst. of Tech.)
As computing models grow in complexity, there is an increasing need for ultra-low-power MAC operations to keep system power under control. To address this challenge, in-memory computing, in which computation is performed within memory bit-cells using 1-bit weights, has attracted attention. However, conventional 1-bit training algorithms still require ensemble learning to overcome circuit non-linearity and low-precision analog computing, which increases system power. This work proposes Approximate Quantizing, a simple 1-bit quantization algorithm for linear models. On the MNIST database, the proposed method reaches accuracy comparable to the original 64-bit model, while being faster and having lower computational complexity than conventional work.
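For illustration only, the sketch below shows one common way to quantize a trained linear classifier's weights to 1 bit (sign binarization with a mean-absolute-value scale) and to run inference with the quantized weights. This is a generic scheme assumed for context, not the paper's Approximate Quantizing rule, and the function names binarize_weights and predict are hypothetical.

```python
import numpy as np

def binarize_weights(w):
    # Map real-valued weights to {-1, +1} by sign, scaled by the mean
    # absolute value so the quantized dot product approximates the
    # original one. (Illustrative 1-bit scheme; not the paper's method.)
    alpha = np.mean(np.abs(w))
    return alpha * np.where(w >= 0, 1.0, -1.0)

def predict(X, W_q, b):
    # Linear classifier inference with 1-bit weights: the MAC reduces
    # to additions/subtractions plus one scale per output.
    scores = X @ W_q.T + b
    return np.argmax(scores, axis=1)
```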