○Jiyuan Sun (Kyushu Univ.), Lei Ma (Harbin Institute of Technology), Jianjun Zhao (Kyushu Univ.)
Deep Neural Networks (DNNs) have shown promising performance in a wide range of applications. However, unlike the human brain, a well-trained DNN cannot retain old classes while learning new ones, a phenomenon known as catastrophic forgetting.
In this work, we propose a feature-extraction-based approach that enables DNNs to overcome the forgetting problem. It maintains the knowledge of old classes by building and storing average feature vectors of the training data seen so far. Experiments on the MNIST and CIFAR datasets show that our approach efficiently reduces the cost of continual learning of new classes, since the old classes need not be retrained.
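The idea of storing per-class average feature vectors can be sketched as a nearest-class-mean classifier on top of a frozen feature extractor. The sketch below is an illustrative assumption, not the paper's actual implementation: the feature extractor is stubbed out, and the toy 2-D features and class labels are invented for demonstration.

```python
import numpy as np

# Hypothetical sketch: keep one running average feature vector per class
# and classify a sample by its nearest class mean. Adding a new class
# only adds a new entry, so old classes need no retraining.

class ClassMeanStore:
    def __init__(self):
        self.sums = {}    # class label -> sum of feature vectors
        self.counts = {}  # class label -> number of samples seen

    def add(self, label, feat):
        # Update running statistics for this class.
        if label not in self.sums:
            self.sums[label] = np.zeros_like(feat, dtype=float)
            self.counts[label] = 0
        self.sums[label] += feat
        self.counts[label] += 1

    def mean(self, label):
        return self.sums[label] / self.counts[label]

    def predict(self, feat):
        # Nearest class mean in feature space (Euclidean distance).
        labels = list(self.sums)
        dists = [np.linalg.norm(feat - self.mean(l)) for l in labels]
        return labels[int(np.argmin(dists))]

store = ClassMeanStore()
# Toy 2-D "features" for two old classes...
store.add(0, np.array([0.0, 0.0])); store.add(0, np.array([0.2, 0.0]))
store.add(1, np.array([1.0, 1.0])); store.add(1, np.array([1.0, 0.8]))
# ...then a new class is registered without touching the old ones.
store.add(2, np.array([-1.0, 1.0]))
print(store.predict(np.array([0.9, 0.9])))  # → 1 (nearest to class 1's mean)
```

Because only one mean vector per class is stored, memory grows with the number of classes rather than the number of training samples.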
