A4 Refereed article in a conference publication
Efficient STDP Micro-Architecture for Silicon Spiking Neural Networks
Authors: Sergei Dytckov, Masoud Daneshtalab, Masoumeh Ebrahimi, Hassan Anwar, Juha Plosila, Hannu Tenhunen
Conference name: The Euromicro Conference on Digital System Design
Publication year: 2014
Journal: Digital System Design
First page: 496
Last page: 503
Number of pages: 8
ISSN: 0888-2118
DOI: https://doi.org/10.1109/DSD.2014.109
Web address: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6927283
Spiking neural networks (SNNs) are the closest approach to biological neurons in comparison with conventional artificial neural networks (ANNs). SNNs are composed of neurons and synapses interconnected in a complex pattern. As communication in such massively parallel computational systems becomes critical, the network-on-chip (NoC) is a promising solution for providing a scalable and robust interconnection fabric. However, using a NoC for large-scale SNNs gives rise to a trade-off among scalability, throughput, neuron/router ratio (cluster size), and area overhead. In this paper, we tackle this trade-off using a clustering approach and optimize the utilization of synaptic resources. An optimal cluster size can provide the lowest area overhead and power consumption. For learning, a phenomenon known as spike-timing-dependent plasticity (STDP) is utilized. The micro-architectures of the network, the clusters, and the computational neurons are also described. The presented approach offers a promising solution for integrating NoCs and STDP-based SNNs, with performance tuned to the underlying application.
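To illustrate the learning rule named in the abstract, the following is a minimal sketch of a generic pair-based STDP weight update, not the specific micro-architecture proposed in the paper. All parameter names and values (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS, W_MAX) are illustrative assumptions.

import math

# Illustrative pair-based STDP parameters (assumed, not from the paper)
A_PLUS = 0.01      # potentiation amplitude
A_MINUS = 0.012    # depression amplitude
TAU_PLUS = 20.0    # potentiation time constant, ms
TAU_MINUS = 20.0   # depression time constant, ms
W_MAX = 1.0        # upper bound on the synaptic weight

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (spike times in ms).

    A pre-synaptic spike that precedes the post-synaptic spike (dt > 0)
    potentiates the synapse; the reverse order depresses it.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

def apply_stdp(w: float, t_pre: float, t_post: float) -> float:
    """Apply the pairwise update and clip the weight to [0, W_MAX]."""
    return min(W_MAX, max(0.0, w + stdp_delta_w(t_pre, t_post)))

# Example: a pre-synaptic spike at 10 ms followed by a post-synaptic
# spike at 15 ms slightly strengthens the synapse.
print(apply_stdp(0.5, t_pre=10.0, t_post=15.0))

In a hardware realization such as the one discussed in the paper, the exponential terms would typically be approximated with lookup tables or shift-based decays; the sketch above only conveys the timing-dependent sign and magnitude of the update.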
Downloadable publication: This is an electronic reprint of the original article.