Abstract
With the rapidly growing use of convolutional neural networks (CNNs) in real-world applications of machine learning and artificial intelligence (AI), several hardware accelerator designs for CNN inference and training have recently been proposed. In this chapter, we present ATRIA, a novel bit-parallel rate-coded unary-computing-based in-DRAM accelerator for energy-efficient, high-speed CNN inference. ATRIA makes lightweight modifications to DRAM cell arrays to accelerate multiply-accumulate (MAC) operations inside DRAM using bit-parallel rate-coded unary computing (i.e., stochastic computing). ATRIA significantly improves the latency, throughput, and efficiency of CNN inference by performing 16 MAC operations in only two consecutive memory operation cycles. We mapped four benchmark CNNs onto ATRIA to compare its performance with five state-of-the-art in-DRAM accelerators from prior work. Our analysis shows that ATRIA incurs only a 3.5% drop in CNN inference accuracy while achieving improvements of up to 3.2× in frames per second (FPS) and up to 10× in efficiency (FPS/W/mm²) over the best-performing in-DRAM accelerator from prior work.
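To make the arithmetic style concrete, the sketch below illustrates rate-coded unary (stochastic) computing, the encoding the abstract says ATRIA accelerates: a value in [0, 1] becomes the fraction of 1s in a bitstream, so multiplication reduces to a bitwise AND and scaled addition to a multiplexer. This is a minimal software illustration under assumed parameters (stream length, helper names); it does not reproduce ATRIA's actual in-DRAM circuit or its bit-parallel organization.

```python
# Minimal sketch of rate-coded unary (stochastic) arithmetic.
# Stream length N and all function names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)
N = 1024  # bitstream length; longer streams reduce encoding error

def encode(x: float) -> np.ndarray:
    """Encode a value in [0, 1] as a random bitstream with P(bit = 1) = x."""
    return (rng.random(N) < x).astype(np.uint8)

def decode(stream: np.ndarray) -> float:
    """Recover the encoded value as the fraction of 1s in the stream."""
    return float(stream.mean())

a, b = 0.75, 0.5
sa, sb = encode(a), encode(b)

# Multiplication of unipolar stochastic streams is a bitwise AND:
# P(sa & sb = 1) = P(sa = 1) * P(sb = 1) for independent streams.
product = sa & sb
print(f"{a} * {b} ~ {decode(product):.3f}")   # ~0.375

# Scaled addition: a 2-to-1 multiplexer driven by a 0.5-probability
# select stream computes (a + b) / 2, keeping the result in [0, 1].
select = encode(0.5)
scaled_sum = np.where(select, sa, sb)
print(f"({a} + {b})/2 ~ {decode(scaled_sum):.3f}")  # ~0.625
```

Because a MAC built this way needs only single-gate logic per bit, it maps naturally onto the lightweight cell-array modifications the abstract describes, at the cost of the small encoding error reflected in the reported 3.5% accuracy drop.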
| Original language | English |
|---|---|
| Title of host publication | Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing |
| Subtitle of host publication | Hardware Architectures |
| Pages | 393-409 |
| Number of pages | 17 |
| ISBN (Electronic) | 9783031195686 |
| DOIs | |
| State | Published - Jan 1 2023 |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
Keywords
- Bit-parallel unary computing
- Convolutional neural networks
- In-memory processing
- Latency
- Multiply-accumulate
- Stochastic arithmetic
ASJC Scopus subject areas
- General Computer Science
- General Engineering
- General Social Sciences