Abstract
Recent years have seen a rapid increase in research on DRAM-based Processing-In-Memory (PIM) accelerators, which employ the analog computing capability of DRAM, with minimal changes to the inherent structure of DRAM peripherals, to accelerate various data-centric applications. Several DRAM-based PIM accelerators for Convolutional Neural Networks (CNNs) have also been reported. Among these, accelerators leveraging in-DRAM stochastic arithmetic have shown manifold improvements in processing latency and throughput, because stochastic arithmetic converts multiplications into simple bit-wise logical AND operations. However, using in-DRAM stochastic arithmetic for CNN acceleration requires frequent stochastic-to-binary number conversions, for which prior works employ full-adder-based or serial-counter-based in-DRAM circuits. These circuits consume large area and incur long latency, and their in-DRAM implementations require heavy modifications to DRAM peripherals, which significantly diminishes the benefits of stochastic arithmetic in these accelerators. To address these shortcomings, this paper presents AGNI, a new substrate for in-DRAM stochastic-to-binary number conversion. AGNI makes minor modifications to DRAM peripherals using pass transistors, capacitors, encoders, and charge pumps, and re-purposes the sense amplifiers as voltage comparators, to enable in-situ binary conversion of input stochastic operands of different sizes with iso-latency. Our evaluations, based on detailed SPICE simulations (https://github.com/uky-UCAT/AGNI_SPICE.git), show that AGNI achieves savings of at least 8× in area, at least 28× in energy-delay product (EDP), and at least 21× in area × latency, compared to two in-DRAM stochastic-to-binary conversion circuits from prior works. These circuit-level benefits propagate to the system level, yielding at least 3.9× performance gain across four deep CNN models.
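The abstract's core mechanism can be illustrated in software: in unipolar stochastic computing, a value in [0, 1] is encoded as a bitstream whose fraction of 1s equals the value, two independent streams are multiplied with a bit-wise AND, and conversion back to binary amounts to counting the 1s (the popcount step that AGNI performs in-DRAM). The sketch below is illustrative only; the function names and stream length are hypothetical and not part of the paper.

```python
import random

def to_stochastic(p, n, rng):
    """Encode a value p in [0, 1] as an n-bit unipolar stochastic
    bitstream: each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(a_bits, b_bits):
    """Multiply two independent unipolar streams with a bit-wise AND;
    P(a_i & b_i = 1) = P(a_i = 1) * P(b_i = 1)."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def to_binary(bits):
    """Stochastic-to-binary conversion: the decoded value is the
    fraction of 1s in the stream (a popcount divided by the length)."""
    return sum(bits) / len(bits)

# Hypothetical parameters for illustration: a fixed seed and a
# 4096-bit stream; longer streams reduce the approximation error.
rng = random.Random(42)
n = 4096
a, b = 0.5, 0.25
product = to_binary(stochastic_multiply(to_stochastic(a, n, rng),
                                        to_stochastic(b, n, rng)))
# product approximates a * b = 0.125, up to sampling noise
```

Note that the accuracy of the decoded product improves with stream length, which is why frequent stochastic-to-binary conversions dominate cost and motivate a low-overhead in-DRAM conversion substrate.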
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 24th International Symposium on Quality Electronic Design, ISQED 2023 |
| ISBN (Electronic) | 9798350334753 |
| DOIs | |
| State | Published - 2023 |
| Event | 24th International Symposium on Quality Electronic Design, ISQED 2023 - San Francisco, United States. Duration: Apr 5, 2023 → Apr 7, 2023 |
Publication series
| Name | Proceedings - International Symposium on Quality Electronic Design, ISQED |
| --- | --- |
| Volume | 2023-April |
| ISSN (Print) | 1948-3287 |
| ISSN (Electronic) | 1948-3295 |
Conference
| Conference | 24th International Symposium on Quality Electronic Design, ISQED 2023 |
| --- | --- |
| Country/Territory | United States |
| City | San Francisco |
| Period | 4/5/23 → 4/7/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- convolutional neural networks
- processing-in-memory
- stochastic to binary conversion
ASJC Scopus subject areas
- Hardware and Architecture
- Electrical and Electronic Engineering
- Safety, Risk, Reliability and Quality