An in-memory computing system based on stacked 3D resistive memories


Figure summarizing the evaluation and performance of the researchers' in-memory computing macro. Credit: Huo et al, Nature Electronics (2022).

Machine learning architectures based on convolutional neural networks (CNNs) have proven to be very useful for a wide range of applications, from computer vision and image analysis to natural language processing. However, to tackle more advanced tasks, these architectures become increasingly complex and computationally demanding.

In recent years, many electronics engineers around the world have therefore tried to develop devices capable of supporting the storage and computational load of complex CNN-based architectures. This includes denser memory devices that can store large numbers of weights (i.e., the trainable parameters of a CNN's different layers).

Researchers from the Chinese Academy of Sciences, Beijing Institute of Technology and other Chinese universities recently developed a new in-memory computing system that could help run complex CNN-based models more efficiently. Their device, presented in a paper published in Nature Electronics, is based on a non-volatile compute-in-memory macro built from 3D memristor arrays.

"Scaling such systems to three dimensions could provide higher parallelism, capacity, and density for the necessary vector-matrix multiplication operations," Qiang Huo and colleagues wrote in their paper. "However, three-dimensional scaling is challenging due to manufacturing issues and device variability. We report a two-kilobit non-volatile compute-in-memory macro based on a three-dimensional vertical resistive random-access memory fabricated using a 55 nm complementary metal-oxide-semiconductor process."

Resistive random-access memories, or RRAMs, are non-volatile storage devices (i.e., they retain data even when power is interrupted) based on memristors. Memristors are electronic components that can limit or regulate the flow of electric current through a circuit while "remembering" the amount of charge that has previously passed through them.
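This charge memory can be illustrated with a minimal, purely didactic model: in a common linear-drift picture, a memristor's resistance interpolates between a low state R_on and a high state R_off depending on the total charge that has flowed through it. The parameter values below (`r_on`, `r_off`, `q_max`) are hypothetical, chosen only for the sketch, and do not describe the devices in the paper.

```python
import numpy as np

# Minimal sketch (illustrative only): a linear-drift memristor model in which
# resistance depends on the total charge q that has flowed through the device.
# r_on, r_off, and q_max are hypothetical device parameters.
def memristance(q, r_on=100.0, r_off=16_000.0, q_max=1e-4):
    """Resistance as a linear interpolation between r_on and r_off,
    controlled by the normalized internal state w = q / q_max."""
    w = np.clip(q / q_max, 0.0, 1.0)
    return r_on * w + r_off * (1.0 - w)

# As charge accumulates, resistance drops from r_off toward r_on --
# this is how the device "records" past current flow.
fresh = memristance(0.0)       # unswitched device: high resistance (r_off)
switched = memristance(1e-4)   # fully switched device: low resistance (r_on)
```

Reading out the resistance later (with a small, non-disturbing voltage) recovers the stored state, which is what makes the device usable as non-volatile memory.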

RRAMs essentially store data by varying the resistance of a memristor. While previous studies have demonstrated the great potential of these memory devices, conventional versions are separate from the computing engine, which limits their possible applications.

In-memory computing RRAM devices were designed to overcome this limitation, by integrating the computations inside the memory. This can significantly reduce data transfer between memories and processors, improving overall system power efficiency.
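The key operation performed inside the memory is vector-matrix multiplication. The sketch below, a simplified illustration rather than the authors' implementation, shows the idea behind an analog crossbar: each weight is stored as a conductance, inputs are applied as voltages, and Kirchhoff's current law sums the products on each output column in a single step.

```python
import numpy as np

# Illustrative sketch of why a resistive crossbar computes a vector-matrix
# multiply "in place": each weight is stored as a conductance G[i, j], input
# activations are applied as voltages V[i], and the current collected on each
# output column is the dot product I[j] = sum_i V[i] * G[i, j].
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (4x3 array)
V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages in volts

I = V @ G  # column currents: all dot products obtained in one read operation

# The same result computed digitally, one multiply-accumulate at a time:
I_digital = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])
assert np.allclose(I, I_digital)
```

Because the multiply-accumulates happen where the weights already reside, no weight data has to be shuttled to a separate processor, which is the source of the power-efficiency gains the paragraph above describes. Stacking such arrays in 3D multiplies the number of dot products available per unit area.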

The in-memory computing device created by Huo and his colleagues is a 3D RRAM with vertically stacked peripheral layers and circuitry. The circuitry of the device was fabricated using 55nm CMOS technology, the technology that underpins most integrated circuits on the market today.

The researchers evaluated their device by using it to perform complex operations and to run an edge-detection model on brain MRI scans. They also assessed its inference performance using two widely used image-recognition benchmarks, the MNIST and CIFAR-10 datasets.
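Edge detection of the kind used in this evaluation can be expressed as a small convolution. The sketch below is a generic illustration, not the authors' model: a 3x3 Laplacian kernel applied to a toy image with a vertical step edge. In a compute-in-memory macro, such kernel weights would be stored as RRAM conductances and the multiply-accumulates performed in the analog domain.

```python
import numpy as np

# A classic 3x3 Laplacian edge-detection kernel (illustrative choice).
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=float)

# Toy 6x6 "image" with a vertical step edge between columns 2 and 3.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

def conv2d_valid(img, k):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(img[r:r + kh, c:c + kw] * k)
    return out

edges = conv2d_valid(image, kernel)
# The response is zero in the uniform regions and nonzero only along the step
# edge, which is exactly the behavior an edge detector should exhibit.
```

Each output pixel here is one multiply-accumulate over a 3x3 window, i.e., a small dot product of the kind the macro computes natively.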

"Our macro can perform 3D vector-matrix multiplication operations with a power efficiency of 8.32 tera-operations per second per watt when the input, weight, and output data are 8, 9, and 22 bits, respectively, and the bit density is 58.2 bits µm⁻²," the researchers wrote in their paper. "We show that the macro offers more accurate brain MRI edge detection and improved inference accuracy on the CIFAR-10 dataset compared to conventional methods."

In early tests, the vertical in-memory computing RRAM system created by Huo and his colleagues achieved remarkable results, outperforming conventional RRAM approaches. In the future, it could therefore prove to be very useful for running complex CNN-based models in a more energy-efficient way, while allowing for better accuracies and performance.


More information:
Qiang Huo et al, A computing-in-memory macro based on three-dimensional resistive random-access memory, Nature Electronics (2022). DOI: 10.1038/s41928-022-00795-x

© 2022 Science X Network

Citation: An in-memory computing system based on stacked 3D resistive memories (September 1, 2022), retrieved September 1, 2022 from html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.
