ReRAM-based Machine Learning

Title ReRAM-based Machine Learning
Author Hao Yu
Publisher IET
Pages 260
Release 2021-03-05
Genre Computers
ISBN 1839530812

Serving as a bridge between researchers in the computing domain and computing hardware designers, this book presents ReRAM techniques for distributed computing using IMC accelerators, ReRAM-based IMC architectures for machine learning (ML) and data-intensive applications, and strategies to map ML designs onto hardware accelerators.
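
The core primitive behind ReRAM-based IMC is analog matrix-vector multiplication in a crossbar: weights are stored as cell conductances, inputs are applied as word-line voltages, and Ohm's and Kirchhoff's laws produce the dot products as bit-line currents. Below is a minimal, idealized sketch of that mapping; the conductance range and the linear weight-to-conductance mapping are illustrative assumptions, and real designs must also handle device variation, quantized conductance levels, and ADC overheads.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed device conductance range (siemens)

def program_crossbar(W):
    """Linearly map a weight matrix onto cell conductances in [G_MIN, G_MAX].
    Real devices add variation and quantized levels; this is the ideal case."""
    w_min, w_max = W.min(), W.max()
    scale = (G_MAX - G_MIN) / (w_max - w_min)
    offset = G_MIN - w_min * scale          # so that G = scale * W + offset
    return scale * W + offset, scale, offset

def crossbar_mvm(G, v):
    """Analog MVM: Ohm's law gives I = G * V per cell, and Kirchhoff's
    current law sums the currents on each bitline, so the vector of
    column currents equals G.T @ v in a single step."""
    return G.T @ v

# Toy check against an ordinary digital matrix-vector product.
rng = np.random.default_rng(0)
W, x = rng.standard_normal((4, 3)), rng.standard_normal(4)
G, scale, offset = program_crossbar(W)
i_col = crossbar_mvm(G, x)
y = (i_col - offset * x.sum()) / scale      # undo the conductance mapping
assert np.allclose(y, W.T @ x)
print("crossbar result matches digital MVM")
```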

Machine Learning Compilation Flow for a ReRAM-based Accelerator

Title Machine Learning Compilation Flow for a ReRAM-based Accelerator
Author 廖敏君
Publisher
Pages 0
Release 2022
Genre
ISBN

Processing-in-Memory for AI

Title Processing-in-Memory for AI
Author Joo-Young Kim
Publisher Springer Nature
Pages 168
Release 2022-07-09
Genre Technology & Engineering
ISBN 3030987817

This book provides a comprehensive introduction to processing-in-memory (PIM) technology, from architectures to circuit implementations on multiple memory types, and describes how it can be a viable computer architecture in the era of AI and big data. The authors summarize the challenges of AI hardware systems and the constraints and approaches of PIM in order to derive system-level requirements for a practical and feasible PIM solution. The presentation focuses on feasible PIM solutions that can be implemented and used in real systems, including architectures, circuits, and implementation cases for each major memory type (SRAM, DRAM, and ReRAM).
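
As a rough illustration of why PIM matters, the back-of-envelope sketch below compares compute energy against off-chip data-movement energy for a single fully connected layer. The energy constants are ballpark assumptions for illustration only, not figures from the book.

```python
# Back-of-envelope energy comparison motivating processing-in-memory.
# All constants are illustrative assumptions, not measured values.
E_MAC_PJ = 0.2      # assumed energy of one 8-bit MAC operation (pJ)
E_DRAM_PJ = 640.0   # assumed energy of one 32-bit off-chip DRAM read (pJ)

# A 1024x1024 fully connected layer whose weights stream in from DRAM:
macs = 1024 * 1024
compute_uj = macs * E_MAC_PJ / 1e6
traffic_uj = macs * E_DRAM_PJ / 1e6   # one weight word fetched per MAC

print(f"compute: {compute_uj:.2f} uJ, DRAM traffic: {traffic_uj:.2f} uJ")
# Moving the data costs ~3000x more than computing on it; PIM attacks
# exactly this gap by performing the MACs where the weights already live.
```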

Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing

Title Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Author Sudeep Pasricha
Publisher Springer Nature
Pages 418
Release 2023-11-01
Genre Technology & Engineering
ISBN 303119568X

This book presents recent advances towards the goal of enabling efficient implementation of machine learning models on resource-constrained systems, covering different application domains. The focus is on presenting new and interesting use cases of applying machine learning to innovative application domains, exploring the hardware design of efficient machine learning accelerators, presenting memory optimization techniques, illustrating model compression and neural architecture search techniques for energy-efficient and fast execution on resource-constrained hardware platforms, and understanding hardware-software co-design techniques for achieving even greater energy, reliability, and performance benefits.
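
As one concrete flavor of the model-compression techniques surveyed in books like this, the sketch below shows symmetric post-training int8 quantization, one of the simplest ways to shrink a model for a resource-constrained target. It is a minimal illustration under standard assumptions, not a method taken from the book.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(w).max() / 127.0          # map max |w| onto the int8 range
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

# Weights shrink 4x (float32 -> int8) while reconstruction error stays small.
err = np.abs(dequantize(q, scale) - w).max()
print(f"max abs error: {err:.4f} (scale = {scale:.4f})")
```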

Analog Circuits for Machine Learning, Current/Voltage/Temperature Sensors, and High-speed Communication

Title Analog Circuits for Machine Learning, Current/Voltage/Temperature Sensors, and High-speed Communication
Author Pieter Harpe
Publisher Springer Nature
Pages 351
Release 2022-03-24
Genre Technology & Engineering
ISBN 303091741X

This book is based on the 18 tutorials presented during the 29th workshop on Advances in Analog Circuit Design. Expert designers present readers with information about a variety of topics at the frontier of analog circuit design, with specific contributions focusing on analog circuits for machine learning, current/voltage/temperature sensors, and high-speed communication via wireless, wireline, or optical links. This book serves as a valuable reference to the state of the art for anyone involved in analog circuit research and development.

Built-in Fault-Tolerant Computing Paradigm for Resilient Large-Scale Chip Design

Title Built-in Fault-Tolerant Computing Paradigm for Resilient Large-Scale Chip Design
Author Xiaowei Li
Publisher Springer Nature
Pages 318
Release 2023-03-01
Genre Computers
ISBN 9811985510

With the end of Dennard scaling and Moore’s law, IC chips, especially large-scale ones, face growing reliability challenges, and reliability has become one of the key merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis, and self-repair, or “3S” for short.

We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC), and deep learning accelerators, and present prototypes that demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications, such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield.

This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges facing large-scale VLSI designs.
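
The 3S paradigm itself is a hardware mechanism, but its detect/diagnose/repair loop has a compact software analogue in classic algorithm-based fault tolerance (ABFT). The sketch below uses Huang-Abraham row/column checksums to detect, locate, and correct a single corrupted element of a matrix product; it is an illustrative analogy under standard ABFT assumptions, not the book's implementation.

```python
import numpy as np

def abft_matmul(A, B, inject_fault=False):
    """Compute C = A @ B with Huang-Abraham checksums: detect, locate,
    and correct a single corrupted element -- a software analogue of the
    self-test / self-diagnosis / self-repair loop described above."""
    C = A @ B
    exp_col_sums = A.sum(axis=0) @ B     # what C.sum(axis=0) should be
    exp_row_sums = A @ B.sum(axis=1)     # what C.sum(axis=1) should be
    if inject_fault:                     # simulate a transient hardware error
        C[1, 2] += 5.0
    col_err = C.sum(axis=0) - exp_col_sums   # self-test: nonzero -> fault
    row_err = C.sum(axis=1) - exp_row_sums
    if np.abs(col_err).max() > 1e-9:
        i = int(np.abs(row_err).argmax())    # self-diagnosis: locate fault
        j = int(np.abs(col_err).argmax())
        C[i, j] -= col_err[j]                # self-repair: remove the error
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((4, 5))
assert np.allclose(abft_matmul(A, B, inject_fault=True), A @ B)
print("single fault detected, located, and repaired")
```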

Hardware Accelerators for Machine Learning: From 3D Manycore to Processing-in-Memory Architectures

Title Hardware Accelerators for Machine Learning: From 3D Manycore to Processing-in-Memory Architectures
Author Aqeeb Iqbal Arka
Publisher
Pages 0
Release 2022
Genre Machine learning
ISBN

Big data applications such as deep learning and graph analytics require hardware platforms that are energy-efficient yet computationally powerful. 3D manycore architectures are key to efficiently executing such compute- and data-intensive applications. Through-silicon via (TSV)-based 3D manycore systems are a promising solution in this direction, as they enable the integration of disparate heterogeneous computing cores on a single system, and recent industry trends show the viability of 3D integration in real products (e.g., the Intel Lakefield SoC, the AMD Radeon R9 Fury X graphics card, and the Xilinx Virtex-7 2000T/H580T). However, the achievable performance of conventional TSV-based 3D systems is ultimately bottlenecked by the horizontal wires (the wires in each planar die), and current TSV-based 3D architectures also suffer from thermal limitations. Hence, TSV-based architectures do not realize the full potential of 3D integration.

Monolithic 3D (M3D) integration, a breakthrough technology for achieving “More Moore and More Than Moore,” opens up the possibility of designing cores and their associated network routers across multiple layers using monolithic inter-tier vias (MIVs), thereby reducing the effective wire length. Compared to TSV-based 3D ICs, M3D offers the “true” benefits of the vertical dimension for system integration: an MIV is over 100x smaller than a TSV. However, designing these new architectures often involves optimizing multiple conflicting objectives (e.g., performance and thermal) due to the presence of a mix of computing elements and communication methodologies, each with a different requirement for high performance. Machine learning algorithms are a promising way to overcome the difficult optimization challenges posed by the large design space and the complex interactions among the heterogeneous components (CPU, GPU, last-level cache, etc.) in an M3D-based manycore chip.

The first part of this dissertation focuses on the design of high-performance and energy-efficient architectures for big-data applications, enabled by M3D vertical integration and data-driven machine learning algorithms. As an example, we consider heterogeneous manycore architectures with CPUs, GPUs, and caches as the hardware platform in this part of the work. The disparate nature of these processing elements introduces conflicting design requirements that need to be satisfied simultaneously. Moreover, the on-chip traffic patterns exhibited by different big-data applications (such as many-to-few-to-many traffic in CPU/GPU-based manycore architectures) need to be incorporated in the design process for an optimal power-performance trade-off. We first design an M3D-enabled heterogeneous manycore architecture and demonstrate the efficacy of machine learning algorithms for efficiently exploring its large design space; for large design-space exploration problems, the proposed machine learning algorithm can find good solutions in significantly less time than existing state-of-the-art counterparts. However, the M3D-enabled heterogeneous manycore architecture is still limited by the inherent memory bandwidth bottlenecks of traditional von Neumann architectures.
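
The kind of machine-learning-driven design-space exploration described above can be sketched in a few lines: simulate a handful of design points, fit a cheap surrogate model, and let the surrogate nominate the next candidate so that most of the space is never simulated. Everything below (the synthetic cost function, the quadratic surrogate, the parameter names) is an illustrative assumption, not the dissertation's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(design):
    """Stand-in for a slow cycle-accurate simulation.  A design point is
    (core count, cache ratio, link width), normalized to [0, 1]; the cost
    blends performance and thermal penalties.  Entirely synthetic."""
    c, r, w = design
    return (c - 0.7) ** 2 + 0.5 * (r - 0.4) ** 2 + 0.3 * (w - 0.6) ** 2

def fit_surrogate(X, y):
    """Least-squares quadratic surrogate: cheap to fit and to query."""
    feats = lambda Z: np.hstack([np.ones((len(Z), 1)), Z, Z ** 2])
    coef, *_ = np.linalg.lstsq(feats(X), y, rcond=None)
    return lambda Z: feats(Z) @ coef

# Seed with a few simulated points, then let the surrogate nominate
# candidates so that most of the design space is never simulated.
X = rng.random((16, 3))
y = np.array([simulate(d) for d in X])
for _ in range(5):
    surrogate = fit_surrogate(X, y)
    candidates = rng.random((2048, 3))          # surrogate queries only
    best = candidates[surrogate(candidates).argmin()]
    X = np.vstack([X, best])                    # simulate only the winner
    y = np.append(y, simulate(best))

print("best design:", X[y.argmin()].round(3), "cost:", y.min().round(5))
```
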
As a result, later in this dissertation we focus on Processing-in-Memory (PIM) architectures tailor-made to accelerate deep learning applications such as Graph Neural Networks (GNNs), as such architectures can achieve massive data parallelism and do not suffer from memory bandwidth-related issues. We choose GNNs as an example workload because GNNs are more complex than traditional deep learning applications: they simultaneously exhibit attributes of both deep learning and graph computation, and are hence both compute- and data-intensive in nature. The high amount of data movement required by GNN computation poses a challenge to conventional von Neumann architectures (such as CPUs, GPUs, and heterogeneous systems-on-chip (SoCs)), which have limited memory bandwidth. Hence, we propose the use of PIM based on non-volatile memory, namely Resistive Random Access Memory (ReRAM). We leverage the efficient matrix operations enabled by ReRAM and design manycore architectures that can facilitate the unique computation and communication needs of large-scale GNN training. We then exploit various techniques, such as regularization methods, to further accelerate GNN training on ReRAM-based manycore systems. Finally, we streamline the GNN training process by reducing the amount of redundant information in both the GNN model and the input graph.

Overall, this work focuses on the design challenges of high-performance and energy-efficient manycore architectures for machine learning applications. We propose novel architectures that use M3D or ReRAM-based PIM to accelerate such applications, and we focus on hardware/software co-design to ensure the best possible performance.
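
To see why GNN training maps well onto ReRAM, note that a graph-convolution layer is just two matrix products: a sparse, communication-heavy aggregation over the adjacency structure, and a dense per-node transformation that can sit in crossbars as conductances. The sketch below follows the standard GCN formulation as an illustration, not necessarily the dissertation's exact layer design.

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W).
    Aggregation (A_hat @ H) is sparse and communication-heavy;
    transformation (@ W) is a dense MVM per node -- the part that
    maps naturally onto ReRAM crossbars holding W as conductances."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy 4-node ring graph with self-loops, symmetric-normalized adjacency.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A += np.eye(4)                              # add self-loops
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))         # D^-1/2 (A + I) D^-1/2

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))             # node feature matrix
W = rng.standard_normal((8, 4))             # layer weight matrix
print(gcn_layer(A_hat, H, W).shape)         # (4, 4): new node embeddings
```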