Cache Hierarchy Design for Memory Intensive Applications

Title Cache Hierarchy Design for Memory Intensive Applications PDF eBook
Author Vijay Kumar Vankadara
Publisher
Pages 124
Release 2004
Genre
ISBN

Cache and Memory Hierarchy Design

Title Cache and Memory Hierarchy Design PDF eBook
Author Steven A. Przybylski
Publisher Elsevier
Pages 238
Release 2014-06-28
Genre Computers
ISBN 0080500595

An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single- and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of caches and enable designers to maximize performance given particular implementation constraints.
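
To make the execution-time perspective concrete, here is a small back-of-the-envelope sketch in Python. It is not taken from the book; the formula is the standard CPU-time decomposition, and every input (instruction count, CPI, miss rates, miss penalty, cycle times) is a hypothetical placeholder.

```python
# Toy comparison (not from the book): judge two hypothetical cache designs by
# estimated execution time rather than by miss rate alone.
# CPU time = instructions * (base CPI + accesses/instruction * miss rate
#            * miss penalty in cycles) * cycle time.

def exec_time_ms(instructions, base_cpi, accesses_per_instr,
                 miss_rate, miss_penalty_cycles, cycle_ns):
    cpi = base_cpi + accesses_per_instr * miss_rate * miss_penalty_cycles
    return instructions * cpi * cycle_ns / 1e6   # nanoseconds -> milliseconds

# Design A: small, fast cache (shorter cycle time, higher miss rate) -- assumed numbers.
a = exec_time_ms(1e9, 1.0, 1.3, miss_rate=0.05, miss_penalty_cycles=40, cycle_ns=0.50)
# Design B: larger, slower cache (longer cycle time, lower miss rate) -- assumed numbers.
b = exec_time_ms(1e9, 1.0, 1.3, miss_rate=0.02, miss_penalty_cycles=40, cycle_ns=0.55)

print(f"design A: {a:.0f} ms, design B: {b:.0f} ms")
```

The point of comparing designs this way is that a lower miss rate bought with a longer cycle time or a larger penalty is not automatically a win; only the combined execution-time estimate decides.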

Multi-Core Cache Hierarchies

Title Multi-Core Cache Hierarchies PDF eBook
Author Rajeev Balasubramonian
Publisher Springer Nature
Pages 137
Release 2022-06-01
Genre Technology & Engineering
ISBN 303101734X

A key determinant of overall system performance and power dissipation is the cache hierarchy, since access to off-chip memory consumes many more cycles and much more energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints.

The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers.

Table of Contents: Basic Elements of Large Cache Design / Organizing Data in CMP Last Level Caches / Policies Impacting Cache Hit Rates / Interconnection Networks within Large Caches / Technology / Concluding Remarks
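
To illustrate one of the problems listed above, placing data in cache banks near the accessing core, the following toy Python sketch compares address-interleaved bank placement with an idealized core-local placement in a banked last-level cache. The mesh size, latencies, and both placement policies are assumptions chosen for illustration, not material from the book.

```python
# Toy sketch of the data-placement problem in a banked ("NUCA") shared cache:
# access latency grows with the network distance between the requesting core
# and the bank that holds the block. Mesh size, latencies, and both placement
# policies are illustrative assumptions.

import random

MESH = 4                 # 4x4 mesh of tiles; one core and one cache bank per tile
BANK_LAT = 10            # cycles to access the bank array itself (assumed)
HOP_LAT = 2              # cycles per network hop (assumed)

tiles = [(x, y) for x in range(MESH) for y in range(MESH)]

def hops(a, b):
    (ax, ay), (bx, by) = a, b
    return abs(ax - bx) + abs(ay - by)

def latency(core, bank):
    return BANK_LAT + 2 * HOP_LAT * hops(core, bank)    # request + reply traversal

def interleaved_bank(addr):
    """Address-interleaved placement: a block's home bank ignores who uses it."""
    return tiles[addr % len(tiles)]

def local_bank(core):
    """Idealized placement: the block sits in the requesting core's own bank."""
    return core

random.seed(0)
accesses = [(random.choice(tiles), random.randrange(1 << 20)) for _ in range(10_000)]

avg_interleaved = sum(latency(c, interleaved_bank(a)) for c, a in accesses) / len(accesses)
avg_local = sum(latency(c, local_bank(c)) for c, _ in accesses) / len(accesses)
print(f"address-interleaved: {avg_interleaved:.1f} cycles, core-local: {avg_local:.1f} cycles")
```

Real designs sit between these extremes, because keeping every block in the local bank conflicts with capacity sharing and with data shared by many cores; that tension is exactly what the placement and replication chapters address.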

Exploring Memory Hierarchy Design with Emerging Memory Technologies

Title Exploring Memory Hierarchy Design with Emerging Memory Technologies PDF eBook
Author Guangyu Sun
Publisher Springer Science & Business Media
Pages 126
Release 2013-09-18
Genre Technology & Engineering
ISBN 3319006819

This book equips readers with tools for designing high-performance, low-power, and highly reliable memory hierarchies based on emerging memory technologies such as STT-RAM, PCM, and FBDRAM. These technologies offer high density, near-zero static power, and immunity to soft errors, giving them the potential to overcome the “memory wall.” The authors discuss memory design from several perspectives: emerging memory technologies are employed in the memory hierarchy with novel architectural modifications; a hybrid memory structure is introduced to combine the advantages of multiple memory technologies; an analytical model named “Moguls” is introduced to quantitatively explore the optimal design of a memory hierarchy; and finally, the vulnerability of CMPs to radiation-induced soft errors is reduced by replacing different levels of on-chip memory with STT-RAM.
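
As a rough illustration of the trade-off such hybrid designs target, the sketch below compares the energy of a hypothetical SRAM last-level cache against a hypothetical STT-RAM one, where STT-RAM has near-zero leakage but more expensive writes. All energy and leakage numbers are made-up placeholders, not figures from the book.

```python
# Back-of-the-envelope sketch of the SRAM vs. STT-RAM trade-off for a
# last-level cache: STT-RAM is dense and has near-zero leakage, but its
# writes cost noticeably more energy than SRAM writes. Every number below
# is a made-up placeholder.

def cache_energy_mj(accesses, write_frac, read_nj, write_nj, leak_mw, seconds):
    dynamic_nj = accesses * ((1 - write_frac) * read_nj + write_frac * write_nj)
    static_mj = leak_mw * seconds            # mW * s = mJ of leakage energy
    return dynamic_nj / 1e6 + static_mj      # total energy in mJ

ACCESSES, WRITE_FRAC, RUNTIME_S = 2e8, 0.3, 1.0

sram = cache_energy_mj(ACCESSES, WRITE_FRAC, read_nj=0.5, write_nj=0.5,
                       leak_mw=200.0, seconds=RUNTIME_S)    # assumed SRAM values
sttram = cache_energy_mj(ACCESSES, WRITE_FRAC, read_nj=0.4, write_nj=2.5,
                         leak_mw=5.0, seconds=RUNTIME_S)    # assumed STT-RAM values

print(f"SRAM LLC:    {sram:.0f} mJ")
print(f"STT-RAM LLC: {sttram:.0f} mJ")
# With a more write-heavy mix the comparison can flip, which is why hybrid
# hierarchies try to steer write-intensive data toward the SRAM portion.
```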

The Fractal Structure of Data Reference

Title The Fractal Structure of Data Reference PDF eBook
Author Bruce McNutt
Publisher Springer Science & Business Media
Pages 144
Release 2005-11-24
Genre Computers
ISBN 0306470349

The architectural concept of a memory hierarchy has been immensely successful, making possible today's spectacular pace of technology evolution in both the volume of data and the speed of data access. Its success is difficult to understand, however, when examined within the traditional "memoryless" framework of performance analysis. The "memoryless" framework cannot properly reflect a memory hierarchy's ability to take advantage of patterns of data use that are transient.

The Fractal Structure of Data Reference: Applications to the Memory Hierarchy both introduces and empirically justifies an alternative modeling framework in which arrivals are driven by a statistically self-similar underlying process and are transient in nature. The substance of this book comes from the model's ability to impose a mathematically tractable structure on important problems involving the operation and performance of a memory hierarchy. It describes events as they play out at a wide range of time scales, from the operation of file buffers and storage control cache to a statistical view of entire disk storage applications. Striking insights are obtained about how memory hierarchies work and how to exploit them to best advantage. The emphasis is on the practical application of such results.

The Fractal Structure of Data Reference: Applications to the Memory Hierarchy will be of interest to professionals working in applied computer performance and capacity planning, particularly those with a focus on disk storage. The book is also an excellent reference for those interested in database and data structure research.
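
The following toy experiment is not the book's self-similar model; it only illustrates why a memoryless view of traffic misses what a cache or buffer actually exploits. Two synthetic traces reference the same items with the same overall popularity, but one revisits each item in short transient bursts, and an LRU buffer of identical size behaves very differently on the two. The trace parameters are arbitrary assumptions.

```python
# Toy contrast (not the book's model): same items and same overall popularity,
# but one trace revisits each item in short transient bursts. An LRU buffer of
# the same size sees very different hit ratios on the two traces.

import random
from collections import OrderedDict

def lru_hit_ratio(trace, capacity):
    cache, hits = OrderedDict(), 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)          # refresh recency on a hit
        else:
            cache[item] = True
            if len(cache) > capacity:
                cache.popitem(last=False)    # evict the least recently used item
    return hits / len(trace)

random.seed(1)
ITEMS, BURST, CAP = 10_000, 8, 256           # arbitrary illustrative parameters
n_bursts = ITEMS // BURST

# Bursty trace: pick an item, reference it BURST times in a row, then move on.
bursty = [i for i in random.choices(range(ITEMS), k=n_bursts) for _ in range(BURST)]
# Memoryless trace: same length and item population, but no short-term reuse.
memoryless = random.choices(range(ITEMS), k=n_bursts * BURST)

print(f"LRU hit ratio, bursty trace:     {lru_hit_ratio(bursty, CAP):.2f}")
print(f"LRU hit ratio, memoryless trace: {lru_hit_ratio(memoryless, CAP):.2f}")
```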

Redesigning the Memory Hierarchy to Exploit Static and Dynamic Application Information

Title Redesigning the Memory Hierarchy to Exploit Static and Dynamic Application Information PDF eBook
Author Po-An Tsai
Publisher
Pages 177
Release 2019
Genre
ISBN

Memory hierarchies are crucial to performance and energy efficiency, but current systems adopt rigid, hardware-managed cache hierarchies that cause needless data movement. The root cause of these inefficiencies is that cache hierarchies adopt a legacy interface and ignore most application information. Specifically, they are structured as a rigid hierarchy of progressively larger and slower cache levels, with sizes and policies fixed at design time. Caches expose a flat address space to programs that hides the hierarchy's structure, and they transparently move data across cache levels in fixed-size blocks using simple, fixed heuristics. Besides squandering valuable application-level information, this design is very costly: providing the illusion of a flat address space requires complex address translation machinery, such as associative lookups in caches.

This thesis proposes to redesign the memory hierarchy to better exploit application information. We take a cross-layer approach that redesigns the hardware-software interface to put software in control of the hierarchy and naturally convey application semantics. We focus on two main directions.

First, we design reconfigurable cache hierarchies that exploit dynamic application information to optimize their structure on the fly, approaching the performance of the best application-specific hierarchy. Hardware monitors application memory behavior at low overhead, and a software runtime uses this information to periodically reconfigure the system. This approach enables software to (i) build single- or multi-level virtual cache hierarchies tailored to the needs of each application, making effective use of spatially distributed and heterogeneous (e.g., SRAM and stacked DRAM) cache banks; (ii) replicate shared data near-optimally to minimize on-chip and off-chip traffic; and (iii) schedule computation across systems with heterogeneous hierarchies (e.g., systems with near-data processors). Specializing the memory system to each application improves performance and energy efficiency, since applications can avoid using resources they do not benefit from and use the remaining resources to hold their data at minimum latency and energy. For example, virtual cache hierarchies improve full-system energy-delay product (EDP) by up to 85% over a combination of state-of-the-art techniques.

Second, we redesign the memory hierarchy to exploit static application information by managing variable-sized objects, the natural unit of data access in programs, instead of fixed-size cache lines. We present the Hotpads object-based hierarchy, which leverages object semantics to hide the memory layout and dispense with the flat address space interface. Similarly to how memory-safe languages abstract the memory layout, Hotpads exposes an interface based on object pointers that disallows arbitrary address arithmetic. This avoids the need for associative caches. Instead, Hotpads moves objects across a hierarchy of directly addressed memories. It rewrites pointers to avoid most associative lookups, provides hardware support for memory management, and unifies hierarchical garbage collection and data placement. Hotpads also enables many new optimizations. For instance, we have designed Zippads, a memory hierarchy that leverages Hotpads to compress objects. Leveraging object semantics and the ability to rewrite pointers in Hotpads, Zippads compresses and stores objects more compactly, with a novel compression algorithm that exploits redundancy across objects.
Though object-based languages are often seen as sacrificing performance for productivity, this work shows that hardware can exploit this abstraction to improve performance and efficiency over cache hierarchies: Hotpads reduces dynamic memory hierarchy energy by 2.6x and improves performance by 34%; and Zippads reduces main memory footprint by 2x while improving performance by 30%.
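
The description above is dense, so the following highly simplified Python toy sketches just one of the ideas it mentions: objects are copied into a small, directly addressed memory (a "pad"), and the pointer to an object is rewritten after admission so that later dereferences need no associative lookup. The classes and their behavior are illustrative assumptions, not the actual Hotpads design.

```python
# Conceptual toy (an assumption-laden sketch, not the Hotpads design itself):
# instead of a flat address space with associative cache lookups, an object is
# copied into a small, directly addressed "pad" on first use, and the reference
# is rewritten so that later dereferences go straight to the pad slot.

class Pad:
    """A tiny directly addressed memory holding whole objects."""
    def __init__(self):
        self.slots = []                     # pad storage, addressed by index

    def admit(self, obj):
        self.slots.append(obj)              # copy the object into the pad
        return len(self.slots) - 1          # "pad pointer": a direct index

class Ref:
    """An object reference that is rewritten once its target has been admitted."""
    def __init__(self, loader):
        self._loader = loader               # how to fetch from backing memory
        self._pad_addr = None               # rewritten pointer, once admitted

    def deref(self, pad):
        if self._pad_addr is None:          # first use: miss, admit, rewrite pointer
            self._pad_addr = pad.admit(self._loader())
        return pad.slots[self._pad_addr]    # later uses: direct, lookup-free access

# Usage: the first deref "misses" and copies the object in; afterwards the
# reference points straight at the pad slot.
pad = Pad()
node = Ref(lambda: {"payload": 42, "next": None})   # hypothetical heap object
print(node.deref(pad)["payload"])    # miss: object admitted, pointer rewritten
print(node.deref(pad)["payload"])    # hit: direct access, no associative search
```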

Multi-Core Cache Hierarchies

Title Multi-Core Cache Hierarchies PDF eBook
Author Rajeev Balasubramonian
Publisher Morgan & Claypool Publishers
Pages 137
Release 2011
Genre Computers
ISBN 9781598297539

A key determinant of overall system performance and power dissipation is the cache hierarchy, since access to off-chip memory consumes many more cycles and much more energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints.

The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers.

Table of Contents: Basic Elements of Large Cache Design / Organizing Data in CMP Last Level Caches / Policies Impacting Cache Hit Rates / Interconnection Networks within Large Caches / Technology / Concluding Remarks