Redesigning the Memory Hierarchy to Exploit Static and Dynamic Application Information

Title: Redesigning the Memory Hierarchy to Exploit Static and Dynamic Application Information
Author: Po-An Tsai
Publisher:
Pages: 177
Release: 2019
Genre:
ISBN:

Memory hierarchies are crucial to performance and energy efficiency, but current systems adopt rigid, hardware-managed cache hierarchies that cause needless data movement. The root cause of these inefficiencies is that cache hierarchies adopt a legacy interface and ignore most application information. Specifically, they are structured as a rigid hierarchy of progressively larger and slower cache levels, with sizes and policies fixed at design time. Caches expose a flat address space that hides the hierarchy's structure from programs, and they transparently move data across cache levels in fixed-size blocks using simple, fixed heuristics. Besides squandering valuable application-level information, this design is very costly: providing the illusion of a flat address space requires complex address translation machinery, such as associative lookups in caches.

This thesis proposes to redesign the memory hierarchy to better exploit application information. We take a cross-layer approach that redesigns the hardware-software interface to put software in control of the hierarchy and naturally convey application semantics. We focus on two main directions.

First, we design reconfigurable cache hierarchies that exploit dynamic application information to optimize their structure on the fly, approaching the performance of the best application-specific hierarchy. Hardware monitors application memory behavior at low overhead, and a software runtime uses this information to periodically reconfigure the system. This approach enables software to (i) build single- or multi-level virtual cache hierarchies tailored to the needs of each application, making effective use of spatially distributed and heterogeneous (e.g., SRAM and stacked DRAM) cache banks; (ii) replicate shared data near-optimally to minimize on-chip and off-chip traffic; and (iii) schedule computation across systems with heterogeneous hierarchies (e.g., systems with near-data processors). Specializing the memory system to each application improves performance and energy efficiency, since applications can avoid using resources they do not benefit from and use the remaining resources to hold their data at minimum latency and energy. For example, virtual cache hierarchies improve full-system energy-delay product (EDP) by up to 85% over a combination of state-of-the-art techniques.

Second, we redesign the memory hierarchy to exploit static application information by managing variable-sized objects, the natural unit of data access in programs, instead of fixed-size cache lines. We present the Hotpads object-based hierarchy, which leverages object semantics to hide the memory layout and dispense with the flat address space interface. Similarly to how memory-safe languages abstract the memory layout, Hotpads exposes an interface based on object pointers that disallows arbitrary address arithmetic. This avoids the need for associative caches. Instead, Hotpads moves objects across a hierarchy of directly addressed memories. It rewrites pointers to avoid most associative lookups, provides hardware support for memory management, and unifies hierarchical garbage collection and data placement. Hotpads also enables many new optimizations. For instance, we have designed Zippads, a memory hierarchy that leverages Hotpads to compress objects. By exploiting object semantics and Hotpads's ability to rewrite pointers, Zippads compresses and stores objects more compactly, using a novel compression algorithm that exploits redundancy across objects.
Though object-based languages are often seen as sacrificing performance for productivity, this work shows that hardware can exploit this abstraction to improve performance and efficiency over cache hierarchies: Hotpads reduces dynamic memory hierarchy energy by 2.6x and improves performance by 34%, and Zippads reduces main memory footprint by 2x while improving performance by 30%.
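
To make the monitor-then-reconfigure loop of the first direction concrete, the sketch below shows one way a software runtime could partition distributed cache banks among applications using hardware-provided miss curves. The greedy marginal-gain policy, the allocate_banks function, and the miss-curve format are illustrative assumptions made for this summary, not the algorithms the thesis actually uses.

```python
# Illustrative sketch: allocate cache banks to applications using miss curves.
# ASSUMPTION: hardware monitors expose, for each app, estimated misses as a
# function of the number of banks assigned to it (a "miss curve"). The greedy
# marginal-gain policy below is a simplification, not the thesis's algorithm.

def allocate_banks(miss_curves, num_banks):
    """Greedily give each bank to the app with the largest miss reduction.

    miss_curves: dict app -> list where miss_curves[app][k] is the estimated
                 miss count if the app receives k banks (len = num_banks + 1).
    Returns a dict app -> number of banks assigned.
    """
    alloc = {app: 0 for app in miss_curves}
    for _ in range(num_banks):
        # Marginal gain of giving one more bank to each app.
        def gain(app):
            k = alloc[app]
            return miss_curves[app][k] - miss_curves[app][k + 1]
        best = max(alloc, key=gain)
        if gain(best) <= 0:
            break  # No app benefits from more capacity; leave banks unused.
        alloc[best] += 1
    return alloc

if __name__ == "__main__":
    # Two synthetic miss curves: app A saturates quickly, app B keeps improving.
    curves = {
        "A": [1000, 400, 350, 340, 338],
        "B": [2000, 1500, 1000, 600, 300],
    }
    print(allocate_banks(curves, num_banks=4))  # {'A': 1, 'B': 3}
```

A runtime would rerun such an allocation periodically (e.g., every few milliseconds) as the monitored miss curves change, which is what lets the hierarchy track dynamic application behavior.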

The Fractal Structure of Data Reference

Title: The Fractal Structure of Data Reference
Author: Bruce McNutt
Publisher: Springer Science & Business Media
Pages: 144
Release: 2005-11-24
Genre: Computers
ISBN: 0306470349

The architectural concept of a memory hierarchy has been immensely successful, making possible today's spectacular pace of technology evolution in both the volume of data and the speed of data access. Its success is difficult to understand, however, when examined within the traditional "memoryless" framework of performance analysis: that framework cannot properly reflect a memory hierarchy's ability to take advantage of transient patterns of data use. The Fractal Structure of Data Reference: Applications to the Memory Hierarchy both introduces and empirically justifies an alternative modeling framework in which arrivals are driven by a statistically self-similar underlying process and are transient in nature. The substance of the book comes from the model's ability to impose a mathematically tractable structure on important problems involving the operation and performance of a memory hierarchy. It describes events as they play out at a wide range of time scales, from the operation of file buffers and storage control cache to a statistical view of entire disk storage applications. Striking insights are obtained about how memory hierarchies work and how to exploit them to best advantage; the emphasis is on the practical application of such results. The book will be of interest to professionals working in applied computer performance and capacity planning, particularly those with a focus on disk storage, and is also an excellent reference for those interested in database and data structure research.
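
As a rough illustration of why non-memoryless, heavy-tailed reference behavior matters, the sketch below measures LRU hit ratios over a synthetic trace whose item popularity follows a Zipf-like distribution. The Zipf stand-in and the zipf_trace/lru_hit_ratio helpers are assumptions for illustration only; they are not the self-similar model developed in the book.

```python
# Illustrative sketch (not from the book): hit ratios under a heavy-tailed
# reference pattern. A Zipf-like popularity distribution is used here as a
# simple stand-in for skewed, non-uniform access behavior.
import random
from collections import OrderedDict

def zipf_trace(num_items, length, s=1.0, seed=42):
    """Generate a reference trace where item popularity follows ~1/rank^s."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** s) for rank in range(1, num_items + 1)]
    return rng.choices(range(num_items), weights=weights, k=length)

def lru_hit_ratio(trace, cache_size):
    """Simulate an LRU cache holding `cache_size` items over the trace."""
    cache = OrderedDict()
    hits = 0
    for item in trace:
        if item in cache:
            hits += 1
            cache.move_to_end(item)
        else:
            cache[item] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # Evict the least recently used item.
    return hits / len(trace)

if __name__ == "__main__":
    trace = zipf_trace(num_items=10_000, length=200_000)
    for size in (100, 1_000, 5_000):
        print(f"cache size {size:>5}: hit ratio {lru_hit_ratio(trace, size):.2f}")
```

The sharply diminishing returns of added cache capacity in such traces are exactly the kind of behavior that a memoryless (independent, uniform) reference model fails to predict, which is the gap the book's framework addresses.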

A Primer on Compression in the Memory Hierarchy

Title: A Primer on Compression in the Memory Hierarchy
Author: Somayeh Sardashti
Publisher: Springer Nature
Pages: 70
Release: 2022-05-31
Genre: Technology & Engineering
ISBN: 303101751X

This synthesis lecture presents the current state of the art in applying low-latency, lossless hardware compression algorithms to caches, memory, and the memory/cache link. There are many non-trivial challenges that must be addressed to make data compression work well in this context. First, since compressed data must be decompressed before it can be accessed, decompression latency ends up on the critical memory access path, which imposes a significant constraint on the choice of compression algorithms. Second, while conventional memory systems store fixed-size entities like data types, cache blocks, and memory pages, these entities vary in size once a memory system employs compression; handling variable-size entities has a significant impact on how caches are organized and how resources in main memory are managed. We systematically discuss solutions in the open literature to these problems. Chapter 2 provides the foundations of data compression by first introducing the fundamental concept of value locality; it then introduces a taxonomy of compression algorithms and shows how previously proposed algorithms fit within that logical framework. Chapter 3 discusses the different ways that cache memory systems can employ compression, focusing on the trade-offs between latency, capacity, and complexity of alternative ways to compact compressed cache blocks. Chapter 4 discusses issues in applying data compression to main memory, and Chapter 5 covers techniques for compressing data on the cache-to-memory links. This book should help a skilled memory system designer understand the fundamental challenges in applying compression to the memory hierarchy and introduce the reader to the state-of-the-art techniques for addressing them.
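
To give a flavor of the value locality that such compression schemes exploit, here is a minimal base-plus-delta sketch: if the words in a cache block cluster around a common base value, the block can be stored as one base plus narrow per-word deltas. This simplified compress_block example is in the spirit of base-delta schemes covered by the taxonomy, not an algorithm reproduced from the book.

```python
# Illustrative base+delta sketch: compress a cache block of 64-bit words by
# storing one base value plus narrow per-word deltas, when every delta fits in
# `delta_bytes`. A simplified example of exploiting value locality; real
# hardware schemes are more involved.

def compress_block(words, delta_bytes=1):
    """Return (base, deltas) if the block is compressible, else None."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)  # Signed range for each delta.
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return base, deltas
    return None

def compressed_size(words, delta_bytes=1, word_bytes=8):
    """Size in bytes after compression (or the original size if incompressible)."""
    result = compress_block(words, delta_bytes)
    if result is None:
        return len(words) * word_bytes
    return word_bytes + len(words) * delta_bytes  # One base + narrow deltas.

if __name__ == "__main__":
    # Eight 64-bit pointers into the same region: high value locality.
    block = [0x7F3A_0000_1000 + 8 * i for i in range(8)]
    print(compressed_size(block))                         # 16 bytes instead of 64
    print(compressed_size(list(range(0, 8_000, 1_000))))  # Deltas too wide: stays 64
```

Note how the compressed block's size depends on the data it holds, which is exactly the variable-size-entity problem the lecture's later chapters address for cache and main memory organization.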

Cache and Memory Hierarchy Design

Title: Cache and Memory Hierarchy Design
Author: Steven A. Przybylski
Publisher: Morgan Kaufmann
Pages: 1017
Release: 1990
Genre: Computers
ISBN: 1558601368

A widely read and authoritative book for hardware and software designers, this work exposes the characteristics of performance-optimal single- and multi-level cache hierarchies by approaching the cache design process from the novel perspective of minimizing execution time.
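
The execution-time perspective can be made concrete with standard textbook arithmetic (not a result taken from this book): a larger cache may cut the miss rate yet stretch the cycle time, and only execution time reveals which design wins. All numbers below are made up for illustration.

```python
# Textbook-style arithmetic: judge two cache designs by execution time, not by
# miss rate alone. For simplicity the miss penalty is held at a fixed cycle
# count for both designs; the figures are illustrative, not from the book.

def exec_time_seconds(instructions, base_cpi, mem_refs_per_instr,
                      miss_rate, miss_penalty_cycles, cycle_time_ns):
    """CPU time = IC x (base CPI + memory stall cycles per instruction) x cycle time."""
    stall_cpi = mem_refs_per_instr * miss_rate * miss_penalty_cycles
    return instructions * (base_cpi + stall_cpi) * cycle_time_ns * 1e-9

if __name__ == "__main__":
    ic = 1e9  # one billion instructions
    # Design A: small, fast cache -> higher miss rate, shorter cycle time.
    a = exec_time_seconds(ic, base_cpi=1.0, mem_refs_per_instr=1.3,
                          miss_rate=0.04, miss_penalty_cycles=40, cycle_time_ns=1.0)
    # Design B: larger, slower cache -> lower miss rate, longer cycle time.
    b = exec_time_seconds(ic, base_cpi=1.0, mem_refs_per_instr=1.3,
                          miss_rate=0.03, miss_penalty_cycles=40, cycle_time_ns=1.3)
    print(f"A: {a:.2f} s, B: {b:.2f} s")
    # A: 3.08 s, B: 3.33 s -- despite fewer misses, B is slower here because
    # the stretched cycle time dominates.
```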

Exploiting Software Information for an Efficient Memory Hierarchy

Title: Exploiting Software Information for an Efficient Memory Hierarchy
Author:
Publisher:
Pages:
Release: 2014
Genre:
ISBN:

Static Analysis of Dynamic Allocation Using Interesting Assignments

Title: Static Analysis of Dynamic Allocation Using Interesting Assignments
Author: Charles C. Reid
Publisher:
Pages: 112
Release: 2005
Genre: Cache memory
ISBN:

This work explores the use of static analysis to provide information to a custom memory allocator, with the goal of exploiting reference locality within the memory hierarchy.
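
As a toy illustration of feeding analysis results to a custom allocator, the sketch below bump-allocates objects into per-hint pools so that objects predicted to be accessed together end up contiguous. The pool structure and the locality-hint interface are assumptions made for illustration; the thesis's "interesting assignments" analysis is not reproduced here.

```python
# Illustrative sketch: a pool allocator that co-locates objects which a static
# analysis predicts will be accessed together. The "locality hint" interface is
# an assumption for illustration, not the design described in this work.
import array

class PooledAllocator:
    """Bump-allocates fixed-size slots from per-hint pools (arenas)."""

    def __init__(self, slot_words=8, pool_slots=1024):
        self.slot_words = slot_words
        self.pool_slots = pool_slots
        self.pools = {}       # hint -> backing array
        self.next_slot = {}   # hint -> next free slot index

    def alloc(self, hint):
        """Allocate one slot from the pool associated with `hint`.

        `hint` would come from static analysis, e.g. an allocation site whose
        objects tend to be traversed together. Returns (pool, word_offset).
        """
        if hint not in self.pools or self.next_slot[hint] >= self.pool_slots:
            self.pools[hint] = array.array("Q", [0] * self.slot_words * self.pool_slots)
            self.next_slot[hint] = 0
        slot = self.next_slot[hint]
        self.next_slot[hint] += 1
        return self.pools[hint], slot * self.slot_words

if __name__ == "__main__":
    allocator = PooledAllocator()
    # Nodes of the same linked list share a hint, so they end up contiguous.
    nodes = [allocator.alloc(hint="site:build_list") for _ in range(4)]
    print([offset for _, offset in nodes])  # [0, 8, 16, 24]
```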

Reconfigurable Computing: Architectures, Tools and Applications

Title: Reconfigurable Computing: Architectures, Tools and Applications
Author: Andreas Koch
Publisher: Springer
Pages: 411
Release: 2011-03-15
Genre: Computers
ISBN: 3642194753

This book constitutes the refereed proceedings of the 7th International Symposium on Reconfigurable Computing: Architectures, Tools and Applications, ARC 2011, held in Belfast, UK, in March 2011. The 40 revised papers presented, consisting of 24 full papers, 14 poster papers, and the abstracts of 2 plenary talks, were carefully reviewed and selected from 88 submissions. The topics covered are reconfigurable accelerators, design tools, reconfigurable processors, applications, device architecture, methodology and simulation, and system architecture.