Multi-Core Cache Hierarchies

Title: Multi-Core Cache Hierarchies
Author: Rajeev Balasubramonian
Publisher: Springer Nature
Pages: 137
Release: 2022-06-01
Genre: Technology & Engineering
ISBN: 303101734X

A key determinant of overall system performance and power dissipation is the cache hierarchy since access to off-chip memory consumes many more cycles and energy than on-chip accesses. In addition, multi-core processors are expected to place ever higher bandwidth demands on the memory system. All these issues make it important to avoid off-chip memory access by improving the efficiency of the on-chip cache. Future multi-core processors will have many large cache banks connected by a network and shared by many cores. Hence, many important problems must be solved: cache resources must be allocated across many cores, data must be placed in cache banks that are near the accessing core, and the most important data must be identified for retention. Finally, difficulties in scaling existing technologies require adapting to and exploiting new technology constraints. The book attempts a synthesis of recent cache research that has focused on innovations for multi-core processors. It is an excellent starting point for early-stage graduate students, researchers, and practitioners who wish to understand the landscape of recent cache research. The book is suitable as a reference for advanced computer architecture classes as well as for experienced researchers and VLSI engineers. Table of Contents: Basic Elements of Large Cache Design / Organizing Data in CMP Last Level Caches / Policies Impacting Cache Hit Rates / Interconnection Networks within Large Caches / Technology / Concluding Remarks
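
The data-placement problem described above can be made concrete with a small sketch (not from the book). A static NUCA-style design hashes each line to a fixed home bank, while a simple locality-aware variant homes a line at the bank of the first core that touches it; the 4x4 mesh geometry, the one-bank-per-core layout, and the first-touch policy are all illustrative assumptions.

    # Sketch: last-level-cache bank placement on a 4x4 mesh with one bank
    # per core. Static NUCA hashes the address to a fixed home bank; a
    # first-touch policy instead homes a line at the requesting core's bank.
    MESH_DIM = 4  # 16 cores/banks; illustrative geometry

    def static_home_bank(line_addr):
        # Address-interleaved placement: fixed, but possibly far away.
        return line_addr % (MESH_DIM * MESH_DIM)

    def hop_distance(bank_a, bank_b):
        # Manhattan distance under X-Y routing on the mesh.
        ax, ay = bank_a % MESH_DIM, bank_a // MESH_DIM
        bx, by = bank_b % MESH_DIM, bank_b // MESH_DIM
        return abs(ax - bx) + abs(ay - by)

    first_touch_home = {}  # line_addr -> bank of the first accessing core

    def locality_aware_bank(core, line_addr):
        return first_touch_home.setdefault(line_addr, core)

    # Core 5 repeatedly touches line 0xABC: static placement lands it
    # 3 hops away, first-touch placement keeps it at core 5's own bank.
    line = 0xABC
    print(hop_distance(5, static_home_bank(line)))        # 3
    print(hop_distance(5, locality_aware_bank(5, line)))  # 0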

Multi-Core Cache Hierarchies

Title: Multi-Core Cache Hierarchies
Author: Rajeev Balasubramonian
Publisher: Morgan & Claypool Publishers
Pages: 137
Release: 2011
Genre: Computers
ISBN: 9781598297539

Locality-aware Cache Hierarchy Management for Multicore Processors

Title: Locality-aware Cache Hierarchy Management for Multicore Processors
Pages: 194
Release: 2015

Next-generation multicore processors and applications will operate on massive data with significant sharing. A major challenge in their implementation is the storage required to track the sharers of data: in conventional directory-based cache coherence protocols, this bit overhead scales quadratically with the number of cores. Another major challenge is limited cache capacity and the data movement incurred by conventional cache hierarchy organizations when dealing with massive data scales. Both factors adversely impact memory access latency and energy consumption. This thesis proposes scalable, efficient mechanisms that improve effective cache capacity (i.e., improve utilization) and reduce data movement by exploiting locality and controlling replication.

First, ACKwise, a limited directory-based protocol, is proposed to track the sharers of data in a cost-effective manner. ACKwise leverages broadcasts to implement scalable cache coherence; broadcast support can be added to a 2-D mesh network through simple changes to its routing policy, without any additional virtual channels.

Second, a locality-aware replication scheme that better manages the private caches is proposed. This scheme controls replication based on data reuse information and seamlessly adapts between private and logically shared caching of on-chip data at the fine granularity of cache lines. A low-overhead runtime profiling capability that measures the locality of each cache line is built into hardware, and private caching is allowed only for data blocks with high spatio-temporal locality (see the sketch below).

Third, a timestamp-based memory-ordering validation scheme is proposed that makes the locality-aware private cache replication scheme implementable in processors with out-of-order memory accesses that employ popular memory consistency models. This method does not rely on cache coherence messages to detect speculation violations and hence is applicable to the locality-aware protocol. The timestamp mechanism is efficient because consistency violations occur only between conflicting accesses with temporal proximity (i.e., within a few cycles of each other), so timestamps need to be stored only for a small time window.

Fourth, a locality-aware last-level cache (LLC) replication scheme that better manages the LLC is proposed. This scheme adapts replication at runtime based on fine-grained cache line reuse information, thereby balancing data locality against off-chip miss rate for optimized execution.

Finally, all of the above schemes are combined into a cache hierarchy replication scheme that provides optimal data locality and miss rates at all levels of the cache hierarchy. Its design is motivated by the experimental observation that locality-aware private cache replication and locality-aware LLC replication yield varying performance improvements across benchmarks. Together, these techniques make optimal use of the on-chip cache capacity and provide low-latency, low-energy memory access, while retaining the convenience of shared memory and preserving the same memory consistency model. On a 64-core multicore processor with out-of-order cores, Locality-aware Cache Hierarchy Replication improves completion time by 15% and reduces energy by 22% over a state-of-the-art baseline, while incurring a storage overhead of 30.7 KB per core (i.e., 10% of the aggregate cache capacity of each core).
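
As a rough illustration of the locality-aware private caching idea, the sketch below admits a private copy of a cache line only after its observed reuse at a core crosses a threshold; lines below the threshold would be served remotely from their shared-LLC home bank. The ReuseTracker structure and the threshold value are illustrative stand-ins, not the thesis's actual hardware mechanism.

    # Sketch: reuse-based replication control. A line is replicated in a
    # core's private cache only if its observed reuse at that core crosses
    # a threshold; otherwise it stays at its shared-LLC home bank.
    REPLICATION_THRESHOLD = 3  # illustrative value, not from the thesis

    class ReuseTracker:
        def __init__(self):
            self.reuse = {}  # (core, line_addr) -> observed reuse count

        def record_access(self, core, line_addr):
            key = (core, line_addr)
            self.reuse[key] = self.reuse.get(key, 0) + 1

        def allow_private_copy(self, core, line_addr):
            # High spatio-temporal locality -> replicate privately;
            # low locality -> keep one shared copy to save capacity.
            return self.reuse.get((core, line_addr), 0) >= REPLICATION_THRESHOLD

    tracker = ReuseTracker()
    for _ in range(4):
        tracker.record_access(core=0, line_addr=0x40)
    print(tracker.allow_private_copy(0, 0x40))  # True: replicate at core 0
    print(tracker.allow_private_copy(1, 0x40))  # False: access remotely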

Cache Coherence Techniques for Multicore Processors

Title: Cache Coherence Techniques for Multicore Processors
Author: Michael R. Marty
Pages: 232
Release: 2008

Cache and Memory Hierarchy Design

Title: Cache and Memory Hierarchy Design
Author: Steven A. Przybylski
Publisher: Elsevier
Pages: 238
Release: 2014-06-28
Genre: Computers
ISBN: 0080500595

An authoritative book for hardware and software designers. Caches are by far the simplest and most effective mechanism for improving computer performance. This innovative book exposes the characteristics of performance-optimal single- and multi-level cache hierarchies by approaching the cache design process through the novel perspective of minimizing execution times. It presents useful data on the relative performance of a wide spectrum of machines and offers empirical and analytical evaluations of the underlying phenomena. This book will help computer professionals appreciate the impact of caches and enable designers to maximize performance given particular implementation constraints.
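
The execution-time perspective described above is commonly summarized by the standard average memory access time (AMAT) recurrence for a multi-level hierarchy; the sketch below applies it to a two-level example, and the latencies and miss rates are illustrative, not taken from the book.

    # AMAT for a two-level hierarchy: each level's hit time is paid by
    # the fraction of accesses that reach it, plus the memory penalty
    # for the accesses that miss all the way down.
    def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_penalty):
        return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty)

    # A 1-cycle L1 missing 5% of the time, a 10-cycle L2 with a 20% local
    # miss rate, and 100-cycle memory give 1 + 0.05 * (10 + 0.20 * 100).
    print(amat(1, 0.05, 10, 0.20, 100))  # 2.5 cycles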

Microprocessor Architecture

Title: Microprocessor Architecture
Author: Jean-Loup Baer
Publisher: Cambridge University Press
Pages: 382
Release: 2010
Genre: Computers
ISBN: 0521769922

This book describes the architecture of microprocessors, from simple in-order, short-pipeline designs to out-of-order superscalars.

Reducing Load Latency in Multi-level Cache Hierarchy

Title: Reducing Load Latency in Multi-level Cache Hierarchy
Author: Majid Jalili
Release: 2023

High load latency, resulting from deep cache hierarchies and relatively slow main memory, is an important limiter of single-thread performance. Despite decades of research, reducing load latency remains a top priority for achieving high performance. Data prefetching helps reduce this latency by fetching data up the hierarchy before it is requested by load instructions, but prefetching has been shown to be lacking in many situations. I make three observations about modern processors relevant to load latency: (1) the cache hierarchy is getting deeper (an L4 is being added) and larger in size, requiring new mechanisms to traverse the memory hierarchy without increasing load latency; (2) core counts are increasing and, at the same time, applications are exhibiting more complex and diverse access patterns, demanding more and better prefetchers (a minimal stride-prefetcher sketch follows below); and (3) overall processor utilization in cloud servers is very low (
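
As a rough illustration of the prefetching idea discussed above, the sketch below implements a minimal stride prefetcher: it tracks the stride between successive addresses issued by the same load PC and, once the stride repeats, prefetches the next expected address. The table organization and the confirm-on-repeat policy are illustrative assumptions, not a description of any particular processor's prefetcher.

    # Sketch: per-PC stride prefetcher. On each demand access, compare the
    # new stride with the previously recorded one; if they match, issue a
    # prefetch for the next expected address.
    class StridePrefetcher:
        def __init__(self):
            self.table = {}  # pc -> (last_addr, last_stride)

        def access(self, pc, addr):
            prefetches = []
            if pc in self.table:
                last_addr, last_stride = self.table[pc]
                stride = addr - last_addr
                if stride != 0 and stride == last_stride:
                    # Stride confirmed: fetch the next line early.
                    prefetches.append(addr + stride)
                self.table[pc] = (addr, stride)
            else:
                self.table[pc] = (addr, 0)
            return prefetches

    pf = StridePrefetcher()
    for addr in (0x100, 0x140, 0x180, 0x1c0):
        for hint in pf.access(pc=0x4004, addr=addr):
            print(hex(hint))  # prints 0x1c0, then 0x200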