Introduction to Cache Memory
- Cache memory is popularly known as high-speed memory.
Definition
- Cache memory is a small, temporary, high-speed memory that stores the most recently used instructions and data from main memory for processing. It is placed between the CPU (Central Processing Unit) and the main memory (RAM) and serves to bridge the gap in processing speed between the CPU and main memory.
Characteristics
- Performance: Its primary purpose is to keep frequently accessed data and instructions close to the CPU so they can be supplied at a rapid rate, improving overall system performance. It acts as a high-speed buffer between the main memory and the CPU.
- Access Time: Cache memory access time is about 0.5 to 2.5 ns, much less than that of main memory, so the CPU can access data from the cache far more quickly than from RAM.
- Size: It’s smaller than the main memory (RAM) but much faster.
- Speed: It is significantly faster than main memory (RAM), and both are orders of magnitude faster than disk storage. This speed advantage comes from cache memory being built with a faster memory technology, typically SRAM (Static Random Access Memory), rather than the DRAM used for main memory.
- Hierarchy: It is organized into a hierarchy, typically consisting of multiple levels (L1, L2, L3, etc.). The levels closest to the CPU (L1 and sometimes L2) are smaller and faster, while higher levels (L3) are larger but slower.
- Cache Hit and Cache Miss: The cache stores copies of the most frequently used data and instructions from RAM. When the CPU requests data or instructions, it checks the cache first. If the data is found there, it is called a cache hit and is retrieved quickly; if it is not found, it is called a cache miss and must be fetched from the slower main memory (a minimal model of this lookup appears in the sketch after this list).
- Cache Replacement Policies: The cache has limited capacity, so when new data must be cached and the cache is full, a replacement policy determines which data to evict. Popular replacement policies include Least Recently Used (LRU), First-In-First-Out (FIFO), and Random; the sketch after this list models LRU.
- Cache Coherency: In multi-core or multi-processor systems where each core has its own cache, cache coherency protocols ensure that the data seen by each core is consistent across all caches. This prevents inconsistencies that could arise when one core modifies data that another core is also accessing (a toy invalidation model is sketched below).
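
To make the hit/miss and replacement behavior above concrete, here is a minimal sketch in Python of a cache that uses LRU replacement. The class name, capacity, and access pattern are illustrative assumptions, not a real hardware interface.

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of a cache with Least Recently Used (LRU) replacement."""

    def __init__(self, capacity):
        self.capacity = capacity          # capacity in blocks (assumed tiny)
        self.blocks = OrderedDict()       # address -> data, ordered by recency
        self.hits = self.misses = 0

    def read(self, address, memory):
        if address in self.blocks:            # cache hit: serve from the cache
            self.hits += 1
            self.blocks.move_to_end(address)  # mark block as most recently used
            return self.blocks[address]
        self.misses += 1                      # cache miss: go to main memory
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict the least recently used block
        self.blocks[address] = memory[address]
        return self.blocks[address]

memory = {addr: f"data@{addr}" for addr in range(16)}  # stand-in for RAM
cache = LRUCache(capacity=4)
for addr in [0, 1, 2, 0, 1, 3, 4, 0]:   # loading 4 evicts block 2; 0 stays cached
    cache.read(addr, memory)
print(f"hits={cache.hits}, misses={cache.misses}")  # hits=3, misses=5
```

Removing the `move_to_end` call turns the same structure into a FIFO cache, since blocks then leave in insertion order; that one-line difference is the whole gap between the two policies.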
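The coherency idea can also be modeled in miniature. Below is a deliberately simplified write-invalidate sketch, assuming a write-through policy and ignoring the ownership states that real protocols such as MESI track; all names here are hypothetical.

```python
class CoherentCache:
    """Per-core cache in a toy write-invalidate scheme (greatly simplified)."""

    def __init__(self, memory, peers):
        self.memory = memory   # shared main memory: address -> value
        self.peers = peers     # shared list of all per-core caches
        self.lines = {}        # this core's locally cached copies

    def read(self, address):
        if address not in self.lines:      # miss: fetch from shared memory
            self.lines[address] = self.memory[address]
        return self.lines[address]

    def write(self, address, value):
        for peer in self.peers:            # broadcast an invalidation so no
            if peer is not self:           # other core keeps a stale copy
                peer.lines.pop(address, None)
        self.lines[address] = value
        self.memory[address] = value       # write-through, for simplicity

memory = {0x10: 1}
peers = []
core0, core1 = CoherentCache(memory, peers), CoherentCache(memory, peers)
peers.extend([core0, core1])

core0.read(0x10); core1.read(0x10)  # both cores now cache address 0x10
core0.write(0x10, 2)                # invalidates core1's copy
print(core1.read(0x10))             # core1 misses, refetches, prints 2 (not stale 1)
```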
Benefits
- It significantly improves system performance by reducing the average memory access time. Since data is fetched from the cache much faster than from main memory, the CPU spends less time waiting for data, leading to faster program execution (the worked example below quantifies this).
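
The usual way to quantify this benefit is the average memory access time (AMAT): the hit time plus the miss rate times the miss penalty. A quick Python calculation, with illustrative numbers rather than measured ones:

```python
# AMAT = hit_time + miss_rate * miss_penalty
hit_time = 1.0       # ns, time to read the cache (assumed)
miss_penalty = 60.0  # ns, extra time to fetch from main memory (assumed)
miss_rate = 0.05     # fraction of accesses that miss (assumed)

amat = hit_time + miss_rate * miss_penalty
print(f"AMAT = {amat:.1f} ns")  # 4.0 ns, far below the 60 ns memory latency
```

Even with a 5 percent miss rate, the average access time stays close to the cache's own speed, which is why caching pays off so well.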
Drawbacks
- While cache memory provides significant performance benefits, it also adds complexity to the memory hierarchy and increases system cost. Additionally, maintaining cache coherence in multi-core systems introduces further overhead.
- Because of its very high cost per byte, the cache deployed in a system is typically only about 2 to 3 percent of the size of main memory.
Types of Cache Memory
- Historically, there were two types of cache memory: primary and secondary. Primary cache is located on the CPU itself, whereas secondary cache sits on a separate chip close to the CPU. As technology has progressed, however, the separate secondary cache has become largely obsolete, since modern processors integrate their caches on the CPU die.