If you have ever looked at the different elements that make up a processor, you have almost certainly come across the term “cache”.
What is the importance of cache memory in processors? In this article we will try to explain this component in a language that a beginner can grasp.
The advent of cache memory has a lot to do with how rapidly computer technology advanced.
Microprocessors, or Central Processing Units (CPUs), have evolved greatly over the many years they have been in use.
Engineers have been striving hard to find more and more ways of making the CPUs work faster and more efficiently while at the same time trying to reduce their physical size.
The cache memory is just ONE of many innovations that engineers came up with to improve the efficiency of the processor.
The Need for Cache Memory
Let us discuss how the concept of cache memory came about and why it is so important.
We will begin by first understanding what an instruction cycle is:
The Instruction Cycle
Say you want to run Notepad in Windows.
You move the mouse to the Notepad icon, double-click on it and voila, the Notepad window opens.
What actually happens inside the computer during this short time is as follows:
1. The Notepad program, which is stored on the hard disk, gets loaded into the RAM. As you know, any program is a set of instructions that tells the computer what to do.
2. The instructions at the start of the program get transferred to the CPU through a memory controller – which can be integrated into the CPU or be external to it.
3. The CPU executes the instructions and transfers the results back to the RAM.
This is the basic instruction cycle which repeats over and over again.
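The fetch-execute loop above can be sketched in a few lines of Python. The toy instruction set, the RAM layout and the addresses here are invented purely for illustration – a real CPU works in binary machine code, not tuples – but the three steps (fetch from RAM, execute in the CPU, write the result back) are the same:

```python
# A toy sketch of the instruction cycle. The instruction set and
# memory layout are invented for illustration only.

ram = [
    ("LOAD", 5),    # load the value 5 into the accumulator
    ("ADD", 3),     # add 3 to the accumulator
    ("STORE", 10),  # write the accumulator back to RAM address 10
    ("HALT", None),
] + [0] * 12        # remaining RAM cells hold data

accumulator = 0
pc = 0              # program counter: address of the next instruction

while True:
    op, arg = ram[pc]           # 1. fetch the instruction from RAM
    pc += 1
    if op == "LOAD":            # 2. execute it inside the CPU
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "STORE":
        ram[arg] = accumulator  # 3. transfer the result back to RAM
    elif op == "HALT":
        break

print(ram[10])  # → 8
```

Every program your computer runs boils down to this same loop, repeated billions of times per second.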
The Primary Motive Behind Cache Memory – The Program Execution Speed
As mentioned earlier, the CPU has to fetch each instruction from storage – either the hard disk or the RAM.
The problem is with the fetching and transferring speeds.
If the CPU fetches instructions from the hard disk, which has very slow access and transfer speeds, the program will execute very slowly – even if the hard disk is an SSD.
Having the program in RAM and fetching the instructions from there results in much faster program execution. Still, the CPU itself is extremely fast compared to RAM, so fetching instructions at RAM speed is NOT ideal for fast program execution.
In simpler terms, this is known as bottlenecking, where a slower component limits the potential of a faster component.
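A quick back-of-the-envelope calculation shows how dramatic the bottleneck is. The latency figures below are rough, order-of-magnitude assumptions for illustration – real numbers vary widely between hardware generations – but the gaps between the tiers are of this scale:

```python
# Rough, order-of-magnitude access latencies (assumed for illustration;
# real figures vary widely by hardware generation).
latency_ns = {
    "L1 cache": 1,
    "DRAM":     100,
    "SSD":      100_000,
    "HDD":      10_000_000,
}

# Total time to perform 1 million instruction fetches from each source:
fetches = 1_000_000
for source, ns in latency_ns.items():
    total_ms = fetches * ns / 1_000_000  # nanoseconds → milliseconds
    print(f"{source:9s}: {total_ms:,.0f} ms")
```

Even with these crude numbers, fetching from DRAM is about a hundred times slower than fetching from a cache, and fetching from a disk is slower still by several more orders of magnitude.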
So, what to do?
Cache Memory To The Rescue
At some point, engineers figured that if they could place a mini RAM inside the CPU, as intermediate storage between the RAM and the CPU, then the time needed by the CPU to fetch information from this mini RAM would obviously be much shorter than the time needed to fetch it directly from the RAM.
The engineers did add mini RAM components inside the CPU and these were given the name Cache Memory. The word Cache is pronounced as “Cash”.
It is NOT possible for a cache memory to hold the instructions for running all the software under the sun. After all, a cache memory only has storage measured in megabytes!
Thus, it holds the MOST COMMON INSTRUCTIONS that users and most software use.
The Cache Memory Capacity Challenge – Locality Of Reference
The cache memory had to be very fast, so Static RAM (SRAM) was used for it.
Conventional RAM uses Dynamic RAM (DRAM) which has high density storage using capacitors, is cost friendly and uses low power.
Its downside is that the capacitors lose their charge and need to be refreshed repeatedly. This makes data access slow (this delay is called latency).
The SRAM as found on Cache memory stores data in flip-flop circuits which make its access and data transfer very fast.
Its downside is its high cost, because of the extra circuitry needed. So, the only viable solution was to use it in very small capacities.
The push for using low storage capacity cache memory demanded that the data stored in the cache memory be most relevant for immediate execution.
As mentioned earlier, ONLY THE MOST COMMON INSTRUCTIONS are stored in the cache.
Program instructions are, by their nature, largely sequential and often repetitive, though not all the time.
So, engineers developed algorithms which selected instructions for storing in the cache memory based on either their closeness of address in RAM, named Spatial Locality, or based on the instructions being repeated, named Temporal Locality.
Collectively the concept was called Locality of Reference. It enabled most relevant data to be identified for storage in cache memory.
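Spatial locality is easy to see in a small simulation. Real caches fetch a whole "cache line" – a block of adjacent addresses – whenever one address misses, so a program scanning memory sequentially mostly hits the cache after the first access in each line. The line size and access pattern below are assumptions for illustration:

```python
# A minimal sketch of spatial locality: a miss pulls a whole cache
# line (block of adjacent addresses) into the cache, so a sequential
# scan mostly hits afterwards. Line size is an assumed value.

LINE_SIZE = 8           # addresses fetched together on a miss (assumption)
cached_lines = set()
hits = misses = 0

for address in range(64):           # a sequential access pattern
    line = address // LINE_SIZE     # which cache line holds this address
    if line in cached_lines:
        hits += 1                   # neighbour was already fetched
    else:
        misses += 1
        cached_lines.add(line)      # the miss pulls in the whole line

print(hits, misses)  # → 56 8
```

Out of 64 sequential accesses, only 8 (one per line) miss; the other 56 are served from the cache purely because nearby addresses were fetched together. Temporal locality works similarly, except the payoff comes from re-using the *same* addresses rather than neighbouring ones.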
Cache Hit And Miss
The success rate of ensuring that the cache memory always has the next instruction present when the CPU needs it is not 100%.
It may happen that the CPU does not find the next instruction in the cache memory and has to fetch it from the RAM.
Such an event is called a Cache Miss.
A Cache Hit is an event where the CPU does find the next instruction in the cache memory.
The success rate of cache hits can be calculated using the formula below.
Success rate of cache hit = [ cache hits / (cache hits + cache misses) ] x 100
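The formula above translates directly into a one-line Python function. The example counts are made up for illustration:

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Success rate of cache hits, as a percentage."""
    return hits / (hits + misses) * 100

# Example: 950 hits and 50 misses out of 1000 accesses
print(cache_hit_rate(950, 50))  # → 95.0
```

Real programs with good locality of reference routinely achieve hit rates well above 90%, which is exactly why such a small memory can have such a large effect.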
The Cache Memory Hierarchy
Today, the cache memory exists in CPUs in several levels and kinds.
Typically, Level 1 cache memories are directly interfaced with the execution portion of the CPU.
The Level 1 cache is split into an Instruction Cache, called the I-Cache, and a Data Cache, called the D-Cache.
The I-Cache is denoted by L1i and the D-Cache is denoted by L1d.
So, if a CPU has 2 cores, each core will contain the L1 caches.
Physically, they are also the closest to the core.
Level 2 cache memory is common and connects to both of the split L1 Caches. It is denoted by L2.
Level 3 cache memory is common for the whole CPU, so in our example the Level 3 cache memory serves the L2 cache of both cores.
There can be further levels in various types of CPUs, but for most home and office users, Level 3 is usually the highest level.
This hierarchy of cache memory greatly streamlines the fetching of instructions and data by the cores inside the CPU.
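The lookup order through the hierarchy can be sketched as a simple fall-through search: the core checks L1 first, then L2, then L3, and only goes to RAM when all levels miss. The cached addresses and the cycle counts below are assumptions for illustration, not figures for any specific CPU:

```python
# A sketch of the lookup order in the cache hierarchy. Contents and
# latencies (in CPU cycles) are assumed values for illustration.

hierarchy = [
    ("L1", {0x10, 0x20}, 4),               # (name, cached addresses, cycles)
    ("L2", {0x10, 0x20, 0x30}, 12),
    ("L3", {0x10, 0x20, 0x30, 0x40}, 40),
]
RAM_CYCLES = 200

def fetch(address):
    """Return (where the data was found, cycles spent)."""
    for name, contents, cycles in hierarchy:
        if address in contents:
            return name, cycles
    return "RAM", RAM_CYCLES

print(fetch(0x20))  # → ('L1', 4)
print(fetch(0x40))  # → ('L3', 40)
print(fetch(0x99))  # → ('RAM', 200)
```

Note how each level is larger but slower than the one before it; the hierarchy trades a little latency at each step for a much better chance of avoiding the trip to RAM.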
Importance of Cache Memory In Terms of Benefits Achieved
In terms of tangible benefits, the importance of cache memory is as follows.
1. Speeding Up Memory Access And Synchronizing With CPU
Using direct DRAM access from the CPU slows down the overall process of program execution, because of the disadvantages of the DRAM.
Using cache memory speeds up the process so that it matches or synchronizes with the CPU to achieve best results.
2. Very Low Latency
The latency, or the access time, is high for DRAM but very low for SRAM. This is an extremely advantageous feature of the cache memory.
3. High Throughput
The front-side-bus interfaces the CPU with the RAM.
Its width is mostly 32-bit or 64-bit. The L1 cache memory is connected to the Core via the back-side-bus.
Its width is normally much larger, e.g. 128-bit or 256-bit. A wider bus means more data can be transferred in one step, i.e. the throughput of the back-side-bus is much greater than the throughput of the front-side-bus.
Hence addition of cache memory increases the throughput to the Core.
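The effect of bus width on throughput is simple arithmetic: at the same clock rate, throughput scales linearly with the number of bits moved per transfer. The widths and the 1 GHz clock below are illustrative assumptions, not figures for a particular CPU:

```python
# Throughput scales with bus width: at the same clock, a 256-bit bus
# moves four times as much data per transfer as a 64-bit bus.
# The clock rate and widths are illustrative assumptions.

clock_hz = 1_000_000_000            # 1 GHz, assumed

def throughput_gb_per_s(bus_width_bits):
    bytes_per_transfer = bus_width_bits // 8
    return clock_hz * bytes_per_transfer / 1e9

print(throughput_gb_per_s(64))   # → 8.0  (a front-side-bus-like width)
print(throughput_gb_per_s(256))  # → 32.0 (a back-side-bus-like width)
```

Quadrupling the width quadruples the data moved per clock tick, which is why the wide back-side-bus between core and cache can feed the CPU so much faster than the narrower path to RAM.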
4. Introduction Of Buffering
The cache memory holds instructions and data that is most likely to be needed next.
In this way, the cache memory is practically acting as a buffer between the RAM and the CPU.
5. CPU Access To Most Needed Instructions
Because the cache memory holds the instructions and data most likely to be needed next, the CPU may not need to search for the upcoming required data in the RAM and will often have it readily available in the cache. This is a huge advantage of the cache memory.
6. Temporary Nature Of Stored Data
The data stored in the cache memory is temporary. This means it can be replaced immediately once it is no longer needed. Given that the cache memory capacity is very limited, this is a huge advantage.
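One common way caches decide what to replace is a least-recently-used (LRU) policy: the entry that has gone longest without being touched is the one evicted. Real CPUs use fast hardware approximations of LRU rather than this exact scheme; the sketch below is a software illustration of the idea:

```python
from collections import OrderedDict

# A sketch of replacement in a tiny least-recently-used (LRU) cache.
# LRU is one common policy; real CPUs use hardware approximations of it.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def access(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)          # recently used: keep it
        else:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)   # evict least recently used
            self.store[key] = value

cache = LRUCache(2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A", 1)       # touching A makes B the least recently used
cache.access("C", 3)       # cache is full, so B is evicted
print(list(cache.store))   # → ['A', 'C']
```

Entry "B" is silently dropped to make room for "C" – no harm done, because a fresh copy still lives in RAM and can be fetched again if needed.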
The importance of Cache Memory in today’s Microprocessor industry cannot be stressed enough.
It has become an integral part of all CPUs designed and manufactured world-wide and has, in fact, been so for many decades.
The cache memory concept has been evolving throughout its history and will definitely continue to do so.
It has turned out to be a fundamental element of Microprocessors and one which we cannot do without.
Andrew White is the founder of TechGearoid, a leading technology review & information website that is designed to help consumers make better decisions when it comes to their IT purchases. As a specialist tech writer (nerd) with over 10 years of experience, he enjoys writing about everything there is to do with modern technology & the newest market innovations. When he isn’t providing value for his readers, he’s usually drinking coffee or at the beach. Andrew lives in Sydney, Australia with his wife and family.