Cache Memory

Hello everyone!
I am here again with a new topic.
Today we will study CACHE MEMORIES AND MAPPING FUNCTIONS.

Let's start!

CACHE MEMORY

  • As we know, in a computer system the program to be executed is loaded into the main memory (DRAM). The processor then fetches the code and data from the main memory to execute the program. The DRAMs which form the main memory are slower devices, so it is necessary to insert wait states in memory read/write cycles. This reduces the speed of execution.
  • To speed up the process, high-speed memories such as SRAMs must be used. But considering the cost and space of SRAM, it is not desirable to use it to form the entire main memory.
  • The key to the solution is that most microcomputer programs work with only small sections of code and data at any particular time.
  • So, in the memory system, a small section of SRAM is added along with the main memory; this section is referred to as cache memory.
  • The program to be executed is loaded into the main memory, but the part of the program (code) and the data that are in use at a particular time are accessed from the cache memory. This is accomplished by loading the active part of the code and data from the main memory into the cache memory.
  • The cache controller looks after this swapping between the main memory and the cache memory (with the help of a DMA controller). If the processor finds that the addressed code or data is not available in the cache, it accesses that code or data from the main memory (DRAM).
  • The percentage of accesses in which the processor finds the code or data word it needs in the cache memory is called the hit rate. The hit rate is normally greater than 90 percent:
            Hit rate = (Number of hits / Total number of bus cycles) × 100%
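As a quick sanity check, the formula can be evaluated in a couple of lines of Python (the counts below are made-up, purely illustrative numbers):

    # Hit rate = (number of hits / total number of bus cycles) * 100%
    hits = 950           # illustrative: accesses satisfied from the cache
    bus_cycles = 1000    # illustrative: total number of bus cycles
    hit_rate = hits / bus_cycles * 100
    print(f"Hit rate = {hit_rate:.1f}%")   # prints: Hit rate = 95.0%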

Now let's move on to mapping functions.

MAPPING FUNCTION

  Usually, the cache memory can store a reasonable number of blocks at any given time, but this number is small compared to the total number of blocks in the main memory. The correspondence between the main memory blocks and those in the cache memory is specified by a mapping function.
 There are two main mapping techniques which decide the cache organisation:
  1. Direct mapping technique
  2. Associative mapping technique
To discuss these, let us consider a cache that consists of 128 blocks of 16 words each, for a total of 2048 (2K) words, and assume that the main memory has 64K words. These 64K words of main memory are addressable by a 16-bit address, and the memory can be viewed as 4K blocks of 16 words each. A group of 128 blocks of 16 words each in the main memory forms a page.
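The numbers in this example follow from simple arithmetic, which the short Python sketch below just recomputes:

    WORDS_PER_BLOCK = 16
    CACHE_BLOCKS = 128
    MAIN_MEMORY_WORDS = 64 * 1024                          # 64K words, 16-bit addresses

    cache_words = CACHE_BLOCKS * WORDS_PER_BLOCK           # 2048 = 2K words
    memory_blocks = MAIN_MEMORY_WORDS // WORDS_PER_BLOCK   # 4096 = 4K blocks
    pages = memory_blocks // CACHE_BLOCKS                  # 32 pages of 128 blocks each
    print(cache_words, memory_blocks, pages)               # prints: 2048 4096 32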

1. Direct mapping technique
   It is the simplest mapping technique. In this technique, each block from the main memory has only one possible location in the cache organisation. In our example, block i of the main memory maps onto block i modulo 128 of the cache, as shown in Fig 1. Therefore, whenever one of the main memory blocks 0, 128, 256, ... is loaded into the cache, it is stored in cache block 0. Blocks 1, 129, 257, ... are stored in cache block 1, and so on.
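The modulo rule can be seen directly in a short Python loop:

    # Direct mapping: main memory block i maps to cache block i mod 128
    for i in (0, 1, 128, 129, 256, 257):
        print(f"memory block {i:3d} -> cache block {i % 128}")
    # blocks 0, 128, 256 all land in cache block 0; blocks 1, 129, 257 in block 1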

To implement such a cache system, the address is divided into three fields, as shown in Fig 1. The low-order 4 bits select one of the 16 words in a block; this field is known as the word field. The second field, known as the block field, is used to distinguish a block from other blocks. Its length is 7 bits, since 2^7 = 128. When a new block enters the cache, the 7-bit block field determines the cache position in which this block must be stored. The third field is the tag field. It is used to store the high-order 5 bits of the memory address of the block. These 5 bits identify which of the 32 pages the block comes from.
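In other words, a 16-bit address is laid out as tag (5 bits) | block (7 bits) | word (4 bits). Below is a minimal Python sketch of that split using shifts and masks (the sample address is arbitrary):

    def split_address(addr):
        """Split a 16-bit address into its (tag, block, word) fields."""
        word = addr & 0xF            # low-order 4 bits: word within the block
        block = (addr >> 4) & 0x7F   # next 7 bits: cache block number
        tag = (addr >> 11) & 0x1F    # high-order 5 bits: page number (tag)
        return tag, block, word

    print(split_address(0b10110_0000011_0101))   # prints: (22, 3, 5)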


Fig 1


When memory is accessed, the 7-bit block field of each address generated by the CPU points to a particular block location in the cache. The high-order 5 bits of the address are compared with the tag bits associated with that cache location. If they match, then the desired word is in that block of the cache. If there is no match, then the block containing the required word must first be read from the main memory and loaded into the cache. This means that to determine whether the requested word is in the cache, only the tag field needs to be checked, so a single comparison suffices.
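This lookup can be modelled with a toy tag store in Python (no data array; the names here are invented for the sketch):

    tags = [None] * 128    # one tag entry per cache block; None means invalid

    def access(addr):
        """Return 'hit' or 'miss' for a 16-bit address, loading the block on a miss."""
        block = (addr >> 4) & 0x7F   # 7-bit block field selects the cache block
        tag = addr >> 11             # 5-bit tag field identifies the page
        if tags[block] == tag:       # the single tag comparison decides hit or miss
            return "hit"
        tags[block] = tag            # miss: the block is read in from main memory
        return "miss"

    print(access(0x1234), access(0x1234))   # prints: miss hit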
  
    The main drawback of a direct-mapped cache is that if the processor needs to access the same memory locations from two different pages of the main memory frequently, the controller has to access the main memory frequently, since only one of these locations can be in the cache at a time. Therefore, we can say that a direct-mapped cache is easy to implement; however, it is not very flexible.
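Continuing the toy model above, the drawback is easy to demonstrate: two addresses with the same block field but different tags keep evicting each other, so every access misses:

    # Two addresses from different pages that share block field 3
    a = (1 << 11) | (3 << 4)   # tag = 1, block = 3
    b = (2 << 11) | (3 << 4)   # tag = 2, block = 3
    for addr in (a, b, a, b):
        print(access(addr))    # prints: miss miss miss miss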

2. Associative Mapping Technique

        Fig 2 shows the associative mapping technique.
In this technique, a main memory block can be placed into any cache block position. As there is no fixed block position, the memory address has only two fields: word and tag.
This technique is also referred to as a fully associative cache.

Twelve tag bits are required to identify a memory block when it is resident in the cache. The high-order 12 bits of an address received from the CPU are compared with the tag bits of each block of the cache to see whether the desired block is present.
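A minimal Python sketch of this search for the same 128-block cache is shown below; note that in hardware all 128 tag comparisons happen in parallel, while the loop here only models the logic:

    assoc_tags = [None] * 128    # tags of the 128 cache blocks, in no fixed order

    def assoc_lookup(addr):
        """Search every tag entry for the high-order 12 bits of the address."""
        tag = addr >> 4                        # 16-bit address minus the 4 word bits
        for block, t in enumerate(assoc_tags):
            if t == tag:                       # match: this cache block holds the word
                return block
        return None                            # no match anywhere: a cache miss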


Fig 2

Once the desired block is found, the 4-bit word field is used to identify the necessary word within the cache block. This technique gives complete freedom in choosing the cache location in which to place a memory block, so the memory space in the cache can be used more efficiently. A new block that has to be loaded into the cache has to replace (remove) an existing block only if the cache is full. In such a situation, it is necessary to use one of the possible replacement algorithms to select the block to be replaced.
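As a concrete illustration, here is a first-in first-out (FIFO) replacement sketch that extends the fully associative model above; FIFO is only one of the possible algorithms (LRU and random replacement are common alternatives):

    from collections import deque

    load_order = deque()    # cache blocks in the order they were filled, oldest first

    def assoc_access(addr):
        """Fully associative access with FIFO replacement when the cache is full."""
        tag = addr >> 4
        if tag in assoc_tags:        # hit: the block is already resident
            return "hit"
        if None in assoc_tags:       # cache not yet full: use a free block
            block = assoc_tags.index(None)
        else:                        # cache full: evict the oldest loaded block
            block = load_order.popleft()
        assoc_tags[block] = tag      # load the new block from main memory
        load_order.append(block)
        return "miss"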

