I think I see your confusion. The TLB and the data cache are two separate mechanisms. They are both caches of a sort, but they cache different things:
- The TLB is a cache for the virtual-to-physical address lookup. The page tables provide a way to map a virtual address to a physical address, by looking up the virtual address in the page tables. However, doing the lookup in the page tables is slow (it involves 2-3 memory loads). If the processor had to do this lookup every time any instruction accessed memory, this would cause a substantial slowdown. Therefore, the TLB acts as a dedicated cache for this lookup. The TLB has a small number of TLB entries, where each TLB entry contains both a virtual address and its corresponding physical address. The TLB lets the processor very quickly convert virtual addresses to physical addresses. If an instruction asks the processor to do some memory operation on a (virtual) address, the processor first checks whether the TLB contains an entry for that virtual address. If it does, that's called a "cache hit" for the TLB lookup, and since the TLB entry also contains the translated physical address, the processor immediately knows what physical address to use. If it doesn't, that's a cache miss for the TLB lookup, and the processor has to laboriously do the virtual-to-physical conversion by walking the page table. (Once it finishes doing that conversion, it adds an entry to the TLB so that future conversions of that virtual address will happen much more quickly.)
- The data cache is a cache for the contents of memory. Main memory lets you specify a physical address and read the value at that physical address. However, main memory is slow. If we had to go to main memory every time we wanted to do any memory operation, our processor would be very slow. Therefore, the data cache acts as a dedicated cache for memory reads. The data cache has some cache entries, where each cache entry contains a physical address and the value of memory at that address. The data cache lets the processor very quickly read from memory. When the processor wants to read memory at some (physical) address, it first checks the data cache to see whether it contains a cache entry for that address. If it does, this is known as a "cache hit" (in the data cache), and the processor can immediately use the data value stored in that cache entry, without needing to contact main memory. If it doesn't, then this is a "cache miss" (for the data cache), and the processor needs to go to main memory. (After the processor receives the value at that address from main memory, it adds a cache entry to the data cache so that later attempts to read that same address will hit in the data cache.) A toy model of both mechanisms is sketched just after this list.
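To make the distinction concrete, here is a minimal Python sketch of the two mechanisms. This is only an illustrative model, not how the hardware is actually built: real TLBs and data caches are small, fixed-size, set-associative structures with eviction policies, whereas here each one is just a dictionary, and the page table, page size, and addresses are all invented for the example.

```python
PAGE_SIZE = 4096

# Pretend page table: virtual page number -> physical page number (invented values).
page_table = {0x0040: 0x1234}

# Pretend main memory: physical address -> byte value.
main_memory = {0x1234 * PAGE_SIZE + i: i % 256 for i in range(PAGE_SIZE)}

tlb = {}         # TLB: virtual page number -> physical page number
data_cache = {}  # data cache: physical address -> value

def translate(vaddr):
    """Convert a virtual address to a physical address, using the TLB."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:              # TLB hit: the translation is already cached
        ppn = tlb[vpn]
    else:                       # TLB miss: walk the page table (slow)
        ppn = page_table[vpn]
        tlb[vpn] = ppn          # remember the translation for next time
    return ppn * PAGE_SIZE + offset

def read(paddr):
    """Read the value at a physical address, using the data cache."""
    if paddr in data_cache:     # cache hit: no need to touch main memory
        return data_cache[paddr]
    value = main_memory[paddr]  # cache miss: go to (slow) main memory
    data_cache[paddr] = value   # remember the value for next time
    return value
```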
They are both caches, but they serve different purposes. The processor uses both for each memory operation: it first uses the TLB to convert the virtual address to a physical address, then it checks the data cache to speed up reading the value stored in memory at that address.
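Putting the two together, a single load from a virtual address in the toy model above would look something like this (again, just a sketch of the sequence; a real processor does these steps in hardware):

```python
def load(vaddr):
    """One memory load: virtual -> physical via the TLB, then physical -> value via the data cache."""
    paddr = translate(vaddr)    # step 1: TLB lookup (or page-table walk on a miss)
    return read(paddr)          # step 2: data cache lookup (or main memory on a miss)

# First access: TLB miss and data cache miss, so both slow paths run.
print(load(0x40010))   # -> 16
# Second access to the same address: both lookups now hit, so it's fast.
print(load(0x40010))   # -> 16
```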