Computer History: Cache Memory Part 1 of 2

When we discuss early digital computer memory (see Computer History – Core Memory), it is worth mentioning that today’s standard RAM (Random Access Memory) is chip memory. This fits the commonly cited application of Moore’s Law (Gordon Moore was one of the founders of Intel), which states that component density in integrated circuits, often paraphrased as performance per unit cost, doubles roughly every 18 months. Early core memory had cycle times measured in microseconds; today we are talking nanoseconds.

You may be familiar with the term cache as applied to PCs. It is one of the performance features mentioned when talking about the latest CPU or hard drive. A processor can have L1 or L2 cache, and disks have caches of various sizes. Some programs also keep a cache, also known as a buffer, for example when writing data to a CD burner. Early CD recording programs suffered from buffer underruns, and the end result was a nice supply of coasters!

Mainframe systems have used caching for many years. The concept became popular in the 1970s as a way to speed up memory access time, the period when core memory was being phased out and replaced by integrated circuits, or chips. Although the chips were far more efficient in terms of physical space, they brought their own problems of reliability and heat generation. Chips of one design were faster but hotter and more expensive; chips of another design were cheaper but slower. Speed has always been one of the most important factors in computer sales, and design engineers have always looked for ways to improve performance.

The concept of cache memory is based on the fact that a computer is inherently a sequential processing machine. Of course, one of the great advantages of a stored program is that it can branch or jump out of sequence, the subject of another article in this series. However, instructions still follow one another often enough to make a buffer or cache a useful addition to the computer.

The basic idea of the cache is to predict what data the CPU will need from memory next. Consider a program made up of a series of instructions, each stored in a memory location, say from address 100 upwards. The instruction at location 100 is read from memory and executed by the CPU, then the next instruction is read from location 101 and executed, then 102, 103, and so on.
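To make the idea concrete, here is a minimal sketch in C of the simplest kind of cache. It is illustrative only, not modelled on any particular machine: when the requested address is not already in the small fast buffer, a block of consecutive words is copied from slow memory, so the next few sequential fetches can be served from the buffer instead.

```c
#include <stdio.h>

#define MEM_SIZE  256   /* words of (slow) main memory            */
#define LINE_SIZE   4   /* words copied into the buffer per miss  */

static int memory[MEM_SIZE];        /* stands in for core memory     */
static int cache_line[LINE_SIZE];   /* one small, fast buffer        */
static int line_base = -1;          /* address of first buffered word */
static int slow_reads = 0, fast_reads = 0;

/* Fetch one word; go to slow memory only when the word is not
 * already in the buffered block of consecutive addresses.
 * (For simplicity, assumes addr + LINE_SIZE stays within MEM_SIZE.) */
static int fetch(int addr)
{
    if (line_base < 0 || addr < line_base || addr >= line_base + LINE_SIZE) {
        line_base = addr;                       /* cache miss          */
        for (int i = 0; i < LINE_SIZE; i++)     /* read a whole block  */
            cache_line[i] = memory[line_base + i];
        slow_reads++;
    } else {
        fast_reads++;                           /* cache hit           */
    }
    return cache_line[addr - line_base];
}

int main(void)
{
    for (int i = 0; i < MEM_SIZE; i++)
        memory[i] = i;                          /* dummy "instructions" */

    /* Execute a purely sequential program: addresses 100, 101, 102, ... */
    for (int addr = 100; addr < 120; addr++)
        fetch(addr);

    printf("slow memory reads: %d, cache hits: %d\n", slow_reads, fast_reads);
    return 0;
}
```

Running it over a purely sequential stretch of addresses shows most fetches being satisfied from the fast buffer, which is exactly why sequential execution makes a cache worthwhile.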

If the memory in question is core memory, it might take around 1 microsecond to read an instruction. If the processor takes, say, 100 nanoseconds to execute that instruction, it then has to wait 900 nanoseconds for the next one (1 microsecond = 1,000 nanoseconds). The effective rate of the CPU is therefore one instruction per microsecond, even though the processor itself is ten times faster. (The times and speeds quoted are typical but do not refer to any specific hardware; they merely illustrate the principles involved.)
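As a quick check on that arithmetic, the same illustrative figures (again, not taken from any real machine) can be plugged into a few lines of C:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative figures from the text, not any specific hardware. */
    const double mem_read_ns = 1000.0;  /* core memory read time (1 us)        */
    const double execute_ns  =  100.0;  /* CPU time to execute one instruction */

    double wait_ns     = mem_read_ns - execute_ns;  /* time the CPU sits idle   */
    double actual_mips = 1000.0 / mem_read_ns;      /* memory-bound throughput  */
    double ideal_mips  = 1000.0 / execute_ns;       /* throughput if memory kept up */

    printf("CPU idle time per instruction : %.0f ns\n", wait_ns);
    printf("effective rate                : %.0f million instructions/s\n", actual_mips);
    printf("rate if memory kept pace      : %.0f million instructions/s\n", ideal_mips);
    return 0;
}
```

With a 1 microsecond memory and a 100 nanosecond CPU, the processor spends 900 ns of every instruction waiting and runs at one tenth of the rate it could achieve if memory kept pace; that gap is what the cache is meant to close.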
