
CPU caches are more complicated than most other kinds of cache, and they are divided into two levels, Level 1 and Level 2, usually called L1 and L2. The L1 cache is a memory built into the CPU itself, and it is the first place the CPU looks when it needs data. The L2 cache is another memory, but instead of feeding the CPU directly it feeds the L1 cache; in that sense the L2 cache can be understood as a cache for the L1 cache.
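
To make that lookup order concrete, here is a small toy model in C; the membership rules and cycle counts are invented purely for illustration and are not meant to describe any real processor.

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of the lookup path: try L1 first, then L2, then main memory.
     * The membership rules and cycle counts are made up for illustration. */

    static bool in_l1(long addr) { return addr % 64 == 0; }  /* made-up rule */
    static bool in_l2(long addr) { return addr % 8 == 0; }   /* made-up rule */

    static int load(long addr)
    {
        int cycles = 4;                       /* assumed L1 lookup cost      */
        if (in_l1(addr)) {
            printf("addr %ld: L1 hit, %d cycles\n", addr, cycles);
            return cycles;
        }
        cycles += 12;                         /* assumed L2 lookup cost      */
        if (in_l2(addr)) {
            printf("addr %ld: L2 hit, %d cycles\n", addr, cycles);
            return cycles;
        }
        cycles += 200;                        /* assumed main-memory penalty */
        printf("addr %ld: served from DRAM, %d cycles\n", addr, cycles);
        return cycles;
    }

    int main(void)
    {
        load(128);  /* "hits" in L1 under the made-up rule  */
        load(40);   /* misses L1, found in L2               */
        load(7);    /* misses both, goes out to main memory */
        return 0;
    }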

L2 caches may be built into the CPU in the same way as L1 caches, but they can also be located on a separate die inside a multi-chip package (MCP) module, or on a completely separate chip. With some exceptions, L1 and L2 caches are implemented as SRAM (static RAM), while the main memory of the computer is DRAM (dynamic RAM) or some variation of DRAM. Some processors add another cache level named L3.

The main difference between L1 and L2 (and L3 where it exists) is size. L1 is smaller than L2 and L3, which is part of what makes it so quick to search, so access to it is much faster. If the data is not found in L1, it is looked up in the larger L2 cache, and if it is not there either, an access to main memory is needed, which is much slower than hitting either L1 or L2.
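
One way to see the effect of these sizes from software is to time repeated passes over buffers of different sizes and watch the cost per element rise once the buffer no longer fits in L1, and again once it outgrows L2. The sketch below assumes a POSIX system (for clock_gettime) and uses arbitrary buffer sizes and pass counts; it is a rough illustration rather than a careful benchmark.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Time many passes over buffers of growing size.  When the buffer stops
     * fitting in L1, and later in L2, nanoseconds per element should rise. */

    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        /* Working sets from 16 KB (fits in most L1 caches) up to 16 MB
         * (well past typical L2 sizes). */
        for (size_t kb = 16; kb <= 16 * 1024; kb *= 4) {
            size_t n = kb * 1024 / sizeof(int);
            int *buf = malloc(n * sizeof(int));
            if (!buf)
                return 1;
            for (size_t i = 0; i < n; i++)
                buf[i] = (int)i;

            volatile long sum = 0;            /* keep the loop from being optimised away */
            double t0 = seconds();
            for (int pass = 0; pass < 100; pass++)
                for (size_t i = 0; i < n; i++)
                    sum += buf[i];
            double t1 = seconds();

            printf("%6zu KB: %.2f ns/element\n",
                   kb, (t1 - t0) * 1e9 / (100.0 * n));
            free(buf);
        }
        return 0;
    }

On modern processors the hardware prefetcher can hide much of the difference for a simple sequential pass like this one; random or pointer-chasing access patterns expose the cache boundaries much more sharply.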

How the caches are managed depends on the processor architecture, but there are two main methods: inclusive and exclusive. In some processors, data stored in the L1 cache is also present in the L2; this is called inclusive or, more precisely, strictly inclusive. The AMD Athlon, for example, uses an exclusive policy, so a given piece of data will be in either L1 or L2 but never in both. The Intel Pentium II, III and 4 use a middle ground: the data is not required to be in both caches, but usually it is. This is called a mainly inclusive policy.
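
The two policies can be sketched as a rule about where a line is allowed to live once it has been brought into L1. The toy C code below tracks nothing but line addresses in two small arrays standing in for L1 and L2; the capacities are arbitrary and all the machinery of real caches (sets, ways, replacement) is deliberately left out.

    #include <stdbool.h>
    #include <stdio.h>

    #define CAP 8   /* arbitrary capacity for this sketch */

    struct level { long lines[CAP]; int n; };

    static bool contains(struct level *c, long line)
    {
        for (int i = 0; i < c->n; i++)
            if (c->lines[i] == line)
                return true;
        return false;
    }

    static void insert(struct level *c, long line)
    {
        if (c->n < CAP && !contains(c, line))
            c->lines[c->n++] = line;
    }

    static void take_out(struct level *c, long line)
    {
        for (int i = 0; i < c->n; i++)
            if (c->lines[i] == line) {
                c->lines[i] = c->lines[--c->n];
                return;
            }
    }

    /* Inclusive fill: a line loaded into L1 is kept in L2 as well. */
    static void fill_inclusive(struct level *l1, struct level *l2, long line)
    {
        insert(l2, line);
        insert(l1, line);
    }

    /* Exclusive fill: the line moves into L1 and is dropped from L2,
     * so the two levels never hold the same line at the same time. */
    static void fill_exclusive(struct level *l1, struct level *l2, long line)
    {
        take_out(l2, line);
        insert(l1, line);
    }

    int main(void)
    {
        struct level l1 = {0}, l2 = {0};

        fill_inclusive(&l1, &l2, 0x1000);
        printf("inclusive: in L1=%d, in L2=%d\n",
               contains(&l1, 0x1000), contains(&l2, 0x1000));   /* 1, 1 */

        l1.n = l2.n = 0;
        fill_exclusive(&l1, &l2, 0x1000);
        printf("exclusive: in L1=%d, in L2=%d\n",
               contains(&l1, 0x1000), contains(&l2, 0x1000));   /* 1, 0 */
        return 0;
    }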

Which method is better is a complicated question. The exclusive policy can hold more data overall because nothing is duplicated between the two caches, and that advantage grows as the L1 becomes larger relative to the L2. The major advantage of the inclusive policy shows up in systems with several processors: when another device or processor needs to invalidate some data, it only has to check the L2 cache, since anything in L1 is guaranteed to be in L2 as well. With an exclusive policy, both the L1 and the L2 cache have to be checked, which makes the operation slower.
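
As a rough worked example, using typical rather than chip-specific figures: with a 64 KB L1 and a 512 KB L2, an exclusive arrangement can hold up to 64 + 512 = 576 KB of distinct data, while a strictly inclusive one can hold at most the 512 KB that fits in L2, because every L1 line is duplicated there. The closer the L1 is in size to the L2, the more capacity the inclusive policy gives up. On the other hand, an invalidation from outside is answered with a single probe of L2 in the inclusive case, but may require probing both levels in the exclusive case.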

The L1 cache sits physically next to the processing core and is implemented in SRAM (static RAM), which is fast and holds its contents as long as power is applied; it does not require refresh cycles. It is generally split in two, with one half used for instruction code and the other for data.

The L2 cache is also physically close to the core, but it is larger and not as fast to reach as L1. Like L1 it is normally built from SRAM; it is main memory, built from DRAM, that must go through refresh cycles many times a second and cannot be read during a refresh.

The L3 cache has come into vogue with the advent of multi-core CPUs. These chips have separate L1 and L2 caches for each core, plus a common, fairly large L3 shared by all the cores. The L3 is usually about the size of all the other caches combined, or a few multiples of that, and is normally implemented in SRAM as well, though some designs have used embedded DRAM for it. One notable behaviour is that on a multi-core chip running software that cannot use, or does not need, every core, a core will flush its caches into the L3 before it goes dormant.
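
To put the "size of all the other caches combined" claim into purely illustrative numbers: a four-core chip with 64 KB of L1 and 256 KB of L2 per core carries 4 × (64 + 256) KB = 1.25 MB of per-core cache, so a 4 MB to 8 MB L3 would indeed be a few multiples of everything underneath it.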
