
Saturday, February 14, 2009

Intro to Caching, Caching algorithms and caching frameworks part 3

Introduction:
In part 1 we covered the basics and terminology of caching and surveyed the well-known
replacement policies; in part 2 we implemented some of those famous replacement
policies. In this part we will continue with the implementation of two famous algorithms:
LFU and LRU. Again, the implementation in this article is for the sake of demonstration;
we concentrate only on the replacement algorithm and skip other concerns such as
loading data. To use it in practice you will have to do some extra work, but you can base
your implementation on it.
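A note before the code: the snippets in this article omit the surrounding class skeleton. They assume a CacheElement holder object, a Hashtable named table that maps keys to elements, a backing array named cache, a counter numEntries, and helpers such as isFull() and getElementsFromTable(). None of that is shown here, so below is a minimal sketch of what CacheElement presumably looks like; the field names are inferred from the getters and setters used in the code, and everything else is an assumption.

// A hedged sketch of the CacheElement holder assumed by the snippets
// below; inferred from usage, not taken from the original article.
public class CacheElement {

    private Object objectKey;
    private Object objectValue;
    private int hitCount; // used by LFU to count how often the entry was read
    private int index;    // the entry's slot in the backing cache array

    public Object getObjectKey() { return objectKey; }
    public void setObjectKey(Object objectKey) { this.objectKey = objectKey; }

    public Object getObjectValue() { return objectValue; }
    public void setObjectValue(Object objectValue) { this.objectValue = objectValue; }

    public int getHitCount() { return hitCount; }
    public void setHitCount(int hitCount) { this.hitCount = hitCount; }

    public int getIndex() { return index; }
    public void setIndex(int index) { this.index = index; }
}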

Meet LFU Cache Implementation:


public synchronized Object getElement(Object key) {

    Object obj = table.get(key);

    if (obj != null) {
        CacheElement element = (CacheElement) obj;
        element.setHitCount(element.getHitCount() + 1);
        return element.getObjectValue();
    }
    return null;
}

public final synchronized void addElement(Object key, Object value) {

    int index;
    Object obj = table.get(key);

    if (obj != null) {
        // Just replace the value.
        CacheElement element = (CacheElement) obj;
        element.setObjectValue(value);
        element.setObjectKey(key);
        return;
    }

    if (!isFull()) {
        index = numEntries;
        ++numEntries;
    } else {
        // Cache is full: evict the least frequently used entry
        // and reuse its slot.
        CacheElement element = removeLfuElement();
        index = element.getIndex();
        table.remove(element.getObjectKey());
    }

    cache[index].setObjectValue(value);
    cache[index].setObjectKey(key);
    cache[index].setIndex(index);
    table.put(key, cache[index]);
}

public CacheElement removeLfuElement() {

    CacheElement[] elements = getElementsFromTable();

    CacheElement leastElement = leastHit(elements);
    return leastElement;
}

public static CacheElement leastHit(CacheElement[] elements) {

    CacheElement lowestElement = null;

    for (int i = 0; i < elements.length; i++) {
        CacheElement element = elements[i];
        if (lowestElement == null) {
            lowestElement = element;
        } else if (element.getHitCount() < lowestElement.getHitCount()) {
            lowestElement = element;
        }
    }
    return lowestElement;
}
Analyzing LFU Cache Code (Talk Show):

Presenter: It is getting hotter and hotter now. Our next contestant is LFU cache, please
make some noise for it!

The audience began to scream for LFU, which made LFU hesitant.

Hello, I am LFU. When the cache client wants to add a new element and the cache is full
(no room for the new entry), I have to kick out the least frequently used entry. I do that
with the help of the removeLfuElement method, which gives me the least frequently used
element; once I have it, I remove that entry and place the new entry in its slot.

} else {
    CacheElement element = removeLfuElement();
    index = element.getIndex();
    table.remove(element.getObjectKey());
}

If we dive into this method... I am saying, if we dive into this method (still nothing
happened).

LFU tried pressing the next button on the presentation remote control (to get the next
presentation slide) but it didn't work.

Ahh, now we are talking. OK, if we dive into this method we will see that it just gets all
the elements in the cache by calling the getElementsFromTable method and then returns
the element with the fewest hits.

public CacheElement removeLfuElement() {

    CacheElement[] elements = getElementsFromTable();

    CacheElement leastElement = leastHit(elements);
    return leastElement;
}

The leastHit method loops over the cache elements and checks whether the current
element has the fewest hits; if so, it becomes the lowestElement, which is the entry I am
going to replace with the new one.

public static CacheElement leastHit(CacheElement[] elements) {

    CacheElement lowestElement = null;

    for (int i = 0; i < elements.length; i++) {
        CacheElement element = elements[i];
        if (lowestElement == null) {
            lowestElement = element;
        } else if (element.getHitCount() < lowestElement.getHitCount()) {
            lowestElement = element;
        }
    }
    return lowestElement;
}

LFU stopped talking and waited for a reaction from the audience, and the only reaction it
got was head scratching (the audience didn't get some of it).

One of the production team whispered to LFU cache: you didn't mention how the lowest
element is distinguished from the others.

Then LFU cache started talking again and said: by default, when an element is added to
the cache its hitCount is the same as every other new element's, so how do we handle the
hit count?

Every time I encounter a cache hit I increment the hit count of the entry and then return
the entry the cache client asked for, which would be something like this:

public synchronized Object getElement(Object key) {

    Object obj = table.get(key);

    if (obj != null) {
        CacheElement element = (CacheElement) obj;
        element.setHitCount(element.getHitCount() + 1);
        return element.getObjectValue();
    }
    return null;
}
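To see the hit counts at work, here is a small illustrative harness. It assumes a hypothetical LFUCache class of capacity 3 built around the methods above; the constructor and the capacity handling are not shown in the article, so treat this as a sketch rather than the article's actual API.

// Hypothetical usage sketch; the LFUCache class and its constructor are
// assumptions, only addElement/getElement come from the article.
LFUCache cache = new LFUCache(3);

cache.addElement("a", "value-a");
cache.addElement("b", "value-b");
cache.addElement("c", "value-c");

cache.getElement("a"); // hit count of "a" is now 1
cache.getElement("c"); // hit count of "c" is now 1

// The cache is full, so the entry with the fewest hits ("b", still at 0)
// is evicted to make room for the new entry.
cache.addElement("d", "value-d");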

Magnifying the Code:


Did anyone say magnification?

Meet LRU Cache Implementation:
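As with LFU, the class skeleton is omitted. From the code below, the LRU cache evidently threads a doubly linked list through two int arrays, on top of the same table and cache structures. A hedged guess at the fields involved (names inferred from the code, initial values assumed):

// Assumed fields for the array-based LRU cache; inferred from usage,
// not shown in the original article.
private Hashtable table = new Hashtable(); // key -> CacheElement
private CacheElement[] cache;              // the backing slots
private int[] next;                        // next[i] = slot after i, -1 if none
private int[] prev;                        // prev[i] = slot before i, -1 if none
private int head;                          // index of the most recently used slot
private int tail;                          // index of the least recently used slot
private int numEntries;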


private void moveToFront(int index) {
    int nextIndex, prevIndex;

    if (head != index) {
        nextIndex = next[index];
        prevIndex = prev[index];

        // Only the head has a prev entry that is an invalid index,
        // so we don't check.
        next[prevIndex] = nextIndex;

        // Make sure index is valid. If it isn't, we're at the tail
        // and don't set prev[next].
        if (nextIndex >= 0)
            prev[nextIndex] = prevIndex;
        else
            tail = prevIndex;

        prev[index] = -1;
        next[index] = head;
        prev[head] = index;
        head = index;
    }
}

public final synchronized void addElement(Object key, Object value) {

    Object obj = table.get(key);

    if (obj != null) {
        // Just replace the value, but move it to the front.
        CacheElement entry = (CacheElement) obj;
        entry.setObjectValue(value);
        entry.setObjectKey(key);

        moveToFront(entry.getIndex());
        return;
    }

    // If we haven't filled the cache yet, place in next available spot
    // and move to front.
    if (!isFull()) {
        if (numEntries > 0) {
            prev[numEntries] = tail;
            next[numEntries] = -1;
            moveToFront(numEntries);
        }
        ++numEntries;
    } else {
        // We replace the tail of the list.
        table.remove(cache[tail].getObjectKey());
        moveToFront(tail);
    }

    cache[head].setObjectValue(value);
    cache[head].setObjectKey(key);
    table.put(key, cache[head]);
}

Analyzing LRU Cache Code (Talk show):


After LFU finished talking there was not much screaming; the audience didn't like the
presentation, and LFU had been hesitant while talking. That gave a big push to LRU, which
started by saying:

This time I will also consider the case when the cache is not full. I am a little more
complex than those other algorithms. When the cache isn't full and this is the first entry,
I just increment numEntries, which represents the number of entries in the cache.
For any entry after the first I also need to move it to the front by calling the moveToFront
method (we will talk about it soon); I don't do this for the first entry because it is, for
sure, the first element.

So let's see some action.

As you can see, I state that the prev pointer of the new entry gets the tail value and its
next pointer gets -1 (undefined, in other words); these are just initial values.

After adding the new entry (when it isn't the first entry), I move it to the front.

if (!isFull()) {
    if (numEntries > 0) {
        prev[numEntries] = tail;
        next[numEntries] = -1;
        moveToFront(numEntries);
    }
    ++numEntries;
}

The moveToFront method moves an entry to the head of the list, so that the least
recently used elements sink toward the tail.

Before I do any move I check that the head is not equal to the current index (this check
fails when we have only one entry). If it passes, I store the next pointer of the current
entry (which points to the next entry, as in a linked list) in nextIndex, and its prev
pointer (which points to the previous entry, as in a linked list) in prevIndex.

int nextIndex, prevIndex;

if (head != index) {
    nextIndex = next[index];
    prevIndex = prev[index];

Then I assign nextIndex to the next pointer of the previous entry, unlinking the current
entry from its spot:

// Only the head has a prev entry that is an invalid index,
// so we don't check.
next[prevIndex] = nextIndex;

After that I check nextIndex: if it is greater than or equal to 0, the prev pointer of the
next entry gets the value of prevIndex; otherwise we were at the tail, so the tail becomes
prevIndex.

// Make sure index is valid. If it isn't, we're at the tail
// and don't set prev[next].
if (nextIndex >= 0)
    prev[nextIndex] = prevIndex;
else
    tail = prevIndex;

And because I am moving this entry to the front, there won't be any previous entry for it,
so I assign -1 to its prev pointer. The next pointer of the current entry gets the old head,
the prev pointer of the old head gets the current index, and finally the head is updated to
the current index:
prev[index] = -1;
next[index] = head;
prev[head] = index;
head = index;
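
To make the pointer surgery concrete, here is a hand-worked trace; the starting state and indices are illustrative, not from the article. Assume three slots linked as head = 0 <-> 1 <-> 2 = tail, and we call moveToFront(1):

// Illustrative trace of moveToFront(1).
// Starting state (assumed): next = [1, 2, -1], prev = [-1, 0, 1],
// head = 0, tail = 2
//
//   nextIndex = next[1] = 2, prevIndex = prev[1] = 0
//   next[0] = 2    // unlink: slot 0 now points past slot 1
//   prev[2] = 0    // nextIndex >= 0, so fix the back pointer
//   prev[1] = -1   // slot 1 becomes the new head...
//   next[1] = 0    // ...pointing at the old head
//   prev[0] = 1
//   head    = 1
//
// Resulting state: head = 1 <-> 0 <-> 2 = tail
// next = [2, 0, -1], prev = [1, -1, 0]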

Magnifying the Code:


It is magnifying time! Grab your magnifying glass, we are going to see some interesting
stuff here.

It is Confession Time!:
LRU didn't mention that it is possible to implement the LRU algorithm in a simpler way.
Our previous implementation is based on arrays; the other implementation that LRU cache
didn't mention uses LinkedHashMap, which was introduced in JDK 1.4.

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache2 extends LinkedHashMap {

    private static final int MAX_ENTRIES = 3;

    public LRUCache2() {
        // true = access order, so iteration goes from least recently
        // used to most recently used.
        super(MAX_ENTRIES + 1, .75F, true);
    }

    // This method is invoked by put and putAll after inserting a new entry
    // into the map. It allows the map to have up to 3 entries and then
    // deletes the eldest entry each time a new entry is added.
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return this.size() > MAX_ENTRIES;
    }
}
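
A minimal usage sketch (the keys and values here are just examples) showing the access-order eviction in action:

LRUCache2 cache = new LRUCache2();

cache.put("a", "first");
cache.put("b", "second");
cache.put("c", "third");

cache.get("a");           // touching "a" makes it the most recently used

cache.put("d", "fourth"); // the size now exceeds MAX_ENTRIES, so the least
                          // recently used entry ("b") is evicted automatically

System.out.println(cache.keySet()); // prints [c, a, d]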

For sure, the LinkedHashMap solution is less time-consuming to write than the array
solution, and it is more convenient because you leave the handling of deletion and so on
to the JDK itself, so you won't have to implement such stuff yourself.
OSCache uses such an implementation in its LRU cache.

Conclusion:
We have seen how to implement the LFU and LRU algorithms, and two ways to implement
LRU. It is up to you to choose which way to use, arrays or LinkedHashMap; for me, I would
recommend arrays for a small number of entries and LinkedHashMap for a large number
of entries.

In the next part we will talk about caching frameworks, compare them, and see which
caching algorithm is employed by which caching framework. Stay tuned till then ;)
