A cache is a temporary buffer with quick access that stores data so that future requests for that data can be served faster. If the requested data is contained in the cache, the request can be served by reading the cache; otherwise the data has to be computed or fetched from its original storage. Obviously, in the first case our application performs the operation quicker.
But we have to pay for this advantage: to store a lot of data, a cache needs a huge amount of memory. That's not a big problem when we are talking about a cache on the local disk (or sdcard) used to store, for example, downloaded images. But if we need to keep data in RAM for the quickest possible access, we have to limit the cache to some constant size, which is often much smaller than the total data size. This automatically implies the 'preemptive nature' of a cache: if there is no free space left and we need to put in one more item, something already stored in the cache must be removed.
The first and most obvious way to implement your own cache would be SoftReference. Let's take a look at a simple (not thread-safe) cache based on SoftReference. Let our cache be called SoftCache and let it be parameterized:
public class SoftCache<K, V>
Data will be stored in a HashMap:
private Map<K, SoftReference<V>> mCacheMap = new HashMap<K, SoftReference<V>>();
And methods to put a new item into the cache (we must supply our cache records with keys to be able to retrieve them later), get an item by key from the cache, and remove it:
public V put(K key, V value) {
    SoftReference<V> softRef = mCacheMap.put(key, new SoftReference<V>(value));
    if (softRef == null) {
        return null;
    }
    V oldValue = softRef.get();
    softRef.clear();
    return oldValue;
}

public V get(K key) {
    SoftReference<V> softRef = mCacheMap.get(key);
    return softRef == null ? null : softRef.get();
}

public V remove(K key) {
    SoftReference<V> softRef = mCacheMap.remove(key);
    if (softRef == null) {
        return null;
    }
    V oldValue = softRef.get();
    softRef.clear();
    return oldValue;
}
As we can see, this approach is easy enough to implement. But it has a few disadvantages:
- We can't specify the cache size.
- SoftReference doesn't give us control over when and how records are removed from the cache when a new item has to be added to an already 'full' cache.
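To illustrate the second point: even without calling remove, a cached value can silently disappear, because the garbage collector is free to clear soft references under memory pressure. Here is a minimal usage sketch, in the spirit of the examples later in this article:
SoftCache<Integer, String> softCache = new SoftCache<Integer, String>();
softCache.put(1, "s1");
String hit = softCache.get(1);       // normally returns "s1"
// ... later, if the VM came under memory pressure and the GC cleared soft references ...
String maybeGone = softCache.get(1); // may return null even though we never removed the entry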
The SoftReference approach may be interesting as a general-purpose cache. But if we want our cache to be as efficient as possible for a certain purpose, we need to somehow define the rules for replacing items in the cache. Generally speaking, this is a hard problem in computer science, and such sets of rules are usually called cache algorithms (or replacement policies).
So let's take a look at some of these cache algorithms without going into the details of their efficiency. Our goal here is to understand how each algorithm works and to see how caches based on these algorithms can be applied to everyday problems.
First In First Out (FIFO)
The algorithm that takes the least effort to manage cache entries. It is based on using a queue as storage for the cache entries. When the cache is full and an entry needs to be replaced (because a new entry has to be inserted), the entry at the front of the queue (the oldest one) is removed and the newly fetched entry is added.
It's not easy to find a specific case where this algorithm gives the best results; it should probably be used as a general-purpose cache when we can't find a better approach.
Least Frequently Used (LFU)
This algorithm is based on counting how often an entry is requested: it increments a counter associated with each cache entry. The entry with the lowest counter is removed first when the cache is full and a new entry needs to be added. This algorithm is not very fast (it requires a brute-force search for the element to remove when the cache is full), but an LFU cache is a good choice when the request frequency does not vary over time (i.e. what was requested most frequently yesterday is requested just as frequently today). As a very abstract example of an Android application using an LFU cache, consider a media player that requests audio streams from the internet: in most cases users listen to the same music, so it can be cached to provide immediate access. As I said, it's a very abstract example, because in most cases our cache will be too small to store even a few audio streams, but I believe it shows a case where a particular algorithm gives a real advantage.
Least Recently Used (LRU)
The main idea of LRU is to remove the least recently used items first when free space in the cache is required. Every new item is placed at the top of the cache, while older items fall down. Whenever an item is accessed, it is moved back to the top. When the cache exceeds its size limit, items at the bottom are discarded. This algorithm does not require an exhaustive search, so it is faster than LFU.
LRU is a good choice for gallery-like applications, or for applications that render big pictures which are usually broken into small tiles: LRU helps to avoid jank when the user pans around the same area.
By the way, since API 12 an LRU cache (LruCache) is present in the Android SDK and in the compatibility package.
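A minimal sketch of using that framework class, assuming bitmap is some previously decoded Bitmap and the 4 MiB limit is an arbitrary value chosen for illustration:
int cacheSizeInBytes = 4 * 1024 * 1024;
LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSizeInBytes) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        // measure entries in bytes instead of counting items
        return value.getByteCount();
    }
};
bitmapCache.put("thumb_1", bitmap);
Bitmap cached = bitmapCache.get("thumb_1"); // returns null on a cache miss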
Most Recently Used (MRU)
The opposite of LRU: while new items are still added to the top of the cache, MRU removes items from the top when the cache exceeds its size limit. It is a good choice for common database memory caches, where data that has already been fetched will probably not be requested again in the foreseeable future.
There are a lot of other algorithms, but we will focus on these and move on to the implementation.
Let's start with a generic cache element to be used by all cache implementations:
public class CacheElement<K, V> {

    /**
     * Cache element value.
     */
    private V mValue;

    /**
     * Cache element key.
     */
    private K mKey;

    /**
     * Counter for the LFU algorithm.
     */
    private int mHitCount;

    // accessor and mutator methods used by the cache implementations below
    public K getKey() { return mKey; }
    public void setKey(K key) { mKey = key; }
    public V getValue() { return mValue; }
    public void setValue(V value) { mValue = value; }
    public int getHitCount() { return mHitCount; }
    public void incrementHitCount() { mHitCount++; }
}
I guess everything here is obvious. Next, we will define a base cache class with some common functionality:
public abstract class AbstractCache<K, V> {

    private static final int MAX_ITEMS_IN_CACHE = 3;

    protected Map<K, CacheElement<K, V>> mTable = new HashMap<K, CacheElement<K, V>>();

    protected boolean isFull() {
        return itemsCount() >= MAX_ITEMS_IN_CACHE;
    }

    protected CacheElement<K, V> createCacheElement(K key, V value) {
        CacheElement<K, V> cacheElement = new CacheElement<K, V>();
        cacheElement.setKey(key);
        cacheElement.setValue(value);
        return cacheElement;
    }

    protected boolean updateCacheElement(CacheElement<K, V> element, K key, V value) {
        if (element != null) {
            element.setValue(value);
            return true;
        }
        return false;
    }

    public abstract void put(K key, V value);

    public abstract V get(K key);

    public abstract void print();

    protected abstract int itemsCount();
}
Let's look more closely at this generic class. MAX_ITEMS_IN_CACHE defines the maximum number of elements that can be placed in the cache. Of course, it would be more convenient to limit the size in bytes, but calculating an object's size in Java is not a trivial procedure, and since this is only an example we can proceed with a count limitation. mTable is used for quick lookup of already stored (cached) items by key. The isFull method is quite obvious: it tells us whether the cache is full and one of the cached items needs to be deleted. createCacheElement and updateCacheElement are used to create a new element and to update an element already added to the cache, respectively. The four abstract methods are to be implemented by each concrete cache implementation. It should be noted that print is used only for test purposes and is implemented with System.out#print so that the examples can be run on the desktop.
So, the first one will be the FIFO cache:
public class FIFOCacheImpl<K, V> extends AbstractCache<K, V> {

    protected Queue<CacheElement<K, V>> mCache = new ConcurrentLinkedQueue<CacheElement<K, V>>();

    @Override
    public final synchronized void put(K key, V value) {
        CacheElement<K, V> element = mTable.get(key);
        if (updateCacheElement(element, key, value)) {
            return;
        }
        CacheElement<K, V> cacheElement = createCacheElement(key, value);
        if (isFull()) {
            CacheElement<K, V> elem = mCache.poll();
            mTable.remove(elem.getKey());
        }
        mCache.offer(cacheElement);
        mTable.put(key, cacheElement);
    }

    @Override
    public final synchronized V get(K key) {
        CacheElement<K, V> element = mTable.get(key);
        return element != null ? element.getValue() : null;
    }

    @Override
    public void print() {
        System.out.print("{ ");
        for (CacheElement<K, V> cacheElement : mCache) {
            System.out.print("[ " + cacheElement.getKey() + " - " + cacheElement.getValue() + " ] ");
        }
        System.out.println("}");
    }

    @Override
    protected int itemsCount() {
        return mCache.size();
    }
}
A Queue is used as the primary storage for items to provide FIFO behavior. The get, print and itemsCount methods are quite obvious, so let's move to put right away. First, we use the quick lookup table mTable to check whether the item already exists in the cache, and if it is found we update it with the new value. If an item with this key doesn't exist in the cache, we create it; if the cache is not full we simply put it into the queue and into the table (for quick lookup), but if the cache is full we first remove the item at the head of the queue (since new elements are added to the tail) and remove it from the lookup table (mTable) as well.
And a simple example:
AbstractCache<Integer, String> fifoCache = new FIFOCacheImpl<Integer, String>();
fifoCache.put(1, "s1");
fifoCache.print();
fifoCache.put(2, "s2");
fifoCache.print();
fifoCache.put(3, "s3");
fifoCache.print();
fifoCache.put(4, "s4");
fifoCache.print();
fifoCache.put(5, "s5");
fifoCache.print();
and system output:
{ [ 1 - s1 ] }
{ [ 1 - s1 ] [ 2 - s2 ] }
{ [ 1 - s1 ] [ 2 - s2 ] [ 3 - s3 ] }
{ [ 2 - s2 ] [ 3 - s3 ] [ 4 - s4 ] }
{ [ 3 - s3 ] [ 4 - s4 ] [ 5 - s5 ] }
So, everything is as expected: when the cache is full, the oldest items are removed from it.
Next comes the LFU cache:
public class LFUCacheImpl<K, V> extends AbstractCache<K, V> {

    protected List<CacheElement<K, V>> mCache = new LinkedList<CacheElement<K, V>>();

    @Override
    public final synchronized void put(K key, V value) {
        CacheElement<K, V> element = mTable.get(key);
        if (updateCacheElement(element, key, value)) {
            return;
        }
        CacheElement<K, V> cacheElement = createCacheElement(key, value);
        if (isFull()) {
            CacheElement<K, V> elem = removeLfuElement();
            mTable.remove(elem.getKey());
        }
        mCache.add(cacheElement);
        mTable.put(key, cacheElement);
    }

    @Override
    public final synchronized V get(K key) {
        CacheElement<K, V> element = mTable.get(key);
        if (element != null) {
            element.incrementHitCount();
            return element.getValue();
        }
        return null;
    }

    @Override
    public void print() {
        System.out.print("{ ");
        for (CacheElement<K, V> cacheElement : mCache) {
            System.out.print("[ " + cacheElement.getKey() + " - " + cacheElement.getValue() + " : "
                    + cacheElement.getHitCount() + " ] ");
        }
        System.out.println("}");
    }

    @Override
    protected int itemsCount() {
        return mCache.size();
    }

    public CacheElement<K, V> removeLfuElement() {
        CacheElement<K, V> leastElement = leastHit();
        mCache.remove(leastElement);
        return leastElement;
    }

    public CacheElement<K, V> leastHit() {
        CacheElement<K, V> lowestElement = null;
        for (CacheElement<K, V> element : mCache) {
            if (lowestElement == null || element.getHitCount() < lowestElement.getHitCount()) {
                lowestElement = element;
            }
        }
        return lowestElement;
    }
}
I won't dwell on the code: the idea is the same as for the FIFO cache. Only a couple of moments need to be clarified: when an item is requested via get and it exists in the cache, the item's hit count is incremented; leastHit uses mHitCount to find the item that should be removed from the cache.
AbstractCache<Integer, String> lfuCache = new LFUCacheImpl<Integer, String>();
lfuCache.put(1, "s1");
lfuCache.print();
lfuCache.get(1);
lfuCache.get(1);
lfuCache.print();
lfuCache.put(2, "s2");
lfuCache.print();
lfuCache.get(2);
lfuCache.get(2);
lfuCache.get(2);
lfuCache.print();
lfuCache.put(3, "s3");
lfuCache.print();
lfuCache.get(3);
lfuCache.get(3);
lfuCache.get(3);
lfuCache.get(3);
lfuCache.print();
lfuCache.put(4, "s4");
lfuCache.print();
lfuCache.get(4);
lfuCache.print();
lfuCache.put(5, "s5");
lfuCache.print();
and system output:
{ [ 1 - s1 : 0 ] }
{ [ 1 - s1 : 2 ] }
{ [ 1 - s1 : 2 ] [ 2 - s2 : 0 ] }
{ [ 1 - s1 : 2 ] [ 2 - s2 : 3 ] }
{ [ 1 - s1 : 2 ] [ 2 - s2 : 3 ] [ 3 - s3 : 0 ] }
{ [ 1 - s1 : 2 ] [ 2 - s2 : 3 ] [ 3 - s3 : 4 ] }
{ [ 2 - s2 : 3 ] [ 3 - s3 : 4 ] [ 4 - s4 : 0 ] }
{ [ 2 - s2 : 3 ] [ 3 - s3 : 4 ] [ 4 - s4 : 1 ] }
{ [ 2 - s2 : 3 ] [ 3 - s3 : 4 ] [ 5 - s5 : 0 ] }
So, as we can see, when an item is requested from the cache its hit count is incremented, and the least demanded items are removed from the cache when it is full.
And the last one will be the LRU cache (MRU is almost the same; a sketch of it is shown after the LRU example below):
public class LRUCacheImpl<K, V> extends AbstractCache<K, V> {

    protected List<CacheElement<K, V>> mCache = new ArrayList<CacheElement<K, V>>();

    @Override
    public final synchronized void put(K key, V value) {
        CacheElement<K, V> element = mTable.get(key);
        if (updateCacheElement(element, key, value)) {
            // move the updated element to the top (most recently used position)
            mCache.remove(element);
            mCache.add(0, element);
            return;
        }
        CacheElement<K, V> cacheElement = createCacheElement(key, value);
        if (isFull()) {
            // evict the least recently used element (the last one in the list)
            CacheElement<K, V> elem = mCache.remove(mCache.size() - 1);
            mTable.remove(elem.getKey());
        }
        mCache.add(0, cacheElement);
        mTable.put(key, cacheElement);
    }

    @Override
    public final synchronized V get(K key) {
        CacheElement<K, V> element = mTable.get(key);
        if (element != null) {
            // move the accessed element to the top (most recently used position)
            mCache.remove(element);
            mCache.add(0, element);
            return element.getValue();
        }
        return null;
    }

    @Override
    public void print() {
        System.out.print("{ ");
        for (CacheElement<K, V> cacheElement : mCache) {
            System.out.print("[ " + cacheElement.getKey() + " - " + cacheElement.getValue() + " ] ");
        }
        System.out.println("}");
    }

    @Override
    protected int itemsCount() {
        return mCache.size();
    }
}
As we can see, a requested (or updated) item is moved to the first position of the list, while the last element is removed when the cache is full. And here is an example:
AbstractCache<Integer, String> lruCache = new LRUCacheImpl<Integer, String>();
lruCache.put(1, "s1");
lruCache.print();
lruCache.put(2, "s3");
lruCache.print();
lruCache.put(1, "s1");
lruCache.print();
lruCache.get(2);
lruCache.print();
lruCache.put(3, "s1");
lruCache.print();
lruCache.put(4, "s1");
lruCache.print();
lruCache.get(4);
lruCache.print();
lruCache.put(5, "s1");
lruCache.print();
lruCache.put(6, "s1");
lruCache.print();
and system output:
{ [ 1 - s1 ] }
{ [ 2 - s3 ] [ 1 - s1 ] }
{ [ 1 - s1 ] [ 2 - s3 ] }
{ [ 2 - s3 ] [ 1 - s1 ] }
{ [ 3 - s1 ] [ 2 - s3 ] [ 1 - s1 ] }
{ [ 4 - s1 ] [ 3 - s1 ] [ 2 - s3 ] }
{ [ 4 - s1 ] [ 3 - s1 ] [ 2 - s3 ] }
{ [ 5 - s1 ] [ 4 - s1 ] [ 3 - s1 ] }
{ [ 6 - s1 ] [ 5 - s1 ] [ 4 - s1 ] }
So, everything is as expected: the last requested item is at the top, while items that are not requested fall to the bottom.
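As mentioned above, the MRU cache is almost the same as the LRU one: new and accessed items still go to the top of the list, but when the cache is full the item at the top (the most recently used one) is evicted instead of the one at the bottom. A minimal sketch of the put method for such a hypothetical MRUCacheImpl (differing from LRUCacheImpl only in the eviction line) could look like this:
@Override
public final synchronized void put(K key, V value) {
    CacheElement<K, V> element = mTable.get(key);
    if (updateCacheElement(element, key, value)) {
        // move the updated element to the top (most recently used position)
        mCache.remove(element);
        mCache.add(0, element);
        return;
    }
    CacheElement<K, V> cacheElement = createCacheElement(key, value);
    if (isFull()) {
        // the only difference from LRU: evict from the top (the most recently used element)
        CacheElement<K, V> elem = mCache.remove(0);
        mTable.remove(elem.getKey());
    }
    mCache.add(0, cacheElement);
    mTable.put(key, cacheElement);
}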
So, this was a short introduction to different cache implementations that can be used in your Android applications.
Thank you for your attention.