My understanding of the problem is that you would have at most n distinct
things in your cache.

LRU is a good online algorithm, but unfortunately it is easy to beat: an
adversary can make it pay O(n) even if you push to the front on inserts, and
a plain list-based LRU is certainly not O(1) on a lookup.

That said, the operations the cache should support are those of a FIFO
queue, because when the (n+1)th distinct element arrives, the 1st element is
knocked off. This particular requirement of the algorithm is not very
realistic; LRU is, because keeping the most-accessed elements in a cache
makes more sense.

But if you look at Java caches like Ehcache by Terracotta or JBoss Cache,
they store everything, not just the frequently used entries: they have an
in-memory store and an on-disk store (they page in and out if they run out
of physical memory), and also a distributed multi-machine store.

These kinds of caches are more common on the server side. Memcached is one
older example; its C implementation uses a two-phase hash lookup: one hash
redirects to the machine that holds the cached entry, and then a per-machine
hash table finds the object you are looking for.

So LRU, LFU, etc. are online caching algorithms, heuristic approaches to
in-memory caches.

But yes, hash tables are also in vogue. That is the state of the art.

The problem demands something different.
My understanding is that the cache can store at most n elements, which means
n distinct elements, and when the (n+1)th distinct element arrives and is
not in the cache, the 1st one is knocked off.

But the insert and lookup should be O(1).

@sharad, could you confirm whether this understanding is correct?

Now, if you have to knock off the 1st element when the (n+1)th distinct one
arrives, it is definitely a FIFO queue, operation-wise. And the problem
demands that insert and lookup be O(1).

Now there is a little ambiguity as to whether the O(1) can be an expected
bound, or whether a stronger bound, worst-case or amortized, is intended.

If an expected bound is acceptable to the problem setter, any implementation
of a FIFO queue will do: keep a pointer to the head and the tail (the first
element and the last element), add new elements at the tail, knock off from
the head, and then readjust the head to point to the new head. All of this
costs O(1) in total.
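To make the head/tail idea concrete, here is a minimal sketch of such a FIFO queue as a singly linked list (the class and method names are mine, not from the problem):

```python
class _Node:
    """Singly linked node holding one cached request identifier."""
    __slots__ = ("key", "next")

    def __init__(self, key):
        self.key = key
        self.next = None


class FifoQueue:
    """FIFO queue with O(1) enqueue at the tail and O(1) dequeue at the head."""

    def __init__(self):
        self.head = None  # oldest element (next to be knocked off)
        self.tail = None  # newest element

    def enqueue(self, key):
        node = _Node(key)
        if self.tail is None:      # empty queue: head and tail are the node
            self.head = self.tail = node
        else:
            self.tail.next = node  # add the new element at the tail
            self.tail = node

    def dequeue(self):
        node = self.head
        self.head = node.next      # readjust the head to the newer head
        if self.head is None:
            self.tail = None
        return node.key
```

Every operation touches only the head or the tail pointer, so each costs O(1), matching the claim above.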

But this would not guarantee an O(1) lookup for an arbitrary element in the
cache. Neither would LRU, with its push-to-front heuristic.

The nearest bet is a hash table. But the problem is collisions. We could use
perfect hashing to get a high probability of non-colliding hashes, making
the lookup expected O(1), not worst-case or amortized O(1).
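For completeness, combining a hash table with a FIFO queue gives the whole cache under the expected-O(1) reading. This is a sketch under that assumption (Python's dict gives expected, not worst-case, O(1), which is exactly the caveat above); the names are mine:

```python
from collections import deque


class FifoCache:
    """Cache of the last n distinct requests: expected-O(1) lookup via a
    dict, and O(1) FIFO eviction order via a deque."""

    def __init__(self, n):
        self.n = n
        self.table = {}       # request id -> response
        self.order = deque()  # arrival order of the distinct request ids

    def lookup(self, key):
        # Expected O(1); returns None on a miss (pass the request on).
        return self.table.get(key)

    def insert(self, key, response):
        if key in self.table:             # already cached: nothing evicted
            self.table[key] = response
            return
        if len(self.order) == self.n:     # (n+1)th distinct element arrives:
            oldest = self.order.popleft() # knock off the 1st one
            del self.table[oldest]
        self.order.append(key)
        self.table[key] = response
```

Both operations do a constant number of dict and deque operations, so insert and lookup are expected O(1).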

But hang on, look at the problem: it also says that when a new element comes
in, an old element is knocked off. So if the new element hashes to a filled
slot instead of the one being freed up, we can keep a list of k elements (k
is a constant whose value can be tuned) which holds the elements whose
hashes collided while they were being inserted. After k such collisions,
there will be k open slots in the hash table. Then we need a hash function
that maps those k elements back to the empty slots, freeing up the k-sized
list. Until the k-sized list fills up, lookups that fail to find an element
in the hash table fall back to iterating through the k list elements. When
the k elements are rehashed with a different hash function, a pointer to
that function is also kept in the slots to which they originally hashed, so
that when we use the first hash function we go to the slot, try to match our
element, and, if that fails, follow the function pointer kept in the slot to
the new location.

All of this keeps the insert O(1) (it just adds O(k) to it), and the lookup
definitely remains O(1).

So the problem reduces to finding a hash function which maps k integers
(let's say requests have integer identifiers for now, and no two requests
have the same identifier) to k integers (the free slots) in O(1) time.

For k=1, it would be something like this:
you have 2 integers from which to form the hash function, so basically,
instead of pinning a hash function to the slot, pin the next location to the
slot.
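A toy sketch of this k=1 case, under my own assumptions (fixed table size, integer keys so Python's hash() is deterministic, and the caller supplies the just-freed slot): each slot can carry a forward index instead of a second hash function, so a failed match costs at most one extra probe.

```python
class ForwardingTable:
    """Fixed-size hash table where a home slot may carry a forward index:
    lookup probes the home slot, then follows at most one forward pointer,
    so it stays O(1). This is the k=1 case of the scheme sketched above."""

    def __init__(self, size):
        self.size = size
        self.slots = [None] * size  # each slot: [key, value, forward_index]

    def _home(self, key):
        return hash(key) % self.size

    def insert(self, key, value, free_index):
        """free_index is the slot just vacated by the eviction."""
        i = self._home(key)
        if self.slots[i] is None:
            self.slots[i] = [key, value, None]
        else:
            # Collision: place the element in the freed slot, and pin the
            # next location (the forward index) in the home slot.
            self.slots[free_index] = [key, value, None]
            self.slots[i][2] = free_index

    def lookup(self, key):
        i = self._home(key)
        slot = self.slots[i]
        if slot is None:
            return None
        if slot[0] == key:
            return slot[1]
        if slot[2] is not None:        # follow the pinned next location
            fwd = self.slots[slot[2]]
            if fwd is not None and fwd[0] == key:
                return fwd[1]
        return None
```

This only handles one outstanding collision per home slot, which is why generalizing to k>1 (as discussed below) needs more machinery.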

I need to analyze how to do it with k>1, but yes, the payoff would be in the
constant factor.

What if a request does not have an integer identifier, but instead a tuple
of arbitrary values (strings, integers, floats, objects) which together form
the identifier of the request? We would surely need to think about that; my
take would be some kind of bitwise combination, absent any other information
about the identifier tuple. But those bitwise operations should be
discoverable in O(1).
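One common bitwise approach is to fold the per-field hashes together with multiply-and-XOR mixing; a sketch, using FNV-1a-style constants (the function name is mine):

```python
def tuple_key(fields):
    """Reduce a tuple of arbitrary hashable values (strings, ints, floats,
    ...) to one 64-bit integer identifier with multiply-and-XOR mixing,
    in the spirit of FNV-1a; O(1) work per field."""
    h = 0xcbf29ce484222325  # FNV-1a 64-bit offset basis
    for field in fields:
        h ^= hash(field) & 0xFFFFFFFFFFFFFFFF   # fold in the field's hash
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF  # FNV 64-bit prime
    return h
```

Because the multiply mixes positional information in, equal tuples get equal keys while reorderings or changed fields almost always get different ones, so the result can be used directly as the hash-table key for the request.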


On Wed, Jul 7, 2010 at 9:59 AM, Ashish Goel <ashg...@gmail.com> wrote:

> let me put it this way: say for the last n times you made the same request,
> what should the cache look like?
>
> Does the browser keep history as a stack only?
>
> simplest design is to use a stack and then parse the most commonly used
> sites, but you would want this preprocessed rather than found at run time
>
>
>
> Best Regards
> Ashish Goel
> "Think positive and find fuel in failure"
> +919985813081
> +919966006652
>
>
> On Wed, Jul 7, 2010 at 9:54 AM, Anand <anandut2...@gmail.com> wrote:
>
>> Since cache can only store the last n requests, Insert the n+1th
>> request in the cache and delete one of the older requests from the
>>  cache.
>>
>> To satisfy the third requirement in the question, we need to implement
>> FIFO. Why do we need LRU? The question does not say anything about it.
>>
>> On Tue, Jul 6, 2010 at 8:22 PM, Ashish Goel <ashg...@gmail.com> wrote:
>>
>>> no FIFO here,
>>>
>>> the requests can be random, and hence if the array has say 1,2,3,4,5,6,7,
>>> with 1 being least recently used and 7 given the timeslice just now: if
>>> someone requests say 4, the list should become 1,2,3,5,6,7,4, and if the
>>> request is for 8 then it would be 2,3,4,5,6,7,8
>>>
>>> FIFO is a BAD choice; the average response time is very high
>>>
>>> when someone says that all three operations should be O(1), it is just
>>> not possible; even with a hashmap that is not possible, nor with a linked
>>> list, array, or whatever implementation
>>>
>>>
>>>
>>> Best Regards
>>> Ashish Goel
>>> "Think positive and find fuel in failure"
>>> +919985813081
>>> +919966006652
>>>
>>>
>>> On Tue, Jul 6, 2010 at 8:31 PM, Anand <anandut2...@gmail.com> wrote:
>>>
>>>> @Ashish: why do we need to shift array elements when we are using a FIFO
>>>> queue? With an array we could also easily implement a FIFO queue without
>>>> the overhead of shifting elements.
>>>>
>>>>
>>>>
>>>>  On Tue, Jul 6, 2010 at 1:29 AM, Ashish Goel <ashg...@gmail.com> wrote:
>>>>
>>>>> linked lists have been used, but the key idea is to implement the LRU
>>>>> algo
>>>>>
>>>>> so if you associate a timestamp with each request and use that for
>>>>> prioritising the bucket queue, or use a simple front/tail mechanism to
>>>>> remove from the front and push at the end as soon as a request is made,
>>>>> that will work
>>>>>
>>>>> implementing it with arrays would mean the element shifting is
>>>>> cumbersome and time-consuming (e.g. say 5 elements in a particular
>>>>> bucket and the middle one is accessed: now it should go to the end, but
>>>>> with an array implementation the 4th and 5th need to be pushed up and
>>>>> the 3rd pushed down), so under such conditions a DLL is best
>>>>>
>>>>>
>>>>>
>>>>> Best Regards
>>>>> Ashish Goel
>>>>> "Think positive and find fuel in failure"
>>>>> +919985813081
>>>>> +919966006652
>>>>>
>>>>>
>>>>> On Tue, Jul 6, 2010 at 10:36 AM, Anand <anandut2...@gmail.com> wrote:
>>>>>
>>>>>> @topojoy.
>>>>>>
>>>>>> Why do we need a linked list? We can use an array of structs which
>>>>>> stores the response and request information.
>>>>>> The request identifier can be hashed to generate a key which can be
>>>>>> used as an index into the array of structures.
>>>>>>
>>>>>> We don't need a linked list as we do not need to parse the request and
>>>>>> response. For indexing we are anyway using hashing.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Jul 4, 2010 at 11:49 PM, Satya <satya...@gmail.com> wrote:
>>>>>>
>>>>>>> See the links below. Most web applications (Facebook, Classmates) use
>>>>>>> memcached to speed up their sites.
>>>>>>> memcached uses hashing.
>>>>>>>
>>>>>>> http://memcached.org/
>>>>>>> http://www.facebook.com/note.php?note_id=39391378919
>>>>>>> http://en.wikipedia.org/wiki/Memcached
>>>>>>> .........
>>>>>>> Satya
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jul 2, 2010 at 11:46 PM, sharad <sharad20073...@gmail.com>wrote:
>>>>>>>
>>>>>>>> [1] Design a layer in front of a system which caches the last n
>>>>>>>> requests and the responses to them from the system.
>>>>>>>> What data structure would you use to implement the cache in the
>>>>>>>> layer to support the following operations?
>>>>>>>> [a] When a request comes look it up in the cache and if it hits then
>>>>>>>> return the response from here and do not pass the request to the
>>>>>>>> system
>>>>>>>> [b] If the request is not found in the cache then pass it on to the
>>>>>>>> system
>>>>>>>> [c] Since cache can only store the last n requests, Insert the n+1th
>>>>>>>> request in the cache and delete one of the older requests from the
>>>>>>>> cache
>>>>>>>> The objective is to achieve all the three operations in O(1).
>>>>>>>>
>>>>>>>> --
>>>>>>>> You received this message because you are subscribed to the Google
>>>>>>>> Groups "Algorithm Geeks" group.
>>>>>>>> To post to this group, send email to algoge...@googlegroups.com.
>>>>>>>> To unsubscribe from this group, send email to
>>>>>>>> algogeeks+unsubscr...@googlegroups.com<algogeeks%2bunsubscr...@googlegroups.com>
>>>>>>>> .
>>>>>>>> For more options, visit this group at
>>>>>>>> http://groups.google.com/group/algogeeks?hl=en.
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>



-- 
Topo.

There are days when solitude is a heady wine that intoxicates you with
freedom, others when it is a bitter tonic, and still others when it is a
poison that makes you beat your head against the wall.  ~Colette

