On Wed, 16 Apr 2008, Tobias Schlitt wrote:

> On 04/16/2008 08:05 PM Derick Rethans wrote:
> > On Mon, 14 Apr 2008, Tobias Schlitt wrote:
> >> On 04/07/2008 09:25 PM Tobias Schlitt wrote:
> 
> > Some minor comments:
> 
> >> Replacement strategies
> >> ----------------------
> >>
> >> [snip]
> >>
> >> A problem with any of the replacement strategies is that additional
> >> information about a cache item needs to be stored. For example: to
> >> realize an LRU algorithm, the last access time of an item needs to
> >> be stored; for LFU, the number of accesses. The cache storages
> >> currently don't support adding such information, and an appropriate
> >> place to store it persistently and efficiently still needs to be
> >> thought out.
> 
> > For file based caches, the filesystem already keeps track of the last
> > access time, so it doesn't have to be stored separately. Perhaps it'd be
> > possible to only use the extra information when it's actually used. So
> > for LRU and the file based caches, you don't really need to store
> > anything; while you would need it for LFU and file and others.
> 
> This would break abstraction and (IMO) lower performance. If you rely
> on the file system data, you need to iterate over all cache items and
> perform a fileatime() call for each of them, instead of iterating over
> an array of this data.

But if you have 400,000 items in the cache, reading the meta array 
*could* be slower than just iterating. And "breaking abstraction" is 
not a reason on its own. We don't have to be purists in design :)
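To make the trade-off concrete, here is a minimal sketch of the meta-array approach for LRU that Tobias describes (the actual component is PHP; this Python version and all names in it are illustrative, not the real ezcCache API). The alternative Derick mentions would replace the stored array with one file-system atime lookup per cache file:

```python
import itertools

class LruMeta:
    """Hypothetical meta array for LRU: one entry per cache item,
    updated on every access, scanned once to pick an eviction victim."""

    def __init__(self):
        self._counter = itertools.count()   # logical clock, avoids timestamp ties
        self.access_order = {}              # item id -> last logical access time

    def touch(self, item_id):
        # Record the access in the meta array (instead of relying on
        # the file system's access time for each cache file).
        self.access_order[item_id] = next(self._counter)

    def evict_candidate(self):
        # One scan over the in-memory array; the file-system variant
        # would instead stat() every cache file here.
        return min(self.access_order, key=self.access_order.get)

meta = LruMeta()
meta.touch("a")
meta.touch("b")
meta.touch("a")                  # "a" is now most recently used
print(meta.evict_candidate())    # prints: b
```

Which variant wins depends on the numbers in the thread: the array costs storage and write overhead on every access, while the stat()-per-file variant costs nothing at write time but touches every file at eviction time.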

> >> Cache propagation
> >> -----------------
> >>
> >> There are two possibilities to define propagation of cache items through
> >> the stack, where a decision needs to be made for one consistent
> >> solution:
> 
> > I thought we decided on going only with "Propagate in store" ?
> 
> Still, both possibilities are mentioned. This is just the requirements
> section, and it only states that we had two possibilities for the
> design here. The design itself should only describe the method we
> decided on.

It's in the design document, perhaps they should be split up then?
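For reference, "propagate in store" as discussed above can be sketched like this (Python for illustration; the real component is PHP and all names here are made up, not the actual ezcCacheStack API):

```python
class PropagatingStack:
    """Hypothetical "propagate in store" stack: a store() call pushes
    the item into every storage at write time, so reads never have to
    pull an item downward on their own."""

    def __init__(self, storages):
        self.storages = storages   # dict-like storages, fastest first

    def store(self, item_id, data):
        # Propagation happens here: every level receives the item.
        for storage in self.storages:
            storage[item_id] = data

    def restore(self, item_id):
        # Reads simply try the fast storages first.
        for storage in self.storages:
            if item_id in storage:
                return storage[item_id]
        return None

stack = PropagatingStack([{}, {}])
stack.store("key", "value")      # both levels now hold the item
print(stack.restore("key"))      # prints: value
```

The alternative (propagating on restore instead of on store) is the bubbling question discussed further down in this mail.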

> >> ezcCacheStack
> >> -------------
> >>
> >> [snip]
> >>
> >> This also ensures, that caches are not stacked recursively, since
> >> ezcCacheStack won't implement this interface itself.
> 
> > Is it actually a problem if there are multiple stacks that are hooked up
> > together?
> 
> This is a good question. We could allow it, but it would also mean
> that we need a way to store meta data for a stack that is itself used
> inside another stack. Replacement strategies could be mixed this way,
> which requires multiple different meta data formats to be stored. I'd
> generally say we don't want that. Do we?
> 
> If the requirement occurs later, we can still add the possibility, I'd say.

Sure, that's fine.
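The mechanism that rules out recursive stacking is simply a type check: the stack accepts only objects implementing the storage interface, and the stack itself does not implement it. A minimal sketch (Python for illustration; class and method names are hypothetical, not the real ezcCache API):

```python
class CacheStorage:
    """Stands in for the storage interface that a stack accepts."""

    def __init__(self):
        self.items = {}

class CacheStack:
    """Deliberately does NOT subclass CacheStorage, so a stack can
    never be pushed onto another stack."""

    def __init__(self):
        self.storages = []

    def push_storage(self, ident, storage):
        if not isinstance(storage, CacheStorage):
            raise TypeError("only plain cache storages may be stacked")
        self.storages.append((ident, storage))

stack = CacheStack()
stack.push_storage("memory", CacheStorage())    # fine
try:
    stack.push_storage("inner", CacheStack())   # rejected: stacks don't nest
except TypeError as exc:
    print(exc)                                  # prints the error message
```

As the mail notes, lifting this restriction later only means making the stack implement the interface (and solving the meta-data question), so nothing is lost by starting strict.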

> >> The given $id parameter must be a string or integer which identifies the
> >> given storage uniquely in the stack.
> 
> > Couldn't we just pre-generate this number à la auto-increment?
> 
> What if the user changes the order? How would we notice? That would
> require us to additionally store this information in the storage.

I didn't see any functions for reordering them, so how is this a 
problem?

> >> It needs to be clarified whether restored items should be bubbled
> >> up to higher storages.
> 
> > Why wouldn't we want that?
> 
> Because it costs time again. Imagine a 4-level stack, where the file
> system storage is the lowest and largest one. Each time an item is
> restored from there, it would be placed in the 3 higher storages
> again. I'm not sure we want this, since it slows down the read
> process.

I think this is one of the major reasons why you'd want a stacked 
cache... so yes, I definitely think it should be stored in the higher 
storages. Perhaps we can add an option to restore() to *not* do that, 
but I do want the general case to store it in the higher caches.
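What Derick suggests here can be sketched as follows (Python for illustration; the real component is PHP, and the `bubble_up` parameter name is made up, not a decided API):

```python
class BubblingStack:
    """Hypothetical restore() that copies a hit from a lower storage
    back into all higher (faster) storages, with an opt-out flag as
    suggested in the mail."""

    def __init__(self, storages):
        self.storages = storages   # dict-like storages, fastest first

    def restore(self, item_id, bubble_up=True):
        for level, storage in enumerate(self.storages):
            if item_id in storage:
                data = storage[item_id]
                if bubble_up:
                    # Re-store the hit in every storage above this
                    # level, so the next read is served faster.
                    for higher in self.storages[:level]:
                        higher[item_id] = data
                return data
        return None

fast, slow = {}, {"key": "value"}
stack = BubblingStack([fast, slow])
stack.restore("key")          # hit in the slow storage...
print("key" in fast)          # ...is now cached in the fast one: True
```

This is exactly the cost Tobias points out: one extra write per higher level on a lower-level hit, traded against faster subsequent reads. The flag keeps the default behavior useful while leaving the slow-read-path concern addressable per call.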

Derick
-- 
Components mailing list
Components@lists.ez.no
http://lists.ez.no/mailman/listinfo/components