In a new generalized strategy it would not be RAFContainer objects; it would be some new abstraction for an open file, maybe something that ties in more closely with the relatively newer io abstraction provided by opensource/java/engine/org/apache/derby/io. As you point out, such a project would have to consider the current assumptions about the number of outstanding objects.
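
To make that a bit more concrete, something along the lines of the sketch below is what a new abstraction could look like. All of the names are made up for this mail; nothing like this exists in the tree today. The point is that the cached object is "an open file" rather than a RAFContainer, and that it can answer the "is busy" question so a generic cache manager can decide whether it is safe to close:

    import java.io.IOException;

    // Hypothetical sketch only - names invented for this mail.
    // The cache holds "open files", not RAFContainers.
    interface CachedOpenFile {
        // A generic cache manager walks eviction candidates and asks each
        // one whether some operation still needs the file descriptor.
        boolean isBusy();

        // Close the underlying descriptor when the entry is evicted.
        void closeForEviction() throws IOException;
    }

    // A server-wide service would hand these out, so that store, sort,
    // backup, etc. all go through one place that can enforce a limit on
    // the total number of open file descriptors.
    interface OpenFileCacheService {
        CachedOpenFile find(String path) throws IOException;
        void release(CachedOpenFile file);
    }
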
"MM" == Mike Matrigali <[EMAIL PROTECTED]> writes:MM> it is ugly, but a way to store duplicate keys in a hash table is MM> to store lists as the object in the hash table rather than the MM> object itself. For instance in the store/lang interface where MM> store returns a hash table with a requested result set, if duplicates MM> are allowed it stores the object if there is 1 or a list if there MM> is more than one. The ugly part is that it requires a class comparison MM> to tell what kind of thing is being returned. Storing a list of items per hash entry will also beat the purpose of using the cache to limit the number of items to the size of the cache's hash table. MM> Having said that I agree that a new implementation may be more MM> appropriate, as the desire is to get the cache manager to track the MM> LRU nature of the individual items in the duplicate list. And it MM> would be nice if it walked the list and asked the "is busy" question MM> to each object - this may require a new interface to be implemented MM> by each object to make it more generic. I agree. MM> As I have said before this cache code is relatively small, and creating MM> a new implementation for open files seems like a good approach. MM> At the same time I think it might make sense to have a server wide MM> service providing this open file cache, rather than hidden in the MM> raw store implementation. I think there is currently a problem with MM> sort because it does not also go through a open file cache - and there MM> are some security manager issues which also may be because there is MM> not one path to do I/O to files. I agree that some general file cache that could be used by several mechanism it to be preferred. I also think this is the best way to guarantee a limit on the total number of open file descriptors. What is not clear to me, is whether the objects to be cached can be RAFContainer objects. It seems to be that some of the code assumes that there is only a single RAFContainer object per container (e.g, truncation and backup). The serialization of accesses to a file that we are trying to avoid for read operations, may be assumed by other operations.
