Thank you Alex! I'll think it over later.

On Tue, 2012-07-24 at 08:54 -0600, Alex Rousskov wrote:
> On 07/23/2012 02:45 AM, Alexander Komyagin wrote:
> > On Fri, 2012-07-20 at 10:22 -0600, Alex Rousskov wrote:
> >> On 07/20/2012 08:28 AM, Alexander Komyagin wrote:
> >>> Hi! I've taken a look at the code related to object caching and
> >>> found only two places where this restriction (hard-coded to 32 KB)
> >>> is actually applied:
> >>>
> >>> 1) DiskIO/IpcIoFile.cc: when a Squid worker pushes an I/O request
> >>> to a disker via IpcIoFile::push() and the disker handles that
> >>> request with DiskerHandleRequest(). The IpcIoMsg object contains
> >>> the memory page for I/O. Before and after I/O, plain byte arrays
> >>> are used for data storage.
> >>>
> >>> So why not use an array of pages for I/O here instead of one
> >>> single page? We know the exact object size here, so we can easily
> >>> calculate the number of pages needed to load or store an object.
> >>
> >> Properly locating, locking, and securely updating a single shared
> >> page is much easier than doing so for N pages. We will support
> >> multi-page shared caching eventually, but it is far more complex
> >> than just calculating the number of needed pages (N), especially if
> >> you do not want to reserve all pages in advance.
> >>
> >> You found where the 32 KB page size limit is used. The other, far
> >> more important limit that is implicit in the current code is the
> >> number of shared pages per object that the current algorithms
> >> support. That limit is 1.
> >
> > Probably I missed those algorithms; can you point them out for me,
> > so I can take a look?
>
> It is difficult to pinpoint a specific piece of code because the "one
> entry, one slot" design is used in many places. The StoreMap class
> provides the entry:slot mapping. You can see how MemStore uses that
> map to open and close map slots while reading or writing shared
> memory pages.
>
> To support shared caching of large objects, one would have to hide
> the complexities inside
>
> * StoreMap (creating a virtual slot that maps to multiple map slots);
> * StoreMap users such as MemStore (using multiple map slots to store
>   a large object, with some kind of links between map slots); or
> * both StoreMap and StoreMap users, splitting the low-level and the
>   high-level complexities accordingly.
>
> The latter is more likely, IMO.
>
> > OK. I think that once the cleanup is done and the Store API is
> > fixed (though it will take some time), multi-page caching support
> > won't be a big problem. I bet you already have some ideas on how to
> > implement it. Or not? :)
>
> Ideas are easy; finding the time to complete the cleanup is
> difficult. It is not going to be "done" on its own, and the last time
> I asked on squid-dev, there was no consensus that finishing the
> cleanup is the best use of my time.
>
> Cheers,
>
> Alex.
--
Best wishes,
Alexander Komyagin
