Gerhard Froehlich wrote:
> > From: Stefano Mazzocchi [mailto:[EMAIL PROTECTED]]
> >
> <skip/>
>
> Berin, you introduced some days ago a Profiler in Avalon Excalibur. Is that
> Component designed to handle such things?
>
> Gerhard (still trying to get the big picture of caching)
It was yesterday and today. The "Profiler" is really a method of instrumenting Avalon-based projects (like Cocoon). All that is really defined is a series of interfaces that define the contracts involved in such a system. However, if you were to have ProfilePoints that take a Resource and perform your cost calculations on it, they could feed the adaptive nature of such a Cache. In fact, the CacheController would be a ProfileReport as well, receiving all the samples from the ProfilePoints. It could easily perform the adaptive cache mechanism (a rough sketch of what I mean is below).

However, keep in mind that once the JVM is recycled, all the hard work and statistical gathering is reset. It might be worth considering a persistence mechanism for the results of the adaptive tests. There might even come a point where the results of cache adaptation stop changing and themselves become superfluous. At what point that occurs, I do not know. However, if the results of the adaptation were stored persistently, the cache decisions could be made by a much simpler Cache implementation that merely followed the instructions from the adaptation algorithm. In that case, the adaptive cache would only be useful during development.

What I see as more practical is that the specific files cached might change with the time of day. For instance, web administrators usually have log analyzers that can easily determine which parts of the site get hit heaviest at which time of day. This is very important. We may want the cache to artificially lengthen the ergodic period for specific resources during their peak load in order to scale more gracefully as more users view those resources.

In effect, this is similar to the approach that Slashdot uses in its server clustering. They have a couple of dynamic servers with full-fledged capability, but the bulk of the site's use is reading the front page or possibly the extended story and comments (most users do not post). This allows the Slashdot team to run a few servers that serve only static pages, updated every 20-30 minutes. That is quite an ergodic period for a news site. As load increases to the point that the dynamic servers cannot sustain it, additional static-page servers are brought into the cluster. This is a macro-scale version of what I was referring to, but it is similar to the concept of dropping packets so that the server degrades slowly as load increases instead of coming to a screeching halt.

Such an adaptive cache would be _more_ concerned with degrading performance gracefully by serving stale data than with ensuring that the data retrieved from the cache is the most up to date. This would be an adaptive cache that *could* have pluggable or adaptable policies regarding stale data (also sketched below). For instance, a site like Slashdot can get away with marginally stale data because the average user is not constantly using it in day-to-day work. However, such a policy would be very detrimental to a corporate web site that managed the daily workflow of information that is core to the company's needs. It is also detrimental to web sites that require that users cannot see each other's data, where privacy and security are chief concerns. However, for a site like the one D-Haven is supposed to be (if I ever get the time), stale data would be acceptable because the content would have a naturally longer ergodic period to begin with.
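To make the ProfilePoint/ProfileReport idea a little more concrete, here is a very rough sketch in Java of how the pieces could fit together. This is not the actual Excalibur code, just the shape of the idea; the Resource type and all of the method signatures here are purely illustrative.

    import java.util.HashMap;
    import java.util.Map;

    /** Whatever the cache produces; only a key is needed for this sketch. */
    interface Resource {
        String getKey();
    }

    /** A sampling point that measures the cost of producing a Resource. */
    interface ProfilePoint {
        /** For example, generation time in milliseconds. */
        long sample(Resource resource);
    }

    /** Receives the samples gathered by all of the ProfilePoints. */
    interface ProfileReport {
        void addSample(String key, long cost);
    }

    /**
     * The CacheController doubles as a ProfileReport: the cost samples it
     * receives drive the decision whether a resource is worth caching.
     */
    class CacheController implements ProfileReport {
        private final Map costs = new HashMap();

        public void addSample(String key, long cost) {
            costs.put(key, new Long(cost));
        }

        /** Cache only resources whose regeneration cost exceeds the cost of caching them. */
        public boolean shouldCache(String key, long cachingCost) {
            Long cost = (Long) costs.get(key);
            return cost != null && cost.longValue() > cachingCost;
        }
    }

The stale-data policy could be pluggable in the same spirit. Again, purely illustrative and not real code from anywhere: the PeakHourPolicy below simply lengthens the acceptable age of a cached entry during configured peak hours, which is one way to artificially lengthen the ergodic period under load.

    import java.util.Calendar;

    /** Decides how stale a cached entry may be before it must be regenerated. */
    interface StalenessPolicy {
        boolean isAcceptable(String key, long ageMillis);
    }

    /** Serves staler data during peak hours so the site degrades gracefully. */
    class PeakHourPolicy implements StalenessPolicy {
        private final long normalMaxAge;
        private final long peakMaxAge;
        private final int peakStartHour;
        private final int peakEndHour;

        PeakHourPolicy(long normalMaxAge, long peakMaxAge,
                       int peakStartHour, int peakEndHour) {
            this.normalMaxAge = normalMaxAge;
            this.peakMaxAge = peakMaxAge;
            this.peakStartHour = peakStartHour;
            this.peakEndHour = peakEndHour;
        }

        public boolean isAcceptable(String key, long ageMillis) {
            int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
            boolean peak = (hour >= peakStartHour && hour < peakEndHour);
            return ageMillis <= (peak ? peakMaxAge : normalMaxAge);
        }
    }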
--
"They that give up essential liberty to obtain a little temporary safety
deserve neither liberty nor safety."
                - Benjamin Franklin

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, email: [EMAIL PROTECTED]