On Monday, April 14, 2003, at 07:02 PM, Matt Sergeant wrote:

>>> This is not totally coherent yet, but hopefully it's getting closer.
>>
>> I think it's a nice thought, but there are going to be a lot of caveats.
>>
>> - If the cache "stage" is towards the end of the pipeline, it still has to check all the dependencies before it, so the overhead is the same.
>
> If the same operation with the same features has the same overhead, then that's not much of a caveat :)
>
>> If it's towards the start, then the cache isn't going to be very effective, since the later stages will always have to be run.
>
> As above.
>
>> - Unless there are multiple cache "stages", there's going to be the same issue of forcing a re-run of the entire pipeline if anything changes anywhere. If there are multiple stages to avoid this, then the overhead is similar to that of incremental caching.
>
> As above.
>
>> - Users are going to have to have detailed knowledge of how elements in the pipeline work in order to place cache points appropriately, and to be able to specify what affects the caching.
>>
>> I'll be blunt - implementing it this way would cause a lot of confusion for users and would most likely result in a lot of people doing it the wrong way. You'd be forcing users to deal with something that can be handled internally for the most part. The only benefit would be the ability to optimize for specific scenarios (what I would consider rather premature optimization).
>
> That's not very blunt :)
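To make the placement caveats concrete, here's a minimal Python sketch (class and method names are mine, not AxKit's API): wherever the cache "stage" sits, a hit is only valid if every upstream stage's dependencies are unchanged, so the dependency-checking overhead is the same as checking stage by stage.

```python
class Stage:
    """Hypothetical pipeline stage (illustrative only)."""
    def __init__(self, name, transform, deps_fresh):
        self.name = name
        self.transform = transform    # content -> content
        self.deps_fresh = deps_fresh  # () -> bool: are this stage's inputs unchanged?

class CacheStage:
    """A cache placed at one point in the pipeline. A hit is only valid if
    *every* upstream stage's dependencies are unchanged, so the checking
    cost does not go away; only the transform work is skipped."""
    def __init__(self):
        self.entry = None  # cached output of everything upstream

    def run(self, upstream, content):
        if self.entry is not None and all(s.deps_fresh() for s in upstream):
            return self.entry          # hit: skip the upstream transforms
        for s in upstream:             # miss: re-run everything upstream
            content = s.transform(content)
        self.entry = content
        return content
```

If the cache stage is last, the checks above cover the whole pipeline; if it's first, `upstream` is empty and everything after it still runs every time, which is the placement dilemma in a nutshell.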
Since we have to maintain some backwards compatibility, I'd add to Matt's description that configuring a pipeline with no cache step means a cache step with the present behaviour is magically inserted. Good magic: users don't see the difference.
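That backwards-compatibility rule could be sketched like this (the function and step names are assumptions, not AxKit's config API): a pipeline configured without a cache step gets the default one inserted for it, so existing configurations keep today's behaviour.

```python
def build_pipeline(steps):
    """Hypothetical config assembly: if the user names no cache step,
    insert a default one that preserves the present whole-pipeline
    caching behaviour. Users who never mention caching see no change."""
    if not any(kind == "cache" for kind, _ in steps):
        # Where exactly the default goes is an assumption; the point is
        # that it is inserted without the user asking for it.
        steps = [("cache", "whole-pipeline")] + list(steps)
    return steps
```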
And for those who want to cache their own way, it'll be a little more complex, but that's to be expected; after all, they're doing something extra. I know I've had quite a number of XSPs that could have been cached on query string + a touch file touched every time a db was updated. Those could have benefited from smarter caching, and I wouldn't call that premature optimisation.
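The query-string-plus-touch-file scheme described above amounts to a cache key like the following sketch (function name and hashing choice are mine, not any AxKit API): the cached page stays valid for a given query string until someone touches the file, e.g. after a database update.

```python
import hashlib
import os

def cache_key(query_string, touch_file):
    """Sketch of the caching rule: key on the query string plus the
    touch file's mtime, so touching the file invalidates every cached
    variant at once."""
    mtime = os.path.getmtime(touch_file)
    raw = f"{query_string}|{mtime}"
    return hashlib.sha1(raw.encode()).hexdigest()
```

Touching the file after a db update changes the mtime, which changes every key, which forces a regenerate on the next request for each query string.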
--r