> This is a very good idea, and sounds much better than having those

The major problem with all dirty caching is that we have more than one
caching layer, and of course, things abort.

The fact that people would be shown dirty versions instead of the
proper article leads to a situation where, in cases like vandal
fighting, people will see stale versions instead of waiting a few
seconds and getting the real one.

In theory, the update flow could look like this (a rough sketch
follows the list):

1. Set "I'm working on this" in a parallelism coordinator or lock  
manager
2. Do all database transactions & commit
3. Parse
4. Set memcached object
5. Invalidate squid objects
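
To make the ordering concrete, here is a toy Python sketch (not
MediaWiki code; dicts stand in for the database, memcached and squid,
and a per-title threading.Lock plays the parallelism coordinator):

import threading

# In-memory stand-ins for the real layers; purely illustrative.
db = {}              # title -> wikitext (the database)
parsed_cache = {}    # title -> parsed HTML (the memcached role)
http_cache = {}      # URL -> rendered page (the squid role)
locks = {}           # title -> per-article lock (the coordinator)
locks_guard = threading.Lock()

def parse(text):
    return "<p>%s</p>" % text    # stand-in for the expensive parser

def update_article(title, new_text):
    with locks_guard:
        lock = locks.setdefault(title, threading.Lock())
    with lock:                                   # 1. "I'm working on this"
        db[title] = new_text                     # 2. database write & commit
        html = parse(new_text)                   # 3. parse
        parsed_cache[title] = html               # 4. set memcached object
        http_cache.pop("/wiki/" + title, None)   # 5. invalidate squid object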

Now, whether we parse, block, or serve stale could be decided
dynamically: e.g. if we detect more than x parallel parses, we fall
back to blocking for a few seconds; once we detect more than y
threads blocked on the task, or the block expires and there's no
fresh content yet (or there's a newer copy.. ), then stale stuff can
be served.
In a perfect world that asks for specialized software :)
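
For illustration only, that policy might reduce to a decision
function like the one below; the threshold names and values are
invented (MAX_PARALLEL_PARSES plays the "x" above,
MAX_BLOCKED_WAITERS the "y"), and the counters would be maintained
by the coordinator:

MAX_PARALLEL_PARSES = 4      # the "x" above
MAX_BLOCKED_WAITERS = 20     # the "y" above

parses_in_flight = {}        # title -> number of concurrent parses
blocked_waiters = {}         # title -> threads currently blocked on it

def on_cache_miss(title):
    # Light load: just parse it ourselves.
    if parses_in_flight.get(title, 0) <= MAX_PARALLEL_PARSES:
        return "parse"
    # Moderate load: block for a few seconds hoping fresh content
    # shows up; a timeout here also falls through to stale serving.
    if blocked_waiters.get(title, 0) <= MAX_BLOCKED_WAITERS:
        return "block"
    # Heavy load: dirty serving as the last resort.
    return "serve_stale"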

Do note that for quite a few years now we have done lots and lots of
work to avoid stale content being served. I would not see dirty
serving as something we should be proud of ;-)

Domas
