On Tue, 16 Dec 2008 14:32:39 +0100, Joel Hedlund wrote:

> Duncan Booth wrote:
>> I think you probably are correct. The only thing I can think that
>> might help is if you can catch all the situations where changes to
>> the dependent values might change the hash and wrap them up: before
>> changing the hash pop the item out of the dict, then reinsert it
>> after the change.
>
> That would probably require a lot of uncomfortable signal handling,
> especially for a piece of functionality I'd like to be as unobtrusive
> as possible in the application.
>
>> Alternatively give up on defining __hash__ and __eq__ for
>> FragmentInfo and rely on object identity instead.
>
> Object identity wouldn't work so well for caching. Objects would
> always be drawn as they appeared for the first time. No updates would
> be shown until the objects were flushed from the cache.
Perhaps I don't understand your program structure, but I don't see how
that follows.

> I've been experimenting with a list cache now and I can't say I'm
> noticing any change in performance for a cache of 100 items.

100 items isn't very big though. If you have 50,000 items you may
notice a significant slowdown :)

If having many items in the cache is possible, you should consider
using a binary search instead of a linear search through the cache. See
the bisect module.
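For instance, here is an untested sketch that keeps the cache as two
parallel sorted lists (the names cache_keys, cache_values, lookup and
store are made up for illustration, and it assumes your keys are
orderable):

    import bisect

    # Hypothetical cache: keys kept sorted, values kept parallel.
    cache_keys = []
    cache_values = []

    def lookup(key):
        """Return the cached value for key, or None if absent."""
        i = bisect.bisect_left(cache_keys, key)
        if i != len(cache_keys) and cache_keys[i] == key:
            return cache_values[i]
        return None

    def store(key, value):
        """Insert or replace the cached value for key."""
        i = bisect.bisect_left(cache_keys, key)
        if i != len(cache_keys) and cache_keys[i] == key:
            cache_values[i] = value   # replace existing entry
        else:
            cache_keys.insert(i, key)
            cache_values.insert(i, value)

Lookups then take O(log n) comparisons instead of O(n). Inserts still
shift list items, but that is a cheap memmove compared to Python-level
comparisons.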
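P.S. If you do end up fighting the changing-hash problem Duncan
described, his pop-then-reinsert idea could be wrapped up in a context
manager; another untested sketch (the name rehashed is made up, and it
assumes the item is currently in the dict):

    from contextlib import contextmanager

    @contextmanager
    def rehashed(d, item):
        # Pop the entry while item's hash is still current, let the
        # caller mutate item, then reinsert it under the new hash.
        value = d.pop(item)
        try:
            yield item
        finally:
            d[item] = value

    # Usage:
    # with rehashed(cache, fragment):
    #     mutate(fragment)  # whatever changes the hash

-- 
Steven
--
http://mail.python.org/mailman/listinfo/python-list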