Except that I don't see why Cached!(...) needs to physically separate the mutable state from the rest of the object. I do see that Cached!(...) would have to cast away immutable (i.e. break the type system) in order to put mutable state in an immutable object, but if we set aside the current type system for a moment, *in principle* what's the big deal if the mutable state is physically located within the object? In many cases you can save significant time and memory by avoiding all that hashtable management, and performance fanatics like me will want that speed (when it comes to standard libraries, I demand satisfaction).
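For illustration, here is roughly what an inline cache could look like in C++, using a hypothetical `Widget` type; C++'s `mutable` keyword plays the role of the type-system escape hatch being discussed, and a lookup costs one branch plus a field read instead of a hashtable probe:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Hypothetical type: logically immutable, but carries its own hash cache.
struct Widget {
    std::string name;
    mutable std::size_t hash = 0;
    mutable bool hashValid = false;  // 'mutable' is C++'s sanctioned escape hatch

    std::size_t cachedHash() const {
        if (!hashValid) {
            hash = std::hash<std::string>{}(name);  // the expensive computation
            hashValid = true;
        }
        return hash;
    }
};
```

This sketch is single-threaded; the multithreading wrinkle is exactly the problem discussed below.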

Now, I recognize and respect the benefits of transitive immutability:
1. safe multithreading
2. allowing compiler optimizations that are not possible in C++
3. ability to store compile-time immutable literals in ROM

(3) does indeed require mutable state to be stored separately, but it doesn't seem like a common use case (and there is a workaround), and I don't see how (1) and (2) are necessarily broken.

I must be tired.

Regarding (1), right after posting this I remembered the key difference between caching to a "global" hashtable and storing the cached value directly within the object: the hashtable is thread-local, whereas the object itself may be shared between threads. So that's a pretty fundamental difference.
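To make that concrete, here is a sketch of the thread-local-table approach in C++ (the `Obj` type and `cachedHash` helper are hypothetical): each thread owns its own table, so no locking is needed, at the cost that a value computed on one thread is invisible to, and recomputed on, every other thread.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

struct Obj {
    std::string data;
};

std::size_t cachedHash(const Obj& o) {
    // One cache per thread: lookups never race, but nothing is shared either.
    thread_local std::unordered_map<const Obj*, std::size_t> cache;
    auto it = cache.find(&o);
    if (it != cache.end()) return it->second;
    std::size_t h = std::hash<std::string>{}(o.data);  // the expensive computation
    cache.emplace(&o, h);
    return h;
}
```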

Even so, if Cached!(...) puts mutable state directly in the object, fast synchronization mechanisms could ensure that two threads don't step on each other if they both compute the cached value at the same time. If the cached value is something simple like a hashcode, an atomic write should suffice. And since both threads will compute the same result, it doesn't matter which one wins.
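A minimal C++ sketch of that idea, assuming the cached value is a hashcode and that zero can be reserved as the "not yet computed" sentinel (the `SharedWidget` type is hypothetical):

```cpp
#include <atomic>
#include <cstddef>
#include <functional>
#include <string>

struct SharedWidget {
    std::string name;
    // 0 doubles as the "not yet computed" sentinel.
    mutable std::atomic<std::size_t> hash{0};

    std::size_t cachedHash() const {
        std::size_t h = hash.load(std::memory_order_relaxed);
        if (h != 0) return h;
        h = std::hash<std::string>{}(name);  // the expensive computation
        if (h == 0) h = 1;                   // keep 0 free as the sentinel
        // A racing thread computes the same value, so whoever wins is fine.
        hash.store(h, std::memory_order_relaxed);
        return h;
    }
};
```

Relaxed ordering suffices here because the stored value depends only on the immutable `name` field, so every thread that races on the store writes the same word.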
