Suppose we had a caching solution (you could think of it as @cached, but it could be done in a library). The user would need to provide a const, pure function that returns the same value that is stored in the cache. This is enforceable: the only way to write to the cache is by calling the function.
How far would that take us? I don't think there are many use cases for logically pure apart from caching, but I have very little idea about logical const.
I think a caching solution would cover most valid needs and
indeed would be checkable.
We can even try its usability with a library-only solution. The idea is to plant a mixin inside the object that defines a static hashtable mapping object addresses to cached values of the desired types. The object's destructor removes the current object's address from the hash (if present). Given that the hashtable is global, it doesn't obey the regular rules for immutability, so essentially each object has access to a private stash of unbounded size. The cost of getting to the stash is proportional to the number of objects within the thread that make use of that stash.
Uh, it had better not be proportional. A hashtable gives us O(1), one hopes.
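To make the library-only idea concrete, here is a hedged sketch of what such a mixin might look like. The names (Cached, _cache) and the exact signature are invented for illustration; the point is that the hashtable is static and, in D, thread-local by default, so it sits outside the object and is untouched by const/immutable on `this`.

```d
mixin template Cached(T, string name, alias impl)
{
    // Per-instantiation stash: maps object addresses to cached values.
    private static T[const(void)*] _cache;

    mixin("@property T " ~ name ~ "() const {
        auto key = cast(const(void)*) this;
        if (auto p = key in _cache)
            return *p;            // subsequent calls: O(1) hash lookup
        auto v = impl();          // first call: compute via the const, pure impl
        _cache[key] = v;          // the only way to write to the cache
        return v;
    }");

    // A complete version would also remove the entry in ~this(), so a
    // recycled address cannot alias a stale cached value.
}
```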
Sample usage:
class Circle {
    private double radius;
    private double circumferenceImpl() const {
        return radius * 2 * PI;  // PI from std.math
    }
    mixin Cached!(double, "circumference", circumferenceImpl);
    ...
}
auto c = new const(Circle);
Aside: what's the difference between this and new immutable(Circle)?
double len1 = c.circumference;
double len2 = c.circumference;
Upon the first use of the property c.circumference, Cached computes the value by calling this.circumferenceImpl() and stashes it in the hash. The second call just does a hash lookup.
In this example searching the hash may actually take longer
than computing the thing, but I'm just proving the concept.
If this is a useful artifact, Walter had an idea a while ago that we could have the compiler help by using the per-object monitor pointer instead of the static hashtable. Right now the pointer points to a monitor object, but it could point to a little struct containing e.g. a Monitor and a void*, which opens the way to O(1) access to unbounded cached data. The compiler would then "understand" not to treat accesses to that data as regular field accesses, and not make assumptions about it being immutable.
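If it helps to visualize, the layout Walter's idea implies might look something like this (a sketch only; the struct and field names are invented):

```d
// Hypothetical target of the per-object monitor pointer: instead of a
// bare monitor object, a small struct that also carries one spare
// pointer. That pointer gives each object O(1) access to unbounded
// cached data, with no hashtable management at all.
struct MonitorSlot
{
    Object monitor; // what the monitor pointer designates today
    void*  cache;   // opaque per-object stash for Cached's data
}
```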
Any takers for Cached? It would be good to assess its level of
usefulness first.
I like this idea, and I suspect it could be used to implement not
just caching but lazy immutable data structures.
Except that I don't see why Cached!(...) needs to physically
separate the mutable state from the rest of the object. I mean, I
see that Cached!(...) would have to cast away immutable (break
the type system) in order to put mutable state in an immutable
object, but if we set aside the current type system for a moment,
*in principle* what's the big deal if the mutable state is
physically located within the object? In many cases you can save
significant time and memory by avoiding all that hashtable
management, and performance Nazis like me will want that speed
(when it comes to standard libraries, I demand satisfaction).
Now, I recognize and respect the benefits of transitive
immutability:
1. safe multithreading
2. allowing compiler optimizations that are not possible in C++
3. ability to store compile-time immutable literals in ROM
(3) does indeed require mutable state to be stored separately,
but it doesn't seem like a common use case (and there is a
workaround), and I don't see how (1) and (2) are necessarily
broken.
As a separate question, do you think it possible to implement
Cached!(...) to access an immutable field by casting away
immutable, without screwing up (1) and (2)?
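For concreteness, here is a hedged sketch of the in-object variant being asked about: the cached value lives in a field, and the getter casts away const to write it. The caveat, which is precisely the question, is that writing through such a cast is undefined behavior in D if the object really is immutable (or lives in ROM, or is shared across threads without synchronization).

```d
import std.math : isNaN, PI;

class Circle
{
    private double radius;
    private double _circumference = double.nan; // in-object cache slot

    this(double r) { radius = r; }

    @property double circumference() const
    {
        if (_circumference.isNaN)
        {
            // Break the type system locally: write through const.
            auto self = cast(Circle) this;
            self._circumference = radius * 2 * PI;
        }
        return _circumference;
    }
}
```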