Mark,

> Variables that are marked as shared have a distinct descriptor class
> (VariableSharedDescriptor), which allows us to lock the Java monitor
> associated with that AvailObject for the duration of operations that are
> supposed to be atomic. Not only does this cause mutual exclusion of access
> to the variable, but it also acts as a memory barrier. Since a variable's
> value slot is the only slot that can change in Avail (basically), this
> ensures that fibers running simultaneously on distinct Threads see a
> perfectly consistent view of memory. If they're accessing shared objects,
> those objects aren't changing. If they're accessing shared variables, they
> pass through a memory barrier that ensures all reads and writes appear to
> be perfectly serialized.
>
> This is an *exceptionally* clean memory model. It hides all the messiness
> of Java's model while avoiding locks on everything except shared variable
> accesses. Atomic operations on non-shared variables don't even need to
> ensure memory coherence to be correct.
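The mechanism Mark describes can be illustrated with a minimal Java sketch. This is not the actual VariableSharedDescriptor code; the class and method names are hypothetical. The point is that entering and leaving a `synchronized` block on the holder object gives both mutual exclusion and, under the Java Memory Model, the memory-barrier semantics described above.

```java
// Minimal sketch (hypothetical names, not the real Avail implementation)
// of a shared variable whose value slot is only touched while holding the
// Java monitor of the holder object itself.
final class SharedVariable<T> {
    // The variable's single mutable slot.
    private T value;

    SharedVariable(final T initialValue) {
        this.value = initialValue;
    }

    // Monitor entry/exit acts as a full memory barrier under the Java
    // Memory Model, so every reader sees a consistent view of the slot.
    synchronized T getValue() {
        return value;
    }

    synchronized void setValue(final T newValue) {
        value = newValue;
    }

    // An atomic read-modify-write: both accesses happen under a single
    // lock acquisition, so the operation appears perfectly serialized
    // with respect to all other accesses.
    synchronized T getAndSet(final T newValue) {
        final T old = value;
        value = newValue;
        return old;
    }
}
```

Non-shared variables, by contrast, are only ever reachable from one fiber at a time, so they can skip the lock entirely and still be correct.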
Indeed, the memory model is very clean - and portable!

> Heh, that's a cornerstone of the way the Avail VM works. Mutable objects
> are allowed to be destroyed or recycled by certain operations. Say we have
> an AvailObject that represents the number 10^100. Now say we attempt to
> add one to it (say by invoking primitive #1; see P_001_Addition.java). We
> know that the primitive will consume both the 10^100 object and the one
> object off the operand stack, then push their sum. If the 10^100 object is
> mutable, we are assured that there are no other references to it. So why
> not recycle that space? So we do. We actually clobber that integer object
> (an AvailObject with IntegerDescriptor.mutable() as its descriptor), at
> least if the result occupies the same amount of space as the original. So
> we recover some of the cost of uniformly boxing objects. And the
> AvailObject representing the new value (which previously represented the
> old value) has no other incoming references, so it remains mutable. We do
> the same for tuples, sets, maps, objects, floats, doubles, closures,
> continuations, and probably a lot more. And the best part is that the
> Avail programmer can remain completely oblivious to this fact, since it
> can't have any visible effect except improved performance. The Avail
> programmer doesn't have to care whether "foo" == "foo" like in Java, nor
> does he care whether "foo"[1]→¢g clobbers the region of memory where "foo"
> occurs, or allocates a new region of memory to store "goo". It has the
> same visible effect.

I can't say how much I like this approach. It saves the programmer from
writing explicit functional monadic constructs (which I passionately hate).
It is a dream marriage between the imperative and the functional worlds!

> So now you see why we want to keep things mutable as long as we can.
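The recycle-the-mutable-operand idea can be sketched in a few lines of Java. The names here are hypothetical (this is not the real IntegerDescriptor machinery): the essential invariant is that "mutable" means "no other incoming references", so an operation that consumes its operands may clobber a mutable one in place, and the result it returns that way is itself still mutable.

```java
import java.math.BigInteger;

// Sketch of destructive operand recycling.  "mutable" stands in for the
// Avail invariant that nothing else can possibly reference this object.
final class BoxedInteger {
    private BigInteger value;
    private boolean mutable = true;   // true while we hold the only reference

    BoxedInteger(final BigInteger value) { this.value = value; }

    BigInteger value() { return value; }

    // Called when the object becomes visible to other holders.
    void beImmutable() { mutable = false; }

    // Addition consumes both operands, as the primitive does on the
    // operand stack.  If the receiver is still mutable, reuse its storage
    // for the sum; the recycled result has no other incoming references,
    // so it remains mutable for the next operation.
    BoxedInteger plus(final BoxedInteger other) {
        final BigInteger sum = value.add(other.value);
        if (mutable) {
            value = sum;                   // clobber in place
            return this;
        }
        return new BoxedInteger(sum);      // immutable: must allocate
    }
}
```

Exactly as described above, the caller cannot observe the difference between the two branches except as improved performance.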
> Eventually the Level Two optimizer will do significantly more powerful
> flow analysis to determine whether a value is being used monadically
> (i.e., never has to become immutable), but the Level One mechanisms are
> already pretty good. And because all of our underlying objects are
> represented as trees when they get sufficiently big, even immutable and
> shared objects have reasonably efficient operations. E.g., adding an
> element to a huge set has to clone *at most* a handful of small Bagwell
> hash tree bins, but ideally even those can just be clobbered directly if
> they're still mutable.

Great stuff.

>> And that an object that is not shared (has one reference) can be
>> "destroyed" while manipulated?
>
> Yes, as above, but "is mutable" is the right distinction.

Got it.

>> I need this to be able to hash-cons objects (via weak maps).
>
> I suggest wrapping a WeakHashMap in a POJO for now. You should be able to
> instantiate and manipulate almost everything in the Java library (or
> anything else that's on the class path) using the POJO interface. Wow, I
> haven't seen someone do hash-consing since I read Henry Baker's paper on
> the subject mumble-mumble years ago.

I've developed a DSL (currently in Scala) that has the potential for
exponential computing (think Hashlife), based on a novel memory-managed
memoization technique. It solves the central problem of memoization: "what
to keep, and what not?". In my framework, an immutable value may carry its
computational steps, or inheritance, whereby every computational step is
(weakly) hash-consed. When an immutable value (and its computational
inheritance with it) is garbage collected, the associated weakly
hash-consed versions can be released. This scheme only works when functions
are pure and their arguments are purely functional data structures. In
addition, it can be made optimally efficient when such functional data
structures have a unique representation, regardless of the order of
operations.
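Mark's "WeakHashMap in a POJO" suggestion might look something like the sketch below (the class name is hypothetical). WeakHashMap holds its keys weakly, so once the last strong reference to a canonical value disappears, the collector can reclaim the table entry - which is exactly the release-on-GC behavior the memoization scheme above needs. The value side must be a WeakReference too, since an ordinary strong value would keep its own key alive forever.

```java
import java.lang.ref.WeakReference;
import java.util.WeakHashMap;

// Sketch of a hash-consing interner built on WeakHashMap.  Each value
// maps (weakly, on both sides) to its canonical representative, so
// structurally equal values collapse to one identity while remaining
// eligible for garbage collection once unreferenced.
final class Interner<T> {
    private final WeakHashMap<T, WeakReference<T>> table = new WeakHashMap<>();

    // Returns the canonical instance equal to the argument, registering
    // the argument itself as canonical if no equal value is known yet.
    synchronized T intern(final T value) {
        final WeakReference<T> ref = table.get(value);
        if (ref != null) {
            final T canonical = ref.get();
            if (canonical != null) {
                return canonical;
            }
        }
        table.put(value, new WeakReference<>(value));
        return value;
    }
}
```

After interning, structural equality coincides with reference equality for canonical values, so `==` can replace deep `equals` checks in the memo table.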
And with Avail, I can add another dimension: very advanced typing! Not to
mention that it perfectly matches my values.

Cheers,
Robbert.
