On Fri, Aug 30, 2013 at 10:23 PM, Sandro Magi <[email protected]> wrote:
> You seem to have misunderstood something critical. Each newly created
> object starts in its own fresh region. This region is merged into a
> larger region only if the larger region gets a pointer to this object.

Yes, I understand that. By my read, this means tenured heap state (for a thread) all ends up in one big region, because for it to stay alive something long-lived has to point to it, yet this is exactly the place where cycles typically cause trouble.

The only way a second region can exist, as far as I can see, is for short-lived data which does not reference global data. In my experience this type of data infrequently has cycles, so normal ARC handles it just fine. Further, this data often holds pointers to tenured global heap data, which would force it to be merged into the tenured region anyhow. Lastly, this is the data which is most efficiently handled by generational GC, so it does not pose a problem either way.

For example, consider a simple async webserver. It has a list of current request sessions, off which all state about those sessions is referenced. This would force all of that globally referenced state into one big global region, which is exactly where the primary cyclic data-structure issues lie. Subregions would be created only for short-lived, "stack-scoped" data in handler functions, and only so long as that data did not reference global data or vice versa.

The paper is worded as if it solves something in cyclic data-structure detection, but I don't see how it provides efficient detection of cyclic structures. In the applications I can think of, there would be one big blobby tenured global heap. To find cycles within it, this algorithm does a "split attempt", which is effectively a tracing cycle finder.

Is the advantage that short-lived, stack-scoped regions (with no references in or out of the global blobby region) no longer need per-object reference counts? I don't see much advantage there, since such objects typically have only one reference.
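To make the distinction concrete, here is a minimal Python sketch (CPython uses plain reference counting plus a separate tracing cycle collector, so both behaviors are directly observable; the `Session` class and its `peer` back-pointer are hypothetical, standing in for the webserver's session state):

```python
import gc
import weakref

class Session:
    """Stand-in for per-request session state in the webserver example."""
    def __init__(self, sid):
        self.sid = sid
        self.peer = None  # back-pointer; forms a cycle when linked both ways

gc.disable()  # leave only plain reference counting, i.e. ARC-style behavior

a, b = Session(1), Session(2)
a.peer, b.peer = b, a          # the cycle: a -> b -> a
probe = weakref.ref(a)

del a, b                       # drop the only external references
assert probe() is not None     # refcounting alone cannot free the cycle

gc.collect()                   # a tracing pass (cf. the "split attempt") frees it
assert probe() is None
```

Short-lived acyclic handler data dies as soon as its count drops to zero; only the cyclic, globally reachable state needs a tracing pass, which is why the cycle question concentrates in the one big tenured region.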
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
