On Tuesday, 23 September 2014 at 16:47:09 UTC, David Nadlinger wrote:
> I was briefly discussing this with Andrei at (I think) DConf 2013. I suggested moving data to a separate global GC heap on casting stuff to shared.

Yes, that sounds expensive. A real example from my work: the client receives a big dataset (~1GB) from the server in a background thread, builds and checks constraints and indexes (which is fairly expensive too; RBTree), and hands it over to the main thread. And the client machine is not powerful enough for frequent marshaling of such a big dataset; handling it at all is already a challenge. If you copy it twice, you have a 3GB working set, and the GC needs roughly a 2x reserve, raising the memory requirement to 6GB; without the dup the requirement is 1-2GB. Also, when a collection is triggered while copying into the shared GC heap, what does it do, stop the world again?
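
To make the cost concrete, here is a sketch (in current D; the names Dataset and loader are illustrative) of the hand-off as it works today: the background thread builds the structure and transfers the single reference with a cast to shared, so no bytes are copied. Under the proposed scheme, that cast is exactly the point where the whole object graph would have to be moved to the global heap.

    import std.concurrency;
    import std.container.rbtree;

    alias Dataset = RedBlackTree!int;   // stand-in for the ~1GB indexed dataset

    void loader(Tid owner)
    {
        auto data = new Dataset;
        foreach (i; 0 .. 1_000_000)
            data.insert(i);             // build and check constraints/indexes
        // Hand-off today: cast the sole reference to shared and send it;
        // under the proposed per-heap design, this cast would trigger the copy.
        send(owner, cast(shared) data);
    }

    void main()
    {
        spawn(&loader, thisTid);
        auto data = receiveOnly!(shared Dataset);
        auto local = cast(Dataset) data; // main thread assumes unique ownership
    }

Today the send transfers only a pointer; the worry above is that a copying cast-to-shared would turn that single line into a gigabyte-scale marshaling step.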
