On Monday, 1 July 2013 at 04:50:29 UTC, Jonathan M Davis wrote:
On Monday, July 01, 2013 06:27:15 Marco Leise wrote:
On Sun, 30 Jun 2013 22:55:26 +0200, "Gabi" <galim...@bezeqint.net> wrote:
> I wonder why that is.. Why would deleting 1 million objects in C++
> (using std::shared_ptr, for example) have to be slower than the
> garbage collector freeing a big chunk of a million objects all at
> once.

I have no numbers, but I think that especially when you have complex graph structures linked with pointers, the GC needs a while to follow all the links and mark the referenced objects as still in use. And this work is repeated every now and then, whenever you have allocated N new objects.
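As a rough, hypothetical sketch (not a benchmark) of the kind of work the mark phase has to do, here is a GC-allocated list of a million pointer-linked nodes in D; the Node type is just a made-up example:

import std.stdio;

// Hypothetical Node type; any pointer-linked structure gives the GC the
// same kind of work during the mark phase.
class Node
{
    Node next;
    int value;

    this(int v) { value = v; }
}

void main()
{
    // Build a GC-allocated singly linked list of one million nodes.
    Node head;
    foreach (i; 0 .. 1_000_000)
    {
        auto n = new Node(i);
        n.next = head;
        head = n;
    }

    // On every collection cycle, the GC must follow all of these links to
    // mark each node as reachable before anything can be swept.
    writeln("built list, head.value = ", head.value);
}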

The other thing to consider is that when the GC runs, it has to figure out whether anything needs to be collected. And regardless of whether anything actually needs to be collected, it has to go through all of the various references to mark them and then to sweep them. With deterministic destruction, you don't have to do that. If you have a fairly small number of heap allocations in your program, it's generally not a big deal. But if you're constantly allocating and deallocating small objects, then the GC is going to run a lot more frequently, and it'll have a lot more objects to examine. So, having lots of small objects which are frequently being created and destroyed is pretty much guaranteed to tank your performance if they're being allocated by the GC. You really want reference counting for those sorts of situations.
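For comparison, a minimal sketch of the reference-counting alternative in D, using std.typecons.RefCounted; the Payload struct here is purely illustrative:

import std.typecons : RefCounted;

// Purely illustrative payload; think of a small, frequently created object.
struct Payload
{
    int value;
}

void main()
{
    foreach (i; 0 .. 1_000_000)
    {
        auto p = RefCounted!Payload(i);
        // The payload is freed right here, as soon as the last reference
        // goes out of scope -- no mark-and-sweep pass over the heap is
        // needed to reclaim it.
    }
}

Each payload is destroyed deterministically the moment its last reference disappears, so there is no periodic pass over the whole heap.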

This is only true of D's current GC.

Modern parallel compacting GCs don't suffer from this.

--
Paulo
