On 06.03.2012 16:27, Manu wrote:
On 26 February 2012 00:55, Walter Bright <newshou...@digitalmars.com> wrote:

    On 2/25/2012 2:08 PM, Paulo Pinto wrote:

        Most standard compiler malloc()/free() implementations are
        actually slower than
        most advanced GC algorithms.


    Most straight up GC vs malloc/free benchmarks miss something
    crucial. A GC allows one to do substantially *fewer* allocations.
    It's a lot faster to not allocate than to allocate.


Do you really think that's true? Are there any statistics to support that?
I'm extremely sceptical of this claim.

I would have surely thought using a GC leads to a significant *increase*
in allocations for a few reasons:
   It's easy to allocate, ie, nothing to discourage you
   It's easy to clean up - you don't have to worry about cleanup
problems, which makes it simpler to use in many situations
   Dynamic arrays are easy - many C++ users will avoid dynamic arrays
because

You mean like new[]? That's just glorified allocation. STL vector is C++'s dynamic array, and it's used a lot.

the explicit allocation/clean up implies complexity, one will
always use the stack,
 or a fixed array where they can get away with it

Still possible

   Slicing,
Nope

concatenation, etc. perform bucket loads of implicit GC
allocations

It sure does, just as it does without a GC, via reallocs and refcounting.
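
Roughly, as an illustration (my own sketch, nothing from the thread, and not measured): C++ concatenation allocates just as eagerly unless you reserve up front.

#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> parts = {"the", "quick", "brown", "fox"};

    // Naive concatenation: each += may reallocate and copy, much like
    // the hidden allocations behind GC'd slicing/concatenation.
    std::string naive;
    for (const auto& p : parts)
        naive += p + ' ';          // builds a temporary, may realloc

    // Reserving up front collapses the growth into one allocation.
    std::string reserved;
    std::size_t total = 0;
    for (const auto& p : parts)
        total += p.size() + 1;
    reserved.reserve(total);
    for (const auto& p : parts) {
        reserved += p;
        reserved += ' ';
    }

    std::printf("%s\n%s\n", naive.c_str(), reserved.c_str());
    return 0;
}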

   Strings... - C coders who reject the stl will almost always have a
separate string heap with very particular allocation patterns, and
almost always refcounted

   Phobos/druntime allocate liberally - the CRT almost never allocates

It's just that most of the CRT has incredibly bad usability, partly because it lacks _any_ notion of allocators. And the policy of using statically allocated shared data, as in localtime, srand, etc., has shown remarkably bad multi-threaded scalability.
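
As a concrete illustration (a sketch assuming a POSIX libc; localtime_r is POSIX, not standard C): localtime hands back a pointer into one shared static buffer, while the reentrant variant writes into storage the caller owns.

#include <cstdio>
#include <ctime>

int main() {
    std::time_t now = std::time(nullptr);

    // localtime() returns a pointer to internal static storage; every
    // call, from any thread, reuses that same buffer.
    std::tm* shared = std::localtime(&now);
    std::printf("year via shared buffer: %d\n", shared->tm_year + 1900);

    // POSIX localtime_r() fills a buffer the caller owns, so each
    // thread can keep its own.
    std::tm mine{};
    localtime_r(&now, &mine);
    std::printf("year via caller buffer: %d\n", mine.tm_year + 1900);
    return 0;
}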

This is my single biggest fear in D. I have explicit control within my
own code, but I wonder if many D libraries will be sloppy and
over-allocate all over the place, and be generally unusable in many
applications.
If D is another language like C where the majority of libraries
(including the standard libraries I fear) are unusable in various
contexts, then that kinda defeats the purpose. D's module system is one
of its biggest selling points.

I think there should be strict phobos allocation policies, and ideally,
druntime should NEVER allocate if it can help it.



    Consider C strings. You need to keep track of ownership of it. That
    often means creating extra copies, rather than sharing a single copy.


Rubbish, strings are almost always either refcounted

like COW?

or on the stack for
dynamic strings, or have fixed memory allocated within structures. I
don't think I've ever seen someone duplicating strings into separate
allocations liberally.

I've seen some, and sometimes memory corruption when people hesitated to copy even where they *did* need to pass a copy.
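
For illustration only (my own sketch, not anyone's production code): sharing an immutable string through a reference count is one way to avoid both the needless duplication and the "forgot to copy" corruption.

#include <cstdio>
#include <memory>
#include <string>

// A cheap-to-copy handle to an immutable, refcounted string.
using SharedStr = std::shared_ptr<const std::string>;

static void consume(SharedStr s) {
    // The callee shares ownership: no duplicate buffer, and nothing
    // dangles if the caller's copy goes away first.
    std::printf("%s (refs: %ld)\n", s->c_str(), s.use_count());
}

int main() {
    SharedStr msg = std::make_shared<std::string>("hello");
    consume(msg);   // bumps the refcount instead of copying the data
    std::printf("refs after call: %ld\n", msg.use_count());
    return 0;
}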


    Enter C++'s shared_ptr. But that works by, for each object,
    allocating a *second* chunk of memory to hold the reference count.
    Right off the bat, you've got twice as many allocations & frees with
    shared_ptr than a GC would have.


Who actually uses shared_ptr?

Like everybody? Though with C++11 move semantics, unique_ptr is going to lessen its widespread use. And there are ways to spend fewer than two proper memory allocations per shared_ptr, like keeping a special block allocator for the ref-counters.
More importantly, smart pointers are here to stay in C++.
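
For example (a sketch; std::make_shared is standard C++11): the combined allocation is the simplest way to get below two heap allocations per shared object.

#include <memory>

struct Node { int value; };

int main() {
    // Two heap allocations: one for the Node, a second for the
    // control block that holds the reference count.
    std::shared_ptr<Node> separate(new Node{42});

    // One heap allocation: make_shared co-locates the Node and its
    // control block in a single memory block.
    auto combined = std::make_shared<Node>(Node{7});
    return 0;
}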

Talking about the stl is misleading... an
overwhelming number of C/C++ programmers avoid the stl like the plague
(for these exact reasons).

Instead of learning to use it properly. It has its fair share of failures, but it's not that bad.

Performance-oriented programmers rarely use the
STL out of the box, and that's what we're talking about here, right? If
you're not performance oriented, then who cares about the GC either?


--
Dmitry Olshansky
