Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-14 Thread Kagamin via Digitalmars-d
On Saturday, 13 September 2014 at 21:46:45 UTC, Andrei 
Alexandrescu wrote:
No, it's all eager copy. std::string is thoroughly botched. A good
inexpensive lesson for us. -- Andrei


I mean possible lifetime management options are:
1. string
2. string*
3. shared_ptr&lt;string&gt;
4. weak_ptr&lt;string&gt;
5. unshared_ptr&lt;string&gt; (not interlocked; does something like 
this exist?)


This way string is just like any other object. It's C++ after 
all, the foot must be shot.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-14 Thread Paulo Pinto via Digitalmars-d

Am 14.09.2014 10:27, schrieb Kagamin:

On Saturday, 13 September 2014 at 21:46:45 UTC, Andrei Alexandrescu wrote:

No, it's all eager copy. std::string is thoroughly botched. A good
inexpensive lesson for us. -- Andrei


I mean possible lifetime management options are:
1. string
2. string*
3. shared_ptr&lt;string&gt;
4. weak_ptr&lt;string&gt;
5. unshared_ptr&lt;string&gt; (not interlocked; does something like this exist?)

This way string is just like any other object. It's C++ after all, the
foot must be shot.


You forgot a few other ones:

6. string::c_str() (let char* botch string internals)
7. shared_ptr&lt;string&gt;
8. shared_ptr&lt;string&gt;*
9. weak_ptr&lt;string&gt;
10. weak_ptr&lt;string&gt;*
11. unique_ptr&lt;string&gt;
12. unique_ptr&lt;string&gt;*

Just because some of these don't make sense semantically, I am willing 
to bet someone out there is writing them now.


And I did leave out the move variations, as the list is already quite long.


--
Paulo




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-14 Thread po via Digitalmars-d



I mean possible lifetime management options are:
1. string
2. string*
3. shared_ptr&lt;string&gt;
4. weak_ptr&lt;string&gt;
5. unshared_ptr&lt;string&gt; (not interlocked; does something like 
this exist?)


This way string is just like any other object. It's C++ after 
all, the foot must be shot.


 Exactly, you can compose string with the semantics you want!
 You are making C++ look good:) Although I can't recall ever 
actually wrapping a string in shared_ptr/weak_ptr/unique_ptr...



6. string::c_str() (let char* botch string internals)


It returns a const char*, so you would have to cast const away to 
do that.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-14 Thread Paulo Pinto via Digitalmars-d

Am 14.09.2014 16:19, schrieb po:

...


6. string::c_str() (let char* botch string internals)


It returns a const char*, so you would have to cast const away to do that.


Which everyone does all the time, because the main reason c_str() exists 
is to interface with C style APIs, most of them taking only char* strings.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-14 Thread Andrei Alexandrescu via Digitalmars-d

On 9/14/14, 1:27 AM, Kagamin wrote:

On Saturday, 13 September 2014 at 21:46:45 UTC, Andrei Alexandrescu wrote:

No, it's all eager copy. std::string is thoroughly botched. A good
inexpensive lesson for us. -- Andrei


I mean possible lifetime management options are:
1. string
2. string*
3. shared_ptr&lt;string&gt;
4. weak_ptr&lt;string&gt;
5. unshared_ptr&lt;string&gt; (not interlocked; does something like this exist?)

This way string is just like any other object. It's C++ after all, the
foot must be shot.


Oh I see. (3 and 4 are infrequent, 5 doesn't exist). -- Andrei



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread po via Digitalmars-d
Smart pointers are rarely used, most C++ stuff is done by 
value.


Strings too?


 Two string types are used.

-std::string type: by value, has small buffer optimization, 
used at startup/logging, and for any dynamic strings with 
unbounded possible values


-immutable string handles: by value. When created it looks up 
into a hash to find or create that string. Two immutable strings 
with the same value will always use the same pointer (like Lua). 
These are never destroyed, they are intended as handles.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread Kagamin via Digitalmars-d

On Saturday, 13 September 2014 at 12:05:59 UTC, po wrote:
Smart pointers are rarely used, most C++ stuff is done by 
value.


Strings too?


 Two string types are used.

-std::string type: by value, has smaller buffer optimization, 
used at startup/logging, and for any dynamic strings with 
unbounded possible values


Are you sure? From basic_string.h:

_CharT*
_M_refcopy() throw()
{
#ifndef _GLIBCXX_FULLY_DYNAMIC_STRING
  if (__builtin_expect(this != &_S_empty_rep(), false))
#endif
    __gnu_cxx::__atomic_add_dispatch(&this-&gt;_M_refcount, 1);
  return _M_refdata();
}  // XXX MT


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread po via Digitalmars-d



Are you sure? From basic_string.h:

_CharT*
_M_refcopy() throw()
{
#ifndef _GLIBCXX_FULLY_DYNAMIC_STRING
  if (__builtin_expect(this != &_S_empty_rep(), false))
#endif
    __gnu_cxx::__atomic_add_dispatch(&this-&gt;_M_refcount, 1);
  return _M_refdata();
}  // XXX MT


 COW is just an implementation detail of GCC's crappy string.

 Microsoft's string, for instance, does not do COW; it uses a 15 
byte SBO. And you can also just replace it with your own string 
type (EASTL).


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread Andrei Alexandrescu via Digitalmars-d

On 9/13/14, 9:13 AM, Kagamin wrote:

On Saturday, 13 September 2014 at 12:05:59 UTC, po wrote:

Smart pointers are rarely used, most C++ stuff is done by value.


Strings too?


 Two string types are used.

-std::string type: by value, has small buffer optimization, used at
startup/logging, and for any dynamic strings with unbounded possible
values


Are you sure? From basic_string.h:

 _CharT*
 _M_refcopy() throw()
 {
#ifndef _GLIBCXX_FULLY_DYNAMIC_STRING
   if (__builtin_expect(this != &_S_empty_rep(), false))
#endif
     __gnu_cxx::__atomic_add_dispatch(&this-&gt;_M_refcount, 1);
   return _M_refdata();
 }  // XXX MT


C++11 makes all refcounting implementations of std::string illegal. -- 
Andrei


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread deadalnix via Digitalmars-d
On Saturday, 13 September 2014 at 19:34:10 UTC, Andrei 
Alexandrescu wrote:
C++11 makes all refcounting implementations of std::string 
illegal. -- Andrei


#facepalm


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread Kagamin via Digitalmars-d
On Saturday, 13 September 2014 at 19:34:10 UTC, Andrei 
Alexandrescu wrote:
C++11 makes all refcounting implementations of std::string 
illegal. -- Andrei


Ah, so lifetime management can be rolled on top of string if 
needed? Hmm... makes sense.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-13 Thread Andrei Alexandrescu via Digitalmars-d
Kagamin s...@here.lot wrote:
 On Saturday, 13 September 2014 at 19:34:10 UTC, Andrei Alexandrescu wrote:
 C++11 makes all refcounting implementations of std::string  illegal. -- 
 Andrei
 
 Ah, so lifetime management can be rolled on top of string if needed? Hmm... 
 makes sense.

No, it's all eager copy. std::string is thoroughly botched. A good
inexpensive lesson for us. -- Andrei


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Jacob Carlborg via Digitalmars-d

On 11/09/14 21:02, eles wrote:


Could you provide one or two short but illustrative examples in Tango
and Phobos showing the howto and the why not in Phobos?


Tango:

import tango.text.Unicode;

void foo ()
{
char[3] result; // pre-allocate buffer on the stack
auto b = "foo".toUpper(result);
}

Phobos:

import std.uni;

void foo ()
{
auto b = "foo".toUpper(); // no way to use a pre-allocated buffer
}


Will Andrei's allocators improve that with some rewrite of Phobos?


Yes, they could.

--
/Jacob Carlborg


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Daniel Kozak via Digitalmars-d
On Fri, 12 Sep 2014 08:47:55 +0200
Jacob Carlborg via Digitalmars-d digitalmars-d@puremagic.com wrote:

 On 11/09/14 21:02, eles wrote:
 
  Could you provide one or two short but illustrative examples in
  Tango and Phobos showing the howto and the why not in Phobos?
 
 Tango:
 
 import tango.text.Unicode;
 
 void foo ()
 {
  char[3] result; // pre-allocate buffer on the stack
 auto b = "foo".toUpper(result);
 }
 
 Phobos:
 
 import std.uni;
 
 void foo ()
 {
 auto b = "foo".toUpper(); // no way to use a pre-allocated buffer
 }
 
  Will Andrei's allocators improve that with some rewrite of Phobos?
 
 Yes, they could.
 

toUpperInPlace could help a little, but it's still not perfect



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d

On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto wrote:

Am 11.09.2014 20:32, schrieb Daniel Alves:


It is incredible how Objective-C's ARC became a symbol for 
reference counting, instead of the living proof of Apple's 
failure to produce
a working GC for Objective-C that didn't crash every couple of 
seconds.


I think I fail to grasp something here. For me, ARC is something 
that is managed at runtime: you have a counter on a chunk of 
memory and you increase it with each new reference towards that 
memory, then you decrement it when memory is released. In the 
end, when the counter reaches 0, you drop the chunk.


OTOH, code analysis and automatically inserting free/delete where 
the programmers would/should have done it is not really that. It is 
a compile-time approach, no different from manual memory 
management.


Which one is, in the end, the approach taken by Apple, and which 
one is the true ARC?...




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d

On Thursday, 11 September 2014 at 20:11:45 UTC, deadalnix wrote:

On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov
wrote:



  - Other memory management techniques require bookkeeping. In a
multicore environment, that means expensive synchronization.


It is also true that when you start to really optimize the GC 
(precise, concurrent etc.), its complexity is no less than the 
complexity of that bookkeeping.


I think there is place for both, just allow taking one or the 
other path when the programmer decides it is the way to go.


Remember, what people like in C is inclusively the "shoot the 
foot" simplicity and availability. If you want it, you do it.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d
On Friday, 12 September 2014 at 06:47:56 UTC, Jacob Carlborg 
wrote:

On 11/09/14 21:02, eles wrote:



Thank you. Basically, it is about a different interface of the 
functions, something like the difference between new and 
placement new.


This could be added to Phobos too, through those allocators. I 
was afraid the issue is deeper.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Dominikus Dittes Scherkl via Digitalmars-d

On Friday, 12 September 2014 at 07:49:59 UTC, eles wrote:

On Thursday, 11 September 2014 at 20:11:45 UTC, deadalnix wrote:
  - Other memory management techniques require bookkeeping. In a
 multicore environment, that means expensive synchronization.

It is also true that when you start to really optimize the GC 
(precise, concurrent etc.), its complexity is no less than 
the complexity of that bookkeeping.


Maybe, but this complexity is hidden; the programmer does not have 
to take care of it - and the extra cost for synchronization is 
still spared. So I think D is well off if it provides GC and 
manual memory management. Additionally supporting ARC is 
superfluous.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d
On Friday, 12 September 2014 at 08:27:55 UTC, Dominikus Dittes 
Scherkl wrote:

On Friday, 12 September 2014 at 07:49:59 UTC, eles wrote:
On Thursday, 11 September 2014 at 20:11:45 UTC, deadalnix 
wrote:



Maybe, but this complexity is hidden


The old question: at what cost?
There is always a trade-off.
I do not defend one point of view more than the other, but I am 
aware that sometimes you simply need manual control.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread po via Digitalmars-d


programmers who really have this stuff down... how much of your 
code and your mental energy with C++ is spent on memory 
ownership rules?  Is it really a productive use of your time?  
Does the program materially benefit from the design required to 
make it safe, correct, and self-documenting with respect to 
memory ownership and data lifetime?  Are smart pointers really 
that pleasant to work with?


 It just depends on the codebase.

 Yes, old C++ codebases are terrible for multithreaded code. It 
would be a nightmare to make sense of it and guarantee it doesn't 
have bugs.


 But using modern C++11/14 + TBB it really isn't hard at all. It 
is fairly trivial to scale to N cores using a task based 
approach. Smart pointers are rarely used, most C++ stuff is done 
by value.
  When dynamic lifetimes are required, again it is rarely based 
on shared_ptr, far more often it is unique_ptr so there is no 
atomic ref counting that the GC crowd loves to cry about.
 C++ also has all kinds of parallel loop constructs(via TBB,  
OpenMP, or MS Concurrency RT  etc). Again trivial to use.
  TBB has many parallel containers(priority queue, vector, 
unordered_map).



 For instance, I work on a game engine, almost everything is 
either by value or unique.


The only stuff that is shared and thus requires ref counting 
are external assets (shaders, models, sounds, some gpu resources). 
These objects are also closed, and thus incapable of circular 
references. Their ref counts are also rarely modified, at most 
I'd expect just a few of them to dec/inc per frame as objects are 
added/removed.








Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread ketmar via Digitalmars-d
On Fri, 12 Sep 2014 07:49:57 +
eles via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Remember, what people like in C is inclusively the "shoot the 
 foot" simplicity and availability. If you want it, you do it.
but you can shoot yourself in the foot in D! you can run around in
circles shooting everything that moves (including yourself). even
dead bodies, doors, walls, paintings, furniture and your neighbours'
nasty dog.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Jacob Carlborg via Digitalmars-d

On 12/09/14 08:59, Daniel Kozak via Digitalmars-d wrote:


toUpperInPlace could help a little, but it's still not perfect


Converting text to uppercase doesn't work in-place in some cases. For 
example the German double S will take two letters in uppercase form.


--
/Jacob Carlborg


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Jacob Carlborg via Digitalmars-d

On 12/09/14 09:58, eles wrote:

On Friday, 12 September 2014 at 06:47:56 UTC, Jacob Carlborg wrote:

On 11/09/14 21:02, eles wrote:



Thank you. Basically, it is about a different interface of the
functions, something like the difference between new and placement new.

This could be added to Phobos too, through those allocators. I was
afraid the issue is deeper.


Yes, or output ranges. I think output ranges already has been added in 
some places.


--
/Jacob Carlborg


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Paulo Pinto via Digitalmars-d

On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto 
wrote:

Am 11.09.2014 20:32, schrieb Daniel Alves:


It is incredible how Objective-C's ARC became a symbol for 
reference counting, instead of the living proof of Apple's 
failure to produce
a working GC for Objective-C that didn't crash every couple of 
seconds.


I think I fail to grasp something here. For me, ARC is 
something that is managed at runtime: you have a counter on a 
chunk of memory and you increase it with each new reference 
towards that memory, then you decrement it when memory is 
released. In the end, when the counter reaches 0, you drop the 
chunk.


OTOH, code analysis and automatically inserting free/delete 
where the programmers would/should have done it is not really 
that. Is a compile-time approach and not different of manual 
memory management.


Which one is, in the end, the approach took by Apple, and which 
one is the true ARC?...


ARC was a term popularized by Apple when they introduced the said 
feature in Objective-C.


In the GC literature it is plain reference counting.

ARC in Objective-C is a mix of both approaches that you mention.

It only applies to Objective-C classes that follow the 
retain/release patterns since the NeXTStep days. For structs, 
malloc() or even classes that don't follow the Cocoa patterns, 
only manual memory management is possible.


The compiler inserts the retain/release calls that a programmer 
would write manually, at the locations one would expect from the 
said patterns.


Then a second pass, via dataflow analysis, removes the pairs of 
retain/release that are superfluous, due to object lifetime 
inside a method/function block.


This way you get automatic reference counting, as long as those 
classes use the said patterns correctly. As a plus the code gets 
to interact with libraries that are clueless about ARC.


Now, having said this, when Apple introduced GC in Objective-C it 
was very fragile, only worked with Objective-C classes, was full 
of "take care of X when you do Y" caveats and required all Frameworks on 
the project to have compatible build settings.


Of course, more often than not, the result was random crashes 
when using third party libraries, that Apple never sorted out.


So ARC in Objective-C ended up being a better solution, due to 
interoperability issues, and not just because RC is better than 
GC.


--
Paulo







Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d

On Friday, 12 September 2014 at 12:39:51 UTC, Paulo  Pinto wrote:

On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto 
wrote:

Am 11.09.2014 20:32, schrieb Daniel Alves:




ARC was a term popularized by Apple when they introduced the 
said feature in Objective-C.


Many thanks.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Kagamin via Digitalmars-d

On Friday, 12 September 2014 at 08:50:17 UTC, po wrote:
 But using modern C++11/14 + TBB it really isn't hard at all. 
It is fairly trivial to scale to N cores using a task based 
approach. Smart pointers are rarely used, most C++ stuff is 
done by value.


Strings too?

 For instance, I work on a game engine, almost everything is 
either by value or unique.


The only stuff that is shared and thus requires ref 
counting are external assets (shaders, models, sounds, some gpu 
resources). These objects are also closed, and thus incapable 
of circular references.


For closed external resources one can often figure out ownership 
and if it's done, you don't even need smart pointers, as you 
already know, where to destroy the object. The difficult task is 
to do it for all allocated memory everywhere. Duplication is 
certainly possible, but it kinda goes against efficiency.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Fri, 12 Sep 2014 13:45:45 +0200
schrieb Jacob Carlborg d...@me.com:

 On 12/09/14 08:59, Daniel Kozak via Digitalmars-d wrote:
 
  toUpperInPlace could help a little, but it's still not perfect
 
 Converting text to uppercase doesn't work in-place in some cases. For 
 example the German double S will take two letters in uppercase form.

The German double S, I see ... Let me help you out of this.

The letter ß, named SZ, Eszett, sharp S, hunchback S, backpack
S, Dreierles-S, curly S or double S in Swiss, becomes SS in
upper case since 1967, because it is never used as the start
of a word and thus doesn't have an upper case representation
of its own. Before, from 1926 on, the translation was to SZ.
So a very old Unicode library might give you incorrect results.

The uppercase letter I on the other hand depends on the locale.
E.g. in England the lower case version is i, whereas in Turkey
it is ı, because they also have a dotted İ, which becomes i.

;)

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Thu, 11 Sep 2014 13:44:09 +
schrieb Adam D. Ruppe destructiona...@gmail.com:

 On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
 wrote:
  And I think of idea of complete extraction of GC from D.
 
 You could also recompile the runtime library without the GC. 
 Heck, with the new @nogc on your main, the compiler (rather than 
 the linker) should even give you nicish error messages if you try 
 to use it, but I've done it before that was an option.
 
 Generally though, GC fear is overblown. Use it in most places and 
 just don't use it where it makes things worse.

The Higgs JIT compiler running 3x faster just because you call
GC.reserve(1024*1024*1024); shows how much fear is appropriate
(with this GC implementation).

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Paulo Pinto via Digitalmars-d
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov 
wrote:
Everyone tells about greatness and safety of GC, and that it is 
hard to live without it... But, I suppose, you all do know the 
one programming language in which 95% of AAA-quality popular 
desktop software and OS is written. And this language is C/C++.


Because due to the way the market changed in the last 20 years, 
compiler vendors focused on native code compilers for C and C++, 
while the others faded away.



How do you explain this? Just because we are stubborn and silly 
people, we use terrible old C++? No. The real answer: there is 
no alternative.


There used to be.

I am old enough to remember when C only mattered if coding on UNIX.



Stop telling fairy tales that there is not possible to program 
safe in C++. Every experienced programmer can easily handle 
parallel programming and memory management in C++. Yes, it 
requires certain work and knowledge, but it is possible, and 
many of us do it on the everyday basis (on my current work we 
use reference counting, though the overall quality of code is 
terrible, I must admit).


Of course, it is possible to do safe coding in C++, but you need 
good coders on the team.


I always try to apply the safe practices from the Algol world, as 
well as, many good practices I have learned since I got in touch 
with C++ back in 1993.


My pure C programming days were confined to the Turbo Pascal - 
C++ transition, university projects and my first job. Never liked 
its unsafe design.


Now the thing is, I could only make use of safe programming 
practices like compiler specific collections (later STL) and RAII,
when coding on my own or in small teams composed of good C++ 
developers.


More often than not, the C++ codebases I have met on my projects 
looked either C compiled with a C++ compiler or OOP gone wild. 
With lots of nice macros as well.


When the teams had high rotation, then the code quality was even 
worse.


A pointer goes boom and no one knows which module is responsible 
for doing what in terms of memory management.


We stopped using C++ on our consulting projects back in 2005, as 
we started to focus mostly on JVM and .NET projects.


Still use it for my hobby coding, or some jobs on side, where I 
can control the code quality though.


However, I am also fond of system programming languages with GC, 
having had the opportunity to use the Oberon OS back in the 
mid-90's.


--
Paulo



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Chris via Digitalmars-d

On Friday, 12 September 2014 at 12:39:51 UTC, Paulo  Pinto wrote:

On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto 
wrote:

Am 11.09.2014 20:32, schrieb Daniel Alves:


It is incredible how Objective-C's ARC became a symbol for 
reference counting, instead of the living proof of Apple's 
failure to produce
a working GC for Objective-C that didn't crash every couple 
of seconds.


I think I fail to grasp something here. For me, ARC is 
something that is managed at runtime: you have a counter on a 
chunk of memory and you increase it with each new reference 
towards that memory, then you decrement it when memory is 
released. In the end, when the counter reaches 0, you drop the 
chunk.


OTOH, code analysis and automatically inserting free/delete 
where the programmers would/should have done it is not really 
that. Is a compile-time approach and not different of manual 
memory management.


Which one is, in the end, the approach took by Apple, and 
which one is the true ARC?...


ARC was a term popularized by Apple when they introduced the 
said feature in Objective-C.


In the GC literature it is plain reference counting.

ARC in Objective-C is a mix of both approaches that you mention.

It only applies to Objective-C classes that follow the 
retain/release patterns since the NeXTStep days. For structs, 
malloc() or even classes that don't follow the Cocoa patterns, 
only manual memory management is possible.


The compiler inserts the retain/release calls that a programmer 
would write manually, at the locations one would expect from 
the said patterns.


Then a second pass, via dataflow analysis, removes the pairs of 
retain/release that are superfluous, due to object lifetime 
inside a method/function block.


This way you get automatic reference counting, as long as those 
classes use the said patterns correctly. As a plus the code 
gets to interact with libraries that are clueless about ARC.


Now, having said this, when Apple introduced GC in Objective-C 
it was very fragile, only worked with Objective-C classes, was 
full of "take care of X when you do Y" caveats and required all 
Frameworks on the project to have compatible build settings.


Of course, more often than not, the result was random crashes 
when using third party libraries, that Apple never sorted out.


So ARC in Objective-C ended up being a better solution, due to 
interoperability issues, and not just because RC is better than 
GC.


--
Paulo


[Caveat: I'm no expert]
I once read a manual that explained the GC in Objective-C (years 
ago). It said that some objects never get collected although 
they're dead, because the garbage collector can no longer reach 
them. But maybe that's true of other GC implementations too 
(Java?). ARC definitely makes more sense for Objective-C than 
what they had before. But that's for Objective-C with its 
retain-release mechanism. Also, I wonder, is ARC really 
automatic? Sure, the compiler inserts retain-release 
automatically (what the programmer would have done manually in 
the old days). But that's not really a GC algorithm that scans 
and collects during runtime. Isn't it cheating? Also, does anyone 
know what problems Apple programmers have encountered with ARC?


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Marco Leise via Digitalmars-d
Am Fri, 12 Sep 2014 15:43:14 +
schrieb Chris wend...@tcd.ie:

 [Caveat: I'm no expert]
 I once read a manual that explained the GC in Objective-C (years
 ago). It said that some objects never get collected although
 they're dead, but the garbage collector can no longer reach them.
 But maybe that's true of other GC implementations too (Java?).

With only ARC, if two objects reference each other, they keep
each other alive indefinitely unless one of the references is a
weak reference, which doesn't count as a real reference and so
doesn't prevent destruction.
Other than that, in case of Java or D it is just a question of
how you define "never", I guess. Since a tracing GC only runs
every now and then, there might be uncollected dead objects
floating around at program termination.

 [...] But that's not really a GC algorithm that scans and
 collects during runtime. Isn't it cheating?

A GC algorithm that scans and collects during runtime is
called a tracing GC. ARC nonetheless collects garbage.
You, the programmer, don't need to do that.

-- 
Marco



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread eles via Digitalmars-d

On Friday, 12 September 2014 at 20:41:53 UTC, Marco Leise wrote:

Am Fri, 12 Sep 2014 15:43:14 +
schrieb Chris wend...@tcd.ie:



With only ARC, if two objects reference each other, they keep
each other alive indefinitely unless one of the references is a
weak reference, which doesn't count as a real reference


But do we need more than that? Translating the question into C++:

what use case wouldn't be covered by unique_ptr and shared_ptr?

Cycles like that could happen in manual memory management, too. 
There is Valgrind for that...


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-12 Thread Paulo Pinto via Digitalmars-d

Am 13.09.2014 01:52, schrieb eles:

On Friday, 12 September 2014 at 20:41:53 UTC, Marco Leise wrote:

Am Fri, 12 Sep 2014 15:43:14 +
schrieb Chris wend...@tcd.ie:



With only ARC, if two objects reference each other, they keep
each other alive indefinitely unless one of the references is a
weak reference, which doesn't count as a real reference


But do we need more than that? Translating the question into C++:

what use case wouldn't be covered by unique_ptr and shared_ptr?


Cycles, that is why weak_ptr also exists.



Cycles like that could happen in manual memory management, too. There is
Valgrind for that...


For those that can compile their code under GNU/Linux.

There are lots of OSes where Valgrind does not run.

--
Paulo


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Kagamin via Digitalmars-d
You can also help with allocators design: 
http://forum.dlang.org/thread/lji5db$30j3$1...@digitalmars.com


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Kagamin via Digitalmars-d
The current idea is to add @nogc attribute to functions in 
phobos, which can live without GC, so that you could use them 
from other @nogc functions.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread via Digitalmars-d
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C & C++) has shown that manual memory management 
is easy to get wrong. Besides, certain features would not easily 
be possible without it (dynamic arrays, closures).


I and many other C/C++ programmers prefer to control things 
manually and use flexible allocation schemes that suitable for 
concrete situations. When every nanosecond matters, this is the 
only option, at least nowadays.


So this particular thing stops many of us from using D. When 
you can abandon performance, you usually choose Java (Scala) or 
C# because of their rich support and libraries. And the 
opposite matter is when you need high performance. In this case 
there is almost nothing to choose from. C/C++11 now looks not 
so bad.


And I think of idea of complete extraction of GC from D. For 
this to achieve, I suppose, I have to dig deeply into D 
compiler and also correct/re-implement many things in modules, 
so this ends up with almost new version of D.


I would like to hear your suggestions and some basic 
instructions. Maybe this is not a good idea at all or it will 
be very hard to realize.


I don't think it is necessary to remove the GC completely. It can 
only interfere with your program in three situations:


1) When you allocate and run out of memory, the GC will first try 
to release some unneeded memory before requesting more from the 
OS. If you don't allocate anything in a performance critical 
section of your program, the GC will never run.
2) (Actually a consequence of 1) When the GC runs, it stops all 
threads, including those that never allocate.

3) When you call it manually.

For 1), there is the @nogc attribute. Any function marked with 
this attribute is guaranteed to never allocate on the GC heap, 
including via any other functions it calls. If you write `void 
main() @nogc`, your entire program will be GC-less. This 
attribute has only been introduced in the latest release, and 
some parts of the standard library cannot be used with it yet.
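For illustration, a minimal sketch of how @nogc works (the function names are made up, not from this discussion); the compiler rejects any GC allocation inside a @nogc function, and @nogc code can only call other @nogc code:

```d
import core.stdc.stdio : printf; // C I/O is callable from @nogc code

// sum never allocates on the GC heap, nor does anything it calls.
int sum(const(int)[] values) @nogc nothrow
{
    int total = 0;
    foreach (v; values)
        total += v;
    return total;
}

void main() @nogc
{
    int[3] buf = [1, 2, 3];      // fixed-size array lives on the stack
    printf("%d\n", sum(buf[]));  // prints 6
    // auto dyn = new int[3];    // error: 'new' not allowed in @nogc code
}
```

Slicing the static array (`buf[]`) does not allocate, which is why it is permitted here.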


For 2), you can call `GC.disable()` and `GC.enable()` to switch 
the GC on/off temporarily. You can still allocate memory, but the 
GC will not run.
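A sketch of 2), assuming the current core.memory API; with collections disabled, allocation still works but no collection pause can interrupt the loop (though an implementation may still collect on out-of-memory):

```d
import core.memory : GC;
import std.stdio : writeln;

void main()
{
    GC.disable();             // no automatic collections from here on
    scope (exit) GC.enable(); // re-enable even if an exception is thrown

    foreach (i; 0 .. 1_000)
    {
        auto tmp = new int[16]; // still allocates on the GC heap,
                                // but triggers no collection pause
    }
    writeln("latency-critical section done");
}
```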


For 3): Do it when you can afford the latency, for example 
between frames, or when you are not in a performance critical 
section of your program. Right now, it is not anytime-capable, 
which means you cannot give it a deadline by which it either has 
to finish or abort the current operation. This would be an 
interesting enhancement.
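A sketch of 3), calling the collector manually at a point where a pause is acceptable (`renderFrame` is a made-up stand-in for per-frame work):

```d
import core.memory : GC;
import std.stdio : writeln;

// Hypothetical per-frame work; ideally allocation-free in the hot path.
void renderFrame(int n)
{
    writeln("frame ", n);
}

void main()
{
    foreach (frame; 0 .. 3)
    {
        renderFrame(frame);
        GC.collect();   // run a collection where the pause is acceptable
        GC.minimize();  // optionally return unused pool memory to the OS
    }
}
```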


As for manual memory management, Andrei is currently working on 
an allocator library: 
http://erdani.com/d/phobos-prerelease/std_allocator.html


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Adam D. Ruppe via Digitalmars-d
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:

And I think of idea of complete extraction of GC from D.


You could also recompile the runtime library without the GC. 
Heck, with the new @nogc on your main, the compiler (rather than 
the linker) should even give you nicish error messages if you try 
to use it, but I've done it even before that was an option.


Generally though, GC fear is overblown. Use it in most places and 
just don't use it where it makes things worse.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

Thank you for quick response!

I guess I need further investigation and write good tests to 
compare C++ and D solutions. The main advantage over a GC language 
is that in case of manual memory management I know exactly 
when and what will be freed or allocated (with the help of smart 
pointers/reference counting, of course). You can say that this 
has no importance to the programmer, but it has, because you don't 
have performance spikes and don't have to waste processor time and 
slow memory bandwidth on scanning what things need to be collected. 
So, the big advantage (with the price of greater responsibility) 
is much greater predictability of how your program will perform.


The main problem of heap-intensive programs with a huge number of 
objects is heap fragmentation. During the program's work there can 
be holes of memory chunks which complicate further allocations, 
especially for big contiguous arrays. It also ruins cache 
performance, because similar objects, belonging to one array, can 
be stationed far from each other, divided by such holes. Frankly 
speaking, C/C++ do not have a built-in solution for this problem, 
but you can program it manually there.


So maybe instead of getting rid of GC I will consider the 
implementation of optimized moving GC.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:23:53 +
Andrey Lifanov via Digitalmars-d digitalmars-d@puremagic.com wrote:

 is that in case of manual memory management I know completely 
 when and what will be freed or allocated (with the help of smart 
 pointers/reference counting, of course).
but you don't. you can only estimate, that's all. what if you pass a
refcounted object to another function which stores it somewhere? oops.

and in D you are free to use structs, `scoped!`, and malloc()/free()
(see std.typecons for scoped template source to see how it's done).

you can write your own array implementations too. slices are hard to do
without proper GC though, and closures require GC, AFAIK.
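For illustration, a minimal sketch of the `scoped!` and malloc()/free() techniques mentioned above (`Widget` and `Point` are made-up names):

```d
import std.typecons : scoped;
import core.stdc.stdlib : malloc, free;
import std.stdio : writeln;

class Widget
{
    int id;
    this(int id) { this.id = id; }
    ~this() { writeln("widget destroyed"); }
}

struct Point { double x, y; }

void main()
{
    {
        // scoped! places the class instance on the stack; its destructor
        // runs deterministically at the end of this block, no GC involved.
        auto w = scoped!Widget(42);
        assert(w.id == 42);
    } // "widget destroyed" is printed here

    // Plain C-style allocation for a struct, freed manually.
    auto p = cast(Point*) malloc(Point.sizeof);
    scope (exit) free(p);
    p.x = 1.0;
    p.y = 2.0;
    writeln(p.x + p.y); // prints 3
}
```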

 So maybe instead of getting rid of GC I will consider the 
 implementation of optimized moving GC.
copying GC requires support from the compiler side, and it is not that
easy at all (imagine malloc()ed blocks which hold references to
GC-alloced objects, for example). current D GC is conservative, so you
can just register a malloc()ed block as a root, but for a copying GC
you'll need either to inform the GC about the exact block structure, or
provide your own scan/copy/fix callbacks.

and if copying GC will not do 'stop-the-world', some threads can hold
pointers in registers... and inserting read/write barriers will hurt
performance...




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d
I have recently found: 
http://en.wikibooks.org/wiki/D_Programming/Garbage_collector/Thoughts_about_better_GC_implementations


Good stuff there.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

Am 11.09.2014 14:38, schrieb Andrey Lifanov:

Hello everyone! Being a C/C++ programmer I don't understand, why such
language as D (system programming language) implemented garbage
collector as a core feature, not as additional optional module or
library. I and many other C/C++ programmers prefer to control things
manually and use flexible allocation schemes that suitable for concrete
situations. When every nanosecond matters, this is the only option, at
least nowadays.

...


Since the mid-70's there are system programming languages with GC.

Namely Algol 68(IFIP), Mesa/Cedar (Xerox), Modula-3 (Olivetti), Oberon 
(ETHZ) and a few others.


They just happened to be married to OSes that weren't as successful as 
UNIX jumping out of the research labs into the industry.


It is about time systems programming catches up with the 70's 
innovations outside of the PDP-11 world.


--
Paulo


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

On Thursday, 11 September 2014 at 15:39:26 UTC, ketmar via
Digitalmars-d wrote:

On Thu, 11 Sep 2014 15:23:53 +
Andrey Lifanov via Digitalmars-d digitalmars-d@puremagic.com 
wrote:


is that in case of manual memory management I know completely 
when and what will be freed or allocated (with the help of 
smart pointers/reference counting, of course).
but you don't. you can only estimate, that's all. what if you 
passing
refcounted object to another function which stores it 
somewhere? oops.


What do you mean? Some sort of memory leak? I guess you can
always write programs, no matter with or without GC, that will
steal or hide stuff. As a usual countermeasure, you just have to
carefully plan single-storage/multiple-user code.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d

On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C and C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  Many 
of the more powerful techniques are basically impossible without 
garbage collection.


But I think this largely comes down to standard library design.  
Java, for example, is a pretty okay language from a syntax 
perspective.  The problem with it is more that doing anything 
with the standard library requires generating tons of often 
temporary objects.  In the server programming realm, an 
unbelievable amount of effort has been put into working around 
this particular problem (look at what the Netty group has been 
doing, for example).  So it's not so much that the language 
supports garbage collection as that the established programming 
paradigm encourages you to lean heavily on it.


By allowing manual memory management, D is far closer to C++.  
The problem is that, like Java, many APIs in the standard library 
are written in such a way that memory allocations are 
unavoidable.  However, it doesn't have to be this way.  An 
essential design rule for Tango, for example, was to perform no 
hidden allocations anywhere.  And it's completely possible with 
Tango to write an application that doesn't allocate at all once 
things are up and running.  With Phobos... not so much.


In short, I think that a crucial factor affecting the perception 
of a language is its standard library.  It stands as a template 
for how code in that language is intended to be written, and is 
the framework from which essentially all applications are built.  
Breaking from this tends to be difficult to the point of where 
you're really better off looking for a different language that 
suits your needs better.


I think Java is in a weird spot in that it's so deeply entrenched 
at this point that many think it's easier to try and force people 
to change their programming habits than it is to get them to use 
a different language.  Though encouraging a transition to a 
compatible language with better fundamentals is probably 
preferable (Scala?).


C++ is kind of in the same situation, which I guess is why some 
feel that C++ interop might be a good thing.  But realistically, 
working with a truly hybrid code base only serves to further 
complicate things when the motivating goal is simplification.  
It's typically far preferable to simply have communicating agents 
written in different languages that all talk the same protocol.  
That C++ app can go into maintenance mode and the Java, D, and 
whatever other new stuff just talks to it via a socket connection 
and then shivers and washes its hands when done.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

Am 11.09.2014 18:02, schrieb Sean Kelly:

On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz wrote:

On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov wrote:

Hello everyone! Being a C/C++ programmer I don't understand, why such
language as D (system programming language) implemented garbage
collector as a core feature, not as additional optional module or
library.


I can enlighten you ;-) The reason is safety. Past experience
(especially with C and C++) has shown that manual memory management is
easy to get wrong. Besides, certain features would not easily be
possible without it (dynamic arrays, closures).


GC is hugely important for concurrent programming as well.  Many of the
more powerful techniques are basically impossible without garbage
collection.

But I think this largely comes down to standard library design. Java,
for example, is a pretty okay language from a syntax perspective.  The
problem with it is more that doing anything with the standard library
requires generating tons of often temporary objects.  In the server
programming realm, an unbelievable amount of effort has been put into
working around this particular problem (look at what the Netty group has
been doing, for example).  So it's not so much that the language
supports garbage collection as that the established programming paradigm
encourages you to lean heavily on it.

By allowing manual memory management, D is far closer to C++. The
problem is that, like Java, many APIs in the standard library are
written in such a way that memory allocations are unavoidable.  However,
it doesn't have to be this way.  An essential design rule for Tango, for
example, was to perform no hidden allocations anywhere.  And it's
completely possible with Tango to write an application that doesn't
allocate at all once things are up and running.  With Phobos... not so
much.

In short, I think that a crucial factor affecting the perception of a
language is its standard library.  It stands as a template for how code
in that language is intended to be written, and is the framework from
which essentially all applications are built. Breaking from this tends
to be difficult to the point of where you're really better off looking
for a different language that suits your needs better.

I think Java is in a weird spot in that it's so deeply entrenched at
this point that many think it's easier to try and force people to change
their programming habits than it is to get them to use a different
language.  Though encouraging a transition to a compatible language with
better fundamentals is probably preferable (Scala?).

C++ is kind of in the same situation, which I guess is why some feel
that C++ interop might be a good thing.  But realistically, working with
a truly hybrid code base only serves to further complicate things when
the motivating goal is simplification. It's typically far preferable to
simply have communicating agents written in different languages that all
talk the same protocol. That C++ app can go into maintenance mode and
the Java, D, and whatever other new stuff just talks to it via a socket
connection and then shivers and washes its hands when done.


It has been acknowledged that it was a mistake not to allow better 
control in Java where to place the data, for the last mile in performance.


This is why the major focus in Java 9+ are value types, control over 
array layouts, a new FFI interface and promotion of unsafe to public API.


http://www.oracle.com/technetwork/java/javase/community/jlssessions-2255337.html

This is a consequence of Java's use in big data and high performance 
trading systems.


D's advantage is that (except for the current GC) it offers today what 
Java can only offer in the next revision, or even later if not all 
features happen to be 9-ready.


--
Paulo




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 15:54:17 +
Andrey Lifanov via Digitalmars-d digitalmars-d@puremagic.com wrote:

 What do you mean? Some sort of memory leak?
no, i meant that predictability is lost there. and with single-linked
lists, for example, freeing the list head can take a big amount of time
too if the list is sufficiently long.

what D really needs is a GC that will not stop the world on
collecting. you would be able to separate threads that allocate from
threads that don't, and non-allocating threads would work without
pauses. then 'worker' threads can use lists of free objects almost
without GC hits.
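A sketch of such a list of free objects, recycling nodes so worker threads rarely allocate (all names are hypothetical):

```d
import std.stdio : writeln;

struct Node
{
    int value;
    Node* next;
}

// A tiny intrusive free list: workers recycle nodes instead of
// allocating, so they rarely touch the GC at all.
struct FreeList
{
    Node* head;

    Node* acquire(int value)
    {
        Node* n = head;
        if (n !is null)
            head = n.next;  // reuse a recycled node
        else
            n = new Node;   // fall back to GC allocation
        n.value = value;
        n.next = null;
        return n;
    }

    void release(Node* n)
    {
        n.next = head;      // push the node back for reuse
        head = n;
    }
}

void main()
{
    FreeList pool;
    auto a = pool.acquire(1);
    pool.release(a);
    auto b = pool.acquire(2); // reuses the node released above
    assert(a is b);
    writeln(b.value);         // prints 2
}
```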




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Ali Çehreli via Digitalmars-d

On 09/11/2014 09:59 AM, ketmar via Digitalmars-d wrote:

 On Thu, 11 Sep 2014 15:54:17 +
 Andrey Lifanov via Digitalmars-d digitalmars-d@puremagic.com wrote:

 What do you mean? Some sort of memory leak?
 no, i meant that predictability is lost there. and with single-linked
 lists, for example, freeing the list head can take a big amount of time
 too if the list is sufficiently long.

In support of your point, one of Bartosz Milewski's blog posts[1] has a 
link to a paper[2] where he says "There is actual research showing that 
the two approaches are just two sides of the same coin."


Ali

[1] http://bartoszmilewski.com/2013/09/19/edward-chands/

[2] http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d

Thank you all for replies!

I'm not saying that GC is evil. I just want to have different 
options and more control, when this is required. If D offered 
such choice, many good C++ programmers would have certainly 
considered D as a perfect alternative to C++.


D states that there is no strict and dogmatic rules that it 
follows about programming languages paradigms. And that it is a 
general purpose language. So I think it would be nice to have 
more options of how we can manage memory.


I will continue investigation and certainly inform you if it ends 
with something useful.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Daniel Alves via Digitalmars-d
You know, currently I spend most of my time programming in ObjC, 
but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automatic Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): the 
compiler analyzes your code, figures out object scopes and sets 
the correct calls to retain/release/autorelease (for those who 
are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its complications.


In addition to that, Rust also has an approach like ObjC called 
Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea is 
the same, but, depending on the type of the pointer, the compiler 
may add a call for freeing or for decrementing a pointer 
reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?

On Thursday, 11 September 2014 at 18:04:06 UTC, Andrey Lifanov 
wrote:

Thank you all for replies!

I'm not saying that GC is evil. I just want to have different 
options and more control, when this is required. If D offered 
such choice, many good C++ programmers would have certainly 
considered D as a perfect alternative to C++.


D states that there is no strict and dogmatic rules that it 
follows about programming languages paradigms. And that it is a 
general purpose language. So I think it would be nice to have 
more options of how we can manage memory.


I will continue investigation and certainly inform you if it 
ends with something useful.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 18:04:05 +
Andrey Lifanov via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Thank you all for replies!
 
 I'm not saying that GC is evil. I just want to have different 
 options and more control, when this is required. If D offered 
 such choice
but D *is* offering such choice.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 18:32:09 +
Daniel Alves via Digitalmars-d digitalmars-d@puremagic.com wrote:

 compiler analyzes your code, figures out object scopes and sets 
 the correct calls to retain/release/autorelease
this *is* GC. it's just hidden behind compiler magic and can't be
changed without altering the compiler.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread bachmeier via Digitalmars-d
On Thursday, 11 September 2014 at 18:32:10 UTC, Daniel Alves 
wrote:
You know, currently I spend most of my time programming in 
ObjC, but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automatic Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): 
the compiler analyzes your code, figures out object scopes and 
sets the correct calls to retain/release/autorelease (for those 
who are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its 
complications.


In addition to that, Rust also has an approach like ObjC 
called Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea 
is the same, but, depending on the type of the pointer, the 
compiler may add a call for freeing or for decrementing a 
pointer reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?

On Thursday, 11 September 2014 at 18:04:06 UTC, Andrey Lifanov 
wrote:

Thank you all for replies!

I'm not saying that GC is evil. I just want to have different 
options and more control, when this is required. If D offered 
such choice, many good C++ programmers would have certainly 
considered D as a perfect alternative to C++.


D states that there is no strict and dogmatic rules that it 
follows about programming languages paradigms. And that it is 
a general purpose language. So I think it would be nice to 
have more options of how we can manage memory.


I will continue investigation and certainly inform you if it 
ends with something useful.


Here are a few of the bazillion threads that have discussed the 
topic:

http://forum.dlang.org/thread/ljrm0d$28vf$1...@digitalmars.com?page=1
http://forum.dlang.org/thread/lphnen$1ml7$1...@digitalmars.com?page=1
http://forum.dlang.org/thread/outhxagpohmodjnkz...@forum.dlang.org


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Joakim via Digitalmars-d
On Thursday, 11 September 2014 at 18:32:10 UTC, Daniel Alves 
wrote:
You know, currently I spend most of my time programming in 
ObjC, but I really love C, C++ and D.


Since the Clang Compiler, ObjC dropped the GC entirely. Yes, 
that's right, no GC at all. And, in fact, it does support 
concurrent programming and everything else. The magic behind it 
is ARC - Automatic Reference Counting 
(http://clang.llvm.org/docs/AutomaticReferenceCounting.html): 
the compiler analyzes your code, figures out object scopes and 
sets the correct calls to retain/release/autorelease (for those 
who are not familiar with ObjC, pointers are mostly reference 
counted). So there is no need for a GC and all its 
complications.


In addition to that, Rust also has an approach like ObjC 
called Region Pointers and objects' Lifetime 
(http://doc.rust-lang.org/guide-pointers.html#boxes). The idea 
is the same, but, depending on the type of the pointer, the 
compiler may add a call for freeing or for decrementing a 
pointer reference counter.


Finally, it looks like there is a language called Cyclone that 
goes the same way (paper here: 
http://www.cs.umd.edu/projects/cyclone/papers/cyclone-regions.pdf)


Since I read Andrei's book, D Programming Language, I've been 
asking myself why D does not go this way...


Anyone knows about a good reason for that?


There have been some long threads about ARC, including this one 
from a couple months ago:


http://forum.dlang.org/thread/mailman.2370.1402931804.2907.digitalmar...@puremagic.com

Walter doesn't think ARC can be done efficiently.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread eles via Digitalmars-d

On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:


hidden allocations anywhere.  And it's completely possible with 
Tango to write an application that doesn't allocate at all once 
things are up and running.  With Phobos... not so much.


Hi,

Could you provide one or two short but illustrative examples in 
Tango and Phobos showing the howto and the why not in Phobos?


Will Andrei's allocators improve that with some rewrite of Phobos?

Thanks.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread via Digitalmars-d

On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov 
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C and C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  
Many of the more powerful techniques are basically impossible 
without garbage collection.


There is an interesting alternative that the Linux kernel uses,
called RCU (read-copy-update). They have a convention that
references to RCU managed data must not be held (= borrowed by
kernel threads as local pointers) across certain events,
especially context switches. Thus, when a thread modifies an RCU
data structure, say a linked list, and wants to remove an element
from it, it unlinks it and tells RCU to release the element's
memory later. The RCU infrastructure will then release it once
all processors on the system have gone through a context switch,
at which point there is a guarantee that no thread can hold a
reference to it anymore.

But this is a very specialized solution and requires a lot of
discipline, of course.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d

On Thursday, 11 September 2014 at 19:14:42 UTC, Marc Schütz wrote:
On Thursday, 11 September 2014 at 16:02:31 UTC, Sean Kelly 
wrote:
On Thursday, 11 September 2014 at 13:16:07 UTC, Marc Schütz 
wrote:
On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey 
Lifanov wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library.


I can enlighten you ;-) The reason is safety. Past experience 
(especially with C and C++) has shown that manual memory 
management is easy to get wrong. Besides, certain features 
would not easily be possible without it (dynamic arrays, 
closures).


GC is hugely important for concurrent programming as well.  
Many of the more powerful techniques are basically impossible 
without garbage collection.


There is an interesting alternative that the Linux kernel uses,
called RCU (read-copy-update). They have a convention that
references to RCU managed data must not be held (= borrowed by
kernel threads as local pointers) across certain events,
especially context switches. Thus, when a thread modifies an RCU
data structure, say a linked list, and wants to remove an 
element from it, it unlinks it and tells RCU to release the 
element's

memory later. The RCU infrastructure will then release it once
all processors on the system have gone through a context switch,
at which point there is a guarantee that no thread can hold a
reference to it anymore.


Yes, RCU is one approach I was thinking of.  The mechanism that
detects when to collect the memory is basically a garbage
collector.


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Paulo Pinto via Digitalmars-d

Am 11.09.2014 20:32, schrieb Daniel Alves:

You know, currently I spend most of my time programming in ObjC, but I
really love C, C++ and D.

Since the Clang Compiler, ObjC dropped the GC entirely. Yes, that's
right, no GC at all. And, in fact, it does support concurrent
programming and everything else. 



It is incredible how Objective-C's ARC became a symbol for reference 
counting, instead of the living proof of Apple's failure to produce 
a working GC for Objective-C that didn't crash every couple of seconds.

Marketing is great!

--
Paulo



Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread deadalnix via Digitalmars-d

On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov
wrote:
Hello everyone! Being a C/C++ programmer I don't understand, 
why such language as D (system programming language) 
implemented garbage collector as a core feature, not as 
additional optional module or library. I and many other C/C++ 
programmers prefer to control things manually and use flexible 
allocation schemes that suitable for concrete situations. When 
every nanosecond matters, this is the only option, at least 
nowadays.


So this particular thing stops many of us from using D. When 
you can abandon performance, you usually choose Java (Scala) or 
C# because of their rich support and libraries. And the 
opposite matter is when you need high performance. In this case 
there is almost nothing to choose from. C/C++11 now looks not 
so bad.


And I think of idea of complete extraction of GC from D. For 
this to achieve, I suppose, I have to dig deeply into D 
compiler and also correct/re-implement many things in modules, 
so this ends up with almost new version of D.


I would like to hear your suggestions and some basic 
instructions. Maybe this is not a good idea at all or it will 
be very hard to realize.


Thank you for your attention!


Dear C++ programmer. I know your language does not give a damn about
multicore, but it has been a reality in the hardware for more than 10
years now.

As it turns out, besides being a convenience, a GC is crucial for
multicore programming. Here are some of the reasons:
  - Other memory management techniques require bookkeeping. In a
multicore environment, that means expensive synchronization.
  - Combined with immutability, it allows getting rid of the
concept of ownership. That means data sharing without any sort of
synchronization, once again. This is useful for multicore, but
even on a single core, D's immutable strings + slicing have
proven to be a killer feature for anything that is text
processing like.
  - It is an enabler for lock-free data structures. As you don't
need to do memory management manually, your data structure can
remain valid with fewer operations, which generally makes it
easier to make those atomic/lock-free.
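A small example of the immutable strings + slicing point above; a slice of an immutable string is just a pointer/length pair into shared, never-mutated memory, so it can be passed around without copying or synchronization:

```d
import std.stdio : writeln;

void main()
{
    // An immutable string can be sliced and shared freely: no copy,
    // no synchronization, because nobody can mutate the underlying data.
    immutable string text = "GC enables zero-copy slicing";
    immutable word = text[11 .. 20];   // "zero-copy"
    writeln(word);
    assert(word.ptr == text.ptr + 11); // same memory, no allocation
}
```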

It has various other benefits:
  - It removes a whole class of bugs (and memory corruption bugs
tend not to be the easiest to debug).
  - It removes constraints from the original design. That means you
can get a prototype working faster, and reduce time to market.
This is key for many companies. Obviously, it still means that
you'll have to do memory management work if you want to make your
code fast and efficient, but this is now something you can
iterate on while you have a working product.

Now that does not mean the GC is the alpha and omega of memory
management, but, as seen, it has great value. Right now, the
implementation is not super good, but it has been made a priority
recently. We also recognize that other techniques have value, and
that is why the standard library proposes tools to do reference
counting (and it can do it better than C++, as it knows whether
synchronization is necessary). There is important work being done
to reduce memory allocation where it is not needed in the standard
library, and @nogc will allow you to make sure some parts of your
code do not rely on the GC.
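For illustration, a minimal sketch of the standard library's reference-counting tool, std.typecons.RefCounted (`Payload` is a made-up type); copies share one payload, and the destructor runs exactly once when the count hits zero:

```d
import std.typecons : RefCounted;
import std.stdio : writeln;

struct Payload
{
    int[] data;
    ~this() { writeln("payload released"); }
}

void main()
{
    auto a = RefCounted!Payload([1, 2, 3]);
    {
        auto b = a;                  // count: 2 -- no copy of the payload
        assert(b.data == [1, 2, 3]); // access fields via alias this
    }                                // b destroyed, count back to 1
    writeln("still alive");
}                                    // count hits 0, destructor runs once
```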


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Andrey Lifanov via Digitalmars-d
Everyone tells about the greatness and safety of GC, and that it is 
hard to live without it... But, I suppose, you all do know the 
one programming language in which 95% of AAA-quality popular 
desktop software and OSes are written. And this language is C/C++.


How do you explain this? Just because we are stubborn and silly 
people, we use terrible old C++? No. The real answer: there is no 
alternative.


Stop telling fairy tales that it is impossible to program 
safely in C++. Every experienced programmer can easily handle 
parallel programming and memory management in C++. Yes, it 
requires certain work and knowledge, but it is possible, and many 
of us do it on an everyday basis (at my current job we use 
reference counting, though the overall quality of the code is 
terrible, I must admit).


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread ketmar via Digitalmars-d
On Thu, 11 Sep 2014 20:55:42 +
Andrey Lifanov via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

 Everyone tells about greatness and safety of GC, and that it is 
 hard to live without it... But, I suppose, you all do know the 
 one programming language in which 95% of AAA-quality popular 
 desktop software and OS is written. And this language is C/C++.
 
 How do you explain this?
there were times when cool software was written in assembler language.
and the real answer was: there is no alternative.

stop telling fairy tales that it is easy to program safely in C++. but
if you still want C++, i can give you some links where you can download
free C++ compilers.

why switch to D and throw out one of its greatest features? as we
told you, there *IS* a way to avoid GC if you want. did you read those
messages? the ones about 'scoped' and other things? and about the things
you'll lose if you don't want to use GC? did you notice that you can
mix GC and manual allocations (with some careful coding, of course)?

you gave no use cases, yet insist that GC is bad-bad-bad-bad. we
told you that refcounting is a form of GC, and it's not as
predictable as you believe, but you keep talking about no GC.

please, do you really want to learn or just trolling?

btw, i think that this whole thread belongs to 'D.learn'.




Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread Sean Kelly via Digitalmars-d
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov 
wrote:
Every experienced programmer can easily handle parallel 
programming and memory management in C++.


Eliminate parallel programming from that statement and I could 
be convinced to believe you, though after years of diagnosing 
bugs that almost invariably tied back to dangling pointer issues, 
even that would be a hard sell.  But even for programmers who 
really have this stuff down... how much of your code and your 
mental energy with C++ is spent on memory ownership rules?  Is it 
really a productive use of your time?  Does the program 
materially benefit from the design required to make it safe, 
correct, and self-documenting with respect to memory ownership 
and data lifetime?  Are smart pointers really that pleasant to 
work with?


Re: Getting completely (I mean ENTIRELY) rid off GC

2014-09-11 Thread deadalnix via Digitalmars-d
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov 
wrote:
Everyone tells about greatness and safety of GC, and that it is 
hard to live without it... But, I suppose, you all do know the 
one programming language in which 95% of AAA-quality popular 
desktop software and OS is written. And this language is C/C++.


How do you explain this? Just because we are stubborn and silly 
people, we use terrible old C++? No. The real answer: there is 
no alternative.


Stop telling fairy tales that it is impossible to program 
safely in C++. Every experienced programmer can easily handle 
parallel programming and memory management in C++. Yes, it 
requires certain work and knowledge, but it is possible, and 
many of us do it on an everyday basis (at my current job we 
use reference counting, though the overall quality of the code 
is terrible, I must admit).


You mean safe like OpenSSL, GnuTLS, or Apple's?