On Wednesday, 27 January 2016 at 05:32:04 UTC, Chris Wright wrote:
On Tue, 26 Jan 2016 21:15:07 +0000, rsw0x wrote:
On Tuesday, 26 January 2016 at 20:40:50 UTC, Chris Wright
wrote:
On Tue, 26 Jan 2016 19:04:33 +0000, rsw0x wrote:
GC in D is a pipe dream; if it weren't, why is it still so
horrible? Everyone keeps dancing around the fact that if the
GC weren't horrible, nobody would work around it.
Rather, if everyone believed the GC was awesome in all
circumstances, nobody would work around it. It could be
awesome in all circumstances, but if people believe
otherwise, they'll try working around it. It could be
generally awesome but bad for a certain use case, in which
case people who need to support that use case will need to
work around it.
In this case, I think it's a marketing issue, not a technical
one. D's being marketed as an alternative to C++, and
existing C++ users tend to believe that any garbage collector
is too slow to be usable.
In any case where you attempt to write code in D that is equal
in performance to C++, you must avoid the GC.
Over on d.learn there was a post by H.S. Teoh today about how a
struct deserializer he wrote used techniques that absolutely
require a GC in order to work without leaking memory
everywhere. The version of the code without this technique was
an order of magnitude slower.
You could probably implement something in C++ that had similar
performance, but you'd be writing a custom string type with
reference counting, and it would be terribly annoying to use.
As I haven't reviewed the code, I'll leave no comment.
But the huge reason that it's a marketing problem rather than a
technical one is that I doubt most people care enough about
performance most of the time to need to avoid garbage
collection. The success of Unity3D shows that you can write
games in garbage collected languages, and those have at least
moderate latency requirements.
The Unity3D engine itself is written in C++, so that's a moot
point. It's also well known in gaming communities for being
incredibly underperforming.
There's no "AAA" games written in Unity, just indie games and
phone games.
Hilariously enough, if you google "What major games have been
made with Unity?", the first result (for me, anyway) was a
Reddit thread:
https://www.reddit.com/r/gamedev/comments/2qigo0/why_arent_more_aaa_games_developed_with_unity/
Top response:
"It's super easy to get from 0% to 90% in Unity but really hard
to get from 90% to 100%. The garbage collector is a major
culprit in this."
If you're doing real-time programming, you do need to be very
careful about the GC. If your entire application is real-time,
then you can't use a GC at all. If portions of it have lower
latency requirements, which, I'm told, is common, you can use
@nogc in strategic places, call GC.disable and GC.enable where
appropriate, and enjoy the best of both worlds.
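That mixed mode is easy to sketch (my own toy example, not anyone's production code; the function names are mine):

```d
import core.memory : GC;

// @nogc guarantees at compile time that this function performs no GC
// allocation; the compiler rejects any hidden allocation inside it.
@nogc nothrow void processFrame(int[] buffer)
{
    foreach (ref x; buffer)
        x *= 2; // pure in-place work, no allocation
}

void run()
{
    auto buffer = new int[](1024); // allocate up front, outside the hot path

    GC.disable();                  // no collection cycles during the hot section
    foreach (i; 0 .. 10_000)
        processFrame(buffer);
    GC.enable();                   // collections may resume after this point
}
```

The low-latency section gets a compile-time guarantee from @nogc, while setup code before it is free to use the GC normally.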
Except the GC has leaked into the entire language.
Here's a fun idea: let's play "things that definitely should be
usable in @nogc but are unusable in @nogc".
All core.thread.* functions are unusable in @nogc.
For some dumb reason, mutexes, semaphores, condition variables,
et cetera are classes; I have an in-house library that
essentially wraps pthread primitives in structs.
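A struct wrapper along those lines might look roughly like this (my own sketch, not the in-house library; it assumes druntime's pthread bindings carry nothrow/@nogc, which recent versions do):

```d
import core.sys.posix.pthread;

// A value-type mutex: no class, no GC allocation, usable from @nogc code.
struct PlainMutex
{
    private pthread_mutex_t handle;

    @disable this(this); // non-copyable, like the underlying primitive

    @nogc nothrow:

    void initialize() { pthread_mutex_init(&handle, null); }
    void lock()       { pthread_mutex_lock(&handle); }
    void unlock()     { pthread_mutex_unlock(&handle); }
    ~this()           { pthread_mutex_destroy(&handle); }
}
```

Because it's a struct, it lives wherever you put it (stack, aggregate, malloc'd block) instead of forcing a GC class allocation.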
I have no way to use exceptions in @nogc outside of massive
hacks; I have to resort to C-style error handling.
I have to manually create my own functors to be able to capture
variables by value in lambdas so that they're usable in @nogc.
How C++98.
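For anyone who hasn't had the pleasure, the hand-rolled functor dance looks roughly like this (a sketch; all names are mine):

```d
// A hand-rolled functor: captures `threshold` by value in a struct,
// so no GC-allocated closure is ever created.
struct AboveThreshold
{
    int threshold;
    @nogc nothrow bool opCall(int x) const { return x > threshold; }
}

@nogc nothrow size_t countAbove(const(int)[] data, AboveThreshold pred)
{
    size_t n;
    foreach (x; data)
        if (pred(x))
            ++n;
    return n;
}
```

Compare `countAbove(data, AboveThreshold(4))` to what a lambda capturing `threshold` would give you: the lambda's closure is GC-allocated, so it's rejected in @nogc.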
Alias template parameters are again massive hidden abusers of the
GC.
Data structures? Haha. Nothing.
Logging? Time to create your own fprintf wrapper and wonder why
you're not just using C directly.
This is just a tiny sample of my experience using @nogc.
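And the fprintf wrapper in question is about this glamorous (a minimal sketch of the workaround, nothing more):

```d
import core.stdc.stdio : fprintf, stderr;

// A thin @nogc logging shim over C stdio: no GC, no Phobos.
@nogc nothrow void logError(const(char)* msg)
{
    fprintf(stderr, "[error] %s\n", msg);
}
```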
A bit of digging suggests that real-time programs have troubles
with malloc being too slow or having unpredictable latency, and
there are a few specialized allocators for real-time systems.
This is almost entirely applicable only to hard real-time. Note
that hard real-time is a tiny minority of real-time problems;
most are some variant of soft real-time.
So this isn't so much a problem with GC in particular; it's
more of a problem with general-purpose allocators.
Yes, you allocate outside of the real-time parts and reuse memory
in pools/buffers. But this goes completely against the grain of a
standard library that relies on a GC: the vast majority of
Phobos (and hell, a lot of the runtime and language itself) is
completely unusable in @nogc code.
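The pool pattern I mean, sketched (names and sizes are mine):

```d
// Preallocate once during startup, then recycle inside the real-time loop.
struct FixedPool(T, size_t N)
{
    private T[N] storage;
    private size_t next;

    @nogc nothrow:

    T* acquire()
    {
        if (next == N) return null;  // pool exhausted: caller must handle it
        return &storage[next++];
    }

    void reset() { next = 0; }       // recycle everything between frames
}

struct Event { int id; long timestamp; }

@nogc nothrow void realtimeLoop(ref FixedPool!(Event, 1024) pool)
{
    foreach (frame; 0 .. 3)
    {
        pool.reset();
        if (auto e = pool.acquire())
            *e = Event(frame, 0);
        // ... process events without ever touching the GC ...
    }
}
```

All allocation happens once, when the pool is declared; the @nogc loop only hands out slots from memory it already owns.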
The major problem with having a GC is that it lulls developers
into thinking it's okay to allocate *everywhere*, because the
cost of an allocation is now out of sight and therefore out of mind.
This is doubly true for D's hidden allocations.
As a quick example, a lot of the code Walter rewrote to use
ranges in Phobos over the last year (I think?) runs orders of
magnitude faster while not allocating at all.
Yes, there are real-time garbage collectors. Yes, the London
Stock Exchange did pay 18 million pounds to move away from C# to
a low-latency C++ system despite this.
NASDAQ's OMX system, to the best of my knowledge, uses Java,
with the garbage collector disabled and replaced with a C++-like
allocator (NASDAQ's CIO discussed this at JavaOne 2007, and it
is also mentioned by Irene Aldridge in her 2012 publication).