On 26/02/2012 08:48, Paulo Pinto wrote:
On 26.02.2012 00:45, Andrew Wiley wrote:
On Sat, Feb 25, 2012 at 5:01 PM, Paulo Pinto <pj...@progtools.org> wrote:
On 25.02.2012 23:40, Andrew Wiley wrote:

On Sat, Feb 25, 2012 at 4:29 PM, Paulo Pinto <pj...@progtools.org> wrote:

On 25.02.2012 23:17, Peter Alexander wrote:

On Saturday, 25 February 2012 at 22:08:31 UTC, Paulo Pinto wrote:

On 25.02.2012 21:26, Peter Alexander wrote:

On Saturday, 25 February 2012 at 20:13:42 UTC, so wrote:

On Saturday, 25 February 2012 at 18:47:12 UTC, Nick Sabalausky wrote:

Interesting. I wish he'd elaborate on why it's not an option for his daily work.

Not the design but the implementation; memory management would be the first.



Memory management is not a problem. You can manage memory just as easily in D as you can in C or C++. Just don't use global new, which they'll already be doing.
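For instance, a minimal sketch of what that looks like in D, using core.stdc.stdlib and std.conv.emplace (the Vec3 struct is just a made-up example type):

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

struct Vec3 { float x, y, z; }

void main()
{
    // Allocate raw memory from the C heap; the GC never sees it.
    auto mem = malloc(Vec3.sizeof);
    scope(exit) free(mem);

    // Construct the object in place instead of using global new.
    auto v = emplace!Vec3(cast(Vec3*) mem, 1.0f, 2.0f, 3.0f);
    assert(v.x == 1.0f);
}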



I couldn't agree more.

The GC issue comes around often, but I personally think that the main issue is that the GC needs to be optimized, not that manual memory management is required.

Most standard C runtime malloc()/free() implementations are actually slower than most advanced GC algorithms.
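That claim is easy to probe, if not to settle. A crude D sketch (not a rigorous benchmark; results vary a lot by allocator, GC implementation, and allocation pattern):

import core.stdc.stdlib : malloc, free;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writefln;

void main()
{
    enum N = 1_000_000;

    // GC allocations: bookkeeping is amortized, collection is deferred.
    auto sw = StopWatch(AutoStart.yes);
    foreach (i; 0 .. N)
        cast(void) new ubyte[32];
    writefln("GC:          %s", sw.peek);

    // malloc/free: each pair pays its full cost immediately.
    sw.reset();
    foreach (i; 0 .. N)
        free(malloc(32));
    writefln("malloc/free: %s", sw.peek);
}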



If you require realtime performance then you don't use either the GC or malloc/free. You allocate blocks up front and use those when you need consistent high performance.

It doesn't matter how optimised the GC is. The eventual collection is inevitable, and if it takes anything more than a small fraction of a second then it will be too slow for realtime use.
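The up-front-allocation approach looks roughly like this (a minimal D sketch; a production pool would also need thread safety and alignment handling):

struct Pool(T, size_t capacity)
{
    private T[capacity] slots;
    private T*[capacity] freeList;
    private size_t top;

    void initialize()
    {
        foreach (i, ref s; slots)
            freeList[i] = &s;
        top = capacity;
    }

    // O(1), no system calls, no GC interaction: a bounded worst case.
    T* acquire() { return top ? freeList[--top] : null; }
    void release(T* p) { freeList[top++] = p; }
}

unittest
{
    Pool!(int, 4) pool;
    pool.initialize();
    auto p = pool.acquire();
    *p = 42;
    pool.release(p);
}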


There are realtime GC algorithms that are actually in use, in systems like the French Ground Master 400 missile radar. There is nothing more realtime than that. I surely would not want such systems to have a stop-the-world GC.
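The core idea behind most of them is to bound each pause rather than eliminate the work. A toy D sketch of a time-budgeted mark step (real collectors such as IBM's Metronome, cited below, also need read/write barriers and careful scheduling):

import std.datetime.stopwatch : StopWatch, AutoStart;
import core.time : Duration, usecs;

struct Obj { Obj*[] children; bool marked; }

// Marks for at most `budget`, then yields back to the application,
// so no single GC step can exceed the deadline.
void markStep(ref Obj*[] workList, Duration budget)
{
    auto sw = StopWatch(AutoStart.yes);
    while (workList.length && sw.peek < budget)
    {
        auto o = workList[$ - 1];
        workList = workList[0 .. $ - 1];
        if (o.marked)
            continue;
        o.marked = true;
        workList ~= o.children;
    }
}

unittest
{
    auto root = new Obj([new Obj(), new Obj()]);
    Obj*[] work = [root];
    markStep(work, 500.usecs);
}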


Can you give any description of how that is done (or any relevant papers), and how it can be made to function reasonably on low-end consumer hardware and standard operating systems? Without that, your example is irrelevant. Azul has already shown that realtime, non-pause GC is certainly possible, but only with massive servers, lots of CPUs, and large kernel modifications.

And, as far as I'm aware, that still didn't solve the JVM's generally memory-hungry behavior.

Sure.

http://www.militaryaerospace.com/articles/2009/03/thales-chooses-aonix-perc-virtual-machine-software-for-ballistic-missile-radar.html

http://www.atego.com/products/aonix-perc-raven/

Neither of those links has any information on how this actually works. In fact, the docs on Atego's site pretty much state that their JVM is highly specialized and requires programmers to follow very different rules from typical Java, which makes this technology look less and less viable for general usage.

I don't see how this example is relevant for D. I can't find any details on the system you're mentioning, but assuming they developed something similar to Azul, the fundamental problem is that D has to target platforms in general use, not highly specialized server environments with modified kernels and highly parallel hardware. Until such environments come into general use (assuming they do at all; Azul seems to be having trouble getting their virtual memory manipulation techniques merged into the Linux kernel), D can't make use of them, and we're right back to saying that GCs have unacceptably long pause times for realtime applications.

In Java's case, they are following the Java specification for realtime applications:

http://java.sun.com/javase/technologies/realtime/index.jsp

I did not mention any specific algorithm because, like most companies, I am sure Atego patents most of it.

Still, a quick Google search reveals a few papers:

http://research.microsoft.com/apps/video/dl.aspx?id=103698&l=i

http://www.cs.cmu.edu/~spoons/gc/vee05.pdf

http://domino.research.ibm.com/comm/research_people.nsf/pages/bacon.presentations.html/$FILE/Bacon05BravelyTalk.ppt

http://www.cs.technion.ac.il/~erez/Papers/real-time-pldi.pdf

http://www.cs.purdue.edu/homes/lziarek/pldi10.pdf

I know GC use is a bit of a religious debate, but C++ was the very last systems programming language without automatic memory management, and even C++ has gained some form of it in C++11.

At least in the desktop area, a decade from now most systems programming in desktop OSes will most likely use either reference counting (WinRT or ARC) or a GC (as in Spin, Inferno, Singularity, or Oberon).
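For what it's worth, the reference-counting half of that is easy to picture. A toy D sketch of deterministic counting (illustration only; real ARC/COM-style counting is atomic and handles cycles and weak references):

import core.stdc.stdlib : malloc, free;
import std.stdio : writeln;

struct Payload { int refs; int value; }

Payload* retain(Payload* p) { if (p) ++p.refs; return p; }

void release(Payload* p)
{
    if (p && --p.refs == 0)
    {
        writeln("freed deterministically");
        free(p);
    }
}

void main()
{
    auto p = cast(Payload*) malloc(Payload.sizeof);
    p.refs = 1;
    p.value = 42;

    auto q = retain(p);  // refs == 2
    release(p);          // refs == 1
    release(q);          // refs == 0: freed right here, no GC pause
}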

This is how I see the trend going, but hey, I am just a simple person and I get to be wrong lots of the time.

--
Paulo

Thank you for the documentation. The more I learn about GC, the more I think that users should be able to choose which GC they want.
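D's runtime did eventually move in that direction: the collector can be picked at program startup rather than baked into the source. A minimal sketch, assuming a druntime recent enough to ship the precise collector and honor --DRT options:

// app.d: nothing GC-specific appears in the code itself.
import std.stdio : writeln;

void main()
{
    auto data = new int[1024];  // served by whichever collector was selected
    writeln(data.length);
}

Run with the default collector, or ask the runtime for the precise one:

    ./app
    ./app --DRT-gcopt=gc:precise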
