On 25.08.2014 12:14, Marco Leise wrote:
> On Sun, 24 Aug 2014 06:39:28 +0000,
> "Paulo Pinto" <pj...@progtools.org> wrote:

>> Examples of real, working desktop OSes that people really used at
>> their work, done in systems programming languages with GC.
>>
>> Mesa/Cedar
>> https://archive.org/details/bitsavers_xeroxparcteCedarProgrammingEnvironmentAMidtermRepo_13518000
>>
>> Oberon and derivatives
>> http://progtools.org/article.php?name=oberon&section=compilers&type=tutorial
>>
>> SPIN
>> http://en.wikipedia.org/wiki/Modula-3
>> http://en.wikipedia.org/wiki/SPIN_%28operating_system%29
>>
>> What is really needed for the average Joe systems programmer to
>> get over this "GC in systems programming" stigma is getting the
>> likes of Apple, Google, Microsoft and such to force-feed them
>> these languages.

> Yes, but when these systems were invented, was the focus on a
> fast, lag-free multimedia experience or on safety?

BlueBottle, an Oberon successor, has a video player. The codecs are
written in Assembly; everything else in Active Oberon.

This is no different from using C, as C does not support vector
instructions either.

> How do you get the memory for the GC heap, when you are just about
> to write the kernel that manages the system's physical memory?

Easy, the GC lives in the kernel.

> Do these systems manually manage memory in performance-sensitive
> parts, or do they rely on GC as much as technically feasible?

They are a bit more GC-friendly than D: only memory allocated via NEW is
on the GC heap.

Additionally, there is also the possibility of manual memory management,
but only inside UNSAFE modules.
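In D terms the same split already exists, with `new` playing the role of
NEW and C-style allocation playing the role of an UNTRACED REF inside an
UNSAFE module. A minimal sketch of my own (not code from any of these
systems):

```d
import core.stdc.stdlib : free, malloc;

struct Node
{
    int value;
}

void main()
{
    // Traced: the GC owns this object and collects it once it is
    // unreachable, like memory allocated via NEW in Modula-3.
    Node* traced = new Node(7);

    // Untraced: invisible to the collector, so it must be freed by
    // hand -- the rough analogue of an UNTRACED REF in an UNSAFE
    // module.
    Node* untraced = cast(Node*) malloc(Node.sizeof);
    untraced.value = 7;

    assert(traced.value == 7 && untraced.value == 7);
    free(untraced); // forgetting this leaks; the GC will not help here
}
```

The difference is that Modula-3 fences the manual side off syntactically
(UNSAFE), while D leaves mixing the two styles to convention.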

> Could they use their languages as-is, or did they create a fork for
> their OS? What was the expected memory space at the time of authoring
> the kernel? Does the language usually allow raw pointers, unions,
> interfacing with C etc., or is it more confined, like Java?


I am lost here. These languages have a symbiotic relationship with the
OS, just like C has with UNIX.

Thankfully, no C in sight. SPIN does support C and has a POSIX interface
implemented in Modula-3, though.

> I see how you can write an OS with GC already in the kernel or
> whatever. However, there are too many question marks to jump to
> conclusions about D.
>
> o the larger the heap, the slower the collection cycles
>    (how will it scale in the future with e.g. 1024 GiB RAM?)
> o the less free RAM, the more often the collector is called
>    (game consoles are always out of RAM)
> o tracing GCs have memory overhead
>    (memory that could have been used as disk cache, for example)
> o some code needs to run in soft real-time, like audio
>    processing plugins; a stop-the-world GC is bad here
> o non-GC threads are still somewhat arcane and system
>    specific
> o if you accidentally store the only reference to a GC heap
>    object in a non-GC thread, it might get collected
>    (a hypothetical or existing language may have a better
>    offering here)
>
> "For programs that cannot afford garbage collection, Modula-3
> provides a set of reference types that are not traced by the
> garbage collector."
> Someone evaluating D may come across the question "What if I
> get into one of those 10% of use cases where a tracing GC is not
> a good option?" It might be some application developer working
> for a company that sells video editing software that has to
> deal with complex projects and object graphs, playing sounds
> and videos while running background tasks like generating
> preview versions of HD material or auto-saving and serializing
> the project to XML.
> Or someone writing an IDE auto-completion plugin that has
> graphs of thousands of small objects from the source files in
> import paths that are constantly modified while the user types
> in the code editor.
> Plus anyone who finds him-/herself in a memory-constrained
> and/or (soft-)real-time environment.
>
> Sometimes it results in the idea that D's GC is some day going
> to be as fast as the primary one in Java or C#, which people
> found to be acceptable for soft-real-time desktop applications.
> Others start contemplating whether it is worth writing their own D
> runtime to remove the stop-the-world GC entirely.
> Personally, I mostly want to be sure Phobos is transparent about
> GC allocations and that not all threads stop for the GC cycle.
> That should make soft real-time in D a lot saner :)
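As an aside on that last wish: the `@nogc` attribute that just landed in
DMD 2.066 already gives part of this transparency at compile time. A
minimal sketch of my own (not Phobos code):

```d
// @nogc makes the compiler reject any GC allocation inside the
// function, so a hidden allocation in a library call becomes a
// compile-time error instead of a runtime pause.
@nogc int sumSquares(const(int)[] values)
{
    int total = 0;
    foreach (v; values)
        total += v * v;
    // e.g. `values.dup` would fail to compile here, because .dup
    // allocates on the GC heap.
    return total;
}

void main() @nogc
{
    int[3] data;
    data[0] = 1; data[1] = 2; data[2] = 3;
    assert(sumSquares(data[]) == 14);
}
```

It only covers the "transparent about GC allocations" half, of course;
it says nothing about which threads a collection cycle stops.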


Those questions I can hardly answer due to lack of experience.

I can only say that we have replaced C++ applications with Java ones for
telecommunications control, and they perform fast enough. It is the same
type of work Erlang is used for.

Another thing is that many programmers don't know how to use profilers.

In some cases, thanks to memory profilers, I was able to achieve better
memory use in Java/C# than the legacy application managed with the
standard malloc that shipped with the compiler.

But this will only change with a change of generations.

--
Paulo
