On Wednesday, 5 February 2014 at 16:50:40 UTC, Ola Fosheim Grøstad wrote:
On Wednesday, 5 February 2014 at 16:04:06 UTC, Dicebot wrote:
It is up to the programmer to decide. Right now he does not have a choice, and sometimes you can't afford to have a GC in your program at all (as in, can't have it linked into the binary), not just can't call collection cycles. Having a sane fallback is very desirable.

Yes, if D is going to be a system-level programming language then there is no other option.

The proposed solution does not seem to save you from uncontrollably long collection cycles anyway, as it still uses the same memory pool, so I don't see how it can help even games, let alone more demanding applications.

Well, for games and game servers I think a 100ms delay once or twice per hour is inconsequential in terms of impact.

If you can reduce the GC load by various means, it might work out for most applications.

1. Reduce the set considered for GC by having the GC not scan paths that are known to be covered by RC (see the sketch after this list).

2. Improve the speed of the GC by avoiding interior pointers etc.

3. Reduce the number of GC collections by having RC take care of the majority of memory releases.

4. Have a local GC by collecting only the roots of nodes that are known to create cycles.
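A minimal sketch of what I mean by 1 and 3, assuming you simply keep RC-managed data off the GC heap entirely (malloc plus a manual count). RcBuffer is a made-up name, not something that exists in druntime or Phobos:

import core.stdc.stdlib : malloc, free;
import core.atomic : atomicOp, atomicStore;

struct RcBuffer
{
    private struct Payload
    {
        shared size_t refs;   // manual reference count
        size_t length;
        // raw data bytes follow this header
    }
    private Payload* p;

    static RcBuffer create(size_t length)
    {
        // malloc'ed memory is never scanned or collected by the GC
        auto mem = cast(Payload*) malloc(Payload.sizeof + length);
        atomicStore(mem.refs, cast(size_t) 1);
        mem.length = length;
        RcBuffer b;
        b.p = mem;
        return b;
    }

    this(this)   // postblit: every copy bumps the count
    {
        if (p) atomicOp!"+="(p.refs, 1);
    }

    ~this()      // last owner frees the block deterministically
    {
        if (p && atomicOp!"-="(p.refs, 1) == 0)
            free(p);
        p = null;
    }

    ubyte[] data()
    {
        return (cast(ubyte*)(p + 1))[0 .. p.length];
    }
}

Because nothing here is allocated with new, the GC has less to scan and fewer reasons to run a collection at all; only the genuinely cyclic or shared structures would be left to it.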

I don't think ARC is an option for OS-level development and critical applications anyway. :-)

Different scenarios have different needs.

Haven't you just basically confirmed my opinion? :)

In a way. :-) But what if the question is this:

How can you, in a pragmatic way, come up with a solution that covers most soft real-time applications?

A compiler switch that defaults to RC (i.e. turns standard GC features into standard RC features) could in theory get you pretty close, but I think clever RC/GC memory management requires whole-program analysis…
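No such switch exists today, but at the source level the difference would look roughly like this. Texture is just a made-up example type; std.typecons.RefCounted is real Phobos:

import std.typecons : RefCounted;

struct Texture
{
    int id;
    ~this() { /* release the resource deterministically */ }
}

void main()
{
    // GC heap: memory is reclaimed at some future collection,
    // and the destructor may not run promptly.
    auto gcManaged = new Texture(1);

    // RC: the payload is destroyed as soon as the last copy goes away.
    auto rcManaged = RefCounted!Texture(2);
    assert(rcManaged.id == 2);   // forwards to the payload via alias this
}

The hypothetical switch would essentially have the compiler do that rewrite for you wherever it can prove it safe, which is where the whole-program analysis comes in.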

For a competitive game 100ms delays during gameplay would be completely unacceptable.
