On 4/18/2014 2:46 AM, Manu via Digitalmars-d wrote:
> Consensus is that there's no performant GC available to D either.
Switching from one non-performant system to another won't help matters.
> Can you show that Obj-C suffers a serious performance penalty from its ARC system? Have there been comparisons?
O-C doesn't use ARC for all pointers, nor is it memory safe.
> Java is designed to be GC compatible from the ground up. D is practically incompatible with GC in the same way as C, but it was shoe-horned in there anyway.
This isn't quite correct. I implemented a GC for Java back in the '90s, and D's design drew on that experience: it has semantics conducive to GC that C doesn't have. To wit, objects can't have internal pointers into themselves, so a moving collector can be implemented.
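The property at work, in a nutshell (an illustrative sketch of the language rule, not anything the compiler generates):

    struct S
    {
        int value;
        int* p;   // if this pointed into the struct itself, a move would break it
    }

    void main()
    {
        S a;
        a.p = &a.value;   // legal to write, but D reserves the right to move and
        S b = a;          // copy structs bitwise; b.p still points into a, not b
        // A moving collector relies on the same guarantee: it can relocate an
        // object and patch the references *to* it, without having to discover
        // and fix pointers the object keeps into itself.
    }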
> Everyone who talks about fantasy 'awesome GCs' admits it would be impossible to implement in D for various reasons.
The same goes for fantasy ARC :-)
>> inc/dec isn't as cheap as you imply. The dec usually requires the creation of an exception handling unwinder to do it.
> Why do you need to do that?
Because if a function exits via a thrown exception, the dec's need to happen.
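In concrete terms, a call like g(o) through an ARC'd reference forces the compiler to emit something of this shape (rcAddRef/rcRelease are made-up names for whatever the runtime hook would be; the scope(exit) stands for the try/finally the backend has to build):

    class T { }
    void g(T o) { /* may throw */ }

    // made-up runtime hooks, only to show the shape of the generated code
    void rcAddRef(T o) { }
    void rcRelease(T o) { }

    // hand-written equivalent of what ARC would turn  void f(T o) { g(o); }  into
    void f(T o)
    {
        rcAddRef(o);
        scope(exit) rcRelease(o);   // must also run when g() throws -- hence the unwinder
        g(o);
    }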
> Why are you so strongly opposed, if this is the case?
Because I know what kind of code will have to be generated for it.
> But you're always talking about how D creates way less garbage than other languages, which seems to be generally true. It needs to be tested before you can make presumptions about performance.
ARC, in order to be memory safe, would have to be there for ALL pointers, not just allocated objects.
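That's not a hypothetical corner case; perfectly ordinary D code produces pointers with nothing behind them to count:

    void main()
    {
        int x;
        int* p = &x;      // points at a stack variable: there is no heap
                          // allocation, so there is no count to inc/dec
        int[4] a;
        int* q = &a[2];   // interior pointer into a fixed-size array --
                          // again, no object header to hang a count on
        // A memory-safe ARC scheme has to account for p and q too, which is
        // why "just count the allocated objects" doesn't cover the language.
    }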
> Well, I'd like to see it measured in practice. But most common scenarios I can imagine look like they'd eliminate nicely. pure, and perhaps proper escape analysis (planned?), offer better opportunities for elimination than other implementations like Obj-C's.
If you're not aware of the exception handler issue, then I think those assumptions about performance are unwarranted. Furthermore, if we implement ARC and it turns out to be way too slow, then D simply loses its appeal. We could then spend the next 5 years attempting to produce a "sufficiently smart compiler" to buy that performance back, and by then it will be far too late.
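To be concrete about what the optimizer would have to prove before it may drop a pair (a sketch; rcAddRef/rcRelease stand in for whatever hooks the compiler would emit):

    class T { int x; }

    int peek(T o) pure
    {
        // naive ARC inserts rcAddRef(o) on entry and rcRelease(o) on every
        // exit path. Eliding the pair is only valid if the compiler can prove
        // that 'o' does not escape and that the caller's reference outlives
        // the call -- the kind of interprocedural analysis a C++-oriented
        // backend has no reason to perform.
        return o.x;
    }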
>> First off, now pointers are 24 bytes in size. Secondly, every pointer dereference becomes two dereferences (not so good for cache performance).
> That's not necessarily true. To what extent can the compiler typically eliminate inc/dec pairs? How many remain in practice? We don't know.
Yes, we don't know. We do know that we don't have an optimizer that will do that, and we know that GDC and LDC won't do it either, because those optimizers are designed for C++, not ARC.
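For reference, here's one hypothetical layout that shows where those numbers come from (a sketch, not a committed design; 'base' is what would keep interior pointers accounted for):

    struct RCPtr(T)
    {
        T* ptr;        // the pointer itself
        uint* count;   // where the reference count lives
        void* base;    // start of the owning allocation, so interior
                       // pointers stay attached to their count
    }

    // three machine words: 24 bytes on a 64-bit target
    static assert(RCPtr!int.sizeof == 3 * size_t.sizeof);

    void bump(ref RCPtr!int p)
    {
        ++*p.count;    // every inc/dec goes through a second pointer, usually
                       // to a different cache line than the data itself
    }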
> The performance has yet to be proven. Under this approach you'd bunch references up close together, so there's a higher-than-usual probability the rc will already be in cache. I agree it's theoretically a problem, but I have no evidence that it's a deal-breaker. In the absence of any other options, it's worth exploring.
I'm just stunned you don't find the double indirection of the rc a problem, given your adamant (and correct) objections to the cost of virtual function call dispatch.
> Well, the alternative is to distinguish them in the type system.
That would be a dramatic redesign of D.
> I don't feel like you've given me any evidence that ARC is not feasible, just that you're not interested in trying it out. Please kill it technically, not just with dismissal and FUD.
I've given you technical reasons. You don't agree with them; that's ok, but it doesn't mean I haven't considered your arguments, all of which have come up before. See this thread for the previous discussion. It's not like I haven't tried.
http://forum.dlang.org/thread/l34lei$255v$1...@digitalmars.com