On Thursday, 18 February 2016 at 12:16:49 UTC, Radu wrote:
> As a casual user of the language, I see that there is a fragmentation of resources and a waste in this regard, with people developing in mainline and then some of you LDC guys catching up.
As Iain already pointed out, the main problem is (undocumented or weird) AST changes. These sometimes make a merge painful. This can (and will) get better.
This is IMHO the only "waste". Nobody on the LDC team does frontend development; we are all focused on the glue layer.
> My simple assumption is that if the dmd backend is presumably no longer maintained, a lot of the core dmd people can focus on improving whatever problems the frontend or glue layers have.
As far as I know, only Walter (and Daniel, I think) works on the backend. That is not "a lot of the core dmd people".
> This could only mean that you core LDC guys could focus on LLVM backend optimizations (both code generation and performance related). I'm going to assume that those kinds of performance optimizations are also constantly done by upstream LLVM, so more win here.
As it happens, I am an LLVM committer, too. But the LDC team focuses only on getting the glue library and the runtime library right. Adding new, useful optimizations is hard work; the people working on them are either researchers or backed by a big company.
> Users will not magically turn into contributors if their perception is that there is always going to be a catch-up game to play somewhere. Not to mention that if one wants to get something into LDC, one has to commit it to mainline, which is DMD; you just multiplied the know-how someone needs to have to do some useful work...
It depends on the feature you want. If you want a new language feature, then yes. But then you are not changing LDC; you are changing the language specification, and therefore the reference compiler.
You can add a lot of features without ever touching the DMD frontend code. The sanitizers, for example, or the not-yet-merged PR for profile-guided optimizations.
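(For illustration, this is how the sanitizers surface on the command line in recent LDC releases; the exact flag spelling is an assumption for the version under discussion.)

    $ ldc2 -fsanitize=address app.d
    $ ./app    # the instrumented binary reports invalid memory accesses at run time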
> And finally, just pointing people to ldc/gdc (always a version or two behind, another grief) each time dmd performance is poor looks awfully wrong.
I find this "speed" argument doubtful. My experience is that if you really need performance, you must *know* what you are doing. Just picking some code from a website, compiling it, and then complaining that the resulting binary is slower than that of language X is not a serious approach.
For a novice user, LDC can be discouraging: just type ldc2 -help-hidden. But you may need to know about these options, e.g. to enable the right auto-vectorizer for your problem.
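For example, to get an idea of the vectorization-related knobs, you can filter the hidden options (which options exist depends on the LLVM version LDC was built against):

    $ ldc2 -help-hidden | grep -i vector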
I once wrote an MD5 implementation in pure Java that was substantially faster than the reference C implementation from RFC 1321 (compiled with gcc -O3). C is not faster than Java if you know Java but not C. The same is true for D.
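As a small, generic D illustration of the "know your language" point (my own sketch, not benchmark data): building a string with ~= in a loop goes through the druntime append machinery on every step, while std.array.Appender is the idiomatic choice for many repeated appends.

    import std.array : appender;

    // Straightforward: each ~= calls into the druntime array-append
    // machinery and may reallocate as the string grows.
    string concatNaive(string[] parts)
    {
        string result;
        foreach (p; parts)
            result ~= p;
        return result;
    }

    // Idiomatic: Appender manages its own buffer and is documented in
    // std.array as the efficient way to build arrays incrementally.
    string concatAppender(string[] parts)
    {
        auto buf = appender!string();
        foreach (p; parts)
            buf.put(p);
        return buf.data;
    }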
I really like the compiler diversity. What I miss (hint!) is a program to verify compiler/backend correctness: generate a random D program, compile it with all three compilers, and compare the outputs. IMHO we could find a lot of backend bugs this way. This would help all D compilers. A minimal sketch of such a driver follows.
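Here is what the comparison driver could look like, assuming the hard part (the random program generator) already exists; the compiler flags and file names below are assumptions, adjust them to your setup:

    import std.process : execute;
    import std.stdio : writeln;

    void main(string[] args)
    {
        const src = args[1]; // path to the randomly generated D program

        // One compile command per compiler, plus the binary it produces.
        // (Output-file flags differ per compiler; treat these as assumptions.)
        auto jobs = [
            ["dmd", "-oftest_dmd", src],
            ["ldc2", "-of=test_ldc", src],
            ["gdc", src, "-o", "test_gdc"],
        ];
        auto bins = ["./test_dmd", "./test_ldc", "./test_gdc"];

        string[] outputs;
        foreach (i, job; jobs)
        {
            const r = execute(job); // compile
            assert(r.status == 0, job[0] ~ " failed to compile " ~ src);
            outputs ~= execute([bins[i]]).output; // run, capture stdout
        }

        // Any divergence in observable output hints at a bug in
        // (at least) one of the compilers.
        foreach (i; 1 .. outputs.length)
            if (outputs[i] != outputs[0])
                writeln("mismatch: ", jobs[0][0], " vs ", jobs[i][0]);
    }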
Regards, Kai