Thanks for your comments, Walter.

Certainly I'm not out to berate D; I have quite an affection for it and high hopes for its success as it matures over time.  It is, of course, important for you to know what show-stoppers people are finding (which is why you asked).

Also, while I'm not currently developing with D, I still enjoy watching this channel, especially some of the episodes featuring skits by retard, bearophile and other cool boffins**.

http://en.wikipedia.org/wiki/Boffin  (**taking the complimentary meaning of the word)

If I had more time available I'd happily contribute in some concrete way (code, bug fixes, etc.) to the common cause of D, but alas that might have to wait for another lifetime.

> > #2 significant deterioration of D's GC under high working dataset volume.  
> > The GC did
> > not seem to scale very well.  Since moving back to C++ I've implemented my 
> > own
> > memory management technique (note I said memory management not *automatic* 
> > GC).
> > One of the biggest reasons for using D in the first place (automatic GC) no 
> > longer held for me.
> > This topic also discussed much before on this NG.
> 
> It is possible to do your own memory management with D.

Agreed, but the same is true of C++; my point is that automatic GC is no longer (for me) the number one reason to use D.

> There've been a lot of proposals for improved gc, but not really anyone 
> willing to step up and do the work. On the plus side, D's gc has proven 
> to be remarkably robust and reliable. It's a solid, if pedestrian, 
> implementation.

Actually, I've solved the memory management problem in C++ to a degree that suits my purposes.  I developed the idea quite independently of others (a few months ago, when I decided to go back to C++).

Having developed the technique myself, I assumed it must already have a name.  After a bit of web searching I happened across the term "region-based memory management".  I no longer have the references handy, but I believe it's used in Cyclone and has been proposed for "real-time Java".

If I were to revert to D in the future, I would definitely have this in my D toolbox.
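For the curious, here's a minimal sketch of what region-based (arena) allocation typically looks like in C++.  This is a generic illustration of the idea only, not exactly my implementation; the `Region` and `make_in` names are just for this sketch.  Objects are bump-allocated out of large blocks and the whole region is freed in one go, so there's no per-object free and no tracing collector.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <vector>

// A minimal region (arena): objects are bump-allocated from large blocks
// and all freed at once when the region is destroyed.
class Region {
public:
    explicit Region(std::size_t block_size = 64 * 1024)
        : block_size_(block_size), cursor_(0), capacity_(0), current_(nullptr) {}

    ~Region() {
        // Release every block; individual objects are never freed one by one.
        for (char* block : blocks_) std::free(block);
    }

    // Bump-pointer allocation; grabs a fresh block when the current one is full.
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (cursor_ + align - 1) & ~(align - 1);  // align the cursor
        if (current_ == nullptr || p + n > capacity_) {
            capacity_ = (n > block_size_) ? n : block_size_;
            current_ = static_cast<char*>(std::malloc(capacity_));
            if (!current_) throw std::bad_alloc();
            blocks_.push_back(current_);
            p = 0;
        }
        cursor_ = p + n;
        return current_ + p;
    }

private:
    std::size_t block_size_, cursor_, capacity_;
    char* current_;
    std::vector<char*> blocks_;
};

// Convenience: placement-new into a region.  Suitable for trivially
// destructible types, since the region never runs destructors.
template <typename T, typename... Args>
T* make_in(Region& r, Args&&... args) {
    void* p = r.allocate(sizeof(T), alignof(T));
    return new (p) T(static_cast<Args&&>(args)...);
}
```

The appeal is that allocation is just a pointer bump, and deallocation cost is proportional to the number of blocks rather than the number of objects, which is exactly where a GC tends to hurt under a large working set.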

> I'm going to argue a bit with dmd not having optimization. It actually 
> does have a well developed and reliable data flow analysis driven 
> optimizer. It does register allocation based on live range analysis, and 
> it even has a sophisticated instruction scheduler in it. Where it's 
> deficient is in floating point code gen, but the rest is pretty good.

I won't argue the point; I probably made some wrong assumptions or interpretations regarding my D experience.

Regards
Justin Johansson


Walter Bright Wrote:

> Justin Johansson wrote:
> > The #1 show-stopper for me was lack of shared object (.so) support under 
> > Linux; virtually
> > mandated by my audience (Apache/LAMP).  (A workaround like FastCGI is 
> > simply not
> > appealing to customers.)  This topic discussed many times before on this NG.
> 
> I know this is a big problem and I know it's been discussed a lot, I 
> just wanted to be sure what was stopping you.
> 
> 
> > #2 significant deterioration of D's GC under high working dataset volume.  
> > The GC did
> > not seem to scale very well.  Since moving back to C++ I've implemented my 
> > own
> > memory management technique (note I said memory management not *automatic* 
> > GC).
> > One of the biggest reasons for using D in the first place (automatic GC) no 
> > longer held for me.
> > This topic also discussed much before on this NG.
> 
> It is possible to do your own memory management with D.
> 
> There've been a lot of proposals for improved gc, but not really anyone 
> willing to step up and do the work. On the plus side, D's gc has proven 
> to be remarkably robust and reliable. It's a solid, if pedestrian, 
> implementation.
> 
> 
> > #3 problems with circular module references.  I forget some of the detail 
> > but think, if I
> > recall correctly, that it was to do with static class members so had to 
> > pull a lot of source
> > code into one big file .. then leading to problem #4
> 
> The circular module thing is usually a result of static constructors in 
> each of two modules that import each other. There are many solutions to 
> this, such as switching to lazy initialization, moving the 
> initializations to a 3rd module, having the initialization done by a 
> function called explicitly from main(), etc.
> 
> 
> > #4 The performance of the IDE that I was using (Descent under Eclipse) did 
> > not scale
> > very well with large source files.  Tried a few other development tools but 
> > found Descent
> > to be overall the best but, like I say, not adequate at scaling in a large 
> > project.
> > Sure this is not a D language problem per se but a practical issue that is 
> > still likely to
> > put some users off.
> 
> > #5 problems with circular references when dealing with a complex class 
> > template design
> 
> This has gotten a lot better in the last 3 months or so.
> 
> > #6 a general feeling of "gee, I never had any of these problems with C++" 
> > .. this comment
> > relating to the general immaturity (bugs-wise) of the D compiler compared 
> > with what's
> > available for C++ these days ..
> 
> In the last 10 years C++ compilers have gotten a lot better. Before 
> 2000, they all suffered from endless bugs.
> 
> > so I guess a comment about the outlandish size of executable
> > files produced by the D compiler
> 
> I have some ideas about improving the size, but the priority right now 
> is finalizing D2.
> 
> > and general immature (lack of) optimization of generated
> > code by D compiler is apt for this point as well.
> 
> I'm going to argue a bit with dmd not having optimization. It actually 
> does have a well developed and reliable data flow analysis driven 
> optimizer. It does register allocation based on live range analysis, and 
> it even has a sophisticated instruction scheduler in it. Where it's 
> deficient is in floating point code gen, but the rest is pretty good.
