Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-16 Thread Jeremie Pelletier
Fawzi Mohamed Wrote:

 On 2009-09-15 04:51:19 +0200, Robert Jacques sandf...@jhu.edu said:
 
  On Mon, 14 Sep 2009 18:53:51 -0400, Fawzi Mohamed fmoha...@mac.com wrote:
  
  On 2009-09-14 17:07:00 +0200, Robert Jacques sandf...@jhu.edu said:
  
  On Mon, 14 Sep 2009 09:39:51 -0400, Leandro Lucarella  
  llu...@gmail.com  wrote:
Jeremie Pelletier, on September 13 at 22:58 you wrote:
  [snip]
1) to allocate large objects that have a guard object, it is a good idea to pass through the GC, because if memory is tight a GC collection is triggered, thereby possibly freeing some extra memory
2) using GC malloc is not faster than malloc; especially with several threads, the single lock of the basic GC makes itself felt.
  
for how I use D (not realtime), the two things I would like to see from a new GC are:
1) multiple pools (at least one per CPU, with a thread id hash to assign threads to a given pool).
This is to avoid the need for a global GC lock in GC malloc and, if possible, to use memory close to the CPU when a thread is pinned; the goal is not to have really thread-local memory. If you really need local memory distinct from the stack then maybe a separate process should be used. This is especially doable with 64 bits; with 32, memory usage/fragmentation could become an issue.
2) multiple threads doing the collection (a main thread distributing the work to other threads (one per CPU) that do the mark phase using atomic ops).

Other, better GCs with less latency (but not at the cost of too much computation) would be nice to have, but are not a priority for my usage.
  
  Fawzi
  
  
For what it's worth, the whole point of thread-local GC is to do 1) and 2). For the purposes of clarity, thread-local GC refers to each thread having its own GC for non-shared objects, plus a shared GC for shared objects. Each thread's GC may allocate and collect independently of the others (e.g. in parallel) without locking/atomics/etc.
 
Well, I want at least thread-local pools (or almost; one can probably restrict it to the number of CPUs, which will give most of the benefit), but not an extra partition of the memory into thread-local and shared.
Such a partition might be easier in D2 (I think it was discussed, but even then I am not fully sure about the benefit), because then you have to somehow be able to share and maybe even unshare an object, which will be cumbersome. Thread-local things add a level in the memory hierarchy that I am not fully sure is worth having; in it you should have almost only low-level plumbing.
If you really want that much separation for many things then maybe a separate process + memmap might be better.
The fast local storage for me is the stack, and one might think about being more aggressive in using it; the heap is potentially shared.
Well, at least that is my feeling.
 
Note that on 64 bit one can easily use a few bits to subdivide the memory into parts, making finding the pool group very quick, and this discussion is orthogonal to being generational or not.
 
 Fawzi
 

I just posted my memory manager to pastebin:
http://pastebin.com/f7459ba9d

I gave up on the generational feature; it's indeed impossible without write barriers to keep track of pointers from old generations to newer ones. I had the whole tracing algorithm done, but without generations a naive scan and sweep is faster because it has far fewer cache misses.

I'd like to get some feedback on it if possible.
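
For readers who want to see the shape of the tradeoff Jeremie describes, here is a minimal, hypothetical sketch in D of a naive scan-and-sweep pass. None of these names come from his pastebin; a real collector would use pools and mark bitmaps instead of linear searches, and a mark stack instead of recursion:

struct Block {
    void*[] data;       // payload words, conservatively scanned for pointers
    bool marked;
    bool hasPointers;   // false for NO_SCAN-style leaf blocks
}

Block*[] heap;          // every allocation the collector knows about
void*[]  roots;         // stacks, globals, registers (gathered elsewhere)

Block* findBlock(void* p) {
    foreach (b; heap) {
        auto lo = cast(void*) b.data.ptr;
        if (p >= lo && p < lo + b.data.length * (void*).sizeof)
            return b;
    }
    return null;
}

void mark(void* p) {
    auto b = findBlock(p);
    if (b is null || b.marked) return;
    b.marked = true;
    if (!b.hasPointers) return;
    foreach (q; b.data) mark(q);    // conservative: treat every word as a pointer
}

void collect() {
    foreach (b; heap) b.marked = false;
    foreach (r; roots) mark(r);
    Block*[] live;
    foreach (b; heap)
        if (b.marked) live ~= b;    // unmarked blocks are garbage
    heap = live;                    // one flat, cache-friendly pass, no generation bookkeeping
}

The sweep is a single linear walk over the block list, which is where the cache-friendliness he mentions comes from; a generational variant would need extra per-store bookkeeping on top of this.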


Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-16 Thread Lutger
Jeremie Pelletier wrote:
...
 
 I just posted my memory manager to pastebin:
 http://pastebin.com/f7459ba9d
 
 I gave up on the generational feature; it's indeed impossible without write barriers to keep track of pointers from old generations to newer ones. I had the whole tracing algorithm done, but without generations a naive scan and sweep is faster because it has far fewer cache misses.
 
 I'd like to get some feedback on it if possible.

I think that it deserves a new thread...



Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-16 Thread dsimcha
== Quote from Lutger (lutger.blijdest...@gmail.com)'s article
 Jeremie Pelletier wrote:
 ...
 
  I just posted my memory manager to pastebin:
  http://pastebin.com/f7459ba9d
 
  I gave up on the generational feature; it's indeed impossible without write barriers to keep track of pointers from old generations to newer ones. I had the whole tracing algorithm done, but without generations a naive scan and sweep is faster because it has far fewer cache misses.
 
  I'd like to get some feedback on it if possible.
 I think that it deserves a new thread...

Yes, preferably on D.announce, and please explain what you did for the people who didn't read the original (horribly long, off-topic) thread.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread language_fan
Mon, 14 Sep 2009 21:55:10 -0600, Rainer Deyke thusly wrote:

 language_fan wrote:
 The members of the last group have studied computer science and
 languages, in particular. They have found a pet academic language,
 typically a pure one, but paradigms may differ. In fact this is the
 group which uses something other than the hybrid
 object-oriented/procedural model. They appreciate a strong, orthogonal
 core language that scales cleanly. They are not scared of esoteric
non-C-like syntax. They use languages that are not ready to take a step into the real world during the next 70 years.
 
 Of the three types, this comes closest to describing me.  Yet, I am
 completely self-taught, and my preferred language is still C++.  (I
 wouldn't call it my pet language.  I loathe C++, I just haven't found a
 suitable replacement yet.)
 
 Stereotypes are dangerous.

Indeed they are. My post should have been taken with a grain of salt. The 
idea was to show that languages in each group have their advantages and 
disadvantages. There is nothing wrong with being self-taught; many times people with formal education lack the passion many amateurs share.

What is bad is that many people can only express their ideas in one kind 
of language, and that is usually their pet language. If you study Java, 
C#, C++, and D, they are all very similar to each other. Especially if 
you try to avoid learning all advanced features that are not common to 
all of them. In that case you don't know four different languages, but a 
single simple language mostly suitable for basic end user application 
development. On the other hand, knowing 40 academic languages will not 
get you far, either.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread language_fan
Tue, 15 Sep 2009 00:25:46 +0200, Lutger thusly wrote:

 That's a fancy way of saying that anyone who has not studied CS is a
 moron and therefore cannot understand what is good about languages, thus
 they lose any argument automatically. Am I right?

I just recommend learning basic concepts until terms like generational 
garbage collection, closure, register allocation, immutability, loop 
fusion, term rewriting, regular languages, type constructor, virtual 
constructor, and covariance do not scare you anymore.

If something small like optional semicolons or some other syntactic nuance prevents you from finishing your job, how the heck are you supposed to build any real-world programs? Just to put this into some perspective: syntax matters, but not much. Nowadays you can easily write a tool that parses stuff written in language X and outputs it in pretty-printed form in language Y. This is what happens on .NET, for instance. Most of the languages there are just syntactic skins over the same common core language.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread Don

language_fan wrote:

Tue, 15 Sep 2009 00:25:46 +0200, Lutger thusly wrote:


That's a fancy way of saying that anyone who has not studied CS is a
moron and therefore cannot understand what is good about languages, thus
they lose any argument automatically. Am I right?


I just recommend learning basic concepts until terms like generational 
garbage collection, closure, register allocation, immutability, loop 
fusion, term rewriting, regular languages, type constructor, virtual 
constructor, and covariance do not scare you anymore.


If something small like optional semicolons or some other syntactic nuance prevents you from finishing your job, how the heck are you supposed to build any real-world programs? Just to put this into some perspective: syntax matters, but not much.


Nowadays you can easily write a tool that parses stuff written in language X and outputs it in pretty-printed form in language Y. This is what happens on .NET, for instance. Most of the languages there are just syntactic skins over the same common core language.


It sounds as though you're talking about VB.NET, which is a non-existent language (it's a parsing step ONLY). It's just C# with a different parse table, and it exists only for marketing reasons (to disguise the fact that MS abandoned VB). I don't think you can conclude anything general from that.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread Justin Johansson
language_fan Wrote:

 ... Nowadays you can easily write 
 a tool that parses stuff written in language X and outputs it in pretty 
 printed form in language Y. This is what happens on .NET, for instance. 
 Most of the languages there are just syntactic skins for the same common 
 core language.

Yes, well, Ted Neward pretty much makes this observation re LLVM, saying:

Holy frickin' crap. I think I'm in love.

In case you missed it, here's the link

http://blogs.tedneward.com/2008/02/24/Some+Interesting+Tidbits+About+LLVM.aspx

JJ/



Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-15 Thread Fawzi Mohamed

On 2009-09-15 04:51:19 +0200, Robert Jacques sandf...@jhu.edu said:


On Mon, 14 Sep 2009 18:53:51 -0400, Fawzi Mohamed fmoha...@mac.com wrote:


On 2009-09-14 17:07:00 +0200, Robert Jacques sandf...@jhu.edu said:

On Mon, 14 Sep 2009 09:39:51 -0400, Leandro Lucarella  
llu...@gmail.com  wrote:

Jeremie Pelletier, on September 13 at 22:58 you wrote:

[snip]
1) to allocate large objects that have a guard object, it is a good idea to pass through the GC, because if memory is tight a GC collection is triggered, thereby possibly freeing some extra memory
2) using GC malloc is not faster than malloc; especially with several threads, the single lock of the basic GC makes itself felt.


for how I use D (not realtime), the two things I would like to see from a new GC are:
1) multiple pools (at least one per CPU, with a thread id hash to assign threads to a given pool; see the sketch after this post).
This is to avoid the need for a global GC lock in GC malloc and, if possible, to use memory close to the CPU when a thread is pinned; the goal is not to have really thread-local memory. If you really need local memory distinct from the stack then maybe a separate process should be used. This is especially doable with 64 bits; with 32, memory usage/fragmentation could become an issue.
2) multiple threads doing the collection (a main thread distributing the work to other threads (one per CPU) that do the mark phase using atomic ops).

Other, better GCs with less latency (but not at the cost of too much computation) would be nice to have, but are not a priority for my usage.


Fawzi



For what it's worth, the whole point of thread-local GC is to do 1) and 2). For the purposes of clarity, thread-local GC refers to each thread having its own GC for non-shared objects, plus a shared GC for shared objects. Each thread's GC may allocate and collect independently of the others (e.g. in parallel) without locking/atomics/etc.


Well, I want at least thread-local pools (or almost; one can probably restrict it to the number of CPUs, which will give most of the benefit), but not an extra partition of the memory into thread-local and shared.
Such a partition might be easier in D2 (I think it was discussed, but even then I am not fully sure about the benefit), because then you have to somehow be able to share and maybe even unshare an object, which will be cumbersome. Thread-local things add a level in the memory hierarchy that I am not fully sure is worth having; in it you should have almost only low-level plumbing.
If you really want that much separation for many things then maybe a separate process + memmap might be better.
The fast local storage for me is the stack, and one might think about being more aggressive in using it; the heap is potentially shared.

Well, at least that is my feeling.

Note that on 64 bit one can easily use a few bits to subdivide the memory into parts, making finding the pool group very quick, and this discussion is orthogonal to being generational or not.


Fawzi
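
As an illustration of Fawzi's point 1, here is a hedged sketch in D of allocation routed through per-CPU pools picked by a thread-id hash, so threads contend only on their own pool's lock rather than on one global GC lock. Every name here is invented, and the bump allocator merely stands in for the real pool logic:

import core.thread : Thread;
import core.sync.mutex : Mutex;

enum NPOOLS = 8;                   // ideally one pool per CPU

struct Pool {
    Mutex lock;
    ubyte[] arena;                 // this pool's backing memory
    size_t top;                    // bump-pointer watermark (toy allocator)
}

__gshared Pool[NPOOLS] pools;

shared static this() {
    foreach (ref p; pools) {
        p.lock = new Mutex;
        p.arena = new ubyte[1 << 20];
    }
}

size_t poolIndex() {
    // hash the current thread's identity onto a pool slot
    auto id = cast(size_t) cast(void*) Thread.getThis();
    return (id >> 4) % NPOOLS;
}

void* gcMalloc(size_t size) {
    auto p = &pools[poolIndex()];
    p.lock.lock();                 // contention only among threads sharing this pool
    scope (exit) p.lock.unlock();
    if (p.top + size > p.arena.length)
        return null;               // a real GC would trigger a collection here
    auto mem = p.arena.ptr + p.top;
    p.top += size;
    return cast(void*) mem;
}

Fawzi's 64-bit remark would slot in here as well: if each pool's arena were reserved at an address whose top few bits encode the pool index, mapping a pointer back to its pool group becomes a single shift instead of a search.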



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread language_fan
Tue, 15 Sep 2009 16:41:25 +0200, Lutger thusly wrote:

 language_fan wrote:
 
 Tue, 15 Sep 2009 00:25:46 +0200, Lutger thusly wrote:
 
 That's a fancy way of saying that anyone who has not studied CS is a
 moron and therefore cannot understand what is good about languages,
 thus they lose any argument automatically. Am I right?
 
 I just recommend learning basic concepts until terms like generational
 garbage collection, closure, register allocation, immutability, loop
 fusion, term rewriting, regular languages, type constructor, virtual
 constructor, and covariance do not scare you anymore.
 
 
 Right right, I don't disagree with that. It was more the 'ruby/python
 programmers make apps no-one uses using amateur tools | c-family users
 worship FOO and the rest are academics that use pure functional
 languages' part that tripped me up. You know, the majority of software
 isn't built by academics, NASA uses C mostly, etc. A little nuance
 wouldn't hurt here.

Ok. I did not even mean you should write all your programs in LISP or 
Prolog. The point is, once you know how to use various kinds of 
techniques, you can use whatever language you want. The problem is, most 
programmers only know 1-3 languages, and those languages are typically 
very similar to each other (e.g. Java, C, and C++). Some problems are 
inherently functional or easily expressed with regular expressions or as 
a logical constraint satisfaction problem. It does not make sense to 
write your own priority queue for each new task.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-15 Thread BCS

Hello language_fan,


Fri, 11 Sep 2009 22:41:32 +, BCS thusly wrote:


Hello language_fan,


Game development is one of the largest users of systems programming
languages.


I would mandate the 10-25% test no matter what language is being used.

The bulk of programming is done for Finance, Insurance and Real Estate (and is done in COBOL /yuck). The most common programs out there are OSs and MS Office. As I said, I don't care about games.


I was talking about systems programming languages like C or D. From
wikipedia



I'm talking about systems programming languages AND any other language that 
is used.



What is a bit confusing is that you mentioned operating systems and MS Office. 99.9% of companies worldwide do not develop any code, even as plugins, for those. For example, MS Office is a native executable only for business reasons. There is nothing preventing them from providing it as an applet or web service (like Google does). Office suites are in no way performance-limited these days. In fact I think parts of the competitor OpenOffice.org have been written in Java.


My point is that games are NOT representative. In terms of lines of code written, Finance, Insurance and Real Estate dominate, and in terms of lines of code executed, (after LAMPACK) MS Office, its clones, Windows and Linux dominate. The only place games dominate is in the mind share of some category of home users.



It cannot be known beforehand which features are unnecessary, and there
is a hard limit on how much can be removed. So either you can remove
say 30-50% of features


Clearly you can't cut core features, but you can make some eye candy
features go away when there isn't enough power to run them.

Making business decisions is not that easy, 


Easy? No. But that's what someone gets paid big money to do. Or are you saying 
that it's impossible?...



especially if you have no idea of the application domain.


I didn't do enough market research, so I'm going to give the end user everything they might want and then ask them to buy a bigger computer to run it, because I'm too lazy to make the resulting mess fast.



There are several stakeholders and various contracts involved.


Our program manager is too lazy to get the stakeholders to give us a rational, coherent spec.


The business decision here is that *I* WILL NOT force my customers to buy a new computer every 1-3 years. And even if they don't need to buy a new computer, if I can make my code 1% faster for 1 minute of effort, I only need to save my users 100 minutes of time for it to be a net gain. I'm asserting, without proof, that there are vanishingly few desktop applications out there that need anywhere near the computing power that is available nowadays (i.e. they should rarely have any perceivable wait time on a remotely modern system).




or do a complete redesign.


If a different design is practical and would be faster, you should have used it in the first place, or should be planning on doing it at some point anyway (I have never seen a non-trivial program that was fast enough that I didn't wish it was faster).


Large parts of software projects worldwide fail. Redesigning a single iteration, for instance, is not that bad. You seem to favor the top-down waterfall model. Unfortunately the waterfall model usually fails. If you had studied software engineering lately, you would know that.



/Some/ sort of design is needed even in an agile model, and even if you don't bother with a detailed design, as you pointed out, redesigning/refactoring should always be an option. Again, if a different, faster design is practical, sooner or later you should use it.


This is the classic "fast, cheap, or well done: pick two". For anything that will ship, I'll always pick well done.


That is ok if you are a hobby programmer, but in the real world, e.g. in the game industry, the contracts pretty much dictate the schedules, and if you spend too much time on the project, the producer will not offer any extra money. So if $1000..$1500 / month is ok for you, then fine.


I will grant that games can legitimately require top-of-the-line hardware (scientific programs and some things like Photoshop can also), but most anything that runs on a desktop should be written so that people can run it with the hardware they have now, rather than have to buy new hardware.



The above can be read as "Ok, you might have a point about games, but..."


Nowadays, as piracy is hindering PC sales quite a lot, the focus is on console, mobile, and online games.


Could you quit going back to games already?! I DON'T CARE A WHIT ABOUT GAMES. If an argument doesn't apply to non-games, I don't care.





Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Nick B

Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

I myself allocate all my meshes and textures directly on the GC and I'm pretty sure it's faster than C's malloc and much safer.
Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
the opposite :P Plus, I could use a specialized malloc implementation, 
like TLSF.

The D GC is already specialized, and given that it's used quite a lot in D, there are good chances it's already sitting in the CPU cache, its heap already having an available memory block waiting on a freelist or, if the alloc is more than 0x1000 bytes, the pages available in a pool (see the sketch after this post). You'd need to use malloc quite a lot to get the same optimal performance, and mixing the two would affect the performance of both.
It might be specialized for _something_, but it definitely isn't 
real-time systems. I'd say with my use cases there's a very poor chance 
the GC is sitting in the CPU cache since most of the time my memory is 
preallocated and managed by specialized structures and/or malloc. I've 
found that using the GC only for the hard-to-manually-manage objects 
works best. The rest is handled by malloc and the GC has a very shallow 
vision of the world thus its collection runs are very fast. Of course 
there's a drawback that both the GC and malloc will have some pages 
cached, wasting memory, but I don't let the GC touch too much so it 
should be minimal. YMMV of course - all depends on the memory allocation 
patterns of the application.


I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on my custom GC first, but I believe it could give yet another performance boost. I'll add my memory manager to my list of code modules to make public :)

Jeremie

If the code is really useful, why not offer it to the Tango team for formal inclusion in the next release?


Nick B
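
The freelist behaviour Jeremie describes above roughly matches the basic D GC's design: small requests come out of size-class bins, and anything of a page (0x1000 bytes) or more goes through page-granular allocation in a pool. A simplified, hypothetical sketch (the names and the use of malloc as a page source are inventions of this example):

import core.stdc.stdlib : malloc;

enum PAGESIZE = 0x1000;

struct FreeNode { FreeNode* next; }

__gshared FreeNode*[9] bins;       // size classes 16, 32, ..., 4096 bytes

size_t binFor(size_t size) {
    size_t b = 0, cap = 16;
    while (cap < size) { cap <<= 1; ++b; }
    return b;
}

void* refillBin(size_t b) {
    // carve a fresh page into equal chunks; thread all but one onto the bin
    immutable chunk = cast(size_t)(16 << b);
    auto page = cast(ubyte*) malloc(PAGESIZE);
    for (size_t off = chunk; off + chunk <= PAGESIZE; off += chunk) {
        auto n = cast(FreeNode*)(page + off);
        n.next = bins[b];
        bins[b] = n;
    }
    return page;                   // the first chunk satisfies the current request
}

void* gcAlloc(size_t size) {
    if (size > PAGESIZE)
        return malloc(size);       // big alloc: the page/pool path in the real GC
    auto b = binFor(size);
    if (auto n = bins[b]) {        // fast path: pop a waiting block off the freelist
        bins[b] = n.next;
        return cast(void*) n;
    }
    return refillBin(b);           // slow path: go get a fresh page
}

The "already waiting on a freelist" fast path is just a pointer pop, which is why a warm GC heap can beat a cold malloc for small objects.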


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Jeremie Pelletier
Nick B Wrote:

 Jeremie Pelletier wrote:
  Tom S Wrote:
  
  Jeremie Pelletier wrote:
  Tom S Wrote:
 
  Jeremie Pelletier wrote:
  I myself allocate all my meshes and textures directly on the GC and I'm pretty sure it's faster than C's malloc and much safer.
  Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
  the opposite :P Plus, I could use a specialized malloc implementation, 
  like TLSF.
  The D GC is already specialized, and given that it's used quite a lot in D, there are good chances it's already sitting in the CPU cache, its heap already having an available memory block waiting on a freelist or, if the alloc is more than 0x1000 bytes, the pages available in a pool. You'd need to use malloc quite a lot to get the same optimal performance, and mixing the two would affect the performance of both.
  It might be specialized for _something_, but it definitely isn't 
  real-time systems. I'd say with my use cases there's a very poor chance 
  the GC is sitting in the CPU cache since most of the time my memory is 
  preallocated and managed by specialized structures and/or malloc. I've 
  found that using the GC only for the hard-to-manually-manage objects 
  works best. The rest is handled by malloc and the GC has a very shallow 
  vision of the world thus its collection runs are very fast. Of course 
  there's a drawback that both the GC and malloc will have some pages 
  cached, wasting memory, but I don't let the GC touch too much so it 
  should be minimal. YMMV of course - all depends on the memory allocation 
  patterns of the application.
  
  I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on my custom GC first, but I believe it could give yet another performance boost. I'll add my memory manager to my list of code modules to make public :)
 Jeremie
 
  If the code is really useful, why not offer it to the Tango team for formal inclusion in the next release?
 
 Nick B

Because I dropped support for D1 long ago. If either the Tango or Phobos team likes my code once I publish it, they are free to adapt it for their runtime.

I rewrote the GC from scratch and optimized it over the past two years to support my custom D runtime. It cannot be used as-is with either Phobos or Tango without either changing the public interface of the GC or rewriting every runtime routine calling into the GC. I would only release it to the public domain as an example of how to implement a tracing, generational, non-moving GC. I still need to implement the generational part, but I got the general algorithm down on paper today, so I should have it working sometime this week.

I'm not a big fan of code licenses and therefore like to write most of my code 
myself, if only to learn how it works. I rarely mind people asking for my code 
either, so long as I get credited for it :)


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread bearophile
Jeremie Pelletier:

I haven't had to use the C heap whatsoever so far in D, could you give me an 
example of where you need it?

1) I'm able to allocate a bigger single chunk of memory from the C heap, about 1.8 GB, while the GC heap of DMD on Windows allows only a smaller chunk. In one program I've had to allocate such a single chunk of memory.
2) I think the GC is faster at allocating small chunks of memory, while the C heap is faster at allocating large chunks.
3) GC-managed pointers have several restrictions (so different that maybe I'd like them to be seen as a distinct type by the compiler, one that requires a cast to be converted from/to C pointers; I don't know why this idea was not appreciated by D's designers). One of them is that GC-managed pointers can't be tagged: you can't add one or two bits to GC-managed pointers, and such tags are useful when you want to implement certain data structures.
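
To make point 3 concrete, here is a hedged sketch of the tagging trick on C-heap memory. malloc returns blocks aligned to at least 8 bytes, so the low three bits of the address are free to hold a tag; a conservative GC could no longer be trusted to recognize such a disguised pointer and keep its block alive, which is why this only works off the GC heap. All names are invented for the example:

import core.stdc.stdlib : malloc, free;

enum TAG_MASK = 0x7;    // low 3 bits are zero on 8-byte-aligned allocations

void* tag(void* p, uint t) {
    assert(t <= TAG_MASK && (cast(size_t) p & TAG_MASK) == 0);
    return cast(void*)(cast(size_t) p | t);
}

uint tagOf(void* p) {
    return cast(uint)(cast(size_t) p & TAG_MASK);
}

void* untag(void* p) {
    return cast(void*)(cast(size_t) p & ~cast(size_t) TAG_MASK);
}

unittest {
    auto n = cast(int*) malloc(int.sizeof);
    *n = 42;
    auto cell = tag(n, 3);                   // e.g. tag 3 = "this slot holds an int"
    assert(tagOf(cell) == 3);
    assert(*cast(int*) untag(cell) == 42);
    free(n);
}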


Indeed, and sometimes it's way faster than that.

But lots of people will judge D against more modern languages like C#, Scala or Java, and not against C.


C++ isn't any more complex than D2,

I don't agree, see below.


I can't think of many features C++ has over D2. I can name quite a few 
features D2 has over C++ :)

Complexity and non-simplicity come from many things, like corner cases, rules upon rules upon rules, unsafe things that the compiler isn't able to catch in case of programmer errors, and unnatural syntax, not just from features. For example, the Python 3+ language has many more features than pure C, yet Python 3 is simpler than C :-)


it still allows for dirty work to be done when you need it.

The less dirty you make it, the better it will be when you try to maintain/debug your D code :-)


You don't need to code your application core in C and your application 
behavior in a scripting language on top of the C core. D allows you to write 
it all in one language with the same productivity, if not better productivity 
for not having to write the abstraction layer between C and scripting.

While D is quite a bit better than C, that of yours is a dream. In practice D isn't dynamic, and for certain purposes D is not close to the productivity of Python :-) (even just because in Python you can find tons of modules and bindings already done). There are still ways to improve D for such purposes. D can be as scalable as Scala :-)

Bye,
bearophile


Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-14 Thread Leandro Lucarella
Jeremie Pelletier, on September 13 at 22:58 you wrote:
 Tom S Wrote:
 
  Jeremie Pelletier wrote:
   Tom S Wrote:
   
   Jeremie Pelletier wrote:
   I myself allocate all my meshes and textures directly on the GC and I'm pretty sure it's faster than C's malloc and much safer.
   Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
   the opposite :P Plus, I could use a specialized malloc implementation, 
   like TLSF.
   
   The D GC is already specialized, and given that it's used quite a lot in D, there are good chances it's already sitting in the CPU cache, its heap already having an available memory block waiting on a freelist or, if the alloc is more than 0x1000 bytes, the pages available in a pool. You'd need to use malloc quite a lot to get the same optimal performance, and mixing the two would affect the performance of both.
  
  It might be specialized for _something_, but it definitely isn't 
  real-time systems. I'd say with my use cases there's a very poor chance 
  the GC is sitting in the CPU cache since most of the time my memory is 
  preallocated and managed by specialized structures and/or malloc. I've 
  found that using the GC only for the hard-to-manually-manage objects 
  works best. The rest is handled by malloc and the GC has a very shallow 
  vision of the world thus its collection runs are very fast. Of course 
  there's a drawback that both the GC and malloc will have some pages 
  cached, wasting memory, but I don't let the GC touch too much so it 
  should be minimal. YMMV of course - all depends on the memory allocation 
  patterns of the application.
 
 I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on

I saw a paper about that. The idea was to simply have some list of objects/pages in each generation and modify those lists instead of moving objects. I can't remember the name of the paper so I can't find it now :S

The problem with generational collectors (in D) is that you need read/write barriers to track inter-generational pointers (to be able to use pointers to younger generations in the older ones as roots when scanning), which can make the whole deal a little impractical for a language that doesn't want to impose a performance penalty for things you won't use (I don't see a way to instrument reads/writes to GC pointers only). This is why RC was always rejected as an algorithm for the GC in D, I think.
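
To spell out the cost being discussed: a generational collector would need the compiler to turn every pointer store into something like the following, so that old-to-young stores land in a remembered set that young-generation scans use as extra roots. This is a hypothetical sketch; D has no such compiler hook, which is exactly the point above:

enum Gen { young, old }

Gen generationOf(void* p) {
    // a real GC would derive this from the page or pool the address falls in
    return Gen.young;   // placeholder for the sketch
}

__gshared void*[] rememberedSet;    // old slots that point into the young generation

void writeBarrier(void** slot, void* value) {
    // the bookkeeping every instrumented pointer store would pay for
    if (generationOf(slot) == Gen.old && generationOf(value) == Gen.young)
        rememberedSet ~= cast(void*) slot;
    *slot = value;                  // the actual store
}

A young-generation collection would then scan rememberedSet alongside the usual roots instead of walking the whole old generation.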

 my custom GC first, but I believe it could give yet another performance
 boost. I'll add my memory manager to my list of code modules to make
 public :)

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

Pack and get dressed
before your father hears us,
before all hell breaks loose.


Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-14 Thread Robert Jacques
On Mon, 14 Sep 2009 09:39:51 -0400, Leandro Lucarella llu...@gmail.com  
wrote:

Jeremie Pelletier, on September 13 at 22:58 you wrote:

[snip]

I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on


I saw a paper about that. The idea was to simply have some list of objects/pages in each generation and modify those lists instead of moving objects. I can't remember the name of the paper so I can't find it now :S

The problem with generational collectors (in D) is that you need read/write barriers to track inter-generational pointers (to be able to use pointers to younger generations in the older ones as roots when scanning), which can make the whole deal a little impractical for a language that doesn't want to impose a performance penalty for things you won't use (I don't see a way to instrument reads/writes to GC pointers only). This is why RC was always rejected as an algorithm for the GC in D, I think.


my custom GC first, but I believe it could give yet another performance
boost. I'll add my memory manager to my list of code modules to make
public :)




As a counter-point, Objective-C just introduced a thread-local GC. According to a blog post (http://www.sealiesoftware.com/blog/archive/2009/08/28/objc_explain_Thread-local_garbage_collection.html), this has apparently allowed pause times similar to those of the previous generational GC (except that the former is doing a full collect, and the latter still has work to do). On that note, it would probably be a good idea if core.gc.BlkAttr supported shared and immutable state flags, which could be used to support a thread-local GC.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread BCS

Hello Lutger,


That's cool, but scrapple is exactly that: an assortment of small(ish)
projects / pieces of code that otherwise don't warrant a full project.
If you feel like putting it online, just ping BCS and I'm sure he'll
give you access right away.


All I need is your dsource user name.




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread language_fan
Mon, 14 Sep 2009 07:33:59 -0400, bearophile thusly wrote:

 But lots of people will judge D against more modern languages like C#, Scala or Java, and not against C.

Programmers often belong to three kinds of groups. First come the fans of traditionally weakly typed compiled languages (Basic, C, C++). They have tried some dynamic or academic languages but did not like them. They fancy efficiency and a close-to-the-metal feel. They think compilation to native code is the best way to produce programs, and think types should reflect the feature set of their CPU. They believe the syntax C uses was defined by their God.

The second group started with interpreted languages built by amateurs (PHP, Ruby, Python, some game scripting language, etc). They do not understand the meaning of types or compilation. They prefer writing short programs that usually seem to work. They hate formal specifications and proofs about program properties. They are usually writing simple web applications or some basic shareware utilities no one uses. They also hate trailing semicolons.

The members of the last group have studied computer science and languages, in particular. They have found a pet academic language, typically a pure one, but paradigms may differ. In fact this is the group which uses something other than the hybrid object-oriented/procedural model. They appreciate a strong, orthogonal core language that scales cleanly. They are not scared of esoteric non-C-like syntax. They use languages that are not ready to take a step into the real world during the next 70 years.

So yes, every group has somewhat different expectations.

C++ isn't any more complex than D2,
 
 I don't agree, see below.

It would help all of you if you could somehow formally specify how you 
measure language complexity. Is it the length of the grammar definition 
or something else? Otherwise these are just subjective opinions.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Lutger
language_fan wrote:

 Mon, 14 Sep 2009 07:33:59 -0400, bearophile thusly wrote:
 
 But lots of people will judge D against more modern languages like C#, Scala or Java, and not against C.
 
 Programmers often belong to three kinds of groups. First come the fans of traditionally weakly typed compiled languages (Basic, C, C++). They have tried some dynamic or academic languages but did not like them. They fancy efficiency and a close-to-the-metal feel. They think compilation to native code is the best way to produce programs, and think types should reflect the feature set of their CPU. They believe the syntax C uses was defined by their God.
 
 The second group started with interpreted languages built by amateurs (PHP, Ruby, Python, some game scripting language, etc). They do not understand the meaning of types or compilation. They prefer writing short programs that usually seem to work. They hate formal specifications and proofs about program properties. They are usually writing simple web applications or some basic shareware utilities no one uses. They also hate trailing semicolons.
 
 The members of the last group have studied computer science and languages, in particular. They have found a pet academic language, typically a pure one, but paradigms may differ. In fact this is the group which uses something other than the hybrid object-oriented/procedural model. They appreciate a strong, orthogonal core language that scales cleanly. They are not scared of esoteric non-C-like syntax. They use languages that are not ready to take a step into the real world during the next 70 years.
 

That's a fancy way of saying that anyone who has not studied CS is a moron 
and therefore cannot understand what is good about languages, thus they lose 
any argument automatically. Am I right?



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Jeremie Pelletier
Lutger Wrote:

 language_fan wrote:
 
  Mon, 14 Sep 2009 07:33:59 -0400, bearophile thusly wrote:
  
  But lots of people will judge D against more modern languages like C#, Scala or Java, and not against C.
  
  Programmers often belong to three kinds of groups. First come the fans of traditionally weakly typed compiled languages (Basic, C, C++). They have tried some dynamic or academic languages but did not like them. They fancy efficiency and a close-to-the-metal feel. They think compilation to native code is the best way to produce programs, and think types should reflect the feature set of their CPU. They believe the syntax C uses was defined by their God.
  
  The second group started with interpreted languages built by amateurs (PHP, Ruby, Python, some game scripting language, etc). They do not understand the meaning of types or compilation. They prefer writing short programs that usually seem to work. They hate formal specifications and proofs about program properties. They are usually writing simple web applications or some basic shareware utilities no one uses. They also hate trailing semicolons.
  
  The members of the last group have studied computer science and languages, in particular. They have found a pet academic language, typically a pure one, but paradigms may differ. In fact this is the group which uses something other than the hybrid object-oriented/procedural model. They appreciate a strong, orthogonal core language that scales cleanly. They are not scared of esoteric non-C-like syntax. They use languages that are not ready to take a step into the real world during the next 70 years.
  
 
 That's a fancy way of saying that anyone who has not studied CS is a moron 
 and therefore cannot understand what is good about languages, thus they lose 
 any argument automatically. Am I right?
 

I dunno if that's what the OP meant, but studying CS does not make you an authority on programming languages. I didn't even complete my first year of CS because I wasn't learning as fast as I wanted. School teaches you theory anyway; a job will teach you how to apply it in the real world. Anyone who can read and has the slightest interest in programming can learn the theory by themselves.

As for the different classes of programmers, I think the OP pushed the extremes more than the general cases. I came across a series of articles by Eric Lippert a few weeks ago talking about the matter:

http://blogs.msdn.com/ericlippert/archive/tags/Cargo+Cult+Programming/default.aspx


Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-14 Thread Fawzi Mohamed

On 2009-09-14 17:07:00 +0200, Robert Jacques sandf...@jhu.edu said:

On Mon, 14 Sep 2009 09:39:51 -0400, Leandro Lucarella 
llu...@gmail.com  wrote:

Jeremie Pelletier, on September 13 at 22:58 you wrote:

[snip]

I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on


I saw a paper about that. The idea was to simply have some list of objects/pages in each generation and modify those lists instead of moving objects. I can't remember the name of the paper so I can't find it now :S

The problem with generational collectors (in D) is that you need read/write barriers to track inter-generational pointers (to be able to use pointers to younger generations in the older ones as roots when scanning), which can make the whole deal a little impractical for a language that doesn't want to impose a performance penalty for things you won't use (I don't see a way to instrument reads/writes to GC pointers only). This is why RC was always rejected as an algorithm for the GC in D, I think.


my custom GC first, but I believe it could give yet another performance
boost. I'll add my memory manager to my list of code modules to make
public :)




As a counter-point, Objective-C just introduced a thread-local GC. According to a blog post (http://www.sealiesoftware.com/blog/archive/2009/08/28/objc_explain_Thread-local_garbage_collection.html), this has apparently allowed pause times similar to those of the previous generational GC (except that the former is doing a full collect, and the latter still has work to do). On that note, it would probably be a good idea if core.gc.BlkAttr supported shared and immutable state flags, which could be used to support a thread-local GC.


1) to allocate large objects that have a guard object, it is a good idea to pass through the GC, because if memory is tight a GC collection is triggered, thereby possibly freeing some extra memory
2) using GC malloc is not faster than malloc; especially with several threads, the single lock of the basic GC makes itself felt.


for how I use D (not realtime), the two things I would like to see from a new GC are:
1) multiple pools (at least one per CPU, with a thread id hash to assign threads to a given pool).
This is to avoid the need for a global GC lock in GC malloc and, if possible, to use memory close to the CPU when a thread is pinned; the goal is not to have really thread-local memory. If you really need local memory distinct from the stack then maybe a separate process should be used. This is especially doable with 64 bits; with 32, memory usage/fragmentation could become an issue.
2) multiple threads doing the collection (a main thread distributing the work to other threads (one per CPU) that do the mark phase using atomic ops).

Other, better GCs with less latency (but not at the cost of too much computation) would be nice to have, but are not a priority for my usage.


Fawzi



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Christopher Wright

language_fan wrote:
In fact this is the group 
which uses something other than the hybrid object-oriented/procedural 
model.


Damn straight! They use a hybrid OO/procedural/functional model. Like D. 
Or C#.


Oh, but Prolog, you may say. And I admit, I've seen it used a couple 
times by academics. But I've seen similar languages used at my job, and 
we're not by any means an algorithms shop.


I'm actually aware of very few languages that break the mold. There are 
toy languages like Befunge; there are solver-oriented languages like 
Prolog and Zimpl; and there are a couple oddities like METAFONT.


What languages have you seen that are so innovative and different in 
paradigm?


Re: Non-moving generational GC [was: Template Metaprogramming Made Easy (Huh?)]

2009-09-14 Thread Robert Jacques

On Mon, 14 Sep 2009 18:53:51 -0400, Fawzi Mohamed fmoha...@mac.com wrote:


On 2009-09-14 17:07:00 +0200, Robert Jacques sandf...@jhu.edu said:

On Mon, 14 Sep 2009 09:39:51 -0400, Leandro Lucarella  
llu...@gmail.com  wrote:

Jeremie Pelletier, on September 13 at 22:58 you wrote:

[snip]

I understand your points for using a separate memory manager, and I agree with you that having fewer active allocations makes for faster sweeps, no matter how few of them are scanned for pointers. However, I just had an idea on how to implement generational collection on a non-moving GC, which should solve your issues (and well, mine too) with the collector not being fast enough. I need to do some hacking on

 I saw a paper about that. The idea was to simply have some list of objects/pages in each generation and modify those lists instead of moving objects. I can't remember the name of the paper so I can't find it now :S

 The problem with generational collectors (in D) is that you need read/write barriers to track inter-generational pointers (to be able to use pointers to younger generations in the older ones as roots when scanning), which can make the whole deal a little impractical for a language that doesn't want to impose a performance penalty for things you won't use (I don't see a way to instrument reads/writes to GC pointers only). This is why RC was always rejected as an algorithm for the GC in D, I think.

my custom GC first, but I believe it could give yet another performance boost. I'll add my memory manager to my list of code modules to make public :)


 As a counter-point, Objective-C just introduced a thread-local GC. According to a blog post (http://www.sealiesoftware.com/blog/archive/2009/08/28/objc_explain_Thread-local_garbage_collection.html), this has apparently allowed pause times similar to those of the previous generational GC (except that the former is doing a full collect, and the latter still has work to do). On that note, it would probably be a good idea if core.gc.BlkAttr supported shared and immutable state flags, which could be used to support a thread-local GC.


1) to allocate large objects that have a guard object, it is a good idea to pass through the GC, because if memory is tight a GC collection is triggered, thereby possibly freeing some extra memory
2) using GC malloc is not faster than malloc; especially with several threads, the single lock of the basic GC makes itself felt.


for how I use D (not realtime), the two things I would like to see from a new GC are:
1) multiple pools (at least one per CPU, with a thread id hash to assign threads to a given pool).
This is to avoid the need for a global GC lock in GC malloc and, if possible, to use memory close to the CPU when a thread is pinned; the goal is not to have really thread-local memory. If you really need local memory distinct from the stack then maybe a separate process should be used. This is especially doable with 64 bits; with 32, memory usage/fragmentation could become an issue.
2) multiple threads doing the collection (a main thread distributing the work to other threads (one per CPU) that do the mark phase using atomic ops).

Other, better GCs with less latency (but not at the cost of too much computation) would be nice to have, but are not a priority for my usage.


Fawzi



For what it's worth, the whole point of thread-local GC is to do 1) and 2). For the purposes of clarity, thread-local GC refers to each thread having its own GC for non-shared objects, plus a shared GC for shared objects. Each thread's GC may allocate and collect independently of the others (e.g. in parallel) without locking/atomics/etc.
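
In code, the arrangement described above might look like the hypothetical sketch below: one collector instance per thread plus one shared instance, where a local collect() never has to stop other threads because no foreign pointers can reach the local heap. The toy collector only illustrates the routing; its collect() is a stand-in, not real marking:

class Collector {
    private ubyte[] arena;
    private size_t top;

    this() { arena = new ubyte[1 << 16]; }

    void* alloc(size_t size) {
        if (top + size > arena.length)
            collect();              // only this thread pauses
        auto p = arena.ptr + top;
        top += size;
        return cast(void*) p;
    }

    void collect() {
        // would mark from this thread's stack and globals only;
        // a toy reset stands in for the real mark/sweep here
        top = 0;
    }
}

Collector localGC;                  // module-scope = thread-local in D2
__gshared Collector sharedGC;       // one per process, would need locking

static this()        { localGC  = new Collector; }  // runs once per thread
shared static this() { sharedGC = new Collector; }  // runs once per process

void* gcNew(size_t size, bool isShared = false) {
    if (isShared)
        return sharedGC.alloc(size);    // synchronized in a real implementation
    return localGC.alloc(size);         // no locks, no atomics
}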


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-14 Thread Rainer Deyke
language_fan wrote:
 The members of the last group have studied computer science and languages, in particular. They have found a pet academic language, typically a pure one, but paradigms may differ. In fact this is the group which uses something other than the hybrid object-oriented/procedural model. They appreciate a strong, orthogonal core language that scales cleanly. They are not scared of esoteric non-C-like syntax. They use languages that are not ready to take a step into the real world during the next 70 years.

Of the three types, this comes closest to describing me.  Yet, I am
completely self-taught, and my preferred language is still C++.  (I
wouldn't call it my pet language.  I loathe C++, I just haven't found a
suitable replacement yet.)

Stereotypes are dangerous.


-- 
Rainer Deyke - rain...@eldwood.com


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Justin Johansson
 - If programs run quickly it saves some time.
 
 A good language has to try to save time in all those ways and more.

Thanks, bearophile, for that extensive writeup. A good read.

btw. I downloaded the Bud tool (on Linux) but couldn't get it to compile. First I had to rename a usage of 'macro' to 'makro', then it bitched about some missing module, so I gave up.

-- Justin


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Jeremie Pelletier
bearophile Wrote:

 Justin Johansson:
 
 would you mind saying what salient things there are about D that presumably 
 attracts to the language.  It just helps to know why others are here as one 
 ventures into new territory.
 
 That's not an easy question. This is a personal answer, other people will 
 like other sides of D. I like D1/D2 for:
 - I don't think of it as a proprietary language, like C#.
 - Sometimes I want the freedom to use memory as I like, with structs and even unions. If you have large datasets you find that using more than 20 bytes for a number, as in Python, doesn't help. Values also reduce indirection, and this speeds things up. This allows a more efficient usage of memory, and this helps increase cache locality, which increases performance. Unfortunately GC-managed D pointers can't be tagged, so I have to use memory from the C heap for them. And sometimes you need pointers. That's why I'd like D to have more/better ways to increase safety when using pointers (like using memory regions when not in release mode, etc).

I haven't had to use the C heap whatsoever so far in D; could you give me an example of where you need it? In fact, the *only* place I use the C heap is in my garbage collector's internals, for pool structs and mark ranges. I use pointers to GC memory all the time too; there are plenty of algorithms, especially in loops, that can run faster with pointer arithmetic than slices, and it's still the fastest way to pass struct references around.
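
The loop claim, in a hedged micro-example: the indexed slice version pays a bounds check per access in non-release builds, while the pointer version walks raw addresses. With -release an optimizer will often make the two equivalent, so this is worth measuring rather than assuming:

int sumSlice(int[] a) {
    int s = 0;
    foreach (i; 0 .. a.length)
        s += a[i];              // indexed access, bounds-checked in debug builds
    return s;
}

int sumPointer(int[] a) {
    int s = 0;
    auto p = a.ptr;
    auto end = a.ptr + a.length;
    while (p < end)
        s += *p++;              // raw pointer walk, no bounds checks anywhere
    return s;
}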

 - I like this newsgroup; I can talk to people, and they sometimes answer my numerous questions. I am learning a lot. Sometimes I receive no answers, but that's acceptable. By its nature this newsgroup attracts some strange people too.
 - I often use Python, it's the language I like most, but for some purposes it's too slow. And I am tired of writing vectorized code in NumPy and the like. Cython's reference counting makes me sick, and ShedSkin, while nice, is a failed experiment. D feels like freedom, while sometimes using Python feels like programming with mittens to me.
 - There are some things that I'd like to see in a language, and D being still in development and not controlled by an ivory-tower committee gives me the illusion of seeing some of my ideas realized. So far I haven't influenced the development of D a lot. On the other hand, if everyone can influence the language a lot, the result may be a patchwork. So some dynamic compromise has to be found every day.

I also like this community-driven model, but this forum has more people submitting ideas than people able to implement them in time; I'm pretty sure the TODO list is rather huge at this point :) I for one much prefer D development the way it is now over the working-group model used by the W3C or Khronos, for example.

The public bugzilla is really nice too, once you get used to it; one of the issues I submitted got fixed in 2.032, and I've also sent maybe 3 or 4 patches to the compiler source in other issues so far, so hopefully they'll be used in 2.033!

 - D looks a lot like C, yet in D I can write code several times faster than 
 C. Sometimes 5-10 times faster. This is an enormous difference.

Indeed, and sometimes it's way faster than that. I've ported many C headers to D and I'm always amazed at how many things I can throw out; just the DirectX headers were at least 50% smaller in D and MUCH easier to read. Such simplicity is also reflected in the compiler, which has quite a lot fewer tokens and parse nodes to create and analyze.

I must admit, however, that I sometimes miss the C preprocessor, or at least wish mixins had a syntax closer to that used by the C preprocessor. But it's a good idea to keep D without a preprocessor; it's much better for everything to have a scope.

 - I am quite sensitive to syntax noise and boilerplate code. I like elegance and clarity in semantics, I hate corner cases, yet I want a language that's efficient, readable, and as little bug-prone as possible. C++ looks too complex for me. D1 is simple enough for me, and D2 is getting a bit too complex. I may appreciate the idea of a D 1.5 language that fixes some holes and limits of D1 while keeping the language simple enough (struct constructors, and a few other things; such things don't significantly increase the difficulty of using the language).

C++ isn't any more complex than D2, its syntax just isn't as elegant. Other than multiple inheritance, which is partially solved through object composition, I can't think of many features C++ has over D2. I can name quite a few features D2 has over C++ :)

What I like about D is that while it's elegant, it still allows for dirty work to be done when you need it. You don't need to code your application core in C and your application behavior in a scripting language on top of the C core. D allows you to write it all in one language with the same productivity, if not better productivity for not having to write the abstraction layer between C and scripting.

Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Tom S

Jeremie Pelletier wrote:

I haven't had to use the C heap whatsoever so far in D; could you give me an example of where you need it? In fact, the *only* place I use the C heap is in my garbage collector's internals, for pool structs and mark ranges. I use pointers to GC memory all the time too; there are plenty of algorithms, especially in loops, that can run faster with pointer arithmetic than slices, and it's still the fastest way to pass struct references around.


I use the C heap a lot when I need slabs of memory that the GC should 
not look into for performance reasons. This includes images/textures, 
mesh data and some data structures that I release manually - again, for 
efficiency reasons.
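
A minimal sketch of that pattern, with invented names: the pixel slab lives on the C heap, so the GC neither scans it nor counts it among its active allocations, and it is released manually:

import core.stdc.stdlib : malloc, free;

struct Texture {
    ubyte* pixels;
    size_t width, height;
}

Texture createTexture(size_t w, size_t h) {
    Texture t;
    t.width = w;
    t.height = h;
    t.pixels = cast(ubyte*) malloc(w * h * 4);  // RGBA slab, invisible to the GC
    assert(t.pixels !is null);
    return t;
}

void destroyTexture(ref Texture t) {
    free(t.pixels);     // manual release: the bookkeeping cost this buys speed with
    t.pixels = null;
}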



- I like how D doesn't totally ignore safety as C does; in D sometimes the default way is the safer one, and the unsafe way is used only where you ask for it. I'd like to see more safeties added to D, like optional run-time and compile-time integral overflow tests, some pointer safety, better template error messages (template constraints help some in that regard), stack traces, fewer compiler bugs, safer casts (in C# you need explicit casts to convert double -> float), a safer printf, and some optional annotations inspired by Plint (a lint program) to give more semantics to the compiler, which can be used to both speed up code and avoid bugs. There's a lot that can be done in this regard. And release-mode performance can usually be kept unchanged.


Stack traces are a feature of the runtime. I made one for mine, which shows a dialog window with the stack trace, current register values, loaded modules and whatnot. It took me almost a week to piece together my CodeView reader, and I still need a DWARF reader for Linux. I'm gonna try to implement it in druntime and submit the patch to bugzilla.


Tango's runtime already does stack tracing on Windows and *NIX; however, its CV parser is subject to some licensing issues :( Perhaps you could release yours under some liberal license so it can be plugged in there? :)



--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Tom S

Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

I haven't had to use the C heap whatsoever so far in D; could you give me an example of where you need it? In fact, the *only* place I use the C heap is in my garbage collector's internals, for pool structs and mark ranges. I use pointers to GC memory all the time too; there are plenty of algorithms, especially in loops, that can run faster with pointer arithmetic than slices, and it's still the fastest way to pass struct references around.
I use the C heap a lot when I need slabs of memory that the GC should 
not look into for performance reasons. This includes images/textures, 
mesh data and some data structures that I release manually - again, for 
efficiency reasons.


The garbage collector in D already marks allocations which contain pointers and 
scans only those. If you want to know whether a type contains pointers, check 
the 'flags' property of the typeinfo or classinfo, testing bit0 and bit1 
respectively. This is what the GC uses at runtime when allocating memory to 
decide whether to tag the allocation as possibly containing pointers.
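
(A small sketch of that check; the names assume today's druntime, i.e. 
TypeInfo.flags and core.memory's GC, so treat the exact bit layout as 
illustrative rather than gospel:)

import core.memory : GC;
import std.stdio : writeln;

struct Vertex { float x, y, z; }         // no pointers
struct Node   { Node* next; int value; } // contains a pointer

void main()
{
    // TypeInfo.flags bit0 is set when the type may contain pointers.
    writeln(typeid(Vertex).flags & 1); // 0
    writeln(typeid(Node).flags & 1);   // 1

    // The GC applies the same test on allocation; a pointer-free block
    // can be tagged NO_SCAN so collections never trace into it.
    auto verts = cast(Vertex*) GC.malloc(Vertex.sizeof * 1024,
                                         GC.BlkAttr.NO_SCAN);
}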


Yea I know, but I want structures with pointers manually managed as well.



I myself allocate all my meshes and textures directly on the GC and I'm pretty 
sure it's faster than C's malloc and much safer.


Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
the opposite :P Plus, I could use a specialized malloc implementation, 
like TLSF.





- I like how D doesn't totally ignore safety as C does; in D the default way is 
sometimes the safer one, and the unsafe way is used only where you ask for it. 
I'd like to see more safeties added to D, like optional run-time and 
compile-time integral overflow tests, some pointer safety, better template 
error messages (template constraints help some in this regard), stack traces, 
fewer compiler bugs, safer casts (in C# you need an explicit cast to convert 
double -> float), a safer printf, and some optional annotations inspired by 
Plint (a lint program) to give more semantics to the compiler, which can be 
used both to speed up code and to avoid bugs. There's a lot that can be done 
in this regard. And release-mode performance can usually be kept unchanged.

Stack traces are a feature for the runtime; I made one for mine, which shows a 
dialog window with the stack trace, current register values, loaded modules 
and whatnot. It took me almost a week to piece together my CodeView reader, 
and I still need a DWARF reader for Linux. I'm gonna try and implement it in 
druntime and submit the patch to bugzilla.
Tango's runtime already does stack tracing on Windows and *NIX, however 
its CV parser is subject to some licensing issues :( Perhaps you could 
release yours under some liberal license so it can be plugged there? :)




Sure, I wouldn't mind at all; I'm not into licenses myself, so I might just 
release it into the public domain. I'll try and get a standalone package ready 
and post it somewhere, I just don't know where yet :x


Sweet :D As for a place, there are plenty of options, e.g. 
http://dsource.org/projects/scrapple/ or a separate dsource project.




--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Jeremie Pelletier
Tom S Wrote:

 Jeremie Pelletier wrote:
  Tom S Wrote:
  
  Jeremie Pelletier wrote:
  I haven't had to use the C heap whatsoever so far in D, could you give me 
  an example of where you need it? In fact, the *only* place I use the C 
  heap is in my garbage collector's internals, for pool structs and mark 
  ranges. I use pointers to GC memory all the time too; there are plenty of 
  algorithms, especially in loops, that can run faster with pointer 
  arithmetic than with slices, and it's still the fastest way to pass struct 
  references around.
  I use the C heap a lot when I need slabs of memory that the GC should 
  not look into for performance reasons. This includes images/textures, 
  mesh data and some data structures that I release manually - again, for 
  efficiency reasons.
  
  The garbage collector in D already marks allocations which contain pointers 
  and scans only those. If you want to know whether a type contains pointers, 
  check the 'flags' property of the typeinfo or classinfo, testing bit0 and 
  bit1 respectively. This is what the GC uses at runtime when allocating 
  memory to decide whether to tag the allocation as possibly containing 
  pointers.
 
 Yea I know, but I want structures with pointers manually managed as well.

Then just inform the GC to not scan the allocations you want, or better yet 
have a static ctor modify the flag of the typeinfo you don't want scanned.
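
(Roughly, using druntime's current per-allocation knob, GC.setAttr; the 
static-ctor trick of patching the typeinfo's flag is compiler-specific, so 
it isn't shown:)

import core.memory : GC;

struct Mesh
{
    float* verts;  // points into manually managed memory, so there is
    size_t nverts; // nothing useful for the GC to find in here
}

Mesh* newUnscannedMesh()
{
    auto m = cast(Mesh*) GC.malloc(Mesh.sizeof);
    GC.setAttr(m, GC.BlkAttr.NO_SCAN); // this block is never scanned
    return m;
}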

  I myself allocate all my meshes and textures directly on the GC and I'm 
  pretty sure it's faster than C's malloc and much safer.
 
 Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
 the opposite :P Plus, I could use a specialized malloc implementation, 
 like TLSF.

The D GC is already specialized, and given that it's used quite a lot in D, 
there's a good chance it's already sitting in the CPU cache, with its heap 
already having an available memory block waiting on a freelist or, if the 
alloc is more than 0x1000 bytes, the pages available in a pool. You'd need to 
use malloc quite a lot to get the same optimal performance, and mixing the two 
would affect the performance of both.
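
(A claim like that is easy to measure; here is a rough, admittedly naive 
benchmark sketch, assuming today's std.datetime.stopwatch, with the caveat 
that the results depend heavily on the allocation pattern:)

import core.memory : GC;
import core.stdc.stdlib : malloc, free;
import std.datetime.stopwatch : StopWatch;
import std.stdio : writeln;

void main()
{
    enum N = 100_000;
    StopWatch sw;

    sw.start();
    foreach (i; 0 .. N)
        cast(void) GC.malloc(64, GC.BlkAttr.NO_SCAN);
    sw.stop();
    writeln("GC.malloc:   ", sw.peek);

    sw.reset();
    sw.start();
    foreach (i; 0 .. N)
        free(malloc(64)); // pairs alloc/free, unlike the GC loop above
    sw.stop();
    writeln("malloc/free: ", sw.peek);
}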

  - I like how D doesn't totally ignore safety as C does; in D the default 
  way is sometimes the safer one, and the unsafe way is used only where you 
  ask for it. I'd like to see more safeties added to D, like optional 
  run-time and compile-time integral overflow tests, some pointer safety, 
  better template error messages (template constraints help some in this 
  regard), stack traces, fewer compiler bugs, safer casts (in C# you need 
  an explicit cast to convert double -> float), a safer printf, and some 
  optional annotations inspired by Plint (a lint program) to give more 
  semantics to the compiler, which can be used both to speed up code and to 
  avoid bugs. There's a lot that can be done in this regard. And 
  release-mode performance can usually be kept unchanged.
  Stack traces are a feature for the runtime; I made one for mine, which 
  shows a dialog window with the stack trace, current register values, 
  loaded modules and whatnot. It took me almost a week to piece together my 
  CodeView reader, and I still need a DWARF reader for Linux. I'm gonna try 
  and implement it in druntime and submit the patch to bugzilla.
  Tango's runtime already does stack tracing on Windows and *NIX, however 
  its CV parser is subject to some licensing issues :( Perhaps you could 
  release yours under some liberal license so it can be plugged there? :)
 
  
  Sure, I wouldn't mind at all; I'm not into licenses myself, so I might just 
  release it into the public domain. I'll try and get a standalone package 
  ready and post it somewhere, I just don't know where yet :x
 
 Sweet :D As for a place, there are plenty of options, e.g. 
 http://dsource.org/projects/scrapple/ or a separate dsource project.

I thought of that, but I don't feel like opening a project for just a few 
random code snippets or standalone classes. I think I'll just post it in this 
forum and let interested people grab it for now.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Tom S

Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

I myself allocate all my meshes and textures directly on the GC and I'm pretty 
sure it's faster than C's malloc and much safer.
Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
the opposite :P Plus, I could use a specialized malloc implementation, 
like TLSF.


The D GC is already specialized, and given that it's used quite a lot in D, 
there's a good chance it's already sitting in the CPU cache, with its heap 
already having an available memory block waiting on a freelist or, if the 
alloc is more than 0x1000 bytes, the pages available in a pool. You'd need to 
use malloc quite a lot to get the same optimal performance, and mixing the two 
would affect the performance of both.


It might be specialized for _something_, but it definitely isn't 
real-time systems. I'd say with my use cases there's a very poor chance 
the GC is sitting in the CPU cache since most of the time my memory is 
preallocated and managed by specialized structures and/or malloc. I've 
found that using the GC only for the hard-to-manually-manage objects 
works best. The rest is handled by malloc and the GC has a very shallow 
vision of the world thus its collection runs are very fast. Of course 
there's a drawback that both the GC and malloc will have some pages 
cached, wasting memory, but I don't let the GC touch too much so it 
should be minimal. YMMV of course - all depends on the memory allocation 
patterns of the application.
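
(One detail worth noting in such a hybrid scheme: if a malloc'd block holds 
references into GC memory, the collector must be told about it explicitly, 
or it may free objects the block still uses. A sketch with druntime's 
GC.addRange/removeRange, which postdate this thread:)

import core.memory : GC;
import core.stdc.stdlib : calloc, free;

class Texture { ubyte[] pixels; } // an ordinary GC-managed class

// A manually managed table of GC references. calloc zeroes the block so
// the GC never chases garbage values, and addRange makes it scan the
// table so the textures stay alive while the table references them.
Texture[] allocTable(size_t n)
{
    auto p = cast(Texture*) calloc(n, Texture.sizeof); // reference-sized slots
    GC.addRange(p, n * Texture.sizeof);
    return p[0 .. n];
}

void freeTable(Texture[] table)
{
    GC.removeRange(table.ptr);
    free(table.ptr);
}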




- I like how D doesn't totally ignore safety as C does; in D the default way is 
sometimes the safer one, and the unsafe way is used only where you ask for it. 
I'd like to see more safeties added to D, like optional run-time and 
compile-time integral overflow tests, some pointer safety, better template 
error messages (template constraints help some in this regard), stack traces, 
fewer compiler bugs, safer casts (in C# you need an explicit cast to convert 
double -> float), a safer printf, and some optional annotations inspired by 
Plint (a lint program) to give more semantics to the compiler, which can be 
used both to speed up code and to avoid bugs. There's a lot that can be done 
in this regard. And release-mode performance can usually be kept unchanged.

Stack traces are a feature for the runtime; I made one for mine, which shows a 
dialog window with the stack trace, current register values, loaded modules 
and whatnot. It took me almost a week to piece together my CodeView reader, 
and I still need a DWARF reader for Linux. I'm gonna try and implement it in 
druntime and submit the patch to bugzilla.
Tango's runtime already does stack tracing on Windows and *NIX, however 
its CV parser is subject to some licensing issues :( Perhaps you could 
release yours under some liberal license so it can be plugged there? :)



Sure, I wouldn't mind at all; I'm not into licenses myself, so I might just 
release it into the public domain. I'll try and get a standalone package ready 
and post it somewhere, I just don't know where yet :x
Sweet :D As for a place, there are plenty of options, e.g. 
http://dsource.org/projects/scrapple/ or a separate dsource project.


I thought of that, but I don't feel like opening a project for just a few 
random code snippets or standalone classes. I think I'll just post it in this 
forum and let interested people grab it for now.


WORKSFORME :)


--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Jeremie Pelletier
Tom S Wrote:

 Jeremie Pelletier wrote:
  Tom S Wrote:
  
  Jeremie Pelletier wrote:
  I myself allocate all my meshes and textures directly on the GC and I'm 
  pretty sure it's faster than C's malloc and much safer.
  Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
  the opposite :P Plus, I could use a specialized malloc implementation, 
  like TLSF.
  
  The D GC is already specialized, and given that it's used quite a lot in D, 
  there's a good chance it's already sitting in the CPU cache, with its heap 
  already having an available memory block waiting on a freelist or, if the 
  alloc is more than 0x1000 bytes, the pages available in a pool. You'd need 
  to use malloc quite a lot to get the same optimal performance, and mixing 
  the two would affect the performance of both.
 
 It might be specialized for _something_, but it definitely isn't 
 real-time systems. I'd say with my use cases there's a very poor chance 
 the GC is sitting in the CPU cache since most of the time my memory is 
 preallocated and managed by specialized structures and/or malloc. I've 
 found that using the GC only for the hard-to-manually-manage objects 
 works best. The rest is handled by malloc and the GC has a very shallow 
 vision of the world thus its collection runs are very fast. Of course 
 there's a drawback that both the GC and malloc will have some pages 
 cached, wasting memory, but I don't let the GC touch too much so it 
 should be minimal. YMMV of course - all depends on the memory allocation 
 patterns of the application.

I understand your points for using a separate memory manager, and I agree with 
you that having fewer active allocations makes for faster sweeps, no matter 
how few of them are scanned for pointers. However, I just had an idea on how 
to implement generational collection on a non-moving GC which should solve 
your issues (and, well, mine too) with the collector not being fast enough. I 
need to do some hacking on my custom GC first, but I believe it could give yet 
another performance boost. I'll add my memory manager to my list of code 
modules to make public :)


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-13 Thread Tom S

Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

Tom S Wrote:


Jeremie Pelletier wrote:

I myself allocate all my meshes and textures directly on the GC and I'm pretty 
sure its faster than C's malloc and much safer.
Hm, why would it be faster with the GC than malloc? I'm pretty sure it's 
the opposite :P Plus, I could use a specialized malloc implementation, 
like TLSF.

The D GC is already specialized, and given that it's used quite a lot in D, 
there's a good chance it's already sitting in the CPU cache, with its heap 
already having an available memory block waiting on a freelist or, if the 
alloc is more than 0x1000 bytes, the pages available in a pool. You'd need to 
use malloc quite a lot to get the same optimal performance, and mixing the two 
would affect the performance of both.
It might be specialized for _something_, but it definitely isn't 
real-time systems. I'd say with my use cases there's a very poor chance 
the GC is sitting in the CPU cache since most of the time my memory is 
preallocated and managed by specialized structures and/or malloc. I've 
found that using the GC only for the hard-to-manually-manage objects 
works best. The rest is handled by malloc and the GC has a very shallow 
vision of the world thus its collection runs are very fast. Of course 
there's a drawback that both the GC and malloc will have some pages 
cached, wasting memory, but I don't let the GC touch too much so it 
should be minimal. YMMV of course - all depends on the memory allocation 
patterns of the application.


I understand your points for using a separate memory manager, and I agree with 
you that having fewer active allocations makes for faster sweeps, no matter 
how few of them are scanned for pointers. However, I just had an idea on how 
to implement generational collection on a non-moving GC which should solve 
your issues (and, well, mine too) with the collector not being fast enough. I 
need to do some hacking on my custom GC first, but I believe it could give yet 
another performance boost. I'll add my memory manager to my list of code 
modules to make public :)


Sounds great, I can't wait! :D


--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-12 Thread language_fan
Fri, 11 Sep 2009 16:16:07 -0400, Justin Johansson thusly wrote:

  Another point related to maintenance costs is the choice of language.
  Many smaller companies choose easier languages like PHP, Visual Basic,
  or Python for their applications because they outsource library writing,
  and it is much cheaper and easier to hire a new workforce for business
  logic when the existing codebase does not use any complex idioms or
  language constructs. Some Indian schoolboy can write e.g. PHP pages for
  $1-2/h; a hard-core C++/Haskell professional easily costs $500/h.
 
 Haven't heard of many $500/h gigs.
 
 Andrei
 
 I have.  Haven't you ever been to see a lawyer :-)
 
 Seriously, the OP must have meant $500/day.

Unfortunately no. I live in the ECU area and even my consulting costs are 
on average about 150..200 eur/h + travel expenses. A normal senior-level 
developer in any field of computer engineering gets $500/day.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-12 Thread language_fan
Fri, 11 Sep 2009 22:41:32 +, BCS thusly wrote:

 Hello language_fan,
 
 Fri, 11 Sep 2009 16:33:56 +, BCS thusly wrote:
 
 First, I have zero interest in game development so that's not an issue
 for me.
 
 Game development is one of the largest users of systems programming
 languages.
 
 I would mandate the 10-25% test no matter what language is being used.
 
 The bulk of programming is done for Finance, Insurance and Real Estate
 (and is done in COBOL /yuck). The most common programs out there are
 OSs and MS Office. As I said, I don't care about games.

I was talking about systems programming languages like C or D. From 
Wikipedia:

The term system programming language is also (and perhaps more widely) 
used to mean a language for system programming: that is, a language 
designed for writing system software as distinct from application 
software. In contrast with application languages, such system programming 
languages typically offer more direct access to the physical hardware of 
the machine: an archetypical system programming language in this sense 
was BCPL. The distinction between languages for system programming and 
applications programming became blurred with widespread popularity of C.

The various application domains you listed actually often do not use 
systems programming languages, or at least the majority of their code does 
not. Things like database engines, drivers, operating systems, firmware, 
virtual machines (and games) on the other hand have no choice. For 
instance, the code I have seen in the finance industry used Java (J2EE), 
Awk, Perl, and JavaScript.

What is a bit confusing is that you mentioned operating systems and MS 
Office. 99.9% of companies worldwide do not develop any code, even as 
plugins, for those. For example, MS Office is a native executable only for 
business reasons. There is nothing preventing them from providing it as 
an applet or web service (like Google does). Office suites are in no way 
performance-limited these days. In fact I think parts of the competitor 
OpenOffice.org have been written in Java.

  
 OK so the lead knows that they can make things x times faster. Well
 then the demo on the 10-25th percentile machine must not be x times
 slower than what you need on ship day. Exactly the same as would be
 true if the demo were done on a 50th or 75th percentile machine.
 
 Well basically you could do that. Usually it does not work that way.
 The idea is to prioritize the features and remove the worst ones.
 
 well that's also a way to make it run faster.
 
 It
 cannot be known beforehand which features are unnecessary, and there is
 a hard limit on how much can be removed. So either you can remove say
 30-50% of features
 
 Clearly you can't cut core features, but you can make some eye candy
 features go away when there isn't enough power to run them.

Making business decisions is not that easy, especially if you have no 
idea of the application domain. There are several stakeholders and 
various contracts involved.

 
 or do a complete redesign.
 
 If a different design is practical and would be faster, you should have
 used it in the first place, or should be planning on doing it at some
 point anyway (I have never seen a non-trivial program that was fast
 enough that I didn't wish it was faster).

A large share of software projects worldwide fail. Redesigning, for instance, 
a single iteration is not that bad. You seem to favor the top-down 
waterfall model. Unfortunately the waterfall model usually fails. If you 
had studied software engineering lately, you would know that.

 But if you end up
 using only 50% of the potential resources of the platform, your game
 will usually suck (if the audience is technology oriented as it usually
 is).
 
 This is the classic "fast, cheap, or well done: pick two". For anything
 that will ship, I'll always pick "well done".
 
 That is ok if you are a hobby programmer, but in the real world, e.g. in
 the game industry, the contracts pretty much dictate the schedules, and if
 you spend too much time on the project, the producer will not offer any
 extra money. So if $1000..$1500/month is ok for you, then fine.
 
 
 I will grant that games can legitimately require top-of-the-line
 hardware (scientific programs and some things like Photoshop can also),
 but most anything that runs on a desktop should be written so that
 people can run it with the hardware they have now, rather than having to
 buy new hardware

Nowadays, as piracy is hindering PC sales quite a lot, the focus is 
on console, mobile, and online games. The hardware specifications do not 
change that often, but it is still a bit hard to foretell what kind of 
stuff works. If the producer decides he wants a split-screen game mode 2 
months before the deadline, it is not clear at all whether the final frame 
rate will drop below an acceptable level in some parts of the game.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Don

dsimcha wrote:

== Quote from Nick Sabalausky (a...@a.a)'s article

In general though, I find the "programmer time is more expensive than
hardware" line to largely be a cop-out.


Fair enough, but can you elaborate on this?  Of course hardware is getting 
cheaper relative to programming time.  This is obvious to anyone who doesn't 
live under a rock.  My previous post was pointing out how this is relevant 
in case that was less obvious.


Probably because you need to consider maintenance time. Poor quality 
ends up costing you in the long term. Good quality code gets reused.


OTOH the age where performance matters may be coming to an end. I hope 
not, because I always considered optimization to be one of the most 
interesting things about programming. But it's already becoming a niche 
market.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread language_fan
Thu, 10 Sep 2009 20:25:04 -0400, Justin Johansson thusly wrote:

 2. Scala cannot make up her mind if she's a scripting language or a
 serious language.  Optional semicolons at the end of statements are
 really frustrating for code style consistency.  Worst is when you
 sometimes need them and you left them out everywhere else for code style
 consistency.  (JavaScript has this problem too, and JS guru Douglas
 Crockford recommends semicolons always be used in that language despite
 their being optional when statements are clearly separated by newlines.)

Ok, so you did not like optional semicolons.

 
 3. Half-baked embeddable XML support in the language looks like she
 borrowed from ECMAScript's E4X.

Ok, borrowing features is bad.

 
 4. Too many different ways of doing things.  All very interesting and no
 doubt very clever but she needs to shave her hairy legs with Occam's
 razor before she starts to look like the sister of Frankenstein's
 monster**.

Examples? I can tell you that D has too many looping constructs: goto, 
for, foreach, foreach_reverse, do-while, while, recursive functions, 
recursive templates.

 4. Way too much of an academic approach to language design; appears to
 be a grand experiment to see how many academic papers can be derived
 from it.

Ok, academic research is bad.

 5. Newcomers to the language will find its type system concepts
 overwhelming - co-variance and contra-variance etc.  (don't know how D2
 will address this better though). Yes, these issues are important for OO
 libraries, but I feel there must be a more practical way out of the
 language complexity.  Personally I always kept away from the hairy and
 scary bits of C++; you don't need 'em in a practical language.

I know many professional Java/C# coders who know the concepts of 
variance. The concepts are not new... have you ever written code in Java 
with (? extends Foo), etc.? Java is a simple language, much simpler than D, 
and even in Java land you need to care about these. I have also heard C++ 
developers pondering these issues. Giving them proper names comes 
from academia, though.

 I've heard Scala's argument that all the complexity is hidden in the
 libraries, so there's no need to worry about it.  Unfortunately I don't
 believe her.  I learn a lot about a language by studying the library code
 and expect it to be as easy to read and understand as mainline code.

It is. The core language does not have e.g. arrays or AAs. Library code 
is known to be much more complex than ordinary application code.

I would not recommend studying a new language by reading the sources of 
libraries. It is like trying to learn quicksort by reading the sources 
of the JVM, ffmpeg or the Linux kernel. For instance there is the book 
'Programming in Scala', which is a much better starting point, as the 
language also has some theoretical prerequisites. Reading the library 
code gets easier after reading the book.

 5. Not her fault (i.e. of the language), but after six months of
 courting Scala with the Eclipse plugin, suffering IDE crash after crash
 and lost code I just could not bring myself to suffering her any longer.

When Descent was new, it also sucked. It used to hang badly when it 
parsed partial code, like when one did not yet have a closing bracket for 
arrays. The Netbeans plugin for Scala is more stable.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread language_fan
Fri, 11 Sep 2009 09:34:54 +0200, Don thusly wrote:

 dsimcha wrote:
 == Quote from Nick Sabalausky (a...@a.a)'s article
  In general though, I find the "programmer time is more expensive than
  hardware" line to largely be a cop-out.
 
 Fair enough, but can you elaborate on this?  Of course hardware is
 getting cheaper relative to programming time.  This is obvious to
 anyone who doesn't live under a rock.  My previous post was pointing
 out how this is relevant in case that was less obvious.
 
 Probably because you need to consider maintenance time. Poor quality
 ends up costing you in the long term. Good quality code gets reused.

Another point related to maintenance costs is the choice of language. 
Many smaller companies choose easier languages like PHP, Visual Basic, 
or Python for their applications because they outsource library writing, 
and it is much cheaper and easier to hire a new workforce for business 
logic when the existing codebase does not use any complex idioms or 
language constructs. Some Indian schoolboy can write e.g. PHP pages for 
$1-2/h; a hard-core C++/Haskell professional easily costs $500/h.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread bearophile
Justin Johansson:

I'm somewhat reluctant to discuss Scala too much here as this is a D forum,

Discussing other languages is allowed here, especially if they show good things 
or bad things that may help D development.


 2. Scala cannot make up her mind if she's a scripting language or serious 
 language.

Scala's purpose is to try to be good for both small and large programs; that's 
the meaning of the name Scala, which stands for "scalable language". I'd like 
to see D2 become a little fitter for small programs.


Optional semicolons at the end of statements are really frustrating for code 
style consistency.

Semicolons are noise; they slow down programming a little. Better to design 
future languages where newlines are enough. See also the Delight language, 
which is better than normal D.

 
 Read Cedric's blog June 2008 for example
 http://beust.com/weblog/archives/000490.html

The comments to that blog post are more intelligent and useful than the main 
post. See for example the comment by Amit Patel.


http://www.unlimitednovelty.com/2009/04/why-i-dont-like-scala.html
Furthermore, Scala's object model is not only familiar, it's extremely 
well-studied and well-optimized. The JVM provides immense capability to inline 
method calls, which means calls which span multiple objects can be condensed 
down to a single function call. This is because the Smalltalk-inspired 
illusion that these objects are receiving and sending messages is completely 
suspended, and objects are treated in C++-style as mere chunks of state, thus 
an inlined method call can act on many of them at once as if they were simple 
chunks of state. In Reia, all objects are concurrent, share no state, and can 
only communicate with messages. Inlining calls across objects is thoroughly 
impossible since sending messages in Reia is not some theoretical construct, 
it's what really happens and cannot simply be abstracted away into a function 
call which mutates the state of multiple objects.

D compilers are generally not even able to inline most virtual calls, so Reia 
doesn't look like a fitting design for a language that must be as fast as D. 
Reia's design looks good for a higher-level language. Scala is designed to be 
faster than Reia. Maybe someday people will find ways to efficiently compile 
Reia too; similar things have happened several times.
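
(For context, the standard D lever here is 'final', which lets a compiler 
devirtualize and potentially inline; whether a given compiler actually does 
so is, of course, up to the compiler. A minimal sketch:)

class Shape
{
    double area() { return 0; } // virtual: normally dispatched via the vtable
}

final class Square : Shape
{
    double side = 1;
    override double area() { return side * side; }
}

void main()
{
    auto s = new Square;
    // The static type is a final class, so the compiler is free to call
    // (and potentially inline) Square.area directly, no vtable needed.
    auto a = s.area();
}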


http://clojure.org/

Clojure is not OOP, and it lacks several of the modern functional features 
(like a powerful type system). And it's slow. I don't like it a lot, though 
its management of immutability is cute.
Something about this topic:
http://blog.higher-order.net/2009/02/01/understanding-clojures-persistentvector-implementation/
http://blog.higher-order.net/2009/09/08/understanding-clojures-persistenthashmap-deftwice/

Bye,
bearophile


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Justin Johansson
 Ok, so you did not like optional semicolons.
I don't like needless options and that was one example that I picked.  (Hope D 
is listening too.)

  3. Half-baked embeddable XML support in the language looks like she
  borrowed from ECMAScript's E4X.
 Ok, borrowing features is bad.
Not at all. E4X is not a model that I would borrow from (and I acknowledge 
Scala probably did its own thing independent of E4X anyway).  My preference 
would be to leverage the W3C XML stack to better advantage if I were to meld 
XML together with a new PL.  There is too much impedance mismatch between 
general-purpose PLs and XML processors, whether they be parsers, schema 
validators, XSLT or XQuery processors.  A heck of a lot of painstaking work 
has gone into creating the XPath 2.0 Data Model (XDM).  It's a solid model 
which underpins XSLT 2 and XQuery and leverages XSchema Data Types to 
excellent effect. It's the work of some fine academics and domain experts, 
well worth borrowing from and not worth reinventing in a non-standards-
compliant way.  So if you are going to support XML as a first-class citizen 
in a PL, it makes good sense to base the XML part of your data model/type 
system on XDM.

  4. Too many different ways of doing things.  All very interesting and no
 Examples? I can tell you that D has too many looping constructs: goto, 
 for, foreach, foreach_reverse, do-while, while, recursive functions, 
It's how I felt at the time .. 6-9 months ago .. memory has a tendency to fade 
when you decide to move on.  I'm far from an expert at D yet, but, I agree, 
foreach_reverse appears somewhat alarming.

  4. Way too much of an academic approach to language design; appears to
 Ok, academic research is bad.
Again, not at all.  Academics do what they are good at, which is research. 
Engineers and technologists apply that research to make consumer-ready products 
from which the public benefits.  There's no doubt that the research behind 
Scala will be to the benefit of the evolution of programming languages in 
general.

Further, I'd much rather see universities teach Scala in CS courses in 
preference to any of C++, Java or D.  The latter are all something one can 
learn on the job if they have a good grounding in the fundamentals.  Since 
Scala's designer studied under Niklaus Wirth, arguably Scala is the successor 
of Modula-2 and Pascal (two languages developed by Wirth), and it wouldn't be 
any surprise if Scala becomes the next teaching language.

My original point was that Scala being in my opinion too academic, was not for 
my developer's taste; not that anything academic is bad.

 'Programming in Scala', which is much better a starting point as the 
Yes I read it.

  5. Not her fault (i.e. of the language), but after six months of
  courting Scala with the Eclipse plugin, suffering IDE crash after crash
  and lost code I just could not bring myself to suffering her any longer.
 
 When Descent was new, it also sucked. It used to hang badly when it 
 parsed partial code, like when one did not yet have a closing bracket for 
 arrays. The Netbeans plugin for Scala is more stable.
Fair comment.  Often it's the luck of timing when you decide to dive into 
something new.  If people want to try out Scala I'd recommend the Netbeans IDE 
also if not otherwise tied to Eclipse.  My interest at the time was with 
Eclipse RCP so admittedly using a different IDE wasn't something I was too keen 
on.

Anyway, thanks for your comments on my comments .. all in the spirit of good 
debate.

Cheers
Justin



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Justin Johansson
bearophile Wrote:
 Justin Johansson:
 I'm somewhat reluctant to discuss Scala too much here as this is a D forum,
 Discussing other languages is allowed here, especially if they show good 
 things or bad things that may help D development.

Okay.  Thanks for mentioning that, bearophile, and your other reply comments 
noted.

Since I'm still feeling out D around the edges, would you mind saying what 
salient things there are about D that presumably attract you to the language? 
It just helps to know why others are here as one ventures into new territory.

Cheers
Justin Johansson





Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread #ponce
 Nowadays when everyone soon has 12-core CPUs in front of them, especially 
 x86-64 ones, managing each register and memory module (cache or main 
 memory) manually is a major pain in the ass. Why do you want to do that 
 in the first place? For greater speed? 

Yes. I'm not talking about managing each register and memory module manually; 
I'm talking about the compiler producing the most efficient code possible. VM 
languages simply do more stuff. The best JIT comes with the Java server VM and 
reaches approx. 90% of C++'s speed, but it's not the one bundled with most 
JREs.


I worked a year for a software synthesizer company. It's an extremely CPU-bound 
domain: imagine you have to generate dozens of voices with 2% CPU usage, at 
44100 Hz. Really, NO overhead is acceptable. 

It is not a matter of an algorithm being faster than others or shit. You have 
one algorithm, there is no other way to do it and you have to do it fast.

Also, audio plugins are threaded by the plugin host, so multi-core usage is not 
your problem. Your main issue is to make the best use of one core.

Race conditions are not such a problem for these apps either: you usually have 
an audio thread and a UI thread, and you synchronize them through spinlocks.
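
(For what it's worth, such a spinlock is tiny in D with core.atomic; a 
sketch, not tuned for real audio work:)

import core.atomic : cas, atomicStore;

struct SpinLock
{
    private shared bool locked;

    void lock()
    {
        // spin until we atomically flip locked from false to true
        while (!cas(&locked, false, true)) {}
    }

    void unlock()
    {
        atomicStore(locked, false); // release
    }
}

The audio thread holds it only for a handful of instructions at a time, so 
the UI thread's busy-wait stays short.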

What I mean is that in this domain we are really _stuck_ with C++. 
And you have to write C++ of the most horrible kind: portable and 
efficient C++. At some point, the amount of ugly things you have to do 
explodes combinatorially (think portable memory alignment, portable 
assembly code...). 

Also, C++ is consistently destroyed by some vendors, who encourage 
compiler-specific extensions over the standard.

 The problem is, your program 
 usually has tons of memory leaks, potential race conditions and 
 deadlocks, and states where is segfaults. Even if you develop for free, I 
 do not want to use your buggy pos. YMMV

Fortunately, you can't use it now.
I use D mainly for pet projects but have used it for tooling at my job (and 
the speed, productivity and C-style syntax were a major selling point). 

I think one could make audio plugins with D, with far less pain. It could even 
be the best language for the job. You won't soon see a commercially available 
audio plugin made in a VM language, because VM languages are not pragmatic 
enough (yes, some have tried with Java and ended up calling native code).

 So you are part of the efficiency is priority #1 subgroup, after all. 
 There is nothing wrong with that, I just happened to guess that.

Yes, I came to D especially for the efficiency. What's the problem?



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread BCS

Hello dsimcha,


== Quote from Nick Sabalausky (a...@a.a)'s article


In general though, I find the "programmer time is more expensive than
hardware" line to largely be a cop-out.


Fair enough, but can you elaborate on this?  Of course hardware is
getting cheaper relative to programming time.  This is obvious to
anyone who doesn't live under a rock.  My previous post was pointing
out how this is relevant in case that was less obvious.



Always be VERY careful when you compare costs that are paid by different people.

For programming, the better the product does, the more irrelevant programmer 
time is. To boot, programmer time is a more or less fixed cost (you can 
stop paying it any time you want), and slow code is an open-ended one (your 
customers will be paying for it until the last person quits using your program). 
I'd even go so far as to venture that for any reasonably successful program, 
quite a lot of optimization time can be a net gain for the economy at large. 

If I ever am in a position to do it, I will mandate that executive demos 
will always be first done using a 10-25th percentile machine from our current 
target market. Only once it is shown to run reasonably on that, will the 
team be allowed to show what it can do on better hardware. 





Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread language_fan
Fri, 11 Sep 2009 16:10:28 +, BCS thusly wrote:

 If I ever am in a position to do it, I will mandate that executive demos
 will always be first done using a 10-25th percentile machine from our
 current target market. Only once it is shown to run reasonably on that,
 will the team be allowed to show what it can do on better hardware.

I wonder how you can do that. E.g. if you are in the console game 
industry, the platforms and their capabilities are well known. No 
developer will want to write code for something prev-gen. The lead coders 
and their superiors also have experience in optimizing the pre-releases 
and know how much can be improved each time. Agile development methods 
are used, so the final result does not really come as a surprise. Often 
the timeframe of a final release is as low as 6 months, with one-month 
iterations. There simply is no time to hand-optimize every possible bit. 
Many have switched to C# from Obj-C and C++, because the legacy languages 
just suck when they are in a hurry.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread BCS

Hello language_fan,


Fri, 11 Sep 2009 16:10:28 +, BCS thusly wrote:


If I ever am in a position to do it, I will mandate that executive
demos will always be first done using a 10-25th percentile machine
from our current target market. Only once it is shown to run
reasonably on that, will the team be allowed to show what it can do
on better hardware.


I wonder how you can do that. E.g. if you are in the console game
industry, the platforms and their capabilities are well known. No
developer will want to write code for something prev-gen.


First, I have zero interest in game development so that's not an issue for 
me.



The lead
coders and their superiors also have experience in optimizing the
pre-releases and know how much can be improved each time. Agile
development methods are used so the final result does not really come
as a surprise.


OK so the lead knows that they can make things x times faster. Well then 
the demo on the 10-25th percentile machine must not be x times slower than 
what you need on ship day. Exactly the same as would be true if the demo 
were done on a 50th or 75th percentile machine.



Often the timeframe of a final release is as low as 6
months, with one-month iterations. There simply is no time to
hand-optimize every possible bit. Many have switched to C# from Obj-C and
C++, because the legacy languages just suck when they are in a hurry.


As above, this is exactly the same regardless of what the demo is done on.

This is the classic "fast, cheap, or well done: pick two". For anything that 
will ship, I'll always pick "well done".





Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread language_fan
Fri, 11 Sep 2009 16:33:56 +, BCS thusly wrote:

 First, I have zero interest in game development so that's not an issue
 for me.

Game development is one of the largest users of systems programming 
languages.

 OK so the lead knows that they can make things x times faster. Well then
 the demo on the 10-25th percentile machine must not be x times slower
 than what you need on ship day. Exactly the same as would be true if the
 demo were done on a 50th or 75th percentile machine.

Well basically you could do that. Usually it does not work that way. The 
idea is to prioritize the features and remove the worst ones. It cannot 
be known beforehand which features are unnecessary, and there is a hard 
limit on how much can be removed. So either you can remove say 30-50% of 
features or do a complete redesign. But if you end up using only 50% of 
the potential resources of the platform, your game will usually suck (if 
the audience is technology oriented as it usually is).

 
 This is the classic "fast, cheap, or well done: pick two". For anything
 that will ship, I'll always pick "well done".

That is ok if you are a hobby programmer, but in the real world, e.g. in the 
game industry, the contracts pretty much dictate the schedules, and if you 
spend too much time on the project, the producer will not offer any extra 
money. So if $1000..$1500/month is ok for you, then fine.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Andrei Alexandrescu

language_fan wrote:

Fri, 11 Sep 2009 09:34:54 +0200, Don thusly wrote:


dsimcha wrote:

== Quote from Nick Sabalausky (a...@a.a)'s article

In general though, I find the "programmer time is more expensive than
hardware" line to largely be a cop-out.

Fair enough, but can you elaborate on this?  Of course hardware is
getting cheaper relative to programming time.  This is obvious to
anyone who doesn't live under a rock.  My previous post was pointing
out how this is relevant in case that was less obvious.

Probably because you need to consider maintenance time. Poor quality
ends up costing you in the long term. Good quality code gets reused.


Another point related to maintenance costs is the choice of language. 
Many smaller companies choose easier languages like PHP, Visual Basic, 
or Python for their applications because they outsource library writing, 
and it is much cheaper and easier to hire a new workforce for business 
logic when the existing codebase does not use any complex idioms or 
language constructs. Some Indian schoolboy can write e.g. PHP pages for 
$1-2/h; a hard-core C++/Haskell professional easily costs $500/h.


Haven't heard of many $500/h gigs.

Andrei


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Nick Sabalausky
language_fan f...@bar.com.invalid wrote in message 
news:h8dth4$28h...@digitalmars.com...
 Fri, 11 Sep 2009 16:10:28 +, BCS thusly wrote:

 If I ever am in a position to do it, I will mandate that executive demos
 will always be first done using a 10-25th percentile machine from our
 current target market. Only once it is shown to run reasonably on that,
 will the team be allowed to show what it can do on better hardware.

 I wonder how you can do that. E.g. if you are on the console game
 industry, the platforms and their capabilities are well known. No
 developer will want to write code for something prev gen.

It's a little more complicated than that. The prev gen are often just plain 
different platforms, so it's more than just dealing with somewhat lower 
specs. And even if the developers want to, the publisher suits aren't 
necessarily going to care much about anything that isn't the latest, 
greatest and most 'buzz'ed platform. And with the possible exception of 
Sony, the console manufacturers themselves like to abandon their old systems 
to help push the new.

 The lead coders
 and their superiors also have experience in optimizing the pre-releases
 and know how much can be improved each time. Agile development methods
 are used so the final result does not really come as a surprise. Often
 the timeframe of a final release is as low as 6 months, with one month
 iterations. There simply is not time to hand optimize every possible bit.

Console game development does use a lot of optimization, but as far as 
actual code that needs to be optimized, that's largely located within the 
middleware. So the development process of the actual *game* studio isn't 
quite as relevant here as that of the middleware developers.

 Many have switched to c# from obj-c and c++, because the legacy languages
 just suck when they are in a hurry.

I have a hard time believing anything other than Xbox/PC exclusives are 
written in C#. And even then it would just be because, again, the actual 
game studio programmers are usually just doing game logic these days, and 
gluing middleware together, and that's not where the vast majority of all 
the bytes and cycles go.




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Nick Sabalausky
Justin Johansson proc...@adam-dott-com.au wrote in message 
news:h8ddv4$1i6...@digitalmars.com...

  4. Too many different ways of doing things.  All very interesting and 
  no
 Examples? I can tell you that D has too many looping constructs: goto,
 for, foreach, foreach_reverse, do-while, while, recursive functions,
  It's how I felt at the time .. 6-9 months ago .. memory has a tendency to 
  fade when you decide to move on.  I'm far from an expert at D yet, but, 
  I agree, foreach_reverse appears somewhat alarming.


Fortunately, that opinion seems to be shared by even the top D people, 
AIUI. 'foreach_reverse' dates back to long before ranges. With D2, 
using 'foreach' over a reversible range is generally the preferred way to 
go. I wouldn't be surprised to see 'foreach_reverse' disappear at some 
point. 
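
(For the curious, the range version looks like this, using std.range.retro:)

import std.range : retro;
import std.stdio : write, writeln;

void main()
{
    auto a = [1, 2, 3, 4];

    foreach_reverse (x; a) // the legacy construct
        write(x, ' ');
    writeln();

    foreach (x; a.retro)   // the D2 range way; same output
        write(x, ' ');
    writeln();
}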




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Nick Sabalausky
Justin Johansson proc...@adam-dott-com.au wrote in message 
news:h8dh5s$1nr...@digitalmars.com...
 bearophile Wrote:
 Justin Johansson:
 I'm somewhat reluctant to discuss Scala too much here as this is a D 
 forum,
 Discussing other languages is allowed here, especially if they show good 
 things or bad things that may help D development.

 Okay.  Thanks for mentioning that, bearophile, and your other reply 
 comments noted.

 Since I'm still feeling out D around the edges, would you mind saying what 
 salient things there are about D that presumably attract you to the 
 language?  It just helps to know why others are here as one ventures into 
 new territory.


In my case, a big part of it was because, as far as I can tell, D is the 
only real modern systems language besides C++, and after years of using C++ 
as my primary language, I've grown to hate it and see it as a pathetic 
lumbering undead corpse that keeps getting thoughtlessly patched up and 
reanimated and just won't die. And beyond that, D's one of the very very few 
modern languages out there that doesn't try to cram *anything* down my 
throat, whether it be a VM or a pedantic development style or philosophy. 




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Justin Johansson
  Another point related to maintenance costs is the choice of language. 
  Many smaller companies choose easier languages like php, visual basic, 
  or python for their applications because they outsource library writing 
  and it is much cheaper and easier to hire new workforce for business 
  logic when the existing codebase does not use any complex idioms or 
  language constructs. Some indian schoolboy can write e.g. php pages $1-2/
  h, a hard-core C++/Haskell professional easily costs $500/h.
 
 Haven't heard of many $500/h gigs.
 
 Andrei

I have.  Haven't you ever been to see a lawyer :-)

Seriously, the OP must have meant $500/day.



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Nick Sabalausky
bearophile bearophileh...@lycos.com wrote in message 
news:h8d7tu$179...@digitalmars.com...

 Semicolons are noise, they slow down programming a little.


That's *very* programmer-dependent. It originally took me all of about 
a week to get used to semicolons after growing up on BASIC (and even then it 
was a very, very minor time sink), and now it takes all of about a split 
second to press that key. But any time I use a language that doesn't allow 
semicolon line endings, I keep sticking them in without even thinking about 
it. Then the compiler complains, and I have to go back and fix it, and that 
slows down programming more than just instinctively hitting a key.


 Read Cedric's blog June 2008 for example
 http://beust.com/weblog/archives/000490.html

 The comments to that blog post are more intelligent and useful than the 
 main post. See for example the comment by Amit Patel.


Thanks for pointing that out. That's a *very* good comment. And interesting 
too, because he talks about using switch for parsers (although actually, so 
does the original article), and just the other day I was writing an 
implementation of Haxe's preprocessor. I ended up with code like this:

switch (directive)
{
    case "#if":
        // ...
    case "#elseif":
        // ...
    case "#else":
        // ...
    case "#end":
        // ...
    case "#error":
        // ...
    default:
        // ...
}

Works fine. When I originally read that article, although I understood his 
point and agree there are (pardon the puns) many cases for which switch is 
the wrong choice, I was thinking "What, am I *really* supposed to turn those 
strings into polymorphic objects? What a pedantic waste!"




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread bearophile
Justin Johansson:

would you mind saying what salient things there are about D that presumably 
attract you to the language?  It just helps to know why others are here as one 
ventures into new territory.

That's not an easy question, and this is a personal answer; other people will 
like other sides of D. I like D1/D2 for:
- I don't think of it as a proprietary language, like C#.
- Sometimes I want the freedom to use memory as I like, with structs and even 
unions. If you have large datasets you find that using more than 20 bytes for 
a number, as in Python, doesn't help. Values also reduce indirection, which 
speeds things up. This allows more efficient usage of memory, and that helps 
increase cache locality, which increases performance (a tiny sketch follows 
this list). Unfortunately GC-managed D pointers can't be tagged, so I have to 
use memory from the C heap for them. And sometimes you need pointers. That's 
why I'd like D to have more/better ways to increase safety when using 
pointers (like using memory regions when not in release mode, etc.).
- I like this newsgroup: I can talk to people, and they sometimes answer my 
numerous questions. I am learning a lot. Sometimes I receive no answers, but 
that's acceptable. By its nature this newsgroup attracts some strange people 
too.
- I often use Python, it's the language I like most, but for some purposes 
it's too slow. And I am tired of writing vectorized code in NumPy and the 
like. Cython's reference counting makes me sick, and ShedSkin, while nice, is 
a failed experiment. D feels like freedom, while sometimes using Python feels 
like programming with mittens to me.
- There are some things that I'd like to see in a language, and D being still 
in development and not controlled by an ivory-tower committee gives me the 
illusion of seeing some of my ideas realized. So far I haven't influenced the 
development of D much. On the other hand, if everyone can influence the 
language a lot, the result may be a patchwork. So some dynamic compromise has 
to be found every day.
- D looks a lot like C, yet in D I can write code several times faster than 
in C. Sometimes 5-10 times faster. This is an enormous difference.
- I am quite sensitive to syntax noise and boilerplate code. I like elegance 
and clarity in semantics, I hate corner cases, yet I want a language that's 
efficient, readable, and as little bug-prone as possible. C++ looks too 
complex for me. D1 is simple enough for me, and D2 is getting a bit too 
complex. I may appreciate the idea of a D 1.5 language that fixes some holes 
and limits of D1 while keeping the language simple enough (struct 
constructors, and a few other things; such things don't significantly 
increase the difficulty of using the language).
- I like how D doesn't totally ignore safety as C does; in D the default way 
is sometimes the safer one, and the unsafe way is used only where you ask for 
it. I'd like to see more safeties added to D, like optional run-time and 
compile-time integral overflow tests, some pointer safety, better template 
error messages (template constraints help some in this regard), stack traces, 
fewer compiler bugs, safer casts (in C# you need an explicit cast to convert 
double -> float), a safer printf, and some optional annotations inspired by 
Plint (a lint program) to give more semantics to the compiler, which can be 
used both to speed up code and to avoid bugs. There's a lot that can be done 
in this regard. And release-mode performance can usually be kept unchanged.
- I like how templates allow me to do some things that are doable only in less 
common languages like Lisp and Haskell, or in dynamic languages.
- To use D I don't need to fill out forms, receive emails, pay, install with 
an installer, or use an IDE. The compiler is free, you just need to download 
a zip, and recently the zip is well organized inside too. I uncompress the 
zip, set a path or two, and I am already able to use it.
- The LDC compiler on Linux produces binaries that are almost as efficient as 
C++'s, and sometimes more. Someday LDC will be available on Windows too. The 
LDC developers are good people, they fix things very quickly (often within 24 
hours), and they don't ignore user requests. For example, in LDC the == among 
associative arrays now works. The LDC developers are almost as serious as the 
LLVM devs, but no one gets paid for LDC (while the head of LLVM is paid by 
Apple).
- D contains many tricks and handy features that save a lot of time and make 
programming shorter and simpler. Some of these features are half-finished 
(some GC/GC-pointer semantics, the module system, the unittest system, 
contract programming, and several other things) and Walter doesn't seem 
willing to finish them soon, but having them partially unfinished is better 
than not having them.
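
(A tiny sketch of the memory-freedom point above: values and unions, no 
per-number heap object:)

struct Sample
{
    union { float f; int bits; } // one 4-byte slot, viewable two ways
    uint id;                     // packed right next to it
}

static assert(Sample.sizeof == 8); // vs. dozens of bytes per boxed number

void main()
{
    // one contiguous 8 MB block: no indirection, cache-friendly
    auto samples = new Sample[1_000_000];
}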

So in summary I like the freedom D gives me in using memory, and the freedom 
to program in the style I see as most fit for a program, its work-in-progress 
nature that gives the illusion of being able to influence it in good ways, 
its simple 

Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread BCS

Hello language_fan,


Fri, 11 Sep 2009 16:33:56 +, BCS thusly wrote:


First, I have zero interest in game development so that's not an
issue for me.


Game development is one of the largest users of systems programming
languages.


I would mandate the 10-25% test no matter what language is being used. 

The bulk of programming is done for Finance, Insurance and Real Estate (and 
is done in COBOL /yuck). The most common programs out there are OSs and 
MS Office. As I said, I don't care about games.



OK so the lead knows that they can make things x times faster. Well
then the demo on the 10-25th percentile machine must not be x times
slower than what you need on ship day. Exactly the same as would be
true if the demo were done on a 50th or 75th percentile machine.


Well basically you could do that. Usually it does not work that way.
The idea is to prioritize the features and remove the worst ones. 


well that's also a way to make it run faster.


It
cannot be known beforehand which features are unnecessary, and there
is a hard limit on how much can be removed. So either you can remove
say 30-50% of features


Clearly you can't cut core features, but you can make some eye candy features 
go away when there isn't enough power to run them.



or do a complete redesign.


If a different design is practical and would be faster, you should have used 
it in the first place, or should be planning on doing it at some point anyway 
(I have never seen a non-trivial program that was fast enough that I didn't 
wish it was faster).



But if you end up
using only 50% of the potential resources of the platform, your game
will usually suck (if the audience is technology oriented as it
usually is).


This is the classic "fast, cheap, or well done - pick two." For anything
that will ship, I'll always pick well done.


That is ok if you are a hobby programmer, but in the real world, e.g. in
the game industry, the contracts pretty much dictate the schedules, and
if you spend too much time on the project, the producer will not offer
any extra money. So if $1000..$1500 / month is ok for you, then fine.



I will grant that games can legitimately require top of the line hardware 
(scientific programs and some things like Photoshop can too), but most anything 
that runs on a desktop should be written so that people can run it with the 
hardware they have now, rather than having to buy new hardware.





Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread bearophile
Saves my time both when I program and when I run the program. Compared to this 
all other things are secondary.

Saving time is one of the most important qualities of a programming language. 
A language can save time in many ways:
- If it's easy to find with Google, and fast to download and install. If 
you have to pay for it, you waste some time. Sometimes installers help save 
time, but a zip file too can sometimes save time compared to an ugly installer. 
If you don't have to fill in forms to download it, that saves your time. Having 
an integrated editor, or an IDE+compiler, can save programming time. A good 
IDE can even turn a boring language like Java into a usable and quite useful 
one. Languages like C# are designed to almost require an IDE.
- Having good online documentation and a good help system can save a lot of time.
- A rich, easy to use and well debugged standard library can save a lot of 
time. Having a community of code (like the Python modules you can find online 
to do almost everything, written in a uniform style that's easy to read and 
understand) can save a very large amount of time, even months.
- If it's simple or similar to other languages, and if its semantics is clear, 
that saves time learning it, sometimes many months.
- A compact syntax saves a little programming time. A clear syntax improves 
readability, and this saves a lot of time when the program has to be debugged, 
modified, improved or just understood.
- A clear and high level semantics allows the programmer to think less about 
details, and this speeds up the invention or implementation of algorithms, and 
saves time (Python is among the best at this).
- If it helps avoid bugs or remove them, it can save a lot of programming time.
- Some built-in features of the language can save some programming time.
- If the compilation process is simple, this saves time (see the Bud tool). If 
the language is designed to be compiled quickly (or to not require compilation) 
this saves time.
- If programs run quickly, that saves some time too.

A good language has to try to save time in all those ways and more.

Bye,
bearophile


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Rainer Deyke
Nick Sabalausky wrote:
 That's *very* programmer-dependent. It originally took me all of about 
 a week to get used to semicolons after growing up on BASIC (and even then it 
 was a very, very minor time sink), and now it takes all of about a split 
 second to press that key. But any time I use a language that doesn't allow 
 semicolon line endings, I keep sticking them in without even thinking about 
 it. Then the compiler complains, and I have to go back and fix it, and that 
 slows down programming more than just instinctively hitting a key.

If you're going to judge features on the basis of habit, then the best
language is always the language you have been using for the longest time.

I'm not entirely happy with the way Scala handles the division between
statements - Scala's rules seem arbitrary and complex - but semicolons
*are* noise, no matter how habitually I use them and how much time I
waste removing them afterwards.

My preferred rule is this:  If two lines have the same indentation, they
are separate statements.  If the second line is indented further than
the first line, the second line is a continuation of the statement
started in the first line.  Surprisingly, even Python (which already has
significant indentation) doesn't use this simple and obvious rule.
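
For illustration, here is a minimal D sketch (mine, not Rainer's) of that rule 
as a line-joiner; the sample input uses a hypothetical indentation-based syntax:

import std.stdio;
import std.string : stripLeft;

// Leading whitespace of a line, measured in characters.
size_t indentOf(string line) { return line.length - line.stripLeft.length; }

// Rainer's rule: a line indented deeper than the line that opened the
// current statement continues that statement; otherwise it starts a new one.
string[] joinStatements(string[] lines)
{
    string[] stmts;
    foreach (line; lines)
    {
        if (stmts.length && indentOf(line) > indentOf(stmts[$ - 1]))
            stmts[$ - 1] ~= " " ~ line.stripLeft;   // continuation
        else
            stmts ~= line;                          // separate statement
    }
    return stmts;
}

void main()
{
    auto src = ["total = price", "    + tax", "print(total)"];
    foreach (s; joinStatements(src))
        writeln(s);   // "total = price + tax", then "print(total)"
}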


-- 
Rainer Deyke - rain...@eldwood.com


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-11 Thread Benji Smith

Rainer Deyke wrote:

I'm not entirely happy with the way Scala handles the division between
statements - Scala's rules seem arbitrary and complex - but semicolons
*are* noise, no matter how habitually I use them and how much time I
waste removing them afterwards.


I don't know anything about Scala, but I've been working on an 
ActionScript compiler recently (the language is based on ECMAScript, so 
it's very much like JavaScript in this respect) and the optional 
semicolon rules are completely maddening.


The ECMAScript spec basically says: virtual semicolons must be inserted 
at end-of-line whenever the non-insertion of semicolons would result in 
an erroneous parse.


So there are really only three ways to handle it, and all of them are 
insane:


1) Treat the newline character as a token (rather than as skippable 
whitespace) and include that token as an optional construct in every 
single production where it can legally occur. This results in hundreds 
of optional newline tokens throughout the grammar, and makes the whole 
thing a nightmare to read, but at least it still uses a one-pass CFG.


CLASS :=
  class
  NEWLINE?
  IDENTIFIER
  NEWLINE?
  {
  NEWLINE?
  (
    MEMBER
    NEWLINE?
  )*
  }

2) Use lexical lookahead, dispatched from the parser. The tokenizer 
determines whether to treat a newline as a statement terminator based on 
the current parse state (are we in the middle of a parenthesized 
expression?) and the upcoming tokens on the next line. This is nasty 
because the grammar becomes context-sensitive and conflates lexical 
analysis with parsing.


3) Whenever the parser encounters an error, have it back up to the 
beginning of the previous production and insert a virtual semicolon into 
the token stream. Then try reparsing. Since there might be multiple 
newlines contained in a single multiline expression, it might take 
arbitrarily many rewrite attempts before reaching a correct parse.
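
As a toy D sketch of that third strategy (mine, not the actual compiler's; for 
simplicity it reparses from scratch instead of backing up one production):

import std.array : insertInPlace;
import std.stdio : writeln;

// Toy grammar: a program is identifiers separated by ";". Returns the index
// of the first token that breaks the parse, or -1 on success.
ptrdiff_t tryParse(string[] toks)
{
    bool expectIdent = true;
    foreach (i, t; toks)
    {
        if (expectIdent && t != ";")  { expectIdent = false; continue; }
        if (!expectIdent && t == ";") { expectIdent = true;  continue; }
        return cast(ptrdiff_t) i;   // erroneous parse starts at this token
    }
    return -1;
}

void main()
{
    // stands for "a\nb\nc" with the newlines already dropped by the lexer
    string[] toks = ["a", "b", "c"];
    ptrdiff_t err;
    while ((err = tryParse(toks)) != -1)
        toks.insertInPlace(cast(size_t) err, ";");  // virtual semicolon, retry
    writeln(toks);   // ["a", ";", "b", ";", "c"]
}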


The thing about most compiler construction tools is that they don't 
allow interaction between the parser and the tokenizer (for context-guided 
tokenization), and they're not designed for backup-and-retry processing or 
the insertion of virtual tokens into the token stream.


Ugly stuff.

Anyhoo, I know this is waaay off topic. But I think any language 
designer including optional semicolons in their language desperately 
deserves a good swift punch in the teeth.


--benji


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread language_fan
Thu, 10 Sep 2009 03:21:10 -0400, Nick Sabalausky thusly wrote:


 I really really hope that my current excitement with D continues and
 that another 30-60 days down the track I don't end up becoming
 disillusioned with D as I did with Scala.


 As someone who's been meaning to take a look at Scala, I'm very curious:
 What did you dislike about it?

At least the problem with many old-school developers is that even though 
a new language is 10x more readable, 100x more flexible, 1000x safer and 
1x faster to develop with, if it's 0.1% slower than C++, they have no 
reason to use it. Well, it shouldn't be a surprise given that JVM *is* a 
VM.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread #ponce
 At least the problem with many old-school developers is that even though 
 a new language is 10x more readable, 100x more flexible, 1000x safer and 
 1x faster to develop with, if it's 0.1% slower than C++, they have no 
 reason to use it. Well, it shouldn't be a surprise given that JVM *is* a 
 VM.

So I'm a 22 yo oldschool developer.
One reason I hate VM languages is not really speed but that you have little 
control over what the CPU does, and very little control over memory and cache 
usage. Compound value types are also often lacking (especially in Java).

See Python's internal object sizes: 
http://www.codexon.com/posts/memory-size-of-python-objects
Everything takes 4x the memory it would take in D. Each field access seems to 
be a lookup into a hashmap. This is plain ugly.
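
A minimal D illustration of that last point about compound value types (my 
example, not #ponce's):

import std.stdio;

// A struct is a compound *value* type: its fields live inline at fixed
// offsets, with no object header and no per-field lookup.
struct Vec3 { float x, y, z; }

void main()
{
    writeln(Vec3.sizeof);   // 12: just the three floats
    Vec3[1000] vs;          // one contiguous 12000-byte block, cache-friendly
    writeln(vs.sizeof);     // 12000
    vs[0].x = 1.0f;         // field access is base address + offset, no hashmap
}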

Also, when some part of a program is slow in C / C++ / D, most of the time you 
have a way to speed it up. It may be painful but there is one.



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Nick Sabalausky
#ponce alil...@gmail.com wrote in message 
news:h8acpi$21u...@digitalmars.com...
 At least the problem with many old-school developers is that even though
 a new language is 10x more readable, 100x more flexible, 1000x safer and
 1x faster to develop with, if it's 0.1% slower than C++, they have no
 reason to use it. Well, it shouldn't be a surprise given that JVM *is* a
 VM.

 So I'm a 22 yo oldschool developer.
 One reason I hate VM languages is not really speed but that you have 
 little control over what the CPU does, and very little control over memory 
 and cache usage. Compound value types are also often lacking (especially in Java).

 See Python's internal object sizes: 
 http://www.codexon.com/posts/memory-size-of-python-objects
 Everything takes 4x the memory it would take in D. Each field access seems 
 to be a lookup into a hashmap. This is plain ugly.

 Also, when some part of a program is slow in C / C++ / D, most of the time 
 you have a way to speed it up. It may be painful but there is one.


Yea, and another issue with VM-only languages is that you can't really use 
them for systems programming (ex, try using Java or Python to write 
firmware, or a commercially-competitive NDS game. And what language is a 
person going to use to implement that VM in? Another VMed language? That 
would be pointless). Now, for some people that's not an issue, because they 
may never venture outside of web development and maybe end-user desktop apps 
(Hell, most of the programmers I've met here in Cleveland would never be 
*capable* of going beyond such simple types of software...but that's a whole 
other set of discussions...). But for others that does really limit how much 
use they can get out of it and forces them to divide their effort and focus 
by constantly switching between languages for each type of task. (Count me 
as another old-school 20-something ;)...ugh, but not for much longer...:/ ) 




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread language_fan
Thu, 10 Sep 2009 04:18:58 -0400, #ponce thusly wrote:

 At least the problem with many old-school developers is that even
 though a new language is 10x more readable, 100x more flexible, 1000x
 safer and 1x faster to develop with, if it's 0.1% slower than C++,
 they have no reason to use it. Well, it shouldn't be a surprise given
 that JVM *is* a VM.
 
 So I'm a 22 yo oldschool developer.
 One reason I hate VM languages is not really speed but that you
 have little control over what the CPU does, and very little control over
 memory and cache usage. Compound value types are also often lacking (especially
 in Java).

Nowadays when everyone soon has 12-core CPUs in front of them, especially 
x86-64 ones, managing each register and memory module (cache or main 
memory) manually is a major pain in the ass. Why do you want to do that 
in the first place? For greater speed? The problem is, your program 
usually has tons of memory leaks, potential race conditions and 
deadlocks, and states where it segfaults. Even if you develop for free, I 
do not want to use your buggy pos. YMMV

 Also, when some part of a program is slow in C / C++ / D, most of the time
 you have a way to speed it up. It may be painful but there is one.

So you are part of the "efficiency is priority #1" subgroup, after all. 
There is nothing wrong with that, I just happened to guess that.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread language_fan
Thu, 10 Sep 2009 04:18:58 -0400, #ponce thusly wrote:

 See Python's internal object sizes:
 http://www.codexon.com/posts/memory-size-of-python-objects Everything
 takes 4x the memory it would take in D. Each field access seems to be a
 lookup into a hashmap. This is plain ugly.

Python is not the only language running on a VM, you know. Your 
comparison would be fairer if you happened to choose a statically typed 
language such as Java, which is a bit more performance oriented. If you 
have time, you can try to build a python compiler that compiles to native 
code *without* resorting to damn slow hashmaps.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread language_fan
Thu, 10 Sep 2009 06:05:59 -0400, Nick Sabalausky thusly wrote:

 #ponce alil...@gmail.com wrote in message
 news:h8acpi$21u...@digitalmars.com...
 At least the problem with many old-school developers is that even
 though a new language is 10x more readable, 100x more flexible, 1000x
 safer and 1x faster to develop with, if it's 0.1% slower than C++,
 they have no reason to use it. Well, it shouldn't be a surprise given
 that JVM *is* a VM.

 So I'm a 22 yo oldschool developer.
 One reason I hate VM languages is not really speed but that you
 have little control over what the CPU does, and very little control over
 memory and cache usage. Compound value types are also often lacking
 (especially in Java).

 See Python's internal object sizes:
 http://www.codexon.com/posts/memory-size-of-python-objects Everything
 takes 4x the memory it would take in D. Each field access seems to be a
 lookup into a hashmap. This is plain ugly.

 Also, when some part of a program is slow in C / C++ / D, most of the time
 you have a way to speed it up. It may be painful but there is one.


 Yea, and another issue with VM-only languages is that you can't really
 use them for systems programming (ex, try using Java or Python to write
 firmware

For what it's worth, I have not seen any firmware projects that use D, 
either. Not that it's impossible; it's just that no one in the firmware 
industry is interested in D. When you have a limited amount of memory 
resources, the bloaty executables that dmd generates (all firmware uses x86 
opcodes, right?), the typeinfos, and the garbage collector are a nuisance.

, or a commercially-competitive NDS game. And what language is a
 person going to use to implement that VM in? Another VMed language? That
 would be pointless). Now, for some people that's not an issue, because
 they may never venture outside of web development and maybe end-user
 desktop apps (Hell, most of the programmers I've met here in Cleveland
 would never be *capable* of going beyond such simple types of
 software...but that's a whole other set of discussions...).

I happen to know that, for instance, the code (written in e.g. Java) that 
the banking industry, online shops, and web search engines use isn't quite 
the simplest type of software. You can ask Amazon or Google or the 
technical department of your favorite bank. They wouldn't use C++ or D in 
its current form for the majority of their tasks; it's just how it is.

The problem with many developers is that they refuse to understand the 
mathematics behind algorithms. Time complexity just happens to matter 
more when you have millions of clients etc. If Java is 50% slower, it 
doesn't matter when your algorithm is 4 orders of magnitude faster with 
10 million clients. Writing good algorithms is quite hard and error prone 
in low level languages, and that's why bad algorithms are usually used 
even if they are known to perform weakly.
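
(To put rough numbers on that claim - my arithmetic, not language_fan's: with 
n = 10^7 clients, an O(n^2) algorithm costs on the order of 10^14 steps while 
an O(n log n) one costs roughly 2.3 * 10^8, a factor of about 400,000 - so a 
50% constant-factor penalty disappears next to the algorithmic gap.)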


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Nick Sabalausky
 language_fan f...@bar.com.invalid wrote in message 
news:h8b5tf$bs...@digitalmars.com...
 Thu, 10 Sep 2009 06:05:59 -0400, Nick Sabalausky thusly wrote:

 Yea, and another issue with VM-only languages is that you can't really
 use them for systems programming (ex, try using Java or Python to write
 firmware

 For what it's worth, I have not seen any firmware projects that use D,
 either. Not that it's impossible; it's just that no one in the firmware
 industry is interested in D. When you have a limited amount of memory
 resources, the bloaty executables that dmd generates (all firmware uses x86
 opcodes, right?), the typeinfos, and the garbage collector are a nuisance.


True, but as you note, at least it's possible. Plus a lot of that bloat can 
be ripped out anyway. And LLVM has a C backend for all those processors that 
only have C compilers available. *And* there's interest in making all this 
happen. But with firmware on a VM-only language, you'd just be facing a 
dead-end (or at most, one of those mere side-show "Oh, wow, like what he 
did! In that language! Cool! Ha ha!" projects with no real-world practical 
application).

 , or a commercially-competitive NDS game. And what language is a
 person going to use to implement that VM in? Another VMed language? That
 would be pointless). Now, for some people that's not an issue, because
 they may never venture outside of web development and maybe end-user
 desktop apps (Hell, most of the programmers I've met here in Cleveland
 would never be *capable* of going beyond such simple types of
 software...but that's a whole other set of discussions...).

 I happen to know that, for instance, the code (written in e.g. Java) that
 the banking industry,

Banking software is a glorified calculator. Granted, much more complex than 
a typical calculator (though not necessarily handheld graphing calculators), 
but still, a calculator. And when it still takes a day or two, in 2009, for 
certain simple transactions to go through, I have to seriously question how 
good of a job they're doing anyway.

 online shops,

I don't believe that for a second. And as far as performance, if they need 
more of that, they can (and do) throw hardware at it.

 and web search engines use

Lumping search engines in with ordinary web apps is like lumping compilers 
in with typical end-user desktop apps, or like lumping home console games in 
with Flash games. Yea, I'll grant that searching is a serious topic, but 
throwing an HTML frontend over it doesn't make it comparable with web apps 
in general.

 isn't quite
 the simplest type of software. You can ask Amazon or Google or the
 technical department of your favorite bank. They wouldn't use C++ or D in
 its current form for the majority of their tasks; it's just how it is.


Right, they don't use systems languages because they don't need to (except 
in Google's case, and unless I'm mistaken, I think I've heard that they do 
write some of their low-level performance critical stuff in C).

Like I already said, many people will never need something that goes 
low-level, and so they can get away with investing all their resources in 
VM-only languages. And like I already said, that's fine. But for someone to 
come in and indicate that complaints against VM-only languages are always 
unsubstantiated is just plain absurd.

 The problem with many developers is that they refuse to understand the
 mathematics behind algorithms. Time complexity just happens to matter
 more when you have millions of clients etc. If Java is 50% slower, it
 doesn't matter when your algorithm is 4 orders of magnitude faster with
 10 million clients.

Yes, which just helps back up my remark that those apps are a lot easier. 
Think about it: Time complexity is always a concern when dealing with 
performance. But on some software, that's the only performance question. 
Easy. Nothing else to worry about. Just identify the type of algorithm, look 
up the complexity if you don't already know it off the top of your head, and 
you're done. But then on other software, you *ALSO* have to worry about 
whether or not the complexity *is* actually the bottleneck in the first 
place. And when it isn't - and, much of the time, even when it *is* - then 
you have to dive into the *real* world of software optimization...*in 
addition to* that time complexity that's still going to creep up from time 
to time anyway.

 Writing good algorithms is quite hard and error prone
 in low level languages and that's why bad algorithms are usually used
 even if they are known to perform weakly.

And then there's D, which handles both low and high level.




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Nick Sabalausky
language_fan f...@bar.com.invalid wrote in message 
news:h8b51a$bs...@digitalmars.com...
 Thu, 10 Sep 2009 04:18:58 -0400, #ponce thusly wrote:

 So I'm a 22 yo oldschool developer.
 One reason I hate VM languages is not really speed but that you
 have little control over what the CPU does, and very little control over
 memory and cache usage. Compound value types are also often lacking (especially
 in Java).

 Nowadays when everyone soon has 12-core CPUs in front of them, especially
 x86-64 ones, managing each register and memory module (cache or main
 memory) manually is a major pain in the ass.

That's just plain arrogant and ignorant. I swear, the next time I see yet 
another person pulling out the "That's all they offer in the stores, 
therefore that must be the only thing that's actually in use, and if anyone 
uses less, well then screw them for not being as big of a consumer whore as I 
am" bullshit, my head's going to explode.

 Why do you want to do that
 in the first place? For greater speed? The problem is, your program
 usually has tons of memory leaks, potential race conditions and
 deadlocks, and states where it segfaults. Even if you develop for free, I
 do not want to use your buggy pos. YMMV


Have you even been paying *any* attention to D? You're on a D newsgroup, but 
your paragraph right there makes it sound as if you think C++ is the only 
native-compiled language that is, was or ever will be. I'd expect that kind 
of ignorance on a C++ board, or a Java or Python board, but here...where 
there's a giant glaring counter-example right under your nose? Are you 
kidding me?

Plus...what in the world makes you think VMed languages don't get errors, 
memory leaks, and race conditions? Segfaults I'll grant you, but that's 
hardly any different for the end-user than an unhandled exception.

 Also, when some part of a program is slow in C / C++ / D, most of time
 you have a way to speed it up. It may be painful but there is one.

 So you are part of the "efficiency is priority #1" subgroup, after all.
 There is nothing wrong with that, I just happened to guess that.

Don't be so obtuse. Just because something is occasionally *a* priority, 
clearly does not imply it's a person's #1 priority.




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread language_fan
Thu, 10 Sep 2009 16:49:47 -0400, Nick Sabalausky thusly wrote:

 language_fan f...@bar.com.invalid wrote in message

 Nowadays when everyone soon has 12-core CPUs in front of them,
 especially x86-64 ones, managing each register and memory module (cache
 or main memory) manually is a major pain in the ass.
 
 That's just plain arrogant and ignorant. I swear, the next time I see
 yet another person pulling out the "That's all they offer in the stores,
 therefore that must be the only thing that's actually in use, and if anyone
 uses less, well then screw them for not being as big of a consumer whore
 as I am" bullshit, my head's going to explode.

If I go to a store, the cheapest computer I can buy has a dual core CPU - 
that's just how it is. The $500..600 class computers have quad cores. 
Even the $100..200 range netbooks soon have (if they don't yet) dual 
cores. If we assume that most computers just break down in 2-5 years, we 
will pretty soon have only multi-core computers left. My old Pentium 2 is 
already quite dead and the motherboard in my Athlon XP 2000+ broke down 
last year. I've given away all older machines. I really don't expect them 
to be functional or usable these days.

 Plus...what in the world makes you think VMed languages don't get
 errors, memory leaks, and race conditions? Segfaults I'll grant you, but
 that's hardly any different for the end-user than an unhandled
 exception.

There are a couple of things a VM fixes. Not all of them, but some. 
Switching to a safer language helps even more. I don't like C++.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Lutger
language_fan wrote:

(...)
 
 For what's it worth, I have not seen any firmware projects that use D,
 either. Not that it's impossible, no one in the firmware industry just is
 not interested in D. When you have limited amount of memory resources,
 the bloaty executables that dmd generates (all firmware uses x86 opcodes,
 right), the typeinfos, and the garbage collector are a nuisance.
 

If you don't mind me asking, what is your interest in D?




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Nick Sabalausky
language_fan f...@bar.com.invalid wrote in message 
news:h8bqe5$v8...@digitalmars.com...
 Thu, 10 Sep 2009 16:49:47 -0400, Nick Sabalausky thusly wrote:

 language_fan f...@bar.com.invalid wrote in message

 Nowadays when everyone soon has 12-core CPUs in front of them,
 especially x86-64 ones, managing each register and memory module (cache
 or main memory) manually is a major pain in the ass.

 That's just plain arrogant and ignorant. I swear, the next time I see
 yet another person pulling out the "That's all they offer in the stores,
 therefore that must be the only thing that's actually in use, and if anyone
 uses less, well then screw them for not being as big of a consumer whore
 as I am" bullshit, my head's going to explode.

 If I go to a store, the cheapest computer I can buy has a dual core cpu -
 that's just how it is. The $500..600 class computers have quad cores.
 Even the $100..200 range netbooks soon have (if they don't yet) dual
 cores.

Argh! That's exactly what I'm talking about! I don't care what the stores do 
or don't have in stock! Store stock != Actual usage. I swear I'm going to 
tattoo that into someone's forehead someday.

Plus there's the second-hand market. And these days there's a *lot* of 
second-hand hardware that's perfectly capable of doing anything that most 
people would ever need to do. My secondary system, which does everything I 
need it to, cost me around US$75, second-hand. Find me a quad-core system at 
that price...(and then ask me why I would care since I already have a 
computer that runs fine and does what I need it to do).

 If we assume that most computers just break down in 2-5 years, we
 will pretty soon have only multi-core computers left. My old Pentium 2 is
 already quite dead and the motherboard in my Athlon XP 2000+ broke down
 last year. I've given away all older machines. I really don't expect them
 to be functional or usable these days.


Fine, but if you're buying computers that break down in 2-5 years, then 
you're buying *really shitty* computers, so it's a meaningless premise. Both 
of my computers are made from parts that are around 8 years old, and they 
both work *perfectly fine*. Hell, I have two 486's and an Apple 2 that still 
run (not that I use them much). Having breakdowns after only 2-5 years is a 
clear sign that you're either buying garbage in the first place, or your UPS 
is on the fritz (or you don't have one), or something. Either way, something's 
definitely not right.

And before I get the inevitable "D00d thats soo old U shud by a new 1!", 
yes, I *could* go buy a new system. But why should I? I don't do a single 
thing that can't be done just fine on my single-cores. And the only things 
that run poorly are the things written by "teenage lazy hack I don't 
care about intelligent coding, because everyone should be just like me and 
want to sink all their money into new hardware just because they can!" 
programmers, or by people like Cliffy B and those at Apple/Sun/MS who have a 
vested interest in getting people to buy new hardware.

 Plus...what in the world makes you think VMed languages don't get
 errors, memory leaks, and race conditions? Segfaults I'll grant you, but
 that's hardly any different for the end-user than an unhandled
 exception.

 There are a couple of things a VM fixes. Not all of them, but some.
 Switching to a safer language helps even more. I don't like C++.

I don't like C++ either ;)




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread dsimcha
== Quote from Nick Sabalausky (a...@a.a)'s article
 And before I get the inevitable "D00d thats soo old U shud by a new 1!",
 yes, I *could* go buy a new system. But why should I? I don't do a single
 thing that can't be done just fine on my single-cores. And the only things
 that run poorly are the things written by "teenage lazy hack I don't
 care about intelligent coding, because everyone should be just like me and
 want to sink all their money into new hardware just because they can!"

Not sure I buy this.  Let's analyze it in simple microeconomics.  Both 
programmer
time and computer hardware are scarce, expensive commodities.  To some extent, 
one
can be substituted for the other.  (A programmer can either spend less time
writing crappier code that needs more hardware or vice-versa.)  All else being
equal, you want the cheapest software you can get.

For the sake of this argument, I'm going to assume that the software is paid for
directly by the consumer, though the argument could be extended to cases where 
it
is paid for indirectly (business websites, etc.) and free software.  A company 
can
either deliver really unoptimized software for little programmer time, and thus
cheaply, or really fast software expensively.  As a consumer, you only care 
about
*total* cost.  Therefore, as the cost of better hardware goes down, the only
rational thing to do is spend less time optimizing software.

Of course, this doesn't work for special purpose computers that only run one 
piece
of software, but let's say the average computer user runs ~20 pieces of software
regularly.  If a new computer costs $400, and each piece of software can be made
on average $20 cheaper by not optimizing it, then you break even.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Justin Johansson
Nick Sabalausky Wrote:
 As someone who's been meaning to take a look at Scala, I'm very curious: 
 What did you dislike about it?

I'm somewhat reluctant to discuss Scala too much here as this is a D forum, but 
since more than one person asked it's only fair to reply with something.  Now 
this is just my opinion, subjective and ignorant as it may be, so please, it's 
horses for courses and YMMV.  Also I fully support bio-diversity in programming 
languages, and it's ridiculous to suggest that there is or should be only one 
true language and then to can a language because it's not your idea of the one 
true language.  Regarding my writing style, I attempt to employ a little humour 
(lame as it may be) so please take that into account before anyone decides to 
get really angry at me.

Well here goes ...

1. Moved in with her for six months.  She's an overkill of a language, 
seductive at first but then like a Black Widow spider eats her mate after 
courtship.

http://en.wikipedia.org/wiki/Latrodectus_hesperus

2. Scala cannot make up her mind if she's a scripting language or a serious 
language.  Optional semicolons at the end of statements are really frustrating 
for code style consistency.  Worst is when you suddenly need them and you've 
left them out everywhere else for code style consistency.  (JavaScript has this 
problem too, and JS guru Douglas Crockford recommends semicolons always be used 
in that language despite them being optional when statements are clearly 
separated by newlines.)

3. Half-baked embeddable XML support in the language looks like she borrowed 
from ECMAScript's E4X.

4. Too many different ways of doing things.  All very interesting and no doubt 
very clever but she needs to shave her hairy legs with Occam's razor before she 
starts to look like the sister of Frankenstein's monster**.

http://en.wikipedia.org/wiki/Occam%27s_razor

5. Way too much of an academic approach to language design; appears to be a 
grand experiment to see how many academic papers can be derived from it.

Google for "Scala experiment" with and without the quotes and you'll soon 
realize she was designed in a lab from a kitchen sink full of PL body parts 
like her brother in fiction**.

Read Cedric's blog June 2008 for example

http://beust.com/weblog/archives/000490.html

Still there's no such thing as a failed experiment.  An experiment is just an 
experiment.  The light bulb wouldn't exist today if Edison wasn't as 
tenacious as he was.  He simply treated failed experiments as just another 
way of learning how not to make a light bulb.

6. Newcomers to the language will find its type system concepts overwhelming - 
co-variance and contra-variance etc. (don't know how D2 will address this 
better though); a short sketch follows at the end of this item. Yes these 
issues are important for OO libraries but I feel there must be a more practical 
way out of the language complexity.  Personally I always kept away from the 
hairy and scary bits of C++; you don't need 'em in a practical language.

I've heard Scala's argument that all the complexity is hidden in the libraries 
so you don't need to worry about it.  Unfortunately I don't believe her.  I 
learn a lot about a language by studying the library code and expect it to be 
as easy to read and understand as mainline code.
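
For readers new to the jargon, a minimal D sketch of why variance matters (my 
example, not Justin's):

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

void main()
{
    Cat[] cats = [new Cat()];
    // If Cat[] implicitly converted to Animal[] (covariance on mutable
    // arrays), the two lines below would smuggle a Dog into an array of
    // Cats; D rejects the conversion for exactly this reason.
    //Animal[] animals = cats;
    //animals[0] = new Dog();
}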

7. Not her fault (i.e. of the language), but after six months of courting Scala 
with the Eclipse plugin, suffering IDE crash after crash and lost code, I just 
could not bring myself to suffer her any longer.

Read August 2009 comments by Tim at

http://blog.jayway.com/2009/03/12/scala-ide-support/

Half a year after the above post, I’m still shocked at how badly the Scala 
plug-in for Eclipse behaves. I’ve downloaded several variants on Eclipse over 
the last half-year (currently 3.4.2), and NONE of them have been able to do 
even basic things reliably with the Scala plug-in (or vice-versa)
...
without, say, basic knowledge of code (how to find a referenced method) and the 
ability to run more than five minutes, I might as well go back to Vim. (Which I 
think I’m going to have to do.)

When I went looking for an Eclipse plugin for D a few weeks ago, I soon 
discovered Descent.  It's not perfect by any means, but works well enough not 
to be a thorn in your side and deserves credit for where it's at already.

8. In agreement with comments by Tony Arcieri (April 2009)

"I was initially excited about Scala but slowly grew discontented with it."

http://www.unlimitednovelty.com/2009/04/why-i-dont-like-scala.html

To sum up, after six months of living in, I felt I still wasn't getting 
anywhere close to being intimate with her.  It really shouldn't take that long 
to get up to speed with a new PL.

My advice for D2 language designers is not to copy every cool feature of 
Scala.  For good balance of ideas, look at Rich Hickey's Clojure language.

http://clojure.org/



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Nick Sabalausky
dsimcha dsim...@yahoo.com wrote in message 
news:h8c52o$2a8...@digitalmars.com...
 == Quote from Nick Sabalausky (a...@a.a)'s article
 And before I get the inevitable D00d thats soo old U shud by a new 1!,
 yes, I *could* go buy a new system. But why should I? I don't do a single
 thing that can't be done just fine on my single-cores. And the only 
 things
 that run poorly are the things are written by teenage lazy hack I don't
 care about intelligent coding, because everyone should be just like me 
 and
 want to sink all their money into new hardware just because they can!

 Not sure I buy this.  Let's analyze it in simple microeconomics.  Both 
 programmer
 time and computer hardware are scarce, expensive commodities.  To some 
 extent, one
 can be substituted for the other.  (A programmer can either spend less 
 time
 writing crappier code that needs more hardware or vice-versa.)  All else 
 being
 equal, you want the cheapest software you can get.

 For the sake of this argument, I'm going to assume that the software is 
 paid for
 directly by the consumer, though the argument could be extended to cases 
 where it
 is paid for indirectly (business websites, etc.) and free software.  A 
 company can
 either deliver really unoptimized software for little programmer time, and 
 thus
 cheaply, or really fast software expensively.  As a consumer, you only 
 care about
 *total* cost.  Therefore, as the cost of better hardware goes down, the 
 only
 rational thing to do is spend less time optimizing software.

 Of course, this doesn't work for special purpose computers that only run 
 one piece
 of software, but let's say the average computer user runs ~20 pieces of 
 software
 regularly.  If a new computer costs $400, and each piece of software can 
 be made
 on average $20 cheaper by not optimizing it, then you break even.

There are a *lot* of 'if's and assumptions in that analysis.

In general though, I find the "programmer time is more expensive than 
hardware" line to largely be a cop-out.




Re: Template Metaprogramming Made Easy (Huh?)

2009-09-10 Thread Jeremie Pelletier
Justin Johansson Wrote:
 6. Newcomers to the language will find its type system concepts overwhelming 
 - co-variance and contra-variance etc.  (don't know how D2 will address this 
 better though). Yes these issues are important for OO libraries but I feel 
 there must be a more practical way out of the language complexity.  
 Personally I always kept away from the hairy and scary bits of C++; you don't 
 need 'em in a practical language.
 
 I've heard Scala's argument that all the complexity is hidden in the 
 libraries so you don't need to worry about it.  Unfortunately I don't believe 
 her.  I learn a lot about a language by studying the library code and expect 
 it to be as easy to read and understand as mainline code.

I couldn't agree more. I learned how to use D by studying its runtime library 
over the past few years. To me it is especially useful to study a runtime 
library when it is used to implement features of the language, so you get a 
clear grasp of what using those features implies. I lost count of how many 
neat tricks I learned reading Andrei's metaprogramming code.



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-09 Thread language_fan
Tue, 08 Sep 2009 17:25:08 -0400, Justin Johansson thusly wrote:

 D is to C++ as Scala is to Java.

The word you are looking for may be 'successor'.

 The very articulate Paul Graham writes in The Hundred-Year Language
 http://www.paulgraham.com/hundred.html "Though the situation is better
 in the sciences, the overlap between the kind of work you're allowed to
 do and the kind of work that yields good languages is distressingly
 small. ... types seem to be an inexhaustible source of research papers,
 despite the fact that static typing seems to preclude true macros--
 without which, in my opinion, no language is worth using." If I'm not
 mistaken, (LISP) macros**, metaprogramming, templates are different
 views of the same thing and any language which makes template
 metaprogramming easy is definitely worth it. ** Yes I know, there is
 nothing as pure as LISP macros but since I tend to lead a rather impure
 life 'D' has my attention now.

I do not know what is so pure about LISP's macros. A macro is a pure 
evaluation function that takes a meta-program and outputs another program 
that can be compiled to an executable (or interpreted). The larger 
difference is that macros in LISP are cleaner since they allow modifying 
all code as data. The lack of a type system is another thing. A type system 
for a meta-language has non-trivial requirements, and I have to say that the 
most general system in D (string mixins) is not that large an improvement 
over LISP's macros. Go and see how Template Haskell did the same.
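
For readers who haven't met them, a minimal example of D's string mixins (my 
sketch, not language_fan's) - code is built as a compile-time string and 
spliced into the program:

import std.stdio;

// Builds a field plus its getter as source text, at compile time (CTFE).
string makeGetter(string field)
{
    return "int " ~ field ~ "; int get_" ~ field ~ "() { return " ~ field ~ "; }";
}

struct S
{
    mixin(makeGetter("x"));   // injects: int x; int get_x() { return x; }
}

void main()
{
    S s;
    s.x = 42;
    writeln(s.get_x());   // 42
}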

 Given Martin Odersky awarded top ACM recognition
 http://actualites.epfl.ch/index.php?module=procontentfunc=displayid=2046
 for Scala, FP etc, perhaps Walter Bright should be considered for a
 Fields Medal for D :-) http://en.wikipedia.org/wiki/Fields_Medal

Martin is a computer scientist, Walter is an engineer. Martin creates new 
science, Walter just applies existing knowledge. Those awards are only 
meant for real scientists - engineers have their own award systems.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-09 Thread language_fan
Tue, 08 Sep 2009 18:09:01 -0400, bearophile thusly wrote:

 CLisp macros are not pure at all, Scheme macros are a bit less dirty :-)

Even though conversation on this newsgroup is mostly bikeshedding, it is 
wrong to use confusing new unestablished terms like dirtiness. The 
correct word is hygiene, and there is even a Wikipedia page for all you 
folks who have forgotten what it means: 
http://en.wikipedia.org/wiki/Hygienic_macro


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-09 Thread Justin Johansson
language_fan Wrote:

 Tue, 08 Sep 2009 17:25:08 -0400, Justin Johansson thusly wrote:
  ** Yes I know, there is
  nothing as pure as LISP macros but since I tend to lead a rather impure
  life 'D' has my attention now.
 
 I do not know what is so pure about LISP's macros. A macro is a pure 
 evaluation function that takes a meta-program and outputs another program 

I put that caveat in there to avoid being flamed by religious LISPers.  It was 
a diplomatic concession.
The argument I've often seen regarding LISP style macros is that they let you 
directly manipulate the AST which is why they (well some of them) think macro 
systems in other PLs are a poor imitation of the one true LISP macro. :-)

 
  for Scala, FP etc, perhaps Walter Bright should be considered for a
  Fields Medal for D :-) http://en.wikipedia.org/wiki/Fields_Medal
 
 Martin is a computer scientist, Walter is an engineer. Martin creates new 
 science, Walter just applies existing knowledge. Those awards are only 
 meant for real scientists - engineers have their own award systems.

Indeed; wasn't meant to be taken too literally; poetic license;
my friend's brother has a Fields Medal so I'm well aware what this is.



Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Walter Bright

http://www.reddit.com/r/programming/comments/9iidr/template_metaprogramming_made_easy_huh/


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Walter Bright

Justin Johansson wrote:

Lest this newsgroup slip into being just mutual appreciation society
forum, I'd suggest that the seasoned D gurus go forth and evangelize
the language by commenting back on that Reddit link. 
http://www.reddit.com/r/programming/comments/9iidr/template_metaprogramming_made_easy_huh/


So do I, especially your comments!!!


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread bearophile
Justin Johansson:

D is to C++ as Scala is to Java.

Scala allows you to write shorter programs compared to Java, and is more 
flexible and more complex than Java.
D2 is less complex than C++, a bit less verbose than C++, and a bit less 
flexible than C++.
(Both Scala and D add some functional sides to their older languages. Scala is 
currently more functional-friendly than D2.)


Paul Graham:
"static typing seems to preclude true macros"

Paul knows Lisp well, but I don't believe in that statement. I'll read more 
about this.


If I'm not mistaken, (LISP) macros**, metaprogramming, templates are different 
views of the same thing

Lisp macros are quite a bit more powerful than C++-style templates. 
Some time ago Walter was interested in adding AST (compile-time only) macros 
to D, but I think he's not interested in adding them any more.


 ** Yes I know, there is nothing as pure as LISP macros but since I tend to 
 lead a rather impure life 'D' has my attention now.

CLisp macros are not pure at all, Scheme macros are a bit less dirty :-)


I don't end up becoming disillusioned with D as I did with Scala.

I don't program in Scala, and overall it's probably not my ideal language at 
all, but I think it's a cute language (and the JavaVM it runs on has some good 
things, like its GCs, some inlining/profiling code, etc), and I believe it has 
some lessons to teach to D developers. Can you tell me/us why you think Scala 
was not good enough for you?

Bye,
bearophile


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Justin Johansson
Walter Bright Wrote:

 Justin Johansson wrote:
  Lest this newsgroup slip into being just mutual appreciation society
  forum, I'd suggest that the seasoned D gurus go forth and evangelize
  the language by commenting back on that Reddit link. 
  http://www.reddit.com/r/programming/comments/9iidr/template_metaprogramming_made_easy_huh/
 
 So do I, especially your comments!!!

You're welcome, Walter, and whilst I tend to fear being flamed in a public 
forum (I don't have a thick enough skin to be a politician), I tried to 
practice, and not just preach, by making my own contribution to the reddit 
discussion a short while ago.

Anyway since I'm a newcomer here it might help if I introduce myself so people 
know where I'm coming from.

Like yourself, I have a formal engineering degree.  I read in your bio that you 
originally did mechanical eng whereas I did electrical eng, with a love of 
maths and physics.  Like you I went from engineering to exclusively software.  
That being circa 1982, I'm from the older crowd.  (Surprise .. I read your 
comment somewhere that D tends to appeal to the younger crowd.)  I have a 
degree in computer science as well .. back when Fortran was on the Control Data 
6400 menu .. shortly before the birth of the Vax and Wirth producing that 
pedagogical language, Pascal.

Got into software via electronics and microprocessors - 8085, SCAMP (first 
16-bit microprocessor based on Data General Nova minicomputer architecture), 
6809 and later 68020 and Texas Instruments TMS32010 digital signal processor.  
Developed multitasking executive for Z80 and 68020 for real-time scientific 
instrumentation in assembler and C.  Got into OO with Smalltalk and with birth 
of C++ compilers, notably Zortech C++.

After 15+ years in C++ world, 3-4 years ago was forced into a Java labour camp 
due to diminishing requirements for C++ skills in the local job market.

Always felt that somehow I missed out on the Baroque era of LISP and FP but 
looks like there's a Renaissance happening now with new languages like Clojure 
and Scala.  D looks like getting there too (FP-wise I mean, though agree with 
other writer on this thread that Scala is currently perhaps a little more 
FP-friendly).

However what's attracting me to D now is its pragmatism and it not being 
governed by "a corporate agenda or any overarching theory of programming", to 
quote the DM intro page.  For me D represents an opportunity to reach back to 
my roots in bare metal systems programming in a modern setting.

For the moment though, for stability and license reasons, I've decided to stick 
with DMD 1.0 and Phobos for a new project that I'm working on.  Linux platform 
btw.

Cheers
Justin Johansson



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Jarrett Billingsley
On Tue, Sep 8, 2009 at 8:40 PM, Justin
Johansson proc...@adam-dott-com.au wrote:

 Got into OO with Smalltalk and with birth of C++ compilers, notably Zortech 
 C++.

Then you've had experience with Walter's compilers before ;) He wrote Zortech.


Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Justin Johansson
Jarrett Billingsley wrote:
 Johansson wrote:
  Got into OO with Smalltalk and with birth of C++ compilers, notably Zortech 
  C++.
 
 Then you've had experience with Walter's compilers before ;) He wrote Zortech.

Indeed, that's why the honourable mention in my bio :-)

Recall at the time it was the damn fastest blazing gun in the West - both 
compile- and runtime-wise!

Since being accustomed to consulting manufacturers' CPU and bit-slice 
architecture timing data sheets, microcode etc. to wring clock cycles out of 
low-level critical code, it was always interesting to examine the machine 
instructions produced by an optimizing C compiler to see if better could be 
done with hand-crafted assembler.

There was never much to gain using assembler on the PC with Walter's compiler 
though except in some very time critical routines like task context switching, 
semaphore locking, interrupt service routines and the like.

For embedded microprocessor work (CPUs not supported by Zortech), the (other) 
contemporary C compilers were generally pretty poor at optimizing machine 
instructions, so assembly language was often a necessary tool of last resort 
in those environments.



Re: Template Metaprogramming Made Easy (Huh?)

2009-09-08 Thread Justin Johansson
Hi bearophile,

I'm not sure that I want to fuel the language wars but curiously I notice you say

"I don't program in Scala"

but in http://www.mail-archive.com/digitalmars-d@puremagic.com/msg15946.html

you (allegedly) say

- to summarize: I use Scala for high level tasks, and came back to D when I 
need to see the actual machine code and optimize some tight inner loop. D is 
sometimes more suitable for this than C/C++ since it has a bit saner syntax 
and high level abstractions. But in general I nowadays write 90% of my code 
in Scala. I'm much happier and more productive writing Scala. YMMV

Has something changed for you re Scala since you wrote this?

Justin Johansson