On 5/30/15 2:38 PM, Shachar Shemesh wrote:
> On 30/05/15 03:57, Steven Schveighoffer wrote:
>> But I don't see how speed of compiler should sacrifice runtime
>> performance.
> Our plan was to compile with DMD during the development stage, and then
> switch to GDC for code intended for deployment. This plan simply cannot
> work if each time we try and make that switch, Liran has to spend two
> months, each time yanking a different developer from the work said
> developer needs to be doing, in order to figure out which line of source
> gets compiled incorrectly.
You're answering a question that was not asked. Obviously,
compiler-generated code should match what the source says. That's way
more important than speed of compilation or speed of execution.
So given that a compiler actually *works* (i.e. produces valid
binaries), is speed of compilation more important than speed of
execution of the resulting binary? How much is too much? There are
thresholds that make the difference between "works" and "doesn't work".
For instance, a requirement of 30GB of memory is not feasible on most
systems. If you have to have 30GB of memory to compile, then the
effective result is that the compiler doesn't work. Similarly, if a
compiler takes 2 weeks to output a binary, even if it's the fastest
binary on the planet, that compiler doesn't work.
But if we are talking about the difference between a compiler taking 10
minutes to produce a binary that is 20% faster and a compiler that
takes 1 minute, what is the threshold of pain you are willing to accept?
My preference is for the 10 minute compile time to get the fastest
binary. If it's possible to switch the compiler into "fast mode" that
gives me a slower binary, I might use that for development.
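The dev/deploy split described above might look like this in a build
script. The source layout and output names here are hypothetical, and
this is only a sketch of the workflow, but the flags are real options of
each compiler:

```shell
# Development build: DMD compiles quickly, binary is slower.
# -g keeps debug symbols, -debug compiles in debug{} blocks.
dmd -g -debug -ofapp_dev src/*.d

# Deployment build: GDC compiles more slowly but optimizes harder.
# -O2 enables optimization, -frelease removes contracts and asserts.
gdc -O2 -frelease -o app_release src/*.d
```

Switching between the two is then a matter of which target you invoke,
which is exactly where a codegen mismatch between the compilers becomes
painful.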
My original statement was obviously exaggerated; I would not put up with
days-long compile times, and would find another way to do development.
But compile time is not as important to me as it is to others.
-Steve