Re: tooling quality and some random rant

2011-02-23 Thread Bruno Medeiros

On 14/02/2011 12:37, Jacob Carlborg wrote:

On 2011-02-13 16:07, Gary Whatmore wrote:

Paulo Pinto Wrote:


"Nick Sabalausky" wrote in message
news:ij7v76$1q4t$1...@digitalmars.com...

... (snipped) ...

That's not the compiler, that's the linker. I don't know what linker
DMD
uses on OSX, but on Windows it uses OPTLINK which is written in
hand-optimized Asm so it's really hard to change. But Walter's been
converting it to C (and maybe then to D once that's done) bit-by-bit
(so
to speak), so linker improvements are at least on the horizon.

...


Why C and not directly D?

It is really bad advertising for D that, when its creator came around to
rewriting the linker, he decided to use C instead of D.


I'm guessing that Walter feels more familiar and comfortable
developing in C/C++ than in D. He's the creator of D, but has written
very little D and probably cannot write idiomatic D very
fluently. Another issue is the immature toolchain.

This might sound like blasphemy, but I believe the skills and
knowledge for developing large scale applications in language XYZ
cannot be extrapolated from small code snippets or from experience
with projects in other languages. You just need to eat your own
dogfood and get your feet wet by doing.

People like the Tango's 'kris' and this 'h3r3tic' are the real world D
experts. Sadly they've all left D. We need a new generation of
experts, because these old guys ranting about every issue are more
harmful than good to the community.


Kris is still around.



Out of curiosity, what do you mean "still around". Still working with D?

--
Bruno Medeiros - Software Engineer


Re: tooling quality and some random rant

2011-02-23 Thread Bruno Medeiros

On 13/02/2011 23:28, retard wrote:

Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:


On 2/13/2011 3:01 PM, Walter Bright wrote:

Michel Fortin wrote:

But note I was replying to your reply to Denis who asked specifically
for demangled names for missing symbols. This by itself would be a
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago
for C++ and Optlink. Nobody cared, including the people who asked for
that feature. It's a bit demotivating to bother doing that again.


No offense, but this argument gets kinda old and it's incredibly weak.

Today's tooling expectations are higher.  The audience isn't the same.
And clearly people are asking for it.  Even for the past version, I
highly doubt no one cared; you just didn't hear from those who liked
it.  After all, few people go out of their way to talk about what they
like, just what they don't.


Half of the readers have already added me to their killfile, but here
goes some on-topic humor:

http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg



The only fail here is that comparison.

--
Bruno Medeiros - Software Engineer


Re: tooling quality and some random rant

2011-02-20 Thread Walter Bright

nedbrek wrote:

Hope that helps,


Thanks, this is great info!


Re: tooling quality and some random rant

2011-02-19 Thread nedbrek

"distcc"  wrote in message news:ijp9ji$1hvd$1...@digitalmars.com...
> nedbrek Wrote:
>> "Walter Bright"  wrote in message
>> news:ijnt3o$22dm$1...@digitalmars.com...
>>> nedbrek wrote:
 Also, "macro op fusion" allows you to get a branch along with the last
 instruction in decode, potentially giving you 5 macroinstructions per
 cycle from decode.  Make sure it is the flags producing instruction
 (cmp-br).

>>>
>>> I can't find any Intel documentation on this. Can you point me to some?
>>
>> The best available source is the optimization reference manual
>> (http://www.intel.com/products/processor/manuals/).  The latest version 
>> is
>> 248966.pdf, which mentions "Decodes up to four instructions, or up to 
>> five
>> with macro-fusion" (page 33).  Also, page 36: "Macro-fusion merges two
>> instructions into a single µop. Intel Core microarchitecture is capable 
>> of
>> one macro-fusion per cycle in 32-bit operation".  It's unclear if macro
>> fusion is off entirely in 64 bit mode, and whether this has changed in 
>> more
>> recent processors...
>
> I remember reading that macro fusion is entirely off in 64 bit mode in 
> Nehalem
> and earlier generations, and supported in Sandy Bridge.
>
> When generating code for loops, the compiler could also make use of the Loop 
> Stream Detector to avoid i-cache misses.

Serves me right, it is a little further in, page 52: "In Intel 
microarchitecture (Nehalem), macro-fusion is supported in 64-bit mode, and 
the following instruction sequences are supported: (big list)".

That would leave it off of 65nm (Merom) and 45nm (Penryn) parts.  These are 
identifiable through CPUID.

The guide is broken up into sections based on the particular chip, so you 
end up having to read them all to get a general feel for things...

Ned




Re: tooling quality and some random rant

2011-02-19 Thread distcc
nedbrek Wrote:

> Hello,
> 
> "Walter Bright"  wrote in message 
> news:ijnt3o$22dm$1...@digitalmars.com...
> > nedbrek wrote:
> >> Reordering happens in the scheduler. A simple model is "Fetch", 
> >> "Schedule", "Retire".  Fetch and retire are done in program order.  For 
> >> code that is hitting well in the cache, the biggest bottleneck is that 
> >> "4" decoder (the complex instruction decoder).  Reducing the number of 
> >> complex instructions will be a big win here (and settling them into the 
> >> 4-1-1(-1) pattern).
> >>
> >> Of course, on anything after Core 2, the "1" decoders can handle pushes, 
> >> pops, and load-ops (r+=m) (although not load-op-store (m+=r)).
> >>
> >> Also, "macro op fusion" allows you to get a branch along with the last 
> >> instruction in decode, potentially giving you 5 macroinstructions per 
> >> cycle from decode.  Make sure it is the flags producing instruction 
> >> (cmp-br).
> >>
> >
> > I can't find any Intel documentation on this. Can you point me to some?
> 
> The best available source is the optimization reference manual 
> (http://www.intel.com/products/processor/manuals/).  The latest version is 
> 248966.pdf, which mentions "Decodes up to four instructions, or up to five 
> with macro-fusion" (page 33).  Also, page 36: "Macro-fusion merges two 
> instructions into a single µop. Intel Core microarchitecture is capable of 
> one macro-fusion per cycle in 32-bit operation".  It's unclear if macro 
> fusion is off entirely in 64 bit mode, and whether this has changed in more 
> recent processors...

I remember reading that macro fusion is entirely off in 64 bit mode in Nehalem 
and earlier generations, and supported in Sandy Bridge.

When generating code for loops, the compiler could also make use of the Loop 
Stream Detector to avoid i-cache misses.


Re: tooling quality and some random rant

2011-02-19 Thread nedbrek
Hello,

"Walter Bright"  wrote in message 
news:ijnt3o$22dm$1...@digitalmars.com...
> nedbrek wrote:
>> Reordering happens in the scheduler. A simple model is "Fetch", 
>> "Schedule", "Retire".  Fetch and retire are done in program order.  For 
>> code that is hitting well in the cache, the biggest bottleneck is that 
>> "4" decoder (the complex instruction decoder).  Reducing the number of 
>> complex instructions will be a big win here (and settling them into the 
>> 4-1-1(-1) pattern).
>>
>> Of course, on anything after Core 2, the "1" decoders can handle pushes, 
>> pops, and load-ops (r+=m) (although not load-op-store (m+=r)).
>>
>> Also, "macro op fusion" allows you to get a branch along with the last 
>> instruction in decode, potentially giving you 5 macroinstructions per 
>> cycle from decode.  Make sure it is the flags producing instruction 
>> (cmp-br).
>>
>
> I can't find any Intel documentation on this. Can you point me to some?

The best available source is the optimization reference manual 
(http://www.intel.com/products/processor/manuals/).  The latest version is 
248966.pdf, which mentions "Decodes up to four instructions, or up to five 
with macro-fusion" (page 33).  Also, page 36: "Macro-fusion merges two 
instructions into a single µop. Intel Core microarchitecture is capable of 
one macro-fusion per cycle in 32-bit operation".  It's unclear if macro 
fusion is off entirely in 64 bit mode, and whether this has changed in more 
recent processors...

They recommend against aligning code in general to 4-1-1-1 (also page 36), 
but I'd assume this is for a very targeted application.  As always, it is 
best to run things both ways and measure.

The next section (2.1.2.5) talks about stack pointer tracking - which allows 
macro operations which used to be 2 uops (pop r -> load r = [esp]; inc esp) 
to become one (just the load).  Pushes, which used to be 3 uops 
(store_address esp, store_data r, dec esp) should also be one fused uop (via 
sta/std fusion and store point tracking).


Another good resource is "Real World Tech", particularly:
http://www.realworldtech.com/page.cfm?ArticleID=RWT030906143144

Page 4 covers the front end: "Macro-op fusion lets the decoders combine two 
macro instructions into a single uop. Specifically, x86 compare or test 
instructions are fused with x86 jumps to produce a single uop and any 
decoder can perform this optimization."


Finally, the Intel Technology Journal has some really good details (when you 
can find them! :)

For example:
http://download.intel.com/technology/itj/2003/volume07issue02/art03_pentiumm/vol7iss2_art03.pdf

details the original processor to use micro-op fusion (Pentium M or Banias - 
which was the base design for Dothan and Yonah).  See page 26 (epage 7/18) - 
which starts the section "MICRO-OPS FUSION".  It gives a lot of detail of 
the store address / store data fusion.


Hope that helps,
Ned




Re: tooling quality and some random rant

2011-02-18 Thread Walter Bright

nedbrek wrote:
Reordering happens in the scheduler. A simple model is "Fetch", "Schedule", 
"Retire".  Fetch and retire are done in program order.  For code that is 
hitting well in the cache, the biggest bottleneck is that "4" decoder (the 
complex instruction decoder).  Reducing the number of complex instructions 
will be a big win here (and settling them into the 4-1-1(-1) pattern).


Of course, on anything after Core 2, the "1" decoders can handle pushes, 
pops, and load-ops (r+=m) (although not load-op-store (m+=r)).


Also, "macro op fusion" allows you to get a branch along with the last 
instruction in decode, potentially giving you 5 macroinstructions per cycle 
from decode.  Make sure it is the flags producing instruction (cmp-br).


(I used to work for Intel :)


I can't find any Intel documentation on this. Can you point me to some?


Re: tooling quality and some random rant

2011-02-18 Thread nedbrek
Hello all,

"Walter Bright"  wrote in message 
news:ijeih9$2aso$2...@digitalmars.com...
> Don wrote:
>> That would really be fun.
>> BTW, the current Intel processors are basically the same as Pentium Pro, 
>> with a few improvements. The strange thing is, because of all of the 
>> reordering that happens, swapping the order of two (non-dependent) 
>> instructions makes no difference at all. So you always need to look at 
>> every instruction in the loop before you can do any scheduling.
>
> I was looking at Agner's document, and it looks like ordering the 
> instructions in the 4-1-1 or 4-1-1-1 for optimal decoding could work. This 
> would fit right in with the way the scheduler works.
>
> I had thought that with the CPU automatically reordering instructions, 
> that scheduling them was obsolete.

Reordering happens in the scheduler. A simple model is "Fetch", "Schedule", 
"Retire".  Fetch and retire are done in program order.  For code that is 
hitting well in the cache, the biggest bottleneck is that "4" decoder (the 
complex instruction decoder).  Reducing the number of complex instructions 
will be a big win here (and settling them into the 4-1-1(-1) pattern).

Of course, on anything after Core 2, the "1" decoders can handle pushes, 
pops, and load-ops (r+=m) (although not load-op-store (m+=r)).

Also, "macro op fusion" allows you to get a branch along with the last 
instruction in decode, potentially giving you 5 macroinstructions per cycle 
from decode.  Make sure it is the flags producing instruction (cmp-br).

(I used to work for Intel :)
Ned




Re: tooling quality and some random rant

2011-02-15 Thread Walter Bright

Don wrote:

Walter Bright wrote:

Don wrote:
In hand-coded asm, instruction scheduling still gives more than half 
of the same benefit that it used to do. But, it's become ten times 
more difficult. You have to use Agner Fog's manuals, not Intel/AMD.


For example:
(1) a common bottleneck on all Intel processors, is that you can only 
read from three registers per cycle, but you can also read from any 
register which has been modified in the last three cycles.

(2) it's important to break dependency chains.

On the BigInt code, instruction scheduling gave a speedup of ~40%.


Wow. I didn't know that. Do any compilers currently schedule this stuff?


Intel probably does. I don't think any others do a very good job. Agner 
told me that he had had no success in getting compiler vendors to be 
interested in his work.


Well, this one is. In fact, could we get Agner to actively help us out with 
this?


Any chance you want to take a look at cgsched.c? I had great success 
using the same algorithm for the quite different Pentium and P6 
scheduling minutia.


That would really be fun.
BTW, the current Intel processors are basically the same as Pentium Pro, 
with a few improvements. The strange thing is, because of all of the 
reordering that happens, swapping the order of two (non-dependent) 
instructions makes no difference at all. So you always need to look at 
every instruction in the loop before you can do any scheduling.


I was looking at Agner's document, and it looks like ordering the instructions 
in the 4-1-1 or 4-1-1-1 for optimal decoding could work. This would fit right in 
with the way the scheduler works.


I had thought that with the CPU automatically reordering instructions, that 
scheduling them was obsolete.


Re: tooling quality and some random rant

2011-02-15 Thread Lutger Blijdestijn
retard wrote:

> Mon, 14 Feb 2011 20:10:47 +0100, Lutger Blijdestijn wrote:
> 
>> retard wrote:
>> 
>>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>>> 
> Unfortunately DMC is always out of the question because the
> performance is 10-20 (years) behind competition, fast compilation
> won't help it.
 
 Can you please give a few links on this?
>>> 
>>> What kind of proof you need then? Just take some existing piece of code
>>> with high performance requirements and compile it with dmc. You lose.
>>> 
>>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
>>> http://lists.boost.org/boost-testing/2005/06/1520.php
>>> http://www.digitalmars.com/d/archives/c++/chat/66.html
>>> http://www.drdobbs.com/cpp/184405450
>>> 
>>> 
>> That is ridiculous, have you even bothered to read your own links? In
>> some of them dmc wins, others the differences are minimal and for all of
>> them dmc is king in compilation times.
> 
> DMC doesn't clearly win in any of the tests and these are merely some
> naive examples I found by doing 5 minutes of googling. Seriously, take a
> closer look - the gcc version is over 5 years old. Nobody even bothers
> doing dmc benchmarks anymore, dmc is so out of the league. I repeat, this
> was about performance of the generated binaries, not compile times.
> 
> Like I said: take some existing piece of code with high performance
> requirements and compile it with dmc. You lose. I honestly don't get what
> I need to prove here. Since you have no clue, presumably you aren't even
> using dmc and won't be considering it.

You go on ranting about dmc as if it is dwarfed by other compilers (which it 
might very well be), then provide 'proof' that doesn't prove this at all and 
now I must be convinced that it's because the other compilers are so old? 
You lose. You don't have to prove anything, but when you do, don't do it 
with dubious and inconclusive benchmarks. That's all.
 


Re: tooling quality and some random rant

2011-02-15 Thread Don

bearophile wrote:

Walter:


Huh, I simply could never find a document about how to use those which gave me any 
comfortable sense that the author knew what he was talking about.<


http://www.agner.org/optimize/

--

Don:


A problem with that, is that the prefetching instructions are vendor-specific.<


Right. Then I suggest some higher-level annotations (pragmas?) that the 
programmer uses to better state the temporal semantics of memory accesses in a 
performance-critical part of D code.



Also, it's quite difficult to use them correctly. If you put them in the wrong 
place, or use them too much, they slow your code down.<


CPU caches have a simple purpose. The speed of light is finite (how far does 
light travel in vacuum/doped silicon during a clock cycle of a 5 GHz POWER6 
CPU? http://en.wikipedia.org/wiki/POWER6 ), and finding one thing among many 
is slower than finding it among a few. So you speed up your memory accesses 
if you read information from a smaller group of data located closer to you. 
Most CPUs don't have a small, faster memory that you manage yourself 
(http://en.wikipedia.org/wiki/Scratchpad_RAM ); the CPUs copy data between 
cache levels by themselves, so on such CPUs the illusion of a flat memory is 
at the hardware level, not just at the C language level. Caches manage their 
memory in a few different ways, and bigger CPUs often offer ways to alter 
this a little, using special instructions. 


The main difference is how they keep coherence across different core 
caches and in what situations they store back data from the cache to RAM.


I think you may be confusing prefetch instructions with non-temporal stores.

The problem with prefetch instructions, is that they interfere with the 
hardware prefetch mechanism. The hardware prefetch is actually very 
good, and it's only under specific circumstances that a manual prefetch 
can beat it. I think it's unlikely that you can use prefetching 
beneficially, unless you've looked at the generated asm code.



In some cases your program wants to read from an array and store data into it 
again (and into another one too), but you never want to store to far-away 
data in the first one. There are a few other common patterns of memory usage. 
In theory a normal language like Fortran is enough to specify what memory you 
want to read or write and when you want to do it. In practice today's 
compilers are not so good at inferring such semantics, so some high-level 
annotations would probably help. In the future compilers may get better and 
simply ignore those annotations, just as they often ignore "register" 
annotations. System-level programming languages being practical things, 
adding annotations is not bad, even if 5-10 years later those annotations 
become less useful.


Here you're definitely talking about non-temporal stores.
Yes, there is some chance that an annotation for non-temporal stores 
could be beneficial.


Re: tooling quality and some random rant

2011-02-15 Thread Don

Walter Bright wrote:

Don wrote:
In hand-coded asm, instruction scheduling still gives more than half 
of the same benefit that it used to do. But, it's become ten times 
more difficult. You have to use Agner Fog's manuals, not Intel/AMD.


For example:
(1) a common bottleneck on all Intel processors, is that you can only 
read from three registers per cycle, but you can also read from any 
register which has been modified in the last three cycles.

(2) it's important to break dependency chains.

On the BigInt code, instruction scheduling gave a speedup of ~40%.


Wow. I didn't know that. Do any compilers currently schedule this stuff?


Intel probably does. I don't think any others do a very good job. Agner 
told me that he had had no success in getting compiler vendors to be 
interested in his work.


Any chance you want to take a look at cgsched.c? I had great success 
using the same algorithm for the quite different Pentium and P6 
scheduling minutia.


That would really be fun.
BTW, the current Intel processors are basically the same as Pentium Pro, 
with a few improvements. The strange thing is, because of all of the 
reordering that happens, swapping the order of two (non-dependent) 
instructions makes no difference at all. So you always need to look at 
every instruction in the loop before you can do any scheduling.


Re: tooling quality and some random rant

2011-02-15 Thread bearophile
Walter:

>Huh, I simply could never find a document about how to use those which gave me 
>any comfortable sense that the author knew what he was talking about.<

http://www.agner.org/optimize/

--

Don:

>A problem with that, is that the prefetching instructions are vendor-specific.<

Right. Then I suggest some higher-level annotations (pragmas?) that the 
programmer uses to better state the temporal semantics of memory accesses in a 
performance-critical part of D code.


>Also, it's quite difficult to use them correctly. If you put them in the wrong 
>place, or use them too much, they slow your code down.<

CPU caches have a simple purpose. The speed of light is finite (how far does 
light travel in vacuum/doped silicon during a clock cycle of a 5 GHz POWER6 
CPU? http://en.wikipedia.org/wiki/POWER6 ), and finding one thing among many 
is slower than finding it among a few. So you speed up your memory accesses 
if you read information from a smaller group of data located closer to you. 
Most CPUs don't have a small, faster memory that you manage yourself 
(http://en.wikipedia.org/wiki/Scratchpad_RAM ); the CPUs copy data between 
cache levels by themselves, so on such CPUs the illusion of a flat memory is 
at the hardware level, not just at the C language level. Caches manage their 
memory in a few different ways, and bigger CPUs often offer ways to alter 
this a little, using special instructions. The main difference is how they 
keep coherence across different core caches and in what situations they 
store data back from the cache to RAM.

In some cases your program wants to read from an array and store data into it 
again (and into another one too), but you never want to store to far-away 
data in the first one. There are a few other common patterns of memory usage. 
In theory a normal language like Fortran is enough to specify what memory you 
want to read or write and when you want to do it. In practice today's 
compilers are not so good at inferring such semantics, so some high-level 
annotations would probably help. In the future compilers may get better and 
simply ignore those annotations, just as they often ignore "register" 
annotations. System-level programming languages being practical things, 
adding annotations is not bad, even if 5-10 years later those annotations 
become less useful.

Bye,
bearophile


Re: tooling quality and some random rant

2011-02-15 Thread spir

On 02/15/2011 03:47 AM, bearophile wrote:

Don:


But still, cache effects are more important than instruction scheduling
in 99% of cases.


I agree.
CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A 
bit more high-level visibility for those instructions may be positive today.

D being a systems language, another possible idea is to partially unveil 
what's under the "array as a random access memory" illusion.


By the way, what does D rewrite:
foreach (e; array) {
    f(e);
}
to? I would guess something along the lines of:
auto p = array.ptr;
auto end = array.ptr + array.length;
while (p != end) {
    f(*p);
    ++p;
}
?

Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-15 Thread Lars T. Kyllingstad
On Mon, 14 Feb 2011 15:03:01 -0500, Steven Schveighoffer wrote:

> I think linker errors in general are one of those things that few people
> understand, and most cope with just pattern recognition "Oh, I see
> _deh_start, probably forgot main()" with no regards to logic. :) 

Please get out of my head. :)

-Lars


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

bearophile wrote:

I agree. CPUs have prefetching instructions, but D doesn't expose them as
intrinsics. A bit more high-level visibility for those instructions may be
positive today.


Huh, I simply could never find a document about how to use those which gave me 
any comfortable sense that the author knew what he was talking about.


The same goes for the memory fence instructions. Talk to 3 experts about them, 
and you get 3 wildly different answers. The Intel docs are zero help.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Don wrote:
In hand-coded asm, instruction scheduling still gives more than half of 
the same benefit that it used to do. But, it's become ten times more 
difficult. You have to use Agner Fog's manuals, not Intel/AMD.


For example:
(1) a common bottleneck on all Intel processors, is that you can only 
read from three registers per cycle, but you can also read from any 
register which has been modified in the last three cycles.

(2) it's important to break dependency chains.

On the BigInt code, instruction scheduling gave a speedup of ~40%.


Wow. I didn't know that. Do any compilers currently schedule this stuff?

Any chance you want to take a look at cgsched.c? I had great success using the 
same algorithm for the quite different Pentium and P6 scheduling minutia.


Re: tooling quality and some random rant

2011-02-14 Thread Don

bearophile wrote:

Don:

But still, cache effects are more important than instruction scheduling 
in 99% of cases.


I agree.
CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A 
bit more high-level visibility for those instructions may be positive today.


A problem with that, is that the prefetching instructions are 
vendor-specific. Also, it's quite difficult to use them correctly. If 
you put them in the wrong place, or use them too much, they slow your 
code down.




D being a systems language, another possible idea is to partially unveil 
what's under the "array as a random access memory" illusion. The memory 
hierarchy makes array access times quite variable according to what level of 
the memory pyramid your data is stored in 
(http://dotnetperls.com/memory-hierarchy ). This is why numeric algorithms 
that work on large arrays benefit a lot from tiling now. The Chapel language 
has language-level support for high-level specification of tilings, while 
Fortran compilers perform some limited forms of tiling by themselves.


I think it is impossible to be a modern systems language without some 
support for the memory hierarchy.
I think we'll be able to take advantage of D's awesome metaprogramming, 
to support cache-aware algorithms. As a first step, I added cache size 
determination to core.cpuid some time ago. We have a long way to go, still.


Re: tooling quality and some random rant (PathScale)

2011-02-14 Thread ./C


> Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:
> 
> 
> How about [2]:
> 
> "LTO is quite promising.  Actually it is in line or even better with
> improvement got from other compilers (pathscale is the most convenient
> compiler to check lto separately: lto gave there upto 5% improvement
> on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50%
> slower and generated code size upto 30% bigger).  LTO in GCC actually
> results in significant code reduction which is quite different from
> pathscale.  That is one of rare cases on my mind when a specific
> optimization works actually better in gcc than in other optimizing
> compilers."
> 
> [2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html

PathScale is in the process of making significant improvements to our IPA 
optimization and welcome feedback and more testers in March.  Please email me 
directly if you're a current customer or not.

Thanks!

Christopher


Re: tooling quality and some random rant

2011-02-14 Thread bearophile
Don:

> But still, cache effects are more important than instruction scheduling 
> in 99% of cases.

I agree.
CPUs have prefetching instructions, but D doesn't expose them as intrinsics. A 
bit more high-level visibility for those instructions may be positive today.

D being a systems language, another possible idea is to partially unveil 
what's under the "array as a random access memory" illusion. The memory 
hierarchy makes array access times quite variable according to what level of 
the memory pyramid your data is stored in 
(http://dotnetperls.com/memory-hierarchy ). This is why numeric algorithms 
that work on large arrays benefit a lot from tiling now. The Chapel language 
has language-level support for high-level specification of tilings, while 
Fortran compilers perform some limited forms of tiling by themselves.

Bye,
bearophile


Re: tooling quality and some random rant

2011-02-14 Thread Don

Walter Bright wrote:

retard wrote:
 > There are no arch specific optimizations for PIII, Pentium 4, Pentium D,
Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from
AMD.

The optimal instruction sequences varied dramatically on those earlier 
processors, but not so much at all on the later ones. Reading the latest 
Intel/AMD instruction set references doesn't even provide that 
information anymore.


In particular, instruction scheduling no longer seems to matter, except 
for the Intel Atom, which benefits very much from Pentium style 
instruction scheduling. Ironically, dmc++ is the only available current 
compiler which supports that.


In hand-coded asm, instruction scheduling still gives more than half of 
the same benefit that it used to do. But, it's become ten times more 
difficult. You have to use Agner Fog's manuals, not Intel/AMD.


For example:
(1) a common bottleneck on all Intel processors, is that you can only 
read from three registers per cycle, but you can also read from any 
register which has been modified in the last three cycles.

(2) it's important to break dependency chains.

On the BigInt code, instruction scheduling gave a speedup of ~40%.

But still, cache effects are more important than instruction scheduling 
in 99% of cases.


No mention of auto-vectorization 


dmc doesn't do auto-vectorization. I agree that's an issue.





 > or whole program

I looked into that, there's not a lot of oil in that well.


 > and instruction level optimizations the very latest GCC and LLVM are 
now slowly adopting.


Huh? Every compiler in existence has done, and always has done, 
instruction level optimizations.



Note: a lot of modern compilers expend tremendous effort optimizing 
access to global variables (often screwing up multithreaded code in the 
process). I've always viewed this as a crock, since modern programming 
style eschews globals as much as possible.


Re: tooling quality and some random rant

2011-02-14 Thread gölgeliyele

On 2/14/11 3:22 PM, retard wrote:


Your obsession with fast compile times is incomprehensible. It doesn't
have any relevance in the projects I'm talking about. On multicore 'make -
jN', distcc & low-cost clusters, and incremental compilation already
mitigate most of the issues. LLVM is also supposed to compile large
projects faster than the 'legacy' gcc. There are also faster linkers than
GNU ld. If you're really obsessed with compile times, there are far
better languages such as D.

The extensive optimizations and fast compile times have an inverse
correlation. Of course your compiler compiles faster if it optimizes
less. What's the point here?

All your examples and stories are from 1980's and 1990's. Any idea how
well dmc fares against latest Intel / Microsoft / GNU compilers?


I work on a >1M LOC C++ project using distcc with 4 nodes and 
ccache. Unfortunately, it is not enough. Yes, there are various cases 
where runtime performance matters a lot. But compile time performance of 
C++ is a huge problem. I am glad that Walter cares about this.


The point about optimizations vs compile time seems to be a valid one. 
However, even without optimizations turned on, gcc sucks big time w.r.t. 
compilation time, and most of that time is spent parsing a gazillion 
headers. I did not have a chance to work with 
Intel's and MS's compilers.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

retard wrote:

Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:


In particular, instruction scheduling no longer seems to matter, except
for the Intel Atom, which benefits very much from Pentium style
instruction scheduling. Ironically, dmc++ is the only available current
compiler which supports that.


I can't see how dmc++ is the only available current compiler which 
supports that. For example this article (April 15, 2010) [1] tells:


"The GCC 4.5 announcement was made at GNU.org. Changes from GCC 4.4, 
which was released almost one year ago, include the

 * use of the MPC library to evaluate complex arithmetic at compile time
 * C++0x improvements
 * automatic parallelization as part of Graphite
 * support for new ARM processors
 * Intel Atom optimizations and tuning support, and
 * AMD Orochi optimizations too"

GCC has supported i586 scheduling as long as I can remember.


"Optimizations and tuning support" is not necessarily scheduling. dmc 
specifically does scheduling for the U and V pipes on the Pentium, and does a 
near perfect job of it (better than any other compiler of the time that I 
checked, most of which didn't even attempt it).


The only way to tell if a compiler does it is by trying it and examining the 
emitted instructions. Reading the marketing literature isn't good enough.




[1] http://www.phoronix.com/scan.php?page=news_item&px=ODE1Ng


 > or whole program

I looked into that, there's not a lot of oil in that well.


How about [2]:

"LTO is quite promising.  Actually it is in line or even better with
improvement got from other compilers (pathscale is the most convenient
compiler to check lto separately: lto gave there upto 5% improvement
on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50%
slower and generated code size upto 30% bigger).  LTO in GCC actually
results in significant code reduction which is quite different from
pathscale.  That is one of rare cases on my mind when a specific
optimization works actually better in gcc than in other optimizing
compilers."

[2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html


LTO is different from whole program analysis.

BTW, you can sometimes get dramatic speedups by running the dmc profiler, and 
then feeding the .def file it generates back into the linker. This will reorder 
the code for optimum speed. That is LTO, but is not whole program optimization.


C++'s compilation model thwarts true whole program analysis at every step. D, on 
the other hand, is designed to support it. dmd has some initial support for 
that, as it will inline code from across any modules you hand it the source for.



In my opinion an up-to-5% improvement is pretty good compared to the 
advances in typical minor compiler version upgrades. For example [3]:


"The Fortran-written NAS Parallel Benchmarks from NASA with the LU.A test 
is running significantly faster with GCC 4.5. This new compiler is 
causing NAS LU.A to run 15% better than the other tested GCC releases."


Yes, 5% is a decent improvement. You'd have to look closer to see where the 
improvement is coming from, though, to draw any useful conclusions. It could be 
(and this happens) one single tweak of one expression node that was crappily 
written in the first place.




[3] http://www.phoronix.com/scan.php?page=article&item=gcc_45_benchmarks&num=6


 > and instruction level optimizations the very latest GCC and LLVM are
 > now
slowly adopting.

Huh? Every compiler in existence has done, and always has done,
instruction level optimizations.


I don't know this area well enough, but here is a list of optimizations 
it does http://llvm.org/docs/Passes.html - from what I've read, GNU GCC 
doesn't implement all of these.


Every compiler implements a list of those, and those lists vary a lot from 
compiler to compiler. dmc probably has a thousand of those patterns embedded in 
it that it specifically recognizes.




Note: a lot of modern compilers expend tremendous effort optimizing
access to global variables (often screwing up multithreaded code in the
process). I've always viewed this as a crock, since modern programming
style eschews globals as much as possible.


I only know that modern C/C++ compilers are doing more and more things 
automatically. And that might soon include automatic vectorization + 
multithreading of some computationally intensive code via OpenMP.


D is actually far friendlier to vectorization than C/C++ are.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 13:00:00 -0800, Walter Bright wrote:

> In particular, instruction scheduling no longer seems to matter, except
> for the Intel Atom, which benefits very much from Pentium style
> instruction scheduling. Ironically, dmc++ is the only available current
> compiler which supports that.

I can't see how dmc++ is the only available current compiler which 
supports that. For example this article (April 15, 2010) [1] tells:

"The GCC 4.5 announcement was made at GNU.org. Changes from GCC 4.4, 
which was released almost one year ago, include the
 * use of the MPC library to evaluate complex arithmetic at compile time
 * C++0x improvements
 * automatic parallelization as part of Graphite
 * support for new ARM processors
 * Intel Atom optimizations and tuning support, and
 * AMD Orochi optimizations too"

GCC has supported i586 scheduling as long as I can remember.

[1] http://www.phoronix.com/scan.php?page=news_item&px=ODE1Ng

>  > or whole program
> 
> I looked into that, there's not a lot of oil in that well.

How about [2]:

"LTO is quite promising.  Actually it is in line or even better with
improvement got from other compilers (pathscale is the most convenient
compiler to check lto separately: lto gave there upto 5% improvement
on SPECFP2000 and 3.5% for SPECInt2000 making compiler about 50%
slower and generated code size upto 30% bigger).  LTO in GCC actually
results in significant code reduction which is quite different from
pathscale.  That is one of rare cases on my mind when a specific
optimization works actually better in gcc than in other optimizing
compilers."

[2] http://gcc.gnu.org/ml/gcc/2009-10/msg00155.html

In my opinion an up-to-5% improvement is pretty good compared to the 
advances in typical minor compiler version upgrades. For example [3]:

"The Fortran-written NAS Parallel Benchmarks from NASA with the LU.A test 
is running significantly faster with GCC 4.5. This new compiler is 
causing NAS LU.A to run 15% better than the other tested GCC releases."

[3] http://www.phoronix.com/scan.php?page=article&item=gcc_45_benchmarks&num=6

>  > and instruction level optimizations the very latest GCC and LLVM are
>  > now
> slowly adopting.
> 
> Huh? Every compiler in existence has done, and always has done,
> instruction level optimizations.

I don't know this area well enough, but here is a list of optimizations 
it does http://llvm.org/docs/Passes.html - from what I've read, GNU GCC 
doesn't implement all of these.

> Note: a lot of modern compilers expend tremendous effort optimizing
> access to global variables (often screwing up multithreaded code in the
> process). I've always viewed this as a crock, since modern programming
> style eschews globals as much as possible.

I only know that modern C/C++ compilers are doing more and more things 
automatically. And that might soon include automatic vectorization + 
multithreading of some computationally intensive code via OpenMP.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Nick Sabalausky wrote:
If it isn't already, maybe all this should be mentioned on the D site. 



Maybe you're right.


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-14 21:43, Nick Sabalausky wrote:

"Jacob Carlborg"  wrote in message
news:ijbtpv$61a$1...@digitalmars.com...

On 2011-02-13 23:38, spir wrote:

On 02/13/2011 10:35 PM, Nick Sabalausky wrote:

"spir"  wrote in message
news:mailman.1602.1297626622.4748.digitalmar...@puremagic.com...


Also, I really miss a D-for-D lexical, syntactic, and semantic analyser
that would produce D data structures. This would open the door to hordes
of projects, including tool chain elements, meta-studies on D,
improvements
of these basic tools (efficiency, semantic analysis), development of
back-ends (including studies on compiler optimisation specific to D's
semantics), etc.
Even more important, the whole community, which is imo rather
high-level,
would be able to take part in such challenges, in their favorite
language.
Isn't it ironic that D depends so much on C++, while many programmers
come to D
fed up with this language, precisely?



DDMD: http://www.dsource.org/projects/ddmd


Definitely a good thing, and more! :-) Thank you for the pointer, Nick.
I will skim across the project as soon as I have some hours free. And
see if --with my very limited competence in the domain-- I can
contribute in any way.
I have an idea for a side-feature if I can understand the produced AST:
generate Types as D data structures on request (--meta), write them into
a plain D module to be imported on need. A major aspect, I guess, of the
'meta' namespace discussed on this list.

Denis


Currently it doesn't compile on Posix, and never has as far as I know.
That's one thing you can help with if you want to. Don't know the status
on Windows



It compiles fine on Windows.

Some of the last few commits were related to compiling on Linux and OSX. Does
the latest version still not work?


No, not if I was the last one who made those commits. Since then a few 
necessary bugs have been fixed in DMD.



--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Nick Sabalausky
"Walter Bright"  wrote in message 
news:ijc4fk$iv3$1...@digitalmars.com...
>
> I hear stuff about how dmc should catch up with LLVM and do modern things 
> like data flow analysis, yet dmc has done data flow analysis since 1985. I 
> also hear that dmc should do named return value optimization, not 
> realizing that dmc *invented* named return value optimization and has done 
> it since 1991. These claims are clearly made simply based on assumptions 
> and reading the marketing literature of other compilers.
>

If it isn't already, maybe all this should be mentioned on the D site. 




Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

retard wrote:
> There are no arch specific optimizations for PIII, Pentium 4, Pentium D,
Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from
AMD.

The optimal instruction sequences varied dramatically on those earlier 
processors, but not so much at all on the later ones. Reading the latest 
Intel/AMD instruction set references doesn't even provide that information anymore.


In particular, instruction scheduling no longer seems to matter, except for the 
Intel Atom, which benefits very much from Pentium style instruction scheduling. 
Ironically, dmc++ is the only available current compiler which supports that.



No mention of auto-vectorization 


dmc doesn't do auto-vectorization. I agree that's an issue.


> or whole program

I looked into that, there's not a lot of oil in that well.


> and instruction level optimizations the very latest GCC and LLVM are now 
slowly adopting.


Huh? Every compiler in existence has done, and always has done, instruction 
level optimizations.



Note: a lot of modern compilers expend tremendous effort optimizing access to 
global variables (often screwing up multithreaded code in the process). I've 
always viewed this as a crock, since modern programming style eschews globals as 
much as possible.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

retard wrote:

Your obsession with fast compile times is incomprehensible.


Yet people complain about excessive compile times with C++ all the time, such as 
overnight builds. Quite a few dmc++ customers stick with it because of compile 
times.




It doesn't have any relevance in the projects I'm talking about.


It's relevant when you claim you cannot create fast code with dmc, since 
dmc is itself built with dmc.



The extensive optimizations and fast compile times have an inverse 
correlation. Of course your compiler compiles faster if it optimizes 
less. What's the point here?


It compiles far faster for debug builds, too. That is directly relevant to 
productivity in the edit/compile/debug loop.


It also makes a big difference to me that I can run the test suite in half an 
hour rather than an hour. It means I'll be less tempted to skip running the suite.



All your examples and stories are from 1980's and 1990's. Any idea how 
well dmc fares against latest Intel / Microsoft / GNU compilers?


Bearophile posted a benchmark last year where he concluded that modern compilers 
like LLVM beat the pants off of primitive, obsolete compilers like dmc for 
integer arithmetic. A little investigation showed it had nothing whatsoever to 
do with the compiler - it was the runtime library implementation of long divide 
that was the culprit. I corrected that, and the runtimes became indistinguishable.


I hear stuff about how dmc should catch up with LLVM and do modern things like 
data flow analysis, yet dmc has done data flow analysis since 1985. I also hear 
that dmc should do named return value optimization, not realizing that dmc 
*invented* named return value optimization and has done it since 1991. These 
claims are clearly made simply based on assumptions and reading the marketing 
literature of other compilers.


The point is, compiler optimizers hit a wall around 15 years ago. Only tiny 
improvements have happened since then. (Not considering vectorization, which is 
a big improvement.)


Where dmc needs improvement is in floating point code, particularly in using XMM 
registers and doing vectorization. dmc does an excellent and competitive job 
with optimization rewrites, register assignment, scheduling and detail code 
generation. There's only so much juice you can get out of those grapes.


Re: tooling quality and some random rant

2011-02-14 Thread Nick Sabalausky
"Jacob Carlborg"  wrote in message 
news:ijbtpv$61a$1...@digitalmars.com...
> On 2011-02-13 23:38, spir wrote:
>> On 02/13/2011 10:35 PM, Nick Sabalausky wrote:
>>> "spir" wrote in message
>>> news:mailman.1602.1297626622.4748.digitalmar...@puremagic.com...

 Also, I really miss a D-for-D lexical, syntactic, and semantic analyser
 that
 would produce D data structures. This would open the door to hordes of
 projects, including tool chain elements, meta-studies on D, 
 improvements
 of these basic tools (efficiency, semantic analysis), development of
 back-ends (including studies on compiler optimisation specific to D's
 semantics), etc.
 Even more important, the whole community, which is imo rather
 high-level,
 would be able to take part in such challenges, in their favorite
 language.
 Isn't it ironic that D depends so much on C++, while many programmers come
 to D
 fed up with this language, precisely?

>>>
>>> DDMD: http://www.dsource.org/projects/ddmd
>>
>> Definitely a good thing, and more! :-) Thank you for the pointer, Nick.
>> I will skim across the project as soon as I have some hours free. And
>> see if --with my very limited competence in the domain-- I can
>> contribute in any way.
>> I have an idea for a side-feature if I can understand the produced AST:
>> generate Types as D data structures on request (--meta), write them into
>> a plain D module to be imported on need. A major aspect, I guess, of the
>> 'meta' namespace discussed on this list.
>>
>> Denis
>
> Currently it doesn't compile on Posix, and never has as far as I know. 
> That's one thing you can help with if you want to. Don't know the status 
> on Windows
>

It compiles fine on Windows.

Some of the last few commits were related to compiling on Linux and OSX. Does 
the latest version still not work?





Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Jacob Carlborg wrote:

Done: http://d.puremagic.com/issues/show_bug.cgi?id=5577


Thank you.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 11:38:50 -0800, Walter Bright wrote:

> Lutger Blijdestijn wrote:
>> retard wrote:
>> 
>>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>>>
> Unfortunately DMC is always out of the question because the
> performance is 10-20 (years) behind competition, fast compilation
> won't help it.
 Can you please give a few links on this?
>>> What kind of proof you need then? Just take some existing piece of
>>> code with high performance requirements and compile it with dmc. You
>>> lose.
>>>
>>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
>>> http://lists.boost.org/boost-testing/2005/06/1520.php
>>> http://www.digitalmars.com/d/archives/c++/chat/66.html
>>> http://www.drdobbs.com/cpp/184405450
>>>
>>>
>> That is ridiculous, have you even bothered to read your own links? In
>> some of them dmc wins, in others the differences are minimal, and for
>> all of them dmc is king in compilation times.
> 
> 
> People tend to see what they want to see. There was a computer magazine
> roundup in the late 1980's where they benchmarked a dozen or so
> compilers. The text enthusiastically declared Borland to be the fastest
> compiler, while their own benchmark tables clearly showed Zortech as
> winning across the board.
> 
> The ironic thing about retard not recommending dmc for fast code is dmc
> is built using dmc, and dmc is *far* faster at compiling than any of the
> others.

Your obsession with fast compile times is incomprehensible. It doesn't 
have any relevance in the projects I'm talking about. On multicore 'make -
jN', distcc & low cost clusters, and incremental compilation already 
mitigate most of the issues. LLVM is also supposed to compile large 
projects faster than the 'legacy' gcc. There are also faster linkers than 
GNU ld. If you're really obsessed with compile times, there are far 
better languages such as D.

The extensive optimizations and fast compile times have an inverse 
correlation. Of course your compiler compiles faster if it optimizes 
less. What's the point here?

All your examples and stories are from 1980's and 1990's. Any idea how 
well dmc fares against latest Intel / Microsoft / GNU compilers? 


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 20:10:47 +0100, Lutger Blijdestijn wrote:

> retard wrote:
> 
>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>> 
 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
>>> 
>>> Can you please give a few links on this?
>> 
>> What kind of proof you need then? Just take some existing piece of code
>> with high performance requirements and compile it with dmc. You lose.
>> 
>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
>> http://lists.boost.org/boost-testing/2005/06/1520.php
>> http://www.digitalmars.com/d/archives/c++/chat/66.html
>> http://www.drdobbs.com/cpp/184405450
>> 
>> 
> That is ridiculous, have you even bothered to read your own links? In
> some of them dmc wins, in others the differences are minimal, and for all
> of them dmc is king in compilation times.

DMC doesn't clearly win in any of the tests and these are merely some 
naive examples I found by doing 5 minutes of googling. Seriously, take a 
closer look - the gcc version is over 5 years old. Nobody even bothers 
doing dmc benchmarks anymore; dmc is so far out of the league. I repeat, this 
was about performance of the generated binaries, not compile times.

Like I said: take some existing piece of code with high performance 
requirements and compile it with dmc. You lose. I honestly don't get what 
I need to prove here. Since you have no clue, presumably you aren't even 
using dmc and won't be considering it.

Just take a look at the command line parameters:
-[0|2|3|4|5|6]  8088/286/386/486/Pentium/P6 code

There are no arch specific optimizations for PIII, Pentium 4, Pentium D, 
Core, Core 2, Core i7, Core i7 2600K, and similar kinds of products from 
AMD. No mention of auto-vectorization or whole program and instruction 
level optimizations the very latest GCC and LLVM are now slowly adopting.


Re: tooling quality and some random rant

2011-02-14 Thread Steven Schveighoffer
On Mon, 14 Feb 2011 14:24:05 -0500, Andrej Mitrovic wrote:



I think this void main() issue is blown out of proportion. They'll see
the error message once, and they won't know what it means. Ok.

But the second time, they'll know. No start address == no main. Maybe
the linker should just add another line saying that you might be
missing main, and that's it.

You guys want to rewrite the compiler for this one silly issue, come on!


No, not at all (at least for me).  I'm just pointing out that the error  
that occurs when main is missing (probably one of the more common linker  
errors) is far more confusing in D than it is in C++.


That doesn't mean D is unusable, or Walter should drop everything and fix  
this problem, or that C++ is better.  It's just an observation.


I think linker errors in general are one of those things that few people  
understand, and most cope with just pattern recognition "Oh, I see  
_deh_start, probably forgot main()" with no regards to logic. :)  "Fixing"  
the linker so it suggests the right thing is likely impossible because the  
linker doesn't know where everything is or what one must include in order  
to satisfy it.


That being said, fixing the linker so it demangles symbols would make the  
errors 10x easier to understand.
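To illustrate why demangling helps (a toy sketch, not OPTLINK's or ld's actual behavior): D mangles qualified names as `_D` followed by length-prefixed identifiers, so even a minimal decoder that ignores the trailing type signature makes symbols like `_D4core6thread6Thread...` far more readable.

```python
def demangle(sym):
    """Toy sketch: turn a D-mangled symbol like '_D4core6thread6Thread'
    into 'core.thread.Thread'. Real D mangling also encodes parameter and
    return types after the name; this decoder simply stops there."""
    if not sym.startswith("_D"):
        return sym  # not a D-mangled symbol; leave it alone
    i, parts = 2, []
    while i < len(sym) and sym[i].isdigit():
        # Read the decimal length prefix...
        j = i
        while j < len(sym) and sym[j].isdigit():
            j += 1
        n = int(sym[i:j])
        # ...then take that many characters as one identifier segment.
        parts.append(sym[j:j + n])
        i = j + n
    return ".".join(parts) if parts else sym

print(demangle("_D4core6thread6Thread"))  # prints "core.thread.Thread"
```

Run over the error dump above, even this crude pass turns `_D2rt6dmain24mainUiPPaZi7runMainMFZv` into something starting with `rt.dmain2.main`, which is the kind of readability gain being asked for.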


-Steve


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

retard wrote:

Mon, 14 Feb 2011 10:01:53 -0800, Walter Bright wrote:


retard wrote:

Mon, 14 Feb 2011 04:44:43 +0200, so wrote:


Unfortunately DMC is always out of the question because the
performance is 10-20 (years) behind competition, fast compilation
won't help it.

Can you please give a few links on this?

What kind of proof you need then? Just take some existing piece of code
with high performance requirements and compile it with dmc. You lose.

http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37

That link shows dmc winning.


No, it doesn't. In the Fib-5 test, where the optimizations bring the 
largest improvements in wall clock time, g++ 3.3.1, vc++7, bc++ 5.5.1, 
and icc are all faster with optimized settings.


And dmc is faster with Fib-25000.



This test is a joke anyway.


You picked these benchmarks, not me.


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-14 19:07, Walter Bright wrote:

Jacob Carlborg wrote:

I agree with you here except for the last sentence. Please stop saying
it's ok just because it's ok in C/C++.


I bring that up because the thread started with the implication that D
was worse than C/C++ in this regard.


Fair enough.

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-14 18:55, Walter Bright wrote:

Jacob Carlborg wrote:

On 2011-02-13 18:36, Andrej Mitrovic wrote:

Could you elaborate on that? Aren't .di files supposed to be
auto-generated by the compiler, and not hand-written?


Yes, but they don't always work.


Where they don't work, please file bug reports to bugzilla.


Done: http://d.puremagic.com/issues/show_bug.cgi?id=5577

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Lutger Blijdestijn wrote:

retard wrote:


Mon, 14 Feb 2011 04:44:43 +0200, so wrote:


Unfortunately DMC is always out of the question because the performance
is 10-20 (years) behind competition, fast compilation won't help it.

Can you please give a few links on this?

What kind of proof you need then? Just take some existing piece of code
with high performance requirements and compile it with dmc. You lose.

http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
http://lists.boost.org/boost-testing/2005/06/1520.php
http://www.digitalmars.com/d/archives/c++/chat/66.html
http://www.drdobbs.com/cpp/184405450



That is ridiculous, have you even bothered to read your own links? In some 
of them dmc wins, in others the differences are minimal, and for all of them 
dmc is king in compilation times.



People tend to see what they want to see. There was a computer magazine roundup 
in the late 1980's where they benchmarked a dozen or so compilers. The text 
enthusiastically declared Borland to be the fastest compiler, while their own 
benchmark tables clearly showed Zortech as winning across the board.


The ironic thing about retard not recommending dmc for fast code is dmc is built 
using dmc, and dmc is *far* faster at compiling than any of the others.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 10:01:53 -0800, Walter Bright wrote:

> retard wrote:
>> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
>> 
 Unfortunately DMC is always out of the question because the
 performance is 10-20 (years) behind competition, fast compilation
 won't help it.
>>> Can you please give a few links on this?
>> 
>> What kind of proof you need then? Just take some existing piece of code
>> with high performance requirements and compile it with dmc. You lose.
>> 
>> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
>> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
> 
> That link shows dmc winning.

No, it doesn't. In the Fib-5 test, where the optimizations bring the 
largest improvements in wall clock time, g++ 3.3.1, vc++7, bc++ 5.5.1, 
and icc are all faster with optimized settings. This test is a joke 
anyway. I wouldn't pick a compiler for video transcoding based on some 
Fib-1 results, seriously.


Re: tooling quality and some random rant

2011-02-14 Thread Andrej Mitrovic
I think this void main() issue is blown out of proportion. They'll see
the error message once, and they won't know what it means. Ok.

But the second time, they'll know. No start address == no main. Maybe
the linker should just add another line saying that you might be
missing main, and that's it.

You guys want to rewrite the compiler for this one silly issue, come on!


Re: tooling quality and some random rant

2011-02-14 Thread Lutger Blijdestijn
retard wrote:

> Mon, 14 Feb 2011 04:44:43 +0200, so wrote:
> 
>>> Unfortunately DMC is always out of the question because the performance
>>> is 10-20 (years) behind competition, fast compilation won't help it.
>> 
>> Can you please give a few links on this?
> 
> What kind of proof you need then? Just take some existing piece of code
> with high performance requirements and compile it with dmc. You lose.
> 
> http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
> http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
> http://lists.boost.org/boost-testing/2005/06/1520.php
> http://www.digitalmars.com/d/archives/c++/chat/66.html
> http://www.drdobbs.com/cpp/184405450
> 

That is ridiculous, have you even bothered to read your own links? In some 
of them dmc wins, in others the differences are minimal, and for all of them 
dmc is king in compilation times.



Re: tooling quality and some random rant

2011-02-14 Thread spir

On 02/14/2011 06:54 PM, Steven Schveighoffer wrote:

On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright 
wrote:


Vladimir Panteleev wrote:

On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 wrote:


golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.

That's not true. The compiler has knowledge of what symbols will be passed
to the linker, and can display its own, much nicer error messages. I've
mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the libraries, and
figuring out what would be pulled in from those libraries. In essence, the
compiler would have to become a linker.

It's not impossible, but is a tremendous amount of work in order to improve
one error message, and one error message that generations of C and C++
programmers are comfortable dealing with.


I'm not saying that this should be done and is worth the tremendous effort.

However, when linking a c++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function `_start':
(.text+0x18): undefined reference to `main'

When linking a d app without a main, we get:


/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(dmain2_517_1a5.o): In
function `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
src/rt/dmain2.d:(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x16): undefined
reference to `_Dmain'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(deh2_4e7_525.o): In
function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4):
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc):
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13):
undefined reference to `_deh_end'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37):
undefined reference to `_deh_end'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_eb_258.o): In
function `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d):
undefined reference to `_tlsend'
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24):
undefined reference to `_tlsstart'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_ee_6e4.o): In
function `thread_attachThis':
src/core/thread.d:(.text.thread_attachThis+0x53): undefined reference to
`_tlsstart'
src/core/thread.d:(.text.thread_attachThis+0x5c): undefined reference to 
`_tlsend'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_e8_713.o): In
function `thread_entryPoint':
src/core/thread.d:(.text.thread_entryPoint+0x29): undefined reference to 
`_tlsend'
src/core/thread.d:(.text.thread_entryPoint+0x2f): undefined reference to
`_tlsstart'
collect2: ld returned 1 exit status
--- errorlevel 1

Let's not pretend that generations of C/C++ coders are going to attribute this
slew of errors to a missing main function. The first time I see this, I'm going
to think I missed something else.

I understand that to fix this, we need the linker to be more helpful, or we
need to make dmd more helpful. I don't know how much effort it is, or how much
it's worth it, I just wanted to point out that your statement about equivalence
to C++ is stretching it.

I personally think we need to get the linker to demangle symbols better. That
would go a long way...


The "public" problem is not the (admittedly very bad) error message in 
itself. The problem imo is that newcomers are very likely to stumble on such 
messages (or points of similar friendliness) at the very start of their 
adventures with D, and thus think D tools just treat programmers that way, and 
that the D community finds this just normal. Oops!


I would be happy for dmd to assume the main function is supposed to be located 
in the very first module passed on the command line, if this can help. What do you think?

"Error: cannot find main() function in module 'app.d'."
(But this would not solve the case of /multiple/ mains, which happens to me 
several times a day, namely each time I run an imported module's test 
suite separately ;-)


Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-14 00:01, Walter Bright wrote:

Michel Fortin wrote:

But note I was replying to your reply to Denis who asked specifically
for demangled names for missing symbols. This by itself would be a
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago
for C++ and Optlink. Nobody cared, including the people who asked for
that feature. It's a bit demotivating to bother doing that again.


Maybe you can give it another try; there's a completely new community 
here now (I assume).


On the other hand, that's unfortunately how people behave. They loudly 
complain when there's something they don't like and they sit silently 
when they're happy.


--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-14 00:28, retard wrote:

Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:


On 2/13/2011 3:01 PM, Walter Bright wrote:

Michel Fortin wrote:

But note I was replying to your reply to Denis who asked specifically
for demangled names for missing symbols. This by itself would be a
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago
for C++ and Optlink. Nobody cared, including the people who asked for
that feature. It's a bit demotivating to bother doing that again.


No offense, but this argument gets kinda old and it's incredibly weak.

Today's tooling expectations are higher.  The audience isn't the same.
And clearly people are asking for it.  Even with the past version, I
highly doubt no one cared; you just didn't hear from those who liked
it.  After all, few people go out of their way to talk about what they
like, just what they don't.


Half of the readers have already added me to their killfile, but here
goes some on-topic humor:

http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg


I had something similar with an attachable keyboard.


Sometimes people don't yet know what they want.

For example, the reason we write portable C++ in some projects is that
it's easier to switch between VC++, ICC, GCC, and LLVM. Whichever
produces the best-performing code. Unfortunately DMC is always out of the
question because the performance is 10-20 years behind the competition; fast
compilation won't help it.



--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 23:38, spir wrote:

On 02/13/2011 10:35 PM, Nick Sabalausky wrote:

"spir" wrote in message
news:mailman.1602.1297626622.4748.digitalmar...@puremagic.com...


Also, I really miss a D-for-D lexical, syntactic, and semantic analyser
that would produce D data structures. This would open the door to hordes of
projects, including toolchain elements, meta-studies on D, improvements
of these basic tools (efficiency, semantic analysis), development of
back-ends (including studies on compiler optimisation specific to D's
semantics), etc.
Even more important, the whole community, which is imo rather
high-level, would be able to take part in such challenges, in their
favorite language.
Isn't it ironic that D depends so much on C++, while many programmers come
to D fed up with that very language?



DDMD: http://www.dsource.org/projects/ddmd


Definitely a good thing, and more! :-) Thank you for the pointer, Nick.
I will skim through the project as soon as I have some hours free, and
see if --with my very limited competence in the domain-- I can
contribute in any way.
I have an idea for a side-feature if I can understand the produced AST:
generate types as D data structures on request (--meta), and write them into
a plain D module to be imported as needed. A major aspect, I guess, of the
'meta' namespace discussed on this list.

Denis


Currently it doesn't compile on Posix, and never has as far as I know. 
That's one thing you can help with if you want to. I don't know the status 
on Windows.


--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Steven Schveighoffer
On Mon, 14 Feb 2011 13:24:26 -0500, Walter Bright  
 wrote:



Steven Schveighoffer wrote:
On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright  
 wrote:



Vladimir Panteleev wrote:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 wrote:



golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.
 That's not true. The compiler has knowledge of what symbols will be  
passed to the linker, and can display its own, much nicer error  
messages. I've mentioned this in our previous discussion on this  
topic.


Not without reading the .o files passed to the linker, and the  
libraries, and figuring out what would be pulled in from those  
libraries. In essence, the compiler would have to become a linker.


It's not impossible, but is a tremendous amount of work in order to  
improve one error message, and one error message that generations of C  
and C++ programmers are comfortable dealing with.
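For readers unfamiliar with why this is so much work: a static linker starts from the objects' undefined symbols and keeps pulling in library members that define them, which can surface new undefined symbols in turn. Here is a toy model of that resolution loop, in Python with invented symbol sets; nothing in it reflects dmd's or ld's actual code, it only illustrates the computation the compiler would have to replicate.

```python
# Toy model of static-link symbol resolution. Objects are always linked;
# a library member is pulled in only when it defines a symbol that is
# still missing -- the classic static-archive rule.

def resolve(objects, library):
    """objects, library: lists of (defined, undefined) symbol-name sets.
    Returns the symbols the linker would report as undefined."""
    defined, undefined = set(), set()
    for d, u in objects:          # every object file is linked in
        defined |= d
        undefined |= u
    pulled = True
    while pulled:                 # iterate until no member gets pulled in
        undefined -= defined
        pulled = False
        for d, u in library:
            if d & undefined and not d <= defined:
                defined |= d      # pulling a member adds its definitions...
                undefined |= u    # ...and its own undefined references
                pulled = True
    return undefined - defined    # what "undefined reference to ..." reports

# A bare C runtime with no main(), as in the error quoted above:
print(resolve([({"_start"}, {"main"})], []))  # -> {'main'}
```

The point of the sketch is that answering "will _Dmain end up missing?" requires running exactly this loop over every object file and library on the command line, which is the linker's job, not the compiler's.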
 I'm not saying that this should be done and is worth the tremendous  
effort.

 However, when linking a c++ app without a main, here is what I get:
 /usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function  
`_start':

(.text+0x18): undefined reference to `main'
 When linking a d app without a main, we get:
   
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(dmain2_517_1a5.o):  
In function `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
src/rt/dmain2.d:(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x16):  
undefined reference to `_Dmain'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(deh2_4e7_525.o):  
In function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4):  
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc):  
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13):  
undefined reference to `_deh_end'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37):  
undefined reference to `_deh_end'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_eb_258.o):  
In function `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d):  
undefined reference to `_tlsend'
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24):  
undefined reference to `_tlsstart'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_ee_6e4.o):  
In function `thread_attachThis':
src/core/thread.d:(.text.thread_attachThis+0x53): undefined reference  
to `_tlsstart'
src/core/thread.d:(.text.thread_attachThis+0x5c): undefined reference  
to `_tlsend'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_e8_713.o):  
In function `thread_entryPoint':
src/core/thread.d:(.text.thread_entryPoint+0x29): undefined reference  
to `_tlsend'
src/core/thread.d:(.text.thread_entryPoint+0x2f): undefined reference  
to `_tlsstart'

collect2: ld returned 1 exit status
--- errorlevel 1
Let's not pretend that generations of C/C++ coders are going to  
attribute this slew of errors to a missing main function.


I understand what you're saying, but experienced C/C++ programmers are  
used to paying attention only to the first error message :-)


Really?  I find that in a mess of linker errors, the error isn't always  
the first line.  It doesn't help that the name of the function "missing"  
is not called main (as it is called in the d source file).


But like I said, it's not critical -- the error is listed, it's just not  
as user-friendly as the C++ error.




I personally think we need to get the linker to demangle symbols  
better.  That would go a long way...


Not for the above messages.


I meant to demangle things like _D2rt6dmain24mainUiPPaZi7runMainMFZv.  Note  
how the _Dmain is buried between some of these large symbols.  Those  
seemingly random nonsense symbols make the whole error listing seem  
unreadable.
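Extracting the qualified name from such a symbol is mostly mechanical: after the `_D` prefix, D mangles each package and identifier with a decimal length prefix, followed by characters encoding the type. A rough sketch of just that first step (in Python, as a toy; it is not a real demangler and ignores the type suffix and nested symbols such as the trailing `runMain` here):

```python
import re

def demangle_qualified_name(sym):
    """Extract the dot-qualified name from a D mangled symbol, or None.

    D mangles `rt.dmain2.main` as `_D2rt6dmain24main` plus type
    characters: each identifier carries its decimal length as a prefix.
    We read length-prefixed identifiers until the first character that
    is not a digit (i.e. the start of the type encoding).
    """
    if not sym.startswith("_D"):
        return None
    rest, parts = sym[2:], []
    while True:
        m = re.match(r"(\d+)", rest)
        if not m:
            break
        n = int(m.group(1))
        ident = rest[m.end():m.end() + n]
        if len(ident) < n:        # malformed / truncated symbol
            break
        parts.append(ident)
        rest = rest[m.end() + n:]
    return ".".join(parts) if parts else None

print(demangle_qualified_name("_D2rt6dmain24mainUiPPaZi7runMainMFZv"))
# -> rt.dmain2.main
```

Even this crude transformation would turn the wall of symbols above into recognizable module paths like `core.thread.Thread.__ctor`; a full demangler additionally has to decode parameter and return types.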


-Steve


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 20:49, Lutger Blijdestijn wrote:

gölgeliyele wrote:
...


I think what we need here is numbers from a project that everyone has
access to. What is the largest D project right now? Can we get numbers on
that? How much time does it take to compile that project after a change
(assuming we are feeding all .d files at once)?


Well you can take phobos, I believe Andrei used it once to compare against
Go. With std.datetime it is now also much bigger :)

Tango is another large project, I remember someone posted a compilation
speed of a couple of seconds (Tango is huge, perhaps 300KLoC).

But projects and settings may vary a lot. For sure, optlink is one hell of a
speed monster and you might not get similar speeds with ld on a large
project.


It takes around 12.5 seconds for my machine to build Tango using the bob 
executable.


2.4Ghz Intel Core 2 Duo
2G RAM
Mac OS X 10.6.6

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Steven Schveighoffer wrote:
On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright 
 wrote:



Vladimir Panteleev wrote:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright 
 wrote:



golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.
 That's not true. The compiler has knowledge of what symbols will be 
passed to the linker, and can display its own, much nicer error 
messages. I've mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the 
libraries, and figuring out what would be pulled in from those 
libraries. In essence, the compiler would have to become a linker.


It's not impossible, but is a tremendous amount of work in order to 
improve one error message, and one error message that generations of C 
and C++ programmers are comfortable dealing with.


I'm not saying that this should be done and is worth the tremendous effort.

However, when linking a c++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function 
`_start':

(.text+0x18): undefined reference to `main'

When linking a d app without a main, we get:


/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(dmain2_517_1a5.o): 
In function `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
src/rt/dmain2.d:(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x16): 
undefined reference to `_Dmain'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(deh2_4e7_525.o): In 
function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4): 
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc): 
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13): 
undefined reference to `_deh_end'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37): 
undefined reference to `_deh_end'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_eb_258.o): 
In function `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24): 
undefined reference to `_tlsstart'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_ee_6e4.o): 
In function `thread_attachThis':
src/core/thread.d:(.text.thread_attachThis+0x53): undefined reference to 
`_tlsstart'
src/core/thread.d:(.text.thread_attachThis+0x5c): undefined reference to 
`_tlsend'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_e8_713.o): 
In function `thread_entryPoint':
src/core/thread.d:(.text.thread_entryPoint+0x29): undefined reference to 
`_tlsend'
src/core/thread.d:(.text.thread_entryPoint+0x2f): undefined reference to 
`_tlsstart'

collect2: ld returned 1 exit status
--- errorlevel 1

Let's not pretend that generations of C/C++ coders are going to 
attribute this slew of errors to a missing main function.


I understand what you're saying, but experienced C/C++ programmers are used to 
paying attention only to the first error message :-)


I personally think we need to get the linker to demangle symbols 
better.  That would go a long way...


Not for the above messages.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Jacob Carlborg wrote:
I agree with you here except for the last sentence. Please stop saying 
it's ok just because it's ok in C/C++.


I bring that up because the thread started with the implication that D was worse 
than C/C++ in this regard.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Andrej Mitrovic wrote:

I've no idea. But Optlink actually has a switch you can use to disable
outputting corrupt executables. I've no idea what the use case for
this is.


It's from the olden days when you could use Optlink to create all sorts of 
specialized binary files, such as ones you'd be blowing into EEPROMs. Those did 
not have normal start addresses.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

retard wrote:

Mon, 14 Feb 2011 04:44:43 +0200, so wrote:


Unfortunately DMC is always out of the question because the performance
is 10-20 (years) behind competition, fast compilation won't help it.

Can you please give a few links on this?


What kind of proof do you need, then? Just take some existing piece of code 
with high performance requirements and compile it with dmc. You lose.


http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37


That link shows dmc winning.


http://lists.boost.org/boost-testing/2005/06/1520.php
http://www.digitalmars.com/d/archives/c++/chat/66.html
http://www.drdobbs.com/cpp/184405450

Many of those are already old. GCC 4.6, LLVM 2.9, and ICC 12 are much 
faster, especially on multicore hardware. A quick look at the DMC changelog 
doesn't reveal any significant new optimizations during the past 10 years, 
except some Pentium 4 opcodes and fixes at the library level.


I rarely see a benchmark where DMC produces the fastest code. In addition, 
most open source projects are not compatible with DMC's toolchain out of 
the box. If execution performance of the generated code is your top 
priority, I wouldn't recommend using DigitalMars products.


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Lutger Blijdestijn wrote:
Let me take the opportunity to say I care about an unrelated usability 
feature: the spelling suggestion. However small it's pretty nice so thanks 
for doing that.


I like that one too, I liked it so much I wired it into dmc++ as well!


Re: tooling quality and some random rant

2011-02-14 Thread Walter Bright

Jacob Carlborg wrote:

On 2011-02-13 18:36, Andrej Mitrovic wrote:

Could you elaborate on that? Aren't .di files supposed to be
auto-generated by the compiler, and not hand-written?


Yes, but they don't always work.


Where they don't work, please file bug reports to bugzilla.


Re: tooling quality and some random rant

2011-02-14 Thread Steven Schveighoffer
On Sun, 13 Feb 2011 14:12:02 -0500, Walter Bright  
 wrote:



Vladimir Panteleev wrote:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 wrote:



golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.
 That's not true. The compiler has knowledge of what symbols will be  
passed to the linker, and can display its own, much nicer error  
messages. I've mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the  
libraries, and figuring out what would be pulled in from those  
libraries. In essence, the compiler would have to become a linker.


It's not impossible, but is a tremendous amount of work in order to  
improve one error message, and one error message that generations of C  
and C++ programmers are comfortable dealing with.


I'm not saying that this should be done and is worth the tremendous effort.

However, when linking a c++ app without a main, here is what I get:

/usr/lib/gcc/i686-linux-gnu/4.4.5/../../../../lib/crt1.o: In function  
`_start':

(.text+0x18): undefined reference to `main'

When linking a d app without a main, we get:


/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(dmain2_517_1a5.o): In  
function `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
src/rt/dmain2.d:(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x16):  
undefined reference to `_Dmain'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(deh2_4e7_525.o): In  
function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4):  
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc):  
undefined reference to `_deh_beg'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13):  
undefined reference to `_deh_end'
src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37):  
undefined reference to `_deh_end'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_eb_258.o): In  
function `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d):  
undefined reference to `_tlsend'
src/core/thread.d:(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24):  
undefined reference to `_tlsstart'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_ee_6e4.o): In  
function `thread_attachThis':
src/core/thread.d:(.text.thread_attachThis+0x53): undefined reference to  
`_tlsstart'
src/core/thread.d:(.text.thread_attachThis+0x5c): undefined reference to  
`_tlsend'
/home/steves/dmd-2.051/linux/bin/../lib/libphobos2.a(thread_e8_713.o): In  
function `thread_entryPoint':
src/core/thread.d:(.text.thread_entryPoint+0x29): undefined reference to  
`_tlsend'
src/core/thread.d:(.text.thread_entryPoint+0x2f): undefined reference to  
`_tlsstart'

collect2: ld returned 1 exit status
--- errorlevel 1

Let's not pretend that generations of C/C++ coders are going to attribute  
this slew of errors to a missing main function.  The first time I see  
this, I'm going to think I missed something else.


I understand that to fix this, we need the linker to be more helpful, or  
we need to make dmd more helpful.  I don't know how much effort it is, or  
how much it's worth it, I just wanted to point out that your statement  
about equivalence to C++ is stretching it.


I personally think we need to get the linker to demangle symbols better.   
That would go a long way...


-Steve


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 20:12, Walter Bright wrote:

Vladimir Panteleev wrote:

On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 wrote:


golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.


That's not true. The compiler has knowledge of what symbols will be
passed to the linker, and can display its own, much nicer error
messages. I've mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the
libraries, and figuring out what would be pulled in from those
libraries. In essence, the compiler would have to become a linker.

It's not impossible, but is a tremendous amount of work in order to
improve one error message, and one error message that generations of C
and C++ programmers are comfortable dealing with.


I agree with you here except for the last sentence. Please stop saying 
it's ok just because it's ok in C/C++. Isn't that why we use D, because 
we're not satisfied with C/C++.


--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 19:42, Vladimir Panteleev wrote:

On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright
 wrote:


golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.


That's not true. The compiler has knowledge of what symbols will be
passed to the linker, and can display its own, much nicer error
messages. I've mentioned this in our previous discussion on this topic.



Would the compiler be able to figure out whether you're building a library or 
an executable?


--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Andrej Mitrovic
On 2/14/11, Don  wrote:
>
> Why is that a "warning"?
> Why on earth does it create a corrupt exe file, instead of reporting an
> error???
>

I've no idea. But Optlink actually has a switch you can use to disable
outputting corrupt executables. I've no idea what the use case for
this is.


Re: tooling quality and some random rant

2011-02-14 Thread retard
Mon, 14 Feb 2011 04:44:43 +0200, so wrote:

>> Unfortunately DMC is always out of the question because the performance
>> is 10-20 (years) behind competition, fast compilation won't help it.
> 
> Can you please give a few links on this?

What kind of proof do you need, then? Just take some existing piece of code 
with high performance requirements and compile it with dmc. You lose.

http://biolpc22.york.ac.uk/wx/wxhatch/wxMSW_Compiler_choice.html
http://permalink.gmane.org/gmane.comp.lang.c++.perfometer/37
http://lists.boost.org/boost-testing/2005/06/1520.php
http://www.digitalmars.com/d/archives/c++/chat/66.html
http://www.drdobbs.com/cpp/184405450

Many of those are already old. GCC 4.6, LLVM 2.9, and ICC 12 are much 
faster, especially on multicore hardware. A quick look at the DMC changelog 
doesn't reveal any significant new optimizations during the past 10 years, 
except some Pentium 4 opcodes and fixes at the library level.

I rarely see a benchmark where DMC produces the fastest code. In addition, 
most open source projects are not compatible with DMC's toolchain out of 
the box. If execution performance of the generated code is your top 
priority, I wouldn't recommend using DigitalMars products.


Re: tooling quality and some random rant

2011-02-14 Thread Don

Andrej Mitrovic wrote:

Don't forget DLLs.

But why not just change the linker error message from:
OPTLINK : Warning 134: No Start Address

to:
OPTLINK : Warning 134: No Start Address
"Are you missing a main() function?"


Why is that a "warning"?
Why on earth does it create a corrupt exe file, instead of reporting an 
error???


Re: tooling quality and some random rant

2011-02-14 Thread Andrej Mitrovic
Don't forget DLLs.

But why not just change the linker error message from:
OPTLINK : Warning 134: No Start Address

to:
OPTLINK : Warning 134: No Start Address
"Are you missing a main() function?"


Re: tooling quality and some random rant

2011-02-14 Thread Vladimir Panteleev
On Sun, 13 Feb 2011 21:12:02 +0200, Walter Bright  
 wrote:



Vladimir Panteleev wrote:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 wrote:



golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.
 That's not true. The compiler has knowledge of what symbols will be  
passed to the linker, and can display its own, much nicer error  
messages. I've mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the  
libraries, and figuring out what would be pulled in from those  
libraries. In essence, the compiler would have to become a linker.


You are trying to solve a much bigger problem, which indeed sounds like a  
lot of effort for something so insignificant. What I'm talking about is  
much simpler.


Let's take two cases which will cover over 99% of such cases when using  
DMD.


In both cases, the user only passes .d files to DMD, no extra .obj or .lib  
files, as is the case most of the time:


1) The user forgot to declare main().

If you don't pass the -c or -lib switches to the compiler, it's reasonable  
to expect that the user wants to compile and link an executable. But DMD  
knows that there is no D main() symbol in the files passed to it! So it  
can print a nice error message without having to run the linker to print  
its ugly one.


2) The user didn't pass all of his program's modules to the compiler.

By far the most common cause, we've discussed this one before. It only  
requires knowing if a certain module is part of the standard library or  
not. Even simply doing it for modules present in the current directory  
would help. I know it's not consistent, but neither is import hinting for  
certain standard library functions, and both are great ideas.
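Case 1 in particular needs nothing from the linker: the compiler already knows every function defined in the modules it was handed. The decision can be sketched in a few lines of Python (the function shape, flag handling, and module mapping are all invented for illustration; dmd's front-end internals look nothing like this):

```python
def pre_link_check(modules, flags):
    """Return a friendly error message before invoking the linker, or None.

    modules: hypothetical mapping of module name -> set of top-level
    functions the compiler saw in it (a stand-in for what the front end
    knows after semantic analysis).
    flags: the command-line switches passed to the compiler.
    """
    # With -c or -lib, no executable is being linked, so main() is optional.
    if {"-c", "-lib"} & set(flags):
        return None
    if not any("main" in funcs for funcs in modules.values()):
        return ("Error: building an executable, but no main() found in: "
                + ", ".join(sorted(modules)))
    return None

print(pre_link_check({"app": {"run", "helper"}}, []))
# -> Error: building an executable, but no main() found in: app
```

This covers only the missing-main() case; case 2 (a module not passed on the command line) still needs the heuristics Vladimir describes about distinguishing standard-library modules from the user's own.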


--
Best regards,
 Vladimir  mailto:vladi...@thecybershadow.net


Re: tooling quality and some random rant

2011-02-14 Thread Lutger Blijdestijn
Walter Bright wrote:

> Michel Fortin wrote:
>> But note I was replying to your reply to Denis who asked specifically
>> for demangled names for missing symbols. This by itself would be a
>> useful improvement.
> 
> I agree with that, but there's a caveat. I did such a thing years ago for
> C++ and Optlink. Nobody cared, including the people who asked for that
> feature. It's a bit demotivating to bother doing that again.

Let me take the opportunity to say I care about an unrelated usability 
feature: the spelling suggestion. However small it's pretty nice so thanks 
for doing that.


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 18:36, Andrej Mitrovic wrote:

On 2/13/11, Alan Smithee  wrote:

You can do the same in D using .di files.


Except no one really does that because such an approach is insanely
error-prone. E.g. with classes, you need to copy entire definitions.
Change any ordering, forget a field, change a type, and you get
undefined behavior.



Could you elaborate on that? Aren't .di files supposed to be
auto-generated by the compiler, and not hand-written?


Yes, but they don't always work.

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 18:19, golgeliyele wrote:

p.s.: Does anyone know what the best way to use this newsgroup is? Is there a 
better web interface? If not, is there a free newsgroup reader (for the Mac) 
that is easy to use?


I'm using Thunderbird.

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 13:24, Nick Sabalausky wrote:

"Peter Alexander"  wrote in message
news:ij8a8p$2gqv$1...@digitalmars.com...

On 13/02/11 10:10 AM, Peter Alexander wrote:

On 13/02/11 6:52 AM, Nick Sabalausky wrote:

D compiles a few orders of magnitude faster than C++ does. Better
handling
of incremental building might be nice for really large projects, but
it's
really not a big issue for D, not like it is for C++.


The only person I know that's worked on large D projects is Tomasz, and
he claimed that he was getting faster compile times in C++ due to being
able to do incremental builds.

"Walter might claim that DMD is fast, but it’s not exactly blazing when
you confront it with a few hundred thousand lines of code. With C/C++,
you’d split your source into .c and .h files, which means that a
localized change of a .c file only requires the compilation of a single
unit. Take an incremental linker as well, and C++ compiles faster than
D. With D you often have the situation of having to recompile everything
upon the slightest change." (http://h3.gd/devlog/?p=22)


Turns out this may have been solved:
https://bitbucket.org/h3r3tic/xfbuild/wiki/Home


The problem that xfbuild ended up running into is that DMD puts the
generated code for instantiated templates into an unpredictable object file.
This leads to situations where certain functions end up being lost from the
object files unless you do a full rebuild. Essentially, it breaks incremental
compilation. There's a detailed explanation of it somewhere on the xfbuild
site.
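For contrast, the core decision an mtime-based incremental build tool makes can be sketched in a few lines, and the template problem above is exactly what such a model misses: a needed instantiation can live in an object file whose own source never changed. A Python sketch of the timestamp logic only (illustrative, not xfbuild's actual algorithm):

```python
def modules_to_rebuild(src_mtime, obj_mtime, deps):
    """Pick modules whose source, or any (transitively) imported
    module's source, is newer than their object file.

    src_mtime / obj_mtime: module name -> modification time (a number).
    deps: module name -> list of modules it directly imports.
    """
    def newest_source(mod, seen=None):
        # Newest mtime reachable through the import graph from `mod`,
        # with a cycle guard for mutually importing modules.
        seen = seen if seen is not None else set()
        if mod in seen:
            return 0
        seen.add(mod)
        t = src_mtime.get(mod, 0)
        for dep in deps.get(mod, ()):
            t = max(t, newest_source(dep, seen))
        return t

    return {m for m in src_mtime
            if m not in obj_mtime or newest_source(m) > obj_mtime[m]}

# 'b' changed (mtime 5) after both objects were built (mtime 2);
# 'a' imports 'b', so both modules need recompiling.
print(sorted(modules_to_rebuild({"a": 1, "b": 5}, {"a": 2, "b": 2},
                                {"a": ["b"]})))
# -> ['a', 'b']
```

What this model cannot see is which object file DMD chose to emit a given template instantiation into, which is why xfbuild could drop functions without a full rebuild.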


Walter has said in a thread here that if you build with the -lib option 
it will output all templates into all object files.


--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread Jacob Carlborg

On 2011-02-13 16:07, Gary Whatmore wrote:

Paulo Pinto Wrote:


"Nick Sabalausky"  wrote in message
news:ij7v76$1q4t$1...@digitalmars.com...

... (cutted) ...

That's not the compiler, that's the linker. I don't know what linker DMD
uses on OSX, but on Windows it uses OPTLINK which is written in
hand-optimized Asm so it's really hard to change. But Walter's been
converting it to C (and maybe then to D once that's done) bit-by-bit (so
to speak), so linker improvements are at least on the horizon.

...


Why C and not directly D?

It is really bad advertising for D that when its creator came around
to rewriting the linker, Walter decided to use C instead of D.


I'm guessing that Walter feels more familiar and comfortable developing in C/C++ 
than in D. He's the creator of D, but has written very small amounts of D 
and probably cannot write idiomatic D very fluently. Another issue is the 
immature toolchain.

This might sound like blasphemy, but I believe the skills and knowledge for 
developing large scale applications in language XYZ cannot be extrapolated from 
small code snippets or from experience with projects in other languages. You 
just need to eat your own dogfood and get your feet wet by doing.

People like the Tango's 'kris' and this 'h3r3tic' are the real world D experts. 
Sadly they've all left D. We need a new generation of experts, because these 
old guys ranting about every issue are more harmful than good to the community.


Kris is still around.

--
/Jacob Carlborg


Re: tooling quality and some random rant

2011-02-14 Thread spir

On 02/14/2011 02:29 AM, Denis Koroskin wrote:

On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright 
wrote:


Michel Fortin wrote:

But note I was replying to your reply to Denis who asked specifically for
demangled names for missing symbols. This by itself would be a useful
improvement.


I agree with that, but there's a caveat. I did such a thing years ago for C++
and Optlink. Nobody cared, including the people who asked for that feature.
It's a bit demotivating to bother doing that again.


Many people are ungrateful by nature. They complain about missing features while
taking existing ones for granted.
It doesn't mean no one cares about them. If no one cared, why would we
even discuss those features?


Very often, heavily discussed designs are somewhat good. When they are truly 
bad, one does not even know where or how to start criticising... We just feel 
their wrongness, but expressing it is hard, and proposing improvements even 
more so; which makes us wish for a blank page.
Good designs show their bugs much more obviously; everyone can join the critic 
dance ;-)


Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-14 Thread Don

golgeliyele wrote:

I am relatively new to D. As a long time C++ coder, I love D. Recently, I have 
started doing some coding with D. One of the things that
bothered me was the 'perceived' quality of the tooling. There are some 
relatively minor things that make the tooling look bad.



The error reporting has issues as well. I noticed that the compiler leaks low 
level errors to the user. If you forget to add a main to your
app or misspell it, you get errors like:

Undefined symbols:
  "__Dmain", referenced from:
  _D2rt6dmain24mainUiPPaZi7runMainMFZv in libphobos2.a(dmain2_513_1a5.o)

I mean, wow, this should really be handled better.


Not solvable in general, but still solvable in the cases that matter. 
Created a bug report:

http://d.puremagic.com/issues/show_bug.cgi?id=5573


Re: tooling quality and some random rant

2011-02-13 Thread so

On Sun, 13 Feb 2011 19:47:30 +0200, Alan Smithee  wrote:


Gary Whatmore Wrote (fixed that for you):

Let's try to act reasonably here. Walter fanboyism is already
getting old, and it is sadly favored by our famous NG trolls, that is
pretty much everyone here. I wouldn't be shocked to hear that this Gary
Whatmore will be bashing D in about 2 years' time when he realizes
how naive he has been.

The creators haven't even attempted eating their own dog food. On
the other hand it's crystal clear that such a task as writing a
language and its compiler without any support from anyone is the
very definition of "Not Invented Here" that only a handful of
developers are willing to pursue on this planet. As a result D is
one of the most broken languages ever built. I honestly wish we
would sometimes question Walter's competence. He only has so much
time. All this love talk here blinds even more potential users. We
would already have a working compiler if they didn't want to
reinvent everything.


This love talk exists just because some people occasionally insult  
people (especially Walter) with no basis whatsoever.
You might come here, state your problems and opinions, and propose solutions if  
you have any in mind. But no, they prefer bitching and insulting.
People might respect Walter, and this might shade into "fanboyism". Insulting  
him, on the other hand, is disgusting and baseless.
Is he forcing anyone to use D? He is just minding his own business as  
far as I can see.


If you think something is broken, prove it and try to find a solution.
If the community doesn't help you, leave them to their misery; there  
are other languages after all.


One thing you are right about is that languages are designed by  
"designers"; it has been like this for a long time.


Re: tooling quality and some random rant

2011-02-13 Thread Kevin Bealer
Sorry this was a completely unintentional error --- I meant to say "in case 
anyone
doubts Gary's post".  Blame the lateness of the night and/or my annoyingly lossy
wireless keyboard.

Kevin


Re: tooling quality and some random rant

2011-02-13 Thread Kevin Bealer
> our famous Reddit trolls, that is retard = uriel = eternium = lurker

In case anyone doubts Gary's guess... for those who don't follow entertainment
trivia, Alan Smithee is a pseudonym used by directors disowning a film (google
it).  So anyone using this name is effectively *claiming* to be an 
imposter.

K


Re: tooling quality and some random rant

2011-02-13 Thread gölgeliyele

On 2/13/11 2:05 PM, Walter Bright wrote:

golgeliyele wrote:



2. dmd compiler's command line options:
This is mostly an esthetic issue. However, it is like the entrance to
your house: people who are not sure about entering
care about what it looks like from the outside. If Walter is willing,
I can work on a command line options interface proposal
that would keep backwards compatibility with the existing options.
This would enable a staged transition. Would there be
an interest in this?


A proposal would be nice. But please keep in mind that people often view
their build systems / makefiles as black boxes, and breaking them with
incompatible changes can be extremely annoying.


Here is one proposal:

Digital Mars D Compiler v2.051
Copyright (c) 1999-2010 by Digital Mars written by Walter Bright
Documentation: http://www.digitalmars.com/d/2.0/index.html
Usage:
  dmd [options] <files.d>

  <files.d>                    D source files

Options:
  --commands <file>            read arguments from a command file
  -c, --compile                only compile, do not link
  --coverage                   do code coverage analysis
  -D, --ddoc                   generate documentation
  --ddoc-dir <dir>             write documentation file to a directory
  --ddoc-file <file>           write documentation file to a file
  -d, --deprecated             allow deprecated features
  --debug                      compile in debug code
  --debug-level <level>        compile in debug code <= level
  --debug-ident <ident>        compile in debug code identified by ident
  --debug-lib <name>           set symbolic debug library to name
  --default-lib <name>         set default library to name
  --dependencies <file>        write module dependencies to a file
  --dylib                      generate dylib
  -g, --sym-debug              add symbolic debug info
  --sym-debug-c                add symbolic debug info, pretend to be C
  -H, --header                 generate 'header' file
  --header-dir <dir>           write 'header' file to a directory
  --header-file <file>         write 'header' file to a file
  --help                       print this help
  -I, --imports <path>         where to look for imports
  --ignore-bad-pragmas         ignore unsupported pragmas
  --inline                     do function inlining
  -J, --string-imports <path>  where to look for string imports
  -L, --linker-flags <flag>    pass flags to the linker
  --lib                        generate library rather than object files
  --man                        open web browser on manual page
  --linker-map                 generate linker .map file
  --no-bounds-check            turn off array bounds checking
  --no-float                   do not emit reference to floating point
  -O, --optimize               optimize
  -n, --no-object-file         do not write object file
  --object-dir <dir>           write object, library files to a directory
  --output <file>              name output file to a file name
  --no-path-strip              do not strip paths from source file
  --profile                    profile runtime performance of code
  --quiet                      suppress unnecessary messages
  --release                    compile release version
  --run <args>                 run resulting program file, passing args
  --unittest                   compile in unit tests
  -v, --verbose                verbose
  --version-level <level>      compile in version >= level
  --version-ident <ident>      compile in version identified by ident
  --tls-vars                   list all variables going into thread local storage
  -w, --warnings               enable warnings
  -W, --info-warnings          enable informational warnings
  -X, --json                   generate JSON file
  --json-file <file>           write JSON file to a given file



Re: tooling quality and some random rant

2011-02-13 Thread so

Unfortunately DMC is always out of the
question because the performance is 10-20 behind the competition; fast
compilation won't help it.


Can you please give a few links on this?


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

Denis Koroskin wrote:
On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright 
 wrote:



Michel Fortin wrote:
But note I was replying to your reply to Denis who asked specifically 
for demangled names for missing symbols. This by itself would be a 
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago 
for C++ and Optlink. Nobody cared, including the people who asked for 
that feature. It's a bit demotivating to bother doing that again.


Many people are ungrateful by nature. They complain about missing features 
while taking existing ones for granted.
It doesn't mean no one cares about them. If no one cared, why would 
we even discuss those features?


Tellingly, I accidentally broke that feature, and nobody complained about that, 
either.


Re: tooling quality and some random rant

2011-02-13 Thread Denis Koroskin
On Mon, 14 Feb 2011 02:01:53 +0300, Walter Bright  
 wrote:



Michel Fortin wrote:
But note I was replying to your reply to Denis who asked specifically  
for demangled names for missing symbols. This by itself would be a  
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago  
for C++ and Optlink. Nobody cared, including the people who asked for  
that feature. It's a bit demotivating to bother doing that again.


Many people are ungrateful by nature. They complain about missing features  
while taking existing ones for granted.
It doesn't mean no one cares about them. If no one cared, why would  
we even discuss those features?


Re: tooling quality and some random rant

2011-02-13 Thread retard
Sun, 13 Feb 2011 15:06:46 -0800, Brad Roberts wrote:

> On 2/13/2011 3:01 PM, Walter Bright wrote:
>> Michel Fortin wrote:
>>> But note I was replying to your reply to Denis who asked specifically
>>> for demangled names for missing symbols. This by itself would be a
>>> useful improvement.
>> 
>> I agree with that, but there's a caveat. I did such a thing years ago
>> for C++ and Optlink. Nobody cared, including the people who asked for
>> that feature. It's a bit demotivating to bother doing that again.
> 
> No offense, but this argument gets kinda old and it's incredibly weak.
> 
> Today's tooling expectations are higher.  The audience isn't the same. 
> And clearly people are asking for it.  Even with the past version, I
> highly doubt no one cared; you just didn't hear from those who liked
> it.  After all, few people go out of their way to talk about what they
> like, just what they don't.

Half of the readers have already added me to their killfile, but here 
goes some on-topic humor:

http://www.winandmac.com/wp-content/uploads/2010/03/ipad-hp-fail.jpg

Sometimes people don't yet know what they want.

For example, the reason we write portable C++ in some projects is that 
it's easier to switch between VC++, ICC, GCC, and LLVM, whichever 
produces the best-performing code. Unfortunately DMC is always out of the 
question because the performance is 10-20 behind the competition; fast 
compilation won't help it.


Re: tooling quality and some random rant

2011-02-13 Thread Brad Roberts
On 2/13/2011 3:01 PM, Walter Bright wrote:
> Michel Fortin wrote:
>> But note I was replying to your reply to Denis who asked specifically for 
>> demangled names for missing symbols. This by
>> itself would be a useful improvement.
> 
> I agree with that, but there's a caveat. I did such a thing years ago for C++ 
> and Optlink. Nobody cared, including the
> people who asked for that feature. It's a bit demotivating to bother doing 
> that again.

No offense, but this argument gets kinda old and it's incredibly weak.

Today's tooling expectations are higher.  The audience isn't the same.  And 
clearly people are asking for it.  Even with the past version, I highly doubt 
no one cared; you just didn't hear from those 
who liked it.  After all, few people go
out of their way to talk about what they like, just what they don't.

Later,
Brad


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

Michel Fortin wrote:
But note I was replying to your reply to Denis who asked specifically 
for demangled names for missing symbols. This by itself would be a 
useful improvement.


I agree with that, but there's a caveat. I did such a thing years ago for C++ 
and Optlink. Nobody cared, including the people who asked for that feature. It's 
a bit demotivating to bother doing that again.


Re: tooling quality and some random rant

2011-02-13 Thread Michel Fortin

On 2011-02-13 16:37:19 -0500, Walter Bright  said:


Michel Fortin wrote:
Parsing error messages is a problem indeed. But demangling symbol names 
is easy.


Demangling doesn't get us where golgeliyele wants to go.


Correct.

But note I was replying to your reply to Denis who asked specifically 
for demangled names for missing symbols. This by itself would be a 
useful improvement.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: tooling quality and some random rant

2011-02-13 Thread Andrew Wiley
On Sun, Feb 13, 2011 at 4:35 PM, Alan Smithee  wrote:
> Nick Sabalausky Wrote:
>
>> "Perhaps"? Well, is it or isn't it? Are we supposed to just assume
> that lack of use means it's actually broken and not just unpopular?
>
> Assume it's broken or demonstrate large projects written in D to
> show that it CAN be unpopular because something else makes up for
> it.

How about all of druntime (at least with my self-built Linux DMD)?

>> Then contribute instead of just flaming.
>
> I'm 12 years old and what is this? Your language is flawed, you
> don't see it - do not want.
>
Honestly, I agree with Nick here (which is somewhat rare, actually):
You're in the D mailing lists and you don't want to use D. Why, then,
are you here?


Re: tooling quality and some random rant

2011-02-13 Thread Alan Smithee
Agreed. These things might make D appear like less of a joke, thus
attracting more hapless users to their subsequent dismay.


Re: tooling quality and some random rant

2011-02-13 Thread spir

On 02/13/2011 10:35 PM, Nick Sabalausky wrote:

"spir"  wrote in message
news:mailman.1602.1297626622.4748.digitalmar...@puremagic.com...


Also, I really miss a D-for-D lexical, syntactic and semantic analyser that
would produce D data structures. This would open the door to hordes of
projects, including tool-chain elements, meta-studies on D, improvements
of these basic tools (efficiency, semantic analysis), development of
back-ends (including studies on compiler optimisation specific to D's
semantics), etc.
Even more important, the whole community, which is imo rather high-level,
would be able to take part in such challenges, in their favorite language.
Isn't it ironic that D depends so much on C++, while many programmers come to D
fed up with that very language?



DDMD: http://www.dsource.org/projects/ddmd


Definitely a good thing, and more! :-) Thank you for the pointer, Nick.
I will skim across the project as soon as I have some hours free, and see if 
--with my very limited competence in the domain-- I can contribute in any way.
I have an idea for a side-feature, if I can understand the produced AST: 
generate Types as D data structures on request (--meta) and write them into a 
plain D module to be imported when needed. A major aspect, I guess, of the 'meta' 
namespace discussed on this list.


Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-13 Thread Alan Smithee
Nick Sabalausky Wrote:

> "Perhaps"? Well, is it or isn't it? Are we supposed to just assume
that lack of use means it's actually broken and not just unpopular?

Assume it's broken or demonstrate large projects written in D to
show that it CAN be unpopular because something else makes up for
it.

> Just like you're doing?  If you're sure that .di files are broken,
then *show us* how.

People did - go figure. A swing of Walter's magical wand saying
"everything is OK!" seems to suffice for fanboys. Until they
disappear realizing the miasma surrounding D. Like this bloke:
http://www.jfbillingsley.com/blog/?p=53

> Then contribute instead of just flaming.

I'm 12 years old and what is this? Your language is flawed, you
don't see it - do not want.


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

Michel Fortin wrote:
Parsing error messages is a problem indeed. But demangling symbol names 
is easy.


Demangling doesn't get us where golgeliyele wants to go.


Re: tooling quality and some random rant

2011-02-13 Thread Nick Sabalausky
"spir"  wrote in message 
news:mailman.1602.1297626622.4748.digitalmar...@puremagic.com...
>
> Also, I really miss a D-for-D lexical, syntactic and semantic analyser that 
> would produce D data structures. This would open the door to hordes of 
> projects, including tool-chain elements, meta-studies on D, improvements 
> of these basic tools (efficiency, semantic analysis), development of 
> back-ends (including studies on compiler optimisation specific to D's 
> semantics), etc.
> Even more important, the whole community, which is imo rather high-level, 
> would be able to take part in such challenges, in their favorite language. 
> Isn't it ironic that D depends so much on C++, while many programmers come to D 
> fed up with that very language?
>

DDMD: http://www.dsource.org/projects/ddmd






Re: tooling quality and some random rant

2011-02-13 Thread Nick Sabalausky
"Alan Smithee"  wrote in message 
news:ij967s$12rb$1...@digitalmars.com...
> Andrej Mitrovic Wrote:
>
>> Could you elaborate on that? Aren't .di files supposed to be auto-
> generated by the compiler, and not hand-written?
>
> Yea, aren't they? How come no one uses that feature? Perhaps it's
> intrinsically broken? *hint hint*
>

"Perhaps"? Well, is it or isn't it? Are we supposed to just assume that lack 
of use means it's actually broken and not just unpopular?


>
> This NG assumes a curious stance. Sprouting claims and standing by
> them until they're shown invalid, and then some.

Just like you're doing?  If you're sure that .di files are broken, then 
*show us* how.

>
> "But it takes time!" ... uh, yea, how's for 11 years? Or at least 4
> which D has been past the 1.0 version. How many people gave up on
> their med/large projects and moved to "lesser" languages in this
> span?

Then contribute instead of just flaming.





Re: tooling quality and some random rant

2011-02-13 Thread Paulo Pinto
Hi,

this is what I miss in D and Go.

Most developers who have only used C and C++ aren't aware of how easy it is to 
compile applications in more
modern languages.

It is funny that both D and Go advertise their compilation speed, when I was 
used to fast compilation since
the MS-DOS days with Turbo Pascal.

JVM and .Net based languages have editors that do compile on save.

Most game studios that have changed from C++ to C# and Java as their main 
development language
cite the productivity gain in the compile-test-debug cycle.

I was a bit disappointed to find out that both Go and D still propose a 
compiler/linker model.

--
Paulo

"charlie"  wrote in message 
news:ij95ge$119o$1...@digitalmars.com...
> golgeliyele Wrote:
>
>> It is a mistake to consider the language without the tooling that goes 
>> along with it. I think there is still time to recover from
>> this error. Large projects are often build as a series of libraries. When 
>> the shared library problem is to be attacked, I think
>> the tooling needs to be part of that design. Solving the tooling problem 
>> will raise D to one level up and I hope the
>> community will step up to the challenge.
>
> So far D 1.0 development has forced me to study the compiler and library 
> internals much more than I could ever imagine. Had 10 years of Pascal, 
> Delphi, and Java programming under my belt, but never really knew what's 
> the difference between a compiler frontend and compiler. I knew the linker 
> though, but couldn't imagine there could be so many incompatibilities.
>
> For example the Delphi community has a large set of commonly used 
> libraries for the casual user. I also ended up learning a great deal of 
> regexps because my editor didn't support D and don't feel awkward reading 
> dmd internals such as cod2.c or mtype.c now. This was all necessary to use 
> D in a simple GUI project and to sidestep common bugs.
>
> I really like D. The elegance of the language can be blamed for the most 
> part. In retrospect, I ended up running into more bugs than ever before 
> and spent more time than with any other SDK. However it was so fun that it 
> really wasn't a problem. Basically if you're using D at work, I recommend 
> studying the libraries and finding workarounds for bugs at home. This way 
> you won't be spending too much time fighting the tool chain in 
> professional context and get extra points from the voluntarily open source 
> hobby. It also helps our community.
>
> This newsgroup's a valuable source of information. Read about tuning of 
> JVM, race cars, rocket science, CRT monitors, and DVCS here. We don't 
> always have to discuss grave business matters. 




Re: tooling quality and some random rant

2011-02-13 Thread Paulo Pinto
Hi,

now I am convinced. Thanks for the explanation.

--
Paulo

"Walter Bright"  wrote in message 
news:ij99gb$18fm$1...@digitalmars.com...
> Paulo Pinto wrote:
>> Why C and not directly D?
>>
>> It is really bad advertising for D to know that when its creator came 
>> around to rewrite the linker, Walter decided to use C instead of D.
>
> That's a very good question.
>
> The answer is in the technical details of transitioning optlink from an 
> all assembler project to a higher level language. I do it function by 
> function, meaning there will be hundreds of "hybrid" versions that are 
> partly in the high level language, partly in asm. Currently, it's around 
> 5% in C.
>
> 1. Optlink has its own "runtime" system and startup code. With C, and a 
> little knowledge about how things work under the hood, it's easier to 
> create "headless" functions that require zero runtime and startup support. 
> With D, the D compiler will create ModuleInfo and TypeInfo objects, which 
> more or less rely on some sort of D runtime existing.
>
> 2. The group/segment names emitted by the C compiler match what Optlink 
> uses. It matches what dmd does, too, except that dmd emits more such 
> names, requiring more of an understanding of Optlink to get them in the 
> right places.
>
> 3. The hybrid intermediate versions require that the asm portions of 
> Optlink be able to call the high level language functions. In order to 
> avoid error-prone editing of scores of files, it is very convenient to 
> have the function names used by the asm code exactly match the names 
> emitted by the compiler. I accomplished this by "tweaking" the dmc C 
> compiler. I didn't really want to mess with the D compiler to do the same.
>
> 4. Translating asm to a high level language starts with a rote 
> translation, i.e. using goto's, raw pointers, etc., which match 1:1 with 
> the assembler logic. No attempt is made to infer higher level logic. This 
> makes mistakes in the translation easier to find. But it's not the way 
> anyone in their right mind would develop C code. The higher level 
> abstractions in C are not useful here, and neither are the higher level 
> abstractions in D.
>
> Once the entire Optlink code base has been converted, then it becomes a 
> simple process to:
>
> 1. Dump the Optlink runtime, and switch to the C runtime.
>
> 2. Translate the C code to D.
>
> And then:
>
> 3. Refactor the D code into higher level abstractions.
>
>
> I've converted a massive code base from asm to C++ before (DASH for Data 
> I/O) and I discovered that attempting to refactor the code while 
> translating it is fraught with disaster. Doing the hybrid approach is much 
> faster and more likely to be successful.
>
>
> TL;DR: The C version is there only as a transitional step, as it's 
> somewhat easier to create a hybrid asm/C code base than a hybrid asm/D 
> one. The goal is to create a D version. 




Re: tooling quality and some random rant

2011-02-13 Thread spir

On 02/13/2011 08:30 PM, Walter Bright wrote:

1. people just check out when they see pages and pages of wacky switches. Has
anyone ever actually read all of man gcc?


+ 12_000 /lines/ in my version

Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-13 Thread Michel Fortin

On 2011-02-13 14:38:20 -0500, Walter Bright  said:


Denis Koroskin wrote:
It's not impossible, but is a tremendous amount of work in order to 
improve one error message, and one error message that generations of C 
and C++ programmers are comfortable dealing with.


What's wrong with parsing low-level linker error messages and outputting 
them in human-readable form? E.g. demangle missing symbols.


Yes, that can be done. The downside is since dmd does not control what 
linker the user has, it becomes a constant source of problems trying to 
keep it working as it constantly breaks with linker changes and an 
arbitrarily long list of linkers on various distributions.


Parsing error messages is a problem indeed. But demangling symbol names 
is easy. Try this:


dmd ... 2>&1 | ddemangle

With ddemangle being a compiled version of this program:

import std.stdio;
import core.demangle;

void main()
{
    foreach (line; stdin.byLine())
    {
        size_t beginIdx, endIdx;

        enum State { searching_, searchingD, searchingEnd, done }
        State state;
        foreach (i, char c; line)
        {
            switch (state)
            {
                case State.searching_:
                    if (c == '_')
                    {
                        beginIdx = i;
                        state = State.searchingD;
                    }
                    break;
                case State.searchingD:
                    if (c == 'D')
                        state = State.searchingEnd;
                    else if (c != '_')
                        state = State.searching_;
                    break;
                case State.searchingEnd:
                    if (c == ' ' || c == '"' || c == '\'')
                    {
                        endIdx = i;
                        state = State.done;
                    }
                    break;
                default: // State.done; D's switch requires a default case
                    break;
            }
            if (state == State.done)
                break;
        }

        if (endIdx > beginIdx)
            writeln(line[0 .. beginIdx], demangle(line[beginIdx .. endIdx]),
                line[endIdx .. $]);
        else
            writeln(line);
    }
}


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: tooling quality and some random rant

2011-02-13 Thread spir

On 02/13/2011 07:53 PM, Walter Bright wrote:

Paulo Pinto wrote:

Why C and not directly D?

It is really bad advertising for D to know that when its creator came around
to rewrite the linker, Walter decided to use C instead of D.


That's a very good question.

The answer is in the technical details of transitioning optlink from an all
assembler project to a higher level language. I do it function by function,
meaning there will be hundreds of "hybrid" versions that are partly in the high
level language, partly in asm. Currently, it's around 5% in C.

1. Optlink has its own "runtime" system and startup code. With C, and a little
knowledge about how things work under the hood, it's easier to create
"headless" functions that require zero runtime and startup support. With D, the
D compiler will create ModuleInfo and TypeInfo objects, which more or less rely
on some sort of D runtime existing.

2. The group/segment names emitted by the C compiler match what Optlink uses.
It matches what dmd does, too, except that dmd emits more such names, requiring
more of an understanding of Optlink to get them in the right places.

3. The hybrid intermediate versions require that the asm portions of Optlink be
able to call the high level language functions. In order to avoid
error-prone editing of scores of files, it is very convenient to have the
function names used by the asm code exactly match the names emitted by the
compiler. I accomplished this by "tweaking" the dmc C compiler. I didn't really
want to mess with the D compiler to do the same.

4. Translating asm to a high level language starts with a rote translation,
i.e. using goto's, raw pointers, etc., which match 1:1 with the assembler
logic. No attempt is made to infer higher level logic. This makes mistakes in
the translation easier to find. But it's not the way anyone in their right mind
would develop C code. The higher level abstractions in C are not useful here,
and neither are the higher level abstractions in D.

Once the entire Optlink code base has been converted, then it becomes a simple
process to:

1. Dump the Optlink runtime, and switch to the C runtime.

2. Translate the C code to D.

And then:

3. Refactor the D code into higher level abstractions.


I've converted a massive code base from asm to C++ before (DASH for Data I/O)
and I discovered that attempting to refactor the code while translating it is
fraught with disaster. Doing the hybrid approach is much faster and more likely
to be successful.


TL;DR: The C version is there only as a transitional step, as it's somewhat
easier to create a hybrid asm/C code base than a hybrid asm/D one. The goal is
to create a D version.


Great! Thank you very much for this clear & comprehensive explanation of the 
process, Walter. (*)


Denis

(I can understand what you mean with this 2-stage translation --being easier, 
safer, and finally far more efficient-- having done something similar, but at a 
smaller scale, probably, in the field of automation; where languages are often 
even closer to asm than C, 'cause much "memory" is in fact binary IO cards, 
directly accessed as is.)

--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-13 Thread spir

On 02/13/2011 04:07 PM, Gary Whatmore wrote:

This might sound like blasphemy, but I believe the skills and knowledge for 
developing large scale applications in language XYZ cannot be extrapolated from 
small code snippets or from experience with projects in other languages. You 
just need to eat your own dogfood and get your feet wet by doing.


Precisely. A common route for the development of a static, compiled language 
(even more so for one intended as a system programming language) is to "eat its 
own dogfood" by becoming its own compiler. From what I've heard, this is a great 
boost for the language's evolution, precisely because from then on the creators 
use their language every day --instead of becoming more & more expert in 
another one.


Also, I really miss a D-for-D lexical, syntactic and semantic analyser that would 
produce D data structures. This would open the door to hordes of projects, 
including tool-chain elements, meta-studies on D, improvements of these basic 
tools (efficiency, semantic analysis), development of back-ends (including 
studies on compiler optimisation specific to D's semantics), etc.
Even more important, the whole community, which is imo rather high-level, would 
be able to take part in such challenges, in their favorite language. Isn't it 
ironic that D depends so much on C++, while many programmers come to D fed up 
with that very language?


Denis
--
_
vita es estrany
spir.wikidot.com



Re: tooling quality and some random rant

2011-02-13 Thread Lutger Blijdestijn
gölgeliyele wrote:
...
> 
> I think what we need here is numbers from a project that everyone has
> access to. What is the largest D project right now? Can we get numbers on
> that? How much time does it take to compile that project after a change
> (assuming we are feeding all .d files at once)?

Well, you can take Phobos; I believe Andrei used it once to compare against 
Go. With std.datetime it is now also much bigger :)

Tango is another large project; I remember someone posted a compilation 
speed of a couple of seconds (Tango is huge, perhaps 300KLoC).

But projects and settings may vary a lot. For sure, optlink is one hell of a 
speed monster and you might not get similar speeds with ld on a large 
project. 


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

Denis Koroskin wrote:
It's not impossible, but is a tremendous amount of work in order to 
improve one error message, and one error message that generations of C 
and C++ programmers are comfortable dealing with.


What's wrong with parsing low-level linker error messages and outputting 
them in human-readable form? E.g. demangle missing symbols.


Yes, that can be done. The downside is since dmd does not control what linker 
the user has, it becomes a constant source of problems trying to keep it working 
as it constantly breaks with linker changes and an arbitrarily long list of 
linkers on various distributions.


Re: tooling quality and some random rant

2011-02-13 Thread Denis Koroskin
On Sun, 13 Feb 2011 22:12:02 +0300, Walter Bright  
 wrote:



Vladimir Panteleev wrote:
On Sun, 13 Feb 2011 20:26:50 +0200, Walter Bright  
 wrote:



golgeliyele wrote:

I don't think C++ and gcc set a good bar here.


Short of writing our own linker, we're a bit stuck with what ld does.
 That's not true. The compiler has knowledge of what symbols will be  
passed to the linker, and can display its own, much nicer error  
messages. I've mentioned this in our previous discussion on this topic.


Not without reading the .o files passed to the linker, and the  
libraries, and figuring out what would be pulled in from those  
libraries. In essence, the compiler would have to become a linker.


It's not impossible, but it is a tremendous amount of work to improve one  
error message, and one that generations of C and C++ programmers are  
comfortable dealing with.


What's wrong with parsing low-level linker error messages and outputting them  
in human-readable form? E.g. demangle missing symbols.


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

bearophile wrote:

Walter:


With D, the D compiler will create ModuleInfo and TypeInfo objects, which
more or less rely on some sort of D runtime existing.


In LDC there are no_typeinfo (and maybe no_moduleinfo) pragmas to disable
the generation of those for specific types/modules: 
http://www.dsource.org/projects/ldc/wiki/Docs#no_typeinfo


pragma(no_typeinfo) { struct Opaque {} }

If it's useful then something similar may be added to DMD too.



I think it's best to avoid such things.


Re: tooling quality and some random rant

2011-02-13 Thread Walter Bright

gölgeliyele wrote:

Walter Bright wrote:

golgeliyele wrote:

1. Difficult to understand linker errors due to missing main():
...
The problem is that main() can come from a library, or from some other .obj file 
handed to the compiler that the compiler doesn't look inside. It's a very 
flexible way to build things, and trying to impose more order on it will 
surely draw complaints from some developers.
 
I would like to question this. Is there a D project where the technique of 
putting main() into a library has proved useful? I used this in a C++ project of 
mine, but I have already regretted it. I can imagine having a compiler option 
to skip the pre-link check for main(), but I would suggest not even having 
that. Of course, unless we get to know what those complaints you mentioned are :)


I find that people have all kinds of ways they wish to use a compiler. Is it 
worth restricting all that just for the case of one error message?


I also have tried to avoid adding endless command line switches as the solution 
to every variation people want. These cause:


1. people just check out when they see pages and pages of wacky switches. Has 
anyone ever actually read all of man gcc?


2. different compiler switches can have unexpected interactions and 
complications when used together. This is impossible to test for, as the 
combinations increase as the factorial of the number of switches.


3. people tend to copy/paste makefiles from one project to the next. They 
copy/paste the switches, too, usually with no idea what those switches do. I.e. 
they treat those switches as some sort of sacred incantation that they dare not 
change.


Re: tooling quality and some random rant

2011-02-13 Thread bearophile
Walter:

> With D, the D compiler will create ModuleInfo and TypeInfo objects,
> which more or less rely on some sort of D runtime existing.

In LDC there are no_typeinfo (and maybe no_moduleinfo) pragmas to disable 
the generation of those for specific types/modules:
http://www.dsource.org/projects/ldc/wiki/Docs#no_typeinfo

pragma(no_typeinfo) {
  struct Opaque {}
}

If it's useful then something similar may be added to DMD too.

Bye,
bearophile


Re: tooling quality and some random rant

2011-02-13 Thread gölgeliyele
Daniel Gibson wrote:

> On 13.02.2011 20:01, gölgeliyele wrote:
>> I don't think supporting multiple compilation models is a good thing.
> 
> I think incremental compilation is a very useful feature for large projects, so
> it should be available.
> Also the possibility to link in .o files that were generated from C code with D
> programs is a must - so only supporting the model of feeding all .d files to dmd
> is not an option.
> 
> But supporting the model of feeding all .d files to dmd is also very useful and
> should be possible.
> 
> So *I* /do/ think that supporting multiple compilation models is a good thing :-)
> 
> 

Ok, I might have misspoken there. I am not against incremental compilation; 
what the heck, the lack of it is the reason I started the thread. However, I 
would like to see a coherent compilation model. Feeding all .d files to the 
compiler does not necessarily mean that it needs to be a from-scratch 
compilation. 

Isn't the need for tools like xfBuild an indication that something is wrong 
here? If you can point me to a write-up that describes how to set up 
incremental compilation for a large project without using advanced tools like 
xfBuild, that would be very helpful. 






Re: tooling quality and some random rant

2011-02-13 Thread spir

On 02/13/2011 01:59 PM, bearophile wrote:

Walter:


In C++, you get essentially the same thing from g++:

/usr/lib/gcc/x86_64-linux-gnu/4.4.5/../../../../lib/crt1.o: In function 
`_start':
(.text+0x20): undefined reference to `main'
collect2: ld returned 1 exit status


Lots of people come here because they want a compiler+language better than C++ 
:-)
If you compile this:

void main() {
 writeln("Hello world");
}

For some time now dmd has shown an error message fit for D newbies:
test.d(2): Error: 'writeln' is not defined, perhaps you need to import 
std.stdio; ?

Probably many Python/JS/Perl/PHP/etc. programmers who may want to try D don't 
know what a linker is. When they want to develop a large multi-module D program 
they must learn something about how a linker works. But D has to scale down to 
smaller programs too, with only one or very few modules, written by 
non-experts in C-class languages. In this situation a more readable error 
message, produced by dmd catching a basic error before the linker runs, is 
probably useful.


Couldn't have written this one better ;-)

Denis
--
_
vita es estrany
spir.wikidot.com


