InfoWorld: Free at last! D language's official compiler is open source

2017-04-10 Thread Walter Bright via Digitalmars-d-announce

http://www.infoworld.com/article/3188427/application-development/free-at-last-d-languages-official-compiler-is-open-source.html


Re: Official compiler

2016-03-02 Thread Bruno Medeiros via Digitalmars-d

On 26/02/2016 06:19, Walter Bright wrote:


I wish LLVM would switch to the Boost license, in particular removing
this clause:

"Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimers in the
documentation and/or other materials provided with the distribution."

Reading it adversely means if I write a simple utility and include a few
lines from LLVM, I have to include that license in the binary and a
means to print it out. If I include a bit of code from several places,
each with their own version of that license, there's just a bunch of
crap to deal with to be in compliance.


Then add the license info to a "readme" or "copyright" file. Is that 
really such a hassle? It seems a trivial task to me. For example:

https://github.com/rust-lang/rust/blob/master/COPYRIGHT
(that file is included in the binary distributions)

--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: Official compiler

2016-02-29 Thread Iain Buclaw via Digitalmars-d
On 29 February 2016 at 00:43, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 2/28/2016 1:35 AM, Iain Buclaw via Digitalmars-d wrote:
>
>> Surely with Fibers everything would be deterministic though?
>>
>
> I don't see the point of fibers if:
>
> 1. they are running on the same core
>

That's a reasonable stance to have.  I was only considering the speed-up of
using yield/continue on a declaration's semantic pass versus the double
round-robin we currently do for a couple of passes just because of
forward-reference issues.


Re: Official compiler

2016-02-28 Thread Walter Bright via Digitalmars-d

On 2/28/2016 1:35 AM, Iain Buclaw via Digitalmars-d wrote:

Surely with Fibers everything would be deterministic though?


I don't see the point of fibers if:

1. they are running on the same core
2. none of them do any waiting, such as waiting on I/O requests

The only I/O a compiler does is reading the source files and writing the object 
file. At one time, dmd did have an async thread to read the source files, but 
that was removed as discussed in this thread.


To speed up dmd, using multicores is necessary, and that requires 
synchronization.


Re: Official compiler

2016-02-28 Thread Andrei Alexandrescu via Digitalmars-d

On 02/28/2016 11:15 AM, Márcio Martins wrote:

There is no reason why it should be limited to these forums, is there?
Such a survey should be far more "realistic" and "representative"
than feelings, emotions, and anecdotal evidence.

I think it would be interesting and useful to know what is important for:
-users just starting to use D
-users already heavily invested in the language
-users in each distinct usage (gamedev, web, scripts, real-time, ...)
-users proficient in alternative languages
-companies of different sizes
-size of D codebase


Putting the horses on the proper end of the cart means first making 
sure it is easy to align the three compiler versions. Only then does 
choosing which compiler is more promoted, default, etc. become a 
simple matter of branding.


Márcio, we are a small enough community that we can't enact things by 
fiat. We've tried before, invariably with bad results. Of course you are 
free to speculate that this time may be different, but it's just that - 
speculation.



Andrei



Re: Official compiler

2016-02-28 Thread Márcio Martins via Digitalmars-d

On Sunday, 28 February 2016 at 15:02:24 UTC, Mike Parker wrote:
On Sunday, 28 February 2016 at 13:31:17 UTC, Márcio Martins 
wrote:


Could we maybe create a quick informative survey, 
(surveymonkey?), so we can get a glimpse of why people like D 
and what they believe would improve their experience with the 
language? Perhaps also why they have chosen to or not to adopt 
D more seriously or professionally?


Given that there is such a wide diversity of people currently 
using it, I think it would be nice for the project leadership 
and all of us in the community to get a more realistic view on 
this matter, to better understand what's important, choose the 
future direction, and what the real selling points are. Right 
now it seems like there are a lot of mixed signals even among 
long-time users and contributors.


Such a survey wouldn't be anywhere near "realistic." The number 
and types of users who regularly keep up with the forums are 
highly unlikely to be a representative sample of D users.


There is no reason why it should be limited to these forums, is 
there?
Such a survey should be far more "realistic" and 
"representative" than feelings, emotions, and anecdotal evidence.


I think it would be interesting and useful to know what is 
important for:

-users just starting to use D
-users already heavily invested in the language
-users in each distinct usage (gamedev, web, scripts, real-time, ...)
-users proficient in alternative languages
-companies of different sizes
-size of D codebase


Re: Official compiler

2016-02-28 Thread asdf via Digitalmars-d
On Sunday, 28 February 2016 at 12:59:01 UTC, Dibyendu Majumdar 
wrote:
Should LLVM move to an Apache License would that help in 
migrating to an LLVM backend as the standard backend?


Regards
Dibyendu


LLVM is great but you wouldn't want to be locked down to only one 
backend, probably. LLVM does have good support for a variety of 
architectures though... A bytecode code generator might be good 
for bootstrapping (after the nuclear apocalypse) but everyone 
just cross-compiles.


Re: Official compiler

2016-02-28 Thread Mike Parker via Digitalmars-d

On Sunday, 28 February 2016 at 15:02:24 UTC, Mike Parker wrote:



Such a survey wouldn't be anywhere near "realistic." The number 
and types of users who regularly keep up with the forums are 
highly unlikely to be a representative sample of D users.


Not to mention that only a fraction of people who view the forums 
would actually take the survey.


Re: Official compiler

2016-02-28 Thread Mike Parker via Digitalmars-d

On Sunday, 28 February 2016 at 13:31:17 UTC, Márcio Martins wrote:

Could we maybe create a quick informative survey, 
(surveymonkey?), so we can get a glimpse of why people like D 
and what they believe would improve their experience with the 
language? Perhaps also why they have chosen to or not to adopt 
D more seriously or professionally?


Given that there is such a wide diversity of people currently 
using it, I think it would be nice for the project leadership 
and all of us in the community to get a more realistic view on 
this matter, to better understand what's important, choose the 
future direction, and what the real selling points are. Right 
now it seems like there are a lot of mixed signals even among 
long-time users and contributors.


Such a survey wouldn't be anywhere near "realistic." The number 
and types of users who regularly keep up with the forums are 
highly unlikely to be a representative sample of D users.


Re: Official compiler

2016-02-28 Thread Márcio Martins via Digitalmars-d
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:

On 2/17/2016 4:35 PM, Chris Wright wrote:

And since DMD is
something like twice as fast as LDC, there's at least some 
argument in

favor of keeping it around.


When I meet someone new who says they settled on D in their 
company for development, I casually ask why they selected D?


  "Because it compiles so fast."

It's not a minor issue.


Could we maybe create a quick informative survey, 
(surveymonkey?), so we can get a glimpse of why people like D and 
what they believe would improve their experience with the 
language? Perhaps also why they have chosen to or not to adopt D 
more seriously or professionally?


Given that there is such a wide diversity of people currently 
using it, I think it would be nice for the project leadership and 
all of us in the community to get a more realistic view on this 
matter, to better understand what's important, choose the future 
direction, and what the real selling points are. Right now it 
seems like there are a lot of mixed signals even among long-time 
users and contributors.


Re: Official compiler

2016-02-28 Thread Dibyendu Majumdar via Digitalmars-d

On Friday, 26 February 2016 at 22:20:09 UTC, Walter Bright wrote:

I am referring to this thread:

http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html


Thanks for the pointer. If anyone wants to chip in on that 
thread, feel free!


Hi Walter,

Should LLVM move to an Apache License would that help in 
migrating to an LLVM backend as the standard backend?


Regards
Dibyendu



Re: Official compiler

2016-02-28 Thread Iain Buclaw via Digitalmars-d
On 27 February 2016 at 23:30, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 2/27/2016 12:05 PM, Timon Gehr wrote:
>>
>> On 26.02.2016 23:41, Walter Bright wrote:
>>>
>>> On 2/26/2016 1:10 PM, Timon Gehr wrote:
>>>>
>>>> Different passes are not really required once semantic analysis becomes
>>>> asynchronous. Just keep track of semantic analysis dependencies, with
>>>> strong and weak dependencies and different means to resolve cycles of
>>>> weak dependencies. Then write the semantic analysis of each component
>>>> in a linear fashion and pause it whenever it depends on information
>>>> that has not yet been obtained, until that information is computed.
>>>
>>> I'll put you in charge of debugging that :-)
>>
>> I am/was (I have not worked on it a lot lately). I haven't found it to be
>> particularly hard to debug.
>
> It'll get 100 times harder if it's a heisenbug due to synchronization
> issues.
Surely with Fibers everything would be deterministic though?
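
A minimal sketch of the point, using D's core.thread.Fiber (the driver
loop below is hypothetical, not dmd or sdc code): fibers switch only at
explicit yield points, so a single-threaded schedule replays identically
on every run.

  import core.thread : Fiber;
  import std.stdio : writeln;

  void main()
  {
      auto a = new Fiber({ writeln("a1"); Fiber.yield(); writeln("a2"); });
      auto b = new Fiber({ writeln("b1"); Fiber.yield(); writeln("b2"); });

      // The caller decides the interleaving, so the output is always
      // a1, b1, a2, b2 - no locks, no data races, no heisenbugs.
      while (a.state != Fiber.State.TERM || b.state != Fiber.State.TERM)
      {
          if (a.state != Fiber.State.TERM) a.call();
          if (b.state != Fiber.State.TERM) b.call();
      }
  }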


Re: Official compiler

2016-02-27 Thread Walter Bright via Digitalmars-d

On 2/27/2016 12:05 PM, Timon Gehr wrote:

On 26.02.2016 23:41, Walter Bright wrote:

On 2/26/2016 1:10 PM, Timon Gehr wrote:

Different passes are not really required once semantic analysis becomes
asynchronous. Just keep track of semantic analysis dependencies, with
strong and
weak dependencies and different means to resolve cycles of weak
dependencies.
Then write the semantic analysis of each component in a linear fashion
and pause
it whenever it depends on information that has not yet been obtained,
until that
information is computed.


I'll put you in charge of debugging that :-)


I am/was (I have not worked on it a lot lately). I haven't found it to be
particularly hard to debug.


It'll get 100 times harder if it's a heisenbug due to synchronization issues.



It is likely the best way to fix the "forward reference error" situation.
My code which does this does not compile on DMD versions after 2.060 due to
forward reference issues. I have just reduced one of them:
https://issues.dlang.org/show_bug.cgi?id=15733


Thanks for preparing a bug report.



Re: Official compiler

2016-02-27 Thread Timon Gehr via Digitalmars-d

On 26.02.2016 23:41, Walter Bright wrote:

On 2/26/2016 1:10 PM, Timon Gehr wrote:

Different passes are not really required once semantic analysis becomes
asynchronous. Just keep track of semantic analysis dependencies, with
strong and
weak dependencies and different means to resolve cycles of weak
dependencies.
Then write the semantic analysis of each component in a linear fashion
and pause
it whenever it depends on information that has not yet been obtained,
until that
information is computed.


I'll put you in charge of debugging that :-)


I am/was (I have not worked on it a lot lately). I haven't found it to 
be particularly hard to debug.

It is likely the best way to fix the "forward reference error" situation.
My code which does this does not compile on DMD versions after 2.060 due 
to forward reference issues. I have just reduced one of them: 
https://issues.dlang.org/show_bug.cgi?id=15733


Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 1:10 PM, Timon Gehr wrote:

Different passes are not really required once semantic analysis becomes
asynchronous. Just keep track of semantic analysis dependencies, with strong and
weak dependencies and different means to resolve cycles of weak dependencies.
Then write the semantic analysis of each component in a linear fashion and pause
it whenever it depends on information that has not yet been obtained, until that
information is computed.


I'll put you in charge of debugging that :-)


Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 10:34 AM, Iain Buclaw via Digitalmars-d wrote:

One interesting line of development (though would be difficult to implement)
would be to do all three semantic passes asynchronously using fibers.


I'd be terrified of all the synchronizing that would be necessary there. The 
lexing, and code generation would be far easier to parallelize.
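
A sketch of that easier case, assuming a hypothetical Token type and
lex() function (this is not dmd code): each file is lexed independently,
so std.parallelism needs no synchronization at all.

  import std.file : readText;
  import std.parallelism : taskPool;

  struct Token { /* kind, text, location, ... */ }

  // Hypothetical tokenizer: touches nothing but its own input.
  Token[] lex(string source) { Token[] toks; /* tokenize */ return toks; }

  void main(string[] args)
  {
      auto files = args[1 .. $];
      auto tokens = new Token[][](files.length);
      foreach (i, file; taskPool.parallel(files))
          tokens[i] = lex(readText(file));   // each slot written once
  }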



If I understand correctly, sdc already does this with many cases that need
ironing out.


The "many cases that need ironing out" is always the problem :-)



Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 11:17 AM, David Nadlinger wrote:

I was referring to something different in my post, though, as the question
concerned "low-hanging fruit". The problem there is really just that template
names sometimes grow unreasonably long pretty quickly. As an example, without
wanting to divulge any internals, some of the mangled symbols (!) in the Weka
codebase are several hundred kilobytes in size. core.demangle gives up on them
anyway, and they appear to be extremely repetitive. Note that just like in
Steven's post which I linked earlier, the code in question does not involve any
crazy recursive meta-templates, but IIRC makes use of Voldemort types. Tracking
down and fixing this – one would almost be tempted to just use standard data
compression – would lead to a noticeable decrease in compile and link times for
affected code.


A simple solution is to just use lz77 compression on the strings. This is used 
for Win32 and works well. (I had a PR to put that in Phobos, but it was rejected.)


https://www.digitalmars.com/sargon/lz77.html

As a snide aside, the mangling schemes used by Microsoft and g++ have built-in 
compression, but they are overly complex and produce lousy results. Lz77 
is simpler and far more effective :-)


An alternative is to generate an SHA hash of the name, which will be unique, but 
the downside is it is not reversible and so cannot be demangled.
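
The two options, sketched with std.zlib's deflate (an LZ77 variant)
standing in for the lz77 code linked above, on a made-up repetitive
mangle:

  import std.array : replicate;
  import std.digest : toHexString;
  import std.digest.sha : sha256Of;
  import std.stdio : writefln;
  import std.zlib : compress;

  void main()
  {
      // Hypothetical stand-in for a highly repetitive Voldemort mangle.
      string mangled = "_D4test3fooFZ" ~ "9__T3MapTSQBfZ".replicate(1000);

      auto packed = compress(mangled);  // reversible: expand, then demangle
      auto hashed = sha256Of(mangled);  // unique, but cannot be demangled

      writefln("original:   %s bytes", mangled.length);
      writefln("compressed: %s bytes", packed.length);
      writefln("sha-256:    %s", toHexString(hashed));
  }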


Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 3:45 AM, Dibyendu Majumdar wrote:

On Friday, 26 February 2016 at 11:35:04 UTC, Dibyendu Majumdar wrote:

On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:

[...]


I recall there was a thread in the LLVM mailing list last year about moving to
a different license. So maybe that is on the cards, and the D community could
chip on that conversation.



I am referring to this thread:

http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html


Thanks for the pointer. If anyone wants to chip in on that thread, feel free!


Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 5:15 AM, Steven Schveighoffer wrote:

I think it's much stronger when the email/logs are maintained by a disinterested
third party.

For example, I'd say emails that were maintained on a private server by one of
the parties in the case would be less reliable than logs stored on yahoo's
servers that neither party has access to.

There would also be no shortage of witnesses "Yes, I remember the day Walter
added feature x, and github's logs are correct".

I think Walter is on solid ground there.

-Steve


Not only that, everyone who has accessed the github repository has their own 
copy of the repository. It's a distributed repository, not a single sourced one.


I also keep the email logs I get from it.

It's a thousand times better than producing a date stamp from a file on my 
backup hard disk.




Re: Official compiler

2016-02-26 Thread Iain Buclaw via Digitalmars-d
On 26 Feb 2016 10:16 pm, "Timon Gehr via Digitalmars-d" 
<digitalmars-d@puremagic.com> wrote:
>
> On 26.02.2016 19:34, Iain Buclaw via Digitalmars-d wrote:
>>
>> On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d"
>> <digitalmars-d@puremagic.com> wrote:
>>>
>>> On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
>>>>
>>>> I thought that multithreaded I/O did not change anything, or slowed
>>>> compilation down in some cases?
>>>>
>>>> Or I recall seeing a slight slowdown when I first tested it in gdc
>>>> all those years ago.  So left it disabled - probably for the best too.
>>>
>>> Running one test won't really give much useful information. I also wrote:
>>>
>>> "On a machine with local disk and running nothing else, no speedup.
>>> With a slow filesystem, like an external, network, or cloud (!) drive,
>>> yes. I would also expect it to speed up when the machine is running a
>>> lot of other stuff."
>>
>> Ah ha. Yes I can sort of remember that comment.
>>
>> One interesting line of development (though would be difficult to
>> implement) would be to do all three semantic passes asynchronously using
>> fibers.
>>
>> If I understand correctly, sdc already does this with many cases that
>> need ironing out.
>
> Different passes are not really required once semantic analysis becomes
> asynchronous. Just keep track of semantic analysis dependencies, with
> strong and weak dependencies and different means to resolve cycles of weak
> dependencies. Then write the semantic analysis of each component in a
> linear fashion and pause it whenever it depends on information that has not
> yet been obtained, until that information is computed.

Yes.  In our case, it may be best to go for small steps.  First remove the
'deferred' semantic pass, then merge semantic 1+2, then finally as you
describe above.

Easier said than done I guess though.


Re: Official compiler

2016-02-26 Thread Timon Gehr via Digitalmars-d

On 26.02.2016 19:34, Iain Buclaw via Digitalmars-d wrote:

On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d" 
<digitalmars-d@puremagic.com> wrote:
>
> On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
>>
>> I thought that multithreaded I/O did not change anything, or slowed
>> compilation down in some cases?
>>
>> Or I recall seeing a slight slowdown when I first tested it in gdc
>> all those years ago.  So left it disabled - probably for the best too.
>
> Running one test won't really give much useful information. I also wrote:
>
> "On a machine with local disk and running nothing else, no speedup.
> With a slow filesystem, like an external, network, or cloud (!) drive,
> yes. I would also expect it to speed up when the machine is running a
> lot of other stuff."

Ah ha. Yes I can sort of remember that comment.

One interesting line of development (though would be difficult to
implement) would be to do all three semantic passes asynchronously using
fibers.

If I understand correctly, sdc already does this with many cases that
need ironing out.



Different passes are not really required once semantic analysis becomes 
asynchronous. Just keep track of semantic analysis dependencies, with 
strong and weak dependencies and different means to resolve cycles of 
weak dependencies. Then write the semantic analysis of each component in 
a linear fashion and pause it whenever it depends on information that 
has not yet been obtained, until that information is computed.
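
A minimal sketch of that scheme, with hypothetical names (this is not
sdc or dmd source): one fiber per declaration, paused on unresolved
dependencies and resumed by a driver until nothing more can progress.

  import core.thread : Fiber;

  class Decl
  {
      bool resolved;   // set by its fiber once analysis completes
      Fiber analysis;  // the declaration's semantic-analysis fiber
  }

  // Called from inside an analysis fiber: wait on a strong dependency.
  void require(Decl dep)
  {
      while (!dep.resolved)
          Fiber.yield();   // pause here; the driver resumes us later
  }

  void analyzeAll(Decl[] decls)
  {
      bool progress = true;
      while (progress)
      {
          progress = false;
          foreach (d; decls)
          {
              if (d.resolved) continue;
              d.analysis.call();   // run until finished or next pause
              if (d.resolved) progress = true;
          }
      }
      // A sweep with no progress leaves only cycles of strong
      // dependencies: the genuine forward-reference errors.
  }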


Re: Official compiler

2016-02-26 Thread H. S. Teoh via Digitalmars-d
On Fri, Feb 26, 2016 at 01:53:21PM -0500, Andrei Alexandrescu via Digitalmars-d 
wrote:
> On 02/26/2016 10:38 AM, David Nadlinger wrote:
> >On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:
> >>Are there any low-hanging fruit left that could make dmd faster?
> >
> >A big one would be overhauling the template mangling scheme so it
> >does not generate mangled names a few hundred kilo (!) bytes in size
> >anymore for code that uses templates and voldemort types.
> 
> My understanding is the main problem is the _same_ templates are
> repeatedly instantiated with the same exact parameters - the epitome
> of redundant work.
[...]

I must be missing something, but why can't we use the obvious solution
of using some kind of hash table to track previous instantiations?


T

-- 
There are two ways to write error-free programs; only the third one works.
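
A toy sketch of exactly that, with hypothetical types (dmd's existing
lookup lives in TemplateDeclaration.findExistingInstance, mentioned
elsewhere in this thread):

  import std.array : join;

  struct TemplateDecl { string name; }
  class Instance { /* the analyzed template body */ }

  Instance[string] cache;   // mangled signature -> existing instance

  Instance instantiate(TemplateDecl td, string[] args)
  {
      auto key = td.name ~ "!(" ~ args.join(",") ~ ")";
      if (auto hit = key in cache)
          return *hit;               // same template, same args: reuse
      auto inst = new Instance;      // otherwise run the expensive pass
      cache[key] = inst;
      return inst;
  }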


Re: Official compiler

2016-02-26 Thread David Nadlinger via Digitalmars-d
On Friday, 26 February 2016 at 18:53:21 UTC, Andrei Alexandrescu 
wrote:
My understanding is the main problem is the _same_ templates 
are repeatedly instantiated with the same exact parameters - 
the epitome of redundant work. -- Andrei


Within one compiler execution, there might be some optimization 
potential in the way semantically equivalent template 
instantiations are merged, yes – it's been a while since I have 
looked at the related code (see e.g. 
TemplateDeclaration.findExistingInstance).


Another area matching your description would be that of the same 
template being instantiated from multiple compilation units, 
where it can be omitted from some of the compilation units (i.e. 
object files). Our current logic for that is broken anyway, see 
e.g. https://issues.dlang.org/show_bug.cgi?id=15318.


I was referring to something different in my post, though, as the 
question concerned "low-hanging fruit". The problem there is 
really just that template names sometimes grow unreasonably long 
pretty quickly. As an example, without wanting to divulge any 
internals, some of the mangled symbols (!) in the Weka codebase 
are several hundred kilobytes in size. core.demangle gives up on 
them anyway, and they appear to be extremely repetitive. Note 
that just like in Steven's post which I linked earlier, the code 
in question does not involve any crazy recursive meta-templates, 
but IIRC makes use of Voldemort types. Tracking down and fixing 
this – one would almost be tempted to just use standard data 
compression – would lead to a noticeable decrease in compile and 
link times for affected code.


 — David
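
A tiny illustration of the effect (hypothetical code, not from the Weka
codebase): each wrapper bakes the full mangled name of its argument type
into its own, so nested Voldemort types compound very quickly.

  auto wrap(R)(R r)
  {
      struct Result { R inner; }   // Voldemort type, only visible here
      return Result(r);
  }

  void main()
  {
      auto v = wrap(wrap(wrap(wrap(42))));
      pragma(msg, typeof(v).mangleof.length);  // grows rapidly with depth
  }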


Re: Official compiler

2016-02-26 Thread Andrei Alexandrescu via Digitalmars-d

On 02/26/2016 09:50 AM, David Nadlinger wrote:

Can we please keep this out of here?


Thank you!! -- Andrei



Re: Official compiler

2016-02-26 Thread Andrei Alexandrescu via Digitalmars-d

On 02/26/2016 10:38 AM, David Nadlinger wrote:

On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:

Are there any low-hanging fruit left that could make dmd faster?


A big one would be overhauling the template mangling scheme so it does
not generate mangled names a few hundred kilo (!) bytes in size anymore
for code that uses templates and voldemort types.


My understanding is the main problem is the _same_ templates are 
repeatedly instantiated with the same exact parameters - the epitome of 
redundant work. -- Andrei




Re: Official compiler

2016-02-26 Thread David Nadlinger via Digitalmars-d
On Friday, 26 February 2016 at 18:19:57 UTC, Steven Schveighoffer 
wrote:
The idea is that ldc and gdc will get plenty of warning if 
something breaks.


As stated, this in itself would be utterly useless. Right now, 
you can be absolutely certain that the AST semantics will change 
in between each DMD release. Sometimes in obvious ways because 
fields are removed and so on, but much more often silently and in 
a hard-to-track-down fashion because the structure of the AST or 
the interpretation of certain node properties changes.


In other words, we don't need any warning that something breaks, 
because we already know it will. The people that need the warning 
are the authors of the breaking front-end commits, so that they 
can properly document the changes and make sure they are 
acceptable for the other backends (right now, you typically have 
to reverse-engineer that from the DMD glue layer changes). 
Ideally, of course, no such changes would be merged without 
making sure that all the backends have already been adapted for 
them first.


 — David


Re: Official compiler

2016-02-26 Thread Iain Buclaw via Digitalmars-d
On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d" 
<digitalmars-d@puremagic.com> wrote:
>
> On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
>>
>> I thought that multithreaded I/O did not change anything, or slowed
>> compilation down in some cases?
>>
>> Or I recall seeing a slight slowdown when I first tested it in gdc all
>> those years ago.  So left it disabled - probably for the best too.
>
> Running one test won't really give much useful information. I also wrote:
>
> "On a machine with local disk and running nothing else, no speedup. With
> a slow filesystem, like an external, network, or cloud (!) drive, yes. I
> would also expect it to speed up when the machine is running a lot of
> other stuff."

Ah ha. Yes I can sort of remember that comment.

One interesting line of development (though would be difficult to
implement) would be to do all three semantic passes asynchronously using
fibers.

If I understand correctly, sdc already does this with many cases that need
ironing out.


Re: Official compiler

2016-02-26 Thread Steven Schveighoffer via Digitalmars-d

On 2/26/16 9:26 AM, Radu wrote:

On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer wrote:

On 2/26/16 7:02 AM, Radu wrote:

On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:

I don't see anything unfair. gdc, ldc, and dmd are each as good as
their respective teams make them.



The lack of fairness comes from the way the ecosystem is set up: you have
the reference compiler released, then everybody needs to catch up with
it. Why not have others be part of the official release? This will
undoubtedly increase the quality of the frontend and the glue layer, and
probably the runtime, just because they will be tested on more
architectures each release.

No matter how you put it, both LDC and GDC are limited in manpower, and
also caught in the merge game with mainline. This is a bottleneck if
they need to attract more talent. Right off the bat you need to do a lot
of grunt work handling different repos, each at their own revision, plus
all the knowledge about build env and testing env.


The issue here is the front-end not the back end. Daniel has already
stated this was a goal (to make the front end shared code). So it will
happen (I think Daniel has a pretty good record of following through,
we do have a D-based front end now after all).

Any effort to make both LDC and GDC part of the "official" release
would be artificial -- instead of LDC and GDC getting released
"faster", they would simply hold up dmd's release until they caught
up. And this is probably more pressure than their developers need.

When the front end is shared, then the releases will be quicker, and
you can be happier with it.



OK, a shared front end will be great!

My main concern is that if they are not integrated within the daily
pull-merge-auto-test loop they will always tend to drift and get out of
sync while trying to fix stuff that breaks.


I think the intention is to make all of the compilers supported with 
some reasonable form of CI (not sure if all PRs would be tested this 
way, because that may be too much of a burden on the test servers).


The idea is that ldc and gdc will get plenty of warning if something breaks.

-Steve


Re: Official compiler

2016-02-26 Thread David Nadlinger via Digitalmars-d

On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:

Are there any low-hanging fruit left that could make dmd faster?


A big one would be overhauling the template mangling scheme so it 
does not generate mangled names a few hundred kilo (!) bytes in 
size anymore for code that uses templates and voldemort types. 
For an example, see 
http://forum.dlang.org/post/n96k3g$ka5$1...@digitalmars.com, 
although the problem can get much worse in big code bases. I've 
seen just the handling of the mangle strings (generation, ...) 
making up a significant part of the time profile.


 — David


Re: Official compiler

2016-02-26 Thread David Nadlinger via Digitalmars-d

On Friday, 26 February 2016 at 11:50:27 UTC, Russel Winder wrote:
On Fri, 2016-02-26 at 11:12 +0000, BBasile via Digitalmars-d 
wrote:

[…]
BTW Malicious people can cheat and commit in the past, according to
https://github.com/gelstudios/gitfiti
commitment date is not reliable.


Indeed, which is why Mercurial is a much better system, though 
it is far from perfect.


"hg commit" knows the "--date" option just as well. Can we please 
keep this out of here?


 — David


Re: Official compiler

2016-02-26 Thread Radu via Digitalmars-d
On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer 
wrote:

On 2/26/16 7:02 AM, Radu wrote:
On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright 
wrote:
I don't see anything unfair. gdc, ldc, and dmd are each as 
good as their respective teams make them.



The lack of fairness comes from the way the ecosystem is set 
up: you have the reference compiler released, then everybody 
needs to catch up with it. Why not have others be part of the 
official release? This will undoubtedly increase the quality of 
the frontend and the glue layer, and probably the runtime, just 
because they will be tested on more architectures each release.

No matter how you put it, both LDC and GDC are limited in 
manpower, and also caught in the merge game with mainline. This 
is a bottleneck if they need to attract more talent. Right off 
the bat you need to do a lot of grunt work handling different 
repos, each at their own revision, plus all the knowledge about 
build env and testing env.


The issue here is the front-end not the back end. Daniel has 
already stated this was a goal (to make the front end shared 
code). So it will happen (I think Daniel has a pretty good 
record of following through, we do have a D-based front end now 
after all).


Any effort to make both LDC and GDC part of the "official" 
release would be artificial -- instead of LDC and GDC getting 
released "faster", they would simply hold up dmd's release 
until they caught up. And this is probably more pressure than 
their developers need.


When the front end is shared, then the releases will be 
quicker, and you can be happier with it.


-Steve


OK, a shared front end will be great!

My main concern is that if they are not integrated within the 
daily pull-merge-auto-test loop they will always tend to drift 
and get out of sync while trying to fix stuff that breaks.


If the author of the pull request gets auto feedback from DMD and 
LDC on his changes' test results, then he will be aware of 
potential problems he might create.


The integration doesn't necessarily need to be tightly coupled, 
i.e. LDC can keep its infrastructure and auto sync/run any merges 
from mainline. The issue is what to do with breaking changes.


Ideally, no breakage should be allowed when fixing regressions 
or bugs, and any breaking change to the front-end or glue layers 
should at least be discussed with the LDC/GDC guys.


All of the above needs steering from the leadership to follow 
through.


And BTW, I'm happy with what D has become :), always room for 
improvements, thank you!


Re: Official compiler

2016-02-26 Thread Steven Schveighoffer via Digitalmars-d

On 2/26/16 6:04 AM, Russel Winder via Digitalmars-d wrote:

On Fri, 2016-02-26 at 02:52 -0800, Walter Bright via Digitalmars-d
wrote:

[…]
I'm not aware of any, either, that is specific to github. But given
how digital
records in general (such as email, social media posts, etc.) are
routinely
accepted as evidence, I'd be very surprised if github wasn't.


Be careful about making assumptions of admissibility as evidence. I have
been an expert witness in three cases regarding email logs and it is not
always so simple to have them treated as a matter of record. Of course
the USA is not the UK, rules and history are different in every
jurisdiction – and the USA has more than one!



I think it's much stronger when the email/logs are maintained by a 
disinterested third party.


For example, I'd say emails that were maintained on a private server by 
one of the parties in the case would be less reliable than logs stored 
on yahoo's servers that neither party has access to.


There would also be no shortage of witnesses "Yes, I remember the day 
Walter added feature x, and github's logs are correct".


I think Walter is on solid ground there.

-Steve


Re: Official compiler

2016-02-26 Thread Steven Schveighoffer via Digitalmars-d

On 2/26/16 7:02 AM, Radu wrote:

On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:

I don't see anything unfair. gdc, ldc, and dmd are each as good as
their respective teams make them.



The lack of fairness comes from the way the ecosystem is set up: you have
the reference compiler released, then everybody needs to catch up with
it. Why not have others be part of the official release? This will
undoubtedly increase the quality of the frontend and the glue layer, and
probably the runtime, just because they will be tested on more
architectures each release.

No matter how you put it, both LDC and GDC are limited in manpower, and
also caught in the merge game with mainline. This is a bottleneck if
they need to attract more talent. Right off the bat you need to do a lot
of grunt work handling different repos, each at their own revision, plus
all the knowledge about build env and testing env.


The issue here is the front-end not the back end. Daniel has already 
stated this was a goal (to make the front end shared code). So it will 
happen (I think Daniel has a pretty good record of following through, we 
do have a D-based front end now after all).


Any effort to make both LDC and GDC part of the "official" release would 
be artificial -- instead of LDC and GDC getting released "faster", they 
would simply hold up dmd's release until they caught up. And this is 
probably more pressure than their developers need.


When the front end is shared, then the releases will be quicker, and you 
can be happier with it.


-Steve


Re: Official compiler

2016-02-26 Thread Radu via Digitalmars-d

On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:

On 2/26/2016 1:47 AM, Radu wrote:
Please don't get me wrong, we all appreciate what you offered 
to the D community, but all these legal arguments are strongly 
tied to you, and less so to the community.


Didn't Google get hung out to dry over 6 lines of Java code or 
something like that? And I don't know how long you've been 
around here, but we DID have precisely these sorts of problems 
during the Phobos/Tango rift. Ignoring licensing issues can 
have ugly consequences.




I've been around here since 2004, not as vocal as I am now, but 
yes, I remember those ugly times.
Due diligence is mandatory when dealing with software licenses, 
agreed, but we can't extrapolate your experience re. the backend 
to whatever is used in LDC or any other compiler. I'm sure in 
this regard LDC is not in peril.




Your LLVM license nitpick is hilarious, you can't do that 
when the "official" D compiler has a non-liberally-licensed 
backend, you just can't.


That's not under my control, and is one of the reasons why D 
gravitated towards the Boost license for everything we could.




Yes, agreed, Boost FTW, but it still doesn't solve the backend issue.



But setting things aside, we all need to acknowledge that the 
current setup is not fair to motivated and proven third party 
compilers, their contributors, and their users.


I don't see anything unfair. gdc, ldc, and dmd are each as good 
as their respective teams make them.




The lack of fairness comes from the way the ecosystem is set up: 
you have the reference compiler released, then everybody needs to 
catch up with it. Why not have others be part of the official 
release? This will undoubtedly increase the quality of the 
frontend and the glue layer, and probably the runtime, just 
because they will be tested on more architectures each release.


No matter how you put it, both LDC and GDC are limited in 
manpower, and also caught in the merge game with mainline. This 
is a bottleneck if they need to attract more talent. Right off 
the bat you need to do a lot of grunt work handling different 
repos, each at their own revision, plus all the knowledge about 
build env and testing env.




The D ecosystem must create and foster a friendly environment 
to anyone wanting to have a good compiler that is current with 
the language/runtime/phobos developments.


And that's what we do. It's why we have 3 major compilers.


See above, just having 3 compilers (could be 5 for that matter) 
is not enough. We would be better off with just one that works 
great, but if that is not possible, at least give me the option 
to use the latest and greatest D on my Linux embedded ARM boards.




Re: Official compiler

2016-02-26 Thread Russel Winder via Digitalmars-d
On Fri, 2016-02-26 at 11:12 +0000, BBasile via Digitalmars-d wrote:
> […]
> BTW Malicious people can cheat and commit in the past, according 
> to
> 
> https://github.com/gelstudios/gitfiti
> 
> commitment date is not reliable.

Indeed, which is why Mercurial is a much better system, though it is
far from perfect.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Official compiler

2016-02-26 Thread Dibyendu Majumdar via Digitalmars-d
On Friday, 26 February 2016 at 11:35:04 UTC, Dibyendu Majumdar 
wrote:
On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright 
wrote:

[...]


I recall there was a thread in the LLVM mailing list last year 
about moving to a different license. So maybe that is on the 
cards, and the D community could chip on that conversation.




I am referring to this thread:

http://lists.llvm.org/pipermail/llvm-dev/2015-October/091536.html


Re: Official compiler

2016-02-26 Thread Dibyendu Majumdar via Digitalmars-d

On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
I wish LLVM would switch to the Boost license, in particular 
removing this clause:


"Redistributions in binary form must reproduce the above 
copyright notice, this list of conditions and the following 
disclaimers in the documentation and/or other materials 
provided with the distribution."


Reading it adversely means if I write a simple utility and 
include a few lines from LLVM, I have to include that license 
in the binary and a means to print it out. If I include a bit 
of code from several places, each with their own version of 
that license, there's just a bunch of crap to deal with to be 
in compliance.


Hi Walter,

I recall there was a thread in the LLVM mailing list last year 
about moving to a different license. So maybe that is on the 
cards, and the D community could chip on that conversation.


I feel that by moving to an LLVM backend D will gain the help / 
expertise of a large number of companies that are working on LLVM, 
including Microsoft & Google. Isn't Clang's claim that it is much 
faster than gcc when it comes to compiling? So maybe the speed of 
compilation using LLVM is not such an issue, as presumably a lot 
of the cost in C++ compilation is in the front-end, and with D the 
same issues won't arise?


In any case with scarce resources it seems wasteful to have 
people working on multiple backends - it would make more sense to 
converge to one backend - and LLVM being non-GPL and having a lot 
of momentum may be the best option.


I also feel that a lot of the C++ interfacing could be done by 
using the Clang libraries - again for similar reasons that you 
will gain from work already being done.


Regards
Dibyendu


Re: Official compiler

2016-02-26 Thread BBasile via Digitalmars-d

On Friday, 26 February 2016 at 10:41:31 UTC, Russel Winder wrote:
On Thu, 2016-02-25 at 22:19 -0800, Walter Bright via Digitalmars-d 
wrote:
[…]

One thing I adore about github is it provides a legal audit trail 
of where the code came from. While that proves nothing about 
whether contributions are stolen or not, it provides a date stamp 
(like my registered copyright did), and if stolen code does make 
its way into the code base, it can be precisely excised. Github 
takes a great load off my mind.

[…]

Has there been case law in the USA that gives a Git log official 
status as a record of history? I haven't done a detailed search 
here, but I am not aware of any case law in the UK on this. Other 
jurisdictions will have their own rules obviously.


BTW Malicious people can cheat and commit in the past, according to
https://github.com/gelstudios/gitfiti
commitment date is not reliable.


Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 1:47 AM, Radu wrote:

Please don't get me wrong, we all appreciate what you offered to the D community,
but all these legal arguments are strongly tied to you, and less so to the
community.


Didn't Google get hung out to dry over 6 lines of Java code or something like 
that? And I don't know how long you've been around here, but we DID have 
precisely these sorts of problems during the Phobos/Tango rift. Ignoring 
licensing issues can have ugly consequences.




Your LLVM license nitpick is hilarious, you can't do that when the "official" D
compiler has a non-liberally-licensed backend, you just can't.


That's not under my control, and is one of the reasons why D gravitated towards 
the Boost license for everything we could.




But setting things aside, we all need to acknowledge that the current setup is
not fair to motivated and proven third party compilers, their contributors, and
their users.


I don't see anything unfair. gdc, ldc, and dmd are each as good as their 
respective teams make them.




The D ecosystem must create and foster a friendly environment to anyone wanting
to have a good compiler that is current with the language/runtime/phobos
developments.


And that's what we do. It's why we have 3 major compilers.



Re: Official compiler

2016-02-26 Thread Russel Winder via Digitalmars-d
On Fri, 2016-02-26 at 02:52 -0800, Walter Bright via Digitalmars-d
wrote:
> […]
> I'm not aware of any, either, that is specific to github. But given
> how digital 
> records in general (such as email, social media posts, etc.) are
> routinely 
> accepted as evidence, I'd be very surprised if github wasn't.

Be careful about making assumptions of admissibility as evidence. I have
been an expert witness in three cases regarding email logs and it is not
always so simple to have them treated as a matter of record. Of course
the USA is not the UK, rules and history are different in every
jurisdiction – and the USA has more than one! 

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 2:41 AM, Russel Winder via Digitalmars-d wrote:

Has there been case law in the USA that gives a Git log official status
as a record of history? I haven't done a detailed search here, but I am
not aware of any case law in the UK on this. Other jurisdictions will
have their own rules obviously.


I'm not aware of any, either, that is specific to github. But given how digital 
records in general (such as email, social media posts, etc.) are routinely 
accepted as evidence, I'd be very surprised if github wasn't.




Re: Official compiler

2016-02-26 Thread Russel Winder via Digitalmars-d
On Thu, 2016-02-25 at 16:51 +0000, David Nadlinger via Digitalmars-d
wrote:
> […]
> Travis CI, which is probably the one "trendy, hipster" service 
> most would think of, has been supporting D for quite some while 
> now due to Martin Nowak's great work. (He put Iain's name and 
> mine on it too, but we didn't really contribute at all.)

Indeed Travis-CI advertises its D capability. Apologies for implying
it didn't.

Other cloud CI services are definitely lacking though, at least in
their advertising of supported languages.
 
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Official compiler

2016-02-26 Thread Russel Winder via Digitalmars-d
On Thu, 2016-02-25 at 22:19 -0800, Walter Bright via Digitalmars-d
wrote:
[…]
> 
> One thing I adore about github is it provides a legal audit trail of
> where the 
> code came from. While that proves nothing about whether contributions
> are stolen 
> or not, it provides a date stamp (like my registered copyright did),
> and if 
> stolen code does make its way into the code base, it can be precisely
> excised. 
> Github takes a great load off my mind.
[…]

Has there been case law in the USA that gives a Git log official status
as a record of history? I haven't done a detailed search here, but I am
not aware of any case law in the UK on this. Other jurisdictions will
have their own rules obviously.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Official compiler

2016-02-26 Thread Radu via Digitalmars-d

On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:

On 2/18/2016 1:30 PM, Jonathan M Davis wrote:
It's not a strawman. Walter has stated previously that he's 
explicitly avoided
looking at the source code for other compilers like gcc, 
because he doesn't want
anyone to be able to accuse him of stealing code, copyright 
infringement, etc.
Now, that's obviously much more of a risk with gcc than llvm 
given their
respective licenses, but it is a position that Walter has 
taken when the issue

has come up, and it's not something that I'm making up.

Now, if Walter were willing to give up on the dmd backend 
entirely, then
presumably, that wouldn't be a problem anymore regardless of 
license issues, but
he still has dmc, which uses the same backend, so I very much 
doubt that that's

going to happen.


It's still an issue I worry about. I've been (falsely) accused 
of stealing code in the past, even once accused of having 
stolen the old Datalight C compiler from some BYU students. 
Once a game company stole Empire, and then had the astonishing 
nerve to sic their lawyers on me accusing me of stealing it 
from them! (Showing them my registered copyright of the source 
code that predated their claim by 10 years was entertaining.)


More recently this came up in the Tango/Phobos rift, as some of 
the long term members here will recall.


So it is not an issue to be taken too lightly. I have the scars 
to prove it :-/


One thing I adore about github is it provides a legal audit 
trail of where the code came from. While that proves nothing 
about whether contributions are stolen or not, it provides a 
date stamp (like my registered copyright did), and if stolen 
code does make its way into the code base, it can be precisely 
excised. Github takes a great load off my mind.


There are other reasons to have dmd's back end. One obvious one 
is we wouldn't have had a Win64 port without it. And anytime we 
wish to experiment with something new in code generation, it's 
a helluva lot easier to do that with dmd than with the 
monumental code bases of gcc and llvm.


One thing that has changed a lot in my attitudes is I no longer 
worry about people stealing my code. If someone can make good 
use of my stuff, have at it. Boost license FTW!


I wish LLVM would switch to the Boost license, in particular 
removing this clause:


"Redistributions in binary form must reproduce the above 
copyright notice, this list of conditions and the following 
disclaimers in the documentation and/or other materials 
provided with the distribution."


Reading it adversely means if I write a simple utility and 
include a few lines from LLVM, I have to include that license 
in the binary and a means to print it out. If I include a bit 
of code from several places, each with their own version of 
that license, there's just a bunch of crap to deal with to be 
in compliance.


Please don't get me wrong, we all appreciate what you offered to 
the D community, but all these legal arguments are strongly tied 
to you, and less so to the community.


Your LLVM license nitpick is hilarious, you can't do that when 
the "official" D compiler has a non-liberally-licensed backend, 
you just can't. Speaking of which, I think realistically DMD's 
backend will generally have ~1 major contributor - I think you 
can guess who that is.


But setting things aside, we all need to acknowledge that the 
current setup is not fair to motivated and proven third party 
compilers, their contributors, and their users.


The D ecosystem must create and foster a friendly environment to 
anyone wanting to have a good compiler that is current with the 
language/runtime/phobos developments.


I'm not seeing you, or Andrei, exploring and encouraging this 
actively; what I see is a defensive approach resting on DMD's merits.


Re: Official compiler

2016-02-26 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 25 February 2016 at 23:48:15 UTC, Xavier Bigand 
wrote:

IMO if Go is a fast compiler it is just because dmd showed the way.


Go was designed to compile fast because Google was looking for 
something faster than C++ for largish projects. The authors were 
also involved with Unix/Plan9 and have experience with creating 
languages and compilers for building operating systems...


Anyway, compilation speed isn't the primary concern these days 
when you look at how people pick their platform. People tend to 
go for languages/compilers that are convenient, generate good 
code, support many platforms and resort to parallell builds when 
the project grows.


You can build a very fast compiler for a stable language with a 
simple type system like C that doesn't even build an AST (using 
an implicit AST) and does code-gen on the fly. But it turns out 
people prefer sticking to GCC even when other C compilers have 
been 10-20x faster.




Re: Official compiler

2016-02-26 Thread Walter Bright via Digitalmars-d

On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:

I thought that multithreaded I/O did not change anything, or slowed compilation
down in some cases?

Or I recall seeing a slight slowdown when I first tested it in gdc all those
years ago.  So left it disabled - probably for the best too.



Running one test won't really give much useful information. I also wrote:

"On a machine with local disk and running nothing else, no speedup. With a slow 
filesystem, like an external, network, or cloud (!) drive, yes. I would also 
expect it to speed up when the machine is running a lot of other stuff."


Re: Official compiler

2016-02-26 Thread Iain Buclaw via Digitalmars-d
On 25 Feb 2016 11:05 pm, "Walter Bright via Digitalmars-d" 
<digitalmars-d@puremagic.com> wrote:
>
> On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:
>>
>> Good to know, thanks! -- Andrei
>
> DMD did slow down because it was now being compiled by DMD instead of
> g++. Also, dmd was doing multithreaded file I/O, but that was removed
> because speed didn't matter.

I thought that multithreaded I/O did not change anything, or slowed
compilation down in some cases?

Or I recall seeing a slight slowdown when I first tested it in gdc all
those years ago.  So left it disabled - probably for the best too.


Re: Official compiler

2016-02-25 Thread Jonathan M Davis via Digitalmars-d

On Friday, 26 February 2016 at 06:19:27 UTC, Walter Bright wrote:
I wish LLVM would switch to the Boost license, in particular 
removing this clause:


"Redistributions in binary form must reproduce the above 
copyright notice, this list of conditions and the following 
disclaimers in the documentation and/or other materials 
provided with the distribution."


Reading it adversely means if I write a simple utility and 
include a few lines from LLVM, I have to include that license 
in the binary and a means to print it out. If I include a bit 
of code from several places, each with their own version of 
that license, there's just a bunch of crap to deal with to be 
in compliance.


That's why I tend to encourage folks to use the Boost license 
rather than the BSD license when it comes up (LLVM isn't 
BSD-licensed, but its license is very similar). While source 
attribution makes sense, I just don't want to deal with binary 
attribution in anything I write. It does make some sense when you 
don't want someone to be able to claim that they didn't use your 
code (even if you're not looking to require that they open 
everything up like the GPL does), but for the most part, I just 
don't think that that's worth it - though it is kind of cool that 
some commercial stuff (like the PS4) is using BSD-licensed code, 
and we know it, because they're forced to give attribution with 
their binaries.


- Jonathan M Davis


Re: Official compiler

2016-02-25 Thread Walter Bright via Digitalmars-d

On 2/18/2016 1:30 PM, Jonathan M Davis wrote:

It's not a strawman. Walter has stated previously that he's explicitly avoided
looking at the source code for other compilers like gcc, because he doesn't want
anyone to be able to accuse him of stealing code, copyright infringement, etc.
Now, that's obviously much more of a risk with gcc than llvm given their
respective licenses, but it is a position that Walter has taken when the issue
has come up, and it's not something that I'm making up.

Now, if Walter were willing to give up on the dmd backend entirely, then
presumably, that wouldn't be a problem anymore regardless of license issues, but
he still has dmc, which uses the same backend, so I very much doubt that that's
going to happen.


It's still an issue I worry about. I've been (falsely) accused of stealing code 
in the past, even once accused of having stolen the old Datalight C compiler 
from some BYU students. Once a game company stole Empire, and then had the 
astonishing nerve to sic their lawyers on me accusing me of stealing it from 
them! (Showing them my registered copyright of the source code that predated 
their claim by 10 years was entertaining.)


More recently this came up in the Tango/Phobos rift, as some of the long term 
members here will recall.


So it is not an issue to be taken too lightly. I have the scars to prove it :-/

One thing I adore about github is it provides a legal audit trail of where the 
code came from. While that proves nothing about whether contributions are stolen 
or not, it provides a date stamp (like my registered copyright did), and if 
stolen code does make its way into the code base, it can be precisely excised. 
Github takes a great load off my mind.


There are other reasons to have dmd's back end. One obvious one is we wouldn't 
have had a Win64 port without it. And anytime we wish to experiment with 
something new in code generation, it's a helluva lot easier to do that with dmd 
than with the monumental code bases of gcc and llvm.


One thing that has changed a lot in my attitudes is I no longer worry about 
people stealing my code. If someone can make good use of my stuff, have at it. 
Boost license FTW!


I wish LLVM would switch to the Boost license, in particular removing this 
clause:

"Redistributions in binary form must reproduce the above copyright notice, this 
list of conditions and the following disclaimers in the documentation and/or 
other materials provided with the distribution."


Reading it adversely means if I write a simple utility and include a few lines 
from LLVM, I have to include that license in the binary and a means to print it 
out. If I include a bit of code from several places, each with their own version 
of that license, there's just a bunch of crap to deal with to be in compliance.




Re: Official compiler

2016-02-25 Thread Jonathan M Davis via Digitalmars-d

On Friday, 26 February 2016 at 00:56:22 UTC, Walter Bright wrote:

On 2/25/2016 3:06 PM, H. S. Teoh via Digitalmars-d wrote:

I remember you did a bunch of stuff to the optimizer after the
switchover to self-hosting; how much of a difference did that make?
Are there any low-hanging fruit left that could make dmd faster?


There's a lot of low hanging fruit in dmd. In particular, far 
too many templates are instantiated over and over.


LOL. That would be an understatement. IIRC, at one point, Don 
figured out that we were instantiating _millions_ of templates 
for the std.algorithm unit tests. The number of templates used in 
template constraints alone is likely through the roof. Imagine 
how many times something like isInputRange!string gets compiled 
in your typical program. With how template-heavy range-based code 
is, almost anything we can do to speed up the compiler with regard 
to templates is likely to pay off.
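
To make the pattern concrete, here is a minimal sketch in D (the 
helper is hypothetical, not Phobos code): every distinct range type 
R that reaches the constraint instantiates isInputRange!R, and with 
separate compilation the same common checks are redone in every 
module that uses them.

  import std.range.primitives;

  // Hypothetical helper, for illustration only. Each distinct R
  // instantiates isInputRange!R; common instantiations such as
  // isInputRange!string are evaluated again in every compilation
  // that uses them.
  auto firstOrDefault(R, E)(R r, E def) if (isInputRange!R)
  {
      return r.empty ? def : r.front;
  }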


- Jonathan M Davis



Re: Official compiler

2016-02-25 Thread Chris Wright via Digitalmars-d
On Fri, 26 Feb 2016 00:48:15 +0100, Xavier Bigand wrote:

> Is dmd multi-threaded?

Not at present.

It should be relatively easy to parallelize IO and parsing, at least in 
theory. I think IO parallelism was removed with the ddmd switch, maybe? 
But you'd have to identify the files you need to read in advance, so 
that's not as straightforward.

D's metaprogramming is too complex for a 100% solution for parallelizing 
semantic analysis on a module level. But you could create a partial 
solution:
* After parsing, look for unconditional imports. Skip static if/else 
blocks, skip template bodies, but grab everything else.
* Make a module dependency graph from that.
* Map each module to a task.
* Merge dependency cycles into single tasks. You now have a DAG.
* While there are any tasks in the graph:
  - Find all leaf tasks in the graph.
  - Run semantic analysis on them in parallel.

When you encounter a conditional or mixed-in import, you can insert it 
into the DAG if it's not already there, but it would be simpler just to 
run analysis right then and there.

Alternatively, you can find regular and conditional imports and try to 
use them all. But this requires you to hold errors until you're certain 
that the module is used, and you end up doing more work overall. And that 
could be *tons* more work. Consider:

  module a;
  enum data = import("ten_million_records.csv");
  mixin(createClassesFromData(data));

  module b;
  enum shouldUseModuleA = false;

  module c;
  import b;
  static if (shouldUseModuleA) import a;

And even if you ignored that, you'd still have to deal with mixed-in 
imports, which can be the result of arbitrarily complex CTFE expressions.

While all of this is straightforward in theory, it probably isn't so 
simple in practice.
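
For what it's worth, the scheduling loop itself is easy to sketch. 
Below is a minimal, purely illustrative version in D; the Task type 
and analyze function are made up and bear no relation to dmd's 
actual internals:

  import std.algorithm.iteration : filter;
  import std.array : array;
  import std.parallelism : parallel;

  struct Task
  {
      string name;   // a module, or a merged cycle of modules
      Task*[] deps;  // edges of the DAG
      bool done;
  }

  void analyze(Task* t) { /* run semantic analysis for t's modules */ }

  void runSemantic(Task*[] graph)
  {
      // Assumes cycles were already merged, so a leaf always exists.
      while (graph.length)
      {
          // Leaf tasks: every dependency has been analyzed already.
          auto leaves = graph
              .filter!(t => t.deps.filter!(d => !d.done).empty)
              .array;
          foreach (t; parallel(leaves)) // independent, so run in parallel
              analyze(t);
          foreach (t; leaves)
              t.done = true;
          graph = graph.filter!(t => !t.done).array;
      }
  }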


Re: Official compiler

2016-02-25 Thread Walter Bright via Digitalmars-d

On 2/25/2016 3:06 PM, H. S. Teoh via Digitalmars-d wrote:

I remember you did a bunch of stuff to the optimizer after the
switchover to self-hosting; how much of a difference did that make? Are
there any low-hanging fruit left that could make dmd faster?


There's a lot of low hanging fruit in dmd. In particular, far too many templates 
are instantiated over and over.


The data structures need to be looked at, and the excessive memory consumption 
also slows things down.



On a related note, I discovered an O(n^2) algorithm in the front-end...
it's unlikely to be an actual bottleneck in practice (basically it's
quadratic in the number of fields in an aggregate), though you never
know. It actually does a full n^2 iterations, and seemed like it could
be at least pared down to n(n+1)/2, even without doing better than
O(n^2).


Please add a comment to the source code about this and put it in a PR.



Re: Official compiler

2016-02-25 Thread Walter Bright via Digitalmars-d

On 2/25/2016 3:06 PM, David Nadlinger wrote:

On Thursday, 25 February 2016 at 22:03:47 UTC, Walter Bright wrote:

DMD did slow down because it was now being compiled by DMD instead of g++.

You can compile it using LDC just fine now. ;)


I think we should ask Martin to set that up for the release builds.


Also, dmd was doing multithreaded file I/O, but that was removed because speed
didn't matter.

Did we ever have any numbers showing that this in particular produced a tangible
performance benefit (even a single barnacle)?


On a machine with local disk and running nothing else, no speedup. With a slow 
filesystem, like an external, network, or cloud (!) drive, yes. I would also 
expect it to speed up when the machine is running a lot of other stuff.
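
To make the trade-off concrete, here is a rough sketch, purely 
illustrative and not what dmd actually did, of overlapping source 
reads with std.parallelism:

  import std.file : read;
  import std.parallelism : taskPool;

  // Read all source files on the shared task pool. On a fast local
  // disk this is roughly a wash; on a high-latency filesystem the
  // reads overlap and total wall time drops.
  string[] loadSources(string[] files)
  {
      return taskPool.amap!((string f) => cast(string) read(f))(files);
  }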




LDC doesn't do so either. I think what rsw0x referred to is doing a normal
"C-style" parallel compilation of several compilation unit. I'm not sure why
this couldn't also be done with DMD, though.


-j should work just fine with dmd.

There's a lot internal to the compiler that can be parallelized - just about 
everything but the semantic analysis.




Re: Official compiler

2016-02-25 Thread Xavier Bigand via Digitalmars-d

On 25/02/2016 03:48, Walter Bright wrote:

On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:

I've also heard from big users who want the performance more than
compile time and hit difficulty in build scaling.


I know that performance trumps all for many users. But we can have both
- dmd and ldc/gdc.

My point is that compile speed is a valuable and distinguishing feature
of D. It's one that I have to constantly maintain, or it bit rots away.
It's one that people regularly dismiss as unimportant. Sometimes it
seems I'm the only one working on the compiler who cares about it.

For comparison, C++ compiles like a pig, I've read that Rust compiles
like a pig, and Go makes a lot of hay for compiling fast.


I think you are being very gentle with C++. It can be hell: when you are 
working on multiple platforms with different compilers and build systems, 
it takes a lot of effort and time to keep compilation times at a 
decent level.
I recently made optimizations to our build configurations after adding 
some boost modules at my day job. Our build time doubled instantly.

Every one of these optimizations has a significant cost somewhere else:
- PIMPL: increases code complexity, decreases performance
- Precompiled headers: not standard; mingw is limited to a 130 MB generated file
- Unity builds: can be hard to add to many build systems if 
auto-generated; the compiler can crash with an out-of-memory error (mingw 
will be the first)

- Cleaning up our includes: how do you do that without tools?
- Multi-threaded compilation: not standard; sometimes it has to be 
configured per machine


So thank you for having created a fast compiler, even if I can only dream 
of being able to use it professionally one day.

IMO if Go is a fast compiler, it is largely because dmd showed the way.

Is dmd multi-threaded?

PS: I don't understand why import modules aren't in C++ already; the 
clang team has been working on them for years.


Re: Official compiler

2016-02-25 Thread H. S. Teoh via Digitalmars-d
On Thu, Feb 25, 2016 at 02:03:47PM -0800, Walter Bright via Digitalmars-d wrote:
> On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:
> >Good to know, thanks! -- Andrei
> 
> DMD did slow down because it was now being compiled by DMD instead of
> g++.  Also, dmd was doing multithreaded file I/O, but that was removed
> because speed didn't matter.
[...]

I remember you did a bunch of stuff to the optimizer after the
switchover to self-hosting; how much of a difference did that make? Are
there any low-hanging fruit left that could make dmd faster?

On a related note, I discovered an O(n^2) algorithm in the front-end...
it's unlikely to be an actual bottleneck in practice (basically it's
quadratic in the number of fields in an aggregate), though you never
know. It actually does a full n^2 iterations, and seemed like it could
be at least pared down to n(n+1)/2, even without doing better than
O(n^2).
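
Concretely, with hypothetical names, the easy saving is just starting 
the inner loop at the outer index, which is valid whenever the 
per-pair check is symmetric:

  // Full n^2 iterations over an aggregate's fields:
  foreach (i; 0 .. fields.length)
      foreach (j; 0 .. fields.length)
          checkPair(fields[i], fields[j]);

  // If checkPair(a, b) is symmetric, n(n+1)/2 iterations suffice;
  // still O(n^2), but roughly half the work:
  foreach (i; 0 .. fields.length)
      foreach (j; i .. fields.length)
          checkPair(fields[i], fields[j]);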


T

-- 
What is Matter, what is Mind? Never Mind, it doesn't Matter.


Re: Official compiler

2016-02-25 Thread David Nadlinger via Digitalmars-d
On Thursday, 25 February 2016 at 22:03:47 UTC, Walter Bright 
wrote:
DMD did slow down because it was now being compiled by DMD 
instead of g++.


You can compile it using LDC just fine now. ;)

Also, dmd was doing multithreaded file I/O, but that was 
removed because speed didn't matter.


Did we ever have any numbers showing that this in particular 
produced a tangible performance benefit (even a single barnacle)?



As I said, keeping the compiler speed up is a constant battle.


And this leaves me wondering why nobody ever wrote a 
comprehensive compiler performance tracking tool. There is Dash, 
my half-finished CI-style project (that a couple of people were 
interested in picking up after DConf, but which never really 
happened), and Vladimir's quite limited TrenD adaptation of 
Mozilla's areweslimyet, but nobody really came up with something 
that would be part of our day-to-day development workflow.


Currently, dmd makes zero use of multicore. I didn't know that 
ldc did.


LDC doesn't do so either. I think what rsw0x referred to is doing 
a normal "C-style" parallel compilation of several compilation 
units. I'm not sure why this couldn't also be done with DMD, 
though.


 — David


Re: Official compiler

2016-02-25 Thread Atila Neves via Digitalmars-d

On Thursday, 25 February 2016 at 22:38:45 UTC, Atila Neves wrote:

On Thursday, 25 February 2016 at 19:55:20 UTC, rsw0x wrote:

[...]


Would it be possible to point me in the direction of projects 
where you saw ldc being faster? I didn't try per-module, but on 
the projects I tried, dmd is still considerably faster when 
compiling per-package. And given that per-package is nearly 
always faster than per-module... 
(http://forum.dlang.org/post/yfykbayodugukemvo...@forum.dlang.org)


Atila


Forgot to say: I measured seconds ago on the most recent dmd and 
ldc with an 8-core CPU with hyperthreading (so 16 threads).


Atila


Re: Official compiler

2016-02-25 Thread Atila Neves via Digitalmars-d

On Thursday, 25 February 2016 at 19:55:20 UTC, rsw0x wrote:

On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:

On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
If we would make GDC or LDC the official compiler then the 
next question which pops up is about compilation speed




ldc is still significantly faster than clang, or gdc than gcc. 
I don't think this is that much of a valid concern, especially 
for smaller programs.


For larger programs, LDC with single-file compilation outdoes 
DMD by a large factor on any recent multi-core CPU for both 
debug and release builds in my tests. DMD did not scale across 
cores anywhere near as well as LDC.
OTOH, it does not benefit from singleobj this way when doing 
release builds.


Would it be possible to point me in the direction of projects 
where you saw ldc being faster? I didn't try per-module, but on 
the projects I tried, dmd is still considerably faster when 
compiling per-package. And given that per-package is nearly 
always faster than per-module... 
(http://forum.dlang.org/post/yfykbayodugukemvo...@forum.dlang.org)


Atila


Re: Official compiler

2016-02-25 Thread Walter Bright via Digitalmars-d

On 2/25/2016 1:50 PM, Andrei Alexandrescu wrote:

Good to know, thanks! -- Andrei


DMD did slow down because it was now being compiled by DMD instead of g++. Also, 
dmd was doing multithreaded file I/O, but that was removed because speed didn't 
matter.


As I said, keeping the compiler speed up is a constant battle.

Currently, dmd makes zero use of multicore. I didn't know that ldc did.


Re: Official compiler

2016-02-25 Thread Andrei Alexandrescu via Digitalmars-d

On 02/25/2016 02:55 PM, rsw0x wrote:

On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:

On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:

If we would make GDC or LDC the official compiler then the next
question which pops up is about compilation speed



ldc is still significantly faster than clang, or gdc than gcc. I don't
think this is that much of a valid concern, especially for smaller
programs.


For larger programs, LDC with single-file compilation outdoes DMD by a
large factor on any recent multi-core CPU for both debug and release
builds in my tests. DMD did not scale across cores anywhere near as well
as LDC.
OTOH, it does not benefit from singleobj this way when doing release
builds.


Good to know, thanks! -- Andrei


Re: Official compiler

2016-02-25 Thread rsw0x via Digitalmars-d

On Thursday, 25 February 2016 at 19:25:38 UTC, deadalnix wrote:

On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
If we would make GDC or LDC the official compiler then the 
next question which pops up is about compilation speed




ldc is still significantly faster than clang, or gdc than gcc. 
I don't think this is that much of a valid concern, especially 
for smaller programs.


For larger programs, LDC with single-file compilation outdoes DMD 
by a large factor on any recent multi-core CPU for both debug and 
release builds in my tests. DMD did not scale across cores 
anywhere near as well as LDC.
OTOH, it does not benefit from singleobj this way when doing 
release builds.


Re: Official compiler

2016-02-25 Thread deadalnix via Digitalmars-d

On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
If we would make GDC or LDC the official compiler then the next 
question which pops up is about compilation speed




ldc is still significantly faster than clang, or gdc than gcc. I 
don't think this is that much of a valid concern, especially for 
smaller programs.




Re: Official compiler

2016-02-25 Thread rsw0x via Digitalmars-d
On Thursday, 25 February 2016 at 17:57:49 UTC, David Nadlinger 
wrote:
I'm only playing devil's advocate because many people here make 
it seem as if there was no cost to supporting multiple 
compilers, while there most definitely is. This ranges from the 
blatant duplication of work over PR issues to the fact that big 
language/compiler features are all but impossible to implement 
for anybody but you, since you are the only one who knows how 
to implement them on DMD (and in the current situation, not 
having them available in DMD would be a deal-breaker).




It would be nice if the DMD frontend were completely uprooted from 
the DMD backend and put into separate git projects. The frontend 
should be completely agnostic about which backend it's using, or 
else it's just more trouble the LDC/GDC developers have to deal 
with.


Re: Official compiler

2016-02-25 Thread David Nadlinger via Digitalmars-d
On Thursday, 25 February 2016 at 03:05:21 UTC, Walter Bright 
wrote:

On 2/18/2016 9:52 AM, Kai Nacke wrote:

I really like the compiler diversity.


Me too. Having 3 major implementations is a great source of 
strength for D.


I like it too. I just think that we can't afford it at this 
point, and that this is a major impediment to improving the 
quality of the D ecosystem.


 — David


Re: Official compiler

2016-02-25 Thread David Nadlinger via Digitalmars-d
On Thursday, 25 February 2016 at 02:58:08 UTC, Walter Bright 
wrote:
A big chunk of that was getting D to catch C++ exceptions. And 
before I did this work, neither GDC nor LDC did, either. It's 
not a simple matter of just turning it on given Dwarf EH.


That's beside the point; the C++ interop needed to be worked out 
either way and is not specific to the DMD backend. In that stupid 
example I gave, I was referring to the DWARF EH implementation 
itself, which will have taken you a non-negligible amount of time 
due to all the barely documented details, unless you are even 
more of a super-human compiler implementation expert than I 
already know you are. ;)


Don't get me wrong, I couldn't care less about the details of how 
long it took whom to implement C++ EH interop (or the fact that 
it did exist before in LDC/Calypso, and in the form of prototypes 
for vanilla GDC/LDC, etc.).


I'm only playing devil's advocate because many people here make 
it seem as if there was no cost to supporting multiple compilers, 
while there most definitely is. This ranges from the blatant 
duplication of work over PR issues to the fact that big 
language/compiler features are all but impossible to implement 
for anybody but you, since you are the only one who knows how to 
implement them on DMD (and in the current situation, not having 
them available in DMD would be a deal-breaker).


Sure, the fact that you know all the nitty-gritty details of one 
backend might make implementing certain changes easier for you, 
as you pointed out. But the fact that this one backend is obscure 
compared to the usual suspects, poorly documented and 
license-encumbered pretty much ensures that you will remain the 
only person to tackle such projects in the future.


 — David




Re: Official compiler

2016-02-25 Thread karabuta via Digitalmars-d
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:

On 2/17/2016 4:35 PM, Chris Wright wrote:

And since DMD is something like twice as fast as LDC, there's at
least some argument in favor of keeping it around.


When I meet someone new who says they settled on D in their 
company for development, I casually ask why they selected D.


  "Because it compiles so fast."

It's not a minor issue.


+1 Well spoken


Re: Official compiler

2016-02-25 Thread David Nadlinger via Digitalmars-d
On Thursday, 25 February 2016 at 09:04:17 UTC, Russel Winder 
wrote:
I wonder if anyone has noticed, or appreciated, that all the 
trendy, hipster cloud-based CI servers support Go, sometimes 
C++ and C (sort of), but not Rust or D.


Travis CI, which is probably the one "trendy, hipster" service 
most would think of, has been supporting D for quite a while 
now due to Martin Nowak's great work. (He put Iain's name and 
mine on it too, but we didn't really contribute at all.)
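
(For reference, the community D support makes a minimal .travis.yml 
look something like the following; the exact keys are written from 
memory rather than the docs, so treat it as a sketch:

  language: d
  d:
    - dmd
    - ldc
    - gdc
  script: dub test

with the d: matrix selecting which compilers to test against.)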


Of course, there is always room for improving the integration 
with this and similar services. When I'm saying that dividing the 
attention between three compilers is a strategic mistake, it's 
not because I doubt that having multiple compilers is a nice 
thing to have. It certainly is. But I'm convinced that expending 
the same amount of effort on the wider ecosystem would get us 
much farther.


 — David


Re: Official compiler

2016-02-25 Thread karabuta via Digitalmars-d
On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
wrote:

On Thursday, 18 February 2016 at 06:57:01 UTC, Kai Nacke wrote:
even if DMD is the official reference compiler, the download 
page http://dlang.org/download.html already mentions "strong 
optimization" as pro of GDC/LDC vs. "very fast compilation 
speeds" as pro of DMD.


If we would make GDC or LDC the official compiler then the 
next question which pops up is about compilation speed


Yeah. dmd's compilation speed has been a huge win for us and 
tends to make a very good first impression. And as far as 
development goes, fast compilation speed matters a lot more 
than fast binaries. So, assuming that they're compatible enough 
(which ideally they are but aren't always), I would argue that 
the best approach would be to use dmd to develop your code and 
then use gdc or ldc to build the production binary. We benefit 
by having all of these compilers, and I seriously question that 
changing which one is the "official" one is going to help any. 
It just shifts which set of complaints we get.


- Jonathan M Davis


Yep. Fast compilation during development must not be sacrificed 
for fast binaries. What are you really building that needs fast 
binaries during development?


However, I strongly agree with cleaning up the language instead 
of adding more features.


Re: Official compiler

2016-02-25 Thread Joakim via Digitalmars-d
On Thursday, 25 February 2016 at 02:58:08 UTC, Walter Bright 
wrote:

On 2/18/2016 11:54 AM, David Nadlinger wrote:

But imagine that Walter had invested all the time he spent e.g. on 
implementing DWARF EH into optimizing the LDC frontend/glue 
layer/backend pass structure instead. Who knows, we might have an 
LDC-based compiler today that is faster than the DMD we currently 
have.


A big chunk of that was getting D to catch C++ exceptions. And 
before I did this work, neither GDC nor LDC did, either. It's 
not a simple matter of just turning it on given Dwarf EH.


The point being, a lot of things are not going to happen for D 
unless I do them. Many of these require changing the front end, 
back end, and the runtime library in concert. It's a lot easier 
to make these work when the person working on it understands 
how all three work.


Once they're done, they provide a good guide on how to get it 
to work with a monumental code base like the gdc and ldc 
backends are.


That's a good argument for keeping your backend.  I also like 
that it will be in D one day, meaning a completely bootstrapped D 
compiler. :)


It would help if you weren't doing other stuff that others could 
also do, as you've complained about.  You should keep a list of 
tasks online, ones you consider important but that others could 
reasonably do.  That would give them an avenue to take stuff off 
your plate, freeing you up to work on what you do best.


Re: Official compiler

2016-02-25 Thread Russel Winder via Digitalmars-d
On Wed, 2016-02-24 at 18:48 -0800, Walter Bright via Digitalmars-d
wrote:
> […]
> 
> For comparison, C++ compiles like a pig, I've read that Rust compiles
> like a 
> pig, and Go makes a lot of hay for compiling fast.

I wonder if anyone has noticed, or appreciated, that all the trendy,
hipster cloud-based CI servers support Go, sometimes C++ and C (sort
of), but not Rust or D.

Public CI and deployment support are increasingly an issue for FOSS
projects, not just for goodness, but also for marketing.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Official compiler

2016-02-25 Thread Radu via Digitalmars-d
On Thursday, 25 February 2016 at 03:05:21 UTC, Walter Bright 
wrote:

On 2/18/2016 9:52 AM, Kai Nacke wrote:

I really like the compiler diversity.


Me too. Having 3 major implementations is a great source of 
strength for D.


This needs to go further; currently there is no up-to-date, 
high-performance, cross-architecture compiler.


The way I see it is to integrate one of the compilers (the best 
candidate is LDC) into the release cycle.


I know that LDC is really close to getting to the 2.070 level; if 
mainline only takes regressions and bug fixes for a while, there is 
a good chance LDC could catch up and become part of the daily 
merge auto-tester loop. This would lower the pressure of 
constantly merging from LDC and allow focusing on other parts of 
the LDC compiler.


Can this be attainable?


Re: Official compiler

2016-02-24 Thread rsw0x via Digitalmars-d

On Thursday, 25 February 2016 at 03:47:33 UTC, rsw0x wrote:
On Thursday, 25 February 2016 at 03:26:54 UTC, Adam D. Ruppe 
wrote:

On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:

licensing issues


I can't see any... Walter would be licensed to distribute all 
three.


GDC is under GPL


Oh, my bad, I reread the post. I thought he meant combining them as 
in a single frontend/three backends in a single executable.

Nevermind.


Re: Official compiler

2016-02-24 Thread rsw0x via Digitalmars-d
On Thursday, 25 February 2016 at 03:26:54 UTC, Adam D. Ruppe 
wrote:

On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:

licensing issues


I can't see any... Walter would be licensed to distribute all 
three.


GDC is under GPL


Re: Official compiler

2016-02-24 Thread Adam D. Ruppe via Digitalmars-d

On Thursday, 25 February 2016 at 03:16:57 UTC, rsw0x wrote:

licensing issues


I can't see any... Walter would be licensed to distribute all 
three.




Re: Official compiler

2016-02-24 Thread rsw0x via Digitalmars-d

On Thursday, 25 February 2016 at 03:07:20 UTC, Puming wrote:
On Thursday, 25 February 2016 at 02:48:24 UTC, Walter Bright 
wrote:

[...]


Maybe in the future, when ldc/gdc catch up with dmd's version, 
we can combine them into a bundle for downloads? Then new 
people can just download the compiler bundle and run dmd or 
ldc/gdc as they like.


licensing issues


Re: Official compiler

2016-02-24 Thread Puming via Digitalmars-d
On Thursday, 25 February 2016 at 02:48:24 UTC, Walter Bright 
wrote:

On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:
I've also heard from big users who want the performance more 
than compile time and hit difficulty in build scaling.


I know that performance trumps all for many users. But we can 
have both - dmd and ldc/gdc.


My point is that compile speed is a valuable and distinguishing 
feature of D. It's one that I have to constantly maintain, or 
it bit rots away. It's one that people regularly dismiss as 
unimportant. Sometimes it seems I'm the only one working on the 
compiler who cares about it.


For comparison, C++ compiles like a pig, I've read that Rust 
compiles like a pig, and Go makes a lot of hay for compiling 
fast.


Maybe in the future, when ldc/gdc catch up with dmd's version, 
we can combine them into a bundle for downloads? Then new people 
can just download the compiler bundle and run dmd or ldc/gdc as 
they like.


Re: Official compiler

2016-02-24 Thread Walter Bright via Digitalmars-d

On 2/18/2016 9:52 AM, Kai Nacke wrote:

I really like the compiler diversity.


Me too. Having 3 major implementations is a great source of strength for D.



Re: Official compiler

2016-02-24 Thread Walter Bright via Digitalmars-d

On 2/18/2016 11:54 AM, David Nadlinger wrote:

But imagine that Walter
had invested all the time he spent e.g. on implementing DWARF EH into
optimizing the LDC frontend/glue layer/backend pass structure instead. Who
knows, we might have an LDC-based compiler today that is faster than the DMD we
currently have.


A big chunk of that was getting D to catch C++ exceptions. And before I did this 
work, neither GDC nor LDC did, either. It's not a simple matter of just turning 
it on given Dwarf EH.


The point being, a lot of things are not going to happen for D unless I do them. 
Many of these require changing the front end, back end, and the runtime library 
in concert. It's a lot easier to make these work when the person working on it 
understands how all three work.


Once they're done, they provide a good guide on how to get it to work with a 
monumental code base like the gdc and ldc backends are.




Re: Official compiler

2016-02-24 Thread Walter Bright via Digitalmars-d

On 2/24/2016 6:05 PM, Adam D. Ruppe wrote:

I've also heard from big users who want the performance more than compile time
and hit difficulty in build scaling.


I know that performance trumps all for many users. But we can have both - dmd 
and ldc/gdc.


My point is that compile speed is a valuable and distinguishing feature of D. 
It's one that I have to constantly maintain, or it bit rots away. It's one that 
people regularly dismiss as unimportant. Sometimes it seems I'm the only one 
working on the compiler who cares about it.


For comparison, C++ compiles like a pig, I've read that Rust compiles like a 
pig, and Go makes a lot of hay for compiling fast.


Re: Official compiler

2016-02-24 Thread Brian Schott via Digitalmars-d

On Thursday, 25 February 2016 at 02:08:32 UTC, Paul O'Neil wrote:

On 02/18/2016 02:06 PM, rsw0x wrote:
I believe Brian Schott had worked on something like this for 
D... Did that ever go anywhere?


Brian's project is at https://github.com/Hackerpilot/generated .

I can't speak to the state of the project, but it hasn't been 
touched in about a year.


I built that to fuzz test parsers, not code generation or 
anything else. I can pretty much guarantee that its output should 
not compile.


Re: Official compiler

2016-02-24 Thread Paul O'Neil via Digitalmars-d
On 02/18/2016 02:06 PM, rsw0x wrote:
> On Thursday, 18 February 2016 at 17:52:10 UTC, Kai Nacke wrote:
>> I really like the compiler diversity. What I miss (hint!) is a program
>> to verify the compiler/backend correctness. Just generate a random D
>> program, compile with all 3 compilers and compare the output. IMHO we
>> could find a lot of backend bugs this way. This would help all D
>> compilers.
>>
>> Regards,
>> Kai
> 
> reminds me of csmith
>  https://embed.cs.utah.edu/csmith/
> 
> I believe Brian Schott had worked on something like this for D... Did
> that ever go anywhere?

Brian's project is at https://github.com/Hackerpilot/generated .

I can't speak to the state of the project, but it hasn't been touched in
about a year.

-- 
Paul O'Neil
Github / IRC: todayman


Re: Official compiler

2016-02-24 Thread Adam D. Ruppe via Digitalmars-d
On Thursday, 25 February 2016 at 01:53:51 UTC, Walter Bright 
wrote:
When I meet someone new who says they settled on D in their 
company for development, I casually ask why they selected D.


  "Because it compiles so fast."


I actually agree this is a big issue and one of the killer 
features to me.


But, I also need to point out that there's a selection bias going 
on here: of course D's users today like D's strengths today. If 
they didn't, they wouldn't be using it.


I've also heard from big users who want the performance more than 
compile time and hit difficulty in build scaling.


Re: Official compiler

2016-02-24 Thread Walter Bright via Digitalmars-d

On 2/17/2016 4:35 PM, Chris Wright wrote:

And since DMD is
something like twice as fast as LDC, there's at least some argument in
favor of keeping it around.


When I meet someone new who says they settled on D in their company for 
development, I casually ask why they selected D.


  "Because it compiles so fast."

It's not a minor issue.



Re: Official compiler

2016-02-24 Thread jmh530 via Digitalmars-d
On Wednesday, 24 February 2016 at 22:43:07 UTC, Xavier Bigand 
wrote:


I know Visuald supports ldc, but for dub I didn't find anything 
on how it finds which compiler to use.


I agree the docs could be better. If you type dub build --help, 
it shows that --compiler is an option. So you would just pass 
--compiler=ldc2.
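
For example:

  dub build --compiler=ldc2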





Re: Official compiler

2016-02-24 Thread Xavier Bigand via Digitalmars-d

On 17/02/2016 23:57, Márcio Martins wrote:

I was reading the other thread "Speed kills" and was wondering if there
is any practical reason why DMD is the official compiler?

Currently, newcomers come expecting their algorithm from rosetta code to
run faster in D than their current language, but then it seems like it's
actually slower. What gives?

Very often the typical answer from this community is generally "did you
use LDC/GDC?".

Wouldn't it be a better newcomer experience if the official compiler was
either LDC or GDC?
For us current users it really doesn't matter what is labelled official,
we pick what serves us best, but for a newcomer, the word official
surely carries a lot of weight, doesn't it?

 From a marketing point of view, is it better for D as a language that
first-timers try the bleeding-edge, latest language features with DMD,
or that their expectations of efficient native code are not broken?

Apologies if this has been discussed before...


Like you said, it's only a marketing issue.

DMD can and will stay the reference, but IMO, as you also point out, the 
most important thing is certainly to have LDC/GDC more in sync with the 
DMD front-end version, because newcomers will feel comfortable using 
another up-to-date compiler that can also target more platforms.


I am exactly in this case: I prefer to use a compiler that makes my code 
run on all platforms (Android, iOS, ...), but I don't want to suffer from 
staying on an older front-end that limits me with missing features or 
bugs.

So I am using DMD, with some frustration, lol.

And ldc/gdc need more love:
1. A direct link to download the latest version (instead of a link to the 
project page)

2. An installer, like DMD's, that can optionally download Visuald
3. Auto-detection by build tools (dub, Visuald, ...)

I know Visuald supports ldc, but for dub I didn't find anything on how it 
finds which compiler to use.





Re: Official compiler

2016-02-19 Thread Ola Fosheim Grøstad via Digitalmars-d
On Friday, 19 February 2016 at 09:06:28 UTC, Jonathan M Davis 
wrote:
Walter has stated previously that there have been cases of 
lawyers coming to him about him possibly violating someone 
else's copyright, and when he tells them that he's never even 
looked at the source code, that satisfies them. And when the 
GPL is involved, that paranoia is probably a very good idea.


If FSF lawyers contact you without solid reason then it is 
newsworthy and should make headlines.  So I sincerely doubt that 
anyone from FSF has done so.


Some lawyers are trying to make a living out of acting like 
manipulative bastards, randomly fishing for a case, hoping you 
will put something in writing that they can twist. That does not mean 
they have a leg to stand on; just don't admit anything to them in 
writing.



you're not. And while the LLVM license would definitely allow 
LLVM code to be mixed into dmd's backend as long as the


Well, that would be silly anyway. IMO the better approach would 
be to create a high-level typed IR and have a clean 
non-optimizing backend (or JIT). Then leave the optimizing backend 
to LLVM.


Basically clean up the source code and introduce a clean separate 
layer between the templating system and codegen.


That way more people could work on it. Make it easy to work on 
one aspect of the compiler without understanding the whole.



Regardless, whether Walter is willing to look at LLVM/LDC or 
work on it at all is up to him.


Sure. It is better to have a very simple backend for an 
experimental/reference compiler.


But DMD's backend isn't simpler to understand than LLVM's.

If it were dead simple and made the compiler easier to understand, 
then it would be good to have it in.




Re: Official compiler

2016-02-19 Thread Radu via Digitalmars-d
On Friday, 19 February 2016 at 09:06:28 UTC, Jonathan M Davis 
wrote:
On Thursday, 18 February 2016 at 21:39:45 UTC, Ola Fosheim 
Grøstad wrote:
On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M 
Davis wrote:
It's not a strawman. Walter has stated previously that he's 
explicitly avoided looking at the source code for other 
compilers like gcc, because he doesn't want anyone to be able 
to accuse him of stealing code, copyright infringement, etc.


Isn't this much more likely to happen if you don't look at the 
codebase for other compilers? How do you know if someone 
submitting code isn't just translating from GCC if you haven't 
looked at GCC?  If you have looked at GCC, then you can just 
choose a different implementation. :-)


Anyway, the clean-virgin thing in programming is related to 
reverse engineering very small codebases where the 
implementation most likely is going to be very similar (like 
BIOS). So you have one team writing the spec and another team 
implementing the spec (with no communication between them).


Walter has stated previously that there have been cases of 
lawyers coming to him about him possibly violating someone 
else's copyright, and when he tells them that he's never even 
looked at the source code, that satisfies them. And when the 
GPL is involved, that paranoia is probably a very good idea. 
With the BSD license, and the license that LLVM uses (which is 
very close to the BSD license), it's nowhere near the same 
level of issue, since it really only comes down to giving 
attribution. But we had problems with that at one point with 
Tango code (which is BSD-licensed), so the dmd and Phobos devs 
as a whole have avoided even looking at Tango so that we could 
always, legitimately say that we hadn't looked at it and 
therefore could not possibly have copied from it.


So, this can be a real problem, even if it's just an issue with 
someone thinking that you should be giving attribution when 
you're not. And while the LLVM license would definitely allow 
LLVM code to be mixed into dmd's backend as long as the 
appropriate attribution was given, I don't know if Symantec 
would be okay with that or not. The fact that Symantec owns the 
dmd backend just makes things weird all around.


Regardless, whether Walter is willing to look at LLVM/LDC or 
work on it at all is up to him. I doubt that he'll choose to 
based on what he's said previously, but he might. However, I 
think that it's quite safe to say that GCC/GDC are completely 
off the table for him, because that's GPL-licensed, and folks 
definitely get unhappy when they think that you might have 
copied GPL code, and it's _not_ as simple as giving attribution 
to be able to take GPL-licensed code and mix it into your own, 
since it's a copyleft license (and one of the most extreme of 
them at that).


- Jonathan M Davis


For a change he might enjoy being the one that doesn't look/work 
with the backend, if those legal issues are such a major worry 
for him.


I remember back when DMD was even more closed-licensed than today, 
it sat on a different repo so people would not look at its code 
by accident. DMD had its use at the beginning when only Walter 
was running the show and things needed to be coded fast: he knew 
his turf, and that allowed him to implement stuff with ease. But 
today that argument has little value; see the whole Dwarf EH 
stuff he had to do...


There are plenty of things to do in the language design, frontend, 
and runtime; his work would greatly improve those parts.


Re: Official compiler

2016-02-19 Thread Jonathan M Davis via Digitalmars-d
On Thursday, 18 February 2016 at 21:39:45 UTC, Ola Fosheim 
Grøstad wrote:
On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis 
wrote:
It's not a strawman. Walter has stated previously that he's 
explicitly avoided looking at the source code for other 
compilers like gcc, because he doesn't want anyone to be able 
to accuse him of stealing code, copyright infringement, etc.


Isn't this much more likely to happen if you don't look at the 
codebase for other compilers? How do you know if someone 
submitting code isn't just translating from GCC if you haven't 
looked at GCC?  If you have looked at GCC, then you can just 
choose a different implementation. :-)


Anyway, the clean-virgin thing in programming is related to 
reverse engineering very small codebases where the 
implementation most likely is going to be very similar (like 
BIOS). So you have one team writing the spec and another team 
implementing the spec (with no communication between them).


Walter has stated previously that there have been cases of 
lawyers coming to him about him possibly violating someone else's 
copyright, and when he tells them that he's never even looked at 
the source code, that satisfies them. And when the GPL is 
involved, that paranoia is probably a very good idea. With the 
BSD license, and the license that LLVM uses (which is very close 
to the BSD license), it's nowhere near the same level of issue, 
since it really only comes down to giving attribution. But we had 
problems with that at one point with Tango code (which is 
BSD-licensed), so the dmd and Phobos devs as a whole have avoided 
even looking at Tango so that we could always, legitimately say 
that we hadn't looked at it and therefore could not possibly have 
copied from it.


So, this can be a real problem, even if it's just an issue with 
someone thinking that you should be giving attribution when 
you're not. And while the LLVM license would definitely allow 
LLVM code to be mixed into dmd's backend as long as the 
appropriate attribution was given, I don't know if Symantec would 
be okay with that or not. The fact that Symantec owns the dmd 
backend just makes things weird all around.


Regardless, whether Walter is willing to look at LLVM/LDC or work 
on it at all is up to him. I doubt that he'll choose to based on 
what he's said previously, but he might. However, I think that 
it's quite safe to say that GCC/GDC are completely off the table 
for him, because that's GPL-licensed, and folks definitely get 
unhappy when they think that you might have copied GPL code, and 
it's _not_ as simple as giving attribution to be able to take 
GPL-licensed code and mix it into your own, since it's a copyleft 
license (and one of the most extreme of them at that).


- Jonathan M Davis


Re: Official compiler

2016-02-18 Thread Chris Wright via Digitalmars-d
On Fri, 19 Feb 2016 05:29:20 +, Ola Fosheim Grøstad wrote:

> On Thursday, 18 February 2016 at 23:42:11 UTC, Chris Wright wrote:
>> There are damages for patent infringement. There are higher damages for
>> willful infringement.
> 
> Iff you use it as a means for production. There is nothing illegal about
> implementing patented techniques in source code (i.e. describing them)
> and distributing it.

That depends on where the patent was filed and where the lawsuit is being 
executed.

>> regarding GCC. And thanks to how software patents generally are, it'd
>> probably be regarding something that most C/C++ compilers need to
>> implement and the most obvious implementation for that feature.
> 
> If that is the case then there will be prior art that predates the
> patent.

Not if it's for a feature added to a C++ standard after the patent was 
filed. Not if it's for a feature that modern compilers consider standard 
but wasn't standard before the patent was created. Not if it's in a 
jurisdiction that uses first-to-file rather than first-to-invent.


Re: Official compiler

2016-02-18 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 18 February 2016 at 23:42:11 UTC, Chris Wright wrote:
You testify it under oath, and you hope you look honest. You 
can show a lack of GCC source code on your home computer, 
possibly.


If they actually have a strong case it will be highly unlikely 
that you have arrived at it independently. Of course, all you 
have to do is to remove the code and FSF will be happy. So if you 
let it go all the way to the court you can only blame yourself 
for being pedantic.


FSF will only sue over a strong case that carries political 
weight. A loss in court is a PR disaster for FSF.


There are damages for patent infringement. There are higher 
damages for willful infringement.


Iff you use it as a means for production. There is nothing 
illegal about implementing patented techniques in source code 
(i.e. describing them) and distributing it.


regarding GCC. And thanks to how software patents generally 
are, it'd probably be regarding something that most C/C++ 
compilers need to implement and the most obvious implementation 
for that feature.


If that is the case then there will be prior art that predates 
the patent.


If Walter had read the GCC source code from an infringing 
version after that case came to light, that's the sort of thing 
that can bring on triple damages. It depends on relative lawyer 
quality, of course, but it's much harder for the plaintiffs if 
there's no indication that you've accessed the GCC source code.


It should help you, not hurt you, if you learnt about a technique 
from a widespread codebase from an organization that is known for 
avoiding patents. If anything, that proves that you didn't pick it 
up from the filed patent and were acting in good faith.


If the case came to light (e.g. you knew about it) and you didn't 
vet your own codebase, then you would be to blame no matter where 
you got it from. But the FSF would make sure to remove patented 
techniques from GCC, so that scenario would be very unlikely.


In other words, you are more likely to be hit by a bus when 
crossing the street. I find this kind of anxiety hysterical to be 
honest. The only thing I get out of this is that companies 
shouldn't admit to using open source codebases.


Of course, one reason for avoiding reading other people's source 
code is that you have a client that makes it a requirement.




Re: Official compiler

2016-02-18 Thread jmh530 via Digitalmars-d
On Thursday, 18 February 2016 at 20:28:41 UTC, David Nadlinger 
wrote:


You can use rdmd with ldmd2 just as well (and presumably gdmd 
too).




First I'm hearing of it.



Re: Official compiler

2016-02-18 Thread Chris Wright via Digitalmars-d
On Thu, 18 Feb 2016 22:41:46 +, Ola Fosheim Grøstad wrote:

> On Thursday, 18 February 2016 at 22:22:57 UTC, Chris Wright wrote:
>> With copyright, the fact that you created yours on your own is
>> sufficient defense, assuming the courts agree. If by sheer coincidence
>> you come up with code identical to what's in GCC, but you can show that
>> you didn't take the code from GCC, you're in the clear.
> 
> And how are you going to show that? You can't, because it is widespread.

You testify it under oath, and you hope you look honest. You can show a 
lack of GCC source code on your home computer, possibly.

>> Patents, well, you're infringing even if you didn't refer to any other
>> source. But if you did look at another source, especially if you looked
>> in the patent database, you open yourself up to increased damages.
> 
> There are no damages for GCC.

There are damages for patent infringement. There are higher damages for 
willful infringement.

The patent doesn't have to be held by the FSF or a contributor to GCC. 
There might be a patent troll that sued a third party regarding GCC. And 
thanks to how software patents generally are, it'd probably be regarding 
something that most C/C++ compilers need to implement and the most 
obvious implementation for that feature.

If Walter had read the GCC source code from an infringing version after 
that case came to light, that's the sort of thing that can bring on 
triple damages. It depends on relative lawyer quality, of course, but 
it's much harder for the plaintiffs if there's no indication that you've 
accessed the GCC source code.


Re: [OT] Re: Official compiler

2016-02-18 Thread Márcio Martins via Digitalmars-d
On Thursday, 18 February 2016 at 20:18:14 UTC, David Nadlinger 
wrote:
On Wednesday, 17 February 2016 at 22:57:20 UTC, Márcio Martins 
wrote:

[…]


On a completely unrelated note, you aren't by any chance the 
Márcio Martins who is giving a talk at ETH in a couple of days, 
are you?


 — David


No, I'm not.


Re: Official compiler

2016-02-18 Thread Márcio Martins via Digitalmars-d

On Thursday, 18 February 2016 at 22:33:15 UTC, Iain Buclaw wrote:
On 18 February 2016 at 22:23, Jonathan M Davis via 
Digitalmars-d < digitalmars-d@puremagic.com> wrote:



[...]


Actually, I'm sure this is a great way to let bugs in.  There's 
no saying what could happen if you switch compilers and turn the 
optimisation throttle to full.  In 99% of cases, one would 
hope all is good.  But the bigger the codebase you're dealing 
with, the more you should really use both side by side when 
testing to ensure that no heisenbugs creep in.


Yep, that issue I reported a while ago with floating-point casts 
comes to mind.


Re: Official compiler

2016-02-18 Thread Ola Fosheim Grøstad via Digitalmars-d

On Thursday, 18 February 2016 at 22:22:57 UTC, Chris Wright wrote:
With copyright, the fact that you created yours on your own is 
sufficient defense, assuming the courts agree. If by sheer 
coincidence you come up with code identical to what's in GCC, 
but you can show that you didn't take the code from GCC, you're 
in the clear.


And how are you going to show that? You can't, because it is 
widespread.


Patents, well, you're infringing even if you didn't refer to 
any other source. But if you did look at another source, 
especially if you looked in the patent database, you open 
yourself up to increased damages.


There are no damages for GCC.



Re: Official compiler

2016-02-18 Thread Iain Buclaw via Digitalmars-d
On 18 February 2016 at 22:23, Jonathan M Davis via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Thursday, 18 February 2016 at 20:28:41 UTC, David Nadlinger wrote:
>
>> On Thursday, 18 February 2016 at 17:56:32 UTC, Jonathan M Davis wrote:
>>
>>> […] if you want to be writing scripts in D (which is really useful), you
>>> need rdmd, which means using dmd
>>>
>>
>> You can use rdmd with ldmd2 just as well (and presumably gdmd too).
>>
>
> Good to know.
>
> Clear only to somebody with x86-centric vision. I'm not claiming that the
>> somewhat lower compile times aren't good for productivity. But being able
>> to easily tap into the rich LLVM ecosystem or even just targeting the most
>> widely used CPU architecture (in terms of units) is also something not to
>> be forgotten when considering the development process.
>>
>
> Having ldc is huge, but as long as you're targeting x86(_64) as one of
> your platforms, developing with dmd is going to be faster thanks to the
> fast compilation times. And if we can get dmd and ldc to be fully
> compatible like they should be, then as long as your code is
> cross-platform, it should be possible to develop it with dmd and then
> target whatever you want with ldc - though obviously some stuff will have
> to be done with ldc when it's something that dmd can't do (like a version
> block targeting ARM), and anything that's going to ultimately be released
> using ldc should be tested on it. But that fast compilation time is so
> tremendous in the edit-test-edit cycle, that I just can't see using ldc as
> the main compiler for development unless what you're doing isn't targeting
> x86(_64) at all, or ldc isn't compatible enough with dmd to do most of the
> development with dmd.
>
> But assuming that dmd and gdc/ldc are compatible, I would definitely argue
> that the best way to do D development is to do most of the development with
> dmd and then switch to gdc or ldc for production. That way, you get the
> fast compilation times when you need it, and your final binary is better
> optimized.
>
> - Jonathan M Davis
>

Actually, I'm sure this is a great way to let bugs in.  There's no saying
what could happen if you switch compilers and turn the optimisation
throttle to full.  In 99% of cases, one would hope all is good.  But the
bigger the codebase you're dealing with, the more you should really use
both side by side when testing to ensure that no heisenbugs creep in.


Re: Official compiler

2016-02-18 Thread Chris Wright via Digitalmars-d
On Thu, 18 Feb 2016 21:39:45 +, Ola Fosheim Grøstad wrote:

> On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis wrote:
>> It's not a strawman. Walter has stated previously that he's explicitly
>> avoided looking at the source code for other compilers like gcc,
>> because he doesn't want anyone to be able to accuse him of stealing
>> code, copyright infringement, etc.
> 
> Isn't this much more likely to happen if you don't look at the codebase
> for other compilers? How do you know if someone submitting code isn't
> just translating from GCC if you haven't looked at GCC?

That's the exact opposite of true.

With copyright, the fact that you created yours on your own is sufficient 
defense, assuming the courts agree. If by sheer coincidence you come up 
with code identical to what's in GCC, but you can show that you didn't 
take the code from GCC, you're in the clear.

Patents, well, you're infringing even if you didn't refer to any other 
source. But if you did look at another source, especially if you looked 
in the patent database, you open yourself up to increased damages.


Re: Official compiler

2016-02-18 Thread Ola Fosheim Grøstad via Digitalmars-d
On Thursday, 18 February 2016 at 21:30:29 UTC, Jonathan M Davis 
wrote:
It's not a strawman. Walter has stated previously that he's 
explicitly avoided looking at the source code for other 
compilers like gcc, because he doesn't want anyone to be able 
to accuse him of stealing code, copyright infringement, etc.


Isn't this much more likely to happen if you don't look at the 
codebase for other compilers? How do you know if someone 
submitting code isn't just translating from GCC if you haven't 
looked at GCC?  If you have looked at GCC, then you can just 
choose a different implementation. :-)


Anyway, the clean-virgin thing in programming is related to 
reverse engineering very small codebases where the implementation 
most likely is going to be very similar (like BIOS). So you have 
one team writing the spec and another team implementing the spec 
(with no communication between them).




Re: Official compiler

2016-02-18 Thread Jonathan M Davis via Digitalmars-d
On Thursday, 18 February 2016 at 20:24:31 UTC, David Nadlinger 
wrote:
On Thursday, 18 February 2016 at 11:12:57 UTC, Jonathan M Davis 
wrote:
And actually, he'd risk legal problems if he did, because he 
doesn't want anyone to be able to accuse him of taking code 
from gcc or llvm.


That's a silly strawman, and you should know better than 
putting that forward as an argument by now.


Walter is of course free to do whatever he pleases, and I would 
totally understand if his reason was just that it's hard to 
give something up you've worked on for a long time.


But please don't make up arguments trying to rationalize 
whatever personal decision somebody else made. You could 
literally copy LLVM source code into your application and sell 
it as a closed-source product without risking any copyright 
problems (if you comply with the very modest attribution clause 
of the license).


It's not a strawman. Walter has stated previously that he's 
explicitly avoided looking at the source code for other compilers 
like gcc, because he doesn't want anyone to be able to accuse him 
of stealing code, copyright infringement, etc. Now, that's 
obviously much more of a risk with gcc than llvm given their 
respective licenses, but it is a position that Walter has taken 
when the issue has come up, and it's not something that I'm 
making up.


Now, if Walter were willing to give up on the dmd backend 
entirely, then presumably, that wouldn't be a problem anymore 
regardless of license issues, but he still has dmc, which uses 
the same backend, so I very much doubt that that's going to 
happen.


- Jonathan M Davis


Re: Official compiler

2016-02-18 Thread Jonathan M Davis via Digitalmars-d
On Thursday, 18 February 2016 at 20:28:41 UTC, David Nadlinger 
wrote:
On Thursday, 18 February 2016 at 17:56:32 UTC, Jonathan M Davis 
wrote:
[…] if you want to be writing scripts in D (which is really 
useful), you need rdmd, which means using dmd


You can use rdmd with ldmd2 just as well (and presumably gdmd 
too).


Good to know.

Clear only to somebody with x86-centric vision. I'm not 
claiming that the somewhat lower compile times aren't good for 
productivity. But being able to easily tap into the rich LLVM 
ecosystem or even just targeting the most widely used CPU 
architecture (in terms of units) is also something not to be 
forgotten when considering the development process.


Having ldc is huge, but as long as you're targeting x86(_64) as 
one of your platforms, developing with dmd is going to be faster 
thanks to the fast compilation times. And if we can get dmd and 
ldc to be fully compatible like they should be, then as long as 
your code is cross-platform, it should be possible to develop it 
with dmd and then target whatever you want with ldc - though 
obviously some stuff will have to be done with ldc when it's 
something that dmd can't do (like a version block targeting ARM), 
and anything that's going to ultimately be released using ldc 
should be tested on it. But that fast compilation time is so 
tremendous in the edit-test-edit cycle, that I just can't see 
using ldc as the main compiler for development unless what you're 
doing isn't targeting x86(_64) at all, or ldc isn't compatible 
enough with dmd to do most of the development with dmd.


But assuming that dmd and gdc/ldc are compatible, I would 
definitely argue that the best way to do D development is to do 
most of the development with dmd and then switch to gdc or ldc 
for production. That way, you get the fast compilation times when 
you need it, and your final binary is better optimized.
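
In dub terms (assuming ldc2 is on your path), that workflow is just 
two invocations:

  dub build                               # day-to-day: fast dmd compiles
  dub build -b release --compiler=ldc2    # release: better-optimized binary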


- Jonathan M Davis

