Re: Announcing bottom-up-build - a build system for C/C++/D

2013-06-26 Thread Rob T
This build system seems very well suited to building large, 
complex projects in a sensible way.


I successfully tested the example build on Debian linux. I will 
definitely explore this further using one of my own projects.


One issue I immediately ran into: when I run bub incorrectly, it 
hangs after writing the "bail" message to the console. Ctrl-C does 
not kill it, and I have to run a process kill command to terminate it.


It seems to get stuck in the doBailer() while(true) loop, but I only 
glanced at the source quickly before posting back here.
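For illustration, here is a hypothetical reconstruction of that kind of 
symptom (this is not bub's actual code; the flag and counter names are 
made up): a bail-out loop that waits for workers to wind down, but never 
exits if it bails before any worker was launched.

```d
import core.atomic : atomicLoad;
import core.thread : Thread;
import core.time : msecs;

// Hypothetical sketch only - not bub's actual doBailer().
shared bool bailRequested = true; // set when the build decides to bail
shared int  liveWorkers   = 0;    // bailed before launching any workers

void doBailer()
{
    while (true) // without an exit condition, this spins forever
    {
        if (atomicLoad(bailRequested) && atomicLoad(liveWorkers) == 0)
        {
            // A fix along these lines would leave the loop once the
            // bail is complete, letting the process terminate and
            // respond to Ctrl-C normally:
            break;
        }
        Thread.sleep(100.msecs);
    }
}

void main()
{
    doBailer();
}
```

With the break in place the program returns immediately; without it, the 
loop would match the reported behaviour of hanging after the "bail" message.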


--rt


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Mathias Lang
I've read (almost) everything, so I hope I won't miss a point here:
a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
understanding, none of the projects mentioned have gone from free (as in
free beer) to hybrid/closed. And I'm not currently able to think of one
successful, widespread project that did.
b) Thinking that being free (as in beer and/or as in freedom), hybrid, or
closed source is a single criterion of success seems foolish. I'm not
asking for a complete comparison (I think my mailbox won't stand it ;-) ),
but please stop comparing a free operating system with a paid compiler,
and assuming the former has more users than the latter because it's free (and
vice-versa). In addition, I don't see the logic behind comparing something
born in the 90s with something from the 2000s. Remember the Dot-com bubble?
c) There are other ways to get more people involved; for example, if
dlang.org becomes a foundation (see related thread), we would be able
to apply for GSoC.
d) People pay for something they need. They don't adopt something because
they can pay for it. That's why a paid compiler must follow language
promotion, not the other way around.


2013/6/27 Joseph Rushton Wakeling 

> On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:
>
>> Don't call be Shirley...
>>
>
> Serious? :-)
>
>  By the way, I hope you didn't feel I was trying to speak on behalf of GDC
>>> -- wasn't my intention. :-)
>>>
>>
>> I did, and it hurt.  :o)
>>
>
> Oh no.  50 shades of #DD ? :-)
>


Announcing bottom-up-build - a build system for C/C++/D

2013-06-26 Thread Graham St Jack
Bottom-up-build (bub) is a build system written in D which supports 
building large C/C++/D projects. It works fine on Linux, with a 
Windows port nearly complete. It should work on OS X, but I haven't 
tested it there. 

Bub is hosted on https://github.com/GrahamStJack/bottom-up-build.


Some of bub's features that are useful on large projects are:

Built files are located outside the source directory, using a different 
build directory for (say) debug, release, profile, etc.

Very simple configuration files, making the build infrastructure easy to 
maintain.

Automatic deduction of which libraries to link with.

Automatic execution and evaluation of tests.

Enforcement of dependency control, with prevention of circularities 
between modules and directories.

Generated files are not scanned for imports/includes until after they are 
up to date. This is a real enabler for code generation.

Files in the build directory that should not be there are automatically 
deleted. It is surprising how often a left-over build artifact can make 
you think that something works, only to discover your mistake after a 
commit. This feature eliminates that problem.

The dependency graph is accurate, maximising opportunity for multiple 
build jobs and thus speeding up builds significantly.


An early precursor to bub was developed to use on a large C++ project 
that had complex dependencies and used a lot of code generation. Bub is a 
major rewrite designed to be more general-purpose.

The positive effect of the bub precursor on the project was very 
significant. Examples of positive consequences are:

Well-defined dependencies and elimination of circularities changed the 
design so that implementation and testing proceeded from the bottom up.

Paying attention to dependencies eliminated many unnecessary ones, 
resulting in a substantial increase in the reusability of code. This was 
instrumental in changing the way subsequent projects were designed, so 
that they took advantage of the large (and growing) body of reusable code.
The reusable code improved in design and quality with each project that 
used it.

Tests were compiled, linked and executed very early in the build - 
typically immediately after the code under test. This meant that 
regressions were usually detected within a few seconds of initiating a 
build. This was transformative to work rate, and willingness to make 
sweeping changes.

Doing a clean is hardly ever necessary. This is important because it 
dramatically reduces the total amount of time that builds take, which 
matters on a large project (especially C++).

Having a build system that works with both C++ and D meant that it was 
easy to "slip" some D code into the project. Initially as scripts, then 
as utilities, and so on. Having side-by-side comparisons of D against 
bash scripts and C++ modules had the effect of turning almost all the 
other team members into D advocates.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:

Don't call be Shirley...


Serious? :-)

By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


I did, and it hurt.  :o)


Oh no.  50 shades of #DD ? :-)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:01:42 UTC, Joakim wrote:
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


I could respond at greater length, but I think the substantial 
flaws of your point of view are exposed in this single paragraph. 
GDC and LDC aren't competitors; they are valuable collaborators.


Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
On 6/26/13, Sönke Ludwig  wrote:
> I've been using dlibgit for some time

Btw, I'm curious what kind of work you've done using dlibgit (if it's
ok to ask)?

> I've already registered a fork with (partially) updated bindings for the
> master version of libgit2: http://registry.vibed.org/packages/dlibgit

I saw some of your commits now. I'm happy to see that we no longer
need bitfields in v0.19.0, and it seems most of the inline functions
in libgit2 are gone, making porting easier. Those libgit devs are
doing a great job.


Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
On 6/26/13, Sönke Ludwig  wrote:
> Great to hear. I've been using dlibgit for some time and actually I've
> already registered a fork with (partially) updated bindings for the
> master version of libgit2: http://registry.vibed.org/packages/dlibgit

Ah, didn't know that. For now you may want to hold on to that package
until I port the v0.17 samples to v0.19, to verify the new bindings
work properly.

Btw, the reason I've moved everything under the "git.c" package is
that at some point I want to implement either a class- or
struct-based D API around the C API, so it's easier to use from client
code.

The new D API will use modules such as "git.branch", while the C-based
API uses "git.c.branch".
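As a sketch of what that layering might look like (module layout, wrapper 
name, and error handling are illustrative assumptions, not dlibgit's actual 
design), a struct-based wrapper over the C bindings could read:

```d
// Hypothetical sketch of a struct-based D API over the C bindings.
// Assumes the libgit2 declarations (git_repository_open,
// git_repository_free) live under the git.c package as described.
module git.repository;

import git.c.repository;          // assumed location of the C declarations
import std.exception : enforce;
import std.string : toStringz;

struct GitRepo
{
    private git_repository* handle;

    // Open a repository, converting the C error code into an exception.
    static GitRepo open(string path)
    {
        GitRepo r;
        enforce(git_repository_open(&r.handle, path.toStringz) == 0,
                "failed to open repository: " ~ path);
        return r;
    }

    // Free the underlying C handle deterministically.
    ~this()
    {
        if (handle !is null)
            git_repository_free(handle);
    }
}
```

Client code would then import git.repository and work with GitRepo, while 
the raw C API stays reachable through git.c.branch and friends.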


Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Sönke Ludwig

Am 26.06.2013 21:36, schrieb Andrej Mitrovic:

https://github.com/AndrejMitrovic/dlibgit

These are the D bindings to the libgit2 library. libgit2 is a
versatile git library which can read/write loose git object files,
parse commits, tags, and blobs, do tree traversals, and much more.

The dlibgit master branch is now based on the recent libgit2 v0.19.0
release. The previous bindings were based on 0.17.0, and there have
been many new features introduced since then.

Note: The D-based samples have not yet been updated to v0.19.0, but
I'll work on this in the coming days.

Note: I might also look into making this a dub-aware package, if
that's something people want.



Great to hear. I've been using dlibgit for some time and actually I've 
already registered a fork with (partially) updated bindings for the 
master version of libgit2: http://registry.vibed.org/packages/dlibgit


Unfortunately I never got around to finishing it completely, which is why 
I haven't made a pull request yet. But anyway, since 0.19.0 now contains 
the latest features, I might as well drop my fork and point the registry 
to your repository.


You can take my package.json as a template:
https://github.com/s-ludwig/dlibgit/blob/master/package.json

It should probably get a "targetType": "none" field, since it's 
header-only, and the "authors"/"copyright" fields are missing.
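Put together, a package.json along those lines might read (the field names 
are standard dub ones; the license and copyright values here are 
illustrative guesses, not taken from the actual file):

```json
{
    "name": "dlibgit",
    "description": "D bindings to the libgit2 library",
    "license": "GPL-2.0 with linking exception",
    "authors": ["Andrej Mitrovic"],
    "copyright": "Copyright (c) 2013, Andrej Mitrovic",
    "targetType": "none"
}
```

"targetType": "none" tells dub not to build anything for the package 
itself, which fits a bindings-only project.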




Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:50 PM, "Joseph Rushton Wakeling" <
joseph.wakel...@webdrake.net> wrote:
>
> On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
>>
>> I can't be bothered to read all points the both of you have mentioned
thus far, but I do hope to add a voice of reason to calm you down. ;)
>
>
> Quick, nurse, the screens!
>
> ... or perhaps, "Someone throw a bucket of water over them"? :-P
>
>

Don't call be Shirley...

>> From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend.  Any optimisation fixes in the DMD backend
does not affect GDC/LDC.
>
>
> To be honest, I can't see the "sales value" of optimization fixes in the
DMD backend given that GDC and LDC already have such strong performance.
 The one strong motivation to use DMD over the other two compilers is (as
you describe) access to the bleeding edge of features, but I'd have thought
this will stop being an advantage in time as/when the frontend becomes a
genuinely "plug-and-play" component.
>

Sometimes it feels like achieving this is like trying to break down a brick
barrier with a shoelace.

> By the way, I hope you didn't feel I was trying to speak on behalf of GDC
-- wasn't my intention. :-)
>

I did, and it hurt.  :o)

>> Having used closed source languages in the past, I strongly believe that
closed languages do not stimulate growth or adoption at all.  And where
adoption does occur, knowledge is kept within specialised groups.
>
>
> Last year I had the dubious privilege of having to work with MS Visual
Basic for a temporary job.  What was strikingly different from the various
open source languages was that although there was an extensive quantity of
documentation available from Microsoft, it was incredibly badly organized,
much of it was out of date, and there was no meaningful community support
that I could find.
>
> I got the job done, but I would surely have had a much easier experience
with any of the open source languages out there.  Suffice to say that the
only reason I used VB in this case was because it was an obligatory part of
the work -- I'd never use it by choice.
>

Yes, it's like trying to learn D, but the only reference you have of the
language is the grammar page, and an IDE which offers thousands of
auto-complete options for things that *sound* like what you want, but don't
compile when it comes to testing.  :o)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:00 PM, "Joakim"  wrote:
>
> On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
>>
>> From a licensing perspective, the only part of the source that can be
"closed off" is the DMD backend.  Any optimisation fixes in the DMD backend
does not affect GDC/LDC.
>
> This is flat wrong. I suggest you read the Artistic license; it was
chosen for a reason, i.e. it allows closing of source as long as you provide
the original, unmodified binaries with any modified binaries.  I suspect
optimization fixes will be in both the frontend and backend.
>

Code generation is in the back end, so the answer to that is simply 'no'.

>> You should try reading The Cathedral and the Bazaar if you don't
understand why an open approach to development has caused the D programming
language to grow by ten fold over the last year or so.
>>
>> If you still don't understand, read it again ad infinitum.
>
> Never read it but I have corresponded with the author, and I found him to
be as religious about pure open source as Stallman is about the GPL.  I
suggest you try examining why D is still such a niche language even with
"ten fold" growth.  If you're not sure why, I suggest you look at the
examples and reasons I've given, as to why closed source and hybrid models
do much better.
>

Then you should read it, as the 'cathedral' in question was GCC - a project
started by Stallman. :)

>> Think I might just point out that GDC had SIMD support before DMD. And
that Remedy used GDC to get their D development off the ground.  It was
features such as UDAs, along with many language bug fixes that were only
available in DMD development that caused them to switch over.
>>
>> In other words, they needed a faster turnaround for bugs at the time
they were adopting D, however the D front-end in GDC stays pretty much
stable on the current release.
>
> Not sure what point you are trying to make, as both gdc and dmd are open
source.  I'm suggesting closing such patches, for a limited time.
>

Closing patches benefits no one.  And more to the point, you can't say that
two compilers implement the same language if both have different language
features.

>>> I see no reason why another "upcoming" project like D couldn't do the
same. :)
>>
>>
>> You seem to be confusing D for an Operating System, Smartphone, or any
general consumer product.
>
> You seem to be confusing the dmd compiler to not be a piece of software,
just like the rest, or the many proprietary C++ compilers out there.
>

You seem to think when I say D I'm referring to dmd, or any other D
compiler out there.

>
>> - The language implementation is open source. This allows anyone to take
the current front-end code - or even write their own clean-room
implementation from ground-up - and integrate it to their own backend X.
>
> Sort of.  The dmd frontend is open source, but the backend is not under
an open source license.  Someone can swap out the backend and go completely
closed, for example, using ldc (ldc used to have one or two GPL files,
those would obviously have to be removed).
>

The backend is not part of the D language implementation / specification.
(for starters, it's not documented anywhere except as code).

>> - The compiler itself is not associated with the development of the
language, so those who are owners of the copyright are free to do what they
want with their binary releases.
>>
>> - The development model of D on github has adopted a "pull, review and
merge" system, where any changes to the language or compiler do not go in
unless it goes through proper coding review and testing (thanks to the
wonderful auto-tester).  So your suggestion of an "open core" model has a
slight fallacy here in that any changes to the closed off compiler would
have to go through the same process to be accepted into the open one - and
it might even be rejected.
>
> I'm not sure why you think "open core" patches that are opened after a
time limit would be any more likely to be rejected from that review
process.  The only fallacy I see here is yours.
>

Where did I say that? I only invited you to speculate on what would happen
if a 'closed patch' got rejected.  This leads back to the point that you
can't call it a compiler for the D programming language if it deviates from
the specification / implementation.

>
>> DMD - as in referring to the binary releases - can be closed / paid /
whatever it likes.
>>
>> The D Programming Language - as in the D front-end implementation - is
under a dual GPL/Artistic license and cannot be used by any closed source
product without said product releasing their copy of the front-end sources
also.  This means that your "hybrid" proposal only works for code that is
not under this license - eg: the DMD backend - which is not what the vast
majority of contributors actually submit patches for.
>
> Wrong, you have clearly not read the Artistic license.
>

I'll allow you to keep on thinking that for a while longer...

>> If you strongly believe that a pro

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
I can't be bothered to read all points the both of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)


Quick, nurse, the screens!

... or perhaps, "Someone throw a bucket of water over them"? :-P

From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend does not affect GDC/LDC.


To be honest, I can't see the "sales value" of optimization fixes 
in the DMD backend given that GDC and LDC already have such 
strong performance.  The one strong motivation to use DMD over 
the other two compilers is (as you describe) access to the 
bleeding edge of features, but I'd have thought this will stop 
being an advantage in time as/when the frontend becomes a 
genuinely "plug-and-play" component.


By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.


Last year I had the dubious privilege of having to work with MS 
Visual Basic for a temporary job.  What was strikingly different 
from the various open source languages was that although there 
was an extensive quantity of documentation available from 
Microsoft, it was incredibly badly organized, much of it was out 
of date, and there was no meaningful community support that I 
could find.


I got the job done, but I would surely have had a much easier 
experience with any of the open source languages out there.  
Suffice to say that the only reason I used VB in this case was 
because it was an obligatory part of the work -- I'd never use it 
by choice.


- The development model of D on github has adopted a "pull, 
review and merge" system, where any changes to the language or 
compiler do not go in unless it goes through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an "open core" model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.


I had a similar thought but from a slightly different angle -- 
that allowing "open core" in the frontend would damage the 
effectiveness of the review process.  How can you restrict 
certain features to proprietary versions without also having a 
two-tier hierarchy of reviewers?  And would you be able to 
maintain the broader range of community review if some select, 
paid few had privileged review access?


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend does not affect GDC/LDC.
This is flat wrong. I suggest you read the Artistic license; it 
was chosen for a reason, i.e. it allows closing of source as long 
as you provide the original, unmodified binaries with any 
modified binaries.  I suspect optimization fixes will be in both 
the frontend and backend.


You should try reading The Cathedral and the Bazaar if you 
don't understand why an open approach to development has caused 
the D programming language to grow by ten fold over the last 
year or so.


If you still don't understand, read it again ad infinitum.
Never read it but I have corresponded with the author, and I 
found him to be as religious about pure open source as Stallman 
is about the GPL.  I suggest you try examining why D is still 
such a niche language even with "ten fold" growth.  If you're not 
sure why, I suggest you look at the examples and reasons I've 
given, as to why closed source and hybrid models do much better.


Think I might just point out that GDC had SIMD support before 
DMD. And that Remedy used GDC to get their D development off 
the ground.  It was features such as UDAs, along with many 
language bug fixes that were only available in DMD development 
that caused them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.
Not sure what point you are trying to make, as both gdc and dmd 
are open source.  I'm suggesting closing such patches, for a 
limited time.


I see no reason why another "upcoming" project like D couldn't 
do the same. :)


You seem to be confusing D for an Operating System, Smartphone, 
or any general consumer product.
You seem to be confusing the dmd compiler to not be a piece of 
software, just like the rest, or the many proprietary C++ 
compilers out there.


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.
Perhaps there is some truth to that.  But nobody is suggesting a 
purely closed-source language either.


I don't think a "purely community-run project" is a worthwhile 
goal, particularly if you are aiming for a million users and 
professionalism.  I think there is always opportunity for 
mixing of commercial implementations and community 
involvement, as very successful hybrid projects like Android 
or Chrome have shown.


Your argument seems lost on me as you seem to be taking a very 
strange angle of association with the D language and/or 
compiler, and you don't seem to understand how the development 
process of D works either.
I am associating D, an open source project, with Android and 
Chrome, two of the most successful open source projects at the 
moment, which both benefit from hybrid models.  I find it strange 
that you cannot follow.  If I don't understand how the 
development process of D works, you could point out an example, 
instead of making basic mistakes in not knowing what licenses it 
uses and what they allow. :)


- The language implementation is open source. This allows 
anyone to take the current front-end code - or even write their 
own clean-room implementation from ground-up - and integrate it 
to their own backend X.
Sort of.  The dmd frontend is open source, but the backend is not 
under an open source license.  Someone can swap out the backend 
and go completely closed, for example, using ldc (ldc used to 
have one or two GPL files, those would obviously have to be 
removed).


- The compiler itself is not associated with the development of 
the language, so those who are owners of the copyright are free 
to do what they want with their binary releases.


- The development model of D on github has adopted a "pull, 
review and merge" system, where any changes to the language or 
compiler do not go in unless it goes through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an "open core" model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.
I'm not sure why you think "open core" patches that are opened 
after a time limit would be any more likely to be rejected from 
that review process.  The only fallacy I see here is yours.


- Likewise, because of licensing and copyright assignments in 
place on the D front-end implementation.  Any closed D compiler 
using it would have to make its sources of the front-end, with 
local modifications, available upon request.  So it makes no 
sense whatsoever to make language features - such as SIMD - 

dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
https://github.com/AndrejMitrovic/dlibgit

These are the D bindings to the libgit2 library. libgit2 is a
versatile git library which can read/write loose git object files,
parse commits, tags, and blobs, do tree traversals, and much more.

The dlibgit master branch is now based on the recent libgit2 v0.19.0
release. The previous bindings were based on 0.17.0, and there have
been many new features introduced since then.

Note: The D-based samples have not yet been updated to v0.19.0, but
I'll work on this in the coming days.

Note: I might also look into making this a dub-aware package, if
that's something people want.

Licensing information:

libgit2 is licensed under a very permissive license (GPLv2 with a
special Linking Exception). This basically means that you can link it
(unmodified) with any kind of software without having to release its
source code.

dlibgit github page: https://github.com/AndrejMitrovic/dlibgit
libgit2 homepage: libgit2.github.com/
libgit2 repo: https://github.com/libgit2/libgit2/


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
I can't be bothered to read all points the both of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)




On Wednesday, 26 June 2013 at 17:42:23 UTC, Joakim wrote:
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: 
"Hey guys, after all the work you put in to make D so great, 
now we're going to build on that, but you'll have to wait 6 
months for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches, all bugfixes and 
feature patches would likely be applied to both the free and 
paid compilers, certainly bugfixes.  So not _all_ the "extra 
goodies" have to be paid for, and even the optimization patches 
are eventually open-sourced.




From a licensing perspective, the only part of the source that 
can be "closed off" is the DMD backend.  Any optimisation fixes 
in the DMD backend does not affect GDC/LDC.



How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers 
up-to-date access to the latest features?  How would it affect 
the atmosphere of discussion about language development -- 
compared to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) 
If they are religious zealots about having only a single, 
completely open-source implementation - damn the superior 
results from hybrid models - perhaps they will be unhappy.  I 
suspect the former far outnumber the latter, since D doesn't 
employ the purely-GPL approach the zealots usually insist on.




You should try reading The Cathedral and the Bazaar if you don't 
understand why an open approach to development has caused the D 
programming language to grow by ten fold over the last year or so.


If you still don't understand, read it again ad infinitum.



... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.




Think I might just point out that GDC had SIMD support before 
DMD. And that Remedy used GDC to get their D development off the 
ground.  It was features such as UDAs, along with many language 
bug fixes that were only available in DMD development that caused 
them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.



In another email you mentioned Microsoft's revenues from 
Visual Studio but -- leaving aside for a moment all the moral 
and strategic concerns of closing things up -- Visual Studio 
enjoys that success because it's a virtually essential tool 
for professional development on Microsoft Windows, which still 
has an effective monopoly on modern desktop computing.  
Microsoft has the market presence to be able to dictate terms 
like that -- no one else does.  Certainly no upcoming 
programming language could operate like that!
Yes, Microsoft has unusual leverage.  But Visual Studio's 
compiler is not the only paid C++ compiler in the market, hell, 
Walter still sells C and C++ compilers.


I'm not proposing D operate just like Microsoft.  I'm 
suggesting a subtle compromise, a mix of that familiar closed 
model and the open source model you prefer, a hybrid model that 
you are no doubt familiar with, since you correctly pegged the 
licensing lingo earlier, when you mentioned "open core."


These hybrid models are immensely popular these days: the two 
most popular software projects of the last decade, iOS and 
Android, a

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 17:28:22 UTC, Joseph Rushton 
Wakeling wrote:
Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money" ... ?
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?
As I've noted in an earlier response, LDC could also provide a 
closed version and license those patches.


Leaving aside the moral issues, you might consider that any 
work paid for by revenues would be offset by a drop in 
voluntary contributions, including corporate contributors.  And 
sensible companies will avoid "open core" solutions.
Or maybe the work paid by revenues would be far more and even 
more people would volunteer, when D becomes a more successful 
project through funding from the paid compiler.  Considering how 
dominant "open core" and other hybrid models are these days, it 
is laughable that you suggest that anyone is avoiding it. :)



A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/
I have corresponded with the author of that blog before.  I found 
him to be a religious zealot who recounted the four freedoms of 
GNU to me like a mantra.  Perhaps that's why Sun was run into the 
ground when they followed his ideas about open sourcing most 
everything.  I don't look to him for worthwhile reading on these 
issues.


I think this ignores the decades-long history we have with 
open source software by now.  It is not merely "wanting to 
make the jump," most volunteers simply do not want to do 
painful tasks like writing documentation or cannot put as much 
time into development when no money is coming in.  Simply 
saying "We have to try harder to be professional" seems naive 
to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software 
is that the software business seems to be getting more and more 
open with every year.  These days there's a strong expectation 
of free licensing.
Yes, it is getting "more and more open," because hybrid models 
are being used more. :) Pure open source software, with no binary 
blobs, has almost no adoption, so it isn't your preferred purist 
approach that is doing well.  And the reasons are the ones I 
gave, volunteers can do a lot of things, but there are a lot of 
things they won't do.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major 
"first mover", and until the advent of D, it was arguably the 
_only_ language that had that combination of power/flexibility 
and performance.

Yes, C++ has been greatly helped by its age.

So far as compiler implementations are concerned, I'd say that 
it was the fact that there were many different implementations 
that helped C++.  On the other hand, proprietary 
implementations may in some ways have damaged adoption, as 
before standardization you'd have competing, incompatible 
proprietary versions which limited the portability of code.
But you neglect to mention that most of those "many different 
implementations" were closed.  I agree that completely closed 
implementations can also cause incompatibilities, which is why I 
have suggested a hybrid model with limited closed-source patches.


The binary blobs are nevertheless part of the vanilla kernel, 
not something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are 
an irritation that's tolerated for practical reasons, rather 
than a design feature.
They are not always charged for, but they put the lie to the 
claims that linux uses a pure open source model.  Rather, it is 
usually a different kind of hybrid model.  If it were so pure, 
there would be no blobs at all.  The blobs are certainly not 
irrelevant, as linux wouldn't run on all the hardware that needs 
those binary blobs, if they weren't included.  Not sure what to 
make of your non sequitur of binary blobs not being a "design 
feature."


As for paying for blobs, I'll note that the vast majority of 
linux kernels installed are in Android devices, where one pays 
for the hardware _and_ the development effort to develop the 
blobs that run the hardware.  So paying for the "value added" 
from blobs seems to be a very successful model. :)


So if one looks at linux in any detail, hybrid models are more 
the norm than the exception, even with the GPL. :)


But no one is selling proprietary extensions to the

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: "Hey 
guys, after all the work you put in to make D so great, now 
we're going to build on that, but you'll have to wait 6 months 
for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches, all bugfixes and 
feature patches would likely be applied to both the free and paid 
compilers, certainly bugfixes.  So not _all_ the "extra goodies" 
have to be paid for, and even the optimization patches are 
eventually open-sourced.


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers up-to-date 
access to the latest features?  How would it affect the 
atmosphere of discussion about language development -- compared 
to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) If 
they are religious zealots about having only a single, completely 
open-source implementation (damn the superior results from hybrid 
models), perhaps they will be unhappy.  I suspect the former far 
outnumber the latter, since D doesn't employ the purely-GPL 
approach the zealots usually insist on.


We could poll them and find out.  You keep talking about closed 
patches as though they can only piss off the volunteers.  But if 
I'm right and a hybrid model would lead to a lot more funding and 
adoption of D, their volunteer work places them in an ideal 
position, where their D skills and contributions are much more 
valued and they can then probably do paid work in D.  I suspect 
most will end up happier.


I have not proposed denying GDC and LDC "access to the latest 
features," only optimization patches.  LDC could do the same as 
dmd and provide a closed, paid version with the optimization 
patches, which it could license from dmd.  GDC couldn't do this, 
of course, but that is the result of their purist GPL-only 
approach.


Why do you think a hybrid model would materially "affect the 
atmosphere of discussion about language development"?  Do you 
believe that the people who work on hybrid projects like Android, 
probably the most widely-used, majority-OSS project in the world, 
are not able to collaborate effectively?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.


As for how it would "affect uptake," I think most people know 
that free products are usually less capable than paid products.  
The people who don't need the capability use Visual Studio 
Express, those who need it pay for the full version of Visual 
Studio.  There's no reason D couldn't employ a similar segmented 
model.


 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much 
more strongly desired than in many other fields.
Yes, you're right, non-free services have orders of magnitude 
more uptake. :p


I think there are advantages to both closed and open source, 
which is why hybrid open/closed source models are currently very 
popular.  Open source allows more collaboration from outside, 
while closed source allows for _much_ more funding from paying 
customers.  I see no reason to dogmatically insist that these 
source models not be mixed.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and 
introducing closed components.  Red Hat is a good example of 
that -- I can get, legally 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 15:52:33 UTC, Joakim wrote:
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that 
users who value performance will actually pay for the closed 
compiler.  There can be a hard time limit, say nine months, so 
that you know any closed patches from nine months back will be 
opened and applied to the free compiler.  I suspect that the 
money will be good enough so that any bugfixes or features 
added by the closed developers will be added to the free 
compiler right away, with no delay.


Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money" ... ?


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?


Thanks for the work that you and Don have done with 
Sociomantic.  Why do you think more companies don't do this?  
My point is that if there were money coming in from a paid 
compiler, Walter could fund even more such work.


Leaving aside the moral issues, you might consider that any work 
paid for by revenues would be offset by a drop in voluntary 
contributions, including corporate contributors.  And sensible 
companies will avoid "open core" solutions.


A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/

I think this ignores the decades-long history we have with open 
source software by now.  It is not merely "wanting to make the 
jump," most volunteers simply do not want to do painful tasks 
like writing documentation or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software is 
that the software business seems to be getting more and more open 
with every year.  These days there's a strong expectation of free 
licensing.


If I understand your story right, the volunteers need to put a 
lot of effort into "bootstrapping" the project to be more 
professional, companies will see this and jump in, then they 
fund development from then on out?  It's possible, but is there 
any example you have in mind?  The languages that go this 
completely FOSS route tend not to have as much adoption as 
those with closed implementations, like C++.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major "first 
mover", and until the advent of D, it was arguably the _only_ 
language that had that combination of power/flexibility and 
performance.


So far as compiler implementations are concerned, I'd say that it 
was the fact that there were many different implementations that 
helped C++.  On the other hand, proprietary implementations may 
in some ways have damaged adoption, as before standardization 
you'd have competing, incompatible proprietary versions which 
limited the portability of code.


And yet the linux kernel ships with many binary blobs, almost 
all the time.  I don't know how they legally do it, considering 
the GPL, yet it is much more common to run a kernel with binary 
blobs than a purely FOSS version.  The vast majority of linux 
installs are due to Android and every single one has 
significant binary blobs and closed-source modifications to the 
Android source, which is allowed since most of Android is under 
the more liberal Apache license, with only the linux kernel 
under the GPL.


The binary blobs are nevertheless part of the vanilla kernel, not 
something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are an 
irritation that's tolerated for practical reasons, rather than a 
design feature.


Again, I don't know how they get away with all the binary 
drivers in the kernel, perhaps that is a grey area with the 
GPL.  For example, even the most open source Android devices, 
the Nexus devices sold directly by Google and running stock 
Android, have many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 15:18, Joseph Rushton Wakeling wrote:


They don't own them, though -- they commit resources to them because the
language's ongoing development serves their business needs.


Yes, exactly.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:

Joakim, on June 25 at 23:37 you wrote:
I don't know the views of the key contributors, but I wonder 
if they
would have such a knee-jerk reaction against any paid/closed 
work.


Against being paid, no; against being closed, YES. Please don't 
even think about it. It was a hell of a ride making D more open; 
we can't step back now.
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that users 
who value performance will actually pay for the closed compiler.  
There can be a hard time limit, say nine months, so that you know 
any closed patches from nine months back will be opened and 
applied to the free compiler.  I suspect that the money will be 
good enough so that any bugfixes or features added by the closed 
developers will be added to the free compiler right away, with no 
delay.



What we need is companies paying people to improve the compiler 
and toolchain. This is slowly starting to happen: at Sociomantic, 
two of us (Don and me) are already dedicating some of our work 
time to improving D.
Thanks for the work that you and Don have done with Sociomantic.  
Why do you think more companies don't do this?  My point is that 
if there were money coming in from a paid compiler, Walter could 
fund even more such work.


We need more of this, and to get this, we need companies to start 
using D, and to get that, we need professionalism (I agree 100% 
with Andrei on this one). It's a bootstrapping effort, and it's 
not that volunteers need more time to be professional; it's just 
that you have to want to make the jump.
I think this ignores the decades-long history we have with open 
source software by now.  It is not merely "wanting to make the 
jump," most volunteers simply do not want to do painful tasks 
like writing documentation or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


I think it's way better to do less but with higher quality. 
Nobody is asking people for more time; it's just a matter of 
changing the focus a bit, at least for some time. Again, this is 
only bootstrapping, and that is always hard and painful. We need 
to make the jump to make companies comfortable using D; then 
things will start rolling by themselves.
If I understand your story right, the volunteers need to put a 
lot of effort into "bootstrapping" the project to be more 
professional, companies will see this and jump in, then they fund 
development from then on out?  It's possible, but is there any 
example you have in mind?  The languages that go this completely 
FOSS route tend not to have as much adoption as those with closed 
implementations, like C++.


First of all, your examples are completely wrong. The projects 
you are

mentioning are 100% free, with no closed components (except for
components done by third parties).
You are misstating what I said: I said "commercial," not 
"closed," and gave different examples of commercial models.  But 
lets look at them.



Your examples are just reinforcing what I say above. Linux is 
completely GPL, so it's not even merely open source. It's Free 
Software, meaning the license is more restrictive than, for 
example, Phobos's. This means it's harder for companies to adopt, 
and you can't possibly change it in a closed way if you want to 
distribute a binary.
And yet the linux kernel ships with many binary blobs, almost all 
the time.  I don't know how they legally do it, considering the 
GPL, yet it is much more common to run a kernel with binary blobs 
than a purely FOSS version.  The vast majority of linux installs 
are due to Android and every single one has significant binary 
blobs and closed-source modifications to the Android source, 
which is allowed since most of Android is under the more liberal 
Apache license, with only the linux kernel under the GPL.


Again, I don't know how they get away with all the binary drivers 
in the kernel, perhaps that is a grey area with the GPL.  For 
example, even the most open source Android devices, the Nexus 
devices sold directly by Google and running stock Android, have 
many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where you can "change it in a closed way" because you are not 
"distributing a binary."  Google takes advantage of this to run 
linux on a millio

Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Michel Fortin

On 2013-06-26 11:07:45 +, Sönke Ludwig  said:


Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
  static @property ProtocolType!T protocolof(this T)() {
return ProtocolType!T.staticInstance;
  }
}

That's of course assuming that the static instance is somehow accessible
from normal D code.


I don't think you get what protocolof is, or if so I can't understand 
what you're trying to suggest with the code above. It's a way to obtain 
the pointer identifying a protocol. You don't "call" protocolof on a 
class, but on the interface. Like this:


extern (Objective-C) interface MyInterface {}

NSObject object;
if (object.conformsToProtocol(MyInterface.protocolof))
{ … }

protocolof is a pointer generated by the compiler that represents the 
Objective-C protocol for that interface. It's pretty much like other 
compiler-generated properties such as mangleof and stringof. There's 
nothing unusual about protocolof.


And that conformsToProtocol function above is a completely normal 
function by the way.


As for .class, it's pretty much like .classinfo for D objects. The 
difference is that it returns an instance of a different type depending 
on the class (Objective-C has a metaclass hierarchy), so it needs to be 
handled by the compiler. I used .class to mirror the name in 
Objective-C code. Since this has to be compiler generated and its type 
is magically typeof(this).Class, I see no harm in using a keyword for 
it. I could have called it .classinfo, but that'd be rather misleading 
if you ask me (it's not a ClassInfo object, nor does it behave like 
ClassInfo).



The __selector type class might be replaceable by a library type
Selector!(R, ARGS).


It could. But it needs compiler support if you want to extract them 
from functions in a type-safe manner. If the compiler has to understand 
the type, better make it a language extension.
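For concreteness, a purely library-side Selector could be sketched in plain D roughly as below. This is a hypothetical sketch, not anything from the actual D/Objective-C branch: the Selector struct, the selectorOf helper, and the Example class are all made up here, and the Objective-C runtime side (an opaque SEL handle) is stubbed out with a string.

```d
import std.traits : Parameters, ReturnType;

// Hypothetical library-side selector type: carries the selector's name
// plus the method's return and parameter types in the type system.
struct Selector(R, Args...)
{
    string name; // a real binding would hold an opaque Objective-C SEL
}

// Stand-in for the compiler trait discussed above: extracts a typed
// selector from a method alias.
template selectorOf(alias fn)
{
    enum selectorOf =
        Selector!(ReturnType!fn, Parameters!fn)(__traits(identifier, fn));
}

// Plain D stand-in for an Objective-C class, just to exercise the template.
class Example
{
    size_t length() { return 0; }
}

static assert(selectorOf!(Example.length).name == "length");
static assert(is(typeof(selectorOf!(Example.length)) == Selector!size_t));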



It would also be great to have general support for
implicit constructors and make string->NSString and delegate->ObjcBlock
available in the library instead of dedicated compiler special case.


String literals are implicitly convertible to NSString with absolutely 
no overhead.



Not sure about constructors in interfaces, they seem a bit odd, but
using "init" instead and letting "new" call that is also odd...


Well, they're supported in Objective-C (as init methods), so we have to 
support them.



You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.


Indeed.


Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.


Maybe. But not at the cost of memory safety.

The idea is that something written in @safe D should be memory-safe, it 
should be provable by the compiler. And this should apply to 
Objective-C code written in D too. Without this requirement we could 
make it less magic, and allow, for instance, NSObject.alloc().init(). 
But that's not @safe, which is why constructors were implemented.


But we can't do this at the cost of disallowing existing idioms used in 
Objective-C. For instance, I could get a pointer to a class object, and 
create a new object for it. If you define this:


extern (Objective-C):
interface MyProtocol {
this(string);
}
class MyObject : NSObject, MyProtocol {
this(string) {}
}

you can then write this:

MyProtocol.Class c = MyObject.class;
NSObject o = new c("baca");

And the compiler then knows that the class pointer can allocate objects 
that can be constructed with a string parameter. This is something that 
can and is done in Objective-C (hence why you'll find constructors on 
interfaces). The idea is to add provable memory safety on top of it. 
(Note that the above example is not implemented yet, nor documented.)


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca/



Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 13:07, Sönke Ludwig wrote:


I agree, it will only influence tools that include a parser. Few syntax
highlighters parse the code (although *some* do), so this was probably
not the best example.


Absolutely, some even do semantic analysis. For example, the syntax 
highlighter in Eclipse for Java highlights instance variables 
differently from other identifiers. I don't know of any syntax 
highlighters for D that do this.



Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
   static @property ProtocolType!T protocolof(this T)() {
 return ProtocolType!T.staticInstance;
   }
}


So what would ProtocolType do? I think I need to look at the 
implementation of .class and .protocolof. In Objective-C there are 
runtime functions to do the same; I don't know whether those would work for D 
as well.



That's of course assuming that the static instance is somehow accessible
from normal D code. Sorry if this doesn't really make sense, I don't
know anything of the implementation details.

The __selector type class might be replaceable by a library type
Selector!(R, ARGS).


Hmm, that might be possible. We would need a trait to get the selector 
for a method, which we should have anyway. But this uses templates 
again. We don't want to move everything to library code, or we would 
have the same problem as with the bridge.



It would also be great to have general support for
implicit constructors and make string->NSString and delegate->ObjcBlock
available in the library instead of dedicated compiler special case.


Since strings and delegates are already implemented in the language, 
would it be possible to add implicit conversions for these types in the 
library?
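As a point of comparison, D's alias this already gives one-way implicit conversion in pure library code, but only *from* the wrapper to the wrapped type; the direction the bridge needs (string literal -> NSString) has no library equivalent because D lacks implicit constructors. A sketch (the NSStringShim name is made up for illustration and is not part of any real binding):

```d
// Illustrative only: alias this converts implicitly *to* string.
struct NSStringShim // hypothetical wrapper, payload stands in for an
{                   // Objective-C object pointer
    string payload;
    alias payload this;
}

size_t dLength(string s) { return s.length; }

void demo()
{
    auto ns = NSStringShim("hello");
    auto n = dLength(ns);       // OK: implicit conversion via alias this
    // NSStringShim s = "hello"; // error: no implicit construction in D
}
```

This suggests the wrapper-to-string side could live in the library, while literal-to-NSString still needs compiler help.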



Not sure about constructors in interfaces, they seem a bit odd, but
using "init" instead and letting "new" call that is also odd...


Using "alloc.init" would be more Objective-C-like and using "new" would 
be more D-like.



You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.


The compiler will need to know about @optional. I don't think that the 
compiler will need to know about @IBAction and @IBOutlet, but if it 
does, there are a couple of advantages we could implement. @IBOutlet 
only makes sense on instance variables. @IBAction only makes sense on 
an instance method with the following signature:


void foo (id sender) { }

Possibly any Objective-C type could be used as the argument type.


Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.


I'm open to suggestions.


I don't mean the additions as a whole of course, but each single
language change vs. a library-based solution of the same feature ;) In
general this is a great addition from a functional view! I was very much
looking forward to it coming back to life.


Great. It's just a question of what is possible to implement in library 
code.


--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On 26 June 2013 15:04, eles  wrote:
> On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:
>>
>> On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
>> D Season of Code! Then we don't have to restrict ourselves to one time of
>> the year.
>
>
> D Seasons of Code! Why restrict to a single season? Let's code all the
> year long! :)

Programmers need to hibernate too, you know. ;)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Jacob Carlborg, on June 26 at 14:39 you wrote:
> On 2013-06-26 12:16, Leandro Lucarella wrote:
> 
> >Yeah, right, probably Python and Ruby have only 5k users...
> 
> There are companies backing those languages, at least Ruby, to some
> extent.

Read my other post, I won't repeat myself :)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
THEY'RE COLLECTING SIGNATURES AND PAW PRINTS FOR THE PUPPY
SENTENCED TO DEATH... -- Crónica TV


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread eles

On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:

On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
D Season of Code! Then we don't have to restrict ourselves to 
one time of the year.


D Seasons of Code! Why restrict to a single season? Let's code 
all the year long! :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 12:39:05 UTC, Jacob Carlborg wrote:

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to 
some extent.


They don't own them, though -- they commit resources to them 
because the language's ongoing development serves their business 
needs.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to some extent.

--
/Jacob Carlborg


Re: An idea - make dlang.org a foundation

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 10:18:58 UTC, Jacob Carlborg wrote:
that you're talking about the graphical designer. I was talking 
about the one implementing the design, the web developer/frontend 
developer or whatever to call it.


Ah yes. Still though, I don't think ddoc is that big of a deal, 
especially since there are a few of us here who can do the 
translations if needed.


I wouldn't give the graphical designer access to the code 
either. It needs to be integrated with the backend code (which 
is Ruby or similar) anyway, to fetch the correct data and so on.


Right.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Tuesday, 25 June 2013 at 21:38:01 UTC, Joakim wrote:
I don't know the views of the key contributors, but I wonder if 
they would have such a knee-jerk reaction against any 
paid/closed work.  The current situation would seem much more 
of a kick in the teeth to me: spending time trying to be 
"professional," as Andrei asks, and producing a viable, stable 
product used by a million developers, corporate users included, 
but never receiving any compensation for this great tool you've 
poured effort into, that your users are presumably often making 
money with.


Obviously I can't speak for the core developers, or even for the 
community as a group.  But I can make the following observations.


D's success as a language is _entirely_ down to volunteer effort 
-- as Walter highlighted in his keynote.  Volunteer effort is 
responsible for the development of the compiler frontend, the 
runtime, and the standard library.  Volunteers have put in the 
hard work of porting these to other compiler backends.  
Volunteers have made and reviewed language improvement proposals, 
and have been vigilant in reporting and resolving bugs.  
Volunteers also contribute to vibrant discussions on these very 
forums, providing support and advice to those in need of help.  
And many of these volunteers have been doing so over the course 
of years.


Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right thing 
to do is to turn around to all those people and say: "Hey guys, 
after all the work you put in to make D so great, now we're going 
to build on that, but you'll have to wait 6 months for the extra 
goodies unless you pay"?


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the forum 
participants?  What could you say to the maintainers of GDC or 
LDC, after all they've done to enable people to use the language, 
that could justify denying their compilers up-to-date access to 
the latest features?  How would it affect the atmosphere of 
discussion about language development -- compared to the current 
friendly, collegial approach?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a price? 
 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much more 
strongly desired than in many other fields.


I understand that such a shift from being mostly OSS to having 
some closed components can be tricky, but that depends on the 
particular community.  I don't think any OSS project has ever 
become popular without having some sort of commercial model 
attached to it.  C++ would be nowhere without commercial 
compilers; linux would be unheard of without IBM and Red Hat 
figuring out a consulting/support model around it; and Android 
would not have put the linux kernel on hundreds of millions of 
computing devices without the hybrid model that Google 
employed, where they provide an open source core, paid for 
through increased ad revenue from Android devices, and the 
hardware vendors provide closed hardware drivers and UI skins 
on top of the OSS core.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and introducing 
closed components.  Red Hat is a good example of that -- I can 
get, legally and for free, a fully functional copy of Red Hat 
Enterprise Linux without paying a penny.  It's just missing the 
Red Hat name and logos and the support contract.


In another email you mentioned Microsoft's revenues from Visual 
Studio but -- leaving aside for a moment all the moral and 
strategic concerns of closing things up -- Visual Studio enjoys 
that success because it's a virtually essential tool for 
professional development on Microsoft Windows, which still has an 
effective monopoly on modern desktop computing.  Microsoft has 
the market presence to be able to dictate terms like that -- no 
one else does.  Certainly no upcoming programming language could 
operate like that!


This talk prominently mentioned scaling to a million users and 
being professional: going commercial is the only way to get 
there.


It's more likely that closing off parts of the offering would 
limit that uptake, for reasons already given.  On the other hand, 
with more and more organizations coming to use and rely on D, 
there are plenty of other ways professional development could be 
brought in.  Just to take one example: companies with a 
mission-critical interest in D have a corresponding interest in 
their developers giving time to the language itself.  How many 
such companies do you think there need to be before D has a 
stable of skilled professional developers being paid explicitly 
to maintain and develop the language?


Your citation of the Linux kerne

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Dicebot
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:
Android might be the only valid case (but I'm not really 
familiar with Android model), but the kernel, since is based on 
Linux, has to have the source code when

released. Maybe the drivers are closed source.


It is perfectly open 
http://source.android.com/source/licenses.html ;)
Drivers tend to be closed source, but drivers are not part of the 
Android project; they are private to the vendors.


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
On 26.06.2013 at 12:09, Jacob Carlborg wrote:
> On 2013-06-26 10:54, Sönke Ludwig wrote:
> 
>> I agree. Even though it may not be mentioned in books and many people
>> may never see the changes, it still *does* make the language more
>> complex. One consequence is that language processing tools (compilers,
>> syntax highlighters etc.) get updated/written with this in mind.
> 
> I don't think tools (non-compilers) will require much change.
> I see three "big" changes, none of them at the lexical level:

I agree, it will only influence tools that include a parser. Few syntax
highlighters parse the code (although *some* do), so this was probably
not the best example.

>> This is why I would also suggest to try and make another pass over the
>> changes, trying to move every bit from language to library that is
>> possible - without compromising the result too much, of course (e.g. due
>> to template bloat like in the older D->ObjC bridge). Maybe it's possible
>> to put some things into __traits or other more general facilities to
>> avoid changing the language grammar.
> 
> I don't see what could be put in __traits that could help. Do you have
> any suggestions?

Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
  static @property ProtocolType!T protocolof(this T)() {
    return ProtocolType!T.staticInstance;
  }
}

That's of course assuming that the static instance is somehow accessible
from normal D code. Sorry if this doesn't really make sense, I don't
know anything of the implementation details.

The __selector type class might be replaceable by a library type
Selector!(R, ARGS). It would also be great to have general support for
implicit constructors, making the string->NSString and delegate->ObjcBlock
conversions available in the library instead of as dedicated compiler
special cases.
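
To illustrate, here is a rough sketch of what such a library Selector type
might look like (purely hypothetical: only sel_registerName is a real
Objective-C runtime function; the struct layout and constructor are
assumptions, not part of any actual proposal):

```d
// Hypothetical sketch of a library Selector!(R, ARGS) type that could
// replace the proposed __selector built-in. sel_registerName is the
// standard Objective-C runtime function; everything else is assumed.
extern (C) void* sel_registerName(const(char)* name);

struct Selector(R, ARGS...)
{
    void* sel; // the underlying Objective-C SEL handle

    this(string name)
    {
        // null-terminate before handing the name to the C runtime
        sel = sel_registerName((name ~ '\0').ptr);
    }
}

// usage: a selector for a method taking an int and returning void
// auto s = Selector!(void, int)("setTag:");
```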

Not sure about constructors in interfaces, they seem a bit odd, but
using "init" instead and letting "new" call that is also odd...

You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.

Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.

> 
>> On the other hand I actually very much hate to suggest this, as it
>> probably causes a lot of additional work. But really, we shouldn't take
>> *any* language additions lightly, even relatively isolated ones. Like
>> always, new syntax must be able to "pull its own weight" (IMO, of
>> course).
> 
> I would say that for anyone remotely interested in Mac OS X or iOS
> development it pulls its own weight several times over. I think it's so
> obvious it pulls its own weight that I shouldn't need to justify the
> changes.
> 

I don't mean the additions as a whole of course, but each single
language change vs. a library-based solution of the same feature ;) In
general this is a great addition from a functional point of view! I was
very much looking forward to it coming back to life.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on June 26 at 08:33 you wrote:
> It is amazing how far D has gotten with no business model: money
> certainly isn't everything.  But it is probably impossible to get to
> a million users or offer professionalism without commercial
> implementations.

Yeah, right, probably Python and Ruby have only 5k users...

This argument is BS.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Are you such a dreamer?
To put the world to rights?
I'll stay home forever
Where two & two always
makes up five


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on June 25 at 23:37 you wrote:
> On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling
> wrote:
> >>I wonder what the response would be to injecting some money and
> >>commercialism into the D ecosystem.
> >
> >Given how D's whole success stems from its community, I think an
> >"open core" model (even with time-lapse) would be disastrous. It'd
> >be like kicking everyone in the teeth after all the work they put
> >in.
> I don't know the views of the key contributors, but I wonder if they
> would have such a knee-jerk reaction against any paid/closed work.

Against being paid, no; against being closed, YES. Please don't even think
about it. It was a hell of a ride making D more open; let's not step
back now. What we need is companies paying people to improve the
compiler and toolchain. This is slowly starting to happen: at
Sociomantic there are already two of us dedicating some time to improving
D as part of our job (Don and me).

We need more of this, and to get it, we need companies to start using
D; and to get that, we need professionalism (I agree 100% with Andrei on
this one). It's a bootstrapping effort, and it's not that volunteers need
more time to be professional; you just have to want to make the jump.
I think it's way better to do less stuff but with higher quality. Nobody
is asking people for more time, just to shift the focus a bit, at
least for some time. Again, this is only bootstrapping, and that is always
hard and painful. We need to make the jump to make companies comfortable
using D; then things will start rolling by themselves.

> The current situation would seem much more of a kick in the teeth to
> me: spending time trying to be "professional," as Andrei asks, and
> producing a viable, stable product used by a million developers,
> corporate users included, but never receiving any compensation for
> this great tool you've poured effort into, that your users are
> presumably often making money with.
> 
> I understand that such a shift from being mostly OSS to having some
> closed components can be tricky, but that depends on the particular
> community.  I don't think any OSS project has ever become popular
> without having some sort of commercial model attached to it.  C++
> would be nowhere without commercial compilers; linux would be
> unheard of without IBM and Red Hat figuring out a consulting/support
> model around it; and Android would not have put the linux kernel on
> hundreds of millions of computing devices without the hybrid model
> that Google employed, where they provide an open source core, paid
> for through increased ad revenue from Android devices, and the
> hardware vendors provide closed hardware drivers and UI skins on top
> of the OSS core.

First of all, your examples are completely wrong. The projects you
mention are 100% free, with no closed components (except for
components done by third parties). Your examples just reinforce what
I said above. Linux is completely GPL, so it's not even merely open source.
It's Free Software, meaning the license is more restrictive than, for
example, Phobos'. That makes it harder for companies to adopt, and you
can't change it in a closed way if you want to distribute
a binary. Same for C++, which is not a project but a standard; and its
most successful and widespread compiler, GCC, is not only free, it's the
battle horse of free software and of the GNU project, created by the
most extreme free software advocate ever. Android might be the only
valid case (though I'm not really familiar with the Android model), but the
kernel, since it is based on Linux, has to ship with source code when
released. Maybe the drivers are closed source.

You are missing more closely related projects, like Python, Haskell,
Ruby, and Perl, and probably 90% of the newish programming languages, which
are all 100% open source. And very successful, I might say. The key is
always breaking into corporate ground and making those corporations
contribute.

There are valid examples of projects using hybrid models, but they are
usually software-as-a-service models, not very applicable to
a compiler/language: WordPress, for example, or other web applications.
Other valid examples are MySQL, and I think Qt used a hybrid model at least
once. Lots of them died and were resurrected as 100% free projects, like
StarOffice -> OpenOffice -> LibreOffice.

And finally, making the *optimizer* (or some optimizations) closed would
hardly be a good business, given that there are two other backends out
there that already tend to beat DMD's backend, so people needing more
speed will probably just switch to GDC or LDC.

> This talk prominently mentioned scaling to a million users and being
> professional: going commercial is the only way to get there.

As in breaking into the commercial world? Then agreed. If you imply
commercial == closing some parts of the source, then I think you are WAY
OFF.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/

Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 00:55, Aleksandar Ruzicic wrote:


There is no need for the designer to know what DDOC is. For the past few
years I have worked with many designers who had only basic knowledge
of HTML and even less of CSS (most of them don't know anything
about JavaScript but they "know jQuery a bit"). They just give me a PSD
and I do the slicing and all the coding.


Again, "web designer" was not the correct term. Something more like 
web developer/frontend developer: whoever writes the final format.



So if any redesign of dlang.org is going to happen I volunteer to do all
coding, so there is no need to look for designer which is comfortable
writing DDOC.


Ok, good.

--
/Jacob Carlborg


Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-25 23:45, Adam D. Ruppe wrote:


For my work sites, I often don't give the designer access to the html at
all. They have one of two options: make it work with pure css, or send
me an image of what it is supposed to look like, and I'll take it from
there.


"Web designer" was probably not the best term. You're talking about the 
graphical designer; I was talking about the one implementing the design, 
the web developer/frontend developer or whatever to call it.


I wouldn't give the graphical designer access to the code either. It 
needs to be integrated with the backend code (which is Ruby or similar) 
anyway, to fetch the correct data and so on.


--
/Jacob Carlborg


Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-25 22:19, Andrei Alexandrescu wrote:


Truth be told the designer delivered HTML, which we converted to DDoc.


Ok, I see that "web designer" was probably not the correct term. "Web 
developer" is perhaps better: the one who builds the final format.


--
/Jacob Carlborg


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 10:54, Sönke Ludwig wrote:


I agree. Even though it may not be mentioned in books and many people
may never see the changes, it still *does* make the language more
complex. One consequence is that language processing tools (compilers,
syntax highlighters etc.) get updated/written with this in mind.


I don't think tools (non-compilers) will require much change. 
I see three "big" changes, none of them at the lexical level:


extern (Objective-C)
[foo:bar:]
foo.class

Any tool that just deals with syntax highlighting (on a lexical level) 
should be able to handle these changes. Sure, you might want to add a 
special case for "foo.class" to not highlight "class" in this case.
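

For illustration, the three changes might look roughly like this in code
(a hypothetical example based on the proposal; the exact selector syntax
and interface names are assumptions and could differ):

```d
// Hypothetical D/Objective-C code exercising the three proposed changes
extern (Objective-C)                  // 1. new linkage attribute
interface NSObject
{
    NSObject init() [init];           // 2. selector syntax, e.g. [foo:bar:]
}

void usage(NSObject o)
{
    auto cls = NSObject.class;        // 3. the new .class property
}
```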



This is why I would also suggest to try and make another pass over the
changes, trying to move every bit from language to library that is
possible - without compromising the result too much, of course (e.g. due
to template bloat like in the older D->ObjC bridge). Maybe it's possible
to put some things into __traits or other more general facilities to
avoid changing the language grammar.


I don't see what could be put in __traits that could help. Do you have 
any suggestions?



On the other hand I actually very much hate to suggest this, as it
probably causes a lot of additional work. But really, we shouldn't take
*any* language additions lightly, even relatively isolated ones. Like
always, new syntax must be able to "pull its own weight" (IMO, of course).


I would say that for anyone remotely interested in Mac OS X or iOS 
development it pulls its own weight several times over. I think it's so 
obvious it pulls its own weight that I shouldn't need to justify the 
changes.


--
/Jacob Carlborg


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
On 24.06.2013 at 20:10, Brian Schott wrote:
> On Monday, 24 June 2013 at 17:51:08 UTC, Walter Bright wrote:
>> On 6/24/2013 3:04 AM, Jacob Carlborg wrote:
>>> On 2013-06-23 23:02, bearophile wrote:
>>>
 Instead of:
 extern (Objective-C)

 Is it better to use a naming more D-idiomatic?

 extern (Objective_C)
>>>
>>> As Simen said, we already have extern (C++). But I can absolutely
>>> change this if
>>> people wants to.
>>
>> Objective-C is just perfect.
> 
> linkageAttribute:
>   'extern' '(' Identifier ')'
> | 'extern' '(' Identifier '++' ')'
> | 'extern' '(' Identifier '-' Identifier ')'
> ;

Maybe it makes sense to generalize it instead:

linkageAttribute: 'extern' '(' linkageAttributeIdentifier ')';

linkageAttributeIdentifier:
    linkageAttributeToken
  | linkageAttributeIdentifier linkageAttributeToken
  ;

linkageAttributeToken: identifier | '-' | '++' | '#' | '.';
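
Under this generalized grammar, the existing and proposed linkage
attributes would all parse as sequences of the same token set, for example:

```d
extern (C)              // identifier
extern (C++)            // identifier '++'
extern (Objective-C)    // identifier '-' identifier
```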


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
On 24.06.2013 at 23:26, bearophile wrote:
> Walter Bright:
> 
>> Yes, but since I don't know much about O-C programming, the feature
>> should be labeled "experimental" until we're sure it's the right design.
> 
> This change opens a new target of D development (well, it was already
> open for the people willing to use a not standard dmd compiler), but it
> also introduce some extra complexity in the language, that every D
> programmer will have to pay forever, even all the ones that will not use
> those features. So such changes need to be introduced with care and
> after extensive discussions in the main newsgroup. Probably each one new
> thing introduced needs a separate discussion.
> 
> Bye,
> bearophile

I agree. Even though it may not be mentioned in books and many people
may never see the changes, it still *does* make the language more
complex. One consequence is that language processing tools (compilers,
syntax highlighters etc.) get updated/written with this in mind.

This is why I would also suggest to try and make another pass over the
changes, trying to move every bit from language to library that is
possible - without compromising the result too much, of course (e.g. due
to template bloat like in the older D->ObjC bridge). Maybe it's possible
to put some things into __traits or other more general facilities to
avoid changing the language grammar.

On the other hand I actually very much hate to suggest this, as it
probably causes a lot of additional work. But really, we shouldn't take
*any* language additions lightly, even relatively isolated ones. Like
always, new syntax must be able to "pull its own weight" (IMO, of course).