Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim

On Wednesday, 26 June 2013 at 01:25:42 UTC, Bill Baxter wrote:
On Tue, Jun 25, 2013 at 2:37 PM, Joakim joa...@airpost.net wrote:
This talk prominently mentioned scaling to a million users and being
professional: going commercial is the only way to get there.

IDEs are something you can have a freemium model for.  Core languages 
are not these days.  If you have to pay to get the optimized version 
of the language, there are just too many other places to look that 
don't charge.  You want the best version of the language to be in 
everyone's hands...  Hard to make much money selling things to 
developers.
I agree that there is a lot of competition for programming 
languages.  However, Visual Studio brought in $400 million in 
extensions alone a couple of years back:


http://blogs.msdn.com/b/somasegar/archive/2011/04/12/happy-1st-birthday-visual-studio-2010.aspx

Microsoft doesn't break out numbers for Visual Studio itself, but 
it might be a billion+ dollars a year, not to mention all the 
other commercial C++ compilers out there.  If the aim is to 
displace C++ and gain a million users, it is impossible to do so 
without commercial implementations.  All the languages you are 
thinking of that do not offer a single commercial implementation 
(remember, even Perl and Python have commercial options, e.g. 
ActiveState) have almost no usage compared to C++.  It is true 
that there are large companies like Apple or Sun/Oracle that give 
away a lot of tooling for free, but D doesn't have such corporate 
backing.


It is amazing how far D has gotten with no business model: money 
certainly isn't everything.  But it is probably impossible to get 
to a million users or offer professionalism without commercial 
implementations.


In any case, the fact that the D front-end is under the Artistic 
license and most of the rest of the code is released under 
similarly liberal licensing means that someone can do this on 
their own, without any other permission from the community, and I 
expect that if D is successful, someone will.


I'm simply suggesting that the original developers jump-start 
that process by doing it themselves, in the hybrid form I've 
suggested, rather than potentially getting cut out of the 
decision-making process when somebody else does it.


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
Am 24.06.2013 23:26, schrieb bearophile:
 Walter Bright:
 
 Yes, but since I don't know much about O-C programming, the feature
 should be labeled experimental until we're sure it's the right design.
 
 This change opens a new target of D development (well, it was already
 open for people willing to use a non-standard dmd compiler), but it
 also introduces some extra complexity into the language, which every D
 programmer will have to pay for forever, even those who will never use
 these features. So such changes need to be introduced with care and
 after extensive discussion in the main newsgroup. Probably each new
 thing introduced needs a separate discussion.
 
 Bye,
 bearophile

I agree. Even though it may not be mentioned in books and many people
may never see the changes, it still *does* make the language more
complex. One consequence is that language processing tools (compilers,
syntax highlighters etc.) get updated/written with this in mind.

This is why I would also suggest trying to make another pass over the
changes, moving every bit from language to library that we can - 
without compromising the result too much, of course (e.g. due to 
template bloat like in the older D-ObjC bridge). Maybe it's possible
to put some things into __traits or other more general facilities to
avoid changing the language grammar.

On the other hand I actually very much hate to suggest this, as it
probably causes a lot of additional work. But really, we shouldn't take
*any* language additions lightly, even relatively isolated ones. Like
always, new syntax must be able to pull its own weight (IMO, of course).


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
Am 24.06.2013 20:10, schrieb Brian Schott:
 On Monday, 24 June 2013 at 17:51:08 UTC, Walter Bright wrote:
 On 6/24/2013 3:04 AM, Jacob Carlborg wrote:
 On 2013-06-23 23:02, bearophile wrote:

 Instead of:
 extern (Objective-C)

 Is it better to use a naming more D-idiomatic?

 extern (Objective_C)

 As Simen said, we already have extern (C++). But I can absolutely
 change this if people want to.

 Objective-C is just perfect.
 
 linkageAttribute:
   'extern' '(' Identifier ')'
 | 'extern' '(' Identifier '++' ')'
 | 'extern' '(' Identifier '-' Identifier ')'
 ;

Maybe it makes sense to generalize it instead:

linkageAttribute: 'extern' '(' linkageAttributeIdentifier ')';

linkageAttributeIdentifier:
linkageAttributeToken
  | linkageAttributeIdentifier linkageAttributeToken
  ;

linkageAttributeToken: identifier | '-' | '++' | '#' | '.';
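
For illustration (my reading of the proposed rule, not part of the
original post), the generalized grammar would accept linkage names
such as:

```d
extern (C)            // identifier
extern (C++)          // identifier '++'
extern (Objective-C)  // identifier '-' identifier
extern (C#)           // identifier '#'  (hypothetical)
```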


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 10:54, Sönke Ludwig wrote:


I agree. Even though it may not be mentioned in books and many people
may never see the changes, it still *does* make the language more
complex. One consequence is that language processing tools (compilers,
syntax highlighters etc.) get updated/written with this in mind.


I don't think tools (non-compilers) will require much change. 
I see three big changes, none of them at the lexical level:


extern (Objective-C)
[foo:bar:]
foo.class

Any tool that just deals with syntax highlighting (on a lexical level) 
should be able to handle these changes. Sure, you might want to add a 
special case for foo.class so that class is not highlighted in that 
case.



This is why I would also suggest to try and make another pass over the
changes, trying to move every bit from language to library that is
possible - without compromising the result too much, of course (e.g. due
to template bloat like in the older D-ObjC bridge). Maybe it's possible
to put some things into __traits or other more general facilities to
avoid changing the language grammar.


I don't see what could be put in __traits that could help. Do you have 
any suggestions?



On the other hand I actually very much hate to suggest this, as it
probably causes a lot of additional work. But really, we shouldn't take
*any* language additions lightly, even relatively isolated ones. Like
always, new syntax must be able to pull its own weight (IMO, of course).


I would say that for anyone remotely interested in Mac OS X or iOS 
development it pulls its own weight several times over. In my opinion 
it's so obvious that it pulls its own weight that I shouldn't need to 
justify the changes.


--
/Jacob Carlborg


Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-25 22:19, Andrei Alexandrescu wrote:


Truth be told the designer delivered HTML, which we converted to DDoc.


Ok, I see that web designer was probably not the correct word(s). Web 
developer is perhaps better: the one who builds the final format.


--
/Jacob Carlborg


Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-25 23:45, Adam D. Ruppe wrote:


For my work sites, I often don't give the designer access to the html at
all. They have one of two options: make it work with pure css, or send
me an image of what it is supposed to look like, and I'll take it from
there.


"web designer" was probably not the best word(s). You're talking about 
the graphical designer; I was talking about the one implementing the 
design, the web developer/frontend developer or whatever to call it.


I wouldn't give the graphical designer access to the code either. It 
needs to be integrated with the backend code (which is Ruby or similar) 
anyway, to fetch the correct data and so on.


--
/Jacob Carlborg


Re: An idea - make dlang.org a fundation

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 00:55, Aleksandar Ruzicic wrote:


There is no need for the designer to know what DDoc is. For the past few
years I have worked with many designers who had only basic knowledge
of HTML and even less of CSS (most of them don't know anything
about JavaScript, but they know jQuery a bit). They just give me a PSD
and I do the slicing and all the coding.


Again, web designer was not the correct word(s). Something more like 
web developer/frontend developer, whoever writes the final format.



So if any redesign of dlang.org is going to happen, I volunteer to do 
all the coding, so there is no need to look for a designer who is 
comfortable writing DDoc.


Ok, good.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on 25 June at 23:37 you wrote:
 On Tuesday, 25 June 2013 at 20:58:16 UTC, Joseph Rushton Wakeling
 wrote:
 I wonder what the response would be to injecting some money and
 commercialism into the D ecosystem.
 
 Given how D's whole success stems from its community, I think an
 open core model (even with time-lapse) would be disastrous. It'd
 be like kicking everyone in the teeth after all the work they put
 in.
 I don't know the views of the key contributors, but I wonder if they
 would have such a knee-jerk reaction against any paid/closed work.

Against being paid, no; against being closed, YES. Please don't even 
think about it. It was a hell of a ride trying to make D more open; we 
can't step back now. What we need is companies paying people to improve 
the compiler and toolchain. This is slowly starting to happen: at 
Sociomantic there are already 2 of us dedicating some time to improving 
D as part of our job (Don and me).

We need more of this, and to get it, we need companies to start using
D, and for that, we need professionalism (I agree 100% with Andrei on
this one). It's a bootstrap effort, and it's not that volunteers need 
more time to be professional, it's just that you have to want to make 
the jump. I think it's way better to do less stuff but with higher 
quality; nobody is asking people for more time, just to change the 
focus a bit, at least for some time. Again, this is only bootstrapping, 
and that is always hard and painful. We need to make the jump to make 
companies comfortable using D, then things will start rolling by 
themselves.

 The current situation would seem much more of a kick in the teeth to
 me: spending time trying to be professional, as Andrei asks, and
 producing a viable, stable product used by a million developers,
 corporate users included, but never receiving any compensation for
 this great tool you've poured effort into, that your users are
 presumably often making money with.
 
 I understand that such a shift from being mostly OSS to having some
 closed components can be tricky, but that depends on the particular
 community.  I don't think any OSS project has ever become popular
 without having some sort of commercial model attached to it.  C++
 would be nowhere without commercial compilers; linux would be
 unheard of without IBM and Red Hat figuring out a consulting/support
 model around it; and Android would not have put the linux kernel on
 hundreds of millions of computing devices without the hybrid model
 that Google employed, where they provide an open source core, paid
 for through increased ad revenue from Android devices, and the
 hardware vendors provide closed hardware drivers and UI skins on top
 of the OSS core.

First of all, your examples are completely wrong. The projects you are
mentioning are 100% free, with no closed components (except for
components done by third parties). Your examples just reinforce what
I say above. Linux is completely GPL, so it's not even only open source:
it's Free Software, meaning the license is more restrictive than, for
example, Phobos'. This means it's harder for companies to adopt, and you
can't possibly change it in a closed way if you want to distribute
a binary. Same for C++, which is not a project but a standard, and the
most successful and widespread compiler, GCC, is not only free, it's the
battle horse of free software and of the GNU project, created by the
most extreme free software advocate ever. Android might be the only
valid case (but I'm not really familiar with the Android model), but the
kernel, since it's based on Linux, has to come with source code when
released. Maybe the drivers are closed source.

You are missing more closely related projects, like Python, Haskell,
Ruby, Perl, and probably 90% of the newish programming languages, which
are all 100% open source. And very successful, I might say. The key is
always breaking into the corporate ground and making those corporations
contribute.

There are valid examples of projects using hybrid models, but they are
usually software-as-a-service models, not very applicable to
a compiler/language, like Wordpress or other web applications. Other
valid examples are MySQL, or Qt, which I think used a hybrid model at 
least once. Lots of them died and were resurrected as 100% free 
projects, like StarOffice -> OpenOffice -> LibreOffice.

And finally, making the *optimizer* (or some optimizations) closed would
hardly be a good business, given that there are 2 other backends out
there that usually kick the DMD backend's ass already, so people needing
more speed will probably just switch to gdc or ldc.

 This talk prominently mentioned scaling to a million users and being
 professional: going commercial is the only way to get there.

As in breaking into the commercial world? Then agreed. If you imply
commercial == closing some parts of the source, then I think you are WAY
OFF.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Joakim, on 26 June at 08:33 you wrote:
 It is amazing how far D has gotten with no business model: money
 certainly isn't everything.  But it is probably impossible to get to
 a million users or offer professionalism without commercial
 implementations.

Yeah, right, probably Python and Ruby have only 5k users...

This argument is BS.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
Are you such a dreamer?
To put the world to rights?
I'll stay home forever
Where two & two always
makes up five


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Sönke Ludwig
Am 26.06.2013 12:09, schrieb Jacob Carlborg:
 On 2013-06-26 10:54, Sönke Ludwig wrote:
 
 I agree. Even though it may not be mentioned in books and many people
 may never see the changes, it still *does* make the language more
 complex. One consequence is that language processing tools (compilers,
 syntax highlighters etc.) get updated/written with this in mind.
 
 I don't think tools (non-compilers) will require much change.
 I see three big changes, none of them at the lexical level:

I agree, it will only influence tools that include a parser. Few syntax
highlighters parse the code (although *some* do), so this was probably
not the best example.

 This is why I would also suggest to try and make another pass over the
 changes, trying to move every bit from language to library that is
 possible - without compromising the result too much, of course (e.g. due
 to template bloat like in the older D-ObjC bridge). Maybe it's possible
 to put some things into __traits or other more general facilities to
 avoid changing the language grammar.
 
 I don't see what could be put in __traits that could help. Do you have
 any suggestions?

Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
  static @property ProtocolType!T protocolof(this T)() {
return ProtocolType!T.staticInstance;
  }
}

That's of course assuming that the static instance is somehow accessible
from normal D code. Sorry if this doesn't really make sense, I don't
know anything of the implementation details.

The __selector type class might be replaceable by a library type
Selector!(R, ARGS). It would also be great to have general support for
implicit constructors and to make string->NSString and
delegate->ObjcBlock conversions available in the library instead of
dedicated compiler special cases.

Not sure about constructors in interfaces, they seem a bit odd, but
using init instead and letting new call that is also odd...

You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.

Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.
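
To make the UDA suggestion concrete, here is a minimal sketch (my own
illustration, not from the D/Objective-C branch; IBAction, IBOutlet and
AppController are hypothetical names, and this assumes ordinary D UDAs
with no compiler involvement):

```d
// Hypothetical: Interface Builder annotations as ordinary UDA types.
struct IBAction {}
struct IBOutlet {}
struct optional {}

class AppController
{
    @IBOutlet Object button;                  // only meaningful on an
                                              // instance variable

    @IBAction void pressed(Object sender) {}  // only meaningful on a method
                                              // taking a single sender
}
```

A compiler (or a static checker) could then enforce the placement rules
via the attribute lists, without any new keywords.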

 
 On the other hand I actually very much hate to suggest this, as it
 probably causes a lot of additional work. But really, we shouldn't take
 *any* language additions lightly, even relatively isolated ones. Like
 always, new syntax must be able to pull its own weight (IMO, of
 course).
 
 I would say that for anyone remotely interested in Mac OS X or iOS
 development it pulls its own weight several times over. In my opinion
 it's so obvious that it pulls its own weight that I shouldn't need to
 justify the changes.
 

I don't mean the additions as a whole, of course, but each single
language change vs. a library-based solution of the same feature ;) In
general this is a great addition from a functional point of view! I was
very much looking forward to it coming back to life.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Dicebot
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella wrote:
Android might be the only valid case (but I'm not really familiar 
with the Android model), but the kernel, since it's based on Linux, 
has to have the source code when released. Maybe the drivers are 
closed source.


It is perfectly open: 
http://source.android.com/source/licenses.html ;)
Drivers tend to be closed source, but drivers are not part of the 
Android project; they are private to vendors.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Tuesday, 25 June 2013 at 21:38:01 UTC, Joakim wrote:
I don't know the views of the key contributors, but I wonder if 
they would have such a knee-jerk reaction against any 
paid/closed work.  The current situation would seem much more 
of a kick in the teeth to me: spending time trying to be 
professional, as Andrei asks, and producing a viable, stable 
product used by a million developers, corporate users included, 
but never receiving any compensation for this great tool you've 
poured effort into, that your users are presumably often making 
money with.


Obviously I can't speak for the core developers, or even for the 
community as a group.  But I can make the following observations.


D's success as a language is _entirely_ down to volunteer effort 
-- as Walter highlighted in his keynote.  Volunteer effort is 
responsible for the development of the compiler frontend, the 
runtime, and the standard library.  Volunteers have put in the 
hard work of porting these to other compiler backends.  
Volunteers have made and reviewed language improvement proposals, 
and have been vigilant in reporting and resolving bugs.  
Volunteers also contribute to vibrant discussions on these very 
forums, providing support and advice to those in need of help.  
And many of these volunteers have been doing so over the course 
of years.


Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right thing 
to do is to turn around to all those people and say: "Hey guys, 
after all the work you put in to make D so great, now we're going 
to build on that, but you'll have to wait 6 months for the extra 
goodies unless you pay"?


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the forum 
participants?  What could you say to the maintainers of GDC or 
LDC, after all they've done to enable people to use the language, 
that could justify denying their compilers up-to-date access to 
the latest features?  How would it affect the atmosphere of 
discussion about language development -- compared to the current 
friendly, collegial approach?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a price? 
 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much more 
strongly desired than in many other fields.


I understand that such a shift from being mostly OSS to having 
some closed components can be tricky, but that depends on the 
particular community.  I don't think any OSS project has ever 
become popular without having some sort of commercial model 
attached to it.  C++ would be nowhere without commercial 
compilers; linux would be unheard of without IBM and Red Hat 
figuring out a consulting/support model around it; and Android 
would not have put the linux kernel on hundreds of millions of 
computing devices without the hybrid model that Google 
employed, where they provide an open source core, paid for 
through increased ad revenue from Android devices, and the 
hardware vendors provide closed hardware drivers and UI skins 
on top of the OSS core.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and introducing 
closed components.  Red Hat is a good example of that -- I can 
get, legally and for free, a fully functional copy of Red Hat 
Enterprise Linux without paying a penny.  It's just missing the 
Red Hat name and logos and the support contract.


In another email you mentioned Microsoft's revenues from Visual 
Studio but -- leaving aside for a moment all the moral and 
strategic concerns of closing things up -- Visual Studio enjoys 
that success because it's a virtually essential tool for 
professional development on Microsoft Windows, which still has an 
effective monopoly on modern desktop computing.  Microsoft has 
the market presence to be able to dictate terms like that -- no 
one else does.  Certainly no upcoming programming language could 
operate like that!


This talk prominently mentioned scaling to a million users and 
being professional: going commercial is the only way to get 
there.


It's more likely that closing off parts of the offering would 
limit that uptake, for reasons already given.  On the other hand, 
with more and more organizations coming to use and rely on D, 
there are plenty of other ways professional development could be 
brought in.  Just to take one example: companies with a 
mission-critical interest in D have a corresponding interest in 
their developers giving time to the language itself.  How many 
such companies do you think there need to be before D has a 
stable of skilled professional developers being paid explicitly 
to maintain and develop the language?


Your citation of the Linux kernel 

Re: An idea - make dlang.org a fundation

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 10:18:58 UTC, Jacob Carlborg wrote:
that you're talking about the graphical designer I was talking 
about the one implementing the design, web developer/frontend 
developer or what to call it.


Ah yes. Still though, I don't think ddoc is that big of a deal, 
especially since there's a few of us here who can do the 
translations if needed.


I wouldn't give the graphical designer access to the code 
either. It needs to be integrated with the backend code (which 
is Ruby or similar) anyway, to fetch the correct data and so on.


Right.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to some extent.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 12:39:05 UTC, Jacob Carlborg wrote:

On 2013-06-26 12:16, Leandro Lucarella wrote:


Yeah, right, probably Python and Ruby have only 5k users...


There are companies backing those languages, at least Ruby, to 
some extent.


They don't own them, though -- they commit resources to them 
because the language's ongoing development serves their business 
needs.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread eles

On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:

On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
D Season of Code! Then we don't have to restrict ourselves to 
one time of the year.


D Seasons of Code! Why restrict it to a single season? Let's code 
all year long! :)


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Leandro Lucarella
Jacob Carlborg, on 26 June at 14:39 you wrote:
 On 2013-06-26 12:16, Leandro Lucarella wrote:
 
 Yeah, right, probably Python and Ruby have only 5k users...
 
 There are companies backing those languages, at least Ruby, to some
 extent.

Read my other post, I won't repeat myself :)

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
THEY'RE COLLECTING SIGNATURES AND PAW PRINTS FOR THE PUPPY SENTENCED TO DEATH...
-- Crónica TV


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On 26 June 2013 15:04, eles e...@eles.com wrote:
 On Tuesday, 25 June 2013 at 08:21:38 UTC, Mike Parker wrote:

 On Tuesday, 25 June 2013 at 05:57:30 UTC, Peter Williams wrote:
 D Season of Code! Then we don't have to restrict ourselves to one time of
 the year.


 D Seasons of Code! Why restrict it to a single season? Let's code all
 year long! :)

Programmers need to hibernate too, you know. ;)

--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 13:07, Sönke Ludwig wrote:


I agree, it will only influence tools that include a parser. Few syntax
highlighters parse the code (although *some* do), so this was probably
not the best example.


Absolutely, some even do semantic analysis. For example, the syntax 
highlighter in Eclipse for Java highlights instance variables 
differently from other identifiers. I don't know if there are any 
syntax highlighters for D that do this.



Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
   static @property ProtocolType!T protocolof(this T)() {
 return ProtocolType!T.staticInstance;
   }
}


So what would ProtocolType do? I think I need to look at the 
implementation of .class and .protocolof. In Objective-C there are 
runtime functions that do the same; I don't know if those would work 
for D as well.



That's of course assuming that the static instance is somehow accessible
from normal D code. Sorry if this doesn't really make sense, I don't
know anything of the implementation details.

The __selector type class might be replaceable by a library type
Selector!(R, ARGS).


Hmm, that might be possible. We would need a trait to get the selector 
for a method, which we should have anyway. But this uses templates 
again. We don't want to move everything to library code; then we would 
have the same problem as with the bridge.
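
As a rough sketch of what a library Selector type could look like (my
own illustration; the trait for obtaining a method's selector does not
exist yet and is named here purely as an assumption):

```d
// Hypothetical library replacement for the __selector type class.
// The selector name would come from a compiler trait, e.g. something
// like __traits(objcSelector, fn) -- an assumed, nonexistent trait.
struct Selector(R, ARGS...)
{
    immutable(char)* name;  // registered selector, e.g. "setObject:forKey:"

    // A type-safe send would still need runtime/compiler support;
    // the template parameters only pin down the signature.
}

// Usage sketch: a selector for void setTitle(NSString) would be typed
// as Selector!(void, NSString).
```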



It would also be great to have general support for
implicit constructors and make string-NSString and delegate-ObjcBlock
available in the library instead of dedicated compiler special case.


Since strings and delegates are already implemented in the language, 
would it be possible to add implicit conversions for these types in the 
library?
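
A library shape for such a conversion might look like the following
sketch (my own illustration; NSString stands in for the bridged class,
and note that D does not currently invoke constructors implicitly at
call sites, which is exactly the "implicit constructors" support being
discussed):

```d
// Hypothetical wrapper bridging D strings to an Objective-C string class.
struct ObjCString
{
    NSString handle;      // assumed bridged Objective-C class

    this(string s)
    {
        // bridge: create an NSString from the D string (details omitted)
    }

    alias handle this;    // implicitly convert back to the NSString handle
}

// With implicit-constructor support, a function taking ObjCString could
// accept a plain D string literal directly; today the call site would
// have to write ObjCString("...") explicitly.
```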



Not sure about constructors in interfaces, they seem a bit odd, but
using init instead and letting new call that is also odd...


Using alloc.init would be more Objective-C-like and using new would 
be more D-like.



You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.


The compiler will need to know about @optional. I don't think the 
compiler will need to know about @IBAction and @IBOutlet, but if it 
does, there are a couple of checks we could implement: @IBOutlet 
only makes sense on instance variables, and @IBAction only makes sense 
on an instance method with the following signature:


void foo (id sender) { }

Possibly any Objective-C type could be used as the argument type.


Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.


I'm open to suggestions.


I don't mean the additions as a whole of course, but each single
language change vs. a library based solution of the same feature ;) In
general this is a great addition from a functional view! I was very much
looking forward for it to get back to life.


Great. It's just a question of what is possible to implement in library 
code.


--
/Jacob Carlborg


Re: D/Objective-C, extern (Objective-C)

2013-06-26 Thread Michel Fortin

On 2013-06-26 11:07:45 +, Sönke Ludwig slud...@outerproduct.org said:


Naively I first thought that .class and .protocolof were candidates for
__traits, but actually it looks like they might simply be implemented
using a templated static property:

class ObjcObject {
  static @property ProtocolType!T protocolof(this T)() {
return ProtocolType!T.staticInstance;
  }
}

That's of course assuming that the static instance is somehow accessible
from normal D code.


I don't think you get what protocolof is, or if so I can't understand 
what you're trying to suggest with the code above. It's a way to obtain 
the pointer identifying a protocol. You don't call protocolof on a 
class, but on the interface. Like this:


extern (Objective-C) interface MyInterface {}

NSObject object;
if (object.conformsToProtocol(MyInterface.protocolof))
{ … }

protocolof is a pointer generated by the compiler that represents the 
Objective-C protocol for that interface. It's pretty much like other 
compiler-generated properties such as mangleof and nameof. There's 
nothing unusual about protocolof.


And that conformsToProtocol function above is a completely normal 
function by the way.


As for .class, it's pretty much like .classinfo for D objects. The 
difference is that it returns an instance of a different type depending 
on the class (Objective-C has a metaclass hierarchy), so it needs to be 
handled by the compiler. I used .class to mirror the name in 
Objective-C code. Since this has to be compiler-generated and its type 
is magically typeof(this).Class, I see no harm in using a keyword for 
it. I could have called it .classinfo, but that'd be rather misleading 
if you ask me (it's not a ClassInfo object, nor does it behave like 
ClassInfo).



The __selector type class might be replaceable by a library type
Selector!(R, ARGS).


It could. But it needs compiler support if you want to extract them 
from functions in a type-safe manner. If the compiler has to understand 
the type, better make it a language extension.



 It would also be great to have general support for
 implicit constructors and to make string->NSString and
 delegate->ObjcBlock conversions available in the library instead of
 dedicated compiler special cases.


String literals are implicitly convertible to NSString with absolutely 
no overhead.



Not sure about constructors in interfaces, they seem a bit odd, but
using init instead and letting new call that is also odd...


Well, they're supported in Objective-C (as init methods), so we have to 
support them.



You already mentioned @IBAction and @IBOutlet, those can obviously be
UDAs, as well as @optional and other similar keywords.


Indeed.


Maybe it's possible like this to reduce the syntax additions to
extern(Objective-C) and possibly constructors in interfaces.


Maybe. But not at the cost of memory safety.

The idea is that something written in @safe D should be memory-safe, 
and that should be provable by the compiler. This should apply to 
Objective-C code written in D too. Without this requirement we could 
make it less magic and allow, for instance, NSObject.alloc().init(). 
But that's not @safe, which is why constructors were implemented.
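To illustrate the distinction (a sketch following the reasoning above; `alloc`/`init` are the standard Cocoa names):

```d
// Sketch of the two styles discussed above.

// Objective-C idiom mirrored directly -- NOT @safe: between alloc()
// and init() an uninitialized object exists, and the compiler cannot
// prove it never escapes or gets used.
NSObject a = NSObject.alloc().init();

// Constructor form -- allocation and initialization are fused into one
// step the compiler controls, so it can be proven memory-safe:
NSObject b = new NSObject;
```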


But we can't do this at the cost of disallowing existing idioms used in 
Objective-C. For instance, I could get a pointer to a class object and 
create a new object from it. If you define this:


extern (Objective-C):
interface MyProtocol {
    this(string);
}
class MyObject : NSObject, MyProtocol {
    this(string) {}
}

you can then write this:

MyProtocol.Class c = MyObject.class;
NSObject o = new c("baca");

And the compiler then knows that the class pointer can allocate objects 
that can be constructed with a string parameter. This is something that 
can be and is done in Objective-C (hence why you'll find constructors 
on interfaces). The idea is to add provable memory safety on top of it. 
(Note that the above example is not implemented yet, nor documented.)


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca/



Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 11:08:17 UTC, Leandro Lucarella 
wrote:

Joakim, el 25 de June a las 23:37 me escribiste:
I don't know the views of the key contributors, but I wonder 
if they
would have such a knee-jerk reaction against any paid/closed 
work.


Against being paid, no; against being closed, YES. Please don't 
even think
about it. It was a hell of a ride making D more open, only to 
step back now.
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that users 
who value performance will actually pay for the closed compiler.  
There can be a hard time limit, say nine months, so that you know 
any closed patches from nine months back will be opened and 
applied to the free compiler.  I suspect that the money will be 
good enough so that any bugfixes or features added by the closed 
developers will be added to the free compiler right away, with no 
delay.



What we need is companies paying to people to improve the
compiler and toolchain. This is slowly starting to happen, in
Sociomantic we are already 2 people dedicating some time to 
improve D as

part of our job (Don and me).
Thanks for the work that you and Don have done with Sociomantic.  
Why do you think more companies don't do this?  My point is that 
if there were money coming in from a paid compiler, Walter could 
fund even more such work.


We need more of this, and to get this, we need companies to 
start using
D, and to get this, we need professionalism (I agree 100% with 
Andrei on
this one). It's a bootstrap effort, and it's not like volunteers 
need more
time to be professional; it's just that you have to want to make 
the jump.
I think this ignores the decades-long history we have with open 
source software by now.  It is not merely a matter of wanting to 
make the jump: most volunteers simply do not want to do painful 
tasks like writing documentation, or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


I think it's way better to do less stuff but with higher quality; 
nobody
is asking people for more time, it's just changing the focus a 
bit, at
least for some time. Again, this is only bootstrapping, and that 
is always
hard and painful. We need to make the jump to make companies 
comfortable

using D; then things will start rolling by themselves.
If I understand your story right, the volunteers need to put a 
lot of effort into bootstrapping the project to be more 
professional, companies will see this and jump in, then they fund 
development from then on out?  It's possible, but is there any 
example you have in mind?  The languages that go this completely 
FOSS route tend not to have as much adoption as those with closed 
implementations, like C++.


First of all, your examples are completely wrong. The projects 
you are

mentioning are 100% free, with no closed components (except for
components done by third-party).
You are misstating what I said: I said "commercial", not 
"closed", and gave different examples of commercial models.  But 
let's look at them.



Your examples are just reinforcing what
I say above. Linux is completely GPL, so it's not even only open 
source.
It's Free Software, meaning the license is more restrictive than, 
for
example, Phobos's. This means it's harder for companies to adopt, 
and you
can't possibly change it in a closed way if you want to 
distribute

a binary.
And yet the linux kernel ships with many binary blobs, almost all 
the time.  I don't know how they legally do it, considering the 
GPL, yet it is much more common to run a kernel with binary blobs 
than a purely FOSS version.  The vast majority of linux installs 
are due to Android and every single one has significant binary 
blobs and closed-source modifications to the Android source, 
which is allowed since most of Android is under the more liberal 
Apache license, with only the linux kernel under the GPL.


Again, I don't know how they get away with all the binary drivers 
in the kernel, perhaps that is a grey area with the GPL.  For 
example, even the most open source Android devices, the Nexus 
devices sold directly by Google and running stock Android, have 
many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where you can change it in a closed way because you are not 
distributing a binary.  Google takes advantage of this to run 
linux on a million servers 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 15:18, Joseph Rushton Wakeling wrote:


They don't own them, though -- they commit resources to them because the
language's ongoing development serves their business needs.


Yes, exactly.

--
/Jacob Carlborg


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 15:52:33 UTC, Joakim wrote:
I suggest you read my original post more carefully.  I have not 
suggested closing up the entire D toolchain, as you seem to 
imply.  I have suggested working on optimization patches in a 
closed-source manner and providing two versions of the D 
compiler: one that is faster, closed, and paid, with these 
optimization patches, another that is slower, open, and free, 
without the optimization patches.


Over time, the optimization patches are merged back to the free 
branch, so that the funding from the closed compiler makes even 
the free compiler faster, but only after some delay so that 
users who value performance will actually pay for the closed 
compiler.  There can be a hard time limit, say nine months, so 
that you know any closed patches from nine months back will be 
opened and applied to the free compiler.  I suspect that the 
money will be good enough so that any bugfixes or features 
added by the closed developers will be added to the free 
compiler right away, with no delay.


Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money..."?


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?


Thanks for the work that you and Don have done with 
Sociomantic.  Why do you think more companies don't do this?  
My point is that if there were money coming in from a paid 
compiler, Walter could fund even more such work.


Leaving aside the moral issues, you might consider that any work 
paid for by revenues would be offset by a drop in voluntary 
contributions, including corporate contributors.  And sensible 
companies will avoid open core solutions.


A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/

I think this ignores the decades-long history we have with open 
source software by now.  It is not merely a matter of wanting to 
make the jump: most volunteers simply do not want to do painful 
tasks like writing documentation, or cannot put as much time into 
development when no money is coming in.  Simply saying "We have 
to try harder to be professional" seems naive to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software is 
that the software business seems to be getting more and more open 
with every year.  These days there's a strong expectation of free 
licensing.


If I understand your story right, the volunteers need to put a 
lot of effort into bootstrapping the project to be more 
professional, companies will see this and jump in, then they 
fund development from then on out?  It's possible, but is there 
any example you have in mind?  The languages that go this 
completely FOSS route tend not to have as much adoption as 
those with closed implementations, like C++.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major first 
mover, and until the advent of D, it was arguably the _only_ 
language that had that combination of power/flexibility and 
performance.


So far as compiler implementations are concerned, I'd say that it 
was the fact that there were many different implementations that 
helped C++.  On the other hand, proprietary implementations may 
in some ways have damaged adoption, as before standardization 
you'd have competing, incompatible proprietary versions which 
limited the portability of code.


And yet the linux kernel ships with many binary blobs, almost 
all the time.  I don't know how they legally do it, considering 
the GPL, yet it is much more common to run a kernel with binary 
blobs than a purely FOSS version.  The vast majority of linux 
installs are due to Android and every single one has 
significant binary blobs and closed-source modifications to the 
Android source, which is allowed since most of Android is under 
the more liberal Apache license, with only the linux kernel 
under the GPL.


The binary blobs are nevertheless part of the vanilla kernel, not 
something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are an 
irritation that's tolerated for practical reasons, rather than a 
design feature.


Again, I don't know how they get away with all the binary 
drivers in the kernel, perhaps that is a grey area with the 
GPL.  For example, even the most open source Android devices, 
the Nexus devices sold directly by Google and running stock 
Android, have many binary blobs:


https://developers.google.com/android/nexus/drivers

Other than Android, linux is really only popular on servers, 
where you can 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: "Hey 
guys, after all the work you put in to make D so great, now 
we're going to build on that, but you'll have to wait 6 months 
for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches, all bugfixes and 
feature patches would likely be applied to both the free and paid 
compilers, certainly bugfixes.  So not _all_ the extra goodies 
have to be paid for, and even the optimization patches are 
eventually open-sourced.


How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers up-to-date 
access to the latest features?  How would it affect the 
atmosphere of discussion about language development -- compared 
to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up, and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) If 
they are religious zealots about having only a single, completely 
open-source implementation (damn the superior results from hybrid 
models), perhaps they will be unhappy.  I suspect the former far 
outnumber the latter, since D doesn't employ the purely-GPL 
approach the zealots usually insist on.


We could poll them and find out.  You keep talking about closed 
patches as though they can only piss off the volunteers.  But if 
I'm right and a hybrid model would lead to a lot more funding and 
adoption of D, their volunteer work places them in an ideal 
position, where their D skills and contributions are much more 
valued and they can then probably do paid work in D.  I suspect 
most will end up happier.


I have not proposed denying GDC and LDC access to the latest 
features, only optimization patches.  LDC could do the same as 
dmd and provide a closed, paid version with the optimization 
patches, which it could license from dmd.  GDC couldn't do this, 
of course, but that is the result of their purist GPL-only 
approach.


Why do you think a hybrid model would materially affect the 
atmosphere of discussion about language development?  Do you 
believe that the people who work on hybrid projects like Android, 
probably the most widely-used, majority-OSS project in the world, 
are not able to collaborate effectively?


... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.


As for how it would affect uptake, I think most people know 
that free products are usually less capable than paid products.  
The people who don't need the capability use Visual Studio 
Express, those who need it pay for the full version of Visual 
Studio.  There's no reason D couldn't employ a similar segmented 
model.


 There are orders of magnitude of difference between uptake of 
free and non-free services no matter what the domain, and 
software is one where free (as in freedom and beer) is much 
more strongly desired than in many other fields.
Yes, you're right, non-free services have orders of magnitude 
more uptake. :p


I think there are advantages to both closed and open source, 
which is why hybrid open/closed source models are currently very 
popular.  Open source allows more collaboration from outside, 
while closed source allows for _much_ more funding from paying 
customers.  I see no reason to dogmatically insist that these 
source models not be mixed.


There's a big difference between introducing commercial models 
with a greater degree of paid professional work, and 
introducing closed components.  Red Hat is a good example of 
that -- I can get, legally and for 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim
On Wednesday, 26 June 2013 at 17:28:22 UTC, Joseph Rushton 
Wakeling wrote:
Perhaps you'd like to explain to the maintainers of GDC and LDC 
why, after all they've done for D, you think it would be 
acceptable to turn to them and say: "Hey guys, we're going to 
make improvements and keep them from you for 9 months so we can 
make money..."?
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


Or doesn't the cooperative relationship between the 3 main D 
compilers mean much to you?
As I've noted in an earlier response, LDC could also provide a 
closed version and license those patches.


Leaving aside the moral issues, you might consider that any 
work paid for by revenues would be offset by a drop in 
voluntary contributions, including corporate contributors.  And 
sensible companies will avoid open core solutions.
Or maybe the work paid for by revenues would be far greater, and 
even more people would volunteer, once D becomes a more 
successful project through funding from the paid compiler.  
Considering how dominant open core and other hybrid models are 
these days, it is laughable to suggest that anyone is avoiding 
them. :)



A few articles worth reading on these factors:
http://webmink.com/essays/monetisation/
http://webmink.com/essays/open-core/
http://webmink.com/essays/donating-money/
I have corresponded with the author of that blog before.  I found 
him to be a religious zealot who recounted the four freedoms of 
GNU to me like a mantra.  Perhaps that's why Sun was run into the 
ground when they followed his ideas about open sourcing most 
everything.  I don't look to him for worthwhile reading on these 
issues.


I think this ignores the decades-long history we have with 
open source software by now.  It is not merely a matter of 
wanting to make the jump: most volunteers simply do not want 
to do painful tasks like writing documentation, or cannot put 
as much time into development when no money is coming in.  
Simply saying "We have to try harder to be professional" seems 
naive to me.


Odd that you talk about ignoring things, because the general 
trend we've seen in the decades-long history of free software 
is that the software business seems to be getting more and more 
open with every year.  These days there's a strong expectation 
of free licensing.
Yes, it is getting more and more open, because hybrid models 
are being used more. :) Pure open source software, with no binary 
blobs, has almost no adoption, so it isn't your preferred purist 
approach that is doing well.  And the reasons are the ones I 
gave, volunteers can do a lot of things, but there are a lot of 
things they won't do.


It's hardly fair to compare languages without also taking into 
account their relative age.  C++ has its large market share 
substantially due to historical factors -- it was a major 
first mover, and until the advent of D, it was arguably the 
_only_ language that had that combination of power/flexibility 
and performance.

Yes, C++ has been greatly helped by its age.

So far as compiler implementations are concerned, I'd say that 
it was the fact that there were many different implementations 
that helped C++.  On the other hand, proprietary 
implementations may in some ways have damaged adoption, as 
before standardization you'd have competing, incompatible 
proprietary versions which limited the portability of code.
But you neglect to mention that most of those many different 
implementations were closed.  I agree that completely closed 
implementations can also cause incompatibilities, which is why I 
have suggested a hybrid model with limited closed-source patches.


The binary blobs are nevertheless part of the vanilla kernel, 
not something "value added" that gets charged for.  They're 
irrelevant to the development model of the kernel -- they are 
an irritation that's tolerated for practical reasons, rather 
than a design feature.
They are not always charged for, but they put the lie to the 
claims that linux uses a pure open source model.  Rather, it is 
usually a different kind of hybrid model.  If it were so pure, 
there would be no blobs at all.  The blobs are certainly not 
irrelevant, as linux wouldn't run on all the hardware that needs 
those binary blobs, if they weren't included.  Not sure what to 
make of your non sequitur of binary blobs not being a design 
feature.


As for paying for blobs, I'll note that the vast majority of 
linux kernels installed are in Android devices, where one pays 
for the hardware _and_ the development effort to develop the 
blobs that run the hardware.  So paying for the value added 
from blobs seems to be a very successful model. :)


So if one looks at linux in any detail, hybrid models are more 
the norm than the exception, even with the GPL. :)


But no one is selling proprietary extensions to the kernel (not 
that 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
I can't be bothered to read all the points the two of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)




On Wednesday, 26 June 2013 at 17:42:23 UTC, Joakim wrote:
On Wednesday, 26 June 2013 at 12:02:38 UTC, Joseph Rushton 
Wakeling wrote:
Now, in trying to drive more funding and professional effort 
towards D development, do you _really_ think that the right 
thing to do is to turn around to all those people and say: 
"Hey guys, after all the work you put in to make D so great, 
now we're going to build on that, but you'll have to wait 6 
months for the extra goodies unless you pay"?
Yes, I think it is the right thing to do.  I am only talking 
about closing off the optimization patches, all bugfixes and 
feature patches would likely be applied to both the free and 
paid compilers, certainly bugfixes.  So not _all_ the extra 
goodies have to be paid for, and even the optimization patches 
are eventually open-sourced.




From a licensing perspective, the only part of the source that 
can be closed off is the DMD backend.  Any optimisation fixes 
in the DMD backend do not affect GDC/LDC.



How do you think that will affect the motivation of all those 
volunteers -- the code contributors, the bug reporters, the 
forum participants?  What could you say to the maintainers of 
GDC or LDC, after all they've done to enable people to use the 
language, that could justify denying their compilers 
up-to-date access to the latest features?  How would it affect 
the atmosphere of discussion about language development -- 
compared to the current friendly, collegial approach?
I don't know how it will affect their motivation, as they 
probably differ in the reasons they contribute.


If D becomes much more popular because the quality of 
implementation goes up, and their D skills and contributions 
become much more prized, I suspect they will be very happy. :) 
If they are religious zealots about having only a single, 
completely open-source implementation (damn the superior 
results from hybrid models), perhaps they will be unhappy.  I 
suspect the former far outnumber the latter, since D doesn't 
employ the purely-GPL approach the zealots usually insist on.




You should try reading The Cathedral and the Bazaar if you don't 
understand why an open approach to development has caused the D 
programming language to grow tenfold over the last year or so.


If you still don't understand, read it again ad infinitum.



... and -- how do you think it would affect uptake, if it was 
announced that access to the best features would come at a 
price?
Please stop distorting my argument.  There are many different 
types of patches added to the dmd frontend every day: bugfixes, 
features, optimizations, etc.  I have only proposed closing the 
optimization patches.


However, I do think some features can also be closed this way.  
For example, Walter has added features like SIMD modifications 
only for Remedy.  He could make this type of feature closed 
initially, available only in the paid compiler.  As the feature 
matures and is paid for, it would eventually be merged into the 
free compiler.  This is usually not a problem as those who want 
that kind of performance usually make a lot of money off of it 
and are happy to pay for that performance: that is all I'm 
proposing with my optimization patches idea also.




I think I might just point out that GDC had SIMD support before 
DMD, and that Remedy used GDC to get their D development off the 
ground.  It was features such as UDAs, along with many language 
bug fixes that were only available in DMD development, that 
caused them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.



In another email you mentioned Microsoft's revenues from 
Visual Studio but -- leaving aside for a moment all the moral 
and strategic concerns of closing things up -- Visual Studio 
enjoys that success because it's a virtually essential tool 
for professional development on Microsoft Windows, which still 
has an effective monopoly on modern desktop computing.  
Microsoft has the market presence to be able to dictate terms 
like that -- no one else does.  Certainly no upcoming 
programming language could operate like that!
Yes, Microsoft has unusual leverage.  But Visual Studio's 
compiler is not the only paid C++ compiler in the market, hell, 
Walter still sells C and C++ compilers.


I'm not proposing D operate just like Microsoft.  I'm 
suggesting a subtle compromise, a mix of that familiar closed 
model and the open source model you prefer, a hybrid model that 
you are no doubt familiar with, since you correctly pegged the 
licensing lingo earlier, when you mentioned open core.


These hybrid models are immensely popular these days: the two 
most popular software projects of the last decade, iOS and 
Android, are 

dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
https://github.com/AndrejMitrovic/dlibgit

These are the D bindings to the libgit2 library. libgit2 is a
versatile git library which can read/write loose git object files,
parse commits, tags, and blobs, do tree traversals, and much more.

The dlibgit master branch is now based on the recent libgit2 v0.19.0
release. The previous bindings were based on 0.17.0, and there have
been many new features introduced since then.

Note: The D-based samples have not yet been updated to v0.19.0, but
I'll work on this in the coming days.

Note: I might also look into making this a dub-aware package, if
that's something people want.

Licensing information:

libgit2 is licensed under a very permissive license (GPLv2 with a
special Linking Exception). This basically means that you can link it
(unmodified) with any kind of software without having to release its
source code.

dlibgit GitHub page: https://github.com/AndrejMitrovic/dlibgit
libgit2 homepage: libgit2.github.com/
libgit2 repo: https://github.com/libgit2/libgit2/
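For the curious, a minimal sketch of what using the bindings looks like. The function names follow libgit2's C API (which dlibgit translates directly), but the module path below is an assumption and error handling is abbreviated:

```d
// Sketch: open an existing repository and look up HEAD using the
// C-style API that dlibgit mirrors. Function names follow libgit2
// v0.19; the exact dlibgit module layout is assumed here.
import git.c.all; // hypothetical umbrella module

void main()
{
    git_repository* repo;
    if (git_repository_open(&repo, ".") != 0)
        return; // not a git repository

    git_reference* head;
    if (git_repository_head(&head, repo) == 0)
        git_reference_free(head); // release the reference when done

    git_repository_free(repo);
}
```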


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joakim

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
From a licensing perspective, the only part of the source that 
can be closed off is the DMD backend.  Any optimisation fixes 
in the DMD backend does not affect GDC/LDC.
This is flat wrong. I suggest you read the Artistic license; it 
was chosen for a reason, i.e. it allows closing of source as long 
as you provide the original, unmodified binaries with any 
modified binaries.  I suspect optimization fixes will be in both 
the frontend and backend.


You should try reading The Cathedral and the Bazaar if you 
don't understand why an open approach to development has caused 
the D programming language to grow tenfold over the last year 
or so.


If you still don't understand, read it again ad infinitum.
I've never read it, but I have corresponded with the author, and 
I found him to be as religious about pure open source as Stallman 
is about the GPL.  I suggest you try examining why D is still 
such a niche language even with tenfold growth.  If you're not 
sure why, I suggest you look at the examples and reasons I've 
given as to why closed-source and hybrid models do much better.


I think I might just point out that GDC had SIMD support before 
DMD, and that Remedy used GDC to get their D development off 
the ground.  It was features such as UDAs, along with many 
language bug fixes that were only available in DMD development, 
that caused them to switch over.


In other words, they needed a faster turnaround for bugs at the 
time they were adopting D, however the D front-end in GDC stays 
pretty much stable on the current release.
Not sure what point you are trying to make, as both gdc and dmd 
are open source.  I'm suggesting closing such patches, for a 
limited time.


I see no reason why another upcoming project like D couldn't 
do the same. :)


You seem to be confusing D for an Operating System, Smartphone, 
or any general consumer product.
You seem to be confusing matters: the dmd compiler is a piece of 
software just like the rest, including the many proprietary C++ 
compilers out there.


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.
Perhaps there is some truth to that.  But nobody is suggesting a 
purely closed-source language either.


I don't think a purely community-run project is a worthwhile 
goal, particularly if you are aiming for a million users and 
professionalism.  I think there is always opportunity for 
mixing of commercial implementations and community 
involvement, as very successful hybrid projects like Android 
or Chrome have shown.


Your argument seems lost on me as you seem to be taking a very 
strange angle of association with the D language and/or 
compiler, and you don't seem to understand how the development 
process of D works either.
I am associating D, an open source project, with Android and 
Chrome, two of the most successful open source projects at the 
moment, which both benefit from hybrid models.  I find it strange 
that you cannot follow.  If I don't understand how the 
development process of D works, you could point out an example, 
instead of making basic mistakes in not knowing what licenses it 
uses and what they allow. :)


- The language implementation is open source. This allows 
anyone to take the current front-end code - or even write their 
own clean-room implementation from ground-up - and integrate it 
to their own backend X.
Sort of.  The dmd frontend is open source, but the backend is not 
under an open source license.  Someone can swap out the backend 
and go completely closed, for example, using ldc (ldc used to 
have one or two GPL files, those would obviously have to be 
removed).


- The compiler itself is not associated with the development of 
the language, so those who are owners of the copyright are free 
to do what they want with their binary releases.


- The development model of D on github has adopted a pull, 
review and merge system, where any changes to the language or 
compiler do not go in unless it goes through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an open core model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.
I'm not sure why you think open core patches that are opened 
after a time limit would be any more likely to be rejected from 
that review process.  The only fallacy I see here is yours.


- Likewise, because of the licensing and copyright assignments in 
place on the D front-end implementation, any closed D compiler 
using it would have to make its front-end sources, with local 
modifications, available upon request.  So it makes no sense 
whatsoever to make language features - such as SIMD - 
closed off.

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:
I can't be bothered to read all points the both of you have 
mentioned thus far, but I do hope to add a voice of reason to 
calm you down. ;)


Quick, nurse, the screens!

... or perhaps, Someone throw a bucket of water over them? :-P

From a licensing perspective, the only part of the source that 
can be closed off is the DMD backend.  Any optimisation fixes 
in the DMD backend does not affect GDC/LDC.


To be honest, I can't see the sales value of optimization fixes 
in the DMD backend given that GDC and LDC already have such 
strong performance.  The one strong motivation to use DMD over 
the other two compilers is (as you describe) access to the 
bleeding edge of features, but I'd have thought this will stop 
being an advantage in time as/when the frontend becomes a 
genuinely plug-and-play component.


By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


Having used closed source languages in the past, I strongly 
believe that closed languages do not stimulate growth or 
adoption at all.  And where adoption does occur, knowledge is 
kept within specialised groups.


Last year I had the dubious privilege of having to work with MS 
Visual Basic for a temporary job.  What was strikingly different 
from the various open source languages was that although there 
was an extensive quantity of documentation available from 
Microsoft, it was incredibly badly organized, much of it was out 
of date, and there was no meaningful community support that I 
could find.


I got the job done, but I would surely have had a much easier 
experience with any of the open source languages out there.  
Suffice to say that the only reason I used VB in this case was 
because it was an obligatory part of the work -- I'd never use it 
by choice.


- The development model of D on github has adopted a pull, 
review and merge system, where any changes to the language or 
compiler do not go in unless it goes through proper coding 
review and testing (thanks to the wonderful auto-tester).  So 
your suggestion of an open core model has a slight fallacy 
here in that any changes to the closed off compiler would have 
to go through the same process to be accepted into the open one 
- and it might even be rejected.


I had a similar thought but from a slightly different angle -- 
that allowing open core in the frontend would damage the 
effectiveness of the review process.  How can you restrict 
certain features to proprietary versions without having also a 
two-tier hierarchy of reviewers?  And would you be able to 
maintain the broader range of community review if some select, 
paid few had privileged review access?


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:00 PM, Joakim joa...@airpost.net wrote:

 On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:

 From a licensing perspective, the only part of the source that can be
closed off is the DMD backend.  Any optimisation fixes in the DMD backend
does not affect GDC/LDC.

 This is flat wrong. I suggest you read the Artistic license; it was
chosen for a reason, i.e. it allows closing of source as long as you provide
the original, unmodified binaries with any modified binaries.  I suspect
optimization fixes will be in both the frontend and backend.


Code generation is in the back end, so the answer to that is simply 'no'.

 You should try reading The Cathedral and the Bazaar if you don't
understand why an open approach to development has caused the D programming
language to grow tenfold over the last year or so.

 If you still don't understand, read it again ad infinitum.

 Never read it but I have corresponded with the author, and I found him to
be as religious about pure open source as Stallman is about the GPL.  I
suggest you try examining why D is still such a niche language even with
tenfold growth.  If you're not sure why, I suggest you look at the
examples and reasons I've given, as to why closed source and hybrid models
do much better.


Then you should read it, as the 'cathedral' in question was GCC - a project
started by Stallman. :)

 Think I might just point out that GDC had SIMD support before DMD. And
that Remedy used GDC to get their D development off the ground.  It was
features such as UDAs, along with many language bug fixes that were only
available in DMD development that caused them to switch over.

 In other words, they needed a faster turnaround for bugs at the time
they were adopting D, however the D front-end in GDC stays pretty much
stable on the current release.

 Not sure what point you are trying to make, as both gdc and dmd are open
source.  I'm suggesting closing such patches, for a limited time.


Closing patches benefits no one.  And more to the point, you can't say
that two compilers implement the same language if both have different
language features.

 I see no reason why another upcoming project like D couldn't do the
same. :)


 You seem to be confusing D for an Operating System, Smartphone, or any
general consumer product.

 You seem to be confusing the dmd compiler to not be a piece of software,
just like the rest, or the many proprietary C++ compilers out there.


You seem to think when I say D I'm referring to dmd, or any other D
compiler out there.


 - The language implementation is open source. This allows anyone to take
the current front-end code - or even write their own clean-room
implementation from ground-up - and integrate it to their own backend X.

 Sort of.  The dmd frontend is open source, but the backend is not under
an open source license.  Someone can swap out the backend and go completely
closed, for example, using ldc (ldc used to have one or two GPL files,
those would obviously have to be removed).


The backend is not part of the D language implementation / specification
(for starters, it's not documented anywhere except as code).

 - The compiler itself is not associated with the development of the
language, so those who are owners of the copyright are free to do what they
want with their binary releases.

 - The development model of D on github has adopted a pull, review and
merge system, where any changes to the language or compiler do not go in
unless it goes through proper coding review and testing (thanks to the
wonderful auto-tester).  So your suggestion of an open core model has a
slight fallacy here in that any changes to the closed off compiler would
have to go through the same process to be accepted into the open one - and
it might even be rejected.

 I'm not sure why you think open core patches that are opened after a
time limit would be any more likely to be rejected from that review
process.  The only fallacy I see here is yours.


Where did I say that? I only invited you to speculate on what would happen
if a 'closed patch' got rejected.  This leads back to the point that you
can't call it a compiler for the D programming language if it deviates from
the specification / implementation.


 DMD - as in referring to the binary releases - can be closed / paid /
whatever it likes.

 The D Programming Language - as in the D front-end implementation - is
under a dual GPL/Artistic license and cannot be used by any closed source
product without said product releasing their copy of the front-end sources
also.  This means that your hybrid proposal only works for code that is
not under this license - eg: the DMD backend - which is not what the vast
majority of contributors actually submit patches for.

 Wrong, you have clearly not read the Artistic license.


I'll allow you to keep on thinking that for a while longer...

 If you strongly believe that a programming language can't be big (as in
1M users) without being 

Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Iain Buclaw
On Jun 26, 2013 9:50 PM, Joseph Rushton Wakeling 
joseph.wakel...@webdrake.net wrote:

 On Wednesday, 26 June 2013 at 19:26:37 UTC, Iain Buclaw wrote:

 I can't be bothered to read all points the both of you have mentioned
thus far, but I do hope to add a voice of reason to calm you down. ;)


 Quick, nurse, the screens!

 ... or perhaps, Someone throw a bucket of water over them? :-P



Don't call me Shirley...

 From a licensing perspective, the only part of the source that can be
closed off is the DMD backend.  Any optimisation fixes in the DMD backend
does not affect GDC/LDC.


 To be honest, I can't see the sales value of optimization fixes in the
DMD backend given that GDC and LDC already have such strong performance.
 The one strong motivation to use DMD over the other two compilers is (as
you describe) access to the bleeding edge of features, but I'd have thought
this will stop being an advantage in time as/when the frontend becomes a
genuinely plug-and-play component.


Sometimes it feels like achieving this is like trying to break down a brick
barrier with a shoelace.

 By the way, I hope you didn't feel I was trying to speak on behalf of GDC
-- wasn't my intention. :-)


I did, and it hurt.  :o)

 Having used closed source languages in the past, I strongly believe that
closed languages do not stimulate growth or adoption at all.  And where
adoption does occur, knowledge is kept within specialised groups.


 Last year I had the dubious privilege of having to work with MS Visual
Basic for a temporary job.  What was strikingly different from the various
open source languages was that although there was an extensive quantity of
documentation available from Microsoft, it was incredibly badly organized,
much of it was out of date, and there was no meaningful community support
that I could find.

 I got the job done, but I would surely have had a much easier experience
with any of the open source languages out there.  Suffice to say that the
only reason I used VB in this case was because it was an obligatory part of
the work -- I'd never use it by choice.


Yes, it's like trying to learn D, but the only reference you have of the
language is the grammar page, and an IDE which offers thousands of
auto-complete options for things that *sound* like what you want, but don't
compile when it comes to testing.  :o)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Sönke Ludwig

Am 26.06.2013 21:36, schrieb Andrej Mitrovic:

https://github.com/AndrejMitrovic/dlibgit

These are the D bindings to the libgit2 library. libgit2 is a
versatile git library which can read/write loose git object files,
parse commits, tags, and blobs, do tree traversals, and much more.

The dlibgit master branch is now based on the recent libgit2 v0.19.0
release. The previous bindings were based on 0.17.0, and there have
been many new features introduced since then.

Note: The D-based samples have not yet been updated to v0.19.0, but
I'll work on this in the coming days.

Note: I might also look into making this a dub-aware package, if
that's something people want.



Great to hear. I've been using dlibgit for some time, and actually I've 
already registered a fork with (partially) updated bindings for the 
master version of libgit2: http://registry.vibed.org/packages/dlibgit


Unfortunately I never got to finish it completely, which is why I didn't 
make a pull request yet. But anyway, since 0.19.0 now contains the 
latest features, I might as well drop my fork and point the registry to 
your repository.


You can take my package.json as a template:
https://github.com/s-ludwig/dlibgit/blob/master/package.json

It should probably get a "targetType": "none" field, since it's 
header-only, and the authors/copyright fields are missing.




Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
On 6/26/13, Sönke Ludwig slud...@outerproduct.org wrote:
 Great to hear. I've been using dlibgit since some time and actually I've
 already registered a fork with (partially) updated bindings for the
 master version of libgit2: http://registry.vibed.org/packages/dlibgit

Ah, didn't know that. For now you may want to hold on to that package
until I port the v0.17 samples to v0.19, to verify the new bindings
work properly.

Btw, the reason why I've moved everything under the git.c package is
because at some point I want to implement either a class or
struct-based D API around the C API, so it's easier to use from client
code.

The new D API will use modules such as git.branch, while the C-based
API will use git.c.branch.


Re: dlibgit updated to libgit2 v0.19.0

2013-06-26 Thread Andrej Mitrovic
On 6/26/13, Sönke Ludwig slud...@outerproduct.org wrote:
 I've been using dlibgit since some time

Btw, I'm curious what kind of work you've done using dlibgit (if it's
ok to ask)?

 I've already registered a fork with (partially) updated bindings for the
 master version of libgit2: http://registry.vibed.org/packages/dlibgit

I saw some of your commits now. I'm happy to see that we no longer
need bitfields in v0.19.0, and it seems most of the inline functions
in libgit2 are gone, making porting easier. Those libgit devs are
doing a great job.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 19:01:42 UTC, Joakim wrote:
Why are they guaranteed such patches?  They have advantages 
because they use different compiler backends.  If they think 
their backends are so great, let them implement their own 
optimizations and compete.


I could respond at greater length, but I think the substantial 
flaws of your point of view are exposed in this single paragraph. 
GDC and LDC aren't competitors; they are valuable collaborators.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Joseph Rushton Wakeling

On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:

Don't call me Shirley...


Serious? :-)

By the way, I hope you didn't feel I was trying to speak on 
behalf of GDC -- wasn't my intention. :-)


I did, and it hurt.  :o)


Oh no.  50 shades of #DD ? :-)


Announcing bottom-up-build - a build system for C/C++/D

2013-06-26 Thread Graham St Jack
Bottom-up-build (bub) is a build system written in D which supports 
building of large C/C++/D projects. It works fine on Linux, with a 
Windows port nearly completed. It should work on OS-X, but I haven't 
tested it there. 

Bub is hosted on https://github.com/GrahamStJack/bottom-up-build.


Some of bub's features that are useful on large projects are:

Built files are located outside the source directory, using a different 
build directory for (say) debug, release, profile, etc.

Very simple configuration files, making the build infrastructure easy to 
maintain.

Automatic deduction of which libraries to link with.

Automatic execution and evaluation of tests.

Enforcement of dependency control, with prevention of circularities 
between modules and directories.

Generated files are not scanned for imports/includes until after they are 
up to date. This is a real enabler for code generation.

Files in the build directory that should not be there are automatically 
deleted. It is surprising how often a left-over build artifact can make 
you think that something works, only to discover your mistake after a 
commit. This feature eliminates that problem.

The dependency graph is accurate, maximising opportunity for multiple 
build jobs and thus speeding up builds significantly.


An early precursor to bub was developed to use on a large C++ project 
that had complex dependencies and used a lot of code generation. Bub is a 
major rewrite designed to be more general-purpose.

The positive effect of the bub precursor on the project was very 
significant. Examples of positive consequences are:

Well-defined dependencies and elimination of circularities changed the 
design so that implementation and testing proceeded from the bottom up.

Paying attention to dependencies eliminated many unnecessary ones, 
resulting in a substantial increase in the reusability of code. This was 
instrumental in changing the way subsequent projects were designed, so 
that they took advantage of the large (and growing) body of reusable code.
The reusable code improved in design and quality with each project that 
used it.

Tests were compiled, linked and executed very early in the build - 
typically immediately after the code under test. This meant that 
regressions were usually detected within a few seconds of initiating a 
build. This was transformative to work rate, and willingness to make 
sweeping changes.

Doing a clean is hardly ever necessary. This is important because it 
dramatically reduces the total amount of time that builds take, which 
matters on a large project (especially C++).

Having a build system that works with both C++ and D meant that it was 
easy to slip some D code into the project. Initially as scripts, then 
as utilities, and so on. Having side-by-side comparisons of D against 
bash scripts and C++ modules had the effect of turning almost all the 
other team members into D advocates.


Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu

2013-06-26 Thread Mathias Lang
I've read (almost) everything, so I hope I won't miss a point here:
a) I've heard about MSVC, Red Hat, Qt, Linux and so on. From my
understanding, none of the projects mentioned have gone from free (as in
free beer) to hybrid/closed. And I'm not currently able to think of one
successful, widespread project that did.
b) Thinking that being free (as in beer and/or as in freedom), hybrid, or
closed source is a single criterion of success seems foolish. I'm not
asking for a complete comparison (I think my mailbox won't stand it ;-) ),
but please stop comparing a free operating system with a paid compiler and
assuming the former has more users than the latter because it's free (and
vice versa). In addition, I don't see the logic behind comparing something
born in the 90s with something from the 2000s. Remember the dot-com bubble?
c) There are other ways to get more people involved; for example, if
dlang.org becomes a foundation (see related thread), we would be able to
apply for GSoC.
d) People pay for something they need. They don't adopt something because
they can pay for it. That's why paid compilers must follow language
promotion, not the other way around.


2013/6/27 Joseph Rushton Wakeling joseph.wakel...@webdrake.net

 On Wednesday, 26 June 2013 at 21:29:12 UTC, Iain Buclaw wrote:

 Don't call me Shirley...


 Serious? :-)

  By the way, I hope you didn't feel I was trying to speak on behalf of GDC
 -- wasn't my intention. :-)


 I did, and it hurt.  :o)


 Oh no.  50 shades of #DD ? :-)



Re: Announcing bottom-up-build - a build system for C/C++/D

2013-06-26 Thread Rob T
This build system seems to be very well suited for building 
complex large projects in a sensible way.


I successfully tested the example build on Debian linux. I will 
definitely explore this further using one of my own projects.


One issue I immediately ran into is that when I run bub incorrectly 
it hangs after writing the bail message to the console. Ctrl-C does 
not kill it, and I have to run a process kill command to terminate it.


Seems it gets stuck in doBailer() while(true) loop, but I only 
glanced at the source quickly before posting back here.


--rt


Re: Opinions on DConf talks

2013-06-26 Thread Joakim

On Wednesday, 26 June 2013 at 03:22:16 UTC, Manu wrote:
I guess, in summary, sorry you were underwhelmed/disappointed. 
To be
honest, I was too, I'd hoped I could offer more. I think a lot 
of other
people did too... but maybe next year there will be another one 
with an

additional year's practical experience...? :)
No need to apologize or defend your talk.  I was simply expecting 
a talk about "Using D Alongside a Game Engine", not "Integrating 
D into an Existing C++ Game Engine". ;) Your talk was a nice 
technical introduction to the latter; I'm sure it was very useful 
for those wondering about the potential pitfalls of integrating 
with C++ and it was kind of amazing all the hoops you jumped 
through.  The last part of your talk, where you talked about 
actual D use, was what I was looking forward to the whole talk 
being about.  Maybe next year, as you say. :)


Re: What features of D are you using now which you thought you'd never goint to use?

2013-06-26 Thread monarch_dodra

On Tuesday, 25 June 2013 at 22:39:07 UTC, Jonathan M Davis wrote:

On Wednesday, June 26, 2013 00:11:29 Timon Gehr wrote:

On 06/25/2013 10:37 PM, Jonathan M Davis wrote:
 On Tuesday, June 25, 2013 21:42:17 Timon Gehr wrote:
Take will check the wrapped range's 'empty' repeatedly. 
takeExactly does not need to do that at all.

It only does that with assertions. ...


https://github.com/D-Programming-Language/phobos/blob/master/std/range.d#L2648


Clearly, I missed that. Well, it is true that takeExactly 
avoids calling empty
on the range that it's wrapping in its own empty function, but 
it's pretty
rare that a wrapper range doesn't call the wrapped empty in its 
own empty
function. So, I'd tend to view that as on optimization on 
takeExactly's part

rather than a deficiency on take's part.

Regardless, it's definitely more efficient to use takeExactly 
when you can. The
_only_ benefit to take over takeExactly (assuming that the 
propagation issue is
fixed) is that you don't have to guarantee that the range 
you're passing it has
enough elements. If you know that it does, then takeExactly is 
better.


- Jonathan M Davis


In regards to Take/TakeExactly, it might be best to implement 
both as a common struct? eg:


private struct TakeImplementation(bool Exactly = false)
{...}
alias Take = TakeImplementation!false;
private alias TakeExactly = TakeImplementation!true; 
//TakeExactly is obscure


I mean, at the end of the day, except for pop, they are 
basically the same functions... I think it would be better to 
have a few static ifs in select locations, rather than 
duplicating everything with subtle differences...


Re: What features of D are you using now which you thought you'd never goint to use?

2013-06-26 Thread Jonathan M Davis
On Wednesday, June 26, 2013 08:30:20 monarch_dodra wrote:
 In regards to Take/TakeExactly, it might be best to implement
 both as a common struct? eg:
 
 private struct TakeImplementation(bool Exactly = false)
 {...}
 alias Take = TakeImplementation!false;
 private alias TakeExactly = TakeImplementation!true;
 //TakeExactly is obscure
 
 I mean, at the end of the day, except for pop, they are
 basically the same functions... I think it would be better to
 have a few static ifs in select locations, rather than
 duplicating everything with subtle differences...

In almost all cases, takeExactly _does_ forward to take. The only cases when 
it doesn't are when you call takeExactly on the result of takeExactly (so you 
get the same type you had before) and when you call takeExactly on a finite 
range without length, in which case, it _can't_ be Take, because it functions 
differently (in particular, with takeExactly, it doesn't bother to check 
whether the source range is running out of elements, because it assumes that 
it has enough, which take doesn't do). And it's not like much gets duplicated 
with the struct that takeExactly defines in that one case. It's a very small 
struct that does very little. I really don't think that you'd buy anything by 
trying to combine that struct with Take. If anything, I think that it would 
complicate things unnecessarily.

At this point, I think that take and takeExactly share as much implementation 
as makes sense, and that's already most of the implementation. So, I really 
don't think that there's a problem here that needs fixing (aside from 
propagating the two traits that aren't currently being propagated in 
takeExactly like they should be).

- Jonathan M Davis
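[Editor's note: for illustration, a small sketch of the behavioral difference discussed above, using only standard std.range functions; the exact internal checks are as described in the thread, not shown here.]

```d
import std.range : take, takeExactly;
import std.algorithm : equal;

void main()
{
    auto r = [1, 2, 3, 4, 5];

    // take is safe even when the source may be shorter than requested:
    // it simply stops at the end of the range.
    assert(r.take(3).equal([1, 2, 3]));
    assert(r.take(10).equal(r));

    // takeExactly assumes the source has at least n elements, so it
    // can skip the underlying empty checks and report length directly.
    auto t = r.takeExactly(3);
    assert(t.length == 3);
    assert(t.equal([1, 2, 3]));
}
```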


Re: top time wasters in DMD, as reported by gprof

2013-06-26 Thread John Colvin

On Monday, 24 June 2013 at 18:01:11 UTC, Walter Bright wrote:

On 6/24/2013 6:19 AM, dennis luehring wrote:

how does that look using MSVC to compile the dmd compiler, as it 
turns out that MSVC makes dmd much faster?


The profile report was done by gcc/gprof.

And besides, better compilers shouldn't change profile results.


I'm confused. Different optimisers often produce radically 
different profile results. Or am I misunderstanding you?


Re: Anybody using D's COM objects?

2013-06-26 Thread Paulo Pinto

On Tuesday, 25 June 2013 at 19:22:05 UTC, Adam Wilson wrote:
On Tue, 25 Jun 2013 11:29:00 -0700, Walter Bright 
newshou...@digitalmars.com wrote:



Any projects using AddRef() and Release()?


If you want to work against the new Windows Runtime, you'll 
need COM and the additional interface IInspectable.


Or any new Win32 APIs since Windows XP, as most of them are 
actually COM based.


--
Paulo


Re: Anybody using D's COM objects?

2013-06-26 Thread Sönke Ludwig
Am 25.06.2013 20:29, schrieb Walter Bright:
 Any projects using AddRef() and Release()?

I'm currently using it for Direct2D and Direct3D 9/10/11. Also, I have
an MIDL -> D translator for WinRT and plan to make a language
projection for it as well.


Re: fun project - improving calcHash

2013-06-26 Thread Kagamin
In the case of high memory usage the input string is unlikely to 
be in cache, so maybe it's better to optimize for cache misses 
instead of computation speed.


Re: Opinions on DConf talks

2013-06-26 Thread deadalnix

On Wednesday, 26 June 2013 at 01:38:26 UTC, Walter Bright wrote:

On 6/25/2013 5:40 PM, Manu wrote:

Believe it or not, I'm actually a friendly guy! ...or at
least, I like to think so... ;)


I can vouch that Manu is a friendly guy!


You may think so, but he is just an hypocrite :P


Re: Opinions on DConf talks

2013-06-26 Thread Walter Bright

On 6/26/2013 2:46 AM, deadalnix wrote:

You may think so, but he is just an hypocrite :P


That's out of line here.


Re: Opinions on DConf talks

2013-06-26 Thread deadalnix

On Wednesday, 26 June 2013 at 10:06:19 UTC, Walter Bright wrote:

On 6/26/2013 2:46 AM, deadalnix wrote:

You may think so, but he is just an hypocrite :P


That's out of line here.


The smiley isn't there randomly. And frankly, I really like 
Manu's style, he is intellectually stimulating.


Re: why allocators are not discussed here

2013-06-26 Thread Robert Schadek
On 06/26/2013 12:50 AM, Adam D. Ruppe wrote:
 On Tuesday, 25 June 2013 at 22:22:09 UTC, cybervadim wrote:
 (introducing a new keyword allocator)

 It would be easier to just pass an allocator object that provides the
 necessary methods and don't use new at all. (I kinda wish new wasn't
 in the language. It'd make this a little more consistent.)


I did think about this as well, but then I came up with something that
IMHO is even simpler.

Imagine we have two delegates:

void* delegate(size_t);  // this one allocs
void delegate(void*);// this one frees

You pass both to a function that constructs your object. The first is
used for allocating the memory; the second gets attached to the TypeInfo
and is used by the GC to free the object. This would be completely
transparent to the user.

The use in a container is similar. Just use the alloc delegate to
construct the objects and
attach the free delegate to the typeinfo. You could even mix allocator
strategies in the middle
of the lifetime of the container.
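[Editor's note: a minimal sketch of the two-delegate idea above. All names are made up; the TypeInfo attachment is elided, so the free delegate is simply invoked by the caller here.]

```d
import core.stdc.stdlib : malloc, free;
import std.stdio;

// Hypothetical helper: the allocation delegate supplies raw memory.
// In the proposal the free delegate would be attached to the TypeInfo
// so the GC can call it; here the caller invokes it manually when done.
T* make(T)(void* delegate(size_t) allocFn, T init)
{
    auto p = cast(T*) allocFn(T.sizeof);
    *p = init;
    return p;
}

struct Point { int x, y; }

void main()
{
    auto allocFn = delegate void*(size_t n) { return malloc(n); };
    auto freeFn  = delegate void(void* p) { free(p); };

    auto p = make(allocFn, Point(3, 4));
    writeln(*p); // Point(3, 4)
    freeFn(p);
}
```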



Re: Opinions on DConf talks

2013-06-26 Thread Iain Buclaw
On 26 June 2013 10:46, deadalnix deadal...@gmail.com wrote:
 On Wednesday, 26 June 2013 at 01:38:26 UTC, Walter Bright wrote:

 On 6/25/2013 5:40 PM, Manu wrote:

 Believe it or not, I'm actually a friendly guy! ...or at
 least, I like to think so... ;)


 I can vouch that Manu is a friendly guy!


 You may think so, but he is just an hypocrite :P

grammar an hypocrite ??? /nazi


Manu's a lovable hippy, and I can vouch for that, having shared a small hotel
room with him (though I have bias because I'm a technological hippy
also ;)


--
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: why allocators are not discussed here

2013-06-26 Thread Jacob Carlborg

On 2013-06-26 01:16, Adam D. Ruppe wrote:


You'd want it to be RAII or delegate based, so the scope is clear.

with_allocator(my_alloc, {
  do whatever here
});


or

{
ChangeAllocator!my_alloc dummy;

do whatever here
} // dummy's destructor ends the allocator scope


I think the former is a bit nicer, since the dummy variable is a bit
silly. We'd hope that delegate can be inlined.


It won't be inlined. You would need to make it a template parameter to 
have it inlined.


--
/Jacob Carlborg
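[Editor's note: a sketch of what Jacob suggests above: passing the scope body as an alias template parameter instead of a runtime delegate, so each call site gets its own instantiation and the body can be inlined. withAllocator and the allocator handling are hypothetical.]

```d
// Hypothetical withAllocator: the body is an alias parameter rather
// than a delegate argument, which the compiler can inline.
void withAllocator(alias body_, Alloc)(Alloc alloc)
{
    // ... install `alloc` as the current allocator here ...
    body_();
    // ... restore the previous allocator here ...
}

struct MyAlloc {} // stand-in allocator type

void main()
{
    int calls;
    withAllocator!(() { ++calls; })(MyAlloc());
    assert(calls == 1);
}
```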


Re: Anybody using D's COM objects?

2013-06-26 Thread Jacob Carlborg

On 2013-06-25 23:06, Walter Bright wrote:


I emailed you some information about this today - if you didn't get it,
please email me your correct email address. d...@me.com looks fake :-)


I have received your emails. Sometimes it's a good idea to have an email 
address which does not contain your real name; this is obviously not one 
of those cases.


--
/Jacob Carlborg


Re: D repl

2013-06-26 Thread bearophile
It looks very nice. I like the interactive shells in Python and 
Haskell. Even languages like Scala enjoy one. The importance of a 
good REPL can't be overstated. I'd like a good REPL in the 
standard D distribution (even though installation with dub is 
easy).


Notes:
- Regarding the input and output lines, I suggest taking a look 
at Mathematica and Sage. I think it's better to give numbers only 
to the inputs (or separate numbers to inputs and outputs), and to 
denote inputs and outputs differently. So there's no need for the 
"=".

- print stringNums: very nice.
- .map!(a => a.to!string) and .map!(a => S(a)) can also be 
written like this:


import std.stdio, std.algorithm, std.range, std.conv;
void main() {
10.iota.map!text.writeln;
static struct S { int x; }
10.iota.map!S.writeln;
}

- is "type x" the same as "print typeof(x)"?
- line 29: foreach that prints the last result of the iteration: 
it's interesting.


I have followed the instructions:

git clone https://github.com/callumenator/dabble
cd dabble
dub build --config=console

But the compilation stops with the errors:

Running dmd (compile)...
...\dub\packages\pegged-master\pegged\dynamic\grammar.d(245): 
Error: not a property eps
...\dub\packages\pegged-master\pegged\dynamic\grammar.d(418): 
Error: not a property fail



I think DUB should print _where_ it copies files, and it should 
use a less hard-to-find place to store them. (An optional idea 
is to store, inside dub's own directory, a link to the directory 
where dub stores those files.)


Bye,
bearophile


Re: Opinions on DConf talks

2013-06-26 Thread Kagamin

On Tuesday, 25 June 2013 at 19:38:04 UTC, MattCoder wrote:
But one little thing that comes to mind now is: do we really need 
this type of conference when we live in the Internet era?


I believe conferences privatize information. DConf is not half 
bad, but there are much worse cases. Video is a low-quality 
medium for delivering technical information; in some cases it's 
completely inaccessible. Well, if it's not supposed to share 
information, then OK, but usually it's perceived in a different 
way.


Re: why allocators are not discussed here

2013-06-26 Thread Jason House
Bloomberg released an STL alternative called BSL which contains 
an alternate allocator model. In a nutshell, objects supporting 
custom allocators can optionally take an allocator pointer as an 
argument. Containers save the pointer and use it for all their 
allocations. It seems simple enough and does not embed the 
allocator in the type.


https://github.com/bloomberg/bsl/wiki/BDE-Allocator-model
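
For illustration, a minimal sketch of that model in D (the names 
Allocator and Vec are hypothetical, not the actual BSL API): the 
allocator is passed as a runtime reference and stored in the 
container, so two containers with different allocators still 
share one type.

```d
// Sketch of a BSL-style allocator model; names are hypothetical.
interface Allocator
{
    void[] allocate(size_t bytes);
    void deallocate(void[] block);
}

struct Vec(T)
{
    private Allocator alloc; // null could mean "use the default"
    private T[] data;

    this(Allocator a) { alloc = a; }

    void push(T x)
    {
        // A real container would grow its storage via
        // alloc.allocate / alloc.deallocate; elided here for brevity.
        data ~= x;
    }
}
```

Note that Vec!int built with a malloc-backed allocator and one 
built with a GC-backed allocator have the same type, unlike C++'s 
vector<int, A>, which bakes the allocator into the type.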

On Tuesday, 25 June 2013 at 22:22:09 UTC, cybervadim wrote:
I know Andrei mentioned he was going to work on Allocators a 
year ago. In DConf 2013 he described the problems he needs to 
solve with Allocators. But I wonder if I am missing the 
discussion around that - I tried searching this forum and found 
a few threads that were not actually a brainstorm for Allocators 
design.


Please point me in the right direction
or
is there a reason it is not discussed
or
should we open the discussion?


The easiest approach to Allocators design I can imagine would 
be to let the user specify which Allocator operator new should 
get its memory from (introducing a new keyword, allocator). This 
gives total control, but assumes the user knows what he is 
doing.


Example:

CustomAllocator ca;
allocator(ca) {
  auto a = new A; // operator new will use 
ScopeAllocator::malloc()

  auto b = new B;

  free(a); // that should call ScopeAllocator::free()
  // if free() is missing for an allocated area, it is the user's
  // responsibility to make sure the custom Allocator can handle that

}

By default the allocator is the druntime one using the GC, and 
free(a) does nothing for it.



If some library defines its own allocator (e.g. a specialized 
container), there should be the ability to:

1. override the allocator
2. get access to the allocator used

I understand that I spent only 5 minutes thinking about the way 
Allocators may look.
My point is: if somebody is working on it, can you please share 
your ideas?


Re: why allocators are not discussed here

2013-06-26 Thread cybervadim

On Wednesday, 26 June 2013 at 13:16:25 UTC, Jason House wrote:
Bloomberg released an STL alternative called BSL which contains 
an alternate allocator model. In a nutshell, objects supporting 
custom allocators can optionally take an allocator pointer as an 
argument. Containers save the pointer and use it for all their 
allocations. It seems simple enough and does not embed the 
allocator in the type.


https://github.com/bloomberg/bsl/wiki/BDE-Allocator-model


I think the problem with such an approach is that you have to 
maniacally add custom-allocator support to every class if you 
want it to live on a custom allocator.
If we were simply able to say "all memory allocated in this area 
{} should use my custom allocator", that would simplify the code, 
with no need to change the standard library.
The next step is to notify the allocator when the memory should 
be released, but for a stack-based allocator that is not 
required.
Moreover, if we introduce access to different GCs (e.g. 
mark-and-sweep, semi-copy, ref-counted), we should be able to say 
"this {} piece of code is temporary, so use the semi-copy GC; the 
other code is long-lived and does not create many objects, so use 
the ref-counted one". That is, it is all runtime support, with no 
need to change library code.


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 14:03, Robert Schadek wrote:

On 06/26/2013 12:50 AM, Adam D. Ruppe wrote:

On Tuesday, 25 June 2013 at 22:22:09 UTC, cybervadim wrote:

(introducing a new keyword allocator)


It would be easier to just pass an allocator object that provides the
necessary methods and don't use new at all. (I kinda wish new wasn't
in the language. It'd make this a little more consistent.)



I did think about this as well, but then I came up with something 
that IMHO is even simpler.

Imagine we have two delegates:

void* delegate(size_t);  // this one allocates
void delegate(void*);    // this one frees

You pass both to a function that constructs your object. The 
first is used for allocating the memory; the second gets attached 
to the TypeInfo and is used by the GC to free the object.


Then it's just GC but with an extra complication.


This would be completely transparent to the user.

The use in a container is similar. Just use the alloc delegate to
construct the objects and
attach the free delegate to the typeinfo. You could even mix allocator
strategies in the middle
of the lifetime of the container.




--
Dmitry Olshansky


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 02:22, cybervadim wrote:

I know Andrei mentioned he was going to work on Allocators a year ago.
In DConf 2013 he described the problems he needs to solve with
Allocators. But I wonder if I am missing the discussion around that - I
tried searching this forum and found a few threads that were not
actually a brainstorm for Allocators design.

Please point me in the right direction
or
is there a reason it is not discussed
or
should we open the discussion?


The easiest approach to Allocators design I can imagine would be to let
the user specify which Allocator operator new should get its memory from
(introducing a new keyword, allocator). This gives total control, but
assumes the user knows what he is doing.

Example:

CustomAllocator ca;
allocator(ca) {
   auto a = new A; // operator new will use ScopeAllocator::malloc()
   auto b = new B;

   free(a); // that should call ScopeAllocator::free()
   // if free() is missing for allocated area, it is a user
responsibility to make sure custom Allocator can handle that
}


Awful. What has that extra syntax bought you, except that now new 
is unsafe by design?
Other questions include how this allocation scope propagates into 
functions, and what the mechanism is for passing it up and down 
the call stack.


Last but not least, I fail to see how scoped allocators alone (as 
presented) solve even half of the problem.


--
Dmitry Olshansky


Re: why allocators are not discussed here

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 04:10:49PM +0200, cybervadim wrote:
 On Wednesday, 26 June 2013 at 13:16:25 UTC, Jason House wrote:
 Bloomberg released an STL alternative called BSL which contains an
 alternate allocator model. In a nutshell object supporting custom
 allocators can optionally take an allocator pointer as an
 argument. Containers will save the pointer and use it for all
 their allocations. It seems simple enough and does not embed the
 allocator in the type.
 
 https://github.com/bloomberg/bsl/wiki/BDE-Allocator-model
 
 I think the problem with such approach is that you have to
 maniacally add support for custom allocator to every class if you
 want them to be on a custom allocator.

Yeah, that's a major inconvenience with the C++ allocator model. There's
no way to say switch to allocator A within this block of code; if
you're given a binary-only library that doesn't support allocators,
you're out of luck. And even if you have the source code, you have to
manually modify every single line of code that performs allocation to
take an additional parameter -- not a very feasible approach.


 If we simply able to say - all memory allocated in this area {}
 should use my custom allocator, that would simplify the code and no
 need to change std lib.
 The next step is to notify allocator when the memory should be
 released. But for the stack based allocator that is not required.
 More over, if we introduce access to different GCs (e.g.
 mark-n-sweep, semi-copy, ref counted), we should be able to say this
 {} piece of code is my temporary, so use semi-copy GC, the other
 code is long lived and not much objects created, so use ref counted.
 That is, it is all runtime support and no need changing the library
 code.

Yeah, I think the best approach would be one that doesn't require
changing a whole mass of code to support. Also, one that doesn't require
language changes would be far more likely to be accepted, as the core D
devs are leery of adding yet more complications to the language.

That's why I proposed that gc_alloc and gc_free be made into
thread-global function pointers, that can be swapped with a custom
allocator's version. This doesn't have to be visible to user code; it
can just be an implementation detail in std.allocator, for example. It
allows us to implement custom allocators across a block of code that
doesn't know (and doesn't need to know) what allocator will be used.


T

-- 
Fact is stranger than fiction.


Re: why allocators are not discussed here

2013-06-26 Thread cybervadim
On Wednesday, 26 June 2013 at 14:17:03 UTC, Dmitry Olshansky 
wrote:
Awful. What has that extra syntax bought you, except that now 
new is unsafe by design?
Other questions include how this allocation scope propagates 
into functions, and what the mechanism is for passing it up and 
down the call stack.


Last but not least, I fail to see how scoped allocators alone 
(as presented) solve even half of the problem.


The extra syntax allows me to avoid touching the existing code.
Imagine you have stateless event processing. An event comes in, 
you do some calculation, prepare the answer and send it back. It 
will look like:


void onEvent(Event event)
{
   process();
}

Because it is stateless, you know all the memory allocated during 
processing will not be required afterwards. So the syntax I 
suggested requires very little change to the code. process() may 
be implemented using the standard library, doing several news and 
resizings.


With new syntax:


void onEvent(Event event)
{
   ScopedAllocator alloc;
   allocator(alloc) {
 process();
   }
}

So now you do not use the GC for anything created inside 
process(). ScopedAllocator is a simple stack that frees all 
memory in one go.


It is up to the runtime implementation to make sure all memory 
allocated inside the allocator{} scope is actually allocated 
using the ScopedAllocator and not the GC.


Does it make sense?


Re: why allocators are not discussed here

2013-06-26 Thread Robert Schadek

 Imagine we have two delegates:

 void* delegate(size_t);  // this one allocates
 void delegate(void*);    // this one frees

 You pass both to a function that constructs your object. The first is
 used for allocating the memory; the second gets attached to the
 TypeInfo and is used by the GC to free the object.

 Then it's just GC but with an extra complication.

IMHO, not really, as the place you get the memory from is not 
managed by the GC, or at least not directly. The GC algorithm 
would see that there is a free delegate attached to the object 
and would use it to free the memory.

The same should hold true for calling GC.free.

Or are you talking about ref counting and such?
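
A rough sketch of the two-delegate scheme being discussed (the 
make helper and the Allocated wrapper are my own illustrative 
names; in the actual proposal the free delegate would be attached 
to the TypeInfo rather than carried alongside the pointer):

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical handle: the object plus the delegate that knows how
// to free it. The proposal would store `release` in the TypeInfo.
struct Allocated(T)
{
    T* ptr;
    void delegate(void*) release;
}

// Hypothetical factory: allocates with `alloc`, remembers `release`
// so the matching deallocator is always the one used.
Allocated!T make(T)(void* delegate(size_t) alloc,
                    void delegate(void*) release)
{
    auto mem = cast(T*) alloc(T.sizeof);
    *mem = T.init; // initialize the freshly allocated object
    return Allocated!T(mem, release);
}

void main()
{
    auto obj = make!int(delegate(size_t n) => malloc(n),
                        delegate(void* p) => free(p));
    *obj.ptr = 42;
    obj.release(obj.ptr); // freed with the matching deallocator
}
```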


Re: why allocators are not discussed here

2013-06-26 Thread cybervadim

On Wednesday, 26 June 2013 at 14:26:03 UTC, H. S. Teoh wrote:
Yeah, I think the best approach would be one that doesn't 
require changing a whole mass of code to support. Also, one that 
doesn't require language changes would be far more likely to be 
accepted, as the core D devs are leery of adding yet more 
complications to the language.

That's why I proposed that gc_alloc and gc_free be made into 
thread-global function pointers, that can be swapped with a 
custom allocator's version. This doesn't have to be visible to 
user code; it can just be an implementation detail in 
std.allocator, for example. It allows us to implement custom 
allocators across a block of code that doesn't know (and doesn't 
need to know) what allocator will be used.




Yes, being able to change gc_alloc and gc_free would do the job. 
If the runtime remembered the stack of gc_alloc/gc_free 
functions, like pushd/popd, that would simplify its usage.

I think this is a very nice and simple solution to the problem.
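
A minimal sketch of that pushd/popd idea (all of these names are 
made up; druntime has no such API):

```d
// Hypothetical allocator stack; not an actual druntime API.
alias AllocFn = void* function(size_t);
alias FreeFn  = void  function(void*);

struct AllocPair { AllocFn alloc; FreeFn free_; }

// Module-level variables are thread-local by default in D, which is
// exactly the visibility the thread-global hook proposal wants.
AllocPair[] allocStack;

void pushAllocator(AllocFn a, FreeFn f)
{
    // After this, the runtime would route `new` through
    // allocStack[$ - 1].alloc.
    allocStack ~= AllocPair(a, f);
}

void popAllocator()
{
    // Restore whatever allocator was active before the last push.
    allocStack = allocStack[0 .. $ - 1];
}
```

The RAII wrapper H. S. Teoh sketches later in this thread is 
essentially this stack with push in a constructor and pop in a 
destructor.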



Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 18:27, cybervadim wrote:

On Wednesday, 26 June 2013 at 14:17:03 UTC, Dmitry Olshansky wrote:

Awful. What has that extra syntax bought you, except that now new is
unsafe by design?
Other questions include how this allocation scope propagates into
functions, and what the mechanism is for passing it up and down the
call stack.

Last but not least, I fail to see how scoped allocators alone (as
presented) solve even half of the problem.


The extra syntax allows me to avoid touching the existing code.
Imagine you have stateless event processing. An event comes in, you
do some calculation, prepare the answer and send it back. It will look
like:

void onEvent(Event event)
{
process();
}

Because it is stateless, you know all the memory allocated during
processing will not be required afterwards.


Here is the chief problem: the assumption that is required to 
make it magically work.


Now what I see is:

T arr[]; //TLS

//somewhere down the line
if (...)
arr = ... ;
else{
...
allocator(myAlloc){
arr = array(filter!);
}
...
}
return arr;

Having an unsafe magic wand that may transmogrify some code to 
switch allocation strategy is something I consider naive and 
dangerous.


Whoever told you process() returns before allocating a few gigs 
of RAM (while hoping for a GC collection)? Right, nobody. Maybe 
it's an event loop that may run forever.


What is missing is that code up to date assumes new == GC and 
works _like that_.



So the syntax I suggested
requires a very little change in code. process() may be implemented
using std lib, doing several news and resizing.

With new syntax:


void onEvent(Event event)
{
ScopedAllocator alloc;
allocator(alloc) {
  process();
}
}

So now you do not use GC for all that is created inside the process().
ScopedAllocator is a simple stack that will free all memory in one go.

It is up to the runtime implementation to make sure all memory that is
allocated inside allocator{} scope is actually allocated using
ScopedAllocator and not GC.

Does it make sense?


Yes, but it's horribly broken.

--
Dmitry Olshansky


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 03:16, Adam D. Ruppe wrote:

On Tuesday, 25 June 2013 at 22:50:55 UTC, H. S. Teoh wrote:

And maybe (b) can be implemented by making gc_alloc / gc_free
overridable function pointers? Then we can override their values and
use scope guards to revert them back to the values they were before.


Yea, I was thinking this might be a way to go. You'd have a global
(well, thread-local) allocator instance that can be set and reset
through stack calls.

You'd want it to be RAII or delegate based, so the scope is clear.

with_allocator(my_alloc, {
  do whatever here
});


or

{
ChangeAllocator!my_alloc dummy;

do whatever here
} // dummy's destructor ends the allocator scope



Both suffer from
a) being totally unsafe and in fact bug-prone, since all 
references obtained in there are now dangling (and there is no 
indication where they came from);
b) imagine you need to use an allocator for a stateful object, 
say a forward range of some other ranges (e.g. std.regex), with 
both scoped/stacked allocators used for its internal stuff - the 
2nd approach may handle it, but not the 1st;
c) transfer of objects allocated differently up the call graph 
(scope graph?) is pretty much neglected, I see.


I kind of wonder how our knowledgeable community has come to 
this. (We must have been starving without allocators way too 
long.)


{
malloced_string str;
auto got = to!string(10, str);
} // str is out of scope, so it gets free()'d. unsafe though: if you
stored a copy of got somewhere, it is now a pointer to freed memory. I'd
kinda like language support of some sort to help mitigate that though,
like being a borrowed pointer that isn't allowed to be stored, but
that's another discussion.

In contrast, a 'container as an output range' works safely and 
would still be customizable.


IMHO the only place for allocators is in containers; other kinds 
of code may just ignore allocators completely.


std.algorithm and friends should IMHO be customized on 2 things 
only:

a) the containers to use (instead of arrays)
b) optionally a memory source (or allocator) if the container is 
temporary (scoped), to tie its lifetime to something.


Want temporary stuff? Use temporary arrays, hashmaps and 
whatnot, i.e. types tailored for a particular use case (e.g. with 
a temporary/scoped allocator in mind).
These would all be unsafe though. The alternative is ref-counted 
pointers to an allocator. With the word on the street about ARC, 
that could be a nice direction to pursue.


Allocators (as Andrei points out in his video) come in many 
kinds:
a) persistence: infinite, manual, scoped
b) size: unlimited vs. fixed
c) block size: any, fixed, or *any* up to some maximum size

Most of these are NOT interchangeable!
Some of them are composable; however, I'd argue that allocators 
themselves are not composable but have some reusable parts that 
in turn are.


Code would still have to cater to specific flavors of 
allocators, so we'd better reduce this problem to the selection 
of containers.


--
Dmitry Olshansky


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 05:24, Adam D. Ruppe wrote:

I was just quickly skimming some criticism of C++ allocators, since my
thought here is similar to what they do. On one hand, maybe D can do it
right by tweaking C++'s design rather than discarding it.



The criticisms are:

A) Allocators were defined not to have any state (as noted in the 
standard).
B) They are parametrized on a type T, yet a container 
parametrized on T may need to allocate something else completely 
(a node holding T).
C) Containers are parametrized on allocators, so e.g. 2 lists 
with different allocators are incompatible in the sense that you 
can't splice pieces of them together.


From the above, IMHO we can deduce that:
a) we should support stateful allocators, but make sure we don't 
pay storage space for stateless (global) ones, e.g. a mallocator;
b) allocators should preferably be typeless, letting the 
container define what it allocates;
c) is hardly solvable unless we require a way to reassign objects 
between allocators (at least of similar kinds).




Anyway, the bottom line is I don't think that criticism 
necessarily applies to D. But there are surely many other 
criticisms, and I'm more or less a n00b regarding C++'s 
allocators, so I don't know yet.



--
Dmitry Olshansky


Re: D repl

2013-06-26 Thread Sönke Ludwig

On 26.06.2013 14:19, bearophile wrote:
 Running dmd (compile)...
 ...\dub\packages\pegged-master\pegged\dynamic\grammar.d(245): Error: 
not a property eps
 ...\dub\packages\pegged-master\pegged\dynamic\grammar.d(418): Error: 
not a property fail


Do you have the latest version of DUB installed? It looks like 
-property was specified on the compiler command line (checkable 
with dub -v). This was removed recently, after Jonathan convinced 
me that it is worthless to support, since all recent 
property-related DIPs make parentheses optional and it seems 
pretty clear that this will be the way forward.


 I think DUB should print _where_ it copies files, and it should use a 
less hard to find place to stores them. (An optional idea is to store 
inside the directory of dub a link to the directory where dub stores 
those files).


I agree. You can look it up using "dub list-installed", but dub 
should also print that at installation time.




Re: Opinions on DConf talks

2013-06-26 Thread Andrei Alexandrescu

On 6/26/13 5:23 AM, Kagamin wrote:

On Tuesday, 25 June 2013 at 19:38:04 UTC, MattCoder wrote:

But one little thing that comes to mind now is: do we really need this
type of conference when we live in the Internet era?


I believe conferences privatize information. DConf is not half bad, but
there are much worse cases. Video is a low-quality medium for delivering
technical information; in some cases it's completely inaccessible. Well,
if it's not supposed to share information, then OK, but usually it's
perceived in a different way.


This all seems very odd to me.

Andrei


Re: Opinions on DConf talks

2013-06-26 Thread Ali Çehreli

On 06/26/2013 03:37 AM, Iain Buclaw wrote:

 On 26 June 2013 10:46, deadalnix deadal...@gmail.com wrote:

 You may think so, but he is just an hypocrite :P

 <grammar> an hypocrite ??? </nazi>

The French are exempt from that rule . ;)

Ali



Re: Opinions on DConf talks

2013-06-26 Thread Ali Çehreli

On 06/26/2013 03:15 AM, deadalnix wrote:

 On Wednesday, 26 June 2013 at 10:06:19 UTC, Walter Bright wrote:
 On 6/26/2013 2:46 AM, deadalnix wrote:
 You may think so, but he is just an hypocrite :P

 That's out of line here.

 The smiley isn't there randomly.

I think Walter got you there. ;)

Ali



Re: D repl

2013-06-26 Thread bearophile

Sönke Ludwig:


Do you have the latest version of DUB installed?


I installed it just now to try the D REPL. I installed the 
latest Windows version, precompiled binaries, from here (I think 
it's better to show only the latest versions in one table, and 
all the older versions in a different and less visible table):

http://registry.vibed.org/download


This was removed recently after Jonathan convinced me that it 
is worthless to support, since all recent property related DIPs 
make parenthesis optional and it seems pretty clear that this 
will be the way forward.


I stopped compiling my code with -property some months ago, 
although, like Jonathan, I kind of liked it :-)


Bye,
bearophile


Re: DIP42 - Add enum E(T) = expression; eponymous template support

2013-06-26 Thread Denis Shelomovskij

On 26.06.2013 1:31, Walter Bright wrote:

http://wiki.dlang.org/DIP42


What about enhancement 7364 [1] (from the discussion in [2])?

As we still have cases such as:
---
static if (...)
enum fullyQualifiedNameImplForTypes = ...;
else static if (...)
enum fullyQualifiedNameImplForTypes = ...;
else static if (...)
...
---
which would look better this way:
---
static if (...)
enum template = ...;
else static if (...)
enum template = ...;
else ...
---

Also note the current syntax is error-prone, as one can easily 
make a typo or copy-paste mistake which will lead to cryptic 
template errors.


[1] http://d.puremagic.com/issues/show_bug.cgi?id=7364
[2] http://forum.dlang.org/thread/jfh7po$3b$1...@digitalmars.com?page=1


--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: DIP42 - Add enum E(T) = expression; eponymous template support

2013-06-26 Thread Andrej Mitrovic
On 6/26/13, Denis Shelomovskij verylonglogin@gmail.com wrote:
 which will look better this way:
 ---
 static if (...)
  enum template = ...;
 else static if (...)
  enum template = ...;
 else ...
 ---

Yeah I agree, this is more important than DIP42's shortened syntax for
simple templates. It's the more complicated templates that are the
problem.


Re: why allocators are not discussed here

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 01:16:31AM +0200, Adam D. Ruppe wrote:
 On Tuesday, 25 June 2013 at 22:50:55 UTC, H. S. Teoh wrote:
 And maybe (b) can be implemented by making gc_alloc / gc_free
 overridable function pointers? Then we can override their values
 and use scope guards to revert them back to the values they were
 before.
 
 Yea, I was thinking this might be a way to go. You'd have a global
 (well, thread-local) allocator instance that can be set and reset
 through stack calls.
 
 You'd want it to be RAII or delegate based, so the scope is clear.
 
 with_allocator(my_alloc, {
  do whatever here
 });
 
 
 or
 
 {
ChangeAllocator!my_alloc dummy;
 
do whatever here
 } // dummy's destructor ends the allocator scope
 
 
 I think the former is a bit nicer, since the dummy variable is a bit
 silly. We'd hope that delegate can be inlined.

Actually, D's frontend leaves something to be desired when it comes to
inlining delegates. It *is* done sometimes, but not as often as one may
like. For example, opApply generally doesn't inline its delegate, even
when it's just a thin wrapper around a foreach loop.

But yeah, I think the former has nicer syntax. Maybe we can help the
compiler with inlining by making the delegate a compile-time parameter?
But it forces a switch of parameter order, which is Not Nice (hurts
readability 'cos the allocator argument comes after the block instead of
before).


 But, the template still has a big advantage: you can change the
 type. And I think that is potentially enormously useful.

True. It can use different types for different allocators that 
do (or don't) do cleanups at the end of the scope, depending on 
what the allocator needs to do.


 Another question is how to tie into output ranges. Take std.conv.to.
 
 auto s = to!string(10); // currently, this hits the gc
 
 What if I want it to go on a stack buffer? One option would be to
 rewrite it to use an output range, and then call it like:
 
 char[20] buffer;
 auto s = to!string(10, buffer); // it returns the slice of the
 buffer it actually used
 
 (and we can do overloads so to!string(10, radix) still works, as
 well as to!string(10, radix, buffer). Hassle, I know...)

I think supporting the multi-argument version of to!string() is a good
thing, but what to do with library code that calls to!string()? It'd be
nice if we could somehow redirect those GC calls without having to comb
through the entire Phobos codebase for stray calls to to!string().


[...]
 The fun part is the output range works for that, and could also work
 for something like this:
 
 struct malloced_string {
 char* ptr;
 size_t length;
 size_t capacity;
 void put(char c) {
  if(length >= capacity) {
  capacity = capacity ? capacity * 2 : 16;
  ptr = cast(char*) realloc(ptr, capacity);
  }
 ptr[length++] = c;
 }
 
 char[] slice() { return ptr[0 .. length]; }
 alias slice this;
 mixin RefCounted!this; // pretend this works
 }
 
 
 {
malloced_string str;
auto got = to!string(10, str);
 } // str is out of scope, so it gets free()'d. unsafe though: if you
 stored a copy of got somewhere, it is now a pointer to freed memory.
 I'd kinda like language support of some sort to help mitigate that
 though, like being a borrowed pointer that isn't allowed to be
 stored, but that's another discussion.

Nice!


 And that should work. So then what we might do is provide these
 little output range wrappers for various allocators, and use them on
 many functions.
 
 So we'd write:
 
 import std.allocators;
 import std.range;
 
 // mallocator is provided in std.allocators and offers the goods
 OutputRange!(char, mallocator) str;
 
 auto got = to!string(10, str);

I like this. However, it still doesn't address how to override the
default allocator in, say, Phobos functions.


 What's nice here is the output range is useful for more than just
 allocators. You could also to!string(10, my_file) or a delegate,
 blah blah blah. So it isn't too much of a burden, it is something
 you might naturally use anyway.

Now *that* is a very nice idea. I like having a way of bypassing using a
string buffer, and just writing the output directly to where it's
intended to go. I think to() with an output range parameter definitely
should be implemented. It doesn't address all of the issues, but it's a
very big first step IMO.


 Also, we may have the problem of the wrong allocator
 being used to free the object.
 
 Another reason why encoding the allocator into the type is so nice.
 For the minimal D I've been playing with, the idea I'm running with
 is all allocated memory has some kind of special type, and then
 naked pointers are always assumed to be borrowed, so you should
 never store or free them.

Interesting idea. So basically you can tell which allocator was used to
allocate an object just by looking at its type? That's not a bad idea,
actually.


 auto foo = HeapArray!char(capacity);
 
 void bar(char[] lol){}
 
 bar(foo); // allowed, foo has an alias this on slice
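
The encode-the-allocator-in-the-type idea could be sketched like 
this (all names here are hypothetical, invented for illustration):

```d
import core.stdc.stdlib : cMalloc = malloc, cFree = free;

// Hypothetical allocator with static alloc/free entry points.
struct Mallocator
{
    static T* alloc(T)() { return cast(T*) cMalloc(T.sizeof); }
    static void free(void* p) { cFree(p); }
}

// The allocator is a compile-time parameter of the handle type, so
// a malloc'd pointer and a GC pointer can never be confused: they
// simply have different types.
struct Allocated(T, Alloc)
{
    T* ptr;

    void release()
    {
        Alloc.free(ptr); // only the matching allocator frees it
        ptr = null;
    }
}
```

With this shape, passing an Allocated!(int, Mallocator) where a 
GC-allocated object is expected is a compile error rather than a 
latent wrong-deallocator bug.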


Re: why allocators are not discussed here

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 04:31:40PM +0200, cybervadim wrote:
 On Wednesday, 26 June 2013 at 14:26:03 UTC, H. S. Teoh wrote:
 Yeah, I think the best approach would be one that doesn't require
 changing a whole mass of code to support. Also, one that doesn't
 require language changes would be far more likely to be accepted, as
 the core D devs are leery of adding yet more complications to the
 language.
 
 That's why I proposed that gc_alloc and gc_free be made into
 thread-global function pointers, that can be swapped with a custom
 allocator's version. This doesn't have to be visible to user code; it
 can just be an implementation detail in std.allocator, for example.
 It allows us to implement custom allocators across a block of code
 that doesn't know (and doesn't need to know) what allocator will be
 used.
 
 
 Yes, being able to change gc_alloc, gc_free would do the work. If
 runtime  remembers the stack of gc_alloc/gc_free functions like pushd,
 popd, that would simplify its usage.  I think this is a very nice and
 simple solution to the problem.

Adam's idea does this: tie each replacement of gc_alloc/gc_free 
to a stack-based object that automatically cleans up in its dtor. 
So, something along these lines:

struct CustomAlloc(A) {
void* function(size_t size) old_alloc;
void  function(void* ptr)   old_free;

this(A alloc) {
old_alloc = gc_alloc;
old_free  = gc_free;

gc_alloc = A.alloc;
gc_free  = A.free;
}

~this() {
gc_alloc = old_alloc;
gc_free  = old_free;

// Cleans up, e.g., region allocator deletes the
// region
A.cleanup();
}
}

class C {}

void main() {
auto c = new C();   // allocates using default allocator 
(GC)
{
CustomAlloc!MyAllocator _;

// Everything from here on until end of block
// uses MyAllocator

auto d = new C();   // allocates using MyAllocator

{
CustomAlloc!AnotherAllocator _;
auto e = new C(); // allocates using 
AnotherAllocator

// End of scope: auto cleanup, gc_alloc and
// gc_free reverts back to MyAllocator
}

auto f = new C();   // allocates using MyAllocator

// End of scope: auto cleanup, gc_alloc and
// gc_free reverts back to default values
}
auto g = new C();   // allocates using default allocator
}


So you effectively have an allocator stack, and user code never has to
directly manipulate gc_alloc/gc_free (which would be dangerous).


T

-- 
Almost all proofs have bugs, but almost all theorems are true. -- Paul Pedersen


Re: why allocators are not discussed here

2013-06-26 Thread Dicebot
Some type system help is required to guarantee that references to 
such scope-allocated data won't escape.
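
As a sketch of the kind of type-system help meant here, D's scope 
storage class points in this direction (at the time it was mostly 
documentation; strict compiler enforcement came much later):

```d
// Sketch: `scope` marks a parameter that must not escape the call,
// which is exactly the guarantee scope-allocated data needs.
void useBuffer(scope char[] buf)
{
    buf[0] = 'x';   // using the data in place: fine
    // Escaping it - e.g. assigning buf to a global or returning
    // it - is what the type system would have to reject before
    // handing out references to scope-allocated memory is safe.
}
```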


Re: why allocators are not discussed here

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 06:51:54PM +0400, Dmitry Olshansky wrote:
 26-Jun-2013 03:16, Adam D. Ruppe пишет:
 On Tuesday, 25 June 2013 at 22:50:55 UTC, H. S. Teoh wrote:
 And maybe (b) can be implemented by making gc_alloc / gc_free
 overridable function pointers? Then we can override their values and
 use scope guards to revert them back to the values they were before.
 
 Yea, I was thinking this might be a way to go. You'd have a global
 (well, thread-local) allocator instance that can be set and reset
 through stack calls.
 
 You'd want it to be RAII or delegate based, so the scope is clear.
 
 with_allocator(my_alloc, {
   do whatever here
 });
 
 
 or
 
 {
 ChangeAllocator!my_alloc dummy;
 
 do whatever here
 } // dummy's destructor ends the allocator scope
 
 
 Both suffer from
 a) being totally unsafe and in fact bug prone since all references
 obtained in there are now dangling (and there is no indication where
 they came from)

How is this different from using malloc() and free() manually? You have
no indication of where a void* came from either, and the danger of
dangling references is very real, as any C/C++ coder knows. And I assume
that *some* people will want to be defining custom allocators that wrap
around malloc/free (e.g. the game engine guys who want total control).


 b) imagine you need to use an allocator for a stateful object. Say
 forward range of some other ranges (e.g. std.regex) both
 scoped/stacked to allocate its internal stuff. 2nd one may handle it
 but not the 1st one.

Yeah this is a complicated area. A container basically needs to know how
to allocate its elements. So somehow that information has to be
somewhere.


 c) transfer of objects allocated differently up the call graph
 (scope graph?), is pretty much neglected I see.

They're incompatible. You can't safely make a linked list that contains
both GC-allocated nodes and malloc() nodes. That's just a bomb waiting
to explode in your face. So in that sense, Adam's idea of using a
different type for differently-allocated objects makes sense. A
container has to declare what kind of allocation its members are using;
any other way is asking for trouble.


 I kind of wondering how our knowledgeable community has come to this.
 (must have been starving w/o allocators way too long)

We're just trying to provoke Andrei into responding. ;-)


[...]
 IMHO the only place for allocators is in containers; other kinds of
 code may just ignore allocators completely.

But some people clamoring for allocators are doing so because they're
bothered by Phobos using ~ for string concatenation, which implicitly
uses the GC. I don't think we can just ignore that.


 std.algorithm and friends should imho be customized on 2 things only:
 
 a) containers to use (instead of array)
 b) optionally a memory source (or allocator) if the container is
 temporary (scoped), to tie its lifetime to something.
 
 Want temporary stuff? Use temporary arrays, hashmaps and whatnot
 i.e. types tailored for a particular use case (e.g. with a
 temporary/scoped allocator in mind).
 These would all be unsafe though. An alternative is ref-counting
 pointers to an allocator. With word on the street about ARC it could
 be a nice direction to pursue.

Ref-counting is not fool-proof, though. There's always cycles to mess
things up.


 Allocators (as Andrei points out in his video) have many kinds:
 a) persistence: infinite, manual, scoped
 b) size: unlimited vs fixed
 c) block-size: any, fixed, or *any* up to some maximum size
 
 Most of these ARE NOT interchangeable!
 Yet some are composable; however I'd argue that allocators are not
 composable but have some reusable parts that in turn are composable.

I was listening to Andrei's talk this morning, but I didn't quite
understand what he means by composable allocators. Is he talking about
nesting, say, a GC inside a region allocated by a region allocator?


 Code would have to cutter for specific flavors of allocators still
 so we'd better reduce this problem to the selection of containers.
[...]

Hmm. Sounds like we have two conflicting things going on here:

1) En masse replacement of gc_alloc/gc_free in a certain block of code
(which may be the entire program), e.g., for the avoidance of GC in game
engines, etc.. Basically, the code is allocator-agnostic, but at some
higher level we want to control which allocator is being used.

2) Specific customization of containers, etc., as to which allocator(s)
should be used, with (hopefully) some kind of support from the type
system to prevent mistakes like dangling pointers, escaping references,
etc.. Here, the code is NOT allocator-agnostic; it has to be written
with the specific allocation model in mind. You can't just replace the
allocator with another one without introducing bugs or problems.

These two may interact in complex ways... e.g., you might want to use
malloc to allocate a pool, then use a custom gc_alloc/gc_free to
allocate from this pool in order to support language built-ins like ~
and ~= without needing to rewrite every function that uses strings.
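To make the pool idea concrete, a bump-the-pointer region carved from a single malloc'd block might look like this (an illustrative sketch, not a proposed API):

```d
import core.stdc.stdlib : malloc, free;

// Minimal bump-the-pointer region backed by one malloc'd block.
// Individual frees are no-ops; cleanup() releases everything at once.
struct Region
{
    private ubyte* base, cur, end;

    this(size_t capacity)
    {
        base = cur = cast(ubyte*) malloc(capacity);
        end  = base + capacity;
    }

    void* alloc(size_t n)
    {
        n = (n + 15) & ~cast(size_t) 15;   // keep 16-byte alignment
        if (cur + n > end) return null;    // region exhausted
        auto p = cur;
        cur += n;
        return p;
    }

    void cleanup() { free(base); base = cur = end = null; }
}
```

A gc_alloc hook pointed at `Region.alloc` would then serve `~` and `~=` from the pool until `cleanup()`.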

Re: DIP42 - Add enum E(T) = expression; eponymous template support

2013-06-26 Thread Joseph Rushton Wakeling
On 06/25/2013 11:31 PM, Walter Bright wrote:
 http://wiki.dlang.org/DIP42

The answer to life, the universe and everything? :-)



Re: Opinions on DConf talks

2013-06-26 Thread deadalnix
On Wednesday, 26 June 2013 at 15:58:41 UTC, Andrei Alexandrescu 
wrote:

On 6/26/13 5:23 AM, Kagamin wrote:

On Tuesday, 25 June 2013 at 19:38:04 UTC, MattCoder wrote:
But one little thing that comes to mind now is: do we really need
this type of conference when we live in the Internet era?


I believe conferences privatize information. Dconf is not half
bad, but there're much worse cases. Video is a low-quality medium
to deliver technical information; in some cases it's completely
inaccessible. Well, if it's not supposed to share information,
then ok, but usually it's perceived in a different way.


This all seems very odd to me.

Andrei


You've spent too much time running the confs. When you can't go, 
they are the most frustrating thing ever. So much information is 
exchanged, and so many people are left out of it.


I remember finding myself watching conferences live at completely 
crazy schedules due to timeshift.


I guess recordings provide a fair balance, especially since 
DConf's are very high quality.


Re: why allocators are not discussed here

2013-06-26 Thread Dicebot
By the way, while this topic gets some attention, I want to make 
a notice that there are actually two orthogonal entities that 
arise when speaking about configurable allocation - allocators 
itself and global allocation policies. I think good design should 
address both of those.


For example, changing global allocator for custom one has limited 
usability - you are anyway limited by the language design that 
makes only GC or ref-counting viable general options. However, 
some way to prohibit automatic allocations at runtime while still 
allowing manual ones may be useful - and it does not matter what 
allocator is actually used to get that memory. Once such API is 
designed, tighter classification and control may be added with 
time.


Re: top time wasters in DMD, as reported by gprof

2013-06-26 Thread dennis luehring

On 26.06.2013 07:38, SomeDude wrote:

On Monday, 24 June 2013 at 16:46:51 UTC, dennis luehring wrote:


so it could be std library implementation related - can DMC use
the msvc libs? (just for the comparison)

and you should also try 2010 - or better 2012 msvc (it still
gets speedier code out)


Is there still a free version of the VS compiler ?



always the latest, currently vs2012, as express edition (only the MFC 
library is missing)


Re: top time wasters in DMD, as reported by gprof

2013-06-26 Thread dennis luehring

On 26.06.2013 08:47, John Colvin wrote:

On Monday, 24 June 2013 at 18:01:11 UTC, Walter Bright wrote:

On 6/24/2013 6:19 AM, dennis luehring wrote:

how does that look using msvc to compile the dmd compiler,
as it turns out that msvc makes dmd much faster


The profile report was done by gcc/gprof.

And besides, better compilers shouldn't change profile results.


I'm confused. Different optimisers often produce radically
different profile results. Or am I misunderstanding you?



maybe he's talking about the dmd -release parameter, but even that 
should produce different results (or does the same code use the same 
amount of time?)


Re: why allocators are not discussed here

2013-06-26 Thread Brian Rogoff

On Wednesday, 26 June 2013 at 17:25:24 UTC, H. S. Teoh wrote:
I was listening to Andrei's talk this morning, but I didn't 
quite
understand what he means by composable allocators. Is he 
talking about
nesting, say, a GC inside a region allocated by a region 
allocator?


Maybe he was talking about a freelist allocator over a reap, as
described by the HeapLayers project http://heaplayers.org/ in the
paper from 2001 titled 'Composing High-Performance Memory
Allocators'. I'm pretty sure that web site was referenced in the
talk. A few publications there are from Andrei.

I agree that D should support programming without a GC, with
different GCs than the default one, and custom allocators, and
that features which demand a GC will be troublesome.

-- Brian


Re: why allocators are not discussed here

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 16:40:20 UTC, H. S. Teoh wrote:
I think supporting the multi-argument version of to!string() is 
a good thing, but what to do with library code that calls 
to!string()? It'd be nice if we could somehow redirect those GC 
calls without having to comb through the entire Phobos codebase 
for stray calls to to!string().



Let's consider what kinds of allocations we have. We can break 
them up into two broad groups: internal and visible.


Internal allocations, in theory, don't matter. These can be on 
the stack, the gc heap, malloc/free, whatever. The function 
itself is responsible for their entire lifetime.


Changing these can either optimize, in the case of reusing a region, 
or leak, if you switch to manual management and the function doesn't 
know it.


Visible allocations are important because the caller is 
responsible for freeing them. Here, I really think we want the 
type system's help: either it should return something that we 
know we're responsible for, or take a buffer/output range from us 
to receive the data in the first place.


Either way, the function signature should reflect what's going on 
with visible allocations. It'd possibly return a wrapped type and 
it'd take an output range/buffer/allocator.




With internals though, the only reason I can see why you'd want 
to change them outside the function is to give them a region of 
some sort to work with, especially since you don't know for sure 
what it is doing - these are all local variables to the 
function/call stack. And here, I don't think we want to change 
the allocator wholesale.


At most, we'd want to give it hints that what we're doing is 
short-lived. (Or, better yet, have it figure this out on its own, 
like a generational gc.)




So I think this is more about tweaking the gc than replacing it, 
at most adding a couple new functions to it:


GC.hint_short_lived // returns a helper struct with a static 
refcount:


TempGcAllocator {
 static int tempCount = 0;
 static void* localRegion;
 this() { tempCount++; } // pretend this works
 ~this() { tempCount--; if(tempCount == 0) 
gc.tryToCollect(localRegion); }


 T create(T, Args...)(Args args) { return GC.new_short_lived 
T(args); }

}


and gc.tryToCollect() does a quick scan for anything pointing into the 
local region. If there's nothing in there, it frees the whole 
thing. If there is, in the name of memory safety, it just 
reintegrates that local region into the regular memory and gc's 
its components normally.




The reason the count is static is that you don't have to pass 
this thing down the call stack. Any function that wants to adapt 
to this generational hint system just calls hint_short_lived. If 
you're a leaf function, that's ok, the static count means you'll 
inherit the region from the function above you.


You would NOT use this in main(), as that defeats the purpose.



I think to() with an output range parameter definitely
should be implemented.


No doubt about it, we should aim for most phobos functions not to 
allocate at all, if given an output range they can use.
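As a concrete instance of the output-range idea, std.format.formattedWrite already accepts any output range, so formatting can target a caller-provided stack buffer; the BufSink name is an assumption for illustration:

```d
import std.format : formattedWrite;

// A tiny output range writing into a caller-provided buffer,
// so formatting allocates nothing on the GC heap.
struct BufSink
{
    char[] buf;
    size_t len;
    void put(char c) { if (len < buf.length) buf[len++] = c; }
}

void main()
{
    char[64] storage;                     // stack storage
    auto sink = BufSink(storage[]);
    sink.formattedWrite("%d + %d = %d", 2, 3, 5);
    assert(sink.buf[0 .. sink.len] == "2 + 3 = 5");
}
```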



Interesting idea. So basically you can tell which allocator was 
used to allocate an object just by looking at its type?


Right, then you'll know if you have to free() it. (Or it can free 
itself with its destructor.)



This is a bit inconvenient. So your member variables will have 
to know what allocation type is being used. Not the end of the

world, of course, but not as pretty as one would like.


Yeah, you'd need to know if you own them or not too (are you 
responsible for freeing that string you just got passed? If no, 
are you sure it won't be freed while you're still using it?), but 
I just think that's a part of memory management you can't 
sidestep.


There's two easy answers: 1) always make a private copy of 
anything you store (and perhaps write to) or 2) use a gc and 
trust it to always be the owner.


In any other case, I think you *have* to think about it, and the 
type telling you can help you make that decision.



and allows you to mix differently-allocated objects without 
having to


Important to remember though that you are borrowing these 
references, not taking ownership.


I think the rule of all pointers/slices are borrowed is fairly 
workable though. With the gc, that's ok, you don't own anything. 
The garbage collector is responsible for it all, so store away. 
(Though if it is mutable, you might want to idup it so you don't 
get overwritten by someone else. But that's a separate question 
from allocation method and already encoded in D's type 
system).


So never free() a naked pointer, unless you know what you're 
doing like interfacing with a C library, prefer to only free a 
ManuallyAllocated!(pointer).


Hell, a C library binding could change the type too, it'd still be 
binary compatible. RefCounted!T wouldn't be, but 
ManuallyAllocated!T would just be a wrapper around T*.
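A minimal sketch of what such a hypothetical ManuallyAllocated!T wrapper could look like (the names and interface are assumptions, not an existing API):

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical wrapper: the type records that *you* own the memory.
// The binary layout is just a T*, so it stays C-ABI compatible.
struct ManuallyAllocated(T)
{
    T* ptr;

    static ManuallyAllocated make(T value)
    {
        auto p = cast(T*) malloc(T.sizeof);
        *p = value;
        return ManuallyAllocated(p);
    }

    void release() { free(ptr); ptr = null; }
}

void main()
{
    auto x = ManuallyAllocated!int.make(42);
    assert(*x.ptr == 42);
    x.release();     // explicit: the type told us we own it
}
```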


I think I'm starting to ramble!


Notes from C++ static analysis

2013-06-26 Thread bearophile

An interesting blog post found through Reddit:

http://randomascii.wordpress.com/2013/06/24/two-years-and-thousands-of-bugs-of-/

The post is about the heavy usage of static analysis on lot of 
C++ code. They have a Python script that shows new warnings only 
the first time they appear in the code base. This is a simple but 
very useful memory, to solve one of the most important downsides 
of warnings.


The article groups bugs in some different categories. Some of the 
D code below is derived from the article.


- - - - - - - - - - - - - - - - - -

Format strings:

The most common problem they find are errors in the format string 
of printf-like functions (even though the code is C++):


The top type of bug that /analyze finds is format string errors 
– mismatches between printf-style format strings and the 
corresponding arguments. Sometimes there is a missing argument, 
sometimes there is an extra argument, and sometimes the 
arguments don’t match, such as printing a float, long or ‘long 
long’ with %d.


Such errors in D are less bad, because writef("%d", x) is usable 
for all kinds of integral values. On the other hand this D program 
prints just 10 with no errors, ignoring the second x:


import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to give 
a compile-time error.


This is how Rust gets this right:

println(fmt!("hello, %d", j))

https://github.com/mozilla/rust/blob/master/src/libsyntax/ext/fmt.rs
https://github.com/Aatch/rust-fmt

In D a safe function can be written that needs no extra static 
analysis:


ctWritefln!"%d"(x, x);
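Such a ctWritefln is hypothetical (not a Phobos function), but a sketch is straightforward: make the format string a template argument so the argument count can be checked at compile time. The specifier counting here is deliberately naive (it ignores widths and positional %1$d forms):

```d
import std.stdio : writefln;

// Naive %-specifier count, evaluated at compile time via CTFE.
size_t countSpecs(string fmt)
{
    size_t n;
    for (size_t i = 0; i + 1 < fmt.length; ++i)
        if (fmt[i] == '%' && fmt[i + 1] != '%') ++n;
    return n;
}

// Hypothetical ctWritefln: mismatched argument counts become
// compile-time errors instead of silently ignored arguments.
void ctWritefln(string fmt, Args...)(Args args)
{
    static assert(countSpecs(fmt) == Args.length,
        "argument count does not match format string");
    writefln(fmt, args);
}

void main()
{
    size_t x = 10;
    ctWritefln!"%d"(x);        // ok
    // ctWritefln!"%d"(x, x);  // would fail to compile
}
```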

- - - - - - - - - - - - - - - - - -

Variable shadowing:

This is a much less common problem in D because this code gives 
errors:


void main() {
bool result = true;
if (true) {
bool result = false;
}
foreach (i; 0 .. 10) {
foreach (i; 0 .. 20) {
}
}
for (int i = 0; i < 10; i++) {
for (int i = 0; i < 20; i++) {
}
}
}


test.d(4): Error: is shadowing declaration test.main.result
test.d(7): Error: is shadowing declaration test.main.i
test.d(11): Error: is shadowing declaration test.main.i


There are some situations where this doesn't help, but they are 
not common in idiomatic D code:


void main() {
int i, j;
for (i = 0; i < 10; i++) {
for (i = 0; i < 20; i++) {
}
}
}


In D there is one case, similar to variable shadowing, that the 
compiler doesn't help you with:


class Foo {
int x, y, z, w;
this(in int x_, in int y_, in int z_, in int w_) {
this.x = x_;
this.y = y_;
this.z = z;
this.w = w_;
}
}
void main() {
auto f = new Foo(1, 2, 3, 4);
}


I believe the compiler should give some warning there:
http://d.puremagic.com/issues/show_bug.cgi?id=3878

- - - - - - - - - - - - - - - - - -

Logic bugs:


bool someFunction() { return true; }
uint getFlags() { return uint.max; }
void main() {
uint kFlagValue = 2u ^^ 14;
if (someFunction() || getFlags() | kFlagValue) {}
}


The D compiler gives no warnings. From the article:

The code above is an expensive and misleading way to go "if (true)". 
Visual Studio gave a clear warning that described the 
problem well:


warning C6316: Incorrect operator:  tested expression is 
constant and non-zero.  Use bitwise-and to determine whether bits 
are set.



See:
http://msdn.microsoft.com/en-us/library/f921xb29.aspx

A simpler example:

enum INPUT_VALUE = 2;
void f(uint flags) {
if (flags | INPUT_VALUE) {}
}


I have just added it to Bugzilla:
http://d.puremagic.com/issues/show_bug.cgi?id=10480



Another problem:

void main() {
bool bOnFire = true;
float angle = 20.0f + bOnFire ? 5.0f : 10.0f;
}


D compiler gives no warnings.

Visual Studio gave:

warning C6336: Arithmetic operator has precedence over question 
operator, use parentheses to clarify intent.

warning C6323: Use of arithmetic operator on Boolean type(s).


See:
http://msdn.microsoft.com/en-us/library/ms182085.aspx

I opened an ER lot of time ago, Require parenthesization of 
ternary operator when compounded:

http://d.puremagic.com/issues/show_bug.cgi?id=8757

- - - - - - - - - - - - - - - - - -

Signed, unsigned, and tautologies:

Currently this gives no warnings:


This code would have been fine if both a and b were signed – but 
one of them wasn’t, making this operation nonsensical.



import std.algorithm: max;
void main() {
int a = -10;
uint b = 5;
auto result = max(0, a - b);
}



We had quite a few places where we were checking to see if 
unsigned variables were less than zero -- now we have fewer.


This is a well known problem; it has been an issue in Bugzilla for a 
long time, and it seems there is no simple way to address it in D.
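To make the trap concrete, here is the wraparound in the max(0, a - b) example, plus one possible workaround (do the subtraction in a wider signed type first):

```d
import std.algorithm : max;

void main()
{
    int  a = -10;
    uint b = 5;

    // a - b is computed in uint: -15 wraps around,
    // so max(0, a - b) silently returns a huge value.
    auto bad = max(0, a - b);
    assert(bad == uint.max - 14);   // 4294967281, not 0

    // Workaround: widen to a signed type before subtracting.
    long diff = cast(long) a - b;
    auto good = max(0L, diff);
    assert(good == 0);
}
```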


Bye,
bearophile


Re: why allocators are not discussed here

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 17:25:24 UTC, H. S. Teoh wrote:

malloc to allocate a pool, then use a custom gc_alloc/gc_free to
allocate from this pool in order to support language built-ins 
like ~ and ~= without needing to rewrite every function that 
uses strings.


Blargh, I forgot about operator ~ on built-ins. For custom types 
it is easy enough to manage, just overload it. You can even do ~= 
on types that aren't allowed to allocate, if they have a certain 
capacity set up ahead of time (like a stack buffer).


But for built ins, blargh, I don't even think we can hint on them 
to the gc. Maybe we should just go ahead and make the gc 
generational. (If you aren't using gc, I say leave binary ~ 
unimplemented in all cases. Use ~= on a temporary instead 
whenever you would do that. It is easier to follow the lifetime 
if you explicitly declare your temporary.)
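The "~= on a type with preset capacity" idea can be sketched with a hypothetical stack-backed appender (names are made up for illustration):

```d
// Stack-backed buffer supporting ~= without any allocation,
// as long as the preset capacity N is not exceeded.
struct StackAppender(size_t N)
{
    private char[N] buf;
    private size_t  len;

    void opOpAssign(string op : "~")(const(char)[] s)
    {
        assert(len + s.length <= N, "capacity exceeded");
        buf[len .. len + s.length] = s[];
        len += s.length;
    }

    const(char)[] data() const { return buf[0 .. len]; }
}

void main()
{
    StackAppender!32 a;
    a ~= "hello";
    a ~= ", world";
    assert(a.data == "hello, world");
}
```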


Re: Notes from C++ static analysis

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 08:08:08PM +0200, bearophile wrote:
 An interesting blog post found through Reddit:
 
 http://randomascii.wordpress.com/2013/06/24/two-years-and-thousands-of-bugs-of-/
[...]
 The most common problem they find are errors in the format string of
 printf-like functions (despite the code is C++):

None of my C++ code uses iostream. I still find stdio.h more comfortable
to use, in spite of its many problems. One of the most annoying features
of iostream is the abuse of operator<< and operator>> for I/O. Format
strings are an ingenious idea sorely lacking in the iostream department
(though admittedly the way it was implemented in stdio is rather unsafe,
due to the inability of C to do many compile-time checks).


 The top type of bug that /analyze finds is format string errors –
 mismatches between printf-style format strings and the corresponding
 arguments. Sometimes there is a missing argument, sometimes there is
 an extra argument, and sometimes the arguments don’t match, such as
 printing a float, long or ‘long long’ with %d.
 
 Such errors in D are less bad, because writef("%d", x) is usable for
 all kinds of integral values.

Less bad? Actually, IME format strings in D are amazingly useful! You
can pretty much use %s 99% of the time, because static type inference
works so well in D! The only time I actually write anything other than
%s is when I need to specify floating-point formatting options, like
%precision, or scientific format vs. decimal, etc..

Then throw in the array formatters %(...%), and D format strings will
totally blow C's stdio out of the water.


 On the other hand this D program prints
 just 10 with no errors, ignoring the second x:
 
 import std.stdio;
 void main() {
 size_t x = 10;
 writefln("%d", x, x);
 }
 
 In a modern statically typed language I'd like such code to give a
 compile-time error.

This looks like a bug to me. Please file one. :)


[...]
 There are some situations where this doesn't help, but they are not
 common in idiomatic D code:
 
 void main() {
 int i, j;
 for (i = 0; i < 10; i++) {
 for (i = 0; i < 20; i++) {
 }
 }
 }

I don't think this particular error is compiler-catchable. Sometimes,
you *want* the nested loop to reuse the same index (though probably not
in exactly the formulation as above, most likely the inner loop will
omit the i=0 part). The compiler can't find such errors unless it reads
the programmer's mind.


 In D this is one case similar to variable shadowing, that the
 compiler doesn't help you with:
 
 class Foo {
 int x, y, z, w;
 this(in int x_, in int y_, in int z_, in int w_) {
 this.x = x_;
 this.y = y_;
 this.z = z;
 this.w = w_;
 }
 }

Yeah, this one bit me before. Really hard. I had code that looked like
this:

class C {
int x;
this(int x) {
x = f(x);   // ouch
}
int f(int x) { ... }
}

This failed horribly, so I rewrote the //ouch line to:

this.x = x;

But that is still very risky, since in a member function that doesn't
shadow x, the above line is equivalent to this.x = this.x.

Anyway, in the end I decided that naming member function arguments after
member variables is a Very Stupid Idea, and that it should never be
done. It would be nice if the D compiler rejected such code.


[...]
 Logic bugs:
[...]
 enum INPUT_VALUE = 2;
 void f(uint flags) {
 if (flags | INPUT_VALUE) {}
 }
 
 
 I have just added it to Bugzilla:
 http://d.puremagic.com/issues/show_bug.cgi?id=10480
[...]

Huh? Shouldn't that be (flags & ~INPUT_VALUE)?

How would the compiler catch such cases in general, though? I mean, like
in arbitrarily complex boolean expressions.
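For the simple case at least, the corrected tests are mechanical; a sketch of both readings:

```d
enum INPUT_VALUE = 2u;

// `flags | INPUT_VALUE` is always non-zero, so that branch always
// runs. The two plausible intended tests are:
bool isSet(uint flags)      { return (flags & INPUT_VALUE)  != 0; }
bool onlyOthers(uint flags) { return (flags & ~INPUT_VALUE) != 0; }

void main()
{
    assert( isSet(3));        // bit 1 set
    assert(!isSet(4));        // bit 1 clear
    assert( onlyOthers(5));   // bits other than INPUT_VALUE set
    assert(!onlyOthers(2));   // only INPUT_VALUE set
}
```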


T

-- 
It said to install Windows 2000 or better, so I installed Linux instead.


Re: Notes from C++ static analysis

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 18:08:10 UTC, bearophile wrote:
In D this is one case similar to variable shadowing, that the 
compiler doesn't help you with:

this.z = z;


I'd argue that assigning something to itself is never useful.


Re: Notes from C++ static analysis

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 08:57:46PM +0200, Adam D. Ruppe wrote:
 On Wednesday, 26 June 2013 at 18:08:10 UTC, bearophile wrote:
 In D this is one case similar to variable shadowing, that the
 compiler doesn't help you with:
 this.z = z;
 
 I'd argue that assigning something to itself is never useful.

Unless opAssign does something unusual.

But yeah, that's bad practice and the compiler should warn about it. The
reason it doesn't, though, IIRC is because of generic code, where it
would suck to have to special-case when two template arguments actually
alias the same thing.


T

-- 
People say I'm arrogant, and so I am!!


Re: why allocators are not discussed here

2013-06-26 Thread cybervadim
On Wednesday, 26 June 2013 at 14:59:41 UTC, Dmitry Olshansky 
wrote:
Here is a chief problem - the assumption that is required to 
make it magically work.


Now what I see is:

T arr[];//TLS

//somewhere down the line
arr = ... ;
else{
...
alloctor(myAlloc){
arr = array(filter!);
}
...
}
return arr;

Having an unsafe magic wand that may transmogrify some code to 
switch allocation strategy I consider naive and dangerous.


Who ever told you process does return before allocating a few 
Gigs of RAM (and hoping on GC collection)? Right, nobody. Maybe 
it's an event loop that may run forever.


What is missing is that code up to date assumes new == GC and 
works _like that_.


Not magic, but the tool which is quite powerful and thus it may 
shoot your leg.
This is unsafe, but if you want it safe, don't use allocators, 
stay with GC.
In the example above, you get the first arr freed by GC; the second 
arr may point to nothing if myAlloc was implemented to free it 
earlier, or you may get a proper arr reference if myAlloc used 
malloc and didn't free it. The fact that you may write bad code 
does not make the language (or concept) bad.




Re: Notes from C++ static analysis

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 18:54:17 UTC, H. S. Teoh wrote:

import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to 
give a compile-time error.


This looks like a bug to me. Please file one. :)


Not necessarily, since you might want a format string to be a 
runtime variable, like when doing translations. I could live with 
there being another function that handles runtime format strings, 
though.


Things might be confusing too because of positional parameters 
(%1$d). You might offer something that isn't necessarily used:


config.dateFormat = "%3$d/%2$d";
writefln(config.dateFormat, year, month, day);

Anyway, in the end I decided that naming member function 
arguments after member variables is a Very Stupid Idea,


Blargh, I do it a lot. But I would be ok with the lhs of a member 
when there's a parameter of the same name requiring that you call 
it this.x.


How would the compiler catch such cases in general, though? I 
mean, like in arbitrarily complex boolean expressions.


The Microsoft compiler warned about it after constant folding, when 
the condition works out to if(1).


I'm a little concerned that it would complain about some false 
positives though, which can be quite deliberate in D, like 
if(__ctfe).


Re: Notes from C++ static analysis

2013-06-26 Thread Andrei Alexandrescu

On 6/26/13 11:08 AM, bearophile wrote:

On the other hand this D program prints just
10 with no errors, ignoring the second x:

import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to give a
compile-time error.


Actually this is good because it allows one to customize the format 
string to print only a subset of the available information (I've 
actually used this).



This is how how Rust gets this right:

println(fmt!("hello, %d", j))

https://github.com/mozilla/rust/blob/master/src/libsyntax/ext/fmt.rs
https://github.com/Aatch/rust-fmt


This is potentially inefficient because it creates a string instead of 
formatting straight in the output buffer.



Andrei


Re: Notes from C++ static analysis

2013-06-26 Thread dennis luehring

On 26.06.2013 21:07, Adam D. Ruppe wrote:

On Wednesday, 26 June 2013 at 18:54:17 UTC, H. S. Teoh wrote:

import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to
give a compile-time error.


This looks like a bug to me. Please file one. :)


Not necessarily, since you might want a format string to be a
runtime variable, like when doing translations. I could live with
there being another function that does runtime though.


then you normally quote the % with %% or something else to deactivate 
it - that's much cleaner than just allowing it for this corner case 
out of the box




Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 23:04, cybervadim wrote:

On Wednesday, 26 June 2013 at 14:59:41 UTC, Dmitry Olshansky wrote:



Having an unsafe magic wand that may transmogrify some code to switch
allocation strategy I consider naive and dangerous.

Who ever told you process does return before allocating a few Gigs of
RAM (and hoping on GC collection)? Right, nobody. Maybe it's an event
loop that may run forever.

What is missing is that code up to date assumes new == GC and works
_like that_.


Not magic, but the tool which is quite powerful and thus it may shoot
your leg.


I know what kind of thing you are talking about. It ain't powerful, 
it's just a hack that doesn't quite do what's advertised.



This is unsafe, but if you want it safe, don't use allocators, stay with
GC.


BTW you were talking changing allocation of the code you didn't write.
There is not even single fact that makes the thing safe. It's all 
working by chance or because the thing was designed to work with scoped 
allocator to begin with.


I believe in the 2nd case (designed to use scoped allocation):
a) the behavior is guaranteed (determinism vs GC etc.)
b) safety is assured by the designer, not pure luck (and reasonable 
assumptions that may not hold)



In the example above, you get first arr freed by GC, second arr may
point to nothing if myAlloc was implemented to free it before. Or you
may get a proper arr reference if myAlloc used malloc and didn't free
it.


Yeah I know, hence I showed it. BTW forget about malloc, I'm not 
talking about explicit malloc being an alternative to your scheme.


 The fact that you may write bad code does not make the language (or
 concept) bad.

It does. Because it makes unreliable and bug-prone usage easy.

--
Dmitry Olshansky


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 21:35, Dicebot wrote:

By the way, while this topic gets some attention, I want to make a
notice that there are actually two orthogonal entities that arise when
speaking about configurable allocation - allocators itself and global
allocation policies. I think good design should address both of those.



Sadly I believe that global allocators would still have to be compatible 
with GC (to not break code in hard to track ways) thus basically being a 
GC. Hence we can easily stop talking about them ;)




--
Dmitry Olshansky


Re: Notes from C++ static analysis

2013-06-26 Thread dennis luehring

On 26.06.2013 21:33, Andrei Alexandrescu wrote:

On 6/26/13 11:08 AM, bearophile wrote:

On the other hand this D program prints just
10 with no errors, ignoring the second x:

import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to give a
compile-time error.


Actually this is good because it allows to customize the format string
to print only a subset of available information (I've actually used this).


why is there always a tiny need for such tricky stuff - isn't that 
only useful in very rare cases?


Re: Notes from C++ static analysis

2013-06-26 Thread dennis luehring

On 26.06.2013 21:53, dennis luehring wrote:

On 26.06.2013 21:33, Andrei Alexandrescu wrote:

On 6/26/13 11:08 AM, bearophile wrote:

On the other hand this D program prints just
10 with no errors, ignoring the second x:

import std.stdio;
void main() {
size_t x = 10;
writefln("%d", x, x);
}

In a modern statically typed language I'd like such code to give a
compile-time error.


Actually this is good because it allows to customize the format string
to print only a subset of available information (I've actually used this).


why is there always a tiny need for such tricky stuff - isn't that 
only useful in very rare cases?



or better said - could someone then add a description to writefln of 
why there is a need for writefln to handle more values than asked for 
in the format string - maybe with an example that really shows the 
usefulness of this feature - and why a simple enum + if/else can't 
handle this just as elegantly







Re: Notes from C++ static analysis

2013-06-26 Thread H. S. Teoh
On Wed, Jun 26, 2013 at 09:07:30PM +0200, Adam D. Ruppe wrote:
 On Wednesday, 26 June 2013 at 18:54:17 UTC, H. S. Teoh wrote:
 import std.stdio;
 void main() {
 size_t x = 10;
 writefln("%d", x, x);
 }
 
 In a modern statically typed language I'd like such code to give
 a compile-time error.
 
 This looks like a bug to me. Please file one. :)
 
 Not necessarily, since you might want a format string to be a
 runtime variable, like when doing translations. I could live with
 there being another function that does runtime though.
[...]

Wait, I thought we were talking about *compile-time* warnings for
extraneous arguments to writefln. If the format string is not known at
compile-time, then there's nothing to be done, and as you said, it's
arguably better to allow more arguments than format specifiers if you're
doing i18n.

But if the format string is known at compile-time, and there are
extraneous arguments, then it should be a warning / error.


T

-- 
People tell me that I'm skeptical, but I don't believe it.


Re: why allocators are not discussed here

2013-06-26 Thread Dmitry Olshansky

26-Jun-2013 21:23, H. S. Teoh writes:


Both suffer from
a) being totally unsafe and in fact bug prone since all references
obtained in there are now dangling (and there is no indication where
they came from)


How is this different from using malloc() and free() manually? You have
no indication of where a void* came from either, and the danger of
dangling references is very real, as any C/C++ coder knows. And I assume
that *some* people will want to be defining custom allocators that wrap
around malloc/free (e.g. the game engine guys who want total control).


Why the heck do you people think I propose to use malloc directly as an 
alternative to whatever hackish allocator stack was proposed?


Use the darn container. For starters I'd make the allocation strategy a 
parameter of each container. At least containers do OWN memory.


Then refactor out the common pieces into a framework of allocation helpers. 
In the end I'd personally separate concerns into 3 entities:


1. Memory area objects - think as allocators but without the circuitry 
to do the allocation, e.g. a chunk of memory returned by malloc/alloca 
can be wrapped into a memory area object.


2. Allocators (Policies) - a potentially nested combination of such 
circuitry that makes use of memory areas. Free-lists, pools, stacks, 
etc. Safe ones have ref-counting on memory areas; unsafe ones don't. 
(Though safety largely depends on the way you got that chunk of memory.)


3. Containers/Wrappers - as above, objects that handle the life-cycle of 
objects and make use of allocators. In fact, allocators are part of the
container's type, but memory area objects are not.
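A rough sketch of that three-entity split, assuming hypothetical names (`MallocArea`, `StackAlloc`, `Buffer` are not Phobos APIs, just illustrations of the layering described above):

```d
import core.stdc.stdlib : malloc, free;

// 1. Memory area: owns a raw chunk but has no allocation circuitry.
struct MallocArea {
    void[] chunk;
    this(size_t bytes) { chunk = malloc(bytes)[0 .. bytes]; }
    ~this() { if (chunk.ptr) free(chunk.ptr); }
    @disable this(this); // sole owner of the chunk
}

// 2. Allocator (policy): a bump-pointer strategy over any area type.
struct StackAlloc(Area) {
    Area* area;
    size_t used;
    void[] allocate(size_t n) {
        if (used + n > area.chunk.length) return null;
        auto mem = area.chunk[used .. used + n];
        used += n;
        return mem;
    }
}

// 3. Container: the allocator is part of the container's type;
//    the memory area it draws from is not.
struct Buffer(T, Alloc) {
    Alloc* alloc;
    T[] data;
    void reserve(size_t n) {
        data = cast(T[]) alloc.allocate(n * T.sizeof);
    }
}

void main() {
    auto area  = MallocArea(1024);
    auto alloc = StackAlloc!MallocArea(&area, 0);
    auto buf   = Buffer!(int, StackAlloc!MallocArea)(&alloc);
    buf.reserve(4);
    buf.data[0] = 42;
    assert(buf.data[0] == 42);
}
```

The same `StackAlloc` could be instantiated over an alloca'd or GC-owned area without touching the container code, which is the point of separating the circuitry from the memory it runs on.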





b) imagine you need to use an allocator for a stateful object. Say a
forward range of some other ranges (e.g. std.regex), both
scoped/stacked to allocate its internal stuff. The 2nd one may handle it
but not the 1st one.


Yeah this is a complicated area. A container basically needs to know how
to allocate its elements. So somehow that information has to be
somewhere.



c) transfer of objects allocated differently up the call graph
(scope graph?), is pretty much neglected I see.


They're incompatible. You can't safely make a linked list that contains
both GC-allocated nodes and malloc() nodes.


What I mean is that if the types are the same as built-ins, it would be a 
horrible mistake. If not, then we are talking about containers anyway.
And if these have a ref-counted pointer to their allocator, then the 
whole thing is safe, albeit at the cost of performance.


Sadly, alias this to some built-in (e.g. a slice) allows squirreling away 
the underlying reference too easily.


As such I don't believe in either of the 2 *lies*:
a) built-ins can be refurbished to use custom allocators
b) we can add opSlice/alias this or whatever to our custom type to get 
access to the underlying built-ins safely and transparently


Both are just nuclear bombs waiting for a good time to explode.

That's just a bomb waiting

to explode in your face. So in that sense, Adam's idea of using a
different type for differently-allocated objects makes sense.


Yes, but one should be careful here not to have an exponential explosion 
in code size. So some allocators have to be compatible, and if there 
is a way to transfer ownership it'd be bonus points (and a large pot of 
them, mind you).



A
container has to declare what kind of allocation its members are using;
any other way is asking for trouble.


Hence my thoughts to move this piece of circuitry to containers 
proper. The whole idea that by swapping malloc with myMalloc you can 
translate to a wildly different allocation scheme doesn't quite hold.


I think it may be interesting to try and put the wall in a different place, 
namely between the allocation strategy and the memory areas it works on.




I'm kind of wondering how our knowledgeable community has come to this.
(must have been starving w/o allocators way too long)


We're just trying to provoke Andrei into responding. ;-)


Cool, then keep it coming, but ... safety and other holes have to be taken 
care of.



[...]

IMHO the only place for allocators is in containers other kinds of
code may just ignore allocators completely.


But some people clamoring for allocators are doing so because they're
bothered by Phobos using ~ for string concatenation, which implicitly
uses the GC. I don't think we can just ignore that.


~= would work with any sensible array-like container.
~ is sadly only a convenience for scripts and/or apps that are not 
performance (determinism) critical, unfortunately.




std.algorithm and friends should IMHO be customized on 2 things only:

a) containers to use (instead of arrays)
b) optionally a memory source (or allocator), if the container is
temporary (scoped), to tie its life-time to something.

Want temporary stuff? Use temporary arrays, hashmaps, and whatnot,
i.e. types tailored for a particular use case (e.g. with a
temporary/scoped allocator in mind).
These would all be unsafe though. The alternative is ref-counting
pointers to an allocator. With the word on the street about ARC it 

Re: Notes from C++ static analysis

2013-06-26 Thread Adam D. Ruppe

On Wednesday, 26 June 2013 at 20:06:43 UTC, H. S. Teoh wrote:

But if the format string is known at compile-time, and there are
extraneous arguments, then it should be a warning / error.


We can't do that in D today, unless we do a writefln!fmt(args) 
in addition to writefln(fmt, args...);


tbh I kinda wish we could overload functions on literals though.
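A short sketch of the two situations discussed in this subthread. The `writefln!fmt` form below is the compile-time-checked variant Adam suggests; it is hypothetical at the time of this thread, so it is shown commented out:

```d
import std.stdio;

void main() {
    size_t x = 10;
    writefln("%d", x);    // fine: one specifier, one argument
    writefln("%d", x, x); // compiles and runs; the second x is
                          // silently ignored at runtime

    // A template overload taking the format as a compile-time
    // argument could reject the mismatch at compile time:
    //     writefln!"%d"(x, x); // would fail to compile
}
```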

