Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Trass3r

In general good work!

But once again Phobos makes a simple std.string function unCTFEable.
Now I have to use an ugly hack to achieve something as simple as toUpper:

mixin( (){char[] tmp = dup; toUpperInPlace(tmp); return tmp;}() );
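
For anyone wanting to try the workaround, here is a self-contained sketch built on the same toUpperInPlace trick (the helper name and the sample string are made up for illustration):

import std.string : toUpperInPlace;

// CTFE-usable stand-in for toUpper, using the toUpperInPlace hack above:
// it only mutates a private copy, so it evaluates fine at compile time.
string ctfeToUpper(string s)
{
    char[] tmp = s.dup;
    toUpperInPlace(tmp);
    return tmp.idup;
}

enum upper = ctfeToUpper("myIdentifier");
static assert(upper == "MYIDENTIFIER");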


We are looking for a D1 programmer in Berlin, full time position

2011-07-12 Thread Mathias Laurenz Baumann

Good day,

our company is looking for a D1 programmer for a full time position at our  
office in Berlin.

See the link for more details

   http://www.sociomantic.com/careers/software-developer/

Regards,

   --Mathias

--
Mathias Baumann
Research and Development

sociomantic labs GmbH
Münzstraße 19
10178 BERLIN
DEUTSCHLAND

http://www.sociomantic.com

Fon:   +49 (0)30 5015 4701
Fax:   +49 (0)30 2403 6715
Skype: Mathias Baumann (m4renz)
---

sociomantic labs GmbH, Location: Berlin
Commercial Register - AG Charlottenburg: HRB 121302 B
VAT No. - USt-ID: DE 266262100
Managing Directors: Thomas Nicolai, Thomas Brandhoff


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Trass3r
Now I have to use an ugly hack to achieve something as simple as  
toUpper:


mixin( (){char[] tmp = dup; toUpperInPlace(tmp); return tmp;}() );


Damn, I found that too and wanted to mention it on the dmd-beta list before the  
release. But the workaround is simple. At least this one was fixed:


http://lists.puremagic.com/pipermail/dmd-beta/2011-July/000773.html

Because that was making cl4d, with all its string mixins, pretty much  
unbuildable.


Yeah I've done some crazy shit in the cl4d code :D
But in the end that was just another workaround because template mixins  
couldn't mix in constructors.

Good news: this seems to have been fixed.
Bad news: there still is another problem. I asked about it in D.learn.


btw, that problem you reported, where did it occur in cl4d?


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Leandro Lucarella
Jonathan M Davis, el 11 de julio a las 22:21 me escribiste:
 On Tuesday 12 July 2011 01:28:11 Leandro Lucarella wrote:
  Jonathan M Davis, el 11 de julio a las 18:15 me escribiste:
Despite the confusing non-standard descriptions in --help, -w is the
"Treat warnings as errors" setting, so it *should* stop compilation
- that's the whole point of -w. The proper "Turn warnings on"
setting is -wi, not -w.
   
   True. But when we're dealing with messages for something which is
   scheduled for deprecation
  
  What's the point of "scheduled for deprecation" anyway? Things are
  deprecated, or they aren't; anything else should be in the documentation. You
  can always use deprecated features using a compiler flag, so again... what's
  the point of "scheduled for deprecation"? I can't really understand
  that concept.
 
 The idea is to have 3 stages while deprecating something.
 
 1. Scheduled for Deprecation.
 2. Deprecated.
 3. Removed.
 
 When a symbol has been deprecated, -d is required to compile any code using 
 that symbol. So, deprecation breaks code. You either have to change your code 
 so that it doesn't use the deprecated symbol, or you have to change your 
 build 
 scripts to use -d. In either case, deprecating a symbol without warning is 
 going to cause problems for anyone maintaining code which uses that symbol.

If you don't want your code to break when something is deprecated, you
should *always* compile with -d, so no, you don't have to change the
build system if you always have -d. Maybe all that's needed is just a -di (as
in -wi), where deprecations are reported but not treated as errors.

"Scheduled for deprecation" makes no sense; it's a user decision whether to use
deprecated code or not, and it should be a user decision whether he/she wants
a warning about deprecated stuff or not.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
No tengo alas, como un planeador.
No tengo luces, como un plato volador.
Perdi mi rumbo soy un auto chocador.


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 10:07, Leandro Lucarella wrote:
 Jonathan M Davis, el 11 de julio a las 22:21 me escribiste:
  On Tuesday 12 July 2011 01:28:11 Leandro Lucarella wrote:
   Jonathan M Davis, el 11 de julio a las 18:15 me escribiste:
 Despite the confusing non-standard descriptions in --help, -w is
 the "Treat warnings as errors" setting, so it *should* stop
 compilation - that's the whole point of -w. The proper "Turn
 warnings on" setting is -wi, not -w.

True. But when we're dealing with messages for something which is
scheduled for deprecation
   
   What's the point of "scheduled for deprecation" anyway? Things are
   deprecated, or they aren't; anything else should be in the documentation.
   You can always use deprecated features using a compiler flag, so again...
   what's the point of "scheduled for deprecation"? I can't really
   understand that concept.
  
  The idea is to have 3 stages while deprecating something.
  
  1. Scheduled for Deprecation.
  2. Deprecated.
  3. Removed.
  
  When a symbol has been deprecated, -d is required to compile any code
  using that symbol. So, deprecation breaks code. You either have to
  change your code so that it doesn't use the deprecated symbol, or you
  have to change your build scripts to use -d. In either case, deprecating
  a symbol without warning is going to cause problems for anyone
  maintaining code which uses that symbol.
 
 If you don't want your code to break when something is deprecated, you
 should *always* compile with -d, so no, you don't have to change the
 build system if you always have -d. Maybe all that's needed is just a -di (as
 in -wi), where deprecations are reported but not treated as errors.
 
 "Scheduled for deprecation" makes no sense; it's a user decision whether to use
 deprecated code or not, and it should be a user decision whether he/she wants
 a warning about deprecated stuff or not.

Except, of course, that good code generally won't use anything that's 
deprecated, since the deprecated item is going to go away. The user can decide 
to use -d and use deprecated code if it makes sense for them, but that really 
should be decided on a case-by-case basis. Most programmers won't want to use 
-d. So, they're not going to compile with -d by default. So, deprecating 
something will break their code. By saying that you're scheduling something 
for deprecation, you're giving the user time to deal with the change before it 
breaks their code.

Now, I could see an argument for changing how deprecation works so that 
deprecated has no effect unless a flag turns it on, in which case the user is 
deciding whether they want to be warned about using deprecated code. In such a 
case, "scheduled for deprecation" isn't quite as necessary, but then you're 
just going to break everyone's code when you remove the function (at least 
everyone who didn't use the flag to be warned about using deprecated symbols). 
And rather than being able to then change their build scripts to use -d while 
they fix the problem, they then _have_ to go change their code (or not upgrade 
their libraries) in order for their code to compile. But that's not how 
deprecated works in D.

When a symbol is deprecated, it's an error to use it unless you compile with
-d. So, there is no warning about using deprecated stuff. It's an outright 
error. It just so happens that you can turn it off if you need to (hopefully 
as a quick fix). And given that deprecating a symbol introduces errors into 
the code of anyone who uses that symbol, informing people ahead of time gives 
them the opportunity to change their code before it breaks. The result is a 
much smoother process.

1. Something is scheduled for deprecation, so programmers then have the time 
to figure out what they're going to do to change their code, and they have 
time to make the changes. Nothing breaks. No one is forced to make immediate 
changes.

2. The symbol is then deprecated. Anyone who did not take the time to make the 
changes they were told they were going to have to make now has broken 
code, but they have the quick fix of compiling with -d if they need to. 
They're still going to have to figure out what they're going to do about 
changing their code, and they're forced to look at the problem at least far 
enough to enable -d, but their code can still work with some changes to their 
build scripts.

3. The symbol is outright removed. Programmers have had ample time to change 
their code, and if they haven't, they now have to. But they were told that the 
symbol was going away and had to have made changes to their build scripts to 
even use it this long, so the developer of the library hasn't just screwed 
them over.

The idea is to provide a smooth path for necessary changes. And just 
deprecating something out of the blue does _not_ do that.

- Jonathan M Davis
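
A minimal sketch of the cycle Jonathan describes (the symbol names are invented; with the DMD discussed in this thread, using a deprecated symbol is an error unless you pass -d):

// Library side, at stage 2 of the cycle: the old symbol is deprecated,
// the replacement is available.
deprecated int oldSum(int a, int b) { return a + b; }
int sum(int a, int b) { return a + b; }

void main()
{
    // With the compiler discussed here, the next line is an error unless
    // you compile with -d; that is the "quick fix while you migrate" step.
    auto x = oldSum(1, 2);
}

At stage 3 the symbol is gone entirely, and not even -d keeps this compiling.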


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 13:52, Leandro Lucarella wrote:
 Jonathan M Davis, el 12 de julio a las 18:12 me escribiste:
When a symbol has been deprecated, -d is required to compile any code
using that symbol. So, deprecation breaks code. You either have to
change your code so that it doesn't use the deprecated symbol, or you
have to change your build scripts to use -d. In either case,
deprecating a symbol without warning is going to cause problems for
anyone maintaining code which uses that symbol.
   
   If you don't want your code to break when something is deprecated, you
   should *always* compile with -d, so no, you don't have to change the
   build system if you always have -d. Maybe all needed is just a -di (as
   in -wi), where deprecation are informed but not treated as errors.
   
   Scheduled for deprecation makes no sense, it's a user decision to use
   deprecated code or not, and if should be a user decision if he/she
   wants a warning about deprecated stuff or not.
  
  Except, of course, that good code generally won't use anything that's
  deprecated, since the deprecated item is going to go away. The user can
  decide to use -d and use deprecated code if it makes sense for them, but
  that really should be decided on a case-by-case basis. Most programmers
  won't want to use -d. So, they're not going to compile with -d by
  default. So, deprecating something will break their code. By saying that
  you're scheduling something for deprecation, you're giving the user time
  to deal with the change before it breaks their code.
  
  Now, I could see an argument for changing how deprecation works so that
  deprecated has no effect unless a flag turns it on, in which case, the
  user is deciding whether they want to warned about using deprecated
  code. In such a case, scheduled for deprecation isn't quite as
  necessary, but then you're just going to break everyone's code when you
  remove the function (at least everyone who didn't use the flag to be
  warned about using deprecated symbols). And rather than being able to
  then change their build scripts to use -d while they fix the problem,
  they then _have_ to go change their code (or not upgrade their
  libraries) in order for their code to compile. But that's not how
  deprecated works in D.
 
 So then, why don't we fix it? (Patch attached; you can apply it with 'git
 am file'.) I think -di is the real solution to the problem. Things are
 deprecated or not, and people want to be informed whether they are using
 something deprecated or not. "Scheduled for deprecation" seems to be a way
 to say "show me a deprecation message", not a real state.
 
  When a symbol is deprecated, it's an error to use it unless you compile
  with - d. So, there is no warning about using deprecated stuff. It's an
  outright error. It just so happens that you can turn it off if you need
  to (hopefully as a quick fix). And given that deprecating a symbol
  introduces errors into the code of anyone who uses that symbol,
  informing people ahead of time gives them the opportunity to change
  their code before it breaks. The result is a much smoother process.
 
 OK, then we should fix the compiler (again, patch attached). -di is the
 solution. My patch doesn't change the defaults, but if people think it's
 better to show deprecation errors by default, it can be trivially
 changed.
 
  1. Something is scheduled for deprecation, so programmers then have the
  time to figure out what they're going to do to change their code, and
  they have time to make the changes. Nothing breaks. No one is forced to
  make immediate changes.
 
 This is what deprecated is for! Removing stuff breaks code, not
 deprecating stuff! Deprecated really is "scheduled for removal", so
 "scheduled for deprecation" is "scheduled for scheduled for removal"; it
 makes no sense. Fix the compiler!
 
  2. The symbol is then deprecated. Anyone who did not take the time to
  make changes as they were told that they were going to have to do then
  has broken code, but they have the quick fix of compiling with -d if
  they need to. They're still going to have to figure out what they're
  going to do about changing their code, and they're forced to look at the
  problem at least far enough to enable -d, but their code can still work
  with some changes to their build scripts.
  
  3. The symbol is outright removed. Programmers have had ample time to
  change their code, and if they haven't, they now have to. But they were
  told that the symbol was going away and had to have made changes to
  their build scripts to even use it this long, so the developer of the
  library hasn't just screwed them over.
  
  The idea is to provide a smooth path for necessary changes. And just
  deprecating something out of the blue does _not_ do that.
 
 Unless we fix the compiler :)

This doesn't really fix the problem. Deprecating something is still going to 
break code unless people actively try and avoid it by using -di, so 

Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Adam D. Ruppe
Jonathan M Davis wrote:
 Deprecating something is still going to break code

Breaking with deprecated is an entirely different kind of breakage
than removing something.

deprecated means simply "please don't use this specific thing". You
can tell it "shut up, I know better than you" and be on your way.

It's in your face enough that you can change it right there and then
if you want to, but it's easy enough to shut it up too.


Here's my preference list for changes:

Top preference: don't change stuff.

Next: use the deprecated attribute

Next: versioned "scheduled to be deprecated" messages (sketched just after this list). I don't like
being spammed every time I compile.

Next: "scheduled to be deprecated" messages as they are now

Last: removing it entirely. (this should be very, very rare
especially if we want to be called "stable". Nothing has pissed me
off more with the last few releases than Phobos losing
functionality.)
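
One way to read the "versioned" option above, sketched with an invented opt-in version identifier rather than any existing compiler support:

// The notice is only emitted when the user compiles with
// -version=PendingDeprecations, so nobody gets spammed on every build.
version (PendingDeprecations)
    pragma(msg, "example.oldFunc is scheduled for deprecation; use newFunc instead.");

void oldFunc() { }
void newFunc() { }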


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Michel Fortin

On 2011-07-12 16:52:10 -0400, Leandro Lucarella l...@llucax.com.ar said:


This is what deprecated is for! Removing stuff breaks code, not
deprecating stuff! Deprecated really is "scheduled for removal", so
"scheduled for deprecation" is "scheduled for scheduled for removal"; it
makes no sense. Fix the compiler!


Actually it sometimes makes sense that you'd schedule something to be 
later scheduled for removal. If there is no replacement for a certain 
feature, making it deprecated is just a nuisance since the compiler 
will complain about the problem but you have no alternative yet.


Now, I think the argument for scheduling things for deprecation is just 
an extreme of that: deprecating things is a nuisance because it breaks 
code, so we'll schedule them to be deprecated later. But then we add a 
warning for those things scheduled for deprecation because we don't 
want people to use them, and "scheduled for deprecation" has just 
become a synonym for "deprecated but not yet breaking your code". 
Fixing deprecated to not break code by default is a better option, I 
think.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 15:09, Adam D. Ruppe wrote:
 Jonathan M Davis wrote:
  Deprecating something is still going to break code
 
 Breaking with deprecated is an entirely different kind of breakage
 than removing something.
 
 deprecated means simply "please don't use this specific thing". You
 can tell it "shut up, I know better than you" and be on your way.

 It's in your face enough that you can change it right there and then
 if you want to, but it's easy enough to shut it up too. 

True. But Walter has been pretty insistent that things not be deprecated 
without warning first, because code which compiled perfectly before doesn't 
anymore, even if all you have to do is change your build scripts.

Now, as for the "scheduled for deprecation" messages, we can stop doing that. 
But then the documentation is going to be the only thing warning anyone, and 
then code is going to get broken when stuff is actually deprecated. Given the 
fact that you can use -d, that's not the end of the world. But it does mean 
that deprecation is going to tend to come out of nowhere for most people, and 
Walter has been very adamant about avoiding suddenly breaking people's code - 
even by requiring them to add -d to their build scripts.

So, if most people don't want the messages, then the messages will go away. 
But that means that people actually need to pay attention to the changelog and 
documentation.

 Here's my preference list for changes:
 
 Top preference: don't change stuff.
 
 Next: use the deprecated attribute
 
 Next: versioned scheduled to be deprecated messages. I don't like
 being spammed every time I compile.
 
 Next: scheduled to be deprecated messages as they are now
 
 Last: removing it entirely. (this should be very, very rare
 especially if we want to be called stable. Nothing has pissed me
 off more with the last few releases than Phobos losing
 functionality.)

The current plan is that _everything_ which gets deprecated will be removed. 
Stuff which is deprecated is not intended to stick around. Now, it should be 
pretty rare that deprecated stuff doesn't have a replacement. Outright 
removing functionality should be very rare indeed. It may happen in a few 
cases where the functionality just isn't generally useful, but overall, it 
should be rare.

Deprecation is likely to fall primarily into 3 categories at this point:

1. Renaming stuff to follow Phobos' naming conventions. A lot of this was 
fixed with 2.054, but there's still some left to do. For the most part though, 
this should be a set of fixes which will be done fairly soon and then we won't 
have to make those kinds of changes again.

2. Small redesigns of older functionality. The prime case that I can think of 
is that there has been talk of replacing the use of patterns in std.string 
with uses of std.regex.Regex.

3. Full module redesigns due to the need of serious improvement. std.datetime 
replacing std.date would be a prime example of this, but there are a few other 
modules which are supposed to be redesigned (e.g. std.xml and std.stream).

The idea at least is that these sorts of changes should be taking place fairly 
soon and that we then won't need to do any of that kind of thing anymore (or 
at least only very rarely). The review process should catch most of these sorts 
of issues before they actually get into Phobos in the first place. But some 
code has not aged well as D and Phobos have changed, and Phobos has not always 
been consistent in naming, and that needs to be fixed sooner rather than 
later.

- Jonathan M Davis


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Adam D. Ruppe
Jonathan M Davis wrote:
 The current plan is that _everything_ which gets deprecated will
 be removed.

What's the reason for removing things? Surely it's not disk space!


Anyway, let's look at the three categories. While I hate change,
there are two kinds of change: trivial and painful. Remember, D
isn't a useless piece of junk dynamic language - trivial changes
are easy to find and easy to change with confidence.

Painful changes though, are, well, painful.


 1. Renaming stuff to follow Phobos' naming conventions.

These are trivial, just change it. It's like ripping off a band-aid.
The compiler will tell you what broke and how to fix it (the spell
checker ought to catch camelcase changes without keeping the old
name).

Do it fast, feel the brief pain, and move on.


 2. Small redesigns of older functionality.

These should be reasonably trivial fixes too, but might warrant
deprecating the old on a case by case basis. If it's mindless
to change though, just rip that bandage off.

Just make sure that the types are different or something, while
still super easy to change, so the compiler will point it out to you.


 3. Full module redesigns due to the need of serious improvement

This is where the pain comes in, since instead of spending 15
minutes running a mindless find/replace when the compiler tells
you to, it requires gutting a lot of code and rethinking it,
converting databases, etc.

These should ideally aim to redesign the internals, but keep the
same interface. Maybe adding to it or writing the old as an
emulation layer over the new.

This is preferred, since then new and old exist together. It avoids
actually breaking anybody's code.


If that's impossible though, this is the most likely candidate
for deprecation. Unlike a name change, it isn't easy to change,
so a compile error is more likely to mean reverting dmd versions
than actually changing it.


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Walter Bright

On 7/11/2011 8:31 AM, Andrej Mitrovic wrote:

Walter, could you please add these to the changelog:


Done.


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Andrej Mitrovic
Thanks!


Re: dmd 1.069 and 2.054 release

2011-07-12 Thread Jonathan M Davis
On Tuesday 12 July 2011 23:38:10 Adam D. Ruppe wrote:
 Jonathan M Davis wrote:
  The current plan is that _everything_ which gets deprecated will
  be removed.
 
 What's the reason for removing things? Surely it's not disk space!
 
 
 Anyway, let's look at the three categories. While I hate change,
 there are two kinds of change: trivial and painful. Remember, D
 isn't a useless piece of junk dynamic language - trivial changes
 are easy to find and easy to change with confidence.
 
 Painful changes though, are, well, painful.
 
  1. Renaming stuff to follow Phobos' naming conventions.
 
 These are trivial, just change it. It's like ripping off a band-aid.
 The compiler will tell you what broke and how to fix it (the spell
 checker ought to catch camelcase changes without keeping the old
 name).
 
 Do it fast, feel the brief pain, and move on.

Hmm. I don't think that Walter would be very happy about that, since it does 
immediately break code, but as long as the name change is easily found by the 
spellchecker, it wouldn't be a big deal to fix. So, under at least some 
circumstances, that may be acceptable. Regardless, it could be grounds for 
making the deprecation cycle for renaming relatively short instead of around 1 
year as is the current plan for deprecation in general.

  2. Small redesigns of older functionality.
 
 These should be reasonably trivial fixes too, but might warrant
 deprecating the old on a case by case basis. If it's mindless
 to change though, just rip that bandage off.
 
 Just make sure that the types are different or something, while
 still super easy to change, so the compiler will point it out to you.
 
  3. Full module redesigns due to the need of serious improvement
 
 This is where the pain comes in, since instead of spending 15
 minutes running a mindless find/replace when the compiler tells
 you to, it requires gutting a lot of code and rethinking it,
 converting databases, etc.
 
 These should ideally aim to redesign the internals, but keep the
 same interface. Maybe adding to it or writing the old as an
 emulation layer over the new.
 
 This is preferred, since then new and old exist together. It avoids
 actually breaking anybody's code.
 
 
 If that's impossible though, this is the most likely candidate
 for deprecation. Unlike a name change, it isn't easy to change,
 so a compile error is more likely to mean reverting dmd versions
 than actually changing it.

At this point, I expect that the module rewrites are going to generally be 
full-on, completely incompatible rewrites. Fixing the API is one of the major 
reasons for the rewrites (particularly when converting a module  to being 
range-based as is going to occur with std.stream), so just changing the 
implementation isn't going to cut it. It may be that in some cases, it's 
essentially a rewrite of a broken implementation, but I'm not aware of any 
such case at the moment. These are definitely cases, however, where the full 
deprecation cycle is going to be used, so there should be plenty of time to fix 
code. Hopefully, these changes get done fairly soon, but some of them don't 
seem to be going anywhere yet in spite of major discussions about them (e.g. 
std.stream). They're also the most work to do, so while I would expect most of 
the #1 and #2 types of deprecations to occur fairly soon for the most part, 
the full module rewrites could take a while, which is unfortunate.

Off the top of my head, I know that std.xml, std.stream, std.path, and std.json 
are going to get rewrites on some level, and std.container could get some 
major rewrites depending on what Andrei does with memory management in it, 
though what it needs primarily is new containers. Also, Andrei thinks that 
std.encoding is a failed experiment which needs to be redone, so that's 
probably going to need to be rewritten at some point. And as I understand it, 
all of those except for std.stream and std.encoding have someone actively 
working on them (though maybe Andrei has been working on std.stream too; I 
don't know). So, hopefully it won't be too much longer before they're done, 
but it could also be a while unfortunately.

So, anyway, there are some module rewrites to be done, and they're likely to 
be pretty major for better or worse. But once those are done, with the review 
process vetting new modules, deprecation issues like these should be far 
rarer, and Phobos should be heading towards stability.

- Jonathan M Davis


Re: Immutable separator to join() doesn't work

2011-07-12 Thread Jonathan M Davis
On Tuesday 12 July 2011 15:46:41 Daniel Murphy wrote:
 Jonathan M Davis jmdavisp...@gmx.com wrote in message
 news:mailman.1552.1310429761.14074.digitalmar...@puremagic.com...
 
  This enhancement request would make the situation with immutable and
  const arrays so that they're much more in line with mutable container
  types and static arrays:
  
  http://d.puremagic.com/issues/show_bug.cgi?id=6289
  
  - Jonathan M Davis
 
 WAIT WHAT?  That doesn't work?!?

Nope. It works for static arrays but not for const or immutable arrays. Try 
it. It'll fail. I don't know _why_ it doesn't work, but it doesn't. If it did, 
this would be a much smaller issue. It would be nice if templates were 
improved such that they instantiated range-based functions in a manner which 
worked for static arrays and const or immutable arrays, but if you could solve 
the problem by slicing a const or immutable array, it would make the situation 
far less problematic.

- Jonathan M Davis


Re: Immutable separator to join() doesn't work

2011-07-12 Thread Daniel Murphy
Jonathan M Davis jmdavisp...@gmx.com wrote in message 
news:mailman.1554.1310450510.14074.digitalmar...@puremagic.com...
 Nope. It works for static arrays but not for const or immutable arrays. 
 Try
 it. It'll fail. I don't know _why_ it doesn't work, but it doesn't. If it 
 did,
 this would be a much smaller issue. It would be nice if templates were
 improved such that they instantiated range-based functions in a manner 
 which
 worked for static arrays and const or immutable arrays, but if you could 
 solve
 the problem by slicing a const or immutable array, it would make the 
 situation
 far less problematic.

 - Jonathan M Davis

Yeah, looking at the implementation and the test cases that rely on this, it 
seems to have been done to allow slicing typedefs to yield the same type.  I 
really doubt this is something we need to support any more.
Every time this issue came up, I've always assumed this was how it worked!
Honestly, template deduction with implicit conversions is very unlikely to 
ever happen.  While it looks nice for one parameter, it quickly turns into a 
huge mess for multiple parameters.

There is a fairly easy workaround that could be used throughout phobos:
Accept T when isXXXRange!T || isXXXRange!(T[]), and use a static if to slice 
it when necessary.  This would solve the problem for containers, static 
arrays, immutable arrays, and other immutable ranges. 
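
A hedged sketch of that workaround (the function name is invented; this is not a proposed Phobos signature): accept T when T is already an input range, or when slicing it yields one, and do the slicing inside with a static if.

import std.range;

auto firstOf(T)(auto ref T x)
    if (isInputRange!T || (is(typeof(x[])) && isInputRange!(typeof(x[]))))
{
    static if (isInputRange!T)
        return x.front;
    else
        return x[].front;   // containers, static arrays, immutable arrays
}

unittest
{
    int[3] fixed = [1, 2, 3];
    immutable int[] imm = [4, 5, 6];
    assert(firstOf(fixed) == 1);
    assert(firstOf(imm) == 4);
}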




Re: Any companies using D?

2011-07-12 Thread Sönke Ludwig

Am 12.07.2011 06:38, schrieb ChrisW:

Obviously Digital Mars does development in D, but I was wondering if
any other companies have yet taken the plunge -- even if just in their
R&D department -- to try using D as their development language?


I am working for a company that develops photographic software. Right 
now I am working there on a compiler that is written in D. The decision 
was easy because this is a stand-alone project and the risk is well 
under control.


I would also really like to use it on a larger scale and get other 
people in - but from my experiences with my (largish) private project I 
see the risk (or time loss) due to compiler/linker bugs as still far too 
high. I would love to see a no-known-regression policy for new releases 
to decrease that risk.




Re: Lock-Free Actor-Based Flow Programming in D2 for GSOC2011?

2011-07-12 Thread Kagamin
eris Wrote:

 Windows uses a proactor model instead of reactor, so it schedules I/O first 
 and
 then waits for an IO completion flag. I've modified my reactor so that it 
 presents
 a reactor facade even on Windows systems.

Huh? What does it change? IO is done pretty much the same on all systems: 
the client requests an IO operation, and the OS sends the thread to sleep until the 
operation is complete.


CTFE writeln

2011-07-12 Thread KennyTM~
I've just opened a pull request* to enable std.stdio.writeln in CTFE. 
Any comments?


*: https://github.com/D-Programming-Language/dmd/pull/237


Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread Johannes Pfau
Walter Bright wrote:
On 7/11/2011 12:34 PM, Andrei Alexandrescu wrote:
 On 7/11/11 1:03 PM, Johannes Pfau wrote:
 Hi,

 is it ok to adapt the d-p-l.org library reference style for own
 projects?

 I'd like to use it for the cairoD documentation.

 Fine by me. Walter?

Yes.


Thanks, I published the documentation here:
http://jpf91.github.com/cairoD/api/cairo_c_cairo.html

I removed all the Digital Mars copyright notices for now; should I add
those back?

-- 
Johannes Pfau



Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread James Fisher
On Mon, Jul 11, 2011 at 7:03 PM, Johannes Pfau s...@example.com wrote:

 is it ok to adapt the d-p-l.org library reference style for own
 projects?


Not strictly relevant, but I'm working on the d-p-l.org design, including
neatening the documentation.  There's a thread on this.

The current design is the work of David Gileadi, I am told, so presumably he
owns the copyright on the front-end design?


Re: Immutable separator to join() doesn't work

2011-07-12 Thread Jonathan M Davis
On Tuesday 12 July 2011 16:16:59 Daniel Murphy wrote:
 There is a fairly easy workaround that could be used throughout phobos:
 Accept T when isXXXRange!T || isXXXRange!(T[]), and use a static if to slice
 it when necessary.  This would solve the problem for containers, static
 arrays, immutable arrays, and other immutable ranges.

I don't know whether that's a good idea for containers or const/immutable 
arrays, but it _definitely_ is a bad idea for static arrays. It would end up 
copying the entire array just because you forgot to slice it. Personally, I'm 
_far_ more inclined to say that you should just expect to have to slice 
something when you pass it to a range-based function. I think that the fact 
that the container that gets most used at this point is the dynamic array 
has gotten people used to not having to use slices much when proper containers 
would require them. Since arrays are really slices, it erodes the line 
between container and range, and I think that as proper containers are 
completed in std.container and enter mainstream use, it's going to throw a lot 
of people off, because they aren't going to function quite the same as arrays 
(primarily due to the fact that a container and a range are not the same thing 
with actual containers).

But regardless, while your suggestion might be a good idea in some cases, it's 
definitely not a good solution for static arrays. And I'm skeptical that it's a 
good idea in any case, but it would allow for immutable arrays to be used with 
range-based functions. It would likely be better, however, to simply make it 
so that slices of them can be used with range-based functions such as is the 
case with static arrays.

- Jonathan M Davis
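
As a small illustration of the "just slice it" position, using std.container.SList purely as an example:

import std.algorithm : equal;
import std.container : SList;

void main()
{
    int[3] fixedArr = [1, 2, 3];
    auto list = SList!int(1, 2, 3);

    // Neither the static array nor the container is itself a range;
    // the slice is what gets handed to range-based functions.
    assert(equal(fixedArr[], [1, 2, 3]));
    assert(equal(list[], [1, 2, 3]));
}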


Re: Immutable separator to join() doesn't work

2011-07-12 Thread Daniel Murphy
Jonathan M Davis jmdavisp...@gmx.com wrote in message 
news:mailman.1557.1310461819.14074.digitalmar...@puremagic.com...
 Personally, I'm
 _far_ more inclined to say that you should just expect to have to slice
 something when you pass it to a range-based function.

That's my thinking too.

 It would likely be better, however, to simply make it
 so that slices of them can be used with range-based functions such as is 
 the
 case with static arrays.

I really think this was always supposed to work, but the compiler was 
modified to allow slicing typedefs to result in typedefs.  Hopefully my 
patch for this will get pulled soon. 




Re: Lock-Free Actor-Based Flow Programming in D2 for GSOC2011?

2011-07-12 Thread Piotr Szturmaj

Kagamin wrote:

eris Wrote:


Windows uses a proactor model instead of reactor, so it schedules I/O first 
and
then waits for an IO completion flag. I've modified my reactor so that it 
presents
a reactor facade even on Windows systems.


Huh? What does it change? IO is done pretty much the same on all systems: 
client requests an io operation, OS send the thread to sleep until the 
operation is complete.


You mean synchronous blocking IO. Proactor in Windows means asynchronous 
non-blocking IO (overlapped IO) and completion ports. The client may request 
multiple IO operations and its thread is not put to sleep. Instead, the 
client receives all completed operations using 
GetQueuedCompletionStatus() or a callback function.


Re: Lock-Free Actor-Based Flow Programming in D2 for GSOC2011?

2011-07-12 Thread Kagamin
Piotr Szturmaj Wrote:

 Kagamin wrote:
  eris Wrote:
 
  Windows uses a proactor model instead of reactor, so it schedules I/O 
  first and
  then waits for an IO completion flag. I've modified my reactor so that it 
  presents
  a reactor facade even on Windows systems.
 
  Huh? What does it change? IO is done pretty much the same on all systems: 
  client requests an io operation, OS send the thread to sleep until the 
  operation is complete.
 
 You mean synchronous blocking IO. Proactor in Windows means asynchronous 
 non-blocking IO (overlapped IO) and completion ports. Client may request 
 multiple IO operations and its thread is not put to sleep. Instead, 
 client receives all completed operations using 
 GetQueuedCompletionStatus() or using callback function.

From what I understand, reactor is meant to be synchronous?


Re: Lock-Free Actor-Based Flow Programming in D2 for GSOC2011?

2011-07-12 Thread Piotr Szturmaj

Kagamin wrote:

Piotr Szturmaj Wrote:


Kagamin wrote:

eris Wrote:


Windows uses a proactor model instead of reactor, so it schedules I/O first 
and
then waits for an IO completion flag. I've modified my reactor so that it 
presents
a reactor facade even on Windows systems.


Huh? What does it change? IO is done pretty much the same on all systems: 
client requests an io operation, OS send the thread to sleep until the 
operation is complete.


You mean synchronous blocking IO. Proactor in Windows means asynchronous
non-blocking IO (overlapped IO) and completion ports. Client may request
multiple IO operations and its thread is not put to sleep. Instead,
client receives all completed operations using
GetQueuedCompletionStatus() or using callback function.


 From what I understand, reactor is meant to be synchronous?


Reactor is an event-handling pattern where the client specifies event handlers 
like onRecv() and just waits for the events - these are delivered as soon 
as they arrive. Proactor, on the other hand, requires that the client issue 
each asynchronous operation manually - there will be no events delivered 
until the client requests them.


For example, in Linux's epoll, events are delivered as soon as there is 
available data in the buffer (in level-triggered mode). In the Windows NT 
family, a recv event is delivered only after a successful call to WSARecv().


The proactor model has the (debatable) advantage of specifying the data buffer 
before issuing async IO. This could avoid data copying in some 
circumstances, because the IO manager can read data directly into that 
buffer. In reactor models (epoll) the buffer is provided after IO completes. 
This involves copying data from an internal buffer to the user's buffer.
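
For concreteness, a level-triggered epoll loop sketched in D (Linux only, error handling omitted; the function is illustrative, not a library API). The point is that the user buffer is named only after the readiness event, which is where the extra copy comes from:

import core.sys.linux.epoll;
import core.sys.posix.unistd : read;

void reactorLoop(int fd)
{
    int epfd = epoll_create1(0);

    epoll_event ev;
    ev.events = EPOLLIN;              // level-triggered readability
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    epoll_event[16] ready;
    ubyte[4096] buf;
    for (;;)
    {
        int n = epoll_wait(epfd, ready.ptr, cast(int) ready.length, -1);
        foreach (i; 0 .. n)
        {
            // The kernel has already buffered the data; we only supply our
            // buffer now, so the bytes are copied out of the kernel buffer here.
            auto got = read(ready[i].data.fd, buf.ptr, buf.length);
            // ... process buf[0 .. got] ...
        }
    }
}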


Re: CTFE writeln

2011-07-12 Thread bearophile
KennyTM~:

 I've just opened a pull request* to enable std.stdio.writeln in CTFE. 
 Any comments?
 
 *: https://github.com/D-Programming-Language/dmd/pull/237

A tidy printing function is needed at CT, and the better CTFE becomes, the more 
need there is for it.
I have suggested the name ctputs if it's kept simple (ctputs is supposed to work 
equally well at compile time and at run time!). But if you are able to implement 
the whole writeln, then calling it writeln is better - fewer names to remember 
and to use :-) A printing function is not just for debugging.

Bye,
bearophile


Re: Any companies using D?

2011-07-12 Thread Adam Ruppe
ChrisW wrote:
 I was wondering if any other companies have yet taken the plunge
 [...] to try using D as their development language?

I use D for virtually everything, including for a company's flagship
product for the last year and a half or so.


Re: Lock-Free Actor-Based Flow Programming in D2 for GSOC2011?

2011-07-12 Thread Kagamin
Piotr Szturmaj Wrote:

 Reactor is event handling pattern, when client specify event handlers 
 like onRecv() and just wait for the events - these are delivered as soon 
 as they arrive. Proactor on the other side requires that client issue 
 each asynchronous operation manually - there will be no events delivered 
 until client requests them.
 
 For example in linux's epoll, events are delivered as soon as there is 
 available data in the buffer (in level triggered mode). In Windows NT 
 family recv event is delivered only after successfull call to WSARecv().

epoll seems like the Windows synchronization API, except that in Windows one 
shouldn't wait on file handles directly.


Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread David Gileadi

On 7/12/11 2:02 AM, James Fisher wrote:

On Mon, Jul 11, 2011 at 7:03 PM, Johannes Pfau s...@example.com
mailto:s...@example.com wrote:

is it ok to adapt the d-p-l.org http://d-p-l.org library reference
style for own
projects?


Not strictly relevant, but I'm working on the d-p-l.org
http://d-p-l.org design, including neatening the documentation.
  There's a thread on this.

The current design is the work of David Gileadi, I am told, so
presumably he owns the copyright on the front-end design?


We never discussed this that I recall, but I hereby assign any copyright 
I might hold to Walter Bright and anyone else he designates.  Since I'm 
not a lawyer, if anything else is needed I'd be happy to provide it.  I 
may not need to do anything at all; the pages all state that the 
copyright belongs to Digital Mars.


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath
On Fri, 08 Jul 2011 18:59:47 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/8/2011 4:53 AM, Regan Heath wrote:
On Fri, 08 Jul 2011 10:49:08 +0100, Walter Bright  
newshou...@digitalmars.com

wrote:


On 7/8/2011 2:26 AM, Regan Heath wrote:

Why can't we have the
compiler call it automatically whenever we pass a string, or char[]  
to an extern

C function, where the parameter is defined as char*?


Because char* in C does not necessarily mean zero terminated string.


Sure, but in many (most?) cases it does. And in those cases where it  
doesn't you

could argue ubyte* or byte* should have been used in the D extern C
declaration instead. Plus, in those cases, worst case scenario, D  
passes an
extra \0 byte to those functions which either ignore it because they  
were also
passed a length, or expect a fixed sized structure, or .. I don't know  
what as I
can't imagine another case where char* would be used without it being a  
zero

terminated string, or passing/knowing the length ahead of time.


In the worst case, you're adding an extra memory allocation and function  
call overhead (that is hidden to the user, and not turn-off-able). This  
is not acceptable when interfacing to C.


This worst case only happens when:
1. The extern C function takes a char* and is NOT expecting a zero  
terminated string.
2. The char[], string, etc being passed is a fixed length array, or a  
slice which has no available space left for the \0.


So, it's rare.  I would guess a less than 1% of cases for general  
programming.


And, it *is* turn-off-able.  You simply change the extern C to use  
ubyte*, byte*, or void* (instead of char*).  This is arguably a better  
definition for this sort of function in the first place.



D is already allocating an extra \0 byte for string constants right?


Yes, but in a way that is essentially free.


Yep, this is essentially free, and calling toStringz automatically would  
be almost as free, for 99% of cases.  Plus it would "just work", which is a  
big deal when you're talking about first impressions etc.
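
A quick demonstration of the "essentially free" terminator on string literals (and only literals, which is exactly the limitation discussed later in the thread):

import core.stdc.stdio : puts;

void main()
{
    // String literals are guaranteed to have a '\0' stored just past their
    // last character, so passing them straight to C costs nothing extra.
    string lit = "hello from D";
    assert(lit.ptr[lit.length] == '\0');
    puts(lit.ptr);   // safe only because lit refers to a literal
}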


--
Using Opera's revolutionary email client: http://www.opera.com/mail/


CTFE: What limitations remain?

2011-07-12 Thread dsimcha
The documentation for CTFE is outdated and specifies limitations that no
longer exist thanks to Don's massive overhaul.  For example, a big one is that
pointers now work.  Which limitations that could potentially be removed still
exist in CTFE as of 2.054?
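
As a concrete instance of the pointer support mentioned above, a small sketch that evaluates under CTFE in current DMD (the function is made up for the test):

// Pointer into an array plus in-array pointer arithmetic, all at compile time.
int sumViaPointer()
{
    int[3] a = [1, 2, 3];
    int* p = a.ptr;
    int s = 0;
    foreach (i; 0 .. a.length)
        s += *(p + i);
    return s;
}

static assert(sumViaPointer() == 6);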


Re: toStringz or not toStringz

2011-07-12 Thread Steven Schveighoffer
On Tue, 12 Jul 2011 09:54:15 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Fri, 08 Jul 2011 18:59:47 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/8/2011 4:53 AM, Regan Heath wrote:
On Fri, 08 Jul 2011 10:49:08 +0100, Walter Bright  
newshou...@digitalmars.com

wrote:


On 7/8/2011 2:26 AM, Regan Heath wrote:

Why can't we have the
compiler call it automatically whenever we pass a string, or char[]  
to an extern

C function, where the parameter is defined as char*?


Because char* in C does not necessarily mean zero terminated string.


Sure, but in many (most?) cases it does. And in those cases where it  
doesn't you

could argue ubyte* or byte* should have been used in the D extern C
declaration instead. Plus, in those cases, worst case scenario, D  
passes an
extra \0 byte to those functions which either ignore it because they  
were also
passed a length, or expect a fixed sized structure, or .. I don't know  
what as I
can't imagine another case where char* would be used without it being  
a zero

terminated string, or passing/knowing the length ahead of time.


In the worst case, you're adding an extra memory allocation and  
function call overhead (that is hidden to the user, and not  
turn-off-able). This is not acceptable when interfacing to C.


This worst case only happens when:
1. The extern C function takes a char* and is NOT expecting a zero  
terminated string.
2. The char[], string, etc being passed is a fixed length array, or a  
slice which has no available space left for the \0.


So, it's rare.  I would guess a less than 1% of cases for general  
programming.


What if the function is expecting to write to the buffer, and  
the compiler just made a copy of it?  Won't that be pretty surprising?


-Steve


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath
On Tue, 12 Jul 2011 15:18:04 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 09:54:15 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Fri, 08 Jul 2011 18:59:47 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/8/2011 4:53 AM, Regan Heath wrote:
On Fri, 08 Jul 2011 10:49:08 +0100, Walter Bright  
newshou...@digitalmars.com

wrote:


On 7/8/2011 2:26 AM, Regan Heath wrote:

Why can't we have the
compiler call it automatically whenever we pass a string, or char[]  
to an extern

C function, where the parameter is defined as char*?


Because char* in C does not necessarily mean zero terminated  
string.


Sure, but in many (most?) cases it does. And in those cases where it  
doesn't you

could argue ubyte* or byte* should have been used in the D extern C
declaration instead. Plus, in those cases, worst case scenario, D  
passes an
extra \0 byte to those functions which either ignore it because they  
were also
passed a length, or expect a fixed sized structure, or .. I don't  
know what as I
can't imagine another case where char* would be used without it being  
a zero

terminated string, or passing/knowing the length ahead of time.


In the worst case, you're adding an extra memory allocation and  
function call overhead (that is hidden to the user, and not  
turn-off-able). This is not acceptable when interfacing to C.


This worst case only happens when:
1. The extern C function takes a char* and is NOT expecting a zero  
terminated string.
2. The char[], string, etc being passed is a fixed length array, or a  
slice which has no available space left for the \0.


So, it's rare.  I would guess a less than 1% of cases for general  
programming.


What if you expect the function is expecting to write to the buffer, and  
the compiler just made a copy of it?  Won't that be pretty surprising?


Assuming a C function in this form:

  void write_to_buffer(char *buffer, int length);

You might initially extern it as:

  extern(C) void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

  char[] foo = new char[100];
  write_to_buffer(foo, foo.length);

or:

  char[100] foo;
  write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero terminated  
already (in both cases), or am I wrong about that?


--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: toStringz or not toStringz

2011-07-12 Thread Steven Schveighoffer
On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Tue, 12 Jul 2011 15:18:04 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 09:54:15 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Fri, 08 Jul 2011 18:59:47 +0100, Walter Bright  
newshou...@digitalmars.com wrote:



On 7/8/2011 4:53 AM, Regan Heath wrote:
On Fri, 08 Jul 2011 10:49:08 +0100, Walter Bright  
newshou...@digitalmars.com

wrote:


On 7/8/2011 2:26 AM, Regan Heath wrote:

Why can't we have the
compiler call it automatically whenever we pass a string, or  
char[] to an extern

C function, where the parameter is defined as char*?


Because char* in C does not necessarily mean zero terminated  
string.


Sure, but in many (most?) cases it does. And in those cases where it  
doesn't you

could argue ubyte* or byte* should have been used in the D extern C
declaration instead. Plus, in those cases, worst case scenario, D  
passes an
extra \0 byte to those functions which either ignore it because they  
were also
passed a length, or expect a fixed sized structure, or .. I don't  
know what as I
can't imagine another case where char* would be used without it  
being a zero

terminated string, or passing/knowing the length ahead of time.


In the worst case, you're adding an extra memory allocation and  
function call overhead (that is hidden to the user, and not  
turn-off-able). This is not acceptable when interfacing to C.


This worst case only happens when:
1. The extern C function takes a char* and is NOT expecting a zero  
terminated string.
2. The char[], string, etc being passed is a fixed length array, or a  
slice which has no available space left for the \0.


So, it's rare.  I would guess a less than 1% of cases for general  
programming.


What if you expect the function is expecting to write to the buffer,  
and the compiler just made a copy of it?  Won't that be pretty  
surprising?


Assuming a C function in this form:

   void write_to_buffer(char *buffer, int length);


No, assuming C function in this form:

void ucase(char* str);

Essentially, a C function which takes a writable already-null-terminated  
string, and writes to it.



You might initially extern it as:

   extern C void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

   char[] foo = new char[100];
   write_to_buffer(foo, foo.length);

or:

   char[100] foo;
   write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero terminated  
already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.  The only thing  
that guarantees null termination is a string literal.  Even "abc".dup is  
not going to be guaranteed to be null terminated.  For an actual example,  
try "012345678901234".dup.  This should have a 0x0f right after the last  
character.


-Steve


Re: toStringz or not toStringz

2011-07-12 Thread Steven Schveighoffer
On Tue, 12 Jul 2011 10:59:58 -0400, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


and in both cases, toStringz would do nothing as foo is zero terminated  
already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.  The only thing  
that guarantees null termination is a string literal.  Even "abc".dup is  
not going to be guaranteed to be null terminated.  For an actual  
example, try "012345678901234".dup.  This should have a 0x0f right after  
the last character.


And, actually, checking whether you are going to segfault  
(i.e. checking if the ptr is into heap data, and then getting the length)  
is quite costly.  You must take the GC lock.


-Steve


Re: CTFE: What limitations remain?

2011-07-12 Thread d coder
For example, a big one is that
 pointers now work.


Do function pointers work?


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath
On Tue, 12 Jul 2011 15:59:58 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


What if you expect the function is expecting to write to the buffer,  
and the compiler just made a copy of it?  Won't that be pretty  
surprising?


Assuming a C function in this form:

   void write_to_buffer(char *buffer, int length);


No, assuming C function in this form:

void ucase(char* str);

Essentially, a C function which takes a writable already-null-terminated  
string, and writes to it.


Ok, that's an even better example for my case.

It would be used/called like...

  char[] foo;
  .. code which populates foo with something ..
  ucase(foo);

and in D today this would corrupt memory.  Unless the programmer  
remembered to write:


  ucase(toStringz(foo));

So, +1 for compiler called toStringz.

I am assuming also that if this idea were implemented it would handle  
things intelligently, like for example if when toStringz is called the  
underlying array is out of room and needs to be reallocated, the compiler  
would update the slice/reference 'foo' in the same way as it already does  
for an append which triggers a reallocation.



You might initially extern it as:

   extern C void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

   char[] foo = new char[100];
   write_to_buffer(foo, foo.length);

or:

   char[100] foo;
   write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero terminated  
already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.


True, but I was outlining the worst case scenario for my suggestion, not  
describing the real C function requirements.


In this particular case the extern C declaration (IMO) for this style of  
function should be one of:


  extern(C) void write_to_buffer(ubyte *buffer, int length);
  extern(C) void write_to_buffer(byte *buffer, int length);
  extern(C) void write_to_buffer(void *buffer, int length);

which would all be ignored by my suggestion.


The only thing that guarantees null termination is a string literal.


string literals /and/ calling toStringz.

Even "abc".dup is not going to be guaranteed to be null terminated.  For  
an actual example, try "012345678901234".dup.  This should have a 0x0f  
right after the last character.


Why 0x0f?  Does the allocator initialise array memory to its offset from  
the start of the block or something?


I have just realised that char is initialised to 0xFF.  That is a problem  
as my two examples above would be arrays full of 0xFF, not \0, meaning  
toStringz would have to reallocate to append \0 to them, drat.  That is  
yet another reason to use ubyte or byte when interfacing with C.


Ok, how about going the other way?  Can we have something to decorate  
extern(C) function parameters to trigger an implicit call of toStringz on  
them?
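
Short of a language change, the conversion can also live in a thin wrapper. A sketch, using a local stand-in for the hypothetical C ucase from earlier in the thread (a real binding would be extern(C) void ucase(char* str);):

// Stand-in so the sketch is runnable on its own.
void ucase(char* str)
{
    import core.stdc.ctype : toupper;
    for (; *str; ++str)
        *str = cast(char) toupper(*str);
}

// Thin D wrapper: terminate, call C, drop the terminator again. The append
// may reallocate, which is exactly the aliasing caveat Steven raised earlier.
void ucaseD(ref char[] s)
{
    s ~= '\0';
    ucase(s.ptr);
    s = s[0 .. $ - 1];
}

void main()
{
    char[] foo = "mixed Case".dup;
    ucaseD(foo);
    assert(foo == "MIXED CASE");
}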


--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath
On Tue, 12 Jul 2011 16:04:15 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:59:58 -0400, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


and in both cases, toStringz would do nothing as foo is zero  
terminated already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.  The only  
thing that guarantees null termination is a string literal.  Even  
"abc".dup is not going to be guaranteed to be null terminated.  For an  
actual example, try "012345678901234".dup.  This should have a 0x0f  
right after the last character.


And, actually, the cost penalty of checking if you are going to segfault  
(i.e. checking if the ptr is into heap data, and then getting the  
length) is quite costly.  You must take the GC lock.


I wouldn't know anything about this.  I was assuming when toStringz was  
called on a slice it would use the array capacity and length to figure out  
where the \0 needed to be, and do as little work as possible to achieve  
it.  Meaning in most cases that \0 is written to 1 past the length, inside  
already allocated capacity.


--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Any companies using D?

2011-07-12 Thread Jesse Phillips
ChrisW Wrote:

 Obviously Digital Mars does development in D, but I was wondering if
 any other companies have yet taken the plunge -- even if just in their
 R&D department -- to try using D as their development language?

Well, the company hasn't really endorsed my usage of D, but I've been creating 
some verification tools for our product which used to be done by hand (or not at 
all) with exporting to Excel. It is something to use internally and can't 
actually do any harm. The capabilities it has demonstrated have gotten 
interest, but the response I got from someone in R&D was "I've just never met 
someone that uses [D]."


Re: toStringz or not toStringz

2011-07-12 Thread Steven Schveighoffer
On Tue, 12 Jul 2011 11:41:56 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Tue, 12 Jul 2011 15:59:58 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


What if you expect the function is expecting to write to the buffer,  
and the compiler just made a copy of it?  Won't that be pretty  
surprising?


Assuming a C function in this form:

   void write_to_buffer(char *buffer, int length);


No, assuming C function in this form:

void ucase(char* str);

Essentially, a C function which takes a writable  
already-null-terminated string, and writes to it.


Ok, that's an even better example for my case.

It would be used/called like...

   char[] foo;
   .. code which populates foo with something ..
   ucase(foo);

and in D today this would corrupt memory.  Unless the programmer  
remembered to write:


No, it wouldn't compile.  char[] does not cast implicitly to char *.  (if  
it does, that needs to change).


I am assuming also that if this idea were implemented it would handle  
things intelligently, like for example if when toStringz is called the  
underlying array is out of room and needs to be reallocated, the  
compiler would update the slice/reference 'foo' in the same way as it  
already does for an append which triggers a reallocation.


OK, but what if it's like this:

char[] foo = new char[100];
auto bar = foo;

ucase(foo);

In most cases, bar is also written to, but in some cases only foo is  
written to.


Granted, we're getting further out on the hypothetical limb here :)  But  
my point is, making it require explicit calling of toStringz instead of  
implicit makes the code less confusing, because you understand "oh,  
toStringz may reallocate, so I can't expect bar to also get updated" vs.  
simply calling a function with a buffer.



You might initially extern it as:

   extern(C) void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

   char[] foo = new char[100];
   write_to_buffer(foo, foo.length);

or:

   char[100] foo;
   write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero  
terminated already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.


True, but I was outlining the worst case scenario for my suggestion, not  
describing the real C function requirements.


No, I mean you were wrong, D does not guarantee either of those (stack  
allocated or heap allocated) is null terminated.  So toStringz must add a  
'\0' at the end (which is mildly expensive for heap data, and very  
expensive for stack data).



The only thing that guarantees null termination is a string literal.


string literals /and/ calling toStringz.

Even "abc".dup is not going to be guaranteed to be null terminated.   
For an actual example, try "012345678901234".dup.  This should have a  
0x0f right after the last character.


Why 0x0f?  Does the allocator initialise array memory to its offset  
from the start of the block or something?


The final byte of the block is used as the hidden array length (in this  
case 15).
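
Purely as an illustration (this peeks one byte past the slice, which is  
still inside the allocated block, but what it holds is runtime bookkeeping  
and an implementation detail):

import std.stdio;

void main()
{
    char[] a = "012345678901234".dup;   // 15 chars, so the 16-byte block is full
    // per the discussion above, the final block byte is the hidden length, not '\0'
    writefln("byte just past the data: 0x%02x", cast(ubyte) a.ptr[a.length]);
}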


-Steve


dmd 2.054 segfaults

2011-07-12 Thread d coder
Here is a reduced test case. Before filing a bug report, just wanted to make
sure if I am doing something obviously wrong here.

import std.stdio;
struct Foo(IF, size_t N) { }
interface Bar { }
void main() {
  void printFoo(T: Foo!(IF, N), IF, size_t N)(T foo)
    if(is(IF == interface)) {
    writeln("Type: ", T.stringof);
  }
  Foo!(Bar, 1) foo;
  printFoo(foo);
}


Re: dmd 2.054 segfaults

2011-07-12 Thread d coder


 import std.stdio;
 struct Foo(IF, size_t N) { }
 interface Bar { }
 void main() {
   void printFoo(T: Foo!(IF, N), IF, size_t N)(T foo)
 if(is(IF == interface)) {
 writeln("Type: ", T.stringof);
   }
   Foo!(Bar, 1) foo;
   printFoo(foo);
 }



Just tried and found that it works just fine with 2.053. If somebody
provides a patch for 2.054, I will be happy to test on my bigger use cases.
:-)

Regards
- Puneet


Re: dmd 2.054 segfaults

2011-07-12 Thread d coder
Also found that if I take out the size_t N template parameter, it works
fine with dmd-2.054 too. So the following test code compiles and runs just
fine:

import std.stdio;
struct Foo(IF/*, size_t N*/) {}
interface Bar {}
void main() {
  void printFoo(T: Foo!(IF/*, N*/), IF/*, size_t N*/)(T foo)
    if(is(IF == interface)) {
    writeln("Type: ", T.stringof);
  }
  Foo!(Bar/*, 1*/) foo;
  printFoo(foo);
}


Re: Any companies using D?

2011-07-12 Thread bearophile
Jesse Phillips:

 but the response I'd got from someone in R&D was "I've just never met someone 
 that uses [D]".

I presume they use only the R (language) then.

Sorry,
bearophile


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath
On Tue, 12 Jul 2011 17:09:04 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 11:41:56 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Tue, 12 Jul 2011 15:59:58 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


What if you expect the function is expecting to write to the buffer,  
and the compiler just made a copy of it?  Won't that be pretty  
surprising?


Assuming a C function in this form:

   void write_to_buffer(char *buffer, int length);


No, assuming C function in this form:

void ucase(char* str);

Essentially, a C function which takes a writable  
already-null-terminated string, and writes to it.


Ok, that's an even better example for my case.

It would be used/called like...

   char[] foo;
   .. code which populates foo with something ..
   ucase(foo);

and in D today this would corrupt memory.  Unless the programmer  
remembered to write:


No, it wouldn't compile.  char[] does not cast implicitly to char *.   
(if it does, that needs to change).


Replace foo with foo.ptr, it makes no difference to the point I was making.

I am assuming also that if this idea were implemented it would handle  
things intelligently, like for example if when toStringz is called the  
underlying array is out of room and needs to be reallocated, the  
compiler would update the slice/reference 'foo' in the same way as it  
already does for an append which triggers a reallocation.


OK, but what if it's like this:

char[] foo = new char[100];
auto bar = foo;

ucase(foo);

In most cases, bar is also written to, but in some cases only foo is  
written to.


Granted, we're getting further out on the hypothetical limb here :)  But  
my point is, making it require explicit calling of toStringz instead of  
implicit makes the code less confusing, because you understand "oh,  
toStringz may reallocate, so I can't expect bar to also get updated" vs.  
simply calling a function with a buffer.


This is not a 'new' problem introduced the idea, it's a general problem  
for D/arrays/slices and the same happens with an append, right?  In which  
case it's not a reason against the idea.



You might initially extern it as:

   extern(C) void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

   char[] foo = new char[100];
   write_to_buffer(foo, foo.length);

or:

   char[100] foo;
   write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero  
terminated already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.


True, but I was outlining the worst case scenario for my suggestion,  
not describing the real C function requirements.


No, I mean you were wrong, D does not guarantee either of those (stack  
allocated or heap allocated) is null terminated.  So toStringz must add  
a '\0' at the end (which is mildly expensive for heap data, and very  
expensive for stack data).


Ah, ok, this was because I had forgotten char is initialised to 0xFF.  If  
it was initialised to \0 then both arrays would have been full of null  
terminators.  The default value of char is the killing blow to the idea.



The only thing that guarantees null termination is a string literal.


string literals /and/ calling toStringz.

Even "abc".dup is not going to be guaranteed to be null terminated.   
For an actual example, try "012345678901234".dup.  This should have a  
0x0f right after the last character.


Why 0x0f?  Does the allocator initialise array memory to its offset  
from the start of the block or something?


The final byte of the block is used as the hidden array length (in this  
case 15).


Good to know.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: toStringz or not toStringz

2011-07-12 Thread Regan Heath

Gah.. bad grammar.. 1/2 baked sentences..

On Tue, 12 Jul 2011 18:00:41 +0100, Regan Heath re...@netmail.co.nz  
wrote:
On Tue, 12 Jul 2011 17:09:04 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:
No, it wouldn't compile.  char[] does not cast implicitly to char *.   
(if it does, that needs to change).


Replace foo with foo.ptr, it makes no difference to the point I was  
making.


Which was that a new D user would pass foo.ptr rather than go looking for,  
and find toStringz.  We've had a number of cases on the learn NG in the  
past.



OK, but what if it's like this:

char[] foo = new char[100];
auto bar = foo;

ucase(foo);

In most cases, bar is also written to, but in some cases only foo is  
written to.


Granted, we're getting further out on the hypothetical limb here :)   
But my point is, making it require explicit calling of toStringz  
instead of implicit makes the code less confusing, because you  
understand "oh, toStringz may reallocate, so I can't expect bar to also  
get updated" vs. simply calling a function with a buffer.


This is not a 'new' problem introduced the idea, it's a general problem

-- ^by
for D/arrays/slices and the same happens with an append, right?  In  
which case it's not a reason against the idea.


Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread Walter Bright

On 7/12/2011 1:50 AM, Johannes Pfau wrote:

Thanks, I published the documentation here:
http://jpf91.github.com/cairoD/api/cairo_c_cairo.html

I removed all the Digital Mars copyright notices for now, should I add
those back?


No. But one thing does concern me, the use of the D logo. Using it in the upper 
left corner implies that Cairo is an officially supported D library. May I 
suggest instead one of the other logos in the recent thread about logos in the D 
newsgroup? Or heck, you get to have the fun of creating a logo!




Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread Walter Bright

On 7/12/2011 6:48 AM, David Gileadi wrote:

We never discussed this that I recall, but I hereby assign any copyright I might
hold to Walter Bright and anyone else he designates. Since I'm not a lawyer, if
anything else is needed I'd be happy to provide it. I may not need to do
anything at all; the pages all state that the copyright belongs to Digital Mars.


I had never thought about it, but thanks! That removes any potential issue.


Re: dmd 2.054 segfaults

2011-07-12 Thread Brad Roberts
Does dmd crash or does the resulting app?  If the former, then please file a 
bug report.  The validity of the code isn't relevant.

On Jul 12, 2011, at 9:41 AM, d coder dlang.co...@gmail.com wrote:

 Also found that if I take out the size_t N template parameter, it works 
 fine with dmd-2.054 too. So the following test code compiles and runs just 
 fine:
 
 import std.stdio;
 struct Foo(IF/*, size_t N*/) {}
 interface Bar {}
 void main() {
   void printFoo(T: Foo!(IF/*, N*/), IF/*, size_t N*/)(T foo)
 if(is(IF == interface)) {
 writeln("Type: ", T.stringof);
   }
   Foo!(Bar/*, 1*/) foo;
   printFoo(foo);
 }
 


Re: Using the d-p-l.org ddoc style for 3rd party libraries?

2011-07-12 Thread Johannes Pfau
Walter Bright wrote:
On 7/12/2011 1:50 AM, Johannes Pfau wrote:
 Thanks, I published the documentation here:
 http://jpf91.github.com/cairoD/api/cairo_c_cairo.html

 I removed all the Digital Mars copyright notices for now, should I
 add those back?

No. But one thing does concern me, the use of the D logo. Using it in
the upper left corner implies that Cairo is an officially supported D
library. May I suggest instead one of the other logos in the recent
thread about logos in the D newsgroup? Or heck, you get to have the
fun of creating a logo!


Ok, I removed it.
I might add another one later, but for now I just want to have the api
documentation working ;-)

-- 
Johannes Pfau



Re: dmd 2.054 segfaults

2011-07-12 Thread d coder
dmd crashes during compile process itself. I have filed a bug report.
http://d.puremagic.com/issues/show_bug.cgi?id=6295

Regards
- Puneet


Re: dmd 2.054 segfaults

2011-07-12 Thread KennyTM~

On Jul 13, 11 00:33, d coder wrote:


import std.stdio;
struct Foo(IF, size_t N) { }
interface Bar { }
void main() {
   void printFoo(T: Foo!(IF, N), IF, size_t N)(T foo)
 if(is(IF == interface)) {
 writeln("Type: ", T.stringof);
   }
   Foo!(Bar, 1) foo;
   printFoo(foo);
}



Seems to be my fault :|. Please file a bugzilla anyway. The compiler 
should not segfault whether you're doing right or wrong.


Re: toStringz or not toStringz

2011-07-12 Thread Steven Schveighoffer
On Tue, 12 Jul 2011 13:00:41 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Tue, 12 Jul 2011 17:09:04 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 11:41:56 -0400, Regan Heath re...@netmail.co.nz  
wrote:


On Tue, 12 Jul 2011 15:59:58 +0100, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Tue, 12 Jul 2011 10:50:07 -0400, Regan Heath re...@netmail.co.nz  
wrote:


What if you expect the function is expecting to write to the  
buffer, and the compiler just made a copy of it?  Won't that be  
pretty surprising?


Assuming a C function in this form:

   void write_to_buffer(char *buffer, int length);


No, assuming C function in this form:

void ucase(char* str);

Essentially, a C function which takes a writable  
already-null-terminated string, and writes to it.


Ok, that's an even better example for my case.

It would be used/called like...

   char[] foo;
   .. code which populates foo with something ..
   ucase(foo);

and in D today this would corrupt memory.  Unless the programmer  
remembered to write:


No, it wouldn't compile.  char[] does not cast implicitly to char *.   
(if it does, that needs to change).


Replace foo with foo.ptr, it makes no difference to the point I was  
making.


Your fix does not help in that case; foo.ptr will be passed as a non-null  
terminated string.


So, your proposal fixes the case:

1. The user tries to pass a string/char[] to a C function.  Fails to  
compile.
2. Instead of trying to understand the issue, realizes the .ptr member is  
the right type, and switches to that.


It does not fix or help with cases where:

 * a programmer notices the type of the parameter is char * and uses  
foo.ptr without trying foo first. (crash)
 * a programmer calls toStringz without going through the compile/fix  
cycle above.
 * a programmer tries to pass string/char[], fails to compile, then looks  
up how to interface with C and finds toStringz


I think this fix really doesn't solve a very common problem.

I am assuming also that if this idea were implemented it would handle  
things intelligently, like for example if when toStringz is called the  
underlying array is out of room and needs to be reallocated, the  
compiler would update the slice/reference 'foo' in the same way as it  
already does for an append which triggers a reallocation.


OK, but what if it's like this:

char[] foo = new char[100];
auto bar = foo;

ucase(foo);

In most cases, bar is also written to, but in some cases only foo is  
written to.


Granted, we're getting further out on the hypothetical limb here :)   
But my point is, making it require explicit calling of toStringz  
instead of implicit makes the code less confusing, because you  
understand "oh, toStringz may reallocate, so I can't expect bar to also  
get updated" vs. simply calling a function with a buffer.


This is not a 'new' problem introduced the idea, it's a general problem  
for D/arrays/slices and the same happens with an append, right?  In  
which case it's not a reason against the idea.


It's new with respect to the documented behaviour of the C function being  
called.  If you look up the man page for such a hypothetical function, it  
might claim that it alters the data passed in through the argument, but it  
seems not to be the case!  So there's no way for someone (who arguably is not well versed in C  
functions if they didn't know to use toStringz) to figure out why the code  
seems not to do what it says it should.  Such a programmer may blame  
either the implementation of the C function, or blame the D compiler for  
not calling the function properly.





You might initially extern it as:

   extern(C) void write_to_buffer(char *buffer, int length);

And, you could call it one of 2 ways (legitimately):

   char[] foo = new char[100];
   write_to_buffer(foo, foo.length);

or:

   char[100] foo;
   write_to_buffer(foo, foo.length);

and in both cases, toStringz would do nothing as foo is zero  
terminated already (in both cases), or am I wrong about that?


In neither case are they required to be null terminated.


True, but I was outlining the worst case scenario for my suggestion,  
not describing the real C function requirements.


No, I mean you were wrong, D does not guarantee either of those (stack  
allocated or heap allocated) is null terminated.  So toStringz must add  
a '\0' at the end (which is mildly expensive for heap data, and very  
expensive for stack data).


Ah, ok, this was because I had forgotten char is initialised to 0xFF.   
If it was initialised to \0 then both arrays would have been full of  
null terminators.  The default value of char is the killing blow to the  
idea.


toStringz does not currently check for '\0' anywhere in the existing  
string.  It simply appends '\0' to the end of the passed string.  If you  
want it to check for '\0', how far should it go?  Doesn't this also add to  
the overhead (looping over all chars looking for '\0')?


Note also, that toStringz has 

Re: Any companies using D?

2011-07-12 Thread Adam Ruppe
bearophile wrote:
 I presume they use only the R (language) then

LOL


Re: D Programming Language Specification ebook

2011-07-12 Thread Walter Bright

On 7/4/2011 4:18 PM, Walter Bright wrote:

On 7/3/2011 7:11 PM, Walter Bright wrote:

What do you all think? $0? $4.99? $9.99?


Ok, you all made a good case. It'll be $0.


I just published it on Amazon. It'll take a couple days before it goes live, if 
it survives their review process.


Unfortunately, there seems to be no option to make it free. While Amazon has 
lots of free kindle books, they don't provide that option for self-publishers.


So, I picked the minimum price, which is $0.99. You can still download it free 
from the website. I suppose the $.99 pays for the convenience of having Amazon 
load it directly onto your Kindle.


Re: Any companies using D?

2011-07-12 Thread Justin Whear
I work at an economics firm and we use D extensively for our in-house
tools (our product is web-based).


Re: Any companies using D?

2011-07-12 Thread Walter Bright

On 7/11/2011 9:38 PM, ChrisW wrote:

Obviously Digital Mars does development in D, but I was wondering if
any other companies have yet taken the plunge -- even if just in their
R&D department -- to try using D as their development language?


Just today:

http://www.digitalmars.com/d/archives/digitalmars/D/announce/We_are_looking_for_a_D1_programmer_in_Berlin_full_time_position_20996.html


Re: Any companies using D?

2011-07-12 Thread Vladimir Panteleev
On Tue, 12 Jul 2011 07:38:42 +0300, ChrisW littleratb...@yahoo.co.jp  
wrote:



Obviously Digital Mars does development in D, but I was wondering if
any other companies have yet taken the plunge -- even if just in their
 R&D department -- to try using D as their development language?


This question has also been asked on StackOverflow:

http://stackoverflow.com/questions/56315/d-programming-language-in-the-real-world

http://stackoverflow.com/questions/250383/is-anyone-using-d-in-commercial-applications

--
Best regards,
 Vladimir   mailto:vladi...@thecybershadow.net


Re: Any companies using D?

2011-07-12 Thread Nemo
ChrisW Wrote:

 Obviously Digital Mars does development in D, but I was wondering if
 any other companies have yet taken the plunge -- even if just in their
 R&D department -- to try using D as their development language?

I work for a large American company that writes a lot of software. The company 
does not use D at this point, mostly because of a legacy code base, but it is a 
strong candidate for future work.

For proprietary reasons I can't identify the company nor the plans for use of 
D, but rest assured that D will be used in future here.

Nemo


Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Nick Sabalausky
Yet again, a new DMD release has broken my code, and other people's code, 
too, just because of Phobos functions losing their CTFE-ability. (strip(), 
toLower(), etc... And yes, I did bring up the strip() regression on the beta 
list, to no effect.)

We praise and promote the hell out of CTFE (and rightly so) for being able 
to take ordinary functions and run them at compile-time. Except, with every 
new release, I become more and more convinced that Phobos should *not* be 
used in CTFE, because I know from experience that if I do, chances are it 
will soon break. So we have to maintain separate CTFE-guaranteed versions of 
whatever we need from Phobos. Obviously, aside from being a royal pain in 
the ass, that completely defeats the point of CTFE. Worse still, it makes 
us look really, really bad when we go out and promote a CTFE that just plain 
doesn't work as advertised when it comes to the language's own standard 
library.

Granted, I *completely* understand and sympathize with the practical issues 
involved (Phobos being heavily in-development, CTFE itself undergoing big 
improvements, etc...) So I fully agree it made perfect sense for this 
bleeding-edge/unstable branch of D to not concern itself with CTFE-ability 
regressions in Phobos just yet...

BUT...*Now* D2 has been declared the main version of D, suitable and 
recommended for new projects, and is being promoted as such. That changes 
the situation. Our old, previously sensible, approach to Phobos CTFE-ability 
has now become "Breaking regressions in each release".

Therefore, I think *now* is the time when Phobos needs to start having 
regression tests for CTFE-ability. If something doesn't work, then hell, at 
least a quick-and-dirty if(__ctfe) branch would be better than 
outright breakage. And if there are worries about that hindering improvement 
of DMD's CTFE mechanism, then a version identifier could be added to turn 
off the CTFE-workaround codepaths.
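
For illustration, roughly this kind of thing (stripCT is a made-up name and 
the CTFE path is deliberately simplistic):

import std.string;

string stripCT(string s)
{
    if (__ctfe)
    {
        // hand-rolled version simple enough for the CTFE interpreter
        size_t a = 0, b = s.length;
        while (a < b && (s[a] == ' ' || s[a] == '\t')) ++a;
        while (b > a && (s[b - 1] == ' ' || s[b - 1] == '\t')) --b;
        return s[a .. b];
    }
    else
    {
        return strip(s);   // full runtime implementation from Phobos
    }
}

enum trimmed = stripCT("  hello  ");   // usable in string mixins and other CTFE contexts

void main() { assert(trimmed == "hello"); }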

I realize it's not exactly helpful to say "Hey, you should do X!" instead of 
just simply pitching in and helping. But I'm hoping we can at least agree 
that we've reached a point where it's appropriate, and not premature, for 
this to be done. (One step at a time, right?)




Anyone want to run this through dmc?

2011-07-12 Thread bcs
http://blog.regehr.org/archives/558

That guy is working on some interesting stuff related to compiler bug
finding. It would be nice if we could get the DMx back-end into his
set of tested compilers. (BTW: what would it take to get a DMC/linux
built given there is a DMD/linux?)

I killed off my last windows box a few months back or I'd do this myself.

(Yes, I know this isn't exactly D related.)


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Adam D. Ruppe
I'd like to point out that *normal code* in Phobos is losing
functionality far too often too, like replace() no longer working
on immutable strings as of the last release.

Generalized templates are great, but not at the cost of existing
functionality!


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Trass3r

I totally agree.

At least simple functions like string manipulation need to be available at  
compile-time to use string mixins properly.


2.051 broke replace(), then it was fixed.
2.052 broke it again by moving it into std.array and changing the  
implementation.

Now with 2.054 toUpper() is broken.

If you have a library that extensively uses CTFE this is a major problem.


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Andrej Mitrovic
Last time I brought this issue up in bugzilla it was shot down with
"We don't guarantee and don't have to guarantee functions will always
be CTFE-able between releases."

I end up having to put initializers in a module ctor. Not nice!
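
i.e. roughly this shape (a minimal sketch with made-up names; toUpper just 
stands in for whatever call refuses to run under CTFE):

import std.stdio;
import std.string;

immutable string banner;          // would ideally be: enum banner = toUpper("d rocks");

shared static this()
{
    banner = toUpper("d rocks");  // runs at program startup instead of at compile time
}

void main()
{
    writeln(banner);              // fine at run time, but not available to CTFE/mixins
}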


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Nick Sabalausky
Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message 
news:mailman.1574.1310511524.14074.digitalmar...@puremagic.com...
 Last time I brought this issue up in bugzilla it was shot down with
 We don't guarantee and don't have to guarantee functions will always
 be CTFE-able between releases.


Yea, I've seen statements to that effect, too. It was a fairly reasonable 
stance when D2 was the unstable branch, but now that D2 is the main D, 
it ultimately means that you can't use Phobos in CTFE because it'll likely 
break. And that's a *huge* blow to CTFE in general.




Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Andrej Mitrovic
I don't understand what strip() could be doing to break CTFE anyway?


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Johann MacDonagh

On 7/12/2011 7:17 PM, Andrej Mitrovic wrote:

I don't understand what strip() could be doing to break CTFE anyway?


I believe a lot of the std.string functionality was modified to use 
routines in druntime in this latest release. The default sc.ini links to 
a static druntime lib as opposed to compiling the source. That means 
CTFE won't have the source for, in strip's example, _aApplycd2 
(rt/aApply.d in druntime). Fixing this (and adding in support for 
std.intrinsic) should fix a ton of bugs where a small change in Phobos 
kills CTFE.




Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 16:17, Andrej Mitrovic wrote:
 I don't understand what strip() could be doing to break CTFE anyway?

Don has been making huge changes to CTFE. Stuff that didn't use to compile 
now compiles. Stuff which compiled but shouldn't have now doesn't compile. 
There's probably stuff which used to compile and should still compile which 
doesn't compile now too. But with all of those changes, I'm not sure that it's 
at all reasonable to expect CTFE-ability to be stable. It should be heading in 
that direction, but I'm not sure how stable Don considers it. Certainly, strip 
could be failing for a perfectly legitimate reason, or it could be a bug. I 
have no idea. But with all of the changes that have been being made to CTFE, 
I'm not at all surprised if stuff has quit working. There's probably more that 
works now that didn't before, but with all of the recent changes, breakage 
doesn't surprise me one bit.

Having tests in Phobos for CTFE would catch many breakages, but if Don isn't 
yet guaranteeing what can and can't be CTFEed by the compiler, then such 
breakages could easily be because of fixes which made it so that stuff which 
wasn't supposed to compile but did has stopped compiling. So, until Don thinks 
that what he's been doing to CTFE is appropriately stable and can make 
guarantees about what will and won't be CTFEable as far as language features 
go, then Phobos can't make any guarantees about CTFEability.

So, basically, a lot of CTFE changes have been happening, and Don has 
pretty much said that we're not currently making guarantees about what's 
CTFEable and what isn't. And until the changes stabilize and Don is willing to 
make guarantees, Phobos can't guarantee anything about CTFE.

- Jonathan M Davis


Re: CTFE: What limitations remain?

2011-07-12 Thread Johann MacDonagh

On 7/12/2011 10:22 AM, dsimcha wrote:

The documentation for CTFE is outdated and specifies limitations that no
longer exist thanks to Don's massive overhaul.  For example, a big one is that
pointers now work.  What limitations that could potentially be removed still
do exist in CTFE as of 2.054?


Right now I'm up against two limitations:
http://d.puremagic.com/issues/show_bug.cgi?id=4046
http://d.puremagic.com/issues/show_bug.cgi?id=6268

That being said, I am amazed at what I can get away with now with
CTFE. I find myself saying there's no way I can do that, then
testing out with enum x = theresNoWay(); and being amazed at the clean
compile.
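
For example, something in this spirit goes through CTFE cleanly now (a small 
sketch in that vein, not one of the bug-report cases above):

int sumViaPointer(int[] a)
{
    int total = 0;
    for (const(int)* p = a.ptr; p < a.ptr + a.length; ++p)
        total += *p;             // pointer arithmetic and dereference during CTFE
    return total;
}

static assert(sumViaPointer([1, 2, 3]) == 6);   // evaluated entirely at compile time

void main() {}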


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread David Nadlinger

On 7/13/11 1:29 AM, Jonathan M Davis wrote:

So, basically, a lot of CTFE changes have been happening, and Don has
pretty much said that we're not currently making guarantees about what's
CTFEable and what isn't. And until the changes stabilize and Don is willing to
make guarantees, Phobos can't guarantee anything about CTFE.


Yes, but still I think a comprehensive regression test suite would be 
helpful, also for Don, because it would show him, well, if there are any 
regressions. I mean, it's not like Don would say, »Hey I just work on 
CTFE stuff that is fun to do and don't care about the rest«…


David


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Trass3r
On 13.07.2011 at 01:24, Johann MacDonagh  
johann.macdonagh@spam..gmail.com wrote:



On 7/12/2011 7:17 PM, Andrej Mitrovic wrote:

I don't understand what strip() could be doing to break CTFE anyway?


I believe a lot of the std.string functionality was modified to use  
routines in druntime in this latest release. The default sc.ini links to  
a static druntime lib as opposed to compiling the source. That means  
CTFE won't have the source for, in strip's example, _aApplycd2  
(rt/aApply.d in druntime). Fixing this (and adding in support for  
std.intrinsic) should fix a ton of bugs where a small change in Phobos  
kills CTFE.


That's true. I had several of these no source available errors.

Though I think toUpper crashed at:

S toUpper(S)(S s) @trusted pure
    if(isSomeString!S)
{
    foreach (i, dchar cOuter; s)   <---
{


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Johann MacDonagh

On 7/12/2011 7:35 PM, Trass3r wrote:

On 13.07.2011 at 01:24, Johann MacDonagh
johann.macdonagh@spam..gmail.com wrote:


On 7/12/2011 7:17 PM, Andrej Mitrovic wrote:

I don't understand what strip() could be doing to break CTFE anyway?


I believe a lot of the std.string functionality was modified to use
routines in druntime in this latest release. The default sc.ini links
to a static druntime lib as opposed to compiling the source. That
means CTFE won't have the source for, in strip's example, _aApplycd2
(rt/aApply.d in druntime). Fixing this (and adding in support for
std.intrinsic) should fix a ton of bugs where a small change in
Phobos kills CTFE.


That's true. I had several of these no source available errors.

Though I think toUpper crashed at:

S toUpper(S)(S s) @trusted pure
if(isSomeString!S)
{
foreach (i, dchar cOuter; s) <---
{


That's exactly the issue. When the compiler hits a foreach over a string 
that wants to convert its character type to another, it has to insert 
special magic to make that work (UTF decoding, etc...). Clearly this 
stuff shouldn't be in phobos, so it's in druntime. Unfortunately, the 
compiler doesn't have access to druntime source by default.


I'm not sure how it was doing it before. This worked in 2.053.
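
The construct in question, for reference (a minimal sketch): asking for dchar 
elements while walking a char[]/string is what makes the compiler emit the 
call into druntime.

void eachCodePoint(string s)
{
    foreach (i, dchar c; s)   // lowered to a druntime UTF-decoding helper (the _aApply family)
    {
        // i is the byte index where the code point starts, c is the decoded dchar
    }
}

void main()
{
    eachCodePoint("toUpper walks its input this way");
}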


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Nick Sabalausky
Jonathan M Davis jmdavisp...@gmx.com wrote in message 
news:mailman.1576.1310513383.14074.digitalmar...@puremagic.com...
 On 2011-07-12 16:17, Andrej Mitrovic wrote:
 I don't understand what strip() could be doing to break CTFE anyway?

 Don has been making huge changes to CTFE. Stuff that didn't use to 
 compile
 now compiles. Stuff which compiled but shouldn't have now doesn't compile.
 There's probably stuff which used to compile and should still compile 
 which
 doesn't compile now too. But with all of those changes, I'm not sure that 
 it's
 at all reasonable to expect CTFE-ability to be stable. It should be 
 heading in
 that direction, but I'm not sure how stable Don considers it. Certainly, 
 strip
 could be failing for a perfectly legitimate reason, or it could be a bug. 
 I
 have no idea. But with all of the changes that have been being made to 
 CTFE,
 I'm not at all surprised if stuff has quit working. There's probably more 
 that
 works now that didn't before, but with all of the recent changes, breakage
 doesn't surprise me one bit.


I definitely expected that in 2.053, since that's the version that had the 
CTFE overhaul. But it's not being re-overhauled in each version after 
2.053 - just bugfixes and added features. So at this point, I think it's 
reasonable to expect that any CTFE changes that break Phobos code should be 
fixed before release, even if only a temporary fix, in either the CTFE 
engine or in the Phobos code that was broken.




Re: Anyone want to run this through dmc?

2011-07-12 Thread bcs
== Quote from bcs (b...@example.com)'s article
 It would be nice if we could get the DMx back-end into his
 set of tested compilers. (BTW: what would it take to get a DMC/linux
 built given there is a DMD/linux?)

Seems he's interested (so the above is a relevant question):

 Quote from regehr:

 bcs, I'd be happy to add the Digital Mars compiler if I can do so on
 Linux. (The MSVC result in this post is from one of my more
 MS-friendly students.)


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 15:50, Adam D. Ruppe wrote:
 I'd like to point out that *normal code* in Phobos is losing
 functionality far too often too, like replace() no longer working
 on immutable strings as of the last release.
 
 Generalized templates are great, but not at the cost of existing
 functionality!

Please make sure that you report all such regressions. In some cases, they 
probably can't be fixed (especially if it's related to a compiler change which 
fixes something which shouldn't have worked before), but in a lot of such 
cases, it's likely simply because there wasn't a test case for it, and it 
wasn't caught.

As for immutable arrays, they're generally not going to work with range-based 
functions because of how templates work ( 
http://d.puremagic.com/issues/show_bug.cgi?id=6148 ). But if the change 
requested in http://d.puremagic.com/issues/show_bug.cgi?id=6289 is merged in, 
then slicing immutable arrays should work. Arguably however, most functions in 
std.array should work with immutable arrays (since it _is_ std._array_ after 
all), and if you can assume arrays rather than generic ranges, you can 
generally make them work with immutable arrays. So, replace may very well be 
fixed to work with immutable arrays. But if you don't point out such 
regressions, they don't necessarily get caught.

So, please point out such regressions in Phobos. That way they can be fixed 
and unit tests can be added to help ensure that they don't happen again.

- Jonathan M Davis


Re: Anyone want to run this through dmc?

2011-07-12 Thread Trass3r

Doing this on Windows is a nightmare.
After finally getting csmith to run, it produced code that apparently is  
invalid (since gcc reports errors too).

The included perl script doesn't work at all.


Prototype buildsystem Drake

2011-07-12 Thread Nick Sabalausky
The recent discussions about package managers and buildsystems has prompted 
me to get off my ass (or rather, *on* my ass as per how programming is 
usually performed...) and start on the D-based rake-inspired buildtool I've 
been meaning to make for awhile now. It's not actually usable yet, but 
there's a sample drakefile demonstrating everything, and it does actually 
compile and run (but just doesn't build the dependency tree or actually run 
any of the build steps). *Should* work on Posix, but I only tested on 
Windows, so I may have fucked up the Posix version...

Apologies to Jacob Carlborg for the name being so close to dake. Didn't 
really know what else to call it (Duck, maybe?) Like dake, it's inspired 
by Ruby's Rake. But unlike dake, the buildscript is written in D instead of 
Ruby, which was my #1 reason for making a Rake alternative.

Before I go implemeting everything, I'd like to get input on it. Is it 
something that could be a major D tool?

Overview of Drake and links to all the code are here:

https://bitbucket.org/Abscissa256/drake/wiki/Home 




Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Jonathan M Davis
On 2011-07-12 17:16, Nick Sabalausky wrote:
 Jonathan M Davis jmdavisp...@gmx.com wrote in message
 news:mailman.1576.1310513383.14074.digitalmar...@puremagic.com...
 
  On 2011-07-12 16:17, Andrej Mitrovic wrote:
  I don't understand what strip() could be doing to break CTFE anyway?
  
  Don has been making huge changes to CTFE. Stuff that didn't use to
  compile
  now compiles. Stuff which compiled but shouldn't have now doesn't compile.
  There's probably stuff which used to compile and should still compile
  which
  doesn't compile now too. But with all of those changes, I'm not sure that
  it's
  at all reasonable to expect CTFE-ability to be stable. It should be
  heading in
  that direction, but I'm not sure how stable Don considers it. Certainly,
  strip
  could be failing for a perfectly legitimate reason, or it could be a bug.
  I
  have no idea. But with all of the changes that have been being made to
  CTFE,
  I'm not at all surprised if stuff has quit working. There's probably more
  that
  works now that didn't before, but with all of the recent changes,
  breakage doesn't surprise me one bit.
 
 I definitely expected that in 2.053, since that's the version that had the
 CTFE overhaul. But it's not being re-overhauled in each version after
 2.053 - just bugfixes and added features. So at this point, I think it's
 reasonable to expect that any CTFE changes that break Phobos code should be
 fixed before release, even if only a temporary fix, in either the CTFE
 engine or in the Phobos code that was broken.

Well, regardless of whether CTFE is still having a major overhaul, there's 
still plenty of work being done on it. If you look at the changelog, you'll 
see that there are a lot of CTFE-related changes, and I know that more CTFE 
changes are in the works for 2.055 based on what Don has said. So, it doesn't 
surprise me at all that issues with it are cropping up. And unless Don is 
willing to say that we're going to guarantee the CTFEability of functions, 
then there will be no guarantees of any functions remaining CTFEable, and 
we're definitely not going to test that functions stay CTFEable.

If Don is willing to say that CTFE is stable enough to make guarantees about 
it and that we can make guarantees about Phobos functions being CTFEable, then 
I think that it makes perfect sense to add tests for it and make sure that 
functions remain CTFEable if they're currently CTFEable. But as long as Don is 
not willing to say that, then Phobos is not going to have any such guarantees 
and it makes no sense to test for CTFEability.

Ideally, I think that we would be able to make such guarantees and test for 
them, but Don has been against it, and he's the one managing CTFE at this 
point, so it's really up to him.

- Jonathan M Davis


C# contracts video

2011-07-12 Thread bearophile
A long video presentation of C# contracts, "Compile-time Verification, It's Not 
Just for Type Safety Any More" (Jul 05, 2011), by Greg Young (the speaker is 
quite bombastic):
http://www.infoq.com/presentations/Contracts-Library

The compile-time error shown in the video at about 10 minutes and 40 seconds is 
doable in D with this idea I've shown:
http://d.puremagic.com/issues/show_bug.cgi?id=5906

While the error shown from 11.45 requires something much better, as the 
solver/inferencer used by them.

He says they are slowly adding contracts to all the dotnet framework.

He says (30 minutes 20 seconds) that most unittests become useless. The 
compiler even says that certain unittests can't fail (after statically 
verifying the contracts).
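
For comparison, the D spelling of that kind of checkable condition is an 
in/out contract (a minimal sketch, not one of the examples from the video):

int divide(int a, int b)
in
{
    assert(b != 0, "divisor must be non-zero");
}
out (result)
{
    assert(result * b + a % b == a);   // holds for D's truncating integer division
}
body
{
    return a / b;
}

void main()
{
    assert(divide(7, 2) == 3);
}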

He says that all this stuff is still in its infancy (in dotnet, the Contracts 
library itself, etc).

He says that the dynamic keyword and the contracts, both added to C#4, are 
essentially working against each other.

Around 53:00 he explains the risks of overspecification in Contracts (which 
causes problems similar to writing too many unittests), or of adding overly 
specific contracts to an interface when they really only fit a single 
implementation of it.

Bye,
bearophile


Re: Anyone want to run this through dmc?

2011-07-12 Thread bcs
== Quote from Trass3r (u...@known.com)'s article
 Doing this on Windows is a nightmare.
 After finally getting csmith to run, it produced code that
apparently is
 invalid. (since also gcc reports errors)
 The included perl script doesn't work at all.

That's why I'm wondering about building, DMC/linux.

Also, my original motivation for posting was in hopes of getting the
test case he listed run through DMC, but somehow that got lost in the
edits. Anyone?


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread bcs
== Quote from Adam D. Ruppe (destructiona...@gmail.com)'s article
 I'd like to point out that *normal code* in Phobos is losing
 functionality far too often too, like replace() no longer working
 on immutable strings as of the last release.
 Generalized templates are great, but not at the cost of existing
 functionality!

The answer to both problems is to have a test suite that is very easy
for the average user to add code to and that's run along with the current 
auto-tester. What I'm thinking of is something like codepad.org where
anyone can submit a code sample and where interested parties get informed 
when the results for them change. With that in place, when
someone hits a regression it can be added to the (or a) regression
suite with just a few clicks. With a bit more work, you could even let
people go back in time and show that a test case used to run.

Heck, I was looking into building exactly that a few months back but
real life got in the way.


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread bcs
== Quote from Andrej Mitrovic (andrej.mitrov...@gmail.com)'s article
 Last time I brought this issue up in bugzilla it was shot down with
 "We don't guarantee and don't have to guarantee functions will
always
 be CTFE-able between releases."

Maybe there should be a std.ctfe.* that looks a bit like std.* that IS
guaranteed to work. Ideally it would be nothing but a bunch of aliases.

 I end up having to put initializers in a module ctor. Not nice!



Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Brad Roberts
On 7/12/2011 7:14 PM, bcs wrote:
 == Quote from Adam D. Ruppe (destructiona...@gmail.com)'s article
 I'd like to point out that *normal code* in Phobos is losing
 functionality far too often too, like replace() no longer working
 on immutable strings as of the last release.
 Generalized templates are great, but not at the cost of existing
 functionality!
 
 The answer to both problems is to have a test suite that is very easy
 for the average user to add code to and that's run along with the current 
 auto-tester. What I'm thinking of is something like codepad.org where
 anyone can submit a code sample and where interested parties get informed 
 when the results for them change. With that in place, when
 someone hits a regression it can be added to the (or a) regression
 suite with just a few clicks. With a bit more work, you could even let
 people go back in time and show that a test case used to run.
 
 Heck, I was looking into building exactly that a few months back but
 real life got in the way.

Why separate at all?  Add tests to phobos directly like any other unit test.




Re: CTFE writeln

2011-07-12 Thread Brad Roberts
On 7/12/2011 1:20 AM, KennyTM~ wrote:
 I've just opened a pull request* to enable std.stdio.writeln in CTFE. Any 
 comments?
 
 *: https://github.com/D-Programming-Language/dmd/pull/237

Unless it supports everything that the runtime version does, using the same 
name is a bad idea, imho.


Inconsistencies between global imports and function local imports

2011-07-12 Thread Tyro[a.c.edwards]
Methinks function-local imports (introduced in 2.054) are a great idea; 
however, if they are to be allowed, I believe they should provide all the 
functionality of global imports, which they currently do not.


import std.stdio;
import std.string;
import std.conv;

// Note: all of these import formats work if imported here but only
// the first works if imported locally to the function.

//import std.utf;
//import std.utf: toUTF16z;
//import std.utf: wcp = toUTF16z;

void main()
{
auto s = "漢字を書くのはどうかな~?";
auto s1 = genText(s);
writeln(to!string(typeid(s1)));
}

auto genText(string t)
{
import std.utf; // This works
//import std.utf: toUTF16z; // This doesn't
//import std.utf: wcp = toUTF16z; // Neither does this

version(Unicode)
{
// Note: Everything here works with global import
// but only the first line works with function local imports

return toUTF16z(t);   // This works
//return t.toUTF16z;  // This doesn't
//return wcp(t);  // Neither does this
//return t.wcp;   // Or this
}
else
{
return t.toStringz;
}
}


[OT - toUTFz]

Wasn't there discussion about adding toUTFz to the std.utf? For some 
reason I thought that was forthcoming in 2.054... whatever happened there?


Re: Inconsistencies between global imports and function local imports

2011-07-12 Thread Jonathan M Davis
On Wednesday 13 July 2011 11:56:10 Tyro[a.c.edwards] wrote:
 Methinks function-local imports (introduced in 2.054) are a great idea;
 however, if they are to be allowed, I believe they should provide all the
 functionality of global imports, which they currently do not.
 
 import std.stdio;
 import std.string;
 import std.conv;
 
 // Note: all of these import formats work if imported here but only
 // the first works if imported locally to the function.
 
 //import std.utf;
 //import std.utf: toUTF16z;
 //import std.utf: wcp = toUTF16z;
 
 void main()
 {
   auto s = "漢字を書くのはどうかな~?";
   auto s1 = genText(s);
   writeln(to!string(typeid(s1)));
 }
 
 auto genText(string t)
 {
   import std.utf; // This works
   //import std.utf: toUTF16z; // This doesn't
   //import std.utf: wcp = toUTF16z; // Neither does this
 
   version(Unicode)
   {
   // Note: Everything here works with global import
   // but only the first line works with function local imports
 
   return toUTF16z(t);   // This works
   //return t.toUTF16z;  // This doesn't
   //return wcp(t);  // Neither does this
   //return t.wcp;   // Or this
   }
   else
   {
   return t.toStringz;
   }
 }

import std.utf: toUTF16z;

is broken to begin with:

http://d.puremagic.com/issues/show_bug.cgi?id=314
http://d.puremagic.com/issues/show_bug.cgi?id=5161

Rather than just importing the symbol like it should, a selective import 
essentially creates a new symbol. So, that's probably why it doesn't work when 
importing inside of a function.

 [OT - toUTFz]
 
 Wasn't there discussion about adding toUTFz to the std.utf? For some
 reason I thought that was forthcoming in 2.054... whatever happened there?

It hasn't been merged in yet. It should be in 2.055.

https://github.com/D-Programming-Language/phobos/pull/123

- Jonathan M Davis


Re: Inconsistencies between global imports and function local imports

2011-07-12 Thread Andrej Mitrovic
I reported the UFCS issue with function imports to bugzilla a few weeks ago.


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread bcs
== Quote from Brad Roberts (bra...@puremagic.com)'s article
 On 7/12/2011 7:14 PM, bcs wrote:
  == Quote from Adam D. Ruppe (destructiona...@gmail.com)'s article
  I'd like to point out that *normal code* in Phobos is losing
  functionality far too often too, like replace() no longer working
  on immutable strings as of the last release.
  Generalized templates are great, but not at the cost of existing
  functionality!
 
  The answer to both problems is to have a test suit that is very
easy
  for the average user to add code to that's run along with the
current auto-tester. What I'm thinking of is somthing like codepad.org
where
  anyone can submit a code sample and where interested parties get
informed when the results for them change. With that in place, when
  some one hits a regression it can be added to the (or a)
regression
  suit with just a few clicks. With a bit more work, you could even
let
  people go back in time and show that a test case user to run.
 
  Heck, I was looking into building exactly that a few months back
but
  real life got in the way.
 Why separate at all?  Add tests to phobos directly like any other
unit test.

Because that requires commit privileges, having git installed and
about a dozen other things. I would like to be able to paste some code
into a web page, tweak it till it shows what I want and post a URL.

As for moving stuff from there into phobos; well that might work for
selected cases, but for the system I'm thinking of you may end up with
a selection of interesting cases (ones the devs care about) that's
several times the size of phobos and a corpus of active tests (ones
that someone cares about) an order of magnitude larger than that. As
long as the search and monitor tools to handle it are done well, the
more tests the better, I say.


Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread Nick Sabalausky
bcs b...@example.com wrote in message 
news:iviv9h$2ee$1...@digitalmars.com...
 == Quote from Andrej Mitrovic (andrej.mitrov...@gmail.com)'s article
 Last time I brought this issue up in bugzilla it was shot down with
 We don't guarantee and don't have to guarantee functions will
 always
 be CTFE-able between releases.

 Maybe there should be a std.ctfe.* that looks a bit like std.* that IS
 guaranteed to work. Ideally it would be nothing but a bunch of alias.


If we do that we may as well just stick their bodies inside the original 
function in an if(__ctfe) block.




Re: Time for Phobos CTFE-ability unittests...right? RIGHT?

2011-07-12 Thread bcs
== Quote from Nick Sabalausky (a@a.a)'s article
 bcs b...@example.com wrote in message
 news:iviv9h$2ee$1...@digitalmars.com...
  == Quote from Andrej Mitrovic (andrej.mitrov...@gmail.com)'s
article
  Last time I brought this issue up in bugzilla it was shot down
with
  We don't guarantee and don't have to guarantee functions will
  always
  be CTFE-able between releases.
 
  Maybe there should be a std.ctfe.* that looks a bit like std.*
that IS
  guaranteed to work. Ideally it would be nothing but a bunch of
alias.
 
 If we do that we may as well just stick their bodies inside the
original
 function in an if(__ctfe) block.

That only gets a small part of the benefit; the rest is that it would
document to the end user what is CTFE clean and also document to the
dev what functions need to be CTFE clean. Putting if(__ctfe) in the
function ends up making the user look at the code (rather than docs)
and still doesn't give a definitive answer as to whether the code works
this time or, given that, whether it may break next time around.


Re: CTFE writeln

2011-07-12 Thread Walter Bright

On 7/12/2011 7:21 PM, Brad Roberts wrote:

On 7/12/2011 1:20 AM, KennyTM~ wrote:

I've just opened a pull request* to enable std.stdio.writeln in CTFE. Any 
comments?

*: https://github.com/D-Programming-Language/dmd/pull/237


Unless it supports everything that the runtime version does, using the same 
name is a bad idea, imho.


I agree with Brad, though I need to look at the actual pull request.

There's also pragma(msg, "hello at compile time").


Re: Anyone want to run this through dmc?

2011-07-12 Thread bcs
I broke down and installed wine:

bcs@doors:~/Downloads/dmc$ cat split.cpp
#include <stdio.h>
struct S0 {
  unsigned f1 : 1;
};

struct S0 s;

int main (void) {
  int x = -3;
  int y = x = (0, s.f1);
  printf ("%d\n", y);
  return 0;
}
bcs@doors:~/Downloads/dmc$ wine dm/bin/dmc.exe split.cpp
link split,,,user32+kernel32/noi;

bcs@doors:~/Downloads/dmc$ wine split.exe
1

seems DMC is broken too, but it's debatable whether this test case is of
value to DMD.


Re Build tools for D [ was Re: Prototype buildsystem Drake ]

2011-07-12 Thread Russel Winder
On Tue, 2011-07-12 at 21:02 -0400, Nick Sabalausky wrote:
[ . . . ]
 Before I go implemeting everything, I'd like to get input on it. Is it 
 something that could be a major D tool?
[ . . . ]

Given the nature of the debate, I will add to the mix that SCons, a
pre-existing -- and indeed working :-) -- system, has a D tool and can
compile and link D code.  What it needs is some love to bring it up to
the level of the C and C++ support.

SCons has a built-in D tool which needs work, but rather than fork SCons
to work on the tool I have created a separate tool that can be used with
any SCons installation.  See https://bitbucket.org/russel/scons_dmd_new.
Prior to any SCons release a patch between this version and the one in
the SCons core will be made and applied.

I started using SCons (and recently Waf -- which also has a D tool) in
preference to Rake because SCons has very superior support for C, C++,
Fortran and LaTeX.  Using Rake for these is like using assembly language
to write a GUI application:  with Rake you have to build everything
yourself, with SCons most of the work is done for you in the tool
infrastructure.  There was Rant which tried to replace Rake in the C, C
++, Fortran arena but the project died.  Rake appears to have almost no
traction outside the Ruby community.  Even Buildr (which tried to build
on Rake to combat Maven seems to have no headway in the market.

Clearly there is always a trend for a language to demand a specialist
build tool:  Scala has SBT, Clojure has Leiningen, Ruby has Rake, but
there is also a section of the universe that thinks the same build tool
should be usable across all languages.  Ant and Maven were Java specific
but now can handle anything targeting the JVM.  SCons and Waf come from
a C, C++, Fortran, LaTeX, D background but can now handle Java, Scala,
etc.  Gradle arose as a replacement for Maven on the JVM, but is now
(finally) branching out into C, C++, etc.

Note also that systems like Gradle and indeed Maven, are not just code
compiler and link systems, they also manage code review tools for
creating coverage reports, bug reports, documentation generation, even
whole websites.

Currently, from what I can tell, we have a number of individuals making
proposals for ideas.  Nothing wrong with that per se.  However I think
this is the third time this debate has occurred since I have been
(peripherally) involved with the core D community.  This does strike me
as wrong since it means that there is no momentum being created, the
energy associated with each debate simply dissipates.

I think that in order to create a momentum, a fundamental choice needs
to be made:

--  Should D have a specialist tool (à la SBT/Scala, Leiningen/Clojure
-- at the risk of massive NIH-ism, just as the Scala and Clojure folk
appear to have) or go with a generalist tool such as Gradle or SCons.

True there has to be debate about the possibilities in each category so
as to get the lay of the land and a feel for the pluses and
minuses, but always the aim should be to answer the above question.

Then we can move to debating the possibilities.

Then we can create some momentum behind doing something by creating a
group of activists.

Then D build wins.


-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Re Build tools for D [ was Re: Prototype buildsystem Drake ]

2011-07-12 Thread Russel Winder
And, of course, I should have mentioned CMake and CMakeD.

The fact that I forgot shows my prejudice against Makefile-based
systems and for direct DAG-based systems such as Gradle, SCons and Waf.
This though should not stop CMakeD being a part of this debate. 

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Re Build tools for D [ was Re: Prototype buildsystem Drake ]

2011-07-12 Thread Jonathan M Davis
On Wednesday 13 July 2011 06:12:58 Russel Winder wrote:
 And, of course, I should have mentioned CMake and CMakeD.
 
 The fact that I forgot shows my prejudice against Makefile-based
 systems and for direct DAG-based systems such as Gradle, SCons and Waf.
 That, though, should not stop CMakeD from being a part of this debate.

From previous discussions, it seems that in many people's minds one of
the primary reasons for having a D build tool is also to handle package
management of D libraries (like Haskell's Cabal or RubyGems for Ruby).
And as great as CMakeD, SCons, Gradle, Waf, and other such tools may be,
they don't do that.

- Jonathan M Davis


Re: Re Build tools for D [ was Re: Prototype buildsystem Drake ]

2011-07-12 Thread Russel Winder
On Tue, 2011-07-12 at 22:28 -0700, Jonathan M Davis wrote:
[ . . . ]
 From previous discussions, it seems that in many people's minds one of
 the primary reasons for having a D build tool is also to handle package
 management of D libraries (like Haskell's Cabal or RubyGems for Ruby).
 And as great as CMakeD, SCons, Gradle, Waf, and other such tools may be,
 they don't do that.

Go is currently using Make for this -- they have a structured Makefile
hierarchy that handles most compilation and linking in the context of a
rigidly enforced filestore structure, and goinstall for bringing in
packages from outside into the filestore hierarchy.  It is a bit
primitive at the minute, but is being worked on and rapidly improved.

Go actually has a plethora of build tools, including a couple of
SCons-based ones, most of which are falling by the wayside.  I think the
effort expended doing this has not been a waste as information is being
generated that is adding to the pool.  I think they will replace the
Makefile system shortly with one of the tools, possibly GoBuild, but I
can't remember exactly.

Haskell and Ruby both have inward-looking approaches -- to put it
bluntly (at the risk of causing offense to some), the Haskell build
tools only care about Haskell source and the Ruby build tools only care
about Ruby source.
Almost by definition D has to live in a mixed C, C++, Fortran, D
universe, so this has to be an issue from the outset.

I agree with the premise that package management must be a core part of
the build management, but I disagree that Gradle, SCons, and Waf cannot
handle this.  True, they cannot handle it today, but then no system
currently does it properly.  So I would suggest that package management
is currently a green-field problem that could be handled either by a
D-specific tool or by one of Gradle, SCons or Waf.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: opCast, c bindings and normal casts.

2011-07-12 Thread Johannes Pfau
Johannes Pfau wrote:
Hi,

I have a wrapper for an object-aware C library (cairo). Take for
example two classes, Surface and a subclass, ImageSurface. Now this
code has to be valid:
---
auto ptr = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 512, 512);
Surface s = new Surface(ptr);
ImageSurface imgs = cast(ImageSurface)s;
---

As D cannot know that 's' really should be an ImageSurface, I have
implemented opCast to get this example working:
---
import std.traits; // for isImplicitlyConvertible

class Surface
{
static Surface castFrom(Surface other)
{
return other;
}

T opCast(T)() if(isImplicitlyConvertible!(T, Surface))
{
return T.castFrom(this);
}
}
class ImageSurface : Surface
{
static ImageSurface castFrom(Surface other)
{
auto type = cairo_surface_get_type(other.nativePointer);
if(type == cairo_surface_type_t.CAIRO_SURFACE_TYPE_IMAGE)
{
return new ImageSurface(other.nativePointer);
}
else
return null;
}
}
---

This code works quite well. But it performs unnecessary calls to
cairo_surface_get_type (and allocates unnecessary objects) for simple
cases:
---
auto surface = new ImageSurface(Format.CAIRO_FORMAT_ARGB32, 400, 400);
Surface tmp = cast(Surface)surface;
ImageSurface test = cast(ImageSurface)tmp;
---

In this case, the first D object already is an ImageSurface, so the
custom opCast code isn't needed for the last line.

So the question is: Is there some way to check in the opCast function
whether a normal D object cast would succeed and then just return its
result?


In case anyone's interested:
I think this could be done with _d_dynamic_cast from rt.cast_
(druntime).
I've since dropped the opCast/castFrom approach (it wasn't working in
some cases), so I haven't tested this.
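
For what it's worth, here is a minimal, self-contained sketch of that
idea (untested; Base and Derived stand in for Surface and ImageSurface,
and the cairo type query is only indicated by a comment).  castFrom
first tries the plain built-in downcast by routing through Object:  the
upcast to Object is implicit, so opCast never fires, and Object has no
opCast of its own, so the downcast cannot recurse back into castFrom.
Only when that fails would the expensive cairo check run.
---
// Untested sketch.  Base/Derived play the roles of Surface/ImageSurface;
// the cairo fallback is only indicated by a comment.
import std.traits;
import std.stdio;

class Base
{
    static Base castFrom(Base other)
    {
        return other;
    }

    T opCast(T)() if (isImplicitlyConvertible!(T, Base))
    {
        return T.castFrom(this);
    }
}

class Derived : Base
{
    static Derived castFrom(Base other)
    {
        // Try the ordinary dynamic cast first.  The upcast to Object is
        // implicit (no opCast involved), and Object defines no opCast, so
        // the cast below is the plain built-in downcast -- no recursion
        // back into castFrom and no new allocation.
        Object o = other;
        if (auto d = cast(Derived) o)
            return d;

        // Only here would the wrapper ask cairo_surface_get_type() and
        // construct a new wrapper object; the toy version just gives up.
        return null;
    }
}

void main()
{
    Base b = new Derived;
    auto d = cast(Derived) b;   // goes through opCast -> Derived.castFrom
    writeln(d !is null);        // true, without any extra work
}
---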

-- 
Johannes Pfau



Declaring a D pointer to a C function

2011-07-12 Thread Johannes Pfau
From a discussion related to Derelict:
How do you write this:
---
alias extern(C) int function(void* test) FTInitFunc;
FTInitFunc FT_Init_FreeType;
---
without the alias?
---
extern(C) int function(void* test) FT_Init_FreeType;
---
is not the same!
Both are fields containing a C function pointer, but the first field
has D name mangling (_D4test16FT_Init_FreeTypePUPvZi) while the second
has C name mangling (FT_Init_FreeType), which conflicts with the C
function FT_Init_FreeType.

And a related question from stackoverflow:
(http://stackoverflow.com/questions/6257078/casting-clutteractor-to-clutterstage)
How to write this:
---
alias extern(C) void function(void*, const char*) setTitleFunc;
auto clutter_stage_set_title =
getSym!(setTitleFunc)(clutter_stage_set_title);
---
without the alias?

http://d.puremagic.com/issues/show_bug.cgi?id=2168 and
http://d.puremagic.com/issues/show_bug.cgi?id=4288 seem to be related;
extern(C) seems to work almost nowhere ;-)
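
One possible workaround (an untested sketch; the helper name CFuncPtr is
made up here) is to hide the alias inside an eponymous template, so the
extern(C) function pointer type can be spelled inline wherever it is
needed:
---
// Hypothetical helper -- untested sketch.  The alias that the examples
// above declare by hand is wrapped in an eponymous template, so the
// extern(C) function pointer type can be written inline.
template CFuncPtr(Ret, Params...)
{
    alias extern(C) Ret function(Params) CFuncPtr;
}

// The field keeps D name mangling because the extern(C) linkage is part
// of the type, not of the declaration itself:
CFuncPtr!(int, void*) FT_Init_FreeType;
---
The same type expression, e.g. CFuncPtr!(void, void*, const(char)*),
could then stand in for setTitleFunc in the getSym call -- whether that
also sidesteps bugs 2168/4288 would need checking.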


-- 
Johannes Pfau



Re: SDL with D

2011-07-12 Thread Mike Parker

On 7/12/2011 4:21 PM, Dainius (GreatEmerald) wrote:

I see. And what about Lua? I see lots and lots of different libraries
for that on dsource, and there is even some support in Derelict as
well. I assume that LuaD is the one in active development and most
fitting for current D2?


Derelict has no Lua binding yet. It's on the todo list. I'm waiting for 
Lua 5.2 to be released.


Re: SDL with D

2011-07-12 Thread Dainius (GreatEmerald)
According to the Derelict page, there already are unofficial bindings.
But I guess they wouldn't work with Derelict2 anyway.

