Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Nick Sabalausky
"Lionello Lunesu"  wrote in message 
news:h30vss$pm...@digitalmars.com...
>
> Walter, since the lib/include folders were split according to OS, the dmd2 
> zip consistently has an extensionless "lib" file in the dmd2 folder.

It's also in D1. 




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Wed, 08 Jul 2009 00:08:13 -0400, Brad Roberts   
wrote:



Walter Bright wrote:

Robert Jacques wrote:

On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright
 wrote:

Robert Jacques wrote:

(Caveat: most 32-bit compilers probably defaulted integer to int,
though 64-bit compilers are probably defaulting integer to long.)


All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers
are setting int at 32 bits for sensible compatibility reasons.


But are the 64-bit compilers setting the internal "integer" type to 32
or 64 bits? (I'm not running any 64-bit OSes at the moment to test  
this)


Not that I've seen. I'd be very surprised if any did.



From wikipedia: http://en.wikipedia.org/wiki/64-bit


model   short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32     64    64  Microsoft Win64 (X64/IA64)
LP64      16    32    64     64    64  Most UNIX and UNIX-like systems
(Solaris, Linux, etc.)
ILP64     16    64    64     64    64  HAL
SILP64    64    64    64     64    64  ?


Thanks, but what we're looking for is what format the data is in in a
register. For example, in 32-bit C, bytes/shorts are computed as ints and
truncated back down. I've found some references to 64-bit native integers
in the CLI spec, but nothing definitive.


The question boils down to is b == 0 or not:

int a = 2147483647;
long b = a+a+2; // or long long depending on platform



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Brad Roberts
Walter Bright wrote:
> Robert Jacques wrote:
>> On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright
>>  wrote:
>>> Robert Jacques wrote:
 (Caveat: most 32-bit compilers probably defaulted integer to int,
 though 64-bit compilers are probably defaulting integer to long.)
>>>
>>> All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers
>>> are setting int at 32 bits for sensible compatibility reasons.
>>
>> But are the 64-bit compilers setting the internal "integer" type to 32
>> or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
> 
> Not that I've seen. I'd be very surprised if any did.

From Wikipedia: http://en.wikipedia.org/wiki/64-bit

model   short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32     64    64  Microsoft Win64 (X64/IA64)
LP64      16    32    64     64    64  Most UNIX and UNIX-like systems
(Solaris, Linux, etc.)
ILP64     16    64    64     64    64  HAL
SILP64    64    64    64     64    64  ?


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Robert Jacques wrote:
On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright 
 wrote:

Robert Jacques wrote:
(Caveat: most 32-bit compilers probably defaulted integer to int, 
though 64-bit compilers are probably defaulting integer to long.)


All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers 
are setting int at 32 bits for sensible compatibility reasons.


But are the 64-bit compilers setting the internal "integer" type to 32 
or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)


Not that I've seen. I'd be very surprised if any did.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright  
 wrote:

Robert Jacques wrote:
(Caveat: most 32-bit compilers probably defaulted integer to int,  
though 64-bit compilers are probably defaulting integer to long.)


All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are  
setting int at 32 bits for sensible compatibility reasons.


But are the 64-bit compilers setting the internal "integer" type to 32 or  
64 bits? (I'm not running any 64-bit OSes at the moment to test this)


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Thanks.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Robert Jacques wrote:

So by the spec   (and please correct me if I'm reading this wrong)
g = e + f => g = cast(long)(  cast(integer)e + cast(integer)f  );
where integer is unbounded in bits (and therefore has no overflow)
therefore
g = e + f;  => d = cast(long) e + cast(long) f;
is more in keeping with the spec than
g = cast(long)(e+f);
in terms of a practical implementation, since there's less possibility 
for overflow error.


The spec leaves a lot of room for implementation defined behavior. But 
still, there are common definitions for those implementation defined 
behaviors, and C programs routinely rely on them. Just like the C 
standard supports 32 bit "bytes", but essentially zero C programs will 
port to such a platform without major rewrites.


Silently changing the expected results is a significant problem. The guy 
who does the translation is hardly likely to be the guy who wrote the 
program. When he notices the program failing, I guarantee he'll write it 
off as "D sux". He doesn't have the time to debug what looks like a 
fault in D, and frankly I would agree with him.


I have a lot of experience with people porting C/C++ programs to Digital 
Mars compilers. They run into some implementation-defined issue, or rely 
on some bug in B/M/X compilers, and yet it's always DM's problem, not 
B/M/X or the code. There's no point in fighting that, it's just the way 
it is, and to deal with reality means that DM must follow the same 
implementation-defined behavior and bugs as B/M/X compilers do.


For a C integer expression, D must either refuse to compile it or 
produce the same results.


(Caveat: most 32-bit compilers probably defaulted integer to int, though 
64-bit compilers are probably defaulting integer to long.)


All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are 
setting int at 32 bits for sensible compatibility reasons.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jesse Phillips
On Tue, 07 Jul 2009 11:05:31 -0400, bearophile wrote:

> KennyTM~ Wrote:
>> Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .
> 
> That compromise design looks good to be adopted by D too :-)
> 
> Bye,
> bearophile

For which we have, case 1, 2, 3: writeln("I believe");


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Lionello Lunesu


"Walter Bright"  wrote in message 
news:h2s0me$30f...@digitalmars.com...

Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip


Great release, thanks to all those that have contributed to it!

Walter, since the lib/include folders were split according to OS, the dmd2 
zip consistently has an extensionless "lib" file in the dmd2 folder.


This is because of the 'install' target in win32.mak that would previously 
copy phobos.lib and gcstub.obj to the lib folder, but now copies their 
contents to a file called "lib" instead.


I've made a patch and attached it to 
http://d.puremagic.com/issues/show_bug.cgi?id=3153


L. 



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 21:05:45 -0400, Walter Bright  
 wrote:



Andrei Alexandrescu wrote:

Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.


Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and  
not a bug. In the new rules int is special, in this suggestion, it's  
not.
 I think this is a good idea that would improve things. I think,  
however, it would be troublesome to implement because expressions are  
typed bottom-up. The need here is to "teleport" type information from  
the assignment node to the addition node, which is downwards. And I'm  
not sure how this would generalize to other operators beyond "=".


It's also troublesome because it would silently produce different  
answers than C would.


Please, correct me if I'm wrong, but it seems C works by promoting  
byte/short/etc to int and then casting back down if need be. (Something  
tells me this wasn't always true) So (I think) the differences would be  
limited to integer expressions assigned to longs. Also, doing this 'right'  
might be important to 64-bit platforms.


Actually, after finding and skimming the C spec (from  
http://frama-c.cea.fr/download/acsl_1.4.pdf via wikipedia)


"
2.2.3 Typing
The language of logic expressions is typed (as in multi-sorted first-order
logic). Types are either C types or logic types defined as follows:
- "mathematical" types: integer for unbounded, mathematical integers, real
for real numbers, boolean for booleans (with values written \true and \false);
- logic types introduced by the specification writer (see Section 2.6).
There are implicit coercions for numeric types:
- C integral types char, short, int and long, signed or unsigned, are all
subtypes of type integer;
- integer is itself a subtype of type real;
- C types float and double are subtypes of type real.

...

2.2.4 Integer arithmetic and machine integers
The following integer arithmetic operations apply to mathematical  
integers: addition, subtraction, multiplication,
unary minus. The value of a C variable of an integral type is promoted to  
a mathematical
integer. As a consequence, there is no such thing as "arithmetic overflow"  
in logic expressions.
Division and modulo are also mathematical operations, which coincide with  
the corresponding C
operations on C machine integers, thus following the ANSI C99 conventions.  
In particular, these are not
the usual mathematical Euclidean division and remainder. Generally  
speaking, division rounds the result
towards zero. The results are not specified if divisor is zero; otherwise  
if q and r are the quotient and the

remainder of n divided by d then:"

So by the spec   (and please correct me if I'm reading this wrong)
g = e + f => g = cast(long)(  cast(integer)e + cast(integer)f  );
where integer is unbounded in bits (and therefore has no overflow)
therefore
g = e + f;  => d = cast(long) e + cast(long) f;
is more in keeping with the spec than
g = cast(long)(e+f);
in terms of a practical implementation, since there's less possibility for  
overflow error.


(Caveat: most 32-bit compilers probably defaulted integer to int, though  
64-bit compilers are probably defaulting integer to long.)









Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 18:26:36 -0700, Walter Bright wrote:


> All the messages from the dawn of time are online and available at 
> http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable 
> from the search box in the upper left.

Okaaayy ... I see that this (checking for integer overflow) has been an
issue since at least 2003. 

  http://www.digitalmars.com/d/archives/19850.html

At this rate, D v2 will be released some time after C++0X :-)

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu 
 wrote:



Robert Jacques wrote:
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu 
 wrote:



Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.

 Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules 
and not a bug. In the new rules int is special, in this suggestion, 
it's not.


I think this is a good idea that would improve things. I think, 
however, it would be troublesome to implement because expressions 
are typed bottom-up. The need here is to "teleport" type information 
from the assignment node to the addition node, which is downwards. 
And I'm not sure how this would generalize to other operators beyond 
"=".



Andrei
 Hmm... why can't multiple expressions be built simultaneously and 
then the best chosen once the assignment/function call/etc is 
reached? This would also have the benefit of paving the way for 
polysemous values & expressions.


Anything can be done... in infinite time with infinite resources. :o)

Andrei


:) Well, weren't polysemous expressions already in the pipeline somewhere?


I'm afraid they didn't get wings. We have incidentally found different 
ways to address the issues they were supposed to address.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu  
 wrote:



Robert Jacques wrote:
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu  
 wrote:



Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.

 Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and  
not a bug. In the new rules int is special, in this suggestion, it's  
not.


I think this is a good idea that would improve things. I think,  
however, it would be troublesome to implement because expressions are  
typed bottom-up. The need here is to "teleport" type information from  
the assignment node to the addition node, which is downwards. And I'm  
not sure how this would generalize to other operators beyond "=".



Andrei
 Hmm... why can't multiple expressions be built simultaneously and then  
the best chosen once the assignment/function call/etc is reached? This  
would also have the benefit of paving the way for polysemous values &  
expressions.


Anything can be done... in infinite time with infinite resources. :o)

Andrei


:) Well, weren't polysemous expressions already in the pipeline somewhere?


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Andrei Alexandrescu wrote:
You can implement that as a library. In fact I wanted to do it for 
Phobos for a long time. I've discussed it in this group too (to an 
unusual consensus), but I forgot the thread's title and stupid 
Thunderbird "download 500 headers at a time forever even long after I have 
changed that idiotic default option" won't let me find it.


All the messages from the dawn of time are online and available at 
http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable 
from the search box in the upper left.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jesse Phillips
On Tue, 07 Jul 2009 18:43:41 -0300, Leandro Lucarella wrote:

> 
> (BTW, nice job with the Wiki for whoever did it, I don't remember who
> was putting a lot of work on improving the Wiki, but it's really much
> better organized now)

Hi, thanks. 

> I think we can add a DIP (D Improvement Proposal =) section in the
> "Language Development" section:
> http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel

I was reusing the Discussion and Ideas for these things, but DIP could be 
for those brought forward by the involved few of accepting ideas, since 
Ideas and Discussion will likely end up with a lot of old or less thought 
out ideas.

http://www.prowiki.org/wiki4d/wiki.cgi?IdeaDiscussion


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 20:13:40 -0500, Andrei Alexandrescu wrote:

> Derek Parnell wrote:
>> Here is where I propose having a signal to the compiler about which
>> specific variables I'm worried about, and if I code an assignment to one of
>> these that can potentially overflow, then the compiler must issue a
>> message. 
> 
> You can implement that as a library. In fact I wanted to do it for 
> Phobos for a long time. 

What does "implement that as a library" actually mean?

Does it mean that a Phobos module could be written that defines a struct
template (presumably) that holds the data and implements opAssign, etc...
to issue a message if required. I assume it could do some limited
compile-time value tests so it doesn't always have to issue a message.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu 
 wrote:



Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.

 Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and 
not a bug. In the new rules int is special, in this suggestion, it's 
not.


I think this is a good idea that would improve things. I think, 
however, it would be troublesome to implement because expressions are 
typed bottom-up. The need here is to "teleport" type information from 
the assignment node to the addition node, which is downwards. And I'm 
not sure how this would generalize to other operators beyond "=".



Andrei


Hmm... why can't multiple expressions be built simultaneously and then 
the best chosen once the assignment/function call/etc is reached? This 
would also have the benefit of paving the way for polysemous values & 
expressions.


Anything can be done... in infinite time with infinite resources. :o)

Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu  
 wrote:



Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.

 Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and  
not a bug. In the new rules int is special, in this suggestion, it's  
not.


I think this is a good idea that would improve things. I think, however,  
it would be troublesome to implement because expressions are typed  
bottom-up. The need here is to "teleport" type information from the  
assignment node to the addition node, which is downwards. And I'm not  
sure how this would generalize to other operators beyond "=".



Andrei


Hmm... why can't multiple expressions be built simultaneously and then the  
best chosen once the assignment/function call/etc is reached? This would  
also have the benefit of paving the way for polysemous values &  
expressions.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Derek Parnell wrote:

Here is where I propose having a signal to the compiler about which
specific variables I'm worried about, and if I code an assignment to one of
these that can potentially overflow, then the compiler must issue a
message. 


You can implement that as a library. In fact I wanted to do it for 
Phobos for a long time. I've discussed it in this group too (to an 
unusual consensus), but I forgot the thread's title and stupid 
Thunderbird "download 500 headers at a time forever even long after I have 
changed that idiotic default option" won't let me find it.



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on July 7 at 16:54, you wrote:
> Leandro Lucarella wrote:
> >Andrei Alexandrescu, on July 7 at 15:12, you wrote:
> >>Leandro Lucarella wrote:
> >>>Andrei Alexandrescu, on July 7 at 10:56, you wrote:
> Leandro Lucarella wrote:
> >This seems nice. I think it would be nice if this kind of things are
> >commented in the NG before a compiler release, to allow community input
> >and discussion.
> Yup, that's what happened to case :o).
> 
> >I think this kind of things are the ones that deserves some kind of RFC
> >(like Python PEPs) like someone suggested a couple of days ago.
> I think that's a good idea. Who has the time and resources to set that up?
> >>>What's wrong with the Wiki?
> >>Where's the link?
> >I mean the D Wiki!
> >http://prowiki.org/wiki4d/wiki.cgi
> >(BTW, nice job with the Wiki for whoever did it, I don't remember who was
> >putting a lot of work on improving the Wiki, but it's really much better
> >organized now)
> >I think we can add a DIP (D Improvement Proposal =) section in the
> >"Language Development" section:
> >http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
> 
> Great idea. I can only hope the technical level will be much higher than
> the two threads related to switch.

I think proposals should be published there but discussed here, so be
ready for all kinds of discussions (the ones you like and the ones you
don't =). From time to time, when there is some kind of agreement, the
proposal should be updated (with a new "revision number").

I just went wild and added a DIP index[1] and the first DIP (DIP1),
a template for creating new DIPs[2].

These are just rough drafts, but I think they are good enough to start
with. Comments are appreciated.

I will post a "formal" announcement too.

[1] http://www.prowiki.org/wiki4d/wiki.cgi?DiPs
[2] http://www.prowiki.org/wiki4d/wiki.cgi?DiP1

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

"The Guinness Book of Records" holds the record for being the most
stolen book in public libraries


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Andrei Alexandrescu wrote:

Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.


Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and 
not a bug. In the new rules int is special, in this suggestion, it's not.


I think this is a good idea that would improve things. I think, however, 
it would be troublesome to implement because expressions are typed 
bottom-up. The need here is to "teleport" type information from the 
assignment node to the addition node, which is downwards. And I'm not 
sure how this would generalize to other operators beyond "=".


It's also troublesome because it would silently produce different 
answers than C would.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 19:39:55 -0500, Andrei Alexandrescu wrote:

> Nick Sabalausky wrote:
>> "bearophile"  wrote in message 
>> news:h3093m$2mu...@digitalmars.com...
>>> Before adding a feature X let's discuss them, ... If not enough people 
>>> like a solution then let's not add it.
>> 
>> Something like that was attempted once before. Andrei didn't like what we 
>> had to say, got huffy, and withdrew from the discussion. Stay tuned for the 
>> exciting sequel where the feature goes ahead as planned anyway, and our 
>> protagonists get annoyed that people still have objections to it. 
> 
> Put yourself in my place. What would you do? Honest. Sometimes I find it 
> difficult to find the right mix of being honest, being technically 
> accurate, being polite, and not wasting too much time explaining myself.
> 
> Andrei

Ditto.

We know that the development of the D language is not a democratic process,
and that's fine. Really, it is. However, clear rationale for decisions made
would go a long way to helping reduce dissent, as would some
pre-announcements to avoid surprises. 

By the way, I appreciate that you guys are now closing off bugzilla issues
before the release of their fix implementation. It a good heads-up and
demonstrates activity in between releases. Well done.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:

long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.


Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and 
not a bug. In the new rules int is special, in this suggestion, it's not.


I think this is a good idea that would improve things. I think, however, 
it would be troublesome to implement because expressions are typed 
bottom-up. The need here is to "teleport" type information from the 
assignment node to the addition node, which is downwards. And I'm not 
sure how this would generalize to other operators beyond "=".



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Nick Sabalausky wrote:
"bearophile"  wrote in message 
news:h3093m$2mu...@digitalmars.com...
Before adding a feature X let's discuss them, ... If not enough people 
like a solution then let's not add it.


Something like that was attempted once before. Andrei didn't like what we 
had to say, got huffy, and withdrew from the discussion. Stay tuned for the 
exciting sequel where the feature goes ahead as planned anyway, and our 
protagonists get annoyed that people still have objections to it. 


Put yourself in my place. What would you do? Honest. Sometimes I find it 
difficult to find the right mix of being honest, being technically 
accurate, being polite, and not wasting too much time explaining myself.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 18:10:24 -0400, Robert Jacques wrote:

> On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell  wrote:
> 
>> On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:
>>
>>
>>> Well, how often does everyone else use bytes?
>>
>> Cryptography, in my case.
>>
> 
> Cool. If you don't mind, what's your take on the new rules? (As different use  
> cases and points of view are very valuable)

By new rules you mean the ones implemented in D 2.031?

I'm not sure yet. I need to use them more in practice to see how they sort
themselves out. It seems that what they are trying to do is predict runtime
behaviour at compile time and make the appropriate (as defined by Walter)
steps to avoid runtime errors.

Anyhow, and be warned that I'm just thinking out loud here, we could have a
scheme where the coder explicitly tells the compiler that, in certain
specific sections of code, the coder would like to have runtime checking of
overflow situations added by the compiler. Something like ...

   byte a,b,c;

   try {
 a = b + c;
   }
   catch (OverflowException e) { ... }

and in this situation the compiler would not give a message, because I've
instructed the compiler to generate runtime checking.

The problem we would now have though is balancing the issuing-of-messages
with the ease-of-coding. It seems that the most common kind of assignment
is where the LHS type is the same as the RHS type(s), so we don't want to
make that any harder to code. But clearly, this is also the most common
source of potential overflows. Ok, let's assume that we don't want the D
compiler to be our nanny; that we are adults and understand stuff. This now
leads me to think that unless the coder says differently, the compiler
should be silent about potential overflows. 

The "try .. catch" example above is verbose, however it does scream
"run-time checking" to me so it is probably worth the effort. The only
remaining issue for me is how to catch accidental overflows in the special
cases where I, as a responsible coder, knowingly wish to avoid.

Here is where I propose having a signal to the compiler about which
specific variables I'm worried about, and if I code an assignment to one of
these that can potentially overflow, then the compiler must issue a
message. 

NOTE BENE: For the purposes of these examples, I use the word "guard" as
the signal for the compiler to guard against overflows. I don't care so
much about which specific signalling method could be adopted. This is still
conceptual stuff, okay?

   guard byte a; // I want this byte guarded.
   byte b,c; // I don't care about these bytes.

   a = 3 + 29; // No message 'cos 32 fits into a byte.
   a = b + c;  // Message 'cos it could overflow.
   a = cast(byte)(b + c);  // No message 'cos cast overrides messages.
   a++; // Message - overflow is possible.
   a += 1; // Message - overflow is possible.
   a = a + 1 // Message - overflow is possible.
   a = cast(byte)a + 1;  // No message 'cos cast overrides messages.

And for a really smart compiler ...

   a = 0; 
   a++; // No message as it can determine that the run time value
// at this point in time is okay.

   for (a = 'a'; a <= 'z'; a++) // Still no message.

Additionally, I'm pretty certain that I think ...

  auto x = y + z;

should ensure that 'x' is a type that will always be able to hold any value
from (y.min + z.min) to (y.max + z.max) inclusive. 

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Nick Sabalausky wrote:
"Andrei Alexandrescu"  wrote in message 
news:h30907$2lk...@digitalmars.com...

Nick Sabalausky wrote:
"Andrei Alexandrescu"  wrote in message 
news:h2vprn$1t7...@digitalmars.com...
This is a different beast. We simply couldn't devise a satisfactory 
scheme within the constraints we have. No simple solution we could think 
of has worked, nor have a number of sophisticated solutions. Ideas would 
be welcome, though I need to warn you that the devil is in the details 
so the ideas must be fully baked; too many good sounding high-level 
ideas fail when analyzed in detail.


I assume then that you've looked at something like C#'s checked/unchecked 
scheme and someone's (I forget who) idea of expanding that to something 
like unchecked(overflow, sign)? What was wrong with those sorts of 
things?
An unchecked-based approach was not on the table. Our focus was more on 
checking things properly, instead of over-checking and then relying on 
"unchecked" to disable that.




C#'s scheme supports the opposite as well. Not checking for the stuff where 
you mostly don't care, and then "checked" to enable the checks in the spots 
where you do care. And then there's been the suggestions for finer-grained 
control for wherever that's needed. 


Well unfortunately that all wasn't considered. If properly championed, 
it would. I personally consider the current approach superior because 
it's safe and unobtrusive.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Nick Sabalausky
"Andrei Alexandrescu"  wrote in message 
news:h30907$2lk...@digitalmars.com...
> Nick Sabalausky wrote:
>> "Andrei Alexandrescu"  wrote in message 
>> news:h2vprn$1t7...@digitalmars.com...
>>> This is a different beast. We simply couldn't devise a satisfactory 
>>> scheme within the constraints we have. No simple solution we could think 
>>> of has worked, nor have a number of sophisticated solutions. Ideas would 
>>> be welcome, though I need to warn you that the devil is in the details 
>>> so the ideas must be fully baked; too many good sounding high-level 
>>> ideas fail when analyzed in detail.
>>>
>>
>> I assume then that you've looked at something like C#'s checked/unchecked 
>> scheme and someone's (I forget who) idea of expanding that to something 
>> like unchecked(overflow, sign)? What was wrong with those sorts of 
>> things?
>
> An unchecked-based approach was not on the table. Our focus was more on 
> checking things properly, instead of over-checking and then relying on 
> "unchecked" to disable that.
>

C#'s scheme supports the opposite as well. Not checking for the stuff where 
you mostly don't care, and then "checked" to enable the checks in the spots 
where you do care. And then there's been the suggestions for finer-grained 
control for wherever that's needed. 




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Nick Sabalausky
"bearophile"  wrote in message 
news:h3093m$2mu...@digitalmars.com...
> Before adding a feature X let's discuss them, ... If not enough people 
> like a solution then let's not add it.

Something like that was attempted once before. Andrei didn't like what we 
had to say, got huffy, and withdrew from the discussion. Stay tuned for the 
exciting sequel where the feature goes ahead as planned anyway, and our 
protagonists get annoyed that people still have objections to it. 




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Robert Jacques wrote:
The new rules are definitely an improvement over C, but they make 
byte/ubyte/short/ushort second class citizens, because practically every 
assignment requires a cast:

byte a,b,c;
c = cast(byte) a + b;


They've always been second class citizens, as their types keep getting 
promoted to int. They've been second class on the x86 CPUs, too, as 
short operations tend to be markedly slower than the corresponding int 
operations.


And if it weren't for compatibility issues, it would almost be worth it 
to remove them completely.


Shorts and bytes are very useful in arrays and data structures, but 
aren't worth much as local variables. If I see a:


short s;

as a local, it always raises an eyebrow with me that there's a lurking bug.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Andrei Alexandrescu wrote:

Nick Sabalausky wrote:
I assume then that you've looked at something like C#'s 
checked/unchecked scheme and someone's (I forget who) idea of 
expanding that to something like unchecked(overflow, sign)? What was 
wrong with those sorts of things? 


An unchecked-based approach was not on the table. Our focus was more on 
checking things properly, instead of over-checking and then relying on 
"unchecked" to disable that.


We also should be careful not to turn D into a "bondage and discipline" 
language that nobody will use unless contractually forced to.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 21:20:42 +0200, "Jérôme M. Berger" wrote:

> Andrei Alexandrescu wrote:
>> Jérôme M. Berger wrote:
>>> Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
> Andrei Alexandrescu wrote:
>> Derek Parnell wrote:
>>> It seems that D would benefit from having a standard syntax format 
>>> for
>>> expressing various range sets;
>>>  a. Include begin Include end, i.e. []
>>>  b. Include begin Exclude end, i.e. [)
>>>  c. Exclude begin Include end, i.e. (]
>>>  d. Exclude begin Exclude end, i.e. ()
>>
>> I'm afraid this would majorly mess with pairing of parens.
>>
> I think Derek's point was to have *some* syntax to mean this, 
> not necessarily the one he showed (which he showed because I believe 
> that's the "standard" mathematical way to express it for English 
> speakers). For example, we could say that [] is always inclusive and 
> have another character which makes it exclusive like:
>  a. Include begin Include end, i.e. [  a .. b  ]
>  b. Include begin Exclude end, i.e. [  a .. b ^]
>  c. Exclude begin Include end, i.e. [^ a .. b  ]
>  d. Exclude begin Exclude end, i.e. [^ a .. b ^]

 I think Walter's message really rendered the whole discussion moot. 
 Post of the year:

 =
 I like:

a .. b+1

 to mean inclusive range.
 =

 Consider "+1]" a special symbol that means the range is to be closed 
 to the right :o).

>>> Ah, but:
>>>  - This is inconsistent between the left and right limit;
>>>  - This only works for integers, not for floating point numbers.
>> 
>> How does it not work for floating point numbers?
>> 
>   Is that a trick question? Depending on the actual value of b, you 
> might have b+1 == b (if b is large enough). Conversely, range a .. 
> b+1 may contain a lot of extra numbers I may not want to include 
> (like b+0.5)...
> 
>   Jerome

If Andrei is not joking (the smiley notwithstanding), the "+1" doesn't mean
add one to the previous expression; instead, it means that the previous
expression's value is the last value in the range set.

Subtle, no?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 13:16:14 -0500, Andrei Alexandrescu wrote:


> Safe D is concerned with memory safety only.

That's a pity. Maybe it should be renamed to Partially-Safe D, or Safe-ish
D, Memory-Safe D, or ...  well you get the point. Could be misleading for
the great unwashed.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 14:16:12 -0500, Andrei Alexandrescu wrote:

> Bill Baxter wrote:
>> 2009/7/7 Andrei Alexandrescu :
>>> I think Walter's message really rendered the whole discussion moot. Post of
>>> the year:
>>>
>>> =
>>> I like:
>>>
>>>   a .. b+1
>>>
>>> to mean inclusive range.
>>> =
>> 
>> Not everything is an integer.
> 
> Works with pointers too.

A pointer is an integer because the byte it is referring to always has an
integral address value. Pointers do not point to partial bytes.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques

On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell  wrote:


On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:



Well, how often does everyone else use bytes?


Cryptography, in my case.



Cool. If you don't mind, what's your take on the new rules? (As different use  
cases and points of view are very valuable.)




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 20:13:45 +0200, "Jérôme M. Berger" wrote:

> Andrei Alexandrescu wrote:
>> Derek Parnell wrote:
>>> It seems that D would benefit from having a standard syntax format for
>>> expressing various range sets;
>>>  a. Include begin Include end, i.e. []
>>>  b. Include begin Exclude end, i.e. [)
>>>  c. Exclude begin Include end, i.e. (]
>>>  d. Exclude begin Exclude end, i.e. ()
>> 
>> I'm afraid this would majorly mess with pairing of parens.
>> 
>   I think Derek's point was to have *some* syntax to mean this, not 
> necessarily the one he showed 

Thank you, Jérôme. I got too frustrated to explain it well enough.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Derek Parnell
On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:


> Well, how often does everyone else use bytes?

Cryptography, in my case.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Leandro Lucarella wrote:

Andrei Alexandrescu, on 7 July at 15:12 you wrote:

Leandro Lucarella wrote:

Andrei Alexandrescu, on 7 July at 10:56 you wrote:

Leandro Lucarella wrote:

This seems nice. I think it would be nice if this kind of thing were
commented on in the NG before a compiler release, to allow community input
and discussion.

Yup, that's what happened to case :o).


I think this kind of thing is the sort that deserves some kind of RFC
(like Python PEPs), as someone suggested a couple of days ago.

I think that's a good idea. Who has the time and resources to set that up?

What's wrong with the Wiki?

Where's the link?


I mean the D Wiki!
http://prowiki.org/wiki4d/wiki.cgi

(BTW, nice job with the Wiki for whoever did it, I don't remember who was
putting a lot of work on improving the Wiki, but it's really much better
organized now)

I think we can add a DIP (D Improvement Proposal =) section in the
"Language Development" section:
http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel


Great idea. I can only hope the technical level will be much higher than 
the two threads related to switch.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on 7 July at 15:12 you wrote:
> Leandro Lucarella wrote:
> >Andrei Alexandrescu, on 7 July at 10:56 you wrote:
> >>Leandro Lucarella wrote:
> >>>This seems nice. I think it would be nice if this kind of thing were
> >>>commented on in the NG before a compiler release, to allow community input
> >>>and discussion.
> >>Yup, that's what happened to case :o).
> >>
> >>>I think this kind of thing is the sort that deserves some kind of RFC
> >>>(like Python PEPs), as someone suggested a couple of days ago.
> >>I think that's a good idea. Who has the time and resources to set that up?
> >What's wrong with the Wiki?
> 
> Where's the link?

I mean the D Wiki!
http://prowiki.org/wiki4d/wiki.cgi

(BTW, nice job with the Wiki for whoever did it, I don't remember who was
putting a lot of work on improving the Wiki, but it's really much better
organized now)

I think we can add a DIP (D Improvement Proposal =) section in the
"Language Development" section:
http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

Not even the sky loves me anymore, not even death visits me
Not even the sun warms me anymore, not even the wind caresses me


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 14:16:14 -0400, Andrei Alexandrescu  
 wrote:



Robert Jacques wrote:
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu  
 wrote:

Robert Jacques wrote:
 Andrei, I have a short vector template (think vec!(byte,3), etc)  
where I've had to wrap the majority lines of code in cast(T)( ... ),  
because I support bytes and shorts. I find that both a kludge and a  
pain.


Well suggestions for improving things are welcome. But I don't think  
it will fly to make int+int yield a long.

 Suggestion 1:
Loft the right hand of the expression (when lofting is valid) to the  
size of the left hand. i.e.


What does loft mean in this context?


Sorry. loft <=> up-casting. i.e.
byte => short => int => long => cent? => bigInt?


byte a,b,c;
c = a + b;  => c = a + b;


Unsafe.


So is int + int or long + long. Or float + float for that matter. My point  
is that if a programmer is assigning a value to a byte (or short or int or  
long) then they are willing to accept the associated over/under flow  
errors of that type.



short d;
d = a + b;  => d = cast(short) a + cast(short) b;


Should work today modulo bugs.


int e, f;
e = a + b;  => e = cast(short) a + cast(short) b;


Why cast to short? e has type int.


Oops. You're right. (I was thinking of the new rules, not my suggestion)
Should be:
e = a + b;  => e = cast(int) a + cast(int) b;

e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) +  
cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int)d);


I don't understand this.


Same "Opps. You're right." as above.
e = a + b + d; => e = cast(int) a + cast(int) b + cast(int) d;


long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.


Wrong. I just tested this and what happens today is:
g = cast(long)(e+f);
And this is (I think) correct behavior according to the new rules and not  
a bug. In the new rules int is special; in this suggestion, it's not.


When choosing operator overloads or auto, prefer the ideal lofted  
interpretation (as per the new rules, but without the exception for  
int/long), over truncated variants. i.e.

auto h = a + b; => short h = cast(short) a + cast(short) b;


This would yield semantics incompatible with C expressions.


How so?
The auto rule is identical to the "new rules".
The overload rule is identical to the "new rules", except when no match  
can be found, in which case it tries to "relax" the expression to a  
smaller number of bits.


This would also properly handle some of the corner/inconsistent cases  
with the current rules:

ubyte  i;
ushort j;
j = -i;    => j = -cast(short)i; (This currently evaluates to j =  
cast(short)(-i);)


That should not compile, sigh. Walter wouldn't listen...


And
a += a;
is equivalent to
a = a + a;


Well not quite equivalent. In D2 they aren't. The former clarifies that  
you want to reassign the expression to a, and no cast is necessary. The  
latter would not compile if a is shorter than int.


I understand, but that dichotomy increases the cognitive load on the  
programmer. Also, there's the issue of

byte x;
++x;
which is defined in the spec as being equivalent to
x = x + 1;


and is logically consistent with
byte[] k,l,m;
m[] = k[] + l[];
 Essentially, instead of trying to prevent overflows, except for those  
from int and long, this scheme attempts to minimize the risk of  
overflows, including those from int (and long, once cent exists. Maybe  
long+long=>bigInt?)


But if you close operations for types smaller than int, you end up with  
a scheme even more error-prone than C!


Since C (IIRC) always evaluates "x+x" in the manner most prone to causing  
overflows, no matter the type, a scheme can't be more error-prone than C  
(at the instruction level). However, it can be less consistent, which I  
grant can lead to higher level logic errors. (BTW, operations for types  
smaller than int are closed (by my non-mathy definition) in C)


The new rules are definitely an improvement over C, but they make  
byte/ubyte/short/ushort second class citizens, because practically every  
assignment requires a cast:

byte a,b,c;
c = cast(byte) a + b;
And if it weren't for compatibility issues, it would almost be worth it to  
remove them completely.






Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Walter Bright wrote:

Andrei Alexandrescu wrote:

Bill Baxter wrote:

2009/7/7 Andrei Alexandrescu :
I think Walter's message really rendered the whole discussion moot. 
Post of

the year:

=
I like:

  a .. b+1

to mean inclusive range.
=


Not everything is an integer.


Works with pointers too.


It works for the cases where an inclusive range makes sense.


Doesn't work with floats, which *do* make sense too...

Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:
 - A floating point range should allow you to specify the iteration 
step, or else it should allow you to iterate through all numbers that 
can be represented with the corresponding precision;


We don't have that, so you'd need to use a straight for statement.



struct FloatRange {
   float begin, end, step;
   bool includeBegin, includeEnd;

   int opApply (int delegate (ref float) dg) {
  whatever;
   }

   whatever;
}

 - The second issue remains: what if I want to include b but not b+ε 
for any ε>0?


real a, b;
...
for (real f = a; f <= b; update(f))
{
}

I'd find it questionable to use ranged for with floats anyway.

So would I. But a range of floats is useful for more than iterating 
over it. Think interval arithmetic for example.


Cool. I'm positive that open ranges will not prevent you from 
implementing such a library (and from subsequently proposing it to 
Phobos :o)).



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:
 - A floating point range should allow you to specify the iteration 
step, or else it should allow you to iterate through all numbers that 
can be represented with the corresponding precision;


We don't have that, so you'd need to use a straight for statement.



struct FloatRange {
   float begin, end, step;
   bool includeBegin, includeEnd;

   int opApply (int delegate (ref float) dg) {
  whatever;
   }

   whatever;
}

 - The second issue remains: what if I want to include b but not b+ε 
for any ε>0?


real a, b;
...
for (real f = a; f <= b; update(f))
{
}

I'd find it questionable to use ranged for with floats anyway.

	So would I. But a range of floats is useful for more than iterating 
over it. Think interval arithmetic for example.


Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Leandro Lucarella wrote:

Andrei Alexandrescu, on 7 July at 10:56 you wrote:

Leandro Lucarella wrote:

This seems nice. I think it would be nice if this kind of thing were
commented on in the NG before a compiler release, to allow community input
and discussion.

Yup, that's what happened to case :o).


I think this kind of thing is the sort that deserves some kind of RFC
(like Python PEPs), as someone suggested a couple of days ago.

I think that's a good idea. Who has the time and resources to set that up?


What's wrong with the Wiki?


Where's the link?

Andrei



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

bearophile wrote:

Andrei Alexandrescu:

How often did you encounter that issue?


Please, let's be serious, and let's stop adding special cases to D,
or they will kill the language.


Don't get me going about what could kill the language.


Lately I have seen too many special
cases. For example the current design of the rules of integral seems
bad. It has bugs and special cases from the start.


Bugs don't imply that the feature is bad. The special cases are well
understood and are present in all of C, C++, C#, and Java.

Value range propagation as defined in D is principled and puts D on the 
right side of both safety and speed. It's better than all other 
languages mentioned above: safer than C and C++, and requiring much 
fewer casts than C# and Java.



Andrei
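A toy sketch of what value range propagation does, in Python with hypothetical helper names (an illustration of the idea, not the compiler's implementation): each expression gets a conservative interval, and a narrowing assignment is allowed without a cast only when the interval fits the target type.

```python
# Toy interval analysis in the spirit of value range propagation:
# an expression's computed range decides whether it may be assigned
# to a narrower type without an explicit cast.

def range_and(a_max, mask):
    """x & mask can never exceed mask (for non-negative x)."""
    return (0, min(a_max, mask))

def fits(rng, type_min, type_max):
    lo, hi = rng
    return type_min <= lo and hi <= type_max

UINT_MAX = 2**32 - 1
UBYTE_MAX = 255

# 'x & 0xFF' has range [0, 255]: assignable to ubyte without a cast.
assert fits(range_and(UINT_MAX, 0xFF), 0, UBYTE_MAX)

# Plain 'x' has range [0, uint.max]: requires an explicit cast.
assert not fits((0, UINT_MAX), 0, UBYTE_MAX)
```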



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on 7 July at 10:56 you wrote:
> Leandro Lucarella wrote:
> >This seems nice. I think it would be nice if this kind of thing were
> >commented on in the NG before a compiler release, to allow community input
> >and discussion.
> 
> Yup, that's what happened to case :o).
> 
> >I think this kind of thing is the sort that deserves some kind of RFC
> >(like Python PEPs), as someone suggested a couple of days ago.
> 
> I think that's a good idea. Who has the time and resources to set that up?

What's wrong with the Wiki?

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

I've walked many roads, many roads I've walked; Chile has its good
wine and Sweden, the cod. The United States has the hot dog, Cuba has the
mojito, Guatemala the cornalito, and Brazil the feijoada.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread bearophile
Andrei Alexandrescu:
> How often did you encounter that issue?

Please, let's be serious, and let's stop adding special cases to D, or they 
will kill the language.
Lately I have seen too many special cases.
For example the current design of the rules of integral seems bad. It has bugs 
and special cases from the start.
The .. used in case is another special case, even if Andrei is blind regarding 
that, and doesn't see its problem.
Why don't people here, for a change, stop implementing things, and start implementing a 
feature only after 55-60+% of the people think it's a good idea?
Languages like C# and Scala show several features worth copying; let's copy 
them, and let's not add any more half-baked things. Before adding a feature X, 
let's discuss it; let's create a forum or place to keep a thread for each 
feature plus a wiki-based text of the best solution found, etc. If not enough 
people like a solution then let's not add it. Better not having a feature than 
having a bad one, see Python that even today misses basic things like a 
switch/case.

Bye,
bearophile


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Nick Sabalausky wrote:
"Andrei Alexandrescu"  wrote in message 
news:h2vprn$1t7...@digitalmars.com...
This is a different beast. We simply couldn't devise a satisfactory scheme 
within the constraints we have. No simple solution we could think of has 
worked, nor have a number of sophisticated solutions. Ideas would be 
welcome, though I need to warn you that the devil is in the details so the 
ideas must be fully baked; too many good sounding high-level ideas fail 
when analyzed in detail.




I assume then that you've looked at something like C#'s checked/unchecked 
scheme and someone's (I forget who) idea of expanding that to something like 
unchecked(overflow, sign)? What was wrong with those sorts of things? 


An unchecked-based approach was not on the table. Our focus was more on 
checking things properly, instead of over-checking and then relying on 
"unchecked" to disable that.


Andrei



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jérôme M. Berger wrote:
 - A floating point range should allow you to specify the iteration 
step, or else it should allow you to iterate through all numbers that 
can be represented with the corresponding precision;


We don't have that, so you'd need to use a straight for statement.

 - The second issue remains: what if I want to include b but not b+ε for 
any ε>0?


real a, b;
...
for (real f = a; f <= b; update(f))
{
}

I'd find it questionable to use ranged for with floats anyway.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Nick Sabalausky
"Andrei Alexandrescu"  wrote in message 
news:h2vprn$1t7...@digitalmars.com...
>
> This is a different beast. We simply couldn't devise a satisfactory scheme 
> within the constraints we have. No simple solution we could think of has 
> worked, nor have a number of sophisticated solutions. Ideas would be 
> welcome, though I need to warn you that the devil is in the details so the 
> ideas must be fully baked; too many good sounding high-level ideas fail 
> when analyzed in detail.
>

I assume then that you've looked at something like C#'s checked/unchecked 
scheme and someone's (I forget who) idea of expanding that to something like 
unchecked(overflow, sign)? What was wrong with those sorts of things? 




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:
It seems that D would benefit from having a standard syntax 
format for

expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, 
not necessarily the one he showed (which he showed because I 
believe that's the "standard" mathematical way to express it for 
English speakers). For example, we could say that [] is always 
inclusive and have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. 
Post of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be 
closed to the right :o).



Ah, but:
 - This is inconsistent between the left and right limit;
 - This only works for integers, not for floating point numbers.


How does it not work for floating point numbers?

Is that a trick question? Depending on the actual value of b, you 
might have b+1 == b (if b is large enough). Conversely, range a .. b+1 
may contain a lot of extra numbers I may not want to include (like 
b+0.5)...


It wasn't a trick question, or it was of sorts. If you iterate with e.g. 
foreach through a floating-point range that has b == b + 1, you're bound 
to get in a lot of trouble because the running variable will be 
incremented.



Well:
 - A floating point range should allow you to specify the iteration 
step, or else it should allow you to iterate through all numbers 
that can be represented with the corresponding precision;
 - The second issue remains: what if I want to include b but not 
b+ε for any ε>0?


Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Walter Bright

Andrei Alexandrescu wrote:

Bill Baxter wrote:

2009/7/7 Andrei Alexandrescu :
I think Walter's message really rendered the whole discussion moot. 
Post of

the year:

=
I like:

  a .. b+1

to mean inclusive range.
=


Not everything is an integer.


Works with pointers too.


It works for the cases where an inclusive range makes sense.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Leandro Lucarella wrote:

Andrei Alexandrescu, on 7 July at 13:18 you wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:

It seems that D would benefit from having a standard syntax format for
expressing various range sets;
a. Include begin Include end, i.e. []
b. Include begin Exclude end, i.e. [)
c. Exclude begin Include end, i.e. (]
d. Exclude begin Exclude end, i.e. ()

I'm afraid this would majorly mess with pairing of parens.

   I think Derek's point was to have *some* syntax to mean this, not 
necessarily the one he showed (which he showed because I believe that's the 
"standard" mathematical way to express it for English speakers). For example, we 
could say that [] is always inclusive and have another character which makes it 
exclusive like:

a. Include begin Include end, i.e. [  a .. b  ]
b. Include begin Exclude end, i.e. [  a .. b ^]
c. Exclude begin Include end, i.e. [^ a .. b  ]
d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the 
year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed to the right 
:o).


What about bearophile response: what about x..uint.max+1?


How often did you encounter that issue?


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:
It seems that D would benefit from having a standard syntax 
format for

expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, 
not necessarily the one he showed (which he showed because I 
believe that's the "standard" mathematical way to express it for 
English speakers). For example, we could say that [] is always 
inclusive and have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. 
Post of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed 
to the right :o).



Ah, but:
 - This is inconsistent between the left and right limit;
 - This only works for integers, not for floating point numbers.


How does it not work for floating point numbers?

Is that a trick question? Depending on the actual value of b, you 
might have b+1 == b (if b is large enough). Conversely, range a .. b+1 
may contain a lot of extra numbers I may not want to include (like 
b+0.5)...


It wasn't a trick question, or it was of sorts. If you iterate with e.g. 
foreach through a floating-point range that has b == b + 1, you're bound 
to get in a lot of trouble because the running variable will be 
incremented.



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on 7 July at 13:18 you wrote:
> Jérôme M. Berger wrote:
> >Andrei Alexandrescu wrote:
> >>Derek Parnell wrote:
> >>>It seems that D would benefit from having a standard syntax format for
> >>>expressing various range sets;
> >>> a. Include begin Include end, i.e. []
> >>> b. Include begin Exclude end, i.e. [)
> >>> c. Exclude begin Include end, i.e. (]
> >>> d. Exclude begin Exclude end, i.e. ()
> >>
> >>I'm afraid this would majorly mess with pairing of parens.
> >>
> >I think Derek's point was to have *some* syntax to mean this, not 
> >necessarily the one he showed (which he showed because I believe that's the 
> >"standard" mathematical way to express it for English speakers). For 
> >example, we 
> >could say that [] is always inclusive and have another character which makes 
> >it 
> >exclusive like:
> > a. Include begin Include end, i.e. [  a .. b  ]
> > b. Include begin Exclude end, i.e. [  a .. b ^]
> > c. Exclude begin Include end, i.e. [^ a .. b  ]
> > d. Exclude begin Exclude end, i.e. [^ a .. b ^]
> 
> I think Walter's message really rendered the whole discussion moot. Post of 
> the 
> year:
> 
> =
> I like:
> 
>a .. b+1
> 
> to mean inclusive range.
> =
> 
> Consider "+1]" a special symbol that means the range is to be closed to the 
> right 
> :o).

What about bearophile response: what about x..uint.max+1?

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

More than 50% of the people in the world have never made
Or received a telephone call


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:
It seems that D would benefit from having a standard syntax format 
for

expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, 
not necessarily the one he showed (which he showed because I believe 
that's the "standard" mathematical way to express it for English 
speakers). For example, we could say that [] is always inclusive and 
have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. 
Post of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed 
to the right :o).



Ah, but:
 - This is inconsistent between the left and right limit;
 - This only works for integers, not for floating point numbers.


How does it not work for floating point numbers?

	Is that a trick question? Depending on the actual value of b, you 
might have b+1 == b (if b is large enough). Conversely, range a .. 
b+1 may contain a lot of extra numbers I may not want to include 
(like b+0.5)...


Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:

It seems that D would benefit from having a standard syntax format for
expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, not 
necessarily the one he showed (which he showed because I believe 
that's the "standard" mathematical way to express it for English 
speakers). For example, we could say that [] is always inclusive and 
have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. 
Post of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed 
to the right :o).



Ah, but:
 - This is inconsistent between the left and right limit;
 - This only works for integers, not for floating point numbers.


How does it not work for floating point numbers?

Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Bill Baxter wrote:

2009/7/7 Andrei Alexandrescu :

I think Walter's message really rendered the whole discussion moot. Post of
the year:

=
I like:

  a .. b+1

to mean inclusive range.
=


Not everything is an integer.


Works with pointers too.

Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Bill Baxter
2009/7/7 Andrei Alexandrescu :
> I think Walter's message really rendered the whole discussion moot. Post of
> the year:
>
> =
> I like:
>
>   a .. b+1
>
> to mean inclusive range.
> =

Not everything is an integer.

--bb


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread bearophile
Andrei Alexandrescu:
> Safe D is concerned with memory safety only.

And hopefully you will understand that is wrong :-)

Bye,
bearophile


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread bearophile
Andrei Alexandrescu:
> I think Walter's message really rendered the whole discussion moot. Post 
> of the year:
> =
> I like:
> a .. b+1
> to mean inclusive range.

That was my preferred solution, starting from months ago.

Bye,
bearophile


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Andrei Alexandrescu wrote:

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:

It seems that D would benefit from having a standard syntax format for
expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, not 
necessarily the one he showed (which he showed because I believe 
that's the "standard" mathematical way to express it for English 
speakers). For example, we could say that [] is always inclusive and 
have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. Post 
of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed to 
the right :o).



Ah, but:
 - This is inconsistent between the left and right limit;
 - This only works for integers, not for floating point numbers.

Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Robert Jacques wrote:
On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu 
 wrote:

Robert Jacques wrote:
BTW: this means byte and short are not closed under arithmetic 
operations, which drastically limits their usefulness.


I think they shouldn't be closed because they overflow for relatively 
small values.


Andrei, consider anyone who wants to do image manipulation (or computer 
vision, video, etc.). Since images are one of the few areas that use 
bytes extensively, and have to map back into themselves, they are 
basically sorely out of luck.


	Wrong example: in most cases, when doing image manipulations, you 
don't want the overflow to wrap but instead to be clipped. Having 
the compiler notify you when there is a risk of an overflow and 
require you to be explicit in how you want it to be handled is 
actually a good thing IMO.


Jerome
--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu 
 wrote:

Robert Jacques wrote:
 Andrei, I have a short vector template (think vec!(byte,3), etc) 
where I've had to wrap the majority lines of code in cast(T)( ... ), 
because I support bytes and shorts. I find that both a kludge and a 
pain.


Well suggestions for improving things are welcome. But I don't think 
it will fly to make int+int yield a long.


Suggestion 1:
Loft the right hand of the expression (when lofting is valid) to the 
size of the left hand. i.e.


What does loft mean in this context?


byte a,b,c;
c = a + b;  => c = a + b;


Unsafe.


short d;
d = a + b;  => d = cast(short) a + cast(short) b;


Should work today modulo bugs.


int e, f;
e = a + b;  => e = cast(short) a + cast(short) b;


Why cast to short? e has type int.

e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + 
cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int)d);


I don't understand this.


long g;
g = e + f;  => d = cast(long) e + cast(long) f;


Works today.

When choosing operator overloads or auto, prefer the ideal lofted 
interpretation (as per the new rules, but without the exception for 
int/long), over truncated variants. i.e.

auto h = a + b; => short h = cast(short) a + cast(short) b;


This would yield semantics incompatible with C expressions.

This would also properly handle some of the corner/inconsistent cases 
with the current rules:

ubyte  i;
ushort j;
j = -i; => j = -cast(short)i; (This currently evaluates to j = 
cast(short)(-i);)


That should not compile, sigh. Walter wouldn't listen...


And
a += a;
is equivalent to
a = a + a;


Well not quite equivalent. In D2 they aren't. The former clarifies that 
you want to reassign the expression to a, and no cast is necessary. The 
latter would not compile if a is shorter than int.



and is logically consistent with
byte[] k,l,m;
m[] = k[] + l[];

Essentially, instead of trying to prevent overflows, except for those 
from int and long, this scheme attempts to minimize the risk of 
overflows, including those from int (and long, once cent exists. Maybe 
long+long=>bigInt?)


But if you close operations for types smaller than int, you end up with 
a scheme even more error-prone than C!



Suggestion 2:
Enable the full rules as part of SafeD and allow non-promotion in 
un-safe D. Note this could be synergistically combined with Suggestion 1.


Safe D is concerned with memory safety only.


Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

Derek Parnell wrote:

It seems that D would benefit from having a standard syntax format for
expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

I think Derek's point was to have *some* syntax to mean this, not 
necessarily the one he showed (which he showed because I believe that's 
the "standard" mathematical way to express it for English speakers). For 
example, we could say that [] is always inclusive and have another 
character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


I think Walter's message really rendered the whole discussion moot. Post 
of the year:


=
I like:

   a .. b+1

to mean inclusive range.
=

Consider "+1]" a special symbol that means the range is to be closed to 
the right :o).



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jérôme M. Berger

Andrei Alexandrescu wrote:

Derek Parnell wrote:

It seems that D would benefit from having a standard syntax format for
expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()


I'm afraid this would majorly mess with pairing of parens.

	I think Derek's point was to have *some* syntax to mean this, not 
necessarily the one he showed (which he showed because I believe 
that's the "standard" mathematical way to express it for English 
speakers). For example, we could say that [] is always inclusive and 
have another character which makes it exclusive like:

 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]


Jerome

PS: If you *really* want messed parens pairing, try it with the 
French convention:   []   [[   ]]   ][   ;)

--
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu  
 wrote:

Robert Jacques wrote:
 Andrei, I have a short vector template (think vec!(byte,3), etc) where  
I've had to wrap the majority lines of code in cast(T)( ... ), because  
I support bytes and shorts. I find that both a kludge and a pain.


Well suggestions for improving things are welcome. But I don't think it  
will fly to make int+int yield a long.


Suggestion 1:
Loft the right hand of the expression (when lofting is valid) to the size  
of the left hand. i.e.


byte a,b,c;
c = a + b;  => c = a + b;

short d;
d = a + b;  => d = cast(short) a + cast(short) b;

int e, f;
e = a + b;  => e = cast(short) a + cast(short) b;
e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + cast(int)  
d; Or e = cast(int) a + (cast(int) b + cast(int)d);


long g;
g = e + f;  => d = cast(long) e + cast(long) f;

When choosing operator overloads or auto, prefer the ideal lofted  
interpretation (as per the new rules, but without the exception for  
int/long), over truncated variants. i.e.

auto h = a + b; => short h = cast(short) a + cast(short) b;

This would also properly handle some of the corner/inconsistent cases  
with the current rules:

ubyte  i;
ushort j;
j = -i; => j = -cast(short)i; (This currently evaluates to j =  
cast(short)(-i);)


And
a += a;
is equivalent to
a = a + a;
and is logically consistent with
byte[] k,l,m;
m[] = k[] + l[];

Essentially, instead of trying to prevent overflows, except for those from  
int and long, this scheme attempts to minimize the risk of overflows,  
including those from int (and long, once cent exists. Maybe  
long+long=>bigInt?)



Suggestion 2:
Enable the full rules as part of SafeD and allow non-promotion in un-safe  
D. Note this could be synergistically combined with Suggestion 1.





BTW: this means byte and short are not closed under arithmetic  
operations, which drastically limits their usefulness.


I think they shouldn't be closed because they overflow for relatively  
small values.
 Andrei, consider anyone who wants to do image manipulation (or computer  
vision, video, etc.). Since images are one of the few areas that use  
bytes extensively, and have to map back into themselves, they are  
basically sorely out of luck.


I understand, but also keep in mind that making small integers closed is  
the less safe option. So we'd be hurting everyone for the sake of the  
image manipulation folks.


Andrei


Well, how often does everyone else use bytes?


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Moritz Warning
On Tue, 07 Jul 2009 08:53:49 +0200, Lars T. Kyllingstad wrote:

> Ary Borenszweig wrote:
>> のしいか (noshiika) wrote:
>>> Thank you for the great work, Walter and all the other contributors.
>>>
>>> But I am a bit disappointed with the CaseRangeStatement syntax. Why is
>>> it
>>>case 0: .. case 9:
>>> instead of
>>>case 0 .. 9:
>>>
>>> With the latter notation, ranges can be easily used together with
>>> commas, for example:
>>>case 0, 2 .. 4, 6 .. 9:
>>>
>>> And CaseRangeStatement, being inconsistent with other syntaxes using
>>> the .. operator, i.e. slicing and ForeachRangeStatement, includes the
>>> endpoint.
>>> Shouldn't D make use of another operator to express ranges that
>>> include the endpoints as Ruby or Perl6 does?
>> 
>> I agree.
>> 
>> I think this syntax is yet another one of those things people looking
>> at D will say "ugly" and turn their heads away.
> 
> 
> When the discussion first came up in the NG, I was a bit sceptical about
> Andrei's suggestion for the case range statement as well. Now, I
> definitely think it's the best choice, and it's only because I realised
> it can be written like this:
> 
>  case 1:
>  ..
>  case 4:
>  // do stuff
> 
[snip]

I think it looks much better that way and users are more likely to be 
comfortable with the syntax.
I hope it will be displayed in the examples that way.

Still, the syntax as a whole looks a bit alien because it's a syntax addition.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jarrett Billingsley
On Tue, Jul 7, 2009 at 11:33 AM, Andrei
Alexandrescu wrote:
>
> Well 32-bit architectures may be a historical relic but I don't think 32-bit
> integers are. And I think it would be too disruptive a change to promote
> results of arithmetic operation between integers to long.
>
> ...
>
> This is a different beast. We simply couldn't devise a satisfactory scheme
> within the constraints we have. No simple solution we could think of has
> worked, nor have a number of sophisticated solutions. Ideas would be
> welcome, though I need to warn you that the devil is in the details so the
> ideas must be fully baked; too many good sounding high-level ideas fail when
> analyzed in detail.

Hm.  Just throwing this out there, as a possible solution for both problems.

Suppose you kept the current set of integer types, but made all of
them "open" (i.e. byte+byte=short, int+int=long etc.).  Furthermore,
you made it impossible to implicitly convert between the signed and
unsigned types of the same size (the int<>uint hole disappears).

But then you introduce two new native-size integer types.  Well, we
already have them - ptrdiff_t and size_t - but give them nicer names,
like word and uword.  Unlike the other integer types, these would be
implicitly convertible to one another.  They'd more or less take the
place of 'int' and 'uint' in most code, since most of the time, the
size of the integer isn't that important.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Leandro Lucarella wrote:

This seems nice. I think it would be nice if this kind of things are
commented in the NG before a compiler release, to allow community input
and discussion.


Yup, that's what happened to case :o).


I think this kind of things are the ones that deserves some kind of RFC
(like Python PEPs) like someone suggested a couple of days ago.


I think that's a good idea. Who has the time and resources to set that up?


Andrei



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on July 7 at 00:48 you wrote:
> Robert Jacques wrote:
> >On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright 
> > 
> >wrote:
> >>Something for everyone here.
> >>
> >>
> >>http://www.digitalmars.com/d/1.0/changelog.html
> >>http://ftp.digitalmars.com/dmd.1.046.zip
> >>
> >>
> >>http://www.digitalmars.com/d/2.0/changelog.html
> >>http://ftp.digitalmars.com/dmd.2.031.zip
> >Thanks for another great release.
> >Also, I'm not sure if this is a bug or a feature with regard to the new
> >integer rules:
> >   byte x,y,z;
> >   z = x+y;// Error: cannot implicitly convert expression (cast(int)x + 
> >cast(int)y) of type int to byte
> >which makes sense, in that a byte can overflow, but also doesn't make sense, 
> >since integer behaviour is different.
> 
> Walter has implemented an ingenious scheme for disallowing narrowing
> conversions while at the same time minimizing the number of casts
> required. He hasn't explained it, so I'll sketch an explanation here.
> 
> The basic approach is "value range propagation": each expression is
> associated with a minimum possible value and a maximum possible value.
> As complex expressions are assembled out of simpler expressions, the
> ranges are computed and propagated.
> 
> For example, this code compiles:
> 
> int x = whatever();
> bool y = x & 1;
> 
> The compiler figures that the range of x is int.min to int.max, the
> range of 1 is 1 to 1, and (here's the interesting part), the range of
> x & 1 is 0 to 1. So it lets the code go through. However, it won't allow
> this:
> 
> int x = whatever();
> bool y = x & 2;
> 
> because x & 2 has range between 0 and 2, which won't fit in a bool.
> 
> The approach generalizes to arbitrary complex expressions. Now here's the 
> trick 
> though: the value range propagation is local, i.e. all ranges are forgotten 
> beyond one expression. So as soon as you move on to the next statement, the 
> ranges have been forgotten.
> 
> Why? Simply put, increased implementation difficulties and increased
> compiler memory footprint for diminishing returns. Both Walter and
> I noticed that expression-level value range propagation gets rid of all
> dangerous cases and the vast majority of required casts. Indeed, his
> test suite, Phobos, and my own codebase required surprisingly few
> changes with the new scheme. Moreover, we both discovered bugs due to
> the new feature, so we're happy with the status quo.
> 
> Now consider your code:
> 
> byte x,y,z;
> z = x+y;
> 
> The first line initializes all values to zero. In an intra-procedural
> value range propagation, these zeros would be propagated to the next
> statement, which would range-check. However, in the current approach,
> the ranges of x, y, and z are forgotten at the first semicolon. Then,
> x+y has range byte.min+byte.min up to byte.max+byte.max as far as the
> type checker knows. That would fit in a short (and by the way I just
> found a bug with that occasion) but not in a byte.

This seems nice. I think it would be nice if this kind of things are
commented in the NG before a compiler release, to allow community input
and discussion.

I think this kind of things are the ones that deserves some kind of RFC
(like Python PEPs) like someone suggested a couple of days ago.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu 
 wrote:

Robert Jacques wrote:
 That's really cool. But I don't think that's actually happening (Or 
are these the bugs you're talking about?):

 byte x,y;
short z;
z = x+y;  // Error: cannot implicitly convert expression 
(cast(int)x + cast(int)y) of type int to short

 // Repeat for ubyte, bool, char, wchar and *, -, /


http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add 
to it.


Added. In summary, + * - / % >> >>> don't work for types 8-bits and 
under. << is inconsistent (x<<1 errors, but x<<=1 compiles). Op-assigns 
(+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, 
which is maddeningly inconsistent, particularly when the spec defines 
++x as sugar for x = x + 1, which doesn't compile.



And by that logic shouldn't the following happen?
 int x,y;
int z;
z = x+y;  // Error: cannot implicitly convert expression 
(cast(long)x + cast(long)y) of type long to int


No. Int remains "special", i.e. arithmetic operations on it don't 
automatically grow to become long.


i.e. why the massive inconsistency between byte/short and int/long? 
(This is particularly a pain for generic i.e. templated code)


I don't find it a pain. It's a practical decision.


Andrei, I have a short vector template (think vec!(byte,3), etc) where 
I've had to wrap the majority lines of code in cast(T)( ... ), because I 
support bytes and shorts. I find that both a kludge and a pain.


Well suggestions for improving things are welcome. But I don't think it 
will fly to make int+int yield a long.


BTW: this means byte and short are not closed under arithmetic 
operations, which drastically limits their usefulness.


I think they shouldn't be closed because they overflow for relatively 
small values.


Andrei, consider anyone who wants to do image manipulation (or computer 
vision, video, etc.). Since images are one of the few areas that use 
bytes extensively, and have to map back into themselves, they are 
basically sorely out of luck.


I understand, but also keep in mind that making small integers closed is 
the less safe option. So we'd be hurting everyone for the sake of the 
image manipulation folks.



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Jarrett Billingsley wrote:

The only thing is: why doesn't _this_ fail, then?

int x, y, z;
z = x + y;

I'm sure it's out of convenience, but what about in ten, fifteen years
when 32-bit architectures are a historical relic and there's still
this hole in the type system?


Well 32-bit architectures may be a historical relic but I don't think 
32-bit integers are. And I think it would be too disruptive a change to 
promote results of arithmetic operation between integers to long.



The same argument applies for the implicit conversions between int and
uint.  If you're going to do that, why not have implicit conversions
between long and ulong on 64-bit platforms?


This is a different beast. We simply couldn't devise a satisfactory 
scheme within the constraints we have. No simple solution we could think 
of has worked, nor have a number of sophisticated solutions. Ideas would 
be welcome, though I need to warn you that the devil is in the details 
so the ideas must be fully baked; too many good sounding high-level 
ideas fail when analyzed in detail.



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu  
 wrote:

Robert Jacques wrote:
 That's really cool. But I don't think that's actually happening (Or  
are these the bugs you're talking about?):

 byte x,y;
short z;
z = x+y;  // Error: cannot implicitly convert expression  
(cast(int)x + cast(int)y) of type int to short

 // Repeat for ubyte, bool, char, wchar and *, -, /


http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add  
to it.


Added. In summary, + * - / % >> >>> don't work for types 8-bits and under.  
<< is inconsistent (x<<1 errors, but x<<=1 compiles). Op-assigns  
(+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile  
which is maddeningly inconsistent, particularly when the spec defines ++x  
as sugar for x = x + 1, which doesn't compile.



And by that logic shouldn't the following happen?
 int x,y;
int z;
z = x+y;  // Error: cannot implicitly convert expression  
(cast(long)x + cast(long)y) of type long to int


No. Int remains "special", i.e. arithmetic operations on it don't  
automatically grow to become long.


i.e. why the massive inconsistency between byte/short and int/long?  
(This is particularly a pain for generic i.e. templated code)


I don't find it a pain. It's a practical decision.


Andrei, I have a short vector template (think vec!(byte,3), etc) where  
I've had to wrap the majority lines of code in cast(T)( ... ), because I  
support bytes and shorts. I find that both a kludge and a pain.


BTW: this means byte and short are not closed under arithmetic  
operations, which drastically limits their usefulness.


I think they shouldn't be closed because they overflow for relatively  
small values.


Andrei, consider anyone who wants to do image manipulation (or computer  
vision, video, etc.). Since images are one of the few areas that use bytes  
extensively, and have to map back into themselves, they are basically  
sorely out of luck.





Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Andrei Alexandrescu, on July 6 at 18:32 you wrote:
> Leandro Lucarella wrote:
> >Andrei Alexandrescu, on July 6 at 10:44 you wrote:
> And what did those people use when they wanted to express a range of case 
> labels? In other words, where did those people turn their heads towards?
> >>>They probably used an if.
> >>So they used an inferior means to start with.
> >Yes, but when you try to make people move to a different language, you
> >have to do considerably better. When I have to choose between something
> >well know, well supported and mature and something that is, at least,
> >unknown (even if it's mature and well supported, I won't know that until
> >I use it a lot so is a risk), I want to be really good, not just barely
> >good.
> 
> That goes without saying.
> 
> >Details as this one are not deal breaker on their own, but when they are
> >a lot, it tends to make the language look ugly as a whole.
> 
> You are just saying it's ugly. I don't think it's ugly. Walter doesn't
> think it's ugly. Other people don't think it's ugly. Many of the people
> who said it's ugly actually came up with proposals that are arguably
> ugly, hopelessly confusing, or both. Look at only some of the rehashed
> proposals of today: the genial "case [0 .. 10]:" which is horribly
> inconsistent, and the awesome "case 0: ... case 10:", also inconsistent
> (and gratuitously so) because ellipses today only end lists without
> having something to their right. The authors claim those are better than
> the current syntax, and one even claimed "beauty", completely ignoring
> the utter lack of consistency with the rest of the language. I don't
> claim expertise in language design, so I wish there were a few good
> experts in this group.

Please read the thread at D NG, the current syntax *is* inconsistent too.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: [OT] dmd 1.046 and 2.031 releases

2009-07-07 Thread BCS

Hello Daniel,


BCS wrote:


Hello Daniel,


[1] like me. My girlfriend disagrees with me on this,


You have a girlfriend that even bothers to have an opinion on a
programming issue, lucky bastard.


No, when I said "like me", I meant:

"Unless there's some egregious problem aside from being a bit on the
ugly side (like me), ..."

My girlfriend is actually a nurse, but I could ask for her opinion on
case ranges if you want.  :)



Odd, the exact same words read differently around midnight. :b




Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
aarti_pl, on July 7 at 00:27 you wrote:
> Leandro Lucarella wrote:
> >Andrei Alexandrescu, on July 6 at 10:44 you wrote:
> And what did those people use when they wanted to express a range of case 
> labels? In other words, where did those people turn their heads towards?
> >>>They probably used an if.
> >>So they used an inferior means to start with.
> >Yes, but when you try to make people move to a different language, you
> >have to do considerably better. When I have to choose between something
> >well know, well supported and mature and something that is, at least,
> >unknown (even if it's mature and well supported, I won't know that until
> >I use it a lot so is a risk), I want to be really good, not just barely
> >good.
> >Details as this one are not deal breaker on their own, but when they are
> >a lot, it tends to make the language look ugly as a whole.
> >What bugs me the most is there are a lot of new constructs in the language
> >that are plain ugly, from the start. D is buying it's own baggage
> >(__traits, enum for manifest constants, now the case range, and I'm sure
> >I'm forgetting something else) with no reason...
> 
> ...
> * foreach_reverse
> * access to variadic function parameters with _argptr & _arguments
> * mess with compile time is expression (sorry, that's my favorite ugliness 
> :-] )

But these ones at least are not new (I'm sure they were new at some point,
but now are baggage). The new constructs are future baggage.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Leandro Lucarella
Daniel Keep, on July 7 at 15:40 you wrote:
> 
> 
> Andrei Alexandrescu wrote:
> > bearophile wrote:
> >> Jason House:
> >>> Hardly. There seemed to mostly be complaints about it with Andrei
> >>> saying things like "I can't believe you don't see the elegance of the
> >>> syntax". In the end, Andrei commented that he shouldn't involve the
> >>> community in such small changes and went silent.<
> >>
> >> He was wrong. Even very intelligent people now and then do the wrong
> >> thing.
> > 
> > Of course the latter statement is true, but is in no way evidence
> > supporting the former. About the former, in that particular case I was
> > right.
> > 
> > Andrei
> 
> Now, now.  Let's all play nicely together...
> 
> I don't like the `case a:..case b:` syntax.  It doesn't matter.  The
> functionality is in place and the syntax has a sane explanation and
> rationale.
> 
> Unless there's some egregious problem aside from being a bit on the ugly
> side [1], it's just bike-shedding.  And I'm so, SOOO sick of bike-shedding.

I think Walter is right, this syntax introduce an inconsistency in the
".." operator semantics, which is used with inclusive meaning sometimes
(case) and with exclusive meaning other times (slices and foreach).

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jarrett Billingsley
On Tue, Jul 7, 2009 at 1:48 AM, Andrei
Alexandrescu wrote:
>
> Walter has implemented an ingenious scheme for disallowing narrowing
> conversions while at the same time minimizing the number of casts required.
> He hasn't explained it, so I'll sketch an explanation here.
>
> The basic approach is "value range propagation": each expression is
> associated with a minimum possible value and a maximum possible value. As
> complex expressions are assembled out of simpler expressions, the ranges are
> computed and propagated.
>
> For example, this code compiles:
>
> int x = whatever();
> bool y = x & 1;
>
> The compiler figures that the range of x is int.min to int.max, the range of
> 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to
> 1. So it lets the code go through. However, it won't allow this:
>
> int x = whatever();
> bool y = x & 2;
>
> because x & 2 has range between 0 and 2, which won't fit in a bool.

Very cool.  :)

> The approach generalizes to arbitrary complex expressions. Now here's the
> trick though: the value range propagation is local, i.e. all ranges are
> forgotten beyond one expression. So as soon as you move on to the next
> statement, the ranges have been forgotten.
>
> Why? Simply put, increased implementation difficulties and increased
> compiler memory footprint for diminishing returns. Both Walter and I noticed
> that expression-level value range propagation gets rid of all dangerous
> cases and the vast majority of required casts. Indeed, his test suite,
> Phobos, and my own codebase required surprisingly few changes with the new
> scheme. Moreover, we both discovered bugs due to the new feature, so we're
> happy with the status quo.

Sounds fairly reasonable.

> Now consider your code:
>
> byte x,y,z;
> z = x+y;
>
> The first line initializes all values to zero. In an intra-procedural value
> range propagation, these zeros would be propagated to the next statement,
> which would range-check. However, in the current approach, the ranges of x,
> y, and z are forgotten at the first semicolon. Then, x+y has range
> -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows.
> That would fit in a short (and by the way I just found a bug with that
> occasion) but not in a byte.

The only thing is: why doesn't _this_ fail, then?

int x, y, z;
z = x + y;

I'm sure it's out of convenience, but what about in ten, fifteen years
when 32-bit architectures are a historical relic and there's still
this hole in the type system?

The same argument applies for the implicit conversions between int and
uint.  If you're going to do that, why not have implicit conversions
between long and ulong on 64-bit platforms?
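The accept/reject behavior of expression-level value range propagation discussed above can be sketched in a few lines of D (a sketch only; the exact error messages and the set of rejected lines depend on the compiler version):

```d
import std.stdio;

void main()
{
    int x = 123;

    // Accepted: the range of (x & 1) is 0 .. 1, which fits in a bool.
    bool a = x & 1;

    // Rejected: the range of (x & 2) is 0 .. 2, too wide for a bool.
    // bool b = x & 2;      // Error: cannot implicitly convert

    // Accepted: (x & 0xFF) has range 0 .. 255, which fits in a ubyte.
    ubyte c = x & 0xFF;

    // Ranges are forgotten across statements: even though y and z are
    // known to be zero here, y + z is typed with the full byte range
    // and promoted to int, so a byte target is rejected.
    byte y, z;
    // byte w = y + z;      // Error: cannot implicitly convert int to byte

    writeln(a, " ", c);
}
```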


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Jarrett Billingsley
On Tue, Jul 7, 2009 at 11:15 AM, Jarrett
Billingsley wrote:
> On Tue, Jul 7, 2009 at 1:48 AM, Andrei
> Alexandrescu wrote:
>>
>> Walter has implemented an ingenious scheme for disallowing narrowing
>> conversions while at the same time minimizing the number of casts required.
>> He hasn't explained it, so I'll sketch an explanation here.
>>
>> The basic approach is "value range propagation": each expression is
>> associated with a minimum possible value and a maximum possible value. As
>> complex expressions are assembled out of simpler expressions, the ranges are
>> computed and propagated.
>>
>> For example, this code compiles:
>>
>> int x = whatever();
>> bool y = x & 1;
>>
>> The compiler figures that the range of x is int.min to int.max, the range of
>> 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to
>> 1. So it lets the code go through. However, it won't allow this:
>>
>> int x = whatever();
>> bool y = x & 2;
>>
>> because x & 2 has range between 0 and 2, which won't fit in a bool.
>
> Very cool.  :)
>
>> The approach generalizes to arbitrary complex expressions. Now here's the
>> trick though: the value range propagation is local, i.e. all ranges are
>> forgotten beyond one expression. So as soon as you move on to the next
>> statement, the ranges have been forgotten.
>>
>> Why? Simply put, increased implementation difficulties and increased
>> compiler memory footprint for diminishing returns. Both Walter and I noticed
>> that expression-level value range propagation gets rid of all dangerous
>> cases and the vast majority of required casts. Indeed, his test suite,
>> Phobos, and my own codebase required surprisingly few changes with the new
>> scheme. Moreover, we both discovered bugs due to the new feature, so we're
>> happy with the status quo.
>
> Sounds fairly reasonable.
>
>> Now consider your code:
>>
>> byte x,y,z;
>> z = x+y;
>>
>> The first line initializes all values to zero. In an intra-procedural value
>> range propagation, these zeros would be propagated to the next statement,
>> which would range-check. However, in the current approach, the ranges of x,
>> y, and z are forgotten at the first semicolon. Then, x+y has range
>> -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows.
>> That would fit in a short (and by the way I just found a bug with that
>> occasion) but not in a byte.
>
> The only thing is: why doesn't _this_ fail, then?
>
> int x, y, z;
> z = x + y;
>
> I'm sure it's out of convenience, but what about in ten, fifteen years
> when 32-bit architectures are a historical relic and there's still
> this hole in the type system?
>
> The same argument applies for the implicit conversions between int and
> uint.  If you're going to do that, why not have implicit conversions
> between long and ulong on 64-bit platforms?
>

I think I've confused the mailing list's threading algorithm.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread bearophile
KennyTM~ Wrote:
> Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .

That compromise design looks like a good one for D to adopt too :-)

Bye,
bearophile


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Don

Walter Bright wrote:

Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip


Why is 'final switch' required? Another possible way of dealing with the 
same issue would be:


switch(e) {
case E.A: blah; break;
case E.B: blah; break;
...
default: assert(0);
}

I.e., if the switch is over an enum type, and the 'default' clause consists 
only of assert(0), the compiler could generate a warning if some of the 
possible enum values never appear in a case statement.


It's not quite the same as 'final switch', but I think it captures most 
of the use cases.
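For comparison, the `final switch` being discussed requires every enum member to be handled and forbids a `default` clause entirely (a minimal sketch; the enum and its members are hypothetical):

```d
enum Color { red, green, blue }

string describe(Color c)
{
    // final switch: the compiler rejects this statement unless every
    // member of Color has a case, and no default clause is allowed.
    // Adding a member to Color later turns up every switch to update.
    final switch (c)
    {
        case Color.red:   return "red";
        case Color.green: return "green";
        case Color.blue:  return "blue";
    }
}
```

The difference from the `default: assert(0);` pattern above is that the check happens at compile time rather than at run time.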


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread KennyTM~

Ary Borenszweig wrote:

Jesse Phillips wrote:

On Mon, 06 Jul 2009 14:38:53 -0500, Andrei Alexandrescu wrote:


Denis Koroskin wrote:

Reuse goto?

So any case-labeled code should end with a control flow statement
that transfers control elsewhere? That sounds like a great idea.
Fall-through is so rare, and so rarely intended, that it makes sense to
require the programmer to state the intent explicitly via a goto case.

Andrei


The goto method already works; the only change needed would be to not 
have fall-through as the default.


http://digitalmars.com/d/2.0/statement.html#GotoStatement


But that's kind of redundant:

case 1: goto case 11;
case 11: goto case 111;
case 111: goto case 1111;
case 1111:
doIt();

don't you think?


Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .



If you change the case expression, you must change it twice.

Why not:

case 1: continue case;
case 11: continue case;

etc.?
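For reference, the explicit fall-through forms D already supports look roughly like this (a sketch of the `goto case` statements discussed above; the function and values are made up):

```d
import std.stdio;

void classify(int n)
{
    switch (n)
    {
        case 1:
            writeln("one");
            goto case;        // falls through to the next case
        case 2:
            writeln("one or two");
            break;
        case 3:
            goto case 2;      // jumps to an explicitly named case
        default:
            writeln("other");
    }
}
```

The operand-less `goto case;` avoids the "change it twice" problem mentioned above, since it does not repeat the target case's expression.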


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Michel Fortin
On 2009-07-07 01:12:12 -0400, Andrei Alexandrescu 
 said:



Nick Sabalausky wrote:
Those examples are all cases where the meaning and context are wildly 
different between one use and the other. But with '..', both uses are 
very similar: "from xxx to (incl/excl) yyy". Big differences are ok; 
they stand out as obvious. Small differences can be more problematic.


You'd have an uphill battle using a counterfeit Swiss army knife 
against a battery of Gatling guns





arguing that

case 'a': .. case 'z':

is very similar to

0 .. 10

That's actually much more different than e.g.

a = b * c;

versus

b * c;


They aren't so different if you consider "case 'a':" and "case 
'z':" as two items joined by a "..", which I believe is the expected 
way to read it. In the first case, the two case labels joined by a ".." 
denote an inclusive range; in the second, "0" and "10" joined 
by a ".." denote an exclusive one.


With "b * c", the meaning is completely different depending on whether b 
is a type or not. If "b" is a type, you can't reasonably expect "b * 
c" to do a multiplication, and you'll get an error about it if that's 
what you're trying to do. Whereas with "case 'a': .. case 'b':" you can 
reasonably expect an exclusive range if you aren't too familiar with 
the syntax (and that's a reasonable expectation if you know about 
ranges), and you won't get an error if you mix things up; thus, clarity 
of the syntax becomes more important.


I still think that having that syntax is better than nothing, but I do 
believe it's an inconsistency and that it may look ambiguous to 
someone unfamiliar with it.


But my worst grief about that new feature is the restriction to 256 
values, which is pretty limiting if you're writing a parser dealing 
with ranges of Unicode characters. I guess I'll have to continue using 
ifs for that.
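For reference, the case range syntax in question looks like this (a sketch; note that, unlike `0 .. 10` in slices and foreach, both endpoints are inclusive):

```d
char kind(char c)
{
    switch (c)
    {
        case 'a': .. case 'z':   // inclusive: matches 'a' through 'z'
            return 'l';
        case '0': .. case '9':   // inclusive: matches '0' through '9'
            return 'd';
        default:
            return '?';
    }
}
```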



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: [OT] dmd 1.046 and 2.031 releases

2009-07-07 Thread Ary Borenszweig

Andrei Alexandrescu wrote:

BCS wrote:

Hello Daniel,


[1] like me. My girlfriend disagrees with me on this,


You have a girlfriend that even bothers to have an opinion on a 
programming issue, lucky bastard.


My understanding is that he's referring to a different issue.


though. *I* think she's crazy, but I'm not exactly
inclined to try and change her mind. :)


That reminds me of a quote: "If you assume a woman's mind is supposed 
to work like a man's, the only conclusion you can come to is they are 
*all* crazy."


To paraphrase: "If you assume a woman's mind is supposed to work like a 
man's, you won't get laid. Ever."


lol :)


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Tom S

Derek Parnell wrote:

Then we can use "-deps=dep.txt -nogen" to get the dependency data so build
tools can then work out what needs to actually be compiled. And in that
vein, a hash (e.g. CRC32, MD5, SHA256) of the files used by DMD would be
nice to see in the 'deps' file. It would help build tools detect which
files have been modified.


I think this should be the job of the build tool, not the compiler. For 
example, xfBuild uses the compiler-generated dependency files to keep 
track of its own project database containing dependencies and file 
modification times. I guess I'll be adding hashes as well :) Why a 
separate file? When doing incremental builds, you'll only pass some of 
the project's modules to the compiler so the deps file would not contain 
everything. The proper approach is to parse it and update the project 
database with it.




May I make a small syntax suggestion for the deps format. Instead of
enclosing a path in parentheses, and using ':' as a field delimiter, have
the first (and last) character of each line be the field delimiter to use
in that line. The delimiter would be guaranteed to never be part of any of
the fields' characters. That way, we don't need escape characters and
parsing the text is greatly simplified.


I don't think the parsing is currently very complicated at all, but I 
guess YMMV. I'd argue that the current format is easier to generate and 
more human-readable than your proposed syntax. The latter might also be 
harder to process by UNIXy tools like grep or cut.



Also, simplifying the paths by resolving the ".." and "." would be nice. 


Yea, that would be nice.


--
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Brad Roberts wrote:

That's really cool. But I don't think that's actually happening (Or
are these the bugs you're talking about?):

byte x,y;
short z;
z = x+y;  // Error: cannot implicitly convert expression
(cast(int)x + cast(int)y) of type int to short

// Repeat for ubyte, bool, char, wchar and *, -, /

http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add
to it.


Before going too far, consider:

byte x, y, z;
short a;
a = x + y + z;

How far should the logic go?


Arbitrarily far for any given expression, which is the beauty of it all. 
In the case above, the expression is evaluated as (x + y) + z, yielding 
a range of byte.min+byte.min to byte.max+byte.max for the parenthesized 
part. Then that range is propagated to the second addition, yielding a 
final range of byte.min+byte.min+byte.min to byte.max+byte.max+byte.max 
(i.e., -384 to 381) for the entire expression. That still fits in a short, 
so the expression is valid.


Now, if you add more than 255 bytes, things won't compile anymore ;o).
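The arithmetic above can be sketched as follows (as the scheme is described, the short assignment should compile; note the thread reports that the then-current compiler still rejected it, per issue 3147):

```d
void main()
{
    byte x, y, z;
    short a;

    // (x + y) has propagated range -256 .. 254; adding z widens it
    // to -384 .. 381, which fits in short (-32768 .. 32767):
    a = x + y + z;

    // A byte target is too narrow for that range:
    // byte b = x + y + z;   // Error: cannot implicitly convert
}
```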


Andrei


Re: [OT] dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

BCS wrote:

Hello Daniel,


[1] like me. My girlfriend disagrees with me on this,


You have a girlfriend that even bothers to have an opinion on a 
programming issue, lucky bastard.


My understanding is that he's referring to a different issue.


though. *I* think she's crazy, but I'm not exactly
inclined to try and change her mind. :)


That reminds me of a quote: "If you assume a woman's mind is supposed to 
work like a man's, the only conclusion you can come to is they are *all* 
crazy."


To paraphrase: "If you assume a woman's mind is supposed to work like a 
man's, you won't get laid. Ever."



Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:

Another inconsistency:

byte[] x,y,z;
z[] = x[]*y[]; // Compiles


Bugzilla is its name.

Andrei


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Brad Roberts
>> That's really cool. But I don't think that's actually happening (Or
>> are these the bugs you're talking about?):
>>
>> byte x,y;
>> short z;
>> z = x+y;  // Error: cannot implicitly convert expression
>> (cast(int)x + cast(int)y) of type int to short
>>
>> // Repeat for ubyte, bool, char, wchar and *, -, /
> 
> http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add
> to it.

Before going too far, consider:

byte x, y, z;
short a;
a = x + y + z;

How far should the logic go?


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Andrei Alexandrescu

Robert Jacques wrote:
On Tue, 07 Jul 2009 01:48:41 -0400, Andrei Alexandrescu 
 wrote:



Robert Jacques wrote:
On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright 
 wrote:



Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip

 Thanks for another great release.
 Also, I'm not sure if this is a bug or a feature with regard to the 
new integer rules:

byte x,y,z;
   z = x+y;// Error: cannot implicitly convert expression 
(cast(int)x + cast(int)y) of type int to byte
 which makes sense, in that a byte can overflow, but also doesn't 
make sense, since integer behaviour is different.


Walter has implemented an ingenious scheme for disallowing narrowing 
conversions while at the same time minimizing the number of casts 
required. He hasn't explained it, so I'll sketch an explanation here.


The basic approach is "value range propagation": each expression is 
associated with a minimum possible value and a maximum possible value. 
As complex expressions are assembled out of simpler expressions, the 
ranges are computed and propagated.


For example, this code compiles:

int x = whatever();
bool y = x & 1;

The compiler figures that the range of x is int.min to int.max, the 
range of 1 is 1 to 1, and (here's the interesting part), the range of 
x & 1 is 0 to 1. So it lets the code go through. However, it won't 
allow this:


int x = whatever();
bool y = x & 2;

because x & 2 has range between 0 and 2, which won't fit in a bool.

The approach generalizes to arbitrary complex expressions. Now here's 
the trick though: the value range propagation is local, i.e. all 
ranges are forgotten beyond one expression. So as soon as you move on 
to the next statement, the ranges have been forgotten.


Why? Simply put, increased implementation difficulties and increased 
compiler memory footprint for diminishing returns. Both Walter and I 
noticed that expression-level value range propagation gets rid of all 
dangerous cases and the vast majority of required casts. Indeed, his 
test suite, Phobos, and my own codebase required surprisingly few 
changes with the new scheme. Moreover, we both discovered bugs due to 
the new feature, so we're happy with the status quo.


Now consider your code:

byte x,y,z;
z = x+y;

The first line initializes all values to zero. In an intra-procedural 
value range propagation, these zeros would be propagated to the next 
statement, which would range-check. However, in the current approach, 
the ranges of x, y, and z are forgotten at the first semicolon. Then, 
x+y has range -byte.min-byte.min up to byte.max+byte.max as far as the 
type checker knows. That would fit in a short (and by the way I just 
found a bug with that occasion) but not in a byte.


That's really cool. But I don't think that's actually happening (Or are 
these the bugs you're talking about?):


byte x,y;
short z;
z = x+y;  // Error: cannot implicitly convert expression (cast(int)x 
+ cast(int)y) of type int to short


// Repeat for ubyte, bool, char, wchar and *, -, /


http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add 
to it.



And by that logic shouldn't the following happen?

int x,y;
int z;
z = x+y;  // Error: cannot implicitly convert expression 
(cast(long)x + cast(long)y) of type long to int


No. Int remains "special", i.e. arithmetic operations on it don't 
automatically grow to become long.


i.e. why the massive inconsistency between byte/short and int/long? 
(This is particularly a pain for generic i.e. templated code)


I don't find it a pain. It's a practical decision.

BTW: this means byte and short are not closed under arithmetic 
operations, which drastically limits their usefulness.


I think they shouldn't be closed because they overflow for relatively 
small values.



Andrei


Re: [OT] dmd 1.046 and 2.031 releases

2009-07-07 Thread Daniel Keep


BCS wrote:
> Hello Daniel,
> 
>> [1] like me. My girlfriend disagrees with me on this,
> 
> You have a girlfriend that even bothers to have an opinion on a
> programming issue, lucky bastard.

No, when I said "like me", I meant:

"Unless there's some egregious problem aside from being a bit on the
ugly side (like me), ..."

My girlfriend is actually a nurse, but I could ask for her opinion on
case ranges if you want.  :)

>> though. *I* think she's crazy, but I'm not exactly
>> inclined to try and change her mind. :)
> 
> That reminds me of a quote: "If you assume a woman's mind is supposed to
> work like a man's, the only conclusion you can come to is they are *all*
> crazy." OTOH you can switch the perspective on that around and I expect
> it's just as true. It should be pointed out that, almost by definition,
> you can't have 50% of the world be crazy.

My opinion is based more on that, with respect to the above issue, she
seems to think differently to more or less everyone else I've ever met,
including myself.


Re: dmd 1.046 and 2.031 releases

2009-07-07 Thread Robert Jacques
On Tue, 07 Jul 2009 02:35:44 -0400, Robert Jacques   
wrote:


On Tue, 07 Jul 2009 01:48:41 -0400, Andrei Alexandrescu  
 wrote:



Robert Jacques wrote:
On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright  
 wrote:



Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip

 Thanks for another great release.
 Also, I'm not sure if this is a bug or a feature with regard to the  
new integer rules:

byte x,y,z;
   z = x+y;// Error: cannot implicitly convert expression  
(cast(int)x + cast(int)y) of type int to byte
 which makes sense, in that a byte can overflow, but also doesn't make  
sense, since integer behaviour is different.


Walter has implemented an ingenious scheme for disallowing narrowing  
conversions while at the same time minimizing the number of casts  
required. He hasn't explained it, so I'll sketch an explanation here.


The basic approach is "value range propagation": each expression is  
associated with a minimum possible value and a maximum possible value.  
As complex expressions are assembled out of simpler expressions, the  
ranges are computed and propagated.


For example, this code compiles:

int x = whatever();
bool y = x & 1;

The compiler figures that the range of x is int.min to int.max, the  
range of 1 is 1 to 1, and (here's the interesting part), the range of x  
& 1 is 0 to 1. So it lets the code go through. However, it won't allow  
this:


int x = whatever();
bool y = x & 2;

because x & 2 has range between 0 and 2, which won't fit in a bool.

The approach generalizes to arbitrary complex expressions. Now here's  
the trick though: the value range propagation is local, i.e. all ranges  
are forgotten beyond one expression. So as soon as you move on to the  
next statement, the ranges have been forgotten.


Why? Simply put, increased implementation difficulties and increased  
compiler memory footprint for diminishing returns. Both Walter and I  
noticed that expression-level value range propagation gets rid of all  
dangerous cases and the vast majority of required casts. Indeed, his  
test suite, Phobos, and my own codebase required surprisingly few  
changes with the new scheme. Moreover, we both discovered bugs due to  
the new feature, so we're happy with the status quo.


Now consider your code:

byte x,y,z;
z = x+y;

The first line initializes all values to zero. In an intra-procedural  
value range propagation, these zeros would be propagated to the next  
statement, which would range-check. However, in the current approach,  
the ranges of x, y, and z are forgotten at the first semicolon. Then,  
x+y has range -byte.min-byte.min up to byte.max+byte.max as far as the  
type checker knows. That would fit in a short (and by the way I just  
found a bug with that occasion) but not in a byte.


That's really cool. But I don't think that's actually happening (Or are  
these the bugs you're talking about?):


 byte x,y;
 short z;
 z = x+y;  // Error: cannot implicitly convert expression  
(cast(int)x + cast(int)y) of type int to short


 // Repeat for ubyte, bool, char, wchar and *, -, /

And by that logic shouldn't the following happen?

 int x,y;
 int z;
 z = x+y;  // Error: cannot implicitly convert expression  
(cast(long)x + cast(long)y) of type long to int


i.e. why the massive inconsistency between byte/short and int/long?  
(This is particularly a pain for generic i.e. templated code)


BTW: this means byte and short are not closed under arithmetic  
operations, which drastically limits their usefulness.


Another inconsistency:

byte[] x,y,z;
z[] = x[]*y[]; // Compiles


Re: [OT] dmd 1.046 and 2.031 releases

2009-07-07 Thread BCS

Hello Daniel,


[1] like me. My girlfriend disagrees with me on this,


You have a girlfriend that even bothers to have an opinion on a programming 
issue, lucky bastard.



though. *I* think she's crazy, but I'm not exactly
inclined to try and change her mind. :)


That reminds me of a quote: "If you assume a woman's mind is supposed to 
work like a man's, the only conclusion you can come to is they are *all* 
crazy." OTOH you can switch the perspective on that around and I expect it's 
just as true. It should be pointed out that, almost by definition, you can't 
have 50% of the world be crazy.