Re: Windows 2000 support

2012-06-05 Thread Dmitry Olshansky

On 06.06.2012 6:09, tim krimm wrote:



BTW - I am posting from a Windows 2000 machine.
Windows 2000 is handed for older machines.
It is also good for bare bone machines from tiger direct.
Yes I also use linux on these machines in dual boot mode.



Using an outdated OS to connect to the internet is one of the biggest 
mistakes ever. For instance, doesn't W2k happen to ship with IE6, so 
convenient for malware? ;) Plus there are various security holes that 
nobody will fix promptly anymore.



I have several old legal licensed copies.
I can fairly easily install and uninstall windows 2000.
I do not have to deal with Microsoft's registration headaches.
Believe it or not I had a customer that required windows 98 support last
year.


What can I say? Try updating it at least to XP, which still enjoys 
security fixes(?)


Embarcadero (formerly Borland) C++ Builder still supports these older
systems.




--
Dmitry Olshansky


Re: Implicit type conversions with data loss

2012-06-05 Thread Jonathan M Davis
On Wednesday, June 06, 2012 10:07:04 Dmitry Olshansky wrote:
> On 05.06.2012 22:06, ctrl wrote:
> > I don't want them to be performed at all. How do I disable this 'feature'?
> > 
> > For example, take a look at this code:
> > 
> > import std.stdio;
> > void main() {
> > int x = -1;
> > uint b = x;
> > writeln(b);
> > }
> > 
> > It outputs 4294967295, but I want a compile-time error instead. Any
> > suggestions?
> > (compiler version dmd 2.059)
> 
> There is no information lost. Try casting it back to int.

int <-> uint is a narrowing conversion in _both_ directions, and doesn't lose 
data in _either_ direction, which I find rather funny.

In any case, if you want to do such a conversion while checking to make sure 
that the value will fit in the type being converted to, you can use 
std.conv.to:

auto x = to!uint(-1); //this will throw
auto y = to!int(uint.max); //this will throw

But the only way to statically prevent such conversions is to wrap your 
integers in structs.
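
A minimal sketch of what such a wrapper might look like (CheckedUint is just 
an illustrative name, nothing that Phobos provides):

import std.conv : to;

struct CheckedUint
{
    uint value;

    // Construction and assignment are constrained to exactly uint,
    // so an int never slips in through an implicit conversion.
    this(T)(T v) if (is(T == uint)) { value = v; }

    void opAssign(T)(T v) if (is(T == uint)) { value = v; }
}

void main()
{
    int x = -1;

    auto a = CheckedUint(42u);   // fine: the argument is exactly a uint
    a = 7u;                      // fine
    // a = x;                    // does not compile: int is rejected
    a = to!uint(42);             // the run-time checked route stays available
}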

In some ways, making implicit conversions between signed and unsigned types 
illegal would be nice, but you need to convert one to the other often enough 
that all of the necessary casting could get quite annoying. And if you're using 
an explicit cast rather than std.conv.to, you actually get _less_ safety, because 
the cast will work even if the current implicit conversion wouldn't. For 
example,

int x = -1;
//illegal, except in cases where the compiler can statically determine that
//the conversion won't be truncated (which it could determine in this case,
//but not in the general case).
ushort y = x;

//Legal regardless of the value of x.
ushort z = cast(ushort)x;

So, while disallowing the implicit conversion would have some benefits, there's 
a good chance that it would ultimately cause far more problems than it would 
solve.

- Jonathan M Davis


Re: GitHub for Windows

2012-06-05 Thread Nick Sabalausky
"Steven Schveighoffer"  wrote in message 
news:op.wfd7p6fxeav7ka@steves-laptop...
> On Sat, 02 Jun 2012 16:30:07 -0400, Nick Sabalausky 
>  wrote:
>
>> Ouch. I haven't had virus problems on my XP system (knock on wood...), 
>> but
>> my sister's had a lot of virus trouble on her Win7 machine (and guess who
>> had to fix the fucking thing every time...) Of course, my dad had a lot 
>> of
>> virus trouble on his old XP machine (and again, guess who got to fix the
>> goddamn thing), but then again, he's an idiot and does all sorts of 
>> stupid
>> shit like click on ads, and give the advertiser pages his phone number 
>> when
>> they ask for it, and doing all that *despite* noticing that it all seemed
>> fishy, and god knows what else that he *hasn't* told me about. Colossal
>> fucking moron.
>
> Hehe, I think all of us here have similar stories.
>

Heh, yea.

Ages ago, I used to be eager to let people know I was good with computers. 
That was because I was proud of it and (at the time ;) ) I liked computers 
so much.

But then I learned to be very selective about disclosing that bit of 
information because it just makes people want you to fix their busted, 
usually malware-riddled, computer. I think the final straw was in college 
when I found myself sitting in some girl's dorm room, fixing her computer 
(they were all Win9x back then - fun), for free, while she went out with her 
boyfriend. I believe the right term for that is: "What the fuck am I doing, 
and why the hell did my dumbass self agree to this?!?" ;)

> My in-laws have vista, and after I had to reinstall their computer due to 
> malware messing up some internal microsoft services, I told them either 
> they find someone else to help them with the computer, or agree to be 
> non-admin users on their system.  Now only I have admin privileges, and 
> things have gone much smoother since then.
>

Not a bad strategy.

After my best efforts, my sister's Win7 laptop still has some bizarre audio 
ads that play out her speakers now and then. I'm done fucking with that, and 
fresh out of tactics anyway, so my next step is to just backup her data and 
do a clean wipe and fresh reinstall. But I told her she'd have to meet me 
halfway (hah! "halfway"...more like 10%) by finding Lenovo's phone number 
and ordering the restore discs herself (I can't believe they skimp out on 
including them these days - how fucking cheap!) Unsurprisingly, she's been 
putting up with the ads ever since.

God, users are so fucking lazy. After *years* of trying, I've never been 
able to get either of my parents to even touch (literally) a "How to use a 
computer" book. How the hell do they think *I* learned? Christ, they *saw* 
me reading...*books* as a kid! I seriously suspect they might actually think 
having been born after 1980 is what gives people computer literacy. No - 
it's just *normal* literacy that does it. If a *9 year old* could do it, why 
the fuck can't those grown adults?

At one point, (again, years ago) I even *picked out* a book for my mom from 
the library, one of those dead-simple ones that's 90% *pictures*...It just 
sat there *literally* untouched for months while I kept renewing, and 
renewing, and renewing it. I'm not sure she even cleaned around it - and 
she's a neat freak.

It was around that point I decided - I will *not* help a person with basic 
computer usage unless they can get off their lazy fucking ass and show the 
basic, *basic* initiative of checking a relevant book out from the library 
(themselves) and reading the damn thing. I'm not a babysitter for adults.

Since then, I've gotten bitched out many times that I won't spoon-feed her 
the kind of answers that would be the equivalent of giving directions to a 
store like "Push your right foot down on the left...uhh, you don't know what 
a 'pedal' is...ok, the left 'floor-thing'...while the car moves forward for 
34 seconds, then lift up 2 inches, and turn the big round thing in front of 
you 4 inches to the right..." Etc. After years of that I think she's finally 
gotten the message...so now, instead of looking at a basic damn picture 
book, she just flubs her way through or, failing that, asks my sister for 
help. Heaven forbid anyone should *ever* have to fucking *learn* anything.

Shit, it's a good thing I don't have heart disease: Writing this message 
probably would have done me in. I feel like Lewis Black, but without the 
funny.

> Unfortunately, malware can still fuck up your IE profile.
>




Re: Implicit type conversions with data loss

2012-06-05 Thread Dmitry Olshansky

On 05.06.2012 22:06, ctrl wrote:

I don't want them to be performed at all. How do I disable this 'feature'?

For example, take a look at this code:

import std.stdio;
void main() {
int x = -1;
uint b = x;
writeln(b);
}

It outputs 4294967295, but I want a compile-time error instead. Any
suggestions?
(compiler version dmd 2.059)


There is no information lost. Try casting it back to int.

--
Dmitry Olshansky


Re: Windows 2000 support

2012-06-05 Thread Jonathan M Davis
On Tuesday, June 05, 2012 11:43:34 Denis Shelomovskij wrote:
> it's time to make a decision. Original comment:
> https://github.com/D-Programming-Language/druntime/pull/212#issuecomment-582
> 7106
> 
> So what we will do with Windows 2000? Personally I don't like this pull
> request (druntime pull 212). It makes not-very-good-looking druntime
> uglier. I'd like voting about this to be done. Something like:
> 
> 1. Officially announce that minimum supported Windows version is 5.1
> (aka XP) since v2.053
>1. Add link like "Email @denis-sh to get D stuff with partial support
> for Windows 2000".
>2. Just call all Windows 2000 users dinosaurs.
> 
> 2. [A bit improve and] Merge this pull and officially announce that
> Windows 2000 is partially supported.
> 
> 3. Maniacally add full Windows 2000 support.
> 
> 4. Leave Issue 6024 opened forever.
> 
> 
> 
> And from my next comment
> https://github.com/D-Programming-Language/druntime/pull/212#issuecomment-582
> 7146: Oh, it's few days more than a year Windows 2000 is silently
> unsupported!
> 
> Links:
> * http://d.puremagic.com/issues/show_bug.cgi?id=6024

Personally, I like the tack of saying that we'll support whatever versions of 
Windows that Microsoft does (which would mean no support for Win2K), but if 
adding some Win2K-specific stuff to fix some Win2K-specific issues doesn't cost 
us much, then it's fine with me. The problem is when there's a lot of it and/or 
it's disruptive. Some of the Win9x support definitely complicated stuff, and 
removing it was a definite step in the right direction IMHO. Win2K's situation 
is not quite the same, however, so fixing some of the issues with it isn't 
necessarily a problem.

Honestly though, if it were purely up to me, I'd just go with the tack of 
saying that we'll support whatever versions of Windows that Microsoft 
supports; anything that happens to work on older versions will work, and 
anything that doesn't, oh well.

- Jonathan M Davis


Re: Windows 2000 support

2012-06-05 Thread Jonathan M Davis
On Wednesday, June 06, 2012 05:31:51 tim krimm wrote:
> On Wednesday, 6 June 2012 at 03:07:36 UTC, Jonathan M Davis wrote:
> 
> What causes the "RTLCaptureContext could not be located" error
> for instance?

I don't know anything about that. My first guess would be that it's related to 
druntime, but I don't know. As I understand it, dmd itself should work on 
Win2K, but it would be very easy for there to be an issue with the libraries 
which would prevent them from running, and it may be that there's a bug in dmd 
which prevents it from working properly in Win2K.

> You are saying it is object code in the runtime library that is
> linked in and not the object code generated by DMD for my D code.
> So I can create a stub "run time" like in the XOMB OS and still
> run D programs.

druntime may or may not work. Parts of Phobos _won't_ work (at minimum 
std.datetime and anything that relies on the portions of it that don't work). 
Creating your own runtime would work as long as you create all of the pieces 
that are needed (I wouldn't really advise trying though - it's not something 
that sounds like it would be much fun).

> The other reason I still use 2000 is:
> I hate windows vista and windows 7 with a passion
> but I can no longer get XP.
> Unless I buy a refurbished PC with XP
> that is coming off of lease.
> 
> I tolerate windows 7 only because I have to.
> Windows 8 is going to be even worse.
> I also hate paying the "Microsoft tax" and supporting the "evil
> empire".
> 
> Hopefully I will be able to convince future customers that Linux
> is better.

I'm primarily a Linux user myself, but unfortunately, since we want D to be 
properly cross-platform (and since a lot of people _do_ use and like Windows), 
we need to support it. The question is how old a version we'll support, 
and the fewer old versions we support, the easier it is for us.

- Jonathan M Davis


Re: Windows 2000 support

2012-06-05 Thread tim krimm

On Wednesday, 6 June 2012 at 03:07:36 UTC, Jonathan M Davis wrote:

What causes the "RTLCaptureContext could not be located" error 
for instance?


You are saying it is object code in the runtime library that is 
linked in and not the object code generated by DMD for my D code.
So I can create a stub "run time" like in the XOMB OS and still 
run D programs.


The other reason I still use 2000 is:
I hate windows vista and windows 7 with a passion
but I can no longer get XP.
Unless I buy a refurbished PC with XP
that is coming off of lease.

I tolerate windows 7 only because I have to.
Windows 8 is going to be even worse.
I also hate paying the "Microsoft tax" and supporting the "evil 
empire".


Hopefully I will be able to convince future customers that Linux 
is better.


Re: Windows 2000 support

2012-06-05 Thread Jonathan M Davis
On Wednesday, June 06, 2012 04:09:12 tim krimm wrote:
> On Tuesday, 5 June 2012 at 14:43:47 UTC, mta`chrono wrote:
> > Drop support since even Microsoft dropped support. Even if
> > druntime will
> > support Windows 2000, all my the programs I code will at least
> > require
> > Windows XP.
> 
> I agree with removing the windows 2000 requirement from the run
> time library.
> 
> What about the DMD compiler itself?
> Does DMD have a Windows XP+ requirement as well?
> I would like to request that DMD itself not depend on XP.
> But only if it does not require a lot of work.
> 
> BTW - I am posting from a Windows 2000 machine.
> Windows 2000 is handed for older machines.
> It is also good for bare bone machines from tiger direct.
> Yes I also use linux on these machines in dual boot mode.
> 
> I have several old legal licensed copies.
> I can fairly easily install and uninstall windows 2000.
> I do not have to deal with Microsoft's registration headaches.
> Believe it or not I had a customer that required windows 98
> support last year.
> 
> Embarcadero (formerly Borland) C++ Builder still supports these
> older systems.

dmd should run on older machines - though I would be very concerned about 
running out of memory if much in the way of templates or CTFE is used. It's 
the libraries that have issues. Supporting older OSes means disallowing newer 
OS function calls, which can be quite problematic. For instance, some of what 
std.datetime does would be easier if we could require Vista or newer, since 
Microsoft added some time-related stuff in Vista. As it is, it requires a 
function which is only in XP or newer (which is really weird considering that 
the function which does the opposite conversion is on Win2K).

We obviously can't require that users have anything newer than XP, because XP 
is still used far too much for that, but in general, the sooner that older 
OSes are unsupported, the better off libraries which use system calls are. 
Regardless, dmd should run on older machines as long as they perform well 
enough to compile what you're trying to compile. dmd doesn't require any of 
the newer system calls.

- Jonathan M Davis


Re: Windows 2000 support

2012-06-05 Thread tim krimm


OOPS

Meant to say
 Windows 2000 is handy for older machines and "bare bone" 
machines.


Believe it or not I had a customer that required windows 98 
support last year.
I guess if I am a dinosaur then my win 98 customer must have been 
a pre-dinosaur.




Re: Windows 2000 support

2012-06-05 Thread tim krimm

On Tuesday, 5 June 2012 at 14:43:47 UTC, mta`chrono wrote:
Drop support since even Microsoft dropped support. Even if 
druntime will
support Windows 2000, all my the programs I code will at least 
require

Windows XP.


I agree with removing the windows 2000 requirement from the run 
time library.


What about the DMD compiler itself?
Does DMD have a Windows XP+ requirement as well?
I would like to request that DMD itself not depend on XP.
But only if it does not require a lot of work.

BTW - I am posting from a Windows 2000 machine.
Windows 2000 is handed for older machines.
It is also good for bare bone machines from tiger direct.
Yes I also use linux on these machines in dual boot mode.

I have several old legal licensed copies.
I can fairly easily install and uninstall windows 2000.
I do not have to deal with Microsoft's registration headaches.
Believe it or not I had a customer that required windows 98 
support last year.


Embarcadero (formerly Borland) C++ Builder still supports these 
older systems.




Re: Test for array literal arguments?

2012-06-05 Thread bearophile

Peter Alexander:

One problem with this approach is that it only solves some 
cases and cannot work in general.


The general solution is named "partial compilation"; it's a mess 
and you probably don't want it in the DMD compiler (though it 
seems LLVM is becoming able to do a bit of it). Yet people have 
been studying partial compilation for 20+ years, because it's 
very interesting and potentially useful.




- Adds more rules for overload resolution.


This needs to be studied. But keep in mind that Walter has 
already tried and rejected that idea of "static" arguments, so you 
can't assume it's an easy thing to implement.
Here we are discussing just the second part of my post; the 
title of my post refers only to the first half of it.



However, the biggest problem with this proposal (in my opinion) 
is that it is unnecessary. I care deeply about performance, but 
tiny optimisations like this are simply not important 99% of 
the time. When they are important, just write a specific 
optimised version and use that. Yes, you lose generality, but 
special needs call for special cases. Let's not complicate the 
language and bloat the codebase further for questionable gain.


Writing specialized versions without any language help is not 
nice, and I think the gain is significant; it's not just tiny 
optimizations. My D programs contain a lot of stuff known at 
compile-time. I think such a simple, poor man's, hand-made version 
of partial compilation is able to do things like this (done by true 
partial compilation):


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.31.5469&rep=rep1&type=pdf

Bye,
bearophile


Re: Test for array literal arguments?

2012-06-05 Thread Peter Alexander

On Tuesday, 5 June 2012 at 23:48:48 UTC, bearophile wrote:
So we are back to an idea Walter has tried and rejected a few 
years ago, of compile-time known arguments with "static". The 
small difference here is (I think) that both iPow templates are 
allowed to exist in the code at the same time, and the iPow 
overload with "static" is preferred when the argument is known at 
compile-time:


I was also thinking about this idea today. I was writing a small 
math function (Van der Corput sequence generator) with the 
signature vdc(int n, int b) and noticed that the code could be 
faster when b == 2 (the most common case) because divides can be 
turned into shifts and mods turned to bitwise AND etc.


You could duplicate the function to take the static args as 
template args, but that's ugly.
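
For illustration, a rough sketch of that duplication (the vdc bodies here 
are just a simple textbook implementation, not the original code):

import std.stdio;

// Runtime-base version: b is an ordinary parameter, so the divides and
// mods stay in the generated code.
double vdc(int n, int b)
{
    double result = 0, denom = 1;
    while (n > 0)
    {
        denom *= b;
        result += (n % b) / denom;
        n /= b;
    }
    return result;
}

// Duplicated version with the base as a compile-time template argument,
// specialised for b == 2: the divide becomes a shift, the mod an AND.
double vdc(int b)(int n) if (b == 2)
{
    double result = 0, denom = 1;
    while (n > 0)
    {
        denom *= 2;
        result += (n & 1) / denom;  // n % 2 becomes n & 1
        n >>= 1;                    // n / 2 becomes n >> 1
    }
    return result;
}

void main()
{
    writeln(vdc(5, 2));  // 0.625, base passed at run time
    writeln(vdc!2(5));   // 0.625, base fixed at compile time
}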


I also came to the conclusion of using 'static' as a parameter 
"storage class" to specify that the parameter is known at 
compile-time.


One problem with this approach is that it only solves some cases 
and cannot work in general. It also has other implications:
- Code with 1 or more optimised versions will require extra 
maintenance/testing.
- Makes code more difficult to reason about (it can be difficult to 
tell which version is called).

- Adds more rules for overload resolution.

However, the biggest problem with this proposal (in my opinion) 
is that it is unnecessary. I care deeply about performance, but 
tiny optimisations like this are simply not important 99% of the 
time. When they are important, just write a specific optimised 
version and use that. Yes, you lose generality, but special needs 
call for special cases. Let's not complicate the language and 
bloat the codebase further for questionable gain.


Re: Implicit type conversions with data loss

2012-06-05 Thread Thiez

On Tuesday, 5 June 2012 at 22:17:57 UTC, bearophile wrote:


Or you can add an assert/enforce, or you can create a small 
struct that represents safely assignable uints, etc. No solution 
is good.


Bye,
bearophile


Surely structs could work?

struct safeType(T) {
  T value;
}

Define all operations that you can safely perform on T on the 
struct, but only with structs of the same type and with T. It 
wouldn't be very pretty, but it would work, wouldn't it? Writing 
the template would be annoying but you'd only have to do it once.
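
A rough, untested sketch of the shape it might take:

struct safeType(T)
{
    T value;

    // Operations are only accepted for the same wrapped type or for
    // exactly T, so signed/unsigned values can't sneak in implicitly.
    safeType opBinary(string op)(safeType rhs)
    {
        return safeType(mixin("value " ~ op ~ " rhs.value"));
    }

    safeType opBinary(string op, U)(U rhs) if (is(U == T))
    {
        return safeType(mixin("value " ~ op ~ " rhs"));
    }
}

unittest
{
    auto a = safeType!uint(1);
    auto b = a + safeType!uint(2);                 // fine
    auto c = a + 3u;                               // fine: 3u is exactly uint
    static assert(!__traits(compiles, a + (-1)));  // int operand is rejected
}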


Test for array literal arguments?

2012-06-05 Thread bearophile

Warning: this post contains some partially uncooked ideas.

I hope Walter will read this post :-)

First of all the little problem. I'd like:

[1, 3, 7].canFind(x)

To be compiled with an in-lined:

x == 1 || x == 3 || x == 7


(A very optimizing D back-end is able to inline the array 
creation, see that the array length is known at compile-time, and 
unroll the search loop according to that length, doing what I am 
asking for here. Maybe LDC2 with link-time optimization is able to 
do it. DMD is not able to do it. Having a very optimizing back-end 
is good, but languages like C and Go (and Java as a 
counter-example) show that not _requiring_ a very optimizing 
back-end is good for a language.)


--

This is a starting point for the discussion: a modified and 
simplified implementation of canFind() that also contains an 
optimization for short fixed-size arrays:




import std.stdio: writeln;
import std.traits: ForeachType, isStaticArray;
import std.string: xformat, join;

bool canFind(Range, T)(Range items, T item)
if (is(ForeachType!Range == T)) {
  static if (isStaticArray!Range && Range.length < 5) {
static if (Range.length == 0) {
  return false;
} else {
  static string genEq(string seqName, string itemName, int len)
  /*pure nothrow*/ {
string[] result;
foreach (i; 0 .. len)
  result ~= xformat("%s[%d] == %s", seqName, i, itemName);
return result.join(" || ");
  }
  return mixin(genEq("items", "item", Range.length));
}
  } else {
foreach (x; items)
  if (x == item)
return true;
return false;
  }
}

int main() {
  int x = 3; // run-time value
  int[3] a = [1, 3, 7];
  return a.canFind(x);
  //assert([1, 3, 7].canFind(x));
}


The asm of the relevant functions (dmd 2.060alpha, 32 bit, -O 
-release -inline):



_D4test18__T7canFindTG3iTiZ7canFindFG3iiZb13genExpressionFAyaAyaiZAya
L0:     sub     ESP,0Ch
        push    EBX
        xor     EBX,EBX
        push    ESI
        mov     ESI,EAX
        test    ESI,ESI
        mov     dword ptr 8[ESP],0
        mov     dword ptr 0Ch[ESP],0
        jle     L59
L1D:    push    dword ptr FLAT:_DATA[014h]
        push    dword ptr FLAT:_DATA[010h]
        push    dword ptr 02Ch[ESP]
        push    dword ptr 02Ch[ESP]
        push    EBX
        push    dword ptr 030h[ESP]
        push    dword ptr 030h[ESP]
        call    near ptr _D3std6string24__T7xformatTaTAyaTiTAyaZ7xformatFxAaAyaiAyaZAya
        push    EDX
        mov     EDX,offset FLAT:_D13TypeInfo_AAya6__initZ
        push    EAX
        lea     ECX,010h[ESP]
        push    ECX
        push    EDX
        call    near ptr __d_arrayappendcT
        inc     EBX
        add     ESP,010h
        cmp     EBX,ESI
        jl      L1D
L59:    push    dword ptr 0Ch[ESP]
        push    dword ptr 0Ch[ESP]
        push    dword ptr FLAT:_DATA[024h]
        push    dword ptr FLAT:_DATA[020h]
        call    near ptr _D3std5array22__T8joinImplTAAyaTAyaZ8joinImplFAAyaAyaZAya
        pop     ESI
        pop     EBX
        add     ESP,0Ch
        ret     010h


_D4test18__T7canFindTG3iTiZ7canFindFG3iiZb
        mov     EDX,EAX
        cmp     4[ESP],EDX
        je      L18
        cmp     8[ESP],EDX
        je      L18
        cmp     0Ch[ESP],EDX
        je      L18
        xor     EAX,EAX
        jmp     short L1D
L18:    mov     EAX,1
L1D:    ret     0Ch


__Dmain comdat
L0:     sub     ESP,0Ch
        mov     EAX,offset FLAT:_D12TypeInfo_xAi6__initZ
        push    EBX
        push    ESI
        push    0Ch
        push    3
        push    EAX
        call    near ptr __d_arrayliteralTX
        add     ESP,8
        mov     EBX,EAX
        mov     dword ptr [EAX],1
        mov     ECX,3
        mov     EDX,EBX
        push    EBX
        lea     ESI,010h[ESP]
        mov     4[EBX],ECX
        mov     dword ptr 8[EBX],7
        push    ESI
        call    near ptr _memcpy
        add     ESP,0Ch
        mov     EAX,3
        push    dword ptr 010h[ESP]
        push    dword ptr 010h[ESP]
        push    dword ptr 010h[ESP]
        call    near ptr _D4test18__T7canFindTG3iTiZ7canFindFG3iiZb
        and     EAX,0FFh
        pop     ESI
        pop     EBX
        add     ESP,0Ch
        ret


That asm shows two problems that I can't fully explain:
- the very existence of a canFind instance with an inlined xformat 
call (!);
- the missed inlining of the canFind instance that performs just 
the 3 comparisons (_D4test18__T7canFindTG3iTiZ7canFindFG3iiZb).


But for this discussion I don't care about those two problems. 
I'd like this, too, to be computed using the mixin(genEq):


int main() {
int x = 3; // run-time value
assert([1, 3, 7].canFind(x));
}


That doesn't happen because [1, 3, 7] is an int[] literal, so 
inside canFind() isStaticArray!Range is false.


On the other hand this currently works, because despite [1, 3, 7] 
being an int[] literal, it is also implicitly convertible to int[3]:



import std.string: xformat, join;

bool canFind(int[3] items, int item) {
  static string genEq(string seqName, string itemName, int len) {
string[] result;
for

Casts, overflows and demonstrations

2012-06-05 Thread bearophile

This is a reduced part of some D code:


import std.bigint, std.conv, std.algorithm, std.range;

void foo(BigInt number)
in {
assert(number >= 0);
} body {
ubyte[] digits = text(number + 1)
 .retro()
 .map!(c => cast(ubyte)(c - '0'))()
 .array();
// ...
}

void main() {}



The important line of code adds one to 'number', converts it to a 
string, scans it starting from its end, and for each char (digit) 
finds its value by subtracting the ASCII value of '0', and casts the 
result to ubyte. Then it converts the lazy range to an array, a 
ubyte[].


The cast in the D code is needed because 'c' is a char. If you 
subtract '0' from a char, in D the result is an int, and D doesn't 
allow you to assign that int (I guess the compiler performs range 
analysis on the expression, so it knows the result can be 
negative too) to a ubyte, to avoid losing information.


Casts are dangerous, so it's better to avoid them where possible. 
A cast looks kind of safe because you usually know what you are 
doing while you program. But when you later change other parts of 
the code, the cast stays silent, and maybe it's no longer casting 
from the type you think it is. Maybe that kind of bug can be 
avoided by a templated function like this, which makes both the 
from and to types explicit (it doesn't compile if the from type is 
wrong) (this code is not fully correct; the __traits test is not 
working well):



template Cast(From, To) if (__traits(compiles, cast(To)From.init)) {
    To Cast(T)(T x) if (is(T == From)) {
        return cast(To)x;
    }
}

void main() {
    int x = -100;
    ubyte y = Cast!(int, ubyte)(x);
    string s = "123";
    int y2 = Cast!(string, int)(s);
}


The following code is similar, but to!() performs a run-time test 
that makes sure the subtraction result is representable in a 
ubyte, and otherwise throws an exception:


ubyte[] digits = text(number + 1)
 .retro()
 .map!(c => to!ubyte(c - '0'))()
 .array();


That code is safer than the cast, but it performs a run-time test 
for each digit, which is not good.


In theory a smarter compiler (working on good enough code) is 
able to do better: text() calls a BigInt method that returns the 
textual representation of the value in base ten (today such a 
method is toString(), but maybe this situation will change and 
improve). BigInt.toString() could have a post-condition like this:



string toString()
out(result) {
  size_t start = 0;
  if (this < 0) {
assert(result[0] == '-');
start = 1;
  }
  foreach (digit; result[start .. $])
assert(digit >= '0' && digit <= '9');
  // If you want you can also assert that the first
  // digit is zero only if the bigint value is zero.
} body {
  // ...
}


Given that information, plus the foo pre-condition 
in { assert(number >= 0); }, a smart compiler is able to infer 
(or to ask the programmer to demonstrate) that text() returns an 
array of just ['0',..,'9'] chars and that retro() doesn't change the 
contents of the range, so if you subtract '0' from them you get a 
number in [0,..,9] that is always representable in a ubyte. So 
no cast is needed.


Now and then I take a look at the ongoing development and 
refinement of the "Modern Eiffel" language (it's a kind of 
Eiffel2, see 
http://tecomp.sourceforge.net/index.php?file=doc/papers/lang/modern_eiffel.txt 
), which is supposed to be able (or to become able) to perform those 
inferences (or to use them if the programmer has demonstrated 
them), so I think it will be able to spare both that cast and the 
run-time tests on each char, avoiding overflow bugs.


According to Bertrand Meyer and others, in 20 years similar things 
are going to become a part of the normal programming 
experience.


Bye,
bearophile


Re: AST Macros?

2012-06-05 Thread Paul D. Anderson

On Tuesday, 5 June 2012 at 21:20:43 UTC, Jacob Carlborg wrote:

On 2012-06-05 11:02, foobar wrote:

This argument was raised before. That "heap of problems" is as 
vague as the proposed AST system(s).
As far as I can tell, that heap of problems is mainly about 
making it harder to make internal breaking changes, since the 
compiler is no longer a black box.

Now, I'd argue that having a stable API for those compiler 
internals is needed anyway. Besides the obvious benefits of a 
more modular design that better encapsulates the different 
layers of the compilation process, it allows us to implement a 
compiler as a set of libraries, which benefits the tool 
ecosystem, IDEs, text-editors, lint tools, etc. Tools which 
could reuse subsets of these libraries (e.g. think of Clang's 
design and how it allowed for the vim auto-complete plugin).


Even _without_ the AST macros I think it's a worthy goal to 
pursue; AST macros simply make the outcome that much sweeter.


I couldn't agree more.


Can we move this to a DIP?

Paul




Re: Windows 2000 support

2012-06-05 Thread Paulo Pinto

On Tuesday, 5 June 2012 at 15:48:07 UTC, Jonathan M Davis wrote:

On Tuesday, June 05, 2012 19:34:38 Dmitry Olshansky wrote:

> If it was not for the damned Windows, there would be a single
> universal operating system interface for all operating 
> systems.


If POSIX standardization was ever successful. If all you need 
is covered
by oldish Unix interface, if ... And there is ton of small 
details that
try to stab you in the eye while porting from say Linux to OS 
X.


When writing std.datetime, I was shocked to find out that Mac 
OS X doesn't have
the librt functions in spite of the fact that they're POSIX. My 
guess is that
they're from some version of POSIX that Mac OS X doesn't 
support, but
regardless, the fact that something is POSIX doesn't seem to 
actually
guarantee much. It puts you in the general ballpark of your 
stuff working if
it's using POSIX stuff, but you have to make it sure (and 
potentially tweak)
everything that you do which relies on POSIX functionality for 
each OS to make
sure that it functions correctly. All you have to do is go 
through druntime
and see all of the differences between each of the POSIX 
systems to see how
much they vary, in spite of the fact that they're all 
supposedly following the

POSIX standard.

- Jonathan M Davis


This is the hard reality of UNIX systems that many aren't aware of, 
because they only know one specific system.

A long time ago, 1999-2003, I had my share of pain supporting 
server applications across AIX, HP-UX, Solaris, Linux and BSD, 
besides Windows.


The one that gave us the most headaches was HP-UX, due to the archaic 
compiler available on the system and the 32/64-bit transition 
happening in those days.


--
Paulo


Re: Implicit type conversions with data loss

2012-06-05 Thread bearophile
Languages such as Ada, Delphi, C# and a few others (C/C++ too, 
with a new Clang feature) know that overflow of fixnums is a very 
common source of bad bugs, so they offer optional run-time 
tests on assignments and numerical operations. D too will 
eventually need those.


A little example of the difficulties involved:
http://blog.regehr.org/archives/721

Bye,
bearophile


Re: Windows 2000 support

2012-06-05 Thread Paulo Pinto

On Tuesday, 5 June 2012 at 15:32:05 UTC, Gor Gyolchanyan wrote:
On Tue, Jun 5, 2012 at 7:03 PM, Dmitry Olshansky 
 wrote:


On 05.06.2012 18:57, Alex Rønne Petersen wrote:


On 05-06-2012 16:52, Gor Gyolchanyan wrote:


On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono 

> wrote:

Drop support since even Microsoft dropped support. Even if 
druntime will
support Windows 2000, all my the programs I code will at 
least require

Windows XP.


+1

--
Bye,
Gor Gyolchanyan.



Agreed.



Same here, just make it official and be done with it.

--
Dmitry Olshansky



So, the set of supported operating systems will be:
1. Windows XP +
2. POSIX

If it was not for the damned Windows, there would be a single
universal operating system interface for all operating systems.

--
Bye,
Gor Gyolchanyan.



Forgetting, of course, that there are many industrial operating
systems that don't fully support POSIX, if at all.

Or that even with POSIX, the support is not the same across all
commercial UNIX systems.

I like UNIX and POSIX, but it is not the universal API that 
many think it is.


--
Paulo


Re: Implicit type conversions with data loss

2012-06-05 Thread bearophile

ctrl:

I don't want them to be performed at all. How do I disable this 
'feature'?


For example, take a look at this code:

import std.stdio;
void main() {
int x = -1;
uint b = x;
writeln(b);
}

It outputs 4294967295, but I want a compile-time error instead. 
Any suggestions?

(compiler version dmd 2.059)


D is designed to be a safe language; maybe it will be used for 
industrial processes that require a significant amount of safety. 
So D tries to _statically_ refuse value conversions that cause 
data loss. But for practical reasons (that is, to avoid the 
introduction of too many casts, which are even more dangerous) 
this rule is not adopted in some cases. As an example, D allows you 
to assign a double to a float (float <== double), which causes 
some precision loss.


An int and a uint are both represented with 32 bits, so casting one 
to the other doesn't cause data loss, but the ranges of the numbers 
they represent are different, so in general their conversion is 
unsafe.


Languages such as Ada, Delphi, C# and a few others (C/C++ too, with 
a new Clang feature) know that overflow of fixnums is a very common 
source of bad bugs, so they offer optional run-time tests on 
assignments and numerical operations. D too will eventually need 
those.


In the meantime you can do this, which is not so fast (the inlined 
tests in Ada/C#/Delphi are far faster):


import std.stdio, std.conv;
void main() {
int x = -1;
auto b = to!uint(x);
writeln(b);
}


Or you can add an assert/enforce, or you can create a small 
struct that represents safely assignable uints, etc. No solution 
is good.


Bye,
bearophile


Re: Windows 2000 support

2012-06-05 Thread Jacob Carlborg

On 2012-06-05 17:47, Jonathan M Davis wrote:


When writing std.datetime, I was shocked to find out that Mac OS X doesn't have
the librt functions in spite of the fact that they're POSIX. My guess is that
they're from some version of POSIX that Mac OS X doesn't support, but
regardless, the fact that something is POSIX doesn't seem to actually
guarantee much. It puts you in the general ballpark of your stuff working if
it's using POSIX stuff, but you have to make it sure (and potentially tweak)
everything that you do which relies on POSIX functionality for each OS to make
sure that it functions correctly. All you have to do is go through druntime
and see all of the differences between each of the POSIX systems to see how
much they vary, in spite of the fact that they're all supposedly following the
POSIX standard.

- Jonathan M Davis


The Posix support on Mac OS X isn't the best. I think it was pretty bad 
in Mac OS X 10.4. In 10.5 it got a lot better. I think it's getting 
better in each version.


--
/Jacob Carlborg


Re: AST Macros?

2012-06-05 Thread Jacob Carlborg

On 2012-06-05 09:08, Don Clugston wrote:

On 04/06/12 20:46, Jacob Carlborg wrote:

On 2012-06-04 10:03, Don Clugston wrote:


AST macros were discussed informally on the day after the conference,
and it quickly became clear that the proposed ones were nowhere near
powerful enough. Since that time nobody has come up with another
proposal, as far as I know.


I think others have suggested doing something similar like Nemerle,
Scala or Nimrod.



Yes but only in very vague terms -- not in any more words than that.
When I look at the Nimrod docs, it basically seems to be nothing more
than "expose the compiler internal data structures". Which is extremely
easy to do but causes a heap of problems in the long term.


Yes, no formal proposition has been made.

--
/Jacob Carlborg


Re: AST Macros?

2012-06-05 Thread Jacob Carlborg

On 2012-06-05 11:02, foobar wrote:


This argument was raised before. That "heap of problems" is as vague as
the proposed AST system(s).
As far as I can tell, that heap of problems is mainly about making it
harder to make internal breaking changes since the compiler is no longer
a black box.

Now, I'd argue that having a stable API for those compiler internals is
needed anyway. Besides the obvious benefits of a more modular design
that better encapsulates the different layers of the compilation
process, it allows us to implement a compiler as a set of libraries
which benefits the tool ecosystem, IDEs, text-editors, lint tools, etc.
Tools which could reuse subsets of these libraries (e.g. think of
Clang's design and how it allowed for the vim auto-complete plugin).

Even _without_ the AST macros I think it's a worthy goal to pursue; AST
macros simply make the outcome that much sweeter.


I couldn't agree more.

--
/Jacob Carlborg


Re: foreach over pointer to range

2012-06-05 Thread Artur Skawina
On 06/05/12 22:41, simendsjo wrote:
> On Tue, 05 Jun 2012 22:38:22 +0200, Artur Skawina  wrote:
> 
>> On 06/05/12 22:23, simendsjo wrote:
>>> On Tue, 05 Jun 2012 20:46:51 +0200, Timon Gehr  wrote:
>>>

 It should be dropped. A pointer to range is a perfectly fine range.
>>>
>>>
>>> Sure..? I couldn't get it to work either:
>>> struct R {
>>> string test = "aoeu";
>>> @property front() { return test[0]; }
>>> @property bool empty() { return !test.length; }
>>> void popFront(){test = test[1..$];}
>>> }
>>>
>>> void main() {
>>> R r;
>>> R* p = &r;
>>> foreach(ch; p) // invalid foreach aggregate p
>>> writeln(ch);
>>> }
>>
>> It /is/ a valid range, but it's /not/ currently accepted
>> by foreach.
>>
> (...)
>>
>> which works, but only obfuscates the code and can be less efficient.
> 
> Well, then it's not a *perfectly fine* range, is it then :)

It *is* a perfectly fine range; the problem is with 'foreach'.

artur


Re: foreach over pointer to range

2012-06-05 Thread simendsjo
On Tue, 05 Jun 2012 22:38:22 +0200, Artur Skawina   
wrote:



On 06/05/12 22:23, simendsjo wrote:
On Tue, 05 Jun 2012 20:46:51 +0200, Timon Gehr   
wrote:




It should be dropped. A pointer to range is a perfectly fine range.



Sure..? I couldn't get it to work either:
struct R {
string test = "aoeu";
@property front() { return test[0]; }
@property bool empty() { return !test.length; }
void popFront(){test = test[1..$];}
}

void main() {
R r;
R* p = &r;
foreach(ch; p) // invalid foreach aggregate p
writeln(ch);
}


It /is/ a valid range, but it's /not/ currently accepted
by foreach.


(...)


which works, but only obfuscates the code and can be less efficient.

artur


Well, then it's not a *perfectly fine* range, is it then :)


Re: foreach over pointer to range

2012-06-05 Thread Artur Skawina
On 06/05/12 22:23, simendsjo wrote:
> On Tue, 05 Jun 2012 20:46:51 +0200, Timon Gehr  wrote:
> 
>>
>> It should be dropped. A pointer to range is a perfectly fine range.
> 
> 
> Sure..? I couldn't get it to work either:
> struct R {
> string test = "aoeu";
> @property front() { return test[0]; }
> @property bool empty() { return !test.length; }
> void popFront(){test = test[1..$];}
> }
> 
> void main() {
> R r;
> R* p = &r;
> foreach(ch; p) // invalid foreach aggregate p
> writeln(ch);
> }

It /is/ a valid range, but it's /not/ currently accepted
by foreach.

So you have to write the above as:
   
   struct R {
   string test = "aoeu";
   @property front() { return test[0]; }
   @property bool empty() { return !test.length; }
   void popFront(){test = test[1..$];}
   }

   struct RangePtr(R) {
  R* ptr;
  alias ptr this;
  @property front()() { return ptr.front; }
   }

   void main() {
   R r;
   auto p = RangePtr!R(&r);
   foreach(ch; p)
   writeln(ch);
   }

which works, but only obfuscates the code and can be less efficient.

artur


Re: runtime hook for Crash on Error

2012-06-05 Thread deadalnix

Le 04/06/2012 21:29, Steven Schveighoffer a écrit :

On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:


1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1
to 2: this argument is not in any way dependent on the assert occuring
in a nothrow function. Rather, it's an argument for not having
AssertError at all.


I'm not sure that is the issue here at all. What I see is that the
unwinding of the stack is optional, based on the assumption that there's
no "right" answer.

However, there is an underlying driver for not unwinding the stack --
nothrow. If nothrow results in the compiler optimizing out whatever
hooks a function needs to properly unwind itself (my limited
understanding is that this helps performance), then there *is no
choice*, you can't properly unwind the stack.

-Steve


It changes nothing in terms of performance as long as you don't throw. And 
when you throw, performance is not your main problem.


Re: foreach over pointer to range

2012-06-05 Thread simendsjo

On Tue, 05 Jun 2012 20:46:51 +0200, Timon Gehr  wrote:



It should be dropped. A pointer to range is a perfectly fine range.



Sure..? I couldn't get it to work either:
struct R {
string test = "aoeu";
@property front() { return test[0]; }
@property bool empty() { return !test.length; }
void popFront(){test = test[1..$];}
}

void main() {
R r;
R* p = &r;
foreach(ch; p) // invalid foreach aggregate p
writeln(ch);
}


Re: foreach over pointer to range

2012-06-05 Thread Artur Skawina
On 06/05/12 21:25, Peter Alexander wrote:
> On Tuesday, 5 June 2012 at 18:46:51 UTC, Timon Gehr wrote:
>> On 06/05/2012 08:42 PM, Artur Skawina wrote:
>>> "foreach (e; pointer_to_range)" currently fails with:
>>>
>>>Error: foreach: Range* is not an aggregate type
>>>
>>> It can be worked around with...
> 
> Why not: foreach(e; *pointer_to_range)
> 
> Seems like the obvious solution to me, and works.

Works by copying the whole range struct, which is what I don't
want to happen. And no, using a class is not an option. :)
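
Until foreach accepts the pointer directly, the copy can also be avoided 
with a plain loop through the pointer (a rough sketch, reusing the R struct 
and imports from the earlier post):

for (; !p.empty; p.popFront())
    writeln(p.front);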

artur


Re: foreach over pointer to range

2012-06-05 Thread Peter Alexander

On Tuesday, 5 June 2012 at 18:46:51 UTC, Timon Gehr wrote:

On 06/05/2012 08:42 PM, Artur Skawina wrote:

"foreach (e; pointer_to_range)" currently fails with:

   Error: foreach: Range* is not an aggregate type

It can be worked around with...


Why not: foreach(e; *pointer_to_range)

Seems like the obvious solution to me, and works.




Re: foreach over pointer to range

2012-06-05 Thread Timon Gehr

On 06/05/2012 08:42 PM, Artur Skawina wrote:

"foreach (e; pointer_to_range)" currently fails with:

Error: foreach: Range* is not an aggregate type

It can be worked around with

struct RangePtr(R) {
   R* ptr;
   alias ptr this;
   @property front()() { return ptr.front; }
}

but this adds unnecessary overhead (unfortunately such struct is not
always treated the same as a real pointer, eg when passing it around).

Is there some reason that makes the is-aggregate check necessary, or could
it be dropped?


Thanks,

artur


It should be dropped. A pointer to range is a perfectly fine range.


foreach over pointer to range

2012-06-05 Thread Artur Skawina
"foreach (e; pointer_to_range)" currently fails with:

   Error: foreach: Range* is not an aggregate type

It can be worked around with 

   struct RangePtr(R) {
  R* ptr;
  alias ptr this;
  @property front()() { return ptr.front; }
   }

but this adds unnecessary overhead (unfortunately such a struct is not
always treated the same as a real pointer, e.g. when passing it around).

Is there some reason that makes the is-aggregate check necessary, or could
it be dropped?


Thanks,

artur


Re: Windows 2000 support

2012-06-05 Thread Stewart Gordon

On 05/06/2012 08:43, Denis Shelomovskij wrote:


2. [A bit improve and] Merge this pull and officially announce that Windows 
2000 is
partially supported.



Best course of action IMO.  After all, it's only a few blocks of code in two files.  I 
can't see what the fuss over folding it in is about.


Stewart.


Re: Implicit type conversions with data loss

2012-06-05 Thread Paul D. Anderson

On Tuesday, 5 June 2012 at 18:06:15 UTC, ctrl wrote:
I don't want them to be performed at all. How do I disable this 
'feature'?


For example, take a look at this code:

import std.stdio;
void main() {
int x = -1;
uint b = x;
writeln(b);
}

It outputs 4294967295, but I want a compile-time error instead. 
Any suggestions?

(compiler version dmd 2.059)


I doubt that a 'feature' that's been in D and its predecessors 
for such a long time is going to be easy to disable. Probably the 
best you can do is to work with it: add an assert, perhaps, or 
define a new type that checks for this condition.


Paul



Implicit type conversions with data loss

2012-06-05 Thread ctrl
I don't want them to be performed at all. How do I disable this 
'feature'?


For example, take a look at this code:

import std.stdio;
void main() {
int x = -1;
uint b = x;
writeln(b);
}

It outputs 4294967295, but I want a compile-time error instead. 
Any suggestions?

(compiler version dmd 2.059)


Re: Donations

2012-06-05 Thread Jonas Drewsen

On Sunday, 3 June 2012 at 09:18:33 UTC, Walter Bright wrote:

On 6/3/2012 1:12 AM, Jonas Drewsen wrote:
Would be nice if there was a tshirt with the new D logo. 
Something like


http://www.blender3d.org/e-shop/product_info_n.php?products_id=141

How do you upload the prints to cafepress? Maybe someone on 
this list can create

something cool looking. Anyone?


The web site has some buttons for setting up a shop and 
uploading the artwork. It's pretty simple.


Can anyone put the vector art for the D logo online?

I might have a go at attaching the text so that we can have a 
more up-to-date t-shirt.


-Jonas



Re: Increment / Decrement Operator Behavior

2012-06-05 Thread Mikael Lindsten
2012/6/5 Jonathan M Davis 
>
>
> I think that Bernard is being a bit harsh, but in essence, I agree. Since
> the
> evaluation order of arguments is undefined, programmers should be aware of
> that
> and code accordingly. If they don't bother to learn, then they're going to
> get
> bitten, and that's life.
>
> Now, Walter _has_ expressed interest in changing it so that the order of
> evaluation for function arguments is fully defined as being left-to-right,
> which solves the issue. I'd still counsel against getting into the habit of
> writing code which relies on the order of evaluation for the arguments to a
> function, since it's so common for other languages not to define it (so
> that
> the compiler can better optimize the calls), and so getting into the habit
> of
> writing code which _does_ depend on the order of evalution for function
> arguments will cause you to write bad code you when you work in most other
> programming languages.
>
> As for treating pre or post-increment operators specially in some manner,
> that
> doesn't make sense. The problem is far more general than that. If we're
> going
> to change anything, it would be to make it so that the language itself
> defines
> the order of evaluation of function arguments as being left-to-right.
>
> - Jonathan M Davis
>

Agree completely!


Re: runtime hook for Crash on Error

2012-06-05 Thread Sean Kelly
On Jun 5, 2012, at 8:44 AM, Jonathan M Davis  wrote:
> 
> In many cases, it's probably fine, but if the program is in a bad enough 
> state 
> that an Error is thrown, then you can't know for sure that any particular 
> such 
> block will execute properly (memory corruption being the extreme case), and 
> if 
> it doesn't run correctly, then it could make things worse (e.g. writing 
> invalid data to a file, corrupting that file). Also, if the stack is not 
> unwound 
> perfectly (as nothrow prevents), then the program's state will become 
> increasingly invalid the farther that the program gets from the throw point, 
> which will increase the chances of cleanup code functioning incorrectly, as 
> any assumptions that they've made about the program state are increasingly 
> likely to be wrong (as well as it being increasingly likely that the 
> variables 
> that they operate on no longer being valid).

Then we should really just abort on Error. What I don't understand is the 
assertion that it isn't safe to unwind the stack on Error and yet that 
catch(Error) clauses should still execute. If the program state is really so 
bad that nothing can be done safely then why would the user attempt to log the 
error condition or anything else?

I think an argument could be made that the current behavior of stack unwinding 
should continue and a hook should be added to let the user call abort or 
whatever instead. But we couldn't make abort the default and let the user 
disable that. 

Re: Windows 2000 support

2012-06-05 Thread Russel Winder
On Tue, 2012-06-05 at 19:31 +0400, Gor Gyolchanyan wrote:
[...]

> If it was not for the damned Windows, there would be a single
> universal operating system interface for all operating systems.

On the other hand, Windows represents something of the order of 85% of
all shipped workstations, and, reputedly, 70% of all developers develop
in Windows. So whilst I eschew Windows, I recognize that programming
languages and development tools must be well supported on Windows, as
well as Mac OS X, Linux, UNIX, etc. to have any possibility of any
traction.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Windows 2000 support

2012-06-05 Thread Jonathan M Davis
On Tuesday, June 05, 2012 19:34:38 Dmitry Olshansky wrote:
> > If it was not for the damned Windows, there would be a single
> > universal operating system interface for all operating systems.
> 
> If POSIX standardization was ever successful. If all you need is covered
> by oldish Unix interface, if ... And there is ton of small details that
> try to stab you in the eye while porting from say Linux to OS X.

When writing std.datetime, I was shocked to find out that Mac OS X doesn't have 
the librt functions in spite of the fact that they're POSIX. My guess is that 
they're from some version of POSIX that Mac OS X doesn't support, but 
regardless, the fact that something is POSIX doesn't seem to actually 
guarantee much. It puts you in the general ballpark of your stuff working if 
it's using POSIX stuff, but you have to make it sure (and potentially tweak) 
everything that you do which relies on POSIX functionality for each OS to make 
sure that it functions correctly. All you have to do is go through druntime 
and see all of the differences between each of the POSIX systems to see how 
much they vary, in spite of the fact that they're all supposedly following the 
POSIX standard.

- Jonathan M Davis


Re: runtime hook for Crash on Error

2012-06-05 Thread Jonathan M Davis
On Tuesday, June 05, 2012 13:57:14 Don Clugston wrote:
> On 05/06/12 09:07, Jonathan M Davis wrote:
> > On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
> >> On 04/06/12 21:29, Steven Schveighoffer wrote:
> >>> On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:
>  1. There exist cases where you cannot know why the assert failed.
>  2. Therefore you never know why an assert failed.
>  3. Therefore it is not safe to unwind the stack from a nothrow
>  function.
>  
>  Spot the fallacies.
>  
>  The fallacy in moving from 2 to 3 is more serious than the one from 1
>  to 2: this argument is not in any way dependent on the assert occuring
>  in a nothrow function. Rather, it's an argument for not having
>  AssertError at all.
> >>> 
> >>> I'm not sure that is the issue here at all. What I see is that the
> >>> unwinding of the stack is optional, based on the assumption that there's
> >>> no "right" answer.
> >>> 
> >>> However, there is an underlying driver for not unwinding the stack --
> >>> nothrow. If nothrow results in the compiler optimizing out whatever
> >>> hooks a function needs to properly unwind itself (my limited
> >>> understanding is that this helps performance), then there *is no
> >>> choice*, you can't properly unwind the stack.
> >>> 
> >>> -Steve
> >> 
> >> No, this whole issue started because the compiler currently does do
> >> unwinding whenever it can. And Walter claimed that's a bug, and it
> >> should be explicitly disabled.
> >> 
> >> It is, in my view, an absurd position. AFAIK not a single argument has
> >> been presented in favour of it. All arguments have been about "you
> >> should never unwind Errors".
> > 
> > It's quite clear that we cannot completely, correctly unwind the stack in
> > the face of Errors.
> 
> Well that's a motherhood statement. Obviously in the face of extreme
> memory corruption you can't guarantee *any* code is valid.
> The *main* reason why stack unwinding would not be possible is if
> nothrow intentionally omits stack unwinding code.

It's not possible precisely because of nothrow.

> > As such, no one should be relying on stack unwinding when an
> > Error is thrown.
> 
> This conclusion DOES NOT FOLLOW. And I am getting so sick of the number
> of times this fallacy has been repeated in this thread.
> 
> These kinds of generalizations are completely invalid in a systems
> programming language.

If nothrow prevents the stack from being correctly unwound, then no, you 
shouldn't be relying on stack unwinding when an Error is thrown, because it's 
_not_ going to work properly.

> > Regardless, I think that there are a number of people in this thread who
> > are mistaken in how recoverable they think Errors and/or segfaults are,
> > and they seem to be the ones pushing the hardest for full stack unwinding
> > on the theory that they could somehow ensure safe recovery and a clean
> > shutdown when an Error occurs, which is almost never possible, and
> > certainly isn't possible in the general case.
> > 
> > - Jonathan M Davis
> 
> Well I'm pushing it because I implemented it (on Windows).
> 
> I'm less knowledgeable about what happens on other systems, but know
> that on Windows, the whole system is far, far more robust than most
> people on this thread seem to think.
> 
> I can't see *any* problem with executing catch(Error) clauses. I cannot
> envisage a situation where that can cause a problem. I really cannot.

In many cases, it's probably fine, but if the program is in a bad enough state 
that an Error is thrown, then you can't know for sure that any particular such 
block will execute properly (memory corruption being the extreme case), and if 
it doesn't run correctly, then it could make things worse (e.g. writing 
invalid data to a file, corrupting that file). Also, if the stack is not 
unwound 
perfectly (as nothrow prevents), then the program's state will become 
increasingly invalid the farther that the program gets from the throw point, 
which will increase the chances of cleanup code functioning incorrectly, as 
any assumptions that they've made about the program state are increasingly 
likely to be wrong (as well as it being increasingly likely that the variables 
that they operate on no longer being valid).

A lot of it comes down to worst case vs typical case. In the typical case, the 
code causing the Error is isolated enough and the code doing the cleanup is 
self-contained enough that trying to unwind the stack as much as possible will 
result in more correct behavior than skipping it all. But in the worst case, 
you can't rely on running any code being safe, because the state of the 
program is very much invalid, in which case, it's better to kill the program 
ASAP. Walter seems to subscribe to the approach that it's best to assume the 
worst case (e.g. that an assertion failure indicates horrible memory 
corruption), and always have Errors function that way, whereas others 
subscribe to

Re: Windows 2000 support

2012-06-05 Thread Dmitry Olshansky

On 05.06.2012 19:31, Gor Gyolchanyan wrote:

On Tue, Jun 5, 2012 at 7:03 PM, Dmitry Olshansky  wrote:


On 05.06.2012 18:57, Alex Rønne Petersen wrote:


On 05-06-2012 16:52, Gor Gyolchanyan wrote:


On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono <chr...@mta-international.net> wrote:

Drop support since even Microsoft dropped support. Even if druntime supports
Windows 2000, all the programs I code will at least require Windows XP.


+1

--
Bye,
Gor Gyolchanyan.



Agreed.



Same here, just make it official and be done with it.

--
Dmitry Olshansky



So, the set of supported operating systems will be:
1. Windows XP +
2. POSIX

If it were not for the damned Windows, there would be a single
universal operating system interface for all operating systems.



If POSIX standardization had ever been fully successful. If all you need is covered
by the oldish Unix interface, if ... And there are a ton of small details that
try to stab you in the eye while porting from, say, Linux to OS X.



--
Dmitry Olshansky


Re: Windows 2000 support

2012-06-05 Thread Gor Gyolchanyan
On Tue, Jun 5, 2012 at 7:03 PM, Dmitry Olshansky  wrote:
>
> On 05.06.2012 18:57, Alex Rønne Petersen wrote:
>>
>> On 05-06-2012 16:52, Gor Gyolchanyan wrote:
>>>
> >>> On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono <chr...@mta-international.net> wrote:
> >>>
> >>> Drop support since even Microsoft dropped support. Even if druntime supports
> >>> Windows 2000, all the programs I code will at least require Windows XP.
>>>
>>>
>>> +1
>>>
>>> --
>>> Bye,
>>> Gor Gyolchanyan.
>>
>>
>> Agreed.
>>
>
> Same here, just make it official and be done with it.
>
> --
> Dmitry Olshansky


So, the set of supported operating systems will be:
1. Windows XP +
2. POSIX

If it were not for the damned Windows, there would be a single
universal operating system interface for all operating systems.

--
Bye,
Gor Gyolchanyan.


Re: Windows 2000 support

2012-06-05 Thread Dmitry Olshansky

On 05.06.2012 18:57, Alex Rønne Petersen wrote:

On 05-06-2012 16:52, Gor Gyolchanyan wrote:

On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono <chr...@mta-international.net> wrote:

Drop support since even Microsoft dropped support. Even if druntime supports
Windows 2000, all the programs I code will at least require Windows XP.


+1

--
Bye,
Gor Gyolchanyan.


Agreed.



Same here, just make it official and be done with it.

--
Dmitry Olshansky


Re: Windows 2000 support

2012-06-05 Thread Alex Rønne Petersen

On 05-06-2012 16:52, Gor Gyolchanyan wrote:

On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono <chr...@mta-international.net> wrote:

Drop support since even Microsoft dropped support. Even if druntime supports
Windows 2000, all the programs I code will at least require Windows XP.


+1

--
Bye,
Gor Gyolchanyan.


Agreed.

--
Alex Rønne Petersen
a...@lycus.org
http://lycus.org


Re: Windows 2000 support

2012-06-05 Thread Gor Gyolchanyan
On Tue, Jun 5, 2012 at 6:43 PM, mta`chrono wrote:

> Drop support since even Microsoft dropped support. Even if druntime supports
> Windows 2000, all the programs I code will at least require Windows XP.
>

+1

-- 
Bye,
Gor Gyolchanyan.


Re: Windows 2000 support

2012-06-05 Thread mta`chrono
Drop support since even Microsoft dropped support. Even if druntime supports
Windows 2000, all the programs I code will at least require Windows XP.


Re: Making generalized Trie type in D

2012-06-05 Thread Roman D. Boiko

On Tuesday, 5 June 2012 at 12:57:10 UTC, Dmitry Olshansky wrote:

On 05.06.2012 9:33, Roman D. Boiko wrote:

On Tuesday, 5 June 2012 at 05:28:48 UTC, Roman D. Boiko wrote:
... without deep analysis I can't come up with a good API / design for
that (without overcomplicating it). Probably keeping mutable and
immutable APIs separate is the best choice. Will return to this
problem once I get a bit of free time.

Simplest and possibly the best approach is to provide an immutable
wrapper over mutable implementation, but that may be difficult to make
efficient given the need to support insert / delete as common operations.




I suspect I would have to add another policy like Persistent that will
preallocate some slack space implicitly, so that some pages can be
shallowly copied on each assignment (à la COW) and immutable parts still reused.
Another, simpler way would be to use a separate "pointer-to-actual-base"
field for each level, allowing it to be redirected to a new (modified)
copy. It still looks like a policy, as it _may be_ slightly slower.


Anyway as a start this should work:

auto modifyDupGlobalImmutableTrie(Trie!(immutable(T), ...) t,
    scope void delegate(Trie!(immutable(T), ...)) pure dg) pure
{
    auto copy = t.dup; // this would one day be a shallow copy
    with(copy)
    {
        dg(copy);
    }
    return copy;
}

//later on
{
...
immutable newTrie = modifyDupGlobalImmutableTrie(yourTrie);
...
}


Yes, something like that should work. I've finished the support request
and will investigate this and your std.uni. Maybe it is better to
avoid immutability... or do bulk insert / delete before the copy.


Re: Making generalized Trie type in D

2012-06-05 Thread Dmitry Olshansky

On 05.06.2012 9:33, Roman D. Boiko wrote:

On Tuesday, 5 June 2012 at 05:28:48 UTC, Roman D. Boiko wrote:

... without deep analysis I can't come up with a good API / design for
that (without overcomplicating it). Probably keeping mutable and
immutable APIs separate is the best choice. Will return to this
problem once I get a bit of free time.

Simplest and possibly the best approach is to provide an immutable
wrapper over mutable implementation, but that may be difficult to make
efficient given the need to support insert / delete as common operations.



I suspect I would have to add another policy like Persistent that will
preallocate some slack space implicitly, so that some pages can be
shallowly copied on each assignment (à la COW) and immutable parts still reused.
Another, simpler way would be to use a separate "pointer-to-actual-base"
field for each level, allowing it to be redirected to a new (modified)
copy. It still looks like a policy, as it _may be_ slightly slower.


Anyway as a start this should work:

auto modifyDupGlobalImmutableTrie(Trie!(immutable(T), ...) t,
    scope void delegate(Trie!(immutable(T), ...)) pure dg) pure
{
    auto copy = t.dup; // this would one day be a shallow copy
    with(copy)
    {
        dg(copy);
    }
    return copy;
}

//later on
{
...
immutable newTrie = modifyDupGlobalImmutableTrie(yourTrie);
...
}
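
For what it's worth, here is a self-contained sketch of the same freeze-a-modified-dup idea, using a trivial placeholder container instead of the Trie (Table, set and modifyDup below are made-up names for illustration, not the actual API):

import std.stdio;

// Placeholder stand-in for the real Trie; illustrative only.
struct Table
{
    int[] data;

    Table dup() const pure
    {
        // deep copy for now; a real Trie would shallow-copy and share
        // unmodified pages (COW)
        return Table(data.dup);
    }

    void set(size_t i, int v) pure { data[i] = v; }
}

// Dup the immutable structure, let the caller mutate the private copy
// inside a pure delegate, then freeze the result again.
immutable(Table) modifyDup(immutable(Table) t,
                           scope void delegate(ref Table) pure dg) pure
{
    auto copy = t.dup;           // nothing else aliases `copy`
    dg(copy);
    return cast(immutable) copy; // safe: `copy` is never mutated afterwards
}

void main()
{
    immutable original = cast(immutable) Table([1, 2, 3]);
    immutable updated  = modifyDup(original, (ref Table t) { t.set(1, 42); });
    writeln(original.data); // [1, 2, 3] -- untouched
    writeln(updated.data);  // [1, 42, 3]
}

The point being that all mutation happens on a copy that nothing else aliases, so casting it back to immutable at the end is sound; a Persistent policy would only change how much of that copy is shared rather than duplicated.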

--
Dmitry Olshansky


Re: runtime hook for Crash on Error

2012-06-05 Thread Dmitry Olshansky

On 05.06.2012 15:57, Don Clugston wrote:

On 05/06/12 09:07, Jonathan M Davis wrote:

On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:

On 04/06/12 21:29, Steven Schveighoffer wrote:

On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston wrote:

1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow
function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1
to 2: this argument is not in any way dependent on the assert occurring
in a nothrow function. Rather, it's an argument for not having
AssertError at all.


I'm not sure that is the issue here at all. What I see is that the
unwinding of the stack is optional, based on the assumption that
there's
no "right" answer.

However, there is an underlying driver for not unwinding the stack --
nothrow. If nothrow results in the compiler optimizing out whatever
hooks a function needs to properly unwind itself (my limited
understanding is that this helps performance), then there *is no
choice*, you can't properly unwind the stack.

-Steve


No, this whole issue started because the compiler currently does do
unwinding whenever it can. And Walter claimed that's a bug, and it
should be explicitly disabled.

It is, in my view, an absurd position. AFAIK not a single argument has
been presented in favour of it. All arguments have been about "you
should never unwind Errors".


It's quite clear that we cannot completely, correctly unwind the stack
in the
face of Errors.


Well that's a motherhood statement. Obviously in the face of extreme
memory corruption you can't guarantee *any* code is valid.
The *main* reason why stack unwinding would not be possible is if
nothrow intentionally omits stack unwinding code.


As such, no one should be relying on stack unwinding when an
Error is thrown.


This conclusion DOES NOT FOLLOW. And I am getting so sick of the number
of times this fallacy has been repeated in this thread.


Finally, a voice of reason. My prayers must have reached somebody up above...



These kinds of generalizations are completely invalid in a systems
programming language.


Regardless, I think that there are a number of people in this thread
who are
mistaken in how recoverable they think Errors and/or segfaults are,
and they
seem to be the ones pushing the hardest for full stack unwinding on
the theory
that they could somehow ensure safe recovery and a clean shutdown when an
Error occurs, which is almost never possible, and certainly isn't
possible in
the general case.

- Jonathan M Davis


Well I'm pushing it because I implemented it (on Windows).

I'm less knowledgeable about what happens on other systems, but know
that on Windows, the whole system is far, far more robust than most
people on this thread seem to think.



Exactly, hence the whole idea about SEH in the OS.


I can't see *any* problem with executing catch(Error) clauses. I cannot
envisage a situation where that can cause a problem. I really cannot.

And catch(Exception) clauses won't be run, because of the exception
chaining scheme we have implemented.

The only difficult case is 'finally' clauses, which may be expecting an
Exception.



--
Dmitry Olshansky


Re: runtime hook for Crash on Error

2012-06-05 Thread Don Clugston

On 05/06/12 09:07, Jonathan M Davis wrote:

On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:

On 04/06/12 21:29, Steven Schveighoffer wrote:

On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:

1. There exist cases where you cannot know why the assert failed.
2. Therefore you never know why an assert failed.
3. Therefore it is not safe to unwind the stack from a nothrow function.

Spot the fallacies.

The fallacy in moving from 2 to 3 is more serious than the one from 1
to 2: this argument is not in any way dependent on the assert occurring
in a nothrow function. Rather, it's an argument for not having
AssertError at all.


I'm not sure that is the issue here at all. What I see is that the
unwinding of the stack is optional, based on the assumption that there's
no "right" answer.

However, there is an underlying driver for not unwinding the stack --
nothrow. If nothrow results in the compiler optimizing out whatever
hooks a function needs to properly unwind itself (my limited
understanding is that this helps performance), then there *is no
choice*, you can't properly unwind the stack.

-Steve


No, this whole issue started because the compiler currently does do
unwinding whenever it can. And Walter claimed that's a bug, and it
should be explicitly disabled.

It is, in my view, an absurd position. AFAIK not a single argument has
been presented in favour of it. All arguments have been about "you
should never unwind Errors".


It's quite clear that we cannot completely, correctly unwind the stack in the
face of Errors.


Well that's a motherhood statement. Obviously in the face of extreme 
memory corruption you can't guarantee *any* code is valid.
The *main* reason why stack unwinding would not be possible is if 
nothrow intentionally omits stack unwinding code.



As such, no one should be relying on stack unwinding when an
Error is thrown.


This conclusion DOES NOT FOLLOW. And I am getting so sick of the number 
of times this fallacy has been repeated in this thread.


These kinds of generalizations are completely invalid in a systems 
programming language.



Regardless, I think that there are a number of people in this thread who are
mistaken in how recoverable they think Errors and/or segfaults are, and they
seem to be the ones pushing the hardest for full stack unwinding on the theory
that they could somehow ensure safe recovery and a clean shutdown when an
Error occurs, which is almost never possible, and certainly isn't possible in
the general case.

- Jonathan M Davis


Well I'm pushing it because I implemented it (on Windows).

I'm less knowledgeable about what happens on other systems, but know 
that on Windows, the whole system is far, far more robust than most 
people on this thread seem to think.


I can't see *any* problem with executing catch(Error) clauses. I cannot 
envisage a situation where that can cause a problem. I really cannot.


And catch(Exception) clauses won't be run, because of the exception 
chaining scheme we have implemented.


The only difficult case is 'finally' clauses, which may be expecting an 
Exception.
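
A tiny illustration of that split, assuming the intervening frames are in fact unwound (which is, of course, the contested part):

import std.stdio;

void mayFail() // deliberately not nothrow, so unwinding code is emitted
{
    throw new Error("invariant violated");
}

void main()
{
    try
    {
        try
        {
            mayFail();
        }
        catch (Exception e) // skipped: Error is not an Exception
        {
            writeln("Exception handler");
        }
        finally // the difficult case: cleanup that may be expecting an Exception
        {
            writeln("finally: cleanup runs during unwinding");
        }
    }
    catch (Error e) // catch(Error) clauses do run
    {
        writeln("Error handler: ", e.msg);
    }
}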


Re: Increment / Decrement Operator Behavior

2012-06-05 Thread Timon Gehr

On 06/04/2012 08:36 PM, Xinok wrote:

The increment and decrement operators are highly dependent on operator
precedence and associativity. If the actions are performed in a
different order than the developer presumed, it could cause unexpected
behavior.

I had a simple idea to change the behavior of this operator. It works
for the postfix operators but not prefix. Take the following code:

size_t i = 5;
writeln(i--, i--, i--);

As of now, this writes "543". With my idea, instead it would write,
"555". Under the hood, the compiler would rewrite the code as:

size_t i = 5;
writeln(i, i, i);
--i;
--i;
--i;

It decrements the variable after the current statement. While not the
norm, this behavior is at least predictable. For non-static variables,
such as array elements, the compiler could store a temporary reference
to the variable so it can decrement it afterwards.

I'm not actually proposing we make this change. I simply
thought it was a nifty idea worth sharing.


The behaviour the language requires is that the function call executes 
as if the parameters were evaluated from left to right. This is exactly 
the behaviour you observe. What is the problem you want to fix?
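
For what it's worth, that order is easy to observe directly; a quick check (the helper next below is made up purely for illustration):

import std.stdio;

// prints when it is evaluated, then behaves exactly like i--
size_t next(ref size_t i)
{
    write("eval(", i, ") ");
    return i--;
}

void main()
{
    size_t i = 5;
    writeln(next(i), next(i), next(i));
    // output: eval(5) eval(4) eval(3) 543
}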


Re: AST Macros?

2012-06-05 Thread foobar

On Tuesday, 5 June 2012 at 07:08:19 UTC, Don Clugston wrote:

On 04/06/12 20:46, Jacob Carlborg wrote:

On 2012-06-04 10:03, Don Clugston wrote:

AST macros were discussed informally on the day after the conference,
and it quickly became clear that the proposed ones were nowhere near
powerful enough. Since that time nobody has come up with another
proposal, as far as I know.


I think others have suggested doing something similar to Nemerle,
Scala or Nimrod.


Yes but only in very vague terms -- not in any more words than that.
When I look at the Nimrod docs, it basically seems to be nothing more
than "expose the compiler internal data structures". Which is extremely
easy to do but causes a heap of problems in the long term.


This argument was raised before. That "heap of problems" is as vague
as the proposed AST system(s). As far as I can tell, the heap of
problems is mainly about making it harder to make internal breaking
changes, since the compiler is no longer a black box.


Now, I'd argue that a stable API for those compiler internals is
needed anyway. Besides the obvious benefit of a more modular design
that better encapsulates the different layers of the compilation
process, it allows us to implement the compiler as a set of libraries,
which benefits the tool ecosystem: IDEs, text editors, lint tools,
etc. -- tools which could reuse subsets of these libraries (e.g. think
of Clang's design and how it allowed for the vim auto-complete plugin).


Even _without_ AST macros I think this is a worthy goal to pursue;
AST macros simply make the outcome that much sweeter.


Re: meta namespace aka Issue 3702

2012-06-05 Thread kenji hara
2012/6/5 Denis Shelomovskij :
> Is anyone working on "Issue 3702 - Replace __traits and is(typeof()) with a
> 'magic namespace'"?
>
> Why isn't Shin Fujishiro's meta used yet? Is he against it? Where is he? Is
> he OK? I hope he is OK and just has no time for D.

Unfortunately, he has been out of touch since Nov 2010...

Bye.

Kenji Hara


Windows 2000 support

2012-06-05 Thread Denis Shelomovskij
It's time to make a decision. Original comment:
https://github.com/D-Programming-Language/druntime/pull/212#issuecomment-5827106


So what will we do with Windows 2000? Personally, I don't like this pull
request (druntime pull 212): it makes the already not-very-good-looking
druntime even uglier. I'd like a vote on this. Something like:


1. Officially announce that the minimum supported Windows version is 5.1
(aka XP) since v2.053, and either:
  1. Add a link like "Email @denis-sh to get D stuff with partial support
for Windows 2000", or
  2. Just call all Windows 2000 users dinosaurs.

2. [Improve a bit and] Merge this pull and officially announce that
Windows 2000 is partially supported.

3. Maniacally add full Windows 2000 support.

4. Leave Issue 6024 open forever.



And from my next comment 
https://github.com/D-Programming-Language/druntime/pull/212#issuecomment-5827146:

Oh, it's been a few days more than a year now that Windows 2000 has been silently unsupported!

Links:
* http://d.puremagic.com/issues/show_bug.cgi?id=6024


--
Денис В. Шеломовский
Denis V. Shelomovskij


meta namespace aka Issue 3702

2012-06-05 Thread Denis Shelomovskij
Is anyone working on "Issue 3702 - Replace __traits and is(typeof()) 
with a 'magic namespace'"?


Why isn't Shin Fujishiro's meta used yet? Is he against it? Where is he?
Is he OK? I hope he is OK and just has no time for D.


And please add links to your own meta implementations to Issue 3702.

Links:
* http://d.puremagic.com/issues/show_bug.cgi?id=3702

--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: AST Macros?

2012-06-05 Thread Don Clugston

On 04/06/12 20:46, Jacob Carlborg wrote:

On 2012-06-04 10:03, Don Clugston wrote:


AST macros were discussed informally on the day after the conference,
and it quickly became clear that the proposed ones were nowhere near
powerful enough. Since that time nobody has come up with another
proposal, as far as I know.


I think others have suggested doing something similar to Nemerle,
Scala or Nimrod.



Yes but only in very vague terms -- not in any more words than that. 
When I look at the Nimrod docs, it basically seems to be nothing more 
than "expose the compiler internal data structures". Which is extremely 
easy to do but causes a heap of problems in the long term.





Re: runtime hook for Crash on Error

2012-06-05 Thread Jonathan M Davis
On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
> On 04/06/12 21:29, Steven Schveighoffer wrote:
> > On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston  wrote:
> >> 1. There exist cases where you cannot know why the assert failed.
> >> 2. Therefore you never know why an assert failed.
> >> 3. Therefore it is not safe to unwind the stack from a nothrow function.
> >> 
> >> Spot the fallacies.
> >> 
> >> The fallacy in moving from 2 to 3 is more serious than the one from 1
> >> to 2: this argument is not in any way dependent on the assert occurring
> >> in a nothrow function. Rather, it's an argument for not having
> >> AssertError at all.
> > 
> > I'm not sure that is the issue here at all. What I see is that the
> > unwinding of the stack is optional, based on the assumption that there's
> > no "right" answer.
> > 
> > However, there is an underlying driver for not unwinding the stack --
> > nothrow. If nothrow results in the compiler optimizing out whatever
> > hooks a function needs to properly unwind itself (my limited
> > understanding is that this helps performance), then there *is no
> > choice*, you can't properly unwind the stack.
> > 
> > -Steve
> 
> No, this whole issue started because the compiler currently does do
> unwinding whenever it can. And Walter claimed that's a bug, and it
> should be explicitly disabled.
> 
> It is, in my view, an absurd position. AFAIK not a single argument has
> been presented in favour of it. All arguments have been about "you
> should never unwind Errors".

It's quite clear that we cannot completely, correctly unwind the stack in the 
face of Errors. As such, no one should be relying on stack unwinding when an 
Error is thrown. The implementation may manage it in some cases, but it's 
going to be unreliable in the general case regardless of how desirable it may 
or may not be.

The question is whether it's better to skip stack unwinding entirely when an 
Error is thrown. There are definitely cases where that would be better, since 
running cleanup code could just make things worse, corrupting even more stuff 
(including files and the like which may persist past the termination of the 
program). On the other hand, there's a lot of cleanup code which would execute 
just fine when most Errors are thrown, and not running cleanup code causes its 
own set of problems. There's no way for the program to know which of the two 
situations that it's in when an Error is thrown. So, we have to pick one or 
the other.

I really don't know which is the better way to go. I'm very tempted to go with 
Walter on this one, since it would avoid making the worst case scenario worse, 
and if you have cleanup which _must_ be done, you're going to have to find a 
different way to handle it, because even perfect stack unwinding won't protect 
you from everything (e.g. power loss killing the computer). But arguably, the 
general case is cleaner if we do as much stack unwinding as we can.

Regardless, I think that there are a number of people in this thread who are 
mistaken in how recoverable they think Errors and/or segfaults are, and they 
seem to be the ones pushing the hardest for full stack unwinding on the theory 
that they could somehow ensure safe recovery and a clean shutdown when an 
Error occurs, which is almost never possible, and certainly isn't possible in 
the general case.

- Jonathan M Davis