Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 22:24:38 UTC, Simen Kjaeraas wrote:
I'm not convinced. unique, like const or immutable, is transitive. If foo is unique, then so is foo.bar.



That isn't true. Please read Microsoft's paper.


Re: DConf 2013 keynote

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 21:06:28 UTC, Walter Bright wrote:

On 5/11/2013 1:07 PM, Jeff Nowakowski wrote:
I can get by with a hammer and nails too, but if I was a professional roofer I'd be an idiot not to use a nail gun. That's the problem with all this focus on boilerplate. An IDE does so much more to make you productive in any language, especially one that has static types.


I didn't say an IDE was bad for D, I only said that if you need 
an IDE to generate boilerplate for you, then there's something 
wrong with the language.




I keep repeating myself, but this is true, unless the boilerplate is there for the very purpose of IDE integration.


Stupid, inexpressive languages are easier to write tools for; that is the very reason why Java has such great tooling.



IDEs have lots of other valuable uses.




Re: What is a "pull request"?

2013-05-11 Thread skeptical
In article , m.stras...@gmail.com says...
> 
> On Saturday, 11 May 2013 at 11:00:41 UTC, Mehrdad wrote:
> > Whoa, what the heck happened here...
> 
> Lack of moderation and surprisingly naive reaction of D community.

If you want "moderation" (aka censorship), perhaps C++ is a better programming 
language for you. Dickhead.


Re: What is a "pull request"?

2013-05-11 Thread skeptical
In article , wfunct...@hotmail.com says...
> 
> On Saturday, 11 May 2013 at 07:47:34 UTC, skeptical wrote:
> > In article , 
> > pub...@kyllingen.net says...
> >> 
> >> On Saturday, 11 May 2013 at 06:07:39 UTC, skeptical wrote:
> >> >
> >> >
> >> > What is a "pull request"? Thank you.
> >> 
> >> 
> >> https://help.github.com/articles/using-pull-requests
> >
> > I asked you, you tell me. I don't follow links anymore. I am 
> > beyond that. So why don't you shush (I'm known to say it worse) 
> > if "you nothing to add"? (Bitch).
> 
> Whoa, what the heck happened here...

Thanks for noticing. Now that I have your attention...


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"Jonathan M Davis"  wrote in message 
news:mailman.1222.1368325870.4724.digitalmar...@puremagic.com...
> The big problem is when you need to compile
> the compiler. You have a circular dependency due to the compiler depending 
> on
> itself, and have to break it somehow. As long as newer compilers can 
> compile
> older ones, you're fine, but that's bound to fall apart at some point 
> unless
> you freeze everything. But even bug fixes could make the old compiler not
> compile anymore, so unless the language and compiler (and anything they 
> depend
> on) is extremely stable, you risk not being able to compile older 
> compilers,
> and it's hard to guarantee that level of stability, especially if the 
> compiler
> is not restricted in what features it uses or in what it uses from the
> standard library.
>
> - Jonathan M Davis

My thought was that you ensure (for the foreseeable future) that all D 
versions of the compiler compile with the most recent C++ version of the 
compiler. 




Re: Mobipocket to EPUB

2013-05-11 Thread Walter Bright

On 5/11/2013 5:49 PM, Borden wrote:

Further, after I've done my changes, what's the procedure to get my changes
merged with GitHub?


Generate a "pull request". There's plenty of tutorials on it if you google the 
phrase.




Re: DConf 2013 keynote

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 23:59:44 Nick Sabalausky wrote:
> On Fri, 10 May 2013 19:04:31 -0400
> 
> "Jonathan M Davis"  wrote:
> > On Friday, May 10, 2013 14:31:00 H. S. Teoh wrote:
> > > As they say in information theory: it is the stuff that stands out,
> > > that is different from the rest, that carries the most information.
> > > The stuff that's pretty much repeated every single time conveys
> > > very little information.
> > 
> > This is an excellent way of looking at language design (and program
> > design for that matter).
> 
> Not to mention data compression ;)

LOL. Yes. That's pretty much what you have to look at in data compression by 
definition - that, and finding ways to make more of the differing data the same 
without losing too much information or quality (at least with lossy 
compression).

- Jonathan M Davis


Re: DConf 2013 keynote

2013-05-11 Thread Nick Sabalausky
On Fri, 10 May 2013 19:04:31 -0400
"Jonathan M Davis"  wrote:

> On Friday, May 10, 2013 14:31:00 H. S. Teoh wrote:
> > As they say in information theory: it is the stuff that stands out,
> > that is different from the rest, that carries the most information.
> > The stuff that's pretty much repeated every single time conveys
> > very little information.
> 
> This is an excellent way of looking at language design (and program
> design for that matter).
> 

Not to mention data compression ;)



Re: DConf 2013 keynote

2013-05-11 Thread Nick Sabalausky
On Fri, 10 May 2013 21:45:06 -0700
"H. S. Teoh"  wrote:
> 
> Yes, which is why I love D so much. All I need is a text editor and
> the compiler, and I can do everything. Even unittesting and coverage
> are all integrated. No need for external tools, no need to install a
> whole bunch of support software, all the essentials are bundled with
> the compiler. How much more compelling can it get?
> 

The nicest thing of all, IMO, about not strictly needing all that
support software is that basic things like
editing/navigating/opening/closing code are always and forever 100%
unobstructed by things like startup delays and keyboard input lag, which
have no business existing on the rocket-engined supercomputers we now
call "a PC".



Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 19:56:00 Walter Bright wrote:
> At least for dmd, we keep all the old binaries up and downloadable for that
> reason.

That helps considerably, though if the compiler is old enough, that won't work 
for Linux due to glibc changes and whatnot. I expect that my particular 
situation is quite abnormal, but I thought that it was worth raising the point 
that if your compiler has to compile itself, then changes to the language 
(and anything else the compiler depends on) can be that much more costly, so 
it may be worth minimizing what the compiler depends on (as Daniel is 
suggesting).

As we increase our stability, the likelihood of problems will be less, but 
we'll probably never eliminate them. Haskell's case is as bad as it is because 
they released a new standard for it and did it in a way that building the old 
one no longer necessarily works (and if it does, it tends to be a pain). It 
would be akin to dmd building itself when we went from D1 to D2, where the new 
compiler could only compile D1 when certain flags were used, and those flags 
were overly complicated to boot. So, it's much worse than simply going from one 
version of the compiler to the next.

- Jonathan M Davis


Re: DConf 2013 keynote

2013-05-11 Thread Nick Sabalausky
On Fri, 10 May 2013 18:59:23 -0700
"H. S. Teoh"  wrote:
> 
[...snip Java vs D samples...]
> Talk about signal-to-noise ratio.
> 
> And don't get me started on all those BlahBlahBlahClassWrapper's and
> BlahBlahBlahClassWrapperFactoryWrapper's. Ugh. And Integer vs. int,
> and other such atrocities. What, built-in atomic types are defective
> so we need to wrap them in classes now? Flawed language design,
> anybody?
> 
> I find D superior to Java in just about every possible way. Except
[..snip..]
> 
> > Sometimes C++ gives me hives; it's such an error-prone and
> > under-productive language for actual industry needs, which is
> > certainly why Google created Go.
> 
> Surprisingly enough, before I found D, I actually considered ditching
> C++ for C. I only stayed with C++ because it has certain niceties,
> like exceptions, (and no need to keep typing 'struct' everywhere on a
> type that's blatantly obviously a struct) that in C is a royal pain
> in the neck. C++ is just over-complex, and its complexity in
> different areas interact badly with each other, making it an utter
> nightmare to work with beyond trivial textbook examples. OO
> programming in C++ is so nasty, it's laughable -- if I wanted OO,
> Java would be far superior. 
> 

Yea. Somewhere close to 10 years ago, it was precisely the nightmarish
combination of C++ and Java that pushed me to do some language
searching which led me to (an early) D.

Learning Java taught me all the reasons to hate C++, but then Java also
went and threw away the *good* things about C/C++, too. As those
were the languages I needed to use the most, the constant "Which hell
do I want? Hell A or Hell B?" damn near soured me on programming in
general.

Then C# and D came along and made programmer life tolerable again ;)
I've since gotten tired of C# too, though. The limitations of its
generics, and MS's continued disinterest in fixing them, finally drove
me to ditch it forever. D by contrast has only gotten better with age.

> I found that C++ is only tolerable when I
> use it as "C with classes".

That's always been my strategy with C++. Originally because I didn't
know any of its fancier stuff, and now because I just don't want to
deal with any of its "frills".

Funny thing: I absolutely can't stand highly dynamic languages, period,
but after re-introducing myself to C/C++ on a project last year, I'm
understanding much better why so many game devs are so big on Lua.



Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Diggory

On Sunday, 12 May 2013 at 01:16:43 UTC, Simen Kjaeraas wrote:
I believe you're overthinking this. First, what is a global unique variable? A unique value will per definition have to be thread-local (if not, other threads have a reference to it, and it's not unique). Thus, when passed to a function, the value is inaccessible outside of that function until it returns. (I guess fibers might be an exception here?)


While in the function, that function can access a value both 
through the global variable which is supposed to be "unique" and 
through the lent parameter. This could cause problems because 
"unique" no longer means "unique", although it's difficult to see 
how serious that might be.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Walter Bright

On 5/11/2013 7:30 PM, Jonathan M Davis wrote:

But in theory, the way to solve the problem of your program not compiling with
the new compiler is to compile with the compiler it was developed with in the
first place, and then if you want to upgrade your code, you upgrade your code
and use it with the new compiler. The big problem is when you need to compile
the compiler. You have a circular dependency due to the compiler depending on
itself, and have to break it somehow. As long as newer compilers can compile
older ones, you're fine, but that's bound to fall apart at some point unless
you freeze everything. But even bug fixes could make the old compiler not
compile anymore, so unless the language and compiler (and anything they depend
on) is extremely stable, you risk not being able to compile older compilers,
and it's hard to guarantee that level of stability, especially if the compiler
is not restricted in what features it uses or in what it uses from the
standard library.


It isn't just compiling the older compiler, it is compiling it and verifying 
that it works.


At least for dmd, we keep all the old binaries up and downloadable for that 
reason.



Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 18:18:27 Walter Bright wrote:
> On 5/11/2013 6:09 PM, Jonathan M Davis wrote:
> > So, we might be better off restricting how much the compiler depends on -
> > or we may decide that the workaround is to simply build the last C++
> > version of the compiler and then move forward from there. But I think
> > that the issue should at least be raised.
> 
> Last month I tried compiling an older 15 line D utility, and 10 of those
> lines broke due to phobos changes.
> 
> I discussed this a bit with Andrei, and proposed that we keep around aliases
> for the old names, and put them inside a:
> 
> version (OldNames) {
>     alias newname oldname;
> }
> 
> or something like that.

Well, that particular problem should be less of an issue in the long run. We 
renamed a lot of stuff in an effort to make the naming more consistent, but we 
haven't been doing much of that for a while now. And fortunately, those 
changes are obvious and quick.

But in theory, the way to solve the problem of your program not compiling with 
the new compiler is to compile with the compiler it was developed with in the 
first place, and then if you want to upgrade your code, you upgrade your code 
and use it with the new compiler. The big problem is when you need to compile 
the compiler. You have a circular dependency due to the compiler depending on 
itself, and have to break it somehow. As long as newer compilers can compile 
older ones, you're fine, but that's bound to fall apart at some point unless 
you freeze everything. But even bug fixes could make the old compiler not 
compile anymore, so unless the language and compiler (and anything they depend 
on) is extremely stable, you risk not being able to compile older compilers, 
and it's hard to guarantee that level of stability, especially if the compiler 
is not restricted in what features it uses or in what it uses from the 
standard library.

- Jonathan M Davis


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Walter Bright

On 5/11/2013 6:09 PM, Jonathan M Davis wrote:

So, we might be better off restricting how much the compiler depends on - or we
may decide that the workaround is to simply build the last C++ version of the
compiler and then move forward from there. But I think that the issue should
at least be raised.


Last month I tried compiling an older 15 line D utility, and 10 of those lines 
broke due to phobos changes.


I discussed this a bit with Andrei, and proposed that we keep around aliases for 
the old names, and put them inside a:


   version (OldNames) {
       alias newname oldname;
   }

or something like that.
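As a concrete illustration (the renamed symbols below are only assumed examples; the post names none), the scheme might look like this, with old code built using -version=OldNames:

version (OldNames)
{
    // old-style alias syntax: alias <new name> <old name>;
    alias toLower tolower;
    alias toUpper toupper;
}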



Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Simen Kjaeraas

On 2013-05-12, 02:36, Diggory wrote:


Temporary copies are fine. Only those that escape the function are
problematic.


The problem I had with this is that it temporarily breaks the invariant  
on the calling code's "unique" variable. For the duration of the  
function it is no longer unique.


This is basically the same as with D's @pure. You may break purity as
much as you like, as long as no side effects are visible *outside* the
function.


One way to not break the invariant would be to imagine it as first  
moving the value into the function when it is called, and then moving it  
back out when it returns.


Obviously it may not be good to actually implement it this way, but all  
that is necessary is to make accessing the original variable before the  
function has returned, undefined. In most cases the only limit this  
imposes is not passing a "unique" variable for more than one argument to  
a function.


Hm. I didn't think of that. I believe passing the same unique value twice
should be fine, as long as both are lent. Obviously, in the case of
non-lent, passing it twice should be an error (probably not possible to
statically determine in all cases, so a runtime error occasionally, or
just go conservative and outlaw some valid uses).


The problem is when there is a global "unique" variable being passed to  
a function. Perhaps in this case the compiler could actually emulate the  
move operation in and out of the function by temporarily clearing the  
global for the duration of the call so that it can't then be accessed by  
the function, thus maintaining the unique invariant.


I believe you're overthinking this. First, what is a global unique variable?
A unique value will per definition have to be thread-local (if not, other
threads have a reference to it, and it's not unique). Thus, when passed to
a function, the value is inaccessible outside of that function until it
returns. (I guess fibers might be an exception here?)

--
Simen


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 17:51:24 Walter Bright wrote:
> On 5/11/2013 2:09 PM, Jonathan M Davis wrote:
> > I have to be able
> > to build old haskell code without updating it,
> 
> I guess this is the crux of the matter. Why can't you update the source?

Well, in this particular case, it has to do with work on my master's thesis, 
and I have the code in various stages of completion and need to be able to 
look at exactly what it was doing at each of those stages for writing the 
actual paper. Messing with the code risks changing what it does, and it wasn't 
necessarily in a great state anyway given that I'm basically dealing with 
snapshots of the code over time, and not all of the snapshots are necessarily 
fully functional.

In the normal case, I'd definitely want to update my code, but I still might 
need to get the old code working before doing that so that I can be sure of 
how it works before changing it. Obviously, things like solid unit testing 
help with that, but if you're dealing with code that hasn't been updated in a 
while, it's not necessarily a straightforward task to update it, especially 
when it's in a language that you're less familiar with. It's even worse if 
it's code written by someone else entirely, and you're just trying to get it 
working (which isn't my current situation, but that's often the case when 
building old code).

Ultimately, I don't know how much we need to care about situations where 
people need to compile an old version of the compiler, and all they have is 
the new compiler. Much as it's been causing me quite a bit of grief in Haskell, 
for the vast majority of people, it's not likely to come up. But I think that 
it at least needs to be brought up so that it can be considered when deciding 
what we're doing with regards to porting the front-end to D. I think that the 
main reason C++ avoids the problem is that it's so rarely updated (which 
causes a whole different set of problems). And while we obviously want to 
minimize breakage caused by changes to the library, language, or just due to 
bugs, they _are_ going to have an effect with regards to building older 
compilers if the compiler itself is affected by them.

So, we might be better off restricting how much the compiler depends on - or we 
may decide that the workaround is to simply build the last C++ version of the 
compiler and then move forward from there. But I think that the issue should 
at least be raised.

- Jonathan M Davis


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Walter Bright

On 5/11/2013 2:09 PM, Jonathan M Davis wrote:

I have to be able
to build old haskell code without updating it,


I guess this is the crux of the matter. Why can't you update the source?


Mobipocket to EPUB

2013-05-11 Thread Borden

Good evening, all,

I appreciate the work of the people who've made the D 
Documentation, and I've wanted to download an eBook of the 
language specification to read offline. I see that the downloads 
page has the spec in Mobipocket format, but, according to 
Wikipedia, it's been superseded by the more widely-adopted, and 
more open, EPUB format.


To this end, I've downloaded the website source code from GitHub 
and I've been fiddling with posix.mak and its supporting files to 
compile the language spec into EPUB. I've been able (sorta) to 
get it to work, but I want to co-ordinate my effort with anyone 
else who's doing the migration.


Further, after I've done my changes, what's the procedure to get 
my changes merged with GitHub?


With thanks,


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Jonathan M Davis
On Sunday, May 12, 2013 02:36:28 Diggory wrote:
> It's not just for the benefit of the compiler either - attributes
> help get across the intent of the code rather than just what it
> does and can be very powerful in ensuring correct code.

Yes, but the more you have, the more the programmer has to understand and keep 
track of. There's a cost in cognitive load. So, you want to add enough that you 
can do what you need to do and get some solid benefits from the attributes that 
you have, but at some point, you have to stop adding them, or the language 
becomes unwieldy. Whether it would ultimately be good or bad with regards to 
unique specifically is still an open question, but it means that any attributes 
you add really need to pull their weight, especially when we already have so 
many of them.

- Jonathan M Davis


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Diggory

On Saturday, 11 May 2013 at 22:24:38 UTC, Simen Kjaeraas wrote:
In which case we end up with duplicates of all functions to support both unique and non-unique parameters. Hence we'd need scope or lent.


OK, I see that we need a way to accept unique values without 
enforcing them.


Temporary copies are fine. Only those that escape the function are problematic.


The problem I had with this is that it temporarily breaks the 
invariant on the calling code's "unique" variable. For the 
duration of the function it is no longer unique.


One way to not break the invariant would be to imagine it as 
first moving the value into the function when it is called, and 
then moving it back out when it returns.


Obviously it may not be good to actually implement it this way, 
but all that is necessary is to make accessing the original 
variable before the function has returned, undefined. In most 
cases the only limit this imposes is not passing a "unique" 
variable for more than one argument to a function.


The problem is when there is a global "unique" variable being 
passed to a function. Perhaps in this case the compiler could 
actually emulate the move operation in and out of the function by 
temporarily clearing the global for the duration of the call so 
that it can't then be accessed by the function, thus maintaining 
the unique invariant.
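A minimal sketch of what that emulated move-in/move-out lowering could look like, written in plain current D (unique itself is hypothetical, and the rewrite is only an illustration of the idea, not a proposed implementation):

class A { /* ... */ }

A g;                    // imagine this were declared `unique A g;`

void consume(A a) { /* uses a, does not store it anywhere */ }

void caller()
{
    // Conceptual rewrite of `consume(g);` that preserves the unique invariant:
    A tmp = g;          // move the value out of the global...
    g = null;           // ...so nothing can reach it through g during the call
    consume(tmp);       // tmp is now the only live reference
    g = tmp;            // move it back once the call returns
}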


Can you detail the process involved in assignment from one unique to another unique? Would the original unique be destroyed? Leaving only the 'copy' remaining?

Yep, it would just be standard "move" semantics, same as if you initialise a variable with an rvalue.


It's been brought up quite a few times (it might even have been brought up by Bartosz ages ago - I don't recall for sure). The problem is that it complicates the type system yet further. We'd have to find a way to introduce it without impacting the rest of the type system much or complicating it much further. And that's a tall order. Not to mention, we already have a lot of attributes, and every new one adds to the complexity and cognitive load of the language. So, we have to be very careful about what we do and don't add at this point.

- Jonathan M Davis


The main change to existing code would seem to be adding "scope" 
or "lent" to parameters where relevant so that unique values can 
be accepted. It only makes sense to use it for ref parameters, 
classes and slices, so it's not like it would need to be added 
everywhere. Interestingly that's yet another optimisation that 
can be done - if a slice is unique it can freely be appended to 
without some of the usual checks.


With regard to lots of attributes, I think a language which tries 
to be as much as D tries to be is going to end up with a lot of 
attributes in the end anyway. With a combination of better IDE 
and compiler support for inferring attributes, it shouldn't cause 
too much of a problem - attributes are generally very simple to 
understand, the hard part is always knowing which one to apply 
when writing library code.


It's not just for the benefit of the compiler either - attributes 
help get across the intent of the code rather than just what it 
does and can be very powerful in ensuring correct code.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 23:43:19 John Colvin wrote:
> Can't this be eased with readily available binaries and cross
> compilation?
> 
> E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2.
> The minimum needed to compile 2.8.2 is 2.7.5:
> 
> You can download a binary of 2.7.5 for any common system, cross
> compile 2.8.2 for your development system, voilà! If there are
> binaries available for your development system, then it becomes
> almost trivial.

Sure, but that assumes that you have access to a compatible binary. That's not 
always easy, and it can be particularly nasty in *nix. A binary built a few 
years ago stands a good chance of being completely incompatible with current 
systems even if all it depends on is glibc, let alone every other dependency 
that might have changed. It's even harder when your language is not one 
included by default in distros. For Windows, this probably wouldn't be an 
issue, but it could be a big one for *nix systems.

> Even if this wasn't possible for some reason, recursively
> building successive versions of the compiler is a completely
> automatable process. dmd+druntime+phobos compiles quickly enough
> that it's not a big problem.

Sure, assuming that you can get an old enough version of the compiler which 
you can actually compile. It's by no means an insurmountable problem, but you 
_do_ very much risk being in a situation where you literally have to compile 
the last C++ version of D's compiler and then compile every version of the 
compiler since then until you get to the one you want. And anyone who doesn't 
know that they could go to an older compiler which was in C++ (let alone which 
version it was) is going to have a lot of trouble.

I don't know how much we want to worry about this, but it's very much a real 
world problem when you don't have a binary for an older version of the 
compiler that you need, and the current compiler can't build it. It's been 
costing me a lot of time trying to sort that out in Haskell thanks to the 
shift from the 98 standard to 2010.

- Jonathan M Davis


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Iain Buclaw
On May 11, 2013 6:35 PM, "David Nadlinger"  wrote:
>
> On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
>>
>> That... doesn't sound very nice to me.  How much of phobos are we
>> realistically going to need?
>
>
> All of it? Well, not quite, but large parts at least.
>
> If we are going to stick to the C subset of the language, there is little
point in translating it to D in the first place.
>
> Of course, there will be some restrictions arising from the fact that the
code base needs to work with D versions from a year back or so. But to me
duplicating the whole standard library inside the compiler source seems
like maintenance hell.
>
> David

I don't think it would be anything like that at all. For instance, the BigInt
implementation is big, BIG. :)

What would be ported to the compiler may be influenced by BigInt, but
would be a limited subset of its functionality tweaked for the purpose of
use in the front end.

I am more concerned from GDC's perspective of things.  Especially when it
comes to building from hosts that may have phobos disabled (this is a
configure switch).

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: GSoC 2013 ideas page

2013-05-11 Thread Samuel Lampa

On 03/27/2013 04:00 AM, Andrei Alexandrescu wrote:
I just started http://wiki.dlang.org/GSOC_2013_Ideas. I'll add some 
tomorrow; we should have quite a few before the application deadline 
(March 29) in order to strengthen our application.


Please chime in!


I just got an idea ... too late I guess? Should I create a GSOC 2014 
page and add it there? :)


// Samuel


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Simen Kjaeraas

On 2013-05-12, 00:31, Manu wrote:


This is a very interesting idea.
It would also be a massive advantage when passing ownership between
threads, which is a long-standing problem that's not solves at all.
There currently exists no good way to say "I now give ownership to you",
which is what you basically always do when putting a job on a queue to be
picked up by some foreign thread.
Using shared is cumbersome, and feels very inelegant, casts everywhere, and once the casts appear, any safety is immediately lost.

Can you detail the process involved in assignment from one unique to
another unique? Would the original unique be destroyed? Leaving only the
'copy' remaining?


Not speaking for Diggory, but that's generally the idea, yes. In code:

class A { /* ... */ }

void foo(A a) { /* ... */ }

void fun() {
    unique A a = new A();
    unique A b = a;
    assert(a is null);
    foo(b);
    assert(b is null);
}

And with my suggestion for 'lent':

void bar(lent A a) { /* Assigning a (or anything reachable from a) to a  
global in here is verboten. */ }


void gun() {
    unique A a = new A();
    bar(a);
    assert(a !is null);
}

--
Simen


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 13:12:00 Diggory wrote:
> Just listened to this talk and it made me think about the various
> type qualifiers. Has there ever been any thought of introducing a
> new type qualifier/attribute, "unique"?

It's been brought up quite a few times (it might even have been brought up by 
Bartosz ages ago - I don't recall for sure). The problem is that it 
complicates the type system yet further. We'd have to find a way to introduce 
it without impacting the rest of the type system much or complicating it much 
further. And that's a tall order. Not to mention, we already have a lot of 
attributes, and every new one adds to the complexity and cognitive load of the 
language. So, we have to be very careful about what we do and don't add at 
this point.

- Jonathan M Davis


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Manu
On 11 May 2013 21:12, Diggory  wrote:

> Just listened to this talk and it made me think about the various type
> qualifiers. Has there ever been any thought of introducing a new type
> qualifier/attribute, "unique"? I know it already exists as a standard
> library class but I think there are several advantages to having it as a
> language feature:
>
> - "unique" objects can be moved into default, const, unique or immutable
> variables, but can never be copied.
>
> - "new"/constructors always returns a "unique" object, which can then be
> moved into any type, completely eliminating the need for different types of
> constructors.
>
> - Functions which create new objects can also return a "unique" object
> solving the problem mentioned in this talk of whether or not to return
> immutable values.
>
> - "assumeUnique" would actually return a "unique" type, but would be
> unnecessary in most cases.
>
> - Strings can be efficiently built in "unique" character arrays and then
> safely returned as immutable without a cast.
>
> - The compiler can actually provide strong guarantees about uniqueness
> compared to the rather weak guarantees possible in std.typecons.Unique.
>
> - It can be extremely useful for optimisation if the compiler can know
> that there are no other references to an object. There are countless times
> when this knowledge would make otherwise unsafe optimisations safe.
>

This is a very interesting idea.
It would also be a massive advantage when passing ownership between
threads, which is a long-standing problem that's not solved at all.
There currently exists no good way to say "I now give ownership to you",
which is what you basically always do when putting a job on a queue to be
picked up by some foreign thread.
Using shared is cumbersome, and feels very inelegant, casts everywhere, and
once the casts appear, any safety is immediately lost.

Can you detail the process involved in assignment from one unique to
another unique? Would the original unique be destroyed? Leaving only the
'copy' remaining?


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Paulo Pinto

On 11.05.2013 23:43, John Colvin wrote:

On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:

On Saturday, May 11, 2013 20:40:46 deadalnix wrote:

Except that now, it is a pain to migrate old Haskell stuff to newer Haskell
stuff if you missed several compiler releases.

You end up building recursively from the native version to the version you
want.

Yeah. And I'm stuck with the opposite problem at the moment. I have to
be able
to build old haskell code without updating it, but I don't have an older
version of ghc built currently, and getting a version old enough to
compile my
code has turned out to be a royal pain, because the old compiler won't
compile
with the new compiler. I don't even know if I'm going to be able to do
it.

If you're always moving forward, you're okay, but if you have to deal
with
older code, then you quickly run into trouble if the compiler is
written in an
up-to-date version of the language that it's compiling. At least at this
point, if you needed something like 2.059 for some reason, you can
just grab
2.059, compile it, and use it with your code. But if the compiler were
written
in D, and the version of D with 2.059 was not fully compatible with the
current version, then compiling 2.059 would become a nightmare.

The situation between a normal program and the compiler is quite
different.
With a normal program, if your code isn't going to work with the current
compiler due to language or library changes, then you just grab an older
version of the compiler and use that (possibly upgrading your code
later if
you intend to maintain it long term). But if it's the compiler that
you're
trying to compile, then you're screwed by any language or library
changes that
affect the compiler, because it could very well become impossible to
compile
older versions of the compiler.

Yes, keeping language and library changes to a minimum reduces the
problem,
but unless they're absolutely frozen, you risk problems. Even changes
with
high ROI (like making implicit fall-through on switch statements illegal)
could make building older compilers impossible.

So, whatever we do with porting dmd to D, we need to be very careful.
We don't
want to lock ourselves in so that we can't make changes to the
language or
libraries even when we really need to, but we don't want to make it too
difficult to build older versions of the compiler for people who have
to either.
At the extreme, we could end up in a situation where you have to grab the
oldest version of the compiler which was written in C++, and then
build each
newer version of the compiler in turn until you get to the one that
you want.

- Jonathan M Davis


Can't this be eased with readily available binaries and cross compilation?

E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2. The
minimum needed to compile 2.8.2 is 2.7.5:

You can download a binary of 2.7.5 for any common system, cross compile
2.8.2 for your development system, voilà! If there are binaries
available for your development system, then it becomes almost trivial.


Even if this wasn't possible for some reason, recursively building
successive versions of the compiler is a completely automatable process.
dmd+druntime+phobos compiles quickly enough that it's not a big problem.



I also don't understand the problem. This is how compilers get 
bootstrapped all the time.


You just use toolchain X to build toolchain X+1.

--
Paulo


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Simen Kjaeraas

On 2013-05-11, 23:40, Diggory wrote:



unique is interesting, and holds many promises. However, its effects may be wide-spanning and have many corner cases.

In addition to mutability, unique applies to shared/unshared - a unique value may safely be moved to another thread.

Pure functions whose parameters are all unique or value types will always return a unique result. (Note that this is similar to how pure function results are now implicitly castable to immutable, but unique is stricter)

For unique values not to lose unicity when passed to functions, there must be ways to specify that the function will not create new aliases to the passed value. scope might fit the bill here, otherwise something like lent must be added.


That's solved by the rule that "unique" values can only be moved not  
copied.


What is? The last part?


To pass a "unique" parameter by value to a function the original must be  
invalidated in the process. The only other way would be to pass by  
reference, in which case the function argument must also be declared  
"unique".


In which case we end up with duplicates of all functions to support both
unique and non-unique parameters. Hence we'd need scope or lent.


The rule about pure functions returning "unique" is not in general true  
- if it returns a class which has a pointer to itself, or a pointer to  
another class which has a pointer to itself then the return value is not  
unique. The return value must specifically be declared unique.


I'm not convinced. unique, like const or immutable, is transitive. If foo
is unique, then so is foo.bar.



The only problem I can see is with the "this" pointer:
- If we have unique and non-unique functions it means duplicating  
everything, or at least remembering to add the "unique" attribute.

- Unique would then be both a type modifier and a function attribute
- It's not immediately obvious what operations can be performed by a  
"unique" member function.
- It is not simply equivalent to marking the "this" parameter as unique  
because that would mean erasing the argument passed in for that  
parameter, ie. invalidating the object!


Look above again. With 'lent', the function guarantees not to create new,
escaping aliases. Thus, a unique value may be passed by ref (e.g. the
'this' pointer) without erasing the value.

The syntax would thus be:

class A {
    void forble() lent {
        globalValue = this; // Compile-time error.
    }
}



But I think that can also be solved satisfactorily:
- Make the unique-ness of a member function something which is  
implicitly determined by the compiler based on its content.


Won't work. The function body is not always available.


- Any function which only dereferences the "this" pointer can safely be  
marked "unique" internally, and therefore can be called on a unique  
variable. If it copies the "this" pointer (auto x = this) then it is not  
"unique".


Temporary copies are fine. Only those that escape the function are
problematic.

--
Simen


Re: DConf 2013 keynote

2013-05-11 Thread Diggory


I didn't say an IDE was bad for D, I only said that if you need 
an IDE to generate boilerplate for you, then there's something 
wrong with the language.


IDEs have lots of other valuable uses.


There is one case of generating boilerplate code which can hardly 
be avoided and would at least partially solve a problem that 
keeps coming up in one form or another:


For every new language feature that uses some form of attributes 
(ie. almost all of them) there is the problem of how 
automatically the attributes are applied, generally with these 
possibilities:


- The attribute is a purely internal concept used and deduced by 
the compiler. This has the problem that unless these attributes 
are in some way stored in the .di file, the compiler has no way 
to determine them when it cannot see the code. It also leaks 
implementation details into the interface.


- The attribute is explicitly defined but inferred in some cases 
by the compiler.
This has the problem that it's now not obvious whether the 
attribute can be inferred or not, there are more rules to know 
about when automatic deduction is done, and there will still be 
many cases where the attributes cannot be safely inferred without 
leaking implementation detail, but the programmer forgets to add 
them.


- The attribute is explicitly defined but is inferred when "auto" 
is present.
This has the problem that there's no good way to finely tune 
which attributes "auto" should infer, and no way to un-set 
attributes. When "auto" is used on methods for use by external 
code it is again leaking implementation detail.


None of these are very satisfactory. A good solution should make 
it clear to the programmer which attributes are applied, make it 
easy to apply all the attributes which can be inferred but also 
easy to then change them, and not change when the implementation 
changes.
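
For reference, a small sketch of the inference the compiler already performs for templated functions - which attributes get inferred, and for which kinds of functions, is exactly the open design question above:

// The compiler can see the body, so pure/nothrow/@safe are inferred
// without the author spelling them out.
T twice(T)(T x) { return x + x; }

void user() @safe pure nothrow
{
    auto y = twice(21);   // fine: twice!int is inferred @safe pure nothrow
}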


An IDE command which automatically infers all the attributes 
would seem to be the only way to solve this well. Unfortunately 
it doesn't exist yet... Anyway it would be worthwhile deciding on 
a consistent way to handle attributes as the number of them 
increase, and it would be worth making sure that whatever way is 
chosen is compatible with such a potential IDE feature.


Another option would be to add an attribute called "default" or 
something like that, and have the compiler issue a message if it 
finds a function with no attributes that tells the programmer 
what attributes the function COULD have so it's a reminder to 
either add them, or put "default" after it.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread John Colvin

On Saturday, 11 May 2013 at 21:09:57 UTC, Jonathan M Davis wrote:

On Saturday, May 11, 2013 20:40:46 deadalnix wrote:

Except that now, it is a pain to migrate old Haskell stuff to newer Haskell 
stuff if you missed several compiler releases.

You end up building recursively from the native version to the version you 
want.


Yeah. And I'm stuck with the opposite problem at the moment. I 
have to be able
to build old haskell code without updating it, but I don't have 
an older
version of ghc built currently, and getting a version old 
enough to compile my
code has turned out to be a royal pain, because the old 
compiler won't compile
with the new compiler. I don't even know if I'm going to be 
able to do it.


If you're always moving forward, you're okay, but if you have 
to deal with
older code, then you quickly run into trouble if the compiler 
is written in an
up-to-date version of the language that it's compiling. At 
least at this
point, if you needed something like 2.059 for some reason, you 
can just grab
2.059, compile it, and use it with your code. But if the 
compiler were written
in D, and the version of D with 2.059 was not fully compatible with the 
current version, then compiling 2.059 would become a nightmare.

The situation between a normal program and the compiler is 
quite different.
With a normal program, if your code isn't going to work with 
the current
compiler due to language or library changes, then you just grab 
an older
version of the compiler and use that (possibly upgrading your 
code later if
you intend to maintain it long term). But if it's the compiler 
that you're
trying to compile, then you're screwed by any language or 
library changes that
affect the compiler, because it could very well become impossible to compile 
older versions of the compiler.

Yes, keeping language and library changes to a minimum reduces 
the problem,
but unless they're absolutely frozen, you risk problems. Even 
changes with
high ROI (like making implicit fall-through on switch statements illegal) 
could make building older compilers impossible.

So, whatever we do with porting dmd to D, we need to be very 
careful. We don't
want to lock ourselves in so that we can't make changes to the 
language or
libraries even when we really need to, but we don't want to 
make it too
difficult to build older versions of the compiler for people 
who have to either.
At the extreme, we could end up in a situation where you have 
to grab the
oldest version of the compiler which was written in C++, and 
then build each
newer version of the compiler in turn until you get to the one 
that you want.


- Jonathan M Davis


Can't this be eased with readily available binaries and cross 
compilation?


E.g. We drop the C++ version in 2.7. You want DMD version 2.8.2. 
The minimum needed to compile 2.8.2 is 2.7.5:


You can download a binary of 2.7.5 for any common system, cross 
compile 2.8.2 for your development system, voilà! If there are 
binaries available for your development system, then it becomes 
almost trivial.



Even if this wasn't possible for some reason, recursively 
building successive versions of the compiler is a completely 
automatable process. dmd+druntime+phobos compiles quickly enough 
that it's not a big problem.


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Diggory


unique is interesting, and holds many promises. However, its effects may be wide-spanning and have many corner cases.

In addition to mutability, unique applies to shared/unshared - a unique value may safely be moved to another thread.

Pure functions whose parameters are all unique or value types will always return a unique result. (Note that this is similar to how pure function results are now implicitly castable to immutable, but unique is stricter)


For unique values not to lose unicity when passed to functions, there must be ways to specify that the function will not create new aliases to the passed value. scope might fit the bill here, otherwise something like lent must be added.


That's solved by the rule that "unique" values can only be moved 
not copied. To pass a "unique" parameter by value to a function 
the original must be invalidated in the process. The only other 
way would be to pass by reference, in which case the function 
argument must also be declared "unique".


The rule about pure functions returning "unique" is not in 
general true - if it returns a class which has a pointer to 
itself, or a pointer to another class which has a pointer to 
itself then the return value is not unique. The return value must 
specifically be declared unique.
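
A minimal sketch of the structure being described, in plain current D (unique itself remains hypothetical):

class Node {
    Node next;
}

// Pure and parameterless, yet n.next aliases n - the situation the post
// argues breaks the "pure implies unique" rule.
Node makeCycle() pure {
    auto n = new Node;
    n.next = n;
    return n;
}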


The only problem I can see is with the "this" pointer:
- If we have unique and non-unique functions it means duplicating 
everything, or at least remembering to add the "unique" attribute.
- Unique would then be both a type modifier and a function 
attribute
- It's not immediately obvious what operations can be performed 
by a "unique" member function.
- It is not simply equivalent to marking the "this" parameter as 
unique because that would mean erasing the argument passed in for 
that parameter, ie. invalidating the object!


But I think that can also be solved satisfactorily:
- Make the unique-ness of a member function something which is 
implicitly determined by the compiler based on its content.
- Any function which only dereferences the "this" pointer can 
safely be marked "unique" internally, and therefore can be called 
on a unique variable. If it copies the "this" pointer (auto x = 
this) then it is not "unique".
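
A sketch of the distinction being drawn, again in plain current D with unique left hypothetical:

class Counter {
    int n;

    // Only dereferences `this`; under the proposal it could be treated as
    // safe to call on a unique receiver.
    void bump() { ++n; }

    // Escapes `this` into a global, so the receiver could no longer be
    // considered unique afterwards.
    void register() { lastSeen = this; }
}

Counter lastSeen;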


Re: ARM targetting cross-toolchain with GDC

2013-05-11 Thread Timofei Bolshakov

On Thursday, 2 May 2013 at 17:28:47 UTC, Johannes Pfau wrote:

Am Thu, 02 May 2013 18:54:28 +0200
schrieb "Timofei Bolshakov" :


Thank you!
I will check that today. What to do about static asserts in 
thread.d?


Those should be fixed by the ucontext changes. The
static if(__traits( compiles, ucontext_t )) code path
will be used if we make ucontext_t available.


Many thanks for your answer!
I have been using the compiler for several weeks already - and today I 
noticed that I had not thanked you.

So, everything you said works, and works fine!


Re: DConf 2013 keynote

2013-05-11 Thread Walter Bright

On 5/11/2013 1:07 PM, Jeff Nowakowski wrote:

I can get by with a hammer and nails too, but if I was a professional roofer I'd
be an idiot not to use a nail gun. That's the problem with all this focus on
boilerplate. An IDE does so much more to make you productive in any language,
especially one that has static types.


I didn't say an IDE was bad for D, I only said that if you need an IDE to 
generate boilerplate for you, then there's something wrong with the language.


IDEs have lots of other valuable uses.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Jonathan M Davis
On Saturday, May 11, 2013 20:40:46 deadalnix wrote:
> Except that now, it is a pain to migrate old Haskell stuff to
> newer Haskell stuff if you missed several compiler releases.
> 
> You end up building recursively from the native version to the
> version you want.

Yeah. And I'm stuck with the opposite problem at the moment. I have to be able 
to build old haskell code without updating it, but I don't have an older 
version of ghc built currently, and getting a version old enough to compile my 
code has turned out to be a royal pain, because the old compiler won't compile 
with the new compiler. I don't even know if I'm going to be able to do it.

If you're always moving forward, you're okay, but if you have to deal with 
older code, then you quickly run into trouble if the compiler is written in an 
up-to-date version of the language that it's compiling. At least at this 
point, if you needed something like 2.059 for some reason, you can just grab 
2.059, compile it, and use it with your code. But if the compiler were written 
in D, and the version of D with 2.059 was not fully compatible with the 
current version, then compiling 2.059 would become a nightmare.

The situation between a normal program and the compiler is quite different. 
With a normal program, if your code isn't going to work with the current 
compiler due to language or library changes, then you just grab an older 
version of the compiler and use that (possibly upgrading your code later if 
you intend to maintain it long term). But if it's the compiler that you're 
trying to compile, then you're screwed by any language or library changes that 
affect the compiler, because it could very well become impossible to compile 
older versions of the compiler.

Yes, keeping language and library changes to a minimum reduces the problem, 
but unless they're absolutely frozen, you risk problems. Even changes with 
high ROI (like making implicit fall-through on switch statements illegal) 
could make building older compilers impossible.

So, whatever we do with porting dmd to D, we need to be very careful. We don't 
want to lock ourselves in so that we can't make changes to the language or 
libraries even when we really need to, but we don't want to make it too 
difficult to build older versions of the compiler for people who have to, 
either. 
At the extreme, we could end up in a situation where you have to grab the 
oldest version of the compiler which was written in C++, and then build each 
newer version of the compiler in turn until you get to the one that you want.

- Jonathan M Davis


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Dmitry Olshansky

On 11-May-2013 22:15, Daniel Murphy wrote:

If we aren't confident that we can write and maintain a large real-world
application in D just yet, we must pull the emergency brakes on the whole
DDMD effort, right now.

David


I'm confident in D, just not in phobos.  Even if phobos didn't exist, we'd
still be in better shape using D than C++.  What exactly are we going to
need from phobos?  sockets?  std.datetime? std.regex? std.container?



Sockets may come in handy one day. Caching compiler daemon etc.
std.container well ... mm ... eventually.


If we use them in the compiler, we effectively freeze them.  We can't use
the new modules, because the old toolchains don't have them.  We can't fix
old broken modules because the compiler depends on them.  If you add code to
work around old modules being gone in later versions, you pretty much end up
moving the source into the compiler after all.



I propose a different middle ground:

Define a minimal subset of phobos, compilable and usable separately.
Then full phobos will depend on it in turn (or rather contain it). 
Related to my recent thread on limiting inter-dependencies - we will 
have to face that problem while making a subset of phobos.


It has some operational costs but will limit the frozen surface.

--
Dmitry Olshansky


Re: DConf 2013 keynote

2013-05-11 Thread Jeff Nowakowski

On 05/11/2013 12:45 AM, H. S. Teoh wrote:


Yes, which is why I love D so much. All I need is a text editor and the
compiler, and I can do everything. Even unittesting and coverage are all
integrated. No need for external tools, no need to install a whole bunch
of support software, all the essentials are bundled with the compiler.
How much more compelling can it get?


I can get by with a hammer and nails too, but if I was a professional 
roofer I'd be an idiot not to use a nail gun. That's the problem with 
all this focus on boilerplate. An IDE does so much more to make you 
productive in any language, especially one that has static types.


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Simen Kjaeraas

On 2013-05-11, 13:12, Diggory wrote:

Just listened to this talk and it made me think about the various type  
qualifiers. Has there ever been any thought of introducing a new type  
qualifier/attribute, "unique"? I know it already exists as a standard  
library class but I think there are several advantages to having it as a  
language feature:


- "unique" objects can be moved into default, const, unique or immutable  
variables, but can never be copied.


- "new"/constructors always returns a "unique" object, which can then be  
moved into any type, completely eliminating the need for different types  
of constructors.


- Functions which create new objects can also return a "unique" object  
solving the problem mentioned in this talk of whether or not to return  
immutable values.


- "assumeUnique" would actually return a "unique" type, but would be  
unnecessary in most cases.


- Strings can be efficiently built in "unique" character arrays and then  
safely returned as immutable without a cast.


- The compiler can actually provide strong guarantees about uniqueness  
compared to the rather weak guarantees possible in std.typecons.Unique.


- It can be extremely useful for optimisation if the compiler can know  
that there are no other references to an object. There are countless  
times when this knowledge would make otherwise unsafe optimisations safe.
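
For reference on the assumeUnique point above: today's idiom is roughly the sketch below, where std.exception.assumeUnique performs the cast that the proposed qualifier would make unnecessary (the function is only an illustrative example):

import std.exception : assumeUnique;

string buildGreeting(string name) pure {
    char[] buf = new char[](7 + name.length);
    buf[0 .. 7] = "Hello, ";
    buf[7 .. $] = name;
    // We know buf has no other aliases, but the compiler cannot prove it,
    // so we assert it; a unique qualifier would make this checkable.
    return assumeUnique(buf);
}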


unique is interesting, and holds many promises. However, its effects may
be wide-spanning and have many corner cases.

In addition to mutability, unique applies to shared/unshared - a unique
value may safely be moved to another thread.

Pure functions whose parameters are all unique or value types will always
return a unique result. (Note that this is similar to how pure function
results are now implicitly castable to immutable, but unique is stricter)

For unique values not to lose unicity when passed to functions, there
must be ways to specify that the function will not create new aliases to
the passed value. scope might fit the bill here, otherwise something like
lent must be added.

--
Simen


Re: D pull request review process -- strawman formal definition, query for tools

2013-05-11 Thread Thomas Koch
Steven Schveighoffer wrote:
> OK, so now to implement this kind of thing, we need to have a
> collaboration tool that allows tracking the review through its workflow.
> No idea what the best tool or what a good tool would be.  We need
> something kind of like an issue tracker (can github issue tracker do
> this?).  I don't have a lot of experience with doing online project
> collaboration except with D.  I like trello, but it seems to have not
> caught on here.

The best tool for this is gerrit:

- http://en.wikipedia.org/wiki/Gerrit_%28software%29#Notable_users
- Presentation: http://skillsmatter.com/podcast/home/intro-git-gerrit
   I've downloaded and cut just the gerrit part:
   http://koch.ro/temp/gerrit_skillsmatter.mp4
- gerrit vs. github pull requests:
http://julien.danjou.info/blog/2013/rant-about-github-pull-request-workflow-implementation

Gerrit for D:

- Gerrit works on top of Git.
- Gerrit is designed to automatically run unit and style checks even before 
a human reviews the code.
- Gerrit tracks the whole history of a change request across all versions of 
this change. One can see diffs between versions of one particular change 
request.
- Gerrit supports topics to further specify what a change is about

Regards, Thomas Koch


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 18:15:22 UTC, Daniel Murphy wrote:
If we use them in the compiler, we effectively freeze them.  We 
can't use

the new modules, because the old toolchains don't have them.


Fair enough, but in such a case we could always add the parts of 
them we really need to the compiler source until the module is 
present in the last supported version. The critical difference of 
this scenario to your approach is that the extra maintenance 
burden is limited in time: The code is guaranteed to be removed 
again after (say) a year, and as Phobos stabilizes more and more, 
the total amount of such "compatibility" code will go down as 
well.



We can't fix
old broken modules because the compiler depends on them.


I don't see your point here:
  1) The same is true for any client code out there. The only 
difference is that we now directly experience what any D library 
writer out there has to go through anyway, if they want their 
code to work with multiple compiler releases.
  2) If a module is so broken that any "fix" would break all 
client code, we probably are not going to use it in the compiler 
anyway.



If you add code to
work around old modules being gone in later versions, you 
pretty much end up

moving the source into the compiler after all.


Yes, but how often do you think this will happen? At the current 
point, the barrier for such changes should be quite high anyway. 
The amount of D2 code in the wild is already non-negligible and 
growing steadily.


If we only need to be able to compile with a version from 6 
months ago, this
is not a problem.  A year and it's still workable.  But two 
years?  Three?

We can get something right here that gcc got so horribly wrong.


Care to elaborate on that?

David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread deadalnix
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:

On 5/11/13 1:10 PM, Daniel Murphy wrote:

Yes it's possible, but it seems like a really bad idea because:
- Phobos is huge
- Changes in phobos now have the potential to break the 
compiler


The flipside is:

- Phobos offers many amenities and opportunities for reuse
- Breakages in Phobos will be experienced early on a large 
system using them


I've talked about this with Simon Peyton-Jones who was 
unequivocal to assert that writing the Haskell compiler in 
Haskell has had enormous benefits in improving its quality.




Except that now, it is a pain to migrate old Haskell stuff to
newer Haskell stuff if you missed several compiler releases.


You end up building recursively from the native version to the
version you want.


We have an implementation in C++ that works; we have to ensure that
whatever port of DMD is made in D works with the C++
version.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Andrei Alexandrescu

On 5/11/13 2:15 PM, Daniel Murphy wrote:

"David Nadlinger"  wrote in message
news:wynfxitcgpiggwemr...@forum.dlang.org...

On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:

- Breakages in Phobos will be experienced early on a large system using
them

I've talked about this with Simon Peyton-Jones who was unequivocal to
assert that writing the Haskell compiler in Haskell has had enormous
benefits in improving its quality.


This.

If we aren't confident that we can write and maintain a large real-world
application in D just yet, we must pull the emergency brakes on the whole
DDDMD effort, right now.

David


I'm confident in D, just not in phobos.  Even if phobos didn't exist, we'd
still be in better shape using D than C++.  What exactly are we going to
need from phobos?  sockets?  std.datetime? std.regex? std.container?

If we use them in the compiler, we effectively freeze them.  We can't use
the new modules, because the old toolchains don't have them.  We can't fix
old broken modules because the compiler depends on them.  If you add code to
work around old modules being gone in later versions, you pretty much end up
moving the source into the compiler after all.

If we only need to be able to compile with a version from 6 months ago, this
is not a problem.  A year and it's still workable.  But two years?  Three?
We can get something right here that gcc got so horribly wrong.


But you're exactly enumerating the problems any D user would face when 
we make breaking changes to Phobos.


Andrei



Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"David Nadlinger"  wrote in message 
news:wynfxitcgpiggwemr...@forum.dlang.org...
> On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu wrote:
>> - Breakages in Phobos will be experienced early on a large system using 
>> them
>>
>> I've talked about this with Simon Peyton-Jones who was unequivocal to 
>> assert that writing the Haskell compiler in Haskell has had enormous 
>> benefits in improving its quality.
>
> This.
>
> If we aren't confident that we can write and maintain a large real-world 
> application in D just yet, we must pull the emergency brakes on the whole 
> DDDMD effort, right now.
>
> David

I'm confident in D, just not in phobos.  Even if phobos didn't exist, we'd 
still be in better shape using D than C++.  What exactly are we going to 
need from phobos?  sockets?  std.datetime? std.regex? std.container?

If we use them in the compiler, we effectively freeze them.  We can't use 
the new modules, because the old toolchains don't have them.  We can't fix 
old broken modules because the compiler depends on them.  If you add code to 
work around old modules being gone in later versions, you pretty much end up 
moving the source into the compiler after all.

If we only need to be able to compile with a version from 6 months ago, this 
is not a problem.  A year and it's still workable.  But two years?  Three? 
We can get something right here that gcc got so horribly wrong. 




Re: Issue 3789 and some interesting pull requests

2013-05-11 Thread Mr. Anonymous

On Wednesday, 8 May 2013 at 03:03:16 UTC, bearophile wrote:

Maybe Kenji and others have fixed one important bug:
http://d.puremagic.com/issues/show_bug.cgi?id=3789

- - - - - - - - - - -

On GitHub there are many open pull requests for dmd, and some 
of them look quite interesting (even if some of them probably 
need to be improved):


This allows supporting a[$-1, 2..$], which is useful for implementing a
good vector library in D:

https://github.com/D-Programming-Language/dmd/pull/443

To improve the type merging in array literals:
https://github.com/D-Programming-Language/dmd/pull/684

__ctfeWrite and __ctfeWriteln; I've been asking for better compile-time
printing for years:

https://github.com/D-Programming-Language/dmd/pull/692

Taking the address of a deprecated function should trigger a 
deprecated message, etc:

https://github.com/D-Programming-Language/dmd/pull/1064

This is for a small but handy change in D that allows writing
"new C().foo();" instead of "(new C()).foo();".

https://github.com/D-Programming-Language/dmd/pull/

Support for T() syntax for built-in types. This avoids using
an ugly cast() in some cases:

https://github.com/D-Programming-Language/dmd/pull/1356

Overloading template and non-template functions:
https://github.com/D-Programming-Language/dmd/pull/1409

With value range propagation we can disable some array bound 
tests:

https://github.com/D-Programming-Language/dmd/pull/1493

To improve the use of template functions in different modules:
https://github.com/D-Programming-Language/dmd/pull/1660

Recursive build for the compiler:
https://github.com/D-Programming-Language/dmd/pull/1861

To make D more flexible, allowing the idea of a library-defined 
dup:

https://github.com/D-Programming-Language/dmd/pull/1943

Bye,
bearophile


Isn't it better to merge these before DDMD?


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"David Nadlinger"  wrote in message 
news:bwkwvbjdykrnsdezp...@forum.dlang.org...
> On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
>> That... doesn't sound very nice to me.  How much of phobos are we
>> realistically going to need?
>
> All of it? Well, not quite, but large parts at least.
>
> If we are going to stick to the C subset of the language, there is little 
> point in translating it to D in the first place.
>

I disagree.  Phobos is great, but there are thousands of things in the 
language itself that make it much more pleasant and effective than C++.

> Of course, there will be some restrictions arising from the fact that the 
> code base needs to work with D versions from a year back or so. But to me 
> duplicating the whole standard library inside the compiler source seems 
> like maintenance hell.
>
> David

I agree.  But I was thinking much longer term compatibility, and a much 
smaller chunk of phobos. 




Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 17:48:27 UTC, David Nadlinger wrote:

[…] the whole DDDMD effort […]


Whoops, must be a Freudian slip, revealing how much I'd like to 
see the D compiler being written in idiomatic D. ;)


David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger
On Saturday, 11 May 2013 at 17:36:18 UTC, Andrei Alexandrescu 
wrote:
- Breakages in Phobos will be experienced early on a large 
system using them


I've talked about this with Simon Peyton-Jones who was 
unequivocal to assert that writing the Haskell compiler in 
Haskell has had enormous benefits in improving its quality.


This.

If we aren't confident that we can write and maintain a large 
real-world application in D just yet, we must pull the emergency 
brakes on the whole DDDMD effort, right now.


David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Andrei Alexandrescu

On 5/11/13 1:10 PM, Daniel Murphy wrote:

Yes it's possible, but it seems like a really bad idea because:
- Phobos is huge
- Changes in phobos now have the potential to break the compiler


The flipside is:

- Phobos offers many amenities and opportunities for reuse
- Breakages in Phobos will be experienced early on a large system using them

I've talked about this with Simon Peyton-Jones who was unequivocal to 
assert that writing the Haskell compiler in Haskell has had enormous 
benefits in improving its quality.



Andrei



Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 17:23:53 UTC, Daniel Murphy wrote:
That... doesn't sound very nice to me.  How much of phobos are 
we

realistically going to need?


All of it? Well, not quite, but large parts at least.

If we are going to stick to the C subset of the language, there 
is little point in translating it to D in the first place.


Of course, there will be some restrictions arising from the fact 
that the code base needs to work with D versions from a year back 
or so. But to me duplicating the whole standard library inside 
the compiler source seems like maintenance hell.


David


Re: D graph library

2013-05-11 Thread Joseph Rushton Wakeling
On 05/11/2013 07:08 PM, H. S. Teoh wrote:
> Well, maybe what we can do is to make a list of attributes that are
> needed by various graph algorithms, and see if there's a way to distill
> them into a small set of attributes without too many scattered attribute
> checks. For example, if we can categorize graphs into a small set of
> types (ala input range, forward range, etc.) plus a small number of
> optional attributes, that would be ideal, I think.

Sure, we can have a go at this.

> Are there actually algorithms that depend on having an .edges property?
> AFAIK, most iterate over nodes in some way.

I have written code that iterates over edges (see my "dregs" code on GitHub).
In the case of the problem I was addressing, it enabled the most space-efficient
storage of the data.

There's no reason why an algorithm should be required to assume the existence of
an edges() property, but for an explicit graph type I think having an edges()
property is generally expected (and is certainly implemented in all the examples
I've looked at).

> Hmm. Maybe something similar to std.range.SortedRange would be useful:
> 
>   struct BipartiteGraph(G) {
>   enum isBipartite = true;
>   G impl;
>   alias impl this;
>   }
> 
>   BipartiteGraph!G verifyIsBipartite(G)(G graph) {
>   // verifies bipartiteness of G and returns graph wrapped
>   // in a BipartiteGraph!G.
>   }
> 
>   auto createBipartiteGraph(T...)(T graphData) {
>   // return an instantiation of BipartiteGraph that's
>   // guaranteed to be bipartite by construction.
>   }
> 
>   auto someBipartiteAlgo(G)(BipartiteGraph!G graph) {
>   // freely assume bipartiteness of graph
>   }
> 
> This way we can make use of the type system to statically ensure
> bipartiteness, while still allowing runtime verifications of arbitrary
> graphs.

Yes, that could be nice.  Bear in mind that the BipartiteGraph type would have
to have not just a boolean claiming bipartiteness, but also checks to ensure
that added edges do not violate the bipartite property.

> Aha. I think we found the source of contention here. You're thinking
> about concrete graph types that can be used for general graph
> applications, whereas I'm thinking of Phobos-style generic graph
> algorithm *templates* that can process any graph-like type satisfying
> certain requirements.

Yes, that seems to be the main source of our differences of opinion.  I entirely
agree with you about the need for generic graph algorithm templates, but we do
need graph types as well, and most of my specifications were intended with those
in mind.

> In that light, I think it makes sense to provide a set of concrete graph
> types that people can use (e.g., an adjacency list type, an incidence
> list type, matrix representations maybe, to list a few commonly-used
> graph representations), but the algorithms themselves will be generic
> templates that can work with any concrete type as long as it has the
> attributes required by the algorithm. The concrete graph types (which
> may themselves be templates, e.g. to allow arbitary node types or edge
> labels) will of course satisfy (most of) these requirements by default,
> so they can be immediately used with the algorithms without further ado.
> At the same time, the user can decide to create their own graph types,
> and as long as the target algorithm's prerequisites are satisfied, it
> can be used with those custom types.

I think we've come to some broad agreement here ... :-)

> Actually, I'll be on vacation for a week and a half, with probably
> spotty internet connectivity, so no need to hurry. :)

So, have fun, and look forward to catching up with you when you get back :-)



Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"David Nadlinger"  wrote in message 
news:mwkwqttkbdpmzvyvi...@forum.dlang.org...
> On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
>> If you decide that all later versions of the compiler must compile with 
>> all
>> earlier versions of phobos, then those phobos modules are unable to 
>> change.
>
> In (the rare) case of breaking changes, we could always work around them 
> in the compiler source (depending on __VERSION__), rather than duplicating 
> everything up-front.
>
> I believe *this* is the nice middle ground.
>
> David

That... doesn't sound very nice to me.  How much of phobos are we 
realistically going to need? 




Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 17:10:51 UTC, Daniel Murphy wrote:
If you decide that all later versions of the compiler must 
compile with all
earlier versions of phobos, then those phobos modules are 
unable to change.


In (the rare) case of breaking changes, we could always work 
around them in the compiler source (depending on __VERSION__), 
rather than duplicating everything up-front.
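
Concretely, something along these lines -- the version number 2063 and the
intToString helper are placeholders for illustration only, not real
front-end code:

    static if (__VERSION__ >= 2063)
    {
        // newer host compiler: reuse the Phobos facility directly
        import std.conv : to;
        alias intToString = to;
    }
    else
    {
        // older host: carry a tiny local stand-in until support is dropped
        string intToString(T : string)(int value)
        {
            import std.conv : text;
            return text(value);
        }
    }
    // either way, the rest of the front end just calls intToString!string(n)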


I believe *this* is the nice middle ground.

David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"David Nadlinger"  wrote in message 
news:llovknbpvcnksinsn...@forum.dlang.org...
> On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:
>> "Iain Buclaw"  wrote in message
>> news:mailman.1201.1368284962.4724.digitalmar...@puremagic.com...
>>>
>>> Actually, the more I sit down and think about it, the more I question
>>> whether or not it is a good idea for the D D front end to have a 
>>> dependency
>>> on phobos.   Maybe I should stop thinking in general.  :)
>>>
>>
>> Yeah, the compiler can't depend on phobos.
>
> Why?
>
> If we keep a "must compile with several past versions" policy anyway, what 
> would make Phobos special?
>
> David

Yes it's possible, but it seems like a really bad idea because:
- Phobos is huge
- Changes in phobos now have the potential to break the compiler

If you decide that all later versions of the compiler must compile with all 
earlier versions of phobos, then those phobos modules are unable to change.

If you do it the other way and say old versions of the compiler must be able 
to compile the newer compilers and their versions of phobos, you've locked 
phobos to an old subset of D.  (And effectively made the compiler source 
base enormous)

The nice middle ground is you take the chunk of phobos you need, add it to 
the compiler source, and say 'this must always compile with earlier versions 
of the compiler'. 




Re: D graph library

2013-05-11 Thread H. S. Teoh
On Sat, May 11, 2013 at 03:18:16PM +0200, Joseph Rushton Wakeling wrote:
> On 05/10/2013 07:52 PM, H. S. Teoh wrote:
[...]
> > In fact, now that I think of it... most of the features you listed
> > can arguably be optional: only a few algorithms (none that I know
> > of, actually) require enumeration of edges, some algorithms may only
> > require an input range of nodes, each of which gives an input range
> > of edges, etc.. Would it make sense to define a set of graph
> > properties (e.g., via templates ala
> > std.range.is{Input,Forward,Bidirectional,...}Range, or hasLength,
> > hasSwappableElements, etc.), and have each algorithm require
> > whatever minimal subset of them is necessary to do the work? Because
> > it makes little sense to require, say, a graph that returns a total
> > count of nodes, if a given algorithm never needs to use that
> > information anyway. This way, a structure for which the total number
> > of nodes is expensive to compute can still be used with that
> > algorithm.
> 
> That's a fair point.  I don't know that I like the idea of defining
> all those different kinds of checks for different graph attributes,
> though.

Well, maybe what we can do is to make a list of attributes that are
needed by various graph algorithms, and see if there's a way to distill
them into a small set of attributes without too many scattered attribute
checks. For example, if we can categorize graphs into a small set of
types (ala input range, forward range, etc.) plus a small number of
optional attributes, that would be ideal, I think.
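
As a rough illustration of what such property templates might look like --
hasNodes/hasEdges are invented names here, not an existing Phobos API:

    import std.range : isInputRange, walkLength;

    // true if G exposes a .nodes property that is at least an input range
    template hasNodes(G)
    {
        static if (is(typeof(G.init.nodes) R))
            enum hasNodes = isInputRange!R;
        else
            enum hasNodes = false;
    }

    // true if G exposes an .edges property that is at least an input range
    template hasEdges(G)
    {
        static if (is(typeof(G.init.edges) R))
            enum hasEdges = isInputRange!R;
        else
            enum hasEdges = false;
    }

    // an algorithm then constrains on exactly what it uses, nothing more
    size_t nodeCount(G)(G g) if (hasNodes!G)
    {
        return walkLength(g.nodes);
    }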


> > Not if the graph is computed on-the-fly. E.g. chess analysis
> > applications in which the complete graph is infeasible to compute.
> 
> Well, it ought still to be possible to define a range -- albeit, not a
> random-access one -- that covers all the edges in that graph.  It's
> just that it'll take forever to finish.
> 
> Agree though that this is a case where an edges() property is probably
> not desirable.

Are there actually algorithms that depend on having an .edges property?
AFAIK, most iterate over nodes in some way. I can see *nodes* having an
.edges property, but I'm not sure about the utility of enumerating edges
over the entire graph. Or am I missing something obvious?


> > In that case, I wonder if it's even necessary to unify the two graph
> > types. Why not just have two separate types and have the type system
> > sort out compatibility with algorithms for us?
> 
> That might be a possible solution, but whether directed or undirected
> (or mixed), a graph will broadly have the same general public
> interface.

True.


> > Are there any algorithms that need to do something different
> > depending on whether the input graph is bipartite or not? Just
> > wondering if it's worth the trouble of introducing an additional
> > field (it may well be necessary -- I admit my knowledge of graph
> > algorithms is rather spotty).
> 
> Put it this way: sometimes, you might want to check if a graph _is_
> bipartite, that is, if its nodes can be divided into two disjoint sets
> A and B such that links in the graph are always between nodes in the
> different sets, never between nodes from the same set.  That could be
> a runtime test that you could apply to any graph.
> 
> Then again, you might deliberately want to construct an explicitly
> bipartite graph, and you might want to have checks and balances in
> place to make sure that you don't accidentally add nodes between nodes
> in the same set.

Hmm. Maybe something similar to std.range.SortedRange would be useful:

struct BipartiteGraph(G) {
enum isBipartite = true;
G impl;
alias impl this;
}

BipartiteGraph!G verifyIsBipartite(G)(G graph) {
// verifies bipartiteness of G and returns graph wrapped
// in a BipartiteGraph!G.
}

auto createBipartiteGraph(T...)(T graphData) {
// return an instantiation of BipartiteGraph that's
// guaranteed to be bipartite by construction.
}

auto someBipartiteAlgo(G)(BipartiteGraph!G graph) {
// freely assume bipartiteness of graph
}

This way we can make use of the type system to statically ensure
bipartiteness, while still allowing runtime verifications of arbitrary
graphs.
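
For concreteness, a self-contained toy version of that wrapper idea; the
SimpleGraph type, its field names and the 2-colouring check are illustrative
assumptions, not a proposed API:

    import std.exception : enforce;

    struct SimpleGraph
    {
        int n;            // nodes are labelled 0 .. n-1
        int[][] edges;    // undirected edges as [u, v] pairs
    }

    struct BipartiteGraph(G)
    {
        enum isBipartite = true;
        G impl;
        alias impl this;
    }

    // run-time check: 2-colour the nodes, rejecting any same-colour edge
    BipartiteGraph!G verifyIsBipartite(G)(G g)
    {
        auto colour = new int[](g.n);   // 0 = unseen, 1/2 = the two sides
        foreach (start; 0 .. g.n)
        {
            if (colour[start]) continue;
            colour[start] = 1;
            int[] stack = [start];
            while (stack.length)
            {
                auto u = stack[$ - 1];
                stack = stack[0 .. $ - 1];
                foreach (e; g.edges)
                {
                    if (e[0] != u && e[1] != u) continue;
                    auto v = (e[0] == u) ? e[1] : e[0];
                    if (colour[v] == 0)
                    {
                        colour[v] = 3 - colour[u];
                        stack ~= v;
                    }
                    else
                        enforce(colour[v] != colour[u], "graph is not bipartite");
                }
            }
        }
        return BipartiteGraph!G(g);
    }

    // an algorithm that statically requires the proof in the type
    auto someBipartiteAlgo(G)(BipartiteGraph!G g)
    {
        static assert(BipartiteGraph!G.isBipartite);
        return g.n;   // placeholder body
    }

    unittest
    {
        auto g = SimpleGraph(4, [[0, 1], [1, 2], [2, 3], [3, 0]]);  // 4-cycle
        assert(someBipartiteAlgo(verifyIsBipartite(g)) == 4);
    }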


> > Right. Which brings me back to my idea that perhaps these graph
> > properties should be optional, or perhaps we should have some kind
> > of hierarchy of graph types (ala input range, forward range,
> > bidirectional range, etc.), so that individual algorithms can choose
> > which features are mandatory and which are optional.
> > 
> > (I really dislike algorithms that are artificially restricted just
> > because the author decided that feature X is required, no matter if
> > the algorithm doesn't even use it.)
> 
> Fair point.  _Algorithms_ should require the minimal number of
> constraints.  Most of my suggestions 

Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 16:27:37 UTC, deadalnix wrote:

On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:

On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
wrote:
If we keep a "must compile with several past versions" 
policy anyway, what would make Phobos special?


David


It prevents the use of newer features of D in Phobos.


?!

It prevents the use of newer Phobos features in the compiler, 
but we would obviously use the Phobos version that comes with 
the host D compiler to compile the frontend, not the version 
shipping with the frontend.


Maybe I'm missing something obvious, but I really can't see 
the issue here.


David


No, that is what has been said: you have to fork Phobos and
ship your own with the compiler.


I still don't get what your point is.

To build any D application (which might be a D compiler or not), 
you need a D compiler on your host system. This D compiler will 
come with druntime, Phobos and any number of other libraries 
installed.


Now, if the application you are building using that host compiler 
is DMD, you will likely use that new DMD to build a (newer) 
version of druntime and Phobos later on. But this doesn't have 
anything to do with what libraries of the host system the 
application can or can't use.


No fork in sight anywhere.

David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 16:15:13 UTC, David Nadlinger wrote:

On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:
On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger 
wrote:
If we keep a "must compile with several past versions" policy 
anyway, what would make Phobos special?


David


It prevents the use of newer features of D in Phobos.


?!

It prevents the use of newer Phobos features in the compiler, 
but we would obviously use the Phobos version that comes with 
the host D compiler to compile the frontend, not the version 
shipping with the frontend.


Maybe I'm missing something obvious, but I really can't see the 
issue here.


David


No, that is what has been said: you have to fork Phobos and ship
your own with the compiler.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 16:08:02 UTC, deadalnix wrote:

On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:
If we keep a "must compile with several past versions" policy 
anyway, what would make Phobos special?


David


It prevents the use of newer features of D in Phobos.


?!

It prevents the use of newer Phobos features in the compiler, but 
we would obviously use the Phobos version that comes with the 
host D compiler to compile the frontend, not the version shipping 
with the frontend.


Maybe I'm missing something obvious, but I really can't see the 
issue here.


David


Re: DConf 2013 keynote

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 14:42:38 UTC, Iain Buclaw wrote:

On May 11, 2013 2:30 PM, "Andrei Alexandrescu" <
seewebsiteforem...@erdani.org> wrote:


On 5/11/13 2:49 AM, Daniel Murphy wrote:


"H. S. Teoh"  wrote in message
news:mailman.1188.1368237816.4724.digitalmar...@puremagic.com...



Excellent!!

What about GDC/LDC though? Or are we hoping that the GCC 
(LDC)
maintainers will be willing to accept a bootstrapping D 
compiler by the

time we're ready to go pure D?



The GDC/LDC maintainers are onboard.



Walter and I are also on board.

Andrei


If the flurry of activity from myself, David and Daniel isn't a 
clear

sign.  We are all on the same page.


We are indeed.

As far as LDC goes, upstream issues aren't a potential source of 
trouble, as we are not an official LLVM project anyway.


GDC is likely to be more of an issue in this regard, but I'll 
leave it to Iain to judge that.


David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 15:51:26 UTC, David Nadlinger wrote:

On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:

"Iain Buclaw"  wrote in message
news:mailman.1201.1368284962.4724.digitalmar...@puremagic.com...


Actually, the more I sit down and think about it, the more I 
question
whether or not it is a good idea for the D D front end to 
have a dependency

on phobos.   Maybe I should stop thinking in general.  :)



Yeah, the compiler can't depend on phobos.


Why?

If we keep a "must compile with several past versions" policy 
anyway, what would make Phobos special?


David


It prevents the use of newer features of D in Phobos.


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread David Nadlinger

On Saturday, 11 May 2013 at 15:16:29 UTC, Daniel Murphy wrote:

"Iain Buclaw"  wrote in message
news:mailman.1201.1368284962.4724.digitalmar...@puremagic.com...


Actually, the more I sit down and think about it, the more I 
question
whether or not it is a good idea for the D D front end to have 
a dependency

on phobos.   Maybe I should stop thinking in general.  :)



Yeah, the compiler can't depend on phobos.


Why?

If we keep a "must compile with several past versions" policy 
anyway, what would make Phobos special?


David


Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Daniel Murphy
"Iain Buclaw"  wrote in message 
news:mailman.1201.1368284962.4724.digitalmar...@puremagic.com...
>
> Actually, the more I sit down and think about it, the more I question
> whether or not it is a good idea for the D D front end to have a 
> dependency
> on phobos.   Maybe I should stop thinking in general.  :)
>

Yeah, the compiler can't depend on phobos.  But if we really need to, we can 
clone a chunk of phobos and add it to the compiler.  Just so long as there 
isn't a loop.  BigInt is a pretty good candidate. 
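
For context, a tiny example of the kind of arbitrary-precision arithmetic the
front end would want BigInt for; std.bigint is real, the constant-folding
scenario is only illustrative:

    import std.bigint;
    import std.stdio;

    void main()
    {
        // e.g. folding `ulong.max * ulong.max` without overflowing host types
        auto a = BigInt(ulong.max);
        writeln(a * a);   // 340282366920938463426481119284349108225
    }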




Re: Migrating D front end to D - post Dconf

2013-05-11 Thread Iain Buclaw
On May 5, 2013 2:36 PM, "Iain Buclaw"  wrote:
>
> Daniel and/or David,
>
> We should list down in writing the issues preventing DMD, GDC, and LDC
having a shared code base.  From what David has shown me, LDC will need the
most work for this, but I'll list down what I can remember.
>
> 1. Support extern(C++) classes so can have a split C++/D implementation
of eg: Expression and others.
>
> 2. Support representing integers and floats to a greater precision than
what the host can natively support. In D there's BigInt for integral types,
and there's a possibility of using std.numeric for floats.  For me,
painless conversion between eg: BigInt <-> GCC's double_int is a
requirement, but that is more of an after thought at this point in time.
>

Actually, the more I sit down and think about it, the more I question
whether or not it is a good idea for the D D front end to have a dependency
on phobos.   Maybe I should stop thinking in general.  :)

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 14:53:39 UTC, Ali Çehreli wrote:

On 05/11/2013 04:12 AM, Diggory wrote:

> Has there ever been any thought of introducing a new type
> qualifier/attribute, "unique"?

There have been discussions about this idea. Just two on the
D.learn newsgroup that mention an experimental UniqueMutable
idea:


  http://forum.dlang.org/thread/itr5o1$poi$1...@digitalmars.com

  
http://forum.dlang.org/thread/jd26ig$27qd$1...@digitalmars.com?page=2


I am sure there are others on this newsgroup as well.

Ali


Microsoft has a very good paper on the subject:
http://research.microsoft.com/pubs/170528/msr-tr-2012-79.pdf


But I don't think this should be implemented before deeper
issues are sorted out.


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Ali Çehreli

On 05/11/2013 04:12 AM, Diggory wrote:

> Has there ever been any thought of introducing a new type
> qualifier/attribute, "unique"?

There have been discussions about this idea. Just two on the D.learn
newsgroup that mention an experimental UniqueMutable idea:


  http://forum.dlang.org/thread/itr5o1$poi$1...@digitalmars.com

  http://forum.dlang.org/thread/jd26ig$27qd$1...@digitalmars.com?page=2

I am sure there are others on this newsgroup as well.

Ali



Re: What is a "pull request"?

2013-05-11 Thread H. S. Teoh
On Sat, May 11, 2013 at 01:00:40PM +0200, Mehrdad wrote:
> On Saturday, 11 May 2013 at 07:47:34 UTC, skeptical wrote:
> >In article ,
> >pub...@kyllingen.net says...
> >>
> >>On Saturday, 11 May 2013 at 06:07:39 UTC, skeptical wrote:
> >>>
> >>>
> >>> What is a "pull request"? Thank you.
> >>
> >>
> >>https://help.github.com/articles/using-pull-requests
> >
> >I asked you, you tell me. I don't follow links anymore. I am
> >beyond that. So why don't you shush (I'm known to say it worse) if
> >"you nothing to add"? (Bitch).
> 
> Whoa, what the heck happened here...

YHBT. YHL. HAND.

:-P


T

-- 
Why can't you just be a nonconformist like everyone else? -- YHL


Re: DConf 2013 keynote

2013-05-11 Thread Iain Buclaw
On May 11, 2013 2:30 PM, "Andrei Alexandrescu" <
seewebsiteforem...@erdani.org> wrote:
>
> On 5/11/13 2:49 AM, Daniel Murphy wrote:
>>
>> "H. S. Teoh"  wrote in message
>> news:mailman.1188.1368237816.4724.digitalmar...@puremagic.com...
>>>
>>>
>>> Excellent!!
>>>
>>> What about GDC/LDC though? Or are we hoping that the GCC (LDC)
>>> maintainers will be willing to accept a bootstrapping D compiler by the
>>> time we're ready to go pure D?
>>>
>>
>> The GDC/LDC maintainers are onboard.
>
>
> Walter and I are also on board.
>
> Andrei

If the flurry of activity from myself, David and Daniel isn't a clear
sign.  We are all on the same page.

Regards
-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: DConf 2013 keynote

2013-05-11 Thread Andrei Alexandrescu

On 5/11/13 2:49 AM, Daniel Murphy wrote:

"H. S. Teoh"  wrote in message
news:mailman.1188.1368237816.4724.digitalmar...@puremagic.com...


Excellent!!

What about GDC/LDC though? Or are we hoping that the GCC (LDC)
maintainers will be willing to accept a bootstrapping D compiler by the
time we're ready to go pure D?



The GDC/LDC maintainers are onboard.


Walter and I are also on board.

Andrei


Re: D graph library

2013-05-11 Thread Joseph Rushton Wakeling
On 05/10/2013 07:52 PM, H. S. Teoh wrote:
> I suppose we can make it an optional thing, such that algorithm A may
> ask for only an InputRange (and a graph with bidirectional range of
> nodes will still work) but algorithm B may ask for more, say a
> ForwardRange or RandomAccessRange. Each algorithm will require the
> minimum necessary to perform efficiently (though of course it can be
> special-cased to take advantage of additional features of a higher
> range, if that is available in a given instantiation).

Yes, algorithms should certainly ask for no more than they need.

> In fact, now that I think of it... most of the features you listed can
> arguably be optional: only a few algorithms (none that I know of,
> actually) require enumeration of edges, some algorithms may only require
> an input range of nodes, each of which gives an input range of edges,
> etc.. Would it make sense to define a set of graph properties (e.g., via
> templates ala std.range.is{Input,Forward,Bidirectional,...}Range, or
> hasLength, hasSwappableElements, etc.), and have each algorithm require
> whatever minimal subset of them is necessary to do the work? Because it
> makes little sense to require, say, a graph that returns a total count
> of nodes, if a given algorithm never needs to use that information
> anyway. This way, a structure for which the total number of nodes is
> expensive to compute can still be used with that algorithm.

That's a fair point.  I don't know that I like the idea of defining all those
different kinds of checks for different graph attributes, though.

> Not if the graph is computed on-the-fly. E.g. chess analysis
> applications in which the complete graph is infeasible to compute.

Well, it ought still to be possible to define a range -- albeit, not a
random-access one -- that covers all the edges in that graph.  It's just that
it'll take forever to finish.

Agree though that this is a case where an edges() property is probably not
desirable.

> In that case, I wonder if it's even necessary to unify the two graph
> types. Why not just have two separate types and have the type system
> sort out compatibility with algorithms for us?

That might be a possible solution, but whether directed or undirected (or
mixed), a graph will broadly have the same general public interface.

> Are there any algorithms that need to do something different depending
> on whether the input graph is bipartite or not? Just wondering if it's
> worth the trouble of introducing an additional field (it may well be
> necessary -- I admit my knowledge of graph algorithms is rather spotty).

Put it this way: sometimes, you might want to check if a graph _is_ bipartite,
that is, if its nodes can be divided into two disjoint sets A and B such that
links in the graph are always between nodes in the different sets, never between
nodes from the same set.  That could be a runtime test that you could apply to
any graph.

Then again, you might deliberately want to construct an explicitly bipartite
graph, and you might want to have checks and balances in place to make sure that
you don't accidentally add nodes between nodes in the same set.

> Right. Which brings me back to my idea that perhaps these graph
> properties should be optional, or perhaps we should have some kind of
> hierarchy of graph types (ala input range, forward range, bidirectional
> range, etc.), so that individual algorithms can choose which features
> are mandatory and which are optional.
> 
> (I really dislike algorithms that are artificially restricted just
> because the author decided that feature X is required, no matter if the
> algorithm doesn't even use it.)

Fair point.  _Algorithms_ should require the minimal number of constraints.
Most of my suggestions for range types etc. represent what I think is useful for
a general-purpose graph data type rather than what is absolutely necessary for
individual specialized applications.

Must dash again, so will catch up on your other points later ... ! :-)


Re: What is a "pull request"?

2013-05-11 Thread Dicebot

On Saturday, 11 May 2013 at 11:00:41 UTC, Mehrdad wrote:

Whoa, what the heck happened here...


Lack of moderation and surprisingly naive reaction of D community.


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Diggory
Just listened to this talk and it made me think about the various 
type qualifiers. Has there ever been any thought of introducing a 
new type qualifier/attribute, "unique"? I know it already exists 
as a standard library class but I think there are several 
advantages to having it as a language feature:


- "unique" objects can be moved into default, const, unique or 
immutable variables, but can never be copied.


- "new"/constructors always returns a "unique" object, which can 
then be moved into any type, completely eliminating the need for 
different types of constructors.


- Functions which create new objects can also return a "unique" 
object solving the problem mentioned in this talk of whether or 
not to return immutable values.


- "assumeUnique" would actually return a "unique" type, but would 
be unnecessary in most cases.


- Strings can be efficiently built in "unique" character arrays 
and then safely returned as immutable without a cast.


- The compiler can actually provide strong guarantees about 
uniqueness compared to the rather weak guarantees possible in 
std.typecons.Unique.


- It can be extremely useful for optimisation if the compiler can 
know that there are no other references to an object. There are 
countless times when this knowledge would make otherwise unsafe 
optimisations safe.






Re: What is a "pull request"?

2013-05-11 Thread Mehrdad

On Saturday, 11 May 2013 at 07:47:34 UTC, skeptical wrote:
In article , 
pub...@kyllingen.net says...


On Saturday, 11 May 2013 at 06:07:39 UTC, skeptical wrote:
>
>
> What is a "pull request"? Thank you.


https://help.github.com/articles/using-pull-requests


I asked you, you tell me. I don't follow links anymore. I am 
beyond that. So why don't you shush (I'm known to say it worse) 
if "you nothing to add"? (Bitch).


Whoa, what the heck happened here...


Re: What is a "pull request"?

2013-05-11 Thread Maxim Fomin
On Saturday, 11 May 2013 at 06:35:56 UTC, Lars T. Kyllingstad 
wrote:

On Saturday, 11 May 2013 at 06:07:39 UTC, skeptical wrote:



What is a "pull request"? Thank you.



https://help.github.com/articles/using-pull-requests


you have been trolled


Re: DLL crash inside removethreadtableentry - where's the source code for that?

2013-05-11 Thread Trey Brisbane

Yep, problem solved.

Thanks very much for your help! :)


Re: DLL crash inside removethreadtableentry - where's the source code for that?

2013-05-11 Thread Trey Brisbane

On Saturday, 11 May 2013 at 07:38:53 UTC, Walter Bright wrote:


I thought this was already fixed. What's the date/size on your 
snn.lib? The latest is:


02/25/2013  06:19 PM   573,952 snn.lib


In dmd.2.062.zip (the one I'm using):
574,464  2012-12-11  7:30 AM

In dmc.zip:
573,952  2013-02-26  11:19 AM  <-- the one I should be using?

In dmc856.zip (from the Digital Mars site):
574,464  2012-12-11  7:30 AM

Shouldn't these be in sync? :P
Anyway, thanks for the tip. I'll give it a shot and post back.


Re: DConf 2013 keynote

2013-05-11 Thread deadalnix

On Friday, 10 May 2013 at 23:29:33 UTC, H. S. Teoh wrote:
It turns out that this mysterious "stuck" state was caused by 
the stack
trace code -- but not in any of the usual ways. In order to 
produce the
trace, it uses fprintf to write info to the log, and fprintf in 
turn
calls malloc at various points to allocate the necessary 
buffers to do
that. Now, if for some reason free() segfaults (e.g., you pass 
in an
illegal pointer), then libc is still holding the internal 
malloc mutex
lock when the OS sends the SEGV to the process, so when the 
stack trace
handler then calls fprintf, which in turn calls malloc, it 
deadlocks.
Further SIGSEGV's won't help, since it only makes the deadlock 
worse.




This is the very reason why the NullPointerError handler builds a
fake stack frame and hijacks the EIP register in order to NOT do
that kind of stuff inside the signal handler.


This is very confusing and must be put into some runtime code and
never used directly by users.
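
A rough sketch of the constraint being described -- a handler that sticks to
async-signal-safe calls (write, _Exit) instead of fprintf/malloc. This is an
illustrative POSIX fragment, not the actual druntime code:

    import core.sys.posix.signal;
    import core.sys.posix.unistd : write;
    import core.stdc.stdlib : _Exit;

    extern(C) void onSegv(int sig) nothrow @nogc
    {
        // write(2) is async-signal-safe; fprintf/malloc are not
        static immutable msg = "SIGSEGV: skipping stack trace to avoid malloc deadlock\n";
        write(2, msg.ptr, msg.length);
        _Exit(1);   // _Exit is async-signal-safe as well
    }

    void installHandler()
    {
        sigaction_t sa;
        sa.sa_handler = &onSegv;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, null);
    }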


Re: What is a "pull request"?

2013-05-11 Thread skeptical
In article , dl.so...@gmx.net says...
> 
> Am 11.05.2013 09:50, schrieb skeptical:
> > I asked you, you tell me.
> 
> can i tell you too?
> 
> a "pull request" means "don't read fucking manuals ever"

don't swear at me! Make me read your fucking manuals, you meant, dare me. You 
want me to say "fuck you". That was many years ago grasshopper.


Re: DConf 2013 keynote

2013-05-11 Thread deadalnix

On Friday, 10 May 2013 at 19:55:58 UTC, sk wrote:
In any case, I totally agree that if a language *needs* an IDE 
in order to cope with the amount of required boilerplate, then 
something is clearly very, very wrong at a fundamental level.


Maybe this is true for expert or professional programmers. But
for people like me, who only use D occasionally, an IDE is a must.


IDE mainly helps me in reducing the amount of things I need to 
memorize or remember like API, building tool names, command 
syntaxes, etc. This is very important as my main profession is 
not programming.




Especially since we aren't very good at API consistency.


"no subject"

2013-05-11 Thread skeptical
Deep and darkness slumber, endless sleep...
 nothing moves inside my funeral suite. 

I feel the sun slip down, as hunger strikes, 
 waking like being born here comes the night!

All my senses awakened, by little demons,
 taste the human heartbeat... bittersweet. 

(bittersweet)



Re: new DIP39: Safe rvalue references: backwards compatible, safe against ref/nonref code evolution, compatible with UFCS and DIP38

2013-05-11 Thread deadalnix

On Saturday, 11 May 2013 at 02:24:02 UTC, Timothee Cour wrote:

Abstract

We propose to introduce rvalue references that are:
* safe: guarantees memory safety so that references will always point to
valid memory.
* backwards compatible: current valid D code will continue to work without
change. In addition, additional code becomes valid with call site rvalue
ref annotation.
* safe against ref/nonref code evolution: call site rvalue ref compulsory
annotation turns ref/nonref changes into compile errors instead of
silently changing code behavior.
* both const ref or ref can be used with rvalue refs (more flexible than C++)
* no call site ref annotation when input ref argument is already an lvalue
(different from C#), for backwards compatibility (and making it less
verbose)
* compatible with UFCS
* compatible with DIP38: can use the same inref/outref internal compiler
annotation for input references that can be returned by ref by a
function. But DIP38 is optional.

link: http://wiki.dlang.org/DIP39


OK, first a remark on the way the DIP is presented: you start right
away with examples, which makes it hard to understand the general big
picture.


Second, ^ is not unused in D, so the DIP introduces an ambiguity
here.


Making the temporary creation explicit is an idea that hasn't
been explored that much. I have to think more about it.
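
For readers who haven't followed the earlier threads, a minimal sketch of the
ref/non-ref evolution hazard the abstract's third bullet refers to, written in
current D with made-up functions (this shows the problem, not DIP39's proposed
syntax):

    void scaleByRef(ref double x, double f) { x *= f; }   // original API
    void scaleByVal(double x, double f)     { x *= f; }   // "evolved" API

    void main()
    {
        import std.stdio : writeln;
        double price = 10.0;
        scaleByRef(price, 0.5);
        writeln(price);   // 5: the mutation is visible to the caller
        scaleByVal(price, 0.5);
        writeln(price);   // still 5: the mutation is silently lost, no error
    }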




Re: I want to add a Phobos module with template mixins for common idioms.

2013-05-11 Thread skeptical
In article , generic...@gmail.com says...
> 
> On Wednesday, 8 May 2013 at 20:11:34 UTC, Idan Arye wrote:
>

Aryan?? White premisey your message? 

Wait, Ary lye-in-dyke won the indie 4500, is he gay? OMG, DYKES are the issue! 
How many dykes have driven race cars? 

Aside (hehe!): Why does Danica Patrick suck? Ok, ok, the "joke", is: because a 
woman can't drive! (It's a pretend joke if she has bred or maybe even if she 
has 
"married"). Else: women who can't drive open-wheeled "cars", don't really know 
what the f is up with anything. I haven't done it, but I know I would die doing 
it. I think she should stop, and I should sell software and me and her will 
have 
babies and die. 

I dunno, the woman makes the first move, they say (I think that's where I 
screwed up?!). 


Re: What is a "pull request"?

2013-05-11 Thread dennis luehring

Am 11.05.2013 09:50, schrieb skeptical:

I asked you, you tell me.


can i tell you too?

a "pull request" means "don't read fucking manuals ever"



Re: What is a "pull request"?

2013-05-11 Thread skeptical
In article , pub...@kyllingen.net says...
> 
> On Saturday, 11 May 2013 at 06:07:39 UTC, skeptical wrote:
> >
> >
> > What is a "pull request"? Thank you.
> 
> 
> https://help.github.com/articles/using-pull-requests

I asked you, you tell me. I don't follow links anymore. I am beyond that. So 
why 
don't you shush (I'm known to say it worse) if "you nothing to add"? (Bitch).


Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , 
jmdavisp...@gmx.com says...
> 
> On Friday, May 10, 2013 17:14:01 H. S. Teoh wrote:
> > On Fri, May 10, 2013 at 05:09:18PM -0700, Walter Bright wrote:
> > > On 5/10/2013 4:27 PM, H. S. Teoh wrote:
> > > >Seriously, D has so spoiled me I can't stand programming in another
> > > >language these days. :-P
> > > 
> > > Me too. Sometimes it makes it hard to work on the dmd front end!
> > 
> > Now, *that* is not a good thing at all! When are we going to start
> > moving towards bootstrapping D? Did any conclusions ever come of that
> > discussion some time ago about how this might impact GDC/LDC?
> 
> Daniel Murphy (yebblies) has an automated C++ to D converted for the 
> front-end 
> that he's been working on (which won't work on general C++ code but works on 
> the front-end's code), and he's been making pull requests to dmd to adjust 
> the 
> code so that it's more easily converted. So, once he's done with that, it'll 
> be trivial to have the same compiler in both C++ and D (with all of the 
> changes going in the C++ code), and we can maintain it that way until we're 
> ready to go pure D. And after that, we can start refactoring the D code and 
> take advantage of what D can do.
> 
> - Jonathan M Davis

What's your point?


Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , 
hst...@quickfur.ath.cx says...

> Right, and D is still in development

No, it isn't. I think it left that when Walter made it a Stallman project. 



Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , 
hst...@quickfur.ath.cx says...
> 
> On Sat, May 11, 2013 at 02:41:59AM +0200, Flamaros wrote:
> [...]
> > More I work with D, less I want to work with C++.
> 
> Yup. I think that applies to a lot of us here. :)
> 

Wankers.

> I find D superior to Java in just about every possible way. 

But you are not saying anything, because that is like sayint that you find gay 
sex superior to heterosexual sex in every possible way.

Admit it: you're a wanker.

> Surprisingly enough, before I found D, I actually considered ditching
> C++ for C. I only stayed with C++ because it has certain niceties, like
> exceptions, 

Explain that "nicety" or shut up? You wouldn't know an "exception", if you 
caught a venereal disease from it. (But you did, and want to spread it around?!)

> C++ is just over-complex, and its complexity in different areas
> interact badly with each other, making it an utter nightmare to work
> with beyond trivial textbook examples.

You were suggesting that D is better? Why you and not Walter or his "supporter" 
Andrei? Do they really need you to evangelize the awesome powers of healing of 
D?

>  OO programming in C++ is so
> nasty, it's laughable

OO? Show your guage ("bitch").

> -- if I wanted OO, Java would be far superior.

"surely". Kids get to post on the internet and be addressed as within all of 
the 
knowledgebase, but ya know what, that dick-wanking period is over. Not because 
I 
say it is, but because that is what it is. YOU, adolescent (not that there 
aren't exceptions to every "rule", but that that there usually aren't (save for 
society rules and government rules)).

> I
> found that C++ is only tolerable when I use it as "C with classes". Its
> OO features suck.

I hear that.

> 
> At my day job, 

I've tried that. I gave up years ago, but, well, I don't know.

> we 

That's surely it: I'm not a "we". Surely that's it. It is, isn't it. It's like 
that movie.



Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , flamaros.xav...@gmail.com 
says...

> We border probably unconscious when we use the C + + for certain 
> uses. 

It is no curiosity to me. But I'm not part of your "we".


Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , flamaros.xav...@gmail.com 
says...
> 
> On Saturday, 11 May 2013 at 00:09:21 UTC, Walter Bright wrote:
> > On 5/10/2013 4:27 PM, H. S. Teoh wrote:
> >> Seriously, D has so spoiled me I can't stand programming in 
> >> another
> >> language these days. :-P
> >
> > Me too. Sometimes it makes it hard to work on the dmd front end!
> 
> More I work with D, less I want to work with C++.
> Using D is just as funny as I found Java, but with a greater 
> potential and global control of what we do. A lot of things are 
> just as simple as they need and can be.
> 
> Sometimes C++ give me hives, it's so error prone and an 
> under-productive language for the actual industry needs, that 
> certainly why Google created the Go.

You are side-stepping the fact that D was not a consideration for Google. If 
C++ 
is a failure there, then isn't D a double failure?



Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , 
hst...@quickfur.ath.cx says...
> 
> On Fri, May 10, 2013 at 05:09:18PM -0700, Walter Bright wrote:
> > On 5/10/2013 4:27 PM, H. S. Teoh wrote:
> > >Seriously, D has so spoiled me I can't stand programming in another
> > >language these days. :-P
> > 
> > Me too. Sometimes it makes it hard to work on the dmd front end!
> 
> Now, *that* is not a good thing at all! 

He was announcing that he is "hard and ready". So be it. But people, please 
REQUIRE a RECENT STD test (like yesterday), and WAIT A YEAR before you have 
sex. 
OK? It's a selfish request: if you all die from inter-combobulating, who will I 
have to talk with?

(It's not too late, is it?)



Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , newshou...@digitalmars.com says...
> 
> On 5/10/2013 4:27 PM, H. S. Teoh wrote:
> > Seriously, D has so spoiled me I can't stand programming in another
> > language these days. :-P
> 
> Me too. Sometimes it makes it hard to work on the dmd front end!

One's own dick is nirvana? It has relevance: no disease if you don't already 
have one. I'm not saying "stick your dick into the ocean (of communicable 
disease (and aside, did y'all know that 1 out of 4 people have herpes, a 
disease 
that cannot be cured? And that is probably optimistic: cuz if you're relying on 
statistics, you're probably in a statistical group where herpes is guaranteed 
and other diseases are too: HIV, etc.)).

You too, huh? Sounds like a study is appropriate for "your" group. 


Re: DConf 2013 keynote

2013-05-11 Thread skeptical
In article , newshou...@digitalmars.com says...
> 
> On 5/10/2013 2:31 PM, H. S. Teoh wrote:
> > Note how much boilerplate is necessary to make the code work
> > *correctly*.
> 
> It's worse than that. Experience shows that this rat's nest style of code 
> often 
> is incorrect because it is both complex and never tested. While D doesn't 
> make 
> it more testable, at least it makes it simple, and hence more likely to be 
> correct.

I quip shit like that in other NGs too. The thing is, I haven't given up: I 
don't try to pickup the immature with expressions like "experience shows.. 
blah, 
blah". You seem to have either conceded, confessed, or given up. I'm "a 
management consultant", that is necessarily to say that if I blurt out stuff 
that I refuse to give to those who would want to pay me for that, and I know 
they will use it to their advantage against others when they don't have any 
right to others' ... well, who does? Who has RIGHT to anyone else's labors, 
efforts, time, knowledge...? 



You have the lingo now? I mean, "D solves rat's nest style of complex code"... 
thank god you were fired as an apprentice to being an airplane engineer!


Re: DLL crash inside removethreadtableentry - where's the source code for that?

2013-05-11 Thread Walter Bright

On 5/11/2013 12:10 AM, Trey Brisbane wrote:

On Sunday, 17 February 2013 at 11:32:02 UTC, Ben Davis wrote:

On 17/02/2013 07:56, Rainer Schuetze wrote:

_removethreadtableentry is a function in the DM C runtime library. It
has the bug that it tries to free a data record that has never been
allocated if the thread that loaded the DLL is terminated. This is the
entry at index 1.


That's a good start :)

Can it be fixed? Who would be able to do it?

Or is there some code I can put in my project that will successfully work
around the issue?

I get the impression the source is available for money. I found this page
http://www.digitalmars.com/download/freecompiler.html which mentions complete
library source under a link to the shop. I *could* buy it and see if I can fix
it myself, but it seems a bit risky.

By the way, thanks for Visual D :)


Sorry to necro this thread, but I'm currently experiencing the exact same issue.
Was this ever fixed? If not, was there a bug filed?


I thought this was already fixed. What's the date/size on your snn.lib? The 
latest is:


02/25/2013  06:19 PM   573,952 snn.lib


Re: DLL crash inside removethreadtableentry - where's the source code for that?

2013-05-11 Thread Trey Brisbane

On Sunday, 17 February 2013 at 11:32:02 UTC, Ben Davis wrote:

On 17/02/2013 07:56, Rainer Schuetze wrote:
_removethreadtableentry is a function in the DM C runtime 
library. It
has the bug that it tries to free a data record that has never 
been
allocated if the thread that loaded the DLL is terminated. 
This is the

entry at index 1.


That's a good start :)

Can it be fixed? Who would be able to do it?

Or is there some code I can put in my project that will 
successfully work around the issue?


I get the impression the source is available for money. I found 
this page http://www.digitalmars.com/download/freecompiler.html 
which mentions complete library source under a link to the 
shop. I *could* buy it and see if I can fix it myself, but it 
seems a bit risky.


By the way, thanks for Visual D :)


Sorry to necro this thread, but I'm currently experiencing the 
exact same issue. Was this ever fixed? If not, was there a bug 
filed?


Re: DConf 2013 Day 1 Talk 2 (Copy and Move Semantics)

2013-05-11 Thread Walter Bright

On 5/10/2013 11:49 PM, Ali Çehreli wrote:

As far as I know, I am one of the few who point out the glaring exception-unsafe
behavior of the assignment operator of C++.


Since Andrei did the design (copy-swap), I'm sure he's well aware of it!
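
For readers unfamiliar with the idiom being referred to, a minimal D
transcription of copy-swap assignment; the Buffer struct is a made-up
example, not Phobos or dmd code:

    import std.algorithm : swap;

    struct Buffer
    {
        int[] data;

        this(this) { data = data.dup; }   // postblit: deep copy

        // copy-swap assignment: rhs arrives by value (already a copy), so if
        // making that copy throws, the left-hand side is never touched; the
        // swap itself cannot throw.
        ref Buffer opAssign(Buffer rhs)
        {
            swap(data, rhs.data);
            return this;
        }
    }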