Re: TDPL in Russian

2010-11-17 Thread Bruno Medeiros

On 15/11/2010 12:27, Adrian Matoga wrote:

I hope you will be given a translation of the same quality; TDPL is
worth it.



What do you mean by this? You mean a Portuguese translation?

--
Bruno Medeiros - Software Engineer


Re: Utah Valley University teaches D (using TDPL)

2010-11-17 Thread Lutger Blijdestijn
bearophile wrote:

 Lutger Blijdestijn:
 
 Actually the unix convention is to give exit code 0 as an indicator of
 success, so there is feedback. It is very usable for scripting.
 
 But currently something like that is not present in the D unittest system.

rdmd --main -unittest somemodule.d 
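
For scripting purposes, a minimal sketch of what that command runs (module name and test are hypothetical): if an assert in a unittest fails, the generated test runner exits with a non-zero status, so a shell script can branch on it.

// somemodule.d - hypothetical module for the rdmd command above.
module somemodule;

int twice(int x) { return 2 * x; }

unittest
{
    assert(twice(2) == 4);    // passes: the test run exits with status 0
    // assert(twice(2) == 5); // would end the run with a non-zero status
}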



Re: [OT] DVCS

2010-11-17 Thread Alexey Khmara
add + commit is not a bad design at all. It is just a design choice,
and it is also about patch control: it allows a more logical
commit history and more precise control over the VCS. It lets you code
everything you want and place only part of your changes into a commit.
You can even stage part of a file - if, for example, you made two
logically different changes without a commit between them. Maybe a good
analogy is reading a file with one command versus an open-read-close
sequence - simplicity versus fine control.

This feature allows a very comfortable, free coding style - you write
what you want and understand now, and later you can divide your changes
into logically related sets. You are not constrained by limits imposed by
the VCS ("work on one feature, commit, work on another"). Instead the VCS
works in your style and rhythm. Usually you don't want to run commit
-a. Instead, when you run git status you see several files that you
do not want to commit right now. So you use the add + commit sequence,
maybe several times, to commit different changesets as distinct
entities with distinct comments.

I think it's a very good point of view - to track not file versions but
patchsets that represent something meaningful - new features, bugfixes,
etc. - and have the VCS follow your practices and rhythm - and end up with
an understandable version tree.


Re: DDT 0.4.0 released (formerly Mmrnmhrm)

2010-11-17 Thread Lutger Blijdestijn
Looking pretty good so far! 



Re: [OT] DVCS

2010-11-17 Thread Jérôme M. Berger
Alexey Khmara wrote:
 add + commit is not a bad design at all. It is just a design choice,
 and it is also about patch control: it allows a more logical
 commit history and more precise control over the VCS. It lets you code
 everything you want and place only part of your changes into a commit.
 You can even stage part of a file - if, for example, you made two
 logically different changes without a commit between them. Maybe a good
 analogy is reading a file with one command versus an open-read-close
 sequence - simplicity versus fine control.

 This feature allows a very comfortable, free coding style - you write
 what you want and understand now, and later you can divide your changes
 into logically related sets. You are not constrained by limits imposed by
 the VCS ("work on one feature, commit, work on another"). Instead the VCS
 works in your style and rhythm. Usually you don't want to run commit
 -a. Instead, when you run git status you see several files that you
 do not want to commit right now. So you use the add + commit sequence,
 maybe several times, to commit different changesets as distinct
 entities with distinct comments.

 I think it's a very good point of view - to track not file versions but
 patchsets that represent something meaningful - new features, bugfixes,
 etc. - and have the VCS follow your practices and rhythm - and end up with
 an understandable version tree.

This has nothing to do with Git's staging area. Mercurial also
tracks patchsets that represent something meaningful and has full
support for partial commits (with record or crecord) so you can
write what you want and understand now, and later [...] divide your
changes to logically related sets. On the other hand, you are not
forced into this model when you know you have only worked on a
single feature and want to commit it.

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: [OT] DVCS

2010-11-17 Thread klickverbot

On 11/17/10 10:32 PM, Jérôme M. Berger wrote:

[…] you are not
forced into this model when you know you have only worked on a
single feature and want to commit it.


You are not forced to use the staging area with Git either (although 
most of the developers I know do use it), it's just the default that is 
different.


If you want to save the extra characters per commit, just add an alias 
like »ci = commit -a« to your ~/.gitconfig, just like you might want to 
use »nudge = push --rev .« with Mercurial…


Re: [OT] DVCS

2010-11-17 Thread klickverbot

On 11/17/10 10:27 PM, Jérôme M. Berger wrote:

[…]It might be possible to change the configuration so
that this won't happen, but the simple fact that this happens with
the *default* config does not fill me with confidence regarding data
integrity and Git...


This is not exactly true, at least not for the Git on Windows installer, 
which presents you with the three possible choices for handling line 
endings.


Also, I am not quite sure if this deserves the label »data corruption«, 
because even if you have auto-conversion of line endings turned on and 
Git fails to auto-detect a file as binary, no history data is corrupted 
and you can fix the problem by just switching off auto-conversion 
(either globally or just for the file in question via gitattributes) – 
in contrast to actual history/database corruption.


Re: blip 0.5

2010-11-17 Thread Bill Baxter
Nice work!  Is it for D2 or D1?  Or both?

--bb

On Wed, Nov 17, 2010 at 2:42 PM, Fawzi Mohamed fa...@gmx.ch wrote:

 I am happy to announce blip 0.5

http://dsource.org/projects/blip

 why 0.5? because it works for me, but hopefully it will work for others
 too, and 1.0 will be a release with more contributors...

 Blip is a library that offers

  * N-dimensional arrays (blip.narray) that have a nice interface to lapack
 (that leverages the wrappers of baxissimo)
  * 2, 3 and 4D vectors, matrices and quaternions from the omg library of
 h3r3tic
  * multidimensional arrays, with nice to use wrappers to blas/lapack
  * a testing framework that can cope with both combinatorial and random
 testing:
   this means that you can define an environment (be it a struct or a class,
 maybe even templatized),
   then define generators that create one such environment (see
 blip.rtest.BasicGenerators),
   and then define testing functions that will receive newly generated
 environments and run the tests
  * serialization (blip.serialization) that supports both a JSON format, which
 can also be used for input files, and an
   efficient binary representation
  * MPI parallelization built on top of MPI, but abstracting it away (so
 that a pure TCP implementation is possible),
   for tightly coupled parallelization
  * a Distributed Objects framework that does RPC via proxies
 (blip.parallel.rpc)
  * a simple socket library that can be used to connect external programs,
 even if written in Fortran or C (for weak parallel coupling)
  * a coherent and efficient I/O abstraction

 But what might be most interesting is:

  * SMP parallelization (blip.parallel.smp): a NUMA-aware, very flexible
 framework

 a parallelization framework that can cope well with both thread-like and
 data-like parallelism, integrated with libev
 to offer efficient socket I/O and much more.

 An overview of blip is given in
http://dsource.org/projects/blip/wiki/BlipOverview
 The parallelization is discussed in
http://dsource.org/projects/blip/wiki/ParallelizationConcepts
 Finally, to install it, see
http://dsource.org/projects/blip/wiki/GettingStarted

 enjoy

 Fawzi



Re: blip 0.5

2010-11-17 Thread klickverbot

On 11/18/10 1:12 AM, Bill Baxter wrote:

Nice work!  Is it for D2 or D1?  Or both?
--bb


I hope you don't mind me answering, Fawzi:
Currently, it's D1 only.


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Jay Byrd
On Tue, 16 Nov 2010 23:55:42 -0700, Rainer Deyke wrote:

 On 11/16/2010 22:24, Andrei Alexandrescu wrote:
 I'm curious what the response to my example will be. So far I got one
 that doesn't even address it.
 
 I really don't see the problem with requiring that '{' goes on the same
 line as 'if'.

It *isn't* required. But if you don't put it there, *you get the wrong 
result*. I really don't understand why the problem with Andrei's example 
isn't blatantly obvious to everyone, but I would not want to use any 
product of anyone for whom it isn't.


 It's something you learn once and never forget because it
 is reinforced through constant exposure.  After a day or two, '{' on a
 separate line will just feel wrong and raise an immediate alarm in your
 mind.
 
 I would even argue that Go's syntax actually makes code /easier/ to read
 and write.  Let's say I see something like this in C/C++/D:
 
 if(blah())
 {
   x++;
 }
 
 This is not my usual style, so I have to stop and think.  It could be
 correct code written in another style, or it could be code that has been
 mangled during editing and now needs to be fixed.  In Go, I /know/ it's
 mangled code, and I'm far less likely to encounter it, so I can find
 mangled code much more easily.  A compiler error would be even better,
 but Go's syntax is already an improvement over C/C++/D.
 
 There are huge problems with Go that will probably keep me from ever
 using the language.  This isn't one of them.



Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Andrei Alexandrescu

On 11/17/10 12:00 AM, Jay Byrd wrote:

On Tue, 16 Nov 2010 23:55:42 -0700, Rainer Deyke wrote:


On 11/16/2010 22:24, Andrei Alexandrescu wrote:

I'm curious what the response to my example will be. So far I got one
that doesn't even address it.


I really don't see the problem with requiring that '{' goes on the same
line as 'if'.


It *isn't* required. But if you don't put it there, *you get the wrong
result*. I really don't understand why the problem with Andrei's example
isn't blatantly obvious to everyone, but I would not want to use any
product of anyone for whom it isn't.


Exactly, that comes as a big surprise to me, too, in that discussion. I 
can only hypothesize that some readers just glaze over the example 
thinking "ah, whatever... a snippet trying to support some naysay", and 
they reply armed with that presupposition.


Andrei


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread so

It *isn't* required. But if you don't put it there, *you get the wrong
result*.


You didn't mean that, did you?

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread so

The problem pointed out can readily be fixed by requiring statements to
have at least one token. go has much more severe problems than that. And
there are plenty of bugs and mistakes in D, harder to fix, that could be
deemed deal-killers by someone with an axe to grind. It's not an
intellectually honest approach.


Since you are trying to fix it, you agree it is a design error.
For the bugs and mistakes in D:
Bugs are bugs; what you say is just nonsense.
With mistakes I take it you mean mistakes in D's design, and you said there are  
many of them; you need to elaborate on that one a bit.


--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread so

It *isn't* required. But if you don't put it there, *you get the wrong
result*.


You didn't mean that, did you?


Oh, you did! And I agree.

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: std.date

2010-11-17 Thread Daniel Gibson

Kagamin schrieb:

Jonathan M Davis Wrote:

Honestly, leap seconds are complete stupidity with regards to computers. They 
just complicate things.


I think it's OK; computers work with nominal time and synchronize with the world 
as needed. You can hardly catch a bug with leap seconds.


As long as you're not Oracle and your enterprise clusterware crap reboots:
http://www.theregister.co.uk/2009/01/07/oracle_leap_second/


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Daniel Gibson

Rainer Deyke schrieb:

On 11/16/2010 22:24, Andrei Alexandrescu wrote:

I'm curious what the response to my example will be. So far I got one
that doesn't even address it.


I really don't see the problem with requiring that '{' goes on the same
line as 'if'.  It's something you learn once and never forget because it
is reinforced through constant exposure.  After a day or two, '{' on a
separate line will just feel wrong and raise an immediate alarm in your
mind.

I would even argue that Go's syntax actually makes code /easier/ to read
and write.  Let's say I see something like this in C/C++/D:

if(blah())
{
  x++;
}

This is not my usual style, so I have to stop and think.  


What about
if( (blah() || foo()) && (x < 42)
 && (baz.iDontKnowHowtoNameThisMethod() !is null)
 && someOtherThing.color = COLORS.Octarine )
{
  x++;
}


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Daniel Gibson

Daniel Gibson schrieb:

Rainer Deyke schrieb:

On 11/16/2010 22:24, Andrei Alexandrescu wrote:

I'm curious what the response to my example will be. So far I got one
that doesn't even address it.


I really don't see the problem with requiring that '{' goes on the same
line as 'if'.  It's something you learn once and never forget because it
is reinforced through constant exposure.  After a day or two, '{' on a
separate line will just feel wrong and raise an immediate alarm in your
mind.

I would even argue that Go's syntax actually makes code /easier/ to read
and write.  Let's say I see something like this in C/C++/D:

if(blah())
{
  x++;
}

This is not my usual style, so I have to stop and think.  


What about
if( (blah() || foo()) && (x < 42)
 && (baz.iDontKnowHowtoNameThisMethod() !is null)
 && someOtherThing.color = COLORS.Octarine )
{
  x++;
}


"someOtherThing.color = COLORS.Octarine" was supposed to be
"someOtherThing.color == COLORS.Octarine" of course.


Re: std.date

2010-11-17 Thread Kagamin
Daniel Gibson Wrote:

  I think it's OK; computers work with nominal time and synchronize with the 
  world as needed. You can hardly catch a bug with leap seconds.
 
 As long as you're not Oracle and your enterprise clusterware crap reboots:
 http://www.theregister.co.uk/2009/01/07/oracle_leap_second/

Synchronization can fail if the code asserts that number of seconds is not 
greater than 59 (Jonathan's lib does the same, I think). Is it the cause?


Re: std.date

2010-11-17 Thread Daniel Gibson

Kagamin schrieb:

Daniel Gibson Wrote:


I think it's OK; computers work with nominal time and synchronize with the world 
as needed. You can hardly catch a bug with leap seconds.

As long as you're not Oracle and your enterprise clusterware crap reboots:
http://www.theregister.co.uk/2009/01/07/oracle_leap_second/


Synchronization can fail if the code asserts that number of seconds is not 
greater than 59 (Jonathan's lib does the same, I think). Is it the cause?


How are leap seconds handled on a computer anyway? Does the clock really count 
to 60 seconds (instead of 59) before the next minute starts, or is the clock 
just slowed down a bit (like it's - IIRC - done when changing the time with NTP 
or such)?


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread ponce
 That one point you made would be a  
 deal-killer for me (not that I'm close to using Go or anything, but no  
 need to invest any more time on it after that).

That was a good point and it's a deal-killer for me too.
It's too similar to the JavaScript object literal syntax:
http://stackoverflow.com/questions/3641519/why-results-varies-upon-placement-of-curly-braces-in-javascript-code
 


Re: RFC, ensureHeaped

2010-11-17 Thread spir
On Wed, 17 Nov 2010 00:03:05 -0700
Rainer Deyke rain...@eldwood.com wrote:

 Making functions weakly pure by default means that temporarily adding a
 tiny debug printf to any function will require a shitload of cascading
 'impure' annotations.  I would consider that completely unacceptable.

Output in general, and programmer feedback in particular, should simply not be 
considered an effect. It is a transitory change to dedicated areas of memory -- not 
state. Isn't this the sense of output, after all? (One cannot read it back, 
thus it has no consequence on future processing.) The following is imo purely 
referentially transparent and effect-free (where effect means changing state); 
it always executes the same way, produces the same result, and never influences 
later processing other than via said result:

uint square(uint n) {
    uint sq = n*n;
    writefln("%s^2 = %s", n, sq);
    return sq;
}

Sure, the physical machine's state has changed, but it's not the same machine 
(state) as the one the program runs on (as the one the program can play with). 
There is some bizarre confusion.
[IMO, FP's notion of purity is at best improper for imperative programming (and 
at worst requires complicated hacks for using FP itself). We need to find our 
own way to make programs easier to understand and reason about.]


Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Error message comprehensibility

2010-11-17 Thread Russel Winder
I had accidentally written:

 immutable pi = 4.0 * reduce ! ( "a + b" ) ( 0 , outputData ) * delta ;

the error message received was:

Error: template instance std.algorithm.reduce!("a + 
b").reduce!(int,Map!(partialSum,Tuple!(int,int,double)[])) error instantiating

which isn't wrong, but neither is it that helpful.  Actually it isn't
helpful at all really.  Is there no way of saying in a clearer manner
"reduce initial value is an int but the type in the array is double so
they are not addition compatible."?

The correct line is of course:

 immutable pi = 4.0 * reduce ! ( "a + b" ) ( 0.0 , outputData ) * delta ;

but that isn't easily deducible from the error message presented.
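
For reference, a minimal self-contained reproduction of the mismatch (outputData is assumed here to be a range of doubles): with an int seed, reduce's accumulator is an int, and the double result of a + b cannot be stored back into it, hence the failed instantiation.

import std.algorithm : reduce;

void main() {
    double[] outputData = [0.25, 0.5, 0.25];
    // immutable bad  = 4.0 * reduce!("a + b")(0, outputData);  // fails to instantiate
    immutable good = 4.0 * reduce!("a + b")(0.0, outputData);   // double seed: compiles
}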

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: std.date

2010-11-17 Thread Kagamin
Daniel Gibson Wrote:

  Synchronization can fail if the code asserts that number of seconds is not 
  greater than 59 (Jonathan's lib does the same, I think). Is it the cause?
 
 How are leap seconds handled on a computer anyway? Does the clock really 
 count 
 to 60 seconds (instead of 59) before the next minute starts, or is the clock 
 just slowed down a bit (like it's - IIRC - done when changing the time with 
 NTP 
 or such)?

This is how it looked on linux:

bash-2.05b# date 
Thu Jan 1 00:59:58 CET 2009 
bash-2.05b# date 
Thu Jan 1 00:59:59 CET 2009 
bash-2.05b# date 
Thu Jan 1 00:59:60 CET 2009 
bash-2.05b# date 
Thu Jan 1 01:00:00 CET 2009 
bash-2.05b# date 
Thu Jan 1 01:00:01 CET 2009 
bash-2.05b# 


Why unix time is signed

2010-11-17 Thread Kagamin
From wiki:

There was originally some controversy over whether the Unix time_t should be 
signed or unsigned. If unsigned, its range in the future would be doubled, 
postponing the 32-bit overflow (by 68 years). However, it would then be 
incapable of representing times prior to 1970. Dennis Ritchie, when asked 
about this issue, said that he hadn't thought very deeply about it, but was of 
the opinion that the ability to represent all times within his lifetime would 
be nice. (Ritchie's birth, in 1941, is around Unix time −893,400,000.) 
The consensus is for time_t to be signed, and this is the usual practice. The 
software development platform for version 6 of the QNX operating system has an 
unsigned 32-bit time_t, though older releases used a signed type.

In some newer operating systems, time_t has been widened to 64 bits. In the 
negative direction, this goes back more than twenty times the age of the 
universe, and so suffices. In the positive direction, whether the 
approximately 293 billion representable years is truly sufficient depends on 
the ultimate fate of the universe, but it is certainly adequate for most 
practical purposes.

This is a good example: when one wants to represent big numbers, one doesn't use 
an unsigned type, one uses a signed 64-bit type.
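
As a quick back-of-the-envelope check of the figures quoted above (using plain D integer limits as stand-ins for a 32-bit and a 64-bit time_t):

import std.stdio;

void main() {
    enum double secondsPerYear = 365.25 * 24 * 3600;
    writefln("signed 32-bit time_t overflows after about %.0f years (the 2038 problem)",
             int.max / secondsPerYear);
    writefln("signed 64-bit time_t covers about %.0f billion years in each direction",
             long.max / secondsPerYear / 1e9);
}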


Re: RFC, ensureHeaped

2010-11-17 Thread spir
On Tue, 16 Nov 2010 23:28:37 -0800
Jonathan M Davis jmdavisp...@gmx.com wrote:

 It has already been argued that I/O should be exempt (at least for debugging 
 purposes), and I think that that would could be acceptable for weakly pure 
 functions. But it's certainly true that as it stands, dealing with I/O and 
 purity doesn't work very well. And since you have to try and mark as much as 
 possible pure (to make it weakly pure at least) if you want much hope of 
 being 
 able to have much of anything be strongly pure, it doesn't take long before 
 you 
 can't actually have I/O much of anywhere - even for debugging. It's 
 definitely a 
 problem.

(See also my previous post on this thread).
What we are missing is a clear notion of program state, distinct from physical 
machine. A non-referentially transparent function is one that reads from this 
state; between 2 runs of the function, this state may have been changed by the 
program itself, so that execution is influenced. Conversely, an effect-ive 
function is one that changes state; such a change may influence parts of the 
program that read it, including possibly itself.

This true program state is not the physical machine's one. Ideally, there would 
be in the core language's organisation a clear definition of what state is -- 
it could be called "state", or "world". An approximation in super simple 
imperative languages is the set of global variables. (Output does not write 
onto globals -- considering writing onto a video port or memory a state change is 
close to nonsense ;-) In pure OO, this is more or less the set of objects / 
object fields. (A func that does not affect any object field is effect-free.)
State is something the program can read (back); all the rest, such as writing 
to unreachable parts of memory like for output, cannot have any consequence on 
future processing (*). I'm still far from clear on this topic; as of now, I think 
only assignments to state, as so defined, should be considered effects.
This would lead to a far more practical notion of purity, I guess, especially for 
imperative and/or OO programming.


Denis

(*) Except possibly when using low level direct access to (pseudo) memory 
addresses. Even then, one cannot read plain output ports, or write to plain 
input ports, for instance.
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Debugging with gdb on Posix but setAssertHandler is deprecated

2010-11-17 Thread Jens Mueller
Hi,

I've written a small module for debugging on Posix systems.
It uses raise(SIGTRAP) and a custom errorHandlerType with
setAssertHandler. But setAssertHandler is deprecated.
Why is it deprecated? How should I do it instead?

I want to do it generally for Error and Exception. Don't know how yet.
How do you debug your programs? I've read that gdb has catch throw for
C++ to break when exceptions are thrown. I don't like changing gdb for
this.
My idea is that instead of throwing an Error/Exception I print it and
then raise SIGTRAP.
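
A rough sketch of that idea (the handler name is hypothetical, and the exact druntime import paths for the C declarations are my assumption): print the failure, then trap so gdb stops with the call stack still intact.

import core.stdc.signal : raise;          // C raise()
import core.sys.posix.signal : SIGTRAP;   // POSIX signal number
import std.stdio;

// Called instead of letting the Error/Exception propagate.
void breakIntoDebugger(string file, size_t line, string msg)
{
    stderr.writefln("%s(%s): %s", file, line, msg);
    raise(SIGTRAP);   // gdb breaks here, full stack available
}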

Jens


Re: RFC, ensureHeaped

2010-11-17 Thread bearophile
Steven Schveighoffer:

It makes me think that this is going to be extremely confusing for a while, 
because people are so used to pure being equated with a functional language, 
so when they see a function is pure but takes mutable data, they will be 
scratching their heads.

I agree, it's a (small) problem. Originally 'pure' in D was closer to the 
correct definition of purity. Then its semantics was changed and it was not 
replaced by @strongpure/@weakpure annotations, so there is now a bit of 
semantic mismatch.



Rainer Deyke:

 Making functions weakly pure by default means that temporarily adding a
 tiny debug printf to any function will require a shitload of cascading
 'impure' annotations.  I would consider that completely unacceptable.

To address this problem I have proposed a pureprintf() function (or purewriteln) 
that's a kind of alias of printf (or writeln); the only differences between 
pureprintf() and printf() are the name and that D sees the first one as strongly 
pure.

The pureprintf() is meant only for *unreliable* debug prints, not for the 
normal program console output.



spir:

Output in general, programmer feedback in particuliar, should simply not be 
considered effect.

You are very wrong.


 The following is imo purely referentially transparent and effect-free (where 
 effect
 means changing state); it always executes the same way, produces the same 
 result,
 and never influences later processes else as via said result:
 
 uint square(uint n) {
 uint sq = n*n;
 writefln("%s^2 = %s", n, sq);
 return sq;
 }

If we replace that function signature with this (assuming writefln is 
considered pure):

pure uint square(uint n) { ...


Then the following code will print one or two times according to how much 
optimization the compiler is performing:

void main() {
uint x = square(10) + square(10);
}

Generally in DMD if you compile with -O you will see only one print. If you 
replace the signature with this one:

pure double square(double n) { ...

You will see two prints. In general the compiler is able to replace two calls 
with the same arguments to a strongly pure function with a single call. DMD doesn't 
do it on floating point numbers to respect its no-FP-optimization rules, but 
LDC doesn't respect them if you use the 
-enable-unsafe-fp-math compiler switch, so if you use -enable-unsafe-fp-math 
you will probably see only one print.

Generally if the compiler sees code like:

uint x = foo(x) + bar(x);

And both foo and bar are strongly pure, the compiler must be free to call them 
in any order it likes, because they are side-effects-free.

So normal printing functions can't be allowed inside pure functions, because 
printing is a big side effect (even memory allocation is a side effect, because 
I may cast the dynamic array pointer to size_t and then use this number. Even 
exceptions are a side effect, but they probably cause less trouble than 
printing).

I have suggested the pureprintf() name so that it reminds the user that its printing 
will be unreliable (printing may appear or disappear according to the compiler used, 
optimization levels, day of the week).

Bye,
bearophile


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Steven Schveighoffer

On Wed, 17 Nov 2010 02:56:09 -0500, Jay Byrd jayb...@rebels.com wrote:


On Wed, 17 Nov 2010 00:58:28 -0500, Steven Schveighoffer wrote:


On Wed, 17 Nov 2010 00:24:50 -0500, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


On 11/16/10 9:21 PM, Steven Schveighoffer wrote:

On Wed, 17 Nov 2010 00:10:54 -0500, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


http://news.ycombinator.com/item?id=1912728

Andrei


I like go because every single feature go has is the best ever!

yawn...

-Steve


I'm curious what the response to my example will be. So far I got one
that doesn't even address it.


It's possible that you left your point as an exercise for the reader, so
it may be lost.

But it's not important.  It's unlikely that you will convince goers that
the language deficiencies are not worth the advantages.  Only time will
tell how popular/useful it will be.  That one point you made would be a
deal-killer for me (not that I'm close to using Go or anything, but no
need to invest any more time on it after that).



The problem pointed out can readily be fixed by requiring statements to
have at least one token.


Yes, but instead of that, they decided to point it out in the tutorial and  
documentation; it's on you to not make the mistake, not them.  Makes you  
feel like they said "yes, we noticed that you can have that problem, but  
we don't think it's a big deal, just remember to be careful."


Being someone who likes the brace-on-its-own-line style, I can't really  
see myself getting past that part.  And I don't really have any other  
complaints about Go, I've never used it.  This one issue needs to be fixed  
for Go to be considered a serious language.  Hopefully they realize this.


When a language has an error-prone issue, one that allows you to write  
valid code that is never desired, but looks completely correct, it results  
in buggy code, period.


An instance of this in D was the precedence of the logic operators versus the  
comparison operators: x | y == 5 was interpreted as x | (y == 5).  This  
was thankfully fixed, and it's a very similar issue to Go's mistake.  With  
all the experience people have with designing languages and code analysis  
by compilers, there is no excuse for something like this getting into a  
new language.
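
For reference, a small illustration of that pitfall (values are hypothetical); before the fix, the unparenthesized form bound the comparison more tightly than the bitwise operator:

void demo()
{
    int x = 1, y = 5;
    auto oldParse = x | (y == 5);   // how "x | y == 5" used to be read
    auto intended = (x | y) == 5;   // what such code almost always meant
}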


The best way to teach Go's creators how bad this is would be to make the  
change you suggest and see how their standard library fares.  This has  
been instrumental in having such changes made to D (for example, the logic  
operator fix found about 6 cases in phobos where the code was incorrect).



go has much more severe problems than that. And
there are plenty of bugs and mistakes in D, harder to fix, that could be
deemed deal-killers by someone with an axe to grind. It's not an
intellectually honest approach.


I agree that D has hard-to-fix mistakes.  The glaring one IMO is the lack of  
tail-const for classes.  It's a huge hole in the const feature set and I  
can see people being turned off by it.  Another is the lack of runtime  
introspection, or the conservative GC.


That doesn't mean D is worse or better than Go.  It's just that Go is a  
non-starter (at least for me) without fixing that mistake.  If they fixed  
it, would I start using Go?  Probably not, I have too many other things to  
do.  But if Go had been around when I was looking for a language to switch to  
from C++ 4 years ago, that one issue would have made the decision a no-brainer.



(cue retard's comment about D zealots not having any open mindedness)

-Steve


No zealots are open minded. That's why I'm a supporter of various things
but not a zealot about anything.


I consider myself not to be a zealot either (but definitely biased).  The  
comment was a friendly dig at retard, that's all ;)


-Steve


Re: RFC, ensureHeaped

2010-11-17 Thread Steven Schveighoffer
On Wed, 17 Nov 2010 02:03:05 -0500, Rainer Deyke rain...@eldwood.com  
wrote:



On 11/16/2010 21:53, Steven Schveighoffer wrote:

It makes me think that this is going to be extremely confusing for a
while, because people are so used to pure being equated with a
functional language, so when they see a function is pure but takes
mutable data, they will be scratching their heads.  It would be awesome
to make weakly pure the default, and it would also make it so we have to
change much less code.


Making functions weakly pure by default means that temporarily adding a
tiny debug printf to any function will require a shitload of cascading
'impure' annotations.  I would consider that completely unacceptable.


As would I.  But I think in the case of debugging, we can have "trusted"  
pure.  This can be achieved by using extern(C) pure runtime functions.
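
A sketch of that trick as I understand it (the pure on the prototype is a deliberate lie that the programmer takes responsibility for, which is why it only makes sense for temporary debug output):

// Re-declare the impure C function with a pure signature so that pure code
// accepts calls to it; the linker still binds to the real printf.
extern(C) pure int printf(const(char)* fmt, ...);

pure uint square(uint n)
{
    auto ignored = printf("square(%u)\n", n);   // debug print from pure code
    return n * n;
}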



(Unless, of course, purity is detected automatically without the use of
annotations at all.)


That would be ideal, but the issue is that the compiler may only have the  
signature and not the implementation.  D would need to change its  
compilation model for this to work (and escape analysis, and link-time  
optimizations, etc.)


-Steve


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread dsimcha
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
 I think in general containers don't work across multiple threads unless
 specifically designed to do that.

I'm making the assumption that you'd handle all the synchronization issues
yourself.  When you need to update the container, there's an obvious issue.  In
general I don't like the trend in D of building things into arrays, containers,
etc. such that they can't be shared across threads due to obscure implementation
details even when it looks safe.

(For arrays, I'm referring to the appending issue, which is problematic when I 
try
to append to an array from multiple threads, synchronizing manually.)

 dcollections containers would probably all fail if you tried to use them
  from multiple threads.
 That being said, I'm not a huge fan of reference counting period.
 Containers have no business being reference counted anyways, since their
 resource is memory, and should be handled by the GC.  This doesn't mean
 pieces of it shouldn't be reference counted or allocated via malloc or
 whatever, but the object itself's lifetime should be managed by the GC
 IMO.  Not coincidentally, this is how dcollections is set up.

I think reference counting is an elegant solution to a niche problem (namely,
memory management of large containers that won't have circular references when
memory is tight), but given all the baggage it creates, I don't think it should 
be
the default for any container.  I think we need to start thinking about custom
allocators, and allow reference counting but make GC the default.


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Steven Schveighoffer

On Wed, 17 Nov 2010 09:17:05 -0500, dsimcha dsim...@yahoo.com wrote:


== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article

I think in general containers don't work across multiple threads unless
specifically designed to do that.


I'm making the assumption that you'd handle all the synchronization  
issues
yourself.  When you need to update the container, there's an obvious  
issue.  In
general I don't like the trend in D of building things into arrays,  
containers,
etc. such that they can't be shared across threads due to obscure  
implementation

details even when it looks safe.


I think that we need a wrapper for containers that implements the shared  
methods required and manually locks things in order to use them.  Then you  
apply this wrapper to any container type, and it's now a shared container.


There are also certain types of containers that lend themselves to shared  
access.  For example, I can see a linked list where each node contains a  
lock being a useful type.
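
Something along these lines, perhaps (a rough sketch only; the wrapped container's insert/removeAny methods are placeholders rather than a particular std.container or dcollections interface):

import core.sync.mutex;

// Every forwarded operation takes the wrapper's lock first.
class Locked(Container)
{
    private Container payload;
    private Mutex mtx;

    this(Container c) { payload = c; mtx = new Mutex; }

    void insert(E)(E e)
    {
        mtx.lock(); scope(exit) mtx.unlock();
        payload.insert(e);
    }

    auto removeAny()
    {
        mtx.lock(); scope(exit) mtx.unlock();
        return payload.removeAny();
    }
}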


(For arrays, I'm referring to the appending issue, which is problematic  
when I try

to append to an array from multiple threads, synchronizing manually.)


I'm interested what you mean here, I tried to make sure cross-thread  
appending is possible.



dcollections containers would probably all fail if you tried to use them
 from multiple threads.
That being said, I'm not a huge fan of reference counting period.
Containers have no business being reference counted anyways, since their
resource is memory, and should be handled by the GC.  This doesn't mean
pieces of it shouldn't be reference counted or allocated via malloc or
whatever, but the object itself's lifetime should be managed by the GC
IMO.  Not coincidentally, this is how dcollections is set up.


I think reference counting is an elegant solution to a niche problem  
(namely,
memory management of large containers that won't have circular  
references when
memory is tight), but given all the baggage it creates, I don't think it  
should be
the default for any container.  I think we need to start thinking about  
custom

allocators, and allow reference counting but make GC the default.


The memory management of a container's innards is open to reference  
counting (or whatever, I agree that allocators should be supported  
somewhere).  I just object to reference counting of the container itself,  
as it's not important to me whether a container gets closed outside the GC  
automatically.


With something like a File it's different since you are coupling closing a  
file (a very limited resource) with cleaning memory (a relatively abundant  
resource).


-Steve


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Matthias Pleh

Am 17.11.2010 14:55, schrieb Steven Schveighoffer:

Being someone who likes the brace-on-its-own-line style


i++

greets
Matthias


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Nick Sabalausky
Andrei Alexandrescu seewebsiteforem...@erdani.org wrote in message 
news:ic03ui$gj...@digitalmars.com...
 On 11/17/10 12:00 AM, Jay Byrd wrote:
 On Tue, 16 Nov 2010 23:55:42 -0700, Rainer Deyke wrote:

 On 11/16/2010 22:24, Andrei Alexandrescu wrote:
 I'm curious what the response to my example will be. So far I got one
 that doesn't even address it.

 I really don't see the problem with requiring that '{' goes on the same
 line as 'if'.

 It *isn't* required. But if you don't put it there, *you get the wrong
 result*. I really don't understand why the problem with Andrei's example
 isn't blatantly obvious to everyone, but I would not want to use any
 product of anyone for whom it isn't.

 Exactly, that comes as a big surprise to me, too, in that discussion. I 
 can only hypothesize that some readers just glaze over the example 
 thinking "ah, whatever... a snippet trying to support some naysay", and 
 they reply armed with that presupposition.


Sad as it may be, most people, and worse still, most programmers, have no 
qualms about safety by convention. That's why they don't see this as a big 
problem. Safety-by-convention is one of those things that's only recognized 
as a real problem by people with enough self-discipline and people who have 
actually been burned enough by doing it wrong.




Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread dsimcha
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
 I think that we need a wrapper for containers that implements the shared
 methods required and manually locks things in order to use them.  Then you
 apply this wrapper to any container type, and it's now a shared container.
 There are also certain types of containers that lend themselves to shared
 access.  For example, I can see a linked list where each node contains a
 lock being a useful type.

This is a good idea to some degree, but the thing is that it forces you to use
shared even when you're going for fine-grained parallelism and want to just 
cowboy
it.  For fine-grained parallelism use cases, my hunch is that cowboying is going
to be the only game in town for a long time in all languages, not just D.

  (For arrays, I'm referring to the appending issue, which is problematic
  when I try
  to append to an array from multiple threads, synchronizing manually.)
 I'm interested what you mean here, I tried to make sure cross-thread
 appending is possible.
  dcollections containers would probably all fail if you tried to use them
   from multiple threads.

Ok, I stand corrected.  It seemed to work in practice, but always I just assumed
that it was a Bad Thing to do and worked for the Wrong Reasons.

 memory (a relatively abundant resource).

Apparently you've never tried working with multigigabyte datasets using a
conservative garbage collector and 32-bit address space.




Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Steven Schveighoffer

On Wed, 17 Nov 2010 10:14:21 -0500, dsimcha dsim...@yahoo.com wrote:


== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article

I think that we need a wrapper for containers that implements the shared
methods required and manually locks things in order to use them.  Then  
you
apply this wrapper to any container type, and it's now a shared  
container.
There are also certain types of containers that lend themselves to  
shared

access.  For example, I can see a linked list where each node contains a
lock being a useful type.


This is a good idea to some degree, but the thing is that it forces you  
to use
shared even when you're going for fine-grained parallelism and want to  
just cowboy
it.  For fine-grained parallelism use cases, my hunch is that cowboying  
is going

to be the only game in town for a long time in all languages, not just D.


There is always the possibility of cowboying it.  But I don't see that  
the standard lib should be catering to this.




 (For arrays, I'm referring to the appending issue, which is  
problematic

 when I try
 to append to an array from multiple threads, synchronizing manually.)
I'm interested what you mean here, I tried to make sure cross-thread
appending is possible.
 dcollections containers would probably all fail if you tried to use  
them

  from multiple threads.


Ok, I stand corrected.  It seemed to work in practice, but always I just  
assumed

that it was a Bad Thing to do and worked for the Wrong Reasons.


There is specific code in array appending that locks a global lock when  
appending to shared arrays.  Appending to __gshared arrays from multiple  
threads likely will not work in some cases though.  I don't know how to  
get around this, since the runtime is not made aware that the data is  
shared.



memory (a relatively abundant resource).


Apparently you've never tried working with multigigabyte datasets using a
conservative garbage collector and 32-bit address space.


Is that supported by out-of-the-box containers?  I would expect you need  
to create a special data structure to deal with such things.


And no, I don't regularly work with such issues.  But my point is,  
reference counting the *container* which uses some sort of memory  
allocation to implement its innards is not coupling a limited resource to  
memory allocation/deallocation.  In other words, I think it's better to  
have the container be a non-reference counted type, even if you  
reference-count the elements.  I prefer class semantics, to be quite  
honest, where explicit initialization is required.


-Steve


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Sean Kelly
Steven Schveighoffer Wrote:
 
 There is specific code in array appending that locks a global lock when  
 appending to shared arrays.  Appending to __gshared arrays from multiple  
 threads likely will not work in some cases though.  I don't know how to  
 get around this, since the runtime is not made aware that the data is  
 shared.

The shared attribute will have to become a part of the TypeInfo, much like 
const is now.  Knowing whether data is shared can affect where/how the memory 
block is allocated by the GC, etc.


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Steven Schveighoffer
On Wed, 17 Nov 2010 11:58:20 -0500, Sean Kelly s...@invisibleduck.org  
wrote:



Steven Schveighoffer Wrote:


There is specific code in array appending that locks a global lock when
appending to shared arrays.  Appending to __gshared arrays from multiple
threads likely will not work in some cases though.  I don't know how to
get around this, since the runtime is not made aware that the data is
shared.


The shared attribute will have to become a part of the TypeInfo, much  
like const is now.  Knowing whether data is shared can affect where/how  
the memory block is allocated by the GC, etc.


shared is part of it, but __gshared is not.

Since __gshared is the hack to allow bare metal sharing, I don't see how  
it can be part of the type info.


The issue is that if you append to such an array and it adds more pages in  
place, the block length location will move.  Since each thread caches its  
own copy of the block info, one will be wrong and look at array data  
thinking it's a length field.


Even if you surround the appends with a lock, it will still cause problems  
because of the cache.  I'm not sure there's any way to reliably append to  
such data from multiple threads.


-Steve


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread dsimcha
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
 The issue is that if you append to such an array and it adds more pages in
 place, the block length location will move.  Since each thread caches its
 own copy of the block info, one will be wrong and look at array data
 thinking it's a length field.
 Even if you surround the appends with a lock, it will still cause problems
 because of the cache.  I'm not sure there's any way to reliably append to
 such data from multiple threads.
 -Steve

Would assumeSafeAppend() do the trick?



Re: Compiler optimization breaks multi-threaded code

2010-11-17 Thread stephan

atomicOp uses a CAS loop for the RMW operations.
Ignore my comment. I should have looked at the code in core.atomic 
before commenting. I just had one test case with atomicOp!("+=") that 
worked, and assumed that atomicOp!("+=") was implemented with lock xadd.



I'm thinking of exposing atomicStore and atomicLoad in core.atomic so folks 
have something to use until the compiler work is taken care of.
This would be a good solution. Not only does it solve this ordering 
issue, but it is actually what I want to do with a shared variable while 
I am already within a critical section anyways.
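
For reference, a small sketch of the two usages being discussed (assuming atomicLoad/atomicStore do get exposed from core.atomic as proposed):

import core.atomic;

shared int counter;

// Read-modify-write: atomicOp does the CAS loop mentioned above.
void bump() { atomicOp!("+=")(counter, 1); }

// A plain load or store of a shared variable, e.g. from inside an existing
// critical section, is what atomicLoad/atomicStore would cover.
int  snapshot() { return atomicLoad(counter); }
void reset()    { atomicStore(counter, 0); }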


Re: std.date

2010-11-17 Thread Jonathan M Davis
On Wednesday, November 17, 2010 04:15:52 Kagamin wrote:
 Daniel Gibson Wrote:
   Synchronization can fail if the code asserts that number of seconds is
   not greater than 59 (Jonathan's lib does the same, I think). Is it the
   cause?
  
  How are leap seconds handled on a computer anyway? Does the clock really
  count to 60 seconds (instead of 59) before the next minute starts, or is
  the clock just slowed down a bit (like it's - IIRC - done when changing
  the time with NTP or such)?
 
 This is how it looked on linux:
 
 bash-2.05b# date
 Thu Jan 1 00:59:58 CET 2009
 bash-2.05b# date
 Thu Jan 1 00:59:59 CET 2009
 bash-2.05b# date
 Thu Jan 1 00:59:60 CET 2009
 bash-2.05b# date
 Thu Jan 1 01:00:00 CET 2009
 bash-2.05b# date
 Thu Jan 1 01:00:01 CET 2009
 bash-2.05b#

That's the standard, but supposedly it varies a bit in how it's handled - at 
least if you read it up on Wikipedia.

I'd have to go digging in std.datetime again to see exactly what would happen 
on 
a leap second, but IIRC you end up with either 59 twice or 00 twice. Unix time 
specifically ignores leap seconds, and in 99.99% of situations, if you 
have a 60th second, it's a programming error, so TimeOfDay considers 60 to be 
outside of its range and throws if you try and set its second to 60.
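
For illustration, a minimal sketch against the library under discussion (what later shipped as std.datetime); a 60th second is simply rejected as out of range:

import std.datetime;
import std.exception : assertThrown;

unittest
{
    auto ok = TimeOfDay(23, 59, 59);                        // fine
    assertThrown!DateTimeException(TimeOfDay(23, 59, 60));  // no leap-second slot
}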

SysTime is really the only type where it would make much sense to worry about 
leap seconds, but since the only way that you're going to get them is if you go 
out of your way by using a PosixTimeZone which starts with right/ for your 
time zone, it seemed silly to worry about it overly much. The _system time_ 
ignores leap seconds after all, _even_ if you use one of the time zones that 
starts with right/ as your system's time zone. So, the result is that if you 
use one of the PosixTimeZones with leap seconds, it will correctly adjust for 
leap seconds except when adding or removing a leap second, at which point, 
you'd 
get a duplicate time for two seconds in a row in the case of an addition and 
probably would skip a second in the case of subtraction (though that's actually 
probably the correct behavior for a subtraction - not that they've ever 
subtracted any leap seconds yet). It might be less than ideal if you _really_ 
care about leap seconds, but allowing for a 60th second could really mess with 
calculations and allow for bugs to go uncaught in user code. So, allowing for a 
60th second when adding a leap second would help an extreme corner case at the 
cost of harming the normal case, and I decided against it.

- Jonathan M Davis


Re: std.date

2010-11-17 Thread Jonathan M Davis
On Wednesday, November 17, 2010 09:51:30 Jonathan M Davis wrote:
 On Wednesday, November 17, 2010 04:15:52 Kagamin wrote:
  Daniel Gibson Wrote:
Synchronization can fail if the code asserts that number of seconds
is not greater than 59 (Jonathan's lib does the same, I think). Is
it the cause?
   
   How are leap seconds handled on a computer anyway? Does the clock
   really count to 60 seconds (instead of 59) before the next minute
   starts, or is the clock just slowed down a bit (like it's - IIRC -
   done when changing the time with NTP or such)?
  
  This is how it looked on linux:
  
  bash-2.05b# date
  Thu Jan 1 00:59:58 CET 2009
  bash-2.05b# date
  Thu Jan 1 00:59:59 CET 2009
  bash-2.05b# date
  Thu Jan 1 00:59:60 CET 2009
  bash-2.05b# date
  Thu Jan 1 01:00:00 CET 2009
  bash-2.05b# date
  Thu Jan 1 01:00:01 CET 2009
  bash-2.05b#
 
 That's the standard, but supposedly it varies a bit in how it's handled -
 at least if you read it up on Wikipedia.
 
 I'd have to go digging in std.datetime again to see exactly what would
 happen on a leap second, but IIRC you end up with either 59 twice or 00
 twice. Unix time specifically ignores leap seconds, and in 99.99%
 of situations, if you have a 60th second, it's a programming error, so
 TimeOfDay considers 60 to be outside of its range and throws if you try
 and set its second to 60.
 
 SysTime is really the only type where it would make much sense to worry
 about leap seconds, but since the only way that you're going to get them
 is if you go out of your way by using a PosixTimeZone which starts with
 right/ for your time zone, it seemed silly to worry about it overly
 much. The _system time_ ignores leap seconds after all, _even_ if you use
 one of the time zones that starts with right/ as your system's time
 zone. So, the result is that if you use one of the PosixTimeZones with
 leap seconds, it will correctly adjust for leap seconds except when adding
 or removing a leap second, at which point, you'd get a duplicate time for
 two seconds in a row in the case of an addition and probably would skip a
 second in the case of subtraction (though that's actually probably the
  correct behavior for a subtraction - not that they've ever subtracted any
  leap seconds yet). It might be less than ideal if you _really_ care about
 leap seconds, but allowing for a 60th second could really mess with
 calculations and allow for bugs to go uncaught in user code. So, allowing
 for a 60th second when adding a leap second would help an extreme corner
 case at the cost of harming the normal case, and I decided against it.

Actually, this results in the entertaining situation where you can have two 
SysTimes which convert to identical strings but where one is less than the 
other 
when compared (because their internal times are in unadjusted UTC and would 
differ). Of course, you'd have to get two times which were exactly 1 second 
apart, and their precision is 100 ns (though it only manages microsecond 
precision on Linux since that's as precise as the system clock is; I believe 
that Windows is slightly higher precision but not the full 100 ns), so the odds 
of it happening aren't terribly high, but it is technically possible. I suppose 
that if you got a bug because of it, it would be because you were converting 
all 
of your times to strings, and your program couldn't deal with the fact that 
your 
strings were suddenly 1 second back in time. Other than that, I don't expect 
that it would result in a problem. And since the clock can do that _anyway_ 
when 
it's adjusted for skew by NTP, I don't see that as being all that big a deal 
(though in the case of adjusting for NTP, the internal stdTimes for the 
SysTimes 
would be off as well, while in the leap second case, they aren't).

- Jonathan M Davis


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread bearophile
Nick Sabalausky:

Sad as it may be, most people, and worse still, most programmers, have no 
qualms about safety by convention.

This is an interesting topic; there is a lot to say about it. Bugs and errors 
have many sources, and you need to balance different and sometimes opposed 
needs to minimize them. Some of the sources of those troubles are not intuitive 
at all. In some situations safety by convention is the least bad solution.

You are used to C-like languages, and probably you don't see the very large 
amount of safety-by-convention things they do or ask you to do. If you look at 
safer languages (like SPARK, a safer variant of Ada) you see a large number of 
things you never want to do in normal programs. And I am now aware that even 
SPARK contains big amounts of things that are safe just because the programmer 
is supposed to do them in the right way.

If you start piling more and more constraints and requirements on the work of 
the programmer you don't produce a safer language, but a language that no one 
is able to use or no one has enough time and resources to use (unless the 
program is critically important). This is a bit like the "worse is better" 
design strategy: sometimes to maximize safety you have to leave the programmer 
some space to do things that don't look safe at all. Designing a good language 
is hard; even C#, which is one of the most carefully designed languages around, has got 
some things wrong (like using + to concat strings, or much worse 
http://blogs.msdn.com/b/ericlippert/archive/2007/10/17/covariance-and-contravariance-in-c-part-two-array-covariance.aspx
 ).

Bye,
bearophile


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Steven Schveighoffer

On Wed, 17 Nov 2010 12:09:11 -0500, dsimcha dsim...@yahoo.com wrote:


== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
The issue is that if you append to such an array and it adds more pages  
in
place, the block length location will move.  Since each thread caches  
its

own copy of the block info, one will be wrong and look at array data
thinking it's a length field.
Even if you surround the appends with a lock, it will still cause  
problems
because of the cache.  I'm not sure there's any way to reliably append  
to

such data from multiple threads.
-Steve


Would assumeSafeAppend() do the trick?



No, that does not affect your cache.  I probably should add a function to  
append without using the cache.


-Steve


Re: The Next Big Language [OT]

2010-11-17 Thread Bruno Medeiros

On 18/10/2010 19:45, Steven Schveighoffer wrote:

On Mon, 18 Oct 2010 14:36:57 -0400, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


...bury the hatch and...


Sorry, I can't let this one pass... bury the *hatchet* :)

This isn't Lost.

-Steve


LOOOL


Oh man, I miss that series, even though it was going downhill..

--
Bruno Medeiros - Software Engineer


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread dsimcha
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
 On Wed, 17 Nov 2010 12:09:11 -0500, dsimcha dsim...@yahoo.com wrote:
  == Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
  The issue is that if you append to such an array and it adds more pages
  in
  place, the block length location will move.  Since each thread caches
  its
  own copy of the block info, one will be wrong and look at array data
  thinking it's a length field.
  Even if you surround the appends with a lock, it will still cause
  problems
  because of the cache.  I'm not sure there's any way to reliably append
  to
  such data from multiple threads.
  -Steve
 
  Would assumeSafeAppend() do the trick?
 
 No, that does not affect your cache.  I probably should add a function to
 append without using the cache.
 -Steve

I thought the whole point of assumeSafeAppend is that it puts the current ptr 
and
length into the cache as-is.


Re: blog: Overlooked Essentials for Optimizing Code (Software

2010-11-17 Thread Bruno Medeiros

On 11/11/2010 11:50, lurker wrote:

ruben niemann Wrote:


Diego Cano Lagneaux Wrote:


Well, I think a simple look at the real world is enough to agree that you
need several years of experience and good skills. Moreover, my personal
experience is that it's easier to get a job (and therefore the much needed
working experience) when you have a 3-year degree than a 5-year one, at
least in Spain: I've been told at many job interviews that I was
'overqualified' (I didn't care about that, just wanted to work, but they
did)


Same happened to me. I've MSc in computer engineering from a technical 
university. I began my PhD studies (pattern recognition and computer vision), 
but put those on hold after the first year because it seemed there isn't much 
non-academic work on that field and because of other more urgent issues. Four 
years after getting my MSc I'm still writing user interface html / css / 
javascript / php in a small enterprise. Hoping to see D or some strongly typed 
language in use soon. I'm one of the techies running the infrastructure, I 
should have studied marketing / management if I wanted to go up in the 
organization and earn more.


It's usually your own fault if you don't get promotions. My career started with 
WAP/XHTML/CSS, J2EE, Tapestry, Struts, then Stripes, Spring, Hibernate, jQuery, and 
few others. Due to my lack of small talk social skills, I was first moved from client 
interface and trendy things to the backend coding and testing, later began doing 
sysadmin work at the same company. My working area is in the basement floor near a 
tightly locked and cooled hall full of servers. It's pretty cold here, I rarely see 
people (too lazy to climb upstairs to fetch a cup of coffee so I brought my own 
espresso coffee maker here) and when I do, they're angry because some foobar 
doesn't work again.


So lurker is actually also your job description? :P

--
Bruno Medeiros - Software Engineer


Re: std.container.BinaryHeap + refCounted = WTF???

2010-11-17 Thread Steven Schveighoffer

On Wed, 17 Nov 2010 13:58:55 -0500, dsimcha dsim...@yahoo.com wrote:


== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article

On Wed, 17 Nov 2010 12:09:11 -0500, dsimcha dsim...@yahoo.com wrote:
 == Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
 The issue is that if you append to such an array and it adds more  
pages

 in
 place, the block length location will move.  Since each thread caches
 its
 own copy of the block info, one will be wrong and look at array data
 thinking it's a length field.
 Even if you surround the appends with a lock, it will still cause
 problems
 because of the cache.  I'm not sure there's any way to reliably  
append

 to
 such data from multiple threads.
 -Steve

 Would assumeSafeAppend() do the trick?

No, that does not affect your cache.  I probably should add a function  
to

append without using the cache.
-Steve


I thought the whole point of assumeSafeAppend is that it puts the  
current ptr and

length into the cache as-is.


All the cache does is store the block info -- block start, block size, and  
block flags.  The length is stored in the block directly.  The cache  
allows me to skip a call to the GC (and lock the GC's global mutex) by  
getting the block info directly from a small cache.  The block info is  
then used to determine where and how the used length is stored.


Since the length is stored at the end, a change in block size in one cache  
while being unchanged in another cache can lead to problems.


assumeSafeAppend sets the used block length as the given array's length  
so the block can be used again for appending.  It does not affect the  
cache.
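
For illustration, a minimal single-threaded sketch of the assumeSafeAppend
behavior described above (the values are arbitrary):

import std.stdio;

void main()
{
    int[] a = [1, 2, 3, 4];
    a.length = 2;          // shrink the slice; the block still records 4 used elements
    assumeSafeAppend(a);   // reset the block's stored length to a.length
    a ~= 99;               // the append can now reuse the block instead of reallocating
    writeln(a);            // prints [1, 2, 99]
}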


Another option is to go back to the mode where the used length is stored  
at the beginning of large blocks (this caused alignment problems for some  
people).


-Steve


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Rainer Deyke
On 11/17/2010 03:26, Daniel Gibson wrote:
 Rainer Deyke schrieb:
 Let's say I see something like this in C/C++/D:

 if(blah())
 {
   x++;
 }

 This is not my usual style, so I have to stop and think.  
 
 What about
if( (blah() || foo()) && (x < 42)
    && (baz.iDontKnowHowtoNameThisMethod() !is null)
    && someOtherThing.color = COLORS.Octarine )
 {
   x++;
 }

At first glance, it looks like two statements to me.  The intended
meaning could have been this:

if ((blah() || foo())
    && (x < 42)
    && (baz.iDontKnowHowtoNameThisMethod() !is null)
    && someOtherThing.color == COLORS.Octarine) {
  ++x;
}

Or this:

if((blah() || foo())
   && (x < 42)
   && (baz.iDontKnowHowtoNameThisMethod() !is null)
   && someOtherThing.color == COLORS.Octarine) {}
{
  ++x;
}

The latter seems extremely unlikely, so it was probably the former.
Still, I have to stop and think about it.  There is also the third
possibility that the intended meaning of the statement is something else
entirely, and the relevant parts have been lost or have not yet been
written.

Language-enforced coding standards are a good thing, because they make
foreign code easier to read.  For this purpose, it doesn't matter if the
chosen style is your usual style or if you subjectively like it.  Even a
bad coding standard is better than no coding standard.


-- 
Rainer Deyke - rain...@eldwood.com


Re: std.date

2010-11-17 Thread Kagamin
Jonathan M Davis Wrote:

  This is how it looked on linux:
  
  bash-2.05b# date
  Thu Jan 1 00:59:58 CET 2009
  bash-2.05b# date
  Thu Jan 1 00:59:59 CET 2009
  bash-2.05b# date
  Thu Jan 1 00:59:60 CET 2009
  bash-2.05b# date
  Thu Jan 1 01:00:00 CET 2009
  bash-2.05b# date
  Thu Jan 1 01:00:01 CET 2009
  bash-2.05b#
 
 That's the standard, but supposedly it varies a bit in how it's handled - at 
 least if you read it up on Wikipedia.
 
 I'd have to go digging in std.datetime again to see exactly what would happen 
 on 
 a leap second, but IIRC you end up with either 59 twice or 00 twice.

An exception will be thrown, this is tested:

assertExcThrown!(DateTimeException, (){TimeOfDay(0, 0, 0).second = 
60;})(LineInfo());

 and in 99.99% of situations, if you 
 have a 60th second, it's a programming error, so TimeOfDay considers 60 to be 
 outside of its range and throws if you try and set its second to 60.

That's probably why Oracle and Solaris rebooted on 2009-01-01.


Re: RFC, ensureHeaped

2010-11-17 Thread Rainer Deyke
On 11/17/2010 05:10, spir wrote:
 Output in general, programmer feedback in particuliar, should simply
 not be considered effect. It is transitory change to dedicated areas
 of memory -- not state. Isn't this the sense of output, after all?

My debug output actually goes through my logging library which, among
other things, maintains a list of log messages in memory.  If this is
considered pure, then we might as well strip pure from the language,
because it has lost all meaning.


-- 
Rainer Deyke - rain...@eldwood.com


Re: std.date

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 12:37:18 Kagamin wrote:
 Jonathan M Davis Wrote:
   This is how it looked on linux:
   
   bash-2.05b# date
   Thu Jan 1 00:59:58 CET 2009
   bash-2.05b# date
   Thu Jan 1 00:59:59 CET 2009
   bash-2.05b# date
   Thu Jan 1 00:59:60 CET 2009
   bash-2.05b# date
   Thu Jan 1 01:00:00 CET 2009
   bash-2.05b# date
   Thu Jan 1 01:00:01 CET 2009
   bash-2.05b#
  
  That's the standard, but supposedly it varies a bit in how it's handled -
  at least if you read it up on Wikipedia.
  
  I'd have to go digging in std.datetime again to see exactly what would
  happen on a leap second, but IIRC you end up with either 59 twice or 00
  twice.
 
 An exception will be thrown, this is tested:
 
 assertExcThrown!(DateTimeException, (){TimeOfDay(0, 0, 0).second =
 60;})(LineInfo());

Except that _no_ calculation in std.datetime would _ever_ result in a second 
being 60. A user program would have to do that by trying to create a TimeOfDay 
or DateTime (which contains a TimeOfDay) with a second value of 60.

The question is what string value or DateTime/TimeOfDay value SysTime gives when 
converting from a time that falls during the application of that leap second, when 
the TimeZone being used handles leap seconds. It's either going to give 59 or 00, 
but never 60. I'd have to look at the code in PosixTimeZone to see which.

  and in 99.99% of situations, if you
  have a 60th second, it's a programming error, so TimeOfDay considers 60
  to be outside of its range and throws if you try and set its second to
  60.
 
 That's probably why Oracle and Solaris rebooted on 2009-01-01.

Possibly. But that would mean that their code handled the 60th second (and if 
it 
did, they would likely have done it properly). Unless there is some way to get 
a 
time out of the OS which gives you a 60th second, and they were using that 
method of getting the time, they never would have even seen a value of 60 for 
the seconds anywhere. You have to work at it to get that 60th second. Most 
stuff 
just ignores leap seconds completely. If whatever they did caused an exception 
(like TimeOfDay would throw) and _that_ is what took the system down, then they 
have other major problems.

I have no idea what really caused the problem, but my guess would be that it 
was 
code that assumed something which didn't hold when that new leap second was 
hit, 
and it resulted in a segfault. Given how robust that kind of software has to 
be, 
I would not expect it to go down from a mere exception, unless it were a 
_major_ 
one - like what D would typically have as an Error.

- Jonathan M Davis


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Kagamin
Jonathan M Davis Wrote:

 Latest: http://is.gd/gSwDv
 

You use QueryPerformanceCounter.
Is this code tested on Windows? MSDN doesn't specify what 
QueryPerformanceCounter returns.
see http://msdn.microsoft.com/en-us/magazine/cc163996.aspx


Re: In praise of Go discussion on ycombinator

2010-11-17 Thread Simen kjaeraas

Matthias Pleh s...@alter.com wrote:


Am 17.11.2010 14:55, schrieb Steven Schveighoffer:

Being someone who likes the brace-on-its-own-line style


i++


Surely you mean:

i
++
;

--
Simen


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Jonathan M Davis
On Wednesday, November 17, 2010 13:44:32 Kagamin wrote:
 Jonathan M Davis Wrote:
  Latest: http://is.gd/gSwDv
 
 You use QueryPerformanceCounter.
 Is this code tested on Windows? MSDN doesn't specify what
 QueryPerformanceCounter returns. see
 http://msdn.microsoft.com/en-us/magazine/cc163996.aspx

SHOO wrote that portion of the code as part of his stopwatch stuff, so I'm 
probably not as clear on that as I am on most of the other stuff, but it was 
working for him, as far as I know, and I've done some testing on wine 
(including 
making sure that all of the unit tests pass), so it's not like it blows up or 
anything like that. There could be some sort of fundamental problem with it or 
subtle bug that I'm not aware of, but it at least appears to work. I'd have to 
study up on it to see whether there are any real problems with it. I was just 
using the code that SHOO had to get the current system time at high resolution. 
I figured that he'd figured all that out, and there was no reason to duplicate 
effort.

- Jonathan M Davis


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Todd VanderVeen
The article was written in 2004. A high precision event timer has been
incorporated in chipsets since 2005.

http://en.wikipedia.org/wiki/High_Precision_Event_Timer

I hope we're not basing decisions on support for NT4.0 :)


== Quote from Kagamin (s...@here.lot)'s article
 Jonathan M Davis Wrote:
  Latest: http://is.gd/gSwDv
 
 You use QueryPerformanceCounter.
 Is this code tested on Windows? MSDN doesn't specify what
QueryPerformanceCounter returns.
 see http://msdn.microsoft.com/en-us/magazine/cc163996.aspx



Re: datetime review part 2 [Update 4]

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 16:09:22 Todd VanderVeen wrote:
 The article was written in 2004. A high precision event timer has been
 incorporated in chipsets since 2005.
 
 http://en.wikipedia.org/wiki/High_Precision_Event_Timer
 
 I hope we're not basing decisions on support for NT4.0 :)

I'm sure not. I believe that most or all of the Windows system calls that are 
made in std.datetime date back to Win2k. WindowsTimeZone could be improved if I 
could assume that Windows was Vista or newer, but that's obviously not 
reasonable at this point, so I used the Win2k functions for getting time zone 
information (the main difference being that the new ones can get correct DST 
info 
for historical dates whereas the old ones only ever use the current DST rules). 
I believe that the general philosophy is to support the oldest Windows OS that 
is reasonable (so, for example, if you can do it one way and support back to 
Win98 and another way which would support to Win2K and they're pretty much 
equal 
as far as utility or complexity goes, then choose the Win98 way). I don't know 
what the upper limit is though. XP obviously has to be supported, so anything 
newer than that is automatically out, but I don't know if a system function 
which was added in XP would be okay or not. Regardless, std.datetime assumes 
that your version of Windows is at least Win2k but does not assume that it's 
newer than that.

- Jonathan M Davis


Re: std.date

2010-11-17 Thread Steve Teale
Jonathan M Davis Wrote:

... (though in the case of adjusting for NTP, the internal stdTimes for the 
SysTimes 
 would be off as well, while in the leap second case, they aren't).
 
 - Jonathan M Davis

OK, all, thanks for answering that question, but my primary gripe was that the 
current std.date does not have a constructor like this(). My assumption being 
that such a constructor would go to the OS and give you an object corresponding 
to now.

I've looked at Jonathan's documentation, and I don't see a constructor like 
that there either.

So if I want to write a timed log entry, what's the recommendation?

Steve


Re: std.regexp vs std.regex [Re: RegExp.find() now crippled]

2010-11-17 Thread Steve Teale
Andrei Alexandrescu Wrote:

 
 It's probably common courtesy that should be preserved. I just committed 
 the fix prompted by Lutger (thanks).
 
 Andrei

Thanks Andrei. When the next version is released I'll remove the temporary 
findRex() function from my current code.

Steve ;=)


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Kagamin
Jonathan M Davis Wrote:

 I'd have to study up on it to see whether there are any real problems with it.

Speaking in posix terms, performance counter is more like CLOCK_MONOTONIC and 
using it as CLOCK_REALTIME is a dependency on undefined behavior.


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Steve Teale
It's difficult to find a suitable entry point in this thread, so I'll just 
arbitrarily use here.

Various language libraries have flexible facilities for formatting date/time 
values, maybe c#, and certainly PHP, whereby you can specify a format string, 
something like %d'th %M %Y.

Is this a useful idea? I have some code kicking around somewhere.
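
As a point of reference, the C runtime that core.stdc.time exposes already has
format-string-driven formatting via strftime (its format codes differ from
PHP's; the buffer size below is arbitrary):

import core.stdc.time;
import std.stdio;

void main()
{
    // Format the current local time with a C format string, e.g. "17 November 2010".
    auto t = time(null);
    char[64] buf;
    auto len = strftime(buf.ptr, buf.length, "%d %B %Y", localtime(&t));
    writeln(buf[0 .. len]);
}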

Steve



Re: std.date

2010-11-17 Thread Kagamin
Steve Teale Wrote:

 So if I want to write a timed log entry, what's the recommendation?

I won't dare to use std.date.


Re: std.date

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 21:35:03 Steve Teale wrote:
 Jonathan M Davis Wrote:
 
 ... (though in the case of adjusting for NTP, the internal stdTimes for the
 SysTimes
 
  would be off as well, while in the leap second case, they aren't).
  
  - Jonathan M Davis
 
 OK, all, thanks for answering that question, but my primary gripe was that
 the current std.date does not have a constructor like this(). My
 assumption being that such a constructor would go to the OS and give you
 an object corresponding to now.
 
 I've looked at Jonathan's documentation, and I don't see a constructor like
 that there either.
 
 So if I want to write a timed log entry, what's the recommendation?

Structs can't have default constructors, so it's impossible to do that. In the 
case of std.datetime, the way to get the current time is Clock.currTime(), and 
since SysTime has a toString() method, you can just print it. So, you can do 
writeln(Clock.currTime().toString()). It has other methods for converting 
to and from strings if you want a specific format, but toString() 
works just fine if you aren't picky about the format (it's also the most 
readable 
of the various formats).
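
For example, a minimal timed log line along those lines (assuming the reviewed
std.datetime module is available):

import std.datetime;
import std.stdio;

void main()
{
    // Print a timestamped log entry using the current system time.
    auto now = Clock.currTime();
    writefln("[%s] log entry", now.toString());
}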

As for std.date: IIRC, you'd use getUTCTime() to get the current time as a 
d_time and toUTCString() to print it. As I recall, anything that converts to or 
from UTC is broken, so I wouldn't advise it.

If you really want the current time as local time and don't want to use 
std.datetime before it's actually in Phobos, then I'd advise just using the 
standard C functions. They're in core.stdc.time. The list can be found here: 
http://www.cppreference.com/wiki/chrono/c/start

This program would print the current time:

import core.stdc.time;
import std.conv;
import std.stdio;
import std.string;

void main()
{
auto t = time(null);
writeln(strip(to!string(ctime(&t))));
}


to!string() is used because ctime() returns a char*, and strip() is used 
because 
ctime() returns a string with a newline at the end.
 
- Jonathan M Davis


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 21:57:58 Steve Teale wrote:
 It's difficult to find a suitable entry point in this thread, so I'll just
 arbitrarily use here.
 
 Various language libraries have flexible facilities for formatting
 date/time values, maybe c#, and certainly PHP, whereby you can specify a
 format string, something like %d'th %M %Y.
 
 Is this a useful idea? I have some code kicking around somewhere.

It is definitely a useful idea. However, having strings in standard formats is 
more important (and far simpler - albeit not simple), so that's what's included 
at the moment. At some point, I will likely implement toString() and 
fromString() methods which take format strings, but having those be 
appropriately flexible and getting them right is not simple, so I put them off 
for 
the moment. I have several TODO comments in the code with ideas of 
functionality 
to add at a later date, and that is one of them. So, ideally it would be in 
there, but it already took long enough to implement what's there that it seemed 
appropriate to put off implementing further functionality until the basic 
design 
is approved and in Phobos.

- Jonathan M Davis


Re: DDMD not updated, why?

2010-11-17 Thread DOLIVE
DOLIVE wrote:

 Why don't you update it? GDC has been updated to dmd2.049.


refuel, make an all out effort

thank you very much!


Re: datetime review part 2 [Update 4]

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 21:51:24 Kagamin wrote:
 Jonathan M Davis Wrote:
  I'd have to study up on it to see whether there are any real problems
  with it.
 
 Speaking in posix terms, performance counter is more like CLOCK_MONOTONIC
 and using it as CLOCK_REALTIME is a dependency on undefined behavior.

If you have a better way to do it, I'm all ears. However, it's the only way 
that 
I know of to get high-precision time on Windows.

- Jonathan M Davis


why no implicit convertion?

2010-11-17 Thread Matthias Pleh

void foo(char[] a) {}
void bar(char[][] b) {}

int main(string[] args)
{
char[4] a;
char[4][4] b;
foo(a);// OK: implicit convertion
bar(b);// Error: cannot implicitly convert
   //char[4u][4u] to char[][]
}

what is the reason for the different behaviour?
What's best to pass such multidimensional arrays?


Re: why no implicit convertion?

2010-11-17 Thread Tomek Sowiński

Matthias Pleh s...@alter.com wrote:


void foo(char[] a) {}
void bar(char[][] b) {}

int main(string[] args)
{
 char[4] a;
 char[4][4] b;
 foo(a);// OK: implicit convertion
 bar(b);// Error: cannot implicitly convert
//char[4u][4u] to char[][]
}

what is the reason for the different behaviour?


I *think* it's because multi-dim static arrays are a strip of contiguous  
memory and no length information is held with the data, so if it was  
converted to a dynamic array of arrays (which do hold their lengths), there  
wouldn't be room for the lengths of the arrays.



What's best to pass such multidimensional arrays?


Good question. Maybe new char[][](4) and point the inner arrays to the  
chunks of the static array?
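
For illustration, a minimal sketch of that idea: the inner arrays are slices
pointing into the rows of the static array, so no character data is copied.

void bar(char[][] b) {}

void main()
{
    char[4][4] b;

    // One dynamic array per row, each aliasing the static storage.
    char[][] rows = new char[][](4);
    foreach (i; 0 .. 4)
        rows[i] = b[i][];
    bar(rows);
}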


--
Tomek


Re: why no implicit convertion?

2010-11-17 Thread Steven Schveighoffer
Matthias Pleh Wrote:

 void foo(char[] a) {}
 void bar(char[][] b) {}
 
 int main(string[] args)
 {
  char[4] a;
  char[4][4] b;
  foo(a);// OK: implicit convertion
  bar(b);// Error: cannot implicitly convert
 //char[4u][4u] to char[][]
 }
 
 what is the reason for the different behaviour?

char[][] is an array of dynamic arrays.  A dynamic array consists of a length 
and a pointer.
A char[4][4] is a fixed array of fixed arrays.  A fixed array consists of just 
data, the length is part of the type, and the pointer is implied.

 What's best to pass such multidimensional arrays?

two ways, if you want to support multiple lengths of 4-element char arrays, you 
could do:

void bar(char[4][])

if you want to support only a 4x4 array, you can do:

void bar(ref char[4][4])

If you want to pass by value, omit the ref, but this will copy all the data and 
you will not be able to update the original array from within the function.
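
For illustration, a minimal sketch of the two signatures just described (the
functions are renamed here only so both fit in one example):

void barSlice(char[4][] rows)   {}  // any number of rows, each exactly 4 chars
void barFixed(ref char[4][4] m) {}  // exactly 4x4, passed by reference

void main()
{
    char[4][4] b;
    barSlice(b[]);  // slicing the outer static array yields char[4][]
    barFixed(b);    // binds directly, no copy
}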

-Steve


Re: why no implicit convertion?

2010-11-17 Thread spir
On Wed, 17 Nov 2010 22:10:19 +0100
Matthias Pleh s...@alter.com wrote:

 void foo(char[] a) {}
 void bar(char[][] b) {}
 
 int main(string[] args)
 {
  char[4] a;
  char[4][4] b;
  foo(a);// OK: implicit convertion
  bar(b);// Error: cannot implicitly convert
 //char[4u][4u] to char[][]
 }
 
 what is the reason for the different behaviour?
 What's best to pass such multidimensional arrays?

I may be wrong, but it seems (also from main's signature) you're trying to 
apply plain C point-of-view to a D feature (dyn array).
Also, maybe what you need is an array of strings?
Finally, to initialise a dynamic array at a given dimension, you may use the 
idiom
string[] strings = new string[dim];
(but this does not work for 2 dimensions, I guess)
Sorry if I'm wrong and this does not help.


Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: why no implicit convertion?

2010-11-17 Thread bearophile
Matthias Pleh:

 So I solved it with:
 
 void bar(char* buf, int width, int height)
 
 Good old C :)

Most times this is not a good D solution :-(

This compiles (but it creates a new instantiation of bar for each different 
input matrix):

void bar(int N, int M)(int[N][M] buf) {}
void main() {
int[4][4] m;
bar(m);
}

Bye,
bearophile


Re: why no implicit convertion?

2010-11-17 Thread bearophile
 void bar(int N, int M)(ref int[N][M] buf) {}

But for a matrix this is often better:
void bar(int N, int M)(ref int[N][M] buf) {

Or even:
pure void bar(int N, int M)(ref const int[N][M] buf) {

Bye,
bearophile


Current status of toString in phobos

2010-11-17 Thread Matthias Walter
Hi,

I'm currently using DMD v2.049 with phobos. I found an old discussion
about how toString should be designed and how it is supposed to work. As
the following code does not print out the number, I wonder what is the
current status of how to implement a toString function for a struct/class:

| auto n = BigInt(42);
| writefln("%s", n);

Thanks
Matthias


const vs immutable for local variables

2010-11-17 Thread Jonathan M Davis
In C++, I tend to declare all local variables const when I know that they 
aren't 
going to need to be altered. I'd like to do something similar in D. However, D has 
both const and immutable. I can see clear differences in how const and 
immutable 
work with regards to function parameters and member variables, but it's not as 
clear with regards to local variables.

So, the question is: what are the advantages of one over the other? 
Specifically, 
my concern is how likely compiler optimizations are. Does using immutable make 
compiler optimizations more likely? Or would const do just as well if not 
better? Or is dmd smart enough that it really doesn't matter if you use const 
or 
immutable on local variables which never change?
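
For concreteness, the two kinds of declaration being compared (compute and its
arguments are arbitrary):

int compute(int x) { return x * x; }

void main()
{
    // Both locals are initialized once and never reassigned; the question
    // is whether the optimizer treats immutable any differently from const.
    const     c = compute(6);
    immutable i = compute(7);
}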

- Jonathan M Davis


Re: Current status of toString in phobos

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 19:48:30 Matthias Walter wrote:
 Hi,
 
 I'm currently using DMD v2.049 with phobos. I found an old discussion
 about how toString should be designed and how it is supposed to work. As
 the following code does not print out the number, I wonder what is the
 
 current status of how to implement a toString function for a struct/class:
 | auto n = BigInt(42);
  | writefln("%s", n);

Object has the function toString(), which you have to override.

Structs have to define toString() as well. However, unlike classes, its 
signature must be _exactly_ string toString();  You can't add extra modifiers 
such as const or nothrow, or it won't work. You _should_ be able to have extra 
modifiers on it, but it doesn't work at the moment if you do (so I typically 
end 
up declaring two toString()s - one with the modifiers and one without - and 
declare a private method which they both call that has the actual 
implementation). There's an open bug on it. Once it's fixed, any signature for 
toString() should work for structs as long as its name is toString() and it 
returns a string.
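
For illustration, a sketch of that workaround on a made-up struct (the Fraction
type and its fields are placeholders, not from the original post):

import std.conv : to;
import std.stdio;

struct Fraction
{
    int num, den;

    // The workaround: a plain toString() that the current formatting code
    // accepts, plus a const overload, both forwarding to one implementation.
    string toString()       { return impl(); }
    string toString() const { return impl(); }

    private string impl() const
    {
        return to!string(num) ~ "/" ~ to!string(den);
    }
}

void main()
{
    writefln("%s", Fraction(1, 3));   // prints 1/3
}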

As for BigInt, for some reason it doesn't have a normal toString(). Instead, it 
has one which you pass a delegate and format string to in order to control how 
it's converted to a string. It's probably useful, but I do think that it should 
have a normal toString() method as well. I've opened a bug report on it: 
http://d.puremagic.com/issues/show_bug.cgi?id=5231

And by the way, version 2.050 is the most recent version of dmd, so you might 
want to grab it.

- Jonathan M Davis


Re: const vs immutable for local variables

2010-11-17 Thread Jonathan M Davis
On Wednesday 17 November 2010 23:09:40 bearophile wrote:
 Jonathan M Davis:
  In C++, I tend to declare all local variables const when I know that they
   aren't going to need to be altered. I'd like to do something similar in D.
  However, D has both const and immutable. I can see clear differences in
  how const and immutable work with regards to function parameters and
   member variables, but it's not as clear with regards to local
   variables.
 
 In D2 for local variables that don't change use immutable when they are
 computed at run-time. I'd like to suggest you to use enum when they are
 known at compile-time, but in some cases this is bad (some examples of
 associative arrays, etc).

Well, yes. enums are definitely the way to go for compile-time constants. The 
question 
is for runtime. And why would you suggest immutable over const for runtime?

  So, the question is: what are the advantages of one over the other?
  Specifically, my concern is how likely compiler optimizations are. Does
  using immutable make compiler optimizations more likely? Or would const
  do just as well if not better? Or is dmd smart enough that it really
  doesn't matter if you use const or immutable on local variables which
  never change?
 
 Or is dmd dumb enough that it makes no optimization difference? :-)

I really don't see any reason why const vs immutable would make any difference 
for a local variable except insofar as a function takes an immutable argument 
rather than a const one. I would think that both would be optimized 
identically, 
but I don't know.

- Jonathan M Davis


[Issue 4864] ICE(statement.c) Crash on invalid 'if statement' body inside mixin

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=4864


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

   Keywords||patch
 CC||clugd...@yahoo.com.au


--- Comment #2 from Don clugd...@yahoo.com.au 2010-11-17 00:13:39 PST ---
PATCH: statement.c 337.
CompileStatement::flatten()

Statements *a = new Statements();
while (p.token.value != TOKeof)
{
+   int olderrs = global.errors;
Statement *s = p.parseStatement(PSsemi | PScurlyscope);
+   if (olderrs == global.errors) // discard it if parsing failed
a->push(s);
}



[Issue 5229] New: Inaccurate parsing of floating-point literals

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5229

   Summary: Inaccurate parsing of floating-point literals
   Product: D
   Version: D1  D2
  Platform: All
OS/Version: All
Status: NEW
  Keywords: wrong-code
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: bugzi...@kyllingen.net


--- Comment #0 from Lars T. Kyllingstad bugzi...@kyllingen.net 2010-11-17 
03:29:34 PST ---
80-bit reals give you roughly 19 decimal digits of precision.  Thus, for a
given number, 20 digits should usually be enough to ensure that the literal
gets mapped to the closest representable number.

The following program shows that this is not always the case.  Here, 23 digits
are needed to get the closest representable number to pi^2, even though the
approximation to pi^2 itself is only accurate to 18 digits!

Test case:

void main()
{
// Approximations to pi^2, accurate to 18 digits:
real closest = 0x9.de9e64df22ef2d2p+0L;
real next= 0x9.de9e64df22ef2d3p+0L;

// A literal with 23 digits maps to the correct
// representation.
real dig23 = 9.86960_44010_89358_61883_45L;
assert (dig23 == closest);

// 22 digits should also be (more than) sufficient,
// but no...
real dig22 = 9.86960_44010_89358_61883_5L;
assert (dig22 == closest);  // Fails; should pass
assert (dig22 == next); // Passes; should fail
}



[Issue 3827] automatic joining of adjacent strings is bad

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3827



--- Comment #22 from Stewart Gordon s...@iname.com 2010-11-17 03:58:08 PST ---
(In reply to comment #21)
 doesn't this solve that problem? a ~ (this ~ that)

It does.  My point was that somebody might accidentally not add the brackets.



[Issue 5229] Inaccurate parsing of floating-point literals

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5229


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #1 from Don clugd...@yahoo.com.au 2010-11-17 04:23:43 PST ---
Actually 19 digits works. The thing that's wrong is that the compiler uses
_all_ provided digits. Instead, according to IEEE 754, it should only take the
first 19 digits, performing decimal rounding of the 19th digit if more digits
are provided.

It's a problem in DMC's standard library implementation of strtold().

I once found a case where adding more decimal digits made the number smaller(!)



[Issue 5219] @noheap annotation

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5219


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #1 from Don clugd...@yahoo.com.au 2010-11-17 04:50:46 PST ---
No.
Use a profiler.



[Issue 5230] New: ICE(tocsym.c) overriding a method that has an out contract

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5230

   Summary: ICE(tocsym.c) overriding a method that has an out
contract
   Product: D
   Version: D1  D2
  Platform: x86
OS/Version: Windows
Status: NEW
  Keywords: ice-on-valid-code
  Severity: regression
  Priority: P1
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: s...@iname.com


--- Comment #0 from Stewart Gordon s...@iname.com 2010-11-17 09:42:46 PST ---
Clearly the implementation of out contract inheritance is broken.

- override_out_a.d -
import override_out_b;

class Derived : Base {
override int method() { return 69; }
}
- override_out_b.d -
class Base {
int method()
out (r) {}
body { return 42; }
}
- DMD 1.065 -
C:\Users\Stewart\Documents\Programming\D\Tests\bugsdmd override_out_a.d
override_out_b.d(3): Error: function __ensure forward declaration
linkage = 0
Assertion failure: '0' on line 381 in file 'tocsym.c'

abnormal program termination
- DMD 2.050 -
C:\Users\Stewart\Documents\Programming\D\Tests\bugsdmd override_out_a.d
override_out_b.d(3): Error: function __ensure forward declaration
linkage = 0
Assertion failure: '0' on line 407 in file 'tocsym.c'

abnormal program termination
--

Compiles successfully if the out contract is removed, or Base and Derived are
defined in the same module.

Adding an out contract to Derived.method doesn't change things.

This has broken SDWF.



[Issue 5230] ICE(tocsym.c) overriding a method that has an out contract

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5230


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #1 from Don clugd...@yahoo.com.au 2010-11-17 11:57:00 PST ---
This was almost certainly caused by the fix to 
bug 3602: ICE(tocsym.c) compiling a class, if its super class has preconditions
Which had almost exactly the same symptoms as this bug (only with __require
instead of __ensure).



[Issue 3031] scoped static var conflicts while linking

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3031



--- Comment #2 from Lukasz Wrzosek luk.wrzo...@gmail.com 2010-11-17 12:12:50 
PST ---
Created an attachment (id=817)
Fix for this bug.



[Issue 2056] Const system does not allow certain safe casts/conversions involving deep composite types

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2056


Bruno Medeiros bdom.pub+deeb...@gmail.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||INVALID


--- Comment #2 from Bruno Medeiros bdom.pub+deeb...@gmail.com 2010-11-17 
12:15:17 PST ---
The latest DMD compiles both code samples now, *however*, I've come to realize
that in fact this code should NOT be allowed (that is, any of the "Error here"
lines in the code above should produce an error), because these casts are
actually not safe.

For the reasons why, see bug #2544 , which is the inverse of this one.



[Issue 3889] Forbid null as representation of empty dynamic array

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3889



--- Comment #6 from Sobirari Muhomori dfj1es...@sneakemail.com 2010-11-17 
12:23:21 PST ---
compare
---
foo[]=(cast(Foo[])[])[]; //copy empty array

foo[]=(cast(Foo[])null)[]; //copy null slice
---

The first line has all 3 meanings of []



[Issue 2095] covariance w/o typechecks = bugs

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2095



--- Comment #15 from Bruno Medeiros bdom.pub+deeb...@gmail.com 2010-11-17 
12:24:40 PST ---
For the record, the same problem also occurs with pointer types:

B* ba=[new B()].ptr;
A* aa=ba;
*aa=new A;
(*ba).methodB(); // (*ba) is expected to be B, but is A



[Issue 3889] Forbid null as representation of empty dynamic array

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3889



--- Comment #7 from Sobirari Muhomori dfj1es...@sneakemail.com 2010-11-17 
12:26:41 PST ---
ps Huh, [] actually has 4 possible meanings, I forgot about either array
operation or full slice operator.



[Issue 5203] dinstaller.exe v2.050 doesn't install anything

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5203


Matthias Pleh matthias.p...@gmx.at changed:

   What|Removed |Added

  Component|websites|installer
 AssignedTo|nob...@puremagic.com|bugzi...@digitalmars.com


--- Comment #1 from Matthias Pleh matthias.p...@gmx.at 2010-11-17 12:45:07 
PST ---
changed component from 'website' to 'installer'



[Issue 5093] improve error for importing std.c.windows.windows

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5093


simon s.d.hamm...@googlemail.com changed:

   What|Removed |Added

 Attachment #816 is|0   |1
   obsolete||


--- Comment #5 from simon s.d.hamm...@googlemail.com 2010-11-17 13:03:19 PST 
---
Created an attachment (id=818)
PATCH against rev 755: implement a module import backtrace for static assert

...and printing the correct import location might help



[Issue 2056] Const system does not allow certain safe casts/conversions involving deep composite types

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2056



--- Comment #3 from Sobirari Muhomori dfj1es...@sneakemail.com 2010-11-17 
14:21:51 PST ---
So this is a regression?



[Issue 2095] covariance w/o typechecks = bugs

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2095



--- Comment #16 from bearophile_h...@eml.cc 2010-11-17 15:13:10 PST ---
(In reply to comment #14)

 I'm afraid, there's nothing to test at runtime,

Some runtime data info may be added, then. There is already some of it for
classes and modules.


 and I thought the solution was
 already chosen to disallow mutable covariance at compile time.

I didn't know this.



[Issue 2095] covariance w/o typechecks = bugs

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2095


Jonathan M Davis jmdavisp...@gmx.com changed:

   What|Removed |Added

 CC||jmdavisp...@gmx.com


--- Comment #17 from Jonathan M Davis jmdavisp...@gmx.com 2010-11-17 15:27:51 
PST ---
It really should be stopped at compile time. There's not really a good reason
to allow it. As much as it first looks like mixing A[] and B[] when B : A
should work, it's a _really_ bad idea. Just because a container holds a type
which is the base type of another type does not mean that a container which
holds the derived type should be assignable/castable/convertable to one which
holds the base type.
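
For illustration, a small sketch of why the mutable conversion is unsafe; the
explicit cast below only simulates the implicit conversion under discussion:

class A {}
class B : A { void methodB() {} }

void main()
{
    B[] bs = [new B()];

    // If B[] implicitly converted to A[], the next two lines would be legal
    // without any cast, and would put a plain A into storage that the caller
    // still types as B[].
    A[] as = cast(A[]) bs;
    as[0] = new A();
    // bs[0].methodB();   // would now dispatch on an object that isn't a B
}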

Really, the only question is whether you can get away with it with some form of
const, and I believe that the consensus on it in the newsgroup last time that
this was discussed was that you couldn't. I'd have to go digging through the
archives though to find the exact thread.

This can and should be disallowed at compile time. It's a definite bug. It just
hasn't been fixed yet.



[Issue 5219] @noheap annotation

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5219



--- Comment #2 from bearophile_h...@eml.cc 2010-11-17 15:55:19 PST ---
This problem may be solved by a better profiler, or by an alternative to the
switch suggested in bug 5070.

If this idea is bad then it may be closed.



[Issue 2095] covariance w/o typechecks = bugs

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2095



--- Comment #18 from Stewart Gordon s...@iname.com 2010-11-17 16:57:33 PST ---
(In reply to comment #17)
 Really, the only question is whether you can get away with it with 
 some form of const, and I believe that the consensus on it in the 
 newsgroup last time that this was discussed was that you couldn't.  
 I'd have to go digging through the archives though to find the 
 exact thread.

I've no idea what discussion you're thinking of either.  But I've studied it -
see comment 4.  But to summarise, the following implicit conversions should be
allowed:

B[] to const(A)[]
const(B)[] to const(A)[]
immutable(B)[] to immutable(A)[]
immutable(B)[] to const(A)[]

 This can and should be disallowed at compile time.  It's a definite 
 bug.  It just hasn't been fixed yet.

Yes, in the spec.



[Issue 5231] New: BigInt lacks a normal toString()

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5231

   Summary: BigInt lacks a normal toString()
   Product: D
   Version: unspecified
  Platform: Other
OS/Version: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: Phobos
AssignedTo: nob...@puremagic.com
ReportedBy: jmdavisp...@gmx.com


--- Comment #0 from Jonathan M Davis jmdavisp...@gmx.com 2010-11-17 21:02:36 
PST ---
This program

import std.bigint;
import std.stdio;

void main()
{
auto b = BigInt(42);
writeln(b);
}


prints "BigInt" rather than "42". BigInt does not define a normal toString(). It
looks like it declares a version of toString() which takes a delegate and
format string in an attempt to have more control of what the string looks like.
However, this is useless for cases where you need an actual toString() -
particularly when functions which you have no control over call toString().
Normally, all types should define a toString() so that they can be printed, and
BigInt doesn't do that.

So, BigInt should declare a normal toString() - presumably one which prints out
the BigInt in decimal form.



[Issue 5219] @noheap annotation

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5219


nfx...@gmail.com changed:

   What|Removed |Added

 CC||nfx...@gmail.com


--- Comment #3 from nfx...@gmail.com 2010-11-17 21:59:28 PST ---
It's certainly a good idea for a systems programming language.
But I don't know what the hell D2 wants to be.



[Issue 5232] New: [patch] std.conv.to std.conv.roundTo report invalid overflows for very large numbers

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5232

   Summary: [patch] std.conv.to  std.conv.roundTo report invalid
overflows for very large numbers
   Product: D
   Version: D2
  Platform: Other
OS/Version: Windows
Status: NEW
  Keywords: patch
  Severity: minor
  Priority: P2
 Component: Phobos
AssignedTo: nob...@puremagic.com
ReportedBy: sandf...@jhu.edu


--- Comment #0 from Rob Jacques sandf...@jhu.edu 2010-11-17 22:00:45 PST ---
This comes from using roundTo!ulong with a real equal to ulong.max. Here's the
unit test:

real r = ulong.max;
assert(   (cast(ulong)r) == ulong.max , "Okay");
assert(  to!ulong(r) == ulong.max , "Okay");
assert( roundTo!ulong(r) == ulong.max , "Conversion overflow");

The reason for this is that toImpl uses casting, which implies truncation, but
tests for overflow by simple comparison and roundTo adds 0.5 to the source
value. Here's a patch for toImpl:

T toImpl(T, S)(S value)
    if (!implicitlyConverts!(S, T)
        && std.traits.isNumeric!(S) && std.traits.isNumeric!(T))
{
    enum sSmallest = mostNegative!(S);
    enum tSmallest = mostNegative!(T);
    static if (sSmallest < 0) {
        // possible underflow converting from a signed
        static if (tSmallest == 0) {
            immutable good = value >= 0;
        } else {
            static assert(tSmallest < 0);
            immutable good = value >= tSmallest;
        }
        if (!good) ConvOverflowError.raise("Conversion negative overflow");
    }
    static if (S.max > T.max) {
        // possible overflow
-       if (value > T.max) ConvOverflowError.raise("Conversion overflow");
+       if (value >= T.max + 1.0L) ConvOverflowError.raise("Conversion overflow");
    }
    return cast(T) value;
}

As a note, the roundTo unit test still fails because reals can't represent
ulong.max+0.5 and thus the implementation of roundTo effectively adds 1.0
instead of 0.5 (And thus it is still broken) Also, it should probably use
template constraints instead of static asserts. Here's a patch for both issues:

template roundTo(Target) {
Target roundTo(Source)(Source value)
if( isFloatingPoint!Source && isIntegral!Target )
{
return to!(Target)( round(value) );
}
}



[Issue 5231] BigInt lacks a normal toString()

2010-11-17 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5231


bearophile_h...@eml.cc changed:

   What|Removed |Added

 CC||bearophile_h...@eml.cc


--- Comment #1 from bearophile_h...@eml.cc 2010-11-17 23:10:31 PST ---
This is a dupe of my 4122

The lack of a normally usable toString is not acceptable.
