Re: D Programming Language source (dmd, phobos, etc.) has moved to github

2011-01-28 Thread Russel Winder
On Thu, 2011-01-27 at 13:33 -0800, Bill Baxter wrote:
 On Thu, Jan 27, 2011 at 1:13 PM, Nick Sabalausky a@a.a wrote:
[ . . . ]
  Yea, and that's pretty much the original thing I was saying: It's nice that
  Hg seems to have it, but Git doesn't appear to be particularly interested in
  it.
 
 I think it's very handy for all the reasons you said.  I don't think
 I've ever had to use a big hex string when dealing with Mercurial.
 Maybe once or twice max.  Most of the stuff you do with repo history
 as an individual developer is all about the local copy of the tree on
 your system.  Globally unique identifiers aren't needed for that.  It
 looks like Bzr does something similar.  Not sure why Git hasn't gotten
 this particular nicety.

Bazaar does indeed have revision numbers per branch.  Note that branch
and repository are different concepts in Bazaar, unlike Git and
Mercurial, where they are fundamentally the same.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: D Programming Language source (dmd, phobos, etc.) has moved to github

2011-01-28 Thread Mafi

Am 28.01.2011 12:30, schrieb Russel Winder:

On Thu, 2011-01-27 at 13:33 -0800, Bill Baxter wrote:

On Thu, Jan 27, 2011 at 1:13 PM, Nick Sabalausky a@a.a wrote:

[ . . . ]

Yea, and that's pretty much the original thing I was saying: It's nice that
Hg seems to have it, but Git doesn't appear to be particularly interested in
it.


I think it's very handy for all the reasons you said.  I don't think
I've ever had to use a big hex string when dealing with Mercurial.
Maybe once or twice max.  Most of the stuff you do with repo history
as an individual developer is all about the local copy of the tree on
your system.  Globally unique identifiers aren't needed for that.  It
looks like Bzr does something similar.  Not sure why Git hasn't gotten
this particular nicety.


Bazaar does indeed have revision numbers per branch.  Note that branch
and repository are different concepts in Bazaar, unlike Git and
Mercurial, where they are fundamentally the same.

I don't know Git, but in Mercurial parlance a branch is what you get when 
using the 'hg branch' command. It's like a tag, except that a commit 
belongs to exactly one branch ('default' is the default branch). All 
commits to all branches are stored together in one repository.


Mafi


Re: D Programming Language source (dmd, phobos,etc.) has moved to github

2011-01-28 Thread Vladimir Panteleev

On Thu, 27 Jan 2011 21:48:28 +0200, Don nos...@nospam.com wrote:

No. Just one repository number, and one revision number. You just need  
to be sensible in how the clone numbers are assigned. That's easy.

Basically every repository has a number of clone numbers it can assign.
Every clone gets a subset of that range. Dealing with the situation when  
the range has run out is a bit complicated, but quite doable, and there  
are steps you can take to make it a very rare occurrence.


Giving this some thought, I'm now confident that this is not possible. The  
assignment algorithm must take into account all imaginable cases (cloning  
hierarchies up to a certain depth). We're talking about an algorithm that  
must give a unique ID to each node in an implicit tree while knowing nothing  
about the state of the rest of the tree except the state of each new node's  
parent. The only sensible solutions will quickly generate humongous numbers  
in some common real-life scenarios.


I believe we're no longer arguing that these numbers must also be useful  
beyond their terseness and uniqueness?


I think it's easier to just use the first 5 characters of Git's SHA-1  
hash.
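Abbreviated hashes are in fact a stock Git feature; a minimal sketch in a throwaway repository (the commit message and identity below are made up, not from this thread):

```shell
# Sketch: abbreviated commit IDs in Git, as suggested above.
# Uses a throwaway repo; names and messages are hypothetical.
set -e
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
full=$(git rev-parse HEAD)             # full 40-char SHA-1
short=$(git rev-parse --short=5 HEAD)  # 5-char prefix (longer only if ambiguous)
case "$full" in "$short"*) echo "prefix ok: $short" ;; esac
```

Git will transparently lengthen the abbreviation if 5 characters would be ambiguous in a given repository, so the short form stays unique.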


I have almost zero interest in this stuff, so I won't say any  
more. I'm really just commenting that it's not difficult to envisage an  
algorithm which makes exposing a random hash unnecessary.


You're welcome to not reply.

--
Best regards,
 Vladimir  mailto:vladi...@thecybershadow.net


Re: A few improvements to SortedRange

2011-01-28 Thread Jacob Carlborg

On 2011-01-27 19:22, Andrei Alexandrescu wrote:

I just committed a few improvements to phobos' SortedRange, notably
adding various search policies and improving the existing routines.
These changes will be rolled out in the next dmd release. Until then,
feel free to suggest any improvements.

https://github.com/D-Programming-Language/phobos/commit/b6aabf2044001ea61af15834dcec5855680cf209


http://d-programming-language.org/cutting-edge/phobos/std_range.html#SearchPolicy


http://d-programming-language.org/cutting-edge/phobos/std_range.html#SortedRange



Andrei


When I view that page in Firefox 4 I get this error in a dialog window:

The language of this website could not be determined automatically. 
Please indicate the main language: (ISO 639-1)


In Internet Explorer I get:

Hyphenator.js says:
An Error occurred:
'window.prompt(...)' is null or not an object

--
/Jacob Carlborg


Re: DVM - D Version Manager

2011-01-28 Thread Jacob Carlborg

On 2011-01-27 23:34, Jérôme M. Berger wrote:

Jacob Carlborg wrote:

On 2011-01-26 21:04, Jérôme M. Berger wrote:

 You cannot. You need to modify the environment for the current
shell, which is the shell that the user is currently using (no
matter what else may or may not be installed on the system). This
has two consequences:

- You need to have some code that is run when the shell starts (i.e.
from .bashrc, .zshrc or .kshrc). That code will define the proper
aliases and/or functions (at the time being, this is mostly the
dvm function in dvm.sh (*)). This can be accomplished by having
a different version of this file for each shell;


Is it possible to detect what shell is running and then load the correct
version?


Since each shell sources a different file on startup, you can
source the correct version from the startup file. On installation,
the SHELL environment variable should tell you which shell is used.


Ah, right. Didn't think of that.
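Jérôme's detection idea can be sketched as follows; this is an illustrative sketch only, and the startup-file names are just the conventional defaults, not anything DVM actually ships:

```shell
# Sketch: at install time, pick the per-shell startup file to hook into,
# based on the SHELL environment variable. File names are the usual defaults.
case "${SHELL##*/}" in
    bash) rcfile=.bashrc  ;;
    zsh)  rcfile=.zshrc   ;;
    ksh)  rcfile=.kshrc   ;;
    *)    rcfile=.profile ;;  # conservative fallback for unknown shells
esac
echo "would append the dvm hook to ~/$rcfile"
```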


- You need to generate the contents of $dvm_result_path in a format
that the shell will understand. The easiest way to do that is
probably to define a few extra functions in dvm.sh to enable
setting environment variables in a portable way and handle
additional requirements (like builtin hash -r which is definitely
a bash-ism). Then generate the $dvm_result_path using these
functions instead of the normal shell syntax.


The contents of $dvm_result_path will only export one variable.
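For illustration, Jérôme's portable-helper idea might look like the sketch below; the helper names and the example path are hypothetical, not DVM's actual code:

```shell
# Sketch: dvm.sh defines small portable helpers, and the generated
# $dvm_result_path file calls them instead of shell-specific syntax.
# Function names and the example path are hypothetical.
__dvm_export() {
    eval "$1=\"\$2\"; export $1"   # works the same in bash/zsh/ksh
}
__dvm_rehash() {
    hash -r 2>/dev/null || true    # no-op where the builtin is absent
}
# A generated result file would then just contain lines like:
__dvm_export DMD_BIN "/home/user/.dvm/bin"
__dvm_rehash
echo "DMD_BIN=$DMD_BIN"
```

This keeps any shell-specific quirks in one place (the helper definitions) while the generated file stays shell-neutral.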


Don't you need to call builtin hash -r each time you change the
path (I don't know, since I'm not a bash user)? If not, why do you
need to call it in __dvm_setup_environment?


I don't know actually. I'll have to test that.
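One way to test it empirically is the hedged sketch below. Whether the first lookup actually gets cached varies by shell and situation, but after `hash -r` the newest binary is always found (all paths here are throwaway temporaries):

```shell
# Sketch: after a new binary appears earlier in an already-searched PATH,
# `hash -r` drops any stale cached location. Directory names are throwaway.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
printf '#!/bin/sh\necho from-b\n' > "$tmp/b/tool"
chmod +x "$tmp/b/tool"
PATH="$tmp/a:$tmp/b:$PATH"
tool > "$tmp/out1"                    # shell may now cache .../b/tool
printf '#!/bin/sh\necho from-a\n' > "$tmp/a/tool"
chmod +x "$tmp/a/tool"
hash -r                               # forget cached command locations
tool > "$tmp/out2"                    # guaranteed to find .../a/tool now
cat "$tmp/out2"
```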


 Jerome

(*) BTW, I hope you do not add the full contents of dvm.sh nor a
source dvm.sh in .bashrc the way it is now. Otherwise, a
misconfiguration may prevent the user from starting a shell!


OK, how else can I do the same thing? BTW this is how RVM (Ruby Version
Manager) works, where I got the idea from. The whole RVM is written in
shell script and it's sourced in .bashrc.


Remove the call to exit and wrap the rest of the file in an
if [[ ! -z "$dvm_prefix" ]] ; then ... fi block. The file will look like:

==8--
if [[ -z "$dvm_prefix" ]] ; then

    if [[ -n "$HOME" ]] ; then
        dvm_prefix="$HOME/."
    else
        echo "No \$dvm_prefix was provided and"
        echo "$(id | \sed -e 's/^[^(]*(//' -e 's/).*//') has no \$HOME defined."
    fi
fi

if [[ ! -z "$dvm_prefix" ]] ; then
    ...
fi
--8==

Jerome


Yeah, I kind of noticed that. The exit didn't work out that well.

--
/Jacob Carlborg


Re: A few improvements to SortedRange

2011-01-28 Thread Andrei Alexandrescu

On 1/28/11 7:27 AM, Jacob Carlborg wrote:

On 2011-01-27 19:22, Andrei Alexandrescu wrote:

I just committed a few improvements to phobos' SortedRange, notably
adding various search policies and improving the existing routines.
These changes will be rolled out in the next dmd release. Until then,
feel free to suggest any improvements.

https://github.com/D-Programming-Language/phobos/commit/b6aabf2044001ea61af15834dcec5855680cf209



http://d-programming-language.org/cutting-edge/phobos/std_range.html#SearchPolicy



http://d-programming-language.org/cutting-edge/phobos/std_range.html#SortedRange




Andrei


When I view that page in Firefox 4 I get this error in a dialog window:

The language of this website could not be determined automatically.
Please indicate the main language: (ISO 639-1)

In Internet Explorer I get:

Hyphenator.js says:
An Error occurred:
'window.prompt(...)' is null or not an object



Yah, I added experimental hyphenation. Looks much better, but I forgot 
how to set the language. Removed for now.


Andrei


Re: D Programming Language source (dmd, phobos, etc.) has moved to github

2011-01-28 Thread foo
Nick Sabalausky Wrote:

 Robert Clipsham rob...@octarineparrot.com wrote in message 
 news:ihnk80$fsf$1...@digitalmars.com...
  On 25/01/11 22:28, Nick Sabalausky wrote:
  I don't understand why you think I'm claiming anything of the sort. I 
  never
  said anything like that. I keep saying over and over and over and over 
  and
  over and over and over... changeset number **PLUS WHICH REPOSITORY (and
  maybe branch, depending how the given system chooses to work)**
 
  Person A has a repository with one branch, 'default' and has made two 
  commits. The current revision number is 1.
  Person B clones the repository and creates a branch, 'ver2', and makes two 
  commits. The revision number in his repository is now 3, it is still 1 in 
  person A's.
  Person A makes a commit, his revision 2. B switches back to the 'default' 
  branch, and makes another commit.  His revision 4. A pulls from the 
  default branch, now B's revision 4 == A's revision 3.
 
  It's very easy for the revision numbers to get out of sync like this.
 
 Right, I already understood all of that. But consider the following scenario 
 (And I realize that neither Hg nor Git work exactly like this, but until 
 Lutger mentioned the extra details in git describe --tags it sounded like 
 Git was much further away from this than Hg is):
 
 Adam starts a repository:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp
 
 It's Adam's project, so that could be considered the main official repo. 
 Adam makes five commits in the default default branch. His current 
 revision is:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/default/4
 
 Behind the scenes, that revision is associated with an SHA-1 hash of 
  df3a9... This same revision, thus, could also be referred to as:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/hashes/df3a9...
 
 But usually that's only used behind-the-scenes. Adam never cares about it 
 and neither do any of the other SuperApp contributors. But the DVCS often 
  uses it internally. (Are the hashes normally associated with a specific 
 branch? If so, then just consider it SuperApp/default/hashes/df3a9... 
 instead of SuperApp/hashes/df3a9...).
 
 Now, along comes Bob who clones Adam's SuperApp repo. Bob's copy of the same 
 revision is:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Bob/SuperApp/default/4
 
 Naturally, he also has the same hash as Adam:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Bob/SuperApp/hashes/df3a9...
 
 Then Adam and Bob start making commits, updates, pushes, pulls, etc, and 
 their revision numbers get out-of-sync. Adam and Bob are talking on a 
 newsgroup, and Adam mentions a super-cool improvement he just committed:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/default/81
 
 Adam doesn't know this, but that just happens to have the hash 78ac1... 
 and thus be AKA:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/hashes/78ac1...
 
 Bob wants Adam's new change, so he tells his DVCS to merge in:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/default/81
 
 No problem. Bob didn't ask his DVCS for r81, he asked it for Adam's r81. 
 This revision now becomes Bob's:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Bob/SuperApp/default/117
 dvcs://joes-fancy-dvcs-hosting.org/users/Bob/SuperApp/hashes/78ac1...
 
 Since Adam announced this on a NG, Carlos also saw it and grabbed the new 
 change:
 
 dvcs://carlos-coder.co.uk/SuperApp/default/94
 dvcs://carlos-coder.co.uk/SuperApp/hashes/78ac1...
 
 They all start to use it, but Bob discovers a critical problem with it. So 
 Bob tells the NG to avoid:
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Adam/SuperApp/default/81
 
 Or, Bob might have referred to it with his own revision instead (Maybe 
 Adam's account was temporarily down):
 
 dvcs://joes-fancy-dvcs-hosting.org/users/Bob/SuperApp/default/117
 
 So Carlos tells his DVCS to revert that URI. To do this, Carlos's DVCS looks 
  up Adam's or Bob's URI and finds the associated hash: 78ac1... Then it 
 looks at Carlos's own copy of the repo, sees the active branch is default, 
 and finds the revision in default associated with the hash 78ac1..., 
 which is:
 
 dvcs://carlos-coder.co.uk/SuperApp/default/94
 
 Which then gets reverted.
 
 
 

This looks to me like an awful solution in search of a problem. 
The commit hash is the internal ID mainly used by Git itself. If you want to 
communicate commits to other developers you have better means to do so. Let's 
emphasize that auto-incremented numbers are NOT those means. 

Commits have SHA-1 hashes just like people have ID numbers to identify them (or 
the social security number in the US). That doesn't mean I would call you in 
conversation person #122345445; I'd call you by a HUMAN GIVEN NAME (e.g. 
Nick).

If you want to refer to a git commit on the NG simply tag it with a meaningful 
name such as:
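foo's concrete example appears to have been cut off above. As an illustration of the idea only, with a made-up tag name in a throwaway repository:

```shell
# Sketch: refer to a commit by a human-readable tag instead of its raw hash.
# Repo, commit message, and tag name are all hypothetical.
set -e
cd "$(mktemp -d)"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "super-cool improvement"
git tag super-cool-fix        # a shareable, meaningful name for the commit
git rev-parse --verify -q super-cool-fix > /dev/null && echo tagged
```

Anyone who fetches the tag can then refer to "super-cool-fix" in discussion, and Git resolves it to the underlying hash.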

Re: D Programming Language source (dmd, phobos, etc.) has moved to github

2011-01-28 Thread Leandro Lucarella
Russel Winder, el 28 de enero a las 11:30 me escribiste:
 On Thu, 2011-01-27 at 13:33 -0800, Bill Baxter wrote:
  On Thu, Jan 27, 2011 at 1:13 PM, Nick Sabalausky a@a.a wrote:
 [ . . . ]
   Yea, and that's pretty much the original thing I was saying: It's nice 
   that
   Hg seems to have it, but Git doesn't appear to be particularly interested 
   in
   it.
  
  I think it's very handy for all the reasons you said.  I don't think
  I've ever had to use a big hex string when dealing with Mercurial.
  Maybe once or twice max.  Most of the stuff you do with repo history
  as an individual developer is all about the local copy of the tree on
  your system.  Globally unique identifiers aren't needed for that.  It
  looks like Bzr does something similar.  Not sure why Git hasn't gotten
  this particular nicety.
 
 Bazaar does indeed have revision numbers per branch.  Note that branch
 and repository are different concepts in Bazaar, unlike Git and
 Mercurial, where they are fundamentally the same.

WRONG about Git.

AFAIK it is only in Darcs that branch and repository are the same.

-- 
Leandro Lucarella (AKA luca) http://llucax.com.ar/
--
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
--
You can try the best you can
If you try the best you can
The best you can is good enough


Re: DMD for FreeBSD

2011-01-28 Thread Gour
On Thu, 27 Jan 2011 12:40:33 -0800
Walter Bright newshou...@digitalmars.com wrote:

 No. Actually, I like FreeBSD a lot.

Cool. It means we can count on it. :-)


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA





Re: DMD for FreeBSD

2011-01-28 Thread Jonathan M Davis
On Friday 28 January 2011 00:21:49 Gour wrote:
 On Thu, 27 Jan 2011 12:40:33 -0800
 
 Walter Bright newshou...@digitalmars.com wrote:
  No. Actually, I like FreeBSD a lot.
 
 Cool. It means we can count on it. :-)

Yes, though be aware that it's probably the least-tested out of the platforms, 
so it's probably the most likely to be buggy. Most everything is platform-
independent, but not everything is. A bug was fixed for FreeBSD just the other 
day where the FreeBSD build of druntime lacked CLOCK_MONOTONIC, and it's been 
several weeks since any of the Posix builds would have built without it, so 
obviously it's not getting tested all that much by the core druntime and phobos 
developers. However, if bugs are reported, we'll obviously fix them. It's just 
that they're more likely to get through due to a lack of testing and usage of 
dmd, druntime, and phobos on FreeBSD.

- Jonathan M Davis


Re: DMD for FreeBSD

2011-01-28 Thread Brad Roberts
On 1/28/2011 12:39 AM, Jonathan M Davis wrote:
 On Friday 28 January 2011 00:21:49 Gour wrote:
 On Thu, 27 Jan 2011 12:40:33 -0800

 Walter Bright newshou...@digitalmars.com wrote:
 No. Actually, I like FreeBSD a lot.

 Cool. It means we can count on it. :-)
 
 Yes, though be aware that it's probably the least-tested out of the 
 platforms, 
 so it's probably the most likely to be buggy. Most everything is platform-
 independent, but not everything is. A bug was fixed for FreeBSD just the 
 other 
 day where the FreeBSD build of druntime lacked CLOCK_MONOTONIC, and it's been 
 several weeks since any of the Posix builds would have built without it, so 
 obviously it's not getting tested all that much by the core druntime and 
 phobos 
 developers. However, if bugs are reported, we'll obviously fix them. It's 
 just 
 that they're more likely to get through due to a lack of testing and usage of 
 dmd, druntime, and phobos on FreeBSD.
 
 - Jonathan M Davis

If I can get an account of a reasonably well connected, always on the 'net, 
freebsd (x86, 32 bit, reasonably modern
release) box, I'll be happy to setup a continuous tester on it (see also 
http://d.puremagic.com/test-results/).  That'll
help keep it both building and passing the test suite.

Later,
Brad


Re: Unilink - alternative linker for win32/64, DMD OMF extensions?

2011-01-28 Thread Trass3r
I wonder why this tool isn't promoted in any way: no website, etc.
I also wonder whether he would be willing to make it open-source. Then we could 
help it support D, and if that works someday we could even include D symbol 
demangling :)


Re: Unilink - alternative linker for win32/64, DMD OMF extensions?

2011-01-28 Thread Dmitry Olshansky

On 28.01.2011 13:21, Trass3r wrote:

I wonder why this tool isn't promoted in any way, no website etc.

That's actually strange and funny in its own right.
I got to this FTP only because I was 100% sure the tool existed, as a 
friend of mine had suggested using it instead of Borland's crappy linker 
a few years ago.

I also wonder if he was willing to make it open-source. Then we could help 
support D and if it works someday, we can even include D symbol demangling :)

First things first, we need it to fully support DMD. Then any 
political/licensing terms can be discussed.


--
Dmitry Olshansky



Re: Is D still alive?

2011-01-28 Thread bearophile
Jonathan M Davis:

 I generally end up using unit tests to verify that stuff works correctly and 
 then 
 throw exceptions on bad input. So while I like having DbC built in, I don't 
 end 
 up using it all that much. It's primarily invariant that I end up using 
 though, 
 and that's harder to do inside of the member functions.

I think the problem here is that you are not using your D tools well enough yet:
- Preconditions allow you to save some tests in your unittests, because you 
have less need to test many input boundary conditions.
- Postconditions are useful to save some code to test the correctness of the 
results inside your unittests. You still need to put many tricky but correct 
input conditions inside your unittests, but then the postcondition will test 
that the outputs are inside the class of the correct outputs, and you will need 
less unittest code to test that the results are exactly the expected ones in 
some important situations.
- The missing "old" feature, once somehow implemented, would allow you to 
remove some other unittests, because you wouldn't need a unittest any more to 
test that the output is generally correct given a certain class of input.
- Class/struct invariants do something unittests have a hard time doing: 
testing the internal consistency of the class/struct data structures at all 
times, and catching an inconsistency as soon as possible. This allows you to 
catch bugs very early, even before the results reach the unittesting code, so 
their work is not much duplicated by unit testing.
- Loop invariants can't be replaced well enough by unittests. Again, they help 
you find bugs very early, often no more than a few lines of code after where 
the bug is. Unittests are not really able to do this.
- Currently you can't use invariants in some situations because of some DMD 
bugs.
- You must look a bit forward too. If D has some success, then someone will 
try to write a tool to test some contracts at compile time. Compile-time 
testing of contracts is useful because it's the opposite of sampling: it's 
like an extension of the type system, allowing you to be sure of something for 
all possible cases.

Unit tests are useful as a sampling means: they allow you to assert that your 
function does exactly as expected for some specific input-output pairs. DbC 
doesn't perform sampling; it tests more general properties of your inputs or 
outputs (and, if you have some kind of implementation of the "old" feature, 
also general properties between inputs and their outputs), so DbC testing is 
wider but less deep and less specific than unit testing.

Generally with unittesting you can't be sure to have covered all interesting 
input-output cases (code coverage tools here help but don't solve the problem. 
Fuzz-testing tools are able to cover other cases), while with DbC you can't be 
sure your general rules about correct inputs, correct outputs (or even general 
rules about what a correct input-output pair is) are accurate enough to catch 
all bad situations and computations.

So generally DbC and unittests are suited to different purposes; using them 
both, you are able to complement each one's weak points and improve your D 
coding.

I also suggest trying to write the contracts first and the code later; 
sometimes it helps.

Bye,
bearophile


Re: Why Ruby?

2011-01-28 Thread Bruno Medeiros

On 21/12/2010 20:46, Steven Schveighoffer wrote:

On Tue, 21 Dec 2010 14:50:21 -0500, Bruno Medeiros
brunodomedeiros+spam@com.gmail wrote:


In a less extreme view, it is not about controlling stupidity, but
controlling creativity (a view popular amongst artist/painter
programmers). So here the programmers are not dumb, but still they
need to be kept in line with rules, constraints, specifications,
strict APIs, etc.. You can't do anything too strange or out of the
ordinary, and the language is a reflection of that, especially with
regards to restrictions on dynamic typing (and other dynamic stuff
like runtime class modification).


Those aren't bugs, they are the artistic qualities of my program! It's a
statement on the political bias against bugs, I mean most people kill
bugs without a second thought!

;)

-Steve


I'm not sure if my meaning was fully understood there, but I wasn't 
implying that a programmer would try to masquerade problems in code by 
saying they are artistic or something.
Rather, it was a not-so-thinly veiled reference to (and critique of) Paul 
Graham's painter hacker/programmer archetype.


--
Bruno Medeiros - Software Engineer


Re: const(Object)ref is here!

2011-01-28 Thread Bruno Medeiros

On 27/01/2011 18:12, Andrei Alexandrescu wrote:

On 1/27/11 9:33 AM, Bruno Medeiros wrote:

On 21/12/2010 19:17, Andrei Alexandrescu wrote:

On 12/21/10 12:19 PM, Steven Schveighoffer wrote:

On Tue, 21 Dec 2010 13:10:12 -0500, Bruno Medeiros
brunodomedeiros+spam@com.gmail wrote:


On 06/12/2010 19:00, Jonathan M Davis wrote:

On Monday, December 06, 2010 05:41:42 Steven Schveighoffer wrote:

On Mon, 06 Dec 2010 04:44:07 -0500, spirdenis.s...@gmail.com
wrote:

On Mon, 6 Dec 2010 00:31:41 -0800

Jonathan M Davisjmdavisp...@gmx.com wrote:

toString() (or writeFrom() or whatever
it's going to become)


guess it was writeTo() ;-) but writeFrom is nice as well, we
should
find some useful use for it


It was proposed as writeTo, but I'm not opposed to a different name.


I have no problem with writeTo(). I just couldn't remember what it
was and
didn't want to take the time to look it up, and the name isn't as
obvious as
toString(), since it's not a standard name which exists in other
languages, and
it isn't actually returning anything. Whether it's to or from would
depend on
how you look at it - to the given delegate or from the object. But
writeTo() is
fine. Once it's used, it'll be remembered.



I don't think it's entirely fine. It should at least have
string/String somewhere in the name. (I mentioned this on the
other original thread, although late in time)


First, I'll say that it's not as important to me as it seems to be to
you, and I think others feel the same way. writeTo seems perfectly fine
to me, and the 'string' part is implied by the char[] parameter for the
delegate.

Changing the name to contain 'string' is fine as long as:

1) it's not toString. This is already established as returning a
string in both prior D and other languages. I think this would be too
confusing.
2) it's short. I don't want writeAsStringTo or something similar.

What did you have in mind?

-Steve


Conversion to text should be called toText. That makes the essence of
the function visible (it emits characters) without tying the
representation of the text.

Andrei


I don't understand this point. The representation of the text is tied,
it's going to be char[] ( aka UTF-8). Unless you were planning to have
overloads of toText, but that sounds like an awful idea.


Could be wchar or dchar.

Andrei



You mean to say that there would be three possible signatures for toText 
(for char[], wchar[], dchar[]) that the class coder can choose from?
But of course, the coder would only need to define one, right? 
(Otherwise that would be the awful idea.)



--
Bruno Medeiros - Software Engineer


Re: Why Ruby?

2011-01-28 Thread Bruno Medeiros

On 21/12/2010 20:55, retard wrote:

My experiences in several language communities have shown that
programmers easily resort to emotional, completely irrational arguments.
It's best to avoid these kind of arguments as much as possible.
Passionate people seem to favor this juxtapositioning.


http://www.penny-arcade.com/comic/2004/3/19/
Penny Arcade is about games, but I'd say that theory applies to just 
about any internet community, including programmers.

Especially in the FOSS world.


--
Bruno Medeiros - Software Engineer


Re: Purity

2011-01-28 Thread Bruno Medeiros

On 27/01/2011 21:05, Simen kjaeraas wrote:

Bruno Medeiros brunodomedeiros+spam@com.gmail wrote:


string[] func(string arg) pure {
string elem2 = blah.idup;
return [ arg, elem2 ];
}

The compiler *cannot* know (well, looking at the signature only of
course) how to properly deepdup the result from the first return
value, so as to give the exact same result as if func was called again.


Could you please elucidate, as I am unsure of your reasoning for saying
the compiler cannot know how to deepdup the result.




string str = blah;
string[] var1 = func(str);
string[] var2 = func(str);

How can the compiler optimize the second call to func, the one that is 
assigned to var2, such that it deepdups var1 instead of calling func 
again? Which code would be generated?


The compiler can't do that because, of all the transitive data of var1, 
the compiler doesn't know which of it was newly allocated by func and 
which of it was reused from func's parameters or some other global inputs.


--
Bruno Medeiros - Software Engineer


Re: Who here actually uses D?

2011-01-28 Thread Bruno Medeiros

On 02/01/2011 02:01, Walter Bright wrote:

Caligo wrote:

I don't understand why so much time and effort as been spent, perhaps
wasted, on multiple compilers and standard libraries. I also don't
understand why Walter insists on having his own compiler when D has
finally been declared an open source project. LLVM and GCC are very
mature projects and they could have been used for the reference
implementation. If Walter was in charge of the GDC or LDC project,
then we would have had a better compiler than what we have today. That
way I think D as a new modern programming language could have been in
a much better position. You also don't need to pay people to fix bugs
or do whatever that is needed to be done so long as you have a healthy
and growing open source community. I don't think we have that yet, and
perhaps the fact that Walter comes from the closed-source proprietary
world is part of the reason.


The problems D has had have rarely been the back end.


Putting aside issues with bugs (not that they are not important), the 
choice of backend also has important consequences regarding what 
debugger tools one can use (at least on Windows).


--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread spir

On 01/28/2011 12:36 PM, bearophile wrote:


I think the problem here is that you are not using your D tools well enough yet:
- Preconditions allow you to save some tests in your unittests, because you 
have less need to test many input boundary conditions.
- Postconditions are useful to save some code to test the correctness of the 
results inside your unittests. You still need to put many tricky but correct 
input conditions inside your unittests, but then the postcondition will test 
that the outputs are inside the class of the correct outputs, and you will need 
less unittest code to test that the results are exactly the expected ones in 
some important situations.


Very interesting points, thank you.


- The missing "old" feature, once somehow implemented, would allow you to 
remove some other unittests, because you wouldn't need a unittest any more to 
test that the output is generally correct given a certain class of input.


What is this old feature?



Generally with unittesting you can't be sure to have covered all interesting 
input-output cases (code coverage tools here help but don't solve the problem. 
 Fuzz-testing tools are able to cover other cases), while with DbC you can't be 
sure your general rules about correct inputs, correct outputs (or even general 
rules about what a correct input-output pair is) are accurate enough to catch 
all bad situations and computations.

 So generally DbC and unittests are suited to different purposes; using them 
 both, you are able to complement each one's weak points and improve your D 
 coding.


Ditto.
I definitely need a mental jump to think in terms of DbC; I nearly never use 
it, I simply don't think of it.



 I also suggest trying to write the contracts first and the code later; 
 sometimes it helps.


Just what I sometimes do for tests ;-)

Denis
--
_
vita es estrany
spir.wikidot.com



Re: Patterns of Bugs

2011-01-28 Thread Bruno Medeiros

On 08/01/2011 09:14, Walter Bright wrote:

Jonathan M Davis wrote:

On Saturday 08 January 2011 00:16:13 Walter Bright wrote:

Jérôme M. Berger wrote:

When I built my latest PC, I saw in the MB manual that it would use

speech synthesis on the PC speaker to report errors. So I tried to
power on the PC without having plugged either CPU or RAM and it
started to say NO CPU FOUND! NO CPU FOUND! in a loop with a
hilarious Asian accent and the kind of rasping voice that used to
characterized old DOS games. Pretty fun ;)

That's a heckuva lot better than an undocumented beep pattern which
is what
I got.


LOL. The beeps for mine are documented in the motherboard manual, but
the beeps are so hard to distinguish from one another, that it borders
on useless. A voice would certainly be better.


Yes, what is the difference between a slow beep and a fast beep?

While I'm ranting, does anyone else have trouble remembering which of O
and | is on, and which is off? What's the matter with on and off?


Hum, I never had problems with that: I always assumed the | meant a 
closed electrical circuit (i.e., you closed the circuit with the switch), 
thus naturally it meant on.


--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread Daniel Gibson
Am 28.01.2011 13:30, schrieb spir:
 On 01/28/2011 12:36 PM, bearophile wrote:
 
 I think the problem here is that you are not using your D tools well enough 
 yet:
 - Preconditions allow you to save some tests in your unittests, because you
 have less need to test many input boundary conditions.
 - Postconditions are useful to save some code to test the correctness of the
 results inside your unittests. You still need to put many tricky but correct
 input conditions inside your unittests, but then the postcondition will test
 that the outputs are inside the class of the correct outputs, and you will
 need less unittest code to test that the results are exactly the expected 
 ones
 in some important situations.
 
 Very intersting points, thank you.
 
 - The missing old feature once somehow implemented allows to remove some
 other unittests, because you don't need a unittest any more to test the 
 output
 is generally correct given a certain class of input.
 
 What is this old feature?

If you've got a function fun(int x) { ... ; x++; ... }
then you can use old.x (or something like that) in the DbC block at the end to
access the original value of x, like assert(x == old.x + 1).
After hearing about DbC in university I thought that having old was fundamental
for DbC... but D doesn't support it.
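Since D lacks `old`, the usual workaround is to capture the pre-state by hand at the top of the function and check it on exit. A minimal sketch (the names `fun`, `oldX` are illustrative, not from any real API):

```d
// Emulating the missing `old` feature by hand: capture the pre-state
// at entry and assert the postcondition over it on exit.
int x = 10;

void fun()
{
    immutable oldX = x;                 // manual stand-in for `old.x`
    scope(exit) assert(x == oldX + 1);  // postcondition using the pre-state
    x++;
}

void main()
{
    fun();
    assert(x == 11);
}
```

The drawback compared to real `old` support is that the check lives in the function body rather than in an `out` contract, so it doesn't participate in contract inheritance.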

Cheers,
- Daniel


Re: Patterns of Bugs

2011-01-28 Thread Daniel Gibson
Am 28.01.2011 13:33, schrieb Bruno Medeiros:
 On 08/01/2011 09:14, Walter Bright wrote:
 Jonathan M Davis wrote:
 On Saturday 08 January 2011 00:16:13 Walter Bright wrote:
 Jérôme M. Berger wrote:
 When I built my latest PC, I saw in the MB manual that it would use

 speech synthesis on the PC speaker to report errors. So I tried to
 power on the PC without having plugged either CPU or RAM and it
 started to say NO CPU FOUND! NO CPU FOUND! in a loop with a
 hilarious Asian accent and the kind of rasping voice that used to
 characterize old DOS games. Pretty fun ;)
 That's a heckuva lot better than an undocumented beep pattern which
 is what
 I got.

 LOL. The beeps for mine are documented in the motherboard manual, but
 the beeps are so hard to distinguish from one another that it borders
 on useless. A voice would certainly be better.

 Yes, what is the difference between a slow beep and a fast beep?

 While I'm ranting, does anyone else have trouble remembering which of O
 and | is on, and which is off? What's the matter with on and off?
 
 Hum, I never had problems with that: I always assumed the | meant a closed
 electrical circuit (ie, you closed the circuit with the switch), thus 
 naturally
 it meant on.
 

O looks like a *closed* circle to me so this isn't that helpful IMHO ;)


Re: Is D still alive?

2011-01-28 Thread bearophile
spir:

 What is this old feature?

It's a basic DbC feature that's currently missing in D because of some 
implementation troubles (and maybe also because Walter is a bit disappointed 
about DbC).

Example: a class member function performs a certain computation and changes 
some attributes. In the postcondition you want to test that such changes are 
correct. To do this well it's useful to know what the original state of those 
attributes was. This is what the old feature allows you to do. Similar things 
are possible with free functions too.

So preconditions in a function allow you to assert that general rules about 
inputs are fulfilled, postconditions allow you to assert that general rules 
about its outputs are fulfilled, and the old (pre-state) feature allows you 
to assert that general rules about its input-output pairs are fulfilled. So 
it's not a small thing :-)
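As a hedged sketch of that scenario (the `Account` class here is purely illustrative): a member function whose postcondition only makes sense relative to an attribute's previous value, with the pre-state captured manually since D has no `old`:

```d
class Account
{
    private int balance;

    void deposit(int amount)
    {
        assert(amount > 0);                     // rule about the input
        immutable oldBalance = balance;         // what `old` would capture
        scope(exit) assert(balance == oldBalance + amount); // input-output rule
        balance += amount;
    }
}

void main()
{
    auto acc = new Account;
    acc.deposit(5);
    acc.deposit(7);
    // balance is private; the checks above ran internally on each call
}
```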


It was discussed three or more times:

http://www.digitalmars.com/d/archives/digitalmars/D/why_no_old_operator_in_function_postconditions_as_in_Eiffel_54654.html

http://www.digitalmars.com/d/archives/digitalmars/D/Communicating_between_in_and_out_contracts_98252.html


I definitely need a mental jump to think in terms of DbC; I nearly never 
use it, and simply don't think of it.

DbC looks like a very simple thing, but you need some time, thinking, and 
reading about what it is and what its purposes are, to learn to use it :-)

Bye,
bearophile


DSource (Was: Re: Moving to D )

2011-01-28 Thread Bruno Medeiros

On 07/01/2011 00:34, David Nadlinger wrote:

On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:

Mercurial on dsource.org …


Personally, I'd really like to persuade Walter, you, and whoever else
actually decides this to consider hosting the main repository at an
external place like GitHub or Mercurial, because DSource has been having
some real troubles with stability, although it got slightly better again
recently. The problem is somewhat alleviated when using a DVCS, but
having availability issues with the main source repositories is not quite the best
form of advertisement for a language.

Additionally, the UI of GitHub supports the scenario where only a few
people (or Walter alone) actually have commit/push access to the main
repository really well through cheap forks which stay logically
connected to the main repository and merge requests. The ability to make
comments on specific (lines in) commits, also in combination with pull
requests, is awesome as well.



I have to agree and reiterate this point. The issue of whether it is 
worthwhile for D to move to a DVCS (and which one of the two) is 
definitely a good thing to consider, but the issue of DSource vs. other 
code hosting sites is also quite a relevant one. (And not just for DMD 
but for any project.)


I definitely thank Brad for his support and work on DSource, however I 
question if it is the best way to go for medium or large-sized D 
projects. Other hosting sites will simply offer better/more features 
and/or support, stability, fewer bugs, spam-protection, etc.
What we have here is exactly the same issue of NIH syndrome vs DRY, but 
applied to hosting and development infrastructure instead of the code 
itself. But I think the principle applies just the same.


--
Bruno Medeiros - Software Engineer


Re: DSource (Was: Re: Moving to D )

2011-01-28 Thread Daniel Gibson
Am 28.01.2011 14:07, schrieb Bruno Medeiros:
 On 07/01/2011 00:34, David Nadlinger wrote:
 On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:
 Mercurial on dsource.org …

 Personally, I'd really like to persuade Walter, you, and whoever else
 actually decides this to consider hosting the main repository at an
 external place like GitHub or Mercurial, because DSource has been having
 some real troubles with stability, although it got slightly better again
 recently. The problem is somewhat alleviated when using a DVCS, but
 having availability issues with the main source repositories is not quite the best
 form of advertisement for a language.

 Additionally, the UI of GitHub supports the scenario where only a few
 people (or Walter alone) actually have commit/push access to the main
 repository really well through cheap forks which stay logically
 connected to the main repository and merge requests. The ability to make
 comments on specific (lines in) commits, also in combination with pull
 requests, is awesome as well.

 
 I have to agree and reiterate this point. The issue of whether it is 
 worthwhile
 for D to move to a DVCS (and which one of the two) is definitely a good thing 
 to
 consider, but the issue of DSource vs. other code hosting sites is also quite 
 a
 relevant one. (And not just for DMD but for any project.)
 
 I definitely thank Brad for his support and work on DSource, however I 
 question
 if it is the best way to go for medium or large-sized D projects. Other 
 hosting
 sites will simply offer better/more features and/or support, stability, less
 bugs, spam-protection, etc..
 What we have here is exactly the same issue of NIH syndrome vs DRY, but 
 applied
 to hosting and development infrastructure instead of the code itself. But I
 think the principle applies just the same.
 

D has already moved to github, see D.announce :)


Re: Git Contributors Guide (Was: Re: destructor order)

2011-01-28 Thread Vladimir Panteleev
On Thu, 27 Jan 2011 19:17:50 +0200, Ulrik Mikaelsson  
ulrik.mikaels...@gmail.com wrote:



2011/1/27 Vladimir Panteleev vladi...@thecybershadow.net:

On Thu, 27 Jan 2011 00:26:22 +0200, Ulrik Mikaelsson
ulrik.mikaels...@gmail.com wrote:


The way I will show here is to gather up your changes in a so-called
bundle, which can then be sent by mail or attached in a bug-tracker.
First, some terms that might need explaining.

Many open-source projects that use git use patches generated by the 
format-patch command. Just type git format-patch origin. Unless you 
have a LOT of commits, patches are better than binary bundles, because 
they are still human-readable (they contain the diff), and they also 
preserve the metadata (unlike plain diffs).

You can even have git e-mail these patches to the project's mailing 
list. The second and following patches are sent as a reply to the first 
patch, so they don't clutter the list when viewed in threading mode.


True. The only problem with this, I think, is getting the patch out
from web-based mail-readers. Key-parts of the metadata about the
commit lies in the mail-header, which might not always be easily
accessible in web-readers. Also, send-email is for some reason no
longer included in the git-version that comes with Ubuntu 10.10.
Perhaps it's been removed in later versions of git.


You can have send-email attach the patch as an attachment (see the  
git-format-patch man page).


--
Best regards,
 Vladimirmailto:vladi...@thecybershadow.net


Re: Why Ruby?

2011-01-28 Thread so

http://www.penny-arcade.com/comic/2004/3/19/
Penny Arcade is about games, but I'd say that theory applies to just  
about any internet community, including programmers.

Especially in the FOSS world.


I have seen that and similar pictures many times, but never fully 
understood what they are trying to say.


1. Anonymity is bad? (My answer: if anonymity means using nicks instead 
of names, certainly not)
2. We are all nothing but total fuckwads (whatever that means)? (My 
answer: possibly yes)
3. What does anonymity actually mean on the internet anyway? (My answer: 
I am not sure)


For example, I am using a nick (instead of a random name, which might 
just as well be forged), because I hate seeing my name everywhere, good 
or bad.


Re: DSource (Was: Re: Moving to D )

2011-01-28 Thread Bruno Medeiros

On 28/01/2011 13:13, Daniel Gibson wrote:

Am 28.01.2011 14:07, schrieb Bruno Medeiros:

On 07/01/2011 00:34, David Nadlinger wrote:

On 1/6/11 11:47 PM, Andrei Alexandrescu wrote:

Mercurial on dsource.org …


Personally, I'd really like to persuade Walter, you, and whoever else
actually decides this to consider hosting the main repository at an
external place like GitHub or Mercurial, because DSource has been having
some real troubles with stability, although it got slightly better again
recently. The problem is somewhat alleviated when using a DVCS, but
having availability issues with the main source repositories is not quite the best
form of advertisement for a language.

Additionally, the UI of GitHub supports the scenario where only a few
people (or Walter alone) actually have commit/push access to the main
repository really well through cheap forks which stay logically
connected to the main repository and merge requests. The ability to make
comments on specific (lines in) commits, also in combination with pull
requests, is awesome as well.



I have to agree and reiterate this point. The issue of whether it is worthwhile
for D to move to a DVCS (and which one of the two) is definitely a good thing to
consider, but the issue of DSource vs. other code hosting sites is also quite a
relevant one. (And not just for DMD but for any project.)

I definitely thank Brad for his support and work on DSource, however I question
if it is the best way to go for medium or large-sized D projects. Other hosting
sites will simply offer better/more features and/or support, stability, less
bugs, spam-protection, etc..
What we have here is exactly the same issue of NIH syndrome vs DRY, but applied
to hosting and development infrastructure instead of the code itself. But I
think the principle applies just the same.



D has already moved to github, see D.announce :)


I know, I know. (I am up-to-date on D.announce, just not on D and 
D.bugs)
I still wanted to make that point though. First, for retrospection, but 
also because it may still apply to a few other DSource projects (current 
or future ones).


--
Bruno Medeiros - Software Engineer


Re: std.unittests for (final?) review

2011-01-28 Thread Bruno Medeiros

On 06/01/2011 02:54, Ary Borenszweig wrote:

I prefer assert, assertFalse, assertEqual and assertNotEqual.

Compare this:
assertPredicate!"a < b"(1 + 1, 3);

To this:
assert(1 + 1 < 3)

Or to this:

assertLess(1 + 1, 3)



I agree with Ary here, at least with regard to assertEquals. I'm a very 
test and contracts minded programmer so I use asserts a lot, and 
specifically assertEquals (possibly assertSame as well) is common enough 
that I want a shortcut to type them, I don't want to have to type the 
predicate. And it's more readable as well. (The implementation can use 
assertPredicate, that's good, but that's not the issue)


And yes, for a == b I won't have to type the predicate, since it is the 
default:

  assertPredicate(1 + 1, 2);

However it still suffers from the readability issue, and it won't work 
for a similar predicate I also use quite often, assertAreEqual, 
which accepts nulls:

(a == b) || (a != null && b != null && a.equals(b))

In practice it won't be a problem at all because we can define aliases, 
like Jonathan said, however it would be nice if the std library would be 
mindful of this and provide shortcuts for the very common predicates.
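A hedged sketch of what such shortcuts could look like; the names `assertEquals` and `assertAreEqual` follow this post, not any actual Phobos API:

```d
import core.exception : AssertError;

// Shortcut for the most common predicate, a == b.
void assertEquals(T)(T actual, T expected,
                     string file = __FILE__, size_t line = __LINE__)
{
    if (actual != expected)
        throw new AssertError("values are not equal", file, line);
}

// Null-tolerant equality for class references: equal when both are null,
// or when both are non-null and opEquals agrees.
void assertAreEqual(T : Object)(T a, T b,
                                string file = __FILE__, size_t line = __LINE__)
{
    if (!(a is b || (a !is null && b !is null && a == b)))
        throw new AssertError("references are not equal", file, line);
}

void main()
{
    assertEquals(1 + 1, 2);
    Object o, p;                 // both default to null
    assertAreEqual(o, p);        // considered equal, no null dereference
}
```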





--
Bruno Medeiros - Software Engineer


Re: std.unittests for (final?) review

2011-01-28 Thread Bruno Medeiros

On 06/01/2011 02:54, Ary Borenszweig wrote:


I'm not a big fan of IDEs anymore but having many functions also helps 
with the autocompletion, to get the code right on the first shot, 
instead of assertPredicate!"insertYourStringHere"(...).


(!)
Man, what have those Ruby guys done to you? You've been seduced by the 
power of the Dark Side... :P


--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread Steven Schveighoffer

On Thu, 27 Jan 2011 04:50:32 -0500, retard r...@tard.com.invalid wrote:


Wed, 26 Jan 2011 14:39:08 -0500, Steven Schveighoffer wrote:


I will warn you, once you start using D, you will not want to use
something else.  I cringe every day when I have to use PHP for work.


Nice trolling.


*shrug* call it whatever you want, it's true for me, and probably most  
everyone here.


-Steve


Re: eliminate junk from std.string?

2011-01-28 Thread Bruno Medeiros

On 11/01/2011 23:00, Andrei Alexandrescu wrote:

On 1/11/11 11:21 AM, Ary Borenszweig wrote:

Why care where they come from? Why not make them intuitive? Say, like,
Always
camel case?


If there's enough support for this, I'll do it.

Andrei


+1 vote.

--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread Steven Schveighoffer

On Thu, 27 Jan 2011 04:59:18 -0500, retard r...@tard.com.invalid wrote:


Wed, 26 Jan 2011 15:35:19 -0500, Steven Schveighoffer wrote:


I'd suggest to anyone looking to use D for something really big to try
and prove out how well D will perform for you by coding up bits of
your whole project that you think will be needed.  Hopefully, you can do
everything without hitting a mercy bug and then you can write your full
project in it.


I think this reveals a lot about D. You still need to prove things. Or
maybe the community members in general aren't very good developers; they
can't see the potential of this language. The fact is, no matter what
language you choose, if it isn't a complete joke, you can finish the
project. (I'm assuming the community members here won't be writing any
massive projects which are not possible to do in C++ or PHP or Java.)


I fully see the potential of the language, but I've also experienced that  
a one (or two or three) man compiler team does not fix bugs on *my*  
schedule.  I can't buy enterprise support, so any bugs I may hit, I'm  
just going to have to wait for Walter and Co. to get around to them.  Not  
a problem for me, because I'm not developing with D professionally.  But  
if I was going to base a software company on D, I'd be very nervous at  
this prospect.


I find that I can work around many of D's bugs, but there are just some  
that you have to throw your hands up and wait (I don't have time to learn  
how a compiler works, and fix D's compiler).  I think as D matures and  
hopefully gets more enterprise support, these problems will be history.



I don't see any need to prove how well Haskell works. Even though it's an
"avoid success at all costs" experimental research language. It just
works. I mean to the extent that I'm willing to go with these silly test
projects that try to prove something.


The statements I made are not a property of D, they are a property of the  
lack of backing/maturity.  I'm sure when Haskell was at the same maturity  
stage as D, and if it had no financial backing/support contracts, it would  
be just as much of a gamble.


You seem to think that D is inherently flawed because of D, but it's  
simply too young for some tasks.  It's rapidly getting older, and I think  
in a year or two it will be mature enough for most projects.


-Steve


Re: Is D still alive?

2011-01-28 Thread Steven Schveighoffer
On Thu, 27 Jan 2011 21:37:31 -0500, Ellery Newcomer  
ellery-newco...@utulsa.edu wrote:



On 01/27/2011 05:41 PM, Walter Bright wrote:


Unit testing has produced a dramatic improvement in coding.


agreed. unit testing (maybe with dbc, I don't remember) was the only  
reason I noticed issue 5364.


I created 4 or 5 bugs against dmd when I added full unit tests to  
dcollections.


So unittests also help D probably as much as they do your code ;)

-Steve


Re: const(Object)ref is here!

2011-01-28 Thread Andrei Alexandrescu

On 1/28/11 5:37 AM, Bruno Medeiros wrote:

You mean to say that there would be three possible signatures for toText
(for char[], wchar[], dchar[]), that the class coder can choose?
But of course, the coder would only need to define one, right?
(otherwise that would be the awful idea)


Probably standardizing on one width is a good idea.

Andrei


Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 16/01/2011 19:38, Andrei Alexandrescu wrote:

On 1/15/11 10:47 PM, Nick Sabalausky wrote:

Daniel Gibsonmetalcae...@gmail.com  wrote in message
news:igtq08$2m1c$1...@digitalmars.com...
There's two reasons it's good for games:

1. Like you indicated, to get a better framerate. Framerate is more
important in most games than resolution.

2. For games that aren't really designed for multiple resolutions,
particularly many 2D ones, and especially older games (which are often
some
of the best, but they look like shit on an LCD).


It's a legacy issue. Clearly everybody except you is using CRTs for
gaming and whatnot. Therefore graphics hardware producers and game
vendors are doing what it takes to adapt to a fixed resolution.


Actually, that's not entirely true, though not because of old games. 
Some players of hardcore twitch FPS games (like Quake), especially 
professional players, still use CRTs, due to the near-zero input lag 
that LCDs, although having improved in that regard, are still not able 
to match exactly.


But other than that, I really see no reason to stick with CRTs vs a good 
LCD, yeah.



--
Bruno Medeiros - Software Engineer


Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 16/01/2011 04:47, Nick Sabalausky wrote:

There's two reasons it's good for games:

1. Like you indicated, to get a better framerate. Framerate is more
important in most games than resolution.



This reason was valid at least at some point in time; it actually 
held me back from transitioning from CRTs to LCDs for a while. But 
nowadays screen resolutions have stabilized (stopped increasing, in 
terms of DPI), and graphics cards have become powerful enough that you 
can play nearly any game at the LCD's native resolution with max 
framerate, so this is no longer a worry (you may have to tone down 
the graphics settings a bit in some cases, but that is fine with me).



2. For games that aren't really designed for multiple resolutions,
particularly many 2D ones, and especially older games (which are often some
of the best, but they look like shit on an LCD).


Well, if your LCD supports it, you have the option of not scaling the 
image when the output resolution is not the native one. How good or bad 
that is depends on the game, I guess.
I actually did this some years ago on certain (recent) games for some 
time: using only 1024x768 of the 1280x1024 native resolution to get a 
better framerate.
It's not a problem for me with old games, since most of the ones I 
occasionally play run in a console emulator. DOS games unfortunately 
were very hard to play correctly in XP in the first place (especially 
with SoundBlaster sound), so that's not a concern for me.




PS: here's a nice thread for anyone looking to purchase a new LCD:
http://forums.anandtech.com/showthread.php?t=39226
It explains a lot of things about LCD technology, and ranks several LCDs 
according to intended usage (office work, hardcore gaming, etc.).


--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread Roman Ivanov
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
 On 1/27/11 8:02 PM, Walter Bright wrote:
  I think one of the reasons DbC has not paid off is it still requires a
  significant investment of effort by the programmer. It's too easy to not
  bother.
 One issue with DbC is that its only significant advantage is its
 interplay with inheritance. Otherwise, scope() in conjunction with
 assert works with less syntactic overhead. So DbC tends to shine with
 large and deep hierarchies... but large and deep hierarchies are not
 that a la mode anymore.

DbC opens many interesting possibilities if it's supported by tools other than
just the compiler. MS has included it in .NET 4.0:

http://research.microsoft.com/en-us/projects/contracts/


DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 06/01/2011 19:19, Jérôme M. Berger wrote:

Andrei Alexandrescu wrote:

What are the advantages of Mercurial over git? (git does allow multiple
branches.)





I've also been mulling over whether to try out and switch away from 
Subversion to a DVCS, but never went ahead cause I've also been 
undecided about Git vs. Mercurial. So this whole discussion here in the 
NG has been helpful, even though I rarely use branches, if at all.


However, there is an important issue for me that has not been mentioned 
ever, I wonder if other people also find it relevant. It annoys me a lot 
in Subversion, and basically it's the aspect where if you delete, 
rename, or copy a folder under version control in a SVN working copy, 
without using the SVN commands, there is a high likelihood your working 
copy will break! It's so annoying, especially since sometimes no amount 
of svn revert, cleanup, unlock, override and update, etc. will fix it. I 
just had one recently where I had to delete and re-checkout the whole 
project because it was that broken.
Other situations also seem to cause this, even when using SVN tooling 
(like partially updating from a commit that delete or moves directories, 
or something like that) It's just so brittle.
I think it may be a consequence of the design aspect of SVN where each 
subfolder of a working copy is a working copy as well (and each 
subfolder of a repository is a repository as well).


Anyways, I hope Mercurial and Git are better at this, I'm definitely 
going to try them out with regards to this.


--
Bruno Medeiros - Software Engineer


Re: Is D still alive?

2011-01-28 Thread Andrei Alexandrescu

On 1/28/11 10:14 AM, Roman Ivanov wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

On 1/27/11 8:02 PM, Walter Bright wrote:

I think one of the reasons DbC has not paid off is it still requires a
significant investment of effort by the programmer. It's too easy to not
bother.

One issue with DbC is that its only significant advantage is its
interplay with inheritance. Otherwise, scope() in conjunction with
assert works with less syntactic overhead. So DbC tends to shine with
large and deep hierarchies... but large and deep hierarchies are not
that a la mode anymore.


DbC opens many interesting possibilities if it's supported by tools other than
just the compiler. MS has included it in .NET 4.0:

http://research.microsoft.com/en-us/projects/contracts/


Hm, I'm seeing in section 3 of userdoc.pdf that they don't care for 
precondition weakening:


While we could allow a weaker precondition, we have found that the 
complications of doing so outweigh the benefits. We just haven’t seen 
any compelling examples where weakening the precondition is useful. So 
we do not allow adding any preconditions at all in a subtype.


They do, however, allow strengthening postconditions.

Overall the project looks very interesting and has quite recognizable 
names from the PL community. Hopefully it will increase awareness of 
contract programming.



Andrei


Decision on container design

2011-01-28 Thread Andrei Alexandrescu
Today after work I plan to start making one pass through std.container. 
After having thought of things for a long time, my conclusions are as 
follows:


1. Containers will be classes.

2. Most of the methods in existing containers will be final. It's up to 
the container to make a method final or not.


3. Containers and their ranges decide whether they give away references 
to their objects. Sealing is a great idea but it makes everybody's life 
too complicated. I'll defer sealing to future improvements in the 
language and/or the reflection subsystem.


4. Containers will assume that objects are cheap to copy so they won't 
worry about moving primitives.


Any showstoppers, please share.
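As a minimal sketch of points 1 and 2 (a hypothetical container, not the real std.container API): a class whose ordinary operations are final, with one method deliberately left virtual:

```d
// Illustrative only: a class container with final methods by default.
class Array(T)
{
    private T[] data;

    final void insertBack(T value) { data ~= value; }             // not virtual
    final @property size_t length() const { return data.length; } // not virtual

    // Deliberately left non-final so a subclass can customize it:
    string describe() const { return "a container"; }
}

void main()
{
    auto a = new Array!int;
    a.insertBack(42);
    assert(a.length == 1);
}
```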


Andrei


Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Eric Poggel

On 1/12/2011 6:41 PM, Walter Bright wrote:

All semiconductors have a lifetime that is measured by the area under
the curve of their temperature over time.


Oddly enough, milk has the same behavior.


Re: Decision on container design

2011-01-28 Thread bearophile
Andrei:

As far as I understand the implications, I agree with your conclusions.
Other people will be free to create an alternative D2 library like this one:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html
but a standard library can't be too hard to use.

Bye,
bearophile


Re: Purity

2011-01-28 Thread Simen kjaeraas

Bruno Medeiros brunodomedeiros+spam@com.gmail wrote:


On 27/01/2011 21:05, Simen kjaeraas wrote:

Bruno Medeiros brunodomedeiros+spam@com.gmail wrote:


string[] func(string arg) pure {
string elem2 = "blah".idup;
return [ arg, elem2 ];
}

The compiler *cannot* know (well, looking at the signature only of
course) how to properly deepdup the result from the first return
value, so as to give the exact same result as if func was called again.


Could you please elucidate, as I am unsure of your reasoning for saying
the compiler cannot know how to deepdup the result.




string str = "blah";
string[] var1 = func(str);
string[] var2 = func(str);

How can the compiler optimize the second call to func, the one that is  
assigned to var2, such that he deepdups var1 instead of calling func  
again? Which code would be generated?


The compiler can't do that because of all the transitive data of var1,  
the compiler doesn't know which of it was newly allocated by func, and  
which of it was reused from func's parameters or some other global  
inputs.


But for immutable data (like the contents of the elements of a string[]),
that doesn't matter, does it?
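A small sketch of the aliasing point in this exchange: even though the element data is immutable, the array returned by a pure function can still reference the caller's input directly, which is part of what makes automatic reuse of results non-trivial for the compiler.

```d
string[] func(string arg) pure
{
    string elem2 = "blah".idup;  // newly allocated inside func
    return [arg, elem2];         // arg is reused from the caller
}

void main()
{
    string str = "blah";
    auto var1 = func(str);
    assert(var1[0] is str);      // first element aliases the input slice
    assert(var1[1] !is str);     // second element is a fresh allocation
}
```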

--
Simen


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-01-28 Thread Michel Fortin
On 2011-01-28 11:29:49 -0500, Bruno Medeiros 
brunodomedeiros+spam@com.gmail said:


I've also been mulling over whether to try out and switch away from 
Subversion to a DVCS, but never went ahead cause I've also been 
undecided about Git vs. Mercurial. So this whole discussion here in the 
NG has been helpful, even though I rarely use branches, if at all.


However, there is an important issue for me that has not been mentioned 
ever, I wonder if other people also find it relevant. It annoys me a 
lot in Subversion, and basically it's the aspect where if you delete, 
rename, or copy a folder under version control in a SVN working copy, 
without using the SVN commands, there is a high likelihood your working 
copy will break! It's so annoying, especially since sometimes no amount 
of svn revert, cleanup, unlock, override and update, etc. will fix it. 
I just had one recently where I had to delete and re-checkout the whole 
project because it was that broken.
Other situations also seem to cause this, even when using SVN tooling 
(like partially updating from a commit that delete or moves 
directories, or something like that) It's just so brittle.
I think it may be a consequence of the design aspect of SVN where each 
subfolder of a working copy is a working copy as well (and each 
subfolder of repository is a repository as well)


Anyways, I hope Mercurial and Git are better at this, I'm definitely 
going to try them out with regards to this.


Git doesn't care how you move your files around. It tracks files by 
their content. If you rename a file and most of the content stays the 
same, git will see it as a rename. If most of the file has changed, 
it'll see it as a new file (with the old one deleted). There is 'git 
mv', but it's basically just a shortcut for moving the file, doing 'git 
rm' on the old path and 'git add' on the new path.


I don't know about Mercurial.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Decision on container design

2011-01-28 Thread Michel Fortin
On 2011-01-28 13:31:58 -0500, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:


Today after work I plan to start making one pass through std.container. 
After having thought of things for a long time, my conclusions are as 
follows:


1. Containers will be classes.

2. Most of the methods in existing containers will be final. It's up to 
the container to make a method final or not.


3. Containers and their ranges decide whether they give away references 
to their objects. Sealing is a great idea but it makes everybody's life 
too complicated. I'll defer sealing to future improvements in the 
language and/or the reflection subsystem.


4. Containers will assume that objects are cheap to copy so they won't 
worry about moving primitives.


Any showstoppers, please share.


Not my preferred choices (especially #1), but having containers in 
Phobos will certainly be an improvement over not having them. So go 
ahead!


About #4, it'd be nice to have the containers use move semantics when 
possible even if they fallback to (cheap) copy semantic when move isn't 
available. That way, if you have a type which is moveable but not 
copyable you can still put it in a container. Does that make sense?



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: DSource (Was: Re: Moving to D )

2011-01-28 Thread retard
Fri, 28 Jan 2011 15:03:24 +, Bruno Medeiros wrote:

 
 I know, I know. :)  (I am up-to-date on D.announce, just not on D and
 D.bugs)
 I still wanted to make that point though. First, for retrospection, but
 also because it may still apply to a few other DSource projects (current
 or future ones).

You don't need to read every post here. Reading every bug report is just 
stupid... but it's not my problem. It just means that the rest of us have 
less competition in everyday situations (getting women, work offers, and 
so on).


Re: Is D still alive?

2011-01-28 Thread retard
Fri, 28 Jan 2011 10:14:04 -0500, Steven Schveighoffer wrote:

 On Thu, 27 Jan 2011 04:59:18 -0500, retard r...@tard.com.invalid wrote:
 
 Wed, 26 Jan 2011 15:35:19 -0500, Steven Schveighoffer wrote:

 I'd suggest to anyone looking to use D for something really big to try
 and prove out how well D will perform for you by coding up bits of
 your whole project that you think will be needed.  Hopefully, you can
 do everything without hitting a mercy bug and then you can write your
 full project in it.

 I think this reveals a lot about D. You still need to prove things. Or
 maybe the community members in general aren't very good developers;
 they can't see the potential of this language. The fact is, no matter
 what language you choose, if it isn't a complete joke, you can finish
 the project. (I'm assuming the community members here won't be writing
 any massive projects which are not possible to do in C++ or PHP or
 Java.)
 
 I fully see the potential of the language, but I've also experienced
 that a one (or two or three) man compiler team does not fix bugs on *my*
 schedule.  I can't buy enterprise support, so any bugs I may hit, I'm
 just going to have to wait for Walter and Co. to get around to them. 
 Not a problem for me, because I'm not developing with D professionally.

I agree.
 
 But if I was going to base a software company on D, I'd be very nervous
 at this prospect.

Exactly.

 
 I think as D matures
 and hopefully gets more enterprise support, these problems will be
 history.

This is the classic chicken or the egg problem. I'm not trying to be 
unnecessarily mean. Enterprise support is something you desperately need. 
Consider dsource, wiki4d, D's Bugzilla, etc. It's amazing how much 3rd-party 
money and effort affects the development. Luckily, many things are also 
free nowadays, such as GitHub.

 
 I don't see any need to prove how well Haskell works. Even though it's
an "avoid success at all costs" experimental research language. It just
 works. I mean to the extent that I'm willing to go with these silly
 test projects that try to prove something.
 
 The statements I made are not a property of D, they are a property of
 the lack of backing/maturity.  I'm sure when Haskell was at the same
 maturity stage as D, and if it had no financial backing/support
 contracts, it would be just as much of a gamble.

But Haskell developers have uninterruptedly received funding during the 
years.

 You seem to think that D is inherently flawed because of D, but it's
 simply too young for some tasks.  It's rapidly getting older, and I
 think in a year or two it will be mature enough for most projects.

I've heard this before. I've also heard the 64-bit port and many other 
things are done in a year/month or two. The fact is, you're overly 
optimistic and these are all bullshit. When I come back here in a year or 
two, I have full justification to laugh at your stupid claims.


Re: Is D still alive?

2011-01-28 Thread retard
Fri, 28 Jan 2011 16:14:27 +, Roman Ivanov wrote:

 == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s
 article
 On 1/27/11 8:02 PM, Walter Bright wrote:
  I think one of the reasons DbC has not paid off is it still requires
  a significant investment of effort by the programmer. It's too easy
  to not bother.
 One issue with DbC is that its only significant advantage is its
 interplay with inheritance. Otherwise, scope() in conjunction with
 assert works with less syntactic overhead. So DbC tends to shine with
 large and deep hierarchies... but large and deep hierarchies are not
that à la mode anymore.
 
 DbC opens many interesting possibilities if it's supported by tools
 other than just the compiler. MS has included it in .NET 4.0:
 
 http://research.microsoft.com/en-us/projects/contracts/

Mono 2.8 also seems to support those:

http://www.mono-project.com/news/archive/2010/Oct-06.html


Re: Is D still alive?

2011-01-28 Thread Andrei Alexandrescu

On 1/28/11 3:25 PM, retard wrote:

Fri, 28 Jan 2011 10:14:04 -0500, Steven Schveighoffer wrote:

I think as D matures
and hopefully gets more enterprise support, these problems will be
history.


This is the classic chicken or the egg problem. I'm not trying to be
unnecessarily mean. Enterprise support is something you desperately need.
Consider dsource, wiki4d, D's Bugzilla, etc. It's amazing how much 3rd-party
money and effort affects the development. Luckily, many things are also
free nowadays, such as GitHub.


I don't see any need to prove how well Haskell works. Even though it's
an "avoid success at all costs" experimental research language. It just
works. I mean to the extent that I'm willing to go with these silly
test projects that try to prove something.


The statements I made are not a property of D, they are a property of
the lack of backing/maturity.  I'm sure when Haskell was at the same
maturity stage as D, and if it had no financial backing/support
contracts, it would be just as much of a gamble.


But Haskell developers have uninterruptedly received funding during the
years.


That doesn't say much about anything. Some projects worked well with 
funding, some worked well with little or no initial funding.



You seem to think that D is inherently flawed because of D, but it's
simply too young for some tasks.  It's rapidly getting older, and I
think in a year or two it will be mature enough for most projects.


I've heard this before. I've also heard the 64-bit port and many other
things are done in a year/month or two. The fact is, you're overly
optimistic and these are all bullshit. When I come back here in a year or
two, I have full justification to laugh at your stupid claims.


I think if you do that I have full justification to send you back where 
you originally came from. Cut the crap for a change, will you. Thanks.



Andrei


Re: Decision on container design

2011-01-28 Thread Andrei Alexandrescu

On 1/28/11 3:05 PM, Michel Fortin wrote:

On 2011-01-28 13:31:58 -0500, Andrei Alexandrescu
seewebsiteforem...@erdani.org said:


Today after work I plan to start making one pass through
std.container. After having thought of things for a long time, my
conclusions are as follows:

1. Containers will be classes.

2. Most of the methods in existing containers will be final. It's up
to the container to make a method final or not.

3. Containers and their ranges decide whether they give away
references to their objects. Sealing is a great idea but it makes
everybody's life too complicated. I'll defer sealing to future
improvements in the language and/or the reflection subsystem.

4. Containers will assume that objects are cheap to copy so they won't
worry about moving primitives.

Any showstoppers, please share.


Not my preferred choices (especially #1), but having containers in
Phobos will certainly be an improvement over not having them. So go ahead!


Well if you brought forth some strong argument I'm all ears. What I see 
for now is that struct containers are just difficult to implement and 
need to have carefully explained semantics, whereas a lot of people know 
how classes behave from day one.



About #4, it'd be nice to have the containers use move semantics when
possible even if they fallback to (cheap) copy semantic when move isn't
available. That way, if you have a type which is moveable but not
copyable you can still put it in a container. Does that make sense?


That's what I did up until now. It is tantamount to defining a bunch of 
methods (aliases or not) that add to the interface that the user must 
absorb, but that are seldom useful. It just seems that the entire move 
paraphernalia doesn't lift its weight.



Andrei



Re: Decision on container design

2011-01-28 Thread Michel Fortin
On 2011-01-28 17:09:08 -0500, Andrei Alexandrescu 
seewebsiteforem...@erdani.org said:



On 1/28/11 3:05 PM, Michel Fortin wrote:

Not my preferred choices (especially #1), but having containers in
Phobos will certainly be an improvement over not having them. So go ahead!


Well if you brought forth some strong argument I'm all ears. What I see 
for now is that struct containers are just difficult to implement and 
need to have carefully explained semantics, whereas a lot of people 
know how classes behave from day one.


We already argued this over and over in the past. First, I totally 
acknowledge that C++ style containers have a problem: they make it 
easier to copy the content than pass it by reference. On the other side 
of the spectrum, I think that class semantics makes it too easy to have 
null dereferences, it's easy to get lost when you have a container of 
containers.


I have some experience with containers having class-style semantics: in 
Objective-C, I ended up creating a set of macro-like functions which I 
use to initialize containers whenever I use them in case they are null. 
And I had to do more of these utility functions to handle a particular 
data structure of mine which is a dictionary of arrays of objects. In 
C++, I'd have declared this as a map<string, vector<Object>> and 
be done with it; no need for special care initializing each vector, so 
much easier than in Objective-C.
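For comparison, here is a small D sketch (names made up for illustration) of the lazy-initialization convenience being asked for: D's built-in associative arrays already behave this way, so a dictionary of arrays needs no explicit setup of the inner arrays before appending.

```d
import std.stdio;

void main()
{
    // A dictionary of arrays of strings, analogous to the C++
    // map<string, vector<Object>> above: the entry and its inner
    // array spring into existence on the first append.
    string[][string] dict;
    dict["greetings"] ~= "hello";
    dict["greetings"] ~= "world";
    writeln(dict["greetings"].length);  // 2
}
```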


I agree that defining structs to have reference semantics as you have 
done is complicated. But I like the lazy initialization, and we have a 
precedent for that with AAs (ideally, AAs would be a compatible 
container too). Can't we just use the GC instead of reference counting? 
It'd make things much easier. Here is an implementation:


struct Container
{
struct Impl { ... }

private Impl* _impl;
ref Impl impl() @property
{
if (!_impl) _impl = new Impl;
return *_impl;
}

alias impl this;
}

I also believe reference semantics are not to be used everywhere, even 
though they're good most of the time. I'd like to have a way to bypass 
it and get a value-semantic container. With the above, it's easy as 
long as you keep Container.Impl public:


void main() {
Container  lazyHeapAllocatedContainer;
Container.Impl stackAllocatedContainer;
}

class MyObject {
Container.Impl listOfObjects;
}



About #4, it'd be nice to have the containers use move semantics when
possible even if they fallback to (cheap) copy semantic when move isn't
available. That way, if you have a type which is moveable but not
copyable you can still put it in a container. Does that make sense?


That's what I did up until now. It is tantamount to defining a bunch of 
methods (aliases or not) that add to the interface that the user must 
absorb, but that are seldom useful. It just seems that the entire move 
paraphernalia doesn't lift its weight.


But could we limit this to, say, only containers that can return 
elements by ref? Perhaps that won't help. You know the problem better 
than me, I don't really have anything more to say.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Decision on container design

2011-01-28 Thread bearophile
Michel Fortin:

 On the other side 
 of the spectrum, I think that class semantics makes it too easy to have 
 null dereferences,

That's why I have asked for an @ suffix syntax and semantics for non-null 
reference types in D :-)


 I agree that defining structs to have reference semantics as you have 
 done is complicated. But I like the lazy initialization, and we have a 
 precedent for that with AAs

Built-in AAs are currently broken and in need of fixing:

import std.stdio: writeln;
void foo(int[int] aa, int n) {
    aa[n] = n;
}
void main() {
    int[int] a;
    foo(a, 0);
    writeln(a); // a is still empty: foo received a null AA by value
    a[1] = 1;   // this first insertion creates a's implementation
    foo(a, 2);
    writeln(a); // now foo's insertion is visible through the shared impl
}

Bye,
bearophile


Re: Decision on container design

2011-01-28 Thread Tomek Sowiński
Michel Fortin wrote:

 We already argued this over and over in the past. First, I totally 
 acknowledge that C++ style containers have a problem: they make it 
 easier to copy the content than pass it by reference. On the other side 
 of the spectrum, I think that class semantics makes it too easy to have 
 null dereferences, it's easy to get lost when you have a container of 
 containers.
 
 I have some experience with containers having class-style semantics: in 
 Objective-C, I ended up creating a set of macro-like functions which I 
 use to initialize containers whenever I use them in case they are null. 
 And I had to do more of these utility functions to handle a particular 
 data structure of mine which is a dictionary of arrays of objects. In 
 C++, I'd have declared this as a map<string, vector<Object>> and 
 be done with it; no need for special care initializing each vector, so 
 much easier than in Objective-C.
 
 I agree that defining structs to have reference semantics as you have 
 done is complicated. But I like the lazy initialization, and we have a 
 precedent for that with AAs (ideally, AAs would be a compatible 
 container too). Can't we just use the GC instead of reference counting? 
 It'd make things much easier. Here is an implementation:
 
   struct Container
   {
   struct Impl { ... }
 
   private Impl* _impl;
   ref Impl impl() @property
   {
   if (!_impl) _impl = new Impl;
   return *_impl;
   }
   
   alias impl this;
   }
 
 I also believe reference semantics are not to be used everywhere, even 
 though they're good most of the time. I'd like to have a way to bypass 
 it and get a value-semantic container. With the above, it's easy as 
 long as you keep Container.Impl public:
 
   void main() {
   Container  lazyHeapAllocatedContainer;
   Container.Impl stackAllocatedContainer;
   }
 
  class MyObject {
   Container.Impl listOfObjects;
   }

Is there anything implementation specific in the outer struct that provides ref 
semantics to Impl? If not, Container could be generic, parametrized by Impl 
type.

Overall, I think a value-like implementation in a referency wrapper is a 
clear-cut idiom, bringing order to otherwise messy struct-implemented 
ref-semantics. Do you know of an existing collection library that exploits this 
idea?

-- 
Tomek



non-ref null arrays [was: Re: Decision on container design]

2011-01-28 Thread spir

On 01/29/2011 01:01 AM, bearophile wrote:

Built-in AAs are currently broken and in need to be fixed:

import std.stdio: writeln;
void foo(int[int] aa, int n) {
 aa[n] = n;
}
void main() {
 int[int] a;
 foo(a, 0);
 writeln(a);
 a[1] = 1;
 foo(a, 2);
 writeln(a);
}

Bye,
bearophile


Variation on the theme:

import std.stdio: writeln;

void foo(int[int] aa, int n) {
    aa[n] = n;
}
void foo(int[] a, int n) {
    a ~= n;
}
void bar(ref int[int] aa, int n) {
    aa[n] = n;
}
void bar(ref int[] a, int n) {
    a ~= n;
}

unittest {
    int[int] aa;
    foo(aa, 3);
    writeln(aa.length); // 0: the null AA was passed by value

    int[] a;
    foo(a, 3);
    writeln(a.length);  // 0: the append isn't visible to the caller

    int[int] bb;
    bar(bb, 3);
    writeln(bb.length); // 1: passed by ref

    int[] b;
    bar(b, 3);
    writeln(b.length);  // 1: passed by ref
}

Denis
--
_
vita es estrany
spir.wikidot.com



Re: Decision on container design

2011-01-28 Thread bearophile
Andrei:

 1. Containers will be classes.

This page:
http://www.jroller.com/scolebourne/entry/the_next_big_jvm_language1

A quotation:

3) Everything is a monitor. In Java and the JVM, every object is a monitor, 
meaning that you can synchronize on any object. This is incredibly wasteful at 
the JVM level. Senior JVM guys have indicated large percentage improvements in 
JVM space and performance if we removed the requirement that every object can 
be synchronized on. (Instead, you would have specific classes like Java 5 
Lock)

I have read similar comments in various other places.

What about creating a @nomonitor annotation, so that D2 classes annotated with 
it don't get a monitor? This may reduce some per-class overhead.

Bye,
bearophile


Re: Decision on container design

2011-01-28 Thread Denis Koroskin
On Sat, 29 Jan 2011 02:32:28 +0300, Michel Fortin  
michel.for...@michelf.com wrote:


On 2011-01-28 17:09:08 -0500, Andrei Alexandrescu  
seewebsiteforem...@erdani.org said:



On 1/28/11 3:05 PM, Michel Fortin wrote:

Not my preferred choices (especially #1), but having containers in
Phobos will certainly be an improvement over not having them. So go  
ahead!
 Well if you brought forth some strong argument I'm all ears. What I  
see for now is that struct containers are just difficult to implement  
and need to have carefully explained semantics, whereas a lot of people  
know how classes behave from day one.


We already argued this over and over in the past. First, I totally  
acknowledge that C++ style containers have a problem: they make it  
easier to copy the content than pass it by reference. On the other side  
of the spectrum, I think that class semantics makes it too easy to have  
null dereferences, it's easy to get lost when you have a container of  
containers.


I have some experience with containers having class-style semantics: in  
Objective-C, I ended up creating a set of macro-like functions which I  
use to initialize containers whenever I use them in case they are null.  
And I had to do more of these utility functions to handle a particular  
data structure of mine which is a dictionary of arrays of objects. In  
C++, I'd have declared this as a map<string, vector<Object>> and  
be done with it; no need for special care initializing each vector, so  
much easier than in Objective-C.


I agree that defining structs to have reference semantics as you have  
done is complicated. But I like the lazy initialization, and we have a  
precedent for that with AAs (ideally, AAs would be a compatible  
container too). Can't we just use the GC instead of reference counting?  
It'd make things much easier. Here is an implementation:


struct Container
{
struct Impl { ... }

private Impl* _impl;
ref Impl impl() @property
{
if (!_impl) _impl = new Impl;
return *_impl;
}

alias impl this;
}



Unfortunately, this design has big issues:


void fill(Appender!string appender)
{
    appender.put("hello");
    appender.put("world");
}

void test()
{
    Appender!string appender;
    fill(appender); // Appender is supposed to have reference semantics
    assert(appender.length != 0); // fails!
}

The assert above fails because at the time you pass the appender object to the  
fill method it isn't initialized yet (lazy initialization). As such, a  
null is passed, creating an instance at the first append, but the result  
isn't seen by the caller.


An explicit initialization is needed to work around this design issue. The  
worst thing is that in many cases it would work fine (you might have  
already initialized it indirectly) but sometimes you get unexpected  
result. I got hit by this in past, and it wasn't easy to trace down.


As such, I strongly believe containers either need to have copy semantics,  
or be classes. However, copy semantics contradicts with the cheap copy  
ctor idiom because you need to copy all the elements from source  
container.


I also believe reference semantics are not to be used everywhere, even  
though they're good most of the time. I'd like to have a way to bypass  
it and get a value-semantic container. With the above, it's easy as long  
as you keep Container.Impl public:


void main() {
Container  lazyHeapAllocatedContainer;
Container.Impl stackAllocatedContainer;
}

class MyObject {
Container.Impl listOfObjects;
}



About #4, it'd be nice to have the containers use move semantics when
possible even if they fallback to (cheap) copy semantic when move isn't
available. That way, if you have a type which is moveable but not
copyable you can still put it in a container. Does that make sense?
 That's what I did up until now. It is tantamount to defining a bunch  
of methods (aliases or not) that add to the interface that the user  
must absorb, but that are seldom useful. It just seems that the entire  
move paraphernalia doesn't lift its weight.


But could we limit this to, say, only containers that can return  
elements by ref? Perhaps that won't help. You know the problem better  
than me, I don't really have anything more to say.






--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Is D still alive?

2011-01-28 Thread Walter Bright

Steven Schveighoffer wrote:

I can't buy enterprise support,


Of course you can!


Re: Is D still alive?

2011-01-28 Thread Jonathan M Davis
On Friday, January 28, 2011 17:16:54 Walter Bright wrote:
 Steven Schveighoffer wrote:
  I can't buy enterprise support,
 
 Of course you can!

Well, since Scotty hasn't been born yet, it's probably a bit premature... ;)

- Jonathan M Davis


Re: Decision on container design

2011-01-28 Thread Michel Fortin

On 2011-01-28 20:10:06 -0500, Denis Koroskin 2kor...@gmail.com said:


Unfortunately, this design has big issues:


void fill(Appender!string appender)
{
    appender.put("hello");
    appender.put("world");
}

void test()
{
    Appender!string appender;
    fill(appender); // Appender is supposed to have reference semantics
    assert(appender.length != 0); // fails!
}

The assert above fails because at the time you pass the appender object to 
the fill method it isn't initialized yet (lazy initialization). As 
such, a null is passed, creating an instance at the first append, but 
the result isn't seen by the caller.


That's indeed a problem. I don't think it's a fatal flaw however, given 
that the idiom already exists in AAs.


That said, the nice thing about my proposal is that you can easily 
reuse the Impl to build a new container wrapper with the semantics 
you like, with no loss of efficiency.


As for the case of Appender... personally in the case above I'd be 
tempted to use Appender.Impl directly (value semantics) and make fill 
take a 'ref'. There's no point in having an extra heap allocation, 
especially if you're calling test() in a loop or if there's a good 
chance fill() has nothing to append to it.


That's the issue with containers. The optimal semantics always change 
depending on the use case.



An explicit initialization is needed to work around this design issue. 
The  worst thing is that in many cases it would work fine (you might 
have  already initialized it indirectly) but sometimes you get 
unexpected  result. I got hit by this in past, and it wasn't easy to 
trace down.


As such, I strongly believe containers either need to have copy 
semantics,  or be classes. However, copy semantics contradicts with the 
cheap copy  ctor idiom because you need to copy all the elements from 
source  container.


Personally, I'm really concerned by the case where you have a container 
of containers. Class semantics make things really complicated as you 
always have to initialize everything in the container explicitly; value 
semantics makes things semantically easier but quite inefficient as 
moving elements inside of the outermost container implies copying the 
containers. Making containers auto-initialize themselves on first use 
solves the case where containers are reference types; making 
containers capable of using move semantics solves the problem for 
value-type containers.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Decision on container design

2011-01-28 Thread Michel Fortin

On 2011-01-28 19:00:02 -0500, Tomek Sowiński j...@ask.me said:


Michel Fortin wrote:


We already argued this over and over in the past. First, I totally
acknowledge that C++ style containers have a problem: they make it
easier to copy the content than pass it by reference. On the other side
of the spectrum, I think that class semantics makes it too easy to have
null dereferences, it's easy to get lost when you have a container of
containers.

I have some experience with containers having class-style semantics: in
Objective-C, I ended up creating a set of macro-like functions which I
use to initialize containers whenever I use them in case they are null.
And I had to do more of these utility functions to handle a particular
data structure of mine which is a dictionary of arrays of objects. In
C++, I'd have declared this as a map<string, vector<Object>> and
be done with it; no need for special care initializing each vector, so
much easier than in Objective-C.

I agree that defining structs to have reference semantics as you have
done is complicated. But I like the lazy initialization, and we have a
precedent for that with AAs (ideally, AAs would be a compatible
container too). Can't we just use the GC instead of reference counting?
It'd make things much easier. Here is an implementation:

struct Container
{
struct Impl { ... }

private Impl* _impl;
ref Impl impl() @property
{
if (!_impl) _impl = new Impl;
return *_impl;
}

alias impl this;
}

I also believe reference semantics are not to be used everywhere, even
though they're good most of the time. I'd like to have a way to bypass
it and get a value-semantic container. With the above, it's easy as
long as you keep Container.Impl public:

void main() {
Container  lazyHeapAllocatedContainer;
Container.Impl stackAllocatedContainer;
}

class MyObject {
Container.Impl listOfObjects;
}


Is there anything implementation specific in the outer struct that provides
ref semantics to Impl? If not, Container could be generic, parametrized by
Impl type.


You could provide an implementation-specific version of some functions 
as an optimization. For instance, there is no need to create the Impl 
when asking for the length: if the pointer is null, the length is zero. 
Typically, const functions can be implemented in the outer container 
with a shortcut checking for null.



Overall, I think a value-like implementation in a referency wrapper is 
a clear-cut idiom, bringing order to otherwise messy struct-implemented 
ref-semantics. Do you know of an existing collection library that 
exploits this idea?


No. Only associative arrays in D do that, that I know of.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Is D still alive?

2011-01-28 Thread Walter Bright

Jonathan M Davis wrote:

On Friday, January 28, 2011 17:16:54 Walter Bright wrote:

Steven Schveighoffer wrote:

I can't buy enterprise support,

Of course you can!


Well, since Scotty hasn't been born yet, it's probably a bit premature... ;)


She canna take the power!


Re: Nested function declarations

2011-01-28 Thread wrzosk

On 28.01.2011 03:35, Akakima wrote:

bearophile bearophileh...@lycos.com wrote in message news:
iht0ha$2avd$1...@digitalmars.com...

This D2 code:

import std.math: sqrt;
void main() {
double sqrt();
double result = sqrt(9.0);
}

Generates the errors:
test.d(4): Error: function test.main.sqrt () is not callable using
argument types (double)
test.d(4): Error: expected 0 arguments, not 1 for non-variadic function
type double()



--- This one compiles, but does not link:

import std.math: sqrt;
void main()
{
   double sqrt(double x);
   double result = sqrt(9.0);
}


--- And this one compiles and links ok.

import std.math: sqrt;
void main()
{
   double sqrt(double x)  {return 1.0;  }
   double result = sqrt(9.0);
}

So, with the appropriate prototype it compiles, but
there is a conflict with the sqrt() already defined in Phobos.





This works:

import std.math: sqrt;
import std.stdio;
void main()
{
  double sqrt(double x)  {return 1.0;  }
  double result = .sqrt(9.0);
  writeln(result);
}


Re: github syntax hilighting

2011-01-28 Thread Jacob Carlborg

On 2011-01-27 18:27, Andrej Mitrovic wrote:

On 1/27/11, Jacob Carlborg d...@me.com  wrote:

I would guess it takes 10 seconds because it processes the file as a
whole. I would guess vim doesn't do that, but I don't know.


I haven't thought of that! Actually Vim's syntax highlighting
'algorithm' is customizable. You can set it up to try highlighting the
entire file, or it can do some magic and scan backwards N lines to
figure out how to highlight the entire screen. Afaik the D syntax
highlighting script doesn't scan the entire file at once, which
probably explains the speed. :)


Hehe, is there anything in Vim that isn't customizable?

--
/Jacob Carlborg


Re: array of elements of various subtypes

2011-01-28 Thread Steven Schveighoffer
On Wed, 26 Jan 2011 23:40:39 -0500, Andrej Mitrovic  
andrej.mitrov...@gmail.com wrote:



On 1/27/11, Steven Schveighoffer schvei...@yahoo.com wrote:

 I'm not sure why this works and the other doesn't, but we
definitely need something that allows one to control the array type of a
literal.


pragma helps in discovering what DMD does sometime. This will error
out but it will give some useful info:

pragma(msg, typeid( [1,2,cast(ubyte)3] ));
error: [1,2,cast(int)cast(ubyte)3] , D11TypeInfo_Ai6__initZ

So it forces a cast back to int again.


Right, because int can hold all of the values.

Casting the whole array to ubyte[] makes it cast all the array's elements.

Essentially, I'm proposing that cast(U[])[e1, e2, ..., en] is translated  
to [cast(U)e1, cast(U)e2, ..., cast(U)en].


This works for array literals that are composed of literal elements, but  
for some reason doesn't work for classes.




But we can use a postfix to set an unsigned type for the whole array:

writeln(typeid( [1,2,3u] ));  // uint[]


Because integer promotion rules say ints get promoted to uints.   
Essentially, uint is considered to be able to hold all int values (even  
though it technically can't).  So the common type chosen between int and  
uint is uint.



And we can select a string type with a postfix, but we can't use a cast:

void main()
{
writeln(typeid( ["a"d, "b", "c"] ));// works
writeln(typeid( [cast(dchar)'a', "b", "c"] ));  // doesn't work,
// Error: incompatible types for ((cast(dchar)'a') ? ("b")): 'dchar'
and 'string'
}


The error has been pointed out, but you are still missing the point.   
Casting one element has a partial effect on the overall type, but does not  
force the type of the array.  It only forces the type of one element.   
The compiler still chooses an array type of the common type of all  
elements.


-Steve
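The common-type behavior Steve describes can be checked directly with `static assert` (a small illustrative sketch, not from the thread):

```d
void main()
{
    // integer promotion: the common type of int and uint is uint
    static assert(is(typeof([1, 2, 3u]) == uint[]));

    // casting a single element does not force the array type:
    // the common type of (int, int, ubyte) is still int
    static assert(is(typeof([1, 2, cast(ubyte)3]) == int[]));

    // casting the whole literal does force it
    static assert(is(typeof(cast(ubyte[])[1, 2, 3]) == ubyte[]));
}
```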


Re: array of elements of various subtypes

2011-01-28 Thread Steven Schveighoffer

On Thu, 27 Jan 2011 08:33:05 -0500, spir denis.s...@gmail.com wrote:


On 01/27/2011 05:05 AM, Steven Schveighoffer wrote:
On Wed, 26 Jan 2011 22:10:58 -0500, Jonathan M Davis  
jmdavisp...@gmx.com wrote:



On Wednesday 26 January 2011 18:59:50 Steven Schveighoffer wrote:
On Wed, 26 Jan 2011 18:28:12 -0500, Jonathan M Davis  
jmdavisp...@gmx.com



I'd like to see cast(T0[])[...] work, I think that should solve the
problem.


It probably doesn't for the exact same reason that the assignment  
didn't do it.
The expression is evaluated and _then_ it's cast. So, if the  
expression isn't

valid in and of itself, it fails.


This works:

cast(ubyte[])[1,2,3] // creates an array of 3 ubytes

So clearly cast has an effect on the type of the array literal in that  
case.

I'm not sure why this works and the other doesn't,
[1,2,3] is valid! [t1,t2] is not if one of the elements' type is not  
implicitly convertible to the other. In your example cast applies to an  
already constructed array. (Hope you see what I mean)


I do, but what do you think the binary value of the array is?  You might  
be surprised that it is


01h 02h 03h

instead of the binary representation of the int[] [1, 2, 3]:

01h 00h 00h 00h 02h 00h 00h 00h 03h 00h 00h 00h

i.e. all the elements are cast to ubyte *before* the array is  
constructed.  This is the behavior I think we should have in all cases.
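The difference shows up in the lengths of the resulting arrays (a sketch; the second cast reinterprets the int[]'s memory at runtime rather than converting elements):

```d
void main()
{
    // elements are converted to ubyte before the array is built: 3 bytes
    ubyte[] a = cast(ubyte[])[1, 2, 3];
    assert(a.length == 3 && a[0] == 1 && a[2] == 3);

    // casting an existing int[] reinterprets its memory instead:
    // the length is scaled by int.sizeof
    int[] ints = [1, 2, 3];
    ubyte[] b = cast(ubyte[])ints;
    assert(b.length == 3 * int.sizeof);
}
```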



but we definitely need
something that allows one to control the array type of a literal.


Yop! But this hint has to belong to the literal notation syntax itself.  
Not anything (like cast ot to!) that applies afterwards.


IMO cast is fair game, because it's a compiler internal operation.  to! is  
a library function, and should not affect the literal type.


In D1, the array could be typed by casting the first element (the first  
element
was always used as the type of the array). In D2 we no longer can  
control the

type of the array that way, we need some way to do it.


Works in D2. If any element is of the intended common type, then all goes  
fine.


But this doesn't work to force the array type.  It only works to force a  
different type into consideration for the common type.


It is specifying a different intention to the compiler than I think you  
want.  If, for example, you *wanted* the type of the array to be T0[], and  
not Object[], this line still results in an Object[] array:


Object o = new T1;
auto arr = [cast(T0)t1, t2, o];

So wouldn't you rather the compiler say "hey, o can't be a T0, even though  
you want a T0[]", or would you rather it just happily carry out your order  
and fail later when you try T0 methods on any of the elements?


-Steve


Re: Problem with Sorting

2011-01-28 Thread Steven Schveighoffer
On Fri, 28 Jan 2011 00:05:34 -0500, Jonathan M Davis jmdavisp...@gmx.com  
wrote:



Also, the signature for opCmp on classes is supposed to be

int opCmp(Object o)

It's supposed to take Object. I'm surprised that it works at all the way  
that it

is.


std.algorithm.sort is a template, so it has access to the fully derived  
class type.  The compiler simply rewrites a  b as a.opCmp(b), which calls  
his function if you are using the derived type.


Without defining opCmp as taking object, however, the builtin array sort  
would fail with a hidden function error.


-Steve
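A sketch of the two-overload pattern Steve describes (the `Apple` class is made up for illustration): the typed overload serves templates like std.algorithm.sort, while the Object overload keeps the runtime interface happy.

```d
import std.algorithm : sort;

class Apple
{
    int size;
    this(int size) { this.size = size; }

    // typed overload: chosen by templates, which see the derived type
    int opCmp(Apple rhs) { return size - rhs.size; }

    // Object overload: required so runtime facilities that only know
    // about Object (e.g. the builtin array sort) don't hit a
    // hidden-function error
    override int opCmp(Object o)
    {
        auto rhs = cast(Apple)o;
        return rhs is null ? -1 : opCmp(rhs);
    }
}

void main()
{
    auto apples = [new Apple(3), new Apple(1), new Apple(2)];
    sort!((a, b) => a < b)(apples); // a < b rewrites to a.opCmp(b) < 0
    assert(apples[0].size == 1 && apples[2].size == 3);
}
```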


Re: Interface problems

2011-01-28 Thread Steven Schveighoffer
On Thu, 27 Jan 2011 09:26:28 -0500, Stanislav Blinov bli...@loniir.ru  
wrote:



On 26.01.2011 16:54, Steven Schveighoffer wrote:


This is hardly a solution.  He wants to do value comparison, not  
identity comparison.


The real fix is to make interface assume it is an Object, so it can be  
implicitly cast to Object, and find another way to implement COM  
interfaces.  The COM interface hack is way outdated and extremely  
harmful, esp. on OS' who *don't use COM*!  I can't see how the benefits  
it has outweigh the problems it causes.


The recent discussion in D group about destructor order brought me to  
yet another question about interfaces.
Currently, functions that should accept classes as parameters, e.g.  
clear(), accept interfaces as well:


void clear(T)(T obj) if (is(T == class)) // note the constraint
{ /*...*/ }

interface I {}
class A : I {}
void main()
{
 I a = new A; // note that typeof(a) is I, not A
 clear(a);
}

This compiles. But raises a question: how come? If it is assumed that a  
reference to interface is not necessarily a D class instance, then it  
shouldn't. The fact that it compiles further obscures the purpose  
and usage of interfaces.
I agree with Steven. Having a support for COM, CORBA and so on in the  
language is great, but wouldn't it be better to specify it explicitly?  
Maybe solidify the usage of 'extern' keyword?


interface D {} // instances are guaranteed to be D Objects
extern interface E {} // instances may or may not be D Objects (COM and  
alike)


I mean, it's already there in the language and is described in  
'Interfacing to C++' in the documentation. Though currently, extern  
interfaces are accepted by is(T == class) constraint as well.


It's because everywhere in the compiler, an interface is considered a  
class (I think even the type representing an interface is derived from the  
type representing class).  Except one, and that's implicit casting to  
Object.


My thought is, we already have extern(C++) interface, why not extern(COM)  
interface?  But we could even leave the notion of interfaces deriving from  
IUnknown as COM interfaces and just have the compiler check if an  
interface derives from IUnknown.  In fact, it *ALREADY DOES THIS* because  
it has to treat the layout of an IUnknown interface differently from  
another interface.  The whole argument to have interfaces not derive from  
Object because of COM is standing on such poor footing that I can't see  
how it's taken this long to fix.


-Steve
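A minimal illustration of the behavior being criticized, assuming a plain D interface (not COM): the implicit conversion to Object is rejected, while an explicit dynamic cast succeeds.

```d
interface I {}
class A : I {}

void main()
{
    I i = new A;

    // Object o = i;  // error: an interface does not implicitly convert
    //                // to Object, because it might be a COM interface

    // the explicit cast performs a dynamic cast and succeeds for any
    // D object implementing I
    Object o = cast(Object)i;
    assert(o !is null);
}
```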


Re: array of elements of various subtypes

2011-01-28 Thread Steven Schveighoffer

On Thu, 27 Jan 2011 07:49:06 -0500, spir denis.s...@gmail.com wrote:


On 01/27/2011 03:54 AM, Steven Schveighoffer wrote:

On Wed, 26 Jan 2011 18:33:45 -0500, spir denis.s...@gmail.com wrote:


On 01/26/2011 07:23 PM, Steven Schveighoffer wrote:

On Wed, 26 Jan 2011 12:27:37 -0500, spir denis.s...@gmail.com wrote:




auto ts = cast(T0[])[t1, t2];


Nope, refused for the same reason (tried to construct [t1,t2] before  
casting

it).


Hm.. maybe that only works on array literals with all literal elements.  
I

expected the cast to affect the type the compiler is expecting.

For example, this works:

cast(ubyte[])[1,2]; // without cast typed as int[]


Yes, but with [1,2] the compiler has no difficulty to create the initial  
array in the first place ;-)


But the cast affects how the array is created.  See my post elsewhere in  
this thread.


-Steve


Are these (known) bugs?

2011-01-28 Thread biozic

Hi,

I am playing with the to-be-released std.datetime, and encountered these 
errors (the last one concerns std.variant, actually), with dmd 2.052 
(Mac OS X 10.6):


---
import std.array, std.datetime, std.variant;

unittest {
auto app = appender!(Interval!Date[]);
auto interval = Interval!Date(Date(2000, 1, 1), Date(2011, 2, 3));
app.put(interval);
// Error: datetime.d(20208): Invariant Failure: begin is not before 
or equal to end.

}

unittest {
Variant[] va;
ubyte u = 0;
va ~= u;
// Error: cannot append type ubyte to type VariantN!(maxSize)[]
}
---

Are these (known) bugs, or do I do anything wrong?
Thanks,
Nicolas




Re: Are these (known) bugs?

2011-01-28 Thread biozic

On 28/01/11 19:57, biozic wrote:

unittest {
Variant[] va;
ubyte u = 0;
va ~= u;
// Error: cannot append type ubyte to type VariantN!(maxSize)[]
}


Hmm... I'm silly, forget this one :)
va ~= Variant(u), of course.
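For completeness, a sketch of the working form: wrapping the value in a Variant before appending compiles, whereas appending the raw ubyte does not.

```d
import std.variant;

void main()
{
    Variant[] va;
    ubyte u = 0;
    // va ~= u;          // error: cannot append ubyte to Variant[]
    va ~= Variant(u);    // wrap explicitly, as the poster notes
    assert(va.length == 1 && va[0].get!ubyte == 0);
}
```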


Re: Are these (known) bugs?

2011-01-28 Thread Jonathan M Davis
On Friday, January 28, 2011 10:57:58 biozic wrote:
 Hi,
 
 I am playing with the to-be-released std.datetime, and encountered these
 errors (the last one concerns std.variant, actually), with dmd 2.052
 (Mac OS X 10.6):
 
 ---
 import std.array, std.datetime, std.variant;
 
 unittest {
  auto app = appender!(Interval!Date[]);
  auto interval = Interval!Date(Date(2000, 1, 1), Date(2011, 2, 3));
  app.put(interval);
  // Error: datetime.d(20208): Invariant Failure: begin is not before
 or equal to end.
 }

There are no known bugs in std.datetime. My guess would be that the issue lies with 
appender and Interval!(Date).init and/or something set to void if appender does 
that at all. But since Date.init would be equal to Date.init, it seems 
extremely 
bizarre that Interval!(Date).init would have its begin and end not be equal, 
which makes it less likely that Interval!(Date).init would be the problem. So, 
I 
don't know. The code is very thoroughly tested, but that doesn't mean that I 
didn't miss something, and it's possible that there's a bug in appender. I'm 
not 
at all familiar with how appender works. I'll have to take a look at it tonight.

But std.datetime has a ton of unit tests and, as far as I know, is currently 
passing all of them on Linux, Windows, and OS X (I don't know about FreeBSD). 
The most likely problems would be on OS X or FreeBSD, since I don't have a 
system with either OS X or FreeBSD, and there could be problems in time zones 
other than America/Los_Angeles - particularly on Windows where you can't easily 
test time zones other than the one that you're in - since all of my development 
has been done in America/Los_Angeles. But I'm not aware of any bugs. So, if you 
do find problems, please report them.

- Jonathan M Davis


Re: array of elements of various subtypes

2011-01-28 Thread Don

spir wrote:

Hello,

This fails:

class T0 {}
class T1 : T0 {}
class T2 : T0 {}

unittest {
auto t1 = new T1();
auto t2 = new T2();
T0[] ts = [t1, t2];
}

Error: cannot implicitly convert expression (t1) of type __trials__.T0 
to __trials__.T2
Error: cannot implicitly convert expression ([(__error),t2]) of type 
T2[] to T0[]


I guess it should be accepted due to explicit typing 'T0[]'. What do 
you think? D first determines the type of the last element (always the 
last one), here T2. Then, /ignoring/ the array's defined type, tries to 
cast other elements to the same type T2. 


No, it's not supposed to do that. What is supposed to happen is to use 
?: to determine the common type of the array. This will give a T0[] array.

Then, it attempts to implicitly cast to the defined type.

Your code should work. You've just hit an implementation bug.
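The ?: rule Don refers to can be seen directly (a sketch using the thread's class hierarchy; `static assert` checks the inferred common type at compile time):

```d
class T0 {}
class T1 : T0 {}
class T2 : T0 {}

void main()
{
    auto t1 = new T1;
    auto t2 = new T2;

    // the ?: expression yields the common base class T0, so the array
    // literal's element type should likewise be inferred as T0; per Don,
    // the error in the original post is an implementation bug
    static assert(is(typeof(true ? t1 : t2) == T0));
}
```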


Re: github syntax hilighting

2011-01-28 Thread Don

Jacob Carlborg wrote:

On 2011-01-26 20:30, Jonathan M Davis wrote:

On Wednesday, January 26, 2011 11:21:55 Brad Roberts wrote:

On 1/26/2011 7:13 AM, Steven Schveighoffer wrote:

Anyone have any clue why this file is properly syntax-aware:

https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lif 


etime.d

but this file isn't

https://github.com/D-Programming-Language/druntime/blob/master/src/core/t 


hread.d

I'm still not familiar at all with git or github...

-Steve


I'm going to guess it's filesize.  Totally a guess, but consider that
adding highlighting costs additional markup.  The file that's not
highlighted is over 100k, the file that is is only(!) 62k.


LOL. It won't even _show_ std.datetime. You may be on to something there.

- Jonathan M Davis


If github won't even show the file, you clearly have too much in one file. 
More than 34k lines of code (looking at the latest svn), are you kidding 
me? That's insane; std.datetime should clearly be a package. I don't 
know why Andrei and Walter keep insisting on having so much code in one 
file


It takes about 10 seconds to get syntax highlighting at the bottom of 
the file in TextMate.


You can compile the whole of Phobos in that time... g


Re: github syntax hilighting

2011-01-28 Thread Jonathan M Davis
On Friday, January 28, 2011 14:02:34 Don wrote:
 Jacob Carlborg wrote:
  On 2011-01-26 20:30, Jonathan M Davis wrote:
  On Wednesday, January 26, 2011 11:21:55 Brad Roberts wrote:
  On 1/26/2011 7:13 AM, Steven Schveighoffer wrote:
  Anyone have any clue why this file is properly syntax-aware:
  
  https://github.com/D-Programming-Language/druntime/blob/master/src/rt/
  lif
  
  etime.d
  
  but this file isn't
  
  https://github.com/D-Programming-Language/druntime/blob/master/src/cor
  e/t
  
  hread.d
  
  I'm still not familiar at all with git or github...
  
  -Steve
  
  I'm going to guess it's filesize.  Totally a guess, but consider that
  adding highlighting costs additional markup.  The file that's not
  highlighted is over 100k, the file that is is only(!) 62k.
  
  LOL. It won't even _show_ std.datetime. You may be on to something
  there.
  
  - Jonathan M Davis
  
  If github even won't show the file you clearly has too much in one file.
  More than 34k lines of code (looking at the latest svn), are you kidding
  me. That's insane, std.datetimem should clearly be a package. I don't
  know why Andrei and Walter keeps insisting on having so much code in one
  file
  
  It takes about 10 seconds to get syntax highlighting at the bottom of
  the file in TextMate.
 
 You can compile the whole of Phobos in that time... g

LOL. Yeah. Well, I wrote it using gvim, and it handles it just fine. And 
honestly, in some respects, it's actually easier to deal with it all in one 
file 
than it would be if it were split up. The unittests and documentation are 
probably 2/3 of the file anyway. So, yeah. It's large. But it works.

- Jonathan M Davis


Re: Are these (known) bugs?

2011-01-28 Thread Jonathan M Davis
On Friday 28 January 2011 13:55:03 Jonathan M Davis wrote:
 On Friday, January 28, 2011 10:57:58 biozic wrote:
  Hi,
  
  I am playing with the to-be-released std.datetime, and encountered these
  errors (the last one concerns std.variant, actually), with dmd 2.052
  (Mac OS X 10.6):
  
  ---
  import std.array, std.datetime, std.variant;
  
  unittest {
  
   auto app = appender!(Interval!Date[]);
   auto interval = Interval!Date(Date(2000, 1, 1), Date(2011, 2, 3));
   app.put(interval);
   // Error: datetime.d(20208): Invariant Failure: begin is not before
  
  or equal to end.
  }
 
 There no known bugs in std.datetime. My guess would be that the issue lies
 with appender and Interval!(Date).init and/or something set to void if
 appender does that at all. But since Date.init would be equal to
 Date.init, it seems extremely bizarre that Interval!(Date).init would have
 its begin and end not be equal, which makes it less likely that
 Interval!(Date).init would be the problem. So, I don't know. The code is
 very thoroughly tested, but that doesn't mean that I didn't miss
 something, and it's possible that there's a bug in appender. I'm not at
 all familiar with how appender works. I'll have to take a look at it
 tonight.
 
 But std.datetime has a ton of unit tests and, as far as I know, is
 currently passing all of them on Linux, Windows, and OS X (I don't know
 about FreeBSD). The most likely problems would be on OS X or FreeBSD,
 since I don't have a system with either OS X or FreeBSD, and there could
 be problems in time zones other than America/Los_Angeles - particularly on
 Windows where you can't easily test time zones other than the one that
 you're in - since all of my development has been done in
 America/Los_Angeles. But I'm not aware of any bugs. So, if you do find
 problems, please report them.

Okay. This pretty much _has_ to be either a bug in appender or in the 
compiler. It happens non-deterministically. The exact same program will sometimes 
work just fine and sometimes the invariant will fail. It almost has to be the 
case that the values being used were initialized with garbage data. It's the 
kind of thing that I'd expect if = void had been used. I don't see = void in 
the 
appender code, but it _is_ calling GC functions such as GC.extend and 
GC.qalloc. 
I don't know enough about appender or how those functions work, but I suspect 
that something isn't being properly initialized before it's used. Open a bug 
report on the issue. Someone with knowledge of appender is going to have to 
take 
a look at it.

- Jonathan M Davis


std.format example not working

2011-01-28 Thread Akakima
First, I would like to know if you are interested in receiving
comments and bug reports for DMD V1.

If yes, then the following example does not work:

http://www.digitalmars.com/d/1.0/phobos/std_format.html

import std.c.stdio;
import std.format;
void formattedPrint(...)
{
  void putc(char c)  {fputc(c, stdout);}
  std.format.doFormat(putc, _arguments, _argptr);
}


declaring: void putc(dchar c) fixes the problem.
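With that fix applied, the example reads as follows (a sketch of D1 code, untested here; the explicit `&putc` to form the delegate is an assumption about the original example's intent):

```d
// D1: doFormat expects a sink taking dchar, not char
import std.c.stdio;
import std.format;

void formattedPrint(...)
{
    void putc(dchar c) { fputc(c, stdout); }
    std.format.doFormat(&putc, _arguments, _argptr);
}
```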

Also, to a D newbie like me, _arguments and _argptr are confusing.

_arginfo or _argtype is more significant.
_argptr could be _arglist or _argvalue or _arguments

Those names would be a better fit to the explanations given.





[Issue 5496] 64bit: possible ABI issue in mars.h for D runtime.

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5496



--- Comment #2 from Iain Buclaw ibuc...@ubuntu.com 2011-01-28 04:54:50 PST ---
Edited summary for clarity's sake.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 3813] Bad writeln of arrays

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=3813



--- Comment #11 from Denis Derman denis.s...@gmail.com 2011-01-28 06:08:53 
PST ---
More generally, why don't writeln / formatValue / whatever builtin funcs used
for output simply recurse to format elements of collections. This is how things
work in all languages I know that provide default output forms. And this
default is equal, or very similar, to the literal notation.
Isn't this the only sensible choice? I think we agree default output format is
primarily for programmer's feedback.

Side-point: I would love default formats for composite thingies like struct and
class objects as well. Probably another enhancement request. Currently, the
code copied below writes out:

S
modulename.C

Great! very helpful ;-) I wish we would get, as default, an output similar to
the notation needed to create the objects:

S(1, 1.1, '1', "1.1", S.Sub(1))
C(1, 1.1, '1', "1.1", new C.Sub(1))

(Except for members with default values, possibly not provided in creation
code, but listed on output.)

At least we can write a toString... but it's a bit annoying to be forced to do
it, esp. for quickly written pieces of code, when a default would do the job
(probably most cases by far).

Denis

Code:

struct S {
struct Sub {
int j;
this (int j) {
this.j = j;
}
}
int i;
float f;
char c;
string s;
Sub sub;
}

class C {
static class Sub {
int j;
this (int j) {
this.j = j;
}
}
int i;
float f;
char c;
string s;
Sub sub;
this (int i, float f, char c, string s, Sub sub) {
this.i = i;
this.f = f;
this.c = c;
this.s = s;
this.sub = sub;
}
}

unittest {
S s = S(1, 1.1, '1', "1.1", S.Sub(1));
writeln(s);
C c = new C(1, 1.1, '1', "1.1", new C.Sub(1));
writeln(c);
}

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 2006] Appending empty array using ~= doesn't work

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2006


Denis Derman denis.s...@gmail.com changed:

   What|Removed |Added

 CC||denis.s...@gmail.com


--- Comment #5 from Denis Derman denis.s...@gmail.com 2011-01-28 06:39:21 PST 
---
(In reply to comment #3)
 I just got bitten by this again.
 
 float[][] arr;
 arr ~= [1.0]; // ok, adds a new element (an array of length 1).
 arr ~= []; // not ok, does nothing. :-(
 
 The last line there does nothing, apparently because the compiler interprets 
 it
 to be an array of array that's empty, which is the least useful 
 interpretation.
  So I find it unexpected that the compiler interprets it this way.  Once
 again... even though I already ran into it once.  I just forgot because it
 seems so silly for the compiler to choose the interpretation that it does.
 
 At the very least I'd like the compiler to generate an error saying it doesn't
 know how to interpret 'arr ~= []'.

Yes, an ambiguity that is bug-prone because both interpretations can run must
yield a compiler error.

Denis

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5498] New: array of elements of subtypes of a common supertype

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5498

   Summary: array of elements of subtypes of a common supertype
   Product: D
   Version: D2
  Platform: All
OS/Version: All
Status: NEW
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: denis.s...@gmail.com


--- Comment #0 from Denis Derman denis.s...@gmail.com 2011-01-28 06:59:48 PST 
---
Hello,

Seems there is no way to create an array of elements whose types are subtypes
of a common supertype:

void test0 () {
auto t1 = new T1();
auto t2 = new T2();
T0[] array = [t1, t2];
}
==
Error: cannot implicitly convert expression (t1) of type __trials__.T0 to
__trials__.T2
Error: cannot implicitly convert expression ([(__error),t2]) of type T2[] to
T0[]

The array value is obviously correct. But there is here no common compatible
type between T1  T2 that D could directly choose; instead, they have a common
super-type, which is the one intended as array element type.
D logically does not take our specification of the wished common type, in the
target part of the assignment, into account; since the source array must in any
case first exist at all. And it does not try to guess a common supertype by
climbing up the type hierarchy tree. This would be naughty-bug-prone since D
could find a common type which is not the intended one, precisely in case of
programmer error. And anyway Object would always be convenient, while this is
certainly not the programmer intention in the general case.
The core issue is that the language must first create a valid array value,
before any attempt to convert to any explicitly specified type (if ever this
feature were implemented). For this reason, cast or to! applied on the array
cannot help either; the original array must first be correct
according to D's rules for literals:

T0[] array = cast(T0[])[t1, t2];
==
Same error.

A workaround is to cast one of the elements, instead of the array, to the
intended common type (*):

void test2 () {
auto t1 = new T1();
auto t2 = new T2();
T0[] array1 = [cast(T0)t1, t2]; // ok
T0[] array2 = [t1, cast(T0)t2]; // ok
}

But this trick raises a conceptual problem: what we mean is specifying the
array literal's common type; what is in fact written is a cast of an element.
Far from obvious.

A library solution can be made via an array-feeding helper function; it uses
a variadic argument to avoid the user writing an array literal:

void feed (T) (ref T[] array, T[] elements...) {
array ~= elements;
}
void test3 () {
auto t1 = new T1();
auto t2 = new T2();
T0[] array;
array.feed(t1, t2);// means: array = [t1,t2];
array.feed(t2, t1);// means: array ~= [t2,t1];
writeln(array);
}
==
[arraydef.T1, arraydef.T2, arraydef.T2, arraydef.T1]

As shown, feed can also extend an existing array, replacing ~=, which fails
for the same reason as =. (That is why I called the func feed, not init.)
This is still a workaround, maybe a better one. Programmers need to know about
the issue and the provided solution, so this should be mentioned prominently
in the documentation about arrays.

A true solution would require having a way to hint the compiler about the
intended type, in literal syntax itself --a hint taken into account by the
language before any initial array is created. Just like postfixes 'w'  'd' for
chars and strings. The best I could think of is, by analogy, postfixing the
element type to the array literal:

T0[] array = [t1, t2]T0;

Not very nice :-(


Denis


(*) to! cannot be used at all because we hit another, unrelated, bug; namely
one about non mutually exclusive template constraints:

T0[] array1 = [to!(T0)(t1), t2];
==
/usr/include/d/dmd/phobos/std/conv.d(99): Error: template std.conv.toImpl(T,S)
if (!implicitlyConverts!(S,T)  isSomeString!(T)  isInputRange!(Unqual!(S))
 isSomeChar!(ElementType!(S))) toImpl(T,S) if (!implicitlyConverts!(S,T) 
isSomeString!(T)  isInputRange!(Unqual!(S))  isSomeChar!(ElementType!(S)))
matches more than one template declaration,
/usr/include/d/dmd/phobos/std/conv.d(559):toImpl(Target,Source) if
(implicitlyConverts!(Source,Target)) and
/usr/include/d/dmd/phobos/std/conv.d(626):toImpl(T,S) if (is(S : Object) 
is(T : Object))

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5221] entity.c: Merge Walter's list with Thomas'

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5221


Aziz Köksal aziz.koek...@gmail.com changed:

   What|Removed |Added

 CC||aziz.koek...@gmail.com


--- Comment #4 from Aziz Köksal aziz.koek...@gmail.com 2011-01-28 14:25:56 
PST ---
I researched this issue with named HTML entities and found several, different
lists out there.

I think the following list is the most complete and most accurate one:

http://www.w3.org/2003/entities/2007/w3centities-f.ent

Please consider mentioning this in the language specification.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5499] ICE(toir.c): delegate as alias template parameter, only with -release -inline

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5499


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

   Keywords|rejects-valid   |ice-on-valid-code
 CC||clugd...@yahoo.com.au
Summary|toir.c Internal error   |ICE(toir.c): delegate as
   ||alias template parameter,
   ||only with -release -inline


--- Comment #1 from Don clugd...@yahoo.com.au 2011-01-28 14:30:11 PST ---
I couldn't reproduce this until I used -inline as well as -release.

Looks similar to bug 4504, but this test case is shorter, and 4504 doesn't
require -release.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5499] ICE(toir.c): delegate as alias template parameter, only with -release -inline

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5499



--- Comment #2 from bearophile_h...@eml.cc 2011-01-28 14:32:57 PST ---
Sorry, I meant just -inline.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5221] entity.c: Merge Walter's list with Thomas'

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5221


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #5 from Don clugd...@yahoo.com.au 2011-01-28 14:36:53 PST ---
(In reply to comment #4)
 I researched this issue with named HTML entities and found several, different
 lists out there.
 
 I think the following list is the most complete and most accurate one:
 
 http://www.w3.org/2003/entities/2007/w3centities-f.ent
 
 Please consider mentioning this in the language specification.

A few hours ago I merged this patch into my fork of dmd. Complete source is
here:

https://github.com/donc/dmd/blob/master/src/entity.c

Would be great if you or someone else could compare that list, to the one
you've just posted.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 4287] opOpAssign!(~=) for std.array.Appender

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=4287



--- Comment #1 from bearophile_h...@eml.cc 2011-01-28 14:38:22 PST ---
The put() method is not easy to remember (other collections use insert(), etc.),
so for me ~= is simpler. The needed code for Appender, tested a
little:


/// Adds or appends data to the managed array.
void opOpAssign(string op:~, T)(T data)
{
this.put(data);
}


It allows one to write:

import std.stdio, std.array;
void main() {
auto a = appender!(int[]);
a ~= [1, 2];
a ~= 3;
writeln(a.data);
}

--

To define an appender of integers I suggest a syntax like:
auto a = appender!(int);

Instead of:
auto a = appender!(int[]);

because the significant type here is of the items added to the appender. The
fact that Appender uses an array to store such items is an implementation
detail the user must be able to ignore (an Appender may be implemented with a
dynamic array of fixed-sized arrays of items too, like some C++ deque data
structures, to decrease large memory allocations, at the cost of a slower O(n)
data method to convert the items in an array).

--

An O(log n) opIndex() would also be useful for Appender; it would help avoid
some usages of the data method.

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5221] entity.c: Merge Walter's list with Thomas'

2011-01-28 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5221



--- Comment #6 from Iain Buclaw ibuc...@ubuntu.com 2011-01-28 16:20:41 PST ---
(In reply to comment #5)
 (In reply to comment #4)
  I researched this issue with named HTML entities and found several, 
  different
  lists out there.
  
  I think the following list is the most complete and most accurate one:
  
  http://www.w3.org/2003/entities/2007/w3centities-f.ent
  
  Please consider mentioning this in the language specification.
 
 A few hours ago I merged this patch into my fork of dmd. Complete source is
 here:
 
 https://github.com/donc/dmd/blob/master/src/entity.c
 
 Would be great if you or someone else could compare that list, to the one
 you've just posted.

There are quite a lot of additions, and the odd difference in between. I can do
an update, though I guess it depends on how much you want to put in.

There are entities whose value is larger than an unsigned short, e.g.:

b.nu, 0x1D6CE,
b.Omega,  0x1D6C0,
b.omega,  0x1D6DA,
Bopf, 0x1D539,
bopf, 0x1D553,

Which then leads to question #2: does the parser allow '\&b.nu;'?

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---