Re: std.datetime questions

2011-03-12 Thread Jonathan M Davis
On Friday 11 March 2011 19:34:26 Jonathan M Davis wrote:
 On Friday, March 11, 2011 19:18:21 Nicholas wrote:
  Thanks for the information.  I'll play with it when I'm at work again and
  then report my findings.
  
  
  In the interim, my timezone is EST.  I used TimeZone America/New_York on
  32-bit WinXP SP 3.
 
 I assume that you were using WindowsTimeZone then?
 
  Overall, the library seems like it offers a lot.  I found a glaring bug
  in std.date as well with EST, which was more harmful than the ones I
  mentioned now.
 
 Yeah. std.date is pretty broken. So, there hasn't really been even a decent
 solution for date/time stuff in Phobos for a while, but std.datetime should
 fix that. And it's definitely designed in such a way that it's at least
 _supposed_ to handle time zones really well and fairly painlessly. Only
 time and usage will tell how good the design really is though. I think
 that it's quite solid overall, but I'm not about to claim that it's
 perfect. And while bugs in it should be rare given how thoroughly tested
 it is, I'm not about to claim that there definitely aren't any. Definitely
 report any that you find.
 
 If I have time, I may mess around with America/New_York a bit this weekend
 and see if anything obvious pops up. Glancing at WindowsTimeZone, I see
 that it's missing some unit tests, so I definitely need to add some,
 regardless of whether there's currently anything wrong with it.

Okay. It looks like WindowsTimeZone gets the UTC offsets reversed. So, in the 
case of America/New_York, you'd get UTC+5 instead of UTC-5.

http://d.puremagic.com/issues/show_bug.cgi?id=5731

I'll try and get it fixed this weekend. I should have caught that before, but 
apparently I forgot to create all of the appropriate tests for WindowsTimeZone.
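
(For reference, a check along these lines would catch a reversed offset. This is only a sketch, assuming std.datetime's WindowsTimeZone.getTimeZone and SysTime.utcOffset; it is Windows-only.)

version (Windows)
{
    import std.datetime;

    unittest
    {
        auto tz  = WindowsTimeZone.getTimeZone("Eastern Standard Time");
        auto jan = SysTime(DateTime(2011, 1, 15, 12, 0, 0), tz);

        // America/New_York in winter is UTC-5, not UTC+5.
        assert(jan.utcOffset == dur!"hours"(-5));
    }
}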

- Jonathan M Davis


SpanMode.breadth -- misnomer?

2011-03-12 Thread %u
It seems to me that the SpanMode.breadth option when enumerating a
directory does not actually do breadth-first search, but rather
performs a kind of depth-first preorder traversal.

In other words, to me, this is depth-first postorder traversal:

\A
\A\1
\A\1\x
\A\1\y
\A\2
\B
\B\1

whereas this is depth-first preorder traversal:

\A\1\x
\A\1\y
\A\1
\A\2
\A
\B\1
\B

and whereas **this** is a true breadth-first traversal:

\A
\B
\A\1
\A\2
\B\1
\A\1\x
\A\1\y


Is that correct, and so is breadth actually a misnomer? I found it
really confusing that it didn't work level-by-level.
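
(For comparison, a true level-by-level walk needs a queue of directories. A minimal sketch using std.file.dirEntries with SpanMode.shallow, purely to illustrate the expected visit order, not how std.file currently behaves:)

import std.file : dirEntries, SpanMode;
import std.stdio : writeln;

// Hypothetical level-by-level walk: finish each directory level before
// descending, by keeping a queue of directories still to expand.
void breadthFirst(string root)
{
    string[] queue = [root];
    while (queue.length)
    {
        auto dir = queue[0];
        queue = queue[1 .. $];
        foreach (entry; dirEntries(dir, SpanMode.shallow))
        {
            writeln(entry.name);
            if (entry.isDir)
                queue ~= entry.name;   // expand this directory on a later pass
        }
    }
}

void main()
{
    breadthFirst(".");
}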


Re: SpanMode.breadth -- misnomer?

2011-03-12 Thread spir

On 03/12/2011 10:22 AM, %u wrote:

It seems to me that the SpanMode.breadth option when enumerating a
directory does not actually do breadth-first search, but rather
performs a kind of depth-first preorder traversal.

In other words, to me, this is depth-first postorder traversal:

\A
\A\1
\A\1\x
\A\1\y
\A\2
\B
\B\1

whereas this is depth-first preorder traversal:

\A\1\x
\A\1\y
\A\1
\A\2
\A
\B\1
\B

and whereas **this** is a true breadth-first traversal:

\A
\B
\A\1
\A\2
\B\1
\A\1\x
\A\1\y


Is that correct, and so is breadth actually a misnomer? I found it
really confusing that it didn't work level-by-level.


You are right about depth / breadth.

(But I also have always found preorder/postorder misleading, or rather 
inversed. For me, the second one should be called postorder, since it postpones 
app on A after app on A's subnodes. A better, non-misleading, naming may be 
branch-first (case 1 above) vs children-first (case 2).)


Denis
--
_
vita es estrany
spir.wikidot.com



Re: SpanMode.breadth -- misnomer?

2011-03-12 Thread %u
So apparently, it's incredibly hard (if not impossible) to have a true
breadth-first search that scales up reasonably well to, say, an entire
volume of data:
stackoverflow.com/questions/5281626/breadth-first-directory-traversal-
is-it-possible-with-olog-n-memory

I suggest we rename the option to something else and deprecate the
name?


Error message issue

2011-03-12 Thread Russel Winder
Coming from Java, C++, etc. where + is used for string concatenation I
initially wrote:

assert ( iterative ( item[0] ) == item[1] , "iterative ( " + to ! string ( item[0] ) + " ) = " + to ! string ( item[1] ) ) ;

which results in:

factorial_d2.d(45): Error: Array operation "iterative ( " + to(item[0u]) + " ) = " + to(item[1u]) not implemented

which does seem a bit off the wall.  Replacing + with ~ fixes the
problem, but the error message above wasn't that helpful in being able
to deduce this.
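
(For illustration, a minimal self-contained version of the corrected assertion; the iterative function here is just a stand-in factorial so the snippet compiles on its own:)

import std.conv : to;

ulong iterative(uint n)   // stand-in for the factorial from the original code
{
    ulong result = 1;
    foreach (i; 2 .. n + 1)
        result *= i;
    return result;
}

void main()
{
    immutable item = [5, 120];
    // '~' concatenates strings in D; '+' is purely arithmetic.
    assert(iterative(item[0]) == item[1],
           "iterative(" ~ to!string(item[0]) ~ ") = " ~ to!string(item[1]));
}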

I think this is somewhat more than an RTFM or "you should know the basics of
the language" issue, in that D is very like C and Java and yet in this one
place has chosen a different symbol for the operation.

Not a big issue, just irritating.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: std.datetime questions

2011-03-12 Thread Andrei Alexandrescu

On 3/12/11 2:32 AM, Jonathan M Davis wrote:

I'll try and get it fixed this weekend. I should have caught that before, but
apparently I forgot to create all of the appropriate tests for WindowsTimeZone.


Oh noes! :o)

Andrei


LDC2: Where do bug reports go?

2011-03-12 Thread dsimcha
I've noticed that the issue tracker tab on the LDC2 project 
(https://bitbucket.org/prokhin_alexey/ldc2/overview) is missing.  First, 
why is it missing?  Second, if it's missing on purpose, then where is 
the correct place to file bug reports?


Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

Thank you.

Regarding scalability: In my experience the fastest network handling for 
multiple concurrent requests is done asynchronously using select or epoll. 
The current wrapper would probably use threading and messages to handle 
multiple concurrent requests, which is not as efficient.


Usually you only need this kind of scalability for server side 
networking and not client side like libcurl is providing so I do not see 
this as a major issue for an initial version.


I do know how to support epoll/select based curl and thereby get better 
scalability, and that would fortunately just be an extension to the API 
I've shown. Currently I will focus on getting the common things finished 
and rock solid.


/Jonas


On 11/03/11 16.30, dsimcha wrote:

I don't know much about this kind of stuff except that I use it for very simple
use cases occasionally.  One thing I'll definitely give your design credit for,
based on your examples, is making simple things simple.  I don't know how it
scales to more complex use cases (not saying it doesn't, just that I'm not
qualified to evaluate that), but I definitely would use this.  Nice work.

BTW, what is the license status of libcurl?  According to Wikipedia it's MIT
licensed.  Where does that leave us with regard to the binary attribution issue?

== Quote from Jonas Drewsen (jdrew...@nospam.com)'s article

Hi,
 So I've spent some time trying to wrap libcurl for D. There is a lot
of things that you can do with libcurl which I did not know so I'm
starting out small.
For now I've created all the declarations for the latest public curl C
api. I have put that in the etc.c.curl module.
On top of that I've created a more D like api as seen below. This is
located in the 'etc.curl' module. What you can see below currently works
but before proceeding further down this road I would like to get your
comments on it.
//
// Simple HTTP GET with sane defaults
// provides the .content, .headers and .status
//
writeln( Http.get("http://www.google.com").content );
//
// GET with custom data receiver delegates
//
Http http = new Http("http://www.google.dk");
http.setReceiveHeaderCallback( (string key, string value) {
writeln(key ~ ": " ~ value);
} );
http.setReceiveCallback( (string data) { /* drop */ } );
http.perform;
//
// POST with some timeouts
//
http.setUrl("http://www.testing.com/test.cgi");
http.setReceiveCallback( (string data) { writeln(data); } );
http.setConnectTimeout(1000);
http.setDataTimeout(1000);
http.setDnsTimeout(1000);
http.setPostData("The quick");
http.perform;
//
// PUT with data sender delegate
//
string msg = "Hello world";
size_t len = msg.length; /* using chunked transfer if omitted */
http.setSendCallback( delegate size_t(char[] data) {
  if (msg.empty) return 0;
  auto l = msg.length;
  data[0..l] = msg[0..$];
  msg.length = 0;
  return l;
  },
  HttpMethod.put, len );
http.perform;
//
// HTTPS
//
writeln(Http.get("https://mail.google.com").content);
//
// FTP
//
writeln(Ftp.get("ftp://ftp.digitalmars.com/sieve.ds",
  "./downloaded-file"));
// ... authentication, cookies, interface select, progress callback
// etc. is also implemented this way.
/Jonas






Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 11/03/11 19.31, Jacob Carlborg wrote:

On 2011-03-11 16:20, Jonas Drewsen wrote:

Hi,

So I've spent some time trying to wrap libcurl for D. There is a lot of
things that you can do with libcurl which I did not know so I'm starting
out small.

For now I've created all the declarations for the latest public curl C
api. I have put that in the etc.c.curl module.

On top of that I've created a more D like api as seen below. This is
located in the 'etc.curl' module. What you can see below currently works
but before proceeding further down this road I would like to get your
comments on it.

//
// Simple HTTP GET with sane defaults
// provides the .content, .headers and .status
//
writeln( Http.get("http://www.google.com").content );

//
// GET with custom data receiver delegates
//
Http http = new Http("http://www.google.dk");
http.setReceiveHeaderCallback( (string key, string value) {
writeln(key ~ ": " ~ value);
} );
http.setReceiveCallback( (string data) { /* drop */ } );
http.perform;

//
// POST with some timeouts
//
http.setUrl("http://www.testing.com/test.cgi");
http.setReceiveCallback( (string data) { writeln(data); } );
http.setConnectTimeout(1000);
http.setDataTimeout(1000);
http.setDnsTimeout(1000);
http.setPostData("The quick");
http.perform;

//
// PUT with data sender delegate
//
string msg = "Hello world";
size_t len = msg.length; /* using chunked transfer if omitted */

http.setSendCallback( delegate size_t(char[] data) {
if (msg.empty) return 0;
auto l = msg.length;
data[0..l] = msg[0..$];
msg.length = 0;
return l;
},
HttpMethod.put, len );
http.perform;

//
// HTTPS
//
writeln(Http.get("https://mail.google.com").content);

//
// FTP
//
writeln(Ftp.get("ftp://ftp.digitalmars.com/sieve.ds",
"./downloaded-file"));


// ... authentication, cookies, interface select, progress callback
// etc. is also implemented this way.


/Jonas


Is there support for other HTTP methods/verbs in the D wrapper, like
delete?



Yes.. all methods in libcurl are supported.

/Jonas


Re: LDC2: Where do bug reports go?

2011-03-12 Thread Trass3r

I guess he just forgot to set up a public issue tracker.


Re: Google Summer of Code 2011 application

2011-03-12 Thread Nebster

On 11/03/2011 20:03, Gary Whatmore wrote:

Nebster Wrote:


On 10/03/2011 19:36, Trass3r wrote:

How about adding more stuff to CTFE, esp. pointers and classes?


Or get Algebraic data types to typecheck in the compiler :)


Stop trolling. We should really ban these Tango fanboys here.

Nobody really wants to turn D into an ivory tower hell with all the functional 
language features. Even bearophile was trolling recently. Who remembers the 
'where' syntax? *Vomit*

Nick S. is right, we should use HTML for our documents too. Maybe some stupid 
typography expert cares, but the majority (99%) of users don't. They're used to 
browsing broken HTML pages; DDOC is good enough for them. It has also shown 
potential as a general typesetting system for technical documentation on the 
digitalmars site.


Haha, I hate Tango.
Phobos is better in my opinion (or I prefer it at least)! I just read in 
the documentation that it is a possible extension so I thought it would 
be a good Google Code project :P


Re: Google Summer of Code 2011 application

2011-03-12 Thread Daniel Gibson

On 12.03.2011 18:16, Nebster wrote:

On 11/03/2011 20:03, Gary Whatmore wrote:

Nebster Wrote:


On 10/03/2011 19:36, Trass3r wrote:

How about adding more stuff to CTFE, esp. pointers and classes?


Or get Algebraic data types to typecheck in the compiler :)


Stop trolling. We should really ban these Tango fanboys here.

Nobody really wants to turn D into an ivory tower hell with all the
functional language features. Even bearophile was trolling recently.
Why remembers the 'where' syntax. *Vomit*

Nick S. is right, we should use HTML for our documents too. Maybe some
stupid typography expert cares, but the majority (99%) of users don't.
They've used to browsing broken HTML pages, DDOC is good enough for
them. It has also shown potential as a general typesetting system for
technical documentation in the digitalmars site.


Haha, I hate tango .


Come on, don't be an idiot. Gary is a troll, just ignore him.


Phobos is better in my opinion (or I prefer it at least)!


No reason to hate Tango.


I just read in
the documentation that it is a possible extension so I thought it would
be a good Google Code project :P




Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 11/03/11 22.21, Jesse Phillips wrote:

I'll make some comments on the API. Do we have to choose Http/Ftp...? The URI 
already contains this. I could see being able to specifically request one or 
the other for performance, or so that "www.google.com" works.


That is a good question.

The problem with creating a grand unified Curl class that does it all is 
that each protocol supports different things ie. http supports cookie 
handling and http redirection, ftp supports passive/active mode and dir 
listings and so on.


I think it would confuse the user of the API if e.g. he were allowed to 
set cookies on his ftp request.


The protocols supported (Http, Ftp,... classes) do have a base class 
Protocol that implements common things like timeouts etc.




And what about properties? They tend to be very nice instead of set methods. 
examples below.


Actually I thought of this and went the usual C++ way of _not_ using 
public properties but using accessor methods. Are public properties 
accepted as the D way, and if so, what about the usual reasons for 
using accessor methods (like encapsulation and tolerance to 
future changes to the API)?


I do like the shorter onHeader/onContent much better though :)

/Jonas


Jonas Drewsen Wrote:


//
// Simple HTTP GET with sane defaults
// provides the .content, .headers and .status
//
 writeln( Http.get("http://www.google.com").content );

//
// GET with custom data receiver delegates
//
 Http http = new Http("http://www.google.dk");
 http.setReceiveHeaderCallback( (string key, string value) {
 writeln(key ~ ": " ~ value);
} );
http.setReceiveCallback( (string data) { /* drop */ } );
http.perform;


http.onHeader = (string key, string value) {...};
http.onContent = (string data) { ... };
http.perform();




Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 12/03/11 05.30, Ary Manzana wrote:

On 3/11/11 12:20 PM, Jonas Drewsen wrote:

Hi,

So I've spent some time trying to wrap libcurl for D. There is a lot of
things that you can do with libcurl which I did not know so I'm starting
out small.

For now I've created all the declarations for the latest public curl C
api. I have put that in the etc.c.curl module.

On top of that I've created a more D like api as seen below. This is
located in the 'etc.curl' module. What you can see below currently works
but before proceeding further down this road I would like to get your
comments on it.


I *love* it.

All APIs should be like yours. One-liners for what you want right now.
If it's a little more complex, some more lines. This is perfect.

Congratulations!


Thank you! Words like these keep up the motivation.

/Jonas


Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 11/03/11 17.33, Vladimir Panteleev wrote:

On Fri, 11 Mar 2011 17:20:38 +0200, Jonas Drewsen jdrew...@nospam.com
wrote:


writeln( Http.get("http://www.google.com").content );


Does this return a string? What if the page's encoding isn't UTF-8?

Data should probably be returned as void[], similar to std.file.read.


Currently it returns a string, but should probably return void[] as you 
suggest.


Maybe the interface should be something like this to support misc. 
encodings (like the std.file.readText does):


class Http {
struct Result(S) {
S content;
...
}
static Result!S get(S = void[])(in string url);

}

Actually I just took a look at Andrei's std.stream2 suggestion and 
Http/Ftp... Transports would be pretty neat to have as well for reading 
formatted data.


I'll follow the newly spawned Stream proposal thread on this one :)

/Jonas


Re: LDC2: Where do bug reports go?

2011-03-12 Thread Moritz Warning
On Sat, 12 Mar 2011 11:58:55 -0500, dsimcha wrote:

 I've noticed that the issue tracker tab on the LDC2 project
 (https://bitbucket.org/prokhin_alexey/ldc2/overview) is missing.  First,
 why is it missing?  Second, if it's missing on purpose, then where is
 the correct place to file bug reports?

You can also put them here http://dsource.org/projects/ldc/newticket
or here http://bitbucket.org/lindquist/ldc for now.


Re: Stream Proposal

2011-03-12 Thread Jonas Drewsen

On 12/03/11 04.54, Daniel Gibson wrote:

On 12.03.2011 04:40, dsimcha wrote:

On 3/11/2011 10:14 PM, Jonathan M Davis wrote:

On Friday, March 11, 2011 18:29:42 dsimcha wrote:

3. std.stdio.File should be moved to the new stream module but publicly
imported by std.stdio. It should also grow some primitives that make it
into an input range of characters. These can be implemented with
buffering under the hood for efficiency.


??? Why? File is not a stream. It's a separate thing. I see no reason
to combine
it with streams. I don't think that the separation between std.stdio and
std.stream as it stands is a problem. The problem is the design of
std.stream.

- Jonathan M Davis


Isn't file I/O a pretty important use case for streams, i.e. the main
one?


Network I/O is also very important.

BTW, Andrei proposed a stream API a while ago[1] which was also
discussed back then; can't we use that as a basis for further
discussions about streams?

By the way, I'd prefer class-based streams (and even Andrei proposed
that in aforementioned discussion).

Cheers,
- Daniel

[1]
http://lists.puremagic.com/pipermail/digitalmars-d/2010-December/thread.html#91169


I like this proposal.

And regarding the question about non-blocking streams, I'm 
definitely a proponent of them. The standard C++ library streaming 
support is really not geared towards this, and therefore it is difficult 
to get non-blocking streaming right.


/Jonas




Re: Curl support RFC

2011-03-12 Thread Lutger Blijdestijn
Jonas Drewsen wrote:

 On 11/03/11 22.21, Jesse Phillips wrote:
 I'll make some comments on the API. Do we have to choose Http/Ftp...? The
 URI already contains this, I could see being able to specifically request
 one or the other for performance or so www.google.com works.
 
 That is a good question.
 
 The problem with creating a grand unified Curl class that does it all is
 that each protocol supports different things ie. http supports cookie
 handling and http redirection, ftp supports passive/active mode and dir
 listings and so on.
 
 I think it would confuse the user of the API if e.g. he were allowed to
 set cookies on his ftp request.
 
 The protocols supported (Http, Ftp,... classes) do have a base class
 Protocol that implements common things like timouts etc.
 
 
 And what about properties? They tend to be very nice instead of set
 methods. examples below.
 
 Actually I thought off this and went the usual C++ way of _not_ using
 public properties but use accessor methods. Is public properties
 accepted as the D way and if so what about the usual reasons about why
 you should use accessor methods (like encapsulation and tolerance to
 future changes to the API)?
 
 I do like the shorter onHeader/onContent much better though :)
 
 /Jonas

Properties *are* accessor methods, with some sugar. In fact you already have 
used them, try it:

http.setReceiveHeaderCallback =  (string key, string value) {
writeln(key ~ ": " ~ value);
};

Marking a function with @property just signals its intended use, in which 
case it's nicer to drop the get/set prefixes. Supposedly using parentheses 
with such declarations will be outlawed in the future, but I don't think 
that's the case currently.
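
(For illustration, a stripped-down sketch of an @property setter/getter pair; the class and member names here are made up, not the proposed etc.curl API:)

import std.stdio : writeln;

class Downloader
{
    private void delegate(string, string) headerDg;

    // setter: used as  d.onHeader = dg;
    @property void onHeader(void delegate(string, string) dg)
    {
        headerDg = dg;
    }

    // getter: used as  auto dg = d.onHeader;
    @property void delegate(string, string) onHeader()
    {
        return headerDg;
    }
}

void main()
{
    auto d = new Downloader;
    d.onHeader = (string key, string value) { writeln(key, ": ", value); };

    auto dg = d.onHeader;              // fetch the callback via the getter
    dg("Content-Type", "text/plain");  // and invoke it
}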

 Jonas Drewsen Wrote:

 //
 // Simple HTTP GET with sane defaults
 // provides the .content, .headers and .status
 //
 writeln( Http.get("http://www.google.com").content );

 //
 // GET with custom data receiver delegates
 //
 Http http = new Http("http://www.google.dk");
 http.setReceiveHeaderCallback( (string key, string value) {
 writeln(key ~ ": " ~ value);
 } );
 http.setReceiveCallback( (string data) { /* drop */ } );
 http.perform;

 http.onHeader = (string key, string value) {...};
 http.onContent = (string data) { ... };
 http.perform();



Re: Error message issue

2011-03-12 Thread spir

On 03/12/2011 01:45 PM, Russel Winder wrote:

Coming from Java, C++, etc. where + is used for string concatenation I
initially wrote:

 assert ( iterative ( item[0] ) == item[1] , "iterative ( " + to ! string ( 
item[0] ) + " ) = " + to ! string ( item[1] ) ) ;

which results in:

 factorial_d2.d(45): Error: Array operation "iterative ( " + to(item[0u]) + 
" ) = " + to(item[1u]) not implemented

which does seem a bit off the wall.  Replacing + with ~ fixes the
problem, but the error message above wasn't that helpful in being able
to deduce this.

I think this is somewhat more than a RTFM, or you should know the
basics of the language in that D is very like C and Java and yet in
this one place has chosen a different symbol for the operation.

Not a big issue, just irritating.


Actually, I don't find this message bad, compared to many others. In fact, D 
gives you rather too much information. "Operation '+' not defined for these 
elements." may do the job better.
Note how happy you are that '+' is actually not defined for those types... 
(IYSWIM)
Finally, the design decision of _not_ conflating '+' with concatenation is a very 
good one imo. The fact that most mainstream PLs (*) do the wrong thing is not a 
worthy argument (for me). Also, the choice of '~' looks good, doesn't it?


Denis

(*) Even python, for once. Seems most Python design errors are C legacy, due to 
Guido van Rossum's wish to please C/Unix hackers.

--
_
vita es estrany
spir.wikidot.com



Re: Curl support RFC

2011-03-12 Thread Jesse Phillips
Jonas Drewsen Wrote:

 On 11/03/11 22.21, Jesse Phillips wrote:
  I'll make some comments on the API. Do we have to choose Http/Ftp...? The 
  URI already contains this, I could see being able to specifically request 
  one or the other for performance or so www.google.com works.
 
 That is a good question.
 
 The problem with creating a grand unified Curl class that does it all is 
 that each protocol supports different things ie. http supports cookie 
 handling and http redirection, ftp supports passive/active mode and dir 
 listings and so on.
 
 I think it would confuse the user of the API if e.g. he were allowed to 
 set cookies on his ftp request.
 
 The protocols supported (Http, Ftp,... classes) do have a base class 
 Protocol that implements common things like timouts etc.

Ah. I guess I was just thinking about the case where you want to download some file: you 
don't really care where you are getting it from, you just have the URL and are 
ready to go.

  And what about properties? They tend to be very nice instead of set 
  methods. examples below.
 
 Actually I thought off this and went the usual C++ way of _not_ using 
 public properties but use accessor methods. Is public properties 
 accepted as the D way and if so what about the usual reasons about why 
 you should use accessor methods (like encapsulation and tolerance to 
 future changes to the API)?
 
 I do like the shorter onHeader/onContent much better though :)

D was originally very friendly with properties. Your code can at this moment 
be written: 

http.setReceiveHeaderCallback = (string key, string value) {
writeln(key ~ ": " ~ value);
};

But this is going to be deprecated in favor of the @property attribute. You are 
probably aware of properties in C#, so yes, D is fine with public fields and 
functions that look like public fields.

Otherwise this looks really good and I do hope to see it in Phobos.



Re: Is DMD 2.052 32-bit?

2011-03-12 Thread Don

lurker wrote:

Jonathan M Davis Wrote:


On Wednesday 09 March 2011 17:56:13 Walter Bright wrote:

On 3/9/2011 4:30 PM, Jonathan M Davis wrote:

Much as I'd love to have a 64-bit binary of dmd, I don't think that the
gain is even vaguely worth the risk at this point.

What is the gain? The only thing I can think of is some 64 bit OS
distributions are hostile to 32 bit binaries.
Well, the fact that you then have a binary native to your system is obviously a 
gain (and is likely the one which people will cite most often), and that _does_ 
count for quite a lot. However, regardless of that, it's actually pretty easy to 
get dmd to run out of memory when compiling if you do much in the way of CTFE or 
template stuff. Granted, fixing some of the worst memory-related bugs in dmd will 
go a _long_ way towards fixing that, but even if they are, you're theoretically 
eventually supposed to be able to do pretty much anything at compile time which 
you can do at runtime in SafeD. And using enough memory that you require the 64-
bit address space would be one of the things that you can do in SafeD when 
compiling for 64-bit. As long as the compiler is only 32-bit, you can't do that 
at compile time even though you can do it at runtime (though the current 
limitations of CTFE do reduce the problem in that you can't do a lot of stuff at 
compile time period).


In any case, the fact that dmd runs out of memory fairly easily makes having a 
64-bit version which could use all of my machine's memory really attractive. And 
honestly, having an actual, 64-bit binary to run on a 64-bit system is something 
that people generally want, and it _is_ definitely a problem to get a 32-bit 
binary into the 64-bit release of a Linux distro.


Truth be told, I would have thought that it would be a given that there would be 
a 64-bit version of dmd when going to support 64-bit compilation and was quite 
surprised when that was not your intention.


I think porting DMD to 64 bits would be a pragmatic solution to this. 



Computers are getting more memory faster than Walter is able to fix possible 
leaks in DMD.


No. This has nothing to do with memory leaks. The slowdown and excessive 
memory consumption is caused by a few lines of code. Fixing that 'bug' 
(really, the existing CTFE memory management (ie, non-existent!) was a 
quick hack to get things running) won't just make it consume 2 or 3 
times less memory. We're talking 100x, 1000x, etc.


Re: Uh... destructors?

2011-03-12 Thread Don

Bruno Medeiros wrote:

On 23/02/2011 17:47, Steven Schveighoffer wrote:

On Wed, 23 Feb 2011 12:28:33 -0500, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


On 2/23/11 11:16 AM, Steven Schveighoffer wrote:



Just because a function is not marked @safe does not mean it is unsafe.
It just means you can do things the compiler cannot verify are safe, but
that you know are actually safe. I showed you earlier an example of a
safe pure function that uses malloc and free.

Programmers are allowed to make conceptually safe functions which are
not marked as @safe, why not the same for pure functions?

-Steve


I understand that. My point is that allowing unsafe functions to be
pure dilutes pure to the point of uselessness.


And that's not a point. It's an unsupported opinion.

pure has nothing to do with safety, it has to do with optimization. Safe
functions are no more optimizable than unsafe ones. Safety has to do
with reducing memory bugs.

The two concepts are orthogonal, I have not been convinced otherwise.

-Steve


pure has something to do with @safety. (Also, it has to do with more 
than just optimization; it also affects code readability.)


In order to gain any benefit from calling pure functions (whether the 
benefit is programmer code readability or compiler optimization) it 
needs to be determined from the pure function's signature what is the 
transitively reachable mutable state that the function may access. 
Normally this state is whatever is transitively reachable from the 
parameters. However, if you allow *arbitrary* _pointer arithmetic_ you 
could legally manipulate any mutable data in your program from within 
the pure function. This would make the pure attribute useless because it 
would not offer any additional guarantees whatsoever over an impure 
function. So such a rule is necessary so that, for example, the 
following function should not be allowed to be pure:


pure int func(int* ptr, int ix) {
  return (*(ptr + ix))++;
}


I don't think this makes the pure attribute useless, since you still 
only get a violation of purity if you are smuggling in the address of a 
global via some other parameter (in this case, ix).
You just can't do strong purity optimisation if there are any pointer 
parameters. But that remains true even if you disallow pointer 
arithmetic inside pure functions.


I don't think it can violate weak purity, unless the caller deliberately 
smuggles the address of a global. So I don't know if this needs to be 
prevented, or not.
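
(A minimal sketch of the smuggling scenario described above; the names are hypothetical:)

int g;                      // module-level mutable state

pure int bump(int* p)       // weakly pure: may modify what p points to
{
    return ++*p;
}

void main()
{
    bump(&g);               // the (impure) caller hands the pure function a global
    assert(g == 1);
}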



I'm not sure if this is what Andrei had in mind with regards to @safety.
It should be noted that none of this implies that free() should be 
disallowed in pure functions. And indeed I think that if malloc() is 
allowed, free() can and should be allowed as well.


Re: Code Sandwiches

2011-03-12 Thread Nick Sabalausky
David Nadlinger s...@klickverbot.at wrote in message 
news:ilgjnj$1oui$1...@digitalmars.com...
 On 3/11/11 11:17 PM, Jonathan M Davis wrote:
 On Friday, March 11, 2011 11:18:59 David Nadlinger wrote:
 My question from above still remains: Is there any scientific data to
 back this assumption?

 I don't know. I haven't gone looking. However, I know that there's lots 
 of
 anecdotal evidence for it. There's probably experimental evidence as 
 well, but I
 haven't gone looking for it.

 The reason I'm asking is that while I can understand that you might 
 personally prefer light text on dark backgrounds, I don't think that this 
 can be generalized so easily.


That may be a very fair point.

 I don't know of any research specifically studying eyestrain, but there 
 are results indicating that *black-on-white* text is significantly easier 
 to read, e.g. Hall and Hanna (2004) [1] or Bucher and Baumgartner (2007) 
 [2].

 Also, while I don't want to doubt that you know lots of anecdotal evidence 
 favoring light-on-dark text, I think there is probably more for the 
 opposite: Just look at the standard text settings of most widely used 
 OS/DEs out there, or at the color scheme of the most frequented web sites, 
 etc.

 Light-on-dark color schemes certainly had their advantages on early 
 monitors (flicker, tearing), but with today's sophisticated screens, I 
 personally prefer dark text on light backgrounds. Even with a brightness 
 setting matching the ambient light (many people I know have turned the 
 backlight up way too high), longer blocks of white text on a dark 
 background have the nasty habit of leaving an after-image in my eyes, as 
 demonstrated by this site: http://www.ironicsans.com/owmyeyes/.


That's a very poor example of light-on-dark: It's all-bold, pure-white on 
pure-black. Even light-on-dark fans don't do that. The white is normally a 
grey.



 [1] http://sigs.aisnet.org/sighci/bit04/BIT_Hall.pdf
 [2] 
 http://www.psycho.uni-duesseldorf.de/abteilungen/aap/Dokumente/Ergonomics-2007-Text-background-polarity.pdf

Neither of those (and from what I noticed when I skimmed through, none of 
the experiments they cited) appear to take into account whether the subject 
is more accustomed to positive contrast or negative contrast. Since most 
people are more accustomed to positive contrast I would expect the findings 
to be biased in favor of positive contrast.

FWIW, I found the white backgrounds of those PDFs to be rather eye-searing. 
I eventually ended up looking for a "use system color settings" option in my 
PDF reader.





Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 12/03/11 20.44, Jesse Phillips wrote:

Jonas Drewsen Wrote:


On 11/03/11 22.21, Jesse Phillips wrote:

I'll make some comments on the API. Do we have to choose Http/Ftp...? The URI 
already contains this, I could see being able to specifically request one or 
the other for performance or so www.google.com works.


That is a good question.

The problem with creating a grand unified Curl class that does it all is
that each protocol supports different things ie. http supports cookie
handling and http redirection, ftp supports passive/active mode and dir
listings and so on.

I think it would confuse the user of the API if e.g. he were allowed to
set cookies on his ftp request.

The protocols supported (Http, Ftp,... classes) do have a base class
Protocol that implements common things like timouts etc.


Ah. I guess I was just thinking about if you want to download some file, you 
don't really care where you are getting it from you just have the URL and are 
read to go.


There should definitely be a simple method based only on a URL. I'll 
put that in.




And what about properties? They tend to be very nice instead of set methods. 
examples below.


Actually I thought off this and went the usual C++ way of _not_ using
public properties but use accessor methods. Is public properties
accepted as the D way and if so what about the usual reasons about why
you should use accessor methods (like encapsulation and tolerance to
future changes to the API)?

I do like the shorter onHeader/onContent much better though :)


D was originally very friendly with properties. Your could can at this moment 
be written:

http.setReceiveHeaderCallback = (string key, string value) {
     writeln(key ~ ": " ~ value);
};

But is going to be deprecated for the use of the @property attribute. You are 
probably aware of properties in C#, so yes D is fine with public fields and 
functions that look like public fields.


Just tried the property stuff out but it seems a bit inconsistent. Maybe 
someone can enlighten me:


import std.stdio;

alias void delegate() deleg;

class T {
  private deleg tvalue;
  @property void prop(deleg dg) {
tvalue = dg;
  }
  @property deleg prop() {
return tvalue;
  }
}

void main(string[] args) {
  T t = new T;
  t.prop = { writeln("fda"); };

  // Seems a bit odd that assigning to a temporary (tvalue) suddenly
  // changes the behaviour.
  auto tvalue = t.prop;
  tvalue(); // Works as expected by printing "fda"
  t.prop(); // Just returns the delegate!

  // Shouldn't the @property attribute ensure that no () is needed
  // when using the property
  t.prop()(); // Works
}

/Jonas





Otherwise this looks really good and I do hope to see it in Phobos.





Re: Code Sandwiches

2011-03-12 Thread spir

On 03/12/2011 10:16 PM, Nick Sabalausky wrote:

Even with a brightness
  setting matching the ambient light (many people I know have turned the
  backlight up way too high), longer blocks of white text on a dark
  background have the nasty habit of leaving an after-image in my eyes, as
  demonstrated by this site:http://www.ironicsans.com/owmyeyes/.


That's a very poor example of light-on-dark: It's all-bold, pure-white on
pure-black. Even light-on-dark fans don't do that. The white is normally a
grey.


It's very strange. What the text on this page explains, complaining about light 
text on dark background, is exactly what I experience when reading text with 
the opposite combination, eg PDFs.
His text holds a link that switches colors (thus suddenly displaying black on 
white): this kills my eyes! I have to zap away at once.


I must admit I'm kind of an exceptional case in that my eyes are extra 
sensitive to light (they are called "pair" eyes in French, I don't know the 
English term). On the nice side, I can see very well at night; on the other 
side, excess light hurts me badly very fast. But an ophthalmologist explained to me 
that what I experience is just a normal reaction, simply over-sensitive: what 
hurts me strongly and fast hurts everyone else in the long run (sounds obvious).
Another obvious remark (not from me, read on the web) is that what is good for 
paper is not good for screens, because screens are light sources. Reading text on 
a white background is like staring at an intensely luminous sky without moving 
your sight: doesn't this hurt you?


On the other hand, it seems that pure white text on a pure black bg is far from 
being an optimal combination; the text looks hard to read. I guess the reason is that 
fonts are originally drawn for the opposite combination, and also for paper. 
Full B/W or W/B contrast seems a bad scheme in both cases.
What looks nice and readable instead is choosing ~25% lightness bg, 75% 
lightness fg, with the same hue; one can also adjust saturation to increase or 
decrease contrast.
The opposite (dark on light with 25%/75% saturation) is also pleasant and 
non-aggressive. Why insist on imposing black on white? I guess this has to do 
with our civilisation demanding clean / white / uniform things. Like hospitals. 
An aesthetic of death. (Sorry for the personal tone, if ever you mind.)


Denis
--
_
vita es estrany
spir.wikidot.com



Re: Code Sandwiches

2011-03-12 Thread David Nadlinger

On 3/12/11 11:34 PM, Nick Sabalausky wrote:

spirdenis.s...@gmail.com  wrote in message
news:mailman.2474.1299967680.4748.digitalmar...@puremagic.com...

On 03/12/2011 10:16 PM, Nick Sabalausky wrote:

Even with a brightness

  setting matching the ambient light (many people I know have turned
the
  backlight up way too high), longer blocks of white text on a dark
  background have the nasty habit of leaving an after-image in my eyes,
as
  demonstrated by this site:http://www.ironicsans.com/owmyeyes/.


That's a very poor example of light-on-dark: It's all-bold, pure-white on
pure-black. Even light-on-dark fans don't do that. The white is
normally a
grey.


It's very strange. What the text on this page explains, complaining about
light text on dark background, is exactly what I experience when reading
text with the opposite combination, eg PDFs.
His text holds a link that switches colors (thus suddenly displaying black
on white): this kills my eyes! I have to zap away at once.



Yea, I have a hard time looking at that version, too. And I didn't even see
it until after I was away from the page for about an hour and then came
back.

There are also other reasons that both versions of that page are hard to
read:

- All bold.
- All justified (I honestly do find justified text harder to read than
left-algned. And the difference is much more pronounced with narrower text
columns, such as that page uses.)
- One long paragraph.


Oh, really? I guess there is no way this site could be a fabricated 
example for clearly demonstrating the effect, right?




Re: Code Sandwiches

2011-03-12 Thread Nick Sabalausky
David Nadlinger s...@klickverbot.at wrote in message 
news:ilgs4q$27rk$1...@digitalmars.com...
 On 3/12/11 11:07 PM, spir wrote:
 Another obvious remark (not from me, read on the web) is that what is
 good for paper is not good for screens; because they are light sources.
 Reading text on white backgroung is like staring at an intensely
 luminous sky, without moving your sight: doesn't this hurt you?

 Only if you have turned up the brightness/backlight of your monitor way 
 too high.


I have the same effect as him, but my monitor is so dark that when I look at 
an image or video that has low lighting (such as any typical night-time 
scene in hollywood movies, or any low-lit room in an FPS) I can barely see 
anything at all. My monitor is so dark that a large square of 0x252525 is 
barely distinguishable from a large 0x00 square right next to it. And my 
contrast isn't too high: any lower is noticeably overly-dark and 
overly-washed-out. And, of course, pure-white on pure-black doesn't give me 
any bloom.




Re: Code Sandwiches

2011-03-12 Thread Nick Sabalausky
David Nadlinger s...@klickverbot.at wrote in message 
news:ilgt04$298s$1...@digitalmars.com...
 On 3/12/11 11:34 PM, Nick Sabalausky wrote:
 spirdenis.s...@gmail.com  wrote in message
 news:mailman.2474.1299967680.4748.digitalmar...@puremagic.com...
 On 03/12/2011 10:16 PM, Nick Sabalausky wrote:
 Even with a brightness
   setting matching the ambient light (many people I know have turned
 the
   backlight up way too high), longer blocks of white text on a dark
   background have the nasty habit of leaving an after-image in my 
 eyes,
 as
   demonstrated by this site:http://www.ironicsans.com/owmyeyes/.

 That's a very poor example of light-on-dark: It's all-bold, pure-white 
 on
 pure-black. Even light-on-dark fans don't do that. The white is
 normally a
 grey.

 It's very strange. What the text on this page explains, complaining 
 about
 light text on dark background, is exactly what I experience when reading
 text with the opposite combination, eg PDFs.
 His text holds a link that switches colors (thus suddenly displaying 
 black
 on white): this kills my eyes! I have to zap away at once.


 Yea, I have a hard time looking at that version, too. And I didn't even 
 see
 it until after I was away from the page for about an hour and then came
 back.

 There are also other reasons that both versions of that page are hard to
 read:

 - All bold.
 - All justified (I honestly do find justified text harder to read than
 left-algned. And the difference is much more pronounced with narrower 
 text
 columns, such as that page uses.)
  - One long paragraph.

 Oh, really? I guess there is no way this site could be a fabricated 
 example for clearly demonstrating the effect, right?


Doesn't matter, he's still constructed a blatant strawman. Those three 
things I mentioned, plus the fact that he's using maximum contrast, all make 
text harder to read *regardless* of positive/negative contrast. And 
*despite* that, he's still using those tricks in his attempt to prove 
something completely different (ie, that light-on-dark is hard to 
read/look-at and shouldn't be used). It's exactly the same as if I made 
chicken noodle soup with rotted rancid chicken, tossed in some dog shit, and 
then tried to claim: "See! Chicken makes food taste terrible!" ("But you 
used bad ingredients..."  "Well excuse me for trying to clearly demonstrate 
the effect!")

Even if it weren't a strawman, it's still exaggerated and unrealistic - and 
demonstrating that an excess of something is bad does not indicate that 
ordinary usage is bad (salt and fat are perfect examples).





Re: GZip File Reading (std.file)

2011-03-12 Thread dsimcha
Since it seems like the consensus is that streaming gzip support belongs 
in a stream API, I guess we have yet another reason to get busy with the 
stream API.  However, I'm wondering if std.file should support gzip and, 
if license issues can be overcome, bzip2.


I'd love to be able to write code like this:

// Read and transparently decompress foo.txt, which is UTF-8 encoded.
auto foo = cast(string) gzippedRead(foo.txt.gz);

// Write a buffer to a gzipped file.
gzippedWrite(foo.txt.gz, buf);

This stuff would be trivial to implement in std.file and, IMHO, belongs 
there.  What's the consensus on whether it belongs?
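
(A sketch of how such helpers might look on top of std.zlib, assuming its gzip-capable Compress/UnCompress constructors; gzippedRead and gzippedWrite are just the hypothetical names from above, not existing functions:)

import std.file : read, write;
import std.zlib : Compress, HeaderFormat, UnCompress;

// Read a gzipped file and return its decompressed contents.
void[] gzippedRead(string name)
{
    auto u = new UnCompress(HeaderFormat.gzip);
    void[] data = cast(void[]) u.uncompress(read(name));
    data ~= u.flush();
    return data;
}

// Compress a buffer with gzip and write it to a file.
void gzippedWrite(string name, const(void)[] buf)
{
    auto c = new Compress(HeaderFormat.gzip);
    void[] data = cast(void[]) c.compress(buf);
    data ~= c.flush();
    write(name, data);
}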


Re: Code Sandwiches

2011-03-12 Thread David Nadlinger

On 3/13/11 12:14 AM, Nick Sabalausky wrote:

Doesn't matter, he's still constructed a blatant strawman. Those three
things I mentioned, plus the fact that he's using maximum contrast, all make
text harder to read *regardless* of positive/negative contrast. And
*despite* that, he's still using those tricks in his attempt to prove
something completely different (ie, that light-on-dark is hard to
read/look-at and shouldn't be used). It's exactly the same as if I made
chicken noodle soup with rotted rancid chicken, tossed in some dog shit, and
then tried to claim: See! Chicken makes food taste terrible! (But you
used bad ingredients...  Well excuse me for trying to clearly demonstrate
the effect!)

Even if it weren't a strawman, it's still exaggerated and unrealistic - and
demonstrating that an excess of something is bad does not indicate that
ordinary usage is bad (salt and fat are perfect examples).


Calm down, this isn't a religious war or something, at least not for me. 
If you want to try to prove everybody else »wrong«, feel free to do so, 
but I just picked that example because it neatly illustrates the effect 
I experienced when I was experimenting with light-on-dark color schemes in my 
text editor/IDE…


David


Re: Curl support RFC

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 13:51:37 Jonas Drewsen wrote:
 On 12/03/11 20.44, Jesse Phillips wrote:
  Jonas Drewsen Wrote:
  On 11/03/11 22.21, Jesse Phillips wrote:
  I'll make some comments on the API. Do we have to choose Http/Ftp...?
  The URI already contains this, I could see being able to specifically
  request one or the other for performance or so www.google.com works.
  
  That is a good question.
  
  The problem with creating a grand unified Curl class that does it all is
  that each protocol supports different things ie. http supports cookie
  handling and http redirection, ftp supports passive/active mode and dir
  listings and so on.
  
  I think it would confuse the user of the API if e.g. he were allowed to
  set cookies on his ftp request.
  
  The protocols supported (Http, Ftp,... classes) do have a base class
  Protocol that implements common things like timouts etc.
  
  Ah. I guess I was just thinking about if you want to download some file,
  you don't really care where you are getting it from you just have the
  URL and are read to go.
 
 There should definitely be a simple method based only on an url. I'll
 put that in.
 
  And what about properties? They tend to be very nice instead of set
  methods. examples below.
  
  Actually I thought off this and went the usual C++ way of _not_ using
  public properties but use accessor methods. Is public properties
  accepted as the D way and if so what about the usual reasons about why
  you should use accessor methods (like encapsulation and tolerance to
  future changes to the API)?
  
  I do like the shorter onHeader/onContent much better though :)
  
  D was originally very friendly with properties. Your could can at this
  moment be written:
  
  http.setReceiveHeaderCallback = (string key, string value) {
  
   writeln(key ~ ": " ~ value);
  
  };
  
  But is going to be deprecated for the use of the @property attribute. You
  are probably aware of properties in C#, so yes D is fine with public
  fields and functions that look like public fields.
 
 Just tried the property stuff out but it seems a bit inconsistent. Maybe
 someone can enlighten me:
 
 import std.stdio;
 
 alias void delegate() deleg;
 
 class T {
private deleg tvalue;
@property void prop(deleg dg) {
  tvalue = dg;
}
@property deleg prop() {
  return tvalue;
}
 }
 
 void main(string[] args) {
T t = new T;
    t.prop = { writeln("fda"); };
 
    // Seems a bit odd that assigning to a temporary (tvalue) suddenly
    // changes the behaviour.
    auto tvalue = t.prop;
    tvalue(); // Works as expected by printing "fda"
t.prop(); // Just returns the delegate!
 
// Shouldn't the @property attribute ensure that no () is needed
// when using the property
t.prop()(); // Works
 }

@property doesn't currently enforce much of anything. Things are in a 
transitory 
state with regards to property. Originally, there was no such thing as 
@property 
and any function which had no parameters and returned a value could be used as 
a 
getter and any function which returned nothing and took a single argument could 
be used as a setter. It was decided to make it more restrictive, so @property 
was added. Eventually, you will _only_ be able to use such functions as 
property 
functions if they are marked with @property, and you will _have_ to call them 
with the property syntax and will _not_ be able to call non-property functions 
with the property syntax. However, at the moment, the compiler doesn't enforce 
that. It will eventually, but there are several bugs with regards to property 
functions (they mostly work, but you found one of the cases where they don't), 
and it probably wouldn't be a good idea to enforce it until more of those bugs 
have been fixed.

- Jonathan M Davis


Re: Code Sandwiches

2011-03-12 Thread David Nadlinger

On 3/12/11 11:49 PM, Nick Sabalausky wrote:

David Nadlingers...@klickverbot.at  wrote in message
news:ilgs4q$27rk$1...@digitalmars.com...

On 3/12/11 11:07 PM, spir wrote:

Another obvious remark (not from me, read on the web) is that what is
good for paper is not good for screens; because they are light sources.
Reading text on white backgroung is like staring at an intensely
luminous sky, without moving your sight: doesn't this hurt you?


Only if you have turned up the brightness/backlight of your monitor way
too high.



I have the same effect as him, but my monitor is so dark that  […]


What effect? In the post you quoted, I was referring specifically to the 
»obvious remark« by Denis, which only holds for unsuitable monitor 
brightness settings – even if my monitor was capable of delivering a 
luminous intensity close to an »intensely luminous sky«, I doubt that I 
would ever run it at that setting (well, maybe if I was on a sandy beach 
on a bright summer day).


David


Re: Code Sandwiches

2011-03-12 Thread Nick Sabalausky
David Nadlinger s...@klickverbot.at wrote in message 
news:ilgvk8$2dmt$2...@digitalmars.com...
 On 3/12/11 11:49 PM, Nick Sabalausky wrote:
 David Nadlingers...@klickverbot.at  wrote in message
 news:ilgs4q$27rk$1...@digitalmars.com...
 On 3/12/11 11:07 PM, spir wrote:
 Another obvious remark (not from me, read on the web) is that what is
 good for paper is not good for screens; because they are light sources.
 Reading text on white backgroung is like staring at an intensely
 luminous sky, without moving your sight: doesn't this hurt you?

 Only if you have turned up the brightness/backlight of your monitor way
 too high.


 I have the same effect as him, but my monitor is so dark that  [.]

 What effect? In the post you quoted, I was referring specifically to 
 the »obvious remark« by Denis, which only holds for unsuitable monitor 
 brightness settings - even if my monitor was capable of delivering a 
 luminous intensity close to an »intensely luminous sky«, I doubt that I 
 would ever run it at that setting (well, maybe if I was on a sandy beach 
 on a bright summer day).


I meant about positive-contrast being hard on my eyes.





Re: Code Sandwiches

2011-03-12 Thread Nick Sabalausky
David Nadlinger s...@klickverbot.at wrote in message 
news:ilgvf0$2dmt$1...@digitalmars.com...
 On 3/13/11 12:14 AM, Nick Sabalausky wrote:
 Doesn't matter, he's still constructed a blatant strawman. Those three
 things I mentioned, plus the fact that he's using maximum contrast, all 
 make
 text harder to read *regardless* of positive/negative contrast. And
 *despite* that, he's still using those tricks in his attempt to prove
 something completely different (ie, that light-on-dark is hard to
 read/look-at and shouldn't be used). It's exactly the same as if I made
 chicken noodle soup with rotted rancid chicken, tossed in some dog shit, 
 and
 then tried to claim: See! Chicken makes food taste terrible! (But you
 used bad ingredients...  Well excuse me for trying to clearly 
 demonstrate
 the effect!)

 Even if it weren't a strawman, it's still exaggerated and unrealistic - 
 and
 demonstrating that an excess of something is bad does not indicate that
 ordinary usage is bad (salt and fat are perfect examples).

 Calm down, this isn't a religious war or something, at least not for me. 
 If you want to try to prove everybody else »wrong«, feel free to do so, 
 but I just picked that example because it neatly illustrates the effect I 
 experienced when I was experimenting light-on-dark color schemes in my 
 text editor/IDE.


I'm not upset or worked up about it at all (emotional state usually doesn't 
come across in text very well anyway, so it's best not to make assumptions 
about it). I was just explaining how that page fails to make the point that 
it tries to make. I realize you only brought it up to help describe a 
certain effect, and naturally that's fine, but I was objecting more to the 
page itself rather than the appropriateness of your reference to it.





Re: Code Sandwiches

2011-03-12 Thread Andrej Mitrovic
I wish all apps followed a defined standard and allowed us to set all
applications to use dark backgrounds at once.

On Linux you can't even set the cursor blinking to be the same for all
apps. Either it's a GTK/KDE/XF/Whatever-specific setting, or you have
to hunt down some configuration file. Sometimes it ends up being in XML
format, so you have to read the manual on how to configure an app...
and this was for a hex editor. Gz.

Someone wrote a freakin manual on how to set cursor blinking for each
app they could think off:
http://www.jurta.org/en/prog/noblink

Ridiculous. And then Windows is a pita. Right! Commence thread derailment. :-P


Derailed (Was: Code Sandwiches)

2011-03-12 Thread Nick Sabalausky
Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message 
news:mailman.2479.1299981498.4748.digitalmar...@puremagic.com...
I wish all apps followed a defined standard and allowed us to set all
 applications to use dark backgrounds at once.

 On Linux you can't even set the cursor blinking to be the same for all
 apps. Either it's a GTK/KDE/XF/Whatever-specific setting, or you have
 to hunt down some configuration file

Hear hear! And I thought Linux/Unix was supposed to be the world of 
standards. Even on standards-inept Windows, we have standardization for that 
sort of thing. Or at least we used to, until we got invaded by non-native 
toolkits and apps with non-optional skins. (...grumble, grumble...)

 Sometimes it ends up being in xml
 format, so you have to read the manual on how to configure an app..,
 and this was for a hex editor. Gz.


A hex editor with XML configuration... That's just deliciously ironic.

 Someone wrote a freakin manual on how to set cursor blinking for each
 app they could think off:
 http://www.jurta.org/en/prog/noblink


Wow. I thought it was just me who found that blinking distracting. I've 
never actually thought to turn it off though. Maybe I should try that.

 Ridiculous. And then Windows is a pita. Right! Commence thread derailment. 
 :-P

Why stop at just one derailment? (   :)   )  I'm gonna sabotage the other 
track, too:

From that link: "To stop the cursor from blinking in Micro$oft Windows 
applications:"

I certainly can't object to the idea of MS being evil (what large 
corporation isn't?), but the whole M$/Micro$oft thing is just downright 
juvenile. Not to mention it smacks of l33t-speak. What is this, the 90's? 
(Yes, we know you don't like MS. Nobody does. Now quit being deliberately 
dumb.) It's the internet meme equivalent of pants sagging - the obnoxious 
fad that just won't die.





Re: Derailed (Was: Code Sandwiches)

2011-03-12 Thread linux user
Nick Sabalausky Wrote:

 Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message 
 news:mailman.2479.1299981498.4748.digitalmar...@puremagic.com...
 I wish all apps followed a defined standard and allowed us to set all
  applications to use dark backgrounds at once.
 
  On Linux you can't even set the cursor blinking to be the same for all
  apps. Either it's a GTK/KDE/XF/Whatever-specific setting, or you have
  to hunt down some configuration file
 
 Hear hear! And I thought Linux/Unix was supposed to be the world of 
 standards. Even on standards-inept Wndows, we have standardization for that 
 sort of thing. Or at least we used to, until we got invaded by non-native 
 toolkits and apps with non-optional skins. (...grumble, grumble...)

There are freedesktop.org standards. Unfortunately they only advocate XML for 
every application. 
Posted via http://httpdget.com


Re: Derailed (Was: Code Sandwiches)

2011-03-12 Thread Andrej Mitrovic
On 3/13/11, Nick Sabalausky a@a.a wrote:
 snip

OSX is a nice OS. I gave it a try once or twice. The OS is nice, but
man, when I started looking for software on the web I almost got sick.
"Top 10 software for Your Mac", "5 Apps that will make your Mac
Experience Awesome!", "This app will make you feel a Better Mac
Person", "You deserve Beautiful Mac Software".

Ugh.. It's like every single app has a $10 price tag and it's all
about selling bullshit with pretty words hidden behind colorful
websites. There was a text editor that had this one major feature:
Full screen mode with black side-bars. That was it. Nothing else, just
a text editor with black bars on the side running at full-screen. And
there's a whole website devoted to how awesome and inspiring and
unique this is, how it helps you focus. And a price tag. People buy
this shit, it's unbelievable.

There was also this thread on Reddit with a guy making some
window-management software. All it did was divide the screen and
resized the windows and put them side by side or something. And
apparently this was so awesome everyone started yelling "Take my
wallet NOW!!!". Same thing happened on ycombinator.

I know of at least Autohotkey which came out in 2003 with which you
can do window management with ease. Hotkeys, keyboard or mouse, or add
buttons to your taskbar that do whatever you want with your windows.
There's an entire community devoted to writing all sorts of cool
window management scripts, and that's just one small feature of this
app. But apparently this Mac software that resizes windows is
revolutionary, comes with a price tag and everyone thought it was the
best thing that ever happened.


std.xml: Why is it so slow? Is there anything else wrong with it?

2011-03-12 Thread dsimcha
There seems to be a consensus around here that Phobos needs a good XML 
module, and that std.xml doesn't cut it, at least partly due to 
performance issues.  I have no clue how to write a good XML module from 
scratch.  It seems like no one else is taking up the project either. 
This leads me to two questions:


1.  Has anyone ever sat down and tried to figure out **why** std.xml is 
so slow?  Seriously, if no one's bothered to profile it or read the code 
carefully, then for all we know there might be some low hanging fruit 
and it might be an afternoon of optimization away from being reasonably 
fast.  Basically every experience I've ever had suggests that, if a 
piece of code has not already been profiled and heavily optimized, at 
least a 5-fold speedup can almost always be obtained just by optimizing 
the low-hanging fruit.  (For example, see my recent pull request for the 
D garbage collector.  BTW, if excessive allocations are a contributing 
factor, then fixing the GC should help with XML, too.)


If the answer is no, this hasn't been done, please post some canned 
benchmarks and maybe I'll take a crack at it.
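Even something as crude as the following would do as a starting point (just a 
sketch; sample.xml is a placeholder for whatever test document we settle on, 
and it only times the DOM parse):

import std.datetime, std.file, std.stdio, std.xml;

void main()
{
    auto text = cast(string) std.file.read("sample.xml");
    auto start = Clock.currTime();
    auto doc = new Document(text);   // full DOM parse via std.xml
    writeln("parse took: ", Clock.currTime() - start);
}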


2.  What other major defects/design flaws, if any, does std.xml have?

In other words, how are we really so sure that we need to start from 
scratch?


Re: std.xml: Why is it so slow? Is there anything else wrong with it?

2011-03-12 Thread Daniel Gibson

Am 13.03.2011 05:34, schrieb dsimcha:

There seems to be a consensus around here that Phobos needs a good XML
module, and that std.xml doesn't cut it, at least partly due to
performance issues. I have no clue how to write a good XML module from
scratch. It seems like noone else is taking up the project either. This
leads me to two questions:



Isn't Tomek Sowiński working on it?


1. Has anyone ever sat down and tried to figure out **why** std.xml is
so slow? Seriously, if noone's bothered to profile it or read the code
carefully, then for all we know there might be some low hanging fruit
and it might be an afternoon of optimization away from being reasonably
fast. Basically every experience I've ever had suggests that, if a piece
of code has not already been profiled and heavily optimized, at least a
5-fold speedup can almost always be obtained just by optimizing the
low-hanging fruit. (For example, see my recent pull request for the D
garbage collector. BTW, if excessive allocations are a contributing
factor, then fixing the GC should help with XML, too.)

If the answer is no, this hasn't been done, please post some canned
benchmarks and maybe I'll take a crack at it.

2. What other major defects/design flaws, if any, does std.xml have?

In other words, how are we really so sure that we need to start from
scratch?


(These questions should probably be discussed nevertheless)

Cheers,
- Daniel


Re: Derailed (Was: Code Sandwiches)

2011-03-12 Thread Nick Sabalausky
Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message 
news:mailman.2483.1299989460.4748.digitalmar...@puremagic.com...
 On 3/13/11, Nick Sabalausky a@a.a wrote:
 snip

 OSX is a nice OS. I gave it a try once or twice. The OS is nice, but
 man, when I started looking for software on the web I almost got sick.
 Top 10 software for Your Mac, 5 Apps that will make your Mac
 Experience Awesome!, This app will make you feel a Better Mac
 Person. You deserve Beautiful Mac Software.

 Ugh.. It's like every single app has a 10$ price tag and it's all
 about selling bullshit with pretty words hidden behind colorful
 websites. There was a text editor that had this one major feature:
 Full screen mode with black side-bars. That was it. Nothing else, just
 a text editor with black bars on the side running at full-screen. And
 there's a whole website devoted to how awesome and inspiring and
 unique this is, how it helps you focus. And a price tag. People buy
 this shit, it's unbelievable.

 There was also this thread on Reddit with a guy making some
 window-management software. All it did was divide the screen and
 resized the windows and put them side by side or something. And
 apparently this was so awesome everyone started yelling Take my
 wallet NOW!!!. Same thing happened on ycombinator.

 I know of at least Autohotkey which came out in 2003 with which you
 can do window management with ease. Hotkeys, keyboard or mouse, or add
 buttons to your taskbar that do whatever you want with your windows.
 There's an entire community devoted to writing all sorts of cool
 window management scripts, and that's just one small feature of this
 app. But apparently this Mac software that resizes windows is
 revolutionary, comes with a price tag and everyone thought it was the
 best thing that ever happened.

Heh, actually, you've described how I feel about the OS itself (along with 
every other Apple product out there, sans the Apple II). I spent a year or 
two trying to use OSX as my primary OS. I was impressed at first, but 
eventually found myself running away screaming, in large part for many of 
the things you've mentioned about their third party apps, except I found it 
to also be applicable to all of the first-party hardware and software. I 
think OSX's third party market is primarily an effect of Apple itself having 
the same attitude.





Re: std.xml: Why is it so slow? Is there anything else wrong with it?

2011-03-12 Thread Bekenn

Do we want to take a look at libxml, or are there legal issues with that?


Re: std.xml: Why is it so slow? Is there anything else wrong with it?

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 20:39:31 Daniel Gibson wrote:
 Am 13.03.2011 05:34, schrieb dsimcha:
  There seems to be a consensus around here that Phobos needs a good XML
  module, and that std.xml doesn't cut it, at least partly due to
  performance issues. I have no clue how to write a good XML module from
  scratch. It seems like noone else is taking up the project either. This
 
  leads me to two questions:
 Isn't Tomek Sowiński working on it?

Yes.

  1. Has anyone ever sat down and tried to figure out **why** std.xml is
  so slow? Seriously, if noone's bothered to profile it or read the code
  carefully, then for all we know there might be some low hanging fruit
  and it might be an afternoon of optimization away from being reasonably
  fast. Basically every experience I've ever had suggests that, if a piece
  of code has not already been profiled and heavily optimized, at least a
  5-fold speedup can almost always be obtained just by optimizing the
  low-hanging fruit. (For example, see my recent pull request for the D
  garbage collector. BTW, if excessive allocations are a contributing
  factor, then fixing the GC should help with XML, too.)
  
  If the answer is no, this hasn't been done, please post some canned
  benchmarks and maybe I'll take a crack at it.
  
  2. What other major defects/design flaws, if any, does std.xml have?
  
  In other words, how are we really so sure that we need to start from
  scratch?

As I understand it, one of the main issues is that std.xml is delegate-based. I 
don't know how well it does with slicing and avoiding copying strings, but one 
of the biggest advantages that D has is its array slicing. And taking full 
advantage of that and avoiding string copying is one of the best ways - if not 
_the_ best way - to make std.xml lightning fast.
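
Just to illustrate the point (a tiny sketch with a made-up XML snippet): a 
slice is only a view into the original buffer, so handing tokens back as 
slices costs no allocation or copying at all:

void main()
{
    string xml = "<root><item>text</item></root>";
    string tag = xml[7 .. 11];          // a slice, not a copy
    assert(tag == "item");
    assert(tag.ptr == xml.ptr + 7);     // same underlying memory
}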

In any case, there was a discussion about std.xml recently, and the consensus 
was that we should just throw it out rather than leave it there and have people 
complain about how bad Phobos' xml module is.

As Daniel pointed out, Tomek Sowiński is currently working on a new std.xml. I 
don't know how far along he is or when he expects it to be done, but supposedly 
he's working on it and sometime reasonably soon we should have a new std.xml to 
review.

We are definitely _not_ going to be working on improving the current std.xml 
though. I think that the only reason that it's still there is that Andrei 
didn't 
get around to throwing it out before the last release (or at least deprecating 
it). That's definitely what he wants to do, and the consensus was in favor of 
that decision.

- Jonathan M Davis


Re: LDC2: Where do bug reports go?

2011-03-12 Thread Alexey Prokhin
 I've noticed that the issue tracker tab on the LDC2 project
 (https://bitbucket.org/prokhin_alexey/ldc2/overview) is missing.  First,
 why is it missing? 
I disabled it on purpose, because I am going to delete my branch soon and work 
directly with ldc main repository.

 Second, if it's missing on purpose, then where is
 the correct place to file bug reports?
Moritz answered that question. But I suggest using the dsource tracker; that 
way all issues would be in one place.


Re: Curl support RFC

2011-03-12 Thread Jonas Drewsen

On 13/03/11 00.28, Jonathan M Davis wrote:

On Saturday 12 March 2011 13:51:37 Jonas Drewsen wrote:

On 12/03/11 20.44, Jesse Phillips wrote:

Jonas Drewsen Wrote:

On 11/03/11 22.21, Jesse Phillips wrote:

I'll make some comments on the API. Do we have to choose Http/Ftp...?
The URI already contains this. I could see being able to specifically
request one or the other for performance, or so that www.google.com works.


That is a good question.

The problem with creating a grand unified Curl class that does it all is
that each protocol supports different things ie. http supports cookie
handling and http redirection, ftp supports passive/active mode and dir
listings and so on.

I think it would confuse the user of the API if e.g. he were allowed to
set cookies on his ftp request.

The protocols supported (Http, Ftp,... classes) do have a base class
Protocol that implements common things like timeouts etc.


Ah. I guess I was just thinking that if you want to download some file,
you don't really care where you are getting it from; you just have the
URL and are ready to go.


There should definitely be a simple method based only on an url. I'll
put that in.


And what about properties? They tend to be very nice instead of set
methods. examples below.


Actually I thought of this and went the usual C++ way of _not_ using
public properties but using accessor methods. Are public properties
accepted as the D way, and if so, what about the usual reasons why
you should use accessor methods (like encapsulation and tolerance to
future changes to the API)?

I do like the shorter onHeader/onContent much better though :)


D was originally very friendly with properties. Your code can at this
moment be written:

http.setReceiveHeaderCallback = (string key, string value) {

  writeln(key ~ ": " ~ value);

};

But it is going to be deprecated in favor of the @property attribute. You
are probably aware of properties in C#, so yes, D is fine with public
fields and functions that look like public fields.


Just tried the property stuff out but it seems a bit inconsistent. Maybe
someone can enlighten me:

import std.stdio;

alias void delegate() deleg;

class T {
private deleg tvalue;
@property void prop(deleg dg) {
  tvalue = dg;
}
@property deleg prop() {
  return tvalue;
}
}

void main(string[] args) {
T t = new T;
t.prop = { writeln("fda"); };

// Seems a bit odd that assigning to a temporary (tvalue) suddenly
// changes the behaviour.
auto tvalue = t.prop;
tvalue(); // Works as expected by printing "fda"
t.prop(); // Just returns the delegate!

// Shouldn't the @property attribute ensure that no () is needed
// when using the property
t.prop()(); // Works
}


@property doesn't currently enforce much of anything. Things are in a transitory
state with regards to property. Originally, there was no such thing as @property
and any function which had no parameters and returned a value could be used as a
getter and any function which returned nothing and took a single argument could
be used as a setter. It was decided to make it more restrictive, so @property
was added. Eventually, you will _only_ be able to use such functions as property
functions if they are marked with @property, and you will _have_ to call them
with the property syntax and will _not_ be able to call non-property functions
with the property syntax. However, at the moment, the compiler doesn't enforce
that. It will eventually, but there are several bugs with regards to property
functions (they mostly work, but you found one of the cases where they don't),
and it probably wouldn't be a good idea to enforce it until more of those bugs
have been fixed.

- Jonathan M Davis


Okey... nice to hear that this is coming up.

Thanks again!
/Jonas




Re: Fibonacci with ranges

2011-03-12 Thread Russel Winder
On Fri, 2011-03-11 at 18:46 -0500, Jesse Phillips wrote:
 Without testing: foreach (f; take(recurrence!(a[n-1] + a[n-2])(0UL, 1UL), 
 50))
 
 teo Wrote:
 
  Just curious: How can I get ulong here?
  
  foreach (f; take(recurrence!(a[n-1] + a[n-2])(0, 1), 50))
  {
  writeln(f);
  }
 

Interestingly, or not, the code:

long declarative ( immutable long n ) {
  return take ( recurrence ! ( "a[n-1] + a[n-2]" ) ( 0L , 1L ) , n ) ;
}

results in the return statement delivering:

rdmd --main -unittest fibonacci_d2.d
fibonacci_d2.d(15): Error: template std.range.take(R) if 
(isInputRange!(Unqual!(R)) && !isSafelySlicable!(Unqual!(R)) && !is(Unqual!(R) 
T == Take!(T))) does not match any function template declaration
fibonacci_d2.d(15): Error: template std.range.take(R) if 
(isInputRange!(Unqual!(R)) && !isSafelySlicable!(Unqual!(R)) && !is(Unqual!(R) 
T == Take!(T))) cannot deduce template function from argument types 
!()(Recurrence!(fun,long,2u),immutable(long))

which seems deeply impenetrable for mere mortals.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Fibonacci with ranges

2011-03-12 Thread Russel Winder
On Sat, 2011-03-12 at 09:33 +, Russel Winder wrote:
[ . . . ]
 Interestingly, or not, the code:
 
 long declarative ( immutable long n ) {
   return take ( recurrence ! ( a[n-1] + a[n-2] ) ( 0L , 1L ) , n ) ;
 }
 
 results in the return statement delivering:
 
 rdmd --main -unittest fibonacci_d2.d
 fibonacci_d2.d(15): Error: template std.range.take(R) if 
 (isInputRange!(Unqual!(R))  !isSafelySlicable!(Unqual!(R))  
 !is(Unqual!(R) T == Take!(T))) does not match any function template 
 declaration
 fibonacci_d2.d(15): Error: template std.range.take(R) if 
 (isInputRange!(Unqual!(R))  !isSafelySlicable!(Unqual!(R))  
 !is(Unqual!(R) T == Take!(T))) cannot deduce template function from argument 
 types !()(Recurrence!(fun,long,2u),immutable(long))
 
 which seems deeply impenetrable for mere mortals.

Sorry I needed to add the driver code:

  foreach ( item ; data ) {
assert ( declarative ( item[0] ) == item[1] ) ;
  }

within a unittest block. 
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Fibonacci with ranges

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 01:33:34 Russel Winder wrote:
 On Fri, 2011-03-11 at 18:46 -0500, Jesse Phillips wrote:
  Without testing: foreach (f; take(recurrence!(a[n-1] + a[n-2])(0UL,
  1UL), 50))
  
  teo Wrote:
   Just curious: How can I get ulong here?
   
   foreach (f; take(recurrence!(a[n-1] + a[n-2])(0, 1), 50))
   {
   
 writeln(f);
   
   }
 
 Interestingly, or not, the code:
 
 long declarative ( immutable long n ) {
   return take ( recurrence ! ( a[n-1] + a[n-2] ) ( 0L , 1L ) , n ) ;
 }
 
 results in the return statement delivering:
 
 rdmd --main -unittest fibonacci_d2.d
 fibonacci_d2.d(15): Error: template std.range.take(R) if
 (isInputRange!(Unqual!(R))  !isSafelySlicable!(Unqual!(R)) 
 !is(Unqual!(R) T == Take!(T))) does not match any function template
 declaration fibonacci_d2.d(15): Error: template std.range.take(R) if
 (isInputRange!(Unqual!(R))  !isSafelySlicable!(Unqual!(R)) 
 !is(Unqual!(R) T == Take!(T))) cannot deduce template function from
 argument types !()(Recurrence!(fun,long,2u),immutable(long))
 
 which seems deeply impenetrable for mere mortals.

LOL. Maybe I've been dealing with template code for too long, because that 
seems 
perfectly clear to me. Though I can certainly understand why it wouldn't be. 
Incidentally, isSafelySlicable will be going away (essentially it's checking 
that the range isn't some type of char[] or wchar[], and Andrei's just going to 
make it so that isSliceable is false for them).

All that template constraint is checking for is that the range is an input 
range 
which can't be sliced and isn't already a range returned from take. If a range 
_is_ sliceable, then take just returns the same range type.

However, I don't think that constraint is necessarily all that useful in this 
case. It's just that it's the first version of the template, so that's the way 
that gets displayed when the compiler can't instantiate any of the versions of 
the template with the given arguments.

What's happening is that the parameter that you're passing n to for recurrence 
is size_t. And on 32-bit systems, size_t is uint, so passing n - which is long 
- 
to recurrence would be a narrowing conversion, which requires a cast. The 
correct thing to do would be make n a size_t. The other thing that you'd need 
to 
do is change declarative to return auto, since take returns a range, _not_ a 
long.

In any case, it _would_ be nice if the compiler gave a more informative message 
about _why_ the template failed to instantiate - especially since it's _not_ 
the 
template constraint which is the problem - but unfortunately, the compiler just 
isn't that smart about template instantiation errors.

- Jonathan M Davis


Re: Fibonacci with ranges

2011-03-12 Thread Ali Çehreli

On 03/12/2011 01:33 AM, Russel Winder wrote:
 On Fri, 2011-03-11 at 18:46 -0500, Jesse Phillips wrote:
 Without testing: foreach (f; take(recurrence!(a[n-1] + 
a[n-2])(0UL, 1UL), 50))


 teo Wrote:

 Just curious: How can I get ulong here?

 foreach (f; take(recurrence!(a[n-1] + a[n-2])(0, 1), 50))
 {
writeln(f);
 }


 Interestingly, or not, the code:

 long declarative ( immutable long n ) {
return take ( recurrence ! ( a[n-1] + a[n-2] ) ( 0L , 1L ) , n ) ;
 }

take returns a lazy range which can't be returned as a single long.

Reading your other post, I think this may be what you wanted to see:

import std.range;
import std.algorithm;

auto declarative(immutable long n)
{
  return take(recurrence!"a[n-1] + a[n-2]"(0L, 1L), n);
}

void main()
{
long[] data = [ 0, 1, 1, 2, 3, 5, 8 ];

foreach (n; 0 .. data.length) {
assert(equal(declarative(n), data[0..n]));
}
}

Ali



Re: Fibonacci with ranges

2011-03-12 Thread Russel Winder
Jonathan,

Thanks for the info, very helpful.  One point though:

On Sat, 2011-03-12 at 01:56 -0800, Jonathan M Davis wrote:
[ . . . ]
 What's happening is that the parameter that you're passing n to for 
 recurrence 
 is size_t. And on 32-bit systems, size_t is uint, so passing n - which is 
 long - 
 to recurrence would be a narrowing conversion, which requires a cast. The 
 correct thing to do would be make n a size_t. The other thing that you'd need 
 to 
 do is change declarative to return auto, since take returns a range, _not_ a 
 long.

It seems that D is falling into the same bear trap as C++ (and C?)
descended into long ago.  When a programmer wants an int or a long, they
actually have to decide whether they need a size_t.  To be honest this
is a simple WTF!!!

Go really has this right.  int  does not exist, neither does long.
int32, int64 -- no ambiguity.  Why C++ and D have to continue with the
pretence of platform independent types when they are far from platform
independent seems counter-productive.

post-facto-rant-warning/

Thanks for the pointer about the take, I need to select just the last
entry in the range and return that.

 In any case, it _would_ be nice if the compiler gave a more informative 
 message 
 about _why_ the template failed to instantiate - especially since it's _not_ 
 the 
 template constraint which is the problem - but unfortunately, the compiler 
 just 
 isn't that smart about template instantiation errors.

C++ is bad enough, if D cannot improve on it . . .   :-((

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Fibonacci with ranges

2011-03-12 Thread Russel Winder
On Sat, 2011-03-12 at 02:15 -0800, Ali Çehreli wrote:
[ . . . ]
 void main()
 {
  long[] data = [ 0, 1, 1, 2, 3, 5, 8 ];
 
  foreach (n; 0 .. data.length) {
  assert(equal(declarative(n), data[0..n]));
  }
 }

In fact the driver is:

unittest {
  immutable data = [
[ 0 , 0 ] ,
[ 1 , 1 ] ,
[ 2 , 1 ] ,
[ 3 , 2 ] ,
[ 4 , 3 ] ,
[ 5 , 5 ] ,
[ 6 , 8 ] ,
[ 7 , 13 ] ,
[ 8 , 21 ] ,
[ 9 , 34 ] ,
[ 10 , 55 ] ,
[ 11 , 89 ] ,
[ 12 , 144 ] ,
[ 13 , 233 ] ,
  ] ;
  foreach ( item ; data ) {
assert ( iterative ( item[0] ) == item[1] ) ;
  }
  foreach ( item ; data ) {
assert ( declarative ( item[0] ) == item[1] ) ;
  }
}

so I need to index the take-n list to return the last value.

This of course brings up the question of the signature of any Fibonacci
function.  Without a real use case it is a rhetorical question.  What is
nice though is that there could be a neat way of generating a memoized,
i.e. cached, lazy list.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Fibonacci with ranges

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 02:48:19 Russel Winder wrote:
 Jonathan,
 
 On Sat, 2011-03-12 at 10:31 +, Russel Winder wrote:
 [ . . . ]
 
   What's happening is that the parameter that you're passing n to for
   recurrence is size_t. And on 32-bit systems, size_t is uint, so
   passing n - which is long - to recurrence would be a narrowing
   conversion, which requires a cast. The correct thing to do would be
   make n a size_t. The other thing that you'd need to do is change
   declarative to return auto, since take returns a range, _not_ a long.
 
 To analyse this a bit more I temporarily deconstructed the expression:
 
 long declarative ( immutable long n ) {
   auto r = recurrence ! ( "a[n-1] + a[n-2]" ) ( 0L , 1L ) ;
   auto t = take ( r , cast ( size_t ) ( n ) ) ;
   return t [ n ] ;
   //return ( take ( recurrence ! ( "a[n-1] + a[n-2]" ) ( 0L , 1L )
 , cast ( size_t ) ( n ) ) ) [ n ] ; }
 
 So with the cast it compiles fine -- though it still seems to me to be
 beyond the point of comprehension as to why an applications programmer
 has to manually cast a long to a size_t.  However the indexing of the
 range fails:

Um. Because it's a narrowing conversion on 32-bit machines. What else should it 
be doing? If it allowed the narrowing conversion without a cast, then you'd run 
into problems where you were losing precision without realizing it which would 
cause plenty of other entertaining bugs. Most newer languages require casts for 
narrowing conversions.
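
For example (a trivial sketch):

void main()
{
    long n = 42;
    // uint m = n;          // error: implicit long -> uint is a narrowing conversion
    uint m = cast(uint) n;  // fine with an explicit cast
}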

 fibonacci_d2.d(17): Error: no [] operator overload for type
 Take!(Recurrence!(fun,long,2u))
 
 Which elicits the response: "for f sake, I'm just copying the
 example from the manual."
 
 OK, so I am grumpy this morning, but that doesn't affect the fact that
 there appears to be a disconnect between documentation and what actually
 works.

take will only return a sliceable range if the range that you give it is 
sliceable. recurrence does not return a sliceable range, so take used on the 
result of recurrence doesn't return a sliceable range. The documentation for 
take is completely correct. It's just that it only has an array in its example, 
not a range which _isn't_ sliceable, so the one example that it does have 
involves a sliceable range.
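
If all you actually want is the last element, you don't need indexing or 
slicing at all - something along these lines works (just a sketch; the helper 
name fib is arbitrary):

import std.range;

long fib(size_t n)
{
    auto r = recurrence!"a[n-1] + a[n-2]"(0L, 1L);
    foreach (i; 0 .. n)
        r.popFront();    // recurrence is an infinite range, so this is always safe
    return r.front;
}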

- Jonathan M Davis


struct construct with array

2011-03-12 Thread Caligo
struct Test{

  public double[3] ar_;
  this(double[3] ar){
this.ar_ = ar;
  }
}

void main(){

  double[3] v1 = [1.0, 2.0, 3.0];
  double[3] v2 = [2.0, 3.0, 4.0];

  auto t1 = Test(v1[0..$] + v2[0..$]); // error

}


I want to add those two arrays and call the constructor in one line, but I'm
getting an error.  Any ideas?


Re: struct construct with array

2011-03-12 Thread Andrej Mitrovic
The best thing I can think of is introducing a temp variable:

void main(){
double[3] v1 = [1.0, 2.0, 3.0];
double[3] v2 = [2.0, 3.0, 4.0];

double[3] v3 = v1[] + v2[];
auto t1 = Test(v3);
}


Points and Vectors in 3D

2011-03-12 Thread Caligo
Given everything that D offers, what would be the best way to implement a
Point and a Vector type?  The same (x, y, z) can be used to represent
vectors, but a point represents a position, whereas a vector represents a
direction.  So, would you define two different structs for each? or define
and implement an interface?  a fixed array or POD members?


Ranges

2011-03-12 Thread Jonas Drewsen

Hi,

   I'm working a bit with ranges atm. but there are definitely some 
things that are not clear to me yet. Can anyone tell me why the char 
arrays cannot be copied but the int arrays can?


import std.stdio;
import std.algorithm;

void main(string[] args) {

  // This works
  int[] a1 = [1,2,3,4];
  int[] a2 = [5,6,7,8];
  copy(a1, a2);

  // This does not!
  char[] a3 = ['1','2','3','4'];
  char[] a4 = ['5','6','7','8'];
  copy(a3, a4);

}

Error message:

test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if 
(isInputRange!(Range1) && isOutputRange!(Range2,ElementType!(Range1))) 
does not match any function template declaration


test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if 
(isInputRange!(Range1) && isOutputRange!(Range2,ElementType!(Range1))) 
cannot deduce template function from argument types !()(char[],char[])


Thanks,
Jonas


Re: Points and Vectors in 3D

2011-03-12 Thread Simon

On 12/03/2011 20:51, Caligo wrote:

Given everything that D offers, what would be the best way to implement
a Point and a Vector type?  The same (x, y, z) can be used to represent
vectors, but a point represents a position, whereas a vector represents
a direction.  So, would you define two different structs for each? or
define and implement an interface?  a fixed array or POD members?


I've done lots of 3d over the years and used quite a lot of different 
libraries and I've come to prefer code that makes a distinction between 
points and vectors. Makes code easier to read and more type safe, though 
it's a bit more inconvenient when you need to mix things up.


I use:

struct pt {
  float[3] _vals;
}

struct vec {
  float[3] _vals;
}

Using the float[3] allows you to use vector ops:

pt  p0;
vec v;

p0._vals[] += v._vals[];

You don't want an interface; you don't get anything more value type than 
points & vectors. In a modern 3d model you could be dealing with a 1/2 
million vertices.


--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk


Re: struct construct with array

2011-03-12 Thread Ali Çehreli

On 03/12/2011 10:42 AM, Caligo wrote:

struct Test{

   public double[3] ar_;
   this(double[3] ar){
 this.ar_ = ar;
   }
}

void main(){

   double[3] v1 = [1.0, 2.0, 3.0];
   double[3] v2 = [2.0, 3.0, 4.0];

   auto t1 = Test(v1[0..$] + v2[0..$]); // error

}


I want to add those two arrays and call the constructor in one line, but I'm
getting an error.  Any ideas?



Even simpler code doesn't work:

double[3] v1 = [1.0, 2.0, 3.0];
double[3] v2 = [2.0, 3.0, 4.0];

auto result = v1[0..$] + v2[0..$];

Error: Array operation v1[0LU..__dollar] + v2[0LU..__dollar] not implemented

The following doesn't work either:

auto result = v1[] + v2[];
auto result = v1 + v2;

dmd does not implement those features yet.

Ali


Re: struct construct with array

2011-03-12 Thread Ali Çehreli

On 03/12/2011 02:52 PM, Ali Çehreli wrote:

On 03/12/2011 10:42 AM, Caligo wrote:

struct Test{

public double[3] ar_;
this(double[3] ar){
this.ar_ = ar;
}
}

void main(){

double[3] v1 = [1.0, 2.0, 3.0];
double[3] v2 = [2.0, 3.0, 4.0];

auto t1 = Test(v1[0..$] + v2[0..$]); // error

}


I want to add those two arrays and call the constructor in one line,
but I'm
getting an error. Any ideas?



Even a simpler code doesn't work:

double[3] v1 = [1.0, 2.0, 3.0];
double[3] v2 = [2.0, 3.0, 4.0];

auto result = v1[0..$] + v2[0..$];

Error: Array operation v1[0LU..__dollar] + v2[0LU..__dollar] not
implemented

The following doesn't work either:

auto result = v1[] + v2[];
auto result = v1 + v2;

dmd does not implement those features yet.

Ali


I see from the other posts that the problem has something to do with 
auto. This works:


double[3] v3 = v1[] + v2[];

Ali



Re: Ranges

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 14:02:00 Jonas Drewsen wrote:
 Hi,
 
 I'm working a bit with ranges atm. but there are definitely some
 things that are not clear to me yet. Can anyone tell me why the char
 arrays cannot be copied but the int arrays can?
 
 import std.stdio;
 import std.algorithm;
 
 void main(string[] args) {
 
// This works
int[]  a1 = [1,2,3,4];
int[] a2 = [5,6,7,8];
copy(a1, a2);
 
// This does not!
char[] a3 = ['1','2','3','4'];
char[] a4 = ['5','6','7','8'];
copy(a3, a4);
 
 }
 
 Error message:
 
 test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
 (isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
 does not match any function template declaration
 
 test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
 (isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
 cannot deduce template function from argument types !()(char[],char[])

Character arrays / strings are not exactly normal. And there's a very good 
reason for it: unicode.

In unicode, a character is generally a single code point (there are also 
graphemes which involve combining code points to add accents and superscripts 
and whatnot to create a single character, but we'll ignore that in this 
discussion - it's complicated enough as it is). Depending on the encoding, that 
code point may be made up of one - or more - code units. UTF-8 uses 8 bit code 
units. UTF-16 uses 16 bit code units. And UTF-32 uses 32-bit code units. char 
is 
a UTF-8 code unit. wchar is a UTF-16 code unit. dchar is a UTF-32 code unit. 
UTF-32 is the _only_ one of those three which _always_ has one code unit per 
code point.

With an array of integers you can index it and slice it and be sure that 
everything that you're doing is valid. If you look at a single element, you 
know 
that it's a valid int. If you slice it, you know that every int in there is 
valid. If you're dealing with a dstring or dchar[], then the same still holds.

A dstring or dchar[] is an array of UTF-32 code units. Every code point is a 
single code unit, so every element in the array is a valid code point. You can 
take an arbitrary element in that array and know that it's a valid code point. 
You can slice it wherever you want and you still have a valid dstrin
g or dchar[]. The same does _not_ hold for char[] and wchar[].

char[] and wchar[] are arrays of UTF-8 and UTF-16 code units respectively. In 
both of those encodings, multiple code units are required to create a single 
code point. So, for instance, a code point could have 4 code units. That means 
that _4_ elements of that char[] make up a _single_ code point. You'd need 
_all_ 
4 of those elements to create a single, valid character. So, you _can't_ just 
take an arbitrary element in a char[] or wchar[] and expect it to be valid. You 
_can't_ just slice it anywhere. The resulting array stands a good chance of 
being invalid. You have to slice on code point boundaries - otherwise you could 
slice characters in half and end up with an invalid string. So, unlike other 
arrays, it just doesn't work to treat char[] and wchar[] as random access 
ranges 
of their element type. What the programmer cares about is characters - dchars - 
not chars or wchars.

So, the way this is handled is that char[], wchar[], and dchar[] are all 
treated 
as ranges of dchar. In the case of dchar[], this is nothing special. You can 
index it and slice it as normal. So, it is a random access range. However, in 
the case of char[] and wchar[], that means that when you're iterating over them 
that you're not dealing with a single element of the array at a time. front 
returns a dchar, and popFront() pops off however many elements made up front. 
It's like with foreach. If you iterate a char[] with auto or char, then each 
individual element is given

foreach(c; myStr) {}

But if you iterate over with dchar, then each code point is given as a dchar:

foreach(dchar c; myStr) {}

If you were to try and iterate over a char[] by char, then you would be looking 
at code units rather than code points which is _rarely_ what you want. If 
you're 
dealing with anything other than pure ASCII, you _will_ have bugs if you do 
that. You're supposed to use dchar with foreach and character arrays. That way, 
each value you process is a valid character. Ranges do the same, only you don't 
give them an iteration type, so they're _always_ iterating over dchar.
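
A quick sketch of what that means in practice (the string literal is just an 
example):

import std.array;

void main()
{
    string s = "é!";           // 'é' takes two UTF-8 code units
    assert(s.length == 3);     // length counts code units
    assert(s.front == 'é');    // front decodes a whole code point (a dchar)
    s.popFront();              // pops both code units of 'é'
    assert(s == "!");
}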

So, when you're using a range of char[] or wchar[], you're really using a range 
of dchar. These ranges are bi-directional. They can't be sliced, and they can't 
be indexed (since doing so would likely be invalid). This generally works very 
well. It's exactly what you want in most cases. The problem is that that means 
that the range that you're iterating over is effectively of a different type 
than 
the original char[] or wchar[].

You can't just take two ranges of dchar of the same length and necessarily have 
them 

Re: Points and Vectors in 3D

2011-03-12 Thread Bekenn

On 3/12/2011 2:20 PM, Simon wrote:

I've done lots of 3d over the years and used quite a lot of different
libraries and I've come to prefer code that makes a distinction between
points and vectors.


Agreed.  This has some nice benefits with operator overloading, as well:

vec v = ...;
pt p = ...;
auto p2 = p + v;// p2 is pt
auto p3 = p + p2;   // error
auto v2 = v + v;// v2 is vec

...and with properties:

p.x = 5;// p is pt, sets p._vals[0]
v.dx = 3;   // v is vec, sets v._vals[0]
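
For what it's worth, a minimal sketch of how that could look (the names pt/vec 
just mirror Simon's post, and the operator set is only illustrative):

struct vec
{
    float[3] _vals;

    // vector + vector -> vector
    vec opBinary(string op : "+")(vec rhs)
    {
        vec r;
        r._vals[] = _vals[] + rhs._vals[];
        return r;
    }
}

struct pt
{
    float[3] _vals;

    // point + vector -> point; no pt + pt overload exists,
    // so point + point is rejected at compile time, as desired
    pt opBinary(string op : "+")(vec rhs)
    {
        pt r;
        r._vals[] = _vals[] + rhs._vals[];
        return r;
    }

    @property float x() { return _vals[0]; }
    @property void x(float value) { _vals[0] = value; }
}

void main()
{
    pt p;
    vec v;
    auto p2 = p + v;        // fine: pt
    auto v2 = v + v;        // fine: vec
    // auto bad = p + p2;   // error: no pt + pt overload
}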


Re: Ranges

2011-03-12 Thread Bekenn

On 3/12/2011 2:02 PM, Jonas Drewsen wrote:

Error message:

test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
(isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
does not match any function template declaration

test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
(isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
cannot deduce template function from argument types !()(char[],char[])


I haven't checked (could be completely off here), but I don't think that 
char[] counts as an input range; you would normally want to use dchar 
instead.


Re: Ranges

2011-03-12 Thread Bekenn

Or, better yet, just read Jonathan's post.


Re: Ranges

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 16:05:37 Jonathan M Davis wrote:
 You could open an
 enhancement request for copy to treat char[] and wchar[] as arrays if
 _both_ of the arguments are of the same type.

Actually, on reflection, I'd have to say that there's not much point to that. 
If 
you really want to copy one array to another (rather than a range), just use the 
array copy syntax:

void main()
{
auto i = [1, 2, 3, 4];
auto j = [3, 4, 5, 6];
assert(i == [1, 2, 3, 4]);
assert(j == [3, 4, 5, 6]);

i[] = j[];

assert(i == [3, 4, 5, 6]);
assert(j == [3, 4, 5, 6]);
}

copy is of benefit, because it works on generic ranges, not for copying arrays 
(arrays already allow you to do that quite nicely), so if all you're looking at 
copying is arrays, then just use the array copy syntax.
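
And for the char arrays in the original example, the same syntax works, since 
it's a plain element-by-element copy with no decoding involved (reusing Jonas's 
variable names):

void main()
{
    char[] a3 = ['1', '2', '3', '4'];
    char[] a4 = ['5', '6', '7', '8'];

    a4[] = a3[];    // element-wise copy of the code units

    assert(a4 == "1234");
}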

- Jonathan M Davis


Re: Ranges

2011-03-12 Thread Jonathan M Davis
On Saturday 12 March 2011 16:11:20 Bekenn wrote:
 On 3/12/2011 2:02 PM, Jonas Drewsen wrote:
  Error message:
  
  test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
  (isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
  does not match any function template declaration
  
  test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
  (isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
  cannot deduce template function from argument types !()(char[],char[])
 
 I haven't checked (could be completely off here), but I don't think that
 char[] counts as an input range; you would normally want to use dchar
 instead.

Char[] _does_ count as an input range (of dchar). It just doesn't count as an 
_output_ range (since it doesn't really hold dchar).

- Jonathan M Davis


.di header imports with DLL symbols fails to link

2011-03-12 Thread Andrej Mitrovic
On Windows, x86.

http://dl.dropbox.com/u/9218759/DLL_Imports.zip

fail_build.bat runs:
dmd driver.d mydll.lib -I%cd%\include\
but linking fails:
driver.obj(driver)
 Error 42: Symbol Undefined _D5mydll12__ModuleInfoZ
--- errorlevel 1

work_build.bat runs:
dmd driver.d mydll.lib %cd%\include\mydll.di
and this succeeds.

So passing the .di file explicitly works, but via the import switch it does not.

Here's a non-DLL example which works fine when using header files and an import 
switch:
http://dl.dropbox.com/u/9218759/importsWorkUsually.zip

So unless I'm missing something this looks like a linker bug?


Re: Ranges

2011-03-12 Thread Andrej Mitrovic
What Jonathan said really needs to be put up on the D website, maybe
under the articles section. Heck, I'd just put a link to that recent
UTF thread on the website, it's really informative (the one on UTF and
meaning of glyphs, etc). And UTF will only get more important, just
like multicore.

Speaking of which, a description on ranges should be put up there as
well. There's that article Andrei once wrote, but we should put it on
the D site and discuss D's implementation of ranges in more detail.
And by 'we' I mean someone who's well versed in ranges. :p


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Bekenn

On 3/12/2011 5:24 PM, Andrej Mitrovic wrote:

driver.obj(driver)
  Error 42: Symbol Undefined _D5mydll12__ModuleInfoZ
--- errorlevel 1


Your dll is exporting a different symbol: _D5mydll3fooFiZi
Do you have the .def file and the command line used to build the DLL?


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Daniel Green

On 3/12/2011 9:15 PM, Bekenn wrote:

On 3/12/2011 5:24 PM, Andrej Mitrovic wrote:

driver.obj(driver)
Error 42: Symbol Undefined _D5mydll12__ModuleInfoZ
--- errorlevel 1


Your dll is exporting a different symbol: _D5mydll3fooFiZi
Do you have the .def file and the command line used to build the DLL?
I believe _D5mydll12__ModuleInfoZ is supposed to be exported by the 
compiler.


It contains static constructor and unittest information used by the 
runtime to initialize it.


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Andrej Mitrovic
Actually passing that .di file compiles it in statically, and the exe
ends up not needing the DLL.

It's a bit too late for me to tinker with the linker, I'll have a
clearer head tomorrow.


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Andrej Mitrovic
My commands to compile were:
dmd -ofmydll.dll mydll.d
dmd -o- -Hdinclude mydll.d
dmd driver.d mydll.lib -I%cd%\include


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Bekenn

On 3/12/2011 7:02 PM, Andrej Mitrovic wrote:

My commands to compile were:
dmd -ofmydll.dll mydll.d
dmd -o- -Hdinclude mydll.d
dmd driver.d mydll.lib -I%cd%\include


Thanks.

I've tried several things, but can't get the _D5mydll12__ModuleInfoZ 
symbol to show up at all.  The behavior is the same with and without a 
.def file (I tried a few versions).  I even went back to 
http://www.digitalmars.com/d/2.0/dll.html and copied everything
in the "D code calling D code in DLLs" section verbatim.  After fixing a 
few compilation errors (the web page's version of concat needs its 
arguments qualified with in), I ended up with the exact same problem 
you're experiencing.


I'd definitely call this a bug.


Re: .di header imports with DLL symbols fails to link

2011-03-12 Thread Daniel Green

On 3/12/2011 11:39 PM, Bekenn wrote:

On 3/12/2011 7:02 PM, Andrej Mitrovic wrote:

My commands to compile were:
dmd -ofmydll.dll mydll.d
dmd -o- -Hdinclude mydll.d
dmd driver.d mydll.lib -I%cd%\include


Thanks.

I've tried several things, but can't get the _D5mydll12__ModuleInfoZ
symbol to show up at all. The behavior is the same with and without a
.def file (I tried a few versions). I even went back to
http://www.digitalmars.com/d/2.0/dll.html and copied everything
in the D code calling D code in DLLs section verbatim. After fixing a
few compilation errors (the web page's version of concat needs its
arguments qualified with in), I ended up with the exact same problem
you're experiencing.

I'd definitely call this a bug.


Probably unrelated, but this same issue showed up in the GDC backend. 
Apparently, the compiler tried to be smart about exporting ModuleInfo 
only for those modules that needed it.  The fix was to always export it 
regardless.


Re: Ranges

2011-03-12 Thread Jonas Drewsen

Hi Jonathan,

   Thank you very much for your in-depth answer!

   It should indeed go into a FAQ somewhere I think. I did know about the 
codepoint/unit stuff but had no idea that ranges of char are handled 
using dchar internally. This makes sense but is an easy pitfall for 
newcomers trying to use std.{algorithm,array,range} for char[].


Thanks
Jonas

On 13/03/11 01.05, Jonathan M Davis wrote:

On Saturday 12 March 2011 14:02:00 Jonas Drewsen wrote:

Hi,

 I'm working a bit with ranges atm. but there are definitely some
things that are not clear to me yet. Can anyone tell me why the char
arrays cannot be copied but the int arrays can?

import std.stdio;
import std.algorithm;

void main(string[] args) {

// This works
int[]   a1 = [1,2,3,4];
int[] a2 = [5,6,7,8];
copy(a1, a2);

// This does not!
char[] a3 = ['1','2','3','4'];
char[] a4 = ['5','6','7','8'];
copy(a3, a4);

}

Error message:

test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
(isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
does not match any function template declaration

test2.d(13): Error: template std.algorithm.copy(Range1,Range2) if
(isInputRange!(Range1)  isOutputRange!(Range2,ElementType!(Range1)))
cannot deduce template function from argument types !()(char[],char[])


Character arrays / strings are not exactly normal. And there's a very good
reason for it: unicode.

In unicode, a character is generally a single code point (there are also
graphemes which involve combining code points to add accents and superscripts
and whatnot to create a single character, but we'll ignore that in this
discussion - it's complicated enough as it is). Depending on the encoding, that
code point may be made up of one - or more - code units. UTF-8 uses 8 bit code
units. UTF-16 uses 16 bit code units. And UTF-32 uses 32-bit code units. char is
a UTF-8 code unit. wchar is a UTF-16 code unit. dchar is a UTF-32 code unit.
UTF-32 is the _only_ one of those three which _always_ has one code unit per
code point.

With an array of integers you can index it and slice it and be sure that
everything that you're doing is valid. If you look at a single element, you know
that it's a valid int. If you slice it, you know that every int in there is
valid. If you're dealing with a dstring or dchar[], then the same still holds.

A dstring or dchar[] is an array of UTF-32 code units. Every code point is a
single code unit, so every element in the array is a valid code point. You can
take an arbitrary element in that array and know that it's a valid code point.
You can slice it wherever you want and you still have a valid dstrin
g or dchar[]. The same does _not_ hold for char[] and wchar[].

char[] and wchar[] are arrays of UTF-8 and UTF-16 code units respectively. In
both of those encodings, multiple code units are required to create a single
code point. So, for instance, a code point could have 4 code units. That means
that _4_ elements of that char[] make up a _single_ code point. You'd need _all_
4 of those elements to create a single, valid character. So, you _can't_ just
take an arbitrary element in a char[] or wchar[] and expect it to be valid. You
_can't_ just slice it anywhere. The resulting array stands a good chance of
being invalid. You have to slice on code point boundaries - otherwise you could
slice characters in hald and end up with an invalid string. So, unlike other
arrays, it just doesn't work to treat char[] and wchar[] as random access ranges
of their element type. What the programmer cares about is characters - dchars -
not chars or wchars.

So, the way this is handled is that char[], wchar[], and dchar[] are all treated
as ranges of dchar. In the case of dchar[], this is nothing special. You can
index it and slice it as normal. So, it is a random access range.. However, in
the case of char[] and wchar[], that means that when you're iterating over them
that you're not dealing with a single element of the array at a time. front
returns a dchar, and popFront() pops off however many elements made up front.
It's like with foreach. If you iterate a char[] with auto or char, then each
individual element is given

foreach(c; myStr) {}

But if you iterate over with dchar, then each code point is given as a dchar:

foreach(dchar c; myStr) {}

If you were to try and iterate over a char[] by char, then you would be looking
at code units rather than code points which is _rarely_ what you want. If you're
dealing with anything other than pure ASCII, you _will_ have bugs if you do
that. You're supposed to use dchar with foreach and character arrays. That way,
each value you process is a valid character. Ranges do the same, only you don't
give them an iteration type, so they're _always_ iterating over dchar.

So, when you're using a range of char[] or wchar[], you're really using a range
of dchar. These ranges are bi-directional. They can't be sliced, and they can't
be indexed (since doing so would 

[Issue 5731] WindowsTimeZone has offsets from UTC backwards

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5731


Jonathan M Davis jmdavisp...@gmx.com changed:

   What|Removed |Added

 Status|NEW |ASSIGNED




[Issue 5731] New: WindowsTimeZone has offsets from UTC backwards

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5731

   Summary: WindowsTimeZone has offsets from UTC backwards
   Product: D
   Version: unspecified
  Platform: All
OS/Version: Windows
Status: NEW
  Severity: normal
  Priority: P2
 Component: Phobos
AssignedTo: nob...@puremagic.com
ReportedBy: jmdavisp...@gmx.com


--- Comment #0 from Jonathan M Davis jmdavisp...@gmx.com 2011-03-12 00:29:11 
PST ---
This program:

import std.datetime;
import std.stdio;

void main()
{
writeln(SysTime(Date.init, WindowsTimeZone.getTimeZone("Pacific Standard Time")));
writeln(SysTime(Date.init, WindowsTimeZone.getTimeZone("Eastern Standard Time")));
writeln(SysTime(Date.init, WindowsTimeZone.getTimeZone("Greenwich Standard Time")));
writeln(SysTime(Date.init, WindowsTimeZone.getTimeZone("Romance Standard Time")));
}


prints this:

0001-Jan-01 00:00:00+08:00
0001-Jan-01 00:00:00+05:00
0001-Jan-01 00:00:00+00:00
0001-Jan-01 00:00:00-01:00


Notice that the offsets are all the reverse of what they're supposed to be (+8
instead of -8, +5 instead of -5, etc.).



[Issue 5730] Error: variable has scoped destruction, cannot build closure

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730


Walter Bright bugzi...@digitalmars.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||bugzi...@digitalmars.com
 Resolution||INVALID


--- Comment #1 from Walter Bright bugzi...@digitalmars.com 2011-03-12 
00:50:04 PST ---
Right. {auto s1=s;} is a delegate literal. Delegate literals need to be able to
survive the end of the function they are defined inside. Since s is destroyed
upon the exit from main(), then this will not work, hence the error message.

This is expected behavior, not a bug.



[Issue 5730] Error: variable has scoped destruction, cannot build closure

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730



--- Comment #2 from Max Samukha samu...@voliacable.com 2011-03-12 01:53:09 
PST ---
No, no. The bug is not about the impossibility to build a closure. It is about
__traits(compiles) not handling the compilation error properly. It should
suppress the error and evaluate to false.  

Another example:

struct S
{
~this()
{
}
}

void main()
{
S s;
static if (__traits(compiles, { auto s1 = s; }))
pragma(msg, "Can build closure");
else
pragma(msg, "Cannot build closure");
}

The compiler outputs:
Can build closure
Error: variable test.main.s has scoped destruction, cannot build closure


Instead, the above should compile successfully, printing "Cannot build closure"
at compile time.



[Issue 5730] Error: variable has scoped destruction, cannot build closure

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730


Max Samukha samu...@voliacable.com changed:

   What|Removed |Added

 Status|RESOLVED|REOPENED
 Resolution|INVALID |




[Issue 5730] __traits(compiles) does not handle variable has scoped destruction, cannot build closure error correctly

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730


Max Samukha samu...@voliacable.com changed:

   What|Removed |Added

Summary|Error: variable has scoped  |__traits(compiles) does not
   |destruction, cannot build   |handle variable has scoped
   |closure |destruction, cannot build
   ||closure error correctly


--- Comment #3 from Max Samukha samu...@voliacable.com 2011-03-12 02:37:00 
PST ---
Changed the title



[Issue 5731] std.datetime.SysTime prints UTC offsets backwards

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5731


Jonathan M Davis jmdavisp...@gmx.com changed:

   What|Removed |Added

Summary|WindowsTimeZone has offsets |std.datetime.SysTime prints
   |from UTC backwards  |UTC offsets backwards
 OS/Version|Windows |All


--- Comment #1 from Jonathan M Davis jmdavisp...@gmx.com 2011-03-12 03:47:36 
PST ---
Okay. So, this isn't a WindowsTimeZone problem. It's a problem with SysTime's
string functions (toISOExtendedString in the example, but it's the same for all
of them) where they get the UTC offset backwards.



[Issue 5732] New: Windows installer 1.067 creates incorrect target for Start menu link

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5732

   Summary: Windows installer 1.067 creates incorrect target for
Start menu link
   Product: D
   Version: D2
  Platform: x86
OS/Version: Windows
Status: NEW
  Severity: minor
  Priority: P2
 Component: installer
AssignedTo: nob...@puremagic.com
ReportedBy: commonm...@seznam.cz


--- Comment #0 from Simon Says commonm...@seznam.cz 2011-03-12 10:05:44 PST 
---
When I install DMD 2 from the Windows 1-click installer and choose to install
only D2 with the option to create links in the Start menu, the Documentation link is
created pointing incorrectly to (default path) C:\D\dmd\

Setting the link's target to C:\D\dmd2\... obviously fixes the problem, but
shouldn't there be different links to the appropriate versions of the documentation?



[Issue 5730] __traits(compiles) does not handle variable has scoped destruction, cannot build closure error correctly

2011-03-12 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #4 from Don clugd...@yahoo.com.au 2011-03-12 11:48:11 PST ---
This is happening because the "has scoped destruction" error is generated in
the glue layer, not in the front-end. The same issue applies to any error
message generated in e2ir.c, toir.c, or s2ir.c.
