Re: Plot2kill 0.2

2011-03-11 Thread David Nadlinger

On 3/11/11 8:49 AM, Kagamin wrote:

It's also easier to debug code if you store objects in variables. What if
the histogram is created with a bug? How would you diagnose it? If you have
the histogram stored in a variable, you can put a breakpoint after the
assignment and inspect the histogram, but your example doesn't provide a
variable to inspect.


Just put a breakpoint at the beginning of the line and step through it?

David


Re: Plot2kill 0.2

2011-03-11 Thread dsimcha

On 3/11/2011 2:49 AM, Kagamin wrote:

dsimcha Wrote:


The problem with the with statement idea is that you still need to
declare the variable.  I often throw up quick anonymous plots with
anonymous Figure objects, like:

Histogram(someDataSet).toFigure
  .title("A Title")
  .xLabel("Stuff")
  .showAsMain();


It's also easier to debug code if you store objects in variables. What if
the histogram is created with a bug? How would you diagnose it? If you have
the histogram stored in a variable, you can put a breakpoint after the
assignment and inspect the histogram, but your example doesn't provide a
variable to inspect.


Well, then give it a name if you ever end up needing to.  In general I 
hate this as an argument against terse but readable code because you can 
always make things into named variables very easily if you ever need to 
set a breakpoint there.  Usually I use as few named variables as I can 
without hurting readability when I program, both to save typing and 
because I find it hard to think of good names for intermediate 
variables, so I end up naming them something horrible.
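
For what it's worth, turning the chained version into a breakpoint-friendly one is a one-line change. This is only a sketch; the method names are the ones from the example above (Plot2kill's actual API may differ):

```d
// Name the intermediate Figure; a breakpoint after this line lets you
// inspect fig before any of the chained setters run.
auto fig = Histogram(someDataSet).toFigure;
fig.title("A Title")
   .xLabel("Stuff")
   .showAsMain();
```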


Re: Plot2kill 0.2

2011-03-11 Thread Kagamin
David Nadlinger Wrote:

 On 3/11/11 8:49 AM, Kagamin wrote:
  It's also easier to debug code if you store objects in variables. What if
  the histogram is created with a bug? How would you diagnose it? If you have
  the histogram stored in a variable, you can put a breakpoint after the
  assignment and inspect the histogram, but your example doesn't provide a
  variable to inspect.
 
 Just put a breakpoint at the beginning of the line and step through it?

Those are library methods. You probably won't have sources for them, but you 
can still access their properties or inspect private members if you have 
appropriate symbols for them. It can also be quite tedious to step through 
those chained methods, because the chain can be long.
Maybe the MS debugger sucks, but it's nontrivial to step through methods called 
in a single statement: it tends to step through the whole statement, though you 
can step in and step out.


Re: Plot2kill 0.2

2011-03-11 Thread Kagamin
  Just put a breakpoint at the beginning of the line and step through it?
 Maybe the MS debugger sucks, but it's nontrivial to step through methods called 
 in a single statement: it tends to step through the whole statement, though you 
 can step in and step out.

You will also step into functions called for arguments, like getColor() in the 
example.
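
To illustrate the pain point (hypothetical method names, in the spirit of the earlier example): a single chained statement gives the debugger only one line to break on, and step-in visits the argument expressions before any of the chained methods:

```d
// One statement, one breakpoint location. Stepping in first enters
// makeTitle() and getColor(), then each library method in the chain.
Histogram(someDataSet).toFigure
    .title(makeTitle())
    .lineColor(getColor())
    .showAsMain();
```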


Re: Plot2kill 0.2

2011-03-11 Thread dsimcha
== Quote from Kagamin (s...@here.lot)'s article
   Just put a breakpoint at the beginning of the line and step through it?
  Maybe the MS debugger sucks, but it's nontrivial to step through methods
  called in a single statement: it tends to step through the whole statement,
  though you can step in and step out.
 You will also step into functions called for arguments like getColor() in the
 example.

Ok, I'll admit I don't know much about this stuff.  I debug mostly with asserts
and print statements.  I very seldom use a debugger.


Re: Plot2kill 0.2

2011-03-11 Thread Nick Sabalausky
dsimcha dsim...@yahoo.com wrote in message 
news:ildvns$2epk$1...@digitalmars.com...
 == Quote from Kagamin (s...@here.lot)'s article
   Just put a breakpoint at the beginning of the line and step through it?
  Maybe the MS debugger sucks, but it's nontrivial to step through methods
  called in a single statement: it tends to step through the whole statement,
  though you can step in and step out.
 You will also step into functions called for arguments like getColor() in
 the example.

 Ok, I'll admit I don't know much about this stuff.  I debug mostly with 
 asserts
 and print statements.  I very seldom use a debugger.

Same here. I got used to printf-debugging when dealing with a bunch of 
platforms that lacked debuggers. Plus it makes it a lot easier to look 
backwards in time (just scroll up instead of restarting and re-stepping 
through).

Unfortunately that makes debugging CTFE a royal pain in the ass since CTFE 
has absolutely zero way to send anything to stdout - or any other IO for 
that matter. And you can't work around it by appending to a log to then 
display at runtime because CTFE deliberately disallows any global mutable 
state *even* when order-of-evaluation doesn't matter for what you're trying 
to do.




Re: Plot2kill 0.2

2011-03-11 Thread David Nadlinger

On 3/11/11 11:36 PM, Simon wrote:

On 11/03/2011 21:33, Nick Sabalausky wrote:

Unfortunately that makes debugging CTFE a royal pain in the ass since
CTFE
has absolutely zero way to send anything to stdout - […]

[…]
just use:

pragma(msg, CTFE_string);


No, this doesn't quite cut it for debugging CTFE functions: While you 
can obviously use pragma(msg, …) to write the *result* of a function 
invoked via CTFE to standard output, this doesn't help you if you want 
to debug the stuff going on *in* that function itself, for example to 
track down one of the numerous CTFE bugs.


David


P.S.: As a last resort to get something printed during CTFE evaluation, 
it is possible to use »assert(false, message)« – DMD prints the 
(correctly evaluated) message as part of the compiler error message. 
Obviously, this screws up compilation though.
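
A minimal sketch of that trick, assuming some function we want to inspect mid-CTFE (the function and message are made up for illustration):

```d
// Abort CTFE with the intermediate state embedded in the assert message.
string build(int n)
{
    string s;
    foreach (i; 0 .. n)
        s ~= "ab";
    assert(false, "state so far: " ~ s);  // DMD prints this, then errors out
    return s;  // never reached
}

enum r = build(2);  // compilation fails, but the message shows the state
```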


Re: Plot2kill 0.2

2011-03-11 Thread Simon

On 11/03/2011 22:52, David Nadlinger wrote:

On 3/11/11 11:36 PM, Simon wrote:

On 11/03/2011 21:33, Nick Sabalausky wrote:

Unfortunately that makes debugging CTFE a royal pain in the ass since
CTFE
has absolutely zero way to send anything to stdout - […]

[…]
just use:

pragma(msg, CTFE_string);


No, this doesn't quite cut it for debugging CTFE functions: While you
can obviously use pragma(msg, …) to write the *result* of a function
invoked via CTFE to standard output, this doesn't help you if you want
to debug the stuff going on *in* that function itself, for example to
track down one of the numerous CTFE bugs.

David


P.S.: As a last resort to get something printed during CTFE evaluation,
it is possible to use »assert(false, message)« – DMD prints the
(correctly evaluated) message as part of the compiler error message.
Obviously, this screws up compilation though.


Never had a problem myself; and I've used some really hairy string 
mixins to create runtime & CTFE functions.
Worst case, when composing functions, is to print out the result, copy it 
into a file, and then debug as normal.


It's neither elegant nor convenient, but it works.

--
My enormous talent is exceeded only by my outrageous laziness.
http://www.ssTk.co.uk


Re: Plot2kill 0.2

2011-03-11 Thread David Nadlinger

On 3/12/11 12:41 AM, Simon wrote:

No, this doesn't quite cut it for debugging CTFE functions: While you
can obviously use pragma(msg, …) to write the *result* of a function
invoked via CTFE to standard output, this doesn't help you if you want
to debug the stuff going on *in* that function itself, for example to
track down one of the numerous CTFE bugs.
[…]


Never had a problem myself; and I've used some really hairy string
mixins to create runtime & CTFE functions.
Worst case, when composing functions, is to print out the result, copy it
into a file, and then debug as normal.

It's neither elegant nor convenient, but it works.


But not for the case mentioned above – first, because there is often no 
sane way to »print out the result«, and second, because if there are 
CTFE bugs involved, running the code at runtime (I guess that's what you 
meant with »debug as normal«) obviously doesn't help you in any way.


And yes, I had (or rather have) this problem myself, with not even that 
crazy code…


David


Re: Plot2kill 0.2

2011-03-11 Thread Nick Sabalausky
Simon s.d.hamm...@gmail.com wrote in message 
news:ilec50$cj0$1...@digitalmars.com...
 On 11/03/2011 22:52, David Nadlinger wrote:
 On 3/11/11 11:36 PM, Simon wrote:
 On 11/03/2011 21:33, Nick Sabalausky wrote:
 Unfortunately that makes debugging CTFE a royal pain in the ass since
 CTFE
 has absolutely zero way to send anything to stdout - […]
 […]
 just use:

 pragma(msg, CTFE_string);

 No, this doesn't quite cut it for debugging CTFE functions: While you
 can obviously use pragma(msg, …) to write the *result* of a function
 invoked via CTFE to standard output, this doesn't help you if you want
 to debug the stuff going on *in* that function itself, for example to
 track down one of the numerous CTFE bugs.

 David


 P.S.: As a last resort to get something printed during CTFE evaluation,
 it is possible to use »assert(false, message)« - DMD prints the
 (correctly evaluated) message as part of the compiler error message.
 Obviously, this screws up compilation though.

 Never had a problem myself; and I've used some really hairy string mixins 
 to create runtime & CTFE functions.
 Worst case, when composing functions, is to print out the result, copy it 
 into a file, and then debug as normal.

 It's neither elegant nor convenient, but it works.


Yes. Like I said, it's a pain in the ass. I never said it wasn't possible.





Re: User Defined Annotations

2011-03-11 Thread Andrew Wiley
On Fri, Mar 11, 2011 at 1:38 AM, Jacob Carlborg d...@me.com wrote:
 On 2011-03-11 04:30, Andrew Wiley wrote:

 This is a topic that seems to come up every so often (well, I bring it
 up every so often, but I haven't really heard negative remarks about
 it), and I was wondering what it would take to move forward with
 getting user defined annotations into the language, and whether it
 would be feasible for D2. Would the next step be to make a full
 proposal?

 For this to be useful, the language would probably need better reflection
 capabilities, either compile time or runtime.


As I see it, the proposal should probably include how this should look
in compile time reflection. As far as I know, runtime reflection has
even less of a plan than user defined annotations, and the two should
probably be proposed separately.
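
As a purely hypothetical sketch of what that pairing could look like (none of this syntax or the `hasAnnotation` template exists in D2 today; it's just one shape a proposal might take):

```d
struct Serializable {}  // an ordinary type used as an annotation

@Serializable struct Point { int x, y; }  // hypothetical annotation syntax

// Hypothetical compile-time query a proposal would have to specify:
static if (hasAnnotation!(Point, Serializable))
{
    // ... generate serialization code here ...
}
```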


Re: Code Sandwiches

2011-03-11 Thread Lars T. Kyllingstad
On Thu, 10 Mar 2011 17:22:53 -0500, Nick Sabalausky wrote:

 Lars T. Kyllingstad public@kyllingen.NOSPAMnet wrote in message
 news:ilaa5k$2vls$2...@digitalmars.com...

 PDF ensures a consistent look across different platforms and viewers,
 because the layout is fixed and fonts can be embedded.
 
 That's a significant part of what makes it good for printing and
 terrible for everything else.
 
 
 Embedding formulas as images isn't really an option, because you want
 them to be in the same font as (or a font that looks good with) the
 document's main font.
 
 That strikes me as worrying over a trivial detail. Is the formula's font
 *really* the point, or is it the formula itself?
 
 
 As I see it, the only viable option for embedding math in HTML is to
 use MathML.
 
 "Viable" and "perfect" are two very different things. If you feel that
 the formulas *MUST* be in the same font as the rest, then it sounds like
 you mean "perfect", not "viable".
 
 
 Anyway, besides ensuring good-looking formulas, a fixed layout means
 that you are in full control over other typesetting issues such as
 hyphenation.
 
 Again, that *belongs* in the realm of the reader, the reader's machine
 and the document viewer. This isn't old-school dead-tree media we're
 talking about here. In printed form, the viewing device/app and the
 publication format are inherently the exact same thing, so the
 distinction is irrelevent and presentation details like that may as well
 be handled by the producer. But with computers, the two things are
 inherently very different.
 
 The bottom line is, viewing a document should work as well as it
 reasonably can on *anything* it's viewed on, any app, any device, any
 person. Yes, that might *seem* to indicate letting the producer control
 every detail, but outside of paper (where there *is* only one
 app/device the document is viewed with) that doesn't work: Obviously,
 different viewers are going to have different needs, different optimal
 uses, etc. Is it at all reasonable for the content producer to take into
 account every viewer/device or even personal preference that it's going
 to come across, even just in the present, let alone the future?
 Certainly not (heck, that would be like the days before device drivers).
 Is it even conceivably *possible* with PDF? Not remotely. The *only*
 thing that has the proper information to appropriately format a document
 is the viewer itself, the device itself, etc. *Not* the content
 producer.
 
 
 And finally, I have yet to see any web browser or word processor that
 even comes close to LaTeX with regards to typesetting quality.  Show me
 a PDF file created by LaTeX and a PDF version of a Word document, and
 I'm pretty sure I can tell at a glance which is which.


 I don't doubt that. But show *me* the same two documents and *if* I can
 tell them apart I'm pretty sure I could tell that I don't care which is
 which. Seriously, does anyone without a typesetting background ever even
 notice such things?

Based on your above comments, I get the feeling that you don't find 
typography important at all.  But typography is at least as important as 
any other design decision, and most people do care about design.

If you create a web site for some company, you want to design it so it 
looks professional and is easy to use.  If I write a scientific paper, I 
want it to look professional and be easy to read.  And although you may 
not have a conscious opinion about typography, your eyes and brain 
certainly do.  Try reading 20 or 30 pages worth of heavy material, 
perhaps interspersed with a bunch of mathematical formulas here and 
there, as rendered by a web browser.  I guarantee you, your eyes and 
brain will be a lot more exhausted than they would have been if the 
document were professionally typeset.

I wish the designers of web sites and browsers would pay more attention 
to typesetting issues and spend less time on bloating the web with Flash 
animations and JavaScript misfeatures.


 I don't understand your big gripe with PDF readers either.  Maybe Adobe
 just makes a crappy one?
 
 They do. A *very* crappy one. That's why I use FoxIt instead.
 
 
 I use the one that comes with the GNOME desktop, Evince, and it works
 perfectly.  (It's open source, too!)  As we speak I have it open on a
 1422-page PDF document, and I can scroll without any lag, search for
 text (and math, even), and basically do anything I can in a web reader.


 Does it stick page breaks in the middle of a document? Do the page
 breaks serve *any* useful purpose outside printed form? Can web pages
 link to specific parts of the document? When the PDF is from a book that
 has smaller inner margins than outer margins, do the left/right margins
 keep changing from one page to the next? If you resize the window, can
 you still read it without introducing horizontal scrolling? If you find
 the chosen font difficult to read, or you merely prefer a different one,
 can you change it? Are comparable programs as 

Re: Code Sandwiches

2011-03-11 Thread Lars T. Kyllingstad
On Thu, 10 Mar 2011 18:43:43 -0500, Nick Sabalausky wrote:

 Andrei Alexandrescu seewebsiteforem...@erdani.org wrote in message
 news:ilbmlp$oq$1...@digitalmars.com...
 On 3/10/11 2:22 PM, Nick Sabalausky wrote:
 Lars T. Kyllingstadpublic@kyllingen.NOSPAMnet  wrote in message
 news:ilaa5k$2vls$2...@digitalmars.com...
 Embedding formulas as images isn't really an option, because you want
 them to be in the same font as (or a font that looks good with) the
 document's main font.

  That strikes me as worrying over a trivial detail. Is the formula's
  font *really* the point, or is it the formula itself?

 http://www-cs-faculty.stanford.edu/~uno/cm.html


 Those look the same to me.

A tiny difference in one symbol, for sure, but in a document with 
thousands and thousands of symbols, such small differences add up and can 
impact both reading speed and eye strain.


 In fact, the old delta was so ugly, I couldn't stand to write papers
 using that symbol; now I can't stand to read papers that still do use
 it.
 
 And I thought *I* tended to be particular about things!

Well, Knuth is the designer of TeX, so it's not too surprising that he 
has opinions about such details. :)

-Lars


Re: Code Sandwiches

2011-03-11 Thread Nick Sabalausky
Lars T. Kyllingstad public@kyllingen.NOSPAMnet wrote in message 
news:ilcmaf$19dg$1...@digitalmars.com...

 Based on your above comments, I get the feeling that you don't find
 typography important at all.  But typography is at least as important as
 any other design decision, and most people do care about design.


I wouldn't say I find it to be *zero* importance, I just find it to be of 
much less importance than UI. And the UI is something I find all PDF readers 
I've tried to be severely deficient in compared to web browsers (heavily 
animated sites notwithstanding). And I really think those UI issues have 
more to do with the nature of PDF than just the quality of the readers.



 I wish the designers of web sites and browsers would pay more attention
 to typesetting issues and spend less time on bloating the web with Flash
 animations and JavaScript misfeatures.


That I can agree with. I'd *much* rather have a slightly better font and 
typesetting than flashing, flying, spinning bullcrap. Heck, I'd rather have 
*worse* fonts and typesetting than Flash/JS misfeatures :)


 And please note that I'm not saying PDF is perfect for everything.
 Actually, I agree with you that the only thing it is *perfect* for is
 printing.

Right. I realize that.


 But it *is* preferable over HTML in some situations, and
 scientific/technical literature is one of those.  Novels are another
 example.


Well, I'd much prefer html for any of those. I *really* *really* hate trying 
to read anything in  a pdf viewer. I'm actually very surprised that anyone 
finds it practical.


 If someone comes up with an alternative format for on-screen document
 reading that does away with obsolete artifacts of printed media, such as
 page breaks, odd/even page margins, etc. and has better hyperlinking
 capabilities than PDF, but still lets you embed fonts and have full
 control over other typesetting issues, I'd be happy to use it.

 Heck, web browsers with decent typesetting engines would be a *huge* step
 in the right direction.

I'd be all for that stuff as well. Heck, I'm normally one of the first 
people to agree that HTML, CSS and web browsers have serious problems. But 
at least I can get by with them (thanks largely to NoScript) as opposed to 
pdf which I find to be nearly intolerable.

I guess there's two good things I can say about pdf's and pdf viewers 
though: There's rarely any idiotic scripted or multimedia nonsense, and it's 
not as hard to find pdf viewers that actually obey my system's visual 
settings.





Re: Code Sandwiches

2011-03-11 Thread spir

On 03/11/2011 09:25 AM, Lars T. Kyllingstad wrote:

Based on your above comments, I get the feeling that you don't find
typography important at all.  But typography is at least as important as
any other design decision, and most people do care about design.

If you create a web site for some company, you want to design it so it
looks professional and is easy to use.  If I write a scientific paper, I
want it to look professional and be easy to read.  And although you may
not have a conscious opinion about typography, your eyes and brain
certainly do.  Try reading 20 or 30 pages worth of heavy material,
perhaps interspersed with a bunch of mathematical formulas here and
there, as rendered by a web browser.  I guarantee you, your eyes and
brain will be a lot more exhausted than they would have been if the
document were professionally typeset.

I wish the designers of web sites and browsers would pay more attention
to typesetting issues and spend less time on bloating the web with Flash
animations and JavaScript misfeatures.


I do agree. I also wish -- something much easier to do -- they would care for 
our nervous systems & stop saturating them with non-information (white backgrounds).


Denis
--
_
vita es estrany
spir.wikidot.com



Re: GZip File Reading

2011-03-11 Thread Steven Schveighoffer
On Thu, 10 Mar 2011 20:29:55 -0500, Walter Bright  
newshou...@digitalmars.com wrote:



On 3/10/2011 6:24 AM, dsimcha wrote:

On 3/10/2011 4:59 AM, Walter Bright wrote:

On 3/9/2011 8:53 PM, dsimcha wrote:

I'd like to get some comments on what an appropriate API design and
implementation for writing gzipped files would be. Two key
requirements are that
it must be as easy to use as std.stdio.File and it must be easy to
extend to
support other single-file compression formats like bz2.


Use ranges.


Ok, obviously. The point was trying to figure out how to maximize the
reuse of the infrastructure from std.stdio.File.


It's not so obvious based on my reading of the other comments. For  
example, we should not be inventing a streaming interface.


C's FILE * interface is too limiting and low-performing.  I'm working to  
create a streaming interface to replace it, and then we can compare the  
differences.  I think it's pretty obvious from Tango's I/O performance  
that a D-based stream interface is a better approach.


Ranges should be built on top of that interface.
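
A rough sketch of what "ranges built on top of a stream interface" could mean (the interface and names here are invented for illustration, not the actual design being worked on):

```d
// Invented names; illustrates layering an input range over a read() primitive.
interface InputStream
{
    // Fills buf, returns the number of bytes actually read (0 at EOF).
    size_t read(ubyte[] buf);
}

struct ByChunk
{
    private InputStream stream;
    private ubyte[] buf;
    private size_t filled;

    this(InputStream s, size_t chunkSize)
    {
        stream = s;
        buf = new ubyte[chunkSize];
        popFront();  // prime the first chunk
    }

    @property bool empty() { return filled == 0; }
    @property ubyte[] front() { return buf[0 .. filled]; }
    void popFront() { filled = stream.read(buf); }
}
```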

I won't continue the debate, since it's difficult to argue from a position  
of theory.  However, I don't think it will be long before I can show some  
real numbers.  I'm not expecting Phobos to adopt, based on my experience  
with dcollections, but it should be seamlessly usable with Phobos,  
especially since range-based functions are templated.


-Steve


Re: Code Sandwiches

2011-03-11 Thread David Nadlinger

On 3/11/11 11:30 AM, spir wrote:

I do agree. I also wish -- something much easier to do -- they would
care for our nervous systems & stop saturating them with non-information
(white backgrounds).


Is there any scientific data to back this assumption?

David


Re: GZip File Reading

2011-03-11 Thread dsimcha

On 3/11/2011 8:04 AM, Steven Schveighoffer wrote:

On Thu, 10 Mar 2011 20:29:55 -0500, Walter Bright
newshou...@digitalmars.com wrote:


On 3/10/2011 6:24 AM, dsimcha wrote:

On 3/10/2011 4:59 AM, Walter Bright wrote:

On 3/9/2011 8:53 PM, dsimcha wrote:

I'd like to get some comments on what an appropriate API design and
implementation for writing gzipped files would be. Two key
requirements are that
it must be as easy to use as std.stdio.File and it must be easy to
extend to
support other single-file compression formats like bz2.


Use ranges.


Ok, obviously. The point was trying to figure out how to maximize the
reuse of
the infrastructure from std.stdio.File.


It's not so obvious based on my reading of the other comments. For
example, we should not be inventing a streaming interface.


C's FILE * interface is too limiting/low performing. I'm working to
create a streaming interface to replace it, and then we can compare the
differences. I think it's pretty obvious from Tango's I/O performance
that a D-based stream interface is a better approach.

Ranges should be built on top of that interface.

I won't continue the debate, since it's difficult to argue from a
position of theory. However, I don't think it will be long before I can
show some real numbers. I'm not expecting Phobos to adopt, based on my
experience with dcollections, but it should be seamlessly usable with
Phobos, especially since range-based functions are templated.

-Steve


Well, I certainly appreciate your efforts.  IMHO the current state of 
file I/O for anything but uncompressed plain text in D is pretty sad. 
Even uncompressed plain text is pretty bad on Windows due to various 
bugs.  IMHO one huge improvement that could be made to Phobos would be 
to create modules for reading the most common file formats (my personal 
list would be gzip, bzip2, png, bmp, jpeg and csv) with a nice 
high-level D interface.
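
The kind of interface being wished for might look something like this (entirely hypothetical; `GzipFile` does not exist in Phobos, and the method names simply mirror std.stdio.File):

```d
// Hypothetical high-level gzip module, shaped like std.stdio.File:
auto f = GzipFile("data.csv.gz", "r");  // invented type
foreach (line; f.byLine())
    writeln(line);  // decompression handled transparently
f.close();
```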


Curl support RFC

2011-03-11 Thread Jonas Drewsen

Hi,

   So I've spent some time trying to wrap libcurl for D. There are a lot 
of things that you can do with libcurl which I did not know, so I'm 
starting out small.


For now I've created all the declarations for the latest public curl C 
API. I have put that in the etc.c.curl module.


On top of that I've created a more D-like API, as seen below. This is 
located in the 'etc.curl' module. What you can see below currently works, 
but before proceeding further down this road I would like to get your 
comments on it.


//
// Simple HTTP GET with sane defaults
// provides the .content, .headers and .status
//
writeln( Http.get("http://www.google.com").content );

//
// GET with custom data receiver delegates
//
Http http = new Http("http://www.google.dk");
http.setReceiveHeaderCallback( (string key, string value) {
writeln(key ~ ": " ~ value);
} );
http.setReceiveCallback( (string data) { /* drop */ } );
http.perform;

//
// POST with some timeouts
//
http.setUrl("http://www.testing.com/test.cgi");
http.setReceiveCallback( (string data) { writeln(data); } );
http.setConnectTimeout(1000);
http.setDataTimeout(1000);
http.setDnsTimeout(1000);
http.setPostData("The quick");
http.perform;

//
// PUT with data sender delegate
//
string msg = "Hello world";
size_t len = msg.length; /* using chunked transfer if omitted */

http.setSendCallback( delegate size_t(char[] data) {
if (msg.empty) return 0;
auto l = msg.length;
data[0..l] = msg[0..$];
msg.length = 0;
return l;
},
HttpMethod.put, len );
http.perform;

//
// HTTPS
//
writeln(Http.get("https://mail.google.com").content);

//
// FTP
//
writeln(Ftp.get("ftp://ftp.digitalmars.com/sieve.ds",
"./downloaded-file"));


// ... authentication, cookies, interface select, progress callback
// etc. is also implemented this way.


/Jonas


Re: Curl support RFC

2011-03-11 Thread dsimcha
I don't know much about this kind of stuff except that I use it for very simple
use cases occasionally.  One thing I'll definitely give your design credit for,
based on your examples, is making simple things simple.  I don't know how it
scales to more complex use cases (not saying it doesn't, just that I'm not
qualified to evaluate that), but I definitely would use this.  Nice work.

BTW, what is the license status of libcurl?  According to Wikipedia it's MIT
licensed.  Where does that leave us with regard to the binary attribution issue?

== Quote from Jonas Drewsen (jdrew...@nospam.com)'s article
 Hi,
 So I've spent some time trying to wrap libcurl for D. There is a lot
 of things that you can do with libcurl which I did not know so I'm
 starting out small.
 [code examples snipped]
 /Jonas



Re: Google Summer of Code 2011 application

2011-03-11 Thread Nebster

On 10/03/2011 19:36, Trass3r wrote:

How about adding more stuff to CTFE, esp. pointers and classes?


Or get Algebraic data types to typecheck in the compiler :)


Re: Code Sandwiches

2011-03-11 Thread Andrei Alexandrescu

On 3/11/11 5:21 AM, David Nadlinger wrote:

On 3/11/11 11:30 AM, spir wrote:

I do agree. I also wish -- something much easier to do -- they would
care for our nervous systems & stop saturating them with non-information
(white backgrounds).


Is there any scientific data to back this assumption?

David


Paper has white background, which worked quite well for it.

Andrei


Re: Code Sandwiches

2011-03-11 Thread David Nadlinger

On 3/11/11 4:35 PM, Andrei Alexandrescu wrote:

On 3/11/11 5:21 AM, David Nadlinger wrote:

On 3/11/11 11:30 AM, spir wrote:

I do agree. I also wish -- something much easier to do -- they would
care for our nervous systems & stop saturating them with non-information
(white backgrounds).


Is there any scientific data to back this assumption?

David


Paper has white background, which worked quite well for it.

Andrei


Yes, but I think spir meant it the other way round…

David


Re: Curl support RFC

2011-03-11 Thread Vladimir Panteleev
On Fri, 11 Mar 2011 17:20:38 +0200, Jonas Drewsen jdrew...@nospam.com  
wrote:



writeln( Http.get("http://www.google.com").content );


Does this return a string? What if the page's encoding isn't UTF-8?

Data should probably be returned as void[], similar to std.file.read.
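
A sketch of what that would mean for callers (`Http.get` returning void[] is the suggestion under discussion, not the current code; the decoding step uses std.encoding's transcode):

```d
import std.encoding;

// Suggested shape: raw bytes out, caller decodes once the charset is known.
void[] raw = Http.get("http://example.org").content;  // hypothetical API
string text;
transcode(cast(Latin1String) raw, text);  // e.g. if headers said ISO-8859-1
```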

--
Best regards,
 Vladimir  mailto:vladi...@thecybershadow.net


Re: Code Sandwiches

2011-03-11 Thread Jonathan M Davis
On Friday 11 March 2011 07:35:09 Andrei Alexandrescu wrote:
 On 3/11/11 5:21 AM, David Nadlinger wrote:
  On 3/11/11 11:30 AM, spir wrote:
  I do agree. I also wish -- something much easier to do -- they would
  care for our nervous systems & stop saturating them with non-information
  (white backgrounds).
  
  Is there any scientific data to back this assumption?
  
  David
 
 Paper has white background, which worked quite well for it.

The problem with a white background on a computer screen is that a computer 
screen projects light whereas paper merely reflects it. So, while reading 
black on white works great with paper, it's harder on the eyes with a 
computer screen. But naturally, the folks doing the computer stuff have 
typically emulated paper, so most text read via the computer is still black 
on white. This can help cause eye strain though, which is one of the reasons 
that there are plenty of programmers out there who mess with the color 
scheme of at least their code editor to make it light text on a dark 
background.

Now, beyond some eye strain in some folks, I'm not aware of there being any 
real problems with black text on white on a computer screen - certainly 
nothing about the saturation of light with non-information harming your 
nervous system (I'm really not sure what Spir means here). But I don't think 
that there's much question that reading black on white is harder on your 
eyes on a computer screen than it is on paper. Still, I wouldn't expect 
computers to do white on black or anything similar at this point. The whole 
black on white thing is just too ingrained in people.
- Jonathan M Davis


Re: Code Sandwiches

2011-03-11 Thread spir

On 03/11/2011 04:35 PM, Andrei Alexandrescu wrote:

On 3/11/11 5:21 AM, David Nadlinger wrote:

On 3/11/11 11:30 AM, spir wrote:

I do agree. I also wish -- something much easier to do -- they would
care for our nerval systems & stop saturating them with non-information
(white backgrounds).


Is there any scientific data to back this assumption?

David


Paper has white background, which worked quite well for it.


Paper is not a light emitter.

Denis
--
_
vita es estrany
spir.wikidot.com



Re: Curl support RFC

2011-03-11 Thread Lutger Blijdestijn
dsimcha wrote:

 I don't know much about this kind of stuff except that I use it for very
 simple
 use cases occasionally.  One thing I'll definitely give your design credit
 for,
 based on your examples, is making simple things simple.  I don't know how
 it scales to more complex use cases (not saying it doesn't, just that I'm
 not
 qualified to evaluate that), but I definitely would use this.  Nice work.
 
 BTW, what is the license status of libcurl?  According to Wikipedia it's
 MIT
 licensed.  Where does that leave us with regard to the binary attribution
 issue?
 

Walter contacted the author, it's not a problem:

http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=112832


Re: Resizing an array: Dangerous? Possibly buggy?

2011-03-11 Thread Nick Treleaven
On Wed, 09 Mar 2011 18:15:42 +, %u wrote:

 I think pitfalls like this one (with the garbage collector, for example)
 should definitely be documented somewhere. I would imagine that quite a
 few people who try to set the length of an array won't realize that they
 can run out of memory this way, especially because it's nondeterministic
 in many cases.

If you're referring to reducing the length of an array, I think people 
with a C background would expect the memory not to be reallocated, 
because this avoids copying memory contents, and anyway the array may 
grow again.

I think this is documented somewhere, maybe TDPL when talking about 
slices. But making people more aware of it is probably a good thing. 
Perhaps an article on things to watch out for to prevent the GC holding 
onto too much memory would be useful.


Re: Curl support RFC

2011-03-11 Thread Jacob Carlborg

On 2011-03-11 16:20, Jonas Drewsen wrote:

Hi,

So I've spent some time trying to wrap libcurl for D. There is a lot of
things that you can do with libcurl which I did not know so I'm starting
out small.

For now I've created all the declarations for the latest public curl C
api. I have put that in the etc.c.curl module.

On top of that I've created a more D like api as seen below. This is
located in the 'etc.curl' module. What you can see below currently works
but before proceeding further down this road I would like to get your
comments on it.

//
// Simple HTTP GET with sane defaults
// provides the .content, .headers and .status
//
writeln( Http.get("http://www.google.com").content );

//
// GET with custom data receiver delegates
//
Http http = new Http("http://www.google.dk");
http.setReceiveHeaderCallback( (string key, string value) {
writeln(key ~ ": " ~ value);
} );
http.setReceiveCallback( (string data) { /* drop */ } );
http.perform;

//
// POST with some timeouts
//
http.setUrl("http://www.testing.com/test.cgi");
http.setReceiveCallback( (string data) { writeln(data); } );
http.setConnectTimeout(1000);
http.setDataTimeout(1000);
http.setDnsTimeout(1000);
http.setPostData("The quick");
http.perform;

//
// PUT with data sender delegate
//
string msg = "Hello world";
size_t len = msg.length; /* using chunked transfer if omitted */

http.setSendCallback( delegate size_t(char[] data) {
if (msg.empty) return 0;
auto l = msg.length;
data[0..l] = msg[0..$];
msg.length = 0;
return l;
},
HttpMethod.put, len );
http.perform;

//
// HTTPS
//
writeln(Http.get("https://mail.google.com").content);

//
// FTP
//
writeln(Ftp.get("ftp://ftp.digitalmars.com/sieve.ds",
"./downloaded-file"));


// ... authentication, cookies, interface select, progress callback
// etc. is also implemented this way.


/Jonas


Is there support for other HTTP methods/verbs in the D wrapper, like delete?

--
/Jacob Carlborg


Re: Pretty please: Named arguments

2011-03-11 Thread Stewart Gordon

On 09/03/2011 12:22, Gareth Charnock wrote:
snip

Which meant I had to look up what the default values of pos, size and style were even
though I was happy with those default values. The more arguments the more of a 
pain this
setup is without named arguments. Contrast to a hypothetical C++ syntax:

new wxFrame(a_parent, wxANY, "Hello world", name = "My Custom name str")

snip

This isn't hypothetical C++ syntax, it's perfectly legal C++ syntax.  It is 
equivalent to

name = "My Custom Name str", new wxFrame(a_parent, wxANY, "Hello world", 
name)

Struct and array initialisers use a colon, not an equals sign, and if we add named 
arguments they would need to do the same to avoid changing the meaning of existing code. 
You may be just illustrating the concept and not proposing this as the actual syntax to 
add to C++ or D, but being careful now will help you to get used to the feature when/if it 
arrives.


Stewart.


Re: Code Sandwiches

2011-03-11 Thread David Nadlinger

On 3/11/11 5:55 PM, Jonathan M Davis wrote:

The problem with a white background on a computer screen is that a computer
screen projects light whereas paper merely reflects it. So, while reading black
on white works great with paper, it's harder on the eyes with a computer screen.


My question from above still remains: Is there any scientific data to 
back this assumption?


David


Re: Google Summer of Code 2011 application

2011-03-11 Thread Gary Whatmore
Nebster Wrote:

 On 10/03/2011 19:36, Trass3r wrote:
  How about adding more stuff to CTFE, esp. pointers and classes?
 
 Or get Algebraic data types to typecheck in the compiler :)

Stop trolling. We should really ban these Tango fanboys here.

Nobody really wants to turn D into an ivory tower hell with all the functional 
language features. Even bearophile was trolling recently. Who remembers the 
'where' syntax? *Vomit*

Nick S. is right, we should use HTML for our documents too. Maybe some stupid 
typography expert cares, but the majority (99%) of users don't. They're used to 
browsing broken HTML pages, DDOC is good enough for them. It has also shown 
potential as a general typesetting system for technical documentation in the 
digitalmars site.


Re: Library Documentation

2011-03-11 Thread Nicholas
== Quote from novice2 (so...@noem.ail)'s article
 Nicholas Wrote:
  As a result of (my) complaining and being a huge fan of XMind, I decided to
  try to organize the library for my own references as I encounter new 
  sections
  of it.  I have a decent portion of it in place now.  I thought I'd post a 
  link
  in case it can help anyone else out as well.
 
 
  http://polish.slavic.pitt.edu/~swan/theta/Phobos.xmind
 may be you could expose/share your work via service like
 http://www.xmind.net/share/
 because not everybody have installed xmind...

Good point.  I'll do that on Monday when I'm back at the office.  I updated
std.datetime to 2.052 yesterday (didn't realize there was a new version until 
then).


std.datetime questions

2011-03-11 Thread Nicholas
I just started using the new std.datetime.  Two things I find strange that
maybe someone can explain are:


1) EST timezone displays GMT+0500 instead of -0500 when using any of the
toString functions.  Didn't check other negative timezones.


2) The UTC time for std.file's DirEntry uses milliseconds, so when converting
SysTime to UTC, I had to use toUnixTime and then I had to multiply the result
by 1000.


Also, I found it strange that this wouldn't work:

auto stime = SysTime( ... );
long timetest = stime.toUnixTime() * 1000; //doesn't work

I had to do:

timetest = stime.toUnixTime();
timetest *= 1000;


I believe there's also a problem with the time in SysTime when you specify a
timezone and set the time to 0 in the constructor, e.g. SysTime( DateTime(
2011, 3, 11 ), TimeZone( "America/New_York" ) ), in that it forces the time to
GMT instead of the specified local time.  I'll have to double check but I know
it worked when I used a non-zero time.


Re: Code Sandwiches

2011-03-11 Thread Walter Bright

On 3/9/2011 10:18 PM, Nick Sabalausky wrote:

They're text. With minor formatting. That alone makes html better. Html is
lousy for a lot of things, but formatted text is the one thing it's always
been perfectly good at. And frankly I think I'd *rather* go with pretty much
any word processing format if the only other option was pdf.


I used to use HTML for presentations. Frankly, it was terrible. The text was 
rendered badly, especially when blown up on a screen. I could never get it to 
look right. I couldn't email the presentation to anyone without sending a wad of 
other files along with it.


I switched to pdf presentations, and they worked great and looked great. The pdf 
viewers would render text that looked great blown up. The pdf was all in one 
file, meaning I could email it to someone and they could look at it directly 
from their mail program. I would bring backups on a thumb drive so in case my 
laptop was busted/stolen by the TSA, I could run my presentation on anyone's 
computer.


I do not understand why HTML engines do such an ugly job rendering text, while 
PDF's on the same machine do a great job. This is true on Windows as well as Ubuntu.


Re: Code Sandwiches

2011-03-11 Thread lurker
Walter Bright Wrote:

 On 3/9/2011 10:18 PM, Nick Sabalausky wrote:
  They're text. With minor formatting. That alone makes html better. Html is
  lousy for a lot of things, but formatted text is the one thing it's always
  been perfectly good at. And frankly I think I'd *rather* go with pretty much
  any word processing format if the only other option was pdf.
 
 I used to use HTML for presentations. Frankly, it was terrible. The text was 
 rendered badly, especially when blown up on a screen. I could never get it to 
 look right. I couldn't email the presentation to anyone without sending a wad 
 of 
 other files along with it.
 
 I switched to pdf presentations, and they worked great and looked great. The 
 pdf 
 viewers would render text that looked great blown up. The pdf was all in one 
 file, meaning I could email it to someone and they could look at it directly 
 from their mail program. I would bring backups on a thumb drive so in case my 
 laptop was busted/stolen by the TSA, I could run my presentation on anyone's 
 computer.
 
 I do not understand why HTML engines do such an ugly job rendering text, 
 while 
 PDF's on the same machine do a great job. This is true on Windows as well as 
 Ubuntu.

This can't be true! Walter defending inferior semi-standard formats. PDF 
doesn't even have as nice transition effects as powerpoint or new jQuery using 
presentations stored in the cloud services. Your thumb drives break anyway once 
a year so I'm in favor of a subscription model for the cloud. Standard HTML + 
CSS + JavaScript or Flash works for everyone. There's also Silverlight coming 
soon.


Re: Is DMD 2.052 32-bit?

2011-03-11 Thread lurker
Jonathan M Davis Wrote:

 Now, assuming that all of that is taken care, if you're using a 32-bit binary 
 on 
 a 64-bit system, you're still going to be restricted on how much that program 
 can use. It doesn't use the native word size of the machine to do what it 
 does, 
 and in many cases, running a 32-bit program on a 64-bit machine is slower 
 than 
 running a 64-bit version of that program on that machine (though that's going 
 to 
 vary from program to program, since there are  obviously quite a few factors 
 which affect efficiency).

The efficiency claim is true. 64-bit architectures have many more registers. This 
can effectively double the code's performance in most cases. Loads and stores 
can also use full 64 bits of bandwidth instead of 32. Thus again twice as much 
speed. In general if you worry about larger binary size, use UPX. Other than 
that, 64 bit code outperforms the 32 bit. We want to keep the fastest compiler 
title, right?


Re: Is DMD 2.052 32-bit?

2011-03-11 Thread lurker
Jonathan M Davis Wrote:

 On Wednesday 09 March 2011 17:56:13 Walter Bright wrote:
  On 3/9/2011 4:30 PM, Jonathan M Davis wrote:
   Much as I'd love to have a 64-bit binary of dmd, I don't think that the
   gain is even vaguely worth the risk at this point.
  
  What is the gain? The only thing I can think of is some 64 bit OS
  distributions are hostile to 32 bit binaries.
 
 Well, the fact that you then have a binary native to your system is obviously 
 a 
 gain (and is likely the one which people will cite most often), and that 
 _does_ 
 count for quite a lot. However, regardless of that, it's actually pretty easy 
 to 
 get dmd to run out of memory when compiling if you do much in the way of CTFE 
 or 
 template stuff. Granted, fixing some of the worst memory-related bugs in dmd 
 will 
 go a _long_ way towards fixing that, but even if they are, you're 
 theoretically 
 eventually supposed to be able to do pretty much anything at compile time 
 which 
 you can do at runtime in SafeD. And using enough memory that you require the 
 64-
 bit address space would be one of the things that you can do in SafeD when 
 compiling for 64-bit. As long as the compiler is only 32-bit, you can't do 
 that 
 at compile time even though you can do it at runtime (though the current 
 limitations of CTFE do reduce the problem in that you can't do a lot of stuff 
 at 
 compile time period).
 
 In any case, the fact that dmd runs out of memory fairly easily makes having 
 a 
 64-bit version which could use all of my machine's memory really attractive. 
 And 
 honestly, having an actual, 64-bit binary to run on a 64-bit system is 
 something 
 that people generally want, and it _is_ definitely a problem to get a 32-bit 
binary into the 64-bit release of a Linux distro.
 
 Truth be told, I would have thought that it would be a given that there would 
 be 
 a 64-bit version of dmd when going to support 64-bit compilation and was 
 quite 
 surprised when that was not your intention.

I think porting DMD to 64 bits would be a pragmatic solution to this. Computers 
are getting more memory faster than Walter is able to fix possible leaks in 
DMD. There's awful lots of template and CTFE code using more than 2 or 3 GB of 
RAM. I can't even imagine how one could develop some modern application if this 
was a hard limit. Luckily there are GDC and LDC, which allow enterprise users 
to take full advantage of the 24-64 GB available.

Some simple use case would be Facebook's infrastructure. Assume Andrei wanted 
to rewrite it all in D. Probably more than 100M LOC. Would need hundreds of 
gigabytes of RAM to compile. It would also take days to compile, and maybe 50% 
less on a 64 bit system.


Re: Code Sandwiches

2011-03-11 Thread Andrej Mitrovic
Transition effects? Is this the 90s?


Re: Curl support RFC

2011-03-11 Thread Jesse Phillips
I'll make some comments on the API. Do we have to choose Http/Ftp...? The URI 
already contains this; I could see being able to specifically request one or 
the other for performance, or so that plain www.google.com works.

And what about properties? They tend to be much nicer than set methods; 
examples below.

Jonas Drewsen Wrote:

 //
 // Simple HTTP GET with sane defaults
 // provides the .content, .headers and .status
 //
 writeln( Http.get("http://www.google.com").content );
 
 //
 // GET with custom data receiver delegates
 //
 Http http = new Http("http://www.google.dk");
 http.setReceiveHeaderCallback( (string key, string value) {
   writeln(key ~ ": " ~ value);
 } );
 http.setReceiveCallback( (string data) { /* drop */ } );
 http.perform;

http.onHeader = (string key, string value) {...};
http.onContent = (string data) { ... };
http.perform();


Re: Code Sandwiches

2011-03-11 Thread Nick Sabalausky
lurker a@a.a wrote in message news:ile1fe$2i8q$1...@digitalmars.com...
 Walter Bright Wrote:

 On 3/9/2011 10:18 PM, Nick Sabalausky wrote:
  They're text. With minor formatting. That alone makes html better. Html 
  is
  lousy for a lot of things, but formatted text is the one thing it's 
  always
  been perfectly good at. And frankly I think I'd *rather* go with pretty 
  much
  any word processing format if the only other option was pdf.

 I used to use HTML for presentations. Frankly, it was terrible. The text 
 was
 rendered badly, especially when blown up on a screen. I could never get 
 it to
 look right

What specifically was done badly?


 I switched to pdf presentations, and they worked great and looked great. 
 The pdf
 viewers would render text that looked great blown up. The pdf was all in 
 one
 file, meaning I could email it to someone and they could look at it 
 directly
 from their mail program. I would bring backups on a thumb drive so in 
 case my
 laptop was busted/stolen by the TSA, I could run my presentation on 
 anyone's
 computer.


Well, PDF's are designed for a page-by-page medium, and presentation slides 
do fit that bill, unlike documents.



 This can't be true! Walter defending inferior semi-standard formats. PDF 
 doesn't even have as nice transition effects as powerpoint or new jQuery 
 using presentations stored in the cloud services.

Ugh. Transition effects are cheesy. (Hollywood avoids them for a reason.)


 Your thumb drives break anyway once a year so I'm in favor of a 
 subscription model for the cloud.

I've never had a USB drive or an SD card die on me. And I've been using the 
cheap no-name ones from MicroCenter. Maybe you're just using *really* bad 
ones or being rough on them? Or spend time near strong EM fields?

I'm going to try to refrain from saying anything about the cloud. Don't 
really feel like another big debate, atm.





Re: Code Sandwiches

2011-03-11 Thread lurker
Nick Sabalausky Wrote:

 I've never had a USB drive or an SD card die on me. And I've been using the 
 cheap no-name ones from MicroCenter. Maybe you're just using *really* bad 
 ones or being rough on them? Or spend time near strong em fields?

You might forget them in the wrong pocket or your dog might bite the usb 
connector into pieces. I also don't like the unmount feature. You sometimes 
forget to sync data and boom the whole file system is corrupt.


Re: Code Sandwiches

2011-03-11 Thread Nick Sabalausky
lurker a@a.a wrote in message news:ile4mh$2qoe$1...@digitalmars.com...
 Nick Sabalausky Wrote:

 I've never had a USB drive or an SD card die on me. And I've been using 
 the
 cheap no-name ones from MicroCenter. Maybe you're just using *really* bad
 ones or being rough on them? Or spend time near strong em fields?

 You might forget them in the wrong pocket or your dog might bite the usb 
 connector into pieces.

Outside of Chinese restaurants, I hate dogs, so that second part isn't a 
problem for me ;)

 I also don't like the unmount feature. You sometimes forget to sync data 
 and boom the whole file system is corrupt.

Yea. I have always thought it seemed strange that modern removable media 
lacks the sensible lock/eject system that CD-ROM drives have had since the 
90's.




Re: Is DMD 2.052 32-bit?

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 13:11:53 lurker wrote:
 Jonathan M Davis Wrote:
  On Wednesday 09 March 2011 17:56:13 Walter Bright wrote:
   On 3/9/2011 4:30 PM, Jonathan M Davis wrote:
Much as I'd love to have a 64-bit binary of dmd, I don't think that
the gain is even vaguely worth the risk at this point.
   
   What is the gain? The only thing I can think of is some 64 bit OS
   distributions are hostile to 32 bit binaries.
  
  Well, the fact that you then have a binary native to your system is
  obviously a gain (and is likely the one which people will cite most
  often), and that _does_ count for quite a lot. However, regardless of
  that, it's actually pretty easy to get dmd to run out of memory when
  compiling if you do much in the way of CTFE or template stuff. Granted,
  fixing some of the worst memory-related bugs in dmd will go a _long_ way
  towards fixing that, but even if they are, you're theoretically
  eventually supposed to be able to do pretty much anything at compile
  time which you can do at runtime in SafeD. And using enough memory that
  you require the 64- bit address space would be one of the things that
  you can do in SafeD when compiling for 64-bit. As long as the compiler
  is only 32-bit, you can't do that at compile time even though you can do
  it at runtime (though the current limitations of CTFE do reduce the
  problem in that you can't do a lot of stuff at compile time period).
  
  In any case, the fact that dmd runs out of memory fairly easily makes
  having a 64-bit version which could use all of my machine's memory
  really attractive. And honestly, having an actual, 64-bit binary to run
  on a 64-bit system is something that people generally want, and it _is_
  definitely a problem to get a 32-bit binary into the 64-bit release of a
  Linux distro.
  
  Truth be told, I would have thought that it would be a given that there
  would be a 64-bit version of dmd when going to support 64-bit
  compilation and was quite surprised when that was not your intention.
 
 I think porting DMD to 64 bits would be a pragmatic solution to this.
 Computers are getting more memory faster than Walter is able to fix
 possible leaks in DMD. There's awful lots of template and CTFE code using
 more than 2 or 3 GB of RAM. I can't even imagine how one could develop
 some modern application if this was a hard limit. Luckily there are GDC
 and LDC, which allow enterprise users to take full advantage of the 24-64
 GB available.
 
 Some simple use case would be Facebook's infrastructure. Assume Andrei
 wanted to rewrite it all in D. Probably more than 100M LOC. Would need
 hundreds of gigabytes of RAM to compile. It would also take days to
 compile, and maybe 50% less on a 64 bit system.

It's not that bad. dmd has some serious memory leaks with regards to CTFE and 
templates in that it generally doesn't release memory for a lot of it until 
it's 
done. So, it uses _way_ more memory than it needs to. I don't know why it does 
things the way it does, but theoretically, it should be able to reduce that to 
sane levels on 32-bit. I expect that it just requires taking the time to do it.

Also, in most cases, if using too much memory due to CTFE or templates is a 
problem, then all you have to do is do incremental builds and build each module 
separately. Then you're usually fine.

So, while having a 64-bit dmd would definitely help alleviate dmd's memory 
issues, those memory issues _do_ need to be fixed regardless. And fixing them 
would almost certainly make it dmd's memory consumption acceptable with 32-bit 
in most cases.

- Jonathan M Davis


Re: Code Sandwiches

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 11:18:59 David Nadlinger wrote:
 On 3/11/11 5:55 PM, Jonathan M Davis wrote:
  The problem with a white background on a computer screen is that a
  computer screen projects light whereas paper merely reflects it. So,
  while reading black on white works great with paper, it's harder on the
  eyes with a computer screen.
 
 My question from above still remains: Is there any scientific data to
 back this assumption?

I don't know. I haven't gone looking. However, I know that there's lots of 
anecdotal evidence for it. There's probably experimental evidence as well, but 
I 
haven't gone looking for it.

Personally, I know that my eyes do much better when I have a dark background 
and 
light text on the screen. It's much harder on my eyes to have a white 
background 
with black text. None of those problems occur with paper.

- Jonathan M Davis


Re: Library Documentation

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 12:08:19 Nicholas wrote:
 == Quote from novice2 (so...@noem.ail)'s article
 
  Nicholas Wrote:
   As a result of (my) complaining and being a huge fan of XMind, I
   decided to try to organize the library for my own references as I
   encounter new sections of it.  I have a decent portion of it in place
   now.  I thought I'd post a link in case it can help anyone else out as
   well.
   
   
   http://polish.slavic.pitt.edu/~swan/theta/Phobos.xmind
  
  may be you could expose/share your work via service like
  http://www.xmind.net/share/
  because not everybody have installed xmind...
 
 Good point.  I'll do that on Monday when I'm back at the office.  I updated
 std.datetime to 2.052 yesterday (didn't realize there was a new version
 until then).

LOL. Yeah. It's practically not even related to the previous version. The few 
items that it had were moved to core.time and left in std.datetime, but it's 
very small in comparison to what was added. What's there _is_ thoroughly 
documented though. So, depending on what your problem with Phobos' 
documentation is (I don't know what your problem with it is), maybe you'll like 
that better. If your problem with the documentation has to do with the fact 
that 
the links on the top aren't organized (which they obviously need to be), then 
that problem still needs to be dealt with. There has been _some_ work in that 
direction though. Andrei has been working to improve how std.algorithm's links 
are laid out, and there has been a person or two who have been working on ways 
to improve the way all that is laid out in general, but it hasn't yet reached 
the point that Phobos' basic documentation layout has been truly fixed.

Still, it's good to have as much documentation as we do, even if it could use 
some improvements as far as layout goes.

- Jonathan M Davis


Re: std.datetime questions

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 12:29:49 Nicholas wrote:
 I just started using the new std.datetime.  Two things I find strange that
 maybe someone can explain are:
 
 
 1) EST timezone displays GMT+0500 instead of -0500 when using any of the
 toString functions.  Didn't check other negative timezones.

If it does, that's a bug. Please report it with appropriate details as to what 
OS you're using (that matters a lot for std.datetime) and which time zone class 
you were using and whatnot (probably LocalTime unless you were specifically 
trying to use another time zone).

 2) The UTC time for std.file's DirEntry uses milliseconds, so when
 converting SysTime to UTC, I had to use toUnixTime and then I had to
  multiply the result by 1000.

std.file.DirEntry should have older functions which use d_time and newer ones 
which use SysTime. No conversion should be necessary. There _are_ functions in 
std.datetime for converting between d_time and SysTime if you need to (they'll 
go away when std.date does). It 
sounds like you're using the d_time versions of DirEntry's functions. Just use 
the SysTime versions and you won't need to do any converting (the older 
functions are going to go when std.date does as well).

 Also, I found it strange that this wouldn't work:
 
 auto stime = SysTime( ... );
 long timetest = stime.toUnixTime() * 1000; //doesn't work
 
 I had to do:
 
 timetest = stime.toUnixTime();
 timetest *= 1000;

My guess would be that what's happening is that time_t can't handle being 
multiplied by 1000. long can. In the first case, you're multiplying the time_t, 
not a long. In the second, you're multiplying a long.

 I believe there's also a problem with the time in SysTime when you specify
 a timezone and set the time to 0 in the constructor, e.g. SysTime(
  DateTime( 2011, 3, 11 ), TimeZone( "America/New_York" ) ), in that it
 forces the time to GMT instead of the specified local time.  I'll have to
 double check but I know it worked when I used a non-zero time.

You're going to need to be more specific. I don't understand what you're saying 
well enough to try and reproduce it, let alone fix it if there's a problem.

Regardless, if you find any bugs in std.datetime, _please_ report them. As far 
as 
I know, it's bug free. There are no outstanding bugs on it. It _has_ been 
thoroughly tested (and I've actually been  working on improving the unit 
tests), 
but it's mostly been used in Linux in the America/Los_Angeles time zone, and I 
haven't tested every time zone under the sun. So, I may have missed something.

- Jonathan M Davis


Re: Code Sandwiches

2011-03-11 Thread Walter Bright

On 3/11/2011 1:21 PM, Nick Sabalausky wrote:

lurker a@a.a  wrote in message news:ile1fe$2i8q$1...@digitalmars.com...

Walter Bright Wrote:


On 3/9/2011 10:18 PM, Nick Sabalausky wrote:

They're text. With minor formatting. That alone makes html better. Html
is
lousy for a lot of things, but formatted text is the one thing it's
always
been perfectly good at. And frankly I think I'd *rather* go with pretty
much
any word processing format if the only other option was pdf.


I used to use HTML for presentations. Frankly, it was terrible. The text
was
rendered badly, especially when blown up on a screen. I could never get
it to
look right


What specifically was done badly?


Something as simple as getting the text to be rendered attractively. HTML text 
looks ragged blown up. HTML fonts simply do not look good. There were other 
problems such as the presentation machine differing slightly from my dev 
machine, throwing everything off.


The HTML stuff was so bad that I regularly got pretty negative feedback about 
it. Those problems all vanished when I switched to pdf. No complaints.




I'm going to try to refrain from saying anything about the cloud. Don't
really feel like another big debate, atm.


As if I'm really going to have my presentation hinge on getting a reliable 
internet connection. (I've seen some that did, and they always got derailed and 
spent their allotted time trying to get it to work.)


One huge impediment to me doing cloud computing is the random nature of 
responsiveness of the internet. And, when it is not responding, the software 
gives you no clue what the problem is:


1. your browser crashed
2. your browser is slowly executing javascript
3. your ethernet cable fell out of its socket again
4. your hub needs to be power cycled
5. another machine on your lan is hogging the bandwidth
6. your router needs to be rebooted
7. your cable modem needs to be rebooted
8. your ISP needs to be rebooted
9. a tree fell on the wires again
10. the internet is just slow today
11. any of the above happened to the web site you're trying to access


Re: GZip File Reading

2011-03-11 Thread Stewart Gordon

On 10/03/2011 04:53, dsimcha wrote:
snip

I'd like to get some comments on what an appropriate API design and 
implementation for
writing gzipped files would be. Two key requirements are that it must be as 
easy to use as
std.stdio.File and it must be easy to extend to support other single-file 
compression
formats like bz2.


You don't seem to get how std.stream works.

The API is defined in the InputStream and OutputStream interfaces.  Various classes 
implement this interface, generally through the Stream abstract class, to provide the 
functionality for a specific kind of stream.  File is just one of these classes.  Another 
is MemoryStream, to read from and write to a buffer in memory.  A stream class used to 
work with gzipped files would be just another.


Indeed, we have FilterStream, which is a base class for stream classes that wrap a stream, 
such as a file or memory stream, to modify the data in some way as it goes in and out. 
Compressing or decompressing is an example of this - so I guess that GzipStream would be a 
subclass of FilterStream.


Stewart.


Re: GZip File Reading

2011-03-11 Thread dsimcha

On 3/11/2011 7:12 PM, Stewart Gordon wrote:

On 10/03/2011 04:53, dsimcha wrote:
snip

I'd like to get some comments on what an appropriate API design and
implementation for
writing gzipped files would be. Two key requirements are that it must
be as easy to use as
std.stdio.File and it must be easy to extend to support other
single-file compression
formats like bz2.


You don't seem to get how std.stream works.

The API is defined in the InputStream and OutputStream interfaces.
Various classes implement this interface, generally through the Stream
abstract class, to provide the functionality for a specific kind of
stream. File is just one of these classes. Another is MemoryStream, to
write to and read from a buffer in memory. A stream class used to work
with gzipped files would be just another.

Indeed, we have FilterStream, which is a base class for stream classes
that wrap a stream, such as a file or memory stream, to modify the data
in some way as it goes in and out. Compressing or decompressing is an
example of this - so I guess that GzipStream would be a subclass of
FilterStream.

Stewart.


But:

1.  std.stream is scheduled for deprecation IIRC.

2.  std.stdio.File is what's now idiomatic to use.

3.  Streams in D should be based on input ranges, not whatever crufty 
old stuff std.stream is based on.


Re: GZip File Reading

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 16:27:21 dsimcha wrote:
 On 3/11/2011 7:12 PM, Stewart Gordon wrote:
  On 10/03/2011 04:53, dsimcha wrote:
  snip
  
  I'd like to get some comments on what an appropriate API design and
  implementation for
  writing gzipped files would be. Two key requirements are that it must
  be as easy to use as
  std.stdio.File and it must be easy to extend to support other
  single-file compression
  formats like bz2.
  
  You don't seem to get how std.stream works.
  
  The API is defined in the InputStream and OutputStream interfaces.
  Various classes implement this interface, generally through the Stream
  abstract class, to provide the functionality for a specific kind of
  stream. File is just one of these classes. Another is MemoryStream, to
  write to and read from a buffer in memory. A stream class used to work
  with gzipped files would be just another.
  
  Indeed, we have FilterStream, which is a base class for stream classes
  that wrap a stream, such as a file or memory stream, to modify the data
  in some way as it goes in and out. Compressing or decompressing is an
  example of this - so I guess that GzipStream would be a subclass of
  FilterStream.
  
  Stewart.
 
 But:
 
 1.  std.stream is scheduled for deprecation IIRC.

Technically speaking, I think that it's "intended to be scheduled for 
deprecation" as opposed to actually being scheduled for deprecation, but 
whatever. It's going to be phased out as soon as we have a replacement.

 2.  std.stdio.File is what's now idiomatic to use.

Well, more like it's the only solution we have which will be sticking around. 
Once we have a new std.stream, it may be the preferred solution.

 3.  Streams in D should be based on input ranges, not whatever crufty
 old stuff std.stream is based on.

Indeed. But the new API still needs to be fleshed out and implemented before we 
actually have even a _proposed_ new std.stream, let alone actually have it.

- Jonathan M Davis


Stream Proposal

2011-03-11 Thread dsimcha
The discussion we've had here lately about reading gzipped files has proved
rather enlightening.  I therefore propose the following high-level design for
streams, with the details to be filled in later:

1.  Streams should be built on top of input and output ranges.  A stream is
just an input or output range that's geared towards performing I/O rather than
computation.  The border between what belongs in std.algorithm vs. std.stream
may be a bit hazy.

2.  Streams should be template based/structs, rather than virtual function
based/classes.  This will allow reference counting for expensive resources,
and allow decorators to be used with zero overhead.  If you need runtime
polymorphism or a well-defined ABI, you can wrap your stream using
std.range.inputRangeObject and std.range.outputRangeObject.

3.  std.stdio.File should be moved to the new stream module but publicly
imported by std.stdio.  It should also grow some primitives that make it into
an input range of characters.  These can be implemented with buffering under
the hood for efficiency.

4.  std.stdio.byLine and byChunk and whatever functions support them should be
generalized to work with any input range of characters and any input range of
bytes, respectively.  The (horribly ugly) readlnImpl function that supports
byLine should be templated and decoupled from C's file I/O functions.  It
should simply read one byte at a time from any range of bytes, decode UTF as
necessary and build a line as a string/wstring/dstring.  Any buffering should
be handled by the range it's reading from.
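Point 4 can be sketched in a few lines. This is a hypothetical illustration, not a proposed Phobos API: a line-splitting range that works over any input range of characters, with no dependency on C's file I/O; the names `ByLine`/`byLine` are assumptions for the sketch.

```d
import std.algorithm : equal;
import std.array : appender;
import std.range : isInputRange, ElementType;

// A byLine over *any* input range of characters (illustrative only).
struct ByLine(R) if (isInputRange!R && is(ElementType!R : dchar))
{
    private R input;
    private string line;
    private bool done;

    this(R input) { this.input = input; advance(); }

    @property bool empty() { return done; }
    @property string front() { return line; }
    void popFront() { advance(); }

    private void advance()
    {
        if (input.empty) { done = true; return; }
        auto buf = appender!string();
        while (!input.empty && input.front != '\n')
        {
            buf.put(input.front);
            input.popFront();
        }
        if (!input.empty) input.popFront();  // consume the '\n'
        line = buf.data;
    }
}

auto byLine(R)(R input) { return ByLine!R(input); }

unittest
{
    assert(equal(byLine("one\ntwo\n"), ["one", "two"]));
}
```

Any buffering, as proposed above, would live in the underlying range; this splitter just consumes one element at a time.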


Re: Stream Proposal

2011-03-11 Thread Andrei Alexandrescu

On 3/11/11 6:29 PM, dsimcha wrote:

The discussion we've had here lately about reading gzipped files has proved
rather enlightening.  I therefore propose the following high-level design for
streams, with the details to be filled in later:

1.  Streams should be built on top of input and output ranges.  A stream is
just an input or output range that's geared towards performing I/O rather than
computation.  The border between what belongs in std.algorithm vs. std.stream
may be a bit hazy.


1a. Formatting should be separated from transport (probably this is the 
main issue with std.stream).


A simple input buffered stream of T would be a range of T[] that has two 
extra primitives:


T[] lookAhead(size_t n);
void leaveBehind(size_t n);

as discussed earlier in a related thread. lookAhead makes sure the 
stream has n Ts in the buffer (or less at end of stream), and 
leaveBehind forgets n Ts at the beginning of the buffer.
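To pin down the semantics of the two primitives, here is a toy in-memory version. Everything here is an illustrative assumption: a real implementation would refill the buffer from a file or socket rather than wrap a fixed array.

```d
import std.algorithm : min;

// Toy buffered input implementing lookAhead/leaveBehind over an array.
struct MemoryInput(T)
{
    private T[] data;  // stands in for the internal buffer

    // Ensure n elements are visible; may return fewer at end of stream.
    T[] lookAhead(size_t n)
    {
        return data[0 .. min(n, data.length)];
    }

    // Forget n elements at the front of the buffer.
    void leaveBehind(size_t n)
    {
        data = data[n .. $];
    }
}

unittest
{
    auto s = MemoryInput!int([1, 2, 3, 4]);
    assert(s.lookAhead(2) == [1, 2]);
    s.leaveBehind(2);
    assert(s.lookAhead(10) == [3, 4]);  // fewer than requested: end of stream
}
```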


I'm not sure there's a need for formalizing a buffered output interface 
(we could simply make buffering transparent, in which case there's only 
need for primitives that get and set the size of the buffer).


In case we do want to formalize an output buffer, it would need 
primitives such as:


T[] getBuffer(size_t n);
void commitBuffer(size_t n);


Andrei


Re: Stream Proposal

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 18:29:42 dsimcha wrote:
 3.  std.stdio.File should be moved to the new stream module but publicly
 imported by std.stdio.  It should also grow some primitives that make it
 into an input range of characters.  These can be implemented with
 buffering under the hood for efficiency.

??? Why? File is not a stream. It's a separate thing. I see no reason to 
combine 
it with streams. I don't think that the separation between std.stdio and 
std.stream as it stands is a problem. The problem is the design of std.stream.

- Jonathan M Davis


Re: std.datetime questions

2011-03-11 Thread Nicholas
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Friday, March 11, 2011 12:29:49 Nicholas wrote:
  I just started using the new std.datetime.  Two things I find strange that
  maybe someone can explain are:
 
 
  1) EST timezone displays GMT+0500 instead of -0500 when using any of the
  toString functions.  Didn't check other negative timezones.
 If it does, that's a bug. Please report it with appropriate details as to what
 OS you're using (that matters a lot for std.datetime) and which time zone 
 class
 you were using and whatnot (probably LocalTime unless you were specifically
 trying to use another time zone).
  2) The UTC time for std.file's DirEntry uses milliseconds, so when
  converting SysTime to UTC, I had to use toUnixTime and then I had to
  multiply the result by 1000.
 std.file.DirEntry should have older functions which use d_time and newer ones
 which use SysTime. No conversion should be necessary. There _are_ functions in
 std.datetime for converting between d_time and SysTime if you need to (they'll
 go away when std.date does) if you need to convert between those two though. 
 It
 sounds like you're using the d_time versions of DirEntry's functions. Just use
 the SysTime versions and you won't need to do any converting (the older
 functions are going to go when std.date does as well).
  Also, I found it strange that this wouldn't work:
 
  auto stime = SysTime( ... );
  long timetest = stime.toUnixTime() * 1000; //doesn't work
 
  I had to do:
 
  timetest = stime.toUnixTime();
  timetest *= 1000;
 My guess is that what's happening is that time_t can't handle being
 multiplied by 1000; long can. In the first case, you're multiplying the 
 time_t, not a long. In the second, you're multiplying a long.
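The width issue above can be shown in two lines (a sketch; the helper name is made up): promote to long before multiplying, rather than multiplying in time_t and assigning the wrapped result.

```d
import core.stdc.time : time_t;

// On 32-bit platforms time_t is a 32-bit type, so t * 1000 wraps before
// it ever reaches the long. Promoting first avoids the overflow.
long unixTimeToMillis(time_t t)
{
    return cast(long) t * 1000;  // promote, then multiply
}

unittest
{
    // 3,000,000,000 does not fit in 32 bits, but the long math is exact.
    assert(unixTimeToMillis(3_000_000) == 3_000_000_000L);
}
```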
  I believe there's also a problem with the time in SysTime when you specify
  a timezone and set the time to 0 in the constructor, e.g. SysTime(
  DateTime( 2011, 3, 11 ), TimeZone( "America/New_York" ) ), in that it
  forces the time to GMT instead of the specified local time.  I'll have to
  double check but I know it worked when I used a non-zero time.
 You're going to need to be more specific. I don't understand what you're 
 saying
 well enough to try and reproduce it, let alone fix it if there's a problem.
 Regardless, if you find any bugs in std.datetime, _please_ report them. As 
 far as
 I know, it's bug free. There are no outstanding bugs on it. It _has_ been
 thoroughly tested (and I've actually been  working on improving the unit 
 tests),
 but it's mostly been used in Linux in the America/Los_Angeles time zone, and I
 haven't tested every time zone under the sun. So, I may have missed something.
 - Jonathan M Davis

Thanks for the information.  I'll play with it when I'm at work again and then
report my findings.


In the interim, my timezone is EST.  I used TimeZone America/New_York on 32-bit
WinXP SP 3.


Overall, the library seems like it offers a lot.  I found a glaring bug in
std.date as well with EST, which was more harmful than the ones I mentioned now.


Re: Library Documentation

2011-03-11 Thread Nicholas
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Friday, March 11, 2011 12:08:19 Nicholas wrote:
  == Quote from novice2 (so...@noem.ail)'s article
 
   Nicholas Wrote:
As a result of (my) complaining and being a huge fan of XMind, I
decided to try to organize the library for my own references as I
encounter new sections of it.  I have a decent portion of it in place
now.  I thought I'd post a link in case it can help anyone else out as
well.
   
   
http://polish.slavic.pitt.edu/~swan/theta/Phobos.xmind
  
   maybe you could expose/share your work via a service like
   http://www.xmind.net/share/
   because not everybody has xmind installed...
 
  Good point.  I'll do that on Monday when I'm back at the office.  I updated
  std.datetime to 2.052 yesterday (didn't realize there was a new version
  until then).
 LOL. Yeah. It's practically not even related to the previous version. The few
 items that it had were moved to core.time and left in std.datetime, but it's
 very small in comparison to what was added. What's there _is_ thoroughly
 documented though. So, depending on what your problem with Phobos'
 documentation is (I don't know what your problem with it is), maybe you'll 
 like
 that better. If your problem with the documentation has to do with the fact 
 that
 the links on the top aren't organized (which they obviously need to be), then
 that problem still needs to be dealt with. There has been _some_ work in that
 direction though. Andrei has been working to improve how std.algorithm's links
 are laid out, and there has been a person or two who have been working on ways
 to improve the way all that is laid out in general, but it hasn't yet reached
 the point that Phobos' basic documentation layout has been truly fixed.
 Still, it's good to have as much documentation as we do, even if it could use
 some improvements as far as layout goes.
 - Jonathan M Davis

Yeah, it was amazing when I opened up the new datetime source file.  The 
previous
one just had Ticks and StopWatch along with 3 public functions outside of those.
It took me a while to go through the new one.


My problem with the documentation isn't that it lacks information.  Most of the
developers have done an excellent job in that regard.  The problem is the
layout.  It takes as much time to find the information on the webpage as it
does to just search through the source code.  And both can be fairly crazy to
look through.  I believe that if you have to hit ctrl-f to find what you need
then there's a fundamental flaw with the layout.


Not everyone thinks alike, though.  I just wanted to offer an alternative.  
Since
no IDEs offer intellisense (VisualD's is rudimentary but improving) there's a
severe need for quick referencing.  I was hoping to achieve that with XMind.


Re: std.datetime questions

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 19:18:21 Nicholas wrote:
 Thanks for the information.  I'll play with it when I'm at work again and
 then report my findings.
 
 
 In the interim, my timezone is EST.  I used TimeZone America/New_York on
 32-bit WinXP SP 3.

I assume that you were using WindowsTimeZone then?

 Overall, the library seems like it offers a lot.  I found a glaring bug in
 std.date as well with EST, which was more harmful than the ones I mentioned
 now.

Yeah. std.date is pretty broken. So, there hasn't really been even a decent 
solution for date/time stuff in Phobos for a while, but std.datetime should fix 
that. And it's definitely designed in such a way that it's at least _supposed_ 
to 
handle time zones really well and fairly painlessly. Only time and usage will 
tell how good the design really is though. I think that it's quite solid 
overall, but I'm not about to claim that it's perfect. And while bugs in it 
should be rare given how thoroughly tested it is, I'm not about to claim that 
there definitely aren't any. Definitely report any that you find.

If I have time, I may mess around with America/New_York a bit this weekend and 
see if anything obvious pops up. Glancing at WindowsTimeZone, I see that it's 
missing some unit tests, so I definitely need to add some, regardless of 
whether 
there's currently anything wrong with it.

- Jonathan M Davis


Re: Library Documentation

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 19:31:51 Nicholas wrote:
 == Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 
  On Friday, March 11, 2011 12:08:19 Nicholas wrote:
   == Quote from novice2 (so...@noem.ail)'s article
   
Nicholas Wrote:
 As a result of (my) complaining and being a huge fan of XMind, I
 decided to try to organize the library for my own references as I
 encounter new sections of it.  I have a decent portion of it in
 place now.  I thought I'd post a link in case it can help anyone
 else out as well.
 
 
 http://polish.slavic.pitt.edu/~swan/theta/Phobos.xmind

maybe you could expose/share your work via a service like
http://www.xmind.net/share/
because not everybody has xmind installed...
   
   Good point.  I'll do that on Monday when I'm back at the office.  I
   updated std.datetime to 2.052 yesterday (didn't realize there was a
   new version until then).
  
  LOL. Yeah. It's practically not even related to the previous version. The
  few items that it had were moved to core.time and left in std.datetime,
  but it's very small in comparison to what was added. What's there _is_
  thoroughly documented though. So, depending on what your problem with
  Phobos' documentation is (I don't know what your problem with it is),
  maybe you'll like that better. If your problem with the documentation
  has to do with the fact that the links on the top aren't organized
  (which they obviously need to be), then that problem still needs to be
  dealt with. There has been _some_ work in that direction though. Andrei
  has been working to improve how std.algorithm's links are laid out, and
  there has been a person or two who have been working on ways to improve
  the way all that is laid out in general, but it hasn't yet reached the
  point that Phobos' basic documentation layout has been truly fixed.
  Still, it's good to have as much documentation as we do, even if it
  could use some improvements as far as layout goes.
  - Jonathan M Davis
 
 Yeah, it was amazing when I opened up the new datetime source file.  The
 previous one just had Ticks and StopWatch along with 3 public functions
 outside of those. It took me a while to go through the new one.
 
 
 My problem with the documentation isn't that it lacks information.  Most of
 the developers have done an excellent job in that regard.  The problem is
 the layout. It takes as much time to find the information on the webpage
 as it does to just search through the source code.  And both can be fairly
 crazy to look through.  I believe that if you have to hit ctrl-f to find
 what you need then there's a fundamental flaw with the layout.

Well, I don't think that anyone disputes that the documentation layout needs 
improvement, but work on that has been a low enough priority that progress has 
been slow.

 Not everyone thinks alike, though.  I just wanted to offer an alternative. 
 Since no IDEs offer intellisense (VisualD's is rudimentary but improving)
 there's a severe need for quick referencing.  I was hoping to achieve that
 with XMind.

Well, I'd never heard of XMind before you mentioned it, so I have no idea what 
it offers, but if it can give a better version of the documentation, then it 
may 
be worth looking at.

- Jonathan M Davis


Re: Stream Proposal

2011-03-11 Thread dsimcha

On 3/11/2011 10:14 PM, Jonathan M Davis wrote:

On Friday, March 11, 2011 18:29:42 dsimcha wrote:

3.  std.stdio.File should be moved to the new stream module but publicly
imported by std.stdio.  It should also grow some primitives that make it
into an input range of characters.  These can be implemented with
buffering under the hood for efficiency.


??? Why? File is not a stream. It's a separate thing. I see no reason to combine
it with streams. I don't think that the separation between std.stdio and
std.stream as it stands is a problem. The problem is the design of std.stream.

- Jonathan M Davis


Isn't file I/O a pretty important use case for streams, i.e. the main one?


Re: Stream Proposal

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 19:40:47 dsimcha wrote:
 On 3/11/2011 10:14 PM, Jonathan M Davis wrote:
  On Friday, March 11, 2011 18:29:42 dsimcha wrote:
  3.  std.stdio.File should be moved to the new stream module but publicly
  imported by std.stdio.  It should also grow some primitives that make it
  into an input range of characters.  These can be implemented with
  buffering under the hood for efficiency.
  
  ??? Why? File is not a stream. It's a separate thing. I see no reason to
  combine it with streams. I don't think that the separation between
  std.stdio and std.stream as it stands is a problem. The problem is the
  design of std.stream.
  
  - Jonathan M Davis
 
 Isn't file I/O a pretty important use case for streams, i.e. the main one?

Yes. You should be able to read a file as a stream. But that doesn't mean that 
std.stdio.File needs to be in std.stream or that it needs to use streams. The 
way that File is currently used to read files shouldn't change. The streaming 
stuff should be in addition to that.

- Jonathan M Davis


Re: Stream Proposal

2011-03-11 Thread Daniel Gibson

On 12.03.2011 04:40, dsimcha wrote:

On 3/11/2011 10:14 PM, Jonathan M Davis wrote:

On Friday, March 11, 2011 18:29:42 dsimcha wrote:

3. std.stdio.File should be moved to the new stream module but publicly
imported by std.stdio. It should also grow some primitives that make it
into an input range of characters. These can be implemented with
buffering under the hood for efficiency.


??? Why? File is not a stream. It's a separate thing. I see no reason
to combine
it with streams. I don't think that the separation between std.stdio and
std.stream as it stands is a problem. The problem is the design of
std.stream.

- Jonathan M Davis


Isn't file I/O a pretty important use case for streams, i.e. the main one?


Network I/O is also very important.

BTW, Andrei proposed a stream API a while ago[1] which was also 
discussed back then - can't we use that as a basis for further 
discussions about streams?


By the way, I'd prefer class-based streams (and even Andrei proposed 
that in the aforementioned discussion).


Cheers,
- Daniel

[1] 
http://lists.puremagic.com/pipermail/digitalmars-d/2010-December/thread.html#91169 



Re: Curl support RFC

2011-03-11 Thread Ary Manzana

On 3/11/11 12:20 PM, Jonas Drewsen wrote:

Hi,

So I've spent some time trying to wrap libcurl for D. There is a lot of
things that you can do with libcurl which I did not know so I'm starting
out small.

For now I've created all the declarations for the latest public curl C
api. I have put that in the etc.c.curl module.

On top of that I've created a more D like api as seen below. This is
located in the 'etc.curl' module. What you can see below currently works
but before proceeding further down this road I would like to get your
comments on it.


I *love* it.

All APIs should be like yours. One-liners for what you want right now. 
If it's a little more complex, some more lines. This is perfect.


Congratulations!


Re: Is DMD 2.052 32-bit?

2011-03-11 Thread Nick Sabalausky
"lurker" <a@a.a> wrote in message news:ile2kg$2klo$1...@digitalmars.com...
 Jonathan M Davis Wrote:

 Now, assuming that all of that is taken care, if you're using a 32-bit 
 binary on
 a 64-bit system, you're still going to be restricted on how much that 
 program
 can use. It doesn't use the native word size of the machine to do what it 
 does,
 and in many cases, running a 32-bit program on a 64-bit machine is slower 
 than
 running a 64-bit version of that program on that machine (though that's 
 going to
 vary from program to program, since there are  obviously quite a few 
 factors
 which affect efficiency).

 The efficiency claim is true. 64-bit architectures have many more registers. 
 This can effectively double the code's performance in most cases. Loads 
 and stores can also use full 64 bits of bandwidth instead of 32. Thus 
 again twice as much speed. In general if you worry about larger binary 
 size, use UPX. Other than that, 64 bit code outperforms the 32 bit. We 
 want to keep the fastest compiler title, right?

OTOH, 32-bit code on a 64-bit machine already vastly outperforms 32-bit code 
on a 32-bit machine anyway.




Re: (New idea) Resizing an array: Dangerous? Possibly buggy?

2011-03-11 Thread %u
 I think pitfalls like this one (with the garbage collector, for example) 
 should definitely be documented somewhere. I would imagine that quite a few 
 people who try to set the length of an array
won't realize that they can run out of memory this way, especially because it's 
nondeterministic in many cases.

 If you're referring to reducing the length of an array, I think people with a 
 C background would expect the memory not to be reallocated, because this 
 avoids copying memory contents, and anyway
the array may grow again.
 I think this is documented somewhere, maybe TDPL when talking about slices. 
 But making people more aware of it is probably a good thing. Perhaps an 
 article on things to watch out for to prevent
the GC holding onto too much memory would be useful.


I'm having an idea: Why not automatically reallocate/shrink an array when it's 
resized to below 25% of its capacity, and automatically double the capacity 
when it overflows? That way, we're never
on a boundary case (as would happen if we simply shrunk the array when it was 
below 50% capacity rather than 25%), we could free memory, and the operations 
would be really O(1) (since the copies
are amortized over the items)... does that sound like a good idea?
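The policy described above can be sketched as a user-level wrapper. This is hypothetical (built-in D arrays do NOT behave this way, and the name `ShrinkingArray` is made up): double the capacity on overflow, shrink once length falls below a quarter of capacity, so the post-shrink array sits at 50% full and never lands right on a grow/shrink boundary.

```d
// Amortized-O(1) append/remove with shrinking, as a wrapper (sketch).
struct ShrinkingArray(T)
{
    private T[] buf;
    private size_t len;

    void opOpAssign(string op : "~")(T value)
    {
        if (len == buf.length)
            buf.length = buf.length ? buf.length * 2 : 4;  // amortized grow
        buf[len++] = value;
    }

    void removeBack()
    {
        --len;
        if (buf.length > 4 && len < buf.length / 4)
        {
            buf = buf[0 .. len].dup;  // release the slack to the GC
            buf.length = len * 2;     // keep 50% headroom after shrinking
        }
    }

    @property size_t length() const { return len; }
    @property size_t capacity() const { return buf.length; }
}

unittest
{
    ShrinkingArray!int a;
    foreach (i; 0 .. 10) a ~= i;         // grows 4 -> 8 -> 16
    assert(a.length == 10 && a.capacity == 16);
    foreach (i; 0 .. 7) a.removeBack();  // length drops below 16/4 = 4 ...
    assert(a.capacity < 16);             // ... and the capacity shrank
}
```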


Re: Library Documentation

2011-03-11 Thread Tom

On 12/03/2011 00:31, Nicholas wrote:

My problem with the documentation isn't that it lacks information.  Most of the
developers have done an excellent job in that regard.  The problem is the
layout.  It takes as much time to find the information on the webpage as it
does to just search through the source code.  And both can be fairly crazy to
look through.  I believe that if you have to hit ctrl-f to find what you need
then there's a fundamental flaw with the layout.


Wow, I had the same complaint a while ago. I totally agree with you in 
this regard. Something is wrong with the documentation if it's hard to 
find things in it.


Tom;


Re: (New idea) Resizing an array: Dangerous? Possibly buggy?

2011-03-11 Thread Jonathan M Davis
On Friday 11 March 2011 21:40:36 %u wrote:
  I think pitfalls like this one (with the garbage collector, for example)
  should definitely be documented somewhere. I would imagine that quite a
  few people who try to set the length of an array
 
 won't realize that they can run out of memory this way, especially because
 it's nondeterministic in many cases.
 
  If you're referring to reducing the length of an array, I think people
  with a C background would expect the memory not to be reallocated,
  because this avoids copying memory contents, and anyway
 
 the array may grow again.
 
  I think this is documented somewhere, maybe TDPL when talking about
  slices. But making people more aware of it is probably a good thing.
  Perhaps an article on things to watch out for to prevent
 
 the GC holding onto too much memory would be useful.
 
 
 I'm having an idea: Why not automatically reallocate/shrink an array when
 it's resized to below 25% of its capacity, and automatically double the
 capacity when it overflows? That way, we're never on a boundary case (as
 would happen if we simply shrunk the array when it was below 50% capacity
 rather than 25%), we could free memory, and the operations would be really
 O(1) (since the copies are amortized over the items)... does that sound
 like a good idea?

No. That means that you have to worry about reallocation at fairly 
unpredictable points. If you really want that behavior, it's easy enough to 
create a wrapper which does that. However, you can't get the current behavior 
by 
wrapping an array that behaves as you suggest. Also, what you suggest adds 
additional overhead to every operation which could potentially resize an array. 
On the whole, the way arrays work right now works quite well.

- Jonathan M Davis


Re: Is DMD 2.052 32-bit?

2011-03-11 Thread Russel Winder
On Fri, 2011-03-11 at 16:02 -0500, lurker wrote:
[ . . . ]
 The efficiency claim is true. 64-bit architectures have many more
 registers. This can effectively double the code's performance in most
 cases. Loads and stores can also use full 64 bits of bandwidth instead
 of 32. Thus again twice as much speed. In general if you worry about
 larger binary size, use UPX. Other than that, 64 bit code outperforms
 the 32 bit. We want to keep the fastest compiler title, right?

There are a large number of assumptions in the claim of twice as much
speed.  All the AMD64 registers and ALUs are 64-bit wide but are all
the caches?  Are all the buses to memory?  Are all the memory
structures?  Is the clock speed the same?  Are all the components
clocked in the same way?

Has anyone got actual experimental data?  Is there a benchmark suite?

My preference for a 64-bit DMD relates to simplicity of use on Debian and
Ubuntu where the packaging is far simpler if 64-bit executables are used
throughout -- if those executables are dynamically linked.  If they are
statically linked there are not the same issues, but then the physical size
of executable becomes an issue.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: enum with classes/structs

2011-03-11 Thread useo
== Quote from Jonathan M Davis (jmdavisp...@gmx.com)'s article
 On Thursday, March 10, 2011 11:28:04 bearophile wrote:
  useo:
   is it possible to declare an enum where all entries are instances of a
   class (or struct), like the following:
  I don't think so. Enums are compile-time constants.
  This code doesn't compile:
 
  class A {
 this(uint i) {}
  }
  enum myEnum : A {
 entry1 = new A(0),
 entry2 = new A(1)
  }
  void main() {}
 
  It's important to understand that in D OOP and procedural/C-style features
  are often separated. typedef didn't work well with OOP. Don't mix things
  that are not meant to be mixed.
 There's absolutely nothing wrong with mixing enum with OOP. An enum is simply
 an enumeration of values. There's absolutely nothing wrong with those values
 being of struct or class types. The only restrictions there are problems with
 the implementation. TDPL even gives examples of enum structs. They currently
 work when you only have one value in the enum, but fail when you have multiple
 ( http://d.puremagic.com/issues/show_bug.cgi?id=4423 ). If/When classes work
 with CTFE, then you should be able to have enums of class objects.
 There's nothing about enums which is C- or procedural-specific. Java has
 object-oriented enums which are quite powerful. And aside from the current
 implementation issues, D's enums are even more powerful because they allow
 _any_ type and so can be either primitive types or user-defined types like
 you'd have in Java.
 enums don't care one way or another about OOP. They're just a set of values
 that have to be ordered and be known at compile time.
 - Jonathan M Davis

Okay, thanks - I hope the bug will be solved. I'm absolutely with you here;
the enumerations of Java are very useful and powerful.
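For the record, here is a minimal illustration of what the thread describes (the names `Color`, `Red`, and `Flag` are made up for the example). The manifest constant always works; the struct-based enum is the form whose multi-member case issue 4423 covers, and the single-member case works per the thread.

```d
struct Color
{
    ubyte r, g, b;
}

// A manifest constant of struct type: fine today.
enum Red = Color(255, 0, 0);

// An enum whose base type is a struct; the single-member form works.
enum Flag : Color
{
    red = Color(255, 0, 0),
}

static assert(Red.r == 255);
```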


Why is the struct instance being copied here?

2011-03-11 Thread d coder
Greetings

Please look at the code down here. When compiled and run, I get the message
"Call to postblit" printed. I think it is because of the foreach block,
because the loop variable is not declared as ref there. Is there a way to
make it a ref?

Regards
- Puneet

import std.stdio;

struct Foo {
  this(this) {
writeln("Call to postblit");
  }
}

class Bar {
  Foo foo;
  this() {
foreach(i, f; this.tupleof) {
  // do nothing
}
  }
}

void main()
{
  Bar bar = new Bar();
}


Re: Why is the struct instance being copied here?

2011-03-11 Thread Steven Schveighoffer

On Fri, 11 Mar 2011 04:50:38 -0500, d coder dlang.co...@gmail.com wrote:


Greetings

Please look at the code down here. When compiled and run, I get the  
message

"Call to postblit" printed. I think it is because of the foreach block,
because the variable i is not declared as ref there. Is there a way to
make it a ref?

Regards
- Puneet

import std.stdio;

struct Foo {
  this(this) {
writeln("Call to postblit");
  }
}

class Bar {
  Foo foo;
  this() {
foreach(i, f; this.tupleof) {
  // do nothing
}
  }
}

void main()
{
  Bar bar = new Bar();
}


This typically works in a foreach loop:

foreach(i, ref f; x)

but a foreach for a tuple is a special beast, and using ref in your code  
yields this error:


foreachtuple.d(12): Error: no storage class for value f

But I agree it should be doable.  This should qualify for an enhancement  
request: http://d.puremagic.com/issues/enter_bug.cgi?product=D
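In the meantime, a workaround sketch (my assumption, not an official idiom): iterate the *types* of the fields rather than their values. Nothing is copied, so the postblit never fires, and `this.tupleof[i]` still gives an lvalue for each field if you need one.

```d
struct Foo
{
    int x;
    static int postblits;        // count copies, for demonstration
    this(this) { ++postblits; }
}

class Bar
{
    Foo foo;
    this()
    {
        // foreach over a type tuple: i is a compile-time index, T a type.
        foreach (i, T; typeof(this.tupleof))
        {
            // this.tupleof[i] is the field itself (type T), not a copy,
            // so inspecting it here triggers no postblit.
            auto name = T.stringof;
        }
    }
}

unittest
{
    auto bar = new Bar;
    assert(Foo.postblits == 0);  // no copy was ever made
}
```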


-Steve


Re: Mixins: to!string cannot be interpreted at compile time

2011-03-11 Thread Caligo
On Tue, Mar 1, 2011 at 1:15 PM, Peter Lundgren lundg...@rose-hulman.edu wrote:

 That worked, thanks. This is interesting because the example used in The D
 Programming Language on page 83 gets away with it just fine. I had no
 problem
 running this:

 result ~= to!string(bitsSet(b)) ~ ", ";



How did you get that example on page 83 to compile?  I'm getting "undefined
identifier bitsSet", and it's not in std.intrinsic or std.bitmanip.


Read file/stream

2011-03-11 Thread nrgyzer
I'm trying to read a png file and I'm having some trouble with the
chunk-size. Each chunk of a png file begins with a 4 byte (unsigned)
integer. When I read this 4 byte integer (uint) I get an absolutely
incorrect length. My code currently looks like:

void main(string[] args) {

   File f = new File("test.png", FileMode.In);

   // png signature
   ubyte[8] buffer;
   f.read(buffer);

   // first chunk (IHDR)
   uint size;
   f.read(size);

   f.close();
}

When I run my code, I get 218103808 instead of 13 (decimal) or 0x0D
(hex). When I try to read the 4 byte integer as a ubyte[4] array, I
get [0, 0, 0, 13], where 13 seems to be correct because my
hex-editor shows [0x00 0x00 0x00 0x0D] for these 4 bytes.

I hope someone knows where my mistake is. Thanks!


Re: Read file/stream

2011-03-11 Thread Simen kjaeraas

nrgyzer nrgy...@gmail.com wrote:


I'm trying to read a png file and I'm having some trouble with the
chunk-size. Each chunk of a png file begins with a 4 byte (unsigned)
integer. When I read this 4 byte integer (uint) I get an absolutely
incorrect length. My code currently looks like:

void main(string[] args) {

   File f = new File("test.png", FileMode.In);

   // png signature
   ubyte[8] buffer;
   f.read(buffer);

   // first chunk (IHDR)
   uint size;
   f.read(size);

   f.close();
}

When I run my code, I get 218103808 instead of 13 (decimal) or 0x0D
(hex). When I try to read the 4 byte integer as a ubyte[4] array, I
get [0, 0, 0, 13], where 13 seems to be correct because my
hex-editor shows [0x00 0x00 0x00 0x0D] for these 4 bytes.

I hope someone knows where my mistake is. Thanks!


Looks to be an endian issue. 0x0000_000D is 218,103,808 in decimal
in little-endian (Intel), and 13 in big-endian (Motorola).

--
Simen
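For reference, a sketch of the fix using bigEndianToNative from present-day std.bitmanip (the std.stream File used above predates it, so this is an assumption about the API you'd use today, not the one in the original code): read the four length bytes into a ubyte[4] and convert explicitly.

```d
import std.bitmanip : bigEndianToNative;
import std.stdio;

void main() {
    // The four length bytes as they appear on disk in a PNG chunk header.
    ubyte[4] raw = [0x00, 0x00, 0x00, 0x0D];

    // PNG stores integers in network (big-endian) order; convert explicitly
    // instead of reinterpreting the raw bytes in host order.
    uint size = bigEndianToNative!uint(raw);
    writeln(size);   // 13
}
```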


Re: Mixins: to!string cannot be interpreted at compile time

2011-03-11 Thread Caligo
On Fri, Mar 11, 2011 at 11:48 AM, Caligo iteronve...@gmail.com wrote:



 On Tue, Mar 1, 2011 at 1:15 PM, Peter Lundgren 
 lundg...@rose-hulman.eduwrote:

 That worked, thanks. This is interesting because the example used in The
 D
 Programming Language on page 83 gets away with it just fine. I had no
 problem
 running this:

 result ~= to!string(bitsSet(b)) ~ ", ";



 How did you get that example on page 83 to compile?  I'm getting undefined
 identifier bitsSet, and it's not in std.intrinsic or std.bitmanip.


nvm, it's right there on that very page.


Re: Read file/stream

2011-03-11 Thread Stewart Gordon

On 11/03/2011 18:46, Steven Schveighoffer wrote:
snip

I am not sure what facilities Phobos provides for reading/writing integers in 
network
order (i.e. Big Endian), but I'm sure there's something.


http://www.digitalmars.com/d/1.0/phobos/std_stream.html
EndianStream

I haven't experimented with it.  And I don't expect it to handle structs well. 
Alternatively, you could use some simple code like



version (BigEndian) {
uint bigEndian(uint value) {
return value;
}
}

version (LittleEndian) {
uint bigEndian(uint value) {
return value >> 24
  | (value & 0x00FF0000) >> 8
  | (value & 0x0000FF00) << 8
  | value << 24;
}
}


though you would have to remember to call it for each file I/O operation that relies on 
it.  If you use a struct, you could put a method in it to call bigEndian on the members of 
relevance.


Stewart.


Re: Read file/stream

2011-03-11 Thread Ali Çehreli

On 03/11/2011 11:18 AM, Stewart Gordon wrote:

On 11/03/2011 18:46, Steven Schveighoffer wrote:
snip

I am not sure what facilities Phobos provides for reading/writing
integers in network
order (i.e. Big Endian), but I'm sure there's something.


http://www.digitalmars.com/d/1.0/phobos/std_stream.html
EndianStream

I haven't experimented with it. And I don't expect it to handle structs
well. Alternatively, you could use some simple code like


version (BigEndian) {
uint bigEndian(uint value) {
return value;
}
}

version (LittleEndian) {
uint bigEndian(uint value) {
return value >> 24
| (value & 0x00FF0000) >> 8
| (value & 0x0000FF00) << 8
| value << 24;
}
}


There is also std.intrinsic.bswap

Ali




though you would have to remember to call it for each file I/O operation
that relies on it. If you use a struct, you could put a method in it to
call bigEndian on the members of relevance.

Stewart.




Re: Read file/stream

2011-03-11 Thread Stewart Gordon

On 11/03/2011 19:50, Ali Çehreli wrote:
snip

There is also std.intrinsic.bswap


Well spotted.  I don't tend to look at std.intrinsic much.

Presumably there's a reason that it's been provided for uint but not ushort or 
ulong

Stewart.


Re: Read file/stream

2011-03-11 Thread Steven Schveighoffer
On Fri, 11 Mar 2011 16:42:59 -0500, Stewart Gordon smjg_1...@yahoo.com  
wrote:



On 11/03/2011 19:50, Ali Çehreli wrote:
snip

There is also std.intrinsic.bswap


Well spotted.  I don't tend to look at std.intrinsic much.

Presumably there's a reason that it's been provided for uint but not  
ushort or ulong


I think things in std.intrinsic are functions that tie directly to CPU  
features, so presumably, the CPU only provides the possibility for 4-byte  
width.


-Steve


Re: Read file/stream

2011-03-11 Thread Stewart Gordon

On 11/03/2011 21:51, Steven Schveighoffer wrote:
snip

Presumably there's a reason that it's been provided for uint but not ushort or 
ulong


I think things in std.intrinsic are functions that tie directly to CPU features,


True, but...


so presumably, the CPU only provides the possibility for 4-byte width.


D is designed to run on a variety of CPUs.  Do you really think that they all have a 
built-in instruction to reverse the order of 4 bytes but no other number?


Stewart.
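For what it's worth, the missing widths can be composed from the uint intrinsic. A hedged sketch (the helper names swap16/swap64 are invented here, not library API; bswap lives in core.bitop in D2, std.intrinsic in older releases):

```d
import core.bitop : bswap;   // uint-only byte-swap intrinsic

// 16-bit swap: just exchange the two bytes.
ushort swap16(ushort v) {
    return cast(ushort)((v >> 8) | (v << 8));
}

// 64-bit swap: bswap each 32-bit half, then exchange the halves.
ulong swap64(ulong v) {
    return (cast(ulong) bswap(cast(uint) v) << 32)
         |  cast(ulong) bswap(cast(uint)(v >> 32));
}

void main() {
    assert(swap16(0x1234) == 0x3412);
    assert(swap64(0x0102030405060708) == 0x0807060504030201);
}
```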


Re: Fibonacci with ranges

2011-03-11 Thread Jesse Phillips
Without testing: foreach (f; take(recurrence!"a[n-1] + a[n-2]"(0UL, 1UL), 50))

teo Wrote:

 Just curious: How can I get ulong here?
 
 foreach (f; take(recurrence!"a[n-1] + a[n-2]"(0, 1), 50))
 {
   writeln(f);
 }
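Jesse's suggestion, written out as a runnable sketch: recurrence infers the element type from its seed arguments, so ulong seeds give a ulong sequence.

```d
import std.range : recurrence, take;
import std.stdio;

void main() {
    // 0UL/1UL seeds make a[n] a ulong, so the later Fibonacci terms
    // don't overflow int.
    foreach (f; take(recurrence!"a[n-1] + a[n-2]"(0UL, 1UL), 50))
        writeln(f);
}
```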




Re: I seem to be able to crash writefln

2011-03-11 Thread Spacen Jasset

On 10/03/2011 12:18, Steven Schveighoffer wrote:

On Wed, 09 Mar 2011 18:19:55 -0500, Joel Christensen joel...@gmail.com
wrote:


This is on Windows 7. Using a def file to stop the terminal window
coming up.

win.def
EXETYPE NT
SUBSYSTEM WINDOWS

bug.d
import std.stdio;
import std.string;

void main() {
auto f = File( "z.txt", "w" );
scope( exit )
f.close;
string foo = "bar";
foreach( n; 0 .. 10 ) {
writefln( "%s", foo );
f.write( format( "count duck-u-lar: %s\n", n ) );
}
}

output (from in z.txt):
count duck-u-lar: 0


If I dust off my rusty old Windows hat, I believe if you try to write to
stdout while there is no console window, you will encounter an error.

So don't do that ;) I'm not sure what you were expecting...

-Steve
AFAIR, you normally get no output from C++-style compiled programs using 
printf or cout -- and perhaps an error flag set on cout, but I can't 
remember the details now.


Re: Read file/stream

2011-03-11 Thread Jonathan M Davis
On Friday, March 11, 2011 14:39:43 Stewart Gordon wrote:
 On 11/03/2011 21:51, Steven Schveighoffer wrote:
 snip
 
  Presumably there's a reason that it's been provided for uint but not
  ushort or ulong
  
  I think things in std.intrinsic are functions that tie directly to CPU
  features,
 
 True, but...
 
  so presumably, the CPU only provides the possibility for 4-byte width.
 
 D is designed to run on a variety of CPUs.  Do you really think that they
 all have a built-in instruction to reverse the order of 4 bytes but no
 other number?

You end up using ntohl and htonl, I believe. They're in core somewhere. I don't 
think that you necessarily get 64-bit versions, since unfortunately, 
they're not standard. But perhaps we should add them with implementations 
(rather than just declarations for C functions) for cases when they don't 
exist... IIRC, I had to create 64-bit versions for std.datetime and put them in 
there directly to do what I was doing, but we really should get the 64-bit 
versions in druntime at some point.

- Jonathan M Davis


[Issue 5717] 1.067 regression: appending Unicode char to string broken

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5717


Walter Bright bugzi...@digitalmars.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||bugzi...@digitalmars.com
 Resolution||FIXED


--- Comment #7 from Walter Bright bugzi...@digitalmars.com 2011-03-11 
03:07:29 PST ---
https://github.com/D-Programming-Language/dmd/commit/19e819f6b9e71bc18bc5496ff9638ae7ade3e5ad

https://github.com/D-Programming-Language/dmd/commit/610064e7f74ce8258bc4a3eafde52b7311b56da7

-- 
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---


[Issue 5722] Regression(2.052): Appending code-unit from multi-unit code-point at compile-time gives wrong result.

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5722


Don clugd...@yahoo.com.au changed:

   What|Removed |Added

 CC||clugd...@yahoo.com.au


--- Comment #2 from Don clugd...@yahoo.com.au 2011-03-11 04:58:38 PST ---
Like bug 5717, this was caused by the fix to bug 4389 (char[]~dchar and
wchar[]~dchar *never* worked).
The problem is in constfold.c, Cat(). 

It erroneously assumes that all concatenation is equivalent to string ~ dchar.
But this isn't true for char[]~char or wchar[]~wchar (this happens during
constant-folding optimization, which is how it manifests in the test case). In
such cases the dchar encoding should not occur - it should just give an
encoding length of 1, and do a simple memcpy.

It applies to everything of the form (e2->op == TOKint64) in that function.

(1)size_t len = es1->len + utf_codeLength(sz, v);
s = mem.malloc((len + 1) * sz);
memcpy(s, es1->string, es1->len * sz);
(2)utf_encode(sz, (unsigned char *)s + (sz * es1->len), v);

Lines (1) and (2) are valid for hetero concatenation, but when both types are
the same the lines should be:

(1) size_t len = es1->len + 1;

(2) memcpy((unsigned char *)s + (sz * es1->len), &v, sz);

This should definitely be factored out into a helper function -- it's far too
repetitive already.



[Issue 5717] 1.067 regression: appending Unicode char to string broken

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5717



--- Comment #8 from Vladimir thecybersha...@gmail.com 2011-03-11 08:35:49 PST 
---
Thanks - not sure what the second commit has to do with it, though.



[Issue 5729] New: taking the address of a @property doesn't work

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5729

   Summary: taking the address of a @property doesn't work
   Product: D
   Version: D2
  Platform: All
OS/Version: All
Status: NEW
  Keywords: rejects-valid
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: mrmoc...@gmx.de


--- Comment #0 from Trass3r mrmoc...@gmx.de 2011-03-11 08:50:19 PST ---
class A
{
private int blub = 5;
@property ref int bla()
{return blub;}
}

void main()
{
A a = new A();
int* b = &a.bla;
}

property.d(11): Error: cannot implicitly convert expression (a.bla) of type
int delegate() @property ref to int*


This only works by adding parentheses: &a.bla()
Shouldn't it work as expected without those for @property methods?
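Until that is fixed, a workaround sketch: call the property explicitly, then take the address of the returned ref.

```d
class A {
    private int blub = 5;
    @property ref int bla() { return blub; }
}

void main() {
    A a = new A();
    // Explicit call: a.bla() yields the ref int, whose address can be taken.
    int* b = &a.bla();
    *b = 7;
    assert(a.bla == 7);
}
```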



[Issue 5730] New: Error: variable has scoped destruction, cannot build closure

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5730

   Summary: Error: variable has scoped destruction, cannot build
closure
   Product: D
   Version: D2
  Platform: Other
OS/Version: Linux
Status: NEW
  Severity: normal
  Priority: P2
 Component: DMD
AssignedTo: nob...@puremagic.com
ReportedBy: samu...@voliacable.com


--- Comment #0 from Max Samukha samu...@voliacable.com 2011-03-11 10:22:17 
PST ---
The below should compile without errors:

struct S
{
~this()
{
}
}

void main()
{
S s;
enum error = __traits(compiles, { auto s1 = s; });
static assert(!error); // line 1
}

Error: static assert  (!true) is false

Comment out line 1 to get the error that explains the problem:

Error: variable test.main.s has scoped destruction, cannot build closure



[Issue 2634] Function literals are non-constant.

2011-03-11 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=2634



--- Comment #2 from Taylor Everding dmttd...@gmail.com 2011-03-11 14:35:35 
PST ---
It may be useful to know that 

void main() {
  auto a = function void() {};
}

compiles correctly, but when a is moved outside main the Error occurs.
