Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 20:01:27 -0700
"H. S. Teoh"  wrote:
> 
> I personally prefer single-column with no more than about 40 ems in
> width or thereabouts. Anything more than that, and it becomes
> uncomfortable to read.
> 

For me, it's closer to 80. With 40 the line breaks are too frequent for
my eyes. And it just "feels" cramped.


> 
> - No full justification by default. Existing justification schemes
> could be improved (most implementations suffer from rivers of
> whitespace in a justified paragraph -- they could learn from LaTeX
> here). Needs native hyphenation support (using JS to patch up this
> flaw is a failure IMO).
> 

To be honest, I'm not a big fan of justified text. Obviously I can live
with it, but even without the occasional "rivers of whitespace" issue,
I find the lack of jagged edges gives my eyes too few reference points,
so I end up losing my place more easily. The value of justified text's
smooth edges, to me, seems somewhat "Adrian Monk" (wikipedia, if you
don't know).


> 
> - Pixel sizes should be banned, as well as hard-coded font sizes.
> These tie you to assumptions about specific user screen dimensions,
> which are almost always wrong. In this day and age, the only real
> solution is a fully dynamically adaptive layout. Everything else is
> just a relic from paper layouts, and is a dead-end.

Yea. Admittedly, I do occasionally use pixels for a little bit of
spacing here and there (never for more than 8px), but I can happily
give them up - especially with so much now using those ultra-high pixel
density screens. Pixels just don't make much sense now unless you're
already dealing on a raster level anyway, like a photo or something.


> Things like
> aligning images should be based on treating image size as an actual
> quantity you can compute sizes on; any hard-coded image size is
> bound to cause problems when the image is modified.
> 
> - Unable to express simple computations on sizes, requiring
>   circumlocutions that make the CSS hard to read and maintain.

Yes! That's one of my big issues with CSS, the inability to do anything
computationally. And yea, dealing with images tends to make that become
more of an issue.

Ultimately, the root problem here regarding the lack of computability
is that HTML/CSS is not, and never has been, a UI layout format (No
matter how much people insist on treating it as
such...*cough*mozilla*cough*.) It's a *document* format. Always has
been. Everything else is a kludge, and is doomed to be so from the
start.


> 
> > >If someone expands their browser to be two-feet wide and ends up
> > >with too much text per line, then really they have no one to blame
> > >but their own dumbass self.
> > 
> > This is a frequent argument. The issue with it is that often people
> > use tabbed browsing, each tab having a page with its own approach to
> > readability.
> 
> The *real* problem is that webpage layout is still very much confined
> by outdated paper layout concepts. The browser should be able to
> automatically columnize text to fit the screen. Maybe with horizontal
> scrolling instead of vertical scrolling. Layouts should be flexible
> enough that the browser can resize the fonts to keep the text
> readable. Seriously, this isn't 1970. There's no reason why we should
> still be fiddling with this stuff manually. Layouts should be
> automatic, not hardcoded or at the whims of designers fixated on
> paper layout concepts.
> 

Exactly. In fact, we *already* had all this. It was called HTML 1. But
then some jackass designers came in from the print world and demanded
webpages match their photoshop mockups to the pixel, thus HTML mutated
into the world's worst UI layout system. (Of course I skipped a few
steps there, but you get the picture.)

If we weren't trying to force app UIs and manual page layouts into web
pages, we could have *already* had nice document layout systems
tailored to the individual user (with tabbed browsing *not* being an
obstacle to basic window resizing, and with multiple device form
factors *never* being an issue for any content creator), instead of
this current endless circle where W3C occasionally hands out some new
half-baked CSS gimmick that a few of the more overzealous designers can
optionally employ in order to force a one-size-fits-all approach to
"readability" onto everyone who visits that one particular site, thus
leading to inevitable problems and ultimately the W3C's next round of
half-baked hacks to the CSS spec.


> 
> [...]
> > >I *really* wish PDF would die. It's great for printed stuff, but
> > >its mere existence just does far more harm than good. Designers are
> > >already far too tempted to treat computers like a freaking sheet of
> > >paper - PDF just clinches it for them.
> > 
> > Clearly PDF and other fixed-format products are targeted at putting
> > ink on paper, and that's going the way of the dinosaur. At the same
> > time, the publishing industry is very much in turmoil for the time
> > being and only fu

Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread Walter Bright

On 8/8/2013 10:02 AM, H. S. Teoh wrote:

Well, it would be nice if the rest of the Phobos devs spoke up,
otherwise they are giving the wrong impression about the state of
things.


See Andrei's reply.



Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 20:07:17 H. S. Teoh wrote:
> On Sun, Aug 11, 2013 at 06:14:18PM -0700, Jonathan M Davis wrote:
> > On Sunday, August 11, 2013 15:38:09 H. S. Teoh wrote:
> > > Maybe my previous post didn't get the idea across clearly, so let me
> > > try again. My underlying thrust was: instead of maintaining 3
> > > different makefiles (or more) by hand, have a single source for all
> > > of them, and write a small D program to generate posix.mak,
> > > win32.mak, win64.mak, whatever, from that source.
> > > 
> > > That way, adding/removing files from the build, etc., involves only
> > > editing a single file, and regenerating the makefiles/whatever we
> > > use.  If there's a problem with a platform-specific makefile, then
> > > it's just a matter of fixing the platform-specific output handler in
> > > the D program.
> > > 
> > > The way we're currently doing it essentially amounts to the same
> > > thing as copy-n-pasting the same piece of code 3 times and trying to
> > > maintain all 3 copies separately, instead of writing a template that
> > > can be specialized 3 times, thus avoiding boilerplate and
> > > maintenance headaches.
> > 
> > But if you're going that far, why not just do the whole thing with D
> > and ditch make entirely? If it's to avoid bootstrapping issues, we're
> > going to have those anyway once we move the compiler to D (which is
> > well underway), so that really isn't going to matter.
> 
> [...]
> 
> If you like, think of it this way: the build tool will be written in D,
> with the option of generating scripts in legacy formats like makefiles
> or shell scripts so that it can be bootstrapped by whoever needs to.
> 
> We pay zero cost for this because the source document is the input
> format for the D tool, and the D tool takes care of producing the right
> sequence of commands. There is only one place to update when new files
> need to be added or old files removed -- or, if we integrate it with
> rdmd fully, even this may not be necessary. When somebody asks for a
> makefile, we just run the program with --generate=makefile. When
> somebody asks for a shell script, we just run it with
> --generate=shellscript. The generated makefiles/shell scripts are
> guaranteed to be consistent with the current state of the code, which is
> the whole point behind this exercise.

But what is the point of the makefile? As far as I can see, it gains you 
nothing. It doesn't help bootstrapping at all, because dmd itself will soon be 
written in D and therefore require that a D compiler already be installed. And 
the cost definitely isn't zero, because it requires extra code to be able to 
generate a makefile on top of doing the build purely with the D script (and I 
fully expect that doing the build with the D script will be simpler than 
generating the makefile would be). I see no benefit whatsoever in generating 
makefiles from a D script over simply doing the whole build with the D script. 
There would be an argument for it if dmd itself were going to stay in C++, 
because then you could avoid a circular dependency, but dmd is being converted 
to D, and so we're going to have that circular dependency anyway, negating 
that argument.

- Jonathan M Davis


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Andrei Alexandrescu

On 8/11/13 4:45 PM, Joseph Rushton Wakeling wrote:

On Sunday, 11 August 2013 at 23:37:28 UTC, Andrei Alexandrescu wrote:

That's an odd thing to say seeing as a lot of CS academic research is
ten years ahead of the industry.


I would personally venture to say that the publication practices of
academia in general and CS in particular have many destructive and
damaging aspects, and that the industry-academia gap might be narrowed
quite a bit if these were addressed.


Could be improved, sure. Destructive and damaging - I'd be curious for 
some substantiation.


Andrei



Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread H. S. Teoh
On Sun, Aug 11, 2013 at 06:14:18PM -0700, Jonathan M Davis wrote:
> On Sunday, August 11, 2013 15:38:09 H. S. Teoh wrote:
> > Maybe my previous post didn't get the idea across clearly, so let me
> > try again. My underlying thrust was: instead of maintaining 3
> > different makefiles (or more) by hand, have a single source for all
> > of them, and write a small D program to generate posix.mak,
> > win32.mak, win64.mak, whatever, from that source.
> > 
> > That way, adding/removing files from the build, etc., involves only
> > editing a single file, and regenerating the makefiles/whatever we
> > use.  If there's a problem with a platform-specific makefile, then
> > it's just a matter of fixing the platform-specific output handler in
> > the D program.
> > 
> > The way we're currently doing it essentially amounts to the same
> > thing as copy-n-pasting the same piece of code 3 times and trying to
> > maintain all 3 copies separately, instead of writing a template that
> > can be specialized 3 times, thus avoiding boilerplate and
> > maintenance headaches.
> 
> But if you're going that far, why not just do the whole thing with D
> and ditch make entirely? If it's to avoid bootstrapping issues, we're
> going to have those anyway once we move the compiler to D (which is
> well underway), so that really isn't going to matter.
[...]

If you like, think of it this way: the build tool will be written in D,
with the option of generating scripts in legacy formats like makefiles
or shell scripts so that it can be bootstrapped by whoever needs to.

We pay zero cost for this because the source document is the input
format for the D tool, and the D tool takes care of producing the right
sequence of commands. There is only one place to update when new files
need to be added or old files removed -- or, if we integrate it with
rdmd fully, even this may not be necessary. When somebody asks for a
makefile, we just run the program with --generate=makefile. When
somebody asks for a shell script, we just run it with
--generate=shellscript. The generated makefiles/shell scripts are
guaranteed to be consistent with the current state of the code, which is
the whole point behind this exercise.
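As a rough illustration of the scheme (not the actual dmd build -- the source list, target name, and emitter details below are all made up), such a generator might look like this:

```d
// Hypothetical sketch of a single-source build tool: one list of
// sources, one emitter per legacy output format.
import std.array : join;
import std.getopt : getopt;
import std.stdio : write;

immutable string[] sources = ["mars.d", "lexer.d", "parse.d"];

// Render a (toy) makefile from the single source list.
string makefileFor(const string[] srcs, string target)
{
    return "SRCS = " ~ srcs.join(" ") ~ "\n"
         ~ target ~ ": $(SRCS)\n"
         ~ "\tdmd -of" ~ target ~ " $(SRCS)\n";
}

// Render the same build as a (toy) shell script.
string shellScriptFor(const string[] srcs, string target)
{
    return "#!/bin/sh\ndmd -of" ~ target ~ " " ~ srcs.join(" ") ~ "\n";
}

void main(string[] args)
{
    string generate = "makefile";
    getopt(args, "generate", &generate);
    write(generate == "shellscript"
          ? shellScriptFor(sources, "dmd")
          : makefileFor(sources, "dmd"));
}
```

Because every emitter reads the same `sources` list, any generated makefile or shell script is by construction a snapshot of the current state of the build description.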


T

-- 
Designer clothes: how to cover less by paying more.


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread H. S. Teoh
On Sun, Aug 11, 2013 at 04:33:26PM -0700, Andrei Alexandrescu wrote:
> On 8/11/13 12:00 PM, Nick Sabalausky wrote:
> >On Sun, 11 Aug 2013 11:25:02 -0700
> >Andrei Alexandrescu  wrote:
> >>
> >>For a column of text to be readable it should have not much more
> >>than 10 words per line. Going beyond that forces eyes to scan too
> >>jerkily and causes difficulty in following line breaks. Filling an
> >>A4 or letter paper with only one column would force either (a) an
> >>unusually large font, (b) very large margins, or (c) too many words
> >>per line.  Children's books choose (a), which is why many do come in
> >>that format.  LaTeX and Word choose (b) in single-column documents.

The solution is to have adaptive layout that adapts itself to your
screen.

I personally prefer single-column with no more than about 40 ems in
width or thereabouts. Anything more than that, and it becomes
uncomfortable to read.

Actually, another interesting idea is left-to-right scrolling instead of
vertical scrolling. You'd lay out the text in columns that exactly fit
the height of the screen, and have as many columns as needed to fit the
entire text. If that exceeds the screen width, scroll horizontally.
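The columns-plus-horizontal-scroll idea reduces to a small computation; a back-of-the-envelope sketch (all names and units here are illustrative):

```d
// Fit screen-height columns to the laid-out text, and fall back to
// horizontal scrolling when the columns overflow the viewport width.
import std.math : ceil;

struct Layout
{
    size_t columns;    // number of screen-height columns
    double totalWidth; // horizontal extent of all columns
    bool   hScroll;    // true if that extent exceeds the viewport
}

Layout columnize(double textHeight, double viewHeight,
                 double viewWidth, double columnWidth)
{
    auto cols  = cast(size_t) ceil(textHeight / viewHeight);
    auto width = cols * columnWidth;
    return Layout(cols, width, width > viewWidth);
}
```

E.g. text that lays out to 1000 units tall on a 300-unit-tall viewport needs four columns; whether the reader then scrolls sideways depends only on column width versus viewport width.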


[...]
> >>Multicolumn is best for screen reading, too. The only problem is
> >>there's no good flowing - the columns should fit the screen. There's
> >>work on that, see e.g.
> >>http://alistapart.com/article/css3multicolumn.

Fixing the number of columns is bound to fail, because user screen
dimensions cannot be predicted in advance. The only *real* solution is
to use an adaptive layout algorithm that adapts itself as needed.


> >A. HTML has good flowing, and has had it since freaking v1. No need
> >for upcoming CSS tricks: As long as the author doesn't go and do
> >something retarded like use a fixed layout or this new "zoom out
> >whenever the window shrinks" lunacy, then all any user ever has to do
> >is adjust the window to their liking.
> 
> Clearly HTML has made good progress toward reaching good formatting,
> but is not quite there yet.

Ugh. I don't consider HTML good formatting at all. HTML+CSS still has
these shortcomings:

- No full justification by default. Existing justification schemes could
  be improved (most implementations suffer from rivers of whitespace in
  a justified paragraph -- they could learn from LaTeX here). Needs
  native hyphenation support (using JS to patch up this flaw is a
  failure IMO).

- Text width should be limited to 40em by default.

- Many layout solutions require CSS circumlocutions and hacks, because
  CSS simply isn't expressive enough for many formatting needs. This
  causes spotty browser support and fragility.

- Pixel sizes should be banned, as well as hard-coded font sizes. These
  tie you to assumptions about specific user screen dimensions, which
  are almost always wrong. In this day and age, the only real solution
  is a fully dynamically adaptive layout. Everything else is just a
  relic from paper layouts, and is a dead-end. Things like aligning
  images should be based on treating image size as an actual quantity
  you can compute sizes on; any hard-coded image size is bound to cause
  problems when the image is modified.

- Unable to express simple computations on sizes, requiring
  circumlocutions that make the CSS hard to read and maintain.

- Unable to express simple things like headers and footers, requiring
  hacks with floats and divs and whatnot, which, again, requires making
  assumptions about user screen size, which inevitably will go wrong.


> >If someone expands their browser to be two-feet wide and ends up with
> >too much text per line, then really they have no one to blame but
> >their own dumbass self.
> 
> This is a frequent argument. The issue with it is that often people
> use tabbed browsing, each tab having a page with its own approach to
> readability.

The *real* problem is that webpage layout is still very much confined by
outdated paper layout concepts. The browser should be able to
automatically columnize text to fit the screen. Maybe with horizontal
scrolling instead of vertical scrolling. Layouts should be flexible
enough that the browser can resize the fonts to keep the text readable.
Seriously, this isn't 1970. There's no reason why we should still be
fiddling with this stuff manually. Layouts should be automatic, not
hardcoded or at the whims of designers fixated on paper layout concepts.


[...]
> >I *really* wish PDF would die. It's great for printed stuff, but its
> >mere existence just does far more harm than good. Designers are
> >already far too tempted to treat computers like a freaking sheet of
> >paper - PDF just clinches it for them.
> 
> Clearly PDF and other fixed-format products are targeted at putting
> ink on paper, and that's going the way of the dinosaur. At the same
> time, the publishing industry is very much in turmoil for the time
> being and only future will tell what the right replacement is.
[...]

The right r

Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 15:38:09 H. S. Teoh wrote:
> Maybe my previous post didn't get the idea across clearly, so let me try
> again. My underlying thrust was: instead of maintaining 3 different
> makefiles (or more) by hand, have a single source for all of them, and
> write a small D program to generate posix.mak, win32.mak, win64.mak,
> whatever, from that source.
> 
> That way, adding/removing files from the build, etc., involves only
> editing a single file, and regenerating the makefiles/whatever we use.
> If there's a problem with a platform-specific makefile, then it's just a
> matter of fixing the platform-specific output handler in the D program.
> 
> The way we're currently doing it essentially amounts to the same thing
> as copy-n-pasting the same piece of code 3 times and trying to maintain
> all 3 copies separately, instead of writing a template that can be
> specialized 3 times, thus avoiding boilerplate and maintenance
> headaches.

But if you're going that far, why not just do the whole thing with D and ditch 
make entirely? If it's to avoid bootstrapping issues, we're going to have 
those anyway once we move the compiler to D (which is well underway), so 
that really isn't going to matter.

- Jonathan M Davis


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Walter Bright

On 8/11/2013 4:33 PM, Andrei Alexandrescu wrote:

Clearly PDF and other fixed-format products are targeted at putting ink on
paper, and that's going the way of the dinosaur. At the same time, the
publishing industry is very much in turmoil for the time being and only future
will tell what the right replacement is.


Currently ereaders are great for reading novels and such with little typography 
needs. But they're terrible for textbooks and reference material, mainly because 
the screen is both low res and is way too small.


It's like programming with an 80*24 display (I can't believe I was able to use 
one!).


(I was eagerly looking at the Surface tablet when it came out, but what killed 
it for me was the low res display. I want to read books on a tablet, and a low 
res display doesn't do that very well.)


I'd like an ereader that has a full 8.5*11 display.


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Joseph Rushton Wakeling
On Sunday, 11 August 2013 at 23:37:28 UTC, Andrei Alexandrescu 
wrote:
That's an odd thing to say seeing as a lot of CS academic 
research is ten years ahead of the industry.


I would personally venture to say that the publication practices 
of academia in general and CS in particular have many destructive 
and damaging aspects, and that the industry-academia gap might be 
narrowed quite a bit if these were addressed.


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Andrei Alexandrescu

On 8/11/13 12:09 PM, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 20:43:17 +0200
"Tyler Jameson Little"  wrote:



I really wish this was more popular:
 _______________
|       |       |
|   1   |   2   |
|       |       |
|_______|_______|
|       |       |
|   3   |   4   |
|_______|_______|
 __ page break __
 _______________
|       |       |
|   1   |   2   |
|       |       |
|_______|_______|
|       |       |
|   3   |   4   |
|_______|_______|

This allows a multi-column layout with less scrolling.


Yea, that's another thing that would help.


This is still too rigid. I think the right answer is adaptive flowed 
layout (http://goo.gl/CXylLi - warning it's a PDF :o)), where the system 
selects a typography-quality layout dynamically depending on the 
characteristics of the device.



Why can't we get the same for academic papers? They're even
simpler because each section can be forced to be the same size.


I keep getting more and more convinced that it just comes back down
to the usual old problem of the glacial, bureaucratic nature of
academia. I truly believe the academic world is beginning to sink under
the weight of its own outdated traditions. This is just one symptom of
that, just like all the ways the MPAA/RIAA struggled against the
societal changes they wanted to pretend weren't really occurring.


That's an odd thing to say seeing as a lot of CS academic research is 
ten years ahead of the industry.



Andrei



Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Andrei Alexandrescu

On 8/11/13 12:00 PM, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 11:25:02 -0700
Andrei Alexandrescu  wrote:


For a column of text to be readable it should have not much more than
10 words per line. Going beyond that forces eyes to scan too jerkily
and causes difficulty in following line breaks. Filling an A4 or
letter paper with only one column would force either (a) an unusually
large font, (b) very large margins, or (c) too many words per line.
Children's books choose (a), which is why many do come in that format.
LaTeX and Word choose (b) in single-column documents.

[...]

Multicolumn is best for screen reading, too. The only problem is
there's no good flowing - the columns should fit the screen. There's
work on that, see e.g. http://alistapart.com/article/css3multicolumn.



A. HTML has good flowing, and has had it since freaking v1. No need for
upcoming CSS tricks: As long as the author doesn't go and do something
retarded like use a fixed layout or this new "zoom out whenever the
window shrinks" lunacy, then all any user ever has to do is adjust
the window to their liking.


Clearly HTML has made good progress toward reaching good formatting, but 
is not quite there yet.



If someone expands their browser to be
two-feet wide and ends up with too much text per line, then really they
have no one to blame but their own dumbass self.


This is a frequent argument. The issue with it is that often people use 
tabbed browsing, each tab having a page with its own approach to 
readability.



B. There's nothing stopping authors from making their PDFs a
single-column at whatever line width works well. Like I said,
personally I've never found 8" line width at a normal font size to be
even the slightest hint harder than 10 words per line (in fact,
sometimes I find 10 words per line to be *harder* due to such
frequent line breaks), *but* if the author wants to do 10 words per
line in a PDF, there's *nothing* in PDF stopping them from doing that
without immediately sacrificing those gains, and more, by
going multi-column.


This started with your refutation of my argument that two columns need 
less space. One column would fill less of the paper, which was my point. 
This is, indeed, the motivation of conferences: they want to publish 
relatively compact proceedings.


There is a lot of research and practice on readability, dating from 
hundreds of years ago - before the start of typography. In recent years 
there's been new research motivated by the advent of new media for 
displaying textual information, some of which supports your view, see 
e.g. http://goo.gl/qfHcJz. However, most pundits do suggest limiting the 
width of text lines, see the many results of http://goo.gl/HuPEXV.



Bottom line, obviously multi-column PDF is a bad situation, but we
already *have* multiple dead-simple solutions even without throwing our
hands up and saying "Oh, well, there's no good *multi-column* solution
ATM, so I have no way to make my document readable without waiting for
a reflowing-PDF or CSS5 or 6 or 7 or whatever."

An obsessive desire for multi-column appears to be getting in the way
of academic documents that have halfway decent readability. Meanwhile,
the *rest* of the world just doesn't bother, uses single-column, and
gets by perfectly fine with entirely readable documents (Well, except
when they put out webpages with gigantic sizes, grey-on-white text, and
double-spacing - Now *that* makes things *really* hard to read. Gives
me a headache every single time - and it's always committed by the
very people who *think* they're doing it to be more readable. Gack.)


Again, two-column layout is being used as a vehicle for putting a wealth 
of information in a good quality format that is cheap to print and bind 
(most conference proceedings are simply printed on letter/A4 paper and 
bound at the university bindery). The rest of the paper publishing world 
has different constraints because they print documents in much larger 
numbers, in a specialized typography that uses folios divided in 
different ways, producing smaller, single-column books. It strikes me as 
ignorant to accuse the academic world of high-brow snobbery because it 
produces good quality printed content with free software at affordable 
costs.



I *really* wish PDF would die. It's great for printed stuff, but
its mere existence just does far more harm than good. Designers are
already far too tempted to treat computers like a freaking sheet of
paper - PDF just clinches it for them.


Clearly PDF and other fixed-format products are targeted at putting ink 
on paper, and that's going the way of the dinosaur. At the same time, 
the publishing industry is very much in turmoil for the time being and 
only future will tell what the right replacement is.



Andrei



Re: Jquery SOB killer

2013-08-11 Thread superdan

JS wrote:

I wrote the script to get rid of the BS that has been happening 
lately with my replies. I can't stand people that think the 
world revolves around them. Hopefully this little script will 
bring some sanity back to this NG.


da irony iz completely lost on u, isnt it?


Re: Is D the Answer to the One vs. Two Language High-Performance Computing Dilemma?

2013-08-11 Thread Brian Rogoff

On Sunday, 11 August 2013 at 08:22:35 UTC, Walter Bright wrote:

http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Interesting, and certainly D being a wide spectrum language is a 
reason that many of us investigate it. Julia is aiming at the 
same space as that mentioned in the paper, so I think their point 
that D is the only choice here is no longer true.


No comment on the paper's formatting, which seems to be its most 
salient feature :-)


-- Brian






Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright

On 8/11/2013 3:40 PM, bearophile wrote:

For Haskell they release two different kinds of compilers+libraries: one is just
a core distribution with the compiler with the standard Haskell modules
(including the GMP compiled binaries), and the other contains the compiler with
its standard library, plus modules+binaries for the most common libraries.

Python on Windows uses a similar strategy.


This is not really a strategy; it addresses none of the issues I raised.



Is it useful to use BigInts at compile-time? If the answer is very positive then
perhaps the D interpreter could be modified to allow calling external numerical
libraries even at compile-time.


Don keeps extending CTFE to make it work with more stuff, as people find it more 
and more useful to do things at compile time. I see no reason BigInt should be 
excluded from that.




Note that D was developed with existing backends and linkers.


But isn't optlink being rewritten in C? Perhaps I am just confused, sorry.


Optlink was used for D because it was existing, free, and it worked. You seemed 
to have the idea that optlink was developed to use with D. Optlink predated D by 
12-15 years.




Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread bearophile

Walter Bright:

as soon as the D *package* starts to depend on 
non-default-installed libraries, trouble happens. With libcurl, 
the only solution so far seems to be to BUILD OUR OWN LIBCURL 
binary!


http://d.puremagic.com/issues/show_bug.cgi?id=10710

This is a terrible situation.


For Haskell they release two different kinds of 
compilers+libraries: one is just a core distribution with the 
compiler with the standard Haskell modules (including the GMP 
compiled binaries), and the other contains the compiler with its 
standard library, plus modules+binaries for the most common 
libraries.


Python on Windows uses a similar strategy.


Consider things like the trig functions. D started out by 
forwarding to the C versions. Unfortunately, the C versions are 
of spotty, unreliable quality (even today!). Because of that, 
we've been switching to our own implementations.


And, consider that using GMP means CTFE would not be supported.


At the moment BigInt doesn't run at compile-time.

You could wrap an external fast multi-precision library in Phobos 
D code that uses __ctfe to switch to a simpler pure D 
implementation at compile-time.
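The switch on D's magic __ctfe variable might look roughly like this; schoolbookMul is a toy base-2^32 multiply standing in for the "simpler pure D implementation", and the run-time branch is where a hypothetical (not shown) binding to an external library such as GMP would go:

```d
// Toy schoolbook multiply over base-2^32 digits, least significant
// digit first. Simple enough to run under CTFE.
uint[] schoolbookMul(const(uint)[] a, const(uint)[] b)
{
    auto r = new uint[](a.length + b.length);
    foreach (i, x; a)
    {
        ulong carry = 0;
        foreach (j, y; b)
        {
            ulong t = cast(ulong) x * y + r[i + j] + carry;
            r[i + j] = cast(uint) t;
            carry = t >> 32;
        }
        r[i + b.length] = cast(uint) carry;
    }
    return r;
}

uint[] bigMul(const(uint)[] a, const(uint)[] b)
{
    if (__ctfe)
        return schoolbookMul(a, b);  // plain D path, works under CTFE
    else
        return schoolbookMul(a, b);  // at run time, a real wrapper
                                     // would call the fast external
                                     // library here instead
}

enum r15 = bigMul([3u], [5u]);       // forces compile-time evaluation
```

The point of the pattern is that the same function name serves both worlds: __ctfe is true only during compile-time evaluation, so callers never need to know which path ran.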


Is it useful to use BigInts at compile-time? If the answer is 
very positive then perhaps the D interpreter could be modified to 
allow calling external numerical libraries even at compile-time.




Note that D was developed with existing backends and linkers.


But isn't optlink being rewritten in C? Perhaps I am just 
confused, sorry.


Bye,
bearophile


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread H. S. Teoh
On Sun, Aug 11, 2013 at 09:26:11AM +0100, Russel Winder wrote:
> On Sat, 2013-08-10 at 14:27 -0400, Nick Sabalausky wrote:
> […]
> > is discovering and dealing with all the fun little differences
> > between the posix and win32 makefiles (and now we have some win64
> > makefiles as well).
> […]
> 
> Isn't this sort of problem solved by using SCons, Waf or (if you
> really have to) CMake?
[...]

+1. But people around here seem to have a beef against anything that
isn't make. *shrug*


T

-- 
If you think you are too small to make a difference, try sleeping in a closed 
room with a mosquito. -- Jan van Steenbergen


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread H. S. Teoh
On Sun, Aug 11, 2013 at 12:11:14AM -0700, Jonathan M Davis wrote:
> On Saturday, August 10, 2013 22:48:14 Walter Bright wrote:
> > On 8/10/2013 4:21 PM, Jonathan M Davis wrote:
> > > Another suggestion that I kind of liked was to just build them all
> > > with a single script written in D and ditch make entirely, which
> > > would seriously reduce the amount of duplication across platforms.
> > > But that's obviously a much bigger change and would likely be much
> > > more controversial than simply using a more standard make.
> > 
> > I don't see much point in that. The dmd build is straightforward,
> > and I see no particular gain from reinventing that wheel.
> 
> Well, make is horrible, and while posix.mak is way better than
> win32.mak or win64.mak, it's still pretty bad. Personally, I would
> never use make without something like cmake in front of it. If we were
> to write up something in D, it could be properly cross-platform (so
> only one script instead of 3+), and I fully expect that it could be
> far, far cleaner than what we're forced to do in make.
[...]

Maybe my previous post didn't get the idea across clearly, so let me try
again. My underlying thrust was: instead of maintaining 3 different
makefiles (or more) by hand, have a single source for all of them, and
write a small D program to generate posix.mak, win32.mak, win64.mak,
whatever, from that source.

That way, adding/removing files from the build, etc., involves only
editing a single file, and regenerating the makefiles/whatever we use.
If there's a problem with a platform-specific makefile, then it's just a
matter of fixing the platform-specific output handler in the D program.

The way we're currently doing it essentially amounts to the same thing
as copy-n-pasting the same piece of code 3 times and trying to maintain
all 3 copies separately, instead of writing a template that can be
specialized 3 times, thus avoiding boilerplate and maintenance
headaches.
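To make the idea concrete, here's a toy sketch of such a generator (in Python purely for illustration; the real tool would presumably be written in D, and the file names and rules here are hypothetical):

```python
# Toy sketch of a single-source makefile generator: one list of files,
# one small output handler per platform-specific makefile.
SOURCES = ["mars.c", "lexer.c", "parse.c"]  # the single place to add/remove files

def posix_mak(sources):
    # POSIX flavor: .o objects, LF line endings
    objs = " ".join(s.replace(".c", ".o") for s in sources)
    return "OBJS = " + objs + "\n\ndmd: $(OBJS)\n\t$(CC) -o dmd $(OBJS)\n"

def win32_mak(sources):
    # Windows flavor: .obj objects, CRLF line endings
    objs = " ".join(s.replace(".c", ".obj") for s in sources)
    return "OBJS = " + objs + "\r\n\r\ndmd.exe : $(OBJS)\r\n\t$(CC) -o dmd.exe $(OBJS)\r\n"

# After editing SOURCES, regenerating every makefile is a single step:
for name, generate in [("posix.mak", posix_mak), ("win32.mak", win32_mak)]:
    print("would write", name)
    # open(name, "w", newline="").write(generate(SOURCES))
```

Fixing a platform-specific quirk then means fixing one generator function, rather than hand-editing each makefile and hoping they stay in sync.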


T

-- 
For every argument for something, there is always an equal and opposite 
argument against it. Debates don't give answers, only wounded or inflated egos.


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread John Colvin

On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in the 
world does
academia keep doing that anyway? (Genuine question, not 
rhetoric)


It's convenient for embedding figures without using up excessive 
space or resorting to *shivers* word wrapping.


Even without taking that into account, I've always had a soft 
spot for 2 column layout, when done right. Most of the physics 
papers I read use it and I never have any problems. It's only 
really bad if they make the columns too narrow compared to the 
font width and you get too few words per line.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright

On 8/11/2013 3:18 PM, Jonathan M Davis wrote:

On Sunday, August 11, 2013 14:43:13 Walter Bright wrote:

That said, as soon as the D *package* starts to depend on
non-default-installed libraries, trouble happens. With libcurl, the only
solution so far seems to be to BUILD OUR OWN LIBCURL binary!


At this point, I'm inclined to think that while it's great for us to have
bindings to C libraries and to have user-friendly, D wrappers around them,
it's better that they don't end up in Phobos.


My sentiments exactly.



Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 14:43:13 Walter Bright wrote:
> That said, as soon as the D *package* starts to depend on
> non-default-installed libraries, trouble happens. With libcurl, the only
> solution so far seems to be to BUILD OUR OWN LIBCURL binary!

At this point, I'm inclined to think that while it's great for us to have 
bindings to C libraries and to have user-friendly, D wrappers around them, 
it's better that they don't end up in Phobos.

- Jonathan M Davis


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright

Oh, I forgot to mention, licensing.

We want Phobos to be free of any restrictive licensing. GPL is restrictive, and 
so is LGPL.


We very deliberately picked Boost. Having Phobos be a mix of GPL and Boost would 
utterly defeat picking Boost.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright

On 8/11/2013 2:21 PM, bearophile wrote:

Walter Bright:


On the subject of friction, I believe we make a mistake by making a dependency
on libcurl, a library over which we don't have control. Some issues:

http://d.puremagic.com/issues/show_bug.cgi?id=10710

http://d.puremagic.com/issues/show_bug.cgi?id=8756


Issue 8756 doesn't seem caused by libcurl. (If the pragma(lib) feature is not
portable then perhaps it should become a feature of just DMD and not of the D
language. That would align the theory/dream of D a bit better with the reality of D.)

On the other hand, doing everything ourselves has some other large
disadvantages. Look at the Phobos bigints: the GHC Haskell compiler uses GMP
multi-precision numbers, which are usually faster or much faster than the
Phobos ones, have more functions that Phobos is missing (like power-modulus),
and let GHC developers focus on more Haskell-related issues.


You might consider that D is designed to be *very* friendly to linking with 
existing C code libraries for exactly that reason. Haskell is not. You might 
also recall my steadfast opposition to doing things like rolling our own crypto 
libraries rather than linking to existing ones.


That said, as soon as the D *package* starts to depend on non-default-installed 
libraries, trouble happens. With libcurl, the only solution so far seems to be 
to BUILD OUR OWN LIBCURL binary!


http://d.puremagic.com/issues/show_bug.cgi?id=10710

This is a terrible situation.

Consider things like the trig functions. D started out by forwarding to the C 
versions. Unfortunately, the C versions are of spotty, unreliable quality (even 
today!). Because of that, we've been switching to our own implementations.


And, consider that using GMP means CTFE would not be supported.


Rust developers don't try to design and develop a language, a linker, a
back-end, a run-time and a standard library all at the same time.


Neither did D's developers. Note that D was developed with existing backends and 
linkers. Rust is not released yet, and given that they just switched to their 
own runtime, they clearly intend to ship with it.



Restricting the work helps speed up the development of what's more related to D.


We really aren't complete fools, bearophile.





Re: Something up with the forums?

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 11:16:44 Walter Bright wrote:
> Your post here also broke the thread. Your post appears in reply to the
> initial post of the thread, rather than the post you actually replied to.

Maybe due to that issue with mailman rewriting IDs? If not, I have no idea 
why. The problem with gmail is that your own messages to the mailing list 
don't even get delivered to you, so they don't show up in the thread, so if 
you're using a local mail client, replies to your message have no message to 
thread onto. And AFAIK, that only really affects other people when you reply to 
your own messages. Either way, I'm not using gmail anymore.

- Jonathan M Davis


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Anon

On Sunday, 11 August 2013 at 21:21:45 UTC, bearophile wrote:

Walter Bright:

On the subject of friction, I believe we make a mistake by 
making a dependency on libcurl, a library over which we don't 
have control. Some issues:


http://d.puremagic.com/issues/show_bug.cgi?id=10710

http://d.puremagic.com/issues/show_bug.cgi?id=8756


Issue 8756 doesn't seem caused by libcurl. (If the pragma(lib) 
feature is not portable then perhaps it should become a feature 
of just DMD and not of the D language. That would align the 
theory/dream of D a bit better with the reality of D.)


Does pragma(lib, "curl") not work on Windows/DMD? I know it works 
on Linux (used by DMD and LDC, ignored under GDC), and I was 
under the impression that that was the portable way to use 
pragma(lib).


If it isn't now, I would argue that naming the library (rather 
than the file) should be the standard, accepted use of 
pragma(lib). It neatly avoids cluttering D code with version()s 
and repeated pragmas to handle the different naming schemes.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread bearophile

Walter Bright:

On the subject of friction, I believe we make a mistake by 
making a dependency on libcurl, a library over which we don't 
have control. Some issues:


http://d.puremagic.com/issues/show_bug.cgi?id=10710

http://d.puremagic.com/issues/show_bug.cgi?id=8756


Issue 8756 doesn't seem caused by libcurl. (If the pragma(lib) 
feature is not portable then perhaps it should become a feature 
of just DMD and not of the D language. That would align the 
theory/dream of D a bit better with the reality of D.)


On the other hand, doing everything ourselves has some other 
large disadvantages. Look at the Phobos bigints: the GHC Haskell 
compiler uses GMP multi-precision numbers, which are usually 
faster or much faster than the Phobos ones, have more functions 
that Phobos is missing (like power-modulus), and let GHC 
developers focus on more Haskell-related issues.


Rust developers don't try to design and develop a language, a 
linker, a back-end, a run-time and a standard library all at the 
same time. Restricting the scope helps speed up the development 
of what's more closely related to D.


Bye,
bearophile


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright
On the subject of friction, I believe we make a mistake by making a dependency 
on libcurl, a library over which we don't have control. Some issues:


http://d.puremagic.com/issues/show_bug.cgi?id=10710

http://d.puremagic.com/issues/show_bug.cgi?id=8756


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Walter Bright

On 8/11/2013 11:49 AM, Brad Roberts wrote:

Gross over-generalization when talking about _one_ app in _one_ scenario.


It happens over and over to me. Most 'ports' to Windows seem to be:

1. get it to compile
2. ship it!



You're deflecting rather than being willing to discuss a topic that comes up
regularly.


I'm posting in this thread because I'm willing to discuss it. I've added much 
more detail in this post.




You are also well aware of just how often having multiple makefiles
has caused pain by them not being updated in sync.


Yes, and I am usually the one who gets to resync them - and I think it's worth 
it.



Does gmake have _any_ of those problems?


The last time I tried it, it bombed because the makefiles had CRLFs. Not an 
auspicious start. This has probably been fixed, but I haven't cared to try 
again. But ok, it's been a while, let's take a look:


Consider:

http://gnuwin32.sourceforge.net/install.html

In the first paragraph, it says the user must have msvcrt.dll, which doesn't 
come with it and the user must go find it if he doesn't have it. Then "some 
packages require msvcp60.dll", which the user must also go find elsewhere.


Then, it must be "installed". It even is complicated enough to motivate someone 
to write a "download and maintenance utility."


"Some packages must be installed in their default directories (usually 
c:\progra~1\), or you have to set corresponding environment 
variables or set options at the command line; see the documentation of the 
package, or, when available, the installation instructions on the package page."


Oh joy. I downloaded the zip file, unzipped it, and ran make.exe. I was rewarded 
with a dialog box:


"The program can't start because libintl3.dll is missing from your computer. Try 
reinstalling the program to fix this problem."


This dll isn't included with the zip file, and the install instructions don't 
mention it, let alone where I can get it.


"The length of the command-line is limited; see MSDN."

DM make solves that problem.

"The MS-Windows command interpreters, command.com and cmd.exe, understand both 
the backward slash '\' (which is the default) and the forward slash '/' (such as 
on Unix) in filenames. In general, it is best to use forward slashes, since some 
programs internally use the filename, e.g. to derive a directory name, and in 
doing this rely on the forward slash as path separator."


Actually, Windows utilities (even ones provided by Microsoft) sometimes fail to 
recognize / as a separator. I've not found any consistent rule about this, other 
than "it's going to suck sooner or later if you try using / instead of \."


I didn't get further, because I don't have libintl3.dll.

--

Contrast that with DM make:

1. There is no install and no setup. It's just make.exe. Run it, it works. No 
friction.


2. No DLLs that one must search the internet for, and no worries about 
"dll hell" from getting the wrong one. DM make runs on a vanilla install 
of Windows.


3. It's designed from the ground up to work with Windows. For example, it 
recognizes "del" as a builtin Windows command, not a program, and handles it 
directly. It does things in the Windows way.


4. It handles arbitrarily long command lines.

5. No worries with people having a different make.exe than the one the makefiles 
were built for, as make.exe is distributed with dmd.


6. It's a small program, 50K, meaning it fits in a corner and is a trivial part 
of the dmd package.


--

If for no other reason, I oppose using gnu make for dmd on Windows because it 
significantly raises the barrier of entry for anyone who wants to just recompile 
Phobos. Gratuitously adding friction for users is not what we need - note the 
regular posts we get from newbies and the existing friction they encounter.


Re: Jquery SOB killer

2013-08-11 Thread JS

BTW, I hope that if you want to be added to the ignore list on my
side, you use the script and do the same. I know some will want
everyone to see their irrelevant posts, but I won't see them, so
you will never get a response from me; it just clutters up the
NG and distracts from D.


Jquery SOB killer

2013-08-11 Thread JS

This goes out to all the SOB's out there. Thanks jQuery!



// ==UserScript==
// @name   Remove Arrogant Bastard Posts from Dlang Forum
// @namespace  http://dlang.bastards.forum
// @version0.1
// @description  Dlang bastards suck
// @match  http://forum.dlang.org/*
// @requirehttp://code.jquery.com/jquery-latest.min.js
// ==/UserScript==

var names = ["Timon Gehr", "Dicebot", "deadalnix"];

// Remove matching posts in the thread view
$("div.post-author").each(function()
{
    if ($.inArray($(this).html(), names) >= 0)
        $(this).closest(".post-wrapper").remove();
});

// Remove matching rows in the forum index
$("span.forum-postsummary-author").each(function()
{
    if ($.inArray($(this).html(), names) >= 0)
        $(this).closest("tr").remove();
});


So there should be no excuse for anyone contaminating others' 
threads with BS unless they are truly trolling.



If you want me to add you to my list then reply in this post and 
I will do so, and you'll never hear from me again. If you can't 
check your ego and arrogance at the door when responding to my 
posts for help or suggestions, then let me know so I can add you 
to the list. If you don't want to see my posts, use the script 
and put my name in the list. I imagine quite a few replies; just 
say "Add me" and I'll do it. No need for anything more.



I wrote the script to get rid of the BS that has been happening 
lately with my replies. I can't stand people that think the world 
revolves around them. Hopefully this little script will bring 
some sanity back to this NG.




Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 20:43:17 +0200
"Tyler Jameson Little"  wrote:

> 
> I really wish this was more popular:
> __
> |   ||
> |   1   |   2|
> |   ||
> |   ||
> ||
> |   ||
> |   3   |   4|
> |   ||
> |   ||
> ___ page break ___
> |   ||
> |   ||
> |   1   |   2|
> |   ||
> ||
> |   ||
> |   ||
> |   3   |   4|
> |   ||
> 
> This allows a multi-column layout with less scrolling.

Yea, that's another thing that would help.

> 
> Why can't we get the same for academic papers? They're even 
> simpler because each section can be forced to be the same size.

I keep getting more and more convinced that it just comes back down
to the usual old problem of the glacial, bureaucratic nature of
academia. I truly believe the academic world is beginning to sink under
the weight of its own outdated traditions. This is just one symptom of
that, just like all the ways the MPAA/RIAA struggled against the
societal changes they wanted to pretend weren't really occurring.



Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 11:25:02 -0700
Andrei Alexandrescu  wrote:
> 
> For a column of text to be readable it should have not much more than
> 10 words per line. Going beyond that forces eyes to scan too jerkily
> and causes difficulty in following line breaks. Filling an A4 or
> letter paper with only one column would force either (a) an unusually
> large font, (b) very large margins, or (c) too many words per line.
> Children books choose (a), which is why many do come in that format.
> LaTeX and Word choose (b) in single-column documents.
> 
> [...]
> 
> Multicolumn is best for screen reading, too. The only problem is
> there's no good flowing - the columns should fit the screen. There's
> work on that, see e.g. http://alistapart.com/article/css3multicolumn.
> 

A. HTML has good flowing, and has had it since freaking v1. No need for
upcoming CSS tricks: As long as the author doesn't go and do something
retarded like use a fixed layout or this new "zoom out whenever the
window shrinks" lunacy, then all any user ever has to do is adjust
the window to their liking. If someone expands their browser to be
two-feet wide and ends up with too much text per line, then really they
have no one to blame but their own dumbass self.

B. There's nothing stopping authors from making their PDFs a
single-column at whatever line width works well. Like I said,
personally I've never found 8" line width at a normal font size to be
even the slightest bit harder than 10 words per line (in fact,
sometimes I find 10 words per line to be *harder* due to such
frequent line breaks), *but* if the author wants to do 10 words per
line in a PDF, there's *nothing* in PDF stopping them from doing that
without immediately sacrificing those gains, and more, by
going multi-column.

Bottom line, obviously multi-column PDF is a bad situation, but we
already *have* multiple dead-simple solutions even without throwing our
hands up and saying "Oh, well, there's no good *multi-column* solution
ATM, so I have no way to make my document readable without waiting for
a reflowing-PDF or CSS5 or 6 or 7 or whatever."

An obsessive desire for multi-column appears to be getting in the way
of academic documents that have halfway decent readability. Meanwhile,
the *rest* of the world just doesn't bother, uses single-column, and
gets by perfectly fine with entirely readable documents (Well, except
when they put out webpages with gigantic sizes, grey-on-white text, and
double-spacing - Now *that* makes things *really* hard to read. Gives
me a headache every single time - and it's always committed by the
very people who *think* they're doing it to be more readable. Gack.)

I *really* wish PDF would die. It's great for printed stuff, but
its mere existence just does far more harm than good. Designers are
already far too tempted to treat computers like a freaking sheet of
paper - PDF just clinches it for them.



Re: Version of implementation for docs

2013-08-11 Thread Tyler Jameson Little

On Sunday, 11 August 2013 at 15:25:27 UTC, JS wrote:

On Sunday, 11 August 2013 at 10:16:47 UTC, bearophile wrote:

JS:

Can we get the version of implementation/addition of a 
feature in the docs. e.g., if X feature/method/library is 
added into dmd version v, then the docs should display that 
feature.


Python docs do this, and in my first patch I have added such 
version number.


Bye,
bearophile


Too bad the "development team" feels this is not important. Very 
bad decision that will hurt D in the long run. It's not a hard 
thing to do. Seems to be a lot of laziness going around. Maybe 
you can tell us just how hard/time-consuming it was to type in 
2.063 when you added a method?


Personally I don't like the tone here, but I agree that having 
version numbers would be very nice to have, especially when using 
a pre-packaged DMD+Phobos from a package manager.


Perhaps this could be automated? It'd be a little messy, but it 
could look something like this:


* get list of all exported names changed since last release 
(using diff tool)
* eliminate all names that have the same definition in the last 
release

* mark new names (not in last release) as new in current release
* mark changed names as changed in current release (keep list of 
changes since added)

* document deleted names as having been removed

This would only have to be run once per release, so it's okay if 
it's a little expensive.
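The core diff step could be sketched like this (a toy illustration in Python; a real tool would extract exported names and their signatures from the actual Phobos sources, and the snapshots below are hypothetical):

```python
# Toy sketch of the per-release doc-versioning idea: compare exported
# names between two releases and classify each one.
def classify(old, new):
    """old/new map an exported name to its definition (e.g. signature text)."""
    added   = sorted(set(new) - set(old))            # mark "new in this release"
    removed = sorted(set(old) - set(new))            # document as removed
    changed = sorted(n for n in set(old) & set(new)  # mark "changed in this release"
                     if old[n] != new[n])
    return added, changed, removed

# Hypothetical release snapshots:
prev = {"find": "R find(R)(R haystack)", "sort": "void sort(R)(R r)"}
curr = {"find": "R find(R)(R haystack)",
        "sort": "SortedRange!R sort(R)(R r)",
        "each": "void each(alias f, R)(R r)"}

added, changed, removed = classify(prev, curr)
print(added, changed, removed)  # ['each'] ['sort'] []
```

Since this runs once per release, even a naive full comparison of every exported name is cheap enough.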


This bit me once in Go when a dependency failed to compile 
because of a missing function name. It existed in the official 
docs, but not in my local docs. After updating to the latest 
release, everything worked as expected. There was, however, no 
indication in the docs that anything had been added, only in the 
change logs.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Brad Roberts

On 8/10/13 10:48 PM, Walter Bright wrote:

On 8/10/2013 4:21 PM, Jonathan M Davis wrote:

On Saturday, August 10, 2013 14:35:04 Nick Sabalausky wrote:

Is this something that would be acceptable, or does building DMD for
Windows need to stay as DM make?


I don't see any problem with it, but that doesn't mean that Walter won't.


Tools built for Unix never work right on Windows. It's why, for example, I run 
git on Linux and
don't use the Windows badly ported versions of git. Tiresome problems revolve 
around failure to
adapt to \ path separators, ; in PATH, CRLF line endings, Windows SEH, case 
insensitive file names,
no symbolic links, etc., no Perl installed, etc.

DMD and Phobos are fairly unusual in how well adapted they are to both Windows 
and Linux.


Gross over-generalization when talking about _one_ app in _one_ scenario.  You're deflecting rather 
than being willing to discuss a topic that comes up regularly.  You are also well aware of just how 
often having multiple makefiles has caused pain by them not being updated in sync.


Does gmake have _any_ of those problems?



Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread Walter Bright

On 8/11/2013 9:26 AM, Andrei Alexandrescu wrote:

There's a related issue that I think we must solve before deciding whether or
not we should deprecate string lambdas. Consider:

void main() {
 import std.range;
 SortedRange!(int[], "a > b") a;
 SortedRange!(int[], "a > b") b;
 b = a;
 SortedRange!(int[], (a, b) => a > b) c;
 SortedRange!(int[], (a, b) => a > b) d;
 d = c;
}

The last line fails to compile because D does not currently have a good notion
of comparing lambdas for equality.


Bugzilla?



Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Tyler Jameson Little
On Sunday, 11 August 2013 at 18:25:02 UTC, Andrei Alexandrescu 
wrote:

On 8/11/13 10:20 AM, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 09:28:21 -0700
Andrei Alexandrescu  wrote:


On 8/11/13 8:49 AM, monarch_dodra wrote:
On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky 
wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in the world
does academia keep doing that anyway? (Genuine question, not
rhetoric)

But the fact that article even exists is really freaking
awesome. :)


My guess is simply because it takes more space, making a 4 page
article look like a 7 page ;)


Double columns take less space


Per column yes, but overall, no. The same number of chars + same font
== same amount of space no matter how you rearrange them.

If anything, double columns take more space due to the inner 
margin and increased number of line breaks (triggering more 
word-wrapping and thus more space wasted due to more wrapped 
words - and that's just as true with justified text as it is 
with left/right/center-aligned).


For a column of text to be readable it should have not much 
more than 10 words per line. Going beyond that forces eyes to 
scan too jerkily and causes difficulty in following line 
breaks. Filling an A4 or letter paper with only one column 
would force either (a) an unusually large font, (b) very large 
margins, or (c) too many words per line. Children books choose 
(a), which is why many do come in that format. LaTeX and Word 
choose (b) in single-column documents.



and are more readable.



In *print* double-columns are arguably more readable (although I've
honestly never found that to be the case personally, at least when
we're talking roughly 8.5" x 11" pages).

But it's certainly not more readable in PDFs, which work like this
(need monospaced font):

   Start
 | /|
 |/ |
 |  Scroll  |
 |   Up /   |
  Scroll | /|  Scroll
   Down  |/ |   Down
 |   /  |
 |  /   |
 | /|
 |/ |
   /
  /---/
 /
 | /|
 |/ |
 |  Scroll  |
 |   Up /   |
  Scroll | /|  Scroll
   Down  |/ |   Down
 |   /  |
 |  /   |
 | /|
 |/ |
   /
  /---/
 /
 | /|
 |/ |
 |  Scroll  |
 |   Up /   |
  Scroll | /|  Scroll
   Down  |/ |   Down
 |   /  |
 |  /   |
 | /|
 |/ |
|
   End


Multicolumn is best for screen reading, too. The only problem 
is there's no good flowing - the columns should fit the screen. 
There's work on that, see e.g. 
http://alistapart.com/article/css3multicolumn.



Andrei


I really wish this was more popular:
__
|   ||
|   1   |   2|
|   ||
|   ||
||
|   ||
|   3   |   4|
|   ||
|   ||
___ page break ___
|   ||
|   ||
|   1   |   2|
|   ||
||
|   ||
|   ||
|   3   |   4|
|   ||

This allows a multi-column layout with less scrolling. The aspect 
ratio on my screen is just about perfect to fit half of a page at 
a time. I don't understand why this is rarely taken advantage 
of... For example, I like G+'s layout because posts seem to be 
layed out L->R, T->B like so:


|  1  |  2  |  3  |
|  4  |  2  |  3  |
|  4  |  2  |  5  |
|  6  |  7  |  5  |

Why can't we get the same for academic papers? They're even 
simpler because each section can be forced to be the same size.


Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread Tyler Jameson Little
On Sunday, 11 August 2013 at 16:26:16 UTC, Andrei Alexandrescu 
wrote:

On 8/8/13 9:52 AM, Jonathan M Davis wrote:

On Thursday, August 08, 2013 07:29:56 H. S. Teoh wrote:
Seems this thread has quietened down. So, what is the conclusion?
Seems like almost everyone concedes that silent deprecation is the
way to go. We still support string lambdas in the background, but
in public docs we promote the use of the new lambda syntax. Would
that be a fair assessment of this discussion?


I find it interesting that very few Phobos devs have weighed in
on the matter, but unfortunately, most of the posters who have
weighed in do seem to be against keeping them.


There's a related issue that I think we must solve before 
deciding whether or not we should deprecate string lambdas. 
Consider:


void main() {
import std.range;
SortedRange!(int[], "a > b") a;
SortedRange!(int[], "a > b") b;
b = a;
SortedRange!(int[], (a, b) => a > b) c;
SortedRange!(int[], (a, b) => a > b) d;
d = c;
}

The last line fails to compile because D does not currently 
have a good notion of comparing lambdas for equality. In 
contrast, string comparison is well defined, and although 
string lambdas have clowny issues with e.g. "a>b" being 
different from "a > b", people have a good understanding of 
what to do to get code working.


So I think we should come up with a good definition of what 
comparing two function aliases means.



Andrei


Correct me if I'm wrong, but AFAICT the old behavior was an 
undocumented feature. I couldn't find string lambdas formally 
documented anywhere, but lambdas are.


Comparing function aliases is an optimization, not a feature, so 
I don't feel it's a blocker to deprecating string lambdas. If the 
user needs the old behavior, he/she can do this today with an 
actual function:


bool gt(int a, int b) {
return a > b;
}

void main() {
import std.range;
SortedRange!(int[], "a > b") a;
SortedRange!(int[], "a > b") b;
b = a;
SortedRange!(int[], gt) c;
SortedRange!(int[], gt) d;
d = c;
}

While not as concise, this is safer and does not rely on 
undocumented behavior.


Another consideration: are the following equivalent?

(a,b) => a > b
(b,c) => b > c


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Andrei Alexandrescu

On 8/11/13 10:20 AM, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 09:28:21 -0700
Andrei Alexandrescu  wrote:


On 8/11/13 8:49 AM, monarch_dodra wrote:

On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in the world
does academia keep doing that anyway? (Genuine question, not
rhetoric)

But the fact that article even exists is really freaking
awesome. :)


My guess is simply because it takes more space, making a 4 page
article look like a 7 page ;)


Double columns take less space


Per column yes, but overall, no. The same number of chars + same font
== same amount of space no matter how you rearrange them.

If anything, double columns take more space due to the inner margin and
increased number of line breaks (triggering more word-wrapping and thus
more space wasted due to more wrapped words - and that's just as true
with justified text as it is with left/right/center-aligned).


For a column of text to be readable it should have not much more than 10 
words per line. Going beyond that forces eyes to scan too jerkily and 
causes difficulty in following line breaks. Filling an A4 or letter 
paper with only one column would force either (a) an unusually large 
font, (b) very large margins, or (c) too many words per line. Children 
books choose (a), which is why many do come in that format. LaTeX and 
Word choose (b) in single-column documents.



and are more readable.



In *print* double-columns are arguably more readable (although I've
honestly never found that to be the case personally, at least when
we're talking roughly 8.5" x 11" pages).

But it's certainly not more readable in PDFs, which work like this
(need monospaced font):

Start
  | /|
  |/ |
  |  Scroll  |
  |   Up /   |
   Scroll | /|  Scroll
Down |/ |   Down
  |   /  |
  |  /   |
  | /|
  |/ |
/
   /---/
  /
  | /|
  |/ |
  |  Scroll  |
  |   Up /   |
   Scroll | /|  Scroll
Down |/ |   Down
  |   /  |
  |  /   |
  | /|
  |/ |
/
   /---/
  /
  | /|
  |/ |
  |  Scroll  |
  |   Up /   |
   Scroll | /|  Scroll
Down |/ |   Down
  |   /  |
  |  /   |
  | /|
  |/ |
 |
End


Multicolumn is best for screen reading, too. The only problem is there's 
no good flowing - the columns should fit the screen. There's work on 
that, see e.g. http://alistapart.com/article/css3multicolumn.



Andrei



Re: Any library with support JSON-RPC for D?

2013-08-11 Thread Dicebot

On Sunday, 11 August 2013 at 17:01:32 UTC, ilya-stromberg wrote:
Can you print any code example, please? Or just link to the 
documentation?


There is a REST example packaged with vibe.d: 
https://github.com/rejectedsoftware/vibe.d/blob/master/examples/rest/source/app.d


As you may notice, it uses the very same interface declaration 
for both `registerRestInterface` on the server and 
`RestInterfaceClient`. In the example they are run within the 
same program, but the same can be done for two separate binaries, 
resulting in, essentially, RPC for that interface's methods, with 
all D data types (de)serialized via JSON behind the scenes.


Re: Something up with the forums?

2013-08-11 Thread Walter Bright

On 8/10/2013 6:37 PM, Jonathan M Davis wrote:

If you never noticed it, I'd guess that you use gmail's web interface rather
than a local client, as gmail will show your sent messages in the threading
that it does. If you're using a local client, that obviously doesn't happen,
since sent messages don't normally get put in your inbox. So, as someone who
uses a local mail client pretty much exclusively, what gmail was doing was
really annoying, particularly since it was constantly breaking up threads that
I replied in. But either you have a very different workflow (like using the web
interface), or your e-mail client is much smarter than mine, or you just
didn't notice for some reason.

But if gmail works for you, then great. This issue was a deal breaker for me.
It took it from gmail being annoying in some of its quirks and how badly it
interacted with local clients to being unacceptably broken.


Your post here also broke the thread. Your post appears in reply to the initial 
post of the thread, rather than the post you actually replied to.




Mind the Ninja Gap

2013-08-11 Thread bearophile
I have found an excellent paper, written recently, that I don't 
remember being discussed here:


"Can Traditional Programming Bridge the Ninja Performance Gap for 
Parallel Computing Applications?", by Nadathur Satish et al. 
(2012):

http://software.intel.com/sites/default/files/article/386514/isca-2012-paper.pdf

If that link doesn't work, this is an alternative link to a 
document with worse formatting:

http://www.intel.it/content/dam/www/public/us/en/documents/technology-briefs/intel-labs-closing-ninja-gap-paper.pdf


The "Ninja Gap" is the performance gap between mostly-numerical 
code written by experts and the same code written in a 
traditional style by less skilled programmers. The Intel Labs 
researchers of this paper show that this gap is widening over 
time, and they also show simple means to bridge most of it using 
modern compilers and some standard coding practices (with such 
practices the gap shrinks to about a 1.3X performance difference, 
which is often acceptable, while a gap of 50X is not).


In my opinion numerical code is an important use case for the D 
language, and I think D should _help_ the programmer bridge as 
much of that ninja gap as possible without resorting to actual 
ninja-level coding and lots of inline asm. So in my opinion this 
is an important paper.


Here I discussed a bit related matters, regarding the (very 
interesting and probably worth partially copying) ISPC compiler:

http://forum.dlang.org/post/hwsjzlxystpymnvfx...@forum.dlang.org


The paper presents several benchmarks, and for each one shows how 
to bridge most of the gap using some standard means. Then on page 
7 they summarize those strategies. Such strategies and means 
should be available in Phobos (or, where not possible, as D 
built-ins). (Example: Phobos should offer a simple data structure 
to perform the change from Array-Of-Structures (AOS) to 
Structure-Of-Arrays (SOA). I think this is not too hard to 
implement, it's not too many lines of code, and it's useful in 
many situations, so I think it's a candidate for Phobos inclusion 
once someone implements it.)
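For concreteness, here is a hand-written sketch of what that 
layout change amounts to (a real Phobos adapter would generate 
the parallel arrays from the struct's fields via compile-time 
introspection rather than writing them out like this):

// Array-of-structures: each body's fields are interleaved in memory.
struct Body { float x, y, z, mass; }
Body[1024] aos;

// Structure-of-arrays: each field gets its own contiguous array,
// which is the layout vectorized inner loops want to stream over.
struct Bodies
{
    float[1024] x, y, z, mass;
}
Bodies soa;

With the SOA layout, a loop over, say, soa.x[] touches one dense 
array instead of striding across whole Body records.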


Regarding D itself, I think it already has most of the needed 
features, but it should use them better. An example is visible in 
the missing vectorized comparisons (an example using the Cilk 
Plus array notation):


  T tmp[S];
  tmp[0:S] = A[0:S] * k - B[0:S];
  if (tmp[0:S] > 5.0f) {
    tmp[0:S] = tmp[0:S] * sin(B[0:S]);
  }


(For me it's surprising how large firms such as Intel and 
Microsoft keep inventing nice ideas, implementing them by writing 
compilers and tools, while 99% of those things get ignored by 
most people and end up forgotten. Being a technology researcher 
in such firms seems a sad job.)


The optimization examples shown in this paper sometimes use 
algorithmic changes, which are beyond what D/Phobos can be 
expected to do automatically. But a language can help in other 
ways. This is the simple O(n^2) loop pair for an N-body benchmark 
discussed in this paper (there are approximate algorithms to 
solve the N-body problem, but this benchmark uses the simple 
algorithm):


foreach (immutable i, const ref b1; bodies)
    foreach (const ref b2; bodies)
        result[i] += compute(b1, b2);


A numerics-oriented language could help the programmer write code 
like this, which improves performance by blocking the loop:


foreach (immutable size_t j; iota(0, bodies.length, blockSize))
    foreach (immutable i, const ref b1; bodies)
        foreach (const ref b2; bodies[j .. min($, j + blockSize)])
            result[i] += compute(b1, b2);


So a language good for heavy numerical processing has to help the 
programmer use the standard tricks presented in this paper in 
simple, clean ways.


Eventually I'll try to implement the AOS-to-SOA data structure. 
But before being considered for inclusion in Phobos I think it 
should be used for some time in real code, assuming some D 
programmers are interested in writing "numeric processing"-style 
code.


Bye,
bearophile


Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread David Nadlinger
On Saturday, 10 August 2013 at 18:28:29 UTC, Jonathan M Davis 
wrote:
I find it interesting that very few Phobos devs have weighed in 
on the matter,
but unfortunately, most of the posters who have weighed in do 
seem to be

against keeping them.


I am not actively participating in any NG discussions right now 
due to university work, but for the record, I am very much in 
favor of phasing out string lambdas as well (even if short-term 
removal is certainly not possible at this point).


I am sure all the relevant arguments have been brought up 
already, so I am not going to repeat them all over again, but in 
my opinion the increased cognitive load for the user (a new 
syntax to learn) and the fact that string lambdas can't work in 
any cases involving free functions (std.functional importing half 
of Phobos just in case a string lambda might need a certain 
function clearly isn't a scalable solution) are more than enough 
reason to ditch them.


String lambdas were a brilliant hack back then, but now that we 
have a proper solution, it's time to let them go.


David


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 09:28:21 -0700
Andrei Alexandrescu  wrote:

> On 8/11/13 8:49 AM, monarch_dodra wrote:
> > On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky wrote:
> >> On Sun, 11 Aug 2013 01:22:34 -0700
> >> Walter Bright  wrote:
> >>
> >>> http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf
> >>
> >> Holy crap those two-column PDFs are hard to read! Why in the world
> >> does academia keep doing that anyway? (Genuine question, not
> >> rhetoric)
> >>
> >> But the fact that article even exists is really freaking
> >> awesome. :)
> >
> > My guess is simply because it takes more space, making a 4 page
> > article look like a 7 page ;)
> 
> Double columns take less space

Per column yes, but overall, no. The same number of chars + same font
== same amount of space no matter how you rearrange them.

If anything, double columns take more space due to the inner margin and
the increased number of line breaks (triggering more word-wrapping and
thus more space wasted on wrapped words - and that's just as true
with justified text as it is with left/right/center-aligned).

> and are more readable.
> 

In *print* double-columns are arguably more readable (although I've
honestly never found that to be the case personally, at least when
we're talking roughly 8.5" x 11" pages).

But it's certainly not more readable in PDFs, which work like this
(need monospaced font):

Start
  |           ^          |
  |  Scroll   |          |  Scroll
  |  Down     |  Scroll  |  Down
  |  (left    |  Up      |  (right
  |  column)  |          |  column)
  v           |          v

  ...then scroll down past the page break
  and repeat the same dance for every page...

End

Of course, you can zoom out enough that the entire page is viewable on
one screen so you don't have that ridiculous scroll-dance, but then
everything becomes too small to be readable - unless you're one of the
rare few with a monitor that swivels vertically, or one of some
ridiculous size like 36" (which isn't applicable to the vast majority
of users).



Re: Any library with support JSON-RPC for D?

2013-08-11 Thread ilya-stromberg

On Sunday, 11 August 2013 at 16:06:42 UTC, Dicebot wrote:

On Sunday, 11 August 2013 at 08:51:19 UTC, ilya-stromberg wrote:

Hi,

Do you know any library with support JSON-RPC for D?

Thanks.


Don't know about direct JSON-RPC implementation, but using 
vibe.http.rest from vibed.org allows for something similar - 
JSON-based RPC over HTTP.


Can you post a code example, please? Or just a link to the 
documentation?


Re: Future of string lambda functions/string predicate functions

2013-08-11 Thread Andrei Alexandrescu

On 8/8/13 9:52 AM, Jonathan M Davis wrote:

On Thursday, August 08, 2013 07:29:56 H. S. Teoh wrote:

Seems this thread has quietened down. So, what is the conclusion? Seems
like almost everyone concedes that silent deprecation is the way to go.
We still support string lambdas in the background, but in public docs we
promote the use of the new lambda syntax. Would that be a fair
assessment of this discussion?


I find it interesting that very few Phobos devs have weighed in on the matter,
but unfortunately, most of the posters who have weighed in do seem to be
against keeping them.


There's a related issue that I think we must solve before deciding 
whether or not we should deprecate string lambdas. Consider:


void main() {
import std.range;
SortedRange!(int[], "a > b") a;
SortedRange!(int[], "a > b") b;
b = a;
SortedRange!(int[], (a, b) => a > b) c;
SortedRange!(int[], (a, b) => a > b) d;
d = c;
}

The last line fails to compile because D does not currently have a good 
notion of comparing lambdas for equality. In contrast, string comparison 
is well defined, and although string lambdas have clowny issues with 
e.g. "a>b" being different from "a > b", people have a good 
understanding of what to do to get code working.


So I think we should come up with a good definition of what comparing 
two function aliases means.



Andrei



Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Andrei Alexandrescu

On 8/11/13 8:49 AM, monarch_dodra wrote:

On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in the world does
academia keep doing that anyway? (Genuine question, not rhetoric)

But the fact that article even exists is really freaking awesome. :)


My guess is simply because it takes more space, making a 4 page article
look like a 7 page ;)


Double columns take less space and are more readable.

Andrei



Re: Any library with support JSON-RPC for D?

2013-08-11 Thread Dicebot

On Sunday, 11 August 2013 at 08:51:19 UTC, ilya-stromberg wrote:

Hi,

Do you know any library with support JSON-RPC for D?

Thanks.


Don't know about direct JSON-RPC implementation, but using 
vibe.http.rest from vibed.org allows for something similar - 
JSON-based RPC over HTTP.


Re: Is D the Answer to the One vs. Two Language High , Performance Computing Dilemma?

2013-08-11 Thread Dicebot

On Sunday, 11 August 2013 at 08:48:04 UTC, Iain Buclaw wrote:

..whatever happened to std.serialize?


I am gathering information to start yet another review/inclusion 
attempt, and waiting for Jacob's confirmation. In progress.




Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 09:26:11 +0100
Russel Winder  wrote:

> On Sat, 2013-08-10 at 14:27 -0400, Nick Sabalausky wrote:
> […]
> > is discovering and dealing with all the fun little differences
> > between the posix and win32 makefiles (and now we have some win64
> > makefiles as well).
> […]
> 
> Isn't this sort of problem solved by using SCons, Waf or (if you
> really have to) CMake?
> 

One of the problems, yea.

In this case, I think anything involving a full-fledged programming
language other than D is going to be, at the very least, controversial.

Even though I've always avoided CMake, since it relies on other
make tools on the back end, it seems like (out of all the existing
build tools I'm aware of) it may be the most appropriate in DMD's
case. I think that's something to look into further.



Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread monarch_dodra

On Sunday, 11 August 2013 at 15:42:24 UTC, Nick Sabalausky wrote:

On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:


http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Holy crap those two-column PDFs are hard to read! Why in the 
world does
academia keep doing that anyway? (Genuine question, not 
rhetoric)


But the fact that article even exists is really freaking 
awesome. :)


My guess is simply because it takes more space, making a 4 page 
article look like a 7 page ;)


Re: Is D the Answer to the One vs. Two Language High ,Performance Computing Dilemma?

2013-08-11 Thread Nick Sabalausky
On Sun, 11 Aug 2013 01:22:34 -0700
Walter Bright  wrote:

> http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf

Holy crap those two-column PDFs are hard to read! Why in the world does
academia keep doing that anyway? (Genuine question, not rhetoric)

But the fact that article even exists is really freaking awesome. :)



Re: Version of implementation for docs

2013-08-11 Thread JS

On Sunday, 11 August 2013 at 10:16:47 UTC, bearophile wrote:

JS:

Can we get the version of implementation/addition of a feature 
in the docs. e.g., if X feature/method/library is added into 
dmd version v, then the docs should display that feature.


Python docs do this, and in my first patch I have added such 
version number.


Bye,
bearophile


Too bad the "development team" feels this is not important. A 
very bad decision that will hurt D in the long run. It's not a 
hard thing to do. Seems to be a lot of laziness going around. 
Maybe you can tell us just how hard/time-consuming it was to type 
in 2.063 when you added a method?


Re: std.concurrency.receive() question

2013-08-11 Thread David Nadlinger

Hi Ruslan,

On Sunday, 11 August 2013 at 13:04:41 UTC, Ruslan Mullakhmetov 
wrote:

(OwnerTerminated) { running = false; }


This is a template function literal taking an argument named 
"OwnerTerminated", not a function with an unnamed parameter of 
type OwnerTerminated. Quite a subtle trap, admittedly.
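In other words, giving the parameter a name should make the 
compiler treat OwnerTerminated as the parameter's type again - a 
minimal sketch of the fix, inside the fileWriter loop from the 
sample:

receive(
    (immutable(ubyte)[] buffer) {},
    // "t" names the parameter, so OwnerTerminated is now its type
    // rather than being parsed as the parameter name itself:
    (OwnerTerminated t) { running = false; }
);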


In the future you might want to post similar questions to the 
digitalmars.D.learn group instead.


Hope this helps,
David


Re: GtkD

2013-08-11 Thread Mike Wey

On 08/11/2013 10:19 AM, Russel Winder wrote:

On Sat, 2013-08-10 at 23:01 +0200, Mike Wey wrote:
[…]

The linker probably can't find the GtkD library.


I have tried using dmd and ldc with a -L-L/path/to/library and gdc with
-L/path/to/library but get subtly different error messages. Certainly
all the unfound references are _D... ones so I assume they are D
references.


On linux if you used "make install" to install the GtkD libraries you
could use pkg-config:

dmd $(pkg-config --cflags --libs gtkd-2) MyApp.d


Well, putting the PKG_CONFIG_PATH in so as to find the .pc file, this
gives a very large number of errors. The first three are:

 helloWorld_d_gtkd.o:(.data+0xb0): undefined reference to 
`_D3gtk10MainWindow12__ModuleInfoZ'
 helloWorld_d_gtkd.o:(.data+0xb8): undefined reference to 
`_D3gtk5Label12__ModuleInfoZ'
 helloWorld_d_gtkd.o:(.data+0xc0): undefined reference to 
`_D3gtk4Main12__ModuleInfoZ'

from then everything is internal to libgtkd-2.a not having references.
This means though that it has found libgtkd-2.a so. An example:

/home/users/russel/lib.Linux.x86_64/GtkD/lib//libgtkd-2.a(ObjectG.o):(.data+0xb0):
 undefined reference to `_D4gtkc7gobject12__ModuleInfoZ'

All of them appear to be a lack of ModuleInfoZ as far as I can tell,
which may make this an obvious problem?


Unfortunately that doesn't make it obvious. Could you check the exact 
name of the ModuleInfo in the library?


nm --defined-only libgtkd-2.a | grep ModuleInfo | grep gobject | grep gtkc


Will gdc be able to cope with:

PKG_CONFIG_PATH=$HOME/lib.Linux.x86_64/GtkD/share/pkgconfig pkg-config
--cflags --libs gtkd-2

-I/home/users/russel/lib.Linux.x86_64/GtkD/include/d/gtkd-2/
-L-L/home/users/russel/lib.Linux.x86_64/GtkD/lib/ -L-lgtkd-2 -L-ldl

These look appropriate to DMD and LDC but not GDC, and in a Linux
context GDC is likely the compiler of choice.


When the GtkD library is compiled with gdc the .pc file will contain the 
appropriate flags for gdc.



Is there an issue where you have to use the same compiler to link code
against libraries as was used to build the libraries in the first place?
That is not the issue here, as I get the same problem with all three
compilers; something else is possibly wrong as well.



The different compilers aren't binary compatible so you will need to use 
the same compiler to build the lib and the application.


For linking it's usually best to also use the compiler; using ld 
directly is an option, but you'll then need to add the flags and 
libraries the compiler normally adds to the command yourself.



On Windows you will need to list gtkd.lib on the command line with its
relative or absolute path.


What's Windows?  ;-)


:)

--
Mike Wey


std.concurrency.receive() question

2013-08-11 Thread Ruslan Mullakhmetov


I tried to compile the following slightly modified sample from 
the book, and it fails with the following messages:


=== source 

import std.stdio;
import std.concurrency;

void fileWriter()
{
// Write loop
for (bool running = true; running; )
{
receive(
(immutable(ubyte)[] buffer) {},
(OwnerTerminated) { running = false; }
);
}
stderr.writeln("Normally terminated.");
}

void main()
{

}

== error messages ===

/Users/ruslan/Source/dlang/dub-test/source/listener.d(11): Error: 
template std.concurrency.receive does not match any function 
template declaration. Candidates are:
/usr/local/Cellar/dmd/2.063.2/libexec/src/phobos/std/concurrency.d(646): 
   std.concurrency.receive(T...)(T ops)
/Users/ruslan/Source/dlang/dub-test/source/listener.d(11): Error: 
template std.concurrency.receive(T...)(T ops) cannot deduce 
template function from argument types !()(void 
function(immutable(ubyte)[] buffer) pure nothrow @safe, void)




What's wrong? If I replace OwnerTerminated with int, or simply 
remove it, everything is OK.


If I replace it with my own struct Terminate, it also fails.

any help would be appreciated.


Re: Variadic grouping

2013-08-11 Thread Artur Skawina
On 08/11/13 04:07, JS wrote:
> On Saturday, 10 August 2013 at 18:28:39 UTC, Artur Skawina wrote:
>>A!(int, 0, float, Group!(ubyte, "m", float[2], Group!(Group!(int,11,12), 
>> Group!(float,21,22 x4;
>>
>> Using just ';' would: a) be too subtle and confusing; b) not be enough
>> to handle the 'x4' case above.
> 
> a) thats your opinion.

I can't think of a more subtle way to separate lists than using just
a single pixel. I know -- I should try harder.


> b) wrong. Using Group is just an monstrous abuse of syntax. It's not needed. 
> I don't think there is a great need for nested groupings, but if so it is 
> easy to make ';' work.
> 
> Do your really think
> 
> that Group!() is any different than `;` as far as logic goes?

Yes.

> If so then you really need to learn to think abstractly.

Won't help in this case.


> This is like saying that there is a difference between using () or {} as 
> groupings. There is no semantic difference except what you choose to put on 
> it. Hell we could use ab for grouping symbols.
> 
> Group!() and (); are the same except one is shorter and IMO more 
> convienent... there is otherwise, no other difference.

Wasn't ';' enough?


> to see this, x4 can be written as
> 
> A!(int, 0, float, (ubyte, "m", float[2], ((int,11,12), (float,21,22 x4;
> 
> Note, all I did was delete Group!. IT IS HAS THE EXACT SAME LOGICAL 
> INTERPRETATION.

[...]

> Just because you add some symbol in front of brackets does not magically make 
> anything you do different... just extra typing.
> 
>>A!(int, 0, float, #(ubyte, "m", float[2], #(#(int,11,12), 
>> #(float,21,22 x4;
> 
> But there should be no need for # as there is no need for Group. (again, I'm 
> not talking about what D can do but what D should do)

   template A(T...) { pragma(msg, T); }
   alias B = A!("one", (2.0, "three"));

artur


Re: Version of implementation for docs

2013-08-11 Thread bearophile

JS:

Can we get the version of implementation/addition of a feature 
in the docs. e.g., if X feature/method/library is added into 
dmd version v, then the docs should display that feature.


Python docs do this, and in my first patch I have added such 
version number.


Bye,
bearophile


Re: Variadic grouping

2013-08-11 Thread BS
Man I love it when this Javascript bloke posts. The tantrums are 
brilliant :-)


JS: Why does D not do X, it should and without it D is useless. 
Implement my minions!
DD: Why should X be implemented? Show us some example where it 
would be useful.
JS: You are an ignorant stupid fool if you cannot see the 
benefit. And obviously too lazy to figure out an example yourself.


Here's your dummy JS, but please keep spitting it. It gives me 
great amusement reading your posts when you've been on the bean.


Re: Version of implementation for docs

2013-08-11 Thread Tobias Pankrath

On 11.08.2013 10:59, JS wrote:

On Sunday, 11 August 2013 at 07:17:57 UTC, Jonathan M Davis wrote:

On Sunday, August 11, 2013 09:10:15 JS wrote:

And where can I download the most up to date compiled dmd for
windows?


The latest release is always here:

http://dlang.org/download.html

And that's the version that the online docs correspond to.

- Jonathan M Davis



But that doesn't correspond to the master? I thought the latest version
was 2.064?


~master is the development branch. Davis is talking about releases, 
because only releases get a version.


Re: Variadic grouping

2013-08-11 Thread JS

On Sunday, 11 August 2013 at 08:32:35 UTC, jerro wrote:
Group!() and (); are the same except one is shorter and IMO 
more convienent... there is otherwise, no other difference.


Except, of course, the fact that one requires a language change 
and the other doesn't.


And? How many language changes were made to D to make things more 
convenient? I didn't ask about how to do variadic grouping, but 
mentioned it as a feature for D.


"This can be done by making a symbol and breaking up a single
variadic but is messy."


Using Group is messy. I know some people like messy code but that 
shouldn't require everyone to write messy code.


Re: Version of implementation for docs

2013-08-11 Thread JS

On Sunday, 11 August 2013 at 07:17:57 UTC, Jonathan M Davis wrote:

On Sunday, August 11, 2013 09:10:15 JS wrote:

And where can I download the most up to date compiled dmd for
windows?


The latest release is always here:

http://dlang.org/download.html

And that's the version that the online docs correspond to.

- Jonathan M Davis



But that doesn't correspond to the master? I thought the latest 
version was 2.064?


Any library with support JSON-RPC for D?

2013-08-11 Thread ilya-stromberg

Hi,

Do you know any library with support JSON-RPC for D?

Thanks.


Re: Is D the Answer to the One vs. Two Language High , Performance Computing Dilemma?

2013-08-11 Thread Iain Buclaw
On 11 August 2013 09:22, Walter Bright  wrote:
> http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf

That looks to have been written well over a year ago...  But there's
still a good point in it: whatever happened to std.serialize?


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: @property - take it behind the woodshed and shoot it?

2013-08-11 Thread Jason den Dulk

Part 1 - Ordinary Functions (the rewrite rules)
--
Not strictly the discussion topic, but it was a big issue and is 
related, so:


Q: Do you think the "property" rewrite rules (i.e. optional 
opCall operators) are a bad idea?

A: Yes I do.
Q: Why?
A1: Look at all the confusion it's caused!
A2: Because I have shot myself in the foot too many times because 
of this to be convinced otherwise. Judging from the posts, I am 
not the only one hopping around on one foot.

Q: Accepting that it's here to stay, what can you do about it?
A: Pretend they don't exist and always write expressions the 
"correct" way.


Basically If you're having problems with the "property" rewrite 
rules, then stop trying to use them. Adopt the habit of always 
writing your function calls verbosely (even with UFCS) and your 
problems should go away.


Unless they are going to be gotten rid of entirely, the rewrite 
rules should not be changed. All of the suggestions I've seen 
(including Walter's original one) will only lead to more 
complexity, more special cases, more inconsistency, more 
confusion, and more problems.


Part 2 - @property
--

A big problem with @property is that people are thinking of them 
as functions, no small part due to the "property" rewrite rules 
discussed above. Even people who suggest that @properties should 
not use opCall still talk about calling properties, as though 
they are functions.


The trick is to stop thinking of properties in terms of 
functions, and to think of them as variables. To "call" a 
property should make as much sense as calling a variable.


Some have mentioned this, and others have hinted at it, but I 
think that it has not been made fully clear.


To illustrate, consider this:

   int v;

v is a reference to a block of memory which is accessed by a 
machine code getter and setter. So


   int a;

is really

   @property { int a() { fetch from memory }; int a(int) { write 
to memory } }


When you're declaring a true property, you're still declaring a 
variable, but you're simply providing the compiler with 
alternatives to the traditional fetch/write instructions.


--

Here is another (admittedly more complex) way to look at it.

struct Property {
  int opAssign(int) { ... }
  int opCast(T : int)() { ... }
}

Property prop;

You will need to imagine the opCast as an implicit cast for it to 
work, but hopefully you will get what I mean.


Here prop is a variable, so it would be parsed like one by the 
compiler.


The expression "prop = 3" would assign 3 to the contents of prop, 
which would result in the opAssign (the setter) being called.


In the expression "y = prop++", we get the contents of prop (via 
opCast, the getter), apply the ++ operator to it, and assign the 
result to y.


If prop were an actual integer, the above expressions would work 
exactly the same way.


Using struct for this is a little ugly, so you could say instead:

@property { int prop(int) { ... }; int prop() { ... }; }

A little neater, but would work the same way.

Part 3 - functions returning delegates.
---

When it comes to delegate properties (ie the getter returns a 
delegate), I'm sensing that people are having trouble getting 
their heads around it, always needing to treat it like a special 
case. Let's try to see it in a way that it is not special.


Let's have

delegate int() f1;

and

@property delegate int() f2() { ... };

In the eyes of the greater world f1 and f2 should work the same 
way.


In the expression y = f1(), we get the contents of f1 (the 
delegate), apply the "()" operator to it, and assign the result 
to y;


In the expression y = f2(), we get the contents of f2 (via the 
getter, which gives us the delegate), apply the "()" operator to 
it, and assign the result to y;


In both cases y is an int.

Recall an eariler statement:

In the expression "y = prop++", we get the contents of prop (via 
opCast, the getter), apply the ++ operator to it, and assign the 
result to y.


Note the similarity. () is like other operators and is no longer 
special.


How does this work for ordinary functions?

int func()

can be seen as

auto func = function int()

func is a variable and its contents is the function. The 
expression func() means "get the contents of func (the function) 
and apply () to it (gives int)".


For

delegate int() delg()

which can be seen as

auto delg = function (delegate int())()

The expression delg() means "get the contents of delg (the 
function) and apply () to it (gives delegate)", and delg()() 
would apply () to whatever delg() gives and, ultimately, give us 
an int.


--

How come people are getting confused about it? That damn optional 
opCall rewrite rule.


Some people probably think that, for functions returning 
delegates, if

   f becomes f()
then
   f() becomes f()()

THIS DOES NOT HAPPEN! Nor should it.

Re: Variadic grouping

2013-08-11 Thread jerro
Group!() and (); are the same except one is shorter and IMO 
more convienent... there is otherwise, no other difference.


Except, of course, the fact that one requires a language change 
and the other doesn't.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Russel Winder
On Sat, 2013-08-10 at 14:27 -0400, Nick Sabalausky wrote:
[…]
> is discovering and dealing with all the fun little differences between
> the posix and win32 makefiles (and now we have some win64 makefiles as
> well).
[…]

Isn't this sort of problem solved by using SCons, Waf or (if you really
have to) CMake?

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Is D the Answer to the One vs. Two Language High , Performance Computing Dilemma?

2013-08-11 Thread Walter Bright

http://elrond.informatik.tu-freiberg.de/papers/WorldComp2012/PDP3426.pdf


Re: Something up with the forums?

2013-08-11 Thread Iain Buclaw
On 11 August 2013 02:37, Jonathan M Davis  wrote:
> On Sunday, August 11, 2013 02:25:33 Iain Buclaw wrote:
>> On 11 August 2013 00:16, Jonathan M Davis  wrote:
>> > On Saturday, August 10, 2013 17:31:39 Iain Buclaw wrote:
>> >> On 10 August 2013 16:52, Ali Çehreli  wrote:
>> >> > On 08/10/2013 08:42 AM, artur wrote:
>> >> >> Apparently, posts from the mailing lists are not making it to the
>> >> >> web i/f; the other direction seems to work.
>> >> >
>> >> > It looks to be the reverse for that sample thread: My response to Iain
>> >> > was
>> >> > posted from Thunderbird. (I don't use the forum interface.)
>> >> >
>> >> > Ali
>> >>
>> >> My client just so happens to be gmail.  I seem to receive all posts
>> >> from everyone... looks like Ali won't receive this message, and I
>> >> don't expect this message to show on the forum interface either...
>> >
>> > gmail never sends you your own responses. Google is "helpful" and filters
>> > out the messages that you send to a mailing list when they get sent back
>> > to you. That's actually the #1 reason why I stopped using gmail. So, if
>> > you're using gmail, that could add to the confusion.
>>
>> You know, I've never had that problem... and it's not *me* who's
>> getting the confusion.
>
> https://support.google.com/mail/answer/6588?topic=1564
>
> If you never noticed it, I'd guess that you use gmail's web interface rather
> than a local client, as gmail will show your sent messages in the threading
> that it does. If you're using a local client, that obviously doesn't happen,
> since sent messages don't normally get put in your inbox. So, as someone who
> uses a local mail client pretty much exclusively, what gmail was doing was
> really annoying, particularly since it was constantly breaking up threads that
> I replied in. But either you have a very different workflow (like using the 
> web
> interface), or your e-mail client is much smarter than mine, or you just
> didn't notice for some reason.
>
> But if gmail works for you, then great. This issue was a deal breaker for me.
> It took it from gmail being annoying in some of its quirks and how badly it
> interacted with local clients to being unacceptably broken.
>

I guess one thing is that I make heavy use of filters, which use tagging
and archiving to make a pseudo folder structure. Essentially, all mail
that matches my (fairly large and growing) filter list is archived and
labelled.  So all messages sent from this ML - as well as my own
postings - *always* skip the inbox and are archived as a matter of
course, then labelled "D Mailing List".  It is also possible for
threads to gain extra labels if certain keywords are used in them over
time... :o)

Not sure if such a set-up would work with the local client, but yes, I
use the web interface.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: GtkD

2013-08-11 Thread Russel Winder
On Sat, 2013-08-10 at 23:01 +0200, Mike Wey wrote:
[…]
> The linker probably can't find the GtkD library.

I have tried using dmd and ldc with a -L-L/path/to/library and gdc with
-L/path/to/library but get subtly different error messages. Certainly
all the unfound references are _D... ones so I assume they are D
references.

> On linux if you used "make install" to install the GtkD libraries you 
> could use pkg-config:
> 
> dmd $(pkg-config --cflags --libs gtkd-2) MyApp.d

Well, after putting PKG_CONFIG_PATH in so as to find the .pc file, this
gives very large numbers of errors. The first three are:

helloWorld_d_gtkd.o:(.data+0xb0): undefined reference to 
`_D3gtk10MainWindow12__ModuleInfoZ'
helloWorld_d_gtkd.o:(.data+0xb8): undefined reference to 
`_D3gtk5Label12__ModuleInfoZ'
helloWorld_d_gtkd.o:(.data+0xc0): undefined reference to 
`_D3gtk4Main12__ModuleInfoZ'

from then on, all the missing references are internal to libgtkd-2.a.
This means, though, that it has found libgtkd-2.a. An example:

/home/users/russel/lib.Linux.x86_64/GtkD/lib//libgtkd-2.a(ObjectG.o):(.data+0xb0):
 undefined reference to `_D4gtkc7gobject12__ModuleInfoZ'

All of them appear to be missing __ModuleInfoZ symbols as far as I can
tell, which may make this an obvious problem?


Will gdc be able to cope with:

PKG_CONFIG_PATH=$HOME/lib.Linux.x86_64/GtkD/share/pkgconfig pkg-config
--cflags --libs gtkd-2

-I/home/users/russel/lib.Linux.x86_64/GtkD/include/d/gtkd-2/
-L-L/home/users/russel/lib.Linux.x86_64/GtkD/lib/ -L-lgtkd-2 -L-ldl

These look appropriate to DMD and LDC but not GDC, and in a Linux
context GDC is likely the compiler of choice.
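As a minimal sketch of the difference: dmd and ldc wrap flags destined for the linker in a `-L-` prefix, while GDC (like GCC) takes `-L`/`-l` directly, so the pkg-config output above could in principle be rewritten mechanically. The helper below is hypothetical and assumes the only difference between the two flag styles is that one level of `-L` forwarding; the paths are the ones from the pkg-config output shown above.

```d
// Hypothetical helper: rewrite dmd/ldc-style forwarded linker flags
// ("-L-L/path", "-L-lgtkd-2") into the plain "-L/path -lgtkd-2" form
// that GDC passes straight through to the linker.
import std.array : replace;
import std.stdio : writeln;

string toGdcFlags(string dmdFlags)
{
    // Every "-L-" prefix is dmd's "forward to linker" wrapper;
    // stripping one level of "-L" yields the bare GCC-style flag.
    return dmdFlags.replace("-L-", "-");
}

void main()
{
    writeln(toGdcFlags(
        "-L-L/home/users/russel/lib.Linux.x86_64/GtkD/lib/ -L-lgtkd-2 -L-ldl"));
    // -L/home/users/russel/lib.Linux.x86_64/GtkD/lib/ -lgtkd-2 -ldl
}
```

Whether a second gtkd-2.pc with GDC-style flags would be the cleaner fix is a separate question.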

Is there an issue where you have to use the same compiler to link code
to libraries as was used to build the libraries in the first place?
That is not the issue here, though, as I get the same problem with all
three compilers; something else is possibly wrong as well.

> On Windows you will need to list gtkd.lib on the command line with its
> relative or absolute path.

What's Windows?  ;-)


-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: Request for editor scripting help

2013-08-11 Thread Brian Schott

On Saturday, 10 August 2013 at 18:28:36 UTC, Val Markovic wrote:
> For Vim, integration with the YouCompleteMe[1] plugin would be a great
> option (if I may say so myself). YCM offers a Completer API which can
> be used to connect a semantic completion engine for any language. It
> already has semantic completion support for C, C++, ObjC, ObjC++ (all
> through libclang), Python (through Jedi), C# (through OmniSharp) etc.
>
> When DCD becomes stable (is it already?) I'll gladly write the
> integration for YCM.
>
> [1]: https://github.com/Valloric/YouCompleteMe


I think at this point the command line interface is fairly 
stable, but I wouldn't call the program itself stable.


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Nick Sabalausky
On Sat, 10 Aug 2013 22:48:14 -0700
Walter Bright  wrote:

> On 8/10/2013 4:21 PM, Jonathan M Davis wrote:
> > On Saturday, August 10, 2013 14:35:04 Nick Sabalausky wrote:
> >> Is this something that would be acceptable, or does building DMD
> >> for Windows need to stay as DM make?
> >
> > I don't see any problem with it, but that doesn't mean that Walter
> > won't.
> 
> Tools built for Unix never work right on Windows. It's why, for
> example, I run git on Linux and don't use the Windows badly ported
> versions of git. Tiresome problems revolve around failure to adapt to
> \ path separators, ; in PATH, CRLF line endings, Windows SEH, case
> insensitive file names, no symbolic links, etc., no Perl installed,
> etc.
> 

Fair point.

> 
> > Another suggestion that I kind of liked was to just build them all
> > with a single script written in D and ditch make entirely, which
> > would seriously reduce the amount of duplication across platforms.
> > But that's obviously a much bigger change and would likely be much
> > more controversial than simply using a more standard make.
> 
> I don't see much point in that. The dmd build is straightforward, and
> I see no particular gain from reinventing that wheel.
> 

The current state is fairly awful when trying to do cross-platform
automation of anything that involves building DMD. The make targets
are completely different, the available configuration options and
defaults are completely different, and the output locations are
completely different. Trying to deal with and accommodate the divergent
behaviors of posix.mak and win*.mak is a minefield that leads to
fragile, tangled code even with my best attempts to keep it clean. And
this isn't the first time I've automated building DMD, either.

And yea, all those differences can be addressed, but as long as we're
maintaining posix/win buildscripts separately - and in essentially two
separate languages (two flavors of make) - then divergence is only
going to reoccur.



Re: Version of implementation for docs

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 09:10:15 JS wrote:
> And where can I download the most up-to-date compiled dmd for
> Windows?

The latest release is always here:

http://dlang.org/download.html

And that's the version that the online docs correspond to.

- Jonathan M Davis


Re: Version of implementation for docs

2013-08-11 Thread JS

On Sunday, 11 August 2013 at 07:04:14 UTC, Jonathan M Davis wrote:
> On Sunday, August 11, 2013 06:30:57 JS wrote:
>> Can we get the version of implementation/addition of a feature in
>> the docs. e.g., if X feature/method/library is added into dmd
>> version v, then the docs should display that feature.
>>
>> For example, when I go to http://dlang.org/phobos/object.html I
>> see tsize. When I try to use it on my class, dmd says it doesn't
>> exist. I'm using 2.063. If I knew when tsize was implemented I
>> would know if it is something I am doing wrong or if I'm just
>> using a version where it isn't implemented.
>>
>> const pure nothrow @property @safe size_t tsize();   (v 2.063)
>>
>> or whatever.
>>
>> P.S. note I added a fake version to it to demonstrate what the
>> docs could look like... I know for some this post will be very
>> difficult to understand... some will ask for a use case, some
>> will call me a troll... Others will say it is not useful, and
>> some will say it is too difficult to implement or will be too
>> hard to maintain.
>
> If you want the docs that go with your version of the compiler, then
> look at the ones that come with it (they are provided in the zip file
> - I don't know if they get installed with the installers though). We
> have enough trouble keeping the documentation up-to-date and accurate
> without having to worry about versioning. And for the most part, if
> you have issues with it due to using an older version of the compiler,
> the advice is going to be to upgrade to the latest compiler.
>
> - Jonathan M Davis

And where can I download the most up-to-date compiled dmd for
Windows?


Re: Have Win DMD use gmake instead of a separate DMMake makefile?

2013-08-11 Thread Jonathan M Davis
On Saturday, August 10, 2013 22:48:14 Walter Bright wrote:
> On 8/10/2013 4:21 PM, Jonathan M Davis wrote:
> > Another suggestion that I kind of liked was to just build them all with a
> > single script written in D and ditch make entirely, which would seriously
> > reduce the amount of duplication across platforms. But that's obviously a
> > much bigger change and would likely be much more controversial than
> > simply using a more standard make.
> 
> I don't see much point in that. The dmd build is straightforward, and I see
> no particular gain from reinventing that wheel.

Well, make is horrible, and while posix.mak is way better than win32.mak or 
win64.mak, it's still pretty bad. Personally, I would never use make without 
something like cmake in front of it. If we were to write up something in D, it 
could be properly cross-platform (so only one script instead of 3+), and I 
fully expect that it could be far, far cleaner than what we're forced to do in 
make.
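For what it's worth, the "one script in D" idea might look something like the sketch below. This is purely illustrative: the compiler names, sources, and output names are assumptions for the sake of the example, not dmd's actual build.

```d
// Hypothetical sketch of a single cross-platform build script in D,
// standing in for posix.mak plus win{32,64}.mak. Commands here are
// illustrative only; the real dmd build would differ.
import std.process : spawnProcess, wait;
import std.stdio : writeln;

void main()
{
    // Platform differences are isolated to version blocks instead of
    // being spread across separate makefiles in two flavours of make.
    version (Windows)
        string[] cmd = ["dmc", /* sources..., */ "-odmd.exe"];
    else
        string[] cmd = ["g++", /* sources..., */ "-o", "dmd"];

    writeln("would run: ", cmd);
    // wait(spawnProcess(cmd));  // the actual invocation
}
```

One code path for targets, defaults, and output locations would also address the divergence Nick describes elsewhere in this thread.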

- Jonathan M Davis


Re: Version of implementation for docs

2013-08-11 Thread Jonathan M Davis
On Sunday, August 11, 2013 06:30:57 JS wrote:
> Can we get the version of implementation/addition of a feature in
> the docs. e.g., if X feature/method/library is added into dmd
> version v, then the docs should display that feature.
> 
> For example, when I go to http://dlang.org/phobos/object.html I
> see tsize. When I try to use it on my class, dmd says it doesn't
> exist. I'm using 2.063. If I knew when tsize was implemented I
> would know if it is something I am doing wrong or if I'm just
> using a version where it isn't implemented.
> 
> const pure nothrow @property @safe size_t tsize();   (v 2.063)
> 
> or whatever.
> 
> P.S. note I added a fake version to it to demonstrate what the
> docs could look like... I know for some this post will be very
> difficult to understand... some will ask for a use case, some
> will call me a troll... Others will say it is not useful, and
> some will say it is too difficult to implement or will be too
> hard to maintain.

If you want the docs that go with your version of the compiler, then look at 
the ones that come with it (they are provided in the zip file - I don't know if 
they get installed with the installers though). We have enough trouble keeping 
the documentation up-to-date and accurate without having to worry about 
versioning. And for the most part, if you have issues with it due to using an 
older version of the compiler, the advice is going to be to upgrade to the 
latest compiler.

- Jonathan M Davis