Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Max Samukha

On 12/20/2010 08:43 AM, Walter Bright wrote:

bearophile wrote:

Many games are like drugs.


Not for me. I get bored with games. You don't get bored with drugs.


You didn't play StarCraft when you were a teenager.


Re: is it possible to learn D(2)?

2010-12-20 Thread Walter Bright

Andrei Alexandrescu wrote:
The main issue is perceived value. Books are not T-shirts as significant 
time would have to be spent on reading them. Say I had 40 people in the 
audience and 40 books. Then it would have been like passing around 
marketing samples of no perceived value.


Right. If you passed them out to all comers, they'd share the fate of all such 
premiums. A third would be left on the floor, a third in the trash bins on the 
way out, a third taken home and forgotten about, and none read.


Re: gdc-4.5 testing

2010-12-20 Thread Anders F Björklund

Iain Buclaw wrote:

Other than that, it seemed to apply cleanly to
Fedora 14's version of GCC (gcc-4.5.1-20100924)



Not only applied, but also seems to be working... :-)
Once the enormous build and test completed, that is.
So now you can install both ldc and gcc-d (gdc),
and work with both Tango and Phobos from RPM packages.



That's certainly nice to hear, considering the number of changes required was
considerably less than what was needed for gcc-4.4 (then again, many of them were
backports from gcc-4.5 anyway ;). Of those changes made, they all turned out to be
pretty quick/lazy edits.


I uploaded the packages to SourceForge, if anyone else
wants to try them... It's made for Fedora 14 (x86_64):

http://sourceforge.net/projects/gdcgnu/files/gdc/8ac6cb4f40aa/

gcc-d-4.5.1-4.fc14.x86_64.rpm (5.2M) # gdc
phobos-devel-4.5.1-4.fc14.x86_64.rpm (764K)

gcc-4.5.1-4.fc14.diff (3901 bytes, the specfile changes)
gcc-4.5.1-4.fc14.src.rpm (54M, but 5G+ / hours to build)

As noted earlier, LDC and Tango was already part of the
system release and are available in the yum repositories:

https://fedoraproject.org/wiki/Features/D_Programming

ldc-0.9.2-25.20101114hg1698.fc14.x86_64.rpm (4.1M)
tango-devel-0.99.9-19.20100826svn5543.fc14.x86_64.rpm (2.2M)

And then it's just a matter of running gdmd or ldmd,
but if you want to use D2 you should still install dmd:

http://www.digitalmars.com/d/2.0/changelog.html

http://ftp.digitalmars.com/dmd-2.050-0.i386.rpm


The GDC RPMs need to be built for i686, updated to 4.5.1-6 -
and adopted for inclusion in Rawhide, upgraded to GCC 4.6...

Most likely the imports should be moved to include/d/4.5
and libgphobos.a moved to inside lib/gcc directory, too ?

But that's up to Fedora packagers.

--anders


Re: New syntax for string mixins

2010-12-20 Thread Don

VladD2 wrote:

Don Wrote:

I think VladD2 is right: You need to keep track of both the current system and
the target system. Unfortunately, there is some information about the target
system that the compile-time code wouldn't be able to discern without giving it
the ability to run code (RPC? Virtualization? Really, really good emulator?) on
the target system, but then again, that's a limitation of any cross-compiling
scenario.
Note that for this to work at all, the compiler needs to be able to
generate executable code for platform X as well as for Y -- that is, it
needs to include two back-ends.


If the macros have been compiled and are in binary (executable) form, the
compiler only needs to be able to generate code for platform X,


Yes, but it's not a compiler for platform X! It's only a compiler for 
platform Y.



and run macros (execute code from a DLL). This is exactly what the Nemerle
compiler does.


The .NET system always has a compiler for the platform it's running on. 
That's not necessarily true for D compilers.



In this case, compiling the same macros looks like any other compilation
process (on platform X for platform Y).


I don't think it's quite the same. In a makefile, every executable is 
listed, and so you can have some degree of control over it. 


Trust rmdir... lol!
And what about NAnt or MSBuild, which can have binary extensions?

I think you are completely wrong.

But in this 
scenario, the compiler is making calls to arbitrary shared libraries 
with arbitrary parameters.

It means the compiler cannot be trusted *at all*.


The experience of Lisp (50 years!) and Nemerle (about 6 years) shows that the ability to access any library is not a problem.


I don't think Nemerle has been sufficiently widely used, to be able to 
draw strong conclusions from it. The Lisp argument is strong though.


 This is a huge advantage.


And to limit what a macro can do, you can simply forbid it from using certain libraries.


I hope you're right, because it's indeed a powerful feature. But I'd 
want to hear the opinion of a security expert.
In particular, if it can be shown that it's exactly the same as Lisp, I 
would be convinced.


Re: What is this D book?

2010-12-20 Thread Daniel Gibson
On Mon, Dec 20, 2010 at 10:36 AM, spir denis.s...@gmail.com wrote:
 On Sun, 19 Dec 2010 20:33:39 -0800
 Jonathan M Davis jmdavisp...@gmx.com wrote:

 The funny thing is that I wouldn't have expected anyone to be able to create a
 book 96 pages long on D just out of Wikipedia articles. And $44 for 96 pages?!
 LOL. The knowledge in that book would have to be pure gold to be worth that
 kind of price. What a total rip-off. It probably popped up just because TDPL
 was released and some guys were looking to cash in. Maybe they were even hoping
 that some people would be foolish enough to mistake their book for TDPL.

 I don't think that print-on-demand publishing is necessarily a bad thing, but
 this is obviously a case where someone is trying to cash in on something that
 they did no work for.

 I agree the price is surprisingly high.
 But you are very wrong in stating "trying to cash in on something that they
 did no work for": Making a book out of diverse material is _much_ work (I've
 done it). Actually, it is so much and such difficult work that it's often worth
 rewriting from scratch! Just like trying to put together a bunch of lib modules
 and make an app run fine out of that ;-)


I don't think they put much work in it. Probably just print the
wikipedia-article and some related (==linked) articles, maybe
recursively to fill at least these 96 pages.
I'd be surprised if these books weren't 99% automatically generated
(the last 1% is selecting a picture for the cover).


Re: New syntax for string mixins

2010-12-20 Thread Don

Alex_Dovhal wrote:

Don nos...@nospam.com wrote:
I don't think it's quite the same. In a makefile, every executable is 
listed, and so you can have some degree of control over it. But in this 
scenario, the compiler is making calls to arbitrary shared libraries with 
arbitrary parameters.

It means the compiler cannot be trusted *at all*.


You are partially right - it's unsafe for a browser language where code
is taken from an untrusted source. But this feature gives so much power to the
macro system that I think it is worth considering. IMO, compiled code is usually
run just after compilation (with the same permissions as the compiler) - so
compiled code can do dangerous things and can't be trusted at all either, but no
one worries about that. Yes, the compiler can't be *trusted* with these features,
but if one knows what he is doing, why prevent him - add an option
--enable-ctfe-DANGEROUS-features to allow potentially dangerous features, and
then it wouldn't be so unexpected. Are those features hard to add
to the current implementation?


In order for CTFE code to call pre-compiled code, three things are required:
(1) the compiler needs to be able to find the file (.obj/.lib/shared 
library) containing the compiled code;
(2) the compiler needs to be able to load the module and call it. This 
requires some form of dynamic linking.
(3) We need a marshalling step, to convert from compiler literal to 
compiled data, and back.



Step (3) is straightforward. The challenge is step (2), although note 
that it's a general "allow the compiler to load a plugin" problem, and 
doesn't have much to do with CTFE.





Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Max Samukha

On 12/19/2010 09:48 PM, Nick Sabalausky wrote:





Assuming you meant that as a sarcastic counter-example: There may be ways in
which they make life suck less, but *overall*, they're generally considered
to make life suck *more*. So the "make life suck less" rule still holds.

Although, if you meant it seriously, then never mind: The whole
drug-legalization issue is one of the few debates I actively avoid :)



I have no clear opinion about games, though I do believe they carry some 
similarity with drugs in the way they make a person neglect stuff 
important for his survival in the reality he was born into.


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Kagamin
Caligo Wrote:

 You are absolutely right; life sucks for many people, and that's why some of
 them choose to play video games.  It gives them a chance to escape reality,
 and game companies exploit this to make money.  Game companies use all kinds
 of psychology in their games to keep you playing as long as possible.  That
 is why to me there is no honor in game development.  Also, I never said it's
 worthless; they make tons of money, and that's almost always at the expense
 of people like you.

The fact is, all humans build their own reality - yes - because they're not fond
of raw nature. What you're trying to say is actually "Hey, they live different
lives! HATEHATEHATE!!!"


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Christopher Nicholson-Sauls
On 12/20/10 04:25, Max Samukha wrote:
 On 12/19/2010 09:48 PM, Nick Sabalausky wrote:


 Assuming you meant that as a sarcastic counter-example: There may be
 ways in which they make life suck less, but *overall*, they're generally
 considered to make life suck *more*. So the "make life suck less" rule
 still holds.

 Although, if you meant it seriously then nevermind: The whole
 drug-legalization issue is one of the few debates I actively avoid :)

 I have no clear opinion about games, though I do believe they carry some
 similarity with drugs in the way they make a person neglect stuff
 important for his survival in the reality he was born into.

That's a (sadly common) problem with people, though; not with games.
The same can be validly stated for television (which I usually avoid,
anyhow), sports, over-reliance on restaurants (a personal pet peeve),
and checking the D newsgroups... oh shi-

-- Chris N-S


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Christopher Nicholson-Sauls
On 12/19/10 14:00, Nick Sabalausky wrote:
 Caligo iteronve...@gmail.com wrote in message 
 news:mailman.30.1292776925.4748.digitalmar...@puremagic.com...
 You are absolutely right; life sucks for many people, and that's why some 
 of
 them choose to play video games.  It gives them a chance to escape 
 reality,
 and game companies exploit this to make money.  Game companies use all 
 kinds
 of psychology in their games to keep you playing as long as possible. 
 That
 is why to me there is no honor in game development.  Also, I never said 
 it's
 worthless; they make tons of money, and that's almost always at the 
 expense
 of people like you.

 
 The old "games as drugs" argument.
 
 First of all, anyone who's a slave to psychological tricks is an idiot 
 anyway. Casinos use many psychological tricks to induce addiction and yet 
 most people are perfectly able to control themselves.
 
 Secondly, if you see movies, music, comics and novels as the same
 dishonorable escapism, then I'll grant that your reasoning is at least
 logically sound, even though you're in an extremely tiny minority on that
 viewpoint. If not, however, then your whole argument crumbles into a giant
 pile of blatant bullshit, and you're clearly far too much of an imbecile to
 even continue discussing this with.
 
 If it helps any, I'm not one of those baby boomers.  I'm actually in my
 early twenties.  So if you are going to insult me at least do it properly.

 
 Fine, but that does make you the exception.
 
 You sound way too angry and unhappy.
 
 I just have no tolerance for such obvious lies and idiocy.
 
 Instead of playing video games, you
 should definitely pick up Ruby if you haven't already.  I hear it's
 designed to make programmers happy.

 
 I realize you mean that in jest, but I actually have been using Ruby (Rake) 
 as the build system for a big web project. It gets the job done, but I'm not 
 exactly impressed with it.
 

Take a look at Thor sometime.  It's a replacement for Rake, and for some
jobs can be better.  Rails/3.x is apparently adopting it (or has adopted
it... I haven't made the jump to 3 yet).

https://github.com/wycats/thor

-- Chris N-S


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Kagamin
Nick Sabalausky Wrote:

 Yea, and another thing is the matter of art in general: If you're an 
 ultra-utilitarian like Christopher seems to be (and even most programmers 
 aren't ultra-utilitarian), then art can be seen as lacking significant 
 contribution to society.

I think the effect of art is quite tangible, so I see no reason not to call it
utilitarian.


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Kagamin
Christopher Nicholson-Sauls Wrote:

 That's a (sadly common) problem with people, though; not with games.
 The same can be validly stated for television (which I usually avoid,
 anyhow), sports, over-reliance on restaurants (a personal pet peeve),
 and checking the D newsgroups... oh shi-

I hope Walter won't spend 6 hours per day checking the D newsgroups... :3


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Christopher Nicholson-Sauls
On 12/19/10 14:52, Nick Sabalausky wrote:
 Daniel Gibson metalcae...@gmail.com wrote in message 
 news:mailman.37.1292790264.4748.digitalmar...@puremagic.com...
 On Sun, Dec 19, 2010 at 5:41 PM, Caligo iteronve...@gmail.com wrote:
 You are absolutely right; life sucks for many people, and that's why some 
 of
 them choose to play video games. It gives them a chance to escape 
 reality,
 and game companies exploit this to make money. Game companies use all 
 kinds
 of psychology in their games to keep you playing as long as possible. 
 That
 is why to me there is no honor in game development.

 This is bullshit.
 Of course there are games with that goal (WoW, ...), but this doesn't make
 game development in general dishonorable. There are many games that are
 not like this,
 for example most single player only games.. you play them until the end or 
 until
 you can't get any further and that's it.. maybe you play them again in
 the future, but
 it's not like a constant addiction. (I'm not saying that multi player
 games are generally
 more dangerous or anything, single player games are just an example 
 everybody
 should be able to comprehend)
 There are also game developers who openly label games like WoW unethical,
 e.g. http://en.wikipedia.org/wiki/Jonathan_Blow

 
 Interesting. I don't think I would go so far as to claim that WoW was 
 unethical...just uninteresting ;) But that's just me. This is at least one 
 thing the videogame world does that I do consider unethical: 
 Proprietary/Closed platforms. But that's not just a videogame thing, of 
 course. I consider proprietary/closed platforms in general to be unethical. 
 (Oh crap, I think I can feel myself turning into Stallman!)
 

(On the upside, that means you get to grow an epic beard.)


Re: is it possible to learn D(2)?

2010-12-20 Thread Jeff Nowakowski

On 12/20/2010 02:48 AM, Andrei Alexandrescu wrote:


Yes, how about it? Is this a murder investigation? I have a hard time
figuring out what is the ultimate purpose of spelunking my past
statements to look for inconsistencies.


Hypocrisy is a pet peeve of mine. How about discussing the gory problems 
with const, and discussing the true state of the language at the next D 
talk? If you're going to bash Go presentations for cherry-picking, you 
should hold yourself to the same standards.


As for why I did the research, if people are going to deny statements I 
made, then I'm going to back them up with facts. I did rescind one 
erroneous statement of mine.


My original post was in response to a thread about somebody looking to 
jump into D2, and somebody who responded asking why D1 was even being 
worked on. I'd say my post was on topic.


Re: Why Ruby?

2010-12-20 Thread Stephan Soller

On 19.12.2010 14:22, Alex_Dovhal wrote:

Stephan Sollerstephan.sol...@helionweb.de  wrote:

I don't think that the syntax improvement of chaining is worth such an
effort. It adds tons of complexity for only a very limited gain. I'm not
sure if I could write such self-parsed code without thinking about that
pipeline.


I think I don't fully understand what you mean by syntax improvement for
chaining. This my code is almost possible to run but needs some time and
work (to get round CTFE limitations, enhance parser). But syntax parser
behind it is rather primitive, if to use external tools one can make much
better syntax parsers with much less efford.



I read your post in the context of method chaining with templates like 
filter! and map!. Looks like I missed the point. :)


I think your idea is pretty impressive. Maybe useful for some high-level 
stuff like mathematical formulas.


Re: gdc-4.5 testing

2010-12-20 Thread Neal Becker
Does this support building shared libs now (on x86_64)?

Anders F Björklund wrote:

 ...
 
 I uploaded the packages to SourceForge, if anyone else
 wants to try them... It's made for Fedora 14 (x86_64):
 
 http://sourceforge.net/projects/gdcgnu/files/gdc/8ac6cb4f40aa/



Re: gdc-4.5 testing

2010-12-20 Thread Anders F Björklund

Neal Becker wrote:

Does this support building shared libs now (on x86_64)?


...

I uploaded the packages to SourceForge, if anyone else
wants to try them... It's made for Fedora 14 (x86_64):

http://sourceforge.net/projects/gdcgnu/files/gdc/8ac6cb4f40aa/


You mean in general, or specifics? (like throwing exceptions
or allocating memory or whatever...) Was it a problem before?

Basic creation seems to work:

$ gdc -fPIC -o foo.o -c foo.d
$ gcc -shared -o libfoo.so foo.o
$ file libfoo.so
libfoo.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), 
dynamically linked, not stripped


AFAIK both shared libraries and x86_64 code have been working
for years with GDC, even though that is not the case with DMD.

Phobos is still static, though.

--anders


Re: What is this D book?

2010-12-20 Thread Andrej Mitrovic
On 12/20/10, Daniel Gibson metalcae...@gmail.com wrote:

 I'd be surprised if these books weren't 99% automatically generated
 (the last 1% is selecting a picture for the cover).


This is exactly what they do (or maybe it's just a one-man operation).
Read this comment from Wikipedia:

As an example of the care given to the books, the book "History of
Georgia (country)" is about the European country Georgia but has a
cover image of Atlanta in the American state Georgia.[nan 7] The
Wikipedia article "History of Georgia (country)" does not make such a
comical blunder. Another example is a book about an American football
team with a soccer player on the cover.[nan 8]

The articles are often poorly printed with features like missing
characters from foreign languages, and numerous images of arrows where
Wikipedia had links. It appears much better to read the original
articles for free at the Wikipedia website than paying a lot of money
for what has been described as a scam or hoax. Advertising for the
books at Amazon and elsewhere does not reveal the free source of all
the content. It is only revealed inside the books, which may satisfy
the license requirements for republishing of Wikipedia articles

An Amazon.com book search on June 9, 2009 gives 1009 (August 6 gives
1859, October 1 gives 3978, September 20, 2010 gives 64,890) books
from Alphascript Publishing,[nan 3][nan 4] an imprint of VDM
Publishing Group. 1003 of the books are described as by John
McBrewster, Frederic P. Miller, and Agnes F. Vandome. They are called
"editors" in the book listings. A recent author is named as "Mainyu
Eldon A." or similar. It seems the only content of the many books is
free Wikipedia articles, with no sign that these three people have
contributed to them. The books often have very long titles that are
full of keywords. Presumably, this is to make them more likely to be
found when searching on sites such as Amazon.com.

As of 20 September 2010, 64,881 similar books are also available from
Betascript Publishing [nan 9][nan 10] by Lambert M. Surhone, Miriam
T. Timpledon, Susan F. Maseken,[nan 11] including a book about "The
Police Reunion Tour",[nan 12] featuring a picture of Police on its
cover.[nan 13]

and

http://news.ycombinator.com/item?id=1666149:
There's unfortunately already a whole boatload with extremely poor
quality control, totally crapping up Google Books and Amazon results,
especially for more niche topics. They're generally automatically
compiled by a script for tens of thousands of titles, and then printed
on demand, attempting to pass themselves off as original books on the
subject (no mention of Wikipedia anywhere). Two of the more
notorious publishers are Icon Group (some examples:
http://www.google.com/search?tbs=bks:1tbo=1q=%22we...) and
Alphascript (example: http://www.amazon.com/dp/6130070446). Sort of a
meatspace version of content farming.

So really there's work going on here, they just print out articles
with no editing whatsoever, and print a pretty picture on the front
page of the book. I wouldn't be surprised that those 3-4 editors that
are always listed do not even exist.


Re: What is this D book?

2010-12-20 Thread Andrej Mitrovic
Sorry, I meant there's *no work going on here* in that sentence.

On 12/20/10, Andrej Mitrovic andrej.mitrov...@gmail.com wrote:
 ...

 So really there's work going on here, they just print out articles
 with no editing whatsoever, and print a pretty picture on the front
 page of the book. I wouldn't be surprised that those 3-4 editors that
 are always listed do not even exist.



Re: gdc-4.5 testing

2010-12-20 Thread Lutger Blijdestijn
Anders F Björklund wrote:

 ...
 
 I uploaded the packages to SourceForge, if anyone else
 wants to try them... It's made for Fedora 14 (x86_64):

Thnx, installs and works fine for a few quick tests. Would be great to see 
the first D2 compiler in the next fedora release, and debian / ubuntu too of 
course. Great work!



Re: Why Ruby?

2010-12-20 Thread Alex_Dovhal

Stephan Soller stephan.sol...@helionweb.de wrote
 I read your post in the context of method chaining with templates like 
 filter! and map!. Looks like I missed the point. :)

 I think your idea is pretty impressive. Maybe useful for some high-level 
 stuff like mathematical formulas.

Yes, I borrowed the idea and syntax (changed a little) from Maxima CAS
(a computer algebra system - a system for symbolically manipulating algebraic
formulas) - there they have functions like makelist(i*i, i, 1, 10),
sum(i*i, i, 1, 10), prod..., etc., which appealed to me so much. Such an expression
in Maxima is converted to Lisp, then evaluated and sent back to the CAS.
Here the string is parsed into an AST, which then forms a new string to be mixed into D code.
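A minimal sketch of that technique in D (all names made up for illustration): a CTFE
function expands a Maxima-style sum(i*i, i, 1, 10) request into ordinary D code, which
a string mixin then splices into the program.

string genSum(string expr, string var, string lo, string hi)
{
    // Build the source of an immediately-called function literal,
    // so the mixin can be used as an expression.
    return "(){ int acc = 0;"
         ~ " foreach (" ~ var ~ "; " ~ lo ~ " .. " ~ hi ~ " + 1)"
         ~ " acc += " ~ expr ~ ";"
         ~ " return acc; }()";
}

unittest
{
    // 1^2 + 2^2 + ... + 10^2 == 385
    assert(mixin(genSum("i*i", "i", "1", "10")) == 385);
}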




Re: Inlining Code Test

2010-12-20 Thread Don

Nick Voronin wrote:

On Sat, 18 Dec 2010 02:17:46 +0100
Don nos...@nospam.com wrote:


Nick Voronin wrote:

btw, is there no explicit alignment for variables in D at all?
align(8) double d; compiles if d is global, but it does nothing.
That's a regression. Large globals are always aligned to a 16-byte 
boundary (see changelog for 2.007)


On second thought, large globals in the static segment (as the log says) are probably
only those with the __gshared prefix. And they do look aligned.


Good catch! Yes, that makes perfect sense.
So the bug is that align() is ignored for TLS variables.
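A minimal sketch of the two cases being compared, at module scope (illustrative only):

// TLS variable (the default for module-level variables in D2):
// per this thread, the requested alignment is currently ignored here.
align(8) double tlsVar;

// __gshared places the variable in the static data segment,
// where large globals do get the expected alignment.
__gshared align(8) double sharedVar;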


Re: New syntax for string mixins

2010-12-20 Thread Alex_Dovhal

Don nos...@nospam.com wrote:
 In order for CTFE code to call pre-compiled code, three things are 
 required:
 (1) the compiler needs to be able to find the file (.obj/.lib/shared 
 library) containing the compiled code;
 (2) the compiler needs to be able to load the module and call it. This 
 requires some form of dynamic linking.
 (3) We need a marshalling step, to convert from compiler literal to 
 compiled data, and back.


 Step (3) is straightforward. The challenge is step(2), although note that 
 it's a general allow the compiler to load a plugin problem, and doesn't 
 have much to do with CTFE.

Understood. So, it should be dynamically loaded: the compiler should know which D
library to load for the function being used, and that function's name mangling; also,
Phobos would then have to be a dynamic library so its functions could be called from a
macro. This is non-trivial stuff, and the compiler itself is written in C++, so this
plugin architecture would have to work in C++ too. Also, when cross-compiling, you need
a compiler for both the X and Y architectures, or two compilers communicating with each
other, so that when the compiler for Y finds a macro it can call the compiler for X and
dynamically load the produced function into itself. OK, IMO it's too complex and
experimental to be of any priority in the near future.




Re: executable size

2010-12-20 Thread Steven Schveighoffer

On Sun, 19 Dec 2010 08:25:36 -0500, Gary Whatmore n...@spam.sp wrote:


jovo Wrote:


Hi,
Today I compiled my old two module console program with d-2.50.
It uses only std.c.time, std.c.stdio, std.random and templates.
Compiled with -O -release, on windows.
Executable size (d-2.50): 4.184 kb.
Tried with d-1.30: 84 kb.

Is it expected?


This is something you shouldn't worry too much about. Hard drives and  
system memory are getting bigger. 4 megabytes isn't that much when you  
have soon 4 terabytes of space. A single PC rarely has one million  
executables. So, keep writing more code. That's what the space is for.


I hate this excuse, it's used all the time.  The reality is that  
executable size *does* matter, and it always will.  Smaller programs load  
and run faster.


The other reality is that this is a toolchain issue, and not a language or  
spec issue.  With improved tools, this gets better, so it's not worth  
worrying about now.  When D gets full shared-library support, this problem  
goes away.


Array appending performance/invalidity used to be one of the most common  
negatives cited on D.  Now, nobody talks about it because it's been  
fixed.  You will see the same thing with exe size once D uses shared libs.


-Steve


Re: executable size

2010-12-20 Thread Andrej Mitrovic
On 12/20/10, Steven Schveighoffer schvei...@yahoo.com wrote:
 The reality is that
 executable size *does* matter, and it always will.  Smaller programs load
 and run faster.

Smaller programs, as in *less code*? Yes. But I really doubt that an
application with the *exact same code* is faster if its executable
size shrinks. There are some apps that specialize in shrinking
executable size, I know that.

I'd really like to see some performance comparisons of two copies of
the same app, one with the original exe size, and the other processed
by an exe shrinker.


Re: try...catch slooowness?

2010-12-20 Thread Steven Schveighoffer

On Sun, 19 Dec 2010 07:33:29 -0500, spir denis.s...@gmail.com wrote:


Hello,


I had not initially noticed that the 'in' operator (for AAs) returns a
pointer to the looked-up element. So, to avoid a double lookup in
cases where lookups may fail, I had naively used try...catch. When I later
switched to the pointer returned by 'in', my code suddenly became blitz fast in
cases of very numerous lookups. So I wondered about exception handling
efficiency. Below is a test case (on my computer, both loops run in about the
same average time):


import std.stdio;                   // writefln
import core.exception : RangeError;

void main () {
    byte[uint] table = [3:1, 33:1, 333:1];
    byte b;
    byte* p;
    Time t0;                        // Time/time() are a local timing helper (not shown)
    uint N1 = 246, N2 = 999;
    // try...catch
    t0 = time();
    foreach (n ; 0..N1) {
        try b = table[n];
        catch (RangeError e) {}
    }
    writefln("try...catch version time: %sms", time() - t0);
    // pointer
    t0 = time();
    foreach (n ; 0..N2) {
        p = (n in table);
        if (p) b = table[n];
    }
    writefln("pointer version time: %sms", time() - t0);
    writefln("pointer version is about %s times faster", N2/N1);
}
==
try...catch version time: 387ms
pointer version time: 388ms
pointer version is about 40650 times faster

Note that both versions perform a single lookup trial; the difference
thus only lies in pointer deref vs try...catch handling, I guess. What
do you think?


This example is misleading.  First, catching an exception should be a rare  
occurrence (literally, an exception to the rule).  You are testing the  
case where catching an exception vastly outweighs the cases where an  
exception is not thrown.  What I'm saying is, catching an exception is  
very slow, but *trying* to catch an exception is not.


Second, exception handling is not meant to be used in the way you used  
it.  You don't use it as an extra return value.  I'd expect a more  
reasonable use of catching an exception in AAs as this:


try
{
    foreach(n ; 0..N1)
    {
        b = table[n];
    }
}
catch(RangeError e)
{
    writeln("Caught exception! ", e);
}

An exception is a recoverable error, but it usually means something is  
wrong, not 'business as usual'.  This doesn't mean it's impossible to  
design poor interfaces that use exceptions for everything, but it  
shouldn't be that way.  An exception should always be a rare occurrence,  
when something happens that you don't expect.  A huge clue that you are  
using exceptions poorly or that the interface is not meant to be used that  
way is if your exception handling is being done at the innermost level of  
your program.  Exception handling is great when it exists at a much higher  
level, because you can essentially do all error handling in one spot, and  
simply write code without worrying about error codes.


This is why the 'in' operator exists for AAs.

General rules of thumb for AAs:

1. if you expect that a value is always going to be present when you ask  
for it, use exception handling at a high level.
2. if you *don't* expect that, and want to check the existence of an  
element, use 'in'
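
A minimal example of rule 2, reusing the pointer returned by 'in' so the table is
searched only once (names are illustrative):

import std.stdio;

void main()
{
    int[string] ages = ["alice": 30, "bob": 41];

    if (auto p = "alice" in ages)
        writeln("alice is ", *p);   // dereference the pointer; no second lookup
    else
        writeln("no such key");
}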


Now, after saying all that, improving how exception handling works can  
only be good.  So comparing exception handling performance in D to  
exception handling in other languages can give a better idea of how well  
D's exception handling performs.


-Steve


Re: executable size

2010-12-20 Thread Steven Schveighoffer
On Mon, 20 Dec 2010 12:28:10 -0500, Andrej Mitrovic  
andrej.mitrov...@gmail.com wrote:



On 12/20/10, Steven Schveighoffer schvei...@yahoo.com wrote:

The reality is that
executable size *does* matter, and it always will.  Smaller programs
load and run faster.


Smaller programs, as in *less code*? Yes. But I really doubt that an
application with the *exact same code* is faster if its executable
size shrinks. There are some apps that specialize in shrinking
executable size, I know that.


No, I mean smaller exe size.  It's well known that shrinking an app so that
portions of it (or all of it) fit into the cache can increase performance.


If using common shared libraries, the OS only needs to load and store the
library in memory once, so it can save memory and load more programs, or a
program that consumes more memory during runtime can run faster without
having to swap to disk.


-Steve


Re: Why Ruby?

2010-12-20 Thread Jacob Carlborg

On 2010-12-19 22:02, Michel Fortin wrote:

On 2010-12-19 11:11:03 -0500, Jacob Carlborg d...@me.com said:


I can clearly see that you haven't used an Objective-C/D bridge. The
reason (or at least one of the reasons) for which Michel Fortin (as
well as I) gave up the Objective-C/D bridge and started to modify DMD
is template bloat. I'm not saying that using template strings as
lambdas is going to bloat your executable/library as much as the
bridge does but I always think twice before adding a template to my code.


I also want to add that the code bloat in the D/Objective-C bridge was
more because the bridge needed to create two stubs for each method in
all Cocoa classes, and those stubs contained code to translate
exceptions from one model to the other. Using templates and mixins made
the creation of those stubs easy, but I don't think another method of
generating these stubs would have fared better.


I was thinking about having the tool that creates the bindings 
generating all the necessary code inline and skip all the templates, 
just to see if there would be a difference in the speed of the 
compilation and the size of the generated binaries.



So the bloat came from the approach (generating stubs for everything)
much more than the implementation choice (templates). The new approach
is to avoid having to generate stubs by exposing directly the
Objective-C objects rather than wrappers around them. Less wrapping,
less bloat.


--
/Jacob Carlborg


Re: is it possible to learn D(2)?

2010-12-20 Thread Andrei Alexandrescu

On 12/20/10 6:02 AM, Jeff Nowakowski wrote:

On 12/20/2010 02:48 AM, Andrei Alexandrescu wrote:


Yes, how about it? Is this a murder investigation? I have a hard time
figuring out what is the ultimate purpose of spelunking my past
statements to look for inconsistencies.


Hypocrisy is a pet peeve of mine. How about discussing the gory problems
with const, and discussing the true state of the language at the next D
talk? If you're going to bash Go presentations for cherry-picking, you
should hold yourself to the same standards.


I understand. The issue is comparing apples with apples. Every language 
has implementation bugs and shortcomings. I'd be glad to discuss them if 
the gist of the talk were the state of implementation, or if asked 
during any of my talks on D.


What I didn't find becoming about the aforementioned talk on Go was that 
it presented only the good consequences of some PL design choices that 
come with tradeoffs having pluses and minuses in almost equal supplies. 
Taking that stand to its logical conclusion would lead one to believe 
that Go figured out some point that all other languages missed, which in 
my humble opinion is not the case. (BTW I believe that D _did_ figure 
out some points, and did make decisions with mostly positive 
consequences, that all other languages missed, such as the scope statement.)
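
(For readers unfamiliar with it, a minimal example of the scope statement, with
made-up names:)

import std.stdio;

void writeLog(string msg)
{
    auto f = File("log.txt", "a");
    scope(exit) f.close();                // always runs when the scope is left
    scope(failure) writeln("log failed"); // runs only if an exception escapes

    f.writeln(msg);
}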



As for why I did the research, if people are going to deny statements I
made, then I'm going to back them up with facts. I did rescind one
erroneous statement of mine.


Will the jury please disregard the erroneous statement.


My original post was in response to a thread about somebody looking to
jump into D2, and somebody who responded asking why D1 was even being
worked on. I'd say my post was on topic.


I agree.


Andrei


Re: executable size

2010-12-20 Thread Jacob Carlborg

On 2010-12-20 18:10, Steven Schveighoffer wrote:

On Sun, 19 Dec 2010 08:25:36 -0500, Gary Whatmore n...@spam.sp wrote:


jovo Wrote:


Hi,
Today I compiled my old two module console program with d-2.50.
It uses only std.c.time, std.c.stdio, std.random and templates.
Compiled with -O -release, on windows.
Executable size (d-2.50): 4.184 kb.
Tried with d-1.30: 84 kb.

Is it expected?


This is something you shouldn't worry too much about. Hard drives and
system memory are getting bigger. 4 megabytes isn't that much when you
have soon 4 terabytes of space. A single PC rarely has one million
executables. So, keep writing more code. That's what the space is for.


I hate this excuse, it's used all the time. The reality is that
executable size *does* matter, and it always will. Smaller programs load
and run faster.

The other reality is that this is a toolchain issue, and not a language
or spec issue. With improved tools, this gets better, so it's not worth
worrying about now. When D gets full shared-library support, this
problem goes away.


One problem that seems hard to solve in a good way is the module 
constructors. Currently on Mac OS X with Tango, when it's built as a 
dynamic library, all module constructors are run, regardless of whether 
they're imported or not.



Array appending performance/invalidity used to be one of the most common
negatives cited on D. Now, nobody talks about it because it's been
fixed. You will see the same thing with exe size once D uses shared libs.

-Steve



--
/Jacob Carlborg


Re: executable size

2010-12-20 Thread Steven Schveighoffer

On Mon, 20 Dec 2010 14:15:26 -0500, Jacob Carlborg d...@me.com wrote:


On 2010-12-20 18:10, Steven Schveighoffer wrote:

On Sun, 19 Dec 2010 08:25:36 -0500, Gary Whatmore n...@spam.sp wrote:


jovo Wrote:


Hi,
Today I compiled my old two module console program with d-2.50.
It uses only std.c.time, std.c.stdio, std.random and templates.
Compiled with -O -release, on windows.
Executable size (d-2.50): 4.184 kb.
Tried with d-1.30: 84 kb.

Is it expected?


This is something you shouldn't worry too much about. Hard drives and
system memory are getting bigger. 4 megabytes isn't that much when you
have soon 4 terabytes of space. A single PC rarely has one million
executables. So, keep writing more code. That's what the space is for.


I hate this excuse, it's used all the time. The reality is that
executable size *does* matter, and it always will. Smaller programs load
and run faster.

The other reality is that this is a toolchain issue, and not a language
or spec issue. With improved tools, this gets better, so it's not worth
worrying about now. When D gets full shared-library support, this
problem goes away.


One problem that seems hard to solve in a good way is the module
constructors. Currently on Mac OS X with Tango, when it's built as a
dynamic library, all module constructors are run, regardless of whether
they're imported or not.


This is definitely a problem.  The issue I see mostly here is that Tango  
has many modules, specifically to allow trimming of unused code (another  
toolchain issue).


Two solutions that might work:

1. Mark the root module of the application (i.e. the one with main()).   
Then only initialize modules that are depended on by that module.  Where  
this fails is modules that define extern(C) functions (such as druntime),  
since you do not have to import those modules in order to call the  
functions.  I suppose modules with extern(C) declarations must also be  
marked as required.
2. Split the library into smaller libraries that would only be used when  
needed.


I'm not sure Phobos would have so much of an issue, because the number of  
modules is less.


One thing is for sure, this problem would be easier solved if we could  
decide things at link-time...


-Steve


Re: is it possible to learn D(2)?

2010-12-20 Thread Gour
On Mon, 20 Dec 2010 07:02:51 -0500
 Jeff == Jeff Nowakowski j...@dilacero.org wrote:

Hi Jeff,

Jeff Hypocrisy is a pet peeve of mine. How about discussing the gory
Jeff problems with const, and discussing the true state of the
Jeff language at the next D talk? If you're going to bash Go
Jeff presentations for cherry-picking, you should hold yourself to the
Jeff same standards.

Please don't take it personally... I'm just taking 'advantage' of your
post to suggest one thing to all the posters: Please be a little bit
more positive towards Walter and Andrei. They are not Supermen, but they are
sincerely trying to give us something good, and it's practically free.

Long ago, I bought and used Walter's Zortech C++ compiler which was
superb. Then I left programming waters and returned back some years
ago. I didn't want to go back to C(++), which has evolved into a huge beast, and
I skipped all the scripting languages, trying my fortune with Haskell.

However, after some time I've decided that I want something more
pragmatic...read a bit about D, saw Andrei's Google presentation (I
liked his enthusiasm), bought the TDPL book (and put it in hardcover
to last longer) and now I'm slowly learning the language hoping to use
it in the real world along with QtD.

Yes, I'm not blind and can see that some mistakes were probably made
within the D community... D is certainly not a perfect language (that title
is already reserved for Sanskrit :-) ), but if you can tell me about a
better language for practical daily programming, with such a
feature set or coverage of different programming paradigms - here I
am. ;)

I did my homework and nothing is similar to D, so please make this
newsgroup a more pleasant place by uttering some nice words about Walter
Bright. I sincerely believe they're humans who like to get some
encouragement as well, instead of a constant downpour of (very often)
unjustified criticism.

If anyone can do better, pls. step in and show the
example... otherwise, let us show some gratitude towards the people
trying to make programming more fun.


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA





Re: What is this D book?

2010-12-20 Thread Walter Bright

spir wrote:

I agree the price is surprisingly high. But you are very wrong in stating
"trying to cash in on something that they did no work for": Making a book out
of diverse material is _much_ work (I've done it). Actually, it is so much and
such difficult work that it's often worth rewriting from scratch! Just like trying
to put together a bunch of lib modules and make an app run fine out of that
;-)


I agree that being the editor is a lot of work.


Re: try...catch slooowness?

2010-12-20 Thread spir
On Mon, 20 Dec 2010 12:29:29 -0500
Steven Schveighoffer schvei...@yahoo.com wrote:

 This example is misleading.  First, catching an exception should be a rare  
 occurrence (literally, an exception to the rule).  You are testing the  
 case where catching an exception vastly outweighs the cases where an  
 exception is not thrown.  What I'm saying is, catching an exception is  
 very slow, but *trying* to catch an exception is not.

Right, understood, thank you.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: Why Ruby?

2010-12-20 Thread Michel Fortin

On 2010-12-20 12:50:47 -0500, Jacob Carlborg d...@me.com said:


On 2010-12-19 22:02, Michel Fortin wrote:

On 2010-12-19 11:11:03 -0500, Jacob Carlborg d...@me.com said:


I can clearly see that you haven't used an Objective-C/D bridge. The
reason (or at least one of the reasons) for which Michel Fortin (as
well as I) gave up the Objective-C/D bridge and started to modify DMD
is template bloat. I'm not saying that using template strings as
lambdas is going to bloat your executable/library as much as the
bridge does but I always think twice before adding a template to my code.


I also want to add that the code bloat in the D/Objective-C bridge was
more because the bridge needed to create two stubs for each method in
all Cocoa classes, and those stubs contained code to translate
exceptions from one model to the other. Using templates and mixins made
the creation of those stubs easy, but I don't think another method of
generating these stubs would have fared better.


I was thinking about having the tool that creates the bindings 
generating all the necessary code inline and skip all the templates, 
just to see if there would be a difference in the speed of the 
compilation and the size of the generated binaries.


That'd certainly make an interesting comparison.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: What is this D book?

2010-12-20 Thread Vladimir Panteleev
On Mon, 20 Dec 2010 11:39:07 +0200, Daniel Gibson metalcae...@gmail.com  
wrote:



I don't think they put much work in it. Probably just print the
wikipedia-article and some related (==linked) articles, maybe
recursively to fill at least these 96 pages.
I'd be surprised if these books weren't 99% automatically generated
(the last 1% is selecting a picture for the cover).


From [1]:

As an example of the care given to the books, the book "History of
Georgia (country)" is about the European country Georgia but has a cover
image of Atlanta in the American state Georgia.[nan 7] The Wikipedia
article "History of Georgia (country)" does not make such a comical
blunder. Another example is a book about an American football team with  
a soccer player on the cover.[nan 8]


  [1]:  
http://en.wikipedia.org/wiki/User:PrimeHunter/Alphascript_Publishing_sells_free_articles_as_expensive_books


--
Best regards,
 Vladimirmailto:vladi...@thecybershadow.net


thin heaps

2010-12-20 Thread Andrei Alexandrescu

Just saw this:

http://www.reddit.com/r/programming/comments/eoq15/implementing_shortest_path_in_c_is_much_easier/

in which a reader points to this paper on thin heaps:

http://www.cs.princeton.edu/courses/archive/spr04/cos423/handouts/thin%20heap.pdf

Does anyone here have experience with thin heaps? I think they'd be a 
good addition to std.container.



Andrei


Re: thin heaps

2010-12-20 Thread Seth Hoenig
 I think they'd be a good addition to std.container.


Why? What more do you need that std.container.BinaryHeap doesn't provide?
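
For reference, a minimal sketch of what std.container.BinaryHeap already provides
(a max-heap built over a backing array; values are illustrative):

import std.container;
import std.stdio;

void main()
{
    // heapify organizes the array in place and returns a BinaryHeap over it.
    auto heap = heapify([3, 1, 4, 1, 5]);

    while (!heap.empty)
    {
        write(heap.front, " ");  // largest element first (max-heap by default)
        heap.removeFront();
    }
    writeln();                   // prints: 5 4 3 1 1
}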


Re: thin heaps

2010-12-20 Thread Matthias Walter
On 12/20/2010 05:01 PM, Andrei Alexandrescu wrote:
 Just saw this:

 http://www.reddit.com/r/programming/comments/eoq15/implementing_shortest_path_in_c_is_much_easier/


 in which a reader points to this paper on thin heaps:

 http://www.cs.princeton.edu/courses/archive/spr04/cos423/handouts/thin%20heap.pdf


 Does anyone here have experience with thin heaps? I think they'd be a
 good addition to std.container.
You might have noticed my recent interest in std.container.BinaryHeap as a
priority queue. In fact I thought about implementing Fibonacci Heaps
with the same interface for D. I worked on a C++ implementation a while
ago, so maybe I should give the Thin Heaps a try?

Matthias


Re: try...catch slooowness?

2010-12-20 Thread Walter Bright

Michel Fortin wrote:
Exceptions are slow, that's a fact of life. The idea is that an 
exception should be exceptional, so the case to optimize for is the case 
where you don't have any exception: a try...catch that doesn't throw. 
Other ways to implement exceptions exist which are faster at throwing 
(setjmp for instance), but they're also slower at entering and exiting a 
try...catch block when no exception occurs.


[...]

Exceptions are recommended to avoid cluttering your normal code flow 
with error handling code. Clearly, in the code above exceptions are part 
of the normal code flow. That's not what exception are made for.


Right on all counts. Exceptions are for *exceptional* cases, i.e. unexpected 
errors, not normal control flow.


The implementation is designed so that the speed of normal execution is strongly 
favored over the speed of exception handling.


Re: is it possible to learn D(2)?

2010-12-20 Thread Walter Bright

Caligo wrote:

If there is going to be a D3, will it be backwards compatible with D2?


D3 plans are a complete unknown at the moment.

And why is work still being done on the D1 compiler?  Shouldn't it be 
marked deprecated so people stop using it and move to D2?


Since there are many breaking changes from D1 to D2, and a lot of people have 
large code bases in D1, it makes sense to support them with bug fixes. However, 
no new features are added to D1.


Re: is it possible to learn D(2)?

2010-12-20 Thread Jean Crystof
Walter Bright Wrote:

 Caligo wrote:
  If there is going to be a D3, will it be backwards compatible with D2?
 
 D3 plans are a complete unknown at the moment.

So, what's the main reason D3 plans are unknown? Have you got a list of 
realistic new features? Is it lack of manpower? Too early to release anything 
new now that D2 isn't in serious production use yet?


Re: is it possible to learn D(2)?

2010-12-20 Thread Walter Bright

Jean Crystof wrote:

So, what's the main reason D3 plans are unknown? Have you got a list of
realistic new features? Is it lack of manpower? Too early to release anything
new now that D2 isn't in serious production use yet?


D2 first.


Scala containers

2010-12-20 Thread Andrei Alexandrescu
Scala uses an inheritance-rich design for its containers that I'd 
considered for D (in a slightly different form as D doesn't have traits) 
and rejected. Still, I wonder how that design compares to D's choice.


http://blog.schauderhaft.de/2010/12/19/the-scala-collection-api-sucks-or-is-it-a-work-of-beauty/


Andrei


Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Nick Sabalausky
Max Samukha spam...@d-coding.com wrote in message 
news:ien42a$26q...@digitalmars.com...
 On 12/20/2010 08:43 AM, Walter Bright wrote:
 bearophile wrote:
 Many games are like drugs.

 Not for me. I get bored with games. You don't get bored with drugs.

 You didn't play StarCraft when you were a teenager.

I always got bored pretty quickly with RTSes. Pikmin's the only RTS that's 
held my attention for long, and it's very non-standard as far as RTSes go. 
I've always been more into 2D platformer, 1990's single-player FPS, shmup, puzzle 
(not so much falling block though), action RPG, and adventure games.




Re: Game development is worthless? WTF? (Was: Why Ruby?)

2010-12-20 Thread Nick Sabalausky
Christopher Nicholson-Sauls ibisbase...@gmail.com wrote in message 
news:ienfgr$2st...@digitalmars.com...
 On 12/19/10 14:52, Nick Sabalausky wrote:

 Interesting. I don't think I would go so far as to claim that WoW was
 unethical...just uninteresting ;) But that's just me. This is at least 
 one
 thing the videogame world does that I do consider unethical:
 Proprietary/Closed platforms. But that's not just a videogame thing, of
 course. I consider proprietary/closed platforms in general to be 
 unethical.
 (Oh crap, I think I can feel myself turning into Stallman!)


 (On the upside, that means you get to grow an epic beard.)

Heh, I actually do have a beard. Although it's not quite Stallman-level.




Re: rdmd bug?

2010-12-20 Thread spir
On Mon, 20 Dec 2010 04:27:33 +0300
Nick Voronin elfy...@gmail.com wrote:

 On Mon, 20 Dec 2010 01:24:02 +0100
 CrypticMetaphor crypticmetapho...@gmail.com wrote:
 
  Anyway, the problem is, if I call rdmd from outside the folder in which 
  the main source resides in, and main includes another file in that 
  folder, I get an error.
 
  // If I'm in a shell, and I do this, I get an error:
  ...\projectfolderrdmd src\main.d
  src\main.d(2): Error: module test is in file 'test.d' which cannot be read
  import path[0] = C:\D\dmd2\windows\bin\..\..\src\phobos
  import path[1] = C:\D\dmd2\windows\bin\..\..\src\druntime\import
 
  Anyway, I want to be able to compile with rdmd from a different folder, 
  is this a bug? or should I use a different tool? :-S
  *aahhh*
 
 Add -Ifullpath_to_projectfolder\src. It's the way it works IMHO, if you 
 import something it must be relative to search path or to current dir. There 
 may be a better way (replace current dir with the dir where source is, but it 
 will take away control), but this works.
 
 There is a bug though, I can't make it work with -Irelative_path_to_src. 
 Looks like .deps contain paths relative to where rdmd was ran, while dmd 
 interprets them as paths relative to where .deps file is.

Yes, I think it cannot work, since -I is an option of dmd, not of rdmd. rdmd
just passes options through to dmd, if I understand correctly. What you want is to
influence a feature provided by rdmd, not dmd, namely automatic inclusion of
(non-standard) imports.
But sure, it may be a bug: rdmd should look for imports using paths relative
to the location of the (app) module it is given. Or, for simplicity, we can
just state it must be called from that location.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: Classes or stucts :: Newbie

2010-12-20 Thread spir
On Sun, 19 Dec 2010 21:33:56 -0500
bearophile bearophileh...@lycos.com wrote:

 So, putting classes on the stack kind of negates the whole point of having 
 both structs and classes in the first place.  
 
 This is false, the definition of D class instance doesn't specify where the 
 instance memory is allocated.

For me, the important difference is that classes are referenced, while structs 
are plain values. This is a semantic distinction of highest importance. I would 
like structs to be subtype-able and to implement (runtime-type-based) 
polymorphism.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: Classes or stucts :: Newbie

2010-12-20 Thread Jonathan M Davis
On Monday 20 December 2010 01:19:31 spir wrote:
 On Sun, 19 Dec 2010 21:33:56 -0500
 
 bearophile bearophileh...@lycos.com wrote:
  So, putting classes on the stack kind of negates the whole point of
  having both structs and classes in the first place.
  
  This is false, the definition of D class instance doesn't specify where
  the instance memory is allocated.
 
 For me, the important difference is that classes are referenced, while
 structs are plain values. This is a semantic distinction of highest
 importance. I would like structs to be subtype-able and to implement
 (runtime-type-based) polymorphism.

Except that this contradicts the fact that they're value types. You can't have a 
type which has polymorphism and is a value type. By its very nature, 
polymorphism requires you to deal with a reference.

C++ allows you to put classes on the stack. It even allows you to assign a 
derived type to a base type where the variable being assigned to is on the 
stack. The result is shearing (more commonly known as object slicing). The only part assigned is the base type portion, 
and the data which is part of the derived type is lost. That's because the 
variable _is_ the base type. A value type _is_ a particular type _exactly_ and 
_cannot_ be any other type. This is distinctly different from a reference of a 
base type which points to an object which is of a derived type. In that case, 
the variable is a reference of the base type, but the object referenced is in 
fact the derived type. The indirection allows you to use the derived type as if 
it were the base type. It allows you to use polymorphism. Without that 
indirection, you can't do that.
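
In D terms, a tiny sketch of that difference (hypothetical types, purely to 
illustrate):

class Base           { int x; }
class Derived : Base { bool extra; }

struct Value         { int x; }

void main()
{
    Base b = new Derived();                // b is a reference; the object stays a Derived
    assert(typeid(b) == typeid(Derived));  // its dynamic type is preserved

    Value v1 = Value(1);
    Value v2 = v1;                         // a plain copy: v2 is exactly a Value, nothing more
    assert(v2.x == 1);
}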

So, you _could_ make structs have inheritance, but doing so would introduce 
shearing, which causes a number of problems. One of the main reasons that 
structs in D do _not_ have inheritance is to avoid shearing.

- Jonathan M Davis


enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Johannes Pfau

Hi,
I'm currently patching Ragel (http://www.complang.org/ragel/) to generate  
D2 compatible code. Right now it creates output like this for static  
arrays:


enum ubyte[] _parseResponseLine_key_offsets = [
0, 0, 17, 18, 37, 41, 42, 44,
50, 51, 57, 58, 78, 98, 118, 136,
138, 141, 143, 146, 148, 150, 152, 153,
159, 160, 160, 162, 164
];

Making it output enum ubyte[30] would be more complicated, so I wonder  
if there's a difference between enum ubyte[] and enum ubyte[30]?


--
Johannes Pfau


Re: Classes or stucts :: Newbie

2010-12-20 Thread spir
On Mon, 20 Dec 2010 01:29:13 -0800
Jonathan M Davis jmdavisp...@gmx.com wrote:

  For me, the important difference is that classes are referenced, while
  structs are plain values. This is a semantic distinction of highest
  importance. I would like structs to be subtype-able and to implement
  (runtime-type-based) polymorphism.  
 
 Except that contradicts the facts that they're value types. You can't have a 
 type which has polymorphism and is a value type. By its very nature, 
 polymorphism requires you to deal with a reference.

Can you expand on this?

At least Oberon has value structs (records) with inheritance and 
polyporphism; I guess the turbo Pascal OO model was of that kind, too (unsure) 
-- at least the version implemented in freepascal seems to work fine that way. 
And probably loads of less known PLs provide such a feature.
D structs could as well IIUC: I do not see the relation with instances beeing 
implicitely referenced. (Except that they must be passed by ref to member 
functions they are the receiver of, but this is true for any kind of OO, 
including present D structs.)

(I guess we have very different notions of reference, as shown by previous 
threads.)


Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread bearophile
Johannes Pfau:

Hello Johannes and thank you for developing your tool for D2 too :-)


 Making it output enum ubyte[30] would be more complicated, so I wonder  
 if there's a difference between enum ubyte[] and enum ubyte[30]?

In D1 an enum ubyte[] is a compile-time constant dynamic array of unsigned 
bytes; it is a 2 word long struct that contains a pointer and a length.
In D1 you express the same thing with const ubyte[].

In D2 an enum ubyte[30] is a compile-time constant fixed size array of 30 
unsigned bytes that gets passed around by value.
In D1 a const ubyte[30] is a compile-time constant fixed size array of 30 
unsigned bytes that gets passed around by reference.

So they are two different things and you use one or the other according to your 
needs. Currently there are also some low performance issues in D with enums 
that get re-created each time you use them (this is true for associative 
arrays, but I don't remember if this is true for dynamic arrays too). So better 
to take a look at the produced asm to be sure, if you want to avoid performance 
pitfalls.
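
A tiny example of the difference, just as a sketch:

enum ubyte[]  dyn = [1, 2, 3];   // manifest constant with a dynamic array type
enum ubyte[3] fix = [1, 2, 3];   // manifest constant with a fixed-size array type

void main()
{
    static assert(!is(typeof(dyn) == typeof(fix)));  // two different types
    auto d = dyn;       // materializes a new dynamic array (a heap allocation)
    ubyte[3] s = fix;   // a plain stack copy, no allocation
    assert(d.length == 3 && s.length == 3);
}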

Regardless of the array kind you want to use, also take a look at Hex Strings:
http://www.digitalmars.com/d/2.0/lex.html
They allow you to write byte arrays as hex data:
x"00 FBCD 32FD 0A"

Bye,
bearophile


Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Johannes Pfau

On 20.12.2010, 11:02, bearophile bearophileh...@lycos.com wrote:


Hello Johannes and thank you for developing your tool for D2 too :-)

Actually it's not mine, I'm just a regular user. I don't think I could 
ever understand the finite state machine code (especially because it's 
C++), but patching the C/D1 codegen to output D2 code is easy enough ;-)


In D1 an enum ubyte[] is a compile-time constant dynamic array of 
unsigned bytes; it is a 2 word long struct that contains a pointer and a 
length.

Did you mean in D2? I feared that, so I'll have to do some extra work...

In D2 an enum ubyte[30] is a compile-time constant fixed size array of 
30 unsigned bytes that gets passed around by value.

Yep, that's what I want.



Regardless of the array kind you want to use, also take a look at Hex 
Strings:

http://www.digitalmars.com/d/2.0/lex.html
They allow you to write byte arrays as hex data:
x"00 FBCD 32FD 0A"
That's interesting, I'll have a look at it, but Ragel shares big parts of 
the C/C++/D code, so as long as the C syntax works there's no need to 
change that.




Bye,
bearophile


Thanks for your help!

--
Johannes Pfau


Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread bearophile
Johannes Pfau:

 Did you mean in D2?

Right, sorry.
Bye,
bearophile


Re: Classes or stucts :: Newbie

2010-12-20 Thread bearophile
Nick Voronin:

 Here is where we diverge. Choosing struct vs class on criteria of their 
 placement makes no sense to me. 

In D you use a class if you want inheritance or when you (often) need reference 
semantics, and you use a struct when you need a little value passed around by 
value or when you want a simple form of RAII or when you want to implement 
something manually (like using PIMPL), or when you want max performance (and 
you manage structs by pointer, you may even put a tag inside the struct or the 
pointer and implement manually some kind of inheritance). With structs you have 
a literal syntax, postblits, in-place allocation, and you are free to use 
align() too.
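
A quick sketch of some of those struct-only features (a made-up type):

align(1) struct Pixel       // explicit layout control with align()
{
    ubyte r, g, b;

    this(this) {}           // postblit: runs right after a copy is made
}

void main()
{
    auto p = Pixel(200, 100, 50);   // literal syntax
    auto q = p;                     // value copy; the postblit runs here
    assert(q.g == 100);
}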

Bye,
bearophile


Re: define methods apart

2010-12-20 Thread Christopher Nicholson-Sauls
On 12/19/10 06:52, spir wrote:
 On Sun, 19 Dec 2010 03:37:37 -0600
 Christopher Nicholson-Sauls ibisbase...@gmail.com wrote:
 
 On 12/18/10 07:19, spir wrote:
 Hello,


 I cannot find a way to define methods (I mean member functions) outside 
 the main type-definition body:

 struct X {}
 void X.say () { writeln("I say!"); }
 ==
 Element.d(85): semicolon expected, not '.'

 Do I overlook anything, or is this simply impossible? In the latter case, 
 what is the problem?
 (In many languages, not only dynamic ones, method are or at least can be 
 defined apart.)


 Denis
 -- -- -- -- -- -- --
 vit esse estrany ☣

 spir.wikidot.com


 As bearophile says, it just isn't the D way to do things.

 But, if you absolutely must (or just want to, for playing sakes) there
 are ways of faking it using opDispatch.  Here's one I just tossed
 together and tested (DMD 2.050) right now.

 [code snipped]

 Generally speaking, though, I'm not sure what the real value would be in
 doing this in D.  Did you have a particular use case in mind, or was it
 just idle exploration?
 
 Thank you very much for this example use of opDispatch :-)
 I'm still exploring the language (which I like very much, except for some 
 given features *). Actually, I just wanted to know whether it's possible, 
 because I'm used to this way and find it more practical or readable in 
 various cases. But it is not a problem.
 
 Denis
 
 (*) Some inherited from C/C++ (unhelpful syntax or semantics, mainly), some 
 among the newest (too abstract or complicated, i'd say).
 -- -- -- -- -- -- --
 vit esse estrany ☣
 
 spir.wikidot.com
 

No problem.  opDispatch has a number of possible uses.  Another thing
I've done with it before is to wrap the message passing system from
std.concurrency, to ease defining message protocols.  Basically I define
a message as a struct, then define an opDispatch that looks for the
pattern 'sendBLAH(...)' and forwards that to 'tid.send(BLAHMessage(...),
thisTid())' auto-magically.  To make it really magical I had to create
some code-generation for the receiving end so it would provide an
argument to receive/receiveTimeout for each handleBLAH method I define.

It had a few little bugs/quirks though, which is why I haven't ever
shared it.
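
Roughly, the shape of it was something like this (a from-memory sketch with 
made-up names, not the real code):

import std.concurrency;

struct PingMessage { int payload; }     // an example message struct

struct MessageProxy
{
    Tid tid;

    // turns proxy.sendPing(42) into send(tid, PingMessage(42))
    void opDispatch(string name, Args...)(Args args)
        if (name.length > 4 && name[0 .. 4] == "send")
    {
        mixin("send(tid, " ~ name[4 .. $] ~ "Message(args));");
    }
}

void worker()
{
    receive((PingMessage m) { /* handle it */ });
}

void main()
{
    auto proxy = MessageProxy(spawn(&worker));
    proxy.sendPing(42);     // forwarded to the worker as a PingMessage
}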

-- Chris N-S


Re: Classes or stucts :: Newbie

2010-12-20 Thread Jonathan M Davis
On Monday 20 December 2010 01:52:58 spir wrote:
 On Mon, 20 Dec 2010 01:29:13 -0800
 
 Jonathan M Davis jmdavisp...@gmx.com wrote:
   For me, the important difference is that classes are referenced, while
   structs are plain values. This is a semantic distinction of highest
   importance. I would like structs to be subtype-able and to implement
   (runtime-type-based) polymorphism.
  
  Except that contradicts the facts that they're value types. You can't
  have a type which has polymorphism and is a value type. By its very
  nature, polymorphism requires you to deal with a reference.
 
 Can you expand on this?
 
 At least Oberon has value structs (records) with inheritance and
 polyporphism; I guess the turbo Pascal OO model was of that kind, too
 (unsure) -- at least the version implemented in freepascal seems to work
 fine that way. And probably loads of less known PLs provide such a
 feature. D structs could as well IIUC: I do not see the relation with
 instances beeing implicitely referenced. (Except that they must be passed
 by ref to member functions they are the receiver of, but this is true
 for any kind of OO, including present D structs.)
 
 (I guess we have very different notions of reference, as shown by
 previous threads.)

Okay. This can get pretty complicated, so I'm likely to screw up on some of the 
details, but this should give you a basic idea of what's going on.

In essentially any C-based language, when you declare an integer on the stack 
like so:

int a = 2;

you set aside a portion of the stack which is the exact size of an int 
(typically 32 bits, but that will depend on the language). If you declare a 
pointer,

int* a;

then you're setting aside a portion of the stack the size of a pointer (32 bits 
on a 32 bit machine and 64 bits on a 64 bit machine). That variable then holds 
an address - typically to somewhere on the heap, though it could be to an 
address on the stack somewhere. In the case of int*, the address pointed to 
will 
refer to a 32-bit block of memory which holds an int.

If you have a struct or a class that you put on the stack, say,

class A
{
int a;
float b;
}

then you're setting aside exactly as much space as that type requires to hold 
itself. At minimum, that will be the total size of its member variables (in 
this 
case an int and a float, so probably a total of 64 bits), but it often will 
include extra padding to align the variables along appropriate boundaries for 
the sake of efficiency, and depending on the language, it could have extra type 
information. If the class has a virtual table (which it will if it has virtual 
functions, which in most any language other than C++ would mean that it 
definitely has a virtual table), then that would be part of the space required 
for the class as well (virtual functions are polymorphic; when you call a 
virtual function, it calls the version of the function for the actual type that 
an object is rather than the pointer or reference that you're using to refer to 
the object; when a non-virtual function is called, the version of the function 
for the type of the pointer or reference is used; all class functions are 
virtual in D unless the compiler determines that they don't have to be and 
optimizes it out (typically because they're final); struct functions and stand-
alone functions are never virtual). The exact memory layout of a type _must_ be 
known at compile time. The exact amount of space required is then known, so 
that 
the stack layout can be done appropriately.

If you're dealing with a pointer, then the exact memory layout of the memory 
being pointed to needs to be known when that memory is initialized, but the 
pointer doesn't necessarily need to know it. This means that you can have a 
pointer of one type point to a variable of another type. Now, assuming that 
you're not subverting the type system (e.g. by casting int* to float*), you're 
dealing with inheritance. For instance, you have

class B : A
{
bool c;
}

and a variable of type A*. That pointer could point to an object which is 
exactly of type A, or it could point to any subtype of A. B is derived from A, 
so the object could be a B. As long as the functions are virtual, you can have 
polymorphic functions by having the virtual table used to call the version of 
the function for the type that the object actually is rather than the type that 
the pointer is.

References are essentially the same as pointers (though they may have some 
extra 
information with them, making them a bit bigger than a pointer would be in 
terms 
of the amount of space required on the stack). However, in the case of D, 
pointers are _not_ treated as polymorphic (regardless of whether a function is 
virtual or not), whereas references _are_ treated as polymorphic (why, I don't 
know - probably to simplify pointers). In C++ though, pointers are polymorphic.
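
In D itself, that looks roughly like this (same A and B as above, just as a 
sketch):

class A { int a; float b; }
class B : A { bool c; }

void main()
{
    A a = new B();                   // the reference has type A, the object is a B
    assert(typeid(a) == typeid(B));  // virtual calls through 'a' would use B's overrides
}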

Now, if you have a variable of type A*, you could do something like this:

B* b = new 

Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Nick Voronin
On Mon, 20 Dec 2010 10:26:16 +0100
Johannes Pfau s...@example.com wrote:

 Hi,
 I'm currently patching Ragel (http://www.complang.org/ragel/) to generate  
 D2 compatible code.

Interesting. Ragel-generated code works fine for me in D2. I suppose it mostly 
uses such a restricted C-like subset of language that it didn't change much 
from D1 to D2. But if you are going to patch it, please make it add extra {} 
around action code! The thing is that when there is a label before a {} block 
(and in Ragel-generated code it's always so, as far as I've seen) the block isn't 
considered a new scope, which causes problems when you have local variable 
declarations inside actions.

Anyway, good luck with whatever you plan :) Ragel is cool.

 Right now it creates output like this for static  
 arrays:
 
 enum ubyte[] _parseResponseLine_key_offsets = [
   0, 0, 17, 18, 37, 41, 42, 44,
   50, 51, 57, 58, 78, 98, 118, 136,
   138, 141, 143, 146, 148, 150, 152, 153,
   159, 160, 160, 162, 164
 ];
 
 Making it output enum ubyte[30] would be more complicated, so I wonder  
 if there's a difference between enum ubyte[] and enum ubyte[30]?

One is a fixed-size array and the other is dynamic. Honestly I doubt that it matters 
for code generated by Ragel, since this is a constant and won't be passed around. 
If it's harder to make it fixed-size then don't bother.

-- 
Nick Voronin elfy...@gmail.com


Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Jonathan M Davis
On Monday 20 December 2010 01:26:16 Johannes Pfau wrote:
 Hi,
 I'm currently patching Ragel (http://www.complang.org/ragel/) to generate
 D2 compatible code. Right now it creates output like this for static
 arrays:
 
 enum ubyte[] _parseResponseLine_key_offsets = [
   0, 0, 17, 18, 37, 41, 42, 44,
   50, 51, 57, 58, 78, 98, 118, 136,
   138, 141, 143, 146, 148, 150, 152, 153,
   159, 160, 160, 162, 164
 ];
 
 Making it output enum ubyte[30] would be more complicated, so I wonder
 if there's a difference between enum ubyte[] and enum ubyte[30]?

ubyte[] is a dynamic array. ubyte[30] is a static array. They are inherently 
different types. The fact that you're dealing with an enum is irrelevant. So, 
the 
code that you're generating is _not_ a static array. It's a dynamic array. This 
is inherently different from C or C++ where having [] on a type (whether it has 
a 
number or not) is _always_ a static array.

- Jonathan M Davis


Re: Classes or stucts :: Newbie

2010-12-20 Thread bearophile
Jonathan M Davis:

 So, putting classes on the stack kind of negates the whole point of having
 both structs and classes in the first place.

Where you put the instance is mostly a matter of implementation. This is why a 
smart JavaVM is able to perform escape analysis and choose where to allocate 
the class instance.

Keep in mind that if you allocate a class on the stack or in-place inside 
another class, you don't turn it into a value, because beside the class 
instance you reserve space for its reference too (this reference may even be 
immutable, if you want).


 scoped classes are definitely not in SafeD.

Well implemented scoped classes are safe enough (compared to the other things). 
The compiler may perform escape analysis of all the aliases of a scoped object 
and statically raise an error if a reference escapes. This isn't 100% safe in a 
language that has all kind of casts and low-level features, but it's often safe 
enough, compared to other things. And those casts and low level features that 
can fool the escape analysis can be disabled statically (with something like 
@safe), this makes scoped classes 100% safe, probably safer than heap 
allocations.


The whole point of safe when talking about safe in D is memory safety.

I know, but some people (including me) think that safe D is a misleading name 
because it just means memory safe D.


If the compiler can determine that a particular class object can be put on the 
stack and optimize it that way. Fine, but it's pretty rare that it can do that 
- essentially only in cases where you don't pass it to _anything_ except for 
pure functions (including calls to member functions).

I don't agree that it's rare. If a function that allocates an object calls a 
function (or member function) that's present in the same compilation unit (this 
more or less means same module), then the compiler is able to continue the 
escape analysis and determine if the called function escapes the reference. If 
this doesn't happen, then the class instance is free to be scoped. This 
situation is common enough.


And if the compiler can do that, then it there's no need for the programmer to 
use scope explicitly.

I don't agree. An annotation like @scope is a contract between the programmer 
and the compiler. It means that if the compiler sees a reference escape, then 
it stops the compilation.


And no, a compiler _can't_ do pure optimizations on its own, 
generally-speaking, because that would require looking not only at the body of 
the function that's being called but at the function bodies of any functions 
that it calls. D is not designed in a way that the compiler even necessarily 
has _access_ to a function's body when compiling, and you can't generally look 
at a function's body when doing optimizations when calling that function. So, 
_some_ pure optimizations could be done, but most couldn't. This is not the 
case with scoped classes, because purity already gives you the information 
that you need.

Quite often a function calls another function in the same compilation unit; in 
that case the analysis is possible. So you limit the optimizations to this 
common but limited case.

And the LDC compiler, and in future GDC too, have link-time optimization; this means 
the compiler packs or sees the program code in a single compilation unit. 
In this case it's able to perform a more complete analysis (including 
de-virtualization of some virtual functions).


Safety by convention means that the language and the compiler do not enforce 
it in any way.

This is not fully true. If the syntax of the unsafe thing is ugly and long, the 
programmer is discouraged from using it. This makes the unsafe thing more visible 
to the eyes of the programmer. Statistically this may reduce the bug count.


There's nothing contradictory about Walter's stance. He's for having safety 
built into the language as much as reasonably possible and against having it 
thrust upon the programmer to program in a particular way to avoid unsafe 
stuff.

I think you have missed part of the context of my comments to Nick Voronin; he 
was trying to say something here:

Yet we won't have library solution for pointers instead of language support 
(hopefully)? :) I think it all goes against being practical as an objective 
of the language. Safety is important but you don't achieve safety by means of 
making the unsafe thing inconvenient and inefficient. If there is emplace() then 
there is no reason not to have scope storage class. At least looking from 
user's POV. I don't know how hard it is on the compiler.
In the _general_ case there is no safety in D. With all the low-level capabilities one 
can always defeat the compiler. Removing intermediate-level safer (yet unsafe) 
capabilities arguably gains nothing but frustration. I'm all for encouraging 
good practices, but this is different.

In D the convention is to not use certain low-level means to do something (and 
@safe statically forbids them, so it's not just a 

Re: string comparison

2010-12-20 Thread Lars T. Kyllingstad
On Sun, 19 Dec 2010 07:01:30 +, doubleagent wrote:

 Andrei's quick dictionary illustration [in his book, 'The D Programming
 Language'] doesn't seem to work.  Code attached.

That's strange.  I ran the example you posted using DMD 2.050 myself, and 
it works for me.  Are you 100% sure that you are running this version, 
and that it is not using an outdated Phobos version (from an older 
installation, for instance)?

One suggestion:  Try replacing the next-to-last line with this:

  dictionary[word.idup] = newId;

The 'word' array is mutable and reused by byLine() on each iteration.  By 
doing the above you use an immutable copy of it as the key instead.
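
Here is a tiny self-contained illustration of why the .idup matters (buf stands 
in for the mutable buffer that byLine() reuses; the data is made up):

import std.stdio;

void main()
{
    char[] buf = "andrei".dup;      // a mutable buffer, like byLine()'s
    uint[string] dict;
    dict[cast(string) buf] = 0;     // the key still aliases the buffer
    string copy = buf.idup;         // what the suggested fix stores instead
    buf[] = 'x';                    // the buffer gets overwritten on the next iteration
    writeln("andrei" in dict);      // typically null: the stored key's bytes have changed
    writeln(copy);                  // still "andrei": the copy is independent
}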


 On my computer, with d2-0.5.0, I got the following output while testing.
 
 andrei
 0 andrei
  andrei
 1 andrei
 
 
 Also, why doesn't 'splitter' show up on the site's documentation of
 std.string?  And what advantage does 'splitter(strip(line))' offer over
 'split(line)'?

splitter is defined in std.algorithm.  The fact that it becomes visible 
when you import std.string is due to bug 314:

  http://d.puremagic.com/issues/show_bug.cgi?id=314

(std.string is supposed to publicly import just a few symbols from 
std.algorithm, but because of this bug the whole module gets imported 
publicly.)

The advantage with splitter is that it is lazy and therefore more 
efficient.  split() is eager and allocates memory to hold the string 
fragments.

-Lars


Re: Classes or stucts :: Newbie

2010-12-20 Thread spir
On Mon, 20 Dec 2010 03:11:49 -0800
Jonathan M Davis jmdavisp...@gmx.com wrote:

 Now, you could conceivably have a language where all of its objects were 
 actually pointers, but they were treated as value types. So,
 
 B b;
 A a = b;
 
 would actually be declaring
 
 B* b;
 A* a = b;
 
 underneath the hood, except that the assignment would do a deep copy and 
 allocate the appropriate memory rather than just copying the pointer like 
 would 
 happen in a language like C++ or D. Perhaps that's what Oberon does. I have 
 no 
 idea. I have never heard of the language before, let alone used it.

I don't know how Oberon works. But I'm sure that its records are plain values, 
_not_ pointed under the hood. And their methods all are virtual (they have a 
virtual method table). I have no more details, sorry.

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



Re: string comparison

2010-12-20 Thread Stanislav Blinov

On 20.12.2010 8:35, doubleagent wrote:

Compared to the relatively snappy response other threads have been receiving I'm
going to assume that nobody is interested in my inquiry.

That's cool.  Can anybody point me to an IRC chatroom for D noobs, and is there
anywhere to post errata for the book?

Please don't feel offended if you don't get a response quickly. Even if it 
may seem that people are active in other threads, that doesn't mean they are 
fast enough to analyse arising questions and problems while discussing 
some recent ideas and improvements and not forgetting to work and sleep ;)
Besides, there are many people here from different parts of the world, 
different time zones.


And lastly, hasn't this by chance been your first post? AFAIR, the first 
message is being moderated so it doesn't get to the public at once.


BTW, There is a #D channel on freenode, if my memory serves.


Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Johannes Pfau

On Monday, December 20, 2010, Nick Voronin elfy...@gmail.com wrote:


On Mon, 20 Dec 2010 10:26:16 +0100
Johannes Pfau s...@example.com wrote:


Hi,
I'm currently patching Ragel (http://www.complang.org/ragel/) to  
generate

D2 compatible code.


Interesting. Ragel-generated code works fine for me in D2. I suppose it  
mostly uses such a restricted C-like subset of language that it didn't  
change much from D1 to D2.
The most important change is const correctness. Because of that, table-based 
output didn't work with D2. And you couldn't directly pass const data (like 
string.ptr) to Ragel.


But if you are going to patch it, please make it add extra {} around  
action code! The thing is that when there is a label before {} block  
(and in ragel generated code I saw it's always so) the block isn't  
considered as a new scope which causes problems when you have local  
variables declaration inside actions.


You mean like this code:
-
tr15:
#line 228 "jpf/http/parser.rl"
{
if(start != p)
{
key = line[(start - line.ptr) .. (p - line.ptr)];
}
}
-
should become: ?
-
tr15:
#line 228 "jpf/http/parser.rl"
{{
if(start != p)
{
key = line[(start - line.ptr) .. (p - line.ptr)];
}
}}
-

One is fixed size array and other is dynamic. Honestly I doubt that it  
matters for code generated by Ragel, since this is constant and won't be  
passed around. If it's harder to make it fixed-size then don't bother.


Could a dynamic array cause heap allocations, even if its data is never 
changed? If not, dynamic arrays would work fine.


--
Johannes Pfau


Re: Problems with Reflection in Static library

2010-12-20 Thread Stanislav Blinov

On 19.12.2010 10:51, Mandeep Singh Brar wrote:

Thanks a lot for your reply Tomek. I understand what you are saying
but this would not work for me. The reason is that I am trying to
make some kind of plugins from these libs. So I would not know the
name objectFactory in advance either (multiple plugins will not be
implementing the same named method and linking statically).

For now i am just merging these two into the same place.
In general, it's not a very good idea to make plugins using static 
linkage. Even if your plugins are dynamic libraries (dll/so), which 
is currently not possible at least on Linux (and at least with dmd, I 
don't know about gdc2), while the core to which they are plugged in 
remains a static library, you may and most certainly will get unpleasant 
surprises. The reason is that any static data contained in the static library 
will be duplicated for every executable/dll that links to it. I don't 
know if anything can be done about it (actually, I think nothing can be). 
So for plugins, it's better to keep the 'core' as a dynamic 
library too. But, again, dmd is currently not on good terms with dynamic 
linking (aside from loading a C dll at runtime, but that doesn't count).


Re: string comparison

2010-12-20 Thread Jonathan M Davis
On Monday, December 20, 2010 06:01:23 Lars T. Kyllingstad wrote:
 On Sun, 19 Dec 2010 07:01:30 +, doubleagent wrote:
  Andrei's quick dictionary illustration [in his book, 'The D Programming
  Language'] doesn't seem to work.  Code attached.
 
 That's strange.  I ran the example you posted using DMD 2.050 myself, and
 it works for me.  Are you 100% sure that you are running this version,
 and that it is not using an outdated Phobos version (from an older
 installation, for instance)?
 
 One suggestion:  Try replacing the next-to-last line with this:
 
   dictionary[word.idup] = newId;
 
 The 'word' array is mutable and reused by byLine() on each iteration.  By
 doing the above you use an immutable copy of it as the key instead.
 
  On my computer, with d2-0.5.0, I got the following output while testing.
  
  andrei
  0   andrei
  
   andrei
  
  1   andrei
  
  
  Also, why doesn't 'splitter' show up on the site's documentation of
  std.string?  And what advantage does 'splitter(strip(line))' offer over
  'split(line)'?
 
 splitter is defined in std.algorithm.  The fact that it becomes visible
 when you import std.string is due to bug 314:
 
   http://d.puremagic.com/issues/show_bug.cgi?id=314
 
  (std.string is supposed to publicly import just a few symbols from
  std.algorithm, but because of this bug the whole module gets imported
  publicly.)

Actually, while that is a definite bug, splitter() _is_ defined in std.string 
as 
well (though it calls std.algorithm.splitter()), but it returns auto, so it 
doesn't show up in the docs, which is a different bug.

- Jonathan M Davis


Re: Classes or stucts :: Newbie

2010-12-20 Thread Jonathan M Davis
On Monday, December 20, 2010 03:19:48 bearophile wrote:
 Jonathan M Davis:
  So, putting classes on the stack kind of negates the whole point of
  having both structs and classes in the first place.
 
 Where you put the instance is mostly a matter of implementation. This is
 why a smart JavaVM is able to perform escape analysis and choose where to
 allocate the class instance.
 
 Keep in mind that if you allocate a class on the stack or in-place inside
 another class, you don't turn it into a value, because beside the class
 instance you reserve space for its reference too (this reference may even
 be immutable, if you want).
 
  scoped classes are definitely not in SafeD.
 
 Well implemented scoped classes are safe enough (compared to the other
 things). The compiler may perform escape analysis of all the aliases of a
 scoped object and statically raise an error if a reference escapes. This
 isn't 100% safe in a language that has all kind of casts and low-level
 features, but it's often safe enough, compared to other things. And those
 casts and low level features that can fool the escape analysis can be
 disabled statically (with something like @safe), this makes scoped classes
 100% safe, probably safer than heap allocations.
 
  The whole point of safe when talking about safe in D is memory safety.
 
 I know, but some people (including me) think that safe D is a misleading
 name because it just means memory safe D.

Talking about SafeD meaning memory safety makes the meaning of safety clear. If 
you try and make the term safety encompass more than that, it takes very little 
for safety to become subjective. Regardless of whether it would be nice if 
SafeD gave types of safety other than memory safety, when D documentation and 
any of the main D devs talk about safety, it is memory safety which is being 
referred to. Trying to expand the meaning beyond that will just cause confusion 
regardless of whether the non-memory safety being discussed is desirable or not.

 If the compiler can determine that a particular class object can be put on
 the stack and optimize it that way. Fine, but it's pretty rare that it
 can do that - essentially only in cases where you don't pass it to
 _anything_ except for pure functions (including calls to member
 functions).
 
 I don't agree that it's rare. If a function that allocates an object calls
 a function (or member function) that's present in the same compilation
 unit (this more or less means same module), then the compiler is able to
 continue the escape analysis and determine if the called function escapes
 the reference. If this doesn't happen, then the class instance is free to
 be scoped. This situation is common enough.
 
 And if the compiler can do that, then it there's no need for the
 programmer to use scope explicitly.
 
 I don't agree. An annotation like @scope is a contract between the
 programmer and the compiler. It means that if the compiler sees a
 reference escape, then it stops the compilation.
 
 And no, a compiler _can't_ do pure optimizations on its own,
 generally-speaking, because that would require looking not only at the
 body of the function that's being called but at the function bodies of
 any functions that it calls. D is not designed in a way that the compiler
 even necessarily has _access_ to a function's body when compiling, and
 you can't generally look at a function's body when doing optimizations
 when calling that function. So, _some_ pure optimizations could be done,
 but most couldn't. This is not the case with scoped classes, because
 purity already gives you the information that you need.
 
  Quite often a function calls another function in the same compilation
 unit, in this case the analysis is possible. So you limit the
 optimizations to this common but limited case.
 
 And LDC compiler and in future GDC too, have link-time optimization, this
  means the compiler packs or sees the program code in a single
 compilation unit. In this case it's able to perform a more complete
 analysis (including de-virtualization of some virtual functions).

It's trivial to get a reference or pointer to escape in a way that is undetectable to 
the compiler. Some escape analysis can be and is done, but all it takes is 
passing a pointer or a reference to another function and the compiler can't 
determine it anymore unless it has access to the called function's body, and 
perhaps the bodies of functions that that function calls. And if the compiler 
can't be 100% correct with escape analysis, then any feature that requires it is 
not safe.

And as great as fancier optimizations such as link-time optimizations may be, 
the existence of dynamic libraries eliminates any and all guarantees that such 
optimizations would be able to make if they had all of the source to look at. 
So, you can't rely on them. They help, and they're great, but no feature can 
require them. They're optimizations only.

- Jonathan M Davis


Re: Classes or stucts :: Newbie

2010-12-20 Thread Jonathan M Davis
On Monday, December 20, 2010 06:24:56 spir wrote:
 On Mon, 20 Dec 2010 03:11:49 -0800
 
 Jonathan M Davis jmdavisp...@gmx.com wrote:
  Now, you could conceivably have a language where all of its objects were
  actually pointers, but they were treated as value types. So,
  
  B b;
  A a = b;
  
  would actually be declaring
  
  B* b;
  A* a = b;
  
  underneath the hood, except that the assignment would do a deep copy and
  allocate the appropriate memory rather than just copying the pointer
  like would happen in a language like C++ or D. Perhaps that's what
  Oberon does. I have no idea. I have never heard of the language before,
  let alone used it.
 
 I don't know how Oberon works. But I'm sure that its records are plain
 values, _not_ pointed under the hood. And their methods all are virtual
 (they have a virtual method table). I have no more details, sorry.

Well, given C's memory model - which D uses - you can't do that. Oberon could 
use a different memory model and have some other way of doing it, but it won't 
work for D, so you'll never see structs with polymorphic behavior in D.

- Jonathan M Davis


Re: string comparison

2010-12-20 Thread Steven Schveighoffer
On Mon, 20 Dec 2010 00:35:53 -0500, doubleagent doubleagen...@gmail.com  
wrote:


Compared to the relatively snappy response other threads have been  
receiving I'm

going to assume that nobody is interested in my inquiry.


Just a tip, don't expect snappy responses on Sunday...  We all have lives  
you know ;)  I for one usually have my computer that I do D stuff with off  
for most of the weekend.


-Steve


Re: Classes or stucts :: Newbie

2010-12-20 Thread Steven Schveighoffer
On Sun, 19 Dec 2010 17:38:17 -0500, Jonathan M Davis jmdavisp...@gmx.com  
wrote:



On Sunday 19 December 2010 14:26:19 bearophile wrote:

Jonathan M Davis:
 There will be a library solution to do it, but again, it's unsafe.

It can be safer if the compiler gives some help. For me it's one of the
important unfinished parts of D.


Whereas, I would argue that it's completely unnecessary. structs and classes 
serve different purposes. There is no need for scoped classes. They may 
periodically be useful, but on the whole, they're completely unnecessary.

The compiler can help, but it can't fix the problem any more than it can 
guarantee that a pointer to a local variable doesn't escape once you've passed 
it to another function. In _some_ circumstances, it can catch escaping pointers 
and references, but in the general case, it can't.

If we have library solutions for people who want to play with fire, that's fine. 
But scoped classes are just not one of those things that the language really 
needs. They complicate things unnecessarily for minimal benefit.


I don't mind having a solution as long as there is a solution.

The main need I see for scoped classes is for when you *know* as the  
programmer that the lifetime of a class or struct will not exceed the  
lifetime of a function, but you don't want to incur the penalty of  
allocating on the heap.  Mostly this is because the functions you want to  
call take classes or interfaces.


It's difficult to find an example with Phobos since there are not many  
classes.  But with Tango, scoped classes are used everywhere.
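
In Phobos terms, the library solution would look something like this (a sketch, 
assuming the scoped helper from std.typecons):

import std.stdio : write;
import std.typecons : scoped;

class Logger
{
    void log(string s) { write(s); }
}

void greet(Logger lg) { lg.log("hello\n"); }   // a callee that takes a class

void main()
{
    auto lg = scoped!Logger();   // allocated on the stack, destroyed at scope exit
    greet(lg);                   // usable where a Logger is expected, no heap allocation
}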


-Steve


Re: string comparison

2010-12-20 Thread doubleagent
 Are you 100% sure that you are running this version

I have to be.  There are no other versions of phobos on this box and 'which dmd'
points to the correct binary.

  dictionary[word.idup] = newId;

That fixes it.

 The 'word' array is mutable and reused by byLine() on each iteration.  By
 doing the above you use an immutable copy of it as the key instead.

I REALLY don't understand this explanation.  Why does the mutability of 'word'
matter when the associative array 'dictionary' assigns keys by value...it's got 
to
assign them by value, right?  Otherwise we would only get one entry in
'dictionary' and the key would be constantly changing.

The behavior itself seems really unpredictable prior to testing, and really
unintended after testing.  I suspect it's due to some sort of a bug.  The 
program,
on my box anyway, only fails when we give it identical strings, except one is
prefixed with a space.  That should tell us that 'splitter' and 'strip' didn't 
do
their job properly.  The fly in the ointment is that when we output the strings,
they appear as we would expect.

I suspect D does string comparisons (when the 'in' keyword is used) based on 
some
kind of a hash, and that hash doesn't get correctly updated when 'strip' or
'splitter' is applied, or upon the next comparison or whatever.  Calling 'idup'
must force the hash to get recalculated.  Obviously, you guys would know if
there's any merit to this, but it seems to explain the problem.

 The advantage with splitter is that it is lazy and therefore more
 efficient.  split() is eager and allocates memory to hold the string
 fragments.

Yeah, that's what I thought would be the answer.  Kudos to you guys for thinking
of laziness out of the box.  This is a major boon for D.

You know, there's something this touches on which I was curious about.  If D
defaults to 'safety first', and with some work you can get down-to-the-metal, 
why
doesn't the language default to immutable variables, with an explicit modifier 
for
mutable ones?  C compatibility?


Re: string comparison

2010-12-20 Thread doubleagent
I understand.  Thank you, and thanks for pointing out the chatroom.


Re: string comparison

2010-12-20 Thread Steven Schveighoffer
On Mon, 20 Dec 2010 11:13:34 -0500, Stanislav Blinov bli...@loniir.ru  
wrote:


And lastly, hasn't this by chance been your first post? AFAIR, the first  
message is being moderated so it doesn't get to the public at once.


BTW, this message board is not moderated.

-Steve


Re: string comparison

2010-12-20 Thread Steven Schveighoffer
On Mon, 20 Dec 2010 14:05:56 -0500, Steven Schveighoffer  
schvei...@yahoo.com wrote:


On Mon, 20 Dec 2010 11:13:34 -0500, Stanislav Blinov bli...@loniir.ru  
wrote:


And lastly, hasn't this by chance been your first post? AFAIR, the  
first message is being moderated so it doesn't get to the public at  
once.


BTW, this message board is not moderated.


I should clarify, it's retroactively moderated :)  That is, if spam  
appears, it's allowed to go through, but then removed once discovered.


-Steve


Re: string comparison

2010-12-20 Thread doubleagent
 The reason that std.string.splitter() does not show in the documentation is 
 that
 its return type is auto, and there is currently a bug in ddoc that makes it so
 that auto functions don't end up in the generated documentation. Looking at 
 the
 code, it pretty much just forwards to std.algorithm.splitter() using 
 whitespace
 as its separator, so you can look at the documentation there if you'd like.

Thanks.  The code was pretty self-explanatory but it's helpful to know that auto
functions currently don't get documented.


Re: string comparison

2010-12-20 Thread Jonathan M Davis
On Monday, December 20, 2010 10:44:12 doubleagent wrote:
  Are you 100% sure that you are running this version
 
 I have to be.  There are no other versions of phobos on this box and 'which
 dmd' points to the correct binary.
 
   dictionary[word.idup] = newId;
 
 That fixes it.
 
  The 'word' array is mutable and reused by byLine() on each iteration.  By
  doing the above you use an immutable copy of it as the key instead.
 
 I REALLY don't understand this explanation.  Why does the mutability of
 'word' matter when the associative array 'dictionary' assigns keys by
 value...it's got to assign them by value, right?  Otherwise we would only
 get one entry in 'dictionary' and the key would be constantly changing.

Okay. I don't know what the actual code looks like, but word is obviously a 
dynamic array, and if it's from byLine(), then that dynamic array is mutable - 
both the array itself and its elements. Using idup gets you an immutable copy. 
When copying dynamic arrays, you really get a slice of that array. So, you get 
an array that points to the same array as the original. Any changes to the 
elements in one affects the other. If you append to one of them and it doesn't 
have the space to resize in place, or you do anything else which could cause it 
to reallocate, then that array is reallocated and they no longer point to the 
same data, and changing one will not change the other.

If the elements of the array are const or immutable, then the fact that the two 
arrays point to the same data isn't a problem because the elements can't be 
changed (except in cases where you're dealing with const rather than immutable 
and another array points to the same data but doesn't have const elements). So, 
assigning one string to another, for instance (string being an alias for 
immutable(char)[]), will never result in one string altering another. However, 
if you're dealing with char[] rather than string, one array _can_ affect the 
elements of another. I believe that byLine() deals with a char[], not a string.
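
A minimal sketch of that sharing:

void main()
{
    char[] a = "hello".dup;
    char[] b = a;          // a slice over the same elements, not a copy
    b[0] = 'j';
    assert(a == "jello");  // the change made through b is visible through a
}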

Now, as for associative arrays, they don't really deal with const correctly. I 
believe that they're actually implemented with void* and you can actually do 
things like put const elements in them in spite of the fact that toHash() on 
Object is not currently const (there is an open bug on the fact that Object is 
not const-correct). So, it does not surprise me in the least if it will take 
mutable types as its key and then allow them to be altered (assuming that 
they're pointers or reference types and you can therefore have other references 
to them). But to fix the problem in this case would require immutability rather 
than const, because you're dealing with a reference type (well, 
pseudo-reference 
type since dynamic arrays share their elements such that changes to their 
elements affect all arrays which point to those elements, but other changes - 
such as altering their length don't affect other arrays and will even likely 
result in the arrays then being completely separate).

 The behavior itself seems really unpredictable prior to testing, and really
 unintended after testing.  I suspect it's due to some sort of a bug.  The
 program, on my box anyway, only fails when we give it identical strings,
 except one is prefixed with a space.  That should tell us that 'splitter'
 and 'strip' didn't do their job properly.  The fly in the ointment is that
 when we output the strings, they appear as we would expect.
 
 I suspect D does string comparisons (when the 'in' keyword is used) based
 on some kind of a hash, and that hash doesn't get correctly updated when
 'strip' or 'splitter' is applied, or upon the next comparison or whatever.
  Calling 'idup' must force the hash to get recalculated.  Obviously, you
 guys would know if there's any merit to this, but it seems to explain the
 problem.

in should use toHash() (or whatever built-in functions for built-in types if 
you're not dealing with a struct or class) followed by ==. I'd be stunned if 
there were any caching involved. The problem is that byLine() is using a 
mutable 
array, so the elements pointed to by the array that you just put in the 
associative array changed, which means that the hash for them is wrong, and == 
will fail when used to compare the array to what it was before.

  The advantage with splitter is that it is lazy and therefore more
  efficient.  split() is eager and allocates memory to hold the string
  fragments.
 
 Yeah, that's what I thought would be the answer.  Kudos to you guys for
 thinking of laziness out of the box.  This is a major boon for D.
 
 You know, there's something this touches on which I was curious about.  If
 D defaults to 'safety first', and with some work you can get
 down-to-the-metal, why doesn't the language default to immutable
 variables, with an explicit modifier for mutable ones?  C compatibility?

C compatibility would be one reason. Familiarity would be another. Also, it 
would be _really_ annoying to have to mark variables mutable all over the place 
as you would inevitably have to do.

Re: string comparison

2010-12-20 Thread Lars T. Kyllingstad
On Mon, 20 Dec 2010 18:44:12 +, doubleagent wrote:

 Are you 100% sure that you are running this version
 
 I have to be.  There are no other versions of phobos on this box and
 'which dmd' points to the correct binary.
 
  dictionary[word.idup] = newId;
 
 That fixes it.
 
 The 'word' array is mutable and reused by byLine() on each iteration. 
 By doing the above you use an immutable copy of it as the key instead.
 
 I REALLY don't understand this explanation.  Why does the mutability of
 'word' matter when the associative array 'dictionary' assigns keys by
 value...it's got to assign them by value, right?  Otherwise we would
 only get one entry in 'dictionary' and the key would be constantly
 changing.

This could be related to bug 2954, for which a fix will be released in 
the next version of DMD.

  http://d.puremagic.com/issues/show_bug.cgi?id=2954

-Lars


Re: string comparison

2010-12-20 Thread doubleagent
 Okay. I don't know what the actual code looks like

Here.

import std.stdio, std.string;

void main() {
uint[string] dictionary; // v[k], so string -> uint
foreach (line; stdin.byLine()) {
// break sentence into words
// Add each word in the sentence to the vocabulary
foreach (word; splitter(strip(line))) {
if (word in dictionary) continue; // nothing to do
auto newId = dictionary.length;
dictionary[word] = newId;
writefln("%s\t%s", newId, word);
}
}
}

 ...

Okay, suppose you're right.  The behavior is still incorrect because the
associative array has allowed two identical keys...identical because the only
difference between two strings which I care about are the contents of their
character arrays.

 Also, it
 would be _really_ annoying to have to mark variables mutable all over the 
 place
 as you would inevitably have to do.

Obviously your other points are valid, but I haven't found this to be true
(Clojure is pure joy).  Maybe you're right because D is a systems language and 
mutability needs to be preferred; however, after only a day or two of exposure to 
this language that assumption also appears to be wrong.  Take a look at Walter's
first attempted patch to bug 2954: 13 lines altered to explicitly include
immutable, and 4 altered to treat variables as const:
http://www.dsource.org/projects/dmd/changeset/749

But I'm willing to admit that my exposure is limited, and that particular 
example
is a little biased.


Re: string comparison

2010-12-20 Thread doubleagent
 This could be related to bug 2954, for which a fix will be released in
 the next version of DMD.

Looking at that new descriptive error message, i.e. error("associative arrays can 
only be assigned values with immutable keys, not %s", e2->type->toChars());  it 
appears to be a distinct possibility.  Thanks.


Re: string comparison

2010-12-20 Thread Jonathan M Davis
On Monday, December 20, 2010 16:45:20 doubleagent wrote:
  Okay. I don't know what the actual code looks like
 
 Here.
 
 import std.stdio, std.string;
 
 void main() {
 uint[string] dictionary; // v[k], so string-uint
 foreach (line; stdin.byLine()) {
 // break sentence into words
 // Add each word in the sentence to the vocabulary
 foreach (word; splitter(strip(line))) {
 if (word in dictionary) continue; // nothing to do
 auto newId = dictionary.length;
 dictionary[word] = newId;
 writefln(%s\t%s, newId, word);
 }
 }
 }
 
  ...
 
 Okay, suppose you're right.  The behavior is still incorrect because the
 associative array has allowed two identical keys...identical because the
 only difference between two strings which I care about are the contents of
 their character arrays.

Array comparison cares about the contents of the array. It may shortcut 
comparisons if lengths differ or if they point to the same point in memory and 
have the same length, but array comparison is all about comparing their 
elements.

In this case, you'd have two arrays/strings which point to the same point in 
memory but have different lengths. Because their lengths differ, they'd be 
deemed 
unequal. If you managed to try and put a string in the associative array which 
has the same length as one that you already inserted, then they'll be 
considered 
equal, since their lengths are identical and they point to the same point in 
memory, 
so in that case, I would expect the original value to be replaced with the new 
one. But other than that, the keys will be considered unequal in spite of the 
fact that they point to the same place in memory.
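
As a small sketch of that length check (made-up data):

void main()
{
    char[] a = "andrei".dup;
    char[] b = a[0 .. 3];    // same pointer, shorter length
    assert(a.ptr is b.ptr);
    assert(a != b);          // different lengths, so the arrays compare unequal
    assert(b == "and");      // otherwise equality is element-wise
}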

The real problem here is that associative arrays currently allow non-immutable 
keys. Once that's fixed, then it won't be a problem anymore.

  Also, it
  would be _really_ annoying to have to mark variables mutable all over the
  place as you would inevitably have to do.
 
 Obviously your other points are valid, but I haven't found this to be true
 (Clojure is pure joy).  Maybe you're right because D is a systems language
 and mutability needs to be preferred, however after only a day or two of
 exposure to this language that assumption also appears to be wrong.  Take
 a look at Walter's first attempted patch to bug 2954: 13 lines altered to
 explicitly include immutable, and 4 altered to treat variables as const:
 http://www.dsource.org/projects/dmd/changeset/749
 
 But I'm willing to admit that my exposure is limited, and that particular
 example is a little biased.

Most programmers don't use const even in languages that have it. And with many 
programmers programming primarily in languages like Java or C# which don't 
really have const (IIRC, C# has more of a const than Java, but it's still 
pretty 
limited), many, many programmers never use const and see no value in it. So, 
for 
most programmers, mutable variables will be the norm, and they'll likely only 
use const or immutable if they have to. There are plenty of C++ programmers who 
will seek to use const (and possibly immutable) heavily, but they're definitely 
not the norm. And, of course, there are plenty of other languages out there 
with 
const or immutable types of one sort or another (particularly most functional 
languages), but those aren't the types of languages that most programmers use. 
The result is that most beginning D programmers will be looking for mutable to 
be the norm, and forcing const and/or immutable on them could be seriously off-
putting.

Now, most code which is going to actually use const and immutable is likely to 
be a fair mix of mutable, const, and immutable - especially if you don't try to 
make everything immutable at the cost of efficiency like you'd typically get in 
a 
functional language. That being the case, regardless of whether mutable, const, 
or immutable is the default, you're going to have to mark a fair number of 
variables as something other than the default. So, making const or immutable 
the 
default would likely not save any typing, and it would annoy a _lot_ of 
programmers.

So, the overall gain of making const or immutable the default is pretty minimal 
if not outright negative.

Personally, I use const and immutable a lot, but I still  wouldn't want const 
or 
immutable to be the default.

- Jonathan M Davis


Re: enum ubyte[] vs enum ubyte[3]

2010-12-20 Thread Nick Voronin
On Mon, 20 Dec 2010 17:17:05 +0100
Johannes Pfau s...@example.com wrote:

  But if you are going to patch it, please make it add extra {} around  
  action code! The thing is that when there is a label before {} block  
  (and in ragel generated code I saw it's always so) the block isn't  
  considered as a new scope which causes problems when you have local  
  variables declaration inside actions.
 
 You mean like this code:
 -
 tr15:
 #line 228 "jpf/http/parser.rl"
  {
  if(start != p)
  {
  key = line[(start - line.ptr) .. (p - line.ptr)];
  }
  }
 -
 should become: ?
 -
 tr15:
 #line 228 "jpf/http/parser.rl"
  {{
  if(start != p)
  {
  key = line[(start - line.ptr) .. (p - line.ptr)];
  }
  }}
 -

Yes. This way it becomes a scope which is kind of what one would expect from it.

 
  One is fixed size array and other is dynamic. Honestly I doubt that it  
  matters for code generated by Ragel, since this is constant and won't be  
  passed around. If it's harder to make it fixed-size then don't bother.
 
 Could a dynamic array cause heap allocations, even if it's data is never  
 changed? If not, dynamic arrays would work fine.

Sorry, I can't provide reliable information on what can happen in general, but
right now there is no difference in the code produced for accessing elements of
an enum ubyte[] and an enum ubyte[30]. In both cases the constants are directly
embedded in the code.

In fact, as long as you only access its elements (no passing the array as an
argument, no assignment to another variable, and no accessing .ptr), there is
no array object at all. If you do any of those, a new object is created every
time you do. I believe Ragel doesn't generate code which passes tables around,
so it doesn't matter.
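
For what it's worth, here is a minimal sketch of that difference (assuming D2
semantics as described above; the names and table contents are invented):

enum ubyte[] table = [10, 20, 30];            // manifest constant, no runtime object
immutable ubyte[] sharedTable = [10, 20, 30]; // one object in the data segment

void main()
{
    auto b = table[1];                 // constant index: folded, no array is built
    ubyte[] copy = table;              // expands to an array literal: a fresh GC allocation
    const(ubyte)[] view = sharedTable; // just a slice of the existing static data
}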

-- 
Nick Voronin elfy...@gmail.com


Re: Classes or stucts :: Newbie

2010-12-20 Thread Nick Voronin
On Mon, 20 Dec 2010 05:43:08 -0500
bearophile bearophileh...@lycos.com wrote:

 Nick Voronin:
 
  Here is where we diverge. Choosing struct vs class on criteria of their 
  placement makes no sense to me. 
 
 In D you use a class if you want inheritance or when you (often) need
 reference semantics, and you use a struct when you need a little value passed
 around by value, or when you want a simple form of RAII, or when you want to
 implement something manually (like using PIMPL), or when you want max
 performance (and you manage structs by pointer; you may even put a tag inside
 the struct or the pointer and implement some kind of inheritance manually).
 With structs you have a literal syntax, postblits, in-place allocation, and
 you are free to use align() too.

Well said. Plenty of differences there more important than stack/heap 
allocation.
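
A tiny sketch of that rule of thumb (all names are invented for illustration):

class Widget                 // reference semantics, can be subclassed
{
    string name;
}

struct Point                 // small value type, copied by value
{
    double x, y;
}

struct ScopedFile            // simple RAII: the destructor runs when it leaves scope
{
    private int fd = -1;
    ~this() { /* close(fd) would go here */ }
}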

-- 
Nick Voronin elfy...@gmail.com


Re: string comparison

2010-12-20 Thread doubleagent
Good, I agree.


is expression for template structs/classes instances?

2010-12-20 Thread d coder
Greetings

I want to find if a given struct type is instantiated from a
particular template struct type. For example:

struct S (T)  {
  alias T Type;
  T t;
}

And later I want to find out if a given type is of type S(*)
(basically, any type instantiated from the template struct S). In fact, I do
not know the type T that was used when instantiating S!(T).

I was looking at the is ( Type Identifier : TypeSpecialization ,
TemplateParameterList ) expression at
http://www.digitalmars.com/d/2.0/expression.html#IsExpression .
I thought there would be some way of doing it with that, but I could not find any.

Regards
Cherry


Re: is expression for template structs/classes instances?

2010-12-20 Thread Jonathan M Davis
On Monday 20 December 2010 20:23:49 d coder wrote:
 Greetings
 
 I want to find if a given struct type is instantiated from a
 particular template struct type. For example:
 
 struct S (T)  {
   alias T Type;
   T t;
 }
 
 And later I want to find out if a given type is of type S(*)
 (basically any type instantiated from template struct S). In fact I do
 not know the type value T used at the time of instantiating S!(T).
 
 I was looking at is ( Type Identifier : TypeSpecialization ,
 TemplateParameterList ) expression at
 http://www.digitalmars.com/d/2.0/expression.html#IsExpression .
 Thought there would be some way using that, but I could not find any.
 
 Regards
 Cherry

Well, from the compiler's perspective S!int would have no relation to S!float,
S!bool, or any other S!T. The template is instantiated with whatever types and
values it's given, and then it's its own beast. So, really, there is no
relation between the various instantiations of any particular template. It
would not necessarily be impossible to have something in __traits or std.traits
which tested whether a particular type was an instantiation of a particular
template, but I'm not at all certain that it _is_ possible. Templates are used
to generate code, but once generated, that code is essentially the same as it
would have been had you typed it all yourself. So, my guess would be that you
can't do what you're trying to do. I agree that it could be useful to be able
to do it, but unfortunately, I don't think that it's possible.

If you knew enough about the type, you might be able to do some template voodoo
to do it in a round-about manner, but it would be specific to the type in
question. For instance, given your definition of S, you could use
__traits/std.traits to check that the type that you're testing has a member
variable t. You could then check that S!(typeof(t)) was the same as the type
that you were testing. So, if you get particularly cunning about it, I believe
that it can be tested for in specific cases, but I don't believe that it can be
done in any general way.
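
For what it's worth, a minimal sketch of that round-about check, assuming the
definition of S from the original post (the helper name isInstanceOfS is
invented for illustration):

struct S(T)
{
    alias T Type;
    T t;
}

// True if Checked has a member t and S!(typeof(t)) is Checked itself.
template isInstanceOfS(Checked)
{
    static if (is(typeof(Checked.init.t)))
        enum isInstanceOfS = is(Checked == S!(typeof(Checked.init.t)));
    else
        enum isInstanceOfS = false;
}

unittest
{
    static assert(isInstanceOfS!(S!int));
    static assert(isInstanceOfS!(S!(int[])));
    static assert(!isInstanceOfS!int);
    static assert(!isInstanceOfS!(int[]));
}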

- Jonathan M Davis


Re: is expression for template structs/classes instances?

2010-12-20 Thread d coder
 For instance, given your definition of S, you could use
 __traits/std.traits to check that the type that you're testing has a member
 variable t. You could then check that S!(typeof(t)) was the same as the type
 that you were testing. So, if you get particularly cunning about it, I believe
 that it can be tested for in specific cases, but I don't believe that it can
 be done in any general way.


Thanks Jonathan

That is exactly what I had thought of doing. I was conscious that it
may not be the cleanest way.
Now that you are saying a cleaner way may not exist, I will go ahead
and write the code.

Regards
- Cherry


[Issue 5359] std.traits : isDelegate returns false on a delegate

2010-12-20 Thread d-bugmail
http://d.puremagic.com/issues/show_bug.cgi?id=5359


Max Samukha samu...@voliacable.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |samu...@voliacable.com


--- Comment #2 from Max Samukha samu...@voliacable.com 2010-12-20 04:28:28 
PST ---
I am sure that using homonym templates for testing types and expressions is a
bad idea. It results in some syntactic compression but at the same time brings
lots of confusion. There should be two distinct templates. Something like this:

template isExpression(alias expression)
{
enum isExpression = is(typeof(expression));
}

template isDelegate(alias expression) if (isExpression!expression)
{
enum isDelegate = isDelegateType!(typeof(expression));
}

template isDelegateType(T)
{
static enum isDelegateType = is(T == delegate);
}

template isFunctionPointer(alias expression) if (isExpression!expression)
{
enum isFunctionPointer = isFunctionPointerType!(typeof(expression));
}

template isFunctionPointerType(T)
{
static if (__traits(compiles, *T.init))
enum isFunctionPointerType = isFunctionType!(typeof((*T.init)));
else
enum isFunctionPointerType = false;
}

template isFunctionType(T)
{
enum isFunctionType = is(T == function);
}

unittest
{
alias void delegate() Dg;
Dg dg;
alias void function() Fn;
Fn fn;

static void foo()
{
}

static assert(isDelegate!dg);
static assert(isDelegateType!Dg);
static assert(!__traits(compiles, isDelegate!Dg));
static assert(!isDelegateType!Fn);

static assert(isFunctionPointer!fn);
static assert(!__traits(compiles, isFunctionPointer!Fn));
static assert(!isFunctionType!Fn);

static assert(isFunctionPointerType!Fn);
static assert(!isFunctionPointerType!Dg);

static assert(isFunctionType!(typeof(*foo)));
static assert(!isFunctionType!Fn);
static assert(!isFunctionType!Dg);
}
