Signaling NaNs Rise Again

2009-03-26 Thread Walter Bright

Inspired by Don Clugston's recent compiler patch.

http://www.reddit.com/r/programming/comments/87vqv/signaling_nans_rise_again/


Re: Allowing relative file imports

2009-03-26 Thread Walter Bright

grauzone wrote:

Walter Bright wrote:
http://www.comeaucomputing.com lets you upload random C++ code, 
compile it on their system, and view the messages put out by their 
compiler. Suppose you did it with D, had it import some sensitive 
file, and put it out with a pragma msg statement?


Your compiler can do the same:
http://codepad.org/hWC9hbPQ


That's awesome!


Re: Allowing relative file imports

2009-03-26 Thread grauzone

Walter Bright wrote:

Georg Wrede wrote:
I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Consider the Java JVM. You've probably got one installed on your 
computer. It gets java code from gawd knows where (as the result of web 
browsing), it compiles it, and runs it on your machine unbeknownst to you.


.NET does that too.

Every day my browser downloads javascript code, compiles it, and runs it.

There's no reason in principle that D could not be used instead.

This means that we should think about security issues. Compiling 
untrusted code should not result in an attack on your system.


http://www.comeaucomputing.com lets you upload random C++ code, compile 
it on their system, and view the messages put out by their compiler. 
Suppose you did it with D, had it import some sensitive file, and put it 
out with a pragma msg statement?


Your compiler can do the same:
http://codepad.org/hWC9hbPQ


Re: Allowing relative file imports

2009-03-26 Thread Walter Bright

Georg Wrede wrote:
I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Consider the Java JVM. You've probably got one installed on your 
computer. It gets java code from gawd knows where (as the result of web 
browsing), it compiles it, and runs it on your machine unbeknownst to you.


.NET does that too.

Every day my browser downloads javascript code, compiles it, and runs it.

There's no reason in principle that D could not be used instead.

This means that we should think about security issues. Compiling 
untrusted code should not result in an attack on your system.


http://www.comeaucomputing.com lets you upload random C++ code, compile 
it on their system, and view the messages put out by their compiler. 
Suppose you did it with D, had it import some sensitive file, and put it 
out with a pragma msg statement?
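
For concreteness, a minimal sketch of that scenario (the module name and the
file name secret.txt are made up, not from the original messages). It only
compiles if the compiler is invoked with string imports enabled, e.g.
dmd -J/path/holding/secret leak.d:

module leak;

// import("file") reads the file at compile time as a string literal,
// restricted to the directories given with -J; pragma(msg, ...) then
// echoes its contents into the compiler's own output.
pragma(msg, import("secret.txt"));

void main() {}

This is why the feature is off by default and limited to explicitly listed
directories, as discussed later in this thread.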


Re: State of Play

2009-03-26 Thread Walter Bright

Bill Baxter wrote:

It seems to me the only people who would know which compilers deserve
the "stable" label are the folks using dmd on a daily basis to build
their software.  Yet I've never seen the question come up here or
anywhere else of what version of D the users find to be the most
stable.   My impression is frankly that Walter just arbitrarily slaps
the label on a rev that's about 10 steps back from current.  Probably
there's more to it than that, but that's what it seems like.


The current "stable" D1 is that way because it's the one that people 
supplied me with a bundled version that has the major libraries 
specifically tested and working with it.


I think that is a fairly reasonable definition of it.


Re: State of Play

2009-03-26 Thread Jarrett Billingsley
On Fri, Mar 27, 2009 at 1:19 AM, Jarrett Billingsley
 wrote:
> On Fri, Mar 27, 2009 at 12:29 AM, Walter Bright
>  wrote:
>> Leandro Lucarella wrote:
>>>
>>> Walter Bright, el 26 de marzo a las 16:58 me escribiste:

 Jarrett Billingsley wrote:
>
> It's not the bugs that you know about that cause problems for other
> people!

 Half-baked implementations won't help them, either. I just don't think
 the answer is, what is in essence, a lot more releases.
>>>
>>> Millions of open source projects that work that way can prove you wrong.
>>
>> Phobos works that way, and intermediate "releases" are pretty much ignored
>> (as I think they should be).
>>
>
> Maybe it's because no one uses it.
>
> I mean, I'm just saying.
>

This is also partly the beer talking.


Re: State of Play

2009-03-26 Thread Jarrett Billingsley
On Fri, Mar 27, 2009 at 12:29 AM, Walter Bright
 wrote:
> Leandro Lucarella wrote:
>>
>> Walter Bright, el 26 de marzo a las 16:58 me escribiste:
>>>
>>> Jarrett Billingsley wrote:

 It's not the bugs that you know about that cause problems for other
 people!
>>>
>>> Half-baked implementations won't help them, either. I just don't think
>>> the answer is, what is in essence, a lot more releases.
>>
>> Millions of open source projects that work that way can prove you wrong.
>
> Phobos works that way, and intermediate "releases" are pretty much ignored
> (as I think they should be).
>

Maybe it's because no one uses it.

I mean, I'm just saying.


Re: State of Play

2009-03-26 Thread Brad Roberts
Bill Baxter wrote:
> On Fri, Mar 27, 2009 at 1:03 PM, Leandro Lucarella  wrote:
>> Walter Bright, el 26 de marzo a las 16:58 me escribiste:
>>> Jarrett Billingsley wrote:
 It's not the bugs that you know about that cause problems for other people!
>>> Half-baked implementations won't help them, either. I just don't think
>>> the answer is, what is in essence, a lot more releases.
>> Millions of open source projects that work that way can prove you wrong.
> 
> 
> I think part of the problem with the current approach is that the
> "stable" D releases seem to have no connection with reality.  It's
> always been way older than it should be every time I've looked.  I
> wouldn't recommend that anyone use 1.030 right now.  I'd say 1.037
> should be the most recent "stable" version at the moment.   It seems
> there isn't a good process in place for figuring out what's stable and
> what's not.
> 
> It seems to me the only people who would know which compilers deserve
> the "stable" label are the folks using dmd on a daily basis to build
> their software.  Yet I've never seen the question come up here or
> anywhere else of what version of D the users find to be the most
> stable.   My impression is frankly that Walter just arbitrarily slaps
> the label on a rev that's about 10 steps back from current.  Probably
> there's more to it than that, but that's what it seems like.
> 
> --bb

Actually it's more like he moves it forward when conversations like this
come up and point out how far behind it is.  I'm not sure I've seen it
ever pro-actively moved forward, only re-actively. :)

Later,
Brad


Re: State of Play

2009-03-26 Thread Bill Baxter
On Fri, Mar 27, 2009 at 1:03 PM, Leandro Lucarella  wrote:
> Walter Bright, el 26 de marzo a las 16:58 me escribiste:
>> Jarrett Billingsley wrote:
>> >It's not the bugs that you know about that cause problems for other people!
>>
>> Half-baked implementations won't help them, either. I just don't think
>> the answer is, what is in essence, a lot more releases.
>
> Millions of open source projects that work that way can prove you wrong.


I think part of the problem with the current approach is that the
"stable" D releases seem to have no connection with reality.  It's
always been way older than it should be every time I've looked.  I
wouldn't recommend that anyone use 1.030 right now.  I'd say 1.037
should be the most recent "stable" version at the moment.   It seems
there isn't a good process in place for figuring out what's stable and
what's not.

It seems to me the only people who would know which compilers deserve
the "stable" label are the folks using dmd on a daily basis to build
their software.  Yet I've never seen the question come up here or
anywhere else of what version of D the users find to be the most
stable.   My impression is frankly that Walter just arbitrarily slaps
the label on a rev that's about 10 steps back from current.  Probably
there's more to it than that, but that's what it seems like.

--bb


Re: State of Play

2009-03-26 Thread Walter Bright

Leandro Lucarella wrote:

Walter Bright, el 26 de marzo a las 16:58 me escribiste:

Jarrett Billingsley wrote:

It's not the bugs that you know about that cause problems for other people!

Half-baked implementations won't help them, either. I just don't think
the answer is, what is in essence, a lot more releases.


Millions of open source projects that work that way can prove you wrong.


Phobos works that way, and intermediate "releases" are pretty much 
ignored (as I think they should be).


Re: DMC to Create C .lib ?

2009-03-26 Thread Chris Andrews
Sergey Gromov Wrote:

> You're talking about Doryen Library project, right?
> 
> My thought is that TCOD_console_flush is actually cdecl and must be
> declared as extern(C), while you seem to declare it as extern(Windows)
> which is stdcall.  What you get with linkdef is a corrupted stack.  This
> is why coffimplib is the absolute best option when you have the right COFF
> import library: it lets you catch this sort of error.

Hah, you got me.  Yeah, I'm toying with a wrapper for Doryen Library in D.  I'm 
trying to not talk too much about it, since I don't know if I'm skilled enough 
to write and implement it, nor dedicated enough to finish it. :p  We'll see.

All the methods were prepended with TCODLIB_API, which is a macro for 
__declspec(dllexport) (or __declspec(dllimport)), which I thought (according to 
the guide) translates as export extern(Windows).

I guess I'll try extern(C) instead, and run coffimplib on the library again.  
Perhaps the "Visual Studio" library is a bad one to use, and I should try the 
"MinGW" .a file. 



Re: State of Play

2009-03-26 Thread Leandro Lucarella
Walter Bright, el 26 de marzo a las 16:58 me escribiste:
> Jarrett Billingsley wrote:
> >It's not the bugs that you know about that cause problems for other people!
> 
> Half-baked implementations won't help them, either. I just don't think
> the answer is, what is in essence, a lot more releases.

Millions of open source projects that work that way can prove you wrong.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: D1.x series proposal [was: State of Play]

2009-03-26 Thread Leandro Lucarella
Sean Kelly, el 26 de marzo a las 22:08 me escribiste:
> == Quote from Leandro Lucarella (llu...@gmail.com)'s article
> >
> > D situation is a little different because D2 is already here, and it's too
> > much ahead of D1. So a plan to backport features from D2 to D1
> > progressively should be done.
> 
> Who is going to do these backports?

That's a fair question without an answer =)
(I wish I had the time to do it. If I had the time, I would probably do it
first and then propose it.)

> Personally, I'd rather think about moving my code to D2 in one jump than
> in a bunch of incremental steps, each requiring a custom compiler.

Are you? If not, why? I think D2 being a moving target makes people not
want to port code, because it would be too hard to maintain. The idea
behind the 1.x series is that each minor version is *stable*. Code you port
to, let's say, 1.1.000, will work with 1.1.100. No new predefined
versions, no new anything. You get a really stable language and one that
evolves fast. You just have to do some minor porting about once a year,
when a new minor version is released, and that porting should be trivial.
Porting code to D2 now is a complicated exercise, at least to do it right
(using the constness features).

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)

Some day books will banish the radio, and man will discover the hidden
power of the Amargo Serrano.
-- Ricardo Vaporeso. El Bolsón, 1909.


Re: build a project

2009-03-26 Thread Daniel Keep


Derek Parnell wrote:
> On Thu, 26 Mar 2009 21:21:47 +0100, Don wrote:
> 
>> I use bud. Even though it hasn't been touched since rebuild began. :-(.
> 
> How would you like Bud to be 'touched'? I'm happy to make it better but I'm not
> sure what you need from it.

My current project builds using a Python script which does some stuff
including calling bud (along with a bunch of other utilities.)

I switched to rebuild for everything else for the -dc switch since I
tend to have a number of DMD installs (1.x tango stable, 1.x tango trunk
+ changes, 1.x phobos, 2.x phobos).

Really, I always saw bud as being "done."  It doesn't really have any
major issues with it.  Things like -dc or -oq and the like are nice, but
it does its job well.  :)

<3 bud

  -- Daniel


Re: What can you "new"

2009-03-26 Thread grauzone
Doesn't matter what you're making, OS or not, the choice of language 
*certainly* carries repercussions throughout a project. Sure Linux is doing 
fine with C. So what? It could probably be doing a lot better with D.


I'm not saying that the programming language is not relevant at all. 
Rather, other issues outweigh the choice of the programming language by 
far. Just take driver support as an example. That's actually what is 
discouraging most people from using Linux over Windows. You can't do 
anything with an OS that makes parts of your hardware as useful as a brick.


I'd only say that there are some "key" features of a language that 
actually matter for something like a kernel. For example, as you said, a 
kernel written in assembler is very hard to port to another 
architecture. And you wouldn't write a kernel in Visual Basic (although 
some folks are writing kernels in languages like Java, C# or even Python).


But D is as good as C/C++ in this regard. When it comes to real-life 
issues, D is probably even a bit worse, because of toolchain issues. 
Would your D-OS ever run on, say, PowerPC?


Also, I should emphasize that I never said D would or wouldn't "shake up the 
OS market", just that the potential was there, whether it be *if* a new OS 
was built ground-up in D or *if* an existing one was ported. My main point 
was just that D could certainly improve the overall development process of 
whatever OS used it, allowing things to advance faster, be more reliable, 
etc., and thus potentially give it a real leg up.


Maybe. Note that the Linux developers refused to use C++, although the 
C++ advocates came up with similar arguments. Sure, the languages 
provide some nice features, which make life easier. But again, how 
would D help when writing device drivers? Or when figuring out a good 
locking hierarchy? It doesn't matter that much; you have to deal with 
much larger problems.


If all the carpenters are building houses with wooden hammers, and Joe Shmoe 
comes along with his metal hammer, well, he may succeed or he may fail, but 
he would certainly have that extra advantage, and thus have at least the 
potential to "shake things up".


That's a bad comparison, because it's very simple to switch hammers. 
Also, the metal hammer would be nicer to use than the wooden one, but 
it'd split in two parts if you strained it too much. Even if you're 
careful. Some would build complicated, abstract works of art, using a 
new method called "nail mixin". They'd need at least a dozen hammers 
until the artwork is finished. The result would blow up in an explosion 
from time to time for unknown reasons.


Re: What can you "new"

2009-03-26 Thread Nick Sabalausky
"grauzone"  wrote in message 
news:gqh4su$2cb...@digitalmars.com...
> Jarrett Billingsley wrote:
>> On Thu, Mar 26, 2009 at 2:49 PM, grauzone  wrote:
>>> Jarrett Billingsley wrote:
 On Thu, Mar 26, 2009 at 1:33 PM, Nick Sabalausky  wrote:

> Besides, I'd think an OS written in D would certainly have the 
> potential
> to
> really shake up the current OS market. Not because people would say 
> "Oh,
> wow, it's written in D", of course, but because the developers would 
> have
> a
> far easier time making it, well, good.
 *cough*www.xomb.org*cough*
>>> Your point? Yes, we know that D can be used to write hobby kernels.
>>
>> No need to be hostile about it.  I was just letting Nick know.
>
> It will never "shake up the current OS market", though. Also, I think the 
> programming language is not really relevant for how good an OS is. Linux 
> is doing fine with C. This too should probably go rather towards Nick.

Choice of language can certainly make a big difference in a product, OS or 
otherwise. For instance, (purely hypothetical example that conveniently 
ignores timeframe and overhead involved in porting to a different language) 
if Win98 had been written in something like C#, it would have been more 
reliable, released sooner (probably would have been "Win97" as originally 
intended), and required more memory and processing power to run. If OSX had 
been written in pure asm, it would have been leaner, faster, buggier, 
released later, and probably wouldn't have been ported to x86. If early 
versions of the PalmOS kernel were written in Python, the old PalmPilots 
probably would have been painfully slow.

Doesn't matter what you're making, OS or not, the choice of language 
*certainly* carries repercussions throughout a project. Sure Linux is doing 
fine with C. So what? It could probably be doing a lot better with D.

Also, I should emphasize that I never said D would or wouldn't "shake up the 
OS market", just that the potential was there, whether it be *if* a new OS 
was built ground-up in D or *if* an existing one was ported. My main point 
was just that D could certainly improve the overall development process of 
whatever OS used it, allowing things to advance faster, be more reliable, 
etc., and thus potentially give it a real leg up.

If all the carpenters are building houses with wooden hammers, and Joe Shmoe 
comes along with his metal hammer, well, he may succeed or he may fail, but 
he would certainly have that extra advantage, and thus have at least the 
potential to "shake things up".




Re: build a project

2009-03-26 Thread Frits van Bommel

Xu, Qian wrote:
"Denis Koroskin" <2kor...@gmail.com> wrote in message 
news:op.urepy0lro7c...@hood.creatstudio.intranet...
It's certainly much faster to generate 1 header file than recompile 
all the dependencies each time.


Yes, but it only helps with certain redundancy (like comments).
If you change your implementation but not your interface, the header file 
will be changed as well, so I cannot take much advantage of D's header 
files. (Am I wrong?)


DMD should only include bodies for functions it considers for inlining. 
That should mean the header might change due to changes in small 
functions, but probably not for changes in big ones.


(I'm not sure how GDC handles this. I do know LDC does inlining later in 
the process (*after* "codegen"), but I'm not sure what header generation 
does)
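
As an illustration of that point, here is a made-up module (all names are
hypothetical): when generating a header with dmd -H, a tiny function like
addOne is a plausible inlining candidate, so its body may be emitted into the
.di file, while a larger function like process would normally appear as a
prototype only, so editing only its body need not change the header.

module example;

int addOne(int x)
{
    return x + 1;            // small: its body may land in the generated .di
}

int process(int[] data)
{
    int sum = 0;
    foreach (x; data)        // larger: likely just a prototype in the .di
    {
        if (x > 0)
            sum += x;
        else
            sum -= x;
    }
    return sum;
}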


Re: What can you "new"

2009-03-26 Thread grauzone

Jarrett Billingsley wrote:

On Thu, Mar 26, 2009 at 2:49 PM, grauzone  wrote:

Jarrett Billingsley wrote:

On Thu, Mar 26, 2009 at 1:33 PM, Nick Sabalausky  wrote:


Besides, I'd think an OS written in D would certainly have the potential
to
really shake up the current OS market. Not because people would say "Oh,
wow, it's written in D", of course, but because the developers would have
a
far easier time making it, well, good.

*cough*www.xomb.org*cough*

Your point? Yes, we know that D can be used to write hobby kernels.


No need to be hostile about it.  I was just letting Nick know.


It will never "shake up the current OS market", though. Also, I think 
the programming language is not really relevant for how good an OS is. 
Linux is doing fine with C. This, too, should probably be directed at Nick.


Re: State of Play

2009-03-26 Thread Walter Bright

Jarrett Billingsley wrote:

It's not the bugs that you know about that cause problems for other people!


Half-baked implementations won't help them, either. I just don't think 
the answer is, what is in essence, a lot more releases.


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Moritz Warning
On Thu, 26 Mar 2009 21:08:40 +0100, Don wrote:

> Is it worth spending any more time on it?

It's a basic building block for many programs.

Let the code flow! :p


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Thomas Moran

On 26/03/2009 20:08, Don wrote:

BTW: I tested the memcpy() code provided in AMD's 1992 optimisation
manual, and in Intel's 2007 manual. Only one of them actually gave any
benefit when run on a 2008 Intel Core2 -- which was it? (Hint: it wasn't
Intel!)
I've noticed that AMD's docs are usually greatly superior to Intel's, but
this time the difference is unbelievable.


Don, have you seen Agner Fog's memcpy() and memmove() implementations 
included with the most recent versions of his manuals? In the unaligned 
case they read two XMM words and shift/combine them into the target 
alignment, so all loads and stores are aligned. Pretty cool.


He says (modestly):

; This method is 2 - 6 times faster than the implementations in the
; standard C libraries (MS, Gnu) when src or dest are misaligned.
; When src and dest are aligned by 16 (relative to each other) then this
; function is only slightly faster than the best standard libraries.


Re: build a project

2009-03-26 Thread Xu, Qian


"grauzone"  wrote in message 
news:gqg99j$pj...@digitalmars.com...
What build system are you using? You said "WAF-tool", but I didn't find 
anything about it.


http://code.google.com/p/waf/
We use this tool for building, but it calculates the dependencies incorrectly.

--Qian 



Re: build a project

2009-03-26 Thread Xu, Qian
"Denis Koroskin" <2kor...@gmail.com> wrote in message 
news:op.urepy0lro7c...@hood.creatstudio.intranet...
It's certainly much faster to generate 1 header file than recompile all 
the dependencies each time.




Yes, but it only helps with certain redundancy (like comments).
If you change your implementation but not your interface, the header file will 
be changed as well, so I cannot take much advantage of D's header files. 
(Am I wrong?)


for instance:

class Foo {
    void bar() {
        int i = 0;
        i ++; // add/remove this line, to see, if the header file changes.
    }
}

--Qian 



Re: State of Play

2009-03-26 Thread Jarrett Billingsley
On Thu, Mar 26, 2009 at 7:01 PM, Walter Bright
 wrote:
> Jarrett Billingsley wrote:
>>
>> So what about the following counterargument: "even if nightly builds
>> were made available, how can we be sure that enough people are using
>> them to sufficiently test them?"  OK, sure, if not many people are
>> using the nightly builds, then there wouldn't be much benefit.  But it
>> does seem to work out fine for a lot of projects.  And with a proper
>> SCM set up which you commit to daily, there's virtually no work on
>> your part.  You just commit, and everyone else can download and
>> compile.
>
> I believe that people downloading half-baked works in progress and then
> finding problems I already know about and am fixing is probably not more
> productive.

It's not the bugs that you know about that cause problems for other people!


Re: State of Play

2009-03-26 Thread Trass3r

Walter Bright wrote:
As for how one develops stable code targeting both D1 and D2, I would 
suggest targeting D1, but be careful to use the string alias for all the 
char[]'s, and treat strings as if they were immutable. This will cover 
90% of any source code changes between D1 and D2, perhaps even more than 
90%. It's also very possible to write D1 code in the immutability 
style; in fact, I advocated it long before D2 (see all the old threads 
discussing Copy On Write).


Well, using the string alias leads to problems, esp. when used as a 
function parameter:

int func(string str)

This makes it impossible to pass a mutable string to the function in D2.

I personally use an  alias const(char)[] cstring;  for most of my 
parameters and  alias invariant(char)[] istring;  for normal immutable 
strings.
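
Put together, a minimal sketch of those aliases (the function name is made up;
on current compilers invariant is spelled immutable):

alias const(char)[] cstring;        // read-only view: accepts mutable or immutable data
alias invariant(char)[] istring;    // truly immutable string, same as the string alias

// Taking cstring lets callers pass char[], string, or istring alike,
// which sidesteps the problem with int func(string str) above.
size_t countChars(cstring s)
{
    return s.length;
}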


Re: State of Play

2009-03-26 Thread Walter Bright

Jarrett Billingsley wrote:

So what about the following counterargument: "even if nightly builds
were made available, how can we be sure that enough people are using
them to sufficiently test them?"  OK, sure, if not many people are
using the nightly builds, then there wouldn't be much benefit.  But it
does seem to work out fine for a lot of projects.  And with a proper
SCM set up which you commit to daily, there's virtually no work on
your part.  You just commit, and everyone else can download and
compile.


I believe that people downloading half-baked works in progress and then 
finding problems I already know about and am fixing is probably not more 
productive.


Re: build a project

2009-03-26 Thread Derek Parnell
On Thu, 26 Mar 2009 21:21:47 +0100, Don wrote:

> I use bud. Even though it hasn't been touched since rebuild began. :-(.

How would you like Bud to be 'touched'? I'm happy to make it better but I'm not
sure what you need from it.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell


Re: Allowing relative file imports

2009-03-26 Thread Georg Wrede

Andrei Alexandrescu wrote:

Georg Wrede wrote:

Walter Bright wrote:

Daniel Keep wrote:

It should be noted that this is really no different to executing
arbitrary code on a machine.  That said, compiling a program is not
typically thought of as "executing" code, so some restrictions in this
case would probably be prudent.


Here's the scenario I'm concerned about. Let's say you set up a 
website that instead of supporting javascript, supports D used as a 
scripting language. The site thus must run the D compiler on the 
source code. When it executes the resulting code, that execution 
presumably will run in a "sandbox" at a low privilege level.


But the compiler itself will be part of the server software, and may 
run at a higher privilege. The import feature could possible read any 
file in the system, inserting it into the executable being built. The 
running executable could then supply this information to the 
attacker, even though it is sandboxed.


This is why even using the import file feature must be explicitly 
enabled by a compiler switch, and which directories it can read must 
also be explicitly set with a compiler switch. Presumably, it's a lot 
easier for the server software to control the compiler switches than 
to parse the D code looking for obfuscated file imports.


As almost everybody else here, I've maintained a couple of websites.

Using D to write CGI programs (that are compiled, real binaries) is 
appealing, but I'd never even think about having the web server itself 
use the D compiler!!!


I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Of course it is, probably just not in C. Last time I looked, there are 
two concepts around, one of "statically-generated dynamic pages" and one 
of "entirely dynamic pages". I know because I installed an Apache server 
and at that time support for statically-generated dynamic pages was new.


What that means is this:

a) statically-generated dynamic = you generate the page once, it's good 
until the source of the page changes;


b) "really" dynamic page = you generate the page at each request.

Rdmd might get one thinking of such, but then, how many websites use 
dynamically created PHP? Dynamically created pages yes, but with 
static PHP source.


I must be missing something big here...


I think D with rdmd would be great for (a).


I'm still not sure what you mean. I see it as static (as in plain HTML) 
vs dynamic (as in Facebook, Wikipedia, etc.). Now these dynamic pages can 
be PHP pages that get their data from a database (I guess MediaWiki 
would be a good example), but neither case involves creating the 
server-side programs (as in *.php, *.cgi) dynamically.


Or sort of. Many PHP web applications contain pages that dynamically 
choose which sub-elements (say, a news ticker) to "show", but that's 
still just combinations of prewritten "mini-pages", if you will. (Some 
even keep them in an RDBMS.)


But I don't see a use case where one would need to create CGI-BIN stuff 
so variable as to warrant recompiling. One would rather have a set of 
small D programs (binaries) that do small things, like one for the 
latest news, one for showing who else is online, etc.




Of course there are sites where I can type D source code in a box, and 
have it compiled and run. But I'm sure neither of us are talking about 
such sites? I mean, to do that, the administrator usually knows what 
he's doing! And can take care of himself, which means we don't have to 
accommodate his needs.


Re: build a project

2009-03-26 Thread Trass3r

Don wrote:

Rebuild can't make DLLs. That's a showstopper for me.
I use bud. Even though it hasn't been touched since rebuild began. :-(.


In the end (D) dlls don't work on Windows anyway. ;)


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Trass3r

Don wrote:
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?



Well, arrays > 32K aren't that unusual, esp. in scientific computing.
Even a small 200x200 matrix of doubles takes up 200*200*8 = 320,000 bytes.
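
For scale, a minimal sketch of the kind of copy in question (the sizes are
illustrative, nothing here is from the original post):

void main()
{
    enum n = 1_000;                  // 1000x1000 doubles = 8 MB, far past any cache
    auto src = new double[n * n];
    auto dst = new double[n * n];
    dst[] = src[];                   // plain array copy; this is the spot where a
                                     // cache-aware memcpy with non-temporal stores
                                     // would pay off
}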


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Christopher Wright

Don wrote:
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?


I don't use large arrays very often. When I do, I would not copy them if 
I could avoid it. Usually I keep concatenating to an array until a 
certain point, and then I only ever need to read from it, with no copying 
ever necessary. So I would rarely, if ever, benefit from this.


Re: Allowing relative file imports

2009-03-26 Thread Christopher Wright

Andrei Alexandrescu wrote:

Georg Wrede wrote:

Walter Bright wrote:

Daniel Keep wrote:

It should be noted that this is really no different to executing
arbitrary code on a machine.  That said, compiling a program is not
typically thought of as "executing" code, so some restrictions in this
case would probably be prudent.


Here's the scenario I'm concerned about. Let's say you set up a 
website that instead of supporting javascript, supports D used as a 
scripting language. The site thus must run the D compiler on the 
source code. When it executes the resulting code, that execution 
presumably will run in a "sandbox" at a low privilege level.


But the compiler itself will be part of the server software, and may 
run at a higher privilege. The import feature could possible read any 
file in the system, inserting it into the executable being built. The 
running executable could then supply this information to the 
attacker, even though it is sandboxed.


This is why even using the import file feature must be explicitly 
enabled by a compiler switch, and which directories it can read must 
also be explicitly set with a compiler switch. Presumably, it's a lot 
easier for the server software to control the compiler switches than 
to parse the D code looking for obfuscated file imports.


As almost everybody else here, I've maintained a couple of websites.

Using D to write CGI programs (that are compiled, real binaries) is 
appealing, but I'd never even think about having the web server itself 
use the D compiler!!!


I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Of course it is, probably just not in C. Last time I looked, there are 
two concepts around, one of "statically-generated dynamic pages" and one 
of "entirely dynamic pages". I know because I installed an Apache server 
and at that time support for statically-generated dynamic pages was new.


What that means is this:

a) statically-generated dynamic = you generate the page once, it's good 
until the source of the page changes;


b) "really" dynamic page = you generate the page at each request.


Have you ever done web development? If so, did you actually do *code 
generation* on each page request? If so, I never want to work with you.


Web applications in compiled languages pretty much never invoke the 
compiler when they're running. Very few programs need a compiler on the 
machine they're deployed to. It's a security risk, and it's an unneeded 
dependency, and it pretty much guarantees a maintenance and debugging 
problem, and it promises performance issues.


Re: State of Play

2009-03-26 Thread Jarrett Billingsley
On Thu, Mar 26, 2009 at 6:07 PM, Walter Bright
 wrote:
>
> But that's why the download page divides the downloads into "latest" and
> "stable." If you want "stable", why download "latest"?

Stable doesn't just mean "not changing."  It also means "providing a
strong foundation upon which something can be built."  The older
compilers are usually anything but that, as the newer ones usually fix
more things than they break.  The library developers are forced
to use newer compilers because there are showstopping bugs in the
older ones, and the library users are then forced to use newer
compilers as a result.  At least, that's what I've experienced.

> Furthermore, before release, it is made available to the DWT and Tango teams
> to see if it breaks them. If I made it generally available, how is that
> different from the "latest" on the download page? There's even a "bundle"
> version that comes with various libraries tested and verified with it.

Well usually problems only arise when things change - directory
structure, new features, language spec changes (.init).  Bugfixes
rarely create problems.  For example I'd welcome fixes to things like
bugs 313 and 314 (which are ancient, by the way) even if it means I
have to change my code, because I know that my code is more correct as
a result.  I feel like the idea behind the nightly releases is so that
when _changes_ occur, or when regressions are introduced, they can be
stamped out before a major release.  DWT and Tango are major projects
but are by no means an exhaustive testbed.

So what about the following counterargument: "even if nightly builds
were made available, how can we be sure that enough people are using
them to sufficiently test them?"  OK, sure, if not many people are
using the nightly builds, then there wouldn't be much benefit.  But it
does seem to work out fine for a lot of projects.  And with a proper
SCM set up which you commit to daily, there's virtually no work on
your part.  You just commit, and everyone else can download and
compile.


Re: Allowing relative file imports

2009-03-26 Thread Christopher Wright

Georg Wrede wrote:

As almost everybody else here, I've maintained a couple of websites.

Using D to write CGI programs (that are compiled, real binaries) is 
appealing, but I'd never even think about having the web server itself 
use the D compiler!!!


I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Similarly, how often do you do code generation in a PHP application? You 
can do it, and I'm sure people use eval for small things, but anything 
bigger than that, it just becomes a mess.


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Walter Bright

Andrei Alexandrescu wrote:
I'd think so. In this day and age it is appalling that we don't quite 
know how to quickly copy memory around. A long time ago I ran some 
measurements (http://www.ddj.com/cpp/184403799) and I was quite 
surprised. My musings were as true then as now. And now we're getting to 
the second freakin' Space Odyssey!


It turns out that efficiently copying objects only a few bytes long 
requires a bunch of code. So, I gave up on having the compiler generate 
intrinsic memcpy code, and instead just call the library function 
memcpy. This is implemented in the next update.


Re: State of Play

2009-03-26 Thread Walter Bright

Steven Schveighoffer wrote:
On Thu, 26 Mar 2009 17:27:25 -0400, Walter Bright 
You can already use shared/unshared. The semantics aren't 
implemented, but the type system support for it is.


But is it enforced?


No, it is just type-checked.

Basically, I want to focus on one new language 
aspect at a time.  As far as I know, with the current D compiler, I can 
access a global not marked shared from multiple threads, no?


That's correct.

When 
shared/unshared is actually implemented, each thread gets its own copy, 
right?


Of unshared data, right, each thread gets its own copy. The default will 
be __thread for globals.
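
A minimal sketch of that model in D2 (the variable names are made up):

shared int visibleToAllThreads;   // explicitly shared: one instance for all threads
int perThreadCounter;             // unshared: with __thread as the default,
                                  // every thread gets its own copy

void bump()
{
    ++perThreadCounter;           // touches only the calling thread's copy
    // Once the semantics are enforced, unsynchronized access to
    // visibleToAllThreads is what the type system is meant to catch.
}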


Re: D1.x series proposal [was: State of Play]

2009-03-26 Thread Sean Kelly
== Quote from Leandro Lucarella (llu...@gmail.com)'s article
>
> D situation is a little different because D2 is already here, and it's too
> much ahead of D1. So a plan to backport features from D2 to D1
> progressively should be done.

Who is going to do these backports?  Personally, I'd rather think about
moving my code to D2 in one jump than in a bunch of incremental
steps, each requiring a custom compiler.


Re: State of Play

2009-03-26 Thread cemiller
On Thu, 26 Mar 2009 12:17:04 -0700, Walter Bright  
 wrote:



Denis Koroskin wrote:

One of the breaking changes that I recall was that you made the Posix
identifier built-in, and thus any custom Posix versioning became an
error. Not sure if it was introduced in 1.041, though, but it is
still a breaking change.


It was more of a build system change, but I get your point. It shows  
that even trivial changes are a bad idea for D1.




Isn't there a very simple way to get the best of both: allow these  
versions to be re-set if they are valid for the platform, e.g. still allow  
me to pass -version=Posix on Linux, but fail on Windows (ignoring that  
Windows can have some POSIX compliance).


The same could be done for the rest of the pre-set ones. It could also be  
an easy way for programmers to enforce compiling only with supported  
configurations.
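
For what it's worth, a minimal sketch of the kind of platform gating being
discussed (the module name is made up):

module platformcheck;

version (Posix)
{
    pragma(msg, "building for a POSIX system");
}
else version (Windows)
{
    pragma(msg, "building for Windows");
}
else
{
    static assert(false, "unsupported platform");
}

Under the suggestion above, a build script that passes -version=Posix would
keep working on Linux, where the identifier is already set, and still fail on
Windows.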


Re: State of Play

2009-03-26 Thread Walter Bright

Tomas Lindquist Olsen wrote:

However, now that the DMD source is available, that can change. All we
need is for it to be put in a public repository (svn, hg, etc.). Then
people would be able to see what you're doing and test the fixes.
Catching bugs early in the process, as well as providing pretty much
guaranteed code review. I know I'd be subscribing my RSS reader to the
changelog at least.


I think that's a good idea.


Re: State of Play

2009-03-26 Thread Walter Bright

Leandro Lucarella wrote:

This is another problem with D: the lack of a public SCM repository for,
at least, the frontend. Now each DMD release is really a "nightly snapshot"
without any real-world testing (I guess). That's another reason why
"stable" DMD releases keep breaking.

If one could download a snapshot every once in a while and give it a try,
most breaking changes should be detected *before* the actual release. Or
maybe alpha or beta versions should be released.

When I see a compiler version 1.041, I see a stable release number, not
a beta, release candidate or nightly snapshot, but sadly that is what
it is. Maybe that is a big source of confusion.


But that's why the download page divides the downloads into "latest" and 
"stable." If you want "stable", why download "latest"?


http://www.digitalmars.com/d/download.html

Furthermore, before release, it is made available to the DWT and Tango 
teams to see if it breaks them. If I made it generally available, how is 
that different from the "latest" on the download page? There's even a 
"bundle" version that comes with various libraries tested and verified 
with it.


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Sean Kelly
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
>
> As a rule of thumb, it's generally good to use memcpy (and consequently
> fill-by-copy) if you can — for large data sets, memcpy doesn't make much
> difference, and for smaller data sets, it might be much faster. For
> cheap-to-copy objects, Duff's Device might perform faster than a simple
> for loop. Ultimately, all this is subject to your compiler's and
> machine's whims and quirks.
> There is a very deep, and sad, realization underlying all this. We are
> in 2001, the year of the Spatial Odyssey. We've done electronic
> computing for more than 50 years now, and we strive to design more and
> more complex systems, with unsatisfactory results. Software development
> is messy. Could it be because the fundamental tools and means we use are
> low-level, inefficient, and not standardized? Just step out of the box
> and look at us — after 50 years, we're still not terribly good at
> filling and copying memory.

I don't know how sad this is.  For better or worse, programming is still a
craft, much like blacksmithing.  Code is largely written from scratch for
each project, techniques are jealously guarded (in our case via copyright
law), etc.  This may not be great from the perspective of progress, but
it certainly makes the work more interesting.  But then I'm a tinker at
heart, so YMMV.


Re: State of Play

2009-03-26 Thread Leandro Lucarella
Walter Bright, el 26 de marzo a las 14:05 me escribiste:
> Tomas Lindquist Olsen wrote:
> >I don't necessarily want a 100% stable language. In fact I don't. But
> >obviously asking for both is just silly.
> >The only thing I'm not happy about is if code that used to work, still
> >compiles, but no longer works. This is where the real problem is and
> >I've seen it several times. MinWin, APaGeD and probably others.
> 
> What do you suggest?
> 
> It's why there's a "last stable version" of D1 on the website. With any
> software package, if you always download the latest version but another
> package was only tested with a different version, it's likely to have
> problems.

This is another problem with D: the lack of a public SCM repository for,
at least, the frontend. Now each DMD release is really a "nightly snapshot"
without any real-world testing (I guess). That's another reason why
"stable" DMD releases keep breaking.

If one could download a snapshot every once in a while and give it a try,
most breaking changes should be detected *before* the actual release. Or
maybe alpha or beta versions should be released.

When I see a compiler version 1.041, I see a stable release number, not
a beta, release candidate or nightly snapshot, but sadly that is what
it is. Maybe that is a big source of confusion.

Maybe an improvement to the version numbering scheme could be very useful in
this regard. Even for people looking at D2.0, D2.032 really looks like
a stable release. D2.0alpha032, for example, is really ugly but at least
looks more like an alpha release =)
D1.041rc2 looks like a release candidate, not a final version, and if it
breaks something, I wouldn't be as surprised as if D1.041 breaks.

Anyway, I will be glad if you take a look at the D1.x series proposal =)
http://www.prowiki.org/wiki4d/wiki.cgi?D1XProposal

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: What can you "new"

2009-03-26 Thread Jarrett Billingsley
On Thu, Mar 26, 2009 at 2:49 PM, grauzone  wrote:
> Jarrett Billingsley wrote:
>>
>> On Thu, Mar 26, 2009 at 1:33 PM, Nick Sabalausky  wrote:
>>
>>> Besides, I'd think an OS written in D would certainly have the potential
>>> to
>>> really shake up the current OS market. Not because people would say "Oh,
>>> wow, it's written in D", of course, but because the developers would have
>>> a
>>> far easier time making it, well, good.
>>
>> *cough*www.xomb.org*cough*
>
> Your point? Yes, we know that D can be used to write hobby kernels.

No need to be hostile about it.  I was just letting Nick know.


D1.x series proposal [was: State of Play]

2009-03-26 Thread Leandro Lucarella
Tomas Lindquist Olsen, el 26 de marzo a las 19:13 me escribiste:
> > That's why I'd love to see some kind of D 1.1 (maybe LDC could be used
> > to
> > make an "unofficial" D 1.1 language), with a few minor non-breaking new
> > features over D 1.0, then D 1.2 could introduce some more, and so on.
> > This
> > way people can catch-up easly, with small simple iterations, and D1
> > wont
> > feel as a dead frozen language.
> 
>  I think this is bound to happen sooner or later.
> >>>
> >>> Well, then I'd love it happen sooner ;)
> >>
> >> We could start by figuring out what D 1.1 is ...
> >
> > It's D2 - const/invariant, yeaaah! :-P
> 
> Sounds a little drastic to me.

Yes, I think small iterations are better. It's easier to catch-up and
start using new features for developers, it's less likely for something
to break, etc.

I think Python got the development model really right. From Wikipedia:
"Python 2.0 was released on 16 October 2000, with many major new features
including a full garbage collector and support for unicode. However, the
most important change was to the development process itself, with a shift
to a more transparent and community-backed process."

I don't think it is a coincidence that since Python 2.0 the language has
grown so fast. I think D can learn a lot from Python =)

Again from Wikipedia:
"Python 3.0, a major, backwards-incompatible release, was released on
3 December 2008 after a long period of testing. Many of its major
features have been backported to the backwards-compatible Python 2.6."

I've searched through the new D2 features page[1] and the D2 changelog[2] and
started a wiki page to make a plan for an (unofficial) D1.x language
series:
http://www.prowiki.org/wiki4d/wiki.cgi?D1XProposal

Here is a transcription for the lazy ones that don't want to switch to the
browser and to ease the discussion in the NG:


D1.x series proposal


This proposal is intended to address the current feeling that D1 is
obsolete and D2 is not ready for real-life use. There are a lot of new
features in D2 that are simple, useful and easily backported to D1, but D1
is marked as frozen (or stable), so no new features are added. Worse,
there is a very thin line between what is a new feature and what is a bugfix
or a harmless addition, so sometimes D1 gets new features anyway (like
several new predefined version identifiers, the .__vptr and .__monitor
properties, extern(System), etc.). And it's much easier for D1 to break when
adding this kind of stuff, and suddenly the frozen D1 version is not stable
anymore, and it doesn't provide anything new that is really interesting.

What would be nice is to have a language that evolves fast, with small
iterations, but each iteration being really stable. This way developers
can be more confident in the stability of the language concerning one
particular version. If some code works with D1.034, then it should work
with D1.045 without changes, or with any other D1.0xx.

Python has a very good development model, and D can learn a lot from it.
Python versions are composed of 3 numbers, major version, minor version,
and bugfix version. Bugfix releases should not introduce backward-
incompatible changes at all, only bugfixes. If your code works with Python
2.5.0, it will work with Python 2.5.3 too. Minor version releases happen
every year to year and a half, and can include backward-compatible changes
or even backward-incompatible changes, as long as they are easy to spot and
upgrade. Generally every backward-incompatible change is added in 2 steps.
The change is introduced in minor version X, but only enabled if the
developer asks for it (using from __future__ import feature; in D it can
be a compiler flag or a pragma). Any incompatibility with the new feature
issues a warning (even when the new feature is not requested by the user).
For example, if a new keyword is introduced, and you have a symbol with
the same name as the new keyword, a warning is issued. In version X+1, the
new feature is enabled by default, and if any backward compatibility
with the old version can be maintained, it is kept but issues a deprecation
warning. Finally, in X+2 all (deprecated) backward compatibility is
removed. When a new major version is on the way, like with 3.0, all new
features that can be ported to the previous major version are ported, and
a new flag can be set to enable forward-compatibility warnings, to ease
writing forward-compatible programs.

D's situation is a little different because D2 is already here, and it's too
far ahead of D1. So a plan to progressively backport features from D2 to D1
should be made.

Here are the features proposed for D1.x series:

D1.1

This should be a transitional version, with as few changes as possible,
just to try out how the new version is received by the community. Only
trivial changes are backported, especially the ones that improve
forward compatibility of code. Porting D1.0 programs to D1.1 should be
trivial.

Re: State of Play

2009-03-26 Thread Steven Schveighoffer
On Thu, 26 Mar 2009 17:27:25 -0400, Walter Bright  
 wrote:



Steven Schveighoffer wrote:

what needs to be done:
 1. Make Tango build on top of druntime.  I just merged from trunk  
yesterday, which was about 300 files, so most likely there will be  
compile issues ;)

2. Const-ify everything.  Some parts are already done.
3. Make all opApply's scoped properly.
 Not sure what happens after that, but step 2 alone is a ton of work.   
In addition, there are some blocker bugs in DMD (1645 and 2524 right  
now) that prevent a complete port.
 When the shared/unshared paradigm is released, there's probably  
another ton of work to do :)


You can already use shared/unshared. The semantics aren't implemented,  
but the type system support for it is.


But is it enforced?  Basically, I want to focus on one new language aspect  
at a time.  As far as I know, with the current D compiler, I can access a  
global not marked shared from multiple threads, no?  When shared/unshared  
is actually implemented, each thread gets its own copy, right?  It's still  
a fuzzy concept to me, and it seems like a waste of time to try and write  
code to use features that don't yet exist.  The longer I can put it off,  
the better.


The only impact I've seen so far is that singletons in Tango were  
sometimes named 'shared', so I had to change the names because of the  
reserved keyword.


-Steve


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 10:05 PM, Walter Bright
 wrote:
> Tomas Lindquist Olsen wrote:
>>
>> I don't necessarily want a 100% stable language. In fact I don't. But
>> obviously asking for both is just silly.
>> The only thing I'm not happy about is if code that used to work, still
>> compiles, but no longer works. This is where the real problem is and
>> I've seen it several times. MinWin, APaGeD and probably others.
>
> What do you suggest?
>
> It's why there's a "last stable version" of D1 on the website. With any
> software package, if you always download the latest version but another
> package was only tested with a different version, it's likely to have
> problems.

I'm not sure about what to do with the damage done already. I think it
came across worse than I meant it to.
In the future I think this can largely be solved by my public source
repository proposal.

>


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 10:02 PM, Walter Bright
 wrote:
> Tomas Lindquist Olsen wrote:
>>
>> Which leads me to: If I was to help with a D 1.1 implementation, only
>> features that would not change any semantics of valid D1 code would go
>> in.
>
> But they always do. Many of the complaints about D1 breaking things is the
> result of bug fixes to D1 inadvertently breaking something unexpected.
>

True.

However, now that the DMD source is available, that can change. All we
need is for it to be put in a public repository (svn, hg, etc.). Then
people would be able to see what you're doing and test the fixes.
Catching bugs early in the process, as well as providing pretty much
guaranteed code review. I know I'd be subscribing my RSS reader to the
changelog at least.


Re: Allowing relative file imports

2009-03-26 Thread Andrei Alexandrescu

Georg Wrede wrote:

Walter Bright wrote:

Daniel Keep wrote:

It should be noted that this is really no different to executing
arbitrary code on a machine.  That said, compiling a program is not
typically thought of as "executing" code, so some restrictions in this
case would probably be prudent.


Here's the scenario I'm concerned about. Let's say you set up a 
website that instead of supporting javascript, supports D used as a 
scripting language. The site thus must run the D compiler on the 
source code. When it executes the resulting code, that execution 
presumably will run in a "sandbox" at a low privilege level.


But the compiler itself will be part of the server software, and may 
run at a higher privilege. The import feature could possible read any 
file in the system, inserting it into the executable being built. The 
running executable could then supply this information to the attacker, 
even though it is sandboxed.


This is why even using the import file feature must be explicitly 
enabled by a compiler switch, and which directories it can read must 
also be explicitly set with a compiler switch. Presumably, it's a lot 
easier for the server software to control the compiler switches than 
to parse the D code looking for obfuscated file imports.


As almost everybody else here, I've maintained a couple of websites.

Using D to write CGI programs (that are compiled, real binaries) is 
appealing, but I'd never even think about having the web server itself 
use the D compiler!!!


I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Of course it is, probably just not in C. Last time I looked, there are 
two concepts around, one of "statically-generated dynamic pages" and one 
of "entirely dynamic pages". I know because I installed an Apache server 
and at that time support for statically-generated dynamic pages was new.


What that means is this:

a) statically-generated dynamic = you generate the page once, it's good 
until the source of the page changes;


b) "really" dynamic page = you generate the page at each request.

Rdmd might get one thinking of such, but then, how many websites use 
dynamically created PHP? Dynamically created pages yes, but with static 
PHP source.


I must be missing something big here...


I think D with rdmd would be great for (a).


Andrei
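
A minimal sketch of what (a) could look like with rdmd on a Unix host; the
script and its output are hypothetical:

#!/usr/bin/env rdmd
import std.stdio;

void main()
{
    // Emit a CGI-style response. rdmd rebuilds this only when the source
    // changes; otherwise it just runs the cached binary.
    writeln("Content-Type: text/html\n");
    writeln("<html><body>Hello from D</body></html>");
}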


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Walter Bright

Don wrote:
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?


I say go for it!


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 9:45 PM, grauzone  wrote:
> Tomas Lindquist Olsen wrote:
>>
>> Which leads me to: If I was to help with a D 1.1 implementation, only
>> features that would not change any semantics of valid D1 code would go
>> in.
>
> Isn't this the point of the whole "D 1.1" idea?
>

People seem to have different ideas of what D 1.1 should be.


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Andrei Alexandrescu

Don wrote:
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?


I'd think so. In this day and age it is appalling that we don't quite 
know how to quickly copy memory around. A long time ago I ran some 
measurements (http://www.ddj.com/cpp/184403799) and I was quite 
surprised. My musings were as true then as now. And now we're getting to 
the second freakin' Space Odyssey!


===
Things are clearly hazy, aren't they? First off, maybe it came as a 
surprise to you that there's more than one way to fill and copy objects. 
Then, there's no single variant of fill and copy that works best on all 
compilers, data sets, and machines. (I guess if I tested the same code 
on a Celeron, which has less cache, I would have gotten very different 
results. To say nothing about other architectures.)


As a rule of thumb, it's generally good to use memcpy (and consequently 
fill-by-copy) if you can — for large data sets, memcpy doesn't make much 
difference, and for smaller data sets, it might be much faster. For 
cheap-to-copy objects, Duff's Device might perform faster than a simple 
for loop. Ultimately, all this is subject to your compiler's and 
machine's whims and quirks.


There is a very deep, and sad, realization underlying all this. We are 
in 2001, the year of the Spatial Odyssey. We've done electronic 
computing for more than 50 years now, and we strive to design more and 
more complex systems, with unsatisfactory results. Software development 
is messy. Could it be because the fundamental tools and means we use are 
low-level, inefficient, and not standardized? Just step out of the box 
and look at us — after 50 years, we're still not terribly good at 
filling and copying memory.




Andrei


Re: State of Play

2009-03-26 Thread Walter Bright

Steven Schveighoffer wrote:

what needs to be done:

1. Make Tango build on top of druntime.  I just merged from trunk 
yesterday, which was about 300 files, so most likely there will be 
compile issues ;)

2. Const-ify everything.  Some parts are already done.
3. Make all opApply's scoped properly.

Not sure what happens after that, but step 2 alone is a ton of work.  In 
addition, there are some blocker bugs in DMD (1645 and 2524 right now) 
that prevent a complete port.


When the shared/unshared paradigm is released, there's probably another 
ton of work to do :)


You can already use shared/unshared. The semantics aren't implemented, 
but the type system support for it is.




And of course, there's the possibility of redesigns.  A lot of code can 
probably make use of __traits and there are the new range constructs to 
consider.  Of course, these are all probably things that would be 
severely different from the D1 version, so they probably won't happen 
for a while.


Note that the September or later date is based on the amount of time I 
spend on it (which isn't a lot).  Someone who wanted to do nothing but 
porting Tango to D2 could probably get it done in a month or two.  Note 
also that I don't consider the port done until all the "cast to get it 
to compile" hacks are removed.  In some cases, this requires design 
changes, and in some of those, tough decisions.


-Steve


Re: State of Play

2009-03-26 Thread Walter Bright

Tomas Lindquist Olsen wrote:

D1 does have some missing features that are in D2, and could be
backported to D1 without breaking any code.
This isn't going to happen for the sake of stability. But if I want to
use some of the new features, I have to get all the cruft that made me
look into D in the first place as well. A major reason I started with
D was because of simple syntax, GC and lack of the const hell.

D2 is no longer a simple language, you have to know all kinds of shit
to be able to use it correctly.

All my projects at the moment are in C++. And TBH I don't see that
changing any time soon. The stuff I did in D no longer works, and I
don't have time to debug the entire thing to figure out how/where the
compiler changed.


I've worked with C/C++ for decades, and it's a miracle if code that 
compiled 10 years ago compiles today without changes. Code never works 
without changes when porting to a different compiler. Code targeted at 
VC tends to target specific versions of VC. "Portable" libraries like 
Boost, STL, and Hans Boehm GC are full of #ifdef's.


The only real difference from D is that:
1. The evolution of C++ is about 10 times slower than D.
2. We're all so used to working around these problems in C++, we don't 
notice it.


I understand your frustrations with the changes, just not why that means 
using C++.



And yes, the Phobos vs. Tango (which in turn keeps breaking) situation
of course isn't making things better.


Re: Allowing relative file imports

2009-03-26 Thread Georg Wrede

Walter Bright wrote:

Daniel Keep wrote:

It should be noted that this is really no different to executing
arbitrary code on a machine.  That said, compiling a program is not
typically thought of as "executing" code, so some restrictions in this
case would probably be prudent.


Here's the scenario I'm concerned about. Let's say you set up a website 
that instead of supporting javascript, supports D used as a scripting 
language. The site thus must run the D compiler on the source code. When 
it executes the resulting code, that execution presumably will run in a 
"sandbox" at a low privilege level.


But the compiler itself will be part of the server software, and may run 
at a higher privilege. The import feature could possibly read any file 
in the system, inserting it into the executable being built. The running 
executable could then supply this information to the attacker, even 
though it is sandboxed.


This is why even using the import file feature must be explicitly 
enabled by a compiler switch, and which directories it can read must 
also be explicitly set with a compiler switch. Presumably, it's a lot 
easier for the server software to control the compiler switches than to 
parse the D code looking for obfuscated file imports.


Like almost everybody else here, I've maintained a couple of websites.

Using D to write CGI programs (that are compiled, real binaries) is 
appealing, but I'd never even think about having the web server itself 
use the D compiler!!!


I mean, how often do you see web sites where stuff is fed to a C 
compiler and the resulting programs run? (Yes it's too slow, but 
that's hardly the point here.) That is simply not done.


Rdmd might get one thinking of such, but then, how many websites use 
dynamically created PHP? Dynamically created pages yes, but with static 
PHP source.


I must be missing something big here...



Re: State of Play

2009-03-26 Thread Walter Bright

Tomas Lindquist Olsen wrote:

I don't necessarily want a 100% stable language. In fact I don't. But
obviously asking for both is just silly.
The only thing I'm not happy about is if code that used to work, still
compiles, but no longer works. This is where the real problem is and
I've seen it several times. MinWin, APaGeD and probably others.


What do you suggest?

It's why there's a "last stable version" of D1 on the website. With any 
software package, if you always download the latest version but another 
package was only tested with a different version, it's likely to have 
problems.


Re: State of Play

2009-03-26 Thread Walter Bright

Tomas Lindquist Olsen wrote:

Which leads me to: If I was to help with a D 1.1 implementation, only
features that would not change any semantics of valid D1 code would go
in.


But they always do. Many of the complaints about D1 breaking things are 
the result of bug fixes to D1 inadvertently breaking something unexpected.


Re: Is 2X faster large memcpy interesting?

2009-03-26 Thread Georg Wrede

Don wrote:
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?


BTW: I tested the memcpy() code provided in AMD's 1992 optimisation 
manual, and in Intel's 2007 manual. Only one of them actually gave any 
benefit when run on a 2008 Intel Core2 -- which was it? (Hint: it wasn't 
Intel!)
I've noticed that AMD's docs are usually greatly superior to Intel's, but 
this time the difference is unbelievable.


What's the alternative? What would you do instead? Is there something 
cooler or more important for D to do?


(IMHO, if the other alternatives have any merit, then I'd vote for them.)

But then again, you've already invested in this, and it clearly 
interests you. Laborious, yes, but it sounds fun.


Re: What can you "new"

2009-03-26 Thread Steven Schveighoffer
On Thu, 26 Mar 2009 16:33:09 -0400, Andrei Alexandrescu  
 wrote:



Don wrote:

Cristian Vlasceanu wrote:
Hm... how should I put it nicely... wait, I guess I can't: if you guys  
think D is a systems language, you are smelling your own farts!


Because 1) GC magic and deterministic system level behavior are not  
exactly good friends, and 2) YOU DO NOT HAVE A SYSTEMS PROBLEM TO  
SOLVE. C was invented to write an OS in a portable fashion. Now that's  
a systems language. Unless you are designing the next uber OS, D is a  
solution in search of a problem, ergo not a systems language (sorry  
Walter). It is a great application language though, and if people  
really need custom allocation schemes, then they can write that part  
in C/C++ or even assembler (and I guess you can provide a custom  
run-time too, if you really DO HAVE a systems problem to address --  
like developing for an embedded platform).
 You're equating "systems language" with "language intended for writing  
a complete operating system". That's not what's intended.

AFAIK there are no operating systems written solely in C++.
Probably, D being a "systems language" actually means "D is competing  
with C++".


I'm surprised at how many people misunderstand the "systems language" or  
"systems-level programming" terms. Only a couple of months ago, a good  
friend whom I thought would know a lot better, mentioned that he thought  
a "systems-level language" is one that can be used to build large  
systems.


wikipedia to the rescue!

http://en.wikipedia.org/wiki/System_programming_language
http://en.wikipedia.org/wiki/System_software

-Steve


Re: State of Play

2009-03-26 Thread grauzone

Tomas Lindquist Olsen wrote:

On Thu, Mar 26, 2009 at 9:25 PM, Tomas Lindquist Olsen
 wrote:

On Thu, Mar 26, 2009 at 9:02 PM, Walter Bright
 wrote:

Tomas Lindquist Olsen wrote:

On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
 wrote:

Denis Koroskin wrote:

One of the breaking changes that I recall was that you made Posix
identifier built-in and thus any custom Posix versioning became an
error. Not sure if it was introduced in 1.041, though, but it is
still a breaking change.

It was more of a build system change, but I get your point. It shows that
even trivial changes are a bad idea for D1.


Everyone certainly does not think it was a bad idea. If trivial things
like this set people off, they should at least look at the problem
(and comment those few lines) before complaining.

All my humble opinion of course.

To me, it illustrates a fundamental disconnect. One cannot have both a 100%
stable language and yet introduce improvements to it.


I don't necessarily want a 100% stable language. In fact I don't. But
obviously asking for both is just silly.
The only thing I'm not happy about is if code that used to work, still
compiles, but no longer works. This is where the real problem is and
I've seen it several times. MinWin, APaGeD and probably others.



Which leads me to: If I was to help with a D 1.1 implementation, only
features that would not change any semantics of valid D1 code would go
in.


Isn't this the point of the whole "D 1.1" idea?


Re: What can you "new"

2009-03-26 Thread Andrei Alexandrescu

Don wrote:

Cristian Vlasceanu wrote:
Hm... how should I put it nicely... wait, I guess I can't: if you guys 
think D is a systems language, you are smelling your own farts!


Because 1) GC magic and deterministic system level behavior are not 
exactly good friends, and 2) YOU DO NOT HAVE A SYSTEMS PROBLEM TO 
SOLVE. C was invented to write an OS in a portable fashion. Now that's 
a systems language. Unless you are designing the next uber OS, D is a 
solution in search of a problem, ergo not a systems language (sorry 
Walter). It is a great application language though, and if people 
really need custom allocation schemes, then they can write that part 
in C/C++ or even assembler (and I guess you can provide a custom 
run-time too, if you really DO HAVE a systems problem to address -- 
like developing for an embedded platform).


You're equating "systems language" with "language intended for writing a 
complete operating system". That's not what's intended.

AFAIK there are no operating systems written solely in C++.
Probably, D being a "systems language" actually means "D is competing 
with C++".


I'm surprised at how many people misunderstand the "systems language" or 
"systems-level programming" terms. Only a couple of months ago, a good 
friend whom I thought would know a lot better, mentioned that he thought 
a "systems-level language" is one that can be used to build large systems.


Andrei


Re: State of Play

2009-03-26 Thread Steven Schveighoffer
On Thu, 26 Mar 2009 15:50:22 -0400, Brad Roberts   
wrote:



Steven Schveighoffer wrote:

On Thu, 26 Mar 2009 07:42:01 -0400, Mike James  wrote:


Is Tango for D2.0 at a level of D1.0 and can be used now?


No.  It is being worked on.  I don't foresee Tango for D2 being ready
until at least September, perhaps later.

-Steve


Would you mind outlining / documenting what needs to be done?  I'd
expect that there are a number of people who would be interested in
volunteering to help.


what needs to be done:

1. Make Tango build on top of druntime.  I just merged from trunk  
yesterday, which was about 300 files, so most likely there will be compile  
issues ;)

2. Const-ify everything.  Some parts are already done.
3. Make all opApply's scoped properly.

Not sure what happens after that, but step 2 alone is a ton of work.  In  
addition, there are some blocker bugs in DMD (1645 and 2524 right now)  
that prevent a complete port.


When the shared/unshared paradigm is released, there's probably another  
ton of work to do :)


And of course, there's the possibility of redesigns.  A lot of code can  
probably make use of __traits and there are the new range constructs to  
consider.  Of course, these are all probably things that would be severely  
different from the D1 version, so they probably won't happen for a while.


Note that the September or later date is based on the amount of time I  
spend on it (which isn't a lot).  Someone who wanted to do nothing but  
porting Tango to D2 could probably get it done in a month or two.  Note  
also that I don't consider the port done until all the "cast to get it to  
compile" hacks are removed.  In some cases, this requires design changes,  
and in some of those, tough decisions.


-Steve


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 9:25 PM, Tomas Lindquist Olsen
 wrote:
> On Thu, Mar 26, 2009 at 9:02 PM, Walter Bright
>  wrote:
>> Tomas Lindquist Olsen wrote:
>>>
>>> On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
>>>  wrote:

 Denis Koroskin wrote:
>
> One of the breaking changes that I recall was that you made Posix
> identifier built-in and thus any custom Posix versioning became an
> error. Not sure if it was introduced in 1.041, though, but it is
> still a breaking change.

 It was more of a build system change, but I get your point. It shows that
 even trivial changes are a bad idea for D1.

>>>
>>> Everyone certainly does not think it was a bad idea. If trivial things
>>> like this set people off, they should at least look at the problem
>>> (and comment those few lines) before complaining.
>>>
>>> All my humble opinion of course.
>>
>> To me, it illustrates a fundamental disconnect. One cannot have both a 100%
>> stable language and yet introduce improvements to it.
>>
>
> I don't necessarily want a 100% stable language. In fact I don't. But
> obviously asking for both is just silly.
> The only thing I'm not happy about is if code that used to work, still
> compiles, but no longer works. This is where the real problem is and
> I've seen it several times. MinWin, APaGeD and probably others.
>

Which leads me to: If I was to help with a D 1.1 implementation, only
features that would not change any semantics of valid D1 code would go
in.


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 9:02 PM, Walter Bright
 wrote:
> Tomas Lindquist Olsen wrote:
>>
>> On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
>>  wrote:
>>>
>>> Denis Koroskin wrote:

 One of the breaking changes that I recall was that you made Posix
 identifier built-in and thus any custom Posix versioning became an
 error. Not sure if it was introduced in 1.041, though, but it is
 still a breaking change.
>>>
>>> It was more of a build system change, but I get your point. It shows that
>>> even trivial changes are a bad idea for D1.
>>>
>>
>> Everyone certainly does not think it was a bad idea. If trivial things
>> like this set people off, they should at least look at the problem
>> (and comment those few lines) before complaining.
>>
>> All my humble opinion of course.
>
> To me, it illustrates a fundamental disconnect. One cannot have both a 100%
> stable language and yet introduce improvements to it.
>

I don't necessarily want a 100% stable language. In fact I don't. But
obviously asking for both is just silly.
The only thing I'm not happy about is if code that used to work, still
compiles, but no longer works. This is where the real problem is and
I've seen it several times. MinWin, APaGeD and probably others.


Re: build a project

2009-03-26 Thread Don

Nick Sabalausky wrote:
"Don"  wrote in message 
news:gqglkh$1dp...@digitalmars.com...

grauzone wrote:

Qian Xu wrote:

grauzone wrote:

To Qian Xu (the OP): welcome to the funny world of the D tool chain. If
you're using dsss/rebuild, you can speed up compilation a bit by 
putting

oneatatime=no into the rebuild configuration file. Because of dmd bugs,
you might need to use the -full switch to avoid linker errors.

I am working in Linux. There is no dsss/rebuild in Linux, is there?
I'd say the reverse is true -- there is dsss/rebuild ONLY in Linux. I 
wouldn't recommend it for Windows.




I'm curious why you say that about the Windows version and what you would 
recommend instead of rebuild on Windows.  


Rebuild can't make DLLs. That's a showstopper for me.
I use bud. Even though it hasn't been touched since rebuild began. :-(.


Re: build a project

2009-03-26 Thread Nick Sabalausky
"Don"  wrote in message 
news:gqglkh$1dp...@digitalmars.com...
> grauzone wrote:
>> Qian Xu wrote:
>>> grauzone wrote:
 To Qian Xu (the OP): welcome to the funny world of the D tool chain. If
 you're using dsss/rebuild, you can speed up compilation a bit by 
 putting
 oneatatime=no into the rebuild configuration file. Because of dmd bugs,
 you might need to use the -full switch to avoid linker errors.
>>>
>>> I am working in Linux. There is no dsss/rebuild in Linux, is there?
>
> I'd say the reverse is true -- there is dsss/rebuild ONLY in Linux. I 
> wouldn't recommend it for Windows.
>

I'm curious why you say that about the Windows version and what you would 
recommend instead of rebuild on Windows. 




Is 2X faster large memcpy interesting?

2009-03-26 Thread Don
The next D2 runtime will include my cache-size detection code. This 
makes it possible to write a cache-aware memcpy, using (for example) 
non-temporal writes when the arrays being copied exceed the size of the 
largest cache.

In my tests, it gives a speed-up of approximately 2X in such cases.
The downside is, it's a fair bit of work to implement, and it only 
affects extremely large arrays, so I fear it's basically irrelevant (It 
probably won't help arrays < 32K in size). Do people actually copy 
megabyte-sized arrays?

Is it worth spending any more time on it?


BTW: I tested the memcpy() code provided in AMD's 1992 optimisation 
manual, and in Intel's 2007 manual. Only one of them actually gave any 
benefit when run on a 2008 Intel Core2 -- which was it? (Hint: it wasn't 
Intel!)
I've noticed that AMD's docs are usually greatly superior to Intel's, but 
this time the difference is unbelievable.
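
For what it's worth, a rough sketch of the dispatch this enables (not the 
actual runtime code: the threshold is passed in where the real thing would 
use the detected cache size, and streamingCopy merely stands in for a loop 
built on non-temporal stores):

// Illustrative only. memcpy is the ordinary C routine; streamingCopy is a
// placeholder for a movnt*-based copy that bypasses the cache hierarchy.
extern (C) void* memcpy(void* dst, const(void)* src, size_t n);

void cacheAwareCopy(void* dst, const(void)* src, size_t n,
                    size_t largestCacheBytes)
{
    if (n <= largestCacheBytes)
        memcpy(dst, src, n);        // fits in cache: an ordinary copy is fine
    else
        streamingCopy(dst, src, n); // huge copy: don't evict the working set
}

void streamingCopy(void* dst, const(void)* src, size_t n)
{
    // A real implementation would use non-temporal stores here.
    memcpy(dst, src, n);
}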


Re: State of Play

2009-03-26 Thread Sean Kelly
== Quote from Walter Bright (newshou...@digitalmars.com)'s article
> Tomas Lindquist Olsen wrote:
> > On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
> >  wrote:
> >> Denis Koroskin wrote:
> >>> One of the breaking changes that I recall was that you made Posix
> >>> identifier built-in and thus any custom Posix versioning became an
> >>> error. Not sure if it was introduced in 1.041, though, but it is
> >>> still a breaking change.
> >> It was more of a build system change, but I get your point. It shows that
> >> even trivial changes are a bad idea for D1.
> >>
> >
> > Everyone certainly does not think it was a bad idea. If trivial things
> > like this set people off, they should at least look at the problem
> > (and comment those few lines) before complaining.
> >
> > All my humble opinion of course.
> To me, it illustrates a fundamental disconnect. One cannot have both a
> 100% stable language and yet introduce improvements to it.
> As for how one develops stable code targeting D1 and D2, I would
> suggest targeting D1 but be careful to use the string alias for all the
> char[]'s, and treat strings as if they were immutable. This will cover
> 90% of any source code changes between D1 and D2, perhaps even more than
> 90%. It's also very possible to write D1 code using the immutability
> style, in fact, I advocated it long before D2 (see all the old threads
> discussing Copy On Write). If code follows the COW principle, it should
> port from D1 to D2 with little more than a few cosmetic changes.

One minor thing in druntime that may help is that opEquals() returns
an equals_t, which evaluates to bool in D2.  It would probably be worth
changing the declaration in D1 to have a similar alias that evaluates to
int.  That should help address another minor inconsistency between D1
and D2.
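
A sketch of how that alias would look from the user's side (illustrative 
only; in practice the alias would live in object.d for each language 
version, and the D_Version2 guard is assumed here just to keep the snippet 
self-contained):

version (D_Version2)
    alias bool equals_t;   // D2: opEquals returns bool
else
    alias int  equals_t;   // D1: opEquals returns int

class Point
{
    int x, y;

    // The same declaration then compiles under both D1 and D2.
    equals_t opEquals(Object o)
    {
        auto p = cast(Point) o;
        return p !is null && p.x == x && p.y == y;
    }
}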


Re: State of Play

2009-03-26 Thread Walter Bright

Tomas Lindquist Olsen wrote:

On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
 wrote:

Denis Koroskin wrote:

One of the breaking changes that I recall was that you made Posix
identifier built-in and thus any custom Posix versioning became an
error. Not sure if it was introduced in 1.041, though, but it is
still a breaking change.

It was more of a build system change, but I get your point. It shows that
even trivial changes are a bad idea for D1.



Everyone certainly does not think it was a bad idea. If trivial things
like this set people off, they should at least look at the problem
(and comment those few lines) before complaining.

All my humble opinion of course.


To me, it illustrates a fundamental disconnect. One cannot have both a 
100% stable language and yet introduce improvements to it.


As for how one develops stable code targeting D1 and D2, I would 
suggest targeting D1 but be careful to use the string alias for all the 
char[]'s, and treat strings as if they were immutable. This will cover 
90% of any source code changes between D1 and D2, perhaps even more than 
90%. It's also very possible to write D1 code using the immutability 
style; in fact, I advocated it long before D2 (see all the old threads 
discussing Copy On Write). If code follows the COW principle, it should 
port from D1 to D2 with little more than a few cosmetic changes.
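
A small illustration of that style (assumptions: D1's string alias for 
char[] is available, and D_Version2 is used to isolate the one cast D2 
needs). The functions never modify their input; they either return the 
original or a fresh copy, which is the COW discipline that lets the same 
source compile under both compilers:

// Copy-on-write helpers, sketched to compile as both D1 and D2.
string stripTrailingDot(string s)
{
    if (s.length == 0 || s[$ - 1] != '.')
        return s;             // unchanged: share the caller's data
    return s[0 .. $ - 1];     // slicing copies nothing either
}

string shout(string s)
{
    char[] buf = s.dup;       // about to modify, so copy first
    foreach (i, c; buf)
    {
        if (c >= 'a' && c <= 'z')
            buf[i] = cast(char)(c - ('a' - 'A'));
    }
    version (D_Version2)
        return cast(string) buf;   // buf has no other references
    else
        return buf;                // D1: string is just char[]
}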


Re: What can you "new"

2009-03-26 Thread Walter Bright

Cristian Vlasceanu wrote:
Hm... how should I put it nicely... wait, I guess I can't: if you guys think 
D is a systems language, you are smelling your own farts!


Because 1) GC magic and deterministic system level behavior are not exactly 
good friends, and 2) YOU DO NOT HAVE A SYSTEMS PROBLEM TO SOLVE. C was 
invented to write an OS in a portable fashion. Now that's a systems 
language. Unless you are designing the next uber OS, D is a solution in 
search of a problem, ergo not a systems language (sorry Walter). It is a 
great application language though, and if people really need custom 
allocation schemes, then they can write that part in C/C++ or even assembler 
(and I guess you can provide a custom run-time too, if you really DO HAVE a 
systems problem to address -- like developing for an embedded platform).


Although D has gc support, it is possible (and rather easy) to write 
programs that do not rely at all on the gc. My port of Empire from C to 
D does exactly that.


It is quite possible and practical to write an OS in D, and it has been 
done.
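
One of the mechanisms that makes this practical is the per-class allocator 
and deallocator; a minimal malloc-based sketch (illustrative only, with 
error handling kept deliberately simple):

import std.c.stdlib : malloc, free;

class Node
{
    int  value;
    Node next;

    // Class allocator: every "new Node" comes from malloc, not the GC heap.
    new(size_t sz)
    {
        void* p = malloc(sz);
        if (p is null)
            throw new Exception("out of memory");
        return p;
    }

    // Class deallocator: "delete n" releases the memory immediately.
    delete(void* p)
    {
        if (p)
            free(p);
    }
}

One caveat: memory obtained this way is not scanned by the collector, so 
such objects should not hold the only reference to GC-allocated data.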


Re: State of Play

2009-03-26 Thread Brad Roberts
Steven Schveighoffer wrote:
> On Thu, 26 Mar 2009 07:42:01 -0400, Mike James  wrote:
> 
>> Is Tango for D2.0 at a level of D1.0 and can be used now?
> 
> No.  It is being worked on.  I don't foresee Tango for D2 being ready
> until at least September, perhaps later.
> 
> -Steve

Would you mind outlining / documenting what needs to be done?  I'd
expect that there are a number of people who would be interested in
volunteering to help.


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 8:17 PM, Walter Bright
 wrote:
> Denis Koroskin wrote:
>>
>> One of the breaking changes that I recall was that you made Posix
>> identifier built-in and thus any custom Posix versioning became an
>> error. Not sure if it was introduced in 1.041, though, but it is
>> still a breaking change.
>
> It was more of a build system change, but I get your point. It shows that
> even trivial changes are a bad idea for D1.
>

Everyone certainly does not think it was a bad idea. If trivial things
like this set people off, they should at least look at the problem
(and comment those few lines) before complaining.

All my humble opinion of course.


Re: State of Play

2009-03-26 Thread Steven Schveighoffer

On Thu, 26 Mar 2009 07:42:01 -0400, Mike James  wrote:


Is Tango for D2.0 at a level of D1.0 and can be used now?


No.  It is being worked on.  I don't foresee Tango for D2 being ready until  
at least September, perhaps later.


-Steve


Re: build a project

2009-03-26 Thread Don

grauzone wrote:

Qian Xu wrote:

grauzone wrote:

To Qian Xu (the OP): welcome to the funny world of the D tool chain. If
you're using dsss/rebuild, you can speed up compilation a bit by putting
oneatatime=no into the rebuild configuration file. Because of dmd bugs,
you might need to use the -full switch to avoid linker errors.


I am working in Linux. There is no dsss/rebuild in Linux, is there?


I'd say the reverse is true -- there is dsss/rebuild ONLY in Linux. I 
wouldn't recommend it for Windows.




Yes, there is.

The thing I described above will only make dsss/rebuild compile 
everything in one go. Even if a single file is modified, everything is 
recompiled. But this is still much faster than incremental compilation.


What build system are you using? You said "WAF-tool", but I didn't find 
anything about it.



--Qian ^^)


Re: What can you "new"

2009-03-26 Thread Don

Cristian Vlasceanu wrote:
Hm... how should I put it nicely... wait, I guess I can't: if you guys think 
D is a systems language, you are smelling your own farts!


Because 1) GC magic and deterministic system level behavior are not exactly 
good friends, and 2) YOU DO NOT HAVE A SYSTEMS PROBLEM TO SOLVE. C was 
invented to write an OS in a portable fashion. Now that's a systems 
language. Unless you are designing the next uber OS, D is a solution in 
search of a problem, ergo not a systems language (sorry Walter). It is a 
great application language though, and if people really need custom 
allocation schemes, then they can write that part in C/C++ or even assembler 
(and I guess you can provide a custom run-time too, if you really DO HAVE a 
systems problem to address -- like developing for an embedded platform).


You're equating "systems language" with "language intended for writing a 
complete operating system". That's not what's intended.

AFAIK there are no operating systems written solely in C++.
Probably, D being a "systems language" actually means "D is competing 
with C++".


Re: State of Play

2009-03-26 Thread Walter Bright

Denis Koroskin wrote:

BTW, looks like you released 1.042 about two weeks ago, but the file
is not found on the server.



It wasn't released. 1.041 is the current version.


Re: State of Play

2009-03-26 Thread Walter Bright

Denis Koroskin wrote:

One of the breaking changes that I recall was that you made Posix
identifier built-in and thus any custom Posix versioning became an
error. Not sure if it was introduced in 1.041, though, but it is
still a breaking change.


It was more of a build system change, but I get your point. It shows 
that even trivial changes are a bad idea for D1.





Re: State of Play

2009-03-26 Thread Denis Koroskin
On Thu, 26 Mar 2009 21:27:52 +0300, Walter Bright  
wrote:

> ValeriM wrote:
>> No. It's not stable.
>> Try to build last Tango and DWT releases with D1.041 and you will get the 
>> problems.
>
> I am not aware that D1.041 broke Tango and DWT. Please, which bugzilla
> numbers are those?
> 

BTW, looks like you released 1.042 about two weeks ago, but the file is not 
found on the server.



Re: .NET on a string

2009-03-26 Thread Steven Schveighoffer
On Tue, 24 Mar 2009 20:02:16 -0400, Cristian Vlasceanu  
 wrote:



Steven Schveighoffer Wrote:


On Tue, 24 Mar 2009 18:26:16 -0400, Cristian Vlasceanu
 wrote:

> Back to the slices topic: I agree that my proposed "ref" solution would
> require code changes, but isn't that true for T[new] as well?
>
> Cristian
>

There is not already a meaning for T[new], it is a syntax error.  There is
already a meaning for ref T[].



Yes, but the current, existing meaning will be preserved:

void f(ref T[] a) {
   a[13] = 42; // still works as before if "a" is a slice under the hood
   a = null;   // very easy for the compiler to make this work: a.array = null
}


OK, I'm not sure I understood your original proposal; before I respond
more, let me make it clear what my understanding was.


In your proposal, a ref T[] a is the same as a slice today.  That is,  
assignment to a ref T[] simply copies the pointer and length from another  
T[] or ref T[].  However, it does not reference another slice struct, but  
is a local struct in itself.


In the current situation, a ref T[] is a reference to a slice struct.   
That is, assignment to a ref T[] overwrites the pointer and length on the 
reference that was passed.


So here is my objection:

void trim(ref char[] c)
{
   //
   // remove leading and trailing spaces
   //
   while(c.length > 0 && c[0] == ' ')
      c = c[1..$];
   while(c.length > 0 && c[$-1] == ' ')
      c = c[0..$-1];
}

void foo()
{
   char[] x = "   trim this!   ".dup;
   trim(x);
   assert(x == "trim this!");
}

Now, in your scheme, the ref simply means that c's data is referencing  
something else, not that c is a reference, so the assert will fail, no?


If this isn't the case, let me know how you signify:

1. a normal slice (struct is local, but ptr and length are aliased from  
data).

2. a reference of a slice (references an external struct).

-Steve


Re: State of Play

2009-03-26 Thread Denis Koroskin
On Thu, 26 Mar 2009 21:27:52 +0300, Walter Bright  
wrote:

> ValeriM wrote:
>> No. It's not stable.
>> Try to build last Tango and DWT releases with D1.041 and you will get the 
>> problems.
>
> I am not aware that D1.041 broke Tango and DWT. Please, which bugzilla
> numbers are those?
> 

One of the breaking changes that I recall was that you made Posix identifier 
built-in and thus any custom Posix versioning became an error. Not sure if it 
was introduced in 1.041, though, but it is still a breaking change.
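
For anyone bitten by it, the breakage amounts to code along these lines 
(illustrative snippet, not taken from Tango or DWT); the fix is simply to 
delete or comment out the custom version assignment:

version (linux)
{
    // version = Posix;   // the old custom identifier; setting it became
    //                    // an error once Posix was made built-in
}

version (Posix)
{
    // POSIX-specific declarations, now guarded by the predefined identifier
}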



Re: What can you "new"

2009-03-26 Thread grauzone

Jarrett Billingsley wrote:

On Thu, Mar 26, 2009 at 1:33 PM, Nick Sabalausky  wrote:


Besides, I'd think an OS written in D would certainly have the potential to
really shake up the current OS market. Not because people would say "Oh,
wow, it's written in D", of course, but because the developers would have a
far easier time making it, well, good.


*cough*www.xomb.org*cough*


Your point? Yes, we know that D can be used to write hobby kernels.


Re: What can you "new"

2009-03-26 Thread Jarrett Billingsley
On Thu, Mar 26, 2009 at 1:33 PM, Nick Sabalausky  wrote:

> Besides, I'd think an OS written in D would certainly have the potential to
> really shake up the current OS market. Not because people would say "Oh,
> wow, it's written in D", of course, but because the developers would have a
> far easier time making it, well, good.

*cough*www.xomb.org*cough*


Re: State of Play

2009-03-26 Thread Walter Bright

ValeriM wrote:

No. It's not stable.
Try to build last Tango and DWT releases with D1.041 and you will get the 
problems.


I am not aware that D1.041 broke Tango and DWT. Please, which bugzilla 
numbers are those?


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 6:58 PM, Ary Borenszweig  wrote:
> Tomas Lindquist Olsen wrote:
>>
>> On Thu, Mar 26, 2009 at 6:41 PM, Leandro Lucarella 
>> wrote:
>>>
>>> Tomas Lindquist Olsen wrote on March 26 at 18:18:

 On Thu, Mar 26, 2009 at 5:50 PM, Leandro Lucarella 
 wrote:
>
> ...snip...
>
> That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
> make an "unofficial" D 1.1 language), with a few minor non-breaking new
> features over D 1.0, then D 1.2 could introduce some more, and so on. This
> way people can catch up easily, with small, simple iterations, and D1 won't
> feel like a dead, frozen language.

 I think this is bound to happen sooner or later.
>>>
>>> Well, then I'd love it happen sooner ;)
>>
>> We could start by figuring out what D 1.1 is ...
>
> It's D2 - const/invariant, yeaaah! :-P
>

Sounds a little drastic to me.


Re: State of Play

2009-03-26 Thread Ary Borenszweig

Tomas Lindquist Olsen wrote:

On Thu, Mar 26, 2009 at 6:41 PM, Leandro Lucarella  wrote:

Tomas Lindquist Olsen wrote on March 26 at 18:18:

On Thu, Mar 26, 2009 at 5:50 PM, Leandro Lucarella  wrote:

...snip...

That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
make an "unofficial" D 1.1 language), with a few minor non-breaking new
features over D 1.0, then D 1.2 could introduce some more, and so on. This
way people can catch up easily, with small, simple iterations, and D1 won't
feel like a dead, frozen language.

I think this is bound to happen sooner or later.

Well, then I'd love it happen sooner ;)


We could start by figuring out what D 1.1 is ...


It's D2 - const/invariant, yeaaah! :-P


Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 6:41 PM, Leandro Lucarella  wrote:
> Tomas Lindquist Olsen wrote on March 26 at 18:18:
>> On Thu, Mar 26, 2009 at 5:50 PM, Leandro Lucarella  wrote:
>> > ...snip...
>> >
>> > That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
>> > make an "unofficial" D 1.1 language), with a few minor non-breaking new
>> > features over D 1.0, then D 1.2 could introduce some more, and so on. This
>> > way people can catch up easily, with small, simple iterations, and D1 won't
>> > feel like a dead, frozen language.
>>
>> I think this is bound to happen sooner or later.
>
> Well, then I'd love it happen sooner ;)

We could start by figuring out what D 1.1 is ...

>
> --
> Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
> 
> GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
> 
>


Re: State of Play

2009-03-26 Thread Leandro Lucarella
Tomas Lindquist Olsen wrote on March 26 at 18:18:
> On Thu, Mar 26, 2009 at 5:50 PM, Leandro Lucarella  wrote:
> > ...snip...
> >
> > That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
> > make an "unofficial" D 1.1 language), with a few minor non-breaking new
> > features over D 1.0, then D 1.2 could introduce some more, and so on. This
> > way people can catch up easily, with small, simple iterations, and D1 won't
> > feel like a dead, frozen language.
> 
> I think this is bound to happen sooner or later.

Well, then I'd love it happen sooner ;)

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: State of Play

2009-03-26 Thread Nick Sabalausky
"Leandro Lucarella"  wrote in message 
news:20090326165012.gc17...@burns.springfield.home...
> Ary Borenszweig wrote on March 26 at 10:20:
>> >Note the use of the word "language."
>> >What you're referring to are bugs in the compiler.  It happens.
>> >  -- Daniel
>>
>> But ValeriM has a point. If I code, say, a library in D 1.041 only to
>> find out that in a couple of months it won't compile anymore in D 1.045,
>> that's not good at all. That's when someone sends a message to the
>> newsgroups saying "I just downloaded library Foo, but it won't compile
>> with D 1.045... is it abandoned?  Why isn't it maintained? D1 is
>> broken". The point is, you shouldn't need to maintain libraries for D1
>> anymore. Maybe the test suite for D1 should be bigger to cover more
>> cases...
>
> Another problem with D1 vs. D2 is nobody wants to start new software using
> D1 when D2 is about to come and have breaking changes. Not trivial
> breaking changes, but really base ones, a change in paradigm (D2 will be
> a lot more oriented to concurrency, and a lot of idioms will change). You
> probably have to start thinking in a different way to code with D2, at
> least to take advantage of new features/paradigm.
>
> I think this is something people see and why a lot of people see D1 as
> a dead end. This doesn't happen with other languages AFAIK (at least not
> with Python, which is another moderately fast-evolving language that I use
> often). You can move from Python 2 to Python 3 almost as smoothly as
> moving from 2.5 to 2.6 (and that is *very* smooth). The difference with
> D is Python 3 didn't introduce any paradigm change, it doesn't want to be
> a revolutionary new language as D2 wants.
>
> I think this phrase should not be in the D homepage anymore:
> "It seems to me that most of the "new" programming languages fall into one
> of two categories: Those from academia with radical new paradigms and
> those from large corporations with a focus on RAD and the web. Maybe it's
> time for a new language born out of practical experience implementing
> compilers." -- Michael
>
> I think D now almost fits into the "Those from academia with radical new
> paradigms" category =P
>
> I think this wouldn't happen if D2 were focused on just adding small
> features like the ones already added (like full closures, struct
> improvements, common runtime with Tango, etc.) and AST macros for
> example... =)
>
> If you take small steps, you can evolve more smoothly (like Python) and
> avoid this kind of huge gaps between language versions.
>
> That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
> make an "unofficial" D 1.1 language), with a few minor non-breaking new
> features over D 1.0, then D 1.2 could introduce some more, and so on. This
> way people can catch up easily, with small, simple iterations, and D1 won't
> feel like a dead, frozen language.
>

I've never really minded D's willingness to change, especially this early in 
its lifetime (in fact, I quite respect it), primarily because I've seen 
what too much emphasis on backwards compatibility can eventually do to a 
language (i.e., C/C++).

But that said, you do have an interesting point about providing a migration 
path that breaks the changes into smaller, easier-to-digest chunks.




Re: What can you "new"

2009-03-26 Thread Nick Sabalausky
"Cristian Vlasceanu"  wrote in message 
news:gqf7r3$20o...@digitalmars.com...
> Hm... how should I put it nicely... wait, I guess I can't: if you guys 
> think D is a systems language, you are smelling your own farts!
>
> Because 1) GC magic and deterministic system level behavior are not 
> exactly good friends, and 2) YOU DO NOT HAVE A SYSTEMS PROBLEM TO SOLVE.

Ok, that argument is just plain silly. Of course we have a systems problem 
to solve: Many of us have plenty of reason to write system software 
(embedded, gaming devices, VM's, drivers, hell, even kernel modules), but 
we're absolutely fed up with C/C++.

So what in the world other choice do we have? Java, C#, Python, Ruby? 
Granted, D definitely still needs some improvements in the 
systems-programming area, but never in a million years will any of those 
other languages even remotely approach the level of feasibility for systems 
programming that D already has right now. So what are us systems-programmers 
supposed to do, just stick with that antiquated POS C/C++ for the rest of 
existence? Or come up with something better (D) that we can eventually 
migrate to?

Besides, I'd think an OS written in D would certainly have the potential to 
really shake up the current OS market. Not because people would say "Oh, 
wow, it's written in D", of course, but because the developers would have a 
far easier time making it, well, good.

(And don't knock my farts 'till you've tried them! ;) ) 




Re: State of Play

2009-03-26 Thread Tomas Lindquist Olsen
On Thu, Mar 26, 2009 at 5:50 PM, Leandro Lucarella  wrote:
> ...snip...
>
> That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
> make an "unofficial" D 1.1 language), with a few minor non-breaking new
> features over D 1.0, then D 1.2 could introduce some more, and so on. This
> way people can catch up easily, with small, simple iterations, and D1 won't
> feel like a dead, frozen language.

I think this is bound to happen sooner or later.

>
> Just my 2¢
>
> --
> Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/


Re: State of Play

2009-03-26 Thread Clay Smith

Ary Borenszweig wrote:

Daniel Keep wrote:


ValeriM wrote:

Ary Borenszweig Wrote:


Mike James wrote:

What is the state of play with D1.0 vs. D2.0?

Is D1.0 a dead-end and D2.0 should be used for future projects?

Is D2.0 stable enough for use at the present?

Is Tango for D2.0 at a level of D1.0 and can be used now?

Is DWT ready for D2.0 now?


Regards, mike.
I don't know why a lot of people see D1.0 as a dead-end. It's a 
stable language. It won't get new features. It won't change. It'll 
probably receive bug fixes. It works. It gets the job done. You can 
use it and be sure that, in time, all of what you did will still be 
compatible with "newer" versions of D1.

No. It's not stable.
Try to build last Tango and DWT releases with D1.041 and you will get 
the problems.


"It's a stable language."

Note the use of the word "language."

What you're referring to are bugs in the compiler.  It happens.

  -- Daniel


But ValeriM has a point. If I code, say, a library in D 1.041 only to 
find out that in a couple of months it won't compile anymore in D 1.045, 
that's not good at all. That's when someone sends a message to the 
newsgroups saying "I just downloaded library Foo, but it won't compile 
with D 1.045... is it abandoned? Why isn't it maintained? D1 is broken". 
The point is, you shouldn't need to maintain libraries for D1 anymore. 
Maybe the test suite for D1 should be bigger to cover more cases...


You should be using the compiler that comes bundled with Tango, perhaps.


Re: State of Play

2009-03-26 Thread Leandro Lucarella
Ary Borenszweig wrote on March 26 at 10:20:
> >Note the use of the word "language."
> >What you're referring to are bugs in the compiler.  It happens.
> >  -- Daniel
> 
> But ValeriM has a point. If I code, say, a library in D 1.041 only to
> find out that in a couple of months it won't compile anymore in D 1.045,
> that's not good at all. That's when someone sends a message to the
> newsgroups saying "I just downloaded library Foo, but it won't compile
> with D 1.045... is it abandoned?  Why isn't it maintained? D1 is
> broken". The point is, you shouldn't need to maintain libraries for D1
> anymore. Maybe the test suite for D1 should be bigger to cover more
> cases...

Another problem with D1 vs. D2 is nobody wants to start new software using
D1 when D2 is about to come and have breaking changes. Not trivial
breaking changes, but really base ones, a change in paradigm (D2 will be
a lot more oriented to concurrency, and a lot of idioms will change). You
probably have to start thinking in a different way to code with D2, at
least to take advantage of new features/paradigm.

I think this is something people see and why a lot of people see D1 as
a dead end. This doesn't happen with other languages AFAIK (at least not
with Python, which is another moderately fast-evolving language that I use
often). You can move from Python 2 to Python 3 almost as smoothly as
moving from 2.5 to 2.6 (and that is *very* smooth). The difference with
D is that Python 3 didn't introduce any paradigm change; it doesn't want
to be a revolutionary new language as D2 wants.

I think this phrase should not be in the D homepage anymore:
"It seems to me that most of the "new" programming languages fall into one
of two categories: Those from academia with radical new paradigms and
those from large corporations with a focus on RAD and the web. Maybe it's
time for a new language born out of practical experience implementing
compilers." -- Michael

I think D now almost fits into the "Those from academia with radical new
paradigms" category =P

I think this wouldn't happen if D2 were focused on just adding small
features like the ones already added (like full closures, struct
improvements, common runtime with Tango, etc.) and AST macros for
example... =)

If you take small steps, you can evolve more smoothly (like Python) and
avoid this kind of huge gaps between language versions.

That's why I'd love to see some kind of D 1.1 (maybe LDC could be used to
make an "unofficial" D 1.1 language), with a few minor non-breaking new
features over D 1.0, then D 1.2 could introduce some more, and so on. This
way people can catch up easily, with small, simple iterations, and D1 won't
feel like a dead, frozen language.

Just my 2¢

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/

GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)



Re: State of Play

2009-03-26 Thread Clay Smith

Mike James wrote:

What is the state of play with D1.0 vs. D2.0?

Is D1.0 a dead-end and D2.0 should be used for future projects?

Is D2.0 stable enough for use at the present?

Is Tango for D2.0 at a level of D1.0 and can be used now?

Is DWT ready for D2.0 now?


Regards, mike.


Use DMD 1.0 if you want a stable language that works.

Use DMD 2.0 if you don't mind potentially changing your code with every 
compiler release, and don't need to use that many libraries.


Re: build a project

2009-03-26 Thread Denis Koroskin
On Thu, 26 Mar 2009 18:58:34 +0300, Qian Xu  wrote:

> Denis Koroskin wrote:
>
>> On Thu, 26 Mar 2009 17:44:33 +0300, Qian Xu 
>> wrote:
>>
>>> Hi All,
>>>
>>> I have a project with 300 d-files. I use WAF-tool to build my project.
>>>
>>> If I add one line comment in a d-file from the bottom of the dependency
>>> tree, almost the whole project will be recompiled again. This is very
>>> time consuming.
>>>
>>> Is there any way to detect whether the modification of a d-file does not
>>> affect its dependents?
>>>
>>> Best regards and have a good day
>>> --Qian Xu
>>>
>>
>> You can try comparing DMD-generated .di files from that modified file.
>> There is no need to recompile dependencies if two headers match.
>
> Generating header files also takes a lot of time
>
> --Qian
> 

It's certainly much faster to generate 1 header file than recompile all the 
dependencies each time.
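
A sketch of that check (the paths, the caching layout, and the exact dmd 
flags mentioned in the comment are assumptions): regenerate the interface 
file for the module that changed, compare it with the one cached from the 
previous build, and recompile dependents only when they differ.

import std.file : exists, read;

// True if dependents need recompiling, i.e. the freshly generated .di
// (produced with something like: dmd -c -o- -Hffresh/foo.di foo.d)
// differs from the .di cached after the previous build.
bool interfaceChanged(string cachedDi, string freshDi)
{
    if (!exists(cachedDi))
        return true;   // nothing cached yet: treat as changed
    return cast(ubyte[]) read(cachedDi) != cast(ubyte[]) read(freshDi);
}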



Re: build a project

2009-03-26 Thread grauzone

Qian Xu wrote:

grauzone wrote:

To Qian Xu (the OP): welcome to the funny world of the D tool chain. If
you're using dsss/rebuild, you can speed up compilation a bit by putting
oneatatime=no into the rebuild configuration file. Because of dmd bugs,
you might need to use the -full switch to avoid linker errors.


I am working in Linux. There is no dsss/rebuild in Linux, is there?


Yes, there is.

The thing I described above will only make dsss/rebuild compile 
everything in one go. Even if a single file is modified, everything is 
recompiled. But this is still much faster than incremental compilation.


What build system are you using? You said "WAF-tool", but I didn't find 
anything about it.



--Qian ^^)


Re: build a project

2009-03-26 Thread Qian Xu
Denis Koroskin wrote:

> On Thu, 26 Mar 2009 17:44:33 +0300, Qian Xu 
> wrote:
> 
>> Hi All,
>>
>> I have a project with 300 d-files. I use WAF-tool to build my project.
>>
>> If I add one line comment in a d-file from the bottom of the dependency
>> tree, almost the whole project will be recompiled again. This is very
>> time consuming.
>>
>> Is there any way to detect whether the modification of a d-file does not
>> affect its dependents?
>>
>> Best regards and have a good day
>> --Qian Xu
>> 
> 
> You can try comparing DMD-generated .di files from that modified file.
> There is no need to recompile dependencies if two headers match.

Generating header files also takes a lot of time

--Qian


Re: build a project

2009-03-26 Thread Qian Xu
grauzone wrote:
> 
> To Qian Xu (the OP): welcome to the funny world of the D tool chain. If
> you're using dsss/rebuild, you can speed up compilation a bit by putting
> oneatatime=no into the rebuild configuration file. Because of dmd bugs,
> you might need to use the -full switch to avoid linker errors.

I am working in Linux. There is no dsss/rebuild in Linux, is there?

--Qian ^^)


Re: What can you "new"

2009-03-26 Thread Sean Kelly

grauzone wrote:

Sean Kelly wrote:

Georg Wrede wrote:


To do Systems Work on an embedded system, I'd like to see a D subset, 
without GC, and which would essentially be like C with classes. I've 
even toyed with the idea of having a D1toC translator for the job.


With D2 you can drop in a different allocator to be used by the 
runtime -- there's an example implementation that simply calls 
malloc/free, for example.  You'll leak memory if you perform string 
concatenation or use the AA, but otherwise everything works fine.


You forgot the array literals (which almost look like harmless, 
non-allocating array initializers), and the full closure delegates, 
where the compiler will randomly choose to allocate or not to allocate 
memory ("randomly" from the programmer's point of view). And of course 
most library functions.


The array literals should really be fixed :-p  But you're right--I 
forgot about closures.  Library functions... that's at least something 
easily addressable by the user.  In all fairness, I agree that it isn't 
terribly practical to forego a GC in D, but it is possible for a 
sufficiently motivated user.


Re: build a project

2009-03-26 Thread grauzone

Vladimir Panteleev wrote:
On Thu, 26 Mar 2009 16:54:27 +0200, Denis Koroskin <2kor...@gmail.com> 
wrote:


On Thu, 26 Mar 2009 17:44:33 +0300, Qian Xu 
 wrote:



Hi All,

I have a project with 300 d-files. I use WAF-tool to build my project.

If I add one line comment in a d-file from the bottom of the dependency
tree, almost the whole project will be recompiled again. This is very time 
consuming.

Is there any way to detect whether the modification of a d-file does not 
affect its dependents?

Best regards and have a good day
--Qian Xu



You can try comparing DMD-generated .di files from that modified file.
There is no need to recompile dependencies if two headers match.


Unless you compile with inlining enabled, AFAIK.


A method that would guarantee correctness is to let the compiler read 
only the .di files of other modules. But then the build process would be 
more complicated: you would have to generate the .di files of all the 
modules depended on (which would take a while) and deal with circular 
module dependencies.


To Qian Xu (the OP): welcome to the funny world of the D tool chain. If 
you're using dsss/rebuild, you can speed up compilation a bit by putting 
oneatatime=no into the rebuild configuration file. Because of dmd bugs, 
you might need to use the -full switch to avoid linker errors.


Re: What can you "new"

2009-03-26 Thread Denis Koroskin
On Thu, 26 Mar 2009 18:20:02 +0300, grauzone  wrote:

> Sean Kelly wrote:
>> Georg Wrede wrote:
>>>
>>> To do Systems Work on an embedded system, I'd like to see a D subset,
>>> without GC, and which would essentially be like C with classes. I've
>>> even toyed with the idea of having a D1toC translator for the job.
>>
>> With D2 you can drop in a different allocator to be used by the runtime
>> -- there's an example implementation that simply calls malloc/free, for
>> example.  You'll leak memory if you perform string concatenation or use
>> the AA, but otherwise everything works fine.
>
> You forgot the array literals (which almost look like harmless,
> non-allocating array initializers), and the full closure delegates,
> where the compiler will randomly choose to allocate or not to allocate
> memory ("randomly" from the programmer's point of view). And of course
> most library functions.
> 

Tango is designed in a way to avoid any hidden allocations.



Re: What can you "new"

2009-03-26 Thread grauzone

Sean Kelly wrote:

Georg Wrede wrote:


To do Systems Work on an embedded system, I'd like to see a D subset, 
without GC, and which would essentially be like C with classes. I've 
even toyed with the idea of having a D1toC translator for the job.


With D2 you can drop in a different allocator to be used by the runtime 
-- there's an example implementation that simply calls malloc/free, for 
example.  You'll leak memory if you perform string concatenation or use 
the AA, but otherwise everything works fine.


You forgot the array literals (which almost look like harmless, 
non-allocating array initializers), and the full closure delegates, 
where the compiler will randomly choose to allocate or not to allocate 
memory ("randomly" from the programmer's point of view). And of course 
most library functions.

