Re: Significant GC performance penalty

2012-12-15 Thread Rob T

On Sunday, 16 December 2012 at 05:37:57 UTC, SomeDude wrote:


Isn't the memory management completely negligible when compared 
to the database access here?


Here are the details ...

My test run selects and returns 206,085 records with 14 fields 
per record.


With all dynamic memory allocations disabled that are used to 
create the data structure containing the returned rows, a run 
takes 5 seconds. This does not return any data, but it runs 
exactly through all the records in the same way but returns to a 
temporary stack allocated value of appropriate type.


If I disable the GC before the run and re-enable it immediately 
after, it takes 7 seconds. I presume a full 2 seconds are used to 
disable and re-enable the GC which seems like a lot of time.


With all dynamic memory allocations enabled that are used to 
create the data structure containing the returned rows, a run 
takes 28 seconds. In this case, all 206K records are returned in 
a dynamically generated list.


If I disable the GC before the run and re-enable it immediately 
after, it takes 11 seconds. Since a full 2 seconds are used to 
disable and re-enable the GC, that leaves 9 seconds, and since 
5 seconds are used without memory allocations, the allocations 
themselves account for about 4 seconds, though I am doing a lot 
of allocations.


In my case, the structure is dynamically generated by allocating 
each individual field for each record returned, so there are 
206,085 records x 14 fields = 2,885,190 allocations being 
performed. I can cut the individual allocations down to 206,000 
by allocating the full record in one shot, however this is a 
stress test designed to work D as hard as possible and compare it 
with an identically stressed C++ version.
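
Purely as an illustration of the difference (Field is a hypothetical 
placeholder, not the actual row type), the two allocation strategies 
look roughly like this:

    struct Field { /* hypothetical per-field payload */ }

    // Per-field: one GC allocation for each of the 14 fields of a record.
    Field*[] makeRowPerField(size_t nFields)
    {
        auto row = new Field*[nFields];   // 1 allocation for the array
        foreach (ref f; row)
            f = new Field();              // + nFields allocations
        return row;
    }

    // Per-record: the whole record's fields in a single allocation.
    Field[] makeRowPerRecord(size_t nFields)
    {
        return new Field[nFields];        // 1 allocation
    }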


Both the D and C++ versions perform identically with the GC 
disabled, after subtracting the 2 seconds from the D version to 
remove the time used up by disabling and re-enabling the GC 
before and after the run.


I wonder why 2 seconds are used to disable and enable the GC? 
That seems like a very large amount of time. If I select only 
5,000 records, the time to disable and enable the GC drops 
significantly to negligible levels and it takes the same amount 
of time per run with GC disabled & enabled, or with GC left 
enabled all the time.
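
Just for reference, a minimal sketch of the disable/re-enable pattern 
being timed here (runSelect is a hypothetical stand-in for the query 
loop, not the actual test code):

    import core.memory : GC;

    void runSelect() { /* hypothetical stand-in for the 206K-row query loop */ }

    void main()
    {
        GC.disable();       // no collections while the run allocates
        runSelect();
        GC.enable();        // collections allowed again
        // An explicit GC.collect() here makes any deferred collection cost
        // show up at a predictable point rather than on a later allocation.
        GC.collect();
    }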


During all tests, I do not run out of free RAM, and at no point 
does the memory go to swap.


--rt


Re: Next focus: PROCESS

2012-12-15 Thread deadalnix

On Sunday, 16 December 2012 at 03:59:33 UTC, Rob T wrote:

On Thursday, 13 December 2012 at 07:18:16 UTC, foobar wrote:


Per my answer to Rob:
D2 *is* the major version.
releases should be minor versions and largely backwards 
compatible - some evolution is allowed given some reasonable 
restrictions like a proper migration path over several 
releases.
Critical bug-fixes can go directly on stable releases but 
nothing else.




I just realized that we are mixing together the version numbers 
for two entirely different things.


As you have correctly pointed out, one of the version numbers 
is for the D language specification, which is version 
2.something, the other version is for the compiler releases, 
which has the major language specification version assigned to 
identify it as supporting version 2.something of the D 
language. The remaining numbers indicate incremental releases 
which may also roughly correspond to the minor evolutionary 
changes to the language specification, i.e., does DMD 2.061 
also mean D specification 2.061?




Yes it does. If new features are introduced, then this is by 
definition a change in the specs.


I think it makes good sense that the "2" is used to indicate 
that the compiler supports D major version 2 of the 
specification, but we should all be clear that is all that it 
represents. Therefore DMD 3.x will never appear unless there's 
a D language ver 3 specification for it to support.




A version 3 would mean introducing changes in the specification 
that may break a lot of code. This is the type of thing you 
shouldn't do often.


A completely separate issue that should be dealt with is that 
the language specification's version number is not indicated 
anywhere that I could see. We just assume it's version 
2.something and we have no idea where the "something" part is 
currently at or what has changed since ver 2.0. This is no good 
because it means that we cannot indicate in the compiler 
release change log what minor version of the 2.x specification 
the compiler is actually supporting.




The compiler source code is probably the closest thing we have 
to a spec right now.


Re: Next focus: PROCESS

2012-12-15 Thread deadalnix

On Sunday, 16 December 2012 at 02:03:34 UTC, Jesse Phillips wrote:

On Saturday, 15 December 2012 at 20:39:22 UTC, deadalnix wrote:
Can we drop the LTS name? It reminds me of ubuntu, and I 
clearly hope that people promoting that idea don't plan to 
reproduce ubuntu's scheme:
- it is not suitable for a programming language (as stated 3 
times now, so just read back to see why; I won't repeat it).


You don't need to repeat yourself; you need to expand on your 
points. Joseph has already requested that you give specifics of 
your objection; you have explained why the situation is 
different but not what needs to be different.




This is completely backward, but I'll do it anyway. But first, 
let me explain why it is backward.


You are using a distro's versioning system as a basis for 
reflection. But such a system is made to achieve different goals 
than a programming language's. I shouldn't be the one explaining 
why this is wrong; you should be the one explaining to me why it 
can be applied anyway.


Otherwise, anyone can come up with any point, however stupid it 
is, and each time we have to prove that person wrong. When you 
come up with something, you have to explain why it makes sense, 
not the other way around.


Back to the point, and it will be the last time. A distro is a 
set of programs. The goal of the distro is to provide a set of 
programs, as up to date as possible, that integrate nicely with 
each other, and with as few bugs as possible. Some of these goals 
are in conflict, so we see different patterns emerge, with 
different tradeoffs, as with ubuntu's and debian's processes.


From one version to another, distros don't need backward 
compatibility. Easy migration is important, but not 
compatibility. This is why debian can switch to multiarch in its 
next version (which breaks a hell of a lot of things). You don't 
need revisions of the distro because software is updated on a 
per-software basis when problems are detected (per package in 
fact, but it doesn't really matter).


This is very different from a programming language, where:
- no package update is possible. The concept of a package 
doesn't even exist in our world.
- backward compatibility is a very important point, while it 
isn't for distros.


The only goal in common is trying to reach some level of 
stability. Everything else is completely different.


Your points were specific to Debian's model, which is not 
Ubuntu's.



- ubuntu is notoriously unstable.


I don't know anyone who uses the LTS releases. That isn't to 
say no one is, but Ubuntu is doing a lot of experimenting in 
their 6 month releases.


I used to work with ubuntu. I've done a ton of things with that 
distro. It IS unstable. In fact, it is based on debian unstable, 
so it isn't really a surprise.


Re: Significant GC performance penalty

2012-12-15 Thread SomeDude

On Friday, 14 December 2012 at 19:24:39 UTC, Rob T wrote:
On Friday, 14 December 2012 at 18:46:52 UTC, Peter Alexander 
wrote:
Allocating memory is simply slow. The same is true in C++ 
where you will see performance hits if you allocate memory too 
often. The GC makes things worse, but if you really care about 
performance then you'll avoid allocating memory so often.


Try to pre-allocate as much as possible, and use the stack 
instead of the heap where possible. Fixed size arrays and 
structs are your friend.
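
A minimal sketch of that advice, with made-up names and sizes: keep 
per-record storage in a fixed-size struct on the stack and reuse it 
for every row, instead of allocating each field:

    struct Row
    {
        char[64][14] fields;   // fixed-size, lives inline, no GC allocation
    }

    void processRows(size_t nRows)
    {
        Row row;                       // stack allocated, reused for every record
        foreach (i; 0 .. nRows)
        {
            // fill row.fields[...] from the result cursor, then consume it
        }
    }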


In my situation, I can think of some ways to mitigate the 
memory allocation problem, however it's a bit tricky when 
SELECT statement results have to be dynamically generated, 
since the number of rows returned and the size and type of the 
rows are always different depending on the query and the data 
stored in the database. It's just not at all practical to 
custom-fit each SELECT to a pre-allocated array or list; it'll 
just be far too much manual effort.




Isn't the memory management completely negligible when compared 
to the database access here?


Re: Next focus: PROCESS

2012-12-15 Thread Rob T

On Thursday, 13 December 2012 at 07:18:16 UTC, foobar wrote:


Per my answer to Rob:
D2 *is* the major version.
releases should be minor versions and largely backwards 
compatible - some evolution is allowed given some reasonable 
restrictions like a proper migration path over several releases.
Critical bug-fixes can go directly on stable releases but 
nothing else.




I just realized that we are mixing together the version numbers 
for two entirely different things.


As you have correctly pointed out, one of the version numbers is 
for the D language specification, which is version 2.something, 
the other version is for the compiler releases, which has the 
major language specification version assigned to identify it as 
supporting version 2.something of the D language. The remaining 
numbers indicate incremental releases which may also roughly 
correspond to the minor evolutionary changes to the language 
specification, i.e., does DMD 2.061 also mean D specification 
2.061?


I think it makes good sense that the "2" is used to indicate that 
the compiler supports D major version 2 of the specification, but 
we should all be clear that is all that it represents. Therefore 
DMD 3.x will never appear unless there's a D language ver 3 
specification for it to support.


A completely separate issue that should be dealt with is that 
the language specification's version number is not indicated 
anywhere that I could see. We just assume it's version 
2.something and we have no idea where the "something" part is 
currently at or what has changed since ver 2.0. This is no good 
because it means that we cannot indicate in the compiler release 
change log what minor version of the 2.x specification the 
compiler is actually supporting.


--rt


Re: Compilation strategy

2012-12-15 Thread Walter Bright

On 12/15/2012 6:53 PM, Iain Buclaw wrote:

Probably won't be easy (if bug still exists).  To describe it (I'll try to find
a working example later)


These things all belong in bugzilla. Otherwise, they will never get fixed.



Re: Compilation strategy

2012-12-15 Thread Iain Buclaw
On 15 December 2012 18:52, Jonathan M Davis  wrote:

> On Saturday, December 15, 2012 10:44:56 H. S. Teoh wrote:
> > Isn't that just some compiler bugs that sometimes cause certain symbols
> > not to be instantiated in the object file? IMO, such bugs should be
> > fixed in the compiler, rather than force the user to compile one way or
> > another.
>
> Well obviously. They're bugs. Of course they should be fixed. But as long
> as
> they haven't been fixed, we have to work around them, which means compiling
> everything at once.
>
> - Jonathan M Davis
>


Probably won't be easy (if bug still exists).  To describe it (I'll try to
find a working example later) - when compiled separately, both modules
claim the symbol is extern to their scope.  However when compiled under one
compilation unit, the compiler has substantially more information regarding
the symbol and sends it to the backend to be written.


If I don't find it by Monday, you'll have to wait until the new year when I
return. :-)


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: Next focus: PROCESS

2012-12-15 Thread Rob T

On Sunday, 16 December 2012 at 02:03:34 UTC, Jesse Phillips wrote:

On Saturday, 15 December 2012 at 20:39:22 UTC, deadalnix wrote:
Can we drop the LTS name? It reminds me of ubuntu, and I 
clearly hope that people promoting that idea don't plan to 
reproduce ubuntu's scheme:
- it is not suitable for a programming language (as stated 3 
times now, so just read back to see why; I won't repeat it).


You don't need to repeat yourself; you need to expand on your 
points. Joseph has already requested that you give specifics of 
your objection; you have explained why the situation is 
different but not what needs to be different.


Your points were specific to Debian's model, which is not 
Ubuntu's.



- ubuntu is notoriously unstable.


I don't know anyone who uses the LTS releases. That isn't to 
say no one is, but Ubuntu is doing a lot of experimenting in 
their 6 month releases.


I think if we focus on the end results that the Ubuntu process is 
designed to accomplish and what the Debian process is designed to 
accomplish, we can start to think about which model is more likely 
to produce the desired results that we wish to achieve with the D 
process.


I'll sum it up as follows:

What both systems attempt to accomplish is in conflict. Ubuntu 
attempts to create a reasonably stable distribution with more 
recent updates so that users have access to current software, but 
that means there will be more bugs and less stability.


Debian attempts to distribute a bug-free, stable distribution; 
however, the delays in getting there mean that the software is 
often much less current than what is otherwise available.


The end results may be summarized as follows:

I would *never* use Ubuntu for mission-critical tasks; for 
example, I would not use it to power a 24/7 remote server. I may 
use it for a workstation at home, but not at work, although I may 
use it at work if I needed access to the most current software.


Why not use it for mission critical tasks? Because it's unstable. 
Why is it unstable? Because it's based on Debian's unstable 
branch.


Debian stable, on the other hand, is rock solid, works well 
for mission-critical tasks, and is suitable for use in a server 
environment; however, it does not come with the most current 
software, and some packages may be a couple of versions behind 
(which can be very frustrating at times).


So to achieve stability rather than instability, you have to 
trade away newer, less mature versions of software for older, 
more mature versions.


Debian's process is designed specifically for stability; Ubuntu's 
process is designed specifically to provide the most current 
software in a reasonably stable way.


--rt


Re: Next focus: PROCESS

2012-12-15 Thread Jesse Phillips

On Saturday, 15 December 2012 at 20:39:22 UTC, deadalnix wrote:
Can we drop the LTS name? It reminds me of ubuntu, and I 
clearly hope that people promoting that idea don't plan to 
reproduce ubuntu's scheme:
 - it is not suitable for a programming language (as stated 3 
times now, so just read back to see why; I won't repeat it).


You don't need to repeat yourself; you need to expand on your 
points. Joseph has already requested that you give specifics of 
your objection; you have explained why the situation is different 
but not what needs to be different.


Your points were specific to Debian's model, which is not 
Ubuntu's.



 - ubuntu is notoriously unstable.


I don't know anyone who uses the LTS releases. That isn't to say 
no one is, but Ubuntu is doing a lot of experimenting in their 6 
month releases.


Re: Compilation strategy

2012-12-15 Thread Walter Bright

On 12/15/2012 9:31 AM, RenatoUtsch wrote:

Yes, I'm writing a build system for D (that will be pretty damn good, I think,
it has some interesting new concepts), and compiling each source separately to
an object, and then linking everything, will easily allow making the build
parallel, dividing the sources to compile among various threads. Or does the
compiler already do that if I pass all source files in one call?


The compiler does a little multithreading, but not enough to make a difference. 
I've certainly thought about various schemes to parallelize it, though.




Re: Voldemort structs no longer work?

2012-12-15 Thread Timon Gehr

On 12/15/2012 10:44 PM, H. S. Teoh wrote:

...
This way, the type still has an .init, except that it's only accessible
inside the function itself. Or are there unintended consequences here?



Lazy initialization of a member of such a type would require unsafe 
language features and not work in CTFE.




Re: Nested Structs (Solution)

2012-12-15 Thread Rob T
There's an interesting discussion going on that may be related to 
this subject.


http://forum.dlang.org/thread/mailman.2705.1355596709.5162.digitalmar...@puremagic.com

Note the definition with the "hidden reference frame" baggage, 
and to get rid of the extra baggage use "static struct".


The reference frame mentioned, however, is not for a parent class 
or struct, but maybe it's similar enough that nested structs could 
possibly be extended with a hidden reference to the parent class 
or struct.


It all seems debatable though, with various levels of 
complication associated with implementing a nested struct.


--rt


Re: Voldemort structs no longer work?

2012-12-15 Thread Rob T

Good finds.

The definition of a "nested struct" is not consistently or well 
defined, so it's no wonder it's not working as anyone expects.


--rt


Re: Next focus: PROCESS

2012-12-15 Thread Rob T

On Saturday, 15 December 2012 at 19:03:49 UTC, Brad Roberts wrote:

On 12/15/2012 2:29 AM, Dmitry Olshansky wrote:

I think one of major goals is to be able to continue ongoing 
development while at the _same time_ preparing a release.
To me number one problem is condensed in the statement "we are 
going to release do not merge anything but regressions"
the process should sidestep this "lock-based" work-flow. Could 
be useful to add something along these line to goals

section. (i.e. the speed and smoothness is a goal)


I've been staying out of this thread for the most part, but I 
feel the need to comment on this part specifically.  It's
quite common for most major projects to have a clear "we're 
wrapping up a release" phase where work _is_ restricted to
bug fixing and stabilizing.  They don't stop people from 
working off in their development branches (no one could
effectively impose such restrictions even if they wanted to), 
but they _do_ tighten down on what's allowed to be merged.


This is a forcing function that's just required.  There's a lot 
of issues that otherwise won't get sufficient attention.
 If all it took was altruism then regressions would be fixed 
immediately, bugs would always be fixed in highest priority
to lowest priority (assuming that could even be effectively 
designed), etc.


Without the 'ok, guys, focus in this smaller more critical 
subset of bugs' step, release branches would be created and
never polished (or focused down to the release manager to do 
all the work if he's that skilled and/or generous of his time).


There's a phrase I'm trying to remember, but it's something to 
the effect that 'hope isn't a recipe for success.'
Hoping that people fix regressions on release critical bugs 
isn't sufficient.  Incentive and steering is required.  The
desire to ungate master branch merges is one approach that's 
been shown to be successful.


I feel you've made a very important point here, and I've put up a 
section in the wiki process talk page devoted to the subject.


http://wiki.dlang.org/Talk:Release_Process#The_Path_of_Least_Resistance:_Incentives_and_Barriers

Although debating ideas in here is welcome, please be sure to 
post your ideas on the talk page, especially after a conclusion 
has been reached; otherwise your excellent idea may vanish into 
the Ether and never be implemented as it should have been.


--rt


Re: Voldemort structs no longer work?

2012-12-15 Thread Walter Bright

On 12/15/2012 10:36 AM, H. S. Teoh wrote:

With latest git dmd:

auto makeVoldemort(int x) {
struct Voldemort {
@property int value() { return x; }
}
return Voldemort();
}
void main() {
auto v = makeVoldemort();
writeln(v.value);
}

Compile error:

test.d(3): Error: function test.makeVoldemort.Voldemort.value cannot 
access frame of function test.makeVoldemort


This compiles if the @property is elided.

Definitely a bug.
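
For reference, a hedged reproduction of the variant described as 
compiling, with @property elided (an argument and import are added 
so it stands alone):

    import std.stdio;

    auto makeVoldemort(int x) {
        struct Voldemort {
            int value() { return x; }   // plain member function, no @property
        }
        return Voldemort();
    }
    void main() {
        auto v = makeVoldemort(42);
        writeln(v.value);
    }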



Re: Compilation strategy

2012-12-15 Thread David Nadlinger
On Saturday, 15 December 2012 at 17:02:08 UTC, Andrei 
Alexandrescu wrote:
In phobos we use a single call for building the library. Then 
(at least on Posix) we use multiple calls for running unittests.


This highlights the problem with giving a single answer to the 
question: Building a large project in one call is often 
impractical. It only works for Phobos library builds because many 
templates don't even get instantiated.


David


Re: Voldemort structs no longer work?

2012-12-15 Thread Jonathan M Davis
On Saturday, December 15, 2012 13:44:13 H. S. Teoh wrote:
> But anyway, thinking a bit more about the .init problem, couldn't we
> just say that .init is not accessible outside the scope of the function
> that defines the type, and therefore you cannot declare a variable of
> that type (using typeof or whatever other workaround) without also
> assigning it to an already-initialized instance of the type?
> 
> This way, the type still has an .init, except that it's only accessible
> inside the function itself. Or are there unintended consequences here?

That's pretty much how it is now. The problem is all of the stuff that requires 
.init. For instance, lots of template constraints and static if use .init to 
check stuff about a type. Not having an .init gets in the way of a lot of 
template metaprogramming. And sometimes, you have to have .init or some things 
are impossible.

For instance, takeNone will attempt to return an empty range of the given type 
(something which some algorithms need, or they won't work), and if the range 
doesn't have init or slicing, then it's forced to return the result of 
takeExactly, which is then a different range type. For some stuff, that's fine, 
for other stuff, that renders it unusable. I think that the only place that 
that affects Phobos at the moment is that you lose an optimization path in one 
of find's overloads, but I've been in other situations where I've had to make a 
range empty without changing its type and popping all of the elements off was 
unacceptable, and Voldemort types make that impossible.

We may be able to work around enough of the problems caused by a lack of a 
usable init property to be able to continue to use Voldemort types, but some 
issues can't be fixed with them (like that of takeNone), and until we do find a 
solution for even the problems that we can fix, the lack of an init property 
tends to be crippling for metaprogramming.
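
As a hedged illustration (not actual Phobos code) of the kind of 
.init-based introspection being described:

    import std.range.primitives;

    // Probing a range type through its .init, the way template constraints
    // and static ifs often do. For a Voldemort struct, R.init leaves the
    // hidden frame pointer null, so even where a check like this passes,
    // a value built from R.init is not actually safe to use.
    enum usableInit(R) = isInputRange!R &&
        is(typeof({ R r = R.init; bool b = r.empty; }));

    static assert(usableInit!(int[]));   // e.g. plain arrays are fine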

- Jonathan M Davis


Re: Voldemort structs no longer work?

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 01:09:33PM -0800, Jonathan M Davis wrote:
> On Saturday, December 15, 2012 12:18:21 H. S. Teoh wrote:
> > It seems that the only clean way to do this is to use a class
> > instead of a struct, since the .init value will conveniently just be
> > null, thereby sidestepping the problem.
> 
> That would incur unnecessary overhead and probably break all kinds of
> code, because they're then reference types instead of value types, and
> a _lot_ of code doesn't use save properly. If structs can't do what
> we need as Voldemort types, it's just better to make it so that
> they're not Voldemort types.  Voldemort types are a cute idea, and in
> principle are great, but I don't think that it's worth fighting to
> keep them if they have problems.
[...]

Well, the current way many Phobos ranges work, they're kind of
pseudo-Voldemort types (except they don't actually need/use the context
pointer):

auto cycle(R)(R range) {
struct CycleImpl {
R r;
this(R _r) { r = _r; }
... // range methods
}
return CycleImpl(range);
}

auto r = cycle(myRange);

While it's true that you can write:

typeof(cycle(myRange)) s;

and thereby break encapsulation, if someone is desperate enough to do
such things they probably have a good reason for it, and they should be
able to deal with the consequences too.

But anyway, thinking a bit more about the .init problem, couldn't we
just say that .init is not accessible outside the scope of the function
that defines the type, and therefore you cannot declare a variable of
that type (using typeof or whatever other workaround) without also
assigning it to an already-initialized instance of the type?

This way, the type still has an .init, except that it's only accessible
inside the function itself. Or are there unintended consequences here?


T

-- 
It is impossible to make anything foolproof because fools are so ingenious. -- 
Sammy


Re: Voldemort structs no longer work?

2012-12-15 Thread deadalnix
On Saturday, 15 December 2012 at 21:10:19 UTC, Jonathan M Davis 
wrote:

On Saturday, December 15, 2012 12:18:21 H. S. Teoh wrote:
It seems that the only clean way to do this is to use a class 
instead of
a struct, since the .init value will conveniently just be 
null, thereby

sidestepping the problem.


That would incur unnecessary overhead and probably break all 
kinds of code,
because they're then reference types instead of value types, 
and a _lot_ of
code doesn't use save properly. If structs can't do what we 
need as Voldemort
types, it's just better to make it so that they're not 
Voldemort types.
Voldemort types are a cute idea, and in principle are great, 
but I don't think

that it's worth fighting to keep them if they have problems.

- Jonathan M Davis


I always found them inconsistent with the behavior they have in 
classes (where no outer pointer is created).


This is a lot of work to do in the standard lib however.


Re: Voldemort structs no longer work?

2012-12-15 Thread Jonathan M Davis
On Saturday, December 15, 2012 12:18:21 H. S. Teoh wrote:
> It seems that the only clean way to do this is to use a class instead of
> a struct, since the .init value will conveniently just be null, thereby
> sidestepping the problem.

That would incur unnecessary overhead and probably break all kinds of code, 
because they're then reference types instead of value types, and a _lot_ of 
code doesn't use save properly. If structs can't do what we need as Voldemort 
types, it's just better to make it so that they're not Voldemort types. 
Voldemort types are a cute idea, and in principle are great, but I don't think 
that it's worth fighting to keep them if they have problems.

- Jonathan M Davis


Re: Next focus: PROCESS

2012-12-15 Thread Dmitry Olshansky

On 12/15/2012 11:03 PM, Brad Roberts wrote:

On 12/15/2012 2:29 AM, Dmitry Olshansky wrote:


I think one of major goals is to be able to continue ongoing development while 
at the _same time_ preparing a release.
To me number one problem is condensed in the statement "we are going to release do 
not merge anything but regressions"
the process should sidestep this "lock-based" work-flow. Could be useful to add 
something along these line to goals
section. (i.e. the speed and smoothness is a goal)


I've been staying out of this thread for the most part, but I feel the need to 
comment on this part specifically.  It's
quite common for most major projects to have a clear "we're wrapping up a 
release" phase where work _is_ restricted to
bug fixing and stabilizing.  They don't stop people from working off in their 
development branches (no one could
effectively impose such restrictions even if they wanted to), but they _do_ 
tighten down on what's allowed to be merged.

This is a forcing function that's just required.  There's a lot of issues that 
otherwise won't get sufficient attention.
  If all it took was altruism then regressions would be fixed immediately, bugs 
would always be fixed in highest priority
to lowest priority (assuming that could even be effectively designed), etc.


I understand the desire to focus people's attention on a priority, but 
there is no denying the fact that only a small subset of folks is 
able to (say) fix a regression in the compiler.


What I'm trying to avoid here is a situation where the compiler team 
fights with the last regressions while Phobos development is frozen, or 
the other way around. There could be a couple of these ping-pongs during 
the process. And that's what I think we had, and it is suboptimal.


It's part of the process to have a frozen view in the form of a staging branch 
but keep the master branch going. Otherwise it's just more branches (and 
pain) for no particular purpose.



Without the 'ok, guys, focus in this smaller more critical subset of bugs' 
step, release branches would be created and
never polished (or focused down to the release manager to do all the work if 
he's that skilled and/or generous of his time).



On the grander scale it boils down to managing who does what. Basically 
there should be a release checklist that everybody knows about: a list 
of issues and the takers currently working on them (Bugzilla provides that 
and should be more heavily used). If it's more or less covered (or 
nobody else could do it), the others may go on with development on 
master.


The main problem I see with "everybody focus on these" is that I expect 
the number of contributors to grow but their areas of expertise to become 
more and more specialized (in general). Thus IMHO it won't scale.



There's a phrase I'm trying to remember, but it's something to the effect that 
'hope isn't a recipe for success.'
Hoping that people fix regressions on release critical bugs isn't sufficient.  
Incentive and steering is required.  The
desire to ungate master branch merges is one approach that's been shown to be 
successful.

Yes. The point of never stopping development is to encourage new 
contributors by providing a shorter feedback cycle.


--
Dmitry Olshansky


Re: Next focus: PROCESS

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 20:39:22 UTC, deadalnix wrote:
On Saturday, 15 December 2012 at 20:32:42 UTC, Jesse Phillips 
wrote:
On Saturday, 15 December 2012 at 10:29:55 UTC, Dmitry 
Olshansky wrote:
Second point is about merging master into staging - why not 
just rewrite it with master branch altogether after each 
release?
master is the branch with correct history (all new stuff is 
rebased on it) thus new staging will have that too.


The reason you don't rewrite is because it is a public branch. Unlike 
feature branches, which will basically be thrown out, everyone 
on the development team will need to have staging updated. If 
we rewrite history then instead of


$ git pull staging

At random times it will be (I don't know the commands and 
won't even look it up)


It just won't be pretty.


I've made modifications to the graphic hoping to illustrate 
some thoughts.


http://i.imgur.com/rJVSg.png

This does not depict what is currently described (in terms of 
branching), but it is what I've written under 
http://wiki.dlang.org/Release_Process#Release_Schedule


I see patches going into the LTS-1 (if applicable), the LTS-1 
is then merged into the latest LTS, which is merged into any 
active staging, which is then merged into master.


The monthly releases don't get bug fixes (just wait for the 
next month).


I've removed some version numbering since I don't know if we 
should have a distinct numbering for LTS and Monthly. I've 
already given some thoughts on this: 
http://forum.dlang.org/post/ydmgqmbqngwderfkl...@forum.dlang.org


Can we drop the LTS name ? It reminds me of ubuntu, and I 
clearly hope that people promoting that idea don't plan to 
reproduce ubuntu's scheme :
 - it is not suitable for a programming language (as stated 3 
time now, so just read before why I won't repeat it).

 - ubuntu is notoriously unstable.


Of course, let's just call it "stable", then. Or do you have a 
better name?


Anyway, I do think that "stable" releases every 3 or more years, 
plus monthly or quarterly releases, are the best solution for 
current D users.


-- Renato


Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread Kai Nacke

On 12.12.2012 02:42, David Nadlinger wrote:

On Tuesday, 11 December 2012 at 13:37:16 UTC, Iain Buclaw wrote:

I foresee that this release will be the biggest pain in the ass to
merge downstream into GDC. I wonder if David on LDC's side shares the
same concern...


I have been busy with getting LDC ready for the next release lately, so
I didn't have a closer look at the state of things with regard to
merging yet. However, it seems like Kai has already put together a patch
which merges the frontend as it was a few days ago (see
https://github.com/ldc-developers/ldc/wiki/Building-and-hacking-LDC-on-Windows-using-MSVC),
so maybe he has any comments on this?

David


To merge the frontend I created an MSBuild script which uses git to 
perform a 3-way merge. I commit the source of the previous dmd frontend, 
create an ldc branch and commit the current frontend source. Then I commit the 
current dmd frontend and try to merge it into the ldc branch. The number of 
merge conflicts is then an indicator of how difficult the merge is.


With 2.061 I got only a few merge conflicts. For me it seems to be 
easier to merge than the previous release.


Kai


Re: Next focus: PROCESS

2012-12-15 Thread deadalnix
On Saturday, 15 December 2012 at 20:32:42 UTC, Jesse Phillips 
wrote:
On Saturday, 15 December 2012 at 10:29:55 UTC, Dmitry Olshansky 
wrote:
Second point is about merging master into staging - why not 
just rewrite it with master branch altogether after each 
release?
master is the branch with correct history (all new stuff is 
rebased on it) thus new staging will have that too.


The reason you don't rewrite is because it is a public branch. Unlike 
feature branches, which will basically be thrown out, everyone on 
the development team will need to have staging updated. If we 
rewrite history then instead of


$ git pull staging

At random times it will be (I don't know the commands and won't 
even look it up)


It just won't be pretty.


I've made modifications to the graphic hoping to illustrate 
some thoughts.


http://i.imgur.com/rJVSg.png

This does not depict what is currently described (in terms of 
branching), but it is what I've written under 
http://wiki.dlang.org/Release_Process#Release_Schedule


I see patches going into the LTS-1 (if applicable), the LTS-1 
is then merged into the latest LTS, which is merged into any 
active staging, which is then merged into master.


The monthly releases don't get bug fixes (just wait for the next 
month).


I've removed some version numbering since I don't know if we 
should have a distinct numbering for LTS and Monthly. I've 
already given some thoughts on this: 
http://forum.dlang.org/post/ydmgqmbqngwderfkl...@forum.dlang.org


Can we drop the LTS name? It reminds me of ubuntu, and I clearly 
hope that people promoting that idea don't plan to reproduce 
ubuntu's scheme:
 - it is not suitable for a programming language (as stated 3 
times now, so just read back to see why; I won't repeat it).

 - ubuntu is notoriously unstable.


Re: Voldemort structs no longer work?

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 12:02:16PM -0800, Jonathan M Davis wrote:
> On Saturday, December 15, 2012 11:45:10 H. S. Teoh wrote:
> > Ironically enough, Andrei in the subsequent paragraph discourages
> > the use of such nested structs, whereas Walter's article promotes
> > the use of such Voldemort types as a "happy discovery". :)
> 
> No, the real irony is that it's Andrei who promoted them in the first
> place. :)

Heh. :)


> We _are_ finding some serious issues with them though (e.g. they don't
> work with init and apparently can't work with init), and there has
> been some discussion of ditching them due to such issues, but no
> consensus has been reached on that.
[...]

Hmm, that's true. They can't possibly work with init: if you do
something like:

auto func(int x) {
struct V {
@property int value() { return x; }
}
return V();
}
auto v = func(123);
auto u = v.init;// <-- what happens here?
writeln(u.value);   // <-- and here?

It seems that we'd have to make .init illegal on these Voldemort
structs. But that breaks consistency with the rest of the language that
every type must have an .init value.

Alternatively, .init can implicitly create a context where the value of
x is set to int.init. But this is unnecessary (it's supposed to be a
*Voldemort* type!) and very ugly (why are we creating a function's local
context when it isn't even being called?), not to mention useless
(u.value will return a nonsensical value).

Or perhaps a less intrusive workaround is to have .init set the hidden
context pointer to null, and you'll get a null dereference when accessing
u.value. Which is not pretty either, since no pointers or references are
apparent in V.value; it's implicit.

It seems that the only clean way to do this is to use a class instead of
a struct, since the .init value will conveniently just be null, thereby
sidestepping the problem.
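
A hedged sketch of that class-based alternative, just to make the point
concrete (not code from the thread):

    auto makeVoldemort(int x) {
        class Voldemort {
            @property int value() { return x; }
        }
        return new Voldemort();
    }

    void main() {
        auto v = makeVoldemort(123);
        typeof(v) u;            // u is u.init, i.e. null: no bogus context to worry about
        assert(u is null);
        assert(v.value == 123); // the real instance still sees x through its context
    }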


T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.


Re: Next focus: PROCESS

2012-12-15 Thread deadalnix

On Saturday, 15 December 2012 at 19:03:49 UTC, Brad Roberts wrote:

On 12/15/2012 2:29 AM, Dmitry Olshansky wrote:

I think one of major goals is to be able to continue ongoing 
development while at the _same time_ preparing a release.
To me number one problem is condensed in the statement "we are 
going to release do not merge anything but regressions"
the process should sidestep this "lock-based" work-flow. Could 
be useful to add something along these line to goals

section. (i.e. the speed and smoothness is a goal)


I've been staying out of this thread for the most part, but I 
feel the need to comment on this part specifically.  It's
quite common for most major projects to have a clear "we're 
wrapping up a release" phase where work _is_ restricted to
bug fixing and stabilizing.  They don't stop people from 
working off in their development branches (no one could
effectively impose such restrictions even if they wanted to), 
but they _do_ tighten down on what's allowed to be merged.


This is a forcing function that's just required.  There's a lot 
of issues that otherwise won't get sufficient attention.
 If all it took was altruism then regressions would be fixed 
immediately, bugs would always be fixed in highest priority
to lowest priority (assuming that could even be effectively 
designed), etc.


Without the 'ok, guys, focus in this smaller more critical 
subset of bugs' step, release branches would be created and
never polished (or focused down to the release manager to do 
all the work if he's that skilled and/or generous of his time).


There's a phrase I'm trying to remember, but it's something to 
the effect that 'hope isn't a recipe for success.'
Hoping that people fix regressions on release critical bugs 
isn't sufficient.  Incentive and steering is required.  The
desire to ungate master branch merges is one approach that's 
been shown to be successful.


Very good point. The final sprint is a good practice to adopt.


Re: Significant GC performance penalty

2012-12-15 Thread Rob T

On Saturday, 15 December 2012 at 13:04:41 UTC, Mike Parker wrote:
On Saturday, 15 December 2012 at 11:35:18 UTC, Jacob Carlborg 
wrote:

On 2012-12-14 19:27, Rob T wrote:

I wonder what can be done to allow a programmer to go fully 
manual,

while not losing any of the nice features of D?


Someone has created a GC-free version of druntime and Phobos. 
Unfortunately I can't find the post in the newsgroup right now.


http://3d.benjamin-thaut.de/?p=20


Thanks for the link. It's Windows only and I'm using Linux, but 
still worth a look.


Note this comment below, a 3x difference, the same as what I 
experienced:


Update:
I found a piece of code that manually slowed down the 
simulation in case it got too fast. This code never kicked in 
with the GC version, because it never reached the margin. The 
manually memory-managed version, however, did reach the margin 
and was slowed down. With this piece of code removed, the 
manually memory-managed version runs at 5 ms, which is 200 FPS 
and thus nearly 3 times as fast as the GC-collected version.


Re: Voldemort structs no longer work?

2012-12-15 Thread Iain Buclaw
On 15 December 2012 19:58, Iain Buclaw  wrote:

> On 15 December 2012 19:45, H. S. Teoh  wrote:
>
>> On Sat, Dec 15, 2012 at 11:31:22AM -0800, Jonathan M Davis wrote:
>> > On Saturday, December 15, 2012 19:50:34 Iain Buclaw wrote:
>> > > On Saturday, 15 December 2012 at 18:38:29 UTC, H. S. Teoh wrote:
>> > > > With latest git dmd:
>> > > >   auto makeVoldemort(int x) {
>> > > >
>> > > >   struct Voldemort {
>> > > >
>> > > >   @property int value() { return x; }
>> > > >
>> > > >   }
>> > > >   return Voldemort();
>> > > >
>> > > >   }
>> > > >   void main() {
>> > > >
>> > > >   auto v = makeVoldemort();
>> > > >   writeln(v.value);
>> > > >
>> > > >   }
>> > > >
>> > > > Compile error:
>> > > >   test.d(3): Error: function test.makeVoldemort.Voldemort.value
>> > > >
>> > > > cannot access frame of function test.makeVoldemort
>> > > >
>> > > > Changing 'struct' to 'class' works. Is this deliberate, or is it a
>> > > > bug?  It is certainly inconsistent with Walter's article on
>> > > > Voldemort types, which uses structs as examples.
>> [...]
>> > > Pretty certain it's deliberate.  No closure is created for nested
>> > > structs to access its parent, complying with its POD behaviour.
>> >
>> > static nested structs don't have access to their outer scopes.
>> > Non-static structs do. This reeks of a bug.
>> [...]
>>
>> Found the reference in TDPL, §7.1.9 (p.263):
>>
>> Nested structs embed the magic "frame pointer" that allows them
>> to access outer values such as a and b in the example above.
>> [...] If you want to define a nested struct without that
>> baggage, just prefix struct with static in the definition of
>> Local, which makes Local a regular struct and consequently
>> prevents it from accessing a and b.
>>
>> Ironically enough, Andrei in the subsequent paragraph discourages the
>> use of such nested structs, whereas Walter's article promotes the use of
>> such Voldemort types as a "happy discovery". :)
>>
>> Anyway, filed a bug:
>>
>> http://d.puremagic.com/issues/show_bug.cgi?id=9162
>>
>>
> If it is one, it's a bug in FuncDeclaration::getLevel.
>
>
And if it isn't FuncDeclaration::getLevel(), then it is a bug because the
struct Voldemort does not represent itself as a nested type.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: Voldemort structs no longer work?

2012-12-15 Thread Jonathan M Davis
On Saturday, December 15, 2012 11:45:10 H. S. Teoh wrote:
> Ironically enough, Andrei in the subsequent paragraph discourages the
> use of such nested structs, whereas Walter's article promotes the use of
> such Voldemort types as a "happy discovery". :)

No, the real irony is that it's Andrei who promoted them in the first place. :)

We _are_ finding some serious issues with them though (e.g. they don't work 
with init and apparently can't work with init), and there has been some 
discussion of ditching them due to such issues, but no consensus has been 
reached on that.

- Jonathan M Davis


Re: Voldemort structs no longer work?

2012-12-15 Thread Iain Buclaw
On 15 December 2012 19:45, H. S. Teoh  wrote:

> On Sat, Dec 15, 2012 at 11:31:22AM -0800, Jonathan M Davis wrote:
> > On Saturday, December 15, 2012 19:50:34 Iain Buclaw wrote:
> > > On Saturday, 15 December 2012 at 18:38:29 UTC, H. S. Teoh wrote:
> > > > With latest git dmd:
> > > >   auto makeVoldemort(int x) {
> > > >
> > > >   struct Voldemort {
> > > >
> > > >   @property int value() { return x; }
> > > >
> > > >   }
> > > >   return Voldemort();
> > > >
> > > >   }
> > > >   void main() {
> > > >
> > > >   auto v = makeVoldemort();
> > > >   writeln(v.value);
> > > >
> > > >   }
> > > >
> > > > Compile error:
> > > >   test.d(3): Error: function test.makeVoldemort.Voldemort.value
> > > >
> > > > cannot access frame of function test.makeVoldemort
> > > >
> > > > Changing 'struct' to 'class' works. Is this deliberate, or is it a
> > > > bug?  It is certainly inconsistent with Walter's article on
> > > > Voldemort types, which uses structs as examples.
> [...]
> > > Pretty certain it's deliberate.  No closure is created for nested
> > > structs to access its parent, complying with its POD behaviour.
> >
> > static nested structs don't have access to their outer scopes.
> > Non-static structs do. This reeks of a bug.
> [...]
>
> Found the reference in TDPL, §7.1.9 (p.263):
>
> Nested structs embed the magic "frame pointer" that allows them
> to access outer values such as a and b in the example above.
> [...] If you want to define a nested struct without that
> baggage, just prefix struct with static in the definition of
> Local, which makes Local a regular struct and consequently
> prevents it from accessing a and b.
>
> Ironically enough, Andrei in the subsequent paragraph discourages the
> use of such nested structs, whereas Walter's article promotes the use of
> such Voldemort types as a "happy discovery". :)
>
> Anyway, filed a bug:
>
> http://d.puremagic.com/issues/show_bug.cgi?id=9162
>
>
If it is one, it's a bug in FuncDeclaration::getLevel.


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: Voldemort structs no longer work?

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 11:45:10AM -0800, H. S. Teoh wrote:
[...]
> Found the reference in TDPL, §7.1.9 (p.263):
> 
>   Nested structs embed the magic "frame pointer" that allows them
>   to access outer values such as a and b in the example above.
>   [...] If you want to define a nested struct without that
>   baggage, just prefix struct with static in the definition of
>   Local, which makes Local a regular struct and consequently
>   prevents it from accessing a and b.
> 
> Ironically enough, Andrei in the subsequent paragraph discourages the
> use of such nested structs, whereas Walter's article promotes the use of
> such Voldemort types as a "happy discovery". :)
> 
> Anyway, filed a bug:
> 
>   http://d.puremagic.com/issues/show_bug.cgi?id=9162
[...]

Also, according to http://dlang.org/struct.html:

A nested struct is a struct that is declared inside the scope of
a function or a templated struct that has aliases to local
functions as a template argument. Nested structs have member
functions. It has access to the context of its enclosing scope
(via an added hidden field).
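
A small sketch of the distinction being quoted, assuming nothing beyond
what TDPL and the spec say about the hidden field:

    void main() {
        int a = 10;

        struct Nested {                      // carries the hidden frame pointer
            int twice() { return a * 2; }    // can read 'a' from the enclosing frame
        }

        static struct Plain {                // 'static' drops that baggage
            int twice(int v) { return v * 2; }
        }

        Nested n;
        assert(n.twice() == 20);

        Plain p;
        assert(p.twice(a) == 20);
    }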


T

-- 
If you compete with slaves, you become a slave. -- Norbert Wiener


Re: Compilation strategy

2012-12-15 Thread Walter Bright

On 12/15/2012 8:55 AM, Russel Winder wrote:

A quick straw poll.  Do people prefer to have all sources compiled in a
single compiler call, or (more like C++) separate compilation of each
object followed by a link call.


Both are needed, and are suitable for different purposes. It's like asking if 
you prefer a standard or a Phillips screwdriver.


Re: Voldemort structs no longer work?

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 11:31:22AM -0800, Jonathan M Davis wrote:
> On Saturday, December 15, 2012 19:50:34 Iain Buclaw wrote:
> > On Saturday, 15 December 2012 at 18:38:29 UTC, H. S. Teoh wrote:
> > > With latest git dmd:
> > >   auto makeVoldemort(int x) {
> > >   
> > >   struct Voldemort {
> > >   
> > >   @property int value() { return x; }
> > >   
> > >   }
> > >   return Voldemort();
> > >   
> > >   }
> > >   void main() {
> > >   
> > >   auto v = makeVoldemort();
> > >   writeln(v.value);
> > >   
> > >   }
> > > 
> > > Compile error:
> > >   test.d(3): Error: function test.makeVoldemort.Voldemort.value
> > > 
> > > cannot access frame of function test.makeVoldemort
> > > 
> > > Changing 'struct' to 'class' works. Is this deliberate, or is it a
> > > bug?  It is certainly inconsistent with Walter's article on
> > > Voldemort types, which uses structs as examples.
[...]
> > Pretty certain it's deliberate.  No closure is created for nested
> > structs to access its parent, complying with its POD behaviour.
> 
> static nested structs don't have access to their outer scopes.
> Non-static structs do. This reeks of a bug.
[...]

Found the reference in TDPL, §7.1.9 (p.263):

Nested structs embed the magic "frame pointer" that allows them
to access outer values such as a and b in the example above.
[...] If you want to define a nested struct without that
baggage, just prefix struct with static in the definition of
Local, which makes Local a regular struct and consequently
prevents it from accessing a and b.

Ironically enough, Andrei in the subsequent paragraph discourages the
use of such nested structs, whereas Walter's article promotes the use of
such Voldemort types as a "happy discovery". :)

Anyway, filed a bug:

http://d.puremagic.com/issues/show_bug.cgi?id=9162


T

-- 
Nobody is perfect.  I am Nobody. -- pepoluan, GKC forum


Re: Voldemort structs no longer work?

2012-12-15 Thread Jonathan M Davis
On Saturday, December 15, 2012 19:50:34 Iain Buclaw wrote:
> On Saturday, 15 December 2012 at 18:38:29 UTC, H. S. Teoh wrote:
> > With latest git dmd:
> > auto makeVoldemort(int x) {
> > 
> > struct Voldemort {
> > 
> > @property int value() { return x; }
> > 
> > }
> > return Voldemort();
> > 
> > }
> > void main() {
> > 
> > auto v = makeVoldemort();
> > writeln(v.value);
> > 
> > }
> > 
> > Compile error:
> > test.d(3): Error: function test.makeVoldemort.Voldemort.value
> > 
> > cannot access frame of function test.makeVoldemort
> > 
> > Changing 'struct' to 'class' works. Is this deliberate, or is
> > it a bug?
> > It is certainly inconsistent with Walter's article on Voldemort
> > types,
> > which uses structs as examples.
> > 
> > 
> > T
> 
> Pretty certain it's deliberate.  No closure is created for nested
> structs to access its parent, complying with its POD behaviour.

static nested structs don't have access to their outer scopes. Non-static 
structs do. This reeks of a bug.

- Jonathan M Davis


Re: SCons D tool: need help with building static library

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 06:42:45PM +, Russel Winder wrote:
> On Thu, 2012-12-13 at 14:49 -0800, H. S. Teoh wrote:
> > Hi Russel,
> > 
> > I've been using your BitBucket scons_d_tooling version of SCons for
> > my D projects, and it's been great! However, I needed to make a
> > static library today and I'm having some trouble with it.  Here's a
> > reduced testcase:
[...]
[...]
> I entered a variant of your code as a test and made the smallest
> possible change to make things work on Linux. This is untested on OS X
> or Windows.
> 
> The LDC test fails for reasons I cannot suss just now, the DMD and GDC
> tests pass.
> 
> I have no doubt this is a hack patch, it all needs to be sorted
> properly. 
[...]

Thanks! I can build the static library now.

But I have trouble when I try to link to it. For some reason, the dmd
link command isn't picking up the value of LIBPATH, so the linker can't
find the library. Here's a reduced test case:

#!/usr/src/scons/russel/scons_d_tooling/bootstrap.py -f
env = Environment(
DC = '/usr/src/d/dmd/src/dmd',
)
env.Library('mylib', 'mylib.d')

prog_env = env.Clone(
LIBS = ['mylib'],
LIBPATH = '#'
)
prog_env.Program('prog', 'prog.d')

Output:

scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
/usr/src/d/dmd/src/dmd -I. -c -ofmylib.o mylib.d
ar cr libmylib.a mylib.o
ranlib libmylib.a
/usr/src/d/dmd/src/dmd -I. -c -ofprog.o prog.d
/usr/src/d/dmd/src/dmd -ofprog prog.o -L-lmylib
/usr/bin/ld: cannot find -lmylib
collect2: error: ld returned 1 exit status
--- errorlevel 1
scons: *** [prog] Error 1
scons: building terminated because of errors.

This works correctly when using the C compiler (SCons correctly inserts
a "-L." in the link command).

//

Also, an unrelated issue: if DC isn't specified and no D compiler is
found in $PATH, it produces a rather odd command line:

scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
I. -c -ofprog.o prog.d
sh: 1: I.: not found
ofprog prog.o -L-lmylib
sh: 1: ofprog: not found
scons: done building targets.

This is not a big deal, but it'd be nice if the tool gave a more helpful
message along the lines of "I can't find a D compiler, please specify
one", instead of producing a mangled command. :-)


T

-- 
It's amazing how careful choice of punctuation can leave you hanging:


Re: Next focus: PROCESS

2012-12-15 Thread Jesse Phillips

On Saturday, 15 December 2012 at 19:03:49 UTC, Brad Roberts wrote:


This is a forcing function that's just required.


There is a focusing that needs to happen, but as you say, you 
can't really dictate where someone puts their time (for open 
source). So it is best to let the person who decided to review a 
pull also spend the time to merge it. But strongly encourage 
everyone on the team to get the release out.


Remember we are preparing for growth and will want to be ready to 
handle all types of contributors.


Re: Next focus: PROCESS

2012-12-15 Thread Brad Roberts
On 12/15/2012 2:29 AM, Dmitry Olshansky wrote:

> I think one of major goals is to be able to continue ongoing development 
> while at the _same time_ preparing a release.
> To me number one problem is condensed in the statement "we are going to 
> release do not merge anything but regressions"
> the process should sidestep this "lock-based" work-flow. Could be useful to 
> add something along these line to goals
> section. (i.e. the speed and smoothness is a goal)

I've been staying out of this thread for the most part, but I feel the need to 
comment on this part specifically.  It's
quite common for most major projects to have a clear "we're wrapping up a 
release" phase where work _is_ restricted to
bug fixing and stabilizing.  They don't stop people from working off in their 
development branches (no one could
effectively impose such restrictions even if they wanted to), but they _do_ 
tighten down on what's allowed to be merged.

This is a forcing function that's just required.  There's a lot of issues that 
otherwise won't get sufficient attention.
 If all it took was altruism then regressions would be fixed immediately, bugs 
would always be fixed in highest priority
to lowest priority (assuming that could even be effectively designed), etc.

Without the 'ok, guys, focus in this smaller more critical subset of bugs' 
step, release branches would be created and
never polished (or focused down to the release manager to do all the work if 
he's that skilled and/or generous of his time).

There's a phrase I'm trying to remember, but it's something to the effect that 
'hope isn't a recipe for success.'
Hoping that people fix regressions on release critical bugs isn't sufficient.  
Incentive and steering is required.  The
desire to ungate master branch merges is one approach that's been shown to be 
successful.


Re: Compilation strategy

2012-12-15 Thread Jonathan M Davis
On Saturday, December 15, 2012 10:44:56 H. S. Teoh wrote:
> Isn't that just some compiler bugs that sometimes cause certain symbols
> not to be instantiated in the object file? IMO, such bugs should be
> fixed in the compiler, rather than force the user to compile one way or
> another.

Well obviously. They're bugs. Of course they should be fixed. But as long as 
they haven't been fixed, we have to work around them, which means compiling 
everything at once.

- Jonathan M Davis


Re: Voldemort structs no longer work?

2012-12-15 Thread Iain Buclaw

On Saturday, 15 December 2012 at 18:38:29 UTC, H. S. Teoh wrote:

With latest git dmd:

auto makeVoldemort(int x) {
struct Voldemort {
@property int value() { return x; }
}
return Voldemort();
}
void main() {
auto v = makeVoldemort();
writeln(v.value);
}

Compile error:

	test.d(3): Error: function test.makeVoldemort.Voldemort.value 
cannot access frame of function test.makeVoldemort


Changing 'struct' to 'class' works. Is this deliberate, or is 
it a bug?
It is certainly inconsistent with Walter's article on Voldemort 
types,

which uses structs as examples.


T


Pretty certain it's deliberate.  No closure is created for nested 
structs to access its parent, complying with its POD behaviour.


Regards,
Iain.


Re: Compilation strategy

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 18:44:35 UTC, H. S. Teoh wrote:

On Sat, Dec 15, 2012 at 07:30:52PM +0100, RenatoUtsch wrote:
On Saturday, 15 December 2012 at 18:00:58 UTC, H. S. Teoh 
wrote:

[...]
>So perhaps one possible middle ground would be to link packages
>separately, but compile all the sources within a single package at
>once.  Presumably, if the project is properly organized, recompiling
>a single package won't take too long, and has the perk of optimizing
>for size within packages. This will probably also map to SCons
>easily, since SCons builds per-directory.

[...]

Well, the idea is good. Small projects usually don't have many
packages, so there will be just a few compiler calls. And compiling
files concurrently will only have a meaningful effect if the project
is large, and a large project will have a lot of packages.


Yes, that's the idea behind it.


Maybe add an option to choose between compiling all sources at
once, per package, or per source. For example, in development and
debug builds the compilation is per file or package, but in release
builds all sources are compiled at once, or various packages at once.


This way release builds will take advantage of this behavior that
the frontend has, but developers won't have productivity issues.
And, of course, the behaviour will not be fixed; the devs that are
using the build system will choose that.


I forgot to mention also that passing too many source files to the
compiler may sometimes cause memory consumption issues, as the
compiler has to hold everything in memory. This may not be
practical for very large projects, where you can't fit everything
into RAM.


T


Well, so compiling by packages seems to be the best approach. When
I return home I will do some tests to see what I can do.


-- Renato


Re: Compilation strategy

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 06:42:27PM +, Iain Buclaw wrote:
> On 15 December 2012 16:55, Russel Winder  wrote:
> 
> > A quick straw poll.  Do people prefer to have all sources compiled
> > in a single compiler call, or (more like C++) separate compilation
> > of each object followed by a link call.
> >
> > Thanks.
> >
> >
> I do believe there are still some strange linker bugs that occur if
> you compile separately vs. single compilation.  Don't ask for
> examples, it'll take me hours or days to hunt them down in my
> archives. :-)
[...]

Aren't those just some compiler bugs that sometimes cause certain symbols
not to be instantiated in the object file? IMO, such bugs should be
fixed in the compiler, rather than force the user to compile one way or
another.


T

-- 
Bare foot: (n.) A device for locating thumb tacks on the floor.


Re: Compilation strategy

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 07:30:52PM +0100, RenatoUtsch wrote:
> On Saturday, 15 December 2012 at 18:00:58 UTC, H. S. Teoh wrote:
[...]
> >So perhaps one possible middle ground would be to link packages
> >separately, but compile all the sources within a single package at
> >once.  Presumably, if the project is properly organized, recompiling
> >a single package won't take too long, and has the perk of optimizing
> >for size within packages. This will probably also map to SCons
> >easily, since SCons builds per-directory.
[...]
> Well, the idea is good. Small projects usually don't have many
> packages, so there will be just a few compiler calls. And compiling
> files concurrently will only have a meaningful effect if the project
> is large, and a large project will have a lot of packages.

Yes, that's the idea behind it.


> Maybe add an option to choose between compiling all sources at
> once, per package, or per source. For example, in development and
> debug builds the compilation is per file or package, but in release
> builds all sources are compiled at once, or various packages at once.
> 
> This way release builds will take advantage of this behavior that
> the frontend has, but developers won't have productivity issues.
> And, of course, the behaviour will not be fixed; the devs that are
> using the build system will choose that.

I forgot to mention also that passing too many source files to the
compiler may sometimes cause memory consumption issues, as the compiler
has to hold everything in memory. This may not be practical for very
large projects, where you can't fit everything into RAM.


T

-- 
Stop staring at me like that! You'll offend... no, you'll hurt your eyes!


Re: SCons D tool: need help with building static library

2012-12-15 Thread Russel Winder
On Thu, 2012-12-13 at 14:49 -0800, H. S. Teoh wrote:
> Hi Russel,
> 
> I've been using your BitBucket scons_d_tooling version of SCons for my D
> projects, and it's been great! However, I needed to make a static
> library today and I'm having some trouble with it.  Here's a reduced
> testcase:
> 
>   env = Environment(
>   DC = '/usr/src/d/dmd/src/dmd'
>   )
>   env.StaticLibrary('mylib', Split("""
>   a.d
>   b.d
>   """))

You are the first person to try doing this since I severely restructured
the code!

> Here's the output:
> 
>   scons: Reading SConscript files ...
>   scons: done reading SConscript files.
>   scons: Building targets ...
>   /usr/src/d/dmd/src/dmd -I. -c -ofa.o a.d
>   /usr/src/d/dmd/src/dmd -I. -c -ofb.o b.d
>   lib -c libmylib.a a.o b.o
>   sh: 1: lib: not found
>   scons: *** [libmylib.a] Error 127
>   scons: building terminated because of errors.
> 
> The compilation steps work fine, but when it should be running ar to
> create the library archive, it runs a non-existent 'lib' instead, which
> fails.

lib is the Windows equivalent of ar; I failed to take this into account.

> I've tracked down the problem to the presub command $SMART_ARCOM, but it
> appears to be a function disguised as a magical variable, so I've no
> idea how to go further.
> 
> Am I missing some setting in the Environment? How can I convince it to
> use 'ar' (as it should) instead of 'lib'?

I entered a variant of your code as a test and made the smallest
possible change to make things work on Linux. This is untested on OS X
or Windows.

The LDC test fails for reasons I cannot suss just now, the DMD and GDC
tests pass.

I have no doubt this is a hack patch; it all needs to be sorted
properly.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Compilation strategy

2012-12-15 Thread Iain Buclaw
On 15 December 2012 16:55, Russel Winder  wrote:

> A quick straw poll.  Do people prefer to have all sources compiled in a
> single compiler call, or (more like C++) separate compilation of each
> object followed by a link call.
>
> Thanks.
>
>
I do believe there are still some strange linker bugs that occur if you
compile separately vs. single compilation.  Don't ask for examples, it'll
take me hours or days to hunt them down in my archives. :-)


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: Compilation strategy

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 18:24:50 UTC, jerro wrote:
On Saturday, 15 December 2012 at 17:31:19 UTC, RenatoUtsch 
wrote:
On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander 
wrote:
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:
A quick straw poll.  Do people prefer to have all sources 
compiled in a
single compiler call, or (more like C++) separate 
compilation of each

object followed by a link call.


Single compiler call is easier for small projects, but I 
worry about compile times for larger projects...


Yes, I'm writing a build system for D (that will be pretty 
damn good, I think, it has some interesting new concepts)


I took a look at your github project; there isn't any code yet,
but I like the concept. I was actually planning to do something
similar, but since you are already doing it, I think my time
would be better spent contributing to your project. Will there
be some publicly available code in the near future?


I expect to release a first alpha version in about 15~30 days,
maybe less; it depends on how much time I will have for the rest
of this month.


Voldemort structs no longer work?

2012-12-15 Thread H. S. Teoh
With latest git dmd:

auto makeVoldemort(int x) {
    struct Voldemort {
        @property int value() { return x; }
    }
    return Voldemort();
}
void main() {
    import std.stdio : writeln;
    auto v = makeVoldemort(42);
    writeln(v.value);
}

Compile error:

test.d(3): Error: function test.makeVoldemort.Voldemort.value cannot 
access frame of function test.makeVoldemort

Changing 'struct' to 'class' works. Is this deliberate, or is it a bug?
It is certainly inconsistent with Walter's article on Voldemort types,
which uses structs as examples.


T

-- 
You have to expect the unexpected. -- RL


Re: Compilation strategy

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 18:00:58 UTC, H. S. Teoh wrote:

On Sat, Dec 15, 2012 at 06:31:17PM +0100, RenatoUtsch wrote:

On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander
wrote:
>On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder
>wrote:
>>A quick straw poll.  Do people prefer to have all sources 
>>compiled
>>in a single compiler call, or (more like C++) separate 
>>compilation

>>of each object followed by a link call.
>
>Single compiler call is easier for small projects, but I worry
>about compile times for larger projects...

Yes, I'm writing a build system for D (that will be pretty damn
good, I think; it has some interesting new concepts), and compiling
each source separately to an object and then linking everything
will easily allow making the build parallel, dividing the sources
to compile among various threads. Or does the compiler already do
that if I pass all source files in one call?

[...]

I find that the current front-end (common to dmd, gdc, ldc) tends to
work better when passed multiple source files at once. It tends to be
faster, presumably because it only has to parse commonly-imported files
once, and also produces smaller object/executable sizes -- maybe due to
fewer duplicated template instantiations? I'm not sure of the exact
reasons, but this behaviour appears consistent throughout dmd and gdc,
and I presume also ldc (I didn't test that). So based on this, I'd lean
toward compiling multiple files at once.


Yeah, I did read about this somewhere.

However, in very large projects, clearly this won't work very well.
If it takes half an hour to build the entire system, it makes the
code - compile - test cycle very slow, which reduces programmer
productivity.


So perhaps one possible middle ground would be to link packages
separately, but compile all the sources within a single package at
once. Presumably, if the project is properly organized, recompiling
a single package won't take too long, and has the perk of
optimizing for size within packages. This will probably also map to
SCons easily, since SCons builds per-directory.


T


Well, the idea is good. Small projects usually don't have many
packages, so there will be just a few compiler calls. And
compiling files concurrently will only have a meaningful effect if
the project is large, and a large project will have a lot of
packages.


Maybe add an option to choose between compiling all sources at
once, per package, or per source. For example, in development and
debug builds the compilation is per file or package, but in
release builds all sources are compiled at once, or various
packages at once.


This way release builds will take advantage of this behavior that
the frontend has, but developers won't have productivity issues.
And, of course, the behaviour will not be fixed; the devs that are
using the build system will choose that.


Re: Compilation strategy

2012-12-15 Thread jerro

On Saturday, 15 December 2012 at 17:31:19 UTC, RenatoUtsch wrote:
On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander 
wrote:
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:
A quick straw poll.  Do people prefer to have all sources 
compiled in a
single compiler call, or (more like C++) separate compilation 
of each

object followed by a link call.


Single compiler call is easier for small projects, but I worry 
about compile times for larger projects...


Yes, I'm writing a build system for D (that will be pretty damn 
good, I think, it has some interesting new concepts)


I took a look at your github project; there isn't any code yet,
but I like the concept. I was actually planning to do something
similar, but since you are already doing it, I think my time
would be better spent contributing to your project. Will there be
some publicly available code in the near future?


Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 06:35:33PM +0100, RenatoUtsch wrote:
> On Saturday, 15 December 2012 at 16:16:11 UTC, SomeDude wrote:
[...]
> >Yes, but what H.S. Teoh wrote about the desperate need of process
> >is still true and correct. Like many others here, I think it's the
> >biggest problem with D right now for its adoption. I for one will
> >never consider using D for my main line of work without a true
> >STABLE branch: a branch you can rely on. And yet I'm pretty sold
> >on the language, but when your project is at stake, what you need
> >is security. And the current development scheme doesn't provide
> >that.
> 
> Yeah, but if people don't help to define a new process, no process
> will ever be defined.
> 
> We are trying to do something like that, any support or ideas will
> be helpful. The community needs to help to define this, and Walter
> already said that he will agree on what the community defines.
> 
> See:
> http://wiki.dlang.org/Release_Process
> http://forum.dlang.org/thread/ka5rv5$2k60$1...@digitalmars.com

I'd also like to add that if anyone has any ideas to improve or refine
the current proposed process, they should add their suggestions to the
talk page:

http://wiki.dlang.org/Talk:Release_Process

That way, the ideas won't get lost in ether after the forum threads die
off, and it keeps everything in one place instead of sprinkled
throughout multiple places in ancient forum threads.


T

-- 
When solving a problem, take care that you do not become part of the problem.


Re: Compilation strategy

2012-12-15 Thread Peter Alexander
On Saturday, 15 December 2012 at 17:27:38 UTC, Jakob Bornecrantz 
wrote:

On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander
Single compiler call is easier for small projects, but I worry 
about compile times for larger projects...


As evidenced by Phobos and my own project[1], for larger projects
multiple concurrent calls are the only way to go. Rebuilding
everything does take a bit, but with a bit of thought behind the
layout of the project, things work much faster when working
on specific areas.


Phobos is only around 200kloc. I'm worrying about the really 
large projects (multi-million lines of code).


Re: Compilation strategy

2012-12-15 Thread H. S. Teoh
On Sat, Dec 15, 2012 at 06:31:17PM +0100, RenatoUtsch wrote:
> On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander
> wrote:
> >On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder
> >wrote:
> >>A quick straw poll.  Do people prefer to have all sources compiled
> >>in a single compiler call, or (more like C++) separate compilation
> >>of each object followed by a link call.
> >
> >Single compiler call is easier for small projects, but I worry
> >about compile times for larger projects...
> 
> Yes, I'm writing a build system for D (that will be pretty damn
> good, I think; it has some interesting new concepts), and compiling
> each source separately to an object and then linking everything
> will easily allow making the build parallel, dividing the sources
> to compile among various threads. Or does the compiler already do
> that if I pass all source files in one call?
[...]

I find that the current front-end (common to dmd, gdc, ldc) tends to
work better when passed multiple source files at once. It tends to be
faster, presumably because it only has to parse commonly-imported files
once, and also produces smaller object/executable sizes -- maybe due to
fewer duplicated template instantiations? I'm not sure of the exact
reasons, but this behaviour appears consistent throughout dmd and gdc,
and I presume also ldc (I didn't test that). So based on this, I'd lean
toward compiling multiple files at once.

However, in very large projects, clearly this won't work very well. If it
takes half an hour to build the entire system, it makes the code -
compile - test cycle very slow, which reduces programmer productivity.

So perhaps one possible middle ground would be to link packages
separately, but compile all the sources within a single package at once.
Presumably, if the project is properly organized, recompiling a single
package won't take too long, and has the perk of optimizing for size
within packages. This will probably also map to SCons easily, since
SCons builds per-directory.


T

-- 
They pretend to pay us, and we pretend to work. -- Russian saying


Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 16:16:11 UTC, SomeDude wrote:
On Friday, 14 December 2012 at 01:26:35 UTC, Walter Bright 
wrote:

On 12/13/2012 5:10 PM, H. S. Teoh wrote:

Remedy adopting D


Saying that would be premature and incorrect at the moment. We 
still have to ensure that Remedy wins with D. This is an 
ongoing thing.


Yes, but what H.S. Teoh wrote about the desperate need of
process is still true and correct. Like many others here, I
think it's the biggest problem with D right now for its
adoption. I for one will never consider using D for my main
line of work without a true STABLE branch: a branch you can
rely on. And yet I'm pretty sold on the language, but when your
project is at stake, what you need is security. And the current
development scheme doesn't provide that.


Yeah, but if people don't help to define a new process, no
process will ever be defined.


We are trying to do something like that, any support or ideas 
will be helpful. The community needs to help to define this, and 
Walter already said that he will agree on what the community 
defines.


See:
http://wiki.dlang.org/Release_Process
http://forum.dlang.org/thread/ka5rv5$2k60$1...@digitalmars.com


Re: Compilation strategy

2012-12-15 Thread RenatoUtsch
On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander 
wrote:
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:
A quick straw poll.  Do people prefer to have all sources 
compiled in a
single compiler call, or (more like C++) separate compilation 
of each

object followed by a link call.


Single compiler call is easier for small projects, but I worry 
about compile times for larger projects...


Yes, I'm writing a build system for D (that will be pretty damn
good, I think; it has some interesting new concepts), and
compiling each source separately to an object and then linking
everything will easily allow making the build parallel, dividing
the sources to compile among various threads. Or does the compiler
already do that if I pass all source files in one call?


-- Renato Utsch


Re: Compilation strategy

2012-12-15 Thread Jakob Bornecrantz
On Saturday, 15 December 2012 at 17:05:59 UTC, Peter Alexander 
wrote:
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:
A quick straw poll.  Do people prefer to have all sources 
compiled in a
single compiler call, or (more like C++) separate compilation 
of each

object followed by a link call.


Single compiler call is easier for small projects, but I worry 
about compile times for larger projects...



As evidenced by Phobos and my own project[1], for larger projects
multiple concurrent calls are the only way to go. Rebuilding
everything does take a bit, but with a bit of thought behind the
layout of the project, things work much faster when working on
specific areas.


Cheers, Jakob.

[1] http://github.com/Charged/Miners


Re: Compilation strategy

2012-12-15 Thread Russel Winder
On Sat, 2012-12-15 at 16:55 +, Russel Winder wrote:
> A quick straw poll.  Do people prefer to have all sources compiled in a
> single compiler call, or (more like C++) separate compilation of each
> object followed by a link call. 

Oh and I should have asked: do you do things differently when using:

dmd
gdc
ldc

for:

programs
shared libraries
static libraries

Thanks.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: OT (partially): about promotion of integers

2012-12-15 Thread Isaac Gouy

On Tuesday, 11 December 2012 at 23:59:29 UTC, bearophile wrote:

-snip-

But as usual you have to take such comparisons cum grano salis, 
because there are a lot more people working on the GHC compiler 
and because the Shootout Haskell solutions are quite 
un-idiomatic (you can see it also from the Shootout site 
itself, taking a look at the length of the solutions) and they 
come from several years of maniac-level discussions (they have 
patched the Haskell compiler and its library several times to 
improve the results of those benchmarks):


http://www.haskell.org/haskellwiki/Shootout



I looked at that haskellwiki page but I didn't find anything to 
suggest -- "they have patched the Haskell compiler and its 
library several times to improve the results of those benchmarks"?


Was it something the compiler writers told you?


Re: Compilation strategy

2012-12-15 Thread Peter Alexander
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:
A quick straw poll.  Do people prefer to have all sources 
compiled in a
single compiler call, or (more like C++) separate compilation 
of each

object followed by a link call.


Single compiler call is easier for small projects, but I worry 
about compile times for larger projects...


Re: Compilation strategy

2012-12-15 Thread Andrei Alexandrescu

On 12/15/12 11:55 AM, Russel Winder wrote:

A quick straw poll.  Do people prefer to have all sources compiled in a
single compiler call, or (more like C++) separate compilation of each
object followed by a link call.


In phobos we use a single call for building the library. Then (at least 
on Posix) we use multiple calls for running unittests.


Andrei


Re: Compilation strategy

2012-12-15 Thread Adam D. Ruppe
On Saturday, 15 December 2012 at 16:55:39 UTC, Russel Winder 
wrote:

Do people prefer to have all sources compiled in a
single compiler call


I prefer the single call.


Compilation strategy

2012-12-15 Thread Russel Winder
A quick straw poll.  Do people prefer to have all sources compiled in a
single compiler call, or (more like C++) separate compilation of each
object followed by a link call. 

Thanks.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder




Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread SomeDude

On Friday, 14 December 2012 at 00:42:58 UTC, Walter Bright wrote:

On 12/13/2012 4:17 PM, David Nadlinger wrote:


Like any major user of a language, they want confidence in our 
full support of them. Asking them to use a patched or branch 
version of the compiler does not inspire confidence.


Maybe, but not having a full development process is NOT how you 
inspire confidence, quite the contrary. And in all fairness, they 
are a bit crazy to rely on features that are still considered 
experimental by the community. You should probably tell them to 
rely on proven features instead. That is by far the best way for 
them to succeed.


Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread SomeDude

On Friday, 14 December 2012 at 01:26:35 UTC, Walter Bright wrote:

On 12/13/2012 5:10 PM, H. S. Teoh wrote:

Remedy adopting D


Saying that would be premature and incorrect at the moment. We 
still have to ensure that Remedy wins with D. This is an 
ongoing thing.


Yes, but what H.S. Teoh wrote about the desperate need of
process is still true and correct. Like many others here, I think
it's the biggest problem with D right now for its adoption. I for
one will never consider using D for my main line of work without
a true STABLE branch: a branch you can rely on. And yet I'm
pretty sold on the language, but when your project is at stake,
what you need is security. And the current development scheme
doesn't provide that.


Quick and dirty Benchmark of std.parallelism.reduce with gdc 4.6.3

2012-12-15 Thread Zardoz
I recently made some benchmarks with the std.parallelism version
of reduce using the example code, and I got these times with these
CPUs:


AMD FX(tm)-4100 Quad-Core Processor (Kubuntu 12.04 x64):
std.algorithm.reduce   = 70294 ms
std.parallelism.reduce = 18354 ms -> SpeedUp = ~3.79

2x AMD Opteron(tm) Processor 6128 aka 8 cores x 2 = 16 cores! 
(Rocks 6.0 x64) :

std.algorithm.reduce   = 98323 ms
std.parallelism.reduce = 6592 ms  -> SpeedUp = ~14.91

My congrats to std.parallelism and D language!

Source code, compiled with gdc 4.6.3 with the -O2 flag:
import std.algorithm, std.parallelism, std.range;
import std.stdio;
import std.datetime;

void main() {
  // Parallel reduce can be combined with std.algorithm.map to interesting
  // effect. The following example (thanks to Russel Winder) calculates
  // pi by quadrature using std.algorithm.map and TaskPool.reduce.
  // getTerm is evaluated in parallel as needed by TaskPool.reduce.
  //
  // Timings on an Athlon 64 X2 dual core machine:
  //   TaskPool.reduce:      12.170 s
  //   std.algorithm.reduce: 24.065 s

  immutable n = 1_000_000_000;
  immutable delta = 1.0 / n;
  real getTerm(int i) {
    immutable x = ( i - 0.5 ) * delta;
    return delta / ( 1.0 + x * x );
  }

  StopWatch sw;
  sw.start(); // start/resume measuring.
  immutable pi = 4.0 * taskPool.reduce!"a + b"( std.algorithm.map!getTerm(iota(n)) );
  //immutable pi = 4.0 * std.algorithm.reduce!"a + b"( std.algorithm.map!getTerm(iota(n)) );
  sw.stop();

  writeln("PI = ", pi);
  writeln("Time = ", sw.peek().msecs, " [ms]");
}




Re: Should alias this support implicit construction in function calls and return statements?

2012-12-15 Thread Simen Kjaeraas

On 2012-12-15 06:12, Jonathan M Davis wrote:

I don't see any reason not to support it. If you want conversion to only
go one way, then alias a function which returns the value being aliased rather
than aliasing a variable. If it doesn't support implicit conversions from other
types, then it's impossible to have such implicit conversion in D, and I don't
see any reason why it should be disallowed like that, not when you've
explicitly said that you want to do an alias like that.
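
A minimal sketch of that one-way technique, assuming the alias target is a
getter function rather than a variable (the type below is made up for the
example):

struct Meters {
    private double m;

    double value() const { return m; }   // getter, not a variable
    alias value this;                    // converts Meters -> double only
}

void main() {
    auto len = Meters(5.0);
    double x = len;       // OK: implicit conversion *to* double via value()
    // Meters y = 3.0;    // error: no implicit conversion *from* double
}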


There is a need for clarification for implicit construction of types with
multiple fields and alias this, though. What does this do:

struct S {
   int n;
   string s;
   alias s this;
}

S s = "Foo!";


I can see a few solutions:

1) Disallow implicit construction of these types from the aliased type.
The problem with this solution is default initialization of other
fields may be exactly what you want. There may also be problems
where functions are used for alias this.
2) Default construct the wrapper type, then apply alias this.
3) Use some specialized constructor. Will require some annotation or
other way to mark it as implicit (@implicit?).
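
For reference, a small sketch of what compiles today when construction is
kept explicit (the constructor below is hypothetical, added only for
illustration):

struct S2 {
    int n;
    string s;
    alias s this;

    this(string str) { s = str; }   // hypothetical explicit constructor
}

void main() {
    S2 a = S2("Foo!");   // explicit construction: fine today
    string t = a;        // alias this: S2 converts *to* string
    // S2 b = "Foo!";    // the implicit form in question: currently rejected
}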


Another, likely less common, problem is with classes with alias this:

class A {
   int n;
   alias n this;
}

A a = 4;


Should this allocate a new A?

--
Simen


Re: Next focus: PROCESS

2012-12-15 Thread RenatoUtsch

On Saturday, 15 December 2012 at 10:29:55 UTC, Dmitry Olshansky
wrote:

12/14/2012 3:34 AM, deadalnix wrote:

On Thursday, 13 December 2012 at 20:48:30 UTC, deadalnix wrote:
On Thursday, 13 December 2012 at 20:04:50 UTC, Dmitry 
Olshansky wrote:

I think it's good.

But personally I'd expect:

* master to be what you define as dev, because e.g. GitHub 
puts
master as default target branch when making pull requests. 
Yeah, I
know it's their quirk that it's easy to miss. Still leaving 
less
burden to check the branch label for both reviewers and pull 
authors

is a net gain.

* And what you describe as master (it's a snapshot or a 
pre-release

to me) to be named e.g. staging.

And we can as well drop the dead branch 'dev' then.


That also sounds like a good plan to me.


Updated to follow the idea, plus added a bunch of process
description.

Feel free to comment in order to refine this.

http://wiki.dlang.org/Release_Process


I wasn't comfortable doing speculative edits to the wiki
directly, so here are a few more comments:


I think one of the major goals is to be able to continue ongoing
development while at the _same time_ preparing a release. To me
the number one problem is condensed in the statement "we are
going to release, do not merge anything but regressions"; the
process should sidestep this "lock-based" work-flow. It could be
useful to add something along these lines to the goals section.
(i.e. the speed and smoothness is a goal)


The second point is about merging master into staging: why not
just rewrite it with the master branch altogether after each
release?
master is the branch with the correct history (all new stuff is
rebased on it), thus the new staging will have that too.


Here what I proposed on the discussion page, what do you think?

--
I have come up with a good idea for the release schedule. Say
that we are 3 or 4 months before the next LTS release. We
branch the staging branch into a 2.N branch (put the contents
of the staging branch in the 2.N branch), where N is the new
LTS version. Then no other features can be included in this
2.N branch; only bugfixes are allowed. This period will have
one RC every month (?) with the latest fixes in the 2.N
branch. After the 3 or 4 month period we'll tag the 2.N.0
release.

Every 4~6 months after that, we'll release a new 2.N.x version
with the latest bugfixes in the 2.N branch, but with no
additional features (including non-breaking features here
would just be more work for the devs; I don't think it is a
good idea).

While these bugfix releases are being made, every feature that
stabilizes enough in the master branch is merged into the
staging branch, where the feature shouldn't have many breaking
changes (although changes are still allowed, the branch is not
frozen). Every 3 months, dev releases are made with the
features that made their way into the staging branch. This is
done while the fixes in the 2.N branch are made. Then, 4
months prior to the next LTS release, a new 2.N+1 branch will
be created with the staging branch contents. The cycle will
repeat: these 4 months will have no additional features on the
2.N+1 branch, nor will the following 3 years.

This organization, in dev and LTS releases, will allow
releases for the ones that want a stable environment to
develop in (the LTS releases), while the ones that want the
latest great features from D will have a somewhat stable
environment to use (the dev releases, somewhat like the ones
we have now, maybe a little more stable). On top of that, the
staging branch will never be frozen, so development will never
stop, since someone was saying on the forums that stopping it
was a bad idea. And when the new LTS release is made (2 years
after the older LTS), the older LTS will be maintained for one
more year, which means that each LTS will be maintained for 3
years. What do you think? RenatoUtsch (talk) 14:14, 15
December 2012 (CET)
--
http://wiki.dlang.org/Talk:Release_Process#Expanding_LTS


Re: Significant GC performance penalty

2012-12-15 Thread Mike Parker
On Saturday, 15 December 2012 at 11:35:18 UTC, Jacob Carlborg 
wrote:

On 2012-12-14 19:27, Rob T wrote:

I wonder what can be done to allow a programmer to go fully manual,
while not losing any of the nice features of D?


Someone has created a GC-free version of druntime and Phobos.
Unfortunately I can't find the post in the newsgroup right now.


http://3d.benjamin-thaut.de/?p=20
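
For illustration only, a minimal sketch of going fully manual with the stock
toolchain (no GC-free druntime required), assuming core.stdc.stdlib and
std.conv.emplace; the Point type is made up for the example:

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

struct Point { int x, y; }

void main() {
    // Allocate and construct outside the GC heap.
    void[] mem = malloc(Point.sizeof)[0 .. Point.sizeof];
    scope(exit) free(mem.ptr);

    Point* p = emplace!Point(mem, 1, 2);
    assert(p.x == 1 && p.y == 2);
}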


Re: Custom Memory Allocation and reaps

2012-12-15 Thread r_m_r

On 12/15/2012 03:50 AM, Dmitry Olshansky wrote:

I'd throw in a spoiler (and featuring a later date):
http://www.nwcpp.org/old/Downloads/2008/memory-allocation.screen.pdf


Thanks for the slides. BTW is there any video of the presentation?

Interestingly, the slides mention the paper[1] in a few places.

[1] "Reconsidering Custom Memory Allocation" (2002) by Emery D. Berger,
Benjamin G. Zorn, Kathryn S. McKinley

Regards,
r_m_r


Re: Moving towards D2 2.061 (and D1 1.076)

2012-12-15 Thread F i L
On Saturday, 15 December 2012 at 06:17:13 UTC, Walter Bright 
wrote:

On 12/14/2012 6:26 PM, F i L wrote:
Sorry if I missed this, but will User Defined Attributes be
part of 2.061?


Yes.


Awesome! Can't wait :)


Re: Significant GC performance penalty

2012-12-15 Thread Jacob Carlborg

On 2012-12-14 19:27, Rob T wrote:


I wonder what can be done to allow a programmer to go fully manual,
while not losing any of the nice features of D?


Someone has created a GC-free version of druntime and Phobos.
Unfortunately I can't find the post in the newsgroup right now.


--
/Jacob Carlborg


Re: Next focus: PROCESS

2012-12-15 Thread Dmitry Olshansky

12/14/2012 3:34 AM, deadalnix wrote:

On Thursday, 13 December 2012 at 20:48:30 UTC, deadalnix wrote:

On Thursday, 13 December 2012 at 20:04:50 UTC, Dmitry Olshansky wrote:

I think it's good.

But personally I'd expect:

* master to be what you define as dev, because e.g. GitHub puts
master as default target branch when making pull requests. Yeah, I
know it's their quirk that it's easy to miss. Still leaving less
burden to check the branch label for both reviewers and pull authors
is a net gain.

* And what you describe as master (it's a snapshot or a pre-release
to me) to be named e.g. staging.

And we can as well drop the dead branch 'dev' then.


That also sounds like a good plan to me.


Updated to follow the idea, plus added a bunch of process description.
Feel free to comment in order to refine this.

http://wiki.dlang.org/Release_Process


I wasn't comfortable doing speculative edits to the wiki directly, so
here are a few more comments:


I think one of the major goals is to be able to continue ongoing development
while at the _same time_ preparing a release. To me the number one problem
is condensed in the statement "we are going to release, do not merge
anything but regressions"; the process should sidestep this "lock-based"
work-flow. It could be useful to add something along these lines to the
goals section. (i.e. the speed and smoothness is a goal)


The second point is about merging master into staging: why not just rewrite
it with the master branch altogether after each release?
master is the branch with the correct history (all new stuff is rebased on
it), thus the new staging will have that too.


--
Dmitry Olshansky


Re: Invalid trailing code unit

2012-12-15 Thread rumbu

On Saturday, 15 December 2012 at 06:37:51 UTC, Ali Çehreli wrote:


Works here as well.

My guess is that the encoding of the source code is not one of 
the Unicode encodings, rather a "code table" encoding. If so, 
please save the source code in a UTF encoding, e.g. UTF-8.


Ali



Yes, it was ANSI encoded (I think this is the default encoding in
D-IDE); I converted the file to UTF-8 and it's compiling.


Thanks for your help.


Re: No bounds checking for dynamic arrays at compile time?

2012-12-15 Thread Paulo Pinto

On 15.12.2012 03:56, Walter Bright wrote:

On 12/14/2012 7:08 AM, Paulo Pinto wrote:

So the question is: if toy university compilers have flow analysis, why not
have it in D?


The compiler does do full data flow analysis in the optimizer pass. But,
by then, it is intermediate code not D code.



Ah ok.

I am used to seeing it done at the AST level, before doing further
passes.
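
A small sketch of the behaviour under discussion, assuming a default
(bounds-checked) build:

void main() {
    int[3] fixedArr;
    // auto a = fixedArr[5];  // static array, constant index: rejected at compile time

    int[] dynArr = new int[3];
    auto b = dynArr[5];       // dynamic array: compiles; fails with a RangeError at run time
}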


--
Paulo