Unsubscribing from D.Announce

2011-12-20 Thread Paul D. Anderson
Sorry if this question has an obvious answer, but how do I 
unsubscribe from D.Announce?


I subscribed because I was missing compiler updates, etc., but I 
didn't realize how busy the forum was. I get 10 or 20 messages 
every time I check my e-mail (less than once a day) and that's 
about 9 or 19 too many.


Thanks,

Paul


Re: Unsubscribing from D.Announce

2011-12-20 Thread Andrej Mitrovic
http://lists.puremagic.com/cgi-bin/mailman/listinfo/digitalmars-d-announce

Last input box (unsubscribe or edit options button).


Re: Unsubscribing from D.Announce

2011-12-20 Thread Jonathan M Davis
On Tuesday, December 20, 2011 19:12:09 Paul D. Anderson wrote:
 Sorry if this question has an obvious answer, but how do I
 unsubscribe from D.Announce?
 
 I subscribed because I was missing compiler updates, etc., but I
 didn't realize how busy the forum was. I get 10 or 20 messages
 every time I check my e-mail (less than once a day) and that's
 about 9 or 19 too many.

And this list is one of the quiet ones...

- Jonathan M Davis


Re: Unsubscribing from D.Announce

2011-12-20 Thread Paul D. Anderson
On Tuesday, 20 December 2011 at 18:35:52 UTC, Andrej Mitrovic 
wrote:

http://lists.puremagic.com/cgi-bin/mailman/listinfo/digitalmars-d-announce

Last input box (unsubscribe or edit options button).


Thx


Re: DI Generation Needs your Help!

2011-12-20 Thread Adam Wilson
On Mon, 19 Dec 2011 08:53:21 -0800, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 12/19/11 2:11 AM, Adam Wilson wrote:

As you may all be aware, I've been trying to improve the automated
generation of .di files and I now have something that I feel is
testable. Currently the new code only makes the following changes.

1. Function Implementations are removed
2. Private Function Declarations are removed.
3. Variable Initializers, except for const, are removed.
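
For illustration, here is a small hypothetical sketch (not taken from the
actual test cases) of what those three rules would do to a source module:

// example.d - hypothetical input module
module example;

int counter = 10;                       // rule 3: initializer stripped in the .di
const int limit = 100;                  // const: initializer is kept
int next() { return ++counter; }        // rule 1: body removed, prototype kept
private void reset() { counter = 0; }   // rule 2: private declaration dropped

// example.di - what the rules above would emit (sketch):
//
//   module example;
//   int counter;
//   const int limit = 100;
//   int next();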


Don't forget immutable.


I did it, but it wasn't pretty. I had to pass the immutable state via the  
HeaderGenState struct.



Everything else is left alone. Templates and mixins are not addressed
with this code and *should* not be modified. That's where I need your
help: the test cases I have written cover some basic scenarios, but I
don't have the capability to test these changes against the diverse code
base that the community has created.

drey_ from IRC was kind enough to test-build Derelict with the changes
and has discovered a potential issue around private imports. Derelict
uses private imports that contain types which are used in function alias
declarations. As one would expect, this caused many compiler errors.
Currently, I feel that private imports should be stripped from the DI
file as they are intended to be internal to the module. However, I want
to put it to the community to decide, and I would especially appreciate
Mr. Bright's opinion on private imports in DI files.


I suspect you'd still need the private imports because template code may  
use them.


Privates are now all in.

This is great work. It's almost a textbook example of how one can make a  
great positive impact on D's development by finding an area of  
improvement and working on it. Congratulations!


You may want to generate DIs for Phobos and take a look at them. Phobos  
uses a vast array of D's capabilities so it's an effective unittest for  
DI generation.



Thanks,

Andrei



--
Adam Wilson
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/


Re: DI Generation Needs your Help!

2011-12-20 Thread Adam Wilson

On Mon, 19 Dec 2011 00:11:25 -0800, Adam Wilson flybo...@gmail.com wrote:

The latest DI generation code is now on my Git account and ready for  
testing. It fixes the following issues:


1.  Privates should exist in the DI file to support public templates.
2.  Template classes and functions retain their implementations.
3.  Immutable types should retain their initializers.

At this point I could really use testing; you can download the changes from my  
git account here: https://lightben...@github.com/LightBender/dmd.git
I am trying to get myself set up for building Phobos as a test, but this is  
proving to be a lengthy process.


--
Adam Wilson
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/


Re: Java Scala

2011-12-20 Thread Russel Winder
On Sun, 2011-12-18 at 03:57 -0600, Andrei Alexandrescu wrote:
[...]
 It's quite amazing how many discussions a la Java is successful 
 because... completely neglect an essential point: one BILLION dollars 
 was poured into Java, a significant fraction of which was put in 
 branding, marketing, and PR.

Not all of it from Sun -- they didn't have pockets that deep.

 The sheer fact that many of us - even those who actually _lived_ through 
 the Java marketing bonanza - tend to forget about it echoes many studies 
 in marketing: people believe they are making rational and logical 
 choices and refuse to admit and understand they are influenced by 
 marketing, even when they fall prey to textbook marketing techniques.

Corollary:  You have to have a new product on the shelves every 6 months
or people stop buying your product.  Just look on the supermarket
shelves for the use of "new".  The product may be the old product, but
the packaging is different, so it is "new".

 It's easy to forget now, but in the craze of late 1990s, Java was so 
 heavily and so successfully advertised, I remember there were managers 
 who were desperate to adopt Java, and were convinced it would be a 
 strategic disaster if they failed to do so. That weirdly applied even to 
 managers who knew nothing about programming - they were as confused as 
 people who lined up to buy a Windows 95 CD that they couldn't install 
 because they didn't have a computer. It was incredible - a manager would 
 tell me how vital Java adoption is, but had no idea what Java really 
 was. There were Java commercials on the TV! 
 (http://www.youtube.com/watch?v=pHxtB8zr8UM)

I was in academia at the time so don't know what was happening in the
real world, but there certainly was a manic aspect to the Java snowball
-- and I use this metaphor advisedly, when you roll a snowball in snow
it gets bigger, but when the temperature rises snowballs melt away.

Publishers as well as academics were culpable in the mass mania.  A
revamp in the university curriculum meant new books and new sales, so
they pushed it as hard as possible.  Deitel, once a prominent operating
systems author, created a not so great programming languages publishing
empire out of it.

 Back then people were made to believe pretty much anything and 
 everything good about Java. Some believed Java was small and great for 
 limited-memory embedded systems. Some believed there's no real Internet 
 without Java. Some believed Java was awesomely fast. Most importantly, a 
 lot of people in decision positions believed jumping on the Java 
 bandwagon was an absolute necessity. And this gushing of social proof 
 became a self-fulfilling prophecy because with many people working on 
 Java an entire web of tools, libraries, and applications sprung to life, 
 creating offer and demand for more of the same.

And then there was JavaCard.  One of the biggest con jobs of all time.
Fundamentally a good idea, badly executed and managed because it became
a cash cow for Sun.  Now I suspect a blip in history.  Which is a shame
as smartcards are now powerful enough to run something along the
JavaCard lines that would really do something useful with smartcards. 

I see JavaME is being re-raised as useful technology. Great if I can run
courses, but it would be a bad move.  JavaSE Embedded is actually a
different kettle of fish.  Not useful everywhere, but in certain use
cases far better than using C or C++.  Or D, except that there aren't
enough backends for D to make that viable. 

 Andy Warhol would have loved the stunt. Except jumpstarting this 
 gigantic engine wasn't free - it cost Sun one billion dollars. (It could 
 be speculated that ultimately this was part of the reason of Sun's 
 demise because other companies, not Sun, were able to capitalize on 
 Java.) Forgetting the role that that billion dollars played in the 
 success of Java would miss probably the single most important reason, 
 and by far.

Whilst I can believe the $1bn overall, not all of it was Sun, and not
all of it was Java.  cf. the Self language episode.  I bet IBM were
happy.

 Right now I'm begging and cajoling Facebook and Microsoft for 5K-10K to 
 organize a conference on D in 2012. I'll say D is successful when many 
 companies would be honored to offer that level of sponsorship.

Musicians are coming up with new ways of funding things that are working
very well.  Pre-sales.  Put out the road-map and business plan for an
album or concert.  Take bookings and money before committing to
anything, then you have the cash float to make commitments.  Organizing
it from cash flow means no need for sponsors.  Except that once the show
realization is on the road you can inform the sponsors of what a
successful event this is going to be and how they are going to look bad
if they are not there.

PyCon UK (un)conferences tend to get organized on this model these days.

Obviously though it is all about having the contacts who can commit
budget.

-- 
Russel.

Re: Java Scala

2011-12-20 Thread Caligo
On Tue, Dec 20, 2011 at 2:09 AM, Russel Winder rus...@russel.org.uk wrote:

 Musicians are coming up with new ways of funding things that is working
 very well.  Pre-sales.  Put out the road-map and business plan for an
 album or concert.  Take bookings and money before committing to
 anything, then you have the cash float to make commitments.  Organizing
 it from cash flow means no need for sponsors.  Except that once the show
 realization is on the road you can inform the sponsors of what a
 successful event this is going to be and how they are going to look bad
 if they are not there.

 PyCon UK (un)conferences tend to get organized on this model these days.

 Obviously though it is all about having the contacts who can commit
 budget.


I don't understand why Walter, Andrei, or other D experts aren't going to
universities to give talks.  As far as I know it costs no money.  At least
it didn't cost us anything to set up an event when we were activists.  You
just need to ask and reserve a room, such as an auditorium.  It doesn't
have to be some official corporate sponsored DCon.  Don't forget a YouTube
version :-)


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 07:01 AM, Ruslan Mullakhmetov wrote:

On 2011-12-19 11:52:25 +, Alex Rønne Petersen said:


On 18-12-2011 15:40, Somedude wrote:

On 18/12/2011 15:07, Ruslan Mullakhmetov wrote:


GC is just a matter of implementation. Given the resources, implementing
a good GC algorithm offers no difficulty.



Oh really? Then please do us a favor and write one for D. Also I'm
sure the C++ guys will be pleased to hear that it's such an easy task.


Yeah, unfortunately, we can't just keep saying it's an implementation
issue. It's very much a real problem; D programmers are *avoiding*
the GC because it just isn't comparable to, say, the .NET and Java GCs
in performance at all.

- Alex


Thanks for your explanation. I'm quite far away from GC internals, but where is
the problem compared to Java and .NET? Resources (people), specific language
features making it hard to implement a GC, or something else?

When I said that this is just a matter of implementation, I followed the
idea that it's already implemented in, say, Java, C#, and Erlang, whose GCs
were declared to be good enough in this topic.



The difference is that those languages are entirely type safe.


Re: CURL Wrapper: Vote Thread

2011-12-20 Thread Bernard Helyer

Yes.


Re: Java Scala

2011-12-20 Thread Walter Bright

On 12/19/2011 11:41 PM, Paulo Pinto wrote:

I think that it is more important that developers learn proper data
structures and algorithms together with computer architecture than just
assembly, especially if you are dealing with heterogeneous computing, as it is
becoming standard nowadays.


There's no way I would advocate learning just assembly. Learning assembler is 
a very important component of mastering programming, but there are many other 
components.


Re: Next in Review Queue (12/18/2011)? std.serialize/orange?

2011-12-20 Thread Alix Pexton

On 20/12/2011 07:38, Jacob Carlborg wrote:

On 2011-12-20 01:52, Alix Pexton wrote:


I recently started abbreviating Serialization as s11n (Es-Eleven-En) in
the same vein as i18n for Internationalization and l10n for
Localization. It's not my invention; it's in use in C/C++ libs and
probably other languages too. I think std.s11n would be an acceptable
module name.

A...


I've never heard that abbreviation.



I got fed up with typing it the long way and started using the same 
abbreviation technique as for i18n (i.e. 
firstLetter-#omittedLetters-lastLetter), googled the result, and found out 
I was not the first to have the idea. I've no more etymological 
references than that, but I never claimed it was common or well known ^^


A...


Re: d future or plans for d3

2011-12-20 Thread Froglegs


 I've only recently tried D out, but what I'd like to see..

-GC being truly optional
-being able to specify if the standard library should use GC or 
not, perhaps with allocators
-if D is going to have a GC, then a precise, compacting one would 
be cool, but not if it gets in the way of making the GC optional



 One thing I'm not sure about, D classes are virtual by default, 
but if you mark all functions as final does the class still 
contain a VFP or any other cruft?
 Also why are class functions virtual by default? My experience 
in C++ is that I rarely use virtual, so I don't really understand 
why that is the default.






Re: Java Scala

2011-12-20 Thread Walter Bright

On 12/19/2011 11:42 PM, Russel Winder wrote:

I think this might be more true of native code languages than virtual
machine languages.  Java programmers generally don't know the bytecodes,
Python programmers generally don't know the bytecodes, Ruby programmers
generally don't know the bytecodes (Ruby 1.8 may have been interpreted,
but 1.9 is a bytecode-based system).


I don't mean knowing the bytecode. Knowing assembler means you develop a feel 
for what has to happen at the machine level for various constructs. Knowing 
bytecode doesn't help with that.




The problem was that all too often the staff teaching the courses didn't
really know what they were talking about :-((


I learned programming from my peers in college who took pity on my ignorance and 
kindly helped out. I remember Larry Zwick, who said "good gawd, don't you know 
what tables are?" after looking at some coding horror listing of mine. I said 
"whut's dat?" and he proceeded to teach me table-driven state machines on the spot.


I remember learning OOP (though I didn't learn the term for it until years 
later) by reading through the listing for the ADVENT game, and there was the 
comment "a troll is a modified dwarf". It was one of those lightbulb moments.


Re: d future or plans for d3

2011-12-20 Thread Dejan Lekic
On Sun, 18 Dec 2011 04:09:21 +0400, Ruslan Mullakhmetov wrote:

   I want to ask you about D's future, I mean the next big iteration of D, and
 propose a new feature: agent-based programming. Currently, after
 introducing C++11, I see the only advantages of D over C++11, besides
 syntax sugar, as the garbage collector and modules.

I do not think D (as a language) should be modified to support agent-
based programming. A Phobos module (or package) would do. I seriously do 
not see what language changes we need for agent-based programming. :)


Re: Bitmapped vector tries vs. arrays

2011-12-20 Thread Dejan Lekic
On Sun, 18 Dec 2011 15:23:10 +0100, deadalnix wrote:

 
 Some arguments are not convincing. OK, log32(n) is close to O(1) in many
 cases, but it isn't O(1). Matter of fact.
 

Well, he talks about it, and we must admit that for the memory you *can* 
address, log32(n) is really, really fast. Matter of fact. :)
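
For concreteness, a quick back-of-the-envelope in D (assuming the 32-way 
branching the article describes; purely illustrative):

import std.stdio;

void main()
{
    // log32(n) = log2(n) / 5: about 6-7 levels for a full 32-bit address
    // space, and still only ~13 levels for 2^64 elements.
    foreach (bits; [20, 32, 48, 64])
        writefln("n = 2^%s  ->  log32(n) = %.1f", bits, bits / 5.0);
}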


Re: d future or plans for d3

2011-12-20 Thread Jonathan M Davis
On Tuesday, December 20, 2011 11:21:41 Froglegs wrote:
   I've only recently tried D out, but what I'd like to see..
 
 -GC being truly optional
 -being able to specify if the standard library should use GC or
 not, perhaps with allocators

Some aspects of D will _always_ require a GC or they won't work. Array 
concatenation would be a prime example. I believe that delegates are another 
major example. I think that scoped delegates avoid the problem, but any that 
require closures do not. Other things can be done but become risky - e.g. slicing 
arrays (the GC normally owns the memory such that all dynamic arrays are 
slices and none of them own their memory, so slicing manually managed memory 
gets dicey).

There are definitely portions of the standard library that can and should be 
usable without the GC, but because some aspects of the language require it, some 
portions of the standard library will always require it. In general, I don't 
think that it's reasonable to expect to use D without a GC. You can really 
minimize how much you use it, and if you're really, really careful and avoid 
some of D's nicer features, you might be able to avoid it entirely, but 
realistically, if you're using D, you're going to be using the GC at least 
some.

In general, the trick is going to be allowing custom allocators where it makes 
sense (e.g. containers) and being smart about how you design your program 
(e.g. using structs instead of classes if you don't need the extra abilities 
of a class - such as polymorphism). So, if you're smart, you can be very 
efficient with memory usage in D, but unless you really want to hamstring your 
usage of D, avoiding the GC entirely just doesn't work.
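
To make the struct-over-class advice concrete, a minimal sketch (hypothetical 
types, illustrative only):

// Value type: lives on the stack (or inline in its owner), no GC involved.
struct Point
{
    double x, y;
    double lengthSquared() const { return x * x + y * y; }
}

// Reference type: `new` places the instance on the GC heap.
class PointObj
{
    double x, y;
    this(double x, double y) { this.x = x; this.y = y; }
}

void main()
{
    Point p = Point(3, 4);          // no allocation
    auto q = new PointObj(3, 4);    // GC allocation
    assert(p.lengthSquared() == 25);
}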

- Jonathan M Davis


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 11:21 AM, Froglegs wrote:


I've only recently tried D out, but what I'd like to see..

-GC being truly optional
-being able to specify if the standard library should use GC or not,
perhaps with allocators
-if D is going to have a GC, then a precise, compacting one would be
cool, but not if it gets in the way of making the GC optional


One thing I'm not sure about, D classes are virtual by default, but if
you mark all functions as final does the class still contain a VFP or
any other cruft?


The class will still have a vptr. The vtable will contain only the type 
info.



Also why are class functions virtual by default? My experience in C++ is
that I rarely use virtual, so I don't really understand why that is the
default.



In C++, class and struct are essentially the same. In D, you use classes 
if you want polymorphism and structs if you don't need it. structs 
cannot participate in inheritance hierarchies or contain virtual 
functions. If you don't need virtual functions, you should probably use 
structs instead of classes. (you are not doing OOP anyway.)


structs don't contain a vptr.
If you want an extensible class with final methods, class C { final: /* 
method declarations */ } will do the job. But it is not the most common 
case, so it should not be the default. (IIRC C# takes a different stance 
on this: it has final methods by default, to force the programmer to 
make dynamic binding explicit. That has both benefits and drawbacks.)
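
A slightly fuller sketch of that idiom (hypothetical class, just to show the 
layout):

class Counter
{
    private int n;              // plain data member

final:                          // every method below this label is non-virtual
    void bump() { ++n; }
    int value() const { return n; }
}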


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 11:48 AM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 11:21:41 Froglegs wrote:

   I've only recently tried D out, but what I'd like to see..

-GC being truly optional
-being able to specify if the standard library should use GC or
not, perhaps with allocators


Some aspects of D will _always_ require a GC or they won't work.


This is not a huge problem. They just won't work if the programmer 
chooses not to use the GC. There is nothing absolutely essential to 
writing working D programs that requires the GC.



Array
concatenation would be a prime example. I believe that delegates are another
major example. I think that scoped delegates avoid the problem, but any that
require closures do not. Other things can be done but become risky - e.g. slicing
arrays (the GC normally owns the memory such that all dynamic arrays are
slices and none of them own their memory, so slicing manually managed memory
gets dicey).

There are definitely portions of the standard library that can and should be
usable without the GC, but because some aspects of the language require it, some
portions of the standard library will always require it. In general, I don't
think that it's reasonable to expect to use D without a GC. You can really
minimize how much you use it, and if you're really, really careful and avoid
some of D's nicer features, you might be able to avoid it entirely, but
realistically, if you're using D, you're going to be using the GC at least
some.

In general, the trick is going to be allowing custom allocators where it makes
sense (e.g. containers) and being smart about how you design your program
(e.g. using structs instead of classes if you don't need the extra abilities
of a class - such as polymorphism). So, if you're smart, you can be very
efficient with memory usage in D, but unless you really want to hamstring your
usage of D, avoiding the GC entirely just doesn't work.



Note that you can use manual memory management for classes.




Re: d future or plans for d3

2011-12-20 Thread bearophile
Froglegs:

   One thing I'm not sure about, D classes are virtual by default, 
 but if you mark all functions as final does the class still 
 contain a VFP or any other cruft?

Even D final classes that do not have virtual methods have a pointer to a 
virtual table. It's used to know what class an instance is (for reflection, 
and for the destructor too).
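
A tiny illustration of that point (assumed class names, sketch only):

import std.stdio;

class Base { }
final class Leaf : Base { }

void main()
{
    Base b = new Leaf;
    // Even though Leaf adds no virtual methods, the vptr lets the runtime
    // recover the dynamic type of the instance (and run the right destructor).
    writeln(typeid(b));             // prints the module-qualified name of Leaf
    assert(cast(Leaf) b !is null);  // downcasting also relies on that information
}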


   Also why are class functions virtual by default? My experience 
 in C++ is that I rarely use virtual, so I don't really understand 
 why that is the default.

Maybe because D's OO design copies Java's OO design a lot. But even in C# methods 
are not virtual by default.

Bye,
bearophile


Re: d future or plans for d3

2011-12-20 Thread bearophile
Timon Gehr:

 If you don't need virtual functions, you should probably use 
 structs instead of classes. (you are not doing OOP anyway.)

I don't agree with either of those statements.

Bye,
bearophile


Re: d future or plans for d3

2011-12-20 Thread Froglegs


The class will still have a vptr. The vtable will contain only 
the type info.


No way to disable type info (like in most C++ compilers, where you can 
disable RTTI)? I get that the GC might want it, but if I disable the GC, 
why would I want type info?


I saw that D is planning to make the standard containers into 
classes with final methods; why do this instead of using structs 
if it bloats each instance of the container?



Some aspects of D will _always_ require a GC or they won't 
work. Array concatenation would be a prime example. I believe 
that delegates are another major example. I think that scoped 
delegates avoid the problem, but any that require closures do 
not. Other things be done but become risky - e.g. slicing 
arrays (the GC normally owns the memory such that all dynamic 
arrays are slices and none of them own their memory, so slicing 
manually managed memory gets dicey).


The array concatenation requiring GC I get, but why does a 
delegate require it?


This link says D allocates closures on the heap

http://en.wikipedia.org/wiki/Anonymous_function#D

I don't really get why; C++ lambdas work well (aside from the broken 
lack of template lambdas) and do not require heap usage. Even 
binding one to std::function can generally avoid it if it doesn't 
exceed the SBO size.








Re: d future or plans for d3

2011-12-20 Thread Vladimir Panteleev

On Tuesday, 20 December 2011 at 11:17:32 UTC, Froglegs wrote:


The class will still have a vptr. The vtable will contain only 
the type info.


No way to disable type info(like in most C++ compilers you can 
disable RTTI)? I get that GC might want it, but if I disable GC 
why would I want type info?


It's used for casts and other language/library features depending 
on typeid.


There are also the standard Object virtual methods: toString (used 
e.g. when passing objects to writeln), toHash, opEquals, and opCmp 
(used in associative arrays and array sorting).




Re: d future or plans for d3

2011-12-20 Thread Vladimir Panteleev

On Tuesday, 20 December 2011 at 11:17:32 UTC, Froglegs wrote:
The array concatenation requiring GC I get, but why does a 
delegate require it?


This link says D allocates closures on the heap

http://en.wikipedia.org/wiki/Anonymous_function#D

I don't really get why, C++ lambda works well(aside from broken 
lack of template lambda's) and do not require heap usage, even 
binding it to std::function can generally avoid it if it 
doesn't exceed the  SBO size


C++ closures do not allow you to maintain a reference to the 
context after the function containing said context returns. 
Instead, C++ allows you to choose between copying the variables 
into the lambda instance, or referencing them (the references may 
not escape). The compiler may or may not enforce correct uses 
of reference captures. In contrast, D's approach is both 
intuitive (does not copy variables) and safe (conservatively 
allocates on the heap), with the downside of requiring the 
context to be garbage-collected.
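
A minimal D sketch of the escaping case being described (hypothetical function 
name; the heap allocation is the conservative behavior mentioned above):

import std.stdio;

// The returned delegate keeps `x` alive after makeCounter returns, so the
// compiler allocates the enclosing context on the GC heap.
int delegate() makeCounter()
{
    int x = 0;
    return delegate int() { return ++x; };
}

void main()
{
    auto next = makeCounter();
    writeln(next(), " ", next(), " ", next());   // 1 2 3
}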


Re: DI Generation Needs your Help!

2011-12-20 Thread Andrej Mitrovic
Derelict works ok now, good work!

However, the .di files end up eating newlines.

Before:
double ALLEGRO_USECS_TO_SECS(long x)
{
    return x / 1e+06;
}
double ALLEGRO_MSECS_TO_SECS(long x)
{
    return x / 1000;
}
double ALLEGRO_BPS_TO_SECS(int x)
{
    return 1 / x;
}

After:
double ALLEGRO_USECS_TO_SECS(long x);double ALLEGRO_MSECS_TO_SECS(long
x);double ALLEGRO_BPS_TO_SECS(int x);

I've tried merging
https://github.com/D-Programming-Language/dmd/pull/538 but it doesn't
fix this.


Re: d future or plans for d3

2011-12-20 Thread Froglegs


C++ closures do not allow you to maintain a reference to the 
context after the function containing said context returns. 
Instead, C++ allows you to choose between copying the variables 
into the lambda instance, or referencing them (the references 
may not escape). The compiler may or may not enforce correct 
uses of reference captures. In contrast, D's approach is both 
intuitive (does not copy variables) and safe (conservatively 
allocates on the heap), with the downside of requiring the 
context to be garbage-collected.


Ah, makes sense now, thanks.

Still, it seems like a case of "you pay for what you don't use", 
and seems like a real downer for adopting D, since you lose the 
ability to use lambdas without having the GC shoved down your 
throat (wouldn't be so bad if the D GC were known for performance, 
but everything I've read indicates it is quite slow).






Re: d future or plans for d3

2011-12-20 Thread bearophile
Froglegs:

 Still it seems like a case of you pay for what you don't use,

That's a design rule for C++, but D is a bit different :-)
Often in D there are ways to not pay for what you don't use, but you have to ask 
for them. If you don't ask for those ways, you usually need to pay a little, 
and you get a program that's safer or easier to debug. 

So for D the rule is more like "safety by default, and unsafe (and cheap) on 
request".


 and seems like a real downer for adopting D since you loose the 
 ability to use lambda's without having the GC shoved down your 
 throat

In theory, D has a way to use (at the usage site) static delegates that 
don't allocate a closure on the heap. I don't know whether this feature is fully 
and correctly implemented yet, but if not, it's planned.
Bye,
bearophile


Re: Java Scala

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 2:09 AM, Russel Winder wrote:

On Sun, 2011-12-18 at 03:57 -0600, Andrei Alexandrescu wrote:
[...]

It's quite amazing how many discussions a la Java is successful
because... completely neglect an essential point: one BILLION dollars
was poured into Java, a significant fraction of which was put in
branding, marketing, and PR.


Not all of it from Sun -- they didn't have pockets that deep.


The sheer fact that many of us - even those who actually _lived_ through
the Java marketing bonanza - tend to forget about it echoes many studies
in marketing: people believe they are making rational and logical
choices and refuse to admit and understand they are influenced by
marketing, even when they fall prey to textbook marketing techniques.


Corollary:  You have to have new product on the shelves every 6 months
or people stop buying your product.  Just look in the supermarket
shelves for the use of new.  The product may be the old product but
the packaging is different so it is new.


Confusion. Product != brand. There doesn't have to be a new brand to 
replace Starbucks or Coca-Cola every six months.


Andrei




Re: DI Generation Needs your Help!

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 2:03 AM, Adam Wilson wrote:

On Mon, 19 Dec 2011 00:11:25 -0800, Adam Wilson flybo...@gmail.com wrote:

The latest DI generation code is now on my Git account and ready for
testing. It fixes the following issues:

1. Privates should exist in the DI file to support public templates.
2. Template classes and functions retain their implementations.
3. Immutable types should retain their initializers.


Great!


At this point I could really use testing; you can download them from my
git account here: https://lightben...@github.com/LightBender/dmd.git
I am trying to get myself setup for building phobos as a test but this
is proving to be a lengthy process.


Nah, it's much easier than you might think. The posix.mak makefile is 
very small for what it does, and you need to literally change one line 
of code to make it generate .di headers.



Andrei


Re: Java Scala

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 2:09 AM, Russel Winder wrote:

Publishers as well as academics were culpable in the mass mania.  A
revamp in the university curriculum meant new books and new sales, so
they pushed it as hard as possible.  Deitel, once a prominent operating
systems author, created a not so great programming languages publishing
empire out of it.


I was also in academia, doing PL research no less. The academic 
interest was not of commercial nature for the most part - Java _is_ a 
clean language great for doing research of both kinds: (a) research that 
studies programs written in that language, (b) research that adds a 
little feature to the language and proves its properties. The fact that 
Java is underpowered has no import to that kind of work.


Andrei


Re: Java Scala

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 2:26 AM, Caligo wrote:



On Tue, Dec 20, 2011 at 2:09 AM, Russel Winder rus...@russel.org.uk
mailto:rus...@russel.org.uk wrote:

Musicians are coming up with new ways of funding things that is working
very well.  Pre-sales.  Put out the road-map and business plan for an
album or concert.  Take bookings and money before committing to
anything, then you have the cash float to make commitments.  Organizing
it from cash flow means no need for sponsors.  Except that once the show
realization is on the road you can inform the sponsors of what a
successful event this is going to be and how they are going to look bad
if they are not there.

PyCon UK (un)conferences tend to get organized on this model these days.

Obviously though it is all about having the contacts who can commit
budget.


I don't understand why Walter, Andrei, or other D experts aren't going
to universities to give talks.


We do, and at corporations as well as universities. There are a variety 
of scheduling issues, but they can be worked out. The main reason we're 
not doing more is that there are not many invitations.


Andrei


initializedArray

2011-12-20 Thread Andrej Mitrovic
I think it would be cool to have an initializedArray function, which
creates and initializes an array with a *specific* initializer. A
hardcoded example would be:

import std.array;

auto initializedArray(F:float[])(size_t size, float init)
{
    auto arr = uninitializedArray!(float[])(size);
    arr[] = init;
    return arr;
}

void main()
{
    float[] arr = initializedArray!(float[])(3, 0.0f);
    assert(arr[] == [0.0f, 0.0f, 0.0f]);
}

Currently there's no D syntax for using new on arrays and specifying a
specific initializer, so maybe we should have this as a library
function. Thoughts?
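
A rough generic sketch of the same idea (untested; the helper name and 
signature are just illustrative, not a finished proposal):

import std.array;

// Create a T[] of `size` elements, each set to `init`.
T[] initializedArray(T)(size_t size, T init)
{
    auto arr = uninitializedArray!(T[])(size);
    arr[] = init;
    return arr;
}

void main()
{
    auto a = initializedArray(5, 3.14);   // a double[] of five 3.14s
    assert(a == [3.14, 3.14, 3.14, 3.14, 3.14]);
}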


Re: Bitmapped vector tries vs. arrays

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 4:44 AM, Dejan Lekic wrote:

On Sun, 18 Dec 2011 15:23:10 +0100, deadalnix wrote:



Some arguments are not convincing. OK, log32(n) is close to O(1) in many
cases but it isn't O(1). Matter of fact.



Well, he talks about it, and we must admit that for the memory you *can*
address, log32(n) is really, really fast. Matter of fact. :)


I agree. Well, anywhere between 1 and 7. I only have a bit of a problem 
with making a number between 1 and 7 equal to 1.


Andrei


Re: d future or plans for d3

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 5:56 AM, Froglegs wrote:



C++ closures do not allow you to maintain a reference to the context
after the function containing said context returns. Instead, C++
allows you to choose between copying the variables into the lambda
instance, or referencing them (the references may not escape). The
compiler may or may not enforce correct uses of reference captures. In
contrast, D's approach is both intuitive (does not copy variables) and
safe (conservatively allocates on the heap), with the downside of
requiring the context to be garbage-collected.


Ah, makes sense now, thanks.

Still it seems like a case of you pay for what you don't use, and
seems like a real downer for adopting D since you lose the ability to
use lambdas without having the GC shoved down your throat (wouldn't be
so bad if the D GC was known for performance, but everything I've read
indicates it is quite slow).


D's pass-down lambdas do not need memory allocation. As far as I  
remember, none of std.algorithm's uses of lambdas allocates memory.
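
A small sketch of the pass-down case (illustrative; that it avoids allocation 
is the claim above, not something this snippet proves):

import std.algorithm, std.stdio;

void main()
{
    int threshold = 2;
    bool big(int a) { return a > threshold; }   // nested function using a local

    // The predicate is only passed "down" into count and never escapes this
    // scope, so no closure has to be allocated for it.
    writeln(count!big([1, 2, 3, 4]));           // prints 2
}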


Andrei


Re: d future or plans for d3

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 5:17 AM, Froglegs wrote:

The array concatenation requiring GC I get, but why does a delegate
require it?

This link says D allocates closures on the heap

http://en.wikipedia.org/wiki/Anonymous_function#D

I don't really get why, C++ lambda works well(aside from broken lack of
template lambda's) and do not require heap usage, even binding it to
std::function can generally avoid it if it doesn't exceed the SBO size


Well, another way of putting it is that std::function MUST do heap allocation 
if the environment size exceeds the SBO size. It's the same thing here.


Just like C++ lambdas, D lambdas and local functions don't need heap 
allocation unless they escape their scope.



Andrei


Re: initializedArray

2011-12-20 Thread Andrej Mitrovic
Ok here's an initial implementation (I've had to put the initializer
first, otherwise I can't use variadic arguments):
http://www.ideone.com/2rqFb

I've borrowed BaseElementType from Philippe Sigaud's template book.


Re: initializedArray

2011-12-20 Thread Andrej Mitrovic
Afternote: I didn't actually need to pass that array via ref; I'm only
modifying the elements.


Re: d future or plans for d3

2011-12-20 Thread Froglegs


D's pass-down lambdas do not need memory allocation. As far as 
I remember none of std.algorithm's use of lambda allocates 
memory.


Andrei


Oh cool, I like that


Re: d future or plans for d3

2011-12-20 Thread deadalnix

On 20/12/2011 14:08, Andrei Alexandrescu wrote:

On 12/20/11 5:56 AM, Froglegs wrote:



C++ closures do not allow you to maintain a reference to the context
after the function containing said context returns. Instead, C++
allows you to choose between copying the variables into the lambda
instance, or referencing them (the references may not escape). The
compiler may or may not enforce correct uses of reference captures. In
contrast, D's approach is both intuitive (does not copy variables) and
safe (conservatively allocates on the heap), with the downside of
requiring the context to be garbage-collected.


Ah, makes sense now, thanks.

Still it seems like a case of you pay for what you don't use, and
seems like a real downer for adopting D since you lose the ability to
use lambdas without having the GC shoved down your throat (wouldn't be
so bad if the D GC was known for performance, but everything I've read
indicates it is quite slow).


D's pass-down lambdas do not need memory allocation. As far as I
remember none of std.algorithm's use of lambda allocates memory.

Andrei


Is the compiler able to ensure that and not allocate on the heap?


Re: d future or plans for d3

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 7:41 AM, deadalnix wrote:

D's pass-down lambdas do not need memory allocation. As far as I
remember none of std.algorithm's use of lambda allocates memory.

Andrei


Is the compiler able to ensure that and not allocate on the heap?


Yes, to the best of my knowledge it's pretty much cut and dried.

Andrei


Re: CURL Wrapper: Vote Thread

2011-12-20 Thread Masahiro Nakagawa

Yes!


Masahiro

On Sun, 18 Dec 2011 04:36:15 +0900, dsimcha dsim...@yahoo.com wrote:

The time has come to vote on the inclusion of Jonas Drewsen's CURL  
wrapper in Phobos.



Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d
Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html


For those of you on Windows, a libcurl binary built by DMC is available  
at http://gool.googlecode.com/files/libcurl_7.21.7.zip.



Voting lasts one week and ends on 12/24.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Denis Shelomovskij

On 16.12.2011 21:29, Andrei Alexandrescu wrote:

Hello,


Late last night Walter and I figured out a few interesting tidbits of
information. Allow me to give some context, discuss them, and sketch a
few approaches for improving things.

A while ago Walter wanted to enable function-level linking, i.e. only
get the needed functions from a given (and presumably large) module. So
he arranged things so that a library contains many small object files
(that actually are generated from a single .d file and never exist on
disk, only inside the library file, which can be considered an archive
like tar). Then the linker would only pick the used object files from
the library and link those in. Unfortunately that didn't have nearly the
expected impact - essentially the size of most binaries stayed the same.
The mystery was unsolved, and Walter needed to move on to other things.

One particularly annoying issue is that even programs that don't
ostensibly use anything from an imported module may balloon inexplicably
in size. Consider:

import std.path;
void main(){}

This program, after stripping and all, has some 750KB in size. Removing
the import line reduces the size to 218KB. That includes the runtime
support, garbage collector, and such, and I'll consider it a baseline.
(A similar but separate discussion could be focused on reducing the
baseline size, but herein I'll consider it constant.)

What we'd simply want is to be able to import stuff without blatantly
paying for what we don't use. If a program imports std.path and uses no
function from it, it should be as large as a program without the import.
Furthermore, the increase should be incremental - using 2-3 functions
from std.path should only increase the executable size by a little, not
suddenly link in all code in that module.

But in experiments it seemed like program size would increase in sudden
amounts when certain modules were included. After much investigation we
figured that the following fateful causal sequence happened:

1. Some modules define static constructors with static this() or
shared static this(), and/or static destructors.

2. These constructors/destructors are linked in automatically whenever a
module is included.

3. Importing a module with a static constructor (or destructor) will
generate its ModuleInfo structure, which contains static information
about all module members. In particular, it keeps virtual table pointers
for all classes defined inside the module.

4. That means generating ModuleInfo references all virtual functions defined
in that module, whether they're used or not.

5. The phenomenon is transitive, e.g. even if std.path has no static
constructors but imports std.datetime which does, a ModuleInfo is
generated for std.path too, in addition to the one for std.datetime. So
now classes inside std.path (if any) will be all linked in.

6. It follows that a module that defines classes which in turn use other
functions in other modules, and has static constructors (or includes
other modules that do) will balloon the size of the executable suddenly.

There are a few approaches that we can use to improve the state of affairs.

A. On the library side, use static constructors and destructors
sparingly inside druntime and std. We can use lazy initialization
instead of compulsively initializing library internals. I think this is
often a worthy thing to do in any case (dynamic libraries etc) because
it only does work if and when work needs to be done at the small cost of
a check upon each use.

B. On the compiler side, we could use a similar lazy initialization
trick to only refer to class methods in the module if they're actually
needed. I'm being vague here because I'm not sure what and how that can
be done.
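
A minimal sketch of approach A with hypothetical names (real code in 
druntime/Phobos would also have to consider thread safety):

module mylib.registry;          // hypothetical module

private string[int] names;      // formerly populated by static this()
private bool initialized;

private void ensureInitialized()
{
    if (!initialized)
    {
        names = [1: "one", 2: "two"];   // do the work only on first use
        initialized = true;
    }
}

string lookup(int code)
{
    ensureInitialized();        // a small check per use instead of a module ctor
    auto p = code in names;
    return p ? *p : "?";
}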

Here's a list of all files in std using static cdtors:

std/__fileinit.d
std/concurrency.d
std/cpuid.d
std/cstream.d
std/datebase.d
std/datetime.d
std/encoding.d
std/internal/math/biguintcore.d
std/internal/math/biguintx86.d
std/internal/processinit.d
std/internal/windows/advapi32.d
std/mmfile.d
std/parallelism.d
std/perf.d
std/socket.d
std/stdiobase.d
std/uri.d

The majority of them don't do a lot of work and are not much used inside
phobos, so they don't blow up the executable. The main one that could
receive some attention is std.datetime. It has a few static ctors and a
lot of classes. Essentially just importing std.datetime or any std
module that transitively imports std.datetime (and there are many of
them) ends up linking in most of Phobos and blows the size up from the
218KB baseline to 700KB.

Jonathan, could I impose on you to replace all static cdtors in
std.datetime with lazy initialization? I looked through it and it
strikes me as a reasonably simple job, but I think you'd know better
what to do than me.

A similar effort could be conducted to reduce or eliminate static cdtors
from druntime. I made the experiment of commenting them all, and that
reduced the size of the baseline from 218KB to 200KB. This is a good
amount, but not as 

Re: d future or plans for d3

2011-12-20 Thread jerro
 The array concatenation requiring GC I get, but why does a 
 delegate require it?

If you really want a stack allocated delegate, you could use something like:

import std.stdio, std.traits;

struct DelegateWrapper(alias fun, Args...)
{
    Args args;
    private auto f(ParameterTypeTuple!fun[Args.length..$] otherArgs)
    {
        return fun(args, otherArgs);
    }
    auto dg()
    {
        return &f;
    }
}

auto delegateWrapper(alias fun, A...)(A a)
{
    return DelegateWrapper!(fun, A)(a);
}

void main()
{
    static void test(int a, int b, int c)
    {
        writeln(a, b, c);
    }
    auto b = delegateWrapper!(test)(1, 2);

    b.dg()(3);
}




auto + Top-level Const/Immutable

2011-12-20 Thread dsimcha
The changes made to IFTI in DMD 2.057 are great, but they reveal another 
hassle with getting generic code to play nice with const.


import std.range, std.array;

ElementType!R sum(R)(R range) {
    if(range.empty) return 0;
    auto ans = range.front;
    range.popFront();

    foreach(elem; range) ans += elem;
    return ans;
}

void main() {
    const double[] nums = [1, 2, 3];
    sum(nums);
}

test.d(8): Error: variable test9.sum!(const(double)[]).sum.ans cannot 
modify const
test.d(14): Error: template instance test9.sum!(const(double)[]) error 
instantiating


Of course this is fixable with an Unqual, but it requires the programmer 
to remember this every time and breaks for structs with indirection. 
Should we make `auto` also strip top-level const from primitives and 
arrays and, if const(Object)ref gets in, from objects?
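
For reference, a sketch of the Unqual workaround mentioned above (the status 
quo, not a proposal):

import std.range, std.array, std.traits;

ElementType!R sum(R)(R range) {
    if(range.empty) return 0;
    // Strip the top-level qualifier by hand so the accumulator is mutable.
    Unqual!(ElementType!R) ans = range.front;
    range.popFront();

    foreach(elem; range) ans += elem;
    return ans;
}

void main() {
    const double[] nums = [1, 2, 3];
    assert(sum(nums) == 6);
}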


Re: Program size, linking matter, and static this()

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 9:00 AM, Denis Shelomovskij wrote:

16.12.2011 21:29, Andrei Alexandrescu пишет:

[snip]

Really sorry, but it sounds silly to me. It's a minor problem. Does
anyone really care about a 600 KiB (3.5x) size change in an empty
program? Yes, they do, but only if there are no other size increases in
real programs.


In my experience, in a system programming language people do care about 
baseline size for one reason or another. I'd agree the reason is often 
overstated. But I did notice that people take a look at D and use 
"hello, world" size as a proxy for the language's overall overhead - 
runtime, handling of linking etc. You may or may not care about the 
conclusions of our investigation, but we and a category of people do 
care for a variety of project sizes and approaches to building them.



Now dmd has at least a _two orders of magnitude_ file size increase. I
posted that problem four months ago as "Building GtkD app on Win32
results in 111 MiB file mostly from zeroes".

[snip]

---
char arr[1024 * 1024 * 10];
void main() { }
---

[snip]

If the described issues aren't much more significant than static this(),
show me where I am wrong, please.


Using BSS is a nice optimization, but not all compilers do it and I know 
for a fact MSVC didn't have it for a long time. That's probably why I 
got used to thinking "poor style" when seeing a large statically-sized 
buffer with static duration.


I'd say both issues deserve to be looked at, and saying one is more 
significant than the other would be difficult.



Andrei


Re: auto + Top-level Const/Immutable

2011-12-20 Thread Martin Nowak

On Tue, 20 Dec 2011 15:23:31 +0100, dsimcha dsim...@yahoo.com wrote:

The changes made to IFTI in DMD 2.057 are great, but they reveal another  
hassle with getting generic code to play nice with const.


import std.range, std.array;

ElementType!R sum(R)(R range) {
 if(range.empty) return 0;
 auto ans = range.front;
 range.popFront();

 foreach(elem; range) ans += elem;
 return ans;
}

void main() {
 const double[] nums = [1, 2, 3];
 sum(nums);
}

test.d(8): Error: variable test9.sum!(const(double)[]).sum.ans cannot  
modify const
test.d(14): Error: template instance test9.sum!(const(double)[]) error  
instantiating


Of course this is fixable with an Unqual, but it requires the programmer  
to remember this every time and breaks for structs with indirection.  
Should we make `auto` also strip top-level const from primitives and  
arrays and, if const(Object)ref gets in, from objects?


At first thought, yes. I always end up using 'const/immutable var = exp'  
if I want the other one, and 'auto var = exp' with const pretty often 
causes trouble.


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 11:57 AM, bearophile wrote:

Timon Gehr:


If you don't need virtual functions, you should probably use
structs instead of classes. (you are not doing OOP anyway.)


I don't agree with both that statements.

Bye,
bearophile


1. He does not want type info. Structs don't have type info. He does not 
want virtual functions. Structs don't support virtual functions. Ergo he 
should use mostly structs. Please defend your disagreement.


2. Dynamic binding is a core concept of OOP. A language that does not 
support dynamic binding does not support OOP. A program that does not 
use dynamic binding is not object oriented. What is to disagree with?


Re: CURL Wrapper: Vote Thread

2011-12-20 Thread Sean Kelly
Yes. 

Sent from my iPhone

On Dec 20, 2011, at 1:49 AM, Bernard Helyer b.hel...@gmail.com wrote:

 Yes.


Top C++

2011-12-20 Thread deadalnix

http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make-things-so-complicated/


Re: Top C++

2011-12-20 Thread dsimcha

On Tuesday, 20 December 2011 at 15:21:46 UTC, deadalnix wrote:

http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make-things-so-complicated/


Sounds a lot like SafeD vs. non-safe D.


Re: std.container and classes

2011-12-20 Thread Jerry
Jonathan M Davis jmdavisp...@gmx.com writes:

 On Saturday, December 17, 2011 17:31:46 Andrei Alexandrescu wrote:
 On 12/13/11 9:08 PM, Jonathan M Davis wrote:
  Is the plan for std.container still to have all of its containers be
  final classes (classes so that they're reference types and final so
  that their functions are inlinable)? Or has that changed? I believe
  that Andrei said something recently about discussing reference counting
  and containers with Walter.
  
  The reason that I bring this up is that Array and SList are still
  structs, and the longer that they're structs, the more code that will
  break when they get changed to classes. Granted, some level of code
  breakage may occur when we add custom allocators to them, but since
  that would probably only affect the constructor (and preferably
  wouldn't affect anything if you want to simply create a container with
  the GC heap as you would now were Array and SList classes), the
  breakage for that should be minimal.
  
  Is there any reason for me to not just go and make Array and SList final
  classes and create a pull request for it?
  
  - Jonathan M Davis
 
 Apologies for being slow on this. It may be a fateful time to discuss
 that right now, after all the discussion of what's appropriate for
 stdlib vs. application code etc.
 
 As some of you know, Walter and I went back and forth several times on
 this. First, there was the issue of making containers value types vs.
 reference types. Making containers value types would be in keeping with the
 STL approach. However, Walter noted that copying entire containers by
 default is most often NOT desirable and there's significant care and
 adornments in C++ programs to make sure that that default behavior is
 avoided (e.g. adding const to function parameters).
 
 So we decided to make containers reference types, and that seemed to be
 a good choice.
 
 The second decision is classes vs. structs. Walter correctly pointed out
 that the obvious choice for defining a reference type in D - whether the
 type is monomorphic or polymorphic - is making it a class. If containers
 aren't classes, the reasoning went, it means we took a wrong step
 somewhere; it might mean our flagship abstraction for reference types is
 not suitable for, well, defining a reference type.
 
 Fast forward a couple of months, a few unslept nights, and a bunch of
 newsgroup and IRC conversations. Several additional pieces came together.
 
 The most important thing I noticed is that people expect standard
 containers to have sophisticated memory management. Many ask not about
 containers as much as containers with custom allocators. Second,
 containers tend to be large memory users by definition. Third,
 containers are self-contained (heh) and relatively simple in terms of
 what they model, meaning that they _never_ suffer from circular
 references, like general entity types might.
 
 All of these arguments very strongly suggest that many want containers
 to be types with deterministic control over memory and accept
 configurable allocation strategies (regions, heap, malloc, custom). So
 that would mean containers should be reference counted structs.
 
 This cycle of thought has happened twice, and the evidence coming the
 second time has been stronger. The first time around I went about and
 started implementing std.container with reference counting in mind. The
 code is not easy to write, and is not to be recommended for most types,
 hence my thinking (at the end of the first cycle) that we should switch
 to class containers. One fear I have is that people would be curious,
 look at the implementation of std.container, and be like so am I
 expected to do all this to define a robust type? I start to think that
 the right answer to that is to improve library support for good
 reference counted types, and define reference counted struct containers
 that are deterministic.
 
 Safety is also an issue. I was hoping I'd provide safety as a policy,
 e.g. one may choose for a given container whether they want safe or not
 (and presumably fast). I think it's best to postpone that policy and
 focus for now on defining safe containers with safe ranges. This
 precludes e.g. using T[] as a range for Array!T.
 
 Please discuss.

 The only reason that I can think of to use a reference-counted struct instead 
 of a class is because then it's easier to avoid the GC heap entirely.  Almost 
 all of a container's memory is going to end up on the heap regardless, 
 because 
 the elements almost never end up in the container itself. They're in a 
 dynamic 
 array or in nodes or something similar. So, whether the container is ref-
 counted or a class is almost irrelevant. If anything making it a class makes 
 more sense, because then it doesn't have to worry about the extra cost of ref-
 counting. However, once we bring allocators into the picture, the situation 
 changes slightly. In both cases, the elements themselves end up where the 
 

Re: Top C++

2011-12-20 Thread Timon Gehr

On 12/20/2011 04:22 PM, deadalnix wrote:

http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make-things-so-complicated/



Top C++ sounds like SafeD.


Re: Top C++

2011-12-20 Thread deadalnix

On 20/12/2011 16:37, dsimcha wrote:

On Tuesday, 20 December 2011 at 15:21:46 UTC, deadalnix wrote:

http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make-things-so-complicated/



Sounds a lot like SafeD vs. non-safe D.


That is what I thought and it is why I posted it here.


Re: Java Scala

2011-12-20 Thread J Arrizza
On Mon, Dec 19, 2011 at 12:48 PM, Walter Bright
newshou...@digitalmars.com wrote:

 On 12/19/2011 11:52 AM, ddverne wrote:

 On Sunday, 18 December 2011 at 07:09:21 UTC, Walter Bright wrote:

 A programmer who doesn't know assembler is never going to write better
 than
 second rate programs.



  You are going to be a better C, C++, or D programmer if you're
 comfortable with assembler.


In my university the assembler course was a weeder course. If you passed it
you got into second year (750 entrants, 150 openings).

My point is that being comfortable with assembler is likely an effect, not a
cause. If you have the motivation and skills to pick up assembler in a
semester, then you are probably going to be a better programmer in the end
simply because of your motivation and skills, not necessarily from knowing
assembler.

OTOH my first exposure to programming was hand assembly of machine code on
a MIKBUG-based SWTPC. When I used an actual assembler it was, "thank you
gxd for making my life a whole hell of a lot easier!" C was the next step
in ease. "You mean I don't have to actually keep track of every register's
content?" And so on up the tree of abstraction I went.

In the end, this progression has been extremely beneficial in visualizing
how all that abstract source code translates down into machine code. Memory
allocation, speed and size optimization, etc. etc. make a lot more sense
when you know how the machine behaves at a fundamental level.

And on the other-other hand, the bottom line is this. Wetware causes the
problems in software development. How can a language feature help fix or prevent
those problems? And of course all that balanced against the need for some
developers to break the speed limit.

John


Re: d future or plans for d3

2011-12-20 Thread Jonathan M Davis
On Tuesday, December 20, 2011 15:49:34 Timon Gehr wrote:
 2. Dynamic binding is a core concept of OOP. A language that does not
 support dynamic binding does not support OOP. A program that does not
 use dynamic binding is not object oriented. What is to disagree with?

I don't agree with that either. You don't need polymorphism for OOP. It's 
quite restricted without it, but you can still program with objects even if 
you're restricted to something like D's structs, so you're still doing OOP.

- Jonathan M Davis


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 05:36 PM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 15:49:34 Timon Gehr wrote:

2. Dynamic binding is a core concept of OOP. A language that does not
support dynamic binding does not support OOP. A program that does not
use dynamic binding is not object oriented. What is to disagree with?


I don't agree with that either. You don't need polymorphism for OOP. It's
quite restricted without it, but you can still program with objects even if
you're restricted to something like D's structs, so you're still doing OOP.

- Jonathan M Davis


No. That is glorified procedural style. 'Objects' as in 'OOP' carry data 
and _behavior_, structs don't (except if you give them some function 
pointers, but that is just implementing poor man's polymorphism.)


Having some kind of dynamic execution model is a requirement for OOP. 
There are no two ways about it.
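
Concretely, dynamic binding here means virtual dispatch through a class 
reference, as in this minimal illustration:

import std.stdio;

class Animal { void speak() { writeln("..."); } }                  // class methods are virtual by default
class Dog : Animal { override void speak() { writeln("woof"); } }

void main()
{
    Animal a = new Dog();
    a.speak();  // resolved at run time: prints "woof"
}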


Re: d future or plans for d3

2011-12-20 Thread Jonathan M Davis
On Tuesday, December 20, 2011 17:58:30 Timon Gehr wrote:
 On 12/20/2011 05:36 PM, Jonathan M Davis wrote:
  On Tuesday, December 20, 2011 15:49:34 Timon Gehr wrote:
  2. Dynamic binding is a core concept of OOP. A language that does not
  support dynamic binding does not support OOP. A program that does not
  use dynamic binding is not object oriented. What is to disagree with?
  
  I don't agree with that either. You don't need polymorphism for OOP.
  It's
  quite restricted without it, but you can still program with objects even
  if you're restricted to something like D's structs, so you're still
  doing OOP.
  
  - Jonathan M Davis
 
 No. That is glorified procedural style. 'Objects' as in 'OOP' carry data
 and _behavior_, structs don't (except if you give them some function
 pointers, but that is just implementing poor man's polymorphism.)
 
 Having some kind of dynamic execution model is a requirement for OOP.
 There are no two ways about it.

Well, I completely disagree. The core of OOP is encapsulating the data within 
an object and having functions associated with the object itself which operate 
on that data. It's about encapsulation and tying the functions to the type. 
Polymorphism is a nice bonus, but it's not required.

I'd say that any language which is really trying to do OOP should definitely 
have polymorphism or it's going to have pretty sucky OOP, but it can still 
have OOP.

- Jonathan M Davis


Re: auto + Top-level Const/Immutable

2011-12-20 Thread Jonathan M Davis
On Tuesday, December 20, 2011 09:23:31 dsimcha wrote:
 The changes made to IFTI in DMD 2.057 are great, but they reveal another
 hassle with getting generic code to play nice with const.
 
 import std.range, std.array;
 
 ElementType!R sum(R)(R range) {
 if(range.empty) return 0;
 auto ans = range.front;
 range.popFront();
 
 foreach(elem; range) ans += elem;
 return ans;
 }
 
 void main() {
 const double[] nums = [1, 2, 3];
 sum(nums);
 }
 
 test.d(8): Error: variable test9.sum!(const(double)[]).sum.ans cannot
 modify const
 test.d(14): Error: template instance test9.sum!(const(double)[]) error
 instantiating
 
 Of course this is fixable with an Unqual, but it requires the programmer
 to remember this every time and breaks for structs with indirection.
 Should we make `auto` also strip top-level const from primitives and
 arrays and, if const(Object)ref gets in, from objects?

Assuming that the assignment can still take place, then making auto infer non-
const and non-immutable would be an improvement IMHO. However, there _are_ 
cases where you'd have to retain const - a prime example being classes. But 
value types could have const/immutable stripped from them, as could arrays 
using their tail-constness.

- Jonathan M Davis
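
The Unqual workaround alluded to above would look roughly like this (a sketch 
only; Unqual lives in std.traits):

import std.array, std.range, std.traits;

Unqual!(ElementType!R) sum(R)(R range)
{
    if (range.empty) return 0;
    Unqual!(ElementType!R) ans = range.front;  // mutable copy of the (possibly const) element
    range.popFront();

    foreach (elem; range) ans += elem;
    return ans;
}

void main()
{
    const double[] nums = [1, 2, 3];
    assert(sum(nums) == 6);
}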


Re: d future or plans for d3

2011-12-20 Thread bearophile
Timon Gehr:

 What is to disagree with?

Sorry for my previous answer, please ignore it; sometimes I have too big a 
mouth. I think discussions about definitions are not so interesting.

Bye,
bearophile


Re: initializedArray

2011-12-20 Thread Paul D. Anderson
I think this is a great idea and a good example of adding to the 
library rather than changing the syntax.


Paul

Wouldn't the sentence 'I want to put a hyphen between the words 
Fish and And and And and Chips in my Fish-And-Chips sign' have 
been clearer if quotation marks had been placed before Fish, and 
between Fish and and, and and and And, and And and and, and and 
and And, and And and and, and and and Chips, as well as after 
Chips? — Martin Gardner


On Tuesday, 20 December 2011 at 12:55:18 UTC, Andrej Mitrovic 
wrote:
I think it would be cool to have an initializedArray function, which 
creates and initializes an array with a *specific* initializer. A 
hardcoded example would be:

import std.array;

auto initializedArray(F:float[])(size_t size, float init)
{
  auto arr = uninitializedArray!(float[])(size);
  arr[] = init;
  return arr;
}

void main()
{
  float[] arr = initializedArray!(float[])(3, 0.0f);
  assert(arr[] == [0.0f, 0.0f, 0.0f]);
}

Currently there's no D syntax for using new on arrays and specifying a 
specific initializer, so maybe we should have this as a library 
function. Thoughts?





Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread dsimcha
I started poking around and examining the details of how the GNU 
linker works, to solve some annoying issues with LDC.  In the 
process I found the following things that may be useful low-hanging 
fruit for reducing binary size:


1.  If you have an ar library of object files, by default no dead 
code elimination is apparently done within an object file, or at 
least not nearly as much as one would expect.  Each object file 
in the ar library either gets pulled in or doesn't.


2.  When something is compiled with -lib, DMD writes libraries 
with one object file **per function**, to get around this.  GDC 
and LDC don't.  However, if you compile the object files and then 
manually make an archive with the ar command (which is common in 
a lot of build processes, such as gtkD's), this doesn't apply.


3.  The defaults can be overridden if you compile your code with 
-ffunction-sections and -fdata-sections (DMD doesn't support 
this, GDC and LDC do) and link with --gc-sections.  
-ffunction-sections and -fdata-sections cause each function or 
piece of static data to be written as its own section in the 
object file, instead of having one giant section that's either 
pulled in or not.  --gc-sections garbage collects unused 
sections, resulting in much smaller binaries especially when the 
sections are fine-grained.


On one project I'm working on, I compiled all the libs I use with 
GDC using -ffunction-sections -fdata-sections.  The stripped 
binary is 5.6 MB when I link the app without --gc-sections, or 
3.5 MB with --gc-sections.  Quite a difference.  The difference 
would be even larger if Phobos were compiled w/ 
-ffunction-sections and -fdata-sections.  (See 
https://bitbucket.org/goshawk/gdc/issue/293/ffunction-sections-fdata-sections-for 
).


DMD can't compile libraries with -ffunction-sections or 
-fdata-sections and due to other details of my build process that 
are too complicated to explain here, the results from DMD aren't 
directly comparable to those from GDC.  However, --gc-sections 
reduces the DMD binaries from 11 MB to 9 MB.


Bottom line:  If we want to reduce D's binary size there are two 
pieces of low-hanging fruit:


1.  Make -L--gc-sections the default in dmd.conf on Linux and 
probably other Posix OS's.


2.  Add -ffunction-sections and -fdata-sections or equivalents to 
DMD and compile Phobos with these enabled.  I have no idea how 
hard this would be, but I imagine it would be easy for someone 
who's already familiar with object file formats.
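
For illustration, the two pieces would be used roughly like this (app.d is 
just a placeholder file name; dmd's -L prefix simply forwards the flag to 
the linker):

gdc -O2 -ffunction-sections -fdata-sections -Wl,--gc-sections app.d -o app
dmd -L--gc-sections app.d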


Re: auto + Top-level Const/Immutable

2011-12-20 Thread dsimcha
On Tuesday, 20 December 2011 at 17:46:40 UTC, Jonathan M Davis 
wrote:
Assuming that the assignment can still take place, then making 
auto infer non-
const and non-immutable would be an improvement IMHO. However, 
there _are_ cases where you'd have to retain const - a prime 
example being classes. But value types could have 
const/immutable stripped from them, as could arrays using their 
tail-constness.


- Jonathan M Davis


Right.  The objects would only be head de-constified if Michael 
Fortin's patch to allow such things got in.  A simple way of 
explaining this would be: auto removes top-level const from the 
type T if T implicitly converts to the type that would result.


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 06:41 PM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 17:58:30 Timon Gehr wrote:

On 12/20/2011 05:36 PM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 15:49:34 Timon Gehr wrote:

2. Dynamic binding is a core concept of OOP. A language that does not
support dynamic binding does not support OOP. A program that does not
use dynamic binding is not object oriented. What is to disagree with?


I don't agree with that either. You don't need polymorphism for OOP.
It's
quite restricted without it, but you can still program with objects even
if you're restricted to something like D's structs, so you're still
doing OOP.

- Jonathan M Davis


No. That is glorified procedural style. 'Objects' as in 'OOP' carry data
and _behavior_, structs don't (except if you give them some function
pointers, but that is just implementing poor man's polymorphism.)

Having some kind of dynamic execution model is a requirement for OOP.
There are no two ways about it.


Well, I completely disagree.  The core of OOP is encapsulating the data within
an object


Yes, encapsulation is another core concept of OOP.


and having functions associated with the object itself which operate
on that data.


Correct. The functions have to be associated with _the object itself_. 
Case closed.



It's about encapsulation and tying the functions to the type.


Static typing is not an OOP concept.


Polymorphism is a nice bonus, but it's not required.


It is not a bonus. It is part of what OOP is about.



I'd say that any language which is really trying to do OOP should definitely
have polymorphism or it's going to have pretty sucky OOP, but it can still
have OOP.



In case you define OOP differently from all relevant textbooks, based 
only on the encapsulation aspect, yes.


Re: Java Scala

2011-12-20 Thread Isaac Gouy
 From: Russel Winder rus...@russel.org.uk

 Sent: Monday, December 19, 2011 11:29 PM

   If you want to look at even more biased benchmarking look at
   http://shootout.alioth.debian.org/ it is fundamentally designed to 
   show that C is the one true language for writing performance 
   computation.

 Overstated perhaps, baseless, no.  But this is a complex issue.

False and baseless, and a simple issue. 

Your words are clear - ... designed to show 

Your false accusation is about purpose and intention - you should take back 
that accusation.


Re: auto + Top-level Const/Immutable

2011-12-20 Thread Timon Gehr

On 12/20/2011 07:16 PM, dsimcha wrote:

On Tuesday, 20 December 2011 at 17:46:40 UTC, Jonathan M Davis wrote:

Assuming that the assignment can still take place, then making auto
infer non-
const and non-immutable would be an improvement IMHO. However, there
_are_ cases where you'd have to retain const - a prime example being
classes. But value types could have const/immutable stripped from
them, as could arrays using their tail-constness.

- Jonathan M Davis


Right. The objects would only be head de-constified if Michael Fortin's
patch to allow such things got in. A simple way of explaining this would
be auto removes top level const from the type T if T implicitly
converts to the type that would result.


Yes, having to use

auto x = cast()y;

is quite annoying, I'd like this change to happen.


BTW: What will happen with cast()classRef once we have head-mutable 
class references?


Re: Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread Trass3r
Bottom line:  If we want to reduce D's binary size there are two pieces  
of low-hanging fruit:


1.  Make -L--gc-sections the default in dmd.conf on Linux and probably  
other Posix OS's.


2.  Add -ffunction-sections and -fdata-sections or equivalents to DMD  
and compile Phobos with these enabled.  I have no idea how hard this  
would be, but I imagine it would be easy for someone who's already  
familiar with object file formats.


Seems like --gc-sections _can_ have its pitfalls:
http://blog.flameeyes.eu/2009/11/21/garbage-collecting-sections-is-not-for-production

Also I read somewhere that --gc-sections isn't always supported (no  
standard switch or something like that).


I personally see no reason not to use -ffunction-sections and  
-fdata-sections for compiling Phobos though, because a test with gdc didn't  
even result in a much bigger lib file, nor did it take significantly  
longer to compile/link.
The site I linked claims, though, that it does mean serious overhead if  
--gc-sections is then omitted.

So we have to do tests with huge codebases first.


Re: d future or plans for d3

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 11:41 AM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 17:58:30 Timon Gehr wrote:

On 12/20/2011 05:36 PM, Jonathan M Davis wrote:

On Tuesday, December 20, 2011 15:49:34 Timon Gehr wrote:

2. Dynamic binding is a core concept of OOP. A language that does not
support dynamic binding does not support OOP. A program that does not
use dynamic binding is not object oriented. What is to disagree with?


I don't agree with that either. You don't need polymorphism for OOP.
It's
quite restricted without it, but you can still program with objects even
if you're restricted to something like D's structs, so you're still
doing OOP.

- Jonathan M Davis


No. That is glorified procedural style. 'Objects' as in 'OOP' carry data
and _behavior_, structs don't (except if you give them some function
pointers, but that is just implementing poor man's polymorphism.)

Having some kind of dynamic execution model is a requirement for OOP.
There are no two ways about it.


Well, I completely disagree. The core of OOP is encapsulating the data within
an object and having functions associated with the object itself which operate
on that data. It's about encapsulation and tying the functions to the type.
Polymorphism is a nice bonus, but it's not required.


I think the model you're discussing is often called object-based.

Andrei



Re: Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread Martin Nowak

On Tue, 20 Dec 2011 19:14:03 +0100, dsimcha dsim...@yahoo.com wrote:

I started poking around and examining the details of how the GNU linker  
works, to solve some annoying issues with LDC.  In the process I found the  
following things that may be useful low-hanging fruit for reducing  
binary size:


1.  If you have an ar library of object files, by default no dead code  
elimination is apparently done within an object file, or at least not  
nearly as much as one would expect.  Each object file in the ar library  
either gets pulled in or doesn't.


2.  When something is compiled with -lib, DMD writes libraries with one  
object file **per function**, to get around this.  GDC and LDC don't.   
However, if you compile the object files and then manually make an  
archive with the ar command (which is common in a lot of build  
processes, such as gtkD's), this doesn't apply.


3.  The defaults can be overridden if you compile your code with  
-ffunction-sections and -fdata-sections (DMD doesn't support this, GDC  
and LDC do) and link with --gc-sections.  -ffunction-sections and  
-fdata-sections cause each function or piece of static data to be  
written as its own section in the object file, instead of having one  
giant section that's either pulled in or not.  --gc-sections garbage  
collects unused sections, resulting in much smaller binaries especially  
when the sections are fine-grained.



Only newer versions of binutils actually support --gc-sections.
There was also a bug where it cleared the EH sections.

On one project I'm working on, I compiled all the libs I use with GDC  
using -ffunction-sections -fdata-sections.  The stripped binary is 5.6  
MB when I link the app without --gc-sections, or 3.5 MB with  
--gc-sections.  Quite a difference.  The difference would be even larger  
if Phobos were compiled w/ -ffunction-sections and -fdata-sections.   
(See  
https://bitbucket.org/goshawk/gdc/issue/293/ffunction-sections-fdata-sections-for  
).


DMD can't compile libraries with -ffunction-sections or -fdata-sections  
and due to other details of my build process that are too complicated to  
explain here, the results from DMD aren't directly comparable to those  
from GDC.  However, --gc-sections reduces the DMD binaries from 11 MB to  
9 MB.


Bottom line:  If we want to reduce D's binary size there are two pieces  
of low-hanging fruit:


1.  Make -L--gc-sections the default in dmd.conf on Linux and probably  
other Posix OS's.


2.  Add -ffunction-sections and -fdata-sections or equivalents to DMD  
and compile Phobos with these enabled.  I have no idea how hard this  
would be, but I imagine it would be easy for someone who's already  
familiar with object file formats.


Re: Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread Walter Bright

On 12/20/2011 10:14 AM, dsimcha wrote:

1. Make -L--gc-sections the default in dmd.conf on Linux and probably other
Posix OS's.


I tried that years ago, and it created executables that always crashed. I seem 
to recall that it removed some crucial sections :-)


Maybe things are better now and it will work.


2. Add -ffunction-sections and -fdata-sections or equivalents to DMD and compile
Phobos with these enabled. I have no idea how hard this would be, but I imagine
it would be easy for someone who's already familiar with object file formats.


I didn't know about those flags.


Re: Java Scala

2011-12-20 Thread Walter Bright

On 12/20/2011 8:29 AM, J Arrizza wrote:

My point is being comfortable with assembler is likely an effect not a cause. If
you have the motivation and skills to pick up assembler in a semester then you
are probably going to be a better programmer in the end simply because of your
motivation and skills, not necessarily from knowing assembler.


I don't agree, as I had been programming for two years before I learned 
assembler. My high level code made dramatic improvements after that.




In the end, this progression has been extremely beneficial in visualizing how
all that abstract source code translates down into machine code. Memory
allocation, speed and size optimization, etc. etc. make a lot more sense when
you know how the machine behaves at a fundamental level.


Yes, exactly. Also, knowing assembler can get you out of many jams that 
otherwise would stymie you - such as running into a code gen bug.


Code gen bugs are not a thing of the past. I just ran into one with lcc on the 
mac.


[dpl.org] License of the content (need for Wikipedia)

2011-12-20 Thread Alexander Malahov

Hello everyone,

I want to add the D logo to its Wikipedia article, but that requires 
a license [1].


Also, the Russian D Wikipedia page relies heavily on the articles from 
dpl.org (from what I've checked, 95% of the content is just a 
translation). So it would be nice if the content of the whole site 
had some permissive license.


I'm not sure how this works, but I think you have the following 
options:
1. send a Declaration of consent for all enquiries to Wikimedia [2]
2. add a comment to the image in the HTML source
3. add a comment at the beginning of every page's HTML, just under 
the copyright notice


Licenses recommended for images: 
http://en.wikipedia.org/wiki/Wikipedia:File_copyright_tags#For_image_creators


List of all free licenses:
http://en.wikipedia.org/wiki/Wikipedia:File_copyright_tags/Free_licenses

The recommended one is Creative Commons 3.0 Attribution-ShareAlike [3]; 
it's used for Wikipedia's articles.


In case you choose CC but are not sure which one suits best, here 
is a handy helper:

http://creativecommons.org/choose/


-
[1] I think it would be OK to use it under fair use 
(http://en.wikipedia.org/wiki/Wikipedia:Fair_use), but that would 
preclude uploading the logo to Wikimedia Commons, and hence would 
require a separate upload for every language (en, ru, fr, ...)


[2] 
http://en.wikipedia.org/wiki/Wikipedia:Declaration_of_consent_for_all_enquiries


[3] http://creativecommons.org/licenses/by-sa/3.0/



Re: Java Scala

2011-12-20 Thread ddverne

On Tuesday, 20 December 2011 at 19:18:30 UTC, Walter Bright wrote:
I don't agree, as I had been programming for two years before I 
learned assembler. My high level code made dramatic 
improvements after that.


I'm really curious, could you give us some examples of those 
improvements?


Re: d future or plans for d3

2011-12-20 Thread Robert Jacques

On Sun, 18 Dec 2011 15:29:23 -0800, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:

Unions will be conservative. The golden standard is that SafeD can't use
them or anything that forces conservative approaches.


Andrei


Is there a strong rationale for a conservative approach to unions? Why not 
simply set the GC pointer flag bit every time a union is assigned to? Given 
that there are fully precise GC implementations for C, why should D aim for 
something less?
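
For context, the ambiguity that makes unions hard for a precise collector 
looks like this (a toy illustration):

union Slot
{
    void*  asPointer;  // when this member is in use, the word must be scanned as a pointer
    size_t asInteger;  // when this one is in use, the same bits are just a number
}

void main()
{
    Slot s;
    s.asInteger = 0xDEADBEEF;  // not a pointer, but the static type info alone cannot prove that
}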


Re: d future or plans for d3

2011-12-20 Thread Robert Jacques

On Mon, 19 Dec 2011 10:54:22 -0800, Timon Gehr timon.g...@gmx.ch wrote:


On 12/19/2011 07:50 PM, Vladimir Panteleev wrote:

On Monday, 19 December 2011 at 08:28:52 UTC, Adam Wilson wrote:

According to this wikipedia page
http://en.wikipedia.org/wiki/Boehm_garbage_collector it is also the GC
that is used by D, with some minor modifications of course.


I'm not sure if that's true... I believe that they both use the same
basic idea, but AFAIK the D garbage collector is a D port of a C rewrite
which was originally written for something else. The D GC has been
optimized a lot since its first versions.


It would probably be interesting to test the mostly concurrent
generational Boehm GC with D. I'd expect it to perform a lot better than
the simple mark and sweep GC we have in druntime.



The Boehm GC is neither concurrent nor generational.


Re: Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread Marco Leise

On 20.12.2011 19:14, dsimcha dsim...@yahoo.com wrote:

I started poking around and examining the details of how the GNU linker  
works, to solve some annoying issues with LDC.  In the process I found the  
following things that may be useful low-hanging fruit for reducing  
binary size:


1.  If you have an ar library of object files, by default no dead code  
elimination is apparently done within an object file, or at least not  
nearly as much as one would expect.  Each object file in the ar library  
either gets pulled in or doesn't.


2.  When something is compiled with -lib, DMD writes libraries with one  
object file **per function**, to get around this.  GDC and LDC don't.   
However, if you compile the object files and then manually make an  
archive with the ar command (which is common in a lot of build  
processes, such as gtkD's), this doesn't apply.


3.  The defaults can be overridden if you compile your code with  
-ffunction-sections and -fdata-sections (DMD doesn't support this, GDC  
and LDC do) and link with --gc-sections.  -ffunction-sections and  
-fdata-sections cause each function or piece of static data to be  
written as its own section in the object file, instead of having one  
giant section that's either pulled in or not.  --gc-sections garbage  
collects unused sections, resulting in much smaller binaries especially  
when the sections are fine-grained.


On one project I'm working on, I compiled all the libs I use with GDC  
using -ffunction-sections -fdata-sections.  The stripped binary is 5.6  
MB when I link the app without --gc-sections, or 3.5 MB with  
--gc-sections.  Quite a difference.  The difference would be even larger  
if Phobos were compiled w/ -ffunction-sections and -fdata-sections.   
(See  
https://bitbucket.org/goshawk/gdc/issue/293/ffunction-sections-fdata-sections-for  
).


DMD can't compile libraries with -ffunction-sections or -fdata-sections  
and due to other details of my build process that are too complicated to  
explain here, the results from DMD aren't directly comparable to those  
from GDC.  However, --gc-sections reduces the DMD binaries from 11 MB to  
9 MB.


Bottom line:  If we want to reduce D's binary size there are two pieces  
of low-hanging fruit:


1.  Make -L--gc-sections the default in dmd.conf on Linux and probably  
other Posix OS's.


2.  Add -ffunction-sections and -fdata-sections or equivalents to DMD  
and compile Phobos with these enabled.  I have no idea how hard this  
would be, but I imagine it would be easy for someone who's already  
familiar with object file formats.


Nice of you to start some discussion on these flags. I use them myself  
(and a few others that seem to affect code size) in a 'tiny' target inside  
the D Makefile I use.

Currently it looks like this:

	dmd sources,directory,bin -m32 -O -release -noboundscheck -L--strip-all  
-L-O1 -L-znodlopen -L-znorelro -L--no-copy-dt-needed-entries -L--relax  
-L--sort-common -L--gc-sections -L-lrt -L--as-needed
	strip bin -R .comment -R .note.ABI-tag -R .gnu.hash -R .gnu.version -R  
.jcr -R .got


That's not even funny, I know :D


Re: initializedArray

2011-12-20 Thread Dejan Lekic


I would go even further, and give a *function* as an argument - 
function that will be used to initialise values.
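
A rough sketch of that idea for the one-dimensional case (the name fillWith 
is made up for illustration; it is not in Phobos):

import std.array;

auto fillWith(T, F)(size_t size, F fun)
{
    auto arr = uninitializedArray!(T[])(size);
    foreach (i, ref e; arr)
        e = fun(i);   // every element is produced by the supplied function
    return arr;
}

void main()
{
    auto squares = fillWith!int(5, (size_t i) { return cast(int)(i * i); });
    assert(squares == [0, 1, 4, 9, 16]);
}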




Re: Binary Size: function-sections, data-sections, etc.

2011-12-20 Thread Trass3r

On Tuesday, 20 December 2011 at 19:36:19 UTC, Marco Leise wrote:
Nice of you to start some discussion on these flags. I use them 
myself (and a few others that seem to affect code size) in a 
'tiny' target inside the D Makefile I use.

Currently it looks like this:

	dmd sources,directory,bin -m32 -O -release -noboundscheck 
-L--strip-all -L-O1 -L-znodlopen -L-znorelro 
-L--no-copy-dt-needed-entries -L--relax -L--sort-common 
-L--gc-sections -L-lrt -L--as-needed
	strip bin -R .comment -R .note.ABI-tag -R .gnu.hash -R 
.gnu.version -R .jcr -R .got


That's not even funny, I know :D


How far down do you get in terms of size with this?


Re: d future or plans for d3

2011-12-20 Thread Robert Jacques

On Sun, 18 Dec 2011 15:28:16 -0800, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:


On 12/18/11 5:22 PM, Vladimir Panteleev wrote:

On Sunday, 18 December 2011 at 23:13:03 UTC, Andrei Alexandrescu wrote:

On 12/18/11 4:53 PM, Vladimir Panteleev wrote:

On Sunday, 18 December 2011 at 20:32:18 UTC, Andrei Alexandrescu wrote:

That is an interesting opportunity. At any rate, I am 100% convinced
precise GC is the only way to go, and I think I've convinced Walter to
a good extent as well.


Sacrificing something (performance, executable size) for something else
is not an unilateral improvement.


I think we can do a lot toward improving the footprint and performance
of a precise GC while benefitting of its innate advantages.


Still, a more conservative GC will always outperform a more precise one
in scanning speed.


I'm not sure. I seem to recall discussions with pathological cases when
large regions of memory were scanned for no good reason.



Scanning speed is proportional to the size of the live heap, which will always 
be larger for conservative collectors. So while conservative collectors are 
faster per byte, they have to scan more bytes. There's been a bunch of research 
into precise GCs for C, as graduate students love hard problems. There are 
several solutions out there currently; the one I stumbled upon is called 
Magpie. The associated thesis has some pretty in depth performance analyses. 
There are also some follow up papers from later students and more real world 
tests of precise vs conservative vs manual.


Could we use something better than zip for the dmd package?

2011-12-20 Thread Trass3r

The ftp is not the fastest one and 7z reduces the size by 40%.


Re: Top C++

2011-12-20 Thread Peter Alexander

On 20/12/11 3:22 PM, deadalnix wrote:

http://www.johndcook.com/blog/2011/06/14/why-do-c-folks-make-things-so-complicated/


I don't think it's that simple when performance and memory usage are a 
concern.


It's easy to have abstractions that compose well when it comes to 
expressiveness, but it is not possible to abstract away the performance 
concerns of your program. Designing for efficiency requires a holistic 
approach that permeates through your whole program, making top/bottom 
separation essentially impossible.


Re: Could we use something better than zip for the dmd package?

2011-12-20 Thread Timon Gehr

On 12/20/2011 08:57 PM, Trass3r wrote:

The ftp is not the fastest one and 7z reduces the size by 40%.


7z is not supported out of the box on most systems.


Re: d future or plans for d3

2011-12-20 Thread Timon Gehr

On 12/20/2011 08:33 PM, Robert Jacques wrote:

On Mon, 19 Dec 2011 10:54:22 -0800, Timon Gehr timon.g...@gmx.ch wrote:


On 12/19/2011 07:50 PM, Vladimir Panteleev wrote:

On Monday, 19 December 2011 at 08:28:52 UTC, Adam Wilson wrote:

According to this wikipedia page
http://en.wikipedia.org/wiki/Boehm_garbage_collector it is also the GC
that is used by D, with some minor modifications of course.


I'm not sure if that's true... I believe that they both use the same
basic idea, but AFAIK the D garbage collector is a D port of a C rewrite
which was originally written for something else. The D GC has been
optimized a lot since its first versions.


It would probably be interesting to test the mostly concurrent
generational Boehm GC with D. I'd expect it to perform a lot better than
the simple mark and sweep GC we have in druntime.



The Boehm GC is neither concurrent nor generational.


http://www.hpl.hp.com/personal/Hans_Boehm/gc/

'The collector uses a mark-sweep algorithm. It provides incremental and 
generational collection under operating systems which provide the right 
kind of virtual memory support. [...]'


http://www.hpl.hp.com/personal/Hans_Boehm/gc/gcdescr.html

'Generational Collection and Dirty Bits
We basically use the concurrent and generational GC algorithm described 
in Mostly Parallel Garbage Collection, by Boehm, Demers, and Shenker.'




Re: initializedArray

2011-12-20 Thread Philippe Sigaud
On Tue, Dec 20, 2011 at 14:22, Andrej Mitrovic
andrej.mitrov...@gmail.com wrote:
 Ok here's an initial implementation (I've had to put the initializer
 first, otherwise I can't use variadic arguments):

Why? That works for me:

auto initializedArray(T, I...)(I args)
if (allSatisfy!(isIntegral, I[0 .. $-1])
 && isImplicitlyConvertible!(I[$-1], BaseElementType!T) // less constraining than your original test
 && (I[0..$-1].length == rank!T)) // verify the number of arguments
{
    auto arr = uninitializedArray!(T)(args[0 .. $-1]);
    initArr(arr, args[$-1]);
    return arr;
}

void main()
{
auto arr = initializedArray!(float[][])(3, 4, 3.0f);
writeln(arr);
}

If you want to separate the dimensions and the initializer, another
solution would be to return an intermediate callable struct:

auto arr = initializedArray!(float[][])(3,4)(3.0f);



 http://www.ideone.com/2rqFb


Re: initializedArray

2011-12-20 Thread Andrej Mitrovic
On 12/20/11, Dejan Lekic dejan.le...@gmail.com wrote:
 I would go even further, and give a *function* as an argument -
 function that will be used to initialise values.

Well I don't know about functions yet, but I did need to use another
array as an initializer. So the new implementation takes care of that
via lockstep:

http://www.ideone.com/gKFTK


Re: initializedArray

2011-12-20 Thread Andrej Mitrovic
*Also those two templates can be merged, I just have to change the constraints.


Re: initializedArray

2011-12-20 Thread Andrej Mitrovic
On 12/20/11, Philippe Sigaud philippe.sig...@gmail.com wrote:
 Why?

I didn't think of using tuples first, initially I've tried using
size_t[]... but that was a bad idea.

But I still think it should be the first argument, because it's more
consistent with regards to array dimensions. You never have to find
the initializer because you'll always know it's the first argument. At
least to me it seems to be a good idea to put it first, but ymmv.


Re: Java Scala

2011-12-20 Thread Andrei Alexandrescu

On 12/20/11 1:29 AM, Russel Winder wrote:

The system as set out is biased though, systematically so.  This is not
a problem per se since all the micro-benchmarks are about
computationally intensive activity.  Native code versions are therefore
always going to appear better.  But then this is fine; the Shootout is
about computationally intensive comparison.


This is fine, so no bias so far. It's a speed benchmark, so it's 
supposed to measure speed. It says as much. If native code usually comes 
in the top places, the word is "expected", not "biased".



Actually I am surprised
that Java does so well in this comparison due to its start-up time
issues.


I suppose this is because the run time of the tests is long enough to 
bury VM startup time. Alternatively, the benchmark may only measure the 
effective execution time.



Part of the problem I alluded to was people using the numbers without
thinking.  No amount of words on pages affect these people, they take
the numbers as is and make decisions based solely on them.


Well, how is that a bias of the benchmark?


C, C++ and
Fortran win on most of them and so are the only choice of language.


The benchmark measures speed. If one is looking for speed wouldn't the 
choice of language be in keeping with these results? I'd be much more 
suspicious of the quality and/or good will of the benchmark if other 
languages would frequently come to the top.



As I understand it, Isaac runs this basically single-handed, relying on
folk providing versions of the code.  This means there is a highly
restricted resource issue in managing the Shootout.  Hence a definite
set of problems and a restricted set of languages to make management
feasible.  This leads to interesting situation such as D is not part of
the set but Clean and Mozart/Oz are.  But then Isaac is the final
arbiter here, as it is his project, and what he says goes.


If I recall things correctly, Isaac dropped the D code because it was 
32-bit only, which was too much trouble for his setup. Now we have good 
64 bit generation, so it may be a good time to redo D implementations of 
the benchmarks and submit it again to Isaac for inclusion in the shootout.


Quite frankly, however, your remark (which I must agree, for all respect 
I hold for you, is baseless) is a PR faux pas - and unfortunately not 
the only one of our community. I'd find it difficult to go now and say, 
"by the way, Isaac, we're that community that insulted you on a couple 
of occasions. Now that we got to talk again, how about putting D back in 
the shootout?"



I looked at the Java code and the Groovy code a couple of years back (I
haven't re-checked the Java code recently), and it was more or less a
transliteration of the C code.


That is contributed code. In order to demonstrate bias you'd need to 
show that faster code was submitted and refused.



This meant that the programming
languages were not being shown off at their best.  I started a project
with the Groovy community to provide reasonable version of Groovy codes
and was getting some take up.  Groovy was always going to be with Python
and Ruby and nowhere near C, C++, and Fortran, or Java, but the results
being displayed at the time were orders of magnitude slower than Groovy
could be, as shown by the Java results.  The most obvious problem was
that the original Groovy code was written so as to avoid any parallelism
at all.


Who wrote the code? Is the owner of the shootout site responsible for 
those poor results?



Of course Groovy (like Python) would never be used directly for this
sort of computation, a mixed Groovy/Java or Python/C (or Python/C++,
Python/Fortran) would be -- the tight loop being coded in the static
language, the rest in the dynamic language.   Isaac said though that
this was not permitted, that only pure single language versions were
allowed.  Entirely reasonable in one sense, unfair in another: fair
because it is about language performance in the abstract, unfair because
it is comparing languages out of real world use context.


I'd find it a stretch to label that as unfair, for multiple reasons. The 
shootout measures speed of programming languages, not speed of systems 
languages wrapped in shells of other languages. The simpler reason is 
that it's the decision of the site owner to choose the rules. I happen 
to find them reasonable, but I get your point too (particularly if the 
optimized routines are part of the language's standard library).



(It is worth noting that the Python is represented by CPython, and I
suspect PyPy would be a lot faster for these micro-benchmarks.  But only
when PyPy is Python 3 compliant since Python 3 and not Python 2 is the
representative in the Shootout.  A comparison here is between using
Erlang and Erlang HiPE.)

In the event, Isaac took Groovy out of the Shootout, so the Groovy
rewrite effort was disbanded.  I know Isaac says run your own site, but
that rather misses the point, and leads directly to the sort of hassles

Re: initializedArray

2011-12-20 Thread Philippe Sigaud
On Tue, Dec 20, 2011 at 21:27, Andrej Mitrovic
andrej.mitrov...@gmail.com wrote:
 On 12/20/11, Philippe Sigaud philippe.sig...@gmail.com wrote:
 Why?

 I didn't think of using tuples first, initially I've tried using
 size_t[]... but that was a bad idea.

 But I still think it should be the first argument, because it's more
 consistent with regards to array dimensions. You never have to find
 the initializer because you'll always know it's the first argument. At
 least to me it seems to be a good idea to put it first, but ymmv.

I do not have a strong opinion about this. I just interpreted your
initial comment as "I couldn't find a way to put the initializer last,
so I put it in the first position."


Re: Could we use something better than zip for the dmd package?

2011-12-20 Thread Trass3r

7z is not supported out of the box on most systems.


The package is created for devs, not noobs.
btw Ubuntu's fine with 7z.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Marco Leise

On 19.12.2011 20:43, Jacob Carlborg d...@me.com wrote:

It could be useful for a package manager. Theoretically all installed  
packages could share the same dynamic library. But I would guess that the  
packages would depend on different versions of the library and the  
package manager would end up installing a whole bunch of different  
versions of Phobos and druntime.


No! Let's please try to get closer to something that works with package  
managers than the situation on Windows.


On Windows I see few applications that install libraries separately,  
unless they started on Linux or the libraries are established like  
DirectX. In the past DLLs from newly installed programs used to overwrite  
existing DLLs. IIRC the DLLs were then checked for their versions by  
installers, so they were never downgraded, but that still broke some  
applications with library updates that changed the API. Starting with  
Vista, there is the winsxs directory that - as I understand it - keeps a  
copy of every version of every DLL associated with the programs that  
installed/use them.


Package managers are close to my ideal world:
- different API versions (major revisions) can be installed in parallel
- applications link to the API version they were designed for
- bug fixes replace the old DLL for the whole system, all applications  
benefit

- RAM is shared between applications that use the same DLL

I'd think it would be bad to make cuts here. If you cannot even imagine an  
operating system with 1000 little apps like type/cat, cp/copy, sed etc...  
written in D, because they would all link statically against the runtime  
and cause major bloat, then that is turning off another few % of C users  
and purists. You don't drive an off-road car because you go off-road so  
often, but because you could imagine doing so. (Please buy small cars for  
city use.)


Linking against different library versions goes in practice like this:
There is at least one version installed, maybe libphobos2.so.1.057. The 1  
would be a major revision (one where hard deprecations occur); then there  
is a link named libphobos2.so.1 to that file, which all applications using  
API version 1 link against. So the actual file can be updated to  
libphobos2.so.1.058 without recompiles or breakage.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Marco Leise
On 19.12.2011 19:08, Walter Bright newshou...@digitalmars.com wrote:



On 12/16/2011 2:55 PM, Walter Bright wrote:
For example, in std.datetime there's final class Clock. It inherits  
nothing,
and nothing can be derived from it. The comments for it say it is  
merely a

namespace. It should be a struct.


Or perhaps it should be in its own module.


When I first saw it I thought "That's how _Java_ goes about free  
functions: make it a class." :)


Re: initializedArray

2011-12-20 Thread Philippe Sigaud
On Tue, Dec 20, 2011 at 21:20, Andrej Mitrovic
andrej.mitrov...@gmail.com wrote:
 On 12/20/11, Dejan Lekic dejan.le...@gmail.com wrote:
 I would go even further, and give a *function* as an argument -
 function that will be used to initialise values.

 Well I don't know about functions yet, but I did need to use another
 array as an initializer. So the new implementation takes care of that
 via lockstep:

from : http://www.ideone.com/gKFTK :

unittest
{
auto arr2 = initializedArray!(int[][])([[1, 2], [3, 4]], 2, 2);
assert(arr2 == [[1, 2], [3, 4]]);
}

1) What's the difference with using auto arr2 = [[1,2],[3,4]].dup; ?
(I honestly ask, I don't know much about D's assignments)

2) You can get the lengths of [[1,2],[3,4]], so the 2,2 args are
redundant. What happens if you type:

auto arr2 = initializedArray!(int[][])([[1, 2], [3, 4]], 4, 10);

3) I still think you should relax the constraint on the init value's
type. You do not need it to be *exactly* BaseElementType!T. That
stops you from writing

auto arr2 = initializedArray!(float[][])(3,  2,3);

4-ish) No need to attribute the rank/BaseElementType code to me :-)


Re: Could we use something better than zip for the dmd package?

2011-12-20 Thread Timon Gehr

On 12/20/2011 09:53 PM, Trass3r wrote:

7z is not supported out of the box on most systems.


The package is created for devs, not noobs.


The package is created for both devs and noobs.


btw Ubuntu's fine with 7z.


I had to install package p7zip-full.


Re: Program size, linking matter, and static this()

2011-12-20 Thread dsimcha

On Tuesday, 20 December 2011 at 20:51:38 UTC, Marco Leise wrote:

On 19.12.2011 20:43, Jacob Carlborg d...@me.com wrote:
On Windows I see few applications that install libraries 
separately, unless they started on Linux or the libraries are 
established like DirectX. In the past DLLs from newly installed 
programs used to overwrite existing DLLs. IIRC the DLLs were 
then checked for their versions by installers, so they are 
never downgraded, but that still broke some applications with 
library updates that changed the API. Starting with Vista, 
there is the winsxs directory that - as I understand it - 
keeps a copy of every version of every dll associated to the 
programs that installed/use them.


Minor nitpick:  winsxs has been around since XP.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Marco Leise
On 20.12.2011 16:00, Denis Shelomovskij verylonglogin@gmail.com wrote:


The second dmd issue (which was discovered because of the 99.00% of zeros) is  
that _it doesn't use the bss section_.

Let's look at a C++ program built using Microsoft's cl:
---
char arr[1024 * 1024 * 10];
void main() { }
---
It results in a ~10 KiB executable, because `arr` is initialized with zero  
bytes and put in the bss section. If one of its elements is set to non-zero:

---
char arr[1024 * 1024 * 10] = { 1 };
void main() { }
---
The array can't be in .bss any more and the resulting executable size  
increases by ~10 MiB. The following D program results in a ~10 MiB  
executable:

---
ubyte[1024 * 1024 * 10] arr;
void main() { }
---
So, if there really is a reason not to use .bss, it should be clearly  
explained.




If the described issues aren't much more significant than static this(),  
show me where I am wrong, please.


+1. I didn't know about .bss, but static arrays of zeroes (global, struct,  
class) increasing the executable size looked like a problem wanting a  
solution. I hope it is easy to solve for dmd and was simply never  
implemented because it seemed an unimportant issue.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Walter Bright

On 12/20/2011 6:23 AM, Andrei Alexandrescu wrote:

On 12/20/11 9:00 AM, Denis Shelomovskij wrote:

Now dmd has at least a _two orders of magnitude_ file size increase. I
posted that problem four months ago in Building GtkD app on Win32
results in 111 MiB file mostly from zeroes.

[snip]

---
char arr[1024 * 1024 * 10];
void main() { }
---

[snip]

If the described issues aren't much more significant than static this(),
show me where I am wrong, please.


Using BSS is a nice optimization, but not all compilers do it and I know for a
fact MSVC didn't have it for a long time. That's probably why I got used to
thinking poor style when seeing a large statically-sized buffer with static
duration.

I'd say both issues deserve to be looked at, and saying one is more significant
than the other would be difficult.


First off, dmd most definitely puts 0 initialized static data into the BSS 
segment. So what's going on here?


1. char data is not initialized to 0, it is initialized to 0xFF. Non-zero data 
cannot be put in BSS.


2. Static data goes, by default, into thread local storage. BSS data is not 
thread local. To put it in global data, it has to be declared with __gshared.


So,

__gshared byte arr[1024 * 1024 *10];

will go into BSS.

There is pretty much no reason to have such huge arrays in static data. Instead, 
dynamically allocate them.
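
A small illustration of those two points (sizes arbitrary):

ubyte[1024 * 1024] tlsArr;            // thread-local by default, so not in BSS
__gshared ubyte[1024 * 1024] bssArr;  // zero-initialized and global: goes into BSS
__gshared char[1024 * 1024] charArr;  // char.init is 0xFF, so this must live in the data segment

void main() { }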




Re: std.container and classes

2011-12-20 Thread Froglegs
I don't really think ref-counted struct vs. class is a fair comparison, 
because in reality most containers don't need ref counting. I 
can't think of one instance in C++ where I stuck a container 
directly in a shared_ptr or anything similar.


Also, as far as I can tell, making it a class would bloat it with 
unnecessary data (vtable), and since it is common to have many, 
many instances of these containers, that doesn't sound like such 
a great thing.





Re: Program size, linking matter, and static this()

2011-12-20 Thread Walter Bright

On 12/20/2011 1:07 PM, Marco Leise wrote:

+1. I didn't know about .bss, but static arrays of zeroes (global, struct,
class) increasing the executable size looked like a problem wanting a solution.
I hope it is easy to solve for dmd and is just an unimportant issue, so was
never implemented.


I added a faq entry for this.


Get a Reference to an Object's Monitor

2011-12-20 Thread Andrew Wiley
Is there any way to get a reference to an object's monitor? This would
be very useful because a Condition has to be tied to a Mutex, and
since objects already have a reference to a Mutex built in, it doesn't
make much sense to create another (and add another reference to the
class) when the end effect is the same.

Example:
---
class Example {
private:
    Mutex _lock;
    Condition _condition;
public:
    this() {
        _lock = new Mutex();
        _condition = new Condition(_lock);
    }
    void methodA() shared {
        synchronized(_lock) {
            // do some stuff
            while(someTest)
                _condition.wait();
        }
    }
    void methodB() shared {
        synchronized(_lock) {
            //do some stuff
            _condition.notifyAll();
        }
    }
}
---

If I could get a reference to Example's monitor, this example becomes:

---
synchronized class Example {
private:
    Condition _condition;
public:
    this() {
        _condition = new Condition(this.__monitor);
    }
    void methodA() {
        // do some stuff
        while(someTest)
            _condition.wait();
    }
    void methodB() {
        //do some stuff
        _condition.notifyAll();
    }
}
---

Which is much more fool-proof to write.


Re: Program size, linking matter, and static this()

2011-12-20 Thread Vladimir Panteleev
On Tuesday, 20 December 2011 at 14:01:04 UTC, Denis Shelomovskij 
wrote:

Detailed description:
GtkD is built using single (gtk-one-obj.lib) or separate (one 
per source file) object files (gtk-sep-obj.lib).


Then main.d, which imports gtk.Main, is built using those 
libraries.


Then the zeroCount utility is built and run over the resulting files:
--
Now let's calculate the zero byte counts:
--
 Zero bytes |     % |  Non-zero | Total bytes | File
    3628311 | 21.56 |  13202153 |    16830464 | gtk-one-obj.lib
    1953124 | 15.98 |  10272924 |    12226048 | gtk-sep-obj.lib
  127968798 | 99.00 |   1298430 |   129267228 | main-one-obj.exe
     743821 | 37.51 |   1239183 |     1983004 | main-sep-obj.exe
Done.

So we have to use the very slow per-file build to produce a reasonable 
(not 100 MiB) executable.
No matter which *.exe is launched, its process allocates ~20 MiB 
of RAM (loaded Gtk DLLs).


I believe this is bug 2254:

http://d.puremagic.com/issues/show_bug.cgi?id=2254

The cause is the way DMD builds libraries. The old way of 
building libraries (using a librarian) does not create libraries 
that exhibit this problem when linked with an executable.

