Re: Tango for D2: All user modules ported

2012-02-09 Thread HeiHon

On Saturday, 4 February 2012 at 10:56:14 UTC, bobef wrote:

Great news. ...

Same here!

This is the number one thing I waited for to be ported to D2. I 
never considered moving to D2 without Tango. Big thanks to 
SiegeLord and all the other contributors.


Just one example of why I like Tango:

hello_tango.d:
module hello_tango;
// dmd 2.057 + SiegeLord-Tango-D2-4c9566e 2012-01-24
import tango.io.Stdout;

int main(string[] args)
{
   foreach(i, arg; args)
   {
   Stdout.formatln("  arg {,3}: '{}'", i, arg);
   }
   return 0;
}

rdmd --build-only -release -O hello_tango.d

hello_tango a b ä ö
 arg   0: 'hello_tango'
 arg   1: 'a'
 arg   2: 'b'
 arg   3: 'ä'
 arg   4: 'ö'


hello_phobos.d:
module hello_phobos;

// dmd 2.057
import std.stdio;

int main(string[] args)
{
   foreach(i, arg; args)
   {
   stdout.writefln("  arg %3d: '%s'", i, arg);
   }
   return 0;
}

rdmd --build-only -release -O hello_phobos.d

hello_phobos a b ä ö
 arg   0: 'hello_phobos'
 arg   1: 'a'
 arg   2: 'b'
 arg   3: '+ñ'
 arg   4: '+Â'


E:\source\D\d2>dir he*
09.02.2012  15:18   204 hello_phobos.d
09.02.2012  15:18   992.284 hello_phobos.exe
09.02.2012  15:18   250 hello_tango.d
09.02.2012  15:18   180.764 hello_tango.exe

The hello_tango.exe is much smaller and it even works with 
strange German umlauts :-)


BTW:
Tango doesn't build (bob) with dmd 2.058 beta because of:
...
dmd -c -I. -release -oftango-net-device-Berkeley-release.obj 
./tango/net/device/Berkeley.d
object.Exception@build\src\bob.d(632): Process exited normally 
with return code 1
.\tango\net\device\Berkeley.d(1921): Error: cannot implicitly 
convert expression (new char[][](cast(uint)i)) of type char[][] 
to const(char)[][]




Re: I wrote A starting guide for Newbies

2012-02-09 Thread deadalnix

On 08/02/2012 16:32, MattCodr wrote:

Hi guys,

I decided to write a starting guide for newbies and newcomers to the D
Language. It's a really simple and basic introduction for those who may
be a little lost, like me when I started.

It's a PDF file and can be read by following the link below.

Link: http://goo.gl/GkAYO

Any problems or mistakes please let me know.

I really hope you enjoy.


I wish I had this when I began. My first use of D involved compiling 
ldc, then gdc (the only one that worked at the time on my platform) and 
patching Phobos by myself (a reminder: it was the first time I used the 
language, not to mention it was pretty harsh, and I think most people 
would have quit at that point). That was a few years ago, and things 
have gotten better since.


But I'm convinced that D isn't accessible enough for beginners. So this 
document is very welcome!


Re: I wrote A starting guide for Newbies

2012-02-09 Thread James Miller
 I wish I had this when I began. My first use of D involved compiling ldc,
 then gdc (the only one that worked at the time on my platform) and patching
 Phobos by myself (a reminder: it was the first time I used the language, not
 to mention it was pretty harsh, and I think most people would have quit at
 that point). That was a few years ago, and things have gotten better since.

 But I'm convinced that D isn't accessible enough for beginners. So this
 document is very welcome!

I agree, D is not that accessible to beginners, partially due to the
rapidly changing nature of the language and technology. I found it
difficult to start when every library I encountered was broken, either
because it hadn't been ported to D2, or used libraries that were
broken for some reason.

Hopefully once the language and compiler specs settle down a lot more,
we can start work on making things easier overall. I for one would
love to see a clang-style autocompleter for D, compiler packages for
the major OSes/distributions that work with 99% of the D code out
there, some sort of simple project-finder so we can find libraries
easily, etc, etc.

Also, more documentation; not the kind of documentation we have now,
which is really good but requires a certain amount of knowledge to
start with, but closer to "Learn how to program, with D". Personally I
think D would be a brilliant language to teach with: it has decent OO,
is compile-time checked, has pointers (but you often don't need them),
and templates that can be used like Java/C# generics but also allow for
more complex constructs. You can start out with "this is a variable",
go through "these are pointers" and end with "this is
meta-programming".


Re: GoingNative 2012 to be livestreamed tomorrow - part 2

2012-02-09 Thread bearophile
Some more comments about the conference.

--

About "Variadic Templates are Funadic", Andrei Alexandrescu's fun talk:

I had to watch it at only 1-1.1x speed to understand the language.

I can't see the laser spot in the video :-(

Thank you to Walter for designing D variadic templates in a simpler way. C++11 
variadic templates look overly complex and over-engineered (example: the 
lockstep expansion seems a bit crazy).

Slide 21: I didn't know that "default" is OK as the first switch case too :-)

Even the questions & answers part of this talk was interesting enough.

--

About Bjarne Stroustrup and Andrew Sutton's "A Concept Design for C++":

Regarding this code in Slide 12:

template<Number Num> Num gsqrt(Num);
gsqrt(2); // fine
gsqrt("Silly!"); // error: char* is not a Number


In D template constraints have two (or more) different usages:
1) To just remove a template from the pool of the usable ones;
2) In other situations only one template is present, and its constraints are a 
way to give it some static typing. In this case I'd like better error messages.
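A minimal sketch of usage 1), where the constraint merely removes a template from the overload pool (the names here are illustrative, not from the talk):

```d
import std.stdio;
import std.traits : isFloatingPoint, isIntegral;

// Each constraint removes its template from consideration when it
// fails, letting the other overload match instead.
void describe(T)(T x) if (isIntegral!T)      { writeln(x, " is integral"); }
void describe(T)(T x) if (isFloatingPoint!T) { writeln(x, " is floating point"); }

void main()
{
    describe(1);   // picks the integral overload
    describe(1.0); // picks the floating point overload
    // describe("x") would match neither constraint, producing the kind of
    // generic error message that usage 2) wants to improve on
}
```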


Once this patch is applied:
https://github.com/D-Programming-Language/dmd/pull/692

you are able to write something like this, which isn't exceptionally nice 
looking but is useful (it's going to make Phobos code a bit more hairy, but 
the user is going to see better error messages):


template IsNumberWithError(T, string file, int line) {
    enum bool IsNumberWithError = is( ...
    static if (!IsNumberWithError)
        __ctfeWriteln(file, "(", line, "): '", typeid(T), "' is not a number.");
}

double gsqrt(T)(T x) if (IsNumberWithError!(T, __FILE__, __LINE__)) { /*...*/ }


An alternative is to give an else to the template constraints, but the error 
message is at the bottom of the function, making it not easy to find, so I 
don't like this syntax:


int spam(T)(T x) if (IsFoo!T || IsBar!T) {
    // ...
} else {
    __ctfeWriteln("'", typeid(T), "' is not Foo or Bar.");
}


If Concepts are "a type system for templates", then "Current template code 
remains valid. Constrained and traditional templates must interoperate" means 
gradual typing, or optional typing (see recent Racket Scheme).


Slide 37, Concepts for the STL (N3351): how many of them are already in 
Phobos? Are the missing ones needed/useful for Phobos?


Slides 39 and 40 are quite nice:

template<InputIterator Iter, Predicate<ValueType<Iter>> Pred>
bool all_of(Iter first, Iter last, Pred pred);

std::find_if():
template<InputIterator Iter, Predicate<ValueType<Iter>> Pred>
Iter find_if(Iter first, Iter last, Pred pred);


Template aliases seem nice, not to save a bit of code by avoiding the 
definition of another template, but to allow deducibility, as explained here: 
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1449.pdf

--

In the "Panel: Ask Us Anything!", why didn't Andrei comment on D's experience 
with compile-time function execution? :-)


Later I'll discuss some bits from the talk "Clang - Defending C++ from Murphy's 
Million Monkeys" in the main D newsgroup, because it contains some things more 
directly relevant to D and its compiler.

Bye,
bearophile


Re: GoingNative 2012 to be livestreamed tomorrow - part 2

2012-02-09 Thread Jacob Carlborg

On 2012-02-10 02:47, bearophile wrote:

Some more comments about the conference.

[snip]

Once this patch is applied:
https://github.com/D-Programming-Language/dmd/pull/692

you are able to write something like this, that isn't exceptionally nice 
looking, but it's useful (it's going to make Phobos code a bit more hairy, but 
the user is going to see some better error messages):


template IsNumberWithError(T, string file, int line) {
    enum bool IsNumberWithError = is( ...
    static if (!IsNumberWithError)
        __ctfeWriteln(file, "(", line, "): '", typeid(T), "' is not a number.");
}

double gsqrt(T)(T x) if (IsNumberWithError!(T, __FILE__, __LINE__)) { /*...*/ }


Wouldn't this be possible:


template IsNumberWithError(T, string file = __FILE__, int line = __LINE__) {
    enum bool IsNumberWithError = is( ...
    static if (!IsNumberWithError)
        __ctfeWriteln(file, "(", line, "): '", typeid(T), "' is not a number.");
}

double gsqrt(T)(T x) if (IsNumberWithError!(T)) { /*...*/ }

__FILE__ and __LINE__ would be picked up from the call site and 
not from the declaration point of IsNumberWithError. We already have this in 
some cases.
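A small sketch of the behavior Jacob is relying on (the template name is made up for illustration): default template value arguments like __FILE__/__LINE__ are evaluated where the template is instantiated, not where it is declared.

```d
import std.conv : to;
import std.stdio;

// Where!() reports the location of the instantiation site, because the
// __FILE__/__LINE__ defaults are filled in at the point of use.
template Where(string file = __FILE__, int line = __LINE__)
{
    enum Where = file ~ ":" ~ to!string(line);
}

void main()
{
    writeln(Where!()); // prints this file and this line, not the template's
}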


--
/Jacob Carlborg


Re: Possible to pass a member function to spawn?

2012-02-09 Thread saxo123
Hello,

I'm the guy that made the initial post in this thread. Well, some 100 or so 
replies ago :-). I must admit that I cannot always follow the discussion, as I'm 
a real D newbie. As I understand it, one issue discussed is that the actor class 
is declared shared (see below the solution I meanwhile came up with). The trick 
I'm doing is the MyActor.start() thing: the created instance of MyActor is not 
returned to the outside world, only the tid of the spawned thread. This way 
nobody gets a reference to an actor object they could play with from within a 
different thread.

Problem is that this also compiles:

MyActor myActor = new MyActor();
auto tid = myActor.start();
myActor.run(i); // call from the parent thread!

I believe I will just write down in the docs that this approach is strongly 
discouraged! Another problem is that

auto tid = MyActor.start();

doesn't compile as it should: "Error: undefined identifier module MyActor.start"

This is a bit strange, since this should be legal; e.g. p. 176 in the book by 
Alexandrescu provides an analogous example. The same happens with 
Actor.SHUTDOWN in tid.send(thisTid, Actor.SHUTDOWN).

Regards, Oliver


import std.concurrency, std.stdio;

int main()
{

auto tid = MyActor.start();

tid.send(123);
tid.send(456);
tid.send(1.0f);

tid.send("hello");

tid.send(thisTid, Actor.SHUTDOWN);

receive(
    (int x) { writeln("spawned actor has shut down with return code: ", x); }
);

return 0;
}

- Actor.d 

import std.concurrency, std.stdio;

shared abstract class Actor {

public static string SHUTDOWN = "shutdown";

protected bool cont = true;

Tid start() {
return spawn(&dispatch, this);
}

void run() {
while(cont) {
act();
}
}

abstract void act();

protected bool checkShutdown(Tid sender, string msg) {
if(msg == SHUTDOWN) {
writeln("shutting down ...");
cont = false;
sender.send(0);
return true;
}
return false;
}

}

void dispatch(Actor actor)
{
actor.run();
}

- End of Actor.d 


- MyActor.d 

import std.concurrency, std.stdio, std.variant;

shared class MyActor : Actor {

void run(int i) {
writeln(i);
}

void act() 
{
receive(
(int msg) { run(msg); },
(Tid sender, string msg) { checkShutdown(sender, msg); },
(Variant v) { writeln("huh?"); }
);
}

}


Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Jacob Carlborg

On 2012-02-08 15:51, Adam D. Ruppe wrote:

On Wednesday, 8 February 2012 at 07:37:23 UTC, Jacob Carlborg wrote:

Maybe Adam's code can be used as a base of implementing a library like
Rack in D.

http://rack.rubyforge.org/


That looks like it does the same job as cgi.d.

cgi.d actually offers a uniform interface across various
web servers and integration methods.

If you always talk through the Cgi class, and use the GenericMain
mixin, you can run the same program with:

1) cgi, tested on Apache and IIS (including implementations for methods
that don't work on one or the other natively)

2) fast cgi (using the C library)

3) HTTP itself (something I expanded this last weekend and still want
to make better)




Sometimes I think I should rename it, to reflect this, but meh,
misc-stuff-including blah blah shows how good I am at names!


It seems Rack supports additional interfaces next to CGI. But I think we 
could take this one step further. I'm not entirely sure what APIs Rack 
provides, but in Rails they have a couple of methods to make the 
environment variables uniform.


For example, ENV["REQUEST_URI"] returns different values on different 
servers. Rails provides a method, request_uri, on the request object 
that will return the same value on all servers.
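For cgi.d, such a normalization could look roughly like this (the function name and the fallback rules below are hypothetical, not cgi.d's actual API; sketched in Python for brevity):

```python
def request_uri(env):
    """Return a uniform request URI regardless of which server set the
    CGI environment. Some servers set REQUEST_URI directly; others (e.g.
    IIS) only provide SCRIPT_NAME and QUERY_STRING, so we rebuild it."""
    if "REQUEST_URI" in env:
        return env["REQUEST_URI"]
    uri = env.get("SCRIPT_NAME", "/")
    qs = env.get("QUERY_STRING", "")
    return uri + "?" + qs if qs else uri

# Uniform result from either server's environment:
print(request_uri({"REQUEST_URI": "/page?x=1"}))
print(request_uri({"SCRIPT_NAME": "/page", "QUERY_STRING": "x=1"}))
```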


I don't know if CGI already has support for something similar.

--
/Jacob Carlborg


Re: Possible to pass a member function to spawn?

2012-02-09 Thread Oliver Plow
 MyActor myActor = new MyActor();
 auto tid = myActor.start();
 myActor.run(i);   // call from the parent thread!

I was slow ... I have now made only Actor.start() public; all other methods in 
the actor classes are protected or private, and then we are fine.
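In sketch form (using the class names from this thread; the method bodies are elided, so this is an outline, not the full code):

```d
import std.concurrency;

shared abstract class Actor {
    // The only public entry point: outside code only ever gets a Tid back...
    public Tid start() {
        return spawn(&dispatch, this);
    }

    // ...while the message-handling methods are protected, so the parent
    // thread cannot call into the actor object directly anymore.
    protected void run() { /* message loop calling act() */ }
    protected abstract void act();
}
```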

-- Oliver

 Original Message 
 Date: Thu, 09 Feb 2012 09:29:54 +0100
 From: saxo...@gmx.de
 To: "digitalmars.D" <digitalmars-d@puremagic.com>
 Subject: Re: Possible to pass a member function to spawn?

 Hello,
 
 I'm the guy that made the initial post in this thread. Well, some 100 or
 so replies ago :-). [snip]



Re: Mac OS X 10.5 support

2012-02-09 Thread Jacob Carlborg

On 2012-02-09 04:52, Walter Bright wrote:

Lately, dmd seems to have broken support for OS X 10.5. Supporting that
system is problematic for us, since we don't have 10.5 systems available
for dev/test.

Currently, the build/test farm is OS X 10.7.


That's too bad. But the same must apply to 10.6 as well, since the 
build/test farm runs Mac OS X 10.7. I mean, it can cause problems as well, 
since we don't have a build farm that runs 10.6.



I don't think this is like the Windows issue. Upgrading Windows is (for
me, anyway) a full day job. Upgrading OS X is inexpensive and relatively
painless, the least painless of any system newer than DOS that I've
experienced.

Hence, is it worthwhile to continue support for 10.5? Can we officially
say that only 10.6+ is supported? Is there a significant 10.5 community
that eschews OS upgrades but still expects new apps?



--
/Jacob Carlborg


Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Nicolae Mihalache
Hello,

I'm a complete newbie in D and trying to compare it with Java. I
implemented a simple test for measuring the throughput of message
passing between threads. I see that Java can pass about 4 million
messages/sec while D only achieves 1 million/sec. I thought that D should
be faster.

The messages are simply integers (which are converted to Integer in Java).

The two programs are attached. I tried compiling the D version with
both dmd and gdc and various optimization flags.

mache
import std.concurrency, std.stdio;
import std.datetime;

const n=100_000_000;
void main() {
auto tid=spawn(&receiver);
setMaxMailboxSize(tid, 1000, OnCrowding.block);
tid.send(thisTid);
foreach(i; 0..n) {
   tid.send(i); 
}
writeln("finished sending");
auto s=receiveOnly!(string)();
writeln("received ", s);
}

void receiver() {
   auto mainTid=receiveOnly!(Tid)();
   StopWatch sw;
   sw.start();  
   long s;
   for(auto i=0;i<n;i++) {
  auto msg = receiveOnly!(int)();
  s+=msg;
  //writeln("received ", msg);
   }
   sw.stop();
   writeln("finished receiving");
   writefln("received %d messages in %d msec sum=%d speed=%d msg/sec", n, sw.peek().msecs, s, n*1000L/sw.peek().msecs);
   mainTid.send("finished");
}
package inutil.local;

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ThroughputMpTest {
static long n=10000000;

static BlockingQueue<Integer> queue=new ArrayBlockingQueue<Integer>(1000);
static class Consumer implements Runnable {
@Override
public void run() {
long s=0;
try { 
long t0=System.currentTimeMillis();
for(int i=0;i<n;i++) {
int x=queue.take();
s+=x;
}
long t1=System.currentTimeMillis();
double d=t1-t0;
System.out.println(n+" messages received in "+d+" ms, sum="+s+" speed: "+1000*d/n+" microsec/message, "+1000*n/d+" messages/sec");
} catch (Exception e){
e.printStackTrace();
}
}
}

static class Producer implements Runnable {
@Override
public void run() {
try {
for(int i=0;i<n;i++) {
queue.put(i);
}
} catch (Exception e) {
e.printStackTrace();
}
}

}

public static void main(String[] args) throws InterruptedException {
Thread t=new Thread(new Consumer());
t.start();
(new Thread(new Producer())).start();
t.join();
}
}


Re: Mac OS X 10.5 support

2012-02-09 Thread Walter Bright

On 2/9/2012 12:55 AM, Jacob Carlborg wrote:

That's too bad. But the same must apply to 10.6 as well, since the build/test
farm runs Mac OS X 10.7. I mean it can cause problems as well since we don't
have a build farm that runs 10.6.


Yes, except that no problems have arisen (so far!) with 10.6.


Re: std.regex performance

2012-02-09 Thread Dmitry Olshansky

On 09.02.2012 3:35, Jesse Phillips wrote:

On Wednesday, 8 February 2012 at 22:21:35 UTC, David Nadlinger wrote:

On 2/8/12 10:44 PM, Jesse Phillips wrote:

foreach(w; std.string.split(readText(name)))
if(!match(w, regex(r"\d")).empty)
{}
}


Could it be that you are rebuilding the regex engine on every
iteration here?

David


That is the case. The older regex apparently cached the last regex. I will
be more careful in the future.


I suggest filing this as an enhancement request, as the new std.regex 
should have been backwards compatible.


--
Dmitry Olshansky


Re: Mac OS X 10.5 support

2012-02-09 Thread Brad Roberts
On 2/9/2012 12:55 AM, Jacob Carlborg wrote:
 
 That's too bad. But the same must apply to 10.6 as well, since the build/test 
 farm runs Mac OS X 10.7. I mean it can
 cause problems as well since we don't have a build farm that runs 10.6.
 

If anyone wants to give me a shell account on an OS X 10.6 box (or 10.5 too, for 
that matter), I'll be happy to set up and
maintain the auto-tester on it. Feel free to shoot me an email.

Later,
Brad


Re: Mac OS X 10.5 support

2012-02-09 Thread Sönke Ludwig

On 09.02.2012 04:52, Walter Bright wrote:

Lately, dmd seems to have broken support for OS X 10.5. Supporting that
system is problematic for us, since we don't have 10.5 systems available
for dev/test.

Currently, the build/test farm is OS X 10.7.

I don't think this is like the Windows issue. Upgrading Windows is (for
me, anyway) a full day job. Upgrading OS X is inexpensive and relatively
painless, the least painless of any system newer than DOS that I've
experienced.

Hence, is it worthwhile to continue support for 10.5? Can we officially
say that only 10.6+ is supported? Is there a significant 10.5 community
that eschews OS upgrades but still expects new apps?


I have a project that we actually plan to use in production in the 
company I work for. They still require 10.5 support for their 
products, so removing that support would create a very bad situation here.


But it should be possible to get a 10.5 retail DVD and install it inside 
a VM. I actually planned to do exactly this to support 10.5 nightly builds 
for my own D stuff.


If support is dropped anyway, are the issues only build-related, 
so that e.g. gdc would still continue to work on 10.5 without further effort?


Re: How to save RAM in D programs (on zero initialized buffers)

2012-02-09 Thread Marco Leise

On 09.02.2012, 03:56, Manfred Nowak <svv1...@hotmail.com> wrote:


Marco Leise wrote:


That sounds a bit vague.


Andrei has written a paper on allocation:
  http://erdani.com/publications/cuj-2005-12.pdf

-manfred


Oh OK, that's farther than I ever dug into memory management. My  
current problem with the compression utility has been solved at a small  
scale with a manual memory management 'ZeroInitializedAndAlignedBuffer'  
struct that uses calloc, right in the spirit of the original source code.  
The startup time was reduced by ~0.7 seconds and is now relatively close  
to the original as well; at least there is no longer that perceived delay.  
Techniques like calloc make it possible to use the algorithm in other  
places like batch processing, compressed FUSE file systems on Linux, or  
libraries, where new instances may be spawned in quick succession.


All technical details aside, I would wish for some solution to "I spawn a  
new instance of some huge complex data structure, but I probably won't need  
all of it, so use calloc". I don't know what the situation with other D  
programs is, but I would be interested in some larger-scale experiments with  
calloc and app startup time. It may be worse in some cases, but what if it  
improved the general situation - without writing a memory management  
library? I don't know the GC internals well enough to tell if calloc is  
already used somewhere (in place of malloc and memset).


Re: std.uuid is ready for review

2012-02-09 Thread Johannes Pfau
Thanks for your feedback! Comments below:

On Wed, 08 Feb 2012 23:40:14 -0600,
Robert Jacques <sandf...@jhu.edu> wrote:

 
 Comments in order of generation, not importance:
 
 "This is a port of boost.uuid from the Boost project with some minor
 additions and API changes for a more D-like API." shouldn't be the
 first line in the docs.
 
 "A UUID, or Universally unique identifier": one of these (or both)
 should link to the Wikipedia article.
done

 
 "Variant" is the name of an existing Phobos library type and "version" is
 a D keyword. Now, thanks to Wikipedia, I understand that variants and
 versions are a core part of UUIDs, but the lack of a documentation
 explanation sent me for a loop. These terms should be explained
 better.
done

 Suggested rewrite:
 "This library implements a UUID as a struct, allowing a UUID to be
 used in the most efficient ways, including using memcpy. A drawback
 is that a struct cannot have a default constructor, and thus simply
 declaring a UUID will not initialize it to a value generated by one
 of the defined mechanisms. Use the struct's constructors or the UUID
 generator functions to get an initialized UUID." -> "For efficiency,
 UUID is implemented as a struct. UUIDs therefore default to nil. Use
 UUID's constructors or generator static members to get an initialized
 UUID."
This was a leftover from boost, fixed.

 Also, this snippet needs to be part of the introductory example.
   UUID id;
   assert(id.isNil);
 Oh, and the example should be fully commented. i.e.
   assert(id.isNil); // UUIDs default to nil
done

 And shouldn't use writelns. i.e.
   assert(entry.uuidVersion == UUID.Version.nameBasedSha1);
ok. I had to rewrite the example, but the writelns are gone now

 
 All the generators have the function name [name]UUID. Instead, make
 these functions static member functions inside UUID and remove the
 UUID from the name, i.e. nilUUID -> UUID.nil, randomUUID ->
 UUID.random(), etc. I'm not sure if you should also do this for
 dnsNamespace, etc. (i.e. dnsNamespace -> UUID.dns) or not.

UUID.nil makes sense and looks better. I don't have an opinion about
the other functions, but struct as namespace vs free functions
has always led to debates here, so I'm not sure if I should change it.
I need some more feedback here first. (Also imho randomUUID() looks
better than UUID.random(), but maybe that's just me)

 
 UUID.nil should be an alias/enum to UUID.init, not an immutable.
alias UUID.init nilUUID;
doesn't work, it would work if nil was a member of UUID, but see above
for comments on that.
Made it an enum for now.

 
 There's an additional toString signature which should be supported.
 See std.format.
You're talking about this, right?
const void toString(scope void delegate(const(char)[]) sink);

Nice, when did the writeTo proposal get merged? I must have totally
missed that. Actually, writeTo is a way better choice here, as it can
avoid memory allocation.
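A sketch of why the sink signature avoids allocation (UUIDLike is a made-up stand-in here, not the proposed std.uuid type):

```d
struct UUIDLike
{
    ubyte[16] data;

    // Sink-based toString: each 2-character chunk is written straight
    // into the caller-supplied sink from a stack buffer, so no
    // intermediate string is ever allocated.
    void toString(scope void delegate(const(char)[]) sink) const
    {
        enum digits = "0123456789abcdef";
        foreach (b; data)
        {
            char[2] hex;
            hex[0] = digits[b >> 4];
            hex[1] = digits[b & 0xF];
            sink(hex[]);
        }
    }
}
```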

but it seems to!string doesn't support the new signature?

BTW: How should sink interact with pure/safe versions? Can't we just
change that declaration to?

const @safe [pure] void toString(scope @safe pure void
delegate(const(char)[]) sink);

 
 uuidVersion() -> ver()?
I'm not sure; uuidVersion is indeed quite long, but it is more
descriptive than ver.



Re: Mac OS X 10.5 support

2012-02-09 Thread Don Clugston

On 09/02/12 05:46, Brad Anderson wrote:

On Wed, Feb 8, 2012 at 8:52 PM, Walter Bright
newshou...@digitalmars.com mailto:newshou...@digitalmars.com wrote:

Lately, dmd seems to have broken support for OS X 10.5. Supporting
that system is problematic for us, since we don't have 10.5 systems
available for dev/test.

Currently, the build/test farm is OS X 10.7.

I don't think this is like the Windows issue. Upgrading Windows is
(for me, anyway) a full day job. Upgrading OS X is inexpensive and
relatively painless, the least painless of any system newer than DOS
that I've experienced.

Hence, is it worthwhile to continue support for 10.5? Can we
officially say that only 10.6+ is supported? Is there a significant
10.5 community that eschews OS upgrades but still expects new apps?


There appear to be fewer 10.5 users than 10.4 users, oddly:
http://update.omnigroup.com/


Note that 10.5 and 10.4 support PowerPC as well as x86. They have 4% 
PowerPC, down from about 7% at the start of 2011.
That accounts for about 25% of the combined decline of 10.4 and 10.5, 
and it's clearly caused by old machines being replaced.
They must date from 2006 or earlier. Surely a large fraction of the 
remaining 10.4 & 10.5 systems are likewise near end of life.


So it looks like:
48% 10.7
34% 10.6
15% 10.5 + 10.4
 4% PowerPC, never supported by DMD.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Marco Leise

On 09.02.2012, 10:06, Nicolae Mihalache <xproma...@gmail.com> wrote:


Hello,

I'm a complete newbie in D and trying to compare with Java. I
implemented  a simple test for measuring the throughput in message
passing between threads. I see that Java can pass about 4mil
messages/sec while D only achieves 1mil/sec. I thought that D should
be faster.

The messages are simply integers (which are converted to Integer in  
Java).


The two programs are attached. I tried compiling the D version with
both dmd and gdc and various optimization flags.

mache


I cannot give you an explanation, just want to say that a message in  
std.concurrency is also wrapped (in a 'Variant') plus a type field  
(standard, priority, linkDead). So you effectively have no optimization  
for int, but the same situation as in Java.
The second thing I notice is that std.concurrency uses a doubly linked  
list implementation, while you use an array in the Java version, which  
results in no additional node allocations.


Re: How to save RAM in D programs (on zero initialized buffers)

2012-02-09 Thread Kagamin
I guess, calloc will reuse blocks too, so if you run the 
compressing function twice, it will reuse the memory block used 
and freed previously and zero it out honestly.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Alex_Dovhal
Nicolae Mihalache <xproma...@gmail.com> wrote:
 Hello,

 I'm a complete newbie in D and trying to compare with Java. I
 implemented  a simple test for measuring the throughput in message
 passing between threads. I see that Java can pass about 4mil
 messages/sec while D only achieves 1mil/sec. I thought that D should
 be faster.

 The messages are simply integers (which are converted to Integer in Java).

 The two programs are attached. I tried compiling the D version with
 both dmd and gdc and various optimization flags.

 mache

Hi, I downloaded your two programs. I didn't run them, but noticed that in 
'mp.d'
you have n set to 100_000_000, while in 'ThroughputMpTest.java' n is set to
10_000_000, so accounting for this the D code is 10/4 = 2.5 times faster :) 




Re: std.xml and Adam D Ruppe's dom module

2012-02-09 Thread Johannes Pfau
On Wed, 08 Feb 2012 20:49:48 -0600,
Robert Jacques <sandf...@jhu.edu> wrote:

 On Wed, 08 Feb 2012 02:12:57 -0600, Johannes Pfau
 <nos...@example.com> wrote:
  On Tue, 07 Feb 2012 20:44:08 -0500,
  Jonathan M Davis <jmdavisp...@gmx.com> wrote:
  On Tuesday, February 07, 2012 00:56:40 Adam D. Ruppe wrote:
   On Monday, 6 February 2012 at 23:47:08 UTC, Jonathan M Davis
 [snip]
 
  Using ranges of dchar directly can be horribly inefficient in some
  cases, you'll need at least some kind off buffered dchar range. Some
  std.json replacement code tried to use only dchar ranges and had to
  reassemble strings character by character using Appender. That sucks
  especially if you're only interested in a small part of the data and
  don't care about the rest.
  So for pull/sax parsers: Use buffering, return strings(better:
  w/d/char[]) as slices to that buffer. If the user needs to keep a
  string, he can still copy it. (String decoding should also be done
  on-demand only).
 
 Speaking as the one proposing said Json replacement, I'd like to
 point out that JSON strings != UTF strings: manual conversion is
 required some of the time. And I use appender as a dynamic buffer in
 exactly the manner you suggest. There's even an option to use a
 string cache to minimize total memory usage. (Hmm... that
 functionality should probably be re-factored out and made into its
 own utility) That said, I do end up doing a bunch of useless encodes
 and decodes, so I'm going to special case those away and add slicing
 support for strings. wstrings and dstring will still need to be
 converted as currently Json values only accept strings and therefore
 also Json tokens only support strings. As a potential user of the
 sax/pull interface would you prefer the extra clutter of special side
 channels for zero-copy wstrings and dstrings?

Regarding wstrings and dstrings: Well, JSON seems to be UTF-8 in almost
all cases, so it's not that important. But I think it should be
possible to use templates to implement identical parsers for d/w/strings.

Regarding the use of Appender: Long text ahead ;-)

I think pull parsers should really be as fast as possible and low-level.
For easy-to-use high-level stuff there's always DOM, and a safe,
high-level serialization API should be implemented based on the
PullParser as well. The serialization API would read only the requested
data, skipping the rest:

struct Data
{
string link;
}
auto data = unserialize!Data(json);


So in the PullParser we should
avoid memory allocation whenever possible, I think we can even avoid it
completely:

I think dchar ranges are just the wrong input type for parsers, parsers
should use buffered ranges or streams (which would be basically the
same). We could use a generic BufferedRange with real
dchar-ranges then. This BufferedRange could use a static buffer, so
there's no need to allocate anything.

The pull parser should return slices to the original string (if the
input is a string) or slices to the Range/Stream's buffer.
Of course, such a slice is only valid till the pull parser is called
again. The slice also wouldn't be decoded yet. And a slice string could
only be as long as the buffer, but I don't think this is an issue, a
512KB buffer can already store 524288 characters.

If the user wants to keep a string, he should really do
decodeJSONString(data).idup. There's a little more opportunity for
optimization: As long as a decoded JSON string is always smaller than
the encoded one (I don't know if it is), we could have a decodeJSONString
function which overwrites the original buffer -- no memory allocation.

If that's not the case, decodeJSONString has to allocate iff the
decoded string is different. So we need a function which always returns
the decoded string as a safe-to-keep copy, and a function which returns
the decoded string as a slice if the decoded string is
the same as the original.
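
A minimal sketch of that two-function design (names and escape handling are made up/abbreviated here, not an actual std.json API):

```d
import std.string : indexOf;

// Sketch only: decodeJsonSlice returns the input slice unchanged when
// no escape sequences are present (zero allocation); decodeJsonCopy
// always returns an independent, safe-to-keep string.
const(char)[] decodeJsonSlice(const(char)[] s)
{
    // No backslash means no escapes: the raw slice is already the value.
    if (s.indexOf('\\') < 0)
        return s;

    // Otherwise decode into a new buffer (allocates). Only a couple of
    // escapes are handled here; a real decoder must also cover \uXXXX.
    char[] result;
    for (size_t i = 0; i < s.length; ++i)
    {
        if (s[i] == '\\' && i + 1 < s.length)
        {
            ++i;
            switch (s[i])
            {
                case 'n': result ~= '\n'; break;
                case 't': result ~= '\t'; break;
                default:  result ~= s[i]; break;
            }
        }
        else
            result ~= s[i];
    }
    return result;
}

string decodeJsonCopy(const(char)[] s)
{
    return decodeJsonSlice(s).idup; // always safe to store
}

unittest
{
    string raw = "hello";
    assert(decodeJsonSlice(raw) is raw);       // zero-copy path
    assert(decodeJsonSlice(`a\nb`) == "a\nb"); // decoded, allocated
}
```

The slice-returning variant gives callers the no-allocation fast path; the copy variant is the one to use when the string outlives the buffer.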

An example: string json = `
{
   "link":"http://www.google.com",
   "useless_data":"lorem ipsum",
   "more":{
      "not interested":"yes"
   }
}`;

now I'm only interested in the link. It should be possible to parse that
with zero memory allocations:

auto parser = Parser(json);
parser.popFront();
while(!parser.empty)
{
    if(parser.front.type == KEY
        && tempDecodeJSON(parser.front.value) == "link")
    {
        parser.popFront();
        assert(!parser.empty && parser.front.type == VALUE);
        return decodeJSON(parser.front.value); //Should return a slice
    }
    //Skip everything else;
    parser.popFront();
}

tempDecodeJSON returns a decoded string, which (usually) isn't safe to
store (it can/should be a slice to the internal buffer; here it's a
slice to the original string, so it could be stored, but there's no
guarantee). In this case, the call to tempDecodeJSON could even be left
out, as we only search for "link", which doesn't need decoding.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Nicolae Mihalache
That would be funny but it's not true. I tested with different values,
that's why I ended up uploading different versions.

The programs print the computed message rate and take into account
the number of messages.

mache





On Thu, Feb 9, 2012 at 11:57 AM, Alex_Dovhal alex_dov...@yahoo.com wrote:
 Nicolae Mihalache xproma...@gmail.com wrote:
 Hello,

 I'm a complete newbie in D and trying to compare with Java. I
 implemented  a simple test for measuring the throughput in message
 passing between threads. I see that Java can pass about 4mil
 messages/sec while D only achieves 1mil/sec. I thought that D should
 be faster.

 The messages are simply integers (which are converted to Integer in Java).

 The two programs are attached. I tried compiling the D version with
 both dmd and gdc and various optimization flags.

 mache

 Hi, I downloaded your two programs, I didn't run them but noticed that in
 'mp.d'
 you have n set to 100_000_000, while in 'ThroughputMpTest.java' n is set to
 10_000_000, so with this D code is 10/4 = 2.5 times faster :)




Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Alex_Dovhal
Sorry, my mistake. It's strange to have different 'n', but you measure speed 
as 1000*n/time, so it doesn't matter if n is 10 times bigger. 




Re: std.xml and Adam D Ruppe's dom module

2012-02-09 Thread Johannes Pfau
Am Wed, 08 Feb 2012 20:49:48 -0600
schrieb Robert Jacques sandf...@jhu.edu:
 
 Speaking as the one proposing said Json replacement, I'd like to
 point out that JSON strings != UTF strings: manual conversion is
 required some of the time. And I use appender as a dynamic buffer in
 exactly the manner you suggest. There's even an option to use a
 string cache to minimize total memory usage. (Hmm... that
 functionality should probably be re-factored out and made into its
 own utility) That said, I do end up doing a bunch of useless encodes
 and decodes, so I'm going to special case those away and add slicing
 support for strings. wstrings and dstring will still need to be
 converted as currently Json values only accept strings and therefore
 also Json tokens only support strings. As a potential user of the
 sax/pull interface would you prefer the extra clutter of special side
 channels for zero-copy wstrings and dstrings?

BTW: Do you know DYAML?
https://github.com/kiith-sa/D-YAML

I think it has a pretty nice DOM implementation which doesn't require
any changes to phobos. As YAML is a superset of JSON, adapting it for
std.json shouldn't be too hard. The code is boost licensed and well
documented.

I think std.json would have better chances of being merged into phobos
if it didn't rely on changes to std.variant.


Re: Possible to pass a member function to spawn?

2012-02-09 Thread Artur Skawina
On 02/09/12 02:46, Timon Gehr wrote:
 On 02/09/2012 12:50 AM, Artur Skawina wrote:
 On 02/08/12 22:47, Timon Gehr wrote:
 On 02/08/2012 10:26 PM, Artur Skawina wrote:
 ...

 If we effectively passed ownership of our unique instance to another 
 context, 'x' can no longer
 be unique. If it were to mutate to the target type, then leaving it
 accessible from the current context should be reasonably safe.

 The idea was that spawn could take unique class references and pass 
 ownership to a different thread -- eliminating the need to cast to and from 
 shared.

 I'll rephrase what i said in that d.learn post; *all* I'm suggesting is this:

 a) Any result of an expression that the compiler can determine is unique is
 internally flagged as such. This means eg array concatenation or 
 new-expressions.
 Just a simple bit flag set, in addition to the stored real type.
 b) Any access to the data clears this flag (with just a few exceptions, 
 below).
 c) If the expression needs to be implicitly converted to another type *and*
 no implicit cast is possible *and* the unique flag is set - then 
 additional
 safe conversions are tried, and if one succeeds, the unique flag gets 
 cleared
 and the type gets modified to that of the target.

 This allows for things which are 100% safe, but currently prohibited by the
 compiler and require explicit casts.
 
 Your 'simple bit flag' already necessitates a flow analysis, and it does not 
 solve the problem Manu describes. Why not make it powerful enough to be 
 useful?

No, it does not really require *extra* flow analysis - you only need to clear 
the
flag on access/lookup. This makes it relatively cheap and while this approach 
has
its limits, it solves ~80% of the problem; it *does* let you write code without
having to use explicit casts where there should be none.
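
To make that concrete with a hypothetical example: a freshly allocated char[][] is provably unique, yet it currently needs an explicit cast to be used as const(char)[][] - exactly the kind of conversion rule (c) would legalize.

```d
// What rule (c) would allow: the result of a new-expression is unique,
// so handing it out through a const view is safe - but today the
// implicit conversion is rejected and an explicit cast is required.
void takesConstView(const(char)[][] names) {}

void main()
{
    auto tmp = new char[][](3);                // unique result
    // takesConstView(tmp);                    // error: char[][] -> const(char)[][]
    takesConstView(cast(const(char)[][]) tmp); // explicit cast needed today
}
```

With the unique flag set on the new-expression, the first call would compile and the cast would disappear.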

Manu's problem is the need for casts for sharing/unsharing, right? This *is* a
problem, and my approach allows for an implicit unique -> shared conversion.
Could you show an example message-passing function that uses your explicit
unique class together with sender code and receiver signature? Maybe I'm
missing something. Just a one-line spawn() that calls the receiver with an
argument provided by the sender. What i'm interested in is: would your new
unique allow implementing it *without* explicit unsharing?

 If I understood you right, you'd like (b) to be much less restrictive, which 
 i
 think complicates things too much. Some (b)-restrictions for cases that 
 always
 are cheap to discover /can/ be removed, but this needs to be determined on a 
 case-by-case basis.
 
 Everything that can modularly be shown to work should work. Of course the 
 analysis will still be conservative.

It has to be exact. ie it has to always predictably work for every case and 
every
compiler.

 Eg. I think any leaked refs to the data don't qualify (IOW any
 assignment, even if only indirectly via this expression, needs to clear the 
 flag).
 
 If the leaked ref can be shown to be dead, there is no problem. (this is 
 simple!)

It's not the trivial dead refs examples that are the problem.

 One thing the (b) probably /has/ to allow is storing the result in an auto
 variable. But making another copy should clear the flag.

 While i originally needed this for immutable/const/mutable, it would also 
 work
 for shared. If spawn() takes a shared argument, passing it a unique one
 will work too. And i'm not even convinced the ref needs to disappear from the
 current context (obviously accessing the now shared data has to treat it as 
 such
 - but this is not different from what we had before, when using explicit 
 casts;
 in fact now it's marked as shared so it should be safer.)
 
 Then you still have to cast away shared in the receiver thread. As I said 
 before, the idea is that you can send unique objects, not that they 
 implicitly convert to shared and are then sent.

The receiver should *not* be passed unique shared objects. This is why I'm
wondering about your spawn() implementation above.

 So the question is: does having an explicit unique storage class improve
 things further?
 
 Yes. Then the concept persists function boundaries. You cannot pass an 
 unshared object to another thread if there is no explicit unique storage 
 class.

Well, I know what you meant, but sometimes you can - eg immutable or shared.
The interesting case is *unsharing* the object.
And, yes, some kind of unique could allow for this, i'm just not yet
convinced your proposal isn't prohibitively expensive, both for the compiler
and user (by making things too complicated to use).

 Other than using it [1] to mark things as unique that the compiler can't 
 figure
 out by itself.

 [1] I'm using unique, but if it were to become a keyword it should be
  uniq or @uniq, for the same reasons as int, auto or ref.
 
 immutable

synchronized. The fact that the language has flaws does not mean we should
add more. :)

 And if it can work 

Re: Mac OS X 10.5 support

2012-02-09 Thread Jacob Carlborg

On 2012-02-09 10:37, Sönke Ludwig wrote:

Am 09.02.2012 04:52, schrieb Walter Bright:

Lately, dmd seems to have broken support for OS X 10.5. Supporting that
system is problematic for us, since we don't have 10.5 systems available
for dev/test.

Currently, the build/test farm is OS X 10.7.

I don't think this is like the Windows issue. Upgrading Windows is (for
me, anyway) a full day job. Upgrading OS X is inexpensive and relatively
painless, the least painless of any system newer than DOS that I've
experienced.

Hence, is it worthwhile to continue support for 10.5? Can we officially
say that only 10.6+ is supported? Is there a significant 10.5 community
that eschews OS upgrades but still expects new apps?


I have a project that we actually plan to use in production in the
company for which I work. They still require 10.5 support for their
products so removing that support would make for a very bad situation here.

But it should be possible to get a 10.5 retail DVD and install it inside
a VM.. I actually planned to do exactly this to support 10.5 nightbuilds
for my own D stuff.

If support should be dropped anyway, are the issues only build-related
so that e.g. gdc would still continue work on 10.5 without further work?


Yes, issue 4854 is a blocker:

http://d.puremagic.com/issues/show_bug.cgi?id=4854

--
/Jacob Carlborg


Re: How to save RAM in D programs (on zero initialized buffers)

2012-02-09 Thread Marco Leise

Am 09.02.2012, 11:55 Uhr, schrieb Kagamin s...@here.lot:

I guess, calloc will reuse blocks too, so if you run the compressing  
function twice, it will reuse the memory block used and freed previously  
and zero it out honestly.


You don't understand how it works. calloc gives you exactly 0 KB of
memory. There is nothing to zero out :)
There is one page, a block of 4096 bytes somewhere in the kernel, that is
all zeroes and read-only. If you allocate memory you get a bunch of
references to it. Or in other words, a zillion views on the same 4096
bytes repeating over and over. Only once you need to write to it will
happen what you say: the zero page is 'copied' into some (probably
previously freed) page. This is equivalent to zeroing out the target page.
The main difference here is that the zeroing out happens with a 'lazy'
keyword attached to it.
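
To make the lazy-zeroing point concrete, here is a small D sketch (assuming a POSIX-style kernel with a shared copy-on-write zero page; the laziness itself is invisible to the program, which only ever sees zeroes):

```d
// calloc hands back memory that reads as all zeroes immediately, but
// on typical kernels the pages are only backed by real RAM once they
// are written to.
import core.stdc.stdlib : calloc, free;

void main()
{
    enum n = 64 * 1024 * 1024; // 64 MiB of "zeroes"
    auto p = cast(ubyte*) calloc(n, 1);
    assert(p !is null);

    // Reading is cheap: every page can map to the shared zero page.
    assert(p[0] == 0 && p[n - 1] == 0);

    // Only a write forces the kernel to copy the zero page into a
    // private one - the 'lazy' zeroing described above.
    p[0] = 1;
    assert(p[0] == 1);

    free(p);
}
```

Watching RSS (resident set size) while touching more and more pages would show the physical memory cost growing only as pages are written.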


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Gor Gyolchanyan
Generally, D's message passing is implemented in quite an easy-to-use
way, but it is far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with a type-safe message
queue, so no Variant is necessary.
In specific cases messages can be polymorphic objects. This will be
way faster than Variant.
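
A templated queue along those lines could be sketched like this (a rough illustration with made-up names, not how std.concurrency is actually implemented):

```d
import core.sync.condition : Condition;
import core.sync.mutex : Mutex;

// One queue per concrete message type T: no Variant boxing and no
// runtime type dispatch - the compiler knows the payload layout.
class TypedQueue(T)
{
    private T[] buf;
    private Mutex m;
    private Condition c;

    this()
    {
        m = new Mutex;
        c = new Condition(m);
    }

    void put(T msg)
    {
        synchronized (m)
        {
            buf ~= msg;
            c.notify();
        }
    }

    T take()
    {
        synchronized (m)
        {
            while (buf.length == 0)
                c.wait();
            T msg = buf[0];
            buf = buf[1 .. $];
            return msg;
        }
    }
}

unittest
{
    auto q = new TypedQueue!int;
    q.put(42);
    assert(q.take() == 42);
}
```

A real implementation would use a ring buffer instead of slicing an array, but the point stands: the message is stored inline, with no Variant wrapper.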

On Thu, Feb 9, 2012 at 3:12 PM, Alex_Dovhal alex_dov...@yahoo.com wrote:
 Sorry, my mistake. It's strange to have different 'n', but you measure speed
 as 1000*n/time, so it doesn't matter if n is 10 times bigger.





-- 
Bye,
Gor Gyolchanyan.


Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Adam D. Ruppe
On Thursday, 9 February 2012 at 08:26:25 UTC, Jacob Carlborg 
wrote:
For example, ENV["REQUEST_URI"] returns differently on 
different servers. Rails provides a method, request_uri, on 
the request object that will return the same value on all 
different servers.


I don't know if CGI already has support for something similar.


Yeah, in cgi.d, you use Cgi.requestUri, which is an immutable
string, instead of using the environment variable directly.

requestUri = getenv("REQUEST_URI");
// Because IIS doesn't pass requestUri, we simulate it here if it's empty.
if(requestUri.length == 0) {
    // IIS sometimes includes the script name as part of the
    // path info - we don't want that
    if(pathInfo.length >= scriptName.length
        && (pathInfo[0 .. scriptName.length] == scriptName))
        pathInfo = pathInfo[scriptName.length .. $];

    requestUri = scriptName ~ pathInfo
        ~ (queryString.length ? ("?" ~ queryString) : "");

    // FIXME: this works for apache and iis... but what about others?
}

That's in the cgi constructor. Somewhat ugly code, but I figure
better to have ugly code in the library than incompatibilities
in the user program!

The http constructor creates these variables from the raw headers.


Here's the ddoc:
http://arsdnet.net/web.d/cgi.html

If you search for requestHeaders, you'll see all the stuff
following. If you use those class members instead of direct
environment variables, you'll get max compatibility.


Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Jacob Carlborg

On 2012-02-09 15:56, Adam D. Ruppe wrote:

On Thursday, 9 February 2012 at 08:26:25 UTC, Jacob Carlborg wrote:

For example, ENV["REQUEST_URI"] returns differently on different
servers. Rails provides a method, request_uri, on the request object
that will return the same value on all different servers.

I don't know if CGI already has support for something similar.


Yeah, in cgi.d, you use Cgi.requestUri, which is an immutable
string, instead of using the environment variable directly.

requestUri = getenv("REQUEST_URI");
// Because IIS doesn't pass requestUri, we simulate it here if it's empty.
if(requestUri.length == 0) {
    // IIS sometimes includes the script name as part of the path info - we
    // don't want that
    if(pathInfo.length >= scriptName.length
        && (pathInfo[0 .. scriptName.length] == scriptName))
        pathInfo = pathInfo[scriptName.length .. $];

    requestUri = scriptName ~ pathInfo
        ~ (queryString.length ? ("?" ~ queryString) : "");

    // FIXME: this works for apache and iis... but what about others?
}





That's in the cgi constructor. Somewhat ugly code, but I figure
better to have ugly code in the library than incompatibilities
in the user program!

The http constructor creates these variables from the raw headers.


Here's the ddoc:
http://arsdnet.net/web.d/cgi.html

If you search for requestHeaders, you'll see all the stuff
following. If you use those class members instead of direct
environment variables, you'll get max compatibility.


Cool, you already thought of all of this it seems.

--
/Jacob Carlborg


Re: Why I don't want D to expand

2012-02-09 Thread Zachary Lund

On Wednesday, 8 February 2012 at 06:40:38 UTC, Bee wrote:

Why don't you go GPL, bitch? That's where shit reigns.


What are you, Richard Stallman?

Also, you know there are various other compilers that use the DMD 
frontend with different backends (LDC, GDC) that you 
can contribute to. From what I understand, Walter doesn't have 
the legal right to change the licensing on the DMD backend.


To the hope that I'm not wasting my time,
Zachary Lund


Re: Link to D 2.0 language spec ebook is broken

2012-02-09 Thread Zachary Lund

On Monday, 6 February 2012 at 23:19:04 UTC, Brad Anderson wrote:
It appears to be because of the redirect from
http://digitalmars.com/d/2.0/* to
http://www.d-programming-language.org, and the D site doesn't have the
actual ebook hosted.

Regards,
Brad Anderson


I don't think the D2 language has an ebook specification like 
D1 did. I've talked about it and I still sort of wish for one. 
The site just doesn't feel organized and centralized to me, nor is 
it portable.


Re: std.xml and Adam D Ruppe's dom module

2012-02-09 Thread Sean Kelly
For XML, template the parser on char type so transcoding is unnecessary. Since 
JSON is UTF-8 I'd use char there, and at least for the event parser don't 
proactively decode strings--let the user do this. In fact, don't proactively 
decode anything. Give me the option of getting a number via its string 
representation directly from the input buffer. Roughly, JSON events should be:

Enter object
Object key
Int value (as string)
Float value (as string)
Null
True
False
Etc. 
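
One possible shape for that event stream, sketched in D (all names here are made up for illustration, not an actual std.json API):

```d
// Events carry the raw, undecoded slice so numbers and strings can be
// handed out straight from the input buffer, as suggested above.
enum JsonEvent
{
    enterObject,
    leaveObject,
    enterArray,
    leaveArray,
    key,
    intValue,    // payload: raw digits, not yet converted
    floatValue,  // payload: raw digits, not yet converted
    stringValue, // payload: raw, still escaped
    nullValue,
    trueValue,
    falseValue,
}

struct JsonToken
{
    JsonEvent type;
    const(char)[] raw; // slice into the input buffer; decode on demand
}
```

The caller decides whether to convert `raw` to an int, double, or decoded string - or to skip it entirely at zero cost.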

On Feb 8, 2012, at 6:49 PM, Robert Jacques sandf...@jhu.edu wrote:

 On Wed, 08 Feb 2012 02:12:57 -0600, Johannes Pfau nos...@example.com wrote:
 Am Tue, 07 Feb 2012 20:44:08 -0500
 schrieb Jonathan M Davis jmdavisp...@gmx.com:
 On Tuesday, February 07, 2012 00:56:40 Adam D. Ruppe wrote:
  On Monday, 6 February 2012 at 23:47:08 UTC, Jonathan M Davis
 [snip]
 
 Using ranges of dchar directly can be horribly inefficient in some
  cases, you'll need at least some kind of buffered dchar range. Some
 std.json replacement code tried to use only dchar ranges and had to
 reassemble strings character by character using Appender. That sucks
 especially if you're only interested in a small part of the data and
 don't care about the rest.
 So for pull/sax parsers: Use buffering, return strings(better:
 w/d/char[]) as slices to that buffer. If the user needs to keep a
 string, he can still copy it. (String decoding should also be done
 on-demand only).
 
 Speaking as the one proposing said Json replacement, I'd like to point out 
 that JSON strings != UTF strings: manual conversion is required some of the 
 time. And I use appender as a dynamic buffer in exactly the manner you 
 suggest. There's even an option to use a string cache to minimize total 
 memory usage. (Hmm... that functionality should probably be re-factored out 
 and made into its own utility) That said, I do end up doing a bunch of 
 useless encodes and decodes, so I'm going to special case those away and add 
 slicing support for strings. wstrings and dstring will still need to be 
 converted as currently Json values only accept strings and therefore also 
 Json tokens only support strings. As a potential user of the sax/pull 
 interface would you prefer the extra clutter of special side channels for 
 zero-copy wstrings and dstrings?


Re: std.xml and Adam D Ruppe's dom module

2012-02-09 Thread Sean Kelly
This. And decoded JSON strings are always smaller than encoded strings--JSON 
uses escaping to encode non-UTF-8 stuff, so in the case where someone sends a 
surrogate pair (legal in JSON) it's encoded as \uXXXX\uXXXX. In short, it's 
absolutely possible to create a pull parser that never allocates, even for 
decoding. As proof, I've done it before. :-p

On Feb 9, 2012, at 3:07 AM, Johannes Pfau nos...@example.com wrote:

 Am Wed, 08 Feb 2012 20:49:48 -0600
 schrieb Robert Jacques sandf...@jhu.edu:
 
 On Wed, 08 Feb 2012 02:12:57 -0600, Johannes Pfau
 nos...@example.com wrote:
 Am Tue, 07 Feb 2012 20:44:08 -0500
 schrieb Jonathan M Davis jmdavisp...@gmx.com:
 On Tuesday, February 07, 2012 00:56:40 Adam D. Ruppe wrote:
 On Monday, 6 February 2012 at 23:47:08 UTC, Jonathan M Davis
 [snip]
 
 Using ranges of dchar directly can be horribly inefficient in some
  cases, you'll need at least some kind of buffered dchar range. Some
 std.json replacement code tried to use only dchar ranges and had to
 reassemble strings character by character using Appender. That sucks
 especially if you're only interested in a small part of the data and
 don't care about the rest.
 So for pull/sax parsers: Use buffering, return strings(better:
 w/d/char[]) as slices to that buffer. If the user needs to keep a
 string, he can still copy it. (String decoding should also be done
 on-demand only).
 
 Speaking as the one proposing said Json replacement, I'd like to
 point out that JSON strings != UTF strings: manual conversion is
 required some of the time. And I use appender as a dynamic buffer in
 exactly the manner you suggest. There's even an option to use a
 string cache to minimize total memory usage. (Hmm... that
 functionality should probably be re-factored out and made into its
 own utility) That said, I do end up doing a bunch of useless encodes
 and decodes, so I'm going to special case those away and add slicing
 support for strings. wstrings and dstring will still need to be
 converted as currently Json values only accept strings and therefore
 also Json tokens only support strings. As a potential user of the
 sax/pull interface would you prefer the extra clutter of special side
 channels for zero-copy wstrings and dstrings?
 
 Regarding wstrings and dstrings: Well, JSON seems to be UTF-8 in almost
 all cases, so it's not that important. But I think it should be
 possible to use templates to implement identical parsers for d/w/strings
 
 Regarding the use of Appender: Long text ahead ;-)
 
 I think pull parsers should really be as fast as possible and low-level.
 For easy to use highlevel stuff there's always DOM and a safe,
 high-level serialization API should be implemented based on the
 PullParser as well. The serialization API would read only the requested
 data, skipping the rest:
 
 struct Data
 {
string link;
 }
 auto data = unserialize!Data(json);
 
 
 So in the PullParser we should
 avoid memory allocation whenever possible, I think we can even avoid it
 completely:
 
 I think dchar ranges are just the wrong input type for parsers, parsers
 should use buffered ranges or streams (which would be basically the
 same). We could use a generic BufferedRange with real
 dchar-ranges then. This BufferedRange could use a static buffer, so
 there's no need to allocate anything.
 
 The pull parser should return slices to the original string (if the
 input is a string) or slices to the Range/Stream's buffer.
 Of course, such a slice is only valid till the pull parser is called
 again. The slice also wouldn't be decoded yet. And a slice string could
 only be as long as the buffer, but I don't think this is an issue, a
 512KB buffer can already store 524288 characters.
 
 If the user wants to keep a string, he should really do
 decodeJSONString(data).idup. There's a little more opportunity for
 optimization: As long as a decoded json string is always smaller than
 the encoded one(I don't know if it is), we could have a decodeJSONString
 function which overwrites the original buffer -- no memory allocation.
 
 If that's not the case, decodeJSONString has to allocate iff the
 decoded string is different. So we need a function which always returns
 the decoded string as a safe-to-keep copy, and a function which returns
 the decoded string as a slice if the decoded string is
 the same as the original.
 
 An example: string json = `
 {
   "link":"http://www.google.com",
   "useless_data":"lorem ipsum",
   "more":{
      "not interested":"yes"
   }
 }`;
 
 now I'm only interested in the link. It should be possible to parse that
 with zero memory allocations:
 
 auto parser = Parser(json);
 parser.popFront();
 while(!parser.empty)
 {
     if(parser.front.type == KEY
         && tempDecodeJSON(parser.front.value) == "link")
     {
         parser.popFront();
         assert(!parser.empty && parser.front.type == VALUE);
         return decodeJSON(parser.front.value); //Should return a slice
     }
     //Skip everything else;

Re: Mac OS X 10.5 support

2012-02-09 Thread Sean Kelly
At this point, the only people on 10.4-5 should be those with PPC macs. I think 
32-bit Intel owners may be stuck on 10.6.

On Feb 8, 2012, at 9:13 PM, Nick Sabalausky a@a.a wrote:

 Walter Bright newshou...@digitalmars.com wrote in message 
 news:jgvfu2$gmk$1...@digitalmars.com...
 Lately, dmd seems to have broken support for OS X 10.5. Supporting that 
 system is problematic for us, since we don't have 10.5 systems available 
 for dev/test.
 
 Currently, the build/test farm is OS X 10.7.
 
 I don't think this is like the Windows issue. Upgrading Windows is (for 
 me, anyway) a full day job. Upgrading OS X is inexpensive and relatively 
 painless, the least painless of any system newer than DOS that I've 
 experienced.
 
 Hence, is it worthwhile to continue support for 10.5? Can we officially 
 say that only 10.6+ is supported? Is there a significant 10.5 community 
 that eschews OS upgrades but still expects new apps?
 
 While I'm normally big on not dropping support for older things, my honest 
 take on it is that if someone's using an Apple OS, then they've already 
 agreed to an implicit contract (for lack of a better word) that they're 
 going to need to keep upgrading to whatever's the latest hardware/software 
 anyway. It's just the way Apple works. 'Course, as a non-Apple user, I'm not 
 sure anything I have to say on it counts for much. So, FWIW.
 
 


Re: Mac OS X 10.5 support

2012-02-09 Thread Sean Kelly
You need 10.5 server. Apple doesn't allow desktop versions of OSX in a VM (I 
think 10.7 may be the first exception to this rule) and VM makers honor this. I 
may be able to sort out earlier OSX server versions somewhere for my own use, 
but I don't have the resources to make them accessible to others.  I'll see 
about trying this today. 

On Feb 9, 2012, at 1:37 AM, Sönke Ludwig lud...@informatik.uni-luebeck.de 
wrote:

 Am 09.02.2012 04:52, schrieb Walter Bright:
 Lately, dmd seems to have broken support for OS X 10.5. Supporting that
 system is problematic for us, since we don't have 10.5 systems available
 for dev/test.
 
 Currently, the build/test farm is OS X 10.7.
 
 I don't think this is like the Windows issue. Upgrading Windows is (for
 me, anyway) a full day job. Upgrading OS X is inexpensive and relatively
 painless, the least painless of any system newer than DOS that I've
 experienced.
 
 Hence, is it worthwhile to continue support for 10.5? Can we officially
 say that only 10.6+ is supported? Is there a significant 10.5 community
 that eschews OS upgrades but still expects new apps?
 
 I have a project that we actually plan to use in production in the company 
 for which I work. They still require 10.5 support for their products so 
 removing that support would make for a very bad situation here.
 
 But it should be possible to get a 10.5 retail DVD and install it inside a 
 VM.. I actually planned to do exactly this to support 10.5 nightbuilds for my 
 own D stuff.
 
 If support should be dropped anyway, are the issues only build-related so 
 that e.g. gdc would still continue work on 10.5 without further work?


Re: Mac OS X 10.5 support

2012-02-09 Thread Walter Bright

On 2/9/2012 1:37 AM, Sönke Ludwig wrote:

I have a project that we actually plan to use in production in the company for
which I work. They still require 10.5 support for their products so removing
that support would make for a very bad situation here.

But it should be possible to get a 10.5 retail DVD and install it inside a VM..
I actually planned to do exactly this to support 10.5 nightbuilds for my own D
stuff.

If support should be dropped anyway, are the issues only build-related so that
e.g. gdc would still continue work on 10.5 without further work?



Would it also be possible for you to:

1. debug what has gone wrong with the 10.5 support? I'll be happy to fold in any 
resulting patches.


2. provide a remote login shell so we can figure it out?

3. use git bisect to determine which change broke it?


Re: std.xml and Adam D Ruppe's dom module

2012-02-09 Thread Johannes Pfau
Am Thu, 09 Feb 2012 08:18:15 -0600
schrieb Robert Jacques sandf...@jhu.edu:

 On Thu, 09 Feb 2012 05:13:52 -0600, Johannes Pfau
 nos...@example.com wrote:
  Am Wed, 08 Feb 2012 20:49:48 -0600
  schrieb Robert Jacques sandf...@jhu.edu:
 
  Speaking as the one proposing said Json replacement, I'd like to
  point out that JSON strings != UTF strings: manual conversion is
  required some of the time. And I use appender as a dynamic buffer
  in exactly the manner you suggest. There's even an option to use a
  string cache to minimize total memory usage. (Hmm... that
  functionality should probably be re-factored out and made into its
  own utility) That said, I do end up doing a bunch of useless
  encodes and decodes, so I'm going to special case those away and
  add slicing support for strings. wstrings and dstring will still
  need to be converted as currently Json values only accept strings
  and therefore also Json tokens only support strings. As a
  potential user of the sax/pull interface would you prefer the
  extra clutter of special side channels for zero-copy wstrings and
  dstrings?
 
  BTW: Do you know DYAML?
  https://github.com/kiith-sa/D-YAML
 
  I think it has a pretty nice DOM implementation which doesn't
  require any changes to phobos. As YAML is a superset of JSON,
  adapting it for std.json shouldn't be too hard. The code is boost
  licensed and well documented.
 
  I think std.json would have better chances of being merged into
  phobos if it didn't rely on changes to std.variant.
 
 I know about D-YAML, but haven't taken a deep look at it; it was
 developed long after I wrote my own JSON library.

I know, I didn't mean to criticize. I just thought DYAML could give
some useful inspiration for the DOM api.

 I did look into
 YAML before deciding to use JSON for my application; I just didn't
 need the extra features and implementing them would've taken extra
 dev time.

Sure, I was only referring to DYAML because its DOM is very similar. Just
remove some features and it would suit JSON very well. One problem is
that DYAML uses an older YAML version which isn't 100% compatible
with JSON, so it can't be used as a JSON parser. There's also no way to
tell it to generate only JSON-compatible output (and AFAIK that's a
design decision, not simply a missing feature).
 
 As for reliance on changes to std.variant, this was a change
 *suggested* by Andrei.
Ok, then those changes obviously make sense. I actually thought Andrei
didn't like some of those changes.

 And while it is the slower route to go, I
 believe it is the correct software engineering choice; prior to the
 change I was implementing my own typed union (i.e. I poorly
 reinvented std.variant) Actually, most of my initial work on Variant
 was to make its API just as good as my home-rolled JSON type.
 Furthermore, a quick check of the YAML code-base seems to indicate
 that underneath the hood, Variant is being used. I'm actually a
 little curious about what prevented YAML from being expressed using
 std.variant directly and if those limitations can be removed.

I guess the custom Node type was only added to support additional
methods(isScalar, isSequence, isMapping, add, remove, removeAt) and I'm
not sure if those are supported on Variant (length, foreach, opIndex,
opIndexAssign), but IIRC those are supported in your new std.variant.
 
 * The other thing slowing both std.variant and std.json down is my
 thesis writing :)




Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread dsimcha
I wonder how much it helps to just optimize the GC a little.  How 
much does the performance gap close when you use DMD 2.058 beta 
instead of 2.057?  This upcoming release has several new garbage 
collector optimizations.  If the GC is the bottleneck, then it's 
not surprising that anything that relies heavily on it is slow 
because D's GC is still fairly naive.


On Thursday, 9 February 2012 at 15:44:59 UTC, Sean Kelly wrote:
So a queue per message type?  How would ordering be preserved? 
Also, how would this work for interprocess messaging?  An 
array-based queue is an option however (though it would mean 
memmoves on receive), as are free-lists for nodes, etc.  I 
guess the easiest thing there would be a lock-free shared slist 
for the node free-list, though I couldn't weigh the chance of 
cache misses from using old memory blocks vs. just expecting 
the allocator to be fast.


On Feb 9, 2012, at 6:10 AM, Gor Gyolchanyan 
gor.f.gyolchan...@gmail.com wrote:


Generally, D's message passing is implemented in quite easy-to-use
way, but far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with type-safe message
queue, so no Variant is necessary.
In specific cases Messages can be polymorphic objects. This will be
way faster than Variant.

On Thu, Feb 9, 2012 at 3:12 PM, Alex Dovhal alex dov...@yahoo.com wrote:
Sorry, my mistake. It's strange to have different 'n', but you measure speed
as 1000*n/time, so it doesn't matter if n is 10 times bigger.







--
Bye,
Gor Gyolchanyan.





Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Brad Anderson
On Thu, Feb 9, 2012 at 9:22 AM, dsimcha dsim...@yahoo.com wrote:

 I wonder how much it helps to just optimize the GC a little.  How much
 does the performance gap close when you use DMD 2.058 beta instead of
 2.057?  This upcoming release has several new garbage collector
 optimizations.  If the GC is the bottleneck, then it's not surprising that
 anything that relies heavily on it is slow because D's GC is still fairly
 naive.


 On Thursday, 9 February 2012 at 15:44:59 UTC, Sean Kelly wrote:

 So a queue per message type?  How would ordering be preserved? Also, how
 would this work for interprocess messaging?  An array-based queue is an
 option however (though it would mean memmoves on receive), as are
 free-lists for nodes, etc.  I guess the easiest thing there would be a
 lock-free shared slist for the node free-list, though I couldn't weigh the
 chance of cache misses from using old memory blocks vs. just expecting the
 allocator to be fast.

 On Feb 9, 2012, at 6:10 AM, Gor Gyolchanyan gor.f.gyolchan...@gmail.com
 wrote:

  Generally, D's message passing is implemented in quite easy-to-use
 way, but far from being fast.
 I dislike the Variant structure, because it adds a huge overhead. I'd
 rather have a templated message passing system with type-safe message
 queue, so no Variant is necessary.
 In specific cases Messages can be polymorphic objects. This will be
 way faster, then Variant.

 On Thu, Feb 9, 2012 at 3:12 PM, Alex Dovhal alex dov...@yahoo.com
 wrote:

 Sorry, my mistake. It's strange to have different 'n', but you measure
 speed
 as 1000*n/time, so it doesn't matter if n is 10 times bigger.





 --
 Bye,
 Gor Gyolchanyan.




dmd 2.057:
received 1 messages in 192034 msec sum=49995000
speed=520741 msg/sec
received 1 messages in 84118 msec sum=49995000
speed=1188806 msg/sec
received 1 messages in 88274 msec sum=49995000
speed=1132836 msg/sec

dmd 2.058 beta:
received 1 messages in 93539 msec sum=49995000
speed=1069072 msg/sec
received 1 messages in 96422 msec sum=49995000
speed=1037107 msg/sec
received 1 messages in 203961 msec sum=49995000
speed=490289 msg/sec

Both versions would inexplicably run at approximately half the speed
sometimes. I have no idea what is up with that.  I have no java development
environment to test for comparison.  This machine has 4 cores and is
running Windows.

Regards,
Brad Anderson


RedMonk rankings

2012-02-09 Thread Simen Kjærås

http://redmonk.com/sogrady/2012/02/08/language-rankings-2-2012/

Kinda interesting, but as with all these things, don't take it as the
word of god. Nice to see D all the way up there; I'd honestly expect it
to be lower.


Re: std.regex performance

2012-02-09 Thread Jesse Phillips


I suggest to file this as an enhancement request, as new 
std.regex should have been backwards compatible.


http://d.puremagic.com/issues/show_bug.cgi?id=7471

I redid the timings with mingw using time, and I find this strange

$ time ./test2.058.exe

real0m55.500s
user0m0.031s
sys 0m0.000s

If I know my time output, doesn't that mean the computer is 
spending almost a minute not running my program, maybe doing IO?


And 2.056 is similar, and actually takes longer in user time.

real0m0.860s
user0m0.047s


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Andrei Alexandrescu

On 2/9/12 6:10 AM, Gor Gyolchanyan wrote:

Generally, D's message passing is implemented in quite easy-to-use
way, but far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with type-safe message
queue, so no Variant is necessary.
In specific cases Messages can be polymorphic objects. This will be
way faster, then Variant.


cc Sean Kelly

I haven't looked at the implementation, but one possible liability is 
that large messages don't fit in a Variant and must use dynamic 
allocation under the wraps. There are a number of ways to avoid that, 
such as parallel arrays (one array per type for data and one for the 
additional tags).


We must make the message passing subsystem to not use any memory 
allocation in the quiescent state. If we're doing one allocation per 
message passed, that might explain the 4x performance difference (I have 
no trouble figuring Java's allocator is this much faster than D's).



Andrei
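For illustration, the parallel-arrays layout mentioned above could look roughly like this. This is only a sketch with invented names (TwoTypeQueue, frontTag, etc.), not the std.concurrency implementation: one dense array per payload type, plus a tag stream preserving arrival order, so no payload has to be boxed in a Variant.

```d
// Sketch (hypothetical types): one array per message type, plus a tag
// stream that preserves arrival order across the typed arrays.
struct TwoTypeQueue(A, B)
{
    A[] as;
    B[] bs;
    ubyte[] tags;        // 0 = next message lives in `as`, 1 = in `bs`
    size_t ai, bi, ti;   // read cursors

    void put(A a) { as ~= a; tags ~= 0; }
    void put(B b) { bs ~= b; tags ~= 1; }

    bool empty() const { return ti == tags.length; }

    // Caller inspects the tag, then pops from the matching array.
    ubyte frontTag() const { return tags[ti]; }
    A popA() { ti++; return as[ai++]; }
    B popB() { ti++; return bs[bi++]; }
}
```

Appending still allocates when the arrays grow, but only amortized; replacing the arrays with ring buffers would remove steady-state allocation entirely.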


Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Andrei Alexandrescu

On 2/9/12 6:56 AM, Adam D. Ruppe wrote:

Here's the ddoc:
http://arsdnet.net/web.d/cgi.html


Cue the choir: Please submit to Phobos.

Andrei


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Marco Leise

Am 09.02.2012, 17:22 Uhr, schrieb dsimcha dsim...@yahoo.com:

I wonder how much it helps to just optimize the GC a little.  How much  
does the performance gap close when you use DMD 2.058 beta instead of  
2.057?  This upcoming release has several new garbage collector  
optimizations.  If the GC is the bottleneck, then it's not surprising  
that anything that relies heavily on it is slow because D's GC is still  
fairly naive.


I did some OProfile-ing. The full report is attached, but for simplicity  
it is without call graph this time. Here is an excerpt:


CPU: Core 2, speed 2001 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit  
mask of 0x00 (Unhalted core cycles) count 10

samples  %        linenr info  symbol name
13838    18.8416  gcx.d:426    void* gc.gcx.GC.malloc(ulong, uint, ulong*)
4465      6.0795  gcx.d:2454   ulong gc.gcx.Gcx.fullcollect(void*)

...

Compiled with: gcc-Version 4.6.2 20111026 (gdc 0.31 - r751:34491c2e7bb4,  
using dmd 2.057) (GCC)

CPU: Core 2, speed 2001 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask 
of 0x00 (Unhalted core cycles) count 10
samples  %        linenr info  symbol name
13838    18.8416  gcx.d:426    void* gc.gcx.GC.malloc(ulong, uint, ulong*)
4465      6.0795  gcx.d:2454   ulong gc.gcx.Gcx.fullcollect(void*)
3466  4.7192  lifetime.d:829  _d_newarrayiT
3282  4.4687  concurrency.d:926   void 
std.concurrency.MessageBox.put(ref std.concurrency.Message)
3088  4.2046  concurrency.d:1419  void 
std.concurrency.List!(std.concurrency.Message).List.put(ref 
std.concurrency.List!(std.concurrency.Message).List)
3037  4.1351  concurrency.d:977   bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void delegate(std.concurrency.LinkTerminated), pure @safe void 
delegate(std.concurrency.OwnerTerminated), pure @safe void 
delegate(std.variant.VariantN!(32uL).VariantN)).get(nothrow @safe void 
delegate(int), pure @safe void delegate(std.concurrency.LinkTerminated), pure 
@safe void delegate(std.concurrency.OwnerTerminated), pure @safe void 
delegate(std.variant.VariantN!(32uL).VariantN))
2624  3.5728  variant.d:235   long 
std.variant.VariantN!(32uL).VariantN.handler!(int).handler(std.variant.VariantN!(32uL).VariantN.OpID,
 ubyte[32]*, void*)
2544  3.4639  gcbits.d:115void gc.gcbits.GCBits.clear(ulong)
2152  2.9301  object_.d:2417  _d_monitorenter
2011  2.7381  concurrency.d:591   int 
std.concurrency.receiveOnly!(int).receiveOnly()
1712  2.3310  gc.d:205gc_qalloc
1695  2.3079  variant.d:253   long 
std.variant.VariantN!(32uL).VariantN.handler!(int).handler(std.variant.VariantN!(32uL).VariantN.OpID,
 ubyte[32]*, void*).bool tryPutting(int*, TypeInfo, void*)
1659  2.2589  concurrency.d:1058  bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void delegate(std.concurrency.LinkTerminated), pure @safe void 
delegate(std.concurrency.OwnerTerminated), pure @safe void 
delegate(std.variant.VariantN!(32uL).VariantN)).get(nothrow @safe void 
delegate(int), pure @safe void delegate(std.concurrency.LinkTerminated), pure 
@safe void delegate(std.concurrency.OwnerTerminated), pure @safe void 
delegate(std.variant.VariantN!(32uL).VariantN)).bool scan(ref 
std.concurrency.List!(std.concurrency.Message).List)
1658  2.2575  gcbits.d:104void gc.gcbits.GCBits.set(ulong)
1487  2.0247  variant.d:499   
std.variant.VariantN!(32uL).VariantN 
std.variant.VariantN!(32uL).VariantN.opAssign!(int).opAssign(int)
1435  1.9539  gcbits.d:92 ulong gc.gcbits.GCBits.test(ulong)
1405  1.9130  condition.d:230 void 
core.sync.condition.Condition.notify()
1390  1.8926  object_.d:2440  _d_monitorexit
1386  1.8872  concurrency.d:1304  ref @property 
std.concurrency.Message 
std.concurrency.List!(std.concurrency.Message).List.Range.front()
1248  1.6993  object_.d:137   bool object.opEquals(Object, 
Object)
1131  1.5399  concurrency.d:1378  void 
std.concurrency.List!(std.concurrency.Message).List.removeAt(std.concurrency.List!(std.concurrency.Message).List.Range)
998   1.3589  mutex.d:149 void 
core.sync.mutex.Mutex.unlock()
993   1.3521  gcbits.d:141ulong 
gc.gcbits.GCBits.testClear(ulong)
920   1.2527  concurrency.d:497   void 
std.concurrency._send!(int)._send(std.concurrency.MsgType, std.concurrency.Tid, 
int)
859   1.1696  concurrency.d:996   bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void delegate(std.concurrency.LinkTerminated), pure @safe void 

Re: Mac OS X 10.5 support

2012-02-09 Thread Jacob Carlborg

On 2012-02-09 17:21, Sean Kelly wrote:

You need 10.5 server. Apple doesn't allow desktop versions of OSX in a VM (I 
think 10.7 may be the first exception to this rule) and VM makers honor this. I 
may be able to sort out earlier OSX server versions somewhere for my own use, 
but I don't have the resources to make them accessible to others.  I'll see 
about trying this today.


VMware made a mistake with VMware Fusion 4.1 that allows users to 
virtualize Leopard and Snow Leopard.


http://www.macworld.com/article/163755/2011/11/vmware_fusion_update_lets_users_virtualize_leopard_snow_leopard.html

--
/Jacob Carlborg


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Marco Leise
Am 09.02.2012, 18:35 Uhr, schrieb Andrei Alexandrescu  
seewebsiteforem...@erdani.org:



On 2/9/12 6:10 AM, Gor Gyolchanyan wrote:

Generally, D's message passing is implemented in quite easy-to-use
way, but far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with type-safe message
queue, so no Variant is necessary.
In specific cases Messages can be polymorphic objects. This will be
way faster, then Variant.


cc Sean Kelly

I haven't looked at the implementation, but one possible liability is  
that large messages don't fit in a Variant and must use dynamic  
allocation under the wraps. There are a number of ways to avoid that,  
such as parallel arrays (one array per type for data and one for the  
additional tags).


We must make the message passing subsystem to not use any memory  
allocation in the quiescent state. If we're doing one allocation per  
message passed, that might explain the 4x performance difference (I have  
no trouble figuring Java's allocator is this much faster than D's).



Andrei


Well, what does +1 Variant and +1 LinkedListNode sum up to?


Re: Output to console from DerivedThread class strange

2012-02-09 Thread kraybourne


I know neither Windows nor D very well, but I've noticed that in D on Windows, 
output sometimes seems to be held back until the app terminates.


Maybe try this:

On 2/7/12 11:28 PM, Oliver Puerto wrote:

void run() {
writeln("Derived thread running.");

stdout.flush(); // -- added

}


It at least helped me in similar weird situations, although I'm not 
sure it helps you or explains anything.


Formating output of retro

2012-02-09 Thread Dmitry Olshansky

Finally getting to debug std.regex issues, I've found that it seems like

import std.stdio, std.range;

void main()
{
writefln("%s", retro("abcd"));
}

no longer works. Can anyone on an older version check whether it's a regression?

--
Dmitry Olshansky


Named parameters workaround

2012-02-09 Thread Matthias Walter
Hi,

named parameters were discussed a while ago and it seems to me
that they won't get into the language soon. I came up with a
workaround that makes it possible to use them with some extra typing.

Suppose we have a function

foo(int a, int b, string c, MyStruct d, MyClass e);

and want to call it with named parameters. It is possible to write a
wrapper function as follows

foo(N)(N n)
{
  foo(
    n.has!"a"() ? n.get!"a"() : 42, // 42 is the default parameter
    n.get!"b"(), // complain if b is not given
    n.has!"c"() ? n.get!"c"() : "foo",
    n.has!"d"() ? n.get!"d"() : MyStruct(),
    n.has!"e"() ? n.get!"e"() : new MyClass()
  );
}

and then call it

foo(named!"c,b,e"("Foo", 0, new MyClass()));

With some more work we could also allow

foo(named!"c"("Foo"), named!"b"(0), named!"e"(new MyClass()));

All this can be handled by a templated wrapper object that is created
by named()(). When doing the code generation of the wrapper object via
a mixin, we could also allow ref parameters (with named!"ref e"(my_class)
and by storing a pointer).

Of course, the code for the wrapper can be inlined because the
presence of every parameter can be decided at compile time.

Any ideas?

Best regards,

Matthias
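One way to realize this without a hand-rolled bundle is to build the wrapper on std.typecons.Tuple's named fields. This is only a sketch of the idea; `getOr` and the `foo`/`fooImpl` split are hypothetical names, not an existing API:

```d
// Sketch: emulate named arguments via Tuple's named fields.
import std.stdio;
import std.typecons : Tuple;

// Hypothetical helper: read member `name` from the bundle, or fall
// back to a default when the caller didn't supply it.
auto getOr(string name, N, T)(N n, lazy T def)
{
    static if (__traits(hasMember, N, name))
        return __traits(getMember, n, name);
    else
        return def;
}

void fooImpl(int a, int b, string c)
{
    writefln("a=%s b=%s c=%s", a, b, c);
}

// Accept any bundle; missing members fall back to defaults.
void foo(N)(N n)
{
    fooImpl(getOr!"a"(n, 42),    // 42 is the default parameter
            getOr!"b"(n, 0),
            getOr!"c"(n, "foo"));
}

void main()
{
    // "Named" call: only c and b are supplied; a falls back to 42.
    foo(Tuple!(string, "c", int, "b")("Foo", 7));
}
```

Since presence of each member is decided at compile time, the fallback branches cost nothing at run time, which matches the inlining argument above.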
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJPM5SOAAoJEPdcuJbq5/sRr/MIAL0tJBGWdkhrPUbaS5gqb9ho
jkyKW8u+akVMnlTW4BKQ7lHSkJBySZxn4Ty/zIJEmqoIHrlsI308z26miSy5bDeK
XcNAx1M+3wUuvYPoJpg3nlARofez9R0n1opfS6DnDYHGYLZH9AK924bwKyChFfP9
a/6mEyPHsMem/+2CWIWJjsLzEBkc+OacgCmzj7dGZfoJBhmF/EjxZgdwYpnA8q3N
KYIl28gqyf+JBkmdzVhhDuBMUb1PlqqqnbXS66EaYcQIA7bUESPc8dKJKIQTKVy3
Lq5MSg8BuvMdnIXYVn0HK4R2LWTshZn5kXkfy7EX8Xw4yyT4e6VIkcwDOyy8iMQ=
=T4UD
-END PGP SIGNATURE-


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Sean Kelly
On Feb 9, 2012, at 10:14 AM, Marco Leise wrote:

 Am 09.02.2012, 17:22 Uhr, schrieb dsimcha dsim...@yahoo.com:
 
 I wonder how much it helps to just optimize the GC a little.  How much does 
 the performance gap close when you use DMD 2.058 beta instead of 2.057?  
 This upcoming release has several new garbage collector optimizations.  If 
 the GC is the bottleneck, then it's not surprising that anything that relies 
 heavily on it is slow because D's GC is still fairly naive.
 
 I did some OProfile-ing. The full report is attached, but for simplicity it 
 is without call graph this time. Here is an excerpt:
 
 CPU: Core 2, speed 2001 MHz (estimated)
 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit 
 mask of 0x00 (Unhalted core cycles) count 10
 samples  %        linenr info  symbol name
 13838    18.8416  gcx.d:426    void* gc.gcx.GC.malloc(ulong, uint, ulong*)
 4465      6.0795  gcx.d:2454   ulong gc.gcx.Gcx.fullcollect(void*)

One random thing that just occurred to me… if the standard receive pattern is:

receive((int x) { … });

There's a good chance that a stack frame is being dynamically allocated for the 
delegate when it's passed to receive (since I don't believe there's any way to 
declare the parameters to receive as scope).  I'll have to check this, and 
maybe consider changing receive to use alias template parameters instead of 
normal function parameters?

Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Andrei Alexandrescu

On 2/9/12 10:31 AM, Marco Leise wrote:

Am 09.02.2012, 18:35 Uhr, schrieb Andrei Alexandrescu
seewebsiteforem...@erdani.org:


On 2/9/12 6:10 AM, Gor Gyolchanyan wrote:

Generally, D's message passing is implemented in quite easy-to-use
way, but far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with type-safe message
queue, so no Variant is necessary.
In specific cases Messages can be polymorphic objects. This will be
way faster, then Variant.


cc Sean Kelly

I haven't looked at the implementation, but one possible liability is
that large messages don't fit in a Variant and must use dynamic
allocation under the wraps. There are a number of ways to avoid that,
such as parallel arrays (one array per type for data and one for the
additional tags).

We must make the message passing subsystem to not use any memory
allocation in the quiescent state. If we're doing one allocation per
message passed, that might explain the 4x performance difference (I
have no trouble figuring Java's allocator is this much faster than D's).


Andrei


Well, what does +1 Variant and +1 LinkedListNode sum up to?


Sorry, I don't understand...

Andrei


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Timon Gehr

On 02/09/2012 08:27 PM, Sean Kelly wrote:

On Feb 9, 2012, at 10:14 AM, Marco Leise wrote:


Am 09.02.2012, 17:22 Uhr, schrieb dsimchadsim...@yahoo.com:


I wonder how much it helps to just optimize the GC a little.  How much does the 
performance gap close when you use DMD 2.058 beta instead of 2.057?  This 
upcoming release has several new garbage collector optimizations.  If the GC is 
the bottleneck, then it's not surprising that anything that relies heavily on 
it is slow because D's GC is still fairly naive.


I did some OProfile-ing. The full report is attached, but for simplicity it is 
without call graph this time. Here is an excerpt:

CPU: Core 2, speed 2001 MHz (estimated)
Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask 
of 0x00 (Unhalted core cycles) count 10
samples  %        linenr info  symbol name
13838    18.8416  gcx.d:426    void* gc.gcx.GC.malloc(ulong, uint, ulong*)
4465      6.0795  gcx.d:2454   ulong gc.gcx.Gcx.fullcollect(void*)


One random thing that just occurred to me… if the standard receive pattern is:

receive((int x) { … });

There's a good chance that a stack frame is being dynamically allocated for the delegate 
when it's passed to receive (since I don't believe there's any way to declare the 
parameters to receive as scope).  I'll have to check this, and maybe consider 
changing receive to use alias template parameters instead of normal function parameters?


You can mark an entire tuple as scope without trouble:

void foo(T,S...)(T arg1, scope S args) {...}

Does this improve the run time?
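A toy version of that receive pattern with a scope-marked handler tuple could look like this (a sketch with invented names, not std.concurrency itself). With `scope` on the tuple, the compiler is allowed to keep the caller's closure on the stack instead of GC-allocating it:

```d
import std.stdio;

// Sketch: dispatch an int message to the first handler that accepts it.
// The handler tuple is marked `scope`, promising the delegates won't
// escape, so no heap closure is required for the caller's literal.
void dispatch(Handlers...)(int msg, scope Handlers ops)
{
    // foreach over a tuple unrolls at compile time.
    foreach (op; ops)
    {
        static if (is(typeof(op(msg))))
        {
            op(msg);
            return;
        }
    }
}

void main()
{
    int seen;
    dispatch(42, (int x) { seen = x; }); // delegate closes over `seen`
    writeln(seen);
}
```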


Re: Formating output of retro

2012-02-09 Thread Jesse Phillips
On Thursday, 9 February 2012 at 18:59:47 UTC, Dmitry Olshansky 
wrote:
Finally getting to debug std.regex issues, I've found that it 
seems like


import std.stdio, std.range;

void main()
{
writefln("%s", retro("abcd"));
}

no longer works, can anyone on older version check if it's 
regression?


Works in 2.056


Re: Formating output of retro

2012-02-09 Thread Brad Anderson
On Thu, Feb 9, 2012 at 12:41 PM, Jesse Phillips
jessekphillip...@gmail.comwrote:

 On Thursday, 9 February 2012 at 18:59:47 UTC, Dmitry Olshansky wrote:

 Finally getting to debug std.regex issues, I've found that it seems like

 import std.stdio, std.range;

 void main()
 {
writefln("%s", retro("abcd"));
 }

 no longer works, can anyone on older version check if it's regression?


 Works in 2.056


And in 2.057.  It doesn't work in 2.058 beta. Error during compilation:

Error: static assert  "Cannot put a Result into a LockingTextWriter"

Regards,
Brad Anderson


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Marco Leise
Am 09.02.2012, 20:35 Uhr, schrieb Andrei Alexandrescu  
seewebsiteforem...@erdani.org:



If we're doing one allocation per
message passed, that might explain the 4x performance difference (I
have no trouble figuring Java's allocator is this much faster than  
D's).



Andrei


Well, what does +1 Variant and +1 LinkedListNode sum up to?


Sorry, I don't understand...

Andrei


There are at least 2 allocations, one for the Variant and one for the new  
node in the linked list aka message box. But from what you wrote it sounds  
like a Variant doesn't allocate unless the contained data exceeds some  
internal storage. Sean found another possible allocation in the other  
branch of this discussion.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread bearophile
Marco Leise:

 Sean found another possible allocation in the other  
 branch of this discussion.

Maybe this is able to help Sean and similar situations:
http://d.puremagic.com/issues/show_bug.cgi?id=5070

Bye,
bearophile


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Martin Nowak
On Thu, 09 Feb 2012 16:44:46 +0100, Sean Kelly s...@invisibleduck.org  
wrote:


So a queue per message type?  How would ordering be preserved? Also, how  
would this work for interprocess messaging?  An array-based queue is an  
option however (though it would mean memmoves on receive), as are  
free-lists for nodes, etc.  I guess the easiest thing there would be a  
lock-free shared slist for the node free-list, though I couldn't weigh  
the chance of cache misses from using old memory blocks vs. just  
expecting the allocator to be fast.


I haven't yet gotten around to polishing my lock-free SList/DList
implementations, but mutexes should only become a problem under high  
contention, when you need to block.

You'd also need some kind of blocking for lock-free lists.

The best first-order optimization would be to allocate the list node  
deterministically.
The only reason to use GC memory for them is that malloc'ing is still too  
cumbersome.

Nodes are unshared, so you'd want a unique_pointer.
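Allocating the node deterministically could be as simple as recycling nodes through a per-mailbox free-list. The sketch below uses hypothetical `Node`/`NodePool` types (not druntime code); after warm-up, every send reuses a node released by a previous receive, so the steady state does no GC allocation:

```d
// Sketch (hypothetical types): per-mailbox free-list of message nodes.
struct Node(T)
{
    T payload;
    Node!T* next;
}

struct NodePool(T)
{
    Node!T* head; // unshared: assumed guarded by the mailbox mutex

    Node!T* acquire(T value)
    {
        auto n = head;
        if (n !is null)
            head = n.next;   // reuse a previously released node
        else
            n = new Node!T;  // only allocates while warming up
        n.payload = value;
        n.next = null;
        return n;
    }

    void release(Node!T* n)
    {
        n.next = head;
        head = n;
    }
}
```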


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Andrei Alexandrescu

On 2/9/12 11:49 AM, Marco Leise wrote:

Am 09.02.2012, 20:35 Uhr, schrieb Andrei Alexandrescu
seewebsiteforem...@erdani.org:


If we're doing one allocation per
message passed, that might explain the 4x performance difference (I
have no trouble figuring Java's allocator is this much faster than
D's).


Andrei


Well, what does +1 Variant and +1 LinkedListNode sum up to?


Sorry, I don't understand...

Andrei


There are at least 2 allocations, one for the Variant and one for the
new node in the linked list aka message box. But from what you wrote it
sounds like a Variant doesn't allocate unless the contained data exceeds
some internal storage. Sean found another possible allocation in the
other branch of this discussion.


I understand. The good news is, this looks like low-hanging fruit! I'll 
keep an eye on pull requests in druntime. Thanks to fellow Romanian 
Nicolae Mihalache for contributing the comparison.



Andrei


Re: Formating output of retro

2012-02-09 Thread Dmitry Olshansky

On 09.02.2012 23:50, Brad Anderson wrote:

On Thu, Feb 9, 2012 at 12:41 PM, Jesse Phillips
jessekphillip...@gmail.com mailto:jessekphillips%...@gmail.com wrote:

On Thursday, 9 February 2012 at 18:59:47 UTC, Dmitry Olshansky wrote:

Finally getting to debug std.regex issues, I've found that it
seems like

import std.stdio, std.range;

void main()
{
writefln("%s", retro("abcd"));
}

no longer works, can anyone on older version check if it's
regression?


Works in 2.056


And in 2.057.  It doesn't work in 2.058 beta. Error during compilation:

Error: static assert "Cannot put a Result into a LockingTextWriter"

Regards,
Brad Anderson

Filed; I'm not sure whether it's a Phobos or a compiler issue.
http://d.puremagic.com/issues/show_bug.cgi?id=7476

--
Dmitry Olshansky


Re: Mac OS X 10.5 support

2012-02-09 Thread Sönke Ludwig

Am 09.02.2012 17:20, schrieb Walter Bright:

On 2/9/2012 1:37 AM, Sönke Ludwig wrote:

I have a project that we actually plan to use in production in the
company for
which I work. They still require 10.5 support for their products so
removing
that support would make for a very bad situation here.

But it should be possible to get a 10.5 retail DVD and install it
inside a VM..
I actually planned to do exactly this to support 10.5 nightly builds for
my own D stuff.

If support should be dropped anyway, are the issues only build-related
so that
e.g. gdc would still continue work on 10.5 without further work?



Would it also be possible for you to:

1. debug what has gone wrong with the 10.5 support? I'll be happy to
fold in any resulting patches.

2. provide a remote login shell so we can figure it out?

3. use git bisect to determine which change broke it?


I will try and see if a regular retail version of 10.5 can somehow be 
run in a VM; I will possibly get one tomorrow. Otherwise I'll try to get 
a 10.5 test machine on Monday and see what I can do.


Re: Carmack about static analysis

2012-02-09 Thread Bruno Medeiros

On 24/12/2011 12:42, bearophile wrote:

A new blog post by the very good John Carmack, I like how well readable this 
post is:
http://altdevblogaday.com/2011/12/24/static-code-analysis/



Nice article! I particularly liked this comment:
"The classic hacker disdain for “bondage and discipline languages” is 
short sighted – the needs of large, long-lived, multi-programmer 
projects are just different than the quick work you do for yourself."
It throws a jab at a lot of the obsession with dynamic languages that 
goes on out there.
It's something I've echoed in the past, especially when people compare 
languages using small code snippets, picking on issues/advantages that 
are only significant when writing small-sized code, but not at all for 
medium/large-sized apps (the Hello-World Snippet Fallacy?).



--
Bruno Medeiros - Software Engineer


Re: Carmack about static analysis

2012-02-09 Thread Bruno Medeiros

On 24/12/2011 23:27, Adam D. Ruppe wrote:

On Saturday, 24 December 2011 at 23:12:27 UTC, bearophile wrote:

I was talking about the abundance of (({}){()}) and not about
identifiers length.


It's the same fallacy. I can't read Carmack's mind, but
I'm sure he's talking about shortening code the same way
I would mean it if I said it - simpler concepts, fewer cases,
less repetition.

It's about how much you have to think about, now how much you
have to read/write.


Exactly.
Reminds me of the issues some people have with Java closures/lambdas. 
You write a closure/lambda in Java by creating an anonymous class and 
implementing a method, which is much more verbose than a plain lambda 
syntax like D has (or many other functional languages). But although it 
is much more verbose, it is not actually much more complex; it doesn't 
add much more to think about.
There is, however, another issue with Java's closures/lambdas which is 
not complained about as much, but is actually much more annoying because 
it adds real semantic complexity: the closure can't modify (the 
immediate/head value of) the outer variables. So when modifying is 
necessary, you often see code where people instantiate a one-element 
array; the closure accesses the array (the array reference itself is not 
modified) and mutates the element at index 0, which the outer code then 
retrieves when the closure finishes. This, on the other hand, is a much 
more serious issue, adding complexity to the code beyond mere verbosity.



--
Bruno Medeiros - Software Engineer


Re: [OT] Programming language WATs

2012-02-09 Thread Bruno Medeiros

On 20/01/2012 15:40, Robert Clipsham wrote:

Just came across this amusing 4 minute video:

https://www.destroyallsoftware.com/talks/wat

Anyone have any other WATs you can do in other languages? Bonus points
for WATs you can do in D.



LOL, that was good presentation! :)

--
Bruno Medeiros - Software Engineer


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Sean Kelly
On Feb 9, 2012, at 10:31 AM, Marco Leise wrote:

 Am 09.02.2012, 18:35 Uhr, schrieb Andrei Alexandrescu 
 seewebsiteforem...@erdani.org:
 
 On 2/9/12 6:10 AM, Gor Gyolchanyan wrote:
 Generally, D's message passing is implemented in quite easy-to-use
 way, but far from being fast.
 I dislike the Variant structure, because it adds a huge overhead. I'd
 rather have a templated message passing system with type-safe message
 queue, so no Variant is necessary.
 In specific cases Messages can be polymorphic objects. This will be
 way faster, then Variant.
 
 cc Sean Kelly
 
 I haven't looked at the implementation, but one possible liability is that 
 large messages don't fit in a Variant and must use dynamic allocation under 
 the wraps. There are a number of ways to avoid that, such as parallel arrays 
 (one array per type for data and one for the additional tags).
 
 We must make the message passing subsystem to not use any memory allocation 
 in the quiescent state. If we're doing one allocation per message passed, 
 that might explain the 4x performance difference (I have no trouble figuring 
 Java's allocator is this much faster than D's).
 
 Well, what does +1 Variant and +1 LinkedListNode sum up to?

FWIW, you can use DMD's built in profiler so long as the receiving thread is 
the same as the sending thread:

import std.concurrency;

void main() {
    for(int i = 0; i < 1_000_000; i++) {
        send(thisTid, 12345);
        auto x = receiveOnly!int();
    }
}


I generated timings for this both before and after adding scope to mbox.get():

$ dmd -release -inline -O abc
$ time abc

real0m0.831s
user0m0.829s
sys 0m0.002s

… add scope to mbox.get()

$ dmd -release -inline -O abc
$ time abc

real0m0.653s
user0m0.649s
sys 0m0.003s


And here's the trace log after scope was added.  Notice that there were 61 
calls to GCX.fullcollect().  We can also see that there was 1 allocation per 
send/receive operation, so only an alloc for the message list node.

$ dmd -O -release -profile abc
gladsheim:misc sean$ time abc

real0m11.348s
user0m11.331s
sys 0m0.015s


 Timer Is 3579545 Ticks/Sec, Times are in Microsecs 

  Num          Tree        Func        Per
  Calls        Time        Time        Call

100   437709765   220179413 220 void 
std.concurrency._send!(int)._send(std.concurrency.MsgType, std.concurrency.Tid, 
int)
100   300987757   140736393 140 bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void function(std.concurrency.LinkTerminated)*, pure @safe void 
function(std.concurrency.OwnerTerminated)*, pure @safe void 
function(std.variant.VariantN!(32u).VariantN)*).get(scope nothrow @safe void 
delegate(int), scope pure @safe void function(std.concurrency.LinkTerminated)*, 
scope pure @safe void function(std.concurrency.OwnerTerminated)*, scope pure 
@safe void function(std.variant.VariantN!(32u).VariantN)*)
100   20213160989479808  89 void* gc.gcx.GC.malloc(uint, 
uint, uint*)
  1   8250454225755650157556501 _Dmain
133   11265180052026745  52 void* 
gc.gcx.GC.mallocNoSync(uint, uint, uint*)
 615342234249606106  813214 uint 
gc.gcx.Gcx.fullcollect(void*)
200   16010375342531732  21 bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void function(std.concurrency.LinkTerminated)*, pure @safe void 
function(std.concurrency.OwnerTerminated)*, pure @safe void 
function(std.variant.VariantN!(32u).VariantN)*).get(scope nothrow @safe void 
delegate(int), scope pure @safe void function(std.concurrency.LinkTerminated)*, 
scope pure @safe void function(std.concurrency.OwnerTerminated)*, scope pure 
@safe void function(std.variant.VariantN!(32u).VariantN)*).bool scan(ref 
std.concurrency.List!(std.concurrency.Message).List)
2004201861239837170  19 int 
std.variant.VariantN!(32u).VariantN.handler!(int).handler(std.variant.VariantN!(32u).VariantN.OpID,
 ubyte[32]*, void*)
100   11757202124641771  24 bool 
std.concurrency.MessageBox.get!(nothrow @safe void delegate(int), pure @safe 
void function(std.concurrency.LinkTerminated)*, pure @safe void 
function(std.concurrency.OwnerTerminated)*, pure @safe void 
function(std.variant.VariantN!(32u).VariantN)*).get(scope nothrow @safe void 
delegate(int), scope pure @safe void function(std.concurrency.LinkTerminated)*, 
scope pure @safe void function(std.concurrency.OwnerTerminated)*, scope pure 
@safe void function(std.variant.VariantN!(32u).VariantN)*).bool 
onStandardMsg(ref std.concurrency.Message)
1004728079420418675  20 void 
std.concurrency.Message.map!(nothrow @safe void delegate(int)).map(nothrow 
@safe void delegate(int))
100   31655676715569009  15 int 
std.concurrency.receiveOnly!(int).receiveOnly()
1003631736213212905  

Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Sean Kelly
On Feb 9, 2012, at 11:53 AM, bearophile wrote:

 Marco Leise:
 
 Sean found another possible allocation in the other  
 branch of this discussion.
 
 Maybe this is able to help Sean and similar situations:
 http://d.puremagic.com/issues/show_bug.cgi?id=5070

This would be handy.  I don't always think to check the asm dump when I'm 
working with delegates.

Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Sean Kelly
On Feb 9, 2012, at 11:57 AM, Martin Nowak wrote:

 On Thu, 09 Feb 2012 16:44:46 +0100, Sean Kelly s...@invisibleduck.org wrote:
 
 So a queue per message type?  How would ordering be preserved? Also, how 
 would this work for interprocess messaging?  An array-based queue is an 
 option however (though it would mean memmoves on receive), as are free-lists 
 for nodes, etc.  I guess the easiest thing there would be a lock-free shared 
 slist for the node free-list, though I couldn't weigh the chance of cache 
 misses from using old memory blocks vs. just expecting the allocator to be 
 fast.
 
 I didn't yet get around to polishing my lock-free SList/DList implementations,
 but mutexes should only become a problem with high contention, when you need 
 to block.
 You'd also need some kind of blocking for lock-free lists.

No blocking should be necessary for the lock-free list.  Just try to steal a 
node with a CAS.  If the result was null (i.e. if the list ended up being 
empty), allocate a node via malloc/GC.
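
A rough D sketch of that steal-with-a-CAS free-list (names hypothetical; the 
ABA problem and the release side are deliberately ignored here):

```d
import core.atomic : atomicLoad, cas;
import core.stdc.stdlib : malloc;

struct Node { int payload; Node* next; }

shared(Node*) freeList; // head of the node free-list

// Try to steal a node with a single CAS; if the list turned out to be
// empty (head was null), fall back to allocating a fresh node.
Node* acquireNode()
{
    for (;;)
    {
        auto head = cast(Node*) atomicLoad(freeList);
        if (head is null)
            return cast(Node*) malloc(Node.sizeof);
        if (cas(&freeList, cast(shared) head, cast(shared) head.next))
            return head; // successfully unlinked from the free-list
    }
}
```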

 Best first order optimization would be to allocate the list node 
 deterministically.

Neat idea.  I think I can make that change fairly trivially.

Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Oliver Plow
Hi Nicolae,

I don't know whether you are particularly interested in the case you presented. 
For performance comparison between D and other languages in general there is 
this article that I think is quite good: 
http://janus.cs.utwente.nl:8000/twiki/pub/Composer/DotNetGeneral/csharp-performance.pdf

It is already quite old and stems from 2003. It would be interesting to see how 
the report would look today if the benchmarks were redone. As your 
example suggests, number-crunching-oriented benchmarks like those in this 
report are not always that meaningful for everyday performance issues.

Regards, Oliver

 Original-Nachricht 
 Datum: Thu, 9 Feb 2012 10:06:40 +0100
 Von: Nicolae Mihalache xproma...@gmail.com
 An: digitalmars-d@puremagic.com
 Betreff: Message passing between threads: Java 4 times faster than D

 Hello,
 
 I'm a complete newbie in D and trying to compare with Java. I
 implemented  a simple test for measuring the throughput in message
 passing between threads. I see that Java can pass about 4mil
 messages/sec while D only achieves 1mil/sec. I thought that D should
 be faster.
 
 The messages are simply integers (which are converted to Integer in Java).
 
 The two programs are attached. I tried compiling the D version with
 both dmd and gdc and various optimization flags.
 
 mache

-- 
Empfehlen Sie GMX DSL Ihren Freunden und Bekannten und wir
belohnen Sie mit bis zu 50,- Euro! https://freundschaftswerbung.gmx.de


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Graham St Jack
I suggest using a template-generated type that can contain any of the 
messages to be sent over a channel. It is reasonably straightforward to 
generate all the boilerplate code necessary to make this happen. My 
prototype (attached) still needs work to remove linux dependencies and 
tighten it up, but it works ok. Another advantage of this approach 
(well, I see it as an advantage) is that you declare in a single 
location all the messages that can be sent over the channel, and of 
course the messages are type-safe.


The file of interest is concurrency.d.
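
The single-declaration-point idea might look roughly like this (a hypothetical 
sketch, not the actual concurrency.d API): the channel's template argument list 
is the one place where all legal message types are declared.

```d
import std.typetuple : staticIndexOf;

struct StopMsg {}
struct DataMsg { int value; }

// Every message a channel can carry is declared once, in the
// template argument list; no Variant boxing is involved.
struct Channel(Messages...)
{
    // Accept only declared message types; anything else is a
    // compile-time error, so the queue stays type-safe.
    void put(M)(M msg) if (staticIndexOf!(M, Messages) != -1)
    {
        // enqueue msg (the queue itself is omitted in this sketch)
    }
}

void main()
{
    Channel!(StopMsg, DataMsg) ch;
    ch.put(DataMsg(42));  // ok: DataMsg is declared for this channel
    // ch.put(3.14);      // would not compile: double is not declared
}
```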

On 10/02/12 02:14, Sean Kelly wrote:

So a queue per message type?  How would ordering be preserved? Also, how would 
this work for interprocess messaging?  An array-based queue is an option 
however (though it would mean memmoves on receive), as are free-lists for 
nodes, etc.  I guess the easiest thing there would be a lock-free shared slist 
for the node free-list, though I couldn't weigh the chance of cache misses from 
using old memory blocks vs. just expecting the allocator to be fast.

On Feb 9, 2012, at 6:10 AM, Gor Gyolchanyangor.f.gyolchan...@gmail.com  wrote:


Generally, D's message passing is implemented in quite easy-to-use
way, but far from being fast.
I dislike the Variant structure, because it adds a huge overhead. I'd
rather have a templated message passing system with type-safe message
queue, so no Variant is necessary.
In specific cases Messages can be polymorphic objects. This will be
way faster, then Variant.

On Thu, Feb 9, 2012 at 3:12 PM, Alex_Dovhalalex_dov...@yahoo.com  wrote:

Sorry, my mistake. It's strange to have different 'n', but you measure speed
as 1000*n/time, so it doesn't matter if n is 10 times bigger.





--
Bye,
Gor Gyolchanyan.



--
Graham St Jack



delve.tar.gz
Description: GNU Zip compressed data


Re: Carmack about static analysis

2012-02-09 Thread bearophile
Bruno Medeiros:

 the needs of large, long-lived, multi-programmer 
 projects are just different than the quick work you do for yourself.
 that throws a jab at a lot of the obsession with dynamic languages that 
 goes on out there.
 It's something I've echoed in the past, especially when people start 
 comparing languages to one another using small code snippets, and 
 picking on issues/advantages that actually are only significant when 
 writing small sized code, but not at all for medium/large sized apps. 
 (the Hello-World Snippet Fallacy?)

Many programs are small or very small, and they will keep being small. I write 
many of those. So there are many situations where paying for a lot of 
infrastructure and bondage in your code isn't good. (Haskell programmers 
sometimes don't agree with this idea, but the world of programming is large 
enough for two people with very different opinions to both be acceptably 
right; both are able to find a way to write code in a good enough way.)

From my experience, dynamic languages such as Python are very good (often better 
than D, unless the problem being explored requires a large amount of 
computation) for exploratory programming. Usually such explorations are done 
on short programs, so this is also a special case of the preceding point.

Comparing languages with small code snippets doesn't tell you all you want to 
know about how a language scales to very large programs, of course, so snippets 
aren't enough. But such small snippets are very useful anyway, because large 
programs are mostly made of small parts; and it's still true that being able to 
remove one line from a group of 4 lines sometimes means reducing the size of a 
large program by 10% or more. So little syntax niceties matter even for huge 
programs. This is also why (as an example) Python list comps are very useful for 
programs one million lines of code long too.

Bye,
bearophile


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Sean Kelly
On Feb 9, 2012, at 2:17 PM, Sean Kelly wrote:
 
 Best first order optimization would be to allocate the list node 
 deterministically.
 
 Neat idea.  I think I can make that change fairly trivially.

$ time abc

real0m0.556s
user0m0.555s
sys 0m0.001s

So another 100ms improvement.  Switching to a (__gshared, no mutex) free-list 
that falls back on malloc yields:

$ time abc

real0m0.505s
user0m0.503s
sys 0m0.001s

Not as much of a gain there, and I believe we've eliminated all the allocations 
(though I'd have to do a profile build to verify). Still, that's approaching 
being twice as fast as before, which is definitely something.

Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Adam D. Ruppe
On Thursday, 9 February 2012 at 17:36:01 UTC, Andrei Alexandrescu 
wrote:

Cue the choir: Please submit to Phobos.


Perhaps when I finish the URL struct in there. (It
takes a url and breaks it down into parts you can edit,
and can do rebasing. Currently, the handling of the Location:
header is technically wrong - the http spec says it is supposed
to be an absolute url, but I don't enforce that.

Now, in cgi mode, it doesn't matter, since the web server
fixes it up for us. But, in http mode... well, it still
doesn't matter since the browsers can all figure it out,
but I'd like to do the right thing anyway.)


I might change the http constructor and/or add one
that takes a std.socket socket cuz that would be cool.



But I just don't want to submit it when I still might
be making some big changes in the near future.




BTW, I spent a little time reorganizing and documenting
dom.d a bit more.

http://arsdnet.net/web.d/dom.html

Still not great docs, but if you come from javascript,
I think it is pretty self-explanatory anyway.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread David Nadlinger

On 2/9/12 11:17 PM, Sean Kelly wrote:

On Feb 9, 2012, at 11:57 AM, Martin Nowak wrote:

I didn't yet get around to polishing my lock-free SList/DList implementations,
but mutexes should only become a problem with high contention, when you need to 
block.
You'd also need some kind of blocking for lock-free lists.


No blocking should be necessary for the lock-free list.  Just try to steal a 
node with a CAS.  If the result was null (i.e. if the list ended up being 
empty), allocate a node via malloc/GC.


And the neat thing is that you don't have to worry about node deletion 
as much when you have a GC…


David


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Andrew Wiley
On Thu, Feb 9, 2012 at 3:06 AM, Nicolae Mihalache xproma...@gmail.com wrote:
 Hello,

 I'm a complete newbie in D and trying to compare with Java. I
 implemented  a simple test for measuring the throughput in message
 passing between threads. I see that Java can pass about 4mil
 messages/sec while D only achieves 1mil/sec. I thought that D should
 be faster.

 The messages are simply integers (which are converted to Integer in Java).

 The two programs are attached. I tried compiling the D version with
 both dmd and gdc and various optimization flags.

 mache

I recently completed a message passing library in D that lets the
messages be passed between actors that don't necessarily correspond to
threads (as std.concurrency requires). I'll see how it does on your
benchmark.


Re: OT Adam D Ruppe's web stuff

2012-02-09 Thread Adam D. Ruppe

On Tuesday, 7 February 2012 at 20:00:26 UTC, Adam D. Ruppe wrote:

I'm taking this to an extreme with this:

http://arsdnet.net:8080/



hehehe, I played with this a little bit more tonight.

http://arsdnet.net/dcode/sse/

needs the bleeding edge dom.d from my github.
https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff

Here's the code, not very long.
http://arsdnet.net/dcode/sse/test.d

The best part is this:

 document.mainBody.addEventListener("click", (Element thislol, 
Event event) {

   event.target.style.color = "red";
   event.target.appendText(" clicked!");
   event.preventDefault();
 });


A html onclick handler written in D!



Now, like I said before, probably not usable for real work. What
this does is for each user session, it creates a server side DOM
object.

Using observers on the DOM, it listens for changes and forwards 
them

to Javascript. You use the D api to change your document, and it
sends them down. I've only implemented a couple mutation events,
but they go a long way - appendChild and setAttribute - as they
are the building blocks for many of the functions.

On the client side, the javascript listens for events and forwards
them to D.

To sync the elements on both sides, I added a special feature
to dom.d to put an attribute there that is usable on both sides.
The Makefile in there shows the -version needed to enable it.


Since it is a server side document btw, you can refresh the 
browser
and keep the same document. It could quite plausible gracefully 
degrade!




But, yeah, lots of fun. D rox.


Re: Carmack about static analysis

2012-02-09 Thread Walter Bright

On 2/9/2012 12:09 PM, Bruno Medeiros wrote:

Nice article! I particularly liked this comment:
The classic hacker disdain for “bondage and discipline languages” is short
sighted – the needs of large, long-lived, multi-programmer projects are just
different than the quick work you do for yourself.


I implicitly agree with you. But people have written large programs in dynamic 
languages, and claim it works out equivalently for them. I don't have enough 
experience in that direction to decide if that's baloney or not.


Re: RedMonk rankings

2012-02-09 Thread bcs

On 02/09/2012 09:28 AM, Simen Kjærås wrote:

http://redmonk.com/sogrady/2012/02/08/language-rankings-2-2012/

Kinda interesting, but as with all these things, don't take it as the
word of god. Nice to see D all the way up there, I'd honestly expect it
be lower.


D is neck-and-neck with Go (:D) and behind LISP?!


Re: RedMonk rankings

2012-02-09 Thread Matt Soucy

On 02/09/2012 12:28 PM, Simen Kjærås wrote:

http://redmonk.com/sogrady/2012/02/08/language-rankings-2-2012/

Kinda interesting, but as with all these things, don't take it as the
word of god. Nice to see D all the way up there, I'd honestly expect it
be lower.
I noticed LinkedIn mentioned in the article...so apparently D isn't a 
valid skill there. I can enter it, but it's not standardized.


Re: std.uuid is ready for review

2012-02-09 Thread Robert Jacques

On Thu, 09 Feb 2012 03:57:21 -0600, Johannes Pfau nos...@example.com wrote:

Thanks for your feedback! Comments below:
Am Wed, 08 Feb 2012 23:40:14 -0600
schrieb Robert Jacques sandf...@jhu.edu:


[snip]


All the generators have the function name [name]UUID. Instead, make
these functions static member functions inside UUID and remove the
UUID from the name, i.e. nilUUID -> UUID.nil, randomUUID ->
UUID.random(), etc. I'm not sure if you should also do this for
dnsNamespace, etc. (i.e. dnsNamespace -> UUID.dns) or not.


UUID.nil makes sense and looks better. I don't have an opinion about
the other functions, but struct as namespace vs free functions
has always led to debates here, so I'm not sure if I should change it.
I need some more feedback here first. (Also imho randomUUID() looks
better than UUID.random(), but maybe that's just me)


Hmm... I'd agree that randomUUID reads better than UUID.random. IMO well-named 
free functions are generally better than fake namespaces via structs. However, 
fake namespaces via structs are generally better than fake namespaces via a 
free-function naming convention (i.e. [function][namespace] or 
[namespace][function]). That said, I think the bigger problem is that all these 
functions are effectively constructors. I'd suspect that overloading UUID(...) 
would be a clearer expression of the concepts involved. As for syntax, maybe 
something like UUID(Flag!"random", ...) to disambiguate when necessary.

[snip]



There's an additional toString signature which should be supported.
See std.format.

You're talking about this, right?
const void toString(scope void delegate(const(char)[]) sink);

Nice, when did the writeTo proposal get merged? I must have totally
missed that, actually writeTo is a way better choice here, as it can
avoid memory allocation.


I missed it too; then I saw code using it and smiled.


but it seems to!string doesn't support the new signature?


I think that's worthy of a bug report.


BTW: How should sink interact with pure/safe versions? Can't we just
change that declaration to?

const @safe [pure] void toString(scope @safe pure void
delegate(const(char)[]) sink);


Since the to!, etc. are all templated, adding extra attributes is okay.
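
To illustrate the sink signature being discussed, here is a sketch with a 
hypothetical Id type (not std.uuid's actual implementation): the sink overload 
streams characters straight to the caller-supplied delegate, so no intermediate 
string is allocated.

```d
import std.stdio;

struct Id
{
    ubyte[4] data;

    // std.format calls this overload when present; each piece is
    // handed to the sink delegate, avoiding any heap allocation.
    const void toString(scope void delegate(const(char)[]) sink)
    {
        static immutable hex = "0123456789abcdef";
        foreach (b; data)
        {
            sink(hex[b >> 4 .. (b >> 4) + 1]);
            sink(hex[(b & 0xF) .. (b & 0xF) + 1]);
        }
    }
}

void main()
{
    auto id = Id([0xde, 0xad, 0xbe, 0xef]);
    writefln("%s", id); // formats via the sink overload: deadbeef
}
```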


uuidVersion() -> ver()?

I'm not sure; uuidVersion is indeed quite long, but it is more
descriptive than ver.


Shrug. One's long, one's short, neither is perfect, version is a keyword.


Re: Message passing between threads: Java 4 times faster than D

2012-02-09 Thread Oliver Plow

 I recently completed a message passing library in D that lets the
 messages be passed between actors that don't necessarily correspond to
 threads (as std.concurrency requires). I'll see how it does on your
 benchmark.

Sounds quite interesting. Did you create some kind of thread pool for your 
library? Is this work company-internal, or will it be published? It would be 
cool to have something like that for D.

Cheers, Oliver



Re: D at work

2012-02-09 Thread Pedro Lacerda

 DVM is great for this: https://bitbucket.org/doob/dvm


 DVM sounds well, thanks!


As for use cases, command line is a good bet. I suggest starting with
 something that has a clear scope and isn't chosen based on a marketing
 feature. For example, if you're going to build a server of some sort, be sure
 the project won't grow and require database access (or verify that the
 bindings you'll need are up-to-date beforehand), and while making use of
 const/pure would be good, don't make your design choices around it. Do
 expand/explore and contribute, real-world testing needs to be done, but be
 gentle with it as things are still coming together and you want to show a
 productivity gain and quality.


I'll keep it in mind. Why can't I make design choices around const/pure?



Do you know of any famous company or software publicly running on D? I
searched StackOverflow, Wiki4D and Wikipedia without meaningful results.




2012/2/9 Jacob Carlborg d...@me.com

 On 2012-02-09 02:13, Adam D. Ruppe wrote:

 The way I do it is to try updates at some
 point when I have a little free time.

 Get the new version, but keep the old version.

 Compile. If it works, sweet, probably ok to keep
 it.

 If your app doesn't compile, and it isn't an
 easy fix, just go back to the old release.


 Every two or three releases though, I'll take
 the pain and make sure I'm up to date anyway,
 usually because the new dmd releases are good
 stuff.


 DVM is great for this: https://bitbucket.org/doob/dvm

 --
 /Jacob Carlborg



Re: Arrays - Inserting and moving data

2012-02-09 Thread Pedro Lacerda
I __believe__ that insertInPlace doesn't shift the elements, but uses an
appender to allocate another array instead.
Maybe this function does what you want:


int[] arr = [0,1,2,3,4,5,6,7,8,9];

void maybe(T)(T[] arr, size_t pos, T value) {
    size_t i;
    for (i = arr.length - 1; i > pos; i--) {
        arr[i] = arr[i-1];
    }
    arr[i] = value;
}

maybe(arr, 3, 0);
maybe(arr, 0, 1);
assert(arr == [1, 0, 1, 2, 0, 3, 4, 5, 6, 7]);



2012/2/9 MattCodr matheus_...@hotmail.com

 I have a doubt about the best way to insert and move (not replace) some
 data on an array.

 For example,

 In some cases if I want to do action above, I do a loop moving the data
 until the point that I want and finally I insert the new data there.


 In D I did this:

 begin code
 .
 .
 .
   int[] arr = [0,1,2,3,4,5,6,7,8,9];

   arr.insertInPlace(position, newValue);
   arr.popBack();
 .
 .
 .
 end code


 After the insertInPlace my array changed its length to 11, so I use
 arr.popBack(); to keep the array length == 10.

 The code above is working well; I just want to know if there is a better way?

 Thanks,

 Matheus.



Re: Arrays - Inserting and moving data

2012-02-09 Thread MattCodr

On Thursday, 9 February 2012 at 12:51:09 UTC, Pedro Lacerda wrote:

I __believe__ that insertInPlace doesn't shift the elements,


Yes, it appears that it really doesn't shift the array; 
insertInPlace just returns a new array with the new element in 
the n-th position.




Maybe this function do what you want.


  int[] arr = [0,1,2,3,4,5,6,7,8,9];

  void maybe(T)(T[] arr, size_t pos, T value) {
      size_t i;
      for (i = arr.length - 1; i > pos; i--) {
          arr[i] = arr[i-1];
      }
      arr[i] = value;
  }




In fact, I usually write functions as you did. I was just looking for 
a new way to do that with D and the Phobos lib.


Thanks,

Matheus.


Re: Arrays - Inserting and moving data

2012-02-09 Thread Ali Çehreli

On 02/09/2012 03:47 AM, MattCodr wrote:

I have a doubt about the best way to insert and move (not replace) some
data on an array.

For example,

In some cases if I want to do action above, I do a loop moving the data
until the point that I want and finally I insert the new data there.


In D I did this:

begin code
.
.
.
int[] arr = [0,1,2,3,4,5,6,7,8,9];

arr.insertInPlace(position, newValue);
arr.popBack();
.
.
.
end code


After the insertInPlace my array changed its length to 11, so I use
arr.popBack(); to keep the array length == 10.

The code above is working well; I just want to know if there is a better way?

Thanks,

Matheus.


Most straightforward that I know of is the following:

arr = arr[0 .. position] ~ [ newValue ] ~ arr[position + 1 .. $];

But if you don't actually want to modify the data, you can merely access 
the elements in-place by std.range.chain:


import std.stdio;
import std.range;

void main()
{
int[] arr = [0,1,2,3,4,5,6,7,8,9];
immutable position = arr.length / 2;
immutable newValue = 42;

auto r = chain(arr[0 .. position], [ newValue ], arr[position + 1 
.. $]);

writeln(r);
}

'r' above is a lazy range that just provides access to the three ranges 
given to it. 'arr' does not change in any way.


Ali


Re: A GUI library to begin with

2012-02-09 Thread Zachary Lund

On Wednesday, 8 February 2012 at 22:21:35 UTC, AaronP wrote:

On 02/08/2012 09:24 AM, Jesse Phillips wrote:
I think GtkD is stated to suck because it isn't native to 
Windows or

Mac, both in look and availability.



Hmm, perhaps. Incidentally, it looks great on Linux! :P


GTK+ was created for GIMP, which incidentally was made as an 
open-source alternative to Photoshop that worked correctly on 
platforms outside of Windows. Linux and FreeBSD just so happen to 
be large targets here.


Re: Arrays - Inserting and moving data

2012-02-09 Thread H. S. Teoh
On Thu, Feb 09, 2012 at 10:30:22AM -0800, Ali Çehreli wrote:
[...]
 But if you don't actually want to modify the data, you can merely
 access the elements in-place by std.range.chain:
 
 import std.stdio;
 import std.range;
 
 void main()
 {
 int[] arr = [0,1,2,3,4,5,6,7,8,9];
 immutable position = arr.length / 2;
 immutable newValue = 42;
 
 auto r = chain(arr[0 .. position], [ newValue ], arr[position +
 1 .. $]);
 writeln(r);
 }
 
 'r' above is a lazy range that just provides access to the three
 ranges given to it. 'arr' does not change in any way.
[...]

Wow! This is really cool. So you *can* have O(1) insertions in the
middle of an array after all. :)

Of course, you probably want to flatten it once in a while to keep
random access cost from skyrocketing. (I'm assuming delegates or
something equivalent are involved in generating the lazy range?)


T

-- 
Give a man a fish, and he eats once. Teach a man to fish, and he will sit 
forever.


Compiler error with static vars/functions

2012-02-09 Thread Oliver Plow
Hello,

I'm fighting with a strange compiler error. This here compiles and runs fine:

-- main.d -

import std.stdio;

class Foo
{
static int z = 4;
static int bar() { return 6; }
int foobar() { return 7; }
}

int main(string[] argv)
{
writeln(Foo.z);
writeln(Foo.bar()); // produces 6
Foo f;
writeln(f.bar()); // produces 6;
writeln(f.foobar());
return 0;
}

Whereas this does not compile:


-- main.d -

import std.stdio;
import Foo;

int main(string[] argv)
{
writeln(Foo.z); // Error: undefined identifier module Foo.z
writeln(Foo.bar()); // Error: undefined identifier module Foo.bar
Foo f;
writeln(f.bar());
writeln(f.foobar());
return 0;
}

-- Foo.d --

class Foo
{
public static int z = 4;
public static int bar() { return 6; }
public int foobar() { return 7; }
}

This is a bit strange for me. Apparently, must be some kind of import problem 
importing Foo. But I don't see how ...

Thanks for any hints.
Cheers, Oliver




Re: Compiler error with static vars/functions

2012-02-09 Thread Jonathan M Davis
On Thursday, February 09, 2012 14:57:08 Oliver Plow wrote:
 Hello,
 
 I'm fighting with a strange compiler error. This here compiles and runs
 fine:
 
[snip]

 This is a bit strange for me. Apparently, must be some kind of import
 problem importing Foo. But I don't see how ...

It's because you named both your module and type Foo. So, when you import Foo, 
Foo.z is looked up as a module-level symbol in Foo (which does not exist). You'd 
need to do Foo.Foo.z. If your module were named something completely different 
(e.g. xyzzy), then Foo.z would refer to your class Foo's z variable, and 
it would work.

Normally, it's considered good practice to give modules names which are all 
lowercase (particularly since some OSes aren't case-sensitive for file
operations). Renaming your module to foo should fix your problem.
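
A sketch of the suggested fix, assuming the module file is renamed to 
lowercase foo:

```d
// --- foo.d --- (module name no longer collides with the class name)
module foo;

class Foo
{
    static int z = 4;
    static int bar() { return 6; }
}

// --- main.d ---
import std.stdio;
import foo;

void main()
{
    writeln(Foo.z);     // Foo now unambiguously names the class
    writeln(Foo.bar()); // prints 6
}
```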

- Jonathan M Davis


Re: Compiler error with static vars/functions

2012-02-09 Thread simendsjo

On 02/09/2012 02:57 PM, Oliver Plow wrote:

Hello,

I'm fighting with a strange compiler error. This here compiles and runs fine:

-- main.d -

class Foo
{
 static int z = 4;
 static int bar() { return 6; }
 int foobar() { return 7; }
}

int main(string[] argv)
{
 writeln(Foo.z);
 writeln(Foo.bar()); // produces 6
 Foo f;
 writeln(f.bar()); // produces 6;
 writeln(f.foobar());
 return 0;
}

Whereas this does not compile:


-- main.d -

import Foo;

int main(string[] argv)
{
 writeln(Foo.z);// Error: undefined identifier module Foo.z
 writeln(Foo.bar()); // Error: undefined identifier module Foo.bar
 Foo f;
 writeln(f.bar());
 writeln(f.foobar());
 return 0;
}

-- Foo.d --

class Foo
{
public static int z = 4;
public static int bar() { return 6; }
public int foobar() { return 7; }
}

This is a bit strange for me. Apparently, must be some kind of import problem 
importing Foo. But I don't see how ...

Thanks for any hints.
Cheers, Oliver




As your class is named the same as your module, writeln(Foo.z) looks for 
z in module Foo. Foo.Foo.z should give you module.class.symbol.


Re: Arrays - Inserting and moving data

2012-02-09 Thread MattCodr

On Thursday, 9 February 2012 at 18:30:22 UTC, Ali Çehreli wrote:

On 02/09/2012 03:47 AM, MattCodr wrote:
I have a doubt about the best way to insert and move (not 
replace) some

data on an array.

For example,

In some cases if I want to do action above, I do a loop moving 
the data
until the point that I want and finally I insert the new data 
there.



In D I did this:

begin code
.
.
.
int[] arr = [0,1,2,3,4,5,6,7,8,9];

arr.insertInPlace(position, newValue);
arr.popBack();
.
.
.
end code


After the insertInPlace my array changed it's length to 11, so 
I use

arr.popBack(); to keep the array length = 10;

The code above is working well, I just want know if is there a 
better way?


Thanks,

Matheus.


Most straightforward that I know of is the following:

   arr = arr[0 .. position] ~ [ newValue ] ~ arr[position + 1 
.. $];


But if you don't actually want to modify the data, you can 
merely access the elements in-place by std.range.chain:


import std.stdio;
import std.range;

void main()
{
   int[] arr = [0,1,2,3,4,5,6,7,8,9];
   immutable position = arr.length / 2;
   immutable newValue = 42;

   auto r = chain(arr[0 .. position], [ newValue ], 
arr[position + 1 .. $]);

   writeln(r);
}

'r' above is a lazy range that just provides access to the 
three ranges given to it. 'arr' does not change in any way.


Ali


Hi Ali,

You gave me a tip with this chain feature.

I changed a few lines of your code, and it worked as I wanted:


import std.stdio;
import std.range;
import std.array;

void main()
{
int[] arr = [0,1,2,3,4,5,6,7,8,9];
immutable position = arr.length / 2;
immutable newValue = 42;

auto r = chain(arr[0 .. position], [ newValue ], arr[position .. $-1]);

arr = array(r);

foreach(int i; arr)
writefln("%d", i);
}


Thanks,

Matheus.





Re: Arrays - Inserting and moving data

2012-02-09 Thread Ali Çehreli

On 02/09/2012 11:03 AM, H. S. Teoh wrote:
 On Thu, Feb 09, 2012 at 10:30:22AM -0800, Ali Çehreli wrote:
 [...]
 But if you don't actually want to modify the data, you can merely
 access the elements in-place by std.range.chain:

 import std.stdio;
 import std.range;

 void main()
 {
  int[] arr = [0,1,2,3,4,5,6,7,8,9];
  immutable position = arr.length / 2;
  immutable newValue = 42;

  auto r = chain(arr[0 .. position], [ newValue ], arr[position + 1 .. $]);
  writeln(r);
 }

 'r' above is a lazy range that just provides access to the three
 ranges given to it. 'arr' does not change in any way.
 [...]

 Wow! This is really cool. So you *can* have O(1) insertions in the
 middle of an array after all. :)

 Of course, you probably want to flatten it once in a while to keep
 random access cost from skyrocketing.

O(1) would be violated only if there are too many actual ranges.

 (I'm assuming delegates or
 something equivalent are involved in generating the lazy range?)

Simpler than that. :) The trick is that chain() returns a range object 
that operates lazily. I have used chain() as an example of finite 
RandomAccessRange types (I used the name 'Together' instead of Chain). 
Search for "Finite RandomAccessRange" here:


  http://ddili.org/ders/d.en/ranges.html

And yes, I note there that the implementation is not O(1). Also look 
under the title "Laziness" in that chapter.


Ali
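To make the laziness concrete, here is a minimal sketch of a chain-style range over two int slices, in the spirit of the 'Together' example mentioned above (this is an illustration, not the actual Phobos chain() implementation):

```d
import std.stdio;

// A minimal chain-like range over two int slices.
// No elements are copied; the range just walks the first
// slice, then the second.
struct Together
{
    int[] first;
    int[] second;

    @property bool empty() const
    {
        return first.length == 0 && second.length == 0;
    }

    @property int front() const
    {
        return first.length ? first[0] : second[0];
    }

    void popFront()
    {
        if (first.length)
            first = first[1 .. $];
        else
            second = second[1 .. $];
    }

    @property size_t length() const
    {
        return first.length + second.length;
    }

    // Each access is cheap, but indexing must first decide
    // which underlying slice the element lives in.
    int opIndex(size_t i) const
    {
        return i < first.length ? first[i] : second[i - first.length];
    }
}

void main()
{
    auto r = Together([0, 1, 2], [42, 3, 4]);
    writeln(r); // [0, 1, 2, 42, 3, 4]
}
```

With many nested chains the per-access branching adds up, which is why flattening with std.array.array once in a while keeps random access cheap.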



Re: Compiler error with static vars/functions

2012-02-09 Thread bearophile
Jonathan M Davis:

 Normally, it's considered good practice to give modules names which are all 
 lowercase (particularly since some OSes aren't case-sensitive for file
 operations).

That's just a fragile work-around for a module system design problem that I 
haven't liked since the first day I saw D. I'll take a look in Bugzilla to see 
if there is already something on this.

Bye,
bearophile


Re: A GUI library to begin with

2012-02-09 Thread Denis Shelomovskij

On 08.02.2012 7:55, Mr. Anonymous wrote:

Why does GTK suck? (I read that a couple of times.)


GtkD (+OpenGL) worked stably in my rather big D1+Tango project 2 years 
ago (and still does now). It looks like it has lots of memory leaks (in 
almost every function call), but that didn't lead to a crash even after a 
few hours of running (though my program has no big text buffers).


Re: A GUI library to begin with

2012-02-09 Thread Damian Ziemba
On Wednesday, 8 February 2012 at 03:55:41 UTC, Mr. Anonymous 
wrote:

Hello,

I want to start playing with D, and I'm looking at a GUI 
library to begin with.

From what I see here:
http://www.prowiki.org/wiki4d/wiki.cgi?GuiLibraries
I have four choices:
GtkD, DWT, DFL, DGui.

Has anyone tried these? Any suggestions?
What is the status of DWT? What's the difference between DFL 
and DGui? Why does GTK suck? (I read that a couple of times.)


Thanks.



GtkD seems to be the most mature and production-ready for D.
Although indeed, Gtk+ (and thus GtkD) suffers from its lack of 
native controls.


The best solution would be QtD, but it looks like it's abandoned. 
QtJambi isn't officially supported by Trolltech (Nokia, whatever 
:D) any more, so switching to Smoke would be a must.


WxD works quite well; you need to keep in mind that it crashes 
with DMD64, while GDC and LDC work fine.


DWT could be nice if it gets 64-bit support and a Mac/Cocoa port 
too.


DFL seems to be Windows only? Though I guess it isn't maintained 
anymore.



The situation with D and GUIs is kinda poor.
I see hope in Andrej's research into wxPHP and bringing it to D.
I see hope in reviving the QtD project; it used to be the 
flagship product next to DWT for D.
DWT could be nice too if 64-bit support for Windows/Linux and 
Cocoa lands.




As for now, I would use GtkD ;-)


Re: A GUI library to begin with

2012-02-09 Thread Damian Ziemba
Ach, and there is a plugin for the Windows Gtk+ runtime called 
Wimp which emulates the Windows native look, so the situation with 
GtkD isn't so bad on Linux/FreeBSD and Windows.


I guess the biggest problem is the Mac OS X platform.

MonoDevelop looks so f**king ugly on Mac :D


Re: Arrays - Inserting and moving data

2012-02-09 Thread MattCodr

On Thursday, 9 February 2012 at 19:49:43 UTC, Timon Gehr wrote:
Note that this code does the same, but is more efficient if you 
don't actually need the array:


Yes, I know. In fact I need to re-think the way I code with these 
new features of D, like ranges for example.


Thanks,

Matheus.


Re: A GUI library to begin with

2012-02-09 Thread Jordi Sayol
On 09/02/12 21:25, Damian Ziemba wrote:
 
 GtkD seems to be the most mature and production-ready for D.
 Although indeed, Gtk+ (and thus GtkD) suffers from its lack of native 
 controls.
 
 The best solution would be QtD, but it looks like it's abandoned. QtJambi 
 isn't officially supported by Trolltech (Nokia, whatever :D) any more, so 
 switching to Smoke would be a must.
 
 WxD works quite well; you need to keep in mind that it crashes with DMD64, 
 while GDC and LDC work fine.
 
 DWT could be nice if it gets 64-bit support and a Mac/Cocoa port too.
 
 DFL seems to be Windows only? Though I guess it isn't maintained anymore.
 
 
 The situation with D and GUIs is kinda poor.
 I see hope in Andrej's research into wxPHP and bringing it to D.
 I see hope in reviving the QtD project; it used to be the flagship product 
 next to DWT for D.
 DWT could be nice too if 64-bit support for Windows/Linux and Cocoa lands.
 
 
 
 As for now, I would use GtkD ;-)
 

There is some other interesting option, but in an early stage:
http://repo.or.cz/w/girtod.git
-- 
Jordi Sayol


Re: A GUI library to begin with

2012-02-09 Thread maarten van damme
I used GtkD; it worked perfectly. The only downside is that it isn't 
native on Windows.


Re: Compiler error with static vars/functions

2012-02-09 Thread Jonathan M Davis
On Thursday, February 09, 2012 14:45:43 bearophile wrote:
 Jonathan M Davis:
  Normally, it's considered good practice to give modules names which are
  all lowercase (particularly since some OSes aren't case-sensitive for
  file operations).
 
 That's just a fragile work-around for a module system design problem that I
 haven't liked since the first day I saw D. I'll take a look in Bugzilla to
 see if there is already something on this.

What design problem? The only design problem I see is the fact that some OSes 
were badly designed to be case insensitive when dealing with files, and that's 
not a D issue.

- Jonathan M Davis


Re: Compiler error with static vars/functions

2012-02-09 Thread Jonathan M Davis
On Thursday, February 09, 2012 22:42:17 Oliver Plow wrote:
 Thanks for the answer. This means that all classes belonging to the same
 module must be in the same *.d file? I mean not one *.d file per class as
 in most languages?

There is no connection between modules and classes other than the fact that 
they have to go into modules (like all code in D does). You could have 1000 
public classes in the same module if you wanted to (though obviously that 
would be a maintenance nightmare). structs, classes, and free functions can 
all mix in a single module, and the module's name can be anything you want as 
long as it's a valid symbol name. It doesn't have to match any of the symbol 
names within the module.
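For illustration, a sketch of a single module mixing structs, classes, and free functions (the module and type names here are made up):

```d
module shapes;  // the module name need not match any type inside

import std.math : PI;

struct Point
{
    double x, y;
}

class Circle
{
    Point center;
    double radius;

    this(Point c, double r)
    {
        center = c;
        radius = r;
    }

    double area() const { return PI * radius * radius; }
}

class Square
{
    double side;

    this(double s) { side = s; }

    double area() const { return side * side; }
}

// A free function living alongside the aggregates in the same module.
double unitCircleArea()
{
    return new Circle(Point(0, 0), 1).area();
}
```

All of these are public symbols of the one module `shapes`, with no one-type-per-file restriction.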

And I'd dispute the "most languages" bit. The only languages that I'm aware of 
which make such a connection are Java and C#, and I'm not even sure that C# is 
that strict about it (it's been a while since I programmed in C#). I believe 
that the one-public-class-per-file requirement is something that Java 
introduced and which is not common among programming languages in general.

- Jonathan M Davis

