Re: Optional extra return value? Multiple return values with auto?

2012-08-15 Thread ReneSac

On Wednesday, 15 August 2012 at 01:22:41 UTC, Era Scarecrow wrote:

On Wednesday, 15 August 2012 at 00:37:32 UTC, ReneSac wrote:
And my last question from my first post: I can't use auto for 
the out values, right? Would an enhancement proposal like this 
be compatible with D?


 I would say no. Maybe if it was a union, but I don't think 
so. It still needs to know its type when it's being worked with, 
i.e. statically typed (known at compile time).


 The auto as an out variable may work in an interpreted or more 
dynamic language.


It was the reverse way (as in my first post):

bool bar(out ulong output){ output = 0 ; return true;}

auto check = bar(auto num); // error

The type of the out parameter is explicit in this case, but, 
right now, I need to declare num outside the function call, and thus 
I can't use the auto keyword.


I'm not sure how this would work for templatized out parameters 
(if they are permitted)...


Re: Optional extra return value? Multiple return values with auto?

2012-08-14 Thread ReneSac
Thanks, this indeed works. One downside of this solution (which 
only becomes obvious once your program starts to behave 
weirdly...): it needs a different dummy for each optional out 
value of a function, or else multiple callers will end up 
modifying the same dummy.


And, of course, a different dummy for each type of out value, 
because values after cast() apparently aren't lvalues.
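
For illustration, a minimal sketch (the names are made up) of why a cast can't be used to reuse one dummy across types: the result of cast() is an rvalue, and ref/out parameters need an lvalue.

void takesOut(out ulong value) { value = 42; }

void main()
{
    uint  dummy32;
    ulong dummy64;

    // takesOut(cast(ulong) dummy32); // error: cast(ulong)dummy32 is not an lvalue
    takesOut(dummy64);                // fine: dummy64 is an lvalue of the matching type
}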


And my last question from my first post: I can't use auto for the 
out values, right? Would an enhancement proposal like this be 
compatible with D?


Also, the function duplication workaround doesn't work if I 
templatize the function... Is this inconsistency intentional?



On Saturday, 11 August 2012 at 23:23:48 UTC, Ali Çehreli wrote:


I am not a fan of this solution either. To make the code 
compile, define dummy as static:


static T dummy = T.init;  // -- this works

That way there will be just one copy for the entire type, 
instead of one copy per object.


Ali
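
Putting Ali's fix together with the class from the original question, a minimal sketch of how it could look (the assignment to 42 is added just for illustration):

class a(T)
{
    // One copy per instantiated type (a!uint, a!ulong, ...), not per object,
    // so it can be used as a default argument without needing 'this'.
    static T dummy = T.init;

    bool foo(int x, out T optional = dummy)
    {
        optional = 42;
        return true;
    }
}

void main()
{
    auto c = new a!uint();
    c.foo(5);            // caller ignores the optional value; the write goes to the shared dummy

    uint result;
    c.foo(5, result);    // caller wants the value
    assert(result == 42);
}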


Re: Optional extra return value? Multiple return values with auto?

2012-08-11 Thread ReneSac

On Tuesday, 24 July 2012 at 05:30:49 UTC, Ali Çehreli wrote:

The options that I can think of:

- Return a struct (or a class) where one of the members is not 
filled-in


- Similarly, return a tuple


This is awkward, and doesn't look good for performance.



- Use an out parameter, which can have a default lvalue:

int g_default_param;

void foo(ref int i = g_default_param)
{
    // Compare addresses to detect whether the caller passed its own
    // variable or the module-level default.
    if (&i == &g_default_param) {
        // The caller is not interested in 'i'

    } else {
        // The caller wants 'i'
        i = 42;
    }
}

void main()
{
    foo();

    int i;
    foo(i);
    assert(i == 42);
}


This does not work inside a class. I'm not sure what default 
value I should use when I don't know the type that will be passed in:


class a(T) {
    T dummy = T.init;
    bool foo(int a, out T optional = dummy)
    {
        return true;
    }
}

void main() {
    auto c = new a!uint();
    c.foo(5);
}

I get the following error:

Error: need 'this' to access member dummy



Optional extra return value? Multiple return values with auto?

2012-07-23 Thread ReneSac
How can I return multiple values in D, with one of them being 
optional? I tried the 'out' hack to achieve multiple return 
values, but it didn't accept a default argument: it needed an 
lvalue in the calling function.


In Lua, for example, one can do:

function foo(input)
    -- calculations --
    return result, iterations_needed
end

a, stats = foo()

or, if you are only interested in the result:

a = foo() -- the second return is automatically discarded.

Do I really have to duplicate the function, in order to achieve 
this?
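
For comparison with the Lua snippet, here is a minimal sketch of the tuple approach suggested elsewhere in this thread, using std.typecons.Tuple (the field names are made up):

import std.typecons : Tuple;

Tuple!(int, "result", int, "iterations") foo()
{
    // -- calculations --
    return typeof(return)(42, 7);
}

void main()
{
    auto both = foo();
    auto a = both.result;       // use both values...
    auto n = both.iterations;

    auto b = foo().result;      // ...or just take the result and ignore the rest
}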


Also, is there some way to use auto with multiple returns? Like:

bool bar(out ulong output){ output = 0 ; return true;}

auto check = bar(auto num); // error

Right now, I need to declare num outside the function call, and thus 
I can't use the auto keyword.


Re: D 50% slower than C++. What I'm doing wrong?

2012-04-16 Thread ReneSac

On Monday, 16 April 2012 at 07:28:25 UTC, Andrea Fontana wrote:

Are you on linux/windows/mac?


Windows.

My main question is now: *WHY* is D slower than C++ in this 
program? The code is identical (even the same C functions) in the 
performance-critical parts, I'm using the same compiler backend 
(gdc/g++), and D is supposed to be a fast compiled language. Yet 
it is up to 50% slower.


What is D doing beyond what C++ does in this program that accounts 
for the lost CPU cycles? Or what prevents the D program from being 
optimized to the C++ level? The D front-end?


Re: D 50% slower than C++. What I'm doing wrong?

2012-04-16 Thread ReneSac

On Monday, 16 April 2012 at 22:58:08 UTC, Timon Gehr wrote:

On 04/17/2012 12:24 AM, ReneSac wrote:

Windows.



DMC runtime!
DMC = Digital Mars Compiler? Does Mingw/GDC use that? I think 
that both the g++ and GDC compiled binaries use the mingw runtime, 
but I'm not sure.


No. They are not the same. The performance difference is 
probably explained by the dmc runtime vs. glibc difference, 
because your biased results are not reproducible on a linux 
system where glibc is used for both versions.

Ok, I will benchmark on linux later.

This is a fallacy. Benchmarks can only compare implementations, 
not languages. Furthermore, it is usually the case that 
benchmarks that have surprising results don't measure what they 
intend to measure. Your program is apparently rather I/O bound.
Yeah, I'm comparing the implementations, and I made it clear that 
the GDC front-end may be the bottleneck.


And I don't think it is I/O bound. It is only around 10MB/s, 
whereas my HD can do ~100MB/s. Furthermore, on more compressible 
files, where the speed was higher, the difference between 
D and C++ was higher too. And if it is in fact I/O bound, then D is 
MORE than 50% slower than C++.


The difference is likely because of differences in external C 
libraries.
Both the D and C++ versions use C's stdio library. What is the 
difference?




Re: D 50% slower than C++. What I'm doing wrong?

2012-04-15 Thread ReneSac
On Sunday, 15 April 2012 at 02:56:21 UTC, Joseph Rushton Wakeling 
wrote:
On Saturday, 14 April 2012 at 19:51:21 UTC, Joseph Rushton 
Wakeling wrote:
GDC has all the regular gcc optimization flags available 
IIRC. The ones on the

GDC man page are just the ones specific to GDC.
I'm not talking about compiler flags, but the inline keyword 
in the C++ source
code. I saw some discussion about @inline but it seems not 
implemented (yet?).

Well, that is not a priority for D anyway.

About compiler optimizations, -finline-functions and -fweb are 
part of -O3. I tried to compile with -no-bounds-check, but it made 
no difference for DMD or GDC.

It probably is part of -release as q66 said.


Ah yes, you're right.  I do wonder if your seeming speed 
differences are magnified because the whole operation is only 
2-4 seconds long: if your algorithm were operating over a 
longer timeframe I think you'd likely find the relative speed 
differences decrease.  (I have a memory of a program that took 
~0.004s with a C/C++ version and 1s with D, and the difference 
seemed to be just startup time for the D program.)

Well, this doesn't seem to be true:

1.2MB compressible
encode:
C++:   0.11s (100%)
D-inl: 0.14s (127%)
decode:
C++:   0.12s (100%)
D-inl: 0.16s (133%)

~200MB compressible
encode:
C++:   17.2s (100%)
D-inl: 21.5s (125%)
decode:
C++:   16.3s (100%)
D-inl: 24.5s (150%)

3.8GB, barely compressible
encode:
C++:   412s  (100%)
D-inl: 512s  (124%)


What really amazes me is the difference between g++, DMD and 
GDC in size of the executable binary.  100 orders of magnitude!
I have remarked on it in another topic before, with a simple Hello 
World. I need to update it there, now that I have DMD working. BTW, 
it is 2 orders of magnitude.




3 remarks about the D code.  One is that much of it still seems 
very C-ish; I'd be interested to see how speed and executable 
size differ if things like the file opening, or the reading of 
characters, are done with more idiomatic D code.
 Sounds stupid as the C stuff should be fastest, but I've been 
surprised sometimes at how using idiomatic D formulations can 
improve things.


Well, it may indeed be faster, especially the I/O, which depends 
on things like buffering and so on. But for this I just 
wanted something as close to the C++ code as possible.
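
For reference, a minimal sketch of what the more idiomatic I/O mentioned above could look like (file names and buffer size are arbitrary; this is not the benchmarked code):

import std.stdio;

void main()
{
    auto input  = File("test.bmp", "rb");
    auto output = File("test.fpaq0", "wb");

    foreach (ubyte[] chunk; input.byChunk(64 * 1024))
    {
        // ... feed each byte of 'chunk' to the coder here ...
        output.rawWrite(chunk);  // placeholder: real code would write the coded bytes
    }
}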


Second remark is more of a query -- can Predictor.p() and 
.update really be marked as pure?  Their result for a given 
input actually varies depending on the current values of cxt 
and ct, which are modified outside of function scope.
Yeah, I don't know. I just threw those qualifiers at the compiler 
and saw what stuck. And I was testing the decode speed specifically 
so I could easily see if the output was corrupted. 
But maybe it wasn't corrupted because the compiler doesn't 
optimize based on pure yet... there was no speed difference 
either... so...
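
For context, a minimal sketch (not the actual pastebin code) of why the compiler accepts pure here: in D2 a member function is only "weakly pure", because the mutable this reference is itself a parameter, so the method may still read and write the object's fields.

struct Predictor
{
    int cxt = 1;
    int[2][512] ct;

    // Accepted as pure: no global state is touched, but the result still
    // depends on (and mutates) fields reached through the mutable 'this',
    // so calls cannot be cached across mutations.
    void update(int y) pure nothrow
    {
        ++ct[cxt][y];
        cxt = 1;
    }
}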




Third remark -- again a query -- why the GC.disable ... ?
Just to be sure that the speed difference wasn't caused by the 
GC. It didn't make any speed difference either, and it is indeed a 
bad idea.





D 50% slower than C++. What I'm doing wrong?

2012-04-14 Thread ReneSac
I have this simple binary arithmetic coder in C++ by Mahoney, 
translated to D by Maffi. I added nothrow, final and pure, 
and GC.disable where it was possible, but that didn't make much 
difference. Adding const to Predictor.p() (as in the C++ 
version) gave 3% higher performance. Here are the two versions:


http://mattmahoney.net/dc/  -- original zip

http://pastebin.com/55x9dT9C  -- Original C++ version.
http://pastebin.com/TYT7XdwX  -- Modified D translation.

The problem is that the D version is 50% slower:

test.fpaq0 (16562521 bytes) - test.bmp (33159254 bytes)

Lang| Comp  | Binary size | Time (lower is better)
C++  (g++)  -  13kb   -  2.42s  (100%)   -O3 -s
D(DMD)  - 230kb   -  4.46s  (184%)   -O -release -inline
D(GDC)  -1322kb   -  3.69s  (152%)   -O3 -frelease -s


The only difference I could see between the C++ and D versions is 
that C++ has hints to the compiler about which functions to 
inline, and I couldn't find anything similar in D. So I manually 
inlined the encode and decode functions:


http://pastebin.com/N4nuyVMh  - Manual inline

D(DMD)  - 228kb   -  3.70s  (153%)   -O -release -inline
D(GDC)  -1318kb   -  3.50s  (144%)   -O3 -frelease -s

Still, the D version is slower. What makes this speed difference? 
Is there any way to side-step this?


Note that this simple C++ version can be made more than 2 times 
faster with algorithmic and I/O optimizations, (ab)using 
templates, etc. So I'm not asking for generic speed 
optimizations, but only for things that may make the D code 
closer to the C++ code.
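
As a present-day aside (this did not exist in the 2012-era compilers benchmarked here), newer DMD front-ends added an explicit inline hint, which is the closest D analogue of the C++ hint discussed above; a minimal sketch with a made-up helper function:

// Requires a front-end newer than the ones used in this thread.
pragma(inline, true)
int hotStep(int bit, int prediction) pure nothrow
{
    // ... body of a hot-path helper the compiler is asked to inline ...
    return bit ^ (prediction >> 11);
}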


Re: D 50% slower than C++. What I'm doing wrong?

2012-04-14 Thread ReneSac
I tested the q66 version on my computer (Sandy Bridge @ 4.3GHz). 
Repeating the old timings here; the new results are marked 
as D-v2:


test.fpaq0 (16562521 bytes) - test.bmp (33159254 bytes)

Lang| Comp  | Binary size | Time (lower is better)
C++  (g++)  -  13kb   -  2.42s  (100%)   -O3 -s
D(DMD)  - 230kb   -  4.46s  (184%)   -O -release -inline
D(GDC)  -1322kb   -  3.69s  (152%)   -O3 -frelease -s
D-v2 (DMD)  - 206kb   -  4.50s  (186%)   -O -release -inline
D-v2 (GDC)  - 852kb   -  3.65s  (151%)   -O3 -frelease -s

So, basically the same thing... Not using classes seems a little 
slower on DMD, and makes no difference on GDC. The 
if (++ct[cxt][y] > 65534) made a very small but measurable 
difference (those .04s in GDC). The if ((cxt += cxt + y) >= 512) 
only made the code more complicated, with no speed benefit.


But the input file is also important. The file you tested seems 
to be an already compressed one, or something not very 
compressible. Here is a test with an incompressible file:


pnad9huff.fpaq0 (43443040 bytes) - test-d.huff (43617049 bytes)

C++  (g++)  -  13kb   -  5.13   (100%)   -O3 -s
D-v2 (DMD)  - 206kb   -  8.03   (156%)   -O -release -inline
D-v2 (GDC)  - 852kb   -  7.09   (138%)   -O3 -frelease -s
D-inl(DMD)  - 228kb   -  6.93   (135%)   -O -release -inline
D-inl(GDC)  -1318kb   -  6.86   (134%)   -O3 -frelease -s

The C++ advantage becomes smaller in this file. D-inl is my 
manual inline version, with your small optimization on 
Predictor.Update().


On Saturday, 14 April 2012 at 19:51:21 UTC, Joseph Rushton 
Wakeling wrote:
GDC has all the regular gcc optimization flags available IIRC.  
The ones on the GDC man page are just the ones specific to GDC.
I'm not talking about compiler flags, but the inline keyword in 
the C++ source code. I saw some discussion about @inline but it 
seems not implemented (yet?). Well, that is not a priority for D 
anyway.



About compiler optimizations, -finline-functions and -fweb are 
part of -O3. I tried to compile with -no-bounds-check, but it made 
no difference for DMD or GDC. It probably is part of -release as 
q66 said.


Re: Up to date documentation on D implementation.

2012-04-07 Thread ReneSac

On Saturday, 7 April 2012 at 06:21:16 UTC, Dmitry Olshansky wrote:

On 07.04.2012 8:51, ReneSac wrote:
The only thing I noticed is that a simple Hello World took 
several
seconds to compile, and ended up with 1.25MB (release, 
non-debug build)!


How about stripping it?
Plus, MinGW debug info is notoriously bloated (if it was a debug 
build).


I said it was a non-debug build. The debug build for the Hello 
World is 7.6MB.





Re: Input from a newbie

2012-04-07 Thread ReneSac

On Saturday, 7 April 2012 at 22:21:36 UTC, Jonas wrote:


5) What's wrong with this program? Is it that `printf` doesn't 
understand D strings? If so, how do I use D strings in string 
formatting?


import std.stdio;

string foo() { return "foobar"; }

int main() {
    printf("%s\n", foo());
    return 0;
}



You can use toStringz() from std.string to convert D strings to 
null terminated strings when you need to interface with C 
functions. But in this case, writeln() would be the simplest 
solution. It would be only:


writeln(foo());

Instead of:

printf("%s\n", toStringz(foo()));

I'm a newbie in D too, so correct me if there is anything wrong.
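
Putting the two calls above into a complete program (a minimal sketch of the suggestion, nothing more):

import std.stdio;
import std.string : toStringz;
import core.stdc.stdio : printf;

string foo() { return "foobar"; }

void main()
{
    writeln(foo());                    // idiomatic D: writeln understands D strings
    printf("%s\n", toStringz(foo()));  // C interop: printf needs a null-terminated copy
}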


Re: Up to date documentation on D implementation.

2012-04-06 Thread ReneSac

On Friday, 6 April 2012 at 01:33:10 UTC, Mike Parker wrote:

DMD runs just fine on 64-bit Windows.
Then why is a 32-bit Windows (Win32) operating system, such as 
Windows XP, listed as a requirement? This should be corrected: 
http://dlang.org/dmd-windows.html


Anyway, in the meantime I have set up GDC using the latest 
binaries, and it is working well.


The only thing I noticed is that a simple Hello World took 
several seconds to compile, and ended up with 1.25MB (release, 
non-debug build)! And I thought that D was fast to compile... But 
then I discovered that switching to std.c.stdio made the 
compilation almost instantaneous, and the executable size a 
slightly more reasonable 408KB. It works, but that isn't really 
an option, as D strings aren't readily compatible with C 
strings...


I know that the lower limit on binary size is higher, due to 
the statically linked runtime, but this bloat in the standard library 
for a single function worries me a bit. Is DMD better in this 
regard, or is it a limitation of the current D libraries?


This may be kinda important later, as in compression benchmarks 
the decompressor size is added to the compressed size to prevent 
cheating. I don't want a multi-megabyte executable.




Re: Up to date documentation on D implementation.

2012-04-05 Thread ReneSac

On Thursday, 5 April 2012 at 18:34:05 UTC, Jesse Phillips wrote:


You'll be pretty safe using features you know from C, but you 
can venture out pretty far from it.


While the page isn't specific to the questions you have at 
hand, it does cover much of the current state. Remember, 
recently implemented features are more likely to have bugs.


I will probably program close to C/Lua style (the languages I'm 
most proficient with), but "pretty far" is vague. And I haven't 
been following the timeline of the feature additions, like old 
users have, and I'm not sure if I should read the entire changelog 
for some vague indication of the stability of a feature...



http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel


Ok, that page gives some pointers. Seems like I shouldn't use 
std.stream. So, std.cstream or std.stdio are safe?


Dynamic Arrays, Slicing, Unittest, conditional compilation and 
compile time function execution should be working well, right? 
What about std.parallelism and message passing, non-shared 
multithreading?


And I still don't know how to generate Windows executables... If 
it is really impossible to compile D on 64-bit Windows, then 
what is the best compiler for Linux?


Re: Up to date documentation on D implementation.

2012-04-05 Thread ReneSac

On Thursday, 5 April 2012 at 22:07:05 UTC, Jesse Phillips wrote:

On Thursday, 5 April 2012 at 21:10:41 UTC, ReneSac wrote:

I will probably program close to C/Lua style (the languages 
I'm most proficient with), but "pretty far" is vague. And I 
haven't been following the timeline of the feature additions, 
like old users have, and I'm not sure if I should read the 
entire changelog for some vague indication of the stability of 
a feature...


The page I linked does have compiler versions for some of the 
implemented features, as you appear to have noticed.


Ah, I saw "The following list of major issues dates from July 
2009", so I supposed it was old, but now I see that I understood 
wrong.



http://www.prowiki.org/wiki4d/wiki.cgi?LanguageDevel


Ok, that page gives some pointers. Seems like I shouldn't use 
std.stream. So, std.cstream or std.stdio are safe?


Hmm, that brings up a good point. I think someone is working on 
revamping stdio, though I would think it would mostly remain 
compatible. Who's doing that? Could you write the details here:



So, what should I use?


http://www.prowiki.org/wiki4d/wiki.cgi?ReviewQueue

Dynamic Arrays, Slicing, Unittest, conditional compilation and 
compile time function execution should be working well, right?


Yep, there are some requested improvements, but things are 
stable.

Good! Thanks.



What about std.parallelism and message passing, non-shared 
multithreading?


I'm not sure how much use they have been getting, so it is hard 
to say. I know there have been questions about how to use them, 
but they seem solid.


If you get into using shared though, you'll probably walk into 
areas that will require casting to get things done. I don't 
know what if any changes are planned, but likely it needs a 
closer look.


I would try to avoid anything shared anyway. It is hard to 
program if a++ isn't deterministic...




Sorry, forgot to cover that. I believe GDC will compile 64bit 
Windows applications, but otherwise you can still compile and 
run 32bit applications.


Most people use DMD, but GDC, I hear, should be on par.


I don't need a 64-bit binary right now. Actually, I would even 
prefer a 32-bit one for development, because then I can't run too 
wild in memory usage. The problem is that DMD seems to require 
32-bit Windows, according to the page I linked... Is that not true?


Anyway, GDC seems to have noticeably better performance/optimization, 
so I may end up using it... But I also heard bad things about it 
in old posts... so...