Re: What am I doing wrong here?

2022-05-08 Thread JG via Digitalmars-d-learn

On Saturday, 7 May 2022 at 02:29:59 UTC, Salih Dincer wrote:

On Friday, 6 May 2022 at 18:04:13 UTC, JG wrote:

```d
//...
struct Adder {
    int a;
    int opCall(int b) { return a + b; }
}

auto adder(int a) {
    auto ret = Adder.init;
    ret.a = a;
    return ret;
}

void main() {
    auto g = adder(5);
    g(5).writeln; // 10
    auto d = toDelegate!(int, int)(g);
    d(5).writeln; // 10
    // ...
}
```


The value returned by the delegate structure in the line above is 
10. Its parameter is 5, but if you pass 21, you will get 42. So 
it doesn't have the ability to delegate to Adder.


In summary, the sum function isn't executing. Instead, its value 
gets doubled.


SDB@79


What do you mean?

```d
import std;

struct Delegate(A, B) {
    B function(void* ptr, A a) f;
    void* data;
    B opCall(A a) {
        return f(data, a);
    }
}

auto toDelegate(A, B, S)(ref S s) {
    static B f(void* ptr, A a) {
        return (*(cast(S*) ptr))(a);
    }
    Delegate!(A, B) ret;
    ret.f = &f;
    ret.data = cast(void*) &s;
    return ret;
}

struct Adder {
    int a;
    int opCall(int b) { return a + b; }
}

auto adder(int a) {
    auto ret = Adder.init;
    ret.a = a;
    return ret;
}

void main() {
    auto g = adder(5);
    g(5).writeln;
    auto d = toDelegate!(int, int)(g);
    d(41).writeln;
    auto a = 7;
    auto h = (int b) => a + b;
    auto d1 = toDelegate!(int, int)(h);
    void* ptr = cast(void*) &h;
    (*cast(int delegate(int)*) ptr)(10).writeln;
    d1(21).writeln;
    h(32).writeln;
}
```
Output:
10
46
17
28
39

Which is what is expected.
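(Editorial aside, not from the thread: the hand-rolled `Delegate` struct above mirrors what a built-in D delegate already is, a context pointer plus a function pointer. For a struct with `opCall`, taking the address of `opCall` on an instance yields a real delegate directly; a minimal sketch:)

```d
import std.stdio : writeln;

struct Adder {
    int a;
    int opCall(int b) { return a + b; }
}

void main() {
    auto g = Adder(5);
    // &g.opCall produces a built-in delegate: its hidden context
    // pointer is &g and its function pointer is Adder.opCall.
    int delegate(int) dg = &g.opCall;
    dg(5).writeln; // 10
    // As with the hand-rolled version, dg must not outlive g.
}
```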




Re: What am I doing wrong here?

2022-05-06 Thread Salih Dincer via Digitalmars-d-learn

On Friday, 6 May 2022 at 18:04:13 UTC, JG wrote:

```d
//...
struct Adder {
    int a;
    int opCall(int b) { return a + b; }
}

auto adder(int a) {
    auto ret = Adder.init;
    ret.a = a;
    return ret;
}

void main() {
    auto g = adder(5);
    g(5).writeln; // 10
    auto d = toDelegate!(int, int)(g);
    d(5).writeln; // 10
    // ...
}
```


The value returned by the delegate structure in the line above is 
10. Its parameter is 5, but if you pass 21, you will get 42. So 
it doesn't have the ability to delegate to Adder.


In summary, the sum function isn't executing. Instead, its value 
gets doubled.


SDB@79




Re: What am I doing wrong here?

2022-05-06 Thread JG via Digitalmars-d-learn

On Friday, 6 May 2022 at 18:35:40 UTC, Ali Çehreli wrote:

On 5/6/22 11:04, JG wrote:
> [...]

This is a segmentation fault. Reduced:

import std;

[...]


Hi, thanks. That was quite silly. (I was thinking the variable 
lives to the end of main's scope, but not about the fact that I 
am passing it by value.)


Re: What am I doing wrong here?

2022-05-06 Thread Ali Çehreli via Digitalmars-d-learn

On 5/6/22 11:04, JG wrote:
> This isn't code to be used for anything (just understanding).

This is a segmentation fault. Reduced:

import std;

struct Delegate(A, B) {
    B function(void* ptr, A a) f;
    void* data;
    B opCall(A a) {
        return f(data, a);
    }
}

auto toDelegate(A, B, S)(S s) {
    static B f(void* ptr, A a) {
        return (*(cast(S*) ptr))(a);
    }
    Delegate!(A, B) ret;
    ret.f = &f;
    ret.data = cast(void*) &s;
    return ret;
}

void main() {
    auto a = 7;
    auto h = (int b) => a + b;
    auto d1 = toDelegate!(int, int)(h);
    d1(10);
}

This is what I did to figure out:

1) Compiled with -g

2) Started gdb (my compiled program is 'deneme'):

$ gdb --args ./deneme

3) Entered 'run' in gdb

You are storing the address of the local variable 's' here:

ret.data = cast(void*) &s;

One piece of good news is that compiling with -dip1000 catches this mistake. :)

Another solution is to pass 's' by ref, in which case you have to make 
sure the variable lives long enough when it's used.


//  vvv
auto toDelegate(A, B,S)(ref S s) {
  // ...
}

Further, you can mix -dip1000 and ref if you protect the code by also 
adding 'return' to the parameter:


//  vv
auto toDelegate(A, B,S)(return ref S s) {
  // ...
}

I am surprised that 'in' does not provide similar protection.

//  vv
auto toDelegate(A, B,S)(in S s) {

The code compiles but crashes. (?)

Ali



What am I doing wrong here?

2022-05-06 Thread JG via Digitalmars-d-learn

This isn't code to be used for anything (just understanding).
```d
import std;

struct Delegate(A, B) {
    B function(void* ptr, A a) f;
    void* data;
    B opCall(A a) {
        return f(data, a);
    }
}

auto toDelegate(A, B, S)(S s) {
    static B f(void* ptr, A a) {
        return (*(cast(S*) ptr))(a);
    }
    Delegate!(A, B) ret;
    ret.f = &f;
    ret.data = cast(void*) &s;
    return ret;
}

struct Adder {
    int a;
    int opCall(int b) { return a + b; }
}

auto adder(int a) {
    auto ret = Adder.init;
    ret.a = a;
    return ret;
}

void main() {
    auto g = adder(5);
    g(5).writeln;
    auto d = toDelegate!(int, int)(g);
    d(5).writeln;
    auto a = 7;
    auto h = (int b) => a + b;
    auto d1 = toDelegate!(int, int)(h);
    void* ptr = cast(void*) &h;
    (*cast(int delegate(int)*) ptr)(10).writeln;
    d1(10).writeln;
    h(10).writeln;
}
```



Re: D outperformed by C++, what am I doing wrong?

2017-08-17 Thread data pulverizer via Digitalmars-d-learn

On Sunday, 13 August 2017 at 06:09:39 UTC, amfvcg wrote:

Hi all,
I'm solving below task:

given container T and value R return sum of R-ranges over T. An 
example:

input : T=[1,1,1] R=2
output : [2, 1]

input : T=[1,2,3] R=1
output : [1,2,3]
(see dlang unittests for more examples)


Below c++ code compiled with g++-5.4.0 -O2 -std=c++14 runs on 
my machine in 656 836 us.
Below D code compiled with dmd v2.067.1 -O runs on my machine 
in ~ 14.5 sec.


Each language has its own "way of programming", and as I'm a 
beginner in D - probably I'm running through bushes instead of 
highway. Therefore I'd like to ask you, experienced dlang devs, 
to shed some light on "how to do it dlang-way".


...



From time to time the forum gets questions like these. It would 
be nice if we had blog articles concerning high performance 
programming in D, a kind of best practice approach, everything 
from programming idioms, to compilers, and compiler flags.




Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread via Digitalmars-d-learn

On Sunday, 13 August 2017 at 09:56:44 UTC, Johan Engelen wrote:

On Sunday, 13 August 2017 at 09:15:48 UTC, amfvcg wrote:


Change the parameter for this array size to be taken from 
stdin and I assume that these optimizations will go away.


This is paramount for all of the testing, examining, and 
comparisons that are discussed in this thread.
Full information is given to the compiler, and you are 
basically testing the constant folding power of the compilers 
(not unimportant).


I agree that in general this is not the right way to benchmark. I 
however am interested specifically in the pattern matching / 
constant folding abilities
of the compiler. I would have expected `sum(iota(1, N + 1))` to 
be replaced with `(N*(N+1))/2`. LDC already does this 
optimization in some cases. I have opened an issue for some of 
the rest: https://github.com/ldc-developers/ldc/issues/2271


No runtime calculation is needed for the sum. Your program 
could be optimized to the following code:

```
void main()
{
MonoTime beg = MonoTime.currTime;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(5000);
}
```
So actually you should be more surprised that the reported time 
is not equal to near-zero (just the time between two 
`MonoTime.currTime` calls)!


On Posix, `MonoTime.currTime`'s implementation uses 
clock_gettime(CLOCK_MONOTONIC, ...), which is quite a bit more 
involved than simply using the rdtsc instruction on x86. See: 
http://linuxmogeb.blogspot.bg/2013/10/how-does-clockgettime-work.html


On Windows, `MonoTime.currTime` uses QueryPerformanceCounter, 
which on Win 7 and later uses the rdtsc instruction, which makes 
it quite streamlined. In some testing I did several months ago 
QueryPerformanceCounter had really good latency and precision 
(though I forgot the exact numbers I got).


Instead of `iota(1,100)`, you should initialize the array 
with random numbers with a randomization seed given by the user 
(e.g. commandline argument or stdin). Then, the program will 
actually have to do the runtime calculations that I assume you 
are expecting it to perform.




Agreed, though I think Phobos's unpredictableSeed does an ok job 
w.r.t. seeding, so unless you want to repeat the benchmark on the 
exact same dataset, something like this does a good job:


T[] generate(T)(size_t size)
{
import std.algorithm.iteration : map;
import std.range : array, iota;
import std.random : uniform;

return size.iota.map!(_ => uniform!T()).array;
}
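A hypothetical companion to `generate` for when you do want repeatability: thread a seeded `Random` through instead of the global RNG. (The name `generateSeeded` and its seed parameter are illustrative, not from the thread.)

```d
import std.algorithm.iteration : map;
import std.array : array;
import std.random : Random, uniform;
import std.range : iota;

// Same shape as generate() above, but with an explicit seed so a
// benchmark can be re-run on identical data.
T[] generateSeeded(T)(size_t size, uint seed)
{
    auto rng = Random(seed);
    return size.iota.map!(_ => uniform!T(rng)).array;
}

void main()
{
    // Identical seeds produce identical datasets.
    assert(generateSeeded!int(5, 42) == generateSeeded!int(5, 42));
}
```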


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread via Digitalmars-d-learn

On Sunday, 13 August 2017 at 09:41:39 UTC, Johan Engelen wrote:
On Sunday, 13 August 2017 at 09:08:14 UTC, Petar Kirov 
[ZombineDev] wrote:

 [...]



 [...]


Execution of sum_subranges is already O(1), because the 
calculation of the sum is delayed: the return type of the 
function is not `uint`, it is `MapResult!(sum, )` which 
does a lazy evaluation of the sum.


- Johan


Heh, yeah you're absolutely right. I was just about to correct 
myself, when I saw your reply. Don't know how I missed such an 
obvious thing :D




Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Johan Engelen via Digitalmars-d-learn

On Sunday, 13 August 2017 at 09:15:48 UTC, amfvcg wrote:


Change the parameter for this array size to be taken from stdin 
and I assume that these optimizations will go away.


This is paramount for all of the testing, examining, and 
comparisons that are discussed in this thread.
Full information is given to the compiler, and you are basically 
testing the constant folding power of the compilers (not 
unimportant). No runtime calculation is needed for the sum. Your 
program could be optimized to the following code:

```
void main()
{
MonoTime beg = MonoTime.currTime;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(5000);
}
```
So actually you should be more surprised that the reported time 
is not equal to near-zero (just the time between two 
`MonoTime.currTime` calls)!


Instead of `iota(1,100)`, you should initialize the array 
with random numbers with a randomization seed given by the user 
(e.g. commandline argument or stdin). Then, the program will 
actually have to do the runtime calculations that I assume you 
are expecting it to perform.
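A sketch of that kind of benchmark: the array contents depend on a runtime seed, so the compiler cannot fold the sums away. (The array size, the 0..100 value range, and the command-line seed handling are illustrative assumptions, not from the thread.)

```d
import std.algorithm.iteration : map, sum;
import std.array : array;
import std.conv : to;
import std.random : Random, uniform;
import std.range : chunks, iota;
import std.stdio : writeln;
import core.time : MonoTime;

void main(string[] args)
{
    // Seed comes from the command line, so the data is opaque
    // to the optimizer at compile time.
    uint seed = args.length > 1 ? args[1].to!uint : 1;
    auto rng = Random(seed);
    auto v = iota(0, 1_000_000).map!(_ => uniform(0, 100, rng)).array;

    MonoTime beg = MonoTime.currTime;
    auto result = v.chunks(2).map!(sum).array;
    MonoTime end = MonoTime.currTime;

    writeln(end - beg);
    writeln(result.length); // one sum per chunk of 2
}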


- Johan


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Johan Engelen via Digitalmars-d-learn
On Sunday, 13 August 2017 at 09:08:14 UTC, Petar Kirov 
[ZombineDev] wrote:


This instantiation:

sum_subranges(std.range.iota!(int, int).iota(int, int).Result, 
uint)


of the following function:

auto sum_subranges(T)(T input, uint range)
{
import std.range : chunks, ElementType, array;
import std.algorithm : map;
return input.chunks(range).map!(sum);
}

gets optimized with LDC to:
 [snip]


I.e. the compiler turned an O(n) algorithm into O(1), which is 
quite neat. It is also quite surprising to me that it looks 
like even dmd managed to do a similar optimization:

 [snip]


Execution of sum_subranges is already O(1), because the 
calculation of the sum is delayed: the return type of the 
function is not `uint`, it is `MapResult!(sum, )` which 
does a lazy evaluation of the sum.
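A quick way to see this laziness (a sketch; `iota(1, 7)` and chunk size 2 are arbitrary choices): `.length` is answered from the source length alone, and only forcing the range with `.array` performs the additions.

```d
import std.algorithm.iteration : map, sum;
import std.array : array;
import std.range : chunks, iota;

// Same shape as the sum_subranges under discussion: chunks + map!sum
// with no .array at the end, so the result is a lazy range.
auto sum_subranges(T)(T input, uint range)
{
    return input.chunks(range).map!(sum);
}

void main()
{
    auto r = sum_subranges(iota(1, 7), 2); // conceptually [1+2, 3+4, 5+6]
    // length comes from the source's length: O(1), no sums computed.
    assert(r.length == 3);
    // Forcing evaluation actually performs the additions.
    assert(r.array == [3, 7, 11]);
}
```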


- Johan



Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn
On Sunday, 13 August 2017 at 09:08:14 UTC, Petar Kirov 
[ZombineDev] wrote:


There's one especially interesting result:

This instantiation:

sum_subranges(std.range.iota!(int, int).iota(int, int).Result, 
uint)


of the following function:

auto sum_subranges(T)(T input, uint range)
{
import std.range : chunks, ElementType, array;
import std.algorithm : map;
return input.chunks(range).map!(sum);
}

gets optimized with LDC to:
  push rax
  test edi, edi
  je .LBB2_2
  mov edx, edi
  mov rax, rsi
  pop rcx
  ret
.LBB2_2:
  lea rsi, [rip + .L.str.3]
  lea rcx, [rip + .L.str]
  mov edi, 45
  mov edx, 89
  mov r8d, 6779
  call _d_assert_msg@PLT

I.e. the compiler turned an O(n) algorithm into O(1), which is 
quite neat. It is also quite surprising to me that it looks 
like even dmd managed to do a similar optimization:


sum_subranges(std.range.iota!(int, int).iota(int, int).Result, 
uint):

push   rbp
mov    rbp,rsp
sub    rsp,0x30
mov    DWORD PTR [rbp-0x8],edi
mov    r9d,DWORD PTR [rbp-0x8]
test   r9,r9
jne    41
mov    r8d,0x1b67
mov    ecx,0x0
mov    eax,0x61
mov    rdx,rax
mov    QWORD PTR [rbp-0x28],rdx
mov    edx,0x0
mov    edi,0x2d
mov    rsi,rdx
mov    rdx,QWORD PTR [rbp-0x28]
call   41
41: mov    QWORD PTR [rbp-0x20],rsi
mov    QWORD PTR [rbp-0x18],r9
mov    rdx,QWORD PTR [rbp-0x18]
mov    rax,QWORD PTR [rbp-0x20]
mov    rsp,rbp
pop    rbp
ret

Moral of the story: templates + ranges are an awesome 
combination.


Change the parameter for this array size to be taken from stdin 
and I assume that these optimizations will go away.


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread via Digitalmars-d-learn

On Sunday, 13 August 2017 at 08:43:29 UTC, amfvcg wrote:
On Sunday, 13 August 2017 at 08:33:53 UTC, Petar Kirov 
[ZombineDev] wrote:


With Daniel's latest version (
http://forum.dlang.org/post/mailman.5963.1502612885.31550.digitalmars-d-le...@puremagic.com
 )

$ ldc2 -O3 --release sum_subranges2.d
$ ./sum_subranges2
210 ms, 838 μs, and 8 hnsecs
5000


Great!!! And that's what I was hoping for.

So the conclusion is:

use the latest ldc that's out there.

Thank you Petar, thank you Daniel. (I cannot change the subject 
to SOLVED, can I?)


Btw. the idiomatic version of this D sample looks like I 
imagined it should!


There's one especially interesting result:

This instantiation:

sum_subranges(std.range.iota!(int, int).iota(int, int).Result, 
uint)


of the following function:

auto sum_subranges(T)(T input, uint range)
{
import std.range : chunks, ElementType, array;
import std.algorithm : map;
return input.chunks(range).map!(sum);
}

gets optimized with LDC to:
  push rax
  test edi, edi
  je .LBB2_2
  mov edx, edi
  mov rax, rsi
  pop rcx
  ret
.LBB2_2:
  lea rsi, [rip + .L.str.3]
  lea rcx, [rip + .L.str]
  mov edi, 45
  mov edx, 89
  mov r8d, 6779
  call _d_assert_msg@PLT

I.e. the compiler turned an O(n) algorithm into O(1), which is quite 
neat. It is also quite surprising to me that it looks like even 
dmd managed to do a similar optimization:


sum_subranges(std.range.iota!(int, int).iota(int, int).Result, 
uint):

push   rbp
mov    rbp,rsp
sub    rsp,0x30
mov    DWORD PTR [rbp-0x8],edi
mov    r9d,DWORD PTR [rbp-0x8]
test   r9,r9
jne    41
mov    r8d,0x1b67
mov    ecx,0x0
mov    eax,0x61
mov    rdx,rax
mov    QWORD PTR [rbp-0x28],rdx
mov    edx,0x0
mov    edi,0x2d
mov    rsi,rdx
mov    rdx,QWORD PTR [rbp-0x28]
call   41
41: mov    QWORD PTR [rbp-0x20],rsi
mov    QWORD PTR [rbp-0x18],r9
mov    rdx,QWORD PTR [rbp-0x18]
mov    rax,QWORD PTR [rbp-0x20]
mov    rsp,rbp
pop    rbp
ret

Moral of the story: templates + ranges are an awesome combination.


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn
On Sunday, 13 August 2017 at 08:33:53 UTC, Petar Kirov 
[ZombineDev] wrote:


With Daniel's latest version (
http://forum.dlang.org/post/mailman.5963.1502612885.31550.digitalmars-d-le...@puremagic.com
 )

$ ldc2 -O3 --release sum_subranges2.d
$ ./sum_subranges2
210 ms, 838 μs, and 8 hnsecs
5000


Great!!! And that's what I was hoping for.

So the conclusion is:

use the latest ldc that's out there.

Thank you Petar, thank you Daniel. (I cannot change the subject 
to SOLVED, can I?)


Btw. the idiomatic version of this D sample looks like I imagined 
it should!


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread ikod via Digitalmars-d-learn

On Sunday, 13 August 2017 at 08:32:50 UTC, amfvcg wrote:

Gives me

5 μs and 2 hnsecs
5000
3 secs, 228 ms, 837 μs, and 4 hnsecs
5000


And you've compiled it with?

Btw. clang for c++ version works worse than gcc (for this case 
[112ms vs 180ms]).


DMD64 D Compiler v2.074.1


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Daniel Kozak via Digitalmars-d-learn
And this one is awesome :P
http://ideone.com/muehUw

On Sun, Aug 13, 2017 at 10:27 AM, Daniel Kozak  wrote:

> this one is even faster than c++:
> http://ideone.com/TRDsOo
>
> On Sun, Aug 13, 2017 at 10:00 AM, Daniel Kozak  wrote:
>
>> my second version on ldc takes 380ms and c++ version on same compiler
>> (clang), takes 350ms, so it seems to be almost same
>>
>> On Sun, Aug 13, 2017 at 9:51 AM, amfvcg via Digitalmars-d-learn <
>> digitalmars-d-learn@puremagic.com> wrote:
>>
>>> On Sunday, 13 August 2017 at 07:30:32 UTC, Daniel Kozak wrote:
>>>
 Here is more D idiomatic way:

 import std.stdio : writeln;
 import std.algorithm.comparison: min;
 import std.algorithm.iteration: sum;
 import core.time: MonoTime, Duration;


 auto sum_subranges(T)(T input, uint range)
 {
 import std.array : array;
 import std.range : chunks, ElementType;
 import std.algorithm : map;

 if (range == 0)
 {
 return ElementType!(T)[].init;
 }
 return input.chunks(range).map!(sum).array;
 }

 unittest
 {
 assert(sum_subranges([1,1,1], 2) == [2, 1]);
 assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
 assert(sum_subranges([], 2) == []);
 assert(sum_subranges([1], 2) == [1]);
 assert(sum_subranges([1], 0) == []);
 }


 int main()
 {
 import std.range : iota, array;
 auto v = iota(0,100);
 int sum;
 MonoTime beg = MonoTime.currTime;
 for (int i=0; i < 100; i++)
 sum += cast(int)sum_subranges(v,2).length;
 MonoTime end = MonoTime.currTime;
 writeln(end-beg);
 writeln(sum);
 return sum;
 }

 On Sun, Aug 13, 2017 at 9:13 AM, Daniel Kozak 
 wrote:

 this works ok for me with ldc compiler, gdc does not work on my arch
> machine so I can not do comparison to your c++ version (clang does not work
> with your c++ code)
>
> import std.stdio : writeln;
> import std.algorithm.comparison: min;
> import std.algorithm.iteration: sum;
> import core.time: MonoTime, Duration;
>
>
> T[] sum_subranges(T)(T[] input, uint range)
> {
> import std.array : appender;
> auto app = appender!(T[])();
> if (range == 0)
> {
> return app.data;
> }
> for (uint i; i < input.length; i=min(i+range, input.length))
> {
> app.put(sum(input[i..min(i+range, input.length)]));
> }
> return app.data;
> }
>
> unittest
> {
> assert(sum_subranges([1,1,1], 2) == [2, 1]);
> assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
> assert(sum_subranges([], 2) == []);
> assert(sum_subranges([1], 2) == [1]);
> assert(sum_subranges([1], 0) == []);
> }
>
>
> int main()
> {
> import std.range : iota, array;
> auto v = iota(0,100).array;
> int sum;
> MonoTime beg = MonoTime.currTime;
> for (int i=0; i < 100; i++)
> sum += cast(int)sum_subranges(v,2).length;
> MonoTime end = MonoTime.currTime;
> writeln(end-beg);
> writeln(sum);
> return sum;
> }
>
> On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via
> Digitalmars-d-learn < digitalmars-d-learn@puremagic.com> wrote:
>
> [...]
>>
>
>>> Thank you all for the replies. Good to know the community is alive in d
>>> :)
>>>
>>> Let's settle the playground:
>>> D  : http://ideone.com/h4fnsD
>>> C++: http://ideone.com/X1pyXG
>>>
>>> Both using GCC under the hood.
>>> C++ in 112 ms;
>>> D in :
>>> - 2.5 sec with original source;
>>> - 2.5 sec with Daniel's 1st version;
>>> - 5 sec timeout exceeded with Daniel's 2nd version;
>>> - 1.8 sec with Neia-like preallocation;
>>>
>>> So still it's not that neat.
>>>
>>> (What's interesting C++ code generates 2KLOC of assembly, and Dlang @
>>> ldc 12KLOC - checked at godbolt).
>>>
>>> P.S. For C++ version to work under clang, the function which takes
>>> (BinaryOp) must go before the other one (my bad).
>>>
>>
>>
>


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread via Digitalmars-d-learn
On Sunday, 13 August 2017 at 08:29:30 UTC, Petar Kirov 
[ZombineDev] wrote:

On Sunday, 13 August 2017 at 08:13:56 UTC, amfvcg wrote:

On Sunday, 13 August 2017 at 08:00:53 UTC, Daniel Kozak wrote:
my second version on ldc takes 380ms and c++ version on same 
compiler (clang), takes 350ms, so it seems to be almost same




Ok, on ideone (ldc 1.1.0) it timeouts, on dpaste (ldc 0.12.0) 
it gets killed.

What version are you using?

Either way, if that'd be the case - that's slick. (and ldc 
would be the compiler of choice for real use cases).


Here are my results:

$ uname -sri
Linux 4.10.0-28-generic x86_64

$ lscpu | grep 'Model name'
Model name:Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz

$ ldc2 --version | head -n5
LDC - the LLVM D compiler (1.3.0):
  based on DMD v2.073.2 and LLVM 4.0.0
  built with LDC - the LLVM D compiler (1.3.0)
  Default target: x86_64-unknown-linux-gnu
  Host CPU: ivybridge

$ g++ --version | head -n1
g++ (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406

$ ldc2 -O3 --release sum_subranges.d
$ ./sum_subranges
378 ms, 556 μs, and 9 hnsecs
5000

$ g++ -O5 sum_subranges.cpp -o sum_subranges
$ ./sum_subranges
237135
5000


With Daniel's latest version (
http://forum.dlang.org/post/mailman.5963.1502612885.31550.digitalmars-d-le...@puremagic.com
 )

$ ldc2 -O3 --release sum_subranges2.d
$ ./sum_subranges2
210 ms, 838 μs, and 8 hnsecs
5000


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn

Gives me

5 μs and 2 hnsecs
5000
3 secs, 228 ms, 837 μs, and 4 hnsecs
5000


And you've compiled it with?

Btw. clang for c++ version works worse than gcc (for this case 
[112ms vs 180ms]).


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread via Digitalmars-d-learn

On Sunday, 13 August 2017 at 08:13:56 UTC, amfvcg wrote:

On Sunday, 13 August 2017 at 08:00:53 UTC, Daniel Kozak wrote:
my second version on ldc takes 380ms and c++ version on same 
compiler (clang), takes 350ms, so it seems to be almost same




Ok, on ideone (ldc 1.1.0) it timeouts, on dpaste (ldc 0.12.0) 
it gets killed.

What version are you using?

Either way, if that'd be the case - that's slick. (and ldc 
would be the compiler of choice for real use cases).


Here are my results:

$ uname -sri
Linux 4.10.0-28-generic x86_64

$ lscpu | grep 'Model name'
Model name:Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz

$ ldc2 --version | head -n5
LDC - the LLVM D compiler (1.3.0):
  based on DMD v2.073.2 and LLVM 4.0.0
  built with LDC - the LLVM D compiler (1.3.0)
  Default target: x86_64-unknown-linux-gnu
  Host CPU: ivybridge

$ g++ --version | head -n1
g++ (Ubuntu 6.3.0-12ubuntu2) 6.3.0 20170406

$ ldc2 -O3 --release sum_subranges.d
$ ./sum_subranges
378 ms, 556 μs, and 9 hnsecs
5000

$ g++ -O5 sum_subranges.cpp -o sum_subranges
$ ./sum_subranges
237135
5000


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread ikod via Digitalmars-d-learn

On Sunday, 13 August 2017 at 08:13:56 UTC, amfvcg wrote:

On Sunday, 13 August 2017 at 08:00:53 UTC, Daniel Kozak wrote:
my second version on ldc takes 380ms and c++ version on same 
compiler (clang), takes 350ms, so it seems to be almost same




Ok, on ideone (ldc 1.1.0) it timeouts, on dpaste (ldc 0.12.0) 
it gets killed.

What version are you using?

Either way, if that'd be the case - that's slick. (and ldc 
would be the compiler of choice for real use cases).


import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;
import std.range;
import std.algorithm;

auto s1(T)(T input, uint r) {
    return input.chunks(r).map!sum;
}

T[] sum_subranges(T)(T[] input, uint range)
{
    T[] result;
    if (range == 0)
    {
        return result;
    }
    for (uint i; i < input.length; i = min(i + range, input.length))
    {
        result ~= sum(input[i .. min(i + range, input.length)]);
    }
    return result;
}

unittest
{
    assert(sum_subranges([1,1,1], 2) == [2, 1]);
    assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
    assert(sum_subranges([], 2) == []);
    assert(sum_subranges([1], 2) == [1]);
    assert(sum_subranges([1], 0) == []);
    assert(s1([1,1,1], 2).array == [2, 1]);
    assert(s1([1,1,1,2,3,3], 2).array == [2, 3, 6]);
}

int main()
{
    int sum;
    MonoTime beg0 = MonoTime.currTime;
    for (int i = 0; i < 100; i++)
        sum += s1(iota(100), 2).length;
    MonoTime end0 = MonoTime.currTime;
    writeln(end0 - beg0);
    writeln(sum);

    sum = 0;
    int[100] v;
    for (int i = 0; i < 100; ++i)
        v[i] = i;
    MonoTime beg = MonoTime.currTime;
    for (int i = 0; i < 100; i++)
        sum += cast(int)sum_subranges(v, 2).length;
    MonoTime end = MonoTime.currTime;
    writeln(end - beg);
    writeln(sum);
    return sum;
}

Gives me

5 μs and 2 hnsecs
5000
3 secs, 228 ms, 837 μs, and 4 hnsecs
5000



Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Daniel Kozak via Digitalmars-d-learn
this one is even faster than c++:
http://ideone.com/TRDsOo

On Sun, Aug 13, 2017 at 10:00 AM, Daniel Kozak  wrote:

> my second version on ldc takes 380ms and c++ version on same compiler
> (clang), takes 350ms, so it seems to be almost same
>
> On Sun, Aug 13, 2017 at 9:51 AM, amfvcg via Digitalmars-d-learn <
> digitalmars-d-learn@puremagic.com> wrote:
>
>> On Sunday, 13 August 2017 at 07:30:32 UTC, Daniel Kozak wrote:
>>
>>> Here is more D idiomatic way:
>>>
>>> import std.stdio : writeln;
>>> import std.algorithm.comparison: min;
>>> import std.algorithm.iteration: sum;
>>> import core.time: MonoTime, Duration;
>>>
>>>
>>> auto sum_subranges(T)(T input, uint range)
>>> {
>>> import std.array : array;
>>> import std.range : chunks, ElementType;
>>> import std.algorithm : map;
>>>
>>> if (range == 0)
>>> {
>>> return ElementType!(T)[].init;
>>> }
>>> return input.chunks(range).map!(sum).array;
>>> }
>>>
>>> unittest
>>> {
>>> assert(sum_subranges([1,1,1], 2) == [2, 1]);
>>> assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
>>> assert(sum_subranges([], 2) == []);
>>> assert(sum_subranges([1], 2) == [1]);
>>> assert(sum_subranges([1], 0) == []);
>>> }
>>>
>>>
>>> int main()
>>> {
>>> import std.range : iota, array;
>>> auto v = iota(0,100);
>>> int sum;
>>> MonoTime beg = MonoTime.currTime;
>>> for (int i=0; i < 100; i++)
>>> sum += cast(int)sum_subranges(v,2).length;
>>> MonoTime end = MonoTime.currTime;
>>> writeln(end-beg);
>>> writeln(sum);
>>> return sum;
>>> }
>>>
>>> On Sun, Aug 13, 2017 at 9:13 AM, Daniel Kozak  wrote:
>>>
>>> this works ok for me with ldc compiler, gdc does not work on my arch
machine so I can not do comparison to your c++ version (clang does not work
 with your c++ code)

 import std.stdio : writeln;
 import std.algorithm.comparison: min;
 import std.algorithm.iteration: sum;
 import core.time: MonoTime, Duration;


 T[] sum_subranges(T)(T[] input, uint range)
 {
 import std.array : appender;
 auto app = appender!(T[])();
 if (range == 0)
 {
 return app.data;
 }
 for (uint i; i < input.length; i=min(i+range, input.length))
 {
 app.put(sum(input[i..min(i+range, input.length)]));
 }
 return app.data;
 }

 unittest
 {
 assert(sum_subranges([1,1,1], 2) == [2, 1]);
 assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
 assert(sum_subranges([], 2) == []);
 assert(sum_subranges([1], 2) == [1]);
 assert(sum_subranges([1], 0) == []);
 }


 int main()
 {
 import std.range : iota, array;
 auto v = iota(0,100).array;
 int sum;
 MonoTime beg = MonoTime.currTime;
 for (int i=0; i < 100; i++)
 sum += cast(int)sum_subranges(v,2).length;
 MonoTime end = MonoTime.currTime;
 writeln(end-beg);
 writeln(sum);
 return sum;
 }

 On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via Digitalmars-d-learn
 < digitalmars-d-learn@puremagic.com> wrote:

 [...]
>

>> Thank you all for the replies. Good to know the community is alive in d :)
>>
>> Let's settle the playground:
>> D  : http://ideone.com/h4fnsD
>> C++: http://ideone.com/X1pyXG
>>
>> Both using GCC under the hood.
>> C++ in 112 ms;
>> D in :
>> - 2.5 sec with original source;
>> - 2.5 sec with Daniel's 1st version;
>> - 5 sec timeout exceeded with Daniel's 2nd version;
>> - 1.8 sec with Neia-like preallocation;
>>
>> So still it's not that neat.
>>
>> (What's interesting C++ code generates 2KLOC of assembly, and Dlang @ ldc
>> 12KLOC - checked at godbolt).
>>
>> P.S. For C++ version to work under clang, the function which takes
>> (BinaryOp) must go before the other one (my bad).
>>
>
>


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn

On Sunday, 13 August 2017 at 08:00:53 UTC, Daniel Kozak wrote:
my second version on ldc takes 380ms and c++ version on same 
compiler (clang), takes 350ms, so it seems to be almost same




Ok, on ideone (ldc 1.1.0) it timeouts, on dpaste (ldc 0.12.0) it 
gets killed.

What version are you using?

Either way, if that'd be the case - that's slick. (and ldc would 
be the compiler of choice for real use cases).


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Daniel Kozak via Digitalmars-d-learn
my second version on ldc takes 380ms and c++ version on same compiler
(clang), takes 350ms, so it seems to be almost same

On Sun, Aug 13, 2017 at 9:51 AM, amfvcg via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:

> On Sunday, 13 August 2017 at 07:30:32 UTC, Daniel Kozak wrote:
>
>> Here is more D idiomatic way:
>>
>> import std.stdio : writeln;
>> import std.algorithm.comparison: min;
>> import std.algorithm.iteration: sum;
>> import core.time: MonoTime, Duration;
>>
>>
>> auto sum_subranges(T)(T input, uint range)
>> {
>> import std.array : array;
>> import std.range : chunks, ElementType;
>> import std.algorithm : map;
>>
>> if (range == 0)
>> {
>> return ElementType!(T)[].init;
>> }
>> return input.chunks(range).map!(sum).array;
>> }
>>
>> unittest
>> {
>> assert(sum_subranges([1,1,1], 2) == [2, 1]);
>> assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
>> assert(sum_subranges([], 2) == []);
>> assert(sum_subranges([1], 2) == [1]);
>> assert(sum_subranges([1], 0) == []);
>> }
>>
>>
>> int main()
>> {
>> import std.range : iota, array;
>> auto v = iota(0,100);
>> int sum;
>> MonoTime beg = MonoTime.currTime;
>> for (int i=0; i < 100; i++)
>> sum += cast(int)sum_subranges(v,2).length;
>> MonoTime end = MonoTime.currTime;
>> writeln(end-beg);
>> writeln(sum);
>> return sum;
>> }
>>
>> On Sun, Aug 13, 2017 at 9:13 AM, Daniel Kozak  wrote:
>>
>> this works ok for me with ldc compiler, gdc does not work on my arch
>>> machine so I can not do comparison to your c++ version (clang does not work
>>> with your c++ code)
>>>
>>> import std.stdio : writeln;
>>> import std.algorithm.comparison: min;
>>> import std.algorithm.iteration: sum;
>>> import core.time: MonoTime, Duration;
>>>
>>>
>>> T[] sum_subranges(T)(T[] input, uint range)
>>> {
>>> import std.array : appender;
>>> auto app = appender!(T[])();
>>> if (range == 0)
>>> {
>>> return app.data;
>>> }
>>> for (uint i; i < input.length; i=min(i+range, input.length))
>>> {
>>> app.put(sum(input[i..min(i+range, input.length)]));
>>> }
>>> return app.data;
>>> }
>>>
>>> unittest
>>> {
>>> assert(sum_subranges([1,1,1], 2) == [2, 1]);
>>> assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
>>> assert(sum_subranges([], 2) == []);
>>> assert(sum_subranges([1], 2) == [1]);
>>> assert(sum_subranges([1], 0) == []);
>>> }
>>>
>>>
>>> int main()
>>> {
>>> import std.range : iota, array;
>>> auto v = iota(0,100).array;
>>> int sum;
>>> MonoTime beg = MonoTime.currTime;
>>> for (int i=0; i < 100; i++)
>>> sum += cast(int)sum_subranges(v,2).length;
>>> MonoTime end = MonoTime.currTime;
>>> writeln(end-beg);
>>> writeln(sum);
>>> return sum;
>>> }
>>>
>>> On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via Digitalmars-d-learn
>>> < digitalmars-d-learn@puremagic.com> wrote:
>>>
>>> [...]

>>>
> Thank you all for the replies. Good to know the community is alive in d :)
>
> Let's settle the playground:
> D  : http://ideone.com/h4fnsD
> C++: http://ideone.com/X1pyXG
>
> Both using GCC under the hood.
> C++ in 112 ms;
> D in :
> - 2.5 sec with original source;
> - 2.5 sec with Daniel's 1st version;
> - 5 sec timeout exceeded with Daniel's 2nd version;
> - 1.8 sec with Neia-like preallocation;
>
> So still it's not that neat.
>
> (What's interesting C++ code generates 2KLOC of assembly, and Dlang @ ldc
> 12KLOC - checked at godbolt).
>
> P.S. For C++ version to work under clang, the function which takes
> (BinaryOp) must go before the other one (my bad).
>


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn

On Sunday, 13 August 2017 at 07:30:32 UTC, Daniel Kozak wrote:

Here is more D idiomatic way:

import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


auto sum_subranges(T)(T input, uint range)
{
import std.array : array;
import std.range : chunks, ElementType;
import std.algorithm : map;

if (range == 0)
{
return ElementType!(T)[].init;
}
return input.chunks(range).map!(sum).array;
}

unittest
{
assert(sum_subranges([1,1,1], 2) == [2, 1]);
assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
assert(sum_subranges([], 2) == []);
assert(sum_subranges([1], 2) == [1]);
assert(sum_subranges([1], 0) == []);
}


int main()
{
import std.range : iota, array;
auto v = iota(0,100);
int sum;
MonoTime beg = MonoTime.currTime;
for (int i=0; i < 100; i++)
sum += cast(int)sum_subranges(v,2).length;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(sum);
return sum;
}

On Sun, Aug 13, 2017 at 9:13 AM, Daniel Kozak 
 wrote:


this works ok for me with the ldc compiler; gdc does not work on 
my arch machine so I can not do a comparison to your C++ version 
(clang does not work with your C++ code)


import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


T[] sum_subranges(T)(T[] input, uint range)
{
import std.array : appender;
auto app = appender!(T[])();
if (range == 0)
{
return app.data;
}
for (uint i; i < input.length; i=min(i+range, input.length))
{
app.put(sum(input[i..min(i+range, input.length)]));
}
return app.data;
}

unittest
{
assert(sum_subranges([1,1,1], 2) == [2, 1]);
assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
assert(sum_subranges([], 2) == []);
assert(sum_subranges([1], 2) == [1]);
assert(sum_subranges([1], 0) == []);
}


int main()
{
import std.range : iota, array;
auto v = iota(0,100).array;
int sum;
MonoTime beg = MonoTime.currTime;
for (int i=0; i < 100; i++)
sum += cast(int)sum_subranges(v,2).length;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(sum);
return sum;
}

On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via 
Digitalmars-d-learn < digitalmars-d-learn@puremagic.com> wrote:



[...]


Thank you all for the replies. Good to know the community is 
alive in d :)


Let's settle the playground:
D  : http://ideone.com/h4fnsD
C++: http://ideone.com/X1pyXG

Both using GCC under the hood.
C++ in 112 ms;
D in :
- 2.5 sec with original source;
- 2.5 sec with Daniel's 1st version;
- 5 sec timeout exceeded with Daniel's 2nd version;
- 1.8 sec with Neia-like preallocation;

So it's still not that neat.

(What's interesting: the C++ code generates 2 KLOC of assembly, and 
D with ldc 12 KLOC - checked at godbolt).


P.S. For the C++ version to work under clang, the overload which 
takes (BinaryOp) must go before the other one (my bad).


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Daniel Kozak via Digitalmars-d-learn
Here is more D idiomatic way:

import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


auto sum_subranges(T)(T input, uint range)
{
import std.array : array;
import std.range : chunks, ElementType;
import std.algorithm : map;

if (range == 0)
{
return ElementType!(T)[].init;
}
return input.chunks(range).map!(sum).array;
}

unittest
{
assert(sum_subranges([1,1,1], 2) == [2, 1]);
assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
assert(sum_subranges([], 2) == []);
assert(sum_subranges([1], 2) == [1]);
assert(sum_subranges([1], 0) == []);
}


int main()
{
import std.range : iota, array;
auto v = iota(0,100);
int sum;
MonoTime beg = MonoTime.currTime;
for (int i=0; i < 100; i++)
sum += cast(int)sum_subranges(v,2).length;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(sum);
return sum;
}
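
A note on this version: `chunks(range).map!(sum)` builds a lazy range, and only the final `.array` call allocates; if the caller merely iterates over the sums, `.array` can be dropped entirely. A minimal sketch of that behaviour (my own illustration, not code from the thread):

```d
import std.algorithm.iteration : map, sum;
import std.array : array;
import std.range : chunks;

void main()
{
    auto data = [1, 1, 1, 2, 3, 3];
    // Lazy range of per-chunk sums; nothing is allocated yet.
    auto lazySums = data.chunks(2).map!(sum);
    // .array is what materializes the result into a new slice.
    assert(lazySums.array == [2, 3, 6]);
}
```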

On Sun, Aug 13, 2017 at 9:13 AM, Daniel Kozak  wrote:

> this works ok for me with the ldc compiler; gdc does not work on my arch
> machine so I can not do a comparison to your C++ version (clang does not
> work with your C++ code)
>
> import std.stdio : writeln;
> import std.algorithm.comparison: min;
> import std.algorithm.iteration: sum;
> import core.time: MonoTime, Duration;
>
>
> T[] sum_subranges(T)(T[] input, uint range)
> {
> import std.array : appender;
> auto app = appender!(T[])();
> if (range == 0)
> {
> return app.data;
> }
> for (uint i; i < input.length; i=min(i+range, input.length))
> {
> app.put(sum(input[i..min(i+range, input.length)]));
> }
> return app.data;
> }
>
> unittest
> {
> assert(sum_subranges([1,1,1], 2) == [2, 1]);
> assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
> assert(sum_subranges([], 2) == []);
> assert(sum_subranges([1], 2) == [1]);
> assert(sum_subranges([1], 0) == []);
> }
>
>
> int main()
> {
> import std.range : iota, array;
> auto v = iota(0,100).array;
> int sum;
> MonoTime beg = MonoTime.currTime;
> for (int i=0; i < 100; i++)
> sum += cast(int)sum_subranges(v,2).length;
> MonoTime end = MonoTime.currTime;
> writeln(end-beg);
> writeln(sum);
> return sum;
> }
>
> On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via Digitalmars-d-learn <
> digitalmars-d-learn@puremagic.com> wrote:
>
>> On Sunday, 13 August 2017 at 06:09:39 UTC, amfvcg wrote:
>>
>>> Hi all,
>>> I'm solving below task:
>>>
>>
>> Well, for one thing, you are preallocating in C++ code but not in D.
>>
>> On my machine, your version of the code completes in 3.175 seconds.
>> Changing it a little reduces it to 0.420s:
>>
>> T[] result = new T[input.length];
>> size_t o = 0;
>> for (uint i; i < input.length; i=min(i+range, input.length))
>> {
>> result[o] = sum(input[i..min(i+range, input.length)]);
>> o++;
>> }
>> return result[0..o];
>>
>> You can also use Appender from std.array.
>>
>
>


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Daniel Kozak via Digitalmars-d-learn
this works ok for me with the ldc compiler; gdc does not work on my arch
machine so I can not do a comparison to your C++ version (clang does not
work with your C++ code)

import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


T[] sum_subranges(T)(T[] input, uint range)
{
import std.array : appender;
auto app = appender!(T[])();
if (range == 0)
{
return app.data;
}
for (uint i; i < input.length; i=min(i+range, input.length))
{
app.put(sum(input[i..min(i+range, input.length)]));
}
return app.data;
}

unittest
{
assert(sum_subranges([1,1,1], 2) == [2, 1]);
assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
assert(sum_subranges([], 2) == []);
assert(sum_subranges([1], 2) == [1]);
assert(sum_subranges([1], 0) == []);
}


int main()
{
import std.range : iota, array;
auto v = iota(0,100).array;
int sum;
MonoTime beg = MonoTime.currTime;
for (int i=0; i < 100; i++)
sum += cast(int)sum_subranges(v,2).length;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(sum);
return sum;
}

On Sun, Aug 13, 2017 at 9:03 AM, Neia Neutuladh via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:

> On Sunday, 13 August 2017 at 06:09:39 UTC, amfvcg wrote:
>
>> Hi all,
>> I'm solving below task:
>>
>
> Well, for one thing, you are preallocating in C++ code but not in D.
>
> On my machine, your version of the code completes in 3.175 seconds.
> Changing it a little reduces it to 0.420s:
>
> T[] result = new T[input.length];
> size_t o = 0;
> for (uint i; i < input.length; i=min(i+range, input.length))
> {
> result[o] = sum(input[i..min(i+range, input.length)]);
> o++;
> }
> return result[0..o];
>
> You can also use Appender from std.array.
>


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Neia Neutuladh via Digitalmars-d-learn

On Sunday, 13 August 2017 at 06:09:39 UTC, amfvcg wrote:

Hi all,
I'm solving below task:


Well, for one thing, you are preallocating in C++ code but not in 
D.


On my machine, your version of the code completes in 3.175 
seconds. Changing it a little reduces it to 0.420s:


T[] result = new T[input.length];
size_t o = 0;
for (uint i; i < input.length; i=min(i+range, input.length))
{
result[o] = sum(input[i..min(i+range, input.length)]);
o++;
}
return result[0..o];

You can also use Appender from std.array.


Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread Roman Hargrave via Digitalmars-d-learn

On Sunday, 13 August 2017 at 06:09:39 UTC, amfvcg wrote:

Hi all,
I'm solving below task:

given container T and value R return sum of R-ranges over T. An 
example:

input : T=[1,1,1] R=2
output : [2, 1]

input : T=[1,2,3] R=1
output : [1,2,3]
(see dlang unittests for more examples)


Below c++ code compiled with g++-5.4.0 -O2 -std=c++14 runs on 
my machine in 656 836 us.
Below D code compiled with dmd v2.067.1 -O runs on my machine 
in ~ 14.5 sec.


If I had to guess, this could be due to the backend and optimizer.
I don't want to go into detail on my thoughts because I am not an 
expert on codegen optimization, but I might suggest running your 
test compiled with GDC (using optimization settings identical to 
G++'s) and with ldc2 using similar settings.




Re: D outperformed by C++, what am I doing wrong?

2017-08-13 Thread rikki cattermole via Digitalmars-d-learn

On 13/08/2017 7:09 AM, amfvcg wrote:

Hi all,
I'm solving below task:

given container T and value R return sum of R-ranges over T. An example:
input : T=[1,1,1] R=2
output : [2, 1]

input : T=[1,2,3] R=1
output : [1,2,3]
(see dlang unittests for more examples)


Below c++ code compiled with g++-5.4.0 -O2 -std=c++14 runs on my machine 
in 656 836 us.
Below D code compiled with dmd v2.067.1 -O runs on my machine in ~ 14.5 
sec.


Each language has its own "way of programming", and as I'm a beginner 
in D, I'm probably running through bushes instead of on the highway. 
Therefore I'd like to ask you, experienced dlang devs, to shed some 
light on "how to do it the dlang way".



C++ code:

#include <iostream>
#include <vector>
#include <numeric>
#include <chrono>
#include <algorithm>
#include <iterator>
#include <cstddef>


template<typename T, typename K>
std::vector<K> sum_elements(const T& beg, const T& end, std::size_t k, K def)
{
  if (k == 0) {
  return std::vector<K>{};
  }
  return sum_elements(beg, end, k, def, [](auto r, auto l){ return r+l; });
}

template<typename T, typename K, typename BinaryOp>
std::vector<K>
 sum_elements(
 const T& beg,
 const T& end,
 std::size_t k,
 K def,
 BinaryOp op)
{
 std::vector<K> out;
 out.reserve((std::distance(beg, end) - 1)/k + 1);
 for (auto it = beg; it!=end; std::advance(it, std::min(static_cast<std::size_t>(std::distance(it, end)), k)))
 {
 out.push_back(std::accumulate(it, std::next(it, std::min(static_cast<std::size_t>(std::distance(it, end)), k)), def, op));
 }
 return out;
}

int main()
{
 std::vector<int> vec;
 auto size = 100;
 vec.reserve(size);
 for (int i=0; i < size; ++i)
 vec.push_back(i);
 auto beg = std::chrono::system_clock::now();
 auto sum = 0;
 for (int i=0; i < 100; i++)
 sum += sum_elements(vec.begin(), vec.end(), 2, 0).size();
 auto end = std::chrono::system_clock::now();
 std::cout << std::chrono::duration_cast<std::chrono::microseconds>(end-beg).count() << std::endl;

 std::cout << sum << std::endl;

 return sum;
}


D code:

import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


T[] sum_subranges(T)(T[] input, uint range)
{
 T[] result;
 if (range == 0)
 {
 return result;
 }
 for (uint i; i < input.length; i=min(i+range, input.length))
 {
 result ~= sum(input[i..min(i+range, input.length)]);
 }
 return result;
}

unittest
{
 assert(sum_subranges([1,1,1], 2) == [2, 1]);
 assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
 assert(sum_subranges([], 2) == []);
 assert(sum_subranges([1], 2) == [1]);
 assert(sum_subranges([1], 0) == []);
}


int main()
{
 int[100] v;
 for (int i=0; i < 100; ++i)
 v[i] = i;
 int sum;
 MonoTime beg = MonoTime.currTime;
 for (int i=0; i < 100; i++)
 sum += cast(int)sum_subranges(v,2).length;
 MonoTime end = MonoTime.currTime;
 writeln(end-beg);
 writeln(sum);
 return sum;
}



Dmd compiles quickly, but doesn't create all that optimized code.
Try ldc or gdc and get back to us about it ;)



D outperformed by C++, what am I doing wrong?

2017-08-13 Thread amfvcg via Digitalmars-d-learn

Hi all,
I'm solving below task:

given container T and value R return sum of R-ranges over T. An 
example:

input : T=[1,1,1] R=2
output : [2, 1]

input : T=[1,2,3] R=1
output : [1,2,3]
(see dlang unittests for more examples)


Below c++ code compiled with g++-5.4.0 -O2 -std=c++14 runs on my 
machine in 656 836 us.
Below D code compiled with dmd v2.067.1 -O runs on my machine in 
~ 14.5 sec.


Each language has its own "way of programming", and as I'm a 
beginner in D, I'm probably running through bushes instead of on 
the highway. Therefore I'd like to ask you, experienced dlang 
devs, to shed some light on "how to do it the dlang way".



C++ code:

#include <iostream>
#include <vector>
#include <numeric>
#include <chrono>
#include <algorithm>
#include <iterator>
#include <cstddef>


template<typename T, typename K>
std::vector<K> sum_elements(const T& beg, const T& end, std::size_t k, K def)
{
 if (k == 0) {
 return std::vector<K>{};
 }
 return sum_elements(beg, end, k, def, [](auto r, auto l){ return r+l; });
}

template<typename T, typename K, typename BinaryOp>
std::vector<K>
sum_elements(
const T& beg,
const T& end,
std::size_t k,
K def,
BinaryOp op)
{
std::vector<K> out;
out.reserve((std::distance(beg, end) - 1)/k + 1);
for (auto it = beg; it!=end; std::advance(it, std::min(static_cast<std::size_t>(std::distance(it, end)), k)))
{
out.push_back(std::accumulate(it, std::next(it, std::min(static_cast<std::size_t>(std::distance(it, end)), k)), def, op));
}
return out;
}

int main()
{
std::vector<int> vec;
auto size = 100;
vec.reserve(size);
for (int i=0; i < size; ++i)
vec.push_back(i);
auto beg = std::chrono::system_clock::now();
auto sum = 0;
for (int i=0; i < 100; i++)
sum += sum_elements(vec.begin(), vec.end(), 2, 0).size();
auto end = std::chrono::system_clock::now();
std::cout << std::chrono::duration_cast<std::chrono::microseconds>(end-beg).count() << std::endl;

std::cout << sum << std::endl;

return sum;
}


D code:

import std.stdio : writeln;
import std.algorithm.comparison: min;
import std.algorithm.iteration: sum;
import core.time: MonoTime, Duration;


T[] sum_subranges(T)(T[] input, uint range)
{
T[] result;
if (range == 0)
{
return result;
}
for (uint i; i < input.length; i=min(i+range, input.length))
{
result ~= sum(input[i..min(i+range, input.length)]);
}
return result;
}

unittest
{
assert(sum_subranges([1,1,1], 2) == [2, 1]);
assert(sum_subranges([1,1,1,2,3,3], 2) == [2, 3, 6]);
assert(sum_subranges([], 2) == []);
assert(sum_subranges([1], 2) == [1]);
assert(sum_subranges([1], 0) == []);
}


int main()
{
int[100] v;
for (int i=0; i < 100; ++i)
v[i] = i;
int sum;
MonoTime beg = MonoTime.currTime;
for (int i=0; i < 100; i++)
sum += cast(int)sum_subranges(v,2).length;
MonoTime end = MonoTime.currTime;
writeln(end-beg);
writeln(sum);
return sum;
}



What am I doing wrong in this Pegged grammar? "Expected ' be ' but got 'groups'"

2015-09-27 Thread Enjoys Math via Digitalmars-d-learn

Here's a minimal example:

http://pastebin.com/mJDwGDbb

Is this a bug in Pegged?

Nothing seems to fix it.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-05 Thread Mike Parker via Digitalmars-d-learn

On 2/5/2015 4:53 PM, Entity325 wrote:

On Thursday, 5 February 2015 at 07:23:15 UTC, drug wrote:

Look at this
https://github.com/drug007/geoviewer/blob/master/src/sdlapp.d
I used here SDL and OpenGL and it worked. Ctor of SDLApp creates SDL
window with OpenGL context, may be it helps.


Tested your code. Symbols still not being loaded, though it might be
wise to borrow some of your organizational conventions.

Just for kicks I had a friend try to run the executable I produced, and
she said she got exactly the same output I did, so it's definitely
something that gets compiled into the program. A bug report has been
posted on Github, including the full source code(which I probably did
completely wrong) and a screenshot of the output.


I've already replied on github [1], but for anyone else following this 
thread -- the symbols actually are being loaded. The ones for 
DerelictGL3 anyway. It's a different problem entirely.


https://github.com/DerelictOrg/DerelictGL3/issues/29


Re: What am I doing Wrong (OpenGL SDL)

2015-02-05 Thread drug via Digitalmars-d-learn

On 05.02.2015 10:53, Entity325 wrote:

On Thursday, 5 February 2015 at 07:23:15 UTC, drug wrote:

Look at this
https://github.com/drug007/geoviewer/blob/master/src/sdlapp.d
I used here SDL and OpenGL and it worked. Ctor of SDLApp creates SDL
window with OpenGL context, may be it helps.


Tested your code. Symbols still not being loaded, though it might be
wise to borrow some of your organizational conventions.

Just for kicks I had a friend try to run the executable I produced, and
she said she got exactly the same output I did, so it's definitely
something that gets compiled into the program. A bug report has been
posted on Github, including the full source code(which I probably did
completely wrong) and a screenshot of the output.


Hmm, just checked it and it works. Change the dependencies in 
package.json to this:

"dependencies": {
"gl3n": "==1.0.0",
"glamour": "==1.0.1",
"derelict-gl3": "==1.0.12",
"derelict-fi": "==1.9.0"
},
and install, besides SDL2, the developer versions of freeimage and 
libcurl. Unfortunately the data format of the server has changed and 
the map won't be rendered properly, but a correct OpenGL context will 
undoubtedly be created.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-05 Thread Entity325 via Digitalmars-d-learn
Aldacron and I have determined that I'm doing something weird 
with the imports between gl.d and gl3.d. The functions I'm trying 
to access are deprecated, so the problem is less that they aren't 
loading and more that I can see them at all.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread Entity325 via Digitalmars-d-learn
I will see how much I can strip away and still reproduce the 
problem. If I find the cause before then, I'll be sure to report 
back here.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread Entity325 via Digitalmars-d-learn
I am having a problem which is similar in appearance to the OP's, 
but it does not seem to be similar in function. When I try to 
execute any of the Gl functions in the (protected) 
DerelictGL3.gl.loadSymbols function, I get an access violation. 
Checking for null reveals that the functions are indeed null and 
therefore have not been loaded into memory.


Now for where it gets weird.

Here's the code I'm using to load DerelictGL and create an OpenGL 
context in my test program(to make sure all the libraries are 
built correctly.)


SDL_GLContext context;

*sdlWindow = SDL_CreateWindow("New SDL Window",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
800, 600,
//SDL_WINDOW_FULLSCREEN_DESKTOP |
SDL_WINDOW_OPENGL);

context = SDL_GL_CreateContext(*sdlWindow);

/* Turn on double buffering with a 24bit Z buffer.
 * You may need to change this to 16 or 32 for your system */
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, bitsPerPixel);

GLVersion glVersion = DerelictGL.reload();
if(glVersion >= GLVersion.GL20)
{
    writeln("Sufficient OpenGL version found.");
    writeln("Loaded OGL version is: ", glVersion);
    string glVersionStr = to!(string)(glGetString( GL_VERSION ));
    writeln("GL version reported as ", glVersionStr);
}
else
{
    writeln("Error: OpenGL version ", glVersion, " not supported.");
    context = null;
}
if(glMatrixMode is null)
    writeln("*glMatrixMode not loaded");
else
    writeln("*All functions loaded properly.");

(sdlWindow is a pointer to an SDL_Window, a reference to which is 
passed into the function doing all this so that I can load the 
window into it)


Console output for this code:

Sufficient OpenGL version found.
Loaded OGL version is: GL45
GL version reported as 4.5.0 NVIDIA 347.09
*All functions loaded properly.

So that's good.

Now in the project that I'm actually working on, something very 
different happens.


Here's the code from that project:
[insert variable passing and assignments here, I'm trying to save 
you guys some reading.]
// Flags tell SDL about the type of window we are creating.
windowFlags = SDL_WINDOW_OPENGL; //|SDL_WINDOW_RESIZABLE;

if(fullscreen)
    windowFlags |= SDL_WINDOW_FULLSCREEN_DESKTOP;

//SDL_SetVideoMode( sdlvideoinfo.current_w, sdlvideoinfo.current_h, bpp, flags );

thisWindow = SDL_CreateWindow(startTitle.toStringz,
                              SDL_WINDOWPOS_UNDEFINED,
                              SDL_WINDOWPOS_UNDEFINED,
                              startSize.x, startSize.y,
                              windowFlags);
if(thisWindow is null)
    return false;

glContext = SDL_GL_CreateContext(thisWindow);

GLVersion glVersion = DerelictGL.reload();
if(glVersion >= GLVersion.GL20)
{
    writeln("Sufficient OpenGL version found.");
    writeln("Loaded OGL version is: ", glVersion);
    string glVersionStr = to!(string)(glGetString( GL_VERSION ));
    writeln("GL version reported as ", glVersionStr);
}
else
{
    writeln("Error: OpenGL version ", glVersion, " not supported.");
    glContext = null;
}
if(glMatrixMode is null)
    writeln("*glMatrixMode not loaded");
else
    writeln("*All functions loaded properly.");

My, doesn't that look familiar? That's because it wasn't working 
before, so I copied over the code from my test program, which 
includes diagnostic statements such as outputting the GL version 
loaded.


Console output for this code:

Sufficient OpenGL version found.
Loaded OGL version is: GL45
GL version reported as 4.5.0 NVIDIA 347.09
*glMatrixMode not loaded

What.

As far as I know, the code on both sides is functionally the 
same. The build environments are as identical as I can reasonably 
make them, and the execution directory for the second program has 
at minimum any libraries that the first program has.


And of course, I'm loading the new OpenGL versions, so that's not 
the problem.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread Entity325 via Digitalmars-d-learn

On Thursday, 5 February 2015 at 06:07:34 UTC, Entity325 wrote:
I will see how much I can strip away and still reproduce the 
problem. If I find the cause before then, I'll be sure to 
report back here.


I don't know if this is relevant, but while stripping down my 
code, I discovered that suddenly my test code that previously 
worked also no longer works.


I sincerely hope this isn't a driver issue, because that'll be a 
small nightmare to debug.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread drug via Digitalmars-d-learn

On 05.02.2015 09:57, Entity325 wrote:

On Thursday, 5 February 2015 at 06:07:34 UTC, Entity325 wrote:

I will see how much I can strip away and still reproduce the problem.
If I find the cause before then, I'll be sure to report back here.


I don't know if this is relevant, but while stripping down my code, I
discovered that suddenly my test code that previously worked also no
longer works.

I sincerely hope this isn't a driver issue, because that'll be a small
nightmare to debug.

Look at this https://github.com/drug007/geoviewer/blob/master/src/sdlapp.d
I used here SDL and OpenGL and it worked. Ctor of SDLApp creates SDL 
window with OpenGL context, may be it helps.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread Entity325 via Digitalmars-d-learn

On Thursday, 5 February 2015 at 07:23:15 UTC, drug wrote:
Look at this 
https://github.com/drug007/geoviewer/blob/master/src/sdlapp.d
I used here SDL and OpenGL and it worked. Ctor of SDLApp 
creates SDL window with OpenGL context, may be it helps.


Tested your code. Symbols still not being loaded, though it might 
be wise to borrow some of your organizational conventions.


Just for kicks I had a friend try to run the executable I 
produced, and she said she got exactly the same output I did, so 
it's definitely something that gets compiled into the program. A 
bug report has been posted on Github, including the full source 
code(which I probably did completely wrong) and a screenshot of 
the output.


Re: What am I doing Wrong (OpenGL SDL)

2015-02-04 Thread Mike Parker via Digitalmars-d-learn

On Thursday, 5 February 2015 at 01:51:05 UTC, Entity325 wrote:
I am having a problem which is similar in appearance to the 
OP's, but it does not seem to be similar in function. When I 
try to execute any of the Gl functions in the (protected) 
DerelictGL3.gl.loadSymbols function, I get an access violation. 
Checking for null reveals that the functions are indeed null 
and therefore have not been loaded into memory.




Normally, I would suggest that you make sure you're calling 
DerelictGL3.load, but if you weren't, then there's no way reload 
or glGetString would be working. I can't do much to help you 
without a minimal case that reproduces the problem, so if you can 
try to strip it down to something manageable and open an issue at 
the DerelictGL3 issue tracker[1], then I can go to work on it.


[1] https://github.com/DerelictOrg/DerelictGL3/issues


What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread Sean Campbell via Digitalmars-d-learn
I cannot figure out what is wrong with this code and why I keep 
getting object.Error: access violation. The code is simple 
tutorial code for SDL and OpenGL. What am I doing wrong? (The 
access violation seems to be with glGenBuffers.)

The Code

import std.stdio;
import derelict.opengl3.gl3;
import derelict.sdl2.sdl;

float vertices[] = [
0.0f,  0.5f, // Vertex 1 (X, Y)
0.5f, -0.5f, // Vertex 2 (X, Y)
-0.5f, -0.5f  // Vertex 3 (X, Y)
];

int main(string args[]) {
    DerelictSDL2.load();
    DerelictGL3.load();
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
    SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);
    SDL_Event windowEvent;
    GLuint vbo;
    glGenBuffers(1, &vbo); // Generate 1 buffer
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, (vertices[0]).sizeof * vertices.length, vertices.ptr, GL_STATIC_DRAW);
    while (true)
    {
        if (SDL_PollEvent(&windowEvent))
        {
            if (windowEvent.type == SDL_QUIT){
                return 0;
            } else if (windowEvent.type == SDL_KEYUP && windowEvent.key.keysym.sym == SDLK_ESCAPE){
                return 0;
            }
            SDL_GL_SwapWindow(window);
        }
    }
    return 0;
}


Re: What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread Misu via Digitalmars-d-learn
Can you try to add DerelictGL3.reload(); after 
SDL_GL_CreateContext ?


Re: What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread bearophile via Digitalmars-d-learn

Sean Campbell:

I cannot figure out what is wrong with this code and why I keep 
getting object.Error: access violation. The code is simple 
tutorial code for SDL and OpenGL. What am I doing wrong? (The 
access violation seems to be with glGenBuffers.)


I don't know where your problem is, but you can start helping 
yourself with tidier code (because that makes it easier to fix) 
and by adding some asserts on the pointers.




float vertices[] = [


Better to use the D syntax.



int main(string args[]){
DerelictSDL2.load();
DerelictGL3.load();
SDL_Init(SDL_INIT_EVERYTHING);


Better to add a space before the { and some blank lines to 
separate logically distinct pieces of your code.
Also, your main perhaps could be void (and use just empty returns 
inside it).



	SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 
600, SDL_WINDOW_OPENGL);


Perhaps it's better to use auto here.


	glBufferData(GL_ARRAY_BUFFER, (vertices[0]).sizeof * 
vertices.length, vertices.ptr, GL_STATIC_DRAW);


Better to define a little function that computes the bytes of an 
array and call it here; it's less bug-prone.


As a first step you can assert that all the pointers that should 
not be null in your program are not null. And then run a debugger.


Bye,
bearophile


Re: What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread Sean Campbell via Digitalmars-d-learn

On Friday, 4 July 2014 at 08:02:59 UTC, Misu wrote:
Can you try to add DerelictGL3.reload(); after 
SDL_GL_CreateContext ?


Yes, this solved the problem. However, why? Is it a problem with 
the SDL binding?


Re: What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread safety0ff via Digitalmars-d-learn

On Friday, 4 July 2014 at 09:39:49 UTC, Sean Campbell wrote:

On Friday, 4 July 2014 at 08:02:59 UTC, Misu wrote:
Can you try to add DerelictGL3.reload(); after 
SDL_GL_CreateContext ?


yes this solved the problem. however why? is it a problem with 
the SDL binding?


No.
https://github.com/DerelictOrg/DerelictGL3/blob/master/README.md


Re: What am I doing Wrong (OpenGL SDL)

2014-07-04 Thread Mike Parker via Digitalmars-d-learn

On 7/4/2014 6:39 PM, Sean Campbell wrote:

On Friday, 4 July 2014 at 08:02:59 UTC, Misu wrote:

Can you try to add DerelictGL3.reload(); after SDL_GL_CreateContext ?


yes this solved the problem. however why? is it a problem with the SDL
binding?


OpenGL on Windows requires a context be created before attempting to 
load any extensions or any later versions of OpenGL beyond 1.1. Although 
this is not an issue on other platforms, the Derelict binding makes it a 
requirement for consistency. DerelictGL3.load loads the DLL into memory 
along with the 1.0 & 1.1 function addresses. You can call that at any 
time, before or after creating a context. If you do not call 
DerelictGL3.reload, you will never load the extensions and 1.2+ 
functions. If you attempt to call it before creating a context, Derelict 
will throw an exception.
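
So the required ordering is: load, create a context, reload. A sketch of that skeleton (the window/context step is a placeholder for whatever SDL or GLFW code you use, not a Derelict API):

```d
import derelict.opengl3.gl3;

void initGL()
{
    // 1. Load the shared library and the OpenGL 1.0/1.1 symbols.
    //    This is safe before any context exists.
    DerelictGL3.load();

    // 2. Create a window and make an OpenGL context current here,
    //    e.g. SDL_GL_CreateContext(window) -- placeholder step.

    // 3. Only now load extensions and the GL 1.2+ entry points.
    //    Calling reload() with no current context throws.
    GLVersion ver = DerelictGL3.reload();
}
```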





What am I doing wrong here?

2012-10-14 Thread Martin
Hey everyone, I'm new to D so bear with me please. I've been 
trying to figure out what's up with the strange forward reference 
errors the compiler (DMD 2.060) is giving me. Here's a code 
snippet that's generating a forward reference error:


public class AliasTestClass(alias func)
{

static assert(__traits(isStaticFunction, func));

}

public class TestClass
{

private AliasTestClass!(randomFunction) test; // -

public static void randomFunction()
{
}

}

The strange part about it is that if I surround the 
randomFunction parameter with another pair of parentheses, like so


private AliasTestClass!((randomFunction)) test;

It works just fine. If I don't, however, I get a forward 
reference error:
Error: template instance main.AliasTestClass!(randomFunction) 
forward reference of randomFunction


Am I doing anything wrong or is this some kind of bug?


Re: What am I doing wrong here?

2012-10-14 Thread Simen Kjaeraas

On 2012-10-14, 14:28, Martin wrote:

Hey everyone, I'm new to D so bear with me please. I've been trying to  
figure out what's up with the strange forward reference errors the  
compiler (DMD 2.060) is giving me. Here's a code snippet that's  
generating a forward reference error:


public class AliasTestClass(alias func)
{

static assert(__traits(isStaticFunction, func));

}

public class TestClass
{

private AliasTestClass!(randomFunction) test; // -

public static void randomFunction()
{
}

}

The strange part about it is that if I surround the randomFunction  
parameter with another pair of parentheses, like so


private AliasTestClass!((randomFunction)) test;

It works just fine. If I don't, however, I get a forward reference error:
Error: template instance main.AliasTestClass!(randomFunction) forward  
reference of randomFunction


Am I doing anything wrong or is this some kind of bug?


It's a bug. Maybe it's already in Bugzilla (there are some forward-ref
bugs there already). Please file:

http://d.puremagic.com/issues/enter_bug.cgi

--
Simen


Re: What am I doing wrong here?

2012-10-14 Thread Martin

On Sunday, 14 October 2012 at 12:58:24 UTC, Simen Kjaeraas wrote:

On 2012-10-14, 14:28, Martin wrote:

Hey everyone, I'm new to D so bear with me please. I've been 
trying to figure out what's up with the strange forward 
reference errors the compiler (DMD 2.060) is giving me. Here's 
a code snippet that's generating a forward reference error:


public class AliasTestClass(alias func)
{

static assert(__traits(isStaticFunction, func));

}

public class TestClass
{

private AliasTestClass!(randomFunction) test; // <-

public static void randomFunction()
{
}

}

The strange part about it is that if I surround the 
randomFunction parameter with another pair of parentheses like 
so


private AliasTestClass!((randomFunction)) test;

It works just fine. If I don't, however, I get a forward 
reference error:
Error: template instance main.AliasTestClass!(randomFunction) 
forward reference of randomFunction


Am I doing anything wrong or is this some kind of bug?


It's a bug. Maybe it's already in Bugzilla (there are some 
forward-ref

bugs there already). Please file:

http://d.puremagic.com/issues/enter_bug.cgi


Oh, thank you for clarifying, I thought I was doing something 
wrong :)
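
For reference, the workaround reported in this thread expands to roughly the following complete program (a sketch, with the names from the thread; the extra parentheses around the alias argument are the only change from the failing version):

```d
public class AliasTestClass(alias func)
{
    // Verify at compile time that the alias refers to a static function.
    static assert(__traits(isStaticFunction, func));
}

public class TestClass
{
    // Wrapping the argument in an extra pair of parentheses side-steps
    // the forward-reference error reported above on DMD 2.060.
    private AliasTestClass!((randomFunction)) test;

    public static void randomFunction()
    {
    }
}

void main() {}
```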


Re: What am I doing wrong ?

2012-04-26 Thread Marco Leise
Am Sun, 22 Apr 2012 23:47:20 +0200
schrieb SomeDude lovelyd...@mailmetrash.com:

 void main() {
  auto array = new Foo[10];
--> for(int i = array.length; i > 1; i--) { array[i].x = i; }
  writeln();
  foreach(Foo f; array) { write(f.x);}
 }
 
 throws core.exception.RangeError@bug(8): Range violation on the 
 line with the arrow.
 
 What am I doing wrong ?

You could also try:

foreach_reverse(i, ref f; array) { f.x = i; }

-- 
Marco
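
Marco's one-liner expands to a complete program along these lines (a sketch; note that the index variable is a `size_t`, so a cast is needed to store it in the `int` field):

```d
import std.stdio;

struct Foo { int x; }

void main()
{
    auto array = new Foo[10];
    // foreach_reverse walks i from array.length - 1 down to 0, and `ref`
    // makes f an alias of the element, so the assignment is kept.
    foreach_reverse (i, ref f; array)
        f.x = cast(int) i;
    foreach (f; array)
        write(f.x); // prints 0123456789
    writeln();
}
```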



What am I doing wrong ?

2012-04-22 Thread SomeDude

Sorry for the noob questions, but

import std.stdio;

struct Foo {
int x;
}
void main() {
auto array = new Foo[10];
auto i = array.length;
foreach(Foo f; array) { f.x = --i; write(f.x);}
writeln();
foreach(Foo f; array) { write(f.x);}
}

gives me:

PS E:\DigitalMars\dmd2\samples> rdmd bug.d
9876543210
00

Also,

void main() {
auto array = new Foo[10];
--> for(int i = array.length; i > 1; i--) { array[i].x = i; }
writeln();
foreach(Foo f; array) { write(f.x);}
}

throws core.exception.RangeError@bug(8): Range violation on the 
line with the arrow.


What am I doing wrong ?


Re: What am I doing wrong ?

2012-04-22 Thread Dmitry Olshansky

On 23.04.2012 1:47, SomeDude wrote:

Sorry for the noob questions, but

import std.stdio;

struct Foo {
int x;
}
void main() {
auto array = new Foo[10];
auto i = array.length;
foreach(Foo f; array) { f.x = --i; write(f.x);}
writeln();
foreach(Foo f; array) { write(f.x);}


Here: Foo f is not a reference but a copy of each array element. Use
foreach( ref f; array)
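
The difference is easy to see side by side (a minimal sketch):

```d
import std.stdio;

struct Foo { int x; }

void main()
{
    auto array = new Foo[3];

    // Without ref: f is a copy of each element, so the write is lost.
    foreach (Foo f; array) { f.x = 7; }
    writeln(array[0].x); // 0

    // With ref: f aliases the element, so the write sticks.
    foreach (ref Foo f; array) { f.x = 7; }
    writeln(array[0].x); // 7
}
```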


}

gives me:

PS E:\DigitalMars\dmd2\samples> rdmd bug.d
9876543210
00

Also,

void main() {
auto array = new Foo[10];
--> for(int i = array.length; i > 1; i--) { array[i].x = i; }


Array indices are 0-based, thus the last index is array.length - 1.
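
A countdown loop that stays inside the array looks like this (one possible fix; it indexes with `i - 1` so every element from 9 down to 0 is visited):

```d
import std.stdio;

struct Foo { int x; }

void main()
{
    auto array = new Foo[10];
    // array.length is 10 but valid indices are 0 .. 9, so index with i - 1.
    for (size_t i = array.length; i > 0; i--)
        array[i - 1].x = cast(int)(i - 1);
    foreach (f; array)
        write(f.x); // prints 0123456789
    writeln();
}
```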


writeln();
foreach(Foo f; array) { write(f.x);}
}

throws core.exception.RangeError@bug(8): Range violation on the line
with the arrow.

What am I doing wrong ?



--
Dmitry Olshansky


Re: What am I doing wrong ?

2012-04-22 Thread Mike Wey

On 04/22/2012 11:47 PM, SomeDude wrote:

Sorry for the noob questions, but

import std.stdio;

struct Foo {
int x;
}
void main() {
auto array = new Foo[10];
auto i = array.length;
foreach(Foo f; array) { f.x = --i; write(f.x);}


Use ref when you want to modify the original array; without ref you are 
modifying a copy inside the foreach.


foreach(ref Foo f; array) { f.x = --i; write(f.x);}


writeln();
foreach(Foo f; array) { write(f.x);}
}

gives me:

PS E:\DigitalMars\dmd2\samples> rdmd bug.d
9876543210
00

Also,

void main() {
auto array = new Foo[10];
--> for(int i = array.length; i > 1; i--) { array[i].x = i; }
writeln();
foreach(Foo f; array) { write(f.x);}
}

throws core.exception.RangeError@bug(8): Range violation on the line
with the arrow.

What am I doing wrong ?


You set i to the array length of 10, but array indices are zero-based, so 
the last item in the array is at index 9.
In the first iteration of the loop you are trying to access the 
item at index 10 (array.length), which doesn't exist.


--
Mike Wey


Re: What am I doing wrong ?

2012-04-22 Thread SomeDude

On Sunday, 22 April 2012 at 21:50:32 UTC, Dmitry Olshansky wrote:

Omagad, thank you, too much Java is bad for your brains.