Re: initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 10 October 2017 at 13:54:16 UTC, Daniel Kozak wrote:

struct Double
{
    double v = 0;
    alias v this;
}

struct Foo(size_t n)
{
    Double[n] bar;
}


Interesting approach. But this might introduce problems later. 
For example, `Double` is implicitly convertible to `double`, but 
`Double[]` is not implicitly convertible to `double[]`, as the 
sketch below illustrates. Therefore I will stick with jmh530's 
solution for now, but thank you anyway.
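A minimal sketch of that limitation (reusing the `Double` wrapper 
from the quoted post; the function names are made up):

void takesValue(double x) {}
void takesSlice(double[] xs) {}

void main()
{
    Double d;
    Double[] ds = [Double(1), Double(2)];

    takesValue(d);     // fine: alias this converts a single element
    //takesSlice(ds);  // error: Double[] does not convert to double[]
}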




Re: initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 10 October 2017 at 13:48:16 UTC, Andrea Fontana wrote:

On Tuesday, 10 October 2017 at 13:36:56 UTC, Simon Bürger wrote:
Is there a good way to set them all to zero? The only way I 
can think of is using string-mixins to generate a string such 
as "[0,0,0,0]" with exactly n zeroes. But that seems quite an 
overkill for such a basic task. I suspect I might be missing 
something obvious here...


Maybe:

double[n] bar = 0.repeat(n).array;


This works fine, thanks a lot. I would have expected `.array` to 
return a dynamic array. But apparently the compiler is smart 
enough to know the length. Even the multi-dimensional case works 
fine:


double[n][n] bar = 0.repeat(n).array.repeat(n).array;
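For reference, a self-contained version of the snippet above, with 
the Phobos imports it relies on (member names other than `bar` are 
illustrative):

import std.array : array;
import std.range : repeat;

struct Foo(size_t n)
{
    double[n]    bar = 0.repeat(n).array;
    double[n][n] mat = 0.repeat(n).array.repeat(n).array;
}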




initializing a static array

2017-10-10 Thread Simon Bürger via Digitalmars-d-learn
I have a static array inside a struct which I would like to be 
initialized to all-zero like so


  struct Foo(size_t n)
  {
      double[n] bar = ... all zeroes ...
  }

(note that the default-initializer of double is nan, and not zero)

I tried

  double[n] bar = 0;   // does not compile
  double[n] bar = {0}; // neither does this
  double[n] bar = [0]; // compiles, but only sets the first
                       // element, ignoring the rest


Is there a good way to set them all to zero? The only way I can 
think of is using string-mixins to generate a string such as 
"[0,0,0,0]" with exactly n zeroes. But that seems quite an 
overkill for such a basic task. I suspect I might be missing 
something obvious here...
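For concreteness, the string-mixin workaround would look roughly 
like this (just a sketch - exactly the kind of overkill I would 
like to avoid):

string zeroes(size_t n)
{
    string s = "[";
    foreach (i; 0 .. n)
        s ~= (i == 0 ? "0" : ",0");
    return s ~ "]";
}

struct Foo(size_t n)
{
    mixin("double[n] bar = " ~ zeroes(n) ~ ";");
}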


Re: lambda function with "capture by value"

2017-08-06 Thread Simon Bürger via Digitalmars-d-learn

On Sunday, 6 August 2017 at 12:50:22 UTC, Adam D. Ruppe wrote:

On Saturday, 5 August 2017 at 19:58:08 UTC, Temtaime wrote:

(k){ dgs[k] = {writefln("%s", k); }; }(i);


Yeah, that's how I'd do it - make a function taking arguments 
by value that returns the delegate you actually want to store. 
(I also use this pattern in JavaScript btw for its `var`, though 
JS now has `let` which works without this trick... and D is 
supposed to work like JS `let`, it is just buggy).


You could also define a struct with members for the values you 
want, populate it, and pass one of its methods as your 
delegate. It is syntactically the heaviest but does give the 
most precise control (and you can pass the struct itself by 
value to avoid the memory allocation entirely if you want).


But for the loop, the pattern Temtaime wrote is how I'd prolly 
do it.


I like the (kinda cryptic IMO) look of this '(k){...}(i)' 
construction. But for my actual code I went with struct+opCall 
without any delegate at all.
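For reference, the struct+opCall version looks roughly like this 
(names are illustrative, not my actual code):

import std.stdio : writefln;

struct Printer
{
    int k;   // "captured by value" simply by being a member

    this(int k) { this.k = k; }

    void opCall() { writefln("%s", k); }
}

void main()
{
    Printer[3] ps;
    foreach (i; 0 .. 3)
        ps[i] = Printer(i);  // each instance stores its own copy of i
    foreach (ref p; ps)
        p();                 // prints 0, 1, 2
}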


Anyway, thanks for all your suggestions.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:54:22 UTC, ikod wrote:

Maybe std.functional.partial can help you.


Nope.

int i = 1;
alias dg = partial!(writeln, i);
i = 2;
dg();

still prints '2' as it should because 'partial' takes 'i' as a 
symbol, which is - for this purpose - kinda like "by reference".


Anyway, I solved my problem already a while ago by replacing 
delegates with custom structs that implement the call operator. 
I started this thread just out of curiosity, because as I see it, 
the purpose of lambdas is pretty much to remove the need for such 
custom constructions.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:54:22 UTC, ikod wrote:

On Saturday, 5 August 2017 at 18:45:34 UTC, Simon Bürger wrote:

On Saturday, 5 August 2017 at 18:22:38 UTC, Stefan Koch wrote:

[...]


No, sometimes I want i to be the value it has at the time the 
delegate was defined. My actual usecase was more like this:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
    dgs[i] = (){ writefln("%s", i); };


And I want three different delegates, not three times the 
same. I tried the following:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
{
    int j = i;
    dgs[i] = (){ writefln("%s", j); };
}

I thought that 'j' should be considered a new variable each 
time around, but sadly it doesn't work.


Maybe std.functional.partial can help you.


Thanks. But std.functional.partial takes the fixed arguments as 
template parameters, so they must be known at compile time. 
Anyway, I solved my problem a while ago by replacing delegates 
with custom structures which overload the call operator. That 
takes a couple of lines more but works fine; I opened this 
thread just out of curiosity.


Re: lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn

On Saturday, 5 August 2017 at 18:22:38 UTC, Stefan Koch wrote:

On Saturday, 5 August 2017 at 18:19:05 UTC, Stefan Koch wrote:

On Saturday, 5 August 2017 at 18:17:49 UTC, Simon Bürger wrote:
If a lambda function uses a local variable, that variable is 
captured using a hidden this-pointer. But this capturing is 
always by reference. Example:


int i = 1;
auto dg = (){ writefln("%s", i); };
i = 2;
dg(); // prints '2'

Is there a way to make the delegate "capture by value" so 
that the call prints '1'?


Note that in C++, both variants are available using
  [&]() { printf("%d", i); }
and
   [=]() { printf("%d", i); }
respectively.


No, currently there is not.


and it'd be rather useless I guess.
You want i to be whatever the context i is at the point where 
you call the delegate.

Not at the point where you define the delegate.


No, sometimes I want i to be the value it has at the time the 
delegate was defined. My actual usecase was more like this:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
    dgs[i] = (){ writefln("%s", i); };


And I want three different delegates, not three times the same. I 
tried the following:


void delegate()[3] dgs;
for(int i = 0; i < 3; ++i)
{
    int j = i;
    dgs[i] = (){ writefln("%s", j); };
}

I thought that 'j' should be considered a new variable each time 
around, but sadly it doesn't work.
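The pattern that does give three distinct delegates - as pointed 
out elsewhere in the thread - routes the value through a function 
parameter, so each delegate closes over its own copy. A sketch:

import std.stdio : writefln;

void main()
{
    void delegate()[3] dgs;
    for(int i = 0; i < 3; ++i)
        dgs[i] = (int k){ return (){ writefln("%s", k); }; }(i);
    foreach (dg; dgs)
        dg();  // prints 0, 1, 2
}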


lambda function with "capture by value"

2017-08-05 Thread Simon Bürger via Digitalmars-d-learn
If a lambda function uses a local variable, that variable is 
captured using a hidden this-pointer. But this capturing is 
always by reference. Example:


int i = 1;
auto dg = (){ writefln("%s", i); };
i = 2;
dg(); // prints '2'

Is there a way to make the delegate "capture by value" so that 
the call prints '1'?


Note that in C++, both variants are available using
   [&]() { printf("%d", i); }
and
   [=]() { printf("%d", i); }
respectively.


Re: Force usage of double (instead of higher precision)

2017-06-29 Thread Simon Bürger via Digitalmars-d-learn

On Thursday, 29 June 2017 at 00:07:35 UTC, kinke wrote:

On Wednesday, 28 June 2017 at 22:16:48 UTC, Simon Bürger wrote:

I am currently using LDC on 64-bit-Linux if that is relevant.


It is, as LDC on Windows/MSVC would use 64-bit compile-time 
reals. ;)


Changing it to double on other platforms is trivial if you 
compile LDC yourself. You'll want to use this alias: 
https://github.com/ldc-developers/ldc/blob/master/ddmd/root/ctfloat.d#L19, https://github.com/ldc-developers/ldc/blob/master/ddmd/root/ctfloat.h#L19


Huh, I will definitely look into this. This might be the most 
elegant solution. Thanks for the suggestion.


Re: Force usage of double (instead of higher precision)

2017-06-29 Thread Simon Bürger via Digitalmars-d-learn

Thanks a lot for your comments.

On Wednesday, 28 June 2017 at 23:56:42 UTC, Stefan Koch wrote:

[...]

Nice work, can you re- or dual-license under the Boost license?
I'd like to incorporate the qd type into newCTFE.


The original work is not mine but traces back to 
http://crd-legacy.lbl.gov/~dhbailey/mpdist/ which is under a 
(modified) BSD license. I just posted the link for context, sorry 
for the confusion. Doing a port to D does not allow me to change 
the license, even though not a single line from the original 
would remain (I think?).


I might do a completely new D implementation (still based on the 
original authors' research paper, not on the details of their 
code). But
1. I probably would only do the subset of functions I need for my 
work (i.e. double-double only, no quad-double, and only a limited 
set of transcendental functions).
2. Given that I have seen the original code, this might still be 
considered a "derivative work". I'm not sure, copyright law is 
kinda confusing to me in these cases.


Indeed you'll have no way to get rid of the excess precision 
except for creating a function per sub-expression.


No, doesn't seem to work. Here is a minimal breaking example:

import std.math : isNaN;

double sum(double x, double y) { return x + y; }
bool equals(double x, double y) { return x == y; }

enum pi = ddouble(3.141592653589793116e+00,
                  1.224646799147353207e-16);

struct ddouble
{
    double hi, lo;

    invariant
    {
        if(!isNaN(hi) && !isNaN(lo))
            assert(equals(sum(hi, lo), hi));
    }

    this(double hi, double lo)
    {
        this.hi = hi;
        this.lo = lo;
    }
}

But there are workarounds that seem to work:
1. Remove the constructor (I think this means the invariant is 
not checked anymore?)
2. Disable the invariant in CTFE (using "if(__ctfe) return;"), as 
sketched below.
3. Don't use any CTFE at all (by replacing the enum with immutable 
globals, initialized in "static this").
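For workaround 2, the invariant from the example above would 
become:

invariant
{
    // skip the check during CTFE, where excess precision breaks it
    if(__ctfe) return;
    if(!isNaN(hi) && !isNaN(lo))
        assert(equals(sum(hi, lo), hi));
}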



I was using the newCTFE fork which fixes this.


Does this mean that with your new CTFE code (which is quite 
impressive work as far as I can tell), floating point no longer 
gets promoted to higher precision? That would be really good news 
for hackish floating-point code.


Honestly, this whole "compiler gets to decide which type to 
actually use" thing really bugs me. Kinda reminiscent of C/C++ 
integer types which could in principle be anything at all. I 
thought D had fixed this by specifying "int = 32-bit, long = 
64-bit, float = IEEE-single-precision, double = 
IEEE-double-precision". Apparently not.


If I write "double", I would like to get IEEE-conform 
double-precision operations. If I wanted something depending on 
target-platform and compiler-optimization-level I would have used 
"real". Also this 80-bit-extended type is just a bad idea in 
general and should never be used (IMHO). Even on x86 processors, 
it only exists for backward-compatibility. No current instruction 
set (like SSE/AVX) supports it. Sorry for the long rant. But I am 
puzzled that the spec (https://dlang.org/spec/float.html) 
actually encourages double<->real conversions while at the same 
time it (rightfully) disallows "unsafe math optimizations" such 
as "x-x=0".


Force usage of double (instead of higher precision)

2017-06-28 Thread Simon Bürger via Digitalmars-d-learn
According to the standard (http://dlang.org/spec/float.html), the 
compiler is allowed to compute any floating-point statement in a 
higher precision than specified. Is there a way to deactivate 
this behaviour?


Context (reason why I need this): I am building a "double double" 
type, which essentially takes two 64-bit double-precision numbers 
to emulate a (nearly) quadruple-precision number. A simplified 
version looks something like this:


struct ddouble
{
    double high;
    double low;

    invariant
    {
        assert(high + low == high);
    }

    // ...implementations of arithmetic operations...
}

Everything works fine at run-time, but if I declare a 
compile-time constant like


enum pi = ddouble(3.141592653589793116e+00, 
1.224646799147353207e-16);


the invariant fails because it is evaluated using 80-bit 
"extended precision" during CTFE. All arithmetic operations rely 
on IEEE-conform double-precision, so everything breaks down if 
the compiler decides to replace them with higher precision. I am 
currently using LDC on 64-bit-Linux if that is relevant.
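For context, the basic building block of double-double arithmetic 
is an error-free transformation such as Knuth's two-sum - a 
generic sketch, not my actual implementation:

// computes s and err such that a + b == s + err holds *exactly*,
// provided every operation below is done in IEEE double precision;
// with 80-bit intermediates, err silently becomes wrong
void twoSum(double a, double b, out double s, out double err)
{
    s = a + b;
    double bb = s - a;
    err = (a - (s - bb)) + (b - bb);
}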


(If you are interested in the "double double" type, take a look 
here:

https://github.com/BrianSwift/MetalQD
which includes a double-double and even quad-double 
implementation in C/C++/Fortran)


Re: nested inout return type

2016-06-14 Thread Simon Bürger via Digitalmars-d-learn
On Tuesday, 14 June 2016 at 14:47:11 UTC, Steven Schveighoffer 
wrote:

* only do one mutable version of opSlice
* add implicit cast (using "alias this") for const(Slice!T) ->
Slice!(const(T)).


Interesting, but unfortunately, the compiler isn't eager about 
this conversion. auto x = s[5 .. 7] isn't going to give you a 
Slice!(const(T)), like an array would. But I like the idea.


Hm, you are right, in fact it doesn't work. Somehow it seemed to 
in my use case. Well, triplicate it is then, which isn't that bad 
using something like:


auto opSlice(size_t a, size_t b)
{
   // actual non-trivial code
}

auto opSlice(size_t a, size_t b) const
{
   return Slice!(const(T))(ptr, length).opSlice(a,b);
}

auto opSlice(size_t a, size_t b) immutable
{
   return Slice!(immutable(T))(ptr, length).opSlice(a,b);
}


Re: nested inout return type

2016-06-14 Thread Simon Bürger via Digitalmars-d-learn

On Tuesday, 14 June 2016 at 01:50:17 UTC, Era Scarecrow wrote:

On Monday, 13 June 2016 at 23:51:40 UTC, Era Scarecrow wrote:

 inout(Slice) opSlice(size_t a, size_t b) inout
 {
 return cast(inout(Slice)) Slice(ptr+a, b-a);
 }


 Seems the pointer has to be force-cast back to a normal 
pointer so the constructor can work. (Because the function is 
inout, ptr becomes inout(T*) )


 return cast(inout(Slice)) Slice(cast(T*)ptr+a, b-a);

 Beyond that it works as expected :) Writeln gives the 
following output on Slice with opSlices:


Slice!int(18FD60, 10)
const(Slice!int)(18FD60, 10)
immutable(Slice!int)(18FD90, 10)


Then instead of Slice!(const(T)) one would use const(Slice!T), 
and there is no analogue of the following:


const(T)[] s = ...
s = s[5..7]

which is quite common when parsing strings, for example. Still, it 
might be the cleanest approach. However, I found another solution 
for now without using any "inout":


* only do one mutable version of opSlice
* add implicit cast (using "alias this") for const(Slice!T) -> 
Slice!(const(T)).


So when calling opSlice on a const object, it will first cast to 
a mutable-slice-of-const-elements and then do the slice. This is 
closer to the behavior of "const(T[])", though it might have 
issues when using immutable and not only const. Not sure.
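A rough sketch of that setup (hedged - the exact code is in the 
repository linked below, and as noted elsewhere in the thread the 
conversion is unfortunately not applied in every situation):

struct Slice(T)
{
    T* ptr;
    size_t length;

    // the single mutable overload containing the actual logic
    Slice!T opSlice(size_t a, size_t b)
    {
        return Slice!T(ptr + a, b - a);
    }

    // implicit conversion const(Slice!T) -> Slice!(const(T))
    static if (!is(T == const))
    {
        Slice!(const(T)) toConst() const
        {
            return Slice!(const(T))(ptr, length);
        }
        alias toConst this;
    }
}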


Anyway, thanks for the help, and if someone cares, the full 
resulting code is on github.com/Krox/jive/blob/master/jive/array.d





nested inout return type

2016-06-13 Thread Simon Bürger via Digitalmars-d-learn
I'm writing a custom (originally multi-dimensional) Slice-type, 
analogous to the builtin T[], and stumbled upon the problem that 
the following code won't compile. The workaround is simple: just 
write the function three times for mutable/const/immutable. But 
as "inout" was invented to make that unneccessary I was wondering 
if there is a clever way to make this work.



struct Slice(T)
{
    T* ptr;
    size_t length;

    Slice!(inout(T)) opSlice(size_t a, size_t b) inout
    {
        return Slice!(inout(T))(ptr+a, b-a);
    }
}


Re: custom memory management

2014-02-28 Thread Simon Bürger

On Friday, 28 February 2014 at 10:40:17 UTC, Dicebot wrote:
On Thursday, 27 February 2014 at 21:46:17 UTC, Simon Bürger 
wrote:
Sadly, this is incorrect as well, because if such an object is 
collected by the gc but the gc decides not to run the 
destructor, the buffer will never be freed.


I think you misinterpret the spec. If an object is collected, its 
destructor is guaranteed to run. But not all objects are 
guaranteed to be collected. For example, no collection happens 
at program termination.


So it is OK to release, in a destructor, resources that will be 
reclaimed by the OS at program termination anyway. The list of 
such resources is OS-specific, but heap memory tends to be among 
them.


If you are right, that would mean that the current dmd/runtime 
does not follow the spec. Curious. The current implementation is 
not aware of struct destructors on the heap, i.e. the 
GC.BlkAttr.FINALIZE flag is not set for structs (or arrays of 
structs).


In the struct-inside-a-class example, the struct destructor is 
called by the (automatically generated) class destructor. The gc 
only knows about the class destructor and calls only that one 
directly.


custom memory management

2014-02-27 Thread Simon Bürger
I am trying to implement a structure with value semantics which 
uses an internal buffer. The first approach looks like this:


struct S
{
    byte[] buf;

    this(int size) { buf = new byte[size]; }
    this(this)     { buf = buf.dup; }
    ~this()        { delete buf; }
}

This works fine as long as such an object is allocated on the 
stack (so the destructor is called at the end of the scope). 
However when the destructor is called by the gc, the buffer might 
already be collected, and freeing it a second time is obviously 
invalid.


My second approach was to allocate the buffer outside the 
gc-managed heap, like so:


this(int size) {
    buf = (cast(byte*)core.stdc.stdlib.malloc(size))[0..size];
}

~this() {
    core.stdc.stdlib.free(buf.ptr);
}

Sadly, this is incorrect as well, because if such an object is 
collected by the gc but the gc decides not to run the 
destructor, the buffer will never be freed.


If the gc either always or never called struct destructors, one 
of my two solutions would work. But the current situation is 
(in compliance with the language spec) that they are called 
_sometimes_, which breaks both solutions.


One way the first approach could work would be for the destructor 
to check whether it was called by the gc, and skip the 
deallocation in that case. But as far as I know, the gc does not 
provide such a method. It would be trivial to implement, but 
seems kinda hackish.


I know the suggested way in D is to not deallocate the buffer at 
all, but rely on the gc to collect it eventually. But it still 
puzzles me that it seems to be impossible to do. Anybody have an 
idea how I could make it work?


thanks, simon


Re: What is format of dmd -deps ... output?

2014-02-27 Thread Simon Bürger

On Wednesday, 26 February 2014 at 13:38:55 UTC, Cooler wrote:

Is there any official/unofficial documentation about -deps
command line option output?


This does not answer your question directly, but if you want to 
create dependency files for use with make, you can use rdmd 
--makedepend.


Re: custom memory management

2014-02-27 Thread Simon Bürger

On Thursday, 27 February 2014 at 22:04:50 UTC, Namespace wrote:
A struct is a value type. So it is passed by value and is 
placed on the stack.


{
S s;
}

S DTor is called at the end of the scope. So you can rely on 
RAII as long as you use structs.


On the stack yes. But not on the heap:

S[] s = new S[17];
s = null;

The GC will collect the memory eventually, but without calling 
any destructor.


On the other hand:

class C { S s; }
C c = new C;
c = null;

In this case, when the gc collects the memory, it will call both 
destructors, the one of C as well as the one of S.


Re: custom memory management

2014-02-27 Thread Simon Bürger
On Thursday, 27 February 2014 at 22:15:41 UTC, Steven 
Schveighoffer wrote:

On Thu, 27 Feb 2014 16:46:15 -0500, Simon Bürger [...]
More and more, I think a thread-local flag of "I'm in the GC 
collection cycle" would be hugely advantageous -- if it doesn't 
already exist...


I don't think it does, so I actually implemented it myself (not 
thread-local, but same locking as the rest of the gc): 
github.com/Krox/druntime/commit/38b718f1dcf08ab8dabb6eed10ff1073e215890f 
. But now that you mention it, a thread-local flag might be 
better.
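With such a flag, the destructor from my first approach could 
then skip the manual free during a collection, roughly like this 
(the query name here is made up; the real one is whatever the 
commit above introduces):

~this()
{
    if(!gcIsCollecting())  // hypothetical query, see the linked commit
        delete buf;
}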


Re: Switching from Java to D: Beginner questions, multiplatform issues, etc.

2014-02-27 Thread Simon Bürger

What exactly is the difference between C and D headers?

D itself does not use headers at all. But you will need D headers 
if you want to call a C library from D. The translation is mostly 
syntactic and straightforward, like:

* replace #define-constants with enums
* replace macros with (templated) functions
* replace #ifdef with static-if / version
etc...
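For example, a typical C declaration and a hand-written D 
translation might look like this (hypothetical header contents):

// C header:
//   #define BUFFER_SIZE 256
//   int process(const char *name, int flags);

// D translation:
enum BUFFER_SIZE = 256;
extern(C) int process(const(char)* name, int flags);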


Metaprogramming is always welcomed; makes things much easier in 
the long run.
I'll probably find this out on my own eventually, but does D 
support operator overloading? That would be simply amazing.

Absolutely it does. Even in a more meta-way than C++:

struct S
{
    int x;

    // overloads + - * / % etc. all at once
    S opBinary(string op)(S other)
    {
        return S(mixin("x " ~ op ~ " other.x"));
    }
}
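A quick usage sketch (values are made up):

void main()
{
    auto a = S(6), b = S(3);
    assert((a + b).x == 9);
    assert((a / b).x == 2);
}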