Re: Custom separator in array format

2020-01-28 Thread Malte via Digitalmars-d-learn

On Tuesday, 28 January 2020 at 08:54:16 UTC, Simen Kjærås wrote:

import std.stdio : writefln;
import std.format : format;
import std.algorithm : map;

auto vec = [1000, 2000, 3000];

writefln("%-(%s\t%)", vec.map!(e => format!"%,2?d"('_', e)));




That helps, thank you very much.
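For anyone finding this thread later, here is a self-contained version of that workaround, assembled from the snippets above: each element is formatted separately with the ?-separator, and the already-formatted strings are then joined with the %-(...%) list format.

```d
import std.stdio : writefln;
import std.format : format;
import std.algorithm : map;

void main()
{
    auto vec = [1000, 2000, 3000];

    // format!"%,2?d"('_', e) applies the custom grouping separator to each
    // element; %-(%s\t%) then joins the formatted strings with tabs
    writefln("%-(%s\t%)", vec.map!(e => format!"%,2?d"('_', e)));
    // prints 10_00, 20_00, 30_00 separated by tabs
}
```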


Custom separator in array format

2020-01-27 Thread Malte via Digitalmars-d-learn
I want to format an array using the %(...%) syntax. How can I 
change the separator? I tried to use ? and add it as an 
additional parameter, but that doesn't seem to work on arrays:


import std;
void main()
{
writeln("This works:");
writefln("%,2?d", '_', 2000); // 20_00

auto vec = [1000, 2000, 3000];
writeln("This should fail (separator character expected) but the ? is just ignored:");

writefln("%(%,2?d\t%)", vec); // 10,0020,00   30,00
writeln("This throws:");
writefln("%(%,2?d\t%)", '_', vec); // std.format.FormatException@/dlang/dmd/linux/bin64/../../src/phobos/std/format.d(2271): incompatible format character for integral argument: %(

}


Re: taskPool.reduce vs algorithm.reduce

2018-07-19 Thread Malte via Digitalmars-d-learn

On Wednesday, 11 July 2018 at 10:07:33 UTC, Timoses wrote:
On Wednesday, 11 July 2018 at 08:31:30 UTC, Dorian Haglund 
wrote:

[...]


As the error message says, taskPool.reduce is a non-global 
template; it's embedded in a taskPool struct. I can't say why a 
delegate cannot be used with such a template. I'd be interested 
in hearing the reason.

(See Paul's reply).

I'm trying to trick around it, but can't get this to work...

https://run.dlang.io/is/EGbtuq


You can make it all global: https://run.dlang.io/is/Kf8CLC

From my own experience: Use only parallel foreach and save 
yourself a lot of hassle. That just works.
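For what it's worth, taskPool.reduce does work when nothing has to close over a local frame, e.g. with a string lambda; a minimal sketch:

```d
import std.parallelism : taskPool;
import std.range : iota;

void main()
{
    // a string lambda like "a + b" is instantiated at module scope,
    // so there is no local frame for taskPool's nested template to reject
    auto sum = taskPool.reduce!"a + b"(iota(1, 101));
    assert(sum == 5050);
}
```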


Pass arguments at compile time

2018-06-13 Thread Malte via Digitalmars-d-learn
I want to import a config file at compile time, but also need a 
way to have multiple configurations.


With gcc you could do something like 
-DIMPORTFROM='"MyConfigFile.txt"'. Is there any equivalent in D?


Hardcoding the config files for different versions and using that 
is not an option. Requiring a fixed filename and using different 
paths to configure with -J wouldn't be a good solution for my 
specific situation either.


So far my best idea was to write the desired filename to a file 
and import twice.

enum config = import(import("importFrom.txt"));

That works, but that is additional work for the build script and 
feels like a hack.
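A sketch of that double-import hack in context, assuming the build script writes the config file's name (e.g. MyConfigFile.txt) into importFrom.txt, and both files live in a directory passed with -J:

```d
// compile with: dmd -J=configdir app.d
import std.string : strip;

// importFrom.txt holds the name of the real config file;
// strip removes a trailing newline the build script may emit
enum configFileName = import("importFrom.txt").strip;
enum config = import(configFileName);

pragma(msg, "using config file: " ~ configFileName);
```

The file names here are only placeholders; the point is that the argument to import() merely has to be a compile-time string, which is why the nested form works at all.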


Re: New programming paradigm

2018-06-03 Thread Malte via Digitalmars-d-learn

On Saturday, 2 June 2018 at 23:12:46 UTC, DigitalDesigns wrote:

On Thursday, 7 September 2017 at 22:53:31 UTC, Biotronic wrote:

[...]


I use something similar where I use structs behaving like 
enums. Each field in the struct is an "enum value" with an 
attribute; this is because I have not had luck with using 
attributes on enum values directly, and structs allow enums 
with a bit more power.


[...]


You might want to have a look at 
https://wiki.dlang.org/Dynamic_typing
This sounds very similar to what you are doing. I never really 
looked into it, because I prefer to know which type is used and 
to get errors if I try to do stupid things, but I think it's a 
cool idea.


Re: no [] operator overload for type Chunks!(char[])

2018-05-30 Thread Malte via Digitalmars-d-learn

On Wednesday, 30 May 2018 at 21:27:44 UTC, Ali Çehreli wrote:

On 05/30/2018 02:19 PM, Malte wrote:
Why does this code complain at the last line about a missing 
[] operator overload?


auto buffer = new char[6];
auto chunked = buffer.chunks(3);
chunked[1][2] = '!';

Same happens with wchar.
Dchar and byte work as expected.


UTF-8 auto decoding strikes again. :)

Even though the original container is char[], passing it 
through Phobos algorithms generates a range of dchar. The thing 
is, those dchar elements are generated (decoded from chars) "on 
the fly" as one iterates over the range. Which means there is 
no array of dchar to speak of, so there is no random access.


Ali


I see. Not what I would have expected, but makes sense for people 
working with UTF-8 strings.


Thanks for the fast answer.
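A workaround, in case anyone needs random access into chunks of a char buffer: operate on the ubyte representation so Phobos never auto-decodes. std.string.representation aliases the same memory, so writes land in the original buffer.

```d
import std.range : chunks;
import std.string : representation;

void main()
{
    auto buffer = new char[6];
    buffer[] = ' ';

    // representation gives a ubyte[] over the same memory,
    // so chunks sees a true random-access range
    auto chunked = buffer.representation.chunks(3);
    chunked[1][2] = cast(ubyte) '!';

    assert(buffer[5] == '!');
}
```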


no [] operator overload for type Chunks!(char[])

2018-05-30 Thread Malte via Digitalmars-d-learn
Why does this code complain at the last line about a missing [] 
operator overload?


auto buffer = new char[6];
auto chunked = buffer.chunks(3);
chunked[1][2] = '!';

Same happens with wchar.
Dchar and byte work as expected.


Re: Code repetition

2018-05-27 Thread Malte via Digitalmars-d-learn
On Sunday, 27 May 2018 at 06:47:38 UTC, IntegratedDimensions 
wrote:
A string mixin is too messy since it treats the code as a 
string losing all syntax highlighting, etc.


I'd love to have something like a template mixin where I can 
just do


mixin template fooSetup(ret)
{
// setup stuff
int x;
scope(exit) something;

}


and

void foo()
{
   fooSetup(3);
}


void foo4()
{
   fooSetup(13);
}

and everything behave as it should. I'm guessing this is going 
to be impossible to achieve in D?


Well, if you want to have the same power as Cs textual 
replacement, you need string mixins. Often you can replace it 
with templates, but that depends on the application. I would 
avoid it if you can though, but more for debugging and not 
because of syntax highlighting.


void main()
{
    import std.stdio;

    int y = 1;
    mixin(fooSetup(3));
    writeln("x is ", x);
    x = 5;
}

string fooSetup(int val) {
    import std.conv;

    return q{
        int x = } ~ val.to!string ~ q{;
        scope(exit) {
            writeln("x was ", x);
            writeln("y was ", y);
        }
    };
}


Re: Remove closure allocation

2018-05-27 Thread Malte via Digitalmars-d-learn

On Saturday, 26 May 2018 at 18:10:30 UTC, Neia Neutuladh wrote:

On Saturday, 26 May 2018 at 15:00:40 UTC, Malte wrote:
This compiles with DMD, however it returns random numbers 
instead of the value I passed in. Looks like a bug to me. 
Should that work or is there any other pattern I could use for 
that?


Filed as https://issues.dlang.org/show_bug.cgi?id=18910

As for the larger issue, you have to store `q` somewhere.

The compiler could store it on the stack, but the compiler 
would have to do a lot of work to prove that that's safe -- 
that the return value from `arr.map` doesn't escape the current 
function, that `map` doesn't save the thing you passed to it 
anywhere, that sort of thing.


And that sort of analysis is flaky. It's a recipe for code that 
compiles on one version of a compiler and not the next, a lot 
of bug reports that are hard to track down, that sort of thing. 
And it means that it has to assume the worst for functions that 
it doesn't have the source for -- when you pass a delegate, 
when you call an extern(D) function, when you call a virtual 
method.


So the compiler does the safe thing, and it has the GC allocate 
a bit of memory to hold `q`.


The way around that is the `scope` keyword. The compiler can do 
that complex analysis one function at a time; that's 
sufficiently simple. So if you wrote a `map` function, you 
could define it as taking a `scope U delegate(T)` and the 
compiler wouldn't need to use the GC there.


That obviously adds restrictions on how you can write that 
function.


I had filed a bug report already, but this is only slightly 
related. It might be the solution to my problem and just not 
working because of a compiler bug, but it could also be supposed 
to give me a compile error instead, for reasons I don't know yet.


I understand the general issue of why a closure can be necessary, 
but there shouldn't be one here, because all I have are value 
types (int) and strongly pure function calls. In fact, even the 
immutable on q should be merely an optimization hint that the 
compiler could use the same register when inlining instead of 
having to make a copy.
Adding scope and immutable at will wouldn't be an issue, but it 
doesn't solve the problem, and it wouldn't make sense to me if it did.


In fact, my actual issue is not that I need a @nogc function. I 
have working code designed around that principle (data 
transformation using pure functions) and I want to parallelise 
it by just replacing map with taskPool.amap.
I had exactly that problem before and was more or less randomly 
trying changes to get rid of the "cannot access frame of 
function" error messages. However, if I try to make it a @nogc 
function first, the compiler gives me useful information like 
"onlineapp.identity.__lambda4 closes over variable q at 
onlineapp.d(24)", so I can have a closer look at q instead of 
blindly guessing why the compiler thinks it needs another frame 
pointer.


What I did last time is rewrite the whole function to fit into a 
parallel foreach and I guess I have to do the same here too.


It's not that I can't rewrite it to something like

int identity(immutable int q) pure nothrow @safe @nogc
{
    static immutable arr = [42];
    int getSecondArgument(int a, int b)
    {
        return b;
    }

    foreach (i, a; arr) {
        auto tmp = getSecondArgument(a, q);
        if (i == 0) return tmp;
    }

    assert(0);
}

I just don't think I should have to rewrite it that much. But I 
currently see no other way to use taskPool, and in general I see 
very few opportunities to utilize taskPool.amap: it looks like a 
handy tool at first sight, but it is just too limited when you 
actually try to do something with it.
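For reference, the parallel-foreach shape that avoids the closure entirely (a sketch; transform stands in for the pure data-transformation function):

```d
import std.parallelism : parallel;

int transform(int x) pure nothrow @safe @nogc
{
    return 2 * x;
}

void main()
{
    auto input  = [1, 2, 3, 4];
    auto output = new int[input.length];

    // each iteration writes to a distinct slot, so no closure over
    // shared mutable state is needed
    foreach (i, x; parallel(input))
        output[i] = transform(x);

    assert(output == [2, 4, 6, 8]);
}
```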


Remove closure allocation

2018-05-26 Thread Malte via Digitalmars-d-learn
I was trying to get a function that has a closure allocation to 
compile with @nogc.


Assuming this function:

int identity(immutable int q) pure nothrow @safe
{
    import std.algorithm;

    static immutable arr = [42];
    int getSecondArgument(int a, int b)
    {
        return b;
    }

    return arr.map!(a => getSecondArgument(a, q))[0];
}

When I add @nogc it complains about using q.
My idea was to change the closure to have another parameter with 
default initializer:

int identity(immutable int q) pure nothrow @safe @nogc
{
    import std.algorithm;

    static immutable arr = [42];
    int getSecondArgument(int a, int b)
    {
        return b;
    }

    return arr.map!((int a, int b = q) => getSecondArgument(a, b))[0];
}


This compiles with DMD, however it returns random numbers instead 
of the value I passed in. Looks like a bug to me. Should that 
work or is there any other pattern I could use for that?


Re: Locking data

2018-05-23 Thread Malte via Digitalmars-d-learn

On Wednesday, 23 May 2018 at 13:36:20 UTC, rikki cattermole wrote:

On 24/05/2018 1:29 AM, Malte wrote:
On Wednesday, 23 May 2018 at 13:24:35 UTC, rikki cattermole 
wrote:

On 24/05/2018 1:20 AM, Malte wrote:
On Tuesday, 22 May 2018 at 21:45:07 UTC, 
IntegratedDimensions wrote:

an idea to lock data by removing the reference:

class A
{
   Lockable!Data data;
}

[...]


This sounds like what you are looking for is an atomic swap. 
Afaik it doesn't exist in the standard library. You could 
use asm for the XCHG, but that would make your code 
x86-dependent.
I think the easiest way would be to just use a mutex and 
tryLock.


What are you talking about? :p

http://dpldocs.info/experimental-docs/core.atomic.cas.1.html


That is compare-and-set.
To make an exchange using cas, I first have to read the value, 
then write to it, expecting it to still be the value I read 
before. Those are more instructions than just a swap. If a cas 
fails, I have to redo everything. An exchange never fails; I 
just might not get the result I would like to have (null 
instead of a pointer).


So you want a load + store as swap in a single function (that 
is optimized).
In that case, please create an issue on bugzilla 
(issues.dlang.org).


No, as I said, that is already a single instruction on x86: 
https://www.felixcloutier.com/x86/XCHG.html
Just being able to use that instruction from the standard library 
would be good.


You could also use it with compiler intrinsics. Something like

import ldc.intrinsics;

T* tryGetPtr(T)(T** a) {
    return cast(T*) llvm_atomic_rmw_xchg!size_t(cast(shared(size_t)*) a, 0);
}

void restorePtr(T)(T** a, T* b) {
    llvm_atomic_rmw_xchg!size_t(cast(shared(size_t)*) a, cast(size_t) b);
}


I would just go with mutexes unless you really need to go that 
low level though, much saner.


Re: Locking data

2018-05-23 Thread Malte via Digitalmars-d-learn

On Wednesday, 23 May 2018 at 13:24:35 UTC, rikki cattermole wrote:

On 24/05/2018 1:20 AM, Malte wrote:
On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions 
wrote:

an idea to lock data by removing the reference:

class A
{
   Lockable!Data data;
}

[...]


This sounds like what you are looking for is an atomic swap. 
Afaik it doesn't exist in the standard library. You could use 
asm for the XCHG, but that would make your code x86-dependent.
I think the easiest way would be to just use a mutex and 
tryLock.


What are you talking about? :p

http://dpldocs.info/experimental-docs/core.atomic.cas.1.html


That is compare-and-set.
To make an exchange using cas, I first have to read the value, 
then write to it, expecting it to still be the value I read 
before. Those are more instructions than just a swap. If a cas 
fails, I have to redo everything. An exchange never fails; I just 
might not get the result I would like to have (null instead of a pointer).


Re: Locking data

2018-05-23 Thread Malte via Digitalmars-d-learn
On Tuesday, 22 May 2018 at 21:45:07 UTC, IntegratedDimensions 
wrote:

an idea to lock data by removing the reference:

class A
{
   Lockable!Data data;
}

[...]


This sounds like what you are looking for is an atomic swap. 
Afaik it doesn't exist in the standard library. You could use asm 
for the XCHG, but that would make your code x86-dependent.

I think the easiest way would be to just use a mutex and tryLock.
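A minimal sketch of that mutex + tryLock approach (Lockable!Data is from the quoted post; here a plain int[] stands in for the data, and withData is a made-up helper name):

```d
import core.sync.mutex : Mutex;

class A
{
    private Mutex m;
    private int[] data;   // stand-in for Lockable!Data

    this() { m = new Mutex; }

    // runs dg with exclusive access; returns false if the data
    // is currently held by someone else
    bool withData(scope void delegate(ref int[]) dg)
    {
        if (!m.tryLock())
            return false;
        scope(exit) m.unlock();
        dg(data);
        return true;
    }
}

void main()
{
    auto a = new A;
    assert(a.withData((ref int[] d) { d ~= 1; }));
}
```

As an aside, newer druntime versions have since gained core.atomic.atomicExchange for the lock-free route, though it wasn't available when this thread was written.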


Re: assertNotThrown (and asserts in general)

2018-05-23 Thread Malte via Digitalmars-d-learn

On Monday, 21 May 2018 at 19:44:17 UTC, Jonathan M Davis wrote:
Walter wants to use assertions to then have the compiler make 
assumptions about the code and optimized based on it, but he 
hasn't implemented anything like that, and there are a number 
of arguments about why it's a very bad idea - in particular, if 
it allows the compiler to have undefined behavior if the 
assertion would have failed if it were left in. So, what is 
actually going to happen with that is unclear. There are folks 
who want additional performance benefits by allowing assertions 
to work as hints to the compiler, and there are folks who want 
them to truly just be for debugging purposes, because they 
don't want the compiler to then generate code that makes the 
function behave even more badly when the assertion would have 
failed but had been compiled out.
If your code is based on untrue assumptions, you probably have a 
bug anyway. If you used asserts and an optimization exposed the 
bug, you will at least find it as soon as you remove the release 
flag.
It shouldn't be a problem to make it a compiler flag for those 
who don't want it: defaulted to true with -O3, but can be turned 
off with -fno-assert-optimize or something like that.


Personally, my big concern is that it can't introduce undefined 
behavior, or it would potentially violate memory safety in 
@safe code, which would then mean that using assertions in 
@safe code could make your code effectively @system, which 
would defeat the whole purpose of @safe.
Fair point, that probably limits the optimizations that can be 
done. If I have an assert that an array has 10 elements when it 
actually has only 3 and do some operations on it, that could 
read/write to memory I have never allocated.
However, some optimizations should still be possible in SafeD, 
like eliminating if conditions whose results are known at compile 
time given that the asserts are true.
Loop unrolling and auto-vectorization without checking for the 
rest should also be possible if you have an assert that the 
length of an array is divisible by something.
Neither of them should be able to add unsafe instructions. The 
worst that could happen is relying on a wrong value to access an 
element of an array and failing a bounds check.


assertNotThrown doesn't use any assertions. It explicitly 
throws an AssertError (which is what a failed assertion does 
when it's not compiled out). assertNotThrown would have to use 
a version(assert) block to version the checks to try and mirror 
what the assert statement does. However, assertNotThrown is 
specifically intended for unit tests. IIRC, assertions in unit 
tests are left in when compiled with -unittest (otherwise, 
compiling with -release and -unittest - like Phobos does for 
one of its passes as part of its unittest build - would not 
work), but I don't think that the assertions outside of 
unittest blocks get left in in that case, so using 
version(assert) on assertThrown or assertNotThrown might break 
them. I'm not sure. Regardless, using them for testing what 
assertions do is just wrong. You need to test actual assert 
statements if that's what you want to be testing.
Okay, clearly a misunderstanding on my side then. Thanks for 
clarifying that.


Re: Efficient idiom for fastest code

2018-05-23 Thread Malte via Digitalmars-d-learn
On Wednesday, 23 May 2018 at 02:24:08 UTC, IntegratedDimensions 
wrote:
In some cases the decision holds for continuous ranges. For 
some 0 <= n <= N the decision is constant, but n is 
arbitrary(determined by unknown factors at compile time).


One can speed up the routine by using something akin to a 
simplified strategy pattern where one uses 
functions/delegates/lambdas to code a faster version without 
the test:



for(int i = 0; i < N; i++)
{
    d = (){ if(decision(i)) A; else d = () { B; }; };
    d();
}

this code basically reduces to

for(int i = 0; i < N; i++)
{
B;
}

We assume that in this particular case, once the decision fails 
it always fails.


Therefore, once the decision fails it kicks in the "faster" 
code. Suppose decision is very slow.


I would just do

int i=0;
for(;decision(i) && i < N; i++)
{
   A;
}
for(;i < N; i++)
{
   B;
}


This could be turned to a mixin template with something like this:

mixin template forSplit(alias condition, alias A, alias B)
{
   void execute()
   {
   int i = 0;
   for (; condition(i) && i < N; i++)
   {
   A();
   }
   for (; i < N; i++)
   {
   B();
   }
   }
}


and to use it in code (assuming N is defined in the scope)

mixin forSplit!((int i)=>(decision(i)), {A;}, {B;}) loop;
loop.execute();


I haven't measured anything, but I would assume that delegates 
come with an overhead that you just don't need here. In fact, 
when trying to use

auto d = (int i) {};
d = (int i){ if(decision(i)) A; else d = (int i) { B; }; };
for(int i = 0; i < N; i++)
{
    d(i);
}
All I got was "cannot access frame of function D main" which sums 
up my experiences with lambdas in D so far.


While PGO and the branch predictor are good, they don't help much 
here; not executing an expression is out of scope for them. All 
they do is prevent pipeline flushes.
Also, I think you overestimate what the compiler can do. My 
decision function for testing was this:

bool decision(int a) pure
out (result) {
    assert(result == (a < 10));
} do {
    import std.algorithm, std.range;

    // stolen from https://dlang.org/library/std/parallelism.html
    enum n = 1_000_000;
    enum delta = 1.0 / n;
    alias getTerm = (int i)
    {
        immutable x = (i - 0.5) * delta;
        return delta / (1.0 + x * x);
    };
    immutable pi = 4.0 * reduce!"a + b"(n.iota.map!getTerm);

    return a < 3 * pi;
}
With N=100 I got a speedup of ~10 (ldc -O3 -release), even though 
this function is pure and could be optimized a lot; it calculated 
pi for every single call. And optimizing the decision function 
isn't even the point of the question.


assertNotThrown (and asserts in general)

2018-05-21 Thread Malte via Digitalmars-d-learn
I was interested in asserts and how the compiler uses them to 
optimize code, so I looked at Compiler Explorer to see how, and 
found that it doesn't.


What I tried to do is turn a std.conv.to!ulong(byte) into a 
simple cast with the help of assertions.

https://godbolt.org/g/4uckWU

If there is an assert right before an if with the same condition, 
I think it should remove the compare and jump in release mode, 
but it doesn't. I'm assuming the compilers just aren't ready yet, 
but should be someday; at least that is what the documentation 
promises.


What made me even more curious was 
"assertNotThrown!ConvOverflowException(input.to!ubyte)", where it 
didn't even remove the assert code in release mode and produced 
very inefficient assembly.
Is that intended behaviour? In my opinion that greatly limits its 
usability.