Re: Is there any performance penalty for static if?

2019-05-16 Thread Ferhat Kurtulmuş via Digitalmars-d-learn

On Thursday, 16 May 2019 at 00:18:25 UTC, user1234 wrote:
On Wednesday, 15 May 2019 at 22:03:39 UTC, Ferhat Kurtulmuş 
wrote:

[...]


You've been given the answer, but for this particular piece of 
code, rather use the "is" expression:


  static if (is(T == float)) {}
  else static if (is(T == double)) {}

etc.


Yes, it is now much better. Thank you.
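
For reference, a minimal sketch of the suggested pattern in context (the 
`process` function and its behaviour are illustrative, not from the thread):

```d
// Dispatch on the element type with is-expressions inside static if.
T process(T)(T value)
{
    static if (is(T == float))
        return value * 2.0f;    // single-precision path
    else static if (is(T == double))
        return value * 2.0;     // double-precision path
    else
        static assert(false, "unsupported type " ~ T.stringof);
}

unittest
{
    assert(process(1.5f) == 3.0f);
    assert(process(1.5) == 3.0);
}
```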


Stack-based @nogc dynamic array

2019-05-16 Thread Marco de Wild via Digitalmars-d-learn

Hey all,

I want to create a small collection of items to store 
intermediate results of a calculation. It runs on a background 
thread, so it does not need to be the most efficient 
implementation. However, I want to prevent my background thread from 
introducing a stop-the-world garbage collection.
In my program (rules and an AI for a mahjong game), the 
collection size is at most 14 (tiles in hand). I figured it would 
be simplest to just keep it stack-based. My first attempt 
looks like:

```
struct NoGcArray(size_t maxSize, T)
{
    private T[maxSize] _buffer;
    private size_t _length;

    size_t length() pure const @nogc nothrow
    {
        return _length;
    }

    void opOpAssign(string op)(T element) pure @nogc nothrow
    in (_length < maxSize, "Cannot append if the buffer is fully filled.")
    {
        static if (op == "~")
        {
            _buffer[_length] = element;
            ++_length;
        }
        else
        {
            static assert(false, "Only concatenation supported");
        }
    }

    T opIndex(size_t index)
    in (index < _length, "Cannot access index greater than length")
    {
        return _buffer[index];
    }

    auto range() pure const @nogc nothrow
    {
        return Range(this);
    }

    alias range this;

    static struct Range
    {
        this(NoGcArray src)
        {
            _src = src;
        }

        private NoGcArray _src;
        private size_t _index;

        T front() pure @nogc nothrow
        {
            return _src[_index];
        }

        void popFront() pure @nogc nothrow
        {
            _index++;
        }

        bool empty() pure @nogc nothrow
        {
            return _src._length <= _index;
        }
    }
}
```
However,
```
unittest
{
    import std.algorithm : sum, map;
    import fluent.asserts;

    NoGcArray!(4, int) array;
    array ~= 420;
    array ~= 42;
    array.map!(x => x*2).sum.should.equal(924);
}
```
fails. The test runs in an infinite loop. After some digging, 
I realise that the `alias this` is foiling my plan rather 
than helping it. Is there a recommended way to achieve the following?

- std.algorithm functions should Just Work (tm)
- appending as if it were a dynamic array
- preferably index-based access and .length should also work.

Is there a dub package that achieves this? Or are there any tips 
to roll my own implementation?


Re: Stack-based @nogc dynamic array

2019-05-16 Thread Adam D. Ruppe via Digitalmars-d-learn

I think you have overcomplicated something quite simple.

int[4] buffer;
int bufferLength;

buffer[bufferLength++] = item_to_append;
buffer[bufferLength++] = item_to_append;

int[] slice = buffer[0 .. bufferLength];

// you can pass slice to any std.algorithm calls etc.
// just remember it is on the stack, so don't store it beyond a
// function call
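
A sketch of wrapping that pattern in a small struct (illustrative only; the 
name `SmallArray` and its details are not from the thread), so that 
std.algorithm works through a slice instead of `alias this`:

```d
// Fixed-capacity, stack-backed appender; iteration goes through opSlice.
struct SmallArray(size_t maxSize, T)
{
    private T[maxSize] _buffer;
    private size_t _length;

    void opOpAssign(string op : "~")(T element) @nogc nothrow
    in (_length < maxSize, "buffer full")
    {
        _buffer[_length++] = element;
    }

    // Slice over the filled part; only valid while this struct is alive.
    inout(T)[] opSlice() inout @nogc nothrow
    {
        return _buffer[0 .. _length];
    }

    size_t length() const @nogc nothrow { return _length; }
}

unittest
{
    import std.algorithm : map, sum;

    SmallArray!(4, int) a;
    a ~= 420;
    a ~= 42;
    assert(a[].map!(x => x * 2).sum == 924);
}
```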


Re: Stack-based @nogc dynamic array

2019-05-16 Thread Kagamin via Digitalmars-d-learn
Try mach.d (https://github.com/pineapplemachine/mach.d); it uses 
explicit range accessors for iteration.


disabling and enabling console output

2019-05-16 Thread Alex via Digitalmars-d-learn
I have some code that disables the console because some other 
code puts junk on it that I don't want to see... then I enable it.


stdout.close();
stderr.close();

...
stdout.open("CON", "w");
stderr.open("CON", "w");

It works but when the routine that uses this is called twice, it 
completely disables all output afterwards.


//1
stdout.close();
stderr.close();
//2
stdout.open("CON", "w");
stderr.open("CON", "w");
//3
stdout.close();
stderr.close();
//4
stdout.open("CON", "w");
stderr.open("CON", "w");
//5


So, one gets output at 1 and 3 but not 5.

Am I doing something wrong or is this a bug?


I've tried various things such as saving and restoring stdout, 
using CONOUT$, etc...


I think it might be outputting at 3 because of caching rather 
than it actually working once.


You can play around with it here:

https://run.dlang.io/is/q3kpSB


1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Alex via Digitalmars-d-learn

1 - 17 ms, 553 ╬╝s, and 1 hnsec

WTH!! Is there any way to just get a normal u rather than some 
fancy useless ASCII hieroglyphic? Why don't we have a fancy M? And 
an h?


What's an hnsec anyways?


Re: disabling and enabling console output

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 14:53:14 UTC, Alex wrote:
I have some code that disables the console because some other 
code puts junk on it that I don't want to see... then I enable 
it.


One thing you could try is going one level lower, and using dup() 
to save the stream to another fd, close() to close the stdout 
one, and dup2() to restore the saved fd over the stdout one.
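
A minimal sketch of that sequence, assuming Posix (error checks omitted; the 
later part of this thread covers why it is harder on Windows):

```d
import core.sys.posix.unistd : dup, dup2, close;
import core.sys.posix.fcntl : open, O_WRONLY;
import core.stdc.stdio : printf;

void main()
{
    int saved = dup(1);                        // save the current stdout fd
    int devnull = open("/dev/null", O_WRONLY);
    dup2(devnull, 1);                          // point fd 1 at /dev/null
    close(devnull);

    printf("silenced\n");                      // goes to /dev/null

    dup2(saved, 1);                            // restore the saved stdout
    close(saved);

    printf("visible again\n");                 // goes to the console
}
```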


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 15:19:03 UTC, Alex wrote:

1 - 17 ms, 553 ╬╝s, and 1 hnsec

WTH!! Is there any way to just get a normal u rather than some 
fancy useless ASCII hieroglyphic? Why don't we have a fancy M? 
And an h?


It's outputting UTF-8, but your console is not configured to 
display UTF-8.


On Windows, you can do so (before running your program), by 
running: chcp 65001


Or, within your program, by calling: SetConsoleOutputCP(CP_UTF8);

Note that this has some negative side effects, which is why D 
doesn't do it automatically. (Blame Windows.)
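
A small sketch of the in-program variant, assuming druntime's Windows console 
bindings (CP_UTF8 is defined locally rather than imported):

```d
version (Windows)
{
    import core.sys.windows.wincon : SetConsoleOutputCP;
    import core.time : msecs, usecs, hnsecs;
    import std.stdio : writeln;

    void main()
    {
        enum uint CP_UTF8 = 65001;
        SetConsoleOutputCP(CP_UTF8);              // affects this console session
        writeln(17.msecs + 553.usecs + 1.hnsecs); // µs should now display correctly
    }
}
```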



What's an hnsec anyways?


Hecto-nano-second, the smallest representable unit of time in 
SysTime and Duration.
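
A small illustration of what that means in practice (the values mirror the 
subject line):

```d
import core.time;

unittest
{
    auto d = 17.msecs + 553.usecs + 1.hnsecs;
    assert(d.total!"hnsecs" == 175_531);     // Duration counts 100 ns ticks
    assert(d.total!"nsecs" == 17_553_100);   // reported in ns, but 100 ns granular
}
```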




Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/16/19 4:27 PM, Vladimir Panteleev wrote:

On Thursday, 16 May 2019 at 15:19:03 UTC, Alex wrote:



What's an hnsec anyways?


Hecto-nano-second, the smallest representable unit of time in SysTime 
and Duration.


The output shouldn't involve the inner workings of the type. It should 
be changed to say 10 ns.


-Steve


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn
On Thursday, 16 May 2019 at 15:52:05 UTC, Steven Schveighoffer 
wrote:
Hecto-nano-second, the smallest representable unit of time in 
SysTime and Duration.


The output shouldn't involve the inner workings of the type. It 
should be changed to say 10 ns.


If the output is meant for the developer, then I disagree 
subjectively, as that creates the impression that the lowest 
resolution or representable unit of time is the nanosecond.


If the output is meant for the user, then hectonanoseconds or 
nanoseconds are going to be almost always irrelevant. The 
duration should be formatted appropriately to the use case.




Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Alex via Digitalmars-d-learn
On Thursday, 16 May 2019 at 15:27:33 UTC, Vladimir Panteleev 
wrote:

On Thursday, 16 May 2019 at 15:19:03 UTC, Alex wrote:

1 - 17 ms, 553 ╬╝s, and 1 hnsec

WTH!! Is there any way to just get a normal u rather than some 
fancy useless ASCII hieroglyphic? Why don't we have a fancy M? 
And an h?


It's outputting UTF-8, but your console is not configured to 
display UTF-8.


On Windows, you can do so (before running your program), by 
running: chcp 65001


Or, within your program, by calling: 
SetConsoleOutputCP(CP_UTF8);


Note that this has some negative side effects, which is why D 
doesn't do it automatically. (Blame Windows.)



What's an hnsec anyways?


Hecto-nano-second, the smallest representable unit of time in 
SysTime and Duration.


Thanks...

Why not just use u? If that is too much trouble, then detect the 
code page and use u rather than the extended ASCII, which looks 
very out of place?


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 16:49:35 UTC, Alex wrote:

Why not just use u?


It generally works fine on all the other filesystems, which today 
have mostly standardized on UTF-8.


If that is too much trouble, then detect the code page and use u 
rather than the extended ASCII, which looks very out of place?


Well, a more correct solution would be to check whether we're 
printing to the Windows console and use the Unicode APIs, which 
would allow this to work regardless of the current 8-bit codepage. 
However, this (and your suggestion) is complicated to implement, 
for reasons related to how tightly Phobos is tied to C's FILE* for 
file input and output.


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn
On Thursday, 16 May 2019 at 16:52:22 UTC, Vladimir Panteleev 
wrote:

On Thursday, 16 May 2019 at 16:49:35 UTC, Alex wrote:

Why not just use u?


It generally works fine on all the other filesystems


* operating systems


Re: disabling and enabling console output

2019-05-16 Thread Alex via Digitalmars-d-learn
On Thursday, 16 May 2019 at 15:21:48 UTC, Vladimir Panteleev 
wrote:

On Thursday, 16 May 2019 at 14:53:14 UTC, Alex wrote:
I have some code that disables the console because some other 
code puts junk on it that I don't want to see... then I enable 
it.


One thing you could try is going one level lower, and using 
dup() to save the stream to another fd, close() to close the 
stdout one, and dup2() to restore the saved fd over the stdout 
one.


Unfortunately D doesn't seem to have dup, dup2.

On 05/10/14 22:24, MarisaLovesUsAll via Digitalmars-d-learn wrote:
I sometimes get useless messages in stdout from the SDL_Image 
library, and I want to temporarily silence them. How do I do that?


One way would be something like:

   import std.stdio;

   void writeOutput () {
      static c = 1;
      printf("%d\n", c++);
   }

   void main() {
      writeOutput();

      {
         auto ex = PushFD!1("/dev/null".ptr);
         writeOutput();
      }

      writeOutput();
   }

   struct PushFD(int fd) {
      import core.sys.posix.fcntl, core.sys.posix.unistd;

      int old;

      this(const char* fn) {
         old = dup(fd);
         auto nfd = open(fn, O_RDWR);
         dup2(nfd, fd);
         close(nfd);
      }

      ~this() { dup2(old, fd); close(old); }
   }

   // In real code you'll want to check for errors from dup/dup2/open/close.


artur

That code fails to compile on Windows.


Re: disabling and enabling console output

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 17:05:01 UTC, Alex wrote:
One thing you could try is going one level lower, and using 
dup() to save the stream to another fd, close() to close the 
stdout one, and dup2() to restore the saved fd over the stdout 
one.


Unfortunately D doesn't seem to have dup, dup2.


They are in the C library. Looks like Druntime has D declarations 
only for Posix:


https://github.com/cybershadow/druntime/blob/issue19433/src/core/sys/posix/unistd.d#L51-L52

You can just copy those.



Re: disabling and enabling console output

2019-05-16 Thread Alex via Digitalmars-d-learn
On Thursday, 16 May 2019 at 17:07:39 UTC, Vladimir Panteleev 
wrote:

On Thursday, 16 May 2019 at 17:05:01 UTC, Alex wrote:
One thing you could try is going one level lower, and using 
dup() to save the stream to another fd, close() to close the 
stdout one, and dup2() to restore the saved fd over the 
stdout one.


Unfortunately D doesn't seem to have dup, dup2.


They are in the C library. Looks like Druntime has D 
declarations only for Posix:


https://github.com/cybershadow/druntime/blob/issue19433/src/core/sys/posix/unistd.d#L51-L52

You can just copy those.


adding

int dup(int) @trusted;
int dup2(int, int) @trusted;
int close(int) @trusted;
int open(in char*, int, ...) @trusted;


results in an unresolved symbol.


Re: disabling and enabling console output

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 17:18:01 UTC, Alex wrote:

adding

int dup(int) @trusted;
int dup2(int, int) @trusted;
int close(int) @trusted;
int open(in char*, int, ...) @trusted;


Be sure to make them extern(C).

Sorry, I haven't tried it, I'm guessing that it should work based 
on:

https://github.com/digitalmars/dmc/blob/master/include/io.h#L142-L147
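
Putting that together, the declarations would presumably look something like 
this (an untested sketch; whether these exact names resolve against the DM C 
runtime is an assumption):

```d
extern (C) nothrow @nogc
{
    int dup(int);
    int dup2(int, int);
    int close(int);
    int open(const char*, int, ...);
}
```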



Re: disabling and enabling console output

2019-05-16 Thread Alex via Digitalmars-d-learn
On Thursday, 16 May 2019 at 17:19:13 UTC, Vladimir Panteleev 
wrote:

On Thursday, 16 May 2019 at 17:18:01 UTC, Alex wrote:

adding

int dup(int) @trusted;
int dup2(int, int) @trusted;
int close(int) @trusted;
int open(in char*, int, ...) @trusted;


Be sure to make them extern(C).

Sorry, I haven't tried it, I'm guessing that it should work 
based on:

https://github.com/digitalmars/dmc/blob/master/include/io.h#L142-L147


Ok, the issue is because I was adding

import core.stdc.stdio, core.stdc.stdio;
extern (C):
@system:
nothrow:
@nogc:
static int _dup(int);
static int dup2(int, int);

directly inside the PushFD struct... I marked them static and it 
still didn't help. I don't know why it's adding the class for the 
lookup:


PushFD!(1).PushFD.dup2(int, int)" 
(_D3mM__T6PushFDVii1ZQm4dup2UNbNiiiZi)


Unfortunately the code is crashing because _open or _sopen is 
returning -1 for /dev/null


It works fine with nul, NUL, and CONOUT$, but does not block the output.


I'm not sure if they are failing to block, or if they are blocking 
what is being opened (and not the original console). That is, do I 
need to not open anything and simply close stdout?






Re: disabling and enabling console output

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn

On Thursday, 16 May 2019 at 17:42:17 UTC, Alex wrote:
I'm not sure if they are failing to block or if they are 
blocking what is being opened(and not the original console). 
That is, do I need to not open and simply close stdout?


Yes, I see. It won't work because the two libraries are using 
different C standard libraries, so SDLImage can't see that the 
FD was dup'd, and the DM C library most likely doesn't also 
dup/close the Windows handle.


I'm not sure how to do this on Windows, then; maybe someone with 
more experience would know. Windows has DuplicateHandle, but no 
equivalent for dup2 that I know of, which would be needed to 
restore stdout using this method.




Re: disabling and enabling console output

2019-05-16 Thread Adam D. Ruppe via Digitalmars-d-learn

On Thursday, 16 May 2019 at 14:53:14 UTC, Alex wrote:
I have some code that disables the console because some other 
code puts junk on it that I don't want to see


A stupid question, but can't you just edit the other code to 
not put junk on it?


Maybe there is some other approach we can try out.


Re: disabling and enabling console output

2019-05-16 Thread Alex via Digitalmars-d-learn

On Thursday, 16 May 2019 at 17:49:24 UTC, Adam D. Ruppe wrote:

On Thursday, 16 May 2019 at 14:53:14 UTC, Alex wrote:
I have some code that disables the console because some other 
code puts junk on it that I don't want to see


A stupid question, but can't you just edit the other code to 
not put junk on it?




No, it is in an external lib and I'd have to mod the code, 
maintain it, and compile it...


It should be much simpler just to redirect stdout.

In fact, it works; I just can't figure out how to re-enable it.






Dlang + emscripten

2019-05-16 Thread SrMordred via Digitalmars-d-learn

I'm following this link to build D + SDL2 + Emscripten on the web:
https://theartofmachinery.com/2018/12/20/emscripten_d.html

I was able to compile, but I get the warning
"warning: Linking two modules of different data layouts / target 
triples"
even when I compile using x86, so I'm not sure if this is an LDC 
or an Emscripten issue.


Is anyone else also exploring Emscripten?


Re: Dlang + emscripten

2019-05-16 Thread Dukc via Digitalmars-d-learn

On Thursday, 16 May 2019 at 18:23:12 UTC, SrMordred wrote:

I'm following this link to build D + SDL2 + Emscripten on the web:
https://theartofmachinery.com/2018/12/20/emscripten_d.html

I was able to compile, but I get the warning
"warning: Linking two modules of different data layouts / target 
triples"
even when I compile using x86, so I'm not sure if this is an LDC 
or an Emscripten issue.


Is anyone else also exploring Emscripten?


I did previously use it. It is tedious, but it can be done if you 
have a basic understanding of what you're doing and try long enough.


LDC does not know it's compiling for Emscripten; it outputs LLVM 
bitcode thinking it'll be compiled for some other platform. This 
is bound to cause all sorts of problems when you try to import 
anything.


I recommend you use https://code.dlang.org/packages/spasm (which 
I have ported my code to) instead. You definitely will have 
nowhere near as easy a time working with it as with desktop 
development, but it's better than a homemade LDC-linker-Emscripten 
build system. You are also able to generate D bindings to web APIs 
here, with no need to write JavaScript wrappers manually. The 
disadvantages are that you don't have a C runtime, and you can 
only target WebAssembly, which is more difficult to call from 
JavaScript than asm.js.


If you still choose to use Emscripten, check 
https://github.com/CyberShadow/dscripten-tools. Unlike Spasm, it 
won't give you web APIs, but it should ease development in other 
ways and give you some of the standard library stuff you don't 
have otherwise. I haven't used it, so I have no idea how good it 
is in practice.


Generally, D is not currently good at targeting JavaScript. It is 
just good enough that an experienced D coder could get it 
competitive in the right situation, but generally it is not good.


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/16/19 4:55 PM, Vladimir Panteleev wrote:

On Thursday, 16 May 2019 at 15:52:05 UTC, Steven Schveighoffer wrote:
Hecto-nano-second, the smallest representable unit of time in SysTime 
and Duration.


The output shouldn't involve the inner workings of the type. It should 
be changed to say 10 ns.


If the output is meant for the developer, then I disagree subjectively, 
as that creates the impression that the lowest resolution or 
representable unit of time is the nanosecond.


It is what it is. The reason hnsecs is used instead of nsecs is because 
it gives a time range of 20,000 years instead of 2,000 years.


We do have a nanosecond resolution, and it's just rounded down to the 
nearest 10.


For example:

auto d = 15.nsecs;
assert(d == 10.nsecs);

You shouldn't be relying on what a string says to know what the tick 
resolution is.


For example, if I do writefln("%f", 1.0), I get 1.000000. That doesn't 
mean I should assume floating point precision only goes down to 1/1_000_000.


hnsecs is more confusing than nanoseconds. People know what a nanosecond 
is; a hecto-nano-second is not as familiar a term.


If the output is meant for the user, then hectonanoseconds or 
nanoseconds are going to be almost always irrelevant. The duration 
should be formatted appropriately to the use case.


Depends on the user and the application.

-Steve


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/16/19 9:17 PM, Steven Schveighoffer wrote:

On 5/16/19 4:55 PM, Vladimir Panteleev wrote:
If the output is meant for the developer, then I disagree 
subjectively, as that creates the impression that the lowest 
resolution or representable unit of time is the nanosecond.


It is what it is. The reason hnsecs is used instead of nsecs is because 
it gives a time range of 20,000 years instead of 2,000 years.


We do have a nanosecond resolution, and it's just rounded down to the 
nearest 10.


And to prove my point about it being an obscure term, I forgot it's not 
10 nanoseconds, but 100 nanoseconds. oops!




For example:

    auto d = 15.nsecs;
    assert(d == 10.nsecs);


This is not what I was trying to say, even though it's true (both are 
Duration.zero).


I meant:

auto d = 150.nsecs;
assert(d == 100.nsecs);

-Steve


Re: 1 - 17 ms, 553 ╬╝s, and 1 hnsec

2019-05-16 Thread Vladimir Panteleev via Digitalmars-d-learn
On Thursday, 16 May 2019 at 20:17:37 UTC, Steven Schveighoffer 
wrote:
We do have a nanosecond resolution, and it's just rounded down 
to the nearest 10.


For example:

auto d = 15.nsecs;
assert(d == 10.nsecs);


I'm not sure how to feel about this. Maybe there was a better way 
to handle nanoseconds here.


You shouldn't be relying on what a string says to know what the 
tick resolution is.


I don't like that; with your proposal, it seems to add data that's 
not there. The 0 is entirely fictional. It might as well be part 
of the format string.



For example, if I do writefln("%f", 1.0), I get 1.00.


%f is a C-ism, %s does not do that.

hnsecs is more confusing than nanoseconds. People know what a 
nanosecond is, a hecto-nano-second is not as familiar a term.


Agreed, which is why Duration.toString shouldn't be used to 
present durations to users.


Developers, however, are expected to know what a hectonanosecond 
is, same as with all the other technical terms.


If the output is meant for the user, then hectonanoseconds or 
nanoseconds are going to be almost always irrelevant. The 
duration should be formatted appropriately to the use case.


Depends on the user and the application.


If the durations are so small or so precise that it makes sense 
to display them with such precision, then yes, applications would 
do better to use nanoseconds instead of hectonanoseconds.




DIP1000: Should this compile

2019-05-16 Thread Max Haughton via Digitalmars-d-learn

https://run.dlang.io/is/cKFsXh

Should this compile, or is `return scope T*` down to the user not 
to escape? (Returning &local directly does not compile.)
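
The snippet behind the link is not reproduced in the digest; a hedged 
reconstruction of the kind of code in question (the function name and values 
here are assumptions) would be:

```d
// Compiled with -dip1000; the thread reports that this was accepted at the
// time, even though &local escapes through the return scope parameter.
@safe int* escape(return scope int* p)
{
    int local = 42;
    p = &local;   // assigning a local's address to a return scope parameter
    return p;     // `return &local;` directly is rejected
}
```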




Re: DIP1000: Should this compile

2019-05-16 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/16/19 10:21 PM, Max Haughton wrote:

https://run.dlang.io/is/cKFsXh

Should this compile, or is `return scope T*` down to the user not 
to escape? (Returning &local directly does not compile.)




Answer to subject: no. This is a bug. Please file.

Not sure what the solution is, because dip1000 makes scope a storage 
class. So there's no way to tag what the input parameter points at.


-Steve


Re: DIP1000: Should this compile

2019-05-16 Thread Simen Kjærås via Digitalmars-d-learn

On Thursday, 16 May 2019 at 21:21:51 UTC, Max Haughton wrote:

https://run.dlang.io/is/cKFsXh

Should this compile, or is `return scope T*` down to the user not 
to escape? (Returning &local directly does not compile.)


This is a bug, as can be shown by repeating the call to 
(*boi).writeln - suddenly the output changes between calls. 
Filed: https://issues.dlang.org/show_bug.cgi?id=19881


--
  Simen


Re: DIP1000: Should this compile

2019-05-16 Thread Max Haughton via Digitalmars-d-learn
On Thursday, 16 May 2019 at 21:56:52 UTC, Steven Schveighoffer 
wrote:

On 5/16/19 10:21 PM, Max Haughton wrote:

https://run.dlang.io/is/cKFsXh

Should this compile, or is `return scope T*` down to the user not 
to escape? (Returning &local directly does not compile.)




Answer to subject: no. This is a bug. Please file.

Not sure what the solution is, because dip1000 makes scope a 
storage class. So there's no way to tag what the input 
parameter points at.


-Steve


The parameter pointer outlives &local, i.e. it cannot be 
guaranteed that it doesn't escape, which is sufficient grounds not 
to allow the assignment. Hopefully, this is an implementation 
error rather than a specification error (if my understanding of 
the DIP is correct).