Re: dlangbot for Telegram - D compiler in your pocket

2018-06-10 Thread Anton Fediushin via Digitalmars-d-announce

On Sunday, 10 June 2018 at 19:54:20 UTC, Dechcaudron wrote:

On Saturday, 9 June 2018 at 20:28:24 UTC, Anton Fediushin wrote:
Hello, I am glad to announce that a new Telegram bot that can 
execute D code is up and running!


Check it out here: https://t.me/dlangbot

Features:
 - Two compilers to choose from: dmd (default) and ldc
 - Support for custom compiler arguments with the `/args` command
 - The program's stdin can be set with `/stdin`
 - Code is automatically compiled and run again when you edit 
your message


Repository: https://gitlab.com/ohboi/dlangbot

Any ideas on how to improve it are appreciated!


As much as I love Telegram bots and D, compilation and 
execution offered by a bot provides no clear advantage to me 
(still, I kinda like it). I assume you are looking out for 
potential vulnerabilities?


How about allowing for download of the executable?


This bot has the same purpose as run.dlang.io - a sandbox to try 
things out and share code with people. Even though sharing can 
already be done by highlighting your message with the code and 
the bot's reply and sharing those messages, I am thinking about 
ways to make it simpler and faster.


Regarding vulnerabilities: if there are any, I and the 
authors/maintainers of dlang-tour will be interested in fixing 
them ASAP. After all, dlangbot uses the tour's code under the hood.


Executable downloading would require me to rewrite the back-end. 
I am not sure it would be worth it, because it's not clear how 
safe that would be for a user or how useful the feature would be. 
I mean, if a user already has an x86-64 Linux machine (which is 
what dlangbot uses), would it really be simpler and faster to 
message the bot with code, download the executable, and run it 
than to compile it with a locally installed compiler?






Re: -J all

2018-06-10 Thread Basile B. via Digitalmars-d

On Sunday, 10 June 2018 at 14:42:21 UTC, Basile B. wrote:

On Sunday, 10 June 2018 at 01:49:37 UTC, DigitalDesigns wrote:
Please allow -J to specify that all subdirectories are to be 
included! I'm having to include all subdirectories of my 
library with J because I import each file and extract 
information. It would be better to have something like


-JC:\Lib\*

rather than

-JC:\Lib\Internal
-JC:\Lib\Internal\OS
-JC:\Lib\API
-JC:\Lib\API\V1
-JC:\Lib\API\V1\Templates


...
..
.


This is now open as an enhancement request: 
https://issues.dlang.org/show_bug.cgi?id=18967. IIRC there was 
a security concern mentioned the last time this was proposed, 
but I'm not 100% sure.


Sorry, I've been extremely clumsy with the issue description. It's 
now https://issues.dlang.org/show_bug.cgi?id=18968


[Issue 18968] Wildcard to allow subfolders with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18968

Basile B.  changed:

   What    |Removed                             |Added
   Summary |Allow subfolders with the -J option |Wildcard to allow subfolders with the -J option

--


[Issue 18968] New: Allow subfolders with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18968

  Issue ID: 18968
   Summary: Allow subfolders with the -J option
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: b2.t...@gmx.com

Allowing

dmd -J/home/fantomas/*

instead of

dmd -J/home/fantomas/ -J/home/fantomas/assets/ -J/home/fantomas/translations/
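For context, -J whitelists directories for D's string imports, which is why every directory has to be listed explicitly. A minimal sketch of how the flag is consumed (the file name and directory here are made up for illustration):

```d
// Sketch only: assumes a file "strings.txt" exists in a directory
// passed via -J, e.g.:  dmd -J/home/fantomas/assets app.d
import std.stdio : writeln;

// import("...") embeds the file's contents into the binary at compile time,
// but only files under a -J directory may be read.
enum contents = import("strings.txt");

void main()
{
    writeln(contents);
}
```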

--


[Issue 18967] Allow wildcard for subfolders with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18967

Basile B.  changed:

   What    |Removed                           |Added
   Summary |Allow wildcard with the -J option |Allow wildcard for subfolders with the -J option

--


[Issue 18967] Allow wildcard for subfolders with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18967

Basile B.  changed:

   What       |Removed    |Added
   Status     |REOPENED   |RESOLVED
   Resolution |---        |INVALID

--


[Issue 18967] Allow wildcard with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18967

Basile B.  changed:

   What       |Removed     |Added
   Status     |RESOLVED    |REOPENED
   Resolution |WORKSFORME  |---

--- Comment #2 from Basile B.  ---
Sorry, I meant

dmd -J/home/fantomas/includes/*

instead of

dmd -J/home/fantomas/includes/default.css ^
-J/home/fantomas/includes/assets/icons.data ^
-J/home/fantomas/includes/assets/strings.rc ^
-J/home/fantomas/includes/translation/EN.po ^
-J/home/fantomas/includes/translation/FR.po ^
-J/home/fantomas/includes/translation/RU.po ^
-J/home/fantomas/includes/translation/JP.po

--


Re: -J all

2018-06-10 Thread Cym13 via Digitalmars-d

On Sunday, 10 June 2018 at 19:10:52 UTC, DigitalDesigns wrote:

On Sunday, 10 June 2018 at 14:42:21 UTC, Basile B. wrote:

On Sunday, 10 June 2018 at 01:49:37 UTC, DigitalDesigns wrote:
Please allow -J to specify that all subdirectories are to be 
included! I'm having to include all subdirectories of my 
library with J because I import each file and extract 
information. It would be better to have something like


-JC:\Lib\*

rather than

-JC:\Lib\Internal
-JC:\Lib\Internal\OS
-JC:\Lib\API
-JC:\Lib\API\V1
-JC:\Lib\API\V1\Templates


...
..
.


This is now open as an enhancement request: 
https://issues.dlang.org/show_bug.cgi?id=18967. IIRC there was 
a security concern mentioned the last time this was proposed, 
but I'm not 100% sure.


Yeah, but -J was added for a security concern! So when does the 
insanity end?


There's no contradiction nor insanity, you're saying the same 
thing he did: -J was added for a security concern.


If it's such a big deal, e.g., to prevent root access, then limit 
asterisk usage to non-root users and maybe only a depth of 3.


After all, if someone wanted access to sensitive areas, they 
could just do -JC:\Windows\System32.


At some point one has to stop policing everything.


I'm not entirely sure what the threat model is, but it seems to 
me that we're not trying to protect against a user exposing 
sensitive areas. We're trying to protect against code that isn't 
trusted at compile time. I think the idea is to avoid letting 
someone import your config file with all your passwords at 
compile time so that the program can use it, or send it to the 
attacker later at runtime.


It's not a bad risk to consider, but I wonder if that's the best 
solution we can find.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Patrick Schluter via Digitalmars-d

On Sunday, 10 June 2018 at 13:45:54 UTC, Mike Franklin wrote:

On Sunday, 10 June 2018 at 13:16:21 UTC, Adam D. Ruppe wrote:



memcpyD: 1 ms, 725 μs, and 1 hnsec
memcpyD2: 587 μs and 5 hnsecs
memcpyASM: 119 μs and 5 hnsecs

Still, the ASM version is much faster.



rep movsd is very CPU dependent and needs some preconditions to 
be fast. For relatively short memory blocks it sucks on many CPUs 
other than the latest Intels.


See what Agner Fog has to say about it:

16.10 String instructions (all processors)

String instructions without a repeat prefix are too slow and 
should be replaced by simpler instructions. The same applies to 
LOOP on all processors and to JECXZ on some processors. REP MOVSD 
and REP STOSD are quite fast if the repeat count is not too 
small. Always use the largest word size possible (DWORD in 32-bit 
mode, QWORD in 64-bit mode), and make sure that both source and 
destination are aligned by the word size. In many cases, however, 
it is faster to use XMM registers. Moving data in XMM registers 
is faster than REP MOVSD and REP STOSD in most cases, especially 
on older processors. See page 164 for details.

Note that while the REP MOVS instruction writes a word to the 
destination, it reads the next word from the source in the same 
clock cycle. You can have a cache bank conflict if bits 2-4 are 
the same in these two addresses on P2 and P3. In other words, you 
will get a penalty of one clock extra per iteration if 
ESI+WORDSIZE-EDI is divisible by 32. The easiest way to avoid 
cache bank conflicts is to align both source and destination by 
8. Never use MOVSB or MOVSW in optimized code, not even in 16-bit 
mode.

On many processors, REP MOVS and REP STOS can perform fast by 
moving 16 bytes or an entire cache line at a time. This happens 
only when certain conditions are met. Depending on the processor, 
the conditions for fast string instructions are, typically, that 
the count must be high, both source and destination must be 
aligned, the direction must be forward, the distance between 
source and destination must be at least the cache line size, and 
the memory type for both source and destination must be either 
write-back or write-combining (you can normally assume the latter 
condition is met). Under these conditions, the speed is as high 
as you can obtain with vector register moves or even faster on 
some processors.

While the string instructions can be quite convenient, it must be 
emphasized that other solutions are faster in many cases. If the 
above conditions for fast move are not met, there is a lot to 
gain by using other methods. See page 164 for alternatives to REP 
MOVS.
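To illustrate the "other methods" Agner alludes to, here is an untested D sketch of a word-at-a-time copy that falls back to bytes when the pointers aren't aligned. It is a simplification for discussion, not a replacement for a tuned memcpy:

```d
// Untested sketch: copy 8 bytes at a time when both pointers are
// 8-byte aligned, otherwise fall back to a plain byte loop.
void copyWords(ubyte[] dst, const(ubyte)[] src) @trusted pure nothrow @nogc
{
    assert(dst.length >= src.length);
    size_t i = 0;
    const n = src.length;
    // Check 8-byte alignment of both pointers with one test.
    if (((cast(size_t) dst.ptr | cast(size_t) src.ptr) & 7) == 0)
    {
        for (; i + 8 <= n; i += 8)
            *cast(ulong*)(dst.ptr + i) = *cast(const(ulong)*)(src.ptr + i);
    }
    for (; i < n; ++i) // remaining tail (or the whole buffer if unaligned)
        dst[i] = src[i];
}
```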





Re: How to sort byCodeUnit.permutations.filter(...)

2018-06-10 Thread Uknown via Digitalmars-d-learn

On Monday, 11 June 2018 at 04:12:57 UTC, Adam D. Ruppe wrote:

On Monday, 11 June 2018 at 04:06:44 UTC, Uknown wrote:
The problem is this prints a list of numbers. The task 
requires only the largest, so the intuitive fix


I would just pull the max out of it.

http://dpldocs.info/experimental-docs/std.algorithm.searching.maxElement.2.html


Thanks for your reply. I completely forgot about maxElement. I 
used it, but it prints 1234567, not the permutations. The same 
happens when using array. Why are the strings getting modified? 
The same happens with this:


"123".byCodeUnit.permutations.writeln;//[123, 213, 312, 132, 231, 
321]
"123".byCodeUnit.permutations.array.writeln;//[123, 123, 123, 
123, 123, 123]

Seems odd. Is this a bug or expected behaviour?
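For what it's worth, one workaround (an untested sketch) is to convert each permutation eagerly before collecting results, since `permutations` appears to reuse its internal state between elements:

```d
// Untested sketch: map each permutation to a uint before collecting,
// so the lazily reused permutation buffer is never stored directly.
import std.algorithm : map, maxElement, permutations;
import std.conv : parse;
import std.stdio : writeln;
import std.utf : byCodeUnit;

void main()
{
    "123".byCodeUnit
         .permutations
         .map!(p => p.parse!uint) // eager conversion copies the value out
         .maxElement
         .writeln;
}
```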


Re: What is the point of nothrow?

2018-06-10 Thread Jonathan M Davis via Digitalmars-d-learn
On Monday, June 11, 2018 04:11:38 Bauss via Digitalmars-d-learn wrote:
> I'm very well aware that Error is not supposed to be caught and
> that the program is in an invalid state, but what I'm trying to
> get at is that if nothrow, or at least a similar feature, existed
> that could detect code that may throw Error, then you could
> prevent writing code that throws it in the first place.
>
> It would be a great tool for writing bug-free code.

Well, the problem with that is that if you do anything involving assertions,
dynamic arrays, or GC memory allocations, you can get an Error. The same
goes for some stuff like final switch statements. Some of those Errors don't
happen with -release (e.g. the check on final switch and assertion failures),
but enough operations have to be checked at runtime that I expect that even
if you ignored the ones that don't happen with -release, relatively little
code could be guaranteed not to throw Errors. And most Errors are thrown
because of error conditions that can't reasonably be caught at compile time.
So, while knowing that none of those Errors can happen in a particular piece
of code might be nice, all it really means is that you're not doing any
operations in that code which can be checked for error conditions at runtime
but which can't be checked at compile time. On the whole, what Errors are
really doing is catching the bugs that you didn't manage to catch yourself
and that the compiler can't catch for you, but the runtime can. So,
arguably, all you'd really be doing if you guaranteed that a piece of code
didn't throw any Errors is guarantee that the code didn't do any operations
where the runtime can catch bugs for you. As such, while you obviously don't
want to end up having any Errors thrown in your program, I seriously
question that trying to write code that is statically guaranteed to not
throw any Errors is very useful or desirable - especially since it's not
like such code is guaranteed to be bug-free. It just wouldn't have any of
the bugs that the runtime can catch for you.

- Jonathan M Davis
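As a concrete illustration of the runtime-checked operations mentioned above, here is a small sketch. Catching an Error here is only to demonstrate the mechanism and is exactly what real code should not do:

```d
// Sketch: bounds checking throws RangeError, an Error, not an Exception.
import core.exception : RangeError;

void main()
{
    int[] a = [1, 2, 3];
    bool caught = false;
    try
        cast(void) a[3]; // out of bounds: the runtime check throws RangeError
    catch (RangeError)   // demonstration only; don't catch Errors for real
        caught = true;
    assert(caught);
}
```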



Re: What is the point of nothrow?

2018-06-10 Thread Bauss via Digitalmars-d-learn

On Monday, 11 June 2018 at 00:47:27 UTC, Jonathan M Davis wrote:
On Sunday, June 10, 2018 23:59:17 Bauss via Digitalmars-d-learn 
wrote:
What is the point of nothrow if it can only detect when 
Exception is thrown and not when Error is thrown?


It seems like the attribute is useless because you can't 
really use it as protection to write bugless, safe code since 
the nasty bugs will pass by just fine.


I'm aware that it's a feature that nothrow can throw Error, 
but it makes the attribute completely useless because you 
basically have no safety to guard against writing code that 
throws Error.


To an extent @safe works, but there are tons of stuff that 
throws Error which you can only detect and guard against 
manually.


So what is the point of nothrow when it can only detect 
exceptions you'd catch anyway?


To me it would be so much more useful if you could detect code 
that could possibly throw Error.


Why do you care about detecting code that can throw an Error? 
Errors are supposed to kill the program, not get caught. As 
such, why does it matter if it can throw an Error?


Now, personally, I'm increasingly of the opinion that the fact 
that we have Errors is kind of dumb: if it's going to kill the 
program, and it's not safe to do clean-up at that point because 
the program is in an invalid state, then why not just print the 
message and stack trace right there and kill the program instead 
of throwing anything? But unfortunately, that's not what happens, 
which does put things in the weird state where code can catch an 
Error even though it shouldn't be doing that.


As for the benefits of nothrow, as I understand it, they're 
twofold:


1. You know that you don't have to worry about any Exceptions 
being thrown from that code. You don't have to worry about 
doing any exception handling or having to ensure that anything 
gets cleaned up because of an Exception being thrown.


2. If the compiler knows that a function can't throw an 
Exception, then it doesn't have to insert any of the Exception 
handling mechanism stuff that it normally does when a function 
is called. It can assume that nothing ever gets thrown. If an 
Error does get thrown, then none of the proper clean-up will 
get done (e.g. constructors or scope statements), but because 
an Error being thrown means that the program is in an invalid 
state, it's not actually safe to be doing clean-up anyway. So, 
the fact that a function is nothrow gives you a performance 
benefit, because none of that extra Exception handling stuff 
gets inserted. How large a benefit that is in practice, I don't 
know, but it is a gain that can't be had with a function that 
isn't nothrow.


- Jonathan M Davis


Well, at least from my point of view, I would care about code 
that can throw Error, because if, say, nothrow could detect that, 
then you could avoid writing code that throws it at all, and thus 
you'd be writing less error-prone code.


Maybe not necessarily nothrow, but something else that could 
ensure that your code is "100% safe" to run without any errors 
happening from, e.g., accessing out of bounds, accessing invalid 
memory, or attempting to access a member of an uninitialized 
class. Like you'd have to handle each such case. Writing code in 
D today, you have to think about each statement you write and 
whether it could possibly throw Error, because you have little to 
no tooling that helps you avoid writing such code.


I'm very well aware that Error is not supposed to be caught and 
that the program is in an invalid state, but what I'm trying to 
get at is that if nothrow, or at least a similar feature, existed 
that could detect code that may throw Error, then you could 
prevent writing code that throws it in the first place.


It would be a great tool for writing bug-free code.


Re: How to sort byCodeUnit.permutations.filter(...)

2018-06-10 Thread Adam D. Ruppe via Digitalmars-d-learn

On Monday, 11 June 2018 at 04:06:44 UTC, Uknown wrote:
The problem is this prints a list of numbers. The task requires 
only the largest, so the intuitive fix


I would just pull the max out of it.

http://dpldocs.info/experimental-docs/std.algorithm.searching.maxElement.2.html


How to sort byCodeUnit.permutations.filter(...)

2018-06-10 Thread Uknown via Digitalmars-d-learn
I wrote a small program for Project Euler problem 41 ( 
https://projecteuler.net/problem=41 ).


--- project_euler_41.d
void main()
{
import math_common : primesLessThan;
import std.stdio : writeln;
import std.conv : parse;
import std.algorithm : permutations, canFind, filter, each, sort;
import std.utf : byCodeUnit;
import std.range : assumeSorted;
import std.array : array;

auto primes = primesLessThan(9_999_999UL).assumeSorted;

"1234567".byCodeUnit
.permutations
.filter!(a => primes.canFind(a.parse!uint))
.each!(a => a.writeln);
}
---

The problem is that this prints a list of numbers. The task 
requires only the largest, so the intuitive fix is to add 
`.array.sort[$ - 1].writeln` in place of the `each`, but this 
prints an array of `1234567`s instead of the permutations. Does 
anyone know how to sort the filter result without modifying the 
individual results?


Re: IOS support status

2018-06-10 Thread Joakim via Digitalmars-d

On Monday, 11 June 2018 at 02:58:28 UTC, makedgreatagain wrote:

On Thursday, 7 June 2018 at 14:12:15 UTC, Joakim wrote:
On Thursday, 7 June 2018 at 11:58:58 UTC, makedgreatagain 
wrote:

On Thursday, 7 June 2018 at 07:58:19 UTC, Joakim wrote:

[...]


Thanks for your suggestion, Joakim.

I tried to follow your suggestion and got all the commits from 
the original branch (116 commits by Dan Olson for the iOS 
branch).

When I tried to cherry-pick the first commit, I got a conflict 
at line 6232 (the iOS branch is based on the C++ frontend, while 
master ldc has moved to the D one).

I will try more, but I guess it will be hard for me to make it 
work.


Huh, I didn't look that far back in the branch; it may not be 
worth merging. How is his last release working? Do you want 
only betterC? If so, you may be better off backporting the 
betterC patch that got it working later; there shouldn't be much 
to that one.


Thanks for your help.

The downside is that the betterC change is also based on the 
D-based branch, which makes it very hard to backport to the C++ 
branch.


No, it only disables 4-5 things, so while you're right that it 
applies to ddmd, it should be pretty easy to manually backport to 
the old C++ frontend. I plan to do so for the ltsmaster branch of 
ldc, its last C++ version, this week: you can apply that pull to 
Dan's ios branch or easily port it yourself.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Monday, 11 June 2018 at 03:31:05 UTC, Walter Bright wrote:

On 6/10/2018 7:49 PM, Mike Franklin wrote:

On Sunday, 10 June 2018 at 15:12:27 UTC, Kagamin wrote:

If the compiler can't get it right then who can?
The compiler implementation is faulty.  It rewrites the 
expressions to an `extern(C)` runtime implementation that is 
not @safe, nothrow, or pure: 
https://github.com/dlang/druntime/blob/706081f3cb23f4c597cc487ce16ad3d2ed021053/src/rt/lifetime.d#L1442  The backend is not involved.


It's only faulty if there's a bug in it. It's essentially 
@trusted code.


That only addresses the @safe attribute, and that code is much 
too complex for anyone to audit and certify as safe.


Exceptions are also not all handled, so there is no way it can 
pass as nothrow.


The runtime call needs to be replaced with a template that can 
take advantage of D's compile-time type information instead of 
`TypeInfo`, but doing so will force the implementation through 
semantic processing, which, in its current state, will fail to 
compile.  It needs to be refactored, and creating @safe, nothrow, 
pure building blocks will help with that, hence the topic of this 
thread.
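A rough, hypothetical sketch of what such a template-based building block might look like (the name and signature are illustrative, not druntime's actual API):

```d
// Hypothetical sketch: a templated copy hook keeps the element type T,
// so @safe/nothrow/pure/@nogc can hold per instantiation instead of
// being hidden behind an extern(C) TypeInfo-based runtime call.
void arrayCopy(T)(T[] dst, const(T)[] src) @safe pure nothrow @nogc
{
    assert(dst.length == src.length);
    foreach (i; 0 .. src.length)
        dst[i] = src[i]; // bounds-checked, attribute-inferable per T
}
```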


Mike


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Basile B. via Digitalmars-d

On Monday, 11 June 2018 at 01:03:16 UTC, Mike Franklin wrote:
I've modified the test based on the feedback so far, so here's 
what it looks like now:


import std.datetime.stopwatch;
import std.stdio;
import core.stdc.string;
import std.random;
import std.algorithm;

enum length = 4096 * 2;

void init(ref ubyte[] a)
{
a.length = length;

for(int i = 0; i < length; i++)
{
a[i] = uniform!ubyte;
}
}

void verifyResults(ubyte[] a, ubyte[] b)
{
assert(memcmp(a.ptr, b.ptr, length) == 0);
}

void memcpyD(ubyte[] dst, ubyte[] src)
{
dst[] = src[];
}

void memcpyDstdAlg(ubyte[] dst, ubyte[] src)
{
copy(src, dst);
}

void memcpyC(ubyte[] dst, ubyte[] src)
{
memcpy(dst.ptr, src.ptr, length);
}

void memcpyNaive(ubyte[] dst, ubyte[] src)
{
for(int i = 0; i < length; i++)
{
dst[i] = src[i];
}
}

void memcpyASM(ubyte[] dst, ubyte[] src)
{
auto s = src.ptr;
auto d = dst.ptr;
size_t len = length;
asm pure nothrow @nogc
{
mov RSI, s;
mov RDI, d;
cld;
mov RCX, len;
rep;
movsb;
}
}

Duration benchmark(alias f)(ubyte[] dst, ubyte[] src, uint n)
{
Duration result;
auto sw = StopWatch(AutoStart.yes);

sw.reset();
foreach (_; 0 .. n)
{
f(dst, src);
}
result = sw.peek();

return result;
}

void main()
{
ubyte[] src;
ubyte[] dst;

// verify the integrity of the algorithm
init(src);
init(dst);
memcpyD(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyDstdAlg(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyC(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyNaive(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyASM(dst, src);
verifyResults(dst, src);

// test the performance of the algorithm
enum iterations = 1000;
writeln("memcpyD: ", benchmark!memcpyD(dst, src, iterations));
writeln("memcpyDstdAlg: ", benchmark!memcpyDstdAlg(dst, src, iterations));
writeln("memcpyC: ", benchmark!memcpyC(dst, src, iterations));
writeln("memcpyNaive: ", benchmark!memcpyNaive(dst, src, iterations));
writeln("memcpyASM: ", benchmark!memcpyASM(dst, src, iterations));

}

The results on my Windows 10 machine (Intel Core i7-6700, 
3.4GHz):

memcpyD: 127 ╬╝s and 3 hnsecs
memcpyDstdAlg: 195 ╬╝s and 9 hnsecs
memcpyC: 126 ╬╝s and 7 hnsecs
memcpyNaive: 17 ms, 974 ╬╝s, and 9 hnsecs
memcpyASM: 122 ╬╝s and 8 hnsecs
(Gotta love how windows displays μ)

The results running on Arch Linux 64-bit in a VirtualBox on the 
same Windows 10 machine:

memcpyD: 409 μs
memcpyDstdAlg: 400 μs
memcpyC: 404 μs and 4 hnsecs
memcpyNaive: 17 ms, 251 μs, and 6 hnsecs
memcpyASM: 162 μs and 8 hnsecs

The results appear more sane now, but it seems the behavior is 
highly platform dependent.  Still the ASM is doing well for my 
hardware.  If I run the test multiple times, I do see a lot of 
noise in the results, but each test seems to be affected 
proportionally, so I'm gaining a little more confidence in the 
benchmark.


I still need to analyze the assembly of C's memcpy (anyone know 
where I can find the source code?),


- default win32 OMF: 
https://github.com/DigitalMars/dmc/blob/master/src/core/MEMCCPY.C
- default linux: 
https://github.com/gcc-mirror/gcc/blob/master/libgcc/memcpy.c
- not used but interesting: 
https://github.com/esmil/musl/blob/master/src/string/memcpy.c




Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Walter Bright via Digitalmars-d

On 6/10/2018 7:49 PM, Mike Franklin wrote:

On Sunday, 10 June 2018 at 15:12:27 UTC, Kagamin wrote:

If the compiler can't get it right then who can?
The compiler implementation is faulty.  It rewrites the expressions to an 
`extern(C)` runtime implementation that is not @safe, nothrow, or pure: 
https://github.com/dlang/druntime/blob/706081f3cb23f4c597cc487ce16ad3d2ed021053/src/rt/lifetime.d#L1442  
The backend is not involved.


It's only faulty if there's a bug in it. It's essentially @trusted code.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Monday, 11 June 2018 at 02:49:00 UTC, Mike Franklin wrote:

The compiler implementation is faulty.  It rewrites the 
expressions to an `extern(C)` runtime implementation that is 
not @safe, nothrow, or pure:  
https://github.com/dlang/druntime/blob/706081f3cb23f4c597cc487ce16ad3d2ed021053/src/rt/lifetime.d#L1442  The backend is not involved.


Also, understand that this lowering happens in the IR generation 
stage:  
https://github.com/dlang/dmd/blob/3a79629988efd51d4dda9edb38a6701cd097da89/src/dmd/e2ir.d#L2616 so semantic analysis is not performed on the runtime call.  If it were, it would fail to compile.


Mike




Re: IOS support status

2018-06-10 Thread makedgreatagain via Digitalmars-d

On Thursday, 7 June 2018 at 14:12:15 UTC, Joakim wrote:

On Thursday, 7 June 2018 at 11:58:58 UTC, makedgreatagain wrote:

On Thursday, 7 June 2018 at 07:58:19 UTC, Joakim wrote:

[...]


Thanks for your suggestion, Joakim.

I tried to follow your suggestion and got all the commits from 
the original branch (116 commits by Dan Olson for the iOS 
branch).

When I tried to cherry-pick the first commit, I got a conflict 
at line 6232 (the iOS branch is based on the C++ frontend, while 
master ldc has moved to the D one).

I will try more, but I guess it will be hard for me to make it 
work.


Huh, I didn't look that far back in the branch, may not be 
worth merging. How is his last release working? Do you want 
only betterC?  If so, you may be better off backporting the 
betterC patch that got it working later, shouldn't be much to 
that one.


Thanks for your help.

The downside is that the betterC change is also based on the 
D-based branch, which makes it very hard to backport to the C++ 
branch.





Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Sunday, 10 June 2018 at 15:12:27 UTC, Kagamin wrote:

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
There are many reasons to do this, one of which is to leverage 
information available at compile-time and in D's type system 
(type sizes, alignment, etc...) in order to optimize the 
implementation of these functions, and allow them to be used 
from @safe code.


In safe code you just use assignment and array ops, backend 
does the rest.


On Sunday, 10 June 2018 at 13:27:04 UTC, Mike Franklin wrote:
But one thing I discovered is that while we can set an array's 
length in @safe, nothrow, pure code, it gets lowered to a 
runtime hook that is neither @safe, nothrow, nor pure; the 
compiler is lying to us.


If the compiler can't get it right then who can?



The compiler implementation is faulty.  It rewrites the 
expressions to an `extern(C)` runtime implementation that is not 
@safe, nothrow, or pure:  
https://github.com/dlang/druntime/blob/706081f3cb23f4c597cc487ce16ad3d2ed021053/src/rt/lifetime.d#L1442  The backend is not involved.


Mike


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 06/10/2018 08:01 PM, Walter Bright wrote:

On 6/10/2018 4:39 PM, David Nadlinger wrote:
That's not entirely true. Intel started optimising some of the REP 
string instructions again on Ivy Bridge and above. There is a CPUID 
bit to indicate that (ERMS?); I'm sure the Optimization Manual has 
further details. From what I remember, `rep movsb` is supposed to beat 
an AVX loop on most recent Intel µarchs if the destination is aligned 
and the data is longer than a few cache lines.


The drama of which instruction mix is faster on which CPU never abates!


In many ways, I really miss 80's machine architecture ;) So simple.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d
I've modified the test based on the feedback so far, so here's 
what it looks like now:


import std.datetime.stopwatch;
import std.stdio;
import core.stdc.string;
import std.random;
import std.algorithm;

enum length = 4096 * 2;

void init(ref ubyte[] a)
{
a.length = length;

for(int i = 0; i < length; i++)
{
a[i] = uniform!ubyte;
}
}

void verifyResults(ubyte[] a, ubyte[] b)
{
assert(memcmp(a.ptr, b.ptr, length) == 0);
}

void memcpyD(ubyte[] dst, ubyte[] src)
{
dst[] = src[];
}

void memcpyDstdAlg(ubyte[] dst, ubyte[] src)
{
copy(src, dst);
}

void memcpyC(ubyte[] dst, ubyte[] src)
{
memcpy(dst.ptr, src.ptr, length);
}

void memcpyNaive(ubyte[] dst, ubyte[] src)
{
for(int i = 0; i < length; i++)
{
dst[i] = src[i];
}
}

void memcpyASM(ubyte[] dst, ubyte[] src)
{
auto s = src.ptr;
auto d = dst.ptr;
size_t len = length;
asm pure nothrow @nogc
{
mov RSI, s;
mov RDI, d;
cld;
mov RCX, len;
rep;
movsb;
}
}

Duration benchmark(alias f)(ubyte[] dst, ubyte[] src, uint n)
{
Duration result;
auto sw = StopWatch(AutoStart.yes);

sw.reset();
foreach (_; 0 .. n)
{
f(dst, src);
}
result = sw.peek();

return result;
}

void main()
{
ubyte[] src;
ubyte[] dst;

// verify the integrity of the algorithm
init(src);
init(dst);
memcpyD(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyDstdAlg(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyC(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyNaive(dst, src);
verifyResults(dst, src);

init(src);
init(dst);
memcpyASM(dst, src);
verifyResults(dst, src);

// test the performance of the algorithm
enum iterations = 1000;
writeln("memcpyD: ", benchmark!memcpyD(dst, src, iterations));
writeln("memcpyDstdAlg: ", benchmark!memcpyDstdAlg(dst, src, iterations));
writeln("memcpyC: ", benchmark!memcpyC(dst, src, iterations));
writeln("memcpyNaive: ", benchmark!memcpyNaive(dst, src, iterations));
writeln("memcpyASM: ", benchmark!memcpyASM(dst, src, iterations));

}

The results on my Windows 10 machine (Intel Core i7-6700, 3.4GHz):
memcpyD: 127 ╬╝s and 3 hnsecs
memcpyDstdAlg: 195 ╬╝s and 9 hnsecs
memcpyC: 126 ╬╝s and 7 hnsecs
memcpyNaive: 17 ms, 974 ╬╝s, and 9 hnsecs
memcpyASM: 122 ╬╝s and 8 hnsecs
(Gotta love how windows displays μ)

The results running on Arch Linux 64-bit in a VirtualBox on the 
same Windows 10 machine:

memcpyD: 409 μs
memcpyDstdAlg: 400 μs
memcpyC: 404 μs and 4 hnsecs
memcpyNaive: 17 ms, 251 μs, and 6 hnsecs
memcpyASM: 162 μs and 8 hnsecs

The results appear more sane now, but it seems the behavior is 
highly platform dependent.  Still the ASM is doing well for my 
hardware.  If I run the test multiple times, I do see a lot of 
noise in the results, but each test seems to be affected 
proportionally, so I'm gaining a little more confidence in the 
benchmark.


I still need to analyze the assembly of C's memcpy (anyone know 
where I can find the source code?), test on more platforms, and 
test varying sizes, but I'm just collecting some initial data 
right now, to learn how to proceed.


I'd be interested in those with other platforms reporting back 
their results for their hardware, and of course suggestions for 
how to meet or beat C's memcpy with a pure D implementation.


Thanks for all the feedback so far.

Mike



Re: What is the point of nothrow?

2018-06-10 Thread Jonathan M Davis via Digitalmars-d-learn
On Sunday, June 10, 2018 23:59:17 Bauss via Digitalmars-d-learn wrote:
> What is the point of nothrow if it can only detect when Exception
> is thrown and not when Error is thrown?
>
> It seems like the attribute is useless because you can't really
> use it as protection to write bugless, safe code since the nasty
> bugs will pass by just fine.
>
> I'm aware that it's a feature that nothrow can throw Error, but
> it makes the attribute completely useless because you basically
> have no safety to guard against writing code that throws Error.
>
> To an extent @safe works, but there are tons of stuff that throws
> Error which you can only detect and guard against manually.
>
> So what is the point of nothrow when it can only detect
> exceptions you'd catch anyway.
>
> To me it would be so much more useful if you could detect code
> that could possibly throw Error.

Why do you care about detecting code that can throw an Error? Errors are
supposed to kill the program, not get caught. As such, why does it matter if
it can throw an Error?

Now, personally, I'm increasingly of the opinion that having Errors at all is
kind of dumb: if an Error is going to kill the program, and it's not safe to
do clean-up at that point, because the program is in an invalid state, then
why not just print the message and stack trace right there and kill the
program instead of throwing anything? But unfortunately, that's not what
happens, which does put things in the weird state where code can catch an
Error even though it shouldn't be doing that.

As for the benefits of nothrow, as I understand it, they're twofold:

1. You know that you don't have to worry about any Exceptions being thrown
from that code. You don't have to worry about doing any exception handling
or having to ensure that anything gets cleaned up because of an Exception
being thrown.

2. If the compiler knows that a function can't throw an Exception, then it
doesn't have to insert any of the Exception handling mechanism stuff that it
normally does when a function is called. It can assume that nothing ever
gets thrown. If an Error does get thrown, then none of the proper clean-up
will get done (e.g. constructors or scope statements), but because an Error
being thrown means that the program is in an invalid state, it's not
actually safe to be doing clean-up anyway. So, the fact that a function is
nothrow gives you a performance benefit, because none of that extra
Exception handling stuff gets inserted. How large a benefit that is in
practice, I don't know, but it is a gain that can't be had with a function
that isn't nothrow.

- Jonathan M Davis



Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Walter Bright via Digitalmars-d

On 6/10/2018 4:39 PM, David Nadlinger wrote:
That's not entirely true. Intel started optimising some of the REP string 
instructions again on Ivy Bridge and above. There is a CPUID bit to indicate 
that (ERMS?); I'm sure the Optimization Manual has further details. From what I 
remember, `rep movsb` is supposed to beat an AVX loop on most recent Intel 
µarchs if the destination is aligned and the data is longer than a few cache lines.


The drama of which instruction mix is faster on which CPU never abates!


What is the point of nothrow?

2018-06-10 Thread Bauss via Digitalmars-d-learn
What is the point of nothrow if it can only detect when Exception 
is thrown and not when Error is thrown?


It seems like the attribute is useless because you can't really 
use it as protection to write bugless, safe code since the nasty 
bugs will pass by just fine.


I'm aware that it's a feature that nothrow can throw Error, but 
it makes the attribute completely useless because you basically 
have no safety to guard against writing code that throws Error.


To an extent @safe works, but there is tons of stuff that throws 
Error which you can only detect and guard against manually.


So what is the point of nothrow when it can only detect 
exceptions you'd catch anyway.


To me it would be so much more useful if you could detect code 
that could possibly throw Error.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread David Nadlinger via Digitalmars-d

On Sunday, 10 June 2018 at 22:23:08 UTC, Walter Bright wrote:

On 6/10/2018 11:16 AM, David Nadlinger wrote:
Because of the large amounts of noise, the only conclusion one 
can draw from this is that memcpyD is the slowest,


Probably because it does a memory allocation.


Of course; that was already pointed out earlier in the thread.

The CPU makers abandoned optimizing the REP instructions 
decades ago, and just left the clunky implementations there for 
backwards compatibility.


That's not entirely true. Intel started optimising some of the 
REP string instructions again on Ivy Bridge and above. There is a 
CPUID bit to indicate that (ERMS?); I'm sure the Optimization 
Manual has further details. From what I remember, `rep movsb` is 
supposed to beat an AVX loop on most recent Intel µarchs if the 
destination is aligned and the data is longer than a few cache 
lines. I've never measured that myself, though.


 — David


Re: New D import speedup concept!

2018-06-10 Thread DigitalDesigns via Digitalmars-d

another interesting syntax:

import [] : std.algorithm, std.math;

Does implicit importing on all arrays.

int[] x;

x.canFind(x[0]);   // does implicit import for canFind!


At some point we might even be able to get away from 90% of 
importing.





New D import speedup concept!

2018-06-10 Thread DigitalDesigns via Digitalmars-d
Suppose you have a large source file. You use functions from 
outside libraries that use other components not directly in their 
own library:


module X
import Foo : foo;
auto x = foo();


module Foo;
import F : I;
I foo();


module F;
interface I;
class f : I;
alias Q = I;
f bar(f);

now, remember that Foo.foo returns an F.f in X. Auto saves the 
day because we don't have to explicitly state the type. This 
allows us to avoid importing F.


But suppose we want to cast the return value to f even though 
it's an I.


auto x = cast(f)foo();


Now we have to import the module F! auto doesn't save us any more.


What the compiler should do is notice that it can implicitly 
import f. Why? Because it could implicitly import I and there is 
no ambiguity.


Notice that there is absolutely no difference in the analysis by 
the compile between these two:


auto x = foo();
auto x = cast(Q)foo();  // Note that the module of foo is known, 
and so Q can be deduced. That is, if no Q exists in the current 
scope then try the module of foo's return type(f), if not found 
try foo's module(F).



But with the explicit cast we have to import Q. But don't you 
think that once the compiler knows the module it can just chain 
it along as if it were imported explicitly?



auto x = foo();
x.bar(x);  // bar is known because x is from module F and bar is 
also from F. It is "implicitly" imported. If bar already exists 
in the imports then an error is given. The user can use an alias 
to get around the problem as usual.



This feature will allow one to avoid having to import modules 
explicitly when the compiler can easily deduce the modules 
without ambiguity.








Re: delegates and functions

2018-06-10 Thread drug via Digitalmars-d-learn

On 10.06.2018 20:58, SrMordred wrote:

a => { return 2*a; }
  /\  \   /
  ||   \ /
  ||    \   /
  || \ /
  ||  \   /
 This is   \ /
 function   \  This is definition of delegate
 definition  \

so you have a function that returns delegate.

```
it's like
a => a => 2*a;
or
(a){ return () { return 2*a; }; }


I believe that a lot of other languages work differently, so when that 
knowledge is transposed to D, people get confused at first.


Eg. in JS:

x = x => x * 2;
x = x =>{ return x * 2; }
//equivalent



Ah, I see. From this point of view there is some inconsistency indeed.
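The difference between the two shapes can be checked directly (a minimal sketch):

```d
void main()
{
    auto f = (int a) => 2 * a;             // takes an int, returns an int
    auto g = (int a) => { return 2 * a; }; // takes an int, returns a *delegate*

    assert(f(21) == 42);
    assert(g(21)() == 42); // note the extra () — g(21) is an int delegate()
}
```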


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Temtaime via Digitalmars-d

On Sunday, 10 June 2018 at 22:23:08 UTC, Walter Bright wrote:

On 6/10/2018 11:16 AM, David Nadlinger wrote:
Because of the large amounts of noise, the only conclusion one 
can draw from this is that memcpyD is the slowest,


Probably because it does a memory allocation.



followed by the ASM implementation.


The CPU makers abandoned optimizing the REP instructions 
decades ago, and just left the clunky implementations there for 
backwards compatibility.



In fact, memcpyC and memcpyNaive produce exactly the same 
machine code (without bounds checking), as LLVM recognizes the 
loop and lowers it into a memcpy. memcpyDstdAlg instead gets 
turned into a vectorized loop, for reasons I didn't 
investigate any further.


This amply illustrates my other point that looking at the 
assembler generated is crucial to understanding what's 
happening.


On some CPU architectures (for example Intel Atoms), rep movsb is 
the fastest memcpy.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread solidstate1991 via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:

void memcpyASM()
{
auto s = src.ptr;
auto d = dst.ptr;
size_t len = length;
asm pure nothrow @nogc
{
mov RSI, s;
mov RDI, d;
cld;
mov RCX, len;
rep;
movsb;
}
}

Protip: Use SSE or AVX for even faster copying.
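For illustration, a 16-byte-chunk variant can be sketched with `core.simd` vectors (memcpySSE is a made-up name; this assumes x86 with SSE, non-overlapping buffers, 16-byte alignment, and a length that is a multiple of 16):

```d
import core.simd;

// Hypothetical vectorized copy: one 16-byte load/store per iteration.
void memcpySSE(void* dst, const(void)* src, size_t len)
{
    auto d = cast(void16*) dst;
    auto s = cast(const(void16)*) src;
    foreach (i; 0 .. len / 16)
        d[i] = s[i];
}
```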


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Walter Bright via Digitalmars-d

On 6/10/2018 11:16 AM, David Nadlinger wrote:
Because of the large amounts of noise, the only conclusion one can draw from 
this is that memcpyD is the slowest,


Probably because it does a memory allocation.



followed by the ASM implementation.


The CPU makers abandoned optimizing the REP instructions decades ago, and just 
left the clunky implementations there for backwards compatibility.



In fact, memcpyC and memcpyNaive produce exactly the same machine code (without 
bounds checking), as LLVM recognizes the loop and lowers it into a memcpy. 
memcpyDstdAlg instead gets turned into a vectorized loop, for reasons I didn't 
investigate any further.


This amply illustrates my other point that looking at the assembler generated is 
crucial to understanding what's happening.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Walter Bright via Digitalmars-d

On 6/10/2018 6:45 AM, Mike Franklin wrote:

 void memcpyD()
{
     dst = src.dup;
}


Note that .dup is doing a GC memory allocation.


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Walter Bright via Digitalmars-d

On 6/10/2018 5:49 AM, Mike Franklin wrote:

[...]


One source of entropy in the results is src and dst being global variables. 
Global variables in D are in TLS, and TLS access can be complex (many 
instructions) and is influenced by the -fPIC switch. Worse, global variable 
access is not optimized in dmd because of aliasing problems.


The solution is to pass src, dst, and length to the copy function as function 
parameters (and make sure function inlining is off).
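In other words, the benchmarked function would look something like this (a sketch of the suggested change; the 4096-byte size is arbitrary):

```d
import std.datetime.stopwatch : benchmark;

pragma(inline, false)   // keep the copy from being inlined away
void memcpyD(ubyte[] dst, const(ubyte)[] src)
{
    dst[] = src[];
}

void main()
{
    // locals passed as parameters, instead of module-level (TLS) globals
    auto src = new ubyte[4096];
    auto dst = new ubyte[4096];
    auto r = benchmark!(() => memcpyD(dst, src))(10_000);
}
```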


In light of this, I want to BEAT THE DEAD HORSE once again and assert that if 
the assembler generated by a benchmark is not examined, the results can be 
severely misleading. I've seen this happen again and again. In this case, TLS 
access is likely being benchmarked, not memcpy.


BTW, the relative timing of rep movsb can be highly dependent on which CPU chip 
you're using.


[Issue 18967] Allow wildcard with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18967

ag0aep6g  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||ag0ae...@gmail.com
 Resolution|--- |WORKSFORME

--- Comment #1 from ag0aep6g  ---
-J takes a directory, not a single file name. So your long version doesn't
actually work. The correct way would be `dmd -J/home/fantomas/includes`, with
`import("default.css")`, `import("icons.data")`, etc. in the D code.

https://run.dlang.io/is/xKQoJZ

So judging from your example, the current -J already does what you want.
Closing as WORKSFORME. Feel free to reopen if I'm missing the point.
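For illustration, with the directory form the D side would look something like this (file names taken from the report):

```d
// compiled with: dmd -J/home/fantomas/includes app.d
immutable css   = import("default.css");  // file contents embedded at compile time
immutable icons = import("icons.data");
```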

--


iopipe v0.1.0 - now with Windows support!

2018-06-10 Thread Steven Schveighoffer via Digitalmars-d-announce

iopipe version 0.1.0 has been released.

iopipe is a high-performance pipe processing system that makes it easy 
to string together pipelines to process data with as little buffer 
copying as possible.


Nothing has really been changed, but it now has Windows i/o support. I 
will note at this time, however, that ring buffers are not yet supported 
on Windows.


This version deprecates the IODev type that I had included, in favor of 
a new io library that shows extreme promise.


This version ONLY builds on 2.080.1 or later (the bug fix that I 
submitted at dconf has been merged in that version, and so iopipe will 
now build against Martin Nowak's io library). In fact, iopipe 
development was kind of stalled due to this bug, so I'm super-happy to 
see it fixed and released!


Note that the new io library also supports sockets, which IODev did not 
have support for, AND has a pluggable driver system, so you could 
potentially use fiber-based async io without rebuilding. It just makes a 
lot of sense for D to have a standard low-level io library that 
everything can use without having to kludge together multiple types of 
io libraries.


Near future plans:

1. Utilize a CI to make sure it continues to work on all platforms.
2. Add RingBuffer support on Windows
3. Add more driver support for std.io.
4. Continue development of JSON library that depends on iopipe (not yet 
on code.dlang.org).


git - https://github.com/schveiguy/iopipe
dub - https://code.dlang.org/packages/iopipe
docs - http://schveiguy.github.io/iopipe/

-Steve


Re: Security point of contact

2018-06-10 Thread Seb via Digitalmars-d

On Saturday, 9 June 2018 at 23:19:34 UTC, Cym13 wrote:

On Saturday, 9 June 2018 at 21:52:59 UTC, Seb wrote:

On Saturday, 9 June 2018 at 19:03:59 UTC, Cym13 wrote:

Yop.

I need to discuss an issue related to dub. No need to alarm 
everyone yet, that only concerns 1.3% of dub projects, but 
still it's something that shouldn't be taken lightly.


Who should I contact?


Sönke, Martin und myself.

https://github.com/s-ludwig (look in the DUB git log for his 
email address)

https://github.com/MartinNowak
https://github.com/wilzbach


Thank you, the mail should be in your box already.


Sorry - I never got a mail :/
Which address did you use?
In doubt, this is my official one:

https://seb.wilzba.ch/contact/


Re: dlangbot for Telegram - D compiler in your pocket

2018-06-10 Thread Dechcaudron via Digitalmars-d-announce

On Saturday, 9 June 2018 at 20:28:24 UTC, Anton Fediushin wrote:
Hello, I am glad to announce that new Telegram bot which can 
execute D code is up and running!


Check it out here: https://t.me/dlangbot

Features:
 - Two compilers to choose from: dmd (default) and ldc
 - Support for custom compiler arguments with `/args` command
 - It's possible to set program's stdin with `/stdin`
 - Code is automatically compiled and ran again when you edit 
your message


Repository: https://gitlab.com/ohboi/dlangbot

Any ideas on how to improve it are appreciated!


As much as I love Telegram bots and D, compilation and execution 
offered by a bot provides no clear advantage to me (still, I 
kinda like it). I assume you are looking out for potential 
vulnerabilities?


How about allowing for download of the executable?


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread I love Ice Cream via Digitalmars-d
Don't C implementations already do 90% of what you want? I 
thought most compilers know about and optimize these methods 
based on context. I thought they were *special* in the eyes of 
the compiler already. I think you are fighting a battle pitting 
40 years of tweaking against you...


Re: -J all

2018-06-10 Thread DigitalDesigns via Digitalmars-d

On Sunday, 10 June 2018 at 14:42:21 UTC, Basile B. wrote:

On Sunday, 10 June 2018 at 01:49:37 UTC, DigitalDesigns wrote:
Please allow -J to specify that all subdirectories are to be 
included! I'm having to include all subdirectories of my 
library with J because I import each file and extract 
information. It would be better to have something like


-JC:\Lib\*

rather than

-JC:\Lib\Internal
-JC:\Lib\Internal\OS
-JC:\Lib\API
-JC:\Lib\API\V1
-JC:\Lib\API\V1\Templates


...
..
.


This is opened as an enhancement request now: 
https://issues.dlang.org/show_bug.cgi?id=18967. IIRC there was 
a security concern mentioned last time this was proposed, not 
100% sure.


Yeah, but -J was added for a security concern! So when does the 
insanity end?


If it's such a big deal, e.g. to prevent root access, then limit 
asterisk usage to non-root and maybe only a depth of 3.


After all, if someone wanted access to sensitive areas just do 
-JC:\Windows\System32.


At some point one has to stop policing everything.




[Issue 18928] extern(C++) bad codegen, wrong calling convention

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18928

--- Comment #12 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/dmd

https://github.com/dlang/dmd/commit/20ffec71c10a6f6e1651503dbb63cb1a7015cb20
fix issue 18928 - extern(C++) bad codegen on win64, wrong calling convention

With the VC ABI, a member function always returns a struct on the stack no
matter its size

https://github.com/dlang/dmd/commit/3a79629988efd51d4dda9edb38a6701cd097da89
Merge pull request #8330 from rainers/issue_18928

fix issue 18928 - extern(C++) bad codegen on win64, wrong calling convention
merged-on-behalf-of: Mathias LANG 

--


Re: Access to structures defined in C

2018-06-10 Thread Joe via Digitalmars-d-learn

On Sunday, 10 June 2018 at 17:59:12 UTC, Joe wrote:
That worked but now I have a more convoluted case: a C array of 
pointers to int pointers, e.g.,


int **xs[] = {x1, x2, 0};
int *x1[] = {x1a, 0};
int *x2[] = {x2a, x2b, 0};
...
int x2a[] = { 1, 3, 5, 0};

Only the first line is exposed (and without the 
initialization). So I tried:


extern(C) __gshared extern int**[1] xs;

The D compiler accepts that, but just about any manipulation 
gets screamed at, usually with Error: only one index allowed to 
index int. Note that I'm trying to access the ints, i.e., in C 
something like xs[1][0][2] to access the 5 in x2a. Do I have to 
mimic the intermediate C arrays?


I don't know why I didn't try this first.  It seems that the D 
equivalent of C's xs[1][0][2] is simply xs.ptr[1][0][2].
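Putting it together, the declaration and access pattern would be (a sketch based on the above; the C side is assumed to define xs as shown earlier in the thread, and elem is a made-up helper):

```d
// C side, for reference:  int **xs[] = {x1, x2, 0};
extern(C) __gshared extern int**[1] xs;

int elem(size_t i, size_t j, size_t k)
{
    // .ptr drops the (wrong) compile-time bound of [1], so indexing
    // works exactly as it does in C: xs[i][j][k]
    return xs.ptr[i][j][k];
}
```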


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread David Nadlinger via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
I'm not experienced with this kind of programming, so I'm 
doubting these results.  Have I done something wrong?  Am I 
overlooking something?


You've just discovered the fact that one can rarely be careful 
enough with what is benchmarked, and having enough statistics.


For example, check out the following output from running your 
program on macOS 10.12, compiled with LDC 1.8.0:


---
$ ./test
memcpyD: 2 ms, 570 μs, and 9 hnsecs
memcpyDstdAlg: 77 μs and 2 hnsecs
memcpyC: 74 μs and 1 hnsec
memcpyNaive: 76 μs and 4 hnsecs
memcpyASM: 145 μs and 5 hnsecs
$ ./test
memcpyD: 3 ms and 376 μs
memcpyDstdAlg: 76 μs and 9 hnsecs
memcpyC: 104 μs and 4 hnsecs
memcpyNaive: 72 μs and 2 hnsecs
memcpyASM: 181 μs and 8 hnsecs
$ ./test
memcpyD: 2 ms and 565 μs
memcpyDstdAlg: 76 μs and 9 hnsecs
memcpyC: 73 μs and 2 hnsecs
memcpyNaive: 71 μs and 9 hnsecs
memcpyASM: 145 μs and 3 hnsecs
$ ./test
memcpyD: 2 ms, 813 μs, and 8 hnsecs
memcpyDstdAlg: 81 μs and 2 hnsecs
memcpyC: 99 μs and 2 hnsecs
memcpyNaive: 74 μs and 2 hnsecs
memcpyASM: 149 μs and 1 hnsec
$ ./test
memcpyD: 2 ms, 593 μs, and 7 hnsecs
memcpyDstdAlg: 77 μs and 3 hnsecs
memcpyC: 75 μs
memcpyNaive: 77 μs and 2 hnsecs
memcpyASM: 145 μs and 5 hnsecs
---

Because of the large amounts of noise, the only conclusion one 
can draw from this is that memcpyD is the slowest, followed by 
the ASM implementation.


In fact, memcpyC and memcpyNaive produce exactly the same machine 
code (without bounds checking), as LLVM recognizes the loop and 
lowers it into a memcpy. memcpyDstdAlg instead gets turned into a 
vectorized loop, for reasons I didn't investigate any further.


 — David




[Issue 18943] core.internal.hash remove outdated special case for DMD unaligned reads

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18943

github-bugzi...@puremagic.com changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--


[Issue 18943] core.internal.hash remove outdated special case for DMD unaligned reads

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18943

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/druntime

https://github.com/dlang/druntime/commit/552f9e20c26e3451113eed32f21cbd6012c00800
Fix Issue 18943 - core.internal.hash remove outdated special case for DMD
unaligned reads

https://github.com/dlang/druntime/commit/2aac1d568ddb3fec0172afb2a180dad70354ed31
Merge pull request #2210 from n8sh/core-hash-18943

Fix Issue 18943 - core.internal.hash remove outdated special case for DMD
unaligned reads
merged-on-behalf-of: David Nadlinger 

--


Re: delegates and functions

2018-06-10 Thread SrMordred via Digitalmars-d-learn

a => { return 2*a; }
  /\  \   /
  ||   \ /
  ||\   /
  || \ /
  ||  \   /
 This is   \ /
 function   \  This is definition of delegate
 definition  \

so you have a function that returns delegate.

```
it's like
a => a => 2*a;
or
(a){ return () { return 2*a; }; }


I believe that a lot of other languages work differently, so when 
that knowledge is transposed to D, people get confused at first.


Eg. in JS:

x = x => x * 2;
x = x =>{ return x * 2; }
//equivalent




Re: Access to structures defined in C

2018-06-10 Thread Joe via Digitalmars-d-learn

On Wednesday, 14 March 2018 at 02:17:57 UTC, Adam D. Ruppe wrote:
The type system would *like* to know, certainly for correct 
range errors, but if you declare it as the wrong length and use 
the .ptr, it still works like it does in C:


extern(C) __gshared extern char*[1] files; // still works

import core.stdc.stdio;

void main() {
printf("%s\n", files.ptr[2]); // ptr bypasses the range 
check

}



That worked but now I have a more convoluted case: a C array of 
pointers to int pointers, e.g.,


int **xs[] = {x1, x2, 0};
int *x1[] = {x1a, 0};
int *x2[] = {x2a, x2b, 0};
...
int x2a[] = { 1, 3, 5, 0};

Only the first line is exposed (and without the initialization). 
So I tried:


extern(C) __gshared extern int**[1] xs;

The D compiler accepts that, but just about any manipulation gets 
screamed at, usually with Error: only one index allowed to index 
int. Note that I'm trying to access the ints, i.e., in C 
something like xs[1][0][2] to access the 5 in x2a. Do I have to 
mimic the intermediate C arrays?



But if you can match the length, that's ideal.


Unfortunately, although the C array lengths are known at C 
compile time, they're not made available otherwise so I'm afraid 
the [1] trick will have to do for now.




@nogc logging library

2018-06-10 Thread basiliscos via Digitalmars-d-learn

Hi,

I'm D-newbie and looking for no-gc loggig library, ideally with 
timestamps and formatting with output to stdout / file.


Is there any?

Thanks!

WBR,
basiliscos
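Until someone points at a dedicated library, a minimal @nogc timestamped logger can be sketched on top of the C runtime (a stopgap, not a library recommendation; logMsg is a made-up name):

```d
import core.stdc.stdio : FILE, fprintf, stderr;
import core.stdc.time : time;

// C runtime calls allocate nothing on the GC heap, so this is @nogc.
void logMsg(scope const(char)* msg, FILE* sink = stderr) @nogc nothrow
{
    fprintf(sink, "[%lld] %s\n", cast(long) time(null), msg);
}

void main() @nogc nothrow
{
    logMsg("hello from @nogc land");
}
```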


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Kagamin via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
There are many reasons to do this, one of which is to leverage 
information available at compile-time and in D's type system 
(type sizes, alignment, etc...) in order to optimize the 
implementation of these functions, and allow them to be used 
from @safe code.


In safe code you just use assignment and array ops, backend does 
the rest.


On Sunday, 10 June 2018 at 13:27:04 UTC, Mike Franklin wrote:
But one thing I discovered is that while we can set an array's 
length in @safe, nothrow, pure code, it gets lowered to a 
runtime hook that is neither @safe, nothrow, nor pure; the 
compiler is lying to us.


If the compiler can't get it right then who can?


Re: std.digest can't CTFE?

2018-06-10 Thread Johannes Pfau via Digitalmars-d
Am Fri, 08 Jun 2018 11:46:41 -0700 schrieb Manu:
> 
> I'm already burning about 3x my reasonably allocate-able free time to
> DMD PR's...
> I'd really love if someone else would look at that :)

I'll see if I can allocate some time for that. Should be a mostly trivial 
change.

> I'm not quite sure what you mean though; endian conversion functions are
> still endian conversion functions, and they shouldn't be affected here.

Yes, but the point made in that article is that you can implement 
*Endian<=>native conversions without knowing the native endianness. This 
would immediately make these functions CTFE-able.
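For instance (a sketch of the idea):

```d
// CTFE-friendly little-endian conversions: the shifts pin down the
// byte order, so no version(BigEndian)/version(LittleEndian) is needed.
ubyte[4] toLE(uint x) pure nothrow @nogc @safe
{
    ubyte[4] r;
    r[0] = cast(ubyte) x;
    r[1] = cast(ubyte)(x >> 8);
    r[2] = cast(ubyte)(x >> 16);
    r[3] = cast(ubyte)(x >> 24);
    return r;
}

uint fromLE(const ubyte[4] b) pure nothrow @nogc @safe
{
    return b[0] | (b[1] << 8) | (b[2] << 16) | (cast(uint) b[3] << 24);
}

// Works at compile time, unlike a cast through a pointer:
static assert(fromLE(toLE(0xDEADBEEF)) == 0xDEADBEEF);
```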

> The problem is in the std.digest code where it *calls* endian functions
> (or makes endian assumptions). There need be no reference to endian in
> std.digest... if code is pulling bytes from an int (ie, cast(byte*)) or
> something, just use ubyte[4] and index it instead if uint, etc. I'm
> surprised that digest code would use anything other than byte buffers.
> It may be that there are some optimised version()-ed fast-paths might be
> endian conscious, but the default path has no reason to not work.

That's not how hash algorithms are usually specified. These algorithms 
perform bit rotate operations, additions, multiplications on these 
values*. You could probably implement these on byte[4] values instead, 
but you'll waste time porting the algorithm, benchmarking possible 
performance impacts and it will be more difficult to compare the 
implementation to the reference implementation (think of audits).

So it's not realistic to change this.

* An interesting question here is if you could actually always ignore 
system endianness and do simple casts when cleverly adjusting all 
constants in the algorithm to fit?
-- 
Johannes


Re: -J all

2018-06-10 Thread Basile B. via Digitalmars-d

On Sunday, 10 June 2018 at 01:49:37 UTC, DigitalDesigns wrote:
Please allow -J to specify that all subdirectories are to be 
included! I'm having to include all subdirectories of my 
library with J because I import each file and extract 
information. It would be better to have something like


-JC:\Lib\*

rather than

-JC:\Lib\Internal
-JC:\Lib\Internal\OS
-JC:\Lib\API
-JC:\Lib\API\V1
-JC:\Lib\API\V1\Templates


...
..
.


This is opened as an enhancement request now: 
https://issues.dlang.org/show_bug.cgi?id=18967. IIRC there was a 
security concern mentioned last time this was proposed, not 100% 
sure.


[Issue 18967] New: Allow wildcard with the -J option

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18967

  Issue ID: 18967
   Summary: Allow wildcard with the -J option
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: b2.t...@gmx.com

e.g

dmd -J/home/fantomas/includes/*

instead of

dmd -J/home/fantomas/includes/default.css ^
-J/home/fantomas/includes/icons.data ^
-J/home/fantomas/includes/EN.po ^
-J/home/fantomas/includes/FR.po ^
-J/home/fantomas/includes/RU.po ^
-J/home/fantomas/includes/JP.po

--


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Seb via Digitalmars-d

On Sunday, 10 June 2018 at 13:45:54 UTC, Mike Franklin wrote:

On Sunday, 10 June 2018 at 13:16:21 UTC, Adam D. Ruppe wrote:

arr1[] = arr2[]; // the compiler makes this memcpy, the 
optimizer can further do its magic


void memcpyD()
{
dst = src.dup;
}

void memcpyD2()
{
dst[] = src[];
}

-
memcpyD: 1 ms, 725 μs, and 1 hnsec
memcpyD2: 587 μs and 5 hnsecs
memcpyASM: 119 μs and 5 hnsecs

Still, the ASM version is much faster.

btw, what's the difference between the two?  If you can't tell, 
I'm actually not a very good D programmer.


Mike


I would increase the test data size, s.t. you get at least a few 
seconds.
Otherwise the benchmarking won't tell you much because it's 
subject to too much randomness.
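i.e. something along these lines (a sketch; the 64 MiB figure is an arbitrary choice):

```d
import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

void main()
{
    auto src = new ubyte[64 * 1024 * 1024];  // big enough to dominate the noise
    auto dst = new ubyte[src.length];

    // 100 iterations over 64 MiB gives run times in the seconds range
    auto results = benchmark!(() { dst[] = src[]; })(100);
    writeln("memcpyD2: ", results[0]);
}
```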


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread rikki cattermole via Digitalmars-d

On 11/06/2018 1:45 AM, Mike Franklin wrote:

On Sunday, 10 June 2018 at 13:16:21 UTC, Adam D. Ruppe wrote:

arr1[] = arr2[]; // the compiler makes this memcpy, the optimizer can 
further do its magic


void memcpyD()
{
     dst = src.dup;


malloc (for slice not static array)


}

void memcpyD2()
{
     dst[] = src[];


memcpy


}

-
memcpyD: 1 ms, 725 μs, and 1 hnsec
memcpyD2: 587 μs and 5 hnsecs
memcpyASM: 119 μs and 5 hnsecs

Still, the ASM version is much faster.

btw, what's the difference between the two?  If you can't tell, I'm 
actually not a very good D programmer.


Mike







Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Sunday, 10 June 2018 at 13:16:21 UTC, Adam D. Ruppe wrote:

arr1[] = arr2[]; // the compiler makes this memcpy, the 
optimizer can further do its magic


void memcpyD()
{
dst = src.dup;
}

void memcpyD2()
{
dst[] = src[];
}

-
memcpyD: 1 ms, 725 μs, and 1 hnsec
memcpyD2: 587 μs and 5 hnsecs
memcpyASM: 119 μs and 5 hnsecs

Still, the ASM version is much faster.

btw, what's the difference between the two?  If you can't tell, 
I'm actually not a very good D programmer.


Mike





Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Sunday, 10 June 2018 at 13:17:53 UTC, Guillaume Piolat wrote:

Please make one that guarantee the usage of the corresponding 
backend intrinsic, for example on LLVM.


I tested with ldc and got similar results.  I thought the 
implementation in C forwarded to the backend intrinsic.  I think 
even LDC links with GCC, and the distribution of GCC is tuned for 
a given platform and architecture, so am I not already utilizing 
GCC's backend intrinsic?


Also, I'm not trying to find the holy grail of memcpy with this 
endeavor.  I just want to find something as good or better than 
what the druntime is currently using, but implemented in D.  Even 
if it's not optimal, if it's as good as what druntime is 
currently using, I want to replace those calls in the runtime 
with a D implementation.


Mike




Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Sunday, 10 June 2018 at 13:16:21 UTC, Adam D. Ruppe wrote:


And D already has it built in as well for @safe etc:

arr1[] = arr2[]; // the compiler makes this memcpy, the 
optimizer can further do its magic


so be sure to check against that too.


My intent is to use the D implementation in the druntime.  I'm 
trying to replace the runtime hooks with templates, so we don't 
have to rely so much on `TypeInfo` and we can begin to use more 
of D without linking in the runtime.


But one thing I discovered is that while we can set an array's 
length in @safe, nothrow, pure code, it gets lowered to a runtime 
hook that is neither @safe, nothrow, nor pure; the compiler is 
lying to us.  If I replace the runtime hook with a template, I 
need to be honest, and that means all that low-level code in 
druntime that the runtime hook depends on needs to be @safe, 
nothrow, and pure.  So I'm starting with the fundamental 
dependencies on the C library, trying to ensure they can be 
replaced with D implementations and use in D idiomatically.  For 
example, memcpy might look like this.


void memcpy(T)(T[] dest, T[] src);

We can extract size, alignment, etc. at compile-time.
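A rough sketch of what such a template might look like (illustrative only, not the eventual druntime implementation; the const on src is my addition):

```d
void memcpy(T)(T[] dest, const(T)[] src) @safe pure nothrow @nogc
{
    assert(dest.length == src.length);
    // T.sizeof and T.alignof are compile-time constants here, so
    // size- and alignment-specialized copy paths could be selected
    // with static if; the plain bounds-checked loop stands in for them.
    foreach (i; 0 .. src.length)
        dest[i] = src[i];
}
```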

So, my goal is not for user-facing code, but for improving the 
druntime implementations.


Does that make sense?

Mike


Re: delegates and functions

2018-06-10 Thread drug via Digitalmars-d-learn

On 10.06.2018 12:21, OlegZ wrote:

On Saturday, 9 June 2018 at 22:28:22 UTC, Ali Çehreli wrote:
There is some explanation at the following page, of how the lambda 
syntax is related to the full syntax:

  http://ddili.org/ders/d.en/lambda.html#ix_lambda.=%3E


copy rect from article as image 
https://i.gyazo.com/c23a9139688b7ed59fbe9c6cdcf91b93.png >
well, I can to remember that lambda () => { return ..; } returns another 
implicit lambda


you shouldn't memorize it. you need to understand that `a=>2*a` is the 
short form of `(a) { return 2*a; }`. So using `a=>{ return 2*a; }` you get

```
(a) { return (){ return 2*a; }; }
```
i.e. a function returning delegate.
```
a => { return 2*a; }
  /\  \   /
  ||   \ /
  ||\   /
  || \ /
  ||  \   /
 This is   \ /
 function   \  This is definition of delegate
 definition  \

so you have a function that returns delegate.

```
it's like
a => a => 2*a;
or
(a){ return () { return 2*a; }

just first version is much simpler and namely convenience is the reason 
of this syntax I guess. >
can somebody tell me please what was the reason that the lambda (blah) => { 
return value + 1; } actually returns another lambda that returns 
value+1? in what case can it be useful? why does Dlang give me a lambda of 
a lambda where I want a lambda written with "=> { return"?

I am expecting some magic and simple trick with lambdas.


You are confusing two different forms of the same thing. The form with "=>" is 
very convenient if the expression is short; in other cases the form `(){}` is 
suited better.
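A short, runnable illustration of the difference (variable names are mine):

```d
void main()
{
    auto l1 = (int a) => a + 1;             // lambda returning an int
    auto l2 = (int a) => { return a + 1; }; // lambda returning a *delegate*
    assert(l1(1) == 2);
    assert(l2(1)() == 2); // extra () to call the returned delegate
}
```

The second form compiles because `{ return a + 1; }` is itself a function-literal expression, so `=>` dutifully returns it.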


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d

On Sunday, 10 June 2018 at 13:05:33 UTC, Nicholas Wilson wrote:

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
I'm exploring the possibility of implementing some of the 
basic software building blocks (memcpy, memcmp, memmove, 
etc...) that D utilizes from the C library with D 
implementations.  There are many reasons to do this, one of 
which is to leverage information available at compile-time and 
in D's type system (type sizes, alignment, etc...) in order to 
optimize the implementation of these functions, and allow them 
to be used from @safe code.


[...]


what compiler? what flags?


dmd main.d
Arch Linux x86_64

DMD, with no flags.  I'm not compiling C's memcpy, and the 
significant D implementation is memcpyASM which is all inline 
assembly, so I don't see how the compiler flags will make any 
difference, anyway.


Mike


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Adam D. Ruppe via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
D utilizes from the C library with D implementations.  There 
are many reasons to do this, one of which is to leverage 
information available at compile-time and in D's type system 
(type sizes, alignment, etc...) in order to optimize the 
implementation of these functions, and allow them to be used 
from @safe code.


So keep in mind that memcpy is really a magical intrinsic anyway, 
and optimizers frequently don't actually call a function, but 
rather see the size and replace the instructions inline (like it 
might replace it with just a couple of movs instead of something 
fancy).


And D already has it built in as well for @safe etc:

arr1[] = arr2[]; // the compiler makes this a memcpy; the optimizer 
can further do its magic


so be sure to check against that too.
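That built-in form can be sketched like this (function and variable names here are mine, not from the thread):

```d
// Sketch: the built-in slice copy, usable from @safe code
@safe void copySlices(ubyte[] dst, const(ubyte)[] src)
{
    assert(dst.length == src.length); // slice copy requires equal lengths
    dst[] = src[]; // lowered to a memcpy-like runtime call; the
                   // optimizer is free to inline it for known sizes
}

void main()
{
    ubyte[8] a = [1, 2, 3, 4, 5, 6, 7, 8];
    ubyte[8] b;
    copySlices(b[], a[]);
    assert(b == a);
}
```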




Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Guillaume Piolat via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
I'm not experienced with this kind of programming, so I'm 
doubting these results.  Have I done something wrong?  Am I 
overlooking something?



Hi,

I've spent a lot of time optimizing memcpy. One of the results was 
that with Intel ICC the compiler intrinsics were unbeatable.
Please make one that guarantees the usage of the corresponding 
backend intrinsic, for example on LLVM.


The reasoning is that copies of small, statically known areas of 
memory will be elided into a few byte moves (inlined), whereas the 
larger ones will call the highly optimized C stdlib implementation.
If you use ASM instead of IR, the optimization barriers and 
register spilling will probably make you less efficient.
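The statically-known-size case can be sketched like this (my own example; with optimizing backends such as LDC, the slice copy below is typically inlined to a handful of movs rather than a call):

```d
// Sketch: a fixed-size copy. For small N, optimizing backends
// typically elide any library call and emit a few movs inline.
void copyFixed(size_t N)(ref ubyte[N] dst, ref const(ubyte[N]) src)
{
    dst[] = src[]; // length is a compile-time constant here
}

void main()
{
    ubyte[16] a = 42; // fill every element with 42
    ubyte[16] b;
    copyFixed(b, a);  // N = 16 is deduced from the static arrays
    assert(b == a);
}
```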


Re: Replacing C's memcpy with a D implementation

2018-06-10 Thread Nicholas Wilson via Digitalmars-d

On Sunday, 10 June 2018 at 12:49:31 UTC, Mike Franklin wrote:
I'm exploring the possibility of implementing some of the basic 
software building blocks (memcpy, memcmp, memmove, etc...) that 
D utilizes from the C library with D implementations.  There 
are many reasons to do this, one of which is to leverage 
information available at compile-time and in D's type system 
(type sizes, alignment, etc...) in order to optimize the 
implementation of these functions, and allow them to be used 
from @safe code.


[...]


what compiler? what flags?


Replacing C's memcpy with a D implementation

2018-06-10 Thread Mike Franklin via Digitalmars-d
I'm exploring the possibility of implementing some of the basic 
software building blocks (memcpy, memcmp, memmove, etc...) that D 
utilizes from the C library with D implementations.  There are 
many reasons to do this, one of which is to leverage information 
available at compile-time and in D's type system (type sizes, 
alignment, etc...) in order to optimize the implementation of 
these functions, and allow them to be used from @safe code.


The prevailing wisdom has been that there is no way to improve on 
C's memcpy implementation, given that it has been micro-optimized 
to death over several decades by many talented members of the 
human race.


So, I threw the following benchmark together to try to get a clue 
about what I was up against, and in a very short time, I beat the 
snot out of C's memcpy.  The benefit seems to disappear as the 
array sizes increase, but I believe the vast majority of calls to 
memcpy are probably quite small.


import std.datetime.stopwatch;
import std.stdio;
import core.stdc.string;
import std.random;
import std.algorithm;

enum length = 4096 * 2;
ubyte[length] dst;
ubyte[length] src;
auto rnd = Random(42);
ubyte[] src2;
ubyte[] dst2;

void verifyResults()
{
assert(memcmp(dst.ptr, src.ptr, length) == 0);
assert(memcmp(dst2.ptr, src2.ptr, length) == 0);
}

void randomizeData()
{
for(int i = 0; i < length; i++)
{
src[i] = uniform!ubyte;
dst[i] = uniform!ubyte;
}

src2 = src;
dst2 = dst;
}

void memcpyD()
{
dst = src.dup;
}

void memcpyDstdAlg()
{
copy(src2, dst2);
}

void memcpyC()
{
memcpy(dst.ptr, src.ptr, length);
}

void memcpyNaive()
{
for(int i = 0; i < length; i++)
{
dst[i] = src[i];
}
}

void memcpyASM()
{
auto s = src.ptr;
auto d = dst.ptr;
size_t len = length;
asm pure nothrow @nogc
{
mov RSI, s;
mov RDI, d;
cld;
mov RCX, len;
rep;
movsb;
}
}

void main()
{
// verify the integrity of the algorithm
randomizeData();
memcpyD();
verifyResults();

randomizeData();
memcpyDstdAlg();
verifyResults();

randomizeData();
memcpyC();
verifyResults();

randomizeData();
memcpyNaive();
verifyResults();

randomizeData();
memcpyASM();
verifyResults();

// test the performance of the algorithm
auto r = benchmark!(memcpyD, memcpyDstdAlg, memcpyC, 
memcpyNaive, memcpyASM)(1000);

Duration memcpyDResult = r[0];
Duration memcpyDstdAlgResult = r[1];
Duration memcpyCResult = r[2];
Duration memcpyNaiveResult = r[3];
Duration memcpyASMResult = r[4];

writeln("memcpyD: ", memcpyDResult);
writeln("memcpyDstdAlg: ", memcpyDstdAlgResult);
writeln("memcpyC: ", memcpyCResult);
writeln("memcpyNaive: ", memcpyNaiveResult);
writeln("memcpyASM: ", memcpyASMResult);
}


-- Output 
memcpyD: 1 ms, 772 μs, and 4 hnsecs
memcpyDstdAlg: 531 μs and 8 hnsecs
memcpyC:   371 μs and 3 hnsecs
memcpyNaive:21 ms, 572 μs, and 2 hnsecs
memcpyASM: 119 μs and 6 hnsecs



I'm not experienced with this kind of programming, so I'm 
doubting these results.  Have I done something wrong?  Am I 
overlooking something?


Thanks,
Mike



Re: Passing C++ class to DLL for callbacks from D (Steam)

2018-06-10 Thread rikki cattermole via Digitalmars-d-learn

On 10/06/2018 11:28 PM, cc wrote:
Woops, that GetIntPtr came from the .cs header in the same folder as the 
C++ headers distributed with the SDK, that'll teach me to ctrl+f "class 
ISteamClient" in all open files and copy/paste before reading.


Stay with the C/C++ headers for C/C++ code. We generally don't go 
testing the long way ;)


Re: Passing C++ class to DLL for callbacks from D (Steam)

2018-06-10 Thread cc via Digitalmars-d-learn

On Sunday, 10 June 2018 at 10:47:58 UTC, rikki cattermole wrote:

On 10/06/2018 10:29 PM, cc wrote:
And it successfully fires the 3-arg Run method of the callback 
object.  However for some reason the function table of the 
ISteamClient seems to be off by one.. it kept calling the 
wrong methods until I commented one out, in this case 
GetIntPtr() as seen above, then everything seemed to line up.  
Not sure what the proper way to ensure it matches the C++ 
layout here, but at least it seems to be mostly working for 
now.  Thanks again!


Ugh what GetIntPtr? Unless of course this header file is 
wrong[0].


Make the members match exactly, order and everything and it 
should "just work".


[0] 
https://github.com/ValveSoftware/source-sdk-2013/blob/master/mp/src/public/steam/isteamclient.h#L113


Woops, that GetIntPtr came from the .cs header in the same folder 
as the C++ headers distributed with the SDK, that'll teach me to 
ctrl+f "class ISteamClient" in all open files and copy/paste 
before reading.


Anyway I played around with it some more and found the one single 
line that was causing the problem.  It needs this:


HSteamPipe SteamAPI_GetHSteamPipe();

rather than one of these:

HSteamPipe SteamAPI_ISteamClient_CreateSteamPipe(ISteamClient 
instancePtr);

virtual HSteamPipe CreateSteamPipe() = 0;

It looks like the pipe is already created upon initialization, and 
calling CreateSteamPipe creates a secondary one that doesn't 
receive callbacks. SteamAPI_GetHSteamPipe is called inline to 
retrieve the current pipe on initialization of the classed 
version of the API, but the documentation doesn't mention it 
exists at all, leading to confusion when using the flat version.  
Well, mystery solved.




Re: WTF! new in class is static?!?!

2018-06-10 Thread Bauss via Digitalmars-d-learn

On Sunday, 10 June 2018 at 02:34:11 UTC, KingJoffrey wrote:

On Sunday, 10 June 2018 at 01:27:50 UTC, bauss wrote:

On Saturday, 9 June 2018 at 12:40:07 UTC, RealProgrammer wrote:
maybe you and others in the D 'community' should start paying 
attention to the 'opinions' of those who do professional 
development with professional compilers.


I do professional work with a professional compiler aka. The D 
compiler and I've been doing so for years.


Well then, you should like my idea for the dconf '2019' logo...

Dman and a bug, holding hands, and the bug saying "I'm not a 
bug, I'm a feature!".


Sure, because other compilers do not have bugs.


Re: Passing C++ class to DLL for callbacks from D (Steam)

2018-06-10 Thread rikki cattermole via Digitalmars-d-learn

On 10/06/2018 10:29 PM, cc wrote:
And it successfully fires the 3-arg Run method of the callback object.  
However for some reason the function table of the ISteamClient seems to 
be off by one.. it kept calling the wrong methods until I commented one 
out, in this case GetIntPtr() as seen above, then everything seemed to 
line up.  Not sure what the proper way to ensure it matches the C++ 
layout here, but at least it seems to be mostly working for now.  Thanks 
again!


Ugh what GetIntPtr? Unless of course this header file is wrong[0].

Make the members match exactly, order and everything and it should "just 
work".


[0] 
https://github.com/ValveSoftware/source-sdk-2013/blob/master/mp/src/public/steam/isteamclient.h#L113


Re: Passing C++ class to DLL for callbacks from D (Steam)

2018-06-10 Thread cc via Digitalmars-d-learn

On Sunday, 10 June 2018 at 02:57:34 UTC, evilrat wrote:
Only subsystem getters like SteamUser() or SteamInventory() 
require wrapping.


I really can't understand why they ever chose to silently 
ignore registering callbacks received with C API system 
handles...


Thanks to the information you supplied I was able to get it 
working without a wrapper, like so:


extern(C++) abstract class ISteamClient {
//public abstract IntPtr GetIntPtr();
public abstract uint CreateSteamPipe();
public abstract bool BReleaseSteamPipe(uint hSteamPipe);
public abstract uint ConnectToGlobalUser(uint hSteamPipe);
	public abstract uint CreateLocalUser(ref uint phSteamPipe,uint 
eAccountType);

public abstract void ReleaseUser(uint hSteamPipe,uint hUser);
	public abstract ISteamUser GetISteamUser(uint hSteamUser,uint 
hSteamPipe,const(char)* pchVersion);

...
	public abstract ISteamUserStats GetISteamUserStats(uint 
hSteamUser,uint hSteamPipe,const(char)* pchVersion);

...
}

HSteamUser hSteamUser = SteamAPI_GetHSteamUser();
HSteamPipe hSteamPipe = SteamAPI_GetHSteamPipe();
auto steamClient = cast(ISteamClient) 
SteamInternal_CreateInterface(STEAMCLIENT_INTERFACE_VERSION);
auto userStats = steamClient.GetISteamUserStats(hSteamUser, 
hSteamPipe, STEAMUSERSTATS_INTERFACE_VERSION);
auto hid = 
SteamAPI_ISteamUserStats_GetNumberOfCurrentPlayers(userStats);

auto cbk = new CallResult!NumberOfCurrentPlayers_t();
SteamAPI_RegisterCallResult(cbk, hid);


And it successfully fires the 3-arg Run method of the callback 
object.  However for some reason the function table of the 
ISteamClient seems to be off by one.. it kept calling the wrong 
methods until I commented one out, in this case GetIntPtr() as 
seen above, then everything seemed to line up.  Not sure what the 
proper way to ensure it matches the C++ layout here, but at least 
it seems to be mostly working for now.  Thanks again!


Re: delegates and functions

2018-06-10 Thread OlegZ via Digitalmars-d-learn

On Saturday, 9 June 2018 at 22:28:22 UTC, Ali Çehreli wrote:
There is some explanation at the following page, of how the 
lambda syntax is related to the full syntax:

  http://ddili.org/ders/d.en/lambda.html#ix_lambda.=%3E


copy rect from article as image 
https://i.gyazo.com/c23a9139688b7ed59fbe9c6cdcf91b93.png


well, I can remember that the lambda () => { return ..; } returns 
another implicit lambda

but my mind refuses to understand fact that ...

auto l1 = (int a) => a + 1; // returns incremented value
auto l2 = (int a) => { return a + 1; }; // return another 
lambda that increment value

... very different artifacts.
WHY? it looks the same, but in Dlang it's some kind of mind trap; 
it feels mindless and false. I am upset and angry, as if somebody 
just told me "There Is No Santa Claus in Dlang!" (I felt the same 
yesterday and have almost calmed down already)


D-compiler: you want one lambda? you are lucky today! I'll give 
you two lambdas, squared (^2), for the same price! don't thank me!


really, if I want a lambda that returns another lambda, I can write
auto l1 = (int a) => () => a + 1; // maybe I need it in some 
special case
but if I want a lambda with a return statement... well, I should 
forget about the shorter syntax and write
auto l1 = (int a) { /*...*/ return a + 1; } // why can't I use 
"=>" here?
silent cry. yes, this version is longer than it would be with "=>", 
but my production language is C# (same as Dlang, with CIL instead 
of native code, before today) and I am trying to do something in 
Dlang.


my state when I stick to such things looks like this (pic about 
JavaScript) 
https://pics.me.me/javascript-%3E-53-53-5-2-5-2-%3E-53-ah-gonna-29130223.png


DON'T READ NEXT

I read a post "Dlang is a crap" 2 weeks ago.
well, I agree with the author on many things.
* and small things like (not) using "=>" can kick your ass for 
hours.
* and the naming of exceptions, for example in std.concurrency - 
some end with "Exception", some don't - tells me that there is no 
single center of Dlang development, no good organization and no 
standard.
* and VisualD sometimes loses autocompletion/intellisense and 
there's no way to get it back, except restarting the computer.
* and parsing the Binance (cryptoexchange) JSON response "All Market 
Tickers Stream" from 
https://github.com/binance-exchange/binance-official-api-docs/blob/master/web-socket-streams.md 
(~150 submessages, about 200KB) with ctRegex (which is advertised as 
very fast) took 13ms. The task is to find every submessage that 
starts with '{' and ends with '}' and parse it from JSON into some 
struct. Simply replacing ctRegex with indexOf( '{'|'}' ) decreased 
the time to 2ms - only 90% of which is filling 150 structs with data 
(which means finding all string values in a submessage and 
converting them to types like double or long with my CTFE code, not 
Dlang's JSON). So the compile-time regex searching for the opening 
'{' and closing '}' ate most of the time. WTF? Well, Go (version 
maybe 1.9) was a little bit faster, about 10ms, but C# with Json.NET 
- where there is a JIT, no string spans (meaning every substring is 
allocated on the GC heap), and all strings are Unicode (the 
WebSocket sends UTF-8, which adds encoding-to-Unicode time) - spent 
only 3ms for all of it. Dlang can do things better and faster, but 
it doesn't want to (or people don't want it to)
* and there are many "nothrow nogc trusted pure" attributes near 
every function in Phobos, and you have to read every function 
attribute when trying to understand exactly what a function does 
and how it works, because you might miss some important attribute. 
I don't know how this should be fixed or hidden from view. It 
would probably be better to describe such stuff before the 
function, like [context: nogc noBoundsCheck nothrow pure trusted] 
int func( args... ), because then you can visually ignore the 
[context...] part and look at the function without the urban 
garbage. Or maybe it should look, in the editor, like checkbox 
spans at the right side of the editor where you can toggle such 
attributes on and off, so the text stays clear and not overloaded? 
(English is not my native language)
* and nobody will change anything in the textual representation of 
the code, because these things are already used in a lot of code 
and packages (except VisualD, which will be fixed in a few days).

it's sad.
on top of Dlang's textual code representation there should be 
another, sugared representation that hides and reorganizes some 
things (like TypeScript over JavaScript, or MOC in Qt).
when Rust adds a GC, I will go to Rust - I don't like the 
ownership stuff.
when Go adds OOP & exceptions, I will go to Go - I don't like 
checking every function call with "if err"; also, I can live with 
type traits without classes in most cases.

probably I will go.
I like in Dlang: native code (with asm), GC, CTFE, OOP, functional 
style... (I am not a professional with Dlang, I am learning it)
but sometimes Dlang is a little baby who can piss/shit on you at 
moments when you really don't expect it and you don't have a 
diaper or any protection for those cases.


YOU CAN READ AGAIN

can somebody tell me please what was the reason that the lambda 
(blah) => { return value + 1; } actually returns another lambda 
that returns value+1? in what case can it be useful? why does 
Dlang give me a lambda of a lambda 

[Issue 15506] VS2015 crash while debugging

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15506

Rainer Schuetze  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WORKSFORME

--- Comment #4 from Rainer Schuetze  ---
Considering that this report predates the Concord debugger plugin, I'm closing
this and will wait for new issues to show up...

--


[Issue 18622] Outdated information regarding link definition when generated by Visual D DLL project.

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18622

Rainer Schuetze  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Rainer Schuetze  ---
Change finally released in
https://github.com/dlang/visuald/releases/tag/v0.47.0-beta1

--


[Issue 16619] Visual D: link dependency file does not exist - always prompted to rebuild

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16619

Rainer Schuetze  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |WONTFIX

--- Comment #5 from Rainer Schuetze  ---
Sorry, but still actively supporting VS2005 is such a rare case that it doesn't
seem worth the effort.

--


[Issue 18879] !is doesn't highlight correctly

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18879

Rainer Schuetze  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #12 from Rainer Schuetze  ---
fix released in https://github.com/dlang/visuald/releases/tag/v0.47.0-beta1

--


[Issue 18956] latest experimental build crashing a lot

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18956

Rainer Schuetze  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #4 from Rainer Schuetze  ---
Fix released in https://github.com/dlang/visuald/releases/tag/v0.47.0-beta1

--


Re: GitHub could be acquired by Microsoft

2018-06-10 Thread Kagamin via Digitalmars-d-announce

On Sunday, 10 June 2018 at 00:29:04 UTC, bauss wrote:
And then Microsoft acquires both and everyone moves to 
Bitbucket.


Endless cycle :)


Until people figure out decentralization. As I understand it, a 
Scuttlebutt server provides only a discovery service, and those 
have proven able to run at little cost. And as Tox shows, even 
discovery can be decentralized too.


Re: What's happening with the `in` storage class

2018-06-10 Thread Kagamin via Digitalmars-d

On Saturday, 9 June 2018 at 07:56:08 UTC, Jonathan M Davis wrote:
Yes, but the exact meaning of scope has never been clear. Plenty 
of folks have made assumptions about what it meant, but it was 
never actually defined. Now that it is defined with DIP 1000, 
it seems like pretty much everyone trying to use it has a hard 
time understanding it at first (at least beyond the really 
simple cases).


I don't think dip1000 changes the meaning of scope; it only 
provides language rules to check that scoping is honored. I 
suppose the problem (which wasn't quite formulated, by the way) is 
with code that is correct (honors scoping) but is not compatible 
with the dip1000 checks. Code that is not correct shouldn't 
compile, really.
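As a runnable illustration (my own sketch; compile with `-dip1000`, the flag name as of this writing, to get the escape checks): using a `scope` pointer is fine, and escaping it is what gets rejected:

```d
// Sketch: `scope` promises the pointer won't outlive the call.
@safe int readThrough(scope const(int)* p)
{
    // Reading through a scope pointer is allowed; storing p in a
    // global or returning it would be rejected under -dip1000.
    return *p;
}

void main()
{
    int x = 5;
    assert(readThrough(&x) == 5);
}
```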


[Issue 18966] extern(C++) constructor should match C++ semantics assigning vtable

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18966

Manu  changed:

   What|Removed |Added

   Keywords||C++, industry

--


[Issue 18966] New: extern(C++) constructor should match C++ semantics assigning vtable

2018-06-10 Thread d-bugmail--- via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=18966

  Issue ID: 18966
   Summary: extern(C++) constructor should match C++ semantics
assigning vtable
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: critical
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

test.cpp

class Base
{
public:
Base() { x = 10; }
virtual ~Base() {}
virtual void vf()
{
x = 100;
}
int x;
};
Base base;


test.d

extern(C++):
class Base
{
this();
~this();
void vf();
int x;
}
class Derived : Base
{
this()
{
super();
}
override void vf()
{
x = 200;
}
}
void test()
{
Derived d = new Derived;
d.vf();
assert(d.x == 200);
}


When deriving a D class from a C++ class, the vtable assignment semantics are
incompatible.

D assigns the vtable once prior to construction.
C++ assigns the vtable at the start of the ctor, immediately following the call
to super.ctor().

extern(C++) class will need to loosely follow the C++ semantic, that is:
Inside any extern(C++) constructor, any call to a super constructor must
immediately be followed by assignment of the class's vtable.

--


Your feedback for Exercism is needed

2018-06-10 Thread dayllenger via Digitalmars-d
Hello everyone. I found this topic on Exercism github page. They 
are launching a new mentoring site and need some help and 
feedback.

https://github.com/exercism/d/issues/66


Re: GitHub could be acquired by Microsoft

2018-06-10 Thread Joakim via Digitalmars-d-announce

On Monday, 4 June 2018 at 20:00:45 UTC, Maksim Fomin wrote:

On Monday, 4 June 2018 at 19:26:23 UTC, Joakim wrote:

On Monday, 4 June 2018 at 19:06:52 UTC, Maksim Fomin wrote:

Unlikely, you don't spend $7.5 billion on a company because 
you want to send a message that you're a good dev tools 
company, then neglect it.


You have no idea about how big corporations' management spends 
money.
As with Nokia and Skype - I don't know whether there was initially 
a plan to destroy the products or the management was just silly.


I suggest you look at their online slides linked from the 
Nadella blog post to see their stated plan, such as 
integrating github into VS Code more:


http://aka.ms/ms06042018

and likely vastly overpaid for an unprofitable company in the 
first place


:) this is exactly how such deals are done - paying $7.5 bl. 
for an unprofitable company.
Unfortunately, their books are unavailable because they are a 
private company, but the scarce information on the web suggests 
that in most years they have had losses.


Just as a rough estimate: to support a $7.5 bl. valuation, 
Microsoft must turn a company with a $30 ml. net loss into a 
business generating around $750 ml. for many years. There is no 
way to get that money from the market. Alternatively, the project 
can pay off if something is broken up and Microsoft's cash flows 
increase by $750 ml. This is more likely...


but they emphasize that they intend to keep github open and 
independent.


They can claim anything which suits their interests right now. 
Or, as an alternative, GitHub can be broken in such a way that 
their promises are kept on the surface. Business is badly 
compatible with open source by design.


I just finished reading this interesting article by a former 
Microsoft business guy, which makes the same point I did, that MS 
is unlikely to neglect github or otherwise force it in some 
direction to leverage it:


https://stratechery.com/2018/the-cost-of-developers/

You're right that MS has had many acquisitions go badly already, 
such as Nokia and Skype (though I'd argue both were long-term 
doomed before they were bought), but, as always, incompetence is 
a much more likely explanation than malice.