Re: Thinktank: CI's, compiler lists, and project automation

2018-03-05 Thread Nick Sabalausky (Abscissa) via Digitalmars-d

On 03/04/2018 03:42 AM, Jacob Carlborg wrote:


Aha, you mean like that. Seems a bit difficult to fix. Perhaps 
specifying a range of compiler versions would do?


Yea, there's really no two ways around it: Ultimately, each new compiler 
release will need to get added to .travis.yml (or any other CI's 
equivalent) either sooner or later. (Unless the project eschews anything 
beyond "latest versions ONLY", but that comes with its own downsides, 
especially for libraries.)


Luckily, this is definitely automatable, at least as far as 
auto-submitted PRs, if nothing else. The devil is just in the details.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Walter Bright via Digitalmars-d

On 3/4/2018 3:06 PM, Timon Gehr wrote:

On 04.03.2018 22:49, Walter Bright wrote:
Not necessarily. If the code contains an explicit assertion that the index is 
in bounds, then, according to the language specification, the bounds check 
may be removed with -release.


D, like all languages I know of, implicitly or explicitly generates code 
based on the "as if" rule.

...


Impossible. You wrote a Java compiler.


Even in Java, the compiler generates code that, from the user's point of view, 
behaves "as if" the code was actually what was specified. For a trivial example, 
replacing x*2 with x<<1. Not having this means no optimizations can be done.




All languages that use your "as if" rule are memory unsafe.
Zero languages that use the "as if" rule have any memory safe subset that 
includes assertions.

In D, assert is @safe, and it should remain @safe.



I find the reasoning in terms of "on"/"off" confusing anyway.
Does "off" mean "contract/assertion removed", or does it mean "failure is UB"?


"Off" means the check is removed. If the check does not hold, the program 
enters an invalid state, whether or not the check was actually done. An 
invalid state means subsequent execution is UB.


Why is potential memory corruption to be expected when using @safe language 
features with a flag to disable contract checks?


Because the checks provide extra information to the compiler that it can use to 
generate better code. If that extra information is not true, then the better 
code will be invalid.


Memory safety is only one class of errors in a program. If the program has 
entered a state that is not accounted for by the programmer, the rest of the 
program's execution will not be predictable.



This makes no sense. This is 
not useful behavior. There are convenient ways to support potentially unsound 
compilation hints that do not do this. Contracts and compilation hints should be 
orthogonal. Contracts should be potentially @safe, compilation hints should be 
@system always.


Note that _actual removal_ is the only use case of 'disabling contracts' that I 
care about, and I think many D programmers who use "off" will also have this 
behavior in mind. Yet this is not even an option.


I don't see much use for this behavior, unless you want to continue running the 
program after an assert failure, which I cannot recommend and the language is 
not designed to support. But you can always do something like:


   version (ignore_asserts) { } else { assert(...); }

which would optionally remove both the runtime check and any compiler use of the 
assert. Or you could use https://dlang.org/library/std/exception/enforce.html 
which has no influence on compiler semantics.
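In a full program, that pattern looks like the sketch below (ignore_asserts is a user-chosen version identifier, enabled on the command line with -version=ignore_asserts; the names are illustrative):

```d
import std.stdio;

void main()
{
    int x = 1;
    // With -version=ignore_asserts the assert is never compiled in:
    // no runtime check, and nothing handed to the optimizer either.
    version (ignore_asserts) { } else { assert(x > 0); }
    writeln("x = ", x);
}
```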




At the very least, the DIP should be up-front about this.
I'm still not even sure that Mathias Lang intended the UB semantics.


It being UB was my doing, not Mathias'. DIP1006 is not redefining the semantics 
of what assert does.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Walter Bright via Digitalmars-d
The idea behind removal of the runtime checks is as a performance optimization 
done on a debugged program. It's like turning on or off array bounds checking. 
Many leave asserts and array bounds checking on even in released code to ensure 
memory safety.


At a minimum, turning it off and on will illuminate just what the checks are 
costing you.


It's at the option of the programmer.


Re: Interesting article from JVM world - Conservative GC: Is It Really That Bad?

2018-03-05 Thread Dukc via Digitalmars-d

On Monday, 5 March 2018 at 05:43:36 UTC, Ali wrote:

i think he means this article
https://www.excelsiorjet.com/blog/articles/conservative-gc-is-it-really-that-bad/
https://news.ycombinator.com/item?id=16436574


Thank you.


Re: template auto value

2018-03-05 Thread Steven Schveighoffer via Digitalmars-d

On 3/2/18 8:49 PM, Jonathan Marler wrote:

On Saturday, 3 March 2018 at 00:20:14 UTC, H. S. Teoh wrote:
On Fri, Mar 02, 2018 at 11:51:08PM +, Jonathan Marler via 
Digitalmars-d wrote:

[...]


Not true:

template counterexample(alias T) {}

int x;
string s;
alias U = counterexample!x;    // OK
alias V = counterexample!1;    // OK
alias W = counterexample!"yup";    // OK
alias X = counterexample!s;    // OK

alias Z = counterexample!int;    // NG

The last one fails because a value is expected, not a type.

If you *really* want to accept both values and types, `...` comes to 
the rescue:


template rescue(T...) if (T.length == 1) {}

int x;
string s;
alias U = rescue!x;    // OK
alias V = rescue!1;    // OK
alias W = rescue!"yup";    // OK
alias X = rescue!s;    // OK
alias Z = rescue!int;    // OK!


T


Ah thank you...I guess I didn't realize that literals like 1 and "yup" 
were considered "symbols" when it comes to alias template parameters.


Well, they aren't. But template alias is a bit of a mess when it comes 
to the spec. It will accept anything except keywords AFAIK. Would be 
nice if it just worked like the variadic version.


The variadic version is what is usually needed (you see a lot of 
if(T.length == 1) in std.traits).


But, if you wanted to ensure values (which is more akin to your 
proposal), you can do:


template rescue(alias val) if(!is(val)) // not a type

-Steve
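A small sketch of how that constraint sorts values from types (the eponymous member is illustrative):

```d
// !is(val) rejects anything that is a valid type, so only
// value-like symbols and literals satisfy the constraint.
template rescue(alias val) if (!is(val))
{
    enum rescue = true;
}

void main()
{
    int x;
    static assert(rescue!x);       // OK: variable symbol
    static assert(rescue!1);       // OK: integer literal
    static assert(rescue!"yup");   // OK: string literal
    // rescue!int fails the constraint, so it does not compile:
    static assert(!__traits(compiles, rescue!int));
}
```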


Re: D for microservices

2018-03-05 Thread aberba via Digitalmars-d

On Sunday, 25 February 2018 at 22:12:38 UTC, Joakim wrote:

On Sunday, 25 February 2018 at 16:51:09 UTC, yawniek wrote:

great stuff, thank you! this will be very useful!

Q: what would be needed to build a single binary (a la golang) 
that works in a FROM SCRATCH docker container?


I don't know, presumably you're referring to the static linking 
support Jacob mentioned earlier in this thread.  I have not 
tried that.


On Sunday, 25 February 2018 at 17:48:34 UTC, aberba wrote:
I usually ship and compile code in Alpine itself. Once I have 
an ldc compiler with Alpine as base image,  I'm good to go. 
Some platforms like OpenShift will rebuild when a release is 
triggered in git master... Copying the binary requires some hacks.


OK, I will look at releasing a native ldc binary for Alpine 
with the upcoming 1.8 release.


LDC 1.8 is out!


Re: UDK : Comment sont levés les "Mappable keys"

2018-03-05 Thread Tony via Digitalmars-d

On Monday, 5 March 2018 at 02:12:07 UTC, Adam Levine wrote:


Bonjour à tous
Alors voilà, quelqu'un saurait-il comment sont levé les 
évènements des touches appuyées pour UDK?


Nous voudrions pouvoir utiliser un nouveau périphérique autre 
que la souris, le clavier ... : En l’occurrence la Kinect.
Nous avons développé notre API qui permet d'exploiter la kinect 
en c++.
Nous l'avons intégré dans UDK en unrealscript, cependant on 
voudrait pouvoir lever un évènement lorsque l'on détecte un 
geste.
On voudrait donc faire le binding de nos geste avec une 
commande UDK et réussir à lever nos évènements qui exécuterons 
les commandes prédéfinis.


Par exemple :
Bindings=(Name="BrasEnAvant",Command="StartFire | onrelease 
StopFire")

Comment lever l'évènement "BrasEnAvant" ?

Merci d'avance


Bing Translate seemed to do a better than normal job on this:

Hi all
So, would anyone know how the events of the keys pressed for UDK 
are lifted?


We would like to be able to use a new device other than the 
mouse, the keyboard...: In this case the Kinect.

We have developed our API that allows the use of Kinect in C++.
We have integrated it into UDK in UnrealScript, however we would 
like to be able to raise an event when we detect a gesture.
So we would like to do the binding of our gestures with a UDK 
command and succeed in lifting our events that will execute the 
predefined commands.


Like what:
Bindings=(Name="BrasEnAvant", Command="StartFire | onrelease StopFire")

How to raise the event "BrasEnAvant"?

Thanks in advance



Re: template auto value

2018-03-05 Thread Jonathan Marler via Digitalmars-d
On Monday, 5 March 2018 at 13:03:50 UTC, Steven Schveighoffer 
wrote:

On 3/2/18 8:49 PM, Jonathan Marler wrote:

On Saturday, 3 March 2018 at 00:20:14 UTC, H. S. Teoh wrote:
On Fri, Mar 02, 2018 at 11:51:08PM +, Jonathan Marler via 
Digitalmars-d wrote:

[...]


Not true:

template counterexample(alias T) {}

int x;
string s;
alias U = counterexample!x;    // OK
alias V = counterexample!1;    // OK
alias W = counterexample!"yup";    // OK
alias X = counterexample!s;    // OK

alias Z = counterexample!int;    // NG

The last one fails because a value is expected, not a type.

If you *really* want to accept both values and types, `...` 
comes to the rescue:


template rescue(T...) if (T.length == 1) {}

int x;
string s;
alias U = rescue!x;    // OK
alias V = rescue!1;    // OK
alias W = rescue!"yup";    // OK
alias X = rescue!s;    // OK
alias Z = rescue!int;    // OK!


T


Ah thank you...I guess I didn't realize that literals like 1 
and "yup" were considered "symbols" when it comes to alias 
template parameters.


Well, they aren't. But template alias is a bit of a mess when 
it comes to the spec. It will accept anything except keywords 
AFAIK. Would be nice if it just worked like the variadic 
version.


The variadic version is what is usually needed (you see a lot 
of if(T.length == 1) in std.traits).


But, if you wanted to ensure values (which is more akin to your 
proposal), you can do:


template rescue(alias val) if(!is(val)) // not a type

-Steve


Thanks for the tip, it looks like the spec does mention 
"literals" but "alias" parameters are even more versatile than 
that 
(https://dlang.org/spec/template.html#TemplateAliasParameter).  
For example you can pass a function call.  I've created an issue 
to make sure we update the spec to reflect the true capabilities:


https://issues.dlang.org/show_bug.cgi?id=18558


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 11:25, Walter Bright wrote:

On 3/4/2018 3:06 PM, Timon Gehr wrote:

On 04.03.2018 22:49, Walter Bright wrote:
Not necessarily. If the code contains an explicit assertion that the 
index is in bounds, then, according to the language specification, 
the bounds check may be removed with -release.


D, like all languages I know of, implicitly or explicitly generates code 
based on the "as if" rule.

...


Impossible. You wrote a Java compiler.


Even in Java, the compiler generates code that, from the user's point of 
view, behaves "as if" the code was actually what was specified. For a 
trivial example, replacing x*2 with x<<1. Not having this means no 
optimizations can be done.

...


I guess I misunderstood what you meant when you said "as if". I thought 
you meant that for all languages you know, when assertions are disabled, 
the compiler behaves "as if" the check was actually there and was known 
to succeed, even though the check is actually not there and may have 
failed if it was.





All languages that use your "as if" rule are memory unsafe.
Zero languages that use the "as if" rule have any memory safe subset 
that includes assertions.

In D, assert is @safe, and it should remain @safe.



I find the reasoning in terms of "on"/"off" confusing anyway.
Does "off" mean "contract/assertion removed", or does it mean 
"failure is UB"?


"Off" means the check is removed. If the check does not hold, the 
program enters an invalid state, whether or not the check was 
actually done. An invalid state means subsequent execution is UB.


Why is potential memory corruption to be expected when using @safe 
language features with a flag to disable contract checks?


Because the checks provide extra information to the compiler that it can 
use to generate better code. If that extra information is not true, then 
the better code will be invalid.

...


My question is not why it is the case technically, I was asking for a 
_rationale_ for this apparently silly behavior. I.e., why is this a good 
idea from the point of view of language design?


Again: assert is @safe. Compiler hints are @system. Why should assert 
give compiler hints?


Memory safety is only one class of errors in a program. If the program 
has entered a state that is not accounted for by the programmer, the 
rest of the program's execution will not be predictable.

...


But the whole point of having memory safety is to not have UB when the 
programmer screwed up. Behavior not foreseen by the programmer (a bug) 
is not the same as behavior unconstrained by the language specification 
(UB).




This makes no sense. This is not useful behavior. There are convenient 
ways to support potentially unsound compilation hints that do not do 
this. Contracts and compilation hints should be orthogonal. Contracts 
should be potentially @safe, compilation hints should be @system always.


Note that _actual removal_ is the only use case of 'disabling 
contracts' that I care about, and I think many D programmers who use 
"off" will also have this behavior in mind. Yet this is not even an 
option.


I don't see much use for this behavior, unless you want to continue 
running the program after an assert failure, which I cannot recommend 
and the language is not designed to support.


'in'-contracts catch AssertError when being composed. How can the 
language not be designed to support that?


Except for this case, the assertion is not _supposed_ to fail for my use 
cases, and I don't really need the language to explicitly "support that 
use case". The situation is the following:


- I usually don't want UB in programs I am working on. I want the 
runtime behavior of the programs to be determined by the source code, 
such that every behavior observed in the wild (intended or unintended) 
can be traced back to the source code (potentially in a 
non-deterministic way, e.g. void initialization of an integer constant). 
This should always be the case, even if I or someone else on my team 
made a mistake. The @safe D subset is supposed to give this guarantee. 
What good is @safe if it does not guarantee absence of buffer overrun 
attacks?


- Checking assertions can be too costly, so it should be possible to 
disable the check.


- Using existing assertions as compiler hints is not necessary. (Without 
having checked it, I'm sure that LDC/GDC have a more suitable intrinsic 
for this already.)


As far as I can discern, forcing disabled asserts to give compiler hints 
has no upsides.


But you can always do 
something like:


    version (ignore_asserts) { } else { assert(...); }
...


I know. Actually version(assert) assert(...); also works. However, this 
is too verbose, especially in contracts. I'd like a solution that does 
not require me to change the source code. Ideally, I just want the Java 
behavior (with reversed defaults).


which would optionally remove both the runtime check and any compiler 
use of the assert. Or you could use 
https://

Re: help me with dpldocs - how to filter 3rd party libs

2018-03-05 Thread Adam D. Ruppe via Digitalmars-d

On Monday, 5 March 2018 at 01:02:52 UTC, Norm wrote:
Might not help much though, I imagine these third-party sources 
are built as source only libraries, so they probably appear as 
source files anyway.


Yeah, in the case I'm looking at now, they aren't listed as dub 
packages at all, just files included in the src folder (which btw 
is how I prefer people to use my libraries too)


I think there is no solid solution for existing things, so I'll 
have to invent a new config thing. I think I'll do something like


included modules for documentation: "something.*"
excluded modules for documentation: "something.internal.*"

And use that to do processing. Still need to decide on syntax, 
filename, etc., but it should be doable.


By default, I will probably exclude the thirdparty naming 
conventions and internal (actually it does internal already).


I might also have it exclude win32, deimos, arsd, and a few other 
names I know are commonly used this way, unless specifically 
overridden.


Re: D for microservices

2018-03-05 Thread Joakim via Digitalmars-d

On Monday, 5 March 2018 at 14:34:44 UTC, aberba wrote:

On Sunday, 25 February 2018 at 22:12:38 UTC, Joakim wrote:

On Sunday, 25 February 2018 at 16:51:09 UTC, yawniek wrote:

great stuff, thank you! this will be very useful!

Q: what would be needed to build a single binary (a la 
golang) that works in a FROM SCRATCH docker container?


I don't know, presumably you're referring to the static 
linking support Jacob mentioned earlier in this thread.  I 
have not tried that.


On Sunday, 25 February 2018 at 17:48:34 UTC, aberba wrote:
I usually ship and compile code in Alpine itself. Once I have 
an ldc compiler with Alpine as base image,  I'm good to go. 
Some platforms like OpenShift will rebuild when a release is 
triggered in git master... Copying the binary requires some hacks.


OK, I will look at releasing a native ldc binary for Alpine 
with the upcoming 1.8 release.


LDC 1.8 is out!


The Alpine build is up, let me know if you have any problems.  
Note the changelog entry that says you'll need to install llvm 
and maybe other packages from the Alpine package manager first.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Joseph Rushton Wakeling via Digitalmars-d

On Saturday, 3 March 2018 at 16:33:00 UTC, Martin Nowak wrote:
Doesn't really work that way, we can disable assertions, in 
contracts, out contracts, and invariants. But not assertions in 
some contexts while leaving them enabled in other contexts. At 
least not without modifying all related codegen and introducing 
context queries (e.g. think mixin templates).


That's a shame, but presumably the fine-grainedness could be 
extended at some point.


Question: what would -release=assert do to unittests?  Would it 
not touch them at all?  Or would it disable all asserts including 
in unittests?


FWIW -release=assert,in,out,invariant fits our needs well 
enough.
Just the use case where someone wants to disable asserts in 
functions but still use contracts requires a 
replacement for assert in contracts and invariants.


Yea, there are obviously workarounds.  I think the main concern 
from my side is to not have hierarchical assumptions about what 
gets turned on or off, and AFAICS 
-release=assert,in,out,invariant pretty much fits that.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 11:30, Walter Bright wrote:
The idea behind removal of the runtime checks is as a performance 
optimization done on a debugged program.


Optimizing performance is fine, but why pessimize safety? The hints will 
usually not make a significant difference in performance anyway. I guess 
it is fine to have a compiler option that is all speed no safety, but it 
should not be the only option.



It's like turning on or off array bounds checking.


It is not.

void main()@safe{
 int[] x=[];
 writeln(x[0]); // range violation even with -release
// defined behavior even with -boundscheck=off (!)
}

If I now add an assertion, I suddenly get UB:

void main()@safe{
    int[] x=[];
    assert(0 < x.length); // invalid

    writeln(x[0]); // UB with -release
}

I did not make the code any more wrong by adding the assertion.
Why should I get more UB?


Many leave asserts and array bounds checking on 
even in released code to ensure memory safety.

...


Maybe the requirements change and it is now too costly to leave 
contracts on in release mode, or the number of contracts in the code 
base slowly accumulates until we reach a point where the total cost is 
too large, or we replace a library, and the new version has costly 
contracts, etc. Now we have the following options:


- Leave contracts in -- fail performance requirements.

- Remove contracts -- fail safety requirements.

- Track down all 'assert's, even those in external libraries, and 
replace them by a custom home-cooked solution that is incompatible with 
everyone else's -- fail maintainability requirements.


To me this situation is ridiculous.

At a minimum, turning it off and on will illuminate just what the checks 
are costing you.

...


Well, no. If the bug is elusive enough to not have shown up in debug 
mode, it probably won't be seen early during -release testing, and even 
if it does, the UB may mask it. (Note that when the program becomes 
faster, the likelihood of timing-dependent bugs showing up may change.)


I.e., if something goes wrong, it is likely that you won't see the 
safety costs until it is too late.



It's at the option of the programmer.


It is not, but I want it to be. That's all I want. [1]

I'm just saying there should be the following option:

- Remove contracts -- sufficient performance and retain memory safety.

FWIW, this is what all contract systems that I'm aware of do, except D, 
and maybe C asserts in certain implementations (if you want to call that 
contracts).



[1] Well, maybe add a @system "__assume" intrinsic.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Iain Buclaw via Digitalmars-d

On Monday, 5 March 2018 at 15:48:12 UTC, Timon Gehr wrote:


- Using existing assertions as compiler hints is not necessary. 
(Without having checked it, I'm sure that LDC/GDC have a more 
suitable intrinsic for this already.)


As far as I can discern, forcing disabled asserts to give 
compiler hints has no upsides.





In the simple cases, or in anything that looks like a 
unittest/testsuite, probably not.


There are likely going to be more aggressive optimizations 
however if CFA can see that a variable will never be outside a 
given range, i.e:


---
int[5] arr;

if (len < 0 || len >= 5)
{
unreachable();  // in non-release code, this would throw a 
RangeError.

}

return arr[len];
---

From this, we aggressively assume that len is a valid index of 
arr.  Something that happens in optimized non-release builds, but 
in release builds we must accommodate for the possibility of a 
range error.


Re: help me with dpldocs - how to filter 3rd party libs

2018-03-05 Thread bauss via Digitalmars-d

On Monday, 5 March 2018 at 17:26:21 UTC, Adam D. Ruppe wrote:

On Monday, 5 March 2018 at 01:02:52 UTC, Norm wrote:
Might not help much though, I imagine these third-party 
sources are built as source only libraries, so they probably 
appear as source files anyway.


Yeah, in the case I'm looking at now, they aren't listed as dub 
packages at all, just files included in the src folder (which 
btw is how I prefer people to use my libraries too)


I think there is no solid solution for existing things, so I'll 
have to invent a new config thing. I think I'll do something 
like


included modules for documentation: "something.*"
excluded modules for documentation: "something.internal.*"

And use that to do processing. Still need to decide on syntax, 
filename, etc., but it should be doable.


By default, I will probably exclude the thirdparty naming 
conventions and internal (actually it does internal already).


I might also have it exclude win32, deimos, arsd, and a few 
other names I know are commonly used this way, unless 
specifically overridden.


Honestly, I'd just open an issue for it and hopefully some 
annotation will be added.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Iain Buclaw via Digitalmars-d
On Monday, 5 March 2018 at 18:44:54 UTC, Joseph Rushton Wakeling 
wrote:

On Saturday, 3 March 2018 at 16:33:00 UTC, Martin Nowak wrote:
Doesn't really work that way, we can disable assertions, in 
contracts, out contracts, and invariants. But not assertions 
in some contexts while leaving them enabled in other contexts. 
At least not without modifying all related codegen and 
introducing context queries (e.g. think mixin templates).


That's a shame, but presumably the fine-grainedness could be 
extended at some point.


Question: what would -release=assert do to unittests?  Would it 
not touch them at all?  Or would it disable all asserts 
including in unittests?




From memory, it would turn off asserts even in unittests.  You 
could raise a bug against gdc for that as it's a reasonable 
suggestion.



FWIW -release=assert,in,out,invariant fits our needs well 
enough.
Just the use case where someone wants to disable asserts in 
functions but still use contracts requires a 
replacement for assert in contracts and invariants.


Yea, there are obviously workarounds.  I think the main concern 
from my side is to not have hierarchical assumptions about what 
gets turned on or off, and AFAICS 
-release=assert,in,out,invariant pretty much fits that.


N.B: GDC has -f[no]-release, -f[no-]assert, -f[no-]invariant, 
-f[no-]preconditions, and -f[no-]postconditions  (-f[no-]in and 
-f[no-]out were removed as they are a little too vague).  And it 
doesn't matter which order you pass them in, if an option is 
explicitly set, then they do not get turned on/off by -frelease.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 20:41, Iain Buclaw wrote:

On Monday, 5 March 2018 at 15:48:12 UTC, Timon Gehr wrote:


- Using existing assertions as compiler hints is not necessary. 
(Without having checked it, I'm sure that LDC/GDC have a more suitable 
intrinsic for this already.)


As far as I can discern, forcing disabled asserts to give compiler 
hints has no upsides.





In the simple cases, or in anything that looks like a 
unittest/testsuite, probably not.


There are likely going to be more aggressive optimizations however if 
CFA can see that a variable will never be outside a given range, i.e:

...


(Note that by "forcing", I mean withholding other options from the user. 
I'm not saying that using information from asserts can never be useful, 
just that it can just as well be harmful, and therefore it is unwise to 
not allow disabling them. I was saying that there are no upsides to not 
having a flag that actually removes assertions.)



---
int[5] arr;

if (len < 0 || len >= 5)
{
     unreachable();  // in non-release code, this would throw a RangeError.
}

return arr[len];
---

 From this, we aggressively assume that len is a valid index of arr.  
Something that happens in optimized non-release builds, but in release 
builds we must accommodate for the possibility of a range error.


I think this particular case is a bit less questionable than doing the 
same for general assertions (for instance, in @safe code, -release will 
not actually remove the bounds check unless there is some relevant 
assertion somewhere). In any case, I don't argue strongly against a flag 
that turns all assertions into compiler hints, I just think there should 
also be a flag that disables them safely. Also, maybe -release should 
commit to either disregarding @safe completely or respecting it completely.


Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it

2018-03-05 Thread Luís Marques via Digitalmars-d

On Tuesday, 27 February 2018 at 05:28:41 UTC, Timothee Cour wrote:

https://github.com/llvm-mirror/lldb/pull/3
+
https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d


Ok, I started looking into this now. I hadn't realized that you 
were opening an external library. I'm not sure the LLDB 
developers are going to want to merge something like that, have 
you asked? Would you consider adding C++ code for the demangling 
itself instead?


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Walter Bright via Digitalmars-d

On 3/5/2018 7:48 AM, Timon Gehr wrote:
Again: assert is @safe. Compiler hints are @system. Why should assert give 
compiler hints?


Asserts give expressions that must be true. Why not take advantage of them? See 
Spec# which based an entire language around that notion:


 https://en.wikipedia.org/wiki/Spec_Sharp

Some possible optimizations based on this are:

1. elimination of array bounds checks
2. elimination of null pointer checks
3. by knowing a variable can take on a limited range of values, a cheaper data 
type can be substituted

4. elimination of checks for 'default' switch values
5. elimination of overflow checks

dmd's optimizer currently does not extract any information from asserts. But 
why shut the door on that possibility?
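As a sketch of the kind of code such an optimizer could exploit (hypothetical, since as noted dmd does not currently do this; the function name is made up):

```d
@safe int pick(int[] a, size_t i)
{
    // An optimizer that trusts asserts could use this condition to
    // elide the bounds check on a[i] below (optimization 1 above).
    assert(i < a.length);
    return a[i];
}
```

With checks enabled, the assert itself catches a bad index; with checks disabled and the condition treated as a hint, a wrong `i` becomes UB, which is exactly the trade-off under discussion.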



But the whole point of having memory safety is to not have UB when the 
programmer screwed up. Behavior not foreseen by the programmer (a bug) is not 
the same as behavior unconstrained by the language specification (UB).


It's the programmer's option to leave those runtime checks in if he wants to.


'in'-contracts catch AssertError when being composed. How can the language not 
be designed to support that?


That is indeed an issue. It's been proposed that in-contracts throw a different 
exception, say "ContractException" so that it is not UB when they fail. There's 
a bugzilla ER on this. (It's analogous to asserts in unittests not having UB 
after they fail.)



- I usually don't want UB in programs I am working on. I want the runtime 
behavior of the programs to be determined by the source code, such that every 
behavior observed in the wild (intended or unintended) can be traced back to the 
source code (potentially in a non-deterministic way, e.g. void initialization of 
an integer constant). This should always be the case, even if I or someone else 
on my team made a mistake. The @safe D subset is supposed to give this 
guarantee. What good is @safe if it does not guarantee absence of buffer overrun 
attacks?


It guarantees it at the option of the programmer via a command line switch.


- Using existing assertions as compiler hints is not necessary. (Without having 
checked it, I'm sure that LDC/GDC have a more suitable intrinsic for this already.)


As far as I can discern, forcing disabled asserts to give compiler hints has no 
upsides.


I suspect that if:

compiler_hint(i < 10);

were added, there would be nothing but confusion as to its correct usage vs 
assert vs enforce. There's already enough confusion about the latter two. In 
fact, I can pretty much guarantee it will be rarely used correctly.



I know. Actually version(assert) assert(...); also works. However, this is too 
verbose, especially in contracts.


You can wrap it in a template.
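One possible wrapper along those lines (check is a made-up name, building on the version(assert) idiom mentioned above):

```d
// Compiles to an empty body when asserts are disabled (e.g. -release),
// since version(assert) is then not set: the condition is neither
// checked at runtime nor handed to the optimizer as a hint.
void check(lazy bool cond, string msg = "check failed") @safe
{
    version (assert)
        assert(cond, msg);
}

void main() @safe
{
    int[] x = [1, 2, 3];
    check(x.length > 0); // vanishes entirely when asserts are off
}
```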


I'd like a solution that does not require me 
to change the source code. Ideally, I just want the Java behavior (with reversed 
defaults).


But you'll have to change the code to compiler_hint().



(enforce is _completely unrelated_ to the current discussion.)


It does just what you ask (for the regular assert case).


It being UB was my doing, not Mathias'. DIP1006 is not redefining the 
semantics of what assert does.
This is not really about assert semantics, this is about the semantics of 
"disabling the check".


It is very much about the semantics of assert.



There was no "-check=off" flag before.


Yes there was, it's the "release" flag.


The DIP uses terminology such as "disable assertions" as opposed to "disable 
assertion checks (but introduce compiler hints)".


Yes, the language could be more precise, but I couldn't blame Mathias for that. 
I also disagree with the word "hint", because it implies things like "this 
branch is more likely to be taken" to guide code generation decisions, rather 
than "assume X is absolutely always incontrovertibly true and you can bet the 
code on it".




Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Walter Bright via Digitalmars-d

On 3/5/2018 11:34 AM, Timon Gehr wrote:

On 05.03.2018 11:30, Walter Bright wrote:

The hints will usually not make a significant difference in performance anyway.


Reasonable people will disagree on what is significant or not.



It's like turning on or off array bounds checking.


It is not.

void main()@safe{
  int[] x=[];
  writeln(x[0]); // range violation even with -release
     // defined behavior even with -boundscheck=off (!)


It is not defined behavior with -boundscheck=off.


}

If I now add an assertion, I suddenly get UB:

void main()@safe{
     int[] x=[];
     assert(0 < x.length);
     writeln(x[0]);
}

Because you put in an assert that did not hold, and disabled the check.


Now we have the 
following options:


- Leave contracts in -- fail performance requirements.

- Remove contracts -- fail safety requirements.

- Track down all 'assert's, even those in external libraries, and replace them 
by a custom home-cooked solution that is incompatible with everyone else's -- 
fail maintainability requirements.


To me this situation is ridiculous.


It's completely under the control of the programmer. I know you disagree with 
that notion. You can even create your own `myassert` to produce your desired 
semantics.



FWIW, this is what all contract systems that I'm aware of do, except D, and 
maybe C asserts in certain implementations (if you want to call that contracts).


D is better (!).

(C's asserts are not part of the language, so impart no semantics to the 
compiler.)



Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread ag0aep6g via Digitalmars-d

On 03/05/2018 10:11 PM, Walter Bright wrote:

On 3/5/2018 11:34 AM, Timon Gehr wrote:

[...]

  int[] x=[];
  writeln(x[0]); // range violation even with -release
 // defined behavior even with -boundscheck=off (!)


It is not defined behavior with -boundscheck=off.


Dereferencing null is not defined with -boundscheck=off?


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread ag0aep6g via Digitalmars-d

On 03/05/2018 09:55 PM, Walter Bright wrote:

On 3/5/2018 7:48 AM, Timon Gehr wrote:
Again: assert is @safe. Compiler hints are @system. Why should assert 
give compiler hints?


Asserts give expressions that must be true. Why not take advantage of 
them?


Because it's exactly what @safe is not supposed to do. You're trusting 
the programmer to get their asserts right. Trusting the programmer to 
get it right is @system.


[...]

It's the programmer's option to leave those runtime checks in if he 
wants to.


As far as I understand, Timon only asks for a third option: to simply 
compile the code as if the asserts weren't there, without assuming that 
they would pass.


That way you get a speedup from the omitted asserts, but you don't get 
UB from a mistaken assert. This is not an unreasonable thing to want, is it?


You say that DMD does not currently use assert information, so -release 
currently does this.


[...]

There was no "-check=off" flag before.


Yes there was, it's the "release" flag.


But the controversial aspect is not implemented. And it will be very 
surprising if you ever do implement it.


I'm actually pretty shocked that -release is described that way. It 
makes a point of keeping bounds checks in @safe code. The reason is that 
it would be unsafe to remove them. What's the point of that when safety 
is compromised anyway by assuming that asserts would pass?


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 21:55, Walter Bright wrote:

On 3/5/2018 7:48 AM, Timon Gehr wrote:
Again: assert is @safe. Compiler hints are @system. Why should assert 
give compiler hints?


Asserts give expressions that must be true.


"Trust the programmer" does not always scale.


Why not take advantage of them?


For some use cases it might be fine, but not for others, because you 
can't know whether the program and the assertions are really consistent.


Basically, I think the flags should be:

-check-{assert,invariant,precondition,postcondition,...}={on,off,assume}

E.g.:

$ dmd -check-assert=on test.d # throw on assertion failure
$ dmd -check-assert=off test.d    # ignore assertions
$ dmd -check-assert=assume test.d # assertions are assumptions for code 
generation


Then the spec says that "assume" is potentially dangerous and can break 
@safe-ty guarantees, like -boundscheck=off.



See Spec# which based an entire language around that notion:

  https://en.wikipedia.org/wiki/Spec_Sharp
...


Spec# is the opposite of what you claim. It verifies statically that the 
conditions actually hold. Also, it is type safe. (I.e. no UB.)



Some possible optimizations based on this are:

1. elimination of array bounds checks
2. elimination of null pointer checks
3. by knowing a variable can take on a limited range of values, a 
cheaper data type can be substituted

4. elimination of checks for 'default' switch values
5. elimination of overflow checks

dmd's optimizer currently does not extract any information from 
assert's. But why shut the door on that possibility?

...


We should not do that, and it is not what I am arguing for. Sorry if 
that did not come across clearly.




But the whole point of having memory safety is to not have UB when the 
programmer screwed up. Behavior not foreseen by the programmer (a bug) 
is not the same as behavior unconstrained by the language 
specification (UB).


It's the programmer's option to leave those runtime checks in if he 
wants to.

...


My point is that either leaving them in or turning failures into UB are 
too few options. Also, @safe is a bit of a joke if there is no way to 
_disable contracts_ without nullifying the guarantees it's supposed to give.




'in'-contracts catch AssertError when being composed. How can the 
language not be designed to support that?


That is indeed an issue. It's been proposed that in-contracts throw a 
different exception, say "ContractException" so that it is not UB when 
they fail. There's a bugzilla ER on this. (It's analogous to asserts in 
unittests not having UB after they fail.)

...


This is ugly, but I don't think there is a better solution.
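The composition being discussed can be shown in a small sketch: overridden preconditions are OR'ed, which the runtime implements by catching the failure of the base contract and trying the derived one.

```d
class Base
{
    void f(int x)
    in { assert(x > 0); }
    do { }
}

class Derived : Base
{
    // The runtime evaluates Base's in-contract first, catches the
    // AssertError if it fails, and then tries Derived's contract.
    // This built-in catching is why a failed in-contract cannot
    // simply be undefined behavior.
    override void f(int x)
    in { assert(x < 0); }
    do { }
}

void main()
{
    auto d = new Derived;
    d.f(5);   // satisfies Base's precondition
    d.f(-5);  // fails Base's, but satisfies Derived's -- no error
}
```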



- I usually don't want UB in programs I am working on. I want the 
runtime behavior of the programs to be determined by the source code, 
such that every behavior observed in the wild (intended or unintended) 
can be traced back to the source code (potentially in a 
non-deterministic way, e.g. void initialization of an integer 
constant). This should be the case always, even if me or someone else 
on my team made a mistake. The @safe D subset is supposed to give this 
guarantee. What good is @safe if it does not guarantee absence of 
buffer overrun attacks?


It guarantees it at the option of the programmer via a command line switch.
...


You mean, leave in checks?



- Using existing assertions as compiler hints is not necessary. 
(Without having checked it, I'm sure that LDC/GDC have a more suitable 
intrinsic for this already.)


As far as I can discern, forcing disabled asserts to give compiler 
hints has no upsides.


I suspect that if:

     compiler_hint(i < 10);

were added, there would be nothing but confusion as to its correct usage 
vs assert vs enforce. There's already enough confusion about the latter 
two.


I have never understood why. The use cases of assert and enforce are 
disjoint.



In fact, I can pretty much guarantee it will be rarely used correctly.
...


Me too, but that's mostly because it will be rarely used.



I know. Actually version(assert) assert(...); also works. However, 
this is too verbose, especially in contracts.


You can wrap it in a template.
...


That won't work for in contracts if they start catching 
ContractException instead of AssertError. Also, I think we'd actually 
like to _shorten_ the contract syntax (there is another DIP on this).


For other uses, a function suffices, but I ideally want to keep using 
standard 'assert'. Everybody already knows what 'assert' means.




I'd like a solution that does not require me to change the source 
code. Ideally, I just want the Java behavior (with reversed defaults).


But you'll have to change the code to compiler_hint().
...


I don't, because I don't want that behavior. Others who want that 
behavior also should not have to. This should be a compilation switch.





(enforce is _completely unrelated_ to the current discussion.)


It does just what you ask (for the regular assert case).
...


No

Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 22:11, Walter Bright wrote:

On 3/5/2018 11:34 AM, Timon Gehr wrote:

On 05.03.2018 11:30, Walter Bright wrote:
The hints will usually not make a significant difference in 
performance anyway.


Reasonable people will disagree on what is significant or not.
...


My point exactly! Hence, compiler flag.


...


I did not make the code any more wrong by adding the assertion.
Why should I get more UB?


Because you put in an assert that did not hold, and disabled the check.
...


(Maybe let's assume it was not me who did it, to stop the whole silly 
"you deserve what you got because you made a mistake" notion.)


Again, my question is not about the _mechanics_ of the status quo. I 
know it very well. It's the rationale that matters.





Now we have the following options:

- Leave contracts in -- fail performance requirements.

- Remove contracts -- fail safety requirements.

- Track down all 'assert's, even those in external libraries, and 
replace them by a custom home-cooked solution that is incompatible 
with everyone else's -- fail maintainability requirements.


To me this situation is ridiculous.


It's completely under the control of the programmer. I know you disagree 
with that notion.


I don't. I can use a manual patch of the compiler that has the 
additionally required flags and replicate the official packaging effort 
and make everyone who wants to compile my programs use that version. I 
just don't want to, as it seems silly.


It would be a lot better if the standard DMD compiler had the flags. Do 
you disagree that there should be an additional option to ignore 
contracts completely?


You can even create your own `myassert` to produce 
your desired semantics.

...


That's the third option above. It's not a practical solution. Putting 
the flag into a compiler fork is trivial by comparison.




FWIW, this is what all contract systems that I'm aware of do, except 
D, and maybe C asserts in certain implementations (if you want to call 
that contracts).


D is better (!).

(C's asserts are not part of the language, so impart no semantics to the 
compiler.)




(That's why I said "in certain implementations".)


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread John Colvin via Digitalmars-d

On Monday, 5 March 2018 at 10:30:12 UTC, Walter Bright wrote:
The idea behind removal of the runtime checks is as a 
performance optimization done on a debugged program. It's like 
turning on or off array bounds checking. Many leave asserts and 
array bounds checking on even in released code to ensure memory 
safety.


At a minimum, turning it off and on will illuminate just what 
the checks are costing you.


It's at the option of the programmer.


void safeCode1(int a, ref int[2] b) @safe
{
assert(a < 2);
b[a] = 0;
}

So, if I compile this with `-release -O`, the compiler is free to 
remove the bounds-check, which will cause a buffer overrun if `a 
> 1`. Ok.


void safeCode2(int a, ref int[2] b) @safe
{
b[a] = 0;
}

And here the compiler is *not* free to remove the bounds check.

This just feels bad. Adding extra failsafes for my debug program 
shouldn't make my release program less safe.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 05.03.2018 22:24, ag0aep6g wrote:

On 03/05/2018 10:11 PM, Walter Bright wrote:

On 3/5/2018 11:34 AM, Timon Gehr wrote:

[...]

  int[] x=[];
  writeln(x[0]); // range violation even with -release
 // defined behavior even with -boundscheck=off (!)


It is not defined behavior with -boundscheck=off.


Dereferencing null is not defined with -boundscheck=off?


This was my bad. It's not dereferencing null. The compiler is free to 
assume 0 < x.length, so the rest of the function is dead code.


Anyway, a similar point can be made by considering contracts that say 
that specific values are non-null. They will turn null values into UB 
even though without them, null dereferences would have been defined to 
crash.


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread ag0aep6g via Digitalmars-d

On 03/05/2018 11:57 PM, Timon Gehr wrote:

On 05.03.2018 22:24, ag0aep6g wrote:

On 03/05/2018 10:11 PM, Walter Bright wrote:

[...]

It is not defined behavior with -boundscheck=off.


Dereferencing null is not defined with -boundscheck=off?


This was my bad. It's not dereferencing null. The compiler is free to 
assume 0 < x.length, so the rest of the function is dead code.


How is it free to assume that?

This was the full snippet (before I mutilated it in my quote):


void main()@safe{
 int[] x=[];
 writeln(x[0]); // range violation even with -release
// defined behavior even with -boundscheck=off (!)
}


There is no `assert(0 < x.length)` or anything, because there are no 
contracts, no asserts, and main is @safe. 
-boundscheck=off just makes it so that the length isn't checked before 
x.ptr is dereferenced. x.ptr is null, so the code is defined to 
dereference null, no?


If -boundscheck=off somehow does introduce UB here, we have the weird 
situation that `x.ptr[0]` is safer in this scenario than `x[0]`. 
Because surely `x.ptr[0]` is a null dereference that's not 
affected by -boundscheck=off, right?


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Timon Gehr via Digitalmars-d

On 06.03.2018 00:52, ag0aep6g wrote:

On 03/05/2018 11:57 PM, Timon Gehr wrote:

On 05.03.2018 22:24, ag0aep6g wrote:

On 03/05/2018 10:11 PM, Walter Bright wrote:

[...]

It is not defined behavior with -boundscheck=off.


Dereferencing null is not defined with -boundscheck=off?


This was my bad. It's not dereferencing null. The compiler is free to 
assume 0 < x.length, so the rest of the function is dead code.


How is it free to assume that?
...


By Walter's definition. -boundscheck=off makes the compiler assume that 
all array accesses are within bounds. ("off" is a misleading term.)



This was the full snippet (before I mutilated it in my quote):


void main()@safe{
  int[] x=[];
  writeln(x[0]); // range violation even with -release
     // defined behavior even with -boundscheck=off (!)
}


There is no `assert(0 < x.length)` or anything, because there are no 
contracts, no asserts, and main is @safe. 
-boundscheck=off just makes it so that the length isn't checked before 
x.ptr is dereferenced.


It's not checked, but the compiler may still assume that it has actually 
been checked. The story is similar to asserts.


x.ptr is null, so the code is defined to 
dereference null, no?


If -boundscheck=off somehow does introduce UB here, we have the weird 
situation that `x.ptr[0]` is safer in this scenario than `x[0]`. 
Because surely `x.ptr[0]` is a null dereference that's not 
affected by -boundscheck=off, right?


Yes, I think that's a good point (though it hinges on the assumption 
that x.ptr[i] is equivalent to *(x.ptr+i), which I'm not sure the 
specification states explicitly).
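The distinction can be sketched in code, under the reading of -boundscheck=off as an assumption that was discussed above (the comments describe hypothetical build modes, not what this program does when run normally):

```d
void main() @system
{
    int[] x = [];
    assert(x.ptr is null && x.length == 0);  // an empty literal slice

    // x[0]:     a bounds-checked access. With -boundscheck=off the
    //           compiler is entitled to assume 0 < x.length, so a
    //           failing access becomes undefined behavior rather than
    //           a null dereference.
    // x.ptr[0]: raw pointer indexing, never bounds-checked, so
    //           -boundscheck=off changes nothing about it; it stays a
    //           plain (crashing) null dereference.
}
```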


Re: DIP 1006 - Preliminary Review Round 1

2018-03-05 Thread Walter Bright via Digitalmars-d

On 3/5/2018 2:30 PM, John Colvin wrote:
This just feels bad. Adding extra failsafes for my debug program shouldn't make 
my release program less safe.


Then use `enforce()`.
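Applied to the earlier example, that suggestion would look like this sketch: the check becomes an ordinary runtime branch, so -release neither removes it nor turns it into an assumption the optimizer may exploit.

```d
import std.exception : enforce;

void safeCode1(int a, ref int[2] b) @safe
{
    enforce(a < 2, "index out of bounds");  // stays in every build mode
    b[a] = 0;
}

void main() @safe
{
    int[2] b = [1, 1];
    safeCode1(1, b);
    assert(b[1] == 0);
}
```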


Classinfo and @nogc conflict

2018-03-05 Thread solidstate1991 via Digitalmars-d
I'm trying to speed up my graphics engine; however, the presence of 
the GC in the function Layer.updateRaster (see here: 
https://github.com/ZILtoid1991/pixelperfectengine/blob/master/pixelperfectengine/src/PixelPerfectEngine/graphics/layers.d ) means I get an occasional bump in CPU usage, if not a frame drop (most other performance-related issues have been fixed since then).


I use classinfo for detecting the type of bitmaps, and while I 
probably will have a workaround for the associative array stuff, 
the classinfo thing is embedded into the runtime library, thus it 
needs to be fixed. I took a look at opEquals, but the trickier 
part would be making the toString function @nogc (make its return 
value a ref type?).
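One possible @nogc-friendly replacement for comparing classinfo is a dynamic downcast, which allocates nothing and simply yields null on a type mismatch. A sketch with illustrative types (not the engine's actual ones):

```d
class Bitmap {}
class Bitmap8  : Bitmap {}
class Bitmap32 : Bitmap {}

// Dispatch on the concrete bitmap type without touching classinfo
// strings or the GC; class downcasts should compile in @nogc code.
int bitDepth(Bitmap b) @nogc nothrow
{
    if (cast(Bitmap8) b)  return 8;
    if (cast(Bitmap32) b) return 32;
    return 0;  // unknown subtype
}

void main()
{
    assert(bitDepth(new Bitmap8)  == 8);
    assert(bitDepth(new Bitmap32) == 32);
}
```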


Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it

2018-03-05 Thread Jacob Carlborg via Digitalmars-d

On Monday, 5 March 2018 at 20:03:39 UTC, Luís Marques wrote:
On Tuesday, 27 February 2018 at 05:28:41 UTC, Timothee Cour 
wrote:

https://github.com/llvm-mirror/lldb/pull/3
+
https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d


Ok, I started looking into this now. I hadn't realized that you 
were opening an external library. I'm not sure the LLDB 
developers are going to want to merge something like that, have 
you asked? Would you consider adding C++ code for the 
demangling itself instead?


Seems like they prefer a shared library and not rewriting it in 
C++ [1].


BTW, there's also GNU libiberty, part of binutils, which Iain 
claims has better support for demangling D symbols than 
core.demangle.


[1] 
http://lists.llvm.org/pipermail/lldb-dev/2018-January/013199.html


--
/Jacob Carlborg