Re: Rant after trying Rust a bit

2015-08-03 Thread Enamex via Digitalmars-d

On Friday, 31 July 2015 at 09:37:10 UTC, Jonathan M Davis wrote:

On Friday, 31 July 2015 at 04:47:20 UTC, Enamex wrote:
Right now the docs say that `delete` is getting deprecated, but 
using it on DMD 2.067.1 gives no warnings.


There are no warnings because it hasn't actually been 
deprecated yet.

[...]
- Jonathan M Davis


GC and memory management in general are inadequately documented. 
There are doc pages, answers on SO, and discussions on the forum 
about stuff that (coming from C++) should be really basic: how 
to allocate an instance of a struct on the heap (GC'ed or 
otherwise), how to allocate a class on the non-managed heap (I 
still don't get how `Unique!` works; does it even register with 
the GC? How do I deep-copy, rather than move, its contents into 
another variable?), or on the stack, for that matter (there's 
`scoped!`, but the docs again are confusing. It's somehow 
stack-allocated but can't be copied?).
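For what it's worth, the basic cases look roughly like this (a hedged sketch based on the current Phobos docs; whether `Unique` registers with the GC is exactly what those docs leave unclear):

```d
import std.typecons : scoped, Unique;

struct Point { int x, y; }
class Widget { int id; this(int id) { this.id = id; } }

void main()
{
    // Struct on the GC heap: `new` on a struct yields a pointer.
    Point* p = new Point(1, 2);

    // Class instance on the stack: `scoped` returns a non-copyable
    // wrapper, which is exactly why copying it is disallowed.
    auto w = scoped!Widget(42);

    // Class with unique ownership: contents move rather than copy.
    Unique!Widget u = new Widget(7);
    Unique!Widget v = u.release; // transfers ownership; u becomes empty
}
```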


Eventually deprecating it while leaving it for now without any 
warnings (the docs do warn, but offer no replacement) seems like 
it'd be more trouble than it's worth down the line, since it's 
not a feature addition or even a full deprecation but - AFAIU - a 
replacement of semantics for identical syntax.


Re: D for Game Development

2015-08-03 Thread Rikki Cattermole via Digitalmars-d

On 3/08/2015 6:53 p.m., Sebastiaan Koppe wrote:

On Monday, 3 August 2015 at 03:28:26 UTC, Rikki Cattermole wrote:

On 3/08/2015 1:35 p.m., Sebastiaan Koppe wrote:

On Sunday, 2 August 2015 at 14:03:50 UTC, Rikki Cattermole wrote:

Some of the things that go on in the modding world are truly amazing.

For every item/block with a recipe (vanilla items/blocks are hardcoded),
EE3 calculates an EMC value at the start of runtime. It does it
ridiculously fast.


I understand absolutely nothing about it.


I'll try my best to explain it.

- There could be 200k-400k+ blocks and items per modded instance


I don't understand why there are so many. Once you calculated the EMC
for each block -or item-type, you have them all, no?


200k-400k does sound like a lot, but it really isn't.
There is a 'standard' mod called Forge Multipart. Basically, for every 
block it adds a rather large number more - at the very least 10x the 
real number of blocks.

There are other mods that do similar but different things.

Here is an old video from Direwolf20 that should help you understand[0].

Also keep in mind dependencies between blocks and items for crafting 
recipes.


For the Agrarian Skies 2 mod pack, it has roughly 4*13*499 items/blocks, 
based upon what was visible from NEI.


I did have a block+item dump here, but it was too big for the NNTP server.

The lists only total 6145, but based upon NEI it would be closer to 
25948. So obviously there are many, many things not included in the 
item + block dumps; things like multiparts are not listed, by the looks 
of it. FYI, Agrarian Skies is a themed mod pack that was not designed 
for fanciness, so e.g. no extra "cool" blocks. Other packs like 
Direwolf20 would have much more massive numbers. After all, there is a 
reason why Mojang shifted from numbers to strings to identify blocks 
and items: mods were using them up a little too fast[1].


[0] https://www.youtube.com/watch?v=u9yUr4jmU6s
[1] http://forum.feed-the-beast.com/threads/4096-and-beyond.20774/


Re: std.data.json formal review

2015-08-03 Thread Sönke Ludwig via Digitalmars-d

Am 02.08.2015 um 19:14 schrieb Dmitry Olshansky:


Actually JSON is defined as a subset of the ECMAScript-262 spec, hence it
may not contain anything other than 64-bit IEEE-754 numbers, period.
See:
http://www.ecma-international.org/ecma-262/6.0/index.html#sec-terms-and-definitions-number-value

http://www.ecma-international.org/ecma-262/6.0/index.html#sec-ecmascript-language-types-number-type


Anything else is, ehm, an "extension" (or simply put, a violation of the
spec). I've certainly seen 64-bit integers in the wild - how often are
true big ints found out there?

If no one can present some run-of-the-mill REST JSON API breaking the
rules, I'd suggest demoting BigInt handling to an optional feature.




This is not true. Quoting from ECMA-404:


JSON is a text format that facilitates structured data interchange 
between all programming languages. JSON is syntax of braces, brackets, 
colons, and commas that is useful in many contexts, profiles, and 
applications. JSON was inspired by the object literals of JavaScript 
aka ECMAScript as defined in the ECMAScript Language Specification, 
third Edition [1]. It does not attempt to impose ECMAScript's internal 
data representations on other programming languages. Instead, it shares 
a small subset of ECMAScript's textual representations with all other 
programming languages.
JSON is agnostic about numbers. In any programming language, there can 
be a variety of number types of various capacities and complements, 
fixed or floating, binary or decimal. That can make interchange between 
different programming languages difficult. JSON instead offers only the 
representation of numbers that humans use: a sequence of digits. All 
programming languages know how to make sense of digit sequences even if 
they disagree on internal representations. That is enough to allow 
interchange.





Re: std.data.json formal review

2015-08-03 Thread Dmitry Olshansky via Digitalmars-d

On 03-Aug-2015 10:56, Sönke Ludwig wrote:

Am 02.08.2015 um 19:14 schrieb Dmitry Olshansky:


Actually JSON is defined as a subset of the ECMAScript-262 spec, hence it
may not contain anything other than 64-bit IEEE-754 numbers, period.

[...]


This is not true. Quoting from ECMA-404:

[...]


Hm, about 5 solid pages, and indeed it leaves everything unspecified for 
extensibility, so I stand corrected.

Still I'm more inclined to put my trust in RFCs, such as the new one:
http://www.ietf.org/rfc/rfc7159.txt

Which states:

   This specification allows implementations to set limits on the range
   and precision of numbers accepted.  Since software that implements
   IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
   generally available and widely used, good interoperability can be
   achieved by implementations that expect no more precision or range
   than these provide, in the sense that implementations will
   approximate JSON numbers within the expected precision.  A JSON
   number such as 1E400 or 3.141592653589793238462643383279 may indicate
   potential interoperability problems, since it suggests that the
   software that created it expects receiving software to have greater
   capabilities for numeric magnitude and precision than is widely
   available.

   Note that when such software is used, numbers that are integers and
   are in the range [-(2**53)+1, (2**53)-1] are interoperable in the
   sense that implementations will agree exactly on their numeric
   values.

And it implies setting limits on everything:

9.  Parsers

   A JSON parser transforms a JSON text into another representation.  A
   JSON parser MUST accept all texts that conform to the JSON grammar.
   A JSON parser MAY accept non-JSON forms or extensions.

   An implementation may set limits on the size of texts that it
   accepts.  An implementation may set limits on the maximum depth of
   nesting.  An implementation may set limits on the range and precision
   of numbers.  An implementation may set limits on the length and
   character contents of strings.
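Taken together, the RFC's advice maps naturally onto a parser that tries an exact 64-bit integer first and falls back to binary64 (a hedged sketch; the type and function names here are illustrative, not std.data.json's actual API):

```d
import std.conv : to, ConvException;

// Illustrative number representation: exact 64-bit integer when the
// token fits, IEEE-754 binary64 otherwise, as RFC 7159 recommends.
struct JSONNumber
{
    bool isIntegral;
    union
    {
        long integer;
        double floating;
    }
}

JSONNumber parseNumber(string token)
{
    JSONNumber n;
    try
    {
        n.integer = token.to!long; // exact for plain integer tokens
        n.isIntegral = true;
    }
    catch (ConvException)
    {
        // Fractions, exponents, and out-of-range integers fall back
        // to double, losing precision beyond 2^53 as the RFC notes.
        n.floating = token.to!double;
        n.isIntegral = false;
    }
    return n;
}
```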


Now back to our land: let's look at, say, rapidJSON.

It MAY seem to handle big integers:
https://github.com/miloyip/rapidjson/blob/master/include/rapidjson/internal/biginteger.h

But that is used only to parse doubles accurately:
https://github.com/miloyip/rapidjson/pull/137

Anyhow, the API says it all - only integers up to 64 bits and doubles:

http://rapidjson.org/md_doc_sax.html#Handler

Pretty much what I expect by default.
And plz-plz don't hardcode BigInteger into the JSON parser; it's slow, 
plus it causes epic code bloat, as Don already pointed out.
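Demoting BigInt to an opt-in could take the shape of a compile-time policy, so the common case pays no bloat or speed cost (purely an illustrative sketch with hypothetical names, not the proposed std.data.json API):

```d
import std.bigint : BigInt;

enum NumberPolicy { double64, long64, bigInt }

// Hypothetical: the value type only carries a BigInt field when asked
// to, so code bloat and slow paths exist only for users who opt in.
struct JSONValue(NumberPolicy policy = NumberPolicy.double64)
{
    static if (policy == NumberPolicy.bigInt)
        BigInt big;
    else static if (policy == NumberPolicy.long64)
        long integer;
    else
        double number;
}

alias DefaultJSON = JSONValue!(); // BigInt-free by default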


--
Dmitry Olshansky


Re: Rant after trying Rust a bit

2015-08-03 Thread Max Samukha via Digitalmars-d

On Monday, 3 August 2015 at 06:52:41 UTC, Timon Gehr wrote:

On 08/02/2015 09:02 PM, Max Samukha wrote:

On Sunday, 26 July 2015 at 23:29:18 UTC, Walter Bright wrote:

For example, the '+' operator. Rust traits sez "gee, there's a '+' 
operator, it's good to go. Ship it!" Meanwhile, you thought the 
function was summing some data, when it actually is creating a giant 
string, or whatever idiot thing someone decided to use '+' for.


Number addition and string concatenation are monoid operations. In this 
light, '+' for both makes perfect sense.


'+' is usually used to denote the operation of an abelian group.


The point is that '+' for string concatenation is no more of an 
'idiot thing' than '~'.


Re: Rant after trying Rust a bit

2015-08-03 Thread Kagamin via Digitalmars-d

On Sunday, 2 August 2015 at 21:17:10 UTC, Jonathan M Davis wrote:
Where distinguishing between + and ~ would likely make a big 
difference though is dynamic languages that aren't strict with 
types and allow nonsense like "5" + 2.


Using '~' instead of '+' to concatenate strings is just syntax 
and says nothing about the type system.
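For illustration, D's separation of the two operators does reject the classic dynamic-language mix-up at compile time (a small sketch):

```d
void main()
{
    string s = "5";
    auto t = s ~ "2";   // concatenation of strings: "52"
    auto u = s ~ 'x';   // appending a single char also works
    // auto v = s + 2;  // compile error: '+' is not defined for strings
    assert(t == "52");
}
```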


Re: [semi-OT] forum.dlang.org performance mentioned on Hacker News

2015-08-03 Thread Walter Bright via Digitalmars-d

On 8/2/2015 8:33 AM, David Nadlinger wrote:

Somebody just mentioned Vladimir's great work in a discussion on the Hacker News
front page: https://news.ycombinator.com/item?id=9990763

  — David


The title is: Why and how is Hacker News so fast?


Re: DMD on Windows 10

2015-08-03 Thread Kagamin via Digitalmars-d
On Saturday, 1 August 2015 at 04:25:07 UTC, Jonathan M Davis 
wrote:
You know, it would be _really_ cool if there were an OS out 
there that was fully compliant with both the POSIX standard and 
ecosystem and the Win32 API such that you could run KDE, gnome, 
bash, zsh, etc. on it just like on Linux/FreeBSD/etc. _and_ run 
Windows programs on it - all as native applications. A total 
pipe dream really, but _man_ would that be cool...


Windows does have a POSIX subsystem: 
https://msdn.microsoft.com/en-us/library/cc772343.aspx


`examplevalues` property

2015-08-03 Thread HaraldZealot via Digitalmars-d
I found myself in a situation where it would be good if all types 
supported an `.examplevalues` property in unittest builds. This 
property would return an array of predefined values for the specified 
type (we could even have a convention like `examplevalues[0]` is 
`init`, `examplevalues[1]` is `min` (for numerical types), and so on). 
If a custom type doesn't redefine this property, the array consists 
only of `init`.


The use case for this: a templated struct or class with 
container-like semantics and internal unittests for the methods of 
such a class.


Thoughts?




Re: D for Android

2015-08-03 Thread Elvis Zhou via Digitalmars-d

On Thursday, 30 July 2015 at 19:38:12 UTC, Joakim wrote:

On Monday, 25 May 2015 at 20:08:48 UTC, Joakim wrote:

[...]


Some good news, I've made progress on the port to Android/ARM, 
using ldc's 2.067 branch.  Currently, all 46 modules in 
druntime and 85 of 88 modules in phobos pass their tests (I had 
to comment out a few tests across four modules) when run on the 
command-line.  There is a GC issue that causes 2-3 other 
modules to hang only when the tests are run as part of an 
Android app/apk, ie a D shared library that's invoked by the 
Java runtime.


[...]


Would those patches for ldc/druntime/phobos be applied & merged 
into LDC eventually?


Re: `examplevalues` property

2015-08-03 Thread Andrea Fontana via Digitalmars-d

On Monday, 3 August 2015 at 12:13:15 UTC, HaraldZealot wrote:
[...]


Why don't you use templates? Something like:

import std.stdio;

enum ValueType
{
    Init,
    Min,
    Max
}

auto exampleValues(T)()
{
    T[ValueType] retVal;

    retVal[ValueType.Init] = T.init;
    static if (__traits(compiles, T.min)) retVal[ValueType.Min] = T.min;
    static if (__traits(compiles, T.max)) retVal[ValueType.Max] = T.max;

    return retVal;
}

exampleValues!int.writeln;
exampleValues!string.writeln;



Re: `examplevalues` property

2015-08-03 Thread HaraldZealot via Digitalmars-d

On Monday, 3 August 2015 at 13:13:55 UTC, Andrea Fontana wrote:


Why don't you use templates? Something like:

[...]


Good solution!

But there is something that's not perfect: it can be customized only 
via template specialization, as far as I can see. I want not only 
standard values like `init`, `max` or `min`, but also some example 
values like 1, 2, 3, 4, 5 for `int`. In the latter case your template 
solution is not as convenient as desired (introducing a language 
feature like `.testValue1` seems ridiculous, and without that only 
template specialization can provide customization, as I have said).


But this seems like an interesting direction, and it would be easy to 
implement in object.d (without a library implementation, this feature 
has little benefit).


Re: D for project in computational chemistry

2015-08-03 Thread FreeSlave via Digitalmars-d

On Sunday, 2 August 2015 at 16:25:18 UTC, Yura wrote:

Dear D coders/developers,

I am just thinking about one project in computational chemistry, 
and it is sort of difficult for me to pick the right language for 
this project to be written in. The project is going to deal with 
the generation of molecular structures and will resemble, to some 
extent, some bioinformatics stuff. Personally I code in two 
languages: Python, and a little bit of C (I just started to learn 
that language).


[...]


Did you try the PyPy implementation of Python? It's claimed to be 
faster than CPython.
If it's still not enough for you, then try D for sure. Write a 
sample program that does calculations on real data, use gdc or ldc 
to get optimized code, and see if you're happy with the results.
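For example, assuming the sample program lives in a file named sim.d (an illustrative name), typical optimized builds look like this:

```shell
# Optimized builds with the GCC- and LLVM-based D compilers
gdc -O3 -frelease -o sim sim.d
ldc2 -O3 -release -of=sim sim.d
./sim
```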


assert(0) behavior

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

If you compile and run the following code, what happens?

void main()
{
   assert(0, "error message");
}

answer: it depends. On OSX, if you compile this like so:

dmd testassert.d
./testassert

You get this message + stack trace:

core.exception.AssertError@testassert.d(3): error message

Not bad. But assert(0) is special in that it is always enabled, even in 
release mode. So let's try that:


dmd -release testassert.d
./testassert
Segmentation fault: 11

WAT. The explanation: assert(0) is translated in release mode to a 
HLT instruction. On x86, this results in a segfault. But a segfault is 
tremendously less useful. Let's say you are running a 10k-line program, 
and you see this. Compared with seeing the assert message and stack 
trace, this is going to cause hours of extra debugging.


Why do we do this? I'm really not sure. Note that "error message" is 
essentially not used if we have a seg fault. Throwing an assert error 
shouldn't cause any issues with stack unwinding performance, since this 
can be done inside a nothrow function. And when you throw an 
AssertError, it shouldn't be caught anyway.


Why can't assert(0) throw an assert error in release mode instead of 
segfaulting? What would it cost to do this instead?
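In the meantime, a small helper can restore the throwing behavior even in -release builds (a sketch; `enforceAssert` is a hypothetical name, not a druntime function):

```d
import core.exception : AssertError;

// Hypothetical helper: always reports failure with file/line info,
// even in -release builds, unlike assert(0), which becomes HLT.
void enforceAssert(bool cond, string msg,
                   string file = __FILE__, size_t line = __LINE__)
{
    if (!cond)
        throw new AssertError(msg, file, line);
}

void main()
{
    // enforceAssert(false, "error message"); // would throw even with -release
}
```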


-Steve


Re: `examplevalues` property

2015-08-03 Thread Andrea Fontana via Digitalmars-d

On Monday, 3 August 2015 at 13:54:51 UTC, HaraldZealot wrote:

[...]


You have to write the same amount of code.
It's just one line for each type... Something like:

import std.traits;
import std.typecons : tuple;

enum ValueType
{
    Init,
    Min,
    Max
}

auto exampleValues(T)()
{
    T[ValueType] retVal;

    retVal[ValueType.Init] = T.init;
    static if (__traits(compiles, T.min)) retVal[ValueType.Min] = T.min;
    static if (__traits(compiles, T.max)) retVal[ValueType.Max] = T.max;

    static if (isIntegral!T)
        return tuple!("defaults", "customs")(retVal, [1, 2, 3, 4, 5]);
    else static if (isFloatingPoint!T)
        return tuple!("defaults", "customs")(retVal, [1.0, 2.0, T.nan]);
    else static if (isSomeString!T)
        return tuple!("defaults", "customs")(retVal, ["hello", "world"]);
    else
        return tuple!("defaults", "customs")(retVal, T[].init);
}


Re: `examplevalues` property

2015-08-03 Thread HaraldZealot via Digitalmars-d

On Monday, 3 August 2015 at 14:30:43 UTC, Andrea Fontana wrote:

On Monday, 3 August 2015 at 13:54:51 UTC, HaraldZealot wrote:

You have to write the same amount of code.
It's just one line for each type... Something like:

[...]


Many thanks, it seems like a good workaround for my personal use 
case.


But having something like that in Phobos would be great from my POV.


Re: DMD on Windows 10

2015-08-03 Thread Tofu Ninja via Digitalmars-d

On Monday, 3 August 2015 at 10:37:43 UTC, Kagamin wrote:
On Saturday, 1 August 2015 at 04:25:07 UTC, Jonathan M Davis 
wrote:
[...]


Windows does have a posix subsystem: 
https://msdn.microsoft.com/en-us/library/cc772343.aspx


How well does that work?


Re: assert(0) behavior

2015-08-03 Thread Dicebot via Digitalmars-d
On Monday, 3 August 2015 at 14:34:52 UTC, Steven Schveighoffer 
wrote:

Why do we do this?


Because all asserts must be completely removed in -release.

Yet assert(0) effectively means "unreachable code" (it is actually 
defined that way in the spec), and thus it is possible to ensure an 
extra "free" bit of safety by crashing the app.
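A minimal illustration of that "unreachable" reading (a sketch):

```d
// assert(0) marks a branch the author promises is unreachable; with
// -release it compiles to a HLT instruction rather than a check, and
// it also satisfies the compiler's "missing return" analysis.
int points(char grade)
{
    switch (grade)
    {
        case 'A': return 4;
        case 'B': return 3;
        default:  assert(0, "no other grades exist by contract");
    }
}

void main()
{
    assert(points('A') == 4);
}
```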




Re: Visual Studio Code

2015-08-03 Thread Jacob Carlborg via Digitalmars-d

On 03/08/15 02:24, bitwise wrote:

Just stumbled upon this:

https://code.visualstudio.com/

I see support for Rust and Go, but no D.

If you download it, there is a little smiley/frowny in the bottom right
corner for feedback/feature requests.


If I recall correctly it supports TextMate bundles. Try the D TextMate 
bundle and see what happens.


--
/Jacob Carlborg


Re: assert(0) behavior

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 11:18 AM, Dicebot wrote:

On Monday, 3 August 2015 at 14:34:52 UTC, Steven Schveighoffer wrote:

Why do we do this?


Because all asserts must be completely removed in -release


1. They aren't removed, they are replaced with a nearly useless segfault.
2. If we are going to put something in there instead of "assert", why 
not just throw an error?


Effectively:

assert(0, msg)

becomes a fancy way of writing (in any mode, release or otherwise):

throw new AssertError(msg);

This is actually the way I thought it was done.

-Steve


Re: assert(0) behavior

2015-08-03 Thread Dicebot via Digitalmars-d
On Monday, 3 August 2015 at 15:50:56 UTC, Steven Schveighoffer 
wrote:

[...]


Now, they are completely removed. There is effectively no 
AssertError present in -release (it is defined, but the compiler is 
free to assume it never happens). I'd expect any reasonable 
compiler to not even emit stack unwinding code for functions with 
assert(0) (and no other throwables present).


assert(0) is effectively the same as gcc's __builtin_unreachable, 
with all the consequences for optimization - with the only difference 
that the latter won't even insert HLT but will just continue executing 
the corrupted program.


Re: assert(0) behavior

2015-08-03 Thread Dicebot via Digitalmars-d
General advice  - simply don't ever use -release unless you are 
_very_ sure about program correctness (to the point of 100% test 
coverage and previous successful debug runs)


Re: assert(0) behavior

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 11:57 AM, Dicebot wrote:

[...]


Now, they are completely removed. There is effectively no AssertError
present in -release (it is defined but compiler is free to assume it
never happens). I'd expect any reasonable compiler to not even emit
stack unwinding code for functions with assert(0) (and no other
throwables are present).


I actually don't care if the stack is unwound (it's not guaranteed by 
the language anyway). It's just that a segfault is not useful at all to 
anyone who doesn't have core dumps enabled, and even if they do enable 
them, it's not trivial to get at the real cause of the error. I'd 
settle for calling:


__onAssertZero(__FILE__, __LINE__, message);

Which can do whatever we want: print a message, do HLT, throw 
AssertError, etc.



assert(0) is effectively same as gcc __builtin_unreachable with all
consequences for optimization - with only difference that latter won't
even insert HLT but just continue executing corrupted program.


No reason this would change. Compiler can still consider this 
unreachable code for optimization purposes.


-Steve


Re: assert(0) behavior

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 11:59 AM, Dicebot wrote:

General advice  - simply don't ever use -release unless you are _very_
sure about program correctness (to the point of 100% test coverage and
previous successful debug runs)


So in other words, only release code that has no bugs. Got it ;)

-Steve


Re: DMD on Windows 10

2015-08-03 Thread Kagamin via Digitalmars-d

On Monday, 3 August 2015 at 14:43:00 UTC, Tofu Ninja wrote:

How well does that work?


Well, if it exists in the first place, I suppose it has a 
sizable user base?


Why Java (server VM) is faster than D?

2015-08-03 Thread aki via Digitalmars-d

When I was trying to port some Java program to D,
I noticed Java is faster than D.
I made a simple benchmark test as follows.
Then, I was shocked by the result.

test results on Win8 64bit (smaller is better)
Java(1.8.0,64bit,server): 0.677
C++(MS vs2013): 2.141
C#(MS vs2013): 2.220
D(DMD 2.067.1): 2.448
D(GDC 4.9.2/2.066): 2.481
Java(1.8.0,32bit,client): 3.060

Does anyone know the magic of Java?

Thanks, Aki.

---

test program for D lang (reconstructed; everything following each "<" 
was eaten in transit, and the original repeat count is lost):

import std.datetime;
import std.stdio;

class Foo {
    int i = 0;
    void bar() {}
}
class SubFoo : Foo {
    override void bar() {
        i = i * 3 + 1;
    }
}
int test(Foo obj, int repeat) {
    for (int r = 0; r < repeat; r++)
        obj.bar();
    return obj.i;
}
void main() {
    enum repeat = 1_000_000_000; // placeholder; original count lost
    auto stime = Clock.currTime();
    int ret = test(new SubFoo(), repeat);
    double time = (Clock.currTime() - stime).total!"msecs" / 1000.0;
    writefln("time=%5.3f, ret=%d", time, ret);
}

Re: Why Java (server VM) is faster than D?

2015-08-03 Thread John Colvin via Digitalmars-d

On Monday, 3 August 2015 at 16:27:39 UTC, aki wrote:

When I was trying to port some Java program to D,
I noticed Java is faster than D.
I made a simple bench mark test as follows.
Then, I was shocked with the result.

[...]


What compilation flags?


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Justin Whear via Digitalmars-d
Java being fastest at running Java-style code is not too surprising.  My 
guess is that Java is "hotspot" inlining the calls to `bar`, getting rid 
of the dynamic dispatch overhead.  I think that for real systems D will 
generally beat out Java across the board, but not if the D version is a 
straight up transliteration of the Java--expect Java to be the best at 
running Java code.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Dmitry Olshansky via Digitalmars-d

On 03-Aug-2015 19:27, aki wrote:

[...]



Devirtualization? HotSpot is fairly aggressive in that regard.


--
Dmitry Olshansky


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Iain Buclaw via Digitalmars-d
On 3 August 2015 at 18:27, aki via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> [...]
>
I have read somewhere (or maybe heard) that the Java VM is able to cache and
possibly remove/inline dynamic dispatches on the fly.  This is a clear win
for VM languages over natively compiled ones.

Iain.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 12:31 PM, Dmitry Olshansky wrote:

[...]


Devirtualization? HotSpot is fairly aggressive in that regard.


Yeah, I think that's it. Virtual calls cannot be inlined by the D 
compiler, but can be inlined by HotSpot. You can fix this by making 
the derived class final, or marking the method final, and always using 
a reference to the derived type. If you still need virtual dispatch, 
you will have to live with the lower performance.


-Steve



Re: Why Java (server VM) is faster than D?

2015-08-03 Thread John Colvin via Digitalmars-d

On Monday, 3 August 2015 at 16:27:39 UTC, aki wrote:

[...]

test program for Java (reconstructed; the text was truncated in transit 
from "for (int r = 0; r" onward):

class Foo {
    public int i = 0;
    public void bar() {}
}
class SubFoo extends Foo {
    public void bar() {
        i = i * 3 + 1;
    }
}
public class Main {
    public static int test(Foo obj, int repeat) {
        for (int r = 0; r < repeat; r++)
            obj.bar();
        return obj.i;
    }
    // The timing harness mirrors the D version; the original code is lost.
}

Not surprising. The virtual function call takes almost all of the 
time and the JVM will be devirtualising it. If you want to call 
tiny virtual functions in tight loops, use a VM.


That said, it's a bit disappointing that the devirtualisation 
doesn't happen at compile-time after inlining for a simple case 
like this.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread John Colvin via Digitalmars-d
On Monday, 3 August 2015 at 16:41:42 UTC, Steven Schveighoffer 
wrote:

[...]


Devirtualization? HotSpot is fairly aggressive in that regard.


Yeah, I think that's it. virtual calls cannot be inlined by the 
D compiler, but could be inlined by hotspot. You can fix this 
by making the derived class final, or marking the method final, 
and always using a reference to the derived type. If you need 
virtualization still, you will have to deal with lower 
performance.


-Steve


Yup. I get very similar numbers to aki for his version, but 
changing two lines:


final class SubFoo : Foo {

int test(F)(F obj, int repeat) {

or less generally:

int test(SubFoo obj, int repeat) {

gets me down to 0.182s with ldc on OS X
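The same fix applies to the C++ row of the benchmark. A hedged sketch (not from the thread; function name is mine), using C++11 `final` so a compiler can devirtualise and then inline `bar` the same way:

```cpp
#include <cassert>

struct Foo {
    int i = 0;
    virtual void bar() {}
    virtual ~Foo() {}
};

// 'final' guarantees no further override exists, so a call through a
// SubFoo reference can be devirtualised and then inlined.
struct SubFoo final : Foo {
    void bar() override { i = i * 3 + 1; }
};

// Taking the derived type directly (like the D template version above)
// removes the virtual dispatch from the hot loop entirely.
int run_bench(SubFoo& obj, int repeat) {
    for (int r = 0; r < repeat; ++r)
        obj.bar();
    return obj.i;
}
```

With dispatch through `Foo&` instead, the compiler must assume any override could be loaded, which is exactly the barrier discussed above.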


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Adam D. Ruppe via Digitalmars-d
You can try a few potential optimizations in the D version 
yourself and see if it makes a difference.


Devirtualization has a very small impact. Test this by making 
`test` take `SubFoo` and making `bar` final, or making `bar` a 
stand-alone function.


That's not it.

Inlining alone doesn't make a huge difference either - test this 
by copy/pasting the `bar` method body to the test function.


But we can see a *huge* difference if we inline AND make the data 
local:


int test(SubFoo obj, int repeat) {
    int i = obj.i; // local variable copy
    for (int r = 0; r < repeat; r++)
        i = i * 3 + 1; // bar()'s body, inlined by hand
    obj.i = i; // save it back to the object so same result to the outside world
    return obj.i;
}



That cuts the time to less than 1/2 on my computer from the other 
fastest version.


So I suspect the JVM is able to figure out that the `i` member is 
being used and putting it in a hot cache instead of accessing it 
indirectly though the object, just like I did by hand there.


I betcha if the loop ran 5 times, it would be no different, but 
the JVM realizes after hundreds of iterations that there's a huge 
optimization potential there and rewrites the code at that point, 
making it faster for the next million runs.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread John Colvin via Digitalmars-d

On Monday, 3 August 2015 at 16:47:14 UTC, Adam D. Ruppe wrote:
You can try a few potential optimizations in the D version 
yourself and see if it makes a difference.


Devirtualization has a very small impact. Test this by making 
`test` take `SubFoo` and making `bar` final, or making `bar` a 
stand-alone function.


That's not it.


Making SubFoo a final class and test take SubFoo gives a >10x 
speedup for me.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 12:50 PM, John Colvin wrote:

On Monday, 3 August 2015 at 16:47:14 UTC, Adam D. Ruppe wrote:

You can try a few potential optimizations in the D version yourself
and see if it makes a difference.

Devirtualization has a very small impact. Test this by making `test`
take `SubFoo` and making `bar` final, or making `bar` a stand-alone
function.

That's not it.


Making SubFoo a final class and test take SubFoo gives a >10x speedup
for me.


Let's make sure we're all comparing apples to apples here.

FWIW, I suspect the inlining to be the most significant improvement, 
which is impossible for virtual functions in D.


ALSO, make SURE you are compiling in release mode, so you aren't calling 
a virtual invariant function before/after every call.


-Steve


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Adam D. Ruppe via Digitalmars-d

On Monday, 3 August 2015 at 16:47:58 UTC, John Colvin wrote:

gets me down to 0.182s with ldc on OS X


Yeah, I tried dmd with the final and didn't get a difference but 
gdc with final (and -frelease, very important for max speed here 
since without it the method calls are surrounded by various 
assertions) and got similar speed to the hand written one too.


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Dmitry Olshansky via Digitalmars-d

On 03-Aug-2015 19:54, Steven Schveighoffer wrote:

On 8/3/15 12:50 PM, John Colvin wrote:

On Monday, 3 August 2015 at 16:47:14 UTC, Adam D. Ruppe wrote:

You can try a few potential optimizations in the D version yourself
and see if it makes a difference.

Devirtualization has a very small impact. Test this by making `test`
take `SubFoo` and making `bar` final, or making `bar` a stand-alone
function.

That's not it.


Making SubFoo a final class and test take SubFoo gives a >10x speedup
for me.


Let's make sure we're all comparing apples to apples here.

FWIW, I suspect the inlining to be the most significant improvement,
which is impossible for virtual functions in D.


Should be trivial in this particular case. You just keep the original 
virtual call where it cannot be deduced.




ALSO, make SURE you are compiling in release mode, so you aren't calling
a virtual invariant function before/after every call.


This one is critical. Actually why do we have an extra call for trivial 
null-check on any object that doesn't even have invariant?



--
Dmitry Olshansky


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread John Colvin via Digitalmars-d

On Monday, 3 August 2015 at 16:53:30 UTC, Adam D. Ruppe wrote:

On Monday, 3 August 2015 at 16:47:58 UTC, John Colvin wrote:

gets me down to 0.182s with ldc on OS X


Yeah, I tried dmd with the final and didn't get a difference 
but gdc with final (and -frelease, very important for max speed 
here since without it the method calls are surrounded by 
various assertions) and got similar speed to the hand written 
one too.


ouch, yeah those assertions cause me a 30x slowdown!


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Adam D. Ruppe via Digitalmars-d

On Monday, 3 August 2015 at 16:50:42 UTC, John Colvin wrote:
Making SubFoo a final class and test take SubFoo gives a >10x 
speedup for me.


Right, gdc and ldc will do the aggressive inlining and local 
data optimizations automatically once they are able to 
devirtualize the calls (at least when you use the -O flags).


dmd, however, even with -inline, doesn't make the local copy of 
the variable - it disassembles to this:


08098740 <_D1l4testFC1l6SubFooiZi>:
 8098740:   55            push   ebp
 8098741:   8b ec         mov    ebp,esp
 8098743:   89 c1         mov    ecx,eax
 8098745:   53            push   ebx
 8098746:   31 d2         xor    edx,edx
 8098748:   8b 5d 08      mov    ebx,DWORD PTR [ebp+0x8]
 809874b:   56            push   esi
 809874c:   85 c9         test   ecx,ecx
 809874e:   7e 0f         jle    809875f <_D1l4testFC1l6SubFooiZi+0x1f>
 8098750:   8b 43 08      mov    eax,DWORD PTR [ebx+0x8]
 8098753:   8d 74 40 01   lea    esi,[eax+eax*2+0x1]
 8098757:   42            inc    edx
 8098758:   89 73 08      mov    DWORD PTR [ebx+0x8],esi
 809875b:   39 ca         cmp    edx,ecx
 809875d:   7c f1         jl     8098750 <_D1l4testFC1l6SubFooiZi+0x10>
 809875f:   8b 43 08      mov    eax,DWORD PTR [ebx+0x8]
 8098762:   5e            pop    esi
 8098763:   5b            pop    ebx
 8098764:   5d            pop    ebp
 8098765:   c2 04 00      ret    0x4



There's no call in there, but there is still indirect memory 
access for the variable, so it doesn't get the caching benefits 
of the stack.




It isn't news that dmd's optimizer is pretty bad next to, 
well, pretty much everyone else's nowadays, whether gdc, ldc, or 
Java, but it is sometimes nice to take a look at why.




The biggest magic of Java IMO here is being CPU cache friendly!


Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 12:59 PM, Dmitry Olshansky wrote:

On 03-Aug-2015 19:54, Steven Schveighoffer wrote:



ALSO, make SURE you are compiling in release mode, so you aren't calling
a virtual invariant function before/after every call.


This one is critical. Actually why do we have an extra call for trivial
null-check on any object that doesn't even have invariant?


Actually, the call to the invariant should be avoidable if the 
object doesn't have one. It should be easy to check the vtable pointer 
to see if it points at the "default" invariant (which does nothing).
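As an illustration only (a hypothetical layout and names of my choosing, not druntime's actual mechanism), the proposed check could look like this in C++:

```cpp
#include <cassert>

// Hypothetical layout (not druntime's actual one): each class record
// stores a pointer to its invariant routine, and classes without an
// invariant all share the no-op default.
typedef void (*Invariant)(void* obj);

inline void default_invariant(void*) {}  // does nothing
inline void sample_invariant(void*) {}   // stands in for a user-written invariant

struct ClassInfo {
    Invariant inv;
};

// The proposed check: compare against the shared default and skip the
// indirect call entirely when the class never installed an invariant.
// Returns true only when a real invariant was actually run.
inline bool run_invariant(const ClassInfo& ci, void* obj) {
    if (ci.inv == &default_invariant)
        return false;  // cheap pointer compare, no call at all
    ci.inv(obj);
    return true;
}
```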


-Steve



Re: Why Java (server VM) is faster than D?

2015-08-03 Thread aki via Digitalmars-d

On Monday, 3 August 2015 at 16:47:58 UTC, John Colvin wrote:

changing two lines:
final class SubFoo : Foo {
int test(F)(F obj, int repeat) {


I tried it. DMD shows no change, while GDC gets an acceptable score.
D(DMD 2.067.1): 2.445
D(GDC 4.9.2/2.066): 0.928

Now I got a hint how to improve the code by hand.
Thanks, John.
But the original Java code that I'm porting is
about 10,000 lines of code.
And the performance is about 3 times different.
Yes! Java is 3 times faster than D in my app.
I hope the future DMD/GDC compiler will do the
similar optimization automatically, not by hand.

Aki.



Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Dmitry Olshansky via Digitalmars-d

On 03-Aug-2015 20:05, Steven Schveighoffer wrote:

On 8/3/15 12:59 PM, Dmitry Olshansky wrote:

On 03-Aug-2015 19:54, Steven Schveighoffer wrote:



ALSO, make SURE you are compiling in release mode, so you aren't calling
a virtual invariant function before/after every call.


This one is critical. Actually why do we have an extra call for trivial
null-check on any object that doesn't even have invariant?


Actually, that the call to the invariant should be avoidable if the
object doesn't have one. It should be easy to check the vtable pointer
to see if it points at the "default" invariant (which does nothing).


https://issues.dlang.org/show_bug.cgi?id=14865


--
Dmitry Olshansky


Re: Visual Studio Code

2015-08-03 Thread Misu via Digitalmars-d
I'm using Visual Studio Code with vibe.d and dub; it's working very 
well. Visual Studio Code supports jade files.


Would be happy to see official support for D.

atm I have my own basic custom "D support". You can 
copy/paste/edit c# support and edit the files to add D keywords, 
it's very easy.


You can do this in this path (for windows): 
%AppData%\Local\Code\app-0.1.2\resources\app\plugins


If you want visual code to recognize .dt (vibed diet templates) 
as jade files, you can edit vs.language.jade/ticino.plugin.json


add .dt in extensions: "extensions": [ ".jade", ".dt" ], then restart 
Visual Studio Code.




Re: Why Java (server VM) is faster than D?

2015-08-03 Thread Etienne Cimon via Digitalmars-d

On Monday, 3 August 2015 at 17:33:30 UTC, aki wrote:

On Monday, 3 August 2015 at 16:47:58 UTC, John Colvin wrote:

changing two lines:
final class SubFoo : Foo {
int test(F)(F obj, int repeat) {


I tried it. DMD shows no change, while GDC gets an acceptable score.
D(DMD 2.067.1): 2.445
D(GDC 4.9.2/2.066): 0.928

Now I got a hint how to improve the code by hand.
Thanks, John.
But the original Java code that I'm porting is
about 10,000 lines of code.
And the performance is about 3 times different.
Yes! Java is 3 times faster than D in my app.
I hope the future DMD/GDC compiler will do the
similar optimization automatically, not by hand.

Aki.


LLVM might be able to achieve Java's optimization for your use 
case using profile-guided optimization. In principle, it's hard 
to choose which function to inline without the function call 
counts, but LLVM has a back-end with sampling support.


http://clang.llvm.org/docs/UsersManual.html#profile-guided-optimization

Whether or not this is or will be available soon for D in LDC is 
a different matter.


Re: D for Android

2015-08-03 Thread Joakim via Digitalmars-d

On Monday, 3 August 2015 at 12:46:51 UTC, Elvis Zhou wrote:

On Thursday, 30 July 2015 at 19:38:12 UTC, Joakim wrote:

On Monday, 25 May 2015 at 20:08:48 UTC, Joakim wrote:

[...]


Some good news, I've made progress on the port to Android/ARM, 
using ldc's 2.067 branch.  Currently, all 46 modules in 
druntime and 85 of 88 modules in phobos pass their tests (I 
had to comment out a few tests across four modules) when run 
on the command-line.  There is a GC issue that causes 2-3 
other modules to hang only when the tests are run as part of 
an Android app/apk, ie a D shared library that's invoked by 
the Java runtime.


[...]


Would those patches for ldc/druntime/phobos be applied & merged 
into LDC eventually?


For the ones I wrote which have not been upstreamed already, yes, 
I'll submit PRs once I get them cleaned up.  For example, I'd 
like to devise a way not to use dl_iterate_phdr to load 
pre-initialized data, so that Android versions older than 5.0 can 
run D too.  I need to look into employing the same bracketed 
sections approach that dmd uses.


Re: D for project in computational chemistry

2015-08-03 Thread jmh530 via Digitalmars-d

On Monday, 3 August 2015 at 14:25:21 UTC, FreeSlave wrote:

On Sunday, 2 August 2015 at 16:25:18 UTC, Yura wrote:

Dear D coders/developers,

I am just thinking on one project in computational chemistry, 
and it is sort of difficult for me to pick the right 
language for this project to be written in. The project is going to 
deal with the generation of the molecular structures and will 
resemble to some extent some bio-informatic stuff. Personally 
I code in two languages - Python, and a little bit in C (just 
started to learn this language).


[...]


Did you try PyPy implementation of python? It's claimed to be 
faster than CPython.
If it's still not enough for you, then try D for sure. Write 
a sample program that does calculations on real data, use gdc 
or ldc to get optimized code, and see if you're happy with 
the results.


Last time I checked there's lots of stuff that you can't use with 
pypy.


Re: std.experimental.color, request reviews

2015-08-03 Thread Tofu Ninja via Digitalmars-d

On Tuesday, 23 June 2015 at 14:58:35 UTC, Manu wrote:

https://github.com/D-Programming-Language/phobos/pull/2845

I'm getting quite happy with it.
I think it's a good and fairly minimal but useful starting 
point.


It'd be great to get some reviews from here.


What's the status on this? This really should be easy to move into 
phobos; color is hard to mess up.


Re: std.data.json formal review

2015-08-03 Thread deadalnix via Digitalmars-d

On Tuesday, 28 July 2015 at 14:07:19 UTC, Atila Neves wrote:

Start of the two week process, folks.

Code: https://github.com/s-ludwig/std_data_json
Docs: http://s-ludwig.github.io/std_data_json/

Atila


Looked in the doc ( 
http://s-ludwig.github.io/std_data_json/stdx/data/json/value/JSONValue.html ). I wanted to know how JSONValue can be manipulated. That is not very explicit.


First, it doesn't look like the value can embed null as a value. 
null is a valid json value.


Secondly, it seems that it accepts bigint. As per the JSON spec, the 
only kind of numeric value you can have in there is a num, which 
doesn't even make the difference between floating point and 
integer (!) and with 53 bits of precision. By having double and 
long in there, we are already way over spec, so I'm not sure why 
we'd want to put bigint in there.
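The 53-bit claim is easy to verify. A small C++ check (my illustration, not from the thread) of where an IEEE double stops distinguishing adjacent integers:

```cpp
#include <cassert>
#include <cstdint>

// An IEEE-754 double carries a 53-bit significand (52 stored bits plus
// the implicit leading one): every integer with magnitude up to 2^53
// round-trips exactly through a double, but 2^53 + 1 does not.
inline bool exactly_representable(std::int64_t n) {
    return static_cast<std::int64_t>(static_cast<double>(n)) == n;
}
```

So any JSON implementation that stores numbers as doubles silently collapses integers beyond 2^53, which is why storing `long` (let alone bigint) already goes beyond what interoperable JSON guarantees.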


Finally, I'd love to see JSONValue exhibit an API similar to 
jsvar's.





Wiki article: Starting as a Contributor

2015-08-03 Thread Andrei Alexandrescu via Digitalmars-d
I had to set up dmd and friends on a fresh Ubuntu box, so I thought I'd 
document the step-by-step process:


http://wiki.dlang.org/Starting_as_a_Contributor

Along the way I also hit a small snag and fixed it at

https://github.com/D-Programming-Language/dlang.org/pull/1049

Further improvements are welcome.


Thanks,

Andrei


Re: D for project in computational chemistry

2015-08-03 Thread Laeeth Isharc via Digitalmars-d

On Monday, 3 August 2015 at 06:16:57 UTC, yawniek wrote:

On Sunday, 2 August 2015 at 16:25:18 UTC, Yura wrote:

While it is easy to code in Python there are two things I do 
not like:


1) Python is slow for nested loops (much slower compared to C)
2) Python is not compiled. However, I want to work with code 
which can be compiled and distributed as binaries (at least at 
the beginning).




you can use the best of both worlds with pyd:
https://github.com/ariovistus/pyd

- write python Modules in D
and/or
- make your D code scriptable with python


Also, note that you can write D in the ipython/jupyter notebook 
and have it interoperate with D libraries from code.dlang.org and 
with python.  It's at an early stage, but so far I have found it 
to work well.


https://github.com/DlangScience/PydMagic


Re: Wiki article: Starting as a Contributor

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 5:25 PM, Andrei Alexandrescu wrote:

I had to set up dmd and friends on a fresh Ubuntu box, so I thought I'd
document the step-by-step process:

http://wiki.dlang.org/Starting_as_a_Contributor


You should make sure there's no overlap with this:

http://wiki.dlang.org/Get_involved

I think it's a great idea to have a bootstrap for how to get your 
environment set up. It's been a long time since I did it. What I did is 
to download a dmd release zipfile, then delete the dmd, druntime, and 
phobos src, replacing it with cloned git repositories.


I also deleted all the binaries/libraries under bin/os directory, and 
use symlinks to the built files from their appropriate source 
directories. This way I can test changes without having to create a new 
installation somewhere.


-Steve


Re: Wiki article: Starting as a Contributor

2015-08-03 Thread Dmitry Olshansky via Digitalmars-d

On 04-Aug-2015 01:30, Steven Schveighoffer wrote:

On 8/3/15 5:25 PM, Andrei Alexandrescu wrote:

I had to set up dmd and friends on a fresh Ubuntu box, so I thought I'd
document the step-by-step process:

http://wiki.dlang.org/Starting_as_a_Contributor


You should make sure there's no overlap with this:

http://wiki.dlang.org/Get_involved

I think it's a great idea to have a bootstrap for how to get your
environment set up. It's been a long time since I did it. What I did is
to download a dmd release zipfile, then delete the dmd, druntime, and
phobos src, replacing it with cloned git repositories.




I also deleted all the binaries/libraries under bin/os directory, and
use symlinks to the built files from their appropriate source
directories. This way I can test changes without having to create a new
installation somewhere.



Yes, it's a fairly simple way that I've used countless times with great success.


--
Dmitry Olshansky


Re: assert(0) behavior

2015-08-03 Thread Walter Bright via Digitalmars-d

On 8/3/2015 8:50 AM, Steven Schveighoffer wrote:

1. They aren't removed, they are replaced with a nearly useless segfault.


Not useless at all:

1. The program does not continue running after it has failed. (Please, let's not 
restart that debate.)

2. Running it under a debugger, the location of the fault will be identified.



2. If we are going to put something in there instead of "assert", why not just
throw an error?


Because they are expensive.

To keep asserts in all their glory in released code, do not use the -release 
switch.



Re: assert(0) behavior

2015-08-03 Thread Steven Schveighoffer via Digitalmars-d

On 8/3/15 6:54 PM, Walter Bright wrote:

On 8/3/2015 8:50 AM, Steven Schveighoffer wrote:

1. They aren't removed, they are replaced with a nearly useless segfault.


Not useless at all:

1. The program does not continue running after it has failed. (Please,
let's not restart that debate.)


You can run some code that is fresh, i.e. don't *continue* running 
failing code, but it's ok, for example, to run a signal handler, or call 
a pure function. My thought is that you should print the file location 
and exit (perhaps with segfault).



2. Running it under a debugger, the location of the fault will be
identified.


Yeah, the real problem is when you are not running under a debugger. If 
you have an intermittent bug, or one that is in the field, a stack trace 
of the problem is worth 100x more than a segfault and trying to talk a 
customer through setting up a debugger, or even setting up their system 
to do a core dump for you to debug. They aren't always interested in 
that enough to care.


But also, a segfault has a specific meaning. It means, you tried to 
access memory you don't have access to. This clearly is not that, and it 
sends the wrong message -- memory corruption. This is not memory 
corruption, it's a program error.



2. If we are going to put something in there instead of "assert", why
not just
throw an error?


Because they are expensive.


They are expensive when? 2 microseconds before the program ends? At that 
point, I'd rather you spend as much resources telling me what went wrong 
as possible. More info the better.


Now, if it's expensive to include the throw vs. not including it (I 
don't know), then that's a fair point. If it means an extra few 
instructions to set up the throw (which isn't run at all when the 
assert(0) line isn't run), that's not convincing.



To keep asserts in all their glory in released code, do not use the
-release switch.


OK, this brings up another debate. The thing that triggered all this is 
an issue with core.time, see issue 
https://issues.dlang.org/show_bug.cgi?id=14863


Essentially, we wrote code to get all the clock information at startup 
on a posix system that supports clock_gettime, which is immutable and 
can be read easily at startup. However, how we handled the case where a 
clock could not be fetched is to assert(0, "don't support clock " ~ 
clockname).


The clear expectation was that the message will be printed (along with 
file/line number), and we can go fix the issue.


On Linux 2.6.32 - a supported LTS release I guess - one of these clocks 
is not supported. This means running a simple "hello world" program 
crashes at startup in a segfault, not a nice informative message.


So what is the right answer here? We want an assert to trigger for this, 
but the only assert that stays in is assert(0). However, that's useless 
if all someone sees is "segfault". "some message" never gets printed, 
because druntime is compiled in release mode. I'm actually quite 
thrilled that someone figured this all out -- one person analyzed his 
core dump to narrow down the function, and happened to include his linux 
kernel version, and another person had the arcane knowledge that some 
clock wasn't supported in that version.


Is there a better mechanism we should be using in druntime that is 
clearly going to be compiled in release mode? Should we just throw an 
assert error directly? Clearly the code is not "corrupted" at this 
point, it's just an environmental issue. But we don't want to continue 
execution. What is the right call?


At the very least, assert(0, "message") should be a compiler error, the 
message is unused information.


-Steve


Re: assert(0) behavior

2015-08-03 Thread Ali Çehreli via Digitalmars-d

On 08/03/2015 04:57 PM, Steven Schveighoffer wrote:

> At the very least, assert(0, "message") should be a compiler error, the
> message is unused information.

Agreed.

How about dumping the message to stderr as a best effort if the message 
is a literal? Hm... On the other hand, undefined behavior means that 
even trying that can cause harm like radiating a human with too much 
medical radiation. :(


Perhaps, if it is a complicated expression like calling format(), then 
it should be an error?


Ali



Re: assert(0) behavior

2015-08-03 Thread Meta via Digitalmars-d
On Monday, 3 August 2015 at 23:57:36 UTC, Steven Schveighoffer 
wrote:
OK, this brings up another debate. The thing that triggered all 
this is an issue with core.time, see issue 
https://issues.dlang.org/show_bug.cgi?id=14863


[...]


Why not just " stderr.writeln(errMsg); assert(0);"?


Re: assert(0) behavior

2015-08-03 Thread Walter Bright via Digitalmars-d

On 8/3/2015 4:57 PM, Steven Schveighoffer wrote:

OK, this brings up another debate. The thing that triggered all this is an issue
with core.time, see issue https://issues.dlang.org/show_bug.cgi?id=14863

Essentially, we wrote code to get all the clock information at startup on a
posix system that supports clock_gettime, which is immutable and can be read
easily at startup. However, how we handled the case where a clock could not be
fetched is to assert(0, "don't support clock " ~ clockname).

The clear expectation was that the message will be printed (along with file/line
number), and we can go fix the issue.

On Linux 2.6.32 - a supported LTS release I guess - one of these clocks is not
supported. This means running a simple "hello world" program crashes at startup
in a segfault, not a nice informative message.

So what is the right answer here?


The answer is the code is misusing asserts to check for environmental errors. I 
cannot understand how I consistently fail at explaining this. There have been 
many thousand message threads on exactly this topic.


Asserts are for CHECKING FOR PROGRAM BUGS.

enforce(), etc., are for CHECKING FOR INPUT ERRORS. Environmental errors are 
input errors, not programming bugs.
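Mapped onto C++ for illustration (my analogy, not Walter's code): `assert` is stripped by `-DNDEBUG` much like D's `-release` strips asserts, so environmental checks must use a mechanism that survives every build mode, such as throwing:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Programming bug: the caller violated this function's contract.
// The check can vanish in a release build (-DNDEBUG), just as D's
// -release strips assert().
inline int half(int n) {
    assert(n % 2 == 0 && "caller must pass an even number");
    return n / 2;
}

// Environmental/input error: must survive every build mode, so it
// throws (the moral equivalent of D's enforce()).
inline int parse_port(const std::string& s) {
    int port = std::stoi(s);
    if (port < 1 || port > 65535)
        throw std::runtime_error("port out of range: " + s);
    return port;
}
```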




Re: assert(0) behavior

2015-08-03 Thread Walter Bright via Digitalmars-d

On 8/3/2015 5:25 PM, Ali Çehreli wrote:

On 08/03/2015 04:57 PM, Steven Schveighoffer wrote:

 > At the very least, assert(0, "message") should be a compiler error, the
 > message is unused information.

Agreed.


No.

1. If you want the message, do not use -release.
2. Do not use asserts to issue error messages for input/environment errors.



Re: assert(0) behavior

2015-08-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Monday, 3 August 2015 at 15:18:12 UTC, Dicebot wrote:
On Monday, 3 August 2015 at 14:34:52 UTC, Steven Schveighoffer 
wrote:

Why do we do this?


Because all asserts must be completely removed in -release

Yet assert(0) effectively means "unreachable code" (it is 
actually defined that way in spec) and thus it is possible to 
ensure extra "free" bit of safety by crashing the app.


It isn't free. The point of having unreachable as primitive is 
that you can remove all branching that leads to it.


Re: assert(0) behavior

2015-08-03 Thread Dicebot via Digitalmars-d
On Tuesday, 4 August 2015 at 02:37:00 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 3 August 2015 at 15:18:12 UTC, Dicebot wrote:
On Monday, 3 August 2015 at 14:34:52 UTC, Steven Schveighoffer 
wrote:

Why do we do this?


Because all asserts must be completely removed in -release

Yet assert(0) effectively means "unreachable code" (it is 
actually defined that way in spec) and thus it is possible to 
ensure extra "free" bit of safety by crashing the app.


It isn't free. The point of having unreachable as primitive is 
that you can remove all branching that leads to it.


"free compared to throwing exception" would be a better wording 
indeed. I remember Iain complaining about presence of HLT (and 
saying he won't do that for gdc) for exactly this reason.


Re: assert(0) behavior

2015-08-03 Thread Ola Fosheim Grøstad via Digitalmars-d

On Tuesday, 4 August 2015 at 03:04:14 UTC, Dicebot wrote:
indeed. I remember Iain complaining about presence of HLT (and 
saying he won't do that for gdc) for exactly this reason.


Yes. I think a lot of this confusion could have been avoided by 
using "halt()" instead of "assert(0)" and "unreachable()" for 
marking code that has been proven unreachable. Right now it isn't 
obvious to me as a programmer what would happen for 
"assert(x-x)". I would have to look it up.


A more principled approach would be:

- "enforce(...)" test for errors

- "assert(...)" are (lint-like) annotations that don't affect 
normal execution, but can be requested to be dynamically tested.


- "halt()" marks sudden termination

- "unreachable()" injects a proven assumption into the deductive 
database


-"assume(...)" injects a proven assumption into the deductive 
database


- contracts define interfaces between separate pieces of code 
(like a plugin or dynamically linked library from multiple 
vendors e.g. sub system interface) that you may turn on/off based 
on what you interface with.


The input/environment/code distinction does not work very well. 
Sometimes input is a well defined part of the system, sometimes 
input is code (like dynamic linking a plugin), etc...




Representation length of integral types

2015-08-03 Thread tcak via Digitalmars-d
There is a use case for me that I am given a string, and before 
even trying to convert it to integer, I want to check whether it 
is valid. One of the checks necessary for this is the length of 
string.


So, if I have received "156" and it should be converted to ubyte, 
I would check whether it is at most 3 bytes length.


While doing that, one would have to define that maximum length in 
code, even though the length information is constant and never changes.


For this reason, just as data types already have properties like 
max, min, and init, I would ask for the addition of new properties 
for integral (and even floating-point) types, e.g. these values 
for ubyte:


max_hex_length = 2
max_dec_length = 3
max_bin_length = 8
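For what it's worth, the pre-check can be written today without new language properties. A sketch (function name and constant chosen by me) in C++ for the ubyte case:

```cpp
#include <cassert>
#include <cctype>
#include <cstddef>
#include <string>

// The pre-check described above, for ubyte (0..255): reject input
// that is empty, too long, non-numeric, or out of range before
// converting it.
const std::size_t max_dec_length_ubyte = 3;

inline bool valid_ubyte_string(const std::string& s) {
    if (s.empty() || s.size() > max_dec_length_ubyte)
        return false;  // the length check alone filters most bad input
    for (char c : s)
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return false;
    // Length is necessary but not sufficient: "999" has 3 digits yet
    // still exceeds ubyte.max.
    return std::stoi(s) <= 255;
}
```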



Re: [semi-OT] forum.dlang.org performance mentioned on Hacker News

2015-08-03 Thread Vladimir Panteleev via Digitalmars-d

On Sunday, 2 August 2015 at 15:46:40 UTC, dereck009 wrote:

On Sunday, 2 August 2015 at 15:33:42 UTC, David Nadlinger wrote:
Somebody just mentioned Vladimir's great work in a discussion 
on the Hacker News front page: 
https://news.ycombinator.com/item?id=9990763


 — David


Can we read about the architecture of this site?

Where are the servers located?
Does it use any CDN?
Are all the pages cached static HTML or dynamic?


Seriously impressive performance.


Hi all, sorry, I'm on vacation so unable to write a full response.

No rocket science here, just general optimization common sense. 
Look at what CPU and web profilers (e.g. Google PageSpeed) say 
and optimize accordingly, rinse and repeat.


Re: Wiki article: Starting as a Contributor

2015-08-03 Thread Ali Çehreli via Digitalmars-d

On 08/03/2015 02:25 PM, Andrei Alexandrescu wrote:

I had to set up dmd and friends on a fresh Ubuntu box, so I thought I'd
document the step-by-step process:

http://wiki.dlang.org/Starting_as_a_Contributor

Along the way I also hit a small snag and fixed it at

https://github.com/D-Programming-Language/dlang.org/pull/1049

Further improvements are welcome.


Thanks,

Andrei


I am trying this now. I've already hit a problem. The wiki makes it 
sound like the bootstrapping is optional. As I already have dmd 2.067, I 
skipped that stepped and failed when making druntime as it apparently 
assumes the bootstrapped dmd:


[druntime]$ make -f posix.mak
../dmd/src/dmd -conf= -c -o- -Isrc -Iimport 
-Hfimport/core/sync/barrier.di src/core/sync/barrier.d

make: ../dmd/src/dmd: Command not found
make: *** [import/core/sync/barrier.di] Error 127

A symbolic link to dmd 2.067 failed because it does not know about 
pragma(inline). Fine...


Bootstrapping now... :)

Ali



Re: Representation length of integral types

2015-08-03 Thread rumbu via Digitalmars-d

On Tuesday, 4 August 2015 at 06:17:15 UTC, rumbu wrote:



enum max_hex_length = T.sizeof * 2;
enum max_bin_length = T.sizeof * 8;
enum max_dec_length = cast(T)log10(T.sizeof) + 1;


Errata:
enum max_dec_length = cast(T)log10(T.max) + 1;



Re: Representation length of integral types

2015-08-03 Thread rumbu via Digitalmars-d

On Tuesday, 4 August 2015 at 04:10:33 UTC, tcak wrote:
There is a use case for me that I am given a string, and before 
even trying to convert it to integer, I want to check whether 
it is valid. One of the checks necessary for this is the length 
of string.


So, if I have received "156" and it should be converted to 
ubyte, I would check whether it is at most 3 bytes length.


While doing that someone would define that maximum length in 
the code, while length information is constant and never 
changes.


For this reason, however data types have properties like max, 
min, and init, I would ask for addition of new properties for 
integral (even for floating points as well) types as (Example 
values for ubyte):


max_hex_length = 2
max_dec_length = 3
max_bin_length = 8


enum max_hex_length = T.sizeof * 2;
enum max_bin_length = T.sizeof * 8;
enum max_dec_length = cast(T)log10(T.sizeof) + 1;



Re: Representation length of integral types

2015-08-03 Thread Kai Nacke via Digitalmars-d

On Tuesday, 4 August 2015 at 04:10:33 UTC, tcak wrote:

max_hex_length = 2
max_dec_length = 3
max_bin_length = 8


I think that there is no need to add such properties. They 
clearly belong in the application domain. min and max values 
are different because they depend on the internal representation 
of the number.


There should be no problem to use CTFE to calculate these values 
at compile time.
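The same compile-time computation works outside D too. A C++14 `constexpr` sketch (my code, mirroring the CTFE idea) that derives all three lengths from the type:

```cpp
#include <cassert>
#include <limits>

// Digits needed to print T's largest value in a given base, computed
// entirely at compile time (the C++14 analogue of D's CTFE).
template <typename T>
constexpr int max_length(int base) {
    int len = 0;
    for (unsigned long long v = std::numeric_limits<T>::max(); v > 0; v /= base)
        ++len;
    return len;
}

// The ubyte-style values from the thread, verified by the compiler:
static_assert(max_length<unsigned char>(16) == 2, "max_hex_length");
static_assert(max_length<unsigned char>(10) == 3, "max_dec_length");
static_assert(max_length<unsigned char>(2)  == 8, "max_bin_length");
```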


Regards,
Kai