Re: D Lang installation on Windows, dependency on Visual Studio?

2016-11-17 Thread kink via Digitalmars-d

On Wednesday, 16 November 2016 at 17:34:58 UTC, Mike Parker wrote:
Why is it a wart? The MS toolchain is the system development 
environment for Windows. On Mac OS X, it's Xcode, which is a 1+ 
GB download before you can do any development with clang or dmd 
or anything that depends on it. On Linux distros, if the GCC
packages aren't already installed, they need to be pulled down.


I mean, I get that the MS tools might not be perceived by some 
as the 'system' tools, but that is what they are. It's no 
different than the other systems DMD is distributed on.


+1000. I hope this ridiculous thread comes to an end (nope, ELLCC 
is not a solution).


So yes, the DMD installer is right in warning that 64-bit code 
cannot be linked if VS isn't detected. It should detect the VC++ 
Build Tools though => 
https://issues.dlang.org/show_bug.cgi?id=16688. 32-bit MS COFF 
cannot be linked either, but let's not confuse new users with an overly detailed message box.


Re: D Lang installation on Windows, dependency on Visual Studio?

2016-11-15 Thread kink via Digitalmars-d

On Tuesday, 15 November 2016 at 16:20:53 UTC, AB wrote:

On Tuesday, 15 November 2016 at 16:00:48 UTC, kink wrote:
It's not just the linker. You need the libs as well (static 
and dynamic ones), and not just the WinSDK ones, but the 
MSVCRT ones too.


I was under the impression that DMD for Windows was (meant to 
be) self-sufficient. I must have been misled by how it can 
build 32-bit applications just fine without requiring the many 
gigabytes of WinSDK and MSVCRT extras.


Did you even give the Build Tools a try? I can't install them, as they can't be installed alongside VS (yes, I do have enough disk space for 3 parallel VS installations!). The system requirements on their site say they need 200 MB.


Also, DMD ships with the most common Windows libs (no idea from 
which WinSDK though), and DMD for 32-bit Windows (the non-COFF, 
i.e., optlink flavour) uses the Digital Mars C runtime (for which 
there's obviously no 64-bit version).


The 32-bit non-COFF Windows DMD comes as a self-sufficient package for basic users. If you're one of them, fine; otherwise
get a proper dev environment and acknowledge that it'll require 
some space on disk.


Re: D Lang installation on Windows, dependency on Visual Studio?

2016-11-15 Thread kink via Digitalmars-d

On Tuesday, 15 November 2016 at 13:23:38 UTC, AB wrote:

On Tuesday, 15 November 2016 at 11:28:16 UTC, Kagamin wrote:

On Tuesday, 15 November 2016 at 10:31:23 UTC, AB wrote:

Are there plans to write a homebrew 64-bit linker for DMD?


There are already ld from mingw and lld from llvm team.


Why aren't they used and distributed in DMD for Windows by 
default?


If the tools mentioned above (LD and LLD) are available and 
usable on Windows x64 instead of the ones provided in heavily 
bloated packages by Microsoft, how come the DMD installer for 
Windows doesn't offer them as an alternative (or better yet as 
the default)?


It's not just the linker. You need the libs as well (static and 
dynamic ones), and not just the WinSDK ones, but the MSVCRT ones 
too. Just use the Visual C++ Build Tools if there's not enough 
disk space in 2016 *cough*. IMO there's just no way of doing 
professional Windows development without the MS toolchain. And no 
reason to complain about it just because most Linux distros come 
with a fully fledged development ecosystem.




Re: Quality of errors in DMD

2016-09-02 Thread kink via Digitalmars-d

On Friday, 2 September 2016 at 14:26:37 UTC, Ethan Watson wrote:
But compared to MSVC, I've found the error reporting of DMD to 
be severely lacking. In most cases with MSVC, I have an error 
code that I can google for which is (sometimes) thoroughly 
documented.


You're not really comparing DMD to MSVC, are you? ;) Imagine how DMD would look if it had comparable financial backing...


Anyway, we all know that error reporting can be much improved, 
but complaining about it doesn't really help (at best, it moves 
that item up a bit on the mental agenda of some contributors) - 
getting yourself involved does.


Re: Optimisation possibilities: current, future and enhancements

2016-08-26 Thread kink via Digitalmars-d

On Friday, 26 August 2016 at 09:30:52 UTC, Timon Gehr wrote:
Better performance is better even when it is not the primary 
concern.

It's not the compiler's business to judge coding style


The compiler is free to choose not to implement complex optimizations just so that people get super-duper performance for crappy code, especially with the limited manpower we have. Fixing severe DMD bugs (there are still enough of those) is 100x more important.



// original code. not "bad".

int foo(int x) pure { ... }

int bar(int x) pure { return foo(x) + foo(10 - x); }

void main() {
    writeln(bar(5));
}

// ==> inlining

void main() {
    writeln(foo(5) + foo(10 - 5));
}

// ==> constant folding, "bad" code

void main() {
    writeln(foo(5) + foo(5));
}


Inlining and subsequent constant folding are only possible if the callee isn't an external symbol. For those cases, existing LLVM/GCC
optimizations kick in and render at least this idea (automatic 
caching of pure function calls) obsolete (in most cases), see the 
LDC asm.
This is my main point. Higher-level, D-specific optimizations 
would be nice, but modern backends, coupled with optimized builds 
(`ldc2 -singleobj m1.d m2.d ...`) eat some of the ideas here for 
breakfast. So I'd focus on cases where LDC/GDC don't exploit 
optimization potential already.
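
For illustration, here's a minimal two-module sketch (module names and foo's body are made up) of the optimized-build scenario mentioned above; compiling both modules into a single object lets LLVM see foo's body from m2 and fold the pure calls away:

// m1.d
module m1;
int foo(int x) pure { return x * x; }

// m2.d
module m2;
import m1;
import std.stdio;

int bar(int x) pure { return foo(x) + foo(10 - x); }

void main() { writeln(bar(5)); }

// Optimized build into a single object file (assuming ldc2 is on the PATH):
//   ldc2 -O -release -singleobj m1.d m2.d
// With both modules compiled into one LLVM module, bar(5) can be inlined and
// constant-folded; inspect the emitted asm (e.g., via -output-s) to verify.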


Re: Optimisation possibilities: current, future and enhancements

2016-08-26 Thread kink via Digitalmars-d

On Friday, 26 August 2016 at 05:50:52 UTC, Basile B. wrote:

On Thursday, 25 August 2016 at 22:37:13 UTC, kinke wrote:

On Thursday, 25 August 2016 at 18:15:47 UTC, Basile B. wrote:
From my perspective, the problem with this example isn't 
missed optimization potential. It's the code itself. Why waste implementation effort on such optimizations if it would only reward people writing such ugly code with performance equal to the saner `2 * foo.foo()`? The latter is a)
shorter, b) also faster with optimizations turned off and c) 
IMO simply clearer.


You're too focused on the example itself (let's find a non-trivial example, but then the generated asm would be longer). The point you miss is that it just *illustrates* what should happen when many calls to a pure const function occur in a single subprogram.


I know that it's just an illustration. But I surely don't like 
any function with repeated calls to this pure function. Why not 
have the developer code in a sensible style (cache that result 
once for that whole 'subprogram' manually) if performance is a 
concern? A compiler penalizing such bad coding style is 
absolutely fine by me.
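
A minimal sketch (made-up names) of what I mean by caching the result manually:

int expensive(int x) pure { return x * x; } // stand-in for a costly pure function

int consumer(int x)
{
    immutable cached = expensive(x); // evaluate the pure call once...
    return cached + cached + cached; // ...and reuse it, instead of calling three times
}

void main()
{
    assert(consumer(3) == 27);
}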


Re: Usability of D on windows?

2016-08-24 Thread kink via Digitalmars-d
I managed to find and install LDC, and that mostly, somewhat 
works.


Except that half the time the compiler crashes with a stack 
trace, and sometimes it just hangs. Occasionally if I move and 
rearrange the code it will manage to compile it. There are also 
worrying comments on the LDC web page about how "Most programs 
work just fine" and "Several unit tests still fail" and stuff 
about it relying on stuff from Visual C++ in order to work that make me seriously doubt its stability and correctness.


The times when Windows wasn't a first-class target for LDC are over. We've had Windows CI for a year now, so there should be no severe Windows-specific bugs (all unittests and the LDC-specific set of DMD tests pass). What's still missing is proper debug info, and I'm unsure about DLL support.


As you didn't mention the LDC version you tried, I recommend 
using bleeding-edge master for Windows: 
https://github.com/ldc-developers/ldc/releases/tag/LDC-Win64-master


I've never experienced any hangs; compiler crashes may occur, but 
should be very rare. Be sure to let us know when you hit 
something at https://github.com/ldc-developers/ldc/issues - we 
can't fix stuff we don't know about!


Interestingly, I found that LDC crashes when I compile my code from Visual D but not from the command line.

Worth investigating.

I'd like an honest opinion... Am I wasting my time trying to do this project in D on Windows? I'll continue to use and support the language, but I don't want to fight a losing battle and end up having to move away anyway...


I've been working on LDC for a couple of years (focusing on 
Windows), mainly because I want to be able to replace C++ at work 
at some point. Instead of waiting for others to fix it, I 
realized one needs to get involved to push things forward. I 
still don't write any D code except for unittests and occasional 
DMD front-end mods, so I'd love to get some feedback on 
real-world usage of LDC on Windows.


On Tuesday, 23 August 2016 at 21:25:29 UTC, Cauterite wrote:
Well, you're fighting a losing battle by trying to use GDC/LDC 
on Windows, since Windows is priority #2 for D, and GDC/LDC are 
still struggling with priority #1 (Linux).


@Cauterite: You obviously have no idea about LDC at least, so please stop making such uninformed claims.


Re: Evaluation order of "+="

2016-07-12 Thread kink via Digitalmars-d

On Tuesday, 12 July 2016 at 02:27:04 UTC, deadalnix wrote:
There was a very lengthy discussion about this in the past. DMD
is correct on that one.


Great, so after that very lengthy discussion, why did nobody add 
a frigging test?! Argh.


Re: D for Game Development

2015-08-12 Thread kink via Digitalmars-d

On Wednesday, 12 August 2015 at 14:57:17 UTC, jmh530 wrote:
Well I'm not sure what percent "serious system programming" is 
done by other people, but I don't do any.


I understand your points. I meant to say that D is a systems programming language (too), so it's tightly coupled to some internals of the OS. And with Windows being a proprietary OS, Visual Studio (or, more precisely, at least its runtime) is likely to remain a requirement in the future as well.


Almost ;) Proper support for Win64 in LDC is about to be completed with the next release. It will most likely require Visual Studio 2015, but that's about it; you'll just need to extract an LDC archive. When invoking ldc2.exe, you'll need to make sure some environment variables are properly set up (e.g., by using a Visual Studio command prompt) so that it can find the linker, libs etc.


Last time I built clang (from source, using Visual Studio) I was 
amazed by how painless that was. LLVM requires VS 2013 atm (at 
least for building), but Windows/MSVC support is still being 
finalized (native MSVCRT exception handling etc.). VS 2008 is 
really quite old by now, so I'd really recommend upgrading (VS 
2015 Community is free btw).


Re: D for Game Development

2015-08-12 Thread kink via Digitalmars-d

On Tuesday, 11 August 2015 at 00:56:57 UTC, Manu wrote:
On 11 August 2015 at 01:15, jmh530 via Digitalmars-d wrote:
One big positive for DMD is that it is very easy to install on 
Windows. Just about anyone can get up and running quite 
easily. It doesn't require the installation of MSVC (which I 
can't stand) or Min-GW at all. If DMD and LDC are sort of 
merged in the way that you say, I just hope care is taken to 
ensure that it is easy for people to get started with it.


I think some care would be taken to bundle the distribution so 
it's both minimal and convenient for users to install and get 
started with.


Afaik DMD for Win64 requires the MS linker, so good luck without 
Visual Studio then. Same goes for LDC on Win64, although an LLVM 
COFF linker is under development. Serious system programming on 
Windows without MSVC and its C runtime? Not really an option; 
MinGW appears to be a dead end and never really fit into the Windows ecosystem.


Re: auto ref is on the docket

2015-06-30 Thread kink via Digitalmars-d

On Monday, 29 June 2015 at 19:10:07 UTC, Atila Neves wrote:
It seems to me that there's a lot of confusion in this thread. 
I think I understand what you're saying but I'm not sure 
everyone else does. So correct me if I'm wrong: your stance 
(which is how I understand how things should be done in D) is 
that if a person wants to bind an rvalue to a ref parameter for 
performance, they shouldn't; they should pass it by value since 
it'll either get constructed in place or moved, not copied.


Essentially, what this guy said about C++ (not the original I 
read, that's now a 404):


http://m.blog.csdn.net/blog/CPP_CHEN/17528669

The reason rvalue references even exist in C++11/14 is because 
rvalues can bind to const&. I remember hearing/reading Andrei 
say that D doesn't need them precisely because in D, they 
can't. Pass by value and you get the C++ behaviour when rvalue 
references are used: a move.


In some cases it's even better: in-place construction. That's perfect - for rvalues, as you said. But for lvalues, a copy must be made - afaik, in D you don't have the option to explicitly move an lvalue into a by-value parameter as you can in C++. And even moving can be expensive if the type is big.
So for efficiency, template `auto ref` is a nice option, but it requires a template and leads to code bloat if both lvalues and rvalues are used as arguments (see the sketch below). For these reasons, we want the additional option to pass both lvalues and rvalues by ref.
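
A minimal sketch (made-up names) of that bloat: the same template function is instantiated twice, once with a ref parameter for the lvalue argument and once taking the rvalue by value.

struct Vec3 { float x = 0, y = 0, z = 0; }

// `auto ref` in a template: ref-ness is decided per argument, so distinct
// instantiations are generated for lvalue and rvalue arguments.
float firstComponent(T)(const auto ref T v) { return v.x; }

void main()
{
    Vec3 v;
    auto a = firstComponent(v);       // lvalue -> instantiation with ref parameter
    auto b = firstComponent(Vec3());  // rvalue -> second instantiation, by value
    assert(a == 0 && b == 0);
}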


Jonathan wants a non-templated function to specify explicitly 
whether it wants to accept an rvalue by ref, apparently to 
discriminate between 'output parameters' (mutable `ref`, lvalues 
only) and another category of `auto ref` parameters which also 
accept rvalues via lowering (I don't know how one would describe 
that category - to me it seems completely arbitrary and adds 
complexity, not to mention different interpretations by different 
developers).


I'm solely interested in the safety of binding rvalues to refs, 
meaning that the compiler should prevent these rvalue refs from 
being escaped (I'm not talking about returning the ref via 
`return ref`, which is safe anyway when propagating the 
rvalue/lvalue origin, but storing derived pointers in external 
state). That's where `scope ref` (unescapable reference) comes 
in. But I prefer the compiler to infer scope-ness automatically 
in the future and would thus like to allow binding rvalues to 
`ref` in general - for not-so-performance-critical functions 
taking a value type by reference (for some value types, copying 
just doesn't make sense - streams, loggers etc.).
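
To illustrate the kind of escaping I mean, a minimal sketch (made-up names) of what `scope ref` is supposed to prevent:

int* escaped;

void keep(ref int x) { escaped = &x; } // the ref escapes into external state

void main()
{
    int i = 42;
    keep(i);      // fine: i outlives the call
    // keep(123); // if rvalues could bind to plain `ref`, `escaped` would be
    //            // left dangling once the temporary dies
}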


Re: auto ref is on the docket

2015-06-25 Thread kink via Digitalmars-d
On Wednesday, 24 June 2015 at 23:30:53 UTC, Jonathan M Davis 
wrote:
But this has _nothing_ to do with scope, and scope ref was 
already rejected. The whole point of this is support having a 
function accept both rvalues and lvalues, not to do anything 
with scope.


And given that what scope does has never even been properly 
defined - all that the spec says about scope parameters is 
"references in the parameter cannot be escaped (e.g. assigned 
to a global variable)".


Yeah right. And all we need to safely pass in rvalue references 
is exactly the constraint that they won't escape. And scope alone 
doesn't imply pass-by-ref (`in` params aren't currently passed by 
ref).


[...] Before we can even consider what something like scope ref 
might mean, we'd have to properly define what scope means. And 
all we have for it is the basic idea of what it's supposed to 
do - none of the details - and trying to define scope ref 
before defining what scope means in general could totally 
hamstring us when properly defining scope later.


Is there a roadmap for "later"? It seems like these things always 
just get postponed, further postponed and never really done. What 
about the original argument 2 years ago, when rejecting DIP36, 
where Andrei explained that the idea was to make `ref` itself non-escapable and to delegate escaping refs to pointers? I don't see how that could be implemented without a breaking change, so I guess it may be something for D3. But we have needed a solution for rvalue refs for years and, as you know, have been waiting ever since, as `auto ref` for templates alone is simply not enough.


To quote Manu from the DIP36 thread:

It is, without doubt, the single biggest complaint I've heard from virtually every programmer I've introduced D to.


+1 kazillion


Re: auto ref is on the docket

2015-06-24 Thread kink via Digitalmars-d
On Wednesday, 24 June 2015 at 11:19:04 UTC, Jonathan M Davis 
wrote:

[...]

3. Add a new attribute which does what's being proposed for 
auto ref for non-templated functions, in which case, we can use 
the non-templated behavior with templates as well and thus 
avoid template bloat when all you want is for your templated 
function to accept both lvalues and rvalues. auto ref, of 
course, then stays exactly as it is now.


At the moment, it seems that #2 is the most likely, and that's 
probably fine, but I do wonder if we'd be better off with #3, 
especially when you consider how much D code tends to be 
templated and how much code bloat auto ref is likely to 
generate with templated functions.


- Jonathan M Davis


If that wasn't clear before, I'm all for #3 too. Just call it 
`scope ref` and simplify the PR a lil' bit as suggested by Marc 
in an earlier post 
[http://forum.dlang.org/post/ricvtchihgzyisbkz...@forum.dlang.org].


Re: auto ref is on the docket

2015-06-23 Thread kink via Digitalmars-d

On Tuesday, 23 June 2015 at 09:57:26 UTC, Marc Schütz wrote:

On Monday, 22 June 2015 at 19:05:28 UTC, kinke wrote:
[...]
To clarify: What I meant by my comment was that const-ness 
should not be a precondition for allowing rvalue refs. Mutable 
rvalue refs are fine.


I know and see it the same way.

As I have already pointed out in another thread, I'd go one 
step further and propose an extremely convenient `in T` for 
this very common use case:


* The argument is passed by value (`const T`) if the compiler 
assumes moving/copying is more efficient than passing a 
reference (with its indirection on the callee side) for the 
particular target environment (hardware, ABI), e.g., for 
plain-old-data types T fitting into 1-2 registers, and obviously for Object references.
* Otherwise, the argument is passed by-ref (`in ref T`). As 
`in T` doesn't mention any ref at all, it's clear that the 
hidden reference cannot escape.


Theoretically, `immutable ref` by itself would already allow 
these semantics (without the `scope` that `in` implies). 
Because (disregarding identity/addresses) for an immutable 
object there is no observable difference between pass-by-value 
and pass-by-reference. The same is not always true for `const 
ref`, whenever aliasing is possible:


void foo(const ref int a, ref int b) {
    int x = a;
    b++;
    assert(x == a); // can fail if both refer to the same variable
                    // (similar with global variables)
}

To guarantee this from the caller's POV, the callee must be 
pure and the parameters must be known not to alias each other.


This is obviously true. Rvalues aren't affected as they cannot 
alias another parameter by definition. Lvalues are affected if passed by ref while the same instance is also accessible by mutable ref via another parameter or a global. But as shown by your example, that danger is always there. The proposed `in` semantics make it less obvious, that's true, but I still think it'd be worth it, as these aliasing bugs are a pain to track down but, in my experience, extremely rare.
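
For illustration, a minimal caller (reusing your foo) that triggers exactly that hazard:

// Both parameters are bound to the same variable, so the assert inside
// foo() fails in a non-release build.
void foo(const ref int a, ref int b)
{
    int x = a;
    b++;
    assert(x == a); // fails: a and b alias g
}

void main()
{
    int g = 1;
    foo(g, g);
}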


Re: auto ref is on the docket

2015-06-23 Thread kink via Digitalmars-d

On Monday, 22 June 2015 at 23:43:07 UTC, Timon Gehr wrote:
The problems with transitive const to be encountered in 
day-to-day C++ work are few.


Trolling as always. You know what I meant. Haven't had to resort 
to mutable fields and/or const_casts in a long time. Mostly a 
matter of good design imho.


As the different behavior of templated `auto ref` vs. non-templated `auto ref` seems to be an issue for several people here, why not just call the non-templated one `scope ref`, so that it's usable for templates too and also enables the `in ref` shortcut for const? I find `const auto ref` extremely clumsy.


Re: rvalue references

2015-06-10 Thread kink via Digitalmars-d

On Tuesday, 9 June 2015 at 20:25:28 UTC, Namespace wrote:
No opinions on the semantics for `in` I proposed earlier? `in 
T` would be something like a non-escapable `const auto ref T` 
accepting rvalues without codebloat and avoiding the 
indirection for small POD types T. Wouldn't that eliminate 99% 
of the use cases for `auto ref`, improve readability 
substantially and be the solution for all the C++ guys missing 
the rvalue-bindability of `const T&`?


'in' means 'const scope', and 'scope ref' together with 'in ref' was proposed in DIP 36, which was rejected. Let's see if Andrei thinks that my current work is satisfactory. I hope so. :)


Can you point me to the justifications for the rejection? It 
should be pretty obvious that something like that is required. 
Personally, I'd go with `ref` being non-escapable by default and 
hence providing implicit rvalue-bindability, and having to use 
`escapable ref` if the parameter may escape and disallowing 
rvalues for these parameters. I know it's never gonna happen, 
it's just what I'd prefer.


I know what `in` currently means. Your proposed `in ref T` syntax 
is imo not much better than C++ `const T&`, so I'd prefer a 
simple and convenient `in T`. Semantics would be identical to 
your `in ref` with the additional optimization for small POD 
types. And for beginners, one could simply describe it as:
'Use the in keyword for a parameter if you're not going to mutate 
it. Don't rely on its identity as the argument may be passed by 
value or reference, whatever seems more efficient for the 
compiler and the target platform.'
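
Purely illustrative, a minimal sketch of how that would read in user code (note: with D's current `in`, meaning `const scope`, both calls below compile as well, but the argument is always passed by value; the proposal would let the compiler pick by-value or by-ref per type and target ABI):

struct BigStruct { int[64] data; }

int firstElement(in BigStruct s) { return s.data[0]; }

void main()
{
    BigStruct b;
    auto x = firstElement(b);            // lvalue argument
    auto y = firstElement(BigStruct());  // rvalue binds just as conveniently
    assert(x == 0 && y == 0);
}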


Re: rvalue references

2015-06-09 Thread kink via Digitalmars-d
On Tuesday, 9 June 2015 at 15:08:07 UTC, Steven Schveighoffer 
wrote:
Because it's not moved. It's written in the stack where it will 
be passed to the next function.


Hmm, you mean the stack area for the callee's function parameters? That's not always going to work; e.g., on Win64 the first 4 args are always passed in registers. And, as I said, that ABI doesn't support by-value passing of types > 64 bits (let's exclude vector types here), so rvalues > 64 bits sadly cannot be constructed in place without violating the Win64 ABI - they'll have to be passed by ref.


Re: rvalue references

2015-06-09 Thread kink via Digitalmars-d
On Tuesday, 9 June 2015 at 13:13:53 UTC, Steven Schveighoffer 
wrote:
It's actually faster to pass an rvalue by value, because it's 
constructed on the stack anyway.


I seriously doubt that's true for a large struct, e.g., something 
containing a large static array. Why move/copy the damn thing if 
I'm only going to read a tiny portion of it?


And please don't forget the ABI in that regard. E.g., on Win64, 
only POD types <= 64 bits (and whose size is a power of 2) are 
really passed by value (in a register or on the stack); all other types are passed as a reference to a dedicated copy allocated on the caller's stack. So in this case, the indirection is enforced by the ABI anyway. If the callee is not going to modify the parameter, the copy should obviously be elided, falling back to classic by-ref passing.
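
A minimal sketch (made-up names) of the kind of situation I mean:

// A large struct of which the callee only reads a tiny portion. Passing it
// by value copies the whole static array at the language level; on Win64,
// the ABI turns that into a reference to a caller-allocated copy anyway.
struct Huge { int[1024] data; }

int firstByValue(Huge h)         { return h.data[0]; }
int firstByRef(const ref Huge h) { return h.data[0]; } // no copy, but rejects rvalues

void main()
{
    Huge h;
    auto a = firstByValue(h);
    auto b = firstByRef(h);
    // firstByRef(Huge()); // error in D: rvalues can't bind to (const) ref
    assert(a == 0 && b == 0);
}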


Re: rvalue references

2015-06-09 Thread kink via Digitalmars-d

On Monday, 8 June 2015 at 20:16:13 UTC, bitwise wrote:
static Mat4 transform()(const auto ref Vec3 pos, const auto ref 
Vec3 scale, const auto ref Quat rot);


Horrific.

static Mat4 transform(in Vec3 pos, in Vec3 scale, in Quat rot);

would be so much better...