Re: Always false float comparisons

2016-05-09 Thread Ethan Watson via Digitalmars-d

On Monday, 9 May 2016 at 09:10:19 UTC, Walter Bright wrote:

Don Clugston pointed out in his DConf 2016 talk that:

float f = 1.30;
assert(f == 1.30);

will always be false since 1.30 is not representable as a 
float. However,


float f = 1.30;
assert(f == cast(float)1.30);

will be true.

So, should the compiler emit a warning for the former case?


I'd assume in the first case that the float is being promoted to 
double for the comparison. Is there already a warning for loss of 
precision? We treat warnings as errors in our C++ code, so C4244 
triggers all the time in MSVC with integer operations. I just 
tested that float initialisation in MSVC; initialising a float 
with a double triggers C4305.


So my preference is "Yes please".

https://msdn.microsoft.com/en-us/library/th7a07tz.aspx
https://msdn.microsoft.com/en-us/library/0as1ke3f.aspx


Re: Always false float comparisons

2016-05-09 Thread Ethan Watson via Digitalmars-d

On Monday, 9 May 2016 at 12:30:13 UTC, Walter Bright wrote:

Promoting to double does not lose precision.


Yes, badly worded on my part; I was getting at the original 
assignment from double to float.


Re: How are you enjoying DConf? And where to go next?

2016-05-10 Thread Ethan Watson via Digitalmars-d

On Tuesday, 10 May 2016 at 09:19:11 UTC, Chris wrote:
Are you talking about the whole country or Istanbul? Anyway, 
it's not the average homicide rate I'm concerned with, it's the 
fact that you might happen to be at the wrong place at the 
wrong time when a bomb goes off.


I went to Tel Aviv in late 2012, around the time Hamas were 
trading missiles with the Israelis. There was a suicide bomb on a 
bus a few days before I arrived.


Yet I still had a much lower chance of being caught in such a 
blast than I do crossing the street and being hit by a truck. 
Actually being caught in a terrorist attack is a really low 
probability. Naively comparing the population of Istanbul to 
people killed or injured in the March 2016 attacks puts the 
probability of being in that blast somewhere around the same 
chances of choking on your food and dying.


Chalk me up as not seeing the point of terrorism hysteria.


Re: Always false float comparisons

2016-05-12 Thread Ethan Watson via Digitalmars-d
On Thursday, 12 May 2016 at 13:03:58 UTC, Steven Schveighoffer 
wrote:
Not taking one side or another on this, but due to D doing 
everything with reals, this is already the case.


Mmmm. I don't want to open up another can of worms right now, but 
our x64 C++ code only emits SSE instructions at compile time (or 
AVX on the Xbox One). The only thing that attempts to use reals 
in our codebase is our D code.


Re: Always false float comparisons

2016-05-12 Thread Ethan Watson via Digitalmars-d
On Thursday, 12 May 2016 at 14:29:01 UTC, Steven Schveighoffer 
wrote:
There was a question on the forums a while back about 
equivalent C++ code that didn't work in D. The answer turned 
out to be, you had to shoehorn everything into doubles in order 
to get the same answer.


I can certainly see that being the case, especially when dealing 
with SSE-based code. floats and doubles in XMM registers don't 
get calculated at 80-bit precision internally, their storage size 
dictates their calculation precision. Which has led MSVC to 
promoting floats to doubles for CRT functions when it thinks it 
can get away with it (and one instance where the compiler forgot 
to convert back to float afterwards and thus the lower 32 bits of 
a double were being treated as a float...)


It's fun comparing assembly too. There's one particular function 
we have here that collapsed to something like 20-30 lines of 
SSE-based code (half after I hand optimised it with branchless 
SSE intrinsics and without a call to fmod). The same function in 
D resulted in a significantly larger amount of x87 code.


I don't miss x87 at all. But this is getting OT.


Re: The Case Against Autodecode

2016-05-13 Thread Ethan Watson via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

*rant*


Actually, chap, it's the attitude that's the turn-off in your 
post there. Listing problems in order to improve them, and 
listing problems to convince people something is a waste of time 
are incompatible mindsets around here.


Re: Always false float comparisons

2016-05-16 Thread Ethan Watson via Digitalmars-d

On Monday, 16 May 2016 at 10:25:33 UTC, Andrei Alexandrescu wrote:
I'm not sure about this. My understanding is that all SSE has 
hardware for 32 and 64 bit floats, and the 80-bit hardware 
is pretty much cut-and-pasted from the x87 days without anyone 
really looking into improving it. And that's been the case for 
more than a decade. Is that correct?


Pretty much. On the OS side, Windows has officially deprecated 
x87 for the 64-bit version in desktop mode, and it's flat out 
forbidden in kernel mode. All development focus from Intel has 
been on improving the SSE/AVX instruction set and pipeline.


And on a gamedev side, we generally go for fast over precise. Or, 
more to the point, an acceptable loss in precision. The C++ 
codegen spits out SSE/AVX code by default in our builds, and I 
hand optimise with appropriate intrinsics certain functions that 
get inlined. SIMD is an even more appropriate point to bring up 
here - gaming is trending towards more parallel operations, and 
operating on a single float at a time is not the correct way to 
get the best performance out of your system.


This is one of those things where I can see the point for the D 
compiler to do things its own way - but only when it expects to 
operate in a pure D environment. We have heavy interop between 
C++ and D. If simple functions can give different results at 
compile time without a way for me to configure the compiler on 
both sides, what actual benefits does that give me?


Re: Always false float comparisons

2016-05-17 Thread Ethan Watson via Digitalmars-d

On Monday, 16 May 2016 at 14:32:55 UTC, Andrei Alexandrescu wrote:

It is rare to need to actually compute the inverse of a matrix.


Unless you're doing game/graphics work ;-) 4x3 or 4x4 matrices 
are commonly used to represent transforms in 3D space in every 3D 
polygon-based rendering pipeline I know of. It's even a 
requirement for fixed-function OpenGL 1.x.


Video games - also known around here as "The Exception To The 
Rule".


(Side note: My own preference is to represent transforms as a 
quaternion and vector. Inverting such a transform is a simple 
matter of negating a few components. Generating a matrix from 
such a transform for rendering purposes is trivial compared to 
matrix inversion.)
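
To make that concrete, here's a rough sketch of what I mean - not 
our actual code, and the names are made up - showing why the 
inverse is so cheap compared to a matrix inverse:

float[3] cross( const float[3] a, const float[3] b )
{
    return [ a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] ];
}

// Rotate v by the unit quaternion q = (x, y, z, w):
// v' = v + 2 * cross( q.xyz, cross( q.xyz, v ) + w*v )
float[3] rotate( const float[4] q, const float[3] v )
{
    const float[3] u = [ q[0], q[1], q[2] ];
    float[3] t = cross( u, v );
    t[0] += q[3] * v[0]; t[1] += q[3] * v[1]; t[2] += q[3] * v[2];
    const float[3] c = cross( u, t );
    return [ v[0] + 2*c[0], v[1] + 2*c[1], v[2] + 2*c[2] ];
}

struct Transform
{
    float[4] q; // rotation as a unit quaternion (x, y, z, w)
    float[3] t; // translation

    Transform inverse() const
    {
        // Conjugate = inverse rotation; the new translation is
        // the negated old one rotated by that conjugate.
        const float[4] qc = [ -q[0], -q[1], -q[2], q[3] ];
        return Transform( qc, rotate( qc, [ -t[0], -t[1], -t[2] ] ) );
    }
}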


Re: Always false float comparisons

2016-05-17 Thread Ethan Watson via Digitalmars-d

On Tuesday, 17 May 2016 at 02:00:24 UTC, Manu wrote:
If Ethan and Remedy want to expand their use of D, the compiler 
CAN NOT emit x87 code. It's just a matter of time before a loop 
is in a hot path.


I really need to see what the codegen for the latest DMD looks 
like, I have no idea what the current state is. But this isn't 
just true for Remedy, it's true for any engine programmer in 
another company thinking of using D.


In context of this entire discussion though, a compiler switch to 
give the codegen I want is my preference. I have no desire to 
dictate how other people should use D/DMD, I just want the option 
to use it the way I need to.


Re: Always false float comparisons

2016-05-17 Thread Ethan Watson via Digitalmars-d

On Wednesday, 18 May 2016 at 05:40:57 UTC, Walter Bright wrote:
That wasn't my prescription. My prescription was either 
changing the algorithm so it was not sensitive to exact 
bits-in-last-place, or to use roundToFloat() and 
roundToDouble() functions.


With Manu's example, that would have been a good old fashioned 
matrix multiply to transform a polygon vertex from local space to 
screen space, with whatever other values were required for the 
render effect. The problem there being that the hardware itself 
only calculated 24 bits of precision while dealing with 32 bit 
values. Such a solution was not an option.


Gaming hardware has gotten a lot less cheap and nasty. But Manu 
brought it up because it is conceptually the same problem as 
32/64 bit run time values vs 80 bit compile time values. Every 
solution offered here either comes down to "rewrite your code" or 
"increase code complexity", neither of which is often an option 
(changing the code in Manu's example required a seven+ hour 
compile time each iteration of the code; and being a very hot 
piece of code, it needed to be as simple as possible to maintain 
speed). Unlike the hardware, game programming has not gotten less 
cheap nor nasty. We will cheat our way to the fastest performing 
code using whatever trick we can find that doesn't cause the 
whole thing to come crashing down. The standard way for creating 
float values at compile time is to calculate them manually at the 
correct precision and put a #define in with that value. Being 
unable to specify/override compile time precision means that the 
solution is to declare enums in the exact same manner, and might 
result in more maintenance work if someone decides they want to 
switch from float to double etc. for their value.
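
To illustrate the enum equivalent (a hypothetical example, not 
code from either codebase):

// Pin the precision explicitly rather than relying on whatever
// precision the compiler folds constants at.
enum float  kBrightnessScaleF = cast( float )1.30;
enum double kBrightnessScaleD = cast( double )1.30;

If someone later decides the value should be a double rather than 
a float, every such enum needs touching - which is the maintenance 
cost mentioned above.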


Re: Always false float comparisons

2016-05-18 Thread Ethan Watson via Digitalmars-d

On Wednesday, 18 May 2016 at 06:57:58 UTC, Walter Bright wrote:
I don't understand the 24 vs 32 bit value thing. There is no 32 
bit mantissa floating point type. Floats have 24 bit mantissas, 
doubles 52.


Not in the standards, no. But older gaming hardware was never 
known to be standards-conformant.


As it turns out, the original hardware manuals can be found on 
the internet.


https://www.dropbox.com/s/rsgx6xmpkf2zzz8/VU_Users_Manual.pdf

Relevant info copied from page 27:

Calculation
* A 24-bit calculation including hidden bits is performed, and 
the result is truncated.  The rounding-off operation in IEEE 754 
is performed in the 0 direction, so the values for the least 
significant bit may vary.



In any case, the problem Manu was having was with C++.


VU code was all assembly, I don't believe there was a C/C++ 
compiler for it.


My proposal would make the behavior more consistent than C++, 
not less.


This is why I ask for a compiler option to make it consistent 
with the C++ floating point architecture I select. Making it 
better than C++ is great for use cases where you're not 
inter-opping with C++ extensively.


Although I do believe saying C++ is just clouding things here. 
Language doesn't matter, it's the runtime code using a different 
floating point instruction set/precision to compile time code 
that's the problem. See the SSE vs x87 comparisons posted in this 
thread for a concrete example. Same C++ code, different 
instruction sets and precisions.


Regardless. Making extra build steps with pre-calculated values 
or whatever is of course a workable solution, but it also raises 
the barrier of entry. You can't just, for example, select a 
compiler option in your IDE and have it just work. You have to go 
out of your way to make it work the way you want it to. And if 
there's one thing you can count on, it's end users being lazy.


Re: Always false float comparisons

2016-05-18 Thread Ethan Watson via Digitalmars-d

On Wednesday, 18 May 2016 at 08:21:18 UTC, Walter Bright wrote:

The constant folding part was where, then?


Probably on the EE, executed with different precision (32-bit) 
and passed over to the VU via one of its registers. Manu spent 
more time with that code than I did and can probably give exact 
details. But pasting the code, given it's proprietary code from a 
15 year old engine, will be difficult at best considering the 
code is likely on a backup tape somewhere.


You're also asking for a mode where the compiler for one 
machine is supposed to behave like hand-coded assembler for 
another machine with a different instruction set.


Actually, I'm asking for something exactly like the /arch option 
for MSVC, the -mfpmath option for GCC, etc., and have it respect 
that for CTFE.


Re: Always false float comparisons

2016-05-18 Thread Ethan Watson via Digitalmars-d

On Wednesday, 18 May 2016 at 08:55:03 UTC, Walter Bright wrote:

MSVC doesn't appear to have a switch that does what you ask for


I'm still not entirely sure what the /fp switch does for x64 
builds. The documentation is not clear in the slightest and I 
haven't been able to find any concrete information. As near as I 
can tell it has no effect as the original behaviour was tied to 
how it handles the x87 control words. But it might also be 
possible that the SSE instructions emitted can differ depending 
on what operation you're trying to do. I have not dug deep to see 
exactly how the code gen differs. I can take a guess that 
/fp:precise was responsible for promoting my float to a double to 
call CRT functions, but I have not tested that so that's purely 
theoretical at the moment.


Of course, while this conversation has mostly been for compile 
time constant folding, the example of passing a value from the EE 
and treating it as a constant in the VU is still analogous to 
calculating a value at compile time in D at higher precision than 
the instruction set the runtime code is compiled to work with.


/arch:sse2 is the default with MSVC x64 builds (Xbox One defaults 
to /arch:avx), and it sounds like DMD has defaulted to sse2 
for a long time. The exception being the compile time behaviour. 
That compile time behaviour conforming to the runtime 
behaviour is an option I want, with the default being whatever is 
decided in here. Executing code at compile time at a higher 
precision than what SSE dictates is effectively undesired 
behaviour for our use cases.


And in cases where we compile code for another architecture on 
x64 (let's say ARM code with NEON instructions, as it's the most 
common case thanks to iOS development) then it would be forced to 
fallback to the default. Fine for most use cases as well. It 
would be up to the user to compile their ARM code on an ARM 
processor to get the code execution match if they need it.




Re: Always false float comparisons

2016-05-18 Thread Ethan Watson via Digitalmars-d

On Wednesday, 18 May 2016 at 11:17:14 UTC, Walter Bright wrote:
Again, even if the precision matches, the rounding will NOT 
match, and you will get different results randomly dependent on 
the exact operand values.


We've already been burned by middlewares/APIs toggling MMX flags 
on and off and not cleaning up after themselves, and as such we 
strictly control those flags going in to and out of such areas. 
We even have a little class with implementations for x87 
(thoroughly deprecated) and SSE that is used in a RAII manner, 
copying the MMX flag on construction and restoring it on 
destruction.


I appreciate that it sounds like I'm starting to stretch to hold 
to my point, but I imagine we'd also be able to control such 
things with the compiler - or at least know what flags it uses so 
that we can ensure consistent behaviour between compilation and 
runtime.


Re: Andrei's list of barriers to D adoption

2016-06-05 Thread Ethan Watson via Digitalmars-d
There's definitely an information war that needs to be won. That 
D has been around for 15-odd years and is still considered an 
emerging language is something of a problem.


I linked my DConf talks on a games industry forum, and the first 
response was that "It looks like a poor man's Rust". A notion I 
quickly dispelled, but it's a mindset that needs solid, linkable 
examples to work against. The talk I'm hoping to hold at GDC 
Europe in August will have some examples to that effect (they 
still haven't got back to me with confirmation). I'll need to 
make that more visible than slides/video once the talk is done.


Echoing the need for decimal support. I won't use it myself, but 
I know it's critical for finance.


Re: Andrei's list of barriers to D adoption

2016-06-06 Thread Ethan Watson via Digitalmars-d

On Monday, 6 June 2016 at 08:00:30 UTC, Laeeth Isharc wrote:

Hi Ethan.


Ahoy.

But don't you think that as a language D has intrinsically 
matured quite slowly? Sociomantic began in 2008, or 2009, 
whenever it was, but at the time, given where the language was, 
that must have been quite a courageous decision if one thought 
one might be using it to process large amounts of data.


There is nothing wrong with maturing more slowly - indeed maybe 
more complex creatures take time for everything to come 
together. Things develop at their own pace.


Maturing slowly tends to be a counterpoint when I talk about it. 
And it's purely down to an information war thing. Compare to how 
C++ matures. And by matures, I mean the old dinosaur becomes more 
and more fossilized with age and is kept animated by cybernetic 
enhancements bolted on to the side in a haphazard manner. There's 
a lot of people I know that are fine with that because of 
entrenchment.


D is still ahead of the pack in terms of features. Communicating 
that, and why you should buy in to the better way, is a bit of a 
challenge. A colleague of mine complained that strings use 
another whacky operator (~) to join strings and it's just another 
way of doing string work, which came about because he hadn't 
looked deep enough in to the language to realise it's just normal 
array concatenation.
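
For anyone else who hits the same confusion, it really is just 
the array operator:

int[]  combined = [ 1, 2 ] ~ [ 3 ];    // [ 1, 2, 3 ]
string greeting = "Hello, " ~ "world"; // strings are arrays of characters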


Yet despite being ahead of the pack, its slow adoption doesn't 
speak well for it. But there is precedent for slow adoption, at 
least in gaming. C++ was virtually unused until after the turn of 
the century, and now it's deeply entrenched. Moving to C++ was a 
pretty clear path forward for C programmers. Moving forward from 
C++? There's options (Rust, Swift, C#, D). And the other options 
have a far greater mindshare than D at the moment.


Re: Andrei's list of barriers to D adoption

2016-06-06 Thread Ethan Watson via Digitalmars-d

On Monday, 6 June 2016 at 07:18:56 UTC, Guillaume Piolat wrote:

- well there is an AAA game using it now,


Replying solely to highlight that Unreal Engine has had garbage 
collection since forever; and Unity is a .NET runtime environment 
with all the GC frills that come with it. GC in the AAA/indie 
gaming space is hardly a new concept.


Re: Andrei's list of barriers to D adoption

2016-06-07 Thread Ethan Watson via Digitalmars-d

On Tuesday, 7 June 2016 at 05:38:25 UTC, H. S. Teoh wrote:
A Decimal type isn't hard to implement as a user-defined type. 
I don't understand the obsession with some people that 
something must be a built-in type to be acceptable...


As I see it, any kind of an implementation that's comparable to 
what's out there is acceptable, be it a standard library or user 
library, as long as it's visible and people can easily find it.


For example in C++ land: 
https://software.intel.com/en-us/articles/intel-decimal-floating-point-math-library/


Which makes a point of stating it conforms to standards and is 
usable in cases where decimal is legally required.


And in C# land: 
https://msdn.microsoft.com/en-us/library/system.decimal(v=vs.110).aspx


System.Decimal and the basic decimal type being a part of the 
.NET runtime/C# language.


Both are highly visible from Google searches.


Re: Andrei's list of barriers to D adoption

2016-06-07 Thread Ethan Watson via Digitalmars-d

On Tuesday, 7 June 2016 at 07:57:09 UTC, Walter Bright wrote:

C++ still suffers from:

http://www.digitalmars.com/articles/b44.html

and probably always will.


template< size_t size > void function( char ( &array )[ size ] );

It's horrible syntax (no surprise), and being a templated 
function means it's recompiled N times... but there's at least 
something for it now.


(Of note is that it's how you're expected to handle string 
literals at compile time with constexpr, but it doesn't make 
string manipulation at compile time any easier. Go figure.)


Re: Andrei's list of barriers to D adoption

2016-06-07 Thread Ethan Watson via Digitalmars-d
On Tuesday, 7 June 2016 at 09:11:38 UTC, Ola Fosheim Grøstad 
wrote:
Try tell someone making a 3D engine that your tooling is used 
in banking and that they should switch from C++.


Now, don't feel insulted, but banking/finance is considered a 
boring application area by most programmers I know of.


And yet, the financial/banking sector loves game/engine 
programmers because they give a damn about real-time performance. 
There are plenty of ex-game-developers in that sector making 
three times as much money as they used to.


Not to say that it isn't boring. That's purely a subjective thing.


Re: About GC: The Future of Rust : GC integration

2016-06-09 Thread Ethan Watson via Digitalmars-d

On Thursday, 9 June 2016 at 05:52:47 UTC, Adam Wilson wrote:
But to be fair, that's not a memory management problem but a 
disk IO problem.


Quoting for truth. The single biggest problem large games have is 
bandwidth, both in terms of offline storage and memory/bus 
bandwidth. This will only get worse with time as the quality of 
resources increases at a faster rate than bandwidth increases.


Taking D to GDC Europe - let's make this tight

2016-07-12 Thread Ethan Watson via Digitalmars-d

http://schedule.gdceurope.com/session/d-using-an-emerging-language-in-quantum-break

My proposal for a talk has been accepted, and I'll be in Cologne 
next month presenting to industry peers.


One of the things I was asking during the approval process was 
whether attendees tended to be more on the game programming side 
or the tech/engine programming side. They don't have that data. 
Looking at the rest of the talks on the schedule, there does 
appear to be a bit of a lack of technical talks. So that pretty 
much settles it for me, I'm going to go a bit in-depth on D. I've 
been seeing this talk as basically a sales pitch to the rest of 
the industry to check D out, and going in-depth means it's time to 
shine.


The general gist of the talk will broadly cover what both my talk 
at DConf this year and Manu's talk at DConf in 2013 covered. But 
to break up the flow a bit, I intend on inserting examples of 
ways D saves time. Take the lightning talk I did about a simple 
interpolation function - illustrate the problem in C++, and the 
solution in D. But because there's options out there for 
languages these days, I also want equivalent solutions in Rust 
(as that is the most likely other option). I might also add a 
column for Swift so I can write "LOLNO" for each problem. Perhaps 
also relevant is C# equivalents as that has quite a lot of 
support in game programming these days thanks to Unity.


The examples are also meant to highlight specific language 
features. The interpolation example highlights template 
constraints and type inspection. And they're going to be based on 
code I've either written for Quantum Break, or had to spend an 
unholy amount of time getting to work in C++ recently.


Mainly here, I would like the assistance of someone that has used 
Rust and can provide Rust-based examples that perform the same 
task. Stefan Koch was also illustrating some of the modern 
syntactic shortcuts on IRC last night, I've been programming in a 
DMD that was released in 2013 for a while so getting up to speed 
on modern D programming with my examples will help make this even 
tighter than I can otherwise make it by myself.


The examples I'm looking at using (aiming for a spacing of about 
one every 10-15 minutes in the talk, so 4 in total):


* Generic interpolation function
  - Already illustrated at DConf for both C++ and D
  - C++ - Unmaintainable, buggy mess
  - Rust - No idea
  - D - Write once, handle any type thanks to type inspection
  - D feature demonstration: template constraints, type inspection 
(rough sketch after this list)

* Check a type for an equality operator
  - C++ - SFINAE whackiness, and as near as I can tell requires 
separate tests to determine if an object has a member operator 
and/or a global operator for comparison tests

  - Rust - No idea
  - D - Simple is() check wrapped in an enum
  - D feature demonstration: is, static if for further use

* Expansion of code for a script wrapper to a native function 
(retrieve parameters and pass to native)
  - C++ - Pre-C++11 is a mess but doable. Will focus on C++11, 
which requires template parameter inference, compile time number 
range generation, and calling a function with two dummy instances 
of objects to allow the inference to happen.

  - Rust - No idea
  - D - Haven't written the code, but intend on using mixin with 
strings

  - D feature demonstration - mixin, mixin template

* Fourth example TBD, might try to make it tie in to the binding 
system which means I'll cover CTFE.
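
To give a feel for where the first example is headed, here's a 
rough sketch of the D side (not the Quantum Break code, just the 
shape of it):

// The constraint rejects, at compile time, any type that can't
// support the arithmetic the body needs. No SFINAE boilerplate.
T lerp( T )( T from, T to, float t )
    if( is( typeof( T.init + ( T.init - T.init ) * 0.5f ) : T ) )
{
    return from + ( to - from ) * t;
}

static assert(  __traits( compiles, lerp( 0.0f, 10.0f, 0.5f ) ) );
static assert( !__traits( compiles, lerp( "a", "b", 0.5f ) ) );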


Speaking of the binding system, the plan is to open source it for 
the talk. I'm in the process of cleaning it up right now for such 
purposes.


Re: Taking D to GDC Europe - let's make this tight

2016-07-12 Thread Ethan Watson via Digitalmars-d
On Tuesday, 12 July 2016 at 12:51:42 UTC, Ola Fosheim Grøstad 
wrote:

C++14/17 stuff


Which would be relevant if the target audience didn't need to 
support Visual Studio, which still doesn't fully support C++11 in 
the latest revision let alone C++14 features.


https://msdn.microsoft.com/en-us/library/hh567368.aspx


Re: Taking D to GDC Europe - let's make this tight

2016-07-12 Thread Ethan Watson via Digitalmars-d
On Tuesday, 12 July 2016 at 14:29:03 UTC, Andrei Alexandrescu 
wrote:
This is awesome! You should do an interview with Mike about the 
conference. -- Andrei


Just saw an email from him to this effect.


Re: Taking D to GDC Europe - let's make this tight

2016-07-12 Thread Ethan Watson via Digitalmars-d
On Tuesday, 12 July 2016 at 14:48:17 UTC, Ola Fosheim Grøstad 
wrote:

«
template< class... Ts > struct make_void { typedef void type; };
template< class... Ts > using void_t = typename make_void< Ts... >::type;

»


Variadic expansion works on my home code in VS2015. But we 
shipped Quantum Break on VS2012. I can use variadic template 
arguments for SFINAE purposes in VS2012, anything more involved 
is virtually unsupported.


On Tuesday, 12 July 2016 at 15:14:39 UTC, Ola Fosheim Grøstad 
wrote:
Anyway, if you are going to compare languages, use the latest 
edition of both languages.


I know this makes sense. But by the same token, this is a 
practical talk aimed at people using compilers and toolchains 
that aren't necessarily up to date. What I'd rather do is have 
further examples visible online for C++17 standards to compare 
against. Either way, given Microsoft's rate, the industry will be 
able to use C++17 some time in 2021.


Also of note is that with the binding system we're open sourcing, 
it's meant to just slot in and start people using D alongside 
their C++ codebases.


And further of note, this is to show things that are just plain 
horrible to do in C++. SFINAE whackiness leads me in to talking 
about the is operator in D, which leads in to talking about the 
binding system... It's all about how it directly relates to 
usage, not to what someone can do in six years time.


Re: Taking D to GDC Europe - let's make this tight

2016-07-12 Thread Ethan Watson via Digitalmars-d
On Tuesday, 12 July 2016 at 16:24:07 UTC, Ola Fosheim Grøstad 
wrote:
you'll get the same response that you would get from D-users if 
you compared C++17 to D1...


That's both wildly hyperbolic; and not going to happen for the 
mentioned reasons.



I can get this to work:


Which is both not the way I'm currently doing it, and not the way 
I've seen it done elsewhere. Effectively, the way I've seen it 
done is by testing the return type of a 
variadic-template-parameterised function that is specialised with 
the decltype for the operation in question.


However, this goes to prove my point. In both cases, it's a bunch 
of legwork just to get to a true_type or a false_type. Having it 
available in the standard library ignores the fact that if you 
need to do something similar that will never be covered by the 
standards, it's a whole bunch of near-esoteric work you'll need 
to understand to get to that point.


Whereas in D, you can do the same thing with an is() statement.
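
A minimal sketch of the kind of thing I mean (the names here are 
made up):

// "Does T support this operator?" - one eponymous enum each,
// no boilerplate.
enum HasEquality( T ) = is( typeof( T.init == T.init ) : bool );
enum HasOrdering( T ) = is( typeof( T.init <  T.init ) : bool );

struct Opaque {} // no opCmp, so no ordering

static assert(  HasEquality!int && HasOrdering!int );
static assert(  HasEquality!Opaque ); // member-wise == comes for free
static assert( !HasOrdering!Opaque ); // but < does not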

As I pointed out at DConf (and which I saw someone around here 
quote somewhere), the number one thing you can do in D that you 
can't do in C++ is save time. The is() statement isn't just a 
simple operator, it's far more powerful than writing a boatload 
of boilerplate template code because it tests if code compiles. 
Far more flexible than writing template code for the fail case 
and specialising for the success case, far quicker to learn, far 
quicker to use, far quicker to write, etc.



I am not sure if I understand the argument.


Did you see my DConf talk? Do you know that DMD uses mslink for 
64 bit builds, and we use the Xbox One version of mslink to get 
Xbox One compatibility? It seems to me you'll understand where 
I'm coming from better if you look at what I've already put out 
there.


Re: Why is 64-bit dmd not built as part of the Windows release?

2018-05-15 Thread Ethan Watson via Digitalmars-d

On Tuesday, 15 May 2018 at 16:01:28 UTC, Atila Neves wrote:

Isn't it just make -f win64.mak?", I hear you ask.


I wouldn't ask that. Every time I need a 64-bit dmd, I open the 
project in src/vcbuild and let Visual Studio and Visual D take 
care of it.


But I agree with the subject entirely. 64-bit DMD is absolutely 
required for my own usage. The Linux platforms have i386/x64 
downloads. OSX is going 64-bit only. Having both packages 
available for Windows would be much appreciated.


Re: Of possible interest: fast UTF8 validation

2018-05-16 Thread Ethan Watson via Digitalmars-d
On Wednesday, 16 May 2018 at 11:18:54 UTC, Andrei Alexandrescu 
wrote:

https://www.reddit.com/r/programming/comments/8js69n/validating_utf8_strings_using_as_little_as_07/


I re-implemented some common string functionality at Remedy using 
SSE 4.2 instructions. Pretty handy. Except we had to turn that 
code off for released products since nowhere near enough people 
are running SSE 4.2 capable hardware.


The code linked doesn't seem to use any instructions newer than 
SSE2, so it's perfectly safe to run on any x64 processor. Could 
probably be sped up with newer SSE instructions if you're only 
ever running internally on hardware you control.


Re: Of possible interest: fast UTF8 validation

2018-05-16 Thread Ethan Watson via Digitalmars-d
On Wednesday, 16 May 2018 at 13:54:05 UTC, Andrei Alexandrescu 
wrote:
Is it workable to have a runtime-initialized flag that controls 
using SSE vs. conservative?


Sure, it's workable with these kind of speed gains. Although the 
conservative code path ends up being slightly worse off - an 
extra fetch, compare and branch get introduced.
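
A minimal sketch of what I mean, assuming core.cpuid's sse42 
query, and with the two validate functions as hypothetical 
stand-ins rather than the code from the article:

import core.cpuid : sse42;

immutable bool useSse42;

shared static this()
{
    useSse42 = sse42(); // one CPUID check at program start
}

// Hypothetical stand-in implementations.
bool validateUtf8Scalar( const( ubyte )[] data ) { return true; }
bool validateUtf8Sse42( const( ubyte )[] data )  { return true; }

bool validateUtf8( const( ubyte )[] data )
{
    // The flag load, compare and branch is the extra cost
    // mentioned above.
    return useSse42 ? validateUtf8Sse42( data )
                    : validateUtf8Scalar( data );
}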


My preferred method though is to just build multiple sets of 
binaries as DLLs/SOs/DYNLIBs, then load in the correct libraries 
dependant on the CPUID test at program initialisation. Current 
Xbox/Playstation hardware is pretty terrible when it comes to 
branching, so compiling with minimal branching and deploying the 
exact binaries for the hardware capabilities is the way I 
generally approach things.


We never got around to setting something like that up for the PC 
release of Quantum Break, although we definitely talked about it.


Re: Of possible interest: fast UTF8 validation

2018-05-16 Thread Ethan Watson via Digitalmars-d

On Wednesday, 16 May 2018 at 14:25:07 UTC, Jack Stouffer wrote:
D doesn't seem to have C definitions for the x86 SIMD 
intrinsics, which is a bummer


Replying to highlight this.

There's core.simd which doesn't look anything like SSE/AVX 
intrinsics at all, and looks a lot more like a wrapper for 
writing assembly instructions directly.


And even better - LDC doesn't support core.simd and has its own 
intrinsics that don't match the SSE/AVX intrinsics API published 
by Intel.


And since I'm a multi-platform developer, the "What about NEON 
intrinsics?" question always sits in the back of my mind.


I ended up implementing my own SIMD primitives in Binderoo, but 
they're all versioned out for LDC at the moment until I look in 
to it and complete the implementation.


Re: See you soon at dconf

2017-05-03 Thread Ethan Watson via Digitalmars-d

On Wednesday, 3 May 2017 at 09:04:31 UTC, John Colvin wrote:
I'm guessing everyone will be converging on the conference 
hotel as the day goes on?


I imagine I'll wander by there. I'm not staying there, but it is 
a quick walk to my accommodation. I land at 20.45 though, so I 
hope it's still going around 22.30-23.00.


Re: DConf 2017 Berlin - Streaming ?

2017-05-05 Thread Ethan Watson via Digitalmars-d
I've put my slides up on Slideshare. They should show up on the 
DConf website some time soon too.


https://www.slideshare.net/EthanWatson5/binderoo-a-rapid-iteration-framework-that-even-scripters-can-use



DLang quarterly EU?

2017-05-06 Thread Ethan Watson via Digitalmars-d
I was speaking to Atila earlier about the things we like about 
DConf. Sitting around talking to a bunch of computer scientists 
is fantastic, and not something people generally get to do in 
their chosen careers as a programmer.


EU nations are quite close together. Rather than a city meet up 
monthly, what about a continental meet up quarterly?


This is quite feasible in Europe, since everything is quite close 
together. I'm keen. Atila is keen. Anyone else think this is a 
great idea?


Re: DLang quarterly EU?

2017-05-07 Thread Ethan Watson via Digitalmars-d

On Sunday, 7 May 2017 at 11:32:53 UTC, Adam Wilson wrote:

On 5/7/17 12:57, Seb wrote:
+1 - maybe its worth considering to make it for two days (=one 
weekend)


That can work. It would be two or three days vacation depending 
on flight schedules.

...
Not to mention a cool way to see new cities if it moves around.


Yes, that was the intention on both counts. There's no point to 
flying somewhere just for the day. Especially since there will 
doubtless be Micro BeerConfs in the evening ;-)


Andrei suggested that Bucharest be the first city we hold this 
in. Sounds like a great plan to me.


Re: Jonathan Blow's presentation

2017-05-08 Thread Ethan Watson via Digitalmars-d

On Monday, 8 May 2017 at 13:21:07 UTC, Rel wrote:

What do you guys think of the points explained here:
https://www.youtube.com/watch?v=gWv_vUgbmug

Seems like the language shares a lot of features with
D programming language. However there are several
features that caught my interest:
1) The compile times seems very fast in comparison
with other modern programming languages, I'm wondering
how he managed to do it?
2) Compile-time execution is not limited, the build
system is interestingly enough built into the language.


I was at that talk, and spoke to him quite a bit there. He also 
attended my talk. And yes, there is quite a bit of overlap in 
terms of features. He's well in to design by introspection, for 
example.


I can answer #1, I know a few things there but that's more 
something he should talk about as I don't know how public he's 
made that knowledge.


I also put forward to him a case with regards to compile time 
execution and code generation. Say you've got a global variable 
that you write to, and reading from that changes the kind of code 
you will generate. Thus, your outputted code can be entirely 
different according to whenever the compiler decides to schedule 
that function for execution and compilation. His response was, 
"Just don't do that."


That's essentially the philosophical difference there. Jonathan 
wants a language with no restrictions, and to leave it up to the 
programmer to solve problems like the above themselves. Whether 
you agree with that or not, well, that's an entirely different 
matter.


Re: Jonathan Blow's presentation

2017-05-08 Thread Ethan Watson via Digitalmars-d

On Monday, 8 May 2017 at 16:10:51 UTC, Rel wrote:

I don't know if I ever will need it in my code. For the game
development use case it may be useful, for example to package
all of the game assets at compile time.


It's only useful for very select cases when hardcoded assets are 
required. You know, unless you want to try making a 45 gigabyte 
executable for current Playstation/Xbox games. A talk I watched 
the other year made the point that as far as textures go in video 
games, literally all but 10 you'll ever use are read only so stop 
trying to program that exception as if it's a normal thing. 
Hardcoding a select few assets is also a case of a vast-minority 
exception. There's ways to do it on each platform, and it's not 
really worth thinking about too much until those rare times you 
need one.


Embedding inside the executable is also already a thing you can 
do in D with the import keyword.
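
A quick illustration for anyone unfamiliar with it - the file 
name is made up, and you compile with -J pointing at the 
directory that contains it:

// The file's bytes are baked into the binary at compile time.
immutable string credits = import( "credits.txt" );

void main()
{
    import std.stdio : writeln;
    writeln( credits );
}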


Re: Jonathan Blow's presentation

2017-05-09 Thread Ethan Watson via Digitalmars-d

On Monday, 8 May 2017 at 19:14:16 UTC, Meta wrote:
Is this why most console games that get ported to PC are 
massive? GTA V on PC, for example, was 100GB, while Skyrim was 
around 8GB.


Consoles have a fixed hardware level that will give you 
essentially deterministic performance. The quality of assets it 
can handle is generally 1/4 to 1/2 as detailed as what the 
current top-line but reasonably-priced PC hardware can handle. 
And PC gamers *love* getting the higher detailed assets. So we 
ship PC games with the option to scale the quality of the assets 
used at runtime, and ship with higher quality assets than is 
required for a console game.


See as an alternative example: the Shadows of Mordor ultra HD 
texture pack, which requires a 6GB video card and an additional 
download. Another example I like using is Rage, which is 
essentially 20GB of unique texture data. If they wanted to 
re-release it on Xbox One and PS4 without being accused of just 
dumping a port across, they'd want to ship with 80GB of texture 
data.


There's also grumblings about whether those HD packs are worth 
it, but now that 4K displays are coming in those grumblings are 
stopping as soon as people see the results.


On Tuesday, 9 May 2017 at 02:21:19 UTC, Nick Sabalausky 
(Abscissa) wrote:
I don't know anything about Witcher, but FF13 *does* have a 
fair amount of pre-rendered video, FWIW. And maybe Witcher uses 
better compression than FF13?


Correct about the video. The Final Fantasy games are notorious 
for their pre-renders and their lengthy cutscenes. All of which 
require massive amounts of video and audio data.


Better compression though? Unlikely. Texture formats are fairly 
standardised these days. Mesh formats are custom, but not as much 
of a space hog as textures. Other assets like audio and video are 
more where the compression formats come into play. But gaming 
hardware has a few tricks for that. For example:


On Tuesday, 9 May 2017 at 02:13:19 UTC, Nick Sabalausky 
(Abscissa) wrote:
Uncompressed? Seriously? I assume that really means FLAC or 
something rather than truly uncompressed, but even 
still...sounds more like a bullet-list 
pandering^H^H^H^H^H^H^H^H^Hselling point to the same 
suckers^H^H^H^H^H^H^H"classy folk" who buy Monster-brand cables 
for digital signals than a legit quality enhancement.


Well, no. Gaming consoles - and even mobile devices - have 
dedicated hardware for decompressing some common audio and video 
formats. PC hardware does not. Decompression needs to happen on 
the CPU.


Take Titanfall as a use case, which copped quite a bit of flack 
for shipping the PC version with uncompressed audio. The Xbox One 
version shipped on a machine that guaranteed six hardware threads 
(at one per core) with dedicated hardware for audio 
decompression. Their PC minspec though? A dual core machine (at 
one thread per core) with less RAM and only using general purpose 
hardware.


The PC scene had a cry, but it was yet another case of PC gamers 
not actually understanding hardware fully. The PC market isn't 
all high-end users, the majority of players aren't running 
bleeding edge hardware. They made the right business decision to 
target hardware that low, but it meant some compromises had to be 
made. In this case, the cost of decompressing audio on the CPU 
was either unfeasible in real time or increased load times 
dramatically. Loading uncompressed audio off 
the disk was legitimately an optimisation in both cases.


On Tuesday, 9 May 2017 at 06:50:18 UTC, Ola Fosheim Grøstad wrote:
It isn't all that hard to distinguish if you know what to 
listen for. I hear a big difference in music I have mixed 
down/mastered on a good headset.


So, as Walter would say, "It's trivially obvious to the casual 
observer."


That's the point of the blind test. It isn't trivially obvious to 
the casual observer. You might think it is, but you're not a 
casual observer. That's essentially why LAME started up - a bunch 
of audiophiles decided to encode for perception of quality rather 
than strictly objective quality.


Re: Jonathan Blow's presentation

2017-05-10 Thread Ethan Watson via Digitalmars-d

On Tuesday, 9 May 2017 at 23:47:46 UTC, Era Scarecrow wrote:
Nope, uncompressed. Seems some games they decided the small 
amount of time spent decompressing audio and textures was too 
high, which is why some of the games are 50Gb in size, because 
it's more important to have larger textures than trying to push 
the HD textures and 4k stuff, vs actually having hardware that 
can handle it, since the console hardware is seriously behind 
PC hardware.


On Tuesday, 9 May 2017 at 23:58:13 UTC, Era Scarecrow wrote:
Found an appropriate articles Regarding Titanfall (a few years 
ago), although that's for PC and the reason for giving a boost 
to 'underpowered PC's', although i could have sworn they did it 
for consoles more. Still ridiculous in my mind.


Yeah, you might want to actually read the entire thread before 
stating this stuff again.


Re: "I made a game using Rust"

2017-05-10 Thread Ethan Watson via Digitalmars-d

On Wednesday, 10 May 2017 at 13:22:22 UTC, Adam D. Ruppe wrote:
Those of you on IRC know that I've been pushing hard for better 
error messages. I think that is *the* killer feature clang 
offered and I see it brought up again and again.


D used to do well. Now we're lagging behind. No language change 
needed to vastly improve error messages.


I find it a very curious state of affairs myself that Microsoft's 
C++ compiler has significantly better error messages than DMD.


Re: "I made a game using Rust"

2017-05-10 Thread Ethan Watson via Digitalmars-d

On Wednesday, 10 May 2017 at 14:02:38 UTC, Adrian Matoga wrote:

Would you mind giving some examples?


My biggest bugbear with DMD is internal compiler errors giving me 
no meaningful information.


Excepting one or two edge cases with SSE types and the .NET 
compiler, I can get a meaningful error message and an error code 
from MSVC. And you can search them all through the MSDN 
documentation.


https://msdn.microsoft.com/en-us/library/8x5x43k7.aspx

If I find the message isn't that helpful, googling for the error 
code usually brings up discussions on stackoverflow about it.


Even user-inserted #errors have an error code.

https://msdn.microsoft.com/en-us/library/y0tzt8e0.aspx


Re: "I made a game using Rust"

2017-05-10 Thread Ethan Watson via Digitalmars-d
On Wednesday, 10 May 2017 at 14:31:28 UTC, Vladimir Panteleev 
wrote:
Internal compiler errors (ICEs) are bugs which are generally 
treated as high priority. Please report them.


See my previous rant on this subject.

http://forum.dlang.org/post/qkxyfiwjwqklftcwt...@forum.dlang.org

tl;dr - Sure, we'll submit bugs, but if someone literally has to 
stop work for a day because they can't tell at a glance what's 
gone wrong when compiling code then that's a massive failure of 
the compiler.


Re: Please provide DMD as 64 executable

2017-05-18 Thread Ethan Watson via Digitalmars-d

On Thursday, 18 May 2017 at 13:41:21 UTC, Andre Pany wrote:
I think the 64 bit version of dmd should be the default these 
days;)


I believe this is a Windows-only problem.

For which I'm +1. I have to build my own because compiling some 
Binderoo code with a few objects breaks the memory barrier.


Re: Please provide DMD as 64 executable

2017-05-21 Thread Ethan Watson via Digitalmars-d

On Sunday, 21 May 2017 at 01:29:58 UTC, Laeeth Isharc wrote:
There a Visual D script, but I do not know how to use that 
using msbuild.


We had some trickiness at work regarding this. You essentially 
need to invoke devenv instead of msbuild if you want to script 
the process.


Of course, now that Visual D supports D files inside a .vcxproj, 
it should probably be upgraded to use one of those instead of the 
.visualdproject file.


Re: Any video editing folks n da house?

2017-05-29 Thread Ethan Watson via Digitalmars-d
On Wednesday, 24 May 2017 at 09:27:59 UTC, Andrei Alexandrescu 
wrote:
I'm thinking publicly available videos so the footage is 
already out there.


One question I'd want to ask is: What is the legal status of the 
resulting video?


This is purely because of software licensing. My nonlinear 
editor of choice is Davinci Resolve, but I've only ever used it 
for hobby projects that make no money. In the case of providing a 
video authored with the software to the D Foundation, I'm not 
entirely sure the "free" license covers such usage.


(I also doubt I'd have the time to devote to cutting a video at 
this stage, but it's the first question I thought of when viewing 
this thread.)


Things that make writing a clean binding system more difficult

2016-07-28 Thread Ethan Watson via Digitalmars-d
As mentioned in the D blog the other day, the binding system as 
used by Remedy will both be open sourced and effectively 
completely rewritten from when we shipped Quantum Break. As I'm 
still deep within that rewrite, a bunch of things are still fresh 
in my mind that aren't that great when it comes to D and doing 
such a system.


These are things I also expect other programmers to come across 
in one way or another: they seem like a simple way to do things, 
but getting them to behave requires non-trivial workarounds.


I also assume "lodge a bug" will be the response to these. But 
there are some cases where I think documentation or 
easily-googleable articles will be required instead/as well. And 
in the case of one of these things, it's liable to start a long 
circular conversation chain.



1) Declaring a function pointer with a ref return value can't be 
done without workarounds.


Try compiling this:

ref int function( int, int ) functionPointer;

It won't let you, because only parameters and for loop symbols 
can be ref types. Despite the fact that I intend the function 
pointer to be of a kind that returns a ref int, I can't declare 
that easily. Easy, declare an alias, right?


alias RefFunctionPointer = ref int function( int, int );

Alright, cool, that works. But thanks to the binding system 
making heavy use of function pointers via code-time generated 
code, that means we then have to come up with a unique name for 
every function pointer symbol we'll need. Eep.


Rather, I have to do something like this:

template RefFunctionPointer( Params... ) if( Params.length > 1 )
{
  ref Params[ 0 ] dodgyFunction( Params[ 1 .. $ ] );
  alias RefFunctionPointer = typeof( &dodgyFunction );
}
RefFunctionPointer!( int, int, int ) functionPointer;

This can also alternately be done by generating a mixin string 
for the alias inside of the template and not requiring a dummy 
function to get the type from. Either way, it gets rid of the 
unique name requirement but now we have template expansion in the 
mix. Which is something I'll get to in a second...
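
For completeness, a rough sketch of that mixin-string variant 
(again, not the actual Binderoo code):

template RefFunctionPointer( Params... ) if( Params.length > 1 )
{
  // Build the alias declaration as a string; no dummy function
  // required just to take the type of.
  mixin( "alias RefFunctionPointer = ref " ~ Params[ 0 ].stringof
         ~ " function( Params[ 1 .. $ ] );" );
}

RefFunctionPointer!( int, int, int ) functionPointer; // ref int function( int, int )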


Needless to say, this is something I wasted a lot of time on 
three years ago when I was getting the bindings up to speed 
originally. Turns out it's not any better in DMD 2.071.



2) Expansion of code (static foreach, templates) is slow to the 
point where string mixins are a legitimate compile-time 
optimisation


Take an example of whittling down a tuple/variable argument list. 
Doing it recursively would look something like this:


template SomeEliminator( Symbols... )
{
  static if( Symbols.length >= 1 )
  {
    static if( SomeCondition!( Symbols[ 0 ] ) )
    {
      alias SomeEliminator = TypeTuple!( Symbols[ 0 ],
                                         SomeEliminator!( Symbols[ 1 .. $ ] ) );
    }
    else
    {
      alias SomeEliminator = SomeEliminator!( Symbols[ 1 .. $ ] );
    }
  }
  else
  {
    alias SomeEliminator = TypeTuple!( );
  }
}

Okay, that works, but the template expansion is a killer on 
compile-time performance. It's legitimately far quicker on the 
compiler to do this:


template SomeEliminator( Symbols... )
{
  string SymbolSelector()
  {
string[] strOutputs;
foreach( iIndex, Symbol; Symbols )
{
  static if( SomeCondition!( Symbol ) )
  {
strOutputs ~= "Symbols[ " ~ iIndex.stringof ~ " ]";
  }
}
return strOutputs.joinWith( ", " );
  }
  mixin( "alias SomeEliminator = TypeTuple!( " ~ SymbolSelector() 
~ " );" );

}

With just a small codebase that I'm working on here, it chops 
seconds off the compile time. Of course, maybe there's something 
I'm missing here about variable parameter parsing and doing it 
without a mixin is quite possible and just as quick as the mixin, 
but that would make it the third method I know of to achieve the 
same effect. The idiomatic way of doing this without mixins 
should at least be defined, and optimised at the compiler level 
so that people don't get punished for writing natural D code.


Then there was this one that I came across:

outofswitch: switch( symbolName )
{
  foreach( Variable; VariablesOf!( SearchType ) )
  {
case Variable.Name:
  doSomething!( Variable.Type )();
  break outofswitch;
  }
  default:
writeln( symbolName, " was not found!" );
break;
}

This caused compile time to blow way out. How far out? By 
rewriting it like this, I cut compile times in half (at that 
point, from 10 seconds to 5):


switch( symbolName )
{
  mixin( generateSwitchFor!( SearchType )() );
  default:
writeln( symbolName, " was not found!" );
break;
}

Now, I love mixins, both template form and string form. The 
binding system uses them extensively. But mixins like this are 
effectively a hack. Anytime I have to break out a mixin because 
my compile time doubled from a seemingly simple piece of code is 
not good.



3) __ctfe is not a CTFE symbol.

This one bit me when I was trying to be efficient for runtime 
usage while allowing 

Re: Things that make writing a clean binding system more difficult

2016-07-28 Thread Ethan Watson via Digitalmars-d

On Thursday, 28 July 2016 at 08:49:35 UTC, Walter Bright wrote:

Do you mean:

  void foo();
  void foo() { }

?


Exactly this. I've been unable to get it to work.


Re: Things that make writing a clean binding system more difficult

2016-08-05 Thread Ethan Watson via Digitalmars-d

On Thursday, 4 August 2016 at 11:41:00 UTC, Jacob Carlborg wrote:
That works for me [1]. It was reported by Manu and fixed in 
2012 [2].


I did some more experimenting, and it turns out that the problem 
is when the declaration and definition have different linkage. 
Being C++ functions means that all the functions are declared 
with extern( C++ ), but the mixin generates essentially extern( D 
) functions.


And a bit more prodding and poking, and I found some problems 
with UDAs - specifically, adding them in the declaration but not 
having them in the definition. We use UDAs extensively to mark up 
functions for binding. Depending on when my function collector 
template instantiates at compile time determines whether I can 
see the UDAs or not.


So that's technically a bug. But. Before I go running off to the 
Bugzilla Walter made. Should a user who declares and then later 
defines a function be forced to replicate the declaration 100% 
in the definition? If yes, then the compiler needs to 
error. If no, then there'll need to be some rules made up for it 
because I can already see how that will be open to abuse.


Fact checking for my talk

2016-08-13 Thread Ethan Watson via Digitalmars-d
So I fly to Cologne tomorrow morning, and will be presenting on 
Tuesday. Got my talk written, gave it a dry run at the office and 
got feedback on it. Seems to be in a good spot.


But before I go up and feature compare to other languages, it'll 
be a good idea to get my facts right.


There's three spots in my talk where I go through some D code, 
and then show a table indicating whether the features I used in 
those examples are available in other, trendier languages. In 
some cases, the features become available with after-market add 
ons. But I'm focusing exclusively on stuff you can get out of the 
box, ie write some code and compile it with a stock 
DMD/LDC/GDC/SDC install and it'll Just Work(TM).


So here's a dodgy table indicating the features I'm showing on 
the slides, and the languages that are most relevant to game 
developers - C# and Swift - with Rust thrown in since that's the 
new language everyone knows about.


If I've got something wrong with the out-of-the-box solution, 
please let me know. If there is something you can do with an 
add-on, also let me know since it will allow me to prepare for 
the inevitable audience questions saying "but you can do this 
with this etc etc"



                             |  Rust   |  Swift  |   C#    |
-----------------------------|---------+---------+---------|
     Template Constraints    |    Y    |    Y    |  where  | [1]
-----------------------------|---------+---------+---------|
  Template "if" Constraints  |  where  |  where  |  where  |
-----------------------------|---------+---------+---------|
          static if          |    N    |    N    |    N    |
-----------------------------|---------+---------+---------|
     Eponymous templates     |    N    |    N    |    N    |
-----------------------------|---------+---------+---------|
   Compile time reflection   |    N    |    N    |    N    |
-----------------------------|---------+---------+---------|
            CTFE             |    N    |    N    |    N    |
-----------------------------|---------+---------+---------|
   User defined attributes   |  Crates | Runtime |    Y    |
-----------------------------|---------+---------+---------|
   Deep function inspection  |    N    |    N    |    N    |
-----------------------------|---------+---------+---------|
           Mixins            |    N    |    N    |   N*    | [2]
-----------------------------|---------+---------+---------|

[1] Limited comparisons can be made with template where 
constraints
[2] Mixins in Swift are essentially traits and protocol 
extensions, not like D mixins at all
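
For anyone wondering what a couple of those rows refer to, two 
tiny illustrative examples:

// Eponymous template: the single member shares the template's
// name, so isFloatingPoint!T reads like a plain value.
enum isFloatingPoint( T ) = is( T == float ) || is( T == double ) || is( T == real );

// CTFE: an ordinary function evaluated by the compiler whenever
// it's used in a compile time context.
int factorial( int n ) { return n <= 1 ? 1 : n * factorial( n - 1 ); }
enum fiveFactorial = factorial( 5 );

static assert( isFloatingPoint!double && !isFloatingPoint!int );
static assert( fiveFactorial == 120 );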


Re: Fact checking for my talk

2016-08-13 Thread Ethan Watson via Digitalmars-d

On Saturday, 13 August 2016 at 12:58:36 UTC, ag0aep6g wrote:
What's a 'template "if" constraint'? Template constraints 
already use the `if` keyword. This is a template constraint:


template Foo(T) if (is(T : int)) {/* ... */}

Other than those, there are template specializations. Example:

template Foo(T : int) {/* ... */}


Bad naming on my part. I'll rename it. Although considering type 
deduction/parameter matching/specialisation is syntactically 
related, I'll find a better umbrella name for that.


Re: Fact checking for my talk

2016-08-13 Thread Ethan Watson via Digitalmars-d

On Saturday, 13 August 2016 at 13:02:09 UTC, Liam McSherry wrote:
For "static if," C# also has a very limited conditional 
compilation system that is barely comparable.


This is covered in more detail in the talk itself when I compare 
static if to C style preprocessors. I would hope everyone in the 
room knows the C# preprocessor is limited compared to a C/C++ 
preprocessor. And if not, someone will either ask or Google it 
themselves after the event.


Re: Fact checking for my talk

2016-08-13 Thread Ethan Watson via Digitalmars-d

On Saturday, 13 August 2016 at 15:51:18 UTC, Jacob Carlborg wrote:

What is "Deep function inspection"?


In the context of my talk, a collection of methods to inspect all 
function traits including parameter types and defaults etc. C++ 
can do type inspection. I believe Swift has something like 
Objective C does but I did not find concrete info on it. No idea 
about Rust.


Re: Fact checking for my talk

2016-08-13 Thread Ethan Watson via Digitalmars-d

On Saturday, 13 August 2016 at 17:19:42 UTC, Chris Wright wrote:
C# can do this. Check System.Reflection.MethodInfo and 
System.Reflection.ParameterInfo.


Runtime only? I'll make the distinction on my slides.


Re: Code signing to help with Windows virus false positives

2016-08-15 Thread Ethan Watson via Digitalmars-d

On Monday, 15 August 2016 at 20:43:59 UTC, Basile B. wrote:
I'm afraid to see people overreacting in front of a minor and 
temporary problem.


This is not the first time this is a problem.

Our scanner at Remedy regularly used to block code sent to and 
from Walter at the email level. Sometimes things just wouldn't be 
received on either side.


Our scanner also used to pick up the DMD that we shipped to our 
work environments until we added an exception for it.


I just put a clean install of Visual Studio and Visual D on this 
laptop in case some people want to see some D stuff after my talk 
today. Windows Defender blocked my download of DMD.


D code seems to be sufficiently different that virus scanners get 
confused. Both Windows Defender and F-Secure complained about it 
being the same trojan in fact.


This cannot be a problem if we expect people to get into the 
language. If the very first download a newcomer makes is picked 
up as a virus, that's unbelievably bad.


Re: Fact checking for my talk

2016-08-15 Thread Ethan Watson via Digitalmars-d

On Sunday, 14 August 2016 at 18:05:12 UTC, ZombineDev wrote:

Rust stuff


Exactly what I was after, thanks.


Re: Fact checking for my talk

2016-08-15 Thread Ethan Watson via Digitalmars-d

On Saturday, 13 August 2016 at 19:34:42 UTC, Walter Bright wrote:
It's risky to compare with languages you aren't strongly 
familiar with. All it takes is one mistake and one audience 
member who knows more than you about it, and it can seriously 
derail and damage the entire presentation.


I recommend sticking with describing the unique D features, and 
let the audience members who know other languages draw their 
own comparisons.


I do agree with this.

But by the same token, the table highlights what actually are the 
unique D features. I make a point that the languages themselves 
are reasonable enough replacements for C++ in many circumstances, 
but that the things I do with D's compile time functionality 
aren't easily achievable in those languages.


At this point, the only thing I still haven't found concrete 
information on is function inspection in Swift and Rust, which 
should be a mark against the languages if it's not easily 
Googlable.


Re: Fact checking for my talk

2016-08-16 Thread Ethan Watson via Digitalmars-d

On Tuesday, 16 August 2016 at 06:36:25 UTC, Jacob Carlborg wrote:

On 2016-08-16 08:13, Ethan Watson wrote:

For Objective-C it's possible to use the Objective-C runtime 
functions to access some of this information. Based on a method 
you can access the types of the arguments and the return type. 
Although this data is represented as strings, in a semi mangled 
format. All this should be accessible in Swift as well but will 
only (I assume) work for Swift methods that can be called from 
Objective-C. "Native" Swift methods support other features that 
are not accessible in Objective-C, like generics.


Yeah, this is what I thought was possible with Swift. So thanks 
for that.


Binderoo - we're open sourcing our binding system

2016-08-16 Thread Ethan Watson via Digitalmars-d

https://github.com/Remedy-Entertainment/binderoo

So, as I just announced in my talk at GDC Europe: we're open 
sourcing our binding system.


It's currently a complete re-engineering of the system, and it's 
incomplete at the moment. It will be documented as the features 
become more solidified.


I'll also write some more about it once I've had a chance to 
unwind. The talk seemed to go well at least.


Re: Binderoo - we're open sourcing our binding system

2016-08-16 Thread Ethan Watson via Digitalmars-d
Slides are up at 
http://www.slideshare.net/EthanWatson5/d-using-an-emerging-language-in-quantum-break





Re: Binderoo - we're open sourcing our binding system

2016-08-16 Thread Ethan Watson via Digitalmars-d
On Wednesday, 17 August 2016 at 06:27:39 UTC, Jacob Carlborg 
wrote:


Windows only or cross-platform?


It will be cross platform, but right now I've only developed on 
Windows. Linux will be next, I have Mint setup at home. I'll 
likely need an external contributor for PS4, but that could very 
well be taken care of thanks to some of the people I spoke to 
after the talk if they decide to use this once it's more fully 
featured.


Re: Binderoo - we're open sourcing our binding system

2016-08-17 Thread Ethan Watson via Digitalmars-d

On Tuesday, 16 August 2016 at 15:31:20 UTC, Meta wrote:


Did you get a decent crowd despite giving your talk at the same 
time as John Romero?


Estimate of about 80-100 people.

Romero is a nice guy though. http://i.imgur.com/kTrfAZqh.jpg


Re: Binderoo - we're open sourcing our binding system

2016-08-17 Thread Ethan Watson via Digitalmars-d

On Tuesday, 16 August 2016 at 17:53:20 UTC, Meta wrote:

On Tuesday, 16 August 2016 at 12:30:14 UTC, Ethan Watson wrote:
Looking through your slides, I noticed that there's no need to 
pass `typeof(this)` to GenerateStubsFor.


Correct. Notice that a few slides later, the BindAllImports 
mixin does exactly what you say. At that point, since the talk 
is something of an introduction to the language, it's more about 
fully introducing the concept of typeof.


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Wednesday, 31 August 2016 at 08:06:05 UTC, Basile B. wrote:
allow them to see everything, then use "getProtection" if you 
wanna be conform with the protection attributes.


That's how it used to work, but getProtection would fail if the 
symbol wasn't public. Which led to me using a workaround to 
something of this effect:


enum PrivacyLevel : string
{
    Public          = "public",
    Private         = "private",
    Protected       = "protected",
    Export          = "export",
    Package         = "package",
    Inaccessible    = "inaccessible"
}
//

template PrivacyOf( alias symbol )
{
    static if( __traits( compiles, __traits( getProtection, symbol ) ) )
    {
        enum PrivacyOf = cast( PrivacyLevel )__traits( getProtection, symbol );
    }
    else
    {
        enum PrivacyOf = PrivacyLevel.Inaccessible;
    }
}
//
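
Usage ends up looking something like the following trimmed 
illustration (hypothetical type, not the real Binderoo code):

struct Example
{
    private int m_hidden;
    int         visible;
}

// Guard further introspection on what the template reports
static if( PrivacyOf!( Example.visible ) != PrivacyLevel.Inaccessible )
{
    pragma( msg, "Example.visible can be introspected" );
}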

Still not an ideal solution - because if I'm trying to serialise 
and deserialise everything between module reloads, I still need 
to use the .tupleof method to get all data members; and if I 
want to define privacy levels for functions I'm automatically 
binding from C++, I need to muddy those waters with UDAs etc.


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Tuesday, 30 August 2016 at 22:24:12 UTC, Ali Çehreli wrote:

I don't agree with the current solution:


I'm somewhat surprised myself that "allMembers doesn't return all 
members" needs highlighting.


Why not have a new trait "allVisibleMembers" and just fix the 
privacy issues?


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Wednesday, 31 August 2016 at 09:25:52 UTC, Basile B. wrote:
nice idea, but this doesn't change the fact that the traits 
that access the results of the "omniscient" allMember must be 
tweaked to access everything.


I'm okay with this. My PrivacyLevel workaround does exactly this 
in fact.


But I would like to be able to read (and write) all members of a 
class without needing to mix in a template and without having to 
resort to .tupleof. The use case here is the extensive struct 
use we have: if we want those structs to match C++ exactly, 
that's where mixins become potentially hairy.
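
For reference, the .tupleof route looks roughly like this 
minimal sketch (not Binderoo code) - it reaches data members 
directly, regardless of protection:

struct CppMirror
{
    private int   m_first;
    private float m_second;
}

void dumpFields( ref CppMirror value )
{
    import std.stdio : writefln;

    // Tuple foreach is unrolled at compile time, one iteration per field
    foreach( i, ref field; value.tupleof )
    {
        writefln( "%s = %s", __traits( identifier, CppMirror.tupleof[ i ] ), field );
    }
}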


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Wednesday, 31 August 2016 at 09:30:43 UTC, Ethan Watson wrote:
I'm okay with this. My PrivacyLevel workaround does exactly 
this in fact.


I keep forgetting that I'm all open sourced now and can just link 
directly to the full example.


https://github.com/Remedy-Entertainment/binderoo/blob/master/binderoo_client/d/src/binderoo/objectprivacy.d


Re: colour lib

2016-08-31 Thread Ethan Watson via Digitalmars-d
On Wednesday, 31 August 2016 at 09:37:41 UTC, Andrea Fontana 
wrote:


So maybe I miss (more than) something reading source code. You 
should write a readme to explain how it works :)


I can probably chip in and help here at some point (both with 
documentation and ensuring the API is intuitive).


Template visibility

2016-08-31 Thread Ethan Watson via Digitalmars-d

https://github.com/Remedy-Entertainment/binderoo/blob/master/binderoo_client/d/src/binderoo/typedescriptor.d

Inside that is some code I have to translate D types to the C++ 
strings that we expect.


I'm in the middle of making a mathematical vector class that I 
will be sticking in Binderoo, but maps functionality to our 
internal SIMD vector class. As such, to keep Binderoo a clean 
interface for anyone to use, it will not contain any @CTypeName 
UDAs for Remedy-specific type info.


The problem comes when I try to alias to the expected type for 
our D code and provide a C++ string override, like so:


//
module remedy.rl.simdvector;

public import binderoo.math.vector;
public import binderoo.typedescriptor;

alias SIMDVector = binderoo.math.vector.VectorFloat;

enum CTypeNameOverride( T : SIMDVector ) = "r::SIMDVector";

pragma( msg, CTypeString!( SIMDVector ) );
//

The CTypeString template has no visibility to my 
CTypeNameOverride, and as such that pragma prints out 
"VectorFloat" instead of "r::SIMDVector".


Is there some way to mitigate this without needing to resort to 
mixins everywhere? This is one of those things in C++ where, if 
I specialise a template, any invocation of the template after 
that point will go to the specialised version. But in this case, 
because instantiation happens within the scope of the 
binderoo.typedescriptor module instead of within the scope of 
the module the template is invoked from, it just can't see my 
new CTypeNameOverride specialisation.


I do have other instances inside the Binderoo code where I 
resolve the module names for symbols and mixin an import for that 
to make it all just work, but I'm getting tired of having to do 
that every time I come across this problem.
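
For reference, that workaround is shaped roughly like this - a 
sketch only, not the actual Binderoo implementation:

template CTypeStringOf( T )
{
    import std.traits : moduleName;

    // Pull the declaring module into scope so its overrides are visible
    mixin( "import " ~ moduleName!T ~ ";" );

    static if( __traits( compiles, CTypeNameOverride!T ) )
    {
        enum CTypeStringOf = CTypeNameOverride!T;
    }
    else
    {
        enum CTypeStringOf = T.stringof;
    }
}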


Re: Template visibility

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Wednesday, 31 August 2016 at 12:45:14 UTC, Ethan Watson wrote:
I do have other instances inside the Binderoo code where I 
resolve the module names for symbols and mixin an import for 
that to make it all just work, but I'm getting tired of having 
to do that every time I come across this problem.


I also realised that won't work in this case, as getting the 
module of the SIMDVector alias will in fact give me the module 
binderoo.math.vector. std.typecons.Typedef might work out just 
fine for this case, though.


So why was typedef bad?

2016-08-31 Thread Ethan Watson via Digitalmars-d

http://dlang.org/deprecate.html#typedef

"typedef is not flexible enough to cover all use cases. This is 
better done with a library solution."


[Citation needed]

What use cases is it not flexible enough for?

This is tangentially related to my other topic about template 
visibility, specifically the alias I'm trying to do to my 
binderoo.math.vector.VectorFloat.


The problem with alias is that it won't instantiate an entirely 
new symbol. It's effectively a hard link to another symbol. 
Trying to resolve the module name won't actually give me what I 
want here.


Maybe the deprecated typedef will get me what I want? I can't 
make Visual D respect my -d command line properly, so I can't get 
in and quickly check if things are okay there.


Right, so off to the library solution, std.typecons.Typedef. Uh. 
This isn't a typedef. This embeds one type within another and 
tries to mimic that type as much as possible. And it makes that 
member private. You know what this means - if I try to parse over 
it for my serialisation pass for Binderoo, I can't use __traits( 
allMembers ) to get to it. Also, technically, since it's an 
object within an object it will need to double up on the JSON 
hierarchy to store it unless I get in and specialise it.
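
A minimal sketch of what I mean (nothing to do with the Binderoo 
types):

import std.typecons : Typedef;

alias Handle = Typedef!int;

void example()
{
    Handle h = Handle( 42 );
    int raw = cast( int )h; // explicit conversion back to the wrapped type

    // The actual value lives in a private payload member of the wrapper
    // struct, which is what makes introspecting over it awkward.
}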


At the very least, it seems that Typedef should really be called 
TypeWrapper; that name would actually make sense for its 
functionality.


Which gets back to the keyword typedef. Sure, it's not as 
flexible as alias. And I don't even know whether a typedef in 
one module would result in symbol resolution to that module or 
not. But what were the actual problems with it?


Re: So why was typedef bad?

2016-08-31 Thread Ethan Watson via Digitalmars-d

On Wednesday, 31 August 2016 at 14:05:16 UTC, Chris Wright wrote:

Specifying the default value for the type.


Alias has the same problem in this case.

Making all typedefs from a base type implicitly convert to each 
other without warning unless you're careful, which should be a 
bug.


Which sounds like unique types constructed from other types are 
wanted instead of a typedef.


At the very least, if those were the actual problems, then it 
seems like std.typecons.Typedef has been transformed into 
something other than a typedef simply for the crime of typedef 
being a subset of alias' functionality. Dropping typedef might 
make sense in favour of alias, but redirecting to something 
entirely different in the official documents... I know I just 
wasted some time evaluating its usefulness, at least.


I'm making a distinction between a typedef and a type mimic here 
because C++ interop is a big factor in our usage, and mixing up 
concepts in a language that's meant to make that interop easy is 
not ideal. Although looking at std.typecons.Typedef, I'd wonder 
if a type-mimic language feature would have been a better way to 
go...


Re: Fallback 'catch-all' template functions

2016-09-01 Thread Ethan Watson via Digitalmars-d

On Thursday, 1 September 2016 at 05:37:50 UTC, Manu wrote:
I have a recurring problem where I need a fallback function 
like the bottom one, which should be used in lieu of a more 
precise match.


+1. I've already hit this a few times with Binderoo. I would have 
assumed that constraints are just another form of specialisation, 
but it requires me to be explicit with the base functionality.
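
A minimal illustration of the complaint, as I understand the 
current behaviour:

void handle( T )( T value ) if( is( T : int ) )
{
    // precise match
}

void handle( T )( T value ) if( !is( T : int ) )
{
    // fallback - drop the !is() constraint and handle( 5 ) becomes
    // ambiguous, rather than preferring the more precise overload
}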


Re: Fallback 'catch-all' template functions

2016-09-01 Thread Ethan Watson via Digitalmars-d
On Thursday, 1 September 2016 at 10:43:50 UTC, Dominikus Dittes 
Scherkl wrote:
I have never seen what benefit could be gained from having 
overloads.


Oh, going the "all in one" method is perfectly fine if you're 
not writing a library that's designed to allow user extension. 
But if you encourage your users to modify your function itself, 
they can no longer drop in a new version and instead have to do 
a merge.


Templates in general seem weak for libraries in D, which is a 
pain considering templates are one of the areas where the 
language otherwise excels.


Re: Fallback 'catch-all' template functions

2016-09-01 Thread Ethan Watson via Digitalmars-d
On Thursday, 1 September 2016 at 11:01:28 UTC, Dominikus Dittes 
Scherkl wrote:
Ok, that may be fine, until you reach the point with the 
fallback version: if after that point someone "drops in" a new 
version, he silently changes the behavior of the function, 
because he "steals" some type which used to use the fallback 
version.


I don't see how that can be considered anything other than 
"expected behaviour", and thus something your design should take 
into account. If you give your users the keys to the kingdom, 
you need to expect them to use them.


Quality of errors in DMD

2016-09-02 Thread Ethan Watson via Digitalmars-d
Can we have a serious discussion in here about the quality of DMD 
errors?


I've been alternately a dog chasing its own tail, and a dog 
barking at a fire hydrant, chasing down errors deep in templated 
and mixin code over the last day. This has resulted in manually 
reducing templates and mixins by hand until I get to the root of 
the problem, which then results in submitting a bug and devising 
an ugly workaround.


And then I get this one in some code:

Assertion failure: '0' on line 1492 in file 'glue.c'

The problem ended up being that a symbol tucked away in a 
template that itself was tucked away in a template was undefined, 
but it couldn't tell me that. Rather, it just assert(0)'d and 
terminated. Rather less helpfully, the only meaningful 
information it could give me at the assert point (Could it add to 
it further down the stack? Maybe?) was defined out because DMD 
wasn't in a debug build.


Honestly, I find stuff like this in a compiler unacceptable. 
Using assert(0) as shorthand for an unexpected error is all fine 
and dandy until you put your product in the hands of the masses, 
who expect your program to at least give them some idea of what 
went wrong rather than just crashing out in flames.


So just for fun, I searched DMD for all instances of assert(0) in 
the code base.


830 matches in DMD 2.070.2.

That's 830 possible places where the compiler will give the user 
virtually no help to track down what (if anything) they did wrong.


DMD certainly isn't the only compiler guilty of this. The .NET 
compiler gives precisely no useful information if it encounters 
SSE types in C++ headers for example. But compared to MSVC, I've 
found the error reporting of DMD to be severely lacking. In most 
cases with MSVC, I have an error code that I can google for which 
is (sometimes) thoroughly documented. And thanks to being a 
widely used product, Stack Overflow usually gives me results that 
I can use in my sleuthing.


I know I'm also seeing more errors than most because I'm doing 
the kind of code most people don't do. But I'm certainly of the 
opinion that searching for a compiler error code is far easier 
than trying to trick Google into matching the text of my error 
message.


Re: Quality of errors in DMD

2016-09-02 Thread Ethan Watson via Digitalmars-d

On Friday, 2 September 2016 at 21:16:02 UTC, Walter Bright wrote:
assert()s are there to check that impossible situations in the 
compiler don't actually happen. They are not for diagnosing 
errors in user code.


If a user sees an assert, it is a compiler bug and hopefully 
he'll submit a bug report for it in bugzilla. There aren't many 
open assert bugs in bugzilla because we usually put a priority 
on fixing them.


You know, I'd love to submit a bug about it. But after actually 
working out the problem without the compiler's help, I can't get 
a minimal enough test case to submit a bug with. I'll try it 
with Dustmite. But in this case, there's debug code right there 
that could spit out the information it has - and probably a 
stack to give it context.


This is legitimately the kind of stuff that drives an average 
user away from a language. I knew that commenting out one 
template invocation fixed my code, but not how to fix my template 
without a bunch of painstaking removal and experimentation. Call 
it what you want, but that's a bad user experience.


Re: Quality of errors in DMD

2016-09-03 Thread Ethan Watson via Digitalmars-d

On Friday, 2 September 2016 at 21:52:57 UTC, Walter Bright wrote:
I understand your concern, and that's why we put a priority on 
fixing asserts that are submitted to bugzilla. But without a 
bug report, we are completely dead in the water in fixing it. 
It is supposed to never happen, that is why we cannot fix them 
in advance.


We have a rule in our codebase, and it's been the same in every 
codebase I've worked in professionally. If you throw an assert, 
you have to give a reason and useful information in the assert.


Thus, an error in code that looks like:

assert(0);

makes no sense to me when in this case it could have been:

fatal_error("Invalid type (from %d to %s)", tx->ty, tx->toChars());


The quality of error reporting has immediately increased. Sure, 
it would require the creation of a fatal_error macro that does 
an assert, but now I'm not scratching my head wondering what 
went wrong in the compiler.


Browsing through that function, I can also see another assert 
that doesn't let you use vector types unless you're running a 
64-bit build or are on OSX. It doesn't tell me that through an 
error message. I had to look at the source code to work it out. 
fatal_error("Vector types unavailable on the target platform"); 
and someone's day was made better. And then a couple of lines 
above that, another assert(0). fatal_error("Invalid vector type 
%s", tx->toChars()); and someone can deal with the unexpected.


If I have to open up the compiler's source to get an idea of what 
I'm doing wrong, that's a bad user experience. And why I want a 
discussion about this. Not to whinge, not to get a bug fix. But 
to highlight that assert(0) is a bad pattern and there should be 
a discussion about changing the usage of asserts inside DMD.


Re: Quality of errors in DMD

2016-09-03 Thread Ethan Watson via Digitalmars-d
On Saturday, 3 September 2016 at 13:20:37 UTC, Adam D. Ruppe 
wrote:
Except that in the real world, it is an irrelevant distinction 
because you have stuff to do and can't afford to wait on the 
compiler team to actually fix the bug.


If nothing else, you'd like to know where it is so you can hack 
around the bug by changing your implementation. I'm sure every 
long time D programmer (and likely C++ if you've been in a long 
time, I have hit many bugs in g++ too) has hit a compiler bug 
and "fixed" it by using some different approach in their user 
code.


Exactly this. If a compiler bug stops someone from working in a 
production environment because there's no information about why 
the bug occurred, the semantic difference between a compiler bug 
and a user code bug means precisely nothing to the end user. It 
does mean they're losing hours of work while the problem is 
clumsily diagnosed.


In the cases I've been bringing up here, it's all been user code 
that's been the problem *anyway*. Regardless of if the compiler 
author was expecting code to get to that point or not, erroring 
out with zero information is a bad user experience.


This also gets compounded in environments where you can't just 
grab the hottest DMD with a compiler bug fix. Before too long, 
our level builders will be using D as their scripting language. 
They need a stable base. We can't do something like upgrade a 
compiler during a milestone week, so upgrades will be scheduled 
(I'm planning on going with even-numbered releases). A fix for 
the compiler bug is no good if I can't ship it out for months. 
The only way to go there is to implement workarounds until such 
time as an upgrade is feasible.


(Side note: There's zero chance of me upgrading to the next DMD 
if it retains the altered allMembers functionality)


These kinds of problems are likely to be compounded when D 
reaches critical mass. It's all well and good to tell people in 
the enthusiast community "Run Dustmite to get a repro case and 
make a bug". If a problem isn't easily googlable or 
understandable from the error reporting, then that's a turn off 
for a wider audience.


Re: Usability of "allMembers and derivedMembers traits now only return visible symbols"

2016-09-03 Thread Ethan Watson via Digitalmars-d
On Saturday, 3 September 2016 at 21:54:24 UTC, Jacob Carlborg 
wrote:
Here's the PR that introduced the change: 
https://github.com/dlang/dmd/pull/6078


I'm certainly not going to upgrade to the next DMD if this change 
is retained. allMembers not returning all members makes 
introspection entirely useless when it comes to Binderoo.


The wrong conclusions were made from that bug to begin with it 
seems. allMembers should return all members. getProtection should 
report on the protection of a symbol *regardless* of whether 
getMember will succeed or not (this is currently why I have my 
PrivacyOf workaround - to stop the compiler crashing when using a 
template instead of a mixin template to do introspection).


getMember itself, well, I'd honestly prefer if there was a way to 
get to members without having to correlate with .tupleof as it 
will simplify Binderoo code. The .tupleof method doesn't help me 
when it comes to introspecting private/protected functions for 
C++ binding.


Re: Quality of errors in DMD

2016-09-04 Thread Ethan Watson via Digitalmars-d
On Saturday, 3 September 2016 at 22:53:25 UTC, Walter Bright 
wrote:
Adding more text to the assert message is not helpful to end 
users.


Really? I've highlighted several cases just in this one function 
in glue.c that would have saved me hours of time.


Re: Quality of errors in DMD

2016-09-04 Thread Ethan Watson via Digitalmars-d

On Sunday, 4 September 2016 at 00:09:50 UTC, Stefan Koch wrote:
Perhaps the best error message would be "Please post this as a 
bug to bugzilla."


I'd say that's in addition to getting rid of assert(0), not 
instead of.


Re: Quality of errors in DMD

2016-09-04 Thread Ethan Watson via Digitalmars-d

On Sunday, 4 September 2016 at 05:13:49 UTC, Walter Bright wrote:
If you're willing to look at the file/line of where the assert 
tripped, I don't see how a message would save any time at all.


The level builders at Remedy are going to be using D as a 
scripting language. If they get an error like this, they can't 
be expected to open up the compiler's source code - especially 
since I won't be shipping it out to them as part of the tool 
chain.


Re: Quality of errors in DMD

2016-09-04 Thread Ethan Watson via Digitalmars-d

On Sunday, 4 September 2016 at 10:33:44 UTC, Walter Bright wrote:
As I mentioned before, assert failures are usually the result 
of the last edit one did. The problem is already narrowed down.


I got the error at the start of the thread because I added a 
variable to a class. The class has two mixins applied to it, 
which themselves invoke templated code. I knew that commenting 
out this variable would work around the problem - but it very 
definitely was not a good workaround, as this struct is being 
used to match a C++ struct. Which meant I had to fumble my way 
through multiple mixins and templates to work out what had 
actually caused the problem.


Saying an assert is the result of the last thing you did, sure, 
it tends to be correct. But it's not as simple in D as it was in 
the C/early C++ days, especially when mixins are already a pain 
to debug.


Re: ADL

2016-09-04 Thread Ethan Watson via Digitalmars-d
On Saturday, 3 September 2016 at 01:09:18 UTC, Walter Bright 
wrote:

Fourth solution:

module myalgorithm;

void test(T)(T t)
{
import std.traits;
mixin("import " ~ std.traits.moduleName!T ~ ";");
mixin("alias M = " ~ std.traits.moduleName!T ~ ";");
// The above could be encapsulated into an eponymous 
template

// that takes T as a parameter and returns the alias

M.f(t);
}


Chipping in to say that I currently do something like this with 
Binderoo templates... and it sucks.


https://github.com/Remedy-Entertainment/binderoo/blob/master/binderoo_client/d/src/binderoo/variabledescriptor.d

One example is in there, the VariableDescriptors eponymous 
template, where a template that collects every member variable 
of an object has to mix in imports for the module of each 
encountered member variable type to stop the compiler 
complaining about module visibility. So I'm doing the double 
whammy of taxing the template expansion engine and the CTFE 
engine. It could be that switching it to a mixin template (and 
working out some way to make it as usable as eponymous 
templates) will solve the problem - but the way this codebase is 
going, it's going to mean every template needs to be a mixin.


Surely the base template system can be more flexible than this?


Re: ADL

2016-09-05 Thread Ethan Watson via Digitalmars-d

On Monday, 5 September 2016 at 01:00:26 UTC, Walter Bright wrote:

What about using this template?


Sure, it'll work assuming the module imports all its symbols 
publicly, but it's still not as usable as it should be. I still 
need to invoke it for a number of things, including member 
variable types.


If the member variable is templated, I need to analyse the 
template arguments for types to import them too.


If it is a function, I need to treat each argument as I treat a 
member variable.


I started a thread the other day that touches on another problem 
I have which this template won't solve: 
https://forum.dlang.org/thread/wggldyzrbwjboibin...@forum.dlang.org


At least in my use cases, it comes down to the template instance 
not inheriting the visibility of symbols from its template 
parameters. Which leads to these workarounds.


We're aiming for the goal of sub-second compile and reload times 
for rapid iteration, both with normal code and scripter code. 
Anything I have to do through templates and CTFE slows compile 
times down, in some cases significantly.


Re: Taking pipeline processing to the next level

2016-09-05 Thread Ethan Watson via Digitalmars-d
On Monday, 5 September 2016 at 08:21:53 UTC, Andrei Alexandrescu 
wrote:
What are the benchmarks and the numbers? What loss are you 
looking at? -- Andrei


Just looking at the example, and referencing the map code in 
std.algorithm.iteration, I can see multiple function calls 
instead of one, thanks to every indexing of the new map range 
performing the transformation instead of caching it. I'm not 
sure if the lambda declaration there will result in the argument 
being taken by ref or by value, but let's assume by value for 
the sake of argument. Depending on whether it's taking a 
reference type or a value type by value, that could either be a 
cheap function call or an expensive one.


But even if it took it by reference, it's still a function call. 
Function calls are generally The Devil(TM) in a gaming 
environment. The less you can make, the better.
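
A tiny illustration of the indexing point (assuming 
std.algorithm.iteration.map, nothing Remedy-specific):

void example()
{
    import std.algorithm.iteration : map;

    int calls = 0;
    auto source = [ 1, 2, 3 ];
    auto mapped = source.map!( ( int x ) { ++calls; return x * 2; } );

    auto a = mapped[ 0 ];
    auto b = mapped[ 0 ];
    assert( calls == 2 ); // same element, the lambda ran twice
}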


Random aside: There are streaming store instructions available to 
me on x86 platforms so that I don't have to wait for the 
destination to hit L1 cache before writing. The pattern Manu 
talks about with a batching function can better exploit this. But 
I imagine copy could also take advantage of this when dealing 
with value types.


Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
Alright, so now I've definitely come up across something with 
Binderoo that has no easy solution.


For the sake of this example, I'm going to use the class I'm 
binary-matching with a C++ class and importing functionality with 
C++ function pointers to create a 100% functional match - our 
Mutex class. It doesn't have to be a mutex, it just needs to be 
any C++ class where a default constructor is non-trivial.


In C++, it looks much like what you'd expect:

class Mutex
{
public:
  Mutex();
  ~Mutex();
  void lock();
  bool tryLock();
  void unlock();

private:
  CRITICAL_SECTION  m_criticalSection;
};

Cool. Those functions call the exact library functions you'd 
expect, the constructor does an InitializeCriticalSection and the 
destructor does a DeleteCriticalSection.


Now, with Binderoo aiming to provide complete C++ matching to the 
point where it doesn't matter whether a class was allocated in 
C++ or D, this means I've chosen to make every C++-matching class 
a value type rather than a reference type. The reasoning is 
pretty simple:


class SomeOtherClass
{
private:
  SomeVitalObject m_object;
  Mutex   m_mutex;
};

This is a pretty common pattern. Other C++ classes will embed 
mutex instances inside them. A reference type for matching in 
this case is right out of the question. Which then leads to a 
major conundrum - default constructing this object in D.


D structs have initialisers, but you're only allowed constructors 
if you pass arguments. With a Binderoo matching struct 
declaration, it would basically look like this:


struct Mutex
{
  @BindConstructor void __ctor();
  @BindDestructor void __dtor();

  @BindMethod void lock();
  @BindMethod bool tryLock();
  @BindMethod void unlock();

  private CRITICAL_SECTION m_criticalSection;
}

After mixin expansion, it would come out looking something like 
this:


struct Mutex
{
  pragma( inline ) this() { __methodTable.function0(); }
  pragma( inline ) ~this() { __methodTable.function1(); }

  pragma( inline ) void lock() { __methodTable.function2(); }
  pragma( inline ) bool tryLock() { return 
__methodTable.function3(); }

  pragma( inline ) void unlock() { __methodTable.function4(); }

  private CRITICAL_SECTION m_criticalSection;
}

(Imagine __methodTable is a gshared object with the relevant 
function pointers imported from C++.)


Of course, it won't compile. this() is not allowed for obvious 
reasons. But in this case, we need to call a corresponding 
non-trivial constructor in C++ code to get the functionality 
match.


Of course, given the simplicity of the class, I don't need to 
import C++ code to provide exact functionality at all. But I 
still need to call InitializeCriticalSection somehow whenever 
it's instantiated anywhere. This pattern of non-trivial default 
constructors is certainly not limited to mutexes, neither in our 
codebase nor in wider C++ practice.


So now I'm in a bind. This is one struct I need to construct 
uniquely every time. And I also need to keep usability up by not 
requiring a call to some other function, since this is matching 
a C++ class's functionality, including its ability to be 
instantiated anywhere.


Suggestions?


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 13:57:27 UTC, Lodovico Giaretta 
wrote:
Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


static opCall doesn't work for the SomeOtherClass example listed 
in OP. @disable this() will hide the static opCall and the 
compiler will throw an error.


Somewhat related: googling "factory method dlang" doesn't provide 
any kind of clarity on what exactly a factory method is. 
Documentation for factory methods/functions could probably be 
improved on this front.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 14:27:49 UTC, Lodovico Giaretta 
wrote:
That's because it doesn't initialize (with static opCall) the 
fields of SomeOtherClass, right? I guess that could be solved 
once and for all with some template magic of the binding system.


Correct for the first part. The second part... not so much. With 
these being value types, there's nothing stopping you 
instantiating the example Mutex on the stack in a function in D 
- and no way of forcing the user to go through a custom 
construction path either.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d

On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson wrote:

Suggestions?


Forgot to mention in OP that I had tried this( void* pArg = null 
); to no avail:


mutex.d(19): Deprecation: constructor mutex.Mutex.this all 
parameters have default arguments, but structs cannot have 
default constructors.


It's deprecated and the constructor doesn't get called. So no 
egregious sploits for me.


Re: @property Incorrectly Implemented?

2016-09-07 Thread Ethan Watson via Digitalmars-d

On Tuesday, 6 September 2016 at 19:18:11 UTC, John wrote:

It would be nice to get this behavior fixed.


Disagree. I've used properties before in C# to transform to and 
from the data sets required for network multiplayer. It works 
functionally the same way in D. The behaviour is what I would 
expect given the wider implications of how properties can work.

Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-07 Thread Ethan Watson via Digitalmars-d

On Tuesday, 6 September 2016 at 14:49:20 UTC, Ethan Watson wrote:

this( void* pArg = null );


Also doesn't work: this( Args... )( Args args ) if( Args.length 
== 0 )


Just for funsies I tried making my Mutex a class for the purpose 
of embedding it manually in a struct. But thanks to all classes 
inheriting from Object there's 16 bytes at the front of the class 
that I don't want (64-bit build, it's 8 bytes in 32-bit builds 
but we're never going back to 32-bit). So that's very definitely 
out of the question.


static opCall() seems to be the only way to do this then. I can 
autogenerate it for any C++-bound class. But it's inadequate. It 
leaves room for user error when instantiating any C++ object in 
D. It's also another thing that C++ programmers need to be 
thoroughly educated about, as Type() in C++11 invokes the zero 
initialiser, but in D it's effectively the opposite semantics.
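
For reference, the pattern looks roughly like this - 
illustrative names only, not actual Binderoo output, with 
cppMutexConstruct standing in for the bound C++ constructor:

struct Mutex
{
    static Mutex opCall()
    {
        Mutex m;                    // starts life as .init, no constructor runs
        cppMutexConstruct( &m );    // explicitly run the bound C++ constructor
        return m;
    }

    private ubyte[ 40 ] m_criticalSectionStorage; // stand-in for CRITICAL_SECTION
}

__gshared void function( Mutex* ) cppMutexConstruct; // assumed filled in at load time

void usage()
{
    Mutex viaOpCall = Mutex(); // runs the C++ constructor
    Mutex viaInit;             // does not - exactly the user error I'm worried about
}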


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-07 Thread Ethan Watson via Digitalmars-d

On Wednesday, 7 September 2016 at 11:42:40 UTC, Dicebot wrote:

If it is so, I'd call it a major extern(c++) bug.


The documentation seems to be correct. I can't extern( C++, class 
) or extern( C++, struct ) on an object, even in DMD 
2.071.2-beta3.


But ignoring that. My first member is offset by 8 bytes, even in 
an extern( C++ ) class. I assume it's just blindly sticking a 
vtable in there regardless of whether I actually define virtual 
functions or not.


But regardless. Making it a class is still a bad idea, since in 
this exact example it needs to exist on the stack/within an 
object's scope, which means you then need to further hack around 
with emplacement and wrappers and blah.


Binary matching, non-trivial constructors, and treating C++ 
objects like the value types they are will be required to make 
Binderoo work effortlessly. I've got two out of three of those. 
Not having any one of those is something of a deal breaker unless 
I get an effective workaround.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-07 Thread Ethan Watson via Digitalmars-d

On Wednesday, 7 September 2016 at 11:19:46 UTC, Dicebot wrote:

Is using scope class out of the question?


This might actually get me what I want. I'll have to play around 
with it and see.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-07 Thread Ethan Watson via Digitalmars-d
On Wednesday, 7 September 2016 at 12:09:21 UTC, Ethan Watson 
wrote:
This might actually get me what I want. I'll have to play 
around with it and see.


"Scope classes have been recommended for deprecation."

"A scope class reference can only appear as a function local 
variable."


So that's two nopes right there.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-07 Thread Ethan Watson via Digitalmars-d
On Wednesday, 7 September 2016 at 12:14:46 UTC, rikki cattermole 
wrote:

http://dlang.org/phobos/std_typecons.html#.scoped


This is the kind of hackaround I'd need to do if it were a 
class... And it would require more hacking around than the 
standard library supports. And it's a spiraling-out-of-control 
hack, which would effectively mean every C++ matching class will 
need to define a class and then an alias with the scoped type, 
and then that means the pattern matching I've been creating for 
function linkups won't work any more...


static opCall() and a static alloc function for allocating on the 
heap are still looking like the simplest options here.


Re: Templates are slow.

2016-09-08 Thread Ethan Watson via Digitalmars-d

On Thursday, 8 September 2016 at 05:02:38 UTC, Stefan Koch wrote:
I have just hit a barrier trying to optimize the compile-time 
in binderoo.


I did a double take when Stefan told me the representative sample 
code I gave him to run with Binderoo instantiated ~20,000 
templates and resulted in ~10,000,000 hash map look ups inside 
the compiler.


I can certainly write it to be more optimised, but one of the 
goals I have is to make the codebase human readable so that it's 
not just Manu and myself that can understand the code. As a 
result, I figure this could be representative of how an ordinary 
user would write templated code.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-08 Thread Ethan Watson via Digitalmars-d
On Wednesday, 7 September 2016 at 21:05:32 UTC, Walter Bright 
wrote:
5. In my not-so-humble opinion, construction should never fail 
and all constructors should be nothrow, but I understand that 
is a minority viewpoint


100% agree there. I can't think of any class in our C++ codebase 
that fails construction, and it's a pretty common rule in the 
games industry to not assert/throw exceptions during construction.


Of course, with Binderoo being open sourced, I can't guarantee 
any of my end users will be just as disciplined.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-08 Thread Ethan Watson via Digitalmars-d
On Wednesday, 7 September 2016 at 22:52:04 UTC, Walter Bright 
wrote:

Is:

if (resource != null)
resource.destroy();

v.s.:

resource.destroy();

so onerous? It's one TST/JNE pair for a value loaded into a 
register anyway.


This one has performance implications for game developers. The 
branch predictor in the CPU used for the Xbox One and the PS4 
isn't the greatest. If, for example, that destructor gets inlined 
and you're iterating over a range of resources and the destroy 
method is virtual, there's a good chance you will invoke the 
wrath of the dense branch predictor. You don't want to deal with 
the dense branch predictor.


http://www.agner.org/optimize/microarchitecture.pdf section 3.13 
has a bit more info on the branch predictor. Desktop Intel CPUs 
tend to hide performance problems like this thanks to their 
far-higher-quality branch predictors. Both chips gain benefits 
from sorting to how you expect the branch predictor to work, but 
there's a lot of code in a game codebase that isn't that low 
level.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-08 Thread Ethan Watson via Digitalmars-d

On Thursday, 8 September 2016 at 09:33:01 UTC, Dicebot wrote:
Instead, it would be much more constructive (pun unintended) to 
focus on language changes to extern(c++) class bindings to make 
them suitable for the task - those won't affect anyone but C++ 
interop users.


I agree in principle, but it doesn't help me right now. It's 
holding up my work, which means it's costing someone money. 
Workarounds will have to suffice until the language can be 
updated.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-08 Thread Ethan Watson via Digitalmars-d

On Thursday, 8 September 2016 at 10:36:22 UTC, Dicebot wrote:
As a workaround I sincerely believe explicit 'create' (with 
forged mangling if needed) is better. It provides exactly the 
same functionality without tricking the developer into 
expecting more by confusion of the syntax similarity.


If I was to enforce a programming standard with static opCall(), 
the code for instantiating the Mutex example would look like:

Mutex foo = Mutex();

Later on down the track, behind the scenes, when default 
constructors work for C++ types, I remove the static opCall() 
implementation and replace it with default constructors. Right 
now, Mutex() without static opCall() just gives me the .init. 
With the static opCall(), I can construct it. With a default 
constructor?


I suppose that'd depend on future decisions that haven't been 
made yet. In C++ Mutex() is meant to invoke the zero initialiser. 
It's effectively the opposite in D when using static opCall(). 
Which one would be the correct way to default construct a class? 
We'll find out I suppose.


Either way, assuming the default constructor will be called 
regardless of whether it's foo = Mutex; or foo = Mutex();, using 
static opCall() will cut down on future maintenance work.


We're going to disagree on this one, basically. I'm designing 
this system for people who don't want to have to remember to call 
fancy create functions.

