On Wednesday, 18 May 2016 at 19:53:10 UTC, Era Scarecrow wrote:
> On Wednesday, 18 May 2016 at 19:36:59 UTC, tsbockman wrote:
>> I agree that intrinsics for this would be nice. I doubt that any current D platform actually computes the full 128-bit result for every 64-bit multiply, though - that would waste both power and performance for most programs.

> Except the 128-bit result is _already_ there at zero cost (at least for the x86 instructions I'm aware of).

Can you give me a source for this, or at least the name of the relevant opcode? (I'm new to x86 assembly.)

> There are bound to be enough use cases (say, pseudo-random number generation, encryption, or numerical processing above 64 bits) that I'd like access to it supported by the language, rather than having to inject instructions with inline asm.

Of course it would be useful to have in the language; I wasn't disputing that. I'd like to have as much support for 128-bit integers in the language as possible. Among other things, this would greatly simplify getting 128-bit floating-point working.

I'm just surprised that the CPU would really calculate the upper 64 bits of a multiply without being explicitly asked to.
