Given the following union:

union F
{
    double x;
    struct {
        ulong lo;
        ulong hi;
    }
}
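
For context, here is a quick size check I can run against the same F; I am assuming x shares its first 8 bytes with lo, with hi sitting in the 8 bytes after that:

import std.stdio;

void main()
{
    writeln(double.sizeof); // 8
    writeln(ulong.sizeof);  // 8
    writeln(F.sizeof);      // 16 -- the union is as large as its largest member
}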

I do not understand the output of the program below. Please clarify.

import std.stdio;
void main()
{
    F fp;
    fp.lo.writeln; // Why is this not zero? How is this value derived? (See the hex dump after this snippet.)
    fp.hi.writeln; // expected
    fp.x.writeln;  // expected

    fp.x = 19716939937510315926535.148979323846264338327950288458209749445923078164062862089986280348253421170679;
    fp.lo.writeln;
    fp.hi.writeln;
    fp.x.writefln!"%20.98f"; // Also, why is precision completely lost after 16 digits (18 if I change the type of x to real; see the real variant below)?
}
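
In case it helps with the first question, I also tried dumping the raw bits in hex (my own addition, assuming the same union F as above; hopefully printing the two ulong halves with %x is the right way to look at the shared storage):

import std.stdio;

void main()
{
    F fp;
    // Default-initialized: print the two halves of the shared storage in hex,
    // to see where the non-zero value in lo comes from.
    writefln("lo = 0x%016x", fp.lo);
    writefln("hi = 0x%016x", fp.hi);

    // Same assignment as above, then dump the bits again.
    fp.x = 19716939937510315926535.148979323846264338327950288458209749445923078164062862089986280348253421170679;
    writefln("lo = 0x%016x", fp.lo);
    writefln("hi = 0x%016x", fp.hi);
}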

Sorry if this seems like noise, but I genuinely do not understand. What changes would I need to make to retain the precision of the value provided in the assignment above?
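
For reference, the only change I have tried so far is the real variant mentioned in the code comment above; a sketch of it (calling it G here just to keep it apart from F, and I am not sure the two ulongs still cover all of real's storage, since real's size varies by platform):

union G
{
    real x;   // only guaranteed to be at least as large as double; the 80-bit x87 type where available
    struct {
        ulong lo;  // may no longer line up one-to-one with real's bytes
        ulong hi;
    }
}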

Thanks,
--confuzzled
