You find it well defined and expected that the compiler translates 1024*1024*1024*2 into -1*1024*1024*1024*2?
Why would you not want it to actually become, you know, what you wrote? Since the result is assigned to a ulong it would easily fit in, why would you expect the compiler to follow signed integer overflow rules? I mean, is there ever a time you actually want it to become 18446744071562067968 instead of the expected 2147483648?