On 10/17/12 23:00, David Nadlinger wrote:
> On Wednesday, 17 October 2012 at 20:37:53 UTC, Artur Skawina wrote:
>> Well, I think such optimizations are fine (as long as documented and there
>> exist alternatives), but note that this testcase checks for the case where
>> the object size calculation overflows. I.e. it must not succeed.
> 
> Could you elaborate on that? It strikes me that this is either a GC 
> implementation detail or invalid D code in the first place (i.e. should not 
> be expected to compile resp. is undefined behavior).

Well, e.g. on a 32-bit platform the newly allocated memory object would need to
have a size of 8*2G == 16G. I guess you could see it as a GC implementation
detail, but that allocation can never succeed, simply because such an object
would be larger than the available address space and hence can't be mapped
directly.
The 'new long[ptrdiff_t.max]' case can be caught at compile time, but a
different 'new long[runtime_variable_which_happens_to_be_2G]' cannot, and then
the GC MUST catch the overflow, instead of allocating a ((size_t)long.sizeof*2G)
sized object. That is what I assume the test was meant to check.
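
For concreteness, here is a minimal sketch of the kind of check I mean. This is
not druntime's actual code, and allocArray is a made-up helper; it only shows
that the byte count (length * element size) has to be validated before the GC
is asked for memory:

import core.memory : GC;

// Hypothetical helper, not the real runtime entry point: allocate an array
// of `length` elements of `elemSize` bytes each, refusing requests whose
// byte count would overflow size_t.
void* allocArray(size_t length, size_t elemSize)
{
    if (length != 0 && elemSize > size_t.max / length)
        throw new Error("array allocation size overflows size_t");
    return GC.malloc(length * elemSize); // 8 * 2G never gets this far on 32-bit
}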

But even in the constant, statically checkable case, would it make sense to
ignore the error if the allocation is "dead"? If nothing ever accesses the new
object, ignoring the error seems harmless. But is it OK to let faulty code run
silently as long as the compiler can prove that the bug won't be triggered?
Will every compiler make the same decision? Would a different optimization
level cause the error to be thrown? For these reasons, silently optimizing
away "harmless" but buggy code is not a good idea.
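
To make the "dead" case concrete (a hypothetical function, using the constant
from above):

void f()
{
    // On a 32-bit target the size calculation (8 * ptrdiff_t.max) overflows,
    // so this statement is buggy whether or not `dead` is ever read; a
    // compiler may reject it outright, or it must throw at run time.  The
    // question is whether an optimizer that sees the result is unused may
    // drop the whole statement, and with it the error.
    auto dead = new long[ptrdiff_t.max];
}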

artur
