Gabriel Dos Reis writes:
> >I believe you're confused about the semantics.  
> >The issue here is that the *size of object* requested can be
> >represented.  That is independent of whether the machine has enough
> >memory or not.  So, new_handler is a red herring

On Sat, Apr 07, 2007 at 06:05:35PM -0400, Ross Ridge wrote:
> The issue is what GCC should do when the calculation of the size of
> memory to allocate with operator new() results in unsigned wrapping.
> Currently, GCC's behavior is standard conforming but probably isn't the
> expected result.  If GCC does something other than what operator new()
> does when there isn't enough memory available then it will be doing
> something that is both non-conforming and probably not what was expected.

Consider an implementation that, when given

         Foo* array_of_foo = new Foo[n_elements];

passes __compute_size(n_elements, sizeof(Foo)) instead of n_elements * sizeof(Foo)
to operator new, where __compute_size is

inline size_t __compute_size(size_t num, size_t size) {
    // Multiplying first and testing product >= num misses overflows where
    // the wrapped product still exceeds num, so test before multiplying.
    // size is sizeof(Foo), a compile-time constant, so the division folds
    // into a comparison against a constant.
    return num <= ~size_t(0) / size ? num * size : ~size_t(0);
}

This counts on the fact that any operator new implementation must fail
when asked to supply every addressable byte, less one.  It would appear
that the extra cost, in the non-overflow case, is two instructions on
most architectures: a compare and a branch, which can be arranged so
that the branch is predicted not-taken.

I haven't memorized the standard, but I don't believe this
implementation would violate it: the behavior differs only when more
memory is requested than could possibly be delivered.
