In message <[EMAIL PROTECTED]> Peter Seebach writes:
: >I searched through the standard extensively to see if "allocates
: >space" is defined and couldn't find anything other than 'the pointer
: >can be used to access the space allocated.'
: 
: EXACTLY!
: 
: If it can't actually be used, then something has gone horribly wrong.

It says can, not must.  I disagree with you that you can't overcommit.
Also, "access" is ill-defined.  Does that mean read access?  Write
access?  How about execute access?  The standard just says access and
leaves the meaning up to the implementor.

You fail to draw a distinction between "reading" the allocated area
and "writing" to it.  The standard makes no promise that you can do
both; it offers only the vaguely defined "access" for what you can do
with the space.

Reasonable people can differ as to what this means.

: >There appear to be no
: >guarantees that you can always use all of the memory that you've
: >allocated, the performance characteristics of accessing said memory or
: >anything else.
: 
: Performance characteristics are not at issue; I don't think anyone on the
: committee would complain about an implementation which provided "committed"
: space by backing it all to disk, all the time.

No one would.  However, the standard does not preclude the ability to
overcommit.  Many different systems do this already.

: >Do not read too much into the above words.  They mean exactly what
: >they say.  FreeBSD is in compliance.
: 
: Assuming I make the Copenhagen meeting, I'll bring this up again as a DR
: or something.  This gets debated every few years, and the consensus seems
: to be that malloc *MUST* return a valid pointer to space which can actually
: be used.

Really?  I guess I missed those debates.  I've not seen it in the
literature, and I've not seen it addressed in any meaningful way in
the last 20 years that systems have been overcommitting.

: >The space is indeed allocated to
: >the process address space.  System resource issues might get in the
: >way of being able to use that, but that's true of anything you
: >allocate.
: 
: That's not what "allocate" means.  A resource which is *allocated* is one
: you actually have access to.

Not necessarily.  Allocate means that, in the absence of resource
shortages, you get the memory; you might get it, or you might not.
Consider a system that has a quota on dirty pages.  If I malloc a
huge array and then only use part of it, I can stay under my quota.
But if I dirty most of it by writing to it, the system can kill the
process.

: Basically, if your interpretation had been intended, the committee would have
: put in words to the effect of "access to an allocated object invokes undefined
: behavior".
:
: It doesn't.

Yes it does.  You do not understand.  If there's an ECC error, then
you do not have access to the memory without errors.  This is no
different, just a different class of error.

: Therefore, the following code:
:       #include <stdint.h>
:       #include <stdlib.h>
:       int main(void) {
:               unsigned char *p;
:               if (200000000 < SIZE_MAX) {
:               p = malloc(200000000);
:                       if (p) {
:                               size_t i;
:                               for (i = 0; i < 200000000; ++i) {
:                                       p[i] = i % 5;
:                               }
:                       }
:               }
:               return 0;
:       }
: *MUST* succeed and return 0.  The program does not invoke undefined behavior;
: therefore, the compiler is obliged to ensure that it succeeds.  It's fine for
: malloc to fail; it's not fine for the loop to segfault.

That's not necessarily true.  There's nothing in the standard that
states that you have to be able to write to all parts of an allocated
region.  It is a perfectly reasonable implementation to return the
memory from malloc.  The system might enforce an upper bound on the
number of dirty pages a process has.  malloc would not necessarily
have any way of knowing what those limits are, and those limits and
quotas might be dynamic or shared among classes of users.  The limit
might be 100 right now, but 10 later.  This is a host environment
issue.  The malloc call gets a resource, but in the future you might
not be able to access that resource due to circumstances beyond the
control of the program.

Also, the program does not *HAVE* to succeed.  If you have a low CPU
quota, for example, and the program exceeds it, the host environment
will signal SIGXCPU to the process.  Another process might send this
one a SIGKILL.  The system may crash in the middle of its execution.
There might be an unrecoverable memory parity error.  There might be
an unrecoverable disk error on the swap partition.  There might be any 
number of things that preclude it from reaching completion, beyond
your control.

I don't see how this is significantly different from a dirty page
quota or a dirty page limit.  That's what we're talking about here.
Nothing in the standard precludes host limits that simply don't fit
the standard's model well.

: This is probably one of the single most-unimplemented features in the spec,
: but it really wouldn't be that hard to provide as an option.

It would be extremely difficult to provide as an option.  The
overcommit machinery lives down in the kernel's VM system; there's no
real way to give a single process "guaranteed" access to anything,
and it would be very ugly to change.

Warner
