> But you do find careful checks in unusually well-written software
> (abbreviated):
> 
> /usr/src/usr.sbin/ntpd/ntpd.c
>   int writefreq(double d) {
>       r = fprintf(freqfp, "%.3f\n", d * 1e6); /* scale to ppm */
>       if (r < 0 || fflush(freqfp) != 0) {
> 
> So, our current libc can trick ntpd(8) into silently writing a
> corrupt driftfile.  Do you say that check must be changed into
> 
>       if (r <= 0 || fflush(freqfp) != 0) {
> 
> I'm sure even fewer code authors would expect that to be needed.

I don't think that can happen.

Basically, a very small malloc is failing.

So there isn't a page of memory remaining in the resource limit.
Also, there isn't a sub-page region available for temporary use.
Really?

In that case, I think the program will probably crash in the next 5-10
lines of code.  Or more likely, it will have already failed just before,
while handling the imsg input, because there are mallocs everywhere.  It
is probably leaking like crazy and is ready to terminate without doing
the job at a large number of points in the code.

In an ideal world every error return value is checked and the correct
decision made.  We don't live in an ideal world.  But even fprintf to
a file becomes fragile in this case for large writes, because it can
produce partial output, and then the file is damaged.  To satisfy the
most stringent conditions we'd need to write everything with the most
low-level intrinsic functions to guarantee safe behaviour.  Instead,
we have something akin to a 'social contract' and use these
high-level functions despite a few subtle edge conditions.
