On Wed, Mar 29, 2023 at 1:04 PM Evgeny Kotkov via dev <dev@apr.apache.org> wrote:
>
> Yann Ylavic <yla...@apache.org> writes:
>
> > +#undef NDEBUG /* always abort() on assert()ion failure */
> > +#include <assert.h>
>
> Sorry for being late to the party, but I think there are a few problems with
> the "#undef NDEBUG" line:
>
> 1) The debug implementation of an assert() may print a diagnostic message,
> for example to stderr. A caller of the library function may not be ready for
> this to happen when using a non-debug version of the library.
>
> 2) The actual destination of the message seems to be implementation-defined.
> For example, in Windows-based applications this may show a message box [1],
> which is probably even more unexpected for the user of the library.
>
> 3) Undefining NDEBUG before other headers may silently cause unexpected
> effects if any of those headers make some decisions based on the NDEBUG value,
> which isn't an entirely unreasonable thing to expect.
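To make 3) concrete (the header and macro names below are invented, just for
illustration): any header whose macros key off NDEBUG takes its debug branch
when NDEBUG is #undef'ined right before it is included, regardless of how the
build was configured:

  /* hypothetical_checks.h */
  #ifdef NDEBUG
  #define HC_CHECK(cond) ((void)0)     /* compiled out in release builds */
  #else
  #include <assert.h>
  #define HC_CHECK(cond) assert(cond)  /* may print a diagnostic and abort() */
  #endif

  /* consumer.c */
  #undef NDEBUG                    /* silently forces the debug branch above */
  #include "hypothetical_checks.h"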
Fair points. So whether the system prints the faulting condition before
abort() should be #ifdef'd then? Depending on NDEBUG, or on something like an
APR_ABORT_USE_STDERR define?

I'm not sure what the right default is though, ISTM that letting the system
print something before abort() is not unreasonable for a portable [system]
library. It works well for users who can afford to let the standard outputs
go, or who handle/redirect them (e.g. httpd, including when running as a
Windows service where the error could end up in a journal/eventlog?), and an
abort() can be hard to diagnose without a message (and without a
coredump/backtrace). A build-time toggle does not necessarily help either when
all you have is an apr built by a distro/vendor (Debian does not seem to set
NDEBUG, for instance). Faulting silently by default looks harsh to me, and
since there is no precedent of apr abort()ing explicitly there is no legacy
behaviour to preserve IMO. But that's not a strong opinion, provided there is
a #define to control it.

>
> So, depending on whether we want the checks to soft- or hard-fault, maybe we
> could either remove the "#undef NDEBUG" line, or switch to a custom check that
> explicitly calls an abort()?

If we let a user-defined NDEBUG fly through apr_base64.c the code will never
abort(), which we always want for this int/size overflow UB. So possibly
something like the below:

Index: encoding/apr_base64.c
===================================================================
--- encoding/apr_base64.c	(revision 1908764)
+++ encoding/apr_base64.c	(working copy)
@@ -20,8 +20,27 @@
  * ugly 'len' functions, which is quite a nasty cost.
  */
 
-#undef NDEBUG /* always abort() on assert()ion failure */
+/* An APR__ASSERT()ion failure will abort() with or without printing
+ * the faulting condition (and code location) depending on NDEBUG.
+ */
+#ifndef NDEBUG /* use assert()/system's printing before abort() mechanism */
 #include <assert.h>
+#define APR__ASSERT(cond) assert(cond)
+#else /* just abort() */
+#if APR_HAVE_STDLIB_H
+#include <stdlib.h>
+#endif
+#if HAVE___BUILTIN_EXPECT
+#define APR__UNLIKELY(cond) __builtin_expect(!!(cond), 0)
+#else
+#define APR__UNLIKELY(cond) (!!(cond))
+#endif
+#define APR__ASSERT(cond) do { \
+    if (APR__UNLIKELY(!(cond))) { \
+        abort(); \
+    } \
+} while (0)
+#endif
 #include "apr_base64.h"
 
 #if APR_CHARSET_EBCDIC
@@ -124,7 +143,7 @@ APR_DECLARE(int) apr_base64_decode_len(const char
     bufin = (const unsigned char *) bufcoded;
     while (pr2six[*(bufin++)] <= 63);
     nprbytes = (bufin - (const unsigned char *) bufcoded) - 1;
-    assert(nprbytes <= APR_BASE64_DECODE_MAX);
+    APR__ASSERT(nprbytes <= APR_BASE64_DECODE_MAX);
 
     return (int)(((nprbytes + 3u) / 4u) * 3u + 1u);
 }
@@ -161,7 +180,7 @@ APR_DECLARE(int) apr_base64_decode_binary(unsigned
     bufin = (const unsigned char *) bufcoded;
     while (pr2six[*(bufin++)] <= 63);
     nprbytes = (bufin - (const unsigned char *) bufcoded) - 1;
-    assert(nprbytes <= APR_BASE64_DECODE_MAX);
+    APR__ASSERT(nprbytes <= APR_BASE64_DECODE_MAX);
 
     nbytesdecoded = (int)(((nprbytes + 3u) / 4u) * 3u);
     bufout = (unsigned char *) bufplain;
@@ -206,7 +225,7 @@ static const char basis_64[] =
 
 APR_DECLARE(int) apr_base64_encode_len(int len)
 {
-    assert(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
+    APR__ASSERT(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
 
     return ((len + 2) / 3 * 4) + 1;
 }
@@ -219,7 +238,7 @@ APR_DECLARE(int) apr_base64_encode(char *encoded,
     int i;
     char *p;
 
-    assert(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
+    APR__ASSERT(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
 
     p = encoded;
     for (i = 0; i < len - 2; i += 3) {
@@ -258,7 +277,7 @@ APR_DECLARE(int) apr_base64_encode_binary(char *en
     int i;
     char *p;
 
-    assert(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
+    APR__ASSERT(len >= 0 && len <= APR_BASE64_ENCODE_MAX);
 
     p = encoded;
     for (i = 0; i < len - 2; i += 3) {
@@ -292,7 +311,7 @@ APR_DECLARE(char *) apr_pbase64_encode(apr_pool_t
     char *encoded;
     apr_size_t len = strlen(string);
 
-    assert(len <= (apr_size_t)APR_BASE64_ENCODE_MAX);
+    APR__ASSERT(len <= (apr_size_t)APR_BASE64_ENCODE_MAX);
 
     encoded = (char *) apr_palloc(p, apr_base64_encode_len((int)len));
     apr_base64_encode(encoded, string, (int)len);
--

?
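For illustration only (assuming the above patch is applied, and not proposed
for inclusion anywhere): a throwaway test like this should abort() whether or
not apr was built with NDEBUG, the only difference between the two builds
being whether assert()'s diagnostic is printed before the abort():

  /* quick_check.c -- throwaway test, names of my own making */
  #include <stdio.h>
  #include "apr_base64.h"

  int main(void)
  {
      printf("%d\n", apr_base64_encode_len(3));  /* in range: prints 5 */
      (void)apr_base64_encode_len(-1);           /* out of range: abort()s */
      printf("not reached\n");
      return 0;
  }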
Regarding 3): maybe if apr_base64 abort()s on int/size overflows, it should
also abort() for anything raised by stdlib (or by any future assert()ing macro
from an included apr header, if ever) in its code? I don't think any such
faultable function/macro is called in there for now anyway, so let's not
complicate the #ifdefery further to avoid a non-issue if possible, unless I'm
missing something..

Regards;
Yann.