On Wed, Aug 7, 2013 at 6:03 PM, Joseph S. Myers <jos...@codesourcery.com> wrote:
> Looking at the patch as committed, there seems to be some confusion about
> the nature of the --enable-vtable-verify configure option.

Yes, there is a bit.

>
> It's documented only for libstdc++.  Now, I still consider the existence
> of separate documentation for libstdc++ configure options to be
> unfortunate - I think all configure options for GCC should be documented
> in one place - but that's a separate matter.  Although only documented for
> libstdc++, it actually appears to be used in the libgcc and libvtv
> configure scripts.

The original intent of this flag was that if --enable-vtable-verify
was NOT used, then NOTHING having to do with vtable verification would
be built anywhere in the compiler; i.e., the binaries, libraries, etc.
would not contain anything that was not there before the vtable
verification patch was committed.  The idea (and implementation) has
evolved a bit, and I am not sure what the "right" way to handle this
option is at the moment.

>
> Given that it has effects on more than just libstdc++, it needs
> documenting in gcc/doc/install.texi, the main documentation of configure
> options for GCC.

I recently submitted a patch that adds the documentation for my
current understanding of the way this option works.

> Then there's the question of what the semantics should
> be.  My presumption is that the feature should be available for all GCC
> builds, at least by default on platforms where it can work, and the only
> thing needing a configure option should be whether the checks are enabled
> for libstdc++ itself (and ideally that would work by multilibbing
> libstdc++ rather than needing separate GCC builds to get libstdc++ with
> and without the checks).

Actually, if you ever want to use the feature with your compiler, you
should build your compiler with --enable-vtable-verify.  This will, as
you noted, insert calls in libstdc++ to build the verification data
structures and to verify the virtual calls in libstdc++ itself.
However, libstdc++ itself contains 'stub' (do-nothing) versions of the
functions that build the data structures and perform the verification.
So if you want to turn on verification for libstdc++, you link it
with libvtv (which contains the "real" versions of those functions);
if you don't want verification with libstdc++, you just don't link
in libvtv.  There is no need for multiple versions of libstdc++.

> Thus if the platform supports the feature, all
> relevant libgcc files should be built, and anything for libstdc++ needed
> for user code to use the feature should be built - the only thing not
> enabled by default would be checks for libstdc++'s classes' own vtables.
> (And unless there are difficulties in building the libgcc files on some
> systems, they could be built unconditionally, whether or not any other
> support needed for libvtv is present.

I suppose the libgcc files could be built all the time (on systems
that support libvtv).  Would there be any downside to this?

> Actually, it looks like they may
> depend on an ELF target, but not on anything more.)
>
> Could you confirm that the libstdc++ ABI is not affected by the configure
> option - that the same symbols, at the same symbol versions, with the same
> semantics, are exported from libstdc++.so for builds with and without the
> feature enabled?
>

The libstdc++ ABI has been extended to export the vtable verification
functions for which it contains stub versions (otherwise they could
never be overridden by the versions in libvtv).  Other than that, the
libstdc++ ABI exports the same symbols, at the same symbol versions,
with the same semantics.  I believe this export is unconditional.

> The file cp/vtable-class-hierarchy.c includes "tm.h".  Includes of tm.h
> from front ends are discouraged, and should have comments on them listing
> what target macros are used by the file in question (and so need to be
> converted to hooks before the include can be removed).  Could you add such
> a comment to the #include (or if it's redundant, remove all redundant
> #includes from that file)?
>
> You have a
>
> +#define MAX_SET_SIZE 5000
>
> which superficially looks like an arbitrary limit.  Could you add a
> comment explaining why no input files, no matter how extreme, could ever
> exceed the limit of 5000?  You have a couple of gcc_asserts regarding this
> limit, and an on-stack array for which it's at least not immediately
> obvious that all accesses are checked to ensure that a buffer overrun is
> impossible.
>
> If you have an arbitrary limit, and some input *can* exceed it (so
> triggering the gcc_asserts or buffer overrun), the right fix is of course
> to remove it, probably using vec.h to produce a dynamically growing array
> instead of hardcoding a size at all.  But failing that, exceeding the
> limit must result in a sorry () call, not an assertion failure or buffer
> overrun, neither of which is acceptable behavior for any compiler input
> whatever.
>
> Other people have previously commented about the logs *generated by the
> compiler*.  The issue raised regarding the directories for those logs has
> been fixed so my only comment there is that the diagnostics for failure to
> open those files have several problems:
>
> * Diagnostics should not start with a capital letter.
>
> * Use %< and %> as quotes in diagnostics.
>
> * Use %m in the diagnostic to print the error text corresponding to the
> errno value from the failure to open.
>
> * I doubt input_location is particularly meaningful as a location for
> these diagnostics; warning_at with UNKNOWN_LOCATION may be better.

I recently submitted a patch to fix these issues.

>
> But there's a much more serious issue with logs generated *by libvtv*.
> These appear to use fixed filenames in a fixed directory /tmp/vtv_logs.
> That's always a mistake (I've encountered such trouble with QEMU using
> such a fixed log name in the past).  Not only that, neither O_EXCL nor
> O_NOFOLLOW is used so you have an obvious security hole.  Using such
> global locations is simply never OK; you need to arrange for such failures
> to go somewhere that will never conflict with other users.  You can't
> create a directory in /tmp (because proper ownership and permissions for
> such a shared directory would be the same as for /tmp itself, rather than
> leaving it owned by the first user to create it, and an unprivileged
> process can't ensure that).  I suggest:
>
> * Prefer the current working directory (as used for core dumps) over /tmp
> (but if /tmp is used as a fallback, it needs to be /tmp not a subdirectory
> thereof).
>
> * Name the generated file with a name that includes the program name, its
> pid, and a random string, to reduce the chance of conflicts.
>
> * In any case, always open with O_NOFOLLOW (if defined) and O_EXCL to
> avoid symlink attacks.
>

For certain use cases the current working directory is not our
preferred place to put the log files.
I have recently submitted a patch that uses environment variables to
determine where to put them: it first checks whether the user has
defined VTV_LOGS_DIR and, if so, uses that; if not, it tries HOME; if
that also fails, it falls back to the current working directory.  I
hope that is acceptable.

I have modified the call to open to pass O_NOFOLLOW, but O_EXCL would
do the wrong thing: we sometimes wish to append to an existing log
file, and O_EXCL makes the open fail if the file already exists
(according to the documentation I read).


> --
> Joseph S. Myers
> jos...@codesourcery.com
