https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93582

--- Comment #32 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <ja...@gcc.gnu.org>:

https://gcc.gnu.org/g:7f5617b00445dcc861a498a4cecc8aaa59e05b8c

commit r10-6809-g7f5617b00445dcc861a498a4cecc8aaa59e05b8c
Author: Jakub Jelinek <ja...@redhat.com>
Date:   Mon Feb 24 12:56:39 2020 +0100

    sccvn: Handle bitfields in push_partial_def [PR93582]

    The following patch adds support for bitfields to push_partial_def.
    Previously pd.offset and pd.size were counted in bytes and maxsizei in
    bits; now everything is counted in bits.
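
    As a rough illustration (this is not one of the committed testcases,
    and the names are made up), the kind of pattern this enables FRE to
    fold is a word-sized read that is covered only by several bitfield
    stores, on targets where the three bitfields pack into one 32-bit
    word:

      union U
      {
        struct S { unsigned int a : 12, b : 4, c : 16; } s;
        unsigned int w;
      };

      unsigned int
      foo (void)
      {
        union U u;
        u.s.a = 0x123;
        u.s.b = 0x4;
        u.s.c = 0x5678;
        /* The load of u.w is covered by three partial bitfield defs; with
           pd.offset/pd.size tracked in bits it can be folded to a constant
           (whose value depends on the target's bitfield layout).  */
        return u.w;
      }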

    Not really sure how much of the further code can be outlined and moved;
    e.g. the full def and partial def code have pretty much nothing in
    common (the partial defs case basically has some load bit range and a
    set of store bit ranges that at least partially overlap, and we need to
    handle all the different cases, like negative or non-negative pd.offset,
    little vs. big endian, size so small that we need to preserve original
    bits on both sides of the byte, size that fits or is too large).
    Perhaps the storing of some value into the middle of an existing buffer
    (i.e. what push_partial_def now does in the loop) could be shared, but
    the most likely candidate for sharing would be store-merging rather
    than the other spots in sccvn, and I think it is better not to touch
    store-merging at this stage.
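
    To make the semantics of that "store some bits into the middle of an
    existing buffer" operation concrete, here is a deliberately naive,
    bit-at-a-time sketch (nothing like the actual byte-wise implementation
    in tree-ssa-sccvn.c, and ignoring big-endian ordering and negative
    offsets):

      #include <stddef.h>

      /* Copy SIZE bits starting at bit OFFSET of SRC into bit position POS
         of DST, preserving the surrounding bits of DST.  Bits are numbered
         from the least significant bit of each byte.  */
      static void
      merge_bits (unsigned char *dst, size_t pos,
                  const unsigned char *src, size_t offset, size_t size)
      {
        for (size_t i = 0; i < size; i++)
          {
            size_t sbit = offset + i, dbit = pos + i;
            unsigned char bit = (src[sbit / 8] >> (sbit % 8)) & 1;
            dst[dbit / 8] &= ~(unsigned char) (1 << (dbit % 8));
            dst[dbit / 8] |= (unsigned char) (bit << (dbit % 8));
          }
      }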

    Yes, I've thought about trying to do everything in place, but the code
    is quite hard to understand and get right already, and if we tried to
    do the optimization on the fly, it would need more special cases and
    would need more testcases to cover it for gcov coverage.  Most of the
    time the sizes will be small.  Furthermore, for bitfields
    native_encode_expr actually stores the number of bytes in the mode and
    not, say, the actual bitsize rounded up to bytes, so it wouldn't be
    just a matter of saving/restoring bytes at the start and end; we might
    need up to 7 further bytes, e.g. for __int128 bitfields.  Perhaps we
    could have just a fast path for the case where everything is byte
    aligned (and, for integral types, the mode bitsize is equal to the
    size)?

    2020-02-24  Jakub Jelinek  <ja...@redhat.com>

        PR tree-optimization/93582
        * tree-ssa-sccvn.c (vn_walk_cb_data::push_partial_def): Consider
        pd.offset and pd.size to be counted in bits rather than bytes, add
        support for maxsizei that is not a multiple of BITS_PER_UNIT and
        handle bitfield stores and loads.
        (vn_reference_lookup_3): Don't call ranges_known_overlap_p with
        uncomparable quantities - bytes vs. bits.  Allow push_partial_def
        on offsets/sizes that aren't multiple of BITS_PER_UNIT and adjust
        pd.offset/pd.size to be counted in bits rather than bytes.
        Formatting fix.  Rename shadowed len variable to buflen.

        * gcc.dg/tree-ssa/pr93582-4.c: New test.
        * gcc.dg/tree-ssa/pr93582-5.c: New test.
        * gcc.dg/tree-ssa/pr93582-6.c: New test.
        * gcc.dg/tree-ssa/pr93582-7.c: New test.
        * gcc.dg/tree-ssa/pr93582-8.c: New test.
