https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98950

            Bug ID: 98950
           Summary: jump threading memory leak
           Product: gcc
           Version: 11.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: rguenth at gcc dot gnu.org
  Target Milestone: ---

I tried to find my way around the code but I'm not too familiar with it.  The
leak shows up (at least) when building 521.wrf_r with -Ofast -flto:

==19644== 832 bytes in 52 blocks are definitely lost in loss record 7,748 of 9,681
==19644==    at 0x4C2E94F: operator new(unsigned long) (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==19644==    by 0xC750AE: thread_jumps::convert_and_register_current_path(edge_def*) (tree-ssa-threadbackward.c:472)
==19644==    by 0xC75452: thread_jumps::register_jump_thread_path_if_profitable(tree_node*, tree_node*, basic_block_def*) [clone .part.37] (tree-ssa-threadbackward.c:561)
==19644==    by 0xC76487: register_jump_thread_path_if_profitable (tree-ssa-threadbackward.c:553)
==19644==    by 0xC76487: thread_jumps::handle_phi(gphi*, tree_node*, basic_block_def*) (tree-ssa-threadbackward.c:600)
==19644==    by 0xC763B4: thread_jumps::fsm_find_control_statement_thread_paths(tree_node*) (tree-ssa-threadbackward.c:730)
==19644==    by 0xC764D4: thread_jumps::handle_phi(gphi*, tree_node*, basic_block_def*) (tree-ssa-threadbackward.c:594)
==19644==    by 0xC763B4: thread_jumps::fsm_find_control_statement_thread_paths(tree_node*) (tree-ssa-threadbackward.c:730)
==19644==    by 0xC76B8E: (anonymous namespace)::pass_thread_jumps::execute(function*) (tree-ssa-threadbackward.c:828)
==19644==    by 0x9E58A3: execute_one_pass(opt_pass*) (passes.c:2567)
==19644==    by 0x9E6100: execute_pass_list_1(opt_pass*) (passes.c:2656)
==19644==    by 0x9E6112: execute_pass_list_1(opt_pass*) (passes.c:2657)
==19644==    by 0x9E6154: execute_pass_list(function*, opt_pass*) (passes.c:2667)
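
The trace doesn't show where the free should happen, only that the path
object new-ed in convert_and_register_current_path
(tree-ssa-threadbackward.c:472) is "definitely lost" after control comes
back through register_jump_thread_path_if_profitable.  A minimal sketch of
the kind of pattern I suspect, using hypothetical stand-in types rather
than GCC's real jump_thread_edge/vec API, would be:

    #include <vector>

    // Hypothetical stand-ins for illustration only; the real types are
    // declared elsewhere in the tree-ssa threading code.
    struct edge_def { /* ... */ };
    struct jump_thread_edge { edge_def *e; };

    std::vector<jump_thread_edge *> *
    convert_and_register_path (edge_def *taken_edge, bool profitable)
    {
      // Corresponds to the 'operator new' frame at
      // tree-ssa-threadbackward.c:472 in the trace above.
      auto *path = new std::vector<jump_thread_edge *> ();
      path->push_back (new jump_thread_edge { taken_edge });

      if (!profitable)
        {
          // Suspected fix: release the path (and its edges) when the
          // candidate is rejected, instead of returning with the
          // allocation still live but unreachable.
          for (jump_thread_edge *je : *path)
            delete je;
          delete path;
          return nullptr;
        }

      // Profitable case: ownership passes to the jump threader, which
      // frees the path once the CFG updates have been applied.
      return path;
    }

If that's roughly right, the fix is to free the candidate path on whatever
early-return rejects it as not profitable.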
