Hi!

Some errors (e.g. in this particular PR, in a backend machine builtin) are
detected only during folding, and the recursive cp_fold* call can then return
error_mark_node.  Passing that to fold_build*_loc is undesirable, though: the
gimplifiers, as well as other places in the compiler, don't expect
error_mark_node to be an operand of NOP_EXPR and various other trees.
Historically the FEs would then just not create the expression at all and use
error_mark_node instead of the whole expression.
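As a rough illustration of the hazard (a standalone sketch, not GCC code: the
node type, error_node sentinel and fold_unary helper below are all
hypothetical stand-ins for GCC's tree, error_mark_node and fold_build1_loc):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for GCC's tree machinery: a single shared
   "error" sentinel playing the role of error_mark_node.  */
typedef struct node { int value; struct node *op0; } node;

static node error_node;                /* analogue of error_mark_node */
static node pool[16];
static int npool;

static node *
make_node (int value, node *op0)
{
  node *n = &pool[npool++];
  n->value = value;
  n->op0 = op0;
  return n;
}

/* The pattern the patch introduces: if the recursively folded operand
   came back as the error sentinel, propagate the sentinel itself
   instead of wrapping it in a new node, which later consumers
   (anything dereferencing op0) don't expect to contain it.  */
static node *
fold_unary (node *op0)
{
  if (op0 == &error_node)
    return &error_node;
  return make_node (op0->value + 1, op0);
}
```

The point is only the early-return: a well-formed operand is folded normally,
while the error sentinel is returned as the whole result rather than being
buried inside a new tree.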
The following patch handles those in cp_fold.  Bootstrapped/regtested on
x86_64-linux and i686-linux, and tested on the testcase from the PR (which is
in the testsuite already) using cross to darwin; ok for trunk?

Another alternative would be to make sure tree folders don't introduce
error_mark_node (if it wasn't there already), but instead fold the call, say,
to build_int_cst (returntype, 0).  The known cases that would need to change
are at least darwin_build_constant_cfstring and darwin_fold_builtin, but maybe
others.

2016-01-26  Jakub Jelinek  <ja...@redhat.com>

	PR c++/68357
	* cp-gimplify.c (cp_fold): If some operand folds to error_mark_node,
	return error_mark_node instead of building trees with error_mark_node
	operands.

--- gcc/cp/cp-gimplify.c.jj	2016-01-20 10:55:15.000000000 +0100
+++ gcc/cp/cp-gimplify.c	2016-01-26 11:42:34.966038507 +0100
@@ -1954,7 +1954,12 @@ cp_fold (tree x)
       op0 = cp_fold_maybe_rvalue (TREE_OPERAND (x, 0), rval_ops);
 
       if (op0 != TREE_OPERAND (x, 0))
-	x = fold_build1_loc (loc, code, TREE_TYPE (x), op0);
+	{
+	  if (op0 == error_mark_node)
+	    x = error_mark_node;
+	  else
+	    x = fold_build1_loc (loc, code, TREE_TYPE (x), op0);
+	}
       else
 	x = fold (x);
@@ -1986,7 +1991,12 @@ cp_fold (tree x)
       op0 = cp_fold_maybe_rvalue (TREE_OPERAND (x, 0), rval_ops);
 
       if (op0 != TREE_OPERAND (x, 0))
-	x = fold_build1_loc (loc, code, TREE_TYPE (x), op0);
+	{
+	  if (op0 == error_mark_node)
+	    x = error_mark_node;
+	  else
+	    x = fold_build1_loc (loc, code, TREE_TYPE (x), op0);
+	}
       else
 	x = fold (x);
@@ -2043,7 +2053,12 @@ cp_fold (tree x)
       op1 = cp_fold_rvalue (TREE_OPERAND (x, 1));
 
       if (op0 != TREE_OPERAND (x, 0) || op1 != TREE_OPERAND (x, 1))
-	x = fold_build2_loc (loc, code, TREE_TYPE (x), op0, op1);
+	{
+	  if (op0 == error_mark_node || op1 == error_mark_node)
+	    x = error_mark_node;
+	  else
+	    x = fold_build2_loc (loc, code, TREE_TYPE (x), op0, op1);
+	}
       else
 	x = fold (x);
@@ -2066,7 +2081,14 @@ cp_fold (tree x)
       if (op0 != TREE_OPERAND (x, 0) || op1 != TREE_OPERAND (x, 1)
	  || op2 != TREE_OPERAND (x, 2))
-	x = fold_build3_loc (loc, code, TREE_TYPE (x), op0, op1, op2);
+	{
+	  if (op0 == error_mark_node
+	      || op1 == error_mark_node
+	      || op2 == error_mark_node)
+	    x = error_mark_node;
+	  else
+	    x = fold_build3_loc (loc, code, TREE_TYPE (x), op0, op1, op2);
+	}
       else
 	x = fold (x);
@@ -2093,9 +2115,18 @@ cp_fold (tree x)
	{
	  r = cp_fold (CALL_EXPR_ARG (x, i));
	  if (r != CALL_EXPR_ARG (x, i))
-	    changed = 1;
+	    {
+	      if (r == error_mark_node)
+		{
+		  x = error_mark_node;
+		  break;
+		}
+	      changed = 1;
+	    }
	  CALL_EXPR_ARG (x, i) = r;
	}
+      if (x == error_mark_node)
+	break;
 
       optimize = nw;
       r = fold (x);
@@ -2143,7 +2174,15 @@ cp_fold (tree x)
	{
	  constructor_elt e = { p->index, op };
	  nelts->quick_push (e);
	  if (op != p->value)
-	    changed = true;
+	    {
+	      if (op == error_mark_node)
+		{
+		  x = error_mark_node;
+		  changed = false;
+		  break;
+		}
+	      changed = true;
+	    }
	}
       if (changed)
	x = build_constructor (TREE_TYPE (x), nelts);
@@ -2188,9 +2227,19 @@ cp_fold (tree x)
       op2 = cp_fold (TREE_OPERAND (x, 2));
       op3 = cp_fold (TREE_OPERAND (x, 3));
 
-      if (op0 != TREE_OPERAND (x, 0) || op1 != TREE_OPERAND (x, 1)
-	  || op2 != TREE_OPERAND (x, 2) || op3 != TREE_OPERAND (x, 3))
-	x = build4_loc (loc, code, TREE_TYPE (x), op0, op1, op2, op3);
+      if (op0 != TREE_OPERAND (x, 0)
+	  || op1 != TREE_OPERAND (x, 1)
+	  || op2 != TREE_OPERAND (x, 2)
+	  || op3 != TREE_OPERAND (x, 3))
+	{
+	  if (op0 == error_mark_node
+	      || op1 == error_mark_node
+	      || op2 == error_mark_node
+	      || op3 == error_mark_node)
+	    x = error_mark_node;
+	  else
+	    x = build4_loc (loc, code, TREE_TYPE (x), op0, op1, op2, op3);
+	}
       x = fold (x);
       break;

	Jakub
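For the alternative mentioned above (having the target folders produce a
well-formed constant rather than error_mark_node), the idea could be sketched
like this; this is purely illustrative C, where the node_kind/node types,
build_int_zero and target_fold_builtin are hypothetical analogues of GCC's
trees, build_int_cst and darwin_fold_builtin:

```c
#include <assert.h>

/* Minimal hypothetical analogue of GCC trees: either the error
   sentinel or an integer constant.  */
typedef enum { ERROR_MARK, INTEGER_CST } node_kind;
typedef struct { node_kind kind; long value; } node;

/* Analogue of folding to build_int_cst (returntype, 0).  */
static node
build_int_zero (void)
{
  node n = { INTEGER_CST, 0 };
  return n;
}

/* A target builtin folder hitting an unsupported case.  Under the
   alternative approach it folds to a zero constant of the call's
   return type, so no error sentinel ever reaches fold_build*_loc
   callers and no propagation checks are needed there.  */
static node
target_fold_builtin (int supported)
{
  if (!supported)
    return build_int_zero ();
  node ok = { INTEGER_CST, 1 };
  return ok;
}
```

The trade-off is the one the mail describes: this keeps error_mark_node out of
the folders entirely, but every affected target hook (at least the two darwin
ones named above) would have to be changed.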