We still need to make some cosmetic changes to range-ops itself, but I
realize it may be easier to review the first 3 patches if one has an
idea of how it all fits together. As I said before, the first 3 patches
can go in as is; this is just for the curious or impatient.
As such, I am not submitting this patch for contribution yet. I'll
contribute it later this week, after I clean up some #if 0 code and
reminders we had written throughout the code.
For example, there are some optimizations that I'll hold off on
contributing, because they yield different (read "better") results than
mainline. Since we want to make sure there are absolutely no
differences from extract_range_from* for now, I believe it is prudent to
delay these optimizations until they can be contributed separately when
the dust settles.
Be that as it may, we can still discuss the general layout now.
The main idea is that all calls to tree-vrp's extract_range_from_
binary/unary_expr in vr-values.c are replaced with range_fold_*expr.
These new functions then call both range-ops and the old
extract_range_from_binary/unary, and assert that the results are the
same.
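In outline, the checking dispatch looks something like the following standalone C++ sketch. This is a toy model, not GCC code: Range, fold_old, fold_new, and range_fold_plus are invented stand-ins (the real types are irange/value_range_base, and the real comparison lives in assert_compare_value_ranges).

```cpp
#include <cassert>

// Toy integer range; a stand-in for GCC's irange/value_range_base.
struct Range
{
  int lo, hi;
  bool operator== (const Range &o) const { return lo == o.lo && hi == o.hi; }
};

enum ranges_mode { RANGES_VRP, RANGES_RANGE_OPS, RANGES_CHECKING };

// Two independent implementations of [a] + [b]: stand-ins for the old
// extract_range_* path and the new range-ops path.
static Range fold_old (Range a, Range b) { return { a.lo + b.lo, a.hi + b.hi }; }
static Range fold_new (Range a, Range b) { return { a.lo + b.lo, a.hi + b.hi }; }

// Checking mode computes both results and aborts loudly on a mismatch,
// mirroring the role of assert_compare_value_ranges.
Range range_fold_plus (ranges_mode mode, Range a, Range b)
{
  if (mode == RANGES_VRP)
    return fold_old (a, b);
  if (mode == RANGES_RANGE_OPS)
    return fold_new (a, b);
  Range r1 = fold_old (a, b);
  Range r2 = fold_new (a, b);
  assert (r1 == r2);	// Any disagreement is a bug in one of the two.
  return r2;
}
```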
The code asserting and comparing the results from range-ops and VRP is
all in assert_compare_value_ranges. I have taken care to start from
scratch (no exceptions) and add only what is strictly necessary,
where I have proven why two results mismatch and yet range-ops is
correct. For example, EXACT, MIN, MAX_EXPR, etc. in VRP give up a lot
sooner than range-ops.
One annoying set of differences is the order in which extract_range*
splits anti-ranges into pairs with ranges_from_anti_range, versus the
way we pick apart sub-ranges. The ordering matters, and VRP will
frequently give up (into VARYING) because an intermediate result
may not be representable. The range-ops iteration does
slightly better, so there are sometimes differences when doing
operations on pairs of anti-ranges. If VRP yields a VARYING result, but
range-ops does better, we ignore the difference. There is a huge
comment section where I explain this in depth, with examples.
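To make the sub-range iteration concrete, here is a small standalone model (invented helper names, plain ints instead of wide_int): an anti-range ~[5,10] over a domain [0,100] splits into the pair [0,4][11,100], and a binary operation then folds every pair of sub-ranges and unions the results, as the nested loops in op_binary in the patch do.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// An ordered list of [lo,hi] sub-range pairs.
using SubRanges = std::vector<std::pair<int, int>>;

// Split the anti-range ~[lo,hi] over the domain [dom_lo,dom_hi] into
// the pair [dom_lo,lo-1][hi+1,dom_hi] -- roughly what
// ranges_from_anti_range does, modulo representation limits.
static SubRanges anti_range (int lo, int hi, int dom_lo, int dom_hi)
{
  SubRanges r;
  if (lo > dom_lo) r.push_back ({dom_lo, lo - 1});
  if (hi < dom_hi) r.push_back ({hi + 1, dom_hi});
  return r;
}

// Fold PLUS over every pair of sub-ranges and union the results.
static SubRanges fold_plus (const SubRanges &a, const SubRanges &b)
{
  SubRanges out;
  for (auto &x : a)
    for (auto &y : b)
      out.push_back ({x.first + y.first, x.second + y.second});
  // Crude union: sort, then merge adjacent/overlapping pieces.
  std::sort (out.begin (), out.end ());
  SubRanges merged;
  for (auto &p : out)
    if (!merged.empty () && p.first <= merged.back ().second + 1)
      merged.back ().second = std::max (merged.back ().second, p.second);
    else
      merged.push_back (p);
  return merged;
}
```

In this model the result of ~[5,10] + [1,1] stays a pair of sub-ranges; a representation limited to a single range or anti-range may have to give up and return VARYING instead, which is exactly the class of difference described above.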
Other than the few accounted-for exceptions in
assert_compare_value_ranges, there should be no differences, and we
abort() very loudly if there are.
Now...
The VRP entry points into the ranger are:
range_ops_fold_unary_expr
range_ops_fold_binary_expr
These functions are organized into three distinct sections:
a) Perform any special behavior users of extract_range* expect.
This is generally the special code at the top of
extract_range_{binary,unary}*.
b) Symbolic handling.
Pleasantly, we have found that the only places where we
actually care about symbolics in VRP are:
1. PLUS_EXPR, MINUS_EXPR
2. POINTER_PLUS_EXPR
3. NEGATE_EXPR
4. BIT_NOT_EXPR
5. CONVERT_EXPR_CODE_P
Interestingly, NEGATE and BIT_NOT care about symbolics
because they may degrade into PLUS/MINUS. Since
POINTER_PLUS_EXPR is just a special purpose PLUS, the
argument can be made that symbolics only matter for
PLUS/MINUS and CONVERT_EXPR.
c) Calling into range-ops fold.
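The three-part shape might be sketched as follows (a standalone toy with invented names throughout; the real code dispatches on tree codes and handles symbolics far more carefully):

```cpp
#include <cassert>
#include <optional>

// Toy range; a stand-in for GCC's irange.
struct Range { int lo, hi; };

// (a) Special behavior users of extract_range* expect, e.g. bailing
// out early when an operand is the empty (undefined) range.
static std::optional<Range> special_cases (Range a, Range b)
{
  if (a.lo > a.hi || b.lo > b.hi)
    return Range {1, 0};	// "undefined" placeholder (lo > hi)
  return std::nullopt;
}

// (b) Placeholder predicate for the symbolic check; in VRP only
// PLUS/MINUS, POINTER_PLUS, NEGATE, BIT_NOT and conversions need it.
static bool has_symbolics (Range) { return false; }

// (c) The actual range-ops fold; PLUS shown for illustration.
static Range range_ops_fold (Range a, Range b)
{
  return Range { a.lo + b.lo, a.hi + b.hi };
}

Range fold_binary_expr (Range a, Range b)
{
  if (auto r = special_cases (a, b))		// section (a)
    return *r;
  if (has_symbolics (a) || has_symbolics (b))	// section (b)
    return Range { -1000, 1000 };		// "varying" placeholder
  return range_ops_fold (a, b);			// section (c)
}
```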
A few random notes on the rest:
1. range-op.cc is the main engine.
2. range.cc is our irange handling code. As discussed before, irange
and value_range_base share the same API, and for trunk we propose
a simple drop-in replacement of:
#ifndef USE_IRANGE
class value_range_storage;
typedef value_range_base irange;
typedef value_range_storage irange_storage;
#endif
I am attaching everything as it fits in our branch, but I believe
what will ultimately happen is that we keep the #if USE_IRANGE bits
in our branch, and everything else in trunk.
NOTE:
I would like to keep our nomenclature for irange even in trunk, as it
makes it easier to keep track of things in branch vs. trunk.
Besides, there should be no references to irange in tree-vrp, only in
range-op.cc and range.h/cc, and eventually gori/ranger/etc.
This will allow us to experiment with either irange or value_range
seamlessly.
3. I have provided a compile time flag for use during stage1 debugging.
The flag -franges= chooses which range folding mechanism to use (VRP,
range-ops, or both while checking that they agree). The default is
-franges=checking.
Again, this entire patchset is a placeholder, so everyone can see
how it all fits together. I'll be cleaning this up, and only posting
the #if !USE_IRANGE bits. Be that as it may, this patch has been tested
on x86-64 Linux with --enable-languages=all :).
Aldy
diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index d9e0885b96b..a3f3a201787 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1450,6 +1450,8 @@ OBJS = \
print-tree.o \
profile.o \
profile-count.o \
+ range.o \
+ range-op.o \
read-md.o \
read-rtl.o \
read-rtl-function.o \
@@ -2546,6 +2548,7 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
$(srcdir)/stringpool.c $(srcdir)/tree.c $(srcdir)/varasm.c \
$(srcdir)/gimple.h \
$(srcdir)/gimple-ssa.h \
+ $(srcdir)/range.h $(srcdir)/range.cc \
$(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
$(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
$(srcdir)/tree-dfa.c \
diff --git a/gcc/common.opt b/gcc/common.opt
index a1544d06824..355c3fc3c62 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -1773,6 +1773,26 @@ fipa-vrp
Common Report Var(flag_ipa_vrp) Optimization
Perform IPA Value Range Propagation.
+;; Perform VRP range folding with either range-ops or the traditional
+;; extract_range_from_{binary,unary}_expr functions. This is meant to
+;; be a transitional testing construct to verify that both agree.
+
+franges=
+Common Joined RejectNegative Enum(ranges_mode) Var(flag_ranges_mode) Init(RANGES_CHECKING)
+-franges=[checking|vrp|range-ops] Set the mode for calculating ranges.
+
+Enum
+Name(ranges_mode) Type(enum ranges_mode) UnknownError(unknown ranges mode %qs)
+
+EnumValue
+Enum(ranges_mode) String(checking) Value(RANGES_CHECKING)
+
+EnumValue
+Enum(ranges_mode) String(vrp) Value(RANGES_VRP)
+
+EnumValue
+Enum(ranges_mode) String(range-ops) Value(RANGES_RANGE_OPS)
+
fira-algorithm=
Common Joined RejectNegative Enum(ira_algorithm) Var(flag_ira_algorithm) Init(IRA_ALGORITHM_CB) Optimization
-fira-algorithm=[CB|priority] Set the used IRA algorithm.
diff --git a/gcc/flag-types.h b/gcc/flag-types.h
index a2103282d46..e377fd48908 100644
--- a/gcc/flag-types.h
+++ b/gcc/flag-types.h
@@ -145,6 +145,15 @@ enum ira_algorithm
IRA_ALGORITHM_PRIORITY
};
+/* Which method to use for operations on ranges. */
+enum ranges_mode
+{
+ RANGES_VRP = 1,
+ RANGES_RANGE_OPS = 2,
+ /* Use both VRP and range-ops, and verify that the results agree. */
+ RANGES_CHECKING = 3
+};
+
/* The regions used for the integrated register allocator (IRA). */
enum ira_region
{
diff --git a/gcc/function-tests.c b/gcc/function-tests.c
index f1e29e49ee1..2440dd6820b 100644
--- a/gcc/function-tests.c
+++ b/gcc/function-tests.c
@@ -570,6 +570,19 @@ test_conversion_to_ssa ()
ASSERT_EQ (SSA_NAME, TREE_CODE (gimple_return_retval (return_stmt)));
}
+/* Test range folding. We must start this here because we need cfun
+ set. */
+
+static void
+test_ranges ()
+{
+ tree fndecl = build_trivial_high_gimple_function ();
+ function *fun = DECL_STRUCT_FUNCTION (fndecl);
+ push_cfun (fun);
+ range_tests ();
+ pop_cfun ();
+}
+
/* Test of expansion from gimple-ssa to RTL. */
static void
@@ -674,6 +687,7 @@ function_tests_c_tests ()
test_gimplification ();
test_building_cfg ();
test_conversion_to_ssa ();
+ test_ranges ();
test_expansion_to_rtl ();
}
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
new file mode 100644
index 00000000000..d2189cd4c85
--- /dev/null
+++ b/gcc/range-op.cc
@@ -0,0 +1,2096 @@
+/* Code for range operators.
+ Copyright (C) 2017-2019 Free Software Foundation, Inc.
+ Contributed by Andrew MacLeod <amacl...@redhat.com>
+ and Aldy Hernandez <al...@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "optabs-tree.h"
+#include "gimple-pretty-print.h"
+#include "diagnostic-core.h"
+#include "flags.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "cfganal.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-iterator.h"
+#include "gimple-walk.h"
+#include "tree-cfg.h"
+#include "wide-int.h"
+#include "range-op.h"
+#include "wide-int-range.h"
+
+
+/* Defaults for all operations are to return NULL. This means no
+ additional range info is available beyond that of the type. */
+
+bool
+range_operator::fold_range (irange &r ATTRIBUTE_UNUSED,
+ const irange &op1 ATTRIBUTE_UNUSED,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+{
+ return false;
+}
+
+bool
+range_operator::op1_range (irange &r ATTRIBUTE_UNUSED,
+ const irange &lhs ATTRIBUTE_UNUSED,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+{
+ return false;
+}
+
+bool
+range_operator::op2_range (irange &r ATTRIBUTE_UNUSED,
+ const irange &lhs ATTRIBUTE_UNUSED,
+ const irange &op1 ATTRIBUTE_UNUSED) const
+{
+ return false;
+}
+
+// This class is used to create a range_operator for a tree code and
+// automatically register it with the operator table. Simply inherit
+// from this class and overload whatever routines are required. This
+// provides registration as well as default debug dumping for the tree
+// code and calling op_binary() to resolve fold_range.
+
+class trange_operator : public range_operator
+{
+public:
+ trange_operator (enum tree_code c);
+ virtual void dump (FILE *f) const;
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+protected:
+ tree_code code;
+};
+
+
+// This implements the range operator table as a local object in this file.
+
+class range_op_table
+{
+public:
+ inline range_operator *operator[] (enum tree_code code);
+ void register_operator (enum tree_code code, range_operator *);
+private:
+ range_operator *m_range_tree[MAX_TREE_CODES];
+} range_tree;
+
+// Return a pointer to the range_operator instance, if there is one,
+// associated with tree_code CODE.
+
+range_operator *
+range_op_table::operator[] (enum tree_code code)
+{
+ gcc_assert (code > 0 && code < MAX_TREE_CODES);
+ return m_range_tree[code];
+}
+
+// Add OP to the handler table for CODE.
+
+void
+range_op_table::register_operator (enum tree_code code, range_operator *op)
+{
+ gcc_checking_assert (m_range_tree[code] == NULL && op != NULL);
+ m_range_tree[code] = op;
+}
+
+/* The table is hidden and accessed via a simple extern function. */
+
+range_operator *
+range_op_handler (enum tree_code code)
+{
+ return range_tree[code];
+}
+// -------------------------------------------------------------------------
+
+// Auxiliary routine to return the upper limit for a type.
+
+inline wide_int
+max_limit (const_tree type)
+{
+ return wi::max_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+}
+
+// Auxiliary routine to return the lower limit for a type.
+
+inline wide_int
+min_limit (const_tree type)
+{
+ return wi::min_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+}
+
+// If the range of either operand is undefined, set the result to
+// undefined and return true. This is a common routine to most
+// functions as undefined is a viral condition, so if either operand
+// is undefined, so is the result.
+
+inline bool
+empty_range_check (irange &r, const irange &op1, const irange & op2, tree type)
+{
+ if (op1.undefined_p () || op2.undefined_p ())
+ {
+ r.set_undefined (type);
+ return true;
+ }
+ else
+ return false;
+}
+
+/* Given newly calculated lbound and ubound, examine their respective
+ overflow bits to determine how to add [lbound, ubound] into range
+ R. */
+
+static void
+accumulate_range (irange &r,
+ const wide_int &lb, wi::overflow_type ov_lb,
+ const wide_int &ub, wi::overflow_type ov_ub,
+ bool overflow_wraps = false)
+{
+ value_range_kind kind;
+ wide_int min = lb, max = ub;
+ tree type = r.type ();
+ adjust_range_for_overflow (kind, min, max, type, ov_lb, ov_ub,
+ overflow_wraps);
+ if (kind == VR_VARYING)
+ {
+ r.set_varying (type);
+ return;
+ }
+ irange tmp (kind, type, min, max);
+ r.union_ (tmp);
+ return;
+}
+
+static void
+accumulate_range (irange &r, const wide_int &lb, const wide_int &ub)
+{
+ accumulate_range (r, lb, wi::OVF_NONE, ub, wi::OVF_NONE, false);
+}
+
+/* Like accumulate_range, but canonicalize the case where the bounds
+ are swapped and overflow may wrap. In which case we transform
+ [10,5] into [MIN,5][10,MAX].
+
+ I am not happy with this. This is basically a workaround for the
+ fact that VRP has crippled ranges and wide_int_range_mult_wrapping
+ may return swapped ranges, so they must be canonicalized into some
+ anti range crap. I'm hoping this will go away when everything is
+ an irange. */
+
+static inline void
+accumulate_range_and_canonicalize (signop s,
+ irange &r,
+ const wide_int &new_lb,
+ const wide_int &new_ub)
+{
+ /* If the bounds are swapped, set the overflow bit to force
+ add_to_range to create the range correctly. This is essentially
+ what set_and_canonicalize_value_range was doing. */
+ wi::overflow_type overflow;
+ bool overflow_wraps = TYPE_OVERFLOW_WRAPS (r.type ());
+ if (wi::gt_p (new_lb, new_ub, s))
+ {
+ overflow = wi::OVF_OVERFLOW;
+ overflow_wraps = true;
+ }
+ else
+ overflow = wi::OVF_NONE;
+
+ accumulate_range (r, new_lb, wi::OVF_NONE, new_ub, overflow, overflow_wraps);
+}
+
+/* ----------------------------------------------------------------------- */
+
+/* irange wrapper for wide_int_range_multiplicative_op. */
+
+static bool
+irange_multiplicative_op (enum tree_code code, signop s, irange &r,
+ const wide_int &lh_lb, const wide_int &lh_ub,
+ const wide_int &rh_lb, const wide_int &rh_ub)
+{
+ wide_int new_lb, new_ub;
+ bool overflow_undefined = TYPE_OVERFLOW_UNDEFINED (r.type ());
+ unsigned prec = TYPE_PRECISION (r.type ());
+ if (wide_int_range_multiplicative_op (new_lb, new_ub,
+ code, s, prec,
+ lh_lb, lh_ub, rh_lb, rh_ub,
+ overflow_undefined))
+ {
+ accumulate_range_and_canonicalize (s, r, new_lb, new_ub);
+ return true;
+ }
+ return false;
+}
+
+/* For an operation on pointers, this specifies whether the resulting
+ operation is null, non-null, or unknown. */
+enum wide_int_range_nullness {
+ WIDE_INT_RANGE_UNKNOWN = 0,
+ WIDE_INT_RANGE_NULL,
+ WIDE_INT_RANGE_NONNULL
+};
+
+/* FIXME: This was slated to be in wide-int-range.cc, but Richi is
+ unconvinced we need to abstract this out. To be discussed later.
+
+ See discussion here:
+ https://gcc.gnu.org/ml/gcc-patches/2018-09/msg00171.html
+*/
+/* Given a binary operator (CODE) on two pointer ranges, return if the
+ result will be zero, non-zero, or unknown. */
+
+static enum wide_int_range_nullness
+wide_int_range_pointer (enum tree_code code,
+ signop sign,
+ const wide_int &vr0_min,
+ const wide_int &vr0_max,
+ const wide_int &vr1_min,
+ const wide_int &vr1_max,
+ bool overflow_wraps)
+{
+ unsigned prec = vr0_min.get_precision ();
+ if (code == MIN_EXPR || code == MAX_EXPR)
+ {
+ /* For MIN/MAX expressions with pointers, we only care about
+ nullness, if both are non null, then the result is nonnull.
+ If both are null, then the result is null. Otherwise they
+ are varying. */
+ if (!wide_int_range_includes_zero_p (vr0_min, vr0_max, sign)
+ && !wide_int_range_includes_zero_p (vr1_min, vr1_max, sign))
+ return WIDE_INT_RANGE_NONNULL;
+ else if (wide_int_range_zero_p (vr0_min, vr0_max, prec)
+ && wide_int_range_zero_p (vr1_min, vr1_max, prec))
+ return WIDE_INT_RANGE_NULL;
+ else
+ return WIDE_INT_RANGE_UNKNOWN;
+ }
+ else if (code == POINTER_PLUS_EXPR)
+ {
+ /* For pointer types, we are really only interested in asserting
+ whether the expression evaluates to non-NULL.
+
+ With -fno-delete-null-pointer-checks we need to be more
+ conservative. As some object might reside at address 0,
+ then some offset could be added to it and the same offset
+ subtracted again and the result would be NULL.
+ E.g.
+ static int a[12]; where &a[0] is NULL and
+ ptr = &a[6];
+ ptr -= 6;
+ ptr will be NULL here, even when there is POINTER_PLUS_EXPR
+ where the first range doesn't include zero and the second one
+ doesn't either. As the second operand is sizetype (unsigned),
+ consider all ranges where the MSB could be set as possible
+ subtractions where the result might be NULL. */
+ if ((!wide_int_range_includes_zero_p (vr0_min, vr0_max, sign)
+ || !wide_int_range_includes_zero_p (vr1_min, vr1_max, sign))
+ && !overflow_wraps
+ && (flag_delete_null_pointer_checks
+ || !wi::sign_mask (vr1_max)))
+ return WIDE_INT_RANGE_NONNULL;
+ else if (wide_int_range_zero_p (vr0_min, vr0_max, prec)
+ && wide_int_range_zero_p (vr1_min, vr1_max, prec))
+ return WIDE_INT_RANGE_NULL;
+ else
+ return WIDE_INT_RANGE_UNKNOWN;
+ }
+ else if (code == BIT_AND_EXPR)
+ {
+ /* For pointer types, we are really only interested in asserting
+ whether the expression evaluates to non-NULL. */
+ if (!wide_int_range_includes_zero_p (vr0_min, vr0_max, sign)
+ && !wide_int_range_includes_zero_p (vr1_min, vr1_max, sign))
+ return WIDE_INT_RANGE_NONNULL;
+ else if (wide_int_range_zero_p (vr0_min, vr0_max, prec)
+ || wide_int_range_zero_p (vr1_min, vr1_max, prec))
+ return WIDE_INT_RANGE_NULL;
+ else
+ return WIDE_INT_RANGE_UNKNOWN;
+ }
+ else
+ return WIDE_INT_RANGE_UNKNOWN;
+}
+
+/* irange wrapper for wide_int_range_pointer. */
+
+static void
+irange_pointer_optimization (enum tree_code code, signop s, irange &r,
+ const wide_int &lh_lb, const wide_int &lh_ub,
+ const wide_int &rh_lb, const wide_int &rh_ub)
+{
+ tree type = r.type ();
+ wide_int_range_nullness n;
+ n = wide_int_range_pointer (code, s, lh_lb, lh_ub, rh_lb, rh_ub,
+ TYPE_OVERFLOW_WRAPS (type));
+ if (n == WIDE_INT_RANGE_UNKNOWN)
+ r.set_varying (type);
+ else if (n == WIDE_INT_RANGE_NULL)
+ r.union_ (range_zero (type));
+ else if (n == WIDE_INT_RANGE_NONNULL)
+ r.union_ (range_nonzero (type));
+ else
+ gcc_unreachable ();
+}
+
+/* The result of a bitwise [lh_lb, lh_ub] .AND. [rh_lb, rh_ub] has
+ already been calculated in range R. See if the operation was a
+ masking operation, and optimize it further. */
+
+static void
+irange_adjust_bit_and_mask (irange &r, signop s,
+ const wide_int &lh_lb, const wide_int &lh_ub,
+ const wide_int &rh_lb, const wide_int &rh_ub)
+{
+ /* FIXME: This optimization causes us to miscompare with VRP.
+ Disable for now, and then contribute it independently
+ upstream. */
+ return;
+
+ /* If the resulting range contains 0, AND the least significant bit
+ of mask is not set, we can improve the range by making the least
+ significant bit set the minimum, and adding in the 0.
+
+ For example, A & 0x3C returns a range of [0,60]. We can improve
+ this to [0,0][4, 60]. */
+ wide_int mask, lower_bound, upper_bound;
+ int tz;
+ tree type = r.type ();
+ irange zero = range_zero (type);
+ bool range_contains_zero = !range_intersect (r, zero).undefined_p ();
+ if (range_contains_zero
+ && wide_int_range_get_mask_and_bounds (mask,
+ lower_bound,
+ upper_bound,
+ lh_lb, lh_ub,
+ rh_lb, rh_ub)
+ && (tz = wi::ctz (mask)) != 0)
+ {
+ unsigned prec = TYPE_PRECISION (type);
+ irange negatives, positives;
+ wide_int lb, ub;
+ if (s == SIGNED)
+ {
+ positives = range_positives (type);
+ negatives = range_negatives (type);
+ positives.intersect (r);
+ negatives.intersect (r);
+ }
+ else
+ {
+ positives = r;
+ negatives.set_undefined (type);
+ }
+ if (!positives.undefined_p ())
+ {
+ // Mask out the positive numbers that can't happen.
+ lb = wi::shifted_mask (tz, 1, false, prec);
+ ub = positives.upper_bound();
+ if (wi::le_p (lb, ub, s))
+ positives.intersect (irange (type, lb, ub));
+ }
+ if (!negatives.undefined_p ())
+ {
+ // Mask out the negative numbers that can't happen.
+ lb = wi::min_value (prec, s);
+ ub = wi::shifted_mask (0, tz, true, prec);
+ negatives.intersect (irange (type, lb, ub));
+ }
+ r = positives;
+ r.union_ (negatives);
+ r.union_ (zero);
+ }
+}
+
+/* Perform an operation CODE on pairs of ranges, and store the result
+ in R.
+
+ If the operation is a binary op, perform:
+ [lh_lb, lh_ub] .CODE. [rh_lb, rh_ub]
+
+ If the operation is a unary op, the rh_* bounds are unused.
+
+ Return FALSE to stop processing remaining subranges. In this case,
+ the result is assumed to span the entire domain (range_for_type). */
+
+static bool
+op_wide_int (enum tree_code code, irange &r, tree rh_type,
+ const wide_int &lh_lb, const wide_int lh_ub,
+ const wide_int &rh_lb, const wide_int &rh_ub)
+{
+ wide_int new_lb, new_ub, tmp;
+ wi::overflow_type ov_lb, ov_ub;
+ tree type = r.type ();
+ signop s = TYPE_SIGN (type);
+
+ if (POINTER_TYPE_P (type))
+ {
+ irange_pointer_optimization (code, s, r, lh_lb, lh_ub, rh_lb, rh_ub);
+ return true;
+ }
+
+ switch (code)
+ {
+ case PLUS_EXPR:
+ wide_int_binop (new_lb, code, lh_lb, rh_lb, s, &ov_lb);
+ wide_int_binop (new_ub, code, lh_ub, rh_ub, s, &ov_ub);
+ accumulate_range (r, new_lb, ov_lb, new_ub, ov_ub,
+ TYPE_OVERFLOW_WRAPS (type));
+ return true;
+
+ case MINUS_EXPR:
+ wide_int_binop (new_lb, code, lh_lb, rh_ub, s, &ov_lb);
+ wide_int_binop (new_ub, code, lh_ub, rh_lb, s, &ov_ub);
+ accumulate_range (r, new_lb, ov_lb, new_ub, ov_ub,
+ TYPE_OVERFLOW_WRAPS (type));
+ return true;
+
+ case MAX_EXPR:
+ case MIN_EXPR:
+ if (wide_int_range_min_max (new_lb, new_ub, code, s,
+ TYPE_PRECISION (type),
+ lh_lb, lh_ub, rh_lb, rh_ub))
+ {
+ accumulate_range (r, new_lb, new_ub);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+
+ case MULT_EXPR:
+ if (irange_multiplicative_op (code, s, r, lh_lb, lh_ub, rh_lb, rh_ub))
+ return true;
+ r.set_varying (type);
+ return false;
+
+ case RSHIFT_EXPR:
+ if (!wide_int_range_shift_undefined_p (TYPE_SIGN (rh_type),
+ TYPE_PRECISION (type),
+ rh_lb, rh_ub)
+ && irange_multiplicative_op (code, s, r,
+ lh_lb, lh_ub, rh_lb, rh_ub))
+ return true;
+ r.set_varying (type);
+ return false;
+
+ case LSHIFT_EXPR:
+ if (!wide_int_range_shift_undefined_p (TYPE_SIGN (rh_type),
+ TYPE_PRECISION (type),
+ rh_lb, rh_ub)
+ && wide_int_range_lshift (new_lb, new_ub, s, TYPE_PRECISION (type),
+ lh_lb, lh_ub, rh_lb, rh_ub,
+ TYPE_OVERFLOW_UNDEFINED (type)))
+ {
+ accumulate_range_and_canonicalize (s, r, new_lb, new_ub);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+
+ case BIT_AND_EXPR:
+ {
+ /* For pointer types, we are really only interested in asserting
+ whether the expression evaluates to non-NULL. */
+ wide_int may_be_nonzero_lh, must_be_nonzero_lh;
+ wide_int may_be_nonzero_rh, must_be_nonzero_rh;
+ wide_int_range_set_zero_nonzero_bits (s, lh_lb, lh_ub,
+ may_be_nonzero_lh,
+ must_be_nonzero_lh);
+ wide_int_range_set_zero_nonzero_bits (s, rh_lb, rh_ub,
+ may_be_nonzero_rh,
+ must_be_nonzero_rh);
+ if (wide_int_range_bit_and (new_lb, new_ub, s, TYPE_PRECISION (type),
+ lh_lb, lh_ub,
+ rh_lb, rh_ub,
+ must_be_nonzero_lh,
+ may_be_nonzero_lh,
+ must_be_nonzero_rh,
+ may_be_nonzero_rh))
+ {
+ // For AND, calculate each subrange separately, and then union
+ // the results.
+ irange tmp;
+ tmp.set_undefined (r.type ());
+ accumulate_range (tmp, new_lb, new_ub);
+ irange_adjust_bit_and_mask (tmp, s, lh_lb, lh_ub, rh_lb, rh_ub);
+ r.union_ (tmp);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+ }
+
+ case BIT_IOR_EXPR:
+ {
+ wide_int may_be_nonzero_lh, must_be_nonzero_lh;
+ wide_int may_be_nonzero_rh, must_be_nonzero_rh;
+ wide_int_range_set_zero_nonzero_bits (s, lh_lb, lh_ub,
+ may_be_nonzero_lh,
+ must_be_nonzero_lh);
+ wide_int_range_set_zero_nonzero_bits (s, rh_lb, rh_ub,
+ may_be_nonzero_rh,
+ must_be_nonzero_rh);
+ if (wide_int_range_bit_ior (new_lb, new_ub, s,
+ lh_lb, lh_ub,
+ rh_lb, rh_ub,
+ must_be_nonzero_lh,
+ may_be_nonzero_lh,
+ must_be_nonzero_rh,
+ may_be_nonzero_rh))
+ {
+ accumulate_range (r, new_lb, new_ub);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+ }
+
+ case BIT_XOR_EXPR:
+ {
+ wide_int may_be_nonzero_lh, must_be_nonzero_lh;
+ wide_int may_be_nonzero_rh, must_be_nonzero_rh;
+ wide_int_range_set_zero_nonzero_bits (s, lh_lb, lh_ub,
+ may_be_nonzero_lh,
+ must_be_nonzero_lh);
+ wide_int_range_set_zero_nonzero_bits (s, rh_lb, rh_ub,
+ may_be_nonzero_rh,
+ must_be_nonzero_rh);
+ if (wide_int_range_bit_xor (new_lb, new_ub, s, TYPE_PRECISION (type),
+ must_be_nonzero_lh,
+ may_be_nonzero_lh,
+ must_be_nonzero_rh,
+ may_be_nonzero_rh))
+ {
+ accumulate_range (r, new_lb, new_ub);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+ }
+
+ case TRUNC_MOD_EXPR:
+ if (wide_int_range_zero_p (rh_lb, rh_ub, TYPE_PRECISION (type)))
+ {
+ /* An empty range means undefined. */
+ return false;
+ }
+ wide_int_range_trunc_mod (new_lb, new_ub, s, TYPE_PRECISION (type),
+ lh_lb, lh_ub, rh_lb, rh_ub);
+ accumulate_range (r, new_lb, new_ub);
+ return true;
+
+ case TRUNC_DIV_EXPR:
+ case EXACT_DIV_EXPR:
+ case FLOOR_DIV_EXPR:
+ case ROUND_DIV_EXPR:
+ case CEIL_DIV_EXPR:
+ {
+ /* If we know we will divide by zero, return an empty range,
+ which will be interpreted as undefined. */
+ if (rh_lb == 0 && rh_ub == 0)
+ return false;
+
+ wide_int extra_min, extra_max;
+ bool extra_range_p;
+ if (wide_int_range_div (new_lb, new_ub, code, s,
+ TYPE_PRECISION (type),
+ lh_lb, lh_ub,
+ rh_lb, rh_ub,
+ TYPE_OVERFLOW_UNDEFINED (type),
+ extra_range_p, extra_min, extra_max))
+ {
+ accumulate_range (r, new_lb, new_ub);
+ if (extra_range_p)
+ accumulate_range (r, extra_min, extra_max);
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+ }
+
+ case ABS_EXPR:
+ if (wide_int_range_abs (new_lb, new_ub,
+ TYPE_SIGN (r.type ()),
+ TYPE_PRECISION (r.type ()),
+ lh_lb, lh_ub,
+ TYPE_OVERFLOW_UNDEFINED (r.type ())))
+ {
+ r.union_ (irange (r.type (), new_lb, new_ub));
+ return true;
+ }
+ r.set_varying (type);
+ return false;
+
+ case ABSU_EXPR:
+ wide_int_range_absu (new_lb, new_ub, TYPE_PRECISION (r.type ()),
+ lh_lb, lh_ub);
+ r.union_ (irange (unsigned_type_for (r.type ()), new_lb, new_ub));
+ return true;
+
+ default:
+ return false;
+ }
+}
+
+/* Perform an operation between 2 ranges. */
+
+static bool
+op_binary (enum tree_code code, irange &r, const irange &lh, const irange &rh)
+{
+ bool res = false;
+ tree type = lh.type ();
+ // Clear and set result type.
+ r.set_undefined (type);
+
+ if (lh.undefined_p () || rh.undefined_p ())
+ return true;
+
+ for (unsigned x = 0; x < lh.num_pairs (); ++x)
+ for (unsigned y = 0; y < rh.num_pairs (); ++y)
+ {
+ wide_int lh_lb = lh.lower_bound (x);
+ wide_int lh_ub = lh.upper_bound (x);
+ wide_int rh_lb = rh.lower_bound (y);
+ wide_int rh_ub = rh.upper_bound (y);
+ tree type = rh.type ();
+ res = op_wide_int (code, r, type, lh_lb, lh_ub, rh_lb, rh_ub);
+ if (!res)
+ return false;
+ }
+
+ return res && !r.varying_p ();
+}
+
+/* Perform a unary operation on a range. TYPE is the type of the
+ resulting operation (and thus R). */
+
+static bool
+op_unary (enum tree_code code, irange &r, const irange &lh, tree type)
+{
+ bool res = false;
+ // Clear and set result type.
+ r.set_undefined (type);
+
+ if (lh.undefined_p ())
+ return true;
+
+ for (unsigned x = 0; x < lh.num_pairs (); ++x)
+ {
+ wide_int lower_bound = lh.lower_bound (x);
+ wide_int upper_bound = lh.upper_bound (x);
+ res = op_wide_int (code, r, type,
+ lower_bound, upper_bound, lower_bound, upper_bound);
+ if (!res)
+ return false;
+ }
+ return res && !r.varying_p ();
+}
+
+// Construct and register the range_op handler for tree code C.
+
+trange_operator::trange_operator (enum tree_code c)
+{
+ code = c;
+ range_tree.register_operator (c, this);
+}
+
+// Dump the name of this operation.
+
+void
+trange_operator::dump (FILE *f) const
+{
+ fprintf (f," %s ", get_tree_code_name (code));
+}
+
+// Perform the default fold operation of LH OP RH, and return it in R.
+
+bool
+trange_operator::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ if (empty_range_check (r, lh, rh, lh.type ()))
+ return true;
+
+ return op_binary (code, r, lh, rh);
+}
+
+// Return an irange instance that is a boolean TRUE.
+
+static irange
+range_true ()
+{
+ unsigned prec = TYPE_PRECISION (boolean_type_node);
+ return irange (boolean_type_node, wi::one (prec), wi::one (prec));
+}
+
+// Return an irange instance that is a boolean FALSE.
+
+static irange
+range_false ()
+{
+ unsigned prec = TYPE_PRECISION (boolean_type_node);
+ return irange (boolean_type_node, wi::zero (prec), wi::zero (prec));
+}
+
+
+enum bool_range_state { BRS_FALSE, BRS_TRUE, BRS_EMPTY, BRS_FULL };
+
+/* Return the summary information about boolean range LHS. Return an
+ "interesting" range in R. For EMPTY or FULL, return the equivalent
+ range for TYPE; for BRS_TRUE and BRS_FALSE, return the negation of
+ the bool range. */
+static bool_range_state
+get_bool_state (irange &r, const irange &lhs, tree val_type)
+{
+ /* If there is no result, then this is unexecutable, so no range. */
+ if (lhs.undefined_p ())
+ {
+ r.set_undefined (val_type);
+ return BRS_EMPTY;
+ }
+
+ // If the bounds aren't the same, then it's not a constant.
+ if (!wi::eq_p (lhs.upper_bound (), lhs.lower_bound ()))
+ {
+ r.set_varying (val_type);
+ return BRS_FULL;
+ }
+
+ if (lhs.zero_p ())
+ return BRS_FALSE;
+
+ return BRS_TRUE;
+}
+
+
+class operator_equal : public trange_operator
+{
+public:
+ operator_equal () : trange_operator (EQ_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &val) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &val) const;
+} op_equal;
+
+/* Fold comparison of the 2 ranges. */
+bool
+operator_equal::fold_range (irange &r, const irange &op1,
+ const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ /* If both ranges consist of a single value, we can compare them
+ directly and know for sure whether the values are equal. */
+ if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
+ && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+ {
+ if (wi::eq_p (op1.lower_bound (), op2.upper_bound()))
+ r = range_true ();
+ else
+ r = range_false ();
+ }
+ else
+ {
+ /* If ranges do not intersect, we know the range is not equal, otherwise
+ we don't really know anything for sure. */
+ r = range_intersect (op1, op2);
+ if (r.undefined_p ())
+ r = range_false ();
+ else
+ r.set_varying (boolean_type_node);
+ }
+
+ return true;
+}
+
+bool
+operator_equal::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_FALSE:
+ /* If the result is false, the only time we know anything is if OP2 is
+ a constant. */
+ if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
+ r = range_invert (op2);
+ else
+ r.set_varying (op2.type ());
+ break;
+
+ case BRS_TRUE:
+ /* If it's true, the result is the same as OP2. */
+ r = op2;
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_equal::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_equal::op1_range (r, lhs, op1);
+}
+
+
+/* Range operator for def = op1 != op2. */
+
+class operator_not_equal : public trange_operator
+{
+public:
+ operator_not_equal () : trange_operator (NE_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_not_equal;
+
+/* Fold comparison of the 2 ranges. */
+bool
+operator_not_equal::fold_range (irange &r, const irange &op1,
+ const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ /* If both ranges consist of a single value, we can compare them
+ directly and know for certain whether the values are equal. */
+ if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
+ && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+ {
+ if (wi::ne_p (op1.lower_bound (), op2.upper_bound ()))
+ r = range_true ();
+ else
+ r = range_false ();
+ }
+ else
+ {
+ /* If the ranges do not intersect, the values are never equal, so the
+ NE result is true; otherwise we don't really know anything for sure. */
+ r = range_intersect (op1, op2);
+ if (r.undefined_p ())
+ r = range_true ();
+ else
+ r.set_varying (boolean_type_node);
+ }
+
+ return true;
+}
+
+/* Calculate the range of OP1 being != OP2 based on LHS. */
+bool
+operator_not_equal::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_TRUE:
+ /* If the result is true, the only time we know anything is if OP2 is
+ a constant. */
+ if (wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+ r = range_invert (op2);
+ else
+ r.set_varying (op2.type ());
+ break;
+
+ case BRS_FALSE:
+ /* If it's false, the result is the same as OP2. */
+ r = op2;
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_not_equal::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_not_equal::op1_range (r, lhs, op1);
+}
+
+
+/* (X < VAL) produces a range of [MIN, VAL - 1]. */
+static void
+build_lt (irange &r, tree type, const wide_int &val)
+{
+ wi::overflow_type ov;
+ wide_int lim = wi::sub (val, 1, TYPE_SIGN (type), &ov);
+
+ /* If val - 1 underflows, the check is X < MIN, which is an empty range. */
+ if (ov)
+ r.set_undefined (type);
+ else
+ r = irange (type, min_limit (type), lim);
+}
+
+/* (X <= VAL) produces a range of [MIN, VAL]. */
+static void
+build_le (irange &r, tree type, const wide_int &val)
+{
+ r = irange (type, min_limit (type), val);
+}
+
+/* (X > VAL) produces a range of [VAL + 1, MAX]. */
+static void
+build_gt (irange &r, tree type, const wide_int &val)
+{
+ wi::overflow_type ov;
+ wide_int lim = wi::add (val, 1, TYPE_SIGN (type), &ov);
+ /* If val + 1 overflows, the check is X > MAX, which is an empty range. */
+ if (ov)
+ r.set_undefined (type);
+ else
+ r = irange (type, lim, max_limit (type));
+}
+
+/* (X >= VAL) produces a range of [VAL, MAX]. */
+static void
+build_ge (irange &r, tree type, const wide_int &val)
+{
+ r = irange (type, val, max_limit (type));
+}
+
+
+
+class operator_lt : public trange_operator
+{
+public:
+ operator_lt () : trange_operator (LT_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_lt;
+
+bool
+operator_lt::fold_range (irange &r, const irange &op1, const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ signop sign = TYPE_SIGN (op1.type ());
+ gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+ if (wi::lt_p (op1.upper_bound (), op2.lower_bound (), sign))
+ r = range_true ();
+ else
+ if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
+ r = range_false ();
+ else
+ r.set_varying (boolean_type_node);
+ return true;
+}
+
+
+bool
+operator_lt::op1_range (irange &r, const irange &lhs, const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_TRUE:
+ build_lt (r, op2.type (), op2.upper_bound ());
+ break;
+
+ case BRS_FALSE:
+ build_ge (r, op2.type (), op2.lower_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_lt::op2_range (irange &r, const irange &lhs, const irange &op1) const
+{
+ switch (get_bool_state (r, lhs, op1.type ()))
+ {
+ case BRS_FALSE:
+ build_le (r, op1.type (), op1.upper_bound ());
+ break;
+
+ case BRS_TRUE:
+ build_gt (r, op1.type (), op1.lower_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+
+}
+
+class operator_le : public trange_operator
+{
+public:
+ operator_le () : trange_operator (LE_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_le;
+
+bool
+operator_le::fold_range (irange &r, const irange &op1, const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ signop sign = TYPE_SIGN (op1.type ());
+ gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+ if (wi::le_p (op1.upper_bound (), op2.lower_bound (), sign))
+ r = range_true ();
+ else
+ if (!wi::le_p (op1.lower_bound (), op2.upper_bound (), sign))
+ r = range_false ();
+ else
+ r.set_varying (boolean_type_node);
+ return true;
+}
+
+bool
+operator_le::op1_range (irange &r, const irange &lhs, const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_TRUE:
+ build_le (r, op2.type (), op2.upper_bound ());
+ break;
+
+ case BRS_FALSE:
+ build_gt (r, op2.type (), op2.lower_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_le::op2_range (irange &r, const irange &lhs, const irange &op1) const
+{
+ switch (get_bool_state (r, lhs, op1.type ()))
+ {
+ case BRS_FALSE:
+ build_lt (r, op1.type (), op1.upper_bound ());
+ break;
+
+ case BRS_TRUE:
+ build_ge (r, op1.type (), op1.lower_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+
+}
+
+
+class operator_gt : public trange_operator
+{
+public:
+ operator_gt () : trange_operator (GT_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_gt;
+
+bool
+operator_gt::fold_range (irange &r, const irange &op1, const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ signop sign = TYPE_SIGN (op1.type ());
+ gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+ if (wi::gt_p (op1.lower_bound (), op2.upper_bound (), sign))
+ r = range_true ();
+ else
+ if (!wi::gt_p (op1.upper_bound (), op2.lower_bound (), sign))
+ r = range_false ();
+ else
+ r.set_varying (boolean_type_node);
+
+ return true;
+}
+
+bool
+operator_gt::op1_range (irange &r, const irange &lhs, const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_TRUE:
+ build_gt (r, op2.type (), op2.lower_bound ());
+ break;
+
+ case BRS_FALSE:
+ build_le (r, op2.type (), op2.upper_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_gt::op2_range (irange &r, const irange &lhs, const irange &op1) const
+{
+ switch (get_bool_state (r, lhs, op1.type ()))
+ {
+ case BRS_FALSE:
+ build_ge (r, op1.type (), op1.lower_bound ());
+ break;
+
+ case BRS_TRUE:
+ build_lt (r, op1.type (), op1.upper_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+
+}
+
+
+class operator_ge : public trange_operator
+{
+public:
+ operator_ge () : trange_operator (GE_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_ge;
+
+bool
+operator_ge::fold_range (irange &r, const irange &op1, const irange &op2) const
+{
+ if (empty_range_check (r, op1, op2, boolean_type_node))
+ return true;
+
+ signop sign = TYPE_SIGN (op1.type ());
+ gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+ if (wi::ge_p (op1.lower_bound (), op2.upper_bound (), sign))
+ r = range_true ();
+ else
+ if (!wi::ge_p (op1.upper_bound (), op2.lower_bound (), sign))
+ r = range_false ();
+ else
+ r.set_varying (boolean_type_node);
+
+ return true;
+}
+
+bool
+operator_ge::op1_range (irange &r, const irange &lhs, const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ case BRS_TRUE:
+ build_ge (r, op2.type (), op2.lower_bound ());
+ break;
+
+ case BRS_FALSE:
+ build_lt (r, op2.type (), op2.upper_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+}
+
+
+bool
+operator_ge::op2_range (irange &r, const irange &lhs, const irange &op1) const
+{
+ switch (get_bool_state (r, lhs, op1.type ()))
+ {
+ case BRS_FALSE:
+ build_gt (r, op1.type (), op1.lower_bound ());
+ break;
+
+ case BRS_TRUE:
+ build_le (r, op1.type (), op1.upper_bound ());
+ break;
+
+ default:
+ break;
+ }
+ return true;
+
+}
+
+
+class operator_plus : public trange_operator
+{
+public:
+ operator_plus () : trange_operator (PLUS_EXPR) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+
+} op_plus;
+
+/* Adjust irange to be in terms of op1.
+ Given lhs = op1 + op2, op1 = lhs - op2. */
+bool
+operator_plus::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ return op_binary (MINUS_EXPR, r, lhs, op2);
+}
+
+bool
+operator_plus::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return op_binary (MINUS_EXPR, r, lhs, op1);
+}
+
+
+class operator_minus : public trange_operator
+{
+public:
+ operator_minus () : trange_operator (MINUS_EXPR) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_minus;
+
+/* Adjust irange to be in terms of op1.
+ Given lhs = op1 - op2, op1 = lhs + op2. */
+bool
+operator_minus::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ return op_binary (PLUS_EXPR, r, lhs, op2);
+}
+
+/* Adjust irange to be in terms of op2.
+ Given lhs = op1 - op2, -op2 = lhs - op1, therefore op2 = op1 - lhs. */
+bool
+operator_minus::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return op_binary (MINUS_EXPR, r, op1, lhs);
+}
+
+
+trange_operator op_mult (MULT_EXPR);
+trange_operator op_trunc_div (TRUNC_DIV_EXPR);
+trange_operator op_floor_div(FLOOR_DIV_EXPR);
+trange_operator op_round_div (ROUND_DIV_EXPR);
+trange_operator op_ceil_div (CEIL_DIV_EXPR);
+trange_operator op_pointer_plus (POINTER_PLUS_EXPR);
+trange_operator op_max (MAX_EXPR);
+trange_operator op_min (MIN_EXPR);
+
+class operator_exact_divide : public trange_operator
+{
+public:
+ operator_exact_divide () : trange_operator (EXACT_DIV_EXPR) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+
+} op_exact_div;
+
+
+// Adjust irange to be in terms of op1.
+bool
+operator_exact_divide::op1_range (irange &r,
+ const irange &lhs,
+ const irange &op2) const
+{
+ tree offset;
+ // [2, 4] = op1 / [3,3].  Since it's an exact divide, there is no need
+ // to worry about remainders in the endpoints, so op1 = [2,4] * [3,3] = [6,12].
+ // We won't bother trying to enumerate all the in-between stuff :-P
+ // True accuracy is [6,6][9,9][12,12].  This is unlikely to matter most of
+ // the time, however.
+ // If op2 is a multiple of 2, we would be able to set some non-zero bits.
+ if (op2.singleton_p (&offset) && op_binary (MULT_EXPR, r, lhs, op2)
+ && !integer_zerop (offset))
+ return true;
+ return false;
+}
+
+
+class operator_shift : public trange_operator
+{
+public:
+ operator_shift (enum tree_code c) : trange_operator (c) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+};
+
+operator_shift op_lshift (LSHIFT_EXPR);
+operator_shift op_rshift (RSHIFT_EXPR);
+
+bool
+operator_shift::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ tree type = lhs.type ();
+ wide_int w2;
+ if (empty_range_check (r, lhs, op2, type))
+ return true;
+
+ /* FIXME: Andrew: You probably want to use
+ wide_int_range_shift_undefined_p() here instead of these checks.
+ But I'm not going to touch anything since this entire function
+ isn't doing anything but keeping notes to entertain you in your
+ sleepless nights. */
+
+ // Negative shifts are undefined, as well as shift >= precision
+ if (wi::lt_p (op2.lower_bound (), 0, TYPE_SIGN (op2.type ())))
+ return false;
+ if (wi::ge_p (op2.upper_bound (), TYPE_PRECISION (type), UNSIGNED))
+ return false;
+
+ return false;
+#if 0
+ // Check if the calculation can be done without overflows.
+ // and if so, adjust the bounds to allow for 1's that may have been shifted
+ // out.
+ wide_int mask;
+ if (code == LSHIFT_EXPR)
+ {
+ res = op_binary (RSHIFT_EXPR, r, lhs, w2);
+ if (res)
+ {
+ mask = wi::mask (op, true, r.get_precision ());
+ }
+ }
+ else
+ {
+ res = op_binary (LSHIFT_EXPR, r, lhs, w2);
+ }
+
+ return res;
+#endif
+}
+
+
+class operator_cast: public trange_operator
+{
+public:
+ operator_cast (enum tree_code code) : trange_operator (code) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+
+};
+
+operator_cast op_nop (NOP_EXPR);
+operator_cast op_convert (CONVERT_EXPR);
+
+
+/* Return the range of lh converted to the type of rh:
+ r = (type_of(rh)) lh. */
+
+bool
+operator_cast::fold_range (irange &r, const irange &lh, const irange &rh) const
+{
+ if (empty_range_check (r, lh, rh, rh.type ()))
+ return true;
+
+ if (lh.type () != rh.type ())
+ {
+ /* Handle conversion so they become the same type. */
+ r = lh;
+ r.cast (rh.type ());
+ r.intersect (rh);
+ }
+ else
+ /* If they are the same type, the result should be the intersection of
+ the two ranges. */
+ r = range_intersect (lh, rh);
+ return true;
+}
+
+bool
+operator_cast::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ tree lhs_type = lhs.type ();
+ tree op2_type = op2.type ();
+ irange op_type;
+
+ /* If the precision of the LHS is smaller than the precision of the RHS,
+ then there would be truncation of the value on the RHS, and so we can tell
+ nothing about it. */
+ if (TYPE_PRECISION (lhs_type) < TYPE_PRECISION (op2_type))
+ {
+ /* If we've been passed an actual value for the RHS rather than the type,
+ see if it fits the LHS, and if so, then we can allow it. */
+ r = op2;
+ r.cast (lhs_type);
+ r.cast (op2_type);
+ if (r == op2)
+ {
+ /* We know the value of the RHS fits in the LHS type, so convert the
+ left hand side and remove any values that aren't in OP2. */
+ r = lhs;
+ r.cast (op2_type);
+ r.intersect (op2);
+ return true;
+ }
+ /* Special case if the LHS is a boolean. A 0 means the RHS is zero,
+ and a 1 means the RHS is non-zero. */
+ else if (TREE_CODE (lhs_type) == BOOLEAN_TYPE)
+ {
+ /* If the LHS is unknown, the result is whatever op2 already is. */
+ if (!lhs.singleton_p ())
+ {
+ r = op2;
+ return true;
+ }
+ /* Boolean casts are weird in GCC. It's actually an implied mask with
+ 0x01, so all that is known is whether the rightmost bit is 0 or 1,
+ which implies the only value *not* in the RHS is 0 or -1. */
+ unsigned prec = TYPE_PRECISION (op2_type);
+ if (lhs.zero_p ())
+ r = irange (VR_ANTI_RANGE, op2_type,
+ wi::minus_one (prec), wi::minus_one (prec));
+ else
+ r = irange (VR_ANTI_RANGE, op2_type,
+ wi::zero (prec), wi::zero (prec));
+ /* And intersect it with what we know about op2. */
+ r.intersect (op2);
+ return true;
+ }
+ /* Otherwise we'll have to assume it's whatever we know about op2. */
+ r = op2;
+ return true;
+ }
+
+ /* If the LHS precision is greater than the RHS precision, the LHS range
+ is restricted to the range of the RHS by this assignment. */
+ if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (op2_type))
+ {
+ /* Cast the range of the RHS to the type of the LHS. */
+ op_type.set_varying (op2_type);
+ op_type.cast (lhs_type);
+
+ /* Intersecting this with the LHS range will produce the RHS range. */
+ r = range_intersect (lhs, op_type);
+ }
+ else
+ r = lhs;
+
+ /* Cast the calculated range to the type of the RHS. */
+ r.cast (op2.type ());
+
+ return true;
+}
+
+/* ---------------------------------------------------------------------- */
+
+// Bitwise and logical ops.
+
+class operator_logical_and : public trange_operator
+{
+public:
+ operator_logical_and () : trange_operator (TRUTH_AND_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &lh, const irange &rh) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_logical_and;
+
+
+bool
+operator_logical_and::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ if (empty_range_check (r, lh, rh, boolean_type_node))
+ return true;
+
+ // 0 && anything is 0
+ if ((wi::eq_p (lh.lower_bound (), 0) && wi::eq_p (lh.upper_bound (), 0))
+ || (wi::eq_p (rh.lower_bound (), 0) && wi::eq_p (rh.upper_bound (), 0)))
+ {
+ r = range_false ();
+ return true;
+ }
+
+ // To reach this point, there must be a logical 1 on each side, and the only
+ // remaining question is whether there is a zero or not.
+
+ if (lh.contains_p (build_zero_cst (lh.type ()))
+ || rh.contains_p (build_zero_cst (rh.type ())))
+ r.set_varying (boolean_type_node);
+ else
+ r = range_true ();
+
+ return true;
+}
+
+
+
+bool
+operator_logical_and::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ /* A true result means both sides of the AND must be true. */
+ case BRS_TRUE:
+ r = range_true ();
+ break;
+ /* Any other result means only one side has to be false, and the other
+ side can be anything.  So we can't be sure of any result here. */
+ default:
+ r.set_varying (boolean_type_node);
+ break;
+ }
+ return true;
+}
+
+bool
+operator_logical_and::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_logical_and::op1_range (r, lhs, op1);
+}
+
+class operator_bitwise_and : public trange_operator
+{
+public:
+ operator_bitwise_and () : trange_operator (BIT_AND_EXPR) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_bitwise_and;
+
+bool
+operator_bitwise_and::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ /* If this is really a logical operation, call that. */
+ if (types_compatible_p (lhs.type (), boolean_type_node))
+ return op_logical_and.op1_range (r, lhs, op2);
+
+ /* For now do nothing with bitwise AND of iranges, just return the type. */
+ r.set_varying (lhs.type ());
+ return true;
+}
+
+bool
+operator_bitwise_and::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_bitwise_and::op1_range (r, lhs, op1);
+}
+
+
+class operator_logical_or : public trange_operator
+{
+public:
+ operator_logical_or () : trange_operator (TRUTH_OR_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &lh, const irange &rh) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_logical_or;
+
+
+bool
+operator_logical_or::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ if (empty_range_check (r, lh, rh, boolean_type_node))
+ return true;
+
+ r = range_union (lh, rh);
+ return true;
+}
+
+bool
+operator_logical_or::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ switch (get_bool_state (r, lhs, op2.type ()))
+ {
+ /* A false result means both sides of the OR must be false. */
+ case BRS_FALSE:
+ r = range_false ();
+ break;
+ /* Any other result means only one side has to be true, and the other
+ side can be anything.  So we can't be sure of any result here. */
+ default:
+ r.set_varying (boolean_type_node);
+ break;
+ }
+ return true;
+}
+
+bool
+operator_logical_or::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_logical_or::op1_range (r, lhs, op1);
+}
+
+class operator_bitwise_or : public trange_operator
+{
+public:
+ operator_bitwise_or () : trange_operator (BIT_IOR_EXPR) { }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_bitwise_or;
+
+bool
+operator_bitwise_or::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ /* If this is really a logical operation, call that. */
+ if (types_compatible_p (lhs.type (), boolean_type_node))
+ return op_logical_or.op1_range (r, lhs, op2);
+
+ /* For now do nothing with bitwise OR of iranges, just return the type. */
+ r.set_varying (lhs.type ());
+ return true;
+}
+
+bool
+operator_bitwise_or::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_bitwise_or::op1_range (r, lhs, op1);
+}
+
+class operator_bitwise_xor : public trange_operator
+{
+public:
+ operator_bitwise_xor (): trange_operator (BIT_XOR_EXPR) { }
+ /* FIXME: Andrew can implement the op1_range and op2_range variants
+ when he returns from leave :-P. */
+} op_bitwise_xor;
+
+class operator_trunc_mod : public trange_operator
+{
+public:
+ operator_trunc_mod (): trange_operator (TRUNC_MOD_EXPR) { }
+ /* FIXME: Andrew can implement the op1_range and op2_range variants
+ when he returns from leave :-P. */
+} op_trunc_mod;
+
+class operator_logical_not : public trange_operator
+{
+public:
+ operator_logical_not () : trange_operator (TRUTH_NOT_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &lh, const irange &rh) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+} op_logical_not;
+
+
+/* Folding a logical NOT, oddly enough, involves doing nothing on the
+ forward pass through.  During the initial walk backwards, the logical NOT
+ reversed the desired outcome on the way back, so on the way forward all
+ we do is pass the range forward.
+ b_2 = x_1 < 20
+ b_3 = !b_2
+ if (b_3)
+ to determine the TRUE branch, walking backward
+ if (b_3) if ([1,1])
+ b_3 = !b_2 [1,1] = ![0,0]
+ b_2 = x_1 < 20 [0,0] = x_1 < 20, false, so x_1 == [20, 255]
+ which is the result we are looking for... so pass it through. */
+
+bool
+operator_logical_not::fold_range (irange &r, const irange &lh,
+ const irange &rh ATTRIBUTE_UNUSED) const
+{
+ if (empty_range_check (r, lh, rh, boolean_type_node))
+ return true;
+
+ if (lh.varying_p () || lh.undefined_p ())
+ r = lh;
+ else
+ r = range_invert (lh);
+ return true;
+}
+
+bool
+operator_logical_not::op1_range (irange &r, const irange &lhs,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+{
+ if (lhs.varying_p () || lhs.undefined_p ())
+ r = lhs;
+ else
+ r = range_invert (lhs);
+ return true;
+}
+
+
+class operator_bitwise_not : public trange_operator
+{
+public:
+ operator_bitwise_not () : trange_operator (BIT_NOT_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &lh, const irange &rh) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+} op_bitwise_not;
+
+bool
+operator_bitwise_not::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ tree type = lh.type ();
+ if (empty_range_check (r, lh, rh, type))
+ return true;
+
+ // ~X is simply -1 - X.
+ irange minusone (type,
+ wi::minus_one (TYPE_PRECISION (type)),
+ wi::minus_one (TYPE_PRECISION (type)));
+ return op_binary (MINUS_EXPR, r, minusone, lh);
+}
+
+bool
+operator_bitwise_not::op1_range (irange &r, const irange &lhs,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+{
+ tree type = lhs.type ();
+
+ // ~X is -1 - X, and since bitwise NOT is involutory... do it again.
+ irange minusone (type,
+ wi::minus_one (TYPE_PRECISION (type)),
+ wi::minus_one (TYPE_PRECISION (type)));
+ return op_binary (MINUS_EXPR, r, minusone, lhs);
+}
+
+
+/* ---------------------------------------------------------------------- */
+
+
+class operator_cst : public trange_operator
+{
+public:
+ operator_cst () : trange_operator (INTEGER_CST) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+} op_integer_cst;
+
+
+bool
+operator_cst::fold_range (irange &r, const irange &lh,
+ const irange &rh ATTRIBUTE_UNUSED) const
+{
+ r = lh;
+ return true;
+}
+
+
+class operator_ssa_name : public trange_operator
+{
+public:
+ operator_ssa_name () : trange_operator (SSA_NAME) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+} op_ssa_name;
+
+bool
+operator_ssa_name::fold_range (irange &r, const irange &lh,
+ const irange &rh ATTRIBUTE_UNUSED) const
+{
+ r = lh;
+ return true;
+}
+
+bool
+operator_ssa_name::op1_range (irange &r, const irange &lhs,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+{
+ r = lhs;
+ return true;
+}
+
+
+// Unary identity function.
+class operator_identity : public trange_operator
+{
+ public:
+ operator_identity (enum tree_code c) : trange_operator (c) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+ { r = op1; return false; }
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2 ATTRIBUTE_UNUSED) const
+ { r = lhs; return false; }
+} op_paren (PAREN_EXPR), op_obj_type_ref (OBJ_TYPE_REF);
+
+class operator_abs : public trange_operator
+{
+ public:
+ operator_abs (enum tree_code code) : trange_operator (code) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r,
+ const irange &lhs, const irange &op2) const;
+} op_abs (ABS_EXPR), op_absu (ABSU_EXPR);
+
+bool
+operator_abs::fold_range (irange &r,
+ const irange &lh, const irange &rh) const
+{
+ tree type = rh.type ();
+ if (empty_range_check (r, lh, rh, type))
+ return true;
+
+ return op_unary (code, r, lh, type);
+}
+
+bool
+operator_abs::op1_range (irange &r,
+ const irange &lhs, const irange &op2) const
+{
+ // FIXME: ?? Andrew TODO ??
+ if (code == ABSU_EXPR)
+ return false;
+
+ tree type = lhs.type ();
+ if (empty_range_check (r, lhs, op2, type))
+ return true;
+ if (TYPE_UNSIGNED (type))
+ {
+ r = lhs;
+ return true;
+ }
+ // Start with the positives because negatives are an impossible result.
+ irange positives = range_positives (type);
+ positives.intersect (lhs);
+ r = positives;
+ // Then add the negative of each pair:
+ // ABS(op1) = [5,20] would yield op1 => [-20,-5][5,20].
+ for (unsigned i = 0; i < positives.num_pairs (); ++i)
+ r.union_ (irange (type,
+ -positives.upper_bound (i),
+ -positives.lower_bound (i)));
+ return true;
+}
+
+class operator_negate : public trange_operator
+{
+ public:
+ operator_negate () : trange_operator (NEGATE_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+ { return fold_range (r, lhs, op2); } // NEGATE is involutory :-P.
+} op_negate;
+
+/* Return the negated range of lh with the type of rh. */
+
+bool
+operator_negate::fold_range (irange &r,
+ const irange &lh, const irange &rh) const
+{
+ tree type = rh.type ();
+ if (empty_range_check (r, lh, rh, type))
+ return true;
+ // -X is simply 0 - X.
+ return op_binary (MINUS_EXPR, r, range_zero (type), lh);
+}
+
+// Disable for now for VRP parity.
+#if 0
+class operator_min_max : public trange_operator
+{
+public:
+ operator_min_max (tree_code c) : trange_operator (c) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+ virtual bool op2_range (irange &r, const irange &lhs,
+ const irange &op1) const;
+} op_min (MIN_EXPR), op_max (MAX_EXPR);
+
+
+bool
+operator_min_max::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ wide_int lb, ub;
+ wi::overflow_type ov;
+ tree type = lh.type ();
+
+ if (empty_range_check (r, lh, rh, type))
+ return true;
+
+ // Start with the union of both ranges.
+ r = range_union (lh, rh);
+
+ // For pointer types we are concerned with NULL and NON-NULL.
+ // Min max result in this case is a strict union.
+ if (POINTER_TYPE_P (type))
+ return true;
+
+ // Intersect the union with the max/min values of both to get a set
+ // of values.  This allows MIN ([1,5][20,30], [0,4][18,60]) to
+ // produce [0,5][18,30] rather than [0,30].
+ wide_int_binop (lb, code, lh.lower_bound (), rh.lower_bound (),
+ TYPE_SIGN (type), &ov);
+ wide_int_binop (ub, code, lh.upper_bound (), rh.upper_bound (),
+ TYPE_SIGN (type), &ov);
+ r.intersect (irange (type, lb, ub));
+ return true;
+}
+
+bool
+operator_min_max::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ if (empty_range_check (r, lhs, op2, lhs.type ()))
+ return true;
+
+ if (POINTER_TYPE_P (lhs.type ()))
+ return false;
+
+ // Until this can be examined more closely... I'm not convinced this is right.
+ return false;
+
+ wide_int lb = lhs.lower_bound ();
+ wide_int ub = lhs.upper_bound ();
+ if (code == MIN_EXPR)
+ {
+ // If the upper bound is set by the other operand, we have no idea
+ // what this upper bound could be; otherwise it HAS to be the upper bound.
+ if (wi::eq_p (lhs.upper_bound (), op2.upper_bound ()))
+ ub = max_limit (lhs.type ());
+ }
+ else
+ {
+ // If the lower bound is set by the other operand, we have no idea
+ // what this lower bound could be; otherwise it HAS to be the lower bound.
+ if (wi::eq_p (lhs.lower_bound (), op2.lower_bound ()))
+ lb = min_limit (lhs.type ());
+ }
+
+ r = irange (lhs.type (), lb, ub);
+ return true;
+}
+
+bool
+operator_min_max::op2_range (irange &r, const irange &lhs,
+ const irange &op1) const
+{
+ return operator_min_max::op1_range (r, lhs, op1);
+}
+#endif // if 0
+
+class operator_addr_expr : public trange_operator
+{
+public:
+ operator_addr_expr () : trange_operator (ADDR_EXPR) { }
+ virtual bool fold_range (irange &r, const irange &op1,
+ const irange &op2) const;
+ virtual bool op1_range (irange &r, const irange &lhs,
+ const irange &op2) const;
+} op_addr;
+
+bool
+operator_addr_expr::fold_range (irange &r, const irange &lh,
+ const irange &rh) const
+{
+ if (empty_range_check (r, lh, rh, rh.type ()))
+ return true;
+
+ // Return a non-null pointer of the LHS type (passed in RH).
+ if (lh.zero_p ())
+ r = range_zero (rh.type ());
+ else
+ if (!lh.contains_p (build_zero_cst (lh.type ())))
+ r = range_nonzero (rh.type ());
+ else
+ return false;
+ return true;
+}
+
+// The same functionality for fold() applies to op1_range...
+// effectively copying the non-nullness.
+bool
+operator_addr_expr::op1_range (irange &r, const irange &lhs,
+ const irange &op2) const
+{
+ return operator_addr_expr::fold_range (r, lhs, op2);
+}
+
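For the curious reading along, the op1_range/op2_range methods above (operator_plus, operator_minus) are just interval algebra: given lhs = op1 + op2, solve for op1 as lhs - op2. Here is a rough Python sketch of that algebra, not part of the patch, assuming plain closed intervals with no overflow, type, or multi-range handling:

```python
# Illustrative sketch of the range algebra behind operator_plus::op1_range
# and operator_minus::op2_range, on simple (lo, hi) closed intervals.

def interval_add(a, b):
    # [a_lo, a_hi] + [b_lo, b_hi] = [a_lo + b_lo, a_hi + b_hi]
    return (a[0] + b[0], a[1] + b[1])

def interval_sub(a, b):
    # [a_lo, a_hi] - [b_lo, b_hi] = [a_lo - b_hi, a_hi - b_lo]
    return (a[0] - b[1], a[1] - b[0])

def plus_op1_range(lhs, op2):
    # Mirrors operator_plus::op1_range: lhs = op1 + op2, so op1 = lhs - op2.
    return interval_sub(lhs, op2)

def minus_op2_range(lhs, op1):
    # Mirrors operator_minus::op2_range: lhs = op1 - op2, so op2 = op1 - lhs.
    return interval_sub(op1, lhs)

lhs = interval_add((5, 10), (2, 3))    # op1 + op2 = (7, 13)
print(plus_op1_range(lhs, (2, 3)))     # (4, 11), a superset of op1 = (5, 10)
```

Note the recovered range is a conservative superset of the original op1, which is exactly what the back-propagation pass needs.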
diff --git a/gcc/range-op.h b/gcc/range-op.h
new file mode 100644
index 00000000000..64eeb4f43be
--- /dev/null
+++ b/gcc/range-op.h
@@ -0,0 +1,71 @@
+/* Header file for range operator class.
+ Copyright (C) 2017-2019 Free Software Foundation, Inc.
+ Contributed by Andrew MacLeod <amacl...@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#ifndef GCC_RANGE_OP_H
+#define GCC_RANGE_OP_H
+
+// This class is implemented for each kind of operator that is supported by
+// the range generator. It serves three purposes.
+//
+// 1 - Generates range information for the specific operation between
+// the 4 possible combinations of integers and ranges.
+// This provides the ability to fold ranges for an expression.
+//
+// 2 - Performs range algebra on the expression such that a range can be
+// adjusted in terms of one of the operands:
+// def = op1 + op2
+// Given a range for def, we can possibly adjust the range so it is in
+// terms of either operand.
+// op1_adjust (def_range, op2) will adjust the range in place so it is
+// in terms of op1. Since op1 = def - op2, it will subtract op2 from
+// each element of the range.
+//
+// 3 - Creates a range for an operand based on whether the result is 0 or
+// non-zero. This is mostly for logical true/false, but can serve other
+// purposes.
+// I.e. 0 = op1 - op2 implies op2 has the same range as op1.
+
+
+class range_operator
+{
+public:
+ virtual void dump (FILE *f) const = 0;
+
+ // Set a range based on this operation between 2 operands.
+ // Return TRUE if a valid range is created.
+ virtual bool fold_range (irange& r, const irange& op1,
+ const irange& op2) const;
+
+ // Set the range for op? in the general case. LHS is the range for
+ // the LHS of the expression, VAL is the range for the other
+ // operand, and the result is returned in R.
+ // I.e. [range] = op1 + VAL.
+ // This is re-formed as new_range = [range] - VAL.
+ // Return TRUE if the operation could be performed and the range is
+ // valid.
+ virtual bool op1_range (irange& r, const irange& lhs,
+ const irange& op2) const;
+ virtual bool op2_range (irange& r, const irange& lhs,
+ const irange& op1) const;
+};
+
+extern range_operator *range_op_handler (enum tree_code code);
+
+#endif // GCC_RANGE_OP_H
diff --git a/gcc/range.cc b/gcc/range.cc
new file mode 100644
index 00000000000..0867cff699e
--- /dev/null
+++ b/gcc/range.cc
@@ -0,0 +1,1505 @@
+/* High resolution range class.
+ Copyright (C) 2017-2019 Free Software Foundation, Inc.
+ Contributed by Aldy Hernandez <al...@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "gimple-pretty-print.h"
+#include "fold-const.h"
+#include "ssa.h"
+#include "range.h"
+#include "selftest.h"
+#include "wide-int-range.h"
+
+// Common code between the alternate irange implementations.
+
+// Return TRUE if two types are compatible for range operations.
+
+static bool
+range_compatible_p (tree t1, tree t2)
+{
+ if (POINTER_TYPE_P (t1) && POINTER_TYPE_P (t2))
+ return true;
+
+ // FORTRAN has different precision booleans that trigger a false
+ // from types_compatible_p.
+ if (TREE_CODE (t1) == BOOLEAN_TYPE && TREE_CODE (t2) == BOOLEAN_TYPE)
+ return true;
+
+ return types_compatible_p (t1, t2);
+}
+
+bool
+irange::operator== (const irange &r) const
+{
+ // Special case this because a freshly initialized range may be
+ // typeless.
+ if (undefined_p ())
+ return r.undefined_p ();
+
+ if (num_pairs () != r.num_pairs ()
+ || !range_compatible_p (type (), r.type ()))
+ return false;
+
+ for (unsigned p = 0; p < num_pairs (); p++)
+ if (wi::ne_p (lower_bound (p), r.lower_bound (p))
+ || wi::ne_p (upper_bound (p), r.upper_bound (p)))
+ return false;
+
+ return true;
+}
+
+bool
+irange::operator!= (const irange &r) const
+{
+ return !(*this == r);
+}
+
+irange
+range_intersect (const irange &r1, const irange &r2)
+{
+ irange tmp (r1);
+ tmp.intersect (r2);
+ return tmp;
+}
+
+irange
+range_invert (const irange &r1)
+{
+ irange tmp (r1);
+ tmp.invert ();
+ return tmp;
+}
+
+irange
+range_union (const irange &r1, const irange &r2)
+{
+ irange tmp (r1);
+ tmp.union_ (r2);
+ return tmp;
+}
+
+irange
+range_zero (tree type)
+{
+ return irange (build_zero_cst (type), build_zero_cst (type));
+}
+
+irange
+range_nonzero (tree type)
+{
+ return irange (VR_ANTI_RANGE,
+ build_zero_cst (type), build_zero_cst (type));
+}
+
+irange
+range_positives (tree type)
+{
+ unsigned prec = TYPE_PRECISION (type);
+ signop sign = TYPE_SIGN (type);
+ return irange (type, wi::zero (prec), wi::max_value (prec, sign));
+}
+
+irange
+range_negatives (tree type)
+{
+ unsigned prec = TYPE_PRECISION (type);
+ signop sign = TYPE_SIGN (type);
+ irange r;
+ if (sign == UNSIGNED)
+ r.set_undefined (type);
+ else
+ r = irange (type, wi::min_value (prec, sign), wi::minus_one (prec));
+ return r;
+}
+
+#if CHECKING_P
+#include "stor-layout.h"
+
+// Ideally this should go in namespace selftest, but range_tests
+// needs to be a friend of class irange so it can access
+// irange::m_max_pairs.
+
+#define INT(N) build_int_cst (integer_type_node, (N))
+#define UINT(N) build_int_cstu (unsigned_type_node, (N))
+#define INT16(N) build_int_cst (short_integer_type_node, (N))
+#define UINT16(N) build_int_cstu (short_unsigned_type_node, (N))
+#define INT64(N) build_int_cstu (long_long_integer_type_node, (N))
+#define UINT64(N) build_int_cstu (long_long_unsigned_type_node, (N))
+#define UINT128(N) build_int_cstu (u128_type, (N))
+#define UCHAR(N) build_int_cstu (unsigned_char_type_node, (N))
+#define SCHAR(N) build_int_cst (signed_char_type_node, (N))
+
+#define RANGE3(A,B,C,D,E,F) \
+( i1 = irange (INT (A), INT (B)), \
+ i2 = irange (INT (C), INT (D)), \
+ i3 = irange (INT (E), INT (F)), \
+ i1.union_ (i2), \
+ i1.union_ (i3), \
+ i1 )
+
+// Run all of the selftests within this file.
+
+void
+range_tests ()
+{
+ tree u128_type = build_nonstandard_integer_type (128, /*unsigned=*/1);
+ irange i1, i2, i3;
+ irange r0, r1, rold;
+
+ // Test that NOT(255) is [0..254] in 8-bit land.
+ irange not_255 (VR_ANTI_RANGE, UCHAR (255), UCHAR (255));
+ ASSERT_TRUE (not_255 == irange (UCHAR (0), UCHAR (254)));
+
+ // Test that NOT(0) is [1..255] in 8-bit land.
+ irange not_zero = range_nonzero (unsigned_char_type_node);
+ ASSERT_TRUE (not_zero == irange (UCHAR (1), UCHAR (255)));
+
+ // Check that [0,127][0x..ffffff80,0x..ffffff]
+ // => ~[128, 0x..ffffff7f].
+ r0 = irange (UINT128 (0), UINT128 (127));
+ tree high = build_minus_one_cst (u128_type);
+ // low = -1 - 127 => 0x..ffffff80.
+ tree low = fold_build2 (MINUS_EXPR, u128_type, high, UINT128(127));
+ r1 = irange (low, high); // [0x..ffffff80, 0x..ffffffff]
+ // r0 = [0,127][0x..ffffff80,0x..ffffffff].
+ r0.union_ (r1);
+ // r1 = [128, 0x..ffffff7f].
+ r1 = irange (UINT128(128),
+ fold_build2 (MINUS_EXPR, u128_type,
+ build_minus_one_cst (u128_type),
+ UINT128(128)));
+ r0.invert ();
+ ASSERT_TRUE (r0 == r1);
+
+ r0.set_varying (integer_type_node);
+ tree minint = wide_int_to_tree (integer_type_node, r0.lower_bound ());
+ tree maxint = wide_int_to_tree (integer_type_node, r0.upper_bound ());
+
+ r0.set_varying (short_integer_type_node);
+ tree minshort = wide_int_to_tree (short_integer_type_node, r0.lower_bound ());
+ tree maxshort = wide_int_to_tree (short_integer_type_node, r0.upper_bound ());
+
+ r0.set_varying (unsigned_type_node);
+ tree maxuint = wide_int_to_tree (unsigned_type_node, r0.upper_bound ());
+
+ // Check that ~[0,5] => [6,MAX] for unsigned int.
+ r0 = irange (UINT (0), UINT (5));
+ r0.invert ();
+ ASSERT_TRUE (r0 == irange (UINT(6), maxuint));
+
+ // Check that ~[10,MAX] => [0,9] for unsigned int.
+ r0 = irange (VR_RANGE, UINT(10), maxuint);
+ r0.invert ();
+ ASSERT_TRUE (r0 == irange (UINT (0), UINT (9)));
+
+ // Check that ~[0,5] => [6,MAX] for unsigned 128-bit numbers.
+ r0 = irange (VR_ANTI_RANGE, UINT128 (0), UINT128 (5));
+ r1 = irange (UINT128(6), build_minus_one_cst (u128_type));
+ ASSERT_TRUE (r0 == r1);
+
+ // Check that ~[5] is really [-MIN,4][6,MAX].
+ r0 = irange (VR_ANTI_RANGE, INT (5), INT (5));
+ r1 = irange (minint, INT (4));
+ r1.union_ (irange (INT (6), maxint));
+ ASSERT_FALSE (r1.undefined_p ());
+ ASSERT_TRUE (r0 == r1);
+
+ r1 = irange (INT (5), INT (5));
+ r1.check ();
+ irange r2 (r1);
+ ASSERT_TRUE (r1 == r2);
+
+ r1 = irange (INT (5), INT (10));
+ r1.check ();
+
+ r1 = irange (integer_type_node,
+ wi::to_wide (INT (5)), wi::to_wide (INT (10)));
+ r1.check ();
+ ASSERT_TRUE (r1.contains_p (INT (7)));
+
+ r1 = irange (SCHAR (0), SCHAR (20));
+ ASSERT_TRUE (r1.contains_p (SCHAR(15)));
+ ASSERT_FALSE (r1.contains_p (SCHAR(300)));
+
+ // If a range is in any way outside of the range of the type being
+ // converted to, default to the full range of the new type.
+ r1 = irange (integer_zero_node, maxint);
+ r1.cast (short_integer_type_node);
+ ASSERT_TRUE (r1.lower_bound () == wi::to_wide (minshort)
+ && r1.upper_bound() == wi::to_wide (maxshort));
+
+ // (unsigned char)[-5,-1] => [251,255].
+ r0 = rold = irange (SCHAR (-5), SCHAR (-1));
+ r0.cast (unsigned_char_type_node);
+ ASSERT_TRUE (r0 == irange (UCHAR (251), UCHAR (255)));
+ r0.cast (signed_char_type_node);
+ ASSERT_TRUE (r0 == rold);
+
+ // (signed char)[15, 150] => [-128,-106][15,127].
+ r0 = rold = irange (UCHAR (15), UCHAR (150));
+ r0.cast (signed_char_type_node);
+ r1 = irange (SCHAR (15), SCHAR (127));
+ r2 = irange (SCHAR (-128), SCHAR (-106));
+ r1.union_ (r2);
+ ASSERT_TRUE (r1 == r0);
+ r0.cast (unsigned_char_type_node);
+ ASSERT_TRUE (r0 == rold);
+
+ // (unsigned char)[-5, 5] => [0,5][251,255].
+ r0 = rold = irange (SCHAR (-5), SCHAR (5));
+ r0.cast (unsigned_char_type_node);
+ r1 = irange (UCHAR (251), UCHAR (255));
+ r2 = irange (UCHAR (0), UCHAR (5));
+ r1.union_ (r2);
+ ASSERT_TRUE (r0 == r1);
+ r0.cast (signed_char_type_node);
+ ASSERT_TRUE (r0 == rold);
+
+ // (unsigned char)[-5,5] => [0,5][251,255].
+ r0 = irange (INT (-5), INT (5));
+ r0.cast (unsigned_char_type_node);
+ r1 = irange (UCHAR (0), UCHAR (5));
+ r1.union_ (irange (UCHAR (251), UCHAR (255)));
+ ASSERT_TRUE (r0 == r1);
+
+ // (unsigned char)[5U,1974U] => [0,255].
+ r0 = irange (UINT (5), UINT (1974));
+ r0.cast (unsigned_char_type_node);
+ ASSERT_TRUE (r0 == irange (UCHAR (0), UCHAR (255)));
+ r0.cast (integer_type_node);
+ // Going to a wider range should not sign extend.
+ ASSERT_TRUE (r0 == irange (INT (0), INT (255)));
+
+ // (unsigned char)[-350,15] => [0,255].
+ r0 = irange (INT (-350), INT (15));
+ r0.cast (unsigned_char_type_node);
+ ASSERT_TRUE (r0 == irange (TYPE_MIN_VALUE (unsigned_char_type_node),
+ TYPE_MAX_VALUE (unsigned_char_type_node)));
+
+ // Casting [-120,20] from signed char to unsigned short.
+ // => [0, 20][0xff88, 0xffff].
+ r0 = irange (SCHAR (-120), SCHAR (20));
+ r0.cast (short_unsigned_type_node);
+ r1 = irange (UINT16 (0), UINT16 (20));
+ r2 = irange (UINT16 (0xff88), UINT16 (0xffff));
+ r1.union_ (r2);
+ ASSERT_TRUE (r0 == r1);
+ // A truncating cast back to signed char will work because [-120, 20]
+ // is representable in signed char.
+ r0.cast (signed_char_type_node);
+ ASSERT_TRUE (r0 == irange (SCHAR (-120), SCHAR (20)));
+
+ // unsigned char -> signed short
+ // (signed short)[(unsigned char)25, (unsigned char)250]
+ // => [(signed short)25, (signed short)250]
+ r0 = rold = irange (UCHAR (25), UCHAR (250));
+ r0.cast (short_integer_type_node);
+ r1 = irange (INT16 (25), INT16 (250));
+ ASSERT_TRUE (r0 == r1);
+ r0.cast (unsigned_char_type_node);
+ ASSERT_TRUE (r0 == rold);
+
+ // Test casting a wider signed [-MIN,MAX] to a narrower unsigned.
+ r0 = irange (TYPE_MIN_VALUE (long_long_integer_type_node),
+ TYPE_MAX_VALUE (long_long_integer_type_node));
+ r0.cast (short_unsigned_type_node);
+ r1 = irange (TYPE_MIN_VALUE (short_unsigned_type_node),
+ TYPE_MAX_VALUE (short_unsigned_type_node));
+ ASSERT_TRUE (r0 == r1);
+
+ // Test that casting a range with m_max_pairs sub-ranges that
+ // changes sign is handled conservatively.
+ //
+ // (unsigned short)[-5,5][20,30][40,50]...
+ // => (unsigned short)[-5,50]
+ // => [0,50][65531,65535]
+ r0 = irange (INT16 (-5), INT16 (5));
+ gcc_assert (irange::m_max_pairs * 2 * 10 + 10 < 32767);
+ unsigned i;
+ for (i = 2; i < irange::m_max_pairs * 2; i += 2)
+ {
+ r1 = irange (INT16 (i * 10), INT16 (i * 10 + 10));
+ r0.union_ (r1);
+ }
+ r0.cast(short_unsigned_type_node);
+ r1 = irange (UINT16 (0), UINT16 ((i - 2) * 10 + 10));
+ r2 = irange (UINT16 (65531), UINT16 (65535));
+ r1.union_ (r2);
+ ASSERT_TRUE (r0 == r1);
+
+ // NOT([10,20]) ==> [-MIN,9][21,MAX].
+ r0 = r1 = irange (INT (10), INT (20));
+ r2 = irange (minint, INT(9));
+ r2.union_ (irange (INT(21), maxint));
+ ASSERT_FALSE (r2.undefined_p ());
+ r1.invert ();
+ ASSERT_TRUE (r1 == r2);
+ // Test that NOT(NOT(x)) == x.
+ r2.invert ();
+ ASSERT_TRUE (r0 == r2);
+
+ // NOT([-MIN,+MAX]) is the empty set, represented as undefined.
+ r0 = irange (minint, maxint);
+ r0.invert ();
+ ASSERT_TRUE (r0.undefined_p ());
+ r1.set_undefined ();
+ ASSERT_TRUE (r0 == r1);
+
+ // Test that booleans and their inverse work as expected.
+ r0 = range_zero (boolean_type_node);
+ ASSERT_TRUE (r0 == irange (build_zero_cst (boolean_type_node),
+ build_zero_cst (boolean_type_node)));
+ r0.invert();
+ ASSERT_TRUE (r0 == irange (build_one_cst (boolean_type_node),
+ build_one_cst (boolean_type_node)));
+
+ // Casting NONZERO to a narrower type will wrap/overflow so
+ // it's just the entire range for the narrower type.
+ //
+ // "NOT 0 at signed 32-bits" ==> [-MIN_32,-1][1, +MAX_32]. This is
+ // is outside of the range of a smaller range, return the full
+ // smaller range.
+ r0 = range_nonzero (integer_type_node);
+ r0.cast (short_integer_type_node);
+ r1 = irange (TYPE_MIN_VALUE (short_integer_type_node),
+ TYPE_MAX_VALUE (short_integer_type_node));
+ ASSERT_TRUE (r0 == r1);
+
+ // Casting NONZERO from a narrower signed to a wider signed.
+ //
+ // NONZERO signed 16-bits is [-MIN_16,-1][1, +MAX_16].
+ // Converting this to 32-bits signed is [-MIN_16,-1][1, +MAX_16].
+ r0 = range_nonzero (short_integer_type_node);
+ r0.cast (integer_type_node);
+ r1 = irange (INT (-32768), INT (-1));
+ r2 = irange (INT (1), INT (32767));
+ r1.union_ (r2);
+ ASSERT_TRUE (r0 == r1);
+
+ if (irange::m_max_pairs > 2)
+ {
+ // ([10,20] U [5,8]) U [1,3] ==> [1,3][5,8][10,20].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (5), INT (8));
+ r0.union_ (r1);
+ r1 = irange (INT (1), INT (3));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == RANGE3 (1, 3, 5, 8, 10, 20));
+
+ // [1,3][5,8][10,20] U [-5,0] => [-5,3][5,8][10,20].
+ r1 = irange (INT (-5), INT (0));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == RANGE3 (-5, 3, 5, 8, 10, 20));
+ }
+
+ // [10,20] U [30,40] ==> [10,20][30,40].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (30), INT (40));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (10), INT (20)),
+ irange (INT (30), INT (40))));
+ if (irange::m_max_pairs > 2)
+ {
+ // [10,20][30,40] U [50,60] ==> [10,20][30,40][50,60].
+ r1 = irange (INT (50), INT (60));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 60));
+ // [10,20][30,40][50,60] U [70, 80] ==> [10,20][30,40][50,60][70,80].
+ r1 = irange (INT (70), INT (80));
+ r0.union_ (r1);
+
+ r2 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r2.union_ (irange (INT (70), INT (80)));
+ ASSERT_TRUE (r0 == r2);
+ }
+
+ // Make sure NULL and non-NULL of pointer types work, and that
+ // inverses of them are consistent.
+ tree voidp = build_pointer_type (void_type_node);
+ r0 = range_zero (voidp);
+ r1 = r0;
+ r0.invert ();
+ r0.invert ();
+ ASSERT_TRUE (r0 == r1);
+
+ if (irange::m_max_pairs > 2)
+ {
+ // [10,20][30,40][50,60] U [6,35] => [6,40][50,60].
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (6), INT (35));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (6), INT (40)),
+ irange (INT (50), INT (60))));
+
+ // [10,20][30,40][50,60] U [6,60] => [6,60] */
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (6), INT (60));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (6), INT (60)));
+
+ // [10,20][30,40][50,60] U [6,70] => [6,70].
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (6), INT (70));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (6), INT (70)));
+
+ // [10,20][30,40][50,60] U [35,70] => [10,20][30,70].
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (35), INT (70));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (10), INT (20)),
+ irange (INT (30), INT (70))));
+ }
+
+ // [10,20][30,40] U [25,70] => [10,70].
+ r0 = range_union (irange (INT (10), INT (20)),
+ irange (INT (30), INT (40)));
+ r1 = irange (INT (25), INT (70));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (10), INT (20)),
+ irange (INT (25), INT (70))));
+
+ if (irange::m_max_pairs > 2)
+ {
+ // [10,20][30,40][50,60] U [15,35] => [10,40][50,60].
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (15), INT (35));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (10), INT (40)),
+ irange (INT (50), INT (60))));
+ }
+
+ // [10,20] U [15, 30] => [10, 30].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (15), INT (30));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (10), INT (30)));
+
+ // [10,20] U [25,25] => [10,20][25,25].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (25), INT (25));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == range_union (irange (INT (10), INT (20)),
+ irange (INT (25), INT (25))));
+
+ if (irange::m_max_pairs > 2)
+ {
+ // [10,20][30,40][50,60] U [35,35] => [10,20][30,40][50,60].
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = irange (INT (35), INT (35));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 60));
+ }
+
+ // [15,40] U [] => [15,40].
+ r0 = irange (INT (15), INT (40));
+ r1.set_undefined ();
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (15), INT (40)));
+
+ // [10,20] U [10,10] => [10,20].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (10), INT (10));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (10), INT (20)));
+
+ // [10,20] U [9,9] => [9,20].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (9), INT (9));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == irange (INT (9), INT (20)));
+
+ if (irange::m_max_pairs > 2)
+ {
+ // [10,10][12,12][20,100] ^ [15,200].
+ r0 = RANGE3 (10, 10, 12, 12, 20, 100);
+ r1 = irange (INT (15), INT (200));
+ r0.intersect (r1);
+ ASSERT_TRUE (r0 == irange (INT (20), INT (100)));
+
+ // [10,20][30,40][50,60] ^ [15,25][38,51][55,70]
+ // => [15,20][38,40][50,51][55,60]
+ r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+ r1 = RANGE3 (15, 25, 38, 51, 55, 70);
+ r0.intersect (r1);
+ if (irange::m_max_pairs == 3)
+ {
+ // When pairs==3, we don't have enough space, so
+ // conservatively handle things. Thus, the ...[50,60].
+ ASSERT_TRUE (r0 == RANGE3 (15, 20, 38, 40, 50, 60));
+ }
+ else
+ {
+ r2 = RANGE3 (15, 20, 38, 40, 50, 51);
+ r2.union_ (irange (INT (55), INT (60)));
+ ASSERT_TRUE (r0 == r2);
+ }
+
+ // [15,20][30,40][50,60] ^ [15,35][40,90][100,200]
+ // => [15,20][30,35][40,60]
+ r0 = RANGE3 (15, 20, 30, 40, 50, 60);
+ r1 = RANGE3 (15, 35, 40, 90, 100, 200);
+ r0.intersect (r1);
+ if (irange::m_max_pairs == 3)
+ {
+ // When pairs==3, we don't have enough space, so
+ // conservatively handle things.
+ ASSERT_TRUE (r0 == RANGE3 (15, 20, 30, 35, 40, 60));
+ }
+ else
+ {
+ r2 = RANGE3 (15, 20, 30, 35, 40, 40);
+ r2.union_ (irange (INT (50), INT (60)));
+ ASSERT_TRUE (r0 == r2);
+ }
+
+ // Test cases where a union inserts a sub-range inside a larger
+ // range.
+ //
+ // [8,10][135,255] U [14,14] => [8,10][14,14][135,255]
+ r0 = range_union (irange (INT (8), INT (10)),
+ irange (INT (135), INT (255)));
+ r1 = irange (INT (14), INT (14));
+ r0.union_ (r1);
+ ASSERT_TRUE (r0 == RANGE3 (8, 10, 14, 14, 135, 255));
+ }
+
+ // [10,20] ^ [15,30] => [15,20].
+ r0 = irange (INT (10), INT (20));
+ r1 = irange (INT (15), INT (30));
+ r0.intersect (r1);
+ ASSERT_TRUE (r0 == irange (INT (15), INT (20)));
+
+ // [10,20][30,40] ^ [40,50] => [40,40].
+ r0 = range_union (irange (INT (10), INT (20)),
+ irange (INT (30), INT (40)));
+ r1 = irange (INT (40), INT (50));
+ r0.intersect (r1);
+ ASSERT_TRUE (r0 == irange (INT (40), INT (40)));
+
+ // Test non-destructive intersection.
+ r0 = rold = irange (INT (10), INT (20));
+ ASSERT_FALSE (range_intersect (r0, irange (INT (15),
+ INT (30))).undefined_p ());
+ ASSERT_TRUE (r0 == rold);
+
+ // Test the internal sanity of wide_int's wrt HWIs.
+ ASSERT_TRUE (wi::max_value (TYPE_PRECISION (boolean_type_node),
+ TYPE_SIGN (boolean_type_node))
+ == wi::uhwi (1, TYPE_PRECISION (boolean_type_node)));
+
+ // Test irange_storage.
+ r0 = irange (INT (5), INT (10));
+ irange_storage *stow = irange_storage::alloc (r0);
+ r1 = irange (integer_type_node, stow);
+ ASSERT_TRUE (r0 == r1);
+
+ // Test irange_storage with signed 1-bit fields.
+ tree s1bit_type = make_signed_type (1);
+ r0 = irange (build_int_cst (s1bit_type, -1), build_int_cst (s1bit_type, 0));
+ stow = irange_storage::alloc (r0);
+ r1 = irange (s1bit_type, stow);
+ ASSERT_TRUE (r0 == r1);
+
+ // Test zero_p().
+ r0 = irange (INT (0), INT (0));
+ ASSERT_TRUE (r0.zero_p ());
+
+ // Test nonzero_p().
+ r0 = irange (INT (0), INT (0));
+ r0.invert ();
+ ASSERT_TRUE (r0.nonzero_p ());
+
+ // Test irange / value_range conversion functions.
+ r0 = irange (VR_ANTI_RANGE, INT (10), INT (20));
+ value_range_base vr = r0;
+ ASSERT_TRUE (vr.kind () == VR_ANTI_RANGE);
+ ASSERT_TRUE (wi::eq_p (10, wi::to_wide (vr.min ()))
+ && wi::eq_p (20, wi::to_wide (vr.max ())));
+ r1 = vr;
+ ASSERT_TRUE (r0 == r1);
+}
+#endif // CHECKING_P
+
+#if USE_IRANGE
+// Standalone irange implementation.
+
+// Subtract 1 from X and set OVERFLOW if the operation overflows.
+
+static wide_int inline
+subtract_one (const wide_int &x, tree type, wi::overflow_type &overflow)
+{
+ // A signed 1-bit bit-field has a range of [-1,0], so subtracting
+ // +1 overflows, since +1 is unrepresentable. This is why we add
+ // -1 instead.
+ if (TYPE_SIGN (type) == SIGNED)
+ return wi::add (x, -1 , SIGNED, &overflow);
+ else
+ return wi::sub (x, 1, UNSIGNED, &overflow);
+}
+
+// Set range from wide ints.
+
+void
+irange::init (tree type, const wide_int &lbound, const wide_int &ubound,
+ value_range_kind rt)
+{
+ if (rt == VR_UNDEFINED)
+ {
+ set_undefined (type);
+ return;
+ }
+ if (rt == VR_VARYING)
+ {
+ set_varying (type);
+ return;
+ }
+ gcc_checking_assert (irange::supports_type_p (type));
+ gcc_checking_assert (TYPE_PRECISION (type) == lbound.get_precision ());
+ gcc_checking_assert (lbound.get_precision () == ubound.get_precision ());
+ m_type = type;
+ gcc_checking_assert (wi::le_p (lbound, ubound, TYPE_SIGN (type)));
+ if (rt == VR_ANTI_RANGE)
+ {
+ // Calculate INVERSE([I,J]) as [-MIN, I-1][J+1, +MAX].
+ wi::overflow_type ovf;
+ m_nitems = 0;
+ wide_int min = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+ wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+
+ // If we will overflow, don't bother. This will handle unsigned
+ // underflow which doesn't set the overflow bit.
+ if (lbound != min)
+ {
+ m_bounds[m_nitems++] = min;
+ m_bounds[m_nitems++] = subtract_one (lbound, type, ovf);
+ if (ovf)
+ m_nitems = 0;
+ }
+ // If we will overflow, don't bother. This will handle unsigned
+ // overflow which doesn't set the overflow bit.
+ if (ubound != max)
+ {
+ m_bounds[m_nitems++] = wi::add (ubound, 1, TYPE_SIGN (type), &ovf);
+ if (ovf)
+ m_nitems--;
+ else
+ m_bounds[m_nitems++] = max;
+ }
+ // If we get here with m_nitems==0, it means we tried to calculate
+ // the inverse of [-MIN, +MAX], which is actually the empty set;
+ // m_nitems==0 maps nicely to the empty set.
+ }
+ else
+ {
+ m_bounds[0] = lbound;
+ m_bounds[1] = ubound;
+ m_nitems = 2;
+ }
+}
+
+irange::irange (tree type)
+{
+ set_varying (type);
+}
+
+irange::irange (value_range_kind rt, tree type,
+ const wide_int &lbound, const wide_int &ubound)
+{
+ init (type, lbound, ubound, rt);
+}
+
+irange::irange (tree type, const wide_int &lbound, const wide_int &ubound)
+{
+ init (type, lbound, ubound, VR_RANGE);
+}
+
+irange::irange (value_range_kind rt, tree lbound, tree ubound)
+{
+ gcc_checking_assert (rt == VR_RANGE || rt == VR_ANTI_RANGE);
+ tree type = TREE_TYPE (lbound);
+ init (type, wi::to_wide (lbound), wi::to_wide (ubound), rt);
+}
+
+irange::irange (tree lbound, tree ubound)
+{
+ tree type = TREE_TYPE (lbound);
+ init (type, wi::to_wide (lbound), wi::to_wide (ubound), VR_RANGE);
+}
+
+// Mark pair [i, j] as empty. This is done by building a nonsensical pair.
+
+void
+irange_storage::set_empty_pair (unsigned i, unsigned j, tree type)
+{
+ unsigned precision = trailing_bounds[0].get_precision ();
+ if (precision == 1 && TYPE_SIGN (type) == SIGNED)
+ {
+ // For signed 1-bit types, we can't use [1, 0] as the nonsensical
+ // pair, since it maps to [-1, 0], which is valid.
+ // In this case, use [0, 1], which is invalid for such a type.
+ trailing_bounds[i] = wi::zero (precision);
+ trailing_bounds[j] = wi::one (precision);
+ }
+ else
+ {
+ // For almost all types, we mark empty ranges with the
+ // nonsensical range [1, 0].
+ trailing_bounds[i] = wi::one (precision);
+ trailing_bounds[j] = wi::zero (precision);
+ }
+}
+
+irange::irange (tree type, const irange_storage *storage)
+{
+ m_type = type;
+ m_nitems = 0;
+ unsigned i = 0;
+ unsigned precision = wi::get_precision (storage->trailing_bounds[0]);
+ gcc_checking_assert (precision == TYPE_PRECISION (type));
+ while (i < m_max_pairs * 2)
+ {
+ if (storage->empty_pair_p (i, i + 1, type))
+ break;
+ m_bounds[i] = storage->trailing_bounds[i];
+ m_bounds[i + 1] = storage->trailing_bounds[i + 1];
+ i += 2;
+ }
+ m_nitems = i;
+}
+
+// Set range from the full domain of TYPE.
+
+void
+irange::set_varying (tree type)
+{
+ wide_int min = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+ wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+ init (type, min, max);
+}
+
+bool
+irange::valid_p () const
+{
+ if (m_type == NULL
+ || m_nitems % 2
+ || m_nitems > m_max_pairs * 2)
+ return false;
+
+ if (undefined_p ())
+ return true;
+
+ // Check that the bounds are in the right order.
+ // So for [a,b][c,d][e,f] we must have: a <= b < c <= d < e <= f.
+ if (wi::gt_p (lower_bound (0), upper_bound (0), TYPE_SIGN (m_type)))
+ return false;
+ for (unsigned p = 1; p < num_pairs (); ++p)
+ {
+ if (wi::le_p (lower_bound (p), upper_bound (p - 1), TYPE_SIGN (m_type)))
+ return false;
+ if (wi::gt_p (lower_bound (p), upper_bound (p), TYPE_SIGN (m_type)))
+ return false;
+ }
+ return true;
+}
+
+void
+irange::check () const
+{
+ gcc_assert (valid_p ());
+}
+
+// Convert the current range in place into a range of type NEW_TYPE.
+
+void
+irange::cast (tree new_type)
+{
+ // If the expression involves a pointer, we are only interested in
+ // determining if it evaluates to NULL [0, 0] or non-NULL (~[0, 0]).
+ if (POINTER_TYPE_P (new_type) || POINTER_TYPE_P (m_type))
+ {
+ if (!contains_p (wi::zero (TYPE_PRECISION (m_type))))
+ {
+ // Don't use range_nonzero because it will recurse into cast().
+ unsigned prec = TYPE_PRECISION (new_type);
+ irange nz (VR_ANTI_RANGE, new_type,
+ wi::zero (prec), wi::zero (prec));
+ *this = nz;
+ }
+ else if (zero_p ())
+ *this = range_zero (new_type);
+ else
+ set_varying (new_type);
+ return;
+ }
+
+ // If nothing changed, this is a simple type conversion between two
+ // variants of the same type.
+ bool sign_change = TYPE_SIGN (new_type) != TYPE_SIGN (m_type);
+ unsigned old_precision = TYPE_PRECISION (m_type);
+ unsigned new_precision = TYPE_PRECISION (new_type);
+ signop old_sign = TYPE_SIGN (m_type);
+ signop new_sign = TYPE_SIGN (new_type);
+ if (undefined_p () || (!sign_change && old_precision == new_precision))
+ {
+ m_type = new_type;
+ return;
+ }
+
+ wide_int orig_lowest = lower_bound ();
+ wide_int orig_highest = upper_bound ();
+ wide_int new_type_min = wi::min_value (new_precision, new_sign);
+ wide_int new_type_max = wi::max_value (new_precision, new_sign);
+ unsigned n = m_nitems;
+ for (unsigned i = 0; i < n; i += 2)
+ {
+ // If this sub-range doesn't fit in the new range type, bail.
+ wide_int new_min, new_max;
+ if (!wide_int_range_convert (new_min, new_max,
+ old_sign, old_precision,
+ new_sign, new_precision,
+ m_bounds[i], m_bounds[i + 1]))
+ {
+ m_type = new_type;
+ m_nitems = 2;
+ m_bounds[0] = new_type_min;
+ m_bounds[1] = new_type_max;
+ return;
+ }
+ // If the new bounds are in the right order, we can do a
+ // straight up conversion.
+ if (wi::le_p (new_min, new_max, new_sign))
+ {
+ m_bounds[i] = new_min;
+ m_bounds[i + 1] = new_max;
+ }
+ // Otherwise, the bounds have wrapped and we must handle them
+ // specially as [-MIN,Y][X,MAX].
+ else
+ {
+ // For one-bit precision, the swapped range covers all values.
+ if (TYPE_PRECISION (new_type) == 1)
+ {
+ set_varying (new_type);
+ return;
+ }
+ // If we're about to go over the maximum number of ranges,
+ // convert to something conservative and cast again.
+ if (m_nitems >= m_max_pairs * 2)
+ {
+ m_nitems = 2;
+ m_bounds[0] = orig_lowest;
+ m_bounds[1] = orig_highest;
+ cast (new_type);
+ return;
+ }
+ // Handle wrapping of [X,Y] as [-MIN,Y][X,MAX].
+ m_bounds[i] = new_type_min;
+ m_bounds[i + 1] = new_max;
+ // If we're about to construct [-MIN, MAX], no sense
+ // calculating anything else.
+ if (m_bounds[i + 1] == new_type_max)
+ {
+ m_nitems = 2;
+ m_type = new_type;
+ m_bounds[0] = new_type_min;
+ m_bounds[1] = new_type_max;
+ return;
+ }
+ m_bounds[m_nitems++] = new_min;
+ m_bounds[m_nitems++] = new_type_max;
+ }
+ }
+ m_type = new_type;
+ canonicalize ();
+}
+
+// Return TRUE if the current range contains wide-int ELEMENT.
+
+bool
+irange::contains_p (const wide_int &element) const
+{
+ for (unsigned p = 0; p < num_pairs (); ++p)
+ if (wi::ge_p (element, lower_bound (p), TYPE_SIGN (m_type))
+ && wi::le_p (element, upper_bound (p), TYPE_SIGN (m_type)))
+ return true;
+ return false;
+}
+
+// Return TRUE if the current range contains tree ELEMENT.
+
+bool
+irange::contains_p (tree element) const
+{
+ return contains_p (wi::to_wide (element));
+}
+
+// Remove PAIR.
+
+void
+irange::remove_pair (unsigned pair)
+{
+ unsigned i = pair * 2;
+ unsigned j = i + 1;
+ gcc_checking_assert (i < m_nitems && i < j);
+ unsigned dst = i;
+ unsigned ndeleted = j - i + 1;
+ for (++j; j < m_nitems; ++j)
+ m_bounds[dst++] = m_bounds[j];
+ m_nitems -= ndeleted;
+}
+
+// Canonicalize the current range.
+
+void
+irange::canonicalize ()
+{
+ if (undefined_p ())
+ return;
+
+ // Fix any out of order ranges: [10,20][-5,5] into [-5,5][10,20].
+ signop sign = TYPE_SIGN (m_type);
+ for (unsigned p = 0; p < num_pairs (); ++p)
+ for (unsigned q = p; q < num_pairs (); ++q)
+ if (wi::gt_p (lower_bound (p), lower_bound (q), sign))
+ {
+ wide_int t1 = lower_bound (p);
+ wide_int t2 = upper_bound (p);
+ set_lower_bound (p, lower_bound (q));
+ set_upper_bound (p, upper_bound (q));
+ set_lower_bound (q, t1);
+ set_upper_bound (q, t2);
+ }
+ // Merge sub-ranges when appropriate.
+ for (unsigned p = 0; p < num_pairs () - 1; )
+ {
+ // Merge edges that touch:
+ // [9,10][11,20] => [9,20]
+ // [9,10][10,20] => [9,20].
+ wi::overflow_type ovf;
+ if (upper_bound (p) == lower_bound (p + 1)
+ || (wi::add (upper_bound (p), 1, sign, &ovf) == lower_bound (p + 1)
+ && !ovf))
+ {
+ set_upper_bound (p, upper_bound (p + 1));
+ remove_pair (p + 1);
+ }
+ // Merge pairs that bleed into each other:
+ // [10,20][11,18] => [10,20]
+ // [10,20][15,30] => [10,30]
+ else if (wi::le_p (lower_bound (p), lower_bound (p + 1), sign)
+ && wi::ge_p (upper_bound (p), lower_bound (p + 1), sign))
+ {
+ set_upper_bound (p, wi::max (upper_bound (p), upper_bound (p + 1), sign));
+ remove_pair (p + 1);
+ }
+ else
+ ++p;
+ }
+ if (flag_checking)
+ check ();
+}
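The sort-then-merge behavior of canonicalize can be sketched outside of GCC over plain int pairs. This is an illustrative sketch only, not patch code: the container choice and the helper name `canonicalize_pairs` are assumptions, and it ignores the wide_int overflow handling the real code does on the +1.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sketch (not GCC code) of irange::canonicalize: sort
// sub-ranges by lower bound, then merge neighbors that touch or
// overlap, e.g. [10,20][-5,5] => [-5,5][10,20], [9,10][11,20] => [9,20].
static std::vector<std::pair<int, int>>
canonicalize_pairs (std::vector<std::pair<int, int>> v)
{
  std::sort (v.begin (), v.end ());
  std::vector<std::pair<int, int>> out;
  for (const std::pair<int, int> &p : v)
    {
      if (!out.empty () && p.first <= out.back ().second + 1)
	// Touching or overlapping: extend the previous sub-range.
	out.back ().second = std::max (out.back ().second, p.second);
      else
	out.push_back (p);
    }
  return out;
}
```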
+
+// THIS = THIS U R
+
+void
+irange::union_ (const irange &r)
+{
+ gcc_checking_assert (range_compatible_p (m_type, r.m_type));
+
+ if (undefined_p ())
+ {
+ *this = r;
+ return;
+ }
+ else if (r.undefined_p ())
+ return;
+
+ // Do not worry about merging and such by reserving twice as many
+ // pairs as needed, and then simply sort the 2 ranges into this
+ // intermediate form.
+ //
+  // The intermediate result will have the property that the beginning
+  // of each range is <= the beginning of the next range.  There may
+  // be overlapping ranges at this point.  For example, this would be
+  // valid: [-20, 10], [-10, 0], [0, 20], [40, 90], as it satisfies the
+  // constraint -20 <= -10 <= 0 <= 40.  The merge is performed when the
+  // result is rebuilt from this intermediate form.
+ //
+ // [Xi,Yi]..[Xn,Yn] U [Xj,Yj]..[Xm,Ym] --> [Xk,Yk]..[Xp,Yp]
+ signop sign = TYPE_SIGN (m_type);
+ wide_int res[m_max_pairs * 4];
+  wide_int u1;
+ wi::overflow_type ovf;
+ unsigned i = 0, j = 0, k = 0;
+
+ while (i < m_nitems && j < r.m_nitems)
+ {
+ // lower of Xi and Xj is the lowest point.
+ if (wi::le_p (m_bounds[i], r.m_bounds[j], sign))
+ {
+ res[k++] = m_bounds[i];
+ res[k++] = m_bounds[i + 1];
+ i += 2;
+ }
+ else
+ {
+ res[k++] = r.m_bounds[j];
+ res[k++] = r.m_bounds[j + 1];
+ j += 2;
+ }
+ }
+ for ( ; i < m_nitems; i += 2)
+ {
+ res[k++] = m_bounds[i];
+ res[k++] = m_bounds[i + 1];
+ }
+ for ( ; j < r.m_nitems; j += 2)
+ {
+ res[k++] = r.m_bounds[j];
+ res[k++] = r.m_bounds[j + 1];
+ }
+
+ // Now normalize the vector removing any overlaps.
+ i = 2;
+ int prec = TYPE_PRECISION (m_type);
+ wide_int max_val = wi::max_value (prec, sign);
+ for (j = 2; j < k ; j += 2)
+ {
+ if (res[i - 1] == max_val)
+ break;
+ u1 = wi::add (res[i - 1], 1, sign, &ovf);
+
+      // Overflow indicates we are already at MAX.
+      // A wide-int bug makes the preceding max_val check necessary;
+      // triggered by gcc.c-torture/compile/pr80443.c with -O3.
+ if (ovf)
+ break;
+
+      // If the current upper bound + 1 >= the lower bound of the next
+      // pair, merge the two ranges.
+ if (wi::ge_p (u1, res[j], sign))
+ {
+	  // The new upper bound is the greater of the current one and the next.
+ if (wi::gt_p (res[j + 1], res [i - 1], sign))
+ res [i - 1] = res[j + 1];
+ }
+ else
+ {
+ // This is a new distinct range, but no point in copying it
+ // if it is already in the right place.
+ if (i != j)
+ {
+ res[i++] = res[j];
+ res[i++] = res[j + 1];
+ }
+ else
+ i += 2;
+
+ }
+ }
+
+  // At this point, the vector should contain i bounds (i/2
+  // non-overlapping sub-ranges).  Now it simply needs to be copied,
+  // and if there are too many sub-ranges, some must be merged.  We
+  // won't do any analysis as to which merges are "best"; we simply
+  // combine the final ranges into one.
+ if (i > m_max_pairs * 2)
+ {
+ res[m_max_pairs * 2 - 1] = res[i - 1];
+ i = m_max_pairs * 2;
+ }
+
+ for (j = 0; j < i ; j++)
+ m_bounds[j] = res [j];
+ m_nitems = i;
+
+ if (flag_checking)
+ check ();
+}
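The merge-then-normalize scheme above can be sketched outside of GCC with two already-canonical pair vectors. Illustrative only: the name `union_pairs` is an assumption, and unlike the real code there is no fixed pair limit and no overflow checking on the +1.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <utility>
#include <vector>

// Illustrative sketch (not GCC code) of irange::union_: merge the two
// sorted pair vectors by lower bound into the intermediate form the
// comment describes, then collapse touching or overlapping neighbors.
static std::vector<std::pair<int, int>>
union_pairs (const std::vector<std::pair<int, int>> &a,
	     const std::vector<std::pair<int, int>> &b)
{
  std::vector<std::pair<int, int>> merged;
  std::merge (a.begin (), a.end (), b.begin (), b.end (),
	      std::back_inserter (merged));
  std::vector<std::pair<int, int>> out;
  for (const std::pair<int, int> &p : merged)
    {
      if (!out.empty () && p.first <= out.back ().second + 1)
	out.back ().second = std::max (out.back ().second, p.second);
      else
	out.push_back (p);
    }
  return out;
}
```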
+
+// THIS = THIS ^ [X,Y].
+
+void
+irange::intersect (const wide_int &x, const wide_int &y)
+{
+ unsigned pos = 0;
+
+ for (unsigned i = 0; i < m_nitems; i += 2)
+ {
+ signop sign = TYPE_SIGN (m_type);
+ wide_int newlo = wi::max (m_bounds[i], x, sign);
+ wide_int newhi = wi::min (m_bounds[i + 1], y, sign);
+ if (wi::gt_p (newlo, newhi, sign))
+ {
+ // If the new sub-range doesn't make sense, it's an
+ // impossible range and must be kept out of the result.
+ }
+ else
+ {
+ m_bounds[pos++] = newlo;
+ m_bounds[pos++] = newhi;
+ }
+ }
+ m_nitems = pos;
+ if (flag_checking)
+ check ();
+}
+
+// THIS = THIS ^ R.
+
+void
+irange::intersect (const irange &r)
+{
+ gcc_checking_assert (range_compatible_p (m_type, r.m_type));
+ irange orig_range (*this);
+
+ // Intersection with an empty range is an empty range.
+ set_undefined ();
+ if (orig_range.undefined_p () || r.undefined_p ())
+ return;
+
+ // The general algorithm is as follows.
+ //
+ // Intersect each sub-range of R with all of ORIG_RANGE one at a time, and
+ // join/union the results of these intersections together. I.e:
+ //
+  //    [10,20][30,40][50,60] ^ [15,25][38,41][55,70]
+  //
+  // Step 1: [10,20][30,40][50,60] ^ [15,25] => [15,20]
+  // Step 2: [10,20][30,40][50,60] ^ [38,41] => [38,40]
+  // Step 3: [10,20][30,40][50,60] ^ [55,70] => [55,60]
+  // Final: [15,20] U [38,40] U [55,60] => [15,20][38,40][55,60]
+ for (unsigned i = 0; i < r.m_nitems; i += 2)
+ {
+ irange tmp (orig_range);
+ tmp.intersect (r.m_bounds[i], r.m_bounds[i + 1]);
+ union_ (tmp);
+ }
+ // There is no check here because the calls to union_ above would
+ // have verified sanity.
+}
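The step-by-step scheme above can be sketched outside of GCC as a clip-and-collect loop. Illustrative only: the real code reuses the wide_int intersect and union_ members rather than clipping pairs directly, and `intersect_pairs` is an assumed name.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sketch (not GCC code) of the intersection algorithm:
// clip every sub-range of A against each sub-range of B, keep the
// non-empty results, and sort them into canonical order.
static std::vector<std::pair<int, int>>
intersect_pairs (const std::vector<std::pair<int, int>> &a,
		 const std::vector<std::pair<int, int>> &b)
{
  std::vector<std::pair<int, int>> out;
  for (const std::pair<int, int> &r : b)
    for (const std::pair<int, int> &p : a)
      {
	int lo = std::max (p.first, r.first);
	int hi = std::min (p.second, r.second);
	if (lo <= hi)	// Drop impossible (empty) sub-ranges.
	  out.push_back ({lo, hi});
      }
  std::sort (out.begin (), out.end ());
  return out;
}
```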
+
+// Set THIS to the inverse of its range.
+
+void
+irange::invert ()
+{
+ // We always need one more set of bounds to represent an inverse, so
+ // if we're at the limit, we can't properly represent things.
+ //
+ // For instance, to represent the inverse of a 2 sub-range set
+ // [5, 10][20, 30], we would need a 3 sub-range set
+ // [-MIN, 4][11, 19][31, MAX].
+ //
+ // In this case, return the most conservative thing.
+ //
+ // However, if any of the extremes of the range are -MIN/+MAX, we
+ // know we will not need an extra bound. For example:
+ //
+ // INVERT([-MIN,20][30,40]) => [21,29][41,+MAX]
+ // INVERT([-MIN,20][30,MAX]) => [21,29]
+ wide_int min = wi::min_value (TYPE_PRECISION (m_type), TYPE_SIGN (m_type));
+ wide_int max = wi::max_value (TYPE_PRECISION (m_type), TYPE_SIGN (m_type));
+  if (m_nitems == m_max_pairs * 2
+      && m_bounds[0] != min
+      && m_bounds[m_nitems - 1] != max)
+    {
+      m_bounds[0] = min;
+      m_bounds[1] = max;
+      m_nitems = 2;
+      return;
+    }
+
+ // The inverse of the empty set is the entire domain.
+ if (undefined_p ())
+ {
+ set_varying (m_type);
+ return;
+ }
+
+ // The algorithm is as follows. To calculate INVERT ([a,b][c,d]), we
+ // generate [-MIN, a-1][b+1, c-1][d+1, MAX].
+ //
+ // If there is an over/underflow in the calculation for any
+ // sub-range, we eliminate that subrange. This allows us to easily
+ // calculate INVERT([-MIN, 5]) with: [-MIN, -MIN-1][6, MAX]. And since
+ // we eliminate the underflow, only [6, MAX] remains.
+
+ unsigned i = 0;
+ wi::overflow_type ovf;
+
+ // Construct leftmost range.
+ irange orig_range (*this);
+ m_nitems = 0;
+ // If this is going to underflow on the MINUS 1, don't even bother
+ // checking. This also handles subtracting one from an unsigned 0,
+ // which doesn't set the underflow bit.
+ if (min != orig_range.m_bounds[i])
+ {
+ m_bounds[m_nitems++] = min;
+ m_bounds[m_nitems++] = subtract_one (orig_range.m_bounds[i],
+ m_type, ovf);
+ if (ovf)
+ m_nitems = 0;
+ }
+ i++;
+ // Construct middle ranges if applicable.
+ if (orig_range.m_nitems > 2)
+ {
+ unsigned j = i;
+ for (; j < (unsigned) (orig_range.m_nitems - 2); j += 2)
+ {
+ // The middle ranges cannot have MAX/MIN, so there's no need
+ // to check for unsigned overflow on the +1 and -1 here.
+ m_bounds[m_nitems++]
+ = wi::add (orig_range.m_bounds[j], 1, TYPE_SIGN (m_type), &ovf);
+ m_bounds[m_nitems++]
+ = subtract_one (orig_range.m_bounds[j + 1], m_type, ovf);
+ if (ovf)
+ m_nitems -= 2;
+ }
+ i = j;
+ }
+ // Construct rightmost range.
+ //
+ // However, if this will overflow on the PLUS 1, don't even bother.
+ // This also handles adding one to an unsigned MAX, which doesn't
+ // set the overflow bit.
+ if (max != orig_range.m_bounds[i])
+ {
+ m_bounds[m_nitems++]
+ = wi::add (orig_range.m_bounds[i], 1, TYPE_SIGN (m_type), &ovf);
+ m_bounds[m_nitems++] = max;
+ if (ovf)
+ m_nitems -= 2;
+ }
+
+ if (flag_checking)
+ check ();
+}
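The leftmost/middle/rightmost construction above can be sketched outside of GCC over a bounded int domain. Illustrative only: `invert_pairs` is an assumed name, the domain is passed explicitly, and dropping sub-ranges that fall off the domain stands in for the real code's overflow/underflow checks.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative sketch (not GCC code) of irange::invert over a domain
// [min,max]: INVERT([a,b][c,d]) = [min,a-1][b+1,c-1][d+1,max], with
// sub-ranges that would fall off the domain simply dropped.
static std::vector<std::pair<int, int>>
invert_pairs (const std::vector<std::pair<int, int>> &v, int min, int max)
{
  std::vector<std::pair<int, int>> out;
  int lo = min;
  for (const std::pair<int, int> &p : v)
    {
      if (p.first > lo)
	// Leftmost or middle gap: [lo, p.first - 1].
	out.push_back ({lo, p.first - 1});
      if (p.second == max)
	return out;	// Nothing remains to the right of MAX.
      lo = p.second + 1;
    }
  // Rightmost gap: [last upper bound + 1, max].
  out.push_back ({lo, max});
  return out;
}
```

Note that the empty input yields the full domain, matching the comment that the inverse of the empty set is the entire domain.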
+
+// Dump the current range onto BUFFER.
+
+void
+irange::dump (pretty_printer *buffer) const
+{
+ wide_int min = wi::min_value (TYPE_PRECISION (m_type), TYPE_SIGN (m_type));
+ wide_int max = wi::max_value (TYPE_PRECISION (m_type), TYPE_SIGN (m_type));
+ if (POINTER_TYPE_P (m_type) && nonzero_p ())
+ pp_string (buffer, "[ non-zero pointer ]");
+ else
+ for (unsigned i = 0; i < m_nitems; ++i)
+ {
+ if (i % 2 == 0)
+ pp_character (buffer, '[');
+
+ // Wide ints may be sign extended to the full extent of the
+ // underlying HWI storage, even if the precision we care about
+ // is smaller. Chop off the excess bits for prettier output.
+ signop sign = TYPE_UNSIGNED (m_type) ? UNSIGNED : SIGNED;
+ widest_int val = widest_int::from (m_bounds[i], sign);
+ val &= wi::mask<widest_int> (m_bounds[i].get_precision (), false);
+
+ if (i == 0
+ && INTEGRAL_TYPE_P (m_type)
+ && !TYPE_UNSIGNED (m_type)
+ && m_bounds[i] == min
+ && TYPE_PRECISION (m_type) != 1)
+ pp_string (buffer, "-INF");
+ else if (i + 1 == m_nitems
+ && INTEGRAL_TYPE_P (m_type)
+ && !TYPE_UNSIGNED (m_type)
+ && m_bounds[i] == max
+ && TYPE_PRECISION (m_type) != 1)
+ pp_string (buffer, "+INF");
+ else
+ {
+ if (val > 0xffff)
+ print_hex (val, pp_buffer (buffer)->digit_buffer);
+ else
+ print_dec (m_bounds[i], pp_buffer (buffer)->digit_buffer, sign);
+ pp_string (buffer, pp_buffer (buffer)->digit_buffer);
+ }
+ if (i % 2 == 0)
+ pp_string (buffer, ", ");
+ else
+ pp_character (buffer, ']');
+ }
+ if (undefined_p ())
+ pp_string (buffer, "[]");
+
+ pp_character (buffer, ' ');
+ dump_generic_node (buffer, m_type, 0, TDF_NONE, false);
+ pp_flush (buffer);
+}
+
+// Dump the current range onto FILE F.
+
+void
+irange::dump (FILE *f) const
+{
+ pretty_printer buffer;
+ buffer.buffer->stream = f;
+ dump (&buffer);
+}
+
+// Like above but dump to STDERR.
+//
+// You'd think we could have a default parameter for dump(FILE),
+// but gdb currently doesn't do default parameters gracefully-- or at
+// all, and since this is a function we need to be callable from the
+// debugger...
+
+void
+irange::dump () const
+{
+ dump (stderr);
+}
+
+// Initialize the current irange_storage to the irange in IR.
+
+void
+irange_storage::set (const irange &ir)
+{
+ unsigned precision = TYPE_PRECISION (ir.type ());
+ trailing_bounds.set_precision (precision);
+ unsigned i;
+ for (i = 0; i < ir.num_pairs () * 2; ++i)
+ trailing_bounds[i] = ir.m_bounds[i];
+
+  // Mark the remaining, unused pairs as empty.
+ for (; i < irange::m_max_pairs * 2; i += 2)
+ set_empty_pair (i, i + 1, ir.type ());
+}
+
+// Update a previously initialized irange_storage to NEW_RANGE, iff the
+// precision of the present range is the same as the precision of
+// the new range. Return TRUE if update was successful.
+
+bool
+irange_storage::update (const irange &new_range)
+{
+ if (trailing_bounds.get_precision () == TYPE_PRECISION (new_range.type ()))
+ {
+ set (new_range);
+ return true;
+ }
+ return false;
+}
+
+// Return TRUE if range contains exactly one element and set RESULT to it.
+
+bool
+irange::singleton_p (tree *result) const
+{
+ if (num_pairs () == 1 && lower_bound (0) == upper_bound (0))
+ {
+ if (result)
+ *result = wide_int_to_tree (type (), lower_bound ());
+ return true;
+ }
+ return false;
+}
+
+// Convert irange to a value_range.
+
+value_range_base
+irange_to_value_range (const irange &r)
+{
+ value_range_base vr;
+ if (r.varying_p ())
+ {
+ vr.set_varying (r.type ());
+ return vr;
+ }
+ if (r.undefined_p ())
+ {
+ vr.set_undefined (r.type ());
+ return vr;
+ }
+ tree type = r.type ();
+ unsigned int precision = TYPE_PRECISION (type);
+ // Represent non-zero correctly.
+ if (TYPE_UNSIGNED (type)
+ && r.num_pairs () == 1
+ && r.lower_bound () == wi::uhwi (1, precision)
+ && r.upper_bound () == wi::max_value (precision, UNSIGNED)
+ // Do not get confused by booleans.
+ && TYPE_PRECISION (type) != 1)
+ vr = value_range (VR_ANTI_RANGE,
+ build_int_cst (type, 0), build_int_cst (type, 0));
+ // Represent anti-ranges.
+ else if ((r.num_pairs () == 2
+ || r.num_pairs () == 3)
+ // Do not get confused by booleans.
+ && TYPE_PRECISION (type) != 1
+ && r.lower_bound () == wi::min_value (precision, TYPE_SIGN (type))
+ && r.upper_bound () == wi::max_value (precision, TYPE_SIGN (type)))
+ {
+ irange tmp = r;
+ if (r.num_pairs () == 3)
+ {
+ // Hack to make up for the fact that we can compute finer
+ // grained ranges that VRP can only approximate with an
+ // anti-range. Attempt to reconstruct sub-ranges of the form:
+ //
+ // [0, 94][96, 127][0xff80, 0xffff] => ~[95,95]
+ // [0, 1][3, 0x7fffffff][0xff..80000000, 0xff..ff] => ~[2, 2].
+ //
+ // Merge the last two bounds.
+ tmp = irange (type, r.lower_bound (0), r.upper_bound (0));
+ tmp.union_ (irange (type, r.lower_bound (1), r.upper_bound ()));
+ }
+ tmp = range_invert (tmp);
+ vr = value_range (VR_ANTI_RANGE,
+ wide_int_to_tree (type, tmp.lower_bound ()),
+ wide_int_to_tree (type, tmp.upper_bound ()));
+ }
+ else
+ vr = value_range (VR_RANGE,
+ wide_int_to_tree (type, r.lower_bound ()),
+ wide_int_to_tree (type, r.upper_bound ()));
+ return vr;
+}
+
+// Convert a value_range into an irange.
+
+static irange
+value_range_to_irange (const value_range_base &vr)
+{
+ tree type = vr.type ();
+ if (vr.varying_p ())
+ return irange (type);
+ if (vr.undefined_p ())
+ {
+ irange r;
+ r.set_undefined (type);
+ return r;
+ }
+ return irange (vr.kind (), vr.min (), vr.max ());
+}
+
+irange::irange (const value_range_base &vr)
+{
+ *this = value_range_to_irange (vr);
+}
+
+#endif // USE_IRANGE
diff --git a/gcc/range.h b/gcc/range.h
new file mode 100644
index 00000000000..0a1bf2cd02f
--- /dev/null
+++ b/gcc/range.h
@@ -0,0 +1,353 @@
+/* Header file for high resolution range class. -*- C++ -*-
+ Copyright (C) 2017-2019 Free Software Foundation, Inc.
+ Contributed by Aldy Hernandez <al...@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+#ifndef GCC_RANGE_H
+#define GCC_RANGE_H
+
+// Set to one if irange is a standalone class, or zero if irange is
+// just value_range_base underneath.
+#define USE_IRANGE 0
+
+class value_range_base;
+
+#if USE_IRANGE
+// This is the standalone irange implementation.
+
+class irange_storage;
+
+// This is a class for working with ranges, currently integer ones.
+// With it you can specify a range of [5,10] or even ranges including
+// multi-part ranges [-10,5][30,40][50,60].
+//
+// Inverse ranges are represented as an actual range. For instance,
+// the inverse of 0 is [-MIN,-1][1,+MAX] for a signed integer.
+//
+// Methods are provided for intersecting and uniting ranges, as well
+// as converting between them. In performing any of these operations,
+// when no efficient way can be computed, we may default to a more
+// conservative range.
+//
+// For example, the inverse of [5,10][15,20][30,40] is actually
+// [-MIN,4][11,14][21,29][41,+MAX]. If this cannot be efficiently or
+// quickly computed, we may opt to represent the inverse as
+// [-MIN,4][41,+MAX] which is an equivalent conservative
+// representation.
+//
+// This class is not meant to live in long term storage.
+// Consequently, there are no GTY markers. For long term storage, use
+// the irange_storage class described later.
+
+class irange
+{
+ friend class irange_storage;
+ friend void range_tests ();
+
+ public:
+ irange ();
+ irange (tree type);
+ irange (value_range_kind, tree type, const wide_int &, const wide_int &);
+ irange (tree type, const wide_int &, const wide_int &);
+ irange (value_range_kind, tree, tree);
+ irange (tree, tree);
+ irange (tree type, const irange_storage *);
+ irange (const value_range_base &);
+
+ static bool supports_type_p (tree type);
+ static bool supports_ssa_p (tree ssa);
+ static bool supports_p (tree expr);
+
+ void set_varying (tree);
+ void set_undefined (tree = NULL);
+
+ unsigned num_pairs () const;
+ wide_int lower_bound (unsigned pair = 0) const;
+ wide_int upper_bound () const;
+ wide_int upper_bound (unsigned pair) const;
+
+ tree type () const { return m_type; }
+
+ void cast (tree type);
+
+ bool varying_p () const;
+ bool undefined_p () const;
+ bool zero_p () const;
+ bool nonzero_p () const;
+ // These two take a tree instead of a wide_int, for API
+ // compatibility with value_range.
+ bool singleton_p (tree * = NULL) const;
+ bool contains_p (tree) const;
+
+ bool operator== (const irange &r) const;
+ bool operator!= (const irange &r) const;
+
+ void union_ (const irange &r);
+ void intersect (const irange &r);
+ void invert ();
+
+ void dump () const;
+ void dump (FILE *) const;
+
+private:
+ void init (tree type, const wide_int &, const wide_int &,
+ value_range_kind = VR_RANGE);
+ void canonicalize ();
+ void set_lower_bound (unsigned pair, const wide_int &);
+ void set_upper_bound (unsigned pair, const wide_int &);
+ void remove_pair (unsigned pair);
+ void intersect (const wide_int &x, const wide_int &y);
+ bool contains_p (const wide_int &element) const;
+ void dump (pretty_printer *pp) const;
+ bool valid_p () const;
+ void check () const;
+
+ tree m_type;
+ unsigned char m_nitems;
+ static const unsigned int m_max_pairs = 3;
+ wide_int m_bounds[m_max_pairs * 2];
+}; // class irange
+
+inline
+irange::irange () : m_type (NULL), m_nitems (0)
+{
+}
+
+inline wide_int
+irange::lower_bound (unsigned pair) const
+{
+ return m_bounds[pair * 2];
+}
+
+inline wide_int
+irange::upper_bound () const
+{
+ return m_bounds[m_nitems - 1];
+}
+
+inline wide_int
+irange::upper_bound (unsigned pair) const
+{
+ return m_bounds[pair * 2 + 1];
+}
+
+inline void
+irange::set_lower_bound (unsigned pair, const wide_int &i)
+{
+  m_bounds[pair * 2] = i;
+}
+
+inline void
+irange::set_upper_bound (unsigned pair, const wide_int &i)
+{
+ m_bounds[pair * 2 + 1] = i;
+}
+
+inline unsigned
+irange::num_pairs () const
+{
+ return m_nitems / 2;
+}
+
+inline void
+irange::set_undefined (tree type)
+{
+ if (type)
+ m_type = type;
+ m_nitems = 0;
+}
+
+inline bool
+irange::undefined_p () const
+{
+ return !m_nitems;
+}
+
+inline bool
+irange::zero_p () const
+{
+ wide_int z = wi::zero (TYPE_PRECISION (m_type));
+ return (m_nitems == 2 && m_bounds[0] == z && m_bounds[1] == z);
+}
+
+inline bool
+irange::nonzero_p () const
+{
+ unsigned prec = TYPE_PRECISION (m_type);
+ return *this == irange (VR_ANTI_RANGE, m_type,
+ wi::zero (prec), wi::zero (prec));
+}
+
+// Return true if this range is the full range for its type.
+
+inline bool
+irange::varying_p () const
+{
+ return (m_nitems == 2 &&
+ m_bounds[0] == wi::min_value (TYPE_PRECISION (m_type),
+ TYPE_SIGN (m_type)) &&
+ m_bounds[1] == wi::max_value (TYPE_PRECISION (m_type),
+ TYPE_SIGN (m_type)));
+}
+
+// An irange is memory inefficient, so this class is used to store
+// them in memory. It is a variable length structure that contains
+// the sub-range pairs as well as the non-zero bitmask. The number of
+// entries is m_max_pairs * 2 + 1 (to accommodate the non-zero bits).
+//
+// To store an irange class X into an irange_storage use:
+//
+// irange X = ...;
+// irange_storage *stow = irange_storage::alloc (X);
+//
+// To convert it back into an irange use:
+//
+// tree type = ...;
+// irange X (type, stow);
+//
+// To get at the nonzero bits:
+//
+// irange_storage *stow = ...;
+// stow->set_nonzero_bits();
+// stow->get_nonzero_bits();
+
+class GTY((variable_size)) irange_storage
+{
+ friend class irange;
+
+ public:
+ static irange_storage *alloc (const irange &);
+ bool update (const irange &);
+
+ private:
+ static size_t size (unsigned precision);
+ void set (const irange &);
+ bool empty_pair_p (unsigned, unsigned, tree) const;
+ void set_empty_pair (unsigned, unsigned, tree);
+ void set_nonzero_bits (const wide_int &);
+ wide_int get_nonzero_bits ();
+
+ // The last wide_int in this field is a mask representing which bits in
+ // an integer are known to be non-zero.
+ trailing_wide_ints<irange::m_max_pairs * 2 + 1> trailing_bounds;
+};
+
+// Return the nonzero bits in the range.
+
+inline wide_int
+irange_storage::get_nonzero_bits ()
+{
+ return trailing_bounds[irange::m_max_pairs * 2];
+}
+
+// Set the nonzero bit mask to WI.
+inline void
+irange_storage::set_nonzero_bits (const wide_int &wi)
+{
+ trailing_bounds[irange::m_max_pairs * 2] = wi;
+}
+
+// Returns the size of an irange_storage with PRECISION.
+
+inline size_t
+irange_storage::size (unsigned precision)
+{
+ return sizeof (irange_storage)
+ /* There is a +1 for the non-zero bits field. */
+ + trailing_wide_ints<irange::m_max_pairs * 2 + 1>::extra_size (precision);
+}
+
+// Allocate GC memory for an irange_storage with PRECISION and
+// initialize it to IR.
+
+inline irange_storage *
+irange_storage::alloc (const irange &ir)
+{
+ unsigned precision = TYPE_PRECISION (ir.m_type);
+ irange_storage *stow = static_cast<irange_storage *> (ggc_internal_alloc
+ (size (precision)));
+ stow->set (ir);
+ stow->set_nonzero_bits (wi::shwi (-1, precision));
+ return stow;
+}
+
+// Return TRUE if pair [i, j] is marked as empty.
+
+inline bool
+irange_storage::empty_pair_p (unsigned i, unsigned j, tree type) const
+{
+ unsigned precision = wi::get_precision (trailing_bounds[0]);
+ if (precision == 1 && TYPE_SIGN (type) == SIGNED)
+ return (trailing_bounds[i] == wi::zero (precision)
+ && trailing_bounds[j] == wi::one (precision));
+ return (trailing_bounds[i] == wi::one (precision)
+ && trailing_bounds[j] == wi::zero (precision));
+}
+
+// Return true if TYPE is a valid type for irange to operate on.
+// Otherwise return FALSE.
+
+inline bool
+irange::supports_type_p (tree type)
+{
+  if (type && (INTEGRAL_TYPE_P (type) || POINTER_TYPE_P (type)))
+    return true;
+  return false;
+}
+
+// Return true if SSA is a valid ssa_name for irange to operate on.
+// Otherwise return FALSE.
+
+inline bool
+irange::supports_ssa_p (tree ssa)
+{
+ if (!SSA_NAME_IS_VIRTUAL_OPERAND (ssa))
+ return supports_type_p (TREE_TYPE (ssa));
+ return false;
+}
+
+// Return true if EXPR is a valid tree expression for irange to operate on.
+// Otherwise return FALSE.
+
+inline bool
+irange::supports_p (tree expr)
+{
+ if (TYPE_P (expr))
+ return supports_type_p (expr);
+ else if (TREE_CODE (expr) == SSA_NAME)
+ return supports_ssa_p (expr);
+ return supports_type_p (TREE_TYPE (expr));
+}
+
+value_range_base irange_to_value_range (const irange &);
+#else // USE_IRANGE
+class value_range_storage;
+typedef value_range_base irange;
+typedef value_range_storage irange_storage;
+#endif // USE_IRANGE
+
+// Common code between the alternate irange implementations.
+
+irange range_zero (tree type);
+irange range_nonzero (tree type);
+irange range_intersect (const irange &, const irange &);
+irange range_union (const irange &, const irange &);
+irange range_invert (const irange &);
+irange range_positives (tree type);
+irange range_negatives (tree type);
+#endif // GCC_RANGE_H
diff --git a/gcc/selftest.h b/gcc/selftest.h
index d278f0a2900..a1730ff912e 100644
--- a/gcc/selftest.h
+++ b/gcc/selftest.h
@@ -258,6 +258,10 @@ extern int num_passes;
} /* end of namespace selftest. */
+/* This is outside of the selftest namespace because it's a friend of
+ class irange. */
+extern void range_tests ();
+
/* Macros for writing tests. */
/* Evaluate EXPR and coerce to bool, calling
diff --git a/gcc/ssa.h b/gcc/ssa.h
index 56a8d103965..74dcc732cd2 100644
--- a/gcc/ssa.h
+++ b/gcc/ssa.h
@@ -25,6 +25,7 @@ along with GCC; see the file COPYING3. If not see
#include "stringpool.h"
#include "gimple-ssa.h"
+#include "range.h"
#include "tree-vrp.h"
#include "tree-ssanames.h"
#include "tree-phinodes.h"
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index efd51735022..5caea6d3593 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -68,6 +68,7 @@ along with GCC; see the file COPYING3. If not see
#include "vr-values.h"
#include "builtins.h"
#include "wide-int-range.h"
+#include "range-op.h"
static bool
ranges_from_anti_range (const value_range_base *ar,
@@ -136,6 +137,37 @@ value_range_base::value_range_base (tree type)
set_varying (type);
}
+value_range_base::value_range_base (enum value_range_kind kind,
+ tree type,
+ const wide_int &wmin,
+ const wide_int &wmax)
+{
+ tree min = wide_int_to_tree (type, wmin);
+ tree max = wide_int_to_tree (type, wmax);
+ gcc_assert (kind == VR_RANGE || kind == VR_ANTI_RANGE);
+ set (kind, min, max);
+}
+
+value_range_base::value_range_base (tree type,
+ const wide_int &wmin,
+ const wide_int &wmax)
+{
+ tree min = wide_int_to_tree (type, wmin);
+ tree max = wide_int_to_tree (type, wmax);
+ set (VR_RANGE, min, max);
+}
+
+value_range_base::value_range_base (tree min, tree max)
+{
+ set (VR_RANGE, min, max);
+}
+
+value_range_base::value_range_base (tree type ATTRIBUTE_UNUSED,
+ const value_range_storage *stow)
+{
+ *this = stow->m_vr;
+}
+
/* Like set, but keep the equivalences in place. */
void
@@ -1754,6 +1786,21 @@ extract_range_from_binary_expr (value_range_base *vr,
&& ranges_from_anti_range (&vr1, &vrtem0, &vrtem1))
{
extract_range_from_binary_expr (vr, code, expr_type, vr0_, &vrtem0);
+ /* On a division, if dividing one sub-range returned UNDEFINED,
+ it's a division by zero, and the entire thing is
+ UNDEFINED.
+
+ Imagine unsigned [X,Y] .DIV. ~[1,1]. This is done as:
+ ([X,Y] .DIV. [0,0]) U ([X,Y] .DIV. [2,MAX]). Once we get the
+ undefined from the first sub-range, we can bail. */
+ if (vr->undefined_p ()
+ && (code == TRUNC_DIV_EXPR
+ || code == FLOOR_DIV_EXPR
+ || code == CEIL_DIV_EXPR
+ || code == EXACT_DIV_EXPR
+ || code == ROUND_DIV_EXPR
+ || code == TRUNC_MOD_EXPR))
+ return;
if (!vrtem1.undefined_p ())
{
value_range_base vrres;
@@ -2229,6 +2276,353 @@ extract_range_from_unary_expr (value_range_base *vr,
return;
}
+/* Given two ranges (OLD_VR and NEW_VR) that are the result of
+ VR0 .OPCODE. VR1, abort if they are not equivalent. */
+
+void
+assert_compare_value_ranges (const value_range_base *old_vr,
+ const value_range_base *new_vr,
+ tree_code code,
+ const value_range_base *vr0,
+ const value_range_base *vr1)
+{
+ if (old_vr->equal_p (*new_vr))
+ return;
+
+  /* Now account for any known differences between range-ops and
+     extract_range_from_*expr.  If we can't account for the difference
+     between the ranges, fail loudly.  */
+
+ /* Ideally, unsigned [1, MAX] should've been canonicalized as
+ ~[0, 0], but this causes issues with ranges_from_anti_range.
+ Special case this for now, and avoid batting the beehive. */
+ if (TYPE_UNSIGNED (old_vr->type ())
+ && integer_onep (old_vr->min ())
+ && vrp_val_is_max (old_vr->max ())
+ && new_vr->nonzero_p ())
+ return;
+
+ /* extract_range_from_binary_expr special cases this scenario, and
+ gives up. Since range-ops can do better, avoid a false
+ positive. */
+ if (code == EXACT_DIV_EXPR && vr0->nonzero_p ())
+ return;
+
+ /* The ordering in which range-ops and
+ extract_range_from_binary_expr split up and handle sub-ranges
+ matters, and this can yield slightly worse results for VRP at
+ times. This is because the union of intermediate ranges,
+ depending on which order they are done in, can yield
+ unrepresentable ranges that ultimately generate a VARYING.
+
+ This is imprecise at best, so avoid comparing pairs of
+ VR_ANTI_RANGES inputs, for which VRP produces VARYING and
+ range-ops does better.
+
+ For example, extract_range_from_binary_expr, with its recursive
+ ranges_from_anti_range approach, will handle ~[5,10] OP [20,30]
+ in this order:
+
+ [0,4][11,MAX] OP [0,19][31,MAX]
+
+ t1 = [0,4] OP [0,19]
+ t2 = [0,4] OP [31,MAX]
+ t3 = union(t1, t2)
+
+ t4 = [11,MAX] OP [0,19]
+ t5 = [11,MAX] OP [31,MAX]
+ t6 = union(t4, t5)
+
+ t = union(t3, t6)
+
+ Whereas, range-ops will do:
+
+ t = union([0,4] OP [0,19])
+ t = union(t, union([0,4] OP [31,MAX]))
+ t = union(t, union([11,MAX] OP [0,19]))
+ t = union(t, union([11,MAX] OP [31,MAX]))
+
+ Ugh. In the amount of time it took to explain this, I could've
+ rewritten VRP to match range-ops, but let's avoid touching too
+ much of existing code.
+
+ Triggered by: gcc.target/i386/sse4_2-crc32b.c. */
+ if (vr0->kind () == VR_ANTI_RANGE && vr1->kind () == VR_ANTI_RANGE
+ && old_vr->varying_p ())
+ return;
+
+ /* MAX/MIN of pointers in VRP dumbs everything down to
+ NULL/NON_NULL/VARYING. When running range-ops with multiple
+ sub-ranges, we may get slightly better ranges. In this case,
+ pretend we're varying and see if we matched VRP. */
+ if ((code == MAX_EXPR || code == MIN_EXPR)
+ && POINTER_TYPE_P (new_vr->type ())
+ && !new_vr->zero_p ()
+ && !new_vr->nonzero_p ()
+ && old_vr->varying_p ())
+ return;
+
+ /* Sigh. extract_range_from_binary_expr may refuse to work on
+ varying ranges for some codes, but range-ops can sometimes derive
+ useful information. This is the same check we have in
+ extract_range_from_binary_expr. */
+ if (code != BIT_AND_EXPR
+ && code != BIT_IOR_EXPR
+ && code != TRUNC_DIV_EXPR
+ && code != FLOOR_DIV_EXPR
+ && code != CEIL_DIV_EXPR
+ && code != EXACT_DIV_EXPR
+ && code != ROUND_DIV_EXPR
+ && code != TRUNC_MOD_EXPR
+ && code != MIN_EXPR
+ && code != MAX_EXPR
+ && code != PLUS_EXPR
+ && code != MINUS_EXPR
+ && code != RSHIFT_EXPR
+ && code != POINTER_PLUS_EXPR
+ && (vr0->varying_p ()
+ || vr1->varying_p ()
+ || vr0->symbolic_p ()
+ || vr1->symbolic_p ()))
+ {
+ /* If VRP was varying, we know we did better. */
+ if (old_vr->varying_p ())
+ return;
+ }
+
+ /* There's an unaccounted difference. This may be a real bug. */
+
+ tree expr_type = old_vr->type ();
+ fprintf (stderr, "------------\n");
+ fprintf (stderr, "Ranges from VRP and range-ops do not agree!\n");
+ fprintf (stderr, "CODE: %s\n", get_tree_code_name (code));
+ fprintf (stderr, "TYPE = ");
+ debug_generic_stmt (expr_type);
+ if (CONVERT_EXPR_CODE_P (code))
+ {
+ fprintf (stderr, "\tFROM TYPE = ");
+ debug_generic_stmt (vr0->type ());
+ }
+ fprintf (stderr, "vr0: ");
+ vr0->dump (stderr);
+ fputc ('\n', stderr);
+ if (TREE_CODE_CLASS (code) != tcc_unary)
+ {
+ fprintf (stderr, "vr1: ");
+ vr1->dump (stderr);
+ fputc ('\n', stderr);
+ }
+ fprintf (stderr, "VRP returned: ");
+ old_vr->dump (stderr);
+ fprintf (stderr, "\nrange-ops returned: ");
+ new_vr->dump (stderr);
+ fputc ('\n', stderr);
+  gcc_unreachable ();
+}
+
+/* Normalize a value_range for use in range_ops and return it.
+ Eventually, range-ops should do this for us. */
+
+static value_range_base
+normalize_for_range_ops (const value_range_base &vr)
+{
+ tree type = vr.type ();
+
+ /* This will return ~[0,0] for [&var, &var]. */
+ if (POINTER_TYPE_P (type) && !range_includes_zero_p (&vr))
+ {
+ value_range_base temp;
+ temp.set_nonzero (type);
+ return temp;
+ }
+ if (vr.symbolic_p ())
+ return normalize_for_range_ops (vr.normalize_symbolics ());
+ if (TREE_CODE (vr.min ()) == INTEGER_CST
+ && TREE_CODE (vr.max ()) == INTEGER_CST)
+ return vr;
+ /* Anything not strictly numeric at this point becomes varying. */
+ return value_range_base (vr.type ());
+}
+
+/* Fold a binary expression of two value_range's with range-ops. */
+
+static void
+range_ops_fold_binary_expr (value_range_base *vr,
+ enum tree_code code,
+ tree expr_type,
+ const value_range_base *vr0_,
+ const value_range_base *vr1_)
+{
+ /* Mimic any behavior users of extract_range_from_unary_expr may
+ expect. */
+ range_operator *op = range_op_handler (code);
+ if (!op)
+ {
+ vr->set_varying (expr_type);
+ return;
+ }
+ value_range_base vr0 = *vr0_, vr1 = *vr1_;
+ if (vr0.undefined_p () && vr1.undefined_p ())
+ {
+ vr->set_undefined (expr_type);
+ return;
+ }
+ if (vr0.undefined_p ())
+ vr0.set_varying (expr_type);
+ else if (vr1.undefined_p ())
+ vr1.set_varying (expr_type);
+
+ /* Handle symbolics. Worth moving into range-ops?? */
+ if ((code == PLUS_EXPR || code == MINUS_EXPR)
+ && (vr0.symbolic_p () || vr1.symbolic_p ()))
+ {
+ /* Pass on down until we handle anti-ranges in
+ extract_range_from_plus_expr. */
+ extract_range_from_binary_expr (vr, code, expr_type, &vr0, &vr1);
+ return;
+ }
+ if (handle_symbolics_in_pointer_plus_expr (vr, code, expr_type, &vr0, &vr1))
+ return;
+
+ /* Do the range-ops dance. */
+ value_range_base n0 = normalize_for_range_ops (vr0);
+ value_range_base n1 = normalize_for_range_ops (vr1);
+#if USE_IRANGE
+ irange ir;
+ op->fold_range (ir, n0, n1);
+ *vr = ir;
+#else
+ op->fold_range (*vr, n0, n1);
+#endif
+}
+
+/* Fold a unary expression of a value_range with range-ops. */
+
+static void
+range_ops_fold_unary_expr (value_range_base *vr,
+ enum tree_code code, tree expr_type,
+ const value_range_base *vr0)
+{
+ /* Mimic any behavior users of extract_range_from_unary_expr may
+ expect. */
+ range_operator *op = range_op_handler (code);
+ if (!op)
+ {
+ vr->set_varying (expr_type);
+ return;
+ }
+ if (vr0->undefined_p ())
+ {
+ vr->set_undefined (expr_type);
+ return;
+ }
+
+ /* Handle symbolics. Worth moving into range-ops?? */
+ if (code == NEGATE_EXPR && vr0->symbolic_p ())
+ {
+ /* -X is simply 0 - X. */
+ value_range_base zero;
+ zero.set_zero (vr0->type ());
+ range_ops_fold_binary_expr (vr, MINUS_EXPR, expr_type, &zero, vr0);
+ return;
+ }
+ if (code == BIT_NOT_EXPR && vr0->symbolic_p ())
+ {
+ /* ~X is simply -1 - X. */
+ value_range_base minusone;
+ minusone.set (build_int_cst (vr0->type (), -1));
+ range_ops_fold_binary_expr (vr, MINUS_EXPR, expr_type, &minusone, vr0);
+ return;
+ }
+ if (CONVERT_EXPR_CODE_P (code) && (POINTER_TYPE_P (expr_type)
+ || POINTER_TYPE_P (vr0->type ())))
+ {
+ /* This handles symbolic conversions such as [25, x_4]. */
+ if (!range_includes_zero_p (vr0))
+ vr->set_nonzero (expr_type);
+ else if (vr0->zero_p ())
+ vr->set_zero (expr_type);
+ else
+ vr->set_varying (expr_type);
+ return;
+ }
+
+ /* Do the range-ops dance. */
+ value_range_base n0 = normalize_for_range_ops (*vr0);
+ value_range_base n1 (expr_type);
+#if USE_IRANGE
+ irange ir;
+ op->fold_range (ir, n0, n1);
+ *vr = ir;
+#else
+ op->fold_range (*vr, n0, n1);
+#endif
+}
+
+/* Generic folding of a binary expression between two value_ranges.
+ Uses range-ops and extract_range_from_binary_expr, and verifies
+ that the results match. */
+
+void
+range_fold_binary_expr (value_range_base *vr,
+ enum tree_code code,
+ tree expr_type,
+ const value_range_base *vr0,
+ const value_range_base *vr1)
+{
+ if (!value_range_base::supports_type_p (expr_type)
+ || !value_range_base::supports_type_p (vr0->type ())
+ || !value_range_base::supports_type_p (vr1->type ()))
+ {
+ *vr = value_range (expr_type);
+ return;
+ }
+ if (flag_ranges_mode & RANGES_RANGE_OPS)
+ range_ops_fold_binary_expr (vr, code, expr_type, vr0, vr1);
+ if (flag_ranges_mode & RANGES_VRP)
+ {
+ value_range_base old;
+ extract_range_from_binary_expr (&old, code, expr_type, vr0, vr1);
+ if (flag_ranges_mode & RANGES_CHECKING)
+ assert_compare_value_ranges (&old, vr, code, vr0, vr1);
+ else
+ *vr = old;
+ }
+}
+
+/* Generic folding of a unary expression of a value_range. Uses
+ range-ops and extract_range_from_unary_expr, and verifies that the
+ results match. */
+
+void
+range_fold_unary_expr (value_range_base *vr,
+ enum tree_code code,
+ tree expr_type,
+ const value_range_base *vr0)
+{
+ if (!value_range_base::supports_type_p (expr_type)
+ || !value_range_base::supports_type_p (vr0->type ()))
+ {
+ *vr = value_range (expr_type);
+ return;
+ }
+ if (flag_ranges_mode & RANGES_RANGE_OPS)
+ range_ops_fold_unary_expr (vr, code, expr_type, vr0);
+ if (flag_ranges_mode & RANGES_VRP)
+ {
+ value_range_base old;
+ extract_range_from_unary_expr (&old, code, expr_type, vr0, vr0->type ());
+ if (flag_ranges_mode & RANGES_CHECKING)
+ {
+ value_range_base vr1 (expr_type);
+ assert_compare_value_ranges (&old, vr, code, vr0, &vr1);
+ }
+ else
+ *vr = old;
+ }
+}
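[ Reviewer note: the flag_ranges_mode dispatch in the two range_fold_* functions above is worth spelling out, since it is the heart of the checking scheme: range-ops and the legacy VRP code can each run, and RANGES_CHECKING decides whether the legacy result is compared against range-ops or simply overwrites it. A minimal standalone sketch of that control flow follows; the toy fold functions and the int stand-in for a range are hypothetical, only the bitmask logic mirrors the patch. ]

```cpp
#include <cassert>

// Hypothetical stand-ins for the flag_ranges_mode bits in the patch.
enum ranges_mode
{
  RANGES_VRP = 1,
  RANGES_RANGE_OPS = 2,
  RANGES_CHECKING = 4
};

// Toy "fold" routines standing in for range-ops and the legacy
// extract_range_from_binary_expr; here a plain int plays the range.
static int fold_with_range_ops (int a, int b) { return a + b; }
static int fold_with_vrp (int a, int b) { return a + b; }

// Mirror of the dispatch in range_fold_binary_expr: range-ops runs
// first, then VRP either checks against it or wins outright.
static int
fold_dispatch (unsigned mode, int a, int b, bool *mismatch)
{
  int result = 0;
  if (mode & RANGES_RANGE_OPS)
    result = fold_with_range_ops (a, b);
  if (mode & RANGES_VRP)
    {
      int old = fold_with_vrp (a, b);
      if (mode & RANGES_CHECKING)
	*mismatch = (old != result);	// assert_compare_value_ranges.
      else
	result = old;			// Legacy result overwrites.
    }
  return result;
}
```

Note that checking mode only makes sense with both implementations enabled; with RANGES_VRP alone the legacy result is simply used as-is.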
+
/* Given a COND_EXPR COND of the form 'V OP W', and an SSA name V,
create a new SSA name N and return the assertion assignment
'N = ASSERT_EXPR <V, V OP W>'. */
@@ -6338,6 +6732,150 @@ value_range_base::normalize_symbolics () const
return value_range_base (ttype);
}
+unsigned
+value_range_base::num_pairs () const
+{
+ if (undefined_p ())
+ return 0;
+ if (varying_p ())
+ return 1;
+ if (symbolic_p ())
+ return normalize_symbolics ().num_pairs ();
+ if (m_kind == VR_ANTI_RANGE)
+ {
+ value_range_base vr0, vr1;
+ gcc_assert (ranges_from_anti_range (this, &vr0, &vr1, true));
+ if (vr1.undefined_p ())
+ return 1;
+ return 2;
+ }
+ return 1;
+}
+
+wide_int
+value_range_base::lower_bound (unsigned pair) const
+{
+ if (symbolic_p ())
+ return normalize_symbolics ().lower_bound (pair);
+
+ gcc_assert (!undefined_p ());
+ gcc_assert (pair + 1 <= num_pairs ());
+ tree t = NULL;
+ if (m_kind == VR_ANTI_RANGE)
+ {
+ value_range_base vr0, vr1;
+ gcc_assert (ranges_from_anti_range (this, &vr0, &vr1, true));
+ if (pair == 0)
+ t = vr0.min ();
+ else if (pair == 1)
+ t = vr1.min ();
+ else
+ gcc_unreachable ();
+ }
+ else
+ t = m_min;
+ return wi::to_wide (t);
+}
+
+wide_int
+value_range_base::upper_bound (unsigned pair) const
+{
+ if (symbolic_p ())
+ return normalize_symbolics ().upper_bound (pair);
+
+ gcc_assert (!undefined_p ());
+ gcc_assert (pair + 1 <= num_pairs ());
+ tree t = NULL;
+ if (m_kind == VR_ANTI_RANGE)
+ {
+ value_range_base vr0, vr1;
+ gcc_assert (ranges_from_anti_range (this, &vr0, &vr1, true));
+ if (pair == 0)
+ t = vr0.max ();
+ else if (pair == 1)
+ t = vr1.max ();
+ else
+ gcc_unreachable ();
+ }
+ else
+ t = m_max;
+ return wi::to_wide (t);
+}
+
+wide_int
+value_range_base::upper_bound () const
+{
+ unsigned pairs = num_pairs ();
+ gcc_assert (pairs > 0);
+ return upper_bound (pairs - 1);
+}
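[ Reviewer note: the pair-iteration API above (num_pairs / lower_bound / upper_bound) views an anti-range as up to m_max_pairs == 2 ordinary sub-ranges, exactly as ranges_from_anti_range splits it. As a standalone illustration, using a plain 8-bit unsigned type instead of wide_int, ~[lo, hi] decomposes like this; the helper name is hypothetical. ]

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical model of ranges_from_anti_range for an unsigned 8-bit
// type: ~[lo, hi] yields the sub-range below lo and the sub-range
// above hi, omitting whichever one is empty.
static std::vector<std::pair<uint8_t, uint8_t>>
anti_range_pairs (uint8_t lo, uint8_t hi)
{
  std::vector<std::pair<uint8_t, uint8_t>> pairs;
  if (lo > 0)
    pairs.push_back ({0, static_cast<uint8_t> (lo - 1)});
  if (hi < 255)
    pairs.push_back ({static_cast<uint8_t> (hi + 1), 255});
  return pairs;
}
```

So ~[5, 10] iterates as the two pairs [0, 4] and [11, 255], while ~[0, 10] collapses to the single pair [11, 255] — which is why num_pairs checks whether the second sub-range came back undefined.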
+
+void
+value_range_base::cast (tree typ)
+{
+ value_range_base tem;
+ enum ranges_mode save = flag_ranges_mode;
+ /* Avoid infinite recursion in the ranger vs vrp checking code. */
+ flag_ranges_mode = RANGES_VRP;
+ /* At some point we should inline all of the CONVERT_EXPR code from
+ extract_range_from_unary_expr here. */
+ extract_range_from_unary_expr (&tem, CONVERT_EXPR, typ, this, type ());
+ flag_ranges_mode = save;
+ *this = tem;
+}
+
+/* Return TRUE if the range contains the INTEGER_CST CST. */
+
+bool
+value_range_base::contains_p (tree cst) const
+{
+ gcc_assert (TREE_CODE (cst) == INTEGER_CST);
+ if (symbolic_p ())
+ return normalize_symbolics ().contains_p (cst);
+ return value_inside_range (cst) == 1;
+}
+
+void
+value_range_base::invert ()
+{
+ if (undefined_p ())
+ set_varying (type ());
+ else if (varying_p ())
+ set_undefined (type ());
+ else if (m_kind == VR_RANGE)
+ m_kind = VR_ANTI_RANGE;
+ else if (m_kind == VR_ANTI_RANGE)
+ m_kind = VR_RANGE;
+ else
+ gcc_unreachable ();
+}
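[ Reviewer note: invert () above is a pure kind flip — the min/max bounds are untouched, only the set-complement interpretation changes. A toy transition table, with a standalone enum mirroring value_range_kind, makes the four cases explicit. ]

```cpp
#include <cassert>

// Standalone mirror of the kinds value_range_base::invert toggles.
enum value_range_kind { VR_UNDEFINED, VR_RANGE, VR_ANTI_RANGE, VR_VARYING };

// The complement of the empty set is everything and vice versa;
// [a, b] and ~[a, b] complement each other with unchanged bounds.
static value_range_kind
invert_kind (value_range_kind k)
{
  switch (k)
    {
    case VR_UNDEFINED:  return VR_VARYING;
    case VR_VARYING:    return VR_UNDEFINED;
    case VR_RANGE:      return VR_ANTI_RANGE;
    case VR_ANTI_RANGE: return VR_RANGE;
    }
  return VR_UNDEFINED;	// Unreachable; keeps -Wreturn-type quiet.
}
```

Inverting twice is the identity, matching the gcc_unreachable-guarded else chain in the patch.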
+
+void
+value_range_base::union_ (const value_range_base &r)
+{
+ /* Disable details for now, because it makes the ranger dump
+ unnecessarily verbose. */
+ bool details = dump_flags & TDF_DETAILS;
+ if (details)
+ dump_flags &= ~TDF_DETAILS;
+ union_ (&r);
+ if (details)
+ dump_flags |= TDF_DETAILS;
+}
+
+void
+value_range_base::intersect (const value_range_base &r)
+{
+ /* Disable details for now, because it makes the ranger dump
+ unnecessarily verbose. */
+ bool details = dump_flags & TDF_DETAILS;
+ if (details)
+ dump_flags &= ~TDF_DETAILS;
+ intersect (&r);
+ if (details)
+ dump_flags |= TDF_DETAILS;
+}
+
/* Visit all arguments for PHI node PHI that flow through executable
edges. If a valid value range can be derived from all the incoming
value ranges, set a new range for the LHS of PHI. */
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index e4268cd4742..d75a48669af 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -20,14 +20,27 @@ along with GCC; see the file COPYING3. If not see
#ifndef GCC_TREE_VRP_H
#define GCC_TREE_VRP_H
+class value_range_storage;
+
/* Range of values that can be associated with an SSA_NAME after VRP
has executed. */
class GTY((for_user)) value_range_base
{
+ friend class value_range_storage;
+ friend void range_tests ();
public:
value_range_base ();
value_range_base (value_range_kind, tree, tree);
+ value_range_base (tree, tree);
+ value_range_base (value_range_kind,
+ tree type, const wide_int &, const wide_int &);
+ value_range_base (tree type, const wide_int &, const wide_int &);
+ value_range_base (tree type, const value_range_storage *);
value_range_base (tree type);
+#if USE_IRANGE
+ /* Only for branch. */
+ value_range_base (const irange &ir) { *this = irange_to_value_range (ir); }
+#endif
void set (value_range_kind, tree, tree);
void set (tree);
@@ -62,8 +75,21 @@ public:
void dump (FILE *) const;
void dump () const;
+ /* Support machinery for irange. */
+ static const unsigned int m_max_pairs = 2;
static bool supports_type_p (tree type);
+ static bool supports_ssa_p (tree ssa);
+ static bool supports_p (tree expr);
+ void cast (tree);
+ bool contains_p (tree) const;
+ unsigned num_pairs () const;
+ wide_int lower_bound (unsigned = 0) const;
+ wide_int upper_bound (unsigned) const;
+ wide_int upper_bound () const;
+ void invert ();
value_range_base normalize_symbolics () const;
+ void union_ (const value_range_base &);
+ void intersect (const value_range_base &);
protected:
void check ();
@@ -144,6 +170,29 @@ class GTY((user)) value_range : public value_range_base
bitmap m_equiv;
};
+class value_range_storage
+{
+ friend class value_range_base;
+public:
+ static value_range_storage *alloc (const value_range_base &r)
+ {
+ value_range_storage *p = ggc_alloc<value_range_storage> ();
+ p->set (r);
+ return p;
+ }
+ bool update (const value_range_base &r)
+ {
+ set (r);
+ return true;
+ }
+private:
+ void set (const value_range_base &r)
+ {
+ m_vr = r;
+ }
+ value_range_base m_vr;
+};
+
inline
value_range_base::value_range_base ()
{
@@ -253,6 +302,30 @@ value_range_base::supports_type_p (tree type)
return NULL;
}
+// Return true if SSA is a valid ssa_name for value_range to operate on.
+// Otherwise return FALSE.
+
+inline bool
+value_range_base::supports_ssa_p (tree ssa)
+{
+ if (!SSA_NAME_IS_VIRTUAL_OPERAND (ssa))
+ return supports_type_p (TREE_TYPE (ssa));
+ return false;
+}
+
+// Return true if EXPR is a valid tree expression for value_range to
+// operate on. Otherwise return FALSE.
+
+inline bool
+value_range_base::supports_p (tree expr)
+{
+ if (TYPE_P (expr))
+ return supports_type_p (expr);
+ else if (TREE_CODE (expr) == SSA_NAME)
+ return supports_ssa_p (expr);
+ return supports_type_p (TREE_TYPE (expr));
+}
+
extern void register_edge_assert_for (tree, edge, enum tree_code,
tree, tree, vec<assert_info> &);
extern bool stmt_interesting_for_vrp (gimple *);
@@ -294,6 +367,11 @@ extern bool overflow_comparison_p (tree_code, tree, tree, bool, tree *);
extern tree get_single_symbol (tree, bool *, tree *);
extern void maybe_set_nonzero_bits (edge, tree);
extern value_range_kind determine_value_range (tree, wide_int *, wide_int *);
+void range_fold_binary_expr (value_range_base *, enum tree_code, tree,
+ const value_range_base *,
+ const value_range_base *);
+void range_fold_unary_expr (value_range_base *, enum tree_code, tree,
+ const value_range_base *);
/* Return TRUE if *VR includes the value zero. */
diff --git a/gcc/vr-values.c b/gcc/vr-values.c
index 421d05d9017..4f5824dfdc0 100644
--- a/gcc/vr-values.c
+++ b/gcc/vr-values.c
@@ -832,7 +832,7 @@ vr_values::extract_range_from_binary_expr (value_range *vr,
vrp_val_max (expr_type));
}
- ::extract_range_from_binary_expr (vr, code, expr_type, &vr0, &vr1);
+ range_fold_binary_expr (vr, code, expr_type, &vr0, &vr1);
/* Set value_range for n in following sequence:
def = __builtin_memchr (arg, 0, sz)
@@ -893,7 +893,7 @@ vr_values::extract_range_from_binary_expr (value_range *vr,
else
n_vr1.set (VR_RANGE, op1, op1);
- ::extract_range_from_binary_expr (vr, code, expr_type, &vr0, &n_vr1);
+ range_fold_binary_expr (vr, code, expr_type, &vr0, &n_vr1);
}
if (vr->varying_p ()
@@ -917,7 +917,7 @@ vr_values::extract_range_from_binary_expr (value_range *vr,
else
n_vr0.set (op0);
- ::extract_range_from_binary_expr (vr, code, expr_type, &n_vr0, &vr1);
+ range_fold_binary_expr (vr, code, expr_type, &n_vr0, &vr1);
}
/* If we didn't derive a range for MINUS_EXPR, and
@@ -958,7 +958,7 @@ vr_values::extract_range_from_unary_expr (value_range *vr, enum tree_code code,
else
vr0.set_varying (type);
- ::extract_range_from_unary_expr (vr, code, type, &vr0, TREE_TYPE (op0));
+ range_fold_unary_expr (vr, code, type, &vr0);
}
@@ -1458,8 +1458,7 @@ vr_values::extract_range_basic (value_range *vr, gimple *stmt)
type, op0);
extract_range_from_unary_expr (&vr1, NOP_EXPR,
type, op1);
- ::extract_range_from_binary_expr (vr, subcode, type,
- &vr0, &vr1);
+ range_fold_binary_expr (vr, subcode, type, &vr0, &vr1);
flag_wrapv = saved_flag_wrapv;
}
return;