------- Comment #8 from baldrick at gcc dot gnu dot org 2007-02-22 18:14 -------
> Can you walk me through some of the checks and why they can be removed? I see
> (.004.gimple dump): ...
> I assume all of the above is gimplified from just
>
>    if Source_Last < Source_First then
>       if Target_First = Target_Type'First then
>          raise Constraint_Error;
>       end if;
>       Target_Last := Target_Type'Pred (Target_First);
>       return;
Yes. Amazing, isn't it ;)

The important thing to keep in mind is that all "target" variables must be in
the range 10..20, and all "source" variables in the range 0..100 (see the
definitions:

   type S is range 0 .. 100;   <-- S corresponds to Source_Type in Join_Equal
   type T is range 10 .. 20;   <-- T corresponds to Target_Type in Join_Equal

). What does "must be in the range" mean? Firstly, the program behaviour is
undefined if a variable is outside its range. This is the same as for signed
overflow: it's just that here overflow starts at 100 or at 20, not at INT_MAX!
Secondly, the language requires the compiler to check, at the point of the
call, that the values passed to the Join_Equal subprogram are in the right
ranges; if not, an exception is raised. Likewise, at points where you do
arithmetic like adding or subtracting 1, the compiler inserts checks that the
result will not go out of range; if it would, an exception is raised. That's
where all this code is coming from.

Anyway, the practical upshot is that VRP is allowed to assume that
source_first and source_last have values in the range 0..100, and that
target_first and target_last have values in the range 10..20. Using this, it
should be able to eliminate all of the compiler-inserted range checking.

> ? So in essence VRP should somehow be able to see that
> Target_Type'Pred (Target_First) cannot be out of bounds because Target_First
> is not Target_Type'First, correct?

Exactly.

> But given the gimplified form above
> we also need to prove Target_First is not 128 (where does that come from?

It cannot be 128 because it is in the range 10..20. As to why the compiler
inserted 128, good question! Probably it has placed target in an unsigned
variable with 8-bit precision, and is checking that some computation it is
about to do in that variable will not overflow.

> It looks like __gnat_rcheck_12 is not a noreturn function?).

It is a noreturn function. It just raises a language-defined exception.
> We also
> need to prove that (js__TtB) Target_First is > 10 (that looks doable from
> the != 10 range we can extract from the first range check). But where
> does the check against 21 come from?

It seems to be another pointless check the compiler has inserted. It can be
removed because iftmp.4 is in the range 11..20.

> The 2nd check for 128 looks redundant
> and indeed we remove it in VRP1.

Yes.

> I need to look closer at what js__TtB actually is looking like, but this
> is at least a useful testcase.

Probably this is the "base type" for the type js__T (aka JS.T) in the
original source. The idea of a base type is that it is the type with the full
range you would expect from the precision. Most likely type T, with range
10..20, has been assigned 8-bit precision by the compiler, and has a base
type js__TtB with range -128..127, i.e. a normal C signed byte. The compiler
systematically converts to the base type before doing arithmetic. After the
arithmetic, it checks whether the result is in the range of the original type
(10..20). If not, it raises an exception (__gnat_rcheck_12); otherwise it
puts the result back in the type T variable.

--

baldrick at gcc dot gnu dot org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
BugsThisDependsOn  |26797                       |


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30911