On GCC we use -gnato on tests known to need it
(/gcc/testsuite/ada/acats/overflow.lst) since we want to test
the flags that the typical GCC/Ada user actually uses, not what official validation
requires (which is -gnato -gnatE IIRC).
But you're running a test that's *part* of the official validation and
This code only works for one's-complement machines, since it assumes a
symmetric range for Int. It breaks when UI_To_Int returns Integer'First, as
it did in this case. When it does, the abs produces an erroneous result
(since checking is disabled). So it almost doesn't matter what it puts
The test still fails at -O2 -gnato... All the current FAILs
still fail with -gnato, and we even have two additional failures
(unexpected constraint_error):
c45532e
c45532g
So we have to look carefully at what the front-end does with modular
types here.
Note that cxa4025, cxa4028, cxa4033 are
# BLOCK 6
# PRED: 4 (false,exec)
L1:;
iftmp.78_63 = D.1309_32;
iftmp.78_64 = D.1309_32;
D.1316_65 = (c460008__unsigned_edge_8) D.1309_32;
if (D.1316_65 == 255) goto L3; else goto L4;
# SUCC: 7 (true,exec) 8 (false,exec)
[...]
The problem (of course) is D.1316_65 can and
Excerpt from utils2.c:
/* Likewise, but only return types known to the Ada source. */
tree
get_ada_base_type (tree type)
{
  while (TREE_TYPE (type)
         && (TREE_CODE (type) == INTEGER_TYPE
             || TREE_CODE (type) == REAL_TYPE)
         && !TYPE_EXTRA_SUBTYPE_P (type))
    type = TREE_TYPE (type);
On Thu, 2006-03-02 at 14:05 +0100, Eric Botcazou wrote:
# BLOCK 6
# PRED: 4 (false,exec)
L1:;
iftmp.78_63 = D.1309_32;
iftmp.78_64 = D.1309_32;
D.1316_65 = (c460008__unsigned_edge_8) D.1309_32;
if (D.1316_65 == 255) goto L3; else goto L4;
# SUCC: 7 (true,exec) 8
Just to be 100% clear, I'm leaving this one in the hands of the
Ada maintainers. I'm not qualified to fix it.
Right.
We also still need the uintp fix installed. I'm not qualified to
say if Kenner's fix is correct or not, thus I'm not comfortable
checking in that
Just to be 100% clear, I'm leaving this one in the hands of the
Ada maintainers. I'm not qualified to fix it. Once the Ada
maintainers have this issue fixed, I'll re-run the Ada testsuite
and attack the next regression introduced by the VRP changes
(if any are left).
Sure. My message was
Jeffrey A Law wrote:
I wouldn't have a problem with non-canonical bounds if there were
no way to get a value into an object which is outside the
bounds. But if we can get values into the object which are outside
those bounds, then either the bounds are incorrect or the program
is invalid.
On Tue, 2006-02-28 at 18:59 -0500, Daniel Jacobowitz wrote:
On Tue, Feb 28, 2006 at 03:40:37PM -0700, Jeffrey A Law wrote:
Here's a great example from uintp.adb (which is the cause of the
testsuite hang FWIW)
We have a loop with the following termination code in uintp.num_bits
#
So this is likely to be a FE Ada coding bug and not a FE/ME/BE interface
issue, thanks for spotting this!
Indeed, having looked a bit closer at Uintp, I think this is the right fix.
Robert, please confirm.
*** uintp.adb 12 Sep 2003 21:50:56 - 1.80
--- uintp.adb 1 Mar 2006
So this is likely to be a FE Ada coding bug and not a FE/ME/BE interface
issue, thanks for spotting this!
Sorry, my last suggestion is clearly wrong. I think this is right.
*** uintp.adb 12 Sep 2003 21:50:56 - 1.80
--- uintp.adb 1 Mar 2006 13:16:21 -
*** package
On Wed, 2006-03-01 at 08:24 -0500, Richard Kenner wrote:
So this is likely to be a FE Ada coding bug and not a FE/ME/BE interface
issue, thanks for spotting this!
Sorry, my last suggestion is clearly wrong. I think this is right.
*** uintp.adb 12 Sep 2003 21:50:56 - 1.80
---
Here's the next segment in the ongoing saga of VRP vs Ada...
Not surprisingly we have another case where an object gets a
value outside of its TYPE_MIN_VALUE/TYPE_MAX_VALUE defined range.
Investigating the c460008 testsuite failure we have the following
code for Fixed_To_Short before VRP runs:
OK, this test is checking a corner case of the language, namely
non-power-of-two modular types.
It looks like this one needs overflow checking to pass (-gnato):
$ gnatmake -f -I../../../support/ c460008.adb
gcc -c -I../../../support/ c460008.adb
gcc -c -I./ -I../../../support/ -I-
It looks like this one needs overflow checking to pass (-gnato):
ACATS should always be run with -gnato since that's the only way to
get the behavior mandated by the RM. Why are we running it without it? Is
this new? Certainly -gnato was used during validations.
Richard, Arnaud, could you
On Wed, 2006-03-01 at 18:48 -0500, Richard Kenner wrote:
It looks like this one needs overflow checking to pass (-gnato):
ACATS should always be run with -gnato since that's the only way to
get the behavior mandated by the RM. Why are we running it without it? Is
this new? Certainly -gnato
On GCC we use -gnato on tests known to need it
(/gcc/testsuite/ada/acats/overflow.lst) since we want to test the flags
that the typical GCC/Ada user actually uses, not what official validation
requires (which is -gnato -gnatE IIRC).
Well that would make the most sense if the code in the
Richard, Arnaud, could you check amongst GNAT experts whether, for such
types (non-power-of-two modulus), it's not worth enabling overflow checks
by default now that we have VRP doing non-trivial optimisations? People
using a non-power-of-two modulus are not concerned about performance
anyway, so having a
[Sorry for the delay]
That's an old problem, which has already been discussed IIRC: should
TYPE_MAX_VALUE/TYPE_MIN_VALUE be constrained by TYPE_PRECISION and
TYPE_UNSIGNED?
My feeling? Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the
On Tue, 2006-02-28 at 12:06 +0100, Eric Botcazou wrote:
[Sorry for the delay]
No worries.
I was actually referring to explicit constraints on TYPE_MAX_VALUE and
TYPE_MIN_VALUE derived from TYPE_PRECISION and TYPE_UNSIGNED, for example
that ceil(log2(TYPE_MAX_VALUE - TYPE_MIN_VALUE)) must be
Eric Botcazou wrote:
This problem was already raised when Diego contributed the VRP pass and Diego
adjusted it to cope with Ada. AFAIK Ada and VRP work fine on the 4.1 branch.
Which doesn't mean that Ada is DTRT. On the contrary, Ada ought to be
fixed. It's an ugly hack in
On 2/28/06, Diego Novillo [EMAIL PROTECTED] wrote:
Eric Botcazou wrote:
This problem was already raised when Diego contributed the VRP pass and Diego
adjusted it to cope with Ada. AFAIK Ada and VRP work fine on the 4.1 branch.
Which doesn't mean that Ada is DTRT. On the contrary, Ada
Basically, the way Ada sets TYPE_MIN_VALUE/TYPE_MAX_VALUE
effectively makes them useless, as we cannot rely on them to
actually reflect the set of values allowed in an object.
Sorry, but why are you saying we can not rely on them to actually reflect the
set of values allowed in an
It's an ugly hack in extract_range_from_assert:
/* Special handling for integral types with super-types. Some FEs
construct integral types derived from other types and restrict
the range of values these new types may take.
It may happen that LIMIT is actually smaller than
On Tue, 2006-02-28 at 18:42 +0100, Eric Botcazou wrote:
Basically, the way Ada sets TYPE_MIN_VALUE/TYPE_MAX_VALUE
effectively makes them useless, as we cannot rely on them to
actually reflect the set of values allowed in an object.
Sorry, but why are you saying we can not rely
On Tue, 2006-02-28 at 18:50 +0100, Eric Botcazou wrote:
It's an ugly hack in extract_range_from_assert:
/* Special handling for integral types with super-types. Some FEs
construct integral types derived from other types and restrict
the range of values these new types may
Jeffrey A Law wrote:
Diego -- do you recall what code actually triggered this problem?
Not sure, exactly.
However, in figuring out what this code was working around, I remembered
this thread from Kenner where he fixed this particular FE bug:
On Tue, 2006-02-28 at 17:51 -0500, Diego Novillo wrote:
Jeffrey A Law wrote:
Diego -- do you recall what code actually triggered this problem?
Not sure, exactly.
However, in figuring out what this code was working around, I remembered
this thread from Kenner where he fixed this
On Tue, Feb 28, 2006 at 03:40:37PM -0700, Jeffrey A Law wrote:
Here's a great example from uintp.adb (which is the cause of the
testsuite hang FWIW)
We have a loop with the following termination code in uintp.num_bits
# BLOCK 8
# PRED: 5 [100.0%] (fallthru,exec) 6
Basically, the way Ada sets TYPE_MIN_VALUE/TYPE_MAX_VALUE
effectively makes them useless, as we cannot rely on them to
actually reflect the set of values allowed in an object.
As we've all told you numerous times before, TYPE_MIN_VALUE/TYPE_MAX_VALUE
are meant *precisely*
Which doesn't mean that Ada is DTRT. On the contrary, Ada ought to be
fixed. It's an ugly hack in extract_range_from_assert:
It wasn't a bug in the Ada front end, but in fold, which was long-ago
fixed. I thought this was removed a long time ago?
We have a loop with the following termination code in uintp.num_bits
This sure looks like a bug in Num_Bits to me, not in the compilation
of the front-end.
The relevant code is:
   function Num_Bits (Input : Uint) return Nat is
      Bits : Nat;
      Num  : Nat;
   begin
      if
On Wed, Feb 22, 2006 at 10:06:25AM -0700, Jeffrey A Law wrote:
This does highlight one of the issues that keeps nagging at me.
For an enumeration type, presumably we have TYPE_PRECISION set to
the minimum precision necessary to hold all the values in the enum.
What are
On 2/21/06, Richard Kenner [EMAIL PROTECTED] wrote:
But if the values in there do not reflect the reality of what values
are valid for the type, then I don't see how they can be generally
useful -- that's my point. We have two fields that are inaccurate,
apparently on
On 2/21/06, Jeffrey A Law [EMAIL PROTECTED] wrote:
On Mon, 2006-02-20 at 22:00 +0100, Richard Guenther wrote:
On 2/20/06, Jeffrey A Law [EMAIL PROTECTED] wrote:
On Sun, 2006-02-19 at 20:43 +0100, Laurent GUERBY wrote:
On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
On 2/21/06, Mark Mitchell [EMAIL PROTECTED] wrote:
Jeffrey A Law wrote:
My feeling? Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the type may hold.
Any other definition effectively renders those values useless.
I agree -- with the
On Wed, 2006-02-22 at 10:54 +0100, Richard Guenther wrote:
type T1 is range 0 .. 127;
-- Compiler will choose some type for T'Base, likely to be -128..127
-- but could be Integer (implementation dependent)
subtype T is T1 range 0 .. 100;
R : T := 100+X-X;
--
On Wed, 2006-02-22 at 11:51 +0100, Richard Guenther wrote:
On 2/21/06, Mark Mitchell [EMAIL PROTECTED] wrote:
Jeffrey A Law wrote:
My feeling? Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the type may hold.
Any other definition
On Tue, 2006-02-21 at 14:56 -0800, Mark Mitchell wrote:
Jeffrey A Law wrote:
My feeling? Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the type may hold.
Any other definition effectively renders those values useless.
I agree -- with
On Mon, 2006-02-20 at 23:00 +0100, Richard Guenther wrote:
On 2/20/06, Richard Kenner [EMAIL PROTECTED] wrote:
Indeed. Ada should in this case generate
R = (T)( (basetype)100 + (basetype)X - (basetype)X )
i.e. carry out all arithmetic explicitly in the basetype and
Jeffrey A Law wrote:
This does highlight one of the issues that keeps nagging at me.
For an enumeration type, presumably we have TYPE_PRECISION set to
the minimum precision necessary to hold all the values in the enum.
What are TYPE_MIN_VAL/TYPE_MAX_VAL? Does TYPE_MAX_VALUE include
values
On Wed, 2006-02-22 at 09:00 -0800, Mark Mitchell wrote:
Jeffrey A Law wrote:
This does highlight one of the issues that keeps nagging at me.
For an enumeration type, presumably we have TYPE_PRECISION set to
the minimum precision necessary to hold all the values in the enum.
What are
Hi Laurent,
On Wednesday 22 February 2006 12:34, Laurent GUERBY wrote:
On Wed, 2006-02-22 at 10:54 +0100, Richard Guenther wrote:
type T1 is range 0 .. 127;
-- Compiler will choose some type for T'Base, likely to be -128..127
-- but could be Integer (implementation
On Mon, 2006-02-20 at 16:49 -0500, Richard Kenner wrote:
Which leaves us with a very fundamental issue. Namely that we can not
use TYPE_MIN_VALUE or TYPE_MAX_VALUE for ranges.
The point is that it *is* supposed to be usable in general. If it can't
be used in a specific case,
On Sun, 2006-02-19 at 20:15 +0100, Eric Botcazou wrote:
Now for the first oddity. If we look at the underlying type
for last we have a type natural___XDLU_0__2147483647. What's
interesting about it is that it has a 32bit type precision, but
the min/max values only specify 31 bits. ie,
On Mon, 2006-02-20 at 22:00 +0100, Richard Guenther wrote:
On 2/20/06, Jeffrey A Law [EMAIL PROTECTED] wrote:
On Sun, 2006-02-19 at 20:43 +0100, Laurent GUERBY wrote:
On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
Second, for a given integer type (such as
But if the values in there do not reflect the reality of what values
are valid for the type, then I don't see how they can be generally
useful -- that's my point. We have two fields that are inaccurate,
apparently on purpose, and as a result they are basically unusable.
No,
On Tue, 2006-02-21 at 12:46 -0500, Richard Kenner wrote:
But if the values in there do not reflect the reality of what values
are valid for the type, then I don't see how they can be generally
useful -- that's my point. We have two fields that are inaccurate,
apparently on
Err, no they don't. Clearly an object of the type can hold a value
outside TYPE_MIN_VALUE/TYPE_MAX_VALUE at runtime. That IMHO means
that TYPE_MIN_VALUE/TYPE_MAX_VALUE do not reflect reality.
What does "can" mean here? If it means "is physically capable of", then
TYPE_MIN_VALUE and
On Tue, 2006-02-21 at 13:31 -0500, Richard Kenner wrote:
Err, no they don't. Clearly an object of the type can hold a value
outside TYPE_MIN_VALUE/TYPE_MAX_VALUE at runtime. That IMHO means
that TYPE_MIN_VALUE/TYPE_MAX_VALUE do not reflect reality.
What does "can" mean here?
Can a conforming program set the object to a value outside of
TYPE_MIN_VALUE/TYPE_MAX_VALUE?
Let's forget about the obscure unchecked conversion - 'Valid case
because we're going to handle that in whatever way we need to.
So the answer is no.
On Tue, 2006-02-21 at 13:57 -0500, Richard Kenner wrote:
Can a conforming program set the object to a value outside of
TYPE_MIN_VALUE/TYPE_MAX_VALUE?
Let's forget about the obscure unchecked conversion - 'Valid case
because we're going to handle that in whatever way we need to.
OK. So if a program sets an object to a value outside
TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
invalid for the purposes of this discussion?
Correct. Of course, it has to be the *program* that's doing the set
(meaning setting a user-defined variable). If the compiler
On Tue, 2006-02-21 at 14:14 -0500, Richard Kenner wrote:
OK. So if a program sets an object to a value outside
TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
invalid for the purposes of this discussion?
Correct. Of course, it has to be the *program* that's doing the
In this specific case it is a user variable. However, we should
probably clarify the compiler-temporary case as well, since VRP really
does not and should not care whether an object is a user variable or
a compiler-generated temporary.
Right. The only distinction is that if it's a
Jeffrey A Law wrote:
My feeling? Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the type may hold.
Any other definition effectively renders those values useless.
I agree -- with the obvious caveat that it need not be the case that the
On Sun, 2006-02-19 at 20:43 +0100, Laurent GUERBY wrote:
On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
Second, for a given integer type (such as
natural___XDLU_0_2147483647), the type for the nodes in TYPE_MIN_VALUE
and TYPE_MAX_VALUE really should be a
On 2/20/06, Jeffrey A Law [EMAIL PROTECTED] wrote:
On Sun, 2006-02-19 at 20:43 +0100, Laurent GUERBY wrote:
On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
Second, for a given integer type (such as
natural___XDLU_0_2147483647), the type for the nodes in TYPE_MIN_VALUE
Which leaves us with a very fundamental issue. Namely that we can not
use TYPE_MIN_VALUE or TYPE_MAX_VALUE for ranges.
The point is that it *is* supposed to be usable in general. If it can't
be used in a specific case, let's address that specific case and understand
what needs
Indeed. Ada should in this case generate
R = (T)( (basetype)100 + (basetype)X - (basetype)X )
i.e. carry out all arithmetic explicitly in the basetype and only for
stores and loads use the subtype.
That is indeed required by the language and what is normally generated.
It
On 2/20/06, Richard Kenner [EMAIL PROTECTED] wrote:
Indeed. Ada should in this case generate
R = (T)( (basetype)100 + (basetype)X - (basetype)X )
i.e. carry out all arithmetic explicitly in the basetype and only for
stores and loads use the subtype.
That is indeed
When building a-textio in libada, today's gcc build fails when memory
is exhausted.
Seems like VRP is looping, consuming more and more memory.
Andrew.
make[7]: `a-teioed.o' is up to date.
/home/aph/gcc/build-x86_64-unknown-linux-gnu/./gcc/xgcc
On Feb 19, 2006, at 12:09 PM, Andrew Haley wrote:
When building a-textio in libada, today's gcc build fails when memory
is exhausted.
This has already been reported as PR 26348 and it looks like a bug
in the Ada front-end.
Thanks,
Andrew Pinski
This has already been reported as PR 26348 and it looks like a bug
in the Ada front-end.
Note that technically, it's still a regression caused by a non-Ada patch.
Anyway, hopefully Eric and Jeff can work together in identifying the
proper fix.
Arno
Andrew Haley [EMAIL PROTECTED] writes:
When building a-textio in libada, today's gcc build fails when memory
is exhausted.
This is PR26348.
Andreas.
--
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
Andrew Pinski writes:
On Feb 19, 2006, at 12:09 PM, Andrew Haley wrote:
When building a-textio in libada, today's gcc build fails when memory
is exhausted.
This has already been reported as PR 26348 and it looks like a bug
in the Ada front-end.
Oh right, thanks. Was this
Anyway, hopefully Eric and Jeff can work together in identifying the
proper fix.
Jeff already did a thorough analysis of the problem (thanks!) and came to the
following double conclusion. Quoting him:
First, the inconsistency between the type's precision and its min/max
values needs to be
Second, for a given integer type (such as
natural___XDLU_0_2147483647), the type for the nodes in TYPE_MIN_VALUE
and TYPE_MAX_VALUE really should be a natural___XDLU_0_2147483647.
ie, the type of an integer constant should be the same as the type of
its min/max values.
No, the
On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
Second, for a given integer type (such as
natural___XDLU_0_2147483647), the type for the nodes in TYPE_MIN_VALUE
and TYPE_MAX_VALUE really should be a natural___XDLU_0_2147483647.
ie, the type of an integer constant
No, the type of the bounds of a subtype should be the *base type*.
That's how the tree has always looked, as far back as I can remember.
This is because intermediate computations can produce results
outside the subtype range but within the base type range (RM 3.5(6)),