On Tue, May 6, 2014 at 8:57 AM, Tobias Grosser <tob...@grosser.es> wrote:
> On 05/05/2014 21:11, Roman Gareev wrote:
>>
>> Hi Tobias,
>>
>> thank you for your reply! I have questions about types. Could you
>> please answer them?
>
>
> I looked through them and most seem to be related to how we derive types in
> graphite. As I said before, this is a _very_ fragile hack
> that works surprisingly well, but which is both too complex and
> in the end still incorrect. Sebastian wrote this code, so I am not familiar
> with the details. I also don't think it is necessary to
> understand the details. Instead of reusing this code, we should start
> implementing the new code using 64-bit signed integers. This
> should be correct in 99.9% of the cases.

Of course compilers have to work correctly in 100% of the cases, so
if you choose an approach that will be incorrect in > 0% of the cases
then you should make sure to detect those and not apply any transform.
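Concretely, if the generated expressions are evaluated in 64-bit signed
arithmetic, each operation needs an overflow check so the transform can be
skipped when it would go wrong.  A minimal sketch (hypothetical helper, not
actual graphite code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper, not graphite code: check that the product of
   two bound values fits in int64_t.  If it does not, the code
   generator has to punt instead of emitting wrong code.
   __builtin_mul_overflow (GCC/clang) returns true on overflow.  */
static bool
mul_fits_in_int64 (int64_t a, int64_t b, int64_t *res)
{
  return !__builtin_mul_overflow (a, b, res);
}
```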

> One of the selling points for the new isl code generation was however,
> that it will be possible to get precise information about the types
> needed for code generation. There existed already a patch for an older
> isl version and there is a partial patch for newer versions that Sven and I
> have been working on. It is not yet stable enough to be tested, but I
> attached it anyway for illustration. The idea is to
> introduce a set of functions
>
> +       int isl_ast_expr_has_known_size(
> +               __isl_keep isl_ast_expr *expr);
> +       int isl_ast_expr_is_bounded(
> +               __isl_keep isl_ast_expr *expr);
> +       int isl_ast_expr_is_signed(
> +               __isl_keep isl_ast_expr *expr);
> +       size_t isl_ast_expr_size_in_bits(
> +               __isl_keep isl_ast_expr *expr);
>
> in isl, where we can precisely compute the minimal legal type. We can then
> use this during code generation to derive good types.

You should be able to do this for all types you need up-front and check
whether a suitable GIMPLE type is available, for example by using
lang_hooks.types.type_for_size (), which returns NULL_TREE if there
isn't one.
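The punting logic can be sketched in plain C (an analogue, not the actual
lang hook): map a required precision to the smallest available integer
width, with 0 standing in for the NULL_TREE "no such type" result:

```c
#include <stddef.h>

/* Plain-C analogue of lang_hooks.types.type_for_size (): return the
   width of the smallest standard integer width with at least
   PRECISION bits, or 0 when none exists (the NULL_TREE case, where
   the transform has to be rejected).  */
static int
width_for_precision (int precision)
{
  static const int widths[] = { 8, 16, 32, 64 };
  for (size_t i = 0; i < sizeof widths / sizeof widths[0]; i++)
    if (precision <= widths[i])
      return widths[i];
  return 0;
}
```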

>> Questions related to “type_for_interval”:
>>
>> 1. What happens in these lines?
>>
>> int precision = MAX (mpz_sizeinbase (bound_one, 2),
>>                      mpz_sizeinbase (bound_two, 2));
>> if (precision > BITS_PER_WORD)
>>   {
>>     gloog_error = true;
>>     return integer_type_node;
>>   }
>
>
>>
>> Do we try to count the maximum number of value bits in bound_one and
>> bound_two?
>
>
> I believe so.
>
>
>> Why can't it be greater than BITS_PER_WORD?
>
>
> No idea.

Looks artificial to me - up to BITS_PER_WORD is certainly fast on
the CPU, but it will reject 'long long' on 32-bit x86, for example.
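To illustrate: mpz_sizeinbase (x, 2) is the bit length of |x|, so a bound
like 2^40 needs 41 bits and the quoted check rejects it on a 32-bit host
even though 'long long' could represent it.  A plain-C analogue of the bit
count, for illustration only:

```c
/* Plain-C analogue of mpz_sizeinbase (v, 2): the number of binary
   digits of v, with zero counting as one digit.  */
static int
size_in_base2 (unsigned long long v)
{
  int bits = 0;
  do
    {
      bits++;
      v >>= 1;
    }
  while (v != 0);
  return bits;
}
```

With this, size_in_base2 (1ULL << 40) is 41, which is greater than a
32-bit BITS_PER_WORD.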

>
>> 2. Why do we want to generate signed types as much as possible?
>
>
> Because negative values are common in the code cloog generates. To be safe
> we generate signed code.

Signed types allow for more optimistic optimization later on (they have
undefined behavior on overflow) - which can be both a benefit and a problem.
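To make the tradeoff concrete (a minimal standalone example, not graphite
code): unsigned arithmetic wraps by definition, while signed overflow is
undefined, which is what lets later passes fold e.g. `x + 1 > x` to a
constant true for signed x:

```c
#include <limits.h>

/* Unsigned overflow is well defined: it wraps modulo 2^N, so
   UINT_MAX + 1u is 0u.  */
static unsigned
wrapping_inc (unsigned u)
{
  return u + 1u;
}

/* For signed int the same increment is undefined behavior at
   INT_MAX; the optimizer may assume it never happens, e.g. folding
   the signed comparison x + 1 > x to a constant true at -O2.  */
```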

>
>> 3. Why do we always have enough precision in case of precision <
>> wider_precision?
>
>
> I have no idea (and did not bother trying to understand)
>
>
>> Questions related to “clast_to_gcc_expression”:
>>
>> 4. What is the idea behind this code?
>>
>> if (POINTER_TYPE_P (TREE_TYPE (name)) != POINTER_TYPE_P (type))
>> name = convert_to_ptrofftype (name);
>
>
> Sorry, again no idea.

We have special requirements in GIMPLE for pointer + offset arithmetic.
Thus if either of the operands is a pointer the other operand has to be
of pointer-offset type.
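At the C level the analogue looks like this (illustration only, not what
graphite emits): the offset operand of a pointer addition is an integer of
pointer-offset type, never another pointer:

```c
#include <stddef.h>

/* Analogue of GIMPLE's POINTER_PLUS_EXPR: a pointer plus an integer
   offset of pointer-offset type (ptrdiff_t here stands in for
   sizetype).  */
static char *
ptr_plus (char *base, ptrdiff_t off)
{
  return base + off;
}
```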

>
>> 5. Why do we check POINTER_TYPE_P(type)? (“type” has tree type and the
>> manual says that a tree is a pointer type)
>
>
> Sorry, again no idea.

The type of tree in the GCC implementation is a pointer type, but here we
check whether the intermediate language type referred to by the
implementation object 'type' is a pointer type.

>
>> Questions related to “max_precision_type”:
>>
>> 6. Why is type1, for example, the maximal precision type when
>> POINTER_TYPE_P (type1) is true?
>
>
> I don't know. This really lacks comments. This may very well have a good
> reason, but it is hard to see as most of the other stuff for deriving the
> types is very much a hack.
>
>
>> 7. Why do we have enough precision for p2 when p1 > p2 and type1 is
>> signed?
>
>
> No idea.
>
>
>> 8. Why do we always build signed integer type in the line: “type =
>> build_nonstandard_integer_type (precision, false);”?
>
>
> No idea.
>
>
>> Questions related to “type_for_clast_red”:
>>
>> 9. Why do we use this code in case of clast_red_sum?
>>
>> value_min (m1, bound_one, bound_two);
>> value_min (m2, b1, b2);
>> mpz_add (bound_one, m1, m2);
>
>
> We try to derive the bounds for the sum. No idea, regarding the actual
> computation.
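For reference, the standard interval rule for a sum is component-wise: if
x is in [l1, u1] and y is in [l2, u2], then x + y is in [l1 + l2, u1 + u2].
The quoted snippet instead takes minima across the two intervals, which is
hard to reconcile with that rule.  A sketch of the component-wise version
(plain long long standing in for mpz_t):

```c
/* Component-wise interval addition: lower bounds add, upper bounds
   add.  Uses long long for illustration; the graphite code works on
   GMP mpz_t values.  */
static void
interval_add (long long l1, long long u1,
              long long l2, long long u2,
              long long *lo, long long *hi)
{
  *lo = l1 + l2;
  *hi = u1 + u2;
}
```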
>
>
>> Can bound_one be greater then bound_two? (We also consider two cases
>> in “type_for_interval”)
>
>
> I would guess not. This may be a bug.
>
>
>> 10. Why do we assume that new bounds are min(bound_one, bound_two) and
>> min(b1, b2) in case of clast_red_min?
>
>
> No idea. This seems incorrect.
>
> I would have expected
>
> bound_one = value_max(bound_one, b1)
> bound_two = value_max(bound_two, b2)
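For comparison, standard interval arithmetic for a min takes the minimum on
both ends: if x is in [l1, u1] and y is in [l2, u2], then min(x, y) is in
[min(l1, l2), min(u1, u2)].  Whether the quoted code matches this depends
on which variables hold the lower and which the upper bounds; a sketch of
the rule itself:

```c
static long long
llmin (long long a, long long b)
{
  return a < b ? a : b;
}

/* Interval bound of min (x, y): the minimum of the lower bounds and
   the minimum of the upper bounds.  */
static void
interval_min (long long l1, long long u1,
              long long l2, long long u2,
              long long *lo, long long *hi)
{
  *lo = llmin (l1, l2);
  *hi = llmin (u1, u2);
}
```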
>
>
> Again, I do not think spending time to understand the heuristics in
> type_for_clast is worth it. Some are rather complex and work well, some
> are just buggy but have never been triggered, and a small percentage actually
> might be reusable later (pointer handling). As the approach
> has generally too little information to work reliably, I would not spend
> any time on it. I pointed out the correct approach above. Going with 64bit
> types will bring us a very long way, and we can finish the isl patch to get
> it 100% right.

If ISL can give you a type precision and signedness for each expression,
then I'd stick to that where it is available (and punt otherwise).  I
wouldn't go for all-64-bit types.

Richard.

> Cheers,
> Tobias
