Currently we permit implicit conversions between vectors whose total
bit sizes are equal but which are divided into differing numbers of subparts.
In some circumstances this rule seems overly lax.  For example,
the following code, which uses vector types defined for the ARM NEON
instruction set (definitions reproduced from the intrinsics header file),
is accepted without error or warning:

...
typedef __builtin_neon_qi int8x8_t      __attribute__ ((__vector_size__ (8)));
typedef __builtin_neon_hi int16x4_t     __attribute__ ((__vector_size__ (8)));
...

int8x8_t f (int16x4_t a)
{
  return a;
}

Here, the compiler is not complaining about an attempt to implicitly
convert a vector of four 16-bit quantities to a vector of eight
8-bit quantities.

This lack of type safety is unsettling, and I wonder whether it should be
fixed with a patch along the lines of the (not yet fully tested) one below.
Does that sound reasonable?  It seems right to fix the generic code here,
even though the test case in hand is target-specific.  If this approach
is unreasonable, I suppose some target-specific hooks will be needed.

Mark


--


Index: gcc/c-common.c
===================================================================
--- gcc/c-common.c      (revision 117639)
+++ gcc/c-common.c      (working copy)
@@ -1014,7 +1014,8 @@ vector_types_convertible_p (tree t1, tre
             && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE ||
                 TYPE_PRECISION (t1) == TYPE_PRECISION (t2))
             && INTEGRAL_TYPE_P (TREE_TYPE (t1))
-               == INTEGRAL_TYPE_P (TREE_TYPE (t2)));
+               == INTEGRAL_TYPE_P (TREE_TYPE (t2))
+            && TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2));
 }

 /* Convert EXPR to TYPE, warning about conversion problems with constants.
