[clang] [Clang] Demote always_inline error to warning for mismatching SME attrs (PR #100740)

2024-07-26 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/100740
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [CLANG][AArch64]Add Neon vectors for fpm8_t (PR #99865)

2024-07-26 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Sorry for the noise but I think I have a more well-formed question this time.

Would it be possible to use `AArch64SVEACLETypes.def` to reduce some of the 
boilerplate changes?  I'm not sure how much of this is tied to SVE (or rather 
scalable types) but I'm wondering if clang can be refactored to use 
`AARCH64_TYPE` for the places where the types do not matter (e.g. when 
populating enums) and only use `SVE_TYPE` for the places that care about the 
scalable property.

"No" is a fine answer, but looking at the code I'd rather not have to repeat 
all these changes for every new AArch64 scalar type we might want in the future.

https://github.com/llvm/llvm-project/pull/99865


[clang] [CLANG][AArch64]Add Neon vectors for fpm8_t (PR #99865)

2024-07-26 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Is it possible to use TargetExtType for the scalar type, given this is a 
target-specific type?  I fully expect LLVM not to support vectors of 
TargetExtType but I wonder if that can be relaxed given our only use case is to 
pass them to intrinsics. For anything more exotic we can add intrinsics to cast 
them to i8 vectors.  Alternatively, we could use TargetExtType for all fp8 
scalar and vector types.

https://github.com/llvm/llvm-project/pull/99865


[clang] [Clang][SveEmitter] Split up TargetGuard into SVE and SME component. (PR #96482)

2024-06-24 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.

This is certainly a step in the right direction.

https://github.com/llvm/llvm-project/pull/96482


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits


@@ -1781,7 +1781,13 @@ void SVEEmitter::createStreamingAttrs(raw_ostream &OS,
ACLEKind Kind) {
   uint64_t VerifyRuntimeMode = getEnumValueForFlag("VerifyRuntimeMode");
   uint64_t IsStreamingCompatibleFlag =
   getEnumValueForFlag("IsStreamingCompatible");
+
   for (auto &Def : Defs) {
+if (!Def->isFlagSet(VerifyRuntimeMode) &&
+(Def->getGuard().contains("sve") + Def->getGuard().contains("sme")) ==
+2)

paulwalker-arm wrote:

Surely this is just
```
if (!Def->isFlagSet(VerifyRuntimeMode) && Def->getGuard().contains("sve") && 
Def->getGuard().contains("sme"))
```
isn't it?

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits


@@ -2264,6 +2278,18 @@ let TargetGuard = "sve2p1" in {
   defm SVPMOV_TO_VEC_LANE_D : PMOV_TO_VEC<"svpmov", "lUl", 
"aarch64_sve_pmov_to_vector_lane" ,[], ImmCheck1_7>;
 }
 
+let TargetGuard = "sve2p1|sme2p1" in {
+  // DUPQ
+  def SVDUP_LANEQ_B  : SInst<"svdup_laneq[_{d}]", "ddi",  "cUc", MergeNone, 
"aarch64_sve_dup_laneq", [VerifyRuntimeMode], [ImmCheck<1, ImmCheck0_15>]>;
+  def SVDUP_LANEQ_H  : SInst<"svdup_laneq[_{d}]", "ddi",  "sUsh", MergeNone, 
"aarch64_sve_dup_laneq", [VerifyRuntimeMode], [ImmCheck<1, ImmCheck0_7>]>;
+  def SVDUP_LANEQ_S  : SInst<"svdup_laneq[_{d}]", "ddi",  "iUif", MergeNone, 
"aarch64_sve_dup_laneq", [VerifyRuntimeMode], [ImmCheck<1, ImmCheck0_3>]>;
+  def SVDUP_LANEQ_D  : SInst<"svdup_laneq[_{d}]", "ddi",  "lUld", MergeNone, 
"aarch64_sve_dup_laneq", [VerifyRuntimeMode], [ImmCheck<1, ImmCheck0_1>]>;
+}
+
+let TargetGuard = "(sve2p1|sme2),bf16" in {

paulwalker-arm wrote:

This should match the above and thus be "(sve2p1|sme2p1),bf16".

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm commented:

With the change of default it's very hard to check everything, but we've 
already agreed there'll need to be a full audit once all the in-flight work has 
landed.  I did spot one thing though:

Should the integer svclamp and svrevd builtins be protected by "sve2p1|sme" 
rather than "sve2p1|sme2"?

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits


@@ -286,17 +290,20 @@ let TargetGuard = "sve,f64mm,bf16" in {
 }
 
 let TargetGuard = "sve,bf16" in {
+  def SVBFMMLA   : SInst<"svbfmmla[_{0}]",   "MMdd",  "b", MergeNone, 
"aarch64_sve_bfmmla",   [IsOverloadNone]>;
+}
+
+let TargetGuard = "(sve|sme),bf16" in {
   def SVBFDOT: SInst<"svbfdot[_{0}]","MMdd",  "b", MergeNone, 
"aarch64_sve_bfdot",[IsOverloadNone, VerifyRuntimeMode]>;
   def SVBFMLALB  : SInst<"svbfmlalb[_{0}]",  "MMdd",  "b", MergeNone, 
"aarch64_sve_bfmlalb",  [IsOverloadNone, VerifyRuntimeMode]>;
   def SVBFMLALT  : SInst<"svbfmlalt[_{0}]",  "MMdd",  "b", MergeNone, 
"aarch64_sve_bfmlalt",  [IsOverloadNone, VerifyRuntimeMode]>;
-  def SVBFMMLA   : SInst<"svbfmmla[_{0}]",   "MMdd",  "b", MergeNone, 
"aarch64_sve_bfmmla",   [IsOverloadNone, VerifyRuntimeMode]>;
   def SVBFDOT_N  : SInst<"svbfdot[_n_{0}]",  "MMda",  "b", MergeNone, 
"aarch64_sve_bfdot",[IsOverloadNone, VerifyRuntimeMode]>;
   def SVBFMLAL_N : SInst<"svbfmlalb[_n_{0}]","MMda",  "b", MergeNone, 
"aarch64_sve_bfmlalb",  [IsOverloadNone, VerifyRuntimeMode]>;

paulwalker-arm wrote:

Not relevant to this patch but there's a typo here: `SVBFMLAL_N` should be 
`SVBFMLALB_N`.

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-20 Thread Paul Walker via cfe-commits


@@ -17,7 +25,7 @@
 // CPP-CHECK-NEXT:[[TMP1:%.*]] = shl nuw nsw i64 [[TMP0]], 4
 // CPP-CHECK-NEXT:ret i64 [[TMP1]]
 //
-uint64_t test_svcntb()
+uint64_t test_svcntb(void) MODE_ATTR

paulwalker-arm wrote:

Is there a problem we need to worry about with using the SME keywords with `()` 
functions?

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits


@@ -286,10 +290,13 @@ let TargetGuard = "sve,f64mm,bf16" in {
 }
 
 let TargetGuard = "sve,bf16" in {
+  def SVBFMMLA   : SInst<"svbfmmla[_{0}]",   "MMdd",  "b", MergeNone, 
"aarch64_sve_bfmmla",   [IsOverloadNone]>;
+}
+
+let TargetGuard = "(sve,bf16)|sme" in {

paulwalker-arm wrote:

Looking at the specification suggests this should be `(sve|sme),bf16`?

 
Also, the closing `}` could do with a matching comment.

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits


@@ -1781,7 +1781,12 @@ void SVEEmitter::createStreamingAttrs(raw_ostream &OS,
ACLEKind Kind) {
   uint64_t VerifyRuntimeMode = getEnumValueForFlag("VerifyRuntimeMode");
   uint64_t IsStreamingCompatibleFlag =
   getEnumValueForFlag("IsStreamingCompatible");
+
   for (auto &Def : Defs) {
+assert((((Def->getGuard().contains("sve") +
+  Def->getGuard().contains("sme")) <= 1) ||
+Def->isFlagSet(VerifyRuntimeMode)) &&

paulwalker-arm wrote:

Given this is a build time error that represents a bug in `arm_sve.td`, do you 
think it's worth using `llvm_unreachable` rather than an assert?

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits


@@ -2264,6 +2278,18 @@ let TargetGuard = "sve2p1" in {
   defm SVPMOV_TO_VEC_LANE_D : PMOV_TO_VEC<"svpmov", "lUl", 
"aarch64_sve_pmov_to_vector_lane" ,[], ImmCheck1_7>;
 }
 
+let TargetGuard = "sve2p1|sme2" in {

paulwalker-arm wrote:

I think this is a sme2p1 feature?

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Expose compatible SVE intrinsics with only +sme (PR #95787)

2024-06-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm commented:

Not for this patch but I do wonder if there's value in protecting non-bf16 
instruction backed builtins (e.g. loads/stores and shuffles) with the bf16 
target guard.  I figure we'll either error on the use of the `svbfloat` type or 
the code generation should just work, and thus there'd be no reason to 
artificially restrict user code. 

https://github.com/llvm/llvm-project/pull/95787


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-15 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-14 Thread Paul Walker via cfe-commits


@@ -559,31 +559,76 @@ SemaARM::ArmStreamingType getArmStreamingFnType(const 
FunctionDecl *FD) {
   return SemaARM::ArmNonStreaming;
 }
 
-static void checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
- const FunctionDecl *FD,
- SemaARM::ArmStreamingType BuiltinType) {
+static bool checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
+ FunctionDecl *FD,

paulwalker-arm wrote:

Can `FD`'s constness be restored?  I think you only had to remove it because 
you previously called `getODRHash`.

https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-06 Thread Paul Walker via cfe-commits


@@ -622,7 +679,8 @@ bool SemaARM::CheckSMEBuiltinFunctionCall(unsigned 
BuiltinID,
 }
 
 if (BuiltinType)
-  checkArmStreamingBuiltin(SemaRef, TheCall, FD, *BuiltinType);
+  HasError |= checkArmStreamingBuiltin(SemaRef, TheCall, FD, *BuiltinType,

paulwalker-arm wrote:

Would it be wrong to return immediately? I ask because there's
```
  switch (BuiltinID) {
  default:
return false;
```
which should be `return HasError;`? But if we can return directly then there's 
less chance of other similar issues.

https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-06 Thread Paul Walker via cfe-commits


@@ -559,31 +559,86 @@ SemaARM::ArmStreamingType getArmStreamingFnType(const 
FunctionDecl *FD) {
   return SemaARM::ArmNonStreaming;
 }
 
-static void checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
- const FunctionDecl *FD,
- SemaARM::ArmStreamingType BuiltinType) {
+static bool checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
+ FunctionDecl *FD,
+ SemaARM::ArmStreamingType BuiltinType,
+ unsigned BuiltinID) {
   SemaARM::ArmStreamingType FnType = getArmStreamingFnType(FD);
-  if (BuiltinType == SemaARM::ArmStreamingOrSVE2p1) {
-// Check intrinsics that are available in [sve2p1 or sme/sme2].
-llvm::StringMap<bool> CallerFeatureMap;
-S.Context.getFunctionFeatureMap(CallerFeatureMap, FD);
-if (Builtin::evaluateRequiredTargetFeatures("sve2p1", CallerFeatureMap))
-  BuiltinType = SemaARM::ArmStreamingCompatible;
-else
+
+  // Check if the intrinsic is available in the right mode, i.e.
+  // * When compiling for SME only, the caller must be in streaming mode.
+  // * When compiling for SVE only, the caller must be in non-streaming mode.
+  // * When compiling for both SVE and SME, the caller can be in either mode.
+  if (BuiltinType == SemaARM::VerifyRuntimeMode) {
+static llvm::StringMap<bool> CallerFeatureMapWithoutSVE,
+CallerFeatureMapWithoutSME;

paulwalker-arm wrote:

I hope I'm wrong but I think the use of static here is almost certainly bad 
because there's nothing stopping multiple threads from calling 
`checkArmStreamingBuiltin`. I looked for other instances but all I could find 
involved one-time static initialisation, after which the data is effectively 
constant.

Random Idea:
Rather than filtering the feature map you could process `BuiltinTargetGuards`, 
by which I mean you could split the guard into streaming and non-streaming 
(perhaps that's what `|` effectively means) and then you use whichever side is 
relevant to the function's mode of operation.

Of course you could just remove the cache and leave compile time as a worry for 
tomorrow.

https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-06 Thread Paul Walker via cfe-commits


@@ -559,31 +559,86 @@ SemaARM::ArmStreamingType getArmStreamingFnType(const 
FunctionDecl *FD) {
   return SemaARM::ArmNonStreaming;
 }
 
-static void checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
- const FunctionDecl *FD,
- SemaARM::ArmStreamingType BuiltinType) {
+static bool checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
+ FunctionDecl *FD,
+ SemaARM::ArmStreamingType BuiltinType,
+ unsigned BuiltinID) {
   SemaARM::ArmStreamingType FnType = getArmStreamingFnType(FD);
-  if (BuiltinType == SemaARM::ArmStreamingOrSVE2p1) {
-// Check intrinsics that are available in [sve2p1 or sme/sme2].
-llvm::StringMap<bool> CallerFeatureMap;
-S.Context.getFunctionFeatureMap(CallerFeatureMap, FD);
-if (Builtin::evaluateRequiredTargetFeatures("sve2p1", CallerFeatureMap))
-  BuiltinType = SemaARM::ArmStreamingCompatible;
-else
+
+  // Check if the intrinsic is available in the right mode, i.e.
+  // * When compiling for SME only, the caller must be in streaming mode.
+  // * When compiling for SVE only, the caller must be in non-streaming mode.
+  // * When compiling for both SVE and SME, the caller can be in either mode.
+  if (BuiltinType == SemaARM::VerifyRuntimeMode) {
+static llvm::StringMap<bool> CallerFeatureMapWithoutSVE,
+CallerFeatureMapWithoutSME;
+
+// Cache the feature maps, to avoid having to recalculate this for each
+// builtin call.
+static unsigned CachedODRHash = 0;

paulwalker-arm wrote:

As above.

https://github.com/llvm/llvm-project/pull/93802


[clang] [llvm] [AArch64] Fix feature flags dependecies (PR #90612)

2024-06-05 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

@momchil-velikov's commentary applies globally and is not specific to FPMR. 
Which is to say, Arm switched a while back from "all system registers need to 
be protected by their feature flag" to "only protect system registers where 
there is a need".  The rationale is that we see it as unnecessarily burdensome 
to asm writers to force them to use a feature flag in order to use the pretty 
printed version of an instruction they can already emit (this is especially 
true when dynamic feature detection is used, rather than wanting to explicitly 
say the feature must be present).  We've no direct plans to revisit all 
previously implemented system registers unless there's a specific need.

https://github.com/llvm/llvm-project/pull/90612


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-04 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-04 Thread Paul Walker via cfe-commits


@@ -561,16 +561,61 @@ SemaARM::ArmStreamingType getArmStreamingFnType(const 
FunctionDecl *FD) {
 
 static void checkArmStreamingBuiltin(Sema &S, CallExpr *TheCall,
  const FunctionDecl *FD,
- SemaARM::ArmStreamingType BuiltinType) {
+ SemaARM::ArmStreamingType BuiltinType,
+ unsigned BuiltinID) {
   SemaARM::ArmStreamingType FnType = getArmStreamingFnType(FD);
-  if (BuiltinType == SemaARM::ArmStreamingOrSVE2p1) {
-// Check intrinsics that are available in [sve2p1 or sme/sme2].
-llvm::StringMap<bool> CallerFeatureMap;
-S.Context.getFunctionFeatureMap(CallerFeatureMap, FD);
-if (Builtin::evaluateRequiredTargetFeatures("sve2p1", CallerFeatureMap))
+
+  // Check if the intrinsic is available in the right mode, i.e.
+  // * When compiling for SME only, the caller must be in streaming mode.
+  // * When compiling for SVE only, the caller must be in non-streaming mode.
+  // * When compiling for both SVE and SME, the caller can be in either mode.
+  if (BuiltinType == SemaARM::ArmStreamingOrHasSVE) {
+static const FunctionDecl *CachedFD = nullptr;
+bool SatisfiesSVE = false, SatisfiesSME = false;
+
+if (FD != CachedFD) {
+  // We know the builtin requires either some combination of SVE flags, or
+  // some combination of SME flags, but we need to figure out which part
+  // of the required features is satisfied by the target features.
+  //
+  // For a builtin with target guard 'sve2p1|sme2', if we compile with
+  // '+sve2p1,+sme', then we know that it satisfies the 'sve2p1' part if we
+  // evaluate the features for '+sve2p1,+sme,+nosme'.
+  //
+  // Similarly, if we compile with '+sve2,+sme2', then we know it satisfies
+  // the 'sme2' part if we evaluate the features for '+sve2,+sme2,+nosve'.
+  llvm::StringMap<bool> CallerFeatureMap;
+  auto DisableFeatures = [&](StringRef S) {
+for (StringRef K : CallerFeatureMap.keys())
+  if (K.starts_with(S))
+CallerFeatureMap[K] = false;
+  };
+
+  StringRef BuiltinTargetGuards(
+  S.Context.BuiltinInfo.getRequiredFeatures(BuiltinID));
+
+  S.Context.getFunctionFeatureMap(CallerFeatureMap, FD);
+  DisableFeatures("sme");
+  SatisfiesSVE = Builtin::evaluateRequiredTargetFeatures(
+  BuiltinTargetGuards, CallerFeatureMap);
+
+  S.Context.getFunctionFeatureMap(CallerFeatureMap, FD);
+  DisableFeatures("sve");
+  SatisfiesSME = Builtin::evaluateRequiredTargetFeatures(
+  BuiltinTargetGuards, CallerFeatureMap);
+
+  CachedFD = FD;
+}
+
+if (SatisfiesSVE && SatisfiesSME)

paulwalker-arm wrote:

Does this effectively prevent streaming compatible functions when only SVE 
feature flags are available?

My updated understanding of streaming-compatible functions is that SME features 
play no role and the user is expected to use SVE feature flags to direct the 
compiler to the level of SVE support a streaming compatible function can have, 
much like they would for ordinary functions.

https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-04 Thread Paul Walker via cfe-commits


@@ -225,7 +225,7 @@ def IsStreamingCompatible   : 
FlagType<0x40>;
 def IsReadZA: FlagType<0x80>;
 def IsWriteZA   : FlagType<0x100>;
 def IsReductionQV   : FlagType<0x200>;
-def IsStreamingOrSVE2p1 : FlagType<0x400>; // Use for 
intrinsics that are common between sme/sme2 and sve2p1.
+def IsSVEOrStreamingSVE : FlagType<0x400>; // Use for 
intrinsics that are common between SVE and SME.

paulwalker-arm wrote:

If you permit a bit of bike shedding I don't think this is a good name.  From 
what I can see the new flag is used to trigger dynamic resolution to determine 
if the builtin is available to use based on the target features along with any 
keywords associated with the function.  Perhaps `RequiresDynamicVerification`? 
or ideally something shorter that has the same meaning.

https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-04 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Generalise streaming mode checks for builtins. (PR #93802)

2024-06-04 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm requested changes to this pull request.


https://github.com/llvm/llvm-project/pull/93802


[clang] [Clang][AArch64] Use __clang_arm_builtin_alias for overloaded svreinterpret's (PR #92427)

2024-05-22 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/92427


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm commented:

After further discussion I now understand that the `__arm_streaming_compatible` 
keyword has no effect on the target features in play and only tells the 
compiler not to emit any SM state changing instructions as part of the calling 
convention.

https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits


@@ -8982,11 +8982,18 @@ void Sema::CheckVariableDeclarationType(VarDecl *NewVD) 
{
 const FunctionDecl *FD = cast<FunctionDecl>(CurContext);
 llvm::StringMap<bool> CallerFeatureMap;
 Context.getFunctionFeatureMap(CallerFeatureMap, FD);
-if (!Builtin::evaluateRequiredTargetFeatures(
-"sve", CallerFeatureMap)) {
-  Diag(NewVD->getLocation(), diag::err_sve_vector_in_non_sve_target) << T;
-  NewVD->setInvalidDecl();
-  return;

paulwalker-arm wrote:

Is dropping the immediate return upon setting up a diagnostic intentional?

https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits


@@ -9,6 +9,12 @@
 
 #include 
 
+#if defined __ARM_FEATURE_SME
+#define MODE_ATTR __arm_streaming
+#else
+#define MODE_ATTR __arm_streaming_compatible

paulwalker-arm wrote:

Do you need to use `__arm_streaming_compatible` here?  Now we've agreed this 
keyword has no effect on the target features in use, I think `MODE_ATTR` should 
remain blank to mirror the expected usage when SME is not in use.

https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-16 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/91356


[clang] [Clang][AArch64] Require SVE or SSVE for scalable types. (PR #91356)

2024-05-08 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm requested changes to this pull request.

As discussed offline, I don't think we want to be this strict.  As demonstrated 
by the changes to the ACLE tests, this change makes it impossible to distribute 
a library in binary form that can work for both SVE and InStreamingMode 
environments.  I believe functions decorated with `__arm_streaming_compatible` 
should be allowed to assume the presence of the subset of instructions that is 
available to both environments. Library users get protected at the point they 
call such functions whereby a compilation error is emitted when the caller 
either doesn't have access to SVE or is not in streaming mode.

https://github.com/llvm/llvm-project/pull/91356


[clang] [llvm] [mlir] Move several vector intrinsics out of experimental namespace (PR #88748)

2024-04-26 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/88748


[clang] [llvm] [LLVM][SVE] Seperate the int and floating-point variants of addqv. (PR #89762)

2024-04-26 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/89762


[clang] [llvm] [LLVM][SVE] Seperate the int and floating-point variants of addqv. (PR #89762)

2024-04-23 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/89762

We only use common intrinsics for operations that treat their element type as a 
container of bits.

>From ed27a2d1406dccf70e7189578cd6950b61961c1b Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Tue, 23 Apr 2024 11:54:47 +
Subject: [PATCH] [LLVM][SVE] Seperate the int and floating-point variants of
 addqv.

We only use common intrinsics for operations that treat their
element type as a container of bits.
---
 clang/include/clang/Basic/arm_sve.td  | 27 ++-
 .../acle_sve2p1_fp_reduce.c   | 12 -
 llvm/include/llvm/IR/IntrinsicsAArch64.td | 10 ---
 llvm/lib/IR/AutoUpgrade.cpp   | 12 +
 .../lib/Target/AArch64/AArch64SVEInstrInfo.td |  2 +-
 5 files changed, 39 insertions(+), 24 deletions(-)

diff --git a/clang/include/clang/Basic/arm_sve.td 
b/clang/include/clang/Basic/arm_sve.td
index 6cc249837d3f3d..15340ebb62b365 100644
--- a/clang/include/clang/Basic/arm_sve.td
+++ b/clang/include/clang/Basic/arm_sve.td
@@ -1961,19 +1961,20 @@ def SVPSEL_D : SInst<"svpsel_lane_b64", "PPPm", "Pl", 
MergeNone, "", [IsStreamin
 
 // Standalone sve2.1 builtins
 let TargetGuard = "sve2p1" in {
-def SVORQV   : SInst<"svorqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_orqv", [IsReductionQV]>;
-def SVEORQV  : SInst<"sveorqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_eorqv", [IsReductionQV]>;
-def SVADDQV  : SInst<"svaddqv[_{d}]", "{Pd", "hfdcsilUcUsUiUl", MergeNone, 
"aarch64_sve_addqv", [IsReductionQV]>;
-def SVANDQV  : SInst<"svandqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_andqv", [IsReductionQV]>;
-def SVSMAXQV : SInst<"svmaxqv[_{d}]", "{Pd", "csil", MergeNone, 
"aarch64_sve_smaxqv", [IsReductionQV]>;
-def SVUMAXQV : SInst<"svmaxqv[_{d}]", "{Pd", "UcUsUiUl", MergeNone, 
"aarch64_sve_umaxqv", [IsReductionQV]>;
-def SVSMINQV : SInst<"svminqv[_{d}]", "{Pd", "csil", MergeNone, 
"aarch64_sve_sminqv", [IsReductionQV]>;
-def SVUMINQV : SInst<"svminqv[_{d}]", "{Pd", "UcUsUiUl", MergeNone, 
"aarch64_sve_uminqv", [IsReductionQV]>;
-
-def SVFMAXNMQV: SInst<"svmaxnmqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fmaxnmqv", [IsReductionQV]>;
-def SVFMINNMQV: SInst<"svminnmqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fminnmqv", [IsReductionQV]>;
-def SVFMAXQV: SInst<"svmaxqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fmaxqv", [IsReductionQV]>;
-def SVFMINQV: SInst<"svminqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fminqv", [IsReductionQV]>;
+def SVORQV   : SInst<"svorqv[_{d}]",  "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_orqv",   [IsReductionQV]>;
+def SVEORQV  : SInst<"sveorqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_eorqv",  [IsReductionQV]>;
+def SVADDQV  : SInst<"svaddqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_addqv",  [IsReductionQV]>;
+def SVANDQV  : SInst<"svandqv[_{d}]", "{Pd", "csilUcUsUiUl", MergeNone, 
"aarch64_sve_andqv",  [IsReductionQV]>;
+def SVSMAXQV : SInst<"svmaxqv[_{d}]", "{Pd", "csil", MergeNone, 
"aarch64_sve_smaxqv", [IsReductionQV]>;
+def SVUMAXQV : SInst<"svmaxqv[_{d}]", "{Pd", "UcUsUiUl", MergeNone, 
"aarch64_sve_umaxqv", [IsReductionQV]>;
+def SVSMINQV : SInst<"svminqv[_{d}]", "{Pd", "csil", MergeNone, 
"aarch64_sve_sminqv", [IsReductionQV]>;
+def SVUMINQV : SInst<"svminqv[_{d}]", "{Pd", "UcUsUiUl", MergeNone, 
"aarch64_sve_uminqv", [IsReductionQV]>;
+
+def SVFADDQV   : SInst<"svaddqv[_{d}]",   "{Pd", "hfd", MergeNone, 
"aarch64_sve_faddqv",   [IsReductionQV]>;
+def SVFMAXNMQV : SInst<"svmaxnmqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fmaxnmqv", [IsReductionQV]>;
+def SVFMINNMQV : SInst<"svminnmqv[_{d}]", "{Pd", "hfd", MergeNone, 
"aarch64_sve_fminnmqv", [IsReductionQV]>;
+def SVFMAXQV   : SInst<"svmaxqv[_{d}]",   "{Pd", "hfd", MergeNone, 
"aarch64_sve_fmaxqv",   [IsReductionQV]>;
+def SVFMINQV   : SInst<"svminqv[_{d}]",   "{Pd", "hfd", MergeNone, 
"aarch64_sve_fminqv",   [IsReductionQV]>;
 }
 
 let TargetGuard = "sve2p1|sme2" in {
diff --git 
a/clang/test/CodeGen/aarch64-sve2p1-intrinsics/acle_sve2p1_fp_reduce.c 
b/clang/test/CodeGen/aarch64-sve2p1-intrinsics/acle_sve2p1_fp_reduce.c
index e58cf4e49a37f9..9d5ffdafe8663e 100644
--- a/clang/test/CodeGen/aarch64-sve2p1-intrinsics/acle_sve2p1_fp_reduce.c
+++ b/clang/test/CodeGen/aarch64-sve2p1-intrinsics/acle_sve2p1_fp_reduce.c
@@ -20,13 +20,13 @@
 // CHECK-LABEL: @test_svaddqv_f16(
 // CHECK-NEXT:  entry:
 // CHECK-NEXT:[[TMP0:%.*]] = tail call  
@llvm.aarch64.sve.convert.from.svbool.nxv8i1( [[PG:%.*]])
-// CHECK-NEXT:[[TMP1:%.*]] = tail call <8 x half> 
@llvm.aarch64.sve.addqv.v8f16.nxv8f16( [[TMP0]],  [[OP:%.*]])
+// CHECK-NEXT:[[TMP1:%.*]] = tail call <8 x half> 
@llvm.aarch64.sve.faddqv.v8f16.nxv8f16( [[TMP0]],  [[OP:%.*]])
 // CHECK-NEXT:ret <8 x half> [[TMP1]]
 //
 // CPP-CHECK-LABEL: @_Z16test_svaddqv_f16u10__SVBool_tu13__SVFloat16_t(
 // 

[clang] [llvm] [LLVM][TypeSize] Remove default constructor. (PR #82810)

2024-02-28 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/82810
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [llvm] [LLVM][TypeSize] Remove default constructor. (PR #82810)

2024-02-26 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/82810

>From a4c46459564bd8a8e5ca2a56fa643f866b7e869a Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Fri, 23 Feb 2024 18:26:10 +
Subject: [PATCH] [LLVM][TypeSize] Remove default constructor.

---
 clang/lib/CodeGen/CGCall.cpp| 15 ++-
 llvm/include/llvm/Support/TypeSize.h|  2 --
 llvm/unittests/Support/TypeSizeTest.cpp |  1 -
 3 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/clang/lib/CodeGen/CGCall.cpp b/clang/lib/CodeGen/CGCall.cpp
index d05cf1c6e1814e..0d86fcf544d0fd 100644
--- a/clang/lib/CodeGen/CGCall.cpp
+++ b/clang/lib/CodeGen/CGCall.cpp
@@ -3221,12 +3221,10 @@ void CodeGenFunction::EmitFunctionProlog(const 
CGFunctionInfo ,
 
   llvm::StructType *STy =
   dyn_cast(ArgI.getCoerceToType());
-  llvm::TypeSize StructSize;
-  llvm::TypeSize PtrElementSize;
   if (ArgI.isDirect() && !ArgI.getCanBeFlattened() && STy &&
   STy->getNumElements() > 1) {
-StructSize = CGM.getDataLayout().getTypeAllocSize(STy);
-PtrElementSize =
+llvm::TypeSize StructSize = CGM.getDataLayout().getTypeAllocSize(STy);
+llvm::TypeSize PtrElementSize =
 CGM.getDataLayout().getTypeAllocSize(ConvertTypeForMem(Ty));
 if (STy->containsHomogeneousScalableVectorTypes()) {
   assert(StructSize == PtrElementSize &&
@@ -5310,12 +5308,11 @@ RValue CodeGenFunction::EmitCall(const CGFunctionInfo 
,
 
   llvm::StructType *STy =
   dyn_cast(ArgInfo.getCoerceToType());
-  llvm::Type *SrcTy = ConvertTypeForMem(I->Ty);
-  llvm::TypeSize SrcTypeSize;
-  llvm::TypeSize DstTypeSize;
   if (STy && ArgInfo.isDirect() && !ArgInfo.getCanBeFlattened()) {
-SrcTypeSize = CGM.getDataLayout().getTypeAllocSize(SrcTy);
-DstTypeSize = CGM.getDataLayout().getTypeAllocSize(STy);
+llvm::Type *SrcTy = ConvertTypeForMem(I->Ty);
+llvm::TypeSize SrcTypeSize =
+CGM.getDataLayout().getTypeAllocSize(SrcTy);
+llvm::TypeSize DstTypeSize = CGM.getDataLayout().getTypeAllocSize(STy);
 if (STy->containsHomogeneousScalableVectorTypes()) {
   assert(SrcTypeSize == DstTypeSize &&
  "Only allow non-fractional movement of structure with "
diff --git a/llvm/include/llvm/Support/TypeSize.h 
b/llvm/include/llvm/Support/TypeSize.h
index 1b793b0eccf3c7..68dbe1ea3062ab 100644
--- a/llvm/include/llvm/Support/TypeSize.h
+++ b/llvm/include/llvm/Support/TypeSize.h
@@ -321,8 +321,6 @@ class TypeSize : public 
details::FixedOrScalableQuantity {
   : FixedOrScalableQuantity(V) {}
 
 public:
-  constexpr TypeSize() : FixedOrScalableQuantity(0, false) {}
-
   constexpr TypeSize(ScalarTy Quantity, bool Scalable)
   : FixedOrScalableQuantity(Quantity, Scalable) {}
 
diff --git a/llvm/unittests/Support/TypeSizeTest.cpp 
b/llvm/unittests/Support/TypeSizeTest.cpp
index 34fe376989e7ba..b02b7e60095359 100644
--- a/llvm/unittests/Support/TypeSizeTest.cpp
+++ b/llvm/unittests/Support/TypeSizeTest.cpp
@@ -81,7 +81,6 @@ static_assert(INT64_C(2) * TSFixed32 == 
TypeSize::getFixed(64));
 static_assert(UINT64_C(2) * TSFixed32 == TypeSize::getFixed(64));
 static_assert(alignTo(TypeSize::getFixed(7), 8) == TypeSize::getFixed(8));
 
-static_assert(TypeSize() == TypeSize::getFixed(0));
 static_assert(TypeSize::getZero() == TypeSize::getFixed(0));
 static_assert(TypeSize::getZero() != TypeSize::getScalable(0));
 static_assert(TypeSize::getFixed(0) != TypeSize::getScalable(0));



[clang] [llvm] [LLVM][TypeSize] Remove default constructor. (PR #82810)

2024-02-23 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/82810

Implements the follow-on work requested on 
https://github.com/llvm/llvm-project/pull/75614.

>From a75304dffb77be1fb15f268000bfbdd07be774e1 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Fri, 23 Feb 2024 18:26:10 +
Subject: [PATCH] [LLVM][TypeSize] Remove default constructor.

---
 clang/lib/CodeGen/CGCall.cpp | 15 ++-
 llvm/include/llvm/Support/TypeSize.h |  2 --
 2 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/clang/lib/CodeGen/CGCall.cpp b/clang/lib/CodeGen/CGCall.cpp
index d05cf1c6e1814e..0d86fcf544d0fd 100644
--- a/clang/lib/CodeGen/CGCall.cpp
+++ b/clang/lib/CodeGen/CGCall.cpp
@@ -3221,12 +3221,10 @@ void CodeGenFunction::EmitFunctionProlog(const 
CGFunctionInfo ,
 
   llvm::StructType *STy =
   dyn_cast(ArgI.getCoerceToType());
-  llvm::TypeSize StructSize;
-  llvm::TypeSize PtrElementSize;
   if (ArgI.isDirect() && !ArgI.getCanBeFlattened() && STy &&
   STy->getNumElements() > 1) {
-StructSize = CGM.getDataLayout().getTypeAllocSize(STy);
-PtrElementSize =
+llvm::TypeSize StructSize = CGM.getDataLayout().getTypeAllocSize(STy);
+llvm::TypeSize PtrElementSize =
 CGM.getDataLayout().getTypeAllocSize(ConvertTypeForMem(Ty));
 if (STy->containsHomogeneousScalableVectorTypes()) {
   assert(StructSize == PtrElementSize &&
@@ -5310,12 +5308,11 @@ RValue CodeGenFunction::EmitCall(const CGFunctionInfo 
,
 
   llvm::StructType *STy =
   dyn_cast(ArgInfo.getCoerceToType());
-  llvm::Type *SrcTy = ConvertTypeForMem(I->Ty);
-  llvm::TypeSize SrcTypeSize;
-  llvm::TypeSize DstTypeSize;
   if (STy && ArgInfo.isDirect() && !ArgInfo.getCanBeFlattened()) {
-SrcTypeSize = CGM.getDataLayout().getTypeAllocSize(SrcTy);
-DstTypeSize = CGM.getDataLayout().getTypeAllocSize(STy);
+llvm::Type *SrcTy = ConvertTypeForMem(I->Ty);
+llvm::TypeSize SrcTypeSize =
+CGM.getDataLayout().getTypeAllocSize(SrcTy);
+llvm::TypeSize DstTypeSize = CGM.getDataLayout().getTypeAllocSize(STy);
 if (STy->containsHomogeneousScalableVectorTypes()) {
   assert(SrcTypeSize == DstTypeSize &&
  "Only allow non-fractional movement of structure with "
diff --git a/llvm/include/llvm/Support/TypeSize.h 
b/llvm/include/llvm/Support/TypeSize.h
index 1b793b0eccf3c7..68dbe1ea3062ab 100644
--- a/llvm/include/llvm/Support/TypeSize.h
+++ b/llvm/include/llvm/Support/TypeSize.h
@@ -321,8 +321,6 @@ class TypeSize : public 
details::FixedOrScalableQuantity {
   : FixedOrScalableQuantity(V) {}
 
 public:
-  constexpr TypeSize() : FixedOrScalableQuantity(0, false) {}
-
   constexpr TypeSize(ScalarTy Quantity, bool Scalable)
   : FixedOrScalableQuantity(Quantity, Scalable) {}
 



[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-19 Thread Paul Walker via cfe-commits


@@ -10570,6 +10570,26 @@ Value 
*CodeGenFunction::EmitAArch64BuiltinExpr(unsigned BuiltinID,
 return Builder.CreateCall(F, llvm::ConstantInt::get(Int32Ty, HintID));
   }
 
+  if (BuiltinID == clang::AArch64::BI__builtin_arm_get_sme_state) {
+// Create call to __arm_sme_state and store the results to the two 
pointers.

paulwalker-arm wrote:

I see what you mean. In which case I think creating specific builtins to 
avoid having to go through memory would be better, because I don’t get the point 
of creating a builtin that has a less efficient interface than the function it 
targets.  That said, I’ll not push for it if you feel this implementation is 
better.

https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-19 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-19 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-19 Thread Paul Walker via cfe-commits


@@ -1600,6 +1600,25 @@ void SVEEmitter::createSMEHeader(raw_ostream ) {
   OS << "extern \"C\" {\n";
   OS << "#endif\n\n";
 
+  OS << "void __arm_za_disable(void) __arm_streaming_compatible;\n\n";
+
+  OS << "__ai bool __arm_has_sme(void) __arm_streaming_compatible {\n";
+  OS << "  uint64_t x0, x1;\n";
+  OS << "  __builtin_arm_get_sme_state(, );\n";
+  OS << "  return x0 & (1ULL << 63);\n";
+  OS << "}\n\n";
+
+  OS << "__ai bool __arm_in_streaming_mode(void) __arm_streaming_compatible "
+"{\n";
+  OS << "  uint64_t x0, x1;\n";
+  OS << "  __builtin_arm_get_sme_state(, );\n";
+  OS << "  return x0 & 1;\n";
+  OS << "}\n\n";
+
+  OS << "__ai __attribute__((target(\"sme\"))) void svundef_za(void) "

paulwalker-arm wrote:

I see. Thanks for the explanation.

https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-19 Thread Paul Walker via cfe-commits


@@ -10570,6 +10570,26 @@ Value 
*CodeGenFunction::EmitAArch64BuiltinExpr(unsigned BuiltinID,
 return Builder.CreateCall(F, llvm::ConstantInt::get(Int32Ty, HintID));
   }
 
+  if (BuiltinID == clang::AArch64::BI__builtin_arm_get_sme_state) {
+// Create call to __arm_sme_state and store the results to the two 
pointers.

paulwalker-arm wrote:

OK, that makes sense.  But the new builtin can still return by value?

https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-18 Thread Paul Walker via cfe-commits


@@ -10570,6 +10570,26 @@ Value 
*CodeGenFunction::EmitAArch64BuiltinExpr(unsigned BuiltinID,
 return Builder.CreateCall(F, llvm::ConstantInt::get(Int32Ty, HintID));
   }
 
+  if (BuiltinID == clang::AArch64::BI__builtin_arm_get_sme_state) {
+// Create call to __arm_sme_state and store the results to the two 
pointers.

paulwalker-arm wrote:

Out of interest, why does this builtin return the result via memory instead of 
being an alias of `__arm_sme_state`? Or rather, can `__arm_has_sme` and 
`__arm_in_streaming_mode` call `__arm_sme_state` directly?

https://github.com/llvm/llvm-project/pull/75791


[clang] [Clang][AArch64] Add missing SME functions to header file. (PR #75791)

2023-12-18 Thread Paul Walker via cfe-commits


@@ -1600,6 +1600,25 @@ void SVEEmitter::createSMEHeader(raw_ostream ) {
   OS << "extern \"C\" {\n";
   OS << "#endif\n\n";
 
+  OS << "void __arm_za_disable(void) __arm_streaming_compatible;\n\n";
+
+  OS << "__ai bool __arm_has_sme(void) __arm_streaming_compatible {\n";
+  OS << "  uint64_t x0, x1;\n";
+  OS << "  __builtin_arm_get_sme_state(, );\n";
+  OS << "  return x0 & (1ULL << 63);\n";
+  OS << "}\n\n";
+
+  OS << "__ai bool __arm_in_streaming_mode(void) __arm_streaming_compatible "
+"{\n";
+  OS << "  uint64_t x0, x1;\n";
+  OS << "  __builtin_arm_get_sme_state(, );\n";
+  OS << "  return x0 & 1;\n";
+  OS << "}\n\n";
+
+  OS << "__ai __attribute__((target(\"sme\"))) void svundef_za(void) "

paulwalker-arm wrote:

Why is the target attribute required? From reading the ACLE, this builtin is not 
expected to emit any SME instructions.

https://github.com/llvm/llvm-project/pull/75791


[llvm] [mlir] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/75217


[clang] [mlir] [llvm] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-15 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Turns out there was just a single extra instance, within MLIR.  It's an 
interesting one though, and I've noted it: it looks like I'll need to extend 
`ModuleImport::getConstantAsAttr` as part of the patch that enables direct 
VectorType support for ConstantInt/FP.

https://github.com/llvm/llvm-project/pull/75217


[llvm] [clang] [mlir] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-15 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/75217

>From d19e9e20432c0dfe50bfba7cd782179653f42b2b Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 29 Nov 2023 14:45:06 +
Subject: [PATCH] [LLVM][IR] Replace ConstantInt's specialisation of getType()
 with getIntegerType().

The specialisation is no longer valid because ConstantInt is due to
gain native support for vector types.

Co-authored-by: Nikita Popov 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  7 ---
 llvm/include/llvm/IR/Constants.h  |  7 +++
 llvm/lib/Analysis/InstructionSimplify.cpp |  2 +-
 llvm/lib/IR/ConstantFold.cpp  |  2 +-
 llvm/lib/IR/Verifier.cpp  | 11 ++-
 .../Hexagon/HexagonLoopIdiomRecognition.cpp   |  6 +++---
 llvm/lib/Transforms/IPO/OpenMPOpt.cpp | 15 ---
 .../InstCombine/InstCombineVectorOps.cpp  |  4 ++--
 llvm/lib/Transforms/Scalar/ConstantHoisting.cpp   |  5 ++---
 llvm/lib/Transforms/Scalar/LoopFlatten.cpp|  5 ++---
 llvm/lib/Transforms/Utils/SimplifyCFG.cpp |  8 
 mlir/lib/Target/LLVMIR/ModuleImport.cpp   |  2 +-
 12 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 353b7930b3c1ea..f2a199c3f61d3c 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -3214,7 +3214,7 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl 
GD, unsigned BuiltinID,
 Value *AlignmentValue = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(AlignmentValue);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(PtrValue, Ptr,
@@ -17027,7 +17027,7 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Value *Op1 = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(Op0);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(Op1, E->getArg(1),
@@ -17265,7 +17265,8 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Op0, llvm::FixedVectorType::get(ConvertType(E->getType()), 2));
 
 if (getTarget().isLittleEndian())
-  Index = ConstantInt::get(Index->getType(), 1 - Index->getZExtValue());
+  Index =
+  ConstantInt::get(Index->getIntegerType(), 1 - Index->getZExtValue());
 
 return Builder.CreateExtractElement(Unpacked, Index);
   }
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index 0b9f89830b79c6..b5dcc7fbc1d929 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -171,10 +171,9 @@ class ConstantInt final : public ConstantData {
   /// Determine if this constant's value is same as an unsigned char.
   bool equalsInt(uint64_t V) const { return Val == V; }
 
-  /// getType - Specialize the getType() method to always return an 
IntegerType,
-  /// which reduces the amount of casting needed in parts of the compiler.
-  ///
-  inline IntegerType *getType() const {
+  /// Variant of the getType() method to always return an IntegerType, which
+  /// reduces the amount of casting needed in parts of the compiler.
+  inline IntegerType *getIntegerType() const {
 return cast(Value::getType());
   }
 
diff --git a/llvm/lib/Analysis/InstructionSimplify.cpp 
b/llvm/lib/Analysis/InstructionSimplify.cpp
index 2a45acf63aa2ca..5beac5547d65e0 100644
--- a/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -6079,7 +6079,7 @@ static Value *simplifyRelativeLoad(Constant *Ptr, 
Constant *Offset,
   Type *Int32Ty = Type::getInt32Ty(Ptr->getContext());
 
   auto *OffsetConstInt = dyn_cast(Offset);
-  if (!OffsetConstInt || OffsetConstInt->getType()->getBitWidth() > 64)
+  if (!OffsetConstInt || OffsetConstInt->getBitWidth() > 64)
 return nullptr;
 
   APInt OffsetInt = OffsetConstInt->getValue().sextOrTrunc(
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index d499d74f7ba010..7fdc35e7fca097 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -868,7 +868,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned 
Opcode, Constant *C1,
   }
 
   if (GVAlign > 1) {
-unsigned DstWidth = CI2->getType()->getBitWidth();
+unsigned DstWidth = CI2->getBitWidth();
 unsigned SrcWidth = std::min(DstWidth, Log2(GVAlign));
 APInt 

[llvm] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-13 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Just a note to say the PR is not complete because there are uses outside of 
clang and llvm that I need to port.

https://github.com/llvm/llvm-project/pull/75217


[llvm] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-13 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/75217

>From b484b3c60b172fadb6fa600cdc15a865750867a8 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 29 Nov 2023 14:45:06 +
Subject: [PATCH] [LLVM][IR] Replace ConstantInt's specialisation of getType()
 with getIntegerType().

The specialisation is no longer valid because ConstantInt is due to
gain native support for vector types.

Co-authored-by: Nikita Popov 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  7 ---
 llvm/include/llvm/IR/Constants.h  |  7 +++
 llvm/lib/Analysis/InstructionSimplify.cpp |  2 +-
 llvm/lib/IR/ConstantFold.cpp  |  2 +-
 llvm/lib/IR/Verifier.cpp  | 11 ++-
 .../Hexagon/HexagonLoopIdiomRecognition.cpp   |  6 +++---
 llvm/lib/Transforms/IPO/OpenMPOpt.cpp | 15 ---
 .../InstCombine/InstCombineVectorOps.cpp  |  4 ++--
 llvm/lib/Transforms/Scalar/ConstantHoisting.cpp   |  5 ++---
 llvm/lib/Transforms/Scalar/LoopFlatten.cpp|  5 ++---
 llvm/lib/Transforms/Utils/SimplifyCFG.cpp |  8 
 11 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 353b7930b3c1ea..f2a199c3f61d3c 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -3214,7 +3214,7 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl 
GD, unsigned BuiltinID,
 Value *AlignmentValue = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(AlignmentValue);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(PtrValue, Ptr,
@@ -17027,7 +17027,7 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Value *Op1 = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(Op0);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(Op1, E->getArg(1),
@@ -17265,7 +17265,8 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Op0, llvm::FixedVectorType::get(ConvertType(E->getType()), 2));
 
 if (getTarget().isLittleEndian())
-  Index = ConstantInt::get(Index->getType(), 1 - Index->getZExtValue());
+  Index =
+  ConstantInt::get(Index->getIntegerType(), 1 - Index->getZExtValue());
 
 return Builder.CreateExtractElement(Unpacked, Index);
   }
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index 0b9f89830b79c6..b5dcc7fbc1d929 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -171,10 +171,9 @@ class ConstantInt final : public ConstantData {
   /// Determine if this constant's value is same as an unsigned char.
   bool equalsInt(uint64_t V) const { return Val == V; }
 
-  /// getType - Specialize the getType() method to always return an 
IntegerType,
-  /// which reduces the amount of casting needed in parts of the compiler.
-  ///
-  inline IntegerType *getType() const {
+  /// Variant of the getType() method to always return an IntegerType, which
+  /// reduces the amount of casting needed in parts of the compiler.
+  inline IntegerType *getIntegerType() const {
 return cast(Value::getType());
   }
 
diff --git a/llvm/lib/Analysis/InstructionSimplify.cpp 
b/llvm/lib/Analysis/InstructionSimplify.cpp
index 2a45acf63aa2ca..5beac5547d65e0 100644
--- a/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -6079,7 +6079,7 @@ static Value *simplifyRelativeLoad(Constant *Ptr, 
Constant *Offset,
   Type *Int32Ty = Type::getInt32Ty(Ptr->getContext());
 
   auto *OffsetConstInt = dyn_cast(Offset);
-  if (!OffsetConstInt || OffsetConstInt->getType()->getBitWidth() > 64)
+  if (!OffsetConstInt || OffsetConstInt->getBitWidth() > 64)
 return nullptr;
 
   APInt OffsetInt = OffsetConstInt->getValue().sextOrTrunc(
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index d499d74f7ba010..7fdc35e7fca097 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -868,7 +868,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned 
Opcode, Constant *C1,
   }
 
   if (GVAlign > 1) {
-unsigned DstWidth = CI2->getType()->getBitWidth();
+unsigned DstWidth = CI2->getBitWidth();
 unsigned SrcWidth = std::min(DstWidth, Log2(GVAlign));
 APInt BitsNotSet(APInt::getLowBitsSet(DstWidth, SrcWidth));
 
diff --git 

[llvm] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-13 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/75217

>From 3f01bab15a8645be06ab30afa3bc42f11f3d4959 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 29 Nov 2023 14:45:06 +
Subject: [PATCH 1/8] [LLVM][IR] Replace ConstantInt's specialisation of
 getType() with getIntegerType().

The specialisation is no longer valid because ConstantInt is due to
gain native support for vector types.
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  7 ---
 llvm/include/llvm/IR/Constants.h  |  7 +++
 llvm/lib/Analysis/InstructionSimplify.cpp |  2 +-
 llvm/lib/IR/ConstantFold.cpp  |  2 +-
 llvm/lib/IR/Verifier.cpp  | 11 ++-
 .../Hexagon/HexagonLoopIdiomRecognition.cpp   |  6 +++---
 llvm/lib/Transforms/IPO/OpenMPOpt.cpp | 15 ---
 .../InstCombine/InstCombineVectorOps.cpp  |  4 ++--
 llvm/lib/Transforms/Scalar/ConstantHoisting.cpp   |  6 +++---
 llvm/lib/Transforms/Scalar/LoopFlatten.cpp|  5 ++---
 llvm/lib/Transforms/Utils/SimplifyCFG.cpp |  8 
 11 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 83d0a72aac5495..0d6e121943ed6d 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -3214,7 +3214,7 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl 
GD, unsigned BuiltinID,
 Value *AlignmentValue = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(AlignmentValue);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(PtrValue, Ptr,
@@ -17023,7 +17023,7 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Value *Op1 = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(Op0);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(Op1, E->getArg(1),
@@ -17261,7 +17261,8 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Op0, llvm::FixedVectorType::get(ConvertType(E->getType()), 2));
 
 if (getTarget().isLittleEndian())
-  Index = ConstantInt::get(Index->getType(), 1 - Index->getZExtValue());
+  Index =
+  ConstantInt::get(Index->getIntegerType(), 1 - Index->getZExtValue());
 
 return Builder.CreateExtractElement(Unpacked, Index);
   }
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index 0b9f89830b79c6..b5dcc7fbc1d929 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -171,10 +171,9 @@ class ConstantInt final : public ConstantData {
   /// Determine if this constant's value is same as an unsigned char.
   bool equalsInt(uint64_t V) const { return Val == V; }
 
-  /// getType - Specialize the getType() method to always return an 
IntegerType,
-  /// which reduces the amount of casting needed in parts of the compiler.
-  ///
-  inline IntegerType *getType() const {
+  /// Variant of the getType() method to always return an IntegerType, which
+  /// reduces the amount of casting needed in parts of the compiler.
+  inline IntegerType *getIntegerType() const {
 return cast(Value::getType());
   }
 
diff --git a/llvm/lib/Analysis/InstructionSimplify.cpp 
b/llvm/lib/Analysis/InstructionSimplify.cpp
index 2a45acf63aa2ca..6a171f863e1221 100644
--- a/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -6079,7 +6079,7 @@ static Value *simplifyRelativeLoad(Constant *Ptr, 
Constant *Offset,
   Type *Int32Ty = Type::getInt32Ty(Ptr->getContext());
 
   auto *OffsetConstInt = dyn_cast(Offset);
-  if (!OffsetConstInt || OffsetConstInt->getType()->getBitWidth() > 64)
+  if (!OffsetConstInt || OffsetConstInt->getIntegerType()->getBitWidth() > 64)
 return nullptr;
 
   APInt OffsetInt = OffsetConstInt->getValue().sextOrTrunc(
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index d499d74f7ba010..c4780402340780 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -868,7 +868,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned 
Opcode, Constant *C1,
   }
 
   if (GVAlign > 1) {
-unsigned DstWidth = CI2->getType()->getBitWidth();
+unsigned DstWidth = CI2->getIntegerType()->getBitWidth();
 unsigned SrcWidth = std::min(DstWidth, Log2(GVAlign));
 APInt BitsNotSet(APInt::getLowBitsSet(DstWidth, SrcWidth));
 
diff 

[llvm] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-13 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/75217

>From 3f01bab15a8645be06ab30afa3bc42f11f3d4959 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 29 Nov 2023 14:45:06 +
Subject: [PATCH 1/7] [LLVM][IR] Replace ConstantInt's specialisation of
 getType() with getIntegerType().

The specialisation is no longer valid because ConstantInt is due to
gain native support for vector types.
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  7 ---
 llvm/include/llvm/IR/Constants.h  |  7 +++
 llvm/lib/Analysis/InstructionSimplify.cpp |  2 +-
 llvm/lib/IR/ConstantFold.cpp  |  2 +-
 llvm/lib/IR/Verifier.cpp  | 11 ++-
 .../Hexagon/HexagonLoopIdiomRecognition.cpp   |  6 +++---
 llvm/lib/Transforms/IPO/OpenMPOpt.cpp | 15 ---
 .../InstCombine/InstCombineVectorOps.cpp  |  4 ++--
 llvm/lib/Transforms/Scalar/ConstantHoisting.cpp   |  6 +++---
 llvm/lib/Transforms/Scalar/LoopFlatten.cpp|  5 ++---
 llvm/lib/Transforms/Utils/SimplifyCFG.cpp |  8 
 11 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 83d0a72aac5495..0d6e121943ed6d 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -3214,7 +3214,7 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl 
GD, unsigned BuiltinID,
 Value *AlignmentValue = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast(AlignmentValue);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(PtrValue, Ptr,
@@ -17023,7 +17023,7 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned BuiltinID,
 Value *Op1 = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast<ConstantInt>(Op0);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(Op1, E->getArg(1),
@@ -17261,7 +17261,8 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned BuiltinID,
 Op0, llvm::FixedVectorType::get(ConvertType(E->getType()), 2));
 
 if (getTarget().isLittleEndian())
-  Index = ConstantInt::get(Index->getType(), 1 - Index->getZExtValue());
+  Index =
+  ConstantInt::get(Index->getIntegerType(), 1 - Index->getZExtValue());
 
 return Builder.CreateExtractElement(Unpacked, Index);
   }
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index 0b9f89830b79c6..b5dcc7fbc1d929 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -171,10 +171,9 @@ class ConstantInt final : public ConstantData {
   /// Determine if this constant's value is same as an unsigned char.
   bool equalsInt(uint64_t V) const { return Val == V; }
 
-  /// getType - Specialize the getType() method to always return an IntegerType,
-  /// which reduces the amount of casting needed in parts of the compiler.
-  ///
-  inline IntegerType *getType() const {
+  /// Variant of the getType() method to always return an IntegerType, which
+  /// reduces the amount of casting needed in parts of the compiler.
+  inline IntegerType *getIntegerType() const {
 return cast<IntegerType>(Value::getType());
   }
 
diff --git a/llvm/lib/Analysis/InstructionSimplify.cpp b/llvm/lib/Analysis/InstructionSimplify.cpp
index 2a45acf63aa2ca..6a171f863e1221 100644
--- a/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -6079,7 +6079,7 @@ static Value *simplifyRelativeLoad(Constant *Ptr, Constant *Offset,
   Type *Int32Ty = Type::getInt32Ty(Ptr->getContext());
 
   auto *OffsetConstInt = dyn_cast<ConstantInt>(Offset);
-  if (!OffsetConstInt || OffsetConstInt->getType()->getBitWidth() > 64)
+  if (!OffsetConstInt || OffsetConstInt->getIntegerType()->getBitWidth() > 64)
 return nullptr;
 
   APInt OffsetInt = OffsetConstInt->getValue().sextOrTrunc(
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index d499d74f7ba010..c4780402340780 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -868,7 +868,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
   }
 
   if (GVAlign > 1) {
-unsigned DstWidth = CI2->getType()->getBitWidth();
+unsigned DstWidth = CI2->getIntegerType()->getBitWidth();
 unsigned SrcWidth = std::min(DstWidth, Log2(GVAlign));
 APInt BitsNotSet(APInt::getLowBitsSet(DstWidth, SrcWidth));
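In isolation, the renamed accessor behaves as in this minimal sketch (simplified stand-ins for illustration, not LLVM's real class definitions):

```cpp
#include <cassert>

// Simplified stand-ins for the classes touched by the patch above. After the
// change, ConstantInt no longer shadows Value::getType(); callers that need
// an IntegerType* must ask for it explicitly via getIntegerType().
struct Type {
  virtual ~Type() = default;
};

struct IntegerType : Type {
  unsigned BitWidth;
  explicit IntegerType(unsigned BW) : BitWidth(BW) {}
  unsigned getBitWidth() const { return BitWidth; }
};

struct ConstantInt {
  Type *Ty; // may later hold a vector type, which motivated the rename
  // Generic accessor, as inherited from Value in the real code base:
  Type *getType() const { return Ty; }
  // Explicit scalar accessor introduced by the patch (sketched here with
  // static_cast; the real implementation uses llvm::cast<IntegerType>):
  IntegerType *getIntegerType() const { return static_cast<IntegerType *>(Ty); }
};
```

Call sites such as `AlignmentCI->getType()` in CGBuiltin.cpp become `AlignmentCI->getIntegerType()` precisely because the generic accessor no longer promises an `IntegerType`.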
 
[clang] [llvm] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-12 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Almost all of the recommended changes assume the code paths will work equally well for
vector types as they do for scalar types.  Can we be so sure this is the case?
This is why I opted to keep the casting assertions, with the exception of a few
places where I could be sure the code path was clean.
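The concern can be made concrete with a hedged sketch (hypothetical types, not LLVM's real hierarchy): once a constant's type may be a vector, an unconditional integer-type assumption silently becomes wrong, which is what the retained cast assertions guard against.

```cpp
#include <cassert>

// Hypothetical miniature of the scalar-vs-vector distinction discussed above.
struct Type { virtual ~Type() = default; };
struct IntegerType : Type {
  unsigned BitWidth;
  explicit IntegerType(unsigned BW) : BitWidth(BW) {}
};
struct VectorType : Type {
  IntegerType *Elt;
  unsigned NumElts;
  VectorType(IntegerType *E, unsigned N) : Elt(E), NumElts(N) {}
};

// dyn_cast-style probe: only scalar integer types yield a width; a vector
// type falls through, so callers are forced to handle that case explicitly.
unsigned scalarBitWidthOrZero(const Type *T) {
  if (auto *IT = dynamic_cast<const IntegerType *>(T))
    return IT->BitWidth;
  return 0; // vector (or other) type: no single scalar width
}
```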

https://github.com/llvm/llvm-project/pull/75217
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[llvm] [clang] [LLVM][IR] Replace ConstantInt's specialisation of getType() with getIntegerType(). (PR #75217)

2023-12-12 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/75217

The specialisation will not be valid when ConstantInt gains native support for 
vector types.

This is largely a mechanical change but with extra attention paid to 
InstCombineVectorOps.cpp, LoopFlatten.cpp and Verifier.cpp to remove the need 
to call `getIntegerType()`.
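One pattern the refactored files can use to avoid the accessor altogether (an assumption about the intent here; the actual rewrites may differ) is to read the bit width from the constant's APInt payload rather than from its type:

```cpp
#include <cassert>
#include <cstdint>

// Minimal stand-in for llvm::APInt: the value itself records its bit width.
struct APInt {
  unsigned BitWidth;
  uint64_t Val;
  unsigned getBitWidth() const { return BitWidth; }
};

struct ConstantInt {
  APInt Value;
  const APInt &getValue() const { return Value; }
};

// In real LLVM, CI->getValue().getBitWidth() equals the width of the
// constant's integer type, so expressions like
//   CI->getIntegerType()->getBitWidth()
// can often become
//   CI->getValue().getBitWidth()
// with no type cast at all (a sketch, not code taken from the patch).
unsigned widthOf(const ConstantInt &CI) { return CI.getValue().getBitWidth(); }
```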


[clang] [llvm] [clang-tools-extra] [LoopVectorize] Improve algorithm for hoisting runtime checks (PR #73515)

2023-12-11 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.


https://github.com/llvm/llvm-project/pull/73515
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[llvm] [clang] [clang-tools-extra] [LoopVectorize] Improve algorithm for hoisting runtime checks (PR #73515)

2023-12-11 Thread Paul Walker via cfe-commits


@@ -347,7 +347,12 @@ void RuntimePointerChecking::tryToCreateDiffCheck(
 auto *SinkStartAR = cast<SCEVAddRecExpr>(SinkStartInt);
 const Loop *StartARLoop = SrcStartAR->getLoop();
 if (StartARLoop == SinkStartAR->getLoop() &&
-StartARLoop == InnerLoop->getParentLoop()) {
+StartARLoop == InnerLoop->getParentLoop() &&
+// If the diff check would already be loop invariant (due to the
+// recurrences being the same), then we should still prefer the diff

paulwalker-arm wrote:

Perhaps "...,then we prefer to keep the diff check because they are cheaper."

https://github.com/llvm/llvm-project/pull/73515


[clang] [llvm] [clang-tools-extra] [LoopVectorize] Improve algorithm for hoisting runtime checks (PR #73515)

2023-12-11 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/73515


[clang] 94c8373 - [NFC] A few whitespace changes.

2023-12-08 Thread Paul Walker via cfe-commits

Author: Paul Walker
Date: 2023-12-08T18:01:12Z
New Revision: 94c837345c27e173284a85471d4efda19eded08e

URL: 
https://github.com/llvm/llvm-project/commit/94c837345c27e173284a85471d4efda19eded08e
DIFF: 
https://github.com/llvm/llvm-project/commit/94c837345c27e173284a85471d4efda19eded08e.diff

LOG: [NFC] A few whitespace changes.

Added: 


Modified: 
clang/include/clang/Basic/DiagnosticFrontendKinds.td
llvm/include/llvm/Analysis/TargetTransformInfo.h
llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h

Removed: 




diff  --git a/clang/include/clang/Basic/DiagnosticFrontendKinds.td 
b/clang/include/clang/Basic/DiagnosticFrontendKinds.td
index 715e0c0dc8fa84..568000106a84dc 100644
--- a/clang/include/clang/Basic/DiagnosticFrontendKinds.td
+++ b/clang/include/clang/Basic/DiagnosticFrontendKinds.td
@@ -80,6 +80,7 @@ def remark_fe_backend_optimization_remark_analysis_aliasing : 
Remark<"%0; "
 "the '__restrict__' qualifier with the independent array arguments. "
 "Erroneous results will occur if these options are incorrectly applied!">,
 BackendInfo, InGroup;
+
 def warn_fe_backend_optimization_failure : Warning<"%0">, BackendInfo,
 InGroup, DefaultWarn;
 def note_fe_backend_invalid_loc : Note<"could "

diff  --git a/llvm/include/llvm/Analysis/TargetTransformInfo.h 
b/llvm/include/llvm/Analysis/TargetTransformInfo.h
index 8635bdd470ee69..fb6f3287e3d262 100644
--- a/llvm/include/llvm/Analysis/TargetTransformInfo.h
+++ b/llvm/include/llvm/Analysis/TargetTransformInfo.h
@@ -2376,12 +2376,12 @@ class TargetTransformInfo::Model final : public 
TargetTransformInfo::Concept {
bool IsZeroCmp) const override {
 return Impl.enableMemCmpExpansion(OptSize, IsZeroCmp);
   }
-  bool enableInterleavedAccessVectorization() override {
-return Impl.enableInterleavedAccessVectorization();
-  }
   bool enableSelectOptimize() override {
 return Impl.enableSelectOptimize();
   }
+  bool enableInterleavedAccessVectorization() override {
+return Impl.enableInterleavedAccessVectorization();
+  }
   bool enableMaskedInterleavedAccessVectorization() override {
 return Impl.enableMaskedInterleavedAccessVectorization();
   }

diff  --git a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h 
b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h
index fa4c93d5f77a19..0b220069a388b6 100644
--- a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h
+++ b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.h
@@ -291,6 +291,7 @@ class AArch64TTIImpl : public 
BasicTTIImplBase<AArch64TTIImpl> {
   bool isLegalMaskedGather(Type *DataType, Align Alignment) const {
 return isLegalMaskedGatherScatter(DataType);
   }
+
   bool isLegalMaskedScatter(Type *DataType, Align Alignment) const {
 return isLegalMaskedGatherScatter(DataType);
   }





[llvm] [clang-tools-extra] [clang] [Clang][AArch64] Add fix vector types to header into SVE (PR #73258)

2023-12-08 Thread Paul Walker via cfe-commits


@@ -2355,13 +2357,7 @@ void NeonEmitter::run(raw_ostream &OS) {
 
   OS << "#include \n";
 
-  // Emit NEON-specific scalar typedefs.
-  OS << "typedef float float32_t;\n";
-  OS << "typedef __fp16 float16_t;\n";
-
-  OS << "#ifdef __aarch64__\n";
-  OS << "typedef double float64_t;\n";
-  OS << "#endif\n\n";
+  OS << "#include \n";

paulwalker-arm wrote:

I guess there's a question as to why the poly types have been omitted.  We 
don't need to be that precious about the header containing only the bare 
minimum types that are needed across NEON and SVE.  I've seen circumstances 
where users have wanted the types but nothing else.  I know this is not the 
goal of this patch but it's a step in that direction.

https://github.com/llvm/llvm-project/pull/73258


[clang] [llvm] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-07 Thread Paul Walker via cfe-commits


@@ -196,6 +196,9 @@ C Language Changes
   number of elements in the flexible array member. This information can improve
   the results of the array bound sanitizer and the
   ``__builtin_dynamic_object_size`` builtin.
+- Enums will now be represented in TBAA metadata using their actual underlying

paulwalker-arm wrote:

Works for me.

https://github.com/llvm/llvm-project/pull/73326


[llvm] [clang] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-07 Thread Paul Walker via cfe-commits


@@ -6376,13 +6376,26 @@ aliases a memory access with an access tag ``(BaseTy2, 
AccessTy2,
 Offset2)`` if either ``(BaseTy1, Offset1)`` is reachable from ``(Base2,
 Offset2)`` via the ``Parent`` relation or vice versa.
 
+In C an enum will be compatible with an underlying integer type that is large
+enough to hold all enumerated values. In most cases this will be an ``int``,
+which is the default when no type is specified. However, if an ``int`` is not

paulwalker-arm wrote:

I don't think this information belongs in the LangRef, especially as it doesn't 
add any value to the underlying description of what the metadata represents.

https://github.com/llvm/llvm-project/pull/73326


[llvm] [clang] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-07 Thread Paul Walker via cfe-commits


@@ -196,6 +196,9 @@ C Language Changes
   number of elements in the flexible array member. This information can improve
   the results of the array bound sanitizer and the
   ``__builtin_dynamic_object_size`` builtin.
+- Enums will now be represented in TBAA metadata using their actual underlying

paulwalker-arm wrote:

Perhaps worth saying `Enums in C...` since the behaviour for C++ is different 
and remains unchanged.

https://github.com/llvm/llvm-project/pull/73326


[clang] [llvm] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-07 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/73326


[llvm] [clang] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-07 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.

This looks broadly good to me.  I suggest reverting the LangRef change because 
it doesn't add any new information relevant to LLVM IR.

https://github.com/llvm/llvm-project/pull/73326


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-06 Thread Paul Walker via cfe-commits


@@ -98,6 +99,13 @@ class ConstantInt final : public ConstantData {
   /// value. Otherwise return a ConstantInt for the given value.
   static Constant *get(Type *Ty, uint64_t V, bool IsSigned = false);
 
+  /// WARNING: Incomplete support, do not use. These methods exist for early
+  /// prototyping, for most use cases ConstantInt::get() should be used.
+  /// Return a ConstantInt with a splat of the given value.
+  static ConstantInt *getSplat(LLVMContext &Context, ElementCount EC,
+   const APInt &V);
+  static ConstantInt *getSplat(const VectorType *Ty, const APInt &V);

paulwalker-arm wrote:

I've moved the textual IR side of things to 
https://github.com/llvm/llvm-project/pull/74620 following the suggestion to 
have `splat(x)` be synonymous with `ConstantInt/FP::get()`.

https://github.com/llvm/llvm-project/pull/74502


[clang] [Clang] Emit TBAA info for enums in C (PR #73326)

2023-12-06 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Do you think it's worth adding something to the Clang release note?

https://github.com/llvm/llvm-project/pull/73326


[llvm] [clang] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -343,7 +343,7 @@ static bool verifyTripCount(Value *RHS, Loop *L,
 // If the RHS of the compare is equal to the backedge taken count we need
 // to add one to get the trip count.
 if (SCEVRHS == BackedgeTCExt || SCEVRHS == BackedgeTakenCount) {
-  ConstantInt *One = ConstantInt::get(ConstantRHS->getType(), 1);
+  ConstantInt *One = ConstantInt::get(ConstantRHS->getIntegerType(), 1);

paulwalker-arm wrote:

Could it be `static ConstantInt *get(IntegerType *Ty, uint64_t V, bool IsSigned 
= false);`?

I don't think I made this change up.  I wanted a mechanical change so just 
removed the overload and the compiler told me all the places that relied on it.

https://github.com/llvm/llvm-project/pull/74502


[llvm] [clang] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -98,6 +99,13 @@ class ConstantInt final : public ConstantData {
   /// value. Otherwise return a ConstantInt for the given value.
   static Constant *get(Type *Ty, uint64_t V, bool IsSigned = false);
 
+  /// WARNING: Incomplete support, do not use. These methods exist for early
+  /// prototyping, for most use cases ConstantInt::get() should be used.
+  /// Return a ConstantInt with a splat of the given value.
+  static ConstantInt *getSplat(LLVMContext &Context, ElementCount EC,
+   const APInt &V);
+  static ConstantInt *getSplat(const VectorType *Ty, const APInt &V);

paulwalker-arm wrote:

OK, thanks.  It seems I've incorrectly assumed that, from an IR parsing and 
printing point of view, there is a requirement for `IR_out == IR_in`. Your 
suggestion certainly means I can break the work up some more so I get that sorted.

https://github.com/llvm/llvm-project/pull/74502


[llvm] [clang] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -136,7 +144,11 @@ class ConstantInt final : public ConstantData {
   inline const APInt &getValue() const { return Val; }
 
   /// getBitWidth - Return the bitwidth of this constant.
-  unsigned getBitWidth() const { return Val.getBitWidth(); }
+  unsigned getBitWidth() const {
+assert(getType()->isIntegerTy() &&
+   "Returning the bitwidth of a vector constant is not support!");

paulwalker-arm wrote:

Fair enough.

https://github.com/llvm/llvm-project/pull/74502


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -343,7 +343,7 @@ static bool verifyTripCount(Value *RHS, Loop *L,
 // If the RHS of the compare is equal to the backedge taken count we need
 // to add one to get the trip count.
 if (SCEVRHS == BackedgeTCExt || SCEVRHS == BackedgeTakenCount) {
-  ConstantInt *One = ConstantInt::get(ConstantRHS->getType(), 1);
+  ConstantInt *One = ConstantInt::get(ConstantRHS->getIntegerType(), 1);

paulwalker-arm wrote:

Much the same reason as with `getBitWidth()`, but the need for immediate change 
is greater, so I renamed this method to keep changes to existing code paths 
minimal whilst still providing a route to trigger asserts once testing is 
expanded.

I did consider just removing the method and adding the necessary casts but 
figured somebody went to the trouble of adding the override in the first place 
so I maintained this but under a modified name. Do you think the shorthand is 
not worth it?

https://github.com/llvm/llvm-project/pull/74502


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -136,7 +144,11 @@ class ConstantInt final : public ConstantData {
   inline const APInt &getValue() const { return Val; }
 
   /// getBitWidth - Return the bitwidth of this constant.
-  unsigned getBitWidth() const { return Val.getBitWidth(); }
+  unsigned getBitWidth() const {
+assert(getType()->isIntegerTy() &&
+   "Returning the bitwidth of a vector constant is not support!");

paulwalker-arm wrote:

Ultimately I think this should be more explicit, for example 
`getScalarBitWidth()`.  For this patch though the need was tiny so I made this 
change purely to trigger asserts at this level when failure cases are hit once 
I start expanding the testing.

https://github.com/llvm/llvm-project/pull/74502


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits


@@ -98,6 +99,13 @@ class ConstantInt final : public ConstantData {
   /// value. Otherwise return a ConstantInt for the given value.
   static Constant *get(Type *Ty, uint64_t V, bool IsSigned = false);
 
+  /// WARNING: Incomplete support, do not use. These methods exist for early
+  /// prototyping, for most use cases ConstantInt::get() should be used.
+  /// Return a ConstantInt with a splat of the given value.
+  static ConstantInt *getSplat(LLVMContext &Context, ElementCount EC,
+   const APInt &V);
+  static ConstantInt *getSplat(const VectorType *Ty, const APInt &V);

paulwalker-arm wrote:

We're in a transition period and thus I need an absolute way to create a vector 
ConstantInt (e.g. when parsing ll files and bitcode). Today 
`ConstantInt::get()` returns other `Constant` types to represent splats and 
that must be maintained for correctness because there are many code paths for 
which a vector ConstantInt will break.

https://github.com/llvm/llvm-project/pull/74502


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

The PR contains a couple of commits that, unless there's disagreement, I'm 
tempted to land directly but have held off just in case there's any buyer's 
remorse about extending ConstantInt/ConstantFP to cover vector types.

For similar reasons I've not updated the LangRef as I don't really want people 
using the support until at least code generation works.

https://github.com/llvm/llvm-project/pull/74502


[clang] [llvm] [LLVM][IR] Add native vector support to ConstantInt & ConstantFP. (PR #74502)

2023-12-05 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/74502

[LLVM][IR] Add native vector support to ConstantInt & ConstantFP.

NOTE: For brevity the following talks about ConstantInt but
everything extends to cover ConstantFP as well.

Whilst ConstantInt::get() supports the creation of vectors whereby
each lane has the same value, it achieves this via other constants:

  * ConstantVector for fixed-length vectors
  * ConstantExprs for scalable vectors

ConstantExprs are being deprecated and ConstantVector is not space
efficient for larger vector types. This patch introduces an
alternative by allowing ConstantInt to natively support vector
splats via the IR syntax:

   splat(ty <imm>)

More specifically:

 * IR parsing is extended to support the new syntax.
 * ConstantInt gains the interface getSplat().
 * LLVMContext is extended to map <EC, APInt>->ConstantInt.
 * BitCodeReader/Writer is extended to support vector types.

Whilst this patch adds the base support, more work is required
before it's production ready. For example, there are likely to be
many places where isa<ConstantInt> assumes a scalar type. Accordingly
the default behaviour of ConstantInt::get() remains unchanged but a
set of flags is added to allow wider testing and thus help with the
migration:

  --use-constant-int-for-fixed-length-splat
  --use-constant-fp-for-fixed-length-splat
  --use-constant-int-for-scalable-splat
  --use-constant-fp-for-scalable-splat

NOTE: No change is required to the bitcode format because types and
values are handled separately.

NOTE: Code generation doesn't work out-of-the-box but the issues look
limited to calls to ConstantInt::getBitWidth() that will need to be
ported.

>From 4c999f2e134ffc0385ec18ecbf1a80a696b7d095 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 29 Nov 2023 14:45:06 +
Subject: [PATCH 1/3] [NFC][LLVM][IR] Rename ConstantInt's getType() to
 getIntegerType().

Also adds an assert to ConstantInt::getBitWidth() to ensure it's
only called for integer types. This will have no effect today but
will aid with problem solving when ConstantInt is extended to
support vector types.
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  7 ---
 llvm/include/llvm/IR/Constants.h  | 12 +++-
 llvm/lib/Analysis/InstructionSimplify.cpp |  2 +-
 llvm/lib/IR/ConstantFold.cpp  |  2 +-
 llvm/lib/IR/Verifier.cpp  |  4 ++--
 .../Hexagon/HexagonLoopIdiomRecognition.cpp   |  6 +++---
 llvm/lib/Transforms/IPO/OpenMPOpt.cpp | 15 ---
 .../InstCombine/InstCombineVectorOps.cpp  |  4 ++--
 llvm/lib/Transforms/Scalar/ConstantHoisting.cpp   |  6 +++---
 llvm/lib/Transforms/Scalar/LoopFlatten.cpp|  2 +-
 llvm/lib/Transforms/Utils/SimplifyCFG.cpp |  8 
 11 files changed, 40 insertions(+), 28 deletions(-)

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 65d9862621061..8dc828abf8aec 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -3218,7 +3218,7 @@ RValue CodeGenFunction::EmitBuiltinExpr(const GlobalDecl 
GD, unsigned BuiltinID,
 Value *AlignmentValue = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast<ConstantInt>(AlignmentValue);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(PtrValue, Ptr,
@@ -17010,7 +17010,7 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Value *Op1 = EmitScalarExpr(E->getArg(1));
 ConstantInt *AlignmentCI = cast<ConstantInt>(Op0);
 if (AlignmentCI->getValue().ugt(llvm::Value::MaximumAlignment))
-  AlignmentCI = ConstantInt::get(AlignmentCI->getType(),
+  AlignmentCI = ConstantInt::get(AlignmentCI->getIntegerType(),
  llvm::Value::MaximumAlignment);
 
 emitAlignmentAssumption(Op1, E->getArg(1),
@@ -17248,7 +17248,8 @@ Value *CodeGenFunction::EmitPPCBuiltinExpr(unsigned 
BuiltinID,
 Op0, llvm::FixedVectorType::get(ConvertType(E->getType()), 2));
 
 if (getTarget().isLittleEndian())
-  Index = ConstantInt::get(Index->getType(), 1 - Index->getZExtValue());
+  Index =
+  ConstantInt::get(Index->getIntegerType(), 1 - Index->getZExtValue());
 
 return Builder.CreateExtractElement(Unpacked, Index);
   }
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index 2f7fc5652c2cd..7bd8bfc477d78 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -136,7 +136,11 @@ class ConstantInt final : public ConstantData {
   inline const APInt &getValue() const { return Val; }
 
   /// getBitWidth - Return the bitwidth of this constant.
-  unsigned getBitWidth() const { return 

[clang] [Clang][AArch64] Add fix vector types to header into SVE (PR #73258)

2023-11-24 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Bike shedding here, but shouldn't the header be arm_vector_types.h (i.e. 
plural)? Or even arm_acle_types.h if we think it might grow more uses.

https://github.com/llvm/llvm-project/pull/73258


[llvm] [clang] [mlir] [llvm][TypeSize] Fix addition/subtraction in TypeSize. (PR #72979)

2023-11-21 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

> The static functions renaming is going to produce a lot of noise but I guess 
> this is too late already... Shall we revert to keep the change minimal? 
> @nikic @paulwalker-arm WDYT ?

For my money the functions were originally named correctly and then 
erroneously changed to (a) diverge from the coding standard, and (b) unknowingly 
introduce a bug caused by the new naming.  For both reasons I'm happy to revert 
to the original naming via this patch.

https://github.com/llvm/llvm-project/pull/72979


[llvm] [clang] [mlir] [llvm][TypeSize] Fix addition/subtraction in TypeSize. (PR #72979)

2023-11-21 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm approved this pull request.

It looks like there's a code formatter issue with the MLIR change but otherwise 
this looks good to me.

https://github.com/llvm/llvm-project/pull/72979


[llvm] [clang] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-03 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/70970


[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-03 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

Rebased after pushing NFC refactoring commit.

https://github.com/llvm/llvm-project/pull/70970


[llvm] [clang] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-03 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/70970

>From 4bd5f30bf5f3f55cbca0c49a612cf0fa0122046e Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 1 Nov 2023 17:33:10 +
Subject: [PATCH] [LLVM][AArch64] Add ASM constraints for reduced GPR register
 ranges.

The patch adds the following ASM constraints:
  Uci => w8-w11/x8-x11
  Ucj => w12-w15/x12-x15

These constraints are required for SME load/store instructions
where a reduced set of GPRs are used to specify ZA array vectors.

NOTE: GCC has agreed to use the same constraint syntax.
---
 clang/docs/ReleaseNotes.rst   |  2 +
 clang/lib/Basic/Targets/AArch64.cpp   |  6 ++
 clang/test/CodeGen/aarch64-inline-asm.c   | 15 
 llvm/docs/LangRef.rst |  2 +
 .../Target/AArch64/AArch64ISelLowering.cpp| 34 +++-
 .../AArch64/inlineasm-Uc-constraint.ll| 78 +++
 6 files changed, 136 insertions(+), 1 deletion(-)
 create mode 100644 llvm/test/CodeGen/AArch64/inlineasm-Uc-constraint.ll

diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index 4696836b3a00caa..afe7e2e79c2d087 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -738,6 +738,8 @@ Arm and AArch64 Support
   This affects C++ functions with SVE ACLE parameters. Clang will use the old
   manglings if ``-fclang-abi-compat=17`` or lower is  specified.
 
+- New AArch64 asm constraints have been added for r8-r11(Uci) and r12-r15(Ucj).
+
 Android Support
 ^^^
 
diff --git a/clang/lib/Basic/Targets/AArch64.cpp 
b/clang/lib/Basic/Targets/AArch64.cpp
index fe5a7af97b7753c..c71af71eba60ce2 100644
--- a/clang/lib/Basic/Targets/AArch64.cpp
+++ b/clang/lib/Basic/Targets/AArch64.cpp
@@ -1306,6 +1306,12 @@ bool AArch64TargetInfo::validateAsmConstraint(
   Name += 2;
   return true;
 }
+if (Name[1] == 'c' && (Name[2] == 'i' || Name[2] == 'j')) {
+  // Gpr registers ("Uci"=w8-11, "Ucj"=w12-15)
+  Info.setAllowsRegister();
+  Name += 2;
+  return true;
+}
 // Ump: A memory address suitable for ldp/stp in SI, DI, SF and DF modes.
 // Utf: A memory address suitable for ldp/stp in TF mode.
 // Usa: An absolute symbolic address.
diff --git a/clang/test/CodeGen/aarch64-inline-asm.c 
b/clang/test/CodeGen/aarch64-inline-asm.c
index 439fb9e33f9ae15..75e9a8c46b87692 100644
--- a/clang/test/CodeGen/aarch64-inline-asm.c
+++ b/clang/test/CodeGen/aarch64-inline-asm.c
@@ -80,3 +80,18 @@ void test_tied_earlyclobber(void) {
  asm("" : "+&r"(a));
   // CHECK: call i32 asm "", "=&{x1},0"(i32 %0)
 }
+
+void test_reduced_gpr_constraints(int var32, long var64) {
+  asm("add w0, w0, %0" : : "Uci"(var32) : "w0");
+// CHECK: [[ARG1:%.+]] = load i32, ptr
+// CHECK: call void asm sideeffect "add w0, w0, $0", "@3Uci,~{w0}"(i32 
[[ARG1]])
+  asm("add x0, x0, %0" : : "Uci"(var64) : "x0");
+// CHECK: [[ARG1:%.+]] = load i64, ptr
+// CHECK: call void asm sideeffect "add x0, x0, $0", "@3Uci,~{x0}"(i64 
[[ARG1]])
+  asm("add w0, w0, %0" : : "Ucj"(var32) : "w0");
+// CHECK: [[ARG2:%.+]] = load i32, ptr
+// CHECK: call void asm sideeffect "add w0, w0, $0", "@3Ucj,~{w0}"(i32 
[[ARG2]])
+  asm("add x0, x0, %0" : : "Ucj"(var64) : "x0");
+// CHECK: [[ARG2:%.+]] = load i64, ptr
+// CHECK: call void asm sideeffect "add x0, x0, $0", "@3Ucj,~{x0}"(i64 
[[ARG2]])
+}
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 6fd483276a301c7..1e9d42ed0a06079 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -5094,6 +5094,8 @@ AArch64:
   offsets). (However, LLVM currently does this for the ``m`` constraint as
   well.)
 - ``r``: A 32 or 64-bit integer register (W* or X*).
+- ``Uci``: Like r, but restricted to registers 8 to 11 inclusive.
+- ``Ucj``: Like r, but restricted to registers 12 to 15 inclusive.
 - ``w``: A 32, 64, or 128-bit floating-point, SIMD or SVE vector register.
 - ``x``: Like w, but restricted to registers 0 to 15 inclusive.
 - ``y``: Like w, but restricted to SVE vector registers Z0 to Z7 inclusive.
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp 
b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 94901c2d1a65688..f5193a9f2adf30c 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -10195,6 +10195,31 @@ getPredicateRegisterClass(PredicateConstraint 
Constraint, EVT VT) {
   llvm_unreachable("Missing PredicateConstraint!");
 }
 
+enum class ReducedGprConstraint { Uci, Ucj };
+
+static std::optional<ReducedGprConstraint>
+parseReducedGprConstraint(StringRef Constraint) {
+  return StringSwitch<std::optional<ReducedGprConstraint>>(Constraint)
+  .Case("Uci", ReducedGprConstraint::Uci)
+  .Case("Ucj", ReducedGprConstraint::Ucj)
+  .Default(std::nullopt);
+}
+
+static const TargetRegisterClass *
+getReducedGprRegisterClass(ReducedGprConstraint Constraint, EVT VT) {
+  if (!VT.isScalarInteger() || VT.getFixedSizeInBits() > 64)
+return nullptr;
+
+  switch 

[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-02 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

* Rebased.
* Extended coverage to include all typical scalar types.
* Updated the LangRef to document the new constraints.
* Added an entry to the release note.

https://github.com/llvm/llvm-project/pull/70970


[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-02 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm edited 
https://github.com/llvm/llvm-project/pull/70970


[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-02 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm updated 
https://github.com/llvm/llvm-project/pull/70970

>From 500e5007a33d4ee3d594ef5ce58f8894c231f3dc Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 1 Nov 2023 16:27:29 +
Subject: [PATCH 1/2] [NFC][LLVM][SVE] Refactor predicate register ASM
 constraint parsing to use std::optional.

---
 .../Target/AArch64/AArch64ISelLowering.cpp| 26 +--
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp 
b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 291f0c8c5d991c6..94901c2d1a65688 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -10163,14 +10163,15 @@ const char 
*AArch64TargetLowering::LowerXConstraint(EVT ConstraintVT) const {
   return "r";
 }
 
-enum PredicateConstraint { Uph, Upl, Upa, Invalid };
+enum class PredicateConstraint { Uph, Upl, Upa };
 
-static PredicateConstraint parsePredicateConstraint(StringRef Constraint) {
-  return StringSwitch<PredicateConstraint>(Constraint)
+static std::optional<PredicateConstraint>
+parsePredicateConstraint(StringRef Constraint) {
+  return StringSwitch<std::optional<PredicateConstraint>>(Constraint)
   .Case("Uph", PredicateConstraint::Uph)
   .Case("Upl", PredicateConstraint::Upl)
   .Case("Upa", PredicateConstraint::Upa)
-  .Default(PredicateConstraint::Invalid);
+  .Default(std::nullopt);
 }
 
 static const TargetRegisterClass *
@@ -10180,8 +10181,6 @@ getPredicateRegisterClass(PredicateConstraint 
Constraint, EVT VT) {
 return nullptr;
 
   switch (Constraint) {
-  default:
-return nullptr;
   case PredicateConstraint::Uph:
 return VT == MVT::aarch64svcount ? &AArch64::PNR_p8to15RegClass
  : &AArch64::PPR_p8to15RegClass;
@@ -10192,6 +10191,8 @@ getPredicateRegisterClass(PredicateConstraint 
Constraint, EVT VT) {
 return VT == MVT::aarch64svcount ? &AArch64::PNRRegClass
  : &AArch64::PPRRegClass;
   }
+
+  llvm_unreachable("Missing PredicateConstraint!");
 }
 
 // The set of cc code supported is from
@@ -10289,9 +10290,8 @@ AArch64TargetLowering::getConstraintType(StringRef 
Constraint) const {
 case 'S': // A symbolic address
   return C_Other;
 }
-  } else if (parsePredicateConstraint(Constraint) !=
- PredicateConstraint::Invalid)
-  return C_RegisterClass;
+  } else if (parsePredicateConstraint(Constraint))
+return C_RegisterClass;
   else if (parseConstraintCode(Constraint) != AArch64CC::Invalid)
 return C_Other;
   return TargetLowering::getConstraintType(Constraint);
@@ -10325,7 +10325,7 @@ AArch64TargetLowering::getSingleConstraintMatchWeight(
 weight = CW_Constant;
 break;
   case 'U':
-if (parsePredicateConstraint(constraint) != PredicateConstraint::Invalid)
+if (parsePredicateConstraint(constraint))
   weight = CW_Register;
 break;
   }
@@ -10382,9 +10382,9 @@ AArch64TargetLowering::getRegForInlineAsmConstraint(
   break;
 }
   } else {
-PredicateConstraint PC = parsePredicateConstraint(Constraint);
-if (const TargetRegisterClass *RegClass = getPredicateRegisterClass(PC, 
VT))
-  return std::make_pair(0U, RegClass);
+if (const auto PC = parsePredicateConstraint(Constraint))
+  if (const auto *RegClass = getPredicateRegisterClass(*PC, VT))
+return std::make_pair(0U, RegClass);
   }
   if (StringRef("{cc}").equals_insensitive(Constraint) ||
   parseConstraintCode(Constraint) != AArch64CC::Invalid)

From c585766e7582dc152f3c7b057205533ff8c21390 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 1 Nov 2023 17:33:10 +
Subject: [PATCH 2/2] [LLVM][AArch64] Add ASM constraints for reduced GPR
 register ranges.

The patch adds the following ASM constraints:
  Uci => w8-w11/x8-x11
  Ucj => w12-w15/x12-x15

These constraints are required for SME load/store instructions
where a reduced set of GPRs are used to specify ZA array vectors.

NOTE: GCC has agreed to use the same constraint syntax.
---
 clang/docs/ReleaseNotes.rst   |  2 +
 clang/lib/Basic/Targets/AArch64.cpp   |  6 ++
 clang/test/CodeGen/aarch64-inline-asm.c   | 15 
 llvm/docs/LangRef.rst |  2 +
 .../Target/AArch64/AArch64ISelLowering.cpp| 34 +++-
 .../AArch64/inlineasm-Uc-constraint.ll| 78 +++
 6 files changed, 136 insertions(+), 1 deletion(-)
 create mode 100644 llvm/test/CodeGen/AArch64/inlineasm-Uc-constraint.ll

diff --git a/clang/docs/ReleaseNotes.rst b/clang/docs/ReleaseNotes.rst
index 4696836b3a00caa..afe7e2e79c2d087 100644
--- a/clang/docs/ReleaseNotes.rst
+++ b/clang/docs/ReleaseNotes.rst
@@ -738,6 +738,8 @@ Arm and AArch64 Support
   This affects C++ functions with SVE ACLE parameters. Clang will use the old
   manglings if ``-fclang-abi-compat=17`` or lower is  specified.
 
+- New AArch64 asm constraints have been added for r8-r11(Uci) and r12-r15(Ucj).
+
 Android Support
 ^^^
 
diff --git 

[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-01 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

The first commit contains refactoring that I'll land separately assuming the 
reviewers are happy.  The new functionality is implemented by the second commit.

https://github.com/llvm/llvm-project/pull/70970


[clang] [llvm] [LLVM][AArch64] Add ASM constraints for reduced GPR register ranges. (PR #70970)

2023-11-01 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/70970

[LLVM][AArch64] Add ASM constraints for reduced GPR register ranges.

The patch adds the following ASM constraints:
  Uci => w8-w11
  Ucj => w12-w15

These constraints are required for SME load/store instructions
where a reduced set of GPRs are used to specify ZA array vectors.

NOTE: GCC has agreed to use the same constraint syntax.
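As a hypothetical illustration of what the new constraints enable (not taken from the patch, and compilable only with an AArch64 Clang that includes this change), the inline-asm usage might look like:

```c
/* Hypothetical sketch: "Uci" restricts the compiler's register choice
 * to w8-w11/x8-x11, as some SME load/store forms require.  The asm body
 * is illustrative only; requires an AArch64 target with this patch. */
void copy_via_low_gpr(long *dst, const long *src) {
  long tmp;
  __asm__ volatile("ldr %0, [%1]\n\t"
                   "str %0, [%2]"
                   : "=&Uci"(tmp)          /* temp in x8-x11 */
                   : "r"(src), "r"(dst)
                   : "memory");
}
```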

From 7a772d1ad2c9bcdddefccaa25a73f708ab4fe50e Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 1 Nov 2023 16:27:29 +
Subject: [PATCH 1/2] [NFC][LLVM][SVE] Refactor predicate register ASM
 constraint parsing to use std::optional.

---
 .../Target/AArch64/AArch64ISelLowering.cpp| 26 +--
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp 
b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index d00db82c9e49ac2..44183a1fa48abab 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -10161,14 +10161,15 @@ const char 
*AArch64TargetLowering::LowerXConstraint(EVT ConstraintVT) const {
   return "r";
 }
 
-enum PredicateConstraint { Uph, Upl, Upa, Invalid };
+enum class PredicateConstraint { Uph, Upl, Upa };
 
-static PredicateConstraint parsePredicateConstraint(StringRef Constraint) {
-  return StringSwitch<PredicateConstraint>(Constraint)
+static std::optional<PredicateConstraint>
+parsePredicateConstraint(StringRef Constraint) {
+  return StringSwitch<std::optional<PredicateConstraint>>(Constraint)
   .Case("Uph", PredicateConstraint::Uph)
   .Case("Upl", PredicateConstraint::Upl)
   .Case("Upa", PredicateConstraint::Upa)
-  .Default(PredicateConstraint::Invalid);
+  .Default(std::nullopt);
 }
 
 static const TargetRegisterClass *
@@ -10178,8 +10179,6 @@ getPredicateRegisterClass(PredicateConstraint 
Constraint, EVT VT) {
 return nullptr;
 
   switch (Constraint) {
-  default:
-return nullptr;
   case PredicateConstraint::Uph:
 return VT == MVT::aarch64svcount ? &AArch64::PNR_p8to15RegClass
  : &AArch64::PPR_p8to15RegClass;
@@ -10190,6 +10189,8 @@ getPredicateRegisterClass(PredicateConstraint 
Constraint, EVT VT) {
 return VT == MVT::aarch64svcount ? &AArch64::PNRRegClass
  : &AArch64::PPRRegClass;
   }
+
+  llvm_unreachable("Missing PredicateConstraint!");
 }
 
 // The set of cc code supported is from
@@ -10287,9 +10288,8 @@ AArch64TargetLowering::getConstraintType(StringRef 
Constraint) const {
 case 'S': // A symbolic address
   return C_Other;
 }
-  } else if (parsePredicateConstraint(Constraint) !=
- PredicateConstraint::Invalid)
-  return C_RegisterClass;
+  } else if (parsePredicateConstraint(Constraint))
+return C_RegisterClass;
   else if (parseConstraintCode(Constraint) != AArch64CC::Invalid)
 return C_Other;
   return TargetLowering::getConstraintType(Constraint);
@@ -10323,7 +10323,7 @@ AArch64TargetLowering::getSingleConstraintMatchWeight(
 weight = CW_Constant;
 break;
   case 'U':
-if (parsePredicateConstraint(constraint) != PredicateConstraint::Invalid)
+if (parsePredicateConstraint(constraint))
   weight = CW_Register;
 break;
   }
@@ -10380,9 +10380,9 @@ AArch64TargetLowering::getRegForInlineAsmConstraint(
   break;
 }
   } else {
-PredicateConstraint PC = parsePredicateConstraint(Constraint);
-if (const TargetRegisterClass *RegClass = getPredicateRegisterClass(PC, 
VT))
-  return std::make_pair(0U, RegClass);
+if (const auto PC = parsePredicateConstraint(Constraint))
+  if (const auto *RegClass = getPredicateRegisterClass(*PC, VT))
+return std::make_pair(0U, RegClass);
   }
   if (StringRef("{cc}").equals_insensitive(Constraint) ||
   parseConstraintCode(Constraint) != AArch64CC::Invalid)

From 50c72be6e4c8ff508d8ceaacc7aa37ff2aef1cca Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Wed, 1 Nov 2023 17:33:10 +
Subject: [PATCH 2/2] [LLVM][AArch64] Add ASM constraints for reduced GPR
 register ranges.

The patch adds the following ASM constraints:
  Uci => w8-w11
  Ucj => w12-w15

These constraints are required for SME load/store instructions
where a reduced set of GPRs are used to specify ZA array vectors.

NOTE: GCC has agreed to use the same constraint syntax.
---
 clang/lib/Basic/Targets/AArch64.cpp   |  6 
 clang/test/CodeGen/aarch64-inline-asm.c   |  9 +
 .../Target/AArch64/AArch64ISelLowering.cpp| 34 ++-
 .../AArch64/inlineasm-Uc-constraint.ll| 28 +++
 4 files changed, 76 insertions(+), 1 deletion(-)
 create mode 100644 llvm/test/CodeGen/AArch64/inlineasm-Uc-constraint.ll

diff --git a/clang/lib/Basic/Targets/AArch64.cpp 
b/clang/lib/Basic/Targets/AArch64.cpp
index fe5a7af97b7753c..d5834368e8970db 100644
--- a/clang/lib/Basic/Targets/AArch64.cpp
+++ b/clang/lib/Basic/Targets/AArch64.cpp
@@ -1306,6 +1306,12 @@ bool 

[clang] [CXXNameMangler] Correct the mangling of SVE ACLE types within function names. (PR #69460)

2023-10-24 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/69460


[clang] [CXXNameMangler] Correct the mangling of SVE ACLE types within function names. (PR #69460)

2023-10-19 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

I've updated the release note to remove bogus references to a function's return 
type affecting its name mangling.

https://github.com/llvm/llvm-project/pull/69460


[clang] [CXXNameMangler] Correct the mangling of SVE ACLE types within function names. (PR #69460)

2023-10-18 Thread Paul Walker via cfe-commits

paulwalker-arm wrote:

To aid review I've split the patch into several commits mainly so the 
mechanical update of 200+ ACLE tests is separate from the much smaller code 
changes.  Given this is an ABI break I'd rather land the series as a single 
commit.

https://github.com/llvm/llvm-project/pull/69460


[clang] [SVE ACLE] Allow default zero initialisation for svcount_t. (PR #69321)

2023-10-18 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm closed 
https://github.com/llvm/llvm-project/pull/69321


[clang] [SVE ACLE] Allow default zero initialisation for svcount_t. (PR #69321)

2023-10-17 Thread Paul Walker via cfe-commits

https://github.com/paulwalker-arm created 
https://github.com/llvm/llvm-project/pull/69321

This matches the behaviour of the other SVE ACLE types.
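A hypothetical sketch of what the change permits (requires an AArch64 Clang with SME2 support containing this patch; `__SVCount_t` is the builtin behind the ACLE `svcount_t` typedef):

```cpp
// Hypothetical illustration, not part of the patch: default (zero)
// initialisation of __SVCount_t now compiles, matching the behaviour
// of the existing SVE ACLE types.  AArch64-only.
void zero_init_examples() {
  __SVCount_t cnt{};  // newly accepted by this patch
  __SVBool_t pg{};    // already worked for existing SVE types
}
```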

From d036844f5006adecbd5b0ae4fbc3014d43ef3992 Mon Sep 17 00:00:00 2001
From: Paul Walker 
Date: Tue, 17 Oct 2023 11:57:28 +0100
Subject: [PATCH] [SVE ACLE] Allow default zero initialisation for svcount_t.

---
 .../CodeGenCXX/aarch64-sve-vector-init.cpp | 18 ++
 .../SelectionDAG/SelectionDAGBuilder.cpp   |  6 ++
 llvm/lib/IR/Type.cpp   |  3 ++-
 llvm/test/CodeGen/AArch64/sve-zeroinit.ll  |  7 +++
 4 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/clang/test/CodeGenCXX/aarch64-sve-vector-init.cpp 
b/clang/test/CodeGenCXX/aarch64-sve-vector-init.cpp
index 2088e80acfc80f4..464275f164c2a54 100644
--- a/clang/test/CodeGenCXX/aarch64-sve-vector-init.cpp
+++ b/clang/test/CodeGenCXX/aarch64-sve-vector-init.cpp
@@ -55,6 +55,7 @@
 // CHECK-NEXT:[[B8:%.*]] = alloca <vscale x 16 x i1>, align 2
 // CHECK-NEXT:[[B8X2:%.*]] = alloca <vscale x 32 x i1>, align 2
 // CHECK-NEXT:[[B8X4:%.*]] = alloca <vscale x 64 x i1>, align 2
+// CHECK-NEXT:[[CNT:%.*]] = alloca target("aarch64.svcount"), align 2
 // CHECK-NEXT:store <vscale x 16 x i8> zeroinitializer, ptr [[S8]], align 16
 // CHECK-NEXT:store <vscale x 8 x i16> zeroinitializer, ptr [[S16]], align 16
 // CHECK-NEXT:store <vscale x 4 x i32> zeroinitializer, ptr [[S32]], align 16
@@ -106,6 +107,7 @@
 // CHECK-NEXT:store <vscale x 16 x i1> zeroinitializer, ptr [[B8]], align 2
 // CHECK-NEXT:store <vscale x 32 x i1> zeroinitializer, ptr [[B8X2]], align 2
 // CHECK-NEXT:store <vscale x 64 x i1> zeroinitializer, ptr [[B8X4]], align 2
+// CHECK-NEXT:store target("aarch64.svcount") zeroinitializer, ptr 
[[CNT]], align 2
 // CHECK-NEXT:ret void
 //
 void test_locals(void) {
@@ -164,6 +166,8 @@ void test_locals(void) {
   __SVBool_t b8{};
   __clang_svboolx2_t b8x2{};
   __clang_svboolx4_t b8x4{};
+
+  __SVCount_t cnt{};
 }
 
 // CHECK-LABEL: define dso_local void @_Z12test_copy_s8u10__SVInt8_t
@@ -879,3 +883,17 @@ void test_copy_b8x2(__clang_svboolx2_t a) {
 void test_copy_b8x4(__clang_svboolx4_t a) {
   __clang_svboolx4_t b{a};
 }
+
+// CHECK-LABEL: define dso_local void @_Z13test_copy_cntu11__SVCount_t
+// CHECK-SAME: (target("aarch64.svcount") [[A:%.*]]) #[[ATTR0]] {
+// CHECK-NEXT:  entry:
+// CHECK-NEXT:[[A_ADDR:%.*]] = alloca target("aarch64.svcount"), align 2
+// CHECK-NEXT:[[B:%.*]] = alloca target("aarch64.svcount"), align 2
+// CHECK-NEXT:store target("aarch64.svcount") [[A]], ptr [[A_ADDR]], align 
2
+// CHECK-NEXT:[[TMP0:%.*]] = load target("aarch64.svcount"), ptr 
[[A_ADDR]], align 2
+// CHECK-NEXT:store target("aarch64.svcount") [[TMP0]], ptr [[B]], align 2
+// CHECK-NEXT:ret void
+//
+void test_copy_cnt(__SVCount_t a) {
+  __SVCount_t b{a};
+}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp 
b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 4bb0ba6f083109b..eabc76334fae1f2 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -1738,6 +1738,12 @@ SDValue SelectionDAGBuilder::getValueImpl(const Value 
*V) {
 if (const auto *NC = dyn_cast<NoCFIValue>(C))
   return getValue(NC->getGlobalValue());
 
+if (VT == MVT::aarch64svcount) {
+  assert(C->isNullValue() && "Can only zero this target type!");
+  return DAG.getNode(ISD::BITCAST, getCurSDLoc(), VT,
+ DAG.getConstant(0, getCurSDLoc(), MVT::nxv16i1));
+}
+
 VectorType *VecTy = cast<VectorType>(V->getType());
 
 // Now that we know the number and type of the elements, get that number of
diff --git a/llvm/lib/IR/Type.cpp b/llvm/lib/IR/Type.cpp
index 97febcd99b4114f..006278d16484c1c 100644
--- a/llvm/lib/IR/Type.cpp
+++ b/llvm/lib/IR/Type.cpp
@@ -841,7 +841,8 @@ static TargetTypeInfo getTargetTypeInfo(const TargetExtType 
*Ty) {
 
   // Opaque types in the AArch64 name space.
   if (Name == "aarch64.svcount")
-return TargetTypeInfo(ScalableVectorType::get(Type::getInt1Ty(C), 16));
+return TargetTypeInfo(ScalableVectorType::get(Type::getInt1Ty(C), 16),
+  TargetExtType::HasZeroInit);
 
   return TargetTypeInfo(Type::getVoidTy(C));
 }
diff --git a/llvm/test/CodeGen/AArch64/sve-zeroinit.ll 
b/llvm/test/CodeGen/AArch64/sve-zeroinit.ll
index c436bb7f822b7a3..eab39d0ef402526 100644
--- a/llvm/test/CodeGen/AArch64/sve-zeroinit.ll
+++ b/llvm/test/CodeGen/AArch64/sve-zeroinit.ll
@@ -86,3 +86,10 @@ define <vscale x 16 x i1> @test_zeroinit_16xi1() {
 ; CHECK-NEXT:  ret
   ret <vscale x 16 x i1> zeroinitializer
 }
+
+define target("aarch64.svcount") @test_zeroinit_svcount() 
"target-features"="+sme2" {
+; CHECK-LABEL: test_zeroinit_svcount
+; CHECK:   pfalse p0.b
+; CHECK-NEXT:  ret
+  ret target("aarch64.svcount") zeroinitializer
+}


