[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/efriedma-quic approved this pull request. LGTM https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/RKSimon commented: LGTM - but we should have a frontend specialist approve https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
frasercrmck wrote: ping https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck edited https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); frasercrmck wrote: I don't have strong opinions either way. I've added some CodeGen tests for bitfields so at least the current behaviour is tested. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14604,57 +14611,63 @@ bool Sema::PrepareBuiltinElementwiseMathOneArgCall(CallExpr *TheCall) { } bool Sema::BuiltinElementwiseMath(CallExpr *TheCall, bool FPOnly) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res, FPOnly)) -return true; - TheCall->setType(Res); - return false; + if (auto Res = BuiltinVectorMath(TheCall, FPOnly); Res.has_value()) { +TheCall->setType(*Res); +return false; + } + return true; } bool Sema::BuiltinVectorToScalarMath(CallExpr *TheCall) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res)) + std::optional Res = BuiltinVectorMath(TheCall); + if (!Res) return true; - if (auto *VecTy0 = Res->getAs()) + if (auto *VecTy0 = (*Res)->getAs()) TheCall->setType(VecTy0->getElementType()); else -TheCall->setType(Res); +TheCall->setType(*Res); return false; } -bool Sema::BuiltinVectorMath(CallExpr *TheCall, QualType &Res, bool FPOnly) { +std::optional Sema::BuiltinVectorMath(CallExpr *TheCall, +bool FPOnly) { if (checkArgCount(TheCall, 2)) -return true; +return std::nullopt; - ExprResult A = TheCall->getArg(0); - ExprResult B = TheCall->getArg(1); - // Do standard promotions between the two arguments, returning their common - // type. - Res = UsualArithmeticConversions(A, B, TheCall->getExprLoc(), ACK_Comparison); - if (A.isInvalid() || B.isInvalid()) -return true; + checkEnumArithmeticConversions(TheCall->getArg(0), TheCall->getArg(1), + TheCall->getExprLoc(), ACK_Comparison); frasercrmck wrote: Alright, I've forbidden mixed enum types and have added some tests for them. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck updated https://github.com/llvm/llvm-project/pull/119423

>From 3f007d702922db63e128e3c0f72dff2f600e0879 Mon Sep 17 00:00:00 2001
From: Fraser Cormack
Date: Tue, 10 Dec 2024 17:41:07 +
Subject: [PATCH 1/8] [clang] Fix sub-integer __builtin_elementwise_(add|sub)_sat

These builtins would unconditionally perform the usual arithmetic
conversions on promotable scalar integer arguments. This meant in practice
that char and short arguments were promoted to int, and the operation was
truncated back down afterwards. This in effect silently replaced a
saturating add/sub with a regular add/sub, which is not intuitive (or
intended) behaviour.

With this patch, promotable scalar integer types are not promoted to int,
but are kept intact. If the types differ, the smaller integer is promoted
to the larger one. The signedness of the operation matches the larger
integer type. No change is made to vector types, which are both not
promoted and whose element types must match.
---
 clang/lib/Sema/SemaChecking.cpp              | 38 +++
 .../test/CodeGen/builtins-elementwise-math.c | 98 ++-
 2 files changed, 134 insertions(+), 2 deletions(-)

diff --git a/clang/lib/Sema/SemaChecking.cpp b/clang/lib/Sema/SemaChecking.cpp
index ce846ae88c38b4..9d986a22945f78 100644
--- a/clang/lib/Sema/SemaChecking.cpp
+++ b/clang/lib/Sema/SemaChecking.cpp
@@ -2765,6 +2765,44 @@ Sema::CheckBuiltinFunctionCall(FunctionDecl *FDecl, unsigned BuiltinID,
   // types only.
   case Builtin::BI__builtin_elementwise_add_sat:
   case Builtin::BI__builtin_elementwise_sub_sat: {
+    if (checkArgCount(TheCall, 2))
+      return ExprError();
+    ExprResult LHS = TheCall->getArg(0);
+    ExprResult RHS = TheCall->getArg(1);
+    QualType LHSType = LHS.get()->getType().getUnqualifiedType();
+    QualType RHSType = RHS.get()->getType().getUnqualifiedType();
+    // If both LHS/RHS are promotable integer types, do not perform the usual
+    // conversions - we must keep the saturating operation at the correct
+    // bitwidth.
+    if (Context.isPromotableIntegerType(LHSType) &&
+        Context.isPromotableIntegerType(RHSType)) {
+      // First, convert each argument to an r-value.
+      ExprResult ResLHS = DefaultFunctionArrayLvalueConversion(LHS.get());
+      if (ResLHS.isInvalid())
+        return ExprError();
+      LHS = ResLHS.get();
+
+      ExprResult ResRHS = DefaultFunctionArrayLvalueConversion(RHS.get());
+      if (ResRHS.isInvalid())
+        return ExprError();
+      RHS = ResRHS.get();
+
+      LHSType = LHS.get()->getType().getUnqualifiedType();
+      RHSType = RHS.get()->getType().getUnqualifiedType();
+
+      // If the two integer types are not of equal order, cast the smaller
+      // integer one to the larger one
+      if (int Order = Context.getIntegerTypeOrder(LHSType, RHSType); Order == 1)
+        RHS = ImpCastExprToType(RHS.get(), LHSType, CK_IntegralCast);
+      else if (Order == -1)
+        LHS = ImpCastExprToType(LHS.get(), RHSType, CK_IntegralCast);
+
+      TheCall->setArg(0, LHS.get());
+      TheCall->setArg(1, RHS.get());
+      TheCall->setType(LHS.get()->getType().getUnqualifiedType());
+      break;
+    }
+
     if (BuiltinElementwiseMath(TheCall))
       return ExprError();

diff --git a/clang/test/CodeGen/builtins-elementwise-math.c b/clang/test/CodeGen/builtins-elementwise-math.c
index 7f6b5f26eb9307..4ac6fe18c4d5a3 100644
--- a/clang/test/CodeGen/builtins-elementwise-math.c
+++ b/clang/test/CodeGen/builtins-elementwise-math.c
@@ -68,7 +68,10 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2,
                                       long long int i2, si8 vi1, si8 vi2,
                                       unsigned u1, unsigned u2, u4 vu1, u4 vu2,
                                       _BitInt(31) bi1, _BitInt(31) bi2,
-                                      unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) {
+                                      unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2,
+                                      char c1, char c2, unsigned char uc1,
+                                      unsigned char uc2, short s1, short s2,
+                                      unsigned short us1, unsigned short us2) {
   // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8
   // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8
   // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]])
@@ -114,6 +117,50 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2,
   // CHECK: store i64 98, ptr %i1.addr, align 8
   i1 = __builtin_elementwise_add_sat(1, 'a');
+
+  // CHECK: [[C1:%.+]] = load i8, ptr %c1.addr, align 1
+  // CHECK-NEXT: [[C2:%.+]] = load i8, ptr %c2.addr, align 1
+  // CHECK-NEXT: call i8 @llvm.sadd.sat.i8(i8 [[C1]], i8 [[C2]])
+  c1 = __builtin_elementwise_add_sat(c1, c2);
+
+  // CHECK: [[UC1:%.+]] = load i8, ptr %uc1.addr, align
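To make the behaviour change described in the commit message above concrete, here is a small usage sketch. It is not part of the patch; the intrinsic named in the comments follows the CHECK lines added above, and the mixed-width case follows the commit message for this first patch in the series.

```c++
// Saturating elementwise builtins on sub-int scalar types, per PATCH 1.
char add_chars(char a, char b) {
  // Previously: a and b were promoted to int, the addition could never
  // saturate at 8 bits, and the result was silently truncated back to char.
  // With the patch: the operation stays at char width and lowers to an
  // 8-bit saturating add (llvm.sadd.sat.i8, as the CHECK lines above expect).
  return __builtin_elementwise_add_sat(a, b);
}

short add_mixed(char a, short b) {
  // Mixed scalar widths: the smaller type is converted to the larger one and
  // the result takes the larger type (and its signedness), per the commit
  // message above.
  return __builtin_elementwise_add_sat(a, b);
}
```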
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14604,57 +14611,63 @@ bool Sema::PrepareBuiltinElementwiseMathOneArgCall(CallExpr *TheCall) { } bool Sema::BuiltinElementwiseMath(CallExpr *TheCall, bool FPOnly) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res, FPOnly)) -return true; - TheCall->setType(Res); - return false; + if (auto Res = BuiltinVectorMath(TheCall, FPOnly); Res.has_value()) { +TheCall->setType(*Res); +return false; + } + return true; } bool Sema::BuiltinVectorToScalarMath(CallExpr *TheCall) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res)) + std::optional Res = BuiltinVectorMath(TheCall); + if (!Res) return true; - if (auto *VecTy0 = Res->getAs()) + if (auto *VecTy0 = (*Res)->getAs()) TheCall->setType(VecTy0->getElementType()); else -TheCall->setType(Res); +TheCall->setType(*Res); return false; } -bool Sema::BuiltinVectorMath(CallExpr *TheCall, QualType &Res, bool FPOnly) { +std::optional Sema::BuiltinVectorMath(CallExpr *TheCall, +bool FPOnly) { if (checkArgCount(TheCall, 2)) -return true; +return std::nullopt; - ExprResult A = TheCall->getArg(0); - ExprResult B = TheCall->getArg(1); - // Do standard promotions between the two arguments, returning their common - // type. - Res = UsualArithmeticConversions(A, B, TheCall->getExprLoc(), ACK_Comparison); - if (A.isInvalid() || B.isInvalid()) -return true; + checkEnumArithmeticConversions(TheCall->getArg(0), TheCall->getArg(1), + TheCall->getExprLoc(), ACK_Comparison); efriedma-quic wrote: I think so. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); efriedma-quic wrote: It's a little weird that the resulting type can be larger than what integer promotion would produce (without this patch, `struct S { long l : 4; }; S s; static_assert(sizeof(__builtin_elementwise_abs(s.l)) == 4);`). But maybe that's fine. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
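Spelling out the example from the comment above (the sizes assume a typical 64-bit target where `sizeof(int) == 4` and `sizeof(long) == 8`; which assertion holds depends on whether the patch is applied):

```c++
struct S { long l : 4; };
S s;

// Integer promotion turns the bit-field operand into int, so without the
// patch:
//   static_assert(sizeof(__builtin_elementwise_abs(s.l)) == 4);
// With the patch the operand keeps its declared type (long), so the result
// is wider than what integer promotion would have produced:
//   static_assert(sizeof(__builtin_elementwise_abs(s.l)) == 8);
```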
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck updated https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14604,57 +14611,63 @@ bool Sema::PrepareBuiltinElementwiseMathOneArgCall(CallExpr *TheCall) { } bool Sema::BuiltinElementwiseMath(CallExpr *TheCall, bool FPOnly) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res, FPOnly)) -return true; - TheCall->setType(Res); - return false; + if (auto Res = BuiltinVectorMath(TheCall, FPOnly); Res.has_value()) { +TheCall->setType(*Res); +return false; + } + return true; } bool Sema::BuiltinVectorToScalarMath(CallExpr *TheCall) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res)) + std::optional Res = BuiltinVectorMath(TheCall); + if (!Res) return true; - if (auto *VecTy0 = Res->getAs()) + if (auto *VecTy0 = (*Res)->getAs()) TheCall->setType(VecTy0->getElementType()); else -TheCall->setType(Res); +TheCall->setType(*Res); return false; } -bool Sema::BuiltinVectorMath(CallExpr *TheCall, QualType &Res, bool FPOnly) { +std::optional Sema::BuiltinVectorMath(CallExpr *TheCall, +bool FPOnly) { if (checkArgCount(TheCall, 2)) -return true; +return std::nullopt; - ExprResult A = TheCall->getArg(0); - ExprResult B = TheCall->getArg(1); - // Do standard promotions between the two arguments, returning their common - // type. - Res = UsualArithmeticConversions(A, B, TheCall->getExprLoc(), ACK_Comparison); - if (A.isInvalid() || B.isInvalid()) -return true; + checkEnumArithmeticConversions(TheCall->getArg(0), TheCall->getArg(1), + TheCall->getExprLoc(), ACK_Comparison);

frasercrmck wrote:

Can you clarify what you mean here, sorry? I assume you mean just in the context of these vector builtins. This call looks like it's just checking the following:

```
// C++2a [expr.arith.conv]p1:
//   If one operand is of enumeration type and the other operand is of a
//   different enumeration type or a floating-point type, this behavior is
//   deprecated ([depr.arith.conv.enum]).
//
// Warn on this in all language modes. Produce a deprecation warning in C++20.
// Eventually we will presumably reject these cases (in C++23 onwards?).
```

So it's not really whether "enums are allowed", but whether you're allowed to mix different enums, or enums with floating-point values, in binary builtins. I agree the inconsistency here is not desirable, especially as we're not doing the same for the ternary elementwise builtins. Should we just forbid this in all binary/ternary vector elementwise builtins?

https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
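For reference, a sketch of the kind of mixed-enum call being debated here; the type names are illustrative, not taken from the tests:

```c++
enum Unscoped1 { A = 1 };
enum Unscoped2 { B = 2 };

void mix(Unscoped1 x, Unscoped2 y) {
  // Two different enumeration types in one binary elementwise builtin: this
  // is the pattern [depr.arith.conv.enum] deprecates for ordinary arithmetic,
  // and what the thread decides to forbid for these builtins (see the
  // "forbidden mixed enum types" reply earlier in this digest).
  (void)__builtin_elementwise_add_sat(x, y);
}
```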
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E);

frasercrmck wrote:

Yep, makes sense. I've added `Sema::UsualUnaryFPConversions` and am using that where appropriate. I didn't want to fold the use of `DefaultFunctionArrayLvalueConversion` into that, even though that would have saved a bit of duplicated code.

Regarding bitfields: with the current version of this patch (i.e., without the implicit promotion) we are calling the builtin according to the type you specify in the bitfield:

```c
struct StructWithBitfield {
  int i : 4;
  char c : 4;
  short s : 4;
};
```

`__builtin_elementwise_abs(s.i)` calls `llvm.abs.i32`, `s.c` calls `llvm.abs.i8`, `s.s` calls `llvm.abs.i16`, etc. I wonder if this is consistent enough behaviour to permit them?

https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
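A usage sketch for the struct above, with the lowering described in that comment recorded as comments. The intrinsic names are taken from the description of that revision of the patch; the struct is repeated only so the sketch is self-contained.

```c++
// Same struct as in the comment above, repeated for self-containment.
struct StructWithBitfield { int i : 4; char c : 4; short s : 4; };

void abs_bitfields(StructWithBitfield s) {
  (void)__builtin_elementwise_abs(s.i); // lowers to llvm.abs.i32
  (void)__builtin_elementwise_abs(s.c); // lowers to llvm.abs.i8
  (void)__builtin_elementwise_abs(s.s); // lowers to llvm.abs.i16
}
```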
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); efriedma-quic wrote: The "usual unary conversions" is not a term used by any specification, so there isn't any real meaning attached to it. I think I'd prefer not to use a boolean, though, because the operation you want here is different in a pretty subtle way. I missed that this also impacts bitfields. Whether we support enums here probably doesn't really matter, as long as we're consistent. Bitfields are unfortunately weird, due to the way promotion works; we might want to reject them due to the potential ambiguity. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -649,7 +649,9 @@ Unless specified otherwise operation(±0) = ±0 and operation(±infinity) = ±in The integer elementwise intrinsics, including ``__builtin_elementwise_popcount``, ``__builtin_elementwise_bitreverse``, ``__builtin_elementwise_add_sat``, -``__builtin_elementwise_sub_sat`` can be called in a ``constexpr`` context. +``__builtin_elementwise_sub_sat`` can be called in a ``constexpr`` context. No frasercrmck wrote: Good idea, thanks. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
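As a side note on the documentation being edited here, the constexpr property it describes can be exercised as below. This is only an illustrative sketch; the second assertion relies on the saturating behaviour staying at the operand width, as this PR intends.

```c++
// 7 is 0b111, so its population count is 3; valid at compile time per the
// documentation quoted above.
static_assert(__builtin_elementwise_popcount(7u) == 3u, "three bits set");

// With the sub-int fix, the saturating add clamps at the unsigned char
// maximum rather than being silently widened to int.
static_assert(__builtin_elementwise_add_sat((unsigned char)250,
                                            (unsigned char)10) == 255,
              "saturates at 8 bits");
```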
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck updated https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -649,7 +649,9 @@ Unless specified otherwise operation(±0) = ±0 and operation(±infinity) = ±in The integer elementwise intrinsics, including ``__builtin_elementwise_popcount``, ``__builtin_elementwise_bitreverse``, ``__builtin_elementwise_add_sat``, -``__builtin_elementwise_sub_sat`` can be called in a ``constexpr`` context. +``__builtin_elementwise_sub_sat`` can be called in a ``constexpr`` context. No RKSimon wrote: New paragraph to make this clearer. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
frasercrmck wrote: > Would be good to also update the documentation for the builtins to clarify the behavior. I've added some words to that effect. Is that what you had in mind? https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck updated https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck edited https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); frasercrmck wrote: Could we perhaps extend `UsualUnaryConversions` with a default-true bool flag to control the handling of integers/bitfields? I can see that it might be exposing too many internal details to users of `UsualUnaryConversions`, so I'm not sure. I don't know the best practices of clang/sema design here. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -68,15 +66,18 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2, long long int i2, si8 vi1, si8 vi2, unsigned u1, unsigned u2, u4 vu1, u4 vu2, _BitInt(31) bi1, _BitInt(31) bi2, - unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) { + unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2, + char c1, char c2, unsigned char uc1, + unsigned char uc2, short s1, short s2, + unsigned short us1, unsigned short us2) { // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]]) i1 = __builtin_elementwise_add_sat(i1, i2); // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 10) - i1 = __builtin_elementwise_add_sat(i1, 10); + i1 = __builtin_elementwise_add_sat(i1, (long long int)10); frasercrmck wrote: Using `ll` now. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
fhahn wrote: Would be good to also update the documentation for the builtins to clarify the behavior. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck updated https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); frasercrmck wrote: Yes, pretty much. `UsualUnaryConversions` does `DefaultFunctionArrayLvalueConversion` first, then floating-point-specific logic. However, it also handles `isPromotableBitField`s. In your other comment you ask about enums, and perhaps bitfields also fall into that category? Do we really need to support these types in vector maths/arithmetic builtins? There are tests for them, so I preserved the existing logic (though seemingly not for bitfields), but I see the argument for removing that support. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -68,15 +66,18 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2, long long int i2, si8 vi1, si8 vi2, unsigned u1, unsigned u2, u4 vu1, u4 vu2, _BitInt(31) bi1, _BitInt(31) bi2, - unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) { + unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2, + char c1, char c2, unsigned char uc1, + unsigned char uc2, short s1, short s2, + unsigned short us1, unsigned short us2) { // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]]) i1 = __builtin_elementwise_add_sat(i1, i2); // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 10) - i1 = __builtin_elementwise_add_sat(i1, 10); + i1 = __builtin_elementwise_add_sat(i1, (long long int)10); arsenm wrote: That would make sense if you were using int64_t; you should use ll if using long long. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -68,15 +66,18 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2, long long int i2, si8 vi1, si8 vi2, unsigned u1, unsigned u2, u4 vu1, u4 vu2, _BitInt(31) bi1, _BitInt(31) bi2, - unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) { + unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2, + char c1, char c2, unsigned char uc1, + unsigned char uc2, short s1, short s2, + unsigned short us1, unsigned short us2) { // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]]) i1 = __builtin_elementwise_add_sat(i1, i2); // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 10) - i1 = __builtin_elementwise_add_sat(i1, 10); + i1 = __builtin_elementwise_add_sat(i1, (long long int)10); unpacklo wrote: Maybe `INT64_C(10)` from the stdint.h header instead? https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14585,11 +14585,18 @@ void Sema::CheckAddressOfPackedMember(Expr *rhs) { _2, _3, _4)); } +static ExprResult UsualUnaryConversionsNoPromoteInt(Sema &S, Expr *E) { + // Don't promote integer types + if (QualType Ty = E->getType(); S.getASTContext().isPromotableIntegerType(Ty)) +return S.DefaultFunctionArrayLvalueConversion(E); + return S.UsualUnaryConversions(E); efriedma-quic wrote: If I'm following correctly, this specifically skips promoting integers... but it allows promoting floats? I think I'd prefer to factor out the unary-float conversions into a separate function, and call that here, for clarity. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -68,15 +66,18 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2, long long int i2, si8 vi1, si8 vi2, unsigned u1, unsigned u2, u4 vu1, u4 vu2, _BitInt(31) bi1, _BitInt(31) bi2, - unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) { + unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2, + char c1, char c2, unsigned char uc1, + unsigned char uc2, short s1, short s2, + unsigned short us1, unsigned short us2) { // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]]) i1 = __builtin_elementwise_add_sat(i1, i2); // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 10) - i1 = __builtin_elementwise_add_sat(i1, 10); + i1 = __builtin_elementwise_add_sat(i1, (long long int)10); efriedma-quic wrote: It's a little awkward, but integer constants in C are awkward in general; not sure there's any better way to handle it. You can write this more concisely as `10ll`. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -14604,57 +14611,63 @@ bool Sema::PrepareBuiltinElementwiseMathOneArgCall(CallExpr *TheCall) { } bool Sema::BuiltinElementwiseMath(CallExpr *TheCall, bool FPOnly) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res, FPOnly)) -return true; - TheCall->setType(Res); - return false; + if (auto Res = BuiltinVectorMath(TheCall, FPOnly); Res.has_value()) { +TheCall->setType(*Res); +return false; + } + return true; } bool Sema::BuiltinVectorToScalarMath(CallExpr *TheCall) { - QualType Res; - if (BuiltinVectorMath(TheCall, Res)) + std::optional Res = BuiltinVectorMath(TheCall); + if (!Res) return true; - if (auto *VecTy0 = Res->getAs()) + if (auto *VecTy0 = (*Res)->getAs()) TheCall->setType(VecTy0->getElementType()); else -TheCall->setType(Res); +TheCall->setType(*Res); return false; } -bool Sema::BuiltinVectorMath(CallExpr *TheCall, QualType &Res, bool FPOnly) { +std::optional Sema::BuiltinVectorMath(CallExpr *TheCall, +bool FPOnly) { if (checkArgCount(TheCall, 2)) -return true; +return std::nullopt; - ExprResult A = TheCall->getArg(0); - ExprResult B = TheCall->getArg(1); - // Do standard promotions between the two arguments, returning their common - // type. - Res = UsualArithmeticConversions(A, B, TheCall->getExprLoc(), ACK_Comparison); - if (A.isInvalid() || B.isInvalid()) -return true; + checkEnumArithmeticConversions(TheCall->getArg(0), TheCall->getArg(1), + TheCall->getExprLoc(), ACK_Comparison); efriedma-quic wrote: Changing whether enums are allowed based on the language mode seems unintuitive and unnecessary. Either allow them, or forbid them. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -6,7 +6,7 @@ // CHECK: %conv2 = fptrunc double %hlsl.dot to float // CHECK: ret float %conv2 float builtin_bool_to_float_type_promotion ( float p0, bool p1 ) { - return __builtin_hlsl_dot ( p0, p1 ); + return __builtin_hlsl_dot ( (double)p0, (double)p1 ); frasercrmck wrote: Thanks for the reply. I think that makes sense, thank you. Note though that I had to make a change to what I think is a user-facing `pow` builtin [here](https://github.com/llvm/llvm-project/pull/119423/files#diff-fdbd201243000d44dd63e6af5706293c968588d762bed9e050d27e0abd6e80acR40), so this PR might still impact HLSL users directly. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -6,7 +6,7 @@ // CHECK: %conv2 = fptrunc double %hlsl.dot to float // CHECK: ret float %conv2 float builtin_bool_to_float_type_promotion ( float p0, bool p1 ) { - return __builtin_hlsl_dot ( p0, p1 ); + return __builtin_hlsl_dot ( (double)p0, (double)p1 ); farzonl wrote: From the checks it looks like you have just made the double more explicit. Codegen doesn't look different, so I'm fine with this change. Also, most of the type promotion rules for HLSL are enforced in the HLSL header. We are just using the builtins as aliases for the cases we need to invoke an intrinsic. Note to myself, though: these tests don't look correct and should probably be removed. DXC would have promoted bool to a float. Short of setting the builtin as `CustomTypeChecking` I don't think that can be achieved here. But again, it does not matter because the promotion rules are handled in the HLSL `dot` API, which only aliases to the builtin. https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck edited https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -68,15 +66,18 @@ void test_builtin_elementwise_add_sat(float f1, float f2, double d1, double d2, long long int i2, si8 vi1, si8 vi2, unsigned u1, unsigned u2, u4 vu1, u4 vu2, _BitInt(31) bi1, _BitInt(31) bi2, - unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2) { + unsigned _BitInt(55) bu1, unsigned _BitInt(55) bu2, + char c1, char c2, unsigned char uc1, + unsigned char uc2, short s1, short s2, + unsigned short us1, unsigned short us2) { // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: [[I2:%.+]] = load i64, ptr %i2.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 [[I2]]) i1 = __builtin_elementwise_add_sat(i1, i2); // CHECK: [[I1:%.+]] = load i64, ptr %i1.addr, align 8 // CHECK-NEXT: call i64 @llvm.sadd.sat.i64(i64 [[I1]], i64 10) - i1 = __builtin_elementwise_add_sat(i1, 10); + i1 = __builtin_elementwise_add_sat(i1, (long long int)10); frasercrmck wrote: I've implemented a naive version of the RFC, but I'm not sure if we want to require explicit casting of immediates in this case. It arguably matches the vector behaviour, and floating-point behaviour, but it's still awkward. Thoughts? https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
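The test change quoted above is the crux of this question: under the revision being discussed, a bare `10` (an int) no longer converts implicitly to match the `long long` operand, so the immediate needs an explicit type. A minimal illustration, using the `ll` suffix spelling mentioned elsewhere in the thread:

```c++
void literal_example(long long i1) {
  // Under that revision, the int literal needs an explicit type...
  i1 = __builtin_elementwise_add_sat(i1, (long long int)10);
  // ...or, more concisely, a suffix:
  i1 = __builtin_elementwise_add_sat(i1, 10ll);
  (void)i1;
}
```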
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
@@ -6,7 +6,7 @@ // CHECK: %conv2 = fptrunc double %hlsl.dot to float // CHECK: ret float %conv2 float builtin_bool_to_float_type_promotion ( float p0, bool p1 ) { - return __builtin_hlsl_dot ( p0, p1 ); + return __builtin_hlsl_dot ( (double)p0, (double)p1 ); frasercrmck wrote: @farzonl I've quickly hacked this in place to satisfy the tests. The HLSL builtins use the same Sema APIs as the ones I'm interested in changing. Is the old behaviour something that needs to be preserved? https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
[clang] [clang] Restrict use of scalar types in vector builtins (PR #119423)
https://github.com/frasercrmck edited https://github.com/llvm/llvm-project/pull/119423 ___ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits