[llvm-branch-commits] [llvm] [Hashing] Use a non-deterministic seed if LLVM_ENABLE_ABI_BREAKING_CHECKS (PR #96282)

2024-06-27 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic commented:

I think I'm happier restricting the non-determinism to +Asserts for now, at 
least as an incremental step.

> Due to Avalanche effects, even a few ASLR bits are sufficient to cover many 
> different scenarios and expose latent bugs.

On Windows specifically, I'm less concerned about the total number of bits, and 
more concerned that ASLR isn't randomized for each run of an executable.

https://github.com/llvm/llvm-project/pull/96282
___
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits


[llvm-branch-commits] [llvm] [Hashing] Use a non-deterministic seed if LLVM_ENABLE_ABI_BREAKING_CHECKS (PR #96282)

2024-06-27 Thread Eli Friedman via llvm-branch-commits


@@ -322,24 +306,20 @@ struct hash_state {
   }
 };
 
-
-/// A global, fixed seed-override variable.
-///
-/// This variable can be set using the \see llvm::set_fixed_execution_seed
-/// function. See that function for details. Do not, under any circumstances,
-/// set or read this variable.
-extern uint64_t fixed_seed_override;
-
+/// In LLVM_ENABLE_ABI_BREAKING_CHECKS builds, the seed is non-deterministic
+/// (address of a variable) to prevent having users depend on the particular
+/// hash values. On platforms without ASLR, this is still likely
+/// non-deterministic per build.
 inline uint64_t get_execution_seed() {
-  // FIXME: This needs to be a per-execution seed. This is just a placeholder
-  // implementation. Switching to a per-execution seed is likely to flush out
-  // instability bugs and so will happen as its own commit.
-  //
-  // However, if there is a fixed seed override set the first time this is
-  // called, return that instead of the per-execution seed.
-  const uint64_t seed_prime = 0xff51afd7ed558ccdULL;
-  static uint64_t seed = fixed_seed_override ? fixed_seed_override : seed_prime;
-  return seed;
+  // Work around x86-64 negative offset folding for old Clang -fno-pic
+  // https://reviews.llvm.org/D93931
+#if LLVM_ENABLE_ABI_BREAKING_CHECKS && \
+    (!defined(__clang__) || __clang_major__ > 11)

efriedma-quic wrote:

Is it an ABI problem that this ifdef exists?  I mean, LLVM libraries built with 
clang<11 can't be used by programs built with clang>11.  With 
LLVM_ENABLE_ABI_BREAKING_CHECKS, I guess it's unlikely to cause issues, though. 
 (I guess you could use an empty inline asm as a workaround if you wanted to.)

https://github.com/llvm/llvm-project/pull/96282


[llvm-branch-commits] [llvm] [Hashing] Use a non-deterministic seed if LLVM_ENABLE_ABI_BREAKING_CHECKS (PR #96282)

2024-06-27 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic edited 
https://github.com/llvm/llvm-project/pull/96282


[llvm-branch-commits] [Hashing] Use a non-deterministic seed (PR #96282)

2024-06-21 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

We restricted reverse-iteration when we added it just to save time when we were 
enabling it: we wanted to prioritize issues that were actually likely to cause 
non-determinism (as opposed to relying on the hash algorithm, which is annoying 
but not actually non-deterministic).  If you're willing to fix all the 
resulting breakage, it should be fine to apply it more widely.

-

I'm a little concerned that doing this in release builds is going to lead to 
weird bug reports. Especially given the current approach for getting 
randomness: ASLR isn't really that random, particularly on Windows, so the 
probability of getting a particular seed isn't uniform.

https://github.com/llvm/llvm-project/pull/96282


[llvm-branch-commits] [clang] [clang] Define ptrauth_sign_constant builtin. (PR #93904)

2024-06-03 Thread Eli Friedman via llvm-branch-commits


@@ -354,6 +354,23 @@ Given that ``signedPointer`` matches the layout for signed pointers signed with
 the given key, extract the raw pointer from it.  This operation does not trap
 and cannot fail, even if the pointer is not validly signed.
 
+``ptrauth_sign_constant``
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: c
+
+  ptrauth_sign_constant(pointer, key, discriminator)
+
+Return a signed pointer for a constant address in a manner which guarantees
+a non-attackable sequence.
+
+``pointer`` must be a constant expression of pointer type which evaluates to
+a non-null pointer.  The result will have the same type as ``discriminator``.
+
+Calls to this are constant expressions if the discriminator is a null-pointer
+constant expression or an integer constant expression. Implementations may
+allow other pointer expressions as well.

efriedma-quic wrote:

This seems to imply that if you don't need a constant expression, the 
discriminator can be a variable.  This doesn't seem to match the 
implementation, though: it requires a constant discriminator in a specific form.

https://github.com/llvm/llvm-project/pull/93904


[llvm-branch-commits] [clang] [clang] Define ptrauth_sign_constant builtin. (PR #93904)

2024-05-30 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Why do we want a separate builtin, as opposed to just constant-folding calls to 
__builtin_ptrauth_sign?

https://github.com/llvm/llvm-project/pull/93904


[llvm-branch-commits] [llvm] [release/18.x] Backport fixes for ARM64EC thunk generation (PR #92580)

2024-05-17 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic approved this pull request.

LGTM

This only affects Arm64EC targets, the fixes are relatively small, and this 
affects correctness of generated thunks.

https://github.com/llvm/llvm-project/pull/92580


[llvm-branch-commits] [llvm] release/18.x: [GlobalOpt] Don't replace aliasee with alias that has weak linkage (#91483) (PR #92468)

2024-05-16 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic approved this pull request.

LGTM

This only affects optimizations on weak aliases, and the logic is very simple: 
just don't optimize them.

As noted on the original pull request, this also affects some cases which might 
be safe to optimize (a weak alias where the aliasee is an internal symbol with 
no other references). But "optimizing" those cases doesn't really have any 
useful effect, anyway: it doesn't unblock any additional optimizations, and the 
resulting ELF is basically identical.

https://github.com/llvm/llvm-project/pull/92468


[llvm-branch-commits] [clang] [llvm] Backport "riscv-isa" module metadata to 18.x (PR #91514)

2024-05-11 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

If LTO was completely broken, this seems worth taking.  And the changes look 
safe. LGTM.

https://github.com/llvm/llvm-project/pull/91514


[llvm-branch-commits] [clang] [llvm] Backport "riscv-isa" module metadata to 18.x (PR #91514)

2024-05-08 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Can you briefly summarize why this is important to backport?  At first glance, 
this is only relevant for LTO with mixed architecture specifications, which... 
I can see someone might want it, I guess, but it seems pretty easy to work 
around not having it.

https://github.com/llvm/llvm-project/pull/91514


[llvm-branch-commits] [clang] release/18.x: [clang codegen] Fix MS ABI detection of user-provided constructors. (#90151) (PR #90639)

2024-04-30 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Proposing for backport because this is high-impact for anyone using Qt on Arm64 
Windows.

https://github.com/llvm/llvm-project/pull/90639


[llvm-branch-commits] [clang] [clang][builtin] Implement __builtin_allow_runtime_check (PR #87568)

2024-04-29 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

I think it's worth re-posting the builtin as a separate RFC on Discourse, since 
the original RFC hadn't settled on the exact design for the clang builtin 
you're using here.

Code changes look fine.

https://github.com/llvm/llvm-project/pull/87568


[llvm-branch-commits] [clang] [Clang] Handle structs with inner structs and no fields (#89126) (PR #90133)

2024-04-25 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Looks fine.

https://github.com/llvm/llvm-project/pull/90133


[llvm-branch-commits] [clang] [release/18.x][COFF][Aarch64] Add _InterlockedAdd64 intrinsic (#81849) (PR #89951)

2024-04-24 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

I think BuiltinsAArch64.def is part of clang's ABI, so changing it violates the 
backport rules.

Otherwise, I'd be inclined to accept; it's kind of late to request, but it's 
low risk.

https://github.com/llvm/llvm-project/pull/89951


[llvm-branch-commits] [clang] release/18.x: [Clang] Handle structs with inner structs and no fields (#89126) (PR #89456)

2024-04-19 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

LGTM

This only impacts code using dynamic object sizes, which... I'm not sure how 
widely it's actually used outside the Linux kernel.  Implementation-wise, 
should be pretty safe.  There's some minor risk because the revised recursion 
visits RecordDecls it wouldn't look into before, but that seems unlikely to 
cause a practical issue.

https://github.com/llvm/llvm-project/pull/89456


[llvm-branch-commits] [clang] release/18.x [X86_64] fix SSE type error in vaarg (PR #86698)

2024-04-16 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Right, the policy doesn't say we can only take regression fixes.  We just need 
to weigh the impact against the risk.

Looking at the latest conversation on the bug report, this case is pretty 
clearly still broken.  It's improved in the sense that after the va_arg of the 
struct, subsequent va_arg calls produce the right value.  But the va_arg itself 
doesn't produce the right value (probably we aren't copying the struct 
correctly).  So that would be a regression for some cases.

Given that, we probably don't want to pull this into 18.x as-is.

Also, given that we're making other fixes to the surrounding code, pulling any 
one fix into 18.x seems risky to me.  And probably low-impact, given the 
testcases appear to be generated by a fuzzer.

https://github.com/llvm/llvm-project/pull/86698


[llvm-branch-commits] [llvm] Backport: Prepend all library intrinsics with `#` when building for Arm64EC (PR #88016)

2024-04-08 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

LGTM.  (This only affects Arm64EC, so it's very safe to backport.)

https://github.com/llvm/llvm-project/pull/88016


[llvm-branch-commits] [clang][CallGraphSection] Add type id metadata to indirect call and targets (PR #87573)

2024-04-05 Thread Eli Friedman via llvm-branch-commits


@@ -93,9 +93,17 @@ RValue CodeGenFunction::EmitCXXMemberOrOperatorCall(
       *this, MD, This, ImplicitParam, ImplicitParamTy, CE, Args, RtlArgs);
   auto &FnInfo = CGM.getTypes().arrangeCXXMethodCall(
       Args, FPT, CallInfo.ReqArgs, CallInfo.PrefixSize);
-  return EmitCall(FnInfo, Callee, ReturnValue, Args, nullptr,
+  llvm::CallBase *CallOrInvoke = nullptr;
+  auto Call = EmitCall(FnInfo, Callee, ReturnValue, Args, &CallOrInvoke,
                   CE && CE == MustTailCall,
                   CE ? CE->getExprLoc() : SourceLocation());
+
+  // Set type identifier metadata of indirect calls for call graph section.
+  if (CGM.getCodeGenOpts().CallGraphSection && CallOrInvoke &&
+      CallOrInvoke->isIndirectCall())
+    CGM.CreateFunctionTypeMetadataForIcall(MD->getType(), CallOrInvoke);

efriedma-quic wrote:

This seems like it's scattering calls to CreateFunctionTypeMetadataForIcall 
across a lot of different places; code like this is hard to maintain.  Is there 
some reason we can't just do it in EmitCall() itself, or something like that?

https://github.com/llvm/llvm-project/pull/87573


[llvm-branch-commits] [clang] release/18.x [X86_64] fix SSE type error in vaarg (PR #86698)

2024-03-26 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Is there some reason you think we should take this specific patch, out of all 
the x86 ABI fixes going in recently?  It isn't a regression, as far as I know.

https://github.com/llvm/llvm-project/pull/86698


[llvm-branch-commits] [lld] [llvm] Backport fixes for ARM64EC import libraries (PR #84590)

2024-03-11 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

So your use-case is basically equivalent to using llvm-dlltool, except not 
using the text parser?

If this is actually enough to make Rust targets usable, then I guess we could 
consider it, but the fixes aren't structured in a way to make it obvious this 
won't impact non-ARM64EC targets.

https://github.com/llvm/llvm-project/pull/84590


[llvm-branch-commits] [lld] [llvm] Backport fixes for ARM64EC import libraries (PR #84590)

2024-03-11 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

This seems like a pretty big change to backport... how useful is it in 
practice?  I was under the impression that arm64ec lld support is still 
immature... and if you're using the MSVC linker, you might as well use the MSVC 
lib/dlltool.

https://github.com/llvm/llvm-project/pull/84590


[llvm-branch-commits] [llvm] Backport ARM64EC variadic args fixes to LLVM 18 (PR #81800)

2024-03-11 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic approved this pull request.

LGTM

https://github.com/llvm/llvm-project/pull/81800


[llvm-branch-commits] [clang] release/18.x: [ObjC] Check entire chain of superclasses to determine class layout (PR #84093)

2024-03-11 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic requested changes to this pull request.

We usually only take bugfixes on release branches (miscompiles/crashes/etc.).

https://github.com/llvm/llvm-project/pull/84093


[llvm-branch-commits] [llvm] release/18.x: [ARM] Switch to LiveRegUnits to fix r7 register allocation bug (PR #84475)

2024-03-11 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic requested changes to this pull request.

This is, as far as I can tell, not a miscompile; probably not worth taking on 
the 18.x branch.

(Also, it's usually not a good idea to open a PR for a cherry-pick before the 
original patch is merged.)

https://github.com/llvm/llvm-project/pull/84475


[llvm-branch-commits] [llvm] Backport ARM64EC variadic args fixes to LLVM 18 (PR #81800)

2024-02-28 Thread Eli Friedman via llvm-branch-commits

https://github.com/efriedma-quic commented:

Looks fine, but I think merging might need to wait for a point release?  CC 
@tstellar .

https://github.com/llvm/llvm-project/pull/81800


[llvm-branch-commits] [llvm] Backport ARM64EC variadic args fixes to LLVM 18 (PR #81800)

2024-02-14 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

I was sort of waiting until the discussion on #80994 resolves... we might end 
up reverting parts of #80595 .

I guess it won't do any harm to land as-is, though.

https://github.com/llvm/llvm-project/pull/81800


[llvm-branch-commits] [llvm] [LivePhysRegs] Add callee-saved regs from MFI in addLiveOutsNoPristines. (PR #73553)

2023-12-12 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

Just looked at https://gist.github.com/fhahn/67937125b64440a8a414909c4a1b7973 ; 
that seems roughly appropriate.  It's a little ugly to set the bit to false, 
then set it back to true, though; I'd rather just explicitly check whether all 
return instructions are LDMIA_RET/t2LDMIA_RET/tPOP_RET.

https://github.com/llvm/llvm-project/pull/73553


[llvm-branch-commits] [llvm] [LivePhysRegs] Add callee-saved regs from MFI in addLiveOutsNoPristines. (PR #73553)

2023-12-12 Thread Eli Friedman via llvm-branch-commits

efriedma-quic wrote:

> After PEI the liveness of LR needs to be accurately reflected and tail calls 
> could (should?) always "use" LR. That would either prevent outlining or cause 
> the outliner to preserve LR across introduced calls.

I'll elaborate on this a bit.  I think long-term, the way we want to model this 
is that pre-PEI, LR is treated as if it were a callee-save register.  (Nothing 
outside the prologue/epilogue should interact with the return address directly 
except for a few special cases like __builtin_return_address.)  Then PEI marks 
the LR use on the relevant "return" instructions, and post-PEI it's not 
callee-save anymore.  I think that correctly models the underlying weirdness: 
at a machine level, the return address is basically just an argument to the 
function, but it's special to PEI because of the interaction with the frame 
layout/exception handling/pop instructions/etc.  Stuff before PEI doesn't need 
to be aware it's an argument, and stuff after PEI can just treat it as a normal 
register.

> On the caller side, the call instruction clobbers LR, so it can't really be 
> considered live-out

Correct, it's not really live-out.

---

Short-term, can we make this fix a bit more targeted?  The point of the 
"isRestored" bit is that it's supposed to avoid marking liveness when the 
function ends in a "pop pc".  The relevant code is 
ARMFrameLowering::emitPopInst: it calls `Info.setRestored(false);` to do 
precisely this.  But it's triggering in a case where it shouldn't.  I think the 
problem is that the code isn't accounting for the possibility that there are 
other paths that return from the function.

https://github.com/llvm/llvm-project/pull/73553


[llvm-branch-commits] [llvm] 4a51298 - [AArch64][SVE] Implement SPLAT_VECTOR for i1 vectors.

2019-12-09 Thread Eli Friedman via llvm-branch-commits

Author: Eli Friedman
Date: 2019-12-09T15:04:40-08:00
New Revision: 4a51298c13005be05e100f0ef46dbac47623bcd6

URL: https://github.com/llvm/llvm-project/commit/4a51298c13005be05e100f0ef46dbac47623bcd6
DIFF: https://github.com/llvm/llvm-project/commit/4a51298c13005be05e100f0ef46dbac47623bcd6.diff

LOG: [AArch64][SVE] Implement SPLAT_VECTOR for i1 vectors.

The generated sequence with whilelo is unintuitive, but it's the best
I could come up with given the limited number of SVE instructions that
interact with scalar registers. The other sequence I was considering
was something like dup+cmpne, but an extra scalar instruction seems
better than an extra vector instruction.

Differential Revision: https://reviews.llvm.org/D71160

Added: 


Modified: 
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
llvm/test/CodeGen/AArch64/sve-vector-splat.ll

Removed: 




diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index f32f03741221..b42496abecb6 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -825,7 +825,7 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
 // splat of 0 or undef) once vector selects supported in SVE codegen. See
 // D68877 for more details.
 for (MVT VT : MVT::integer_scalable_vector_valuetypes()) {
-  if (isTypeLegal(VT) && VT.getVectorElementType() != MVT::i1)
+  if (isTypeLegal(VT))
 setOperationAction(ISD::SPLAT_VECTOR, VT, Custom);
 }
 setOperationAction(ISD::INTRINSIC_WO_CHAIN, MVT::i8, Custom);
@@ -7135,26 +7135,31 @@ SDValue AArch64TargetLowering::LowerSPLAT_VECTOR(SDValue Op,
   switch (ElemVT.getSimpleVT().SimpleTy) {
   case MVT::i8:
   case MVT::i16:
+  case MVT::i32:
 SplatVal = DAG.getAnyExtOrTrunc(SplatVal, dl, MVT::i32);
-break;
+return DAG.getNode(AArch64ISD::DUP, dl, VT, SplatVal);
   case MVT::i64:
 SplatVal = DAG.getAnyExtOrTrunc(SplatVal, dl, MVT::i64);
-break;
-  case MVT::i32:
-// Fine as is
-break;
-  // TODO: we can support splats of i1s and float types, but haven't added
-  // patterns yet.
-  case MVT::i1:
+return DAG.getNode(AArch64ISD::DUP, dl, VT, SplatVal);
+  case MVT::i1: {
+// The general case of i1.  There isn't any natural way to do this,
+// so we use some trickery with whilelo.
+// TODO: Add special cases for splat of constant true/false.
+SplatVal = DAG.getAnyExtOrTrunc(SplatVal, dl, MVT::i64);
+SplatVal = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, MVT::i64, SplatVal,
+   DAG.getValueType(MVT::i1));
+SDValue ID = DAG.getTargetConstant(Intrinsic::aarch64_sve_whilelo, dl,
+   MVT::i64);
+return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID,
+   DAG.getConstant(0, dl, MVT::i64), SplatVal);
+  }
+  // TODO: we can support float types, but haven't added patterns yet.
   case MVT::f16:
   case MVT::f32:
   case MVT::f64:
   default:
-llvm_unreachable("Unsupported SPLAT_VECTOR input operand type");
-break;
+report_fatal_error("Unsupported SPLAT_VECTOR input operand type");
   }
-
-  return DAG.getNode(AArch64ISD::DUP, dl, VT, SplatVal);
 }
 
 static bool resolveBuildVector(BuildVectorSDNode *BVN, APInt &CnstBits,

diff --git a/llvm/test/CodeGen/AArch64/sve-vector-splat.ll b/llvm/test/CodeGen/AArch64/sve-vector-splat.ll
index b3f6cb4b24a1..086241c4e0a7 100644
--- a/llvm/test/CodeGen/AArch64/sve-vector-splat.ll
+++ b/llvm/test/CodeGen/AArch64/sve-vector-splat.ll
@@ -93,3 +93,43 @@ define <vscale x 2 x i32> @sve_splat_2xi32(i32 %val) {
   %splat = shufflevector <vscale x 2 x i32> %ins, <vscale x 2 x i32> undef, <vscale x 2 x i32> zeroinitializer
   ret <vscale x 2 x i32> %splat
 }
+
+define <vscale x 2 x i1> @sve_splat_2xi1(i1 %val) {
+; CHECK-LABEL: @sve_splat_2xi1
+; CHECK: sbfx x8, x0, #0, #1
+; CHECK-NEXT: whilelo p0.d, xzr, x8
+; CHECK-NEXT: ret
+  %ins = insertelement <vscale x 2 x i1> undef, i1 %val, i32 0
+  %splat = shufflevector <vscale x 2 x i1> %ins, <vscale x 2 x i1> undef, <vscale x 2 x i32> zeroinitializer
+  ret <vscale x 2 x i1> %splat
+}
+
+define <vscale x 4 x i1> @sve_splat_4xi1(i1 %val) {
+; CHECK-LABEL: @sve_splat_4xi1
+; CHECK: sbfx x8, x0, #0, #1
+; CHECK-NEXT: whilelo p0.s, xzr, x8
+; CHECK-NEXT: ret
+  %ins = insertelement <vscale x 4 x i1> undef, i1 %val, i32 0
+  %splat = shufflevector <vscale x 4 x i1> %ins, <vscale x 4 x i1> undef, <vscale x 4 x i32> zeroinitializer
+  ret <vscale x 4 x i1> %splat
+}
+
+define <vscale x 8 x i1> @sve_splat_8xi1(i1 %val) {
+; CHECK-LABEL: @sve_splat_8xi1
+; CHECK: sbfx x8, x0, #0, #1
+; CHECK-NEXT: whilelo p0.h, xzr, x8
+; CHECK-NEXT: ret
+  %ins = insertelement <vscale x 8 x i1> undef, i1 %val, i32 0
+  %splat = shufflevector <vscale x 8 x i1> %ins, <vscale x 8 x i1> undef, <vscale x 8 x i32> zeroinitializer
+  ret <vscale x 8 x i1> %splat
+}
+
+define <vscale x 16 x i1> @sve_splat_16xi1(i1 %val) {
+; CHECK-LABEL: @sve_splat_16xi1
+; CHECK: sbfx x8, x0, #0, #1
+; CHECK-NEXT: whilelo p0.b, xzr, x8
+; CHECK-NEXT: ret
+  %ins = insertelement <vscale x 16 x i1> undef, i1 %val, i32 0
+  %splat = shufflevector <vscale x 16 x i1> %ins, <vscale x 16 x i1> undef, <vscale x 16 x i32> zeroinitializer
+  ret <vscale x 16 x i1> %splat
+}