3a/0x60
> kernel_init_freeable+0x1dd/0x238
> ? rest_init+0xc6/0xc6
> kernel_init+0x8/0x10a
> ret_from_fork+0x1f/0x30
> ---[ end trace 5bd3c1d0b2da ]---
>
> Signed-off-by: Kirill Tkhai
Hello Kirill,
This needs
Fixes: c055e3eae0f1 ("crypto: xor - use ktime for template benchm
On Fri, 25 Dec 2020 at 20:14, Eric Biggers wrote:
>
> On Tue, Dec 22, 2020 at 05:06:27PM +0100, Ard Biesheuvel wrote:
> > The AES-NI implementation of XTS was impacted significantly by the retpoline
> > changes, which is due to the fact that both its asm helper and the chai
On Thu, 24 Dec 2020 at 10:33, Milan Broz wrote:
>
> On 23/12/2020 23:38, Ard Biesheuvel wrote:
> > After applying my performance fixes for AES-NI in XTS mode, the only
> > remaining users of the x86 glue helper module are the niche algorithms
> > camellia, cast6, serpent
The glue helper's XTS routines are no longer used, so drop them.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/glue_helper-asm-avx.S | 59
arch/x86/crypto/glue_helper-asm-avx2.S| 78 --
arch/x86/crypto/glue_helper.c | 154
arc
Camellia in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.
Signed-off-by: Ard Biesheuvel
---
arch/x86/c
Twofish in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.
Signed-off-by: Ard Biesheuvel
---
arch/x86/c
The glue helper's CTR routines are no longer used, so drop them.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/glue_helper-asm-avx.S | 45
arch/x86/crypto/glue_helper-asm-avx2.S| 58
arch/x86/crypto/glue_helper.c
e the accelerated implementation
entirely in a future patch.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 28
arch/x86/crypto/cast6_avx_glue.c | 48
2 files changed, 76 deletions(-)
diff --git a/arch/x86/crypto/cast6-avx-x
Serpent in CTR mode is never used by the kernel directly, and is highly
unlikely to be relied upon by dm-crypt or algif_skcipher. So let's drop
the accelerated CTR mode implementation, and instead, rely on the CTR
template and the bare cipher.
Signed-off-by: Ard Biesheuvel
---
arch/x86/c
M) i7-8650U CPU @ 1.90GHz
Cc: Megha Dey
Cc: Eric Biggers
Cc: Herbert Xu
Cc: Milan Broz
Cc: Mike Snitzer
Ard Biesheuvel (10):
crypto: x86/camellia - switch to XTS template
crypto: x86/cast6 - switch to XTS template
crypto: x86/serpent - switch to XTS template
crypto: x86/twofish - swit
Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Twofish in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/twofish-avx-x86_64-asm_64.S | 53 ---
arch/x86/crypto
Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Serpent in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 48 --
arch/x86/crypto
Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement CAST6 in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 56 ---
arch/x86/crypto
Now that the XTS template can wrap accelerated ECB modes, it can be
used to implement Camellia in XTS mode as well, which turns out to
be at least as fast, and sometimes even faster.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/camellia-aesni-avx-asm_64.S | 180 -
arch/x86
On Tue, 22 Dec 2020 at 13:39, Marco Chiappero wrote:
>
> This patch includes a missing dependency (CRYPTO_AES) which may
> lead to an "undefined reference to `aes_expandkey'" linking error.
>
> Fixes: 5106dfeaeabe ("crypto: qat - add AES-XTS support for QAT GEN4 devices")
> Reported-by: kernel tes
this BLAKE2b implementation is only wired up to the shash API,
> since there is no library API for BLAKE2b yet. However, I've tried to
> keep things consistent with BLAKE2s, e.g. by defining
> blake2b_compress_arch() which is analogous to blake2s_compress_arch()
> and could be expor
. But I believe this is
> outweighed by the benefits of keeping the code in sync.
>
> Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
> ---
> crypto/blake2b_generic.c | 226 +++---
> include/crypto/blake2b.h | 67 +
>
is used
> instead of BLAKE2b, such as WireGuard.
>
> This new implementation is added in the form of a new module
> blake2s-arm.ko, which is analogous to blake2s-x86_64.ko in that it
> provides blake2s_compress_arch() for use by the library API as well as
> optionally register t
On Wed, 23 Dec 2020 at 09:12, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Address the following checkpatch warning:
>
> WARNING: Use #include instead of
>
> Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
> ---
> include/crypto/blake2s.
On Wed, 23 Dec 2020 at 09:12, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Use the full path in the include guards for the BLAKE2s headers to avoid
> ambiguity and to match the convention for most files in include/crypto/.
>
> Signed-off-by: Eric Biggers
Ack
eep things
> consistent rather than making optimizations for BLAKE2b but not BLAKE2s.
>
> Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
> ---
> include/crypto/blake2s.h | 53 ---
> include/crypto/internal/blake2s.h | 5 +--
> 2 fil
On Wed, 23 Dec 2020 at 09:12, Eric Biggers wrote:
>
> From: Eric Biggers
>
> The first three fields of 'struct blake2s_state' are used in assembly
> code, which isn't immediately obvious, so add a comment to this effect.
>
> Signed-off-by: Eri
o_blake2s_update()", so it had to be updated at the same time.)
>
> Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
> ---
> arch/x86/crypto/blake2s-glue.c| 74 +++---
> crypto/blake2s_generic.c | 76 -
> the shash helper functions. This will avoid duplicating this logic
> between the library and shash implementations.
>
> Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
> ---
> include/crypto/internal/blake2s.h | 41 ++
>
mbler helpers. Instead, let's adopt the arm64 strategy, i.e.,
provide a helper which can consume inputs of any size, provided that the
penultimate, full block is passed via the last call if ciphertext stealing
needs to be applied.
This also allows us to enable the XTS mode for i386.
Signed-o
fd3f ("x86/retpoline/crypto: Convert crypto assembler indirect
jumps")
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_asm.S | 115
arch/x86/crypto/aesni-intel_glue.c | 25 +++--
2 files changed, 84 insertions(+), 56 deletions(-)
diff --git a/arch/x86/cryp
Eric Biggers
Cc: Herbert Xu
Ard Biesheuvel (2):
crypto: x86/aes-ni-xts - use direct calls to and 4-way stride
crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper
arch/x86/crypto/aesni-intel_asm.S | 353
arch/x86/crypto/aesni-intel_glue.c
On Mon, 21 Dec 2020 at 23:01, Eric Biggers wrote:
>
> Hi Ard,
>
> On Sat, Dec 12, 2020 at 10:16:56AM +0100, Ard Biesheuvel wrote:
> > Clean up some issues and peculiarities in the gcm(aes-ni) driver.
> >
> > Cc: Eric Biggers
> > Cc: Herbert Xu
> >
>
On Fri, 18 Dec 2020 at 22:07, Megha Dey wrote:
>
> From: Kyung Min Park
>
> Optimize GHASH computations with the 512 bit wide VPCLMULQDQ instructions.
> The new instruction allows working on 4 x 16-byte blocks at a time.
> For best parallelism and deeper out of order execution, the main loop of
Now that kernel mode SIMD is guaranteed to be available when executing
in task or softirq context, we no longer need scalar fallbacks to use
when the NEON is unavailable. So get rid of them.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/ghash-ce-glue.c | 209 +---
1 file
Given that GCM executes at 1-2 cycles per byte and operates on 64 byte
chunks, doing a yield check every iteration should limit the scheduling
(or softirq) latency to < 200 cycles, which is a very conservative upper
bound.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/ghash-ce-core
cheap, we can relax this
restriction, by increasing the granularity of kernel mode NEON code, and
always disabling softirq processing while the NEON is being used in task
context.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/include/asm/assembler.h | 19 +--
arch/arm64/kernel/asm
In order to ensure that kernel mode SIMD routines will not need a scalar
fallback if they run with softirqs disabled, disallow any use of the
AEAD encrypt and decrypt routines from outside of task or softirq context.
Signed-off-by: Ard Biesheuvel
---
crypto/aead.c | 10 ++
1 file
In order to ensure that kernel mode SIMD routines will not need a scalar
fallback if they run with softirqs disabled, disallow any use of the
skcipher encrypt and decrypt routines from outside of task or softirq
context.
Signed-off-by: Ard Biesheuvel
---
crypto/skcipher.c | 10 ++
1
ernel_fpu_begin/end is no longer
expensive?
Cc: Dave Martin
Cc: Mark Brown
Cc: Herbert Xu
Cc: Eric Biggers
Cc: Will Deacon
Cc: Catalin Marinas
Cc: Thomas Gleixner
Cc: Peter Zijlstra
Cc: Sebastian Andrzej Siewior
Cc: Ingo Molnar
Ard Biesheuvel (5):
crypto: aead - disallow en/decrypt
ne shash_alg structs using macros
> crypto: x86/blake2s - define shash_alg structs using macros
> crypto: blake2s - remove unneeded includes
> crypto: blake2s - share the "shash" API boilerplate code
> crypto: arm/blake2s - add ARM scalar optimized BLAKE2s
> wireguard:
if the bit sliced NEON driver is enabled as a module. So instead, let's
use IS_ENABLED() here.
Fixes: 69b6f2e817e5b ("crypto: arm64/aes-neon - limit exposed routines if ...")
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 4 ++--
1 file changed, 2 insertions(+), 2 d
e any input size,
and uses NEON permutation instructions and overlapping loads and stores
to handle the tail block. This results in a ~16% speedup for 1420 byte
blocks on cores with deep pipelines such as ThunderX2.
Signed-off-by: Ard Biesheuvel
---
arch/arm64/crypto/aes-glue.c | 46 +++---
arch/
Hello Chang,
On Wed, 16 Dec 2020 at 18:47, Chang S. Bae wrote:
>
> Key Locker (KL) is Intel's new security feature that protects the AES key
> at the time of data transformation. New AES SIMD instructions -- as a
> successor of Intel's AES-NI -- are provided to encode an AES key and
> reference i
(+ Eric)
TL;DR can we find a way to use synchronous SIMD skciphers/aeads
without cryptd or scalar fallbacks
On Thu, 10 Dec 2020 at 13:19, Ard Biesheuvel wrote:
>
> On Thu, 10 Dec 2020 at 13:16, Herbert Xu wrote:
> >
> > On Thu, Dec 10, 2020 at 01:03:56PM +0100, Ar
r even for final
blocks that are smaller than the chacha block size.
So increment the counter after calling chacha_block_xor_neon().
Fixes: 86cd97ec4b943af3 ("crypto: arm/chacha-neon - optimize for non-block size
multiples")
Reported-by: Eric Biggers
Signed-off-by: Ard Biesheuvel
---
v2:
non-block size
multiples")
Reported-by: Eric Biggers
Signed-off-by: Ard Biesheuvel
---
arch/arm/crypto/chacha-glue.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm/crypto/chacha-glue.c b/arch/arm/crypto/chacha-glue.c
index 7b5cf8430c6d..f19e6da8cdd0 100644
--- a/arch/arm/
On Sat, 12 Dec 2020 at 10:36, Ard Biesheuvel wrote:
>
> On Fri, 11 Dec 2020 at 20:07, Eric Biggers wrote:
> >
> > On Fri, Dec 11, 2020 at 07:29:04PM +0800, Tony W Wang-oc wrote:
> > > The driver crc32c-intel matches CPUs supporting X86_FEATURE_XMM4_2.
> >
On Sat, 12 Dec 2020 at 07:43, Eric Biggers wrote:
>
> Hi Ard,
>
> On Tue, Nov 03, 2020 at 05:28:09PM +0100, Ard Biesheuvel wrote:
> > @@ -42,24 +42,24 @@ static void chacha_doneon(u32 *state, u8 *dst, const u8
> > *src,
> > {
> > u8 buf[CHACHA_BLO
Clean up some issues and peculiarities in the gcm(aes-ni) driver.
Cc: Eric Biggers
Cc: Herbert Xu
Ard Biesheuvel (4):
crypto: x86/gcm-aes-ni - prevent misaligned IV buffers on the stack
crypto: x86/gcm-aes-ni - drop unused asm prototypes
crypto: x86/gcm-aes-ni - clean up mapping of
which always do one or the other.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_glue.c | 144
1 file changed, 58 insertions(+), 86 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c
b/arch/x86/crypto/aesni-intel_glue.c
index e5c4d0cce828..b0a13ab
additional stack realignment sequence that is needed,
and so the alignment is not guaranteed to be more than 8 bytes.
So instead, allocate some padding on the stack, and realign the IV and
data pointers by hand.
Cc:
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_glue.c | 28
The gcm(aes-ni) driver is only built for x86_64, which does not make
use of highmem. So testing for PageHighMem is pointless and can be
omitted.
While at it, replace GFP_ATOMIC with the appropriate runtime-decided
value based on the context.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto
On Fri, 11 Dec 2020 at 20:07, Eric Biggers wrote:
>
> On Fri, Dec 11, 2020 at 07:29:04PM +0800, Tony W Wang-oc wrote:
> > The driver crc32c-intel matches CPUs supporting X86_FEATURE_XMM4_2.
> > On platforms with Zhaoxin CPUs supporting this X86 feature, when
> > crc32c-intel and crc32c-generic are b
Drop some prototypes that are declared but never called.
Signed-off-by: Ard Biesheuvel
---
arch/x86/crypto/aesni-intel_glue.c | 67
1 file changed, 67 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c
b/arch/x86/crypto/aesni-intel_glue.c
index 223670feaffa
Kconfig dependency on CRYPTO_LIB_AES (#1)
- add missing module namespace import into skcipher.c (#2) - this addresses
the kbuild failure report
- add module import to QAT driver, which now contains a valid use of the
bare cipher API
Cc: Eric Biggers
Ard Biesheuvel (2):
chcr_ktls: use AES
-off-by: Ard Biesheuvel
---
drivers/net/ethernet/chelsio/inline_crypto/Kconfig | 1 +
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 19
+++
2 files changed, 8 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/chelsio/inline_crypto
Let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
Signed-off-by: Ard Biesheuvel
---
Documentation/crypto/api-skcipher.rst| 4 +-
arch/arm/crypto/aes-neonbs-glue.c| 3 +
arch/s390/crypto/aes_s390.c
On Fri, 11 Dec 2020 at 11:07, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 02:24:50PM +, Giovanni Cabiddu wrote:
> >
> > @@ -1293,6 +1366,12 @@ static int qat_alg_skcipher_init_xts_tfm(struct
> > crypto_skcipher *tfm)
> > if (IS_ERR(ctx->ftfm))
> > return PTR_ERR(ctx->ftf
FPU state, this should not adversely affect performance.
[0] https://lore.kernel.org/linux-crypto/20201201194556.5220-1-a...@kernel.org/
Cc: Eric Biggers
Cc: Herbert Xu
Cc: Ben Greear
Ard Biesheuvel (3):
ARM: vfp: allow kernel mode NEON in softirq context
crypto: arm/aes-ce - drop non-SIMD
.
Signed-off-by: Ard Biesheuvel
---
arch/arm/include/asm/simd.h | 12
arch/arm/vfp/vfpmodule.c| 11 +++
2 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/simd.h b/arch/arm/include/asm/simd.h
new file mode 100644
index
Now that kernel mode NEON is guaranteed to be available both in process
and in softirq context, we no longer have a need for the SIMD helper or
for non-SIMD fallbacks, given that the skcipher API is not supported in
any other context anyway. So drop this code.
Signed-off-by: Ard Biesheuvel
Now that kernel mode NEON is guaranteed to be available both in process
and in softirq context, we no longer have a need for the SIMD helper or
for non-SIMD fallbacks, given that the skcipher API is not supported in
any other context anyway. So drop this code.
Signed-off-by: Ard Biesheuvel
On Thu, 10 Dec 2020 at 13:16, Herbert Xu wrote:
>
> On Thu, Dec 10, 2020 at 01:03:56PM +0100, Ard Biesheuvel wrote:
> >
> > But we should probably start policing this a bit more. For instance, we now
> > have
> >
> > drivers/net/macsec.c:
> >
> > /
On Thu, 10 Dec 2020 at 12:14, Herbert Xu wrote:
>
> On Thu, Dec 10, 2020 at 08:30:47AM +0100, Ard Biesheuvel wrote:
> >
> > I would argue that these are orthogonal. My patch improves both the
> > accelerated and the fallback path, given that the latter does not have
Let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
Signed-off-by: Ard Biesheuvel
---
Documentation/crypto/api-skcipher.rst| 4 +-
arch/arm/crypto/aes-neonbs-glue.c| 3 +
arch/s390/crypto/aes_s390.c
-off-by: Ard Biesheuvel
---
drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c | 19
+++
1 file changed, 7 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls
Patch #2 puts the cipher API (which should not be used outside of the
crypto API implementation) into an internal header file and module
namespace
Patch #1 is a prerequisite for this, to avoid having to make the chelsio
driver import the crypto internal namespace.
Cc: Eric Biggers
Ard
On Thu, 10 Dec 2020 at 04:01, Ben Greear wrote:
>
> On 12/9/20 6:43 PM, Herbert Xu wrote:
> > On Thu, Dec 10, 2020 at 01:18:12AM +0100, Ard Biesheuvel wrote:
> >>
> >> One thing I realized just now is that in the current situation, all
> >> the synchron
On Wed, 2 Dec 2020 at 00:12, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 11:27:52PM +0100, Ard Biesheuvel wrote:
> >
> > > The problem is that the degradation would come at the worst time,
> > > when the system is loaded. IOW when you get an interrupt during
On Fri, 4 Dec 2020 at 22:30, Brijesh Singh wrote:
>
> The SEV FW version >= 0.23 added a new command that can be used to query
> the attestation report containing the SHA-256 digest of the guest memory
> encrypted through the KVM_SEV_LAUNCH_UPDATE_{DATA, VMSA} commands and
> sign the report with t
missing printk->pr_cont conversion in the AEAD
benchmark.
Signed-off-by: Ard Biesheuvel
---
crypto/tcrypt.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index a647bb298fbc..a4a11d2b57bd 100644
--- a/crypto/tcrypt.c
++
On Tue, 8 Dec 2020 at 14:25, David Howells wrote:
>
> I wonder - would it make sense to reserve two arrays of scatterlist structs
> and a mutex per CPU sufficient to map up to 1MiB of pages with each array
> while the krb5 service is in use?
>
> That way sunrpc could, say, grab the mutex, map the
On Mon, 7 Dec 2020 at 15:15, David Howells wrote:
>
> Ard Biesheuvel wrote:
>
> > > I wonder if it would help if the input buffer and output buffer didn't
> > > have to correspond exactly in usage - ie. the output buffer could be used
> > > at a slower r
On Mon, 7 Dec 2020 at 14:50, Horia Geantă wrote:
>
> On 11/26/2020 9:09 AM, Ard Biesheuvel wrote:
> > On Wed, 25 Nov 2020 at 22:39, Iuliana Prodan wrote:
> >>
> >> On 11/25/2020 11:16 PM, Ard Biesheuvel wrote:
> >>> On Wed, 25 Nov 2020 a
filename encryption in the fscrypt layer. For larger inputs, the
speedup is still significant (~25% on decryption, ~6% on encryption)
Tested-by: Eric Biggers # x86_64
Signed-off-by: Ard Biesheuvel
---
v2: add 32-bit support:
. load IV earlier so we can reuse the IVP register to replace T2 which is
On Mon, 7 Dec 2020 at 19:46, Eric Biggers wrote:
>
> On Sun, Dec 06, 2020 at 11:45:23PM +0100, Ard Biesheuvel wrote:
> > Follow the same approach as the arm64 driver for implementing a version
> > of AES-NI in CBC mode that supports ciphertext stealing. Compared to the
> &
On Mon, 7 Dec 2020 at 13:02, David Howells wrote:
>
> Ard Biesheuvel wrote:
>
> > > Yeah - the problem with that is that for sunrpc, we might be dealing with
> > > 1MB
> > > plus bits of non-contiguous pages, requiring >8K of scatterlist elements
> >
bytes), which is relevant given that AES-CBC with ciphertext
stealing is used for filename encryption in the fscrypt layer. For larger
inputs, the speedup is still significant (~25% on decryption, ~6% on
encryption).
Signed-off-by: Ard Biesheuvel
---
Full tcrypt benchmark results for cts(cbc-aes
On Fri, 4 Dec 2020 at 18:19, David Howells wrote:
>
> Ard Biesheuvel wrote:
>
> > The tricky thing with CTS is that you have to ensure that the final
> > full and partial blocks are presented to the crypto driver as one
> > chunk, or it won't be able to perfo
On Fri, 4 Dec 2020 at 17:52, David Howells wrote:
>
> Bruce Fields wrote:
>
> > OK, I guess I don't understand the question. I haven't thought about
> > this code in at least a decade. What's an auxilary cipher? Is this a
> > question about why we're implementing something, or how we're
> > im
On Thu, 3 Dec 2020 at 23:26, Arnd Bergmann wrote:
>
> From: Arnd Bergmann
>
> When the SIMD portion of the driver is disabled, the compiler cannot
> figure out in advance if it will be called:
>
> ERROR: modpost: "crypto_aegis128_update_simd" [crypto/aegis128.ko] undefined!
>
> Add a conditional
On Thu, 3 Dec 2020 at 02:35, Iuliana Prodan (OSS)
wrote:
>
> From: Iuliana Prodan
>
> This series removes CRYPTO_ALG_ALLOCATES_MEMORY flag and
> allocates the memory needed by the driver, to fulfil a
> request, within the crypto request object.
> The extra size needed for base extended descriptor
On Wed, 2 Dec 2020 at 00:30, Herbert Xu wrote:
>
> On Wed, Dec 02, 2020 at 12:24:47AM +0100, Ard Biesheuvel wrote:
> >
> > True. But the fallback only gets executed if the scheduler is stupid
> > enough to schedule the TX task onto the core that is overloaded doing
On Wed, 2 Dec 2020 at 00:12, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 11:27:52PM +0100, Ard Biesheuvel wrote:
> >
> > > The problem is that the degradation would come at the worst time,
> > > when the system is loaded. IOW when you get an interrupt during
On Tue, 1 Dec 2020 at 23:16, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 11:12:32PM +0100, Ard Biesheuvel wrote:
> >
> > What do you mean by just one direction? Ben just confirmed a
>
> The TX direction generally executes in process context, which
> would benef
On Tue, 1 Dec 2020 at 23:04, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 11:01:57PM +0100, Ard Biesheuvel wrote:
> >
> > This is not the first time this has come up. The point is that CCMP in
> > the wireless stack is not used in 99% of the cases, given that any
On Tue, 1 Dec 2020 at 22:57, Herbert Xu wrote:
>
> On Tue, Dec 01, 2020 at 08:45:56PM +0100, Ard Biesheuvel wrote:
> > Add ccm(aes) implementation from linux-wireless mailing list (see
> > http://permalink.gmane.org/gmane.linux.kernel.wireless.general/126679).
> >
On Tue, 1 Dec 2020 at 20:53, Randy Dunlap wrote:
>
> On 12/1/20 2:03 AM, Stephen Rothwell wrote:
> > Hi all,
> >
> > Changes since 20201130:
> >
>
> on i386 or x86_64:
>
> CONFIG_CRYPTO_AEGIS128=m
> CONFIG_CRYPTO_AEGIS128_AESNI_SSE2=y
>
>
> ERROR: modpost: "crypto_aegis128_update_simd" [crypto/aeg
Greear
Co-developed-by: Steve deRosier
Signed-off-by: Steve deRosier
Signed-off-by: Ard Biesheuvel
---
v2: avoid the SIMD helper, as it produces a CRYPTO_ALG_ASYNC aead, which
is not usable by the 802.11 ccmp driver
arch/x86/crypto/aesni-intel_glue.c | 406 +++-
1 file changed
On Mon, 30 Nov 2020 at 07:58, Tianjia Zhang
wrote:
>
>
>
> On 11/30/20 10:24 AM, Herbert Xu wrote:
> > On Mon, Nov 30, 2020 at 10:21:56AM +0800, Tianjia Zhang wrote:
> >>
> >>> That is true only if there are non-generic implementations of
> >>> the algorithms, which is not the case here. Please e
On Mon, 30 Nov 2020 at 23:48, Ben Greear wrote:
>
> On 11/29/20 10:20 AM, Ard Biesheuvel wrote:
> > From: Steve deRosier
> >
> > Add ccm(aes) implementation from linux-wireless mailing list (see
> > http://permalink.gmane.org/gmane.linux.kernel.wireless.general/12
On Mon, 30 Nov 2020 at 13:42, Geert Uytterhoeven wrote:
>
> Hi Ard,
>
> On Mon, Nov 30, 2020 at 1:26 PM Ard Biesheuvel wrote:
> > Geert reports that builds where CONFIG_CRYPTO_AEGIS128_SIMD is not set
> > may still emit references to crypto_aegis128_update_simd(), which
always able to prove this.
So add some explicit checks for CONFIG_CRYPTO_AEGIS128_SIMD to help the
compiler figure this out.
Tested-by: Geert Uytterhoeven
Signed-off-by: Ard Biesheuvel
---
crypto/aegis128-core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/crypto/aegis128
On Mon, 30 Nov 2020 at 10:43, Ard Biesheuvel wrote:
>
> On Mon, 30 Nov 2020 at 10:37, Geert Uytterhoeven wrote:
> >
> > Hi Ard,
> >
> > On Tue, Nov 17, 2020 at 2:38 PM Ard Biesheuvel wrote:
> > > This series supersedes [0] '[PATCH] crypto: aegis1
On Mon, 30 Nov 2020 at 10:37, Geert Uytterhoeven wrote:
>
> Hi Ard,
>
> On Tue, Nov 17, 2020 at 2:38 PM Ard Biesheuvel wrote:
> > This series supersedes [0] '[PATCH] crypto: aegis128/neon - optimize tail
> > block handling', which is included as patch #3
On Sun, 29 Nov 2020 at 19:20, Ard Biesheuvel wrote:
>
> From: Steve deRosier
>
Whoops - please ignore this line.
> Add ccm(aes) implementation from linux-wireless mailing list (see
> http://permalink.gmane.org/gmane.linux.kernel.wireless.general/126679).
>
> This elimina
.
Suggested-by: Ben Greear
Co-developed-by: Steve deRosier
Signed-off-by: Steve deRosier
Signed-off-by: Ard Biesheuvel
---
Ben,
This is almost a rewrite of the original patch, switching to the new
skcipher API, using the existing SIMD helper, and dropping numerous unrelated
changes. The basic
On Thu, 26 Nov 2020 at 17:00, Iuliana Prodan wrote:
>
> On 11/26/2020 9:09 AM, Ard Biesheuvel wrote:
> > On Wed, 25 Nov 2020 at 22:39, Iuliana Prodan wrote:
> >>
> >> On 11/25/2020 11:16 PM, Ard Biesheuvel wrote:
> >>> On Wed, 25 Nov 2020 a
v5.4+
Signed-off-by: Ard Biesheuvel
---
v2: - add comment block describing the erratum and how it is being worked
around
- mention A57 as well as A72, as both are affected
arch/arm/crypto/aes-ce-core.S | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --
On Wed, 25 Nov 2020 at 22:39, Iuliana Prodan wrote:
>
> On 11/25/2020 11:16 PM, Ard Biesheuvel wrote:
> > On Wed, 25 Nov 2020 at 22:14, Iuliana Prodan (OSS)
> > wrote:
> >>
> >> From: Iuliana Prodan
> >>
> >> Add the option to allocate the
On Wed, 25 Nov 2020 at 22:14, Iuliana Prodan (OSS)
wrote:
>
> From: Iuliana Prodan
>
> Add the option to allocate the crypto request object plus any extra space
> needed by the driver into a DMA-able memory.
>
> Add CRYPTO_TFM_REQ_DMA flag to be used by backend implementations to
> indicate to cr
On Wed, 25 Nov 2020 at 17:56, Eric Biggers wrote:
>
> On Wed, Nov 25, 2020 at 08:22:16AM +0100, Ard Biesheuvel wrote:
> > ARM Cortex-A72 cores running in 32-bit mode are affected by a silicon
> > erratum (1655431: ELR recorded incorrectly on interrupt taken between
> > cr
ff-by: Ard Biesheuvel
---
arch/arm/crypto/aes-ce-core.S | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/arm/crypto/aes-ce-core.S b/arch/arm/crypto/aes-ce-core.S
index 4d1707388d94..c0ef9680d90b 100644
--- a/arch/arm/crypto/aes-ce-core.S
+++ b/arch/arm/c
that are entirely avoidable.
So let's copy the key into the ctx buffer first, which we will do
anyway in the common case, and which guarantees correct alignment.
Cc:
Signed-off-by: Ard Biesheuvel
---
crypto/ecdh.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/c