Using Advanced Vector Extensions (AVX) with hand-coded x86_64 algorithms (e.g. arch/x86/crypto/blowfish-x86_64-asm_64.S)

2018-12-04 Thread Shipof _
I was curious whether it might be faster to implement F() using
instructions that are designed to operate on sets of data similar to
what the cipher already processes.
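
For reference, the Blowfish round function in question boils down to four
data-dependent S-box lookups plus a bit of mixing. A minimal C sketch follows
(the kernel keeps the key-dependent S-boxes in the cipher context; the flat
global table here is only for illustration):

    #include <stdint.h>

    /* Key-dependent S-boxes: four 256-entry tables of 32-bit words,
     * filled in during key setup (contents omitted here). */
    extern const uint32_t S[4][256];

    /*
     * Blowfish round function F.  The four lookups are indexed by the data
     * being encrypted, so a SIMD version would likely need gather loads
     * (e.g. AVX2 vpgatherdd) rather than plain vector arithmetic, and any
     * speedup would mainly apply to parallelizable modes such as ECB, CTR
     * or CBC decryption, where several blocks can be processed at once.
     */
    static uint32_t F(uint32_t x)
    {
            uint32_t a = (x >> 24) & 0xff;
            uint32_t b = (x >> 16) & 0xff;
            uint32_t c = (x >>  8) & 0xff;
            uint32_t d = x & 0xff;

            return ((S[0][a] + S[1][b]) ^ S[2][c]) + S[3][d];
    }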


[PATCH] crypto: adiantum - propagate CRYPTO_ALG_ASYNC flag to instance

2018-12-04 Thread Eric Biggers
From: Eric Biggers 

If the stream cipher implementation is asynchronous, then the Adiantum
instance must be flagged as asynchronous as well.  Otherwise someone
asking for a synchronous algorithm can get an asynchronous algorithm.

There are no asynchronous xchacha12 or xchacha20 implementations yet,
which makes this largely a theoretical issue, but it should still be fixed.
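
For illustration, a caller that needs a synchronous transform excludes
asynchronous implementations by passing CRYPTO_ALG_ASYNC in the mask at
allocation time; that only works if the instance's cra_flags are set
correctly. Minimal sketch, using the usual "adiantum(xchacha12,aes)"
instantiation:

    #include <crypto/skcipher.h>

    /* Request a synchronous Adiantum transform: the CRYPTO_ALG_ASYNC mask
     * bit rules out async implementations, which relies on the instance
     * propagating the flag from its underlying stream cipher. */
    static struct crypto_skcipher *alloc_sync_adiantum(void)
    {
            return crypto_alloc_skcipher("adiantum(xchacha12,aes)", 0,
                                         CRYPTO_ALG_ASYNC);
    }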

Fixes: 059c2a4d8e16 ("crypto: adiantum - add Adiantum support")
Signed-off-by: Eric Biggers 
---
 crypto/adiantum.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index 2dfcf12fd4529..ca27e0dc2958c 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -590,6 +590,8 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
 hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
goto out_drop_hash;
 
+   inst->alg.base.cra_flags = streamcipher_alg->base.cra_flags &
+  CRYPTO_ALG_ASYNC;
inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx);
inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask |
-- 
2.20.0.rc1.387.gf8505762e3-goog



Re: [PATCH] fscrypt: remove CRYPTO_CTR dependency

2018-12-04 Thread Eric Biggers
On Thu, Sep 06, 2018 at 12:43:41PM +0200, Ard Biesheuvel wrote:
> On 5 September 2018 at 21:24, Eric Biggers  wrote:
> > From: Eric Biggers 
> >
> > fscrypt doesn't use the CTR mode of operation for anything, so there's
> > no need to select CRYPTO_CTR.  It was added by commit 71dea01ea2ed
> > ("ext4 crypto: require CONFIG_CRYPTO_CTR if ext4 encryption is
> > enabled").  But, I've been unable to identify the arm64 crypto bug it
> > was supposedly working around.
> >
> > I suspect the issue was seen only on some old Android device kernel
> > (circa 3.10?).  So if the fix wasn't mistaken, the real bug is probably
> > already fixed.  Or maybe it was actually a bug in a non-upstream crypto
> > driver.
> >
> > So, remove the dependency.  If it turns out there's actually still a
> > bug, we'll fix it properly.
> >
> > Signed-off-by: Eric Biggers 
> 
> Acked-by: Ard Biesheuvel 
> 
> This may be related to
> 
> 11e3b725cfc2 crypto: arm64/aes-blk - honour iv_out requirement in CBC
> and CTR modes
> 
> given that the commit in question mentions CTS. How it actually works
> around the issue is unclear to me, though.
> 
> 
> 
> 
> > ---
> >  fs/crypto/Kconfig | 1 -
> >  1 file changed, 1 deletion(-)
> >
> > diff --git a/fs/crypto/Kconfig b/fs/crypto/Kconfig
> > index 02b7d91c92310..284b589b4774d 100644
> > --- a/fs/crypto/Kconfig
> > +++ b/fs/crypto/Kconfig
> > @@ -6,7 +6,6 @@ config FS_ENCRYPTION
> > select CRYPTO_ECB
> > select CRYPTO_XTS
> > select CRYPTO_CTS
> > -   select CRYPTO_CTR
> > select CRYPTO_SHA256
> > select KEYS
> > help
> > --
> > 2.19.0.rc2.392.g5ba43deb5a-goog
> >

Ping.  Ted, can you consider applying this to the fscrypt tree for 4.21?

Thanks,

- Eric


[PATCH v2 0/3] crypto: arm64/chacha - performance improvements

2018-12-04 Thread Ard Biesheuvel
Improve the performance of the NEON-based ChaCha implementation:

Patch #1 adds a block size of 1472 to the tcrypt test template so we have
something that reflects the VPN case.

Patch #2 improves performance for arbitrary-length inputs: on deep pipelines,
throughput increases by ~30% when running on input blocks whose size is drawn
randomly from the interval [64, 1024).

Patch #3 adopts the OpenSSL approach of using the ALU in parallel with the
SIMD unit, processing a fifth block with scalar instructions while the SIMD
unit operates on four blocks.

Performance on Cortex-A57:

BEFORE:
===
testing speed of async chacha20 (chacha20-neon) encryption
tcrypt: test 0 (256 bit key, 16 byte blocks): 2528223 operations in 1 seconds (40451568 bytes)
tcrypt: test 1 (256 bit key, 64 byte blocks): 2518155 operations in 1 seconds (161161920 bytes)
tcrypt: test 2 (256 bit key, 256 byte blocks): 1207948 operations in 1 seconds (309234688 bytes)
tcrypt: test 3 (256 bit key, 1024 byte blocks): 332194 operations in 1 seconds (340166656 bytes)
tcrypt: test 4 (256 bit key, 1472 byte blocks): 185659 operations in 1 seconds (273290048 bytes)
tcrypt: test 5 (256 bit key, 8192 byte blocks): 41829 operations in 1 seconds (342663168 bytes)

AFTER:
==
testing speed of async chacha20 (chacha20-neon) encryption
tcrypt: test 0 (256 bit key, 16 byte blocks): 2530018 operations in 1 seconds (40480288 bytes)
tcrypt: test 1 (256 bit key, 64 byte blocks): 2518270 operations in 1 seconds (161169280 bytes)
tcrypt: test 2 (256 bit key, 256 byte blocks): 1187760 operations in 1 seconds (304066560 bytes)
tcrypt: test 3 (256 bit key, 1024 byte blocks): 361652 operations in 1 seconds (370331648 bytes)
tcrypt: test 4 (256 bit key, 1472 byte blocks): 280971 operations in 1 seconds (413589312 bytes)
tcrypt: test 5 (256 bit key, 8192 byte blocks): 53654 operations in 1 seconds (439533568 bytes)

Zinc:
=
testing speed of async chacha20 (chacha20-software) encryption
tcrypt: test 0 (256 bit key, 16 byte blocks): 2510300 operations in 1 seconds (40164800 bytes)
tcrypt: test 1 (256 bit key, 64 byte blocks): 2663794 operations in 1 seconds (170482816 bytes)
tcrypt: test 2 (256 bit key, 256 byte blocks): 1237617 operations in 1 seconds (316829952 bytes)
tcrypt: test 3 (256 bit key, 1024 byte blocks): 364645 operations in 1 seconds (373396480 bytes)
tcrypt: test 4 (256 bit key, 1472 byte blocks): 251548 operations in 1 seconds (370278656 bytes)
tcrypt: test 5 (256 bit key, 8192 byte blocks): 47650 operations in 1 seconds (390348800 bytes)
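
At the VPN-sized 1472-byte block, the byte counts above work out to:

  before: 185659 ops x 1472 bytes = 273290048 bytes/s (~261 MiB/s)
  after:  280971 ops x 1472 bytes = 413589312 bytes/s (~394 MiB/s, ~51% higher)
  Zinc:   251548 ops x 1472 bytes = 370278656 bytes/s (~353 MiB/s)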

Cc: Eric Biggers 
Cc: Martin Willi 

Ard Biesheuvel (3):
  crypto: tcrypt - add block size of 1472 to skcipher template
  crypto: arm64/chacha - optimize for arbitrary length inputs
  crypto: arm64/chacha - use combined SIMD/ALU routine for more speed

 arch/arm64/crypto/chacha-neon-core.S | 396 +++-
 arch/arm64/crypto/chacha-neon-glue.c |  59 ++-
 crypto/tcrypt.c  |   2 +-
 3 files changed, 404 insertions(+), 53 deletions(-)

-- 
2.19.2



[PATCH v2 1/3] crypto: tcrypt - add block size of 1472 to skcipher template

2018-12-04 Thread Ard Biesheuvel
In order to have better coverage of algorithms operating on block
sizes that are in the ballpark of a VPN packet, add 1472 to the
block_sizes array.
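
(1472 is presumably chosen as the largest UDP payload that fits in a
standard 1500-byte Ethernet MTU: 1500 - 20 bytes of IPv4 header - 8 bytes
of UDP header = 1472 bytes.)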

Signed-off-by: Ard Biesheuvel 
---
 crypto/tcrypt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 0590a9204562..e7fb87e114a5 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -81,7 +81,7 @@ static char *check[] = {
NULL
 };
 
-static u32 block_sizes[] = { 16, 64, 256, 1024, 8192, 0 };
+static u32 block_sizes[] = { 16, 64, 256, 1024, 1472, 8192, 0 };
 static u32 aead_sizes[] = { 16, 64, 256, 512, 1024, 2048, 4096, 8192, 0 };
 
 #define XBUFSIZE 8
-- 
2.19.2



[PATCH v2 3/3] crypto: arm64/chacha - use combined SIMD/ALU routine for more speed

2018-12-04 Thread Ard Biesheuvel
To some degree, most known AArch64 micro-architectures appear to be
able to issue ALU instructions in parallel with SIMD instructions
without affecting the SIMD throughput. This means we can use the ALU
to process a fifth ChaCha block while the SIMD unit is processing four
blocks in parallel.
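
In C terms, the scalar path steps through the standard ChaCha quarter-round;
the sketch below is purely illustrative, while the patch emits the equivalent
interleaved add/eor/ror instructions on the a0..a15 register aliases:

    #include <stdint.h>

    #define ROTL32(v, n)  (((v) << (n)) | ((v) >> (32 - (n))))

    /* ChaCha quarter-round: add, xor, rotate.  The scalar registers in the
     * patch perform these operations for a fifth block while the NEON
     * registers do the same for four blocks at a time. */
    static void chacha_quarterround(uint32_t *a, uint32_t *b,
                                    uint32_t *c, uint32_t *d)
    {
            *a += *b;  *d ^= *a;  *d = ROTL32(*d, 16);
            *c += *d;  *b ^= *c;  *b = ROTL32(*b, 12);
            *a += *b;  *d ^= *a;  *d = ROTL32(*d,  8);
            *c += *d;  *b ^= *c;  *b = ROTL32(*b,  7);
    }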

Signed-off-by: Ard Biesheuvel 
---
 arch/arm64/crypto/chacha-neon-core.S | 235 ++--
 arch/arm64/crypto/chacha-neon-glue.c |  39 ++--
 2 files changed, 239 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/crypto/chacha-neon-core.S b/arch/arm64/crypto/chacha-neon-core.S
index 32086709e6b3..534e0a3fafa4 100644
--- a/arch/arm64/crypto/chacha-neon-core.S
+++ b/arch/arm64/crypto/chacha-neon-core.S
@@ -1,13 +1,13 @@
 /*
  * ChaCha/XChaCha NEON helper functions
  *
- * Copyright (C) 2016 Linaro, Ltd. 
+ * Copyright (C) 2016-2018 Linaro, Ltd. 
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
  * published by the Free Software Foundation.
  *
- * Based on:
+ * Originally based on:
  * ChaCha20 256-bit cipher algorithm, RFC7539, x64 SSSE3 functions
  *
  * Copyright (C) 2015 Martin Willi
@@ -160,8 +160,27 @@ ENTRY(hchacha_block_neon)
ret x9
 ENDPROC(hchacha_block_neon)
 
+   a0  .req  w12
+   a1  .req  w13
+   a2  .req  w14
+   a3  .req  w15
+   a4  .req  w16
+   a5  .req  w17
+   a6  .req  w19
+   a7  .req  w20
+   a8  .req  w21
+   a9  .req  w22
+   a10 .req  w23
+   a11 .req  w24
+   a12 .req  w25
+   a13 .req  w26
+   a14 .req  w27
+   a15 .req  w28
+
.align  6
 ENTRY(chacha_4block_xor_neon)
+   frame_push  10
+
// x0: Input state matrix, s
// x1: 4 data blocks output, o
// x2: 4 data blocks input, i
@@ -181,6 +200,9 @@ ENTRY(chacha_4block_xor_neon)
// matrix by interleaving 32- and then 64-bit words, which allows us to
// do XOR in NEON registers.
//
+   // At the same time, a fifth block is encrypted in parallel using
+   // scalar registers
+   //
adr_l   x9, CTRINC  // ... and ROT8
ld1 {v30.4s-v31.4s}, [x9]
 
@@ -191,7 +213,24 @@ ENTRY(chacha_4block_xor_neon)
ld4r{ v8.4s-v11.4s}, [x8], #16
ld4r{v12.4s-v15.4s}, [x8]
 
-   // x12 += counter values 0-3
+   mov a0, v0.s[0]
+   mov a1, v1.s[0]
+   mov a2, v2.s[0]
+   mov a3, v3.s[0]
+   mov a4, v4.s[0]
+   mov a5, v5.s[0]
+   mov a6, v6.s[0]
+   mov a7, v7.s[0]
+   mov a8, v8.s[0]
+   mov a9, v9.s[0]
+   mov a10, v10.s[0]
+   mov a11, v11.s[0]
+   mov a12, v12.s[0]
+   mov a13, v13.s[0]
+   mov a14, v14.s[0]
+   mov a15, v15.s[0]
+
+   // x12 += counter values 1-4
add v12.4s, v12.4s, v30.4s
 
 .Ldoubleround4:
@@ -200,33 +239,53 @@ ENTRY(chacha_4block_xor_neon)
// x2 += x6, x14 = rotl32(x14 ^ x2, 16)
// x3 += x7, x15 = rotl32(x15 ^ x3, 16)
add v0.4s, v0.4s, v4.4s
+ add   a0, a0, a4
add v1.4s, v1.4s, v5.4s
+ add   a1, a1, a5
add v2.4s, v2.4s, v6.4s
+ add   a2, a2, a6
add v3.4s, v3.4s, v7.4s
+ add   a3, a3, a7
 
eor v12.16b, v12.16b, v0.16b
+ eor   a12, a12, a0
eor v13.16b, v13.16b, v1.16b
+ eor   a13, a13, a1
eor v14.16b, v14.16b, v2.16b
+ eor   a14, a14, a2
eor v15.16b, v15.16b, v3.16b
+ eor   a15, a15, a3
 
rev32   v12.8h, v12.8h
+ ror   a12, a12, #16
rev32   v13.8h, v13.8h
+ ror   a13, a13, #16
rev32   v14.8h, v14.8h
+ ror   a14, a14, #16
rev32   v15.8h, v15.8h
+ ror   a15, a15, #16
 
// x8 += x12, x4 = rotl32(x4 ^ x8, 12)
// x9 += x13, x5 = rotl32(x5 ^ x9, 12)
// x10 += x14, x6 = rotl32(x6 ^ x10, 12)
// x11 += x15, x7 = rotl32(x7 ^ x11, 12)
add v8.4s, v8.4s, v12.4s
+ add   a8, a8, a12
add v9.4s, v9.4s, v13.4s
+ add   a9, a9, a13
add v10.4s, v10.4s, v14.4s
+ add   a10, a10, a14
add v11.4s, v11.4s, v15.4s
+ add   

[PATCH v2 2/3] crypto: arm64/chacha - optimize for arbitrary length inputs

2018-12-04 Thread Ard Biesheuvel
Update the 4-way NEON ChaCha routine so it can handle input of any
length >64 bytes in its entirety, rather than having to fall back to
the 1-way routine and/or memcpy() via temporary buffers to handle the
tail of a ChaCha invocation whose length is not a multiple of 256 bytes.

On inputs that are a multiple of 256 bytes (and thus in tcrypt
benchmarks), performance drops by around 1% on Cortex-A57, while
performance for inputs drawn randomly from the range [64, 1024)
increases by around 30%.

Signed-off-by: Ard Biesheuvel 
---
 arch/arm64/crypto/chacha-neon-core.S | 183 ++--
 arch/arm64/crypto/chacha-neon-glue.c |  38 ++--
 2 files changed, 184 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/crypto/chacha-neon-core.S b/arch/arm64/crypto/chacha-neon-core.S
index 75b4e06cee79..32086709e6b3 100644
--- a/arch/arm64/crypto/chacha-neon-core.S
+++ b/arch/arm64/crypto/chacha-neon-core.S
@@ -19,6 +19,8 @@
  */
 
 #include 
+#include 
+#include 
 
.text
.align  6
@@ -36,7 +38,7 @@
  */
 chacha_permute:
 
-   adr x10, ROT8
+   adr_l   x10, ROT8
ld1 {v12.4s}, [x10]
 
 .Ldoubleround:
@@ -164,6 +166,12 @@ ENTRY(chacha_4block_xor_neon)
// x1: 4 data blocks output, o
// x2: 4 data blocks input, i
// w3: nrounds
+   // x4: byte count
+
+   adr_l   x10, .Lpermute
+   and x5, x4, #63
+   add x10, x10, x5
+   add x11, x10, #64
 
//
// This function encrypts four consecutive ChaCha blocks by loading
@@ -173,15 +181,15 @@ ENTRY(chacha_4block_xor_neon)
// matrix by interleaving 32- and then 64-bit words, which allows us to
// do XOR in NEON registers.
//
-   adr x9, CTRINC  // ... and ROT8
+   adr_l   x9, CTRINC  // ... and ROT8
ld1 {v30.4s-v31.4s}, [x9]
 
// x0..15[0-3] = s0..3[0..3]
-   mov x4, x0
-   ld4r{ v0.4s- v3.4s}, [x4], #16
-   ld4r{ v4.4s- v7.4s}, [x4], #16
-   ld4r{ v8.4s-v11.4s}, [x4], #16
-   ld4r{v12.4s-v15.4s}, [x4]
+   add x8, x0, #16
+   ld4r{ v0.4s- v3.4s}, [x0]
+   ld4r{ v4.4s- v7.4s}, [x8], #16
+   ld4r{ v8.4s-v11.4s}, [x8], #16
+   ld4r{v12.4s-v15.4s}, [x8]
 
// x12 += counter values 0-3
add v12.4s, v12.4s, v30.4s
@@ -425,24 +433,47 @@ ENTRY(chacha_4block_xor_neon)
zip1 v30.4s, v14.4s, v15.4s
zip2 v31.4s, v14.4s, v15.4s
 
+   mov x3, #64
+   subs x5, x4, #64
+   add x6, x5, x2
+   csel x3, x3, xzr, ge
+   csel x2, x2, x6, ge
+
// interleave 64-bit words in state n, n+2
zip1 v0.2d, v16.2d, v18.2d
zip2 v4.2d, v16.2d, v18.2d
zip1 v8.2d, v17.2d, v19.2d
zip2 v12.2d, v17.2d, v19.2d
-   ld1 {v16.16b-v19.16b}, [x2], #64
+   ld1 {v16.16b-v19.16b}, [x2], x3
+
+   subs x6, x4, #128
+   ccmp x3, xzr, #4, lt
+   add x7, x6, x2
+   csel x3, x3, xzr, eq
+   csel x2, x2, x7, eq
 
zip1 v1.2d, v20.2d, v22.2d
zip2 v5.2d, v20.2d, v22.2d
zip1 v9.2d, v21.2d, v23.2d
zip2 v13.2d, v21.2d, v23.2d
-   ld1 {v20.16b-v23.16b}, [x2], #64
+   ld1 {v20.16b-v23.16b}, [x2], x3
+
+   subs x7, x4, #192
+   ccmp x3, xzr, #4, lt
+   add x8, x7, x2
+   csel x3, x3, xzr, eq
+   csel x2, x2, x8, eq
 
zip1 v2.2d, v24.2d, v26.2d
zip2 v6.2d, v24.2d, v26.2d
zip1 v10.2d, v25.2d, v27.2d
zip2 v14.2d, v25.2d, v27.2d
-   ld1 {v24.16b-v27.16b}, [x2], #64
+   ld1 {v24.16b-v27.16b}, [x2], x3
+
+   subs x8, x4, #256
+   ccmp x3, xzr, #4, lt
+   add x9, x8, x2
+   csel x2, x2, x9, eq
 
zip1 v3.2d, v28.2d, v30.2d
zip2 v7.2d, v28.2d, v30.2d
@@ -451,29 +482,155 @@ ENTRY(chacha_4block_xor_neon)
ld1 {v28.16b-v31.16b}, [x2]
 
// xor with corresponding input, write to output
+   tbnz x5, #63, 0f
eor v16.16b, v16.16b, v0.16b
eor v17.16b, v17.16b, v1.16b
eor v18.16b, v18.16b, v2.16b
eor v19.16b, v19.16b, v3.16b
+   st1 {v16.16b-v19.16b}, [x1], #64
+
+   tbnz x6, #63, 1f
eor v20.16b, v20.16b, v4.16b
eor v21.16b, v21.16b, v5.16b