4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Biggers <ebigg...@google.com>

commit 12455e320e19e9cc7ad97f4ab89c280fe297387c upstream.

The arm64 NEON bit-sliced implementation of AES-CTR fails the improved
skcipher tests because it sometimes produces the wrong ciphertext.  The
bug is that the final keystream block isn't returned from the assembly
code when the number of non-final blocks is zero.  This can happen if
the input data ends a few bytes after a page boundary.  In this case the
last bytes get "encrypted" by XOR'ing them with uninitialized memory.

Fix the assembly code to return the final keystream block when needed.
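To illustrate the bug class (not the kernel code itself): in CTR mode, a partial final block must be XOR'd with a freshly generated keystream block, even when there are zero full blocks. A toy Python model, where `keystream_block` is a hypothetical stand-in for one AES-CTR keystream block, shows how skipping the final-keystream store when the full-block count is zero corrupts the tail:

```python
# Toy model of the control flow fixed by this patch (not real AES):
# the assembly encrypts full blocks and, when the caller asks for the
# tail keystream, must also return the final keystream block so the
# partial tail can be XOR'd correctly. The buggy path skipped that
# store whenever the number of full blocks was zero.

BLOCK = 16

def keystream_block(ctr: int) -> bytes:
    # Hypothetical PRF standing in for one AES-CTR keystream block.
    return bytes((ctr * 131 + i * 17) & 0xFF for i in range(BLOCK))

def ctr_encrypt(data: bytes, ctr: int, buggy: bool = False) -> bytes:
    full, tail = divmod(len(data), BLOCK)
    out = bytearray()
    for b in range(full):
        ks = keystream_block(ctr + b)
        out += bytes(x ^ k for x, k in zip(data[b * BLOCK:(b + 1) * BLOCK], ks))
    if tail:
        if buggy and full == 0:
            # Final keystream block never written back: the tail is
            # XOR'd with stale/uninitialized memory (modelled as zeros).
            final_ks = bytes(BLOCK)
        else:
            final_ks = keystream_block(ctr + full)
        out += bytes(x ^ k for x, k in zip(data[-tail:], final_ks))
    return bytes(out)
```

With input shorter than one block (zero full blocks), the buggy variant produces different ciphertext than the correct one, while inputs that are a whole number of blocks are unaffected, matching the symptom described above.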

Fixes: 88a3f582bea9 ("crypto: arm64/aes - don't use IV buffer to return final keystream block")
Cc: <sta...@vger.kernel.org> # v4.11+
Reviewed-by: Ard Biesheuvel <ard.biesheu...@linaro.org>
Signed-off-by: Eric Biggers <ebigg...@google.com>
Signed-off-by: Herbert Xu <herb...@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>


---
 arch/arm64/crypto/aes-neonbs-core.S |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/arch/arm64/crypto/aes-neonbs-core.S
+++ b/arch/arm64/crypto/aes-neonbs-core.S
@@ -940,7 +940,7 @@ CPU_LE(     rev             x8, x8          )
 8:     next_ctr        v0
        cbnz            x4, 99b
 
-0:     st1             {v0.16b}, [x5]
+       st1             {v0.16b}, [x5]
        ldp             x29, x30, [sp], #16
        ret
 
@@ -948,6 +948,9 @@ CPU_LE(     rev             x8, x8          )
         * If we are handling the tail of the input (x6 != NULL), return the
         * final keystream block back to the caller.
         */
+0:     cbz             x6, 8b
+       st1             {v0.16b}, [x6]
+       b               8b
 1:     cbz             x6, 8b
        st1             {v1.16b}, [x6]
        b               8b
