Re: [PATCH 2/3] crypto: AF_ALG - disregard AAD buffer space for output
On Sat, Nov 12, 2016 at 03:03:36AM +0100, Stephan Mueller wrote:
>
> When you have separate buffers, the kernel does not seem to copy the AD over
> to the target buffer.

OK we should definitely fix that.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 2/3] crypto: AF_ALG - disregard AAD buffer space for output
On Saturday, 12 November 2016 at 09:55:19 CET, Herbert Xu wrote:

Hi Herbert,

> On Thu, Nov 10, 2016 at 04:32:03AM +0100, Stephan Mueller wrote:
> > The kernel crypto API AEAD cipher operation generates output such that
> > space for the AAD is reserved in the output buffer without being
> > touched. The processed ciphertext/plaintext is appended to the reserved
> > AAD buffer.
> >
> > The user space interface followed that approach. However, this is a
> > violation of the POSIX read definition which requires that any read data
> > is placed at the beginning of the caller-provided buffer. As the kernel
> > crypto API would leave room for the AAD, the old approach did not fully
> > comply with the POSIX specification.
>
> Nack. The kernel AEAD API will copy the AD as is, it definitely
> does not leave the output untouched unless of course when it is
> an in-place operation. The user-space operation should operate
> in the same manner.

When you have separate buffers, the kernel does not seem to copy the AD over
to the target buffer.

> Cheers,

Ciao
Stephan
Re: [PATCH 2/3] crypto: AF_ALG - disregard AAD buffer space for output
On Friday, 11 November 2016 at 16:26:12 CET, Mat Martineau wrote:

Hi Mat,

> > With this solution, the caller must not use sendpage with the exact same
> > buffers for input and output. The following rationale applies: When
> > the caller sends the same buffer for input/output to the sendpage
> > operation, the cipher operation now will write the ciphertext to the
> > beginning of the buffer where the AAD used to be. The subsequent tag
> > calculation will now use the data it finds where the AAD is expected.
> > As the cipher operation has already replaced the AAD with the ciphertext,
> > the tag calculation will take the ciphertext as AAD and thus calculate
> > a wrong tag.
>
> If it's not much overhead, I suggest checking for this condition and
> returning an error.

I can surely look into that. But Herbert's NACK seems to make this patch
unlikely.

> Other than that, I've done a quick test of the patches using sendmsg() and
> read() and found that they work as expected.

Thanks for testing.

Ciao
Stephan
Re: [PATCH 2/3] crypto: AF_ALG - disregard AAD buffer space for output
On Thu, Nov 10, 2016 at 04:32:03AM +0100, Stephan Mueller wrote:
> The kernel crypto API AEAD cipher operation generates output such that
> space for the AAD is reserved in the output buffer without being
> touched. The processed ciphertext/plaintext is appended to the reserved
> AAD buffer.
>
> The user space interface followed that approach. However, this is a
> violation of the POSIX read definition which requires that any read data
> is placed at the beginning of the caller-provided buffer. As the kernel
> crypto API would leave room for the AAD, the old approach did not fully
> comply with the POSIX specification.

Nack. The kernel AEAD API will copy the AD as is, it definitely
does not leave the output untouched unless of course when it is
an in-place operation. The user-space operation should operate
in the same manner.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH 2/3] crypto: AF_ALG - disregard AAD buffer space for output
Stephan,

On Thu, 10 Nov 2016, Stephan Mueller wrote:

> The kernel crypto API AEAD cipher operation generates output such that
> space for the AAD is reserved in the output buffer without being
> touched. The processed ciphertext/plaintext is appended to the reserved
> AAD buffer.
>
> The user space interface followed that approach. However, this is a
> violation of the POSIX read definition which requires that any read data
> is placed at the beginning of the caller-provided buffer. As the kernel
> crypto API would leave room for the AAD, the old approach did not fully
> comply with the POSIX specification.
>
> The patch changes the user space AF_ALG AEAD interface such that the
> processed ciphertext/plaintext are now placed at the beginning of the
> user buffer provided with the read system call. That means the user space
> interface now deviates from the in-kernel output buffer handling.
>
> For the cipher operation, the AAD buffer provided during input is pointed
> to by a new SGL which is chained with the output buffer SGL. With this
> approach, only pointers to one copy of the AAD are maintained to avoid
> data duplication.
>
> With this solution, the caller must not use sendpage with the exact same
> buffers for input and output. The following rationale applies: When
> the caller sends the same buffer for input/output to the sendpage
> operation, the cipher operation now will write the ciphertext to the
> beginning of the buffer where the AAD used to be. The subsequent tag
> calculation will now use the data it finds where the AAD is expected.
> As the cipher operation has already replaced the AAD with the ciphertext,
> the tag calculation will take the ciphertext as AAD and thus calculate
> a wrong tag.

If it's not much overhead, I suggest checking for this condition and
returning an error.

Other than that, I've done a quick test of the patches using sendmsg() and
read() and found that they work as expected.
Thanks,

Mat

Reported-by: Mat Martineau
Signed-off-by: Stephan Mueller
---
 crypto/algif_aead.c | 143 +---
 1 file changed, 113 insertions(+), 30 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index c54bcb8..0212cc2 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -32,6 +32,7 @@ struct aead_sg_list {
 struct aead_async_rsgl {
 	struct af_alg_sgl sgl;
 	struct list_head list;
+	bool new_page;
 };

 struct aead_async_req {
@@ -405,6 +406,61 @@ static void aead_async_cb(struct crypto_async_request *_req, int err)
 	iocb->ki_complete(iocb, err, err);
 }

+/**
+ * scatterwalk_get_part() - get subset a scatterlist
+ *
+ * @dst: destination SGL to receive the pointers from source SGL
+ * @src: source SGL
+ * @len: data length in bytes to get from source SGL
+ * @max_sgs: number of SGs present in dst SGL to prevent overstepping boundaries
+ *
+ * @return: number of SG entries in dst
+ */
+static inline int scatterwalk_get_part(struct scatterlist *dst,
+				       struct scatterlist *src,
+				       unsigned int len, unsigned int max_sgs)
+{
+	/* leave one SG entry for chaining */
+	unsigned int j = 1;
+
+	while (len && j < max_sgs) {
+		unsigned int todo = min_t(unsigned int, len, src->length);
+
+		sg_set_page(dst, sg_page(src), todo, src->offset);
+		if (src->length >= len) {
+			sg_mark_end(dst);
+			break;
+		}
+		len -= todo;
+		j++;
+		src = sg_next(src);
+		dst = sg_next(dst);
+	}
+
+	return j;
+}
+
+static inline int aead_alloc_rsgl(struct sock *sk, struct aead_async_rsgl **ret)
+{
+	struct aead_async_rsgl *rsgl =
+		sock_kmalloc(sk, sizeof(*rsgl), GFP_KERNEL);
+
+	if (unlikely(!rsgl))
+		return -ENOMEM;
+	*ret = rsgl;
+	return 0;
+}
+
+static inline int aead_get_rsgl_areq(struct sock *sk,
+				     struct aead_async_req *areq,
+				     struct aead_async_rsgl **ret)
+{
+	if (list_empty(&areq->list)) {
+		*ret = &areq->first_rsgl;
+		return 0;
+	} else
+		return aead_alloc_rsgl(sk, ret);
+}
+
 static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
 			      int flags)
 {
@@ -433,7 +489,7 @@ static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,

 	if (!aead_sufficient_data(ctx))
 		goto unlock;

-	used = ctx->used;
+	used = ctx->used - ctx->aead_assoclen;
 	if (ctx->enc)
 		outlen = used + as;
 	else
@@ -452,7 +508,6 @@ static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
 	aead_request_set_ad(req, ctx->aead_assoclen);
 	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
Re: [PATCH] crypto: arm64/sha2: integrate OpenSSL implementations of SHA256/SHA512
On Fri, Nov 11, 2016 at 09:51:13PM +0800, Ard Biesheuvel wrote:
> This integrates both the accelerated scalar and the NEON implementations
> of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.
>
> Relative performance compared to the respective generic C versions:
>
>              | SHA256-scalar | SHA256-NEON* | SHA512 |
>  ------------+---------------+--------------+--------+
>  Cortex-A53  |     1.63x     |     1.63x    |  2.34x |
>  Cortex-A57  |     1.43x     |     1.59x    |  1.95x |
>  Cortex-A73  |     1.26x     |     1.56x    |    ?   |
>
> The core crypto code was authored by Andy Polyakov of the OpenSSL
> project, in collaboration with whom the upstream code was adapted so
> that this module can be built from the same version of sha512-armv8.pl.
>
> The version in this patch was taken from OpenSSL commit
>
>   866e505e0d66 ("sha/asm/sha512-armv8.pl: add NEON version of SHA256")
>
> * The core SHA algorithm is fundamentally sequential, but there is a
>   secondary transformation involved, called the schedule update, which
>   can be performed independently. The NEON version of SHA-224/SHA-256
>   only implements this part of the algorithm using NEON instructions,
>   the sequential part is always done using scalar instructions.
>
> Signed-off-by: Ard Biesheuvel
> ---
>
> This supersedes the SHA-256-NEON-only patch I sent out about 6 weeks ago.
>
> Will, Catalin: note that this pulls in a .pl script, and adds a build rule
> locally in arch/arm64/crypto to generate .S files on the fly from Perl
> scripts. I will leave it to you to decide whether you are ok with this as
> is, or whether you prefer .S_shipped files, in which case the Perl script
> is only included as a reference (this is how we did it for arch/arm in the
> past, but given that it adds about 3000 lines of generated code to the
> patch, I think we may want to simply keep it as below)

I think we should include the shipped files too. 3000 lines isn't that
much in the grand scheme of things, and there will be people who complain
about the unconditional perl dependency.

Will
Re: [PATCH v2 00/11] getting back -Wmaybe-uninitialized
On Friday, November 11, 2016 9:13:00 AM CET Linus Torvalds wrote:
> On Thu, Nov 10, 2016 at 8:44 AM, Arnd Bergmann wrote:
> >
> > Please merge these directly if you are happy with the result.
>
> I will take this.

Thanks a lot!

> I do see two warnings, but they both seem to be valid and recent,
> though, so I have no issues with the spurious cases.

Ok, both of them should have my fixes coming your way already.

> Warning #1:
>
>   sound/soc/qcom/lpass-platform.c: In function ‘lpass_platform_pcmops_open’:
>   sound/soc/qcom/lpass-platform.c:83:29: warning: ‘dma_ch’ may be used
>   uninitialized in this function [-Wmaybe-uninitialized]
>      drvdata->substream[dma_ch] = substream;
>      ~~~^~~
>
> and 'dma_ch' usage there really is crazy and wrong. Broken by
> 022d00ee0b55 ("ASoC: lpass-platform: Fix broken pcm data usage")

Right, the patches crossed here: the bugfix patch that introduced this came
into linux-next over the kernel summit, and the fix I sent on Tuesday made
it into Mark Brown's tree on Wednesday, but not before you pulled the alsa
tree. It should be fixed the next time you pull from the alsa tree; the
commit is 3b89e4b77ef9 ("ASoC: lpass-platform: initialize dma channel
number").

> Warning #2 is not a real bug, but it's reasonable that gcc doesn't
> know that storage_bytes (chip->read_size) has to be 2/4. Again,
> introduced recently by commit 231147ee77f3 ("iio: maxim_thermocouple:
> Align 16 bit big endian value of raw reads"), so you didn't see it.

This is the one I mentioned in the commit message as one that is fixed in
linux-next and that should make it in soon.

>   drivers/iio/temperature/maxim_thermocouple.c: In function
>   ‘maxim_thermocouple_read_raw’:
>   drivers/iio/temperature/maxim_thermocouple.c:141:5: warning: ‘ret’
>   may be used uninitialized in this function [-Wmaybe-uninitialized]
>      if (ret)
>      ^
>   drivers/iio/temperature/maxim_thermocouple.c:128:6: note: ‘ret’ was
>   declared here
>    int ret;
>    ^~~
>
> and I guess that code can just initialize 'ret' to '-EINVAL' or
> something to just make the theoretical "somehow we had a wrong
> chip->read_size" case error out cleanly.

Right, that was my conclusion too. I sent the bugfix on Oct 25 for
linux-next, but it didn't make it in until this Monday, after you pulled
the patch that introduced it on Oct 29. The commit in staging-testing is
32cb7d27e65d ("iio: maxim_thermocouple: detect invalid storage size in
read()").

Greg and Jonathan, I see now that this is part of the 'iio-for-4.10b'
branch, so I suspect you were not planning to send this before the merge
window. Could you make sure this ends up in v4.9 so we get a clean build
when -Wmaybe-uninitialized gets enabled again?

Arnd
Re: [PATCH v2 00/11] getting back -Wmaybe-uninitialized
On Thu, Nov 10, 2016 at 8:44 AM, Arnd Bergmann wrote:
>
> Please merge these directly if you are happy with the result.

I will take this.

I do see two warnings, but they both seem to be valid and recent,
though, so I have no issues with the spurious cases.

Warning #1:

  sound/soc/qcom/lpass-platform.c: In function ‘lpass_platform_pcmops_open’:
  sound/soc/qcom/lpass-platform.c:83:29: warning: ‘dma_ch’ may be used
  uninitialized in this function [-Wmaybe-uninitialized]
     drvdata->substream[dma_ch] = substream;
     ~~~^~~

and 'dma_ch' usage there really is crazy and wrong. Broken by
022d00ee0b55 ("ASoC: lpass-platform: Fix broken pcm data usage")

Warning #2 is not a real bug, but it's reasonable that gcc doesn't
know that storage_bytes (chip->read_size) has to be 2/4. Again,
introduced recently by commit 231147ee77f3 ("iio: maxim_thermocouple:
Align 16 bit big endian value of raw reads"), so you didn't see it.

  drivers/iio/temperature/maxim_thermocouple.c: In function
  ‘maxim_thermocouple_read_raw’:
  drivers/iio/temperature/maxim_thermocouple.c:141:5: warning: ‘ret’
  may be used uninitialized in this function [-Wmaybe-uninitialized]
     if (ret)
     ^
  drivers/iio/temperature/maxim_thermocouple.c:128:6: note: ‘ret’ was
  declared here
    int ret;
    ^~~

and I guess that code can just initialize 'ret' to '-EINVAL' or
something to just make the theoretical "somehow we had a wrong
chip->read_size" case error out cleanly.

Linus
[PATCH -next] hwrng: atmel - use clk_disable_unprepare instead of clk_disable
From: Wei Yongjun

Since clk_prepare_enable() is used to get trng->clk, we should use
clk_disable_unprepare() to release it for the error path.

Signed-off-by: Wei Yongjun
---
 drivers/char/hw_random/atmel-rng.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/hw_random/atmel-rng.c b/drivers/char/hw_random/atmel-rng.c
index ae7cae5..661c82c 100644
--- a/drivers/char/hw_random/atmel-rng.c
+++ b/drivers/char/hw_random/atmel-rng.c
@@ -94,7 +94,7 @@ static int atmel_trng_probe(struct platform_device *pdev)
 	return 0;

 err_register:
-	clk_disable(trng->clk);
+	clk_disable_unprepare(trng->clk);
 	return ret;
 }
Re: [PATCH v3] crypto: only call put_page on referenced and used pages
On Tuesday, 13 September 2016 at 13:27:34 CET, Stephan Mueller wrote:

Hi Herbert,

> On Tuesday, 13 September 2016 at 18:08:16 CEST, Herbert Xu wrote:
>
> > This patch appears to be papering over a real bug.
> >
> > The async path should be exactly the same as the sync path, except
> > that we don't wait for completion. So the question is why are we
> > getting this crash here for async but not sync?
>
> At least one reason is found in skcipher_recvmsg_async with the following
> code path:
>
> 	if (txbufs == tx_nents) {
> 		struct scatterlist *tmp;
> 		int x;
>
> 		/* Ran out of tx slots in async request
> 		 * need to expand */
> 		tmp = kcalloc(tx_nents * 2, sizeof(*tmp),
> 			      GFP_KERNEL);
> 		if (!tmp)
> 			goto free;
>
> 		sg_init_table(tmp, tx_nents * 2);
> 		for (x = 0; x < tx_nents; x++)
> 			sg_set_page(&tmp[x], sg_page(&sreq->tsg[x]),
> 				    sreq->tsg[x].length,
> 				    sreq->tsg[x].offset);
> 		kfree(sreq->tsg);
> 		sreq->tsg = tmp;
> 		tx_nents *= 2;
> 		mark = true;
> 	}
>
> ==> the code allocates twice the amount of the previously existing memory,
> copies the existing SGs over, but does not set the remaining SGs to
> anything. If the caller provides fewer pages than the number of allocated
> SGs, some SGs are unset. Hence, the deallocation must not do anything with
> the yet uninitialized SGs.

I looked into the issue a bit deeper. In addition to the aforementioned
code, the following code seems to be a second culprit:

	tx_nents = skcipher_all_sg_nents(ctx);
	sreq->tsg = kcalloc(tx_nents, sizeof(*sg), GFP_KERNEL);
	if (unlikely(!sreq->tsg))
		goto unlock;
	sg_init_table(sreq->tsg, tx_nents);

Here again, an SGL is initialized, but there are no pages mapped to the
SGs.

May I ask you to reconsider this patch as well as the patch "[PATCH]
crypto: call put_page on used pages only" from September 10, since the
current code of libkcapi can easily trigger these bugs and lead to a
kernel crash.

If you consider the patches papering over the heart of the problem, may I
ask for suggestions on how the mentioned code should be changed such that
the issues are removed? If the suggestion is to re-architect the memory
handling in the async part, may I ask to at least apply the patches for
now, with the goal to have time for re-architecting the async code and yet
have no open holes that lead to crashes?

Thanks.

Ciao
Stephan
[PATCH] crypto: arm64/sha2: integrate OpenSSL implementations of SHA256/SHA512
This integrates both the accelerated scalar and the NEON implementations
of SHA-224/256 as well as SHA-384/512 from the OpenSSL project.

Relative performance compared to the respective generic C versions:

             | SHA256-scalar | SHA256-NEON* | SHA512 |
 ------------+---------------+--------------+--------+
 Cortex-A53  |     1.63x     |     1.63x    |  2.34x |
 Cortex-A57  |     1.43x     |     1.59x    |  1.95x |
 Cortex-A73  |     1.26x     |     1.56x    |    ?   |

The core crypto code was authored by Andy Polyakov of the OpenSSL
project, in collaboration with whom the upstream code was adapted so
that this module can be built from the same version of sha512-armv8.pl.

The version in this patch was taken from OpenSSL commit

  866e505e0d66 ("sha/asm/sha512-armv8.pl: add NEON version of SHA256")

* The core SHA algorithm is fundamentally sequential, but there is a
  secondary transformation involved, called the schedule update, which
  can be performed independently. The NEON version of SHA-224/SHA-256
  only implements this part of the algorithm using NEON instructions,
  the sequential part is always done using scalar instructions.

Signed-off-by: Ard Biesheuvel
---

This supersedes the SHA-256-NEON-only patch I sent out about 6 weeks ago.

Will, Catalin: note that this pulls in a .pl script, and adds a build rule
locally in arch/arm64/crypto to generate .S files on the fly from Perl
scripts. I will leave it to you to decide whether you are ok with this as
is, or whether you prefer .S_shipped files, in which case the Perl script
is only included as a reference (this is how we did it for arch/arm in the
past, but given that it adds about 3000 lines of generated code to the
patch, I think we may want to simply keep it as below)

 arch/arm64/crypto/Kconfig         |   8 +
 arch/arm64/crypto/Makefile        |  15 +
 arch/arm64/crypto/sha256-glue.c   | 185 +
 arch/arm64/crypto/sha512-armv8.pl | 778
 arch/arm64/crypto/sha512-glue.c   |  94 +++
 5 files changed, 1080 insertions(+)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2cf32e9887e1..5f4a617e2957 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -8,6 +8,14 @@ menuconfig ARM64_CRYPTO

 if ARM64_CRYPTO

+config CRYPTO_SHA256_ARM64
+	tristate "SHA-224/SHA-256 digest algorithm for arm64"
+	select CRYPTO_HASH
+
+config CRYPTO_SHA512_ARM64
+	tristate "SHA-384/SHA-512 digest algorithm for arm64"
+	select CRYPTO_HASH
+
 config CRYPTO_SHA1_ARM64_CE
 	tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
 	depends on ARM64 && KERNEL_MODE_NEON

diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index abb79b3cfcfe..861589faf6ef 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -29,6 +29,12 @@ aes-ce-blk-y := aes-glue-ce.o aes-ce.o
 obj-$(CONFIG_CRYPTO_AES_ARM64_NEON_BLK) += aes-neon-blk.o
 aes-neon-blk-y := aes-glue-neon.o aes-neon.o

+obj-$(CONFIG_CRYPTO_SHA256_ARM64) += sha256-arm64.o
+sha256-arm64-y := sha256-glue.o sha256-core.o
+
+obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o
+sha512-arm64-y := sha512-glue.o sha512-core.o
+
 AFLAGS_aes-ce.o   := -DINTERLEAVE=4
 AFLAGS_aes-neon.o := -DINTERLEAVE=4

@@ -40,3 +46,12 @@ CFLAGS_crc32-arm64.o := -mcpu=generic+crc

 $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
 	$(call if_changed_rule,cc_o_c)
+
+quiet_cmd_perl = PERLASM $@
+      cmd_perl = $(PERL) $(<) void $(@)
+
+$(obj)/sha256-core.S: $(src)/sha512-armv8.pl
+	$(call cmd,perl)
+
+$(obj)/sha512-core.S: $(src)/sha512-armv8.pl
+	$(call cmd,perl)

diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
new file mode 100644
index ..a2226f841960
--- /dev/null
+++ b/arch/arm64/crypto/sha256-glue.c
@@ -0,0 +1,185 @@
+/*
+ * Linux/arm64 port of the OpenSSL SHA256 implementation for AArch64
+ *
+ * Copyright (c) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash for arm64");
+MODULE_AUTHOR("Andy Polyakov ");
+MODULE_AUTHOR("Ard Biesheuvel ");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("sha224");
+MODULE_ALIAS_CRYPTO("sha256");
+
+asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
+					unsigned int num_blks);
+
+asmlinkage void sha256_block_neon(u32 *digest, const void *data,
+				  unsigned int num_blks);
+
+static int sha256_update(struct shash_desc *desc, const u8 *data,
+			 unsigned int len)
+{
+
Re: algif_aead: AIO broken with more than one iocb
On Tuesday, 13 September 2016 at 18:12:46 CET, Herbert Xu wrote:

Hi Herbert,

> On Sun, Sep 11, 2016 at 04:59:19AM +0200, Stephan Mueller wrote:
> > Hi Herbert,
> >
> > The AIO support for algif_aead is broken when submitting more than one
> > iocb. The break happens in aead_recvmsg_async at the following code:
> >
> > 	/* ensure output buffer is sufficiently large */
> > 	if (usedpages < outlen)
> > 		goto free;
> >
> > The reason is that when submitting, say, two iocb, ctx->used contains the
> > buffer length for two AEAD operations (as expected). However, the recvmsg
> > code
>
> I don't think we should allow that. We should make it so that you
> must start a recvmsg before you can send data for a new request.
>
> Remember that the async path should be identical to the sync path,
> except that you don't wait for completion.

Just as a followup: with the patch submitted the other day to cover the AAD
and tag handling, algif_aead now also supports multiple iocb.

Ciao
Stephan
[PATCH] crypto: nx - drop duplicate header types.h
Drop duplicate header types.h from nx.c.

Signed-off-by: Geliang Tang
---
 drivers/crypto/nx/nx.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c
index 42f0f22..036057a 100644
--- a/drivers/crypto/nx/nx.c
+++ b/drivers/crypto/nx/nx.c
@@ -32,7 +32,6 @@
 #include
 #include
 #include
-#include <linux/types.h>
 #include
 #include
-- 
2.9.3
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] crypto: jitterentropy - drop duplicate header module.h
Drop duplicate header module.h from jitterentropy-kcapi.c.

Signed-off-by: Geliang Tang
---
 crypto/jitterentropy-kcapi.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/jitterentropy-kcapi.c b/crypto/jitterentropy-kcapi.c
index c4938497..787dccc 100644
--- a/crypto/jitterentropy-kcapi.c
+++ b/crypto/jitterentropy-kcapi.c
@@ -39,7 +39,6 @@
 #include
 #include
-#include <linux/module.h>
 #include
 #include
 #include
-- 
2.9.3
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 1/16] crypto: skcipher - Add skcipher walk interface
On Wed, Nov 02, 2016 at 01:54:20PM -0700, Eric Biggers wrote:
>
> I think the case where skcipher_copy_iv() fails may be handled incorrectly.
> Wouldn't it need to set walk.nbytes to 0 so as to not confuse callers which
> expect that behavior? Or maybe it should be calling skcipher_walk_done().

Good catch. I'll fix and repost.

> Setting walk->in.sg and walk->out.sg is redundant with the
> scatterwalk_start() calls.

Will remove.

> This gets called with uninitialized 'walk.flags'. This was somewhat of a
> theoretical problem with the old blkcipher_walk code but it looks like now
> it will interact badly with the new SKCIPHER_WALK_SLEEP flag. As far as I
> can see, whether the flag will end up set or not can depend on the
> uninitialized value. It would be nice if this problem could be avoided
> entirely by setting flags=0.

Right. I'll fix this as well.

> I'm also wondering about the choice to not look at 'atomic' until after
> the call to skcipher_walk_skcipher(). Wouldn't this mean that the choice
> of 'atomic' would not be respected in e.g. the kmalloc() in
> skcipher_copy_iv()?

The atomic flag is meant to be used in cases such as aesni where you need
to do kernel_fpu_begin after the call to start the walk. IOW sleeping is
fine at the start but not on subsequent walk calls.

> I don't see any users of the "async" walking being introduced; are some
> planned?

skcipher_walk is meant to unite blkcipher_walk and ablkcipher_walk. The
latter will use the async case.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt