Re: [PATCH] lib/mpi: call cond_resched() from mpi_powm() loop
On Fri, Nov 10, 2017 at 10:37:30PM +1100, Herbert Xu wrote:
> On Mon, Nov 06, 2017 at 10:19:51PM -0800, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > On a non-preemptible kernel, if KEYCTL_DH_COMPUTE is called with the
> > largest permitted inputs (16384 bits), the kernel spends 10+ seconds
> > doing modular exponentiation in mpi_powm() without rescheduling.  If all
> > threads do it, it locks up the system.  Moreover, it can cause
> > rcu_sched-stall warnings.
> >
> > Notwithstanding the insanity of doing this calculation in kernel mode
> > rather than in userspace, fix it by calling cond_resched() as each bit
> > from the exponent is processed.  It's still noninterruptible, but at
> > least it's preemptible now.
> >
> > Cc: sta...@vger.kernel.org # v4.12+
> > Signed-off-by: Eric Biggers
>
> Patch applied.  Thanks.

If it's not too late, can you fix the stable line to be just

	Cc: sta...@vger.kernel.org

As Mat pointed out, KEYCTL_DH_COMPUTE was actually introduced in v4.7.

Also I think the code is also reachable through RSA by adding an x509
certificate using the "asymmetric" key type, although that appears to be
limited to 4096-bit inputs rather than 16384 bits.

Eric
Re: [PATCH v3] crypto: AF_ALG - remove locking in async callback
2017-11-10 13:20 GMT+01:00 Stephan Müller:
> The code paths protected by the socket-lock do not use or modify the
> socket in a non-atomic fashion. The actions pertaining the socket do not
> even need to be handled as an atomic operation. Thus, the socket-lock
> can be safely ignored.
>
> This fixes a bug regarding scheduling in atomic as the callback function
> may be invoked in interrupt context.
>
> In addition, the sock_hold is moved before the AIO encrypt/decrypt
> operation to ensure that the socket is always present. This avoids a
> tiny race window where the socket is unprotected and yet used by the AIO
> operation.
>
> Finally, the release of resources for a crypto operation is moved into a
> common function of af_alg_free_resources.
>
> Fixes: e870456d8e7c8 ("crypto: algif_skcipher - overhaul memory management")
> Fixes: d887c52d6ae43 ("crypto: algif_aead - overhaul memory management")
> Reported-by: Romain Izard
> Signed-off-by: Stephan Mueller

Tested-by: Romain Izard

On 4.14-rc8, with some fuzzing when applying the patch. The deterministic
crash is not reproduced.
> [rest of quoted patch trimmed]
Re: [PATCH RESEND 1/4] crypto: caam: add caam-dma node to SEC4.0 device tree binding
On Fri, 10 Nov 2017 08:02:01 +
Radu Andrei Alexe wrote:

> On 11/9/2017 6:34 PM, Kim Phillips wrote:
> > On Thu, 9 Nov 2017 11:54:13 +
> > Radu Andrei Alexe wrote:
> >> The next patch version will create the platform device dynamically at
> >> run time.
> >
> > Why create a new device when that h/w already has one?
> >
> > Why doesn't the existing crypto driver register dma capabilities with
> > the dma driver subsystem?
>
> I can think of two reasons:
>
> 1. The code that this driver introduces has nothing to do with crypto
> and everything to do with dma.

I would think that at least a crypto "null" algorithm implementation
would share code.

> Placing the code in the same directory as
> the caam subsystem would only create confusion for the reader of an
> already complex driver.

this different directory argument seems to be identical to your 2 below:

> 2. I wanted this driver to be tracked by the dma engine team. They have
> the right expertise to provide adequate feedback. If all the code was in
> the crypto directory they wouldn't know about this driver or any
> subsequent changes to it.

dma subsystem bits could still be put in the dma area if deemed
necessary, but I don't think it is: I see drivers/crypto/ccp/ccp-dmaengine.c
calls dma_async_device_register, for example. I also don't see how that
complicates things much further.

What is the rationale for using the crypto h/w as a dma engine anyway?
Are there supporting performance figures?

Kim
[PATCH 2/2] crypto: caam - add Derived Key Protocol (DKP) support
Offload split key generation in CAAM engine, using DKP.
DKP is supported starting with Era 6.

Note that the way assoclen is transmitted from the job descriptor to the
shared descriptor changes - DPOVRD register is used instead of MATH3
(where available), since DKP protocol thrashes the MATH registers.

Signed-off-by: Horia Geantă
---
 drivers/crypto/caam/caamalg.c      |  52 +--
 drivers/crypto/caam/caamalg_desc.c | 176 ++---
 drivers/crypto/caam/caamalg_desc.h |  10 +--
 drivers/crypto/caam/caamalg_qi.c   |  31 ++-
 drivers/crypto/caam/caamhash.c     |  56 
 drivers/crypto/caam/desc.h         |  29 ++
 drivers/crypto/caam/desc_constr.h  |  41 +
 drivers/crypto/caam/key_gen.c      |  30 ---
 drivers/crypto/caam/key_gen.h      |  30 +++
 9 files changed, 323 insertions(+), 132 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index baa8dd52472d..700dc09b80da 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -118,6 +118,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - AEAD_DESC_JOB_IO_LEN -
 			ctx->adata.keylen_pad;
@@ -136,7 +137,8 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
 
 	/* aead_encrypt shared descriptor */
 	desc = ctx->sh_desc_enc;
-	cnstr_shdsc_aead_null_encap(desc, &ctx->adata, ctx->authsize);
+	cnstr_shdsc_aead_null_encap(desc, &ctx->adata, ctx->authsize,
+				    ctrlpriv->era);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
@@ -154,7 +156,8 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
 
 	/* aead_decrypt shared descriptor */
 	desc = ctx->sh_desc_dec;
-	cnstr_shdsc_aead_null_decap(desc, &ctx->adata, ctx->authsize);
+	cnstr_shdsc_aead_null_decap(desc, &ctx->adata, ctx->authsize,
+				    ctrlpriv->era);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
@@ -168,6 +171,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	u32 ctx1_iv_off = 0;
 	u32 *desc, *nonce = NULL;
 	u32 inl_mask;
@@ -234,7 +238,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	desc = ctx->sh_desc_enc;
 	cnstr_shdsc_aead_encap(desc, &ctx->cdata, &ctx->adata, ivsize,
 			       ctx->authsize, is_rfc3686, nonce, ctx1_iv_off,
-			       false);
+			       false, ctrlpriv->era);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
@@ -266,7 +270,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	desc = ctx->sh_desc_dec;
 	cnstr_shdsc_aead_decap(desc, &ctx->cdata, &ctx->adata, ivsize,
 			       ctx->authsize, alg->caam.geniv, is_rfc3686,
-			       nonce, ctx1_iv_off, false);
+			       nonce, ctx1_iv_off, false, ctrlpriv->era);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
@@ -300,7 +304,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	desc = ctx->sh_desc_enc;
 	cnstr_shdsc_aead_givencap(desc, &ctx->cdata, &ctx->adata, ivsize,
 				  ctx->authsize, is_rfc3686, nonce,
-				  ctx1_iv_off, false);
+				  ctx1_iv_off, false, ctrlpriv->era);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
@@ -503,6 +507,7 @@ static int aead_setkey(struct crypto_aead *aead,
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	struct crypto_authenc_keys keys;
 	int ret = 0;
@@ -517,6 +522,27 @@ static int aead_setkey(struct crypto_aead *aead,
 		       DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1);
 #endif
 
+	/*
+	 * If DKP is supported, use it in the shared descriptor to generate
+	 * the split key.
+	 */
+	if (ctrlpriv->era >= 6) {
+		ctx->adata.keylen = keys.authkeylen
[PATCH 1/2] crypto: caam - save Era in driver's private data
Save Era in driver's private data for further usage, like deciding
whether an erratum applies or a feature is available based on its value.

Signed-off-by: Horia Geantă
---
 drivers/crypto/caam/ctrl.c   | 4 +++-
 drivers/crypto/caam/intern.h | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 027e121c6f70..75d280cb2dc0 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -611,6 +611,8 @@ static int caam_probe(struct platform_device *pdev)
 		goto iounmap_ctrl;
 	}
 
+	ctrlpriv->era = caam_get_era();
+
 	ret = of_platform_populate(nprop, caam_match, NULL, dev);
 	if (ret) {
 		dev_err(dev, "JR platform devices creation error\n");
@@ -742,7 +744,7 @@ static int caam_probe(struct platform_device *pdev)
 
 	/* Report "alive" for developer to see */
 	dev_info(dev, "device ID = 0x%016llx (Era %d)\n", caam_id,
-		 caam_get_era());
+		 ctrlpriv->era);
 	dev_info(dev, "job rings = %d, qi = %d, dpaa2 = %s\n",
 		 ctrlpriv->total_jobrs, ctrlpriv->qi_present,
 		 caam_dpaa2 ? "yes" : "no");
diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index a52361258d3a..55aab74e7b5c 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -83,6 +83,7 @@ struct caam_drv_private {
 	u8 qi_present;		/* Nonzero if QI present in device */
 	int secvio_irq;		/* Security violation interrupt number */
 	int virt_en;		/* Virtualization enabled in CAAM */
+	int era;		/* CAAM Era (internal HW revision) */
 #define	RNG4_MAX_HANDLES 2
 	/* RNG4 block */
-- 
2.12.0.264.gd6db3f216544
[PATCH v3] crypto: AF_ALG - remove locking in async callback
The code paths protected by the socket-lock do not use or modify the
socket in a non-atomic fashion. The actions pertaining the socket do not
even need to be handled as an atomic operation. Thus, the socket-lock
can be safely ignored.

This fixes a bug regarding scheduling in atomic as the callback function
may be invoked in interrupt context.

In addition, the sock_hold is moved before the AIO encrypt/decrypt
operation to ensure that the socket is always present. This avoids a
tiny race window where the socket is unprotected and yet used by the AIO
operation.

Finally, the release of resources for a crypto operation is moved into a
common function of af_alg_free_resources.

Fixes: e870456d8e7c8 ("crypto: algif_skcipher - overhaul memory management")
Fixes: d887c52d6ae43 ("crypto: algif_aead - overhaul memory management")
Reported-by: Romain Izard
Signed-off-by: Stephan Mueller
---
 crypto/af_alg.c         | 21 ++---
 crypto/algif_aead.c     | 23 ---
 crypto/algif_skcipher.c | 23 ---
 include/crypto/if_alg.h |  1 +
 4 files changed, 39 insertions(+), 29 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 85cea9de324a..358749c38894 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -1021,6 +1021,18 @@ ssize_t af_alg_sendpage(struct socket *sock, struct page *page,
 EXPORT_SYMBOL_GPL(af_alg_sendpage);
 
 /**
+ * af_alg_free_resources - release resources required for crypto request
+ */
+void af_alg_free_resources(struct af_alg_async_req *areq)
+{
+	struct sock *sk = areq->sk;
+
+	af_alg_free_areq_sgls(areq);
+	sock_kfree_s(sk, areq, areq->areqlen);
+}
+EXPORT_SYMBOL_GPL(af_alg_free_resources);
+
+/**
  * af_alg_async_cb - AIO callback handler
  *
  * This handler cleans up the struct af_alg_async_req upon completion of the
@@ -1036,18 +1048,13 @@ void af_alg_async_cb(struct crypto_async_request *_req, int err)
 	struct kiocb *iocb = areq->iocb;
 	unsigned int resultlen;
 
-	lock_sock(sk);
-
 	/* Buffer size written by crypto operation. */
 	resultlen = areq->outlen;
 
-	af_alg_free_areq_sgls(areq);
-	sock_kfree_s(sk, areq, areq->areqlen);
-	__sock_put(sk);
+	af_alg_free_resources(areq);
+	sock_put(sk);
 
 	iocb->ki_complete(iocb, err ? err : resultlen, 0);
-
-	release_sock(sk);
 }
 EXPORT_SYMBOL_GPL(af_alg_async_cb);
 
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index aacae0837aff..db9378a45296 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -268,12 +268,23 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
+		sock_hold(sk);
 		areq->iocb = msg->msg_iocb;
 		aead_request_set_callback(&areq->cra_u.aead_req,
 					  CRYPTO_TFM_REQ_MAY_BACKLOG,
 					  af_alg_async_cb, areq);
 		err = ctx->enc ? crypto_aead_encrypt(&areq->cra_u.aead_req) :
 				 crypto_aead_decrypt(&areq->cra_u.aead_req);
+
+		/* AIO operation in progress */
+		if (err == -EINPROGRESS || err == -EBUSY) {
+			/* Remember output size that will be generated. */
+			areq->outlen = outlen;
+
+			return -EIOCBQUEUED;
+		}
+
+		sock_put(sk);
 	} else {
 		/* Synchronous operation */
 		aead_request_set_callback(&areq->cra_u.aead_req,
@@ -285,19 +296,9 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 					      &ctx->wait);
 	}
 
-	/* AIO operation in progress */
-	if (err == -EINPROGRESS) {
-		sock_hold(sk);
-
-		/* Remember output size that will be generated. */
-		areq->outlen = outlen;
-
-		return -EIOCBQUEUED;
-	}
 
 free:
-	af_alg_free_areq_sgls(areq);
-	sock_kfree_s(sk, areq, areq->areqlen);
+	af_alg_free_resources(areq);
 
 	return err ? err : outlen;
 }
diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9954b078f0b9..30cff827dd8f 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -117,6 +117,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
+		sock_hold(sk);
 		areq->iocb = msg->msg_iocb;
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
 					      CRYPTO_TFM_REQ_MAY_SLEEP,
@@ -124,6 +125,16 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		err = ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req)
Re: [PATCH 0/2] hwrng: iproc-rng200: Add support for BCM7278
On Wed, Nov 01, 2017 at 04:20:04PM -0700, Florian Fainelli wrote:
> Hi,
>
> This patch series adds support for the RNG200 block found on the BCM7278 SoC.
> This requires us to update the compatible string (and associated binding
> document) as well as the Kconfig option to make that driver selectable with
> ARCH_BRCMSTB gating the enabling of such SoCs.
>
> Thank you
>
> Florian Fainelli (2):
>   dt-bindings: rng: Document BCM7278 RNG200 compatible
>   hwrng: iproc-rng200: Add support for BCM7278

All applied.  Thanks.
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH] lib/mpi: call cond_resched() from mpi_powm() loop
On Mon, Nov 06, 2017 at 10:19:51PM -0800, Eric Biggers wrote:
> From: Eric Biggers
>
> On a non-preemptible kernel, if KEYCTL_DH_COMPUTE is called with the
> largest permitted inputs (16384 bits), the kernel spends 10+ seconds
> doing modular exponentiation in mpi_powm() without rescheduling.  If all
> threads do it, it locks up the system.  Moreover, it can cause
> rcu_sched-stall warnings.
>
> Notwithstanding the insanity of doing this calculation in kernel mode
> rather than in userspace, fix it by calling cond_resched() as each bit
> from the exponent is processed.  It's still noninterruptible, but at
> least it's preemptible now.
>
> Cc: sta...@vger.kernel.org # v4.12+
> Signed-off-by: Eric Biggers

Patch applied.  Thanks.
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH v2 1/1] crypto: stm32/hash - Fix return issue on update
On Mon, Nov 06, 2017 at 11:41:52PM +0100, lionel.debi...@st.com wrote:
> From: Lionel Debieve
>
> When data append reached the threshold for processing,
> we must inform that processing is on going to wait before
> managing the next request.
>
> Signed-off-by: Lionel Debieve

Patch applied.  Thanks.
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH v2 0/5] crypto: dh - input validation fixes
On Sun, Nov 05, 2017 at 06:30:43PM -0800, Eric Biggers wrote:
> This series fixes several corner cases in the Diffie-Hellman key
> exchange implementations:
>
> 1. With the software DH implementation, using a large buffer for 'g'
>    caused a double free.
> 2. With CONFIG_DEBUG_SG=y and the software DH implementation, setting 'p'
>    to 0 caused a BUG_ON().
> 3. With the QAT DH implementation, setting 'key' or 'g' larger than 'p'
>    caused a buffer underflow.
>
> Note that in kernels configured with CONFIG_KEY_DH_OPERATIONS=y, these
> bugs are reachable by unprivileged users via KEYCTL_DH_COMPUTE.
>
> Patches 4 and 5 are cleanup only.
>
> Eric Biggers (5):
>   crypto: dh - Fix double free of ctx->p
>   crypto: dh - Don't permit 'p' to be 0
>   crypto: dh - Don't permit 'key' or 'g' size longer than 'p'
>   crypto: qat - Clean up error handling in qat_dh_set_secret()
>   crypto: dh - Remove pointless checks for NULL 'p' and 'g'

All applied.  Thanks.
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH 1/2] crypto: tcrypt - fix S/G table for test_aead_speed()
On Fri, Nov 10, 2017 at 09:17:33AM +, Horia Geantă wrote:
>
> > I must be missing something. In the case rem == 0, let's say
> > the original value of np is npo. Then at the start of the loop,
> > np = npo - 1, and at the last iteration, k = npo - 2, so we do
>
> IIUC at the start of the loop np = npo (and not npo - 1), since np is no
> longer decremented in the rem == 0 case:
>
> -	np--;
> +	if (rem)
> +		np--;
>
> > 	sg_set_buf(&sg[npo - 1], xbuf[npo - 2], PAGE_SIZE);
>
> and accordingly last iteration is for k = npo - 1:
> 	sg_set_buf(&sg[npo], xbuf[npo - 1], PAGE_SIZE);
>
> > While the sg_init_table call sets the end-of-table at
> >
> > 	sg_init_table(sg, npo + 1);
>
> while this marks sg[npo] as last SG table entry.

OK, we're both sort of right.  You're correct that this generates a valid
SG list in that the number of entries match the end-of-table marking.

But the thing that prompted to check this patch in the first place is the
semantics behind it.  For the case rem == 0, it means that buflen is a
multiple of PAGE_SIZE.  In that case, the code with your patch will
create an SG list that's one page longer than buflen.

AFAICS it should be harmless but it would still be better if we can
generate an SG list that's exactly buflen long rather than one page
longer.

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Re: [PATCH v2] crypto: AF_ALG - remove locking in async callback
On Tue, Nov 07, 2017 at 10:05:48AM +0100, Stephan Müller wrote:
>
> 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
> 		/* AIO operation */
> +		sock_hold(sk);
> 		areq->iocb = msg->msg_iocb;
> 		aead_request_set_callback(&areq->cra_u.aead_req,
> 					  CRYPTO_TFM_REQ_MAY_BACKLOG,
> 					  af_alg_async_cb, areq);
> 		err = ctx->enc ? crypto_aead_encrypt(&areq->cra_u.aead_req) :
> 				 crypto_aead_decrypt(&areq->cra_u.aead_req);
> +
> +		/* AIO operation in progress */
> +		if (err == -EINPROGRESS) {
> +			/* Remember output size that will be generated. */
> +			areq->outlen = outlen;
> +
> +			return -EIOCBQUEUED;
> +		}

Since we're allowing backlogs, what about EBUSY?

Cheers,
-- 
Email: Herbert Xu
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[PATCH v3] crypto: algif_aead - skip SGL entries with NULL page
Hi Herbert,

I missed the termination of the outer loop of list_for_each_entry_safe.
The patch was tested on x86 64 and 32 bit environments.

---8<---

The TX SGL may contain SGL entries that are assigned a NULL page. This
may happen if a multi-stage AIO operation is performed where the data
for each stage is pointed to by one SGL entry. Upon completion of that
stage, af_alg_pull_tsgl will assign NULL to the SGL entry.

The NULL cipher used to copy the AAD from TX SGL to the destination
buffer, however, cannot handle the case where the SGL starts with an SGL
entry having a NULL page. Thus, the code needs to advance the start
pointer into the SGL to the first non-NULL entry.

This fixes a crash visible on Intel x86 32 bit using the libkcapi test
suite.

Fixes: 72548b093ee38 ("crypto: algif_aead - copy AAD from src to dst")
Signed-off-by: Stephan Mueller
---
 crypto/algif_aead.c | 33 -
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 6cdd4fb08335..7822e2fecb0b 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -101,10 +101,10 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct aead_tfm *aeadc = pask->private;
 	struct crypto_aead *tfm = aeadc->aead;
 	struct crypto_skcipher *null_tfm = aeadc->null_tfm;
-	unsigned int as = crypto_aead_authsize(tfm);
+	unsigned int i, as = crypto_aead_authsize(tfm);
 	struct af_alg_async_req *areq;
-	struct af_alg_tsgl *tsgl;
-	struct scatterlist *src;
+	struct af_alg_tsgl *tsgl, *tmp;
+	struct scatterlist *rsgl_src, *tsgl_src = NULL;
 	int err = 0;
 	size_t used = 0;		/* [in]  TX bufs to be en/decrypted */
 	size_t outlen = 0;		/* [out] RX bufs produced by kernel */
@@ -178,7 +178,22 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	}
 
 	processed = used + ctx->aead_assoclen;
-	tsgl = list_first_entry(&ctx->tsgl_list, struct af_alg_tsgl, list);
+	list_for_each_entry_safe(tsgl, tmp, &ctx->tsgl_list, list) {
+		for (i = 0; i < tsgl->cur; i++) {
+			struct scatterlist *process_sg = tsgl->sg + i;
+
+			if (!(process_sg->length) || !sg_page(process_sg))
+				continue;
+			tsgl_src = process_sg;
+			break;
+		}
+		if (tsgl_src)
+			break;
+	}
+	if (processed && !tsgl_src) {
+		err = -EFAULT;
+		goto free;
+	}
 
 	/*
 	 * Copy of AAD from source to destination
@@ -194,7 +209,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	 */
 
 	/* Use the RX SGL as source (and destination) for crypto op. */
-	src = areq->first_rsgl.sgl.sg;
+	rsgl_src = areq->first_rsgl.sgl.sg;
 
 	if (ctx->enc) {
 		/*
@@ -207,7 +222,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		 *	    v	   v
 		 * RX SGL: AAD || PT || Tag
 		 */
-		err = crypto_aead_copy_sgl(null_tfm, tsgl->sg,
+		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
 					   areq->first_rsgl.sgl.sg, processed);
 		if (err)
 			goto free;
@@ -225,7 +240,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		 */
 
 		/* Copy AAD || CT to RX SGL buffer for in-place operation. */
-		err = crypto_aead_copy_sgl(null_tfm, tsgl->sg,
+		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
 					   areq->first_rsgl.sgl.sg, outlen);
 		if (err)
 			goto free;
@@ -257,11 +272,11 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 					 areq->tsgl);
 	} else
 		/* no RX SGL present (e.g. authentication only) */
-		src = areq->tsgl;
+		rsgl_src = areq->tsgl;
 
 	/* Initialize the crypto operation */
-	aead_request_set_crypt(&areq->cra_u.aead_req, src,
+	aead_request_set_crypt(&areq->cra_u.aead_req, rsgl_src,
 			       areq->first_rsgl.sgl.sg, used, ctx->iv);
 	aead_request_set_ad(&areq->cra_u.aead_req, ctx->aead_assoclen);
 	aead_request_set_tfm(&areq->cra_u.aead_req, tfm);
-- 
2.13.6
[PATCH v2] crypto: algif_aead - skip SGL entries with NULL page
The TX SGL may contain SGL entries that are assigned a NULL page. This
may happen if a multi-stage AIO operation is performed where the data
for each stage is pointed to by one SGL entry. Upon completion of that
stage, af_alg_pull_tsgl will assign NULL to the SGL entry.

The NULL cipher used to copy the AAD from TX SGL to the destination
buffer, however, cannot handle the case where the SGL starts with an SGL
entry having a NULL page. Thus, the code needs to advance the start
pointer into the SGL to the first non-NULL entry.

This fixes a crash visible on Intel x86 32 bit using the libkcapi test
suite.

Fixes: 72548b093ee38 ("crypto: algif_aead - copy AAD from src to dst")
Signed-off-by: Stephan Mueller
---
 crypto/algif_aead.c | 31 ++-
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 6cdd4fb08335..bea39102184d 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -101,10 +101,10 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct aead_tfm *aeadc = pask->private;
 	struct crypto_aead *tfm = aeadc->aead;
 	struct crypto_skcipher *null_tfm = aeadc->null_tfm;
-	unsigned int as = crypto_aead_authsize(tfm);
+	unsigned int i, as = crypto_aead_authsize(tfm);
 	struct af_alg_async_req *areq;
-	struct af_alg_tsgl *tsgl;
-	struct scatterlist *src;
+	struct af_alg_tsgl *tsgl, *tmp;
+	struct scatterlist *rsgl_src, *tsgl_src = NULL;
 	int err = 0;
 	size_t used = 0;		/* [in]  TX bufs to be en/decrypted */
 	size_t outlen = 0;		/* [out] RX bufs produced by kernel */
@@ -178,7 +178,20 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	}
 
 	processed = used + ctx->aead_assoclen;
-	tsgl = list_first_entry(&ctx->tsgl_list, struct af_alg_tsgl, list);
+	list_for_each_entry_safe(tsgl, tmp, &ctx->tsgl_list, list) {
+		for (i = 0; i < tsgl->cur; i++) {
+			struct scatterlist *process_sg = tsgl->sg + i;
+
+			if (!(process_sg->length) || !sg_page(process_sg))
+				continue;
+			tsgl_src = process_sg;
+			break;
+		}
+	}
+	if (processed && !tsgl_src) {
+		err = -EFAULT;
+		goto free;
+	}
 
 	/*
 	 * Copy of AAD from source to destination
@@ -194,7 +207,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 	 */
 
 	/* Use the RX SGL as source (and destination) for crypto op. */
-	src = areq->first_rsgl.sgl.sg;
+	rsgl_src = areq->first_rsgl.sgl.sg;
 
 	if (ctx->enc) {
 		/*
@@ -207,7 +220,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		 *	    v	   v
 		 * RX SGL: AAD || PT || Tag
 		 */
-		err = crypto_aead_copy_sgl(null_tfm, tsgl->sg,
+		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
 					   areq->first_rsgl.sgl.sg, processed);
 		if (err)
 			goto free;
@@ -225,7 +238,7 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 		 */
 
 		/* Copy AAD || CT to RX SGL buffer for in-place operation. */
-		err = crypto_aead_copy_sgl(null_tfm, tsgl->sg,
+		err = crypto_aead_copy_sgl(null_tfm, tsgl_src,
 					   areq->first_rsgl.sgl.sg, outlen);
 		if (err)
 			goto free;
@@ -257,11 +270,11 @@ static int _aead_recvmsg(struct socket *sock, struct msghdr *msg,
 					 areq->tsgl);
 	} else
 		/* no RX SGL present (e.g. authentication only) */
-		src = areq->tsgl;
+		rsgl_src = areq->tsgl;
 
 	/* Initialize the crypto operation */
-	aead_request_set_crypt(&areq->cra_u.aead_req, src,
+	aead_request_set_crypt(&areq->cra_u.aead_req, rsgl_src,
 			       areq->first_rsgl.sgl.sg, used, ctx->iv);
 	aead_request_set_ad(&areq->cra_u.aead_req, ctx->aead_assoclen);
 	aead_request_set_tfm(&areq->cra_u.aead_req, tfm);
-- 
2.13.6
Re: [PATCH] crypto: algif_aead - skip SGL entries with NULL page
On Friday, 10 November 2017, 08:29:40 CET, Stephan Müller wrote:

Hi,

> The TX SGL may contain SGL entries that are assigned a NULL page. This
> may happen if a multi-stage AIO operation is performed where the data
> for each stage is pointed to by one SGL entry. Upon completion of that
> stage, af_alg_pull_tsgl will assign NULL to the SGL entry.
>
> The NULL cipher used to copy the AAD from TX SGL to the destination
> buffer, however, cannot handle the case where the SGL starts with an SGL
> entry having a NULL page. Thus, the code needs to advance the start
> pointer into the SGL to the first non-NULL entry.
>
> This fixes a crash visible on Intel x86 32 bit using the libkcapi test
> suite.

This one still has an issue with zero input. I will send a fix shortly.

Ciao
Stephan
Re: [PATCH 1/2] crypto: tcrypt - fix S/G table for test_aead_speed()
On 11/10/2017 9:43 AM, Herbert Xu wrote:
> On Fri, Nov 10, 2017 at 06:37:22AM +, Horia Geantă wrote:
>> On 11/10/2017 12:21 AM, Herbert Xu wrote:
>>> On Thu, Nov 09, 2017 at 02:37:29PM +, Horia Geantă wrote:
>>>>> 	sg_init_table(sg, np + 1);
>>>>
>>>> sg_mark_end() marks sg[np].
>>>>
>>>>> -	np--;
>>>>> +	if (rem)
>>>>> +		np--;
>>>>> 	for (k = 0; k < np; k++)
>>>>> 		sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE);
>>>>
>>>> In case rem == 0, last k value is np-1, thus sg[np-1+1] will be
>>>> filled here with xbuf[np-1].
>>>
>>> No, if rem == 0, then the last k value is np-2.
>>
>> Notice that np-- above the for loop is done conditionally, so in the for
>> loop k takes values in [0, np-1].
>> This means the for loop fills sg[1]...sg[np].
>
> I must be missing something. In the case rem == 0, let's say
> the original value of np is npo. Then at the start of the loop,
> np = npo - 1, and at the last iteration, k = npo - 2, so we do

IIUC at the start of the loop np = npo (and not npo - 1), since np is no
longer decremented in the rem == 0 case:
-	np--;
+	if (rem)
+		np--;

>
> 	sg_set_buf(&sg[npo - 1], xbuf[npo - 2], PAGE_SIZE);
>

and accordingly last iteration is for k = npo - 1:
	sg_set_buf(&sg[npo], xbuf[npo - 1], PAGE_SIZE);

> While the sg_init_table call sets the end-of-table at
>
> 	sg_init_table(sg, npo + 1);
>

while this marks sg[npo] as last SG table entry.

Thanks,
Horia
Re: [PATCH] crypto: s5p-sss - Remove a stray tab
On Thu, Nov 9, 2017 at 10:26 PM, Dan Carpenter wrote:
> This code seems correct, but the goto was indented too far.
>
> Signed-off-by: Dan Carpenter
>
> diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c

Reviewed-by: Krzysztof Kozlowski

Best regards,
Krzysztof
Re: [PATCH RESEND 1/4] crypto: caam: add caam-dma node to SEC4.0 device tree binding
On 11/9/2017 6:34 PM, Kim Phillips wrote:
> On Thu, 9 Nov 2017 11:54:13 +
> Radu Andrei Alexe wrote:
>
>> On 10/30/2017 4:24 PM, Kim Phillips wrote:
>>> On Mon, 30 Oct 2017 15:46:51 +0200
>>> Horia Geantă wrote:
>>>
>>>>> +=
>>>>> +CAAM DMA Node
>>>>> +
>>>>> +Child node of the crypto node that enables the use of the DMA capabilities
>>>>> +of the CAAM by a stand-alone driver. The only required property is the
>>>>> +"compatible" property. All the other properties are determined from
>>>>> +the job rings on which the CAAM DMA driver depends (ex: the number of
>>>>> +dma-channels is equal to the number of defined job rings).
>>>>> +
>>>>> + - compatible
>>>>> +	Usage: required
>>>>> +	Value type:
>>>>> +	Definition: Must include "fsl,sec-v4.0-dma"
>>>>> +
>>>>> +EXAMPLE
>>>>> +	caam-dma {
>>>>> +		compatible = "fsl,sec-v5.4-dma",
>>>>> +			     "fsl,sec-v5.0-dma",
>>>>> +			     "fsl,sec-v4.0-dma";
>>>>> +	}
>>>
>>> If this isn't describing an aspect of the hardware, then what is it
>>> doing in the device tree?
>>>
>>> Kim
>>
>> Because the caam_dma driver is a platform driver I needed to create a
>> platform device to activate the driver. My thought was that it was
>> simpler to implement it using device tree.
>> The next patch version will create the platform device dynamically at
>> run time.
>
> Why create a new device when that h/w already has one?
>
> Why doesn't the existing crypto driver register dma capabilities with
> the dma driver subsystem?
>
> Kim

I can think of two reasons:

1. The code that this driver introduces has nothing to do with crypto
and everything to do with dma. Placing the code in the same directory as
the caam subsystem would only create confusion for the reader of an
already complex driver.

2. I wanted this driver to be tracked by the dma engine team. They have
the right expertise to provide adequate feedback. If all the code was in
the crypto directory they wouldn't know about this driver or any
subsequent changes to it.

BR,
Radu