Re: [PATCHv2 2/3] usb: gadget: f_uac*: Reduce code duplication
Hi Julian,

[auto build test WARNING on balbi-usb/next]
[also build test WARNING on next-20170630]
[cannot apply to v4.12-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Julian-Scheel/USB-Audio-Gadget-Support-multiple-sampling-rates/20170702-215432
base:   https://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git next

coccinelle warnings: (new ones prefixed by >>)

>> drivers/usb/gadget/legacy/audio.c:231:23-29: ERROR: application of sizeof to pointer

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

--
To unsubscribe from this list: send the line "unsubscribe linux-doc" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] usb: gadget: f_uac*: fix noderef.cocci warnings
drivers/usb/gadget/legacy/audio.c:231:23-29: ERROR: application of sizeof to pointer

sizeof, when applied to a pointer typed expression, gives the size of
the pointer.

Generated by: scripts/coccinelle/misc/noderef.cocci

Fixes: f95cee9b299f ("usb: gadget: f_uac*: Reduce code duplication")
CC: Julian Scheel
Signed-off-by: Fengguang Wu
---

 audio.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/usb/gadget/legacy/audio.c
+++ b/drivers/usb/gadget/legacy/audio.c
@@ -228,7 +228,7 @@ static int audio_bind(struct usb_composi
 #endif
 #if !defined(CONFIG_GADGET_UAC1) || !defined(CONFIG_GADGET_UAC1_LEGACY)
-	memset(uac_opts, 0x0, sizeof(uac_opts));
+	memset(uac_opts, 0x0, sizeof(*uac_opts));
 	uac_opts->p_chmask = p_chmask;
 	uac_opts->p_srate[0] = p_srate;
 	uac_opts->p_srate_active = p_srate;
[PATCH v3 08/28] crypto: talitos: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part
of the new API.

Signed-off-by: Gilad Ben-Yossef
---
This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/talitos.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 79791c6..0ab3c4d 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2082,7 +2082,7 @@ static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
 	case 0:
 		break;
 	case -EINPROGRESS:
-	case -EBUSY:
+	case -EIOCBQUEUED:
 		ret = wait_for_completion_interruptible(
 			&hresult.completion);
 		if (!ret)
--
2.1.4
[PATCH v3 06/28] crypto: omap: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part
of the new API.

Signed-off-by: Gilad Ben-Yossef
---
This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/omap-sham.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 9ad9d39..dfac821 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -1279,7 +1279,7 @@ static int omap_sham_finup(struct ahash_request *req)
 	ctx->flags |= BIT(FLAGS_FINUP);

 	err1 = omap_sham_update(req);
-	if (err1 == -EINPROGRESS || err1 == -EBUSY)
+	if (err1 == -EINPROGRESS || err1 == -EIOCBQUEUED)
 		return err1;
 	/*
 	 * final() has to be always called to cleanup resources
--
2.1.4
[PATCH v3 04/28] crypto: marvell/cesa: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part
of the new API.

Signed-off-by: Gilad Ben-Yossef
---
This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/marvell/cesa.c | 2 +-
 drivers/crypto/marvell/cesa.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 6e7a5c7..5a0f7d1 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -184,7 +184,7 @@ int mv_cesa_queue_req(struct crypto_async_request *req,
 	ret = crypto_enqueue_request(&engine->queue, req);
 	if ((mv_cesa_req_get_type(creq) == CESA_DMA_REQ) &&
 	    (ret == -EINPROGRESS ||
-	    (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+	     ret == -EIOCBQUEUED))
 		mv_cesa_tdma_chain(engine, creq);
 	spin_unlock_bh(&engine->lock);

diff --git a/drivers/crypto/marvell/cesa.h b/drivers/crypto/marvell/cesa.h
index b7872f6..4ca755f 100644
--- a/drivers/crypto/marvell/cesa.h
+++ b/drivers/crypto/marvell/cesa.h
@@ -763,7 +763,7 @@ static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req,
 	 * the backlog and will be processed later. There's no need to
 	 * clean it up.
 	 */
-	if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+	if (ret == -EIOCBQUEUED)
 		return false;

 	/* Request wasn't queued, we need to clean it up */
--
2.1.4
[PATCH v3 12/28] ima: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part
of the new API.

Signed-off-by: Gilad Ben-Yossef
---
This patch should be squashed with the first patch in the series when applied.

 security/integrity/ima/ima_crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 802d5d2..226dd88 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -212,7 +212,7 @@ static int ahash_wait(int err, struct ahash_completion *res)
 	case 0:
 		break;
 	case -EINPROGRESS:
-	case -EBUSY:
+	case -EIOCBQUEUED:
 		wait_for_completion(&res->completion);
 		reinit_completion(&res->completion);
 		err = res->err;
--
2.1.4
[PATCH v3 10/28] fscrypt: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part
of the new API.

Signed-off-by: Gilad Ben-Yossef
---
This patch should be squashed with the first patch in the series when applied.

 fs/crypto/crypto.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index c7835df..c5c89ed 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -190,7 +190,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 		res = crypto_skcipher_decrypt(req);
 	else
 		res = crypto_skcipher_encrypt(req);
-	if (res == -EINPROGRESS || res == -EBUSY) {
+	if (res == -EINPROGRESS || res == -EIOCBQUEUED) {
 		BUG_ON(req->base.data != &ecr);
 		wait_for_completion(&ecr.completion);
 		res = ecr.res;
--
2.1.4
[PATCH v3 18/28] crypto: move gcm to generic async completion
gcm is starting an async. crypto op and waiting for it to complete.
Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 crypto/gcm.c | 32 ++++--------------------------
 1 file changed, 6 insertions(+), 26 deletions(-)

diff --git a/crypto/gcm.c b/crypto/gcm.c
index ffac821..fb923a5 100644
--- a/crypto/gcm.c
+++ b/crypto/gcm.c
@@ -16,7 +16,6 @@
 #include <crypto/gcm.h>
 #include <crypto/hash.h>
 #include "internal.h"
-#include <linux/completion.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
@@ -78,11 +77,6 @@ struct crypto_gcm_req_priv_ctx {
 	} u;
 };

-struct crypto_gcm_setkey_result {
-	int err;
-	struct completion completion;
-};
-
 static struct {
 	u8 buf[16];
 	struct scatterlist sg;
@@ -98,17 +92,6 @@ static inline struct crypto_gcm_req_priv_ctx *crypto_gcm_reqctx(
 	return (void *)PTR_ALIGN((u8 *)aead_request_ctx(req), align + 1);
 }

-static void crypto_gcm_setkey_done(struct crypto_async_request *req, int err)
-{
-	struct crypto_gcm_setkey_result *result = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	result->err = err;
-	complete(&result->completion);
-}
-
 static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 			     unsigned int keylen)
 {
@@ -119,7 +102,7 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 		be128 hash;
 		u8 iv[16];

-		struct crypto_gcm_setkey_result result;
+		struct crypto_wait wait;

 		struct scatterlist sg[1];
 		struct skcipher_request req;
@@ -140,21 +123,18 @@ static int crypto_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 	if (!data)
 		return -ENOMEM;

-	init_completion(&data->result.completion);
+	crypto_init_wait(&data->wait);
 	sg_init_one(data->sg, &data->hash, sizeof(data->hash));
 	skcipher_request_set_tfm(&data->req, ctr);
 	skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
 						  CRYPTO_TFM_REQ_MAY_BACKLOG,
-				      crypto_gcm_setkey_done,
-				      &data->result);
+				      crypto_req_done,
+				      &data->wait);
 	skcipher_request_set_crypt(&data->req, data->sg, data->sg,
 				   sizeof(data->hash), data->iv);

-	err = crypto_skcipher_encrypt(&data->req);
-	if (err == -EINPROGRESS || err == -EIOCBQUEUED) {
-		wait_for_completion(&data->result.completion);
-		err = data->result.err;
-	}
+	err = crypto_wait_req(crypto_skcipher_encrypt(&data->req),
+			      &data->wait);

 	if (err)
 		goto out;
--
2.1.4
[PATCH v3 14/28] crypto: introduce crypto wait for async op
Invoking a possibly async. crypto op and waiting for completion while
correctly handling backlog processing is a common task in the crypto
API implementation and outside users of it.

This patch adds a generic implementation for doing so in preparation
for using it across the board instead of hand-rolled versions.

Signed-off-by: Gilad Ben-Yossef
CC: Eric Biggers
---
 crypto/api.c           | 13 +++++++++++++
 include/linux/crypto.h | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/crypto/api.c b/crypto/api.c
index 941cd4c..2a2479d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -24,6 +24,7 @@
 #include <linux/sched/signal.h>
 #include <linux/slab.h>
 #include <linux/string.h>
+#include <linux/completion.h>
 #include "internal.h"

 LIST_HEAD(crypto_alg_list);
@@ -595,5 +596,17 @@ int crypto_has_alg(const char *name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_alg);

+void crypto_req_done(struct crypto_async_request *req, int err)
+{
+	struct crypto_wait *wait = req->data;
+
+	if (err == -EINPROGRESS)
+		return;
+
+	wait->err = err;
+	complete(&wait->completion);
+}
+EXPORT_SYMBOL_GPL(crypto_req_done);
+
 MODULE_DESCRIPTION("Cryptographic core API");
 MODULE_LICENSE("GPL");
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 84da997..47e884a 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -24,6 +24,7 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <linux/uaccess.h>
+#include <linux/completion.h>

 /*
  * Autoloaded crypto modules should only use a prefixed name to avoid allowing
@@ -468,6 +469,45 @@ struct crypto_alg {
 } CRYPTO_MINALIGN_ATTR;

 /*
+ * A helper struct for waiting for completion of async crypto ops
+ */
+struct crypto_wait {
+	struct completion completion;
+	int err;
+};
+
+/*
+ * Macro for declaring a crypto op async wait object on stack
+ */
+#define DECLARE_CRYPTO_WAIT(_wait) \
+	struct crypto_wait _wait = { \
+		COMPLETION_INITIALIZER_ONSTACK((_wait).completion), 0 }
+
+/*
+ * Async ops completion helper functions
+ */
+void crypto_req_done(struct crypto_async_request *req, int err);
+
+static inline int crypto_wait_req(int err, struct crypto_wait *wait)
+{
+	switch (err) {
+	case -EINPROGRESS:
+	case -EIOCBQUEUED:
+		wait_for_completion(&wait->completion);
+		reinit_completion(&wait->completion);
+		err = wait->err;
+		break;
+	};
+
+	return err;
+}
+
+static inline void crypto_init_wait(struct crypto_wait *wait)
+{
+	init_completion(&wait->completion);
+}
+
+/*
 * Algorithm registration interface.
 */
 int crypto_register_alg(struct crypto_alg *alg);
--
2.1.4
[PATCH v3 25/28] crypto: talitos: move to generic async completion
The talitos driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/crypto/talitos.c | 39 +++++----------------------------------
 1 file changed, 5 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 0ab3c4d..bf80d3b 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -2037,22 +2037,6 @@ static int ahash_import(struct ahash_request *areq, const void *in)
 	return 0;
 }

-struct keyhash_result {
-	struct completion completion;
-	int err;
-};
-
-static void keyhash_complete(struct crypto_async_request *req, int err)
-{
-	struct keyhash_result *res = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	res->err = err;
-	complete(&res->completion);
-}
-
 static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
 		   u8 *hash)
 {
@@ -2060,10 +2044,10 @@ static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,

 	struct scatterlist sg[1];
 	struct ahash_request *req;
-	struct keyhash_result hresult;
+	struct crypto_wait wait;
 	int ret;

-	init_completion(&hresult.completion);
+	crypto_init_wait(&wait);

 	req = ahash_request_alloc(tfm, GFP_KERNEL);
 	if (!req)
@@ -2072,25 +2056,12 @@ static int keyhash(struct crypto_ahash *tfm, const u8 *key, unsigned int keylen,
 	/* Keep tfm keylen == 0 during hash of the long key */
 	ctx->keylen = 0;
 	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				   keyhash_complete, &hresult);
+				   crypto_req_done, &wait);

 	sg_init_one(&sg[0], key, keylen);

 	ahash_request_set_crypt(req, sg, hash, keylen);
-	ret = crypto_ahash_digest(req);
-	switch (ret) {
-	case 0:
-		break;
-	case -EINPROGRESS:
-	case -EIOCBQUEUED:
-		ret = wait_for_completion_interruptible(
-			&hresult.completion);
-		if (!ret)
-			ret = hresult.err;
-		break;
-	default:
-		break;
-	}
+	ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
+
 	ahash_request_free(req);

 	return ret;
--
2.1.4
[PATCH v3 27/28] crypto: mediatek: move to generic async completion
The mediatek driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/crypto/mediatek/mtk-aes.c | 31 +++++--------------------------
 1 file changed, 5 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c
index 5254e13..e2c7c95 100644
--- a/drivers/crypto/mediatek/mtk-aes.c
+++ b/drivers/crypto/mediatek/mtk-aes.c
@@ -137,11 +137,6 @@ struct mtk_aes_gcm_ctx {
 	struct crypto_skcipher *ctr;
 };

-struct mtk_aes_gcm_setkey_result {
-	int err;
-	struct completion completion;
-};
-
 struct mtk_aes_drv {
 	struct list_head dev_list;
 	/* Device list lock */
@@ -936,17 +931,6 @@ static int mtk_aes_gcm_crypt(struct aead_request *req, u64 mode)
 			&req->base);
 }

-static void mtk_gcm_setkey_done(struct crypto_async_request *req, int err)
-{
-	struct mtk_aes_gcm_setkey_result *result = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	result->err = err;
-	complete(&result->completion);
-}
-
 /*
  * Because of the hardware limitation, we need to pre-calculate key(H)
  * for the GHASH operation. The result of the encryption operation
@@ -962,7 +946,7 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 		u32 hash[4];
 		u8 iv[8];

-		struct mtk_aes_gcm_setkey_result result;
+		struct crypto_wait wait;

 		struct scatterlist sg[1];
 		struct skcipher_request req;
@@ -1002,22 +986,17 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 	if (!data)
 		return -ENOMEM;

-	init_completion(&data->result.completion);
+	crypto_init_wait(&data->wait);
 	sg_init_one(data->sg, &data->hash, AES_BLOCK_SIZE);
 	skcipher_request_set_tfm(&data->req, ctr);
 	skcipher_request_set_callback(&data->req, CRYPTO_TFM_REQ_MAY_SLEEP |
 				      CRYPTO_TFM_REQ_MAY_BACKLOG,
-				      mtk_gcm_setkey_done, &data->result);
+				      crypto_req_done, &data->wait);
 	skcipher_request_set_crypt(&data->req, data->sg, data->sg,
 				   AES_BLOCK_SIZE, data->iv);

-	err = crypto_skcipher_encrypt(&data->req);
-	if (err == -EINPROGRESS || err == -EIOCBQUEUED) {
-		err = wait_for_completion_interruptible(
-			&data->result.completion);
-		if (!err)
-			err = data->result.err;
-	}
+	err = crypto_wait_req(crypto_skcipher_encrypt(&data->req),
+			      &data->wait);
 	if (err)
 		goto out;
--
2.1.4
[PATCH v3 21/28] fscrypt: move to generic async completion
fscrypt starts several async. crypto ops and waits for them to
complete. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 fs/crypto/crypto.c          | 29 ++++-------------------------
 fs/crypto/fname.c           | 36 ++++++------------------------------
 fs/crypto/fscrypt_private.h | 10 ----------
 fs/crypto/keyinfo.c         | 21 +++------------------
 4 files changed, 14 insertions(+), 82 deletions(-)

diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
index c5c89ed..4e8740f 100644
--- a/fs/crypto/crypto.c
+++ b/fs/crypto/crypto.c
@@ -126,21 +126,6 @@ struct fscrypt_ctx *fscrypt_get_ctx(const struct inode *inode, gfp_t gfp_flags)
 }
 EXPORT_SYMBOL(fscrypt_get_ctx);

-/**
- * page_crypt_complete() - completion callback for page crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void page_crypt_complete(struct crypto_async_request *req, int res)
-{
-	struct fscrypt_completion_result *ecr = req->data;
-
-	if (res == -EINPROGRESS)
-		return;
-	ecr->res = res;
-	complete(&ecr->completion);
-}
-
 int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 			   u64 lblk_num, struct page *src_page,
 			   struct page *dest_page, unsigned int len,
@@ -151,7 +136,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 		u8 padding[FS_IV_SIZE - sizeof(__le64)];
 	} iv;
 	struct skcipher_request *req = NULL;
-	DECLARE_FS_COMPLETION_RESULT(ecr);
+	DECLARE_CRYPTO_WAIT(wait);
 	struct scatterlist dst, src;
 	struct fscrypt_info *ci = inode->i_crypt_info;
 	struct crypto_skcipher *tfm = ci->ci_ctfm;
@@ -179,7 +164,7 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 	skcipher_request_set_callback(
 		req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-		page_crypt_complete, &ecr);
+		crypto_req_done, &wait);

 	sg_init_table(&dst, 1);
 	sg_set_page(&dst, dest_page, len, offs);
@@ -187,14 +172,10 @@ int fscrypt_do_page_crypto(const struct inode *inode, fscrypt_direction_t rw,
 	sg_set_page(&src, src_page, len, offs);
 	skcipher_request_set_crypt(req, &src, &dst, len, &iv);
 	if (rw == FS_DECRYPT)
-		res = crypto_skcipher_decrypt(req);
+		res = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
 	else
-		res = crypto_skcipher_encrypt(req);
-	if (res == -EINPROGRESS || res == -EIOCBQUEUED) {
-		BUG_ON(req->base.data != &ecr);
-		wait_for_completion(&ecr.completion);
-		res = ecr.res;
-	}
+		res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
+
 	skcipher_request_free(req);
 	if (res) {
 		printk_ratelimited(KERN_ERR
diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
index ad9f814..a80a0d3 100644
--- a/fs/crypto/fname.c
+++ b/fs/crypto/fname.c
@@ -15,21 +15,6 @@
 #include "fscrypt_private.h"

 /**
- * fname_crypt_complete() - completion callback for filename crypto
- * @req: The asynchronous cipher request context
- * @res: The result of the cipher operation
- */
-static void fname_crypt_complete(struct crypto_async_request *req, int res)
-{
-	struct fscrypt_completion_result *ecr = req->data;
-
-	if (res == -EINPROGRESS)
-		return;
-	ecr->res = res;
-	complete(&ecr->completion);
-}
-
-/**
  * fname_encrypt() - encrypt a filename
  *
  * The caller must have allocated sufficient memory for the @oname string.
@@ -40,7 +25,7 @@ static int fname_encrypt(struct inode *inode,
			const struct qstr *iname, struct fscrypt_str *oname)
 {
 	struct skcipher_request *req = NULL;
-	DECLARE_FS_COMPLETION_RESULT(ecr);
+	DECLARE_CRYPTO_WAIT(wait);
 	struct fscrypt_info *ci = inode->i_crypt_info;
 	struct crypto_skcipher *tfm = ci->ci_ctfm;
 	int res = 0;
@@ -76,17 +61,12 @@ static int fname_encrypt(struct inode *inode,
 	}
 	skcipher_request_set_callback(req,
 			CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
-			fname_crypt_complete, &ecr);
+			crypto_req_done, &wait);
 	sg_init_one(&sg, oname->name, cryptlen);
 	skcipher_request_set_crypt(req, &sg, &sg, cryptlen, iv);

 	/* Do the encryption */
-	res = crypto_skcipher_encrypt(req);
-	if (res == -EINPROGRESS || res == -EBUSY) {
-		/* Request is being completed asynchronously; wait for it */
-		wait_for_completion(&ecr.completion);
-		res = ecr.res;
-	}
+	res = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
 	skcipher_request_free(req);
 	if (res < 0) {
 		printk_ratelimited(KERN_ERR
@@ -110,7 +90,7 @@ static int fname_decrypt(struct inode *inode,
[PATCH v3 20/28] dm: move dm-verity to generic async completion
dm-verity is starting async. crypto ops and waiting for them to
complete. Move it over to generic code doing the same.

This also fixes a possible data corruption bug created by the use of
wait_for_completion_interruptible() without dealing correctly with an
interrupt aborting the wait prior to the async op finishing.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/md/dm-verity-target.c | 81 ++++++++++---------------------------------
 drivers/md/dm-verity.h        |  5 ---
 2 files changed, 20 insertions(+), 66 deletions(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 4fe7d18..343dd0d 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -92,74 +92,33 @@ static sector_t verity_position_at_level(struct dm_verity *v, sector_t block,
 	return block >> (level * v->hash_per_block_bits);
 }

-/*
- * Callback function for asynchrnous crypto API completion notification
- */
-static void verity_op_done(struct crypto_async_request *base, int err)
-{
-	struct verity_result *res = (struct verity_result *)base->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	res->err = err;
-	complete(&res->completion);
-}
-
-/*
- * Wait for async crypto API callback
- */
-static inline int verity_complete_op(struct verity_result *res, int ret)
-{
-	switch (ret) {
-	case 0:
-		break;
-
-	case -EINPROGRESS:
-	case -EIOCBQUEUED:
-		ret = wait_for_completion_interruptible(&res->completion);
-		if (!ret)
-			ret = res->err;
-		reinit_completion(&res->completion);
-		break;
-
-	default:
-		DMERR("verity_wait_hash: crypto op submission failed: %d", ret);
-	}
-
-	if (unlikely(ret < 0))
-		DMERR("verity_wait_hash: crypto op failed: %d", ret);
-
-	return ret;
-}
-
 static int verity_hash_update(struct dm_verity *v, struct ahash_request *req,
 			      const u8 *data, size_t len,
-			      struct verity_result *res)
+			      struct crypto_wait *wait)
 {
 	struct scatterlist sg;

 	sg_init_one(&sg, data, len);
 	ahash_request_set_crypt(req, &sg, NULL, len);

-	return verity_complete_op(res, crypto_ahash_update(req));
+	return crypto_wait_req(crypto_ahash_update(req), wait);
 }

 /*
  * Wrapper for crypto_ahash_init, which handles verity salting.
  */
 static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
-				struct verity_result *res)
+				struct crypto_wait *wait)
 {
 	int r;

 	ahash_request_set_tfm(req, v->tfm);
 	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
 					CRYPTO_TFM_REQ_MAY_BACKLOG,
-					verity_op_done, (void *)res);
-	init_completion(&res->completion);
+					crypto_req_done, (void *)wait);
+	crypto_init_wait(wait);

-	r = verity_complete_op(res, crypto_ahash_init(req));
+	r = crypto_wait_req(crypto_ahash_init(req), wait);

 	if (unlikely(r < 0)) {
 		DMERR("crypto_ahash_init failed: %d", r);
@@ -167,18 +126,18 @@ static int verity_hash_init(struct dm_verity *v, struct ahash_request *req,
 	}

 	if (likely(v->salt_size && (v->version >= 1)))
-		r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+		r = verity_hash_update(v, req, v->salt, v->salt_size, wait);

 	return r;
 }

 static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
-			     u8 *digest, struct verity_result *res)
+			     u8 *digest, struct crypto_wait *wait)
 {
 	int r;

 	if (unlikely(v->salt_size && (!v->version))) {
-		r = verity_hash_update(v, req, v->salt, v->salt_size, res);
+		r = verity_hash_update(v, req, v->salt, v->salt_size, wait);
 		if (r < 0) {
 			DMERR("verity_hash_final failed updating salt: %d", r);
 			goto out;
@@ -187,7 +146,7 @@ static int verity_hash_final(struct dm_verity *v, struct ahash_request *req,
 	}

 	ahash_request_set_crypt(req, NULL, digest, 0);
-	r = verity_complete_op(res, crypto_ahash_final(req));
+	r = crypto_wait_req(crypto_ahash_final(req), wait);
 out:
 	return r;
 }
@@ -196,17 +155,17 @@ int verity_hash(struct dm_verity *v, struct ahash_request *req,
 		const u8 *data, size_t len, u8 *digest)
 {
 	int r;
-	struct verity_result res;
+	struct crypto_wait wait;

-	r = verity_hash_init(v, req, &res);
+	r = verity_hash_init(v, req, &wait);
 	if (unlikely(r < 0))
 		goto out;

-	r = verity_hash_update(v, req, data, len, &res);
+	r =
[PATCH v3 17/28] crypto: move drbg to generic async completion
DRBG is starting an async. crypto op and waiting for it to complete.
Move it over to generic code doing the same.

The code now also passes the CRYPTO_TFM_REQ_MAY_SLEEP flag, indicating
crypto request memory allocation may use GFP_KERNEL, which should be
perfectly fine as the code is obviously sleeping for the completion of
the request anyway.

Signed-off-by: Gilad Ben-Yossef
Acked-by: Stephan Muller
---
 crypto/drbg.c         | 36 +++++++++---------------------------
 include/crypto/drbg.h |  3 +--
 2 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 850b451..c522251 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -1651,16 +1651,6 @@ static int drbg_fini_sym_kernel(struct drbg_state *drbg)
 	return 0;
 }

-static void drbg_skcipher_cb(struct crypto_async_request *req, int error)
-{
-	struct drbg_state *drbg = req->data;
-
-	if (error == -EINPROGRESS)
-		return;
-	drbg->ctr_async_err = error;
-	complete(&drbg->ctr_completion);
-}
-
 static int drbg_init_sym_kernel(struct drbg_state *drbg)
 {
 	struct crypto_cipher *tfm;
@@ -1691,7 +1681,7 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
 		return PTR_ERR(sk_tfm);
 	}
 	drbg->ctr_handle = sk_tfm;
-	init_completion(&drbg->ctr_completion);
+	crypto_init_wait(&drbg->ctr_wait);

 	req = skcipher_request_alloc(sk_tfm, GFP_KERNEL);
 	if (!req) {
@@ -1700,8 +1690,9 @@ static int drbg_init_sym_kernel(struct drbg_state *drbg)
 		return -ENOMEM;
 	}
 	drbg->ctr_req = req;
-	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-					drbg_skcipher_cb, drbg);
+	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
+					CRYPTO_TFM_REQ_MAY_SLEEP,
+					crypto_req_done, &drbg->ctr_wait);

 	alignmask = crypto_skcipher_alignmask(sk_tfm);
 	drbg->ctr_null_value_buf = kzalloc(DRBG_CTR_NULL_LEN + alignmask,
@@ -1762,21 +1753,12 @@ static int drbg_kcapi_sym_ctr(struct drbg_state *drbg,
 		/* Output buffer may not be valid for SGL, use scratchpad */
 		skcipher_request_set_crypt(drbg->ctr_req, &sg_in, &sg_out,
 					   cryptlen, drbg->V);
-		ret = crypto_skcipher_encrypt(drbg->ctr_req);
-		switch (ret) {
-		case 0:
-			break;
-		case -EINPROGRESS:
-		case -EIOCBQUEUED:
-			wait_for_completion(&drbg->ctr_completion);
-			if (!drbg->ctr_async_err) {
-				reinit_completion(&drbg->ctr_completion);
-				break;
-			}
-		default:
+		ret = crypto_wait_req(crypto_skcipher_encrypt(drbg->ctr_req),
+				      &drbg->ctr_wait);
+		if (ret)
 			goto out;
-		}
-		init_completion(&drbg->ctr_completion);
+
+		crypto_init_wait(&drbg->ctr_wait);

 		memcpy(outbuf, drbg->outscratchpad, cryptlen);
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index 22f884c..8f94110 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -126,8 +126,7 @@ struct drbg_state {
 	__u8 *ctr_null_value;	/* CTR mode aligned zero buf */
 	__u8 *outscratchpadbuf;	/* CTR mode output scratchpad */
 	__u8 *outscratchpad;	/* CTR mode aligned outbuf */
-	struct completion ctr_completion;	/* CTR mode async handler */
-	int ctr_async_err;			/* CTR mode async error */
+	struct crypto_wait ctr_wait;		/* CTR mode async wait obj */
 	bool seeded;		/* DRBG fully seeded? */
 	bool pr;		/* Prediction resistance enabled? */
--
2.1.4
[PATCH v3 28/28] crypto: adapt api sample to use async. op wait
The code sample is waiting for an async. crypto op completion. Adapt
the sample to use the new generic infrastructure to do the same.

This also fixes a possible data corruption bug created by the use of
wait_for_completion_interruptible() without dealing correctly with an
interrupt aborting the wait prior to the async op finishing.

Signed-off-by: Gilad Ben-Yossef
---
 Documentation/crypto/api-samples.rst | 52 +++++++-----------------------------
 1 file changed, 10 insertions(+), 42 deletions(-)

diff --git a/Documentation/crypto/api-samples.rst b/Documentation/crypto/api-samples.rst
index c6aa4ba..006827e 100644
--- a/Documentation/crypto/api-samples.rst
+++ b/Documentation/crypto/api-samples.rst
@@ -7,59 +7,27 @@ Code Example For Symmetric Key Cipher Operation
 ::

-    struct tcrypt_result {
-        struct completion completion;
-        int err;
-    };
-
     /* tie all data structures together */
     struct skcipher_def {
         struct scatterlist sg;
         struct crypto_skcipher *tfm;
         struct skcipher_request *req;
-        struct tcrypt_result result;
+        struct crypto_wait wait;
     };

-    /* Callback function */
-    static void test_skcipher_cb(struct crypto_async_request *req, int error)
-    {
-        struct tcrypt_result *result = req->data;
-
-        if (error == -EINPROGRESS)
-            return;
-        result->err = error;
-        complete(&result->completion);
-        pr_info("Encryption finished successfully\n");
-    }
-
     /* Perform cipher operation */
     static unsigned int test_skcipher_encdec(struct skcipher_def *sk,
                                              int enc)
     {
-        int rc = 0;
+        int rc;

         if (enc)
-            rc = crypto_skcipher_encrypt(sk->req);
+            rc = crypto_wait_req(crypto_skcipher_encrypt(sk->req),
+                                 &sk->wait);
         else
-            rc = crypto_skcipher_decrypt(sk->req);
-
-        switch (rc) {
-        case 0:
-            break;
-        case -EINPROGRESS:
-        case -EIOCBQUEUED:
-            rc = wait_for_completion_interruptible(
-                &sk->result.completion);
-            if (!rc && !sk->result.err) {
-                reinit_completion(&sk->result.completion);
-                break;
-            }
-        default:
-            pr_info("skcipher encrypt returned with %d result %d\n",
-                    rc, sk->result.err);
-            break;
-        }
-        init_completion(&sk->result.completion);
+            rc = crypto_wait_req(crypto_skcipher_decrypt(sk->req),
+                                 &sk->wait);
+
+        if (rc)
+            pr_info("skcipher encrypt returned with result %d\n", rc);

         return rc;
     }
@@ -89,8 +57,8 @@ Code Example For Symmetric Key Cipher Operation
         }

         skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-                                      test_skcipher_cb,
-                                      &sk.result);
+                                      crypto_req_done,
+                                      &sk.wait);

         /* AES 256 with random key */
         get_random_bytes(&key, 32);
@@ -122,7 +90,7 @@ Code Example For Symmetric Key Cipher Operation
         /* We encrypt one block */
         sg_init_one(&sk.sg, scratchpad, 16);
         skcipher_request_set_crypt(req, &sk.sg, &sk.sg, 16, ivdata);
-        init_completion(&sk.result.completion);
+        crypto_init_wait(&sk.wait);

         /* encrypt data */
         ret = test_skcipher_encdec(&sk, 1);
--
2.1.4
[PATCH v3 26/28] crypto: qce: move to generic async completion
The qce driver starts several async crypto ops and waits for their
completions. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 drivers/crypto/qce/sha.c | 30 ++++--------------------------
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index a21d2a1c..53227d7 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -349,28 +349,12 @@ static int qce_ahash_digest(struct ahash_request *req)
 	return qce->async_req_enqueue(tmpl->qce, &req->base);
 }

-struct qce_ahash_result {
-	struct completion completion;
-	int error;
-};
-
-static void qce_digest_complete(struct crypto_async_request *req, int error)
-{
-	struct qce_ahash_result *result = req->data;
-
-	if (error == -EINPROGRESS)
-		return;
-
-	result->error = error;
-	complete(&result->completion);
-}
-
 static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 				 unsigned int keylen)
 {
 	unsigned int digestsize = crypto_ahash_digestsize(tfm);
 	struct qce_sha_ctx *ctx = crypto_tfm_ctx(&tfm->base);
-	struct qce_ahash_result result;
+	struct crypto_wait wait;
 	struct ahash_request *req;
 	struct scatterlist sg;
 	unsigned int blocksize;
@@ -405,9 +389,9 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 		goto err_free_ahash;
 	}

-	init_completion(&result.completion);
+	crypto_init_wait(&wait);
 	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				   qce_digest_complete, &result);
+				   crypto_req_done, &wait);
 	crypto_ahash_clear_flags(ahash_tfm, ~0);

 	buf = kzalloc(keylen + QCE_MAX_ALIGN_SIZE, GFP_KERNEL);
@@ -420,13 +404,7 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 	sg_init_one(&sg, buf, keylen);
 	ahash_request_set_crypt(req, &sg, ctx->authkey, keylen);
-	ret = crypto_ahash_digest(req);
-	if (ret == -EINPROGRESS || ret == -EIOCBQUEUED) {
-		ret = wait_for_completion_interruptible(&result.completion);
-		if (!ret)
-			ret = result.error;
-	}
-
+	ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
 	if (ret)
 		crypto_ahash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
--
2.1.4
[PATCH v3 24/28] crypto: tcrypt: move to generic async completion
tcrypt starts several async crypto ops and waits for their completions. Move it over to generic code doing the same. Signed-off-by: Gilad Ben-Yossef--- crypto/tcrypt.c | 84 + 1 file changed, 25 insertions(+), 59 deletions(-) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 57f7ac4..0278a39 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -79,34 +79,11 @@ static char *check[] = { NULL }; -struct tcrypt_result { - struct completion completion; - int err; -}; - -static void tcrypt_complete(struct crypto_async_request *req, int err) -{ - struct tcrypt_result *res = req->data; - - if (err == -EINPROGRESS) - return; - - res->err = err; - complete(>completion); -} - static inline int do_one_aead_op(struct aead_request *req, int ret) { - if (ret == -EINPROGRESS || ret == -EIOCBQUEUED) { - struct tcrypt_result *tr = req->base.data; + struct crypto_wait *wait = req->base.data; - ret = wait_for_completion_interruptible(>completion); - if (!ret) - ret = tr->err; - reinit_completion(>completion); - } - - return ret; + return crypto_wait_req(ret, wait); } static int test_aead_jiffies(struct aead_request *req, int enc, @@ -248,7 +225,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs, char *axbuf[XBUFSIZE]; unsigned int *b_size; unsigned int iv_len; - struct tcrypt_result result; + struct crypto_wait wait; iv = kzalloc(MAX_IVLEN, GFP_KERNEL); if (!iv) @@ -284,7 +261,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs, goto out_notfm; } - init_completion(); + crypto_init_wait(); printk(KERN_INFO "\ntesting speed of %s (%s) %s\n", algo, get_driver_name(crypto_aead, tfm), e); @@ -296,7 +273,7 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs, } aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, - tcrypt_complete, ); + crypto_req_done, ); i = 0; do { @@ -397,21 +374,16 @@ static void test_hash_sg_init(struct scatterlist *sg) static inline int do_one_ahash_op(struct ahash_request *req, int 
ret) { - if (ret == -EINPROGRESS || ret == -EIOCBQUEUED) { - struct tcrypt_result *tr = req->base.data; + struct crypto_wait *wait = req->base.data; - wait_for_completion(>completion); - reinit_completion(>completion); - ret = tr->err; - } - return ret; + return crypto_wait_req(ret, wait); } struct test_mb_ahash_data { struct scatterlist sg[TVMEMSIZE]; char result[64]; struct ahash_request *req; - struct tcrypt_result tresult; + struct crypto_wait wait; char *xbuf[XBUFSIZE]; }; @@ -440,7 +412,7 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, if (testmgr_alloc_buf(data[i].xbuf)) goto out; - init_completion([i].tresult.completion); + crypto_init_wait([i].wait); data[i].req = ahash_request_alloc(tfm, GFP_KERNEL); if (!data[i].req) { @@ -449,8 +421,8 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, goto out; } - ahash_request_set_callback(data[i].req, 0, - tcrypt_complete, [i].tresult); + ahash_request_set_callback(data[i].req, 0, crypto_req_done, + [i].wait); test_hash_sg_init(data[i].sg); } @@ -492,16 +464,16 @@ static void test_mb_ahash_speed(const char *algo, unsigned int sec, if (ret) break; - complete([k].tresult.completion); - data[k].tresult.err = 0; + crypto_req_done([k].req->base, 0); } for (j = 0; j < k; j++) { - struct tcrypt_result *tr = [j].tresult; + struct crypto_wait *wait = [j].wait; + int wait_ret; - wait_for_completion(>completion); - if (tr->err) - ret = tr->err; + wait_ret = crypto_wait_req(-EINPROGRESS, wait); + if (wait_ret) + ret = wait_ret; } end = get_cycles(); @@ -679,7 +651,7 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
[PATCH v3 23/28] ima: move to generic async completion
ima starts several async crypto ops and waits for their completions. Move it over to generic code doing the same. Signed-off-by: Gilad Ben-YossefAcked-by: Mimi Zohar --- security/integrity/ima/ima_crypto.c | 56 +++-- 1 file changed, 17 insertions(+), 39 deletions(-) diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 226dd88..0e4db1fe 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -27,11 +27,6 @@ #include "ima.h" -struct ahash_completion { - struct completion completion; - int err; -}; - /* minimum file size for ahash use */ static unsigned long ima_ahash_minsize; module_param_named(ahash_minsize, ima_ahash_minsize, ulong, 0644); @@ -196,30 +191,13 @@ static void ima_free_atfm(struct crypto_ahash *tfm) crypto_free_ahash(tfm); } -static void ahash_complete(struct crypto_async_request *req, int err) +static inline int ahash_wait(int err, struct crypto_wait *wait) { - struct ahash_completion *res = req->data; - if (err == -EINPROGRESS) - return; - res->err = err; - complete(>completion); -} + err = crypto_wait_req(err, wait); -static int ahash_wait(int err, struct ahash_completion *res) -{ - switch (err) { - case 0: - break; - case -EINPROGRESS: - case -EIOCBQUEUED: - wait_for_completion(>completion); - reinit_completion(>completion); - err = res->err; - /* fall through */ - default: + if (err) pr_crit_ratelimited("ahash calculation failed: err: %d\n", err); - } return err; } @@ -233,7 +211,7 @@ static int ima_calc_file_hash_atfm(struct file *file, int rc, read = 0, rbuf_len, active = 0, ahash_rc = 0; struct ahash_request *req; struct scatterlist sg[1]; - struct ahash_completion res; + struct crypto_wait wait; size_t rbuf_size[2]; hash->length = crypto_ahash_digestsize(tfm); @@ -242,12 +220,12 @@ static int ima_calc_file_hash_atfm(struct file *file, if (!req) return -ENOMEM; - init_completion(); + crypto_init_wait(); ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG | 
CRYPTO_TFM_REQ_MAY_SLEEP, - ahash_complete, ); + crypto_req_done, ); - rc = ahash_wait(crypto_ahash_init(req), ); + rc = ahash_wait(crypto_ahash_init(req), ); if (rc) goto out1; @@ -288,7 +266,7 @@ static int ima_calc_file_hash_atfm(struct file *file, * read/request, wait for the completion of the * previous ahash_update() request. */ - rc = ahash_wait(ahash_rc, ); + rc = ahash_wait(ahash_rc, ); if (rc) goto out3; } @@ -304,7 +282,7 @@ static int ima_calc_file_hash_atfm(struct file *file, * read/request, wait for the completion of the * previous ahash_update() request. */ - rc = ahash_wait(ahash_rc, ); + rc = ahash_wait(ahash_rc, ); if (rc) goto out3; } @@ -318,7 +296,7 @@ static int ima_calc_file_hash_atfm(struct file *file, active = !active; /* swap buffers, if we use two */ } /* wait for the last update request to complete */ - rc = ahash_wait(ahash_rc, ); + rc = ahash_wait(ahash_rc, ); out3: if (read) file->f_mode &= ~FMODE_READ; @@ -327,7 +305,7 @@ static int ima_calc_file_hash_atfm(struct file *file, out2: if (!rc) { ahash_request_set_crypt(req, NULL, hash->digest, 0); - rc = ahash_wait(crypto_ahash_final(req), ); + rc = ahash_wait(crypto_ahash_final(req), ); } out1: ahash_request_free(req); @@ -527,7 +505,7 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t len, { struct ahash_request *req; struct scatterlist sg; - struct ahash_completion res; + struct crypto_wait wait; int rc, ahash_rc = 0; hash->length = crypto_ahash_digestsize(tfm); @@ -536,12 +514,12 @@ static int calc_buffer_ahash_atfm(const void *buf, loff_t len, if (!req) return -ENOMEM; - init_completion(); + crypto_init_wait(); ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP, - ahash_complete, ); +
[PATCH v3 22/28] cifs: move to generic async completion
cifs starts an async crypto op and waits for its completion. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
Acked-by: Pavel Shilovsky
---
 fs/cifs/smb2ops.c | 30 --
 1 file changed, 4 insertions(+), 26 deletions(-)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 00b1143..46abe62 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1809,22 +1809,6 @@ init_sg(struct smb_rqst *rqst, u8 *sign)
 	return sg;
 }
 
-struct cifs_crypt_result {
-	int err;
-	struct completion completion;
-};
-
-static void cifs_crypt_complete(struct crypto_async_request *req, int err)
-{
-	struct cifs_crypt_result *res = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	res->err = err;
-	complete(&res->completion);
-}
-
 static int smb2_get_enc_key(struct TCP_Server_Info *server, __u64 ses_id,
 			    int enc, u8 *key)
 {
@@ -1865,12 +1849,10 @@ crypt_message(struct TCP_Server_Info *server, struct smb_rqst *rqst, int enc)
 	struct aead_request *req;
 	char *iv;
 	unsigned int iv_len;
-	struct cifs_crypt_result result = {0, };
+	DECLARE_CRYPTO_WAIT(wait);
 	struct crypto_aead *tfm;
 	unsigned int crypt_len = le32_to_cpu(tr_hdr->OriginalMessageSize);
 
-	init_completion(&result.completion);
-
 	rc = smb2_get_enc_key(server, tr_hdr->SessionId, enc, key);
 	if (rc) {
 		cifs_dbg(VFS, "%s: Could not get %scryption key\n", __func__,
@@ -1930,14 +1912,10 @@ crypt_message(struct TCP_Server_Info *server, struct smb_rqst *rqst, int enc)
 	aead_request_set_ad(req, assoc_data_len);
 
 	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				  cifs_crypt_complete, &result);
+				  crypto_req_done, &wait);
 
-	rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
-
-	if (rc == -EINPROGRESS || rc == -EIOCBQUEUED) {
-		wait_for_completion(&result.completion);
-		rc = result.err;
-	}
+	rc = crypto_wait_req(enc ? crypto_aead_encrypt(req)
+				 : crypto_aead_decrypt(req), &wait);
 
 	if (!rc && enc)
 		memcpy(&tr_hdr->Signature, sign, SMB2_SIGNATURE_SIZE);
-- 
2.1.4
[PATCH v3 19/28] crypto: move testmgr to generic async completion
testmgr is starting async. crypto ops and waiting for them to complete. Move it over to generic code doing the same. This also provides a test of the generic crypto async. wait code. Signed-off-by: Gilad Ben-Yossef--- crypto/testmgr.c | 204 ++- 1 file changed, 66 insertions(+), 138 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index fb5418f..c998b85 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -76,11 +76,6 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask) #define ENCRYPT 1 #define DECRYPT 0 -struct tcrypt_result { - struct completion completion; - int err; -}; - struct aead_test_suite { struct { const struct aead_testvec *vecs; @@ -155,17 +150,6 @@ static void hexdump(unsigned char *buf, unsigned int len) buf, len, false); } -static void tcrypt_complete(struct crypto_async_request *req, int err) -{ - struct tcrypt_result *res = req->data; - - if (err == -EINPROGRESS) - return; - - res->err = err; - complete(>completion); -} - static int testmgr_alloc_buf(char *buf[XBUFSIZE]) { int i; @@ -193,20 +177,10 @@ static void testmgr_free_buf(char *buf[XBUFSIZE]) free_page((unsigned long)buf[i]); } -static int wait_async_op(struct tcrypt_result *tr, int ret) -{ - if (ret == -EINPROGRESS || ret == -EIOCBQUEUED) { - wait_for_completion(>completion); - reinit_completion(>completion); - ret = tr->err; - } - return ret; -} - static int ahash_partial_update(struct ahash_request **preq, struct crypto_ahash *tfm, const struct hash_testvec *template, void *hash_buff, int k, int temp, struct scatterlist *sg, - const char *algo, char *result, struct tcrypt_result *tresult) + const char *algo, char *result, struct crypto_wait *wait) { char *state; struct ahash_request *req; @@ -236,7 +210,7 @@ static int ahash_partial_update(struct ahash_request **preq, } ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, - tcrypt_complete, tresult); + crypto_req_done, wait); memcpy(hash_buff, template->plaintext + temp, template->tap[k]); @@ 
-247,7 +221,7 @@ static int ahash_partial_update(struct ahash_request **preq, pr_err("alg: hash: Failed to import() for %s\n", algo); goto out; } - ret = wait_async_op(tresult, crypto_ahash_update(req)); + ret = crypto_wait_req(crypto_ahash_update(req), wait); if (ret) goto out; *preq = req; @@ -272,7 +246,7 @@ static int __test_hash(struct crypto_ahash *tfm, char *result; char *key; struct ahash_request *req; - struct tcrypt_result tresult; + struct crypto_wait wait; void *hash_buff; char *xbuf[XBUFSIZE]; int ret = -ENOMEM; @@ -286,7 +260,7 @@ static int __test_hash(struct crypto_ahash *tfm, if (testmgr_alloc_buf(xbuf)) goto out_nobuf; - init_completion(); + crypto_init_wait(); req = ahash_request_alloc(tfm, GFP_KERNEL); if (!req) { @@ -295,7 +269,7 @@ static int __test_hash(struct crypto_ahash *tfm, goto out_noreq; } ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, - tcrypt_complete, ); + crypto_req_done, ); j = 0; for (i = 0; i < tcount; i++) { @@ -335,26 +309,26 @@ static int __test_hash(struct crypto_ahash *tfm, ahash_request_set_crypt(req, sg, result, template[i].psize); if (use_digest) { - ret = wait_async_op(, crypto_ahash_digest(req)); + ret = crypto_wait_req(crypto_ahash_digest(req), ); if (ret) { pr_err("alg: hash: digest failed on test %d " "for %s: ret=%d\n", j, algo, -ret); goto out; } } else { - ret = wait_async_op(, crypto_ahash_init(req)); + ret = crypto_wait_req(crypto_ahash_init(req), ); if (ret) { pr_err("alg: hash: init failed on test %d " "for %s: ret=%d\n", j, algo, -ret); goto out; } - ret = wait_async_op(, crypto_ahash_update(req)); + ret = crypto_wait_req(crypto_ahash_update(req), ); if (ret) { pr_err("alg: hash: update failed on test %d "
[PATCH v3 16/28] crypto: move pub key to generic async completion
public_key_verify_signature() is starting an async crypto op and waiting for it to complete. Move it over to generic code doing the same.

Signed-off-by: Gilad Ben-Yossef
---
 crypto/asymmetric_keys/public_key.c | 28
 1 file changed, 4 insertions(+), 24 deletions(-)

diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 3fad1fd..d916235 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -57,29 +57,13 @@ static void public_key_destroy(void *payload0, void *payload3)
 	public_key_signature_free(payload3);
 }
 
-struct public_key_completion {
-	struct completion completion;
-	int err;
-};
-
-static void public_key_verify_done(struct crypto_async_request *req, int err)
-{
-	struct public_key_completion *compl = req->data;
-
-	if (err == -EINPROGRESS)
-		return;
-
-	compl->err = err;
-	complete(&compl->completion);
-}
-
 /*
  * Verify a signature using a public key.
  */
 int public_key_verify_signature(const struct public_key *pkey,
 				const struct public_key_signature *sig)
 {
-	struct public_key_completion compl;
+	struct crypto_wait cwait;
 	struct crypto_akcipher *tfm;
 	struct akcipher_request *req;
 	struct scatterlist sig_sg, digest_sg;
@@ -131,20 +115,16 @@ int public_key_verify_signature(const struct public_key *pkey,
 	sg_init_one(&digest_sg, output, outlen);
 	akcipher_request_set_crypt(req, &sig_sg, &digest_sg, sig->s_size,
 				   outlen);
-	init_completion(&cwait.completion);
+	crypto_init_wait(&cwait);
 	akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
 				      CRYPTO_TFM_REQ_MAY_SLEEP,
-				      public_key_verify_done, &compl);
+				      crypto_req_done, &cwait);
 
 	/* Perform the verification calculation.  This doesn't actually do the
 	 * verification, but rather calculates the hash expected by the
 	 * signature and returns that to us.
 	 */
-	ret = crypto_akcipher_verify(req);
-	if ((ret == -EINPROGRESS) || (ret == -EIOCBQUEUED)) {
-		wait_for_completion(&compl.completion);
-		ret = compl.err;
-	}
+	ret = crypto_wait_req(crypto_akcipher_verify(req), &cwait);
 	if (ret < 0)
 		goto out_free_output;
-- 
2.1.4
[PATCH v3 15/28] crypto: move algif to generic async completion
algif starts several async crypto ops and waits for their completion. Move it over to generic code doing the same. Signed-off-by: Gilad Ben-Yossef--- crypto/af_alg.c | 27 --- crypto/algif_aead.c | 14 +++--- crypto/algif_hash.c | 29 + crypto/algif_skcipher.c | 15 +++ include/crypto/if_alg.h | 14 -- 5 files changed, 27 insertions(+), 72 deletions(-) diff --git a/crypto/af_alg.c b/crypto/af_alg.c index c67daba..bf4acaf 100644 --- a/crypto/af_alg.c +++ b/crypto/af_alg.c @@ -480,33 +480,6 @@ int af_alg_cmsg_send(struct msghdr *msg, struct af_alg_control *con) } EXPORT_SYMBOL_GPL(af_alg_cmsg_send); -int af_alg_wait_for_completion(int err, struct af_alg_completion *completion) -{ - switch (err) { - case -EINPROGRESS: - case -EIOCBQUEUED: - wait_for_completion(>completion); - reinit_completion(>completion); - err = completion->err; - break; - }; - - return err; -} -EXPORT_SYMBOL_GPL(af_alg_wait_for_completion); - -void af_alg_complete(struct crypto_async_request *req, int err) -{ - struct af_alg_completion *completion = req->data; - - if (err == -EINPROGRESS) - return; - - completion->err = err; - complete(>completion); -} -EXPORT_SYMBOL_GPL(af_alg_complete); - static int __init af_alg_init(void) { int err = proto_register(_proto, 0); diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c index 8af664f..4881cb1 100644 --- a/crypto/algif_aead.c +++ b/crypto/algif_aead.c @@ -57,7 +57,7 @@ struct aead_ctx { void *iv; - struct af_alg_completion completion; + struct crypto_wait wait; unsigned long used; @@ -648,10 +648,10 @@ static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags) used, ctx->iv); aead_request_set_ad(>aead_req, ctx->aead_assoclen); - err = af_alg_wait_for_completion(ctx->enc ? -crypto_aead_encrypt(>aead_req) : -crypto_aead_decrypt(>aead_req), ->completion); + err = crypto_wait_req(ctx->enc ? 
+ crypto_aead_encrypt(>aead_req) : + crypto_aead_decrypt(>aead_req), + >wait); if (err) { /* EBADMSG implies a valid cipher operation took place */ @@ -912,7 +912,7 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk) ctx->enc = 0; ctx->tsgl.cur = 0; ctx->aead_assoclen = 0; - af_alg_init_completion(>completion); + crypto_init_wait(>wait); sg_init_table(ctx->tsgl.sg, ALG_MAX_PAGES); INIT_LIST_HEAD(>list); @@ -920,7 +920,7 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk) aead_request_set_tfm(>aead_req, aead); aead_request_set_callback(>aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG, - af_alg_complete, >completion); + crypto_req_done, >wait); sk->sk_destruct = aead_sock_destruct; diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c index 5e92bd2..6a6739a 100644 --- a/crypto/algif_hash.c +++ b/crypto/algif_hash.c @@ -26,7 +26,7 @@ struct hash_ctx { u8 *result; - struct af_alg_completion completion; + struct crypto_wait wait; unsigned int len; bool more; @@ -88,8 +88,7 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg, if ((msg->msg_flags & MSG_MORE)) hash_free_result(sk, ctx); - err = af_alg_wait_for_completion(crypto_ahash_init(>req), - >completion); + err = crypto_wait_req(crypto_ahash_init(>req), >wait); if (err) goto unlock; } @@ -110,8 +109,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg, ahash_request_set_crypt(>req, ctx->sgl.sg, NULL, len); - err = af_alg_wait_for_completion(crypto_ahash_update(>req), ->completion); + err = crypto_wait_req(crypto_ahash_update(>req), + >wait); af_alg_free_sg(>sgl); if (err) goto unlock; @@ -129,8 +128,8 @@ static int hash_sendmsg(struct socket *sock, struct msghdr *msg, goto unlock; ahash_request_set_crypt(>req, NULL, ctx->result, 0); - err = af_alg_wait_for_completion(crypto_ahash_final(>req), ->completion); + err =
[PATCH v3 13/28] crypto: adapt api sample to -EIOCBQUEUED as backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 Documentation/crypto/api-samples.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/crypto/api-samples.rst b/Documentation/crypto/api-samples.rst
index 2531948..c6aa4ba 100644
--- a/Documentation/crypto/api-samples.rst
+++ b/Documentation/crypto/api-samples.rst
@@ -47,7 +47,7 @@ Code Example For Symmetric Key Cipher Operation
     case 0:
         break;
     case -EINPROGRESS:
-    case -EBUSY:
+    case -EIOCBQUEUED:
         rc = wait_for_completion_interruptible(
             &sk->result.completion);
         if (!rc && !sk->result.err) {
-- 
2.1.4
[PATCH v3 11/28] cifs: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 fs/cifs/smb2ops.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 941c40b..00b1143 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -1934,7 +1934,7 @@ crypt_message(struct TCP_Server_Info *server, struct smb_rqst *rqst, int enc)
 	rc = enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
 
-	if (rc == -EINPROGRESS || rc == -EBUSY) {
+	if (rc == -EINPROGRESS || rc == -EIOCBQUEUED) {
 		wait_for_completion(&result.completion);
 		rc = result.err;
 	}
-- 
2.1.4
[PATCH v3 09/28] dm: verity: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 drivers/md/dm-verity-target.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index b46705e..4fe7d18 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -116,7 +116,7 @@ static inline int verity_complete_op(struct verity_result *res, int ret)
 		break;
 
 	case -EINPROGRESS:
-	case -EBUSY:
+	case -EIOCBQUEUED:
 		ret = wait_for_completion_interruptible(&res->completion);
 		if (!ret)
 			ret = res->err;
-- 
2.1.4
[PATCH v3 07/28] crypto: qce: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/qce/sha.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
index 47e114a..a21d2a1c 100644
--- a/drivers/crypto/qce/sha.c
+++ b/drivers/crypto/qce/sha.c
@@ -421,7 +421,7 @@ static int qce_ahash_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
 	ahash_request_set_crypt(req, &sg, ctx->authkey, keylen);
 
 	ret = crypto_ahash_digest(req);
-	if (ret == -EINPROGRESS || ret == -EBUSY) {
+	if (ret == -EINPROGRESS || ret == -EIOCBQUEUED) {
 		ret = wait_for_completion_interruptible(&result.completion);
 		if (!ret)
 			ret = result.error;
-- 
2.1.4
[PATCH v3 05/28] crypto: mediatek: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/mediatek/mtk-aes.c | 2 +-
 drivers/crypto/mediatek/mtk-sha.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/mediatek/mtk-aes.c b/drivers/crypto/mediatek/mtk-aes.c
index 9e845e8..5254e13 100644
--- a/drivers/crypto/mediatek/mtk-aes.c
+++ b/drivers/crypto/mediatek/mtk-aes.c
@@ -1012,7 +1012,7 @@ static int mtk_aes_gcm_setkey(struct crypto_aead *aead, const u8 *key,
 			      AES_BLOCK_SIZE, data->iv);
 
 	err = crypto_skcipher_encrypt(&data->req);
-	if (err == -EINPROGRESS || err == -EBUSY) {
+	if (err == -EINPROGRESS || err == -EIOCBQUEUED) {
 		err = wait_for_completion_interruptible(
 			&data->result.completion);
 		if (!err)
diff --git a/drivers/crypto/mediatek/mtk-sha.c b/drivers/crypto/mediatek/mtk-sha.c
index 5f4f845..5c75b50 100644
--- a/drivers/crypto/mediatek/mtk-sha.c
+++ b/drivers/crypto/mediatek/mtk-sha.c
@@ -782,7 +782,7 @@ static int mtk_sha_finup(struct ahash_request *req)
 	ctx->flags |= SHA_FLAGS_FINUP;
 
 	err1 = mtk_sha_update(req);
-	if (err1 == -EINPROGRESS || err1 == -EBUSY)
+	if (err1 == -EINPROGRESS || err1 == -EIOCBQUEUED)
 		return err1;
 	/*
 	 * final() has to be always called to cleanup resources
-- 
2.1.4
[PATCH v3 03/28] crypto: ccp: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/ccp/ccp-crypto-main.c | 10 +-
 drivers/crypto/ccp/ccp-dev.c | 8 +---
 drivers/crypto/ccp/ccp-dmaengine.c | 2 +-
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index 8dccbdd..dff1998 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -84,7 +84,7 @@ struct ccp_crypto_cpu {
 
 static inline bool ccp_crypto_success(int err)
 {
-	if (err && (err != -EINPROGRESS) && (err != -EBUSY))
+	if (err && (err != -EINPROGRESS) && (err != -EIOCBQUEUED))
 		return false;
 
 	return true;
@@ -148,7 +148,7 @@ static void ccp_crypto_complete(void *data, int err)
 
 	if (err == -EINPROGRESS) {
 		/* Only propagate the -EINPROGRESS if necessary */
-		if (crypto_cmd->ret == -EBUSY) {
+		if (crypto_cmd->ret == -EIOCBQUEUED) {
 			crypto_cmd->ret = -EINPROGRESS;
 			req->complete(req, -EINPROGRESS);
 		}
@@ -166,8 +166,8 @@ static void ccp_crypto_complete(void *data, int err)
 		backlog->req->complete(backlog->req, -EINPROGRESS);
 	}
 
-	/* Transition the state from -EBUSY to -EINPROGRESS first */
-	if (crypto_cmd->ret == -EBUSY)
+	/* Transition the state from -EIOCBQUEUED to -EINPROGRESS first */
+	if (crypto_cmd->ret == -EIOCBQUEUED)
 		req->complete(req, -EINPROGRESS);
 
 	/* Completion callbacks */
@@ -243,7 +243,7 @@ static int ccp_crypto_enqueue_cmd(struct ccp_crypto_cmd *crypto_cmd)
 	}
 
 	if (req_queue.cmd_count >= CCP_CRYPTO_MAX_QLEN) {
-		ret = -EBUSY;
+		ret = -EIOCBQUEUED;
 		if (req_queue.backlog == &req_queue.cmds)
 			req_queue.backlog = &crypto_cmd->entry;
 	}
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 2506b50..b7006d7 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -269,7 +269,7 @@ EXPORT_SYMBOL_GPL(ccp_version);
  * Queue a cmd to be processed by the CCP. If queueing the cmd
  * would exceed the defined length of the cmd queue the cmd will
  * only be queued if the CCP_CMD_MAY_BACKLOG flag is set and will
- * result in a return code of -EBUSY.
+ * result in a return code of -EIOCBQUEUED;
 *
 * The callback routine specified in the ccp_cmd struct will be
 * called to notify the caller of completion (if the cmd was not
@@ -280,7 +280,7 @@ EXPORT_SYMBOL_GPL(ccp_version);
 *
 * The cmd has been successfully queued if:
 *   the return code is -EINPROGRESS or
- *   the return code is -EBUSY and CCP_CMD_MAY_BACKLOG flag is set
+ *   the return code is -EIOCBQUEUED
 */
int ccp_enqueue_cmd(struct ccp_cmd *cmd)
{
@@ -307,8 +307,10 @@ int ccp_enqueue_cmd(struct ccp_cmd *cmd)
 	if (ccp->cmd_count >= MAX_CMD_QLEN) {
 		ret = -EBUSY;
-		if (cmd->flags & CCP_CMD_MAY_BACKLOG)
+		if (cmd->flags & CCP_CMD_MAY_BACKLOG) {
 			list_add_tail(&cmd->entry, &ccp->backlog);
+			ret = -EIOCBQUEUED;
+		}
 	} else {
 		ret = -EINPROGRESS;
 		ccp->cmd_count++;
diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
index e00be01..ab67304 100644
--- a/drivers/crypto/ccp/ccp-dmaengine.c
+++ b/drivers/crypto/ccp/ccp-dmaengine.c
@@ -146,7 +146,7 @@ static int ccp_issue_next_cmd(struct ccp_dma_desc *desc)
 		desc->tx_desc.cookie, cmd);
 
 	ret = ccp_enqueue_cmd(&desc->ccp_cmd);
-	if (!ret || (ret == -EINPROGRESS) || (ret == -EBUSY))
+	if (!ret || (ret == -EINPROGRESS) || (ret == -EIOCBQUEUED))
 		return 0;
 
 	dev_dbg(desc->ccp->dev, "%s - error: ret=%d, tx %d, cmd=%p\n", __func__,
-- 
2.1.4
[PATCH v3 02/28] crypto: atmel: use -EIOCBQUEUED for backlog indication
Replace -EBUSY with -EIOCBQUEUED for backlog queueing indication as part of new API.

Signed-off-by: Gilad Ben-Yossef
---

This patch should be squashed with the first patch in the series when applied.

 drivers/crypto/atmel-sha.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index a948202..30223ee 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -1204,7 +1204,7 @@ static int atmel_sha_finup(struct ahash_request *req)
 	ctx->flags |= SHA_FLAGS_FINUP;
 
 	err1 = atmel_sha_update(req);
-	if (err1 == -EINPROGRESS || err1 == -EBUSY)
+	if (err1 == -EINPROGRESS || err1 == -EIOCBQUEUED)
 		return err1;
 
 	/*
-- 
2.1.4
[PATCH v3 01/28] crypto: change backlog return code to -EIOCBQUEUED
The crypto API was using the -EBUSY return value to indicate both a hard
failure to submit a crypto operation into a transformation provider when
the latter was busy and the backlog mechanism was not enabled, as well as
a notification that the operation was queued into the backlog when the
backlog mechanism was enabled.

Having the same return code indicate two very different conditions
depending on a flag is both error prone and requires an extra runtime
check like the following to discern between the cases:

	if (err == -EINPROGRESS ||
	    (err == -EBUSY && (ahash_request_flags(req) &
			       CRYPTO_TFM_REQ_MAY_BACKLOG)))

This patch changes the return code used to indicate a crypto op was
queued in the backlog to -EIOCBQUEUED, thus resolving both issues.

Signed-off-by: Gilad Ben-Yossef
---
 crypto/af_alg.c                     |  2 +-
 crypto/ahash.c                      | 12 +++-
 crypto/algapi.c                     |  6 --
 crypto/asymmetric_keys/public_key.c |  2 +-
 crypto/chacha20poly1305.c           |  2 +-
 crypto/cryptd.c                     |  4 +---
 crypto/cts.c                        |  6 ++
 crypto/drbg.c                       |  2 +-
 crypto/gcm.c                        |  2 +-
 crypto/lrw.c                        |  8 ++--
 crypto/rsa-pkcs1pad.c               | 16
 crypto/tcrypt.c                     |  6 +++---
 crypto/testmgr.c                    | 12 ++--
 crypto/xts.c                        |  8 ++--
 14 files changed, 32 insertions(+), 56 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 3556d8e..c67daba 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -484,7 +484,7 @@ int af_alg_wait_for_completion(int err, struct af_alg_completion *completion)
 {
 	switch (err) {
 	case -EINPROGRESS:
-	case -EBUSY:
+	case -EIOCBQUEUED:
 		wait_for_completion(&completion->completion);
 		reinit_completion(&completion->completion);
 		err = completion->err;
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 826cd7a..65d08db 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -334,9 +334,7 @@ static int ahash_op_unaligned(struct ahash_request *req,
 		return err;

 	err = op(req);
-	if (err == -EINPROGRESS ||
-	    (err == -EBUSY && (ahash_request_flags(req) &
-			       CRYPTO_TFM_REQ_MAY_BACKLOG)))
+	if (err == -EINPROGRESS || err == -EIOCBQUEUED)
 		return err;

 	ahash_restore_req(req, err);
@@ -394,9 +392,7 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 	req->base.complete = ahash_def_finup_done2;

 	err = crypto_ahash_reqtfm(req)->final(req);
-	if (err == -EINPROGRESS ||
-	    (err == -EBUSY && (ahash_request_flags(req) &
-			       CRYPTO_TFM_REQ_MAY_BACKLOG)))
+	if (err == -EINPROGRESS || err == -EIOCBQUEUED)
 		return err;

 out:
@@ -432,9 +428,7 @@ static int ahash_def_finup(struct ahash_request *req)
 		return err;

 	err = tfm->update(req);
-	if (err == -EINPROGRESS ||
-	    (err == -EBUSY && (ahash_request_flags(req) &
-			       CRYPTO_TFM_REQ_MAY_BACKLOG)))
+	if (err == -EINPROGRESS || err == -EIOCBQUEUED)
 		return err;

 	return ahash_def_finup_finish1(req, err);
diff --git a/crypto/algapi.c b/crypto/algapi.c
index e4cc761..3bfd1fa 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -897,9 +897,11 @@ int crypto_enqueue_request(struct crypto_queue *queue,
 	int err = -EINPROGRESS;

 	if (unlikely(queue->qlen >= queue->max_qlen)) {
-		err = -EBUSY;
-		if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+		if (!(request->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)) {
+			err = -EBUSY;
 			goto out;
+		}
+		err = -EIOCBQUEUED;
 		if (queue->backlog == &queue->list)
 			queue->backlog = &request->list;
 	}
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 3cd6e12..3fad1fd 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -141,7 +141,7 @@ int public_key_verify_signature(const struct public_key *pkey,
 	 * signature and returns that to us.
 	 */
 	ret = crypto_akcipher_verify(req);
-	if ((ret == -EINPROGRESS) || (ret == -EBUSY)) {
+	if ((ret == -EINPROGRESS) || (ret == -EIOCBQUEUED)) {
 		wait_for_completion(&compl.completion);
 		ret = compl.err;
 	}
diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c
index db1bc31..e0e2785 100644
--- a/crypto/chacha20poly1305.c
+++ b/crypto/chacha20poly1305.c
@@ -79,7 +79,7 @@ static inline void async_done_continue(struct aead_request *req, int err,
 	if (!err)
[PATCH v3 00/28] simplify crypto wait for async op
Many users of kernel async crypto services have a pattern of starting an
async crypto op and then using a completion to wait for it to end.

This patch set simplifies this common use case in two ways:

First, by giving the case where a request was queued to the backlog a
separate return code (-EIOCBQUEUED) of its own, rather than sharing the
-EBUSY return code with the fatal error of a busy provider when backlog
is not enabled.

Next, this change is then built on to create a generic way to wait for
an async crypto operation to complete.

The end result is that after replacing all the call sites I could find,
the code is smaller by ~340 lines, a branch is saved in some cases at
run time, and the code is more straightforward to follow.

Please note that patches 1-13 should be squashed together when applied,
in order to function correctly. They are only separated out to ease
review.

The patch set was boot tested on x86_64 and arm64, which at the very
least tests the crypto users via testmgr and tcrypt, but I do note that
I do not have access to some of the HW whose drivers are modified, nor
do I claim I was able to test all of the corner cases.

Last but not least, I do apologize for the size of this patch set and
the number of recipients, but I did have to touch every crypto async API
user in the kernel. May the 340 deleted lines serve as penance for my
sin :-)

Changes from v2:
- Patch title changed from "introduce crypto wait for async op" to
  better reflect the current state.
- Rebase on top of latest linux-next.
- Add a new return code of -EIOCBQUEUED for backlog queueing, as
  suggested by Herbert Xu.
- Transform more users to the new API.
- Update the drbg change to account for new init as indicated by
  Stephan Muller.

Changes from v1:
- Address review comments from Eric Biggers.
- Separated out bug fixes of existing code and rebase on top of that
  patch set.
- Rename 'ecr' to 'wait' in fscrypto code.
- Split patch introducing the new API from the change moving over the
  algif code which it originated from to the new API.
- Inline crypto_wait_req().
- Some code indentation fixes.

Gilad Ben-Yossef (28):
  crypto: change backlog return code to -EIOCBQUEUED
  crypto: atmel: use -EIOCBQUEUED for backlog indication
  crypto: ccm: use -EIOCBQUEUED for backlog indication
  crypto: marvell/cesa: use -EIOCBQUEUED for backlog indication
  crypto: mediatek: use -EIOCBQUEUED for backlog indication
  crypto: omap: use -EIOCBQUEUED for backlog indication
  crypto: qce: use -EIOCBQUEUED for backlog indication
  crypto: talitos: use -EIOCBQUEUED for backlog indication
  dm: verity: use -EIOCBQUEUED for backlog indication
  fscrypt: use -EIOCBQUEUED for backlog indication
  cifs: use -EIOCBQUEUED for backlog indication
  ima: use -EIOCBQUEUED for backlog indication
  crypto: adapt api sample to -EIOCBQUEUED as backlog indication
  crypto: introduce crypto wait for async op
  crypto: move algif to generic async completion
  crypto: move pub key to generic async completion
  crypto: move drbg to generic async completion
  crypto: move gcm to generic async completion
  crypto: move testmgr to generic async completion
  dm: move dm-verity to generic async completion
  fscrypt: move to generic async completion
  cifs: move to generic async completion
  ima: move to generic async completion
  crypto: tcrypt: move to generic async completion
  crypto: talitos: move to generic async completion
  crypto: qce: move to generic async completion
  crypto: mediatek: move to generic async completion
  crypto: adapt api sample to use async. op wait

 Documentation/crypto/api-samples.rst |  52 ++---
 crypto/af_alg.c                      |  27 -
 crypto/ahash.c                       |  12 +--
 crypto/algapi.c                      |   6 +-
 crypto/algif_aead.c                  |  14 +--
 crypto/algif_hash.c                  |  29 +++--
 crypto/algif_skcipher.c              |  15 ++-
 crypto/api.c                         |  13 +++
 crypto/asymmetric_keys/public_key.c  |  28 +
 crypto/chacha20poly1305.c            |   2 +-
 crypto/cryptd.c                      |   4 +-
 crypto/cts.c                         |   6 +-
 crypto/drbg.c                        |  36 ++-
 crypto/gcm.c                         |  32 ++
 crypto/lrw.c                         |   8 +-
 crypto/rsa-pkcs1pad.c                |  16 +--
 crypto/tcrypt.c                      |  84 +--
 crypto/testmgr.c                     | 204 ---
 crypto/xts.c                         |   8 +-
 drivers/crypto/atmel-sha.c           |   2 +-
 drivers/crypto/ccp/ccp-crypto-main.c |  10 +-
 drivers/crypto/ccp/ccp-dev.c         |   8 +-
 drivers/crypto/ccp/ccp-dmaengine.c   |   2 +-
 drivers/crypto/marvell/cesa.c        |   2 +-
 drivers/crypto/marvell/cesa.h        |   2 +-
 drivers/crypto/mediatek/mtk-aes.c    |  31 +-
 drivers/crypto/mediatek/mtk-sha.c    |   2 +-
 drivers/crypto/omap-sham.c           |   2 +-
Re: [PATCHv2 2/3] usb: gadget: f_uac*: Reduce code duplication
Hi Julian,

[auto build test ERROR on balbi-usb/next]
[also build test ERROR on next-20170630]
[cannot apply to v4.12-rc7]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Julian-Scheel/USB-Audio-Gadget-Support-multiple-sampling-rates/20170702-215432
base:   https://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb.git next
config: x86_64-randconfig-x012-201727 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All errors (new ones prefixed by >>):

   drivers/usb/gadget/function/usb_f_uac2.o: In function `f_uac_attr_release':
>> (.text+0x0): multiple definition of `f_uac_attr_release'
   drivers/usb/gadget/function/usb_f_uac1.o:(.text+0x0): first defined here

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Re: [PATCH 3/6] cpufreq: governor: Drop min_sampling_rate
On Fri, Jun 30, 2017 at 11:10:33AM +0530, Viresh Kumar wrote:
> On 30-06-17, 06:53, Dominik Brodowski wrote:
> > On Fri, Jun 30, 2017 at 09:04:25AM +0530, Viresh Kumar wrote:
> > > On 29-06-17, 20:01, Dominik Brodowski wrote:
> > > > On Thu, Jun 29, 2017 at 04:29:06PM +0530, Viresh Kumar wrote:
> > > > > The cpufreq core and governors aren't supposed to set a limit on how
> > > > > fast we want to try changing the frequency. This is currently done for
> > > > > the legacy governors with help of min_sampling_rate.
> > > > >
> > > > > At worst, we may end up setting the sampling rate to a value lower than
> > > > > the rate at which frequency can be changed and then one of the CPUs in
> > > > > the policy will be only changing frequency for ever.
> > > >
> > > > Is it safe to issue requests to change the CPU frequency so frequently,
> > >
> > > Well, I assumed so. I am not sure the hardware would break though.
> > > Overheating ?
> > >
> > > > even on historic hardware such as speedstep-{ich,smi,centrino}? In the
> > > > past,
>
> speedstep-smi is the only one which sets transition_latency to
> CPUFREQ_ETERNAL and the others are putting some meaningful values. So
> yes, they should be doing DVFS dynamically.
>
> > > > these checks more or less disallowed the running of dynamic frequency
> > > > scaling at least on speedstep-smi[*],
> > >
> > > We must by doing dynamic freq scaling even without this patch. I don't
> > > see why you say the above then.
> > >
> > > All we do here is that we get rid of the limit on how soon we can
> > > change the freq again.
> >
> > Well, as I understand it, first generation "speedstep" was designed more or
> > less to switch frequencies only when AC power was lost or restored.
> >
> > The Linux implementation merely said: "no on-the-fly changes", but switch
> > frequencies whenever a user explicitly requested such a change (presumably
> > only every once in an unspecified while).
> >
> > This same reasoning may be present in other drivers using CPUFREQ_ETERNAL.
>
> Thanks for the explanation here and I am convinced that this series
> has at least done one thing wrong. And that is removal of
> max_transition_latency from governors and allowing ondemand to run on
> such platforms (which may end up breaking them).
>
> So I will actually modify that patch and set max_transition_latency to
> CPUFREQ_ETERNAL for ondemand/conservative instead of 10ms. Also we
> should do the same for schedutil as well, so that will also use the
> max_transition_latency field.
>
> But I hope, this patch will still be fine. Right ?

Indeed, I have no comments otherwise. Thanks!

Best
	Dominik