Re: [RFC PATCH v3] crypto: Add IV generation algorithms
Hi Gilad,

On 19 January 2017 at 15:17, Gilad Ben-Yossef wrote:
> I tried adding sg_init_table() where I thought it was appropriate and
> it didn't resolve the issue.
>
> For what it's worth, my guess is that the difference between our
> setups is not related to Arm but to other options or the storage I'm
> using.

I was able to reproduce this again on my qemu setup with a 1GB virtual
disk. That is the same thing I do with the x86 setup as well.

> Are you using cryptd?

You mean config CRYPTO_CRYPTD?

-Binoy
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [RFC PATCH v3] crypto: Add IV generation algorithms
On Thu, Jan 19, 2017 at 6:42 AM, Binoy Jayan wrote:
> Hi Gilad,
>
> On 18 January 2017 at 20:51, Gilad Ben-Yossef wrote:
>> I have some review comments and a bug report -
>
> Thank you very much for testing this on ARM and for the comments.

My pleasure. Thanks for writing the code :-)

>>> +	rinfo.iv_sector = ctx->cc_sector;
>>> +	rinfo.nents = nents;
>>> +	rinfo.iv = iv;
>>> +
>>> +	skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
>>
>> Also, where do the scatterlists src2 and dst2 that you use
>> sg_set_page() on get sg_init_table() called on them?
>> I couldn't figure it out...
>
> Thank you for pointing this out. I missed adding sg_init_table(src2, 1)
> and sg_init_table(dst2, 1), but sg_set_page is used in geniv_iter_block.
> This is probably the reason for the panic on the ARM platform. However it
> ran fine under qemu-x86. Maybe I should set up an arm platform too
> for testing.

I tried adding sg_init_table() where I thought it was appropriate and
it didn't resolve the issue.

For what it's worth, my guess is that the difference between our
setups is not related to Arm but to other options or the storage I'm
using.

Are you using cryptd?

Thanks,
Gilad

> Regards,
> Binoy

--
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru
Re: [RFC PATCH v3] crypto: Add IV generation algorithms
Hi Gilad,

On 18 January 2017 at 20:51, Gilad Ben-Yossef wrote:
> I have some review comments and a bug report -

Thank you very much for testing this on ARM and for the comments.

> I'm pretty sure this needs to be
>
> n2 = bio_segments(ctx->bio_out);

Yes, you are right; that was a typo :)

>> +
>> +	rinfo.is_write = bio_data_dir(ctx->bio_in) == WRITE;
>
> Please consider wrapping the above boolean expression in parentheses.

Well, I can do that to enhance the clarity.

>> +	rinfo.iv_sector = ctx->cc_sector;
>> +	rinfo.nents = nents;
>> +	rinfo.iv = iv;
>> +
>> +	skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,
>
> Also, where do the scatterlists src2 and dst2 that you use
> sg_set_page() on get sg_init_table() called on them?
> I couldn't figure it out...

Thank you for pointing this out. I missed adding sg_init_table(src2, 1)
and sg_init_table(dst2, 1), but sg_set_page is used in geniv_iter_block.
This is probably the reason for the panic on the ARM platform. However it
ran fine under qemu-x86. Maybe I should set up an arm platform too
for testing.

Regards,
Binoy
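[ For readers following along: the usual pattern for one-entry
scatterlists in kernel code is to initialize them before filling them
in. The sketch below only illustrates that pattern; the src_page,
dst_page, len and offset variables are hypothetical, and this is not
the patch's actual code. ]

```c
/* Illustrative sketch of the sg_init_table()/sg_set_page() pattern
 * being discussed; src_page, dst_page, len and offset are hypothetical.
 * sg_init_table() zeroes the entries and sets the end marker, so a
 * later walk of the list does not stray into uninitialized memory.
 */
struct scatterlist src2, dst2;

sg_init_table(&src2, 1);
sg_init_table(&dst2, 1);
sg_set_page(&src2, src_page, len, offset);
sg_set_page(&dst2, dst_page, len, offset);
```

[ For a single buffer that is already mapped in the kernel's linear
address space, sg_init_one() combines both steps. ]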
Re: [RFC PATCH v3] crypto: Add IV generation algorithms
Hi Binoy,

On Wed, Jan 18, 2017 at 11:40 AM, Binoy Jayan wrote:
> Currently, the IV generation algorithms are implemented in dm-crypt.c.
> The goal is to move these algorithms from the dm layer to the kernel
> crypto layer by implementing them as template ciphers, so they can be
> implemented in hardware for performance. As part of this patchset, the
> IV-generation code is moved from the dm layer to the crypto layer, and
> the dm layer is adapted to send a whole 'bio' (as defined in the block
> layer) at a time. Each bio contains an in-memory representation of
> physically contiguous disk blocks. The dm layer sets up a chained
> scatterlist of these blocks, split into physically contiguous segments
> in memory, so that DMA can be performed. Also, the key management code
> is moved from the dm layer to the crypto layer, since the key selection
> for encrypting neighboring sectors depends on the keycount.
>
> Synchronous crypto requests to encrypt/decrypt a sector are processed
> sequentially. Asynchronous requests, if processed in parallel, are freed
> in the async callback. The dm layer allocates space for the IV. Hardware
> implementations can choose to make use of this space to generate their
> IVs sequentially, or allocate it on their own.
> Interface to the crypto layer - include/crypto/geniv.h
>
> Signed-off-by: Binoy Jayan
> ---

I have some review comments and a bug report -

>   */
> -static int crypt_convert(struct crypt_config *cc,
> -			 struct convert_context *ctx)
> +
> +static int crypt_convert_bio(struct crypt_config *cc,
> +			     struct convert_context *ctx)
>  {
> +	unsigned int cryptlen, n1, n2, nents, i = 0, bytes = 0;
> +	struct skcipher_request *req;
> +	struct dm_crypt_request *dmreq;
> +	struct geniv_req_info rinfo;
> +	struct bio_vec bv_in, bv_out;
>  	int r;
> +	u8 *iv;
>
>  	atomic_set(&ctx->cc_pending, 1);
> +	crypt_alloc_req(cc, ctx);
> +
> +	req = ctx->req;
> +	dmreq = dmreq_of_req(cc, req);
> +	iv = iv_of_dmreq(cc, dmreq);
>
> -	while (ctx->iter_in.bi_size && ctx->iter_out.bi_size) {
> +	n1 = bio_segments(ctx->bio_in);
> +	n2 = bio_segments(ctx->bio_in);

I'm pretty sure this needs to be

n2 = bio_segments(ctx->bio_out);

> +	nents = n1 > n2 ? n1 : n2;
> +	nents = nents > MAX_SG_LIST ? MAX_SG_LIST : nents;
> +	cryptlen = ctx->iter_in.bi_size;
>
> -	crypt_alloc_req(cc, ctx);
> +	DMDEBUG("dm-crypt:%s: segments:[in=%u, out=%u] bi_size=%u\n",
> +		bio_data_dir(ctx->bio_in) == WRITE ? "write" : "read",
> +		n1, n2, cryptlen);
>
> -	/* There was an error while processing the request. */
> -	default:
> -		atomic_dec(&ctx->cc_pending);
> -		return r;
> -	}
> +		sg_set_page(&dmreq->sg_in[i], bv_in.bv_page, bv_in.bv_len,
> +			    bv_in.bv_offset);
> +		sg_set_page(&dmreq->sg_out[i], bv_out.bv_page, bv_out.bv_len,
> +			    bv_out.bv_offset);
> +
> +		bio_advance_iter(ctx->bio_in, &ctx->iter_in, bv_in.bv_len);
> +		bio_advance_iter(ctx->bio_out, &ctx->iter_out, bv_out.bv_len);
> +
> +		bytes += bv_in.bv_len;
> +		i++;
>  	}
>
> -	return 0;
> +	DMDEBUG("dm-crypt: Processed %u of %u bytes\n", bytes, cryptlen);
> +
> +	rinfo.is_write = bio_data_dir(ctx->bio_in) == WRITE;

Please consider wrapping the above boolean expression in parentheses.
> +	rinfo.iv_sector = ctx->cc_sector;
> +	rinfo.nents = nents;
> +	rinfo.iv = iv;
> +
> +	skcipher_request_set_crypt(req, dmreq->sg_in, dmreq->sg_out,

Also, where do the scatterlists src2 and dst2 that you use
sg_set_page() on get sg_init_table() called on them?
I couldn't figure it out...

Last but not least, when performing the following sequence on Arm64
(on the latest Qemu Virt platform) -

1. cryptsetup luksFormat fs3.img
2. cryptsetup open --type luks fs3.img croot
3. mke2fs /dev/mapper/croot

[ fs3.img is a 16MB file for loopback ]

the attached kernel panic happens. The same does not occur without the
patch. Let me know if you need any additional information to recreate
it. I've tried to debug it a little but did not come up with anything
useful aside from the above review notes.

Thanks!

# mke2fs /dev/mapper/croot
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3584 inodes, 14336 blocks
716 blocks (5%) reserved for the super user
[RFC PATCH v3] crypto: Add IV generation algorithms
Currently, the IV generation algorithms are implemented in dm-crypt.c.
The goal is to move these algorithms from the dm layer to the kernel
crypto layer by implementing them as template ciphers, so they can be
implemented in hardware for performance. As part of this patchset, the
IV-generation code is moved from the dm layer to the crypto layer, and
the dm layer is adapted to send a whole 'bio' (as defined in the block
layer) at a time. Each bio contains an in-memory representation of
physically contiguous disk blocks. The dm layer sets up a chained
scatterlist of these blocks, split into physically contiguous segments
in memory, so that DMA can be performed. Also, the key management code
is moved from the dm layer to the crypto layer, since the key selection
for encrypting neighboring sectors depends on the keycount.

Synchronous crypto requests to encrypt/decrypt a sector are processed
sequentially. Asynchronous requests, if processed in parallel, are freed
in the async callback. The dm layer allocates space for the IV. Hardware
implementations can choose to make use of this space to generate their
IVs sequentially, or allocate it on their own.
Interface to the crypto layer - include/crypto/geniv.h

Signed-off-by: Binoy Jayan
---
 drivers/md/dm-crypt.c  | 1891 ++--
 include/crypto/geniv.h |   47 ++
 2 files changed, 1399 insertions(+), 539 deletions(-)
 create mode 100644 include/crypto/geniv.h

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 7c6c572..7275b0f 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -32,170 +32,113 @@
 #include
 #include
 #include
- #include
-
-#define DM_MSG_PREFIX "crypt"
-
-/*
- * context holding the current state of a multi-part conversion
- */
-struct convert_context {
-	struct completion restart;
-	struct bio *bio_in;
-	struct bio *bio_out;
-	struct bvec_iter iter_in;
-	struct bvec_iter iter_out;
-	sector_t cc_sector;
-	atomic_t cc_pending;
-	struct skcipher_request *req;
+#include
+#include
+#include
+#include
+
+#define DM_MSG_PREFIX		"crypt"
+#define MAX_SG_LIST		(BIO_MAX_PAGES * 8)
+#define MIN_IOS			64
+#define LMK_SEED_SIZE		64 /* hash + 0 */
+#define TCW_WHITENING_SIZE	16
+
+struct geniv_ctx;
+struct geniv_req_ctx;
+
+/* Sub request for each of the skcipher_request's for a segment */
+struct geniv_subreq {
+	struct skcipher_request req CRYPTO_MINALIGN_ATTR;
+	struct scatterlist src;
+	struct scatterlist dst;
+	int n;
+	struct geniv_req_ctx *rctx;
 };
 
-/*
- * per bio private data
- */
-struct dm_crypt_io {
-	struct crypt_config *cc;
-	struct bio *base_bio;
-	struct work_struct work;
-
-	struct convert_context ctx;
-
-	atomic_t io_pending;
-	int error;
-	sector_t sector;
-
-	struct rb_node rb_node;
-} CRYPTO_MINALIGN_ATTR;
-
-struct dm_crypt_request {
-	struct convert_context *ctx;
-	struct scatterlist sg_in;
-	struct scatterlist sg_out;
+struct geniv_req_ctx {
+	struct geniv_subreq *subreq;
+	bool is_write;
 	sector_t iv_sector;
+	unsigned int nents;
+	u8 *iv;
+	struct completion restart;
+	atomic_t req_pending;
+	struct skcipher_request *req;
 };
 
-struct crypt_config;
-
 struct crypt_iv_operations {
-	int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
-		   const char *opts);
-	void (*dtr)(struct crypt_config *cc);
-	int (*init)(struct crypt_config *cc);
-	int (*wipe)(struct crypt_config *cc);
-	int (*generator)(struct crypt_config *cc, u8 *iv,
-			 struct dm_crypt_request *dmreq);
-	int (*post)(struct crypt_config *cc, u8 *iv,
-		    struct dm_crypt_request *dmreq);
+	int (*ctr)(struct geniv_ctx *ctx);
+	void (*dtr)(struct geniv_ctx *ctx);
+	int (*init)(struct geniv_ctx *ctx);
+	int (*wipe)(struct geniv_ctx *ctx);
+	int (*generator)(struct geniv_ctx *ctx,
+			 struct geniv_req_ctx *rctx,
+			 struct geniv_subreq *subreq);
+	int (*post)(struct geniv_ctx *ctx,
+		    struct geniv_req_ctx *rctx,
+		    struct geniv_subreq *subreq);
 };
 
-struct iv_essiv_private {
+struct geniv_essiv_private {
 	struct crypto_ahash *hash_tfm;
 	u8 *salt;
 };
 
-struct iv_benbi_private {
+struct geniv_benbi_private {
 	int shift;
 };
 
-#define LMK_SEED_SIZE 64 /* hash + 0 */
-struct iv_lmk_private {
+struct geniv_lmk_private {
 	struct crypto_shash *hash_tfm;
 	u8 *seed;
 };
 
-#define TCW_WHITENING_SIZE 16
-struct iv_tcw_private {
+struct geniv_tcw_private {
 	struct crypto_shash *crc32_tfm;
 	u8 *iv_seed;
 	u8 *whitening;
 };
 
-/*
- * Crypt: maps a linear range of a block device
- * and encrypts / decrypts at the