Re: [PATCH v2.1 4/7] crypto: GnuPG based MPI lib - additional sources (part 4)

2011-10-18 Thread James Morris
On Mon, 17 Oct 2011, Kasatkin, Dmitry wrote:

 It is there for completeness and it will not be even compiled at all
 without CONFIG_MPILIB_EXTRA
 
 Still remove?

Yes, please.


-- 
James Morris
jmor...@namei.org
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] talitos: handle descriptor not found in error path

2011-10-18 Thread Herbert Xu
Kim Phillips <kim.phill...@freescale.com> wrote:
 The CDPR (Current Descriptor Pointer Register) can be unreliable
 when trying to locate an offending descriptor.  Handle that case by
 (a) not OOPSing, and (b) reverting to the machine internal copy of
 the descriptor header in order to report the correct execution unit
 error.
 
 Note: printing all execution units' ISRs is not effective because it
 results in an internal time out (ITO) error and the EU resetting its
 ISR value (at least when specifying an invalid key length on an SEC
 2.2/MPC8313E).
 
 Reported-by: Sven Schnelle <sv...@stackframe.org>
 Signed-off-by: Kim Phillips <kim.phill...@freescale.com>
 ---
 please test, as it seems I cannot reproduce the descriptor not found
 case.

So what's the verdict Kim, should I take this patch or not?

Thanks,
-- 
Email: Herbert Xu <herb...@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH 00/18] crypto: Add helper functions for parallelized LRW and XTS modes

2011-10-18 Thread Jussi Kivilinna
This series adds lrw_crypt() and xts_crypt() functions for cipher implementations
that can benefit from parallel cipher block operations. To keep the interface
flexible, the caller is responsible for allocating a buffer large enough to hold
the temporary cipher blocks. This buffer should be at least as large as the
number of parallel blocks processed at once. The crypt callback is invoked with a
buffer size of at most the parallel-blocks size and at least one block.

The series adds LRW/XTS support to serpent-sse2 and twofish-x86_64-3way, and
therefore depends on those series.
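As an aside, the chunking contract described above can be sketched in stand-alone user-space C. The names here (crypt_walk, crypt_fn_t, recorder) are illustrative only, not part of the proposed kernel interface:

```c
#include <assert.h>

#define BLOCK_SIZE 16

/* Hypothetical callback type: the helper hands it whole blocks only. */
typedef void (*crypt_fn_t)(void *ctx, unsigned char *buf, unsigned int nbytes);

/* Chunking contract of the proposed interface: walk the data in runs of
 * at most max_blks blocks (the size of the caller-allocated temporary
 * buffer), and at least one block per callback invocation. */
static unsigned int crypt_walk(void *ctx, unsigned char *data,
                               unsigned int nbytes, unsigned int max_blks,
                               crypt_fn_t fn)
{
    while (nbytes >= BLOCK_SIZE) {
        unsigned int nblocks = nbytes / BLOCK_SIZE;

        if (nblocks > max_blks)
            nblocks = max_blks;

        fn(ctx, data, nblocks * BLOCK_SIZE);
        data += nblocks * BLOCK_SIZE;
        nbytes -= nblocks * BLOCK_SIZE;
    }
    return nbytes; /* leftover bytes of a partial block; 0 if aligned */
}

/* Demo callback that records the chunk sizes it was given. */
struct recorder {
    unsigned int sizes[8];
    int n;
};

static void record_chunks(void *ctx, unsigned char *buf, unsigned int nbytes)
{
    struct recorder *r = ctx;
    (void)buf;
    r->sizes[r->n++] = nbytes;
}
```

With a 5-block input and a 3-block buffer, the callback sees one full 3-block run followed by the 2-block remainder, matching the "at most parallel size, at least one block" rule.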

Patches 1-4: include LRW fixes/cleanups, export gf128mul table and add lrw_crypt().
Patches 5-7: add LRW support to serpent-sse2, with tcrypt tests and test vectors.
Patches 8-10: add LRW support to twofish-x86_64-3way, with tcrypt tests and test vectors.
Patches 11-12: include XTS cleanup for blocksize usage and add xts_crypt().
Patches 13-15: add XTS support to serpent-sse2, with tcrypt tests and test vectors.
Patches 16-18: add XTS support to twofish-x86_64-3way, with tcrypt tests and test vectors.

---

Jussi Kivilinna (18):
  crypto: lrw: fix memleak
  crypto: lrw: use blocksize constant
  crypto: lrw: split gf128mul table initialization from setkey
  crypto: lrw: add interface for parallelized cipher implementations
  crypto: testmgr: add lrw(serpent) test vectors
  crypto: tcrypt: add lrw(serpent) tests
  crypto: serpent-sse2: add lrw support
  crypto: testmgr: add lrw(twofish) test vectors
  crypto: tcrypt: add lrw(twofish) tests
  crypto: twofish-x86_64-3way: add lrw support
  crypto: xts: use blocksize constant
  crypto: xts: add interface for parallelized cipher implementations
  crypto: testmgr: add xts(serpent) test vectors
  crypto: tcrypt: add xts(serpent) tests
  crypto: serpent-sse2: add xts support
  crypto: testmgr: add xts(twofish) test vectors
  crypto: tcrypt: add xts(twofish) tests
  crypto: twofish-x86_64-3way: add xts support


 arch/x86/crypto/serpent_sse2_glue.c |  387 ++
 arch/x86/crypto/twofish_glue_3way.c |  250 
 crypto/lrw.c|  155 ++
 crypto/serpent.c|   10 
 crypto/tcrypt.c |   28 
 crypto/tcrypt.h |2 
 crypto/testmgr.c|   60 +
 crypto/testmgr.h| 2366 +++
 crypto/twofish_common.c |   13 
 crypto/xts.c|   78 +
 include/crypto/lrw.h|   43 +
 include/crypto/serpent.h|2 
 include/crypto/twofish.h|2 
 include/crypto/xts.h|   27 
 14 files changed, 3379 insertions(+), 44 deletions(-)
 create mode 100644 include/crypto/lrw.h
 create mode 100644 include/crypto/xts.h


[PATCH 02/18] crypto: lrw: use blocksize constant

2011-10-18 Thread Jussi Kivilinna
LRW has a fixed blocksize of 16. Define LRW_BLOCK_SIZE and use it in place of
crypto_cipher_blocksize().

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/lrw.c |8 +---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/crypto/lrw.c b/crypto/lrw.c
index fca3246..bee6022 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -27,6 +27,8 @@
 #include <crypto/b128ops.h>
 #include <crypto/gf128mul.h>
 
+#define LRW_BLOCK_SIZE 16
+
 struct priv {
struct crypto_cipher *child;
/* optimizes multiplying a random (non incrementing, as at the
@@ -61,7 +63,7 @@ static int setkey(struct crypto_tfm *parent, const u8 *key,
struct crypto_cipher *child = ctx->child;
int err, i;
be128 tmp = { 0 };
-   int bsize = crypto_cipher_blocksize(child);
+   int bsize = LRW_BLOCK_SIZE;
 
crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
@@ -134,7 +136,7 @@ static int crypt(struct blkcipher_desc *d,
 {
int err;
unsigned int avail;
-   const int bs = crypto_cipher_blocksize(ctx->child);
+   const int bs = LRW_BLOCK_SIZE;
struct sinfo s = {
.tfm = crypto_cipher_tfm(ctx->child),
.fn = fn
@@ -218,7 +220,7 @@ static int init_tfm(struct crypto_tfm *tfm)
if (IS_ERR(cipher))
return PTR_ERR(cipher);
 
-   if (crypto_cipher_blocksize(cipher) != 16) {
+   if (crypto_cipher_blocksize(cipher) != LRW_BLOCK_SIZE) {
*flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
crypto_free_cipher(cipher);
return -EINVAL;



[PATCH 03/18] crypto: lrw: split gf128mul table initialization from setkey

2011-10-18 Thread Jussi Kivilinna
Split gf128mul table initialization from setkey so that it can be used outside
the lrw module.

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/lrw.c |   61 ++
 1 files changed, 40 insertions(+), 21 deletions(-)

diff --git a/crypto/lrw.c b/crypto/lrw.c
index bee6022..91c17fa 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -29,8 +29,7 @@
 
 #define LRW_BLOCK_SIZE 16
 
-struct priv {
-   struct crypto_cipher *child;
+struct lrw_table_ctx {
/* optimizes multiplying a random (non incrementing, as at the
 * start of a new sector) value with key2, we could also have
 * used 4k optimization tables or no optimization at all. In the
@@ -45,6 +44,11 @@ struct priv {
be128 mulinc[128];
 };
 
+struct priv {
+   struct crypto_cipher *child;
+   struct lrw_table_ctx table;
+};
+
 static inline void setbit128_bbe(void *b, int bit)
 {
__set_bit(bit ^ (0x80 -
@@ -56,28 +60,16 @@ static inline void setbit128_bbe(void *b, int bit)
), b);
 }
 
-static int setkey(struct crypto_tfm *parent, const u8 *key,
- unsigned int keylen)
+static int lrw_init_table(struct lrw_table_ctx *ctx, const u8 *tweak)
 {
-   struct priv *ctx = crypto_tfm_ctx(parent);
-   struct crypto_cipher *child = ctx->child;
-   int err, i;
be128 tmp = { 0 };
-   int bsize = LRW_BLOCK_SIZE;
-
-   crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-   crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
-  CRYPTO_TFM_REQ_MASK);
-   if ((err = crypto_cipher_setkey(child, key, keylen - bsize)))
-   return err;
-   crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
-CRYPTO_TFM_RES_MASK);
+   int i;
 
if (ctx->table)
gf128mul_free_64k(ctx->table);
 
/* initialize multiplication table for Key2 */
-   ctx->table = gf128mul_init_64k_bbe((be128 *)(key + keylen - bsize));
+   ctx->table = gf128mul_init_64k_bbe((be128 *)tweak);
if (!ctx->table)
return -ENOMEM;
 
@@ -91,6 +83,32 @@ static int setkey(struct crypto_tfm *parent, const u8 *key,
return 0;
 }
 
+static void lrw_free_table(struct lrw_table_ctx *ctx)
+{
+   if (ctx->table)
+   gf128mul_free_64k(ctx->table);
+}
+
+static int setkey(struct crypto_tfm *parent, const u8 *key,
+ unsigned int keylen)
+{
+   struct priv *ctx = crypto_tfm_ctx(parent);
+   struct crypto_cipher *child = ctx->child;
+   int err, bsize = LRW_BLOCK_SIZE;
+   const u8 *tweak = key + keylen - bsize;
+
+   crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+   crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
+  CRYPTO_TFM_REQ_MASK);
+   err = crypto_cipher_setkey(child, key, keylen - bsize);
+   if (err)
+   return err;
+   crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
+CRYPTO_TFM_RES_MASK);
+
+   return lrw_init_table(&ctx->table, tweak);
+}
+
 struct sinfo {
be128 t;
struct crypto_tfm *tfm;
@@ -157,7 +175,7 @@ static int crypt(struct blkcipher_desc *d,
s.t = *iv;
 
/* T <- I*Key2 */
-   gf128mul_64k_bbe(&s.t, ctx->table);
+   gf128mul_64k_bbe(&s.t, ctx->table.table);
 
goto first;
 
@@ -165,7 +183,8 @@ static int crypt(struct blkcipher_desc *d,
do {
/* T <- I*Key2, using the optimization
 * discussed in the specification */
-   be128_xor(&s.t, &s.t, &ctx->mulinc[get_index128(iv)]);
+   be128_xor(&s.t, &s.t,
+ &ctx->table.mulinc[get_index128(iv)]);
inc(iv);
 
 first:
@@ -233,8 +252,8 @@ static int init_tfm(struct crypto_tfm *tfm)
 static void exit_tfm(struct crypto_tfm *tfm)
 {
struct priv *ctx = crypto_tfm_ctx(tfm);
-   if (ctx->table)
-   gf128mul_free_64k(ctx->table);
+
+   lrw_free_table(&ctx->table);
crypto_free_cipher(ctx->child);
 }
 



[PATCH 04/18] crypto: lrw: add interface for parallelized cipher implementations

2011-10-18 Thread Jussi Kivilinna
Export gf128mul table initialization routines and add lrw_crypt() function
that can be used by cipher implementations that can benefit from parallelized
cipher operations.
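For background (from the LRW specification, not part of this patch), the mode these helpers compute, for plaintext block P at index I with cipher key K1 and tweak key K2, is:

```latex
T_I = K_2 \otimes I
      \quad\text{(multiplication in } \mathrm{GF}(2^{128})\text{)}
C_I = E_{K_1}(P_I \oplus T_I) \oplus T_I
```

The exported mulinc[] table holds the precomputed products $K_2 \otimes (I \oplus (I+1))$ for each possible carry pattern, so advancing the tweak to the next index costs one 128-bit XOR instead of a full multiplication.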

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/lrw.c |  105 --
 include/crypto/lrw.h |   43 
 2 files changed, 128 insertions(+), 20 deletions(-)
 create mode 100644 include/crypto/lrw.h

diff --git a/crypto/lrw.c b/crypto/lrw.c
index 91c17fa..66c4d22 100644
--- a/crypto/lrw.c
+++ b/crypto/lrw.c
@@ -3,7 +3,7 @@
  *
 * Copyright (c) 2006 Rik Snel <rs...@cube.dyndns.org>
  *
- * Based om ecb.c
+ * Based on ecb.c
 * Copyright (c) 2006 Herbert Xu <herb...@gondor.apana.org.au>
  *
  * This program is free software; you can redistribute it and/or modify it
@@ -16,6 +16,7 @@
  * http://www.mail-archive.com/stds-p1619@listserv.ieee.org/msg00173.html
  *
  * The test vectors are included in the testing module tcrypt.[ch] */
+
 #include <crypto/algapi.h>
 #include <linux/err.h>
 #include <linux/init.h>
@@ -26,23 +27,7 @@
 
 #include <crypto/b128ops.h>
 #include <crypto/gf128mul.h>
-
-#define LRW_BLOCK_SIZE 16
-
-struct lrw_table_ctx {
-   /* optimizes multiplying a random (non incrementing, as at the
-* start of a new sector) value with key2, we could also have
-* used 4k optimization tables or no optimization at all. In the
-* latter case we would have to store key2 here */
-   struct gf128mul_64k *table;
-   /* stores:
-*  key2*{ 0,0,...0,0,0,0,1 }, key2*{ 0,0,...0,0,0,1,1 },
-*  key2*{ 0,0,...0,0,1,1,1 }, key2*{ 0,0,...0,1,1,1,1 }
-*  key2*{ 0,0,...1,1,1,1,1 }, etc
-* needed for optimized multiplication of incrementing values
-* with key2 */
-   be128 mulinc[128];
-};
+#include <crypto/lrw.h>
 
 struct priv {
struct crypto_cipher *child;
@@ -60,7 +45,7 @@ static inline void setbit128_bbe(void *b, int bit)
), b);
 }
 
-static int lrw_init_table(struct lrw_table_ctx *ctx, const u8 *tweak)
+int lrw_init_table(struct lrw_table_ctx *ctx, const u8 *tweak)
 {
be128 tmp = { 0 };
int i;
@@ -82,12 +67,14 @@ static int lrw_init_table(struct lrw_table_ctx *ctx, const u8 *tweak)
 
return 0;
 }
+EXPORT_SYMBOL_GPL(lrw_init_table);
 
-static void lrw_free_table(struct lrw_table_ctx *ctx)
+void lrw_free_table(struct lrw_table_ctx *ctx)
 {
if (ctx->table)
gf128mul_free_64k(ctx->table);
 }
+EXPORT_SYMBOL_GPL(lrw_free_table);
 
 static int setkey(struct crypto_tfm *parent, const u8 *key,
  unsigned int keylen)
@@ -227,6 +214,84 @@ static int decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
 crypto_cipher_alg(ctx->child)->cia_decrypt);
 }
 
+int lrw_crypt(struct blkcipher_desc *desc, struct scatterlist *sdst,
+ struct scatterlist *ssrc, unsigned int nbytes,
+ struct lrw_crypt_req *req)
+{
+   const unsigned int bsize = LRW_BLOCK_SIZE;
+   const unsigned int max_blks = req->tbuflen / bsize;
+   struct lrw_table_ctx *ctx = req->table_ctx;
+   struct blkcipher_walk walk;
+   unsigned int nblocks;
+   be128 *iv, *src, *dst, *t;
+   be128 *t_buf = req->tbuf;
+   int err, i;
+
+   BUG_ON(max_blks < 1);
+
+   blkcipher_walk_init(&walk, sdst, ssrc, nbytes);
+
+   err = blkcipher_walk_virt(desc, &walk);
+   nbytes = walk.nbytes;
+   if (!nbytes)
+   return err;
+
+   nblocks = min(walk.nbytes / bsize, max_blks);
+   src = (be128 *)walk.src.virt.addr;
+   dst = (be128 *)walk.dst.virt.addr;
+
+   /* calculate first value of T */
+   iv = (be128 *)walk.iv;
+   t_buf[0] = *iv;
+
+   /* T <- I*Key2 */
+   gf128mul_64k_bbe(&t_buf[0], ctx->table);
+
+   i = 0;
+   goto first;
+
+   for (;;) {
+   do {
+   for (i = 0; i < nblocks; i++) {
+   /* T <- I*Key2, using the optimization
+* discussed in the specification */
+   be128_xor(&t_buf[i], t,
+   &ctx->mulinc[get_index128(iv)]);
+   inc(iv);
+first:
+   t = &t_buf[i];
+
+   /* PP <- T xor P */
+   be128_xor(dst + i, t, src + i);
+   }
+
+   /* CC <- E(Key2,PP) */
+   req->crypt_fn(req->crypt_ctx, (u8 *)dst,
+ nblocks * bsize);
+
+   /* C <- T xor CC */
+   for (i = 0; i < nblocks; i++)
+   be128_xor(dst + i, dst + i, &t_buf[i]);
+
+   src += nblocks;
+   dst += nblocks;
+   nbytes -= nblocks * bsize;
+   nblocks = min(nbytes / 

[PATCH 05/18] crypto: testmgr: add lrw(serpent) test vectors

2011-10-18 Thread Jussi Kivilinna
Add test vectors for lrw(serpent). These are generated from lrw(aes) test vectors.

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/testmgr.c |   15 ++
 crypto/testmgr.h |  502 ++
 2 files changed, 517 insertions(+), 0 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 2018379..8c8ad61 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -2297,6 +2297,21 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+   .alg = "lrw(serpent)",
+   .test = alg_test_skcipher,
+   .suite = {
+   .cipher = {
+   .enc = {
+   .vecs = serpent_lrw_enc_tv_template,
+   .count = SERPENT_LRW_ENC_TEST_VECTORS
+   },
+   .dec = {
+   .vecs = serpent_lrw_dec_tv_template,
+   .count = SERPENT_LRW_DEC_TEST_VECTORS
+   }
+   }
+   }
+   }, {
.alg = "lzo",
.test = alg_test_comp,
.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index ed4aec9..1f7c3fd 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -3108,6 +3108,9 @@ static struct cipher_testvec tf_ctr_dec_tv_template[] = {
 #define SERPENT_CTR_ENC_TEST_VECTORS   2
 #define SERPENT_CTR_DEC_TEST_VECTORS   2
 
+#define SERPENT_LRW_ENC_TEST_VECTORS   8
+#define SERPENT_LRW_DEC_TEST_VECTORS   8
+
 static struct cipher_testvec serpent_enc_tv_template[] = {
{
.input  = \x00\x01\x02\x03\x04\x05\x06\x07
@@ -3665,6 +3668,505 @@ static struct cipher_testvec serpent_ctr_dec_tv_template[] = {
},
 };
 
+static struct cipher_testvec serpent_lrw_enc_tv_template[] = {
+   /* Generated from AES-LRW test vectors */
+   {
+   .key    = "\x45\x62\xac\x25\xf8\x28\x17\x6d"
+ "\x4c\x26\x84\x14\xb5\x68\x01\x85"
+ "\x25\x8e\x2a\x05\xe7\x3e\x9d\x03"
+ "\xee\x5a\x83\x0c\xcc\x09\x4c\x87",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\x6f\xbf\xd4\xa4\x5d\x71\x16\x79"
+ "\x63\x9c\xa6\x8e\x40\xbe\x0d\x8a",
+   .rlen   = 16,
+   }, {
+   .key    = "\x59\x70\x47\x14\xf5\x57\x47\x8c"
+ "\xd7\x79\xe8\x0f\x54\x88\x79\x44"
+ "\x0d\x48\xf0\xb7\xb1\x5a\x53\xea"
+ "\x1c\xaa\x6b\x29\xc2\xca\xfb\xaf",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x02",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\xfd\xb2\x66\x98\x80\x96\x55\xad"
+ "\x08\x94\x54\x9c\x21\x7c\x69\xe3",
+   .rlen   = 16,
+   }, {
+   .key    = "\xd8\x2a\x91\x34\xb2\x6a\x56\x50"
+ "\x30\xfe\x69\xe2\x37\x7f\x98\x47"
+ "\xcd\xf9\x0b\x16\x0c\x64\x8f\xb6"
+ "\xb0\x0d\x0d\x1b\xae\x85\x87\x1f",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x02\x00\x00\x00\x00",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\x14\x5e\x3d\x70\xc0\x6e\x9c\x34"
+ "\x5b\x5e\xcf\x0f\xe4\x8c\x21\x5c",
+   .rlen   = 16,
+   }, {
+   .key    = "\x0f\x6a\xef\xf8\xd3\xd2\xbb\x15"
+ "\x25\x83\xf7\x3c\x1f\x01\x28\x74"
+ "\xca\xc6\xbc\x35\x4d\x4a\x65\x54"
+ "\x90\xae\x61\xcf\x7b\xae\xbd\xcc"
+ "\xad\xe4\x94\xc5\x4a\x29\xae\x70",
+   .klen   = 40,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\x25\x39\xaa\xa5\xf0\x65\xc8\xdc"
+ "\x5d\x45\x95\x30\x8f\xff\x2f\x1b",
+   .rlen   = 16,
+   }, {
+   .key= \x8a\xd4\xee\x10\x2f\xbd\x81\xff
+ 

[PATCH 06/18] crypto: tcrypt: add lrw(serpent) tests

2011-10-18 Thread Jussi Kivilinna
Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/tcrypt.c |9 +
 crypto/tcrypt.h |1 +
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 5526065..9a9e170 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -996,6 +996,7 @@ static int do_test(int m)
ret += tcrypt_test("ecb(serpent)");
ret += tcrypt_test("cbc(serpent)");
ret += tcrypt_test("ctr(serpent)");
+   ret += tcrypt_test("lrw(serpent)");
break;
 
case 10:
@@ -1305,6 +1306,10 @@ static int do_test(int m)
  speed_template_16_32);
test_cipher_speed("ctr(serpent)", DECRYPT, sec, NULL, 0,
  speed_template_16_32);
+   test_cipher_speed("lrw(serpent)", ENCRYPT, sec, NULL, 0,
+ speed_template_32_48);
+   test_cipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0,
+ speed_template_32_48);
break;
 
case 300:
@@ -1521,6 +1526,10 @@ static int do_test(int m)
   speed_template_16_32);
test_acipher_speed("ctr(serpent)", DECRYPT, sec, NULL, 0,
   speed_template_16_32);
+   test_acipher_speed("lrw(serpent)", ENCRYPT, sec, NULL, 0,
+  speed_template_32_48);
+   test_acipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0,
+  speed_template_32_48);
break;
 
case 1000:
diff --git a/crypto/tcrypt.h b/crypto/tcrypt.h
index 10cb925..3eceaef 100644
--- a/crypto/tcrypt.h
+++ b/crypto/tcrypt.h
@@ -51,6 +51,7 @@ static u8 speed_template_8_32[] = {8, 32, 0};
 static u8 speed_template_16_32[] = {16, 32, 0};
 static u8 speed_template_16_24_32[] = {16, 24, 32, 0};
 static u8 speed_template_32_40_48[] = {32, 40, 48, 0};
+static u8 speed_template_32_48[] = {32, 48, 0};
 static u8 speed_template_32_48_64[] = {32, 48, 64, 0};
 
 /*



[PATCH 07/18] crypto: serpent-sse2: add lrw support

2011-10-18 Thread Jussi Kivilinna
This patch adds LRW support for serpent-sse2 by using lrw_crypt(). It has been
tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results (serpent-sse2/serpent_generic speed ratios):

Intel Celeron T1600 (x86_64) (fam:6, model:15, step:13):

size    lrw-enc lrw-dec
16B 1.00x   0.96x
64B 1.01x   1.01x
256B3.01x   2.97x
1024B   3.39x   3.33x
8192B   3.35x   3.33x

AMD Phenom II 1055T (x86_64) (fam:16, model:10):

size    lrw-enc lrw-dec
16B 0.98x   1.03x
64B 1.01x   1.04x
256B2.10x   2.14x
1024B   2.28x   2.33x
8192B   2.30x   2.33x

Intel Atom N270 (i586):

size    lrw-enc lrw-dec
16B 0.97x   0.97x
64B 1.47x   1.50x
256B1.72x   1.69x
1024B   1.88x   1.81x
8192B   1.84x   1.79x
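The dispatch pattern used by the glue's encrypt_callback()/decrypt_callback() in the diff below (take the wide SSE2 path only for a full batch of parallel blocks, otherwise fall back block by block) can be sketched in stand-alone C. The primitives here are illustrative stand-ins that only tag which path ran, not the real serpent routines:

```c
#include <assert.h>

#define BLOCK_SIZE 16
#define PARALLEL_BLOCKS 4   /* illustrative; serpent-sse2 is 8-way */

/* Stand-in wide primitive: tags a full batch as parallel-processed. */
static void enc_blk_xway(unsigned char *buf)
{
    for (int i = 0; i < PARALLEL_BLOCKS * BLOCK_SIZE; i++)
        buf[i] = 'P';               /* parallel path */
}

/* Stand-in scalar primitive: tags a single block. */
static void enc_blk(unsigned char *buf)
{
    for (int i = 0; i < BLOCK_SIZE; i++)
        buf[i] = 'S';               /* scalar fallback */
}

/* Use the wide path only for a full batch, else go block by block. */
static void encrypt_callback(unsigned char *srcdst, unsigned int nbytes)
{
    if (nbytes == BLOCK_SIZE * PARALLEL_BLOCKS) {
        enc_blk_xway(srcdst);
        return;
    }

    for (unsigned int i = 0; i < nbytes / BLOCK_SIZE; i++)
        enc_blk(srcdst + i * BLOCK_SIZE);
}
```

Because lrw_crypt() never passes more than the parallel-blocks size, the callback only has to distinguish "full batch" from "short tail".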

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 arch/x86/crypto/serpent_sse2_glue.c |  211 +++
 crypto/serpent.c|   10 +-
 include/crypto/serpent.h|2 
 3 files changed, 221 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c
index 5cf17e2..97c55b6 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -38,12 +38,17 @@
 #include <crypto/cryptd.h>
 #include <crypto/b128ops.h>
 #include <crypto/ctr.h>
+#include <crypto/lrw.h>
 #include <asm/i387.h>
 #include <asm/serpent.h>
 #include <crypto/scatterwalk.h>
 #include <linux/workqueue.h>
 #include <linux/spinlock.h>
 
+#if defined(CONFIG_CRYPTO_LRW) || defined(CONFIG_CRYPTO_LRW_MODULE)
+#define HAS_LRW
+#endif
+
 struct async_serpent_ctx {
struct cryptd_ablkcipher *cryptd_tfm;
 };
@@ -459,6 +464,152 @@ static struct crypto_alg blk_ctr_alg = {
},
 };
 
+#ifdef HAS_LRW
+
+struct crypt_priv {
+   struct serpent_ctx *ctx;
+   bool fpu_enabled;
+};
+
+static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+{
+   const unsigned int bsize = SERPENT_BLOCK_SIZE;
+   struct crypt_priv *ctx = priv;
+   int i;
+
+   ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+
+   if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
+   serpent_enc_blk_xway(ctx->ctx, srcdst, srcdst);
+   return;
+   }
+
+   for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+   __serpent_encrypt(ctx->ctx, srcdst, srcdst);
+}
+
+static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+{
+   const unsigned int bsize = SERPENT_BLOCK_SIZE;
+   struct crypt_priv *ctx = priv;
+   int i;
+
+   ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+
+   if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
+   serpent_dec_blk_xway(ctx->ctx, srcdst, srcdst);
+   return;
+   }
+
+   for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+   __serpent_decrypt(ctx->ctx, srcdst, srcdst);
+}
+
+struct serpent_lrw_ctx {
+   struct lrw_table_ctx lrw_table;
+   struct serpent_ctx serpent_ctx;
+};
+
+static int lrw_serpent_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+   struct serpent_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
+   int err;
+
+   err = __serpent_setkey(&ctx->serpent_ctx, key, keylen -
+   SERPENT_BLOCK_SIZE);
+   if (err)
+   return err;
+
+   return lrw_init_table(&ctx->lrw_table, key + keylen -
+   SERPENT_BLOCK_SIZE);
+}
+
+static int lrw_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct serpent_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[SERPENT_PARALLEL_BLOCKS];
+   struct crypt_priv crypt_ctx = {
+   .ctx = &ctx->serpent_ctx,
+   .fpu_enabled = false,
+   };
+   struct lrw_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .table_ctx = &ctx->lrw_table,
+   .crypt_ctx = &crypt_ctx,
+   .crypt_fn = encrypt_callback,
+   };
+   int ret;
+
+   ret = lrw_crypt(desc, dst, src, nbytes, &req);
+   serpent_fpu_end(crypt_ctx.fpu_enabled);
+
+   return ret;
+}
+
+static int lrw_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct serpent_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[SERPENT_PARALLEL_BLOCKS];
+   struct crypt_priv crypt_ctx = {
+   .ctx = &ctx->serpent_ctx,
+   .fpu_enabled = false,
+   };
+   struct lrw_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .table_ctx = &ctx->lrw_table,
+   .crypt_ctx = &crypt_ctx,
+   .crypt_fn = 

[PATCH 08/18] crypto: testmgr: add lrw(twofish) test vectors

2011-10-18 Thread Jussi Kivilinna
Add test vectors for lrw(twofish). These are generated from lrw(aes) test vectors.

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/testmgr.c |   15 ++
 crypto/testmgr.h |  501 ++
 2 files changed, 516 insertions(+), 0 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 8c8ad61..97fe1df 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -2312,6 +2312,21 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+   .alg = "lrw(twofish)",
+   .test = alg_test_skcipher,
+   .suite = {
+   .cipher = {
+   .enc = {
+   .vecs = tf_lrw_enc_tv_template,
+   .count = TF_LRW_ENC_TEST_VECTORS
+   },
+   .dec = {
+   .vecs = tf_lrw_dec_tv_template,
+   .count = TF_LRW_DEC_TEST_VECTORS
+   }
+   }
+   }
+   }, {
.alg = "lzo",
.test = alg_test_comp,
.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 1f7c3fd..4b88789 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -2717,6 +2717,8 @@ static struct cipher_testvec bf_ctr_dec_tv_template[] = {
 #define TF_CBC_DEC_TEST_VECTORS5
 #define TF_CTR_ENC_TEST_VECTORS2
 #define TF_CTR_DEC_TEST_VECTORS2
+#define TF_LRW_ENC_TEST_VECTORS8
+#define TF_LRW_DEC_TEST_VECTORS8
 
 static struct cipher_testvec tf_enc_tv_template[] = {
{
@@ -3092,6 +3094,505 @@ static struct cipher_testvec tf_ctr_dec_tv_template[] = {
},
 };
 
+static struct cipher_testvec tf_lrw_enc_tv_template[] = {
+   /* Generated from AES-LRW test vectors */
+   {
+   .key    = "\x45\x62\xac\x25\xf8\x28\x17\x6d"
+ "\x4c\x26\x84\x14\xb5\x68\x01\x85"
+ "\x25\x8e\x2a\x05\xe7\x3e\x9d\x03"
+ "\xee\x5a\x83\x0c\xcc\x09\x4c\x87",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\xa1\x6c\x50\x69\x26\xa4\xef\x7b"
+ "\x7c\xc6\x91\xeb\x72\xdd\x9b\xee",
+   .rlen   = 16,
+   }, {
+   .key    = "\x59\x70\x47\x14\xf5\x57\x47\x8c"
+ "\xd7\x79\xe8\x0f\x54\x88\x79\x44"
+ "\x0d\x48\xf0\xb7\xb1\x5a\x53\xea"
+ "\x1c\xaa\x6b\x29\xc2\xca\xfb\xaf",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x02",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\xab\x72\x0a\xad\x3b\x0c\xf0\xc9"
+ "\x42\x2f\xf1\xae\xf1\x3c\xb1\xbd",
+   .rlen   = 16,
+   }, {
+   .key    = "\xd8\x2a\x91\x34\xb2\x6a\x56\x50"
+ "\x30\xfe\x69\xe2\x37\x7f\x98\x47"
+ "\xcd\xf9\x0b\x16\x0c\x64\x8f\xb6"
+ "\xb0\x0d\x0d\x1b\xae\x85\x87\x1f",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x02\x00\x00\x00\x00",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\x85\xa7\x56\x67\x08\xfa\x42\xe1"
+ "\x22\xe6\x82\xfc\xd9\xb4\xd7\xd4",
+   .rlen   = 16,
+   }, {
+   .key    = "\x0f\x6a\xef\xf8\xd3\xd2\xbb\x15"
+ "\x25\x83\xf7\x3c\x1f\x01\x28\x74"
+ "\xca\xc6\xbc\x35\x4d\x4a\x65\x54"
+ "\x90\xae\x61\xcf\x7b\xae\xbd\xcc"
+ "\xad\xe4\x94\xc5\x4a\x29\xae\x70",
+   .klen   = 40,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01",
+   .input  = "\x30\x31\x32\x33\x34\x35\x36\x37"
+ "\x38\x39\x41\x42\x43\x44\x45\x46",
+   .ilen   = 16,
+   .result = "\xd2\xaf\x69\x35\x24\x1d\x0e\x1c"
+ "\x84\x8b\x05\xe4\xa2\x2f\x16\xf5",
+   .rlen   = 16,
+   }, {
+   .key= \x8a\xd4\xee\x10\x2f\xbd\x81\xff
+ 

[PATCH 09/18] crypto: tcrypt: add lrw(twofish) tests

2011-10-18 Thread Jussi Kivilinna
Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/tcrypt.c |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 9a9e170..0120383 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -990,6 +990,7 @@ static int do_test(int m)
ret += tcrypt_test("ecb(twofish)");
ret += tcrypt_test("cbc(twofish)");
ret += tcrypt_test("ctr(twofish)");
+   ret += tcrypt_test("lrw(twofish)");
break;
 
case 9:
@@ -1249,6 +1250,10 @@ static int do_test(int m)
speed_template_16_24_32);
test_cipher_speed("ctr(twofish)", DECRYPT, sec, NULL, 0,
speed_template_16_24_32);
+   test_cipher_speed("lrw(twofish)", ENCRYPT, sec, NULL, 0,
+   speed_template_32_40_48);
+   test_cipher_speed("lrw(twofish)", DECRYPT, sec, NULL, 0,
+   speed_template_32_40_48);
break;
 
case 203:



[PATCH 10/18] crypto: twofish-x86_64-3way: add lrw support

2011-10-18 Thread Jussi Kivilinna
This patch adds LRW support for twofish-x86_64-3way by using lrw_crypt(). It has
been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results (twofish-3way/twofish-asm speed ratios):

Intel Celeron T1600 (fam:6, model:15, step:13):

size    lrw-enc lrw-dec
16B 0.99x   1.00x
64B 1.17x   1.17x
256B1.26x   1.27x
1024B   1.30x   1.31x
8192B   1.31x   1.32x

AMD Phenom II 1055T (fam:16, model:10):

size    lrw-enc lrw-dec
16B 1.06x   1.01x
64B 1.08x   1.14x
256B1.19x   1.20x
1024B   1.21x   1.22x
8192B   1.23x   1.24x

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 arch/x86/crypto/twofish_glue_3way.c |  135 +++
 crypto/twofish_common.c |   13 ++-
 include/crypto/twofish.h|2 +
 3 files changed, 145 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c
index 5ede9c4..fa9151d 100644
--- a/arch/x86/crypto/twofish_glue_3way.c
+++ b/arch/x86/crypto/twofish_glue_3way.c
@@ -32,6 +32,11 @@
 #include <crypto/algapi.h>
 #include <crypto/twofish.h>
 #include <crypto/b128ops.h>
+#include <crypto/lrw.h>
+
+#if defined(CONFIG_CRYPTO_LRW) || defined(CONFIG_CRYPTO_LRW_MODULE)
+#define HAS_LRW
+#endif
 
 /* regular block cipher functions from twofish_x86_64 module */
 asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
@@ -432,6 +437,124 @@ static struct crypto_alg blk_ctr_alg = {
},
 };
 
+#ifdef HAS_LRW
+
+static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+{
+   const unsigned int bsize = TF_BLOCK_SIZE;
+   struct twofish_ctx *ctx = priv;
+   int i;
+
+   if (nbytes == 3 * bsize) {
+   twofish_enc_blk_3way(ctx, srcdst, srcdst);
+   return;
+   }
+
+   for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+   twofish_enc_blk(ctx, srcdst, srcdst);
+}
+
+static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+{
+   const unsigned int bsize = TF_BLOCK_SIZE;
+   struct twofish_ctx *ctx = priv;
+   int i;
+
+   if (nbytes == 3 * bsize) {
+   twofish_dec_blk_3way(ctx, srcdst, srcdst);
+   return;
+   }
+
+   for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+   twofish_dec_blk(ctx, srcdst, srcdst);
+}
+
+struct twofish_lrw_ctx {
+   struct lrw_table_ctx lrw_table;
+   struct twofish_ctx twofish_ctx;
+};
+
+static int lrw_twofish_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+   struct twofish_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
+   int err;
+
+   err = __twofish_setkey(&ctx->twofish_ctx, key, keylen - TF_BLOCK_SIZE,
+  &tfm->crt_flags);
+   if (err)
+   return err;
+
+   return lrw_init_table(&ctx->lrw_table, key + keylen - TF_BLOCK_SIZE);
+}
+
+static int lrw_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct twofish_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[3];
+   struct lrw_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .table_ctx = &ctx->lrw_table,
+   .crypt_ctx = &ctx->twofish_ctx,
+   .crypt_fn = encrypt_callback,
+   };
+
+   return lrw_crypt(desc, dst, src, nbytes, &req);
+}
+
+static int lrw_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct twofish_lrw_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[3];
+   struct lrw_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .table_ctx = &ctx->lrw_table,
+   .crypt_ctx = &ctx->twofish_ctx,
+   .crypt_fn = decrypt_callback,
+   };
+
+   return lrw_crypt(desc, dst, src, nbytes, &req);
+}
+
+static void lrw_exit_tfm(struct crypto_tfm *tfm)
+{
+   struct twofish_lrw_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   lrw_free_table(&ctx->lrw_table);
+}
+
+static struct crypto_alg blk_lrw_alg = {
+   .cra_name   = "lrw(twofish)",
+   .cra_driver_name= "lrw-twofish-3way",
+   .cra_priority   = 300,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_blocksize  = TF_BLOCK_SIZE,
+   .cra_ctxsize= sizeof(struct twofish_lrw_ctx),
+   .cra_alignmask  = 0,
+   .cra_type   = crypto_blkcipher_type,
+   .cra_module = THIS_MODULE,
+   .cra_list   = LIST_HEAD_INIT(blk_lrw_alg.cra_list),
+   .cra_exit   = lrw_exit_tfm,
+   .cra_u = {
+   .blkcipher = {
+   .min_keysize= TF_MIN_KEY_SIZE + TF_BLOCK_SIZE,
+   .max_keysize

[PATCH 11/18] crypto: xts: use blocksize constant

2011-10-18 Thread Jussi Kivilinna
XTS has fixed blocksize of 16. Define XTS_BLOCK_SIZE and use in place of
crypto_cipher_blocksize().

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/xts.c |8 +---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/crypto/xts.c b/crypto/xts.c
index 8517054..96f3f88 100644
--- a/crypto/xts.c
+++ b/crypto/xts.c
@@ -24,6 +24,8 @@
 #include <crypto/b128ops.h>
 #include <crypto/gf128mul.h>
 
+#define XTS_BLOCK_SIZE 16
+
 struct priv {
struct crypto_cipher *child;
struct crypto_cipher *tweak;
@@ -96,7 +98,7 @@ static int crypt(struct blkcipher_desc *d,
 {
int err;
unsigned int avail;
-   const int bs = crypto_cipher_blocksize(ctx->child);
+   const int bs = XTS_BLOCK_SIZE;
struct sinfo s = {
.tfm = crypto_cipher_tfm(ctx->child),
.fn = fn
@@ -177,7 +179,7 @@ static int init_tfm(struct crypto_tfm *tfm)
if (IS_ERR(cipher))
return PTR_ERR(cipher);
 
-   if (crypto_cipher_blocksize(cipher) != 16) {
+   if (crypto_cipher_blocksize(cipher) != XTS_BLOCK_SIZE) {
*flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
crypto_free_cipher(cipher);
return -EINVAL;
@@ -192,7 +194,7 @@ static int init_tfm(struct crypto_tfm *tfm)
}
 
/* this check isn't really needed, leave it here just in case */
-   if (crypto_cipher_blocksize(cipher) != 16) {
+   if (crypto_cipher_blocksize(cipher) != XTS_BLOCK_SIZE) {
crypto_free_cipher(cipher);
crypto_free_cipher(ctx->child);
*flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 13/18] crypto: testmgr: add xts(serpent) test vectors

2011-10-18 Thread Jussi Kivilinna
Add test vectors for xts(serpent). These are generated from xts(aes) test 
vectors.

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/testmgr.c |   15 +
 crypto/testmgr.h |  682 ++
 2 files changed, 697 insertions(+), 0 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 97fe1df..8187405 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -2634,6 +2634,21 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
.alg = "xts(serpent)",
+   .test = alg_test_skcipher,
+   .suite = {
+   .cipher = {
+   .enc = {
+   .vecs = serpent_xts_enc_tv_template,
+   .count = SERPENT_XTS_ENC_TEST_VECTORS
+   },
+   .dec = {
+   .vecs = serpent_xts_dec_tv_template,
+   .count = SERPENT_XTS_DEC_TEST_VECTORS
+   }
+   }
+   }
+   }, {
.alg = "zlib",
.test = alg_test_pcomp,
.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 4b88789..9158a92 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -3612,6 +3612,9 @@ static struct cipher_testvec tf_lrw_dec_tv_template[] = {
 #define SERPENT_LRW_ENC_TEST_VECTORS   8
 #define SERPENT_LRW_DEC_TEST_VECTORS   8
 
+#define SERPENT_XTS_ENC_TEST_VECTORS   5
+#define SERPENT_XTS_DEC_TEST_VECTORS   5
+
 static struct cipher_testvec serpent_enc_tv_template[] = {
{
.input  = "\x00\x01\x02\x03\x04\x05\x06\x07"
@@ -4668,6 +4671,685 @@ static struct cipher_testvec serpent_lrw_dec_tv_template[] = {
},
 };
 
+static struct cipher_testvec serpent_xts_enc_tv_template[] = {
+   /* Generated from AES-XTS test vectors */
+   {
+   .key= "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+   .klen   = 32,
+   .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+   .input  = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+   .ilen   = 32,
+   .result = "\xe1\x08\xb8\x1d\x2c\xf5\x33\x64"
+ "\xc8\x12\x04\xc7\xb3\x70\xe8\xc4"
+ "\x6a\x31\xc5\xf3\x00\xca\xb9\x16"
+ "\xde\xe2\x77\x66\xf7\xfe\x62\x08",
+   .rlen   = 32,
+   }, {
+   .key= "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x11\x11\x11\x11\x11\x11\x11\x11"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+   .klen   = 32,
+   .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+   .input  = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+   .ilen   = 32,
+   .result = "\x1a\x0a\x09\x5f\xcd\x07\x07\x98"
+ "\x41\x86\x12\xaf\xb3\xd7\x68\x13"
+ "\xed\x81\xcd\x06\x87\x43\x1a\xbb"
+ "\x13\x3d\xd6\x1e\x2b\xe1\x77\xbe",
+   .rlen   = 32,
+   }, {
+   .key= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+ "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+ "\x22\x22\x22\x22\x22\x22\x22\x22"
+ "\x22\x22\x22\x22\x22\x22\x22\x22",
+   .klen   = 32,
+   .iv = "\x33\x33\x33\x33\x33\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+   .input  = "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44"
+ "\x44\x44\x44\x44\x44\x44\x44\x44",
+   .ilen   = 32,
+   .result = "\xf9\x9b\x28\xb8\x5c\xaf\x8c\x61"
+ "\xb6\x1c\x81\x8f\x2c\x87\x60\x89"
+ "\x0d\x8d\x7a\xe8\x60\x48\xcc\x86"
+ "\xc1\x68\x45\xaa\x00\xe9\x24\xc5",
+   .rlen   = 32,
+   }, {
+   .key= "\x27\x18\x28\x18\x28\x45\x90\x45"
+ "\x23\x53\x60\x28\x74\x71\x35\x26"
+

[PATCH 15/18] crypto: serpent-sse2: add xts support

2011-10-18 Thread Jussi Kivilinna
Patch adds XTS support for serpent-sse2 by using xts_crypt(). Patch has been
tested with tcrypt and automated filesystem tests.

Tcrypt benchmarks results (serpent-sse2/serpent_generic speed ratios):

Intel Celeron T1600 (x86_64) (fam:6, model:15, step:13):

size    xts-enc xts-dec
16B 0.98x   1.00x
64B 1.00x   1.01x
256B2.78x   2.75x
1024B   3.30x   3.26x
8192B   3.39x   3.30x

AMD Phenom II 1055T (x86_64) (fam:16, model:10):

size    xts-enc xts-dec
16B 1.05x   1.02x
64B 1.04x   1.03x
256B2.10x   2.05x
1024B   2.34x   2.35x
8192B   2.34x   2.40x

Intel Atom N270 (i586):

size    xts-enc xts-dec
16B 0.95x   0.96x
64B 1.53x   1.50x
256B1.72x   1.75x
1024B   1.88x   1.87x
8192B   1.86x   1.83x

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 arch/x86/crypto/serpent_sse2_glue.c |  180 +++
 1 files changed, 178 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c
index 97c55b6..2ad7ed8 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -39,6 +39,7 @@
 #include <crypto/b128ops.h>
 #include <crypto/ctr.h>
 #include <crypto/lrw.h>
+#include <crypto/xts.h>
 #include <asm/i387.h>
 #include <asm/serpent.h>
 #include <crypto/scatterwalk.h>
@@ -49,6 +50,10 @@
 #define HAS_LRW
 #endif
 
+#if defined(CONFIG_CRYPTO_XTS) || defined(CONFIG_CRYPTO_XTS_MODULE)
+#define HAS_XTS
+#endif
+
 struct async_serpent_ctx {
struct cryptd_ablkcipher *cryptd_tfm;
 };
@@ -464,7 +469,7 @@ static struct crypto_alg blk_ctr_alg = {
},
 };
 
-#ifdef HAS_LRW
+#if defined(HAS_LRW) || defined(HAS_XTS)
 
 struct crypt_priv {
struct serpent_ctx *ctx;
@@ -505,6 +510,10 @@ static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
__serpent_decrypt(ctx->ctx, srcdst, srcdst);
 }
 
+#endif
+
+#ifdef HAS_LRW
+
 struct serpent_lrw_ctx {
struct lrw_table_ctx lrw_table;
struct serpent_ctx serpent_ctx;
@@ -610,6 +619,114 @@ static struct crypto_alg blk_lrw_alg = {
 
 #endif
 
+#ifdef HAS_XTS
+
+struct serpent_xts_ctx {
+   struct serpent_ctx tweak_ctx;
+   struct serpent_ctx crypt_ctx;
+};
+
+static int xts_serpent_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+   struct serpent_xts_ctx *ctx = crypto_tfm_ctx(tfm);
+   u32 *flags = &tfm->crt_flags;
+   int err;
+
+   /* key consists of keys of equal size concatenated, therefore
+* the length must be even
+*/
+   if (keylen % 2) {
+   *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+   return -EINVAL;
+   }
+
+   /* first half of xts-key is for crypt */
+   err = __serpent_setkey(&ctx->crypt_ctx, key, keylen / 2);
+   if (err)
+   return err;
+
+   /* second half of xts-key is for tweak */
+   return __serpent_setkey(&ctx->tweak_ctx, key + keylen / 2, keylen / 2);
+}
+
+static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct serpent_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[SERPENT_PARALLEL_BLOCKS];
+   struct crypt_priv crypt_ctx = {
+   .ctx = &ctx->crypt_ctx,
+   .fpu_enabled = false,
+   };
+   struct xts_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .tweak_ctx = &ctx->tweak_ctx,
+   .tweak_fn = XTS_TWEAK_CAST(__serpent_encrypt),
+   .crypt_ctx = &crypt_ctx,
+   .crypt_fn = encrypt_callback,
+   };
+   int ret;
+
+   ret = xts_crypt(desc, dst, src, nbytes, &req);
+   serpent_fpu_end(crypt_ctx.fpu_enabled);
+
+   return ret;
+}
+
+static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct serpent_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[SERPENT_PARALLEL_BLOCKS];
+   struct crypt_priv crypt_ctx = {
+   .ctx = &ctx->crypt_ctx,
+   .fpu_enabled = false,
+   };
+   struct xts_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .tweak_ctx = &ctx->tweak_ctx,
+   .tweak_fn = XTS_TWEAK_CAST(__serpent_encrypt),
+   .crypt_ctx = &crypt_ctx,
+   .crypt_fn = decrypt_callback,
+   };
+   int ret;
+
+   ret = xts_crypt(desc, dst, src, nbytes, &req);
+   serpent_fpu_end(crypt_ctx.fpu_enabled);
+
+   return ret;
+}
+
+static struct crypto_alg blk_xts_alg = {
+   .cra_name   = "__xts-serpent-sse2",
+   .cra_driver_name= "__driver-xts-serpent-sse2",
+   .cra_priority   = 0,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_blocksize  = 

[PATCH 14/18] crypto: tcrypt: add xts(serpent) tests

2011-10-18 Thread Jussi Kivilinna
Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/tcrypt.c |9 +
 crypto/tcrypt.h |1 +
 2 files changed, 10 insertions(+), 0 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 0120383..a664595 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -998,6 +998,7 @@ static int do_test(int m)
ret += tcrypt_test("cbc(serpent)");
ret += tcrypt_test("ctr(serpent)");
ret += tcrypt_test("lrw(serpent)");
+   ret += tcrypt_test("xts(serpent)");
break;
 
case 10:
@@ -1315,6 +1316,10 @@ static int do_test(int m)
  speed_template_32_48);
test_cipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0,
  speed_template_32_48);
+   test_cipher_speed("xts(serpent)", ENCRYPT, sec, NULL, 0,
+ speed_template_32_64);
+   test_cipher_speed("xts(serpent)", DECRYPT, sec, NULL, 0,
+ speed_template_32_64);
break;
 
case 300:
@@ -1535,6 +1540,10 @@ static int do_test(int m)
   speed_template_32_48);
test_acipher_speed("lrw(serpent)", DECRYPT, sec, NULL, 0,
   speed_template_32_48);
+   test_acipher_speed("xts(serpent)", ENCRYPT, sec, NULL, 0,
+  speed_template_32_64);
+   test_acipher_speed("xts(serpent)", DECRYPT, sec, NULL, 0,
+  speed_template_32_64);
break;
 
case 1000:
diff --git a/crypto/tcrypt.h b/crypto/tcrypt.h
index 3eceaef..5be1fc8 100644
--- a/crypto/tcrypt.h
+++ b/crypto/tcrypt.h
@@ -53,6 +53,7 @@ static u8 speed_template_16_24_32[] = {16, 24, 32, 0};
 static u8 speed_template_32_40_48[] = {32, 40, 48, 0};
 static u8 speed_template_32_48[] = {32, 48, 0};
 static u8 speed_template_32_48_64[] = {32, 48, 64, 0};
+static u8 speed_template_32_64[] = {32, 64, 0};
 
 /*
  * Digest speed tests



[PATCH 17/18] crypto: tcrypt: add xts(twofish) tests

2011-10-18 Thread Jussi Kivilinna
Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 crypto/tcrypt.c |5 +
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index a664595..7736a9f 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -991,6 +991,7 @@ static int do_test(int m)
ret += tcrypt_test("cbc(twofish)");
ret += tcrypt_test("ctr(twofish)");
ret += tcrypt_test("lrw(twofish)");
+   ret += tcrypt_test("xts(twofish)");
break;
 
case 9:
@@ -1255,6 +1256,10 @@ static int do_test(int m)
speed_template_32_40_48);
test_cipher_speed("lrw(twofish)", DECRYPT, sec, NULL, 0,
speed_template_32_40_48);
+   test_cipher_speed("xts(twofish)", ENCRYPT, sec, NULL, 0,
+   speed_template_32_48_64);
+   test_cipher_speed("xts(twofish)", DECRYPT, sec, NULL, 0,
+   speed_template_32_48_64);
break;
 
case 203:



[PATCH 18/18] crypto: twofish-x86_64-3way: add xts support

2011-10-18 Thread Jussi Kivilinna
Patch adds XTS support for twofish-x86_64-3way by using xts_crypt(). Patch has
been tested with tcrypt and automated filesystem tests.

Tcrypt benchmarks results (twofish-3way/twofish-asm speed ratios):

Intel Celeron T1600 (fam:6, model:15, step:13):

size    xts-enc xts-dec
16B 0.98x   1.00x
64B 1.14x   1.15x
256B1.23x   1.25x
1024B   1.26x   1.29x
8192B   1.28x   1.30x

AMD Phenom II 1055T (fam:16, model:10):

size    xts-enc xts-dec
16B 1.03x   1.03x
64B 1.13x   1.16x
256B1.20x   1.20x
1024B   1.22x   1.22x
8192B   1.22x   1.21x

Signed-off-by: Jussi Kivilinna <jussi.kivili...@mbnet.fi>
---
 arch/x86/crypto/twofish_glue_3way.c |  119 ++-
 1 files changed, 117 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c
index fa9151d..954f59e 100644
--- a/arch/x86/crypto/twofish_glue_3way.c
+++ b/arch/x86/crypto/twofish_glue_3way.c
@@ -33,11 +33,16 @@
 #include <crypto/twofish.h>
 #include <crypto/b128ops.h>
 #include <crypto/lrw.h>
+#include <crypto/xts.h>
 
 #if defined(CONFIG_CRYPTO_LRW) || defined(CONFIG_CRYPTO_LRW_MODULE)
 #define HAS_LRW
 #endif
 
+#if defined(CONFIG_CRYPTO_XTS) || defined(CONFIG_CRYPTO_XTS_MODULE)
+#define HAS_XTS
+#endif
+
 /* regular block cipher functions from twofish_x86_64 module */
 asmlinkage void twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst,
const u8 *src);
@@ -437,7 +442,7 @@ static struct crypto_alg blk_ctr_alg = {
},
 };
 
-#ifdef HAS_LRW
+#if defined(HAS_LRW) || defined(HAS_XTS)
 
 static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
 {
@@ -469,6 +474,10 @@ static void decrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
twofish_dec_blk(ctx, srcdst, srcdst);
 }
 
+#endif
+
+#ifdef HAS_LRW
+
 struct twofish_lrw_ctx {
struct lrw_table_ctx lrw_table;
struct twofish_ctx twofish_ctx;
@@ -555,6 +564,99 @@ static struct crypto_alg blk_lrw_alg = {
 
 #endif
 
+#ifdef HAS_XTS
+
+struct twofish_xts_ctx {
+   struct twofish_ctx tweak_ctx;
+   struct twofish_ctx crypt_ctx;
+};
+
+static int xts_twofish_setkey(struct crypto_tfm *tfm, const u8 *key,
+ unsigned int keylen)
+{
+   struct twofish_xts_ctx *ctx = crypto_tfm_ctx(tfm);
+   u32 *flags = &tfm->crt_flags;
+   int err;
+
+   /* key consists of keys of equal size concatenated, therefore
+* the length must be even
+*/
+   if (keylen % 2) {
+   *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+   return -EINVAL;
+   }
+
+   /* first half of xts-key is for crypt */
+   err = __twofish_setkey(&ctx->crypt_ctx, key, keylen / 2, flags);
+   if (err)
+   return err;
+
+   /* second half of xts-key is for tweak */
+   return __twofish_setkey(&ctx->tweak_ctx, key + keylen / 2, keylen / 2,
+   flags);
+}
+
+static int xts_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct twofish_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[3];
+   struct xts_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .tweak_ctx = &ctx->tweak_ctx,
+   .tweak_fn = XTS_TWEAK_CAST(twofish_enc_blk),
+   .crypt_ctx = &ctx->crypt_ctx,
+   .crypt_fn = encrypt_callback,
+   };
+
+   return xts_crypt(desc, dst, src, nbytes, &req);
+}
+
+static int xts_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+  struct scatterlist *src, unsigned int nbytes)
+{
+   struct twofish_xts_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
+   be128 buf[3];
+   struct xts_crypt_req req = {
+   .tbuf = buf,
+   .tbuflen = sizeof(buf),
+
+   .tweak_ctx = &ctx->tweak_ctx,
+   .tweak_fn = XTS_TWEAK_CAST(twofish_enc_blk),
+   .crypt_ctx = &ctx->crypt_ctx,
+   .crypt_fn = decrypt_callback,
+   };
+
+   return xts_crypt(desc, dst, src, nbytes, &req);
+}
+
+static struct crypto_alg blk_xts_alg = {
+   .cra_name   = "xts(twofish)",
+   .cra_driver_name= "xts-twofish-3way",
+   .cra_priority   = 300,
+   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER,
+   .cra_blocksize  = TF_BLOCK_SIZE,
+   .cra_ctxsize= sizeof(struct twofish_xts_ctx),
+   .cra_alignmask  = 0,
+   .cra_type   = crypto_blkcipher_type,
+   .cra_module = THIS_MODULE,
+   .cra_list   = LIST_HEAD_INIT(blk_xts_alg.cra_list),
+   .cra_u = {
+   .blkcipher = {
+   .min_keysize= TF_MIN_KEY_SIZE * 2,
+   .max_keysize= TF_MAX_KEY_SIZE * 2,
+   .ivsize 

Hardware acceleration indication in af_alg

2011-10-18 Thread Matthias-Christian Ott
I did some experiments with af_alg and noticed that to be really
useful, it should indicate whether a certain algorithm is hardware
accelerated. I guess this has to be inferred from the priority of the
algorithm and could be made available via a read-only socket option. Any
thoughts on this?

An alternative, and perhaps better, approach would be to measure the
speed of the kernel-provided algorithm against a software
implementation, but many other factors could influence the results.
Therefore, it is perhaps better to simply assume that hardware
acceleration is faster, which is the assumption the kernel makes
anyhow.

Regards,
Matthias-Christian


Re: [PATCH] talitos: handle descriptor not found in error path

2011-10-18 Thread Kim Phillips
On Tue, 18 Oct 2011 09:36:18 +0200
Herbert Xu <herb...@gondor.apana.org.au> wrote:

> Kim Phillips <kim.phill...@freescale.com> wrote:
> > The CDPR (Current Descriptor Pointer Register) can be unreliable
> > when trying to locate an offending descriptor.  Handle that case by
> > (a) not OOPSing, and (b) reverting to the machine internal copy of
> > the descriptor header in order to report the correct execution unit
> > error.
> > 
> > Note: printing all execution units' ISRs is not effective because it
> > results in an internal time out (ITO) error and the EU resetting its
> > ISR value (at least when specifying an invalid key length on an SEC
> > 2.2/MPC8313E).
> > 
> > Reported-by: Sven Schnelle <sv...@stackframe.org>
> > Signed-off-by: Kim Phillips <kim.phill...@freescale.com>
> > ---
> > please test, as it seems I cannot reproduce the descriptor not found
> > case.
> 
> So what's the verdict Kim, should I take this patch or not?

sure - I've verified it does at least satisfy Sven's shouldn't-oops
comment.

btw, can you take a look at applying this also?:

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg05996.html

It makes IPSec AH work for async crypto implementations.

Thanks,

Kim
