[PATCH V2 0/9] Enable hashing and ciphers for v5 CCP

2016-11-04 Thread Gary R Hook
The following series implements new functions for a version
5 CCP: support for SHA-2, wiring of RSA using the updated
framework, additional RSA features for new devices, AES GCM
mode, and Triple-DES in ECB and CBC modes.

---

Gary R Hook (9):
  crypto: ccp - Fix handling of RSA exponent on a v5 device
  crypto: ccp - Update the command queue on errors
  crypto: ccp - Simplify some buffer management routines
  crypto: ccp - Add SHA-2 support
  crypto: Move RSA+MPI constructs into an #include file
  crypto: ccp - Add support for RSA on the CCP
  crypto: ccp - Enhance RSA support for a v5 CCP
  crypto: ccp - Enable support for AES GCM on v5 CCPs
  crypto: ccp - Enable 3DES function on v5 CCPs


 crypto/rsa.c   |   16 -
 drivers/crypto/ccp/Makefile|3 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 ++
 drivers/crypto/ccp/ccp-crypto-des3.c   |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c   |   41 ++
 drivers/crypto/ccp/ccp-crypto-rsa.c|  297 +++
 drivers/crypto/ccp/ccp-crypto-sha.c|   22 +
 drivers/crypto/ccp/ccp-crypto.h|   75 +++
 drivers/crypto/ccp/ccp-dev-v3.c|2 
 drivers/crypto/ccp/ccp-dev-v5.c|   67 ++-
 drivers/crypto/ccp/ccp-dev.h   |   17 +
 drivers/crypto/ccp/ccp-ops.c   |  740 
 include/crypto/internal/rsa.h  |   17 +
 include/linux/ccp.h|   69 +++
 14 files changed, 1736 insertions(+), 141 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH V2 1/9] crypto: ccp - Fix handling of RSA exponent on a v5 device

2016-11-04 Thread Gary R Hook
The exponent size in the ccp_op structure is in bits. A v5
CCP requires the exponent size to be in bytes, so convert
the size from bits to bytes when populating the descriptor.

The current code references the exponent in memory, but
these fields have not been set, since the exponent is
actually stored in the LSB. Populate the descriptor with
the LSB location (address).

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-dev-v5.c |   10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index ff7816a..e2ce819 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -403,7 +403,7 @@ static int ccp5_perform_rsa(struct ccp_op *op)
CCP5_CMD_PROT(&desc) = 0;
 
function.raw = 0;
-   CCP_RSA_SIZE(&function) = op->u.rsa.mod_size;
+   CCP_RSA_SIZE(&function) = op->u.rsa.mod_size >> 3;
CCP5_CMD_FUNCTION(&desc) = function.raw;
 
CCP5_CMD_LEN(&desc) = op->u.rsa.input_len;
@@ -418,10 +418,10 @@ static int ccp5_perform_rsa(struct ccp_op *op)
CCP5_CMD_DST_HI(&desc) = ccp_addr_hi(&op->dst.u.dma);
CCP5_CMD_DST_MEM(&desc) = CCP_MEMTYPE_SYSTEM;
 
-   /* Key (Exponent) is in external memory */
-   CCP5_CMD_KEY_LO(&desc) = ccp_addr_lo(&op->exp.u.dma);
-   CCP5_CMD_KEY_HI(&desc) = ccp_addr_hi(&op->exp.u.dma);
-   CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SYSTEM;
+   /* Exponent is in LSB memory */
+   CCP5_CMD_KEY_LO(&desc) = op->sb_key * LSB_ITEM_SIZE;
+   CCP5_CMD_KEY_HI(&desc) = 0;
+   CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SB;
 
return ccp5_do_cmd(&desc, op->cmd_q);
 }



[PATCH V2 2/9] crypto: ccp - Update the command queue on errors

2016-11-04 Thread Gary R Hook
Move the command queue tail pointer when an error is
detected. Always return the error.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-dev-v5.c |7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index e2ce819..05300a9 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -250,17 +250,20 @@ static int ccp5_do_cmd(struct ccp5_desc *desc,
ret = wait_event_interruptible(cmd_q->int_queue,
   cmd_q->int_rcvd);
if (ret || cmd_q->cmd_error) {
+   /* Log the error and flush the queue by
+* moving the head pointer
+*/
if (cmd_q->cmd_error)
ccp_log_error(cmd_q->ccp,
  cmd_q->cmd_error);
-   /* A version 5 device doesn't use Job IDs... */
+   iowrite32(tail, cmd_q->reg_head_lo);
if (!ret)
ret = -EIO;
}
cmd_q->int_rcvd = 0;
}
 
-   return 0;
+   return ret;
 }
 
 static int ccp5_perform_aes(struct ccp_op *op)



[PATCH V2 3/9] crypto: ccp - Simplify some buffer management routines

2016-11-04 Thread Gary R Hook
The reverse-get/set functions can be simplified by
eliminating unused code.


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-ops.c |  142 +-
 1 file changed, 56 insertions(+), 86 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 50fae44..efac3d5 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -184,62 +184,46 @@ static void ccp_get_dm_area(struct ccp_dm_workarea *wa, unsigned int wa_offset,
 }
 
 static int ccp_reverse_set_dm_area(struct ccp_dm_workarea *wa,
+  unsigned int wa_offset,
   struct scatterlist *sg,
-  unsigned int len, unsigned int se_len,
-  bool sign_extend)
+  unsigned int sg_offset,
+  unsigned int len)
 {
-   unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
-   u8 buffer[CCP_REVERSE_BUF_SIZE];
-
-   if (WARN_ON(se_len > sizeof(buffer)))
-   return -EINVAL;
-
-   sg_offset = len;
-   dm_offset = 0;
-   nbytes = len;
-   while (nbytes) {
-   sb_len = min_t(unsigned int, nbytes, se_len);
-   sg_offset -= sb_len;
-
-   scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 0);
-   for (i = 0; i < sb_len; i++)
-   wa->address[dm_offset + i] = buffer[sb_len - i - 1];
-
-   dm_offset += sb_len;
-   nbytes -= sb_len;
-
-   if ((sb_len != se_len) && sign_extend) {
-   /* Must sign-extend to nearest sign-extend length */
-   if (wa->address[dm_offset - 1] & 0x80)
-   memset(wa->address + dm_offset, 0xff,
-  se_len - sb_len);
-   }
+   u8 *p, *q;
+
+   ccp_set_dm_area(wa, wa_offset, sg, sg_offset, len);
+
+   p = wa->address + wa_offset;
+   q = p + len - 1;
+   while (p < q) {
+   *p = *p ^ *q;
+   *q = *p ^ *q;
+   *p = *p ^ *q;
+   p++;
+   q--;
}
-
return 0;
 }
 
 static void ccp_reverse_get_dm_area(struct ccp_dm_workarea *wa,
+   unsigned int wa_offset,
struct scatterlist *sg,
+   unsigned int sg_offset,
unsigned int len)
 {
-   unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
-   u8 buffer[CCP_REVERSE_BUF_SIZE];
-
-   sg_offset = 0;
-   dm_offset = len;
-   nbytes = len;
-   while (nbytes) {
-   sb_len = min_t(unsigned int, nbytes, sizeof(buffer));
-   dm_offset -= sb_len;
-
-   for (i = 0; i < sb_len; i++)
-   buffer[sb_len - i - 1] = wa->address[dm_offset + i];
-   scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 1);
-
-   sg_offset += sb_len;
-   nbytes -= sb_len;
+   u8 *p, *q;
+
+   p = wa->address + wa_offset;
+   q = p + len - 1;
+   while (p < q) {
+   *p = *p ^ *q;
+   *q = *p ^ *q;
+   *p = *p ^ *q;
+   p++;
+   q--;
}
+
+   ccp_get_dm_area(wa, wa_offset, sg, sg_offset, len);
 }
 
 static void ccp_free_data(struct ccp_data *data, struct ccp_cmd_queue *cmd_q)
@@ -1261,8 +1245,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
if (ret)
goto e_sb;
 
-   ret = ccp_reverse_set_dm_area(&exp, rsa->exp, rsa->exp_len,
- CCP_SB_BYTES, false);
+   ret = ccp_reverse_set_dm_area(&exp, 0, rsa->exp, 0, rsa->exp_len);
if (ret)
goto e_exp;
ret = ccp_copy_to_sb(cmd_q, &exp, op.jobid, op.sb_key,
@@ -1280,16 +1263,12 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
if (ret)
goto e_exp;
 
-   ret = ccp_reverse_set_dm_area(&src, rsa->mod, rsa->mod_len,
- CCP_SB_BYTES, false);
+   ret = ccp_reverse_set_dm_area(&src, 0, rsa->mod, 0, rsa->mod_len);
if (ret)
goto e_src;
-   src.address += o_len;   /* Adjust the address for the copy operation */
-   ret = ccp_reverse_set_dm_area(&src, rsa->src, rsa->src_len,
- CCP_SB_BYTES, false);
+   ret = ccp_reverse_set_dm_area(&src, o_len, rsa->src, 0, rsa->src_len);
if (ret)
goto e_src;
-   src.address -= o_len;   /* Reset the address to original value */
 
/* Prepare the output area for the operation */
ret = ccp_init_data(&dst, cmd_q, rsa->dst, rsa->mod_len,
@@ -1314,7 +1293,7 @@ static int ccp_run_rsa_cmd(struct ccp_

[PATCH V2 4/9] crypto: ccp - Add SHA-2 support

2016-11-04 Thread Gary R Hook
Incorporate 384-bit and 512-bit hashing for a version 5 CCP
device.


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-sha.c |   22 +++
 drivers/crypto/ccp/ccp-crypto.h |8 ++--
 drivers/crypto/ccp/ccp-ops.c|   72 +++
 include/linux/ccp.h |2 +
 4 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index 84a652b..6b46eea 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -146,6 +146,12 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
case CCP_SHA_TYPE_256:
rctx->cmd.u.sha.ctx_len = SHA256_DIGEST_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   rctx->cmd.u.sha.ctx_len = SHA384_DIGEST_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   rctx->cmd.u.sha.ctx_len = SHA512_DIGEST_SIZE;
+   break;
default:
/* Should never get here */
break;
@@ -393,6 +399,22 @@ struct ccp_sha_def {
.digest_size= SHA256_DIGEST_SIZE,
.block_size = SHA256_BLOCK_SIZE,
},
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha384",
+   .drv_name   = "sha384-ccp",
+   .type   = CCP_SHA_TYPE_384,
+   .digest_size= SHA384_DIGEST_SIZE,
+   .block_size = SHA384_BLOCK_SIZE,
+   },
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha512",
+   .drv_name   = "sha512-ccp",
+   .type   = CCP_SHA_TYPE_512,
+   .digest_size= SHA512_DIGEST_SIZE,
+   .block_size = SHA512_BLOCK_SIZE,
+   },
 };
 
 static int ccp_register_hmac_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 8335b32..95cce27 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -137,9 +137,11 @@ struct ccp_aes_cmac_exp_ctx {
u8 buf[AES_BLOCK_SIZE];
 };
 
-/* SHA related defines */
-#define MAX_SHA_CONTEXT_SIZE   SHA256_DIGEST_SIZE
-#define MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE
+/* SHA-related defines
+ * These values must be large enough to accommodate any variant
+ */
+#define MAX_SHA_CONTEXT_SIZE   SHA512_DIGEST_SIZE
+#define MAX_SHA_BLOCK_SIZE SHA512_BLOCK_SIZE
 
 struct ccp_sha_ctx {
struct scatterlist opad_sg;
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index efac3d5..213a752 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -41,6 +41,20 @@
cpu_to_be32(SHA256_H6), cpu_to_be32(SHA256_H7),
 };
 
+static const __be64 ccp_sha384_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA384_H0), cpu_to_be64(SHA384_H1),
+   cpu_to_be64(SHA384_H2), cpu_to_be64(SHA384_H3),
+   cpu_to_be64(SHA384_H4), cpu_to_be64(SHA384_H5),
+   cpu_to_be64(SHA384_H6), cpu_to_be64(SHA384_H7),
+};
+
+static const __be64 ccp_sha512_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA512_H0), cpu_to_be64(SHA512_H1),
+   cpu_to_be64(SHA512_H2), cpu_to_be64(SHA512_H3),
+   cpu_to_be64(SHA512_H4), cpu_to_be64(SHA512_H5),
+   cpu_to_be64(SHA512_H6), cpu_to_be64(SHA512_H7),
+};
+
#define	CCP_NEW_JOBID(ccp)	((ccp->vdata->version == CCP_VERSION(3, 0)) ? \
					ccp_gen_jobid(ccp) : 0)
 
@@ -947,6 +961,18 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
return -EINVAL;
block_size = SHA256_BLOCK_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(4, 0)
+   || sha->ctx_len < SHA384_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA384_BLOCK_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(4, 0)
+   || sha->ctx_len < SHA512_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA512_BLOCK_SIZE;
+   break;
default:
return -EINVAL;
}
@@ -1034,6 +1060,21 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
sb_count = 1;
ooffset = ioffset = 0;
break;
+   case CCP_SHA_TYPE_384:
+   digest_size = SHA384_DIGEST_SIZE;
+   init = (void *) ccp_sha384_init;
+   ctx_size = SHA512_DIGEST_SIZE;
+   sb_count = 2;
+   ioffset = 0;
+   ooffset = 2 * CCP_SB_BYTES - SHA384_DIGEST_SIZE;
+   break;

[PATCH V2 6/9] crypto: ccp - Add support for RSA on the CCP

2016-11-04 Thread Gary R Hook
Wire up the CCP as an RSA cipher provider.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-main.c |   19 ++
 drivers/crypto/ccp/ccp-crypto-rsa.c  |  294 ++
 drivers/crypto/ccp/ccp-crypto.h  |   32 
 include/linux/ccp.h  |1 
 5 files changed, 346 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..23f89b7 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
+  ccp-crypto-rsa.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index e0380e5..38d4466 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "ccp-crypto.h"
 
@@ -33,9 +34,14 @@
 module_param(sha_disable, uint, 0444);
 MODULE_PARM_DESC(sha_disable, "Disable use of SHA - any non-zero value");
 
+static unsigned int rsa_disable;
+module_param(rsa_disable, uint, 0444);
+MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
+
 /* List heads for the supported algorithms */
 static LIST_HEAD(hash_algs);
 static LIST_HEAD(cipher_algs);
+static LIST_HEAD(akcipher_algs);
 
 /* For any tfm, requests for that tfm must be returned on the order
  * received.  With multiple queues available, the CCP can process more
@@ -343,6 +349,12 @@ static int ccp_register_algs(void)
return ret;
}
 
+   if (!rsa_disable) {
+   ret = ccp_register_rsa_algs(&akcipher_algs);
+   if (ret)
+   return ret;
+   }
+
return 0;
 }
 
@@ -350,6 +362,7 @@ static void ccp_unregister_algs(void)
 {
struct ccp_crypto_ahash_alg *ahash_alg, *ahash_tmp;
struct ccp_crypto_ablkcipher_alg *ablk_alg, *ablk_tmp;
+   struct ccp_crypto_akcipher_alg *ak_alg, *ak_tmp;
 
list_for_each_entry_safe(ahash_alg, ahash_tmp, &hash_algs, entry) {
crypto_unregister_ahash(&ahash_alg->alg);
@@ -362,6 +375,12 @@ static void ccp_unregister_algs(void)
list_del(&ablk_alg->entry);
kfree(ablk_alg);
}
+
+   list_for_each_entry_safe(ak_alg, ak_tmp, &akcipher_algs, entry) {
+   crypto_unregister_akcipher(&ak_alg->alg);
+   list_del(&ak_alg->entry);
+   kfree(ak_alg);
+   }
 }
 
 static int ccp_crypto_init(void)
diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
new file mode 100644
index 000..6cb6c6f
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -0,0 +1,294 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static inline struct akcipher_request *akcipher_request_cast(
+   struct crypto_async_request *req)
+{
+   return container_of(req, struct akcipher_request, base);
+}
+
+static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct akcipher_request *req = akcipher_request_cast(async_req);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+
+
+   if (!ret)
+   req->dst_len = rctx->cmd.u.rsa.mod_len;
+
+   ret = 0;
+
+   return ret;
+}
+
+static int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
+{
+   return CCP_RSA_MAXMOD;
+}
+
+static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+   int ret = 0;
+
+   if (!ctx->u.rsa.pkey.d && !ctx->u.rsa.pkey.e)
+   return -EINVAL;
+
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_RSA;
+
+   rctx->cmd.u.rsa.key_size = ctx->u.rsa.key_len; /* in bits */
+   if (encrypt) {
+   rctx->cmd.u.rsa.exp = &ctx->u.rsa.e_sg;
+   rctx->cmd.u.rsa.exp_len = ctx->u.rsa.e_len;
+   } else {
+   rctx->cmd.u.rsa.exp = &ctx->u.rsa.d_sg;
+   rctx->cmd.u.rsa.exp_len = ctx->u.rsa.d_len;
+   }
+   rc

[PATCH V2 5/9] crypto: Move RSA+MPI constructs into an #include file

2016-11-04 Thread Gary R Hook
RSA constructs that are of general use, but dependent upon
MPI, should go into internal/rsa.h.

Signed-off-by: Gary R Hook 
---
 crypto/rsa.c  |   16 
 include/crypto/internal/rsa.h |   17 +
 2 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/crypto/rsa.c b/crypto/rsa.c
index 4c280b6..15e9220 100644
--- a/crypto/rsa.c
+++ b/crypto/rsa.c
@@ -16,12 +16,6 @@
 #include 
 #include 
 
-struct rsa_mpi_key {
-   MPI n;
-   MPI e;
-   MPI d;
-};
-
 /*
  * RSAEP function [RFC3447 sec 5.1.1]
  * c = m^e mod n;
@@ -240,16 +234,6 @@ static int rsa_verify(struct akcipher_request *req)
return ret;
 }
 
-static void rsa_free_mpi_key(struct rsa_mpi_key *key)
-{
-   mpi_free(key->d);
-   mpi_free(key->e);
-   mpi_free(key->n);
-   key->d = NULL;
-   key->e = NULL;
-   key->n = NULL;
-}
-
 static int rsa_check_key_length(unsigned int len)
 {
switch (len) {
diff --git a/include/crypto/internal/rsa.h b/include/crypto/internal/rsa.h
index 9e8f159..253b275 100644
--- a/include/crypto/internal/rsa.h
+++ b/include/crypto/internal/rsa.h
@@ -13,6 +13,7 @@
 #ifndef _RSA_HELPER_
 #define _RSA_HELPER_
 #include 
+#include 
 
 /**
  * rsa_key - RSA key structure
@@ -52,6 +53,22 @@ struct rsa_key {
size_t qinv_sz;
 };
 
+struct rsa_mpi_key {
+   MPI n;
+   MPI e;
+   MPI d;
+};
+
+static inline void rsa_free_mpi_key(struct rsa_mpi_key *key)
+{
+   mpi_free(key->d);
+   mpi_free(key->e);
+   mpi_free(key->n);
+   key->d = NULL;
+   key->e = NULL;
+   key->n = NULL;
+}
+
 int rsa_parse_pub_key(struct rsa_key *rsa_key, const void *key,
  unsigned int key_len);
 



[PATCH V2 7/9] crypto: ccp - Enhance RSA support for a v5 CCP

2016-11-04 Thread Gary R Hook
Take advantage of the increased RSA key size support in
the v5 CCP.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-rsa.c |5 ++
 drivers/crypto/ccp/ccp-crypto.h |1 
 drivers/crypto/ccp/ccp-dev-v3.c |1 
 drivers/crypto/ccp/ccp-dev-v5.c |   10 +++--
 drivers/crypto/ccp/ccp-dev.h|2 +
 drivers/crypto/ccp/ccp-ops.c|   76 ++-
 6 files changed, 61 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 6cb6c6f..5e68c8d 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -45,7 +45,10 @@ static int ccp_rsa_complete(struct crypto_async_request 
*async_req, int ret)
 
 static int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
 {
-   return CCP_RSA_MAXMOD;
+   if (ccp_version() > CCP_VERSION(3, 0))
+   return CCP5_RSA_MAXMOD;
+   else
+   return CCP_RSA_MAXMOD;
 }
 
 static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index aa525e6..76d8b63 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -223,6 +223,7 @@ struct ccp_rsa_req_ctx {
 };
 
 #defineCCP_RSA_MAXMOD  (4 * 1024 / 8)
+#defineCCP5_RSA_MAXMOD (16 * 1024 / 8)
 
 /* Common Context Structure */
 struct ccp_ctx {
diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 7bc0998..3a55628 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -571,4 +571,5 @@ static irqreturn_t ccp_irq_handler(int irq, void *data)
.perform = &ccp3_actions,
.bar = 2,
.offset = 0x2,
+   .rsamax = CCP_RSA_MAX_WIDTH,
 };
diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
index 05300a9..b31be75 100644
--- a/drivers/crypto/ccp/ccp-dev-v5.c
+++ b/drivers/crypto/ccp/ccp-dev-v5.c
@@ -421,10 +421,10 @@ static int ccp5_perform_rsa(struct ccp_op *op)
CCP5_CMD_DST_HI(&desc) = ccp_addr_hi(&op->dst.u.dma);
CCP5_CMD_DST_MEM(&desc) = CCP_MEMTYPE_SYSTEM;
 
-   /* Exponent is in LSB memory */
-   CCP5_CMD_KEY_LO(&desc) = op->sb_key * LSB_ITEM_SIZE;
-   CCP5_CMD_KEY_HI(&desc) = 0;
-   CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SB;
+   /* Key (Exponent) is in external memory */
+   CCP5_CMD_KEY_LO(&desc) = ccp_addr_lo(&op->exp.u.dma);
+   CCP5_CMD_KEY_HI(&desc) = ccp_addr_hi(&op->exp.u.dma);
+   CCP5_CMD_KEY_MEM(&desc) = CCP_MEMTYPE_SYSTEM;
 
return ccp5_do_cmd(&desc, op->cmd_q);
 }
@@ -1013,6 +1013,7 @@ static void ccp5other_config(struct ccp_device *ccp)
.perform = &ccp5_actions,
.bar = 2,
.offset = 0x0,
+   .rsamax = CCP5_RSA_MAX_WIDTH,
 };
 
 const struct ccp_vdata ccpv5b = {
@@ -1021,4 +1022,5 @@ static void ccp5other_config(struct ccp_device *ccp)
.perform = &ccp5_actions,
.bar = 2,
.offset = 0x0,
+   .rsamax = CCP5_RSA_MAX_WIDTH,
 };
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index 830f35e..f2e9bcb 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -193,6 +193,7 @@
 #define CCP_SHA_SB_COUNT   1
 
 #define CCP_RSA_MAX_WIDTH  4096
+#define CCP5_RSA_MAX_WIDTH 16384
 
 #define CCP_PASSTHRU_BLOCKSIZE 256
 #define CCP_PASSTHRU_MASKSIZE  32
@@ -638,6 +639,7 @@ struct ccp_vdata {
const struct ccp_actions *perform;
const unsigned int bar;
const unsigned int offset;
+   const unsigned int rsamax;
 };
 
 extern const struct ccp_vdata ccpv3;
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 213a752..f7398e9 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1282,37 +1282,43 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
unsigned int sb_count, i_len, o_len;
int ret;
 
-   if (rsa->key_size > CCP_RSA_MAX_WIDTH)
+   /* Check against the maximum allowable size, in bits */
+   if (rsa->key_size > cmd_q->ccp->vdata->rsamax)
return -EINVAL;
 
if (!rsa->exp || !rsa->mod || !rsa->src || !rsa->dst)
return -EINVAL;
 
-   /* The RSA modulus must precede the message being acted upon, so
-* it must be copied to a DMA area where the message and the
-* modulus can be concatenated.  Therefore the input buffer
-* length required is twice the output buffer length (which
-* must be a multiple of 256-bits).
-*/
-   o_len = ((rsa->key_size + 255) / 256) * 32;
-   i_len = o_len * 2;
-
-   sb_count = o_len / CCP_SB_BYTES;
-
memset(&op, 0, sizeof(op));
op.cmd_q = cmd_q;
-   op.jobid = ccp_gen_jobid(cmd_q->ccp);
-   op.sb_key = cmd_q->ccp->vdata->perform->

[PATCH V2 8/9] crypto: ccp - Enable support for AES GCM on v5 CCPs

2016-11-04 Thread Gary R Hook
A version 5 device provides the primitive commands
required for AES GCM. This patch adds support for
en/decryption.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile|1 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 
 drivers/crypto/ccp/ccp-crypto-main.c   |   12 +
 drivers/crypto/ccp/ccp-crypto.h|   14 ++
 drivers/crypto/ccp/ccp-dev-v5.c|2 
 drivers/crypto/ccp/ccp-dev.h   |1 
 drivers/crypto/ccp/ccp-ops.c   |  252 +++
 include/linux/ccp.h|9 +
 8 files changed, 548 insertions(+)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 23f89b7..fd77225 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -13,4 +13,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-rsa.o \
+  ccp-crypto-aes-galois.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
new file mode 100644
index 000..8bc18c9
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
@@ -0,0 +1,257 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) AES GCM crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+#defineAES_GCM_IVSIZE  12
+
+static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int 
ret)
+{
+   return ret;
+}
+
+static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+
+   switch (key_len) {
+   case AES_KEYSIZE_128:
+   ctx->u.aes.type = CCP_AES_TYPE_128;
+   break;
+   case AES_KEYSIZE_192:
+   ctx->u.aes.type = CCP_AES_TYPE_192;
+   break;
+   case AES_KEYSIZE_256:
+   ctx->u.aes.type = CCP_AES_TYPE_256;
+   break;
+   default:
+   crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+   return -EINVAL;
+   }
+
+   ctx->u.aes.mode = CCP_AES_MODE_GCM;
+   ctx->u.aes.key_len = key_len;
+
+   memcpy(ctx->u.aes.key, key, key_len);
+   sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
+
+   return 0;
+}
+
+static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+  unsigned int authsize)
+{
+   return 0;
+}
+
+static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
+{
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+   struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int i;
+   int ret = 0;
+
+   if (!ctx->u.aes.key_len)
+   return -EINVAL;
+
+   if (ctx->u.aes.mode != CCP_AES_MODE_GCM)
+   return -EINVAL;
+
+   if (!req->iv)
+   return -EINVAL;
+
+   /*
+* 5 parts:
+*   plaintext/ciphertext input
+*   AAD
+*   key
+*   IV
+*   Destination+tag buffer
+*/
+
+   /* According to the way AES GCM has been implemented here,
+* per RFC 4106 it seems, the provided IV is fixed at 12 bytes,
+* occupies the beginning of the IV array. Write a 32-bit
+* integer after that (bytes 13-16) with a value of "1".
+*/
+   memcpy(rctx->iv, req->iv, AES_GCM_IVSIZE);
+   for (i = 0; i < 3; i++)
+   rctx->iv[i + AES_GCM_IVSIZE] = 0;
+   rctx->iv[AES_BLOCK_SIZE - 1] = 1;
+
+   /* Set up a scatterlist for the IV */
+   iv_sg = &rctx->iv_sg;
+   iv_len = AES_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+
+   /* The AAD + plaintext are concatenated in the src buffer */
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_AES;
+   rctx->cmd.u.aes.type = ctx->u.aes.type;
+   rctx->cmd.u.aes.mode = ctx->u.aes.mode;
+   rctx->cmd.u.aes.action =
+   (encrypt) ? CCP_AES_ACTION_ENCRYPT : CCP_AES_ACTION_DECRYPT;
+   rctx->cmd.u.aes.key = &ctx->u.aes.key_sg;
+   rctx->cmd.u.aes.key_len = ctx->u.aes.key_len;
+   rctx->cmd.u.aes.iv = iv_sg;
+   rctx->cmd.u.aes.iv_len = iv_len;

[PATCH V2 9/9] crypto: ccp - Enable 3DES function on v5 CCPs

2016-11-04 Thread Gary R Hook
Wire up support for Triple DES in ECB and CBC modes.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-des3.c |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c |   10 +
 drivers/crypto/ccp/ccp-crypto.h  |   20 +++
 drivers/crypto/ccp/ccp-dev-v3.c  |1 
 drivers/crypto/ccp/ccp-dev-v5.c  |   54 +++
 drivers/crypto/ccp/ccp-dev.h |   14 ++
 drivers/crypto/ccp/ccp-ops.c |  198 +++
 include/linux/ccp.h  |   57 +++-
 9 files changed, 606 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index fd77225..563594a 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -14,4 +14,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-rsa.o \
   ccp-crypto-aes-galois.o \
+  ccp-crypto-des3.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-des3.c b/drivers/crypto/ccp/ccp-crypto-des3.c
new file mode 100644
index 000..5af7347
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-des3.c
@@ -0,0 +1,254 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) DES3 crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static int ccp_des3_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+
+   if (ret)
+   return ret;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB)
+   memcpy(req->info, rctx->iv, DES3_EDE_BLOCK_SIZE);
+
+   return 0;
+}
+
+static int ccp_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+   unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm));
+   struct ccp_crypto_ablkcipher_alg *alg =
+   ccp_crypto_ablkcipher_alg(crypto_ablkcipher_tfm(tfm));
+   u32 *flags = &tfm->base.crt_flags;
+
+
+   /* From des_generic.c:
+*
+* RFC2451:
+*   If the first two or last two independent 64-bit keys are
+*   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
+*   same as DES.  Implementers MUST reject keys that exhibit this
+*   property.
+*/
+   const u32 *K = (const u32 *)key;
+
+   if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
+!((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+(*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
+   *flags |= CRYPTO_TFM_RES_WEAK_KEY;
+   return -EINVAL;
+   }
+
+   /* It's not clear that there is any support for a keysize of 112.
+* If needed, the caller should make K1 == K3
+*/
+   ctx->u.des3.type = CCP_DES3_TYPE_168;
+   ctx->u.des3.mode = alg->mode;
+   ctx->u.des3.key_len = key_len;
+
+   memcpy(ctx->u.des3.key, key, key_len);
+   sg_init_one(&ctx->u.des3.key_sg, ctx->u.des3.key, key_len);
+
+   return 0;
+}
+
+static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int ret;
+
+   if (!ctx->u.des3.key_len)
+   return -EINVAL;
+
+   if (((ctx->u.des3.mode == CCP_DES3_MODE_ECB) ||
+(ctx->u.des3.mode == CCP_DES3_MODE_CBC)) &&
+   (req->nbytes & (DES3_EDE_BLOCK_SIZE - 1)))
+   return -EINVAL;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB) {
+   if (!req->info)
+   return -EINVAL;
+
+   memcpy(rctx->iv, req->info, DES3_EDE_BLOCK_SIZE);
+   iv_sg = &rctx->iv_sg;
+   iv_len = DES3_EDE_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+   }
+
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_DES3;
+   rctx->cmd.u.des3.type = ctx->u.des3.type;
+   rctx->cmd.u.des3.mode = ctx->u.des3.mode;
+   rctx->cmd.u.des3.action = (encrypt)
+ ? CCP_DES3_ACTION_ENCRYPT
+ : CCP_DES3_ACTION_DECRYPT;
+   rctx->cmd.u.des3.key = &ctx->u.des3.key_sg;
+   rctx->cmd.u.des3.key_len = ctx->u.des3.key_len;
+   rctx->cmd.u.des3.iv = iv_sg;
+   rctx->cmd.u.des3.iv_len = iv_len;
+   rctx->cmd.u.des3.src = req->src;
+   rctx->cmd.u.des3.src_len = req->nbytes;
+   rctx->cmd.u.des3.dst = req->dst;
+
+   ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
+
+   return ret;
+}

Re: vmalloced stacks and scatterwalk_map_and_copy()

2016-11-04 Thread Eric Biggers
On Thu, Nov 03, 2016 at 08:57:49PM -0700, Andy Lutomirski wrote:
> 
> The crypto request objects can live on the stack just fine.  It's the
> request buffers that need to live elsewhere (or the alternative
> interfaces can be used, or the crypto core code can start using
> something other than scatterlists).
> 

There are cases where a crypto operation is done on a buffer embedded in a
request object.  The example I'm aware of is in the GCM implementation
(crypto/gcm.c).  Basically it needs to encrypt 16 zero bytes prepended to the
actual data, so it fills a buffer in the request object
(crypto_gcm_req_priv_ctx.auth_tag) with zeroes and builds a new scatterlist
which covers both this buffer and the original data scatterlist.

Granted, GCM provides the aead interface not the skcipher interface, and
currently there is no AEAD_REQUEST_ON_STACK() macro like there is a
SKCIPHER_REQUEST_ON_STACK() macro.  So maybe no one is creating aead requests on
the stack right now.  But it's something to watch out for.

Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] poly1305: generic C can be faster on chips with slow unaligned access

2016-11-04 Thread Eric Biggers
On Thu, Nov 03, 2016 at 11:20:08PM +0100, Jason A. Donenfeld wrote:
> Hi David,
> 
> On Thu, Nov 3, 2016 at 6:08 PM, David Miller  wrote:
> > In any event no piece of code should be doing 32-bit word reads from
> > addresses like "x + 3" without, at a very minimum, going through the
> > kernel unaligned access handlers.
> 
> Excellent point. In otherwords,
> 
> ctx->r[0] = (le32_to_cpuvp(key +  0) >> 0) & 0x3ffffff;
> ctx->r[1] = (le32_to_cpuvp(key +  3) >> 2) & 0x3ffff03;
> ctx->r[2] = (le32_to_cpuvp(key +  6) >> 4) & 0x3ffc0ff;
> ctx->r[3] = (le32_to_cpuvp(key +  9) >> 6) & 0x3f03fff;
> ctx->r[4] = (le32_to_cpuvp(key + 12) >> 8) & 0x00fffff;
> 
> should change to:
> 
> ctx->r[0] = (le32_to_cpuvp(key +  0) >> 0) & 0x3ffffff;
> ctx->r[1] = (get_unaligned_le32(key +  3) >> 2) & 0x3ffff03;
> ctx->r[2] = (get_unaligned_le32(key +  6) >> 4) & 0x3ffc0ff;
> ctx->r[3] = (get_unaligned_le32(key +  9) >> 6) & 0x3f03fff;
> ctx->r[4] = (le32_to_cpuvp(key + 12) >> 8) & 0x00fffff;
> 

I agree, and the current code is wrong; but do note that this proposal is
correct for poly1305_setrkey() but not for poly1305_setskey() and
poly1305_blocks().  In the latter two cases, 4-byte alignment of the source
buffer is *not* guaranteed.  Although crypto_poly1305_update() will be called
with a 4-byte aligned buffer due to the alignmask set on poly1305_alg, the
algorithm operates on 16-byte blocks and therefore has to buffer partial blocks.
If some number of bytes that is not 0 mod 4 is buffered, then the buffer will
fall out of alignment on the next update call.  Hence, get_unaligned_le32() is
actually needed on all the loads, since the buffer will, in general, be of
unknown alignment.

Note: some other shash algorithms have this problem too and do not handle it
correctly.  It seems to be a common mistake.

Eric


[PATCH] crypto: caam: do not register AES-XTS mode on LP units

2016-11-04 Thread Sven Ebenfeld
When using AES-XTS on a Wandboard, we receive a Mode error:
caam_jr 2102000.jr1: 20001311: CCB: desc idx 19: AES: Mode error.

According to the Security Reference Manual, the Low Power AES units
of the i.MX6 do not support the XTS mode. Therefore we should not
register XTS implementations in the Crypto API.

Signed-off-by: Sven Ebenfeld 
---
 drivers/crypto/caam/caamalg.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 156aad1..f5a63ba 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -4583,6 +4583,15 @@ static int __init caam_algapi_init(void)
if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
continue;
 
+   /*
+* Check support for AES modes not available
+* on LP devices.
+*/
+   if ((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP)
+   if ((alg->class1_alg_type & OP_ALG_AAI_MASK) ==
+OP_ALG_AAI_XTS)
+   continue;
+
t_alg = caam_alg_alloc(alg);
if (IS_ERR(t_alg)) {
err = PTR_ERR(t_alg);
-- 
2.7.4
