Re: [PATCH 2/6] crypto: ccp - Remove unneeded sign-extension support

2016-10-13 Thread Tom Lendacky
On 10/13/2016 09:53 AM, Gary R Hook wrote:
> The reverse-get/set functions can be simplified by
> eliminating unused code.
> 
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/ccp-ops.c |  145 
> +-
>  1 file changed, 59 insertions(+), 86 deletions(-)
> 
> diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
> index 8fedb14..82cc637 100644
> --- a/drivers/crypto/ccp/ccp-ops.c
> +++ b/drivers/crypto/ccp/ccp-ops.c
> @@ -198,62 +198,46 @@ static void ccp_get_dm_area(struct ccp_dm_workarea *wa, 
> unsigned int wa_offset,
>  }
>  
>  static int ccp_reverse_set_dm_area(struct ccp_dm_workarea *wa,
> +unsigned int wa_offset,
>  struct scatterlist *sg,
> -unsigned int len, unsigned int se_len,
> -bool sign_extend)
> +unsigned int sg_offset,
> +unsigned int len)
>  {
> - unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
> - u8 buffer[CCP_REVERSE_BUF_SIZE];
> -
> - if (WARN_ON(se_len > sizeof(buffer)))
> - return -EINVAL;
> -
> - sg_offset = len;
> - dm_offset = 0;
> - nbytes = len;
> - while (nbytes) {
> - sb_len = min_t(unsigned int, nbytes, se_len);
> - sg_offset -= sb_len;
> -
> - scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 0);
> - for (i = 0; i < sb_len; i++)
> - wa->address[dm_offset + i] = buffer[sb_len - i - 1];
> -
> - dm_offset += sb_len;
> - nbytes -= sb_len;
> -
> - if ((sb_len != se_len) && sign_extend) {
> - /* Must sign-extend to nearest sign-extend length */
> - if (wa->address[dm_offset - 1] & 0x80)
> - memset(wa->address + dm_offset, 0xff,
> -se_len - sb_len);
> - }
> + u8 *p, *q;
> +
> + ccp_set_dm_area(wa, wa_offset, sg, sg_offset, len);
> +
> + p = wa->address + wa_offset;
> + q = p + len - 1;
> + while (p < q) {
> + *p = *p ^ *q;
> + *q = *p ^ *q;
> + *p = *p ^ *q;
> + p++;
> + q--;
>   }
> -
>   return 0;
>  }
>  
>  static void ccp_reverse_get_dm_area(struct ccp_dm_workarea *wa,
> + unsigned int wa_offset,
>   struct scatterlist *sg,
> + unsigned int sg_offset,
>   unsigned int len)
>  {
> - unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
> - u8 buffer[CCP_REVERSE_BUF_SIZE];
> -
> - sg_offset = 0;
> - dm_offset = len;
> - nbytes = len;
> - while (nbytes) {
> - sb_len = min_t(unsigned int, nbytes, sizeof(buffer));
> - dm_offset -= sb_len;
> -
> - for (i = 0; i < sb_len; i++)
> - buffer[sb_len - i - 1] = wa->address[dm_offset + i];
> - scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 1);
> -
> - sg_offset += sb_len;
> - nbytes -= sb_len;
> + u8 *p, *q;
> +
> + p = wa->address + wa_offset;
> + q = p + len - 1;
> + while (p < q) {
> + *p = *p ^ *q;
> + *q = *p ^ *q;
> + *p = *p ^ *q;
> + p++;
> + q--;
>   }
> +
> + ccp_get_dm_area(wa, wa_offset, sg, sg_offset, len);
>  }
>  
>  static void ccp_free_data(struct ccp_data *data, struct ccp_cmd_queue *cmd_q)
> @@ -1294,7 +1278,9 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, 
> struct ccp_cmd *cmd)
>   struct ccp_data dst;
>   struct ccp_op op;
>   unsigned int sb_count, i_len, o_len;
> - int ret;
> + unsigned int dm_offset;
> + int i = 0;

Are "dm_offset" and "i" used anywhere?  I don't see them used in this
function...

> + int ret = 0;

No need to change this, is there?

Thanks,
Tom

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 3/6] crypto: ccp - Add support for RSA on the CCP

2016-10-13 Thread Gary R Hook

On 10/13/2016 01:25 PM, Stephan Mueller wrote:

Am Donnerstag, 13. Oktober 2016, 09:53:09 CEST schrieb Gary R Hook:

Hi Gary,


Wire up the v3 CCP as a cipher provider.

Signed-off-by: Gary R Hook 
---

...snip...

+}
+
+static void ccp_free_mpi_key(struct ccp_rsa_key *key)
+{
+   mpi_free(key->d);
+   key->d = NULL;
+   mpi_free(key->e);
+   key->e = NULL;
+   mpi_free(key->n);
+   key->n = NULL;
+}


Could you please see whether that function can be turned into a common
function call? crypto/rsa.c implements the same code in rsa_free_mpi_key.


I am happy to do so, but was unsure of protocol. rsa.c is in a module, which
makes my module depend upon another. I do not want to do that. And moving
this function elsewhere makes no sense.

I would go with an inline function, but there's no obvious place for it.
The RSA software implementation uses the MPI library, but there's no
requirement to do so (witness the qat driver). Thus, an inline function can't
be put in internal/rsa.h without moving the rsa_mpi_key definition and
referencing mpi.h.

I think that RSA+MPI things, such as rsa_mpi_key and this function, could go
into internal/rsa.h, but it would be necessary to #include mpi.h.

Or: create a new include file that contains these (and any other) RSA/MPI
amalgams.

Which would you prefer?


+
+static int ccp_check_key_length(unsigned int len)
+{
+   /* In bits */
+   if (len < 8 || len > 16384)
+   return -EINVAL;
+   return 0;
+}
+
+static void ccp_rsa_free_key_bufs(struct ccp_ctx *ctx)
+{
+   /* Clean up old key data */
+   kfree(ctx->u.rsa.e_buf);
+   ctx->u.rsa.e_buf = NULL;
+   ctx->u.rsa.e_len = 0;
+   kfree(ctx->u.rsa.n_buf);
+   ctx->u.rsa.n_buf = NULL;
+   ctx->u.rsa.n_len = 0;
+   kfree(ctx->u.rsa.d_buf);


kzfree, please.


Of course. Done.



...snip...

+}
+
+static struct akcipher_alg rsa = {
+   .encrypt = ccp_rsa_encrypt,
+   .decrypt = ccp_rsa_decrypt,
+   .sign = NULL,
+   .verify = NULL,
+   .set_pub_key = ccp_rsa_setpubkey,
+   .set_priv_key = ccp_rsa_setprivkey,
+   .max_size = ccp_rsa_maxsize,
+   .init = ccp_rsa_init_tfm,
+   .exit = ccp_rsa_exit_tfm,
+   .reqsize = sizeof(struct ccp_rsa_req_ctx),
+   .base = {
+   .cra_name = "rsa",
+   .cra_driver_name = "rsa-ccp",
+   .cra_priority = 100,


Are you sure you want to leave it at 100? With this value, it will contend
with the C implementation.


No, I don't. Our other functions are at 300 (CCP_CRA_PRIORITY), which is what
this should be.




+   .cra_module = THIS_MODULE,
+   .cra_ctxsize = sizeof(struct ccp_ctx),
+   },
+};
+
...snip...





Ciao
Stephan



Thank you. I hope snipping is acceptable...

--
This is my day job. Follow me at:
IG/Twitter/Facebook: @grhookphoto
IG/Twitter/Facebook: @grhphotographer


Re: [PATCH 6/6] crypto: ccp - Enable 3DES function on v5 CCPs

2016-10-13 Thread Tom Lendacky
On 10/13/2016 09:53 AM, Gary R Hook wrote:
> Wire up support for Triple DES in ECB mode.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile  |1 
>  drivers/crypto/ccp/ccp-crypto-des3.c |  254 
> ++
>  drivers/crypto/ccp/ccp-crypto-main.c |   10 +
>  drivers/crypto/ccp/ccp-crypto.h  |   25 +++
>  drivers/crypto/ccp/ccp-dev-v3.c  |1 
>  drivers/crypto/ccp/ccp-dev-v5.c  |   65 -
>  drivers/crypto/ccp/ccp-dev.h |   18 ++
>  drivers/crypto/ccp/ccp-ops.c |  201 +++
>  drivers/crypto/ccp/ccp-pci.c |2 
>  include/linux/ccp.h  |   57 +++-
>  10 files changed, 624 insertions(+), 10 deletions(-)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c
> 

...snip...

> --- a/drivers/crypto/ccp/ccp-crypto.h
> +++ b/drivers/crypto/ccp/ccp-crypto.h
> @@ -26,6 +26,8 @@
>  #include 
>  #include 
>  
> +#define  CCP_LOG_LEVEL   KERN_INFO
> +

Not used anywhere that I can tell.

>  #define CCP_CRA_PRIORITY 300
>  
>  struct ccp_crypto_ablkcipher_alg {
> @@ -151,7 +153,26 @@ struct ccp_aes_cmac_exp_ctx {
>   u8 buf[AES_BLOCK_SIZE];
>  };
>  
> -/* SHA-related defines
> +/* 3DES related defines */
> +struct ccp_des3_ctx {
> + enum ccp_engine engine;
> + enum ccp_des3_type type;
> + enum ccp_des3_mode mode;
> +
> + struct scatterlist key_sg;
> + unsigned int key_len;
> + u8 key[AES_MAX_KEY_SIZE];
> +};
> +
> +struct ccp_des3_req_ctx {
> + struct scatterlist iv_sg;
> + u8 iv[AES_BLOCK_SIZE];
> +
> + struct ccp_cmd cmd;
> +};
> +
> +/*
> + * SHA-related defines
>   * These values must be large enough to accommodate any variant
>   */
>  #define MAX_SHA_CONTEXT_SIZE SHA512_DIGEST_SIZE
> @@ -236,6 +257,7 @@ struct ccp_ctx {
>   struct ccp_aes_ctx aes;
>   struct ccp_rsa_ctx rsa;
>   struct ccp_sha_ctx sha;
> + struct ccp_des3_ctx des3;
>   } u;
>  };
>  
> @@ -251,5 +273,6 @@ int ccp_register_aes_aeads(struct list_head *head);
>  int ccp_register_sha_algs(struct list_head *head);
>  int ccp_register_rsa_algs(void);
>  void ccp_unregister_rsa_algs(void);
> +int ccp_register_des3_algs(struct list_head *head);
>  
>  #endif
> diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
> index 75a0978..fccca16 100644
> --- a/drivers/crypto/ccp/ccp-dev-v3.c
> +++ b/drivers/crypto/ccp/ccp-dev-v3.c
> @@ -595,6 +595,7 @@ static irqreturn_t ccp_irq_handler(int irq, void *data)
>  static const struct ccp_actions ccp3_actions = {
>   .aes = ccp_perform_aes,
>   .xts_aes = ccp_perform_xts_aes,
> + .des3 = NULL,
>   .sha = ccp_perform_sha,
>   .rsa = ccp_perform_rsa,
>   .passthru = ccp_perform_passthru,
> diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
> index dcae391..85387dc 100644
> --- a/drivers/crypto/ccp/ccp-dev-v5.c
> +++ b/drivers/crypto/ccp/ccp-dev-v5.c
> @@ -101,6 +101,12 @@ union ccp_function {
>   u16 type:2;
>   } aes_xts;
>   struct {
> + u16 size:7;
> + u16 encrypt:1;
> + u16 mode:5;
> + u16 type:2;
> + } des3;
> + struct {
>   u16 rsvd1:10;
>   u16 type:4;
>   u16 rsvd2:1;
> @@ -132,6 +138,10 @@ union ccp_function {
>  #define  CCP_AES_TYPE(p) ((p)->aes.type)
>  #define  CCP_XTS_SIZE(p) ((p)->aes_xts.size)
>  #define  CCP_XTS_ENCRYPT(p)  ((p)->aes_xts.encrypt)
> +#define  CCP_DES3_SIZE(p)((p)->des3.size)
> +#define  CCP_DES3_ENCRYPT(p) ((p)->des3.encrypt)
> +#define  CCP_DES3_MODE(p)((p)->des3.mode)
> +#define  CCP_DES3_TYPE(p)((p)->des3.type)
>  #define  CCP_SHA_TYPE(p) ((p)->sha.type)
>  #define  CCP_RSA_SIZE(p) ((p)->rsa.size)
>  #define  CCP_PT_BYTESWAP(p)  ((p)->pt.byteswap)
> @@ -242,13 +252,16 @@ static int ccp5_do_cmd(struct ccp5_desc *desc,
>   /* Wait for the job to complete */
>   ret = wait_event_interruptible(cmd_q->int_queue,
>  cmd_q->int_rcvd);
> - if (ret || cmd_q->cmd_error) {
> + if (cmd_q->cmd_error) {
> + /*
> +  * Log the error and flush the queue by
> +  * moving the head pointer
> +  */

I don't think you wanted to remove the check for ret in the if
statement above.

>   if (cmd_q->cmd_error)
>   ccp_log_error(cmd_q->ccp,
> cmd_q->cmd_error);
> - /* A version 5 device doesn't use Job IDs... */
> - if (!ret)
> - ret = -EIO;
> + iowrite32(tail, cmd_q->reg_head_lo);
> + 

Re: [PATCH 5/6] crypto: ccp - Enable support for AES GCM on v5 CCPs

2016-10-13 Thread Tom Lendacky
On 10/13/2016 09:53 AM, Gary R Hook wrote:
> A version 5 device provides the primitive commands
> required for AES GCM. This patch adds support for
> en/decryption.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile|1 
>  drivers/crypto/ccp/ccp-crypto-aes-galois.c |  252 +++
>  drivers/crypto/ccp/ccp-crypto-main.c   |   12 +
>  drivers/crypto/ccp/ccp-crypto.h|   14 +
>  drivers/crypto/ccp/ccp-dev-v5.c|2 
>  drivers/crypto/ccp/ccp-dev.h   |1 
>  drivers/crypto/ccp/ccp-ops.c   |  262 
> 
>  include/linux/ccp.h|9 +
>  8 files changed, 553 insertions(+)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
> 
> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 23f89b7..fd77225 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -13,4 +13,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
>  ccp-crypto-aes-cmac.o \
>  ccp-crypto-aes-xts.o \
>  ccp-crypto-rsa.o \
> +ccp-crypto-aes-galois.o \
>  ccp-crypto-sha.o
> diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c 
> b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
> new file mode 100644
> index 000..5da324f
> --- /dev/null
> +++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
> @@ -0,0 +1,252 @@
> +/*
> + * AMD Cryptographic Coprocessor (CCP) AES crypto API support
> + *
> + * Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Tom Lendacky 

Maybe put your name here...

> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ccp-crypto.h"
> +
> +#define  AES_GCM_IVSIZE  12
> +
> +static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int 
> ret)
> +{
> + return ret;
> +}
> +
> +static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
> +   unsigned int key_len)
> +{
> + struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
> +
> + switch (key_len) {
> + case AES_KEYSIZE_128:
> + ctx->u.aes.type = CCP_AES_TYPE_128;
> + break;
> + case AES_KEYSIZE_192:
> + ctx->u.aes.type = CCP_AES_TYPE_192;
> + break;
> + case AES_KEYSIZE_256:
> + ctx->u.aes.type = CCP_AES_TYPE_256;
> + break;
> + default:
> + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> + return -EINVAL;
> + }
> +
> + ctx->u.aes.mode = CCP_AES_MODE_GCM;
> + ctx->u.aes.key_len = key_len;
> +
> + memcpy(ctx->u.aes.key, key, key_len);
> + sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
> +
> + return 0;
> +}
> +
> +static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
> +unsigned int authsize)
> +{
> + return 0;
> +}
> +
> +static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
> +{
> + struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> + struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
> + struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
> + struct scatterlist *iv_sg = NULL;
> + unsigned int iv_len = 0;
> + int i;
> + int ret = 0;
> +
> + if (!ctx->u.aes.key_len)
> + return -EINVAL;
> +
> + if (ctx->u.aes.mode != CCP_AES_MODE_GCM)
> + return -EINVAL;
> +
> + if (!req->iv)
> + return -EINVAL;
> +
> + /*
> +  * 5 parts:
> +  *   plaintext/ciphertext input
> +  *   AAD
> +  *   key
> +  *   IV
> +  *   Destination+tag buffer
> +  */
> +
> + /* Copy the IV and initialize a scatterlist */
> + memset(rctx->iv, 0, AES_BLOCK_SIZE);
> + memcpy(rctx->iv, req->iv, AES_GCM_IVSIZE);
> + for (i = 0; i < 3; i++)
> + rctx->iv[i + AES_GCM_IVSIZE] = 0;

Is this needed if you did the memset to zero above?

> + rctx->iv[AES_BLOCK_SIZE - 1] = 1;
> + iv_sg = &rctx->iv_sg;
> + iv_len = AES_BLOCK_SIZE;
> + sg_init_one(iv_sg, rctx->iv, iv_len);
> +
> + /* The AAD + plaintext are concatenated in the src buffer */
> + memset(&rctx->cmd, 0, sizeof(rctx->cmd));
> + INIT_LIST_HEAD(&rctx->cmd.entry);
> + rctx->cmd.engine = CCP_ENGINE_AES;
> + rctx->cmd.u.aes.type = ctx->u.aes.type;
> + rctx->cmd.u.aes.mode = ctx->u.aes.mode;
> + rctx->cmd.u.aes.action =
> + (encrypt) ? CCP_AES_ACTION_ENCRYPT : CCP_AES_ACTION_DECRYPT;
> + rctx->cmd.u.aes.key = &ctx->u.aes.key_sg;
> + rctx->cmd.u.aes.key_len = 

Re: [PATCH 3/6] crypto: ccp - Add support for RSA on the CCP

2016-10-13 Thread Tom Lendacky
On 10/13/2016 09:53 AM, Gary R Hook wrote:
> Wire up the v3 CCP as a cipher provider.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile  |1 
>  drivers/crypto/ccp/ccp-crypto-main.c |   15 ++
>  drivers/crypto/ccp/ccp-crypto-rsa.c  |  258 
> ++
>  drivers/crypto/ccp/ccp-crypto.h  |   24 +++
>  drivers/crypto/ccp/ccp-dev-v3.c  |   38 +
>  drivers/crypto/ccp/ccp-ops.c |1 
>  include/linux/ccp.h  |   34 
>  7 files changed, 370 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c
> 
> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 346ceb8..23f89b7 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
>  ccp-crypto-aes.o \
>  ccp-crypto-aes-cmac.o \
>  ccp-crypto-aes-xts.o \
> +ccp-crypto-rsa.o \
>  ccp-crypto-sha.o
> diff --git a/drivers/crypto/ccp/ccp-crypto-main.c 
> b/drivers/crypto/ccp/ccp-crypto-main.c
> index e0380e5..f3c4c25 100644
> --- a/drivers/crypto/ccp/ccp-crypto-main.c
> +++ b/drivers/crypto/ccp/ccp-crypto-main.c
> @@ -33,6 +33,10 @@ static unsigned int sha_disable;
>  module_param(sha_disable, uint, 0444);
>  MODULE_PARM_DESC(sha_disable, "Disable use of SHA - any non-zero value");
>  
> +static unsigned int rsa_disable;
> +module_param(rsa_disable, uint, 0444);
> +MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
> +
>  /* List heads for the supported algorithms */
>  static LIST_HEAD(hash_algs);
>  static LIST_HEAD(cipher_algs);
> @@ -343,6 +347,14 @@ static int ccp_register_algs(void)
>   return ret;
>   }
>  
> + if (!rsa_disable) {
> + ret = ccp_register_rsa_algs();
> + if (ret) {
> + rsa_disable = 1;
> + return ret;
> + }
> + }
> +
>   return 0;
>  }
>  
> @@ -362,6 +374,9 @@ static void ccp_unregister_algs(void)
>   list_del(&ablk_alg->entry);
>   kfree(ablk_alg);
>   }
> +
> + if (!rsa_disable)
> + ccp_unregister_rsa_algs();
>  }
>  
>  static int ccp_crypto_init(void)
> diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c 
> b/drivers/crypto/ccp/ccp-crypto-rsa.c
> new file mode 100644
> index 000..7dab43b
> --- /dev/null
> +++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
> @@ -0,0 +1,258 @@
> +/*
> + * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
> + *
> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Gary R Hook 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ccp-crypto.h"
> +
> +static inline struct akcipher_request *akcipher_request_cast(
> + struct crypto_async_request *req)
> +{
> + return container_of(req, struct akcipher_request, base);
> +}
> +
> +static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
> +{
> + struct akcipher_request *req = akcipher_request_cast(async_req);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> +
> + if (!ret)
> + req->dst_len = rctx->cmd.u.rsa.d_len;
> +
> + ret = 0;
> +
> + return ret;
> +}
> +
> +static int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
> +{
> + return CCP_RSA_MAXMOD;
> +}
> +
> +static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
> +{
> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
> + struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> + int ret = 0;
> +
> + if (!ctx->u.rsa.pkey.d && !ctx->u.rsa.pkey.e)
> + return -EINVAL;
> +
> + memset(&rctx->cmd, 0, sizeof(rctx->cmd));
> + INIT_LIST_HEAD(&rctx->cmd.entry);
> + rctx->cmd.engine = CCP_ENGINE_RSA;
> + rctx->cmd.u.rsa.mode = encrypt ? CCP_RSA_ENCRYPT : CCP_RSA_DECRYPT;
> +
> + rctx->cmd.u.rsa.pkey = ctx->u.rsa.pkey;
> + rctx->cmd.u.rsa.key_size = ctx->u.rsa.key_len;

The existing interface expects the key_size to be in bits, so you'll
need to multiply this by 8.

> + rctx->cmd.u.rsa.exp = &ctx->u.rsa.e_sg;
> + rctx->cmd.u.rsa.exp_len = ctx->u.rsa.e_len;
> + rctx->cmd.u.rsa.mod = &ctx->u.rsa.n_sg;
> + rctx->cmd.u.rsa.mod_len = ctx->u.rsa.n_len;
> + if (ctx->u.rsa.pkey.d) {
> + rctx->cmd.u.rsa.d_sg = &ctx->u.rsa.d_sg;
> + rctx->cmd.u.rsa.d_len = ctx->u.rsa.d_len;
> + }
> +
> + rctx->cmd.u.rsa.src = req->src;
> + rctx->cmd.u.rsa.src_len = req->src_len;
> + 

Re: [PATCH 1/6] crypto: ccp - Add SHA-2 support

2016-10-13 Thread Tom Lendacky
On 10/13/2016 09:52 AM, Gary R Hook wrote:
> Incorporate 384-bit and 512-bit hashing for a version 5 CCP
> device
> 
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/ccp-crypto-sha.c |   22 +++
>  drivers/crypto/ccp/ccp-crypto.h |9 +++--
>  drivers/crypto/ccp/ccp-ops.c|   70 
> +++
>  include/linux/ccp.h |3 ++
>  4 files changed, 101 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c 
> b/drivers/crypto/ccp/ccp-crypto-sha.c
> index 84a652b..6b46eea 100644
> --- a/drivers/crypto/ccp/ccp-crypto-sha.c
> +++ b/drivers/crypto/ccp/ccp-crypto-sha.c
> @@ -146,6 +146,12 @@ static int ccp_do_sha_update(struct ahash_request *req, 
> unsigned int nbytes,
>   case CCP_SHA_TYPE_256:
>   rctx->cmd.u.sha.ctx_len = SHA256_DIGEST_SIZE;
>   break;
> + case CCP_SHA_TYPE_384:
> + rctx->cmd.u.sha.ctx_len = SHA384_DIGEST_SIZE;
> + break;
> + case CCP_SHA_TYPE_512:
> + rctx->cmd.u.sha.ctx_len = SHA512_DIGEST_SIZE;
> + break;
>   default:
>   /* Should never get here */
>   break;
> @@ -393,6 +399,22 @@ static struct ccp_sha_def sha_algs[] = {
>   .digest_size= SHA256_DIGEST_SIZE,
>   .block_size = SHA256_BLOCK_SIZE,
>   },
> + {
> + .version= CCP_VERSION(5, 0),
> + .name   = "sha384",
> + .drv_name   = "sha384-ccp",
> + .type   = CCP_SHA_TYPE_384,
> + .digest_size= SHA384_DIGEST_SIZE,
> + .block_size = SHA384_BLOCK_SIZE,
> + },
> + {
> + .version= CCP_VERSION(5, 0),
> + .name   = "sha512",
> + .drv_name   = "sha512-ccp",
> + .type   = CCP_SHA_TYPE_512,
> + .digest_size= SHA512_DIGEST_SIZE,
> + .block_size = SHA512_BLOCK_SIZE,
> + },
>  };
>  
>  static int ccp_register_hmac_alg(struct list_head *head,
> diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
> index 8335b32..ae442ac 100644
> --- a/drivers/crypto/ccp/ccp-crypto.h
> +++ b/drivers/crypto/ccp/ccp-crypto.h
> @@ -137,9 +137,12 @@ struct ccp_aes_cmac_exp_ctx {
>   u8 buf[AES_BLOCK_SIZE];
>  };
>  
> -/* SHA related defines */
> -#define MAX_SHA_CONTEXT_SIZE SHA256_DIGEST_SIZE
> -#define MAX_SHA_BLOCK_SIZE   SHA256_BLOCK_SIZE
> +/*
> + * SHA-related defines
> + * These values must be large enough to accommodate any variant
> + */
> +#define MAX_SHA_CONTEXT_SIZE SHA512_DIGEST_SIZE
> +#define MAX_SHA_BLOCK_SIZE   SHA512_BLOCK_SIZE
>  
>  struct ccp_sha_ctx {
>   struct scatterlist opad_sg;
> diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
> index 50fae44..8fedb14 100644
> --- a/drivers/crypto/ccp/ccp-ops.c
> +++ b/drivers/crypto/ccp/ccp-ops.c
> @@ -41,6 +41,20 @@ static const __be32 ccp_sha256_init[SHA256_DIGEST_SIZE / 
> sizeof(__be32)] = {
>   cpu_to_be32(SHA256_H6), cpu_to_be32(SHA256_H7),
>  };
>  
> +static const __be64 ccp_sha384_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
> + cpu_to_be64(SHA384_H0), cpu_to_be64(SHA384_H1),
> + cpu_to_be64(SHA384_H2), cpu_to_be64(SHA384_H3),
> + cpu_to_be64(SHA384_H4), cpu_to_be64(SHA384_H5),
> + cpu_to_be64(SHA384_H6), cpu_to_be64(SHA384_H7),
> +};
> +
> +static const __be64 ccp_sha512_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
> + cpu_to_be64(SHA512_H0), cpu_to_be64(SHA512_H1),
> + cpu_to_be64(SHA512_H2), cpu_to_be64(SHA512_H3),
> + cpu_to_be64(SHA512_H4), cpu_to_be64(SHA512_H5),
> + cpu_to_be64(SHA512_H6), cpu_to_be64(SHA512_H7),
> +};
> +
>  #define  CCP_NEW_JOBID(ccp)  ((ccp->vdata->version == CCP_VERSION(3, 
> 0)) ? \
>   ccp_gen_jobid(ccp) : 0)
>  
> @@ -963,6 +977,16 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, 
> struct ccp_cmd *cmd)
>   return -EINVAL;
>   block_size = SHA256_BLOCK_SIZE;
>   break;
> + case CCP_SHA_TYPE_384:
> + if (sha->ctx_len < SHA384_DIGEST_SIZE)
> + return -EINVAL;
> + block_size = SHA384_BLOCK_SIZE;
> + break;
> + case CCP_SHA_TYPE_512:
> + if (sha->ctx_len < SHA512_DIGEST_SIZE)
> + return -EINVAL;
> + block_size = SHA512_BLOCK_SIZE;
> + break;

A version 3 CCP won't support these new sizes.  You should add a version
check and return an error if v3.

>   default:
>   return -EINVAL;
>   }
> @@ -1050,6 +1074,21 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue 
> *cmd_q, struct ccp_cmd *cmd)
>   sb_count = 1;
>   ooffset = ioffset = 0;
>   break;
> + case CCP_SHA_TYPE_384:
> + digest_size = 

Re: [PATCH 3/6] crypto: ccp - Add support for RSA on the CCP

2016-10-13 Thread Stephan Mueller
Am Donnerstag, 13. Oktober 2016, 09:53:09 CEST schrieb Gary R Hook:

Hi Gary,

> Wire up the v3 CCP as a cipher provider.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile  |1
>  drivers/crypto/ccp/ccp-crypto-main.c |   15 ++
>  drivers/crypto/ccp/ccp-crypto-rsa.c  |  258
> ++ drivers/crypto/ccp/ccp-crypto.h  |  
> 24 +++
>  drivers/crypto/ccp/ccp-dev-v3.c  |   38 +
>  drivers/crypto/ccp/ccp-ops.c |1
>  include/linux/ccp.h  |   34 
>  7 files changed, 370 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c
> 
> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 346ceb8..23f89b7 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
>  ccp-crypto-aes.o \
>  ccp-crypto-aes-cmac.o \
>  ccp-crypto-aes-xts.o \
> +ccp-crypto-rsa.o \
>  ccp-crypto-sha.o
> diff --git a/drivers/crypto/ccp/ccp-crypto-main.c
> b/drivers/crypto/ccp/ccp-crypto-main.c index e0380e5..f3c4c25 100644
> --- a/drivers/crypto/ccp/ccp-crypto-main.c
> +++ b/drivers/crypto/ccp/ccp-crypto-main.c
> @@ -33,6 +33,10 @@ static unsigned int sha_disable;
>  module_param(sha_disable, uint, 0444);
>  MODULE_PARM_DESC(sha_disable, "Disable use of SHA - any non-zero value");
> 
> +static unsigned int rsa_disable;
> +module_param(rsa_disable, uint, 0444);
> +MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
> +
>  /* List heads for the supported algorithms */
>  static LIST_HEAD(hash_algs);
>  static LIST_HEAD(cipher_algs);
> @@ -343,6 +347,14 @@ static int ccp_register_algs(void)
>   return ret;
>   }
> 
> + if (!rsa_disable) {
> + ret = ccp_register_rsa_algs();
> + if (ret) {
> + rsa_disable = 1;
> + return ret;
> + }
> + }
> +
>   return 0;
>  }
> 
> @@ -362,6 +374,9 @@ static void ccp_unregister_algs(void)
>   list_del(&ablk_alg->entry);
>   kfree(ablk_alg);
>   }
> +
> + if (!rsa_disable)
> + ccp_unregister_rsa_algs();
>  }
> 
>  static int ccp_crypto_init(void)
> diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c
> b/drivers/crypto/ccp/ccp-crypto-rsa.c new file mode 100644
> index 000..7dab43b
> --- /dev/null
> +++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
> @@ -0,0 +1,258 @@
> +/*
> + * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
> + *
> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Gary R Hook 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ccp-crypto.h"
> +
> +static inline struct akcipher_request *akcipher_request_cast(
> + struct crypto_async_request *req)
> +{
> + return container_of(req, struct akcipher_request, base);
> +}
> +
> +static int ccp_rsa_complete(struct crypto_async_request *async_req, int
> ret) +{
> + struct akcipher_request *req = akcipher_request_cast(async_req);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> +
> + if (!ret)
> + req->dst_len = rctx->cmd.u.rsa.d_len;
> +
> + ret = 0;
> +
> + return ret;
> +}
> +
> +static int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
> +{
> + return CCP_RSA_MAXMOD;
> +}
> +
> +static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
> +{
> + struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
> + struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
> + struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
> + int ret = 0;
> +
> + if (!ctx->u.rsa.pkey.d && !ctx->u.rsa.pkey.e)
> + return -EINVAL;
> +
> + memset(&rctx->cmd, 0, sizeof(rctx->cmd));
> + INIT_LIST_HEAD(&rctx->cmd.entry);
> + rctx->cmd.engine = CCP_ENGINE_RSA;
> + rctx->cmd.u.rsa.mode = encrypt ? CCP_RSA_ENCRYPT : CCP_RSA_DECRYPT;
> +
> + rctx->cmd.u.rsa.pkey = ctx->u.rsa.pkey;
> + rctx->cmd.u.rsa.key_size = ctx->u.rsa.key_len;
> + rctx->cmd.u.rsa.exp = &ctx->u.rsa.e_sg;
> + rctx->cmd.u.rsa.exp_len = ctx->u.rsa.e_len;
> + rctx->cmd.u.rsa.mod = &ctx->u.rsa.n_sg;
> + rctx->cmd.u.rsa.mod_len = ctx->u.rsa.n_len;
> + if (ctx->u.rsa.pkey.d) {
> + rctx->cmd.u.rsa.d_sg = &ctx->u.rsa.d_sg;
> + rctx->cmd.u.rsa.d_len = ctx->u.rsa.d_len;
> + }
> +
> + rctx->cmd.u.rsa.src = req->src;
> + rctx->cmd.u.rsa.src_len = req->src_len;
> + rctx->cmd.u.rsa.dst = req->dst;
> + rctx->cmd.u.rsa.dst_len = req->dst_len;
> +
> 

[PATCH 6/6] crypto: ccp - Enable 3DES function on v5 CCPs

2016-10-13 Thread Gary R Hook
Wire up support for Triple DES in ECB mode.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-des3.c |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c |   10 +
 drivers/crypto/ccp/ccp-crypto.h  |   25 +++
 drivers/crypto/ccp/ccp-dev-v3.c  |1 
 drivers/crypto/ccp/ccp-dev-v5.c  |   65 -
 drivers/crypto/ccp/ccp-dev.h |   18 ++
 drivers/crypto/ccp/ccp-ops.c |  201 +++
 drivers/crypto/ccp/ccp-pci.c |2 
 include/linux/ccp.h  |   57 +++-
 10 files changed, 624 insertions(+), 10 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index fd77225..563594a 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -14,4 +14,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-rsa.o \
   ccp-crypto-aes-galois.o \
+  ccp-crypto-des3.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-des3.c 
b/drivers/crypto/ccp/ccp-crypto-des3.c
new file mode 100644
index 000..5af7347
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-des3.c
@@ -0,0 +1,254 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) DES3 crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static int ccp_des3_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+
+   if (ret)
+   return ret;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB)
+   memcpy(req->info, rctx->iv, DES3_EDE_BLOCK_SIZE);
+
+   return 0;
+}
+
+static int ccp_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+   unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm));
+   struct ccp_crypto_ablkcipher_alg *alg =
+   ccp_crypto_ablkcipher_alg(crypto_ablkcipher_tfm(tfm));
+   u32 *flags = &tfm->base.crt_flags;
+
+
+   /* From des_generic.c:
+*
+* RFC2451:
+*   If the first two or last two independent 64-bit keys are
+*   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
+*   same as DES.  Implementers MUST reject keys that exhibit this
+*   property.
+*/
+   const u32 *K = (const u32 *)key;
+
+   if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
+!((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+(*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
+   *flags |= CRYPTO_TFM_RES_WEAK_KEY;
+   return -EINVAL;
+   }
+
+   /* It's not clear that there is any support for a keysize of 112.
+* If needed, the caller should make K1 == K3
+*/
+   ctx->u.des3.type = CCP_DES3_TYPE_168;
+   ctx->u.des3.mode = alg->mode;
+   ctx->u.des3.key_len = key_len;
+
+   memcpy(ctx->u.des3.key, key, key_len);
+   sg_init_one(&ctx->u.des3.key_sg, ctx->u.des3.key, key_len);
+
+   return 0;
+}
+
+static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int ret;
+
+   if (!ctx->u.des3.key_len)
+   return -EINVAL;
+
+   if (((ctx->u.des3.mode == CCP_DES3_MODE_ECB) ||
+(ctx->u.des3.mode == CCP_DES3_MODE_CBC)) &&
+   (req->nbytes & (DES3_EDE_BLOCK_SIZE - 1)))
+   return -EINVAL;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB) {
+   if (!req->info)
+   return -EINVAL;
+
+   memcpy(rctx->iv, req->info, DES3_EDE_BLOCK_SIZE);
+   iv_sg = &rctx->iv_sg;
+   iv_len = DES3_EDE_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+   }
+
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_DES3;
+   rctx->cmd.u.des3.type = ctx->u.des3.type;
+   rctx->cmd.u.des3.mode = ctx->u.des3.mode;
+   rctx->cmd.u.des3.action = (encrypt)
+ ? CCP_DES3_ACTION_ENCRYPT
+ 

[PATCH 3/6] crypto: ccp - Add support for RSA on the CCP

2016-10-13 Thread Gary R Hook
Wire up the v3 CCP as a cipher provider.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-main.c |   15 ++
 drivers/crypto/ccp/ccp-crypto-rsa.c  |  258 ++
 drivers/crypto/ccp/ccp-crypto.h  |   24 +++
 drivers/crypto/ccp/ccp-dev-v3.c  |   38 +
 drivers/crypto/ccp/ccp-ops.c |1 
 include/linux/ccp.h  |   34 
 7 files changed, 370 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..23f89b7 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
+  ccp-crypto-rsa.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-main.c b/drivers/crypto/ccp/ccp-crypto-main.c
index e0380e5..f3c4c25 100644
--- a/drivers/crypto/ccp/ccp-crypto-main.c
+++ b/drivers/crypto/ccp/ccp-crypto-main.c
@@ -33,6 +33,10 @@ static unsigned int sha_disable;
 module_param(sha_disable, uint, 0444);
 MODULE_PARM_DESC(sha_disable, "Disable use of SHA - any non-zero value");
 
+static unsigned int rsa_disable;
+module_param(rsa_disable, uint, 0444);
+MODULE_PARM_DESC(rsa_disable, "Disable use of RSA - any non-zero value");
+
 /* List heads for the supported algorithms */
 static LIST_HEAD(hash_algs);
 static LIST_HEAD(cipher_algs);
@@ -343,6 +347,14 @@ static int ccp_register_algs(void)
return ret;
}
 
+   if (!rsa_disable) {
+   ret = ccp_register_rsa_algs();
+   if (ret) {
+   rsa_disable = 1;
+   return ret;
+   }
+   }
+
return 0;
 }
 
@@ -362,6 +374,9 @@ static void ccp_unregister_algs(void)
list_del(&ablk_alg->entry);
kfree(ablk_alg);
}
+
+   if (!rsa_disable)
+   ccp_unregister_rsa_algs();
 }
 
 static int ccp_crypto_init(void)
diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
new file mode 100644
index 000..7dab43b
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -0,0 +1,258 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) RSA crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static inline struct akcipher_request *akcipher_request_cast(
+   struct crypto_async_request *req)
+{
+   return container_of(req, struct akcipher_request, base);
+}
+
+static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct akcipher_request *req = akcipher_request_cast(async_req);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+
+   if (!ret)
+   req->dst_len = rctx->cmd.u.rsa.d_len;
+
+   ret = 0;
+
+   return ret;
+}
+
+static int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
+{
+   return CCP_RSA_MAXMOD;
+}
+
+static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
+{
+   struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+   struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
+   struct ccp_rsa_req_ctx *rctx = akcipher_request_ctx(req);
+   int ret = 0;
+
+   if (!ctx->u.rsa.pkey.d && !ctx->u.rsa.pkey.e)
+   return -EINVAL;
+
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_RSA;
+   rctx->cmd.u.rsa.mode = encrypt ? CCP_RSA_ENCRYPT : CCP_RSA_DECRYPT;
+
+   rctx->cmd.u.rsa.pkey = ctx->u.rsa.pkey;
+   rctx->cmd.u.rsa.key_size = ctx->u.rsa.key_len;
+   rctx->cmd.u.rsa.exp = &ctx->u.rsa.e_sg;
+   rctx->cmd.u.rsa.exp_len = ctx->u.rsa.e_len;
+   rctx->cmd.u.rsa.mod = &ctx->u.rsa.n_sg;
+   rctx->cmd.u.rsa.mod_len = ctx->u.rsa.n_len;
+   if (ctx->u.rsa.pkey.d) {
+   rctx->cmd.u.rsa.d_sg = &ctx->u.rsa.d_sg;
+   rctx->cmd.u.rsa.d_len = ctx->u.rsa.d_len;
+   }
+
+   rctx->cmd.u.rsa.src = req->src;
+   rctx->cmd.u.rsa.src_len = req->src_len;
+   rctx->cmd.u.rsa.dst = req->dst;
+   rctx->cmd.u.rsa.dst_len = req->dst_len;
+
+   ret = ccp_crypto_enqueue_request(&req->base, &rctx->cmd);
+
+   return ret;
+}
+
+static int ccp_rsa_encrypt(struct akcipher_request *req)
+{
+   return ccp_rsa_crypt(req, true);
+}
+
+static int ccp_rsa_decrypt(struct akcipher_request *req)
+{
+   return 

[PATCH 2/6] crypto: ccp - Remove unneeded sign-extension support

2016-10-13 Thread Gary R Hook
The reverse-get/set functions can be simplified by
eliminating unused code.


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-ops.c |  145 +-
 1 file changed, 59 insertions(+), 86 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 8fedb14..82cc637 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -198,62 +198,46 @@ static void ccp_get_dm_area(struct ccp_dm_workarea *wa, unsigned int wa_offset,
 }
 
 static int ccp_reverse_set_dm_area(struct ccp_dm_workarea *wa,
+  unsigned int wa_offset,
   struct scatterlist *sg,
-  unsigned int len, unsigned int se_len,
-  bool sign_extend)
+  unsigned int sg_offset,
+  unsigned int len)
 {
-   unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
-   u8 buffer[CCP_REVERSE_BUF_SIZE];
-
-   if (WARN_ON(se_len > sizeof(buffer)))
-   return -EINVAL;
-
-   sg_offset = len;
-   dm_offset = 0;
-   nbytes = len;
-   while (nbytes) {
-   sb_len = min_t(unsigned int, nbytes, se_len);
-   sg_offset -= sb_len;
-
-   scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 0);
-   for (i = 0; i < sb_len; i++)
-   wa->address[dm_offset + i] = buffer[sb_len - i - 1];
-
-   dm_offset += sb_len;
-   nbytes -= sb_len;
-
-   if ((sb_len != se_len) && sign_extend) {
-   /* Must sign-extend to nearest sign-extend length */
-   if (wa->address[dm_offset - 1] & 0x80)
-   memset(wa->address + dm_offset, 0xff,
-  se_len - sb_len);
-   }
+   u8 *p, *q;
+
+   ccp_set_dm_area(wa, wa_offset, sg, sg_offset, len);
+
+   p = wa->address + wa_offset;
+   q = p + len - 1;
+   while (p < q) {
+   *p = *p ^ *q;
+   *q = *p ^ *q;
+   *p = *p ^ *q;
+   p++;
+   q--;
}
-
return 0;
 }
 
 static void ccp_reverse_get_dm_area(struct ccp_dm_workarea *wa,
+   unsigned int wa_offset,
struct scatterlist *sg,
+   unsigned int sg_offset,
unsigned int len)
 {
-   unsigned int nbytes, sg_offset, dm_offset, sb_len, i;
-   u8 buffer[CCP_REVERSE_BUF_SIZE];
-
-   sg_offset = 0;
-   dm_offset = len;
-   nbytes = len;
-   while (nbytes) {
-   sb_len = min_t(unsigned int, nbytes, sizeof(buffer));
-   dm_offset -= sb_len;
-
-   for (i = 0; i < sb_len; i++)
-   buffer[sb_len - i - 1] = wa->address[dm_offset + i];
-   scatterwalk_map_and_copy(buffer, sg, sg_offset, sb_len, 1);
-
-   sg_offset += sb_len;
-   nbytes -= sb_len;
+   u8 *p, *q;
+
+   p = wa->address + wa_offset;
+   q = p + len - 1;
+   while (p < q) {
+   *p = *p ^ *q;
+   *q = *p ^ *q;
+   *p = *p ^ *q;
+   p++;
+   q--;
}
+
+   ccp_get_dm_area(wa, wa_offset, sg, sg_offset, len);
 }
 
 static void ccp_free_data(struct ccp_data *data, struct ccp_cmd_queue *cmd_q)
@@ -1294,7 +1278,9 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
struct ccp_data dst;
struct ccp_op op;
unsigned int sb_count, i_len, o_len;
-   int ret;
+   unsigned int dm_offset;
+   int i = 0;
+   int ret = 0;
 
if (rsa->key_size > CCP_RSA_MAX_WIDTH)
return -EINVAL;
@@ -1331,8 +1317,7 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
if (ret)
goto e_sb;
 
-   ret = ccp_reverse_set_dm_area(&exp, rsa->exp, rsa->exp_len,
- CCP_SB_BYTES, false);
+   ret = ccp_reverse_set_dm_area(&exp, 0, rsa->exp, 0, rsa->exp_len);
if (ret)
goto e_exp;
ret = ccp_copy_to_sb(cmd_q, &exp, op.jobid, op.sb_key,
@@ -1350,13 +1335,10 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
if (ret)
goto e_exp;
 
-   ret = ccp_reverse_set_dm_area(&src, rsa->mod, rsa->mod_len,
- CCP_SB_BYTES, false);
+   ret = ccp_reverse_set_dm_area(&src, 0, rsa->mod, 0, rsa->mod_len);
if (ret)
goto e_src;
-   src.address += o_len;   /* Adjust the address for the copy operation */
-   ret = ccp_reverse_set_dm_area(&src, rsa->src, rsa->src_len,
- CCP_SB_BYTES, false);
+   ret = 

[PATCH 0/6] Enable hashing and ciphers for v5 CCP

2016-10-13 Thread Gary R Hook
The following series implements new functions for a version 5
CCP: support for SHA-2, wiring of RSA using the updated
framework, AES GCM mode, and Triple-DES in ECB mode.

---

Gary R Hook (6):
  crypto: ccp - Add SHA-2 support
  crypto: ccp - Remove unneeded sign-extension support
  crypto: ccp - Add support for RSA on the CCP
  crypto: ccp - Add RSA support for a v5 ccp
  crypto: ccp - Enable support for AES GCM on v5 CCPs
  crypto: ccp - Enable 3DES function on v5 CCPs


 drivers/crypto/ccp/Makefile|3 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  252 +
 drivers/crypto/ccp/ccp-crypto-des3.c   |  254 +
 drivers/crypto/ccp/ccp-crypto-main.c   |   37 +
 drivers/crypto/ccp/ccp-crypto-rsa.c|  258 +
 drivers/crypto/ccp/ccp-crypto-sha.c|   22 +
 drivers/crypto/ccp/ccp-crypto.h|   69 ++-
 drivers/crypto/ccp/ccp-dev-v3.c|   39 +
 drivers/crypto/ccp/ccp-dev-v5.c|   67 ++
 drivers/crypto/ccp/ccp-dev.h   |   21 +
 drivers/crypto/ccp/ccp-ops.c   |  772 
 drivers/crypto/ccp/ccp-pci.c   |2 
 include/linux/ccp.h|  103 
 13 files changed, 1768 insertions(+), 131 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-rsa.c

--
This is my day job. Follow me at:
IG/Twitter/Facebook: @grhookphoto
IG/Twitter/Facebook: @grhphotographer
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 5/6] crypto: ccp - Enable support for AES GCM on v5 CCPs

2016-10-13 Thread Gary R Hook
A version 5 device provides the primitive commands
required for AES GCM. This patch adds support for
en/decryption.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile|1 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  252 +++
 drivers/crypto/ccp/ccp-crypto-main.c   |   12 +
 drivers/crypto/ccp/ccp-crypto.h|   14 +
 drivers/crypto/ccp/ccp-dev-v5.c|2 
 drivers/crypto/ccp/ccp-dev.h   |1 
 drivers/crypto/ccp/ccp-ops.c   |  262 
 include/linux/ccp.h|9 +
 8 files changed, 553 insertions(+)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 23f89b7..fd77225 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -13,4 +13,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-rsa.o \
+  ccp-crypto-aes-galois.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
new file mode 100644
index 000..5da324f
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
@@ -0,0 +1,252 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) AES crypto API support
+ *
+ * Copyright (C) 2013,2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+#define AES_GCM_IVSIZE	12
+
+static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int ret)
+{
+   return ret;
+}
+
+static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+
+   switch (key_len) {
+   case AES_KEYSIZE_128:
+   ctx->u.aes.type = CCP_AES_TYPE_128;
+   break;
+   case AES_KEYSIZE_192:
+   ctx->u.aes.type = CCP_AES_TYPE_192;
+   break;
+   case AES_KEYSIZE_256:
+   ctx->u.aes.type = CCP_AES_TYPE_256;
+   break;
+   default:
+   crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+   return -EINVAL;
+   }
+
+   ctx->u.aes.mode = CCP_AES_MODE_GCM;
+   ctx->u.aes.key_len = key_len;
+
+   memcpy(ctx->u.aes.key, key, key_len);
+   sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
+
+   return 0;
+}
+
+static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+  unsigned int authsize)
+{
+   return 0;
+}
+
+static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
+{
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+   struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int i;
+   int ret = 0;
+
+   if (!ctx->u.aes.key_len)
+   return -EINVAL;
+
+   if (ctx->u.aes.mode != CCP_AES_MODE_GCM)
+   return -EINVAL;
+
+   if (!req->iv)
+   return -EINVAL;
+
+   /*
+* 5 parts:
+*   plaintext/ciphertext input
+*   AAD
+*   key
+*   IV
+*   Destination+tag buffer
+*/
+
+   /* Copy the IV and initialize a scatterlist */
+   memset(rctx->iv, 0, AES_BLOCK_SIZE);
+   memcpy(rctx->iv, req->iv, AES_GCM_IVSIZE);
+   for (i = 0; i < 3; i++)
+   rctx->iv[i + AES_GCM_IVSIZE] = 0;
+   rctx->iv[AES_BLOCK_SIZE - 1] = 1;
+   iv_sg = &rctx->iv_sg;
+   iv_len = AES_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+
+   /* The AAD + plaintext are concatenated in the src buffer */
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_AES;
+   rctx->cmd.u.aes.type = ctx->u.aes.type;
+   rctx->cmd.u.aes.mode = ctx->u.aes.mode;
+   rctx->cmd.u.aes.action =
+   (encrypt) ? CCP_AES_ACTION_ENCRYPT : CCP_AES_ACTION_DECRYPT;
+   rctx->cmd.u.aes.key = &ctx->u.aes.key_sg;
+   rctx->cmd.u.aes.key_len = ctx->u.aes.key_len;
+   rctx->cmd.u.aes.iv = iv_sg;
+   rctx->cmd.u.aes.iv_len = iv_len;
+   rctx->cmd.u.aes.src = req->src;
+   rctx->cmd.u.aes.src_len = req->cryptlen;
+   rctx->cmd.u.aes.aad_len = req->assoclen;
+
+   /* The cipher text + the tag are in the dst buffer */
+   

[PATCH 1/6] crypto: ccp - Add SHA-2 support

2016-10-13 Thread Gary R Hook
Incorporate 384-bit and 512-bit hashing for a version 5 CCP
device


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-sha.c |   22 +++
 drivers/crypto/ccp/ccp-crypto.h |9 +++--
 drivers/crypto/ccp/ccp-ops.c|   70 +++
 include/linux/ccp.h |3 ++
 4 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index 84a652b..6b46eea 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -146,6 +146,12 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
case CCP_SHA_TYPE_256:
rctx->cmd.u.sha.ctx_len = SHA256_DIGEST_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   rctx->cmd.u.sha.ctx_len = SHA384_DIGEST_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   rctx->cmd.u.sha.ctx_len = SHA512_DIGEST_SIZE;
+   break;
default:
/* Should never get here */
break;
@@ -393,6 +399,22 @@ static struct ccp_sha_def sha_algs[] = {
.digest_size= SHA256_DIGEST_SIZE,
.block_size = SHA256_BLOCK_SIZE,
},
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha384",
+   .drv_name   = "sha384-ccp",
+   .type   = CCP_SHA_TYPE_384,
+   .digest_size= SHA384_DIGEST_SIZE,
+   .block_size = SHA384_BLOCK_SIZE,
+   },
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha512",
+   .drv_name   = "sha512-ccp",
+   .type   = CCP_SHA_TYPE_512,
+   .digest_size= SHA512_DIGEST_SIZE,
+   .block_size = SHA512_BLOCK_SIZE,
+   },
 };
 
 static int ccp_register_hmac_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 8335b32..ae442ac 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -137,9 +137,12 @@ struct ccp_aes_cmac_exp_ctx {
u8 buf[AES_BLOCK_SIZE];
 };
 
-/* SHA related defines */
-#define MAX_SHA_CONTEXT_SIZE   SHA256_DIGEST_SIZE
-#define MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE
+/*
+ * SHA-related defines
+ * These values must be large enough to accommodate any variant
+ */
+#define MAX_SHA_CONTEXT_SIZE   SHA512_DIGEST_SIZE
+#define MAX_SHA_BLOCK_SIZE SHA512_BLOCK_SIZE
 
 struct ccp_sha_ctx {
struct scatterlist opad_sg;
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 50fae44..8fedb14 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -41,6 +41,20 @@ static const __be32 ccp_sha256_init[SHA256_DIGEST_SIZE / sizeof(__be32)] = {
cpu_to_be32(SHA256_H6), cpu_to_be32(SHA256_H7),
 };
 
+static const __be64 ccp_sha384_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA384_H0), cpu_to_be64(SHA384_H1),
+   cpu_to_be64(SHA384_H2), cpu_to_be64(SHA384_H3),
+   cpu_to_be64(SHA384_H4), cpu_to_be64(SHA384_H5),
+   cpu_to_be64(SHA384_H6), cpu_to_be64(SHA384_H7),
+};
+
+static const __be64 ccp_sha512_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA512_H0), cpu_to_be64(SHA512_H1),
+   cpu_to_be64(SHA512_H2), cpu_to_be64(SHA512_H3),
+   cpu_to_be64(SHA512_H4), cpu_to_be64(SHA512_H5),
+   cpu_to_be64(SHA512_H6), cpu_to_be64(SHA512_H7),
+};
+
 #define CCP_NEW_JOBID(ccp)	((ccp->vdata->version == CCP_VERSION(3, 0)) ? \
 					ccp_gen_jobid(ccp) : 0)
 
@@ -963,6 +977,16 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
return -EINVAL;
block_size = SHA256_BLOCK_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   if (sha->ctx_len < SHA384_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA384_BLOCK_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   if (sha->ctx_len < SHA512_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA512_BLOCK_SIZE;
+   break;
default:
return -EINVAL;
}
@@ -1050,6 +1074,21 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
sb_count = 1;
ooffset = ioffset = 0;
break;
+   case CCP_SHA_TYPE_384:
+   digest_size = SHA384_DIGEST_SIZE;
+   init = (void *) ccp_sha384_init;
+   ctx_size = SHA512_DIGEST_SIZE;
+   sb_count = 2;
+   ioffset = 0;
+   ooffset = 2 * CCP_SB_BYTES - SHA384_DIGEST_SIZE;
+   break;
+   case 

[PATCH 4/6] crypto: ccp - Add RSA support for a v5 ccp

2016-10-13 Thread Gary R Hook
Take into account device implementation differences for
RSA.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-rsa.c |   14 +++--
 drivers/crypto/ccp/ccp-crypto.h |3 -
 drivers/crypto/ccp/ccp-dev.h|2 -
 drivers/crypto/ccp/ccp-ops.c|   97 +++
 4 files changed, 73 insertions(+), 43 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
index 7dab43b..94411de 100644
--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
+++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
@@ -125,7 +125,7 @@ static void ccp_rsa_free_key_bufs(struct ccp_ctx *ctx)
 }
 
 static int ccp_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
- unsigned int keylen, bool public)
+ unsigned int keylen, bool private)
 {
struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
struct rsa_key raw_key;
@@ -139,10 +139,10 @@ static int ccp_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
memset(&raw_key, 0, sizeof(raw_key));
 
/* Code borrowed from crypto/rsa.c */
-   if (public)
-   ret = rsa_parse_pub_key(&raw_key, key, keylen);
-   else
+   if (private)
ret = rsa_parse_priv_key(&raw_key, key, keylen);
+   else
+   ret = rsa_parse_pub_key(_key, key, keylen);
if (ret)
goto e_ret;
 
@@ -169,7 +169,7 @@ static int ccp_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
goto e_nkey;
sg_init_one(&ctx->u.rsa.n_sg, ctx->u.rsa.n_buf, ctx->u.rsa.n_len);
 
-   if (!public) {
+   if (private) {
ctx->u.rsa.pkey.d = mpi_read_raw_data(raw_key.d, raw_key.d_sz);
if (!ctx->u.rsa.pkey.d)
goto e_nkey;
@@ -196,13 +196,13 @@ e_ret:
 static int ccp_rsa_setprivkey(struct crypto_akcipher *tfm, const void *key,
  unsigned int keylen)
 {
-   return ccp_rsa_setkey(tfm, key, keylen, false);
+   return ccp_rsa_setkey(tfm, key, keylen, true);
 }
 
 static int ccp_rsa_setpubkey(struct crypto_akcipher *tfm, const void *key,
 unsigned int keylen)
 {
-   return ccp_rsa_setkey(tfm, key, keylen, true);
+   return ccp_rsa_setkey(tfm, key, keylen, false);
 }
 
 static int ccp_rsa_init_tfm(struct crypto_akcipher *tfm)
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 4a1d206..c6cf318 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -138,8 +138,7 @@ struct ccp_aes_cmac_exp_ctx {
u8 buf[AES_BLOCK_SIZE];
 };
 
-/*
- * SHA-related defines
+/* SHA-related defines
  * These values must be large enough to accommodate any variant
  */
 #define MAX_SHA_CONTEXT_SIZE   SHA512_DIGEST_SIZE
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index 0d996fe..143f00f 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -193,6 +193,7 @@
 #define CCP_SHA_SB_COUNT   1
 
 #define CCP_RSA_MAX_WIDTH  4096
+#define CCP5_RSA_MAX_WIDTH 16384
 
 #define CCP_PASSTHRU_BLOCKSIZE 256
 #define CCP_PASSTHRU_MASKSIZE  32
@@ -515,7 +516,6 @@ struct ccp_op {
struct ccp_passthru_op passthru;
struct ccp_ecc_op ecc;
} u;
-   struct ccp_mem key;
 };
 
 static inline u32 ccp_addr_lo(struct ccp_dma_info *info)
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index 826782d..07b8dfb 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1283,49 +1283,72 @@ static int ccp_run_rsa_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
int i = 0;
int ret = 0;
 
-   if (rsa->key_size > CCP_RSA_MAX_WIDTH)
-   return -EINVAL;
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(4, 0)) {
+   if (rsa->key_size > CCP_RSA_MAX_WIDTH)
+   return -EINVAL;
+   } else {
+   if (rsa->key_size > CCP5_RSA_MAX_WIDTH)
+   return -EINVAL;
+   }
 
if (!rsa->exp || !rsa->mod || !rsa->src || !rsa->dst)
return -EINVAL;
 
-   /* The RSA modulus must precede the message being acted upon, so
-* it must be copied to a DMA area where the message and the
-* modulus can be concatenated.  Therefore the input buffer
-* length required is twice the output buffer length (which
-* must be a multiple of 256-bits).
-*/
-   o_len = ((rsa->key_size + 255) / 256) * 32;
-   i_len = o_len * 2;
-
-   sb_count = o_len / CCP_SB_BYTES;
-
memset(&op, 0, sizeof(op));
op.cmd_q = cmd_q;
-   op.jobid = ccp_gen_jobid(cmd_q->ccp);
-   op.sb_key = cmd_q->ccp->vdata->perform->sballoc(cmd_q, sb_count);
+   op.jobid = CCP_NEW_JOBID(cmd_q->ccp);
 
-   if (!op.sb_key)
-   

[PATCH 1/6] chcr:Fix memory corruption done

2016-10-13 Thread Harsh Jain
Fix memory corruption caused by the *((u32 *)dec_key + k) operation.

Signed-off-by: Jitendra Lulla 
---
 drivers/crypto/chelsio/chcr_algo.c | 52 ++
 drivers/crypto/chelsio/chcr_algo.h | 58 +-
 2 files changed, 53 insertions(+), 57 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index e4ddb92..944c11f 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -178,6 +178,58 @@ static inline unsigned int calc_tx_flits_ofld(const struct sk_buff *skb)
return flits + sgl_len(cnt);
 }
 
+static void get_aes_decrypt_key(unsigned char *dec_key,
+   const unsigned char *key,
+   unsigned int keylength)
+{
+   u32 temp;
+   u32 w_ring[MAX_NK];
+   int i, j, k;
+   u8  nr, nk;
+
+   switch (keylength) {
+   case AES_KEYLENGTH_128BIT:
+   nk = KEYLENGTH_4BYTES;
+   nr = NUMBER_OF_ROUNDS_10;
+   break;
+   case AES_KEYLENGTH_192BIT:
+   nk = KEYLENGTH_6BYTES;
+   nr = NUMBER_OF_ROUNDS_12;
+   break;
+   case AES_KEYLENGTH_256BIT:
+   nk = KEYLENGTH_8BYTES;
+   nr = NUMBER_OF_ROUNDS_14;
+   break;
+   default:
+   return;
+   }
+   for (i = 0; i < nk; i++)
+   w_ring[i] = be32_to_cpu(*(u32 *)&key[4 * i]);
+
+   i = 0;
+   temp = w_ring[nk - 1];
+   while (i + nk < (nr + 1) * 4) {
+   if (!(i % nk)) {
+   /* RotWord(temp) */
+   temp = (temp << 8) | (temp >> 24);
+   temp = aes_ks_subword(temp);
+   temp ^= round_constant[i / nk];
+   } else if (nk == 8 && (i % 4 == 0)) {
+   temp = aes_ks_subword(temp);
+   }
+   w_ring[i % nk] ^= temp;
+   temp = w_ring[i % nk];
+   i++;
+   }
+   i--;
+   for (k = 0, j = i % nk; k < nk; k++) {
+   *((u32 *)dec_key + k) = htonl(w_ring[j]);
+   j--;
+   if (j < 0)
+   j += nk;
+   }
+}
+
 static struct shash_desc *chcr_alloc_shash(unsigned int ds)
 {
struct crypto_shash *base_hash = NULL;
diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h
index ec64fbc..f34bc91 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -394,7 +394,7 @@ static const u8 aes_sbox[256] = {
187, 22
 };
 
-static u32 aes_ks_subword(const u32 w)
+static inline u32 aes_ks_subword(const u32 w)
 {
u8 bytes[4];
 
@@ -412,60 +412,4 @@ static u32 round_constant[11] = {
	0x1B000000, 0x36000000, 0x6C000000
 };
 
-/* dec_key - OUTPUT - Reverse round key
- * key - INPUT - key
- * keylength - INPUT - length of the key in number of bits
- */
-static inline void get_aes_decrypt_key(unsigned char *dec_key,
-  const unsigned char *key,
-  unsigned int keylength)
-{
-   u32 temp;
-   u32 w_ring[MAX_NK];
-   int i, j, k = 0;
-   u8  nr, nk;
-
-   switch (keylength) {
-   case AES_KEYLENGTH_128BIT:
-   nk = KEYLENGTH_4BYTES;
-   nr = NUMBER_OF_ROUNDS_10;
-   break;
-
-   case AES_KEYLENGTH_192BIT:
-   nk = KEYLENGTH_6BYTES;
-   nr = NUMBER_OF_ROUNDS_12;
-   break;
-   case AES_KEYLENGTH_256BIT:
-   nk = KEYLENGTH_8BYTES;
-   nr = NUMBER_OF_ROUNDS_14;
-   break;
-   default:
-   return;
-   }
-   for (i = 0; i < nk; i++ )
-   w_ring[i] = be32_to_cpu(*(u32 *)&key[4 * i]);
-
-   i = 0;
-   temp = w_ring[nk - 1];
-   while(i + nk < (nr + 1) * 4) {
-   if(!(i % nk)) {
-   /* RotWord(temp) */
-   temp = (temp << 8) | (temp >> 24);
-   temp = aes_ks_subword(temp);
-   temp ^= round_constant[i / nk];
-   }
-   else if (nk == 8 && (i % 4 == 0))
-   temp = aes_ks_subword(temp);
-   w_ring[i % nk] ^= temp;
-   temp = w_ring[i % nk];
-   i++;
-   }
-   for (k = 0, j = i % nk; k < nk; k++) {
-   *((u32 *)dec_key + k) = htonl(w_ring[j]);
-   j--;
-   if(j < 0)
-   j += nk;
-   }
-}
-
 #endif /* __CHCR_ALGO_H__ */
-- 
1.8.2.3



Git clone/pull not working?

2016-10-13 Thread Gary R Hook
Am I the only person that can't clone/pull from kernel.org? Been getting 
handshake errors this week, but other sites (e.g. libvirt.org) seem to 
be working fine.


I thought I'd ask first... perhaps it's just me/my employer?


[PATCH 5/6] chcr: Move tfm ctx variable to request context

2016-10-13 Thread Harsh Jain
Move tfm ctx variable to request context.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 26 +-
 drivers/crypto/chelsio/chcr_crypto.h |  9 -
 2 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 7262bb3..18385d6 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -119,7 +119,7 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
   AES_BLOCK_SIZE);
}
dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.ablk_req->dst,
-ABLK_CTX(ctx)->dst_nents, DMA_FROM_DEVICE);
+ctx_req.ctx.ablk_ctx->dst_nents, DMA_FROM_DEVICE);
if (ctx_req.ctx.ablk_ctx->skb) {
kfree_skb(ctx_req.ctx.ablk_ctx->skb);
ctx_req.ctx.ablk_ctx->skb = NULL;
@@ -138,8 +138,10 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
updated_digestsize = SHA256_DIGEST_SIZE;
else if (digestsize == SHA384_DIGEST_SIZE)
updated_digestsize = SHA512_DIGEST_SIZE;
-   if (ctx_req.ctx.ahash_ctx->skb)
+   if (ctx_req.ctx.ahash_ctx->skb) {
+   kfree_skb(ctx_req.ctx.ahash_ctx->skb);
ctx_req.ctx.ahash_ctx->skb = NULL;
+   }
if (ctx_req.ctx.ahash_ctx->result == 1) {
ctx_req.ctx.ahash_ctx->result = 0;
memcpy(ctx_req.req.ahash_req->result, input +
@@ -318,8 +320,7 @@ static inline int is_hmac(struct crypto_tfm *tfm)
struct chcr_alg_template *chcr_crypto_alg =
container_of(__crypto_ahash_alg(alg), struct chcr_alg_template,
 alg.hash);
-   if ((chcr_crypto_alg->type & CRYPTO_ALG_SUB_TYPE_MASK) ==
-   CRYPTO_ALG_SUB_TYPE_HASH_HMAC)
+   if (chcr_crypto_alg->type == CRYPTO_ALG_TYPE_HMAC)
return 1;
return 0;
 }
@@ -505,7 +506,7 @@ static struct sk_buff
struct sk_buff *skb = NULL;
struct chcr_wr *chcr_req;
struct cpl_rx_phys_dsgl *phys_cpl;
-   struct chcr_blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+   struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req);
struct phys_sge_parm sg_param;
unsigned int frags = 0, transhdr_len, phys_dsgl;
unsigned int ivsize = crypto_ablkcipher_ivsize(tfm), kctx_len;
@@ -514,12 +515,11 @@ static struct sk_buff
 
if (!req->info)
return ERR_PTR(-EINVAL);
-   ablkctx->dst_nents = sg_nents_for_len(req->dst, req->nbytes);
-   if (ablkctx->dst_nents <= 0) {
+   reqctx->dst_nents = sg_nents_for_len(req->dst, req->nbytes);
+   if (reqctx->dst_nents <= 0) {
pr_err("AES:Invalid Destination sg lists\n");
return ERR_PTR(-EINVAL);
}
-   ablkctx->enc = op_type;
if ((ablkctx->enckey_len == 0) || (ivsize > AES_BLOCK_SIZE) ||
(req->nbytes <= 0) || (req->nbytes % AES_BLOCK_SIZE)) {
pr_err("AES: Invalid value of Key Len %d nbytes %d IV Len %d\n",
@@ -527,7 +527,7 @@ static struct sk_buff
return ERR_PTR(-EINVAL);
}
 
-   phys_dsgl = get_space_for_phys_dsgl(ablkctx->dst_nents);
+   phys_dsgl = get_space_for_phys_dsgl(reqctx->dst_nents);
 
kctx_len = (DIV_ROUND_UP(ablkctx->enckey_len, 16) * 16);
transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, phys_dsgl);
@@ -570,7 +570,7 @@ static struct sk_buff
}
}
phys_cpl = (struct cpl_rx_phys_dsgl *)((u8 *)(chcr_req + 1) + kctx_len);
-   sg_param.nents = ablkctx->dst_nents;
+   sg_param.nents = reqctx->dst_nents;
sg_param.obsize = req->nbytes;
sg_param.qid = qid;
sg_param.align = 1;
@@ -579,11 +579,11 @@ static struct sk_buff
goto map_fail1;
 
skb_set_transport_header(skb, transhdr_len);
-   memcpy(ablkctx->iv, req->info, ivsize);
-   write_buffer_to_skb(skb, &frags, ablkctx->iv, ivsize);
+   memcpy(reqctx->iv, req->info, ivsize);
+   write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, req->src, req->nbytes);
create_wreq(ctx, chcr_req, req, skb, kctx_len, 0, phys_dsgl);
-   req_ctx->skb = skb;
+   reqctx->skb = skb;
skb_get(skb);
return skb;
 map_fail1:
diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
index 977d205..40a5182 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -120,17 +120,14 @@
 /* Aligned to 128 bit boundary */
 
 struct ablk_ctx {
-   u8 enc;
-   unsigned int processed_len;
__be32 key_ctx_hdr;

[PATCH 4/6] chcr: Use SHASH_DESC_ON_STACK

2016-10-13 Thread Harsh Jain
Use the SHASH_DESC_ON_STACK macro to allocate memory for the ipad/opad
calculation.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 63 +++-
 drivers/crypto/chelsio/chcr_crypto.h |  2 +-
 2 files changed, 27 insertions(+), 38 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 17d0c1f..7262bb3 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -228,40 +228,29 @@ static void get_aes_decrypt_key(unsigned char *dec_key,
}
 }
 
-static struct shash_desc *chcr_alloc_shash(unsigned int ds)
+static struct crypto_shash *chcr_alloc_shash(unsigned int ds)
 {
struct crypto_shash *base_hash = NULL;
-   struct shash_desc *desc;
 
switch (ds) {
case SHA1_DIGEST_SIZE:
-   base_hash = crypto_alloc_shash("sha1-generic", 0, 0);
+   base_hash = crypto_alloc_shash("sha1", 0, 0);
break;
case SHA224_DIGEST_SIZE:
-   base_hash = crypto_alloc_shash("sha224-generic", 0, 0);
+   base_hash = crypto_alloc_shash("sha224", 0, 0);
break;
case SHA256_DIGEST_SIZE:
-   base_hash = crypto_alloc_shash("sha256-generic", 0, 0);
+   base_hash = crypto_alloc_shash("sha256", 0, 0);
break;
case SHA384_DIGEST_SIZE:
-   base_hash = crypto_alloc_shash("sha384-generic", 0, 0);
+   base_hash = crypto_alloc_shash("sha384", 0, 0);
break;
case SHA512_DIGEST_SIZE:
-   base_hash = crypto_alloc_shash("sha512-generic", 0, 0);
+   base_hash = crypto_alloc_shash("sha512", 0, 0);
break;
}
-   if (IS_ERR(base_hash)) {
-   pr_err("Can not allocate sha-generic algo.\n");
-   return (void *)base_hash;
-   }
 
-   desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(base_hash),
-  GFP_KERNEL);
-   if (!desc)
-   return ERR_PTR(-ENOMEM);
-   desc->tfm = base_hash;
-   desc->flags = crypto_shash_get_flags(base_hash);
-   return desc;
+   return base_hash;
 }
 
 static int chcr_compute_partial_hash(struct shash_desc *desc,
@@ -770,6 +759,11 @@ static int get_alg_config(struct algo_param *params,
return 0;
 }
 
+static inline void chcr_free_shash(struct crypto_shash *base_hash)
+{
+   crypto_free_shash(base_hash);
+}
+
 /**
  * create_hash_wr - Create hash work request
  * @req - Cipher req base
@@ -1106,15 +1100,16 @@ static int chcr_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
unsigned int bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
unsigned int i, err = 0, updated_digestsize;
 
-   /*
-* use the key to calculate the ipad and opad. ipad will sent with the
+   SHASH_DESC_ON_STACK(shash, hmacctx->base_hash);
+
+   /* use the key to calculate the ipad and opad. ipad will sent with the
 * first request's data. opad will be sent with the final hash result
 * ipad in hmacctx->ipad and opad in hmacctx->opad location
 */
-   if (!hmacctx->desc)
-   return -EINVAL;
+   shash->tfm = hmacctx->base_hash;
+   shash->flags = crypto_shash_get_flags(hmacctx->base_hash);
if (keylen > bs) {
-   err = crypto_shash_digest(hmacctx->desc, key, keylen,
+   err = crypto_shash_digest(shash, key, keylen,
  hmacctx->ipad);
if (err)
goto out;
@@ -1135,13 +1130,13 @@ static int chcr_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
updated_digestsize = SHA256_DIGEST_SIZE;
else if (digestsize == SHA384_DIGEST_SIZE)
updated_digestsize = SHA512_DIGEST_SIZE;
-   err = chcr_compute_partial_hash(hmacctx->desc, hmacctx->ipad,
+   err = chcr_compute_partial_hash(shash, hmacctx->ipad,
hmacctx->ipad, digestsize);
if (err)
goto out;
chcr_change_order(hmacctx->ipad, updated_digestsize);
 
-   err = chcr_compute_partial_hash(hmacctx->desc, hmacctx->opad,
+   err = chcr_compute_partial_hash(shash, hmacctx->opad,
hmacctx->opad, digestsize);
if (err)
goto out;
@@ -1237,26 +1232,20 @@ static int chcr_hmac_cra_init(struct crypto_tfm *tfm)
 
crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
 sizeof(struct chcr_ahash_req_ctx));
-   hmacctx->desc = chcr_alloc_shash(digestsize);
-   if (IS_ERR(hmacctx->desc))
-   return PTR_ERR(hmacctx->desc);
+   hmacctx->base_hash = chcr_alloc_shash(digestsize);
+   if (IS_ERR(hmacctx->base_hash))
+   return PTR_ERR(hmacctx->base_hash);
return 

[PATCH 2/6] chcr: Remove malloc/free

2016-10-13 Thread Harsh Jain
Remove malloc/free from the crypto operation path and allocate memory via cra_ctxsize.
Add a new structure, chcr_wr, to populate the Work Request header.
Fixes: 324429d74127 (chcr: Support for Chelsio's Crypto Hardware)

Reported-by: Dan Carpenter 
Signed-off-by: Jitendra Lulla 
---
 drivers/crypto/chelsio/chcr_algo.c   | 361 +--
 drivers/crypto/chelsio/chcr_algo.h   |  28 ++-
 drivers/crypto/chelsio/chcr_core.h   |  16 ++
 drivers/crypto/chelsio/chcr_crypto.h |  16 +-
 4 files changed, 210 insertions(+), 211 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 944c11f..d5e0066 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -150,8 +150,6 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
   sizeof(struct cpl_fw6_pld),
   updated_digestsize);
}
-   kfree(ctx_req.ctx.ahash_ctx->dummy_payload_ptr);
-   ctx_req.ctx.ahash_ctx->dummy_payload_ptr = NULL;
break;
}
return 0;
@@ -414,8 +412,23 @@ static inline int get_cryptoalg_subtype(struct crypto_tfm *tfm)
return chcr_crypto_alg->type & CRYPTO_ALG_SUB_TYPE_MASK;
 }
 
+static inline void write_buffer_to_skb(struct sk_buff *skb,
+   unsigned int *frags,
+   char *bfr,
+   u8 bfr_len)
+{
+   skb->len += bfr_len;
+   skb->data_len += bfr_len;
+   skb->truesize += bfr_len;
+   get_page(virt_to_page(bfr));
+   skb_fill_page_desc(skb, *frags, virt_to_page(bfr),
+  offset_in_page(bfr), bfr_len);
+   (*frags)++;
+}
+
+
 static inline void
-write_sg_data_page_desc(struct sk_buff *skb, unsigned int *frags,
+write_sg_to_skb(struct sk_buff *skb, unsigned int *frags,
struct scatterlist *sg, unsigned int count)
 {
struct page *spage;
@@ -424,8 +437,9 @@ write_sg_data_page_desc(struct sk_buff *skb, unsigned int *frags,
skb->len += count;
skb->data_len += count;
skb->truesize += count;
+
while (count > 0) {
-   if (sg && (!(sg->length)))
+   if (!sg || (!(sg->length)))
break;
spage = sg_page(sg);
get_page(spage);
@@ -441,29 +455,24 @@ static int generate_copy_rrkey(struct ablk_ctx *ablkctx,
   struct _key_ctx *key_ctx)
 {
if (ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC) {
-   get_aes_decrypt_key(key_ctx->key, ablkctx->key,
-   ablkctx->enckey_len << 3);
-   memset(key_ctx->key + ablkctx->enckey_len, 0,
-  CHCR_AES_MAX_KEY_LEN - ablkctx->enckey_len);
+   memcpy(key_ctx->key, ablkctx->rrkey, ablkctx->enckey_len);
} else {
memcpy(key_ctx->key,
   ablkctx->key + (ablkctx->enckey_len >> 1),
   ablkctx->enckey_len >> 1);
-   get_aes_decrypt_key(key_ctx->key + (ablkctx->enckey_len >> 1),
-   ablkctx->key, ablkctx->enckey_len << 2);
+   memcpy(key_ctx->key + (ablkctx->enckey_len >> 1),
+  ablkctx->rrkey, ablkctx->enckey_len >> 1);
}
return 0;
 }
 
 static inline void create_wreq(struct chcr_context *ctx,
-  struct fw_crypto_lookaside_wr *wreq,
+  struct chcr_wr *chcr_req,
   void *req, struct sk_buff *skb,
   int kctx_len, int hash_sz,
   unsigned int phys_dsgl)
 {
struct uld_ctx *u_ctx = ULD_CTX(ctx);
-   struct ulp_txpkt *ulptx = (struct ulp_txpkt *)(wreq + 1);
-   struct ulptx_idata *sc_imm = (struct ulptx_idata *)(ulptx + 1);
int iv_loc = IV_DSGL;
int qid = u_ctx->lldi.rxq_ids[ctx->tx_channel_id];
unsigned int immdatalen = 0, nr_frags = 0;
@@ -475,24 +484,27 @@ static inline void create_wreq(struct chcr_context *ctx,
nr_frags = skb_shinfo(skb)->nr_frags;
}
 
-   wreq->op_to_cctx_size = FILL_WR_OP_CCTX_SIZE(immdatalen,
-(kctx_len >> 4));
-   wreq->pld_size_hash_size =
+   chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE(immdatalen,
+   ((sizeof(chcr_req->key_ctx) + kctx_len) >> 4));
+   chcr_req->wreq.pld_size_hash_size =
htonl(FW_CRYPTO_LOOKASIDE_WR_PLD_SIZE_V(sgl_lengths[nr_frags]) |
  FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_V(hash_sz));
-   wreq->len16_pkd = htonl(FW_CRYPTO_LOOKASIDE_WR_LEN16_V(DIV_ROUND_UP(
+   chcr_req->wreq.len16_pkd =
+ 

[PATCH 3/6] chcr: Adjust Dest. buffer size

2016-10-13 Thread Harsh Jain
The destination buffer size passed to hardware should not be greater
than the crypto operation's output size.

Signed-off-by: Jitendra Lulla 
---
 drivers/crypto/chelsio/chcr_algo.c | 50 +++---
 1 file changed, 20 insertions(+), 30 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index d5e0066..17d0c1f 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -335,25 +335,13 @@ static inline int is_hmac(struct crypto_tfm *tfm)
return 0;
 }
 
-static inline unsigned int ch_nents(struct scatterlist *sg,
-   unsigned int *total_size)
-{
-   unsigned int nents;
-
-   for (nents = 0, *total_size = 0; sg; sg = sg_next(sg)) {
-   nents++;
-   *total_size += sg->length;
-   }
-   return nents;
-}
-
 static void write_phys_cpl(struct cpl_rx_phys_dsgl *phys_cpl,
   struct scatterlist *sg,
   struct phys_sge_parm *sg_param)
 {
struct phys_sge_pairs *to;
-   unsigned int out_buf_size = sg_param->obsize;
-   unsigned int nents = sg_param->nents, i, j, tot_len = 0;
+   int out_buf_size = sg_param->obsize;
+   unsigned int nents = sg_param->nents, i, j = 0;
 
phys_cpl->op_to_tid = htonl(CPL_RX_PHYS_DSGL_OPCODE_V(CPL_RX_PHYS_DSGL)
| CPL_RX_PHYS_DSGL_ISRDMA_V(0));
@@ -371,25 +359,24 @@ static void write_phys_cpl(struct cpl_rx_phys_dsgl *phys_cpl,
   sizeof(struct cpl_rx_phys_dsgl));
 
for (i = 0; nents; to++) {
-   for (j = i; (nents && (j < (8 + i))); j++, nents--) {
-   to->len[j] = htons(sg->length);
+   for (j = 0; j < 8 && nents; j++, nents--) {
+   out_buf_size -= sg_dma_len(sg);
+   to->len[j] = htons(sg_dma_len(sg));
to->addr[j] = cpu_to_be64(sg_dma_address(sg));
-   if (out_buf_size) {
-   if (tot_len + sg_dma_len(sg) >= out_buf_size) {
-   to->len[j] = htons(out_buf_size -
-  tot_len);
-   return;
-   }
-   tot_len += sg_dma_len(sg);
-   }
sg = sg_next(sg);
}
}
+   if (out_buf_size) {
+   j--;
+   to--;
+   to->len[j] = htons(ntohs(to->len[j]) + (out_buf_size));
+   }
 }
 
-static inline unsigned
-int map_writesg_phys_cpl(struct device *dev, struct cpl_rx_phys_dsgl *phys_cpl,
-struct scatterlist *sg, struct phys_sge_parm *sg_param)
+static inline int map_writesg_phys_cpl(struct device *dev,
+   struct cpl_rx_phys_dsgl *phys_cpl,
+   struct scatterlist *sg,
+   struct phys_sge_parm *sg_param)
 {
if (!sg || !sg_param->nents)
return 0;
@@ -531,16 +518,19 @@ static struct sk_buff
struct cpl_rx_phys_dsgl *phys_cpl;
struct chcr_blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
struct phys_sge_parm sg_param;
-   unsigned int frags = 0, transhdr_len, phys_dsgl, dst_bufsize = 0;
+   unsigned int frags = 0, transhdr_len, phys_dsgl;
unsigned int ivsize = crypto_ablkcipher_ivsize(tfm), kctx_len;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
 
if (!req->info)
return ERR_PTR(-EINVAL);
-   ablkctx->dst_nents = ch_nents(req->dst, &dst_bufsize);
+   ablkctx->dst_nents = sg_nents_for_len(req->dst, req->nbytes);
+   if (ablkctx->dst_nents <= 0) {
+   pr_err("AES:Invalid Destination sg lists\n");
+   return ERR_PTR(-EINVAL);
+   }
ablkctx->enc = op_type;
-
if ((ablkctx->enckey_len == 0) || (ivsize > AES_BLOCK_SIZE) ||
(req->nbytes <= 0) || (req->nbytes % AES_BLOCK_SIZE)) {
pr_err("AES: Invalid value of Key Len %d nbytes %d IV Len %d\n",
-- 
1.8.2.3

--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[PATCH 6/6] Add support for AEAD algos.

2016-10-13 Thread Harsh Jain
Add support for the following AEAD algos:
 GCM, CCM, RFC4106, RFC4309, authenc(hmac(shaXXX),cbc(aes)).

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/Kconfig   |1 +
 drivers/crypto/chelsio/chcr_algo.c   | 1466 +-
 drivers/crypto/chelsio/chcr_algo.h   |   16 +-
 drivers/crypto/chelsio/chcr_core.c   |8 +-
 drivers/crypto/chelsio/chcr_core.h   |2 -
 drivers/crypto/chelsio/chcr_crypto.h |   90 ++-
 6 files changed, 1541 insertions(+), 42 deletions(-)

diff --git a/drivers/crypto/chelsio/Kconfig b/drivers/crypto/chelsio/Kconfig
index 4ce67fb..3e104f5 100644
--- a/drivers/crypto/chelsio/Kconfig
+++ b/drivers/crypto/chelsio/Kconfig
@@ -4,6 +4,7 @@ config CRYPTO_DEV_CHELSIO
select CRYPTO_SHA1
select CRYPTO_SHA256
select CRYPTO_SHA512
+   select CRYPTO_AUTHENC
---help---
  The Chelsio Crypto Co-processor driver for T6 adapters.
 
diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 18385d6..cffc38f 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -54,6 +54,12 @@
 #include 
 #include 
 #include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 #include 
 
 #include "t4fw_api.h"
@@ -62,6 +68,11 @@
 #include "chcr_algo.h"
 #include "chcr_crypto.h"
 
+static inline  struct chcr_aead_ctx *AEAD_CTX(struct chcr_context *ctx)
+{
+   return ctx->crypto_ctx->aeadctx;
+}
+
 static inline struct ablk_ctx *ABLK_CTX(struct chcr_context *ctx)
 {
return ctx->crypto_ctx->ablkctx;
@@ -72,6 +83,16 @@ static inline struct hmac_ctx *HMAC_CTX(struct chcr_context *ctx)
return ctx->crypto_ctx->hmacctx;
 }
 
+static inline struct chcr_gcm_ctx *GCM_CTX(struct chcr_aead_ctx *gctx)
+{
+   return gctx->ctx->gcm;
+}
+
+static inline struct chcr_authenc_ctx *AUTHENC_CTX(struct chcr_aead_ctx *gctx)
+{
+   return gctx->ctx->authenc;
+}
+
 static inline struct uld_ctx *ULD_CTX(struct chcr_context *ctx)
 {
return ctx->dev->u_ctx;
@@ -94,12 +115,37 @@ static inline unsigned int sgl_len(unsigned int n)
return (3 * n) / 2 + (n & 1) + 2;
 }
 
+static void chcr_verify_tag(struct aead_request *req, u8 *input, int *err)
+{
+   u8 temp[SHA512_DIGEST_SIZE];
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   int authsize = crypto_aead_authsize(tfm);
+   struct cpl_fw6_pld *fw6_pld;
+   int cmp = 0;
+
+   fw6_pld = (struct cpl_fw6_pld *)input;
+   if ((get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) ||
+   (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_GCM)) {
+   cmp = memcmp(&fw6_pld->data[2], (fw6_pld + 1), authsize);
+   } else {
+
+   sg_pcopy_to_buffer(req->src, sg_nents(req->src), temp,
+   authsize, req->assoclen +
+   req->cryptlen - authsize);
+   cmp = memcmp(temp, (fw6_pld + 1), authsize);
+   }
+   if (cmp)
+   *err = -EBADMSG;
+   else
+   *err = 0;
+}
+
 /*
  * chcr_handle_resp - Unmap the DMA buffers associated with the request
  * @req: crypto request
  */
 int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
-int error_status)
+int err)
 {
struct crypto_tfm *tfm = req->tfm;
struct chcr_context *ctx = crypto_tfm_ctx(tfm);
@@ -109,11 +155,27 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
unsigned int digestsize, updated_digestsize;
 
switch (tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
+   case CRYPTO_ALG_TYPE_AEAD:
+   ctx_req.req.aead_req = (struct aead_request *)req;
+   ctx_req.ctx.reqctx = aead_request_ctx(ctx_req.req.aead_req);
+   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.aead_req->dst,
+ctx_req.ctx.reqctx->dst_nents, DMA_FROM_DEVICE);
+   if (ctx_req.ctx.reqctx->skb) {
+   kfree_skb(ctx_req.ctx.reqctx->skb);
+   ctx_req.ctx.reqctx->skb = NULL;
+   }
+   if (ctx_req.ctx.reqctx->verify == VERIFY_SW) {
+   chcr_verify_tag(ctx_req.req.aead_req, input,
+   &err);
+   ctx_req.ctx.reqctx->verify = VERIFY_HW;
+   }
+   break;
+
case CRYPTO_ALG_TYPE_BLKCIPHER:
ctx_req.req.ablk_req = (struct ablkcipher_request *)req;
ctx_req.ctx.ablk_ctx =
ablkcipher_request_ctx(ctx_req.req.ablk_req);
-   if (!error_status) {
+   if (!err) {
fw6_pld = (struct cpl_fw6_pld *)input;
memcpy(ctx_req.req.ablk_req->info, &fw6_pld->data[2],
   AES_BLOCK_SIZE);
@@ -154,7 +216,7 @@ int 

[PATCH 0/6] chcr: AEAD support and bug fixes

2016-10-13 Thread Harsh Jain
This patch series includes bug fixes, a performance improvement, and
support for the following AEAD algos:
GCM, CCM, RFC4106, RFC4309, authenc(hmac(shaXXX),cbc(aes))

This patch series is based on linux-next tree and depends on
("crypto/chcr: Add support for Chelsio Crypto Driver ") series.

https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg20658.html

Jitendra Lulla (3):
  Fix memory corruption done by  *((u32 *)dec_key + k) operation.
  Remove malloc/free in crypto operation and allocate memory in Init.
  Added new structure chcr_wr to populate Work Request Header.
  Destination buffer size passed to hardware should not be greater than
crypto operation output.
Harsh Jain (3):
  Use SHASH_DESC_ON_STACK macro to allocate memory for ipad/opad
calculation.
  Move tfm ctx variable to request context.
  Add support for AEAD algos
GCM, CCM, RFC4106, RFC4309, authenc(hmac(shaXXX),cbc(aes))

 drivers/crypto/chelsio/Kconfig   |1 +
 drivers/crypto/chelsio/chcr_algo.c   | 1998 +-
 drivers/crypto/chelsio/chcr_algo.h   |  102 +-
 drivers/crypto/chelsio/chcr_core.c   |8 +-
 drivers/crypto/chelsio/chcr_core.h   |   18 +-
 drivers/crypto/chelsio/chcr_crypto.h |  115 +-
 6 files changed, 1857 insertions(+), 385 deletions(-)

-- 
1.8.2.3



[bug report] crypto: omap-sham - add support functions for sg based data handling

2016-10-13 Thread Dan Carpenter
Hello Tero Kristo,

This is a semi-automatic email about new static checker warnings.

The patch f19de1bc67a0: "crypto: omap-sham - add support functions 
for sg based data handling" from Sep 19, 2016, leads to the following 
Smatch complaint:

drivers/crypto/omap-sham.c:808 omap_sham_prepare_request()
 warn: variable dereferenced before check 'req' (see line 801)

drivers/crypto/omap-sham.c
   800  {
   801  struct omap_sham_reqctx *rctx = ahash_request_ctx(req);
  ^^^
New dereference inside function.

   802  int bs;
   803  int ret;
   804  int nbytes;
   805  bool final = rctx->flags & BIT(FLAGS_FINUP);
   806  int xmit_len, hash_later;
   807  
   808  if (!req)

New check is too late.

   809  return 0;
   810  

regards,
dan carpenter