Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Tim Chen


On Mon, 2013-12-30 at 15:52 +0200, Andy Shevchenko wrote:
 It seems commit d764593a ("crypto: aesni - AVX and AVX2 version of AESNI-GCM
 encode and decode") breaks the build on x86_32, since it's designed only for
 x86_64. This patch makes the compilation unit conditional on CONFIG_64BIT and
 the function usage conditional on CONFIG_X86_64.

Thanks for catching and fixing it.

Tim
 
 Signed-off-by: Andy Shevchenko andriy.shevche...@linux.intel.com
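
For readers following the thread, the shape of the fix described above is
roughly the sketch below. The symbol names are illustrative placeholders,
not the actual aesni-intel functions: the AVX object is added to the
Makefile only under CONFIG_64BIT, and its C call sites are guarded with
CONFIG_X86_64 so that 32-bit builds fall back to the existing SSE path.

#include <linux/linkage.h>
#include <linux/types.h>

#ifdef CONFIG_X86_64
/* Built from the AVX .S unit, which the Makefile adds only for CONFIG_64BIT */
asmlinkage void example_gcm_enc_avx(void *ctx, u8 *out, const u8 *in,
				    unsigned long len);
#endif

/* Plain SSE path, available on both 32-bit and 64-bit builds */
asmlinkage void example_gcm_enc_sse(void *ctx, u8 *out, const u8 *in,
				    unsigned long len);

static bool example_use_avx;	/* decided once at module init */

static void example_gcm_encrypt(void *ctx, u8 *out, const u8 *in,
				unsigned long len)
{
#ifdef CONFIG_X86_64
	if (example_use_avx) {
		example_gcm_enc_avx(ctx, out, in, len);
		return;
	}
#endif
	example_gcm_enc_sse(ctx, out, in, len);
}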




Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread H. Peter Anvin
Can the code be adjusted to compile for 32 bit x86 or is that pointless?

Tim Chen tim.c.c...@linux.intel.com wrote:


 On Mon, 2013-12-30 at 15:52 +0200, Andy Shevchenko wrote:
  It seems commit d764593a ("crypto: aesni - AVX and AVX2 version of AESNI-GCM
  encode and decode") breaks the build on x86_32, since it's designed only for
  x86_64. This patch makes the compilation unit conditional on CONFIG_64BIT and
  the function usage conditional on CONFIG_X86_64.
 
 Thanks for catching and fixing it.
 
 Tim
 
  Signed-off-by: Andy Shevchenko andriy.shevche...@linux.intel.com

-- 
Sent from my mobile phone.  Please pardon brevity and lack of formatting.


Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Tim Chen
On Mon, 2014-01-06 at 09:45 -0800, H. Peter Anvin wrote:
 Can the code be adjusted to compile for 32 bit x86 or is that pointless?
 

Code was optimized for wide registers.  So it is only meant for x86_64.

Tim

 Tim Chen tim.c.c...@linux.intel.com wrote:
 
 
 On Mon, 2013-12-30 at 15:52 +0200, Andy Shevchenko wrote:
  It seems commit d764593a ("crypto: aesni - AVX and AVX2 version of AESNI-GCM
  encode and decode") breaks the build on x86_32, since it's designed only for
  x86_64. This patch makes the compilation unit conditional on CONFIG_64BIT and
  the function usage conditional on CONFIG_X86_64.
 
 Thanks for catching and fixing it.
 
 Tim
  
  Signed-off-by: Andy Shevchenko andriy.shevche...@linux.intel.com
 




Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread H. Peter Anvin
On 01/06/2014 09:57 AM, Tim Chen wrote:
 On Mon, 2014-01-06 at 09:45 -0800, H. Peter Anvin wrote:
 Can the code be adjusted to compile for 32 bit x86 or is that pointless?

 
 Code was optimized for wide registers.  So it is only meant for x86_64.
 

Aren't the wide registers the vector registers?  Or are you also
relying on 64-bit integer registers (in which case we should just rename
the file to make it clear it is x86-64 only.)

-hpa




Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Tim Chen
On Mon, 2014-01-06 at 10:00 -0800, H. Peter Anvin wrote:
 On 01/06/2014 09:57 AM, Tim Chen wrote:
  On Mon, 2014-01-06 at 09:45 -0800, H. Peter Anvin wrote:
  Can the code be adjusted to compile for 32 bit x86 or is that pointless?
 
  
  Code was optimized for wide registers.  So it is only meant for x86_64.
  
 
 Aren't the wide registers the vector registers?  Or are you also
 relying on 64-bit integer registers (in which case we should just rename
 the file to make it clear it is x86-64 only.)
 
   -hpa
 
 

Yes, the code is in the file named aesni_intel_avx.S.  So it should
be clear that the code is meant for x86_64.

Tim



[PATCH 0/6] crypto: ccp - more code fixes/cleanup

2014-01-06 Thread Tom Lendacky
The following series implements a fix for hash length wrapping as well
as some additional fixes and cleanups (proper gfp_t type on some memory
allocations, scatterlist usage improvements, NULL request result field
checks and driver enabled/disabled changes).

This patch series is based on the cryptodev-2.6 kernel tree.

---

Tom Lendacky (6):
  crypto: ccp - Apply appropriate gfp_t type to memory allocations
  crypto: ccp - Cleanup scatterlist usage
  crypto: ccp - Check for caller result area before using it
  crypto: ccp - Change data length declarations to u64
  crypto: ccp - Cleanup hash invocation calls
  crypto: ccp - CCP device enabled/disabled changes


 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |   38 +
 drivers/crypto/ccp/ccp-crypto-sha.c  |   88 ++
 drivers/crypto/ccp/ccp-crypto.h  |   10 +++
 drivers/crypto/ccp/ccp-dev.c |   15 +
 drivers/crypto/ccp/ccp-ops.c |   34 ++--
 drivers/crypto/ccp/ccp-pci.c |3 +
 include/linux/ccp.h  |   20 +--
 7 files changed, 139 insertions(+), 69 deletions(-)

-- 
Tom Lendacky



[PATCH 5/6] crypto: ccp - Cleanup hash invocation calls

2014-01-06 Thread Tom Lendacky
Clean up the ahash digest invocations to check the init
return code and make use of the finup routine.

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |2 +-
 drivers/crypto/ccp/ccp-crypto-sha.c  |8 ++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index a52b97a..8e162ad 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -198,7 +198,7 @@ static int ccp_aes_cmac_digest(struct ahash_request *req)
 	if (ret)
 		return ret;
 
-	return ccp_do_cmac_update(req, req->nbytes, 1);
+	return ccp_aes_cmac_finup(req);
 }
 
 static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index d30f6c8..3867290 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -248,9 +248,13 @@ static int ccp_sha_finup(struct ahash_request *req)
 
 static int ccp_sha_digest(struct ahash_request *req)
 {
-	ccp_sha_init(req);
+	int ret;
 
-	return ccp_do_sha_update(req, req->nbytes, 1);
+	ret = ccp_sha_init(req);
+	if (ret)
+		return ret;
+
+	return ccp_sha_finup(req);
 }
 
 static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key,




[PATCH 2/6] crypto: ccp - Cleanup scatterlist usage

2014-01-06 Thread Tom Lendacky
Clean up the usage of scatterlists to make the code cleaner
and to avoid extra memory allocations when they are not needed.
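
As a rough illustration of the approach (a hedged sketch with illustrative
names, not the driver's exact helpers): an sg_table is only allocated when
buffered data and new request data both have to be chained; otherwise the
command can point at a single-entry scatterlist or at the caller's list
directly.

#include <linux/scatterlist.h>

static struct scatterlist *example_pick_src(struct sg_table *tbl,
					    struct scatterlist *buf_sg,
					    void *buf, unsigned int buf_len,
					    struct scatterlist *req_src,
					    unsigned int nbytes, gfp_t gfp)
{
	if (buf_len && nbytes) {
		/* Two pieces: build a table and chain them together */
		if (sg_alloc_table(tbl, sg_nents(req_src) + 1, gfp))
			return NULL;
		sg_init_one(buf_sg, buf, buf_len);
		/* copying into the table (ccp_crypto_sg_table_add()) omitted */
		return tbl->sgl;
	} else if (buf_len) {
		/* Only buffered data: a single on-stack entry is enough */
		sg_init_one(buf_sg, buf, buf_len);
		return buf_sg;
	} else if (nbytes) {
		/* Only request data: use the caller's scatterlist as-is */
		return req_src;
	}
	return NULL;	/* nothing to hash */
}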

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |6 ++-
 drivers/crypto/ccp/ccp-crypto-sha.c  |   53 --
 2 files changed, 33 insertions(+), 26 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index 398832c..646c8d1 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -125,8 +125,10 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 		sg_init_one(&rctx->pad_sg, rctx->pad, pad_length);
 		sg = ccp_crypto_sg_table_add(&rctx->data_sg, &rctx->pad_sg);
 	}
-	if (sg)
+	if (sg) {
 		sg_mark_end(sg);
+		sg = rctx->data_sg.sgl;
+	}
 
 	/* Initialize the K1/K2 scatterlist */
 	if (final)
@@ -143,7 +145,7 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 	rctx->cmd.u.aes.key_len = ctx->u.aes.key_len;
 	rctx->cmd.u.aes.iv = &rctx->iv_sg;
 	rctx->cmd.u.aes.iv_len = AES_BLOCK_SIZE;
-	rctx->cmd.u.aes.src = (sg) ? rctx->data_sg.sgl : NULL;
+	rctx->cmd.u.aes.src = sg;
 	rctx->cmd.u.aes.src_len = rctx->hash_cnt;
 	rctx->cmd.u.aes.dst = NULL;
 	rctx->cmd.u.aes.cmac_key = cmac_key_sg;
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index 0571940..bf913cb 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -122,7 +122,6 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 			     unsigned int final)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
 	struct scatterlist *sg;
 	unsigned int block_size =
@@ -153,35 +152,32 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 	/* Initialize the context scatterlist */
 	sg_init_one(&rctx->ctx_sg, rctx->ctx, sizeof(rctx->ctx));
 
-	/* Build the data scatterlist table - allocate enough entries for all
-	 * possible data pieces (hmac ipad, buffer, input data)
-	 */
-	sg_count = (nbytes) ? sg_nents(req->src) + 2 : 2;
-	gfp = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
-		GFP_KERNEL : GFP_ATOMIC;
-	ret = sg_alloc_table(&rctx->data_sg, sg_count, gfp);
-	if (ret)
-		return ret;
-
 	sg = NULL;
-	if (rctx->first && ctx->u.sha.key_len) {
-		rctx->hash_cnt += block_size;
-
-		sg_init_one(&rctx->pad_sg, ctx->u.sha.ipad, block_size);
-		sg = ccp_crypto_sg_table_add(&rctx->data_sg, &rctx->pad_sg);
-	}
+	if (rctx->buf_count && nbytes) {
+		/* Build the data scatterlist table - allocate enough entries
+		 * for both data pieces (buffer and input data)
+		 */
+		gfp = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+			GFP_KERNEL : GFP_ATOMIC;
+		sg_count = sg_nents(req->src) + 1;
+		ret = sg_alloc_table(&rctx->data_sg, sg_count, gfp);
+		if (ret)
+			return ret;
 
-	if (rctx->buf_count) {
 		sg_init_one(&rctx->buf_sg, rctx->buf, rctx->buf_count);
 		sg = ccp_crypto_sg_table_add(&rctx->data_sg, &rctx->buf_sg);
-	}
-
-	if (nbytes)
 		sg = ccp_crypto_sg_table_add(&rctx->data_sg, req->src);
-
-	if (sg)
 		sg_mark_end(sg);
 
+		sg = rctx->data_sg.sgl;
+	} else if (rctx->buf_count) {
+		sg_init_one(&rctx->buf_sg, rctx->buf, rctx->buf_count);
+
+		sg = &rctx->buf_sg;
+	} else if (nbytes) {
+		sg = req->src;
+	}
+
 	rctx->msg_bits += (rctx->hash_cnt << 3);	/* Total in bits */
 
 	memset(&rctx->cmd, 0, sizeof(rctx->cmd));
@@ -190,7 +186,7 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 	rctx->cmd.u.sha.type = rctx->type;
 	rctx->cmd.u.sha.ctx = &rctx->ctx_sg;
 	rctx->cmd.u.sha.ctx_len = sizeof(rctx->ctx);
-	rctx->cmd.u.sha.src = (sg) ? rctx->data_sg.sgl : NULL;
+	rctx->cmd.u.sha.src = sg;
 	rctx->cmd.u.sha.src_len = rctx->hash_cnt;
 	rctx->cmd.u.sha.final = rctx->final;
 	rctx->cmd.u.sha.msg_bits = rctx->msg_bits;
@@ -205,9 +201,12 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 static int ccp_sha_init(struct ahash_request *req)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
 	struct ccp_crypto_ahash_alg *alg =

[PATCH 3/6] crypto: ccp - Check for caller result area before using it

2014-01-06 Thread Tom Lendacky
For a hash operation, the caller doesn't have to supply a result
area on every call, so don't use or update it if it hasn't
been supplied.

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |4 +++-
 drivers/crypto/ccp/ccp-crypto-sha.c  |7 +--
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index 646c8d1..c6b8f9e 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -43,7 +43,9 @@ static int ccp_aes_cmac_complete(struct crypto_async_request *async_req,
 	} else
 		rctx->buf_count = 0;
 
-	memcpy(req->result, rctx->iv, digest_size);
+	/* Update result area if supplied */
+	if (req->result)
+		memcpy(req->result, rctx->iv, digest_size);
 
 e_free:
 	sg_free_table(&rctx->data_sg);
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index bf913cb..183d16e 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -74,6 +74,7 @@ static int ccp_sha_finish_hmac(struct crypto_async_request *async_req)
 	struct ahash_request *req = ahash_request_cast(async_req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct ccp_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
 	struct scatterlist sg[2];
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
@@ -81,7 +82,7 @@ static int ccp_sha_finish_hmac(struct crypto_async_request *async_req)
 
 	sg_init_table(sg, ARRAY_SIZE(sg));
 	sg_set_buf(&sg[0], ctx->u.sha.opad, block_size);
-	sg_set_buf(&sg[1], req->result, digest_size);
+	sg_set_buf(&sg[1], rctx->ctx, digest_size);
 
 	return ccp_sync_hash(ctx->u.sha.hmac_tfm, req->result, sg,
 			     block_size + digest_size);
@@ -106,7 +107,9 @@ static int ccp_sha_complete(struct crypto_async_request *async_req, int ret)
 	} else
 		rctx->buf_count = 0;
 
-	memcpy(req->result, rctx->ctx, digest_size);
+	/* Update result area if supplied */
+	if (req->result)
+		memcpy(req->result, rctx->ctx, digest_size);
 
 	/* If we're doing an HMAC, we need to perform that on the final op */
 	if (rctx->final && ctx->u.sha.key_len)




[PATCH 4/6] crypto: ccp - Change data length declarations to u64

2014-01-06 Thread Tom Lendacky
When performing a hash operation, if data is already buffered and a
request at or near the maximum data length is received, the length
calculation could wrap, causing an error in executing the hash operation.
Fix this by using a u64 type for the input and output data lengths in
all CCP operations.
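
To make the failure mode concrete, here is a hedged sketch (illustrative
values, not code from this series) of how a 32-bit sum can wrap while the
u64 sum stays correct:

#include <linux/types.h>

static u64 example_total_len(unsigned int buf_count, unsigned int nbytes)
{
	/* e.g. buf_count = 64, nbytes = 0xffffffc0 */
	unsigned int wrapped = buf_count + nbytes;	/* wraps to 0 */
	u64 safe = (u64)buf_count + (u64)nbytes;	/* 0x100000000, correct */

	(void)wrapped;	/* only here to show the broken 32-bit result */
	return safe;
}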

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |   21 +++
 drivers/crypto/ccp/ccp-crypto-sha.c  |   21 +++
 drivers/crypto/ccp/ccp-crypto.h  |   10 +++--
 drivers/crypto/ccp/ccp-ops.c |   34 +-
 include/linux/ccp.h  |8 ---
 5 files changed, 57 insertions(+), 37 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index c6b8f9e..a52b97a 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -37,8 +37,9 @@ static int ccp_aes_cmac_complete(struct crypto_async_request *async_req,
 
 	if (rctx->hash_rem) {
 		/* Save remaining data to buffer */
-		scatterwalk_map_and_copy(rctx->buf, rctx->cmd.u.aes.src,
-					 rctx->hash_cnt, rctx->hash_rem, 0);
+		unsigned int offset = rctx->nbytes - rctx->hash_rem;
+		scatterwalk_map_and_copy(rctx->buf, rctx->src,
+					 offset, rctx->hash_rem, 0);
 		rctx->buf_count = rctx->hash_rem;
 	} else
 		rctx->buf_count = 0;
@@ -62,8 +63,9 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 	struct scatterlist *sg, *cmac_key_sg = NULL;
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
-	unsigned int len, need_pad, sg_count;
+	unsigned int need_pad, sg_count;
 	gfp_t gfp;
+	u64 len;
 	int ret;
 
 	if (!ctx->u.aes.key_len)
@@ -72,7 +74,9 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 	if (nbytes)
 		rctx->null_msg = 0;
 
-	if (!final && ((nbytes + rctx->buf_count) <= block_size)) {
+	len = (u64)rctx->buf_count + (u64)nbytes;
+
+	if (!final && (len <= block_size)) {
 		scatterwalk_map_and_copy(rctx->buf + rctx->buf_count, req->src,
 					 0, nbytes, 0);
 		rctx->buf_count += nbytes;
@@ -80,12 +84,13 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 		return 0;
 	}
 
-	len = rctx->buf_count + nbytes;
+	rctx->src = req->src;
+	rctx->nbytes = nbytes;
 
 	rctx->final = final;
-	rctx->hash_cnt = final ? len : len & ~(block_size - 1);
-	rctx->hash_rem = final ?   0 : len &  (block_size - 1);
-	if (!final && (rctx->hash_cnt == len)) {
+	rctx->hash_rem = final ? 0 : len & (block_size - 1);
+	rctx->hash_cnt = len - rctx->hash_rem;
+	if (!final && !rctx->hash_rem) {
 		/* CCP can't do zero length final, so keep some data around */
 		rctx->hash_cnt -= block_size;
 		rctx->hash_rem = block_size;
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index 183d16e..d30f6c8 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -101,8 +101,9 @@ static int ccp_sha_complete(struct crypto_async_request *async_req, int ret)
 
 	if (rctx->hash_rem) {
 		/* Save remaining data to buffer */
-		scatterwalk_map_and_copy(rctx->buf, rctx->cmd.u.sha.src,
-					 rctx->hash_cnt, rctx->hash_rem, 0);
+		unsigned int offset = rctx->nbytes - rctx->hash_rem;
+		scatterwalk_map_and_copy(rctx->buf, rctx->src,
+					 offset, rctx->hash_rem, 0);
 		rctx->buf_count = rctx->hash_rem;
 	} else
 		rctx->buf_count = 0;
@@ -129,11 +130,14 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 	struct scatterlist *sg;
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
-	unsigned int len, sg_count;
+	unsigned int sg_count;
 	gfp_t gfp;
+	u64 len;
 	int ret;
 
-	if (!final && ((nbytes + rctx->buf_count) <= block_size)) {
+	len = (u64)rctx->buf_count + (u64)nbytes;
+
+	if (!final && (len <= block_size)) {
 		scatterwalk_map_and_copy(rctx->buf + rctx->buf_count, req->src,
 					 0, nbytes, 0);
 		rctx->buf_count += nbytes;
@@ -141,12 +145,13 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 		return 0;
 	}
 
-	len = rctx->buf_count + nbytes;
+	rctx->src = req->src;
+	rctx->nbytes = nbytes;
 

[PATCH 6/6] crypto: ccp - CCP device enabled/disabled changes

2014-01-06 Thread Tom Lendacky
The CCP cannot be hot-plugged so it will either be there
or it won't.  Do not allow the driver to stay loaded if the
CCP does not successfully initialize.

Provide stub routines in the ccp.h file that return -ENODEV
if the CCP has not been configured in the build.
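
As a usage illustration (a sketch, not code from this series): with those
stubs in place a caller can invoke ccp_enqueue_cmd() unconditionally and
treat -ENODEV as "no CCP available", instead of wrapping every call site
in #ifdef CONFIG_CRYPTO_DEV_CCP_DD.

#include <linux/ccp.h>
#include <linux/errno.h>

static int example_submit(struct ccp_cmd *cmd)
{
	int ret = ccp_enqueue_cmd(cmd);

	if (ret == -ENODEV)
		return ret;	/* CCP not built in or not present: fall back */

	/* other return codes are handled by the normal async completion path */
	return ret;
}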

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-dev.c |   15 ++-
 drivers/crypto/ccp/ccp-pci.c |3 +++
 include/linux/ccp.h  |   12 
 3 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index b2038a7..c3bc212 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -552,6 +552,7 @@ static const struct x86_cpu_id ccp_support[] = {
 static int __init ccp_mod_init(void)
 {
struct cpuinfo_x86 *cpuinfo = &boot_cpu_data;
+   int ret;
 
if (!x86_match_cpu(ccp_support))
return -ENODEV;
@@ -560,7 +561,19 @@ static int __init ccp_mod_init(void)
case 22:
if ((cpuinfo->x86_model < 48) || (cpuinfo->x86_model > 63))
return -ENODEV;
-   return ccp_pci_init();
+
+   ret = ccp_pci_init();
+   if (ret)
+   return ret;
+
+   /* Don't leave the driver loaded if init failed */
+   if (!ccp_get_device()) {
+   ccp_pci_exit();
+   return -ENODEV;
+   }
+
+   return 0;
+
break;
}
 
diff --git a/drivers/crypto/ccp/ccp-pci.c b/drivers/crypto/ccp/ccp-pci.c
index 1fbeaf1..11836b7 100644
--- a/drivers/crypto/ccp/ccp-pci.c
+++ b/drivers/crypto/ccp/ccp-pci.c
@@ -268,6 +268,9 @@ static void ccp_pci_remove(struct pci_dev *pdev)
struct device *dev = &pdev->dev;
struct ccp_device *ccp = dev_get_drvdata(dev);
 
+   if (!ccp)
+   return;
+
ccp_destroy(ccp);
 
pci_iounmap(pdev, ccp->io_map);
diff --git a/include/linux/ccp.h b/include/linux/ccp.h
index 12f1cfd..b941ab9 100644
--- a/include/linux/ccp.h
+++ b/include/linux/ccp.h
@@ -23,6 +23,9 @@
 struct ccp_device;
 struct ccp_cmd;
 
+#if defined(CONFIG_CRYPTO_DEV_CCP_DD) || \
+   defined(CONFIG_CRYPTO_DEV_CCP_DD_MODULE)
+
 /**
  * ccp_enqueue_cmd - queue an operation for processing by the CCP
  *
@@ -48,6 +51,15 @@ struct ccp_cmd;
  */
 int ccp_enqueue_cmd(struct ccp_cmd *cmd);
 
+#else /* CONFIG_CRYPTO_DEV_CCP_DD is not enabled */
+
+static inline int ccp_enqueue_cmd(struct ccp_cmd *cmd)
+{
+   return -ENODEV;
+}
+
+#endif /* CONFIG_CRYPTO_DEV_CCP_DD */
+
 
 /* AES engine */
 /**




[PATCH 1/6] crypto: ccp - Apply appropriate gfp_t type to memory allocations

2014-01-06 Thread Tom Lendacky
Fix some memory allocations to use the appropriate gfp_t type based
on the CRYPTO_TFM_REQ_MAY_SLEEP flag.
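
The selection rule being applied is the usual crypto API convention; a
minimal sketch (illustrative helper name) looks like this:

#include <linux/crypto.h>
#include <linux/gfp.h>

static gfp_t example_req_gfp(struct crypto_async_request *base)
{
	/* Only sleep-capable requests may use GFP_KERNEL */
	return (base->flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
		GFP_KERNEL : GFP_ATOMIC;
}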

Signed-off-by: Tom Lendacky thomas.lenda...@amd.com
---
 drivers/crypto/ccp/ccp-crypto-aes-cmac.c |5 -
 drivers/crypto/ccp/ccp-crypto-sha.c  |5 -
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
index 64dd35e..398832c 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
@@ -61,6 +61,7 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
 	unsigned int len, need_pad, sg_count;
+	gfp_t gfp;
 	int ret;
 
 	if (!ctx->u.aes.key_len)
@@ -99,7 +100,9 @@ static int ccp_do_cmac_update(struct ahash_request *req, unsigned int nbytes,
 	 * possible data pieces (buffer, input data, padding)
 	 */
 	sg_count = (nbytes) ? sg_nents(req->src) + 2 : 2;
-	ret = sg_alloc_table(&rctx->data_sg, sg_count, GFP_KERNEL);
+	gfp = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+		GFP_KERNEL : GFP_ATOMIC;
+	ret = sg_alloc_table(&rctx->data_sg, sg_count, gfp);
 	if (ret)
 		return ret;
 
diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
index b0881df..0571940 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -128,6 +128,7 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 	unsigned int block_size =
 		crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm));
 	unsigned int len, sg_count;
+	gfp_t gfp;
 	int ret;
 
 	if (!final && ((nbytes + rctx->buf_count) <= block_size)) {
@@ -156,7 +157,9 @@ static int ccp_do_sha_update(struct ahash_request *req, unsigned int nbytes,
 	 * possible data pieces (hmac ipad, buffer, input data)
 	 */
 	sg_count = (nbytes) ? sg_nents(req->src) + 2 : 2;
-	ret = sg_alloc_table(&rctx->data_sg, sg_count, GFP_KERNEL);
+	gfp = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+		GFP_KERNEL : GFP_ATOMIC;
+	ret = sg_alloc_table(&rctx->data_sg, sg_count, gfp);
 	if (ret)
 		return ret;
 




Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Borislav Petkov
On Mon, Jan 06, 2014 at 10:10:55AM -0800, Tim Chen wrote:
 Yes, the code is in the file named aesni_intel_avx.S. So it should be
 clear that the code is meant for x86_64.

How do you deduce aesni_intel_avx.S is meant for x86_64 only from the
name?

Shouldn't it be called aesni_intel_avx-x86_64.S, as is the naming
convention in arch/x86/crypto/?


-- 
Regards/Gruss,
Boris.

Sent from a fat crate under my desk. Formatting is fine.


Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread H. Peter Anvin
On 01/06/2014 12:26 PM, Borislav Petkov wrote:
 On Mon, Jan 06, 2014 at 10:10:55AM -0800, Tim Chen wrote:
 Yes, the code is in the file named aesni_intel_avx.S. So it should be
 clear that the code is meant for x86_64.
 
 How do you deduce aesni_intel_avx.S is meant for x86_64 only from the
 name?
 
 Shouldn't it be called aesni_intel_avx-x86_64.S, as is the naming
 convention in arch/x86/crypto/
 

Quite.

-hpa




Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Tim Chen
On Mon, 2014-01-06 at 13:21 -0800, H. Peter Anvin wrote:
 On 01/06/2014 12:26 PM, Borislav Petkov wrote:
  On Mon, Jan 06, 2014 at 10:10:55AM -0800, Tim Chen wrote:
  Yes, the code is in the file named aesni_intel_avx.S. So it should be
  clear that the code is meant for x86_64.
  
  How do you deduce aesni_intel_avx.S is meant for x86_64 only from the
  name?
  
  Shouldn't it be called aesni_intel_avx-x86_64.S, as is the naming
  convention in arch/x86/crypto/
  
 
 Quite.
 
   -hpa
 
 

Will renaming the file to aesni_intel_avx-x86_64.S make things clearer
now?

Tim

---cut---here---

From 41656afcbd63ccb92357d4937a75629499f4fd4f Mon Sep 17 00:00:00 2001
From: Tim Chen tim.c.c...@linux.intel.com
Date: Mon, 6 Jan 2014 07:23:52 -0800
Subject: [PATCH] crypto: Rename aesni-intel_avx.S to indicate it only
 supports x86_64
To: Herbert Xu herb...@gondor.apana.org.au, H. Peter Anvin h...@zytor.com
Cc: Borislav Petkov b...@alien8.de, Andy Shevchenko andriy.shevche...@linux.intel.com, linux-crypto@vger.kernel.org

We rename aesni-intel_avx.S to aesni-intel_avx-x86_64.S to indicate
that it is only used by the x86_64 architecture.
---
 arch/x86/crypto/Makefile                                        | 2 +-
 arch/x86/crypto/{aesni-intel_avx.S => aesni-intel_avx-x86_64.S} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename arch/x86/crypto/{aesni-intel_avx.S => aesni-intel_avx-x86_64.S} (100%)

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 188b993..6ba54d6 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -76,7 +76,7 @@ ifeq ($(avx2_supported),yes)
 endif
 
 aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o fpu.o
-aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx.o
+aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o
 ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o
 sha1-ssse3-y := sha1_ssse3_asm.o sha1_ssse3_glue.o
 crc32c-intel-y := crc32c-intel_glue.o
diff --git a/arch/x86/crypto/aesni-intel_avx.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
similarity index 100%
rename from arch/x86/crypto/aesni-intel_avx.S
rename to arch/x86/crypto/aesni-intel_avx-x86_64.S
-- 
1.7.11.7





Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread H. Peter Anvin
On 01/06/2014 03:39 PM, Tim Chen wrote:
 
 Will renaming the file to aesni_intel_avx-x86_64.S make things clearer
 now?
 
 Tim

Yes.

Acked-by: H. Peter Anvin h...@linux.intel.com

Herbert, can you pick it up?

-hpa



Re: [PATCH v1] crypto: aesni - fix build on x86 (32bit)

2014-01-06 Thread Herbert Xu
On Mon, Jan 06, 2014 at 03:41:51PM -0800, H. Peter Anvin wrote:
 On 01/06/2014 03:39 PM, Tim Chen wrote:
  
  Will renaming the file to aesni_intel_avx-x86_64.S make things clearer
  now?
  
  Tim
 
 Yes.
 
 Acked-by: H. Peter Anvin h...@linux.intel.com
 
 Herbert, can you pick it up?

Sure I'll apply this patch.

Thanks!
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt