Re: [PATCH V2 2/3] crypto: ccp - Enable support for AES GCM on v5 CCPs

2017-03-02 Thread Stephan Müller
On Thursday, 2 March 2017 at 22:26:54 CET, Gary R Hook wrote:

Hi Gary,

> A version 5 device provides the primitive commands
> required for AES GCM. This patch adds support for
> en/decryption.
> 
> Signed-off-by: Gary R Hook 
> ---
>  drivers/crypto/ccp/Makefile|1 
>  drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 
>  drivers/crypto/ccp/ccp-crypto-main.c   |   12 +
>  drivers/crypto/ccp/ccp-crypto.h|   14 ++
>  drivers/crypto/ccp/ccp-dev-v5.c|2 
>  drivers/crypto/ccp/ccp-dev.h   |1 
>  drivers/crypto/ccp/ccp-ops.c   |  252 +++
>  include/linux/ccp.h|9 +
>  8 files changed, 548 insertions(+)
>  create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
> 
> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 346ceb8..9ca1722 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
>  ccp-crypto-aes.o \
>  ccp-crypto-aes-cmac.o \
>  ccp-crypto-aes-xts.o \
> +ccp-crypto-aes-galois.o \
>  ccp-crypto-sha.o
> diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c
> b/drivers/crypto/ccp/ccp-crypto-aes-galois.c new file mode 100644
> index 0000000..8bc18c9
> --- /dev/null
> +++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
> @@ -0,0 +1,257 @@
> +/*
> + * AMD Cryptographic Coprocessor (CCP) AES GCM crypto API support
> + *
> + * Copyright (C) 2016 Advanced Micro Devices, Inc.
> + *
> + * Author: Gary R Hook 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "ccp-crypto.h"
> +
> +#define  AES_GCM_IVSIZE  12
> +
> +static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int
> ret) +{
> + return ret;
> +}
> +
> +static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
> +   unsigned int key_len)
> +{
> + struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
> +
> + switch (key_len) {
> + case AES_KEYSIZE_128:
> + ctx->u.aes.type = CCP_AES_TYPE_128;
> + break;
> + case AES_KEYSIZE_192:
> + ctx->u.aes.type = CCP_AES_TYPE_192;
> + break;
> + case AES_KEYSIZE_256:
> + ctx->u.aes.type = CCP_AES_TYPE_256;
> + break;
> + default:
> + crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
> + return -EINVAL;
> + }
> +
> + ctx->u.aes.mode = CCP_AES_MODE_GCM;
> + ctx->u.aes.key_len = key_len;
> +
> + memcpy(ctx->u.aes.key, key, key_len);
> + sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
> +
> + return 0;
> +}
> +
> +static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
> +unsigned int authsize)
> +{
> + return 0;
> +}
> +
> +static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
> +{
> + struct crypto_aead *tfm = crypto_aead_reqtfm(req);
> + struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
> + struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
> + struct scatterlist *iv_sg = NULL;
> + unsigned int iv_len = 0;
> + int i;
> + int ret = 0;
> +
> + if (!ctx->u.aes.key_len)
> + return -EINVAL;
> +
> + if (ctx->u.aes.mode != CCP_AES_MODE_GCM)
> + return -EINVAL;
> +
> + if (!req->iv)
> + return -EINVAL;
> +
> + /*
> +  * 5 parts:
> +  *   plaintext/ciphertext input
> +  *   AAD
> +  *   key
> +  *   IV
> +  *   Destination+tag buffer
> +  */
> +
> + /* According to the way AES GCM has been implemented here,
> +  * per RFC 4106 it seems, the provided IV is fixed at 12 bytes,

When you have that restriction, should the cipher be called rfc4106(gcm(aes))?

But then the key is 4 bytes longer than a normal AES key, as it contains the
leading 32 bits of the IV (see the sketch at the end of this message).

> +  * occupies the beginning of the IV array. Write a 32-bit
> +  * integer after that (bytes 13-16) with a value of "1".
> +  */
> + memcpy(rctx->iv, req->iv, AES_GCM_IVSIZE);
> + for (i = 0; i < 3; i++)
> + rctx->iv[i + AES_GCM_IVSIZE] = 0;
> + rctx->iv[AES_BLOCK_SIZE - 1] = 1;
> +
> + /* Set up a scatterlist for the IV */
> + iv_sg = &rctx->iv_sg;
> + iv_len = AES_BLOCK_SIZE;
> + sg_init_one(iv_sg, rctx->iv, iv_len);
> +
> + /* The AAD + plaintext are concatenated in the src buffer */
> + memset(&rctx->cmd, 0, sizeof(rctx->cmd));
> + INIT_LIST_HEAD(&rctx->cmd.entry);
> + rctx->cmd.engine = CCP_ENGINE_AES;
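
For reference, the generic rfc4106 template in crypto/gcm.c follows exactly
that convention: the last 4 bytes of the supplied key are split off as the
nonce that forms the leading 32 bits of every IV. A minimal sketch of that
setkey convention (context-structure names are abbreviated here; this is not
the CCP code):

    static int rfc4106_setkey(struct crypto_aead *parent, const u8 *key,
                              unsigned int keylen)
    {
            struct rfc4106_ctx *ctx = crypto_aead_ctx(parent);

            /* The rfc4106 key is the AES key plus a 4-byte nonce (salt). */
            if (keylen < 4)
                    return -EINVAL;

            keylen -= 4;
            memcpy(ctx->nonce, key + keylen, 4);    /* IV bytes 0-3 */

            /* The remaining bytes are the key for the inner gcm(aes). */
            return crypto_aead_setkey(ctx->child, key, keylen);
    }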

Re: XTS Crypto Not Found In /proc/crypto Even After Compiled for 4.10.1.

2017-03-02 Thread Herbert Xu
On Thu, Mar 02, 2017 at 05:35:30PM -0600, Nathan Royce wrote:
> ARM ODroid XU4
> 
> $ cat /proc/config.gz | gunzip | grep XTS
> CONFIG_CRYPTO_XTS=y
> 
> $ grep xts /proc/crypto
> //4.9.13
> name : xts(aes)
> driver   : xts(aes-generic)
> //4.10.1
> 
> //cbc can be found though
> 
> CRYPTTAB:
> cryptswap1 UUID= /dev/urandom
> swap,offset=2048,cipher=aes-xts-plain64:sha512,size=512,nofail
> 
> FSTAB:
> /dev/mapper/cryptswap1 none swap sw 0 0
> 
> Boot Log:
> [   10.535985] [ cut here ]
> [   10.539252] WARNING: CPU: 0 PID: 0 at crypto/skcipher.c:430
> skcipher_walk_first+0x13c/0x14c
> [   10.547542] Modules linked in: xor xor_neon aes_arm zlib_deflate
> dm_crypt raid6_pq nfsd auth_rpcgss oid_registry nfs_acl lockd grace sunrpc
> ip_tables x_tables
> [   10.561716] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.10.1-dirty #1
> [   10.568049] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
> [   10.574171] [] (unwind_backtrace) from []
> (show_stack+0x10/0x14)
> [   10.581893] [] (show_stack) from []
> (dump_stack+0x84/0x98)
> [   10.589073] [] (dump_stack) from []
> (__warn+0xe8/0x100)
> [   10.595975] [] (__warn) from []
> (warn_slowpath_null+0x20/0x28)
> [   10.603546] [] (warn_slowpath_null) from []
> (skcipher_walk_first+0x13c/0x14c)
> [   10.612390] [] (skcipher_walk_first) from []
> (skcipher_walk_virt+0x1c/0x38)
> [   10.621056] [] (skcipher_walk_virt) from []
> (post_crypt+0x38/0x1c4)
> [   10.629022] [] (post_crypt) from []
> (decrypt_done+0x4c/0x54)
> [   10.636389] [] (decrypt_done) from []
> (s5p_aes_complete+0x70/0xfc)
> [   10.644274] [] (s5p_aes_complete) from []
> (s5p_aes_interrupt+0x134/0x1a0)
> [   10.652771] [] (s5p_aes_interrupt) from []
> (__handle_irq_event_percpu+0x9c/0x124)

This looks like a bug in the s5p driver.  It's calling the completion
function straight from the IRQ handler, which is triggering the
sanity check in skcipher_walk_first.

The s5p driver needs to schedule a tasklet to call the completion
function.

Do you have the crypto self-tests enabled? If so, they should have caught
this at run-time. Otherwise you can disable the s5p driver until
it's fixed.
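
(The self-tests run at algorithm registration when
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set.)

A minimal sketch of the deferral pattern follows; the field names are
hypothetical, and the real fix must also hand the active request and its
error status to the tasklet:

    /* Sketch: defer the completion call out of hard-IRQ context. */
    static void s5p_done_tasklet(unsigned long data)
    {
            struct s5p_aes_dev *dd = (struct s5p_aes_dev *)data;

            /* Softirq context: skcipher_walk and friends are legal here. */
            dd->req->base.complete(&dd->req->base, dd->err);
    }

    static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id)
    {
            struct s5p_aes_dev *dd = dev_id;

            /* ... read/ack the hardware status, record it in dd->err ... */

            tasklet_schedule(&dd->done_task);  /* complete later, not here */
            return IRQ_HANDLED;
    }

    /* In probe:
     *   tasklet_init(&dd->done_task, s5p_done_tasklet, (unsigned long)dd);
     */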

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH] crypto: powerpc - Fix initialisation of crc32c context

2017-03-02 Thread Daniel Axtens
Turning on crypto self-tests on a POWER8 shows:

alg: hash: Test 1 failed for crc32c-vpmsum
00000000: ff ff ff ff

Comparing the code with the Intel CRC32c implementation on which
ours is based shows that we are doing an init with 0, not ~0
as CRC32c requires.

This probably wasn't caught because btrfs does its own weird
open-coded initialisation.

Initialise our internal context to ~0 on init.

This makes the self-tests pass, and btrfs continues to work.
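
For context, standard CRC32C seeds the CRC register with all-ones and inverts
it again when finalising; the driver keeps the unfinalised state in its
context, which is why *key must start out as ~0. A user-space sketch of the
bitwise algorithm (reflected Castagnoli polynomial) shows both inversions:

    #include <stddef.h>
    #include <stdint.h>

    uint32_t crc32c(const uint8_t *buf, size_t len)
    {
            uint32_t crc = ~0u;             /* init with ~0, not 0 */

            while (len--) {
                    crc ^= *buf++;
                    for (int i = 0; i < 8; i++)
                            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
            }
            return ~crc;                    /* final inversion */
    }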

Fixes: 6dd7a82cc54e ("crypto: powerpc - Add POWER8 optimised crc32c")
Cc: Anton Blanchard 
Cc: sta...@vger.kernel.org
Signed-off-by: Daniel Axtens 
---
 arch/powerpc/crypto/crc32c-vpmsum_glue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/crypto/crc32c-vpmsum_glue.c 
b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
index 9fa046d56eba..411994551afc 100644
--- a/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+++ b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
@@ -52,7 +52,7 @@ static int crc32c_vpmsum_cra_init(struct crypto_tfm *tfm)
 {
u32 *key = crypto_tfm_ctx(tfm);
 
-   *key = 0;
+   *key = ~0;
 
return 0;
 }
-- 
2.9.3



Re: [PATCH] crypto: Add ECB dependency for XTS mode

2017-03-02 Thread Milan Broz
Patch below should be backported to 4.10 stable
(only 4.10, older kernels are ok).
We have reports that some systems now fail to boot from LUKS
(the ecb module is missing from the initramdisk) ...

Upstream commit is 12cb3a1c4184f891d965d1f39f8cfcc9ef617647

Thanks,
Milan

On 02/23/2017 08:38 AM, Milan Broz wrote:
> Since the
>commit f1c131b45410a202eb45cc55980a7a9e4e4b4f40
>crypto: xts - Convert to skcipher
> the XTS mode is based on ECB, so the mode must select
> ECB, otherwise it can fail to initialize.
> 
> Signed-off-by: Milan Broz 
> ---
>  crypto/Kconfig | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/crypto/Kconfig b/crypto/Kconfig
> index 160f08e721cc..9c245eb0dd83 100644
> --- a/crypto/Kconfig
> +++ b/crypto/Kconfig
> @@ -374,6 +374,7 @@ config CRYPTO_XTS
>   select CRYPTO_BLKCIPHER
>   select CRYPTO_MANAGER
>   select CRYPTO_GF128MUL
> + select CRYPTO_ECB
>   help
> XTS: IEEE1619/D16 narrow block cipher use with aes-xts-plain,
> key size 256, 384 or 512 bits. This implementation currently
> 


Re: [PATCH V2 0/3] Series short description

2017-03-02 Thread Gary R Hook

On 03/02/2017 03:26 PM, Hook, Gary wrote:

The following series:
- Move verbose init messages to debug mode
- Update the queue pointers in the event of an error
- Simplify buffer management and eliminate an unused option


*sigh* That Subject line is supposed to read "Minor CCP improvements and 
clean-up".

---

Gary R Hook (3):
  crypto: ccp - Add SHA-2 384- and 512-bit support
  crypto: ccp - Enable support for AES GCM on v5 CCPs
  crypto: ccp - Enable 3DES function on v5 CCPs


 drivers/crypto/ccp/Makefile|2
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 ++
 drivers/crypto/ccp/ccp-crypto-des3.c   |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c   |   22 +
 drivers/crypto/ccp/ccp-crypto-sha.c|   22 +
 drivers/crypto/ccp/ccp-crypto.h|   44 ++
 drivers/crypto/ccp/ccp-dev-v3.c|1
 drivers/crypto/ccp/ccp-dev-v5.c|   56 +++
 drivers/crypto/ccp/ccp-dev.h   |   15 +
 drivers/crypto/ccp/ccp-ops.c   |  522 
 include/linux/ccp.h|   68 
 11 files changed, 1257 insertions(+), 6 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c

--
Signature


--
This is my day job. Follow me at:
IG/Twitter/Facebook: @grhookphoto
IG/Twitter/Facebook: @grhphotographer


[PATCH V2 0/3] Series short description

2017-03-02 Thread Gary R Hook
The following series:
- Move verbose init messages to debug mode
- Update the queue pointers in the event of an error
- Simplify buffer management and eliminate an unused option

---

Gary R Hook (3):
  crypto: ccp - Add SHA-2 384- and 512-bit support
  crypto: ccp - Enable support for AES GCM on v5 CCPs
  crypto: ccp - Enable 3DES function on v5 CCPs


 drivers/crypto/ccp/Makefile|2 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 ++
 drivers/crypto/ccp/ccp-crypto-des3.c   |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c   |   22 +
 drivers/crypto/ccp/ccp-crypto-sha.c|   22 +
 drivers/crypto/ccp/ccp-crypto.h|   44 ++
 drivers/crypto/ccp/ccp-dev-v3.c|1 
 drivers/crypto/ccp/ccp-dev-v5.c|   56 +++
 drivers/crypto/ccp/ccp-dev.h   |   15 +
 drivers/crypto/ccp/ccp-ops.c   |  522 
 include/linux/ccp.h|   68 
 11 files changed, 1257 insertions(+), 6 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c

--
Signature


[PATCH V2 3/3] crypto: ccp - Enable 3DES function on v5 CCPs

2017-03-02 Thread Gary R Hook
Wire up support for Triple DES in ECB mode.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/ccp-crypto-des3.c |  254 ++
 drivers/crypto/ccp/ccp-crypto-main.c |   10 +
 drivers/crypto/ccp/ccp-crypto.h  |   22 +++
 drivers/crypto/ccp/ccp-dev-v3.c  |1 
 drivers/crypto/ccp/ccp-dev-v5.c  |   54 +++
 drivers/crypto/ccp/ccp-dev.h |   14 ++
 drivers/crypto/ccp/ccp-ops.c |  198 +++
 include/linux/ccp.h  |   57 +++-
 9 files changed, 608 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-des3.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 9ca1722..60919a3 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -13,4 +13,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
   ccp-crypto-aes-galois.o \
+  ccp-crypto-des3.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-des3.c 
b/drivers/crypto/ccp/ccp-crypto-des3.c
new file mode 100644
index 0000000..5af7347
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-des3.c
@@ -0,0 +1,254 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) DES3 crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+static int ccp_des3_complete(struct crypto_async_request *async_req, int ret)
+{
+   struct ablkcipher_request *req = ablkcipher_request_cast(async_req);
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+
+   if (ret)
+   return ret;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB)
+   memcpy(req->info, rctx->iv, DES3_EDE_BLOCK_SIZE);
+
+   return 0;
+}
+
+static int ccp_des3_setkey(struct crypto_ablkcipher *tfm, const u8 *key,
+   unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(crypto_ablkcipher_tfm(tfm));
+   struct ccp_crypto_ablkcipher_alg *alg =
+   ccp_crypto_ablkcipher_alg(crypto_ablkcipher_tfm(tfm));
+   u32 *flags = &tfm->base.crt_flags;
+
+
+   /* From des_generic.c:
+*
+* RFC2451:
+*   If the first two or last two independent 64-bit keys are
+*   equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
+*   same as DES.  Implementers MUST reject keys that exhibit this
+*   property.
+*/
+   const u32 *K = (const u32 *)key;
+
+   if (unlikely(!((K[0] ^ K[2]) | (K[1] ^ K[3])) ||
+!((K[2] ^ K[4]) | (K[3] ^ K[5]))) &&
+(*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
+   *flags |= CRYPTO_TFM_RES_WEAK_KEY;
+   return -EINVAL;
+   }
+
+   /* It's not clear that there is any support for a keysize of 112.
+* If needed, the caller should make K1 == K3
+*/
+   ctx->u.des3.type = CCP_DES3_TYPE_168;
+   ctx->u.des3.mode = alg->mode;
+   ctx->u.des3.key_len = key_len;
+
+   memcpy(ctx->u.des3.key, key, key_len);
+   sg_init_one(&ctx->u.des3.key_sg, ctx->u.des3.key, key_len);
+
+   return 0;
+}
+
+static int ccp_des3_crypt(struct ablkcipher_request *req, bool encrypt)
+{
+   struct ccp_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
+   struct ccp_des3_req_ctx *rctx = ablkcipher_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int ret;
+
+   if (!ctx->u.des3.key_len)
+   return -EINVAL;
+
+   if (((ctx->u.des3.mode == CCP_DES3_MODE_ECB) ||
+(ctx->u.des3.mode == CCP_DES3_MODE_CBC)) &&
+   (req->nbytes & (DES3_EDE_BLOCK_SIZE - 1)))
+   return -EINVAL;
+
+   if (ctx->u.des3.mode != CCP_DES3_MODE_ECB) {
+   if (!req->info)
+   return -EINVAL;
+
+   memcpy(rctx->iv, req->info, DES3_EDE_BLOCK_SIZE);
+   iv_sg = &rctx->iv_sg;
+   iv_len = DES3_EDE_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+   }
+
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_DES3;
+   rctx->cmd.u.des3.type = ctx->u.des3.type;
+   rctx->cmd.u.des3.mode = ctx->u.des3.mode;
+   rctx->cmd.u.des3.action = (encrypt)
+ ? CCP_DES3_ACTION_ENCRYPT
+ : CCP_DES3_ACTION_DECRYPT;
+   rctx->cm

[PATCH V2 1/3] crypto: ccp - Add SHA-2 384- and 512-bit support

2017-03-02 Thread Gary R Hook
Incorporate 384-bit and 512-bit hashing for a version 5 CCP
device.


Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/ccp-crypto-sha.c |   22 +++
 drivers/crypto/ccp/ccp-crypto.h |8 ++--
 drivers/crypto/ccp/ccp-ops.c|   72 +++
 include/linux/ccp.h |2 +
 4 files changed, 101 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c 
b/drivers/crypto/ccp/ccp-crypto-sha.c
index 84a652b..6b46eea 100644
--- a/drivers/crypto/ccp/ccp-crypto-sha.c
+++ b/drivers/crypto/ccp/ccp-crypto-sha.c
@@ -146,6 +146,12 @@ static int ccp_do_sha_update(struct ahash_request *req, 
unsigned int nbytes,
case CCP_SHA_TYPE_256:
rctx->cmd.u.sha.ctx_len = SHA256_DIGEST_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   rctx->cmd.u.sha.ctx_len = SHA384_DIGEST_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   rctx->cmd.u.sha.ctx_len = SHA512_DIGEST_SIZE;
+   break;
default:
/* Should never get here */
break;
@@ -393,6 +399,22 @@ struct ccp_sha_def {
.digest_size= SHA256_DIGEST_SIZE,
.block_size = SHA256_BLOCK_SIZE,
},
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha384",
+   .drv_name   = "sha384-ccp",
+   .type   = CCP_SHA_TYPE_384,
+   .digest_size= SHA384_DIGEST_SIZE,
+   .block_size = SHA384_BLOCK_SIZE,
+   },
+   {
+   .version= CCP_VERSION(5, 0),
+   .name   = "sha512",
+   .drv_name   = "sha512-ccp",
+   .type   = CCP_SHA_TYPE_512,
+   .digest_size= SHA512_DIGEST_SIZE,
+   .block_size = SHA512_BLOCK_SIZE,
+   },
 };
 
 static int ccp_register_hmac_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index 8335b32..95cce27 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -137,9 +137,11 @@ struct ccp_aes_cmac_exp_ctx {
u8 buf[AES_BLOCK_SIZE];
 };
 
-/* SHA related defines */
-#define MAX_SHA_CONTEXT_SIZE   SHA256_DIGEST_SIZE
-#define MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE
+/* SHA-related defines
+ * These values must be large enough to accommodate any variant
+ */
+#define MAX_SHA_CONTEXT_SIZE   SHA512_DIGEST_SIZE
+#define MAX_SHA_BLOCK_SIZE SHA512_BLOCK_SIZE
 
 struct ccp_sha_ctx {
struct scatterlist opad_sg;
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index efac3d5..213a752 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -41,6 +41,20 @@
cpu_to_be32(SHA256_H6), cpu_to_be32(SHA256_H7),
 };
 
+static const __be64 ccp_sha384_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA384_H0), cpu_to_be64(SHA384_H1),
+   cpu_to_be64(SHA384_H2), cpu_to_be64(SHA384_H3),
+   cpu_to_be64(SHA384_H4), cpu_to_be64(SHA384_H5),
+   cpu_to_be64(SHA384_H6), cpu_to_be64(SHA384_H7),
+};
+
+static const __be64 ccp_sha512_init[SHA512_DIGEST_SIZE / sizeof(__be64)] = {
+   cpu_to_be64(SHA512_H0), cpu_to_be64(SHA512_H1),
+   cpu_to_be64(SHA512_H2), cpu_to_be64(SHA512_H3),
+   cpu_to_be64(SHA512_H4), cpu_to_be64(SHA512_H5),
+   cpu_to_be64(SHA512_H6), cpu_to_be64(SHA512_H7),
+};
+
 #define    CCP_NEW_JOBID(ccp)  ((ccp->vdata->version == CCP_VERSION(3, 0)) ? \
                                 ccp_gen_jobid(ccp) : 0)
 
@@ -947,6 +961,18 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
return -EINVAL;
block_size = SHA256_BLOCK_SIZE;
break;
+   case CCP_SHA_TYPE_384:
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(4, 0)
+   || sha->ctx_len < SHA384_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA384_BLOCK_SIZE;
+   break;
+   case CCP_SHA_TYPE_512:
+   if (cmd_q->ccp->vdata->version < CCP_VERSION(4, 0)
+   || sha->ctx_len < SHA512_DIGEST_SIZE)
+   return -EINVAL;
+   block_size = SHA512_BLOCK_SIZE;
+   break;
default:
return -EINVAL;
}
@@ -1034,6 +1060,21 @@ static int ccp_run_sha_cmd(struct ccp_cmd_queue *cmd_q, 
struct ccp_cmd *cmd)
sb_count = 1;
ooffset = ioffset = 0;
break;
+   case CCP_SHA_TYPE_384:
+   digest_size = SHA384_DIGEST_SIZE;
+   init = (void *) ccp_sha384_init;
+   ctx_size = SHA512_DIGEST_SIZE;
+   sb_count = 2;
+   ioffset = 0;
+   ooffset = 2 * CCP_SB_BYTES - SHA384_DIGEST_SIZE;
+   break;

[PATCH V2 2/3] crypto: ccp - Enable support for AES GCM on v5 CCPs

2017-03-02 Thread Gary R Hook
A version 5 device provides the primitive commands
required for AES GCM. This patch adds support for
en/decryption.

Signed-off-by: Gary R Hook 
---
 drivers/crypto/ccp/Makefile|1 
 drivers/crypto/ccp/ccp-crypto-aes-galois.c |  257 
 drivers/crypto/ccp/ccp-crypto-main.c   |   12 +
 drivers/crypto/ccp/ccp-crypto.h|   14 ++
 drivers/crypto/ccp/ccp-dev-v5.c|2 
 drivers/crypto/ccp/ccp-dev.h   |1 
 drivers/crypto/ccp/ccp-ops.c   |  252 +++
 include/linux/ccp.h|9 +
 8 files changed, 548 insertions(+)
 create mode 100644 drivers/crypto/ccp/ccp-crypto-aes-galois.c

diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..9ca1722 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -12,4 +12,5 @@ ccp-crypto-objs := ccp-crypto-main.o \
   ccp-crypto-aes.o \
   ccp-crypto-aes-cmac.o \
   ccp-crypto-aes-xts.o \
+  ccp-crypto-aes-galois.o \
   ccp-crypto-sha.o
diff --git a/drivers/crypto/ccp/ccp-crypto-aes-galois.c 
b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
new file mode 100644
index 0000000..8bc18c9
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-crypto-aes-galois.c
@@ -0,0 +1,257 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) AES GCM crypto API support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ccp-crypto.h"
+
+#define    AES_GCM_IVSIZE  12
+
+static int ccp_aes_gcm_complete(struct crypto_async_request *async_req, int 
ret)
+{
+   return ret;
+}
+
+static int ccp_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
+ unsigned int key_len)
+{
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+
+   switch (key_len) {
+   case AES_KEYSIZE_128:
+   ctx->u.aes.type = CCP_AES_TYPE_128;
+   break;
+   case AES_KEYSIZE_192:
+   ctx->u.aes.type = CCP_AES_TYPE_192;
+   break;
+   case AES_KEYSIZE_256:
+   ctx->u.aes.type = CCP_AES_TYPE_256;
+   break;
+   default:
+   crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+   return -EINVAL;
+   }
+
+   ctx->u.aes.mode = CCP_AES_MODE_GCM;
+   ctx->u.aes.key_len = key_len;
+
+   memcpy(ctx->u.aes.key, key, key_len);
+   sg_init_one(&ctx->u.aes.key_sg, ctx->u.aes.key, key_len);
+
+   return 0;
+}
+
+static int ccp_aes_gcm_setauthsize(struct crypto_aead *tfm,
+  unsigned int authsize)
+{
+   return 0;
+}
+
+static int ccp_aes_gcm_crypt(struct aead_request *req, bool encrypt)
+{
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct ccp_ctx *ctx = crypto_aead_ctx(tfm);
+   struct ccp_aes_req_ctx *rctx = aead_request_ctx(req);
+   struct scatterlist *iv_sg = NULL;
+   unsigned int iv_len = 0;
+   int i;
+   int ret = 0;
+
+   if (!ctx->u.aes.key_len)
+   return -EINVAL;
+
+   if (ctx->u.aes.mode != CCP_AES_MODE_GCM)
+   return -EINVAL;
+
+   if (!req->iv)
+   return -EINVAL;
+
+   /*
+* 5 parts:
+*   plaintext/ciphertext input
+*   AAD
+*   key
+*   IV
+*   Destination+tag buffer
+*/
+
+   /* According to the way AES GCM has been implemented here,
+* per RFC 4106 it seems, the provided IV is fixed at 12 bytes,
+* occupies the beginning of the IV array. Write a 32-bit
+* integer after that (bytes 13-16) with a value of "1".
+*/
+   memcpy(rctx->iv, req->iv, AES_GCM_IVSIZE);
+   for (i = 0; i < 3; i++)
+   rctx->iv[i + AES_GCM_IVSIZE] = 0;
+   rctx->iv[AES_BLOCK_SIZE - 1] = 1;
+
+   /* Set up a scatterlist for the IV */
+   iv_sg = &rctx->iv_sg;
+   iv_len = AES_BLOCK_SIZE;
+   sg_init_one(iv_sg, rctx->iv, iv_len);
+
+   /* The AAD + plaintext are concatenated in the src buffer */
+   memset(&rctx->cmd, 0, sizeof(rctx->cmd));
+   INIT_LIST_HEAD(&rctx->cmd.entry);
+   rctx->cmd.engine = CCP_ENGINE_AES;
+   rctx->cmd.u.aes.type = ctx->u.aes.type;
+   rctx->cmd.u.aes.mode = ctx->u.aes.mode;
+   rctx->cmd.u.aes.action =
+   (encrypt) ? CCP_AES_ACTION_ENCRYPT : CCP_AES_ACTION_DECRYPT;
+   rctx->cmd.u.aes.key = &ctx->u.aes.key_sg;
+   rctx->cmd.u.aes.key_len = ctx->u.aes.key_len;
+   rctx->cmd.u.aes.iv = iv_sg;
+   rctx->cmd.u.aes.iv_len = iv_len;

[RFC PATCH v2 17/32] x86: kvmclock: Clear encryption attribute when SEV is active

2017-03-02 Thread Brijesh Singh
The guest physical memory areas holding struct pvclock_wall_clock and
struct pvclock_vcpu_time_info are shared with the hypervisor, which
periodically updates their contents. When SEV is active we must clear the
encryption attributes of these shared memory pages so that both the
hypervisor and the guest can access the data.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kernel/kvmclock.c |   65 ++--
 1 file changed, 56 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 278de4f..3b38b3d 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 
+#include 
 #include 
 #include 
 
@@ -44,7 +45,7 @@ early_param("no-kvmclock", parse_no_kvmclock);
 
 /* The hypervisor will put information about time periodically here */
 static struct pvclock_vsyscall_time_info *hv_clock;
-static struct pvclock_wall_clock wall_clock;
+static struct pvclock_wall_clock *wall_clock;
 
 struct pvclock_vsyscall_time_info *pvclock_pvti_cpu0_va(void)
 {
@@ -62,15 +63,18 @@ static void kvm_get_wallclock(struct timespec *now)
int low, high;
int cpu;
 
-   low = (int)__pa_symbol(&wall_clock);
-   high = ((u64)__pa_symbol(&wall_clock) >> 32);
+   if (!wall_clock)
+   return;
+
+   low = (int)slow_virt_to_phys(wall_clock);
+   high = ((u64)slow_virt_to_phys(wall_clock) >> 32);
 
native_write_msr(msr_kvm_wall_clock, low, high);
 
cpu = get_cpu();
 
vcpu_time = &hv_clock[cpu].pvti;
-   pvclock_read_wallclock(&wall_clock, vcpu_time, now);
+   pvclock_read_wallclock(wall_clock, vcpu_time, now);
 
put_cpu();
 }
@@ -246,11 +250,40 @@ static void kvm_shutdown(void)
native_machine_shutdown();
 }
 
+static phys_addr_t kvm_memblock_alloc(phys_addr_t size, phys_addr_t align)
+{
+   phys_addr_t mem;
+
+   mem = memblock_alloc(size, align);
+   if (!mem)
+   return 0;
+
+   /* When SEV is active clear the encryption attributes of the pages */
+   if (sev_active()) {
+   if (early_set_memory_decrypted(__va(mem), size))
+   goto e_free;
+   }
+
+   return mem;
+e_free:
+   memblock_free(mem, size);
+   return 0;
+}
+
+static void kvm_memblock_free(phys_addr_t addr, phys_addr_t size)
+{
+   /* When SEV is active restore the encryption attributes of the pages */
+   if (sev_active())
+   early_set_memory_encrypted(__va(addr), size);
+
+   memblock_free(addr, size);
+}
+
 void __init kvmclock_init(void)
 {
struct pvclock_vcpu_time_info *vcpu_time;
-   unsigned long mem;
-   int size, cpu;
+   unsigned long mem, mem_wall_clock;
+   int size, cpu, wall_clock_size;
u8 flags;
 
size = PAGE_ALIGN(sizeof(struct pvclock_vsyscall_time_info)*NR_CPUS);
@@ -267,15 +300,29 @@ void __init kvmclock_init(void)
printk(KERN_INFO "kvm-clock: Using msrs %x and %x",
msr_kvm_system_time, msr_kvm_wall_clock);
 
-   mem = memblock_alloc(size, PAGE_SIZE);
-   if (!mem)
+   wall_clock_size = PAGE_ALIGN(sizeof(struct pvclock_wall_clock));
+   mem_wall_clock = kvm_memblock_alloc(wall_clock_size, PAGE_SIZE);
+   if (!mem_wall_clock)
return;
+
+   wall_clock = __va(mem_wall_clock);
+   memset(wall_clock, 0, wall_clock_size);
+
+   mem = kvm_memblock_alloc(size, PAGE_SIZE);
+   if (!mem) {
+   kvm_memblock_free(mem_wall_clock, wall_clock_size);
+   wall_clock = NULL;
+   return;
+   }
+
hv_clock = __va(mem);
memset(hv_clock, 0, size);
 
if (kvm_register_clock("primary cpu clock")) {
hv_clock = NULL;
-   memblock_free(mem, size);
+   kvm_memblock_free(mem, size);
+   kvm_memblock_free(mem_wall_clock, wall_clock_size);
+   wall_clock = NULL;
return;
}
 



[RFC PATCH v2 27/32] kvm: svm: Add support for SEV LAUNCH_FINISH command

2017-03-02 Thread Brijesh Singh
The command is used for finalizing the SEV guest launch process.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |   36 
 1 file changed, 36 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 62c2b22..c108064 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5921,6 +5921,38 @@ static int sev_launch_update_data(struct kvm *kvm, 
struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int i, ret;
+   struct sev_data_launch_finish *data;
+   struct kvm_vcpu *vcpu;
+
+   if (!sev_guest(kvm))
+   return -EINVAL;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   /* launch finish */
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_FINISH, data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   /* Iterate through each vcpus and set SEV KVM_SEV_FEATURE bit in
+* KVM_CPUID_FEATURE to indicate that SEV is enabled on this vcpu
+*/
+   kvm_for_each_vcpu(i, vcpu, kvm) {
+   sev_init_vmcb(to_svm(vcpu));
+   svm_cpuid_update(vcpu);
+   }
+
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -5940,6 +5972,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, 
void __user *argp)
r = sev_launch_update_data(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_LAUNCH_FINISH: {
+   r = sev_launch_finish(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



Re: [RFC PATCH v2 19/32] crypto: ccp: Introduce the AMD Secure Processor device

2017-03-02 Thread Brijesh Singh

Hi Mark,

On 03/02/2017 11:39 AM, Mark Rutland wrote:

On Thu, Mar 02, 2017 at 10:16:15AM -0500, Brijesh Singh wrote:

The CCP device is part of the AMD Secure Processor. In order to expand the
usage of the AMD Secure Processor, create a framework that allows functional
components of the AMD Secure Processor to be initialized and handled
appropriately.

Signed-off-by: Brijesh Singh 
Signed-off-by: Tom Lendacky 
---
 drivers/crypto/Kconfig   |   10 +
 drivers/crypto/ccp/Kconfig   |   43 +++--
 drivers/crypto/ccp/Makefile  |8 -
 drivers/crypto/ccp/ccp-dev-v3.c  |   86 +-
 drivers/crypto/ccp/ccp-dev-v5.c  |   73 -
 drivers/crypto/ccp/ccp-dev.c |  137 +---
 drivers/crypto/ccp/ccp-dev.h |   35 
 drivers/crypto/ccp/sp-dev.c  |  308 
 drivers/crypto/ccp/sp-dev.h  |  140 
 drivers/crypto/ccp/sp-pci.c  |  324 ++
 drivers/crypto/ccp/sp-platform.c |  268 +++
 include/linux/ccp.h  |3
 12 files changed, 1240 insertions(+), 195 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h
 create mode 100644 drivers/crypto/ccp/sp-pci.c
 create mode 100644 drivers/crypto/ccp/sp-platform.c



diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..8127e18 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,11 +1,11 @@
-obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o \
+obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
+ccp-objs := sp-dev.o sp-platform.o
+ccp-$(CONFIG_PCI) += sp-pci.o
+ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-ops.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
-   ccp-platform.o \
ccp-dmaengine.o


It looks like ccp-platform.c has morphed into sp-platform.c (judging by
the compatible string and general shape of the code), and the original
ccp-platform.c is no longer built.

Shouldn't ccp-platform.c be deleted by this patch?



Good catch. Both ccp-platform.c and ccp-pci.c should have been deleted
by this patch. I missed deleting them; will fix in the next rev.


~ Brijesh


[RFC PATCH v2 00/32] x86: Secure Encrypted Virtualization (AMD)

2017-03-02 Thread Brijesh Singh
This RFC series provides support for AMD's new Secure Encrypted Virtualization
(SEV) feature. This RFC is built upon the Secure Memory Encryption (SME) RFCv4 [1].

SEV is an extension to the AMD-V architecture which supports running multiple
VMs under the control of a hypervisor. When enabled, SEV hardware tags all
code and data with its VM ASID which indicates which VM the data originated
from or is intended for. This tag is kept with the data at all times when
inside the SOC, and prevents that data from being used by anyone other than the
owner. While the tag protects VM data inside the SOC, AES with 128-bit
encryption protects data outside the SOC. When data leaves or enters the SOC,
it is encrypted or decrypted, respectively, by hardware with a key based on
the associated tag.

SEV guest VMs have the concept of private and shared memory. Private memory is
encrypted with the guest-specific key, while shared memory may be encrypted
with the hypervisor key. Certain types of memory (namely instruction pages and
guest page tables) are always treated as private memory by the hardware.
For data memory, SEV guest VMs can choose which pages they would like to be
private. The choice is made in the standard CPU page tables using the C-bit,
and is fully controlled by the guest. For security reasons, all DMA operations
inside the guest must be performed on shared pages (C-bit clear).
Note that since the C-bit is only controllable by the guest OS when it is
operating in 64-bit or 32-bit PAE mode, in all other modes the SEV hardware
forces the C-bit to 1.
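
To make the page-table side concrete, a small illustrative sketch (the C-bit
position is not fixed in the architecture; it is reported by CPUID
Fn8000_001F[EBX][5:0], and these helpers are not from the patch set):

    #include <stdint.h>

    /* Build the C-bit mask from the CPUID-reported bit position. */
    static inline uint64_t sev_c_bit(unsigned int pos)
    {
            return 1ULL << pos;
    }

    /* C=1: the page is private (encrypted with the guest key). */
    static inline uint64_t pte_mk_private(uint64_t pte, unsigned int pos)
    {
            return pte | sev_c_bit(pos);
    }

    /* C=0: the page is shared (e.g. DMA buffers, pvclock pages). */
    static inline uint64_t pte_mk_shared(uint64_t pte, unsigned int pos)
    {
            return pte & ~sev_c_bit(pos);
    }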

SEV is designed to protect guest VMs from a benign but vulnerable (i.e. not
fully malicious) hypervisor. In particular, it reduces the attack surface of
guest VMs and can prevent certain types of VM-escape bugs (e.g. hypervisor
read-anywhere) from being used to steal guest data.

The RFC series also expands the crypto driver (ccp.ko) to include support for
the Platform Security Processor (PSP), which is used for communicating with the
SEV firmware that runs within the AMD secure processor and provides secure key
management interfaces. The hypervisor uses this interface to encrypt the
bootstrap code and perform common activities such as launching, running,
snapshotting, migrating and debugging an encrypted guest.

A new ioctl (KVM_MEMORY_ENCRYPT_OP) is introduced which can be used by Qemu to
issue SEV guest life cycle commands.

The RFC series also includes the patches required in the guest OS to enable the
SEV feature. A guest OS can check for SEV support via the KVM_FEATURE CPUID leaf.

The patch breakdown:
* [1 - 17]: guest OS specific changes when SEV is active
* [18]: already queued in the kvm upstream tree but not yet in the tip tree;
  included here so that the build does not fail
* [19 - 21]: since the CCP and PSP share the same PCIe ID, these patches expand
  the CCP driver by creating a high-level AMD Secure Processor (SP) framework
  to allow integration of the PSP device into ccp.ko.
* [22 - 32]: hypervisor changes to support memory encryption

The following links provide additional details:

AMD Memory Encryption whitepaper:
http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/AMD_Memory_Encryption_Whitepaper_v7-Public.pdf

AMD64 Architecture Programmer's Manual:
http://support.amd.com/TechDocs/24593.pdf
SME is section 7.10
SEV is section 15.34

Secure Encrypted Virtualization Key Management:
http://support.amd.com/TechDocs/55766_SEV-KM API_Specification.pdf

KVM Forum Presentation:
http://www.linux-kvm.org/images/7/74/02x08A-Thomas_Lendacky-AMDs_Virtualizatoin_Memory_Encryption_Technology.pdf

[1] http://marc.info/?l=linux-kernel&m=148725974113693&w=2

---

Based on the feedback, we have started adding SEV guest support to the OVMF
BIOS. This series has been tested using an EDK2/OVMF BIOS; the initial EDK2
patches have been submitted to the edk2 mailing list for discussion.

TODO:
 - add support for migration commands
 - update QEMU RFC's to SEV spec 0.14
 - investigate virtio and vfio support for SEV guest
 - investigate SMM support for SEV guest
 - add support for nested virtualization

Changes since v1:
 - update to newer SEV key management API spec (0.12 -> 0.14)
 - expand the CCP driver and integrate the PSP interface support
 - remove the usage of SEV ref_count and release the SEV FW resources in
   kvm_x86_ops->vm_destroy
 - acquire the kvm->lock before executing the SEV commands and release on exit.
 - rename ioctl from KVM_SEV_ISSUE_CMD to KVM_MEMORY_ENCRYPT_OP
 - extend KVM_MEMORY_ENCRYPT_OP ioctl to require file descriptor for the SEV
   device. A program without access to /dev/sev will not be able to issue SEV
   commands
 - update vmcb on successful LAUNCH_FINISH to indicate that SEV is active
 - several fixes based on Paolo's review feedback
 - add APIs to support sharing the guest physical address with hypervisor
 - update kvm pvclock driver to use the shared buffer when SEV is active
 - pin the SEV guest memory

Brijesh Singh (18):
  x86: mm: Provide support to use memblock when splitting large pages

[RFC PATCH v2 15/32] x86: Add support for changing memory encryption attribute in early boot

2017-03-02 Thread Brijesh Singh
Some KVM-specific custom MSRs share a guest physical address with the
hypervisor. When SEV is active, the shared physical address must be mapped
with the encryption attribute cleared so that both hypervisor and guest can
access the data.

Add APIs to change memory encryption attribute in early boot code.

Signed-off-by: Brijesh Singh 
---
 arch/x86/include/asm/mem_encrypt.h |   15 +
 arch/x86/mm/mem_encrypt.c  |   63 
 2 files changed, 78 insertions(+)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 9799835..95bbe4c 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -47,6 +47,9 @@ void __init sme_unmap_bootdata(char *real_mode_data);
 
 void __init sme_early_init(void);
 
+int __init early_set_memory_decrypted(void *addr, unsigned long size);
+int __init early_set_memory_encrypted(void *addr, unsigned long size);
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void);
 
@@ -110,6 +113,18 @@ static inline void __init sme_early_init(void)
 {
 }
 
+static inline int __init early_set_memory_decrypted(void *addr,
+   unsigned long size)
+{
+   return 1;
+}
+
+static inline int __init early_set_memory_encrypted(void *addr,
+   unsigned long size)
+{
+   return 1;
+}
+
 #define __sme_pa   __pa
 #define __sme_pa_nodebug   __pa_nodebug
 
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 7df5f4c..567e0d8 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -258,6 +259,68 @@ static void sme_free(struct device *dev, size_t size, void 
*vaddr,
swiotlb_free_coherent(dev, size, vaddr, dma_handle);
 }
 
+static unsigned long __init get_pte_flags(unsigned long address)
+{
+   int level;
+   pte_t *pte;
+   unsigned long flags = _KERNPG_TABLE_NOENC | _PAGE_ENC;
+
+   pte = lookup_address(address, &level);
+   if (!pte)
+   return flags;
+
+   switch (level) {
+   case PG_LEVEL_4K:
+   flags = pte_flags(*pte);
+   break;
+   case PG_LEVEL_2M:
+   flags = pmd_flags(*(pmd_t *)pte);
+   break;
+   case PG_LEVEL_1G:
+   flags = pud_flags(*(pud_t *)pte);
+   break;
+   default:
+   break;
+   }
+
+   return flags;
+}
+
+int __init early_set_memory_enc_dec(void *vaddr, unsigned long size,
+   unsigned long flags)
+{
+   unsigned long pfn, npages;
+   unsigned long addr = (unsigned long)vaddr & PAGE_MASK;
+
+   /* We are going to change the physical page attribute from C=1 to C=0.
+* Flush the caches to ensure that all the data with C=1 is flushed to
+* memory. Any caching of the vaddr after function returns will
+* use C=0.
+*/
+   clflush_cache_range(vaddr, size);
+
+   npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+   pfn = slow_virt_to_phys((void *)addr) >> PAGE_SHIFT;
+
+   return kernel_map_pages_in_pgd(init_mm.pgd, pfn, addr, npages,
+   flags & ~sme_me_mask);
+
+}
+
+int __init early_set_memory_decrypted(void *vaddr, unsigned long size)
+{
+   unsigned long flags = get_pte_flags((unsigned long)vaddr);
+
+   return early_set_memory_enc_dec(vaddr, size, flags & ~sme_me_mask);
+}
+
+int __init early_set_memory_encrypted(void *vaddr, unsigned long size)
+{
+   unsigned long flags = get_pte_flags((unsigned long)vaddr);
+
+   return early_set_memory_enc_dec(vaddr, size, flags | _PAGE_ENC);
+}
+
 static struct dma_map_ops sme_dma_ops = {
.alloc  = sme_alloc,
.free   = sme_free,



Re: [RFC PATCH v2 19/32] crypto: ccp: Introduce the AMD Secure Processor device

2017-03-02 Thread Mark Rutland
On Thu, Mar 02, 2017 at 10:16:15AM -0500, Brijesh Singh wrote:
> The CCP device is part of the AMD Secure Processor. In order to expand the
> usage of the AMD Secure Processor, create a framework that allows functional
> components of the AMD Secure Processor to be initialized and handled
> appropriately.
> 
> Signed-off-by: Brijesh Singh 
> Signed-off-by: Tom Lendacky 
> ---
>  drivers/crypto/Kconfig   |   10 +
>  drivers/crypto/ccp/Kconfig   |   43 +++--
>  drivers/crypto/ccp/Makefile  |8 -
>  drivers/crypto/ccp/ccp-dev-v3.c  |   86 +-
>  drivers/crypto/ccp/ccp-dev-v5.c  |   73 -
>  drivers/crypto/ccp/ccp-dev.c |  137 +---
>  drivers/crypto/ccp/ccp-dev.h |   35 
>  drivers/crypto/ccp/sp-dev.c  |  308 
>  drivers/crypto/ccp/sp-dev.h  |  140 
>  drivers/crypto/ccp/sp-pci.c  |  324 ++
>  drivers/crypto/ccp/sp-platform.c |  268 +++
>  include/linux/ccp.h  |3 
>  12 files changed, 1240 insertions(+), 195 deletions(-)
>  create mode 100644 drivers/crypto/ccp/sp-dev.c
>  create mode 100644 drivers/crypto/ccp/sp-dev.h
>  create mode 100644 drivers/crypto/ccp/sp-pci.c
>  create mode 100644 drivers/crypto/ccp/sp-platform.c

> diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
> index 346ceb8..8127e18 100644
> --- a/drivers/crypto/ccp/Makefile
> +++ b/drivers/crypto/ccp/Makefile
> @@ -1,11 +1,11 @@
> -obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
> -ccp-objs := ccp-dev.o \
> +obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
> +ccp-objs := sp-dev.o sp-platform.o
> +ccp-$(CONFIG_PCI) += sp-pci.o
> +ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
>   ccp-ops.o \
>   ccp-dev-v3.o \
>   ccp-dev-v5.o \
> - ccp-platform.o \
>   ccp-dmaengine.o

It looks like ccp-platform.c has morphed into sp-platform.c (judging by
the compatible string and general shape of the code), and the original
ccp-platform.c is no longer built.

Shouldn't ccp-platform.c be deleted by this patch?

Thanks,
Mark.


[RFC PATCH v2 10/32] x86: DMA support for SEV memory encryption

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

DMA access to memory mapped as encrypted while SEV is active cannot be
encrypted during device write or decrypted during device read. In order
for DMA to work properly when SEV is active, the swiotlb bounce buffers
must be used.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/mem_encrypt.c |   77 +
 1 file changed, 77 insertions(+)

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 090419b..7df5f4c 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -197,8 +197,81 @@ void __init sme_early_init(void)
/* Update the protection map with memory encryption mask */
for (i = 0; i < ARRAY_SIZE(protection_map); i++)
protection_map[i] = pgprot_encrypted(protection_map[i]);
+
+   if (sev_active())
+   swiotlb_force = SWIOTLB_FORCE;
+}
+
+static void *sme_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+  gfp_t gfp, unsigned long attrs)
+{
+   unsigned long dma_mask;
+   unsigned int order;
+   struct page *page;
+   void *vaddr = NULL;
+
+   dma_mask = dma_alloc_coherent_mask(dev, gfp);
+   order = get_order(size);
+
+   gfp &= ~__GFP_ZERO;
+
+   page = alloc_pages_node(dev_to_node(dev), gfp, order);
+   if (page) {
+   dma_addr_t addr;
+
+   /*
+* Since we will be clearing the encryption bit, check the
+* mask with it already cleared.
+*/
+   addr = phys_to_dma(dev, page_to_phys(page)) & ~sme_me_mask;
+   if ((addr + size) > dma_mask) {
+   __free_pages(page, get_order(size));
+   } else {
+   vaddr = page_address(page);
+   *dma_handle = addr;
+   }
+   }
+
+   if (!vaddr)
+   vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
+
+   if (!vaddr)
+   return NULL;
+
+   /* Clear the SME encryption bit for DMA use if not swiotlb area */
+   if (!is_swiotlb_buffer(dma_to_phys(dev, *dma_handle))) {
+   set_memory_decrypted((unsigned long)vaddr, 1 << order);
+   *dma_handle &= ~sme_me_mask;
+   }
+
+   return vaddr;
 }
 
+static void sme_free(struct device *dev, size_t size, void *vaddr,
+dma_addr_t dma_handle, unsigned long attrs)
+{
+   /* Set the SME encryption bit for re-use if not swiotlb area */
+   if (!is_swiotlb_buffer(dma_to_phys(dev, dma_handle)))
+   set_memory_encrypted((unsigned long)vaddr,
+1 << get_order(size));
+
+   swiotlb_free_coherent(dev, size, vaddr, dma_handle);
+}
+
+static struct dma_map_ops sme_dma_ops = {
+   .alloc  = sme_alloc,
+   .free   = sme_free,
+   .map_page   = swiotlb_map_page,
+   .unmap_page = swiotlb_unmap_page,
+   .map_sg = swiotlb_map_sg_attrs,
+   .unmap_sg   = swiotlb_unmap_sg_attrs,
+   .sync_single_for_cpu= swiotlb_sync_single_for_cpu,
+   .sync_single_for_device = swiotlb_sync_single_for_device,
+   .sync_sg_for_cpu= swiotlb_sync_sg_for_cpu,
+   .sync_sg_for_device = swiotlb_sync_sg_for_device,
+   .mapping_error  = swiotlb_dma_mapping_error,
+};
+
 /* Architecture __weak replacement functions */
 void __init mem_encrypt_init(void)
 {
@@ -208,6 +281,10 @@ void __init mem_encrypt_init(void)
/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
swiotlb_update_mem_attributes();
 
+   /* Use SEV DMA operations if SEV is active */
+   if (sev_active())
+   dma_ops = &sme_dma_ops;
+
pr_info("AMD Secure Memory Encryption (SME) active\n");
 }
 



[RFC PATCH v2 14/32] x86: mm: Provide support to use memblock when splitting large pages

2017-03-02 Thread Brijesh Singh
If kernel_map_pages_in_pgd() is called early in the boot process to change
memory attributes, it fails to allocate memory when splitting large pages.
This patch extends cpa_data to support using memblock_alloc() while the slab
allocator is not yet available.

The feature will be used in Secure Encrypted Virtualization (SEV) mode, where
we may need to change memory region attributes early in the boot process.

Signed-off-by: Brijesh Singh 
---
 arch/x86/mm/pageattr.c |   51 
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 46cc89d..9e4ab3b 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -37,6 +38,7 @@ struct cpa_data {
int flags;
unsigned long   pfn;
unsignedforce_split : 1;
+   unsignedforce_memblock :1;
int curpage;
struct page **pages;
 };
@@ -627,9 +629,8 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 
 static int
 __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
-  struct page *base)
+ pte_t *pbase, unsigned long new_pfn)
 {
-   pte_t *pbase = (pte_t *)page_address(base);
unsigned long ref_pfn, pfn, pfninc = 1;
unsigned int i, level;
pte_t *tmp;
@@ -646,7 +647,7 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
return 1;
}
 
-   paravirt_alloc_pte(&init_mm, page_to_pfn(base));
+   paravirt_alloc_pte(&init_mm, new_pfn);
 
switch (level) {
case PG_LEVEL_2M:
@@ -707,7 +708,8 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
 * pagetable protections, the actual ptes set above control the
 * primary protection behavior:
 */
-   __set_pmd_pte(kpte, address, mk_pte(base, __pgprot(_KERNPG_TABLE)));
+   __set_pmd_pte(kpte, address,
+   native_make_pte((new_pfn << PAGE_SHIFT) + _KERNPG_TABLE));
 
/*
 * Intel Atom errata AAH41 workaround.
@@ -723,21 +725,50 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, 
unsigned long address,
return 0;
 }
 
+static pte_t *try_alloc_pte(struct cpa_data *cpa, unsigned long *pfn)
+{
+   unsigned long phys;
+   struct page *base;
+
+   if (cpa->force_memblock) {
+   phys = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+   if (!phys)
+   return NULL;
+   *pfn = phys >> PAGE_SHIFT;
+   return (pte_t *)__va(phys);
+   }
+
+   base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
+   if (!base)
+   return NULL;
+   *pfn = page_to_pfn(base);
+   return (pte_t *)page_address(base);
+}
+
+static void try_free_pte(struct cpa_data *cpa, pte_t *pte)
+{
+   if (cpa->force_memblock)
+   memblock_free(__pa(pte), PAGE_SIZE);
+   else
+   __free_page((struct page *)pte);
+}
+
 static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
unsigned long address)
 {
-   struct page *base;
+   pte_t *new_pte;
+   unsigned long new_pfn;
 
if (!debug_pagealloc_enabled())
spin_unlock(&cpa_lock);
-   base = alloc_pages(GFP_KERNEL | __GFP_NOTRACK, 0);
+   new_pte = try_alloc_pte(cpa, &new_pfn);
if (!debug_pagealloc_enabled())
spin_lock(&cpa_lock);
-   if (!base)
+   if (!new_pte)
return -ENOMEM;
 
-   if (__split_large_page(cpa, kpte, address, base))
-   __free_page(base);
+   if (__split_large_page(cpa, kpte, address, new_pte, new_pfn))
+   try_free_pte(cpa, new_pte);
 
return 0;
 }
@@ -2035,6 +2066,7 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned 
long address,
unsigned numpages, unsigned long page_flags)
 {
int retval = -EINVAL;
+   int use_memblock = !slab_is_available();
 
struct cpa_data cpa = {
.vaddr = &address,
@@ -2044,6 +2076,7 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned 
long address,
.mask_set = __pgprot(0),
.mask_clr = __pgprot(0),
.flags = 0,
+   .force_memblock = use_memblock,
};
 
if (!(__supported_pte_mask & _PAGE_NX))



[RFC PATCH v2 13/32] KVM: SVM: Enable SEV by setting the SEV_ENABLE CPU feature

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Modify the SVM cpuid update function to indicate if Secure Encrypted
Virtualization (SEV) is active in the guest by setting the SEV KVM CPU
features bit. SEV is active if Secure Memory Encryption is enabled in
the host and the SEV_ENABLE bit of the VMCB is set.

Signed-off-by: Tom Lendacky 
---
 arch/x86/kvm/cpuid.c |4 +++-
 arch/x86/kvm/svm.c   |   18 ++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 1639de8..e0c40a8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -601,7 +601,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
entry->edx = 0;
break;
case 0x80000000:
-   entry->eax = min(entry->eax, 0x8000001a);
+   entry->eax = min(entry->eax, 0x8000001f);
break;
case 0x80000001:
entry->edx &= kvm_cpuid_8000_0001_edx_x86_features;
@@ -634,6 +634,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 
*entry, u32 function,
break;
case 0x8000001d:
break;
+   case 0x8000001f:
+   break;
/*Add support for Centaur's CPUID instruction*/
case 0xC0000000:
/*Just support up to 0xC0000004 now*/
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 75b0645..36d61ff 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -46,6 +46,7 @@
 #include 
 
 #include 
+#include 
 #include "trace.h"
 
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
@@ -5005,10 +5006,27 @@ static void svm_cpuid_update(struct kvm_vcpu *vcpu)
 {
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_cpuid_entry2 *entry;
+   struct vmcb_control_area *ca = &svm->vmcb->control;
+   struct kvm_cpuid_entry2 *features, *sev_info;
 
/* Update nrips enabled cache */
svm->nrips_enabled = !!guest_cpuid_has_nrips(&svm->vcpu);
 
+   /* Check for Secure Encrypted Virtualization support */
+   features = kvm_find_cpuid_entry(vcpu, KVM_CPUID_FEATURES, 0);
+   if (!features)
+   return;
+
+   sev_info = kvm_find_cpuid_entry(vcpu, 0x8000001f, 0);
+   if (!sev_info)
+   return;
+
+   if (ca->nested_ctl & SVM_NESTED_CTL_SEV_ENABLE) {
+   features->eax |= (1 << KVM_FEATURE_SEV);
+   cpuid(0x8000001f, &sev_info->eax, &sev_info->ebx,
+ &sev_info->ecx, &sev_info->edx);
+   }
+
if (!kvm_vcpu_apicv_active(vcpu))
return;
 



[RFC PATCH v2 12/32] x86: Add early boot support when running with SEV active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Early in the boot process, add checks to determine if the kernel is
running with Secure Encrypted Virtualization (SEV) active by issuing
a CPUID instruction.

During early compressed kernel booting, if SEV is active the pagetables are
updated so that data is accessed and decompressed with encryption.

During uncompressed kernel booting, if SEV is active, the memory encryption
mask is set and a flag is set to indicate that SEV is enabled.

Signed-off-by: Tom Lendacky 
---
 arch/x86/boot/compressed/Makefile  |2 +
 arch/x86/boot/compressed/head_64.S |   16 +++
 arch/x86/boot/compressed/mem_encrypt.S |   75 
 arch/x86/include/uapi/asm/hyperv.h |4 ++
 arch/x86/include/uapi/asm/kvm_para.h   |3 +
 arch/x86/kernel/mem_encrypt_init.c |   24 ++
 6 files changed, 124 insertions(+)
 create mode 100644 arch/x86/boot/compressed/mem_encrypt.S

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 44163e8..51f9cd0 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -72,6 +72,8 @@ vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o 
$(obj)/misc.o \
$(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
$(obj)/piggy.o $(obj)/cpuflags.o
 
+vmlinux-objs-$(CONFIG_X86_64) += $(obj)/mem_encrypt.o
+
 vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
 vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
 ifdef CONFIG_X86_64
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index d2ae1f8..625b5380 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -130,6 +130,19 @@ ENTRY(startup_32)
  /*
   * Build early 4G boot pagetable
   */
+   /*
+* If SEV is active set the encryption mask in the page tables. This
+* will insure that when the kernel is copied and decompressed it
+* will be done so encrypted.
+*/
+   call    sev_enabled
+   xorl    %edx, %edx
+   testl   %eax, %eax
+   jz  1f
+   subl    $32, %eax   /* Encryption bit is always above bit 31 */
+   bts %eax, %edx  /* Set encryption mask for page tables */
+1:
+
 /* Initialize Page tables to 0 */
 leal    pgtable(%ebx), %edi
 xorl    %eax, %eax
@@ -140,12 +153,14 @@ ENTRY(startup_32)
 leal    pgtable + 0(%ebx), %edi
 leal    0x1007 (%edi), %eax
 movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
 
 /* Build Level 3 */
 leal    pgtable + 0x1000(%ebx), %edi
 leal    0x1007(%edi), %eax
 movl    $4, %ecx
1:  movl    %eax, 0x00(%edi)
+   addl    %edx, 0x04(%edi)
 addl    $0x1000, %eax
 addl    $8, %edi
 decl    %ecx
@@ -156,6 +171,7 @@ ENTRY(startup_32)
 movl    $0x0183, %eax
 movl    $2048, %ecx
1:  movl    %eax, 0(%edi)
+   addl    %edx, 4(%edi)
 addl    $0x0020, %eax
 addl    $8, %edi
 decl    %ecx
diff --git a/arch/x86/boot/compressed/mem_encrypt.S 
b/arch/x86/boot/compressed/mem_encrypt.S
new file mode 100644
index 0000000..8313c31
--- /dev/null
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -0,0 +1,75 @@
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+   .text
+   .code32
+ENTRY(sev_enabled)
+   xor %eax, %eax
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   push    %ebx
+   push    %ecx
+   push    %edx
+
+   /* Check if running under a hypervisor */
+   movl    $0x40000000, %eax
+   cpuid
+   cmpl    $0x40000001, %eax
+   jb  .Lno_sev
+
+   movl    $0x40000001, %eax
+   cpuid
+   bt  $KVM_FEATURE_SEV, %eax
+   jnc .Lno_sev
+
+   /*
+* Check for memory encryption feature:
+*   CPUID Fn8000_001F[EAX] - Bit 0
+*/
+   movl    $0x8000001f, %eax
+   cpuid
+   bt  $0, %eax
+   jnc .Lno_sev
+
+   /*
+* Get memory encryption information:
+*   CPUID Fn8000_001F[EBX] - Bits 5:0
+* Pagetable bit position used to indicate encryption
+*/
+   movl    %ebx, %eax
+   andl    $0x3f, %eax
+   movl    %eax, sev_enc_bit(%ebp)
+   jmp .Lsev_exit
+
+.Lno_sev:
+   xor %eax, %eax
+
+.Lsev_exit:
+   pop %edx
+   pop %ecx
+   pop %ebx
+
+#endif /* CONFIG_AMD_MEM_ENCRYPT */
+
+   ret
+ENDPROC(sev_enabled)
+
+   .bss
+sev_enc_bit:
+   .word   0
diff --git a/arch/x86/include/uapi/asm/hyperv.h 
b/arch/x86/include/uapi/asm/hyperv.h
index 9b1a918..8278161 100644
---

Re: Problem with RSA test from testmgr

2017-03-02 Thread Tadeusz Struk
On 03/01/2017 10:21 PM, Corentin Labbe wrote:
> I am finishing a patch that makes testmgr test both (padded and unpadded).

Even if you patch the test vectors, there is no guarantee that a user
of the API will always have the plaintext padded.
It can be anything between 1 byte and the key size.
It needs to be the driver that adds padding when required.
See how other implementations handle it.
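
A driver-side approach is typically to left-pad the big-endian input with
zero bytes up to the modulus size before handing it to the hardware. A
minimal sketch (a hypothetical helper, not taken from any in-tree driver;
needs <linux/string.h> and <linux/errno.h>):

    /* Left-pad a big-endian integer buffer to the RSA key size. */
    static int rsa_left_pad(u8 *dst, unsigned int key_size,
                            const u8 *src, unsigned int src_len)
    {
            if (src_len > key_size)
                    return -EINVAL;

            memset(dst, 0, key_size - src_len);
            memcpy(dst + key_size - src_len, src, src_len);
            return 0;
    }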
Thanks,
-- 
Tadeusz


Re: [PATCH v3 2/2] crypto: vmx - Use skcipher for xts fallback

2017-03-02 Thread Herbert Xu
On Wed, Mar 01, 2017 at 11:00:00AM -0300, Paulo Flabiano Smorigo wrote:
> Signed-off-by: Paulo Flabiano Smorigo 

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Problem with RSA test from testmgr

2017-03-02 Thread Tadeusz Struk
Hi Stephan,
On 03/01/2017 10:08 PM, Stephan Müller wrote:
>>  memset(ptextp, 0, 256);
>>  memcpy(ptextp + 64 - 8, ptext_ex, plen);
> I actually have tested that and it did not return the data the kernel 
> implementation would return

It did for me:
Result 64 plen=8
63 1c cd 7b e1 7e e4 de c9 a8 89 a1 74 cb 3c 63 7d 24 ec 83 c3 15 e4 7f 73 05 
34 d1 ec 22 bb 8a 5e 32 39 6d c1 1d 7d 50 3b 9f 7a ad f0 2e 25 53 9f 6e bd 4c 
55 84 0c 9b cf 1a 4b 51 1e 9e 0c 06

Are you sure you are comparing this with the first test vector?
http://lxr.free-electrons.com/source/crypto/testmgr.h#L183

Thanks,
-- 
Tadeusz


[RFC PATCH v2 26/32] kvm: svm: Add support for SEV LAUNCH_UPDATE_DATA command

2017-03-02 Thread Brijesh Singh
The command is used for encrypting the guest memory region using the VM
encryption key (VEK) created during LAUNCH_START.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |  150 
 1 file changed, 150 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b5fa8c0..62c2b22 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -38,6 +38,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -502,6 +504,7 @@ static void sev_deactivate_handle(struct kvm *kvm);
 static void sev_decommission_handle(struct kvm *kvm);
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+#define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
 
 static bool kvm_sev_enabled(void)
 {
@@ -5775,6 +5778,149 @@ static int sev_launch_start(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static struct page **sev_pin_memory(unsigned long uaddr, unsigned long ulen,
+   unsigned long *n)
+{
+   struct page **pages;
+   int first, last;
+   unsigned long npages, pinned;
+
+   /* Get number of pages */
+   first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+   last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+   npages = (last - first + 1);
+
+   pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
+   if (!pages)
+   return NULL;
+
+   /* pin the user virtual address */
+   down_read(&current->mm->mmap_sem);
+   pinned = get_user_pages_fast(uaddr, npages, 1, pages);
+   up_read(&current->mm->mmap_sem);
+   if (pinned != npages) {
+   printk(KERN_ERR "SEV: failed to pin  %ld pages (got %ld)\n",
+   npages, pinned);
+   goto err;
+   }
+
+   *n = npages;
+   return pages;
+err:
+   if (pinned > 0)
+   release_pages(pages, pinned, 0);
+   kfree(pages);
+
+   return NULL;
+}
+
+static void sev_unpin_memory(struct page **pages, unsigned long npages)
+{
+   release_pages(pages, npages, 0);
+   kfree(pages);
+}
+
+static void sev_clflush_pages(struct page *pages[], int num_pages)
+{
+   unsigned long i;
+   uint8_t *page_virtual;
+
+   if (num_pages == 0 || pages == NULL)
+   return;
+
+   for (i = 0; i < num_pages; i++) {
+   page_virtual = kmap_atomic(pages[i]);
+   clflush_cache_range(page_virtual, PAGE_SIZE);
+   kunmap_atomic(page_virtual);
+   }
+}
+
+static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   struct page **inpages;
+   unsigned long uaddr, ulen;
+   int i, len, ret, offset;
+   unsigned long nr_pages;
+   struct kvm_sev_launch_update_data params;
+   struct sev_data_launch_update_data *data;
+
+   if (!sev_guest(kvm))
+   return -EINVAL;
+
+   /* Get the parameters from the user */
+   ret = -EFAULT;
+   if (copy_from_user(&params, (void *)argp->data,
+   sizeof(struct kvm_sev_launch_update_data)))
+   goto err_1;
+
+   uaddr = params.address;
+   ulen = params.length;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   /* pin user pages */
+   inpages = sev_pin_memory(params.address, params.length, &nr_pages);
+   if (!inpages) {
+   ret = -ENOMEM;
+   goto err_2;
+   }
+
+   /* invalidate the cache line for these pages to ensure that DRAM
+* has recent content before calling the SEV commands to perform
+* the encryption.
+*/
+   sev_clflush_pages(inpages, nr_pages);
+
+   /* The array of pages returned by get_user_pages() is page-aligned
+    * memory. Since the user buffer is probably not page-aligned, we need
+    * to calculate the offset within a page for the first update entry.
+    */
+   offset = uaddr & (PAGE_SIZE - 1);
+   len = min_t(size_t, (PAGE_SIZE - offset), ulen);
+   ulen -= len;
+
+   /* update first page -
+    * special care needs to be taken for the first page because we might
+    * be dealing with an offset within the page
+    */
+   data->handle = sev_get_handle(kvm);
+   data->length = len;
+   data->address = __sev_page_pa(inpages[0]) + offset;
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA,
+   data, &argp->error);
+   if (ret)
+   goto err_3;
+
+   /* update remaining pages */
+   for (i = 1; i < nr_pages; i++) {
+
+   len = min_t(size_t, PAGE_SIZE, ulen);
+   ulen -= len;
+   data->length = len;
+   data->address = __sev_page_pa(inpages[i]);
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_DATA,
+   d

[RFC PATCH v2 28/32] kvm: svm: Add support for SEV GUEST_STATUS command

2017-03-02 Thread Brijesh Singh
The command is used for querying the SEV guest status.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |   37 +
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c108064..977aa22 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5953,6 +5953,39 @@ static int sev_launch_finish(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_guest_status(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int ret;
+   struct kvm_sev_guest_status params;
+   struct sev_data_guest_status *data;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&params, (void *) argp->data,
+   sizeof(struct kvm_sev_guest_status)))
+   return -EFAULT;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_GUEST_STATUS, data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   params.policy = data->policy;
+   params.state = data->state;
+
+   if (copy_to_user((void *) argp->data, &params,
+   sizeof(struct kvm_sev_guest_status)))
+   ret = -EFAULT;
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -5976,6 +6009,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, 
void __user *argp)
r = sev_launch_finish(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_GUEST_STATUS: {
+   r = sev_guest_status(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 21/32] crypto: ccp: Add Secure Encrypted Virtualization (SEV) interface support

2017-03-02 Thread Brijesh Singh
The Secure Encrypted Virtualization (SEV) interface allows the memory
contents of a virtual machine (VM) to be transparently encrypted with
a key unique to the guest.

The interface provides:
  - a /dev/sev device and an ioctl (SEV_ISSUE_CMD) to execute the platform
provisioning commands from userspace (a usage sketch follows below).
  - in-kernel APIs to encrypt the guest memory region. The in-kernel APIs
will be used by KVM to bootstrap and debug the SEV guest.

SEV key management spec is available here [1]
[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf
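For illustration, a userspace caller might drive the ioctl roughly like
the sketch below. The structure layout and ioctl number here are
stand-ins only; the authoritative definitions are in the
include/uapi/linux/psp-sev.h added by this patch.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct sev_cmd_sketch {        /* illustrative stand-in, not the real uapi */
	uint32_t cmd;          /* SEV platform command id */
	uint64_t data;         /* userspace pointer to the command buffer */
	uint32_t error;        /* firmware error code on return */
};

/* The ioctl number is also a placeholder for the real SEV_ISSUE_CMD. */
#define SEV_ISSUE_CMD_SKETCH _IOWR('S', 0x0, struct sev_cmd_sketch)

int main(void)
{
	struct sev_cmd_sketch arg = { .cmd = 1 /* placeholder command id */ };
	int fd = open("/dev/sev", O_RDWR);

	if (fd < 0) {
		perror("open /dev/sev");
		return 1;
	}
	if (ioctl(fd, SEV_ISSUE_CMD_SKETCH, &arg) < 0)
		perror("SEV_ISSUE_CMD");
	close(fd);
	return 0;
}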

Signed-off-by: Brijesh Singh 
---
 drivers/crypto/ccp/Kconfig   |7 
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/psp-dev.h |6 
 drivers/crypto/ccp/sev-dev.c |  348 ++
 drivers/crypto/ccp/sev-dev.h |   67 
 drivers/crypto/ccp/sev-ops.c |  324 
 include/linux/psp-sev.h  |  672 ++
 include/uapi/linux/Kbuild|1 
 include/uapi/linux/psp-sev.h |  123 
 9 files changed, 1546 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ccp/sev-dev.c
 create mode 100644 drivers/crypto/ccp/sev-dev.h
 create mode 100644 drivers/crypto/ccp/sev-ops.c
 create mode 100644 include/linux/psp-sev.h
 create mode 100644 include/uapi/linux/psp-sev.h

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 59c207e..67d1917 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -41,4 +41,11 @@ config CRYPTO_DEV_PSP
help
 Provide the interface for AMD Platform Security Processor (PSP) device.
 
+config CRYPTO_DEV_SEV
+   bool "Secure Encrypted Virtualization (SEV) interface"
+   default y
+   help
+Provide the kernel and userspace (/dev/sev) interface to issue the
+Secure Encrypted Virtualization (SEV) commands.
+
 endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 12e569d..4c4e77e 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -7,6 +7,7 @@ ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-dev-v5.o \
ccp-dmaengine.o
 ccp-$(CONFIG_CRYPTO_DEV_PSP) += psp-dev.o
+ccp-$(CONFIG_CRYPTO_DEV_SEV) += sev-dev.o sev-ops.o
 
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
diff --git a/drivers/crypto/ccp/psp-dev.h b/drivers/crypto/ccp/psp-dev.h
index bbd3d96..fd67b14 100644
--- a/drivers/crypto/ccp/psp-dev.h
+++ b/drivers/crypto/ccp/psp-dev.h
@@ -70,14 +70,14 @@ int psp_free_sev_irq(struct psp_device *psp, void *data);
 
 struct psp_device *psp_get_master_device(void);
 
-#ifdef CONFIG_AMD_SEV
+#ifdef CONFIG_CRYPTO_DEV_SEV
 
 int sev_dev_init(struct psp_device *psp);
 void sev_dev_destroy(struct psp_device *psp);
 int sev_dev_resume(struct psp_device *psp);
 int sev_dev_suspend(struct psp_device *psp, pm_message_t state);
 
-#else
+#else /* !CONFIG_CRYPTO_DEV_SEV */
 
 static inline int sev_dev_init(struct psp_device *psp)
 {
@@ -96,7 +96,7 @@ static inline int sev_dev_suspend(struct psp_device *psp, 
pm_message_t state)
return -ENODEV;
 }
 
-#endif /* __AMD_SEV_H */
+#endif /* CONFIG_CRYPTO_DEV_SEV */
 
 #endif /* __PSP_DEV_H */
 
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
new file mode 100644
index 0000000..a67e2d7
--- /dev/null
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -0,0 +1,348 @@
+/*
+ * AMD Secure Encrypted Virtualization (SEV) interface
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "psp-dev.h"
+#include "sev-dev.h"
+
+extern struct file_operations sev_fops;
+
+static LIST_HEAD(sev_devs);
+static DEFINE_SPINLOCK(sev_devs_lock);
+static atomic_t sev_id;
+
+static unsigned int psp_poll;
+module_param(psp_poll, uint, 0444);
+MODULE_PARM_DESC(psp_poll, "Poll for sev command completion - any non-zero 
value");
+
+DEFINE_MUTEX(sev_cmd_mutex);
+
+void sev_add_device(struct sev_device *sev)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&sev_devs_lock, flags);
+
+   list_add_tail(&sev->entry, &sev_devs);
+
+   spin_unlock_irqrestore(&sev_devs_lock, flags);
+}
+
+void sev_del_device(struct sev_device *sev)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&sev_devs_lock, flags);
+
+   list_del(&sev->entry);
+   spin_unlock_irqrestore(&sev_devs_lock, flags);
+}
+
+static struct sev_device *get_sev_master_device(void)
+{
+   struct psp_device *psp = psp_get_master_device();
+
+   return psp ? psp->sev_data : NULL;
+}
+
+static int sev_wait_cmd_poll(struct sev_device *sev, unsigned int timeout,
+   

[RFC PATCH v2 25/32] kvm: svm: Add support for SEV LAUNCH_START command

2017-03-02 Thread Brijesh Singh
The command is used to bootstrap an SEV guest from unencrypted boot images.
The command creates a new VM encryption key (VEK) using the guest owner's
public DH certificates and session data. The VEK will be used to encrypt
the guest memory.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |  302 
 1 file changed, 301 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index fb63398..b5fa8c0 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -37,6 +37,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -497,6 +498,10 @@ static inline bool gif_set(struct vcpu_svm *svm)
 /* Secure Encrypted Virtualization */
 static unsigned int max_sev_asid;
 static unsigned long *sev_asid_bitmap;
+static void sev_deactivate_handle(struct kvm *kvm);
+static void sev_decommission_handle(struct kvm *kvm);
+static int sev_asid_new(void);
+static void sev_asid_free(int asid);
 
 static bool kvm_sev_enabled(void)
 {
@@ -1534,6 +1539,17 @@ static inline int avic_free_vm_id(int id)
return 0;
 }
 
+static void sev_vm_destroy(struct kvm *kvm)
+{
+   if (!sev_guest(kvm))
+   return;
+
+   /* release the firmware resources */
+   sev_deactivate_handle(kvm);
+   sev_decommission_handle(kvm);
+   sev_asid_free(sev_get_asid(kvm));
+}
+
 static void avic_vm_destroy(struct kvm *kvm)
 {
unsigned long flags;
@@ -1551,6 +1567,12 @@ static void avic_vm_destroy(struct kvm *kvm)
spin_unlock_irqrestore(&svm_vm_data_hash_lock, flags);
 }
 
+static void svm_vm_destroy(struct kvm *kvm)
+{
+   avic_vm_destroy(kvm);
+   sev_vm_destroy(kvm);
+}
+
 static int avic_vm_init(struct kvm *kvm)
 {
unsigned long flags;
@@ -5502,6 +5524,282 @@ static inline void avic_post_state_restore(struct 
kvm_vcpu *vcpu)
avic_handle_ldr_update(vcpu);
 }
 
+static int sev_asid_new(void)
+{
+   int pos;
+
+   if (!max_sev_asid)
+   return -EINVAL;
+
+   pos = find_first_zero_bit(sev_asid_bitmap, max_sev_asid);
+   if (pos >= max_sev_asid)
+   return -EBUSY;
+
+   set_bit(pos, sev_asid_bitmap);
+   return pos + 1;
+}
+
+static void sev_asid_free(int asid)
+{
+   int cpu, pos;
+   struct svm_cpu_data *sd;
+
+   pos = asid - 1;
+   clear_bit(pos, sev_asid_bitmap);
+
+   for_each_possible_cpu(cpu) {
+   sd = per_cpu(svm_data, cpu);
+   sd->sev_vmcbs[pos] = NULL;
+   }
+}
+
+static int sev_issue_cmd(struct kvm *kvm, int id, void *data, int *error)
+{
+   int ret;
+   struct fd f;
+   int fd = sev_get_fd(kvm);
+
+   f = fdget(fd);
+   if (!f.file)
+   return -EBADF;
+
+   ret = sev_issue_cmd_external_user(f.file, id, data, 0, error);
+   fdput(f);
+
+   return ret;
+}
+
+static void sev_decommission_handle(struct kvm *kvm)
+{
+   int ret, error;
+   struct sev_data_decommission *data;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_guest_decommission(data, &error);
+   if (ret)
+   pr_err("SEV: DECOMMISSION %d (%#x)\n", ret, error);
+
+   kfree(data);
+}
+
+static void sev_deactivate_handle(struct kvm *kvm)
+{
+   int ret, error;
+   struct sev_data_deactivate *data;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return;
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_guest_deactivate(data, &error);
+   if (ret) {
+   pr_err("SEV: DEACTIVATE %d (%#x)\n", ret, error);
+   goto buffer_free;
+   }
+
+   wbinvd_on_all_cpus();
+
+   ret = sev_guest_df_flush(&error);
+   if (ret)
+   pr_err("SEV: DF_FLUSH %d (%#x)\n", ret, error);
+
+buffer_free:
+   kfree(data);
+}
+
+static int sev_activate_asid(unsigned int handle, int asid, int *error)
+{
+   int ret;
+   struct sev_data_activate *data;
+
+   wbinvd_on_all_cpus();
+
+   ret = sev_guest_df_flush(error);
+   if (ret) {
+   pr_err("SEV: DF_FLUSH %d (%#x)\n", ret, *error);
+   return ret;
+   }
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   data->handle = handle;
+   data->asid   = asid;
+   ret = sev_guest_activate(data, error);
+   if (ret)
+   pr_err("SEV: ACTIVATE %d (%#x)\n", ret, *error);
+
+   kfree(data);
+   return ret;
+}
+
+static int sev_pre_start(struct kvm *kvm, int *asid)
+{
+   int ret;
+
+   /* If guest has active SEV handle then deactivate before creating the
+* encryption context.
+*/
+   if (sev_guest(kvm)) {
+   sev_deactivate_handle(kvm);
+   sev_decommission_handle(kvm);
+   *asid = sev_get_asid(kvm);  /* reuse the

[RFC PATCH v2 20/32] crypto: ccp: Add Platform Security Processor (PSP) interface support

2017-03-02 Thread Brijesh Singh
AMD Platform Security Processor (PSP) is a dedicated processor that
provides support for encrypting the guest memory in Secure Encrypted
Virtualization (SEV) mode, along with a software-based Trusted Execution
Environment (TEE) to enable third-party trusted applications.

Signed-off-by: Brijesh Singh 
---
 drivers/crypto/ccp/Kconfig   |7 +
 drivers/crypto/ccp/Makefile  |1 
 drivers/crypto/ccp/psp-dev.c |  211 ++
 drivers/crypto/ccp/psp-dev.h |  102 
 drivers/crypto/ccp/sp-dev.c  |   16 +++
 drivers/crypto/ccp/sp-dev.h  |   34 +++
 drivers/crypto/ccp/sp-pci.c  |4 +
 7 files changed, 374 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ccp/psp-dev.c
 create mode 100644 drivers/crypto/ccp/psp-dev.h

diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index bc08f03..59c207e 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -34,4 +34,11 @@ config CRYPTO_DEV_CCP
  Provides the interface to use the AMD Cryptographic Coprocessor
  which can be used to offload encryption operations such as SHA,
  AES and more.
+
+config CRYPTO_DEV_PSP
+   bool "Platform Security Processor interface"
+   default y
+   help
+Provide the interface for AMD Platform Security Processor (PSP) device.
+
 endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 8127e18..12e569d 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -6,6 +6,7 @@ ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
ccp-dmaengine.o
+ccp-$(CONFIG_CRYPTO_DEV_PSP) += psp-dev.o
 
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
new file mode 100644
index 0000000..6f64aa7
--- /dev/null
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -0,0 +1,211 @@
+/*
+ * AMD Platform Security Processor (PSP) interface
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Brijesh Singh 
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "sp-dev.h"
+#include "psp-dev.h"
+
+static LIST_HEAD(psp_devs);
+static DEFINE_SPINLOCK(psp_devs_lock);
+
+const struct psp_vdata psp_entry = {
+   .offset = 0x10500,
+};
+
+void psp_add_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_add_tail(&psp->entry, &psp_devs);
+
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+void psp_del_device(struct psp_device *psp)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&psp_devs_lock, flags);
+
+   list_del(&psp->entry);
+   spin_unlock_irqrestore(&psp_devs_lock, flags);
+}
+
+static struct psp_device *psp_alloc_struct(struct sp_device *sp)
+{
+   struct device *dev = sp->dev;
+   struct psp_device *psp;
+
+   psp = devm_kzalloc(dev, sizeof(*psp), GFP_KERNEL);
+   if (!psp)
+   return NULL;
+
+   psp->dev = dev;
+   psp->sp = sp;
+
+   snprintf(psp->name, sizeof(psp->name), "psp-%u", sp->ord);
+
+   return psp;
+}
+
+irqreturn_t psp_irq_handler(int irq, void *data)
+{
+   unsigned int status;
+   irqreturn_t ret = IRQ_HANDLED;
+   struct psp_device *psp = data;
+
+   /* read the interrupt status */
+   status = ioread32(psp->io_regs + PSP_P2CMSG_INTSTS);
+
+   /* invoke subdevice interrupt handlers */
+   if (status) {
+   if (psp->sev_irq_handler)
+   ret = psp->sev_irq_handler(irq, psp->sev_irq_data);
+   }
+
+   /* clear the interrupt status */
+   iowrite32(status, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+   return ret;
+}
+
+static int psp_init(struct psp_device *psp)
+{
+   psp_add_device(psp);
+
+   sev_dev_init(psp);
+
+   return 0;
+}
+
+int psp_dev_init(struct sp_device *sp)
+{
+   struct device *dev = sp->dev;
+   struct psp_device *psp;
+   int ret;
+
+   ret = -ENOMEM;
+   psp = psp_alloc_struct(sp);
+   if (!psp)
+   goto e_err;
+   sp->psp_data = psp;
+
+   psp->vdata = (struct psp_vdata *)sp->dev_data->psp_vdata;
+   if (!psp->vdata) {
+   ret = -ENODEV;
+   dev_err(dev, "missing driver data\n");
+   goto e_err;
+   }
+
+   psp->io_regs = sp->io_map + psp->vdata->offset;
+
+   /* Disable and clear interrupts until ready */
+   iowrite32(0, psp->io_regs + PSP_P2CMSG_INTEN);
+   iowrite32(0xffffffff, psp->io_regs + PSP_P2CMSG_INTSTS);
+
+   dev_dbg(d

[RFC PATCH v2 29/32] kvm: svm: Add support for SEV DEBUG_DECRYPT command

2017-03-02 Thread Brijesh Singh
The command is used to decrypt a guest memory region for debug purposes.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |   76 
 1 file changed, 76 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 977aa22..ce8819a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -5986,6 +5986,78 @@ static int sev_guest_status(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static int __sev_dbg_decrypt_page(struct kvm *kvm, unsigned long src,
+   void *dst, int *error)
+{
+   int ret;
+   struct page **inpages;
+   struct sev_data_dbg *data;
+   unsigned long npages;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   inpages = sev_pin_memory(src, PAGE_SIZE, &npages);
+   if (!inpages) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   data->handle = sev_get_handle(kvm);
+   data->dst_addr = __psp_pa(dst);
+   data->src_addr = __sev_page_pa(inpages[0]);
+   data->length = PAGE_SIZE;
+
+   ret = sev_issue_cmd(kvm, SEV_CMD_DBG_DECRYPT, data, error);
+   if (ret)
+   printk(KERN_ERR "SEV: DEBUG_DECRYPT %d (%#010x)\n",
+   ret, *error);
+   sev_unpin_memory(inpages, npages);
+err_1:
+   kfree(data);
+   return ret;
+}
+
+static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   void *data;
+   int ret, offset, len;
+   struct kvm_sev_dbg debug;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, (void *)argp->data,
+   sizeof(struct kvm_sev_dbg)))
+   return -EFAULT;
+   /*
+* TODO: add support for decrypting length which crosses the
+* page boundary.
+*/
+   offset = debug.src_addr & (PAGE_SIZE - 1);
+   if (offset + debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   /* decrypt full page */
+   ret = __sev_dbg_decrypt_page(kvm, debug.src_addr & PAGE_MASK,
+   data, &argp->error);
+   if (ret)
+   goto err_1;
+
+   /* we decrypted the full page but copy back only the requested length */
+   len = min_t(size_t, (PAGE_SIZE - offset), debug.length);
+   if (copy_to_user((uint8_t *)debug.dst_addr, data + offset, len))
+   ret = -EFAULT;
+err_1:
+   free_page((unsigned long)data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6013,6 +6085,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, 
void __user *argp)
r = sev_guest_status(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_DBG_DECRYPT: {
+   r = sev_dbg_decrypt(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 16/32] x86: kvm: Provide support to create Guest and HV shared per-CPU variables

2017-03-02 Thread Brijesh Singh
Some KVM-specific MSRs (steal-time, asyncpf, avic_eio) have per-CPU
variables allocated at compile time whose physical addresses are shared
with the hypervisor. This presents a challenge when SEV is active in the
guest OS: guest memory is encrypted with the guest key, so the hypervisor
is no longer able to modify it. In that case we need to clear the
encryption attribute of the shared physical addresses so that both guest
and hypervisor can access the data.

To solve this problem, I have tried these three options:

1) Convert the static per-CPU variables to dynamic per-CPU allocation and,
when SEV is detected, clear the encryption attribute. But while doing so I
found that the per-CPU dynamic allocator was not ready when
kvm_guest_cpu_init was called.

2) Since the encryption attribute works at PAGE_SIZE granularity, add some
extra padding to 'struct kvm_steal_time' to make it PAGE_SIZE and then at
runtime clear the encryption attribute of the full page. The downside of
this is that we would now need to modify the structure, which may break
compatibility.

3) Define a new per-CPU section (.data..percpu.hv_shared) which will be
used to hold the compile-time shared per-CPU variables. When SEV is
detected we map this section with the encryption attribute cleared.

This patch implements #3. It introduces a new DEFINE_PER_CPU_HV_SHARED
macro to create a compile-time per-CPU variable. When SEV is detected we
map the per-CPU variable as decrypted (i.e. with the encryption attribute
cleared).
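As a rough sketch of the macro (assuming it builds on the existing
DEFINE_PER_CPU_SECTION machinery in include/linux/percpu-defs.h, not
necessarily the exact hunk):

/* Variables defined this way land in the .data..percpu..hv_shared
 * section, which the kernel can later remap with the encryption
 * attribute cleared.
 */
#define PER_CPU_HV_SHARED_SECTION "..hv_shared"

#define DECLARE_PER_CPU_HV_SHARED(type, name)				\
	DECLARE_PER_CPU_SECTION(type, name, PER_CPU_HV_SHARED_SECTION)

#define DEFINE_PER_CPU_HV_SHARED(type, name)				\
	DEFINE_PER_CPU_SECTION(type, name, PER_CPU_HV_SHARED_SECTION)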

Signed-off-by: Brijesh Singh 
---
 arch/x86/kernel/kvm.c |   43 +++--
 include/asm-generic/vmlinux.lds.h |3 +++
 include/linux/percpu-defs.h   |9 
 3 files changed, 48 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 099fcba..706a08e 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -75,8 +75,8 @@ static int parse_no_kvmclock_vsyscall(char *arg)
 
 early_param("no-kvmclock-vsyscall", parse_no_kvmclock_vsyscall);
 
-static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
-static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
+static DEFINE_PER_CPU_HV_SHARED(struct kvm_vcpu_pv_apf_data, apf_reason) 
__aligned(64);
+static DEFINE_PER_CPU_HV_SHARED(struct kvm_steal_time, steal_time) 
__aligned(64);
 static int has_steal_clock = 0;
 
 /*
@@ -290,6 +290,22 @@ static void __init paravirt_ops_setup(void)
 #endif
 }
 
+static int kvm_map_percpu_hv_shared(void *addr, unsigned long size)
+{
+   /* When SEV is active, the percpu static variables initialized
+* in data section will contain the encrypted data so we first
+* need to decrypt it and then map it as decrypted.
+*/
+   if (sev_active()) {
+   unsigned long pa = slow_virt_to_phys(addr);
+
+   sme_early_decrypt(pa, size);
+   return early_set_memory_decrypted(addr, size);
+   }
+
+   return 0;
+}
+
 static void kvm_register_steal_time(void)
 {
int cpu = smp_processor_id();
@@ -298,12 +314,17 @@ static void kvm_register_steal_time(void)
if (!has_steal_clock)
return;
 
+   if (kvm_map_percpu_hv_shared(st, sizeof(*st))) {
+   pr_err("kvm-stealtime: failed to map hv_shared percpu\n");
+   return;
+   }
+
wrmsrl(MSR_KVM_STEAL_TIME, (slow_virt_to_phys(st) | KVM_MSR_ENABLED));
pr_info("kvm-stealtime: cpu %d, msr %llx\n",
cpu, (unsigned long long) slow_virt_to_phys(st));
 }
 
-static DEFINE_PER_CPU(unsigned long, kvm_apic_eoi) = KVM_PV_EOI_DISABLED;
+static DEFINE_PER_CPU_HV_SHARED(unsigned long, kvm_apic_eoi) = 
KVM_PV_EOI_DISABLED;
 
 static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
 {
@@ -327,25 +348,33 @@ static void kvm_guest_cpu_init(void)
if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF) && kvmapf) {
u64 pa = slow_virt_to_phys(this_cpu_ptr(&apf_reason));
 
+   if (kvm_map_percpu_hv_shared(this_cpu_ptr(&apf_reason),
+   sizeof(struct kvm_vcpu_pv_apf_data)))
+   goto skip_asyncpf;
 #ifdef CONFIG_PREEMPT
pa |= KVM_ASYNC_PF_SEND_ALWAYS;
 #endif
wrmsrl(MSR_KVM_ASYNC_PF_EN, pa | KVM_ASYNC_PF_ENABLED);
__this_cpu_write(apf_reason.enabled, 1);
-   printk(KERN_INFO"KVM setup async PF for cpu %d\n",
-  smp_processor_id());
+   printk(KERN_INFO"KVM setup async PF for cpu %d msr %llx\n",
+  smp_processor_id(), pa);
}
-
+skip_asyncpf:
if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) {
unsigned long pa;
/* Size alignment is implied but just to make it explicit. */
BUILD_BUG_ON(__alignof__(kvm_apic_eoi) < 4);
+   if (kvm_map_percpu_hv_shared(this_cpu_ptr(&kvm_apic_eoi),
+  

[RFC PATCH v2 22/32] kvm: svm: prepare to reserve asid for SEV guest

2017-03-02 Thread Brijesh Singh
In the current implementation, ASID allocation starts from 1. This patch
adds a min_asid variable to the svm_cpu_data structure to allow the
starting ASID to be something other than 1.

Signed-off-by: Brijesh Singh 
Reviewed-by: Paolo Bonzini 
---
 arch/x86/kvm/svm.c |4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index b581499..8d8fe62 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -507,6 +507,7 @@ struct svm_cpu_data {
u64 asid_generation;
u32 max_asid;
u32 next_asid;
+   u32 min_asid;
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
@@ -763,6 +764,7 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
+   sd->min_asid = 1;
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -2026,7 +2028,7 @@ static void new_asid(struct vcpu_svm *svm, struct 
svm_cpu_data *sd)
 {
if (sd->next_asid > sd->max_asid) {
++sd->asid_generation;
-   sd->next_asid = 1;
+   sd->next_asid = sd->min_asid;
svm->vmcb->control.tlb_ctl = TLB_CONTROL_FLUSH_ALL_ASID;
}
 



[RFC PATCH v2 18/32] kvm: svm: Use the hardware provided GPA instead of page walk

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

When a guest causes a NPF which requires emulation, KVM sometimes walks
the guest page tables to translate the GVA to a GPA. This is unnecessary
most of the time on AMD hardware since the hardware provides the GPA in
EXITINFO2.

The only exception cases involve string operations involving rep or
operations that use two memory locations. With rep, the GPA will only be
the value of the initial NPF and with dual memory locations we won't know
which memory address was translated into EXITINFO2.

Signed-off-by: Tom Lendacky 
Reviewed-by: Borislav Petkov 
Signed-off-by: Brijesh Singh 
---
 arch/x86/include/asm/kvm_emulate.h |1 +
 arch/x86/include/asm/kvm_host.h|3 ++
 arch/x86/kvm/emulate.c |   20 +---
 arch/x86/kvm/svm.c |2 ++
 arch/x86/kvm/x86.c |   45 
 5 files changed, 57 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_emulate.h 
b/arch/x86/include/asm/kvm_emulate.h
index e9cd7be..3e8c287 100644
--- a/arch/x86/include/asm/kvm_emulate.h
+++ b/arch/x86/include/asm/kvm_emulate.h
@@ -441,5 +441,6 @@ int emulator_task_switch(struct x86_emulate_ctxt *ctxt,
 int emulate_int_real(struct x86_emulate_ctxt *ctxt, int irq);
 void emulator_invalidate_register_cache(struct x86_emulate_ctxt *ctxt);
 void emulator_writeback_register_cache(struct x86_emulate_ctxt *ctxt);
+bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt);
 
 #endif /* _ASM_X86_KVM_X86_EMULATE_H */
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 37326b5..bff1f15 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -668,6 +668,9 @@ struct kvm_vcpu_arch {
 
int pending_ioapic_eoi;
int pending_external_vector;
+
+   /* GPA available (AMD only) */
+   bool gpa_available;
 };
 
 struct kvm_lpage_info {
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index cedbba0..45c7306 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -173,6 +173,7 @@
 #define NearBranch  ((u64)1 << 52)  /* Near branches */
 #define No16   ((u64)1 << 53)  /* No 16 bit operand */
 #define IncSP   ((u64)1 << 54)  /* SP is incremented before ModRM calc */
+#define TwoMemOp    ((u64)1 << 55)  /* Instruction has two memory operands */
 
 #define DstXacc (DstAccLo | SrcAccHi | SrcWrite)
 
@@ -4298,7 +4299,7 @@ static const struct opcode group1[] = {
 };
 
 static const struct opcode group1A[] = {
-   I(DstMem | SrcNone | Mov | Stack | IncSP, em_pop), N, N, N, N, N, N, N,
+   I(DstMem | SrcNone | Mov | Stack | IncSP | TwoMemOp, em_pop), N, N, N, 
N, N, N, N,
 };
 
 static const struct opcode group2[] = {
@@ -4336,7 +4337,7 @@ static const struct opcode group5[] = {
I(SrcMemFAddr | ImplicitOps,em_call_far),
I(SrcMem | NearBranch,  em_jmp_abs),
I(SrcMemFAddr | ImplicitOps,em_jmp_far),
-   I(SrcMem | Stack,   em_push), D(Undefined),
+   I(SrcMem | Stack | TwoMemOp,em_push), D(Undefined),
 };
 
 static const struct opcode group6[] = {
@@ -4556,8 +4557,8 @@ static const struct opcode opcode_table[256] = {
/* 0xA0 - 0xA7 */
I2bv(DstAcc | SrcMem | Mov | MemAbs, em_mov),
I2bv(DstMem | SrcAcc | Mov | MemAbs | PageTable, em_mov),
-   I2bv(SrcSI | DstDI | Mov | String, em_mov),
-   F2bv(SrcSI | DstDI | String | NoWrite, em_cmp_r),
+   I2bv(SrcSI | DstDI | Mov | String | TwoMemOp, em_mov),
+   F2bv(SrcSI | DstDI | String | NoWrite | TwoMemOp, em_cmp_r),
/* 0xA8 - 0xAF */
F2bv(DstAcc | SrcImm | NoWrite, em_test),
I2bv(SrcAcc | DstDI | Mov | String, em_mov),
@@ -5671,3 +5672,14 @@ void emulator_writeback_register_cache(struct 
x86_emulate_ctxt *ctxt)
 {
writeback_registers(ctxt);
 }
+
+bool emulator_can_use_gpa(struct x86_emulate_ctxt *ctxt)
+{
+   if (ctxt->rep_prefix && (ctxt->d & String))
+   return false;
+
+   if (ctxt->d & TwoMemOp)
+   return false;
+
+   return true;
+}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 36d61ff..b581499 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4184,6 +4184,8 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 
trace_kvm_exit(exit_code, vcpu, KVM_ISA_SVM);
 
+   vcpu->arch.gpa_available = (exit_code == SVM_EXIT_NPF);
+
if (!is_cr_intercept(svm, INTERCEPT_CR0_WRITE))
vcpu->arch.cr0 = svm->vmcb->save.cr0;
if (npt_enabled)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9e6a593..2099df8 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4465,6 +4465,21 @@ int kvm_write_guest_virt_system(struct x86_emulate_ctxt 
*ctxt,
 }
 EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system);
 
+static int vcpu_is_mmio_gpa(struct kvm_vcpu *vcpu, unsigned long gva,
+   gpa_t gpa, bool w

[RFC PATCH v2 23/32] kvm: introduce KVM_MEMORY_ENCRYPT_OP ioctl

2017-03-02 Thread Brijesh Singh
If the hardware supports memory encryption then the KVM_MEMORY_ENCRYPT_OP
ioctl can be used by qemu to issue platform-specific memory encryption
commands.
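As a rough sketch of the qemu side (assuming a kernel with this patch, so
that <linux/kvm.h> provides KVM_MEMORY_ENCRYPT_OP; the wrapper name is
made up for illustration):

#include <linux/kvm.h>		/* KVM_MEMORY_ENCRYPT_OP */
#include <stdio.h>
#include <sys/ioctl.h>

/* Pass an opaque, platform-defined command buffer to KVM on the VM fd;
 * the vendor backend (e.g. SVM) interprets the payload.
 */
static int memory_encrypt_op(int vm_fd, void *cmd)
{
	if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, cmd) < 0) {
		perror("KVM_MEMORY_ENCRYPT_OP");
		return -1;
	}
	return 0;
}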

Signed-off-by: Brijesh Singh 
---
 arch/x86/include/asm/kvm_host.h |2 ++
 arch/x86/kvm/x86.c  |   12 
 include/uapi/linux/kvm.h|2 ++
 3 files changed, 16 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index bff1f15..62651ad 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1033,6 +1033,8 @@ struct kvm_x86_ops {
void (*cancel_hv_timer)(struct kvm_vcpu *vcpu);
 
void (*setup_mce)(struct kvm_vcpu *vcpu);
+
+   int (*memory_encryption_op)(struct kvm *kvm, void __user *argp);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2099df8..6a737e9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3926,6 +3926,14 @@ static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
return r;
 }
 
+static int kvm_vm_ioctl_memory_encryption_op(struct kvm *kvm, void __user 
*argp)
+{
+   if (kvm_x86_ops->memory_encryption_op)
+   return kvm_x86_ops->memory_encryption_op(kvm, argp);
+
+   return -ENOTTY;
+}
+
 long kvm_arch_vm_ioctl(struct file *filp,
   unsigned int ioctl, unsigned long arg)
 {
@@ -4189,6 +4197,10 @@ long kvm_arch_vm_ioctl(struct file *filp,
r = kvm_vm_ioctl_enable_cap(kvm, &cap);
break;
}
+   case KVM_MEMORY_ENCRYPT_OP: {
+   r = kvm_vm_ioctl_memory_encryption_op(kvm, argp);
+   break;
+   }
default:
r = kvm_vm_ioctl_assigned_device(kvm, ioctl, arg);
}
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index cac48ed..fef7d83 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1281,6 +1281,8 @@ struct kvm_s390_ucas_mapping {
 #define KVM_S390_GET_IRQ_STATE   _IOW(KVMIO, 0xb6, struct kvm_s390_irq_state)
 /* Available with KVM_CAP_X86_SMM */
 #define KVM_SMI   _IO(KVMIO,   0xb7)
+/* Memory Encryption Commands */
+#define KVM_MEMORY_ENCRYPT_OP_IOWR(KVMIO, 0xb8, unsigned long)
 
 #define KVM_DEV_ASSIGN_ENABLE_IOMMU(1 << 0)
 #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)



[RFC PATCH v2 01/32] x86: Add the Secure Encrypted Virtualization CPU feature

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Update the CPU features to include identifying and reporting on the
Secure Encrypted Virtualization (SEV) feature.  SEV is identified by
CPUID 0x8000001f, but requires BIOS support to enable it (set bit 23 of
MSR_K8_SYSCFG and set bit 0 of MSR_K7_HWCR).  Only show the SEV feature
as available if reported by CPUID and enabled by BIOS.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/cpufeatures.h |1 +
 arch/x86/include/asm/msr-index.h   |2 ++
 arch/x86/kernel/cpu/amd.c  |   22 ++
 arch/x86/kernel/cpu/scattered.c|1 +
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/cpufeatures.h 
b/arch/x86/include/asm/cpufeatures.h
index b1a4468..9907579 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -188,6 +188,7 @@
  */
 
 #define X86_FEATURE_SME        ( 7*32+ 0) /* AMD Secure Memory Encryption */
+#define X86_FEATURE_SEV        ( 7*32+ 1) /* AMD Secure Encrypted Virtualization */
 #define X86_FEATURE_CPB        ( 7*32+ 2) /* AMD Core Performance Boost */
 #define X86_FEATURE_EPB        ( 7*32+ 3) /* IA32_ENERGY_PERF_BIAS support */
 #define X86_FEATURE_CAT_L3     ( 7*32+ 4) /* Cache Allocation Technology L3 */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index e2d0503..e8b3b28 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -361,6 +361,8 @@
 #define MSR_K7_PERFCTR3            0xc0010007
 #define MSR_K7_CLK_CTL             0xc001001b
 #define MSR_K7_HWCR                0xc0010015
+#define MSR_K7_HWCR_SMMLOCK_BIT    0
+#define MSR_K7_HWCR_SMMLOCK        BIT_ULL(MSR_K7_HWCR_SMMLOCK_BIT)
 #define MSR_K7_FID_VID_CTL 0xc0010041
 #define MSR_K7_FID_VID_STATUS  0xc0010042
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 6bddda3..675958e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -617,10 +617,13 @@ static void early_init_amd(struct cpuinfo_x86 *c)
set_cpu_bug(c, X86_BUG_AMD_E400);
 
/*
-* BIOS support is required for SME. If BIOS has enabld SME then
-* adjust x86_phys_bits by the SME physical address space reduction
-* value. If BIOS has not enabled SME then don't advertise the
-* feature (set in scattered.c).
+* BIOS support is required for SME and SEV.
+*   For SME: If BIOS has enabled SME then adjust x86_phys_bits by
+*the SME physical address space reduction value.
+*If BIOS has not enabled SME then don't advertise the
+*SME feature (set in scattered.c).
+*   For SEV: If BIOS has not enabled SEV then don't advertise the
+*SEV feature (set in scattered.c).
 */
	if (c->extended_cpuid_level >= 0x8000001f) {
if (cpu_has(c, X86_FEATURE_SME)) {
@@ -637,6 +640,17 @@ static void early_init_amd(struct cpuinfo_x86 *c)
clear_cpu_cap(c, X86_FEATURE_SME);
}
}
+
+   if (cpu_has(c, X86_FEATURE_SEV)) {
+   u64 syscfg, hwcr;
+
+   /* Check if SEV is enabled */
+   rdmsrl(MSR_K8_SYSCFG, syscfg);
+   rdmsrl(MSR_K7_HWCR, hwcr);
+   if (!(syscfg & MSR_K8_SYSCFG_MEM_ENCRYPT) ||
+   !(hwcr & MSR_K7_HWCR_SMMLOCK))
+   clear_cpu_cap(c, X86_FEATURE_SEV);
+   }
}
 }
 
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index cabda87..c3f58d9 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -31,6 +31,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_CPB,  CPUID_EDX,  9, 0x8007, 0 },
{ X86_FEATURE_PROC_FEEDBACK,CPUID_EDX, 11, 0x8007, 0 },
	{ X86_FEATURE_SME,  CPUID_EAX,  0, 0x8000001f, 0 },
+   { X86_FEATURE_SEV,  CPUID_EAX,  1, 0x8000001f, 0 },
{ 0, 0, 0, 0, 0 }
 };
 



[RFC PATCH v2 03/32] KVM: SVM: prepare for new bit definition in nested_ctl

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Currently the nested_ctl variable in the vmcb_control_area structure is
used to indicate nested paging support. The nested paging support field
is actually defined as bit 0 of the field. In order to support a new
feature flag the usage of the nested_ctl and nested paging support must
be converted to operate on a single bit.

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h |2 ++
 arch/x86/kvm/svm.c |7 ---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 14824fc..2aca535 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -136,6 +136,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_LOCK_MASK 0x0008ULL
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
+#define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
u16 attrib;
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 08a4d3a..75b0645 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1246,7 +1246,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 
if (npt_enabled) {
/* Setup VMCB for Nested Paging */
-   control->nested_ctl = 1;
+   control->nested_ctl |= SVM_NESTED_CTL_NP_ENABLE;
clr_intercept(svm, INTERCEPT_INVLPG);
clr_exception_intercept(svm, PF_VECTOR);
clr_cr_intercept(svm, INTERCEPT_CR3_READ);
@@ -2840,7 +2840,8 @@ static bool nested_vmcb_checks(struct vmcb *vmcb)
if (vmcb->control.asid == 0)
return false;
 
-   if (vmcb->control.nested_ctl && !npt_enabled)
+   if ((vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) &&
+   !npt_enabled)
return false;
 
return true;
@@ -2915,7 +2916,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
else
svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
 
-   if (nested_vmcb->control.nested_ctl) {
+   if (nested_vmcb->control.nested_ctl & SVM_NESTED_CTL_NP_ENABLE) {
kvm_mmu_unload(&svm->vcpu);
svm->nested.nested_cr3 = nested_vmcb->control.nested_cr3;
nested_svm_init_mmu_context(&svm->vcpu);



[RFC PATCH v2 24/32] kvm: x86: prepare for SEV guest management API support

2017-03-02 Thread Brijesh Singh
The patch adds the initial support required to integrate the Secure
Encrypted Virtualization (SEV) feature.

ASID management:
 - Reserve an ASID range for SEV guests; the SEV ASID range is obtained
   through CPUID Fn8000_001f[ECX]. A non-SEV guest can use any ASID
   outside the SEV ASID range (see the sketch below).
 - An SEV guest must have an ASID value within the ASID range obtained
   through CPUID.
 - An SEV guest must have the same ASID for all vcpus. A TLB flush is
   required if a different vcpu for the same ASID is to be run on the
   same host CPU.
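A minimal sketch of reading that limit, using the kernel's cpuid_ecx()
helper (the function name here is illustrative):

/* CPUID Fn8000_001F[ECX] reports the maximum number of encrypted
 * guests supported simultaneously, i.e. the highest SEV ASID;
 * 0 means SEV is not supported.
 */
static unsigned int sev_read_max_asid(void)
{
	return cpuid_ecx(0x8000001f);
}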

Signed-off-by: Brijesh Singh 
---
 arch/x86/include/asm/kvm_host.h |8 ++
 arch/x86/kvm/svm.c  |  189 +++
 include/uapi/linux/kvm.h|   98 
 3 files changed, 294 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 62651ad..fcc4710 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -719,6 +719,12 @@ struct kvm_hv {
HV_REFERENCE_TSC_PAGE tsc_ref;
 };
 
+struct kvm_sev_info {
+   unsigned int handle;/* firmware handle */
+   unsigned int asid;  /* asid for this guest */
+   int sev_fd; /* SEV device fd */
+};
+
 struct kvm_arch {
unsigned int n_used_mmu_pages;
unsigned int n_requested_mmu_pages;
@@ -805,6 +811,8 @@ struct kvm_arch {
 
bool x2apic_format;
bool x2apic_broadcast_quirk_disabled;
+
+   struct kvm_sev_info sev_info;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8d8fe62..fb63398 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -211,6 +212,9 @@ struct vcpu_svm {
 */
struct list_head ir_list;
spinlock_t ir_list_lock;
+
+   /* which host cpu was used for running this vcpu */
+   bool last_cpuid;
 };
 
 /*
@@ -490,6 +494,64 @@ static inline bool gif_set(struct vcpu_svm *svm)
return !!(svm->vcpu.arch.hflags & HF_GIF_MASK);
 }
 
+/* Secure Encrypted Virtualization */
+static unsigned int max_sev_asid;
+static unsigned long *sev_asid_bitmap;
+
+static bool kvm_sev_enabled(void)
+{
+   return max_sev_asid ? 1 : 0;
+}
+
+static inline struct kvm_sev_info *sev_get_info(struct kvm *kvm)
+{
+   struct kvm_arch *vm_data = &kvm->arch;
+
+   return &vm_data->sev_info;
+}
+
+static unsigned int sev_get_handle(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   return sev_info->handle;
+}
+
+static inline int sev_guest(struct kvm *kvm)
+{
+   return sev_get_handle(kvm);
+}
+
+static inline int sev_get_asid(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return -EINVAL;
+
+   return sev_info->asid;
+}
+
+static inline int sev_get_fd(struct kvm *kvm)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return -EINVAL;
+
+   return sev_info->sev_fd;
+}
+
+static inline void sev_set_asid(struct kvm *kvm, int asid)
+{
+   struct kvm_sev_info *sev_info = sev_get_info(kvm);
+
+   if (!sev_info)
+   return;
+
+   sev_info->asid = asid;
+}
+
 static unsigned long iopm_base;
 
 struct kvm_ldttss_desc {
@@ -511,6 +573,8 @@ struct svm_cpu_data {
struct kvm_ldttss_desc *tss_desc;
 
struct page *save_area;
+
+   struct vmcb **sev_vmcbs;  /* index = sev_asid, value = vmcb pointer */
 };
 
 static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
@@ -764,7 +828,7 @@ static int svm_hardware_enable(void)
sd->asid_generation = 1;
sd->max_asid = cpuid_ebx(SVM_CPUID_FUNC) - 1;
sd->next_asid = sd->max_asid + 1;
-   sd->min_asid = 1;
+   sd->min_asid = max_sev_asid + 1;
 
native_store_gdt(&gdt_descr);
gdt = (struct desc_struct *)gdt_descr.address;
@@ -825,6 +889,7 @@ static void svm_cpu_uninit(int cpu)
 
per_cpu(svm_data, raw_smp_processor_id()) = NULL;
__free_page(sd->save_area);
+   kfree(sd->sev_vmcbs);
kfree(sd);
 }
 
@@ -842,6 +907,14 @@ static int svm_cpu_init(int cpu)
if (!sd->save_area)
goto err_1;
 
+   if (kvm_sev_enabled()) {
+   sd->sev_vmcbs = kmalloc((max_sev_asid + 1) * sizeof(void *),
+   GFP_KERNEL);
+   r = -ENOMEM;
+   if (!sd->sev_vmcbs)
+   goto err_1;
+   }
+
per_cpu(svm_data, cpu) = sd;
 
return 0;
@@ -1017,6 +1090,61 @@ static int avic_ga_log_notifier(u32 ga_tag)
return 0;
 }
 
+static __init void sev_hardware_setup(void)
+{
+   int ret, error, nguests;
+   struct sev_data_init *init;
+   struct sev_data_status *status;
+
+   /*
+    * Get maximum number of encrypted guests supported: Fn8000_001F[ECX]
+*  Bit 31:0: Number 

[RFC PATCH v2 32/32] x86: kvm: Pin the guest memory when SEV is active

2017-03-02 Thread Brijesh Singh
The SEV memory encryption engine uses a tweak such that two identical
plaintexts at different locations will have different ciphertexts.
So swapping or moving the ciphertexts of two pages will not result in
the plaintexts being swapped. Relocating (or migrating) a physical backing
page for an SEV guest will therefore require some additional steps. The
current SEV key management spec [1] does not provide commands to swap or
migrate (move) ciphertexts. For now we pin the memory allocated for the
SEV guest. In the future, when the SEV key management spec provides
commands to support page migration, we can update the KVM code to remove
the pinning logic without making any changes to userspace (qemu).

The patch pins userspace memory when a new slot is created and unpins the
memory when the slot is removed.

[1] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Spec.pdf

Signed-off-by: Brijesh Singh 
---
 arch/x86/include/asm/kvm_host.h |6 +++
 arch/x86/kvm/svm.c  |   93 +++
 arch/x86/kvm/x86.c  |3 +
 3 files changed, 102 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fcc4710..9dc59f0 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -723,6 +723,7 @@ struct kvm_sev_info {
unsigned int handle;/* firmware handle */
unsigned int asid;  /* asid for this guest */
int sev_fd; /* SEV device fd */
+   struct list_head pinned_memory_slot;
 };
 
 struct kvm_arch {
@@ -1043,6 +1044,11 @@ struct kvm_x86_ops {
void (*setup_mce)(struct kvm_vcpu *vcpu);
 
int (*memory_encryption_op)(struct kvm *kvm, void __user *argp);
+
+   void (*prepare_memory_region)(struct kvm *kvm,
+   struct kvm_memory_slot *memslot,
+   const struct kvm_userspace_memory_region *mem,
+   enum kvm_mr_change change);
 };
 
 struct kvm_arch_async_pf {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 13996d6..ab973f9 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -498,12 +498,21 @@ static inline bool gif_set(struct vcpu_svm *svm)
 }
 
 /* Secure Encrypted Virtualization */
+struct kvm_sev_pinned_memory_slot {
+   struct list_head list;
+   unsigned long npages;
+   struct page **pages;
+   unsigned long userspace_addr;
+   short id;
+};
+
 static unsigned int max_sev_asid;
 static unsigned long *sev_asid_bitmap;
 static void sev_deactivate_handle(struct kvm *kvm);
 static void sev_decommission_handle(struct kvm *kvm);
 static int sev_asid_new(void);
 static void sev_asid_free(int asid);
+static void sev_unpin_memory(struct page **pages, unsigned long npages);
 #define __sev_page_pa(x) ((page_to_pfn(x) << PAGE_SHIFT) | sme_me_mask)
 
 static bool kvm_sev_enabled(void)
@@ -1544,9 +1553,25 @@ static inline int avic_free_vm_id(int id)
 
 static void sev_vm_destroy(struct kvm *kvm)
 {
+   struct list_head *pos, *q;
+   struct kvm_sev_pinned_memory_slot *pinned_slot;
+   struct list_head *head = &kvm->arch.sev_info.pinned_memory_slot;
+
if (!sev_guest(kvm))
return;
 
+   /* if guest memory is pinned then unpin it now */
+   if (!list_empty(head)) {
+   list_for_each_safe(pos, q, head) {
+   pinned_slot = list_entry(pos,
+   struct kvm_sev_pinned_memory_slot, list);
+   sev_unpin_memory(pinned_slot->pages,
+   pinned_slot->npages);
+   list_del(pos);
+   kfree(pinned_slot);
+   }
+   }
+
/* release the firmware resources */
sev_deactivate_handle(kvm);
sev_decommission_handle(kvm);
@@ -5663,6 +5688,8 @@ static int sev_pre_start(struct kvm *kvm, int *asid)
}
*asid = ret;
ret = 0;
+
+   INIT_LIST_HEAD(&kvm->arch.sev_info.pinned_memory_slot);
}
 
return ret;
@@ -6189,6 +6216,71 @@ static int sev_launch_measure(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static struct kvm_sev_pinned_memory_slot *sev_find_pinned_memory_slot(
+   struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+   struct kvm_sev_pinned_memory_slot *i;
+   struct list_head *head = &kvm->arch.sev_info.pinned_memory_slot;
+
+   list_for_each_entry(i, head, list) {
+   if (i->userspace_addr == slot->userspace_addr &&
+   i->id == slot->id)
+   return i;
+   }
+
+   return NULL;
+}
+
+static void amd_prepare_memory_region(struct kvm *kvm,
+   struct kvm_memory_slot *memslot,
+   const struct kvm_userspace_memory_region *mem,
+   enum kvm_mr_change change)
+{
+   struct kvm_sev_pinned_memory_slot *pinn

[RFC PATCH v2 31/32] kvm: svm: Add support for SEV LAUNCH_MEASURE command

2017-03-02 Thread Brijesh Singh
The command is used to retrieve the measurement of memory encrypted through
the LAUNCH_UPDATE_DATA command. This measurement can be used for attestation
purposes.

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |   52 
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 64899ed..13996d6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6141,6 +6141,54 @@ static int sev_dbg_encrypt(struct kvm *kvm, struct 
kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_launch_measure(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   int ret;
+   void *addr = NULL;
+   struct kvm_sev_launch_measure params;
+   struct sev_data_launch_measure *data;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(¶ms, (void *)argp->data,
+   sizeof(struct kvm_sev_launch_measure)))
+   return -EFAULT;
+
+   data = kzalloc(sizeof(*data), GFP_KERNEL);
+   if (!data)
+   return -ENOMEM;
+
+   if (params.address && params.length) {
+   ret = -EFAULT;
+   addr = kzalloc(params.length, GFP_KERNEL);
+   if (!addr)
+   goto err_1;
+   data->address = __psp_pa(addr);
+   data->length = params.length;
+   }
+
+   data->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_MEASURE, data, &argp->error);
+
+   /* copy the measurement to userspace */
+   if (addr &&
+   copy_to_user((void *)params.address, addr, params.length)) {
+   ret = -EFAULT;
+   goto err_1;
+   }
+
+   params.length = data->length;
+   if (copy_to_user((void *)argp->data, ¶ms,
+   sizeof(struct kvm_sev_launch_measure)))
+   ret = -EFAULT;
+
+   kfree(addr);
+err_1:
+   kfree(data);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6176,6 +6224,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, 
void __user *argp)
r = sev_dbg_encrypt(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_LAUNCH_MEASURE: {
+   r = sev_launch_measure(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 06/32] x86/pci: Use memremap when walking setup data

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

The use of ioremap will force the setup data to be mapped decrypted even
though setup data is encrypted.  Switch to using memremap which will be
able to perform the proper mapping.

Signed-off-by: Tom Lendacky 
---
 arch/x86/pci/common.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index a4fdfa7..0b06670 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -691,7 +691,7 @@ int pcibios_add_device(struct pci_dev *dev)
 
pa_data = boot_params.hdr.setup_data;
while (pa_data) {
-   data = ioremap(pa_data, sizeof(*rom));
+   data = memremap(pa_data, sizeof(*rom), MEMREMAP_WB);
if (!data)
return -ENOMEM;
 
@@ -710,7 +710,7 @@ int pcibios_add_device(struct pci_dev *dev)
}
}
pa_data = data->next;
-   iounmap(data);
+   memunmap(data);
}
set_dma_domain_ops(dev);
set_dev_domain_options(dev);



[RFC PATCH v2 05/32] x86: Use encrypted access of BOOT related data with SEV

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

When Secure Encrypted Virtualization (SEV) is active, BOOT data (such as
EFI related data, setup data) is encrypted and needs to be accessed as
such when mapped. Update the architecture override in early_memremap to
keep the encryption attribute when mapping this data.

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/ioremap.c |   36 +++-
 1 file changed, 31 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c6cb921..c400ab5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -462,12 +462,31 @@ static bool memremap_is_setup_data(resource_size_t 
phys_addr,
 }
 
 /*
- * This function determines if an address should be mapped encrypted.
- * Boot setup data, EFI data and E820 areas are checked in making this
- * determination.
+ * This function determines if an address should be mapped encrypted when
+ * SEV is active.  E820 areas are checked in making this determination.
  */
-static bool memremap_should_map_encrypted(resource_size_t phys_addr,
- unsigned long size)
+static bool memremap_sev_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
+{
+   /* Check if the address is in persistent memory */
+   switch (e820__get_entry_type(phys_addr, phys_addr + size - 1)) {
+   case E820_TYPE_PMEM:
+   case E820_TYPE_PRAM:
+   return false;
+   default:
+   break;
+   }
+
+   return true;
+}
+
+/*
+ * This function determines if an address should be mapped encrypted when
+ * SME is active.  Boot setup data, EFI data and E820 areas are checked in
+ * making this determination.
+ */
+static bool memremap_sme_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
 {
/*
 * SME is not active, return true:
@@ -508,6 +527,13 @@ static bool memremap_should_map_encrypted(resource_size_t 
phys_addr,
return true;
 }
 
+static bool memremap_should_map_encrypted(resource_size_t phys_addr,
+ unsigned long size)
+{
+   return sev_active() ? memremap_sev_should_map_encrypted(phys_addr, size)
+   : memremap_sme_should_map_encrypted(phys_addr, 
size);
+}
+
 /*
 * Architecture function to determine if RAM remap is allowed.
  */



[RFC PATCH v2 30/32] kvm: svm: Add support for SEV DEBUG_ENCRYPT command

2017-03-02 Thread Brijesh Singh
The command copies plaintext into guest memory and encrypts it using
the VM encryption key. The command will be used for debug purposes
(e.g. setting a breakpoint through gdbserver).

Signed-off-by: Brijesh Singh 
---
 arch/x86/kvm/svm.c |   87 
 1 file changed, 87 insertions(+)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ce8819a..64899ed 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -6058,6 +6058,89 @@ static int sev_dbg_decrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
 }
 
+static int sev_dbg_encrypt(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+   void *data;
+   int len, ret, d_off;
+   struct page **inpages;
+   struct kvm_sev_dbg debug;
+   struct sev_data_dbg *encrypt;
+   unsigned long src_addr, dst_addr, npages;
+
+   if (!sev_guest(kvm))
+   return -ENOTTY;
+
+   if (copy_from_user(&debug, (void __user *)(uintptr_t)argp->data, sizeof(debug)))
+   return -EFAULT;
+
+   if (debug.length > PAGE_SIZE)
+   return -EINVAL;
+
+   len = debug.length;
+   src_addr = debug.src_addr;
+   dst_addr = debug.dst_addr;
+
+   inpages = sev_pin_memory(dst_addr, PAGE_SIZE, &npages);
+   if (!inpages)
+   return -EFAULT;
+
+   encrypt = kzalloc(sizeof(*encrypt), GFP_KERNEL);
+   if (!encrypt) {
+   ret = -ENOMEM;
+   goto err_1;
+   }
+
+   data = (void *) get_zeroed_page(GFP_KERNEL);
+   if (!data) {
+   ret = -ENOMEM;
+   goto err_2;
+   }
+
+   if ((len & 15) || (dst_addr & 15)) {
+   /* if the destination address or the length is not
+* 16-byte aligned then:
+* a) decrypt the destination page into a temporary buffer
+* b) copy the source data into the buffer at the correct offset
+* c) re-encrypt the whole buffer into the destination page
+*/
+   ret = __sev_dbg_decrypt_page(kvm, dst_addr, data, &argp->error);
+   if (ret)
+   goto err_3;
+   d_off = dst_addr & (PAGE_SIZE - 1);
+
+   if (copy_from_user(data + d_off,
+   (void __user *)src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }
+
+   encrypt->length = PAGE_SIZE;
+   encrypt->src_addr = __psp_pa(data);
+   encrypt->dst_addr = __sev_page_pa(inpages[0]);
+   } else {
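+   /* aligned: encrypt the new bytes in place at the page offset */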
+   if (copy_from_user(data, (void __user *)src_addr, len)) {
+   ret = -EFAULT;
+   goto err_3;
+   }
+
+   d_off = dst_addr & (PAGE_SIZE - 1);
+   encrypt->length = len;
+   encrypt->src_addr = __psp_pa(data);
+   encrypt->dst_addr = __sev_page_pa(inpages[0]);
+   encrypt->dst_addr += d_off;
+   }
+
+   encrypt->handle = sev_get_handle(kvm);
+   ret = sev_issue_cmd(kvm, SEV_CMD_DBG_ENCRYPT, encrypt, &argp->error);
+err_3:
+   free_page((unsigned long)data);
+err_2:
+   kfree(encrypt);
+err_1:
+   sev_unpin_memory(inpages, npages);
+   return ret;
+}
+
 static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
 {
int r = -ENOTTY;
@@ -6089,6 +6172,10 @@ static int amd_memory_encryption_cmd(struct kvm *kvm, void __user *argp)
r = sev_dbg_decrypt(kvm, &sev_cmd);
break;
}
+   case KVM_SEV_DBG_ENCRYPT: {
+   r = sev_dbg_encrypt(kvm, &sev_cmd);
+   break;
+   }
default:
break;
}



[RFC PATCH v2 04/32] KVM: SVM: Add SEV feature definitions to KVM

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Define a new KVM CPU feature for Secure Encrypted Virtualization (SEV).
The kernel will check for the presence of this feature to determine if
it is running with SEV active.

Define the SEV enable bit for the VMCB control structure. The hypervisor
will use this bit to enable SEV in the guest.
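
Neither definition is wired up here; later patches in the series use
them roughly as follows (an illustrative sketch only):

    /* guest side: detect SEV via the paravirtualized feature bit */
    if (kvm_para_has_feature(KVM_FEATURE_SEV))
        sev_enabled = 1;

    /* host side: mark the guest's VMCB as SEV-enabled */
    svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE;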

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/svm.h   |1 +
 arch/x86/include/uapi/asm/kvm_para.h |1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 2aca535..fba2a7b 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -137,6 +137,7 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
 #define SVM_VM_CR_SVM_DIS_MASK  0x0010ULL
 
 #define SVM_NESTED_CTL_NP_ENABLE   BIT(0)
+#define SVM_NESTED_CTL_SEV_ENABLE  BIT(1)
 
 struct __attribute__ ((__packed__)) vmcb_seg {
u16 selector;
diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
index 1421a65..bc2802f 100644
--- a/arch/x86/include/uapi/asm/kvm_para.h
+++ b/arch/x86/include/uapi/asm/kvm_para.h
@@ -24,6 +24,7 @@
 #define KVM_FEATURE_STEAL_TIME 5
 #define KVM_FEATURE_PV_EOI 6
 #define KVM_FEATURE_PV_UNHALT  7
+#define KVM_FEATURE_SEV8
 
 /* The last 8 bits are used to indicate how to interpret the flags field
  * in pvclock structure. If no bits are set, all flags are ignored.



[RFC PATCH v2 02/32] x86: Secure Encrypted Virtualization (SEV) support

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Provide support for Secure Encrypted Virtualization (SEV). This initial
support defines a flag that is used by the kernel to determine if it is
running with SEV active.
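
With both helpers in place, callers can distinguish the two modes
explicitly; a minimal usage sketch:

    if (sev_active()) {
        /* SEV guest: even BOOT data (EFI, setup data) is encrypted */
    } else if (sme_active()) {
        /* SME host: kernel memory is encrypted, BOOT data is not */
    }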

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |   14 +-
 arch/x86/mm/mem_encrypt.c  |3 +++
 include/linux/mem_encrypt.h|6 ++
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 1fd5426..9799835 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -20,10 +20,16 @@
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
+extern unsigned int sev_enabled;
 
 static inline bool sme_active(void)
 {
-   return (sme_me_mask) ? true : false;
+   return (sme_me_mask && !sev_enabled) ? true : false;
+}
+
+static inline bool sev_active(void)
+{
+   return (sme_me_mask && sev_enabled) ? true : false;
 }
 
 static inline u64 sme_dma_mask(void)
@@ -53,6 +59,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #ifndef sme_me_mask
 #define sme_me_mask0UL
+#define sev_enabled0
 
 static inline bool sme_active(void)
 {
@@ -64,6 +71,11 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+   return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
return 0;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index c5062e1..090419b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -34,6 +34,9 @@ void __init __early_pgtable_flush(void);
 unsigned long sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
+unsigned int sev_enabled __section(.data) = 0;
+EXPORT_SYMBOL_GPL(sev_enabled);
+
 /* Buffer used for early in-place encryption by BSP, no locking needed */
 static char sme_early_buffer[PAGE_SIZE] __aligned(PAGE_SIZE);
 
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 913cf80..4b47c73 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -23,6 +23,7 @@
 
 #ifndef sme_me_mask
 #define sme_me_mask0UL
+#define sev_enabled0
 
 static inline bool sme_active(void)
 {
@@ -34,6 +35,11 @@ static inline u64 sme_dma_mask(void)
return 0ULL;
 }
 
+static inline bool sev_active(void)
+{
+   return false;
+}
+
 static inline int set_memory_encrypted(unsigned long vaddr, int numpages)
 {
return 0;



[RFC PATCH v2 09/32] x86: Change early_ioremap to early_memremap for BOOT data

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

In order to map BOOT data with the proper encryption bit, the
early_ioremap() function calls are changed to early_memremap() calls.
This allows the proper access for both SME and SEV.
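
The two APIs also differ in their return types, which is worth keeping
in mind when reading the conversions; a sketch of the existing contracts
(not changed by this patch):

    void __iomem *io = early_ioremap(pa, len);  /* device semantics,
                                                   mapped decrypted */
    void *p = early_memremap(pa, len);          /* RAM semantics, can
                                                   keep _PAGE_ENC */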

Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/acpi/boot.c |4 ++--
 arch/x86/kernel/mpparse.c   |   10 +-
 drivers/sfi/sfi_core.c  |6 +++---
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 35174c6..468c25a 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -124,7 +124,7 @@ char *__init __acpi_map_table(unsigned long phys, unsigned long size)
if (!phys || !size)
return NULL;
 
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size);
 }
 
 void __init __acpi_unmap_table(char *map, unsigned long size)
@@ -132,7 +132,7 @@ void __init __acpi_unmap_table(char *map, unsigned long size)
if (!map || !size)
return;
 
-   early_iounmap(map, size);
+   early_memunmap(map, size);
 }
 
 #ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
index 0d904d7..fd37f39 100644
--- a/arch/x86/kernel/mpparse.c
+++ b/arch/x86/kernel/mpparse.c
@@ -436,9 +436,9 @@ static unsigned long __init get_mpc_size(unsigned long physptr)
struct mpc_table *mpc;
unsigned long size;
 
-   mpc = early_ioremap(physptr, PAGE_SIZE);
+   mpc = early_memremap(physptr, PAGE_SIZE);
size = mpc->length;
-   early_iounmap(mpc, PAGE_SIZE);
+   early_memunmap(mpc, PAGE_SIZE);
apic_printk(APIC_VERBOSE, "  mpc: %lx-%lx\n", physptr, physptr + size);
 
return size;
@@ -450,7 +450,7 @@ static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
unsigned long size;
 
size = get_mpc_size(mpf->physptr);
-   mpc = early_ioremap(mpf->physptr, size);
+   mpc = early_memremap(mpf->physptr, size);
/*
 * Read the physical hardware table.  Anything here will
 * override the defaults.
@@ -461,10 +461,10 @@ static int __init check_physptr(struct mpf_intel *mpf, unsigned int early)
 #endif
pr_err("BIOS bug, MP table errors detected!...\n");
pr_cont("... disabling SMP support. (tell your hw vendor)\n");
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
return -1;
}
-   early_iounmap(mpc, size);
+   early_memunmap(mpc, size);
 
if (early)
return -1;
diff --git a/drivers/sfi/sfi_core.c b/drivers/sfi/sfi_core.c
index 296db7a..d00ae3f 100644
--- a/drivers/sfi/sfi_core.c
+++ b/drivers/sfi/sfi_core.c
@@ -92,7 +92,7 @@ static struct sfi_table_simple *syst_va __read_mostly;
 static u32 sfi_use_ioremap __read_mostly;
 
 /*
- * sfi_un/map_memory calls early_ioremap/iounmap which is a __init function
+ * sfi_un/map_memory calls early_memremap/memunmap which is a __init function
  * and introduces section mismatch. So use __ref to make it calm.
  */
 static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
@@ -103,7 +103,7 @@ static void __iomem * __ref sfi_map_memory(u64 phys, u32 size)
if (sfi_use_ioremap)
return ioremap_cache(phys, size);
else
-   return early_ioremap(phys, size);
+   return early_memremap(phys, size);
 }
 
 static void __ref sfi_unmap_memory(void __iomem *virt, u32 size)
@@ -114,7 +114,7 @@ static void __iomem * __ref sfi_unmap_memory(void __iomem *virt, u32 size)
if (sfi_use_ioremap)
iounmap(virt);
else
-   early_iounmap(virt, size);
+   early_memunmap(virt, size);
 }
 
 static void sfi_print_table_header(unsigned long long pa,



[RFC PATCH v2 08/32] x86: Use PAGE_KERNEL protection for ioremap of memory page

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

In order for memory pages to be properly mapped when SEV is active, we
need to use the PAGE_KERNEL protection attribute as the base protection.
This will ensure that the memory mapping of, e.g., ACPI tables receives the
proper mapping attributes.
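
The effect, condensed from the ioremap hunk below: when the target pfn
is real memory rather than MMIO, the encryption bit is folded into the
protection under SEV:

    prot = PAGE_KERNEL_IO;
    if (sev_active() && page_is_mem(pfn))   /* memory, not MMIO */
        prot = __pgprot(pgprot_val(prot) | _PAGE_ENC);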

Signed-off-by: Tom Lendacky 
---
 arch/x86/mm/ioremap.c |8 
 include/linux/mm.h|1 +
 kernel/resource.c |   40 
 3 files changed, 49 insertions(+)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c400ab5..481c999 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -151,7 +151,15 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
pcm = new_pcm;
}
 
+   /*
+* If the page being mapped is in memory and SEV is active then
+* make sure the memory encryption attribute is enabled in the
+* resulting mapping.
+*/
prot = PAGE_KERNEL_IO;
+   if (sev_active() && page_is_mem(pfn))
+   prot = __pgprot(pgprot_val(prot) | _PAGE_ENC);
+
switch (pcm) {
case _PAGE_CACHE_MODE_UC:
default:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b84615b..825df27 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -445,6 +445,7 @@ static inline int get_page_unless_zero(struct page *page)
 }
 
 extern int page_is_ram(unsigned long pfn);
+extern int page_is_mem(unsigned long pfn);
 
 enum {
REGION_INTERSECTS,
diff --git a/kernel/resource.c b/kernel/resource.c
index 9b5f044..db56ba3 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -518,6 +518,46 @@ int __weak page_is_ram(unsigned long pfn)
 }
 EXPORT_SYMBOL_GPL(page_is_ram);
 
+/*
+ * This function returns true if the target memory is marked as
+ * IORESOURCE_MEM and IORESOURCE_BUSY and described as other than
+ * IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES).
+ */
+static int walk_mem_range(unsigned long start_pfn, unsigned long nr_pages)
+{
+   struct resource res;
+   unsigned long pfn, end_pfn;
+   u64 orig_end;
+   int ret = -1;
+
+   res.start = (u64) start_pfn << PAGE_SHIFT;
+   res.end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+   res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+   orig_end = res.end;
+   while ((res.start < res.end) &&
+   (find_next_iomem_res(&res, IORES_DESC_NONE, true) >= 0)) {
+   pfn = (res.start + PAGE_SIZE - 1) >> PAGE_SHIFT;
+   end_pfn = (res.end + 1) >> PAGE_SHIFT;
+   if (end_pfn > pfn)
+   ret = (res.desc != IORES_DESC_NONE) ? 1 : 0;
+   if (ret)
+   break;
+   res.start = res.end + 1;
+   res.end = orig_end;
+   }
+   return ret;
+}
+
+/*
+ * This generic page_is_mem() returns true if the specified address is
+ * registered as memory in the iomem_resource list.
+ */
+int __weak page_is_mem(unsigned long pfn)
+{
+   return walk_mem_range(pfn, 1) == 1;
+}
+EXPORT_SYMBOL_GPL(page_is_mem);
+
 /**
  * region_intersects() - determine intersection of region with known resources
  * @start: region start address



[RFC PATCH v2 07/32] x86/efi: Access EFI data as encrypted when SEV is active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

EFI data is encrypted when the kernel is run under SEV. Update the
page table references to be sure the EFI memory areas are accessed
encrypted.
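
Each mapping site gets the same two-line treatment (the diff below
applies it at four spots in efi_64.c):

    if (sev_active())
        pf |= _PAGE_ENC;    /* keep EFI data mapped encrypted */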

Signed-off-by: Tom Lendacky 
Signed-off-by: Brijesh Singh 
---
 arch/x86/platform/efi/efi_64.c |   15 ++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 2d8674d..9a76ed8 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -45,6 +45,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /*
  * We allocate runtime services regions bottom-up, starting from -4G, i.e.
@@ -286,7 +287,10 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
 * as trim_bios_range() will reserve the first page and isolate it away
 * from memory allocators anyway.
 */
-   if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, _PAGE_RW)) {
+   pf = _PAGE_RW;
+   if (sev_active())
+   pf |= _PAGE_ENC;
+   if (kernel_map_pages_in_pgd(pgd, 0x0, 0x0, 1, pf)) {
pr_err("Failed to create 1:1 mapping for the first page!\n");
return 1;
}
@@ -329,6 +333,9 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va)
if (!(md->attribute & EFI_MEMORY_WB))
flags |= _PAGE_PCD;
 
+   if (sev_active())
+   flags |= _PAGE_ENC;
+
pfn = md->phys_addr >> PAGE_SHIFT;
if (kernel_map_pages_in_pgd(pgd, pfn, va, md->num_pages, flags))
pr_warn("Error mapping PA 0x%llx -> VA 0x%llx!\n",
@@ -455,6 +462,9 @@ static int __init efi_update_mem_attr(struct mm_struct *mm, efi_memory_desc_t *md)
if (!(md->attribute & EFI_MEMORY_RO))
pf |= _PAGE_RW;
 
+   if (sev_active())
+   pf |= _PAGE_ENC;
+
return efi_update_mappings(md, pf);
 }
 
@@ -506,6 +516,9 @@ void __init efi_runtime_update_mappings(void)
(md->type != EFI_RUNTIME_SERVICES_CODE))
pf |= _PAGE_RW;
 
+   if (sev_active())
+   pf |= _PAGE_ENC;
+
efi_update_mappings(md, pf);
}
 }



[RFC PATCH v2 19/32] crypto: ccp: Introduce the AMD Secure Processor device

2017-03-02 Thread Brijesh Singh
The CCP device is part of the AMD Secure Processor. In order to expand the
usage of the AMD Secure Processor, create a framework that allows functional
components of the AMD Secure Processor to be initialized and handled
appropriately.
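
Illustrative only - the exact interfaces live in the new sp-dev.h, but
the shape is that the bus-level probe allocates the common sp_device
and the functional units are brought up beneath it (a hypothetical
condensation, not the code as submitted):

    static int sp_pci_probe(struct pci_dev *pdev,
                            const struct pci_device_id *id)
    {
        struct sp_device *sp;

        sp = sp_alloc_struct(&pdev->dev);   /* common SP core */
        if (!sp)
            return -ENOMEM;

        /* ... map BARs, request the IRQ ... */

        return sp_init(sp);   /* initializes CCP (and, later, PSP) */
    }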

Signed-off-by: Brijesh Singh 
Signed-off-by: Tom Lendacky 
---
 drivers/crypto/Kconfig   |   10 +
 drivers/crypto/ccp/Kconfig   |   43 +++--
 drivers/crypto/ccp/Makefile  |8 -
 drivers/crypto/ccp/ccp-dev-v3.c  |   86 +-
 drivers/crypto/ccp/ccp-dev-v5.c  |   73 -
 drivers/crypto/ccp/ccp-dev.c |  137 +---
 drivers/crypto/ccp/ccp-dev.h |   35 
 drivers/crypto/ccp/sp-dev.c  |  308 
 drivers/crypto/ccp/sp-dev.h  |  140 
 drivers/crypto/ccp/sp-pci.c  |  324 ++
 drivers/crypto/ccp/sp-platform.c |  268 +++
 include/linux/ccp.h  |3 
 12 files changed, 1240 insertions(+), 195 deletions(-)
 create mode 100644 drivers/crypto/ccp/sp-dev.c
 create mode 100644 drivers/crypto/ccp/sp-dev.h
 create mode 100644 drivers/crypto/ccp/sp-pci.c
 create mode 100644 drivers/crypto/ccp/sp-platform.c

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 7956478..d31b469 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -456,14 +456,14 @@ config CRYPTO_DEV_ATMEL_SHA
  To compile this driver as a module, choose M here: the module
  will be called atmel-sha.
 
-config CRYPTO_DEV_CCP
-   bool "Support for AMD Cryptographic Coprocessor"
+config CRYPTO_DEV_SP
+   bool "Support for AMD Secure Processor"
depends on ((X86 && PCI) || (ARM64 && (OF_ADDRESS || ACPI))) && HAS_IOMEM
help
- The AMD Cryptographic Coprocessor provides hardware offload support
- for encryption, hashing and related operations.
+ The AMD Secure Processor provides hardware offload support for memory
+ encryption in virtualization, and for cryptographic hashing and related operations.
 
-if CRYPTO_DEV_CCP
+if CRYPTO_DEV_SP
source "drivers/crypto/ccp/Kconfig"
 endif
 
diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 2238f77..bc08f03 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -1,26 +1,37 @@
-config CRYPTO_DEV_CCP_DD
-   tristate "Cryptographic Coprocessor device driver"
-   depends on CRYPTO_DEV_CCP
-   default m
-   select HW_RANDOM
-   select DMA_ENGINE
-   select DMADEVICES
-   select CRYPTO_SHA1
-   select CRYPTO_SHA256
-   help
- Provides the interface to use the AMD Cryptographic Coprocessor
- which can be used to offload encryption operations such as SHA,
- AES and more. If you choose 'M' here, this module will be called
- ccp.
-
 config CRYPTO_DEV_CCP_CRYPTO
tristate "Encryption and hashing offload support"
-   depends on CRYPTO_DEV_CCP_DD
+   depends on CRYPTO_DEV_SP_DD
default m
select CRYPTO_HASH
select CRYPTO_BLKCIPHER
select CRYPTO_AUTHENC
+   select CRYPTO_DEV_CCP
help
  Support for using the cryptographic API with the AMD Cryptographic
  Coprocessor. This module supports offload of SHA and AES algorithms.
  If you choose 'M' here, this module will be called ccp_crypto.
+
+config CRYPTO_DEV_SP_DD
+   tristate "Secure Processor device driver"
+   depends on CRYPTO_DEV_SP
+   default m
+   help
+ Provides the interface to use the AMD Secure Processor. The
+ AMD Secure Processor supports the Platform Security Processor (PSP)
+ and Cryptographic Coprocessor (CCP). If you choose 'M' here, this
+ module will be called ccp.
+
+if CRYPTO_DEV_SP_DD
+config CRYPTO_DEV_CCP
+   bool "Cryptographic Coprocessor interface"
+   default y
+   select HW_RANDOM
+   select DMA_ENGINE
+   select DMADEVICES
+   select CRYPTO_SHA1
+   select CRYPTO_SHA256
+   help
+ Provides the interface to use the AMD Cryptographic Coprocessor
+ which can be used to offload encryption operations such as SHA,
+ AES and more.
+endif
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index 346ceb8..8127e18 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,11 +1,11 @@
-obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o \
+obj-$(CONFIG_CRYPTO_DEV_SP_DD) += ccp.o
+ccp-objs := sp-dev.o sp-platform.o
+ccp-$(CONFIG_PCI) += sp-pci.o
+ccp-$(CONFIG_CRYPTO_DEV_CCP) += ccp-dev.o \
ccp-ops.o \
ccp-dev-v3.o \
ccp-dev-v5.o \
-   ccp-platform.o \
ccp-dmaengine.o
-ccp-$(CONFIG_PCI) += ccp-pci.o
 
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp

[RFC PATCH v2 11/32] x86: Unroll string I/O when SEV is active

2017-03-02 Thread Brijesh Singh
From: Tom Lendacky 

Secure Encrypted Virtualization (SEV) does not support string I/O, so
unroll the string I/O operation into a loop operating on one element at
a time.
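
Conceptually, with SEV active an outsw(port, buf, count) now expands to
a loop of single-word port writes instead of one "rep; outsw"
instruction, e.g.:

    const u16 *value = buf;

    while (count--)
        outw(*value++, port);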

Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/io.h |   26 ++
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 833f7cc..b596114 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -327,14 +327,32 @@ static inline unsigned type in##bwl##_p(int port) \
\
 static inline void outs##bwl(int port, const void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; outs" #bwl   \
-: "+S"(addr), "+c"(count) : "d"(port));\
+   if (sev_active()) { \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   out##bwl(*value, port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; outs" #bwl   \
+: "+S"(addr), "+c"(count) : "d"(port));\
+   }   \
 }  \
\
 static inline void ins##bwl(int port, void *addr, unsigned long count) \
 {  \
-   asm volatile("rep; ins" #bwl\
-: "+D"(addr), "+c"(count) : "d"(port));\
+   if (sev_active()) { \
+   unsigned type *value = (unsigned type *)addr;   \
+   while (count) { \
+   *value = in##bwl(port); \
+   value++;\
+   count--;\
+   }   \
+   } else {\
+   asm volatile("rep; ins" #bwl\
+: "+D"(addr), "+c"(count) : "d"(port));\
+   }   \
 }
 
 BUILDIO(b, b, char)



Re: [RFC PATCH v4] IV Generation algorithms for dm-crypt

2017-03-02 Thread Gilad Ben-Yossef
On Wed, Mar 1, 2017 at 3:21 PM, Ondrej Mosnacek  wrote:
> 2017-03-01 13:42 GMT+01:00 Gilad Ben-Yossef :
>
> Wouldn't adopting a bulk request API (something like what I tried to
> do here [1]) that allows users to supply multiple messages, each with
> their own IV, fulfill this purpose? That way, we wouldn't need to
> introduce any new modes into Crypto API and the drivers/accelerators
> would only need to provide bulk implementations of common modes
> (xts(aes), cbc(aes), ...) to provide better performance for dm-crypt
> (and possibly other users, too).
>
> I'm not sure how exactly these crypto accelerators work, but wouldn't
> it help if the drivers simply get more messages (in our case sectors)
> in a single call? I wonder, would (efficiently) supporting such a
> scheme require changes in the HW itself or could it be achieved just
> by modifying driver code (let's say specifically for your CryptoCell
> accelerator)?
>
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg23007.html
>


From a general perspective - that is, things expected to be true not
just for CryptoCell but for most HW crypto engines - you want two
things: for the HW engine to be able to burst work for a long time and
then rest for a long time, rather than a stop-and-go scheme (engine
utilization), and for the average IO transaction to be relatively long
(bus utilization).

So, a big cluster size (i.e. Milan's proposal) works great - you get both.

Submitting a series of sequential small clusters where the HW can
calculate the IV (e.g. Binoy's proposal) works great if the HW
supports it - you get both.

A batched series of small clusters + IV is less favorable - if your HW
engine has lots of parallel context processing (this is expensive in
hardware) you might enjoy good engine utilization, but the bus
utilization will be low - lots of small transactions.

Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


Re: [PATCH v3 1/2] crypto: vmx - Use skcipher for cbc fallback

2017-03-02 Thread Herbert Xu
On Wed, Mar 01, 2017 at 10:58:20AM -0300, Paulo Flabiano Smorigo wrote:
> Signed-off-by: Paulo Flabiano Smorigo 
> ---

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt