[PATCH v2 3/4] crypto: talitos - move talitos_{edesc,request} to request private ctx

2015-03-13 Thread Horia Geanta
talitos_edesc and talitos_request structures are moved to crypto
request private context.

This avoids allocating memory in the driver in the cases when data
(assoc, in, out) is not scattered.

It is also an intermediary step towards adding backlogging support.
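
As a rough illustration of what this relies on (not part of the patch, and the exact sizing is an assumption): the driver's cra_init would have to reserve enough request-context space so that the *_request_alloc() helpers allocate it up front, along the lines of:

	/* sketch only: reserve room for the per-request driver data */
	static int talitos_cra_init_sketch(struct crypto_tfm *tfm)
	{
		tfm->crt_ablkcipher.reqsize = sizeof(struct talitos_edesc);
		return 0;
	}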

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 drivers/crypto/talitos.c | 467 +--
 drivers/crypto/talitos.h |  54 +-
 2 files changed, 294 insertions(+), 227 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 857414afa29a..c184987dfcc7 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -182,23 +182,23 @@ static int init_device(struct device *dev)
return 0;
 }
 
-/**
- * talitos_submit - submits a descriptor to the device for processing
- * @dev:   the SEC device to be used
- * @ch:the SEC device channel to be used
- * @desc:  the descriptor to be processed by the device
- * @callback:  whom to call when processing is complete
- * @context:   a handle for use by caller (optional)
- *
- * desc must contain valid dma-mapped (bus physical) address pointers.
- * callback must check err and feedback in descriptor header
- * for device processing status.
- */
-int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
-  void (*callback)(struct device *dev,
-   struct talitos_desc *desc,
-   void *context, int error),
-  void *context)
+static struct talitos_request *to_talitos_req(struct crypto_async_request *areq)
+{
+   switch (crypto_tfm_alg_type(areq->tfm)) {
+   case CRYPTO_ALG_TYPE_ABLKCIPHER:
+   return ablkcipher_request_ctx(ablkcipher_request_cast(areq));
+   case CRYPTO_ALG_TYPE_AHASH:
+   return ahash_request_ctx(ahash_request_cast(areq));
+   case CRYPTO_ALG_TYPE_AEAD:
+   return aead_request_ctx(container_of(areq, struct aead_request,
+        base));
+   default:
+   return ERR_PTR(-EINVAL);
+   }
+}
+
+int talitos_submit(struct device *dev, int ch,
+  struct crypto_async_request *areq)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
struct talitos_request *request;
@@ -214,19 +214,20 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
}
 
head = priv->chan[ch].head;
-   request = &priv->chan[ch].fifo[head];
 
-   /* map descriptor and save caller data */
-   request->dma_desc = dma_map_single(dev, desc, sizeof(*desc),
+   request = to_talitos_req(areq);
+   if (IS_ERR(request))
+   return PTR_ERR(request);
+
+   request->dma_desc = dma_map_single(dev, request->desc,
+  sizeof(*request->desc),
   DMA_BIDIRECTIONAL);
-   request->callback = callback;
-   request->context = context;
 
/* increment fifo head */
priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
smp_wmb();
-   request->desc = desc;
+   priv->chan[ch].fifo[head] = request;
 
/* GO! */
wmb();
@@ -247,15 +248,15 @@ EXPORT_SYMBOL(talitos_submit);
 static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
-   struct talitos_request *request, saved_req;
+   struct talitos_request *request;
unsigned long flags;
int tail, status;
 
spin_lock_irqsave(&priv->chan[ch].tail_lock, flags);
 
tail = priv->chan[ch].tail;
-   while (priv->chan[ch].fifo[tail].desc) {
-   request = &priv->chan[ch].fifo[tail];
+   while (priv->chan[ch].fifo[tail]) {
+   request = priv->chan[ch].fifo[tail];
 
/* descriptors with their done bits set don't get the error */
rmb();
@@ -271,14 +272,9 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 sizeof(struct talitos_desc),
 DMA_BIDIRECTIONAL);
 
-   /* copy entries so we can call callback outside lock */
-   saved_req.desc = request->desc;
-   saved_req.callback = request->callback;
-   saved_req.context = request->context;
-
/* release request entry in fifo */
smp_wmb();
-   request->desc = NULL;
+   priv->chan[ch].fifo[tail] = NULL;
 
/* increment fifo tail */
priv->chan[ch].tail = (tail + 1) & (priv->fifo_len - 1);
@@ -287,8 +283,8 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 
atomic_dec(&priv->chan[ch].submit_count);
 
 
-   saved_req.callback(dev, saved_req.desc, 

Re: [PATCH 4/4] crypto: talitos - add software backlog queue handling

2015-03-13 Thread Tom Lendacky

On 03/13/2015 12:16 PM, Horia Geanta wrote:

I was running into situations where the hardware FIFO was filling up, and
the code was returning EAGAIN to dm-crypt and just dropping the submitted
crypto request.

This adds support in talitos for a software backlog queue. When requests
can't be queued to the hardware immediately EBUSY is returned. The queued
requests are dispatched to the hardware in received order as hardware FIFO
slots become available.

Signed-off-by: Martin Hicks m...@bork.org
Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
  drivers/crypto/talitos.c | 107 +--
  drivers/crypto/talitos.h |   2 +
  2 files changed, 97 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index c184987dfcc7..d4679030d23c 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -197,23 +197,41 @@ static struct talitos_request *to_talitos_req(struct crypto_async_request *areq)
}
  }

-int talitos_submit(struct device *dev, int ch,
-  struct crypto_async_request *areq)
+/*
+ * Enqueue to HW queue a request, coming either from upper layer or taken from
+ * SW queue. When drawing from SW queue, check if there are backlogged requests
+ * and notify their producers.
+ */
+int __talitos_handle_queue(struct device *dev, int ch,
+  struct crypto_async_request *areq,
+  unsigned long *irq_flags)
  {
struct talitos_private *priv = dev_get_drvdata(dev);
struct talitos_request *request;
-   unsigned long flags;
int head;

-   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
-
if (!atomic_inc_not_zero(&priv->chan[ch].submit_count)) {
/* h/w fifo is full */
-   spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
-   return -EAGAIN;
+   if (!areq)
+   return -EBUSY;
+
+   /* Try to backlog request (if allowed) */
+   return crypto_enqueue_request(&priv->chan[ch].queue, areq);


I'd remembered something about how hardware drivers should use their
own list element for queuing, searched back and found this:

http://marc.info/?l=linux-crypto-vgerm=137609769605139w=2

Thanks,
Tom


}

-   head = priv->chan[ch].head;
+   if (!areq) {
+   struct crypto_async_request *backlog =
+   crypto_get_backlog(&priv->chan[ch].queue);
+
+   /* Dequeue the oldest request */
+   areq = crypto_dequeue_request(&priv->chan[ch].queue);
+   if (!areq)
+   return 0;
+
+   /* Mark a backlogged request as in-progress */
+   if (backlog)
+   backlog->complete(backlog, -EINPROGRESS);
+   }

request = to_talitos_req(areq);
if (IS_ERR(request))
@@ -224,6 +242,7 @@ int talitos_submit(struct device *dev, int ch,
   DMA_BIDIRECTIONAL);

/* increment fifo head */
+   head = priv->chan[ch].head;
priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
smp_wmb();
@@ -236,14 +255,66 @@ int talitos_submit(struct device *dev, int ch,
out_be32(priv->chan[ch].reg + TALITOS_FF_LO,
 lower_32_bits(request->dma_desc));

+   return -EINPROGRESS;
+}
+
+int talitos_submit(struct device *dev, int ch,
+  struct crypto_async_request *areq)
+{
+   struct talitos_private *priv = dev_get_drvdata(dev);
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+   /*
+* Hidden assumption: we maintain submission order separately for
+* requests that may be backlogged and those that may not. For e.g. even
+* if SW queue has some requests, we won't drop an incoming request that
+* may not be backlogged, but enqueue it in the HW queue (in front of
+* pending ones).
+*/
+   if (areq->flags & CRYPTO_TFM_REQ_MAY_BACKLOG &&
+   priv->chan[ch].queue.qlen) {
+   /*
+* There are pending requests in the SW queue. Since we want to
+* maintain the order of requests, we cannot enqueue in the HW
+* queue. Thus put this new request in SW queue and dispatch
+* the oldest backlogged request to the hardware.
+*/
+   ret = crypto_enqueue_request(&priv->chan[ch].queue, areq);
+   __talitos_handle_queue(dev, ch, NULL, &flags);
+   } else {
+   ret = __talitos_handle_queue(dev, ch, areq, &flags);
+   }
+
spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);

-   return -EINPROGRESS;
+   return ret;
  }
  EXPORT_SYMBOL(talitos_submit);

+static void talitos_handle_queue(struct device *dev, int ch)
+{
+   struct talitos_private *priv = 

[PATCH 2/4] net: esp: check CRYPTO_TFM_REQ_DMA flag when allocating crypto request

2015-03-13 Thread Horia Geanta
Some crypto backends might require the requests' private contexts
to be allocated in DMA-able memory.

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---

Depends on patch 1/4 (sent only on crypto list) that adds the
CRYPTO_TFM_REQ_DMA flag.

 net/ipv4/esp4.c | 7 ++-
 net/ipv6/esp6.c | 7 ++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 60173d4d3a0e..3e6ddece0cbe 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -38,6 +38,7 @@ static u32 esp4_get_mtu(struct xfrm_state *x, int mtu);
 static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
 {
unsigned int len;
+   gfp_t gfp = GFP_ATOMIC;
 
len = seqhilen;
 
@@ -54,7 +55,11 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
 
len += sizeof(struct scatterlist) * nfrags;
 
-   return kmalloc(len, GFP_ATOMIC);
+   if (crypto_aead_reqsize(aead) &&
+   (crypto_aead_get_flags(aead) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
+   return kmalloc(len, gfp);
 }
 
 static inline __be32 *esp_tmp_seqhi(void *tmp)
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index e48f2c7c5c59..0d173eedad4e 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -65,6 +65,7 @@ static u32 esp6_get_mtu(struct xfrm_state *x, int mtu);
 static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqihlen)
 {
unsigned int len;
+   gfp_t gfp = GFP_ATOMIC;
 
len = seqihlen;
 
@@ -81,7 +82,11 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqihlen)
 
len += sizeof(struct scatterlist) * nfrags;
 
-   return kmalloc(len, GFP_ATOMIC);
+   if (crypto_aead_reqsize(aead) &&
+   (crypto_aead_get_flags(aead) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
+   return kmalloc(len, gfp);
 }
 
 static inline __be32 *esp_tmp_seqhi(void *tmp)
-- 
1.8.3.1



[PATCH 3/4] crypto: talitos - move talitos_{edesc,request} to request private ctx

2015-03-13 Thread Horia Geanta
talitos_edesc and talitos_request structures are moved to crypto
request private context.

This avoids allocating memory in the driver in the cases when data
(assoc, in, out) is not scattered.

It is also an intermediary step towards adding backlogging support.

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 drivers/crypto/talitos.c | 467 +--
 drivers/crypto/talitos.h |  54 +-
 2 files changed, 294 insertions(+), 227 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index 857414afa29a..c184987dfcc7 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -182,23 +182,23 @@ static int init_device(struct device *dev)
return 0;
 }
 
-/**
- * talitos_submit - submits a descriptor to the device for processing
- * @dev:   the SEC device to be used
- * @ch:the SEC device channel to be used
- * @desc:  the descriptor to be processed by the device
- * @callback:  whom to call when processing is complete
- * @context:   a handle for use by caller (optional)
- *
- * desc must contain valid dma-mapped (bus physical) address pointers.
- * callback must check err and feedback in descriptor header
- * for device processing status.
- */
-int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
-  void (*callback)(struct device *dev,
-   struct talitos_desc *desc,
-   void *context, int error),
-  void *context)
+static struct talitos_request *to_talitos_req(struct crypto_async_request *areq)
+{
+   switch (crypto_tfm_alg_type(areq->tfm)) {
+   case CRYPTO_ALG_TYPE_ABLKCIPHER:
+   return ablkcipher_request_ctx(ablkcipher_request_cast(areq));
+   case CRYPTO_ALG_TYPE_AHASH:
+   return ahash_request_ctx(ahash_request_cast(areq));
+   case CRYPTO_ALG_TYPE_AEAD:
+   return aead_request_ctx(container_of(areq, struct aead_request,
+        base));
+   default:
+   return ERR_PTR(-EINVAL);
+   }
+}
+
+int talitos_submit(struct device *dev, int ch,
+  struct crypto_async_request *areq)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
struct talitos_request *request;
@@ -214,19 +214,20 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc,
}
 
head = priv->chan[ch].head;
-   request = &priv->chan[ch].fifo[head];
 
-   /* map descriptor and save caller data */
-   request->dma_desc = dma_map_single(dev, desc, sizeof(*desc),
+   request = to_talitos_req(areq);
+   if (IS_ERR(request))
+   return PTR_ERR(request);
+
+   request->dma_desc = dma_map_single(dev, request->desc,
+  sizeof(*request->desc),
   DMA_BIDIRECTIONAL);
-   request->callback = callback;
-   request->context = context;
 
/* increment fifo head */
priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
smp_wmb();
-   request->desc = desc;
+   priv->chan[ch].fifo[head] = request;
 
/* GO! */
wmb();
@@ -247,15 +248,15 @@ EXPORT_SYMBOL(talitos_submit);
 static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
-   struct talitos_request *request, saved_req;
+   struct talitos_request *request;
unsigned long flags;
int tail, status;
 
spin_lock_irqsave(&priv->chan[ch].tail_lock, flags);
 
tail = priv->chan[ch].tail;
-   while (priv->chan[ch].fifo[tail].desc) {
-   request = &priv->chan[ch].fifo[tail];
+   while (priv->chan[ch].fifo[tail]) {
+   request = priv->chan[ch].fifo[tail];
 
/* descriptors with their done bits set don't get the error */
rmb();
@@ -271,14 +272,9 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 sizeof(struct talitos_desc),
 DMA_BIDIRECTIONAL);
 
-   /* copy entries so we can call callback outside lock */
-   saved_req.desc = request->desc;
-   saved_req.callback = request->callback;
-   saved_req.context = request->context;
-
/* release request entry in fifo */
smp_wmb();
-   request->desc = NULL;
+   priv->chan[ch].fifo[tail] = NULL;
 
/* increment fifo tail */
priv->chan[ch].tail = (tail + 1) & (priv->fifo_len - 1);
@@ -287,8 +283,8 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch)
 
atomic_dec(&priv->chan[ch].submit_count);
 
-   saved_req.callback(dev, saved_req.desc, 

[PATCH 1/4] crypto: add CRYPTO_TFM_REQ_DMA flag

2015-03-13 Thread Horia Geanta
The CRYPTO_TFM_REQ_DMA flag can be used by backend implementations to
indicate to crypto API the need to allocate GFP_DMA memory
for private contexts of the crypto requests.
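
As a rough sketch of the intended use (not part of this patch; the init callback name is hypothetical), a driver whose request context must live in DMA-able memory would set the flag on the transform, and the *_request_alloc() helpers below then add GFP_DMA:

	static int example_cra_init(struct crypto_tfm *tfm)
	{
		/* advertise that the request private ctx must be DMA-able */
		crypto_aead_set_flags(__crypto_aead_cast(tfm), CRYPTO_TFM_REQ_DMA);
		return 0;
	}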

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 include/linux/crypto.h | 9 +
 1 file changed, 9 insertions(+)

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index fb5ef16d6a12..64258c9198d5 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -103,6 +103,7 @@
 #define CRYPTO_TFM_REQ_WEAK_KEY0x0100
 #define CRYPTO_TFM_REQ_MAY_SLEEP   0x0200
 #define CRYPTO_TFM_REQ_MAY_BACKLOG 0x0400
+#define CRYPTO_TFM_REQ_DMA 0x0800
 #define CRYPTO_TFM_RES_WEAK_KEY0x0010
 #define CRYPTO_TFM_RES_BAD_KEY_LEN 0x0020
 #define CRYPTO_TFM_RES_BAD_KEY_SCHED   0x0040
@@ -1108,6 +1109,10 @@ static inline struct ablkcipher_request *ablkcipher_request_alloc(
 {
struct ablkcipher_request *req;
 
+   if (crypto_ablkcipher_reqsize(tfm) &&
+   (crypto_ablkcipher_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
req = kmalloc(sizeof(struct ablkcipher_request) +
  crypto_ablkcipher_reqsize(tfm), gfp);
 
@@ -1471,6 +1476,10 @@ static inline struct aead_request *aead_request_alloc(struct crypto_aead *tfm,
 {
struct aead_request *req;
 
+   if (crypto_aead_reqsize(tfm) &&
+   (crypto_aead_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
req = kmalloc(sizeof(*req) + crypto_aead_reqsize(tfm), gfp);
 
if (likely(req))
-- 
1.8.3.1



[PATCH v2 1/4] crypto: add CRYPTO_TFM_REQ_DMA flag

2015-03-13 Thread Horia Geanta
The CRYPTO_TFM_REQ_DMA flag can be used by backend implementations to
indicate to crypto API the need to allocate GFP_DMA memory
for private contexts of the crypto requests.

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 include/crypto/hash.h  | 4 
 include/linux/crypto.h | 9 +
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 98abda9ed3aa..6e23b27862cd 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -507,6 +507,10 @@ static inline struct ahash_request *ahash_request_alloc(
 {
struct ahash_request *req;
 
+   if (crypto_ahash_reqsize(tfm) &&
+   (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
req = kmalloc(sizeof(struct ahash_request) +
  crypto_ahash_reqsize(tfm), gfp);
 
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index fb5ef16d6a12..64258c9198d5 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -103,6 +103,7 @@
 #define CRYPTO_TFM_REQ_WEAK_KEY0x0100
 #define CRYPTO_TFM_REQ_MAY_SLEEP   0x0200
 #define CRYPTO_TFM_REQ_MAY_BACKLOG 0x0400
+#define CRYPTO_TFM_REQ_DMA 0x0800
 #define CRYPTO_TFM_RES_WEAK_KEY0x0010
 #define CRYPTO_TFM_RES_BAD_KEY_LEN 0x0020
 #define CRYPTO_TFM_RES_BAD_KEY_SCHED   0x0040
@@ -1108,6 +1109,10 @@ static inline struct ablkcipher_request *ablkcipher_request_alloc(
 {
struct ablkcipher_request *req;
 
+   if (crypto_ablkcipher_reqsize(tfm) &&
+   (crypto_ablkcipher_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
req = kmalloc(sizeof(struct ablkcipher_request) +
  crypto_ablkcipher_reqsize(tfm), gfp);
 
@@ -1471,6 +1476,10 @@ static inline struct aead_request *aead_request_alloc(struct crypto_aead *tfm,
 {
struct aead_request *req;
 
+   if (crypto_aead_reqsize(tfm) &&
+   (crypto_aead_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
req = kmalloc(sizeof(*req) + crypto_aead_reqsize(tfm), gfp);
 
if (likely(req))
-- 
1.8.3.1



[PATCH 4/4] crypto: talitos - add software backlog queue handling

2015-03-13 Thread Horia Geanta
I was running into situations where the hardware FIFO was filling up, and
the code was returning EAGAIN to dm-crypt and just dropping the submitted
crypto request.

This adds support in talitos for a software backlog queue. When requests
can't be queued to the hardware immediately EBUSY is returned. The queued
requests are dispatched to the hardware in received order as hardware FIFO
slots become available.
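
For reference, a simplified sketch of the caller-side convention this enables (not part of the patch; names are illustrative). A submitter that sets CRYPTO_TFM_REQ_MAY_BACKLOG via *_request_set_callback() treats -EBUSY as "parked in the backlog" and waits for the completion callback, ignoring the intermediate -EINPROGRESS backlog notification:

	struct wait_ctx {
		struct completion done;
		int err;
	};

	/* completion callback: record status and wake the submitter */
	static void req_done(struct crypto_async_request *areq, int err)
	{
		struct wait_ctx *w = areq->data;

		if (err == -EINPROGRESS)	/* backlogged req reached the HW */
			return;
		w->err = err;
		complete(&w->done);
	}

	static int encrypt_and_wait(struct ablkcipher_request *req, struct wait_ctx *w)
	{
		int ret = crypto_ablkcipher_encrypt(req);

		if (ret == -EINPROGRESS || ret == -EBUSY) {
			wait_for_completion(&w->done);
			ret = w->err;
		}
		return ret;
	}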

Signed-off-by: Martin Hicks m...@bork.org
Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 drivers/crypto/talitos.c | 107 +--
 drivers/crypto/talitos.h |   2 +
 2 files changed, 97 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index c184987dfcc7..d4679030d23c 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -197,23 +197,41 @@ static struct talitos_request *to_talitos_req(struct crypto_async_request *areq)
}
 }
 
-int talitos_submit(struct device *dev, int ch,
-  struct crypto_async_request *areq)
+/*
+ * Enqueue to HW queue a request, coming either from upper layer or taken from
+ * SW queue. When drawing from SW queue, check if there are backlogged requests
+ * and notify their producers.
+ */
+int __talitos_handle_queue(struct device *dev, int ch,
+  struct crypto_async_request *areq,
+  unsigned long *irq_flags)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
struct talitos_request *request;
-   unsigned long flags;
int head;
 
-   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
-
if (!atomic_inc_not_zero(&priv->chan[ch].submit_count)) {
/* h/w fifo is full */
-   spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
-   return -EAGAIN;
+   if (!areq)
+   return -EBUSY;
+
+   /* Try to backlog request (if allowed) */
+   return crypto_enqueue_request(&priv->chan[ch].queue, areq);
}
 
-   head = priv->chan[ch].head;
+   if (!areq) {
+   struct crypto_async_request *backlog =
+   crypto_get_backlog(&priv->chan[ch].queue);
+
+   /* Dequeue the oldest request */
+   areq = crypto_dequeue_request(&priv->chan[ch].queue);
+   if (!areq)
+   return 0;
+
+   /* Mark a backlogged request as in-progress */
+   if (backlog)
+   backlog->complete(backlog, -EINPROGRESS);
+   }
 
request = to_talitos_req(areq);
if (IS_ERR(request))
@@ -224,6 +242,7 @@ int talitos_submit(struct device *dev, int ch,
   DMA_BIDIRECTIONAL);
 
/* increment fifo head */
+   head = priv->chan[ch].head;
priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
smp_wmb();
@@ -236,14 +255,66 @@ int talitos_submit(struct device *dev, int ch,
out_be32(priv->chan[ch].reg + TALITOS_FF_LO,
 lower_32_bits(request->dma_desc));
 
+   return -EINPROGRESS;
+}
+
+int talitos_submit(struct device *dev, int ch,
+  struct crypto_async_request *areq)
+{
+   struct talitos_private *priv = dev_get_drvdata(dev);
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+   /*
+* Hidden assumption: we maintain submission order separately for
+* requests that may be backlogged and those that may not. For e.g. even
+* if SW queue has some requests, we won't drop an incoming request that
+* may not be backlogged, but enqueue it in the HW queue (in front of
+* pending ones).
+*/
+   if (areq->flags & CRYPTO_TFM_REQ_MAY_BACKLOG &&
+   priv->chan[ch].queue.qlen) {
+   /*
+* There are pending requests in the SW queue. Since we want to
+* maintain the order of requests, we cannot enqueue in the HW
+* queue. Thus put this new request in SW queue and dispatch
+* the oldest backlogged request to the hardware.
+*/
+   ret = crypto_enqueue_request(&priv->chan[ch].queue, areq);
+   __talitos_handle_queue(dev, ch, NULL, &flags);
+   } else {
+   ret = __talitos_handle_queue(dev, ch, areq, &flags);
+   }
+
spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
 
-   return -EINPROGRESS;
+   return ret;
 }
 EXPORT_SYMBOL(talitos_submit);
 
+static void talitos_handle_queue(struct device *dev, int ch)
+{
+   struct talitos_private *priv = dev_get_drvdata(dev);
+   unsigned long flags;
+   int ret = -EINPROGRESS;
+
+   if (!priv->chan[ch].queue.qlen)
+   return;
+
+   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+   /* Queue backlogged requests as long as the 

[PATCH v2 4/4] crypto: talitos - add software backlog queue handling

2015-03-13 Thread Horia Geanta
I was running into situations where the hardware FIFO was filling up, and
the code was returning EAGAIN to dm-crypt and just dropping the submitted
crypto request.

This adds support in talitos for a software backlog queue. When requests
can't be queued to the hardware immediately EBUSY is returned. The queued
requests are dispatched to the hardware in received order as hardware FIFO
slots become available.

Signed-off-by: Martin Hicks m...@bork.org
Signed-off-by: Horia Geanta horia.gea...@freescale.com
---
 drivers/crypto/talitos.c | 107 +--
 drivers/crypto/talitos.h |   2 +
 2 files changed, 97 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index c184987dfcc7..d4679030d23c 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -197,23 +197,41 @@ static struct talitos_request *to_talitos_req(struct crypto_async_request *areq)
}
 }
 
-int talitos_submit(struct device *dev, int ch,
-  struct crypto_async_request *areq)
+/*
+ * Enqueue to HW queue a request, coming either from upper layer or taken from
+ * SW queue. When drawing from SW queue, check if there are backlogged requests
+ * and notify their producers.
+ */
+int __talitos_handle_queue(struct device *dev, int ch,
+  struct crypto_async_request *areq,
+  unsigned long *irq_flags)
 {
struct talitos_private *priv = dev_get_drvdata(dev);
struct talitos_request *request;
-   unsigned long flags;
int head;
 
-   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
-
if (!atomic_inc_not_zero(&priv->chan[ch].submit_count)) {
/* h/w fifo is full */
-   spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
-   return -EAGAIN;
+   if (!areq)
+   return -EBUSY;
+
+   /* Try to backlog request (if allowed) */
+   return crypto_enqueue_request(&priv->chan[ch].queue, areq);
}
 
-   head = priv->chan[ch].head;
+   if (!areq) {
+   struct crypto_async_request *backlog =
+   crypto_get_backlog(&priv->chan[ch].queue);
+
+   /* Dequeue the oldest request */
+   areq = crypto_dequeue_request(&priv->chan[ch].queue);
+   if (!areq)
+   return 0;
+
+   /* Mark a backlogged request as in-progress */
+   if (backlog)
+   backlog->complete(backlog, -EINPROGRESS);
+   }
 
request = to_talitos_req(areq);
if (IS_ERR(request))
@@ -224,6 +242,7 @@ int talitos_submit(struct device *dev, int ch,
   DMA_BIDIRECTIONAL);
 
/* increment fifo head */
+   head = priv->chan[ch].head;
priv->chan[ch].head = (priv->chan[ch].head + 1) & (priv->fifo_len - 1);
 
smp_wmb();
@@ -236,14 +255,66 @@ int talitos_submit(struct device *dev, int ch,
out_be32(priv->chan[ch].reg + TALITOS_FF_LO,
 lower_32_bits(request->dma_desc));
 
+   return -EINPROGRESS;
+}
+
+int talitos_submit(struct device *dev, int ch,
+  struct crypto_async_request *areq)
+{
+   struct talitos_private *priv = dev_get_drvdata(dev);
+   unsigned long flags;
+   int ret;
+
+   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+   /*
+* Hidden assumption: we maintain submission order separately for
+* requests that may be backlogged and those that may not. For e.g. even
+* if SW queue has some requests, we won't drop an incoming request that
+* may not be backlogged, but enqueue it in the HW queue (in front of
+* pending ones).
+*/
+   if (areq->flags & CRYPTO_TFM_REQ_MAY_BACKLOG &&
+   priv->chan[ch].queue.qlen) {
+   /*
+* There are pending requests in the SW queue. Since we want to
+* maintain the order of requests, we cannot enqueue in the HW
+* queue. Thus put this new request in SW queue and dispatch
+* the oldest backlogged request to the hardware.
+*/
+   ret = crypto_enqueue_request(&priv->chan[ch].queue, areq);
+   __talitos_handle_queue(dev, ch, NULL, &flags);
+   } else {
+   ret = __talitos_handle_queue(dev, ch, areq, &flags);
+   }
+
spin_unlock_irqrestore(&priv->chan[ch].head_lock, flags);
 
-   return -EINPROGRESS;
+   return ret;
 }
 EXPORT_SYMBOL(talitos_submit);
 
+static void talitos_handle_queue(struct device *dev, int ch)
+{
+   struct talitos_private *priv = dev_get_drvdata(dev);
+   unsigned long flags;
+   int ret = -EINPROGRESS;
+
+   if (!priv->chan[ch].queue.qlen)
+   return;
+
+   spin_lock_irqsave(&priv->chan[ch].head_lock, flags);
+
+   /* Queue backlogged requests as long as the 

[PATCH v2 2/4] net: esp: check CRYPTO_TFM_REQ_DMA flag when allocating crypto request

2015-03-13 Thread Horia Geanta
Some crypto backends might require the requests' private contexts
to be allocated in DMA-able memory.

Signed-off-by: Horia Geanta horia.gea...@freescale.com
---

Depends on patch 1/4 (sent only on crypto list) that adds the
CRYPTO_TFM_REQ_DMA flag.

 net/ipv4/esp4.c | 7 ++-
 net/ipv6/esp6.c | 7 ++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 60173d4d3a0e..3e6ddece0cbe 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -38,6 +38,7 @@ static u32 esp4_get_mtu(struct xfrm_state *x, int mtu);
 static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
 {
unsigned int len;
+   gfp_t gfp = GFP_ATOMIC;
 
len = seqhilen;
 
@@ -54,7 +55,11 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqhilen)
 
len += sizeof(struct scatterlist) * nfrags;
 
-   return kmalloc(len, GFP_ATOMIC);
+   if (crypto_aead_reqsize(aead) &&
+   (crypto_aead_get_flags(aead) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
+   return kmalloc(len, gfp);
 }
 
 static inline __be32 *esp_tmp_seqhi(void *tmp)
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index e48f2c7c5c59..0d173eedad4e 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -65,6 +65,7 @@ static u32 esp6_get_mtu(struct xfrm_state *x, int mtu);
 static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqihlen)
 {
unsigned int len;
+   gfp_t gfp = GFP_ATOMIC;
 
len = seqihlen;
 
@@ -81,7 +82,11 @@ static void *esp_alloc_tmp(struct crypto_aead *aead, int nfrags, int seqihlen)
 
len += sizeof(struct scatterlist) * nfrags;
 
-   return kmalloc(len, GFP_ATOMIC);
+   if (crypto_aead_reqsize(aead) &&
+   (crypto_aead_get_flags(aead) & CRYPTO_TFM_REQ_DMA))
+   gfp |= GFP_DMA;
+
+   return kmalloc(len, gfp);
 }
 
 static inline __be32 *esp_tmp_seqhi(void *tmp)
-- 
1.8.3.1



Re: [PATCH 1/4] crypto: add CRYPTO_TFM_REQ_DMA flag

2015-03-13 Thread Horia Geantă
On 3/13/2015 7:14 PM, Horia Geanta wrote:
 The CRYPTO_TFM_REQ_DMA flag can be used by backend implementations to
 indicate to crypto API the need to allocate GFP_DMA memory
 for private contexts of the crypto requests.
 
 Signed-off-by: Horia Geanta horia.gea...@freescale.com
 ---

ahash_request_alloc() update is missing from the patch.

  include/linux/crypto.h | 9 +
  1 file changed, 9 insertions(+)
 
 diff --git a/include/linux/crypto.h b/include/linux/crypto.h
 index fb5ef16d6a12..64258c9198d5 100644
 --- a/include/linux/crypto.h
 +++ b/include/linux/crypto.h
 @@ -103,6 +103,7 @@
  #define CRYPTO_TFM_REQ_WEAK_KEY  0x0100
  #define CRYPTO_TFM_REQ_MAY_SLEEP 0x0200
  #define CRYPTO_TFM_REQ_MAY_BACKLOG   0x0400
 +#define CRYPTO_TFM_REQ_DMA   0x0800
  #define CRYPTO_TFM_RES_WEAK_KEY  0x0010
  #define CRYPTO_TFM_RES_BAD_KEY_LEN   0x0020
  #define CRYPTO_TFM_RES_BAD_KEY_SCHED 0x0040
 @@ -1108,6 +1109,10 @@ static inline struct ablkcipher_request *ablkcipher_request_alloc(
  {
   struct ablkcipher_request *req;
  
 + if (crypto_ablkcipher_reqsize(tfm) &&
 + (crypto_ablkcipher_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
 + gfp |= GFP_DMA;
 +
   req = kmalloc(sizeof(struct ablkcipher_request) +
 crypto_ablkcipher_reqsize(tfm), gfp);
  
 @@ -1471,6 +1476,10 @@ static inline struct aead_request *aead_request_alloc(struct crypto_aead *tfm,
  {
   struct aead_request *req;
  
 + if (crypto_aead_reqsize(tfm) &&
 + (crypto_aead_get_flags(tfm) & CRYPTO_TFM_REQ_DMA))
 + gfp |= GFP_DMA;
 +
   req = kmalloc(sizeof(*req) + crypto_aead_reqsize(tfm), gfp);
  
   if (likely(req))
 





[RFC PATCH] crypto: prevent helper ciphers from being allocated by users

2015-03-13 Thread Stephan Mueller
Hi,

Several hardware related cipher implementations are implemented as follows: a 
helper cipher implementation is registered with the kernel crypto API.

Such helper ciphers are never intended to be called by normal users. In some 
cases, calling them via the normal crypto API may even cause failures 
including kernel crashes. In a normal case, the wrapping ciphers that use 
the helpers ensure that these helpers are invoked such that they cannot cause 
any calamity.

Also, with kernel code, we can be reasonably sure that such helper ciphers are 
never called directly as the kernel code is under our control.

But I am getting very uneasy when the AF_ALG user space interface comes into 
play. With that, unprivileged users can call all ciphers registered with the 
crypto API, including these helper ciphers that are not intended to be called 
directly. That means, with AF_ALG user space may invoke these helper ciphers 
and may cause undefined states or side effects.

For example, without the commit 81e397d937a8e9f46f024a9f876cf14d8e2b45a7 the
AES-NI GCM implementation could be used to crash the kernel with the 
AF_ALG(aead) interface. But without the patch, using the AES-NI GCM 
implementation through the regular cipher types was no problem at all.

To avoid any potential side effects with such helpers, I propose a change to 
the kernel crypto API to prevent the helpers to be called directly. These 
helpers have the following properties:

- they are all marked with a cra_priority of 0 and can therefore be easily 
identified

- they are never intended to be instantiated via the regular crypto_alloc_* 
routines, but always via the crypto_*_spawn API. That API is separate from the 
regular allocation API of crypto_alloc_*

Therefore, a guard to prevent the instantiation of helper ciphers by normal 
users can be done by preventing successful instances of helper ciphers in 
crypto_alloc_*. To make life easy, I would recommend to simply use the 
cra_priority as a flag that shall trigger an error in crypto_alloc_*.

The following code is tested and confirmed to work (i.e. preventing the use of 
helper ciphers by callers, but allowing helper ciphers to be used to serve 
other ciphers). This patch searched for all invocations of __crypto_alloc_tfm 
and added the check for cra_priority except in the crypto_spawn_tfm call. 
Specifically, I tested __driver-gcm-aes-aesni vs rfc4106-gcm-aesni. In 
addition, I tested a large array of other ciphers where none were affected by 
the change.

diff --git a/crypto/ablkcipher.c b/crypto/ablkcipher.c
index db201bca..2cd83ad 100644
--- a/crypto/ablkcipher.c
+++ b/crypto/ablkcipher.c
@@ -688,7 +688,7 @@ struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return __crypto_ablkcipher_cast(tfm);
 
diff --git a/crypto/aead.c b/crypto/aead.c
index 710..9ae3aa9 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -542,7 +542,7 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return __crypto_aead_cast(tfm);
 
diff --git a/crypto/api.c b/crypto/api.c
index 2a81e98..8b1bb2d 100644
--- a/crypto/api.c
+++ b/crypto/api.c
@@ -389,6 +389,27 @@ out:
 }
 EXPORT_SYMBOL_GPL(__crypto_alloc_tfm);
 
+struct crypto_tfm *__crypto_alloc_tfm_safe(struct crypto_alg *alg, u32 type,
+  u32 mask)
+{
+   /*
+* Prevent all ciphers from being loaded which have a cra_priority
+* of 0. Those cipher implementations are helper ciphers and
+* are not intended for general consumption.
+*
+* The only exceptions are the compression algorithms which
+* have no priority.
+*/
+   if (!alg->cra_priority &&
+   ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) !=
+ CRYPTO_ALG_TYPE_PCOMPRESS) &&
+   ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) !=
+ CRYPTO_ALG_TYPE_COMPRESS))
+   return ERR_PTR(-ENOENT);
+
+   return __crypto_alloc_tfm(alg, type, mask);
+}
+EXPORT_SYMBOL_GPL(__crypto_alloc_tfm_safe);
 /*
  * crypto_alloc_base - Locate algorithm and allocate transform
  * @alg_name: Name of algorithm
@@ -425,7 +446,7 @@ struct crypto_tfm *crypto_alloc_base(const char *alg_name, u32 type, u32 mask)
goto err;
}
 
-   tfm = __crypto_alloc_tfm(alg, type, mask);
+   tfm = __crypto_alloc_tfm_safe(alg, type, mask);
if (!IS_ERR(tfm))
return tfm;
 
diff --git 

[PATCH 12/12] crypto/sha-mb/sha1_mb.c : Syntax error

2015-03-13 Thread Ameen Ali
Fixing a syntax error.

Signed-off-by: Ameen Ali ameenali...@gmail.com
---
 arch/x86/crypto/sha-mb/sha1_mb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index fd9f6b0..ec0b989 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -828,7 +828,7 @@ static unsigned long sha1_mb_flusher(struct mcryptd_alg_cstate *cstate)
while (!list_empty(&cstate->work_list)) {
rctx = list_entry(cstate->work_list.next,
struct mcryptd_hash_request_ctx, waiter);
-   if time_before(cur_time, rctx->tag.expire)
+   if (time_before(cur_time, rctx->tag.expire))
break;
kernel_fpu_begin();
sha_ctx = (struct sha1_hash_ctx *) sha1_ctx_mgr_flush(cstate->mgr);
-- 
2.1.0



Re: [PATCH 2/4] net: esp: check CRYPTO_TFM_REQ_DMA flag when allocating crypto request

2015-03-13 Thread David Miller
From: Horia Geanta horia.gea...@freescale.com
Date: Fri, 13 Mar 2015 19:15:22 +0200

 Some crypto backends might require the requests' private contexts
 to be allocated in DMA-able memory.
 
 Signed-off-by: Horia Geanta horia.gea...@freescale.com

No way.

Upper layers should be absolutely not required to know about such
requirements.

Such details _must_ be hidden inside of the crypto layer and drivers
and not leak out into the users of the crypto interfaces.


Re: [PATCH 12/12] crypto/sha-mb/sha1_mb.c : Syntax error

2015-03-13 Thread Tim Chen
On Fri, 2015-03-13 at 23:13 +0200, Ameen Ali wrote:
 fixing a syntax-error .
 
 Signed-off-by : Ameen Ali ameenali...@gmail.com
 ---
  arch/x86/crypto/sha-mb/sha1_mb.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c 
 b/arch/x86/crypto/sha-mb/sha1_mb.c
 index fd9f6b0..ec0b989 100644
 --- a/arch/x86/crypto/sha-mb/sha1_mb.c
 +++ b/arch/x86/crypto/sha-mb/sha1_mb.c
 @@ -828,7 +828,7 @@ static unsigned long sha1_mb_flusher(struct mcryptd_alg_cstate *cstate)
   while (!list_empty(&cstate->work_list)) {
   rctx = list_entry(cstate->work_list.next,
   struct mcryptd_hash_request_ctx, waiter);
 - if time_before(cur_time, rctx->tag.expire)
 + if(time_before(cur_time, rctx->tag.expire))

Can you add a space and make it 
if (time_before(cur_time, rctx->tag.expire))

Thanks.

Tim
   break;
   kernel_fpu_begin();
  sha_ctx = (struct sha1_hash_ctx *) sha1_ctx_mgr_flush(cstate->mgr);




[PATCH 12/12] crypto/sha-mb/sha1_mb.c : Syntax error

2015-03-13 Thread Ameen Ali
Fixing a syntax error.

Signed-off-by: Ameen Ali ameenali...@gmail.com
---
 arch/x86/crypto/sha-mb/sha1_mb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index fd9f6b0..ec0b989 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -828,7 +828,7 @@ static unsigned long sha1_mb_flusher(struct mcryptd_alg_cstate *cstate)
while (!list_empty(&cstate->work_list)) {
rctx = list_entry(cstate->work_list.next,
struct mcryptd_hash_request_ctx, waiter);
-   if time_before(cur_time, rctx->tag.expire)
+   if(time_before(cur_time, rctx->tag.expire))
break;
kernel_fpu_begin();
sha_ctx = (struct sha1_hash_ctx *) sha1_ctx_mgr_flush(cstate->mgr);
-- 
2.1.0



Re: [PATCH] crypto: testmgr - fix RNG return code enforcement

2015-03-13 Thread Herbert Xu
On Wed, Mar 11, 2015 at 01:11:07PM +0100, Alexander Bergmann wrote:
 From 0c7233769665f03e9f91342770dba7279f928c23 Mon Sep 17 00:00:00 2001
 From: Stephan Mueller smuel...@chronox.de
 Date: Tue, 10 Mar 2015 17:00:36 +0100
 Subject: [PATCH] crypto: testmgr - fix RNG return code enforcement
 
 Due to the change to RNGs to always return zero in success case, the
 invocation of the RNGs in the test manager must be updated as otherwise
 the RNG self tests are not properly executed any more.
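
In other words (an illustration only, not the actual testmgr hunk): crypto_rng_get_bytes() used to return the number of bytes generated, so the tests compared the return value against the requested length; with the new convention a zero return means success:

	static int rng_generate_checked(struct crypto_rng *rng, u8 *buf,
					unsigned int len)
	{
		int err = crypto_rng_get_bytes(rng, buf, len);

		if (err)	/* previously: if (err != len) */
			return err;
		return 0;	/* buf now holds len bytes */
	}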
 
 Signed-off-by: Stephan Mueller smuel...@chronox.de
 Signed-off-by: Alexander Bergmann abergm...@suse.com

Applied.
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[PATCH RESEND] crypto: algif_rng - zeroize buffer with random data

2015-03-13 Thread Stephan Mueller
Due to the change to RNGs to always return zero in success case, the RNG
interface must zeroize the buffer with the length provided by the
caller.

Signed-off-by: Stephan Mueller smuel...@chronox.de
---
 crypto/algif_rng.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/algif_rng.c b/crypto/algif_rng.c
index 67f612c..a346173 100644
--- a/crypto/algif_rng.c
+++ b/crypto/algif_rng.c
@@ -87,7 +87,7 @@ static int rng_recvmsg(struct kiocb *unused, struct socket *sock,
return genlen;
 
err = memcpy_to_msg(msg, result, len);
-   memzero_explicit(result, genlen);
+   memzero_explicit(result, len);
 
return err ? err : len;
 }
-- 
2.1.0




Re: [PATCH] omap-rng: Change RNG_CONFIG_REG to RNG_CONTROL_REG when checking and disabling TRNG

2015-03-13 Thread Herbert Xu
On Wed, Mar 11, 2015 at 03:29:35PM +1100, Andre Wolokita wrote:
 In omap4_rng_init(), a check of bit 10 of the RNG_CONFIG_REG is done to 
 determine
 whether the RNG is running. This is suspicious firstly due to the use of
 RNG_CONTROL_ENABLE_TRNG_MASK and secondly because the same mask is written to
 RNG_CONTROL_REG after configuration of the FROs. Similar suspicious logic is
 repeated in omap4_rng_cleanup() when RNG_CONTROL_REG masked with
 RNG_CONTROL_ENABLE_TRNG_MASK is read, the same mask bit is cleared, and then
 written to RNG_CONFIG_REG. Unless the TRNG is enabled with one bit in 
 RNG_CONTROL
 and disabled with another in RNG_CONFIG and these bits are mirrored in some 
 way,
 I believe that the TRNG is not really shutting off.
 
 Apart from the strange logic, I have reason to suspect that the OMAP4 related
 code in this driver is driving an Inside Secure IP hardware RNG and strongly
 suspect that bit 10 of RNG_CONFIG_REG is one of the bits configuring the
 sampling rate of the FROs. This option is by default set to 0 and is not being
 set anywhere in omap-rng.c. Reading this bit during omap4_rng_init() will
 always return 0. It will remain 0 because ~(value of TRNG_MASK in control) 
 will
 always be 0, because the TRNG is never shut off. This is of course presuming
 that the OMAP4 features the Inside Secure IP.
 
 I'm interested in knowing what the guys at TI think about this, as only they
 can confirm or deny the detailed structure of these registers.

Where is the sign-off?
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: algif_rng - zeroize buffer holding random data

2015-03-13 Thread Herbert Xu
On Wed, Mar 11, 2015 at 07:45:35AM +0100, Stephan Mueller wrote:
 Due to the change to RNGs to always return zero in success case, the RNG
 interface must zeroize the buffer with the length provided by the
 caller.
 
 Signed-off-by: Stephan Mueller smuel...@chronox.de

Your patch is line-wrapped and doesn't apply.  Please resend.

Thanks,
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2] crypto: AES-NI - fix memory usage in GCM decryption

2015-03-13 Thread Herbert Xu
On Thu, Mar 12, 2015 at 09:17:51AM +0100, Stephan Mueller wrote:
 The kernel crypto API logic requires the caller to provide the
 length of (ciphertext || authentication tag) as cryptlen for the
 AEAD decryption operation. Thus, the cipher implementation must
 calculate the size of the plaintext output itself and cannot simply use
 cryptlen.
 
 The RFC4106 GCM decryption operation tries to overwrite cryptlen memory
 in req->dst. As the destination buffer for decryption only needs to hold
 the plaintext memory but cryptlen references the input buffer holding
 (ciphertext || authentication tag), the assumption of the destination
 buffer length in RFC4106 GCM operation leads to a too large size. This
 patch simply uses the already calculated plaintext size.
 
 In addition, this patch fixes the offset calculation of the AAD buffer
 pointer: as mentioned before, cryptlen already includes the size of the
 tag. Thus, the tag does not need to be added. With the addition, the AAD
 will be written beyond the already allocated buffer.
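
Roughly, the arithmetic the fix relies on (an illustration only, not the patch itself):

	static unsigned int rfc4106_plaintext_len(struct aead_request *req)
	{
		/* on decrypt, req->cryptlen covers ciphertext plus auth tag */
		return req->cryptlen - crypto_aead_authsize(crypto_aead_reqtfm(req));
	}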
 
 Note, this fixes a kernel crash that can be triggered from user space
 via AF_ALG(aead) -- simply use the libkcapi test application
 from [1] and update it to use rfc4106-gcm-aes.
 
 Using [1], the changes were tested using CAVS vectors to demonstrate
 that the crypto operation still delivers the right results.
 
 [1] http://www.chronox.de/libkcapi.html
 
 CC: Tadeusz Struk tadeusz.st...@intel.com
 Signed-off-by: Stephan Mueller smuel...@chronox.de

Patch applied.  Thanks!
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 4/15] crypto: don't export static symbol

2015-03-13 Thread Herbert Xu
On Wed, Mar 11, 2015 at 05:56:26PM +0100, Julia Lawall wrote:
 From: Julia Lawall julia.law...@lip6.fr
 
 The semantic patch that fixes this problem is as follows:
 (http://coccinelle.lip6.fr/)
 
// <smpl>
 @r@
 type T;
 identifier f;
 @@
 
 static T f (...) { ... }
 
 @@
 identifier r.f;
 declarer name EXPORT_SYMBOL_GPL;
 @@
 
 -EXPORT_SYMBOL_GPL(f);
// </smpl>
 
 Signed-off-by: Julia Lawall julia.law...@lip6.fr

Applied.
-- 
Email: Herbert Xu herb...@gondor.apana.org.au
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 0/2] crypto: talitos: Add AES-XTS mode

2015-03-13 Thread Martin Hicks
Hi Horia,

On Wed, Mar 11, 2015 at 11:48 AM, Horia Geantă
horia.gea...@freescale.com wrote:

 While here: note that xts-talitos supports only two key lengths - 256
 and 512 bits. There are tcrypt speed tests that check also for 384-bit
 keys (which is out-of-spec, but still...), leading to a Key Size Error
 - see below (KSE bit in AESU Interrupt Status Register is set)

Ok.  I've limited the keysize to 32 or 64 bytes for AES-XTS in the
talitos driver.

This was my first experiments with the tcrypt module.  It also brought
up another issue related to the IV limitations of this hardware.  The
latest patch that I have returns an error when there is a non-zero
value in the second 8 bytes of the IV:

+   /*
+* AES-XTS uses the first two AES Context registers for:
+*
+* Register 1:   Sector Number (Little Endian)
+* Register 2:   Sector Size   (Big Endian)
+*
+* Whereas AES-CBC uses registers 1/2 as a 16-byte IV.
+*/
+   if ((ctx->desc_hdr_template &
+(DESC_HDR_SEL0_MASK | DESC_HDR_MODE0_MASK)) ==
+(DESC_HDR_SEL0_AESU | DESC_HDR_MODE0_AESU_XTS)) {
+   u64 *aesctx2 = (u64 *)areq->info + 1;
+
+   if (*aesctx2 != 0) {
+   dev_err(ctx->dev,
+   "IV length limited to the first 8 bytes.");
+   return ERR_PTR(-EINVAL);
+   }
+
+   /* Fixed sized sector */
+   *aesctx2 = cpu_to_be64(1 << SECTOR_SHIFT);
+   }


This approach causes the tcrypt tests to fail because tcrypt sets all
16 bytes of the IV to 0xff.  I think returning an error is the right
approach for the talitos module, but it would be nice if tcrypt still
worked.  Should tcrypt just set the IV bytes to 0 instead of 0xff?
Isn't one IV just as good as another?  I think adding exceptions to
the tcrypt code would be ugly, but maybe one should be made for XTS
since the standard dictates that the IV should be plain or plain64?

Thanks,
mh

-- 
Martin Hicks P.Eng.  | m...@bork.org
Bork Consulting Inc. |   +1 (613) 266-2296