Where does the tweak routine point to?

2010-04-08 Thread Bai Shuwei
Hi, All:
  When I use the cryptsetup command to set up aes-xts-plain
encryption, the system enters the crypt() routine defined in the
xts.c file. I find that this routine calls two important routines,
tw and fn. I think fn points to the aes_encrypt/decrypt routine,
but which routine does tw point to for aes-xts-plain encryption,
and where can I find its source code? I put the segment where the
tw routine is called below.

   wsrc = w->src.virt.addr;
   wdst = w->dst.virt.addr;

   /* calculate first value of T */
   tw(crypto_cipher_tfm(ctx->tweak), w->iv, w->iv);

   goto first;

   for (;;) {
   do {
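
For reference, the encrypt() wrapper that calls crypt() in the same xts.c
looks roughly like this in my tree (it may differ slightly between kernel
versions); it shows where tw and fn come from:

    static int encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                       struct scatterlist *src, unsigned int nbytes)
    {
            struct priv *ctx = crypto_blkcipher_ctx(desc->tfm);
            struct blkcipher_walk w;

            blkcipher_walk_init(&w, dst, src, nbytes);
            /* tw is the tweak cipher's cia_encrypt, fn the data cipher's */
            return crypt(desc, &w, ctx,
                         crypto_cipher_alg(ctx->tweak)->cia_encrypt,
                         crypto_cipher_alg(ctx->child)->cia_encrypt);
    }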


Thanks all!

Best Regards

Bai Shuwei
-- 
Love other people, as same as love yourself!
Don't think all the time, do it by your hands!

E-Mail: baishu...@gmail.com


Re: [PATCH 0/2] crypto: omap-sha1-md5: OMAP3 SHA1 & MD5 driver

2010-04-08 Thread Dmitry.Kasatkin
- Original message -
> Hi:
> 
> OK so you did answer my question :)
> 
> Dmitry Kasatkin  wrote:
> > 
> > Interesting case with hmac.
> > 
> > return crypto_shash_init(&desc.shash) ?:
> > crypto_shash_update(&desc.shash, ipad, bs) ?:
> > crypto_shash_export(&desc.shash, ipad) ?:
> > crypto_shash_init(&desc.shash) ?:
> > crypto_shash_update(&desc.shash, opad, bs) ?:
> > crypto_shash_export(&desc.shash, opad);
> > 
> > Basically it does not call final().
> > Then it calls init() again.
> > 
> > hw has a certain limitation: it requires the last block to be processed
> > with some bit set.
> > When update() is called there is no way to know that no more update()
> > calls will come.
> > So the possible last block is stored and then hashed out from final().
> > 
> > I see that above code will not work with the driver.
> > I wonder how intermediate export/import could be done with omap hw.
> > 
> > But if it's not possible, then why not have hmac(sha1) as just sw?
> > Anyway, hmac should not have to process as huge an amount of data as the
> > hash itself.
> > 
> > What is your opinion/advice?
> 
> A sha1-only driver is not very useful since the biggest potential
> user IPsec uses hmac(sha1).
> 
> Is the omap hw documentation available publicly?
> 
> Thanks,
> -- 
> Visit Openswan at http://www.openswan.org/
> Email: Herbert Xu ~{PmV>HI~} 
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
 
Hi.

Sha1-only is also very useful. We calculate hashes of all binaries for
integrity verification. We do not need hmac there.

But in general it is possible to add the hmac(sha1) algorithm to the driver
and implement it internally, without import/export.
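
Roughly what I have in mind for the key setup (untested sketch, one-block key
assumed; needs <crypto/hash.h>, <crypto/sha.h>, <linux/string.h> and
<linux/errno.h>; the real omap code would differ): hash K^ipad and K^opad once
each with the sw sha1 at setkey time, export the two partial states, and
program them into the hw as its inner/outer IVs:

    static int sketch_hmac_sha1_setkey(struct crypto_shash *sha1_sw,
                                       const u8 *key, unsigned int keylen,
                                       struct sha1_state *inner,
                                       struct sha1_state *outer)
    {
            u8 ipad[SHA1_BLOCK_SIZE], opad[SHA1_BLOCK_SIZE];
            int i;
            struct {
                    struct shash_desc shash;
                    char ctx[crypto_shash_descsize(sha1_sw)];
            } desc;

            if (keylen > SHA1_BLOCK_SIZE)
                    return -EINVAL; /* a long key would be hashed down first */

            desc.shash.tfm = sha1_sw;
            desc.shash.flags = CRYPTO_TFM_REQ_MAY_SLEEP;

            memset(ipad, 0, sizeof(ipad));
            memcpy(ipad, key, keylen);
            memcpy(opad, ipad, sizeof(opad));
            for (i = 0; i < SHA1_BLOCK_SIZE; i++) {
                    ipad[i] ^= 0x36;
                    opad[i] ^= 0x5c;
            }

            /* one compression each; the exported states become the hw IVs */
            return crypto_shash_init(&desc.shash) ?:
                   crypto_shash_update(&desc.shash, ipad, SHA1_BLOCK_SIZE) ?:
                   crypto_shash_export(&desc.shash, inner) ?:
                   crypto_shash_init(&desc.shash) ?:
                   crypto_shash_update(&desc.shash, opad, SHA1_BLOCK_SIZE) ?:
                   crypto_shash_export(&desc.shash, outer);
    }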

I have to check whether the documentation is publicly available.

Br,
Dmitry

[PATCHv2 10/10] crypto mv_cesa : Add sha1 and hmac(sha1) async hash drivers

2010-04-08 Thread Uri Simchoni
Add sha1 and hmac(sha1) async hash drivers

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p9/drivers/crypto/mv_cesa.c linux-2.6.32.8_p10/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p9/drivers/crypto/mv_cesa.c  2010-03-16 12:33:45.504199755 +0200
+++ linux-2.6.32.8_p10/drivers/crypto/mv_cesa.c 2010-03-16 14:11:38.819533227 +0200
@@ -14,8 +14,14 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include "mv_cesa.h"
+
+#define MV_CESA		"MV-CESA:"
+#define MAX_HW_HASH_SIZE   0x
+
 /*
  * STM:
  *   /---\
@@ -38,7 +44,7 @@ enum engine_status {
  * @dst_sg_it: sg iterator for dst
  * @sg_src_left:   bytes left in src to process (scatter list)
  * @src_start: offset to add to src start position (scatter list)
- * @crypt_len: length of current crypt process
+ * @crypt_len: length of current hw crypt/hash process
  * @hw_nbytes: total bytes to process in hw for this request
  * @copy_back: whether to copy data back (crypt) or not (hash)
  * @sg_dst_left:   bytes left dst to process in this scatter list
@@ -81,6 +87,8 @@ struct crypto_priv {
struct req_progress p;
int max_req_size;
int sram_size;
+   int has_sha1;
+   int has_hmac_sha1;
 };
 
 static struct crypto_priv *cpg;
@@ -102,6 +110,31 @@ struct mv_req_ctx {
int decrypt;
 };
 
+enum hash_op {
+   COP_SHA1,
+   COP_HMAC_SHA1
+};
+
+struct mv_tfm_hash_ctx {
+   struct crypto_shash *fallback;
+   struct crypto_shash *base_hash;
+   u32 ivs[2 * SHA1_DIGEST_SIZE / 4];
+   int count_add;
+   enum hash_op op;
+};
+
+struct mv_req_hash_ctx {
+   u64 count;
+   u32 state[SHA1_DIGEST_SIZE / 4];
+   u8 buffer[SHA1_BLOCK_SIZE];
+   int first_hash; /* marks that we don't have previous state */
+   int last_chunk; /* marks that this is the 'final' request */
+   int extra_bytes;/* unprocessed bytes in buffer */
+   enum hash_op op;
+   int count_add;
+   struct scatterlist dummysg;
+};
+
 static void compute_aes_dec_key(struct mv_ctx *ctx)
 {
struct crypto_aes_ctx gen_aes_key;
@@ -265,6 +298,132 @@ static void mv_crypto_algo_completion(vo
memcpy(req->info, cpg->sram + SRAM_DATA_IV_BUF, 16);
 }
 
+static void mv_process_hash_current(int first_block)
+{
+   struct ahash_request *req = ahash_request_cast(cpg->cur_req);
+   struct mv_req_hash_ctx *req_ctx = ahash_request_ctx(req);
+   struct req_progress *p = &cpg->p;
+   struct sec_accel_config op = { 0 };
+   int is_last;
+
+   switch (req_ctx->op) {
+   case COP_SHA1:
+   default:
+   op.config = CFG_OP_MAC_ONLY | CFG_MACM_SHA1;
+   break;
+   case COP_HMAC_SHA1:
+   op.config = CFG_OP_MAC_ONLY | CFG_MACM_HMAC_SHA1;
+   break;
+   }
+
+   op.mac_src_p =
+   MAC_SRC_DATA_P(SRAM_DATA_IN_START) | MAC_SRC_TOTAL_LEN((u32)
+   req_ctx->
+   count);
+
+   setup_data_in();
+
+   op.mac_digest =
+   MAC_DIGEST_P(SRAM_DIGEST_BUF) | MAC_FRAG_LEN(p->crypt_len);
+   op.mac_iv =
+   MAC_INNER_IV_P(SRAM_HMAC_IV_IN) |
+   MAC_OUTER_IV_P(SRAM_HMAC_IV_OUT);
+
+   is_last = req_ctx->last_chunk
+   && (p->hw_processed_bytes + p->crypt_len >= p->hw_nbytes)
+   && (req_ctx->count <= MAX_HW_HASH_SIZE);
+   if (req_ctx->first_hash) {
+   if (is_last)
+   op.config |= CFG_NOT_FRAG;
+   else
+   op.config |= CFG_FIRST_FRAG;
+
+   req_ctx->first_hash = 0;
+   } else {
+   if (is_last)
+   op.config |= CFG_LAST_FRAG;
+   else
+   op.config |= CFG_MID_FRAG;
+   }
+
+   memcpy(cpg->sram + SRAM_CONFIG, &op, sizeof(struct sec_accel_config));
+
+   writel(SRAM_CONFIG, cpg->reg + SEC_ACCEL_DESC_P0);
+   /* GO */
+   writel(SEC_CMD_EN_SEC_ACCL0, cpg->reg + SEC_ACCEL_CMD);
+
+   /*
+   * XXX: add timer if the interrupt does not occur for some mystery
+   * reason
+   */
+}
+
+static inline int mv_hash_import_sha1_ctx(const struct mv_req_hash_ctx *ctx,
+ struct shash_desc *desc)
+{
+   int i;
+   struct sha1_state shash_state;
+
+   shash_state.count = ctx->count + ctx->count_add;
+   for (i = 0; i < 5; i++)
+   shash_state.state[i] = ctx->state[i];
+   memcpy(shash_state.buffer, ctx->buffer, sizeof(shash_state.buffer));
+   return crypto_shash_import(desc, &shash_state);
+}
+
+static int mv_hash_final_fallback(struct ahash_request *req)
+{
+   const struct mv_tfm_hash_ctx *tfm_ctx = crypto_tfm_ctx(req->base.tfm);
+   struct mv_req_hash_ctx *req_ctx = ahash_request_ctx(req);
+   struct {
+   struct shash_desc s

[PATCHv2 9/10] crypto mv_cesa : Support processing of data from previous requests

2010-04-08 Thread Uri Simchoni
Support processing of data from previous requests (as in hashing
update/final requests).

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p8/drivers/crypto/mv_cesa.c linux-2.6.32.8_p9/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p8/drivers/crypto/mv_cesa.c  2010-03-16 12:25:34.815950170 +0200
+++ linux-2.6.32.8_p9/drivers/crypto/mv_cesa.c  2010-03-16 12:33:45.504199755 +0200
@@ -184,10 +184,11 @@ static void copy_src_to_buf(struct req_p
 static void setup_data_in(void)
 {
struct req_progress *p = &cpg->p;
-   p->crypt_len =
+   int data_in_sram =
min(p->hw_nbytes - p->hw_processed_bytes, cpg->max_req_size);
-   copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START,
-   p->crypt_len);
+   copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START + p->crypt_len,
+   data_in_sram - p->crypt_len);
+   p->crypt_len = data_in_sram;
 }
 
 static void mv_process_current_q(int first_block)
@@ -298,6 +299,7 @@ static void dequeue_complete_req(void)
} while (need_copy_len > 0);
}
 
+   cpg->p.crypt_len = 0;
 
BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
if (cpg->p.hw_processed_bytes < cpg->p.hw_nbytes) {




[PATCHv2 8/10] crypto mv_cesa : Make the copy-back of data optional

2010-04-08 Thread Uri Simchoni
Make the copy-back of data optional (not done in hashing requests)

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p7/drivers/crypto/mv_cesa.c linux-2.6.32.8_p8/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p7/drivers/crypto/mv_cesa.c  2010-03-16 12:07:31.147897717 +0200
+++ linux-2.6.32.8_p8/drivers/crypto/mv_cesa.c  2010-03-16 12:25:34.815950170 +0200
@@ -40,6 +40,7 @@ enum engine_status {
  * @src_start: offset to add to src start position (scatter list)
  * @crypt_len: length of current crypt process
  * @hw_nbytes: total bytes to process in hw for this request
+ * @copy_back: whether to copy data back (crypt) or not (hash)
  * @sg_dst_left:   bytes left dst to process in this scatter list
  * @dst_start: offset to add to dst start position (scatter list)
  * @hw_processed_bytes:number of bytes processed by hw (request).
@@ -60,6 +61,7 @@ struct req_progress {
int crypt_len;
int hw_nbytes;
/* dst mostly */
+   int copy_back;
int sg_dst_left;
int dst_start;
int hw_processed_bytes;
@@ -267,33 +269,35 @@ static void dequeue_complete_req(void)
struct crypto_async_request *req = cpg->cur_req;
void *buf;
int ret;
-   int need_copy_len = cpg->p.crypt_len;
-   int sram_offset = 0;
-
cpg->p.hw_processed_bytes += cpg->p.crypt_len;
-   do {
-   int dst_copy;
+   if (cpg->p.copy_back) {
+   int need_copy_len = cpg->p.crypt_len;
+   int sram_offset = 0;
+   do {
+   int dst_copy;
+
+   if (!cpg->p.sg_dst_left) {
+   ret = sg_miter_next(&cpg->p.dst_sg_it);
+   BUG_ON(!ret);
+   cpg->p.sg_dst_left = cpg->p.dst_sg_it.length;
+   cpg->p.dst_start = 0;
+   }
 
-   if (!cpg->p.sg_dst_left) {
-   ret = sg_miter_next(&cpg->p.dst_sg_it);
-   BUG_ON(!ret);
-   cpg->p.sg_dst_left = cpg->p.dst_sg_it.length;
-   cpg->p.dst_start = 0;
-   }
+   buf = cpg->p.dst_sg_it.addr;
+   buf += cpg->p.dst_start;
 
-   buf = cpg->p.dst_sg_it.addr;
-   buf += cpg->p.dst_start;
+   dst_copy = min(need_copy_len, cpg->p.sg_dst_left);
 
-   dst_copy = min(need_copy_len, cpg->p.sg_dst_left);
+   memcpy(buf,
+  cpg->sram + SRAM_DATA_OUT_START + sram_offset,
+  dst_copy);
+   sram_offset += dst_copy;
+   cpg->p.sg_dst_left -= dst_copy;
+   need_copy_len -= dst_copy;
+   cpg->p.dst_start += dst_copy;
+   } while (need_copy_len > 0);
+   }
 
-   memcpy(buf,
-  cpg->sram + SRAM_DATA_OUT_START + sram_offset,
-  dst_copy);
-   sram_offset += dst_copy;
-   cpg->p.sg_dst_left -= dst_copy;
-   need_copy_len -= dst_copy;
-   cpg->p.dst_start += dst_copy;
-   } while (need_copy_len > 0);
 
BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
if (cpg->p.hw_processed_bytes < cpg->p.hw_nbytes) {
@@ -336,6 +340,7 @@ static void mv_enqueue_new_req(struct ab
p->hw_nbytes = req->nbytes;
p->complete = mv_crypto_algo_completion;
p->process = mv_process_current_q;
+   p->copy_back = 1;
 
num_sgs = count_sgs(req->src, req->nbytes);
sg_miter_start(&p->src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);




[PATCHv2 7/10] crypto mv_cesa : Execute some code via function pointers rather than direct calls

2010-04-08 Thread Uri Simchoni
Execute some code via function pointers rather than direct calls
(to allow customization in the hashing request)

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p6/drivers/crypto/mv_cesa.c linux-2.6.32.8_p7/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p6/drivers/crypto/mv_cesa.c  2010-03-16 11:51:56.372211208 +0200
+++ linux-2.6.32.8_p7/drivers/crypto/mv_cesa.c  2010-03-16 12:07:31.147897717 +0200
@@ -51,6 +51,8 @@ enum engine_status {
 struct req_progress {
struct sg_mapping_iter src_sg_it;
struct sg_mapping_iter dst_sg_it;
+   void (*complete) (void);
+   void (*process) (int is_first);
 
/* src mostly */
int sg_src_left;
@@ -251,6 +253,9 @@ static void mv_crypto_algo_completion(vo
struct ablkcipher_request *req = ablkcipher_request_cast(cpg->cur_req);
struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
 
+   sg_miter_stop(&cpg->p.src_sg_it);
+   sg_miter_stop(&cpg->p.dst_sg_it);
+
if (req_ctx->op != COP_AES_CBC)
return ;
 
@@ -294,11 +299,9 @@ static void dequeue_complete_req(void)
if (cpg->p.hw_processed_bytes < cpg->p.hw_nbytes) {
/* process next scatter list entry */
cpg->eng_st = ENGINE_BUSY;
-   mv_process_current_q(0);
+   cpg->p.process(0);
} else {
-   sg_miter_stop(&cpg->p.src_sg_it);
-   sg_miter_stop(&cpg->p.dst_sg_it);
-   mv_crypto_algo_completion();
+   cpg->p.complete();
cpg->eng_st = ENGINE_IDLE;
local_bh_disable();
req->complete(req, 0);
@@ -331,6 +334,8 @@ static void mv_enqueue_new_req(struct ab
cpg->cur_req = &req->base;
memset(p, 0, sizeof(struct req_progress));
p->hw_nbytes = req->nbytes;
+   p->complete = mv_crypto_algo_completion;
+   p->process = mv_process_current_q;
 
num_sgs = count_sgs(req->src, req->nbytes);
sg_miter_start(&p->src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);




[PATCHv2 6/10] crypto mv_cesa : Rename a variable to a more suitable name

2010-04-08 Thread Uri Simchoni
Rename a variable to a more suitable name

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p5/drivers/crypto/mv_cesa.c linux-2.6.32.8_p6/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p5/drivers/crypto/mv_cesa.c  2010-03-16 11:43:37.443646086 +0200
+++ linux-2.6.32.8_p6/drivers/crypto/mv_cesa.c  2010-03-16 11:51:56.372211208 +0200
@@ -42,7 +42,7 @@ enum engine_status {
  * @hw_nbytes: total bytes to process in hw for this request
  * @sg_dst_left:   bytes left dst to process in this scatter list
  * @dst_start: offset to add to dst start position (scatter list)
- * @total_req_bytes:   total number of bytes processed (request).
+ * @hw_processed_bytes:number of bytes processed by hw (request).
  *
  * sg helper are used to iterate over the scatterlist. Since the size of the
  * SRAM may be less than the scatter size, this struct struct is used to keep
@@ -60,7 +60,7 @@ struct req_progress {
/* dst mostly */
int sg_dst_left;
int dst_start;
-   int total_req_bytes;
+   int hw_processed_bytes;
 };
 
 struct crypto_priv {
@@ -181,7 +181,7 @@ static void setup_data_in(void)
 {
struct req_progress *p = &cpg->p;
p->crypt_len =
-   min(p->hw_nbytes - p->total_req_bytes, cpg->max_req_size);
+   min(p->hw_nbytes - p->hw_processed_bytes, cpg->max_req_size);
copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START,
p->crypt_len);
 }
@@ -265,7 +265,7 @@ static void dequeue_complete_req(void)
int need_copy_len = cpg->p.crypt_len;
int sram_offset = 0;
 
-   cpg->p.total_req_bytes += cpg->p.crypt_len;
+   cpg->p.hw_processed_bytes += cpg->p.crypt_len;
do {
int dst_copy;
 
@@ -291,7 +291,7 @@ static void dequeue_complete_req(void)
} while (need_copy_len > 0);
 
BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
-   if (cpg->p.total_req_bytes < cpg->p.hw_nbytes) {
+   if (cpg->p.hw_processed_bytes < cpg->p.hw_nbytes) {
/* process next scatter list entry */
cpg->eng_st = ENGINE_BUSY;
mv_process_current_q(0);




[PATCHv2 5/10] crypto mv_cesa : Enqueue generic async requests

2010-04-08 Thread Uri Simchoni
Enqueue generic async requests rather than ablkcipher requests
in the driver's queue

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p4/drivers/crypto/mv_cesa.c linux-2.6.32.8_p5/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p4/drivers/crypto/mv_cesa.c  2010-03-16 10:54:07.322816221 +0200
+++ linux-2.6.32.8_p5/drivers/crypto/mv_cesa.c  2010-03-16 11:43:37.443646086 +0200
@@ -39,6 +39,7 @@ enum engine_status {
  * @sg_src_left:   bytes left in src to process (scatter list)
  * @src_start: offset to add to src start position (scatter list)
  * @crypt_len: length of current crypt process
+ * @hw_nbytes: total bytes to process in hw for this request
  * @sg_dst_left:   bytes left dst to process in this scatter list
  * @dst_start: offset to add to dst start position (scatter list)
  * @total_req_bytes:   total number of bytes processed (request).
@@ -55,6 +56,7 @@ struct req_progress {
int sg_src_left;
int src_start;
int crypt_len;
+   int hw_nbytes;
/* dst mostly */
int sg_dst_left;
int dst_start;
@@ -71,7 +73,7 @@ struct crypto_priv {
spinlock_t lock;
struct crypto_queue queue;
enum engine_status eng_st;
-   struct ablkcipher_request *cur_req;
+   struct crypto_async_request *cur_req;
struct req_progress p;
int max_req_size;
int sram_size;
@@ -175,18 +177,18 @@ static void copy_src_to_buf(struct req_p
}
 }
 
-static void setup_data_in(struct ablkcipher_request *req)
+static void setup_data_in(void)
 {
struct req_progress *p = &cpg->p;
p->crypt_len =
-   min((int)req->nbytes - p->total_req_bytes, cpg->max_req_size);
+   min(p->hw_nbytes - p->total_req_bytes, cpg->max_req_size);
copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START,
p->crypt_len);
 }
 
 static void mv_process_current_q(int first_block)
 {
-   struct ablkcipher_request *req = cpg->cur_req;
+   struct ablkcipher_request *req = ablkcipher_request_cast(cpg->cur_req);
struct mv_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
struct sec_accel_config op;
@@ -229,7 +231,7 @@ static void mv_process_current_q(int fir
ENC_P_DST(SRAM_DATA_OUT_START);
op.enc_key_p = SRAM_DATA_KEY_P;
 
-   setup_data_in(req);
+   setup_data_in();
op.enc_len = cpg->p.crypt_len;
memcpy(cpg->sram + SRAM_CONFIG, &op,
sizeof(struct sec_accel_config));
@@ -246,7 +248,7 @@ static void mv_process_current_q(int fir
 
 static void mv_crypto_algo_completion(void)
 {
-   struct ablkcipher_request *req = cpg->cur_req;
+   struct ablkcipher_request *req = ablkcipher_request_cast(cpg->cur_req);
struct mv_req_ctx *req_ctx = ablkcipher_request_ctx(req);
 
if (req_ctx->op != COP_AES_CBC)
@@ -257,7 +259,7 @@ static void mv_crypto_algo_completion(vo
 
 static void dequeue_complete_req(void)
 {
-   struct ablkcipher_request *req = cpg->cur_req;
+   struct crypto_async_request *req = cpg->cur_req;
void *buf;
int ret;
int need_copy_len = cpg->p.crypt_len;
@@ -289,7 +291,7 @@ static void dequeue_complete_req(void)
} while (need_copy_len > 0);
 
BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
-   if (cpg->p.total_req_bytes < req->nbytes) {
+   if (cpg->p.total_req_bytes < cpg->p.hw_nbytes) {
/* process next scatter list entry */
cpg->eng_st = ENGINE_BUSY;
mv_process_current_q(0);
@@ -299,7 +301,7 @@ static void dequeue_complete_req(void)
mv_crypto_algo_completion();
cpg->eng_st = ENGINE_IDLE;
local_bh_disable();
-   req->base.complete(&req->base, 0);
+   req->complete(req, 0);
local_bh_enable();
}
 }
@@ -323,16 +325,19 @@ static int count_sgs(struct scatterlist 
 
 static void mv_enqueue_new_req(struct ablkcipher_request *req)
 {
+   struct req_progress *p = &cpg->p;
int num_sgs;
 
-   cpg->cur_req = req;
-   memset(&cpg->p, 0, sizeof(struct req_progress));
+   cpg->cur_req = &req->base;
+   memset(p, 0, sizeof(struct req_progress));
+   p->hw_nbytes = req->nbytes;
 
num_sgs = count_sgs(req->src, req->nbytes);
-   sg_miter_start(&cpg->p.src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);
+   sg_miter_start(&p->src_sg_it, req->src, num_sgs, SG_MITER_FROM_SG);
 
num_sgs = count_sgs(req->dst, req->nbytes);
-   sg_miter_start(&cpg->p.dst_sg_it, req->dst, num_sgs, SG_MITER_TO_SG);
+   sg_miter_start(&p->dst_sg_it, req->dst, num_sgs, SG_MITER_TO_SG);
+
mv_process_current_q(1);
 }
 
@@ -378,13 +383,13 @@ static int queue_manag(void *data)
return 0;
 }
 
-static int mv_handle_req(struct ablkcipher_request *req)
+static int mv_handle_req(stru

[PATCHv2 4/10] crypto mv_cesa : Fix situations where the src sglist spans more data than the request asks for

2010-04-08 Thread Uri Simchoni
Fix for situations where the source scatterlist spans more data than the
request nbytes

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p3/drivers/crypto/mv_cesa.c linux-2.6.32.8_p4/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p3/drivers/crypto/mv_cesa.c  2010-03-16 09:06:10.183753278 +0200
+++ linux-2.6.32.8_p4/drivers/crypto/mv_cesa.c  2010-03-16 08:40:09.503257114 +0200
@@ -143,27 +143,45 @@ static int mv_setkey_aes(struct crypto_a
return 0;
 }
 
-static void setup_data_in(struct ablkcipher_request *req)
+static void copy_src_to_buf(struct req_progress *p, char *dbuf, int len)
 {
int ret;
-   void *buf;
-
-   if (!cpg->p.sg_src_left) {
-   ret = sg_miter_next(&cpg->p.src_sg_it);
-   BUG_ON(!ret);
-   cpg->p.sg_src_left = cpg->p.src_sg_it.length;
-   cpg->p.src_start = 0;
-   }
+   void *sbuf;
+   int copied = 0;
 
-   cpg->p.crypt_len = min(cpg->p.sg_src_left, cpg->max_req_size);
+   while (1) {
+   if (!p->sg_src_left) {
+   ret = sg_miter_next(&p->src_sg_it);
+   BUG_ON(!ret);
+   p->sg_src_left = p->src_sg_it.length;
+   p->src_start = 0;
+   }
 
-   buf = cpg->p.src_sg_it.addr;
-   buf += cpg->p.src_start;
+   sbuf = p->src_sg_it.addr + p->src_start;
 
-   memcpy(cpg->sram + SRAM_DATA_IN_START, buf, cpg->p.crypt_len);
+   if (p->sg_src_left <= len - copied) {
+   memcpy(dbuf + copied, sbuf, p->sg_src_left);
+   copied += p->sg_src_left;
+   p->sg_src_left = 0;
+   if (copied >= len)
+   break;
+   } else {
+   int copy_len = len - copied;
+   memcpy(dbuf + copied, sbuf, copy_len);
+   p->src_start += copy_len;
+   p->sg_src_left -= copy_len;
+   break;
+   }
+   }
+}
 
-   cpg->p.sg_src_left -= cpg->p.crypt_len;
-   cpg->p.src_start += cpg->p.crypt_len;
+static void setup_data_in(struct ablkcipher_request *req)
+{
+   struct req_progress *p = &cpg->p;
+   p->crypt_len =
+   min((int)req->nbytes - p->total_req_bytes, cpg->max_req_size);
+   copy_src_to_buf(p, cpg->sram + SRAM_DATA_IN_START,
+   p->crypt_len);
 }
 
 static void mv_process_current_q(int first_block)
@@ -289,12 +307,16 @@ static void dequeue_complete_req(void)
 static int count_sgs(struct scatterlist *sl, unsigned int total_bytes)
 {
int i = 0;
+   size_t cur_len;
 
-   do {
-   total_bytes -= sl[i].length;
-   i++;
-
-   } while (total_bytes > 0);
+   while (1) {
+   cur_len = sl[i].length;
+   ++i;
+   if (total_bytes > cur_len)
+   total_bytes -= cur_len;
+   else
+   break;
+   }
 
return i;
 }




[PATCHv2 3/10] crypto mv_cesa : Fix situation where the dest sglist is organized differently than the source sglist

2010-04-08 Thread Uri Simchoni
Bugfix for situations where the destination scatterlist has a different
buffer structure than the source scatterlist (e.g. source has one 2K
buffer and dest has 2 1K buffers)

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p2/drivers/crypto/mv_cesa.c linux-2.6.32.8_p3/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p2/drivers/crypto/mv_cesa.c  2010-03-16 09:04:01.860953458 +0200
+++ linux-2.6.32.8_p3/drivers/crypto/mv_cesa.c  2010-03-16 09:06:10.183753278 +0200
@@ -242,6 +242,8 @@ static void dequeue_complete_req(void)
struct ablkcipher_request *req = cpg->cur_req;
void *buf;
int ret;
+   int need_copy_len = cpg->p.crypt_len;
+   int sram_offset = 0;
 
cpg->p.total_req_bytes += cpg->p.crypt_len;
do {
@@ -257,14 +259,16 @@ static void dequeue_complete_req(void)
buf = cpg->p.dst_sg_it.addr;
buf += cpg->p.dst_start;
 
-   dst_copy = min(cpg->p.crypt_len, cpg->p.sg_dst_left);
-
-   memcpy(buf, cpg->sram + SRAM_DATA_OUT_START, dst_copy);
+   dst_copy = min(need_copy_len, cpg->p.sg_dst_left);
 
+   memcpy(buf,
+  cpg->sram + SRAM_DATA_OUT_START + sram_offset,
+  dst_copy);
+   sram_offset += dst_copy;
cpg->p.sg_dst_left -= dst_copy;
-   cpg->p.crypt_len -= dst_copy;
+   need_copy_len -= dst_copy;
cpg->p.dst_start += dst_copy;
-   } while (cpg->p.crypt_len > 0);
+   } while (need_copy_len > 0);
 
BUG_ON(cpg->eng_st != ENGINE_W_DEQUEUE);
if (cpg->p.total_req_bytes < req->nbytes) {




[PATCHv2 2/10] crypto mv_cesa : Remove compiler warning in mv_cesa driver

2010-04-08 Thread Uri Simchoni
Remove compiler warning

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_p1/drivers/crypto/mv_cesa.c linux-2.6.32.8_p2/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_p1/drivers/crypto/mv_cesa.c  2010-03-16 08:59:12.074583163 +0200
+++ linux-2.6.32.8_p2/drivers/crypto/mv_cesa.c  2010-03-16 09:04:01.860953458 +0200
@@ -178,6 +178,7 @@ static void mv_process_current_q(int fir
op.config = CFG_OP_CRYPT_ONLY | CFG_ENCM_AES | CFG_ENC_MODE_ECB;
break;
case COP_AES_CBC:
+   default:
op.config = CFG_OP_CRYPT_ONLY | CFG_ENCM_AES | CFG_ENC_MODE_CBC;
op.enc_iv = ENC_IV_POINT(SRAM_DATA_IV) |
ENC_IV_BUF_POINT(SRAM_DATA_IV_BUF);




[PATCHv2 1/10] crypto mv_cesa : Invoke the user callback from a softirq context

2010-04-08 Thread Uri Simchoni
Invoke the user callback from a softirq context

Signed-off-by: Uri Simchoni 
---
diff -upr linux-2.6.32.8_orig/drivers/crypto/mv_cesa.c linux-2.6.32.8_p1/drivers/crypto/mv_cesa.c
--- linux-2.6.32.8_orig/drivers/crypto/mv_cesa.c  2010-02-09 14:57:19.0 +0200
+++ linux-2.6.32.8_p1/drivers/crypto/mv_cesa.c  2010-03-16 08:59:12.074583163 +0200
@@ -275,7 +275,9 @@ static void dequeue_complete_req(void)
sg_miter_stop(&cpg->p.dst_sg_it);
mv_crypto_algo_completion();
cpg->eng_st = ENGINE_IDLE;
+   local_bh_disable();
req->base.complete(&req->base, 0);
+   local_bh_enable();
}
 }
 




[PATCHv2 0/10] crypto mv_cesa : Add sha1 and hmac(sha1) support to the mv_cesa driver

2010-04-08 Thread Uri Simchoni
This is a resubmission of a patchset I sent a while ago, which was corrupted
by my email client.

The following patchset adds async hashing (sha1 and hmac-sha1) to the mv_cesa 
crypto driver. This driver utilizes the Marvell CESA crypto accelerator that 
exists in some Marvell CPUs (Orion and Kirkwood). The existing driver has AES 
crypto support.

Compared to SW hashing on a 1.2GHz Kirkwood, the HW acceleration is about 20% 
faster, but more importantly, at reduced CPU utilization.

The patchset is divided as follows:
- patches 1-4 are bug/warning fixes to the existing driver
- patches 5-9 refactor the existing driver, with no functional change, to 
accommodate the added functionality
- patch 10 adds the sha1 and hmac-sha1 support.

The driver requires the sha1 and hmac sw drivers in order to handle some corner 
cases (i.e. it never falls back to sw for a complete request, but it sometimes 
hashes the last 64 bytes in sw).


Re: [PATCH 0/2] crypto: omap-sha1-md5: OMAP3 SHA1 & MD5 driver

2010-04-08 Thread Herbert Xu
Hi:

OK so you did answer my question :)

Dmitry Kasatkin  wrote:
>
> Interesting case with hmac.
> 
> return crypto_shash_init(&desc.shash) ?:
>crypto_shash_update(&desc.shash, ipad, bs) ?:
>crypto_shash_export(&desc.shash, ipad) ?:
>crypto_shash_init(&desc.shash) ?:
>crypto_shash_update(&desc.shash, opad, bs) ?:
>crypto_shash_export(&desc.shash, opad);
> 
> Basically it does not call final().
> Then it calls init() again.
> 
> hw has a certain limitation: it requires the last block to be processed
> with some bit set.
> When update() is called there is no way to know that no more update()
> calls will come.
> So the possible last block is stored and then hashed out from final().
> 
> I see that above code will not work with the driver.
> I wonder how intermediate export/import could be done with omap hw.
> 
> But if it's not possible, then why not have hmac(sha1) as just sw?
> Anyway, hmac should not have to process as huge an amount of data as the
> hash itself.
>
> What is your opinion/advice?

A sha1-only driver is not very useful since the biggest potential
user IPsec uses hmac(sha1).

Is the omap hw documentation available publicly?

Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 0/2] crypto: omap-sha1-md5: OMAP3 SHA1 & MD5 driver

2010-04-08 Thread Herbert Xu
On Tue, Mar 23, 2010 at 07:32:39PM +0800, Herbert Xu wrote:
>
> My only question is what's your plan with respect to HMAC? If
> you're going to do it in hardware then it's fine as it is.
> 
> Otherwise you need to implement export/import and we also need
> to add ahash support to hmac.c.

Dmitry, did you answer this before or did it get lost in the mail :)

Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCHv2 1/2] crypto: updates omap sham device related platform code

2010-04-08 Thread Paul Walmsley
On Thu, 8 Apr 2010, Dmitry Kasatkin wrote:

> Hi,
> 
> BTW..
> 
> How will it work if those resource structures are conditionally compiled
> for OMAP2, 3, or 4 anyway?
> Only one structure will be compiled at a time.

Hi Dmitry,

it is possible to build a kernel that will run on both OMAP2 and OMAP3 
(for example).  We call these kernels 'multi-OMAP' kernels.  In such a 
case, there will be multiple CONFIG_ARCH_OMAP* symbols defined, so 
multiple structure records will be included in the kernel image.  Also 
when this happens, the 'cpu_is_omap*' macros will be defined to something 
that is evaluated at kernel run-time, rather than compile-time (as would 
be the case with single-OMAP kernels). So then the appropriate structure 
can be passed at run-time to some device registration code.
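
For instance, something along these lines (untested sketch; the array names
are the per-SoC renames I suggested in my review, e.g.
omap2_/omap3_sha1_md5_resources):

    static void __init omap_init_sham(void)
    {
            struct resource *res = NULL;
            int nres = 0;

    #ifdef CONFIG_ARCH_OMAP2
            if (cpu_is_omap24xx()) {
                    res = omap2_sha1_md5_resources;
                    nres = ARRAY_SIZE(omap2_sha1_md5_resources);
            }
    #endif
    #ifdef CONFIG_ARCH_OMAP3
            if (cpu_is_omap34xx()) {
                    res = omap3_sha1_md5_resources;
                    nres = ARRAY_SIZE(omap3_sha1_md5_resources);
            }
    #endif
            if (!res)
                    return;

            sha1_md5_device.resource = res;
            sha1_md5_device.num_resources = nres;
            platform_device_register(&sha1_md5_device);
    }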


- Paul


Re: [PATCHv2 1/2] crypto: updates omap sham device related platform code

2010-04-08 Thread Dmitry Kasatkin

Hi,

BTW..

How will it work if those resource structures are conditionally compiled
for OMAP2, 3, or 4 anyway?

Only one structure will be compiled at a time.

- Dmitry


On 30/03/10 12:41, ext Paul Walmsley wrote:

Hi Dmitry,

a few comments:

On Thu, 25 Mar 2010, Dmitry Kasatkin wrote:

   

- registration
- clocks

Signed-off-by: Dmitry Kasatkin
---
  arch/arm/mach-omap2/clock2420_data.c   |2 +-
  arch/arm/mach-omap2/clock2430_data.c   |2 +-
  arch/arm/mach-omap2/clock3xxx_data.c   |2 +-
  arch/arm/mach-omap2/devices.c  |   26 --
  arch/arm/plat-omap/include/plat/omap34xx.h |5 +
  5 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/arm/mach-omap2/clock2420_data.c 
b/arch/arm/mach-omap2/clock2420_data.c
index d932b14..1820a55 100644
--- a/arch/arm/mach-omap2/clock2420_data.c
+++ b/arch/arm/mach-omap2/clock2420_data.c
@@ -1836,7 +1836,7 @@ static struct omap_clk omap2420_clks[] = {
CLK(NULL,   "vlynq_ick",  &vlynq_ick, CK_242X),
CLK(NULL,   "vlynq_fck",  &vlynq_fck, CK_242X),
CLK(NULL,   "des_ick",&des_ick,   CK_242X),
-   CLK(NULL,   "sha_ick",&sha_ick,   CK_242X),
+   CLK("omap-sham",  "ick",&sha_ick,   CK_242X),
CLK("omap_rng",   "ick",&rng_ick,   CK_242X),
CLK(NULL,   "aes_ick",&aes_ick,   CK_242X),
CLK(NULL,   "pka_ick",&pka_ick,   CK_242X),
diff --git a/arch/arm/mach-omap2/clock2430_data.c 
b/arch/arm/mach-omap2/clock2430_data.c
index 0438b6e..5884ac6 100644
--- a/arch/arm/mach-omap2/clock2430_data.c
+++ b/arch/arm/mach-omap2/clock2430_data.c
@@ -1924,7 +1924,7 @@ static struct omap_clk omap2430_clks[] = {
CLK(NULL,   "sdma_ick",   &sdma_ick,  CK_243X),
CLK(NULL,   "sdrc_ick",   &sdrc_ick,  CK_243X),
CLK(NULL,   "des_ick",&des_ick,   CK_243X),
-   CLK(NULL,   "sha_ick",&sha_ick,   CK_243X),
+   CLK("omap-sham",  "ick",&sha_ick,   CK_243X),
CLK("omap_rng",   "ick",&rng_ick,   CK_243X),
CLK(NULL,   "aes_ick",&aes_ick,   CK_243X),
CLK(NULL,   "pka_ick",&pka_ick,   CK_243X),
diff --git a/arch/arm/mach-omap2/clock3xxx_data.c 
b/arch/arm/mach-omap2/clock3xxx_data.c
index d5153b6..5a974dc 100644
--- a/arch/arm/mach-omap2/clock3xxx_data.c
+++ b/arch/arm/mach-omap2/clock3xxx_data.c
@@ -3360,7 +3360,7 @@ static struct omap_clk omap3xxx_clks[] = {
CLK("mmci-omap-hs.2", "ick",&mmchs3_ick,CK_3430ES2 | 
CK_AM35XX),
CLK(NULL,   "icr_ick",&icr_ick,   CK_343X),
CLK(NULL,   "aes2_ick",   &aes2_ick,  CK_343X),
-   CLK(NULL,   "sha12_ick",  &sha12_ick, CK_343X),
+   CLK("omap-sham",  "ick",&sha12_ick, CK_343X),
CLK(NULL,   "des2_ick",   &des2_ick,  CK_343X),
CLK("mmci-omap-hs.1", "ick",&mmchs2_ick,CK_3XXX),
CLK("mmci-omap-hs.0", "ick",&mmchs1_ick,CK_3XXX),
 

The above changes are all

Acked-by: Paul Walmsley

... but ...

   

diff --git a/arch/arm/mach-omap2/devices.c b/arch/arm/mach-omap2/devices.c
index 23e4d77..3e20b9c 100644
--- a/arch/arm/mach-omap2/devices.c
+++ b/arch/arm/mach-omap2/devices.c
@@ -26,6 +26,7 @@
  #include
  #include
  #include
+#include

  #include "mux.h"

@@ -453,7 +454,9 @@ static void omap_init_mcspi(void)
  static inline void omap_init_mcspi(void) {}
  #endif

-#ifdef CONFIG_OMAP_SHA1_MD5
+#if defined(CONFIG_CRYPTO_DEV_OMAP_SHAM) || defined(CONFIG_CRYPTO_DEV_OMAP_SHAM_MODULE)
+
+#ifdef CONFIG_ARCH_OMAP2
  static struct resource sha1_md5_resources[] = {
{
.start  = OMAP24XX_SEC_SHA1MD5_BASE,
@@ -465,9 +468,28 @@ static struct resource sha1_md5_resources[] = {
.flags  = IORESOURCE_IRQ,
}
  };
+#endif
+
+#ifdef CONFIG_ARCH_OMAP3
+static struct resource sha1_md5_resources[] = {
+   {
+   .start  = OMAP34XX_SEC_SHA1MD5_BASE,
+   .end= OMAP34XX_SEC_SHA1MD5_BASE + 0x64,
+   .flags  = IORESOURCE_MEM,
+   },
+   {
+   .start  = INT_34XX_SHA1MD52_IRQ,
+   .flags  = IORESOURCE_IRQ,
+   },
+   {
+   .start  = OMAP34XX_DMA_SHA1MD5_RX,
+   .flags  = IORESOURCE_DMA,
+   }
+};
+#endif
 

The above will break multi-OMAP2 kernels.  Please change the above to make
the variable names unique on a per-SoC basis (e.g.,
omap3_sha1_md5_resources) and modify the SHA1/MD5 device registration code
to use the appropriate struct resource array at runtime.  For an example,
see mach-omap2/devices.c:omap_init_mbox().

   


  static struct platform_device sha1_md5_device = {
-   .name   = "OMAP SHA1/MD5",
+   .name   = "omap-sham",
.id = -1,
.num_resources  = ARRAY_SIZE(sha1_md5_resources),
.resource   = sha1_md5_resources,
dif

Re: [PATCH 3/7] crypto/testmgr: add testing for arc4 based on ecb(arc4)

2010-04-08 Thread Sebastian Andrzej Siewior
* Herbert Xu | 2010-04-07 17:29:07 [+0800]:

>Sebastian, how about precomputing the IV and provide them directly
>as a hex array?
>
>To test arc4_setup_iv itself, you can add an alg_test_arc4 function
>(like alg_test_crc32) that tests IV generation specifically.
>
>Alternatively, just add an alg_test_arc4 that computes the IV
>before calling alg_test_skcipher.

I'll take a look at this.
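
Probably something along the lines of this very rough, untested skeleton
(arc4_setup_iv() is the helper from my patchset; the per-vector plumbing is
only sketched in the comment):

    static int alg_test_arc4(const struct alg_test_desc *desc,
                             const char *driver, u32 type, u32 mask)
    {
            /*
             * For each cipher_testvec in the suite: expand the vector's key
             * with arc4_setup_iv() into a per-vector IV buffer and point the
             * vector's ->iv at it, then run the stock skcipher test.
             */
            return alg_test_skcipher(desc, driver, type, mask);
    }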

>Cheers,

Sebastian