Re: IPSec ESN: Packet decryption fails with ESN-enabled connection

2019-01-09 Thread Harsh Jain


On 04-01-2019 14:04, Steffen Klassert wrote:
> On Thu, Jan 03, 2019 at 04:16:56PM +0530, Harsh Jain wrote:
>> On 02-01-2019 18:21, Herbert Xu wrote:
>>> Does this occur if you use software crypto on the receiving end
>>> while keeping the sending end unchanged?
>> I tried with "authencesn(hmac(sha1-ssse3),cbc(aes-asm))" on both sides.
>>
>> Server : iperf  -s  -w 512k  -p 20002
>>
>> Client : iperf  -t 60 -w 512k -l 2048 -c 1.0.0.96 -P 32 -p 20002
>>
>>> If not then I would start debugging this within your driver.
>> An ESP packet whose sequence number is out of the replay window gets dropped
>> with EBADMSG. It seems that "xfrm_replay_seqhi" intentionally increments the
>> "seq_hi" to fail verification for out-of-sequence packets.
> Yes, this is defined in RFC 4303 Appendix A2.2.

Thanks. It means we cannot avoid the verification step for packets with a low
Seql (low-order sequence number).
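
For reference, a minimal sketch of the high-order-bits guess from RFC 4303
Appendix A2.2 that xfrm_replay_seqhi() implements (function and variable
names here are illustrative, not the kernel's):

    /* Guess the high 32 bits (seq_hi) of a received ESN from its low 32
     * bits.  W = replay window size; t_lo/t_hi = highest sequence number
     * received so far.  Sketch only.
     */
    static u32 esn_guess_seq_hi(u32 seq_lo, u32 t_lo, u32 t_hi, u32 W)
    {
            u32 bottom = t_lo - W + 1;      /* lowest in-window seq */

            if (t_lo >= W - 1) {            /* window in one subspace */
                    /* anything below the window is assumed to be from
                     * the *next* subspace, so ICV verification must
                     * fail for genuinely old packets
                     */
                    return (seq_lo >= bottom) ? t_hi : t_hi + 1;
            }
            /* window spans the 2^32 boundary */
            return (seq_lo >= bottom) ? t_hi - 1 : t_hi;
    }

Plugging in the trace values from this thread (t_lo = 0x1ce62d, W = 0x40,
seq_lo = 0x1ce5ec, hence bottom = 0x1ce5ee) yields seq_hi = 1: the stale
packet is deliberately authenticated with the wrong high-order bits and
dropped with EBADMSG.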




Re: IPSec ESN: Packet decryption fails with ESN-enabled connection

2019-01-03 Thread Harsh Jain


On 02-01-2019 18:21, Herbert Xu wrote:
> On Wed, Dec 26, 2018 at 03:16:29PM +0530, Harsh Jain wrote:
>> +linux-crypto
>>
>> On 26-12-2018 14:54, Harsh Jain wrote:
>>> Hi All,
>>>
>>> Kernel version on both machines: 4.19.7.
>>>
>>> Packet drops with EBADMSG are observed on the receive end of the
>>> connection. It seems that sometimes the crypto driver receives a packet
>>> with a wrong "seq_hi" value in the AAD. See below the dump of assoc data
>>> for one such instance.
>>>
>>> [  380.823454] assoclen 8th byte 1 clen 1464 op 1  ==> High byte of ESN
>>> [  380.828398] authsize 12 cryptlen 1464
>>> [  380.832637] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06 
>>> ==> Decrypted data seems correct, last byte is the proto (06 = TCP)
>>> [  380.840215] dt0010: bf ee 4f 80 a4 7f 2a 50 6a 5a 0b 10
>>> [  380.846636] ass: 0a bc d3 31 <00 00 00 01> 00 1c e5 ec 0e af 04 
>>> 69 ==> ESN-Hi = 1
>>> [  380.854316] ass0010: a4 fc 08 ad
>>>
>>> Note: if I decrypt the same packet with ESN-Hi = 0, it decrypts
>>> successfully, which means the peer machine used ESN-Hi = 0 while encrypting.
>>>
>>> To debug further, we added a trace in "xfrm_replay_seqhi". The following
>>> was the output:
>>>
>>>  -0 [003] ..s.   380.967766: xfrm_replay_seqhi: seq_hi 0x 1 seq 
>>> 0x 1ce5ec bottom 0x 1ce5ee replay seq 0x 1ce62d replay window 0x 40
>>>
>>> 1) Is this expected behaviour with an ESN-enabled connection?
>>>
>>> 2) If packets are supposed to be dropped, can't we avoid the decryption overhead?
>>>
>>> The following logs are attached:
>>>
>>> 1) dmesg log
>>>
>>> 2) debug patch used to reproduce the issue.
>>>
>>> 3) ftrace log file
>>>
>>> 4) ip xfrm state list
> Does this occur if you use software crypto on the receiving end
> while keeping the sending end unchanged?

I tried with "authencesn(hmac(sha1-ssse3),cbc(aes-asm))" on both sides.

Server : iperf  -s  -w 512k  -p 20002

Client : iperf  -t 60 -w 512k -l 2048 -c 1.0.0.96 -P 32 -p 20002

>
> If not then I would start debugging this within your driver.

An ESP packet whose sequence number is out of the replay window gets dropped
with EBADMSG. It seems that "xfrm_replay_seqhi" intentionally increments the
"seq_hi" to fail verification for out-of-sequence packets.

>
> Thanks,


[PATCH] crypto:authencesn: Avoid twice completion call in decrypt path

2019-01-03 Thread Harsh Jain
The authencesn template in the decrypt path unconditionally calls
aead_request_complete() after ahash verification, which leads to the following
kernel panic after decryption.

[  338.539800] BUG: unable to handle kernel NULL pointer dereference at 
0004
[  338.548372] PGD 0 P4D 0
[  338.551157] Oops:  [#1] SMP PTI
[  338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: GW 
I   4.19.7+ #13
[  338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.007/29/10
[  338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
[  338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 
25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff <8b> 04 25 
04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
[  338.598547] RSP: 0018:911c97803c00 EFLAGS: 00010246
[  338.604268] RAX: 0002 RBX: 911c4469ee00 RCX: 
[  338.612090] RDX:  RSI: 0130 RDI: 911b87c20400
[  338.619874] RBP:  R08: 911b87c20498 R09: 000a
[  338.627610] R10: 0001 R11: 0004 R12: 
[  338.635402] R13: 911c8959 R14: 911c9173 R15: 
[  338.643234] FS:  () GS:911c9780() 
knlGS:
[  338.652047] CS:  0010 DS:  ES:  CR0: 80050033
[  338.658299] CR2: 0004 CR3: 0001ec20a000 CR4: 06f0
[  338.666382] Call Trace:
[  338.669051]  
[  338.671254]  esp_input_done+0x12/0x20 [esp4]
[  338.675922]  chcr_handle_resp+0x3b5/0x790 [chcr]
[  338.680949]  cpl_fw6_pld_handler+0x37/0x60 [chcr]
[  338.686080]  chcr_uld_rx_handler+0x22/0x50 [chcr]
[  338.691233]  uldrx_handler+0x8c/0xc0 [cxgb4]
[  338.695923]  process_responses+0x2f0/0x5d0 [cxgb4]
[  338.701177]  ? bitmap_find_next_zero_area_off+0x3a/0x90
[  338.706882]  ? matrix_alloc_area.constprop.7+0x60/0x90
[  338.712517]  ? apic_update_irq_cfg+0x82/0xf0
[  338.717177]  napi_rx_handler+0x14/0xe0 [cxgb4]
[  338.722015]  net_rx_action+0x2aa/0x3e0
[  338.726136]  __do_softirq+0xcb/0x280
[  338.730054]  irq_exit+0xde/0xf0
[  338.733504]  do_IRQ+0x54/0xd0
[  338.736745]  common_interrupt+0xf/0xf

Signed-off-by: Harsh Jain 
Cc: stable@vger.kernel.org
---
 crypto/authencesn.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/crypto/authencesn.c b/crypto/authencesn.c
index 80a25cc..4741fe8 100644
--- a/crypto/authencesn.c
+++ b/crypto/authencesn.c
@@ -279,7 +279,7 @@ static void authenc_esn_verify_ahash_done(struct 
crypto_async_request *areq,
struct aead_request *req = areq->data;
 
err = err ?: crypto_authenc_esn_decrypt_tail(req, 0);
-   aead_request_complete(req, err);
+   authenc_esn_request_complete(req, err);
 }
 
 static int crypto_authenc_esn_decrypt(struct aead_request *req)
-- 
2.1.4
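
For context: the helper the patch switches to differs from a bare
aead_request_complete() in that it skips completion while the request is
still in progress, which is what prevents the completion callback from
running twice. Roughly (sketch of the existing helper in
crypto/authencesn.c):

    static void authenc_esn_request_complete(struct aead_request *req,
                                             int err)
    {
            if (err != -EINPROGRESS)
                    aead_request_complete(req, err);
    }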



Re: IPSec ESN: Packet decryption fails with ESN-enabled connection

2018-12-26 Thread Harsh Jain
+linux-crypto

On 26-12-2018 14:54, Harsh Jain wrote:
> Hi All,
>
> Kernel version on both machines: 4.19.7.
>
> Packet drops with EBADMSG are observed on the receive end of the connection.
> It seems that sometimes the crypto driver receives a packet with a wrong
> "seq_hi" value in the AAD. See below the dump of assoc data for one such
> instance.
>
> [  380.823454] assoclen 8th byte 1 clen 1464 op 1  ==> High byte of ESN
> [  380.828398] authsize 12 cryptlen 1464
> [  380.832637] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06 
> ==> Decrypted data seems correct, last byte is the proto (06 = TCP)
> [  380.840215] dt0010: bf ee 4f 80 a4 7f 2a 50 6a 5a 0b 10
> [  380.846636] ass: 0a bc d3 31 <00 00 00 01> 00 1c e5 ec 0e af 04 69 
> ==> ESN-Hi = 1
> [  380.854316] ass0010: a4 fc 08 ad
>
> Note: if I decrypt the same packet with ESN-Hi = 0, it decrypts successfully,
> which means the peer machine used ESN-Hi = 0 while encrypting.
>
> To debug further, we added a trace in "xfrm_replay_seqhi". The following
> was the output:
>
>  -0 [003] ..s.   380.967766: xfrm_replay_seqhi: seq_hi 0x 1 seq 0x 
> 1ce5ec bottom 0x 1ce5ee replay seq 0x 1ce62d replay window 0x 40
>
> 1) Is this expected behaviour with an ESN-enabled connection?
>
> 2) If packets are supposed to be dropped, can't we avoid the decryption overhead?
>
> The following logs are attached:
>
> 1) dmesg log
>
> 2) debug patch used to reproduce the issue.
>
> 3) ftrace log file
>
> 4) ip xfrm state list
>
>
> Regards
>
> Harsh Jain
>
>


IPSec ESN: Packet decryption fails with ESN-enabled connection

2018-12-26 Thread Harsh Jain
Hi All,

Kernel version on both machines: 4.19.7.

Packet drops with EBADMSG are observed on the receive end of the connection.
It seems that sometimes the crypto driver receives a packet with a wrong
"seq_hi" value in the AAD. See below the dump of assoc data for one such
instance.

[  380.823454] assoclen 8th byte 1 clen 1464 op 1  ==> High byte of ESN
[  380.828398] authsize 12 cryptlen 1464
[  380.832637] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06 ==>
Decrypted data seems correct, last byte is the proto (06 = TCP)
[  380.840215] dt0010: bf ee 4f 80 a4 7f 2a 50 6a 5a 0b 10
[  380.846636] ass: 0a bc d3 31 <00 00 00 01> 00 1c e5 ec 0e af 04 69 
==> ESN-Hi = 1
[  380.854316] ass0010: a4 fc 08 ad

Note: if I decrypt the same packet with ESN-Hi = 0, it decrypts successfully,
which means the peer machine used ESN-Hi = 0 while encrypting.

To debug further, we added a trace in "xfrm_replay_seqhi". The following was
the output:

 -0 [003] ..s.   380.967766: xfrm_replay_seqhi: seq_hi 0x 1 seq 0x 
1ce5ec bottom 0x 1ce5ee replay seq 0x 1ce62d replay window 0x 40

1) Is this expected behaviour with an ESN-enabled connection?

2) If packets are supposed to be dropped, can't we avoid the decryption overhead?

The following logs are attached:

1) dmesg log

2) debug patch used to reproduce the issue.

3) ftrace log file

4) ip xfrm state list


Regards

Harsh Jain


[  332.906713] alg: No test for seqiv(rfc4106(gcm(aes))) 
(seqiv(rfc4106-gcm-aes-chcr))
[  380.823454] assoclen 8th byte 1 clen 1464 op 1
[  380.828398] authsize 12 cryptlen 1464
[  380.832637] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  380.840215] dt0010: bf ee 4f 80 a4 7f 2a 50 6a 5a 0b 10
[  380.846636] ass: 0a bc d3 31 00 00 00 01 00 1c e5 ec 0e af 04 69
[  380.854316] ass0010: a4 fc 08 ad
[  410.554838] assoclen 8th byte 1 clen 48 op 1
[  410.559944] authsize 12 cryptlen 48
[  410.563885] dt: 01 01 08 0a 6c 1d 2c 2e 16 7d fa 58 00 01 01 06
[  410.571504] dt0010: 35 05 32 69 93 2a 68 24 4c 45 b3 8d
[  410.577908] ass: 0a bc d3 31 00 00 00 01 00 3f 89 5e 0e af 04 69
[  410.585604] ass0010: a4 df 64 1f
[  410.783712] assoclen 8th byte 1 clen 1464 op 1
[  410.788603] authsize 12 cryptlen 1464
[  410.792617] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  410.800236] dt0010: 49 7d 28 3e e2 35 66 ca 0f 84 54 b1
[  410.806615] ass: 0a bc d3 31 00 00 00 01 00 3f c2 c9 0e af 04 69
[  410.814336] ass0010: a4 df 2f 88
[  410.856390] assoclen 8th byte 1 clen 1464 op 1
[  410.861499] authsize 12 cryptlen 1464
[  410.865710] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  410.873407] dt0010: 3f 59 b5 41 1f a0 2a 49 72 38 ea 04
[  410.879786] ass: 0a bc d3 31 00 00 00 01 00 3f cc b6 0e af 04 69
[  410.887404] ass0010: a4 df 21 f7
[  410.895450] assoclen 8th byte 1 clen 1464 op 1
[  410.900492] authsize 12 cryptlen 1464
[  410.904686] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  410.912327] dt0010: f7 67 32 e5 55 5c 46 b0 7c 58 dd d8
[  410.918725] ass: 0a bc d3 31 00 00 00 01 00 3f cd 9c 0e af 04 69
[  410.926440] ass0010: a4 df 20 dd
[  410.960343] assoclen 8th byte 1 clen 1464 op 1
[  410.965221] authsize 12 cryptlen 1464
[  410.969334] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  410.977003] dt0010: f0 11 fd 7c cb 11 86 86 af 91 9b 6d
[  410.983527] ass: 0a bc d3 31 00 00 00 01 00 3f d5 21 0e af 04 69
[  410.991697] ass0010: a4 df 38 60
[  411.156836] assoclen 8th byte 1 clen 1464 op 1
[  411.161836] authsize 12 cryptlen 1464
[  411.166052] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  411.173699] dt0010: ad 04 6f 17 40 fe 82 72 76 80 45 09
[  411.180183] ass: 0a bc d3 31 00 00 00 01 00 40 01 d1 0e af 04 69
[  411.187981] ass0010: a4 a0 ec 90
[  411.218735] assoclen 8th byte 1 clen 1464 op 1
[  411.223650] authsize 12 cryptlen 1464
[  411.227841] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  411.235115] dt0010: 15 1f 5d d6 c0 f3 93 85 0e 95 5e 48
[  411.241166] ass: 0a bc d3 31 00 00 00 01 00 40 05 9b 0e af 04 69
[  411.248798] ass0010: a4 a0 e8 da
[  411.343551] assoclen 8th byte 1 clen 1464 op 1
[  411.348507] authsize 12 cryptlen 1464
[  411.352691] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  411.360120] dt0010: 6f 30 6e bb 11 0c 79 f2 39 7e a0 1a
[  411.366496] ass: 0a bc d3 31 00 00 00 01 00 40 17 3d 0e af 04 69
[  411.374265] ass0010: a4 a0 fa 7c
[  411.390500] assoclen 8th byte 1 clen 1464 op 1
[  411.395523] authsize 12 cryptlen 1464
[  411.399569] dt: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 06
[  411.406857] dt0010: d0 9f 7c cf 6a c7 90 51 4f 3d 1f dc
[  411.413053] ass: 0a bc d3 31 00 00 00 01 00 40 19 ff 0e af 04 69
[  411.420480] ass0010: a4 a0 f4 be
[  411.451036] assoclen 8th byte 1 clen 1464 op 1

Re: Bug with GRE tunnel and "ip xfrm" GRE match?

2017-10-18 Thread Harsh Jain
Also adding netdev for more inputs.

On Wed, Oct 18, 2017 at 12:13 PM, Harsh Jain  wrote:
> Hi Keith,
>
> It has been a long time since I observed this issue. What I remember is
> that the kernel patch which I shared was not compatible with the latest
> kernel. Thereafter I switched to another project and didn't get a chance
> to reproduce the issue. It is an OVS bug. I am adding the OVS dev team;
> maybe they can help you out.
>
> Regards
> Harsh Jain
>
> On Wed, Oct 18, 2017 at 5:19 AM, Keith Holleman
>  wrote:
>>
>> Does anyone know the status of this problem?  The code has changed from the
>> original proposed patch but I don't think I see it fixed as of yet.
>>
>> https://mail.openvswitch.org/pipermail/ovs-discuss/2015-June/037681.html
>>
>> I can't find an issue that was ever raised to track this either.
>>
>> The reason I'm asking is that I have run into what seems to be very similar
>> behavior, where an installed "ip xfrm" policy that attempts to match on GRE
>> keys does not seem to work when the GRE packet is generated by a local
>> OVS switch.
>>
>> -K


[PATCH v2 2/7] crypto:chelsio: Check error code with IS_ERR macro

2017-10-08 Thread Harsh Jain
From: Yeshaswi M R Gowda 

create_hash_wr() returns an ERR_PTR()-encoded error, not NULL, so check the
result with the IS_ERR macro and return the proper error code.

Signed-off-by: Jitendra Lulla 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index bdb1014..e4bf32d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1455,8 +1455,8 @@ static int chcr_ahash_update(struct ahash_request *req)
req_ctx->result = 0;
req_ctx->data_len += params.sg_len + params.bfr_len;
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
if (remainder) {
u8 *temp;
@@ -1519,8 +1519,8 @@ static int chcr_ahash_final(struct ahash_request *req)
params.more = 0;
}
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
@@ -1570,8 +1570,8 @@ static int chcr_ahash_finup(struct ahash_request *req)
}
 
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
@@ -1621,8 +1621,8 @@ static int chcr_ahash_digest(struct ahash_request *req)
}
 
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
-- 
2.1.4
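
The idiom behind the fix, in isolation: create_hash_wr() reports failure
through an ERR_PTR()-encoded pointer rather than NULL, so a !skb test lets
an encoded error through as a "valid" pointer. A minimal caller sketch
(illustrative):

    #include <linux/err.h>

    skb = create_hash_wr(req, &params);
    if (IS_ERR(skb))
            return PTR_ERR(skb);    /* propagates -ENOMEM, -EINVAL, ... */
    /* a !skb check here would miss the encoded error and oops later */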



[PATCH v2 1/7] crypto:chelsio: Remove unused parameter

2017-10-08 Thread Harsh Jain
From: Yeshaswi M R Gowda 

Remove unused parameters that are no longer needed by the latest firmware.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 43 +++---
 drivers/crypto/chelsio/chcr_algo.h | 12 +--
 2 files changed, 23 insertions(+), 32 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 0e81607..bdb1014 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -577,36 +577,27 @@ static int chcr_cipher_fallback(struct crypto_skcipher 
*cipher,
 static inline void create_wreq(struct chcr_context *ctx,
   struct chcr_wr *chcr_req,
   void *req, struct sk_buff *skb,
-  int kctx_len, int hash_sz,
-  int is_iv,
+  int hash_sz,
   unsigned int sc_len,
   unsigned int lcb)
 {
struct uld_ctx *u_ctx = ULD_CTX(ctx);
-   int iv_loc = IV_DSGL;
int qid = u_ctx->lldi.rxq_ids[ctx->rx_qidx];
-   unsigned int immdatalen = 0, nr_frags = 0;
+   unsigned int immdatalen = 0;
 
-   if (is_ofld_imm(skb)) {
+   if (is_ofld_imm(skb))
immdatalen = skb->data_len;
-   iv_loc = IV_IMMEDIATE;
-   } else {
-   nr_frags = skb_shinfo(skb)->nr_frags;
-   }
 
-   chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE(immdatalen,
-   ((sizeof(chcr_req->key_ctx) + kctx_len) >> 4));
+   chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE;
chcr_req->wreq.pld_size_hash_size =
-   htonl(FW_CRYPTO_LOOKASIDE_WR_PLD_SIZE_V(sgl_lengths[nr_frags]) |
- FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_V(hash_sz));
+   htonl(FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_V(hash_sz));
chcr_req->wreq.len16_pkd =
htonl(FW_CRYPTO_LOOKASIDE_WR_LEN16_V(DIV_ROUND_UP(
(calc_tx_flits_ofld(skb) * 8), 16)));
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP, !!lcb,
-   ctx->tx_qidx);
+   !!lcb, ctx->tx_qidx);
 
chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
   qid);
@@ -616,7 +607,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req->sc_imm.cmd_more = FILL_CMD_MORE(immdatalen);
chcr_req->sc_imm.len = cpu_to_be32(sizeof(struct cpl_tx_sec_pdu) +
   sizeof(chcr_req->key_ctx) +
-  kctx_len + sc_len + immdatalen);
+  sc_len + immdatalen);
 }
 
 /**
@@ -706,8 +697,8 @@ static struct sk_buff *create_cipher_wr(struct 
cipher_wr_param *wrparam)
write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, wrparam->srcsg, wrparam->bytes);
atomic_inc(&adap->chcr_stats.cipher_rqst);
-   create_wreq(ctx, chcr_req, &(wrparam->req->base), skb, kctx_len, 0, 1,
-   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl,
+   create_wreq(ctx, chcr_req, &(wrparam->req->base), skb, 0,
+   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl + kctx_len,
ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC);
reqctx->skb = skb;
skb_get(skb);
@@ -1417,8 +1408,8 @@ static struct sk_buff *create_hash_wr(struct 
ahash_request *req,
if (param->sg_len != 0)
write_sg_to_skb(skb, &frags, req->src, param->sg_len);
atomic_inc(&adap->chcr_stats.digest_rqst);
-   create_wreq(ctx, chcr_req, &req->base, skb, kctx_len,
-   hash_size_in_response, 0, DUMMY_BYTES, 0);
+   create_wreq(ctx, chcr_req, &req->base, skb, hash_size_in_response,
+   DUMMY_BYTES + kctx_len, 0);
req_ctx->skb = skb;
skb_get(skb);
return skb;
@@ -2080,8 +2071,8 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
write_buffer_to_skb(skb, &frags, req->iv, ivsize);
write_sg_to_skb(skb, &frags, src, req->cryptlen);
atomic_inc(&adap->chcr_stats.cipher_rqst);
-   create_wreq(ctx, chcr_req, &req->base, skb, kctx_len, size, 1,
-  sizeof(struct cpl_rx_phys_dsgl) + dst_size, 0);
+   create_wreq(ctx, chcr_req, &req->base, skb, size,
+  sizeof(struct cpl_rx_phys_dsgl) + dst_size + kctx_len, 0);
reqctx->skb = skb;

[PATCH v2 4/7] crypto:chelsio:Use x8_ble gf multiplication to calculate IV.

2017-10-08 Thread Harsh Jain
gf128mul_x8_ble() reduces the number of gf multiplication iterations by a
factor of 8.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 11 +--
 drivers/crypto/chelsio/chcr_crypto.h |  1 +
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index e4bf32d..e0ab34a 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -888,9 +888,11 @@ static int chcr_update_tweak(struct ablkcipher_request 
*req, u8 *iv)
int ret, i;
u8 *key;
unsigned int keylen;
+   int round = reqctx->last_req_len / AES_BLOCK_SIZE;
+   int round8 = round / 8;
 
cipher = ablkctx->aes_generic;
-   memcpy(iv, req->info, AES_BLOCK_SIZE);
+   memcpy(iv, reqctx->iv, AES_BLOCK_SIZE);
 
keylen = ablkctx->enckey_len / 2;
key = ablkctx->key + keylen;
@@ -899,7 +901,10 @@ static int chcr_update_tweak(struct ablkcipher_request 
*req, u8 *iv)
goto out;
 
crypto_cipher_encrypt_one(cipher, iv, iv);
-   for (i = 0; i < (reqctx->processed / AES_BLOCK_SIZE); i++)
+   for (i = 0; i < round8; i++)
+   gf128mul_x8_ble((le128 *)iv, (le128 *)iv);
+
+   for (i = 0; i < (round % 8); i++)
gf128mul_x_ble((le128 *)iv, (le128 *)iv);
 
crypto_cipher_decrypt_one(cipher, iv, iv);
@@ -1040,6 +1045,7 @@ static int chcr_handle_cipher_resp(struct 
ablkcipher_request *req,
CRYPTO_ALG_SUB_TYPE_CTR)
bytes = adjust_ctr_overflow(reqctx->iv, bytes);
reqctx->processed += bytes;
+   reqctx->last_req_len = bytes;
wrparam.qid = u_ctx->lldi.rxq_ids[ctx->rx_qidx];
wrparam.req = req;
wrparam.bytes = bytes;
@@ -1132,6 +1138,7 @@ static int process_cipher(struct ablkcipher_request *req,
goto error;
}
reqctx->processed = bytes;
+   reqctx->last_req_len = bytes;
reqctx->dst = reqctx->dstsg;
reqctx->op = op_type;
wrparam.qid = qid;
diff --git a/drivers/crypto/chelsio/chcr_crypto.h 
b/drivers/crypto/chelsio/chcr_crypto.h
index 30af1ee..b3722b3 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -247,6 +247,7 @@ struct chcr_blkcipher_req_ctx {
struct scatterlist *dst;
struct scatterlist *newdstsg;
unsigned int processed;
+   unsigned int last_req_len;
unsigned int op;
short int dst_nents;
u8 iv[CHCR_MAX_CRYPTO_IV_LEN];
-- 
2.1.4
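
To quantify the saving: advancing the XTS tweak past n already-processed
AES blocks requires multiplying it by x^n in GF(2^128). A sketch with an
illustrative block count:

    int n = reqctx->last_req_len / AES_BLOCK_SIZE;  /* e.g. n = 1024 */
    int i;

    /* old: n calls to gf128mul_x_ble()              -> 1024 iterations
     * new: n/8 calls to gf128mul_x8_ble() + n%8     ->  128 iterations
     */
    for (i = 0; i < n / 8; i++)
            gf128mul_x8_ble((le128 *)iv, (le128 *)iv);
    for (i = 0; i < n % 8; i++)
            gf128mul_x_ble((le128 *)iv, (le128 *)iv);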



[PATCH v2 5/7] crypto:chelsio:Remove allocation of sg list to implement 2K limit of dsgl header

2017-10-08 Thread Harsh Jain
Update the DMA address index instead of allocating a new sg list to impose the
2K size limit on each entry.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 237 +++
 drivers/crypto/chelsio/chcr_algo.h   |   3 +-
 drivers/crypto/chelsio/chcr_core.h   |   2 +-
 drivers/crypto/chelsio/chcr_crypto.h |   6 -
 4 files changed, 76 insertions(+), 172 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index e0ab34a..b13991d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -117,6 +117,21 @@ static inline unsigned int sgl_len(unsigned int n)
return (3 * n) / 2 + (n & 1) + 2;
 }
 
+static int dstsg_2k(struct scatterlist *sgl, unsigned int reqlen)
+{
+   int nents = 0;
+   unsigned int less;
+
+   while (sgl && reqlen) {
+   less = min(reqlen, sgl->length);
+   nents += DIV_ROUND_UP(less, CHCR_SG_SIZE);
+   reqlen -= less;
+   sgl = sg_next(sgl);
+   }
+
+   return nents;
+}
+
 static void chcr_verify_tag(struct aead_request *req, u8 *input, int *err)
 {
u8 temp[SHA512_DIGEST_SIZE];
@@ -166,8 +181,6 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
kfree_skb(ctx_req.ctx.reqctx->skb);
ctx_req.ctx.reqctx->skb = NULL;
}
-   free_new_sg(ctx_req.ctx.reqctx->newdstsg);
-   ctx_req.ctx.reqctx->newdstsg = NULL;
if (ctx_req.ctx.reqctx->verify == VERIFY_SW) {
chcr_verify_tag(ctx_req.req.aead_req, input,
&err);
@@ -388,31 +401,41 @@ static void write_phys_cpl(struct cpl_rx_phys_dsgl 
*phys_cpl,
 {
struct phys_sge_pairs *to;
unsigned int len = 0, left_size = sg_param->obsize;
-   unsigned int nents = sg_param->nents, i, j = 0;
+   unsigned int j = 0;
+   int offset, ent_len;
 
phys_cpl->op_to_tid = htonl(CPL_RX_PHYS_DSGL_OPCODE_V(CPL_RX_PHYS_DSGL)
| CPL_RX_PHYS_DSGL_ISRDMA_V(0));
+   to = (struct phys_sge_pairs *)((unsigned char *)phys_cpl +
+  sizeof(struct cpl_rx_phys_dsgl));
+   while (left_size && sg) {
+   len = min_t(u32, left_size, sg_dma_len(sg));
+   offset = 0;
+   while (len) {
+   ent_len =  min_t(u32, len, CHCR_SG_SIZE);
+   to->len[j % 8] = htons(ent_len);
+   to->addr[j % 8] = cpu_to_be64(sg_dma_address(sg) +
+ offset);
+   offset += ent_len;
+   len -= ent_len;
+   j++;
+   if ((j % 8) == 0)
+   to++;
+   }
+   left_size -= min(left_size, sg_dma_len(sg));
+   sg = sg_next(sg);
+   }
phys_cpl->pcirlxorder_to_noofsgentr =
htonl(CPL_RX_PHYS_DSGL_PCIRLXORDER_V(0) |
  CPL_RX_PHYS_DSGL_PCINOSNOOP_V(0) |
  CPL_RX_PHYS_DSGL_PCITPHNTENB_V(0) |
  CPL_RX_PHYS_DSGL_PCITPHNT_V(0) |
  CPL_RX_PHYS_DSGL_DCAID_V(0) |
- CPL_RX_PHYS_DSGL_NOOFSGENTR_V(nents));
+ CPL_RX_PHYS_DSGL_NOOFSGENTR_V(j));
phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR;
phys_cpl->rss_hdr_int.qid = htons(sg_param->qid);
phys_cpl->rss_hdr_int.hash_val = 0;
-   to = (struct phys_sge_pairs *)((unsigned char *)phys_cpl +
-  sizeof(struct cpl_rx_phys_dsgl));
-   for (i = 0; nents && left_size; to++) {
-   for (j = 0; j < 8 && nents && left_size; j++, nents--) {
-   len = min(left_size, sg_dma_len(sg));
-   to->len[j] = htons(len);
-   to->addr[j] = cpu_to_be64(sg_dma_address(sg));
-   left_size -= len;
-   sg = sg_next(sg);
-   }
-   }
+
 }
 
 static inline int map_writesg_phys_cpl(struct device *dev,
@@ -523,31 +546,33 @@ static int generate_copy_rrkey(struct ablk_ctx *ablkctx,
 static int chcr_sg_ent_in_wr(struct scatterlist *src,
 struct scatterlist *dst,
 unsigned int minsg,
-unsigned int space,
-short int *sent,
-short int *dent)
+unsigned int space)
 {
int srclen = 0, dstlen = 0;
int srcsg = minsg, dstsg = 0;
+   int offset = 0, less;
 
-   *sent = 0;
-   *dent = 0;
while (src &&

[PATCH v2 7/7] crypto:chelsio: Fix memory leak

2017-10-08 Thread Harsh Jain
Fix memory leak when device does not support crypto.

Reported-by: Dan Carpenter 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_core.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c 
b/drivers/crypto/chelsio/chcr_core.c
index b6dd9cb..4f677b3 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -154,15 +154,15 @@ static void *chcr_uld_add(const struct cxgb4_lld_info 
*lld)
struct uld_ctx *u_ctx;
 
/* Create the device and add it in the device list */
+   if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE))
+   return ERR_PTR(-EOPNOTSUPP);
+
+   /* Create the device and add it in the device list */
u_ctx = kzalloc(sizeof(*u_ctx), GFP_KERNEL);
if (!u_ctx) {
u_ctx = ERR_PTR(-ENOMEM);
goto out;
}
-   if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE)) {
-   u_ctx = ERR_PTR(-ENOMEM);
-   goto out;
-   }
u_ctx->lldi = *lld;
 out:
return u_ctx;
-- 
2.1.4
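
The leak, in isolation: the old code allocated u_ctx first and then, on an
unsupported device, overwrote the only pointer to it with an ERR_PTR before
returning, so the allocation could never be freed. Condensed from the
removed lines above:

    u_ctx = kzalloc(sizeof(*u_ctx), GFP_KERNEL);    /* allocated */
    if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE)) {
            u_ctx = ERR_PTR(-ENOMEM);   /* pointer lost -> leak, with a
                                         * misleading error code too */
            goto out;
    }

Checking the capability before allocating, and returning -EOPNOTSUPP,
removes both problems.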



[PATCH v2 6/7] crypto:chelsio:Move DMA un/mapping to chcr from lld cxgb4 driver

2017-10-08 Thread Harsh Jain
Allow chcr to do the DMA mapping/unmapping instead of the LLD cxgb4.
This moves the "copy AAD to dst buffer" requirement from the driver to
the firmware.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 1645 ++
 drivers/crypto/chelsio/chcr_algo.h   |   44 +-
 drivers/crypto/chelsio/chcr_crypto.h |  114 ++-
 drivers/net/ethernet/chelsio/cxgb4/sge.c |8 +-
 4 files changed, 1116 insertions(+), 695 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index b13991d..646dfff 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -70,6 +70,8 @@
 #include "chcr_algo.h"
 #include "chcr_crypto.h"
 
+#define IV AES_BLOCK_SIZE
+
 static inline  struct chcr_aead_ctx *AEAD_CTX(struct chcr_context *ctx)
 {
return ctx->crypto_ctx->aeadctx;
@@ -102,7 +104,7 @@ static inline struct uld_ctx *ULD_CTX(struct chcr_context 
*ctx)
 
 static inline int is_ofld_imm(const struct sk_buff *skb)
 {
-   return (skb->len <= CRYPTO_MAX_IMM_TX_PKT_LEN);
+   return (skb->len <= SGE_MAX_WR_LEN);
 }
 
 /*
@@ -117,21 +119,92 @@ static inline unsigned int sgl_len(unsigned int n)
return (3 * n) / 2 + (n & 1) + 2;
 }
 
-static int dstsg_2k(struct scatterlist *sgl, unsigned int reqlen)
+static int sg_nents_xlen(struct scatterlist *sg, unsigned int reqlen,
+unsigned int entlen,
+unsigned int skip)
 {
int nents = 0;
unsigned int less;
+   unsigned int skip_len = 0;
 
-   while (sgl && reqlen) {
-   less = min(reqlen, sgl->length);
-   nents += DIV_ROUND_UP(less, CHCR_SG_SIZE);
-   reqlen -= less;
-   sgl = sg_next(sgl);
+   while (sg && skip) {
+   if (sg_dma_len(sg) <= skip) {
+   skip -= sg_dma_len(sg);
+   skip_len = 0;
+   sg = sg_next(sg);
+   } else {
+   skip_len = skip;
+   skip = 0;
+   }
}
 
+   while (sg && reqlen) {
+   less = min(reqlen, sg_dma_len(sg) - skip_len);
+   nents += DIV_ROUND_UP(less, entlen);
+   reqlen -= less;
+   skip_len = 0;
+   sg = sg_next(sg);
+   }
return nents;
 }
 
+static inline void chcr_handle_ahash_resp(struct ahash_request *req,
+ unsigned char *input,
+ int err)
+{
+   struct chcr_ahash_req_ctx *reqctx = ahash_request_ctx(req);
+   int digestsize, updated_digestsize;
+   struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+   struct uld_ctx *u_ctx = ULD_CTX(h_ctx(tfm));
+
+   if (input == NULL)
+   goto out;
+   reqctx = ahash_request_ctx(req);
+   digestsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
+   if (reqctx->is_sg_map)
+   chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req);
+   if (reqctx->dma_addr)
+   dma_unmap_single(&u_ctx->lldi.pdev->dev, reqctx->dma_addr,
+reqctx->dma_len, DMA_TO_DEVICE);
+   reqctx->dma_addr = 0;
+   updated_digestsize = digestsize;
+   if (digestsize == SHA224_DIGEST_SIZE)
+   updated_digestsize = SHA256_DIGEST_SIZE;
+   else if (digestsize == SHA384_DIGEST_SIZE)
+   updated_digestsize = SHA512_DIGEST_SIZE;
+   if (reqctx->result == 1) {
+   reqctx->result = 0;
+   memcpy(req->result, input + sizeof(struct cpl_fw6_pld),
+  digestsize);
+   } else {
+   memcpy(reqctx->partial_hash, input + sizeof(struct cpl_fw6_pld),
+  updated_digestsize);
+   }
+out:
+   req->base.complete(&req->base, err);
+
+   }
+
+static inline void chcr_handle_aead_resp(struct aead_request *req,
+unsigned char *input,
+int err)
+{
+   struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct uld_ctx *u_ctx = ULD_CTX(a_ctx(tfm));
+
+
+   chcr_aead_dma_unmap(&u_ctx->lldi.pdev->dev, req, reqctx->op);
+   if (reqctx->b0_dma)
+   dma_unmap_single(&u_ctx->lldi.pdev->dev, reqctx->b0_dma,
+reqctx->b0_len, DMA_BIDIRECTIONAL);
+   if (reqctx->verify == VERIFY_SW) {
+   chcr_verify_tag(req, input, &err);
+   reqctx->verify = VERIFY_HW;
+}
+   req->base.complete(&req->base, err);
+
+}
 static void chcr_verify_tag(struct aead_request *req, u8 *input, int *err)
 {
  

[PATCH v2 3/7] crypto:gf128mul: The x8_ble multiplication functions

2017-10-08 Thread Harsh Jain
It multiplies GF(2^128) elements in the ble format.
It will be used by the chelsio driver to speed up gf multiplication.

Signed-off-by: Harsh Jain 
---
 crypto/gf128mul.c | 13 +
 include/crypto/gf128mul.h |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
index dc01212..24e6019 100644
--- a/crypto/gf128mul.c
+++ b/crypto/gf128mul.c
@@ -156,6 +156,19 @@ static void gf128mul_x8_bbe(be128 *x)
x->b = cpu_to_be64((b << 8) ^ _tt);
 }
 
+void gf128mul_x8_ble(le128 *r, const le128 *x)
+{
+   u64 a = le64_to_cpu(x->a);
+   u64 b = le64_to_cpu(x->b);
+
+   /* equivalent to gf128mul_table_be[b >> 63] (see crypto/gf128mul.c): */
+   u64 _tt = gf128mul_table_be[a >> 56];
+
+   r->a = cpu_to_le64((a << 8) | (b >> 56));
+   r->b = cpu_to_le64((b << 8) ^ _tt);
+}
+EXPORT_SYMBOL(gf128mul_x8_ble);
+
 void gf128mul_lle(be128 *r, const be128 *b)
 {
be128 p[8];
diff --git a/include/crypto/gf128mul.h b/include/crypto/gf128mul.h
index 0977fb1..fa0a63d 100644
--- a/include/crypto/gf128mul.h
+++ b/include/crypto/gf128mul.h
@@ -227,7 +227,7 @@ struct gf128mul_4k *gf128mul_init_4k_lle(const be128 *g);
 struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g);
 void gf128mul_4k_lle(be128 *a, const struct gf128mul_4k *t);
 void gf128mul_4k_bbe(be128 *a, const struct gf128mul_4k *t);
-
+void gf128mul_x8_ble(le128 *r, const le128 *x);
 static inline void gf128mul_free_4k(struct gf128mul_4k *t)
 {
kzfree(t);
-- 
2.1.4
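
Usage sketch (illustrative): sixteen doublings of an XTS tweak now take two
calls instead of sixteen, and the helper can be used in place (destination
== source), which is how the chelsio driver calls it:

    le128 t;    /* current XTS tweak, ble convention */

    /* t := t * x^16 in GF(2^128) */
    gf128mul_x8_ble(&t, &t);
    gf128mul_x8_ble(&t, &t);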



Re: [PATCH 3/7] crypto:gf128mul: The x8_ble multiplication functions

2017-10-05 Thread Harsh Jain


On 03-10-2017 20:28, David Laight wrote:
> From: Harsh Jain
>> Sent: 03 October 2017 07:46
>> It multiply GF(2^128) elements in the ble format.
>> It will be used by chelsio driver to fasten gf multiplication.
>^ speed up ??
It should be speed up. Will fix the same in V2. Thanks
>
>   David
>



[PATCH 2/7] crypto:chelsio: Check error code with IS_ERR macro

2017-10-02 Thread Harsh Jain
create_hash_wr() returns an ERR_PTR()-encoded error, not NULL, so check the
result with the IS_ERR macro and return the proper error code.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index bdb1014..e4bf32d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1455,8 +1455,8 @@ static int chcr_ahash_update(struct ahash_request *req)
req_ctx->result = 0;
req_ctx->data_len += params.sg_len + params.bfr_len;
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
if (remainder) {
u8 *temp;
@@ -1519,8 +1519,8 @@ static int chcr_ahash_final(struct ahash_request *req)
params.more = 0;
}
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
@@ -1570,8 +1570,8 @@ static int chcr_ahash_finup(struct ahash_request *req)
}
 
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
@@ -1621,8 +1621,8 @@ static int chcr_ahash_digest(struct ahash_request *req)
}
 
skb = create_hash_wr(req, &params);
-   if (!skb)
-   return -ENOMEM;
+   if (IS_ERR(skb))
+   return PTR_ERR(skb);
 
skb->dev = u_ctx->lldi.ports[0];
set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
-- 
2.1.4



[PATCH 4/7] crypto:chelsio:Use x8_ble gf multiplication to calculate IV.

2017-10-02 Thread Harsh Jain
gf128mul_x8_ble() reduces the number of gf multiplication iterations by a
factor of 8.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 11 +--
 drivers/crypto/chelsio/chcr_crypto.h |  1 +
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index e4bf32d..e0ab34a 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -888,9 +888,11 @@ static int chcr_update_tweak(struct ablkcipher_request 
*req, u8 *iv)
int ret, i;
u8 *key;
unsigned int keylen;
+   int round = reqctx->last_req_len / AES_BLOCK_SIZE;
+   int round8 = round / 8;
 
cipher = ablkctx->aes_generic;
-   memcpy(iv, req->info, AES_BLOCK_SIZE);
+   memcpy(iv, reqctx->iv, AES_BLOCK_SIZE);
 
keylen = ablkctx->enckey_len / 2;
key = ablkctx->key + keylen;
@@ -899,7 +901,10 @@ static int chcr_update_tweak(struct ablkcipher_request 
*req, u8 *iv)
goto out;
 
crypto_cipher_encrypt_one(cipher, iv, iv);
-   for (i = 0; i < (reqctx->processed / AES_BLOCK_SIZE); i++)
+   for (i = 0; i < round8; i++)
+   gf128mul_x8_ble((le128 *)iv, (le128 *)iv);
+
+   for (i = 0; i < (round % 8); i++)
gf128mul_x_ble((le128 *)iv, (le128 *)iv);
 
crypto_cipher_decrypt_one(cipher, iv, iv);
@@ -1040,6 +1045,7 @@ static int chcr_handle_cipher_resp(struct 
ablkcipher_request *req,
CRYPTO_ALG_SUB_TYPE_CTR)
bytes = adjust_ctr_overflow(reqctx->iv, bytes);
reqctx->processed += bytes;
+   reqctx->last_req_len = bytes;
wrparam.qid = u_ctx->lldi.rxq_ids[ctx->rx_qidx];
wrparam.req = req;
wrparam.bytes = bytes;
@@ -1132,6 +1138,7 @@ static int process_cipher(struct ablkcipher_request *req,
goto error;
}
reqctx->processed = bytes;
+   reqctx->last_req_len = bytes;
reqctx->dst = reqctx->dstsg;
reqctx->op = op_type;
wrparam.qid = qid;
diff --git a/drivers/crypto/chelsio/chcr_crypto.h 
b/drivers/crypto/chelsio/chcr_crypto.h
index 30af1ee..b3722b3 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -247,6 +247,7 @@ struct chcr_blkcipher_req_ctx {
struct scatterlist *dst;
struct scatterlist *newdstsg;
unsigned int processed;
+   unsigned int last_req_len;
unsigned int op;
short int dst_nents;
u8 iv[CHCR_MAX_CRYPTO_IV_LEN];
-- 
2.1.4



[PATCH 0/7]crypto:chelsio: Bugs fixes

2017-10-02 Thread Harsh Jain
It includes bug fixes and performance improvement changes.

Harsh Jain (7):
  crypto:gf128mul: The x8_ble multiplication functions
  crypto:chelsio:Use x8_ble gf multiplication to calculate IV.
  crypto:chelsio:Remove allocation of sg list to implement 2K limit of
dsgl header
  crypto:chelsio:Move DMA un/mapping to chcr from lld  cxgb4 driver
  crypto:chelsio: Fix memory leak
  crypto:chelsio: Remove unused parameter
  crypto:chelsio: Check error code with IS_ERR macro

 crypto/gf128mul.c|   13 +
 drivers/crypto/chelsio/chcr_algo.c   | 1784 +-
 drivers/crypto/chelsio/chcr_algo.h   |   57 +-
 drivers/crypto/chelsio/chcr_core.c   |8 +-
 drivers/crypto/chelsio/chcr_core.h   |2 +-
 drivers/crypto/chelsio/chcr_crypto.h |  121 +-
 drivers/net/ethernet/chelsio/cxgb4/sge.c |8 +-
 include/crypto/gf128mul.h|2 +-
 8 files changed, 1166 insertions(+), 829 deletions(-)

-- 
2.1.4



[PATCH 3/7] crypto:gf128mul: The x8_ble multiplication functions

2017-10-02 Thread Harsh Jain
It multiply GF(2^128) elements in the ble format.
It will be used by chelsio driver to fasten gf multiplication.

Signed-off-by: Harsh Jain 
---
 crypto/gf128mul.c | 13 +
 include/crypto/gf128mul.h |  2 +-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
index dc01212..24e6019 100644
--- a/crypto/gf128mul.c
+++ b/crypto/gf128mul.c
@@ -156,6 +156,19 @@ static void gf128mul_x8_bbe(be128 *x)
x->b = cpu_to_be64((b << 8) ^ _tt);
 }
 
+void gf128mul_x8_ble(le128 *r, const le128 *x)
+{
+   u64 a = le64_to_cpu(x->a);
+   u64 b = le64_to_cpu(x->b);
+
+   /* equivalent to gf128mul_table_be[b >> 63] (see crypto/gf128mul.c): */
+   u64 _tt = gf128mul_table_be[a >> 56];
+
+   r->a = cpu_to_le64((a << 8) | (b >> 56));
+   r->b = cpu_to_le64((b << 8) ^ _tt);
+}
+EXPORT_SYMBOL(gf128mul_x8_ble);
+
 void gf128mul_lle(be128 *r, const be128 *b)
 {
be128 p[8];
diff --git a/include/crypto/gf128mul.h b/include/crypto/gf128mul.h
index 0977fb1..fa0a63d 100644
--- a/include/crypto/gf128mul.h
+++ b/include/crypto/gf128mul.h
@@ -227,7 +227,7 @@ struct gf128mul_4k *gf128mul_init_4k_lle(const be128 *g);
 struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g);
 void gf128mul_4k_lle(be128 *a, const struct gf128mul_4k *t);
 void gf128mul_4k_bbe(be128 *a, const struct gf128mul_4k *t);
-
+void gf128mul_x8_ble(le128 *r, const le128 *x);
 static inline void gf128mul_free_4k(struct gf128mul_4k *t)
 {
kzfree(t);
-- 
2.1.4



[PATCH 5/7] crypto:chelsio:Remove allocation of sg list to implement 2K limit of dsgl header

2017-10-02 Thread Harsh Jain
Update the DMA address index instead of allocating a new sg list to impose the
2K size limit on each entry.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 237 +++
 drivers/crypto/chelsio/chcr_algo.h   |   3 +-
 drivers/crypto/chelsio/chcr_core.h   |   2 +-
 drivers/crypto/chelsio/chcr_crypto.h |   6 -
 4 files changed, 76 insertions(+), 172 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index e0ab34a..b13991d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -117,6 +117,21 @@ static inline unsigned int sgl_len(unsigned int n)
return (3 * n) / 2 + (n & 1) + 2;
 }
 
+static int dstsg_2k(struct scatterlist *sgl, unsigned int reqlen)
+{
+   int nents = 0;
+   unsigned int less;
+
+   while (sgl && reqlen) {
+   less = min(reqlen, sgl->length);
+   nents += DIV_ROUND_UP(less, CHCR_SG_SIZE);
+   reqlen -= less;
+   sgl = sg_next(sgl);
+   }
+
+   return nents;
+}
+
 static void chcr_verify_tag(struct aead_request *req, u8 *input, int *err)
 {
u8 temp[SHA512_DIGEST_SIZE];
@@ -166,8 +181,6 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
kfree_skb(ctx_req.ctx.reqctx->skb);
ctx_req.ctx.reqctx->skb = NULL;
}
-   free_new_sg(ctx_req.ctx.reqctx->newdstsg);
-   ctx_req.ctx.reqctx->newdstsg = NULL;
if (ctx_req.ctx.reqctx->verify == VERIFY_SW) {
chcr_verify_tag(ctx_req.req.aead_req, input,
&err);
@@ -388,31 +401,41 @@ static void write_phys_cpl(struct cpl_rx_phys_dsgl 
*phys_cpl,
 {
struct phys_sge_pairs *to;
unsigned int len = 0, left_size = sg_param->obsize;
-   unsigned int nents = sg_param->nents, i, j = 0;
+   unsigned int j = 0;
+   int offset, ent_len;
 
phys_cpl->op_to_tid = htonl(CPL_RX_PHYS_DSGL_OPCODE_V(CPL_RX_PHYS_DSGL)
| CPL_RX_PHYS_DSGL_ISRDMA_V(0));
+   to = (struct phys_sge_pairs *)((unsigned char *)phys_cpl +
+  sizeof(struct cpl_rx_phys_dsgl));
+   while (left_size && sg) {
+   len = min_t(u32, left_size, sg_dma_len(sg));
+   offset = 0;
+   while (len) {
+   ent_len =  min_t(u32, len, CHCR_SG_SIZE);
+   to->len[j % 8] = htons(ent_len);
+   to->addr[j % 8] = cpu_to_be64(sg_dma_address(sg) +
+ offset);
+   offset += ent_len;
+   len -= ent_len;
+   j++;
+   if ((j % 8) == 0)
+   to++;
+   }
+   left_size -= min(left_size, sg_dma_len(sg));
+   sg = sg_next(sg);
+   }
phys_cpl->pcirlxorder_to_noofsgentr =
htonl(CPL_RX_PHYS_DSGL_PCIRLXORDER_V(0) |
  CPL_RX_PHYS_DSGL_PCINOSNOOP_V(0) |
  CPL_RX_PHYS_DSGL_PCITPHNTENB_V(0) |
  CPL_RX_PHYS_DSGL_PCITPHNT_V(0) |
  CPL_RX_PHYS_DSGL_DCAID_V(0) |
- CPL_RX_PHYS_DSGL_NOOFSGENTR_V(nents));
+ CPL_RX_PHYS_DSGL_NOOFSGENTR_V(j));
phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR;
phys_cpl->rss_hdr_int.qid = htons(sg_param->qid);
phys_cpl->rss_hdr_int.hash_val = 0;
-   to = (struct phys_sge_pairs *)((unsigned char *)phys_cpl +
-  sizeof(struct cpl_rx_phys_dsgl));
-   for (i = 0; nents && left_size; to++) {
-   for (j = 0; j < 8 && nents && left_size; j++, nents--) {
-   len = min(left_size, sg_dma_len(sg));
-   to->len[j] = htons(len);
-   to->addr[j] = cpu_to_be64(sg_dma_address(sg));
-   left_size -= len;
-   sg = sg_next(sg);
-   }
-   }
+
 }
 
 static inline int map_writesg_phys_cpl(struct device *dev,
@@ -523,31 +546,33 @@ static int generate_copy_rrkey(struct ablk_ctx *ablkctx,
 static int chcr_sg_ent_in_wr(struct scatterlist *src,
 struct scatterlist *dst,
 unsigned int minsg,
-unsigned int space,
-short int *sent,
-short int *dent)
+unsigned int space)
 {
int srclen = 0, dstlen = 0;
int srcsg = minsg, dstsg = 0;
+   int offset = 0, less;
 
-   *sent = 0;
-   *dent = 0;
while (src &&

[PATCH 1/7] crypto:chelsio: Remove unused parameter

2017-10-02 Thread Harsh Jain
Remove unused parameters that are no longer needed by the latest firmware.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 43 +++---
 drivers/crypto/chelsio/chcr_algo.h | 12 +--
 2 files changed, 23 insertions(+), 32 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 0e81607..bdb1014 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -577,36 +577,27 @@ static int chcr_cipher_fallback(struct crypto_skcipher 
*cipher,
 static inline void create_wreq(struct chcr_context *ctx,
   struct chcr_wr *chcr_req,
   void *req, struct sk_buff *skb,
-  int kctx_len, int hash_sz,
-  int is_iv,
+  int hash_sz,
   unsigned int sc_len,
   unsigned int lcb)
 {
struct uld_ctx *u_ctx = ULD_CTX(ctx);
-   int iv_loc = IV_DSGL;
int qid = u_ctx->lldi.rxq_ids[ctx->rx_qidx];
-   unsigned int immdatalen = 0, nr_frags = 0;
+   unsigned int immdatalen = 0;
 
-   if (is_ofld_imm(skb)) {
+   if (is_ofld_imm(skb))
immdatalen = skb->data_len;
-   iv_loc = IV_IMMEDIATE;
-   } else {
-   nr_frags = skb_shinfo(skb)->nr_frags;
-   }
 
-   chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE(immdatalen,
-   ((sizeof(chcr_req->key_ctx) + kctx_len) >> 4));
+   chcr_req->wreq.op_to_cctx_size = FILL_WR_OP_CCTX_SIZE;
chcr_req->wreq.pld_size_hash_size =
-   htonl(FW_CRYPTO_LOOKASIDE_WR_PLD_SIZE_V(sgl_lengths[nr_frags]) |
- FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_V(hash_sz));
+   htonl(FW_CRYPTO_LOOKASIDE_WR_HASH_SIZE_V(hash_sz));
chcr_req->wreq.len16_pkd =
htonl(FW_CRYPTO_LOOKASIDE_WR_LEN16_V(DIV_ROUND_UP(
(calc_tx_flits_ofld(skb) * 8), 16)));
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP, !!lcb,
-   ctx->tx_qidx);
+   !!lcb, ctx->tx_qidx);
 
chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
   qid);
@@ -616,7 +607,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req->sc_imm.cmd_more = FILL_CMD_MORE(immdatalen);
chcr_req->sc_imm.len = cpu_to_be32(sizeof(struct cpl_tx_sec_pdu) +
   sizeof(chcr_req->key_ctx) +
-  kctx_len + sc_len + immdatalen);
+  sc_len + immdatalen);
 }
 
 /**
@@ -706,8 +697,8 @@ static struct sk_buff *create_cipher_wr(struct 
cipher_wr_param *wrparam)
write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, wrparam->srcsg, wrparam->bytes);
atomic_inc(&adap->chcr_stats.cipher_rqst);
-   create_wreq(ctx, chcr_req, &(wrparam->req->base), skb, kctx_len, 0, 1,
-   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl,
+   create_wreq(ctx, chcr_req, &(wrparam->req->base), skb, 0,
+   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl + kctx_len,
ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC);
reqctx->skb = skb;
skb_get(skb);
@@ -1417,8 +1408,8 @@ static struct sk_buff *create_hash_wr(struct 
ahash_request *req,
if (param->sg_len != 0)
write_sg_to_skb(skb, &frags, req->src, param->sg_len);
atomic_inc(&adap->chcr_stats.digest_rqst);
-   create_wreq(ctx, chcr_req, &req->base, skb, kctx_len,
-   hash_size_in_response, 0, DUMMY_BYTES, 0);
+   create_wreq(ctx, chcr_req, &req->base, skb, hash_size_in_response,
+   DUMMY_BYTES + kctx_len, 0);
req_ctx->skb = skb;
skb_get(skb);
return skb;
@@ -2080,8 +2071,8 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
write_buffer_to_skb(skb, &frags, req->iv, ivsize);
write_sg_to_skb(skb, &frags, src, req->cryptlen);
atomic_inc(&adap->chcr_stats.cipher_rqst);
-   create_wreq(ctx, chcr_req, &req->base, skb, kctx_len, size, 1,
-  sizeof(struct cpl_rx_phys_dsgl) + dst_size, 0);
+   create_wreq(ctx, chcr_req, &req->base, skb, size,
+  sizeof(struct cpl_rx_phys_dsgl) + dst_size + kctx_len, 0);
reqctx->skb = skb;
skb_get(skb);
 

[PATCH 6/7] crypto:chelsio:Move DMA un/mapping to chcr from lld cxgb4 driver

2017-10-02 Thread Harsh Jain
Allow chcr to do the DMA mapping/unmapping instead of the LLD cxgb4.
This moves the "copy AAD to dst buffer" requirement from the driver to
the firmware.

Signed-off-by: Ganesh Goudar 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 1645 ++
 drivers/crypto/chelsio/chcr_algo.h   |   44 +-
 drivers/crypto/chelsio/chcr_crypto.h |  114 ++-
 drivers/net/ethernet/chelsio/cxgb4/sge.c |8 +-
 4 files changed, 1116 insertions(+), 695 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index b13991d..646dfff 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -70,6 +70,8 @@
 #include "chcr_algo.h"
 #include "chcr_crypto.h"
 
+#define IV AES_BLOCK_SIZE
+
 static inline  struct chcr_aead_ctx *AEAD_CTX(struct chcr_context *ctx)
 {
return ctx->crypto_ctx->aeadctx;
@@ -102,7 +104,7 @@ static inline struct uld_ctx *ULD_CTX(struct chcr_context 
*ctx)
 
 static inline int is_ofld_imm(const struct sk_buff *skb)
 {
-   return (skb->len <= CRYPTO_MAX_IMM_TX_PKT_LEN);
+   return (skb->len <= SGE_MAX_WR_LEN);
 }
 
 /*
@@ -117,21 +119,92 @@ static inline unsigned int sgl_len(unsigned int n)
return (3 * n) / 2 + (n & 1) + 2;
 }
 
-static int dstsg_2k(struct scatterlist *sgl, unsigned int reqlen)
+static int sg_nents_xlen(struct scatterlist *sg, unsigned int reqlen,
+unsigned int entlen,
+unsigned int skip)
 {
int nents = 0;
unsigned int less;
+   unsigned int skip_len = 0;
 
-   while (sgl && reqlen) {
-   less = min(reqlen, sgl->length);
-   nents += DIV_ROUND_UP(less, CHCR_SG_SIZE);
-   reqlen -= less;
-   sgl = sg_next(sgl);
+   while (sg && skip) {
+   if (sg_dma_len(sg) <= skip) {
+   skip -= sg_dma_len(sg);
+   skip_len = 0;
+   sg = sg_next(sg);
+   } else {
+   skip_len = skip;
+   skip = 0;
+   }
}
 
+   while (sg && reqlen) {
+   less = min(reqlen, sg_dma_len(sg) - skip_len);
+   nents += DIV_ROUND_UP(less, entlen);
+   reqlen -= less;
+   skip_len = 0;
+   sg = sg_next(sg);
+   }
return nents;
 }
 
+static inline void chcr_handle_ahash_resp(struct ahash_request *req,
+ unsigned char *input,
+ int err)
+{
+   struct chcr_ahash_req_ctx *reqctx = ahash_request_ctx(req);
+   int digestsize, updated_digestsize;
+   struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+   struct uld_ctx *u_ctx = ULD_CTX(h_ctx(tfm));
+
+   if (input == NULL)
+   goto out;
+   reqctx = ahash_request_ctx(req);
+   digestsize = crypto_ahash_digestsize(crypto_ahash_reqtfm(req));
+   if (reqctx->is_sg_map)
+   chcr_hash_dma_unmap(&u_ctx->lldi.pdev->dev, req);
+   if (reqctx->dma_addr)
+   dma_unmap_single(&u_ctx->lldi.pdev->dev, reqctx->dma_addr,
+reqctx->dma_len, DMA_TO_DEVICE);
+   reqctx->dma_addr = 0;
+   updated_digestsize = digestsize;
+   if (digestsize == SHA224_DIGEST_SIZE)
+   updated_digestsize = SHA256_DIGEST_SIZE;
+   else if (digestsize == SHA384_DIGEST_SIZE)
+   updated_digestsize = SHA512_DIGEST_SIZE;
+   if (reqctx->result == 1) {
+   reqctx->result = 0;
+   memcpy(req->result, input + sizeof(struct cpl_fw6_pld),
+  digestsize);
+   } else {
+   memcpy(reqctx->partial_hash, input + sizeof(struct cpl_fw6_pld),
+  updated_digestsize);
+   }
+out:
+   req->base.complete(&req->base, err);
+
+   }
+
+static inline void chcr_handle_aead_resp(struct aead_request *req,
+unsigned char *input,
+int err)
+{
+   struct chcr_aead_reqctx *reqctx = aead_request_ctx(req);
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct uld_ctx *u_ctx = ULD_CTX(a_ctx(tfm));
+
+
+   chcr_aead_dma_unmap(&u_ctx->lldi.pdev->dev, req, reqctx->op);
+   if (reqctx->b0_dma)
+   dma_unmap_single(&u_ctx->lldi.pdev->dev, reqctx->b0_dma,
+reqctx->b0_len, DMA_BIDIRECTIONAL);
+   if (reqctx->verify == VERIFY_SW) {
+   chcr_verify_tag(req, input, &err);
+   reqctx->verify = VERIFY_HW;
+}
+   req->base.complete(&req->base, err);
+
+}
 static void chcr_verify_tag(struct aead_request *req, u8 *input

[PATCH 7/7] crypto:chelsio: Fix memory leak

2017-10-02 Thread Harsh Jain
Fix memory leak when device does not support crypto.

Reported-by: Dan Carpenter 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_core.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c 
b/drivers/crypto/chelsio/chcr_core.c
index b6dd9cb..4f677b3 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -154,15 +154,15 @@ static void *chcr_uld_add(const struct cxgb4_lld_info 
*lld)
struct uld_ctx *u_ctx;
 
/* Create the device and add it in the device list */
+   if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE))
+   return ERR_PTR(-EOPNOTSUPP);
+
+   /* Create the device and add it in the device list */
u_ctx = kzalloc(sizeof(*u_ctx), GFP_KERNEL);
if (!u_ctx) {
u_ctx = ERR_PTR(-ENOMEM);
goto out;
}
-   if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE)) {
-   u_ctx = ERR_PTR(-ENOMEM);
-   goto out;
-   }
u_ctx->lldi = *lld;
 out:
return u_ctx;
-- 
2.1.4



[PATCH 6/9] chcr - Add debug counters

2017-06-15 Thread Harsh Jain
Count the types of operations performed by the HW.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 16 +-
 drivers/crypto/chelsio/chcr_core.c |  2 ++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h |  1 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c | 35 ++
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h |  9 ++
 5 files changed, 62 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 03b817f..2f388af 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -154,6 +154,7 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct chcr_req_ctx ctx_req;
unsigned int digestsize, updated_digestsize;
+   struct adapter *adap = padap(ctx->dev);
 
switch (tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_AEAD:
@@ -207,6 +208,7 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
ctx_req.req.ahash_req->base.complete(req, err);
break;
}
+   atomic_inc(&adap->chcr_stats.complete);
return err;
 }
 
@@ -639,6 +641,7 @@ static struct sk_buff *create_cipher_wr(struct 
cipher_wr_param *wrparam)
unsigned int ivsize = AES_BLOCK_SIZE, kctx_len;
gfp_t flags = wrparam->req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
GFP_KERNEL : GFP_ATOMIC;
+   struct adapter *adap = padap(ctx->dev);
 
phys_dsgl = get_space_for_phys_dsgl(reqctx->dst_nents);
 
@@ -701,6 +704,7 @@ static struct sk_buff *create_cipher_wr(struct 
cipher_wr_param *wrparam)
skb_set_transport_header(skb, transhdr_len);
write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, wrparam->srcsg, wrparam->bytes);
+   atomic_inc(&adap->chcr_stats.cipher_rqst);
create_wreq(ctx, chcr_req, &(wrparam->req->base), skb, kctx_len, 0, 1,
sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl,
ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC);
@@ -1337,6 +1341,7 @@ static struct sk_buff *create_hash_wr(struct 
ahash_request *req,
u8 hash_size_in_response = 0;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
+   struct adapter *adap = padap(ctx->dev);
 
iopad_alignment = KEYCTX_ALIGN_PAD(digestsize);
kctx_len = param->alg_prm.result_size + iopad_alignment;
@@ -1393,7 +1398,7 @@ static struct sk_buff *create_hash_wr(struct 
ahash_request *req,
param->bfr_len);
if (param->sg_len != 0)
write_sg_to_skb(skb, &frags, req->src, param->sg_len);
-
+   atomic_inc(&adap->chcr_stats.digest_rqst);
create_wreq(ctx, chcr_req, &req->base, skb, kctx_len,
hash_size_in_response, 0, DUMMY_BYTES, 0);
req_ctx->skb = skb;
@@ -1873,6 +1878,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
int null = 0;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
+   struct adapter *adap = padap(ctx->dev);
 
if (aeadctx->enckey_len == 0 || (req->cryptlen == 0))
goto err;
@@ -1911,6 +1917,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
T6_MAX_AAD_SIZE,
transhdr_len + (sgl_len(src_nent + MIN_AUTH_SG) * 8),
op_type)) {
+   atomic_inc(&adap->chcr_stats.fallback);
return ERR_PTR(chcr_aead_fallback(req, op_type));
}
skb = alloc_skb((transhdr_len + sizeof(struct sge_opaque_hdr)), flags);
@@ -1983,6 +1990,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
}
write_buffer_to_skb(skb, &frags, req->iv, ivsize);
write_sg_to_skb(skb, &frags, src, req->cryptlen);
+   atomic_inc(&adap->chcr_stats.cipher_rqst);
create_wreq(ctx, chcr_req, &req->base, skb, kctx_len, size, 1,
   sizeof(struct cpl_rx_phys_dsgl) + dst_size, 0);
reqctx->skb = skb;
@@ -2206,6 +2214,7 @@ static struct sk_buff *create_aead_ccm_wr(struct 
aead_request *req,
int error = -EINVAL, src_nent;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
+   struct adapter *adap = padap(ctx->dev);
 
 
if (op_type && req->cryptlen < crypto_aead_authsize(tfm))
@@ -2245,6 +2254,7 @@ static struct sk_buff *create_aead_
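
The counting pattern, in isolation: one atomic_t per operation type in the
adapter's chcr_stats, bumped on the paths shown above and dumped through
cxgb4's debugfs. A condensed sketch (the debugfs read side is
illustrative):

    /* per-adapter counters (fields as used in the diff above) */
    atomic_inc(&adap->chcr_stats.cipher_rqst);   /* on cipher submit */
    atomic_inc(&adap->chcr_stats.digest_rqst);   /* on hash submit   */
    atomic_inc(&adap->chcr_stats.complete);      /* on completion    */
    atomic_inc(&adap->chcr_stats.fallback);      /* on SW fallback   */

    /* debugfs dump side (illustrative) */
    seq_printf(seq, "Cipher requests: %u\n",
               atomic_read(&adap->chcr_stats.cipher_rqst));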

[PATCH 1/9] crypto: chcr - Pass lcb bit setting to firmware

2017-06-15 Thread Harsh Jain
The GCM and CBC modes of operation require the Last Cipher Block.
This patch sets the lcb bit in the WR header when required.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 18 +++---
 drivers/crypto/chelsio/chcr_algo.h |  4 ++--
 2 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index f00e0d8..e8ff505 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -518,7 +518,8 @@ static inline void create_wreq(struct chcr_context *ctx,
   void *req, struct sk_buff *skb,
   int kctx_len, int hash_sz,
   int is_iv,
-  unsigned int sc_len)
+  unsigned int sc_len,
+  unsigned int lcb)
 {
struct uld_ctx *u_ctx = ULD_CTX(ctx);
int iv_loc = IV_DSGL;
@@ -543,7 +544,8 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP, ctx->tx_qidx);
+   is_iv ? iv_loc : IV_NOP, !!lcb,
+   ctx->tx_qidx);
 
chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
   qid);
@@ -652,7 +654,8 @@ static inline void create_wreq(struct chcr_context *ctx,
write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, req->src, req->nbytes);
create_wreq(ctx, chcr_req, req, skb, kctx_len, 0, 1,
-   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl);
+   sizeof(struct cpl_rx_phys_dsgl) + phys_dsgl,
+   ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC);
reqctx->skb = skb;
skb_get(skb);
return skb;
@@ -923,7 +926,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req,
write_sg_to_skb(skb, &frags, req->src, param->sg_len);
 
create_wreq(ctx, chcr_req, req, skb, kctx_len, hash_size_in_response, 0,
-   DUMMY_BYTES);
+   DUMMY_BYTES, 0);
req_ctx->skb = skb;
skb_get(skb);
return skb;
@@ -1508,7 +1511,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
write_buffer_to_skb(skb, &frags, req->iv, ivsize);
write_sg_to_skb(skb, &frags, src, req->cryptlen);
create_wreq(ctx, chcr_req, req, skb, kctx_len, size, 1,
-  sizeof(struct cpl_rx_phys_dsgl) + dst_size);
+  sizeof(struct cpl_rx_phys_dsgl) + dst_size, 0);
reqctx->skb = skb;
skb_get(skb);
 
@@ -1804,7 +1807,7 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req,
skb_set_transport_header(skb, transhdr_len);
frags = fill_aead_req_fields(skb, req, src, ivsize, aeadctx);
create_wreq(ctx, chcr_req, req, skb, kctx_len, 0, 1,
-   sizeof(struct cpl_rx_phys_dsgl) + dst_size);
+   sizeof(struct cpl_rx_phys_dsgl) + dst_size, 0);
reqctx->skb = skb;
skb_get(skb);
return skb;
@@ -1950,7 +1953,8 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
write_buffer_to_skb(skb, &frags, reqctx->iv, ivsize);
write_sg_to_skb(skb, &frags, src, req->cryptlen);
create_wreq(ctx, chcr_req, req, skb, kctx_len, size, 1,
-   sizeof(struct cpl_rx_phys_dsgl) + dst_size);
+   sizeof(struct cpl_rx_phys_dsgl) + dst_size,
+   reqctx->verify);
reqctx->skb = skb;
skb_get(skb);
return skb;
diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h
index 751d06a..9894c7b 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -185,11 +185,11 @@
FW_CRYPTO_LOOKASIDE_WR_CCTX_LOC_V(1) | \
FW_CRYPTO_LOOKASIDE_WR_CCTX_SIZE_V((ctx_len)))
 
-#define FILL_WR_RX_Q_ID(cid, qid, wr_iv, fid) \
+#define FILL_WR_RX_Q_ID(cid, qid, wr_iv, lcb, fid) \
htonl( \
FW_CRYPTO_LOOKASIDE_WR_RX_CHID_V((cid)) | \
FW_CRYPTO_LOOKASIDE_WR_RX_Q_ID_V((qid)) | \
-   FW_CRYPTO_LOOKASIDE_WR_LCB_V(0) | \
+   FW_CRYPTO_LOOKASIDE_WR_LCB_V((lcb)) | \
FW_CRYPTO_LOOKASIDE_WR_IV_V((wr_iv)) | \
FW_CRYPTO_LOOKASIDE_WR_FQIDX_V(fid))
 
-- 
1.8.3.1
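
The LCB handling above is just a per-request flag packed into the
work-request header: create_wreq() grows an lcb argument and callers pass
1 for CBC (and, on the GCM path, reqctx->verify). A standalone sketch of
that bit packing, with made-up field offsets; the real layout is the
FW_CRYPTO_LOOKASIDE_WR_* macros in chcr_algo.h:

#include <stdint.h>
#include <stdio.h>

/* Illustrative field positions only, not the hardware layout. */
#define WR_LCB_SHIFT 27
#define WR_IV_SHIFT  24
#define WR_QID_SHIFT 8

static uint32_t fill_wr_rx_q_id(uint32_t qid, uint32_t wr_iv, uint32_t lcb)
{
        return (lcb << WR_LCB_SHIFT) | (wr_iv << WR_IV_SHIFT) |
               (qid << WR_QID_SHIFT);
}

int main(void)
{
        int aes_cbc = 1;        /* ablkctx->ciph_mode == ..._AES_CBC */

        /* lcb is decided per request at WR-build time, as in create_wreq() */
        printf("rx_chid_to_rx_q_id = 0x%08x\n",
               fill_wr_rx_q_id(3, 1, !!aes_cbc));
        return 0;
}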



[PATCH 0/9] Bug fixes and ctr mode of operation

2017-06-15 Thread Harsh Jain
This series is based on the cryptodev-2.6 tree and includes bug fixes
plus the ctr(aes) and rfc3686(ctr(aes)) algorithms.

Harsh Jain (7):
  crypto: chcr - Pass lcb bit setting to firmware
  crypto: chcr - Set fallback key
  crypto: chcr - Return correct error code
  crypto: chcr - Avoid changing request structure
  crypto:chcr - Add ctr mode and process large sg entries for cipher
  MAINTAINERS:Add maintainer for chelsio crypto driver
  crypto: chcr - Ensure Destination sg entry size less than 2k
Atul Gupta (2):
  chcr - Add debug counters
  crypto: chcr - Select device in Round Robin fashion

 MAINTAINERS|7 +
 drivers/crypto/chelsio/chcr_algo.c | 1096 
 drivers/crypto/chelsio/chcr_algo.h |   30 +-
 drivers/crypto/chelsio/chcr_core.c |   56 +-
 drivers/crypto/chelsio/chcr_core.h |5 +-
 drivers/crypto/chelsio/chcr_crypto.h   |   25 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4.h |1 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c |   35 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c |1 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h |   10 +
 10 files changed, 1020 insertions(+), 246 deletions(-)

-- 
1.8.3.1



[PATCH 5/9] crypto:chcr - Add ctr mode and process large sg entries for cipher

2017-06-15 Thread Harsh Jain
Send multiple WRs to the hardware to handle large sg lists. Adds the
ctr(aes) and rfc3686(ctr(aes)) modes.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 786 ---
 drivers/crypto/chelsio/chcr_algo.h   |  26 +-
 drivers/crypto/chelsio/chcr_core.c   |   1 -
 drivers/crypto/chelsio/chcr_core.h   |   3 +
 drivers/crypto/chelsio/chcr_crypto.h |  19 +-
 5 files changed, 690 insertions(+), 145 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 9c839c6..03b817f 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -55,6 +55,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 #include 
 #include 
 #include 
@@ -151,12 +153,11 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
struct chcr_context *ctx = crypto_tfm_ctx(tfm);
struct uld_ctx *u_ctx = ULD_CTX(ctx);
struct chcr_req_ctx ctx_req;
-   struct cpl_fw6_pld *fw6_pld;
unsigned int digestsize, updated_digestsize;
 
switch (tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
case CRYPTO_ALG_TYPE_AEAD:
-   ctx_req.req.aead_req = (struct aead_request *)req;
+   ctx_req.req.aead_req = aead_request_cast(req);
ctx_req.ctx.reqctx = aead_request_ctx(ctx_req.req.aead_req);
dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.ctx.reqctx->dst,
 ctx_req.ctx.reqctx->dst_nents, DMA_FROM_DEVICE);
@@ -169,27 +170,16 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
&err);
ctx_req.ctx.reqctx->verify = VERIFY_HW;
}
+   ctx_req.req.aead_req->base.complete(req, err);
break;
 
case CRYPTO_ALG_TYPE_ABLKCIPHER:
-   ctx_req.req.ablk_req = (struct ablkcipher_request *)req;
-   ctx_req.ctx.ablk_ctx =
-   ablkcipher_request_ctx(ctx_req.req.ablk_req);
-   if (!err) {
-   fw6_pld = (struct cpl_fw6_pld *)input;
-   memcpy(ctx_req.req.ablk_req->info, &fw6_pld->data[2],
-  AES_BLOCK_SIZE);
-   }
-   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.ablk_req->dst,
-ctx_req.ctx.ablk_ctx->dst_nents, DMA_FROM_DEVICE);
-   if (ctx_req.ctx.ablk_ctx->skb) {
-   kfree_skb(ctx_req.ctx.ablk_ctx->skb);
-   ctx_req.ctx.ablk_ctx->skb = NULL;
-   }
+err = chcr_handle_cipher_resp(ablkcipher_request_cast(req),
+  input, err);
break;
 
case CRYPTO_ALG_TYPE_AHASH:
-   ctx_req.req.ahash_req = (struct ahash_request *)req;
+   ctx_req.req.ahash_req = ahash_request_cast(req);
ctx_req.ctx.ahash_ctx =
ahash_request_ctx(ctx_req.req.ahash_req);
digestsize =
@@ -214,6 +204,7 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
   sizeof(struct cpl_fw6_pld),
   updated_digestsize);
}
+   ctx_req.req.ahash_req->base.complete(req, err);
break;
}
return err;
@@ -392,7 +383,7 @@ static void write_phys_cpl(struct cpl_rx_phys_dsgl *phys_cpl,
   struct phys_sge_parm *sg_param)
 {
struct phys_sge_pairs *to;
-   int out_buf_size = sg_param->obsize;
+   unsigned int len = 0, left_size = sg_param->obsize;
unsigned int nents = sg_param->nents, i, j = 0;
 
phys_cpl->op_to_tid = htonl(CPL_RX_PHYS_DSGL_OPCODE_V(CPL_RX_PHYS_DSGL)
@@ -409,20 +400,15 @@ static void write_phys_cpl(struct cpl_rx_phys_dsgl *phys_cpl,
phys_cpl->rss_hdr_int.hash_val = 0;
to = (struct phys_sge_pairs *)((unsigned char *)phys_cpl +
   sizeof(struct cpl_rx_phys_dsgl));
-
-   for (i = 0; nents; to++) {
-   for (j = 0; j < 8 && nents; j++, nents--) {
-   out_buf_size -= sg_dma_len(sg);
-   to->len[j] = htons(sg_dma_len(sg));
+   for (i = 0; nents && left_size; to++) {
+   for (j = 0; j < 8 && nents && left_size; j++, nents--) {
+   len = min(left_size, sg_dma_len(sg));
+   to->len[j] = htons(len);
to->addr[j] = cpu_to_be64(sg_dma_address(sg));
+   left_size -= len;
sg = sg_next(sg);
}
}
-   if (out_buf_size) {
-   j--;
-   
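
The rewritten loop above clamps each DSGL entry to the bytes still owed
(left_size) instead of subtracting first and patching up the final entry
afterwards, which is what the removed out_buf_size fix-up did. The
clamping in isolation, a sketch with plain arrays standing in for the
DMA-mapped scatterlist:

#include <stdio.h>

struct ent { unsigned int len; };

/* Clamp each entry to what is left of the output size, as the
 * reworked write_phys_cpl() does with min(left_size, sg_dma_len(sg)). */
static void fill_dsgl(const struct ent *sg, int nents, unsigned int left_size)
{
        unsigned int len;

        for (int i = 0; i < nents && left_size; i++) {
                len = sg[i].len < left_size ? sg[i].len : left_size;
                left_size -= len;
                printf("entry %d: %u bytes\n", i, len);
        }
}

int main(void)
{
        struct ent sg[] = { {1024}, {1024}, {1024} };

        fill_dsgl(sg, 3, 2500); /* last entry clamped to 452 bytes */
        return 0;
}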

[PATCH 4/9] crypto: chcr - Avoid changing request structure

2017-06-15 Thread Harsh Jain
Do not update the assoclen received in the aead_request.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 37 ++---
 1 file changed, 14 insertions(+), 23 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 156065d..9c839c6 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -126,13 +126,13 @@ static void chcr_verify_tag(struct aead_request *req, u8 *input, int *err)
fw6_pld = (struct cpl_fw6_pld *)input;
if ((get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) ||
(get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_GCM)) {
-   cmp = memcmp(&fw6_pld->data[2], (fw6_pld + 1), authsize);
+   cmp = crypto_memneq(&fw6_pld->data[2], (fw6_pld + 1), authsize);
} else {
 
sg_pcopy_to_buffer(req->src, sg_nents(req->src), temp,
authsize, req->assoclen +
req->cryptlen - authsize);
-   cmp = memcmp(temp, (fw6_pld + 1), authsize);
+   cmp = crypto_memneq(temp, (fw6_pld + 1), authsize);
}
if (cmp)
*err = -EBADMSG;
@@ -1840,9 +1840,8 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
struct scatterlist *src;
unsigned int frags = 0, transhdr_len;
unsigned int ivsize = AES_BLOCK_SIZE;
-   unsigned int dst_size = 0, kctx_len;
+   unsigned int dst_size = 0, kctx_len, assoclen = req->assoclen;
unsigned char tag_offset = 0;
-   unsigned int crypt_len = 0;
unsigned int authsize = crypto_aead_authsize(tfm);
int error = -EINVAL, src_nent;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
@@ -1854,27 +1853,21 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
 
if (op_type && req->cryptlen < crypto_aead_authsize(tfm))
goto err;
-   src_nent = sg_nents_for_len(req->src, req->assoclen + req->cryptlen);
+   src_nent = sg_nents_for_len(req->src, assoclen + req->cryptlen);
if (src_nent < 0)
goto err;
 
-   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
+   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, assoclen);
reqctx->dst = src;
if (req->src != req->dst) {
error = chcr_copy_assoc(req, aeadctx);
if (error)
return  ERR_PTR(error);
reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
-  req->assoclen);
+  assoclen);
}
 
-   if (!req->cryptlen)
-   /* null-payload is not supported in the hardware.
-* software is sending block size
-*/
-   crypt_len = AES_BLOCK_SIZE;
-   else
-   crypt_len = req->cryptlen;
+
reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents < 0) {
@@ -1907,19 +1900,19 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
memset(chcr_req, 0, transhdr_len);
 
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106)
-   req->assoclen -= 8;
+   assoclen = req->assoclen - 8;
 
tag_offset = (op_type == CHCR_ENCRYPT_OP) ? 0 : authsize;
chcr_req->sec_cpl.op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR(
ctx->dev->rx_channel_id, 2, (ivsize ?
-   (req->assoclen + 1) : 0));
+   (assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen =
-   htonl(req->assoclen + ivsize + req->cryptlen);
+   htonl(assoclen + ivsize + req->cryptlen);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
-   req->assoclen ? 1 : 0, req->assoclen,
-   req->assoclen + ivsize + 1, 0);
+   assoclen ? 1 : 0, assoclen,
+   assoclen + ivsize + 1, 0);
chcr_req->sec_cpl.cipherstop_lo_authinsert =
-   FILL_SEC_CPL_AUTHINSERT(0, req->assoclen + ivsize + 1,
+   FILL_SEC_CPL_AUTHINSERT(0, assoclen + ivsize + 1,
tag_offset, tag_offset);
chcr_req->sec_cpl.seqno_numivs =
FILL_SEC_CPL_SCMD0_SEQNO(op_type, (op_type ==
@@ -1955,9 +1948,7 @@ static struct sk_buff *c
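
Aside from the assoclen cleanup, note the switch from memcmp() to
crypto_memneq() in chcr_verify_tag(): an early-exit memcmp() leaks,
through timing, how many leading bytes of the ICV matched. A minimal
constant-time comparison of the kind crypto_memneq() provides
(simplified; the kernel version also does word-at-a-time accesses):

#include <stddef.h>
#include <stdio.h>

/* Constant-time inequality: runtime does not depend on where bytes differ. */
static int memneq(const void *a, const void *b, size_t n)
{
        const unsigned char *pa = a, *pb = b;
        unsigned char diff = 0;

        while (n--)
                diff |= *pa++ ^ *pb++;
        return diff;    /* 0 iff equal */
}

int main(void)
{
        unsigned char tag1[12] = "0123456789a";
        unsigned char tag2[12] = "0123456789a";

        printf("%s\n", memneq(tag1, tag2, 12) ? "-EBADMSG" : "ok");
        return 0;
}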

[PATCH 8/9] crypto: chcr - Ensure Destination sg entry size less than 2k

2017-06-15 Thread Harsh Jain
Allocate a new sg list when the received destination sg list has an
entry greater than 2k.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   | 153 +++
 drivers/crypto/chelsio/chcr_crypto.h |   6 ++
 2 files changed, 142 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 2f388af..9a84ffa 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -166,6 +168,8 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
kfree_skb(ctx_req.ctx.reqctx->skb);
ctx_req.ctx.reqctx->skb = NULL;
}
+   free_new_sg(ctx_req.ctx.reqctx->newdstsg);
+   ctx_req.ctx.reqctx->newdstsg = NULL;
if (ctx_req.ctx.reqctx->verify == VERIFY_SW) {
chcr_verify_tag(ctx_req.req.aead_req, input,
&err);
@@ -1068,6 +1070,8 @@ static int chcr_handle_cipher_resp(struct ablkcipher_request *req,
chcr_send_wr(skb);
return 0;
 complete:
+   free_new_sg(reqctx->newdstsg);
+   reqctx->newdstsg = NULL;
req->base.complete(&req->base, err);
return err;
 }
@@ -1083,7 +1087,7 @@ static int process_cipher(struct ablkcipher_request *req,
struct chcr_context *ctx = crypto_ablkcipher_ctx(tfm);
struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
struct  cipher_wr_param wrparam;
-   int bytes, err = -EINVAL;
+   int bytes, nents, err = -EINVAL;
 
reqctx->newdstsg = NULL;
reqctx->processed = 0;
@@ -1097,7 +1101,14 @@ static int process_cipher(struct ablkcipher_request *req,
goto error;
}
wrparam.srcsg = req->src;
-   reqctx->dstsg = req->dst;
+   if (is_newsg(req->dst, &nents)) {
+   reqctx->newdstsg = alloc_new_sg(req->dst, nents);
+   if (IS_ERR(reqctx->newdstsg))
+   return PTR_ERR(reqctx->newdstsg);
+   reqctx->dstsg = reqctx->newdstsg;
+   } else {
+   reqctx->dstsg = req->dst;
+   }
bytes = chcr_sg_ent_in_wr(wrparam.srcsg, reqctx->dstsg, MIN_CIPHER_SG,
 SPACE_LEFT(ablkctx->enckey_len),
 &wrparam.snent,
@@ -1150,6 +1161,8 @@ static int process_cipher(struct ablkcipher_request *req,
 
return 0;
 error:
+   free_new_sg(reqctx->newdstsg);
+   reqctx->newdstsg = NULL;
return err;
 }
 
@@ -1808,6 +1821,63 @@ static void chcr_hmac_cra_exit(struct crypto_tfm *tfm)
}
 }
 
+static int is_newsg(struct scatterlist *sgl, unsigned int *newents)
+{
+   int nents = 0;
+   int ret = 0;
+
+   while (sgl) {
+   if (sgl->length > CHCR_SG_SIZE)
+   ret = 1;
+   nents += DIV_ROUND_UP(sgl->length, CHCR_SG_SIZE);
+   sgl = sg_next(sgl);
+   }
+   *newents = nents;
+   return ret;
+}
+
+static inline void free_new_sg(struct scatterlist *sgl)
+{
+   kfree(sgl);
+}
+
+static struct scatterlist *alloc_new_sg(struct scatterlist *sgl,
+  unsigned int nents)
+{
+   struct scatterlist *newsg, *sg;
+   int i, len, processed = 0;
+   struct page *spage;
+   int offset;
+
+   newsg = kmalloc_array(nents, sizeof(struct scatterlist), GFP_KERNEL);
+   if (!newsg)
+   return ERR_PTR(-ENOMEM);
+   sg = newsg;
+   sg_init_table(sg, nents);
+   offset = sgl->offset;
+   spage = sg_page(sgl);
+   for (i = 0; i < nents; i++) {
+   len = min_t(u32, sgl->length - processed, CHCR_SG_SIZE);
+   sg_set_page(sg, spage, len, offset);
+   processed += len;
+   offset += len;
+   if (offset >= PAGE_SIZE) {
+   offset = offset % PAGE_SIZE;
+   spage++;
+   }
+   if (processed == sgl->length) {
+   processed = 0;
+   sgl = sg_next(sgl);
+   if (!sgl)
+   break;
+   spage = sg_page(sgl);
+   offset = sgl->offset;
+   }
+   sg = sg_next(sg);
+   }
+   return newsg;
+}
+
 static int chcr_copy_assoc(struct aead_request *req,
struct chcr_aead_ctx *ctx)
 {
@@ -1870,7 +1940,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
struct scatterlist *src;
unsigned int frags = 0, transhdr_len;
unsigned int ivsize = crypto_aead_ivsize(tfm), dst_size = 0;
-   unsigned int   kctx_len = 0;
+   unsigned int   kctx_len = 0, n
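
is_newsg() above counts how many CHCR_SG_SIZE-bounded pieces the
destination list needs, and alloc_new_sg() then carves each oversized
entry into chunks while carrying page and offset along. The chunking
arithmetic on its own, a sketch that uses byte offsets where the driver
tracks page+offset pairs and assumes CHCR_SG_SIZE is the 2k from the
commit message:

#include <stdio.h>

#define CHCR_SG_SIZE 2048

/* Split one sg entry of 'length' bytes into <= CHCR_SG_SIZE chunks,
 * the same walk alloc_new_sg() performs. */
static int split_entry(unsigned int length)
{
        unsigned int processed = 0, len;
        int chunks = 0;

        while (processed < length) {
                len = length - processed;
                if (len > CHCR_SG_SIZE)
                        len = CHCR_SG_SIZE;     /* min_t(u32, ...) */
                printf("chunk %d: %u bytes at offset %u\n",
                       chunks, len, processed);
                processed += len;
                chunks++;
        }
        return chunks;  /* DIV_ROUND_UP(length, CHCR_SG_SIZE) */
}

int main(void)
{
        split_entry(5000);      /* 2048 + 2048 + 904 -> 3 chunks */
        return 0;
}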

[PATCH 3/9] crypto: chcr - Return correct error code

2017-06-15 Thread Harsh Jain
Return the correct error code instead of a blanket -EINVAL.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 76 +-
 1 file changed, 42 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 14641c6..156065d 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1399,7 +1399,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
unsigned short stop_offset = 0;
unsigned int  assoclen = req->assoclen;
unsigned int  authsize = crypto_aead_authsize(tfm);
-   int err = -EINVAL, src_nent;
+   int error = -EINVAL, src_nent;
int null = 0;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
@@ -1416,9 +1416,9 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
reqctx->dst = src;
 
if (req->src != req->dst) {
-   err = chcr_copy_assoc(req, aeadctx);
-   if (err)
-   return ERR_PTR(err);
+   error = chcr_copy_assoc(req, aeadctx);
+   if (error)
+   return ERR_PTR(error);
reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
   req->assoclen);
}
@@ -1430,6 +1430,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents < 0) {
pr_err("AUTHENC:Invalid Destination sg entries\n");
+   error = -EINVAL;
goto err;
}
dst_size = get_space_for_phys_dsgl(reqctx->dst_nents);
@@ -1443,8 +1444,10 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
return ERR_PTR(chcr_aead_fallback(req, op_type));
}
skb = alloc_skb((transhdr_len + sizeof(struct sge_opaque_hdr)), flags);
-   if (!skb)
+   if (!skb) {
+   error = -ENOMEM;
goto err;
+   }
 
/* LLD is going to write the sge hdr. */
skb_reserve(skb, sizeof(struct sge_opaque_hdr));
@@ -1496,9 +1499,9 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
sg_param.nents = reqctx->dst_nents;
sg_param.obsize = req->cryptlen + (op_type ? -authsize : authsize);
sg_param.qid = qid;
-   sg_param.align = 0;
-   if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst,
- &sg_param))
+   error = map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl,
+   reqctx->dst, &sg_param);
+   if (error)
goto dstmap_fail;
 
skb_set_transport_header(skb, transhdr_len);
@@ -1520,7 +1523,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
/* ivmap_fail: */
kfree_skb(skb);
 err:
-   return ERR_PTR(-EINVAL);
+   return ERR_PTR(error);
 }
 
 static int set_msg_len(u8 *block, unsigned int msglen, int csize)
@@ -1730,7 +1733,7 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req,
unsigned int dst_size = 0, kctx_len;
unsigned int sub_type;
unsigned int authsize = crypto_aead_authsize(tfm);
-   int err = -EINVAL, src_nent;
+   int error = -EINVAL, src_nent;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
 
@@ -1746,10 +1749,10 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req,
reqctx->dst = src;
 
if (req->src != req->dst) {
-   err = chcr_copy_assoc(req, aeadctx);
-   if (err) {
+   error = chcr_copy_assoc(req, aeadctx);
+   if (error) {
pr_err("AAD copy to destination buffer fails\n");
-   return ERR_PTR(err);
+   return ERR_PTR(error);
}
reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
   req->assoclen);
@@ -1758,11 +1761,11 @@ static struct sk_buff *create_aead_ccm_wr(struct aead_request *req,
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents < 0) {
pr_err("CCM:Invalid Destination sg entries\n");
+   error = -EINVAL;
goto err;
}
-
-
-   if (aead_ccm_validate_input(op_type, req, aeadctx, sub_type))
+   error = aead_ccm_validate_input(op_type, req, aeadctx, sub_type);
+   if (error)
goto err;
 
dst_size = get_space_for_phys_dsgl(reqctx->dst_nents);
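
The conversion pattern is uniform across these functions: keep a single
'error' variable, record the real cause at each failure site, and funnel
every exit through one ERR_PTR(error) return instead of a hard-coded
-EINVAL. A toy sketch of the same shape, with stand-ins for the kernel's
ERR_PTR()/PTR_ERR() pointer encoding:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for the kernel's ERR_PTR()/PTR_ERR(). */
static void *err_ptr(long error) { return (void *)error; }
static long ptr_err(const void *p) { return (long)p; }

static void *create_wr(int oom)
{
        int error = -EINVAL;
        void *skb = oom ? NULL : malloc(64);

        if (!skb) {
                error = -ENOMEM;        /* real cause, not a blanket -EINVAL */
                goto err;
        }
        return skb;
err:
        return err_ptr(error);
}

int main(void)
{
        printf("error = %ld\n", ptr_err(create_wr(1)));  /* -12 (ENOMEM) */
        return 0;
}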

[PATCH 7/9] MAINTAINERS:Add maintainer for chelsio crypto driver

2017-06-15 Thread Harsh Jain
Add myself as maintainer for chcr.

Signed-off-by: Harsh Jain 
---
 MAINTAINERS | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 1f20176..504dc65 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3706,6 +3706,13 @@ S:   Supported
 F: drivers/infiniband/hw/cxgb4/
 F: include/uapi/rdma/cxgb4-abi.h
 
+CXGB4 CRYPTO DRIVER (chcr)
+M: Harsh Jain 
+L: linux-cry...@vger.kernel.org
+W: http://www.chelsio.com
+S: Supported
+F: drivers/crypto/chelsio
+
 CXGB4VF ETHERNET DRIVER (CXGB4VF)
 M: Casey Leedom 
 L: netdev@vger.kernel.org
-- 
1.8.3.1



[PATCH 9/9] crypto: chcr - Select device in Round Robin fashion

2017-06-15 Thread Harsh Jain
When multiple devices are present in the system, select the device for
crypto operations in a round-robin fashion.

Signed-off-by: Atul Gupta 
Reviewed-by: Ganesh Goudar 
---
 drivers/crypto/chelsio/chcr_algo.c |  8 ++--
 drivers/crypto/chelsio/chcr_core.c | 53 ++
 drivers/crypto/chelsio/chcr_core.h |  2 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c |  1 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h |  1 +
 5 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 9a84ffa..aa4e5b8 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1216,7 +1216,7 @@ static int chcr_device_init(struct chcr_context *ctx)
 
 static int chcr_device_init(struct chcr_context *ctx)
 {
-   struct uld_ctx *u_ctx;
+   struct uld_ctx *u_ctx = NULL;
struct adapter *adap;
unsigned int id;
int txq_perchan, txq_idx, ntxq;
@@ -1224,12 +1224,12 @@ static int chcr_device_init(struct chcr_context *ctx)
 
id = smp_processor_id();
if (!ctx->dev) {
-   err = assign_chcr_device(&ctx->dev);
-   if (err) {
+   u_ctx = assign_chcr_device();
+   if (!u_ctx) {
pr_err("chcr device assignment fails\n");
goto out;
}
-   u_ctx = ULD_CTX(ctx);
+   ctx->dev = u_ctx->dev;
adap = padap(ctx->dev);
ntxq = min_not_zero((unsigned int)u_ctx->lldi.nrxq,
adap->vres.ncrypto_fc);
diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
index 5ae659a..b6dd9cb 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -29,6 +29,7 @@
 static LIST_HEAD(uld_ctx_list);
 static DEFINE_MUTEX(dev_mutex);
 static atomic_t dev_count;
+static struct uld_ctx *ctx_rr;
 
 typedef int (*chcr_handler_func)(struct chcr_dev *dev, unsigned char *input);
 static int cpl_fw6_pld_handler(struct chcr_dev *dev, unsigned char *input);
@@ -49,25 +50,28 @@
.rx_handler = chcr_uld_rx_handler,
 };
 
-int assign_chcr_device(struct chcr_dev **dev)
+struct uld_ctx *assign_chcr_device(void)
 {
-   struct uld_ctx *u_ctx;
-   int ret = -ENXIO;
+   struct uld_ctx *u_ctx = NULL;
 
/*
-* Which device to use if multiple devices are available TODO
-* May be select the device based on round robin. One session
-* must go to the same device to maintain the ordering.
+* When multiple devices are present in system select
+* device in round-robin fashion for crypto operations
+* Although One session must use the same device to
+* maintain request-response ordering.
 */
-   mutex_lock(&dev_mutex); /* TODO ? */
-   list_for_each_entry(u_ctx, &uld_ctx_list, entry)
-   if (u_ctx->dev) {
-   *dev = u_ctx->dev;
-   ret = 0;
-   break;
+   mutex_lock(&dev_mutex);
+   if (!list_empty(&uld_ctx_list)) {
+   u_ctx = ctx_rr;
+   if (list_is_last(&ctx_rr->entry, &uld_ctx_list))
+   ctx_rr = list_first_entry(&uld_ctx_list,
+ struct uld_ctx,
+ entry);
+   else
+   ctx_rr = list_next_entry(ctx_rr, entry);
}
mutex_unlock(&dev_mutex);
-   return ret;
+   return u_ctx;
 }
 
 static int chcr_dev_add(struct uld_ctx *u_ctx)
@@ -82,11 +86,27 @@ static int chcr_dev_add(struct uld_ctx *u_ctx)
u_ctx->dev = dev;
dev->u_ctx = u_ctx;
atomic_inc(&dev_count);
+   mutex_lock(&dev_mutex);
+   list_add_tail(&u_ctx->entry, &uld_ctx_list);
+   if (!ctx_rr)
+   ctx_rr = u_ctx;
+   mutex_unlock(&dev_mutex);
return 0;
 }
 
 static int chcr_dev_remove(struct uld_ctx *u_ctx)
 {
+   if (ctx_rr == u_ctx) {
+   if (list_is_last(&ctx_rr->entry, &uld_ctx_list))
+   ctx_rr = list_first_entry(&uld_ctx_list,
+ struct uld_ctx,
+ entry);
+   else
+   ctx_rr = list_next_entry(ctx_rr, entry);
+   }
+   list_del(&u_ctx->entry);
+   if (list_empty(&uld_ctx_list))
+   ctx_rr = NULL;
kfree(u_ctx->dev);
u_ctx->dev = NULL;
atomic_dec(&dev_count);
@@ -139,10 +159,11 @@ static void *chcr_uld_add(const struct cxgb4_lld_info *lld)
u_ctx = ERR_PTR(-ENOMEM);
goto out;
}
+   if (!(lld->ulp_crypto & ULP_CRYPTO_LOOKASIDE)) {
+   u_ctx = ERR_PTR(-ENOMEM);
+   goto out;
+   }
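
Stripped of the list plumbing, assign_chcr_device() above keeps a cursor
(ctx_rr) into the device list, hands out the current device, and advances
the cursor with wraparound under dev_mutex; chcr_dev_add() and
chcr_dev_remove() keep the cursor valid as devices come and go. The same
shape on a plain array, a sketch where a modulo step replaces
list_is_last()/list_next_entry():

#include <pthread.h>
#include <stdio.h>

#define NDEV 3

static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;
static int ctx_rr;      /* cursor, like the ctx_rr uld_ctx pointer */

/* Pick a device round-robin under the lock. */
static int assign_device(void)
{
        pthread_mutex_lock(&dev_mutex);
        int dev = ctx_rr;
        ctx_rr = (ctx_rr + 1) % NDEV;   /* wrap instead of list_first_entry() */
        pthread_mutex_unlock(&dev_mutex);
        return dev;
}

int main(void)
{
        for (int i = 0; i < 5; i++)
                printf("session %d -> device %d\n", i, assign_device());
        return 0;
}

Because chcr_device_init() caches the chosen device in ctx->dev, a session
keeps the device it was given, which is what preserves the per-session
request/response ordering the comment in the patch calls out.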

[PATCH 2/9] crypto: chcr - Fix fallback key setting

2017-06-15 Thread Harsh Jain
Set the key of the fallback tfm for rfc4309.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index e8ff505..14641c6 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2210,7 +2210,8 @@ static int chcr_aead_rfc4309_setkey(struct crypto_aead *aead, const u8 *key,
unsigned int keylen)
 {
struct chcr_context *ctx = crypto_aead_ctx(aead);
-struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+   struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+   int error;
 
if (keylen < 3) {
crypto_tfm_set_flags((struct crypto_tfm *)aead,
@@ -2218,6 +2219,15 @@ static int chcr_aead_rfc4309_setkey(struct crypto_aead *aead, const u8 *key,
aeadctx->enckey_len = 0;
return  -EINVAL;
}
+   crypto_aead_clear_flags(aeadctx->sw_cipher, CRYPTO_TFM_REQ_MASK);
+   crypto_aead_set_flags(aeadctx->sw_cipher, crypto_aead_get_flags(aead) &
+ CRYPTO_TFM_REQ_MASK);
+   error = crypto_aead_setkey(aeadctx->sw_cipher, key, keylen);
+   crypto_aead_clear_flags(aead, CRYPTO_TFM_RES_MASK);
+   crypto_aead_set_flags(aead, crypto_aead_get_flags(aeadctx->sw_cipher) &
+ CRYPTO_TFM_RES_MASK);
+   if (error)
+   return error;
keylen -= 3;
memcpy(aeadctx->salt, key + keylen, 3);
return chcr_ccm_common_setkey(aead, key, keylen);
-- 
1.8.3.1
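
The setkey flow added here is the standard fallback idiom: mirror the
request flags from the outer tfm onto the software tfm, try the key there
first, then copy the result flags back so a bad key is reported
identically whether hardware or software ends up handling the request.
Schematically, with illustrative bit masks standing in for
CRYPTO_TFM_REQ_MASK/CRYPTO_TFM_RES_MASK:

#include <stdio.h>

#define REQ_MASK 0x000f /* stand-in for CRYPTO_TFM_REQ_MASK */
#define RES_MASK 0x00f0 /* stand-in for CRYPTO_TFM_RES_MASK */

struct tfm { unsigned int flags; };

static int sw_setkey(struct tfm *t, int bad)
{
        if (bad) {
                t->flags |= 0x0010;     /* e.g. a BAD_KEY_LEN result bit */
                return -22;             /* -EINVAL */
        }
        return 0;
}

static int hw_setkey(struct tfm *hw, struct tfm *sw, int bad)
{
        /* Propagate request flags down, result flags back up. */
        sw->flags = (sw->flags & ~REQ_MASK) | (hw->flags & REQ_MASK);
        int error = sw_setkey(sw, bad);
        hw->flags = (hw->flags & ~RES_MASK) | (sw->flags & RES_MASK);
        return error;
}

int main(void)
{
        struct tfm hw = {0}, sw = {0};

        printf("setkey: %d, hw flags 0x%x\n", hw_setkey(&hw, &sw, 1), hw.flags);
        return 0;
}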



Re: [PATCH 08/22] crypto: chcr: Make use of the new sg_map helper function

2017-04-14 Thread Harsh Jain
On Fri, Apr 14, 2017 at 3:35 AM, Logan Gunthorpe  wrote:
> The get_page in this area looks *highly* suspect due to there being no
> corresponding put_page. However, I've left that as is to avoid breaking
> things.
The chcr driver posts the request to the LLD driver cxgb4, and put_page
is implemented there, so it does no harm. In any case, we have removed
the code below from the driver.

http://www.mail-archive.com/linux-crypto@vger.kernel.org/msg24561.html

After this merge, we can ignore your patch. Thanks

>
> I've also removed the KMAP_ATOMIC_ARGS check as it appears to be dead
> code that dates back to when it was first committed...


>
> Signed-off-by: Logan Gunthorpe 
> ---
>  drivers/crypto/chelsio/chcr_algo.c | 28 +++-
>  1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
> index 41bc7f4..a993d1d 100644
> --- a/drivers/crypto/chelsio/chcr_algo.c
> +++ b/drivers/crypto/chelsio/chcr_algo.c
> @@ -1489,22 +1489,21 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
> return ERR_PTR(-EINVAL);
>  }
>
> -static void aes_gcm_empty_pld_pad(struct scatterlist *sg,
> - unsigned short offset)
> +static int aes_gcm_empty_pld_pad(struct scatterlist *sg,
> +unsigned short offset)
>  {
> -   struct page *spage;
> unsigned char *addr;
>
> -   spage = sg_page(sg);
> -   get_page(spage); /* so that it is not freed by NIC */
> -#ifdef KMAP_ATOMIC_ARGS
> -   addr = kmap_atomic(spage, KM_SOFTIRQ0);
> -#else
> -   addr = kmap_atomic(spage);
> -#endif
> -   memset(addr + sg->offset, 0, offset + 1);
> +   get_page(sg_page(sg)); /* so that it is not freed by NIC */
> +
> +   addr = sg_map(sg, SG_KMAP_ATOMIC);
> +   if (IS_ERR(addr))
> +   return PTR_ERR(addr);
> +
> +   memset(addr, 0, offset + 1);
> +   sg_unmap(sg, addr, SG_KMAP_ATOMIC);
>
> -   kunmap_atomic(addr);
> +   return 0;
>  }
>
>  static int set_msg_len(u8 *block, unsigned int msglen, int csize)
> @@ -1940,7 +1939,10 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
> if (req->cryptlen) {
> write_sg_to_skb(skb, &frags, src, req->cryptlen);
> } else {
> -   aes_gcm_empty_pld_pad(req->dst, authsize - 1);
> +   err = aes_gcm_empty_pld_pad(req->dst, authsize - 1);
> +   if (err)
> +   goto dstmap_fail;
> +
> write_sg_to_skb(skb, &frags, reqctx->dst, crypt_len);
>
> }
> --
> 2.1.4
>


[PATCH 2/4] chcr:Set hmac_ctrl bit to use HW register HMAC_CFG[456]

2017-04-10 Thread Harsh Jain
Use the hmac_ctrl value saved in the setauthsize callback.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c |   24 +---
 1 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 7d59591..2d61043 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1335,19 +1335,6 @@ static int chcr_copy_assoc(struct aead_request *req,
return crypto_skcipher_encrypt(skreq);
 }
 
-static unsigned char get_hmac(unsigned int authsize)
-{
-   switch (authsize) {
-   case ICV_8:
-   return CHCR_SCMD_HMAC_CTRL_PL1;
-   case ICV_10:
-   return CHCR_SCMD_HMAC_CTRL_TRUNC_RFC4366;
-   case ICV_12:
-   return CHCR_SCMD_HMAC_CTRL_IPSEC_96BIT;
-   }
-   return CHCR_SCMD_HMAC_CTRL_NO_TRUNC;
-}
-
 
 static struct sk_buff *create_authenc_wr(struct aead_request *req,
 unsigned short qid,
@@ -1600,13 +1587,13 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
  struct chcr_context *chcrctx)
 {
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct chcr_aead_ctx *aeadctx = AEAD_CTX(crypto_aead_ctx(tfm));
unsigned int ivsize = AES_BLOCK_SIZE;
unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC;
unsigned int c_id = chcrctx->dev->rx_channel_id;
unsigned int ccm_xtra;
unsigned char tag_offset = 0, auth_offset = 0;
-   unsigned char hmac_ctrl = get_hmac(crypto_aead_authsize(tfm));
unsigned int assoclen;
 
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4309)
@@ -1642,8 +1629,8 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
crypto_aead_authsize(tfm));
sec_cpl->seqno_numivs =  FILL_SEC_CPL_SCMD0_SEQNO(op_type,
(op_type == CHCR_ENCRYPT_OP) ? 0 : 1,
-   cipher_mode, mac_mode, hmac_ctrl,
-   ivsize >> 1);
+   cipher_mode, mac_mode,
+   aeadctx->hmac_ctrl, ivsize >> 1);
 
sec_cpl->ivgen_hdrlen = FILL_SEC_CPL_IVGEN_HDRLEN(0, 0, 1, 0,
1, dst_size);
@@ -1820,7 +1807,6 @@ unsigned int fill_aead_req_fields(struct sk_buff *skb,
unsigned char tag_offset = 0;
unsigned int crypt_len = 0;
unsigned int authsize = crypto_aead_authsize(tfm);
-   unsigned char hmac_ctrl = get_hmac(authsize);
int err = 0;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
@@ -1893,8 +1879,8 @@ unsigned int fill_aead_req_fields(struct sk_buff *skb,
FILL_SEC_CPL_SCMD0_SEQNO(op_type, (op_type ==
CHCR_ENCRYPT_OP) ? 1 : 0,
CHCR_SCMD_CIPHER_MODE_AES_GCM,
-   CHCR_SCMD_AUTH_MODE_GHASH, hmac_ctrl,
-   ivsize >> 1);
+   CHCR_SCMD_AUTH_MODE_GHASH,
+   aeadctx->hmac_ctrl, ivsize >> 1);
} else {
chcr_req->sec_cpl.cipherstop_lo_authinsert =
FILL_SEC_CPL_AUTHINSERT(0, 0, 0, 0);
-- 
1.7.1
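
With get_hmac() removed, the authsize-to-HMAC_CTRL translation happens
once, in the setauthsize callback, and the cached aeadctx->hmac_ctrl is
reused each time a CPL is built. The mapping itself, sketched with
placeholder enum values in place of the CHCR_SCMD_HMAC_CTRL_* constants:

#include <stdio.h>

/* Placeholder values; the driver uses CHCR_SCMD_HMAC_CTRL_*. */
enum hmac_ctrl { HMAC_NO_TRUNC, HMAC_PL1, HMAC_TRUNC_RFC4366, HMAC_IPSEC_96BIT };

struct aead_ctx { enum hmac_ctrl hmac_ctrl; };

/* setauthsize callback: translate the ICV length once and cache it. */
static void set_authsize(struct aead_ctx *ctx, unsigned int authsize)
{
        switch (authsize) {
        case 8:  ctx->hmac_ctrl = HMAC_PL1; break;
        case 10: ctx->hmac_ctrl = HMAC_TRUNC_RFC4366; break;
        case 12: ctx->hmac_ctrl = HMAC_IPSEC_96BIT; break;
        default: ctx->hmac_ctrl = HMAC_NO_TRUNC; break;
        }
}

int main(void)
{
        struct aead_ctx ctx;

        set_authsize(&ctx, 12);
        /* The request path now just reads ctx.hmac_ctrl, as in
         * fill_sec_cpl_for_aead() above. */
        printf("hmac_ctrl = %d\n", ctx.hmac_ctrl);
        return 0;
}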



[PATCH 3/4] chcr:Fix txq ids.

2017-04-10 Thread Harsh Jain
The patch fixes a critical issue so that txqids map appropriately onto
hardware flows: if more tx queues are created than flows are configured,
the txqid is mapped within the range of configured hardware flows. This
ensures that no unmapped txqid goes unhandled.
The patch also segregates the rxqid and txqid for clarity.

Signed-off-by: Atul Gupta 
Reviewed-by: Ganesh Goudar 
---
 drivers/crypto/chelsio/chcr_algo.c  |   47 +-
 drivers/crypto/chelsio/chcr_core.h  |2 +
 drivers/crypto/chelsio/chcr_crypto.h|3 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c |9 
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h  |1 +
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h   |3 +-
 6 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 2d61043..5470e4e 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -522,7 +522,7 @@ static inline void create_wreq(struct chcr_context *ctx,
 {
struct uld_ctx *u_ctx = ULD_CTX(ctx);
int iv_loc = IV_DSGL;
-   int qid = u_ctx->lldi.rxq_ids[ctx->tx_channel_id];
+   int qid = u_ctx->lldi.rxq_ids[ctx->rx_qidx];
unsigned int immdatalen = 0, nr_frags = 0;
 
if (is_ofld_imm(skb)) {
@@ -543,7 +543,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP, ctx->tx_channel_id);
+   is_iv ? iv_loc : IV_NOP, ctx->tx_qidx);
 
chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
   qid);
@@ -721,19 +721,19 @@ static int chcr_aes_encrypt(struct ablkcipher_request *req)
struct sk_buff *skb;
 
if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0],
-   ctx->tx_channel_id))) {
+   ctx->tx_qidx))) {
if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
return -EBUSY;
}
 
-   skb = create_cipher_wr(req, u_ctx->lldi.rxq_ids[ctx->tx_channel_id],
+   skb = create_cipher_wr(req, u_ctx->lldi.rxq_ids[ctx->rx_qidx],
   CHCR_ENCRYPT_OP);
if (IS_ERR(skb)) {
pr_err("chcr : %s : Failed to form WR. No memory\n", __func__);
return  PTR_ERR(skb);
}
skb->dev = u_ctx->lldi.ports[0];
-   set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_channel_id);
+   set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
chcr_send_wr(skb);
return -EINPROGRESS;
 }
@@ -746,19 +746,19 @@ static int chcr_aes_decrypt(struct ablkcipher_request *req)
struct sk_buff *skb;
 
if (unlikely(cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0],
-   ctx->tx_channel_id))) {
+   ctx->tx_qidx))) {
if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
return -EBUSY;
}
 
-   skb = create_cipher_wr(req, u_ctx->lldi.rxq_ids[0],
+   skb = create_cipher_wr(req, u_ctx->lldi.rxq_ids[ctx->rx_qidx],
   CHCR_DECRYPT_OP);
if (IS_ERR(skb)) {
pr_err("chcr : %s : Failed to form WR. No memory\n", __func__);
return PTR_ERR(skb);
}
skb->dev = u_ctx->lldi.ports[0];
-   set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_channel_id);
+   set_wr_txq(skb, CPL_PRIORITY_DATA, ctx->tx_qidx);
chcr_send_wr(skb);
return -EINPROGRESS;
 }
@@ -766,7 +766,9 @@ static int chcr_aes_decrypt(struct ablkcipher_request *req)
 static int chcr_device_init(struct chcr_context *ctx)
 {
struct uld_ctx *u_ctx;
+   struct adapter *adap;
unsigned int id;
+   int txq_perchan, txq_idx, ntxq;
int err = 0, rxq_perchan, rxq_idx;
 
id = smp_processor_id();
@@ -777,11 +779,18 @@ static int chcr_device_init(struct chcr_context *ctx)
goto out;
}
u_ctx = ULD_CTX(ctx);
+   adap = padap(ctx->dev);
+   ntxq = min_not_zero((unsigned int)u_ctx->lldi.nrxq,
+   adap->vres.ncrypto_fc);
rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan;
+   txq_perchan = ntxq / u_ctx->lldi.nchan;
rxq_idx = ctx->dev->tx_channel_id * rxq_perchan;
rxq_idx += id % rxq_perchan;
+   txq_idx = ctx->dev->tx_channel_id * txq_perchan;
+   txq_idx += id % txq_perchan;
spin_lock(&ctx->dev->lock_chcr_dev);
-   ctx->tx_channe
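
The mapping in chcr_device_init() above boils down to per-channel
striping: divide the queues evenly among channels, then spread contexts
across a channel's slice by CPU id, so an index can never land outside
its channel's configured range. The arithmetic on its own, as a sketch:

#include <stdio.h>

/* qidx = channel * (queues per channel) + cpu % (queues per channel),
 * which keeps every context inside its channel's slice of queues. */
static int qidx(int chan_id, int nqueues, int nchan, int cpu)
{
        int perchan = nqueues / nchan;

        return chan_id * perchan + cpu % perchan;
}

int main(void)
{
        /* 8 tx queues, 2 channels: channel 1 only ever gets queues 4..7. */
        for (int cpu = 0; cpu < 6; cpu++)
                printf("cpu %d -> txq %d\n", cpu, qidx(1, 8, 2, cpu));
        return 0;
}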

[PATCH 4/4] chcr: Add fallback for AEAD algos

2017-04-10 Thread Harsh Jain
Fall back to software when:
I   the AAD length is greater than 511
II  the payload length is zero
III the number of sg entries exceeds the request size

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   |  219 ++
 drivers/crypto/chelsio/chcr_algo.h   |4 +
 drivers/crypto/chelsio/chcr_crypto.h |3 +-
 3 files changed, 151 insertions(+), 75 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 5470e4e..53d9ce4 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -1343,7 +1343,36 @@ static int chcr_copy_assoc(struct aead_request *req,
 
return crypto_skcipher_encrypt(skreq);
 }
+static int chcr_aead_need_fallback(struct aead_request *req, int src_nent,
+  int aadmax, int wrlen,
+  unsigned short op_type)
+{
+   unsigned int authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
+
+   if (((req->cryptlen - (op_type ? authsize : 0)) == 0) ||
+   (req->assoclen > aadmax) ||
+   (src_nent > MAX_SKB_FRAGS) ||
+   (wrlen > MAX_WR_SIZE))
+   return 1;
+   return 0;
+}
 
+static int chcr_aead_fallback(struct aead_request *req, unsigned short op_type)
+{
+   struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+   struct chcr_context *ctx = crypto_aead_ctx(tfm);
+   struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
+   struct aead_request *subreq = aead_request_ctx(req);
+
+   aead_request_set_tfm(subreq, aeadctx->sw_cipher);
+   aead_request_set_callback(subreq, req->base.flags,
+ req->base.complete, req->base.data);
+aead_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+req->iv);
+aead_request_set_ad(subreq, req->assoclen);
+   return op_type ? crypto_aead_decrypt(subreq) :
+   crypto_aead_encrypt(subreq);
+}
 
 static struct sk_buff *create_authenc_wr(struct aead_request *req,
 unsigned short qid,
@@ -1367,7 +1396,7 @@ static int chcr_copy_assoc(struct aead_request *req,
unsigned short stop_offset = 0;
unsigned int  assoclen = req->assoclen;
unsigned int  authsize = crypto_aead_authsize(tfm);
-   int err = 0;
+   int err = -EINVAL, src_nent;
int null = 0;
gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL :
GFP_ATOMIC;
@@ -1377,8 +1406,8 @@ static int chcr_copy_assoc(struct aead_request *req,
 
if (op_type && req->cryptlen < crypto_aead_authsize(tfm))
goto err;
-
-   if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0)
+   src_nent = sg_nents_for_len(req->src, req->assoclen + req->cryptlen);
+   if (src_nent < 0)
goto err;
src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
reqctx->dst = src;
@@ -1396,7 +1425,7 @@ static int chcr_copy_assoc(struct aead_request *req,
}
reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
-   if (reqctx->dst_nents <= 0) {
+   if (reqctx->dst_nents < 0) {
pr_err("AUTHENC:Invalid Destination sg entries\n");
goto err;
}
@@ -1404,6 +1433,12 @@ static int chcr_copy_assoc(struct aead_request *req,
kctx_len = (ntohl(KEY_CONTEXT_CTX_LEN_V(aeadctx->key_ctx_hdr)) << 4)
- sizeof(chcr_req->key_ctx);
transhdr_len = CIPHER_TRANSHDR_SIZE(kctx_len, dst_size);
+   if (chcr_aead_need_fallback(req, src_nent + MIN_AUTH_SG,
+   T6_MAX_AAD_SIZE,
+   transhdr_len + (sgl_len(src_nent + MIN_AUTH_SG) * 8),
+   op_type)) {
+   return ERR_PTR(chcr_aead_fallback(req, op_type));
+   }
skb = alloc_skb((transhdr_len + sizeof(struct sge_opaque_hdr)), flags);
if (!skb)
goto err;
@@ -1485,24 +1520,6 @@ static int chcr_copy_assoc(struct aead_request *req,
return ERR_PTR(-EINVAL);
 }
 
-static void aes_gcm_empty_pld_pad(struct scatterlist *sg,
- unsigned short offset)
-{
-   struct page *spage;
-   unsigned char *addr;
-
-   spage = sg_page(sg);
-   get_page(spage); /* so that it is not freed by NIC */
-#ifdef KMAP_ATOMIC_ARGS
-   addr = kmap_atomic(spage, KM_SOFTIRQ0);
-#else
-   addr = kmap_atomic(spage);
-#endif
-   memset(addr + sg->offset, 0, offset + 1);
-
-   kunmap_atomic(addr);
-}
-
 static int set_msg_len(u8 *block, unsigned int msglen, int csize)
 {
__be32 data;
@@ -1566,11 +1583,6 @@ static in
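
chcr_aead_need_fallback() above is a pure predicate over the request
geometry; anything the hardware work request cannot carry is handed to
the software tfm via the subrequest in chcr_aead_fallback(). The decision
in isolation, a sketch with illustrative limits in place of
T6_MAX_AAD_SIZE, MAX_SKB_FRAGS, and MAX_WR_SIZE:

#include <stdio.h>

/* Illustrative limits only; the driver reads the real ones from
 * hardware-specific constants. */
#define AAD_MAX   511
#define FRAGS_MAX 17
#define WR_MAX    512

static int need_fallback(unsigned int payload, unsigned int aadlen,
                         int src_nent, unsigned int wrlen)
{
        return payload == 0 ||          /* zero-length payload */
               aadlen > AAD_MAX ||      /* AAD too large for the WR */
               src_nent > FRAGS_MAX ||  /* too many sg fragments */
               wrlen > WR_MAX;          /* WR itself too big */
}

int main(void)
{
        printf("%d\n", need_fallback(0, 16, 4, 128));   /* 1: empty payload */
        printf("%d\n", need_fallback(64, 16, 4, 128));  /* 0: HW can take it */
        return 0;
}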

[PATCH 1/4] chcr: Increase priority of AEAD algos.

2017-04-10 Thread Harsh Jain
Templates (gcm, ccm, etc.) inherit the driver's priority value to
calculate their own priority. In some cases the template priority ends
up higher than the driver priority for the same algorithm. Without this
patch we will not be able to use the driver's authenc algos. It would be
good if this were pushed to stable.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c   |   12 ++--
 drivers/crypto/chelsio/chcr_crypto.h |4 ++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 41bc7f4..7d59591 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2673,6 +2673,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name = "gcm(aes)",
.cra_driver_name = "gcm-aes-chcr",
.cra_blocksize  = 1,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx) +
sizeof(struct chcr_gcm_ctx),
@@ -2691,6 +2692,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name = "rfc4106(gcm(aes))",
.cra_driver_name = "rfc4106-gcm-aes-chcr",
.cra_blocksize   = 1,
+   .cra_priority = CHCR_AEAD_PRIORITY + 1,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx) +
sizeof(struct chcr_gcm_ctx),
@@ -2710,6 +2712,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name = "ccm(aes)",
.cra_driver_name = "ccm-aes-chcr",
.cra_blocksize   = 1,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx),
 
@@ -2728,6 +2731,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name = "rfc4309(ccm(aes))",
.cra_driver_name = "rfc4309-ccm-aes-chcr",
.cra_blocksize   = 1,
+   .cra_priority = CHCR_AEAD_PRIORITY + 1,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx),
 
@@ -2747,6 +2751,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_driver_name =
"authenc-hmac-sha1-cbc-aes-chcr",
.cra_blocksize   = AES_BLOCK_SIZE,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx) +
sizeof(struct chcr_authenc_ctx),
@@ -2768,6 +2773,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_driver_name =
"authenc-hmac-sha256-cbc-aes-chcr",
.cra_blocksize   = AES_BLOCK_SIZE,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx) +
sizeof(struct chcr_authenc_ctx),
@@ -2788,6 +2794,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_driver_name =
"authenc-hmac-sha224-cbc-aes-chcr",
.cra_blocksize   = AES_BLOCK_SIZE,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.cra_ctxsize =  sizeof(struct chcr_context) +
sizeof(struct chcr_aead_ctx) +
sizeof(struct chcr_authenc_ctx),
@@ -2807,6 +2814,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_driver_name =
"authenc-hmac-sha384-cbc-aes-chcr",
.cra_blocksize   = AES_BLOCK_SIZE,
+   .cra_priority = CHCR_AEAD_PRIORITY,
.

[PATCH 0/4] Bug fixes and fallback for AEAD

2017-04-10 Thread Harsh Jain
This series is based on Herbert's cryptodev-2.6 tree. It includes bug
fixes and fallback for AEAD algos.

Harsh Jain (3):
  chcr: Increase priority of AEAD algos.
  chcr:Set hmac_ctrl bit to use HW register HMAC_CFG[456].
  chcr: Add fallback for AEAD algos
Atul Gupta (1):
  chcr: Fix txq ids

 drivers/crypto/chelsio/chcr_algo.c  |  298 ++-
 drivers/crypto/chelsio/chcr_algo.h  |4 +
 drivers/crypto/chelsio/chcr_core.h  |2 +
 drivers/crypto/chelsio/chcr_crypto.h|   10 +-
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c |9 +
 drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h  |1 +
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h   |3 +-
 7 files changed, 210 insertions(+), 117 deletions(-)



[PATCH 1/8] crypto:chcr-Change flow IDs

2017-01-27 Thread Harsh Jain
Assign a flowc id to each outgoing request. Firmware uses the flowc id
to schedule each request onto the hardware; without this change, FW
replies may be missed.

Reviewed-by: Hariprasad Shenai 
Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_algo.c| 18 ++
 drivers/crypto/chelsio/chcr_algo.h|  9 +
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index d29c2b4..deec7c0 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -542,10 +542,11 @@ static inline void create_wreq(struct chcr_context *ctx,
(calc_tx_flits_ofld(skb) * 8), 16)));
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
-   FILL_WR_RX_Q_ID(ctx->dev->tx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP);
+   FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+   is_iv ? iv_loc : IV_NOP, ctx->tx_channel_id);
 
-   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id);
+   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
+  qid);
chcr_req->ulptx.len = htonl((DIV_ROUND_UP((calc_tx_flits_ofld(skb) * 8),
16) - ((sizeof(chcr_req->wreq)) >> 4)));
 
@@ -606,7 +607,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req = (struct chcr_wr *)__skb_put(skb, transhdr_len);
memset(chcr_req, 0, transhdr_len);
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 1);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 1);
 
chcr_req->sec_cpl.pldlen = htonl(ivsize + req->nbytes);
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -782,6 +783,7 @@ static int chcr_device_init(struct chcr_context *ctx)
spin_lock(&ctx->dev->lock_chcr_dev);
ctx->tx_channel_id = rxq_idx;
ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+   ctx->dev->rx_channel_id = 0;
spin_unlock(&ctx->dev->lock_chcr_dev);
}
 out:
@@ -874,7 +876,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request *req,
memset(chcr_req, 0, transhdr_len);
 
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 0);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 0);
chcr_req->sec_cpl.pldlen = htonl(param->bfr_len + param->sg_len);
 
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -1425,7 +1427,7 @@ static struct sk_buff *create_authenc_wr(struct aead_request *req,
 * to the hardware spec
 */
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2,
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2,
   (ivsize ? (assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(assoclen + ivsize + req->cryptlen);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
@@ -1601,7 +1603,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu *sec_cpl,
unsigned int ivsize = AES_BLOCK_SIZE;
unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC;
-   unsigned int c_id = chcrctx->dev->tx_channel_id;
+   unsigned int c_id = chcrctx->dev->rx_channel_id;
unsigned int ccm_xtra;
unsigned char tag_offset = 0, auth_offset = 0;
unsigned char hmac_ctrl = get_hmac(crypto_aead_authsize(tfm));
@@ -1877,7 +1879,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request *req,
 
tag_offset = (op_type == CHCR_ENCRYPT_OP) ? 0 : authsize;
chcr_req->sec_cpl.op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR(
-   ctx->dev->tx_channel_id, 2, (ivsize ?
+   ctx->dev->rx_channel_id, 2, (ivsize ?
(req->assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(req->assoclen + ivsize + crypt_len);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
diff --git a/drivers/crypto/chelsio/chcr_algo.h b/drivers/crypto/chelsio/chcr_algo.h
index 3c7c51f..ba38bae 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -185,20 +185,21 @@
FW_CRYPTO_LOOKASIDE_WR_CCTX_LOC_V(1) | \
FW_CRYPTO_LOOKASIDE_WR_CCTX_SIZE_V((ctx_len)))
 
-#define FILL_WR_RX_Q_ID(cid, qid, wr_iv) \
+#define FILL_WR_RX_Q_ID(

[PATCH 4/8] crypto:chcr- Use cipher instead of Block Cipher in gcm setkey

2017-01-27 Thread Harsh Jain
One block of encryption can be done with aes-generic; there is no need
for cbc(aes). This patch replaces cbc(aes-generic) with aes-generic.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 6c2dea3..d335943 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2189,8 +2189,7 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, const u8 *key,
struct chcr_context *ctx = crypto_aead_ctx(aead);
struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
struct chcr_gcm_ctx *gctx = GCM_CTX(aeadctx);
-   struct blkcipher_desc h_desc;
-   struct scatterlist src[1];
+   struct crypto_cipher *cipher;
unsigned int ck_size;
int ret = 0, key_ctx_size = 0;
 
@@ -2223,27 +2222,26 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, const u8 *key,
CHCR_KEYCTX_MAC_KEY_SIZE_128,
0, 0,
key_ctx_size >> 4);
-   /* Calculate the H = CIPH(K, 0 repeated 16 times) using sync aes
-* blkcipher It will go on key context
+   /* Calculate the H = CIPH(K, 0 repeated 16 times).
+* It will go in key context
 */
-   h_desc.tfm = crypto_alloc_blkcipher("cbc(aes-generic)", 0, 0);
-   if (IS_ERR(h_desc.tfm)) {
+   cipher = crypto_alloc_cipher("aes-generic", 0, 0);
+   if (IS_ERR(cipher)) {
aeadctx->enckey_len = 0;
ret = -ENOMEM;
goto out;
}
-   h_desc.flags = 0;
-   ret = crypto_blkcipher_setkey(h_desc.tfm, key, keylen);
+
+   ret = crypto_cipher_setkey(cipher, key, keylen);
if (ret) {
aeadctx->enckey_len = 0;
goto out1;
}
memset(gctx->ghash_h, 0, AEAD_H_SIZE);
-   sg_init_one(&src[0], gctx->ghash_h, AEAD_H_SIZE);
-   ret = crypto_blkcipher_encrypt(&h_desc, &src[0], &src[0], AEAD_H_SIZE);
+   crypto_cipher_encrypt_one(cipher, gctx->ghash_h, gctx->ghash_h);
 
 out1:
-   crypto_free_blkcipher(h_desc.tfm);
+   crypto_free_cipher(cipher);
 out:
return ret;
 }
-- 
1.8.2.3
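
For GCM, the hash key is H = CIPH_K(0^128), a single AES block
encryption, so a bare cipher suffices; the old cbc(aes-generic) call
produced the same bytes (zero IV, one block) but allocated a whole
blkcipher to do it. The computation, sketched with OpenSSL's legacy AES
API standing in for the kernel's crypto_cipher_encrypt_one():

#include <stdio.h>
#include <openssl/aes.h>        /* legacy API; link with -lcrypto */

int main(void)
{
        unsigned char key[16] = {0}, ghash_h[16] = {0};
        AES_KEY enc;

        /* H = CIPH_K(0^128): one block, no chaining mode required. */
        AES_set_encrypt_key(key, 128, &enc);
        AES_encrypt(ghash_h, ghash_h, &enc);    /* in place, as in the patch */

        for (int i = 0; i < 16; i++)
                printf("%02x", ghash_h[i]);
        printf("\n");   /* 66e94bd4ef8a2c3b884cfa59ca342b2e for the zero key */
        return 0;
}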



[PATCH 0/8] Bug fixes

2017-01-27 Thread Harsh Jain
This patch series is based on Herbert's cryptodev-2.6 tree and depends
on the patch series "Bug Fixes for 4.10". It includes bug fixes.

Atul Gupta (2):
  crypto:chcr-Change flow IDs
  crypto:chcr- Fix wrong typecasting
Harsh Jain (8):
  crypto:chcr- Fix key length for RFC4106
  crypto:chcr-fix itnull.cocci warnings
  crypto:chcr- Use cipher instead of Block Cipher in gcm setkey
  crypto:chcr: Change cra_flags for cipher algos
  crypto:chcr- Change algo priority
  crypto:chcr-Fix Smatch Complaint

 drivers/crypto/chelsio/chcr_algo.c| 53 ++-
 drivers/crypto/chelsio/chcr_algo.h|  9 +++--
 drivers/crypto/chelsio/chcr_core.c| 11 +++---
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/crypto/chelsio/chcr_crypto.h  |  2 +-
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 
 6 files changed, 47 insertions(+), 37 deletions(-)
 mode change 100644 => 100755 drivers/crypto/chelsio/chcr_algo.c

-- 
1.8.2.3



[PATCH 8/8] crypto:chcr-Fix Smatch Complaint

2017-01-27 Thread Harsh Jain
Initialise variable after null check.

Reported-by: Dan Carpenter 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
 mode change 100644 => 100755 drivers/crypto/chelsio/chcr_algo.c

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
old mode 100644
new mode 100755
index 21fc04c..41bc7f4
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2456,13 +2456,14 @@ static int chcr_aead_op(struct aead_request *req,
 {
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct chcr_context *ctx = crypto_aead_ctx(tfm);
-   struct uld_ctx *u_ctx = ULD_CTX(ctx);
+   struct uld_ctx *u_ctx;
struct sk_buff *skb;
 
-   if (ctx && !ctx->dev) {
+   if (!ctx->dev) {
pr_err("chcr : %s : No crypto device.\n", __func__);
return -ENXIO;
}
+   u_ctx = ULD_CTX(ctx);
if (cxgb4_is_crypto_q_full(u_ctx->lldi.ports[0],
   ctx->tx_channel_id)) {
if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
-- 
1.8.2.3



[PATCH 7/8] crypto:chcr- Fix wrong typecasting

2017-01-27 Thread Harsh Jain
Typecast the pointer to the correct structure.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_core.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
index 2bfd61a..c28e018 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -151,18 +151,17 @@ int chcr_uld_rx_handler(void *handle, const __be64 *rsp,
 {
struct uld_ctx *u_ctx = (struct uld_ctx *)handle;
struct chcr_dev *dev = u_ctx->dev;
-   const struct cpl_act_establish *rpl = (struct cpl_act_establish
-  *)rsp;
+   const struct cpl_fw6_pld *rpl = (struct cpl_fw6_pld *)rsp;
 
-   if (rpl->ot.opcode != CPL_FW6_PLD) {
+   if (rpl->opcode != CPL_FW6_PLD) {
pr_err("Unsupported opcode\n");
return 0;
}
 
if (!pgl)
-   work_handlers[rpl->ot.opcode](dev, (unsigned char *)&rsp[1]);
+   work_handlers[rpl->opcode](dev, (unsigned char *)&rsp[1]);
else
-   work_handlers[rpl->ot.opcode](dev, pgl->va);
+   work_handlers[rpl->opcode](dev, pgl->va);
return 0;
 }
 
-- 
1.8.2.3



[PATCH 2/8] crypto:chcr- Fix key length for RFC4106

2017-01-27 Thread Harsh Jain
Check keylen before copying the salt to avoid integer wraparound.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index deec7c0..6c2dea3 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2194,8 +2194,8 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, const u8 *key,
unsigned int ck_size;
int ret = 0, key_ctx_size = 0;
 
-   if (get_aead_subtype(aead) ==
-   CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) {
+   if (get_aead_subtype(aead) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106 &&
+   keylen > 3) {
keylen -= 4;  /* nonce/salt is present in the last 4 bytes */
memcpy(aeadctx->salt, key + keylen, 4);
}
-- 
1.8.2.3
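
The issue is plain unsigned arithmetic: keylen is unsigned, so for
keylen < 4 the later 'keylen -= 4' wraps around to a huge value and the
salt memcpy reads far out of bounds. A two-line demonstration of the
failure mode being guarded against:

#include <stdio.h>

int main(void)
{
        unsigned int keylen = 3;        /* shorter than the 4-byte salt */

        if (keylen > 3)                 /* the added guard */
                keylen -= 4;
        else
                printf("reject: key too short for the rfc4106 salt\n");

        /* Without the guard, 3 - 4 wraps to UINT_MAX: */
        printf("unguarded keylen would be %u\n", keylen - 4u);
        return 0;
}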



[PATCH 3/8] crypto:chcr-fix itnull.cocci warnings

2017-01-27 Thread Harsh Jain
The first argument to list_for_each_entry cannot be NULL.

Generated by: scripts/coccinelle/iterators/itnull.cocci

Signed-off-by: Julia Lawall 
Signed-off-by: Fengguang Wu 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
index 1c65f07..2bfd61a 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -61,7 +61,7 @@ int assign_chcr_device(struct chcr_dev **dev)
 */
mutex_lock(&dev_mutex); /* TODO ? */
list_for_each_entry(u_ctx, &uld_ctx_list, entry)
-   if (u_ctx && u_ctx->dev) {
+   if (u_ctx->dev) {
*dev = u_ctx->dev;
ret = 0;
break;
-- 
1.8.2.3



[PATCH 5/8] crypto:chcr: Change cra_flags for cipher algos

2017-01-27 Thread Harsh Jain
Change cipher algos flags to CRYPTO_ALG_TYPE_ABLKCIPHER.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index d335943..21fc04c 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -171,7 +171,7 @@ int chcr_handle_resp(struct crypto_async_request *req, unsigned char *input,
}
break;
 
-   case CRYPTO_ALG_TYPE_BLKCIPHER:
+   case CRYPTO_ALG_TYPE_ABLKCIPHER:
ctx_req.req.ablk_req = (struct ablkcipher_request *)req;
ctx_req.ctx.ablk_ctx =
ablkcipher_request_ctx(ctx_req.req.ablk_req);
@@ -2492,7 +2492,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name   = "cbc(aes)",
.cra_driver_name= "cbc-aes-chcr",
.cra_priority   = CHCR_CRA_PRIORITY,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+   .cra_flags  = CRYPTO_ALG_TYPE_ABLKCIPHER |
CRYPTO_ALG_ASYNC,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct chcr_context)
@@ -2519,7 +2519,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name   = "xts(aes)",
.cra_driver_name= "xts-aes-chcr",
.cra_priority   = CHCR_CRA_PRIORITY,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+   .cra_flags  = CRYPTO_ALG_TYPE_ABLKCIPHER |
CRYPTO_ALG_ASYNC,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct chcr_context) +
-- 
1.8.2.3



[PATCH 6/8] crypto:chcr- Change algo priority

2017-01-27 Thread Harsh Jain
Update priorities to 3000

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_crypto.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h
index 7ec0a8f..81cfd0b 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -48,7 +48,7 @@
  * giving the processed data
  */
 
-#define CHCR_CRA_PRIORITY 300
+#define CHCR_CRA_PRIORITY 3000
 
 #define CHCR_AES_MAX_KEY_LEN  (2 * (AES_MAX_KEY_SIZE)) /* consider xts */
 #define CHCR_MAX_CRYPTO_IV_LEN 16 /* AES IV len */
-- 
1.8.2.3



[PATCH v1 4/4] crypto:chcr-fix itnull.cocci warnings

2017-01-13 Thread Harsh Jain
The first argument to list_for_each_entry cannot be NULL.

Generated by: scripts/coccinelle/iterators/itnull.cocci

Signed-off-by: Julia Lawall 
Signed-off-by: Fengguang Wu 
Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
index 1c65f07..2bfd61a 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -61,7 +61,7 @@ int assign_chcr_device(struct chcr_dev **dev)
 */
mutex_lock(&dev_mutex); /* TODO ? */
list_for_each_entry(u_ctx, &uld_ctx_list, entry)
-   if (u_ctx && u_ctx->dev) {
+   if (u_ctx->dev) {
*dev = u_ctx->dev;
ret = 0;
break;
-- 
1.8.2.3



[PATCH v1 3/4] crypto:chcr- Check device is allocated before use

2017-01-13 Thread Harsh Jain
Ensure dev is allocated for crypto uld context before using the device
for crypto operations.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_core.c | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
index 918da8e..1c65f07 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -52,6 +52,7 @@
 int assign_chcr_device(struct chcr_dev **dev)
 {
struct uld_ctx *u_ctx;
+   int ret = -ENXIO;
 
/*
 * Which device to use if multiple devices are available TODO
@@ -59,15 +60,14 @@ int assign_chcr_device(struct chcr_dev **dev)
 * must go to the same device to maintain the ordering.
 */
mutex_lock(&dev_mutex); /* TODO ? */
-   u_ctx = list_first_entry(&uld_ctx_list, struct uld_ctx, entry);
-   if (!u_ctx) {
-   mutex_unlock(&dev_mutex);
-   return -ENXIO;
+   list_for_each_entry(u_ctx, &uld_ctx_list, entry)
+   if (u_ctx && u_ctx->dev) {
+   *dev = u_ctx->dev;
+   ret = 0;
+   break;
}
-
-   *dev = u_ctx->dev;
mutex_unlock(&dev_mutex);
-   return 0;
+   return ret;
 }
 
 static int chcr_dev_add(struct uld_ctx *u_ctx)
@@ -202,10 +202,8 @@ static int chcr_uld_state_change(void *handle, enum 
cxgb4_state state)
 
 static int __init chcr_crypto_init(void)
 {
-   if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info)) {
+   if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info))
pr_err("ULD register fail: No chcr crypto support in cxgb4");
-   return -1;
-   }
 
return 0;
 }
-- 
1.8.2.3



[PATCH v1 2/4] crypto:chcr- Fix panic on dma_unmap_sg

2017-01-13 Thread Harsh Jain
Save the DMA-mapped sg list addresses in the request context so the
completion path can unmap them.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_algo.c   | 49 +++-
 drivers/crypto/chelsio/chcr_crypto.h |  3 +++
 2 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 1d7dfcf..deec7c0 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -158,7 +158,7 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
case CRYPTO_ALG_TYPE_AEAD:
ctx_req.req.aead_req = (struct aead_request *)req;
ctx_req.ctx.reqctx = aead_request_ctx(ctx_req.req.aead_req);
-   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.aead_req->dst,
+   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.ctx.reqctx->dst,
 ctx_req.ctx.reqctx->dst_nents, DMA_FROM_DEVICE);
if (ctx_req.ctx.reqctx->skb) {
kfree_skb(ctx_req.ctx.reqctx->skb);
@@ -1364,8 +1364,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
struct chcr_wr *chcr_req;
struct cpl_rx_phys_dsgl *phys_cpl;
struct phys_sge_parm sg_param;
-   struct scatterlist *src, *dst;
-   struct scatterlist src_sg[2], dst_sg[2];
+   struct scatterlist *src;
unsigned int frags = 0, transhdr_len;
unsigned int ivsize = crypto_aead_ivsize(tfm), dst_size = 0;
unsigned int   kctx_len = 0;
@@ -1385,19 +1384,21 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
 
if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0)
goto err;
-   src = scatterwalk_ffwd(src_sg, req->src, req->assoclen);
-   dst = src;
+   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
+   reqctx->dst = src;
+
if (req->src != req->dst) {
err = chcr_copy_assoc(req, aeadctx);
if (err)
return ERR_PTR(err);
-   dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen);
+   reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
+  req->assoclen);
}
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_NULL) {
null = 1;
assoclen = 0;
}
-   reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen +
+   reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents <= 0) {
pr_err("AUTHENC:Invalid Destination sg entries\n");
@@ -1462,7 +1463,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
sg_param.obsize = req->cryptlen + (op_type ? -authsize : authsize);
sg_param.qid = qid;
sg_param.align = 0;
-   if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, dst,
+   if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst,
  &sg_param))
goto dstmap_fail;
 
@@ -1713,8 +1714,7 @@ static struct sk_buff *create_aead_ccm_wr(struct 
aead_request *req,
struct chcr_wr *chcr_req;
struct cpl_rx_phys_dsgl *phys_cpl;
struct phys_sge_parm sg_param;
-   struct scatterlist *src, *dst;
-   struct scatterlist src_sg[2], dst_sg[2];
+   struct scatterlist *src;
unsigned int frags = 0, transhdr_len, ivsize = AES_BLOCK_SIZE;
unsigned int dst_size = 0, kctx_len;
unsigned int sub_type;
@@ -1730,17 +1730,19 @@ static struct sk_buff *create_aead_ccm_wr(struct 
aead_request *req,
if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0)
goto err;
sub_type = get_aead_subtype(tfm);
-   src = scatterwalk_ffwd(src_sg, req->src, req->assoclen);
-   dst = src;
+   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
+   reqctx->dst = src;
+
if (req->src != req->dst) {
err = chcr_copy_assoc(req, aeadctx);
if (err) {
pr_err("AAD copy to destination buffer fails\n");
return ERR_PTR(err);
}
-   dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen);
+   reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
+  req->assoclen);
}
-   reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen +
+   reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents <= 0) {
pr_err("CCM:Invalid Destination sg entries\n");
@@ -1779,7 +1781,7 @@ static struct sk_buff *create_aead_
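
The crash mechanics, for context: the old code built the forwarded dst
scatterlist in on-stack arrays inside create_*_wr(), but dma_unmap_sg() runs
later from chcr_handle_resp(), after those stack frames are gone. The fix
keeps the lists in the per-request context. An abridged sketch of the added
fields (names as in the patch, layout abridged and hypothetical):

        /* Abridged sketch of the request-context additions; anything the
         * async completion handler touches must live here, never on the
         * submit-path stack. */
        struct chcr_aead_reqctx_sketch {
                struct scatterlist srcffwd[2];  /* scatterwalk_ffwd() storage */
                struct scatterlist dstffwd[2];
                struct scatterlist *dst;        /* later given to dma_unmap_sg() */
                /* ... existing fields such as dst_nents, skb ... */
        };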

[PATCH v1 1/4] crypto:chcr-Change flow IDs

2017-01-13 Thread Harsh Jain
Assign a flowc id to each outgoing request. Firmware uses the flowc id
to schedule each request onto HW. FW replies may be lost without this change.

Reviewed-by: Hariprasad Shenai 
Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_algo.c| 18 ++
 drivers/crypto/chelsio/chcr_algo.h|  9 +
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 2ed1e24..1d7dfcf 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -542,10 +542,11 @@ static inline void create_wreq(struct chcr_context *ctx,
(calc_tx_flits_ofld(skb) * 8), 16)));
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
-   FILL_WR_RX_Q_ID(ctx->dev->tx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP);
+   FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+   is_iv ? iv_loc : IV_NOP, ctx->tx_channel_id);
 
-   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id);
+   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
+  qid);
chcr_req->ulptx.len = htonl((DIV_ROUND_UP((calc_tx_flits_ofld(skb) * 8),
16) - ((sizeof(chcr_req->wreq)) >> 4)));
 
@@ -606,7 +607,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req = (struct chcr_wr *)__skb_put(skb, transhdr_len);
memset(chcr_req, 0, transhdr_len);
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 1);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 1);
 
chcr_req->sec_cpl.pldlen = htonl(ivsize + req->nbytes);
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -782,6 +783,7 @@ static int chcr_device_init(struct chcr_context *ctx)
spin_lock(&ctx->dev->lock_chcr_dev);
ctx->tx_channel_id = rxq_idx;
ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+   ctx->dev->rx_channel_id = 0;
spin_unlock(&ctx->dev->lock_chcr_dev);
}
 out:
@@ -874,7 +876,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request 
*req,
memset(chcr_req, 0, transhdr_len);
 
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 0);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 0);
chcr_req->sec_cpl.pldlen = htonl(param->bfr_len + param->sg_len);
 
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -1424,7 +1426,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
 * to the hardware spec
 */
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2,
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2,
   (ivsize ? (assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(assoclen + ivsize + req->cryptlen);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
@@ -1600,7 +1602,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu 
*sec_cpl,
unsigned int ivsize = AES_BLOCK_SIZE;
unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC;
-   unsigned int c_id = chcrctx->dev->tx_channel_id;
+   unsigned int c_id = chcrctx->dev->rx_channel_id;
unsigned int ccm_xtra;
unsigned char tag_offset = 0, auth_offset = 0;
unsigned char hmac_ctrl = get_hmac(crypto_aead_authsize(tfm));
@@ -1875,7 +1877,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request 
*req,
 
tag_offset = (op_type == CHCR_ENCRYPT_OP) ? 0 : authsize;
chcr_req->sec_cpl.op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR(
-   ctx->dev->tx_channel_id, 2, (ivsize ?
+   ctx->dev->rx_channel_id, 2, (ivsize ?
(req->assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(req->assoclen + ivsize + crypt_len);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
diff --git a/drivers/crypto/chelsio/chcr_algo.h 
b/drivers/crypto/chelsio/chcr_algo.h
index 3c7c51f..ba38bae 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -185,20 +185,21 @@
FW_CRYPTO_LOOKASIDE_WR_CCTX_LOC_V(1) | \
FW_CRYPTO_LOOKASIDE_WR_CCTX_SIZE_V((ctx_len)))
 
-#define FILL_WR_RX_Q_ID(cid, qid, wr_iv) \
+#define FILL_WR_RX_Q_ID(

[PATCH v1 0/4]crypto:chcr- Bug Fixes for 4.10

2017-01-13 Thread Harsh Jain
This patch series is based on Herbert's cryptodev-2.6 tree.
It includes several critical bug fixes.

Atul Gupta (3):
  crypto:chcr-Change flow IDs
  crypto:chcr- Fix panic on dma_unmap_sg
  crypto:chcr- Check device is allocated before use
Julia Lawall (1):
  crypto:chcr-fix itnull.cocci warnings

 drivers/crypto/chelsio/chcr_algo.c| 67 ++-
 drivers/crypto/chelsio/chcr_algo.h|  9 ++--
 drivers/crypto/chelsio/chcr_core.c| 18 ---
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/crypto/chelsio/chcr_crypto.h  |  3 ++
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 
 6 files changed, 61 insertions(+), 45 deletions(-)

-- 
1.8.2.3



Re: [PATCH v1 3/8] crypto:chcr- Fix key length for RFC4106

2017-01-12 Thread Harsh Jain


On 12-01-2017 21:39, Herbert Xu wrote:
> On Fri, Jan 06, 2017 at 02:01:34PM +0530, Harsh Jain wrote:
>> Check keylen before copying salt to avoid wrap around of Integer.
>>
>> Signed-off-by: Harsh Jain 
>> ---
>>  drivers/crypto/chelsio/chcr_algo.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/crypto/chelsio/chcr_algo.c 
>> b/drivers/crypto/chelsio/chcr_algo.c
>> index deec7c0..6c2dea3 100644
>> --- a/drivers/crypto/chelsio/chcr_algo.c
>> +++ b/drivers/crypto/chelsio/chcr_algo.c
>> @@ -2194,8 +2194,8 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, 
>> const u8 *key,
>>  unsigned int ck_size;
>>  int ret = 0, key_ctx_size = 0;
>>  
>> -if (get_aead_subtype(aead) ==
>> -CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) {
>> +if (get_aead_subtype(aead) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106 &&
>> +keylen > 3) {
>>  keylen -= 4;  /* nonce/salt is present in the last 4 bytes */
>>  memcpy(aeadctx->salt, key + keylen, 4);
>>  }
> We should return an error in this case.
That case is already handled by the next if condition; it will error out
with -EINVAL there:

if (keylen == AES_KEYSIZE_128) {

>
> Cheers,
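
A condensed sketch of the resulting control flow in chcr_gcm_setkey() for
readers following along (error labels simplified to a direct return; not the
literal function body):

        if (get_aead_subtype(aead) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106 &&
            keylen > 3) {
                keylen -= 4;    /* nonce/salt is the last 4 bytes */
                memcpy(aeadctx->salt, key + keylen, 4);
        }
        if (keylen == AES_KEYSIZE_128) {
                ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_128;
        } else if (keylen == AES_KEYSIZE_192) {
                ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_192;
        } else if (keylen == AES_KEYSIZE_256) {
                ck_size = CHCR_KEYCTX_CIPHER_KEY_SIZE_256;
        } else {
                /* a too-short RFC4106 key (keylen <= 3) falls through
                 * the first if and lands here */
                aeadctx->enckey_len = 0;
                return -EINVAL;
        }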



[PATCH v1 1/8] crypto:chcr-Change flow IDs

2017-01-06 Thread Harsh Jain
Assign a flowc id to each outgoing request. Firmware uses the flowc id
to schedule each request onto HW.

Reviewed-by: Hariprasad Shenai 
Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_algo.c| 18 ++
 drivers/crypto/chelsio/chcr_algo.h|  9 +
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 
 4 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 2ed1e24..1d7dfcf 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -542,10 +542,11 @@ static inline void create_wreq(struct chcr_context *ctx,
(calc_tx_flits_ofld(skb) * 8), 16)));
chcr_req->wreq.cookie = cpu_to_be64((uintptr_t)req);
chcr_req->wreq.rx_chid_to_rx_q_id =
-   FILL_WR_RX_Q_ID(ctx->dev->tx_channel_id, qid,
-   is_iv ? iv_loc : IV_NOP);
+   FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid,
+   is_iv ? iv_loc : IV_NOP, ctx->tx_channel_id);
 
-   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id);
+   chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id,
+  qid);
chcr_req->ulptx.len = htonl((DIV_ROUND_UP((calc_tx_flits_ofld(skb) * 8),
16) - ((sizeof(chcr_req->wreq)) >> 4)));
 
@@ -606,7 +607,7 @@ static inline void create_wreq(struct chcr_context *ctx,
chcr_req = (struct chcr_wr *)__skb_put(skb, transhdr_len);
memset(chcr_req, 0, transhdr_len);
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 1);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 1);
 
chcr_req->sec_cpl.pldlen = htonl(ivsize + req->nbytes);
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -782,6 +783,7 @@ static int chcr_device_init(struct chcr_context *ctx)
spin_lock(&ctx->dev->lock_chcr_dev);
ctx->tx_channel_id = rxq_idx;
ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id;
+   ctx->dev->rx_channel_id = 0;
spin_unlock(&ctx->dev->lock_chcr_dev);
}
 out:
@@ -874,7 +876,7 @@ static struct sk_buff *create_hash_wr(struct ahash_request 
*req,
memset(chcr_req, 0, transhdr_len);
 
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2, 0);
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2, 0);
chcr_req->sec_cpl.pldlen = htonl(param->bfr_len + param->sg_len);
 
chcr_req->sec_cpl.aadstart_cipherstop_hi =
@@ -1424,7 +1426,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
 * to the hardware spec
 */
chcr_req->sec_cpl.op_ivinsrtofst =
-   FILL_SEC_CPL_OP_IVINSR(ctx->dev->tx_channel_id, 2,
+   FILL_SEC_CPL_OP_IVINSR(ctx->dev->rx_channel_id, 2,
   (ivsize ? (assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(assoclen + ivsize + req->cryptlen);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
@@ -1600,7 +1602,7 @@ static void fill_sec_cpl_for_aead(struct cpl_tx_sec_pdu 
*sec_cpl,
unsigned int ivsize = AES_BLOCK_SIZE;
unsigned int cipher_mode = CHCR_SCMD_CIPHER_MODE_AES_CCM;
unsigned int mac_mode = CHCR_SCMD_AUTH_MODE_CBCMAC;
-   unsigned int c_id = chcrctx->dev->tx_channel_id;
+   unsigned int c_id = chcrctx->dev->rx_channel_id;
unsigned int ccm_xtra;
unsigned char tag_offset = 0, auth_offset = 0;
unsigned char hmac_ctrl = get_hmac(crypto_aead_authsize(tfm));
@@ -1875,7 +1877,7 @@ static struct sk_buff *create_gcm_wr(struct aead_request 
*req,
 
tag_offset = (op_type == CHCR_ENCRYPT_OP) ? 0 : authsize;
chcr_req->sec_cpl.op_ivinsrtofst = FILL_SEC_CPL_OP_IVINSR(
-   ctx->dev->tx_channel_id, 2, (ivsize ?
+   ctx->dev->rx_channel_id, 2, (ivsize ?
(req->assoclen + 1) : 0));
chcr_req->sec_cpl.pldlen = htonl(req->assoclen + ivsize + crypt_len);
chcr_req->sec_cpl.aadstart_cipherstop_hi = FILL_SEC_CPL_CIPHERSTOP_HI(
diff --git a/drivers/crypto/chelsio/chcr_algo.h 
b/drivers/crypto/chelsio/chcr_algo.h
index 3c7c51f..ba38bae 100644
--- a/drivers/crypto/chelsio/chcr_algo.h
+++ b/drivers/crypto/chelsio/chcr_algo.h
@@ -185,20 +185,21 @@
FW_CRYPTO_LOOKASIDE_WR_CCTX_LOC_V(1) | \
FW_CRYPTO_LOOKASIDE_WR_CCTX_SIZE_V((ctx_len)))
 
-#define FILL_WR_RX_Q_ID(cid, qid, wr_iv) \
+#define FILL_WR_RX_Q_ID(cid, qid, wr_iv, fid) \
   

[PATCH v1 8/8] crypto:chcr- Fix wrong typecasting

2017-01-06 Thread Harsh Jain
Typecast the pointer to the correct structure.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_core.c | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c 
b/drivers/crypto/chelsio/chcr_core.c
index 1c65f07..aec3562 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -151,18 +151,17 @@ int chcr_uld_rx_handler(void *handle, const __be64 *rsp,
 {
struct uld_ctx *u_ctx = (struct uld_ctx *)handle;
struct chcr_dev *dev = u_ctx->dev;
-   const struct cpl_act_establish *rpl = (struct cpl_act_establish
-  *)rsp;
+   const struct cpl_fw6_pld *rpl = (struct cpl_fw6_pld *)rsp;
 
-   if (rpl->ot.opcode != CPL_FW6_PLD) {
+   if (rpl->opcode != CPL_FW6_PLD) {
pr_err("Unsupported opcode\n");
return 0;
}
 
if (!pgl)
-   work_handlers[rpl->ot.opcode](dev, (unsigned char *)&rsp[1]);
+   work_handlers[rpl->opcode](dev, (unsigned char *)&rsp[1]);
else
-   work_handlers[rpl->ot.opcode](dev, pgl->va);
+   work_handlers[rpl->opcode](dev, pgl->va);
return 0;
 }
 
-- 
1.8.2.3
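
For context, both message layouts happen to start with the opcode byte, which
is why the old cast appeared to work; the fix decodes the reply with the
struct that actually describes it. Abridged from cxgb4's t4_msg.h, quoted
from memory, so treat as illustrative:

        /* Abridged/illustrative layouts: */
        union opcode_tid {
                __be32 opcode_tid;
                u8 opcode;              /* first byte of the message */
        };

        struct cpl_fw6_pld {
                u8 opcode;              /* CPL_FW6_PLD for these replies */
                u8 rsvd[5];
                __be16 len;
                __be64 data[4];
        };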



[PATCH v1 2/8] crypto:chcr- Fix panic on dma_unmap_sg

2017-01-06 Thread Harsh Jain
Save the DMA-mapped sg list addresses in the request context so the
completion path can unmap them.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_algo.c   | 49 +++-
 drivers/crypto/chelsio/chcr_crypto.h |  3 +++
 2 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 1d7dfcf..deec7c0 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -158,7 +158,7 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
case CRYPTO_ALG_TYPE_AEAD:
ctx_req.req.aead_req = (struct aead_request *)req;
ctx_req.ctx.reqctx = aead_request_ctx(ctx_req.req.aead_req);
-   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.aead_req->dst,
+   dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.ctx.reqctx->dst,
 ctx_req.ctx.reqctx->dst_nents, DMA_FROM_DEVICE);
if (ctx_req.ctx.reqctx->skb) {
kfree_skb(ctx_req.ctx.reqctx->skb);
@@ -1364,8 +1364,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
struct chcr_wr *chcr_req;
struct cpl_rx_phys_dsgl *phys_cpl;
struct phys_sge_parm sg_param;
-   struct scatterlist *src, *dst;
-   struct scatterlist src_sg[2], dst_sg[2];
+   struct scatterlist *src;
unsigned int frags = 0, transhdr_len;
unsigned int ivsize = crypto_aead_ivsize(tfm), dst_size = 0;
unsigned int   kctx_len = 0;
@@ -1385,19 +1384,21 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
 
if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0)
goto err;
-   src = scatterwalk_ffwd(src_sg, req->src, req->assoclen);
-   dst = src;
+   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
+   reqctx->dst = src;
+
if (req->src != req->dst) {
err = chcr_copy_assoc(req, aeadctx);
if (err)
return ERR_PTR(err);
-   dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen);
+   reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
+  req->assoclen);
}
if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_NULL) {
null = 1;
assoclen = 0;
}
-   reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen +
+   reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents <= 0) {
pr_err("AUTHENC:Invalid Destination sg entries\n");
@@ -1462,7 +1463,7 @@ static struct sk_buff *create_authenc_wr(struct 
aead_request *req,
sg_param.obsize = req->cryptlen + (op_type ? -authsize : authsize);
sg_param.qid = qid;
sg_param.align = 0;
-   if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, dst,
+   if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst,
  &sg_param))
goto dstmap_fail;
 
@@ -1713,8 +1714,7 @@ static struct sk_buff *create_aead_ccm_wr(struct 
aead_request *req,
struct chcr_wr *chcr_req;
struct cpl_rx_phys_dsgl *phys_cpl;
struct phys_sge_parm sg_param;
-   struct scatterlist *src, *dst;
-   struct scatterlist src_sg[2], dst_sg[2];
+   struct scatterlist *src;
unsigned int frags = 0, transhdr_len, ivsize = AES_BLOCK_SIZE;
unsigned int dst_size = 0, kctx_len;
unsigned int sub_type;
@@ -1730,17 +1730,19 @@ static struct sk_buff *create_aead_ccm_wr(struct 
aead_request *req,
if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0)
goto err;
sub_type = get_aead_subtype(tfm);
-   src = scatterwalk_ffwd(src_sg, req->src, req->assoclen);
-   dst = src;
+   src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen);
+   reqctx->dst = src;
+
if (req->src != req->dst) {
err = chcr_copy_assoc(req, aeadctx);
if (err) {
pr_err("AAD copy to destination buffer fails\n");
return ERR_PTR(err);
}
-   dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen);
+   reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst,
+  req->assoclen);
}
-   reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen +
+   reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen +
 (op_type ? -authsize : authsize));
if (reqctx->dst_nents <= 0) {
pr_err("CCM:Invalid Destination sg entries\n");
@@ -1779,7 +1781,7 @@ static struct sk_buff *create_aead_

[PATCH v1 6/8] crypto:chcr- Change algo priority

2017-01-06 Thread Harsh Jain
Update algo priorities to 3000 so the chcr implementations take precedence
over the software providers.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_crypto.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/chelsio/chcr_crypto.h 
b/drivers/crypto/chelsio/chcr_crypto.h
index 7ec0a8f..81cfd0b 100644
--- a/drivers/crypto/chelsio/chcr_crypto.h
+++ b/drivers/crypto/chelsio/chcr_crypto.h
@@ -48,7 +48,7 @@
  * giving the processed data
  */
 
-#define CHCR_CRA_PRIORITY 300
+#define CHCR_CRA_PRIORITY 3000
 
 #define CHCR_AES_MAX_KEY_LEN  (2 * (AES_MAX_KEY_SIZE)) /* consider xts */
 #define CHCR_MAX_CRYPTO_IV_LEN 16 /* AES IV len */
-- 
1.8.2.3



[PATCH v1 3/8] crypto:chcr- Fix key length for RFC4106

2017-01-06 Thread Harsh Jain
Check keylen before copying the salt to avoid integer wrap-around.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index deec7c0..6c2dea3 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2194,8 +2194,8 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
unsigned int ck_size;
int ret = 0, key_ctx_size = 0;
 
-   if (get_aead_subtype(aead) ==
-   CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) {
+   if (get_aead_subtype(aead) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106 &&
+   keylen > 3) {
keylen -= 4;  /* nonce/salt is present in the last 4 bytes */
memcpy(aeadctx->salt, key + keylen, 4);
}
-- 
1.8.2.3



[PATCH v1 0/8] crypto:chcr- Bug fixes

2017-01-06 Thread Harsh Jain
The patch series is based on Herbert's cryptodev-2.6 tree.
It includes bug fixes.

Atul Gupta (4):
  crypto:chcr-Change flow IDs
  crypto:chcr- Fix panic on dma_unmap_sg
  crypto:chcr- Check device is allocated before use
  crypto:chcr- Fix wrong typecasting
Harsh Jain (4):
  crypto:chcr- Fix key length for RFC4106
  crypto:chcr- Use cipher instead of Block Cipher in gcm setkey
  crypto:chcr: Change cra_flags for cipher algos
  crypto:chcr- Change algo priority


 drivers/crypto/chelsio/chcr_algo.c| 97 ++-
 drivers/crypto/chelsio/chcr_algo.h|  9 +--
 drivers/crypto/chelsio/chcr_core.c| 27 
 drivers/crypto/chelsio/chcr_core.h|  1 +
 drivers/crypto/chelsio/chcr_crypto.h  |  5 +-
 drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h |  8 +++
 6 files changed, 80 insertions(+), 67 deletions(-)

-- 
1.8.2.3



[PATCH v1 7/8] crypto:chcr- Check device is allocated before use

2017-01-06 Thread Harsh Jain
Ensure dev is allocated for the crypto ULD context before using the device
for crypto operations.

Signed-off-by: Atul Gupta 
---
 drivers/crypto/chelsio/chcr_core.c | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_core.c 
b/drivers/crypto/chelsio/chcr_core.c
index 918da8e..1c65f07 100644
--- a/drivers/crypto/chelsio/chcr_core.c
+++ b/drivers/crypto/chelsio/chcr_core.c
@@ -52,6 +52,7 @@
 int assign_chcr_device(struct chcr_dev **dev)
 {
struct uld_ctx *u_ctx;
+   int ret = -ENXIO;
 
/*
 * Which device to use if multiple devices are available TODO
@@ -59,15 +60,14 @@ int assign_chcr_device(struct chcr_dev **dev)
 * must go to the same device to maintain the ordering.
 */
mutex_lock(&dev_mutex); /* TODO ? */
-   u_ctx = list_first_entry(&uld_ctx_list, struct uld_ctx, entry);
-   if (!u_ctx) {
-   mutex_unlock(&dev_mutex);
-   return -ENXIO;
+   list_for_each_entry(u_ctx, &uld_ctx_list, entry)
+   if (u_ctx && u_ctx->dev) {
+   *dev = u_ctx->dev;
+   ret = 0;
+   break;
}
-
-   *dev = u_ctx->dev;
mutex_unlock(&dev_mutex);
-   return 0;
+   return ret;
 }
 
 static int chcr_dev_add(struct uld_ctx *u_ctx)
@@ -202,10 +202,8 @@ static int chcr_uld_state_change(void *handle, enum 
cxgb4_state state)
 
 static int __init chcr_crypto_init(void)
 {
-   if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info)) {
+   if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info))
pr_err("ULD register fail: No chcr crypto support in cxgb4");
-   return -1;
-   }
 
return 0;
 }
-- 
1.8.2.3



[PATCH v1 5/8] crypto:chcr: Change cra_flags for cipher algos

2017-01-06 Thread Harsh Jain
Change the cipher algos' type flags to CRYPTO_ALG_TYPE_ABLKCIPHER.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index d335943..21fc04c 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -171,7 +171,7 @@ int chcr_handle_resp(struct crypto_async_request *req, 
unsigned char *input,
}
break;
 
-   case CRYPTO_ALG_TYPE_BLKCIPHER:
+   case CRYPTO_ALG_TYPE_ABLKCIPHER:
ctx_req.req.ablk_req = (struct ablkcipher_request *)req;
ctx_req.ctx.ablk_ctx =
ablkcipher_request_ctx(ctx_req.req.ablk_req);
@@ -2492,7 +2492,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name   = "cbc(aes)",
.cra_driver_name= "cbc-aes-chcr",
.cra_priority   = CHCR_CRA_PRIORITY,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+   .cra_flags  = CRYPTO_ALG_TYPE_ABLKCIPHER |
CRYPTO_ALG_ASYNC,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct chcr_context)
@@ -2519,7 +2519,7 @@ static int chcr_aead_op(struct aead_request *req,
.cra_name   = "xts(aes)",
.cra_driver_name= "xts-aes-chcr",
.cra_priority   = CHCR_CRA_PRIORITY,
-   .cra_flags  = CRYPTO_ALG_TYPE_BLKCIPHER |
+   .cra_flags  = CRYPTO_ALG_TYPE_ABLKCIPHER |
CRYPTO_ALG_ASYNC,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct chcr_context) +
-- 
1.8.2.3



[PATCH v1 4/8] crypto:chcr- Use cipher instead of Block Cipher in gcm setkey

2017-01-06 Thread Harsh Jain
One block of encryption can be done with aes-generic; there is no need for
cbc(aes). This patch replaces cbc(aes-generic) with aes-generic.

Signed-off-by: Harsh Jain 
---
 drivers/crypto/chelsio/chcr_algo.c | 20 +---
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c 
b/drivers/crypto/chelsio/chcr_algo.c
index 6c2dea3..d335943 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -2189,8 +2189,7 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
struct chcr_context *ctx = crypto_aead_ctx(aead);
struct chcr_aead_ctx *aeadctx = AEAD_CTX(ctx);
struct chcr_gcm_ctx *gctx = GCM_CTX(aeadctx);
-   struct blkcipher_desc h_desc;
-   struct scatterlist src[1];
+   struct crypto_cipher *cipher;
unsigned int ck_size;
int ret = 0, key_ctx_size = 0;
 
@@ -2223,27 +,26 @@ static int chcr_gcm_setkey(struct crypto_aead *aead, 
const u8 *key,
CHCR_KEYCTX_MAC_KEY_SIZE_128,
0, 0,
key_ctx_size >> 4);
-   /* Calculate the H = CIPH(K, 0 repeated 16 times) using sync aes
-* blkcipher It will go on key context
+   /* Calculate the H = CIPH(K, 0 repeated 16 times).
+* It will go in key context
 */
-   h_desc.tfm = crypto_alloc_blkcipher("cbc(aes-generic)", 0, 0);
-   if (IS_ERR(h_desc.tfm)) {
+   cipher = crypto_alloc_cipher("aes-generic", 0, 0);
+   if (IS_ERR(cipher)) {
aeadctx->enckey_len = 0;
ret = -ENOMEM;
goto out;
}
-   h_desc.flags = 0;
-   ret = crypto_blkcipher_setkey(h_desc.tfm, key, keylen);
+
+   ret = crypto_cipher_setkey(cipher, key, keylen);
if (ret) {
aeadctx->enckey_len = 0;
goto out1;
}
memset(gctx->ghash_h, 0, AEAD_H_SIZE);
-   sg_init_one(&src[0], gctx->ghash_h, AEAD_H_SIZE);
-   ret = crypto_blkcipher_encrypt(&h_desc, &src[0], &src[0], AEAD_H_SIZE);
+   crypto_cipher_encrypt_one(cipher, gctx->ghash_h, gctx->ghash_h);
 
 out1:
-   crypto_free_blkcipher(h_desc.tfm);
+   crypto_free_cipher(cipher);
 out:
return ret;
 }
-- 
1.8.2.3
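
Design note: GHASH's hash key is a single AES block, H = E_K(0^128), so a raw
single-block cipher is all that is needed; CBC over one zero block with the
default zero IV computes the same value, which is why cbc(aes-generic) worked
but cost an extra mode layer. A minimal standalone sketch of the computation
(error handling trimmed; not the literal driver code):

        /* Minimal sketch of H = E_K(0^16) via the single-block cipher
         * API, as the patch does; setkey errors not shown. */
        static int compute_ghash_h(const u8 *key, unsigned int keylen,
                                   u8 h[AES_BLOCK_SIZE])
        {
                struct crypto_cipher *cipher;

                cipher = crypto_alloc_cipher("aes-generic", 0, 0);
                if (IS_ERR(cipher))
                        return PTR_ERR(cipher);
                crypto_cipher_setkey(cipher, key, keylen);
                memset(h, 0, AES_BLOCK_SIZE);
                crypto_cipher_encrypt_one(cipher, h, h);
                crypto_free_cipher(cipher);
                return 0;
        }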



packet encryption based on gre key value with ip xfrm command

2015-05-31 Thread Harsh Jain
Hi,

I am trying to encrypt GRE packets that have a specific key value in the
GRE header, with the following command:

ip xfrm policy add src 192.168.1.9 dst 192.168.1.5 proto gre key 3 dir
in tmpl src 192.168.1.9 dst 192.168.1.5 proto esp reqid 16387 mode
transport


But it is not working. If I remove "key 3" from the above, the system
encrypts all GRE packets.

I tried with kernel version 3.18 and iproute2 version 2.4.

I found an iproute2 patch file with changes to support filtering based
on keys, but didn't find a corresponding kernel patch.
How can I encrypt packets based on the GRE key value?


Regards
Harsh Jain
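
For reference, a minimal sketch of the full setup the command above implies;
the SPI, keys and algorithms below are invented for illustration, and whether
the GRE key selector takes effect depends on the kernel's GRE flow decoding,
so check what actually got installed:

        # Policy on the receiver (from the message above):
        ip xfrm policy add src 192.168.1.9 dst 192.168.1.5 proto gre key 3 \
            dir in tmpl src 192.168.1.9 dst 192.168.1.5 proto esp \
            reqid 16387 mode transport

        # A matching SA is also needed; SPI and keys are illustrative:
        ip xfrm state add src 192.168.1.9 dst 192.168.1.5 proto esp \
            spi 0x1000 reqid 16387 mode transport \
            auth "hmac(sha1)" 0x0102030405060708090a0b0c0d0e0f1011121314 \
            enc "cbc(aes)" 0x000102030405060708090a0b0c0d0e0f

        # Inspect how iproute2 encoded the GRE key in the installed selector:
        ip xfrm policy show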