Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Harsh Jain
Hi All,

How can we open a socket of type "authenc(hmac(sha256),cbc(aes))" from a
userspace program? I checked the libkcapi library; it has test programs for
GCM/CCM. There are three approaches to Authenticated Encryption.
Which of them is supported in the crypto framework?

1) Encrypt-then-MAC (EtM)
 The plaintext is first encrypted, then a MAC is produced based on
the resulting ciphertext. The ciphertext and its MAC are sent
together.
2) Encrypt-and-MAC (E&M)
 A MAC is produced based on the plaintext, and the plaintext is
encrypted without the MAC. The plaintext's MAC and the ciphertext are
sent together.

3) MAC-then-Encrypt (MtE)
 A MAC is produced based on the plaintext, then the plaintext and
MAC are together encrypted to produce a ciphertext based on both. The
ciphertext (containing an encrypted MAC) is sent.


Regards
Harsh Jain


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Stephan Mueller
On Tuesday, 31 May 2016 at 12:31:16, Harsh Jain wrote:

Hi Harsh,

> Hi All,
> 
> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
> userspace program.I check libkcapi library. It has test programs for
> GCM/CCM. There are 3 types of approaches to Authenticated Encryption,
> Which of them is supported in crypto framework.
> 
> 1) Encrypt-then-MAC (EtM)
>  The plaintext is first encrypted, then a MAC is produced based on
> the resulting ciphertext. The ciphertext and its MAC are sent
> together.
> 2) Encrypt-and-MAC (E&M)
>  A MAC is produced based on the plaintext, and the plaintext is
> encrypted without the MAC. The plaintext's MAC and the ciphertext are
> sent together.
> 
> 3) MAC-then-Encrypt (MtE)
>  A MAC is produced based on the plaintext, then the plaintext and
> MAC are together encrypted to produce a ciphertext based on both. The
> ciphertext (containing an encrypted MAC) is sent.

The cipher types you mention refer to the implementation of authenc(). IIRC,
authenc() implements EtM, as this is mandated by IPsec.

When you use libkcapi, you should simply be able to use your cipher name with 
the AEAD API. I.e. use the examples you see for CCM or GCM and use those with 
the chosen authenc() cipher. Do you experience any issues?

Ciao
Stephan


Re: [PATCH v5 3/3] crypto: kpp - Add ECDH software support

2016-05-31 Thread Herbert Xu
On Mon, May 09, 2016 at 10:40:41PM +0100, Salvatore Benedetto wrote:
>
> + do {
> + if (tries++ >= MAX_TRIES)
> + goto err_retries;
> +
> + ecc_point_mult(pk, &curve->g, priv, NULL, curve->p, ndigits);
> +
> + } while (ecc_point_is_zero(pk));

You might want to read this again.  The original code did this
because it changed the private key in the loop, in your code
priv is constant...
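
To illustrate (a sketch only, not the actual fix; ecdh_gen_privkey() below is
a placeholder for whatever routine you use to draw a fresh scalar), the loop
can only make progress if priv is regenerated on every iteration:

	do {
		if (tries++ >= MAX_TRIES)
			goto err_retries;

		/* placeholder: redraw the private scalar before each attempt */
		ecdh_gen_privkey(priv, ndigits);

		ecc_point_mult(pk, &curve->g, priv, NULL, curve->p, ndigits);
	} while (ecc_point_is_zero(pk));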

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2] crypto: rsa - return raw integers for the ASN.1 parser

2016-05-31 Thread Herbert Xu
On Thu, May 12, 2016 at 06:00:33PM +0300, Tudor Ambarus wrote:
>
>  int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
> const void *value, size_t vlen)
>  {
>   struct rsa_key *key = context;
> + const char *ptr = value;
> + int ret;
>  
> - key->n = mpi_read_raw_data(value, vlen);
> -
> - if (!key->n)
> - return -ENOMEM;
> + while (!*ptr && vlen) {
> + ptr++;
> + vlen--;
> + }
>  
>   /* In FIPS mode only allow key size 2K & 3K */
> - if (fips_enabled && (mpi_get_size(key->n) != 256 &&
> -  mpi_get_size(key->n) != 384)) {
> + if (fips_enabled && (vlen != 256 && vlen != 384)) {
>   pr_err("RSA: key size not allowed in FIPS mode\n");
> - mpi_free(key->n);
> - key->n = NULL;
>   return -EINVAL;
>   }
> + /* invalid key size provided */
> + ret = rsa_check_key_length(vlen << 3);
> + if (ret)
> + return ret;
> +
> + key->n = kzalloc(vlen, key->flags);
> + if (!key->n)
> + return -ENOMEM;
> +

The helper shouldn't be copying it at all.

Just return the raw key as is and then the caller can copy it
or MPI parse it, etc.

The helper should just do the parsing and nothing else.
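
As a sketch of what I mean (assuming struct rsa_key gains a raw pointer plus
length, e.g. fields n and n_sz; untested):

int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
	      const void *value, size_t vlen)
{
	struct rsa_key *key = context;
	const u8 *ptr = value;

	/* skip the leading zero bytes of the ASN.1 INTEGER */
	while (vlen && !*ptr) {
		ptr++;
		vlen--;
	}

	/* In FIPS mode only allow key size 2K & 3K */
	if (fips_enabled && vlen != 256 && vlen != 384) {
		pr_err("RSA: key size not allowed in FIPS mode\n");
		return -EINVAL;
	}

	/* no copy, no MPI conversion: just record where the raw integer is */
	key->n = ptr;
	key->n_sz = vlen;

	return 0;
}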

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Atmel driver - XTS mode - Alignment issue

2016-05-31 Thread levent demir
Hi all, 

I am working on a SAMA5D3 board with the atmel-aes driver, and I have a
question about scatterlists (sg).

This board does not support XTS mode; however, we want to add this
functionality.


As a recap, here is how XTS mode works for a 512-byte block:

1) We encrypt the given IV with the second half of the key [ECB].
2) We compute the 32 tweak values (via GF multiplication).
3) We XOR the plaintext with the tweaks (called XOR_1).
4) We encrypt the result with ECB and the first half of the key.
5) We XOR the result with the tweaks again (called XOR_2).

So if I want to add my own XTS mode, I need to perform all of these
operations.

I have seen in the code that there is an alignment issue to handle: if the
source data is aligned, we can encrypt it directly. If the source data is not
aligned, we call a function to copy it into a buffer:

if (!src_aligned) {
	sg_copy_to_buffer(src, sg_nents(src), dd->buf, len);
...

Moreover we are working with dm-crypt. 

My question is: is it possible to perform the XOR operation directly on the
scatterlist if the data is aligned, or am I forced to use
sg_copy_to/from_buffer()?
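
What I have in mind for the direct XOR is roughly the following (a sketch,
untested), walking the scatterlist with the sg_mapping_iter helpers and
XOR-ing the precomputed tweak buffer in place:

#include <linux/kernel.h>
#include <linux/scatterlist.h>

/* XOR 'len' bytes of 'tweaks' into the data described by 'sg', in place. */
static void xts_xor_sg(struct scatterlist *sg, unsigned int nents,
		       const u8 *tweaks, unsigned int len)
{
	struct sg_mapping_iter miter;
	unsigned int done = 0;

	sg_miter_start(&miter, sg, nents, SG_MITER_ATOMIC | SG_MITER_TO_SG);

	while (done < len && sg_miter_next(&miter)) {
		u8 *p = miter.addr;
		unsigned int n = min_t(unsigned int, miter.length, len - done);
		unsigned int i;

		for (i = 0; i < n; i++)
			p[i] ^= tweaks[done + i];

		done += n;
	}

	sg_miter_stop(&miter);
}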

We have tested this, and here are the results:

1) The easy solution is to copy the src data into the buffer and XOR it with
the tweaks for XOR_1 and XOR_2.
2) If we XOR only the ciphertext [dst] (XOR_2) directly on the scatterlist and
compute the src XOR_1 with the buffer, it works.
3) If we XOR directly on the scatterlist for both XOR_1 and XOR_2, we get an
error at the mount step using dm-crypt:

[269132.78] EXT4-fs (dm-0): ext4_check_descriptors: Block bitmap for
group 0 not in group
(block 16843203)!
[269132.79] EXT4-fs (dm-0): group descriptors corrupted! 


Any help on this point would be appreciated.

Thanks.



Re: [PATCH 1/7] crypto : stylistic cleanup in sha1-mb

2016-05-31 Thread Herbert Xu
On Thu, May 19, 2016 at 05:43:04PM -0700, Megha Dey wrote:
> From: Megha Dey 
> 
> Currently there are several checkpatch warnings in the sha1_mb.c file:
> 'WARNING: line over 80 characters' in the sha1_mb.c file. Also, the
> syntax of some multi-line comments are not correct. This patch fixes
> these issues.
> 
> Signed-off-by: Megha Dey 

This patch says 1/7 but there is no cover letter and I've only
seen patches 1 and 2.  What's going on?

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Harsh Jain
Hi,

Do you mean something like this?

./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
48981da18e4bb9ef7e2e3162d16b19108b19050f66582cb7f7e4b6c873819b71 -k
8d7dd9b0170ce0b5f2f8e1aa768e01e91da8bfc67fd486d081b28254c99eb423 -i
7fbc02ebf5b93322329df9bfccb635af -a afcd7202d621e06ca53b70c2bdff7fb2
-l 16f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c44

It gives the following error with kernel 4.5.2:
Symmetric cipher setkey failed
Failed to invoke testing



Regards
Harsh Jain

On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller  wrote:
> Am Dienstag, 31. Mai 2016, 12:31:16 schrieb Harsh Jain:
>
> Hi Harsh,
>
>> Hi All,
>>
>> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
>> userspace program.I check libkcapi library. It has test programs for
>> GCM/CCM. There are 3 types of approaches to Authenticated Encryption,
>> Which of them is supported in crypto framework.
>>
>> 1) Encrypt-then-MAC (EtM)
>>  The plaintext is first encrypted, then a MAC is produced based on
>> the resulting ciphertext. The ciphertext and its MAC are sent
>> together.
>> 2) Encrypt-and-MAC (E&M)
>>  A MAC is produced based on the plaintext, and the plaintext is
>> encrypted without the MAC. The plaintext's MAC and the ciphertext are
>> sent together.
>>
>> 3) MAC-then-Encrypt (MtE)
>>  A MAC is produced based on the plaintext, then the plaintext and
>> MAC are together encrypted to produce a ciphertext based on both. The
>> ciphertext (containing an encrypted MAC) is sent.
>
> The cipher types you mention refer to the implementation of authenc(). IIRC,
> authenc implements EtM as this is mandated by IPSEC.
>
> When you use libkcapi, you should simply be able to use your cipher name with
> the AEAD API. I.e. use the examples you see for CCM or GCM and use those with
> the chosen authenc() cipher. Do you experience any issues?
>
> Ciao
> Stephan


RE: [PATCH v5 3/3] crypto: kpp - Add ECDH software support

2016-05-31 Thread Benedetto, Salvatore


> -Original Message-
> From: Herbert Xu [mailto:herb...@gondor.apana.org.au]
> Sent: Tuesday, May 31, 2016 7:55 AM
> To: Benedetto, Salvatore 
> Cc: linux-crypto@vger.kernel.org
> Subject: Re: [PATCH v5 3/3] crypto: kpp - Add ECDH software support
> 
> On Mon, May 09, 2016 at 10:40:41PM +0100, Salvatore Benedetto wrote:
> >
> > +   do {
> > +   if (tries++ >= MAX_TRIES)
> > +   goto err_retries;
> > +
> > +   ecc_point_mult(pk, &curve->g, priv, NULL, curve->p, ndigits);
> > +
> > +   } while (ecc_point_is_zero(pk));
> 
> You might want to read this again.  The original code did this because it
> changed the private key in the loop, in your code priv is constant...

Yep, I'll fix that. Thanks for reviewing.

Salvatore 



RE: [PATCH v5 2/3] crypto: kpp - Add DH software implementation

2016-05-31 Thread Benedetto, Salvatore


> -Original Message-
> From: Herbert Xu [mailto:herb...@gondor.apana.org.au]
> Sent: Tuesday, May 31, 2016 7:53 AM
> To: Benedetto, Salvatore 
> Cc: linux-crypto@vger.kernel.org
> Subject: Re: [PATCH v5 2/3] crypto: kpp - Add DH software implementation
> 
> On Mon, May 09, 2016 at 10:40:40PM +0100, Salvatore Benedetto wrote:
> >
> > +static int dh_set_params(struct crypto_kpp *tfm, void *buffer,
> > +unsigned int len)
> > +{
> > +   struct dh_ctx *ctx = dh_get_ctx(tfm);
> > +   struct dh_params *params = (struct dh_params *)buffer;
> > +
> > +   if (unlikely(!buffer || !len))
> > +   return -EINVAL;
> 
> What's the point of len? It's never checked anywhere apart from this non-
> zero check which is pointless.  Just get rid of it.

When I first created the API I thought it would be useful to validate the given
buffer in case the user passed in the wrong structure. The actual check would
have been

If (unlikely(!len && len != sizeof(struct dh_params))
return -EINVAL;

but I agree I don't see much value in that now. I'll remove it.

Regards,
Salvatore


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Stephan Mueller
On Tuesday, 31 May 2016 at 14:10:20, Harsh Jain wrote:

Hi Harsh,

> Hi,
> 
> You means to say like this
> 
> ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
> 48981da18e4bb9ef7e2e3162d16b19108b19050f66582cb7f7e4b6c873819b71 -k
> 8d7dd9b0170ce0b5f2f8e1aa768e01e91da8bfc67fd486d081b28254c99eb423 -i
> 7fbc02ebf5b93322329df9bfccb635af -a afcd7202d621e06ca53b70c2bdff7fb2
> -l 16f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c44
> 
> It gives following error with kernel 4.5.2
> Symmetric cipher setkey failed
> Failed to invoke testing
> 

Please see testmgr.h for usage (especially the key encoding):

invocation:
./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
53696e676c6520626c6f636b206d7367 -k
0800011006a9214036b8a15b512e03d534120006
-i 3dafba429d9eb430b422da802c9fac41 -a 3dafba429d9eb430b422da802c9fac41 -l 20

return:
e353779c1079aeb82708942dbe77181a1b13cbaf895ee12c13c52ea3cceddcb50371a206

This is the first test of hmac_sha1_aes_cbc_enc_tv_temp (RFC 3602 case 1).
Note, the input string "Single block msg" was converted to the hex string
53696e676c6520626c6f636b206d7367, as my tool always treats all input data as
hex data.
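
The key encoding means that the authenc() key is not a plain concatenation: it
starts with an rtattr header carrying the length of the encryption key,
followed by the authentication (HMAC) key and then the encryption (AES) key.
A userspace sketch of assembling such a blob (the RTA_* macros come from
<linux/rtnetlink.h>; CRYPTO_AUTHENC_KEYA_PARAM is redefined locally because
include/crypto/authenc.h is not a UAPI header; error handling omitted):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>		/* htonl() */
#include <linux/rtnetlink.h>	/* struct rtattr, RTA_LENGTH, RTA_DATA, RTA_SPACE */

/* from include/crypto/authenc.h (not exported to userspace) */
#define CRYPTO_AUTHENC_KEYA_PARAM 1

struct crypto_authenc_key_param {
	uint32_t enckeylen;	/* big endian */
};

/*
 * Assemble the authenc() key blob:
 *   rtattr header | be32 enckeylen | auth (HMAC) key | enc (AES) key
 */
static size_t build_authenc_key(uint8_t *out,
				const uint8_t *authkey, size_t authkeylen,
				const uint8_t *enckey, size_t enckeylen)
{
	struct rtattr *rta = (struct rtattr *)out;
	struct crypto_authenc_key_param *param;

	rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
	rta->rta_len = RTA_LENGTH(sizeof(*param));

	param = RTA_DATA(rta);
	param->enckeylen = htonl(enckeylen);

	memcpy(out + RTA_SPACE(sizeof(*param)), authkey, authkeylen);
	memcpy(out + RTA_SPACE(sizeof(*param)) + authkeylen, enckey, enckeylen);

	return RTA_SPACE(sizeof(*param)) + authkeylen + enckeylen;
}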

> 
> 
> Regards
> Harsh Jain
> 
> On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller  
wrote:
> > Am Dienstag, 31. Mai 2016, 12:31:16 schrieb Harsh Jain:
> > 
> > Hi Harsh,
> > 
> >> Hi All,
> >> 
> >> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
> >> userspace program.I check libkcapi library. It has test programs for
> >> GCM/CCM. There are 3 types of approaches to Authenticated Encryption,
> >> Which of them is supported in crypto framework.
> >> 
> >> 1) Encrypt-then-MAC (EtM)
> >> 
> >>  The plaintext is first encrypted, then a MAC is produced based on
> >> 
> >> the resulting ciphertext. The ciphertext and its MAC are sent
> >> together.
> >> 2) Encrypt-and-MAC (E&M)
> >> 
> >>  A MAC is produced based on the plaintext, and the plaintext is
> >> 
> >> encrypted without the MAC. The plaintext's MAC and the ciphertext are
> >> sent together.
> >> 
> >> 3) MAC-then-Encrypt (MtE)
> >> 
> >>  A MAC is produced based on the plaintext, then the plaintext and
> >> 
> >> MAC are together encrypted to produce a ciphertext based on both. The
> >> ciphertext (containing an encrypted MAC) is sent.
> > 
> > The cipher types you mention refer to the implementation of authenc().
> > IIRC, authenc implements EtM as this is mandated by IPSEC.
> > 
> > When you use libkcapi, you should simply be able to use your cipher name
> > with the AEAD API. I.e. use the examples you see for CCM or GCM and use
> > those with the chosen authenc() cipher. Do you experience any issues?
> > 
> > Ciao
> > Stephan


Ciao
Stephan


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Harsh Jain
Hi,

Thanks Stephan, I will check the same. One suggestion for the kcapi tool: add
some switch cases to the tool to test the digest and finup paths of a crypto
driver. The current implementation triggers only init/update/final.


Regards
Harsh Jain

On Tue, May 31, 2016 at 2:29 PM, Stephan Mueller  wrote:
> Am Dienstag, 31. Mai 2016, 14:10:20 schrieb Harsh Jain:
>
> Hi Harsh,
>
>> Hi,
>>
>> You means to say like this
>>
>> ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
>> 48981da18e4bb9ef7e2e3162d16b19108b19050f66582cb7f7e4b6c873819b71 -k
>> 8d7dd9b0170ce0b5f2f8e1aa768e01e91da8bfc67fd486d081b28254c99eb423 -i
>> 7fbc02ebf5b93322329df9bfccb635af -a afcd7202d621e06ca53b70c2bdff7fb2
>> -l 16f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c44
>>
>> It gives following error with kernel 4.5.2
>> Symmetric cipher setkey failed
>> Failed to invoke testing
>>
>
> Please see testmgr.h for usage (especially the key encoding):
>
> invocation:
> ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
> 53696e676c6520626c6f636b206d7367 -k
> 0800011006a9214036b8a15b512e03d534120006
> -i 3dafba429d9eb430b422da802c9fac41 -a 3dafba429d9eb430b422da802c9fac41 -l 20
>
> return:
> e353779c1079aeb82708942dbe77181a1b13cbaf895ee12c13c52ea3cceddcb50371a206
>
> This is the first test of hmac_sha1_aes_cbc_enc_tv_temp (RFC3601 case 1).
> Note, the input string of "Single block msg" was converted to hex
> 53696e676c6520626c6f636b206d7367 as my tool always treats all input data as
> hex data.
>
>>
>>
>> Regards
>> Harsh Jain
>>
>> On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller 
> wrote:
>> > Am Dienstag, 31. Mai 2016, 12:31:16 schrieb Harsh Jain:
>> >
>> > Hi Harsh,
>> >
>> >> Hi All,
>> >>
>> >> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
>> >> userspace program.I check libkcapi library. It has test programs for
>> >> GCM/CCM. There are 3 types of approaches to Authenticated Encryption,
>> >> Which of them is supported in crypto framework.
>> >>
>> >> 1) Encrypt-then-MAC (EtM)
>> >>
>> >>  The plaintext is first encrypted, then a MAC is produced based on
>> >>
>> >> the resulting ciphertext. The ciphertext and its MAC are sent
>> >> together.
>> >> 2) Encrypt-and-MAC (E&M)
>> >>
>> >>  A MAC is produced based on the plaintext, and the plaintext is
>> >>
>> >> encrypted without the MAC. The plaintext's MAC and the ciphertext are
>> >> sent together.
>> >>
>> >> 3) MAC-then-Encrypt (MtE)
>> >>
>> >>  A MAC is produced based on the plaintext, then the plaintext and
>> >>
>> >> MAC are together encrypted to produce a ciphertext based on both. The
>> >> ciphertext (containing an encrypted MAC) is sent.
>> >
>> > The cipher types you mention refer to the implementation of authenc().
>> > IIRC, authenc implements EtM as this is mandated by IPSEC.
>> >
>> > When you use libkcapi, you should simply be able to use your cipher name
>> > with the AEAD API. I.e. use the examples you see for CCM or GCM and use
>> > those with the chosen authenc() cipher. Do you experience any issues?
>> >
>> > Ciao
>> > Stephan
>
>
> Ciao
> Stephan


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Stephan Mueller
On Tuesday, 31 May 2016 at 14:45:27, Harsh Jain wrote:

Hi Harsh,

> Hi,
> 
> Thanks Stephen, I will check the same.1 suggestion for kcapi tool. Add
> some switch cases in tool to test digest and finup path of crypto
> driver. Current implementation triggers only init/update/final.

You mean for hashes? I guess the following is what you are referring to. The
same logic also exists for the other cipher types (symmetric algorithms, AEAD
ciphers). See the documentation on stream vs. one-shot use cases.

/**
 * kcapi_md_init() - initialize cipher handle
 * @handle: cipher handle filled during the call - output
 * @ciphername: kernel crypto API cipher name as specified in
 * /proc/crypto - input
 * @flags: flags specifying the type of cipher handle
 *
 * This function provides the initialization of a (keyed) message digest 
handle
 * and establishes the connection to the kernel.
 *
 * Return: 0 upon success; ENOENT - algorithm not available;
 * -EOPNOTSUPP - AF_ALG family not available;
 * -EINVAL - accept syscall failed
 * -ENOMEM - cipher handle cannot be allocated
 */
int kcapi_md_init(struct kcapi_handle **handle, const char *ciphername,
  uint32_t flags);

/**
 * kcapi_md_update() - message digest update function (stream)
 * @handle: cipher handle - input
 * @buffer: holding the data to add to the message digest - input
 * @len: buffer length - input
 *
 * Return: 0 upon success;
 * < 0 in case of error
 */
int32_t kcapi_md_update(struct kcapi_handle *handle,
const uint8_t *buffer, uint32_t len);

/**
 * kcapi_md_final() - message digest finalization function (stream)
 * @handle: cipher handle - input
 * @buffer: filled with the message digest - output
 * @len: buffer length - input
 *
 * Return: size of message digest upon success;
 * -EIO - data cannot be obtained;
 * -ENOMEM - buffer is too small for the complete message digest,
 * the buffer is filled with the truncated message digest
 */
int32_t kcapi_md_final(struct kcapi_handle *handle,
   uint8_t *buffer, uint32_t len);
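
For the stream case, the minimal sequence with exactly these calls looks like
this (a sketch, error handling trimmed; kcapi_md_destroy() is the usual
cleanup counterpart):

#include <stdint.h>
#include <kcapi.h>

int sha256_stream(const uint8_t *in, uint32_t inlen,
		  uint8_t *md, uint32_t mdlen)
{
	struct kcapi_handle *handle;
	int32_t ret;

	if (kcapi_md_init(&handle, "sha256", 0))
		return -1;

	ret = kcapi_md_update(handle, in, inlen);	/* stream: init/update */
	if (!ret)
		ret = kcapi_md_final(handle, md, mdlen);	/* stream: final */

	kcapi_md_destroy(handle);
	return ret;	/* digest size on success, < 0 on error */
}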


The test/kcapi tool is a crude test tool that I use for my regression testing. 
It is not intended for anything else.
> 
> 
> Regards
> Harsh Jain
> 
> On Tue, May 31, 2016 at 2:29 PM, Stephan Mueller  
wrote:
> > Am Dienstag, 31. Mai 2016, 14:10:20 schrieb Harsh Jain:
> > 
> > Hi Harsh,
> > 
> >> Hi,
> >> 
> >> You means to say like this
> >> 
> >> ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
> >> 48981da18e4bb9ef7e2e3162d16b19108b19050f66582cb7f7e4b6c873819b71 -k
> >> 8d7dd9b0170ce0b5f2f8e1aa768e01e91da8bfc67fd486d081b28254c99eb423 -i
> >> 7fbc02ebf5b93322329df9bfccb635af -a afcd7202d621e06ca53b70c2bdff7fb2
> >> -l 16f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c44
> >> 
> >> It gives following error with kernel 4.5.2
> >> Symmetric cipher setkey failed
> >> Failed to invoke testing
> > 
> > Please see testmgr.h for usage (especially the key encoding):
> > 
> > invocation:
> > ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
> > 53696e676c6520626c6f636b206d7367 -k
> > 0800011006a9214036b8a15b51
> > 2e03d534120006 -i 3dafba429d9eb430b422da802c9fac41 -a
> > 3dafba429d9eb430b422da802c9fac41 -l 20
> > 
> > return:
> > e353779c1079aeb82708942dbe77181a1b13cbaf895ee12c13c52ea3cceddcb50371a206
> > 
> > This is the first test of hmac_sha1_aes_cbc_enc_tv_temp (RFC3601 case 1).
> > Note, the input string of "Single block msg" was converted to hex
> > 53696e676c6520626c6f636b206d7367 as my tool always treats all input data
> > as
> > hex data.
> > 
> >> Regards
> >> Harsh Jain
> >> 
> >> On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller 
> > 
> > wrote:
> >> > Am Dienstag, 31. Mai 2016, 12:31:16 schrieb Harsh Jain:
> >> > 
> >> > Hi Harsh,
> >> > 
> >> >> Hi All,
> >> >> 
> >> >> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
> >> >> userspace program.I check libkcapi library. It has test programs for
> >> >> GCM/CCM. There are 3 types of approaches to Authenticated Encryption,
> >> >> Which of them is supported in crypto framework.
> >> >> 
> >> >> 1) Encrypt-then-MAC (EtM)
> >> >> 
> >> >>  The plaintext is first encrypted, then a MAC is produced based on
> >> >> 
> >> >> the resulting ciphertext. The ciphertext and its MAC are sent
> >> >> together.
> >> >> 2) Encrypt-and-MAC (E&M)
> >> >> 
> >> >>  A MAC is produced based on the plaintext, and the plaintext is
> >> >> 
> >> >> encrypted without the MAC. The plaintext's MAC and the ciphertext are
> >> >> sent together.
> >> >> 
> >> >> 3) MAC-then-Encrypt (MtE)
> >> >> 
> >> >>  A MAC is produced based on the plaintext, then the plaintext and
> >> >> 
> >> >> MAC are together encrypted to produce a ciphertext based on both. The
> >> >> ciphertext (containing an encrypted MAC) is sent.
> >> > 
> >> > The cipher types you mention refer to the implementation of authen

Re: [PATCH] KEYS: Add placeholder for KDF usage with DH

2016-05-31 Thread David Howells
Hi James,

> Could you pass this along to Linus as soon as possible, please?  This
> alters a new keyctl function added in the current merge window to allow for
> a future extension planned for the next merge window.

Is this likely to go to Linus before -rc2?  If not, we'll need to do things
differently.

David


Re: [PATCH 2/6] crypto: talitos - making mapping helpers more generic

2016-05-31 Thread Herbert Xu
On Fri, May 27, 2016 at 11:32:36AM +0200, Christophe Leroy wrote:
>
> + sg_count = sg_to_link_tbl_offset(src, sg_count, offset, len,
> +  &edesc->link_tbl[tbl_off])
> + if (sg_count == 1) {
> + /* Only one segment now, so no link tbl needed*/
> + copy_talitos_ptr(ptr, &edesc->link_tbl[tbl_off], is_sec1);
> + return sg_count;
> + }

This patch doesn't build.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: Intel SHA - add MODULE_ALIAS

2016-05-31 Thread Herbert Xu
On Fri, May 13, 2016 at 02:02:00PM +0200, Stephan Mueller wrote:
> Add the MODULE_ALIAS for the cra_driver_name of the different ciphers to
> allow an automated loading if a driver name is used.
> 
> Signed-off-by: Stephan Mueller 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: user - no parsing of CRYPTO_MSG_GETALG

2016-05-31 Thread Herbert Xu
On Mon, May 16, 2016 at 02:53:36AM +0200, Stephan Mueller wrote:
> The CRYPTO_MSG_GETALG netlink message type provides a buffer to the
> kernel to retrieve information from the kernel. The data buffer will not
> provide any input and will not be read. Hence the nlmsg_parse is not
> applicable to this netlink message type.
> 
> This patch fixes the following kernel log message when using this
> netlink interface:
> 
> netlink: 208 bytes leftover after parsing attributes in process `XXX'.
> 
> Patch successfully tested with libkcapi from [1] which uses
> CRYPTO_MSG_GETALG to obtain cipher-specific information from the kernel.
> 
> [1] http://www.chronox.de/libkcapi.html
> 
> Signed-off-by: Stephan Mueller 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: qat - fix typos sizeof for ctx

2016-05-31 Thread Herbert Xu
On Tue, May 17, 2016 at 10:53:51AM -0700, Tadeusz Struk wrote:
> The sizeof(*ctx->dec_cd) and sizeof(*ctx->enc_cd) are equal,
> but we should use the correct one for freeing memory anyway.
> 
> Signed-off-by: Tadeusz Struk 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 0211/1529] Fix typo

2016-05-31 Thread Herbert Xu
On Sat, May 21, 2016 at 02:03:38PM +0200, Andrea Gelmini wrote:
> Signed-off-by: Andrea Gelmini 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 0010/1529] Fix typo

2016-05-31 Thread Herbert Xu
On Sat, May 21, 2016 at 01:36:43PM +0200, Andrea Gelmini wrote:
> Signed-off-by: Andrea Gelmini 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 14/54] MAINTAINERS: Add file patterns for crypto device tree bindings

2016-05-31 Thread Herbert Xu
On Sun, May 22, 2016 at 11:05:51AM +0200, Geert Uytterhoeven wrote:
> Submitters of device tree binding documentation may forget to CC
> the subsystem maintainer if this is missing.
> 
> Signed-off-by: Geert Uytterhoeven 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 19/54] MAINTAINERS: Add file patterns for rng device tree bindings

2016-05-31 Thread Herbert Xu
On Sun, May 22, 2016 at 11:05:56AM +0200, Geert Uytterhoeven wrote:
> Submitters of device tree binding documentation may forget to CC
> the subsystem maintainer if this is missing.
> 
> Signed-off-by: Geert Uytterhoeven 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH v3 0/8] crypto: caam - add support for LS1043A SoC

2016-05-31 Thread Herbert Xu
On Thu, May 19, 2016 at 06:06:45PM +0300, Horia Geantă wrote:
> v3:
> -DT maintainers - please ack patch 8/8 "arm64: dts: ls1043a: add crypto node"
> (to go into kernel 4.8 via crypto tree)
> -Fixed typo in pdb.h: s/be32/__be32
> -Appended Acks (from v2) into commit messages
> -Tested that current patch set works on top of RSA support being added by
> Tudor Ambarus:
> [PATCH v6 0/3] crypto: caam - add support for RSA algorithm
> https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg19085.html

Patches 1 through 7 applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] lib/mpi: purge mpi_set_buffer()

2016-05-31 Thread Herbert Xu
On Thu, May 26, 2016 at 12:57:50PM +0200, Nicolai Stange wrote:
> mpi_set_buffer() has no in-tree users and similar functionality is provided
> by mpi_read_raw_data().
> 
> Remove mpi_set_buffer().
> 
> Signed-off-by: Nicolai Stange 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] hwrng: stm32: fix maybe uninitialized variable warning

2016-05-31 Thread Herbert Xu
On Thu, May 26, 2016 at 11:34:57AM +0200, Maxime Coquelin wrote:
> This patch fixes the following warning:
> drivers/char/hw_random/stm32-rng.c: In function 'stm32_rng_read':
> drivers/char/hw_random/stm32-rng.c:82:19: warning: 'sr' may be used
> uninitialized in this function
> 
> Reported-by: Sudip Mukherjee 
> Suggested-by: Arnd Bergmann 
> Cc: Daniel Thompson 
> Signed-off-by: Maxime Coquelin 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 1/2] lib/mpi: mpi_read_raw_data(): purge redundant clearing of nbits

2016-05-31 Thread Herbert Xu
On Thu, May 26, 2016 at 01:05:31PM +0200, Nicolai Stange wrote:
> In mpi_read_raw_data(), unsigned nbits is calculated as follows:
> 
>  nbits = nbytes * 8;
> 
> and redundantly cleared later on if nbytes == 0:
> 
>   if (nbytes > 0)
> ...
>   else
> nbits = 0;
> 
> Purge this redundant clearing for the sake of clarity.
> 
> Signed-off-by: Nicolai Stange 

Both applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH 0/5] refactor mpi_read_from_buffer()

2016-05-31 Thread Herbert Xu
On Thu, May 26, 2016 at 11:19:50PM +0200, Nicolai Stange wrote:
> mpi_read_from_buffer() and mpi_read_raw_data() do almost the same and share a
> fair amount of common code.
> 
> This patchset attempts to rewrite mpi_read_from_buffer() in order to implement
> it in terms of mpi_read_raw_data().
> 
> The patches 1 and 3, i.e.
>   "lib/mpi: mpi_read_from_buffer(): return error code"
> and
>   "lib/mpi: mpi_read_from_buffer(): return -EINVAL upon too short buffer"
> do the groundwork in that they move any error detection unique to
> mpi_read_from_buffer() out of the data handling loop.
> 
> The patches 2 and 4, that is
>   "lib/digsig: digsig_verify_rsa(): return -EINVAL if modulo length is zero"
> and
>   "lib/mpi: mpi_read_from_buffer(): sanitize short buffer printk"
> are not strictly necessary for the refactoring: they cleanup some minor 
> oddities
> related to error handling I came across.
> 
> Finally, the last patch in this series,
>   "lib/mpi: refactor mpi_read_from_buffer() in terms of mpi_read_raw_data()"
> actually does what this series is all about.
> 
> 
> Applicable to linux-next-20160325.

All applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH] crypto: s5p-sss - Use consistent indentation for variables and members

2016-05-31 Thread Herbert Xu
On Fri, May 27, 2016 at 01:49:40PM +0200, Krzysztof Kozlowski wrote:
> Bring some consistency by:
> 1. Replacing fixed-space indentation of structure members with just
>tabs.
> 2. Remove indentation in declaration of local variable between type and
>name.  Driver was mixing usage of such indentation and lack of it.
>When removing indentation, reorder variables in
>reversed-christmas-tree order with first variables being initialized
>ones.
> 
> Signed-off-by: Krzysztof Kozlowski 

Applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 0/4] hw rng support for NSP SoC

2016-05-31 Thread Herbert Xu
On Fri, May 27, 2016 at 06:10:37AM -0400, Yendapally Reddy Dhananjaya Reddy 
wrote:
> This patchset contains the hw random number generator support for the
> Broadcom's NSP SoC. The block is similar to the block available in
> bcm2835 with different default interrupt mask value. Due to lack of
> documentation, I cannot confirm the interrupt mask register details
> in bcm2835. In an effort to not break the existing functionality of
> bcm2835, I used a different compatible string to mask the interrupt
for the NSP SoC. Please let me know. Support was also added for providing the
requested number of random numbers instead of a static size of four bytes.
> 
> The first patch contains the documentation changes and the second patch
> contains the support for rng available in NSP SoC. The third patch
> contains the device tree changes for NSP SoC. The fourth patch contains
> the support for reading requested number of random numbers.
> 
> This patch set has been tested on NSP bcm958625HR board.
> This patch set is based on v4.6.0-rc1 and is available from github
> repo: https://github.com/Broadcom/cygnus-linux.git
> branch: nsp-rng-v2
> 
> Changes since v1

All applied.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Harsh Jain
Hi Stephan,

Yes, it is for hashes. The available hash update API in the library,
"_kcapi_md_update", always sends with the "MSG_MORE" flag set, so it will not
trigger a driver's digest/finup implementation. We need something like this:

 _kcapi_common_accept()
 send(handle->opfd, buffer, len, 0);   /* flags = 0 */

This executes the digest callback of the selected tfm from userspace
(init->digest). Similarly:

 _kcapi_common_accept()
 send(handle->opfd, buffer, len, MSG_MORE);
 send(handle->opfd, buffer, len, 0);

This executes the finup callback of the selected tfm (init->update->finup).

That way we can test all callbacks from userspace. If you feel this use case
is important in the future, you can add APIs to implement it.
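
For reference, the one-shot path can also be exercised directly over AF_ALG
without the library; a rough sketch with error handling omitted (whether the
driver's digest callback is actually invoked depends on the kernel's
algif_hash implementation):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef AF_ALG
#define AF_ALG 38
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha256",
	};
	const char msg[] = "Single block msg";
	unsigned char md[32];
	int tfmfd, opfd, i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	opfd = accept(tfmfd, NULL, 0);

	/* single send with flags = 0: one-shot hash over the whole message */
	send(opfd, msg, strlen(msg), 0);
	read(opfd, md, sizeof(md));

	for (i = 0; i < (int)sizeof(md); i++)
		printf("%02x", md[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	return 0;
}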


Regards
Harsh Jain


On Tue, May 31, 2016 at 2:51 PM, Stephan Mueller  wrote:
> Am Dienstag, 31. Mai 2016, 14:45:27 schrieb Harsh Jain:
>
> Hi Harsh,
>
>> Hi,
>>
>> Thanks Stephen, I will check the same.1 suggestion for kcapi tool. Add
>> some switch cases in tool to test digest and finup path of crypto
>> driver. Current implementation triggers only init/update/final.
>
> You mean for hashes? I guess the following is what you refer to? This logic is
> even found for the other cipher types (symmetric algos, AEAD ciphers). See the
> documentation on stream vs one-shot use cases.
>
> /**
>  * kcapi_md_init() - initialize cipher handle
>  * @handle: cipher handle filled during the call - output
>  * @ciphername: kernel crypto API cipher name as specified in
>  * /proc/crypto - input
>  * @flags: flags specifying the type of cipher handle
>  *
>  * This function provides the initialization of a (keyed) message digest
> handle
>  * and establishes the connection to the kernel.
>  *
>  * Return: 0 upon success; ENOENT - algorithm not available;
>  * -EOPNOTSUPP - AF_ALG family not available;
>  * -EINVAL - accept syscall failed
>  * -ENOMEM - cipher handle cannot be allocated
>  */
> int kcapi_md_init(struct kcapi_handle **handle, const char *ciphername,
>   uint32_t flags);
>
> /**
>  * kcapi_md_update() - message digest update function (stream)
>  * @handle: cipher handle - input
>  * @buffer: holding the data to add to the message digest - input
>  * @len: buffer length - input
>  *
>  * Return: 0 upon success;
>  * < 0 in case of error
>  */
> int32_t kcapi_md_update(struct kcapi_handle *handle,
> const uint8_t *buffer, uint32_t len);
>
> /**
>  * kcapi_md_final() - message digest finalization function (stream)
>  * @handle: cipher handle - input
>  * @buffer: filled with the message digest - output
>  * @len: buffer length - input
>  *
>  * Return: size of message digest upon success;
>  * -EIO - data cannot be obtained;
>  * -ENOMEM - buffer is too small for the complete message digest,
>  * the buffer is filled with the truncated message digest
>  */
> int32_t kcapi_md_final(struct kcapi_handle *handle,
>uint8_t *buffer, uint32_t len);
>
>
> The test/kcapi tool is a crude test tool that I use for my regression testing.
> It is not intended for anything else.
>>
>>
>> Regards
>> Harsh Jain
>>
>> On Tue, May 31, 2016 at 2:29 PM, Stephan Mueller 
> wrote:
>> > Am Dienstag, 31. Mai 2016, 14:10:20 schrieb Harsh Jain:
>> >
>> > Hi Harsh,
>> >
>> >> Hi,
>> >>
>> >> You means to say like this
>> >>
>> >> ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
>> >> 48981da18e4bb9ef7e2e3162d16b19108b19050f66582cb7f7e4b6c873819b71 -k
>> >> 8d7dd9b0170ce0b5f2f8e1aa768e01e91da8bfc67fd486d081b28254c99eb423 -i
>> >> 7fbc02ebf5b93322329df9bfccb635af -a afcd7202d621e06ca53b70c2bdff7fb2
>> >> -l 16f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c44
>> >>
>> >> It gives following error with kernel 4.5.2
>> >> Symmetric cipher setkey failed
>> >> Failed to invoke testing
>> >
>> > Please see testmgr.h for usage (especially the key encoding):
>> >
>> > invocation:
>> > ./kcapi -x 2 -e -c "authenc(hmac(sha1),cbc(aes))" -p
>> > 53696e676c6520626c6f636b206d7367 -k
>> > 0800011006a9214036b8a15b51
>> > 2e03d534120006 -i 3dafba429d9eb430b422da802c9fac41 -a
>> > 3dafba429d9eb430b422da802c9fac41 -l 20
>> >
>> > return:
>> > e353779c1079aeb82708942dbe77181a1b13cbaf895ee12c13c52ea3cceddcb50371a206
>> >
>> > This is the first test of hmac_sha1_aes_cbc_enc_tv_temp (RFC3601 case 1).
>> > Note, the input string of "Single block msg" was converted to hex
>> > 53696e676c6520626c6f636b206d7367 as my tool always treats all input data
>> > as
>> > hex data.
>> >
>> >> Regards
>> >> Harsh Jain
>> >>
>> >> On Tue, May 31, 2016 at 12:35 PM, Stephan Mueller 
>> >
>> > wrote:
>> >> > Am Dienstag, 31. Mai 2016, 12:31:16 schrieb Harsh Jain:
>> >> >
>> >> > Hi Harsh,
>> >> >
>> >> >> Hi All,
>> >> >>
>> >> >> How can we open socket of type "authenc(hmac(sha256),cbc(aes))" from
>> >> >> userspace program.I check libkcapi library. It has 

Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Stephan Mueller
On Tuesday, 31 May 2016 at 16:28:14, Harsh Jain wrote:

Hi Harsh,

> Hi Stephen,
> 
> Yes ,It's for Hash. The available API in library for hash update
> "_kcapi_md_update" uses "MSG_MORE" flag always set. It will not
> trigger driver's digest/finup implementation. We need something like
> that
> 
>  _kcapi_common_accept()
> send(handle->opfd, buffer, len, 0); ==> flag = 0.
> 
> It will execute digest callback of selected tfm from User
> Space.(init->digest) Similarly
> 
> _kcapi_common_accept()
> send(handle->opfd, buffer, len, MSG_MORE);
> send(handle->opfd, buffer, len, 0);
> 
> It will execute finup callback of selected tfm. (init->update->finup).
> 
> In that way we can test all callbacks from userspace. In future if you
> feel this use case important. You can add API's to implement this.

Ok, I see that the finup code path is not exercised in the kernel by my 
library.

Why do you think that this code path should be tested by my test code?

The test code shall verify that libkcapi works fine.

Ciao
Stephan


[PATCH] crypto: DRBG - reduce number of setkey calls

2016-05-31 Thread Stephan Mueller
The CTR DRBG code always sets the key for each symmetric cipher invocation
even though the key has not changed.

This patch ensures that setkey is only invoked when a new key is
generated by the DRBG.

With this patch, the CTR DRBG performance increases by more than 150%.

Signed-off-by: Stephan Mueller 
---
 crypto/drbg.c | 33 -
 1 file changed, 24 insertions(+), 9 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 0a3538f..0aca2b9 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -252,8 +252,10 @@ MODULE_ALIAS_CRYPTO("drbg_nopr_ctr_aes192");
 MODULE_ALIAS_CRYPTO("drbg_pr_ctr_aes128");
 MODULE_ALIAS_CRYPTO("drbg_nopr_ctr_aes128");
 
-static int drbg_kcapi_sym(struct drbg_state *drbg, const unsigned char *key,
- unsigned char *outval, const struct drbg_string *in);
+static void drbg_kcapi_symsetkey(struct drbg_state *drbg,
+const unsigned char *key);
+static int drbg_kcapi_sym(struct drbg_state *drbg, unsigned char *outval,
+ const struct drbg_string *in);
 static int drbg_init_sym_kernel(struct drbg_state *drbg);
 static int drbg_fini_sym_kernel(struct drbg_state *drbg);
 
@@ -270,6 +272,7 @@ static int drbg_ctr_bcc(struct drbg_state *drbg,
drbg_string_fill(&data, out, drbg_blocklen(drbg));
 
/* 10.4.3 step 2 / 4 */
+   drbg_kcapi_symsetkey(drbg, key);
list_for_each_entry(curr, in, list) {
const unsigned char *pos = curr->buf;
size_t len = curr->len;
@@ -278,7 +281,7 @@ static int drbg_ctr_bcc(struct drbg_state *drbg,
/* 10.4.3 step 4.2 */
if (drbg_blocklen(drbg) == cnt) {
cnt = 0;
-   ret = drbg_kcapi_sym(drbg, key, out, &data);
+   ret = drbg_kcapi_sym(drbg, out, &data);
if (ret)
return ret;
}
@@ -290,7 +293,7 @@ static int drbg_ctr_bcc(struct drbg_state *drbg,
}
/* 10.4.3 step 4.2 for last block */
if (cnt)
-   ret = drbg_kcapi_sym(drbg, key, out, &data);
+   ret = drbg_kcapi_sym(drbg, out, &data);
 
return ret;
 }
@@ -425,6 +428,7 @@ static int drbg_ctr_df(struct drbg_state *drbg,
/* 10.4.2 step 12: overwriting of outval is implemented in next step */
 
/* 10.4.2 step 13 */
+   drbg_kcapi_symsetkey(drbg, temp);
while (generated_len < bytes_to_return) {
short blocklen = 0;
/*
@@ -432,7 +436,7 @@ static int drbg_ctr_df(struct drbg_state *drbg,
 * implicit as the key is only drbg_blocklen in size based on
 * the implementation of the cipher function callback
 */
-   ret = drbg_kcapi_sym(drbg, temp, X, &cipherin);
+   ret = drbg_kcapi_sym(drbg, X, &cipherin);
if (ret)
goto out;
blocklen = (drbg_blocklen(drbg) <
@@ -488,6 +492,7 @@ static int drbg_ctr_update(struct drbg_state *drbg, struct 
list_head *seed,
ret = drbg_ctr_df(drbg, df_data, drbg_statelen(drbg), seed);
if (ret)
goto out;
+   drbg_kcapi_symsetkey(drbg, drbg->C);
}
 
drbg_string_fill(&cipherin, drbg->V, drbg_blocklen(drbg));
@@ -500,7 +505,7 @@ static int drbg_ctr_update(struct drbg_state *drbg, struct 
list_head *seed,
crypto_inc(drbg->V, drbg_blocklen(drbg));
/*
 * 10.2.1.2 step 2.2 */
-   ret = drbg_kcapi_sym(drbg, drbg->C, temp + len, &cipherin);
+   ret = drbg_kcapi_sym(drbg, temp + len, &cipherin);
if (ret)
goto out;
/* 10.2.1.2 step 2.3 and 3 */
@@ -517,6 +522,7 @@ static int drbg_ctr_update(struct drbg_state *drbg, struct 
list_head *seed,
 
/* 10.2.1.2 step 5 */
memcpy(drbg->C, temp, drbg_keylen(drbg));
+   drbg_kcapi_symsetkey(drbg, drbg->C);
/* 10.2.1.2 step 6 */
memcpy(drbg->V, temp + drbg_keylen(drbg), drbg_blocklen(drbg));
ret = 0;
@@ -546,6 +552,7 @@ static int drbg_ctr_generate(struct drbg_state *drbg,
ret = drbg_ctr_update(drbg, addtl, 2);
if (ret)
return 0;
+   drbg_kcapi_symsetkey(drbg, drbg->C);
}
 
/* 10.2.1.5.2 step 4.1 */
@@ -554,7 +561,7 @@ static int drbg_ctr_generate(struct drbg_state *drbg,
while (len < buflen) {
int outlen = 0;
/* 10.2.1.5.2 step 4.2 */
-   ret = drbg_kcapi_sym(drbg, drbg->C, drbg->scratchpad, &data);
+   ret = drbg_kcapi_sym(drbg, drbg->scratchpad, &data);
if (ret) {
len = ret;
goto out;
@@ 

[RFC] DRBG: which shall be default?

2016-05-31 Thread Stephan Mueller
Hi Herbert,

with that patch, the CTR DRBG is the fastest DRBG by far -- about 2 times
faster than the HMAC DRBG (current default) and 1.5 times faster than the
Hash DRBG.

However, I am not too fond of the CTR DRBG due to the following that I already 
mentioned some days ago. Quote:

"""
the DF/BCC function in the DRBG is critical, as I think it loses entropy
IMHO. When you seed the DRBG with, say, 256 or 384 bits of data, the BCC acts
akin to a MAC by taking the 256 or 384 bits and collapsing them into one AES
block of 128 bits. Then the DF function expands this one block into the DRBG
internal state, including the AES key of 256 / 384 bits depending on the type
of AES you use. So, if you have 256 bits of entropy in the seed, you have 128
bits left after the BCC operation.
"""


The current default of the HMAC DRBG is the leanest and cleanest, but it is 
also the slowest.

The fastest DRBG is the one that has the most complex state maintenance and I 
do not like parts of it.


Hence my question: shall we leave the HMAC DRBG as default or shall we use the 
CTR DRBG as default?

Ciao
Stephan


Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Harsh Jain
Hi,

1) Users can use libkcapi to write programs for finup/digest.
2) No, there is no kernel test for finup (not sure).
3) We can test all callbacks of new hash tfm drivers added to the kernel.
4) My driver had an issue in the finup path which was not caught by the kcapi
test program :)

regards
Harsh Jain


On Tue, May 31, 2016 at 4:35 PM, Stephan Mueller  wrote:
> Am Dienstag, 31. Mai 2016, 16:28:14 schrieb Harsh Jain:
>
> Hi Harsh,
>
>> Hi Stephen,
>>
>> Yes ,It's for Hash. The available API in library for hash update
>> "_kcapi_md_update" uses "MSG_MORE" flag always set. It will not
>> trigger driver's digest/finup implementation. We need something like
>> that
>>
>>  _kcapi_common_accept()
>> send(handle->opfd, buffer, len, 0); ==> flag = 0.
>>
>> It will execute digest callback of selected tfm from User
>> Space.(init->digest) Similarly
>>
>> _kcapi_common_accept()
>> send(handle->opfd, buffer, len, MSG_MORE);
>> send(handle->opfd, buffer, len, 0);
>>
>> It will execute finup callback of selected tfm. (init->update->finup).
>>
>> In that way we can test all callbacks from userspace. In future if you
>> feel this use case important. You can add API's to implement this.
>
> Ok, I see that the finup code path is not exercised in the kernel by my
> library.
>
> Why do you think that this code path should be tested by my test code?
>
> The test code shall verify that libkcapi works fine.
>
> Ciao
> Stephan


[PATCH v2 2/4] crypto: kdf - add known answer tests

2016-05-31 Thread Stephan Mueller
Add known answer tests to the testmgr for the KDF (SP800-108) cipher.

Signed-off-by: Stephan Mueller 
---
 crypto/testmgr.c | 167 +++
 crypto/testmgr.h | 111 
 2 files changed, 278 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c727fb0..425c212 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -115,6 +115,11 @@ struct drbg_test_suite {
unsigned int count;
 };
 
+struct kdf_test_suite {
+   struct kdf_testvec *vecs;
+   unsigned int count;
+};
+
 struct akcipher_test_suite {
struct akcipher_testvec *vecs;
unsigned int count;
@@ -133,6 +138,7 @@ struct alg_test_desc {
struct hash_test_suite hash;
struct cprng_test_suite cprng;
struct drbg_test_suite drbg;
+   struct kdf_test_suite kdf;
struct akcipher_test_suite akcipher;
} suite;
 };
@@ -1777,6 +1783,65 @@ static int alg_test_drbg(const struct alg_test_desc 
*desc, const char *driver,
 
 }
 
+static int kdf_cavs_test(struct kdf_testvec *test,
+const char *driver, u32 type, u32 mask)
+{
+   int ret = -EAGAIN;
+   struct crypto_rng *drng;
+   unsigned char *buf = kzalloc(test->expectedlen, GFP_KERNEL);
+
+   if (!buf)
+   return -ENOMEM;
+
+   drng = crypto_alloc_rng(driver, type | CRYPTO_ALG_INTERNAL, mask);
+   if (IS_ERR(drng)) {
+   printk(KERN_ERR "alg: kdf: could not allocate cipher handle "
+  "for %s\n", driver);
+   kzfree(buf);
+   return -ENOMEM;
+   }
+
+   ret = crypto_rng_reset(drng, test->K1, test->K1len);
+   if (ret) {
+   printk(KERN_ERR "alg: kdf: could not set key derivation key\n");
+   goto err;
+   }
+
+   ret = crypto_rng_generate(drng, test->context, test->contextlen,
+ buf, test->expectedlen);
+   if (ret) {
+   printk(KERN_ERR "alg: kdf: could not obtain key data\n");
+   goto err;
+   }
+
+   ret = memcmp(test->expected, buf, test->expectedlen);
+
+err:
+   crypto_free_rng(drng);
+   kzfree(buf);
+   return ret;
+}
+
+static int alg_test_kdf(const struct alg_test_desc *desc, const char *driver,
+   u32 type, u32 mask)
+{
+   int err = 0;
+   unsigned int i = 0;
+   struct kdf_testvec *template = desc->suite.kdf.vecs;
+   unsigned int tcount = desc->suite.kdf.count;
+
+   for (i = 0; i < tcount; i++) {
+   err = kdf_cavs_test(&template[i], driver, type, mask);
+   if (err) {
+   printk(KERN_ERR "alg: kdf: Test %d failed for %s\n",
+  i, driver);
+   err = -EINVAL;
+   break;
+   }
+   }
+   return err;
+}
+
 static int do_test_rsa(struct crypto_akcipher *tfm,
   struct akcipher_testvec *vecs)
 {
@@ -3273,6 +3338,108 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
.test = alg_test_null,
}, {
+   .alg = "kdf_ctr(cmac(aes))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_ctr(cmac(des3_ede))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_ctr(hmac(sha1))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_ctr(hmac(sha224))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_ctr(hmac(sha256))",
+   .test = alg_test_kdf,
+   .fips_allowed = 1,
+   .suite = {
+   .kdf = {
+   .vecs = kdf_ctr_hmac_sha256_tv_template,
+   .count = 
ARRAY_SIZE(kdf_ctr_hmac_sha256_tv_template)
+   }
+   }
+   }, {
+   .alg = "kdf_ctr(hmac(sha384))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_ctr(hmac(sha512))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_dpi(cmac(aes))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_dpi(cmac(des3_ede))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_dpi(hmac(sha1))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+   .alg = "kdf_dpi(hmac(sha224))",
+   .test = alg_test_null,
+   .fips_allowed = 1,
+   }, {
+ 

[PATCH v2 4/4] crypto: kdf - enable compilation

2016-05-31 Thread Stephan Mueller
Include KDF into Kconfig and Makefile for compilation.

Signed-off-by: Stephan Mueller 
---
 crypto/Kconfig  | 7 +++
 crypto/Makefile | 1 +
 2 files changed, 8 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..89c1891 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -349,6 +349,13 @@ config CRYPTO_KEYWRAP
  Support for key wrapping (NIST SP800-38F / RFC3394) without
  padding.
 
+config CRYPTO_KDF
+   tristate "Key Derivation Function (SP800-108)"
+   select CRYPTO_RNG
+   help
+ Support for KDF compliant to SP800-108. All three types of
+ KDF specified in SP800-108 are implemented.
+
 comment "Hash modes"
 
 config CRYPTO_CMAC
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..7f7c4be 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -72,6 +72,7 @@ obj-$(CONFIG_CRYPTO_LRW) += lrw.o
 obj-$(CONFIG_CRYPTO_XTS) += xts.o
 obj-$(CONFIG_CRYPTO_CTR) += ctr.o
 obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
+obj-$(CONFIG_CRYPTO_KDF) += kdf.o
 obj-$(CONFIG_CRYPTO_GCM) += gcm.o
 obj-$(CONFIG_CRYPTO_CCM) += ccm.o
 obj-$(CONFIG_CRYPTO_CHACHA20POLY1305) += chacha20poly1305.o
-- 
2.5.5




[PATCH v2 0/4] crypto: Key Derivation Function (SP800-108)

2016-05-31 Thread Stephan Mueller
Hi,

this patch set implements all three key derivation functions defined in
SP800-108.

The implementation is provided as a template for random number generators,
since a KDF can be considered a form of deterministic RNG where the key
material is used as a seed.

With the KDF implemented as a template, all types of keyed hashes can be
utilized, including HMAC and CMAC. The testmgr tests are derived from
publicly available test vectors from NIST.

The KDF are all tested with a complete round of CAVS testing on 32 and 64 bit.

The patch set introduces an extension to the kernel crypto API in the first
patch by adding a template handling for random number generators based on the
same logic as for keyed hashes.
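
For illustration, a kernel-side user of the template follows the normal rng
API, similar to the testmgr code in patch 2/4 (a sketch, untested):

#include <linux/err.h>
#include <crypto/rng.h>

/* Derive keylen bytes from the key Ko; info is Label || 0x00 || Context. */
static int kdf_derive(const u8 *Ko, unsigned int Kolen,
		      const u8 *info, unsigned int infolen,
		      u8 *key, unsigned int keylen)
{
	struct crypto_rng *kdf;
	int ret;

	kdf = crypto_alloc_rng("kdf_ctr(hmac(sha256))", 0, 0);
	if (IS_ERR(kdf))
		return PTR_ERR(kdf);

	ret = crypto_rng_reset(kdf, Ko, Kolen);	/* set the key derivation key */
	if (!ret)
		ret = crypto_rng_generate(kdf, info, infolen, key, keylen);

	crypto_free_rng(kdf);
	return ret;
}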

Changes v2:
* port to 4.7-rc1

Stephan Mueller (4):
  crypto: add template handling for RNGs
  crypto: kdf - add known answer tests
  crypto: kdf - SP800-108 Key Derivation Function
  crypto: kdf - enable compilation

 crypto/Kconfig   |   7 +
 crypto/Makefile  |   1 +
 crypto/kdf.c | 514 +++
 crypto/rng.c |  31 
 crypto/testmgr.c | 167 +
 crypto/testmgr.h | 111 +++
 include/crypto/rng.h |  39 
 7 files changed, 870 insertions(+)
 create mode 100644 crypto/kdf.c

-- 
2.5.5




[PATCH v2 3/4] crypto: kdf - SP800-108 Key Derivation Function

2016-05-31 Thread Stephan Mueller
The SP800-108 compliant Key Derivation Function is implemented as a
random number generator considering that it behaves like a deterministic
RNG.

All three KDF types specified in SP800-108 are implemented.

The code comments provide details about how to invoke the different KDF
types.

Signed-off-by: Stephan Mueller 
---
 crypto/kdf.c | 514 +++
 1 file changed, 514 insertions(+)
 create mode 100644 crypto/kdf.c

diff --git a/crypto/kdf.c b/crypto/kdf.c
new file mode 100644
index 000..b39bddf
--- /dev/null
+++ b/crypto/kdf.c
@@ -0,0 +1,514 @@
+/*
+ * Copyright (C) 2015, Stephan Mueller 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, and the entire permission notice in its entirety,
+ *including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *products derived from this software without specific prior
+ *written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+/*
+ * For performing a KDF operation, the following input is required
+ * from the caller:
+ *
+ * * Keying material to be used to derive the new keys from
+ *   (denoted as Ko in SP800-108)
+ * * Label -- a free form binary string
+ * * Context -- a free form binary string
+ *
+ * The KDF is implemented as a random number generator.
+ *
+ * The Ko keying material is to be provided with the initialization of the KDF
+ * "random number generator", i.e. with the crypto_rng_reset function.
+ *
+ * The Label and Context concatenated string is provided when obtaining random
+ * numbers, i.e. with the crypto_rng_generate function. The caller must format
+ * the free-form Label || Context input as deemed necessary for the given
+ * purpose. Note, SP800-108 mandates that the Label and Context are separated
+ * by a 0x00 byte, i.e. the caller shall provide the input as
+ * Label || 0x00 || Context when trying to be compliant with SP800-108. For
+ * the feedback KDF, an IV is required as documented below.
+ *
+ * Example without proper error handling:
+ * char *keying_material = "\x00\x11\x22\x33\x44\x55\x66\x77";
+ * char *label_context = "\xde\xad\xbe\xef\x00\xde\xad\xbe\xef";
+ * kdf = crypto_alloc_rng(name, 0, 0);
+ * crypto_rng_reset(kdf, keying_material, 8);
+ * crypto_rng_generate(kdf, label_context, 9, outbuf, outbuflen);
+ *
+ * NOTE: Technically you can use one buffer for holding the label_context and
+ *  the outbuf in the example above. However, multiple rounds of the
+ *  KDF are to be expected, and the input must be identical for each
+ *  round. With a single buffer, the first round would overwrite the input,
+ *  and the KDF would still calculate a cryptographically strong result
+ *  which, however, is not portable to other KDF implementations! Thus,
+ *  always use different buffers for the label_context and the outbuf. A
+ *  safe in-place operation can only be done when only one round of the
+ *  KDF is executed (i.e. the size of the requested buffer is equal to
+ *  the digestsize of the used MAC).
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+struct crypto_kdf_ctx {
+   struct shash_desc shash;
+   char ctx[];
+};
+
+/* convert a 32 bit integer into its big-endian byte representation */
+static inline void crypto_kw_cpu_to_be32(u32 val, u8 *buf)
+{
+   __be32 *a = (__be32 *)buf;
+
+   *a = cpu_to_be32(val);
+}
+
+/*
+ * Implementation of the KDF in double pipeline iteration

[PATCH v2 1/4] crypto: add template handling for RNGs

2016-05-31 Thread Stephan Mueller
This patch adds the ability to register templates for RNGs. RNGs are
"meta" mechanisms using raw cipher primitives. Thus, RNGs can now be
implemented as templates to allow the complete flexibility the kernel
crypto API provides.

Signed-off-by: Stephan Mueller 
---
 crypto/rng.c | 31 +++
 include/crypto/rng.h | 39 +++
 2 files changed, 70 insertions(+)

diff --git a/crypto/rng.c b/crypto/rng.c
index b81cffb..92cc02a 100644
--- a/crypto/rng.c
+++ b/crypto/rng.c
@@ -232,5 +232,36 @@ void crypto_unregister_rngs(struct rng_alg *algs, int 
count)
 }
 EXPORT_SYMBOL_GPL(crypto_unregister_rngs);
 
+void rng_free_instance(struct crypto_instance *inst)
+{
+   crypto_drop_spawn(crypto_instance_ctx(inst));
+   kfree(rng_instance(inst));
+}
+EXPORT_SYMBOL_GPL(rng_free_instance);
+
+static int rng_prepare_alg(struct rng_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   base->cra_type = &crypto_rng_type;
+   base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+   base->cra_flags |= CRYPTO_ALG_TYPE_RNG;
+
+   return 0;
+}
+
+int rng_register_instance(struct crypto_template *tmpl,
+ struct rng_instance *inst)
+{
+   int err;
+
+   err = rng_prepare_alg(&inst->alg);
+   if (err)
+   return err;
+
+   return crypto_register_instance(tmpl, rng_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(rng_register_instance);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Random Number Generator");
diff --git a/include/crypto/rng.h b/include/crypto/rng.h
index b95ede3..b8a6ea3 100644
--- a/include/crypto/rng.h
+++ b/include/crypto/rng.h
@@ -15,6 +15,7 @@
 #define _CRYPTO_RNG_H
 
 #include 
+#include 
 
 struct crypto_rng;
 
@@ -197,4 +198,42 @@ static inline int crypto_rng_seedsize(struct crypto_rng 
*tfm)
return crypto_rng_alg(tfm)->seedsize;
 }
 
+struct rng_instance {
+   struct rng_alg alg;
+};
+
+static inline struct rng_instance *rng_alloc_instance(
+   const char *name, struct crypto_alg *alg)
+{
+   return crypto_alloc_instance2(name, alg,
+ sizeof(struct rng_alg) - sizeof(*alg));
+}
+
+static inline struct crypto_instance *rng_crypto_instance(
+   struct rng_instance *inst)
+{
+   return container_of(&inst->alg.base, struct crypto_instance, alg);
+}
+
+static inline void *rng_instance_ctx(struct rng_instance *inst)
+{
+   return crypto_instance_ctx(rng_crypto_instance(inst));
+}
+
+static inline struct rng_alg *__crypto_rng_alg(struct crypto_alg *alg)
+{
+   return container_of(alg, struct rng_alg, base);
+}
+
+static inline struct rng_instance *rng_instance(
+   struct crypto_instance *inst)
+{
+   return container_of(__crypto_rng_alg(&inst->alg),
+   struct rng_instance, alg);
+}
+
+int rng_register_instance(struct crypto_template *tmpl,
+ struct rng_instance *inst);
+void rng_free_instance(struct crypto_instance *inst);
+
 #endif
-- 
2.5.5




Re: Test AEAD/authenc algorithms from userspace

2016-05-31 Thread Stephan Mueller
Am Dienstag, 31. Mai 2016, 17:22:12 schrieb Harsh Jain:

Hi Harsh,

> Hi,
> 
> 1) User can use libkcapi to write program for finup/digest.
> 2) No, kernel test for finup (not sure).
> 3) We can test all callback of new hash tfm drivers added in kernel.
> 4) My driver had issue in finup path which is not caught by kcapi test
> program :)

Can you please elaborate on the last one?

Maybe we should then extend the test app to really cover also all aspects of 
AF_ALG.
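
For reference, a minimal userspace sketch of the kind of AF_ALG exercise
meant here: a one-shot hash over a socket of type "hash". Which driver
callbacks (init/update/final, finup or digest) this actually lands in depends
on how algif_hash maps the operation, which is exactly why covering all of
them matters. Algorithm name and digest size are illustrative, error handling
is omitted:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",
		.salg_name   = "sha256",
	};
	const char msg[] = "test message";
	unsigned char digest[32];
	int tfmfd, opfd, i;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	opfd = accept(tfmfd, NULL, 0);

	/* one send without MSG_MORE, then read back the digest */
	send(opfd, msg, strlen(msg), 0);
	read(opfd, digest, sizeof(digest));

	for (i = 0; i < (int)sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");

	close(opfd);
	close(tfmfd);
	return 0;
}
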

Ciao
Stephan


[PATCH v4 06/10] crypto: acomp - add support for lz4 via scomp

2016-05-31 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lz4 compression algorithm.
This way, lz4 is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4.c   |   91 +--
 2 files changed, 82 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 08075c1..114d43b 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1530,6 +1530,7 @@ config CRYPTO_842
 config CRYPTO_LZ4
tristate "LZ4 compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4.c b/crypto/lz4.c
index aefbcea..205eaa5 100644
--- a/crypto/lz4.c
+++ b/crypto/lz4.c
@@ -23,36 +23,53 @@
 #include 
 #include 
 #include 
+#include 
 
 struct lz4_ctx {
void *lz4_comp_mem;
 };
 
+static void *lz4_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4_init(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4_comp_mem = vmalloc(LZ4_MEM_COMPRESS);
-   if (!ctx->lz4_comp_mem)
+   ctx->lz4_comp_mem = lz4_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4_exit(struct crypto_tfm *tfm)
 {
struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
-   vfree(ctx->lz4_comp_mem);
+
+   lz4_free_ctx(NULL, ctx->lz4_comp_mem);
 }
 
-static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4_compress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4_compress(src, slen, dst, &tmp_len, ctx->lz4_comp_mem);
+   err = lz4_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +78,23 @@ static int lz4_compress_crypto(struct crypto_tfm *tfm, const 
u8 *src,
return 0;
 }
 
-static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lz4_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lz4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4_compress_crypto(src, slen, dst, dlen, ctx->lz4_comp_mem);
+}
+
+static int __lz4_decompress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +108,20 @@ static int lz4_decompress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return err;
 }
 
+static int lz4_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   return __lz4_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
 static struct crypto_alg alg_lz4 = {
.cra_name   = "lz4",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +135,39 @@ static struct crypto_alg alg_lz4 = {
.coa_decompress = lz4_decompress_crypto } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lz4_alloc_ctx,
+   .free_ctx   = lz4_free_ctx,
+   .compress   = lz4_scompress,
+   .decompress = lz4_sdecompress,
+   .base   = {
+   .cra_name   = "lz4",
+   .cra_driver_name = "lz4-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init lz4_mod_init(void)
 {
-   return crypto_register_alg(&alg_lz4);
+   int ret;
+
+   ret = crypto_register_alg(&alg_lz4);
+   if (ret)
+   return ret;
+
+   ret = crypto_register_scomp_qdecomp(&scomp);
+   if (ret) {
+   crypto_unregister_alg(&alg_lz4);
+   return ret;
+   }
+
+   return ret;
 }
 
 

[PATCH v4 10/10] crypto: acomp - update testmgr with support for acomp

2016-05-31 Thread Giovanni Cabiddu
This patch adds tests to the test manager for algorithms exposed through
the acomp api

Signed-off-by: Giovanni Cabiddu 
---
 crypto/testmgr.c |  158 +-
 1 files changed, 145 insertions(+), 13 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c727fb0..cc531f3 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "internal.h"
 
@@ -1423,6 +1424,121 @@ out:
return ret;
 }
 
+static int test_acomp(struct crypto_acomp *tfm, struct comp_testvec *ctemplate,
+ struct comp_testvec *dtemplate, int ctcount, int dtcount)
+{
+   const char *algo = crypto_tfm_alg_driver_name(crypto_acomp_tfm(tfm));
+   unsigned int i;
+   char output[COMP_BUF_SIZE];
+   int ret;
+   struct scatterlist src, dst;
+   struct acomp_req *req;
+   struct tcrypt_result result;
+
+   for (i = 0; i < ctcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = ctemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, ctemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm, GFP_KERNEL);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_compress(req));
+   if (ret) {
+   pr_err("alg: acomp: compression failed on test %d for 
%s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->produced != ctemplate[i].outlen) {
+   pr_err("alg: acomp: Compression test %d failed for %s: 
output len = %d\n",
+  i + 1, algo, req->produced);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, ctemplate[i].output, req->produced)) {
+   pr_err("alg: acomp: Compression test %d failed for 
%s\n",
+  i + 1, algo);
+   hexdump(output, req->produced);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_request_free(req);
+   }
+
+   for (i = 0; i < dtcount; i++) {
+   unsigned int dlen = COMP_BUF_SIZE;
+   int ilen = dtemplate[i].inlen;
+
+   memset(output, 0, sizeof(output));
+   init_completion(&result.completion);
+   sg_init_one(&src, dtemplate[i].input, ilen);
+   sg_init_one(&dst, output, dlen);
+
+   req = acomp_request_alloc(tfm, GFP_KERNEL);
+   if (!req) {
+   pr_err("alg: acomp: request alloc failed for %s\n",
+  algo);
+   ret = -ENOMEM;
+   goto out;
+   }
+
+   acomp_request_set_params(req, &src, &dst, ilen, dlen);
+   acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+  tcrypt_complete, &result);
+
+   ret = wait_async_op(&result, crypto_acomp_decompress(req));
+   if (ret) {
+   pr_err("alg: acomp: decompression failed on test %d for 
%s: ret=%d\n",
+  i + 1, algo, -ret);
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (req->produced != dtemplate[i].outlen) {
+   pr_err("alg: acomp: Decompression test %d failed for 
%s: output len = %d\n",
+  i + 1, algo, req->produced);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   if (memcmp(output, dtemplate[i].output, req->produced)) {
+   pr_err("alg: acomp: Decompression test %d failed for 
%s\n",
+  i + 1, algo);
+   hexdump(output, req->produced);
+   ret = -EINVAL;
+   acomp_request_free(req);
+   goto out;
+   }
+
+   acomp_r

[PATCH v4 03/10] crypto: add driver-side scomp interface

2016-05-31 Thread Giovanni Cabiddu
Add a synchronous back-end (scomp) to acomp. This makes it easy to expose
the compression algorithms already present in the LKCF via acomp.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Makefile |1 +
 crypto/acompress.c  |   49 +++-
 crypto/scompress.c  |  257 +++
 include/crypto/acompress.h  |   33 ++---
 include/crypto/internal/acompress.h |   16 ++
 include/crypto/internal/scompress.h |  134 ++
 include/linux/crypto.h  |2 +
 7 files changed, 469 insertions(+), 23 deletions(-)
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/internal/scompress.h

diff --git a/crypto/Makefile b/crypto/Makefile
index e817b38..fc8fcfe 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -32,6 +32,7 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
 obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += scompress.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
index f24fef3..885d15d 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -22,8 +22,11 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
+static const struct crypto_type crypto_acomp_type;
+
 #ifdef CONFIG_NET
 static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
 {
@@ -67,6 +70,13 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
struct acomp_alg *alg = crypto_acomp_alg(acomp);
 
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   return crypto_init_scomp_ops_async(tfm);
+
+   acomp->compress = alg->compress;
+   acomp->decompress = alg->decompress;
+   acomp->reqsize = alg->reqsize;
+
if (alg->exit)
acomp->base.exit = crypto_acomp_exit_tfm;
 
@@ -76,15 +86,25 @@ static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
return 0;
 }
 
+unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
+{
+   int extsize = crypto_alg_extsize(alg);
+
+   if (alg->cra_type != &crypto_acomp_type)
+   extsize += sizeof(struct crypto_scomp *);
+
+   return extsize;
+}
+
 static const struct crypto_type crypto_acomp_type = {
-   .extsize = crypto_alg_extsize,
+   .extsize = crypto_acomp_extsize,
.init_tfm = crypto_acomp_init_tfm,
 #ifdef CONFIG_PROC_FS
.show = crypto_acomp_show,
 #endif
.report = crypto_acomp_report,
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
-   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_ACOMPRESS_MASK,
.type = CRYPTO_ALG_TYPE_ACOMPRESS,
.tfmsize = offsetof(struct crypto_acomp, base),
 };
@@ -96,6 +116,31 @@ struct crypto_acomp *crypto_alloc_acomp(const char 
*alg_name, u32 type,
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
 
+struct acomp_req *acomp_request_alloc(struct crypto_acomp *acomp, gfp_t gfp)
+{
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+   struct acomp_req *req;
+
+   req = __acomp_request_alloc(acomp, gfp);
+   if (req && (tfm->__crt_alg->cra_type != &crypto_acomp_type))
+   return crypto_acomp_scomp_alloc_ctx(req);
+
+   return req;
+}
+EXPORT_SYMBOL_GPL(acomp_request_alloc);
+
+void acomp_request_free(struct acomp_req *req)
+{
+   struct crypto_acomp *acomp = crypto_acomp_reqtfm(req);
+   struct crypto_tfm *tfm = crypto_acomp_tfm(acomp);
+
+   if (tfm->__crt_alg->cra_type != &crypto_acomp_type)
+   crypto_acomp_scomp_free_ctx(req);
+
+   __acomp_request_free(req);
+}
+EXPORT_SYMBOL_GPL(acomp_request_free);
+
 int crypto_register_acomp(struct acomp_alg *alg)
 {
struct crypto_alg *base = &alg->base;
diff --git a/crypto/scompress.c b/crypto/scompress.c
new file mode 100644
index 000..5a25e17
--- /dev/null
+++ b/crypto/scompress.c
@@ -0,0 +1,257 @@
+/*
+ * Synchronous Compression operations
+ *
+ * Copyright 2015 LG Electronics Inc.
+ * Copyright (c) 2016, Intel Corporation
+ * Author: Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+static const struct crypto_type crypto_scomp_type;
+
+#ifdef CONFIG_NET
+static int crypto_scomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp rscomp;
+
+   strncpy(rscomp.type, "scomp", sizeof(rscomp.type));
+
+   if (nla_put(sk

[PATCH v4 08/10] crypto: acomp - add support for 842 via scomp

2016-05-31 Thread Giovanni Cabiddu
This patch implements an scomp backend for the 842 compression algorithm.
This way, 842 is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/842.c   |   82 +--
 crypto/Kconfig |1 +
 2 files changed, 80 insertions(+), 3 deletions(-)

diff --git a/crypto/842.c b/crypto/842.c
index 98e387e..a433ac3 100644
--- a/crypto/842.c
+++ b/crypto/842.c
@@ -31,11 +31,46 @@
 #include 
 #include 
 #include 
+#include 
 
 struct crypto842_ctx {
-   char wmem[SW842_MEM_COMPRESS];  /* working memory for compress */
+   void *wmem; /* working memory for compress */
 };
 
+static void *crypto842_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(SW842_MEM_COMPRESS, GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
+static int crypto842_init(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   ctx->wmem = crypto842_alloc_ctx(NULL);
+   if (IS_ERR(ctx->wmem))
+   return -ENOMEM;
+
+   return 0;
+}
+
+static void crypto842_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kfree(ctx);
+}
+
+static void crypto842_exit(struct crypto_tfm *tfm)
+{
+   struct crypto842_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   crypto842_free_ctx(NULL, ctx->wmem);
+}
+
 static int crypto842_compress(struct crypto_tfm *tfm,
  const u8 *src, unsigned int slen,
  u8 *dst, unsigned int *dlen)
@@ -45,6 +80,13 @@ static int crypto842_compress(struct crypto_tfm *tfm,
return sw842_compress(src, slen, dst, dlen, ctx->wmem);
 }
 
+static int crypto842_scompress(struct crypto_scomp *tfm,
+  const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
+{
+   return sw842_compress(src, slen, dst, dlen, ctx);
+}
+
 static int crypto842_decompress(struct crypto_tfm *tfm,
const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen)
@@ -52,27 +94,61 @@ static int crypto842_decompress(struct crypto_tfm *tfm,
return sw842_decompress(src, slen, dst, dlen);
 }
 
+static int crypto842_sdecompress(struct crypto_scomp *tfm,
+const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
+{
+   return sw842_decompress(src, slen, dst, dlen);
+}
+
 static struct crypto_alg alg = {
.cra_name   = "842",
.cra_driver_name= "842-generic",
.cra_priority   = 100,
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
-   .cra_ctxsize= sizeof(struct crypto842_ctx),
.cra_module = THIS_MODULE,
+   .cra_init   = crypto842_init,
+   .cra_exit   = crypto842_exit,
.cra_u  = { .compress = {
.coa_compress   = crypto842_compress,
.coa_decompress = crypto842_decompress } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = crypto842_alloc_ctx,
+   .free_ctx   = crypto842_free_ctx,
+   .compress   = crypto842_scompress,
+   .decompress = crypto842_sdecompress,
+   .base   = {
+   .cra_name   = "842",
+   .cra_driver_name = "842-scomp",
+   .cra_priority= 100,
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init crypto842_mod_init(void)
 {
-   return crypto_register_alg(&alg);
+   int ret;
+
+   ret = crypto_register_alg(&alg);
+   if (ret)
+   return ret;
+
+   ret = crypto_register_scomp_qdecomp(&scomp);
+   if (ret) {
+   crypto_unregister_alg(&alg);
+   return ret;
+   }
+
+   return ret;
 }
 module_init(crypto842_mod_init);
 
 static void __exit crypto842_mod_exit(void)
 {
crypto_unregister_alg(&alg);
+   crypto_unregister_scomp(&scomp);
 }
 module_exit(crypto842_mod_exit);
 
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 59570da..09c88ba 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1522,6 +1522,7 @@ config CRYPTO_LZO
 config CRYPTO_842
tristate "842 compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select 842_COMPRESS
select 842_DECOMPRESS
help
-- 
1.7.4.1



[PATCH v4 09/10] crypto: acomp - add support for deflate via scomp

2016-05-31 Thread Giovanni Cabiddu
This patch implements an scomp backend for the deflate compression
algorithm. This way, deflate is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig   |1 +
 crypto/deflate.c |  111 +-
 2 files changed, 102 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 09c88ba..b617c5d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1502,6 +1502,7 @@ comment "Compression"
 config CRYPTO_DEFLATE
tristate "Deflate compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select ZLIB_INFLATE
select ZLIB_DEFLATE
help
diff --git a/crypto/deflate.c b/crypto/deflate.c
index 95d8d37..f942cb3 100644
--- a/crypto/deflate.c
+++ b/crypto/deflate.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define DEFLATE_DEF_LEVEL  Z_DEFAULT_COMPRESSION
 #define DEFLATE_DEF_WINBITS11
@@ -101,9 +102,8 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
vfree(ctx->decomp_stream.workspace);
 }
 
-static int deflate_init(struct crypto_tfm *tfm)
+static int __deflate_init(void *ctx)
 {
-   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
int ret;
 
ret = deflate_comp_init(ctx);
@@ -116,19 +116,55 @@ out:
return ret;
 }
 
-static void deflate_exit(struct crypto_tfm *tfm)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
+{
+   struct deflate_ctx *ctx;
+   int ret;
+
+   ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   ret = __deflate_init(ctx);
+   if (ret) {
+   kfree(ctx);
+   return ERR_PTR(ret);
+   }
+
+   return ctx;
+}
+
+static int deflate_init(struct crypto_tfm *tfm)
 {
struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
 
+   return __deflate_init(ctx);
+}
+
+static void __deflate_exit(void *ctx)
+{
deflate_comp_exit(ctx);
deflate_decomp_exit(ctx);
 }
 
-static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static void deflate_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   __deflate_exit(ctx);
+   kzfree(ctx);
+}
+
+static void deflate_exit(struct crypto_tfm *tfm)
+{
+   struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   __deflate_exit(ctx);
+}
+
+static int __deflate_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->comp_stream;
 
ret = zlib_deflateReset(stream);
@@ -153,12 +189,27 @@ out:
return ret;
 }
 
-static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int deflate_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_compress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __deflate_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __deflate_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen, void *ctx)
 {
 
int ret = 0;
-   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+   struct deflate_ctx *dctx = ctx;
struct z_stream_s *stream = &dctx->decomp_stream;
 
ret = zlib_inflateReset(stream);
@@ -194,6 +245,21 @@ out:
return ret;
 }
 
+static int deflate_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct deflate_ctx *dctx = crypto_tfm_ctx(tfm);
+
+   return __deflate_decompress(src, slen, dst, dlen, dctx);
+}
+
+static int deflate_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __deflate_decompress(src, slen, dst, dlen, ctx);
+}
+
 static struct crypto_alg alg = {
.cra_name   = "deflate",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -206,14 +272,39 @@ static struct crypto_alg alg = {
.coa_decompress = deflate_decompress } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = deflate_alloc_ctx,
+   .free_ctx   = deflate_free_ctx,
+   .compress   = deflate_scompress,
+   .decompress = def

[PATCH v4 07/10] crypto: acomp - add support for lz4hc via scomp

2016-05-31 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lz4hc compression algorithm.
This way, lz4hc is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lz4hc.c |   92 +--
 2 files changed, 83 insertions(+), 10 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 114d43b..59570da 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1539,6 +1539,7 @@ config CRYPTO_LZ4
 config CRYPTO_LZ4HC
tristate "LZ4HC compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZ4HC_COMPRESS
select LZ4_DECOMPRESS
help
diff --git a/crypto/lz4hc.c b/crypto/lz4hc.c
index a1d3b5b..068b06f 100644
--- a/crypto/lz4hc.c
+++ b/crypto/lz4hc.c
@@ -22,37 +22,53 @@
 #include 
 #include 
 #include 
+#include 
 
 struct lz4hc_ctx {
void *lz4hc_comp_mem;
 };
 
+static void *lz4hc_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = vmalloc(LZ4HC_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lz4hc_init(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lz4hc_comp_mem = vmalloc(LZ4HC_MEM_COMPRESS);
-   if (!ctx->lz4hc_comp_mem)
+   ctx->lz4hc_comp_mem = lz4hc_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lz4hc_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lz4hc_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   vfree(ctx);
+}
+
 static void lz4hc_exit(struct crypto_tfm *tfm)
 {
struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   vfree(ctx->lz4hc_comp_mem);
+   lz4hc_free_ctx(NULL, ctx->lz4hc_comp_mem);
 }
 
-static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lz4hc_compress_crypto(const u8 *src, unsigned int slen,
+  u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen;
int err;
 
-   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx->lz4hc_comp_mem);
+   err = lz4hc_compress(src, slen, dst, &tmp_len, ctx);
 
if (err < 0)
return -EINVAL;
@@ -61,8 +77,25 @@ static int lz4hc_compress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return 0;
 }
 
-static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lz4hc_scompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lz4hc_compress_crypto(src, slen, dst, dlen, ctx);
+}
+
+static int lz4hc_compress_crypto(struct crypto_tfm *tfm, const u8 *src,
+unsigned int slen, u8 *dst,
+unsigned int *dlen)
+{
+   struct lz4hc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lz4hc_compress_crypto(src, slen, dst, dlen,
+   ctx->lz4hc_comp_mem);
+}
+
+static int __lz4hc_decompress_crypto(const u8 *src, unsigned int slen,
+u8 *dst, unsigned int *dlen, void *ctx)
 {
int err;
size_t tmp_len = *dlen;
@@ -76,6 +109,20 @@ static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, 
const u8 *src,
return err;
 }
 
+static int lz4hc_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
+static int lz4hc_decompress_crypto(struct crypto_tfm *tfm, const u8 *src,
+  unsigned int slen, u8 *dst,
+  unsigned int *dlen)
+{
+   return __lz4hc_decompress_crypto(src, slen, dst, dlen, NULL);
+}
+
 static struct crypto_alg alg_lz4hc = {
.cra_name   = "lz4hc",
.cra_flags  = CRYPTO_ALG_TYPE_COMPRESS,
@@ -89,14 +136,39 @@ static struct crypto_alg alg_lz4hc = {
.coa_decompress = lz4hc_decompress_crypto } }
 };
 
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lz4hc_alloc_ctx,
+   .free_ctx   = lz4hc_free_ctx,
+   .compress   = lz4hc_scompress,
+   .decompress = lz4hc_sdecompress,
+   .base   = {
+   .cra_name   = "lz4hc",
+   .cra_driver_name = "lz4hc-scomp",
+   .cra_module  = THIS_MODULE,
+   }
+};
+
 static int __init lz4hc_mod_init(void)
 {
-   return crypto_register_alg(&alg_lz4hc);
+   int ret;
+
+   ret = crypto_register_alg(&alg_lz4hc);
+   if (ret)
+ 

[PATCH v4 01/10] crypto: shrink hash down to two types

2016-05-31 Thread Giovanni Cabiddu
Move the hash algorithm types to 0xe to free up space for acomp/scomp/qdecomp

Signed-off-by: Giovanni Cabiddu 
---
 include/linux/crypto.h |   10 +-
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6e28c89..d844cbc 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -48,15 +48,15 @@
 #define CRYPTO_ALG_TYPE_BLKCIPHER  0x0004
 #define CRYPTO_ALG_TYPE_ABLKCIPHER 0x0005
 #define CRYPTO_ALG_TYPE_GIVCIPHER  0x0006
-#define CRYPTO_ALG_TYPE_DIGEST 0x0008
-#define CRYPTO_ALG_TYPE_HASH   0x0008
-#define CRYPTO_ALG_TYPE_SHASH  0x0009
-#define CRYPTO_ALG_TYPE_AHASH  0x000a
 #define CRYPTO_ALG_TYPE_RNG0x000c
 #define CRYPTO_ALG_TYPE_AKCIPHER   0x000d
+#define CRYPTO_ALG_TYPE_DIGEST 0x000e
+#define CRYPTO_ALG_TYPE_HASH   0x000e
+#define CRYPTO_ALG_TYPE_SHASH  0x000e
+#define CRYPTO_ALG_TYPE_AHASH  0x000f
 
 #define CRYPTO_ALG_TYPE_HASH_MASK  0x000e
-#define CRYPTO_ALG_TYPE_AHASH_MASK 0x000c
+#define CRYPTO_ALG_TYPE_AHASH_MASK 0x000e
 #define CRYPTO_ALG_TYPE_BLKCIPHER_MASK 0x000c
 
 #define CRYPTO_ALG_LARVAL  0x0010
-- 
1.7.4.1



[PATCH v4 02/10] crypto: add asynchronous compression api

2016-05-31 Thread Giovanni Cabiddu
This patch introduces acomp, an asynchronous compression api that uses
scatterlist buffers.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig  |   10 ++
 crypto/Makefile |2 +
 crypto/acompress.c  |  118 
 crypto/crypto_user.c|   21 +++
 include/crypto/acompress.h  |  260 +++
 include/crypto/internal/acompress.h |   66 +
 include/linux/crypto.h  |1 +
 7 files changed, 478 insertions(+), 0 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..24fef55 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -93,6 +93,15 @@ config CRYPTO_AKCIPHER
select CRYPTO_AKCIPHER2
select CRYPTO_ALGAPI
 
+config CRYPTO_ACOMP
+   tristate
+   select CRYPTO_ACOMP2
+   select CRYPTO_ALGAPI
+
+config CRYPTO_ACOMP2
+   tristate
+   select CRYPTO_ALGAPI2
+
 config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
@@ -115,6 +124,7 @@ config CRYPTO_MANAGER2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
select CRYPTO_AKCIPHER2
+   select CRYPTO_ACOMP2
 
 config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..e817b38 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -31,6 +31,8 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
+obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
+
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
 clean-files += rsapubkey-asn1.c rsapubkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
new file mode 100644
index 000..f24fef3
--- /dev/null
+++ b/crypto/acompress.c
@@ -0,0 +1,118 @@
+/*
+ * Asynchronous Compression operations
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Weigang Li 
+ *  Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp racomp;
+
+   strncpy(racomp.type, "acomp", sizeof(racomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRESS,
+   sizeof(struct crypto_report_comp), &racomp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}
+#else
+static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   return -ENOSYS;
+}
+#endif
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+   __attribute__ ((unused));
+
+static void crypto_acomp_show(struct seq_file *m, struct crypto_alg *alg)
+{
+   seq_puts(m, "type : acomp\n");
+}
+
+static void crypto_acomp_exit_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   alg->exit(acomp);
+}
+
+static int crypto_acomp_init_tfm(struct crypto_tfm *tfm)
+{
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(tfm);
+   struct acomp_alg *alg = crypto_acomp_alg(acomp);
+
+   if (alg->exit)
+   acomp->base.exit = crypto_acomp_exit_tfm;
+
+   if (alg->init)
+   return alg->init(acomp);
+
+   return 0;
+}
+
+static const struct crypto_type crypto_acomp_type = {
+   .extsize = crypto_alg_extsize,
+   .init_tfm = crypto_acomp_init_tfm,
+#ifdef CONFIG_PROC_FS
+   .show = crypto_acomp_show,
+#endif
+   .report = crypto_acomp_report,
+   .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+   .maskset = CRYPTO_ALG_TYPE_MASK,
+   .type = CRYPTO_ALG_TYPE_ACOMPRESS,
+   .tfmsize = offsetof(struct crypto_acomp, base),
+};
+
+struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
+   u32 mask)
+{
+   return crypto_alloc_tfm(alg_name, &crypto_acomp_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_acomp);
+
+int crypto_register_acomp(struct acomp_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   base->cra_type = &crypto_acomp_type;
+   base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+   base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS;
+
+   return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_registe

[PATCH v4 04/10] crypto: add quick decompression api

2016-05-31 Thread Giovanni Cabiddu
This patch introduces qdecomp, an asynchronous decompression api.
qdecomp is a front-end for acomp and scomp algorithms which do not
need additional vmalloc work space for decompression.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Makefile   |1 +
 crypto/acompress.c|   50 -
 crypto/qdecompress.c  |   93 +++
 crypto/scompress.c|   92 +++-
 include/crypto/internal/acompress.h   |   15 +++
 include/crypto/internal/qdecompress.h |   22 
 include/crypto/internal/scompress.h   |   14 +++
 include/crypto/qdecompress.h  |  204 +
 include/linux/crypto.h|4 +
 9 files changed, 491 insertions(+), 4 deletions(-)
 create mode 100644 crypto/qdecompress.c
 create mode 100644 include/crypto/internal/qdecompress.h
 create mode 100644 include/crypto/qdecompress.h

diff --git a/crypto/Makefile b/crypto/Makefile
index fc8fcfe..2621451 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -33,6 +33,7 @@ obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 
 obj-$(CONFIG_CRYPTO_ACOMP2) += acompress.o
 obj-$(CONFIG_CRYPTO_ACOMP2) += scompress.o
+obj-$(CONFIG_CRYPTO_ACOMP2) += qdecompress.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/acompress.c b/crypto/acompress.c
index 885d15d..dcc9ddc 100644
--- a/crypto/acompress.c
+++ b/crypto/acompress.c
@@ -23,9 +23,10 @@
 #include 
 #include 
 #include 
+#include 
 #include "internal.h"
 
-static const struct crypto_type crypto_acomp_type;
+const struct crypto_type crypto_acomp_type;
 
 #ifdef CONFIG_NET
 static int crypto_acomp_report(struct sk_buff *skb, struct crypto_alg *alg)
@@ -96,7 +97,39 @@ unsigned int crypto_acomp_extsize(struct crypto_alg *alg)
return extsize;
 }
 
-static const struct crypto_type crypto_acomp_type = {
+static void crypto_exit_acomp_ops_nospace(struct crypto_tfm *tfm)
+{
+   struct crypto_tfm **ctx = crypto_tfm_ctx(tfm);
+   struct crypto_acomp *acomp = __crypto_acomp_tfm(*ctx);
+
+   crypto_free_acomp(acomp);
+   *ctx = NULL;
+}
+
+int crypto_init_acomp_ops_nospace(struct crypto_tfm *tfm)
+{
+   struct crypto_alg *calg = tfm->__crt_alg;
+   struct crypto_qdecomp *crt = __crypto_qdecomp_tfm(tfm);
+   struct crypto_tfm **ctx = crypto_tfm_ctx(tfm);
+   struct crypto_acomp *acomp;
+
+   if (!crypto_mod_get(calg))
+   return -EAGAIN;
+
+   acomp = crypto_create_tfm(calg, &crypto_acomp_type);
+   if (IS_ERR(acomp)) {
+   crypto_mod_put(calg);
+   return PTR_ERR(acomp);
+   }
+
+   *ctx = &acomp->base;
+   tfm->exit = crypto_exit_acomp_ops_nospace;
+   crt->decompress = (int (*)(struct qdecomp_req *req))acomp->decompress;
+
+   return 0;
+}
+
+const struct crypto_type crypto_acomp_type = {
.extsize = crypto_acomp_extsize,
.init_tfm = crypto_acomp_init_tfm,
 #ifdef CONFIG_PROC_FS
@@ -108,6 +141,7 @@ static const struct crypto_type crypto_acomp_type = {
.type = CRYPTO_ALG_TYPE_ACOMPRESS,
.tfmsize = offsetof(struct crypto_acomp, base),
 };
+EXPORT_SYMBOL_GPL(crypto_acomp_type);
 
 struct crypto_acomp *crypto_alloc_acomp(const char *alg_name, u32 type,
u32 mask)
@@ -153,6 +187,18 @@ int crypto_register_acomp(struct acomp_alg *alg)
 }
 EXPORT_SYMBOL_GPL(crypto_register_acomp);
 
+int crypto_register_acomp_qdecomp(struct acomp_alg *alg)
+{
+   struct crypto_alg *base = &alg->base;
+
+   base->cra_type = &crypto_acomp_type;
+   base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+   base->cra_flags |= CRYPTO_ALG_TYPE_ACOMPRESS_QDCMP;
+
+   return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_acomp_qdecomp);
+
 int crypto_unregister_acomp(struct acomp_alg *alg)
 {
return crypto_unregister_alg(&alg->base);
diff --git a/crypto/qdecompress.c b/crypto/qdecompress.c
new file mode 100644
index 000..f229016
--- /dev/null
+++ b/crypto/qdecompress.c
@@ -0,0 +1,93 @@
+/*
+ * Quick Decompression operations
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Giovanni Cabiddu 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_qdecomp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_comp rqdecomp;
+
+   strncpy(rqdecomp.type, "qdecomp", sizeof(rqdecomp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_COMPRES

[PATCH v4 00/10] crypto: asynchronous compression api

2016-05-31 Thread Giovanni Cabiddu
The following patch set introduces acomp, a generic asynchronous
(de)compression api with support for SG lists.
We propose a new crypto type called crypto_acomp_type, a new struct acomp_alg
and struct crypto_acomp, together with a number of helper functions to register
acomp type algorithms and allocate tfm instances.
This interface will allow the following operations:

int (*compress)(struct acomp_req *req);
int (*decompress)(struct acomp_req *req);

Together with acomp we propose a new driver-side interface, scomp, which
handles compression implementations which use linear buffers. We converted all
compression algorithms available in LKCF to use this interface so that those
algorithms will be accessible through the acomp api.

Finally we propose qdecomp, a simple decompression api for algorithms
which do not need work space for performing decompression operations.
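
As a rough usage sketch (not part of the patch set itself): compressing one
linear buffer through the proposed api looks roughly like the testmgr code in
patch 10/10. The name of the local helper and the choice of "lzo" are
illustrative only; an asynchronous implementation may return -EINPROGRESS, in
which case a real caller passes a completion callback and waits, as
wait_async_op() does in testmgr:

#include <crypto/acompress.h>
#include <linux/scatterlist.h>

/* Hedged sketch: synchronous-style compression of one linear buffer. */
static int acomp_example(const void *in, unsigned int ilen,
			 void *out, unsigned int olen)
{
	struct scatterlist src, dst;
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	int ret;

	tfm = crypto_alloc_acomp("lzo", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	sg_init_one(&src, in, ilen);
	sg_init_one(&dst, out, olen);
	acomp_request_set_params(req, &src, &dst, ilen, olen);
	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, NULL, NULL);

	ret = crypto_acomp_compress(req);
	/* on success, req->produced holds the compressed length */

	acomp_request_free(req);
	crypto_free_acomp(tfm);
	return ret;
}
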

Changes in v4:
- added qdecompress api, a front-end for decompression algorithms which
  do not need additional vmalloc work space

Changes in v3:
- added driver-side scomp interface
- provided support for lzo, lz4, lz4hc, 842, deflate compression algorithms
  via the acomp api (through scomp)
- extended testmgr to support acomp
- removed extended acomp api for supporting deflate algorithm parameters
  (will be enhanced and re-proposed in future)
Note that (2) to (7) are a rework of Joonsoo Kim's scomp patches.

Changes in v2:
- added compression and decompression request sizes in acomp_alg
  in order to enable noctx support
- extended api with helpers to allocate compression and
  decompression requests

Changes from initial submit:
- added consumed and produced fields to acomp_req
- extended api to support configuration of deflate compressors

---
Giovanni Cabiddu (10):
  crypto: shrink hash down to two types
  crypto: add asynchronous compression api
  crypto: add driver-side scomp interface
  crypto: add quick decompression api
  crypto: acomp - add support for lzo via scomp
  crypto: acomp - add support for lz4 via scomp
  crypto: acomp - add support for lz4hc via scomp
  crypto: acomp - add support for 842 via scomp
  crypto: acomp - add support for deflate via scomp
  crypto: acomp - update testmgr with support for acomp

 crypto/842.c  |   82 -
 crypto/Kconfig|   15 ++
 crypto/Makefile   |4 +
 crypto/acompress.c|  209 
 crypto/crypto_user.c  |   21 ++
 crypto/deflate.c  |  111 ++-
 crypto/lz4.c  |   91 -
 crypto/lz4hc.c|   92 -
 crypto/lzo.c  |   97 --
 crypto/qdecompress.c  |   93 +
 crypto/scompress.c|  345 +
 crypto/testmgr.c  |  158 ++--
 include/crypto/acompress.h|  251 
 include/crypto/internal/acompress.h   |   97 +
 include/crypto/internal/qdecompress.h |   22 ++
 include/crypto/internal/scompress.h   |  148 ++
 include/crypto/qdecompress.h  |  204 +++
 include/linux/crypto.h|   17 ++-
 18 files changed, 1991 insertions(+), 66 deletions(-)
 create mode 100644 crypto/acompress.c
 create mode 100644 crypto/qdecompress.c
 create mode 100644 crypto/scompress.c
 create mode 100644 include/crypto/acompress.h
 create mode 100644 include/crypto/internal/acompress.h
 create mode 100644 include/crypto/internal/qdecompress.h
 create mode 100644 include/crypto/internal/scompress.h
 create mode 100644 include/crypto/qdecompress.h

-- 
1.7.4.1



[PATCH v4 05/10] crypto: acomp - add support for lzo via scomp

2016-05-31 Thread Giovanni Cabiddu
This patch implements an scomp backend for the lzo compression algorithm.
This way, lzo is exposed through the acomp api.

Signed-off-by: Giovanni Cabiddu 
---
 crypto/Kconfig |1 +
 crypto/lzo.c   |   97 +++-
 2 files changed, 83 insertions(+), 15 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 24fef55..08075c1 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1513,6 +1513,7 @@ config CRYPTO_DEFLATE
 config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
+   select CRYPTO_ACOMP2
select LZO_COMPRESS
select LZO_DECOMPRESS
help
diff --git a/crypto/lzo.c b/crypto/lzo.c
index c3f3dd9..732d59b 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -22,40 +22,55 @@
 #include 
 #include 
 #include 
+#include 
 
 struct lzo_ctx {
void *lzo_comp_mem;
 };
 
+static void *lzo_alloc_ctx(struct crypto_scomp *tfm)
+{
+   void *ctx;
+
+   ctx = kmalloc(LZO1X_MEM_COMPRESS, GFP_KERNEL | __GFP_NOWARN);
+   if (!ctx)
+   ctx = vmalloc(LZO1X_MEM_COMPRESS);
+   if (!ctx)
+   return ERR_PTR(-ENOMEM);
+
+   return ctx;
+}
+
 static int lzo_init(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
-   GFP_KERNEL | __GFP_NOWARN);
-   if (!ctx->lzo_comp_mem)
-   ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
-   if (!ctx->lzo_comp_mem)
+   ctx->lzo_comp_mem = lzo_alloc_ctx(NULL);
+   if (IS_ERR(ctx->lzo_comp_mem))
return -ENOMEM;
 
return 0;
 }
 
+static void lzo_free_ctx(struct crypto_scomp *tfm, void *ctx)
+{
+   kvfree(ctx);
+}
+
 static void lzo_exit(struct crypto_tfm *tfm)
 {
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
 
-   kvfree(ctx->lzo_comp_mem);
+   lzo_free_ctx(NULL, ctx->lzo_comp_mem);
 }
 
-static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
-   unsigned int slen, u8 *dst, unsigned int *dlen)
+static int __lzo_compress(const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen, void *ctx)
 {
-   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
int err;
 
-   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx->lzo_comp_mem);
+   err = lzo1x_1_compress(src, slen, dst, &tmp_len, ctx);
 
if (err != LZO_E_OK)
return -EINVAL;
@@ -64,8 +79,23 @@ static int lzo_compress(struct crypto_tfm *tfm, const u8 
*src,
return 0;
 }
 
-static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
- unsigned int slen, u8 *dst, unsigned int *dlen)
+static int lzo_compress(struct crypto_tfm *tfm, const u8 *src,
+   unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   return __lzo_compress(src, slen, dst, dlen, ctx->lzo_comp_mem);
+}
+
+static int lzo_scompress(struct crypto_scomp *tfm, const u8 *src,
+unsigned int slen, u8 *dst, unsigned int *dlen,
+void *ctx)
+{
+   return __lzo_compress(src, slen, dst, dlen, ctx);
+}
+
+static int __lzo_decompress(const u8 *src, unsigned int slen,
+   u8 *dst, unsigned int *dlen)
 {
int err;
size_t tmp_len = *dlen; /* size_t(ulong) <-> uint on 64 bit */
@@ -77,7 +107,19 @@ static int lzo_decompress(struct crypto_tfm *tfm, const u8 
*src,
 
*dlen = tmp_len;
return 0;
+}
 
+static int lzo_decompress(struct crypto_tfm *tfm, const u8 *src,
+ unsigned int slen, u8 *dst, unsigned int *dlen)
+{
+   return __lzo_decompress(src, slen, dst, dlen);
+}
+
+static int lzo_sdecompress(struct crypto_scomp *tfm, const u8 *src,
+  unsigned int slen, u8 *dst, unsigned int *dlen,
+  void *ctx)
+{
+   return __lzo_decompress(src, slen, dst, dlen);
 }
 
 static struct crypto_alg alg = {
@@ -88,18 +130,43 @@ static struct crypto_alg alg = {
.cra_init   = lzo_init,
.cra_exit   = lzo_exit,
.cra_u  = { .compress = {
-   .coa_compress   = lzo_compress,
-   .coa_decompress = lzo_decompress } }
+   .coa_compress   = lzo_compress,
+   .coa_decompress = lzo_decompress } }
+};
+
+static struct scomp_alg scomp = {
+   .alloc_ctx  = lzo_alloc_ctx,
+   .free_ctx   = lzo_free_ctx,
+   .compress   = lzo_scompress,
+   .decompress = lzo_sdecompress,
+   .base   = {
+   .cra_name   = "lzo",
+   .cra_driver_name = "lzo-scomp",
+   .cra_module  = THIS_MODULE,
+   }
 }

[PATCH v7 0/3] Key-agreement Protocol Primitives (KPP) API

2016-05-31 Thread Salvatore Benedetto
Hi Herb,

the following patchset introduces a new API for abstracting key-agreement
protocols such as DH and ECDH. It provides the primitives required for
implementing the protocol, thus the name KPP (Key-agreement Protocol Primitives).

Regards,
Salvatore

Changes from v6:
* Remove len parameter from crypto_kpp_set_params. Adjust rest of code
  accordingly
* Remove the while loop in ecdh_make_pub_key as the private key is fixed and
  iterating is pointless. EAGAIN is now returned to make the user aware
  that the private key needs to be regenerated/reset

Changes from v5:
* Fix ecdh loading in fips mode.

Changes from v4:
* If fips_enabled is set allow only P256 (or higher) as Stephan suggested
* Pass ndigits as argument to ecdh_make_pub_key and ecdh_shared_secret
  so that VLA can be used like in the rest of the module

Changes from v3:
* Move curve ID definition to public header ecdh.h as users need to
  have access to those ids when selecting the curve

Changes from v2:
* Add support for ECDH (curve P192 and P256). I reused the ecc module
  already present in net/bluetooth and extended it in order to select
  different curves at runtime. Code for P192 was taken from tinycrypt.

Changes from v1:
* Change check in dh_check_params_length based on Stephan review


Salvatore Benedetto (3):
  crypto: Key-agreement Protocol Primitives API (KPP)
  crypto: kpp - Add DH software implementation
  crypto: kpp - Add ECDH software support

 crypto/Kconfig  |   23 +
 crypto/Makefile |6 +
 crypto/crypto_user.c|   20 +
 crypto/dh.c |  223 +
 crypto/ecc.c| 1011 +++
 crypto/ecc.h|   70 +++
 crypto/ecc_curve_defs.h |   57 +++
 crypto/ecdh.c   |  170 +++
 crypto/kpp.c|  123 +
 crypto/testmgr.c|  275 +++
 crypto/testmgr.h|  286 +++
 include/crypto/dh.h |   23 +
 include/crypto/ecdh.h   |   24 +
 include/crypto/internal/kpp.h   |   64 +++
 include/crypto/kpp.h|  331 +
 include/linux/crypto.h  |1 +
 include/uapi/linux/cryptouser.h |5 +
 17 files changed, 2712 insertions(+)
 create mode 100644 crypto/dh.c
 create mode 100644 crypto/ecc.c
 create mode 100644 crypto/ecc.h
 create mode 100644 crypto/ecc_curve_defs.h
 create mode 100644 crypto/ecdh.c
 create mode 100644 crypto/kpp.c
 create mode 100644 include/crypto/dh.h
 create mode 100644 include/crypto/ecdh.h
 create mode 100644 include/crypto/internal/kpp.h
 create mode 100644 include/crypto/kpp.h

-- 
2.7.4



[PATCH v7 2/3] crypto: kpp - Add DH software implementation

2016-05-31 Thread Salvatore Benedetto
 * Implement MPI based Diffie-Hellman under kpp API
 * The provided test uses data generated by OpenSSL

Signed-off-by: Salvatore Benedetto 
---
 crypto/Kconfig  |   8 ++
 crypto/Makefile |   2 +
 crypto/dh.c | 223 
 crypto/testmgr.c| 157 
 crypto/testmgr.h| 208 
 include/crypto/dh.h |  23 ++
 6 files changed, 621 insertions(+)
 create mode 100644 crypto/dh.c
 create mode 100644 include/crypto/dh.h
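
As a plain arithmetic illustration of the exchange implemented below
(ya = g^xa mod p on each side, then ZZ = yb^xa mod p), here is a
self-contained toy program with deliberately tiny, insecure parameters;
it is unrelated to the kernel code and only shows that both parties
compute the same ZZ:

#include <stdio.h>

/* square-and-multiply modular exponentiation on toy-sized numbers */
static unsigned long modexp(unsigned long base, unsigned long exp,
			    unsigned long mod)
{
	unsigned long result = 1;

	base %= mod;
	while (exp) {
		if (exp & 1)
			result = (result * base) % mod;
		base = (base * base) % mod;
		exp >>= 1;
	}
	return result;
}

int main(void)
{
	const unsigned long p = 23, g = 5;	/* toy group, never use in practice */
	const unsigned long xa = 6, xb = 15;	/* private keys */
	unsigned long ya = modexp(g, xa, p);	/* ya = g^xa mod p */
	unsigned long yb = modexp(g, xb, p);	/* yb = g^xb mod p */
	unsigned long zz_a = modexp(yb, xa, p);	/* ZZ = yb^xa mod p */
	unsigned long zz_b = modexp(ya, xb, p);	/* ZZ = ya^xb mod p */

	printf("ya=%lu yb=%lu zz_a=%lu zz_b=%lu\n", ya, yb, zz_a, zz_b);
	return zz_a == zz_b ? 0 : 1;
}
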

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0bd6d7f..4190e0d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -111,6 +111,14 @@ config CRYPTO_RSA
help
  Generic implementation of the RSA public key algorithm.
 
+config CRYPTO_DH
+   tristate "Diffie-Hellman algorithm"
+   select CRYPTO_KPP
+   select MPILIB
+   help
+ Generic implementation of the Diffie-Hellman algorithm.
+
+
 config CRYPTO_MANAGER
tristate "Cryptographic algorithm manager"
select CRYPTO_MANAGER2
diff --git a/crypto/Makefile b/crypto/Makefile
index 5b60890..101f8fd 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -32,6 +32,8 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 obj-$(CONFIG_CRYPTO_KPP2) += kpp.o
 
+obj-$(CONFIG_CRYPTO_DH) += dh.o
+
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
 clean-files += rsapubkey-asn1.c rsapubkey-asn1.h
diff --git a/crypto/dh.c b/crypto/dh.c
new file mode 100644
index 000..8a13d3b
--- /dev/null
+++ b/crypto/dh.c
@@ -0,0 +1,223 @@
+/*  Diffie-Hellman Key Agreement Method [RFC2631]
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Salvatore Benedetto 
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+struct dh_ctx {
+   MPI p;
+   MPI g;
+   MPI xa;
+};
+
+static void dh_free_ctx(struct dh_ctx *ctx)
+{
+   mpi_free(ctx->p);
+   mpi_free(ctx->g);
+   mpi_free(ctx->xa);
+   ctx->p = NULL;
+   ctx->g = NULL;
+   ctx->xa = NULL;
+}
+
+/*
+ * Public key generation function [RFC2631 sec 2.1.1]
+ * ya = g^xa mod p;
+ */
+static int _generate_public_key(const struct dh_ctx *ctx, MPI ya)
+{
+   /* ya = g^xa mod p */
+   return mpi_powm(ya, ctx->g, ctx->xa, ctx->p);
+}
+
+/*
+ * ZZ generation function [RFC2631 sec 2.1.1]
+ * ZZ = yb^xa mod p;
+ */
+static int _compute_shared_secret(const struct dh_ctx *ctx, MPI yb,
+ MPI zz)
+{
+   /* ZZ = yb^xa mod p */
+   return mpi_powm(zz, yb, ctx->xa, ctx->p);
+}
+
+static inline struct dh_ctx *dh_get_ctx(struct crypto_kpp *tfm)
+{
+   return kpp_tfm_ctx(tfm);
+}
+
+static int dh_check_params_length(unsigned int p_len)
+{
+   return (p_len < 1536) ? -EINVAL : 0;
+}
+
+static int dh_set_params(struct crypto_kpp *tfm, void *buffer)
+{
+   struct dh_ctx *ctx = dh_get_ctx(tfm);
+   struct dh_params *params = (struct dh_params *)buffer;
+
+   if (unlikely(!buffer))
+   return -EINVAL;
+
+   if (unlikely(!params->p || !params->g))
+   return -EINVAL;
+
+   if (dh_check_params_length(params->p_size << 3))
+   return -EINVAL;
+
+   ctx->p = mpi_read_raw_data(params->p, params->p_size);
+   if (!ctx->p)
+   return -EINVAL;
+
+   ctx->g = mpi_read_raw_data(params->g, params->g_size);
+   if (!ctx->g) {
+   mpi_free(ctx->p);
+   return -EINVAL;
+   }
+
+   return 0;
+}
+
+static int dh_set_secret(struct crypto_kpp *tfm, void *buffer,
+unsigned int len)
+{
+   struct dh_ctx *ctx = dh_get_ctx(tfm);
+
+   if (unlikely(!buffer || !len))
+   return -EINVAL;
+
+   ctx->xa = mpi_read_raw_data(buffer, len);
+
+   if (!ctx->xa)
+   return -EINVAL;
+
+   return 0;
+}
+
+static int dh_generate_public_key(struct kpp_request *req)
+{
+   struct crypto_kpp *tfm = crypto_kpp_reqtfm(req);
+   const struct dh_ctx *ctx = dh_get_ctx(tfm);
+   MPI ya = mpi_alloc(0);
+   int ret = 0;
+   int sign;
+
+   if (!ya)
+   return -ENOMEM;
+
+   if (unlikely(!ctx->p || !ctx->g || !ctx->xa)) {
+   ret = -EINVAL;
+   goto err_free_ya;
+   }
+   ret = _generate_public_key(ctx, ya);
+   if (ret)
+   goto err_free_ya;
+
+   ret = mpi_write_to_sgl(ya, req->dst, &req->dst_len, &sign);
+   if (ret)
+   goto err_free_ya;
+
+   if (sign < 0)
+   ret = -EBADMSG;
+
+err_free_ya:
+  

[PATCH v7 3/3] crypto: kpp - Add ECDH software support

2016-05-31 Thread Salvatore Benedetto
 * Implement ECDH under kpp API
 * Provide ECC software support for curve P-192 and
   P-256.
 * Add kpp test for ECDH with data generated by OpenSSL
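
For callers, curve selection is done through the IDs exported in
include/crypto/ecdh.h (ECC_CURVE_NIST_P192 / ECC_CURVE_NIST_P256, see
ecc_get_curve() below). A rough sketch of filling the ECDH parameters is
shown here; the field names of struct ecdh are assumptions for
illustration, since the header is not reproduced in this excerpt:

#include <crypto/ecdh.h>

/* Hypothetical sketch: the struct ecdh field names are assumed for
 * illustration; consult include/crypto/ecdh.h from the patch for the
 * real layout. Note that P-192 is rejected in FIPS mode, see
 * ecc_get_curve() below.
 */
static void example_fill_ecdh_params(struct ecdh *p,
                                     u8 *privkey, unsigned int privkey_len)
{
        p->curve_id = ECC_CURVE_NIST_P256;      /* or ECC_CURVE_NIST_P192 */
        p->key = privkey;                       /* the caller's secret d */
        p->key_size = privkey_len;
}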

Signed-off-by: Salvatore Benedetto 
---
 crypto/Kconfig  |5 +
 crypto/Makefile |3 +
 crypto/ecc.c| 1011 +++
 crypto/ecc.h|   70 
 crypto/ecc_curve_defs.h |   57 +++
 crypto/ecdh.c   |  170 
 crypto/testmgr.c|  136 ++-
 crypto/testmgr.h|   78 
 include/crypto/ecdh.h   |   24 ++
 9 files changed, 1545 insertions(+), 9 deletions(-)
 create mode 100644 crypto/ecc.c
 create mode 100644 crypto/ecc.h
 create mode 100644 crypto/ecc_curve_defs.h
 create mode 100644 crypto/ecdh.c
 create mode 100644 include/crypto/ecdh.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 4190e0d..5533c69 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -118,6 +118,11 @@ config CRYPTO_DH
help
  Generic implementation of the Diffie-Hellman algorithm.
 
+config CRYPTO_ECDH
+   tristate "ECDH algorithm"
+   select CRYPTO_KPP
+   help
+ Generic implementation of the ECDH algorithm
 
 config CRYPTO_MANAGER
tristate "Cryptographic algorithm manager"
diff --git a/crypto/Makefile b/crypto/Makefile
index 101f8fd..ba03079 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -33,6 +33,9 @@ obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
 obj-$(CONFIG_CRYPTO_KPP2) += kpp.o
 
 obj-$(CONFIG_CRYPTO_DH) += dh.o
+ecdh_generic-y := ecc.o
+ecdh_generic-y += ecdh.o
+obj-$(CONFIG_CRYPTO_ECDH) += ecdh_generic.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/ecc.c b/crypto/ecc.c
new file mode 100644
index 000..ca2febf
--- /dev/null
+++ b/crypto/ecc.c
@@ -0,0 +1,1011 @@
+/*
+ * Copyright (c) 2013, Kenneth MacKay
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *  * Redistributions of source code must retain the above copyright
+ *   notice, this list of conditions and the following disclaimer.
+ *  * Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ecc.h"
+#include "ecc_curve_defs.h"
+
+typedef struct {
+   u64 m_low;
+   u64 m_high;
+} uint128_t;
+
+static inline const struct ecc_curve *ecc_get_curve(unsigned int curve_id)
+{
+   switch (curve_id) {
+   /* In FIPS mode only allow P256 and higher */
+   case ECC_CURVE_NIST_P192:
+   return fips_enabled ? NULL : &nist_p192;
+   case ECC_CURVE_NIST_P256:
+   return &nist_p256;
+   default:
+   return NULL;
+   }
+}
+
+static u64 *ecc_alloc_digits_space(unsigned int ndigits)
+{
+   size_t len = ndigits * sizeof(u64);
+
+   if (!len)
+   return NULL;
+
+   return kmalloc(len, GFP_KERNEL);
+}
+
+static void ecc_free_digits_space(u64 *space)
+{
+   kzfree(space);
+}
+
+static struct ecc_point *ecc_alloc_point(unsigned int ndigits)
+{
+   struct ecc_point *p = kmalloc(sizeof(*p), GFP_KERNEL);
+
+   if (!p)
+   return NULL;
+
+   p->x = ecc_alloc_digits_space(ndigits);
+   if (!p->x)
+   goto err_alloc_x;
+
+   p->y = ecc_alloc_digits_space(ndigits);
+   if (!p->y)
+   goto err_alloc_y;
+
+   p->ndigits = ndigits;
+
+   return p;
+
+err_alloc_y:
+   ecc_free_digits_space(p->x);
+err_alloc_x:
+   kfree(p);
+   return NULL;
+}
+
+static void ecc_free_point(struct ecc_point *p)
+{
+   if (!p)
+   return;
+
+   kzfree(p->x);
+   kzfree(p->y);
+   kzfree(p);
+}
+
+static void vli_clear(u64 *vli, unsigned int ndigits)
+{
+   int i;
+
+   for (i = 0; i < ndigits; i++)
+   vli[i] = 0;
+}

[PATCH v7 1/3] crypto: Key-agreement Protocol Primitives API (KPP)

2016-05-31 Thread Salvatore Benedetto
Add key-agreement protocol primitives (kpp) API which allows the
implementation of primitives required by protocols such as DH and ECDH.
The API is composed mainly of the following functions
 * set_params() - It allows the user to set the parameters known to
   both parties involved in the key-agreement session
 * set_secret() - It allows the user to set his secret, also
   referred to as his private key
 * generate_public_key() - It generates the public key to be sent to
   the other counterpart involved in the key-agreement session. The
   function has to be called after set_params() and set_secret()
 * generate_secret() - It generates the shared secret for the session

Other functions such as init() and exit() are provided to allow
cryptographic hardware to be initialized properly before use
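
A rough sketch of how an in-kernel user would drive this API is given
below. Only the four operations named above are taken from this patch;
the wrapper names, the "dh" algorithm name and the request plumbing are
assumptions made for illustration (see include/crypto/kpp.h and
include/crypto/dh.h for the real interface):

#include <linux/err.h>
#include <crypto/kpp.h>
#include <crypto/dh.h>

/* Hypothetical caller sketch; helper names are assumed, the flow follows
 * the four steps described above:
 * set_params -> set_secret -> generate_public_key -> shared secret.
 */
static int example_dh_exchange(struct dh_params *params,
                               void *xa, unsigned int xa_len)
{
        struct crypto_kpp *tfm;
        struct kpp_request *req;
        int err;

        tfm = crypto_alloc_kpp("dh", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_kpp_set_params(tfm, params);       /* p and g */
        if (!err)
                err = crypto_kpp_set_secret(tfm, xa, xa_len); /* private key */
        if (err)
                goto out_free_tfm;

        req = kpp_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        /* Point req->dst at a buffer, then publish ya = g^xa mod p ... */
        err = crypto_kpp_generate_public_key(req);

        /* ... later, with the peer's yb in req->src, derive the secret. */
        if (!err)
                err = crypto_kpp_compute_shared_secret(req);

        kpp_request_free(req);
out_free_tfm:
        crypto_free_kpp(tfm);
        return err;
}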

Signed-off-by: Salvatore Benedetto 
---
 crypto/Kconfig  |  10 ++
 crypto/Makefile |   1 +
 crypto/crypto_user.c|  20 +++
 crypto/kpp.c| 123 +++
 include/crypto/internal/kpp.h   |  64 
 include/crypto/kpp.h| 331 
 include/linux/crypto.h  |   1 +
 include/uapi/linux/cryptouser.h |   5 +
 8 files changed, 555 insertions(+)
 create mode 100644 crypto/kpp.c
 create mode 100644 include/crypto/internal/kpp.h
 create mode 100644 include/crypto/kpp.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..0bd6d7f 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -93,6 +93,15 @@ config CRYPTO_AKCIPHER
select CRYPTO_AKCIPHER2
select CRYPTO_ALGAPI
 
+config CRYPTO_KPP2
+   tristate
+   select CRYPTO_ALGAPI2
+
+config CRYPTO_KPP
+   tristate
+   select CRYPTO_ALGAPI
+   select CRYPTO_KPP2
+
 config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
@@ -115,6 +124,7 @@ config CRYPTO_MANAGER2
select CRYPTO_HASH2
select CRYPTO_BLKCIPHER2
select CRYPTO_AKCIPHER2
+   select CRYPTO_KPP2
 
 config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..5b60890 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -30,6 +30,7 @@ crypto_hash-y += shash.o
 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
 
 obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
+obj-$(CONFIG_CRYPTO_KPP2) += kpp.o
 
 $(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
 $(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
diff --git a/crypto/crypto_user.c b/crypto/crypto_user.c
index f71960d..e7a0a9d 100644
--- a/crypto/crypto_user.c
+++ b/crypto/crypto_user.c
@@ -28,6 +28,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "internal.h"
 
@@ -126,6 +127,21 @@ nla_put_failure:
return -EMSGSIZE;
 }
 
+static int crypto_report_kpp(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_kpp rkpp;
+
+   strncpy(rkpp.type, "kpp", sizeof(rkpp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_KPP,
+   sizeof(struct crypto_report_kpp), &rkpp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}
+
 static int crypto_report_one(struct crypto_alg *alg,
 struct crypto_user_alg *ualg, struct sk_buff *skb)
 {
@@ -176,6 +192,10 @@ static int crypto_report_one(struct crypto_alg *alg,
goto nla_put_failure;
 
break;
+   case CRYPTO_ALG_TYPE_KPP:
+   if (crypto_report_kpp(skb, alg))
+   goto nla_put_failure;
+   break;
}
 
 out:
diff --git a/crypto/kpp.c b/crypto/kpp.c
new file mode 100644
index 000..d36ce05
--- /dev/null
+++ b/crypto/kpp.c
@@ -0,0 +1,123 @@
+/*
+ * Key-agreement Protocol Primitives (KPP)
+ *
+ * Copyright (c) 2016, Intel Corporation
+ * Authors: Salvatore Benedetto 
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "internal.h"
+
+#ifdef CONFIG_NET
+static int crypto_kpp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   struct crypto_report_kpp rkpp;
+
+   strncpy(rkpp.type, "kpp", sizeof(rkpp.type));
+
+   if (nla_put(skb, CRYPTOCFGA_REPORT_KPP,
+   sizeof(struct crypto_report_kpp), &rkpp))
+   goto nla_put_failure;
+   return 0;
+
+nla_put_failure:
+   return -EMSGSIZE;
+}
+#else
+static int crypto_kpp_report(struct sk_buff *skb, struct crypto_alg *alg)
+{
+   return -ENOSYS;
+}
+#endif
+
+static void crypto_kpp_show(struct seq_file *m, struct crypto_alg *alg)
+ 

Re: [PATCH] KEYS: Add optional key derivation parameters for DH

2016-05-31 Thread Mat Martineau

On Thu, 26 May 2016, David Howells wrote:


Mat Martineau  wrote:


+struct keyctl_kdf_params {
+   char *name;
+   __u8 reserved[32]; /* Reserved for future use, must be 0 */
+};
+
 #endif /*  _LINUX_KEYCTL_H */
diff --git a/security/keys/compat.c b/security/keys/compat.c
index c8783b3..36c80bf 100644
--- a/security/keys/compat.c
+++ b/security/keys/compat.c
@@ -134,7 +134,7 @@ COMPAT_SYSCALL_DEFINE5(keyctl, u32, option,

case KEYCTL_DH_COMPUTE:
return keyctl_dh_compute(compat_ptr(arg2), compat_ptr(arg3),
-arg4);
+arg4, compat_ptr(arg5));


Given the new structure above, this won't work.  The problem is that on a
64-bit system the kernel expects 'name' to be a 64-bit pointer, but if we're
in the compat handler, we have a 32-bit userspace's idea of the struct - in
which 'name' is a 31-bit (s390x) or a 32-bit pointer without any padding.

So in compat code you can't just pass the user pointer direct through to
keyctl_dh_compute().  You need to supply a compat_keyctl_kdf_params struct and
translator code.


Since none of the members of the structure were accessed, I thought the 
simple conversion was adequate for the null check and was deferring the 
real compat handling until the rest of the structure was known. I should 
have explained that in a comment.



What I would recommend you do at the moment is to mark the syscall argument as
"reserved, must be 0" and deal with the implementation in the next merge
window.


Yeah, there's not much value in defining the keyctl_kdf_params struct and 
then not using it. Should have kept it simple.


Thanks to you and Stephan for updating the patch and moving things along.


Regards,

--
Mat Martineau
Intel OTC
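
For reference, the compat translation discussed above boils down to a
struct whose pointer member is a compat_uptr_t plus a copy-and-widen step
before calling into the native handler. A sketch follows; all names here
are hypothetical and not taken from any posted patch:

/* Hypothetical sketch of the compat plumbing David describes.  The
 * 32-bit userspace struct carries a 32-bit 'name' pointer that must be
 * widened before the native code can use it.
 */
struct compat_keyctl_kdf_params {
        compat_uptr_t name;             /* 32-bit user pointer to KDF name */
        __u8 reserved[32];              /* must be 0, as in the native struct */
};

static long compat_keyctl_dh_compute(struct keyctl_dh_params __user *params,
                                     char __user *buffer, size_t buflen,
                                     struct compat_keyctl_kdf_params __user *ukdf)
{
        struct compat_keyctl_kdf_params ckdf;
        struct keyctl_kdf_params kdf;

        if (!ukdf)
                return __keyctl_dh_compute(params, buffer, buflen, NULL);

        if (copy_from_user(&ckdf, ukdf, sizeof(ckdf)))
                return -EFAULT;

        kdf.name = (char *)compat_ptr(ckdf.name);       /* widen the pointer */
        memcpy(kdf.reserved, ckdf.reserved, sizeof(kdf.reserved));

        /* __keyctl_dh_compute() is a hypothetical helper that takes an
         * already-copied kdf struct instead of a __user pointer to it.
         */
        return __keyctl_dh_compute(params, buffer, buflen, &kdf);
}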


Re: [PATCH v2 0/4] hw rng support for NSP SoC

2016-05-31 Thread Florian Fainelli
On 05/31/2016 03:19 AM, Herbert Xu wrote:
> On Fri, May 27, 2016 at 06:10:37AM -0400, Yendapally Reddy Dhananjaya Reddy 
> wrote:
>> This patchset contains the hw random number generator support for the
>> Broadcom's NSP SoC. The block is similar to the block available in
>> bcm2835 with different default interrupt mask value. Due to lack of
>> documentation, I cannot confirm the interrupt mask register details
>> in bcm2835. In an effort to not break the existing functionality of
>> bcm2835, I used a different compatible string to mask the interrupt
>> for NSP SoC. Please let me know. Also supported providing requested
>> number of random numbers instead of static size of four bytes.
>>
>> The first patch contains the documentation changes and the second patch
>> contains the support for rng available in NSP SoC. The third patch
>> contains the device tree changes for NSP SoC. The fourth patch contains
>> the support for reading requested number of random numbers.
>>
>> This patch set has been tested on NSP bcm958625HR board.
>> This patch set is based on v4.6.0-rc1 and is available from github
>> repo: https://github.com/Broadcom/cygnus-linux.git
>> branch: nsp-rng-v2
>>
>> Changes since v1
> 
> All applied.

FYI, ARM Device Tree patches usually go via ARM SoC pull requests, so it
is best if this is planned in advance. Can you make sure you document
that there could be a merge conflict in your pull request to Linus?

Thanks
-- 
Florian


Re: [PATCH v6 6/6] crypto: AF_ALG - add support for key_id

2016-05-31 Thread Tadeusz Struk
Hi Mat,
On 05/25/2016 05:45 PM, Mat Martineau wrote:
> 
> On Sat, 14 May 2016, Tadeusz Struk wrote:
> 
>> diff --git a/crypto/algif_akcipher.c b/crypto/algif_akcipher.c
>> index e00793d..6733df1 100644
>> --- a/crypto/algif_akcipher.c
>> +++ b/crypto/algif_akcipher.c
>> +static int asym_key_verify(const struct key *key, struct akcipher_request 
>> *req)
>> +{
>> +struct public_key_signature sig;
>> +char *src = NULL, *in;
>> +int ret;
>> +
>> +if (!sg_is_last(req->src)) {
>> +src = kmalloc(req->src_len, GFP_KERNEL);
>> +if (!src)
>> +return -ENOMEM;
>> +scatterwalk_map_and_copy(src, req->src, 0, req->src_len, 0);
>> +in = src;
>> +} else {
>> +in = sg_virt(req->src);
>> +}
>> +sig.pkey_algo = "rsa";
>> +sig.encoding = "pkcs1";
>> +/* Need to find a way to pass the hash param */
> 
> Are you referring to sig.digest here? It looks like you will hit a BUG_ON() 
> in public_key_verify_signature() if sig.digest is 0. However, sig.digest is 
> unlikely to be 0 because the struct is not cleared - should fix this, since 
> public_key_verify_signature() will try to follow that random pointer.
> 

Right, I need to have a local buffer for the digest here.
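
A minimal sketch of that change inside asym_key_verify() from the quoted
patch could look like this fragment; the buffer handling is an assumption
for illustration, the field names are the ones already used above:

/* Sketch only: zero the signature struct and give sig.digest a local
 * buffer so public_key_verify_signature() never follows stack garbage.
 * Filling the digest buffer is outside the scope of this fragment.
 */
struct public_key_signature sig;
u8 digest[20];                  /* SHA-1 digest size used above */

memset(&sig, 0, sizeof(sig));
memset(digest, 0, sizeof(digest));

sig.pkey_algo = "rsa";
sig.encoding = "pkcs1";
sig.hash_algo = "sha1";
sig.digest = digest;
sig.digest_size = sizeof(digest);
sig.s = src;
sig.s_size = req->src_len;

ret = verify_signature(key, NULL, &sig);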

>> +sig.hash_algo = "sha1";
>> +sig.digest_size = 20;
>> +sig.s_size = req->src_len;
>> +sig.s = src;
>> +ret = verify_signature(key, NULL, &sig);
> 
> Is the idea to write the signature to the socket, and then read out the 
> expected digest (the digest comparison being done elsewhere)? Is that 
> something that will be supported by a future hardware asymmetric key subtype?

After the verify operation the output will be copied to the user,
and the user needs to verify it.

> 
> verify_signature() ends up calling public_key_verify_signature(), which 
> currently expects to get both the digest and signature as input and returns 
> an error if verification fails. The output of crypto_akcipher_verify() is 
> discarded before public_key_verify_signature() returns so nothing ends up in 
> req->dst to read from the socket.
> 
> ALG_OP_VERIFY should behave the same whether using ALG_SET_PUBKEY or 
> ALG_SET_PUBKEY_ID, and they aren't right now.
> 
> If sig.digest is 0, verify_signature() could return the expected digest in 
> the sig structure and skip the digest comparison it currently does. Then that 
> data could be packaged up in req as if crypto_akcipher_verify() had been 
> called. I don't know if this change confuses the semantics of 
> verify_signature() too much, maybe a new function is required with all the 
> requisite plumbing to the asymmetric key subtype.
> 

We need to copy output to the user to verify because we don't have it.
That will be consistent for both ALG_SET_PUBKEY and ALG_SET_PUBKEY_ID.
Thanks for your comments and sorry for the delayed response. I will send v7
soon.
-- 
TS


[PATCH v4 3/5] crypto: Linux Random Number Generator

2016-05-31 Thread Stephan Mueller
The LRNG with all its properties is documented in [1]. This
documentation covers the functional discussion as well as testing of all
aspects of entropy processing. In addition, the documentation explains
the conducted regression tests to verify that the LRNG is API and ABI
compatible with the legacy /dev/random implementation.

[1] http://www.chronox.de/lrng.html

Signed-off-by: Stephan Mueller 
---
 crypto/lrng.c | 1981 +
 1 file changed, 1981 insertions(+)
 create mode 100644 crypto/lrng.c

diff --git a/crypto/lrng.c b/crypto/lrng.c
new file mode 100644
index 000..b2d83fc
--- /dev/null
+++ b/crypto/lrng.c
@@ -0,0 +1,1981 @@
+/*
+ * Linux Random Number Generator (LRNG)
+ *
+ * Documentation and test code: http://www.chronox.de/lrng.html
+ *
+ * Copyright (C) 2016, Stephan Mueller 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, and the entire permission notice in its entirety,
+ *including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *products derived from this software without specific prior
+ *written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL2
+ * are required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+/*
+ * Define a DRBG plus a hash / MAC used to extract data from the entropy pool.
+ * For LRNG_HASH_NAME you can use a hash or a MAC (HMAC or CMAC) of your choice
+ * (Note, you should use the suggested selections below -- using SHA-1 or MD5
+ * is not wise). The idea is that the used cipher primitive can be selected to
+ * be the same as used for the DRBG. I.e. the LRNG only uses one cipher
+ * primitive using the same cipher implementation with the options offered in
+ * the following. This means, if the CTR DRBG is selected and AES-NI is 
present,
+ * both the CTR DRBG and the selected cmac(aes) use AES-NI.
+ *
+ * This definition is allowed to be changed.
+ */
+#ifdef CONFIG_CRYPTO_DRBG_HMAC
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 64
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_hmac_sha512"   /* HMAC DRBG SHA-512 */
+#  define LRNG_HASH_NAME "sha512"
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 32
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_hmac_sha256"   /* HMAC DRBG SHA-256 */
+#  define LRNG_HASH_NAME "sha256"
+# endif
+#elif defined CONFIG_CRYPTO_DRBG_HASH
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 64
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_sha512"/* Hash DRBG SHA-512 */
+#  define LRNG_HASH_NAME "sha512"
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 32
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_sha256"/* Hash DRBG SHA-256 */
+#  define LRNG_HASH_NAME "sha256"
+# endif
+#elif defined CONFIG_CRYPTO_DRBG_CTR
+# define LRNG_HASH_NAME "cmac(aes)"
+# ifndef CONFIG_CRYPTO_CMAC
+#  error "CMAC support not compiled"
+# endif
+# if 0
+#  define LRNG_DRBG_BLOCKLEN_BYTES 16
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 16
+#  define LRNG_DRBG_CORE "drbg_nopr_ctr_aes128"/* CTR DRBG 
AES-128 */
+# else
+#  define LRNG_DRBG_BLOCKLEN_BYTES 16
+#  define LRNG_DRBG_SECURITY_STRENGTH_BYTES 32
+#  define LRNG_DRBG_CORE "drbg_nopr_ctr_aes256"

[PATCH v4 5/5] random: add interrupt callback to VMBus IRQ handler

2016-05-31 Thread Stephan Mueller
The Hyper-V Linux Integration Services use the VMBus implementation for
communication with the Hypervisor. VMBus registers its own interrupt
handler that completely bypasses the common Linux interrupt handling.
This implies that the interrupt entropy collector is not triggered.

This patch adds the interrupt entropy collection callback into the VMBus
interrupt handler function.

Signed-off-by: Stephan Mueller 
---
 drivers/char/random.c  | 1 +
 drivers/hv/vmbus_drv.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index ef89c0e..ac74716 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -948,6 +948,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
/* award one bit for the contents of the fast pool */
credit_entropy_bits(r, credit + 1);
 }
+EXPORT_SYMBOL_GPL(add_interrupt_randomness);
 
 #ifdef CONFIG_BLOCK
 void add_disk_randomness(struct gendisk *disk)
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 952f20f..e82f7e1 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -42,6 +42,7 @@
 #include 
 #include 
 #include 
+#include 
 #include "hyperv_vmbus.h"
 
 static struct acpi_device  *hv_acpi_dev;
@@ -806,6 +807,8 @@ static void vmbus_isr(void)
else
tasklet_schedule(hv_context.msg_dpc[cpu]);
}
+
+   add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
 }
 
 
-- 
2.7.2




[PATCH v4 2/5] random: conditionally compile code depending on LRNG

2016-05-31 Thread Stephan Mueller
When selecting the LRNG for compilation, disable the legacy /dev/random
implementation.

The LRNG is a drop-in replacement for the legacy /dev/random which
implements the same in-kernel and user space API. Only the hooks of
/dev/random into other parts of the kernel need to be disabled.

Signed-off-by: Stephan Mueller 
---
 drivers/char/random.c  | 8 
 include/linux/genhd.h  | 5 +
 include/linux/random.h | 7 ++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 0158d3b..ef89c0e 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -268,6 +268,8 @@
 #include 
 #include 
 
+#ifndef CONFIG_CRYPTO_LRNG
+
 #define CREATE_TRACE_POINTS
 #include 
 
@@ -1621,6 +1623,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
}
return urandom_read(NULL, buf, count, NULL);
 }
+#endif /* CONFIG_CRYPTO_LRNG */
 
 /
  *
@@ -1628,6 +1631,7 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, 
count,
  *
  /
 
+#ifndef CONFIG_CRYPTO_LRNG
 #ifdef CONFIG_SYSCTL
 
 #include 
@@ -1765,6 +1769,8 @@ struct ctl_table random_table[] = {
 };
 #endif /* CONFIG_SYSCTL */
 
+#endif /* CONFIG_CRYPTO_LRNG */
+
 static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] cacheline_aligned;
 
 int random_int_secret_init(void)
@@ -1840,6 +1846,7 @@ randomize_range(unsigned long start, unsigned long end, 
unsigned long len)
return PAGE_ALIGN(get_random_int() % range + start);
 }
 
+#ifndef CONFIG_CRYPTO_LRNG
 /* Interface for in-kernel drivers of true hardware RNGs.
  * Those devices may produce endless random bits and will be throttled
  * when our pool is full.
@@ -1859,3 +1866,4 @@ void add_hwgenerator_randomness(const char *buffer, 
size_t count,
credit_entropy_bits(poolp, entropy);
 }
 EXPORT_SYMBOL_GPL(add_hwgenerator_randomness);
+#endif /* CONFIG_CRYPTO_LRNG */
diff --git a/include/linux/genhd.h b/include/linux/genhd.h
index 359a8e4..24cfb99 100644
--- a/include/linux/genhd.h
+++ b/include/linux/genhd.h
@@ -433,8 +433,13 @@ extern void disk_flush_events(struct gendisk *disk, 
unsigned int mask);
 extern unsigned int disk_clear_events(struct gendisk *disk, unsigned int mask);
 
 /* drivers/char/random.c */
+#ifdef CONFIG_CRYPTO_LRNG
+#define add_disk_randomness(disk) do {} while (0)
+#define rand_initialize_disk(disk) do {} while (0)
+#else
 extern void add_disk_randomness(struct gendisk *disk);
 extern void rand_initialize_disk(struct gendisk *disk);
+#endif
 
 static inline sector_t get_start_sect(struct block_device *bdev)
 {
diff --git a/include/linux/random.h b/include/linux/random.h
index e47e533..8773dfc 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -17,10 +17,15 @@ struct random_ready_callback {
struct module *owner;
 };
 
-extern void add_device_randomness(const void *, unsigned int);
 extern void add_input_randomness(unsigned int type, unsigned int code,
 unsigned int value);
 extern void add_interrupt_randomness(int irq, int irq_flags);
+#ifdef CONFIG_CRYPTO_LRNG
+#define add_device_randomness(buf, nbytes) do {} while (0)
+#else  /* CONFIG_CRYPTO_LRNG */
+extern void add_device_randomness(const void *, unsigned int);
+#define lrng_irq_process()
+#endif /* CONFIG_CRYPTO_LRNG */
 
 extern void get_random_bytes(void *buf, int nbytes);
 extern int add_random_ready_callback(struct random_ready_callback *rdy);
-- 
2.7.2




[PATCH v4 4/5] crypto: LRNG - enable compile

2016-05-31 Thread Stephan Mueller
Add LRNG compilation support.

Signed-off-by: Stephan Mueller 
---
 crypto/Kconfig  | 10 ++
 crypto/Makefile |  1 +
 2 files changed, 11 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1d33beb..9aaf96c 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1588,6 +1588,16 @@ config CRYPTO_JITTERENTROPY
  random numbers. This Jitterentropy RNG registers with
  the kernel crypto API and can be used by any caller.
 
+config CRYPTO_LRNG
+   bool "Linux Random Number Generator"
+   select CRYPTO_DRBG_MENU
+   help
+ The Linux Random Number Generator (LRNG) is the replacement
+ of the legacy /dev/random provided with drivers/char/random.c.
+ It generates entropy from different noise sources and
+ delivers significant entropy during boot. The LRNG only
+ works with the presence of a high-resolution timer.
+
 config CRYPTO_USER_API
tristate
 
diff --git a/crypto/Makefile b/crypto/Makefile
index 4f4ef7e..7f91c8e 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -114,6 +114,7 @@ obj-$(CONFIG_CRYPTO_DRBG) += drbg.o
 obj-$(CONFIG_CRYPTO_JITTERENTROPY) += jitterentropy_rng.o
 CFLAGS_jitterentropy.o = -O0
 jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
+obj-$(CONFIG_CRYPTO_LRNG) += lrng.o
 obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
 obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
 obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o
-- 
2.7.2




[PATCH v4 1/5] crypto: DRBG - externalize DRBG functions for LRNG

2016-05-31 Thread Stephan Mueller
This patch allows several DRBG functions to be called by the LRNG kernel
code paths outside the drbg.c file.

Signed-off-by: Stephan Mueller 
---
 crypto/drbg.c | 11 +--
 include/crypto/drbg.h |  7 +++
 2 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/crypto/drbg.c b/crypto/drbg.c
index 0a3538f..c339a2e 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -113,7 +113,7 @@
  * the SHA256 / AES 256 over other ciphers. Thus, the favored
  * DRBGs are the latest entries in this array.
  */
-static const struct drbg_core drbg_cores[] = {
+struct drbg_core drbg_cores[] = {
 #ifdef CONFIG_CRYPTO_DRBG_CTR
{
.flags = DRBG_CTR | DRBG_STRENGTH128,
@@ -205,7 +205,7 @@ static int drbg_uninstantiate(struct drbg_state *drbg);
  * Return: normalized strength in *bytes* value or 32 as default
  *to counter programming errors
  */
-static inline unsigned short drbg_sec_strength(drbg_flag_t flags)
+unsigned short drbg_sec_strength(drbg_flag_t flags)
 {
switch (flags & DRBG_STRENGTH_MASK) {
case DRBG_STRENGTH128:
@@ -1140,7 +1140,7 @@ static int drbg_seed(struct drbg_state *drbg, struct 
drbg_string *pers,
 }
 
 /* Free all substructures in a DRBG state without the DRBG state structure */
-static inline void drbg_dealloc_state(struct drbg_state *drbg)
+void drbg_dealloc_state(struct drbg_state *drbg)
 {
if (!drbg)
return;
@@ -1159,7 +1159,7 @@ static inline void drbg_dealloc_state(struct drbg_state 
*drbg)
  * Allocate all sub-structures for a DRBG state.
  * The DRBG state structure must already be allocated.
  */
-static inline int drbg_alloc_state(struct drbg_state *drbg)
+int drbg_alloc_state(struct drbg_state *drbg)
 {
int ret = -ENOMEM;
unsigned int sb_size = 0;
@@ -1682,8 +1682,7 @@ static int drbg_kcapi_sym(struct drbg_state *drbg, const 
unsigned char *key,
  *
  * return: flags
  */
-static inline void drbg_convert_tfm_core(const char *cra_driver_name,
-int *coreref, bool *pr)
+void drbg_convert_tfm_core(const char *cra_driver_name, int *coreref, bool *pr)
 {
int i = 0;
size_t start = 0;
diff --git a/include/crypto/drbg.h b/include/crypto/drbg.h
index d961b2b..d24ec22 100644
--- a/include/crypto/drbg.h
+++ b/include/crypto/drbg.h
@@ -268,4 +268,11 @@ enum drbg_prefixes {
DRBG_PREFIX3
 };
 
+extern int drbg_alloc_state(struct drbg_state *drbg);
+extern void drbg_dealloc_state(struct drbg_state *drbg);
+extern void drbg_convert_tfm_core(const char *cra_driver_name, int *coreref,
+ bool *pr);
+extern struct drbg_core drbg_cores[];
+extern unsigned short drbg_sec_strength(drbg_flag_t flags);
+
 #endif /* _DRBG_H */
-- 
2.7.2




[PATCH v4 0/5] /dev/random - a new approach

2016-05-31 Thread Stephan Mueller
Hi Herbert, Ted,

The following patch set provides a different approach to /dev/random which
I call Linux Random Number Generator (LRNG) to collect entropy within the Linux
kernel. The main improvements compared to the legacy /dev/random is to provide
sufficient entropy during boot time as well as in virtual environments and when
using SSDs. A secondary design goal is to limit the impact of the entropy
collection on massively parallel systems and also allow the use of accelerated
cryptographic primitives. Also, all steps of the entropic data processing are
testable. Finally massive performance improvements are visible at /dev/urandom
and get_random_bytes.

The design and implementation is driven by a set of goals described in [1]
that the LRNG completely implements. Furthermore, [1] includes a
comparison with RNG design suggestions such as SP800-90B, SP800-90C, and
AIS20/31.

Changes v4:
* port to 4.7-rc1
* Use classical twisted LFSR approach to collect entropic data as requested by
  George Spelvin. The LFSR is based on a primitive and irreducible polynomial
  whose taps are not too close to the location the current byte is mixed in.
  Primitive polynomials for other entropy pool sizes are offered in the code.
* The reading of the entropy pool is performed with a hash. The hash can be
  specified at compile time. The pre-defined hashes are the same as used for
  the DRBG type (e.g. a SHA256 Hash DRBG implies the use of SHA-256, an AES256
  CTR DRBG implies the use of CMAC-AES).
* Addition of the example defines for a CTR DRBG with AES128 which can be
  enabled during compile time.
* Entropy estimate: one bit of entropy per interrupt. In case a system does
  not have a high-resolution timer, apply 1/10th bit of entropy per interrupt.
  The interrupt estimates can be changed arbitrarily at compile time.
* Use kmalloc_node for the per-NUMA node secondary DRBGs.
* Add boot time entropy tests discussed in section 3.4.3 [1].
* Align all buffers that are processed by the kernel crypto API to an 8 byte
  boundary. This boundary covers all currently existing cipher implementations.
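
The twisted LFSR mixing mentioned in the first item above can be pictured
with the following toy sketch: new event data is XORed with a few tap
words spread across the pool and the result is twisted before being
stored back. Pool size, tap positions and the twist constant below are
made up for illustration; the LRNG's actual primitive polynomial is
documented in [1]:

#include <stdint.h>

#define POOL_WORDS 16                   /* toy size, not the LRNG's */

static uint32_t pool[POOL_WORDS];
static unsigned int pool_ptr;

/* Toy twisted-LFSR mix-in: taps and twist are illustrative only. */
static void toy_lfsr_mix(uint32_t event)
{
        uint32_t w;

        pool_ptr = (pool_ptr + 1) % POOL_WORDS;

        /* XOR the new data with a few tap words spread across the pool. */
        w  = event;
        w ^= pool[pool_ptr];
        w ^= pool[(pool_ptr + 5) % POOL_WORDS];
        w ^= pool[(pool_ptr + 9) % POOL_WORDS];
        w ^= pool[(pool_ptr + 14) % POOL_WORDS];

        /* "Twist": rotate and conditionally fold in a constant so a
         * single-bit change spreads over many bit positions with
         * repeated mixing.
         */
        w = (w >> 3) ^ (w << 29) ^ ((w & 7) ? 0xEDB88320 : 0);

        pool[pool_ptr] = w;
}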

Changes v3:
* Convert debug printk to pr_debug as suggested by Joe Perches
* Add missing \n as suggested by Joe Perches
* Do not mix in stuck IRQ measurements as requested by Pavel Machek
* Add handling logic for systems without high-res timer as suggested by Pavel
  Machek -- it uses ideas from the add_interrupt_randomness of the legacy
  /dev/random implementation
* add per NUMA node secondary DRBGs as suggested by Andi Kleen -- the
  explanation of how the logic works is given in section 2.1.1 of my
  documentation [1], especially how the initial seeding is performed.

Changes v2:
* Removal of the Jitter RNG fast noise source as requested by Ted
* Addition of processing of add_input_randomness as suggested by Ted
* Update documentation and testing in [1] to cover the updates
* Addition of a SystemTap script to test add_input_randomness
* To clarify the question whether sufficient entropy is present during boot
  I added one more test in 3.3.1 [1] which demonstrates the providing of
  sufficient entropy during initialization. In the worst case of no fast noise
  sources, in the worst case of a virtual machine with only very few hardware
  devices, the testing shows that the secondary DRBG is fully seeded with 256
  bits of entropy before user space injects the random data obtained
  during shutdown of the previous boot (i.e. the requirement phrased by the
  legacy /dev/random implementation). As the writing of the random data into
  /dev/random by user space will happen before any cryptographic service
  is initialized in user space, this test demonstrates that sufficient
  entropy is already present in the LRNG at the time user space requires it
  for seeding cryptographic daemons. Note, this test result was obtained
  for different architectures, such as x86 64 bit, x86 32 bit, ARM 32 bit and
  MIPS 32 bit.

[1] http://www.chronox.de/lrng/doc/lrng.pdf

[2] http://www.chronox.de/lrng.html

Stephan Mueller (5):
  crypto: DRBG - externalize DRBG functions for LRNG
  random: conditionally compile code depending on LRNG
  crypto: Linux Random Number Generator
  crypto: LRNG - enable compile
  random: add interrupt callback to VMBus IRQ handler

 crypto/Kconfig |   10 +
 crypto/Makefile|1 +
 crypto/drbg.c  |   11 +-
 crypto/lrng.c  | 1981 
 drivers/char/random.c  |9 +
 drivers/hv/vmbus_drv.c |3 +
 include/crypto/drbg.h  |7 +
 include/linux/genhd.h  |5 +
 include/linux/random.h |7 +-
 9 files changed, 2027 insertions(+), 7 deletions(-)
 create mode 100644 crypto/lrng.c

-- 
2.7.2




Re: [PATCH 0/5] refactor mpi_read_from_buffer()

2016-05-31 Thread Nicolai Stange
Herbert Xu  writes:

> On Thu, May 26, 2016 at 11:19:50PM +0200, Nicolai Stange wrote:
>> mpi_read_from_buffer() and mpi_read_raw_data() do almost the same and share a
>> fair amount of common code.
>> 
>> This patchset attempts to rewrite mpi_read_from_buffer() in order to 
>> implement
>> it in terms of mpi_read_raw_data().
>> 
>> The patches 1 and 3, i.e.
>>   "lib/mpi: mpi_read_from_buffer(): return error code"
>> and
>>   "lib/mpi: mpi_read_from_buffer(): return -EINVAL upon too short buffer"
>> do the groundwork in that they move any error detection unique to
>> mpi_read_from_buffer() out of the data handling loop.
>> 
>> The patches 2 and 4, that is
>>   "lib/digsig: digsig_verify_rsa(): return -EINVAL if modulo length is zero"
>> and
>>   "lib/mpi: mpi_read_from_buffer(): sanitize short buffer printk"
>> are not strictly necessary for the refactoring: they cleanup some minor 
>> oddities
>> related to error handling I came across.
>> 
>> Finally, the last patch in this series,
>>   "lib/mpi: refactor mpi_read_from_buffer() in terms of mpi_read_raw_data()"
>> actually does what this series is all about.
>> 
>> 
>> Applicable to linux-next-20160325.
>
> All applied.

Thanks! (As well as for applying the separately sent patches, of course)


Re: [PATCH 1/7] crypto : stylistic cleanup in sha1-mb

2016-05-31 Thread Megha Dey
On Tue, 2016-05-31 at 16:13 +0800, Herbert Xu wrote:
> On Thu, May 19, 2016 at 05:43:04PM -0700, Megha Dey wrote:
> > From: Megha Dey 
> > 
> > Currently there are several checkpatch warnings in the sha1_mb.c file:
> > 'WARNING: line over 80 characters' in the sha1_mb.c file. Also, the
> > syntax of some multi-line comments are not correct. This patch fixes
> > these issues.
> > 
> > Signed-off-by: Megha Dey 
> 
> This patch says 1/7 but there is no cover letter and I've only
> seen patches 1 and 2.  What's going on?

Herbert , I had 7 patches for the async+avx2 sha256-mb implementation. I
had sent just the async related changes here. However, I will resend the
patches as 2 sets. One which has only async implementation changes and
the other which has only sha256-mb avx2 implementation changes. Please
disregard the current patches and review the ones I will be sending
again.

> 
> Cheers,




[PATCH 0/2] async implementation for sha1-mb

2016-05-31 Thread Megha Dey
From: Megha Dey 

Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm.
Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
This patchset introduces an async interface for even the inner algorithm.
Additionally, there are several 'WARNING: line over 80 characters'
checkpatch warnings in the sha1_mb.c file, and the syntax of some
multi-line comments is not correct. This patchset fixes these issues.

Megha Dey (2):
  crypto : stylistic cleanup in sha1-mb
  crypto : async implementation for sha1-mb

 arch/x86/crypto/sha-mb/sha1_mb.c | 292 ---
 crypto/ahash.c   |   6 -
 crypto/mcryptd.c | 117 
 include/crypto/hash.h|   6 +
 include/crypto/internal/hash.h   |   8 +-
 include/crypto/mcryptd.h |   8 +-
 6 files changed, 254 insertions(+), 183 deletions(-)

-- 
1.9.1



[PATCH 2/2] crypto : async implementation for sha1-mb

2016-05-31 Thread Megha Dey
From: Megha Dey 

Herbert wants the sha1-mb algorithm to have an async implementation:
https://lkml.org/lkml/2016/4/5/286.
Currently, sha1-mb uses an async interface for the outer algorithm
and a sync interface for the inner algorithm. This patch introduces
an async interface for even the inner algorithm.

Signed-off-by: Megha Dey 
Signed-off-by: Tim Chen 
---
 arch/x86/crypto/sha-mb/sha1_mb.c | 190 ++-
 crypto/ahash.c   |   6 --
 crypto/mcryptd.c | 117 +---
 include/crypto/hash.h|   6 ++
 include/crypto/internal/hash.h   |   8 +-
 include/crypto/mcryptd.h |   8 +-
 6 files changed, 184 insertions(+), 151 deletions(-)

diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index 0a46491..efc19e3 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -68,6 +68,7 @@
 #include 
 #include 
 #include "sha_mb_ctx.h"
+#include 
 
 #define FLUSH_INTERVAL 1000 /* in usec */
 
@@ -80,10 +81,10 @@ struct sha1_mb_ctx {
 static inline struct mcryptd_hash_request_ctx
*cast_hash_to_mcryptd_ctx(struct sha1_hash_ctx *hash_ctx)
 {
-   struct shash_desc *desc;
+   struct ahash_request *areq;
 
-   desc = container_of((void *) hash_ctx, struct shash_desc, __ctx);
-   return container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   areq = container_of((void *) hash_ctx, struct ahash_request, __ctx);
+   return container_of(areq, struct mcryptd_hash_request_ctx, areq);
 }
 
 static inline struct ahash_request
@@ -93,7 +94,7 @@ static inline struct ahash_request
 }
 
 static void req_ctx_init(struct mcryptd_hash_request_ctx *rctx,
-   struct shash_desc *desc)
+   struct ahash_request *areq)
 {
rctx->flag = HASH_UPDATE;
 }
@@ -375,9 +376,9 @@ static struct sha1_hash_ctx *sha1_ctx_mgr_flush(struct 
sha1_ctx_mgr *mgr)
}
 }
 
-static int sha1_mb_init(struct shash_desc *desc)
+static int sha1_mb_init(struct ahash_request *areq)
 {
-   struct sha1_hash_ctx *sctx = shash_desc_ctx(desc);
+   struct sha1_hash_ctx *sctx = ahash_request_ctx(areq);
 
hash_ctx_init(sctx);
sctx->job.result_digest[0] = SHA1_H0;
@@ -395,7 +396,7 @@ static int sha1_mb_init(struct shash_desc *desc)
 static int sha1_mb_set_results(struct mcryptd_hash_request_ctx *rctx)
 {
int i;
-   struct  sha1_hash_ctx *sctx = shash_desc_ctx(&rctx->desc);
+   struct  sha1_hash_ctx *sctx = ahash_request_ctx(&rctx->areq);
__be32  *dst = (__be32 *) rctx->out;
 
for (i = 0; i < 5; ++i)
@@ -427,7 +428,7 @@ static int sha_finish_walk(struct mcryptd_hash_request_ctx 
**ret_rctx,
 
}
sha_ctx = (struct sha1_hash_ctx *)
-   shash_desc_ctx(&rctx->desc);
+   ahash_request_ctx(&rctx->areq);
kernel_fpu_begin();
sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx,
rctx->walk.data, nbytes, flag);
@@ -519,11 +520,10 @@ static void sha1_mb_add_list(struct 
mcryptd_hash_request_ctx *rctx,
mcryptd_arm_flusher(cstate, delay);
 }
 
-static int sha1_mb_update(struct shash_desc *desc, const u8 *data,
- unsigned int len)
+static int sha1_mb_update(struct ahash_request *areq)
 {
struct mcryptd_hash_request_ctx *rctx =
-   container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   container_of(areq, struct mcryptd_hash_request_ctx, areq);
struct mcryptd_alg_cstate *cstate =
this_cpu_ptr(sha1_mb_alg_state.alg_cstate);
 
@@ -539,7 +539,7 @@ static int sha1_mb_update(struct shash_desc *desc, const u8 
*data,
}
 
/* need to init context */
-   req_ctx_init(rctx, desc);
+   req_ctx_init(rctx, areq);
 
nbytes = crypto_ahash_walk_first(req, &rctx->walk);
 
@@ -552,7 +552,7 @@ static int sha1_mb_update(struct shash_desc *desc, const u8 
*data,
rctx->flag |= HASH_DONE;
 
/* submit */
-   sha_ctx = (struct sha1_hash_ctx *) shash_desc_ctx(desc);
+   sha_ctx = (struct sha1_hash_ctx *) ahash_request_ctx(areq);
sha1_mb_add_list(rctx, cstate);
kernel_fpu_begin();
sha_ctx = sha1_ctx_mgr_submit(cstate->mgr, sha_ctx, rctx->walk.data,
@@ -579,11 +579,10 @@ done:
return ret;
 }
 
-static int sha1_mb_finup(struct shash_desc *desc, const u8 *data,
-unsigned int len, u8 *out)
+static int sha1_mb_finup(struct ahash_request *areq)
 {
struct mcryptd_hash_request_ctx *rctx =
-   container_of(desc, struct mcryptd_hash_request_ctx, desc);
+   container_of(areq, struct mcryptd_hash_request_ctx, areq);
struct mcryptd_alg_cstate *cstate =

[PATCH 1/2] crypto : stylistic cleanup in sha1-mb

2016-05-31 Thread Megha Dey
From: Megha Dey 

Currently there are several 'WARNING: line over 80 characters'
checkpatch warnings in the sha1_mb.c file. Also, the syntax of some
multi-line comments is not correct. This patch fixes these issues.

Signed-off-by: Megha Dey 
---
 arch/x86/crypto/sha-mb/sha1_mb.c | 110 ++-
 1 file changed, 74 insertions(+), 36 deletions(-)

diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index 9c5af33..0a46491 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -77,7 +77,8 @@ struct sha1_mb_ctx {
struct mcryptd_ahash *mcryptd_tfm;
 };
 
-static inline struct mcryptd_hash_request_ctx *cast_hash_to_mcryptd_ctx(struct 
sha1_hash_ctx *hash_ctx)
+static inline struct mcryptd_hash_request_ctx
+   *cast_hash_to_mcryptd_ctx(struct sha1_hash_ctx *hash_ctx)
 {
struct shash_desc *desc;
 
@@ -85,7 +86,8 @@ static inline struct mcryptd_hash_request_ctx 
*cast_hash_to_mcryptd_ctx(struct s
return container_of(desc, struct mcryptd_hash_request_ctx, desc);
 }
 
-static inline struct ahash_request *cast_mcryptd_ctx_to_req(struct 
mcryptd_hash_request_ctx *ctx)
+static inline struct ahash_request
+   *cast_mcryptd_ctx_to_req(struct mcryptd_hash_request_ctx *ctx)
 {
return container_of((void *) ctx, struct ahash_request, __ctx);
 }
@@ -97,10 +99,12 @@ static void req_ctx_init(struct mcryptd_hash_request_ctx 
*rctx,
 }
 
 static asmlinkage void (*sha1_job_mgr_init)(struct sha1_mb_mgr *state);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_submit)(struct sha1_mb_mgr 
*state,
- struct job_sha1 *job);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_flush)(struct sha1_mb_mgr 
*state);
-static asmlinkage struct job_sha1* (*sha1_job_mgr_get_comp_job)(struct 
sha1_mb_mgr *state);
+static asmlinkage struct job_sha1* (*sha1_job_mgr_submit)
+   (struct sha1_mb_mgr *state, struct job_sha1 *job);
+static asmlinkage struct job_sha1* (*sha1_job_mgr_flush)
+   (struct sha1_mb_mgr *state);
+static asmlinkage struct job_sha1* (*sha1_job_mgr_get_comp_job)
+   (struct sha1_mb_mgr *state);
 
 static inline void sha1_init_digest(uint32_t *digest)
 {
@@ -131,7 +135,8 @@ static inline uint32_t sha1_pad(uint8_t 
padblock[SHA1_BLOCK_SIZE * 2],
return i >> SHA1_LOG2_BLOCK_SIZE;
 }
 
-static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct sha1_ctx_mgr *mgr, 
struct sha1_hash_ctx *ctx)
+static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct sha1_ctx_mgr *mgr,
+   struct sha1_hash_ctx *ctx)
 {
while (ctx) {
if (ctx->status & HASH_CTX_STS_COMPLETE) {
@@ -177,8 +182,8 @@ static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct 
sha1_ctx_mgr *mgr, str
 
ctx->job.buffer = (uint8_t *) buffer;
ctx->job.len = len;
-   ctx = (struct sha1_hash_ctx *) 
sha1_job_mgr_submit(&mgr->mgr,
-   
  &ctx->job);
+   ctx = (struct sha1_hash_ctx 
*)sha1_job_mgr_submit(&mgr->mgr,
+   
&ctx->job);
continue;
}
}
@@ -191,13 +196,15 @@ static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct 
sha1_ctx_mgr *mgr, str
if (ctx->status & HASH_CTX_STS_LAST) {
 
uint8_t *buf = ctx->partial_block_buffer;
-   uint32_t n_extra_blocks = sha1_pad(buf, 
ctx->total_length);
+   uint32_t n_extra_blocks =
+   sha1_pad(buf, ctx->total_length);
 
ctx->status = (HASH_CTX_STS_PROCESSING |
   HASH_CTX_STS_COMPLETE);
ctx->job.buffer = buf;
ctx->job.len = (uint32_t) n_extra_blocks;
-   ctx = (struct sha1_hash_ctx *) 
sha1_job_mgr_submit(&mgr->mgr, &ctx->job);
+   ctx = (struct sha1_hash_ctx *)
+   sha1_job_mgr_submit(&mgr->mgr, &ctx->job);
continue;
}
 
@@ -208,14 +215,17 @@ static struct sha1_hash_ctx *sha1_ctx_mgr_resubmit(struct 
sha1_ctx_mgr *mgr, str
return NULL;
 }
 
-static struct sha1_hash_ctx *sha1_ctx_mgr_get_comp_ctx(struct sha1_ctx_mgr 
*mgr)
+static struct sha1_hash_ctx
+   *sha1_ctx_mgr_get_comp_ctx(struct sha1_ctx_mgr *mgr)
 {
/*
 * If get_comp_job returns NULL, there are no jobs complete.
-* If get_comp_job returns a job, verify that it is safe

Re: [PATCH v4 0/5] /dev/random - a new approach

2016-05-31 Thread George Spelvin
I'll be a while going through this.

I was thinking about our earlier discussion where I was hammering on
the point that compressing entropy too early is a mistake, and just
now realized that I should have given you credit for my recent 4.7-rc1
patch 2a18da7a.  The hash function ("good, fast AND cheap!") introduced
there exploits that point: using a larger hash state (and postponing
compression to the final size) dramatically reduces the requirements on
the hash mixing function.
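
(A toy illustration of that principle, not taken from 2a18da7a or any
other kernel code: mix into a 64-bit state with a cheap per-byte step and
only fold down to 32 bits once at the end. The constants are borrowed
from common murmur-style mixers and are otherwise arbitrary.)

#include <stdint.h>
#include <stddef.h>

/* Mix into a state wider than the final result; compress only once. */
static uint32_t toy_wide_hash(const unsigned char *p, size_t len)
{
        uint64_t state = 0x9e3779b97f4a7c15ULL;         /* arbitrary seed */

        while (len--) {
                state ^= *p++;
                state *= 0x2545f4914f6cdd1dULL;         /* cheap per-byte mix */
        }

        /* Final 64 -> 32 bit compression happens only here. */
        state ^= state >> 33;
        state *= 0xff51afd7ed558ccdULL;
        state ^= state >> 29;

        return (uint32_t)state;
}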

I wasn't conscious of it at the time, but I just now realized that
explaining it clarified the point in my mind, which led to applying
the principle in other situations.

So thank you!


Re: [PATCH v2 0/4] hw rng support for NSP SoC

2016-05-31 Thread Herbert Xu
On Tue, May 31, 2016 at 10:09:39AM -0700, Florian Fainelli wrote:
>
> FYI, ARM Device Tree patches usually go via ARM SoC pull requests, so it
> is best if this is planned in advance. Can you make sure you document
> that there could be a merge conflict in your pull request to Linus?

Sure I can do that.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [PATCH v2 2/2] crypto: omap: convert to the new cryptoengine API

2016-05-31 Thread Baolin Wang
On 30 May 2016 at 21:32, LABBE Corentin  wrote:
> Since the crypto engine has been converted to use crypto_async_request
> instead of ablkcipher_request, minor changes are needed to use it.
>
> Signed-off-by: LABBE Corentin 
> ---
>  drivers/crypto/omap-aes.c | 10 ++
>  drivers/crypto/omap-des.c | 10 ++
>  2 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
> index ce174d3..7007f13 100644
> --- a/drivers/crypto/omap-aes.c
> +++ b/drivers/crypto/omap-aes.c
> @@ -519,7 +519,7 @@ static void omap_aes_finish_req(struct omap_aes_dev *dd, 
> int err)
>
> pr_debug("err: %d\n", err);
>
> -   crypto_finalize_request(dd->engine, req, err);
> +   crypto_finalize_request(dd->engine, &req->base, err);
>  }
>
>  static int omap_aes_crypt_dma_stop(struct omap_aes_dev *dd)
> @@ -592,14 +592,15 @@ static int omap_aes_handle_queue(struct omap_aes_dev 
> *dd,
>  struct ablkcipher_request *req)
>  {
> if (req)
> -   return crypto_transfer_request_to_engine(dd->engine, req);
> +   return crypto_transfer_request_to_engine(dd->engine, 
> &req->base);
>
> return 0;
>  }
>
>  static int omap_aes_prepare_req(struct crypto_engine *engine,
> -   struct ablkcipher_request *req)
> +   struct crypto_async_request *areq)
>  {
> +   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
> struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct omap_aes_dev *dd = omap_aes_find_dev(ctx);
> @@ -642,8 +643,9 @@ static int omap_aes_prepare_req(struct crypto_engine 
> *engine,
>  }
>
>  static int omap_aes_crypt_req(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> + struct crypto_async_request *areq)
>  {
> +   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
> struct omap_aes_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct omap_aes_dev *dd = omap_aes_find_dev(ctx);
> diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
> index 3eedb03..0da5686 100644
> --- a/drivers/crypto/omap-des.c
> +++ b/drivers/crypto/omap-des.c
> @@ -506,7 +506,7 @@ static void omap_des_finish_req(struct omap_des_dev *dd, 
> int err)
> pr_debug("err: %d\n", err);
>
> pm_runtime_put(dd->dev);
> -   crypto_finalize_request(dd->engine, req, err);
> +   crypto_finalize_request(dd->engine, &req->base, err);
>  }
>
>  static int omap_des_crypt_dma_stop(struct omap_des_dev *dd)
> @@ -572,14 +572,15 @@ static int omap_des_handle_queue(struct omap_des_dev 
> *dd,
>  struct ablkcipher_request *req)
>  {
> if (req)
> -   return crypto_transfer_request_to_engine(dd->engine, req);
> +   return crypto_transfer_request_to_engine(dd->engine, 
> &req->base);
>
> return 0;
>  }
>
>  static int omap_des_prepare_req(struct crypto_engine *engine,
> -   struct ablkcipher_request *req)
> +   struct crypto_async_request *areq)
>  {
> +   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
> struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct omap_des_dev *dd = omap_des_find_dev(ctx);
> @@ -620,8 +621,9 @@ static int omap_des_prepare_req(struct crypto_engine 
> *engine,
>  }
>
>  static int omap_des_crypt_req(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> + struct crypto_async_request *areq)
>  {
> +   struct ablkcipher_request *req = ablkcipher_request_cast(areq);
> struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
> crypto_ablkcipher_reqtfm(req));
> struct omap_des_dev *dd = omap_des_find_dev(ctx);
> --
> 2.7.3
>

Reviewed-by: Baolin Wang 

-- 
Baolin.wang
Best Regards


Re: [PATCH v2 1/2] crypto: engine: permit to enqueue ahash_request

2016-05-31 Thread Baolin Wang
On 30 May 2016 at 21:32, LABBE Corentin  wrote:
> The current crypto engine allow only ablkcipher_request to be enqueued.
> Thus denying any use of it for hardware that also handle hash algo.
>
> This patch convert all ablkcipher_request references to the
> more general crypto_async_request.
>
> Signed-off-by: LABBE Corentin 
> ---
>  crypto/crypto_engine.c  | 17 +++--
>  include/crypto/algapi.h | 14 +++---
>  2 files changed, 14 insertions(+), 17 deletions(-)
>
> diff --git a/crypto/crypto_engine.c b/crypto/crypto_engine.c
> index a55c82d..b658cb8 100644
> --- a/crypto/crypto_engine.c
> +++ b/crypto/crypto_engine.c
> @@ -19,7 +19,7 @@
>  #define CRYPTO_ENGINE_MAX_QLEN 10
>
>  void crypto_finalize_request(struct crypto_engine *engine,
> -struct ablkcipher_request *req, int err);
> +struct crypto_async_request *req, int err);
>
>  /**
>   * crypto_pump_requests - dequeue one request from engine queue to process
> @@ -34,7 +34,6 @@ static void crypto_pump_requests(struct crypto_engine 
> *engine,
>  bool in_kthread)
>  {
> struct crypto_async_request *async_req, *backlog;
> -   struct ablkcipher_request *req;
> unsigned long flags;
> bool was_busy = false;
> int ret;
> @@ -82,9 +81,7 @@ static void crypto_pump_requests(struct crypto_engine 
> *engine,
> if (!async_req)
> goto out;
>
> -   req = ablkcipher_request_cast(async_req);
> -
> -   engine->cur_req = req;
> +   engine->cur_req = async_req;
> if (backlog)
> backlog->complete(backlog, -EINPROGRESS);
>
> @@ -142,7 +139,7 @@ static void crypto_pump_work(struct kthread_work *work)
>   * @req: the request need to be listed into the engine queue
>   */
>  int crypto_transfer_request(struct crypto_engine *engine,
> -   struct ablkcipher_request *req, bool need_pump)
> +   struct crypto_async_request *req, bool need_pump)
>  {
> unsigned long flags;
> int ret;
> @@ -154,7 +151,7 @@ int crypto_transfer_request(struct crypto_engine *engine,
> return -ESHUTDOWN;
> }
>
> -   ret = ablkcipher_enqueue_request(&engine->queue, req);
> +   ret = crypto_enqueue_request(&engine->queue, req);
>
> if (!engine->busy && need_pump)
> queue_kthread_work(&engine->kworker, &engine->pump_requests);
> @@ -171,7 +168,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request);
>   * @req: the request need to be listed into the engine queue
>   */
>  int crypto_transfer_request_to_engine(struct crypto_engine *engine,
> - struct ablkcipher_request *req)
> + struct crypto_async_request *req)
>  {
> return crypto_transfer_request(engine, req, true);
>  }
> @@ -184,7 +181,7 @@ EXPORT_SYMBOL_GPL(crypto_transfer_request_to_engine);
>   * @err: error number
>   */
>  void crypto_finalize_request(struct crypto_engine *engine,
> -struct ablkcipher_request *req, int err)
> +struct crypto_async_request *req, int err)
>  {
> unsigned long flags;
> bool finalize_cur_req = false;
> @@ -208,7 +205,7 @@ void crypto_finalize_request(struct crypto_engine *engine,
> spin_unlock_irqrestore(&engine->queue_lock, flags);
> }
>
> -   req->base.complete(&req->base, err);
> +   req->complete(req, err);
>
> queue_kthread_work(&engine->kworker, &engine->pump_requests);
>  }
> diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
> index eeafd21..d720a2a 100644
> --- a/include/crypto/algapi.h
> +++ b/include/crypto/algapi.h
> @@ -173,26 +173,26 @@ struct crypto_engine {
> int (*unprepare_crypt_hardware)(struct crypto_engine *engine);
>
> int (*prepare_request)(struct crypto_engine *engine,
> -  struct ablkcipher_request *req);
> +  struct crypto_async_request *req);
> int (*unprepare_request)(struct crypto_engine *engine,
> -struct ablkcipher_request *req);
> +struct crypto_async_request *req);
> int (*crypt_one_request)(struct crypto_engine *engine,
> -struct ablkcipher_request *req);
> +struct crypto_async_request *req);
>
> struct kthread_worker   kworker;
> struct task_struct  *kworker_task;
> struct kthread_work pump_requests;
>
> void*priv_data;
> -   struct ablkcipher_request   *cur_req;
> +   struct crypto_async_request *cur_req;
>  };
>
>  int crypto_transfer_request(struct crypto_engine *engine,
> -   struct ablkcipher_request *req, bool need_pump);
> +

Re: [PATCH] KEYS: Add placeholder for KDF usage with DH

2016-05-31 Thread James Morris
On Tue, 31 May 2016, David Howells wrote:

> Hi James,
> 
> > Could you pass this along to Linus as soon as possible, please?  This
> > alters a new keyctl function added in the current merge window to allow for
> > a future extension planned for the next merge window.
> 
> Is this likely to go to Linus before -rc2?  If not, we'll need to do things
> differently.

It should be ok, I'll see how it goes with Linus.

-- 
James Morris




[no subject]

2016-05-31 Thread Jeffrey Walton
Please forgive my ignorance here...

I have a test system with a VIA C7-M processor and PM-400 chipset. This
is one of those Thin Client/Internet of Things processors and chipsets I
test security libraries on (such as OpenSSL, Cryptlib and Crypto++).

The processor includes the Padlock extensions. Padlock is similar to
Intel's RDRAND, RDSEED and AES-NI, and it predates Intel's
instructions by about a decade.

The Padlock Security Engine can produce a stream of random numbers at
megabits per second, so I've been somewhat surprised that it has been
suffering entropy depletion. Here's what the audit trail looks like:

Testing operating system provided blocking random number generator...
FAILED:  it took 74 seconds to generate 5 bytes
passed:  5 generated bytes compressed to 7 bytes by DEFLATE

Above, the blocking RNG is drained and then 16 bytes are requested. It
takes over a minute to gather five bytes even though an effectively
endless stream is available.
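
The failing path is essentially just a blocking read of /dev/random; a
minimal standalone reproduction (a sketch, not the library's actual test
code) looks something like this:

    /* Time a small blocking read from the kernel's blocking RNG. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
            unsigned char buf[16];
            time_t start = time(NULL);
            int fd = open("/dev/random", O_RDONLY);
            ssize_t n = (fd >= 0) ? read(fd, buf, sizeof(buf)) : -1;

            printf("read %zd bytes in %ld seconds\n",
                   n, (long)(time(NULL) - start));
            if (fd >= 0)
                    close(fd);
            return 0;
    }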

My question is, is this system expected to suffer entropy depletion
out of the box? Or are users expected to do something special so the
system does not fail?

Thanks in advance.

Jeff


[RFC v3 0/4] Introduce the bulk mode method when sending request to crypto layer

2016-05-31 Thread Baolin Wang
This patchset checks whether the cipher supports bulk mode, and dm-crypt
then chooses how to send requests to the crypto layer according to the
cipher mode. In bulk mode, an sg table is used to map the whole bio, and
all scatterlists of one bio are sent to the crypto engine to be encrypted
or decrypted, which can improve the hardware engine's efficiency.

Changes since v2:
 - Add one cipher user with CRYPTO_ALG_BULK flag to support bulk mode.
 - Add one atomic variable to avoid the sg table race.

Changes since v1:
 - Refactor the blk_bio_map_sg() function to avoid duplicated code.
 - Move the sg table allocation to crypt_ctr_cipher() function to avoid memory
   allocation in the IO path.
 - Remove the crypt_sg_entry() function.
 - Other optimization.

Baolin Wang (4):
  block: Introduce blk_bio_map_sg() to map one bio
  crypto: Introduce CRYPTO_ALG_BULK flag
  md: dm-crypt: Introduce the bulk mode method when sending request
  crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher

 block/blk-merge.c |   36 --
 drivers/crypto/omap-aes.c |2 +-
 drivers/md/dm-crypt.c |  159 -
 include/crypto/skcipher.h |7 ++
 include/linux/blkdev.h|2 +
 include/linux/crypto.h|6 ++
 6 files changed, 205 insertions(+), 7 deletions(-)

-- 
1.7.9.5



[RFC v3 3/4] md: dm-crypt: Introduce the bulk mode method when sending request

2016-05-31 Thread Baolin Wang
In the current dm-crypt code, it is inefficient to map one segment (always
one sector) of one bio with just one scatterlist at a time for a hardware
crypto engine. In particular, some encryption modes (like ecb or xts)
cooperating with the crypto engine need only one initial IV or a null IV
instead of a different IV for each sector. In this situation we can use
multiple scatterlists to map the whole bio and send all scatterlists of
one bio to the crypto engine to encrypt or decrypt, which can improve the
hardware engine's efficiency.

With this optimization, on my test setup (BeagleBone Black board and dd
testing) using 64KB I/Os on an eMMC storage device, I saw about 127%
improvement in throughput for encrypted writes, and about 206% improvement
for encrypted reads. However, this is not suitable for other modes, which
need a different IV for each sector.

Signed-off-by: Baolin Wang 
---
 drivers/md/dm-crypt.c |  159 -
 1 file changed, 158 insertions(+), 1 deletion(-)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4f3cb35..0b1d452 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -33,6 +33,7 @@
 #include 
 
 #define DM_MSG_PREFIX "crypt"
+#define DM_MAX_SG_LIST 512
 
 /*
  * context holding the current state of a multi-part conversion
@@ -142,6 +143,11 @@ struct crypt_config {
char *cipher;
char *cipher_string;
 
+   struct sg_table sgt_in;
+   struct sg_table sgt_out;
+   atomic_t sgt_init_done;
+   struct completion sgt_restart;
+
struct crypt_iv_operations *iv_gen_ops;
union {
struct iv_essiv_private essiv;
@@ -837,6 +843,141 @@ static u8 *iv_of_dmreq(struct crypt_config *cc,
crypto_skcipher_alignmask(any_tfm(cc)) + 1);
 }
 
+static void crypt_init_sg_table(struct scatterlist *sgl)
+{
+   struct scatterlist *sg;
+   int i;
+
+   for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+   if (i < DM_MAX_SG_LIST - 1 && sg_is_last(sg))
+   sg_unmark_end(sg);
+   else if (i == DM_MAX_SG_LIST - 1)
+   sg_mark_end(sg);
+   }
+
+   for_each_sg(sgl, sg, DM_MAX_SG_LIST, i) {
+   memset(sg, 0, sizeof(struct scatterlist));
+
+   if (i == DM_MAX_SG_LIST - 1)
+   sg_mark_end(sg);
+   }
+}
+
+static void crypt_reinit_sg_table(struct crypt_config *cc)
+{
+   if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+   return;
+
+   crypt_init_sg_table(cc->sgt_in.sgl);
+   crypt_init_sg_table(cc->sgt_out.sgl);
+
+   if (atomic_inc_and_test(&cc->sgt_init_done))
+   complete(&cc->sgt_restart);
+   atomic_set(&cc->sgt_init_done, 1);
+}
+
+static int crypt_alloc_sg_table(struct crypt_config *cc)
+{
+   unsigned int bulk_mode = skcipher_is_bulk_mode(any_tfm(cc));
+   int ret = 0;
+
+   if (!bulk_mode)
+   goto out_skip_alloc;
+
+   ret = sg_alloc_table(&cc->sgt_in, DM_MAX_SG_LIST, GFP_KERNEL);
+   if (ret)
+   goto out_skip_alloc;
+
+   ret = sg_alloc_table(&cc->sgt_out, DM_MAX_SG_LIST, GFP_KERNEL);
+   if (ret)
+   goto out_free_table;
+
+   init_completion(&cc->sgt_restart);
+   atomic_set(&cc->sgt_init_done, 1);
+   return 0;
+
+out_free_table:
+   sg_free_table(&cc->sgt_in);
+out_skip_alloc:
+   cc->sgt_in.orig_nents = 0;
+   cc->sgt_out.orig_nents = 0;
+
+   return ret;
+}
+
+static int crypt_convert_bulk_block(struct crypt_config *cc,
+   struct convert_context *ctx,
+   struct skcipher_request *req)
+{
+   struct bio *bio_in = ctx->bio_in;
+   struct bio *bio_out = ctx->bio_out;
+   unsigned int total_bytes = bio_in->bi_iter.bi_size;
+   unsigned int total_sg_in, total_sg_out;
+   struct scatterlist *sg_in, *sg_out;
+   struct dm_crypt_request *dmreq;
+   u8 *iv;
+   int r;
+
+   if (!cc->sgt_in.orig_nents || !cc->sgt_out.orig_nents)
+   return -EINVAL;
+
+   if (!atomic_dec_and_test(&cc->sgt_init_done)) {
+   wait_for_completion(&cc->sgt_restart);
+   reinit_completion(&cc->sgt_restart);
+   }
+
+   dmreq = dmreq_of_req(cc, req);
+   iv = iv_of_dmreq(cc, dmreq);
+   dmreq->iv_sector = ctx->cc_sector;
+   dmreq->ctx = ctx;
+
+   total_sg_in = blk_bio_map_sg(bdev_get_queue(bio_in->bi_bdev),
+bio_in, cc->sgt_in.sgl);
+   if ((total_sg_in <= 0) || (total_sg_in > DM_MAX_SG_LIST)) {
+   DMERR("%s in sg map error %d, sg table nents[%d]\n",
+ __func__, total_sg_in, cc->sgt_in.orig_nents);
+   return -EINVAL;
+   }
+
+   ctx->iter_in.bi_size -= total_bytes;
+   sg_in = cc->sgt_in.sgl;
+   sg_out = cc->sgt_in.sgl;
+
+   if (bio_data_dir(bio_in) == READ)
+
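
In short, the bulk path maps the whole bio once and then issues a single
skcipher request covering all of it with one IV. As a rough sketch of that
final step (an illustration only, not the literal remainder of
crypt_convert_bulk_block(); sg_in, sg_out, total_bytes and iv are the
values prepared above):

    /* Illustration only: one request over the whole mapped bio. */
    skcipher_request_set_crypt(req, sg_in, sg_out, total_bytes, iv);

    if (bio_data_dir(bio_in) == WRITE)
            r = crypto_skcipher_encrypt(req);
    else
            r = crypto_skcipher_decrypt(req);

    return r;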

[RFC v3 4/4] crypto: Add the CRYPTO_ALG_BULK flag for ecb(aes) cipher

2016-05-31 Thread Baolin Wang
Since the ecb(aes) cipher does not need to handle an IV for encryption or
decryption, it can support bulk blocks when handling data. Thus this patch
adds the CRYPTO_ALG_BULK flag for the ecb(aes) cipher to improve the
hardware AES engine's efficiency.

Signed-off-by: Baolin Wang 
---
 drivers/crypto/omap-aes.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index ce174d3..ab09429 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -804,7 +804,7 @@ static struct crypto_alg algs_ecb_cbc[] = {
.cra_priority   = 300,
.cra_flags  = CRYPTO_ALG_TYPE_ABLKCIPHER |
  CRYPTO_ALG_KERN_DRIVER_ONLY |
- CRYPTO_ALG_ASYNC,
+ CRYPTO_ALG_ASYNC | CRYPTO_ALG_BULK,
.cra_blocksize  = AES_BLOCK_SIZE,
.cra_ctxsize= sizeof(struct omap_aes_ctx),
.cra_alignmask  = 0,
-- 
1.7.9.5



[RFC v3 2/4] crypto: Introduce CRYPTO_ALG_BULK flag

2016-05-31 Thread Baolin Wang
Some cipher hardware engines prefer to handle a bulk block rather than the
one-sector (512-byte) requests created by dm-crypt, because these engines
can handle the intermediate values (IVs) by themselves within one bulk
block. This means we can increase the size of a request by merging
requests, rather than always sending 512 bytes, and thus increase the
hardware engine's processing speed.

So introduce the 'CRYPTO_ALG_BULK' flag to indicate that a cipher supports
bulk mode.

Signed-off-by: Baolin Wang 
---
 include/crypto/skcipher.h |7 +++
 include/linux/crypto.h|6 ++
 2 files changed, 13 insertions(+)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0f987f5..d89d29a 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -519,5 +519,12 @@ static inline void skcipher_request_set_crypt(
req->iv = iv;
 }
 
+static inline unsigned int skcipher_is_bulk_mode(struct crypto_skcipher 
*sk_tfm)
+{
+   struct crypto_tfm *tfm = crypto_skcipher_tfm(sk_tfm);
+
+   return crypto_tfm_alg_bulk(tfm);
+}
+
 #endif /* _CRYPTO_SKCIPHER_H */
 
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 6e28c89..a315487 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -63,6 +63,7 @@
 #define CRYPTO_ALG_DEAD0x0020
 #define CRYPTO_ALG_DYING   0x0040
 #define CRYPTO_ALG_ASYNC   0x0080
+#define CRYPTO_ALG_BULK0x0100
 
 /*
  * Set this bit if and only if the algorithm requires another algorithm of
@@ -623,6 +624,11 @@ static inline u32 crypto_tfm_alg_type(struct crypto_tfm 
*tfm)
return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
 }
 
+static inline unsigned int crypto_tfm_alg_bulk(struct crypto_tfm *tfm)
+{
+   return tfm->__crt_alg->cra_flags & CRYPTO_ALG_BULK;
+}
+
 static inline unsigned int crypto_tfm_alg_blocksize(struct crypto_tfm *tfm)
 {
return tfm->__crt_alg->cra_blocksize;
-- 
1.7.9.5
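
For a user of the flag, the check reduces to the skcipher_is_bulk_mode()
helper added above. A minimal sketch of how a caller might branch on it
(submit_whole_bio() and submit_per_sector() are hypothetical placeholders,
not part of this series):

    /* Sketch only: choose between bulk and per-sector submission. */
    static int submit_request(struct crypto_skcipher *tfm, struct bio *bio)
    {
            if (skcipher_is_bulk_mode(tfm))
                    /* one request covering all segments of the bio */
                    return submit_whole_bio(tfm, bio);

            /* fall back to one 512-byte request per sector */
            return submit_per_sector(tfm, bio);
    }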



Re: (none)

2016-05-31 Thread Herbert Xu
Jeffrey Walton  wrote:
> Please forgive my ignorance here...
> 
> I have test system with a VIA C7-M processor and PM-400 chipset. This
> is one of those Thin Client/Internet of Things processor and chipsets
> I test security libraries on (like OpenSSL, Cryptlib and Crypto++).
> 
> The processor includes the Padlock extensions. Padlock is similar to
> Intel's RDRAND, RDSEED and AES-NI, and it predates Intel's
> instructions by about a decade.
> 
> The Padlock Security Engine can produce a stream of random numbers at
> megabits per second, so I've been kind of surprised it has been
> suffering entropy depletion. Here's what the audit trail looks like:
> 
>Testing operating system provided blocking random number generator...
>FAILED:  it took 74 seconds to generate 5 bytes
>passed:  5 generated bytes compressed to 7 bytes by DEFLATE
> 
> Above, the blocking RNG is drained. Then, 16 bytes are requested. It
> appears to take over one minute to gather five bytes when effectively
> an endless stream is available.
> 
> My question is, is this system expected to suffer entropy depletion
> out of the box? Or are users expected to do something special so the
> system does not fail?

I don't think anybody has written either an hwrng driver or a rdrand
hook for padlock.  Patches are welcome.
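
The hw_random framework keeps that fairly small: a read callback plus a
registration call. A rough, untested skeleton (the actual PadLock xstore
read is left as a stub):

    #include <linux/module.h>
    #include <linux/hw_random.h>

    static int padlock_rng_read(struct hwrng *rng, void *buf,
                                size_t max, bool wait)
    {
            /* TODO: run the PadLock xstore instruction into buf and
             * return the number of bytes actually produced. */
            return 0;
    }

    static struct hwrng padlock_rng = {
            .name = "padlock",
            .read = padlock_rng_read,
    };

    static int __init padlock_rng_init(void)
    {
            return hwrng_register(&padlock_rng);
    }

    static void __exit padlock_rng_exit(void)
    {
            hwrng_unregister(&padlock_rng);
    }

    module_init(padlock_rng_init);
    module_exit(padlock_rng_exit);
    MODULE_LICENSE("GPL");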

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


[RFC v3 1/4] block: Introduce blk_bio_map_sg() to map one bio

2016-05-31 Thread Baolin Wang
dm-crypt needs to map one bio to a scatterlist to improve the hardware
engine's encryption efficiency. Thus this patch introduces the
blk_bio_map_sg() function to map one bio to scatterlists.

To avoid duplicating code in the __blk_bios_map_sg() function, add one
parameter to distinguish a bio map from a request map.

Signed-off-by: Baolin Wang 
---
 block/blk-merge.c  |   36 +++-
 include/linux/blkdev.h |2 ++
 2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..badae44 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -376,7 +376,7 @@ new_segment:
 
 static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
 struct scatterlist *sglist,
-struct scatterlist **sg)
+struct scatterlist **sg, bool single_bio)
 {
struct bio_vec bvec, bvprv = { NULL };
struct bvec_iter iter;
@@ -408,13 +408,39 @@ single_segment:
return 1;
}
 
-   for_each_bio(bio)
+   if (!single_bio) {
+   for_each_bio(bio)
+   bio_for_each_segment(bvec, bio, iter)
+   __blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+sg, &nsegs, &cluster);
+   } else {
bio_for_each_segment(bvec, bio, iter)
-   __blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
-&nsegs, &cluster);
+   __blk_segment_map_sg(q, &bvec, sglist, &bvprv,
+sg, &nsegs, &cluster);
+   }
+
+   return nsegs;
+}
+
+/*
+ * Map a bio to scatterlist, return number of sg entries setup. Caller must
+ * make sure sg can hold bio segments entries.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+  struct scatterlist *sglist)
+{
+   struct scatterlist *sg = NULL;
+   int nsegs = 0;
+
+   if (bio)
+   nsegs = __blk_bios_map_sg(q, bio, sglist, &sg, true);
+
+   if (sg)
+   sg_mark_end(sg);
 
return nsegs;
 }
+EXPORT_SYMBOL(blk_bio_map_sg);
 
 /*
  * map a request to scatterlist, return number of sg entries setup. Caller
@@ -427,7 +453,7 @@ int blk_rq_map_sg(struct request_queue *q, struct request 
*rq,
int nsegs = 0;
 
if (rq->bio)
-   nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
+   nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg, false);
 
if (unlikely(rq->cmd_flags & REQ_COPY_USER) &&
(blk_rq_bytes(rq) & q->dma_pad_mask)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1fd8fdf..5868062 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1013,6 +1013,8 @@ extern void blk_queue_write_cache(struct request_queue 
*q, bool enabled, bool fu
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device 
*bdev);
 
 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct 
scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+ struct scatterlist *sglist);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
 
-- 
1.7.9.5
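
Usage is expected to follow the dm-crypt patch in this series: preallocate
an sg table big enough for the bio and map it with one call. A minimal
sketch (DM_MAX_SG_LIST is the 512-entry table size used by the dm-crypt
patch; error handling abbreviated):

    /* Sketch: map all segments of one bio into a preallocated sg table. */
    static int map_one_bio(struct bio *bio, struct sg_table *sgt)
    {
            int nents;

            if (sg_alloc_table(sgt, DM_MAX_SG_LIST, GFP_KERNEL))
                    return -ENOMEM;

            nents = blk_bio_map_sg(bdev_get_queue(bio->bi_bdev),
                                   bio, sgt->sgl);
            if (nents <= 0 || nents > DM_MAX_SG_LIST) {
                    sg_free_table(sgt);
                    return -EINVAL;
            }

            return nents;
    }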



Re: (none)

2016-05-31 Thread Stephan Mueller
On Wednesday, 1 June 2016, 12:59:43, Herbert Xu wrote:

Hi Herbert,

> Jeffrey Walton  wrote:
> > Please forgive my ignorance here...
> > 
> > I have test system with a VIA C7-M processor and PM-400 chipset. This
> > is one of those Thin Client/Internet of Things processor and chipsets
> > I test security libraries on (like OpenSSL, Cryptlib and Crypto++).
> > 
> > The processor includes the Padlock extensions. Padlock is similar to
> > Intel's RDRAND, RDSEED and AES-NI, and it predates Intel's
> > instructions by about a decade.
> > 
> > The Padlock Security Engine can produce a stream of random numbers at
> > megabits per second, so I've been kind of surprised it has been
> > 
> > suffering entropy depletion. Here's what the audit trail looks like:
> >Testing operating system provided blocking random number generator...
> >FAILED:  it took 74 seconds to generate 5 bytes
> >passed:  5 generated bytes compressed to 7 bytes by DEFLATE
> > 
> > Above, the blocking RNG is drained. Then, 16 bytes are requested. It
> > appears to take over one minute to gather five bytes when effectively
> > an endless stream is available.
> > 
> > My question is, is this system expected to suffer entropy depletion
> > out of the box? Or are users expected to do something special so the
> > system does not fail?
> 
> I don't think anybody has written either an hwrng driver or a rdrand
> hook for padlock.  Patches are welcome.

I thought via-rng.c covers the VIA Padlock RNG?

Ciao
Stephan


Re: (none)

2016-05-31 Thread Herbert Xu
On Wed, Jun 01, 2016 at 07:53:38AM +0200, Stephan Mueller wrote:
>
> I thought via-rng.c covers the VIA Padlock RNG?

Indeed, you're quite right.  In that case, Jeffrey, was the via-rng
driver loaded?

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt