Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-21 Thread Dang, Quynh (Fed)
Hi Ilari,


You were right; for testing, a smaller number should be used.


Quynh.

From: ilariliusva...@welho.com on behalf of Ilari Liusvaara
Sent: Monday, November 21, 2016 3:42 PM
To: Dang, Quynh (Fed)
Cc: Martin Thomson; tls@ietf.org; c...@ietf.org
Subject: Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for 
ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

On Mon, Nov 14, 2016 at 02:54:23AM +, Dang, Quynh (Fed) wrote:
>
> Rekeying more often than needed would just create more room for
> issues for the connection/session without gaining any additional
> practical security at all.

With regard to rekeying frequency, I'm concerned about testability:
if rekeying is too rare, it is pretty much as good as nonexistent.

This is the reason why I set the rekey limit to 2M(!) records in
btls (with the first rekey at 1k(!) records). These limits have absolutely
nothing to do with any sort of cryptographic reasoning[1][2].

[1] If they did, then the ChaCha rekey limit would be when RSN exhaustion
is imminent (since RSNs can't wrap, but can be reset).

[2] The 2M limit is chosen so that it is reached in ~1 minute in fast
transfer tests.


-Ilari
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-21 Thread Ilari Liusvaara
On Mon, Nov 14, 2016 at 02:54:23AM +, Dang, Quynh (Fed) wrote:
> 
> Rekeying more often than needed would just create more room for
> issues for the connection/session without gaining any additional
> practical security at all.

With regard to rekeying frequency, I'm concerned about testability:
if rekeying is too rare, it is pretty much as good as nonexistent.

This is the reason why I set the rekey limit to 2M(!) records in
btls (with the first rekey at 1k(!) records). These limits have absolutely
nothing to do with any sort of cryptographic reasoning[1][2].

[1] If they did, then the ChaCha rekey limit would be when RSN exhaustion
is imminent (since RSNs can't wrap, but can be reset).

[2] The 2M limit is chosen so that it is reached in ~1 minute in fast
transfer tests.
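
For a rough sense of why a 2M-record limit lands at about a minute, here is a
back-of-the-envelope sketch in Python. The full-size 16 KiB records, the
reading of "2M" as 2^21, and the multi-gigabit test throughput are all
assumptions of the sketch, not figures from btls itself.

# Back-of-the-envelope: time to hit a rekey limit of ~2M records,
# assuming every record carries a full 16 KiB payload (an assumption;
# real traffic mixes record sizes, which only lengthens the time).

REKEY_LIMIT_RECORDS = 2 * 2**20   # reading "2M" as 2^21 records
RECORD_SIZE_BYTES = 16 * 1024     # full-size TLS record payload

def seconds_to_rekey(throughput_gbit_s: float) -> float:
    bytes_per_second = throughput_gbit_s * 1e9 / 8
    records_per_second = bytes_per_second / RECORD_SIZE_BYTES
    return REKEY_LIMIT_RECORDS / records_per_second

for gbps in (1, 5, 10):
    print(f"{gbps:2d} Gbit/s -> rekey after ~{seconds_to_rekey(gbps):.0f} s")

# ~5 Gbit/s gives roughly 55 s, i.e. the limit is reached in about a
# minute in a fast transfer test; the early first rekey at 1k records
# exercises the rekey path almost immediately.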


-Ilari

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-13 Thread Dang, Quynh (Fed)
Hi Martin,


"very conservative" ? No. 1/2^32 and 1/2^57 are practically the same; they are 
both practically zero.


By your argument, if somebody wants to be more "conservative" and uses the
margin of 1/2^75 instead, then he/she would need to stop using GCM altogether.
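
To put numbers behind this, a minimal Python sketch of how the per-key GCM
data limit moves as the acceptable collision margin shrinks. The simple
block-level birthday approximation p ~= q^2 / 2^128 is an assumption of the
sketch (the draft's own analysis counts records, so its numbers differ
slightly):

for margin in (32, 57, 75):
    q_log2 = (128 - margin) / 2           # solve q^2 / 2^128 <= 2^-margin
    gigabytes = 2 ** (q_log2 + 4) / 1e9   # 16 bytes per 128-bit block
    print(f"p <= 2^-{margin}: 2^{q_log2:g} blocks ~= {gigabytes:,.1f} GB per key")

# p <= 2^-75 leaves roughly 1.5 GB per key, which is the point above:
# a margin that tight would effectively rule GCM out.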


Rekeying more often than needed would just create more room for issues for the
connection/session without gaining any additional practical security at all.


Quynh.

From: Martin Thomson 
Sent: Sunday, November 13, 2016 6:54 PM
To: Dang, Quynh (Fed)
Cc: e...@rtfm.com; tls@ietf.org; c...@ietf.org
Subject: Re: [Cfrg] Data limit to achieve Indifferentiability for ciphertext 
with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

These are intentionally very conservative.  Having implemented this, I
find it OK.  The text cites its sources.  Reading those sources
corrects any misapprehension.

The key point here is that we want to ensure that the initial - maybe
uninformed - inferences are the safe ones.  We don't want to be in the
situation where looser text leads to mistakes.

For instance, someone could deploy code that assumes a certain
"average" record size based on a particular deployment and hence a
larger limit.  If the deployment characteristics change without the
code changing we potentially have an issue.

You really need to demonstrate that there is harm with the current
text.  If rekeying happens on that timescale (which is still very
large), that's not harmful.  I'm concerned that we aren't going to
rekey often enough.  I don't agree that it will create any negative
perception of GCM.

On 14 November 2016 at 05:48, Dang, Quynh (Fed)  wrote:
> Hi Eric and all,
>
>
> Regardless of the actual record size, each 128-bit block encryption is
> performed with a unique 128-bit counter block (called CB in NIST SP
> 800-38D), formed from the 96-bit IV and a 32-bit block counter, under a
> given key, as long as the number of encrypted records is not more than
> 2^64.
>
> Assuming a user would like to limit the probability of a collision among
> 128-bit ciphertext blocks to under 1/2^32, the data limit for the
> ciphertext (or plaintext) is 2^(96/2) (= 2^48) 128-bit blocks, which is
> 2^52 bytes.
>
> Reading the 2nd paragraph of Section 5.5, a user might feel that he/she
> needs to rekey a lot sooner than he/she needs to. Putting an
> unnecessarily low data limit of 2^24.5 full-size records (2^38.5 bytes)
> also creates an incorrect negative impression (in my opinion) about GCM.
>
> I would like to request the working group to consider revising the text.
>
> Regards,
> Quynh.
>
>
> ___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>
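
As a sanity check on the arithmetic in the quoted message, a small Python
sketch. The counter-block layout is the one from NIST SP 800-38D; the
q^2 / 2^128 birthday approximation and the block-level accounting are
assumptions of the sketch:

def counter_block(iv_96bit: bytes, block_counter: int) -> bytes:
    """96-bit IV || 32-bit block counter = the 128-bit CB of SP 800-38D."""
    assert len(iv_96bit) == 12 and 0 <= block_counter < 2**32
    return iv_96bit + block_counter.to_bytes(4, "big")

# Pr[collision among q uniform 128-bit blocks] ~= q^2 / 2^128 (birthday
# approximation).  Requiring q^2 / 2^128 <= 2^-32 gives q = 2^48 blocks.
q_log2 = (128 - 32) / 2          # 48
bytes_log2 = q_log2 + 4          # 16 bytes per block -> 2^52 bytes
print(f"limit: 2^{q_log2:g} blocks = 2^{bytes_log2:g} bytes")

# The draft's Section 5.5 limit of 2^24.5 full-size (2^14-byte) records
# corresponds to a much smaller per-key budget:
print(f"Section 5.5 limit: 2^{24.5 + 14:g} bytes")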
___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls


Re: [TLS] [Cfrg] Data limit to achieve Indifferentiability for ciphertext with TLS 1.3 GCM, and the 2nd paragraph of Section 5.5

2016-11-13 Thread Martin Thomson
These are intentionally very conservative.  Having implemented this, I
find it OK.  The text cites its sources.  Reading those sources
corrects any misapprehension.

The key point here is that we want to ensure that the initial - maybe
uninformed - inferences are the safe ones.  We don't want to be in the
situation where looser text leads to mistakes.

For instance, someone could deploy code that assumes a certain
"average" record size based on a particular deployment and hence a
larger limit.  If the deployment characteristics change without the
code changing we potentially have an issue.
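
The safe inference here is a limit denominated in records, with every record
charged at the worst-case size; a change in deployment traffic then cannot
weaken the bound. A minimal sketch of that pattern (the names and the
specific limit are illustrative, not from any real TLS stack):

# Count records, not observed bytes: a 1-byte record spends the same
# per-key budget as a full 2^14-byte one, so the check stays valid even
# if record sizes change underneath the deployment.

REKEY_LIMIT_RECORDS = 2 ** 24  # illustrative; derive from the draft's analysis

class RecordBudget:
    def __init__(self) -> None:
        self.records_sent = 0

    def on_record_sent(self, payload_len: int) -> bool:
        """Return True when it is time to trigger a KeyUpdate."""
        self.records_sent += 1
        # payload_len is deliberately unused in the decision.
        return self.records_sent >= REKEY_LIMIT_RECORDS

By contrast, a byte-based limit sized from an assumed "average" record length
inherits exactly the fragility described above.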

You really need to demonstrate that there is harm with the current
text.  If rekeying happens on that timescale (which is still very
large), that's not harmful.  I'm concerned that we aren't going to
rekey often enough.  I don't agree that it will create any negative
perception of GCM.

On 14 November 2016 at 05:48, Dang, Quynh (Fed)  wrote:
> Hi Eric and all,
>
>
> Regardless of the actual record size, each 128-bit block encryption is
> performed with a unique 128-bit counter block (called CB in NIST SP
> 800-38D), formed from the 96-bit IV and a 32-bit block counter, under a
> given key, as long as the number of encrypted records is not more than
> 2^64.
>
> Assuming a user would like to limit the probability of a collision among
> 128-bit ciphertext blocks to under 1/2^32, the data limit for the
> ciphertext (or plaintext) is 2^(96/2) (= 2^48) 128-bit blocks, which is
> 2^52 bytes.
>
> Reading the 2nd paragraph of Section 5.5, a user might feel that he/she
> needs to rekey a lot sooner than he/she needs to. Putting an
> unnecessarily low data limit of 2^24.5 full-size records (2^38.5 bytes)
> also creates an incorrect negative impression (in my opinion) about GCM.
>
> I would like to request the working group to consider revising the text.
>
> Regards,
> Quynh.
>
>
> ___
> Cfrg mailing list
> c...@irtf.org
> https://www.irtf.org/mailman/listinfo/cfrg
>

___
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls