[ 
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233337#comment-14233337
 ] 

Mike Yoder commented on HADOOP-11343:
-------------------------------------

{quote}
As described above, the combination of the current calculateIV and the 
Cipher's own counter increment will cause problems if the two are not consistent.
{quote}
Yeah, you're right.  This is a good catch.  Let me see if I can state this 
problem differently.

If the underlying Java (or OpenSSL) CTR counter increment differs from 
calculateIV, there is a problem.  For example:
- assume an initial IV of 00 00 00 00 00 00 00 00 ff ff ff ff ff ff ff ff 
- the file is 32 bytes
- File A is written, all 32 bytes at once (one call to calculateIV with a 
counter of 0; the cipher increments the counter internally for the second 
block)
- File B is written, the first 16 bytes and then the second 16 bytes (two 
calls to calculateIV, with counters of 0 and 1)
- Then the last 16 bytes of files A and B will be different; the sketch below 
walks through the arithmetic
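
Here's a self-contained sketch of that divergence (hypothetical demo code, not 
the Hadoop codec; buggyCalculateIV just mirrors the snippet quoted below).  It 
compares the buggy 64-bit addition against the 128-bit big-endian increment 
that a standard CTR cipher performs between blocks:

{code:java}
import java.util.Arrays;

public class IvOverflowDemo {
  // Mirrors the buggy calculateIV quoted below: the counter is added into
  // the low 8 bytes only, and any carry out of that 64-bit space is lost.
  static byte[] buggyCalculateIV(byte[] initIV, long counter) {
    byte[] iv = initIV.clone();
    long l = 0;
    for (int i = 0; i < 8; i++) {
      l = (l << 8) | (iv[8 + i] & 0xff);
    }
    l += counter;  // overflow carry silently dropped here
    for (int i = 0; i < 8; i++) {
      iv[8 + i] = (byte) (l >>> (56 - 8 * i));
    }
    return iv;
  }

  // What a standard AES-CTR implementation does between blocks: increment
  // the whole 16-byte IV as a single 128-bit big-endian integer.
  static byte[] increment128(byte[] iv) {
    byte[] out = iv.clone();
    for (int i = 15; i >= 0; i--) {
      if (++out[i] != 0) {
        break;  // stop once the carry no longer propagates
      }
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] initIV = new byte[16];
    Arrays.fill(initIV, 8, 16, (byte) 0xff);  // 00..00 ff..ff, as above

    // File A, block 1: the cipher carries into the upper 8 bytes.
    byte[] fileA = increment128(buggyCalculateIV(initIV, 0));
    // File B, block 1: a fresh calculateIV call wraps to all zeros.
    byte[] fileB = buggyCalculateIV(initIV, 1);

    System.out.println(Arrays.equals(fileA, fileB));  // prints false
  }
}
{code}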

This actually isn't a problem *if* the files are read back _exactly_ as they 
are written.  But if you try to read file A in two steps, or read file B in 
one step, the second block will look corrupted.  It seems possible to 
construct a test case for this; a sketch follows.
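
For instance (a hedged sketch using plain javax.crypto rather than the Hadoop 
CryptoCodec; the all-zero key and data are illustrative): encrypt 32 bytes in 
one call, then decrypt the second block the way a reader seeking to offset 16 
would, with the IV the buggy calculateIV(initIV, 1) produces:

{code:java}
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class OverflowReadbackDemo {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16];    // all-zero key, demo only
    byte[] initIV = new byte[16];
    Arrays.fill(initIV, 8, 16, (byte) 0xff);  // counter half at ff..ff
    byte[] plain = new byte[32];  // two AES blocks of zeros

    // Write path (file A): one call, so the cipher itself performs a
    // correct 128-bit counter increment between block 0 and block 1.
    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(initIV));
    byte[] cipherText = enc.doFinal(plain);

    // Read path: seek to byte 16, i.e. re-init with calculateIV(initIV, 1).
    // The buggy version wraps ff..ff + 1 to 00..00 and drops the carry, so
    // with an all-zero upper half the whole IV comes out as zeros.
    byte[] buggyIV = new byte[16];
    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(buggyIV));
    byte[] block1 = dec.doFinal(cipherText, 16, 16);

    // The second block does not decrypt back to zeros: it looks corrupted.
    System.out.println(Arrays.equals(block1, new byte[16]));  // prints false
  }
}
{code}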

The code in the patch looks reasonable, although I haven't sat down with paper 
and pencil to work through the math.  The test cases are convincing.  Have you 
tested with both the OpenSSL and Java crypto implementations?

I do believe that you still need to provide an upgrade path.  This means 
defining a new crypto SUITE and making it the default.  Existing files will 
keep the old SUITE; the upgrade path is simply to copy all the files in an 
encryption zone (EZ): the copies will be written with the new SUITE and 
everything will work out (rough sketch below).
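
Something like this per file (a hedged sketch; the paths are hypothetical, 
and it leans on the point above that a newly written file picks up the new 
default SUITE):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class EzRewrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical paths inside an encryption zone.
    Path src = new Path("/ez/data/file");
    Path tmp = new Path("/ez/data/file.rewrite");

    // Copying rewrites the bytes, so the copy is encrypted with whatever
    // SUITE is current; then swap it into place.
    FileUtil.copy(fs, src, fs, tmp, false, conf);
    fs.delete(src, false);
    fs.rename(tmp, src);
  }
}
{code}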

> Overflow is not properly handled in calculating final iv for AES CTR
> --------------------------------------------------------------------
>
>                 Key: HADOOP-11343
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11343
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: trunk-win, 2.7.0
>            Reporter: Jerry Chen
>            Assignee: Jerry Chen
>            Priority: Blocker
>         Attachments: HADOOP-11343.patch
>
>
> In AesCtrCryptoCodec.calculateIV, the initial IV is a randomly generated 16 
> bytes:
>
>   final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
>   cc.generateSecureRandom(iv);
>
> The subsequent calculation of the IV from the counter is done in an 8-byte 
> (64-bit) space, which can easily overflow, and the overflow (carry) gets 
> lost.  The result is that a 128-bit data block is encrypted with the wrong 
> counter and cannot be decrypted by standard AES-CTR:
> /**
>    * The IV is produced by adding the initial IV to the counter. IV length 
>    * should be the same as {@link #AES_BLOCK_SIZE}
>    */
>   @Override
>   public void calculateIV(byte[] initIV, long counter, byte[] IV) {
>     Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
>     Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
>     
>     System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
>     long l = 0;
>     for (int i = 0; i < 8; i++) {
>       l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
>     }
>     l += counter;  // any carry out of this 64-bit addition is silently dropped
>     IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
>     IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
>     IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
>     IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
>     IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
>     IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
>     IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
>     IV[CTR_OFFSET + 7] = (byte) (l);
>   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
