Hi. It depends on your use case. Personally I would split the data into 
manageable blocks of a certain size, fixed or variable length. The advantage 
is that you can randomly access data on a per-block basis, decrypting and 
verifying one block at a time, as opposed to having to decrypt the whole 
file in one go just to read a subsection of the data.

Obviously you will need either a header on each block (if using variable 
length) or padding on the last block (if using fixed-length blocks).

Regarding the IV: you will need a distinct IV for each block. You could 
generate a 'base IV' for the file and place a block counter (0 to n, 
matching the block number) in the low-order bits of the IV. That way you 
only store one base IV for the file and compute the counter for whichever 
block is being read. You also know that each block's IV is unique (assuming 
you have allocated enough bits to the counter field in the IV).



On Tuesday, 12 August 2025 at 22:07:13 UTC+1 Lana Deere wrote:

> On Tuesday, August 12, 2025 at 8:10:03 AM UTC-4 Jeffrey Walton wrote:
>
> GCM plaintext maximum length is specified in bits, not bytes. See 
> SP800-38D, Section 5.2.1.1 Input Data, p. 8, <
> https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-38d.pdf>.
>  
> That leads to:
>
>     2^39 - 256 = 549755813632
>     549755813632 / 8 = 68719476704
>
>
> Is there a standard practice for handling AES encryption of large files?  
> E.g., create a new IV and resume encryption?  Use something other than GCM 
> which has a higher limit?
>
> Thanks!
>  
>

-- 
You received this message because you are subscribed to the Google Groups 
"Crypto++ Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion visit 
https://groups.google.com/d/msgid/cryptopp-users/fddfc356-b2ba-4116-b696-b85edb819ee5n%40googlegroups.com.
