On 04/26/2017 05:47 PM, Eric Blake wrote:
> On 04/26/2017 04:41 PM, John Snow wrote:
>>
>>
>> On 04/17/2017 09:33 PM, Eric Blake wrote:
>>> In the process of converting sector-based interfaces to bytes,
>>> I'm finding it easier to represent a byte count as a 64-bit
>>> integer at the block layer (even if we are internally capped
>>> by SIZE_MAX or even INT_MAX for individual transactions, it's
>>> still nicer to not have to worry about truncation/overflow
>>> issues on as many variables). Update the signature of
>>> bdrv_round_to_clusters() to uniformly use uint64_t, matching
>>                                            ^^^^^^^^
>>
>> While we're here, since you went with int64_t in the end, what steered
>> you away from uint64_t, or was that just a thinko?
>
> Later patches were made easier with signed (the compiler complained when
> I mixed signed and unsigned pointers).
>
>>
>> (AFAICT: off_t is usually something like int64_t, so your choice makes
>> sense to me, generally.)
>
> Indeed, and that's something I should update my commit message to mention.
>
>>
>> --js
>>
>>> the signature already chosen for bdrv_is_allocated, and
>>> adjust clients according to the required fallout.
>
> If you want me to try and use uint64_t *pnum instead of int64_t *pnum
> throughout both my series 1 (the changes to bdrv_is_allocated) and this
> one, it will take more effort. I'll do it if there's a reason, but I'd
> rather not if the signed version is good enough.
>
No, I didn't mean to imply you should; I was just pointing out the commit
message typo. int64_t is likely the correct choice for a number of reasons,
the chief one being that functions returning a byte offset can still return
-1 on error.

--js
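
P.S. For anyone skimming the archive later, here is a minimal sketch of the
convention being discussed (the helper below is made up for illustration, not
the actual bdrv_round_to_clusters() prototype): a signed int64_t return lets
a single value carry either a byte offset or a negative errno, which an
unsigned return could not express.

#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: round a byte offset down to a cluster boundary,
 * or return -EINVAL on bad arguments (negative offset, or a cluster
 * size that is not a power of two). */
static int64_t round_down_to_cluster(int64_t offset, int64_t cluster_size)
{
    if (offset < 0 || cluster_size <= 0 ||
        (cluster_size & (cluster_size - 1))) {
        return -EINVAL; /* impossible to signal with an unsigned return */
    }
    return offset & ~(cluster_size - 1);
}

int main(void)
{
    int64_t aligned = round_down_to_cluster(70000, 65536);

    if (aligned < 0) {
        fprintf(stderr, "error: %s\n", strerror((int)-aligned));
        return 1;
    }
    printf("aligned offset: %" PRId64 "\n", aligned); /* prints 65536 */
    return 0;
}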