On Thu, Mar 07, 2013 at 10:23:54AM -0500, Jeff Cody wrote:
> On Thu, Mar 07, 2013 at 03:30:44PM +0100, Stefan Hajnoczi wrote:
> > On Wed, Mar 06, 2013 at 09:48:11AM -0500, Jeff Cody wrote:
> > > +    ret = bdrv_pread(bs->file, s->bat_offset, s->bat, s->bat_rt.length);
> > > +
> > > +    for (i = 0; i < s->bat_entries; i++) {
> > > +        le64_to_cpus(&s->bat[i]);
> > > +    }
> >
> > How does BAT size scale when the image size is increased?  QCOW2 and
> > QED use caches for metadata that would be too large or wasteful to
> > keep in memory.
>
> The BAT size depends on the virtual disk size and the block size.
> The block size is allowed to range from 1MB to 256MB, and there is
> one BAT entry per block.
>
> In practice, the large block size keeps the BAT entry table reasonable
> (for a 127GB file, the block size created by Hyper-V is 32MB, so the
> table is pretty small - 32KB).
>
> However, I don't see anything in the spec that forces the block size
> to be larger for a large virtual disk size.  So for the max size of
> 64TB, and the smallest block size of 1MB, keeping the BAT in memory
> would indeed be excessive.
>
> I'll re-read the spec and see if there is anything that ties the
> block size and virtual size together.  If not, I'll have to add
> caching.
BTW the qcow2 cache code can be reused.

Stefan