On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
This patch series provides VNVRAM persistent storage support that
QEMU can use internally. The initial target user will be a software
vTPM 1.2 backend that needs to store keys in VNVRAM and retain them
across reboot and migration.
This support uses QEMU's block layer to provide persistent storage
by reading and writing VNVRAM data from/to a drive image. The VNVRAM
drive image is provided with the -drive command line option just like
any other drive image, and the vnvram_create() API looks it up by its
drive ID.
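For example, the drive could be supplied with something like this (an
illustrative command line, not taken from the series; the drive ID
matches the one used in the code below):

qemu-system-x86_64 ... \
    -drive file=vnvram.img,if=none,id=drive-ide0-0-0 \
    -device ide-hd,drive=drive-ide0-0-0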
The APIs allow VNVRAM entries to be registered, one at a time, each
with a maximum blob size. Blobs can then be read from or written to
an entry on the drive. Here's an example of usage:
VNVRAM *vnvram;
int errcode;
const VNVRAMEntryName entry_name;
const char *blob_w = "blob data";
char *blob_r;
uint32_t blob_r_size;
vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
strcpy((char *)entry_name, "first-entry");
VNVRAMEntryName is very prone to buffer overflow. I hope real code
doesn't use strcpy(). The cast is ugly; please don't hide the type.
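A bounded copy would be safer, for example something like this (a
sketch, assuming VNVRAMEntryName is a fixed-size char array and the
variable is not declared const; pstrcpy() is QEMU's existing
length-limited string copy helper):

VNVRAMEntryName entry_name;

/* Copy at most sizeof(entry_name) bytes, always NUL-terminated */
pstrcpy(entry_name, sizeof(entry_name), "first-entry");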
vnvram_register_entry(vnvram, &entry_name, 1024);
vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
These are synchronous functions. If I/O is involved then this is a
problem: QEMU will block waiting for host I/O to complete while the
big QEMU lock is held. This can cause poor guest interactivity and
poor scalability because vcpus cannot make progress, and neither can
the QEMU monitor respond.
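An interface in the style of the existing bdrv_aio_*() calls would
avoid blocking; roughly something like this (a sketch only, these
names are illustrative and not part of the posted series):

/* Completion callback, invoked later from the main loop with
 * ret == 0 on success or a negative errno on failure. */
typedef void VNVRAMCompletionFunc(void *opaque, int ret);

/* Submit the request and return immediately instead of waiting for
 * host I/O under the global lock.  For the read, *blob_r and
 * *blob_r_size are only valid once cb has run with ret == 0. */
int vnvram_read_entry_aio(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
                          char **blob_r, uint32_t *blob_r_size,
                          VNVRAMCompletionFunc *cb, void *opaque);
int vnvram_write_entry_aio(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
                           const char *blob_w, uint32_t blob_size,
                           VNVRAMCompletionFunc *cb, void *opaque);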
The vTPM is going to run as a thread and will have to write state blobs
into a bdrv. The above functions will typically be called from that
thread. When I originally wrote the code, the vTPM thread could not write
the blobs into the bdrv directly, so I had to resort to sending a message
to the main QEMU thread, which then wrote the data to the bdrv. How else
could we do this?
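For reference, the mechanism was roughly along the following lines (a
simplified sketch, not the actual code; mutex/cond initialization, BH
cleanup and error handling are omitted, and it assumes
vnvram_write_entry() returns an error code):

#include "qemu/thread.h"     /* QemuMutex, QemuCond */
#include "qemu/main-loop.h"  /* QEMUBH, qemu_bh_new(), qemu_bh_schedule() */

/* Request handed from the vTPM thread to the main QEMU thread. */
typedef struct VNVRAMWriteRequest {
    VNVRAM *vnvram;
    const VNVRAMEntryName *entry_name;
    char *blob;
    uint32_t blob_size;
    int ret;
    bool done;
    QemuMutex lock;
    QemuCond cond;
} VNVRAMWriteRequest;

/* Bottom half: runs in the main loop, performs the bdrv write there
 * and then wakes up the waiting vTPM thread. */
static void vnvram_write_bh(void *opaque)
{
    VNVRAMWriteRequest *req = opaque;

    req->ret = vnvram_write_entry(req->vnvram, req->entry_name,
                                  req->blob, req->blob_size);
    qemu_mutex_lock(&req->lock);
    req->done = true;
    qemu_cond_signal(&req->cond);
    qemu_mutex_unlock(&req->lock);
}

/* Called from the vTPM thread: hand the request to the main loop via a
 * bottom half (qemu_bh_schedule() may be called from outside the main
 * thread) and block on a condition variable until it has finished. */
static int vtpm_thread_write_entry(VNVRAMWriteRequest *req)
{
    QEMUBH *bh = qemu_bh_new(vnvram_write_bh, req);

    qemu_bh_schedule(bh);
    qemu_mutex_lock(&req->lock);
    while (!req->done) {
        qemu_cond_wait(&req->cond, &req->lock);
    }
    qemu_mutex_unlock(&req->lock);
    return req->ret;
}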
Stefan