On 18/03/2013 21:33, Michael R. Hines wrote:
>>> +int qemu_drain(QEMUFile *f)
>>> +{
>>> +    return f->ops->drain ? f->ops->drain(f->opaque) : 0;
>>> +}
>>
>> Hmm, this is very similar to qemu_fflush, but not quite. :/
>>
>> Why exactly is this needed?
>
> Good idea - I'll replace drain with flush once I add
> the "qemu_file_ops_are(const QEMUFile *, const QEMUFileOps *)"
> that you recommended......
If I understand correctly, the problem is that save_rdma_page is
asynchronous and you have to wait for pending operations to do the
put_buffer protocol correctly.

Would it work to just do the "drain" in the put_buffer operation, if
and only if it was preceded by a save_rdma_page operation?

>>> /** Flushes QEMUFile buffer
>>>  *
>>>  */
>>> @@ -723,6 +867,8 @@ int qemu_get_byte(QEMUFile *f)
>>>  int64_t qemu_ftell(QEMUFile *f)
>>>  {
>>>      qemu_fflush(f);
>>> +    if(migrate_use_rdma(f))
>>> +        return delta_norm_mig_bytes_transferred();
>>
>> Not needed, and another undesirable dependency (savevm.c ->
>> arch_init.c). Just update f->pos in save_rdma_page.
>
> f->pos isn't good enough because save_rdma_page does not
> go through QEMUFile directly - only non-live state goes
> through QEMUFile....... pc.ram uses direct RDMA writes.
>
> As a result, the position pointer does not get updated
> and the accounting is missed........

Yes, I am suggesting to modify f->pos in save_rdma_page instead.

Paolo