On 06/27/2012 10:56 PM, Eric Blake wrote:
> On 06/27/2012 04:34 AM, Orit Wasserman wrote:
>> In the outgoing migration check to see if the page is cached and
>> changed than send compressed page by using save_xbrle_page function.
>> In the incoming migration check to see if RAM_SAVE_FLAG_XBRLE is set
>> and decompress the page (by using load_xbrle function).
>
> Mismatch between commit comment and actual code (XBRLE vs. XBZRLE).
>
>> +
>> +    /* Stage 1 cache the page and exit.
>> +       Stage 2 check to see if page is cached , if not cache the page.
>
> s/cached ,/cached,/
>
>> +    if (encoded_len == 0) {
>> +        DPRINTF("Unmodifed page skipping\n");
>
> spelling and grammar:
> s/Unmodifed page skipping/Skipping unmodified page/
>
>>     if (!block)
>>         block = QLIST_FIRST(&ram_list.blocks);
>>
>> +
>> +
>>     do {
>
> Why the spurious whitespace addition?
>
>> +    } else if (migrate_use_xbzrle() && stage != 3) {
>> +        current_addr = block->offset + offset;
>> +        /* In stage 1 we only cache the pages before sending them
>> +           from the cache (uncompressed).
>> +           We doen't use compression for stage 3.
>
> s/doen't/don't/
>
>> +
>> +    /* either we didn't send yet (we may got XBZRLE overflow) */
>
> s/may got/may have had/
>
>>
>> -            break;
>> +        /* if page is unmodified lets continue to the next */
>
> s/lets/let's/
>
> or even shorter:
>
> s/ lets/,/
>
>> @@ -331,6 +440,17 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
>>      last_offset = 0;
>>      sort_ram_list();
>>
>> +    if (migrate_use_xbzrle()) {
>> +        XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
>> +                                  TARGET_PAGE_SIZE,
>> +                                  TARGET_PAGE_SIZE);
>> +        if (!XBZRLE.cache) {
>> +            DPRINTF("Error creating cache\n");
>> +            return -1;
>> +        }
>> +        XBZRLE.encoded_buf = g_malloc0(TARGET_PAGE_SIZE);
>
> This guarantees long alignment (good, per my proposed g_assert in
> 10/13), but not page alignment.  Do we get any benefits by using a
> different allocation to guarantee page alignment?  (Off-hand, I'm not
> thinking of any.)
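For reference, if page alignment of the encoded buffer ever turned out to matter, the g_malloc0() call could be swapped for a posix_memalign()-based allocation. A minimal sketch (alloc_page_aligned is a hypothetical helper, not part of the patch; sysconf/posix_memalign are standard POSIX):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical replacement for g_malloc0(TARGET_PAGE_SIZE): returns a
 * zeroed buffer aligned to the host page size, or NULL on failure. */
static void *alloc_page_aligned(size_t size)
{
    void *buf = NULL;
    size_t align = (size_t)sysconf(_SC_PAGESIZE);

    if (posix_memalign(&buf, align, size) != 0) {
        return NULL;
    }
    memset(buf, 0, size);
    return buf;
}
```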
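For readers following the thread, the save path the commit message describes (stage 1 caches the page; stage 2 diffs it against the cached copy and sends only the changes; encoded_len == 0 means the page is unmodified and can be skipped) boils down to something like the toy encoder below. This is a simplified illustration of the idea, not qemu's actual xbzrle code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy XBZRLE-style encoder: diff the current page against the cached
 * copy and emit (skip, run, data) records covering only the modified
 * bytes.  Returns the encoded length, 0 for an unmodified page, or -1
 * on overflow (caller then sends the page raw).  For brevity this
 * assumes every skip/run length fits in one byte. */
static int delta_encode(const uint8_t *cached, const uint8_t *cur,
                        size_t len, uint8_t *dst, size_t dstlen)
{
    size_t i = 0, out = 0;

    while (i < len) {
        size_t skip = 0, run = 0;

        /* count unchanged bytes to skip over */
        while (i + skip < len && cached[i + skip] == cur[i + skip]) {
            skip++;
        }
        i += skip;
        /* count the run of modified bytes that follows */
        while (i + run < len && cached[i + run] != cur[i + run]) {
            run++;
        }
        if (run == 0) {
            break;              /* rest of the page is unmodified */
        }
        if (skip > 255 || run > 255 || out + 2 + run > dstlen) {
            return -1;          /* overflow: fall back to a raw page */
        }
        dst[out++] = (uint8_t)skip;
        dst[out++] = (uint8_t)run;
        memcpy(dst + out, cur + i, run);
        out += run;
        i += run;
    }
    return (int)out;
}
```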
>
>> @@ -389,7 +509,7 @@ int ram_save_live(QEMUFile *f, int stage, void *opaque)
>>          while ((bytes_sent = ram_save_block(f, stage)) != -1) {
>>              bytes_transferred += bytes_sent;
>>          }
>> -        memory_global_dirty_log_stop();
>> +        migration_end();
>>     }
>>
>>     qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>
> This hunk should have been squashed into 9/13.
>
>> @@ -512,6 +675,16 @@ int ram_load(QEMUFile *f, void *opaque, int version_id)
>>              host = host_from_stream_offset(f, addr, flags);
>>
>>              qemu_get_buffer(f, host, TARGET_PAGE_SIZE);
>> +        } else if (flags & RAM_SAVE_FLAG_XBZRLE) {
>
> We should probably be erroring out if we detect this incoming flag but
> the management app explicitly disabled the xbzrle migration capability,
> since that represents a case of user error, and failing the migration is
> better than proceeding to decode things anyway.

Correct, I will fix it.
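Something along these lines (a sketch only: the stubbed migrate_use_xbzrle() stands in for the real capability query, and the flag value mirrors the patch but is hard-coded here so the snippet is self-contained):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RAM_SAVE_FLAG_XBZRLE 0x40

/* Stand-in for the real capability query, which reads the migration
 * capabilities set by the management app. */
static bool xbzrle_capability_enabled;

static bool migrate_use_xbzrle(void)
{
    return xbzrle_capability_enabled;
}

/* Fail the migration when an XBZRLE page arrives but the capability
 * was explicitly disabled, instead of decoding it anyway. */
static int check_incoming_xbzrle(uint64_t flags)
{
    if ((flags & RAM_SAVE_FLAG_XBZRLE) && !migrate_use_xbzrle()) {
        fprintf(stderr, "Received XBZRLE page but XBZRLE is disabled\n");
        return -EINVAL;
    }
    return 0;
}
```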
Orit