Pavel Fedin <p.fe...@samsung.com> wrote:
> Hello!
>
>> Power people have a similar problem with their hashed page tables; they
>> integrated their own save_live implementation because they are too big
>> for the last stage. You can look there for inspiration.
>
> I examined their code. Interesting, and, indeed, it opens up a way
> to decrease downtime by implementing iterative migration for the ITS.
> However, this is not really what is necessary. That mechanism aims to
> produce its own data chunk, and that is not a good fit for the ITS. The ITS
> already stores everything in system RAM, so the savevm_ram_handlers take
> perfect care of that data. The only thing left to do is to tell the ITS to
> dump its state into RAM. This is what I currently do using
> migration_in_completion().
> An alternative, perhaps better, approach would be to be able to hook into
> ram_save_iterate() and ram_save_complete(). That way we could kick the ITS
> right before attempting to migrate RAM.
> Could we extend the infrastructure so that:
> a) handlers are prioritized, and we can determine the order of their execution?
> b) we can choose whether our handlers actually produce an extra chunk or not?
>
> OTOH, what I've done is actually a way to hook into save_live_complete
> before any other registered handlers get executed. What is missing is one
> more notifier_list_notify() call right before qemu_savevm_state_iterate(),
> and a corresponding migration_is_active() checker.
>
> What do you think?
OK, your problem here is that you modify RAM. Could you take a look at how
vhost manages this? It is done at migration_bitmap_sync(), and it just marks
the pages that are dirty. The "just" part is the interesting bit. It uses
the memory_region operations.

Michael, do you understand that code better than me? Could you give an
introduction and say whether it would work for the ITS?

Thanks.

Later, Juan.
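For reference, below is a rough, hypothetical sketch of the vhost-style approach being suggested: a device that writes guest RAM behind the guest's back registers a MemoryListener whose .log_sync callback marks the touched pages dirty, so that migration_bitmap_sync() picks them up and the normal RAM migration path re-sends them. This is not the actual ITS or vhost code; ITSMigState, the its_* names, and the fixed guest-physical window are invented for illustration. Only MemoryListener, memory_listener_register(), MemoryRegionSection, and memory_region_set_dirty() are real QEMU interfaces, and the structure is assumed to be zero-allocated (e.g. with g_new0()).

#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct ITSMigState {
    MemoryListener listener;
    hwaddr its_state_base;   /* hypothetical: guest-physical base of ITS tables */
    hwaddr its_state_size;   /* hypothetical: size of the region the ITS writes */
} ITSMigState;

/* Invoked on each RAM section when the dirty log is synced, i.e. from
 * migration_bitmap_sync() during migration. */
static void its_log_sync(MemoryListener *listener,
                         MemoryRegionSection *section)
{
    ITSMigState *s = container_of(listener, ITSMigState, listener);
    hwaddr start = section->offset_within_address_space;
    hwaddr size = int128_get64(section->size);

    /* If the ITS tables fall inside this section, mark them dirty so the
     * RAM migration code re-transfers them on the next pass. */
    if (s->its_state_base >= start &&
        s->its_state_base + s->its_state_size <= start + size) {
        memory_region_set_dirty(section->mr,
                                section->offset_within_region +
                                (s->its_state_base - start),
                                s->its_state_size);
    }
}

static void its_migration_init(ITSMigState *s)
{
    s->listener.log_sync = its_log_sync;
    memory_listener_register(&s->listener, &address_space_memory);
}

The point of doing it this way is that the device never produces its own vmstate chunk: it only tells the dirty bitmap which RAM it has touched, and the savevm_ram_handlers do the rest.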