Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 23/01/2017 22:32, Juan Quintela wrote:
>> We make the locking and the transfer of information specific, even if we
>> are still transmitting things through the main thread.
>>
>> Signed-off-by: Juan Quintela <quint...@redhat.com>
>> ---
>>  migration/ram.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 52 insertions(+), 1 deletion(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index c71929e..9d7bc64 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -392,17 +392,25 @@ void migrate_compress_threads_create(void)
>>  /* Multiple fd's */
>>
>>  struct MultiFDSendParams {
>> +    /* not changed */
>>      QemuThread thread;
>>      QIOChannel *c;
>>      QemuCond cond;
>>      QemuMutex mutex;
>> +    /* protected by param mutex */
>>      bool quit;
>>      bool started;
>> +    uint8_t *address;
>> +    /* protected by multifd mutex */
>> +    bool done;
>>  };
>>  typedef struct MultiFDSendParams MultiFDSendParams;
>>
>>  static MultiFDSendParams *multifd_send;
>>
>> +QemuMutex multifd_send_mutex;
>> +QemuCond multifd_send_cond;
>
> Having n+1 semaphores instead of n+1 cond/mutex pairs could be more
> efficient.  See thread-pool.c for an example.
Did that.  See next version.  Only partial success: it goes faster, and the
code is somewhat simpler.  But on the receive side I end up having to add
3 semaphores per thread (OK, I could get down to two by reusing them, but
still).  On the send side I got speedups; on the receive side I didn't,
and I haven't found the cause yet.

Thanks, Juan.