Peter Xu <pet...@redhat.com> wrote:
> On Mon, Jul 17, 2017 at 03:42:34PM +0200, Juan Quintela wrote:
>> +static void multifd_recv_page(uint8_t *address, uint16_t fd_num)
>> +{
>> +    int thread_count;
>> +    MultiFDRecvParams *p;
>> +    static multifd_pages_t pages;
>> +    static bool once;
>> +
>> +    if (!once) {
>> +        multifd_init_group(&pages);
>> +        once = true;
>> +    }
>> +
>> +    pages.iov[pages.num].iov_base = address;
>> +    pages.iov[pages.num].iov_len = TARGET_PAGE_SIZE;
>> +    pages.num++;
>> +
>> +    if (fd_num == UINT16_MAX) {
>
> (so this check is slightly mistery as well if we don't define
> something... O:-)

It means that we continue sending pages on the same "group".  Will add
a comment.

>
>> +        return;
>> +    }
>> +
>> +    thread_count = migrate_multifd_threads();
>> +    assert(fd_num < thread_count);
>> +    p = multifd_recv_state->params[fd_num];
>> +
>> +    qemu_sem_wait(&p->ready);
>
> Shall we check for p->pages.num == 0 before wait? What if the
> corresponding thread is already finished its old work and ready?

This is a semaphore, not a condition variable.  We only use it with
values 0 and 1.  We only wait if the other thread hasn't done the post
yet; if it has already posted, the wait returns immediately.  (No, I
didn't invent the semaphore names.)

>> +
>> +    qemu_mutex_lock(&p->mutex);
>> +    p->done = false;
>> +    iov_copy(p->pages.iov, pages.num, pages.iov, pages.num, 0,
>> +             iov_size(pages.iov, pages.num));
>
> Question: any reason why we don't use the same for loop in
> multifd-send codes, and just copy the IOVs in that loop? (offset is
> always zero, and we are copying the whole thing after all)

When I found the function, I only remembered to change one of the
loops.  Nice catch.

Later, Juan.