RE: [PATCH RFC 03/14] migration/rdma: Create multiFd migration threads

2020-02-14 Thread fengzhimin
Thanks for your review. I will fix these errors in the next version (V3).

Because the migration data is transferred with RDMA WRITE operations, the 
destination does not need to receive data.
It only needs to poll the CQE on the destination side, so multifd_recv_thread() 
can't be used directly.

-Original Message-
From: Juan Quintela [mailto:quint...@redhat.com] 
Sent: Thursday, February 13, 2020 6:13 PM
To: fengzhimin 
Cc: dgilb...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 03/14] migration/rdma: Create multiFd migration threads

Zhimin Feng  wrote:
> Creation of the multifd send threads for RDMA migration, nothing 
> inside yet.
>
> Signed-off-by: Zhimin Feng 
> ---
>  migration/multifd.c   | 33 +---
>  migration/multifd.h   |  2 +
>  migration/qemu-file.c |  5 +++
>  migration/qemu-file.h |  1 +
>  migration/rdma.c  | 88 ++-
>  migration/rdma.h  |  3 ++
>  6 files changed, 125 insertions(+), 7 deletions(-)
>
> diff --git a/migration/multifd.c b/migration/multifd.c index 
> b3e8ae9bcc..63678d7fdd 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -424,7 +424,7 @@ void multifd_send_sync_main(QEMUFile *f)  {
>  int i;
>  
> -if (!migrate_use_multifd()) {
> +if (!migrate_use_multifd() || migrate_use_rdma()) {

You don't need to sync with the main channel on rdma?

> +static void rdma_send_channel_create(MultiFDSendParams *p) {
> +Error *local_err = NULL;
> +
> +if (p->quit) {
> +error_setg(&local_err, "multifd: send id %d already quit", p->id);
> +return ;
> +}
> +p->running = true;
> +
> +qemu_thread_create(&p->thread, p->name, multifd_rdma_send_thread, p,
> +   QEMU_THREAD_JOINABLE); }
> +
>  static void multifd_new_send_channel_async(QIOTask *task, gpointer 
> opaque)  {
>  MultiFDSendParams *p = opaque;
> @@ -621,7 +635,11 @@ int multifd_save_setup(Error **errp)
>  p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
>  p->packet->version = cpu_to_be32(MULTIFD_VERSION);
>  p->name = g_strdup_printf("multifdsend_%d", i);
> -socket_send_channel_create(multifd_new_send_channel_async, p);
> +if (!migrate_use_rdma()) {
> +socket_send_channel_create(multifd_new_send_channel_async, p);
> +} else {
> +rdma_send_channel_create(p);
> +}

This is what we are trying to avoid.  Just create a struct ops, where we have a

ops->create_channel(new_channel_async, p)

or whatever, and fill it differently for rdma and for tcp.


>  }
>  return 0;
>  }
> @@ -720,7 +738,7 @@ void multifd_recv_sync_main(void)  {
>  int i;
>  
> -if (!migrate_use_multifd()) {
> +if (!migrate_use_multifd() || migrate_use_rdma()) {
>  return;
>  }

OK, you can just put an empty function here.

>  for (i = 0; i < migrate_multifd_channels(); i++) { @@ -890,8 
> +908,13 @@ bool multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
>  p->num_packets = 1;
>  
>  p->running = true;
> -qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
> -   QEMU_THREAD_JOINABLE);
> +if (!migrate_use_rdma()) {
> +qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
> +   QEMU_THREAD_JOINABLE);
> +} else {
> +qemu_thread_create(&p->thread, p->name, multifd_rdma_recv_thread, p,
> +   QEMU_THREAD_JOINABLE);
> +}

new_recv_channel() member function.

>  atomic_inc(&multifd_recv_state->count);
>  return atomic_read(&multifd_recv_state->count) ==
> migrate_multifd_channels(); diff --git 
> a/migration/multifd.h b/migration/multifd.h index 
> d8b0205977..c9c11ad140 100644
> --- a/migration/multifd.h
> +++ b/migration/multifd.h
> @@ -13,6 +13,8 @@
>  #ifndef QEMU_MIGRATION_MULTIFD_H
>  #define QEMU_MIGRATION_MULTIFD_H
>  
> +#include "migration/rdma.h"
> +
>  int multifd_save_setup(Error **errp);  void 
> multifd_save_cleanup(void);  int multifd_load_setup(Error **errp);

You are not exporting anything rdma related from here, are you?

> diff --git a/migration/qemu-file.c b/migration/qemu-file.c index 
> 1c3a358a14..f0ed8f1381 100644
> --- a/migration/qemu-file.c
> +++ b/migration/qemu-file.c
> @@ -248,6 +248,11 @@ void qemu_fflush(QEMUFile *f)
>  f->iovcnt = 0;
>  }
>  
> +void *getQIOChannel(QEMUFile *f)
> +{
> +return f->op

RE: [PATCH RFC 12/12] migration/rdma: only register the virt-ram block for MultiRDMA

2020-01-20 Thread fengzhimin
The performance increase comes from the multiple RDMA channels rather than the 
multiple threads, so we must register the RAM blocks with each of the RDMA 
channels.

Zhimin Feng

-Original Message-
From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com] 
Sent: Monday, January 20, 2020 5:05 PM
To: fengzhimin 
Cc: quint...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 12/12] migration/rdma: only register the virt-ram block 
for MultiRDMA

* fengzhimin (fengzhim...@huawei.com) wrote:
> OK, I will modify it.
> 
> Because mach-virt.ram is sent by the multiRDMA channels instead of the main 
> channel, it does not need to be registered on the main channel.

You might be OK if instead  of using the name, you use a size threshold; e.g. 
you use the multirdma threads for any RAM block larger than say 128MB.

> Registering mach-virt.ram takes a long time for a VM with a large amount of 
> memory, so we try our best to avoid registering it.

I'm curious why, I know it's expensive to register RAM blocks with rdma code; 
but I thought that would just be the first time; it surprises me that 
registering it with a 2nd RDMA channel is as expensive.

But then that makes me ask a 2nd question; is your performance increase due to 
multiple threads or is it due to the multiple RDMA channels?
Could you have multiple threads but a single RDMA channel (with sufficient 
locking) and still get the performance gain?

Dave

> Thanks for your review.
> 
> Zhimin Feng
> 
> -Original Message-
> From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com]
> Sent: Saturday, January 18, 2020 2:52 AM
> To: fengzhimin 
> Cc: quint...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
> qemu-devel@nongnu.org; Zhanghailiang ; 
> jemmy858...@gmail.com
> Subject: Re: [PATCH RFC 12/12] migration/rdma: only register the 
> virt-ram block for MultiRDMA
> 
> * Zhimin Feng (fengzhim...@huawei.com) wrote:
> > From: fengzhimin 
> > 
> > The virt-ram block is sent by MultiRDMA, so we only register it for 
> > MultiRDMA channels and main channel don't register the virt-ram block.
> > 
> > Signed-off-by: fengzhimin 
> 
> You can't specialise on the name of the RAMBlock like that.
> 'mach-virt.ram' is the name used only for the main RAM and only on aarch's 
> machine type; for example, the name on x86 is completely different, and if you 
> use NUMA or hotplug etc. it would also be different on aarch.
> 
> Is there a downside to also registering the mach-virt.ram on the main channel?
> 
> Dave
> 
> > ---
> >  migration/rdma.c | 140
> > +--
> >  1 file changed, 112 insertions(+), 28 deletions(-)
> > 
> > diff --git a/migration/rdma.c b/migration/rdma.c index 
> > 0a150099e2..1477fd509b 100644
> > --- a/migration/rdma.c
> > +++ b/migration/rdma.c
> > @@ -618,7 +618,9 @@ const char *print_wrid(int wrid);  static int 
> > qemu_rdma_exchange_send(RDMAContext *rdma, RDMAControlHeader *head,
> > uint8_t *data, RDMAControlHeader *resp,
> > int *resp_idx,
> > -   int (*callback)(RDMAContext *rdma));
> > +   int (*callback)(RDMAContext *rdma,
> > +   uint8_t id),
> > +   uint8_t id);
> >  
> >  static inline uint64_t ram_chunk_index(const uint8_t *start,
> > const uint8_t *host) @@
> > -1198,24 +1200,81 @@ static int qemu_rdma_alloc_qp(RDMAContext *rdma)
> >  return 0;
> >  }
> >  
> > -static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
> > +/*
> > + * Parameters:
> > + *@id == UNUSED_ID :
> > + *This means that we register memory for the main RDMA channel,
> > + *the main RDMA channel don't register the mach-virt.ram block
> > + *when we use multiRDMA method to migrate.
> > + *
> > + *@id == 0 or id == 1 or ... :
> > + *This means that we register memory for the multiRDMA channels,
> > + *the multiRDMA channels only register memory for the mach-virt.ram
> > + *block when we use multiRDAM method to migrate.
> > + */
> > +static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma, 
> > +uint8_t
> > +id)
> >  {
> >  int i;
> >  RDMALocalBlocks *local = &rdma->local_ram_blocks;
> >  
> > -for (i = 0; i < local->nb_blocks; i++) {
> > -local->block[i].mr =
> > -  

RE: [PATCH RFC 12/12] migration/rdma: only register the virt-ram block for MultiRDMA

2020-01-18 Thread fengzhimin
OK, I will modify it.

Because mach-virt.ram is sent by the multiRDMA channels instead of the main 
channel, it does not need to be registered on the main channel.
Registering mach-virt.ram takes a long time for a VM with a large amount of 
memory, so we try our best to avoid registering it.

Thanks for your review.

Zhimin Feng

-Original Message-
From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com] 
Sent: Saturday, January 18, 2020 2:52 AM
To: fengzhimin 
Cc: quint...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 12/12] migration/rdma: only register the virt-ram block 
for MultiRDMA

* Zhimin Feng (fengzhim...@huawei.com) wrote:
> From: fengzhimin 
> 
> The virt-ram block is sent by MultiRDMA, so we only register it for 
> MultiRDMA channels and main channel don't register the virt-ram block.
> 
> Signed-off-by: fengzhimin 

You can't specialise on the name of the RAMBlock like that.
'mach-virt.ram' is the name used only for the main RAM and only on aarch's 
machine type; for example, the name on x86 is completely different, and if you 
use NUMA or hotplug etc. it would also be different on aarch.

Is there a downside to also registering the mach-virt.ram on the main channel?

Dave

> ---
>  migration/rdma.c | 140 
> +--
>  1 file changed, 112 insertions(+), 28 deletions(-)
> 
> diff --git a/migration/rdma.c b/migration/rdma.c index 
> 0a150099e2..1477fd509b 100644
> --- a/migration/rdma.c
> +++ b/migration/rdma.c
> @@ -618,7 +618,9 @@ const char *print_wrid(int wrid);  static int 
> qemu_rdma_exchange_send(RDMAContext *rdma, RDMAControlHeader *head,
> uint8_t *data, RDMAControlHeader *resp,
> int *resp_idx,
> -   int (*callback)(RDMAContext *rdma));
> +   int (*callback)(RDMAContext *rdma,
> +   uint8_t id),
> +   uint8_t id);
>  
>  static inline uint64_t ram_chunk_index(const uint8_t *start,
> const uint8_t *host) @@ 
> -1198,24 +1200,81 @@ static int qemu_rdma_alloc_qp(RDMAContext *rdma)
>  return 0;
>  }
>  
> -static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma)
> +/*
> + * Parameters:
> + *@id == UNUSED_ID :
> + *This means that we register memory for the main RDMA channel,
> + *the main RDMA channel don't register the mach-virt.ram block
> + *when we use multiRDMA method to migrate.
> + *
> + *@id == 0 or id == 1 or ... :
> + *This means that we register memory for the multiRDMA channels,
> + *the multiRDMA channels only register memory for the mach-virt.ram
> + *block when we use multiRDAM method to migrate.
> + */
> +static int qemu_rdma_reg_whole_ram_blocks(RDMAContext *rdma, uint8_t 
> +id)
>  {
>  int i;
>  RDMALocalBlocks *local = &rdma->local_ram_blocks;
>  
> -for (i = 0; i < local->nb_blocks; i++) {
> -local->block[i].mr =
> -ibv_reg_mr(rdma->pd,
> -local->block[i].local_host_addr,
> -local->block[i].length,
> -IBV_ACCESS_LOCAL_WRITE |
> -IBV_ACCESS_REMOTE_WRITE
> -);
> -if (!local->block[i].mr) {
> -perror("Failed to register local dest ram block!\n");
> -break;
> +if (migrate_use_multiRDMA()) {
> +if (id == UNUSED_ID) {
> +for (i = 0; i < local->nb_blocks; i++) {
> +/* main RDMA channel don't register the mach-virt.ram block 
> */
> +if (strcmp(local->block[i].block_name, "mach-virt.ram") == 
> 0) {
> +continue;
> +}
> +
> +local->block[i].mr =
> +ibv_reg_mr(rdma->pd,
> +local->block[i].local_host_addr,
> +local->block[i].length,
> +IBV_ACCESS_LOCAL_WRITE |
> +IBV_ACCESS_REMOTE_WRITE
> +);
> +if (!local->block[i].mr) {
> +perror("Failed to register local dest ram block!\n");
> +break;
> +}
> +rdma->total_registrations++;
> +}
> +} else {
> +for (i = 0; i < local->nb_blocks; i++) {
> +/*
> + * The multiRDAM cha

RE: [PATCH RFC 04/12] migration/rdma: Create multiRDMA migration threads

2020-01-16 Thread fengzhimin
Thanks for your review. I will merge this with multifd.

-Original Message-
From: Juan Quintela [mailto:quint...@redhat.com] 
Sent: Thursday, January 16, 2020 9:25 PM
To: fengzhimin 
Cc: dgilb...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 04/12] migration/rdma: Create multiRDMA migration 
threads

Zhimin Feng  wrote:
> From: fengzhimin 
>
> Creation of the RDMA threads, nothing inside yet.
>
> Signed-off-by: fengzhimin 

> ---
>  migration/migration.c |   1 +
>  migration/migration.h |   2 +
>  migration/rdma.c  | 283 ++
>  3 files changed, 286 insertions(+)
>
> diff --git a/migration/migration.c b/migration/migration.c index 
> 5756a4806e..f8d4eb657e 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1546,6 +1546,7 @@ static void migrate_fd_cleanup(MigrationState *s)
>  qemu_mutex_lock_iothread();
>  
>  multifd_save_cleanup();
> +multiRDMA_save_cleanup();

Can we merge this with multifd?


> +typedef struct {
> +/* this fields are not changed once the thread is created */
> +/* channel number */
> +uint8_t id;
> +/* channel thread name */
> +char *name;
> +/* channel thread id */
> +QemuThread thread;
> +/* sem where to wait for more work */
> +QemuSemaphore sem;
> +/* this mutex protects the following parameters */
> +QemuMutex mutex;
> +/* is this channel thread running */
> +bool running;
> +/* should this thread finish */
> +bool quit;
> +}  MultiRDMASendParams;

This is basically the same as MultiFDSendParams; same for the rest.

I would very much prefer not to have two sets of threads that are really 
equivalent.

Thanks, Juan.




RE: [PATCH RFC 01/12] migration: Add multiRDMA capability support

2020-01-16 Thread fengzhimin
Thanks for your review. I will modify it according to your suggestions.

-Original Message-
From: Juan Quintela [mailto:quint...@redhat.com] 
Sent: Thursday, January 16, 2020 9:19 PM
To: Dr. David Alan Gilbert 
Cc: fengzhimin ; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 01/12] migration: Add multiRDMA capability support

"Dr. David Alan Gilbert"  wrote:
> * Zhimin Feng (fengzhim...@huawei.com) wrote:
>> From: fengzhimin 
>> 
>> Signed-off-by: fengzhimin 
>
> Instead of creating x-multirdma as a capability and the corresponding 
> parameter for the number of channels; it would be better just to use 
> the multifd parameters when used with an rdma transport; as far as I 
> know multifd doesn't work with rdma at the moment, and to the user the 
> idea of multifd over rdma is just the same thing.

I was about to suggest that.  We could setup both capabilities:

multifd + rdma




RE: [PATCH RFC 00/12] *** mulitple RDMA channels for migration ***

2020-01-15 Thread fengzhimin
Thanks for your review. I will add more trace_ calls in the next version (V2) 
and modify it according to your suggestions.

-Original Message-
From: Dr. David Alan Gilbert [mailto:dgilb...@redhat.com] 
Sent: Thursday, January 16, 2020 3:57 AM
To: fengzhimin 
Cc: quint...@redhat.com; arm...@redhat.com; ebl...@redhat.com; 
qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 00/12] *** mulitple RDMA channels for migration ***

* Zhimin Feng (fengzhim...@huawei.com) wrote:
> From: fengzhimin 
> 
> Currently there is a single channel for RDMA migration, this causes 
> the problem that the network bandwidth is not fully utilized for 
> 25Gigabit NIC. Inspired by the Multifd, we use two RDMA channels to 
> send RAM pages, which we call MultiRDMA.
> 
> We compare the migration performance of MultiRDMA with origin RDMA 
> migration. The VM specifications for migration are as follows:
> - VM use 4k page;
> - the number of VCPU is 4;
> - the total memory is 16Gigabit;
> - use 'mempress' tool to pressurize VM(mempress 8000 500);
> - use 25Gigabit network card to migrate;
> 
> For origin RDMA and MultiRDMA migration, the total migration times of 
> VM are as follows:
> +-------------+------------------+--------------+
> |             | NOT rdma-pin-all | rdma-pin-all |
> +-------------+------------------+--------------+
> | origin RDMA |       18 s       |     23 s     |
> | MultiRDMA   |       13 s       |     18 s     |
> +-------------+------------------+--------------+

Very nice.

> For NOT rdma-pin-all migration, the multiRDMA can improve the total 
> migration time by about 27.8%.
> For rdma-pin-all migration, the multiRDMA can improve the total 
> migration time by about 21.7%.
> 
> Test the multiRDMA migration like this:
> 'virsh migrate --live --rdma-parallel --migrateuri rdma://hostname 
> domain qemu+tcp://hostname/system'

It will take me a while to finish the review; but another general suggestion is 
to add more trace_ calls; it will make it easier to diagnose problems later.

Dave

> 
> fengzhimin (12):
>   migration: Add multiRDMA capability support
>   migration: Export the 'migration_incoming_setup' function   
>  and add the 'migrate_use_rdma_pin_all' function
>   migration: Create the multi-rdma-channels parameter
>   migration/rdma: Create multiRDMA migration threads
>   migration/rdma: Create the multiRDMA channels
>   migration/rdma: Transmit initial package
>   migration/rdma: Be sure all channels are created
>   migration/rdma: register memory for multiRDMA channels
>   migration/rdma: Wait for all multiRDMA to complete registration
>   migration/rdma: use multiRDMA to send RAM block for rdma-pin-all mode
>   migration/rdma: use multiRDMA to send RAM block for NOT rdma-pin-all
>   mode
>   migration/rdma: only register the virt-ram block for MultiRDMA
> 
>  migration/migration.c |   55 +-
>  migration/migration.h |6 +
>  migration/rdma.c  | 1320 +
>  monitor/hmp-cmds.c|7 +
>  qapi/migration.json   |   27 +-
>  5 files changed, 1285 insertions(+), 130 deletions(-)
> 
> --
> 2.19.1
> 
> 
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




RE: [PATCH RFC 01/12] migration: Add multiRDMA capability support

2020-01-14 Thread fengzhimin
Thanks for your review. I will fix these errors in the next version(V2).
I hope you can find time in your busy schedule to check the other patches about 
Multi-RDMA.

-Original Message-
From: Eric Blake [mailto:ebl...@redhat.com] 
Sent: Tuesday, January 14, 2020 12:27 AM
To: fengzhimin ; quint...@redhat.com; 
dgilb...@redhat.com; arm...@redhat.com
Cc: qemu-devel@nongnu.org; Zhanghailiang ; 
jemmy858...@gmail.com
Subject: Re: [PATCH RFC 01/12] migration: Add multiRDMA capability support

On 1/8/20 10:59 PM, Zhimin Feng wrote:
> From: fengzhimin 
> 
> Signed-off-by: fengzhimin 
> ---

> +++ b/qapi/migration.json
> @@ -421,6 +421,8 @@
>   # @validate-uuid: Send the UUID of the source to allow the destination
>   # to ensure it is the same. (since 4.2)
>   #
> +# @multirdma: Use more than one channels for rdma migration. (since 4.2)

We've missed 4.2; the next release will be 5.0.

> +#
>   # Since: 1.2
>   ##
>   { 'enum': 'MigrationCapability',
> @@ -428,7 +430,7 @@
>  'compress', 'events', 'postcopy-ram', 'x-colo', 'release-ram',
>  'block', 'return-path', 'pause-before-switchover', 'multifd',
>  'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
> -   'x-ignore-shared', 'validate-uuid' ] }
> +   'x-ignore-shared', 'validate-uuid', 'multirdma' ] }
>   
>   ##
>   # @MigrationCapabilityStatus:
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



RE: [PATCH RFC 03/12] migration: Create the multi-rdma-channels parameter

2020-01-14 Thread fengzhimin
Thanks for your review. I will fix these errors in the next version(V2).

-Original Message-
From: Markus Armbruster [mailto:arm...@redhat.com] 
Sent: Monday, January 13, 2020 11:35 PM
To: fengzhimin 
Cc: quint...@redhat.com; dgilb...@redhat.com; ebl...@redhat.com; 
jemmy858...@gmail.com; qemu-devel@nongnu.org; Zhanghailiang 

Subject: Re: [PATCH RFC 03/12] migration: Create the multi-rdma-channels 
parameter

Zhimin Feng  writes:

> From: fengzhimin 
>
> Indicates the number of RDMA threads that we would create.
> By default we create 2 threads for RDMA migration.
>
> Signed-off-by: fengzhimin 
[...]
> diff --git a/qapi/migration.json b/qapi/migration.json index 
> c995ffdc4c..ab79bf0600 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -588,6 +588,10 @@
>  # @max-cpu-throttle: maximum cpu throttle percentage.
>  #Defaults to 99. (Since 3.1)
>  #
> +# @multi-rdma-channels: Number of channels used to migrate data in
> +#   parallel. This is the same number that the

same number as

> +#   number of multiRDMA used for migration.  The

Pardon my ignorance: what's "the number of multiRDMA used for migration"?

> +#   default value is 2 (since 4.2)

(since 5.0)

>  # Since: 2.4
>  ##
>  { 'enum': 'MigrationParameter',
> @@ -600,7 +604,8 @@
> 'downtime-limit', 'x-checkpoint-delay', 'block-incremental',
> 'multifd-channels',
> 'xbzrle-cache-size', 'max-postcopy-bandwidth',
> -   'max-cpu-throttle' ] }
> +   'max-cpu-throttle',
> +   'multi-rdma-channels'] }
>  
>  ##
>  # @MigrateSetParameters:
> @@ -690,6 +695,10 @@
>  # @max-cpu-throttle: maximum cpu throttle percentage.
>  #The default value is 99. (Since 3.1)
>  #
> +# @multi-rdma-channels: Number of channels used to migrate data in
> +#   parallel. This is the same number that the
> +#   number of multiRDMA used for migration.  The
> +#   default value is 2 (since 4.2)

See above.

>  # Since: 2.4
>  ##
>  # TODO either fuse back into MigrationParameters, or make @@ -715,7 
> +724,8 @@
>  '*multifd-channels': 'int',
>  '*xbzrle-cache-size': 'size',
>  '*max-postcopy-bandwidth': 'size',
> - '*max-cpu-throttle': 'int' } }
> + '*max-cpu-throttle': 'int',

Please use spaces instead of tab.

> +'*multi-rdma-channels': 'int'} }
>  
>  ##
>  # @migrate-set-parameters:
> @@ -825,6 +835,10 @@
>  #Defaults to 99.
>  # (Since 3.1)
>  #
> +# @multi-rdma-channels: Number of channels used to migrate data in
> +#   parallel. This is the same number that the
> +#   number of multiRDMA used for migration.  The
> +#   default value is 2 (since 4.2)
>  # Since: 2.4

See above.

>  ##
>  { 'struct': 'MigrationParameters',
> @@ -847,8 +861,9 @@
>  '*block-incremental': 'bool' ,
>  '*multifd-channels': 'uint8',
>  '*xbzrle-cache-size': 'size',
> - '*max-postcopy-bandwidth': 'size',
> -'*max-cpu-throttle':'uint8'} }
> + '*max-postcopy-bandwidth': 'size',
> +'*max-cpu-throttle':'uint8',
> +'*multi-rdma-channels':'uint8'} }
>  
>  ##
>  # @query-migrate-parameters:

Please use spaces instead of tab.




RE: [PATCH RFC 01/12] migration: Add multiRDMA capability support

2020-01-14 Thread fengzhimin
Thanks for your review. I will change it in the next version(V2).

-Original Message-
From: Markus Armbruster [mailto:arm...@redhat.com] 
Sent: Monday, January 13, 2020 11:30 PM
To: fengzhimin 
Cc: quint...@redhat.com; dgilb...@redhat.com; ebl...@redhat.com; 
jemmy858...@gmail.com; qemu-devel@nongnu.org; Zhanghailiang 

Subject: Re: [PATCH RFC 01/12] migration: Add multiRDMA capability support

Zhimin Feng  writes:

> From: fengzhimin 
>
> Signed-off-by: fengzhimin 
> ---
[...]
> diff --git a/qapi/migration.json b/qapi/migration.json index 
> b7348d0c8b..c995ffdc4c 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -421,6 +421,8 @@
>  # @validate-uuid: Send the UUID of the source to allow the destination
>  # to ensure it is the same. (since 4.2)
>  #
> +# @multirdma: Use more than one channels for rdma migration. (since 
> +4.2) #
>  # Since: 1.2
>  ##
>  { 'enum': 'MigrationCapability',
> @@ -428,7 +430,7 @@
> 'compress', 'events', 'postcopy-ram', 'x-colo', 'release-ram',
> 'block', 'return-path', 'pause-before-switchover', 'multifd',
> 'dirty-bitmaps', 'postcopy-blocktime', 'late-block-activate',
> -   'x-ignore-shared', 'validate-uuid' ] }
> +   'x-ignore-shared', 'validate-uuid', 'multirdma' ] }
>  
>  ##
>  # @MigrationCapabilityStatus:

Spell it multi-rdma?