Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Fri, Jul 19, 2024 at 01:54:37PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: >> >> Peter Xu writes: >> >> >> >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> >> >> Peter Xu writes: >> >> >> >> >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> >> >> v2 is ready, but unfortunately this approach doesn't work. When >> >> >> >> client A >> >> >> >> takes the payload, it fills it with it's data, which may include >> >> >> >> allocating memory. MultiFDPages_t does that for the offset. This >> >> >> >> means >> >> >> >> we need a round of free/malloc at every packet sent. For every >> >> >> >> client >> >> >> >> and every allocation they decide to do. >> >> >> > >> >> >> > Shouldn't be a blocker? E.g. one option is: >> >> >> > >> >> >> > /* Allocate both the pages + offset[] */ >> >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> >> >> > sizeof(ram_addr_t) * n, 1); >> >> >> > pages->allocated = n; >> >> >> > pages->offset = &pages[1]; >> >> >> > >> >> >> > Or.. we can also make offset[] dynamic size, if that looks less >> >> >> > tricky: >> >> >> > >> >> >> > typedef struct { >> >> >> > /* number of used pages */ >> >> >> > uint32_t num; >> >> >> > /* number of normal pages */ >> >> >> > uint32_t normal_num; >> >> >> > /* number of allocated pages */ >> >> >> > uint32_t allocated; >> >> >> > RAMBlock *block; >> >> >> > /* offset of each page */ >> >> >> > ram_addr_t offset[0]; >> >> >> > } MultiFDPages_t; >> >> >> >> >> >> I think you missed the point. If we hold a pointer inside the payload, >> >> >> we lose the reference when the other client takes the structure and >> >> >> puts >> >> >> its own data there. So we'll need to alloc/free everytime we send a >> >> >> packet. >> >> > >> >> > For option 1: when the buffer switch happens, MultiFDPages_t will >> >> > switch as >> >> > a whole, including its offset[], because its offset[] always belong to >> >> > this >> >> > MultiFDPages_t. So yes, we want to lose that *offset reference together >> >> > with MultiFDPages_t here, so the offset[] always belongs to one single >> >> > MultiFDPages_t object for its lifetime. >> >> >> >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated >> >> individually: >> >> >> >> struct MultiFDSendData { >> >> MultiFDPayloadType type; >> >> union { >> >> MultiFDPages_t ram_payload; >> >> } u; >> >> }; >> >> >> >> (and even if it did, then we'd lose the pointer to ram_payload anyway - >> >> or require multiple free/alloc) >> > >> > IMHO it's the same. >> > >> > The core idea is we allocate a buffer to put MultiFDSendData which may >> > contain either Pages_t or DeviceState_t, and the size of the buffer should >> > be MAX(A, B). >> > >> >> Right, but with your zero-length array proposals we need to have a >> separate allocation for MultiFDPages_t because to expand the array we >> need to include the number of pages. > > We need to fetch the max size we need and allocate one object covers all > the sizes we need. I sincerely don't understand why it's an issue.. > What you describe is this: p->data = g_malloc(sizeof(MultiFDPayloadType) + max(sizeof(MultiFDPages_t) + sizeof(ram_addr_t) * page_count, sizeof(MultiFDDevice_t))); This pushes the payload specific information into multifd_send_setup() which is against what we've been doing, namely isolating payload information out of multifd main code. 
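To make the objection concrete, the allocation being described could be spelled out as in the sketch below. It is only an illustration: the type names (MultiFDPages_t, MultiFDDevice_t, MultiFDPayloadType, ram_addr_t) are stand-ins following this thread, the helper is hypothetical, and struct padding is ignored just as in the pseudo-code above.

    #include <glib.h>
    #include <stdint.h>

    typedef uintptr_t ram_addr_t;                              /* stand-in */

    typedef struct {
        uint32_t num, normal_num, allocated;
        void *block;                                           /* stand-in */
        ram_addr_t offset[];
    } MultiFDPages_t;                                          /* stand-in */

    typedef struct { void *buf; size_t len; } MultiFDDevice_t; /* stand-in */

    typedef enum { MULTIFD_PAYLOAD_NONE } MultiFDPayloadType;  /* stand-in */

    static void *multifd_send_data_alloc(uint32_t page_count)
    {
        size_t ram_size = sizeof(MultiFDPages_t) +
                          sizeof(ram_addr_t) * page_count;

        /* the multifd core must know every payload's sizing rule to do this */
        return g_malloc0(sizeof(MultiFDPayloadType) +
                         MAX(ram_size, sizeof(MultiFDDevice_t)));
    }

The point being made is visible in the last statement: the sizing rule for the RAM payload (page_count) leaks into generic multifd setup code.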
>> >> Also, don't think only about MultiFDPages_t. With this approach we >> cannot have pointers to memory allocated by the client at all anywhere >> inside the union. Every pointer needs to have another reference >> somewhere else to ensure we don't leak it. That's an unnecessary >> restriction. > > So even if there can be multiple pointers we can definitely play the same > trick that we allocate object A+B+C+D in the same chunk and let A->b points > to B, A->c points to C, and so on. > > Before that, my question is do we really need that. > > For device states, AFAIU it'll always be an opaque buffer.. VFIO needs > that, vDPA probably the same, and for VMSDs it'll be a temp buffer to put > the VMSD dump. > > For multifd, I used offset[0] just to make sure things like "dynamic sized > multifd buffers" will easily work without much changes. Or even we could > have this, afaict: > > #define MULTIFD_PAGES_PER_PACKET (128) > > typedef struct { > /* number of used pages */ > uint32_t num; > /* number of normal pages */ > uint32_t normal_num; > /* number of allocated pages */ > uint32_t allocated; > RAMBlock *block; > /* offset of each page */ > ram_addr_t offset[MULTIFD_PAGES_PER_PACKET]; > } MultiFDPages_t; I think this is off the table, we're looking into al
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Fri, Jul 19, 2024 at 01:54:37PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: > >> Peter Xu writes: > >> > >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > >> >> Peter Xu writes: > >> >> > >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> >> >> v2 is ready, but unfortunately this approach doesn't work. When > >> >> >> client A > >> >> >> takes the payload, it fills it with it's data, which may include > >> >> >> allocating memory. MultiFDPages_t does that for the offset. This > >> >> >> means > >> >> >> we need a round of free/malloc at every packet sent. For every client > >> >> >> and every allocation they decide to do. > >> >> > > >> >> > Shouldn't be a blocker? E.g. one option is: > >> >> > > >> >> > /* Allocate both the pages + offset[] */ > >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > >> >> > sizeof(ram_addr_t) * n, 1); > >> >> > pages->allocated = n; > >> >> > pages->offset = &pages[1]; > >> >> > > >> >> > Or.. we can also make offset[] dynamic size, if that looks less > >> >> > tricky: > >> >> > > >> >> > typedef struct { > >> >> > /* number of used pages */ > >> >> > uint32_t num; > >> >> > /* number of normal pages */ > >> >> > uint32_t normal_num; > >> >> > /* number of allocated pages */ > >> >> > uint32_t allocated; > >> >> > RAMBlock *block; > >> >> > /* offset of each page */ > >> >> > ram_addr_t offset[0]; > >> >> > } MultiFDPages_t; > >> >> > >> >> I think you missed the point. If we hold a pointer inside the payload, > >> >> we lose the reference when the other client takes the structure and puts > >> >> its own data there. So we'll need to alloc/free everytime we send a > >> >> packet. > >> > > >> > For option 1: when the buffer switch happens, MultiFDPages_t will switch > >> > as > >> > a whole, including its offset[], because its offset[] always belong to > >> > this > >> > MultiFDPages_t. So yes, we want to lose that *offset reference together > >> > with MultiFDPages_t here, so the offset[] always belongs to one single > >> > MultiFDPages_t object for its lifetime. > >> > >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated > >> individually: > >> > >> struct MultiFDSendData { > >> MultiFDPayloadType type; > >> union { > >> MultiFDPages_t ram_payload; > >> } u; > >> }; > >> > >> (and even if it did, then we'd lose the pointer to ram_payload anyway - > >> or require multiple free/alloc) > > > > IMHO it's the same. > > > > The core idea is we allocate a buffer to put MultiFDSendData which may > > contain either Pages_t or DeviceState_t, and the size of the buffer should > > be MAX(A, B). > > > > Right, but with your zero-length array proposals we need to have a > separate allocation for MultiFDPages_t because to expand the array we > need to include the number of pages. We need to fetch the max size we need and allocate one object covers all the sizes we need. I sincerely don't understand why it's an issue.. > > Also, don't think only about MultiFDPages_t. With this approach we > cannot have pointers to memory allocated by the client at all anywhere > inside the union. Every pointer needs to have another reference > somewhere else to ensure we don't leak it. That's an unnecessary > restriction. So even if there can be multiple pointers we can definitely play the same trick that we allocate object A+B+C+D in the same chunk and let A->b points to B, A->c points to C, and so on. 
Before that, my question is do we really need that. For device states, AFAIU it'll always be an opaque buffer.. VFIO needs that, vDPA probably the same, and for VMSDs it'll be a temp buffer to put the VMSD dump. For multifd, I used offset[0] just to make sure things like "dynamic sized multifd buffers" will easily work without much changes. Or even we could have this, afaict: #define MULTIFD_PAGES_PER_PACKET (128) typedef struct { /* number of used pages */ uint32_t num; /* number of normal pages */ uint32_t normal_num; /* number of allocated pages */ uint32_t allocated; RAMBlock *block; /* offset of each page */ ram_addr_t offset[MULTIFD_PAGES_PER_PACKET]; } MultiFDPages_t; It might change perf on a few archs where psize is not 4K, but I don't see it a huge deal, personally. Then everything will have no pointers, and it can be even slightly faster because we use 64B cachelines in most systems nowadays, and one indirect pointer may always need a load on a new cacheline otherwise.. This whole cacheline thing is trivial. What I worried that you worry too much on that flexibility that we may never need. And even with that flexibilty I don't understand why you don't like allocating an object that's larger than how the union is defined: I really don't see it a problem.. It'll need
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> >> Peter Xu writes: >> >> >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> >> v2 is ready, but unfortunately this approach doesn't work. When client >> >> >> A >> >> >> takes the payload, it fills it with it's data, which may include >> >> >> allocating memory. MultiFDPages_t does that for the offset. This means >> >> >> we need a round of free/malloc at every packet sent. For every client >> >> >> and every allocation they decide to do. >> >> > >> >> > Shouldn't be a blocker? E.g. one option is: >> >> > >> >> > /* Allocate both the pages + offset[] */ >> >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> >> > sizeof(ram_addr_t) * n, 1); >> >> > pages->allocated = n; >> >> > pages->offset = &pages[1]; >> >> > >> >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: >> >> > >> >> > typedef struct { >> >> > /* number of used pages */ >> >> > uint32_t num; >> >> > /* number of normal pages */ >> >> > uint32_t normal_num; >> >> > /* number of allocated pages */ >> >> > uint32_t allocated; >> >> > RAMBlock *block; >> >> > /* offset of each page */ >> >> > ram_addr_t offset[0]; >> >> > } MultiFDPages_t; >> >> >> >> I think you missed the point. If we hold a pointer inside the payload, >> >> we lose the reference when the other client takes the structure and puts >> >> its own data there. So we'll need to alloc/free everytime we send a >> >> packet. >> > >> > For option 1: when the buffer switch happens, MultiFDPages_t will switch as >> > a whole, including its offset[], because its offset[] always belong to this >> > MultiFDPages_t. So yes, we want to lose that *offset reference together >> > with MultiFDPages_t here, so the offset[] always belongs to one single >> > MultiFDPages_t object for its lifetime. >> >> MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated >> individually: >> >> struct MultiFDSendData { >> MultiFDPayloadType type; >> union { >> MultiFDPages_t ram_payload; >> } u; >> }; >> >> (and even if it did, then we'd lose the pointer to ram_payload anyway - >> or require multiple free/alloc) > > IMHO it's the same. > > The core idea is we allocate a buffer to put MultiFDSendData which may > contain either Pages_t or DeviceState_t, and the size of the buffer should > be MAX(A, B). > Right, but with your zero-length array proposals we need to have a separate allocation for MultiFDPages_t because to expand the array we need to include the number of pages. Also, don't think only about MultiFDPages_t. With this approach we cannot have pointers to memory allocated by the client at all anywhere inside the union. Every pointer needs to have another reference somewhere else to ensure we don't leak it. That's an unnecessary restriction. >> >> > >> > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, >> > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's >> > the same as option 1 but maybe slight cleaner. We just need to make it >> > sized 0 so as to be dynamic in size. >> >> Seems like an undefined behavior magnet. If I sent this as the first >> version, you'd NACK me right away. >> >> Besides, it's an unnecessary restriction to impose in the client >> code. And like above, we don't allocate the struct directly, it's part >> of MultiFDSendData, that's an advantage of using the union. 
>> >> I think we've reached the point where I'd like to hear more concrete >> reasons for not going with the current proposal, except for the >> simplicity argument you already put. I like the union idea, but OTOH we >> already have a working solution right here. > > I think the issue with current proposal is each client will need to > allocate (N+1)*buffer, so more user using it the more buffers we'll need (M > users, then M*(N+1)*buffer). Currently it seems to me we will have 3 users > at least: RAM, VFIO, and some other VMSD devices TBD in mid-long futures; > the latter two will share the same DeviceState_t. Maybe vDPA as well at > some point? Then 4. You used the opposite argument earlier in this thread to argue in favor of the union: We'll only have 2 clients. I'm confused. Although, granted, this RFC does use more memory. > I'd agree with this approach only if multifd is flexible enough to not even > know what's the buffers, but it's not the case, and we seem only care about > two: > > if (type==RAM) > ... > else > assert(type==DEVICE); > ... I don't understand: "not even know what's the buffers" is exactly what this series is about. It doesn't have any such conditional on "type". > > In this case I think it's easier we have multifd manage all the buffers > (after all
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 18, 2024 at 07:32:05PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > >> Peter Xu writes: > >> > >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> >> v2 is ready, but unfortunately this approach doesn't work. When client A > >> >> takes the payload, it fills it with it's data, which may include > >> >> allocating memory. MultiFDPages_t does that for the offset. This means > >> >> we need a round of free/malloc at every packet sent. For every client > >> >> and every allocation they decide to do. > >> > > >> > Shouldn't be a blocker? E.g. one option is: > >> > > >> > /* Allocate both the pages + offset[] */ > >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > >> > sizeof(ram_addr_t) * n, 1); > >> > pages->allocated = n; > >> > pages->offset = &pages[1]; > >> > > >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: > >> > > >> > typedef struct { > >> > /* number of used pages */ > >> > uint32_t num; > >> > /* number of normal pages */ > >> > uint32_t normal_num; > >> > /* number of allocated pages */ > >> > uint32_t allocated; > >> > RAMBlock *block; > >> > /* offset of each page */ > >> > ram_addr_t offset[0]; > >> > } MultiFDPages_t; > >> > >> I think you missed the point. If we hold a pointer inside the payload, > >> we lose the reference when the other client takes the structure and puts > >> its own data there. So we'll need to alloc/free everytime we send a > >> packet. > > > > For option 1: when the buffer switch happens, MultiFDPages_t will switch as > > a whole, including its offset[], because its offset[] always belong to this > > MultiFDPages_t. So yes, we want to lose that *offset reference together > > with MultiFDPages_t here, so the offset[] always belongs to one single > > MultiFDPages_t object for its lifetime. > > MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated > individually: > > struct MultiFDSendData { > MultiFDPayloadType type; > union { > MultiFDPages_t ram_payload; > } u; > }; > > (and even if it did, then we'd lose the pointer to ram_payload anyway - > or require multiple free/alloc) IMHO it's the same. The core idea is we allocate a buffer to put MultiFDSendData which may contain either Pages_t or DeviceState_t, and the size of the buffer should be MAX(A, B). > > > > > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, > > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's > > the same as option 1 but maybe slight cleaner. We just need to make it > > sized 0 so as to be dynamic in size. > > Seems like an undefined behavior magnet. If I sent this as the first > version, you'd NACK me right away. > > Besides, it's an unnecessary restriction to impose in the client > code. And like above, we don't allocate the struct directly, it's part > of MultiFDSendData, that's an advantage of using the union. > > I think we've reached the point where I'd like to hear more concrete > reasons for not going with the current proposal, except for the > simplicity argument you already put. I like the union idea, but OTOH we > already have a working solution right here. I think the issue with current proposal is each client will need to allocate (N+1)*buffer, so more user using it the more buffers we'll need (M users, then M*(N+1)*buffer). 
Currently it seems to me we will have 3 users at least: RAM, VFIO, and some
other VMSD devices TBD in mid-long futures; the latter two will share the
same DeviceState_t. Maybe vDPA as well at some point? Then 4.

I'd agree with this approach only if multifd is flexible enough to not even
know what's the buffers, but it's not the case, and we seem only care about
two:

  if (type==RAM)
      ...
  else
      assert(type==DEVICE);
      ...

In this case I think it's easier we have multifd manage all the buffers
(after all, it knows them well...). Then the consumption is not
M*(N+1)*buffer, but (M+N)*buffer.

Perhaps push your tree somewhere so we can have a quick look? I'm totally
lost when you said I'll nack it.. so maybe I didn't really get what you
meant. Code may clarify that.

-- 
Peter Xu
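As a worked example of the buffer counts above, with purely illustrative values of M = 3 payload users (RAM, VFIO, VMSD-based devices) and N = 8 channels:

    /* assumed values, only to make the comparison above concrete */
    enum { M = 3 /* payload users */, N = 8 /* multifd channels */ };

    int per_client_owned = M * (N + 1);  /* = 27 payload buffers */
    int multifd_owned    = M + N;        /* = 11 payload buffers */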
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> >> v2 is ready, but unfortunately this approach doesn't work. When client A >> >> takes the payload, it fills it with it's data, which may include >> >> allocating memory. MultiFDPages_t does that for the offset. This means >> >> we need a round of free/malloc at every packet sent. For every client >> >> and every allocation they decide to do. >> > >> > Shouldn't be a blocker? E.g. one option is: >> > >> > /* Allocate both the pages + offset[] */ >> > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + >> > sizeof(ram_addr_t) * n, 1); >> > pages->allocated = n; >> > pages->offset = &pages[1]; >> > >> > Or.. we can also make offset[] dynamic size, if that looks less tricky: >> > >> > typedef struct { >> > /* number of used pages */ >> > uint32_t num; >> > /* number of normal pages */ >> > uint32_t normal_num; >> > /* number of allocated pages */ >> > uint32_t allocated; >> > RAMBlock *block; >> > /* offset of each page */ >> > ram_addr_t offset[0]; >> > } MultiFDPages_t; >> >> I think you missed the point. If we hold a pointer inside the payload, >> we lose the reference when the other client takes the structure and puts >> its own data there. So we'll need to alloc/free everytime we send a >> packet. > > For option 1: when the buffer switch happens, MultiFDPages_t will switch as > a whole, including its offset[], because its offset[] always belong to this > MultiFDPages_t. So yes, we want to lose that *offset reference together > with MultiFDPages_t here, so the offset[] always belongs to one single > MultiFDPages_t object for its lifetime. MultiFDPages_t is part of MultiFDSendData, it doesn't get allocated individually: struct MultiFDSendData { MultiFDPayloadType type; union { MultiFDPages_t ram_payload; } u; }; (and even if it did, then we'd lose the pointer to ram_payload anyway - or require multiple free/alloc) > > For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, > but make it part of the struct (MultiFDPages_t.offset[]). Logically it's > the same as option 1 but maybe slight cleaner. We just need to make it > sized 0 so as to be dynamic in size. Seems like an undefined behavior magnet. If I sent this as the first version, you'd NACK me right away. Besides, it's an unnecessary restriction to impose in the client code. And like above, we don't allocate the struct directly, it's part of MultiFDSendData, that's an advantage of using the union. I think we've reached the point where I'd like to hear more concrete reasons for not going with the current proposal, except for the simplicity argument you already put. I like the union idea, but OTOH we already have a working solution right here. > > Hmm.. is it the case?
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 18, 2024 at 06:27:32PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: > >> v2 is ready, but unfortunately this approach doesn't work. When client A > >> takes the payload, it fills it with it's data, which may include > >> allocating memory. MultiFDPages_t does that for the offset. This means > >> we need a round of free/malloc at every packet sent. For every client > >> and every allocation they decide to do. > > > > Shouldn't be a blocker? E.g. one option is: > > > > /* Allocate both the pages + offset[] */ > > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > > sizeof(ram_addr_t) * n, 1); > > pages->allocated = n; > > pages->offset = &pages[1]; > > > > Or.. we can also make offset[] dynamic size, if that looks less tricky: > > > > typedef struct { > > /* number of used pages */ > > uint32_t num; > > /* number of normal pages */ > > uint32_t normal_num; > > /* number of allocated pages */ > > uint32_t allocated; > > RAMBlock *block; > > /* offset of each page */ > > ram_addr_t offset[0]; > > } MultiFDPages_t; > > I think you missed the point. If we hold a pointer inside the payload, > we lose the reference when the other client takes the structure and puts > its own data there. So we'll need to alloc/free everytime we send a > packet. For option 1: when the buffer switch happens, MultiFDPages_t will switch as a whole, including its offset[], because its offset[] always belong to this MultiFDPages_t. So yes, we want to lose that *offset reference together with MultiFDPages_t here, so the offset[] always belongs to one single MultiFDPages_t object for its lifetime. For option 2: I meant MultiFDPages_t will have no offset[] pointer anymore, but make it part of the struct (MultiFDPages_t.offset[]). Logically it's the same as option 1 but maybe slight cleaner. We just need to make it sized 0 so as to be dynamic in size. Hmm.. is it the case? -- Peter Xu
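A standard-C sketch of option 2, with offset[] as a C99 flexible array member rather than the offset[0] GNU extension, might look like the following; RAMBlock and ram_addr_t are reduced to stand-ins and the helper name is hypothetical, not the actual QEMU code.

    #include <glib.h>
    #include <stdint.h>

    typedef uintptr_t ram_addr_t;        /* stand-in for QEMU's ram_addr_t */
    typedef struct RAMBlock RAMBlock;    /* opaque stand-in */

    typedef struct {
        uint32_t num;          /* number of used pages */
        uint32_t normal_num;   /* number of normal pages */
        uint32_t allocated;    /* number of allocated pages */
        RAMBlock *block;
        ram_addr_t offset[];   /* flexible array member, sized at alloc time */
    } MultiFDPages_t;

    /* one allocation covers the struct and its n offsets; freeing the
     * object frees the array with it, so there is no extra pointer that
     * could be lost when the whole buffer is swapped */
    MultiFDPages_t *multifd_pages_alloc(uint32_t n)
    {
        MultiFDPages_t *pages =
            g_malloc0(sizeof(*pages) + sizeof(ram_addr_t) * n);

        pages->allocated = n;
        return pages;
    }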
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote: >> v2 is ready, but unfortunately this approach doesn't work. When client A >> takes the payload, it fills it with it's data, which may include >> allocating memory. MultiFDPages_t does that for the offset. This means >> we need a round of free/malloc at every packet sent. For every client >> and every allocation they decide to do. > > Shouldn't be a blocker? E.g. one option is: > > /* Allocate both the pages + offset[] */ > MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) + > sizeof(ram_addr_t) * n, 1); > pages->allocated = n; > pages->offset = &pages[1]; > > Or.. we can also make offset[] dynamic size, if that looks less tricky: > > typedef struct { > /* number of used pages */ > uint32_t num; > /* number of normal pages */ > uint32_t normal_num; > /* number of allocated pages */ > uint32_t allocated; > RAMBlock *block; > /* offset of each page */ > ram_addr_t offset[0]; > } MultiFDPages_t; I think you missed the point. If we hold a pointer inside the payload, we lose the reference when the other client takes the structure and puts its own data there. So we'll need to alloc/free everytime we send a packet.
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 18, 2024 at 04:39:00PM -0300, Fabiano Rosas wrote:
> v2 is ready, but unfortunately this approach doesn't work. When client A
> takes the payload, it fills it with it's data, which may include
> allocating memory. MultiFDPages_t does that for the offset. This means
> we need a round of free/malloc at every packet sent. For every client
> and every allocation they decide to do.

Shouldn't be a blocker? E.g. one option is:

    /* Allocate both the pages + offset[] */
    MultiFDPages_t *pages = g_malloc0(sizeof(MultiFDPages_t) +
                                      sizeof(ram_addr_t) * n, 1);
    pages->allocated = n;
    pages->offset = &pages[1];

Or.. we can also make offset[] dynamic size, if that looks less tricky:

    typedef struct {
        /* number of used pages */
        uint32_t num;
        /* number of normal pages */
        uint32_t normal_num;
        /* number of allocated pages */
        uint32_t allocated;
        RAMBlock *block;
        /* offset of each page */
        ram_addr_t offset[0];
    } MultiFDPages_t;

-- 
Peter Xu
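Note that glib's g_malloc0() takes a single size argument, so a compilable sketch of option 1 above (one chunk holding the struct followed by its offset[] array, with the offset pointer aimed just past the struct) might look like the following; the type names are stand-ins and the helper name is hypothetical, not actual QEMU code.

    #include <glib.h>
    #include <stdint.h>

    typedef uintptr_t ram_addr_t;        /* stand-in for QEMU's ram_addr_t */
    typedef struct RAMBlock RAMBlock;    /* opaque stand-in */

    typedef struct {
        uint32_t num;
        uint32_t normal_num;
        uint32_t allocated;
        RAMBlock *block;
        ram_addr_t *offset;    /* points into the same allocation */
    } MultiFDPages_t;

    /* allocate both the pages struct and its offset[] in one chunk */
    MultiFDPages_t *multifd_pages_alloc(uint32_t n)
    {
        MultiFDPages_t *pages =
            g_malloc0(sizeof(MultiFDPages_t) + sizeof(ram_addr_t) * n);

        pages->allocated = n;
        pages->offset = (ram_addr_t *)&pages[1];  /* just past the struct */
        return pages;
    }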
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote: >> What about the QEMUFile traffic? There's an iov in there. I have been >> thinking of replacing some of qemu-file.c guts with calls to >> multifd. Instead of several qemu_put_byte() we could construct an iov >> and give it to multifd for transfering, call multifd_sync at the end and >> get rid of the QEMUFile entirely. I don't have that completely laid out >> at the moment, but I think it should be possible. I get concerned about >> making assumptions on the types of data we're ever going to want to >> transmit. I bet someone thought in the past that multifd would never be >> used for anything other than ram. > > Hold on a bit.. there're two things I want to clarity with you. > > Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing them > to iochannels may regress performance. I never checked, but I would assume > some buffering will be needed for small chunk of data even with iochannels. > > Secondly, why multifd has things to do with this? What you're talking > about is more like the rework of qemufile->iochannel thing to me, and IIUC > that doesn't yet involve multifd. For many of such conversions, it'll > still be operating on the main channel, which is not the multifd channels. > What matters might be about what's in your mind to be put over multifd > channels there. > >> >> > >> > I wonder why handshake needs to be done per-thread. I was naturally >> > thinking the handshake should happen sequentially, talking over everything >> > including multifd. >> >> Well, it would still be thread based. Just that it would be 1 thread and >> it would not be managed by multifd. I don't see the point. We could make >> everything be multifd-based. Any piece of data that needs to reach the >> other side of the migration could be sent through multifd, no? > > Hmm yes we can. But what do we gain from it, if we know it'll be a few > MBs in total? There ain't a lot of huge stuff to move, it seems to me. > >> >> Also, when you say "per-thread", that's the model we're trying to get >> away from. There should be nothing "per-thread", the threads just >> consume the data produced by the clients. Anything "per-thread" that is >> not strictly related to the thread model should go away. For instance, >> p->page_size, p->page_count, p->write_flags, p->flags, etc. None of >> these should be in MultiFDSendParams. That thing should be (say) >> MultifdChannelState and contain only the semaphores and control flags >> for the threads. >> >> It would be nice if we could once and for all have a model that can >> dispatch data transfers without having to fiddle with threading all the >> time. Any time someone wants to do something different in the migration >> code, there it goes a random qemu_create_thread() flying around. > > That's exactly what I want to avoid. Not all things will need a thread, > only performance relevant ones. > > So now we have multifd threads, they're for IO throughputs: if we want to > push a fast NIC, that's the only way to go. Anything wants to push that > NIC, should use multifd. > > Then it turns out we want more concurrency, it's about VFIO save()/load() > of the kenrel drivers and it can block. Same to other devices that can > take time to save()/load() if it can happen concurrently in the future. I > think that's the reason why I suggested the VFIO solution to provide a > generic concept of thread pool so it services a generic purpose, and can be > reused in the future. 
> > I hope that'll stop anyone else on migration to create yet another thread > randomly, and I definitely don't like that either. I would _suspect_ the > next one to come as such is TDX.. I remember at least in the very initial > proposal years ago, TDX migration involves its own "channel" to migrate, > migration.c may not even know where is that channel. We'll see. > > [...] > >> > One thing to mention is that when with an union we may probably need to get >> > rid of multifd_send_state->pages already. >> >> Hehe, please don't do this like "oh, by the way...". This is a major >> pain point. I've been complaining about that "holding of client data" >> since the fist time I read that code. So if you're going to propose >> something, it needs to account for that. > > The client puts something into a buffer (SendData), then it delivers it to > multifd (who silently switches the buffer). After enqueued, the client > assumes the buffer is sent and reusable again. > > It looks pretty common to me, what is the concern within the procedure? > What's the "holding of client data" issue? > v2 is ready, but unfortunately this approach doesn't work. When client A takes the payload, it fills it with it's data, which may include allocating memory. MultiFDPages_t does that for the offset. This means we need a round of free/malloc at every packet sent. For every client and every allocation they decide to do.
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Wed, Jul 17, 2024 at 11:07:17PM +0200, Maciej S. Szmigiero wrote: > On 17.07.2024 21:00, Peter Xu wrote: > > On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: > > > > > > > The comment I removed is slightly misleading to me too, because > > > > > > > right now > > > > > > > active_slot contains the data hasn't yet been delivered to > > > > > > > multifd, so > > > > > > > we're "putting it back to free list" not because of it's free, > > > > > > > but because > > > > > > > we know it won't get used until the multifd send thread consumes > > > > > > > it > > > > > > > (because before that the thread will be busy, and we won't use > > > > > > > the buffer > > > > > > > if so in upcoming send()s). > > > > > > > > > > > > > > And then when I'm looking at this again, I think maybe it's a > > > > > > > slight > > > > > > > overkill, and maybe we can still keep the "opaque data" managed > > > > > > > by multifd. > > > > > > > One reason might be that I don't expect the "opaque data" payload > > > > > > > keep > > > > > > > growing at all: it should really be either RAM or device state as > > > > > > > I > > > > > > > commented elsewhere in a relevant thread, after all it's a thread > > > > > > > model > > > > > > > only for migration purpose to move vmstates.. > > > > > > > > > > > > Some amount of flexibility needs to be baked in. For instance, what > > > > > > about the handshake procedure? Don't we want to use multifd threads > > > > > > to > > > > > > put some information on the wire for that as well? > > > > > > > > > > Is this an orthogonal question? > > > > > > > > I don't think so. You say the payload data should be either RAM or > > > > device state. I'm asking what other types of data do we want the multifd > > > > channel to transmit and suggesting we need to allow room for the > > > > addition of that, whatever it is. One thing that comes to mind that is > > > > neither RAM or device state is some form of handshake or capabilities > > > > negotiation. > > > > > > The RFC version of my multifd device state transfer patch set introduced > > > a new migration channel header (by Avihai) for clean and extensible > > > migration channel handshaking but people didn't like so it was removed in > > > v1. > > > > Hmm, I'm not sure this is relevant to the context of discussion here, but I > > confess I didn't notice the per-channel header thing in the previous RFC > > series. Link is here: > > > > https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigi...@oracle.com > > The channel header patches were dropped because Daniel didn't like them: > https://lore.kernel.org/qemu-devel/zh-kf72fe9ov6...@redhat.com/ > https://lore.kernel.org/qemu-devel/zh_6w8u3h4fmg...@redhat.com/ Ah I missed that too when I quickly went over the old series, sorry. I think what Dan meant was that we'd better do that with the handshake work, which should cover more than this. I've no problem with that. It's just that sooner or later, we should provide something more solid than commit 6720c2b327 ("migration: check magic value for deciding the mapping of channels"). > > > Maciej, if you want, you can split that out of the seriess. So far it looks > > like a good thing with/without how VFIO tackles it. 
> > Unfortunately, these Avihai's channel header patches obviously impact wire > protocol and are a bit of intermingled with the rest of the device state > transfer patch set so it would be good to know upfront whether there is > some consensus to (re)introduce this new channel header (CCed Daniel, too). When I mentioned posting it separately, it'll still not be relevant to the VFIO series. IOW, I think below is definitely not needed (and I think we're on the same page now to reuse multifd threads as generic channels, so there's no issue now): https://lore.kernel.org/qemu-devel/027695db92ace07d2d6ee66da05f8e85959fd46a.1713269378.git.maciej.szmigi...@oracle.com/ So I assume we should leave that for later for whoever refactors the handshake process. Thanks, -- Peter Xu
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On 17.07.2024 21:00, Peter Xu wrote: On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: The comment I removed is slightly misleading to me too, because right now active_slot contains the data hasn't yet been delivered to multifd, so we're "putting it back to free list" not because of it's free, but because we know it won't get used until the multifd send thread consumes it (because before that the thread will be busy, and we won't use the buffer if so in upcoming send()s). And then when I'm looking at this again, I think maybe it's a slight overkill, and maybe we can still keep the "opaque data" managed by multifd. One reason might be that I don't expect the "opaque data" payload keep growing at all: it should really be either RAM or device state as I commented elsewhere in a relevant thread, after all it's a thread model only for migration purpose to move vmstates.. Some amount of flexibility needs to be baked in. For instance, what about the handshake procedure? Don't we want to use multifd threads to put some information on the wire for that as well? Is this an orthogonal question? I don't think so. You say the payload data should be either RAM or device state. I'm asking what other types of data do we want the multifd channel to transmit and suggesting we need to allow room for the addition of that, whatever it is. One thing that comes to mind that is neither RAM or device state is some form of handshake or capabilities negotiation. The RFC version of my multifd device state transfer patch set introduced a new migration channel header (by Avihai) for clean and extensible migration channel handshaking but people didn't like so it was removed in v1. Hmm, I'm not sure this is relevant to the context of discussion here, but I confess I didn't notice the per-channel header thing in the previous RFC series. Link is here: https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigi...@oracle.com The channel header patches were dropped because Daniel didn't like them: https://lore.kernel.org/qemu-devel/zh-kf72fe9ov6...@redhat.com/ https://lore.kernel.org/qemu-devel/zh_6w8u3h4fmg...@redhat.com/ Maciej, if you want, you can split that out of the seriess. So far it looks like a good thing with/without how VFIO tackles it. Unfortunately, these Avihai's channel header patches obviously impact wire protocol and are a bit of intermingled with the rest of the device state transfer patch set so it would be good to know upfront whether there is some consensus to (re)introduce this new channel header (CCed Daniel, too). Thanks, Thanks, Maciej
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Tue, Jul 16, 2024 at 10:10:25PM +0200, Maciej S. Szmigiero wrote: > > > > > The comment I removed is slightly misleading to me too, because right > > > > > now > > > > > active_slot contains the data hasn't yet been delivered to multifd, so > > > > > we're "putting it back to free list" not because of it's free, but > > > > > because > > > > > we know it won't get used until the multifd send thread consumes it > > > > > (because before that the thread will be busy, and we won't use the > > > > > buffer > > > > > if so in upcoming send()s). > > > > > > > > > > And then when I'm looking at this again, I think maybe it's a slight > > > > > overkill, and maybe we can still keep the "opaque data" managed by > > > > > multifd. > > > > > One reason might be that I don't expect the "opaque data" payload keep > > > > > growing at all: it should really be either RAM or device state as I > > > > > commented elsewhere in a relevant thread, after all it's a thread > > > > > model > > > > > only for migration purpose to move vmstates.. > > > > > > > > Some amount of flexibility needs to be baked in. For instance, what > > > > about the handshake procedure? Don't we want to use multifd threads to > > > > put some information on the wire for that as well? > > > > > > Is this an orthogonal question? > > > > I don't think so. You say the payload data should be either RAM or > > device state. I'm asking what other types of data do we want the multifd > > channel to transmit and suggesting we need to allow room for the > > addition of that, whatever it is. One thing that comes to mind that is > > neither RAM or device state is some form of handshake or capabilities > > negotiation. > > The RFC version of my multifd device state transfer patch set introduced > a new migration channel header (by Avihai) for clean and extensible > migration channel handshaking but people didn't like so it was removed in v1. Hmm, I'm not sure this is relevant to the context of discussion here, but I confess I didn't notice the per-channel header thing in the previous RFC series. Link is here: https://lore.kernel.org/r/636cec92eb801f13ba893de79d4872f5d8342097.1713269378.git.maciej.szmigi...@oracle.com Maciej, if you want, you can split that out of the seriess. So far it looks like a good thing with/without how VFIO tackles it. Thanks, -- Peter Xu
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On 10.07.2024 22:16, Fabiano Rosas wrote: Peter Xu writes: On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: Peter Xu writes: On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: Or graphically: 1) client fills the active slot with data. Channels point to nothing at this point: [a] <-- active slot [][][][] <-- free slots, one per-channel [][][][] <-- channels' p->data pointers 2) multifd_send() swaps the pointers inside the client slot. Channels still point to nothing: [] [a][][][] [][][][] 3) multifd_send() finds an idle channel and updates its pointer: It seems the action "finds an idle channel" is in step 2 rather than step 3, which means the free slot is selected based on the id of the channel found, am I understanding correctly? I think you're right. Actually I also feel like the desription here is ambiguous, even though I think I get what Fabiano wanted to say. The free slot should be the first step of step 2+3, here what Fabiano really wanted to suggest is we move the free buffer array from multifd channels into the callers, then the caller can pass in whatever data to send. So I think maybe it's cleaner to write it as this in code (note: I didn't really change the code, just some ordering and comments): ===8<=== @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) */ active_slot = slots->active; slots->active = slots->free[p->id]; -p->data = active_slot; - -/* - * By the next time we arrive here, the channel will certainly - * have consumed the active slot. Put it back on the free list - * now. - */ slots->free[p->id] = active_slot; +/* Assign the current active slot to the chosen thread */ +p->data = active_slot; ===8<=== The comment I removed is slightly misleading to me too, because right now active_slot contains the data hasn't yet been delivered to multifd, so we're "putting it back to free list" not because of it's free, but because we know it won't get used until the multifd send thread consumes it (because before that the thread will be busy, and we won't use the buffer if so in upcoming send()s). And then when I'm looking at this again, I think maybe it's a slight overkill, and maybe we can still keep the "opaque data" managed by multifd. One reason might be that I don't expect the "opaque data" payload keep growing at all: it should really be either RAM or device state as I commented elsewhere in a relevant thread, after all it's a thread model only for migration purpose to move vmstates.. Some amount of flexibility needs to be baked in. For instance, what about the handshake procedure? Don't we want to use multifd threads to put some information on the wire for that as well? Is this an orthogonal question? I don't think so. You say the payload data should be either RAM or device state. I'm asking what other types of data do we want the multifd channel to transmit and suggesting we need to allow room for the addition of that, whatever it is. One thing that comes to mind that is neither RAM or device state is some form of handshake or capabilities negotiation. The RFC version of my multifd device state transfer patch set introduced a new migration channel header (by Avihai) for clean and extensible migration channel handshaking but people didn't like so it was removed in v1. Thanks, Maciej
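A minimal, self-contained sketch of the pointer swap shown in the diff above; the struct shapes and the function name are illustrative stand-ins for the RFC's slot handling, not its actual definitions.

    typedef struct { int dummy; } MultiFDSendData;   /* stand-in payload */

    typedef struct {
        MultiFDSendData *active;    /* slot the client is currently filling */
        MultiFDSendData **free;     /* one spare slot per channel */
    } MultiFDSlots;

    typedef struct {
        int id;                     /* channel index */
        MultiFDSendData *data;      /* what this channel will send */
    } MultiFDSendParams;

    static void multifd_hand_over_slot(MultiFDSlots *slots, MultiFDSendParams *p)
    {
        MultiFDSendData *filled = slots->active;

        /* the channel's spare slot becomes the client's next active one */
        slots->active = slots->free[p->id];

        /* park the filled slot here; it is only reusable once the send
         * thread has consumed it, as discussed above */
        slots->free[p->id] = filled;

        /* assign the filled slot to the chosen channel */
        p->data = filled;
    }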
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Fri, Jul 12, 2024 at 09:44:02AM -0300, Fabiano Rosas wrote:
> Do you have a reference for that kubevirt issue I could look at? It
> may be interesting to investigate further. Where's the throttling coming
> from? And doesn't less vcpu time imply less dirtying and therefore
> faster convergence?

Sorry I don't have a link on hand.. sometimes it's not about convergence,
it's about impacting the guest workload too much without intention, which is
not wanted, especially on a public cloud. It's understandable to me since
they're under the same cgroup with throttled cpu resources applied to the
QEMU+Libvirt processes as a whole, probably based on N_VCPUS with some tiny
extra room for other stuff. For example, I remember they also hit other
threads contending with the vcpu threads, like the block layer thread pools.

It's a separate issue here when talking about locked_vm, as kubevirt probably
needs to figure out a way to say "these are mgmt threads, and those are vcpu
threads", because mgmt threads can take quite some cpu resources sometimes
and that's not avoidable. Page pinning will be another story, as in many
cases pinning should not be required, except for VFIO, zerocopy and other
special stuff.

-- 
Peter Xu
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 11, 2024 at 06:12:44PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: >> > >> > [...] >> > >> >> We also don't flush the iov at once, so f->buf seems redundant to >> >> me. But of course, if we touch any of that we must ensure we're not >> >> dropping any major optimization. >> > >> > Yes some tests over that would be more persuasive when it comes. >> > >> > Per my limited experience in the past few years: memcpy on chips nowadays >> > is pretty cheap. You'll see very soon one more example of that when you >> > start to look at the qatzip series: that series decided to do one more >> > memcpy for all guest pages, to make it a larger chunk of buffer instead of >> > submitting the compression tasks in 4k chunks (while I thought 4k wasn't >> > too small itself). >> > >> > That may be more involved so may not be a great example (e.g. the >> > compression algo can be special in this case where it just likes larger >> > buffers), but it's not uncommon that I see people trade things with memcpy, >> > especially small buffers. >> > >> > [...] >> > >> >> Any piece of code that fills an iov with data is prone to be able to >> >> send that data through multifd. From this perspective, multifd is just a >> >> way to give work to an iochannel. We don't *need* to use it, but it >> >> might be simple enough to the point that the benefit of ditching >> >> QEMUFile can be reached without too much rework. >> >> >> >> Say we provision multifd threads early and leave them waiting for any >> >> part of the migration code to send some data. We could have n-1 threads >> >> idle waiting for the bulk of the data and use a single thread for any >> >> early traffic that does not need to be parallel. >> >> >> >> I'm not suggesting we do any of this right away or even that this is the >> >> correct way to go, I'm just letting you know some of my ideas and why I >> >> think ram + device state might not be the only data we put through >> >> multifd. >> > >> > We can wait and see whether that can be of any use in the future, even if >> > so, we still have chance to add more types into the union, I think. But >> > again, I don't expect. >> > >> > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) >> > non-IO, data onto multifd. Again, I would ask "why not the main channel", >> > otherwise. >> > >> > [...] >> > >> >> Just to be clear, do you want a thread-pool to replace multifd? Or would >> >> that be only used for concurrency on the producer side? >> > >> > Not replace multifd. It's just that I was imagining multifd threads only >> > manage IO stuff, nothing else. >> > >> > I was indeed thinking whether we can reuse multifd threads, but then I >> > found there's risk mangling these two concepts, as: when we do more than IO >> > in multifd threads (e.g., talking to VFIO kernel fetching data which can >> > block), we have risk of blocking IO even if we can push more so the NICs >> > can be idle again. There's also the complexity where the job fetches data >> > from VFIO kernel and want to enqueue again, it means an multifd task can >> > enqueue to itself, and circular enqueue can be challenging: imagine 8 >> > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at >> > the same time; they hunger themselves to death. Things like that. Then I >> > figured the rest jobs are really fn(void*) type of things; they should >> > deserve their own pool of threads. 
>> > >> > So the VFIO threads (used to be per-device) becomes migration worker >> > threads, we need them for both src/dst: on dst there's still pending work >> > to apply the continuous VFIO data back to the kernel driver, and that can't >> > be done by multifd thread too due to similar same reason. Then those dest >> > side worker threads can also do load() not only for VFIO but also other >> > device states if we can add more. >> > >> > So to summary, we'll have: >> > >> > - 1 main thread (send / recv) >> > - N multifd threads (IOs only) >> > - M worker threads (jobs only) >> > >> > Of course, postcopy not involved.. How's that sound? >> >> Looks good. There's a better divide between producer and consumer this >> way. I think it will help when designing new features. >> >> One observation is that we'll still have two different entities doing IO >> (multifd threads and the migration thread), which I would prefer were >> using a common code at a higher level than the iochannel. > > At least for the main channel probably yes. I think Dan has had the idea > of adding the buffering layer over iochannels, then replace qemufiles with > that. Multifd channels looks ok so far to use as raw channels. > >> >> One thing that I tried to look into for mapped-ram was whether we could >> set up iouring in the migration code, but got entirely discouraged by >> the migration thread doing IO at random points. And of course, you've >> see
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 11, 2024 at 06:12:44PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: > > > > [...] > > > >> We also don't flush the iov at once, so f->buf seems redundant to > >> me. But of course, if we touch any of that we must ensure we're not > >> dropping any major optimization. > > > > Yes some tests over that would be more persuasive when it comes. > > > > Per my limited experience in the past few years: memcpy on chips nowadays > > is pretty cheap. You'll see very soon one more example of that when you > > start to look at the qatzip series: that series decided to do one more > > memcpy for all guest pages, to make it a larger chunk of buffer instead of > > submitting the compression tasks in 4k chunks (while I thought 4k wasn't > > too small itself). > > > > That may be more involved so may not be a great example (e.g. the > > compression algo can be special in this case where it just likes larger > > buffers), but it's not uncommon that I see people trade things with memcpy, > > especially small buffers. > > > > [...] > > > >> Any piece of code that fills an iov with data is prone to be able to > >> send that data through multifd. From this perspective, multifd is just a > >> way to give work to an iochannel. We don't *need* to use it, but it > >> might be simple enough to the point that the benefit of ditching > >> QEMUFile can be reached without too much rework. > >> > >> Say we provision multifd threads early and leave them waiting for any > >> part of the migration code to send some data. We could have n-1 threads > >> idle waiting for the bulk of the data and use a single thread for any > >> early traffic that does not need to be parallel. > >> > >> I'm not suggesting we do any of this right away or even that this is the > >> correct way to go, I'm just letting you know some of my ideas and why I > >> think ram + device state might not be the only data we put through > >> multifd. > > > > We can wait and see whether that can be of any use in the future, even if > > so, we still have chance to add more types into the union, I think. But > > again, I don't expect. > > > > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) > > non-IO, data onto multifd. Again, I would ask "why not the main channel", > > otherwise. > > > > [...] > > > >> Just to be clear, do you want a thread-pool to replace multifd? Or would > >> that be only used for concurrency on the producer side? > > > > Not replace multifd. It's just that I was imagining multifd threads only > > manage IO stuff, nothing else. > > > > I was indeed thinking whether we can reuse multifd threads, but then I > > found there's risk mangling these two concepts, as: when we do more than IO > > in multifd threads (e.g., talking to VFIO kernel fetching data which can > > block), we have risk of blocking IO even if we can push more so the NICs > > can be idle again. There's also the complexity where the job fetches data > > from VFIO kernel and want to enqueue again, it means an multifd task can > > enqueue to itself, and circular enqueue can be challenging: imagine 8 > > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at > > the same time; they hunger themselves to death. Things like that. Then I > > figured the rest jobs are really fn(void*) type of things; they should > > deserve their own pool of threads. 
> > > > So the VFIO threads (used to be per-device) becomes migration worker > > threads, we need them for both src/dst: on dst there's still pending work > > to apply the continuous VFIO data back to the kernel driver, and that can't > > be done by multifd thread too due to similar same reason. Then those dest > > side worker threads can also do load() not only for VFIO but also other > > device states if we can add more. > > > > So to summary, we'll have: > > > > - 1 main thread (send / recv) > > - N multifd threads (IOs only) > > - M worker threads (jobs only) > > > > Of course, postcopy not involved.. How's that sound? > > Looks good. There's a better divide between producer and consumer this > way. I think it will help when designing new features. > > One observation is that we'll still have two different entities doing IO > (multifd threads and the migration thread), which I would prefer were > using a common code at a higher level than the iochannel. At least for the main channel probably yes. I think Dan has had the idea of adding the buffering layer over iochannels, then replace qemufiles with that. Multifd channels looks ok so far to use as raw channels. > > One thing that I tried to look into for mapped-ram was whether we could > set up iouring in the migration code, but got entirely discouraged by > the migration thread doing IO at random points. And of course, you've > seen what we had to do with direct-io. That was in part due to having > the migration thread in parallel doing it's small writes
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: > > [...] > >> We also don't flush the iov at once, so f->buf seems redundant to >> me. But of course, if we touch any of that we must ensure we're not >> dropping any major optimization. > > Yes some tests over that would be more persuasive when it comes. > > Per my limited experience in the past few years: memcpy on chips nowadays > is pretty cheap. You'll see very soon one more example of that when you > start to look at the qatzip series: that series decided to do one more > memcpy for all guest pages, to make it a larger chunk of buffer instead of > submitting the compression tasks in 4k chunks (while I thought 4k wasn't > too small itself). > > That may be more involved so may not be a great example (e.g. the > compression algo can be special in this case where it just likes larger > buffers), but it's not uncommon that I see people trade things with memcpy, > especially small buffers. > > [...] > >> Any piece of code that fills an iov with data is prone to be able to >> send that data through multifd. From this perspective, multifd is just a >> way to give work to an iochannel. We don't *need* to use it, but it >> might be simple enough to the point that the benefit of ditching >> QEMUFile can be reached without too much rework. >> >> Say we provision multifd threads early and leave them waiting for any >> part of the migration code to send some data. We could have n-1 threads >> idle waiting for the bulk of the data and use a single thread for any >> early traffic that does not need to be parallel. >> >> I'm not suggesting we do any of this right away or even that this is the >> correct way to go, I'm just letting you know some of my ideas and why I >> think ram + device state might not be the only data we put through >> multifd. > > We can wait and see whether that can be of any use in the future, even if > so, we still have chance to add more types into the union, I think. But > again, I don't expect. > > My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) > non-IO, data onto multifd. Again, I would ask "why not the main channel", > otherwise. > > [...] > >> Just to be clear, do you want a thread-pool to replace multifd? Or would >> that be only used for concurrency on the producer side? > > Not replace multifd. It's just that I was imagining multifd threads only > manage IO stuff, nothing else. > > I was indeed thinking whether we can reuse multifd threads, but then I > found there's risk mangling these two concepts, as: when we do more than IO > in multifd threads (e.g., talking to VFIO kernel fetching data which can > block), we have risk of blocking IO even if we can push more so the NICs > can be idle again. There's also the complexity where the job fetches data > from VFIO kernel and want to enqueue again, it means an multifd task can > enqueue to itself, and circular enqueue can be challenging: imagine 8 > concurrent tasks (with a total of 8 multifd threads) trying to enqueue at > the same time; they hunger themselves to death. Things like that. Then I > figured the rest jobs are really fn(void*) type of things; they should > deserve their own pool of threads. > > So the VFIO threads (used to be per-device) becomes migration worker > threads, we need them for both src/dst: on dst there's still pending work > to apply the continuous VFIO data back to the kernel driver, and that can't > be done by multifd thread too due to similar same reason. 
Then those dest > side worker threads can also do load() not only for VFIO but also other > device states if we can add more. > > To summarize, we'll have: > > - 1 main thread (send / recv) > - N multifd threads (IOs only) > - M worker threads (jobs only) > > Of course, postcopy not involved.. How's that sound?

Looks good. There's a better divide between producer and consumer this way. I think it will help when designing new features.

One observation is that we'll still have two different entities doing IO (multifd threads and the migration thread), which I would prefer shared common code at a higher level than the iochannel.

One thing that I tried to look into for mapped-ram was whether we could set up iouring in the migration code, but got entirely discouraged by the migration thread doing IO at random points. And of course, you've seen what we had to do with direct-io. That was in part due to having the migration thread in parallel doing its small writes at undetermined points in time.
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 11, 2024 at 04:37:34PM -0300, Fabiano Rosas wrote: [...] > We also don't flush the iov at once, so f->buf seems redundant to > me. But of course, if we touch any of that we must ensure we're not > dropping any major optimization. Yes some tests over that would be more persuasive when it comes. Per my limited experience in the past few years: memcpy on chips nowadays is pretty cheap. You'll see very soon one more example of that when you start to look at the qatzip series: that series decided to do one more memcpy for all guest pages, to make it a larger chunk of buffer instead of submitting the compression tasks in 4k chunks (while I thought 4k wasn't too small itself). That may be more involved so may not be a great example (e.g. the compression algo can be special in this case where it just likes larger buffers), but it's not uncommon that I see people trade things with memcpy, especially small buffers. [...] > Any piece of code that fills an iov with data is prone to be able to > send that data through multifd. From this perspective, multifd is just a > way to give work to an iochannel. We don't *need* to use it, but it > might be simple enough to the point that the benefit of ditching > QEMUFile can be reached without too much rework. > > Say we provision multifd threads early and leave them waiting for any > part of the migration code to send some data. We could have n-1 threads > idle waiting for the bulk of the data and use a single thread for any > early traffic that does not need to be parallel. > > I'm not suggesting we do any of this right away or even that this is the > correct way to go, I'm just letting you know some of my ideas and why I > think ram + device state might not be the only data we put through > multifd. We can wait and see whether that can be of any use in the future, even if so, we still have chance to add more types into the union, I think. But again, I don't expect. My gut feeling: we shouldn't bother putting any (1) non-huge-chunk, or (2) non-IO, data onto multifd. Again, I would ask "why not the main channel", otherwise. [...] > Just to be clear, do you want a thread-pool to replace multifd? Or would > that be only used for concurrency on the producer side? Not replace multifd. It's just that I was imagining multifd threads only manage IO stuff, nothing else. I was indeed thinking whether we can reuse multifd threads, but then I found there's risk mangling these two concepts, as: when we do more than IO in multifd threads (e.g., talking to VFIO kernel fetching data which can block), we have risk of blocking IO even if we can push more so the NICs can be idle again. There's also the complexity where the job fetches data from VFIO kernel and want to enqueue again, it means an multifd task can enqueue to itself, and circular enqueue can be challenging: imagine 8 concurrent tasks (with a total of 8 multifd threads) trying to enqueue at the same time; they hunger themselves to death. Things like that. Then I figured the rest jobs are really fn(void*) type of things; they should deserve their own pool of threads. So the VFIO threads (used to be per-device) becomes migration worker threads, we need them for both src/dst: on dst there's still pending work to apply the continuous VFIO data back to the kernel driver, and that can't be done by multifd thread too due to similar same reason. Then those dest side worker threads can also do load() not only for VFIO but also other device states if we can add more. 
To summarize, we'll have:

- 1 main thread (send / recv)
- N multifd threads (IOs only)
- M worker threads (jobs only)

Of course, postcopy not involved.. How's that sound?

--
Peter Xu
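To make the "M worker threads (jobs only)" part concrete, below is a minimal sketch of what such a pool could look like if it were simply built on GLib's GThreadPool (which QEMU already links against). The MigWorkerJob type and the mig_worker_*() names are invented for illustration; this is not an existing QEMU API, and a real implementation would presumably integrate with QEMU's existing thread helpers instead.

===8<===
#include <glib.h>

/* Hypothetical job descriptor: plain fn(void *) work items. */
typedef struct MigWorkerJob {
    void (*fn)(void *opaque);
    void *opaque;
} MigWorkerJob;

/* Runs in one of the M worker threads: jobs only, no channel IO here. */
static void mig_worker_run(gpointer data, gpointer user_data)
{
    MigWorkerJob *job = data;

    job->fn(job->opaque);      /* e.g. a blocking VFIO load of one chunk */
    g_free(job);
}

static GThreadPool *mig_worker_pool;

static gboolean mig_worker_pool_init(int m_threads, GError **err)
{
    /* exclusive=TRUE keeps M dedicated threads for migration jobs */
    mig_worker_pool = g_thread_pool_new(mig_worker_run, NULL,
                                        m_threads, TRUE, err);
    return mig_worker_pool != NULL;
}

static void mig_worker_submit(void (*fn)(void *), void *opaque)
{
    MigWorkerJob *job = g_new0(MigWorkerJob, 1);

    job->fn = fn;
    job->opaque = opaque;
    g_thread_pool_push(mig_worker_pool, job, NULL);
}
===8<===

Keeping jobs down to plain fn(void *) items matches the observation above that these tasks don't need to be multifd channels at all, which is what avoids blocking the IO threads or having a multifd thread enqueue onto itself.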
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote: >> What about the QEMUFile traffic? There's an iov in there. I have been >> thinking of replacing some of qemu-file.c guts with calls to >> multifd. Instead of several qemu_put_byte() we could construct an iov >> and give it to multifd for transfering, call multifd_sync at the end and >> get rid of the QEMUFile entirely. I don't have that completely laid out >> at the moment, but I think it should be possible. I get concerned about >> making assumptions on the types of data we're ever going to want to >> transmit. I bet someone thought in the past that multifd would never be >> used for anything other than ram. > > Hold on a bit.. there're two things I want to clarity with you. > > Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing them > to iochannels may regress performance. I never checked, but I would assume > some buffering will be needed for small chunk of data even with iochannels. Right, but there's an extra memcpy to do that. Not sure how those balance out. We also don't flush the iov at once, so f->buf seems redundant to me. But of course, if we touch any of that we must ensure we're not dropping any major optimization. > Secondly, why multifd has things to do with this? What you're talking > about is more like the rework of qemufile->iochannel thing to me, and IIUC > that doesn't yet involve multifd. For many of such conversions, it'll > still be operating on the main channel, which is not the multifd channels. > What matters might be about what's in your mind to be put over multifd > channels there. > Any piece of code that fills an iov with data is prone to be able to send that data through multifd. From this perspective, multifd is just a way to give work to an iochannel. We don't *need* to use it, but it might be simple enough to the point that the benefit of ditching QEMUFile can be reached without too much rework. Say we provision multifd threads early and leave them waiting for any part of the migration code to send some data. We could have n-1 threads idle waiting for the bulk of the data and use a single thread for any early traffic that does not need to be parallel. I'm not suggesting we do any of this right away or even that this is the correct way to go, I'm just letting you know some of my ideas and why I think ram + device state might not be the only data we put through multifd. >> >> > >> > I wonder why handshake needs to be done per-thread. I was naturally >> > thinking the handshake should happen sequentially, talking over everything >> > including multifd. >> >> Well, it would still be thread based. Just that it would be 1 thread and >> it would not be managed by multifd. I don't see the point. We could make >> everything be multifd-based. Any piece of data that needs to reach the >> other side of the migration could be sent through multifd, no? > > Hmm yes we can. But what do we gain from it, if we know it'll be a few > MBs in total? There ain't a lot of huge stuff to move, it seems to me. Well it depends on what the alternative is. If we're going to create a thread to send small chunks of data anyway, we could use the multifd threads instead. > >> >> Also, when you say "per-thread", that's the model we're trying to get >> away from. There should be nothing "per-thread", the threads just >> consume the data produced by the clients. Anything "per-thread" that is >> not strictly related to the thread model should go away. 
For instance, >> p->page_size, p->page_count, p->write_flags, p->flags, etc. None of >> these should be in MultiFDSendParams. That thing should be (say) >> MultifdChannelState and contain only the semaphores and control flags >> for the threads. >> >> It would be nice if we could once and for all have a model that can >> dispatch data transfers without having to fiddle with threading all the >> time. Any time someone wants to do something different in the migration >> code, there it goes a random qemu_create_thread() flying around. > > That's exactly what I want to avoid. Not all things will need a thread, > only performance relevant ones. > > So now we have multifd threads, they're for IO throughputs: if we want to > push a fast NIC, that's the only way to go. Anything that wants to push that > NIC should use multifd. > > Then it turns out we want more concurrency, it's about VFIO save()/load() > of the kernel drivers and it can block. Same to other devices that can > take time to save()/load() if it can happen concurrently in the future. I > think that's the reason why I suggested the VFIO solution to provide a > generic concept of thread pool so it services a generic purpose, and can be > reused in the future.

Just to be clear, do you want a thread-pool to replace multifd? Or would that be only used for concurrency on the producer side?

> I hope that'll stop anyone else on migration to create yet another thread > randomly, and I definitely don't like that either.
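As a rough sketch of the "fill an iov and hand it to the channel" idea being floated here (not the actual QEMUFile conversion, which nobody has written yet), a producer could batch a small header plus a payload into one vectored write. qio_channel_writev_all() is the existing QIOChannel helper; send_blob() and its framing are made up for the example.

===8<===
#include "qemu/osdep.h"
#include "qemu/bswap.h"
#include "io/channel.h"

/*
 * Hypothetical example: send a header plus a payload buffer as a
 * single scatter/gather write, instead of copying bytes into the
 * QEMUFile f->buf[] first.
 */
static int send_blob(QIOChannel *ioc, uint32_t id,
                     const void *payload, uint32_t len, Error **errp)
{
    uint32_t header[2] = { cpu_to_be32(id), cpu_to_be32(len) };
    struct iovec iov[2] = {
        { .iov_base = header, .iov_len = sizeof(header) },
        { .iov_base = (void *)payload, .iov_len = len },
    };

    /* One vectored write per blob; any batching would happen above this. */
    return qio_channel_writev_all(ioc, iov, 2, errp);
}
===8<===

Whether something like this beats QEMUFile's internal buffering for runs of very small writes is exactly the measurement question raised earlier in the thread.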
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jul 11, 2024 at 11:12:09AM -0300, Fabiano Rosas wrote: > What about the QEMUFile traffic? There's an iov in there. I have been > thinking of replacing some of qemu-file.c guts with calls to > multifd. Instead of several qemu_put_byte() we could construct an iov > and give it to multifd for transfering, call multifd_sync at the end and > get rid of the QEMUFile entirely. I don't have that completely laid out > at the moment, but I think it should be possible. I get concerned about > making assumptions on the types of data we're ever going to want to > transmit. I bet someone thought in the past that multifd would never be > used for anything other than ram. Hold on a bit.. there're two things I want to clarity with you. Firstly, qemu_put_byte() has buffering on f->buf[]. Directly changing them to iochannels may regress performance. I never checked, but I would assume some buffering will be needed for small chunk of data even with iochannels. Secondly, why multifd has things to do with this? What you're talking about is more like the rework of qemufile->iochannel thing to me, and IIUC that doesn't yet involve multifd. For many of such conversions, it'll still be operating on the main channel, which is not the multifd channels. What matters might be about what's in your mind to be put over multifd channels there. > > > > > I wonder why handshake needs to be done per-thread. I was naturally > > thinking the handshake should happen sequentially, talking over everything > > including multifd. > > Well, it would still be thread based. Just that it would be 1 thread and > it would not be managed by multifd. I don't see the point. We could make > everything be multifd-based. Any piece of data that needs to reach the > other side of the migration could be sent through multifd, no? Hmm yes we can. But what do we gain from it, if we know it'll be a few MBs in total? There ain't a lot of huge stuff to move, it seems to me. > > Also, when you say "per-thread", that's the model we're trying to get > away from. There should be nothing "per-thread", the threads just > consume the data produced by the clients. Anything "per-thread" that is > not strictly related to the thread model should go away. For instance, > p->page_size, p->page_count, p->write_flags, p->flags, etc. None of > these should be in MultiFDSendParams. That thing should be (say) > MultifdChannelState and contain only the semaphores and control flags > for the threads. > > It would be nice if we could once and for all have a model that can > dispatch data transfers without having to fiddle with threading all the > time. Any time someone wants to do something different in the migration > code, there it goes a random qemu_create_thread() flying around. That's exactly what I want to avoid. Not all things will need a thread, only performance relevant ones. So now we have multifd threads, they're for IO throughputs: if we want to push a fast NIC, that's the only way to go. Anything wants to push that NIC, should use multifd. Then it turns out we want more concurrency, it's about VFIO save()/load() of the kenrel drivers and it can block. Same to other devices that can take time to save()/load() if it can happen concurrently in the future. I think that's the reason why I suggested the VFIO solution to provide a generic concept of thread pool so it services a generic purpose, and can be reused in the future. I hope that'll stop anyone else on migration to create yet another thread randomly, and I definitely don't like that either. 
I would _suspect_ the next one to come as such is TDX.. I remember at least in the very initial proposal years ago, TDX migration involves its own "channel" to migrate, migration.c may not even know where is that channel. We'll see. [...] > > One thing to mention is that when with an union we may probably need to get > > rid of multifd_send_state->pages already. > > Hehe, please don't do this like "oh, by the way...". This is a major > pain point. I've been complaining about that "holding of client data" > since the fist time I read that code. So if you're going to propose > something, it needs to account for that. The client puts something into a buffer (SendData), then it delivers it to multifd (who silently switches the buffer). After enqueued, the client assumes the buffer is sent and reusable again. It looks pretty common to me, what is the concern within the procedure? What's the "holding of client data" issue? > > > The object can't be a global > > cache (in which case so far it's N+1, N being n_multifd_channels, while "1" > > is the extra buffer as only RAM uses it). In the union world we'll need to > > allocate M+N SendData, where N is still the n_multifd_channels, and M is > > the number of users, in VFIO's case, VFIO allocates the cached SendData and > > use that to enqueue, right after enqueue it'll get a free one by switching > > it with another one in the multifd's array[N]. Sam
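To make the M+N SendData accounting above concrete, here is a small sketch of the hand-off rule being described: multifd keeps one SendData per channel, each client keeps exactly one cached SendData it fills, and enqueueing swaps the two pointers. Everything below except the MultiFDSendData name is hypothetical.

===8<===
/* Sketch only; MultiFDSendData is the union-based payload buffer. */
typedef struct MultiFDSendData MultiFDSendData;

#define HYPOTHETICAL_MAX_CHANNELS 16

/* The "N": one buffer slot per multifd channel. */
static MultiFDSendData *channel_data[HYPOTHETICAL_MAX_CHANNELS];

/*
 * 'chan' must be an idle channel, i.e. one whose previous buffer has
 * already been sent.  The filled buffer goes to the channel; the
 * channel's old, now-unused buffer comes back to the caller, so the
 * hot path needs no allocation and no memcpy.
 */
static MultiFDSendData *multifd_enqueue(int chan, MultiFDSendData *filled)
{
    MultiFDSendData *spare = channel_data[chan];

    channel_data[chan] = filled;
    /* ... kick the channel's sem so it starts sending 'filled' ... */
    return spare;
}

/* The "M": each client (RAM, VFIO, ...) owns one cached buffer. */
static void client_send_one(int idle_chan, MultiFDSendData **cached)
{
    /* fully populate (*cached) -- type tag and payload fields -- then: */
    *cached = multifd_enqueue(idle_chan, *cached);
}
===8<===

This is the sense in which the buffer is "silently switched": after the enqueue the client may only touch the returned spare, never the buffer it just handed over.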
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Wed, Jul 10, 2024 at 05:16:36PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: >> >> Peter Xu writes: >> >> >> >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: >> >> >> > Or graphically: >> >> >> > >> >> >> > 1) client fills the active slot with data. Channels point to nothing >> >> >> >at this point: >> >> >> > [a] <-- active slot >> >> >> > [][][][] <-- free slots, one per-channel >> >> >> > >> >> >> > [][][][] <-- channels' p->data pointers >> >> >> > >> >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels >> >> >> >still point to nothing: >> >> >> > [] >> >> >> > [a][][][] >> >> >> > >> >> >> > [][][][] >> >> >> > >> >> >> > 3) multifd_send() finds an idle channel and updates its pointer: >> >> >> >> >> >> It seems the action "finds an idle channel" is in step 2 rather than >> >> >> step 3, >> >> >> which means the free slot is selected based on the id of the channel >> >> >> found, am I >> >> >> understanding correctly? >> >> > >> >> > I think you're right. >> >> > >> >> > Actually I also feel like the desription here is ambiguous, even though >> >> > I >> >> > think I get what Fabiano wanted to say. >> >> > >> >> > The free slot should be the first step of step 2+3, here what Fabiano >> >> > really wanted to suggest is we move the free buffer array from multifd >> >> > channels into the callers, then the caller can pass in whatever data to >> >> > send. >> >> > >> >> > So I think maybe it's cleaner to write it as this in code (note: I >> >> > didn't >> >> > really change the code, just some ordering and comments): >> >> > >> >> > ===8<=== >> >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) >> >> > */ >> >> > active_slot = slots->active; >> >> > slots->active = slots->free[p->id]; >> >> > -p->data = active_slot; >> >> > - >> >> > -/* >> >> > - * By the next time we arrive here, the channel will certainly >> >> > - * have consumed the active slot. Put it back on the free list >> >> > - * now. >> >> > - */ >> >> > slots->free[p->id] = active_slot; >> >> > >> >> > +/* Assign the current active slot to the chosen thread */ >> >> > +p->data = active_slot; >> >> > ===8<=== >> >> > >> >> > The comment I removed is slightly misleading to me too, because right >> >> > now >> >> > active_slot contains the data hasn't yet been delivered to multifd, so >> >> > we're "putting it back to free list" not because of it's free, but >> >> > because >> >> > we know it won't get used until the multifd send thread consumes it >> >> > (because before that the thread will be busy, and we won't use the >> >> > buffer >> >> > if so in upcoming send()s). >> >> > >> >> > And then when I'm looking at this again, I think maybe it's a slight >> >> > overkill, and maybe we can still keep the "opaque data" managed by >> >> > multifd. >> >> > One reason might be that I don't expect the "opaque data" payload keep >> >> > growing at all: it should really be either RAM or device state as I >> >> > commented elsewhere in a relevant thread, after all it's a thread model >> >> > only for migration purpose to move vmstates.. >> >> >> >> Some amount of flexibility needs to be baked in. For instance, what >> >> about the handshake procedure? Don't we want to use multifd threads to >> >> put some information on the wire for that as well? >> > >> > Is this an orthogonal question? >> >> I don't think so. You say the payload data should be either RAM or >> device state. 
I'm asking what other types of data do we want the multifd >> channel to transmit and suggesting we need to allow room for the >> addition of that, whatever it is. One thing that comes to mind that is >> neither RAM or device state is some form of handshake or capabilities >> negotiation. > > Indeed what I thought was multifd payload should be either ram or device, > nothing else. The worst case is we can add one more into the union, but I > can't think of.

What about the QEMUFile traffic? There's an iov in there. I have been thinking of replacing some of qemu-file.c guts with calls to multifd. Instead of several qemu_put_byte() we could construct an iov and give it to multifd for transferring, call multifd_sync at the end and get rid of the QEMUFile entirely. I don't have that completely laid out at the moment, but I think it should be possible. I get concerned about making assumptions on the types of data we're ever going to want to transmit. I bet someone thought in the past that multifd would never be used for anything other than ram.

> > I wonder why handshake needs to be done per-thread. I was naturally > thinking the handshake should happen sequentially, talking over everything > including multifd.

Well, it would still be thread based. Just that it would be 1 thread and it would not be managed by multifd. I don't see the point. We could make everything be multifd-based. Any piece of data that needs to reach the other side of the migration could be sent through multifd, no?
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Wed, Jul 10, 2024 at 05:16:36PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: > >> Peter Xu writes: > >> > >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: > >> >> > Or graphically: > >> >> > > >> >> > 1) client fills the active slot with data. Channels point to nothing > >> >> >at this point: > >> >> > [a] <-- active slot > >> >> > [][][][] <-- free slots, one per-channel > >> >> > > >> >> > [][][][] <-- channels' p->data pointers > >> >> > > >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels > >> >> >still point to nothing: > >> >> > [] > >> >> > [a][][][] > >> >> > > >> >> > [][][][] > >> >> > > >> >> > 3) multifd_send() finds an idle channel and updates its pointer: > >> >> > >> >> It seems the action "finds an idle channel" is in step 2 rather than > >> >> step 3, > >> >> which means the free slot is selected based on the id of the channel > >> >> found, am I > >> >> understanding correctly? > >> > > >> > I think you're right. > >> > > >> > Actually I also feel like the desription here is ambiguous, even though I > >> > think I get what Fabiano wanted to say. > >> > > >> > The free slot should be the first step of step 2+3, here what Fabiano > >> > really wanted to suggest is we move the free buffer array from multifd > >> > channels into the callers, then the caller can pass in whatever data to > >> > send. > >> > > >> > So I think maybe it's cleaner to write it as this in code (note: I didn't > >> > really change the code, just some ordering and comments): > >> > > >> > ===8<=== > >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) > >> > */ > >> > active_slot = slots->active; > >> > slots->active = slots->free[p->id]; > >> > -p->data = active_slot; > >> > - > >> > -/* > >> > - * By the next time we arrive here, the channel will certainly > >> > - * have consumed the active slot. Put it back on the free list > >> > - * now. > >> > - */ > >> > slots->free[p->id] = active_slot; > >> > > >> > +/* Assign the current active slot to the chosen thread */ > >> > +p->data = active_slot; > >> > ===8<=== > >> > > >> > The comment I removed is slightly misleading to me too, because right > >> > now > >> > active_slot contains the data hasn't yet been delivered to multifd, so > >> > we're "putting it back to free list" not because of it's free, but > >> > because > >> > we know it won't get used until the multifd send thread consumes it > >> > (because before that the thread will be busy, and we won't use the buffer > >> > if so in upcoming send()s). > >> > > >> > And then when I'm looking at this again, I think maybe it's a slight > >> > overkill, and maybe we can still keep the "opaque data" managed by > >> > multifd. > >> > One reason might be that I don't expect the "opaque data" payload keep > >> > growing at all: it should really be either RAM or device state as I > >> > commented elsewhere in a relevant thread, after all it's a thread model > >> > only for migration purpose to move vmstates.. > >> > >> Some amount of flexibility needs to be baked in. For instance, what > >> about the handshake procedure? Don't we want to use multifd threads to > >> put some information on the wire for that as well? > > > > Is this an orthogonal question? > > I don't think so. You say the payload data should be either RAM or > device state. 
I'm asking what other types of data do we want the multifd > channel to transmit and suggesting we need to allow room for the > addition of that, whatever it is. One thing that comes to mind that is > neither RAM or device state is some form of handshake or capabilities > negotiation. Indeed what I thought was multifd payload should be either ram or device, nothing else. The worst case is we can add one more into the union, but I can't think of. I wonder why handshake needs to be done per-thread. I was naturally thinking the handshake should happen sequentially, talking over everything including multifd. IMO multifd to have these threads are mostly for the sake of performance. I sometimes think we have some tiny places where we "over-engineered" multifd, e.g. on attaching ZLIB/ZSTD/... flags on each packet header, even if they should never change, and that is the part of thing we can put into handshake too, and after handshake we should assume both sides and all threads are in sync. There's no need to worry compressor per-packet, per-channel. It could be a global thing and done upfront, even if Libvirt didn't guarantee those. > > > > > What I meant above is it looks fine to me to keep "device state" in > > multifd.c, as long as it is not only about VFIO. > > > > What you were saying seems to be about how to identify this is a device > > state, then I just hope VFIO shares the same flag with any future device > > that would also like to send its st
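To illustrate the "negotiate once, not per packet" point, below is a purely hypothetical shape for such a handshake frame; nothing like this exists in the code today, and the field list is only a guess at what both sides would want to pin up front.

===8<===
#include <stdint.h>

/*
 * Hypothetical per-channel handshake, sent once right after the
 * initial packet.  Once both sides have seen it, everything below is
 * fixed for the whole session, so the per-packet header no longer
 * needs to repeat compression bits, page size, etc.
 */
typedef struct {
    uint32_t magic;
    uint32_t version;
    uint8_t  compression;     /* none / zlib / zstd / ..., fixed per session */
    uint8_t  reserved[3];
    uint32_t page_size;       /* agreed once instead of echoed per packet */
    uint32_t page_count;
} __attribute__((packed)) MultiFDHandshake_t;
===8<===

With something along these lines, a compressor mismatch would fail the handshake immediately instead of being rediscovered on every packet.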
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: >> Peter Xu writes: >> >> > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: >> >> > Or graphically: >> >> > >> >> > 1) client fills the active slot with data. Channels point to nothing >> >> >at this point: >> >> > [a] <-- active slot >> >> > [][][][] <-- free slots, one per-channel >> >> > >> >> > [][][][] <-- channels' p->data pointers >> >> > >> >> > 2) multifd_send() swaps the pointers inside the client slot. Channels >> >> >still point to nothing: >> >> > [] >> >> > [a][][][] >> >> > >> >> > [][][][] >> >> > >> >> > 3) multifd_send() finds an idle channel and updates its pointer: >> >> >> >> It seems the action "finds an idle channel" is in step 2 rather than step >> >> 3, >> >> which means the free slot is selected based on the id of the channel >> >> found, am I >> >> understanding correctly? >> > >> > I think you're right. >> > >> > Actually I also feel like the desription here is ambiguous, even though I >> > think I get what Fabiano wanted to say. >> > >> > The free slot should be the first step of step 2+3, here what Fabiano >> > really wanted to suggest is we move the free buffer array from multifd >> > channels into the callers, then the caller can pass in whatever data to >> > send. >> > >> > So I think maybe it's cleaner to write it as this in code (note: I didn't >> > really change the code, just some ordering and comments): >> > >> > ===8<=== >> > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) >> > */ >> > active_slot = slots->active; >> > slots->active = slots->free[p->id]; >> > -p->data = active_slot; >> > - >> > -/* >> > - * By the next time we arrive here, the channel will certainly >> > - * have consumed the active slot. Put it back on the free list >> > - * now. >> > - */ >> > slots->free[p->id] = active_slot; >> > >> > +/* Assign the current active slot to the chosen thread */ >> > +p->data = active_slot; >> > ===8<=== >> > >> > The comment I removed is slightly misleading to me too, because right now >> > active_slot contains the data hasn't yet been delivered to multifd, so >> > we're "putting it back to free list" not because of it's free, but because >> > we know it won't get used until the multifd send thread consumes it >> > (because before that the thread will be busy, and we won't use the buffer >> > if so in upcoming send()s). >> > >> > And then when I'm looking at this again, I think maybe it's a slight >> > overkill, and maybe we can still keep the "opaque data" managed by multifd. >> > One reason might be that I don't expect the "opaque data" payload keep >> > growing at all: it should really be either RAM or device state as I >> > commented elsewhere in a relevant thread, after all it's a thread model >> > only for migration purpose to move vmstates.. >> >> Some amount of flexibility needs to be baked in. For instance, what >> about the handshake procedure? Don't we want to use multifd threads to >> put some information on the wire for that as well? > > Is this an orthogonal question? I don't think so. You say the payload data should be either RAM or device state. I'm asking what other types of data do we want the multifd channel to transmit and suggesting we need to allow room for the addition of that, whatever it is. One thing that comes to mind that is neither RAM or device state is some form of handshake or capabilities negotiation. 
> > What I meant above is it looks fine to me to keep "device state" in > multifd.c, as long as it is not only about VFIO. > > What you were saying seems to be about how to identify this is a device > state, then I just hope VFIO shares the same flag with any future device > that would also like to send its state via multifd, like: > > #define MULTIFD_FLAG_DEVICE_STATE (32 << 1) > > Then set it in MultiFDPacket_t.flags. The dest qemu should route that > packet to the device vmsd / save_entry for parsing. Sure, that part I agree with, no issue here. > >> >> > Putting it managed by multifd thread should involve less change than this >> > series, but it could look like this: >> > >> > typedef enum { >> > MULTIFD_PAYLOAD_RAM = 0, >> > MULTIFD_PAYLOAD_DEVICE_STATE = 1, >> > } MultifdPayloadType; >> > >> > typedef enum { >> > MultiFDPages_t ram_payload; >> > MultifdDeviceState_t device_payload; >> > } MultifdPayload; >> > >> > struct MultiFDSendData { >> > MultifdPayloadType type; >> > MultifdPayload data; >> > }; >> >> Is that an union up there? So you want to simply allocate in multifd the > > Yes. > >> max amount of memory between the two types of payload? But then we'll > > Yes. > >> need a memset(p->data, 0, ...) at every round of sending to avoid giving >> stale data from one client to another. That doesn't work with the > > I think as long as the one to enqueue will always setup the fields, we > don't
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Wed, Jul 10, 2024 at 01:10:37PM -0300, Fabiano Rosas wrote: > Peter Xu writes: > > > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: > >> > Or graphically: > >> > > >> > 1) client fills the active slot with data. Channels point to nothing > >> >at this point: > >> > [a] <-- active slot > >> > [][][][] <-- free slots, one per-channel > >> > > >> > [][][][] <-- channels' p->data pointers > >> > > >> > 2) multifd_send() swaps the pointers inside the client slot. Channels > >> >still point to nothing: > >> > [] > >> > [a][][][] > >> > > >> > [][][][] > >> > > >> > 3) multifd_send() finds an idle channel and updates its pointer: > >> > >> It seems the action "finds an idle channel" is in step 2 rather than step > >> 3, > >> which means the free slot is selected based on the id of the channel > >> found, am I > >> understanding correctly? > > > > I think you're right. > > > > Actually I also feel like the desription here is ambiguous, even though I > > think I get what Fabiano wanted to say. > > > > The free slot should be the first step of step 2+3, here what Fabiano > > really wanted to suggest is we move the free buffer array from multifd > > channels into the callers, then the caller can pass in whatever data to > > send. > > > > So I think maybe it's cleaner to write it as this in code (note: I didn't > > really change the code, just some ordering and comments): > > > > ===8<=== > > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) > > */ > > active_slot = slots->active; > > slots->active = slots->free[p->id]; > > -p->data = active_slot; > > - > > -/* > > - * By the next time we arrive here, the channel will certainly > > - * have consumed the active slot. Put it back on the free list > > - * now. > > - */ > > slots->free[p->id] = active_slot; > > > > +/* Assign the current active slot to the chosen thread */ > > +p->data = active_slot; > > ===8<=== > > > > The comment I removed is slightly misleading to me too, because right now > > active_slot contains the data hasn't yet been delivered to multifd, so > > we're "putting it back to free list" not because of it's free, but because > > we know it won't get used until the multifd send thread consumes it > > (because before that the thread will be busy, and we won't use the buffer > > if so in upcoming send()s). > > > > And then when I'm looking at this again, I think maybe it's a slight > > overkill, and maybe we can still keep the "opaque data" managed by multifd. > > One reason might be that I don't expect the "opaque data" payload keep > > growing at all: it should really be either RAM or device state as I > > commented elsewhere in a relevant thread, after all it's a thread model > > only for migration purpose to move vmstates.. > > Some amount of flexibility needs to be baked in. For instance, what > about the handshake procedure? Don't we want to use multifd threads to > put some information on the wire for that as well? Is this an orthogonal question? What I meant above is it looks fine to me to keep "device state" in multifd.c, as long as it is not only about VFIO. What you were saying seems to be about how to identify this is a device state, then I just hope VFIO shares the same flag with any future device that would also like to send its state via multifd, like: #define MULTIFD_FLAG_DEVICE_STATE (32 << 1) Then set it in MultiFDPacket_t.flags. The dest qemu should route that packet to the device vmsd / save_entry for parsing. 
> > > Putting it managed by multifd thread should involve less change than this > > series, but it could look like this: > > > > typedef enum { > > MULTIFD_PAYLOAD_RAM = 0, > > MULTIFD_PAYLOAD_DEVICE_STATE = 1, > > } MultifdPayloadType; > > > > typedef enum { > > MultiFDPages_t ram_payload; > > MultifdDeviceState_t device_payload; > > } MultifdPayload; > > > > struct MultiFDSendData { > > MultifdPayloadType type; > > MultifdPayload data; > > }; > > Is that an union up there? So you want to simply allocate in multifd the Yes. > max amount of memory between the two types of payload? But then we'll Yes. > need a memset(p->data, 0, ...) at every round of sending to avoid giving > stale data from one client to another. That doesn't work with the I think as long as the one to enqueue will always setup the fields, we don't need to do memset. I am not sure if it's a major concern to always set all the relevant fields in the multifd enqueue threads. It sounds like the thing we should always better do. > current ram migration because it wants p->pages to remain active across > several calls of multifd_queue_page(). I don't think I followed here. What I meant: QEMU maintains SendData[8], now a bunch of pages arrives, it enqueues "pages" into a free slot index 2 (set type=pages), then before thread 2 finished sending the bunch of pages, SendData[2] will always represent those
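A compact sketch of the union layout plus the "whoever enqueues sets every field" rule that makes the memset unnecessary. The type/union shape follows the proposal quoted above; the contents of MultifdDeviceState_t and the multifd_queue_device_state() helper are assumptions made up for the example (and MultiFDPages_t is left out to keep the snippet self-contained).

===8<===
#include <glib.h>
#include <stdint.h>

/* Hypothetical in-memory device-state payload (not the wire packet). */
typedef struct {
    char idstr[256];
    uint32_t instance_id;
    const void *buf;
    size_t buf_len;
} MultifdDeviceState_t;

typedef enum {
    MULTIFD_PAYLOAD_RAM = 0,
    MULTIFD_PAYLOAD_DEVICE_STATE = 1,
} MultifdPayloadType;

struct MultiFDSendData {
    MultifdPayloadType type;
    union {
        /* MultiFDPages_t ram_payload; */
        MultifdDeviceState_t device_payload;
    } data;                      /* sized as the max of its members */
};

/* The producer initialises every field of the member it is using... */
static void multifd_queue_device_state(struct MultiFDSendData *d,
                                       const char *idstr,
                                       uint32_t instance_id,
                                       const void *buf, size_t buf_len)
{
    d->type = MULTIFD_PAYLOAD_DEVICE_STATE;
    g_strlcpy(d->data.device_payload.idstr, idstr,
              sizeof(d->data.device_payload.idstr));
    d->data.device_payload.instance_id = instance_id;
    d->data.device_payload.buf = buf;
    d->data.device_payload.buf_len = buf_len;
    /*
     * ...so whatever a previous RAM payload left behind in the union is
     * never read: the consumer dispatches strictly on d->type.
     */
}
===8<===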
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Peter Xu writes: > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: >> > Or graphically: >> > >> > 1) client fills the active slot with data. Channels point to nothing >> >at this point: >> > [a] <-- active slot >> > [][][][] <-- free slots, one per-channel >> > >> > [][][][] <-- channels' p->data pointers >> > >> > 2) multifd_send() swaps the pointers inside the client slot. Channels >> >still point to nothing: >> > [] >> > [a][][][] >> > >> > [][][][] >> > >> > 3) multifd_send() finds an idle channel and updates its pointer: >> >> It seems the action "finds an idle channel" is in step 2 rather than step 3, >> which means the free slot is selected based on the id of the channel found, >> am I >> understanding correctly? > > I think you're right. > > Actually I also feel like the desription here is ambiguous, even though I > think I get what Fabiano wanted to say. > > The free slot should be the first step of step 2+3, here what Fabiano > really wanted to suggest is we move the free buffer array from multifd > channels into the callers, then the caller can pass in whatever data to > send. > > So I think maybe it's cleaner to write it as this in code (note: I didn't > really change the code, just some ordering and comments): > > ===8<=== > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) > */ > active_slot = slots->active; > slots->active = slots->free[p->id]; > -p->data = active_slot; > - > -/* > - * By the next time we arrive here, the channel will certainly > - * have consumed the active slot. Put it back on the free list > - * now. > - */ > slots->free[p->id] = active_slot; > > +/* Assign the current active slot to the chosen thread */ > +p->data = active_slot; > ===8<=== > > The comment I removed is slightly misleading to me too, because right now > active_slot contains the data hasn't yet been delivered to multifd, so > we're "putting it back to free list" not because of it's free, but because > we know it won't get used until the multifd send thread consumes it > (because before that the thread will be busy, and we won't use the buffer > if so in upcoming send()s). > > And then when I'm looking at this again, I think maybe it's a slight > overkill, and maybe we can still keep the "opaque data" managed by multifd. > One reason might be that I don't expect the "opaque data" payload keep > growing at all: it should really be either RAM or device state as I > commented elsewhere in a relevant thread, after all it's a thread model > only for migration purpose to move vmstates.. Some amount of flexibility needs to be baked in. For instance, what about the handshake procedure? Don't we want to use multifd threads to put some information on the wire for that as well? > Putting it managed by multifd thread should involve less change than this > series, but it could look like this: > > typedef enum { > MULTIFD_PAYLOAD_RAM = 0, > MULTIFD_PAYLOAD_DEVICE_STATE = 1, > } MultifdPayloadType; > > typedef enum { > MultiFDPages_t ram_payload; > MultifdDeviceState_t device_payload; > } MultifdPayload; > > struct MultiFDSendData { > MultifdPayloadType type; > MultifdPayload data; > }; Is that an union up there? So you want to simply allocate in multifd the max amount of memory between the two types of payload? But then we'll need a memset(p->data, 0, ...) at every round of sending to avoid giving stale data from one client to another. That doesn't work with the current ram migration because it wants p->pages to remain active across several calls of multifd_queue_page(). 
> > Then the "enum" makes sure the payload only consumes only the max of both > types; a side benefit to save some memory. > > I think we need to make sure MultifdDeviceState_t is generic enough so that > it will work for mostly everything (especially normal VMSDs). In this case > the VFIO series should be good as that was currently defined as: > > typedef struct { > MultiFDPacketHdr_t hdr; > > char idstr[256] QEMU_NONSTRING; > uint32_t instance_id; > > /* size of the next packet that contains the actual data */ > uint32_t next_packet_size; > } __attribute__((packed)) MultiFDPacketDeviceState_t; This is the packet, a different thing. Not sure if your paragraph above means to talk about that or really MultifdDeviceState, which is what is exchanged between the multifd threads and the client code. > > IIUC that was what we need exactly with idstr+instance_id, so as to nail > exactly at where should the "opaque device state" go to, then load it with > a buffer-based loader when it's ready (starting from VFIO, to get rid of > qemufile). For VMSDs in the future if ever possible, that should be a > modified version of vmstate_load() where it may take buffers not qemufiles. > > To Maciej: please see whether above makes sense to you, and if you also > agree please consider that with your VFIO wor
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jun 27, 2024 at 10:40:11AM -0400, Peter Xu wrote: > On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: > > > Or graphically: > > > > > > 1) client fills the active slot with data. Channels point to nothing > > >at this point: > > > [a] <-- active slot > > > [][][][] <-- free slots, one per-channel > > > > > > [][][][] <-- channels' p->data pointers > > > > > > 2) multifd_send() swaps the pointers inside the client slot. Channels > > >still point to nothing: > > > [] > > > [a][][][] > > > > > > [][][][] > > > > > > 3) multifd_send() finds an idle channel and updates its pointer: > > > > It seems the action "finds an idle channel" is in step 2 rather than step 3, > > which means the free slot is selected based on the id of the channel found, > > am I > > understanding correctly? > > I think you're right. > > Actually I also feel like the desription here is ambiguous, even though I > think I get what Fabiano wanted to say. > > The free slot should be the first step of step 2+3, here what Fabiano > really wanted to suggest is we move the free buffer array from multifd > channels into the callers, then the caller can pass in whatever data to > send. > > So I think maybe it's cleaner to write it as this in code (note: I didn't > really change the code, just some ordering and comments): > > ===8<=== > @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) > */ > active_slot = slots->active; > slots->active = slots->free[p->id]; > -p->data = active_slot; > - > -/* > - * By the next time we arrive here, the channel will certainly > - * have consumed the active slot. Put it back on the free list > - * now. > - */ > slots->free[p->id] = active_slot; > > +/* Assign the current active slot to the chosen thread */ > +p->data = active_slot; > ===8<=== > > The comment I removed is slightly misleading to me too, because right now > active_slot contains the data hasn't yet been delivered to multifd, so > we're "putting it back to free list" not because of it's free, but because > we know it won't get used until the multifd send thread consumes it > (because before that the thread will be busy, and we won't use the buffer > if so in upcoming send()s). > > And then when I'm looking at this again, I think maybe it's a slight > overkill, and maybe we can still keep the "opaque data" managed by multifd. > One reason might be that I don't expect the "opaque data" payload keep > growing at all: it should really be either RAM or device state as I > commented elsewhere in a relevant thread, after all it's a thread model > only for migration purpose to move vmstates.. > > Putting it managed by multifd thread should involve less change than this > series, but it could look like this: > > typedef enum { > MULTIFD_PAYLOAD_RAM = 0, > MULTIFD_PAYLOAD_DEVICE_STATE = 1, > } MultifdPayloadType; > > typedef enum { > MultiFDPages_t ram_payload; > MultifdDeviceState_t device_payload; > } MultifdPayload; PS: please conditionally read "enum" as "union" throughout the previous email of mine, sorry. [I'll leave that to readers to decide when should do the replacement..] > > struct MultiFDSendData { > MultifdPayloadType type; > MultifdPayload data; > }; > > Then the "enum" makes sure the payload only consumes only the max of both > types; a side benefit to save some memory. > > I think we need to make sure MultifdDeviceState_t is generic enough so that > it will work for mostly everything (especially normal VMSDs). 
In this case > the VFIO series should be good as that was currently defined as: > > typedef struct { > MultiFDPacketHdr_t hdr; > > char idstr[256] QEMU_NONSTRING; > uint32_t instance_id; > > /* size of the next packet that contains the actual data */ > uint32_t next_packet_size; > } __attribute__((packed)) MultiFDPacketDeviceState_t; > > IIUC that was what we need exactly with idstr+instance_id, so as to nail > exactly at where should the "opaque device state" go to, then load it with > a buffer-based loader when it's ready (starting from VFIO, to get rid of > qemufile). For VMSDs in the future if ever possible, that should be a > modified version of vmstate_load() where it may take buffers not qemufiles. > > To Maciej: please see whether above makes sense to you, and if you also > agree please consider that with your VFIO work. > > Thanks, > > > > > > [] > > > [a][][][] > > > > > > [a][][][] > > > ^idle > > -- > Peter Xu -- Peter Xu
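On the receive side, the routing described above could look roughly like the sketch below once a packet arrives with MULTIFD_FLAG_DEVICE_STATE set. MultiFDPacketDeviceState_t is the struct quoted above from the VFIO series; multifd_recv_device_state() and the buffer-based vmstate_load_buffer() hook are hypothetical stand-ins for the "modified vmstate_load() that takes buffers, not qemufiles" idea.

===8<===
/* Hypothetical, does not exist today: buffer-based vmstate loader. */
int vmstate_load_buffer(const char *idstr, uint32_t instance_id,
                        const void *buf, size_t len, Error **errp);

/*
 * Sketch: the dest channel thread sees MULTIFD_FLAG_DEVICE_STATE in
 * the packet flags, reads the device-state sub-header plus the
 * following next_packet_size bytes, then hands the buffer straight to
 * the device identified by (idstr, instance_id).
 */
static int multifd_recv_device_state(const MultiFDPacketDeviceState_t *hdr,
                                     const void *buf, size_t len,
                                     Error **errp)
{
    return vmstate_load_buffer(hdr->idstr, hdr->instance_id,
                               buf, len, errp);
}
===8<===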
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On Thu, Jun 27, 2024 at 11:27:08AM +0800, Wang, Lei wrote: > > Or graphically: > > > > 1) client fills the active slot with data. Channels point to nothing > >at this point: > > [a] <-- active slot > > [][][][] <-- free slots, one per-channel > > > > [][][][] <-- channels' p->data pointers > > > > 2) multifd_send() swaps the pointers inside the client slot. Channels > >still point to nothing: > > [] > > [a][][][] > > > > [][][][] > > > > 3) multifd_send() finds an idle channel and updates its pointer: > > It seems the action "finds an idle channel" is in step 2 rather than step 3, > which means the free slot is selected based on the id of the channel found, > am I > understanding correctly? I think you're right. Actually I also feel like the desription here is ambiguous, even though I think I get what Fabiano wanted to say. The free slot should be the first step of step 2+3, here what Fabiano really wanted to suggest is we move the free buffer array from multifd channels into the callers, then the caller can pass in whatever data to send. So I think maybe it's cleaner to write it as this in code (note: I didn't really change the code, just some ordering and comments): ===8<=== @@ -710,15 +710,11 @@ static bool multifd_send(MultiFDSlots *slots) */ active_slot = slots->active; slots->active = slots->free[p->id]; -p->data = active_slot; - -/* - * By the next time we arrive here, the channel will certainly - * have consumed the active slot. Put it back on the free list - * now. - */ slots->free[p->id] = active_slot; +/* Assign the current active slot to the chosen thread */ +p->data = active_slot; ===8<=== The comment I removed is slightly misleading to me too, because right now active_slot contains the data hasn't yet been delivered to multifd, so we're "putting it back to free list" not because of it's free, but because we know it won't get used until the multifd send thread consumes it (because before that the thread will be busy, and we won't use the buffer if so in upcoming send()s). And then when I'm looking at this again, I think maybe it's a slight overkill, and maybe we can still keep the "opaque data" managed by multifd. One reason might be that I don't expect the "opaque data" payload keep growing at all: it should really be either RAM or device state as I commented elsewhere in a relevant thread, after all it's a thread model only for migration purpose to move vmstates.. Putting it managed by multifd thread should involve less change than this series, but it could look like this: typedef enum { MULTIFD_PAYLOAD_RAM = 0, MULTIFD_PAYLOAD_DEVICE_STATE = 1, } MultifdPayloadType; typedef enum { MultiFDPages_t ram_payload; MultifdDeviceState_t device_payload; } MultifdPayload; struct MultiFDSendData { MultifdPayloadType type; MultifdPayload data; }; Then the "enum" makes sure the payload only consumes only the max of both types; a side benefit to save some memory. I think we need to make sure MultifdDeviceState_t is generic enough so that it will work for mostly everything (especially normal VMSDs). 
In this case the VFIO series should be good as that was currently defined as: typedef struct { MultiFDPacketHdr_t hdr; char idstr[256] QEMU_NONSTRING; uint32_t instance_id; /* size of the next packet that contains the actual data */ uint32_t next_packet_size; } __attribute__((packed)) MultiFDPacketDeviceState_t; IIUC that was what we need exactly with idstr+instance_id, so as to nail exactly at where should the "opaque device state" go to, then load it with a buffer-based loader when it's ready (starting from VFIO, to get rid of qemufile). For VMSDs in the future if ever possible, that should be a modified version of vmstate_load() where it may take buffers not qemufiles. To Maciej: please see whether above makes sense to you, and if you also agree please consider that with your VFIO work. Thanks, > > > [] > > [a][][][] > > > > [a][][][] > > ^idle -- Peter Xu
Re: [RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
On 6/21/2024 5:21, Fabiano Rosas wrote:> Multifd currently has a simple scheduling mechanism that distributes > work to the various channels by providing the client (producer) with a > memory slot and swapping that slot with free slot from the next idle > channel (consumer). Or graphically: > > [] <-- multifd_send_state->pages > [][][][] <-- channels' p->pages pointers > > 1) client fills the empty slot with data: > [a] > [][][][] > > 2) multifd_send_pages() finds an idle channel and swaps the pointers: > [a] > [][][][] > ^idle > > [] > [a][][][] > > 3) client can immediately fill new slot with more data: > [b] > [a][][][] > > 4) channel processes the data, the channel slot is now free to use >again: > [b] > [][][][] > > This works just fine, except that it doesn't allow different types of > payloads to be processed at the same time in different channels, > i.e. the data type of multifd_send_state->pages needs to be the same > as p->pages. For each new data type different from MultiFDPage_t that > is to be handled, this logic needs to be duplicated by adding new > fields to multifd_send_state and to the channels. > > The core of the issue here is that we're using the channel parameters > (MultiFDSendParams) to hold the storage space on behalf of the multifd > client (currently ram.c). This is cumbersome because it forces us to > change multifd_send_pages() to check the data type being handled > before deciding which field to use. > > One way to solve this is to detach the storage space from the multifd > channel and put it somewhere else, in control of the multifd > client. That way, multifd_send_pages() can operate on an opaque > pointer without needing to be adapted to each new data type. Implement > this logic with a new "slots" abstraction: > > struct MultiFDSendData { > void *opaque; > size_t size; > } > > struct MultiFDSlots { > MultiFDSendData **free; <-- what used to be p->pages > MultiFDSendData *active; <-- what used to be multifd_send_state->pages > }; > > Each multifd client now gets one set of slots to use. The slots are > passed into multifd_send_pages() (renamed to multifd_send). The > channels now only hold a pointer to the generic MultiFDSendData, and > after it's processed that reference can be dropped. > > Or graphically: > > 1) client fills the active slot with data. Channels point to nothing >at this point: > [a] <-- active slot > [][][][] <-- free slots, one per-channel > > [][][][] <-- channels' p->data pointers > > 2) multifd_send() swaps the pointers inside the client slot. Channels >still point to nothing: > [] > [a][][][] > > [][][][] > > 3) multifd_send() finds an idle channel and updates its pointer: It seems the action "finds an idle channel" is in step 2 rather than step 3, which means the free slot is selected based on the id of the channel found, am I understanding correctly? > [] > [a][][][] > > [a][][][] > ^idle > > 4) a second client calls multifd_send(), but with it's own slots: > [] [b] > [a][][][] [][][][] > > [a][][][] > > 5) multifd_send() does steps 2 and 3 again: > [] [] > [a][][][] [][b][][] > > [a][b][][] >^idle > > 6) The channels continue processing the data and lose/acquire the > references as multifd_send() updates them. The free lists of each > client are not affected. 
> > Signed-off-by: Fabiano Rosas > --- > migration/multifd.c | 119 +++- > migration/multifd.h | 17 +++ > migration/ram.c | 1 + > 3 files changed, 102 insertions(+), 35 deletions(-) > > diff --git a/migration/multifd.c b/migration/multifd.c > index 6fe339b378..f22a1c2e84 100644 > --- a/migration/multifd.c > +++ b/migration/multifd.c > @@ -97,6 +97,30 @@ struct { > MultiFDMethods *ops; > } *multifd_recv_state; > > +MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void), > + void (*reset_fn)(void *), > + void (*cleanup_fn)(void *)) > +{ > +int thread_count = migrate_multifd_channels(); > +MultiFDSlots *slots = g_new0(MultiFDSlots, 1); > + > +slots->active = g_new0(MultiFDSendData, 1); > +slots->free = g_new0(MultiFDSendData *, thread_count); > + > +slots->active->opaque = alloc_fn(); > +slots->active->reset = reset_fn; > +slots->active->cleanup = cleanup_fn; > + > +for (int i = 0; i < thread_count; i++) { > +slots->free[i] = g_new0(MultiFDSendData, 1); > +slots->free[i]->opaque = alloc_fn(); > +slots->free[i]->reset = reset_fn; > +slots->free[i]->cleanup = cleanup_fn; > +} > + > +return slots; > +} > + > static bool multifd_use_packets(void) > { > return !migrate_mapped_ram(); > @@ -313,8 +337,10 @@ void multifd_register_ops(int method, MultiFDMethods > *ops) > } > > /* Reset a MultiFDPages_t* object for th
[RFC PATCH 6/7] migration/multifd: Move payload storage out of the channel parameters
Multifd currently has a simple scheduling mechanism that distributes
work to the various channels by providing the client (producer) with a
memory slot and swapping that slot with a free slot from the next idle
channel (consumer).

Or graphically:

  []       <-- multifd_send_state->pages
  [][][][] <-- channels' p->pages pointers

1) client fills the empty slot with data:

  [a]
  [][][][]

2) multifd_send_pages() finds an idle channel and swaps the pointers:

  [a]
  [][][][]
   ^idle

  []
  [a][][][]

3) client can immediately fill new slot with more data:

  [b]
  [a][][][]

4) channel processes the data, the channel slot is now free to use
   again:

  [b]
  [][][][]

This works just fine, except that it doesn't allow different types of
payloads to be processed at the same time in different channels,
i.e. the data type of multifd_send_state->pages needs to be the same
as p->pages. For each new data type different from MultiFDPage_t that
is to be handled, this logic needs to be duplicated by adding new
fields to multifd_send_state and to the channels.

The core of the issue here is that we're using the channel parameters
(MultiFDSendParams) to hold the storage space on behalf of the multifd
client (currently ram.c). This is cumbersome because it forces us to
change multifd_send_pages() to check the data type being handled
before deciding which field to use.

One way to solve this is to detach the storage space from the multifd
channel and put it somewhere else, in control of the multifd client.
That way, multifd_send_pages() can operate on an opaque pointer
without needing to be adapted to each new data type. Implement this
logic with a new "slots" abstraction:

struct MultiFDSendData {
    void *opaque;
    size_t size;
}

struct MultiFDSlots {
    MultiFDSendData **free;   <-- what used to be p->pages
    MultiFDSendData *active;  <-- what used to be multifd_send_state->pages
};

Each multifd client now gets one set of slots to use. The slots are
passed into multifd_send_pages() (renamed to multifd_send). The
channels now only hold a pointer to the generic MultiFDSendData, and
after it's processed that reference can be dropped.

Or graphically:

1) client fills the active slot with data. Channels point to nothing
   at this point:

  [a]      <-- active slot
  [][][][] <-- free slots, one per-channel

  [][][][] <-- channels' p->data pointers

2) multifd_send() swaps the pointers inside the client slot. Channels
   still point to nothing:

  []
  [a][][][]

  [][][][]

3) multifd_send() finds an idle channel and updates its pointer:

  []
  [a][][][]

  [a][][][]
   ^idle

4) a second client calls multifd_send(), but with its own slots:

  []        [b]
  [a][][][] [][][][]

  [a][][][]

5) multifd_send() does steps 2 and 3 again:

  []        []
  [a][][][] [][b][][]

  [a][b][][]
     ^idle

6) The channels continue processing the data and lose/acquire the
   references as multifd_send() updates them. The free lists of each
   client are not affected.
Signed-off-by: Fabiano Rosas --- migration/multifd.c | 119 +++- migration/multifd.h | 17 +++ migration/ram.c | 1 + 3 files changed, 102 insertions(+), 35 deletions(-) diff --git a/migration/multifd.c b/migration/multifd.c index 6fe339b378..f22a1c2e84 100644 --- a/migration/multifd.c +++ b/migration/multifd.c @@ -97,6 +97,30 @@ struct { MultiFDMethods *ops; } *multifd_recv_state; +MultiFDSlots *multifd_allocate_slots(void *(*alloc_fn)(void), + void (*reset_fn)(void *), + void (*cleanup_fn)(void *)) +{ +int thread_count = migrate_multifd_channels(); +MultiFDSlots *slots = g_new0(MultiFDSlots, 1); + +slots->active = g_new0(MultiFDSendData, 1); +slots->free = g_new0(MultiFDSendData *, thread_count); + +slots->active->opaque = alloc_fn(); +slots->active->reset = reset_fn; +slots->active->cleanup = cleanup_fn; + +for (int i = 0; i < thread_count; i++) { +slots->free[i] = g_new0(MultiFDSendData, 1); +slots->free[i]->opaque = alloc_fn(); +slots->free[i]->reset = reset_fn; +slots->free[i]->cleanup = cleanup_fn; +} + +return slots; +} + static bool multifd_use_packets(void) { return !migrate_mapped_ram(); @@ -313,8 +337,10 @@ void multifd_register_ops(int method, MultiFDMethods *ops) } /* Reset a MultiFDPages_t* object for the next use */ -static void multifd_pages_reset(MultiFDPages_t *pages) +static void multifd_pages_reset(void *opaque) { +MultiFDPages_t *pages = opaque; + /* * We don't need to touch offset[] array, because it will be * overwritten later when reused. @@ -388,8 +414,9 @@ static int multifd_recv_initial_packet(QIOChannel *c, Error **errp) return msg.id; } -static MultiFDPages_t *multifd_pages_init(uint32_t n) +static void *multifd_pages_init(void) { +uint32_t n = MULTIFD_PACKET_SIZE / qe
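Finally, a short sketch of how a client might consume the API this patch adds, going by the multifd_allocate_slots() signature in the diff and the commit message's note that multifd_send_pages() becomes multifd_send(). The MyPayload type is invented, and multifd_send()'s exact prototype and locking are glossed over, so treat this as pseudocode for the intended usage rather than as the patch's final form.

===8<===
#include <glib.h>
#include <string.h>

/* Hypothetical client-specific payload kept behind the opaque pointer. */
typedef struct {
    unsigned num;
    /* ... whatever this client needs per send ... */
} MyPayload;

static void *my_payload_alloc(void)     { return g_new0(MyPayload, 1); }
static void my_payload_reset(void *p)   { memset(p, 0, sizeof(MyPayload)); }
static void my_payload_cleanup(void *p) { g_free(p); }

static MultiFDSlots *my_slots;

static void my_client_setup(void)
{
    /* One set of slots per client: 1 active + 1 free slot per channel. */
    my_slots = multifd_allocate_slots(my_payload_alloc,
                                      my_payload_reset,
                                      my_payload_cleanup);
}

static void my_client_send(void)
{
    MyPayload *p = my_slots->active->opaque;

    /* fill p ... */
    multifd_send(my_slots);  /* swaps 'active' with the idle channel's slot */
}
===8<===

After multifd_send() returns, my_slots->active points at a recycled buffer from the free list, which is exactly the ownership switch discussed at length earlier in the thread.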