"Wang, Wei W" <wei.w.w...@intel.com> wrote:
> On Friday, May 19, 2023 9:31 AM, Wang, Lei4 wrote:
>> On 5/18/2023 17:16, Juan Quintela wrote:
>> > Lei Wang <lei4.w...@intel.com> wrote:
>> >> When the destination VM is launched, the "backlog" parameter for listen()
>> >> is set to 1 by default in socket_start_incoming_migration_internal(),
>> >> which leads to a socket connection error (the queue of pending
>> >> connections is full) when "multifd" and "multifd-channels" are set
>> >> later on and a high number of channels is used. Set it to a higher
>> >> hard-coded default value of 512 to fix this issue.
>> >>
>> >> Reported-by: Wei Wang <wei.w.w...@intel.com>
>> >> Signed-off-by: Lei Wang <lei4.w...@intel.com>
>> >
>> > [cc'd Daniel, who is the maintainer of qio]
>> >
>> > My understanding of that value is that 230 or something like that
>> > would be more than enough.  The maximum number of multifd channels
>> > is 256.
>> 
>> You are right, "multifd-channels" is a uint8_t, so 256 is enough.
>> 
>
> We can change it to uint16_t or uint32_t, but we need to see whether
> listening with a larger backlog value is OK with everyone.

If we need more than 256 channels for migration, we are doing something
really weird.  We can saturate a 100 Gbit/s network relatively easily
with 10 channels, so 256 channels would mean that we have at least
2 Tbit/s of networking.  I am not expecting that any time soon, and by
the time it happens I would expect CPUs to easily handle more than
10 Gbit/s per channel.
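(Back-of-the-envelope: 100 Gbit/s spread over 10 channels is about
10 Gbit/s per channel, so 256 channels at that per-channel rate would
need roughly 256 * 10 Gbit/s, i.e. around 2.5 Tbit/s.)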

> The listen() man page mentions that the maximum length of the queue for
> incomplete sockets can be set using /proc/sys/net/ipv4/tcp_max_syn_backlog,
> and it is 4096 by default on my machine.

I think that the current code is OK.  We just need to enforce that we
use defer, so that the multifd parameters are already set when the
incoming socket is opened.
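
For reference, a minimal sketch of that direction (untested; the names
migrate_use_multifd(), migrate_multifd_channels() and
socket_start_incoming_migration_internal() are from memory and may not
match the tree exactly):

/*
 * Sketch only: size the listen() backlog from the multifd parameters.
 * With '-incoming defer' the parameters are applied before
 * migrate-incoming is issued, so migrate_multifd_channels() already
 * returns the final value here.  Without defer the listener is opened
 * with a backlog of 1 and the later multifd connections overflow the
 * queue, which is the failure Lei reported.
 */
static void socket_start_incoming_migration(SocketAddress *saddr,
                                            Error **errp)
{
    int num = 1;                     /* plain, single-channel migration */

    if (migrate_use_multifd()) {
        /* multifd-channels is at most 256, well below the default
         * tcp_max_syn_backlog of 4096 mentioned above. */
        num = migrate_multifd_channels();
    }

    socket_start_incoming_migration_internal(saddr, num, errp);
}

With defer enforced, the backlog matches the number of channels the
source will actually open, so no hard-coded 512 is needed.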

Later, Juan.

