On 08/10/2018 17:09, Ben Pfaff wrote:
>
> I backported:
>
> 69c51582ff78 ("dpif-netlink: don't allocate per thread netlink sockets")
> to 2.10, 2.9, and 2.8.
>
> 769b50349f28 ("dpif: Remove support for multiple queues per port.")
> to 2.10. It got patch rejects on 2.9, so I skipped it there.
On Fri, Oct 5, 2018 at 11:27 PM Guru Shetty wrote:
> I get a segfault with this patch with the following backtrace. I have not
> investigated.
>
> Program received signal SIGSEGV, Segmentation fault.
> nl_sock_pid (sock=0x0) at lib/netlink-socket.c:1424
> 1424 return sock->pid;
> (gdb) where
Reproduction looks to be easy. I just need to use the kernel module from
the OVS repo and run 'ovs-vswitchd --pidfile'. If I do an 'ovs-vsctl add-br
br0', ovs-vswitchd will segfault. I do see the following message in the
ovs-vswitchd log:
2018-10-05T12:39:19Z|4|netlink_socket|INFO|netlink: could n
On 25/09/2018 22:14, Ben Pfaff wrote:
>
> Applied to master thanks!
>
> I sent a patch to remove support for multiple queues in the
> infrastructure layer:
> https://patchwork.ozlabs.org/patch/974755/
On Tue, Sep 25, 2018 at 10:51:05AM +0200, Matteo Croce wrote:
> When using the kernel datapath, OVS allocates a pool of sockets to handle
> netlink events. The number of sockets is: ports * n-handler-threads, where
> n-handler-threads is user configurable and defaults to 3/4 * number of cores.
> This is because vswitchd starts n-handler-threads threads, each one with a
> netlink socket for every port of the switch.