[Bug 277349] The net.inet.ip.source_address_validation should ignore CARP IP in backup state

2024-03-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277349

--- Comment #10 from commit-h...@freebsd.org ---
A commit in branch stable/14 references this bug:

URL:
https://cgit.FreeBSD.org/src/commit/?id=d6e1ae659b11a13a9c289424735394173907c1d3

commit d6e1ae659b11a13a9c289424735394173907c1d3
Author: Gleb Smirnoff 
AuthorDate: 2024-03-19 18:48:59 +
Commit: Gleb Smirnoff 
CommitDate: 2024-03-28 19:35:45 +

carp: check CARP status in in_localip_fib(), in6_localip_fib()

Don't report a BACKUP CARP address as local.  These two functions are used
only by source address validation for input packets, controlled by sysctls
net.inet.ip.source_address_validation and
net.inet6.ip6.source_address_validation.  For this purpose we definitely
want to treat BACKUP addresses as non-local.

This change is conservative and doesn't modify compat in_localip() and
in6_localip().  They are used more widely than the FIB-aware versions.
The change would modify the notion of the ipfw(4) 'me' keyword.  There might
be other consequences as in_localip() is used by various tunneling
protocols.

PR: 277349
(cherry picked from commit 56f7860087eec14b4a65310b70bd704e79e1b48c)

 sys/netinet/in.c   | 4 +++-
 sys/netinet6/in6.c | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)
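
For readers who want the shape of the change without opening the diff, here is
a minimal sketch (hedged: the loop mirrors the existing lookup in
sys/netinet/in.c only loosely, and ia_is_carp_backup() is a hypothetical
helper standing in for the kernel's actual CARP state test):

    /*
     * Sketch of a FIB-aware local-address lookup that treats a CARP
     * address in BACKUP state as non-local, as the commit describes.
     * ia_is_carp_backup() is hypothetical, not a real kernel symbol.
     */
    static bool
    localip_fib_sketch(struct in_addr in, uint16_t fib)
    {
            struct in_ifaddr *ia;

            CK_LIST_FOREACH(ia, INADDR_HASH(in.s_addr), ia_hash)
                    if (IA_SIN(ia)->sin_addr.s_addr == in.s_addr &&
                        ia->ia_ifp->if_fib == fib &&
                        !ia_is_carp_backup(ia))
                            return (true);
            return (false);
    }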

-- 
You are receiving this mail because:
You are the assignee for the bug.


[Bug 277349] The net.inet.ip.source_address_validation should ignore CARP IP in backup state

2024-03-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277349

Gleb Smirnoff  changed:

           What    |Removed |Added
----------------------------------
            Status |New     |Closed
        Resolution |---     |FIXED

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: Request for Testing: TCP RACK

2024-03-28 Thread tuexen
> On 28. Mar 2024, at 15:00, Nuno Teixeira  wrote:
> 
> Hello all!
> 
> Running rack @b7b78c1c169 "Optimize HPTS..." and I'm very happy on my laptop (amd64)!
> 
> Thanks all!
Thanks for the feedback!

Best regards
Michael
> 
> Drew Gallatin  wrote (Thursday, 21/03/2024 at 12:58):
> The entire point is to *NOT* go through the overhead of scheduling something 
> asynchronously, but to take advantage of the fact that a user/kernel 
> transition is going to trash the cache anyway.
> 
> In the common case of a system which has less than the threshold number of 
> connections, we access the tcp_hpts_softclock function pointer, make one 
> function call, and access hpts_that_need_softclock, and then return.  So 
> that's 2 variables and a function call.
> 
> I think it would be preferable to avoid that call, and to move the 
> declaration of tcp_hpts_softclock and hpts_that_need_softclock so that they 
> are in the same cacheline.  Then we'd be hitting just a single line in the 
> common case.  (I've made comments on the review to that effect).
> 
> Also, I wonder if the threshold could get higher by default, so that hpts is 
> never called in this context unless we're to the point where we're scheduling 
> thousands of runs of the hpts thread (and taking all those clock interrupts).
> 
> Drew
> 
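
A minimal sketch of the layout Drew describes (names assumed, not the actual
tcp_hpts code): keep the gate variable and the hook pointer in one cacheline,
so the common case touches a single line and makes at most one indirect call:

    /* Sketch: co-locate the userret gate and the hook (names assumed). */
    struct hpts_fastpath {
            int     conns_over_threshold;   /* gate checked on userret */
            void    (*softclock)(void);     /* pacing hook, may be NULL */
    } __aligned(CACHE_LINE_SIZE);

    static struct hpts_fastpath hpts_fp;

    /* Common case: one cacheline read, early return when there is no work. */
    static inline void
    hpts_userret_sketch(void)
    {
            if (hpts_fp.conns_over_threshold == 0)
                    return;
            if (hpts_fp.softclock != NULL)
                    hpts_fp.softclock();
    }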
> On Wed, Mar 20, 2024, at 8:17 PM, Konstantin Belousov wrote:
>> On Tue, Mar 19, 2024 at 06:19:52AM -0400, rrs wrote:
>>> Ok I have created
>>> 
>>> https://reviews.freebsd.org/D44420
>>> 
>>> 
>>> to address the issue.  I also attach a short version of the patch that Nuno
>>> can try and validate that it works.  Drew, you may want to try this and
>>> validate that the optimization does kick in, since for now I can only test
>>> that it does not on my local box :)
>> The patch still causes access to all CPUs' cachelines on each userret.
>> It would be much better to inc/check the threshold and only schedule the
>> call when exceeded.  Then the call can occur in some dedicated context,
>> like per-CPU thread, instead of userret.
>> 
>>> 
>>> 
>>> R
>>> 
>>> 
>>> 
>>> On 3/18/24 3:42 PM, Drew Gallatin wrote:
 No.  The goal is to run on every return to userspace for every thread.
 
 Drew
 
 On Mon, Mar 18, 2024, at 3:41 PM, Konstantin Belousov wrote:
> On Mon, Mar 18, 2024 at 03:13:11PM -0400, Drew Gallatin wrote:
>> I got the idea from
>> https://people.mpi-sws.org/~druschel/publications/soft-timers-tocs.pdf
>> The gist is that the TCP pacing stuff needs to run frequently, and
>> rather than run it out of a clock interrupt, it's more efficient to run
>> it out of a system call context at just the point where we return to
>> userspace and the cache is trashed anyway. The current implementation
>> is fine for our workload, but probably not ideal for a generic system,
>> especially one where something is banging on system calls.
>> 
>> ASTs could be the right tool for this, but I'm super unfamiliar with
>> them, and I can't find any docs on them.
>> 
>> Would ast_register(0, ASTR_UNCOND, 0, func) be roughly equivalent to
>> what's happening here?
> This call would need some AST number added, and then it registers the
> ast to run on next return to userspace, for the current thread.
> 
> Is it enough?
>> 
>> Drew
> 
>> 
>> On Mon, Mar 18, 2024, at 2:33 PM, Konstantin Belousov wrote:
>>> On Mon, Mar 18, 2024 at 07:26:10AM -0500, Mike Karels wrote:
 On 18 Mar 2024, at 7:04, tue...@freebsd.org wrote:
 
>> On 18. Mar 2024, at 12:42, Nuno Teixeira  wrote:
>> 
>> Hello all!
>> 
>> It works just fine!
>> System performance is OK.
>> Using patch on main-n268841-b0aaf8beb126(-dirty).
>> 
>> ---
>> net.inet.tcp.functions_available:
>> Stack     D  Alias     PCB count
>> freebsd      freebsd   0
>> rack      *  rack      38
>> ---
>> 
>> It would be so nice if we could have a sysctl tunable for this patch
>> so we could do more tests without recompiling the kernel.
> Thanks for testing!
> 
> @gallatin: can you come up with a patch that is acceptable for Netflix
> and allows mitigating the performance regression?
 
 Ideally, tcphpts could enable this automatically when it starts to be
 used (enough?), but a sysctl could select auto/on/off.
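
A hedged sketch of the knob being asked for (every name here is an assumption,
not committed code; SYSCTL_INT and CTLFLAG_RWTUN are the stock FreeBSD sysctl
macros):

    /* Sketch: 0 = off, 1 = on, 2 = auto (enable once usage crosses a
     * threshold).  The OID name and default value are assumptions. */
    static int tcp_hpts_userret_enable = 2;
    SYSCTL_DECL(_net_inet_tcp);
    SYSCTL_INT(_net_inet_tcp, OID_AUTO, hpts_userret_enable, CTLFLAG_RWTUN,
        &tcp_hpts_userret_enable, 0,
        "Run HPTS on return to userspace (0=off, 1=on, 2=auto)");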
>>> There is already a well-known mechanism to request execution of the
>>> specific function on return to userspace, namely AST.  The difference
>>> with the current hack is that the execution is requested for one callback
>>> in the context of the specific thread.
>>> 
>>> Still, it might be worth a try to use it; what 
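
To make the AST route Konstantin describes concrete, a hedged sketch follows
(assumptions: TDA_HPTS is a hypothetical AST slot, and the exact
ast_register()/ast_sched() signatures should be checked against sys/systm.h):

    /*
     * Sketch only: run the HPTS work on a thread's next return to
     * userspace via an AST.  TDA_HPTS is hypothetical; the real AST
     * numbers, flags, and signatures live in sys/systm.h.
     */
    static void
    hpts_ast_handler(struct thread *td, int asts)
    {
            tcp_hpts_softclock();           /* the pacing work itself */
    }

    /* Once at initialization: attach the handler to the AST slot. */
    ast_register(TDA_HPTS, ASTR_UNCOND, 0, hpts_ast_handler);

    /* From TCP code: request a run for the current thread. */
    ast_sched(curthread, TDA_HPTS);

If I read the flags right, ASTR_UNCOND (as in the call quoted above) makes the
handler run on every return to userspace; dropping it and relying on
ast_sched() alone would give the per-thread, scheduled behaviour Konstantin
mentions.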

Re: Request for Testing: TCP RACK

2024-03-28 Thread Nuno Teixeira
Hello all!

Running rack @b7b78c1c169 "Optimize HPTS..." and I'm very happy on my laptop
(amd64)!

Thanks all!


Re: vnet with interfaces

2024-03-28 Thread Mario Marietto
---> A very simple and elegant shell management tool to play with bhyve is
vm-bhyve

I never used it. I created my own elegant script that, in my opinion, works
better than vm-bhyve. And I think that I can improve it. I will...

On Tue, Mar 26, 2024 at 8:30 PM Tomek CEDRO  wrote:

> On Tue, Mar 26, 2024 at 7:32 PM Benoit Chesneau  wrote:
> > How does VNET work with interfaces? Is it as efficient as using PCI
> > passthrough in a VM?
> > Benoît
>
> VNET allows you to control networking from the system and create various
> network configurations, jails, etc.; an example is here:
>
> https://klarasystems.com/articles/virtualize-your-network-on-freebsd-with-vnet/
>
> PCI passthrough would skip all kernel networking and give your VM
> access to the physical cable attached to a NIC. Note that passthrough
> needs an entry in /boot/loader.conf and makes that device unavailable
> to the host system. I have a dedicated USB 3.0 controller working that way.
>
> A very simple and elegant shell management tool to play with bhyve is
> vm-bhyve:
>
> https://www.freshports.org/sysutils/vm-bhyve/
>
> Have fun :-)
>
> --
> CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
>
>

-- 
Mario.
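
For concreteness, the /boot/loader.conf entry Tomek mentions looks like the
following (the 2/0/0 bus/slot/function is only an example; find your device's
address with pciconf -lv):

    # /boot/loader.conf -- reserve a PCI device for bhyve passthrough.
    # The device is detached from the host and claimed by vmm(4)'s ppt driver.
    vmm_load="YES"
    pptdevs="2/0/0"

The guest then receives the device via a bhyve slot such as
-s 5,passthru,2/0/0.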


Re: vnet with interfaces

2024-03-28 Thread Tom Jones



On Tue, Mar 26, 2024, at 18:31, Benoit Chesneau wrote:
> How does VNET work with interfaces? Is it as efficient as using PCI
> passthrough in a VM?

Overhead should be minimal: while the device is logically missing from the
default vnet, there isn't anything more "in the way" for actual usage. Marko's
paper might be a good starting point for further digging:
https://papers.freebsd.org/2003/zec-vimage/

Tom



[Bug 275225] On ARM64 carp preempt not working as expected

2024-03-28 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=275225

--- Comment #4 from ekoort  ---
So today it did not work as expected.
The main node went down and the secondary took over; then the main came back
up and instantly took over again, while it (the cluster services) should have
stayed on the secondary.
So it's mixed results, for an unknown reason.

-- 
You are receiving this mail because:
You are the assignee for the bug.