Re: Request for Testing: TCP RACK
(...) Backup server is https://www.rsync.net/ (free 500GB for FreeBSD developers).

Nuno Teixeira wrote (Wed, 10/04/2024, 13:39):
> With the base stack I can complete a restic check successfully,
> downloading/reading/checking all files from a "big" remote compressed
> backup. Changing it to the RACK stack, it fails.
>
> I run this command often because in the past compression corruption
> occurred, and this is the equivalent of restoring the backup to check
> its integrity.
>
> Maybe someone could run a restic test to check if this is reproducible.
>
> Thanks,
(...)
Re: Request for Testing: TCP RACK
With the base stack I can complete a restic check successfully,
downloading/reading/checking all files from a "big" remote compressed
backup. Changing it to the RACK stack, it fails.

I run this command often because in the past compression corruption
occurred, and this is the equivalent of restoring the backup to check its
integrity.

Maybe someone could run a restic test to check if this is reproducible.

Thanks,

wrote (Wed, 10/04/2024, 13:12):
> > On 10. Apr 2024, at 13:40, Nuno Teixeira wrote:
> > (...)
> Hi,
>
> I'm not sure what the issue is you are reporting. Could you state
> what behavior you are experiencing with the base stack and with
> the RACK stack? In particular, what is the difference?
>
> Best regards
> Michael
(...)
Re: Request for Testing: TCP RACK
Hello all,

@ current 1500018, fetching torrents with net-p2p/qbittorrent finished a
~2GB download with the connection UP until the end:

---
Apr 10 11:26:46 leg kernel: re0: watchdog timeout
Apr 10 11:26:46 leg kernel: re0: link state changed to DOWN
Apr 10 11:26:49 leg dhclient[58810]: New IP Address (re0): 192.168.1.67
Apr 10 11:26:49 leg dhclient[58814]: New Subnet Mask (re0): 255.255.255.0
Apr 10 11:26:49 leg dhclient[58818]: New Broadcast Address (re0): 192.168.1.255
Apr 10 11:26:49 leg kernel: re0: link state changed to UP
Apr 10 11:26:49 leg dhclient[58822]: New Routers (re0): 192.168.1.1
---

In past tests I've got more watchdog timeouts; the connection goes down
and a reboot is needed to bring it back (`service netif restart` didn't
work).

Another way to reproduce this is using sysutils/restic (a backup program)
to read/check all files from a remote server via sftp:

`restic -r sftp:user@remote:restic-repo check --read-data` from a 60GB
compressed backup.

---
watchdog timeout x3 as above
---

restic check fail log @ 15% progress:
---
Load(, 17310001, 0) returned error, retrying after 1.7670599s: connection lost
Load(, 17456892, 0) returned error, retrying after 4.619104908s: connection lost
Load(, 17310001, 0) returned error, retrying after 5.477648517s: connection lost
List(lock) returned error, retrying after 293.057766ms: connection lost
List(lock) returned error, retrying after 385.206693ms: connection lost
List(lock) returned error, retrying after 1.577594281s: connection lost
---

The connection itself stays UP.

Cheers,

wrote (Thu, 28/03/2024, 15:53):
> (...)
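For anyone wanting to reproduce this, the whole sequence boils down to the
following sketch (the repo path, user and host are placeholders; it assumes
restic is installed and the RACK module is available):

    # load the RACK stack and make it the default for new connections
    kldload tcp_rack
    sysctl net.inet.tcp.functions_default=rack

    # confirm the stack is active (PCB count grows as connections open)
    sysctl net.inet.tcp.functions_available

    # read back every pack file of the repository over sftp;
    # with RACK this fails around 15% with "connection lost" here
    restic -r sftp:user@remote:restic-repo check --read-data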
Re: Request for Testing: TCP RACK
Hello all!

Running rack @b7b78c1c169 "Optimize HPTS...", very happy on my laptop
(amd64)!

Thanks all!

Drew Gallatin wrote (Thu, 21/03/2024, 12:58):
> The entire point is to *NOT* go through the overhead of scheduling
> something asynchronously, but to take advantage of the fact that a
> user/kernel transition is going to trash the cache anyway.
>
> In the common case of a system which has less than the threshold number
> of connections, we access the tcp_hpts_softclock function pointer, make
> one function call, and access hpts_that_need_softclock, and then return.
> So that's 2 variables and a function call.
>
> I think it would be preferable to avoid that call, and to move the
> declarations of tcp_hpts_softclock and hpts_that_need_softclock so that
> they are in the same cacheline. Then we'd be hitting just a single line
> in the common case. (I've made comments on the review to that effect.)
>
> Also, I wonder if the threshold could be higher by default, so that hpts
> is never called in this context unless we're to the point where we're
> scheduling thousands of runs of the hpts thread (and taking all those
> clock interrupts).
>
> Drew
>
> On Wed, Mar 20, 2024, at 8:17 PM, Konstantin Belousov wrote:
> > On Tue, Mar 19, 2024 at 06:19:52AM -0400, rrs wrote:
> > > Ok I have created
> > >
> > > https://reviews.freebsd.org/D44420
> > >
> > > to address the issue. I also attach a short version of the patch
> > > that Nuno can try and validate it works. Drew, you may want to try
> > > this and validate the optimization does kick in, since I can only
> > > now test that it does not on my local box :)
> > The patch still causes access to all CPUs' cachelines on each userret.
> > It would be much better to inc/check the threshold and only schedule
> > the call when exceeded. Then the call can occur in some dedicated
> > context, like a per-CPU thread, instead of userret.
> > >
> > > R
> > >
> > > On 3/18/24 3:42 PM, Drew Gallatin wrote:
> > > > No. The goal is to run on every return to userspace for every
> > > > thread.
> > > >
> > > > Drew
> > > >
> > > > On Mon, Mar 18, 2024, at 3:41 PM, Konstantin Belousov wrote:
> > > > > On Mon, Mar 18, 2024 at 03:13:11PM -0400, Drew Gallatin wrote:
> > > > > > I got the idea from
> > > > > > https://people.mpi-sws.org/~druschel/publications/soft-timers-tocs.pdf
> > > > > > The gist is that the TCP pacing stuff needs to run frequently,
> > > > > > and rather than run it out of a clock interrupt, it's more
> > > > > > efficient to run it out of a system call context at just the
> > > > > > point where we return to userspace and the cache is trashed
> > > > > > anyway. The current implementation is fine for our workload,
> > > > > > but probably not ideal for a generic system, especially one
> > > > > > where something is banging on system calls.
> > > > > >
> > > > > > ASTs could be the right tool for this, but I'm super unfamiliar
> > > > > > with them, and I can't find any docs on them.
> > > > > >
> > > > > > Would ast_register(0, ASTR_UNCOND, 0, func) be roughly
> > > > > > equivalent to what's happening here?
> > > > > This call would need some AST number added, and then it registers
> > > > > the ast to run on next return to userspace, for the current
> > > > > thread.
> > > > >
> > > > > Is it enough?
> > > > > >
> > > > > > Drew
> > > > > >
> > > > > > On Mon, Mar 18, 2024, at 2:33 PM, Konstantin Belousov wrote:
> > > > > > > (...)
Re: Request for Testing: TCP RACK
Hello all!

It works just fine!
System performance is OK.
Using patch on main-n268841-b0aaf8beb126(-dirty).

---
net.inet.tcp.functions_available:
Stack                           D Alias                            PCB count
freebsd                           freebsd                          0
rack                            * rack                             38
---

It would be so nice to have a sysctl tunable for this patch so we could do
more tests without recompiling the kernel.

Thanks all!
Really happy here :)

Cheers,

Nuno Teixeira wrote (Sun, 17/03/2024, 20:26):
> (...)

--
Nuno Teixeira
FreeBSD Committer (ports)
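Until such a tunable exists, the stack itself can already be switched at
runtime without a rebuild; a minimal sketch, using only the knobs shown in
this thread:

    kldload tcp_rack                                # pulls in tcphpts.ko too
    sysctl net.inet.tcp.functions_default=rack      # new connections use RACK
    sysctl net.inet.tcp.functions_available         # check PCB counts per stack
    sysctl net.inet.tcp.functions_default=freebsd   # revert to the base stack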
Re: Request for Testing: TCP RACK
Hello,

> I don't have the full context, but it seems like the complaint is a
> performance regression in bonnie++ and perhaps other things when
> tcp_hpts is loaded, even when it is not used. Is that correct?
>
> If so, I suspect it's because we drive the tcp_hpts_softclock() routine
> from userret(), in order to avoid tons of timer interrupts and context
> switches. To test this theory, you could apply a patch like:

It's affecting overall system performance; bonnie was just a way to get
some numbers to compare.

Tomorrow I will test the patch.

Thanks!

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
Hello,

> > - I can't remember better tests to do, as I can feel the entire OS
> > being slow, without errors, just slow.
> This is interesting. It seems to be a consequence of loading TCPHPTS,
> not actually using it.

Exactly: just loading the module, without enabling it via the sysctl.

> I have CCed Drew and Randall, who know much more about HPTS and might
> have follow-up questions. I'll bring the issue up in the FreeBSD
> transport call next Thursday.
>
> What hardware are you using?

Laptop: Legion 5-15IMH05 (Lenovo) - Type 82AU

Thanks!

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
> Just to double check: you only load the tcp_rack. You don't run
> sysctl net.inet.tcp.functions_default=rack

I'm not using the sysctl, just loading the module.

> What does "poudriere testport net/gitup" do? Only build stuff or does it
> also download something?
>
> What does bonnie++ do?

poudriere is for testing ports, and it uses jails to build stuff. It has
restricted network access, used to fetch distfiles (not the case here, as
the distfile is already present on disk).

bonnie++ is a disk benchmark.

> Could you reboot the system, run the test, do kldload tcphpts, run
> the test again, do kldload tcp_rack, and run it again.

The previous test [2] was obtained by loading tcp_rack.
Now I will post results for test [3], loading (only) the tcphpts module.

[3] kldload tcphpts:

==> poudriere testport net/gitup:
55.26s real    5.23s user    1m19.91s sys

==> bonnie++
Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
leg.home       32G    12k  99 73.0m  99 51.1m  99   27k  99  128m  99  8038 2194
Latency             1763ms    194ms   23979us    431ms    1267us    2776us
Version 1.98       ------Sequential Create------ --------Random Create--------
leg.home            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 7017.934752 98 21243.823180 100 7780.918458 99 9281.48 98 21368.647657 100 7457.828733 99
Latency              3015us     220us    2398us    1106us     386us    2473us

Summary:

- I can't remember better tests to do, as I can feel the entire OS being
  slow, without errors, just slow.
- Approximately the same results as test [2].

--
Nuno Teixeira
FreeBSD Committer (ports)
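For reference, one round of this comparison can be scripted roughly as
follows (a sketch; the bonnie++ scratch directory and user are assumptions,
and net/gitup is simply the port used throughout this thread):

    #!/bin/sh
    # time the port build; poudriere only touches the network for distfiles
    time poudriere testport net/gitup

    # raw disk benchmark; -d picks the scratch directory, -u the user to run as
    bonnie++ -d /tmp/bonnie -u nobody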
Re: Request for Testing: TCP RACK
> > Will update amd64 laptop to main-n268827-75464941dc17 (Mar 16) and
> > test it.
> Please do so...

@main-n268827-75464941dc17 GENERIC-NODEBUG amd64

Ok, I think I have some numbers here relating the disk-performance drop to
tcp_rack, which is somehow messing with something.

NOTES:
- test [1] was done after a boot without tcp_rack in loader.conf, and for
  [2] tcp_rack was loaded manually with kldload (without rebooting)
- After unloading tcp_rack, same results as [2]
- Cannot unload tcphpts: device busy

[1] without tcp_rack loaded:

==> poudriere testport net/gitup:
11.16s real    5.35s user    6.35s sys

==> bonnie++:
Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
leg.home       32G    25k  99  105m  99 88.7m  99   77k  99  198m  99 12716 1784
Latency              351ms    1793us    2340us    241ms     638us    2514us
Version 1.98       ------Sequential Create------ --------Random Create--------
leg.home            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 14908.273230 97 +++++ +++ 6878.516657 99 15194.808063 97 +++++ +++ 8019.731670 99
Latency              1108us     182us    2228us    1013us     152us    2424us

[2] kldload tcp_rack:

==> poudriere testport net/gitup:
1m0.54s real    4.98s user    1m31.38s sys

==> bonnie++:
Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
leg.home       32G    14k  99 78.0m  99 46.6m  99   25k  99  120m  99  6688 2161
Latency              676ms   18309us   76171us    385ms     924us    2274us
Version 1.98       ------Sequential Create------ --------Random Create--------
leg.home            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 8139.513260 96 19437.792294 99 5494.638508 99 8099.275425 96 19723.528878 99 6363.123671 99
Latency              2982us     338us    3072us    1135us     591us    3236us

Cheers,

--
Nuno Teixeira
FreeBSD Committer (ports)
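The procedure behind tests [1] and [2] is, in outline (a sketch; run each
benchmark from the same state, rebooting first so [1] is a clean baseline):

    # [1] baseline: fresh boot, no tcp_rack in loader.conf
    time poudriere testport net/gitup
    bonnie++ -d /tmp/bonnie -u nobody

    # [2] same machine, module loaded on the fly, no reboot
    kldload tcp_rack
    time poudriere testport net/gitup
    bonnie++ -d /tmp/bonnie -u nobody

    # unloading tcp_rack does not restore the [1] numbers, and
    # kldunload tcphpts fails with "device busy"
    kldunload tcp_rack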
Re: Request for Testing: TCP RACK
> If you load tcp_rack via kldload, tcphpts gets loaded automatically.
> If you load it via /boot/loader.conf, you need to have
> tcphpts_load="YES"
> in addition to
> tcp_rack_load="YES"

As of my tests, a loader.conf with
tcp_rack_load="YES"
loads tcphpts.ko automatically:

 3    1 0x81f53000    545e0 tcp_rack.ko
 4    2 0x81fa8000    14588 tcphpts.ko

On aarch64 (rpi4) I didn't get any performance issues on
main-n268730-96ad640178ea (Mar 8) or main-n268827-75464941dc17 (Mar 16).

So it seems not related to rack commit e18b97bd63a8:
"Update to bring the rack stack with all its fixes in."

Will update the amd64 laptop to main-n268827-75464941dc17 (Mar 16) and
test it.

--
Nuno Teixeira
FreeBSD Committer (ports)
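So, for loading at boot, the safe combination from Michael's note is the
following sketch of /boot/loader.conf (the second line is only needed when
tcp_rack alone does not pull tcphpts in):

    # /boot/loader.conf
    tcp_rack_load="YES"
    tcphpts_load="YES"

and after boot the result can be checked with:

    kldstat | grep -E 'tcp_rack|tcphpts'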
Re: Request for Testing: TCP RACK
> > Summing up, I only need to add:
> >
> > options TCPHPTS
> >
> > Is this correct?
>
> Yeah, that will probably fix it. According to a comment in
> /usr/src/sys/netinet/tcp_hpts.c it adds a high-precision timer
> system for tcp, used by RACK and BBR.

As tuexen said, on main the loader.conf line
tcp_rack_load="YES"
will load tcphpts.ko, as I am seeing on my rpi4 right now.

I'm testing it and checking its performance.
I will test again on my amd64 laptop and run more tests too.

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
(...)

> These entries are missing in GENERIC:
>
> makeoptions WITH_EXTRA_TCP_STACKS=1

From
https://cgit.freebsd.org/src/commit/?id=3a338c534154164504005beb00a3c6feb03756cc
WITH_EXTRA_TCP_STACKS was dropped.

> options TCPHPTS

That's the missing option in GENERIC that could explain my slow-operations
problem.

> options TCP_RACK

I don't think I need this one, as I will use the kernel module instead of
building it into the kernel.

Summing up, I only need to add:

options TCPHPTS

Is this correct?

Thanks,

--
Nuno Teixeira
FreeBSD Committer (ports)
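Building the option in, instead of loading the module, would look roughly
like this sketch (MYKERNEL is a placeholder config name):

    # /usr/src/sys/amd64/conf/MYKERNEL
    include GENERIC
    ident   MYKERNEL
    options TCPHPTS

    # then rebuild and install the kernel
    cd /usr/src
    make buildkernel installkernel KERNCONF=MYKERNEL

With TCPHPTS compiled in, loading tcp_rack.ko should no longer need to
drag in a separate tcphpts.ko.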
Re: Request for Testing: TCP RACK
Hello Gary,

Nice that you found this. The tcp_rack manual page doesn't mention that we
need extra config in the kernel; the module loads and is shown in
sysctl net.inet.tcp.functions_available without errors.

I will add the missing config to GENERIC and see how it goes.

Cheers,

Gary Jennejohn wrote (Sat, 16/03/2024, 09:40):
>
> On Sat, 16 Mar 2024 09:41:13 +0100
> tue...@freebsd.org wrote:
>
> > > On 16. Mar 2024, at 08:57, Nuno Teixeira wrote:
> > >
> > > Hello all,
> > >
> > > On a laptop (i7/16GB RAM), with desktop use and port testing
> > > (poudriere), I've noticed that all operations on the OS took 3 to 5
> > > times longer.
> > > Examples:
> > > - firefox took a long time to open
> > > - poudriere testport took up to 3 times longer to finish
> > >
> > > make.conf:
> > > KERNCONF=GENERIC-NODEBUG
> > > src.conf:
> > > WITH_MALLOC_PRODUCTION=yes
> > >
> > > tested on main-n268800-6a6ec90681cf
> >
> > How did you enable the RACK stack? Does the poudriere run involve
> > network interaction?
>
> Interesting. RACK works for me:
>
> net.inet.tcp.functions_available:
> Stack                           D Alias                            PCB count
> freebsd                           freebsd                          0
> rack                            * rack                             23
>
> I don't see any lags when starting/using Firefox or any other browser.
>
> Mail delivery (in/out) is also not affected.
>
> But GENERIC, which is loaded by GENERIC-NODEBUG, doesn't support RACK.
>
> These entries are missing in GENERIC:
>
> makeoptions WITH_EXTRA_TCP_STACKS=1
> options TCPHPTS
> options TCP_RACK
>
> --
> Gary Jennejohn

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
Followed man tcp_rack:

loader.conf:
tcp_rack_load="YES"

sysctl.conf:
net.inet.tcp.functions_default=rack

poudriere has restricted access to the network, usually to fetch distfiles.

wrote (Sat, 16/03/2024, 08:41):
>
> > On 16. Mar 2024, at 08:57, Nuno Teixeira wrote:
> > (...)
> How did you enable the RACK stack? Does the poudriere run involve
> network interaction?
>
> Best regards
> Michael
> (...)

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
Hello all,

On a laptop (i7/16GB RAM), with desktop use and port testing (poudriere),
I've noticed that all operations on the OS took 3 to 5 times longer.
Examples:
- firefox took a long time to open
- poudriere testport took up to 3 times longer to finish

make.conf:
KERNCONF=GENERIC-NODEBUG

src.conf:
WITH_MALLOC_PRODUCTION=yes

tested on main-n268800-6a6ec90681cf

Thanks,

wrote (Thu, 14/03/2024, 10:51):
>
> > On 14. Mar 2024, at 11:04, Dag-Erling Smørgrav wrote:
> >
> > tue...@freebsd.org writes:
> >> Gary Jennejohn writes:
> >>> In the article we have option TCPHPTS and option TCP_RACK. But in
> >>> /sys/conf/NOTES we have options TCPHPTS and options TCP_RACK and
> >>> not option.
> >> Thanks for reporting, that is a typo in the article. It should
> >> always read options instead of option.
> >
> > It's not a typo, both spellings work, cf. config(5).
> Thank you very much for the hint. I did not know this. I wrote
> option in the article (for whatever reason) and tested the
> configs using options...
>
> Best regards
> Michael
> >
> > DES
> > --
> > Dag-Erling Smørgrav - d...@freebsd.org

--
Nuno Teixeira
FreeBSD Committer (ports)
Re: Request for Testing: TCP RACK
Hello,

I'm curious about TCP RACK.

As I don't run anything server-like, only a laptop and an rpi4 for
poudriere, git, browsing, some torrents and ssh/sftp connections, will I
see any difference using RACK?

What tests should I do for comparison?

Thanks,

wrote (Thu, 16/11/2023, 15:10):
>
> Dear all,
>
> recently the main branch was changed to build the TCP RACK stack,
> which is a loadable kernel module, by default:
> https://cgit.FreeBSD.org/src/commit/?id=3a338c534154164504005beb00a3c6feb03756cc
>
> As discussed on the bi-weekly transport call, it would be great if
> people could test the RACK stack for their workload. Please report any
> problems to the net@ mailing list or open an issue in the bug tracker
> and drop me a note via email. This includes regressions in CPU usage,
> regressions in performance or any other unexpected change you observe.
>
> You can load the kernel module using
> kldload tcp_rack
>
> You can make the RACK stack the default stack using
> sysctl net.inet.tcp.functions_default=rack
>
> Based on the feedback we get, the default stack might be switched to
> the RACK stack.
>
> Please let me know if you have any questions.
>
> Best regards
> Michael

--
Nuno Teixeira
FreeBSD Committer (ports)
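A simple before/after comparison that fits this kind of desktop/rpi4 use
could look like the sketch below (the download URL is a placeholder; both
runs should use the same file and server):

    # with the base stack
    sysctl net.inet.tcp.functions_default=freebsd
    time fetch -o /dev/null https://example.org/somefile.iso

    # with RACK
    kldload tcp_rack
    sysctl net.inet.tcp.functions_default=rack
    time fetch -o /dev/null https://example.org/somefile.iso

On a clean local network the difference may be small; RACK's loss-recovery
improvements mainly show up on lossy or long paths.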
Re: FreeBSD lacks PPPoE (pppoa3 solution)
Hi,

Please see http://speedtouch.sourceforge.net/index.php?/news.en.html

"Real" PPPoE, with an Ethernet card connected to an ADSL modem, works.
This problem is related to ISPs that support *only* the PPPoE protocol
with USB modems (in this case Alcatel) that "emulate" Ethernet with
TUN/TAP devices. USB modems don't have a connection to Ethernet cards.

The FreeBSD pppoa port works OK with Alcatel USB modems, but only for
PPPoA and not PPPoE. Almost all European ISPs support only PPPoE and not
PPPoA (I don't know the reason why).

Thanks,

Nuno Teixeira

On Thu, Jul 10, 2003 at 01:29:26PM -0700, Julian Elischer wrote:
> I'm confused.. FreeBSD has had full PPPoE support for about 4 years.
>
> There is also PPPoA support..
>
> Why do you think there is not?
>
> On Thu, 10 Jul 2003, Nuno Teixeira wrote:
> > (...)

--

/*
PGP fingerprint:
C6D1 06ED EB54 A99C 6B14 6732 0A5D 810D 727D F6C6
*/
FreeBSD lacks PPPoE (pppoa3 solution)
Hello to all,

I've been using FreeBSD for almost 4 years, and I will continue with it
because I can't find better.

I subscribed to an ADSL connection in Portugal that supports only PPPoE
(and not PPPoA).

Almost everyone in Portugal uses only 2 modems (supported by ISPs):
Siemens Santis USB and Alcatel SpeedTouch 330 USB.

Linux already has support for Alcatel USB modems with PPPoE connections,
while FreeBSD still lacks this PPPoE support.

I don't like Linux, so to solve my home network problem I installed a
Windows machine to share the Internet (ooops!) across my LAN.

The new Speedtouch 1.2 beta2 driver
(http://speedtouch.sourceforge.net/index.php?/news.en.html)
already supports Bridging 1483 mode (PPPoE support) in pppoa3, but it is
of no use on FreeBSD.

Please read the following thread to see some solutions for implementing
PPPoE in FreeBSD:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg04514.html

From what you can see in this thread:

"...that task is simply a matter of two or three #ifdefs for each
BSD flavor, but nobody seems volunteering to accomplish it."

I'm just a FreeBSD user, not a programmer or hacker, so I can only help
the FreeBSD community by asking you to try to implement PPPoE in FreeBSD
so everyone can use it.

Thanks very much for your great work,

Nuno Teixeira

--

/*
PGP fingerprint:
C6D1 06ED EB54 A99C 6B14 6732 0A5D 810D 727D F6C6
*/