Re: LibreQos

2023-05-13 Thread Dave Taht
On Sat, May 13, 2023 at 12:28 AM Mark Tinka  wrote:
>
>
>
> On 5/12/23 17:59, Dave Taht wrote:
>
> > :blush:
> >
> > We have done a couple podcasts about it, like this one:
> >
> > https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/
> >
> > and have perhaps made a mistake by doing development and support in
> > Matrix chat, rather than a web forum, where it is too invisible; but
> > it has been a highly entertaining way to get a better picture of the
> > real problems caring ISPs have.
> >
> > I see you are in Africa? We have a few ISPs playing with this in Kenya...
>
> DM me, please, the ones you are aware of that would be willing to
> share their experiences. I'd like to get them to talk about what they've
> gathered at the upcoming SAFNOG meeting in Lusaka.
>
> We have a fairly large network in Kenya, so would be happy to engage
> with the operators running the LibreQoS there.

I forwarded your info.

Slide 40 here has an anonymized LibreQoS report of observed latencies
in Africa. The RTTs there are severely bimodal (30ms vs 300ms), which
mucks with sch_cake's default assumption of a 100ms RTT.

http://www.taht.net/~d/Misunderstanding_Residential_Bandwidth_Latency.pdf

There are two ways to deal with this; right now we are recommending
cake's `rtt 200ms` setting to keep throughput up. The FQ component
dominates for most traffic, anyway. With a bit more work we hope to
come up with a way to get more consistent queuing delay. Or, we could
just wait for more CDNs and IXPs to deploy there.
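For reference, the tc invocation for that recommendation looks roughly like this (the interface name and bandwidth are placeholders; adjust both to your link):

```shell
# Replace the root qdisc on the egress interface with cake,
# telling it to expect ~200ms path RTTs instead of the 100ms default.
tc qdisc replace dev eth0 root cake bandwidth 100mbit rtt 200ms
```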

/me hides

A note about our public plots: we had a lot of people sharing
screenshots, so we added a "klingon mode" that consistently
transliterates the more private data into that language.
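The idea, roughly, is a deterministic mapping so the same customer name always renders as the same fake string across screenshots. A minimal sketch of that idea (this is not the actual LibreQoS code; the syllable list and hashing scheme are illustrative assumptions):

```python
import hashlib

# Illustrative "Klingon-ish" syllables; the real tool's wordlist differs.
SYLLABLES = ["gh", "tlh", "Qa", "pu", "wI", "mey", "choH", "vaD"]

def pseudonymize(name: str) -> str:
    """Map a private string to a stable, fake-looking token.

    The same input always yields the same output, so plots stay
    consistent across screenshots without leaking real names.
    """
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    # Use the first few digest bytes to pick syllables deterministically.
    return "".join(SYLLABLES[b % len(SYLLABLES)] for b in digest[:4]).capitalize()
```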

Another fun fact: by deploying this stuff, several folks found enough
non-paying clients on their network to pay for the hardware inside of
a month or two.

>
> > We do not know. Presently our work is supported by Equinix's open
> > source program, with four servers in their Dallas DC, and they are
> > 25Gbit ports. Putting together enough dough to get to 100Gbit or
> > finding someone willing to send traffic through more bare metal at
> > that data center or elsewhere is on my mind. In other words, we can
> > easily spin up the ability to L2 route some traffic through a box in
> > their DCs, if only we knew where to find it. :)
> >
> > If you assume linearity to cores (which is a lousy assumption, ok?),
> > 64 Xeon cores could do about 200Gbit, running flat out. I am certain
> > it will not scale linearly and we will hit multiple bottlenecks on
> > the way to that goal.
> >
> > Limits we know about:
> >
> > A) Trying to drive 10s of gbits of realistic traffic through this
> > requires more test clients and servers than we have, or someone with
> > daring and that kind of real traffic in the first place. For example
> > one of our most gung-ho clients has 100Gbit ports, but not anywhere
> > near that amount of inbound traffic. (they are crazy enough to pull
> > git head, try it for a few minutes in production, and then roll back
> > or leave it up)
> >
> > B) A brief test of a 64-core AMD + Nvidia Ethernet card was severely
> > outperformed by our current choice of a 20-core Xeon Gold + an Intel
> > 710 or 810 card. The Ethernet card is by far the dominating factor. I
> > would kill for one that did an LPM -> CPU mapping (i.e. instead of an
> > LPM -> route mapping, LPM to which CPU to interrupt). We also tried
> > an 80-core ARM, with inconclusive results early on.
> >
> > Tests of the latest Ubuntu release are ongoing. I am not prepared to
> > bless that or release any results yet.
> >
> > C) A single cake instance on one of the more high-end Xeons can
> > *almost* push 10Gbit/sec while eating a core.
> >
> > D) Our model is one cake instance per subscriber + the ability to
> > establish trees emulating links further down the chain. One ISP is
> > modeling 10 mmwave hops. Another is just putting in multiple boxes
> > closer to the towers.
> >
> > So in other words, 100s of gbits is achievable today if you throw
> > boxes at it, and it is more cost-effective to do it that way. We
> > will, of course, keep striving to crack 100Gbit native on a single
> > box with multiple cards. It is a nice goal to have.
> >
> > E) In our present target markets, 10k typical residential subscribers
> > only eat 11Gbit/sec at peak. A LOT of the smaller ISPs and networks
> > fit into that space, so of late we have been focusing more on
> > analytics and polish than on pushing more traffic. Some of our new
> > real-time analytics break down at 10k cake instances (that is 40
> > million fq_codel queues, ok?), and we cannot sample at 10ms rates,
> > falling back to (presently) a conservative 1s.
> >
> > We are nearing putting out a v1.4-rc7, which is just features and
> > polish; you can get a .deb of v1.4-rc6 here:
> >
> > https://github.com/LibreQoE/LibreQoS/releases/tag/v1.4-rc6
> >

Major changes to ARIN RPKI functionality (Fwd: [arin-announce] New Features Added to ARIN Online)

2023-05-13 Thread John Curran
NANOGers -

If you are using (or interested in) RPKI, please review the attached 
announcement regarding the rollout of significant changes to RPKI functionality 
at ARIN.

FYI,
/John

John Curran
President and CEO
American Registry for Internet Numbers

Begin forwarded message:

From: ARIN 
Subject: [arin-announce] New Features Added to ARIN Online
Date: May 13, 2023 at 12:58:50 PM EDT
To: "arin-annou...@arin.net" 

ARIN is pleased to present the latest version of ARIN Online, including 
improvements, new features, and updates. Full release notes are included at the 
end of this message.

All ARIN systems have been restored and are now operating normally. We thank 
you for your patience. If you have additional questions, comments, or issues, 
please submit an Ask ARIN ticket using your ARIN Online account or contact the 
Registration Services Help Desk by phone Monday through Friday, 7:00 AM to 7:00 
PM ET at +1.703.227.0660.

Regards,

Mark Kosters
Chief Technology Officer
American Registry for Internet Numbers (ARIN)

Release Notes

- For customers of the Hosted RPKI (Resource Public Key Infrastructure) 
service, significant changes have been made, especially to the management of 
ROAs (Route Origin Authorizations) in ARIN Online.
- We have removed the requirement for users to generate a public/private key 
pair to create Resource Certificates and ROAs.
- Signing up for Hosted RPKI is no longer a ticketed process.
- Creating ROAs is no longer a ticketed process. Created ROAs will be returned 
immediately and available in the next repository delta.
- All ROAs in the repository created using ARIN Online will auto-renew.
- We have streamlined the top navigation menu of the RPKI pages in ARIN Online.

- For customers of the Hosted RPKI, changes have been made to the RESTful API 
for managing ROAs.
- It is now possible to create and delete multiple ROAs through a single API 
call.
- ROAs created with the new API will auto-renew.
- ROAs in the repository created using the previous API will not auto-renew 
and will retain their original expiration date.
- We have created a new URL associated with the new RESTful API calls for 
managing ROAs. We will maintain the old API endpoints to prevent existing users 
from having to change their scripts.

- We will migrate all existing ROAs in the RPKI repository created using the 
ARIN Online interface to auto-renew. ROAs generated using the previous RESTful 
API will not be migrated to auto-renew and will retain their existing 
expiration date. To take advantage of auto-renew, ROAs created with the 
previous RESTful API must be replaced with ones created using the newly 
released RESTful API.

- Some changes have been made to the management of Delegated RPKI.
- Signing up for Delegated RPKI service is no longer a ticketed process.
- Updating the child_request document is no longer a ticketed process.

- We have made stylistic updates in ARIN Online to provide a more consistent 
experience across devices and operating systems.


___
ARIN-Announce
You are receiving this message because you are subscribed to
the ARIN Announce Mailing List (arin-annou...@arin.net).
Unsubscribe or manage your mailing list subscription at:
https://lists.arin.net/mailman/listinfo/arin-announce
Please contact i...@arin.net if you experience any issues.



Re: Routed optical networks

2023-05-13 Thread Mark Tinka



On 5/12/23 22:14, Mike Hammett wrote:

"I remember 10y ago every presentation started from the claim that 
100B of IoT would drive XXX traffic. It did not happen"


Often, the type of people making these kinds of predictions claim that a 
tire pressure sensor generates 20 gigabytes of traffic a day.


I like growing old... your BS detector becomes so slick that you know to 
ignore certain links, conferences, speakers, topics, meetings, slideware, 
e-mails, colleagues, and announcements without fear of actually missing 
out on trends, because you know that in the end they will lead nowhere 
real :-).


Mark.

Re: LibreQos

2023-05-13 Thread Mark Tinka




On 5/12/23 17:59, Dave Taht wrote:


:blush:

We have done a couple podcasts about it, like this one:

https://packetpushers.net/podcast/heavy-networking-666-improving-quality-of-experience-with-libreqos/

and have perhaps made a mistake by doing development and support in
Matrix chat, rather than a web forum, where it is too invisible; but it
has been a highly entertaining way to get a better picture of the real
problems caring ISPs have.

I see you are in Africa? We have a few ISPs playing with this in Kenya...


DM me, please, the ones you are aware of that would be willing to 
share their experiences. I'd like to get them to talk about what they've 
gathered at the upcoming SAFNOG meeting in Lusaka.


We have a fairly large network in Kenya, so would be happy to engage 
with the operators running the LibreQoS there.




We do not know. Presently our work is supported by Equinix's open
source program, with four servers in their Dallas DC, and they are
25Gbit ports. Putting together enough dough to get to 100Gbit or
finding someone willing to send traffic through more bare metal at
that data center or elsewhere is on my mind. In other words, we can
easily spin up the ability to L2 route some traffic through a box in
their DCs, if only we knew where to find it. :)

If you assume linearity to cores (which is a lousy assumption, ok?),
64 Xeon cores could do about 200Gbit, running flat out. I am certain
it will not scale linearly and we will hit multiple bottlenecks on
the way to that goal.

Limits we know about:

A) Trying to drive 10s of gbits of realistic traffic through this
requires more test clients and servers than we have, or someone with
daring and that kind of real traffic in the first place. For example
one of our most gung-ho clients has 100Gbit ports, but not anywhere
near that amount of inbound traffic. (they are crazy enough to pull
git head, try it for a few minutes in production, and then roll back
or leave it up)

B) A brief test of a 64-core AMD + Nvidia Ethernet card was severely
outperformed by our current choice of a 20-core Xeon Gold + an Intel
710 or 810 card. The Ethernet card is by far the dominating factor. I
would kill for one that did an LPM -> CPU mapping (i.e. instead of an
LPM -> route mapping, LPM to which CPU to interrupt). We also tried an
80-core ARM, with inconclusive results early on.

Tests of the latest Ubuntu release are ongoing. I am not prepared to
bless that or release any results yet.

C) A single cake instance on one of the more high-end Xeons can
*almost* push 10Gbit/sec while eating a core.

D) Our model is one cake instance per subscriber + the ability to
establish trees emulating links further down the chain. One ISP is
modeling 10 mmwave hops. Another is just putting in multiple boxes
closer to the towers.
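Conceptually (these are not the rules LibreQoS itself generates), that per-subscriber model maps onto an HTB tree with a cake qdisc on each leaf. A sketch for one backhaul link with two subscribers, with made-up device names and rates:

```shell
# Root HTB tree on the shaping interface (illustrative only).
tc qdisc add dev eth1 root handle 1: htb default 2
# A class representing a 1Gbit backhaul/tower link further down the chain.
tc class add dev eth1 parent 1: classid 1:10 htb rate 1gbit
# One HTB class + cake instance per subscriber plan under that link.
tc class add dev eth1 parent 1:10 classid 1:11 htb rate 100mbit ceil 100mbit
tc qdisc add dev eth1 parent 1:11 cake rtt 200ms
tc class add dev eth1 parent 1:10 classid 1:12 htb rate 50mbit ceil 50mbit
tc qdisc add dev eth1 parent 1:12 cake rtt 200ms
```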

So in other words, 100s of gbits is achievable today if you throw
boxes at it, and it is more cost-effective to do it that way. We will,
of course, keep striving to crack 100Gbit native on a single box with
multiple cards. It is a nice goal to have.

E) In our present target markets, 10k typical residential subscribers
only eat 11Gbit/sec at peak. A LOT of the smaller ISPs and networks
fit into that space, so of late we have been focusing more on
analytics and polish than on pushing more traffic. Some of our new
real-time analytics break down at 10k cake instances (that is 40
million fq_codel queues, ok?), and we cannot sample at 10ms rates,
falling back to (presently) a conservative 1s.
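The 40-million figure is consistent with each cake instance carrying four tins of 1024 flow queues each; note the tin count depends on cake's diffserv mode, so this is an inference, not a quoted configuration:

```python
# Back-of-the-envelope: cake keeps up to 1024 flow queues per tin, and
# in diffserv4 mode runs 4 tins, so each instance holds 4096 queues.
instances = 10_000
tins = 4              # assumption: diffserv4-style tin count
queues_per_tin = 1024
total = instances * tins * queues_per_tin
print(total)  # 40,960,000 -- "40 million", as stated
```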

We are nearing putting out a v1.4-rc7, which is just features and
polish; you can get a .deb of v1.4-rc6 here:

https://github.com/LibreQoE/LibreQoS/releases/tag/v1.4-rc6

There is an optional, anonymized reporting facility built into that.
In the last two months, 44,404 cake-shaped devices shaping 0.19Tbit/s
that we know of have come online. Aside from that we have no idea how
many ISPs have picked it up! A best guess would be well over 100k subs
at this point.

Putting in LibreQoS is massively cheaper than upgrading all the CPE to
good queue management (it takes about 8 minutes to get it going in
monitor mode, but exporting shaping data into it requires glue, and
time), but better CPE remains desirable - especially CPE whose uplink
component also does sane shaping natively.
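The "glue" is typically a small exporter that turns billing/CRM data into LibreQoS's ShapedDevices.csv. The exact column set varies by release, so treat the header below as an assumption and check it against the sample file shipped with v1.4:

```python
import csv

# Hypothetical subscriber records pulled from a billing system.
subscribers = [
    {"circuit": "C001", "device": "D001", "ip": "100.64.0.2",
     "down_mbit": 100, "up_mbit": 20},
    {"circuit": "C002", "device": "D002", "ip": "100.64.0.3",
     "down_mbit": 50, "up_mbit": 10},
]

# Approximation of LibreQoS v1.4's ShapedDevices.csv header;
# verify against the sample file in your release before using.
HEADER = ["Circuit ID", "Device ID", "IPv4",
          "Download Max Mbps", "Upload Max Mbps"]

with open("ShapedDevices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    for s in subscribers:
        writer.writerow([s["circuit"], s["device"], s["ip"],
                         s["down_mbit"], s["up_mbit"]])
```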

"And dang, it, ISPs of the world, please ship decent wifi!?", because
we can see the wifi going south in many cases from this vantage point
now. In the past year mikrotik in particular has done a nice update to
fq_codel and cake in RouterOS, eero 6s have got quite good, much of
openwifi/openwrt, evenroute  is good...

It feels good, after 14 years of trying to fix the internet, to be
seeing such progress on fixing bufferbloat, and in understanding and
explaining the internet better. Join us...


All sounds very exciting.

I'll share this with some friends at Cisco who are actively looking at 
ways to incorporate such tech. in their routers in response to QUIC. 
They might