CISA(DHS): Version 2.0 of Essential Critical Infrastructure Workforce

2020-03-29 Thread Sean Donelan



The U.S. Cybersecurity & Infrastructure Security Agency (DHS) has released 
version 2.0 of its advisory memorandum on identification of essential 
critical infrastructure workers during COVID-19 response.


A lot of wordsmithing, and extensive lobbying by certain industries.  It's
easy to pick out when you compare the versions.



https://www.cisa.gov/sites/default/files/publications/CISA_Guidance_on_the_Essential_Critical_Infrastructure_Workforce_Version_2.0_Updated.pdf




Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Michael Thomas



On 3/29/20 1:46 PM, Joe Greco wrote:

On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:

Joe Greco wrote on 29/03/2020 15:56:

The concept of flooding isn't problematic by itself.
Flood often works fine until you attempt to scale it.  Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly?  General Usenet?  Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies, and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.


AFAIK, Usenet didn't die because it wasn't scalable. It died because 
people figured out how to make it a business model.


Mike



Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 15:56:
> >On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:
> >>because it uses flooding and can't guarantee reliable message
> >>distribution, particularly at higher traffic levels.
> >
> >That's so hideously wrong.  It's like claiming web forums don't
> >work because IP packet delivery isn't reliable.
> 
> Really, it's nothing like that.

Sure it is.  At a certain point you can get web forums to stop working
by DDoS.  You can't guarantee reliable interaction with a web site if
that happens.

> >Usenet message delivery at higher levels works just fine, except that
> >on the public backbone, it is generally implemented as "best effort"
> >rather than a concerted effort to deliver reliably.
> 
> If you can explain the bit of the protocol that guarantees that all 
> nodes have received all postings, then let's discuss it.

There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient.  No magic.

It's perfectly possible to make sure that you are not backlogging to a
peer and to contact them to remediate if there is a problem.  When done 
at scale, this does actually work.  And unlike IP packet delivery, news
will happily backlog and recover from a server being down or whatever.

> >The concept of flooding isn't problematic by itself.
> 
> Flood often works fine until you attempt to scale it.  Then it breaks, 
> just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly?  General Usenet?  Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies, and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone 
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.

> > If you wanted to
> >implement a collaborative system, you could easily run a private
> >hierarchy and run a separate feed for it, which you could then monitor
> >for backlogs or issues.  You do not need to dump your local traffic on
> >the public Usenet.  This can happily coexist alongside public traffic
> >on your server.  It is easy to make it 100% reliable if that is a goal.
> 
> For sure, you can operate mostly reliable self-contained systems with 
> limited distribution.  We're all in agreement about this.

Okay, good. 

> >>The fact that it ended up having to implement TAKETHIS is only one
> >>indication of what a truly awful protocol it is.
> >
> >No, the fact that it ended up having to implement TAKETHIS is a nod to
> >the problem of RTT.
> 
> TAKETHIS was necessary to keep things running because of the dual 
> problem of RTT and lack of pipelining.  Taken together, these two 
> problems made it impossible to optimise incoming feeds, because of ... 
> well, flooding, which meant that even if you attempted an IHAVE, by the 
> time you delivered the article, some other feed might already have 
> delivered it.  TAKETHIS managed to sweep these problems under the 
> carpet, but it's a horrible, awful protocol hack.

It's basically cheap pipelining.  If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.
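
For a sense of what that "cheap pipelining" looks like on the wire, here is a
minimal sketch of streaming-mode offers (MODE STREAM / CHECK / TAKETHIS, per
RFC 4644), assuming a hypothetical peer; the hostname and message-IDs are
placeholders, and a real feeder would also stream the TAKETHIS article bodies
and read back the 239/439 results:

    import socket

    PEER = ("news-peer.example.net", 119)               # placeholder peer
    MSGIDS = ["<a1@example.org>", "<a2@example.org>"]   # placeholder message-IDs

    with socket.create_connection(PEER, timeout=30) as s:
        rf = s.makefile("rb")
        reader = (raw.decode("utf-8", "replace").rstrip("\r\n") for raw in rf)
        print(next(reader))                 # 200/201 greeting
        s.sendall(b"MODE STREAM\r\n")
        print(next(reader))                 # 203: streaming permitted

        # The point of CHECK/TAKETHIS: every offer goes out in one burst,
        # instead of paying one round trip per article as IHAVE does.
        s.sendall("".join(f"CHECK {m}\r\n" for m in MSGIDS).encode())

        wanted = []
        for _ in MSGIDS:
            code, _, msgid = next(reader).partition(" ")
            if code == "238":               # peer wants it: send via TAKETHIS
                wanted.append(msgid)
            # 431 = try again later, 438 = peer already has it

        print("peer asked for:", wanted)
        s.sendall(b"QUIT\r\n")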

> >It did and has.  The large scale binaries sites are still doing a
> >great job of propagating binaries with very close to 100% reliability.
> 
> which is mostly because there are so few large binary sites these days, 
> i.e. limited distribution model.

No, there are so few large binary sites these days because of consolidation
and buyouts.

> >I was there.
> 
> So was I, and probably so were lots of other people on nanog-l.  We all 
> played our part trying to keep the thing hanging together.

I'd say most of the folks here were out of this fifteen to twenty years
ago, well before the explosion of binaries in the early 2000's.

> >I'm the maintainer of Diablo.  It's fair to say I had a
> >large influence on this issue as it was Diablo's distributed backend
> >capability that really instigated retention competition, and a number
> >of optimizations that I made helped make it practical.
> 
> Diablo was great - I used it for years after INN-related head-bleeding. 
> Afterwards, Typhoon improved things even more.
> 
> >The problem for smaller sites is simply the immense traffic volume.
> >If you want to carry binaries, you need double digits Gbps.  If you
> >filter them out, the load is actually quite trivial.
> 
> Right, so you've put your finger on the other major problem relating to 
> flooding which isn't the distribution synchronisation / optimisation 
> problem: all sites get all posts for all groups which they're configured 
> for.  This is a profound waste of resources + it doesn't scale in any 
> meaningful way.

So if you don't like that 

Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:
> Bjørn Mork wrote on 29/03/2020 13:44:
> >How is nntp non-scalable?
> 
> because it uses flooding and can't guarantee reliable message 
> distribution, particularly at higher traffic levels.

That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.

Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.

The concept of flooding isn't problematic by itself.  If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues.  You do not need to dump your local traffic on
the public Usenet.  This can happily coexist alongside public traffic
on your server.  It is easy to make it 100% reliable if that is a goal.

> The fact that it ended up having to implement TAKETHIS is only one 
> indication of what a truly awful protocol it is.

No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.

> Once again in simpler terms:
> 
> > How is nntp non-scalable?
> [...]
> > Binaries broke USENET.  That has little to do with nntp.
> 
> If it had been scalable, it could have scaled to handling the binary groups.

It did and has.  The large scale binaries sites are still doing a 
great job of propagating binaries with very close to 100% reliability.

I was there.  I'm the maintainer of Diablo.  It's fair to say I had a 
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.

The problem for smaller sites is simply the immense traffic volume. 
If you want to carry binaries, you need double digits Gbps.  If you
filter them out, the load is actually quite trivial.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


RE: Backhoe season?

2020-03-29 Thread Hal Murray


> I heard, and am seeing that construction type jobs don't seem to be affected
> much with the virus shutdown.  I mean I see guys building homes and working
> on roads all around me...  furthermore, we've heard of a couple fiber cuts
> that have brought portions of our network down a couple times in the last
> week or so. 

I suspect any reduction in backhoe activity will depend strongly on where you 
are looking.  The San Francisco Bay area, including Silicon Valley, is taking 
things seriously.

From the City of Menlo Park, Calif., March 20th:

   Due to the statewide stay-at-home order, effective Friday, March 20, no
   construction activity is allowed within the city of Menlo Park, except
   for essential infrastructure projects as determined by the City
   Manager/Emergency Services Director, until further notice. Active
   construction sites are instructed to secure their site and cease all
   further work immediately. Only activities necessary to address
   immediate health and safety concerns, as determined by the City
   Manager/Emergency Services Director, are allowed. This action is not
   taken lightly and is out of extreme concern for the health and safety
   of construction workers and city employees. Further guidance in light
   of this decision is expected to be released the week of March 23, 2020.
   Please visit the city website at menlopark.org/coronavirus for updates.



-- 
These are my opinions.  I hate spam.





Measuring packet loss and Latency Between eastern Europe and north america

2020-03-29 Thread LTGJAMAICA
I have a customer in eastern Europe accessing a SaaS application hosted in
one of Azure's North America datacenters. For the past few days, every
morning between 3am and 6am EST, performance slows to a crawl. That is
8am to 11am locally for this person, so they can't get much done.

The local ISP is providing 100 Mbps up/down.

So far, a speed test to the SaaS provider's speed test page is slow: 0.02 Mbps
down, 6 Mbps up.

Speedtest.net to North American ISPs like Verizon in New York is slow.

Speedtest to servers in eastern Europe: 100 down / 100 up.

Traceroutes/MTR don't help because a lot of hops seem to drop ICMP packets.

I need a tool or service that can detect packet loss/latency between a provider
in eastern Europe and a North American service provider. Any help is
appreciated.
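
In the meantime, a rough probe like the sketch below, run from the customer
site during the 3am-6am EST window (and from a North American vantage point
for comparison), can at least quantify loss and connect latency along the
path. The target host and port are placeholders, and TCP connect time stands
in for RTT since ICMP is being dropped:

    import socket
    import statistics
    import time

    TARGET = ("saas.example.com", 443)   # placeholder endpoint
    PROBES = 20
    TIMEOUT = 2.0                        # seconds; counted as loss if exceeded

    rtts, lost = [], 0
    for _ in range(PROBES):
        start = time.monotonic()
        try:
            with socket.create_connection(TARGET, timeout=TIMEOUT):
                rtts.append((time.monotonic() - start) * 1000.0)
        except OSError:
            lost += 1
        time.sleep(1)

    if rtts:
        print(f"{len(rtts)}/{PROBES} ok, "
              f"median {statistics.median(rtts):.1f} ms, "
              f"max {max(rtts):.1f} ms, loss {100 * lost / PROBES:.0f}%")
    else:
        print("all probes lost")

Run from cron every few minutes, the logged output gives a loss/latency
timeline that can be shown to the providers on both sides.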


Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad


Hi Harlan,

I am quite sure that we actually generally agree and are just talking
past each other, and so are you judging from your mail below.

Let’s move this discussion from the list.

Regards,

Ragnar

> On 29 Mar 2020, at 03:06, Harlan Stenn  wrote:
> 
> 
> 
> On 3/28/2020 5:35 PM, Ragnar Sundblad wrote:
>> 
>> 
>>> On 29 Mar 2020, at 01:18, Harlan Stenn  wrote:
>>> 
>>> Ragnar,
>>> 
>>> On 3/28/2020 4:59 PM, Ragnar Sundblad wrote:
 
 
> On 29 Mar 2020, at 00:35, Harlan Stenn  wrote:
> 
> Ragnar,
> 
> On 3/28/2020 4:09 PM, Ragnar Sundblad wrote:
>> 
>>> On 28 Mar 2020, at 23:58, Harlan Stenn  wrote:
>>> 
 Steven Sommars said:
> The secure time transfer of NTS was designed to avoid
 amplification attacks.
>>> 
>>> Uh, no.
>> 
>> Yes, it was.
>> 
>> As Steven said, “The secure time transfer of NTS was designed to
>> avoid amplification attacks”. I would even say - to make it
>> impossible to use for amplification attacks.
> 
> Please tell me how.  I've been part of this specific topic since the
> original NTS spec.  For what y'all are saying to be true, there are some
> underlying assumptions that would need to be in place, and they are
> clearly not in place now and won't be until people update their
> software, and even better, tweak their configs.
 
 The NTS protected NTP request is always the same size as, or in some
 cases larger than, the NTS protected NTP response. It is carefully
 designed to work that way.
>>> 
>>> So what?
>>> 
>>> The use of NTS is completely independent of whether or not a server will
>>> respond to a packet.
>>> 
>>> And an unauthenticated NTP request that generates an unauthenticated
>>> response is *always* smaller than an authenticated request, regardless
>>> of whether or not the server responds.
>>> 
>>> Claiming that amplification is a significant issue in the case where
>>> there's no amplification but the base packet size is bigger is ignoring
>>> a key piece of information, and is disingenuous in my book.
>> 
>> You are now comparing unauthenticated mode 3 and mode 4 packets to
>> cryptographically secured ones, which is a completely different thing.
>> 
>> Disingenuous?
> 
> I made no such claim.
> 
> I was talking about:
> 
>> As Steven said, “The secure time transfer of NTS was designed to
>> avoid amplification attacks”. I would even say - to make it
>> impossible to use for amplification attacks.
> 
> and that statement is not as clear as it could be.  Specifically:
> 
> NTS was designed to avoid amplification attacks
> 
> is vague.
> 
> You have just now written:
> 
>> You are now comparing unauthenticated mode 3 and mode 4 packets to
>> cryptographically secured ones, which is a completely different thing.
> 
> Completely different?  How?
> 
> Where is the amplification in an unauthenticated mode 3 request and an
> unauthenticated response?
> 
> How does cryptographically securing these packets make any difference here?
> 
>> A protocol with varying packet size, as the NTS protected NTP is,
>> can easily have the bad property of having responses larger than the
>> requests if care is not taken. Don’t you see that?
> 
> I sure think I understand these points.
> 
> Who here has said that there was any problem with there being an
> amplification issue with properly-authenticated NTS packets?
> 
> The current NTS spec is *only* written for mode 3/4 exchanges.  While it
> might be applicable for mode 6/7, I haven't seen any specs for this
> usage.  In the NTP Project's Reference implementation there are extra
> implementation-specific protections built in to mode 6/7 exchanges.
> While some of this might be addressed in the NTS spec, I don't recall
> ever seeing this.
> 
> So why are you talking about mode 6/7 in this context?
> 
 Hence, [what Steven said].
 
>>> If you understand what's going on from the perspective of both the
>>> client and the server and think about the various cases, I think you'll
>>> see what I mean.
>> 
>> Hopefully, no-one exposes mode 6 or mode 7 on the internet anymore
>> at least not unauthenticated, and at least not the commands that are
>> not safe from amplification attacks. Those just can not be allowed
>> to be used anonymously.
> 
> But mode 6/7 is completely independent of NTS.
 
 Exactly. No one needs to, or should, expose mode 6/7 at all. They were
 designed at a time when the internet was thought to be a nice place where
 people behaved, decades ago; today they are just huge pains in the
 rear. Sadly, allowing anonymous mode 6/7 was left in there far too long
 (admittedly, being wise in hindsight is so much easier than in advance).
 And here we are, with UDP port 123 still being abused by the bad
 guys, and still being filtered by the networks.
>>> 
>>> Your statement about "exposing" is imprecise and 

Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad



> On 29 Mar 2020, at 01:18, Harlan Stenn  wrote:
> 
> Ragnar,
> 
> On 3/28/2020 4:59 PM, Ragnar Sundblad wrote:
>> 
>> 
>>> On 29 Mar 2020, at 00:35, Harlan Stenn  wrote:
>>> 
>>> Ragnar,
>>> 
>>> On 3/28/2020 4:09 PM, Ragnar Sundblad wrote:
 
> On 28 Mar 2020, at 23:58, Harlan Stenn  wrote:
> 
>> Steven Sommars said:
>>> The secure time transfer of NTS was designed to avoid
>>  amplification attacks.
> 
> Uh, no.
 
 Yes, it was.
 
 As Steven said, “The secure time transfer of NTS was designed to
 avoid amplification attacks”. I would even say - to make it
 impossible to use for amplification attacks.
>>> 
>>> Please tell me how.  I've been part of this specific topic since the
>>> original NTS spec.  For what y'all are saying to be true, there are some
>>> underlying assumptions that would need to be in place, and they are
>>> clearly not in place now and won't be until people update their
>>> software, and even better, tweak their configs.
>> 
>> The NTS protected NTP request is always the same size as, or in some
>> cases larger than, the NTS protected NTP response. It is carefully
>> designed to work that way.
> 
> So what?
> 
> The use of NTS is completely independent of whether or not a server will
> respond to a packet.
> 
> And an unauthenticated NTP request that generates an unauthenticated
> response is *always* smaller than an authenticated request, regardless
> of whether or not the server responds.
> 
> Claiming that amplification is a significant issue in the case where
> there's no amplification but the base packet size is bigger is ignoring
> a key piece of information, and is disingenuous in my book.

You are now comparing unauthenticated mode 3 and mode 4 packets to
cryptographically secured ones, which is a completely different thing.

Disingenuous?

A protocol with varying packet size, as the NTS protected NTP is,
can easily have the bad property of having responses larger than the
requests if care is not taken. Don’t you see that?

>> Hence, [what Steven said].
>> 
> If you understand what's going on from the perspective of both the
> client and the server and think about the various cases, I think you'll
> see what I mean.
 
 Hopefully, no-one exposes mode 6 or mode 7 on the internet anymore
 at least not unauthenticated, and at least not the commands that are
 not safe from amplification attacks. Those just can not be allowed
 to be used anonymously.
>>> 
>>> But mode 6/7 is completely independent of NTS.
>> 
>> Exactly. No one needs to, or should, expose mode 6/7 at all. They were
>> designed at a time when the internet was thought to be a nice place where
>> people behaved, decades ago; today they are just huge pains in the
>> rear. Sadly, allowing anonymous mode 6/7 was left in there far too long
>> (admittedly, being wise in hindsight is so much easier than in advance).
>> And here we are, with UDP port 123 still being abused by the bad
>> guys, and still being filtered by the networks.
> 
> Your statement about "exposing" is imprecise and bordering on incorrect,
> at least in some cases.

Exposing to the Internet, or anyone but the system owner.

I just can’t imagine that you didn’t fully understand that.

> But again, the use of mode 6/7 is completely independent of NTS.  You
> are trying to tie them together.

I am certainly not, and I think that should be perfectly clear from
the above.

Mode 6/7 packets should generally never be exposed outside localhost,
and should probably be replaced by something entirely different.

They are just extremely troublesome relics from a time long ago,
and they should have been removed from anonymous exposure on the
Internet twenty years ago at least.

Don’t you understand that?
If you don't, you are part of the problem of killing UDP port 123,
not part of the solution for saving it.

>>> It's disingenuous for people to imply otherwise.
>> 
I couldn’t say; I don’t even know of an example of someone who does.
> 
> You've done it in two cases here, from everything I have seen.

I have not. End of story.

Ragnar



Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad



> On 29 Mar 2020, at 00:35, Harlan Stenn  wrote:
> 
> Ragnar,
> 
> On 3/28/2020 4:09 PM, Ragnar Sundblad wrote:
>> 
>>> On 28 Mar 2020, at 23:58, Harlan Stenn  wrote:
>>> 
 Steven Sommars said:
> The secure time transfer of NTS was designed to avoid
   amplification attacks.
>>> 
>>> Uh, no.
>> 
>> Yes, it was.
>> 
>> As Steven said, “The secure time transfer of NTS was designed to
>> avoid amplification attacks”. I would even say - to make it
>> impossible to use for amplification attacks.
> 
> Please tell me how.  I've been part of this specific topic since the
> original NTS spec.  For what y'all are saying to be true, there are some
> underlying assumptions that would need to be in place, and they are
> clearly not in place now and won't be until people update their
> software, and even better, tweak their configs.

The NTS protected NTP request is always the same size as, or in some
cases larger than, the NTS protected NTP response. It is carefully
designed to work that way.

Hence, [what Steven said].
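
To spell out why that size relationship matters: a spoofed-source reflection
attack only pays off when the reflector returns more bytes than it receives.
A trivial sketch with purely illustrative byte counts:

    def amplification_factor(request_bytes: int, response_bytes: int) -> float:
        # Attackers look for reflectors where this ratio is well above 1.0.
        return response_bytes / request_bytes

    # Request padded to at least the response size (the design goal above):
    print(amplification_factor(request_bytes=228, response_bytes=228))  # 1.0
    # A hypothetical reflector with a tiny query and a large reply:
    print(amplification_factor(request_bytes=50, response_bytes=5000))  # 100.0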

>>> If you understand what's going on from the perspective of both the
>>> client and the server and think about the various cases, I think you'll
>>> see what I mean.
>> 
>> Hopefully, no-one exposes mode 6 or mode 7 on the internet anymore
>> at least not unauthenticated, and at least not the commands that are
>> not safe from amplification attacks. Those just can not be allowed
>> to be used anonymously.
> 
> But mode 6/7 is completely independent of NTS.

Exactly. No one needs to, or should, expose mode 6/7 at all. They were
designed at a time when the internet was thought to be a nice place where
people behaved, decades ago; today they are just huge pains in the
rear. Sadly, allowing anonymous mode 6/7 was left in there far too long
(admittedly, being wise in hindsight is so much easier than in advance).
And here we are, with UDP port 123 still being abused by the bad
guys, and still being filtered by the networks.

> It's disingenuous for people to imply otherwise.

I couldn’t say; I don’t even know of an example of someone who does.

Ragnar



Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad


> On 28 Mar 2020, at 23:58, Harlan Stenn  wrote:
> 
>> Steven Sommars said:
>>> The secure time transfer of NTS was designed to avoid
>>amplification attacks.
> 
> Uh, no.

Yes, it was.

As Steven said, “The secure time transfer of NTS was designed to
avoid amplification attacks”. I would even say - to make it
impossible to use for amplification attacks.

> If you understand what's going on from the perspective of both the
> client and the server and think about the various cases, I think you'll
> see what I mean.

Hopefully, no-one exposes mode 6 or mode 7 on the internet anymore
at least not unauthenticated, and at least not the commands that are
not safe from amplification attacks. Those just can not be allowed
to be used anonymously.

> NTS is a task-specific hammer.

Yes.

Ragnar



Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 10:31:50PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 21:46:
> >On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> >>>That's so hideously wrong.  It's like claiming web forums don't
> >>>work because IP packet delivery isn't reliable.
> >>
> >>Really, it's nothing like that.
> >
> >Sure it is.  At a certain point you can get web forums to stop working
> >by DDoS.  You can't guarantee reliable interaction with a web site if
> >that happens.
> 
> this is failure caused by external agency, not failure caused by 
> inherent protocol limitations.

Yet we're discussing "low BW and losy(sic) connections".  Which would be a
failure of IP to be magically available with zero packet loss and at high
speeds.  There are lots of people for whom low-speed DSL, dialup, WISP,
4G, GPRS, satellite, or actually nothing at all are the only available
Internet options.

> >>>Usenet message delivery at higher levels works just fine, except that
> >>>on the public backbone, it is generally implemented as "best effort"
> >>>rather than a concerted effort to deliver reliably.
> >>
> >>If you can explain the bit of the protocol that guarantees that all
> >>nodes have received all postings, then let's discuss it.
> >
> >There isn't, just like there isn't a bit of the protocol that guarantees
> >that an IP packet is received by its intended recipient.  No magic.
> 
> tcp vs udp.

IP vs ... what exactly?

> >>Flood often works fine until you attempt to scale it.  Then it breaks,
> >>just like Bjørn admitted. Flooding is inherently problematic at scale.
> >
> >For... what, exactly?  General Usenet?
> 
> yes, this is what we're talking about.  It couldn't scale to general 
> usenet levels.

The scale issue wasn't flooding, it was bandwidth and storage.  It's 
actually not problematic to do history lookups (the key mechanism in 
what you're calling "flooding") because even at a hundred thousand per 
second, that's well within the speed of CPU and RAM.  Oh, well, yes, 
if you're trying to do it on HDD, that won't work anymore, and quite 
possibly SSD will reach limits.  But that's a design issue, not a scale
problem.

Most of Usenet's so-called "scale" problems had to do with disk I/O and
network speeds, not flood fill.
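
As a rough illustration of why the history lookup itself is cheap: even a
naive in-memory table refuses duplicate offers far faster than any feed can
generate them. Real servers keep the history hashed on disk with expiry; this
sketch only shows the mechanism, with fabricated message-IDs:

    import time

    history = set()

    def offered(message_id: str) -> bool:
        """Return True if we want the article (i.e. it has not been seen before)."""
        if message_id in history:
            return False        # duplicate offer from another peer: refuse it
        history.add(message_id)
        return True             # new article: accept it and flood it onward

    ids = [f"<{i}@news.example.net>" for i in range(1_000_000)]
    t0 = time.monotonic()
    accepted = sum(offered(m) for m in ids + ids)   # second pass is all duplicates
    dt = time.monotonic() - t0
    print(f"{accepted} accepted, {2_000_000 / dt:,.0f} lookups/sec")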

> >Perhaps, but mainly because you
> >do not have a mutual agreement on traffic levels and a bunch of other
> >factors.  Flooding works just fine within private hierarchies and since
> >I thought this was a discussion of "free collaborative tools" rather than
> >"random newbie trying to masochistically keep up with a full backbone
> >Usenet feed", it definitely should work fine for a private hierarchy and
> >collaborative use.
> 
> Then we're in violent agreement on this point.  Great!

Okay, fine, but it's kinda the same thing as "last week some noob got a
1990's era book on setting up a webhost, bought a T1, and was flummoxed
at why his service sucked."

The Usenet "backbone" with binaries isn't going to be viable without a
real large capex investment and significant ongoing opex.  This isn't a
failure in the technology.

> >>delivered it.  TAKETHIS managed to sweep these problems under the
> >>carpet, but it's a horrible, awful protocol hack.
> >
> >It's basically cheap pipelining.
> 
> no, TAKETHIS is unrestrained flooding, not cheap pipelining.

It is definitely not unrestrained.  Sorry, been there inside the code.
There's a limited window out of necessity, because you get interesting
behaviours if a peer is held off too long.

> >If you want to call pipelining in
> >general a horrible, awful protocol hack, then that's probably got
> >some validity.
> 
> you could characterise pipelining as a necessary reaction to the fact 
> that the speed of light is so damned slow.

Sure.

> >>which is mostly because there are so few large binary sites these days,
> >>i.e. limited distribution model.
> >
> >No, there are so few large binary sites these days because of consolidation
> >and buyouts.
> 
> 20 years ago, lots of places hosted binaries.  They stopped because it 
> was pointless and wasteful, not because of consolidation.

I thought they stopped it because some of us offered them a better model 
that reduced their expenses and eliminated the need to have someone who was 
an expert in an esoteric '80's era service, while also investing in all the
capex/opex. 

Lots of companies sold wholesale Usenet, usually just by offering access to 
a remote service.  As the amount of Usenet content exploded, the increasing 
cost of storage for a feature a declining number of users was using didn't 
make sense. 

One of my companies specialized in shipping dreader boxes to ISP's and 
letting them backend off remote spools, usually for a fairly modest cost 
(high three, low four figures?).  This let them have control over the service
that was unlike anything other service providers were doing.  Custom groups,
custom integration with their auth/billing, etc., required about a megabit
of 

Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad



> On 28 Mar 2020, at 23:29, Bottiger  wrote:
...
> Broken protocols need to be removed and blacklisted at every edge.

A protocol isn’t broken just because it can be abused with spoofed sources;
it is merely being abused. Even TCP can be abused in that way.
Should we blacklist and remove TCP?

> Pushing the responsibility to BCP38 is unrealistic.

It would help quite a bit against a lot of abuse, and it would be
reasonable to include it in the minimum technical requirements for
being called an ISP.

So what do the ISPs want - to earn money while doing nothing until the
Internet is unusable? I don’t get it.
There are enough threats against the open Internet as it is; we
don’t need this one too.
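
For readers who haven't run into it, BCP38 ingress filtering amounts to a
customer-facing edge dropping any packet whose source address falls outside
the prefixes delegated to that customer, which removes the spoofing that
reflection attacks depend on. A sketch of the check with placeholder prefixes
follows; on real gear this is an ingress ACL or strict uRPF, not a script:

    from ipaddress import ip_address, ip_network

    CUSTOMER_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("2001:db8:1::/48")]

    def permit_source(src: str) -> bool:
        """Accept a packet only if its source lies in the customer's prefixes."""
        addr = ip_address(src)
        return any(addr in net for net in CUSTOMER_PREFIXES)

    for src in ["192.0.2.77", "198.51.100.9", "2001:db8:1::53"]:
        print(src, "permit" if permit_source(src) else "drop (spoofed or leaked)")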

Ragnar



Re: UDP/123 policers & status

2020-03-29 Thread Ragnar Sundblad



> On 27 Mar 2020, at 18:54, Saku Ytti  wrote:
> 
> On Fri, 27 Mar 2020 at 19:48, Ragnar Sundblad  wrote:
> 
>> Is this really what the ISP community wants - to kill off port 123,
>> and force NTP to move to random ports?
> 
> Make NST attenuation vector, so that reply is guaranteed to be
> significantly smaller than request, and by standard drop small
> requests.

The NTP replies on port 123 are of the same size as the request, or
smaller on error.

If filtering on request/reply (or “mode” in NTP lingo), you could filter
out the control packets which have the amplification problems in very old
configurations.
You could allow request and reply, mode 3 and 4, but disallow control
packets, mode 6.
This kind of filtering may not be possible in all equipment though.
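
Concretely, the mode sits in the low three bits of the first NTP byte
(LI:2, VN:3, Mode:3), so the classification is cheap wherever a filter can
see the UDP payload. A minimal sketch; the sample packet is a fabricated
48-byte mode-3 client request:

    def ntp_mode(payload: bytes) -> int:
        """Mode is the low three bits of the first byte (LI:2, VN:3, Mode:3)."""
        return payload[0] & 0x07

    def permit(payload: bytes) -> bool:
        # Allow ordinary client/server exchange; drop control/private queries.
        return ntp_mode(payload) in (3, 4)

    client_request = bytes([0x23]) + bytes(47)   # LI=0, VN=4, Mode=3, 48 bytes total
    print(ntp_mode(client_request), permit(client_request))   # 3 True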

Another option is to rate limit the traffic, even though that is not
entirely without problems either - public servers are supposed to get
a lot more traffic than a typical client generates.

I know that ISPs have been hunting down machines with other services
that could be used for e.g. amplification or spam, like SMTP relays,
DNS resolvers, HTTP proxies, and similar.
This would be fully possible also with these bad NTP configurations
that have not been updated in many, many years.
I think only the ISPs are in a position to both find out who they
are and to force them to be fixed.

Ragnar



Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Joe Greco wrote on 29/03/2020 21:46:

On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:

That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.


Really, it's nothing like that.


Sure it is.  At a certain point you can get web forums to stop working
by DDoS.  You can't guarantee reliable interaction with a web site if
that happens.


this is failure caused by external agency, not failure caused by 
inherent protocol limitations.



Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.


If you can explain the bit of the protocol that guarantees that all
nodes have received all postings, then let's discuss it.


There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient.  No magic.


tcp vs udp.


Flood often works fine until you attempt to scale it.  Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.


For... what, exactly?  General Usenet?


yes, this is what we're talking about.  It couldn't scale to general 
usenet levels.



Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.


Then we're in violent agreement on this point.  Great!


delivered it.  TAKETHIS managed to sweep these problems under the
carpet, but it's a horrible, awful protocol hack.


It's basically cheap pipelining.


no, TAKETHIS is unrestrained flooding, not cheap pipelining.


If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.


you could characterise pipelining as a necessary reaction to the fact 
that the speed of light is so damned slow.



which is mostly because there are so few large binary sites these days,
i.e. limited distribution model.


No, there are so few large binary sites these days because of consolidation
and buyouts.


20 years ago, lots of places hosted binaries.  They stopped because it 
was pointless and wasteful, not because of consolidation.



Right, so you've put your finger on the other major problem relating to
flooding which isn't the distribution synchronisation / optimisation
problem: all sites get all posts for all groups which they're configured
for.  This is a profound waste of resources + it doesn't scale in any
meaningful way.


So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly?  Shouldn't get everything
they want?


The default distribution model of the 1990s was *.  These days, only a 
tiny handful of sites handle everything, because the overheads of 
flooding are so awful.  To make it clear, this awfulness is resource 
related, and the knock-on effect is that the resource cost is untenable.


Usenet, like other systems, can be reduced to an engineering / economics 
management problem.  If the cost of making it operate correctly can't be 
justified, then it's non-viable.



None of this changes that it's a robust, mature protocol that was originally
designed for handling non-binaries and is actually pretty good in that role.
Having the content delivered to each site means that there is no dependence
on long-distance interactive IP connections and that each participating site
can keep the content for however long they deem useful.  Usenet keeps hummin'
along under conditions that would break more modern things like web forums.


It's a complete crock of a protocol with robust and mature 
implementations.  Diablo is one and for that, we have people like Matt 
and you to thank.


Nick


SAFNOG-6: Deferred - Coronavirus (COVID-19)

2020-03-29 Thread Mark Tinka
Hello all.

As you are aware, the world is facing the ongoing challenge of the
spread of the Coronavirus (COVID-19), which has had a material impact on
a number of local, regional and global events and businesses across
various industries.

Unfortunately, the preparation for SAFNOG-6, initially scheduled for
September 2020, has not been spared, as the pandemic has made
logistical planning uncertain amongst our sponsors, supporters and
venues. To this end, we have made the rather difficult, but responsible,
decision to defer the SAFNOG-6 meeting to 20th - 22nd September, 2021,
in order to ensure that safety and health come first.

SAFNOG would also like to take this opportunity to encourage its members,
and those of other Network Operators Groups, to actively use and increase
their participation on this and other mailing lists around the world, as
the learning and engagement process never ends and is, in fact, probably
more crucial in these times. The operational community can continue to
exist successfully even throughout this period of social distancing.

We look forward to seeing you all in the near future, and wish you
continued good health and care.

Mark Tinka
On Behalf of the SAFNOG Management Committee


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Joe Greco wrote on 29/03/2020 15:56:

On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:

because it uses flooding and can't guarantee reliable message
distribution, particularly at higher traffic levels.


That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.


Really, it's nothing like that.


Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.


If you can explain the bit of the protocol that guarantees that all 
nodes have received all postings, then let's discuss it.



The concept of flooding isn't problematic by itself.


Flood often works fine until you attempt to scale it.  Then it breaks, 
just like Bjørn admitted. Flooding is inherently problematic at scale.



 If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues.  You do not need to dump your local traffic on
the public Usenet.  This can happily coexist alongside public traffic
on your server.  It is easy to make it 100% reliable if that is a goal.


For sure, you can operate mostly reliable self-contained systems with 
limited distribution.  We're all in agreement about this.



The fact that it ended up having to implement TAKETHIS is only one
indication of what a truly awful protocol it is.


No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.


TAKETHIS was necessary to keep things running because of the dual 
problem of RTT and lack of pipelining.  Taken together, these two 
problems made it impossible to optimise incoming feeds, because of ... 
well, flooding, which meant that even if you attempted an IHAVE, by the 
time you delivered the article, some other feed might already have 
delivered it.  TAKETHIS managed to sweep these problems under the 
carpet, but it's a horrible, awful protocol hack.



It did and has.  The large scale binaries sites are still doing a
great job of propagating binaries with very close to 100% reliability.


which is mostly because there are so few large binary sites these days, 
i.e. limited distribution model.



I was there.


So was I, and probably so were lots of other people on nanog-l.  We all 
played our part trying to keep the thing hanging together.



I'm the maintainer of Diablo.  It's fair to say I had a
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.


Diablo was great - I used it for years after INN-related head-bleeding. 
Afterwards, Typhoon improved things even more.



The problem for smaller sites is simply the immense traffic volume.
If you want to carry binaries, you need double digits Gbps.  If you
filter them out, the load is actually quite trivial.


Right, so you've put your finger on the other major problem relating to 
flooding which isn't the distribution synchronisation / optimisation 
problem: all sites get all posts for all groups which they're configured 
for.  This is a profound waste of resources + it doesn't scale in any 
meaningful way.


Nick


Re: Free.fr vs HE.net IPv6 (Was: CISA: Guidance on the Essential Critical Infrastructure Workforce)

2020-03-29 Thread Mike Hammett
I did err somewhere, yes. Whether I didn't read that part, didn't send the right 
link, etc., I'm not sure. 


Yeah, single-homed on Cogent IPv6 is a problem. 


Maybe I just assumed that if you had transit from someone, you got IPv4 
and IPv6 service with them. Who doesn't do that? 




- 
Mike Hammett 
Intelligent Computing Solutions 

Midwest Internet Exchange 

The Brothers WISP 

- Original Message -

From: "Radu-Adrian Feurdean"  
To: "NANOG"  
Sent: Saturday, March 28, 2020 10:22:24 PM 
Subject: Free.fr vs HE.net IPv6 (Was: CISA: Guidance on the Essential Critical 
Infrastructure Workforce) 

On Sat, Mar 28, 2020, at 19:52, Mike Hammett wrote: 
> https://radar.qrator.net/as12322/providers#startDate=2019-12-27=2020-03-27=current
>  

Did you read the part about *IPv6* traffic? 
Your link points to some IPv*4* relationship. Over IPv6, you get this: 

https://radar.qrator.net/as12322/ipv6-providers#startDate=2019-12-29=2020-03-29=current
 

Note the "Active Now" part, which is only active for Cogent. 

And then, rather than taking QRator (which does a good job and has interesting 
information on a number of things - who buys transit from whom *NOT* being one 
of those things, or at least not publicly) as the word of absolute 
truth, did you check what bgp.he.net thinks about this? Since HE is one of the 
parties, it does make sense to check their tools to see their point of view. 

Long story short: 
- Free.fr is known in France (where I happen to live and work) for only having 
Cogent as a transit provider for the last few years. 
- they are also known to peer (like "only exchange own routes and customer 
routes") with some "very big" networks (usually called "tier-1") : level3 and 
zayo among them. 
- Cogent and HE over IPv6 ... I suppose everybody knows the story. 
- Free.fr depeered he.net about one week ago... 

There have been some exchanges of tentative traceroutes in both directions on 
FRnOG (the French NOG), and things are clear: free.fr and he.net cannot exchange 
IPv6 traffic. 



Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Bjørn Mork wrote on 29/03/2020 13:44:

How is nntp non-scalable?


because it uses flooding and can't guarantee reliable message 
distribution, particularly at higher traffic levels.


The fact that it ended up having to implement TAKETHIS is only one 
indication of what a truly awful protocol it is.


Once again in simpler terms:

> How is nntp non-scalable?
[...]
> Binaries broke USENET.  That has little to do with nntp.

If it had been scalable, it could have scaled to handling the binary groups.

Nick


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Bjørn Mork
Nick Hilliard  writes:

> nntp is a non-scalable protocol which broke under its own
> weight.

How is nntp non-scalable?  It allows an infinite number of servers
connected in a tiered network, where you only have to connect to a few
other peers and carry whatever part of the traffic you want.

Binaries broke USENET.  That has little to do with nntp.

nntp is still working just fine and still carrying a few discussion
groups here and there.  And you have a really nice mailing list gateway
at news.gmane.io (which recently replaced gmane.org - see
https://lars.ingebrigtsen.no/2020/01/15/news-gmane-org-is-now-news-gmane-io/
for the full story).


Bjørn


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Rich Kulawiec
On Wed, Mar 25, 2020 at 05:27:41PM +, Nick Hilliard wrote:
> nntp is a non-scalable protocol which broke under its own weight. Threaded
> news-readers are a great way of catching up with large mailing lists if
> you're prepared to put in the effort to create a bidirectional gateway.  But
> that's really a statement that mail readers are usually terrible at handling
> large threads rather than a statement about nntp as a useful media delivery
> protocol.

Some mail readers are terrible at that: mutt isn't.

And one of the nice things about trn (and I believe slrn, although
that's an educated guess, I haven't checked) is that it can save
Usenet news articles in Unix mbox format, which means that you can
read them with mutt as well.  I have trn set up to run via a cron job
that executes a script that grabs the appropriate set of newsgroups,
spam-filters them, saves what's left to a per-newsgroup mbox file that
I can read just like I read this list.

Similarly, rss2email saves RSS feeds in Unix mbox format.  And one of
the *very* nice things about coercing everything into mbox format is
that myriad tools exist for sorting, searching, indexing, etc.
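
For anyone wanting to approximate that setup without trn, here is a minimal
sketch of the fetch-into-mbox step using Python's nntplib and mailbox modules
(nntplib is deprecated in recent Python releases). The server, group, and
article count are placeholders, and a real version would also spam-filter and
remember the last article it fetched:

    import email
    import mailbox
    import nntplib

    SERVER = "news.example.net"        # placeholder NNTP server
    GROUP = "comp.misc"                # placeholder newsgroup
    BOX = mailbox.mbox("comp.misc.mbox")

    with nntplib.NNTP(SERVER) as nn:
        _, count, first, last, _ = nn.group(GROUP)
        start = max(first, last - 50)                 # roughly the last 50 articles
        _, overviews = nn.over((start, last))
        for artnum, _fields in overviews:
            _, info = nn.article(artnum)
            msg = email.message_from_bytes(b"\r\n".join(info.lines))
            BOX.add(msg)                              # mbox readable with mutt
    BOX.flush()

The resulting mbox file can then be read with mutt, just like the per-newsgroup
files described above.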

---rsk