Re: [External] Re: What do you think about this airline vs 5G brouhaha?

2022-01-20 Thread Keenan Tims

On 2022-01-20 00:49, Mark Tinka wrote:
Furthermore, the AoA "Disagree Alert" message that would need to
appear on the pilot's display in the case of an AoA sensor failure
was an "optional extra", which most airlines elected not to add to the
purchase order; Air Canada, American Airlines and WestJet all
bought the extra feature, but Lion Air didn't.


There were a huge number of failures at Boeing in the MAX/MCAS program; 
it's clear the program, if not the whole company, was rotten to the core. 
But this isn't quite an accurate characterization of that particular 
failure.


The AOA DISAGREE alert was never intended as an optional feature. 
However, either due to a software bug or a miscommunication between Boeing 
and their contractor for the avionics package (Collins?), it got tied to 
the optional AoA (value) indicator. This was caught *by Boeing* and 
reported to the contractor, but Boeing instructed them not to fix the 
problem, deferring it to a software update three years later, and never 
bothered to notify operators or the FAA about it.


Somehow it's even worse this way. I don't think a working DISAGREE alarm 
would have saved the flights, though.


Keenan



Re: Elephant in the room - Akamai

2019-12-06 Thread Keenan Tims
Speaking as a (very) small operator, we've also been seeing less and 
less of our Akamai traffic coming to us over peering over the last 
couple of years. I've reached out to the Akamai NOC as well as to Jared 
directly on a few occasions, and while they've been helpful and their 
changes usually have some short-term impact, the balance has always 
shifted back some weeks/months later. I've more or less resigned myself 
to this being how Akamai wants things and, as we so often have to as 
small fish, to just dealing with it.


We're currently seeing about 80% of our AS20940 origin traffic coming 
from transit, and I'm certain there's a significant additional amount 
which is difficult to identify coming from on-net caches at our upstream 
providers (though it appears from the thread that may be reducing as 
well). Only about 20% is coming from peering where we have significantly 
more capacity and lower costs. Whatever the algorithm is doing, from my 
perspective it doesn't make a lot of sense and is pretty frustrating, 
and I'm somewhat concerned about busting commits and possibly running 
into congestion for the next big event that does hit us, which would not 
be a problem if it were delivered over peering.
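
A rough sketch of how one might tally this sort of transit-vs-peering split 
from flow exports is below (Python; the CSV layout, field names and interface 
lists are assumptions for illustration, not any particular collector's output, 
and traffic served from on-net caches at upstreams wouldn't show AS20940 as the 
source AS at all, which is why it's hard to identify):

#!/usr/bin/env python3
# Rough sketch only: tally Akamai-origin traffic by ingress class from a
# flow export. File name, column names and ifIndex sets are hypothetical.
import csv
from collections import defaultdict

AKAMAI_ASN = 20940
PEERING_IFINDEXES = {12, 17}   # assumed ifIndexes of IX/peering ports
TRANSIT_IFINDEXES = {3, 4}     # assumed ifIndexes of transit ports

bytes_by_class = defaultdict(int)
with open("flows.csv") as f:   # assumed columns: src_as,in_ifindex,bytes
    for row in csv.DictReader(f):
        if int(row["src_as"]) != AKAMAI_ASN:
            continue
        ifindex = int(row["in_ifindex"])
        if ifindex in PEERING_IFINDEXES:
            cls = "peering"
        elif ifindex in TRANSIT_IFINDEXES:
            cls = "transit"
        else:
            cls = "other"
        bytes_by_class[cls] += int(row["bytes"])

total = sum(bytes_by_class.values()) or 1
for cls in sorted(bytes_by_class):
    b = bytes_by_class[cls]
    print(f"{cls:8s} {b:15d} bytes {100 * b / total:5.1f}%")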


Luckily we're business focussed, so we're not getting hit by these 
gaming events.


Keenan Tims
Stargate Connections Inc (AS19171)

On 2019-12-06 8:13 a.m., Jared Mauch wrote:



On Dec 6, 2019, at 9:59 AM, Chris Adams  wrote:

Once upon a time, Fawcett, Nick  said:

We had three onsite Akamai caches a few months ago.  They called us up and said 
they are removing that service and sent us boxes to pack up the hardware and 
ship back.  We’ve had quite the increase in DIA traffic as a result of it.

Same here.  We'd had Akamai servers for many years, replaced as needed
(including one failed server replaced right before they turned them
off).  Now about 50% of our Akamai traffic comes across transit links,
not peering.  This seems like it would be rather inefficient for them
too…

There’s an element of scale when it comes to certain content that makes it not 
viable: if the majority of traffic is VOD with variable bitrates, it requires a 
lot more capital.

Things like downloads of software updates (eg: Patch Tuesday) lend themselves 
to different optimizations.  The hardware has a cost, and so does the bandwidth.

I’ll say that most places that have a few servers may only see a minor 
improvement in their in:out ratio.  If you’re not peering with us, or you are and 
still see significant traffic via transit, please do reach out.

I’m happy to discuss in private or at any NANOG/IETF meeting people are at.  We 
generally have someone at most of the other NOG meetings as well, including 
RIPE, APRICOT and even GPF etc.

I am personally always looking for better ways to serve the medium (or small) 
size providers.

- Jared





Re: Juniper Config Commit causes Cisco Etherchannels to go into err-disable state

2018-04-06 Thread Keenan Tims
What it's telling you is totally unclear, though. I've asked TAC to
explain to me the packet behaviour that generates this errdisable, and
haven't been able to get a clear answer from them. It seems to come out
of 'nowhere' on multi-vendor networks, where all other vendors are
perfectly happy and no operational or configuration issue is evident,
other than Cisco shutting the port. As far as I can tell from the
documentation's description of this case, it should not even be possible
for it to trigger when LACP is in use (as the 'port channel' is
negotiated by LACP, not configured by the user...), yet it certainly can.

FWIW, I've also seen this between Juniper and Cisco, and have been
forced to disable the misconfig detection.

If you know exactly what Cisco's STP is telling me happened with this
error, I'd really love to know; it might at least help me understand how
it could be triggering, because it is definitely not 'port-channel
misconfiguration'.

Keenan


On 2018-04-05 02:26 PM, Naslund, Steve wrote:
> It really does not resolve anything; it just allows a bad configuration to 
> work.  The guard is there so that if one side is configured as a channel and 
> the other side is not, the channel gets shut down.  Allowing it to remain up 
> can cause a BPDU loop.  Your spanning tree is trying to tell you something; 
> you should listen, or you could get really-hard-to-isolate issues.
>
> Steven Naslund
> Chicago IL  
>
>> -Original Message-
>> From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Joseph Jenkins
>> Sent: Thursday, April 05, 2018 4:16 PM
>> To: Robert Webb
>> Cc: nanog@nanog.org
>> Subject: Re: Juniper Config Commit causes Cisco Etherchannels to go into 
>> err-disable state
>>
>> No there isn't, but the responses I am getting both on-list and off-list 
>> say to just run this on the Cisco switches:
>>
>> no spanning-tree etherchannel guard misconfig
>>
>> and that should resolve the issue.
>>
>> Thanks Everyone.



Re: US/Canada International border concerns for routing

2017-08-08 Thread Keenan Tims

On 2017-08-08 17:10, Bill Woodcock wrote:

No.  In fact, Bell Canada / Bell Aliant and Telus guarantee that you _will_ go 
through Chicago, Seattle, New York, or Ashburn, since none of them peer 
anywhere in Canada at all.


The major national networks (Bell, Rogers, Telus, Shaw, Zayo/Allstream) 
do peer with each other and some other large / old Canadian networks 
(e.g. MTS, SaskTel, Peer1) within Canada. While they do practice peering 
protectionism and only purchase transit out of country, the situation is 
not *quite* so bad that all traffic round-trips through the US.


Of course, if neither side of the conversation has at least one of those 
major networks as a transit upstream (and those networks carry most of the 
eyeballs and most of the important Canadian content), you'll see that hop 
through Chicago or Seattle (or worse). Which is exactly the way they like it.


Keenan



Re: Juniper Advertise MED on EBGP session.

2017-02-21 Thread Keenan Tims
I also spent a significant amount of time trying to figure out a way to 
do this, and was using communities for a while before I found a 
solution. It turns out that the expression knob lets you use the 
existing metric as an input, and this works to export the iBGP MED, at 
least on my 12.3X48 SRX:


then {
    metric {
        expression {
            metric multiplier 1;
        }
    }
}

Keenan

On 2017-02-21 07:26, Leo Bicknell wrote:

I tried to pull an old trick out of my playbook this morning and
failed.  I'd like to advertise BGP Metrics on an EBGP session,
specifically the existing internal metrics.  I know how to do this
on a Cisco, but I tried on a Juniper and it seems to be impossible.

I can set a metric in a policy, or put a default metric on the
session as a whole, or even set it to IGP.  But none of those are
what I want.  I want the existing metrics advertised as-is, just
like would be done over an IBGP session.  After an hour of reading
documentation and trying a few things, I'm starting to think it
may be impossible on JunOS.

Anyone have a tip or trick?





Re: Recent NTP pool traffic increase

2016-12-20 Thread Keenan Tims
Better for whom? I'm sure all mobile operating systems provide some 
access to time, with at least 'seconds' resolution. If an app deems this 
time source untrustworthy for some reason, I don't think the reasonable 
response is to make independent time requests from a volunteer-operated 
pool of public servers designed for host synchronization. Particularly 
on mobile, the compartmentalization of applications means that this 
'better' time will only be accessible to one application, and many 
applications may have this 'better time' requirement. These developers 
should be lobbying Apple and Google for better time, if they need it, 
not making many millions of calls to the NTP pool. To make things worse, 
I'm fairly sure that Apple's 'no background tasks' policy means that an 
application can't *maintain* its sense of time, so it would not surprise 
me if it fires off NTP requests every time it is focused, further 
compounding the burden.


Time is already available, and having every application query for its 
own time against a public resource doesn't seem very friendly. It 
certainly doesn't scale. If they are unsuccessful lobbying the OS, why 
not trust the time provided by the API calls they are surely doing to 
their own servers? Most HTTP responses include a timestamp; surely this 
is good enough for expiring Snaps. Or at least operate their own NTP 
infrastructure.
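
To illustrate the HTTP-timestamp point: a client that already makes HTTPS calls 
to its own backend can take coarse wall-clock time from the Date header of a 
response it is fetching anyway, instead of hitting the pool. A minimal Python 
sketch, with a made-up URL:

import email.utils
import urllib.request

# Sketch only: pull coarse time from the Date header of an API response.
# The URL is illustrative; accuracy is roughly a second plus request latency.
def coarse_server_time(url="https://api.example.com/"):
    with urllib.request.urlopen(url, timeout=5) as resp:
        date_hdr = resp.headers.get("Date")  # e.g. "Tue, 20 Dec 2016 18:16:00 GMT"
    if date_hdr is None:
        raise RuntimeError("response carried no Date header")
    return email.utils.parsedate_to_datetime(date_hdr)

if __name__ == "__main__":
    print(coarse_server_time())

That's plenty of accuracy for expiring content, and it adds zero extra requests.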


I'm sure that Snap had no malicious intent and commend them for their 
quick and appropriate response once the issue was identified, but I 
don't think this behaviour is very defensible. I for one was not harmed 
by the ~10x increase in load and traffic on my NTP pool node, but the 
100x increase if a handful of similar apps decided they 'need' more 
accurate time than the OS provides would be cause for concern, and I 
suspect a great many pool nodes would simply disappear, compounding the 
problem. Please make use of these and similar services as they are 
designed to be used, and as efficiently as possible, especially if you 
are responsible for millions of users / machines.


In a similar vein, I've always been curious what ratio of ICMP echo to 
DNS traffic Google sees at 8.8.8.8...


Keenan


On 2016-12-20 18:16, Tim Raphael wrote:

Exactly,

Also, they’re across Android and iOS, and getting parity of operations across 
those two OSs isn’t easy.
Better to just embed what they need inside the app if it is specialised enough.

- Tim


On 21 Dec. 2016, at 10:13 am, Emille Blanc  wrote:

Perhaps the host OSes to which Snapchat caters don't all have a decent NTP 
subsystem available?
I have vague recollections of some other software (I'm sure we all know which) 
that implemented its own malloc layer for every system it ran on, for less 
trivial reasons. ;)


From: NANOG [nanog-boun...@nanog.org] On Behalf Of Tim Raphael 
[raphael.timo...@gmail.com]
Sent: Tuesday, December 20, 2016 5:34 PM
To: Gary E. Miller
Cc: nanog@nanog.org
Subject: Re: Recent NTP pool traffic increase

This was my thought actually; Apple does offer some time services as part of 
the OS, but it’s becoming common with larger / more popular apps to provide some 
of these services internally.
Look at the FB app for example: there are a lot of “system” things they do 
themselves due to the ability to control specifics. Users don’t want to have to 
install a second “specialised app” for this either.

With regard to an ephemeral chat app requiring time sync, I can think of quite 
a few use cases and mechanisms in the app that might require time services.

- Tim



On 21 Dec. 2016, at 9:26 am, Gary E. Miller  wrote:

Yo valdis.kletni...@vt.edu!

On Tue, 20 Dec 2016 20:20:48 -0500
valdis.kletni...@vt.edu wrote:


On Tue, 20 Dec 2016 18:11:11 -0500, Peter Beckman said:

Mostly out of curiosity, what was the reason for the change in the
Snapchat code, and what plans does Snap have for whatever reason
the NTP change was put in place?

 From other comments in the thread, it sounds like the app was simply
linked against a broken version of a library

But why is a chat app doing NTP at all?  It should rely on the OS, or
a specialized app, to keep local time accurate.

RGDS
GARY
---
Gary E. Miller Rellim 109 NW Wilmington Ave., Suite E, Bend, OR 97703
  g...@rellim.com  Tel:+1 541 382 8588




Re: Dyn DDoS this AM?

2016-10-21 Thread Keenan Tims
I don't have a horse in this race, and haven't used it in anger, but 
Netflix released denominator to attempt to deal with some of these issues:


https://github.com/Netflix/denominator

Their goal is to support the highest common denominator of features 
among the supported providers.


Maybe that helps someone.

Keenan

On 2016-10-21 16:19, Niels Bakker wrote:

The point of outsourcing DNS isn't just availability of static
hostnames, it's the added services delivered, like returning different
answers based on source of the question, even monitoring your
infrastructure (or it reporting load into the DNS management system).

That is very hard to replicate with two DNS providers.


-- Niels.




Re: google search threshold

2016-02-29 Thread Keenan Tims
FWIW I have seen the captchas more often on IPv6 both from home and the office 
than when both networks were using a single shared IPv4; not sure if this is 
just related to chronology or a real effect. Once a month or so I seem to get 
them for a couple of days, then they go away.

No idea what's triggering it. It would be *really* helpful if Google could 
provide some useful technical details beyond a generic FAQ page. As it is I 
just get annoyed by it and have no way to troubleshoot or correct the constant 
false positives. How is Google detecting "robots"? My sense is that I tend to 
trigger the captcha thing when iterating similar search terms (particularly due 
to removal of the + operator and extremely poor "change my search terms because 
you think you know better than I do what I want to search for" behaviour. My 
search patterns haven't really changed since turning up IPv6 everywhere, so I 
have to think either the captcha trigger has gotten more aggressive, or somehow 
prefers to blacklist IPv6 users.

In any case, just going to IPv6 is definitely not a complete fix for this. It 
seems to be related to search behaviour and $blackbox_magic.

Keenan Tims
Stargate Connections

From: NANOG <nanog-boun...@nanog.org> on behalf of Philip Lavine via NANOG 
<nanog@nanog.org>
Sent: February 29, 2016 7:53 AM
To: Damian Menscher
Cc: nanog@nanog.org
Subject: Re: google search threshold

I have about 2000 users behind a single NAT. I have been looking at netflow, 
URL filter logs, IDS logs, etc. The traffic seems to be legit.

I am going to move more users to IPv6 and divide some of the subnets into 
different NATs and see if that alleviates the traffic load.
Thanks for the advice.
-Philip


  From: Damian Menscher <dam...@google.com>
 To: Philip Lavine <source_ro...@yahoo.com>
Cc: "nanog@nanog.org" <nanog@nanog.org>
 Sent: Friday, February 26, 2016 6:05 PM
 Subject: Re: google search threshold

On Fri, Feb 26, 2016 at 3:01 PM, Philip Lavine via NANOG <nanog@nanog.org> 
wrote:

Does anybody know what the threshold for google searches is before you get the 
captcha? I am trying to decide if I need to break up the overload NAT to a pool.


There isn't a threshold -- if you send automated searches from an IP, then it 
gets blocked (for a while).

So... this comes down to how much you trust your machines/users.  If you're a 
company with managed systems, then you can have thousands of users share the 
same IP without problems.  But if you're an ISP, you'll likely run into 
problems much earlier (since users like their malware).
Some tips:
- if you do NAT: try to partition users into pools so one abusive user can't 
get all your external IPs blocked
- if you have a proxy: make sure it inserts the X-Forwarded-For header, and is 
restricted to your own users
- if you're an ISP: IPv6 will allow each user to have their own /64, which 
avoids shared-fate from abusive ones

Damian (responsible for DDoS defense)

--
Damian Menscher :: Security Reliability Engineer :: Google :: AS15169
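
The pool-partitioning tip can be as simple as hashing each inside address onto 
one member of the NAT pool, so a single abusive subscriber only ever burns one 
external IP. A small Python sketch; the pool range and inside addresses are 
made up for illustration:

import hashlib
import ipaddress

# Illustrative only: deterministic inside-address -> NAT pool member mapping.
NAT_POOL = [ipaddress.ip_address("203.0.113.%d" % i) for i in range(1, 9)]

def external_ip(inside_addr: str) -> ipaddress.IPv4Address:
    digest = hashlib.sha256(inside_addr.encode()).digest()
    return NAT_POOL[int.from_bytes(digest[:4], "big") % len(NAT_POOL)]

for host in ("10.1.2.3", "10.1.2.4", "10.9.0.77"):
    print(host, "->", external_ip(host))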




Re: Devices with only USB console port - Need a Console Server Solution

2016-02-02 Thread Keenan Tims
On 2016-02-02 02:02, Bjørn Mork wrote:
> As you are probably aware, there are no standard
> USB-DB9 Console adapters.  They are all vendor specific.  But the
> cloning industry has created a few semi-standards based on specific
> chipsets.
This is not strictly true. There is a Communications Device Class (CDC
ACM) defined by the USB-IF that covers basic serial devices and most OSs
(even Windows! Though it does require a .inf file anyway) include a
driver for it. A rumour I heard recently was that its lack of popularity
was a result of Microsoft and Intel not wanting device developers to
ignore the advantages of USB and just use CDC to continue using their
old-school RS232 protocols for mice or whatever. There are also some
good reasons not to use it, such as needing flow control, strict timing, higher
data rates, or the added features available with custom chipsets, but it's
just fine for a serial console.

Exar, Microchip and others make simple and cheap USB-UART chips using
CDC ACM, and it's a very common application example for USB
microcontrollers.
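
To the OS it really is just a serial port. A minimal sketch, assuming a Linux
host, the pyserial package, and a CDC ACM console that enumerates as
/dev/ttyACM0 (device path and speed are guesses):

import serial  # pyserial

# Open the CDC ACM device like any other serial port and poke the console.
console = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
console.write(b"\r\n")                      # nudge the console to print a prompt
print(console.read(128).decode(errors="replace"))
console.close()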

USB console ports just add complexity where they offer no
advantage. KISS.

Keenan



Re: Binge On! - And So This is Net Neutrality?

2015-11-23 Thread Keenan Tims
I'm surprised you're supporting T-Mob here Owen. To me it's pretty
clear: they are charging more for bits that are not streaming video.
That's not neutral treatment from a policy perspective, and has no basis
in the cost of operating the network.

Granted, the network itself is neutral, but the purported purpose of NN
in my eyes is twofold: take away the influence of the network on user
and operator behaviour, and encourage an open market in network services
(both content and access). Allowing zero-rating based on *any* criteria
gives them a strong influence over what the end users are going to do
with their network connection, and distorts the market for network
services. What makes streaming video special to merit zero-rating?

I like Clay's connection to the boiling frog. Yes, it's "nice" for most
consumers now, but it's still distorting the market.

I'm also not seeing why they have to make this so complicated. If they
can afford to zero-rate high-bandwidth services like video and audio
streaming, clearly there is network capacity to spare. The user
behaviour they're encouraging with free video streaming is *precisely*
what the incumbents claimed was causing congestion to merit throttling a
few years ago, and still to this day whine about constantly. I don't
have data, but I would expect usage of this to align quite nicely with
their current peaks.

Why not just raise the caps to something reasonable or make it unlimited
across the board? I could even get behind zero-rating all
'off-peak-hours' use like we used to have for mobile voice; at least
that makes sense for the network. What they're doing though is product
differentiation where none exists; in fact the zero-rating is likely to
cause more load on the system than just doubling or tripling the users'
caps. That there seems to be little obvious justification for it from a
network perspective makes me very wary.

Keenan

On 2015-11-23 18:05, Owen DeLong wrote:
> 
>> On Nov 23, 2015, at 17:28 , Baldur Norddahl  
>> wrote:
>>
>> On 24 November 2015 at 00:22, Owen DeLong  wrote:
>>
>>> Are there a significant number (ANY?) streaming video providers using UDP
>>> to deliver their streams?
>>>
>>
>> What else could we have that is UDP based? Ah voice calls. Video calls.
>> Stuff that requires low latency and where TCP retransmit of stale data is
>> bad. Media without buffering because it is real time.
>>
>> And why would a telco want to zero rate all the bandwidth heavy media with
>> certain exceptions? Like not zero rating media that happens to compete with
>> some of their own services, such as voice calls and video calls.
>>
>> Yes sounds like net neutrality to me too (or not!).
>>
>> Regards,
>>
>> Baldur
> 
> All T-Mobile plans include unlimited 128kbps data, so a voice call is 
> effectively
> already zero-rated for all practical purposes.
> 
> I guess the question is: Is it better for the consumer to pay for everything 
> equally,
> or, is it reasonable for carriers to be able to give away some free data 
> without opening
> it up to everything?
> 
> To me, net neutrality isn’t as much about what you charge the customer for 
> the data, it’s about
> whether you prioritize certain classes of traffic to the detriment of others 
> in terms of
> service delivery.
> 
> If T-Mobile were taking money from the video streaming services or only 
> accepting
> certain video streaming services, I’d likely agree with you that this is a 
> neutrality
> issue.
> 
> However, in this case, it appears to me that they aren’t trying to give an 
> advantage to
> any particular competing streaming video service over the other, they aren’t 
> taking
> money from participants in the program, and consumers stand to benefit from 
> it.
> 
> If you see an actual way in which it’s better for everyone if T-Mobile 
> weren’t doing this,
> then please explain it. If not, then this strikes me as harmless and overall 
> benefits
> consumers.
> 
> Owen
> 


Re: OT - Small DNS appliances for remote offices.

2015-02-19 Thread Keenan Tims
If you have a lot of locations, as I believe Ray is looking for, all of
this is a manual process you need to do for each instance. That is slow
and inefficient. If you're doing more than a few, you probably want
something you can PXE boot for provisioning and manage with your
preferred DevOps tools. It also sounds like he wants to run anycast for
this service, so probably needs a BGP speaker and other site-specific
configuration that I assume is not covered by the cookie-cutter OSX
tools. Of course you could still do it this way with a Mac Mini running
some other OS, but why would you want to when there are plenty of other
mini-PC options that are more appropriate?
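
As a sketch of the anycast side of that: each node needs some health check the
BGP speaker can key off, so a broken resolver withdraws itself instead of
blackholing queries. A rough Python example; the target, query name and timeout
are illustrative, not anyone's actual setup:

#!/usr/bin/env python3
# Rough sketch of a health check for an anycast resolver node: send one A query
# for a well-known name to the local daemon and exit non-zero if no sane answer
# comes back. A wrapper (cron, the BGP speaker's process checks, etc.) could
# use the exit code to withdraw the anycast route.
import socket
import struct
import sys

def dns_healthy(server="127.0.0.1", name="example.com", timeout=2.0):
    # Minimal DNS query: ID 0x1234, RD set, one question, QTYPE=A, QCLASS=IN.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(query, (server, 53))
        resp, _ = s.recvfrom(512)
    except OSError:
        return False
    finally:
        s.close()
    return (len(resp) >= 12
            and resp[:2] == query[:2]                      # same transaction ID
            and (resp[2] & 0x80) != 0                      # QR bit set
            and (resp[3] & 0x0F) == 0                      # RCODE NOERROR
            and struct.unpack(">H", resp[6:8])[0] >= 1)    # at least one answer

if __name__ == "__main__":
    sys.exit(0 if dns_healthy() else 1)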

Also: With Apple dropping their Pro products and leaving customers in
the lurch, and no longer having any actual server hardware, I would have
very little confidence in their server software product's quality or
likely longevity. And of course they're mum on their plans, so it's
impossible to plan around if they decide to exit the market.

Keenan

On 02/19/2015 11:47 AM, Mel Beckman wrote:
 If your time is worth anything, you can't beat the Mac Mini, especially for a 
 branch office mission-critical application like DNS.
 
 I just picked up a Mini from BestBuy for $480. I plugged it in, applied the 
 latest updates, purchased the MacOSX Server component from the Apples Store 
 ($19), and then via the Server control panel enabled DNS with forwarding.
 
 Total time from unboxing to working DNS: 20 minutes.
 
 The Server component smartly ships with all services disabled, in contrast to 
 a lot of Linux distros, so it's pretty secure out of the box. You can harden 
 it a bit more with the built-in PF firewall. The machine is also IPv6 ready 
 out of the box, so my new DNS server automatically services both IPv4 and 
 IPv6 clients.
 
 You get Apple's warranty and full support. Any Apple store can do testing and 
 repair.
 
 And with a dual-core 1.4GHz I5 and 4GB memory, it's going to handle loads of 
 DNS requests.
 
 Of course, if your time is worth little, spend a lot of time tweaking slow, 
 unsupported, incomplete solutions.
 
  -mel
  
 On Feb 19, 2015, at 11:32 AM, Denys Fedoryshchenko de...@visp.net.lb
  wrote:
 
 On 2015-02-19 18:26, valdis.kletni...@vt.edu wrote:
 On Thu, 19 Feb 2015 14:52:42 +, David Reader said:
 I'm using several to connect sensors, actuators, and such to a private
 network, which it's great for - but I'd think at least twice before 
 deploying
 one as a public-serving host in user-experience-critical role in a remote
 location.
 I have a Pi that's found a purpose in life as a remote smokeping sensor and
 related network monitoring, a task it does quite nicely.
 Note that they just released the Pi 2, which goes from the original 
 single-core
 ARM V6 to a quad-core ARM V7, and increases memory from 256M to 1G. All at 
 the
 same price point.  That may change the calculus. I admit not having gotten 
 one
 in hand to play with yet.
 Weird thing - it still has Ethernet over ugly USB 2.0
 That kills any interest in running it for any serious networking applications.

 ---
 Best regards,
 Denys
 


Re: wifi blocking [was Re: Marriott wifi blocking]

2014-10-08 Thread Keenan Tims
There is a provision in the regulations somewhere that allows
underground/tunnel transmitters on licensed bands without a license,
provided certain power limits are honoured outside of the tunnel.
Perhaps they are operating under these provisions?

K

On 10/08/2014 02:11 PM, William Herrin wrote:
 On Wed, Oct 8, 2014 at 4:37 PM, joel jaeggli joe...@bogus.com wrote:
 On 10/8/14 1:29 PM, Larry Sheldon wrote:
 On 10/8/2014 08:47, William Herrin wrote:
 BART would not have had an FCC license. They'd have had contracts with
 the various phone companies to co-locate equipment and provide wired
 backhaul out of the tunnels. The only thing they'd be guilty of is
 breach of contract, and that only if the cell phone companies decided
 their behavior was inconsistent with the SLA..

 OK that makes more sense than the private answer I got from Roy.  I
 wondered why the FCC didn't take action if there was a license violation.

 http://www.nytimes.com/2012/03/03/technology/fcc-reviews-need-for-rules-to-interrupt-wireless-service.html?_r=0
 
 From the article: Among the issues on which the F.C.C. is seeking
 comment is whether it even has authority over the issue.
 
 Also: The BART system owns the wireless transmitters and receivers
 that allow for cellphone reception within its network.
 
 I'm not entirely clear how that works.
 
 Regards,
 Bill Herrin
 
 
 


Re: Belkin Router issues this morning?

2014-10-07 Thread Keenan Tims
While we weren't really impacted by this issue, my understanding is that
the Belkin devices ping 'heartbeat.belkin.com' periodically. If their
pings fail, they do DNS redirection to the device's configuration
interface and alert the user of the error.

It appears that this server went down or had some sort of network issue,
causing the redirect to kick in and users to be unable to access the
Internet.

Any solution that either a) avoids the router's DNS redirection or b)
causes its pings to succeed would restore service.

It looks like Belkin has now re-implemented this service on top of a
DNS-cluster of AWS instances, so I think users are probably back online
now, as the IPs in the initial reports were not AWS and are no longer in
the DNS.

Workarounds include:

a) Add the IP(s) of the heartbeat server as loopbacks on a server/router
within your (service provider) network that will respond to the echoes
for your customers
b) Add the hostname 'heartbeat.belkin.com' to the router's host file as
127.0.0.1
c) Modify the DNS configuration of client machines or the Belkin DHCP server
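
For anyone triaging, here is a quick Python check of whether the heartbeat host
resolves and answers pings from your vantage point (hostname from the thread;
the Linux ping flags and timeouts are just illustrative):

import socket
import subprocess
import sys

HOST = "heartbeat.belkin.com"

try:
    addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(HOST, None, socket.AF_INET)})
except socket.gaierror as e:
    sys.exit("DNS lookup for %s failed: %s" % (HOST, e))

print("%s resolves to: %s" % (HOST, ", ".join(addrs)))

for addr in addrs:
    # One ICMP echo per address; relies on the system ping binary (Linux flags).
    ok = subprocess.call(["ping", "-c", "1", "-W", "2", addr],
                         stdout=subprocess.DEVNULL) == 0
    print("%s: %s" % (addr, "reachable" if ok else "no reply"))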

K

On 10/07/2014 03:27 PM, John Neiberger wrote:
 Sounds like it might have been a DNS issue of some sort. The end result was
 that the customer routers couldn't reach their heartbeat server, which made
 them think they weren't on the net. The routers would then be helpful and
 redirect all customer port 80 traffic to the router's configuration page to
 correct the problem.
 
 It was a big mess that hopefully doesn't happen again. They've been making
 changes to their DNS this afternoon. Looks like they finally have it
 straightened out.
 
 John
 
 On Tue, Oct 7, 2014 at 10:05 AM, Steve Atkins st...@blighty.com wrote:
 

 On Oct 7, 2014, at 8:34 AM, Justin Krejci jkre...@usinternet.com wrote:

 https://twitter.com/search?q=%23belkin

 Sounds like a bad firmware update most likely.
 Presumably the Belkin routers perform caching DNS for the LAN clients
 for if the LAN clients use alternate DNS servers (OpenDNS, Google, your
 ISPs, etc) there are no longer any issues for those devices, as reported by
 several random people on the Internet.

 Over on outages, someone mentioned that heartbeat.belkin.com was
 unreachable from some areas, and that was causing their routers to shut
 down.

 Cheers,
   Steve




Re: wifi blocking [was Re: Marriott wifi blocking]

2014-10-07 Thread Keenan Tims
I don't think it changes much. Passive methods (ie. Faraday cage) would
likely be fine, as would layer 8 through 10 methods.

Actively interfering with the RF would probably garner them an even
bigger smackdown than they got here, as these are licensed bands where
the mobile carrier is the primary or secondary user. [name of company]
has no right to even use the frequencies in question.

Seems pretty consistent to me.

K

On 10/07/2014 05:28 PM, Larry Sheldon wrote:
 I have a question for the company assembled:
 
 Suppose that instead of [name of company] being offended by people using 
 their own data paths instead of the pricey choice offered, [name of 
 company] took the position that people should use the voice telephone 
 service they offered and block cell phone service on (and near) their 
 property.
 
 What would change in the several arguments that have been presented?
 


Re: Marriott wifi blocking

2014-10-03 Thread Keenan Tims
 The question here is what is authorized and what is not.  Was this to protect 
 their network from rogues, or protect revenue from captive customers.  

I can't imagine that any 'AP-squashing' packets are ever authorized,
outside of a lab. The wireless spectrum is shared by all, regardless of
physical locality. Just because it's your building doesn't mean you own the
spectrum.

My reading of this is that these features are illegal, period. Rogue AP
detection is one thing, and disabling them via network or
administrative (ie. eject the guest) means would be fine, but
interfering with the wireless is not acceptable per the FCC regulations.

Seems like common sense to me. If the FCC considers this 'interference',
which it apparently does, then devices MUST NOT intentionally interfere.

K


Re: Muni Fiber and Politics

2014-07-22 Thread Keenan Tims
To take this in a slightly different direction, as long as we're looking
for pies in the sky, has anyone considered the bundling problem?

If we assume that a residential deployment pulls one strand (or perhaps
a pair) to each prem, similar to current practice for POTS, there's a
resource allocation problem if I want to buy TV services from provider A
and Internet services from provider B (or maybe I want to provision a
private WAN to my place of work). This could be done with WDM equipment
by the muni in the L1 model, or at L2, but it isn't something that's
often mentioned. I suspect L2 wins here, at least on cost.

Or are we going forward under the assumption that all of this will be
rolled into the Internets and delivered that way, and that competition in
that space will be sufficient?

K


On 07/22/2014 02:00 PM, Owen DeLong wrote:
 The beauty is that if you have an L1 infrastructure of star-topology fiber from
  a serving wire center, each ISP can decide active-E or PON or whatever
  on their own.
 
 That's why I think it's so critical to build out colo facilities with SWCs on 
 the other
 side of the MMR as the architecture of choice. Let anyone who wants to be an
 ANYTHING service provider (internet, TV, phone, whatever else they can 
 imagine)
 install the optical term at the customer prem and whatever they want in the 
 colo
 and XC the fiber to them on a flat per-subscriber strand fee basis that 
 applies to
 all comers with a per-rack price for the colo space.
 
 So I think we are completely on the same page now.
 
 Owen
 
 On Jul 22, 2014, at 13:37 , Ray Soucy r...@maine.edu wrote:
 
 I was mentally where you were a few years ago with the idea of having
 switching and L2 covered by a public utility but after seeing some
 instances of it I'm more convinced that different ISPs should use
 their own equipment.

 The equipment is what makes the speed and quality of service.  If you
 have shared infrastructure for L2 then what exactly differentiates a
 service?  More to the point; if that equipment gets oversubscribed or
 gets neglected who is responsible for it?  I don't think the
 municipality or public utility is a good fit.

 Just give us the fiber and we'll decided what to light it up with.

 BTW I don't know why I would have to note this, but of course I'm
 talking about active FTTH.  PON is basically throwing money away if
 you look at the long term picture.

 Sure, having one place switch everything and just assign people to the
 right VLAN keeps trucks from rolling for individual ISPs, but I don't
 think giving up control over the quality of the service is in the
 interest of an ISP.  What you're asking for is basically to have a
 competitive environment where everyone delivers the same service.
 If your service is slow and it's because of L2 infrastructure, no
 change in provider will fix that the way you're looking to do it.



 On Tue, Jul 22, 2014 at 2:26 PM, Scott Helms khe...@zcorum.com wrote:
 One of the main problems with trying to draw the line at layer 1 is that its
 extremely inefficient in terms of the gear.  Now, this is in large part a
 function of how gear is built and if a significant number of locales went in
 this direction we _might_ see changes, but today each ISP would have to
 purchase their own OLTs and that leads to many more shelves than the total
 number of line cards would otherwise dictate.  There are certainly many
 other issues, some of which have been discussed on this list before, but
 I've done open access networks for several cities and _today_ the cleanest
 situations by far (that I've seen) had the city handling layer 1 and 2 with
 the layer 2 hand off being Ethernet regardless of the access technology
 used.


 Scott Helms
 Vice President of Technology
 ZCorum
 (678) 507-5000
 
 http://twitter.com/kscotthelms
 


 On Tue, Jul 22, 2014 at 2:13 PM, Ray Soucy r...@maine.edu wrote:

 IMHO the way to go here is to have the physical fiber plant separate.

 FTTH is a big investment.  Easy for a municipality to absorb, but not
 attractive for a commercial ISP to do.  A business will want to
 realize an ROI much faster than the life of the fiber plant, and will
 need assurance of having a monopoly and dense deployment to achieve
 that.  None of those conditions apply in the majority of the US, so
 we're stuck with really old infrastructure delivering really slow
 service.

 Municipal FTTH needs to be a regulated public utility (ideally at a
 state or regional level).  It should have an open access policy at
 published rates and be forbidden from offering lit service on the
 fiber (conflict of interest).  This covers the fiber box in the house
 to the communications hut to patch in equipment.

 Think of it like the power company and the separation between
 generation and transmission.

 That's Step #1.

 Step #2 is finding an ISP to make 

RE: Verizon Public Policy on Netflix

2014-07-10 Thread Keenan Tims
 A little experimentation validates this:  Traffic from my FIOS home router
 flows through alter.net and xo.net before hitting netflix. Now alter.net is
 owned by Verizon, but when I run traceroutes, I see all the delays
 starting halfway through XO's network -- so why is nobody pointing a finger
 at XO?

Traceroute is pretty meaningless for analyzing if there is congestion or not. 
The presence of delays could mean many things that don't indicate congestion. 
Most large networks are well managed internally; congestion almost always 
appears at network edges.

In this case, the assertion is that XO's link to Verizon is congested. If that 
is in fact the case, it's because Verizon is running it hot. Verizon is 
(presumably) an XO customer, and it is on them to increase capacity or do 
network engineering such that their links are upgraded or traffic shifted 
elsewhere. It's worth pointing out that if Verizon is running a transit link 
hot like this, Netflix is not the only traffic that's going to be impacted, and 
that is in no way Netflix' fault. Even if it is a peering link, their dispute 
should be with XO.

What people seem to miss here is that there is no other out for $ISP than a) 
increasing transit capacity, b) peering sufficiently with $CONTENT, or c) allowing 
performance to degrade (i.e. not giving customers what they are paying for). If 
we take c) off the table, it tells us that settlement-free peering would be the 
preferred alternative, as it would usually cost less than buying more transit.

 I'll also note that traffic to/from google, and youtube (also google of
 course) seems to flow FIOS - alter.net - google -- with no delays.  So again,
 why aren't Netflix and Verizon pointing their fingers at XO.

Verizon (apparently) refuses to peer with Netflix, since Netflix has an open 
policy. They do, however, appear to peer with Google. Why?

 This is the classic asymmetric peering situation - which raises a legitimate
 question of who's responsible for paying for the costs of transit service and
 interconnections?

If this were a question of Verizon transiting traffic for Netflix 
asymmetrically, then sure. However, they are terminating the traffic in 
question; the only transit is to a paying Verizon customer on Verizon 
equipment, and this is the part of the network their customer pays them to maintain.

 And, of course, one might ask why Netflix isn't buying a direct feed into
 either alter.net or FIOS POPs, and/or making use of a caching network like
 Akamai, as many other large traffic sources do on a routine basis.

They likely can already meet easily at many points across the country, with 
little cost to either party. It is quite obvious that Netflix is very open to 
doing so. Why doesn't Verizon want to play? Apparently because they think they 
can successfully convince users that the problem is Netflix' and not Verizon's. 
Content peering with eyeballs should be a no-brainer - it saves both parties 
plenty of money and improves performance across the board. Netflix seems 
willing to bring their traffic to Verizon's edge for free, all Verizon needs to 
do is turn up the ports and build whatever capacity they would need to build 
anyway regardless of where the traffic comes from or what it is. Or, if the 
power and space is cheaper than the transport from where they meet (or to where 
they can meet), they can install Netflix' appliances. They always have the 
option of just buying more transit too, but the bottom line is that this 
expansion is required to carry their customer's traffic, it's not something 
they would be trying to charge content/transit for if it were organic traffic 
growth from diverse sources, they would simply upgrade their network like the 
rest of us.

Keenan

 
 Personally, I think Netflix is screwing the pooch on this one, and pointing 
 the
 finger at Verizon as a convenient fall guy.
 
 Miles Fidelman
 
 
 
 
 
 
 
 --
 In theory, there is no difference between theory and practice.
 In practice, there is.    Yogi Berra



RE: Observations of an Internet Middleman (Level3) (was: RIP

2014-05-15 Thread Keenan Tims
Their existing agreements notwithstanding, I believe the problem many have with 
Comcast's balanced ratio requirement is that they have 10s of millions of 
customers, all or almost all of whom are sold unbalanced services. In 
addition the majority of their customers are end users, who are also going to 
bias toward heavily inbound patterns (which is one of the reasons for the 
asymmetric connections in the first place).

As primarily an eyeball network with a token (8000 quoted) number of transit 
customers it does not seem reasonable for them to expect balanced ratios on 
peering links. They are, effectively by their own choice of market, always 
going to have a heavily inbound traffic ratio. It seems to me that requiring 
anything else is basically a way to give the finger to a potential peer while 
claiming to be neutral. I find it hard to believe that Comcast would be running 
many balanced links (peering or transit) at all, except perhaps to other 
consumer ISPs.

In today's environment there are inevitably going to be heavily inbound and 
heavily outbound networks. Content networks don't have any problem with SFI 
despite their ratio. Eyeball networks do. Both are in the position they are 
because of the line of business they have respectively chosen. But the eyeball 
network is the only one that is explicitly and exclusively paid *to carry 
traffic*. IMO if the content network is willing to bring their content, for 
free, to the eyeball network's edge, this is to the benefit of the eyeball 
network more than content, in the absence of other factors.

In this case, that factor appears to me to be an ad-hoc oligopoly. If customers 
had options and an easy path to switch, they would not tolerate this behaviour 
when they can switch to a competitor who provides good service for the bits 
they request. Content would gain a lot of leverage in this situation as they 
could help educate customers on alternatives, automatically and without 
paying a support agent. Of course we should be careful not to let the opposite 
situation occur either...

Keenan


From: NANOG nanog-boun...@nanog.org on behalf of Scott Helms 
khe...@zcorum.com
Sent: May 15, 2014 12:54 PM
To: Joe Greco
Cc: nanog@nanog.org
Subject: Re: Observations of an Internet Middleman (Level3) (was: RIP

On Thu, May 15, 2014 at 3:05 PM, Joe Greco jgr...@ns.sol.net wrote:

  So by extension, if you enter an agreement and promise to remain balanced
  you can just willfully throw that out and abuse the heck out of it? Where
  does it end? Why even bother having peering policies at all then?

 It doesn't strike you as a ridiculous promise to extract from someone?



You could certainly say it's ridiculous, but it is (and has been) the basis
for almost all peering arrangements in North America for several decades, in
my personal experience.  I believe that the practice came from the telco
world, when large telephone companies would exchange traffic without billing
each other so long as the traffic was relatively balanced.  You can imagine
AT&T and Sprint exchanging toll traffic: so long as things were fairly
close, there wasn't a big imbalance of traffic to worry the financial folks
over, and thus no need to do exact accounting on each minute, which was
technically challenging 30 years ago.


Hi I'm an Internet company.  I don't actually know what the next big
 thing next year will be but I promise that I won't host it on my network
 and cause our traffic to become lopsided.

 Wow.  Is that what you're saying?


That's not what happened.  What happened is that Netflix went to Level 3
who already had a peering arrangement with Comcast which was built around
normal (roughly) balanced peering.  It had been in place for years before
Netflix signed with Level 3 and worked, and was contracted this way, around
relatively balanced traffic.  Once Netflix started sending most of their
traffic destined to Comcast end users through Level 3, things got out of
balance.  Netflix still has a contract with Cogent (I believe that is the
correct one) or other provider that had previously been handling the bulk
of the Comcast directed traffic, but the Level 3 connection was cheaper for
Netflix.  If anyone actually acted in bad faith it was, IMO, Level 3.



  To use an analogy, if you and I agree to buy a car together and agree to
  switch off who uses it every other day, can I just say "forget our
  agreement - I'm just going to drive the car myself every single day - it's
  all mine"?

 Seems like a poor analogy since I'm pretty sure both parties on a peering
 can use the port at the same time.


His point was you can't simply change a contract without having both
parties involved.  Level 3 tried to do just that.


 ... JG
 --
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and]
 then I
 won't contact you again. - Direct