Re: Pilot Fiber, Chicago Area: Impressions?

2020-03-31 Thread Josh Hoppes
Employer has been using them for transit in Chicago for a while now.
There was a case where they had a weird detour path through a router
on the east coast for a prefix ultimately destined for the west coast,
but once we notified them they quickly (same day) got it resolved.
Been pretty happy with them so far.

On Tue, Mar 31, 2020 at 9:42 AM Shawn Ritchie  wrote:
>
> Pricing looks good, considering them for cheap backhaul as a tertiary path. 
> Anybody have experience with them for just IP transit?
>
> --
> Shawn


Re: Partial vs Full tables

2020-06-08 Thread Josh Hoppes
Juniper Networks has also tried using Bloom filters.

https://patents.google.com/patent/US20170187624

I think the QFX10002 was the first product they made which used this approach.

https://forums.juniper.net/t5/Archive/Juniper-QFX10002-Technical-Overview/ba-p/270358
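The core idea, as I understand the patent, is keeping one Bloom filter per prefix length so the lookup can cheaply rule out lengths that certainly hold no match before doing an exact-match probe. A toy Python sketch of that idea (my own illustration, not Juniper's implementation; the hash scheme and sizes are arbitrary):

```python
import hashlib

class Bloom:
    """Tiny Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# One filter per IPv4 prefix length; the exact table holds the real routes.
filters = {plen: Bloom() for plen in range(33)}
routes = {}  # (prefix_int, plen) -> next hop

def insert(prefix_int, plen, nh):
    routes[(prefix_int, plen)] = nh
    filters[plen].add((prefix_int, plen))

def lookup(addr):
    # Probe from most- to least-specific; the Bloom filter lets us skip
    # lengths with no possible match (false positives are caught by the
    # exact-table check, so the answer is always correct).
    for plen in range(32, -1, -1):
        key = (addr >> (32 - plen) << (32 - plen), plen)
        if key in filters[plen] and key in routes:
            return routes[key]
    return None

insert(0x0A000000, 8, "nh-A")    # 10.0.0.0/8
insert(0x0A010000, 16, "nh-B")   # 10.1.0.0/16
print(lookup(0x0A010101))        # most specific wins: nh-B
print(lookup(0x0A020202))        # falls back to the /8: nh-A
```

In hardware the win is that the filters fit in fast on-chip memory, so most off-chip exact-match probes are avoided.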

On Mon, Jun 8, 2020 at 1:45 PM William Herrin  wrote:
>
> On Mon, Jun 8, 2020 at 10:52 AM  wrote:
> > Every "fast" FIB implementation I'm aware of takes a set of prefixes, 
> > stores them in some sort of data structure, which can perform a 
> > longest-prefix lookup on the destination address and eventually get to an 
> > actual physical interface for forwarding that packet.  Exactly how those 
> > prefixes are stored and exactly how load-balancing is performed is *very* 
> > platform specific, and has tons of variability.  I've worked on at least a 
> > dozen different hardware based forwarding planes, and not a single pair of 
> > them used the same set of data structures and design tradeoffs.
>
> Howdy,
>
> AFAIK, there are two basic approaches: TCAM and Trie.  You can get off
> into the weeds fast dealing with how you manage that TCAM or Trie, and
> the Trie-based implementations have all manner of caching strategies
> to speed them up, but the basics go back to TCAM and Trie.
>
> TCAM (ternary content addressable memory) is a sort of tri-state SRAM
> with a special read function. It's organized in rows and each bit in a
> row is set to 0, 1 or Don't-Care. You organize the routes in that
> memory in order from most to least specific with the netmask expressed
> as don't-care bits. You feed the address you want to match in to the
> TCAM. It's evaluated against every row in parallel during that clock
> cycle. The TCAM spits out the first matching row.
>
> A Trie is a tree data structure organized by bits in the address.
> Ordinary memory and CPU. Log-nish traversal down to the most specific
> route. What you expect from a tree.
>
> Or have I missed one?
>
> Regards,
> Bill Herrin
>
>
> --
> William Herrin
> b...@herrin.us
> https://bill.herrin.us/
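Bill's trie description above can be sketched as a binary trie keyed on address bits, with the search remembering the most specific route seen on the way down. A toy Python illustration (real forwarding planes use compressed multibit tries in hardware; everything here is for exposition only):

```python
class TrieNode:
    __slots__ = ("children", "route")
    def __init__(self):
        self.children = [None, None]  # one child per bit value
        self.route = None             # set if a prefix ends at this node

root = TrieNode()

def insert(prefix_int, plen, nh):
    # Walk/build the path for the prefix's first plen bits.
    node = root
    for i in range(plen):
        bit = (prefix_int >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.route = nh

def longest_prefix_match(addr):
    # Descend bit by bit, keeping the most specific route seen so far.
    node, best = root, None
    for i in range(32):
        if node.route is not None:
            best = node.route
        bit = (addr >> (31 - i)) & 1
        node = node.children[bit]
        if node is None:
            break
    else:
        if node.route is not None:
            best = node.route
    return best

insert(0xC0A80000, 16, "ge-0/0/0")       # 192.168.0.0/16
insert(0xC0A80100, 24, "ge-0/0/1")       # 192.168.1.0/24
print(longest_prefix_match(0xC0A80101))  # 192.168.1.1 -> ge-0/0/1
print(longest_prefix_match(0xC0A8FF01))  # 192.168.255.1 -> ge-0/0/0
```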


Re: Vyatta to VyOS

2013-12-23 Thread Josh Hoppes
Ubiquiti has been contributing to VyOS, so I'm assuming it is the
version they are using as the upstream for their code.

On Mon, Dec 23, 2013 at 1:18 PM, Nolan Rollo  wrote:
> I wonder how Ubiquiti Networks is going to react to this since their EdgeMax 
> Routers run a fork of the Vyatta code (EdgeOS).
>
> http://community.ubnt.com/t5/EdgeMAX/Vyatta-Community-Edition-dead/m-p/591487/highlight/true#M16059
>
> It looks like there is a post in the forum where a UBNT employee said that 
> they were working directly with the VyOS guys. In this case I wonder what 
> other commercial vendor is going to jump on the open source bandwagon.
>
> -Original Message-
> From: Scott Howard [mailto:sc...@doc.net.au]
> Sent: Monday, December 23, 2013 1:45 PM
> To: Ray Soucy
> Cc: NANOG
> Subject: Re: Vyatta to VyOS
>
> Who wants to tell them that it's really 2013?
>
> News
> 22 Dec *2012*
> Version 1.0.0 (hydrogen) released.
>
>
>   Scott
>
>
>
> On Mon, Dec 23, 2013 at 7:18 AM, Ray Soucy  wrote:
>
>> Many here might be interested,
>>
>> In response to Brocade not giving the community edition of Vyatta much
>> attention recently, some of the more active community members have
>> created a fork of the GPL code used in Vyatta.
>>
>> It's called VyOS, and yesterday they released 1.0.
>>
>> http://vyos.net/
>>
>> I've been playing with the development builds and it seems to be every
>> bit as stable as the Vyatta releases.
>>
>> Will be interesting to see how the project unfolds :-)
>>
>> --
>> Ray Patrick Soucy
>> Network Engineer
>> University of Maine System
>>
>> T: 207-561-3526
>> F: 207-561-3531
>>
>> MaineREN, Maine's Research and Education Network www.maineren.net
>>
>



Re: turning on comcast v6

2013-12-31 Thread Josh Hoppes
> Now, boss man comes in and has a new office opening up.  Go grab the r1 box
> out of the closet, you need to upgrade the code and reconfigure it.  Cable
> it up to your PC with a serial port, open some sort of terminal program
> so you can catch the boot and password recover it.  Plug its ethernet into
> your lan, as you're going to need to tftp down new config, and turn it on.


Why are you putting a router that you know needs to be reconfigured
onto a production network? This could backfire regardless of IPv6;
you could have a similar issue if the router were performing DHCP
from a locally configured pool. If someone did this and complained to
me, I would tell them they just learned a lesson and now know better
than to connect equipment with an existing configuration to a
production network without doing a full review first.



Re: Is Level(3) AS3356 absorbing GBLX AS3549

2013-01-24 Thread Josh Hoppes
Yep,

http://www.nanog.org/meetings/nanog56/presentations/Monday/mon.lightning.siegel.pdf


On Thu, Jan 24, 2013 at 6:03 AM, Christopher J. Pilkington wrote:

> Overnight BGPmon reports that 3356 was adjacent to our AS, but it is
> not. Only plausible situation I can think of is Level(3) absorbing the
> 3549 GlobalCrossing AS.
>
> Is this going on? Or am I suffering from insufficient caffeination?
>
> -cjp
>
>


Re: TOR fiber patch panels

2013-01-31 Thread Josh Hoppes
Have you looked at anything from Clearfield? Just as an example,
something like this:

http://www.clearfieldconnection.com/products/panels/fieldsmart-small-count-delivery-scd-1ru-rack-mount-cabinet-mount-panel.html

On Thu, Jan 31, 2013 at 11:44 AM, Chuck Anderson  wrote:
> I'm looking for better Top-Of-Rack fiber patch panels than the ones
> I've been using up to this point.  I'm looking for something that is
> 1U, holds 12 to 24 strands of SC, ST, or LC, has fiber jumper
> management rings, and has a door that doesn't interfere with the U
> below (a server might be mounted immediately below the fiber patch
> panel).  I prefer one that doesn't have a sliding mechanism, because
> I've had issues with fiber installers not installing those properly,
> causing fiber to be crunched and broken when the tray is slid out/in
> during patching.  Of course, I would still like one that is easy to
> get your fingers into to install and remove fiber jumpers.
>
> Does such a thing exist?  What are people's favorite fiber patch
> panels?
>
> Thanks.
>



Re: Common operational misconceptions

2012-02-16 Thread Josh Hoppes
2012/2/16 Masataka Ohta :
> Andreas Echavez wrote:
>> *How NAT breaks end-to-end connectivity (fun one..., took me
>>  hours to explain to an old boss why doing NAT at the ISP level
>>  was horrendously wrong)
>
> That's another misconception.
>
> While NAT breaks the end to end connectivity, it can be
> restored by end systems by reversing translations by NAT,
> if proper information on the translations are obtained
> through some protocol such as UPnP.
>
>                                        Masataka Ohta
>

UPnP can scale to about the size of an average home network, but it's
worth jack squat at the ISP level when NAT44 comes into play. UPnP is
not an ISP-grade solution; it's a consumer one.



Re: Shim6, was: Re: filtering /48 is going to be necessary

2012-03-12 Thread Josh Hoppes
On Mon, Mar 12, 2012 at 8:01 PM, William Herrin  wrote:
> But suppose you had a TCP protocol that wasn't statically bound to the
> IP address by the application layer. Suppose each side of the
> connection referenced each other by name, TCP expected to spread
> packets across multiple local and remote addresses, and suppose TCP,
> down at layer 4, expected to generate calls to the DNS any time it
> wasn't sure what addresses it should be talking to.
>
> DNS servers can withstand the update rate. And the prefix count is
> moot. DNS is a distributed database. It *already* easily withstands
> hundreds of millions of entries in the in-addr.arpa zone alone. And if
> the node gets even moderately good at predicting when it will lose
> availability for each network it connects to and/or when to ask the
> DNS again instead of continuing to try the known IP addresses you can
> get to where network drops are ordinarily lossless and only
> occasionally result in a few packet losses over the course of a a
> single-digit number of seconds.
>
> Which would be just dandy for mobile IP applications.

DNS handles many millions of records, sure, but that's because it
was designed with caching in mind. DNS changes are rarely made at the
rapid pace I think you are suggesting, except by those who can stand
the brunt of 5-minute time-to-live values. I think it would be insane
to try to set a TTL much lower than that, but even that would seem to
work counter to the idea of sub-10-second loss. If you cut down
caching as significantly as I think this idea would require, I would
expect scaling to take a plunge.
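Back-of-envelope arithmetic shows why short TTLs hurt: every resolver still interested in a name has to come back to the authority roughly once per TTL. The numbers below are illustrative assumptions, not measurements:

```python
# Authoritative-server query load as the cache TTL shrinks.
# All figures are made up for illustration.

def authority_qps(resolvers: int, ttl_seconds: float) -> float:
    """Each interested resolver re-queries roughly once per TTL."""
    return resolvers / ttl_seconds

resolvers = 1_000_000  # recursive resolvers tracking one mobile host's name
for ttl in (3600, 300, 10, 1):
    print(f"TTL {ttl:>5}s -> ~{authority_qps(resolvers, ttl):,.0f} queries/sec")
```

Dropping from a 5-minute TTL to a few seconds multiplies authoritative load by more than an order of magnitude, before counting the dynamic-update traffic itself.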

Also consider the significantly increased load on DNS servers from
handling the constant stream of dynamic DNS updates needed to make
this possible, and that you have to find some reliable trust
mechanism to handle those updates, because without one you've just
made man-in-the-middle attacks a little bit easier.

That said, I might be misunderstanding something. I would like to see
that idea elaborated.



Re: Shim6, was: Re: filtering /48 is going to be necessary

2012-03-14 Thread Josh Hoppes
On Wed, Mar 14, 2012 at 1:45 PM, Owen DeLong  wrote:
> I fully expect them to develop an HDCP-or-equivalent enabled protocol to run 
> over IP Multicast.
>
> Do you have any reason you believe that won't happen?
>
> Owen

I'm pretty sure it's already in place for IPTV solutions.



Re: juniper mx80 vs cisco asr 1000

2012-01-19 Thread Josh Hoppes
I would also be interested in people's experiences with the MX80
platform. We're currently considering the MX40 license level of the
MX80 platform for a project. We have had good experiences with the
ASR1002 but want to keep our options open.

On Thu, Jan 19, 2012 at 2:45 PM, PC  wrote:
> Which specific models are you looking at?
>
> Both contain a large product range.
>
> On Thu, Jan 19, 2012 at 1:10 PM, jon Heise  wrote:
>
>> Does anyone have any experience with these two routers, we're looking to
>> buy one of them but i have little experience dealing with cisco routers and
>> zero experience with juniper.
>>



Re: juniper mx80 vs cisco asr 1000

2012-01-20 Thread Josh Hoppes
I certainly agree they have very different applications, and hopefully
that will help those looking for this kind of insight.

On Fri, Jan 20, 2012 at 3:54 PM, Saku Ytti  wrote:
> On (2012-01-20 09:50 -0700), PC wrote:
>
>> Juniper has some very aggressive pricing on mx80 bundles license-locked to
>> 5gb, which are cheaper and blow the performance specifications of the
>> equivalent low end ASR1002 out of the water for internet edge BGP
>> applications.  Unlike the ASR, a simple upgrade license can unlock the
>> boxes full potential.
>
> ASR1002 list price is 18kUSD, MX5 list price is 29.5kUSD. Upgrade license
> for MX5 -> MX80 literally costs more than new MX80 (with all but jflow
> license, two psu and 20SFP MIC)
>
> Sure MX5 will do line rate on 20 SFP ports, vastly more than ASR1002, but
> this is little consolation if you need high touch services such as NAPT,
> IPSEC etc. So applications for these boxes are quite different.
>
> --
>  ++ytti
>



Re: XBOX 720: possible digital download mass service.

2012-01-28 Thread Josh Hoppes
I've seen this discussion show up in a number of venues lately. I'm
not at all surprised by the trend, as I've been using Steam for a
few years now. I expect they will take a similar path: continue to
sell physical media with keys that tie the game to an account, and do
staged downloads of encrypted data which is unlocked at release time.

The biggest content for games is really art assets, and much of that
work is done months ahead of release and unlikely to change, while
fine tuning and game logic (binaries) are small enough that staging
downloads in tiers should be easy. There is also the system Blizzard
uses for World of Warcraft, where the game can stream content down
while playing.

Most of these publishers/developers already have a pretty good grasp
of the capabilities at their disposal thanks to the DLC model they
have now; they will just be going an order of magnitude larger on the
downloads. I wonder how many will also attempt to leverage P2P models
to assist CDNs: cheaper for them, and maybe even a revenue generator
for ISPs charging for transfer overages.



the alleged evils of NAT, was Rate of growth on IPv6 not fast enough?

2010-04-27 Thread Josh Hoppes
I'll preface this by saying I'm more of an end user than a network
administrator, but I feel I have a good enough understanding of the
protocols and of network administration to submit my two cents.

The issue I see with this level of NAT is that I don't expect UPnP to
be implemented at that level. I would see UPnP as a security risk and
prone to denial-of-service attacks when you have torrent clients
attempting to grab every available port.

That's going to create problems with services like Xbox Live, which
requires UPnP on at least one player's connection to fully function,
so that player can "host" the game. When you're looking at player
counts in the millions, I'm fairly sure they are going to be affected
by CGN. That's one application I expect to see broken by such
large-scale NAT implementations.



Re: Who controlls the Internet?

2010-07-25 Thread Josh Hoppes
In all honesty, control over the Internet doesn't sound like the
issue here. The US Government regulates entities functioning within
its borders. This would be no different than if I, being in the US,
were restricted from accessing a site in another country due to its
regulations.



Re: The stupidity of trying to "fix" DHCPv6

2011-06-10 Thread Josh Hoppes
On Fri, Jun 10, 2011 at 2:21 PM, Steve Clark  wrote:
> On 06/10/2011 09:37 AM, Ray Soucy wrote:
>>
>> You really didn't just write an entire post saying that RA is bad
>> because if a moron of a network engineer plugs an incorrectly
>> configured device into a production network it may cause problems, did
>> you?
>>
>
> You are the moron - this stuff happens and wishing it didn't doesn't stop
> it. Get a clue!
>

No matter how much stupidity you try to account for, there is
infinitely more to come.



Re: Why are there no GeoDNS solutions anywhere in sight?

2013-03-21 Thread Josh Hoppes
> But what I don't understand is why everyone implies that the status
> quo with round-robin DNS is any better.

I don't think anyone believes round-robin DNS is better. It's that
attempting to do better requires either adding to or changing
standards that must maintain backwards compatibility, and are thus
nearly useless until everyone adopts them, or hack jobs with
hilariously funny failure scenarios that are unavoidable because it
all comes down to guesswork.



Re: Google's QUIC

2013-06-28 Thread Josh Hoppes
My first question is, how are they going to keep themselves from
congesting links?

On Fri, Jun 28, 2013 at 3:09 PM, Michael Thomas  wrote:
> http://arstechnica.com/information-technology/2013/06/google-making-the-web-faster-with-protocol-that-reduces-round-trips/?comments=1
>
> Sorry if this is a little more on the dev side, and less on the ops side but
> since
> it's Google, it will almost certainly affect the ops side eventually.
>
> My first reaction to this was why not SCTP, but apparently they think that
> middle
> boxen/firewalls make it problematic. That may be, but UDP based port
> filtering is
> probably not far behind on the flaky front.
>
> The second justification was TLS layering inefficiencies. That definitely
> has my
> sympathies as TLS (especially cert exchange) is bloated and the way that it
> was
> grafted onto TCP wasn't exactly the most elegant. Interestingly enough,
> their
> main justification wasn't a security concern so much as "helpful" middle
> boxen
> getting their filthy mitts on the traffic and screwing it up.
>
> The last thing that occurs to me reading their FAQ is that they are
> seemingly trying
> to send data with 0 round trips. That is, SYN, data, data, data... That
> really makes me
> wonder about security/dos considerations. As in, it sounds too good to be
> true. But
> maybe that's just the security cruft? But what about SYN cookies/dos? Hmmm.
>
> Other comments or clue?
>
> Mike
>



Re: iOS 7 update traffic

2013-09-18 Thread Josh Hoppes
Our local Akamai cluster has pegged its 1G uplink a few times, and we
are hitting our 1G Equinix IX link pretty hard as well.

On Wed, Sep 18, 2013 at 1:10 PM, Ben Bartsch  wrote:
> We are seeing Akamai traffic up about 100-300% since noon CDT.  Seeing
> similar increased from our participants - colleges and universities mainly.
>
> AS32440
>
> -ben
>
>
> On Wed, Sep 18, 2013 at 12:59 PM, Tassos Chatzithomaoglou <
> ach...@forthnetgroup.gr> wrote:
>
>> We also noticed an interesting spike (+ ~40%), mostly in akamai.
>> The same happened on previous iOS too.
>>
>> --
>> Tassos
>>
>> Zachary McGibbon wrote on 18/9/2013 20:38:
>> > So iOS 7 just came out, here's the spike in our graphs going to our ISP
>> > here at McGill, anyone else noticing a big spike?
>> >
>> > [image: internet-sw1 - Traffic - Te0/7 - To Internet1-srp (IR Canet) -
>> > TenGigabitEthernet0/7]
>> >
>> > Zachary McGibbon
>> >
>>
>>
>>



Re: Quakecon: Network Operations Center tour

2015-08-02 Thread Josh Hoppes
It's not that often you see a bunch of people talking about a video
you're in, especially on NANOG. So here goes.

BYOC is around 2700 seats. Total attendance was around 11,000.

2Gbps has been saturated at some point every year we have had it.
Additional bandwidth is definitely a serious consideration going
forward. It is a lot better than the 45Mbps or less we dealt with in
2010 and prior, but better doesn't mean good enough. Many games these
days depend upon online services, which forced us to look for
options. AT&T has been sponsoring since then, and we do appreciate it.

We have had the potential for DDoS attacks on our minds. Our first
option in those cases is blackhole announcements to the carrier for
the targeted /32. AT&T did provide address space for us to use so the
BYOC was using public IPs, and hopefully the impact of blackholing a
single IP could be made minimal. Thankfully we have not yet been
targeted, and we can only keep hoping it stays that way.
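For context, the blackhole announcement itself is the standard remotely-triggered-blackhole pattern. A hedged IOS-style sketch, assuming the carrier honors a blackhole community (the 65000:666 community, tag, and addresses here are all illustrative; the real community value is carrier-specific):

```
! Pin the attacked /32 to Null0 and tag it for export.
ip route 192.0.2.50 255.255.255.255 Null0 tag 666
!
! Redistribute tagged statics into BGP with the carrier's
! blackhole community attached.
route-map RTBH-OUT permit 10
 match tag 666
 set community 65000:666
!
router bgp 64500
 redistribute static route-map RTBH-OUT
```

The carrier then drops traffic to that /32 at its edge, so the attack never reaches our 2Gbps pipe, at the cost of taking the one target offline.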

We haven't tackled IPv6 yet, since it adds complexity that our
primary focus doesn't significantly benefit from; most games just
don't support it. Our current table switches don't have RA guard, and
will probably require replacement with ones that are capable.

We also re-designed the LAN back in 2011 to break the giant single
broadcast domain down to a subnet per table switch. This has
definitely gotten us some flak from the BYOC, since it breaks their
LAN browsers, but we thought a stable network was more important
given how dependent games have become on stable Internet
connectivity. We're still trying to find a good middle ground for
attendees on that one, but I'm sure everyone here understands how
insane a single broadcast domain with 2000+ hosts that aren't under
your control is. We have tried to focus on latency on the LAN;
however, with so many games no longer LAN-oriented, Internet
connectivity became the dominant issue.

Some traffic is routed out a separate lower capacity connection to
keep saturation issues from impacting it during the event.

Squid and nginx do help with caching, and thankfully Steam migrated
to an HTTP distribution method that allows for easy caching. Some
other services make it more difficult, but we try our best. Before
Steam changed to HTTP distribution, there were a few years where they
helped by providing a local mirror, but that seems to have been
discontinued with the migration to HTTP. The cache pushed a little
over 4Gbps of traffic at peak during the event.
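The caching setup boils down to an ordinary HTTP reverse proxy with a big disk cache. A hedged nginx-style sketch (hostnames, sizes, and the resolver are illustrative assumptions, not our actual config):

```nginx
# Big on-disk cache for immutable game-content downloads.
proxy_cache_path /var/cache/steam levels=2:2 keys_zone=steam:100m
                 max_size=500g inactive=30d;

server {
    listen 80;
    server_name *.steamcontent.com;   # illustrative CDN hostname

    location / {
        proxy_cache steam;
        proxy_cache_valid 200 30d;
        proxy_cache_key $uri;          # depot chunks are immutable per URI
        proxy_pass http://$host$request_uri;
        resolver 8.8.8.8;              # needed when proxy_pass uses variables
    }
}
```

Clients are steered to the cache by answering the CDN hostnames with the cache's address on the event's resolvers; a cache miss is fetched once from upstream and then served locally to everyone else.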

The core IT team which handles the network (L2 and above) is about 9
volunteers. The physical infrastructure is handled by our IP & D
team, which assembles a huge team of volunteers to get those 13 miles
of cable ready between Monday and Wednesday. The event is very
volunteer driven, like many LAN parties across the planet. We try to
reuse cable from year to year, including loading the table runs onto
a pallet to be used for making new cables in future years.

I imagine I haven't answered everyone's questions, but hopefully that
fills in some of the blanks.

If this has anyone considering sponsorship interest in the event the
contact email is sponsors(at)quakecon.org. Information is also
available on the website http://www.quakecon.org/.


Re: Quakecon: Network Operations Center tour

2015-08-02 Thread Josh Hoppes
On Sun, Aug 2, 2015 at 4:59 PM, Randy Bush  wrote:
> josh,
>
> thanks for the more technical scoop.  now i get it a bit better.
>
>> We also re-designed the LAN back in 2011 to break up the giant single
>> broadcast domain down to a subnet per table switch.
>
> so it is heavily routed using L3 on the core 'switches'?  makes a lot of
> sense.

Single core switch, the Cisco 6509 VE in the video, handles routing
between subnets. Table switches have an IP for management and
monitoring. We have some 3750Gs for additional routing in other parts
of the event.


Re: Google Apps for ISPs -- Lingering fallout

2015-08-24 Thread Josh Hoppes
When it comes to reasons for them to force everyone off, I believe it
has to do with control. ISP accounts tend to be personal accounts,
but when you stop being a customer of the ISP they deactivate the
account. Once Google tied Play Store purchases to the account, things
got very messy when a customer's account was deactivated and they
suddenly lost all of the stuff they paid for.

On Mon, Aug 24, 2015 at 7:45 AM, Matt Hoppes  wrote:
> Which is odd. Considering it was basically gmail on the back end and they 
> still got ad revenue from it.
>
>
>
>> On Aug 24, 2015, at 08:34, Scott Helms  wrote:
>>
>> Ryan,
>>
>> Most certainly, the charges varied some  because of size and other factors
>> but it was around 25 cents monthly per Gmail box.
>>
>>
>> Scott Helms
>> Vice President of Technology
>> ZCorum
>> (678) 507-5000
>> 
>> http://twitter.com/kscotthelms
>> 
>>
>>> On Mon, Aug 24, 2015 at 1:43 AM, Ryan Finnesey  wrote:
>>>
>>> Was Google charging ISPs for this service?
>>>
>>> Cheers
>>> Ryan
>>>
>>>
>>> -Original Message-
>>> From: NANOG [mailto:nanog-boun...@nanog.org] On Behalf Of Gary Greene
>>> Sent: Tuesday, August 18, 2015 2:18 PM
>>> To: Shawn L 
>>> Cc: nanog 
>>> Subject: Re: Google Apps for ISPs -- Lingering fallout
>>>
>>> You’ll need to escalate this with Google. If the front-end support team
>>> cannot help, move up the chain as far as you can. It should eventually
>>> reach the PM that worked on the turn-down of that service and get some
>>> action.
>>>
>>> --
>>> Gary L. Greene, Jr.
>>> Sr. Systems Administrator
>>> IT Operations
>>> Minerva Networks, Inc.
>>> Cell: +1 (650) 704-6633
>>>
>>>
>>>
>>>
 On Aug 18, 2015, at 10:39 AM, Shawn L  wrote:


 I know there are others on this list who used Google Apps for ISPs and
>>> recently migrated off (as the service was discontinued).

 We have had several cases where the user had a YouTube channel or Picasa
>>> photo albums, etc. that they created with their Google Apps for ISPs
>>> credentials.  Now that the service is gone, those channels and albums still
>>> exist but the users are unable to login to them or manage them in any way
>>> because it tells them that their account has been disabled.

 Of course, Google had been un-responsive to all of our (and the
>>> customer's) inquiries about how to fix this.

 Has anyone else run into this and found a way around it?

 thanks


 Shawn

>>>
>>>