Re: "Permanent" DST

2022-03-17 Thread Brett Frankenberger
On Wed, Mar 16, 2022 at 10:29:07AM -0700, Owen DeLong via NANOG wrote:
> 
> You’re right… Two changes to a single file in most cases:
> 
> 1. Set the correct new timezone (e.g. MST for California).
> 2. Turn off the Daylight Stupid Time flag.
> 
> The previous change involved updating MANY zone files to change when
> DST happened.
> 
> This change eliminates that complexity altogether.
> 
> This is a MUCH simpler change than the previous one.

If the requirement is "I need to correctly convert epoch time to
localtime for all times at or after the time when the new rules go into
effect", that's a workable solution.  If the requirement includes
getting it right for historical times, then, well, not so much.
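
A quick sketch of the historical-conversion problem, using Python's
zoneinfo (the "new rules" zone here is a hand-built fixed UTC-7 zone,
standing in for the hypothetical post-change California):

    from datetime import datetime, timezone, timedelta
    from zoneinfo import ZoneInfo

    epoch = 1295121600  # 2011-01-15 20:00:00 UTC, i.e. noon PST

    # conversion under the full historical rule set:
    old = datetime.fromtimestamp(epoch, ZoneInfo("America/Los_Angeles"))
    # naive conversion under the new rules applied to all of history:
    new = datetime.fromtimestamp(epoch, timezone(timedelta(hours=-7), "MST"))

    print(old.isoformat())  # 2011-01-15T12:00:00-08:00
    print(new.isoformat())  # 2011-01-15T13:00:00-07:00 -- an hour off

Every pre-change winter timestamp shifts by an hour unless the zone
file keeps the historical rules alongside the new ones.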

 -- Brett


Re: Texas internet connectivity declining due to blackouts

2021-02-16 Thread Brett Frankenberger
On Tue, Feb 16, 2021 at 08:02:38AM +0200, Mark Tinka wrote:
> 
> On 2/16/21 07:49, Matthew Petach wrote:
> 
> > Isn't that a result of ERCOT stubbornly refusing to interconnect with
> > the rest of the national grid, out of an irrational fear of coming under
> > federal regulation?
> > 
> > I suspect that trying to be self-sufficient works most of the time--but
> > when you get to the edges of the bell curve locally, your ability to be
> > resilient and survive depends heavily upon your ability to be supported
> > by others around you.  This certainly holds true for individual humans;
> > I suspect power grids aren't that different.
> 
> If there was a state-wide blackout, they'd need to restart from the national
> grid anyway. 

The Texas Grid has black-start capability.  In the event of a
state-wide blackout, they would not restart from the Eastern or Western
US Grid.

> Why not have some standing interconnection agreement with them
> anyway, that gets activated in cases such as these?

They have 820MW of interconnection with the Eastern Interconnect (the
Eastern US grid).  During most of this, it's been moving nearly 820MW
into Texas.  (There were power shortages and rolling blackouts in
portions of the Eastern Interconnect also, although for much shorter
windows of time.  During those times, less power was flowing into
Texas, presumably because the Eastern Interconnect didn't have it
available (in the right places).)

Connections are more expensive than just a transmission line, because
you have to go AC-DC-AC (or have a rotary frequency converter).

> Sorry, unfamiliar with U.S. politics in this regard, so just doing 1+1.

Three grids, Western, Eastern, and Texas.  A GW or so of DC ties
between the Eastern and Western; nothing between the Western and Texas
(directly), and, as noted above, 820MW between the Eastern and Texas. 
(Very roughly, that's 2% of peak demand for the Texas grid.)

Eastern and Western exist largely for technical reasons (too big to
keep synchronized, at least without building a lot more ties between
them).  Texas is independent largely for political reasons.

 -- Brett


Re: 60 ms cross-continent

2020-06-21 Thread Brett Frankenberger
On Sun, Jun 21, 2020 at 02:17:08PM -0300, Rubens Kuhl wrote:
> On Sat, Jun 20, 2020 at 5:05 PM Marshall Eubanks wrote:
> 
> > This was also pitched as one of the killer-apps for the SpaceX
> > Starlink satellite array, particularly for cross-Atlantic and
> > cross-Pacific trading.
> >
> >
> > https://blogs.cfainstitute.org/marketintegrity/2019/06/25/fspacex-is-opening-up-the-next-frontier-for-hft/
> >
> > "Several commentators quickly caught onto the fact that an extremely
> > expensive network whose main selling point is long-distance,
> > low-latency coverage has a unique chance to fund its growth by
> > addressing the needs of a wealthy market that has a high willingness
> > to pay — high-frequency traders."
> >
> >
> This is a nice plot for a movie, but not how HFT is really done. It's so
> much easier to colocate on the same datacenter of the exchange and run
> algorithms from there; while those algorithms need humans to guide their
> strategy, the human thought process takes a couple of seconds anyways. So
> the real HFTs keep using the defined strategy while the human controller
> doesn't tell it otherwise.

For faster access to one exchange, yes, absolutely, colocate at the
exchange.  But there's more than one exchange.

As one example, many index futures trade in Chicago.  The stocks that
make up those indices mostly trade in New York.  There's money to be
made on the arbitrage, if your Chicago algorithms get faster
information from New York (and vice versa) than everyone else's
algorithms.

More expensive but shorter fiber routes have been built between NYC and
Chicago for this reason, as have microwave paths (to get
speed-of-light in air rather than in glass).  There's competition to
have the microwave towers as close as possible to the data centers,
because the last mile is fiber so the longer your last mile, the less
valuable your network.
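
To put numbers on the glass-versus-air gap (a back-of-the-envelope
sketch; the ~1,150 km figure is the approximate straight-line NYC to
Chicago distance, and real fiber routes are longer):

    C = 299_792                 # km/s, speed of light in vacuum
    KM = 1150                   # approx. straight-line NYC-Chicago
    fiber_ms = KM / (C * 0.67) * 1000   # light in glass: roughly 2/3 c
    air_ms = KM / C * 1000              # microwave in air: nearly c
    print(f"fiber ~{fiber_ms:.1f} ms one-way, microwave ~{air_ms:.1f} ms")
    # fiber ~5.7 ms one-way, microwave ~3.8 ms

A couple of milliseconds each way is an eternity in this business,
hence the tower race.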

https://www.bloomberg.com/news/features/2019-03-08/the-gazillion-dollar-standoff-over-two-high-frequency-trading-towers

 -- Brett


Re: CloudFlare issues?

2019-07-06 Thread Brett Frankenberger
On Thu, Jul 04, 2019 at 11:46:05AM +0200, Mark Tinka wrote:
> I finally thought about this after I got off my beer high :-).
> 
> Some of our customers complained about losing access to Cloudflare's
> resources during the Verizon debacle. Since we are doing ROV and
> dropping Invalids, this should not have happened, given most of
> Cloudflare's IPv4 and IPv6 routes are ROA'd.

These were more-specifics, though.  So if you drop all the
more-specifics as failing ROV, then you end up following the valid
shorter prefix to the destination.  Quite possibly that points at the
upstream which sent you the more-specific which you rejected, at which
point your packets end up going to the same place they would have
gone if you had accepted the invalid more-specific.

Two potential issues here:  First, if you don't have an upstream who
is also rejecting the invalid routes, then anywhere you send the
packets, they're going to follow the more-specific.  Second, even if
you do have an upstream that is rejecting the invalid routes, ROV won't
cause you to prefer the less-specific from an upstream that is
rejecting the invalid routes over a less-specific from an upstream that
is accepting the invalid routes.

For example:
   if upstream A sends you:
  10.0.0.0/16 valid
   and upstream B sends you
  10.0.0.0/16 valid
  10.0.0.0/17 invalid
  10.0.128.0/17 invalid
you want to send the packet to A.  But ROV won't cause that, and if
upstream B is selected by your BGP decision criteria (path length,
etc.), your packets will ultimately follow the more-specific.

(Of course, the problem can occur more than one network away.  Even
if you do send to upstream A, there's no guarantee that A's
less-specifics aren't pointed at another network that does have the
more-specifics.  But at least you give them a fighting chance by
sending them to A.)
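
A toy longest-prefix-match lookup makes the first issue concrete (a
sketch only; prefixes and next-hop names are invented):

    import ipaddress

    def lookup(table, dst):
        dst = ipaddress.ip_address(dst)
        matches = [(p, nh) for p, nh in table if dst in p]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    P = ipaddress.ip_network
    # Upstream B is not filtering and carries the invalid /17s:
    unfiltered = [(P("10.0.0.0/16"), "B"), (P("10.0.0.0/17"), "B")]
    # After you drop the invalid /17 locally, the /16 still points at B:
    filtered = [(P("10.0.0.0/16"), "B")]

    print(lookup(unfiltered, "10.0.0.1"))  # B, via the invalid /17
    print(lookup(filtered, "10.0.0.1"))    # still B, via the valid /16

Dropping the invalid route changed your table but not where the
packets go -- and once they reach B, B's own more-specific takes over.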

 -- Brett


Re: What's the point of prepend communities?

2017-10-29 Thread Brett Frankenberger
On Sun, Oct 29, 2017 at 07:01:13AM -0500, Mike Hammett wrote:
> If I understand the OP correctly, I will use this real world example: 
> 
> https://onestep.net/communities/as174/ 
> 
> 174:3001 through 174:3003 as compared to doing the prepending
> yourself.  What is the functional difference?
> 
> BGP neighbors of 174 will see just as many AS hops either way, but
> non-BGP customers of 174 would see you just one hop away.  It's just
> another method of traffic engineering.

According to the link you provided, 3001..3003 are effective on "ALL
peer[s]" (which is differnet from all BGP neighbors).  So BGP-speaking
customers of 174 will not see the prepending if you use 174:3001..3003,
but peers will; but if you do the prepending yourself, then all 174's
peers and all 174's BGP-speaking customers see it.
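
For illustration, the provider-side implementation of such a community
typically looks something like this hypothetical IOS fragment (not
Cogent's actual configuration; names and prepend count invented):

ip community-list standard CUST-PREPEND-1 permit 174:3001
!
route-map TO-PEER permit 10
 match community CUST-PREPEND-1
 set as-path prepend 174
!
route-map TO-PEER permit 20

Because TO-PEER is applied outbound only on peer sessions, customer
sessions never see the prepend -- which is exactly the scoping
difference described above.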

 -- Brett


Re: What's the point of prepend communities?

2017-10-26 Thread Brett Frankenberger
On Thu, Oct 26, 2017 at 03:05:25PM -0400, William Herrin wrote:
> 
> You'd only use communities like that if you want to signal the ISP to
> deprioritize your advertisement on a particular peer or set of peers but
> not others. That's when you're getting fancy. It's not the norm. The norm
> is you want to deprioritize one of your paths as a whole. Maybe that link
> has less capacity or is enough better connected that it would always
> override your other links unless you detune it a little.
> 
> I mean, you could tell the ISP to prepend everything based on a community,
> assuming they support such a community, but why would you? That needlessly
> makes things more complicated.

Completely agree.  I would add that some providers' "prepend
everything" community is really "prepend to all peers" (or something
else shy of "prepend to every BGP neighbor we have").  In that case, if
the customer prepends, the prepend is seen by the provider's other
customers, but if the customer sets the "prepend to all peers"
community, the provider's customers won't see the prepend.  There are
cases where that functionality would be useful.

I have never seen a provider with a true "prepend to every BGP neighbor
we have" community, but it might well exist somewhere.

 -- Brett


Re: Vendors spamming NANOG attendees

2017-06-14 Thread Brett Frankenberger
On Wed, Jun 14, 2017 at 02:02:47PM -, John Levine wrote:
> In article <63cd2031-701d-4567-b88a-2986e8b3f...@beckman.org> you write:
> >But as I said, harvesting emails is not illegal under can spam. 
> 
> This might be a good time to review 15 USC 7704(b)(1), which is titled
> "Address harvesting and dictionary attacks".

When reviewing it, make sure to read the whole thing.  Including the
part where it doesn't prohibit those things (harvesting and dictionary
attacks), but, instead, declares that those things are aggravating
factors if done by someone as part of doing things that are prohibited
by the section that actually prohibits things, which is 7704(a).

 -- Brett


Re: Vendors spamming NANOG attendees

2017-06-14 Thread Brett Frankenberger
On Wed, Jun 14, 2017 at 01:21:21PM +, Mel Beckman wrote:
> Rodney,
> 
> You make a good point. But I wonder how often spammers are so
> obvious, and I wonder if his "leveraging" falls amiss of CAN-SPAM's
> specific prohibition:
> 
> (I) harvesting electronic mail addresses of the users of a website,
> proprietary service, or other online public forum operated by another
> person, without the authorization of such person; and
> 
> (II) randomly generating electronic mail addresses by computer;
> 
> Technically, this spammer harvested the names of attendees at a
> physical conference, not of some online resource, which is what
> CAN-SPAM prohibits.  I know it's splitting hairs, but that's what
> spammers do.

There is no such specific prohibition in CAN-SPAM.

The section of CAN-SPAM from which you are quoting (15 USC 7703)
instructs the Sentencing Commission to consider sentence enhancements
for criminals convicted under existing computer crimes laws if they did
one of the two things you list above.

The part you left out (and which immediately precedes the part you
quoted) reads:

(2) In carrying out this subsection, the Sentencing
Commission shall consider providing sentencing enhancements for—
(A) those convicted under section 1037 of title 18 who—
(i) obtained electronic mail addresses through improper means,
including—
  [ then (I) and (II) from above ]

Merely sending non-misleading spam does not violate 18 USC 1037.

> My point is that CAN-SPAM is virtually useless. There have been a
> handful of prosecutions in more than a decade, and spammers are not
> seeming to be deterred.
> 
> I know there are honeypots that try to catch electronic harvesters,
> but I don't think they could provide proof of someone who got his
> emails from a list of attendees at an event, a shared customer list,
> etc.

And even if someone did, no crime is committed.

But if someone uses those addresses in the commission of another crime,
he might go to prison for longer.

 -- Brett


Re: [NOC] ARIN contact needed: something bad happens with legacy IPv4 block's reverse delegations

2017-03-20 Thread Brett Frankenberger
On Sat, Mar 18, 2017 at 09:27:11PM -0700, Doug Barton wrote:
> 
> > As to why DNS-native zone operations are not utilized, the challenge
> > is that reverse DNS zones for IPv4 and DNS operations are on octet
> > boundaries, but IPv4 address blocks may be aligned on any bit
> > boundary.
> 
> Yes, deeply familiar with that problem. Are you dealing with any address
> blocks smaller than a /24? If the answer is no (which it almost certainly
> is), what challenges are you facing that you haven't figured out how to
> overcome yet? (Even < /24 blocks can be dealt with, obviously, but I'd be
> interested to learn that there are problems with /24 and up that are too
> difficult to solve.)

Hypothetically:

10.11.0.0/16 (11.10.in-addr.arpa) is managed by ARIN
10.11.16.0/20 is ARIN space
10.11.32.0/20 is RIPE space

If ARIN delegated 32.11.10.in-addr.arpa through 47.11.10.in-addr.arpa
to a RIPE nameserver, there's no good way for RIPE to then delegate,
say, 10.11.34.0/24 (34.11.10.in-addr.arpa) to the nameserver of the
entity to which RIPE has allocated 10.11.34.0.  (Sure, it can be done,
using the same techniques as are used for allocations of
longer-than-/24, but recipients of /24 and larger reasonably expect to
have the X.X.X.in-addr.arpa delegated to their nameservers.)

So, instead, RIPE communicates to ARIN the proper delegations for
32.11.10.in-addr.arpa through 47.11.10.in-addr.arpa, and ARIN merges
those into 11.10.in-addr.arpa.
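
The merged zone would contain fragments like this hypothetical excerpt
(all names invented; real delegations would differ):

34.11.10.in-addr.arpa.  IN  NS  ns1.ripe-customer.example.
34.11.10.in-addr.arpa.  IN  NS  ns2.ripe-customer.example.
35.11.10.in-addr.arpa.  IN  NS  ns1.another-lir.example.

ARIN publishes the zone, but the NS data for 32 through 47 is
RIPE-sourced.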

One way for RIPE to communicate those delegations to ARIN is to put
them into some other zone, which ARIN could then zone-transfer.  But
ARIN would still need a process to merge the data from that other zone
with the real 11.10.in-addr.arpa zone.  And that has the same risks as
the current process, which apparently communicates those delegations
via something other than zone-transfer.

 -- Brett


Re: SHA1 collisions proven possisble

2017-02-26 Thread Brett Frankenberger
On Sun, Feb 26, 2017 at 12:18:48PM -0500, Patrick W. Gilmore wrote:
> 
> I repeat something I've said a couple times in this thread: If I can
> somehow create two docs with the same hash, and somehow con someone
> into using one of them, chances are there are bigger problems than a
> SHA1 hash collision.
> 
> If you assume I could somehow get Verisign to use a cert I created to
> match another cert with the same hash, why in the hell would that
> matter?  I HAVE THE ONE VERISIGN IS USING.  Game over.
> 
> Valdis came up with a possible use of such documents. While I do not
> think there is zero utility in those instances, they are pretty small
> vectors compared to, say, having a root cert at a major CA.

I want a google.com cert.  I ask a CA to sign my fake google.com
certificate.  They decline, because I can't prove I control google.com.

I create a cert for mydomain.com, that hashes to the same value as my
fake google.com cert.  I ask a CA to sign my mydomain.com cert.  They
do, because I can prove I control mydomain.com.

Now I effectively have a signed google.com cert.

Of course, SHA1 is already deprecated for this purpose, and the
currently demonstrated attack isn't flexible enough to have much chance
at getting a colliding certificate signed.  So, practically speaking,
this isn't a problem *today* (even if SHA1 weren't deprecated).  So this
is more of a "here's the sort of thing collision attacks can be used
for" point, rather than a "here's what you can do with this attack right
now" point.

 -- Brett


Re: Accepting a Virtualized Functions (VNFs) into Corporate IT

2016-11-28 Thread Brett Frankenberger
On Mon, Nov 28, 2016 at 01:44:25PM -0500, Rich Kulawiec wrote:
> On Mon, Nov 28, 2016 at 09:53:41AM -0800, Kasper Adel wrote:
> > Vendor X wants you to run their VNF (Router, Firewall or Whatever) and they
> > refuse to give you root access, or any means necessary to do 'maintenance'
> > kind of work, whether its applying security updates, or any other similar
> > type of task that is needed for you to integrate the Linux VM into your IT
> > eco-system.
> 
> Thus simultaneously (a) making vendor X a far more attractive target for
> attacks and (b) ensuring that when -- not if, when -- vendor X has its
> infrastructure compromised that the attackers will shortly thereafter
> own part of your network, for a value of "your" equal to "all customers
> of vendor X".
> 
> (By the way, this isn't really much of a leap on my part, since it's
> already happened.)

Sure.  But that's mostly the risk of running a black-box appliance.  It
doesn't really matter if it's a VM or a piece of hardware.  Businesses
that are comfortable with physical appliances (running on Intel
hardware under the covers) for Router/Firewall/Whatever accept little
additional risk if they then run that same code on a VM.

(Sure, there's the possibility of the virtual appliance being
compromised, and then being used to exploit a hypervisor bug that
allows breaking out of the VM.  So the risk isn't *zero*.  But the
overwhelming majority of the risk comes from the decision to run the
appliance, not the HW vs. VM decision.)

 -- Brett


Re: NEVERMIND! (was: Seeking Google reverse DNS delegation contact)

2016-11-13 Thread Brett Frankenberger

On Sun, Nov 13, 2016 at 03:57:19PM -0800, Christopher Morrow wrote:
> So... actually someone did tell arin to aim these at
> ns1/2google.com...
> I'll go ask arin to 'fix the glitch'.

For 138.8.204.in-addr.arpa ...

ARIN is delegating to ns[12].saversagreeable.com

The NS records on the saversagreeable.com servers are pointing to
ns[12].google.com.

> > http://pastebin.com/raw/VNwmgMHh

 -- Brett


Re: Dyn DDoS this AM?

2016-10-21 Thread Brett Frankenberger
On Fri, Oct 21, 2016 at 05:11:34PM -0700, Crist Clark wrote:
>
> Given the scale of these attacks, whether having two providers does any
> good may be a crap shoot.
> 
> That is, what if the target happens to share the same providers you do?
> Given the whole asymmetry of resources that make this a problem in the
> first place, the attackers probably have the resources to take out multiple
> providers.
> 
> Having multiple providers may reduce your chance of being collateral damage
> (and I'd also still worry more about the more mundane risks of a single
> provider, maintenance or upgrade gone bad, business risks, etc., than these
> sensational ones), but multiple providers likely won't save you if you are
> the actual target of the attack.

Good, perfect, enemy, etc.

How many sites were down today?  How many were the intended target?

 -- Brett


Re: DHCPv6 PD & Routing Questions

2015-12-06 Thread Brett Frankenberger
On Sun, Dec 06, 2015 at 02:20:36PM -0800, Owen DeLong wrote:
> 
> As an alternative worth considering, it could do this with BGP instead of 
> OSPF.
> 
> There’s nothing mythical or magical about BGP. A CPE autoconfiguring
> itself to advertise the prefix(es) it has received from upstream
> DHCPv6 server(s) to its neighbors is not rocket science. In fact,
> this would mean that the CPE could also accept a default route via
> the same BGP session and it could even be used to enable automatic
> failover for multihomed dynamically addressed sites.
> 
> Sure, this requires modifying the CPE, but not in a particularly huge
> way and it provides a much cleaner and more scaleable solution for
> the ISP side of the equation than OSPF.
> 
> Most current implementations use RIPv2, but we all know just how icky
> that is.

How do you secure that?  Or do you just assume no one will announce
someone else's prefix?  (I can think of ways to secure it, of course,
but none of the approaches for having the DHCP server configure some
sort of prefix access control seem to me to be any better or easier
than having the DHCP server configure a static route).

This isn't a problem I face, but if it were, I think I'd solve it by
having the DHCP server inject the route via BGP with an appropriate
next-hop.
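
As a sketch of that last approach, an ExaBGP-style configuration on
the DHCPv6 server could originate the delegated prefix (everything
here -- addresses, ASN, prefix -- is invented for illustration):

neighbor 2001:db8::1 {                  # the upstream router
    router-id 192.0.2.2;
    local-address 2001:db8::2;          # the DHCPv6 server
    local-as 64500;
    peer-as 64500;
    static {
        # prefix just delegated via DHCPv6-PD; next-hop is the CPE
        route 2001:db8:1000::/48 next-hop 2001:db8::10;
    }
}

Since the DHCP server already knows which prefix went to which CPE, it
is the natural place to originate the route, and no trust is extended
to the CPE itself.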

 -- Brett


Re: buffer bloat and packet pacing

2015-09-03 Thread Brett Frankenberger
On Thu, Sep 03, 2015 at 01:04:34PM +0100, Nick Hilliard wrote:
> On 03/09/2015 11:56, Saku Ytti wrote:
> > 40GE server will flood the window as fast as it can, instead of
> > limiting itself to 10Gbps, optimally it'll send at linerate.
> 
> optimally, but tcp slow start will generally stop this from happening on
> well behaved sending-side stacks so you end up ramping up quickly to path
> rate rather than egress line rate from the sender side.  Also, regardless
> of an individual flow's buffering requirements, the intermediate path will
> be catering with large numbers of flows, so while it's interesting to talk
> about 375mb of intermediate path buffers, this is shared buffer space and
> any attempt on the part of an individual sender to (ab)use the entire path
> buffer will end up causing RED/WRED for everyone else.
> 
> Otherwise, this would be a fascinating talk if people had real world data.

The original analysis is flawed because it assumes latency is constant.
Any analysis has to include the fact that buffering changes latency.

If you start with a 300ms path (by propagation delay, switching latency,
etc.), and 375MB of buffers on a 10G port, then, when the buffers
fill, you end up with a 600ms path[1].  And a 375MB window is no longer
sufficient to keep the pipe full.

Instead, you need a 750MB buffer.

But now the latency is 900ms.

And so on.  This doesn't converge.  Every byte of filled buffer is
another byte you need in the window if you're going to fill the pipe.

Not accounting for this is part of the reason the original analysis is
flawed.  The end result is that you always run out of window or run out
of buffer (causing packet loss).
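
The non-convergence is easy to see numerically (a sketch of the
argument above: buffer sized to the bandwidth*delay product, whose own
queueing delay then grows the bandwidth*delay product in turn):

    RTT0, RATE = 0.300, 10e9 / 8   # 300ms path, 10Gb/s in bytes/sec
    buf = RTT0 * RATE              # start: buffer = unloaded BDP = 375MB
    for step in range(4):
        rtt = RTT0 + buf / RATE    # latency once that buffer is full
        buf = rtt * RATE           # buffer/window now "needed"
        print(f"rtt = {rtt*1000:.0f} ms, needed = {buf/1e6:.0f} MB")
    # prints rtt = 600, 900, 1200, 1500 ms -- growing without bound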

Here's a paper that shows you don't need buffers equal to
bandwidth*delay to get near capacity:
http://www.cs.bu.edu/~matta/Papers/hstcp-globecom04.pdf
(I'm not endorsing it.  Just pointing it out as a datapoint.)

 -- Brett

[1] 0.300 + 375E6 * 8 / 10E9 = 600ms


Re: buffer bloat and packet pacing

2015-09-03 Thread Brett Frankenberger
On Thu, Sep 03, 2015 at 05:48:00PM +0300, Saku Ytti wrote:
> Hey Brett,
>
> > Here's a paper that shows you don't need buffers equal to
> > bandwidth*delay to get near capacity:
> > http://www.cs.bu.edu/~matta/Papers/hstcp-globecom04.pdf
> > (I'm not endorsing it.  Just pointing out it out as a datapoint.)
>
> Quick glance makes me believe the S and D nodes are equal bandwidth,
> but only R1-R2 bandwidth is explicitly stated. S1, D1, Sn, Dn are only
> ever mentioned in the topology. If Sender is same or lower rate than
> Destination, then we really shouldn't need almost any buffering.

Unless Sender is higher than R1-R2.

> Issue should only come when Sender is significantly higher rate than
> Destination and network is not limiting them.

I didn't read it in detail either, but at first glance, it appears to
me that the model is infinite bandwidth and zero latency between S and
R1, and D and R2, with queueing happening in R1.

That's not going to give materially different results than having S-R1
be 4 times R1-R2, and R2-D being the same as R1-R2.  So it fits well
with the original discussion here of 40G into 10G.

 -- Brett


Re: United Airlines is Down (!) due to network connectivity problems

2015-07-08 Thread Brett Frankenberger
On Wed, Jul 08, 2015 at 01:55:43PM -0400, valdis.kletni...@vt.edu wrote:
> On Wed, 08 Jul 2015 17:42:52 -0000, Matthew Huff said:
> 
> > Given that the technical resources at the NYSE are significant and
> > the lengthy duration of the outage, I believe this is more serious
> > than is being reported.
> 
> My personal, totally zero-info suspicion:
> 
> Some chuckleheaded NOC banana-eater made a typo, and discovered an
> entirely new class of wondrous BGP-wedgie style "We know how we got
> here, but how do we get back?" network misbehaviors

We don't know how long the underlying problem lasted, and how much of
the continued outage time is dealing with the logistics of restarting
trading mid-day.  Completely stopping and then restarting trading
mid-day is likely not a quick process even if the underlying technical
issue is immediately resolved.
 
> (Such things have happened before - like the med school a few years ago that
> extended their ethernet spanning tree one hop too far, and discovered that
> merely removing the one hop too far wasn't sufficient to let it come back
> up...)

No, but picking a bridge in the center, giving it priority sufficient
for it to become root, and then configuring timers[1] that would
support a much larger than default diameter, possibly followed by some
reboots, probably would have.  

From what has been publicly stated, they likely took a much longer and
more complicated path to service restoration than was strictly
necessary.  (I have no non-public information on that event.  There may
be good reasons, technical or otherwise, why that wasn't the chosen
solution.)
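
For reference, the recovery Brett describes is a handful of commands
on the would-be root bridge (an illustrative IOS-style sketch; the
VLAN and timer values are invented, and per note [1] below the timers
only need to be set on the root):

spanning-tree vlan 10 priority 0        ! make this bridge the root
spanning-tree vlan 10 max-age 40        ! support a larger diameter
spanning-tree vlan 10 forward-time 30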

 -- Brett

[1] You only have to configure them on the root; non-root bridges use
what root sends out, not what they have configured.


Re: Charter ARP Leak

2014-12-29 Thread Brett Frankenberger
On Mon, Dec 29, 2014 at 12:27:04PM -0500, Jay Ashworth wrote:
  
> > Valdis, you are correct. What you're seeing is caused by multiple IP
> > blocks being assigned to the same CMTS interface.
> 
> Am I incorrect, though, in believing that ARP packets should only be visible
> within a broadcast domain, 

broadcast domain != subnet

> and that because of that, they should not be
> being passed through a cablemodem attached to such a CMTS interface unless
> they're within the IP network in which that interface lives (which is
> probably not 0/0)? 
> 
> This sounds like a firmware bug in either the CMTS or the cablemodem.

int ethernet 0/0
  ip address 10.0.0.1 255.255.0.0
  ip address 11.0.0.1 255.255.0.0 secondary
  ip address 12.0.0.1 255.255.0.0 secondary

The broadcast domain will have ARP broadcasts for all three subnets.

Doing it over a CMTS doesn't change that.

 -- Brett


Re: Equinix Virginia - Ethernet OOB suggestions

2014-11-10 Thread Brett Frankenberger
On Mon, Nov 10, 2014 at 08:20:44AM -0600, Joe Greco wrote:
> > Hey,
> > 
> > VPN setup is not really a viable option (for us) in this scenario.
> > Honestly, I'd prefer to just call it done already and have a VPN but due to
> > certain restraints, we have to go down this route.
> 
> Without explaining the restraints, this kinda boils down to 'cuz we
> want it', which stopped being good justification many years ago.  

Not to ARIN, which isn't in the business of deciding what uses are
valid and what uses are not valid (only that there is, in fact, use). 
With the recent reduction in minimum allocation sizes, he could get PI
space for this directly from ARIN (depending on his previous
allocations and efficient utilization thereof, of course).
 
> I doubt you'll find many takers who would want to provide you with a
> circuit for a few Mbps with a /23 for OOB purposes 'just cuz'.
> 
> I note that we're present in Equinix Ashburn and could do it, and that
> this is basically a nonstarter for us.

Not an unreasonable business decision.  His challenge will be finding a
provider large enough that they can easily allocate a /23 but small
enough that they're interested in a 10(ish) Mbps connection that isn't
likely to grow much.

 -- Brett


Re: Marriott wifi blocking

2014-10-05 Thread Brett Frankenberger
On Sat, Oct 04, 2014 at 11:19:57PM -0700, Owen DeLong wrote:
 
> > There's a lot of amateur lawyering going on in this thread, in an area
> > where there's a lot of ambiguity.  We don't even know for sure that
> > what Marriott did is illegal -- all we know is that the FCC asserted it
> > was and Marriott decided to settle rather than litigate the matter.  And
> > that was an extreme case -- Marriott was making transmissions for the
> > *sole purpose of preventing others from using the spectrum*.
> 
> I don't see a lot of ambiguity in a plain text reading of part 15.
> Could you please read part 15 and tell me what you think is
> ambiguous?

Marriott was actually accused of violating 47 USC 333:
   No person shall willfully or maliciously interfere with or cause
   interference to any radio communications of any station licensed or
   authorized by or under this chapter or operated by the United States
   Government.

In cases like the Marriott case, where the sole purpose of the
transmission is to interfere with other usage of the spectrum,
there's not much ambiguity.  But other cases aren't clear from the
text.  

For example, you've asserted that if I've been using ABCD as my SSID
for two years, and then I move, and my new neighbor is already using
that, that I have to change.  But that if, instead of duplicating my
new neighbor's pre-existing SSID, I operate with a different SSID but
on the same channel, I don't have to change.  I'm not saying your
position is wrong, but it's certainly not clear from the text above
that that's where the line is.  That's what I meant by ambiguity.

(What's your position on a case where someone puts up, say, a
continuous carrier point-to-point system on the same channel as an
existing WiFi system that is now rendered useless by the p-to-p system
that won't share the spectrum?  Illegal or Legal?  And do you think the
text above is unambiguous on that point?)

 -- Brett


Re: Marriott wifi blocking

2014-10-04 Thread Brett Frankenberger
On Sat, Oct 04, 2014 at 01:33:13PM -0700, Owen DeLong wrote:
 
> On Oct 4, 2014, at 12:39 , Brandon Ross br...@pobox.com wrote:
> 
> > On Sat, 4 Oct 2014, Michael Thomas wrote:
> > 
> > > The problem is that there's really no such thing as a copycat if
> > > the client doesn't have the means of authenticating the
> > > destination. If that's really the requirement, people should start
> > > bitching to ieee to get destination auth on ap's instead of
> > > blatantly asserting that somebody owns a particular ssid because,
> > > well, because.
> > 
> > In the enterprise environment that there's been some insistence
> > from folks on this list is a legitimate place to block rogue APs,
> > what makes those SSIDs, yours?  Just because they were used first
> > by the enterprise? That doesn't seem to hold water in an unlicensed
> > environment to me at all.
> 
> Pretty much... Here's why...
> 
> If you are using an SSID in an area, anyone else using the same SSID
> later is causing harmful interference to your network. It's a
> first-come-first-serve situation. Just like amateur radio spectrum...
> If you're using a frequency to carry on a conversation with someone,
> other hams have an obligation not to interfere with your conversation
> (except in an emergency). It's a bit more complicated there, because
> you're obliged to reasonably accommodate others wishing to use the
> frequency, but in the case of SSIDs, there's no such requirement.
> 
> Now, if I start using SSID XYZ in building 1 and someone else is
> using it in building 3 and the two coverage zones don't overlap, I'm
> not entitled to extend my XYZ SSID into building 3 when I rent space
> there, because someone else is using it in that location first.

So your position is that if I start using Starbucks' SSID in a location
where there is no Starbucks, and they later move in to that building,
I'm entitled to compel them to not use their SSID?

> I can only extend my XYZ coverage zone so far as there are no
> competing XYZ SSIDs in the locations I'm expanding in to.

Is there FCC guidance on this, or is this Regulations As Interpreted By
Owen?

> Depends on whether you were the first one using the SSID in a
> particular location or not.
> 
> Sure, this can get ambiguous and difficult to prove, but the reality
> is that most cases are pretty clear cut and it's usually not hard to
> tell who is the interloper on a given SSID.

It's usually easy to tell, but I doubt the FCC would find it relevant. 

There's a lot of amateur lawyering going on in this thread, in an area
where there's a lot of ambiguity.  We don't even know for sure that
what Marriott did is illegal -- all we know is that the FCC asserted it
was and Marriott decided to settle rather than litigate the matter.  And
that was an extreme case -- Marriott was making transmissions for the
*sole purpose of preventing others from using the spectrum*.

 -- Brett


Re: 2000::/6

2014-09-14 Thread Brett Frankenberger
On Sun, Sep 14, 2014 at 04:19:42PM -0500, Jimmy Hess wrote:
> On Sat, Sep 13, 2014 at 5:33 AM, Tarko Tikan ta...@lanparty.ee wrote:
> > 2000::/64 has nothing to do with it.
> > 
> > Any address between 2000:0000:0000:0000:0000:0000:0000:0000 and
> > 23ff:ffff:ffff:ffff:ffff:ffff:ffff:ffff together with misconfigured prefix
> > length (6 instead 64) becomes 2000::/6 prefix.
> 
> It should be rejected for the same reason that 192.168.10.0/16 is
> invalid in a prefix list or access list.

RTR(config)#ip prefix-list TEST permit 192.168.10.0/16
RTR(config)#do sho ip prefix-list TEST
ip prefix-list TEST: 1 entries
   seq 5 permit 192.168.0.0/16

This isn't surprising to people who've been using IOS for a while ...
 
> Any decent router won't allow you to enter just anything in that range
> into the export rules with a /6, except 2000:: itself, and will
> even show you a failure response instead of silently ignoring the
> invalid input, for the very purpose of helping you avoid such errors.

Well, unfortunately, a lot of us have (as you define the term) indecent
routers.

RTR(config)#ipv6 prefix-list TEST permit 2000:ffff::/6
RTR(config)#do sho ipv6 prefix-list TEST
ipv6 prefix-list TEST: 1 entries
   seq 5 permit 2000::/6

> 2001::1/6 would be an example of an invalid input -- there are
> one or more non-zero bits listed outside the prefix, or where bits in
> the mask are zero.
> 
> Only 2000:0000:0000:0000:0000:0000:0000:0000/6 properly conforms,
> not just any IP in that range can have a /6 appended to the end.

 -- Brett


Re: So Philip Smith / Geoff Huston's CIDR report becomes worth a good hard look today

2014-08-13 Thread Brett Frankenberger
On Wed, Aug 13, 2014 at 07:53:45PM -0400, Patrick W. Gilmore wrote:
> > you mean your vendor won't give you the knobs to do it smartly ([j]tac
> > tickets open for five years)?  wonder why.
> 
> Might be useful if you mentioned what you considered a smart way to
> trim the fib. But then you couldn't bitch and moan about people not
> understanding you, which is the real reason you post to NANOG.

Optimization #1 -- elimination of more specifics where there's a less
specific that has the same next hop (obviously only in cases where the
less specific is the one that would be used if the more specific were
left out).

Example: if 10.10.4.0/22 has the same next hop as 10.10.7.0/24, the
latter can be left out of TCAM (assuming there's not a 10.10.6.0/23
with a different next hop).

Optimization #2 -- concatenation of adjacent routes when they have the
same next hop

Example: If 10.10.12.0/23 and 10.10.14.0/23 have the same next hop,
leave them both out of TCAM and install 10.10.12.0/22

Optimization #3 -- elimination of routes that have more specifics for
their entire range.

Example: Don't program 10.10.4.0/22 in TCAM if 10.10.4.0/23,
10.10.6.0/24 and 10.10.7.0/24 all exist
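
As a concrete sketch of optimizations #1 and #2 (toy code; a real
implementation would work incrementally as the FIB changes, and the
prefixes below are invented):

    import ipaddress

    def minimize(fib):
        """fib: {prefix_string: next_hop} -> equivalent, smaller dict."""
        routes = {ipaddress.ip_network(p): nh for p, nh in fib.items()}

        def nearest_cover(net):
            covers = [c for c in routes if c != net and c.supernet_of(net)]
            return max(covers, key=lambda c: c.prefixlen) if covers else None

        # #1: drop more-specifics that agree with their covering route
        for net in sorted(routes, key=lambda n: -n.prefixlen):
            cover = nearest_cover(net)
            if cover is not None and routes[cover] == routes[net]:
                del routes[net]

        # #2: repeatedly merge sibling prefixes sharing a next hop
        merged = True
        while merged:
            merged = False
            for net in list(routes):
                if net.prefixlen == 0:
                    continue
                parent = net.supernet()
                sibling = next(s for s in parent.subnets() if s != net)
                if (sibling in routes and parent not in routes
                        and routes[sibling] == routes[net]):
                    routes[parent] = routes.pop(net)
                    del routes[sibling]
                    merged = True
                    break
        return {str(n): nh for n, nh in routes.items()}

    print(minimize({"10.10.4.0/22": "A", "10.10.7.0/24": "A",
                    "10.10.12.0/23": "B", "10.10.14.0/23": "B"}))
    # {'10.10.4.0/22': 'A', '10.10.12.0/22': 'B'}

Forwarding behavior is unchanged; only the number of entries drops.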

Some additional points:  

-- This isn't that hard to implement.  Once you have a FIB and
primitives for manipulating it, it's not especially difficult to extend
them to also maintain a minimal-size-FIB.

-- The key is that aggregation need not be limited to identical routes. 
Any two routes *that have the same next hop from the perspective of the
router doing the aggregating* can be aggregated in TCAM.  DFZ routers
have half a million routes, but comparatively few direct adjacencies. 
So lots of opportunity to aggregate. 

-- What I've described above gives forwarding behavior *identical* to
unaggregated forwarding behavior, but with fewer TCAM entries. 
Obviously, you can get further reductions if you're willing to accept
different behavior (for example, ignoring more specifics when there's a
less specific, even if the less specific has a different next hop).

(This might or might not be what Randy was talking about.  Maybe he's
looking for knobs to allow some routes to be excluded from TCAM at the
expense of changing forwarding behavior.  But even without any such
things, there's still opportunity to meaningfully reduce usage just by
handling the cases where forwarding behavior will not change.)

 -- Brett


Re: Large DDoS, small extortion

2014-05-23 Thread Brett Frankenberger
On Fri, May 23, 2014 at 02:09:18PM -0400, Barry Shein wrote:
 
> On May 24, 2014 at 00:38 rdobb...@arbor.net (Roland Dobbins) wrote:
> > Never, under any circumstances, pay.  Not even if you've persuaded
> > the Men from U.N.C.L.E. to help you, and they suggest you pay
> > because they think they can trace the money, do not pay.
> 
> Ok, you're recommending $VICTIM ignores or resists the advice of law
> enforcement authorities, right?

Law enforcement and victims have different objectives.  Law enforcement
wants to find the criminal, gather sufficient evidence to prove their
guilt, then prosecute them.  More attacks helps law enforcement.

The victims, in general, want the attacks to stop.

> I just don't know and would suggest reliance on case studies and
> experienced professionals.

Agreed.  But make sure the experienced professionals you talk with have
interests that are aligned with yours.

(Not arguing pay or don't pay here.  I don't know, either.  My
instincts say don't pay but I have no data.)

 -- Brett


Re: Need trusted NTP Sources

2014-02-09 Thread Brett Frankenberger
On Sun, Feb 09, 2014 at 03:45:19PM -0500, Jay Ashworth wrote:
> - Original Message -
> From: Saku Ytti s...@ytti.fi
> 
> > > That's only true if the two devices have common failure modes,
> > > though, is it not?
> > 
> > No, we can assume arbitrary fault which causes NTP to output bad time. With
> > two NTP servers it's more likely that any one of them will start doing
> > that than with one alone. And if any of the two start doing it, you don't
> > know which one.
> 
> Hey, waitaminnit!  I saw you palm that card.  :-)
> 
> If I'm locked to 2 coherent upstreams and one goes insane, I'm going to
> know which one it is, because the other one will still match what I already
> have running, no?

If it suddenly goes insane as a step function?  Sure.  But if the one
you've selected for synchronization starts drifting off true time very
slowly, it will take your clock with it, and then ultimately the other
one (that is actually the good clock) will appear to be the insane clock.
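
A toy illustration of the failure mode (all numbers invented):

    # our clock is steered to upstream A; A drifts 1 ms/hour; B is true
    DRIFT_PER_HOUR = 0.001
    for hour in (0, 24, 48, 72):
        offset_from_a = 0.0                    # we track A, so A looks sane
        offset_from_b = hour * DRIFT_PER_HOUR  # B seems to drift away
        print(f"t+{hour:2d}h: A off by {offset_from_a*1000:3.0f} ms, "
              f"B off by {offset_from_b*1000:3.0f} ms")
    # after three days, the good clock (B) is the one that looks insane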

 -- Brett



Re: Updated ARIN allocation information

2014-01-31 Thread Brett Frankenberger
On Fri, Jan 31, 2014 at 05:10:51AM -0800, Owen DeLong wrote:
 
> > A /8 slot costs as much as a /28 slot to hold, process, etc.  A routing
> > slot is a routing slot.  The *only* reason this isn't a legal problem
> > at the moment is people can still get /24s.  The moment /24's aren't
> > readily available and they are forced into using this range anyone
> > filtering on /24 in this range is leaving themselves open to lawsuits.
> 
> On what basis? How do you have the right to force me to carry your route on
> my network? Especially in light of the recent strike-down of the net
> neutrality rules?
> 
> > Now as this range is allocated for transition to IPv6 a defence for
> > edge networks may be "we can reach all their services over IPv6"
> > but that doesn't work for transit providers.  Eyeball networks would
> > need to ensure that all their customers had access to IPv6 and even
> > that may not be enough.
> 
> Please point to the law which requires a transit provider to provide transit
> to every tiny corner of every internet. 

Speaking only with respect to the US:

I am aware of no such law.

However, I am aware of a law that makes it unlawful for a bunch of
large providers who already have large blocks of space to collude to
prevent new entrants into the market by refusing to carry their routes.

If the guy with the /28 he can't route alleges that that's what's
happening, there are lots of arguments on the other side the ISPs with
the filters could make.  They've been filtering at /24 for a lot longer
than it started to seriously harm new entrants into the market ...
there was never any formal agreement to filter at /24; it just happened
(but everyone ended up filtering at /24 ... that wasn't just
coincidence) ...  there are real technical reasons for limiting FIB
size ... and so on.  I don't know who would win the anti-trust lawsuit,
but I wouldn't consider it a slam dunk for the ISPs doing the
filtering.

I don't expect there to actually be such a lawsuit.  Among other
things, buying a /24 will likely be cheaper than litigating this, so
the only way it gets to trial is an organization litigating on
principle.  And, as I said, I'm not convinced the filtering providers
lose if there is one.  But anytime the big guys collectively have a
policy that keeps out the new entrants, there's anti-trust exposure.

-- Brett



Re: Headscratcher of the week

2013-05-31 Thread Brett Frankenberger
On Fri, May 31, 2013 at 03:25:22PM -0700, Mike wrote:
> Gang,
> 
>   In the interest of sharing 'the weird stuff' which makes the job of
> being an operator ... uh, fun? is that the right word?..., I would
> like to present the following two smokeping latency/packetloss
> plots, which are by far the weirdest I have ever seen.
> 
>   These plots are from our smokeping host out to a customer location.
> The customer is connected via DSL and they run PPPoE over it to
> connect with our access concentrator. There are about 5 physical
> infrastructure hops between the host and customer; The switch, the
> BRAS, the Switch again, and then directly to the DSLAM and then
> customer on the end.
> 
> 
> The 10 day plot:
> http://picpaste.com/10_Day_graph-YV3IdvRV.png
> 
> The 30 hour plot:
> http://picpaste.com/30_hour_graph-DrwzfhYJ.png
> 
>   How can you possibly have consistent increase in latency like that?
> I'd love to hear theories (or offers of beer, your choice!).

Theory:

There's a stateful device (firewall, NAT, something else) in the path
that is creating state for every ICMP Echo Request it forwards and
(possibly) searching that state when forwarding the ICMP Echo Reply
responses, and never destroying that state, and either the create
operation or the search operation (or both) takes an amount of time
that is a linear function of the number of state entries.
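
A toy model of that theory (the per-entry cost is invented; only the
shape of the curve matters):

    state = []                    # per-ping state, never destroyed
    COST_US = 0.5                 # linear search cost per entry, microseconds

    def handle_ping(seq):
        latency_us = len(state) * COST_US   # search all existing state
        state.append(seq)                   # ...then add one more entry
        return latency_us

    for seq in range(400_000):
        added = handle_ping(seq)
        if seq % 100_000 == 0:
            print(f"ping {seq}: ~{added/1000:.0f} ms of added latency")
    # latency climbs linearly with ping count, like the plots above

The steady climb would reset only when the device reboots or its state
table is flushed.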

 -- Brett




Re: Variety, On The Media, don't understand the Internet

2013-05-15 Thread Brett Frankenberger
On Tue, May 14, 2013 at 09:14:56PM -0400, Jean-Francois Mezei wrote:
> On 13-05-14 20:55, Patrick W. Gilmore wrote:
> 
> > Since when is peering not part of the Internet? 
> 
> Yes, one can argue that a device with an IP address routable from the
> internet is part of the internet.
> 
> But when traffic from a cache server flows directly into an ISP's
> intranet to end users, it doesn't really make use of the Internet nor
> does it cost the ISP transit capacity.

So it's only "on the Internet" if it uses a provider's transit capacity?
So if ISP1 and ISP2 are customers of ISP3 (and ISP3 is the only
provider-to-provider connection for ISP1 and ISP2), then traffic
between a customer of ISP1 and a customer of ISP2 is "on the Internet"? 
What if ISP1 and ISP2 then setup a private peering connection?  Is
traffic between ISP1 and ISP2 still "on the Internet", or is that
reserved for traffic over paid transit?

And if that's still "on the Internet", what happens if ISP1 then buys
ISP2?  Does the traffic between them cease to be "on the Internet" now
that it's the same company?

And, if you define "on the Internet" to mean "goes over paid transit",
then the only traffic that is "on the Internet" is traffic to ISPs who
have paid transit.  Traffic between end customers of two Tier 1
providers (defined as providers who don't pay for any transit for the
purposes of this message) would never be "on the Internet"?  

(I assume "transit", if that's your threshold, is transit paid for by
a provider.  End user connections are essentially paid transit, even
though it's not typically called that, especially at the lower end.)

The point is:  I don't think your definition works.  Could you post exactly
what your definition of "on the Internet" is (as opposed to just
enumerating examples of things you think are on the Internet and things
you think are not on the Internet)?

 -- Brett



Re: 100.100.0.0/24

2012-10-06 Thread Brett Frankenberger
On Fri, Oct 05, 2012 at 10:24:18AM -0500, Ben Bartsch wrote:
> use this:
> 
> http://www.team-cymru.org/Services/Bogons/bgp.html

Please tell me how I can configure my router to use that feed to
automatically reject any bogon advertisements I receive from other BGP
neighbors.

> On Fri, Oct 5, 2012 at 10:18 AM, Jared Mauch ja...@puck.nether.net wrote:
> 
> > I suspect not everyone has updated their 'bogon' filters.  I found a very
> > minor gap in our filters, we are working on correcting it.

 -- Brett



Re: The Department of Work and Pensions, UK has an entire /8

2012-09-19 Thread Brett Frankenberger
On Wed, Sep 19, 2012 at 06:46:54PM -0700, Jo Rhett wrote:
 
> For these networks to have gateways which connect to the outside, you
> have to have an understanding of which IP networks are inside, and
> which IP networks are outside. Your proxy client then forwards
> connections to outside networks to the gateway. You can't use the
> same networks inside and outside of the gateway. It doesn't work. The
> gateway and the proxy clients need to know which way to route those
> packets.

It works fine if the gateway has multiple routing tables (VRF or
equivalent) and application software that is multiple-routing-table
aware.

Not disagreeing at all with the point many are making that "not on the
Internet" doesn't mean "not in use".  Many people for good reason
decide to use globally unique space on networks that are not connected
to the Internet.  But the idea that you *can't* tie two networks
together with an application gateway unless the address space is unique
is an overstatement.  It's just harder.

 -- Brett



Re: raging bulls

2012-08-08 Thread Brett Frankenberger
On Wed, Aug 08, 2012 at 08:52:51AM -0500, Naslund, Steve wrote:
> It seems to me that all the markets have been doing this the wrong way.
> Would it now be more fair to use some kind of signed timestamp and
> process all transactions in the order that they originated?  Perhaps
> each trade could have a signed GPS tag with the absolute time on it. It
> would keep everyone's trades in order no matter how latent their
> connection to the market was.  All you would have to do is introduce a
> couple of seconds delay to account for the longest circuit and then take
> them in order.  They could certainly use less expensive connections and
> ensure that international traders get a fair shake.

This isn't about giving international traders a fair shake.  This sort
of latency is only relevant to high speed program trading, and the
international traders can locate their servers in NYC just as easily as
the US-based traders.

What it's about is allowing traders to arbitrage between markets.  When
product A is traded in, say, London, and product B is traded in New
York, and their prices are correlated, you can make money if your
program running in NY can learn the price of product B in London a few
milliseconds before the other guy's program.  And you can make money if
your program running in London can learn the price of product A in NY a
few milliseconds before the other guy's program.

Even if you execute the trades based on a GPS timestamp (I'm ignoring
all the logistics of preventing cheating here), it doesn't matter,
because the computer that got the information first will make the
trading decision first.

 -- Brett



Re: raging bulls

2012-08-08 Thread Brett Frankenberger
On Wed, Aug 08, 2012 at 09:08:18AM -0500, Naslund, Steve wrote:
> Also, we are only talking about a delay long enough to satisfy the
> longest circuit so you could not push your timestamp very far back and
> would have to get the fake one done pretty quickly in order for it to be
> worthwhile.  The real question is could you fake a cryptographic
> timestamp fast enough to actually gain time on the system.  Possibly but
> it would be a very tall order.

Why would generating a fake timestamp take longer than generating a
real one?  

I assume you're proposing an architecture where if I were a trader, I'd
have to buy a secure physical box from a third party trusted by the
market, and I'd send my trade to that box and then it would timestamp
it and sign it and then I'd send it to the market.

Obvious failure modes include: (a) spoofing the GPS signal received by
the box, so the box thinks the current time is some number of
milliseconds before the actual time (how to do this is well understood
and solved, and because GPS is one-way, even if the satellites started
signing their time updates, that would only prevent spoofing times in
the future, it wouldn't prevent spoofing times on the past), and (b)
generating 10 trades at time X, then holding on to the signed messages
until X+Y, and then only sending the ones that are profitable based on
the new information you learned between (X) and (X+Y).

Yes, there are some solutions.  But most of those solutions have
problems of their own.  It's overwhelmingly difficult. (But also
irrelevant, as I noted in my other post).

If you think this through to what a working implementation would look
like in detail, I think the failures become more obvious ...

 -- Brett



Re: using reserved IPv6 space

2012-07-15 Thread Brett Frankenberger
On Sat, Jul 14, 2012 at 09:48:49PM -0400, Robert E. Seastrom wrote:
 
> Actually, that's one of the most insightful meta-points I've seen on
> NANOG in a long time.
> 
> There is a HUGE difference between IPv4 and IPv6 thinking.  We've all
> been living in an austerity regime for so long that we've completely
> forgotten how to leave parsimony behind.  Even those of us who worked
> at companies that were summarily handed a Class B when we mumbled
> something about internal subnetting have a really hard time
> remembering how to act when we suddenly don't have to answer for every
> single host address and can design a network to conserve other things
> (like our brain cells).

Addresses no longer being scarce is a significant shift, but this
thread is about a lot more than that.  I didn't get the feeling that
the original poster was wanting to use non-global addresses on his
internal links because he was concerned about running out.  He also
wanted to do so for purposes of security.

And that's not a paradigm shift between v4 and v6.  Obscurity /
non-global address magic was pretend security in v4 and it's pretend
security in v6.  People who used RFC1918 space where they didn't need
global uniqueness in v4 often did so initially because of scarcity (and
were often making a completely reasonable decision in doing so), but
they then falsely imputed a security benefit to that.  

If we can leverage the v6 migration to get out of the thinking that some
addresses are magically more secure than others, then that's probably a
win, but it's not a fundamental difference between v4 and v6.  It's not
that correct IPv4 thinking is "1918 is more secure" but "the security
model of v6 is different."  1918 was never more secure.
 -- Brett



Re: F-ckin Leap Seconds, how do they work?

2012-07-04 Thread Brett Frankenberger
On Tue, Jul 03, 2012 at 04:54:24PM -0400, valdis.kletni...@vt.edu wrote:
> On Tue, 03 Jul 2012 21:49:40, Peter Lothberg said:
> 
> > Leapseconds can be both positive and negative, but up to now, the
> > earth has only slowed down, so we have added seconds.
> 
> That's what many people believe, but it's not exactly right.  Leap seconds
> are added for the exact same reason leap days are - the earth's rotation
> isn't a clean multiple of the year.  We know we need to stick in an entire
> leap day every 4 years or so, then add the 400 hack to get it closer. At
> that point, it's *really* close, to the point where just shimming in a second
> every once in a while is enough to get it back in sync.
> 
> The earth's slowdown (or speedup) is measured by *how often* we
> need to add leap seconds.  If we needed to add one every 3 years, but
> the frequency rises to once every 2.5 years, *that* indicates slowing.
> In other words, the slowdown or speedup is the first derivative of
> the rate that UT and TAI diverge - if the earth rotated at constant
> speed, the derivative would be zero, and we'd insert leap seconds on
> a nice predictable schedule.

Leap Seconds and Leap Years are completely unrelated and solve two
completely different problems.  

Leap Seconds exist to adjust time to match the Earth's actual rotation. 
They exist because the solar day is not exactly 24 hours.

Leap Years exist to adjust time to match the Earth's actual revolution
around the Sun.  They exist because that time period isn't exactly
365 days.

Without leap seconds, the sun stops being overhead at noon.  Without
leap years, the equinoxes and solstices start drifting to different
days. 

 -- Brett



Re: F-ckin Leap Seconds, how do they work?

2012-07-04 Thread Brett Frankenberger
On Wed, Jul 04, 2012 at 05:02:02PM -0400, valdis.kletni...@vt.edu wrote:
> On Wed, 04 Jul 2012 12:44:40 -0500, Brett Frankenberger said:
> 
> > Leap Seconds and Leap Years are completely unrelated and solve two
> > completely different problems.
> > 
> > Leap Seconds exist to adjust time to match the Earth's actual rotation.
> > They exist because the solar day is not exactly 24 hours.
> > 
> > Leap Years exist to adjust time to match the Earth's actual revolution
> > around the Sun.  They exist because that time period isn't exactly
> > 365 days.
> 
> Actually, it's the same exact problem - an astronomical value isn't
> exactly conformant to the civil value, and thus adjustments are needed.

No.  Leap Years arise because the solar year is not an integral
multiple of the solar day. 

Yes, you can argue that leap years exist because the Earth doesn't
revolve around the sun in 86400*365 seconds, but that misses the
underlying point that since well before civil time differed from solar
time, people have defined a year in terms of days, preferring not to
have years starting at midnight, then dawn, then noon, then dusk, and so
on.  Leap years have existed since well before civil time and solar
time were any different.

> And you missed the bigger point - that leap seconds aren't needed because the
> earth is slowing any more than leap days are needed because the year is
> getting longer.  If an actual siderial day was a fixed unchanging 86400.005
> seconds long, you'd still need a leap second every 200 days.  *SLOWING* would
> be indicated by the every 200 days changing to every 175 or every 150.

I assume you meant "solar" instead of "siderial" -- the sidereal day
hasn't been 86400.anything seconds ever.  And if the mean solar day
were unchanging, then it would be 86400 civil seconds today, just like
it was (by definition) in 1900.  The civil second was initially
defined as 1/86400 of the mean solar day in 1900 (then later redefined
based on radiation from the cesium atom, but the redefinition didn't
change the length of the second by enough to matter for the purposes of
this discussion).  The only reason the mean solar day today isn't 86400
seconds is because the Earth's rotation has slowed since 1900 and we've
elected to not redefine the length of a second.

Yes, technically, you're right that if the Earth's rotation rate were
constant and were such that the mean solar day were 86400.005 seconds
long, we'd still need leap seconds.  But that's a highly unlikely
counterfactual hypothetical, because, again, if the Earth weren't
slowing, then the 1/86400-of-mean-solar-day definition of the second
would still hold.  There's virtually no chance that on a hypothetical
Earth that wasn't slowing, the population would have decided that the
second should be 1/86400.005 of a solar day.
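
The arithmetic behind the "every 200 days" figure, for reference:

    excess = 86400.005 - 86400   # extra rotation time per day, seconds
    print(round(1 / excess))     # ~200 days to accumulate a full second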

 -- Brett



Re: FYI Netflix is down

2012-07-02 Thread Brett Frankenberger
On Mon, Jul 02, 2012 at 09:09:09AM -0700, Leo Bicknell wrote:
> In a message written on Mon, Jul 02, 2012 at 11:30:06AM -0400, Todd Underwood wrote:
> > from the perspective of people watching B-rate movies:  this was a
> > failure to implement and test a reliable system for streaming those
> > movies in the face of a power outage at one facility.
> 
> I want to emphasize _and test_.
> 
> Work on an infrastructure which is redundant and designed to provide
> 100% uptime (which is impossible, but that's another story) means
> that there should be confidence in a failure being automatically
> worked around, detected, and reported.
> 
> I used to work with a guy who had a simple test for these things,
> and if I was a VP at Amazon, Netflix, or any other large company I
> would do the same.  About once a month he would walk out on the
> floor of the data center and break something.  Pull out an ethernet.
> Unplug a server.  Flip a breaker.

Sounds like something a VP would do.  And, actually, it's an important
step: make sure the easy failures are covered. 

But it's really a very small part of resilience.  What happens when one
instance of a shared service starts performing slowly?  What happens
when one instance of a redundant database starts timing out queries or
returning empty result sets?  What happens when the Ethernet interface
starts dropping 10% of the packets across it?  What happens when the
Ethernet switch linecard locks up and stops passing dataplane traffic,
but link (physical layer) and/or control plane traffic flows just fine? 
What happens when the server kernel panics due to bad memory, reboots,
gets all the way up, runs for 30 seconds, kernel panics, lather, rinse,
repeat?

Reliability is hard.  And if you stop looking once you get to the point
where you can safely toggle the power switch without causing an impact,
you're only a very small part of the way there.  
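
If I were automating that VP's walk across the data center floor, I'd
want the harness to inject the gray failures too, not just the hard
ones.  A minimal sketch of the idea (Python; the fault list and the
stub injectors are invented for illustration -- real injectors would
shell out to kill, tc, iptables, and the like):

    import random

    # Each entry: (description, injector).  The injectors are stubs so
    # the sketch stays runnable; only the descriptions matter here.
    FAULTS = [
        ("hard: power off one server",       lambda: print("power off host A")),
        ("hard: pull one Ethernet uplink",   lambda: print("down eth0")),
        ("gray: add 500 ms one-way latency", lambda: print("netem delay 500ms")),
        ("gray: drop 10% of packets",        lambda: print("netem loss 10%")),
        ("gray: crash-loop a daemon",        lambda: print("restart loop on svc X")),
    ]

    def monthly_drill():
        # Inject one fault at random and say nothing -- whether the
        # monitoring and failover actually notice is the real test.
        desc, inject = random.choice(FAULTS)
        print("injecting:", desc)
        inject()

    monthly_drill()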

 -- Brett



Re: FYI Netflix is down

2012-06-30 Thread Brett Frankenberger
On Sat, Jun 30, 2012 at 01:19:54PM -0700, Scott Howard wrote:
 On Sat, Jun 30, 2012 at 12:04 PM, Todd Underwood toddun...@gmail.comwrote:
 
  This was not a cascading failure.  It was a simple power outage
 
  Cascading failures involve interdependencies among components.
 
 
 Not always.  Cascading failures can also occur when there is zero
 dependency between components.  The simplest form of this is where one
 environment fails over to another, but the target environment is not
 capable of handling the additional load and then fails itself as a result
 (in some form or other, but frequently different to the mode of the
 original failure).

That's an interdependency.  Environment A is dependent on environment B
being up and pulling some of the load away from A; B is dependent on A
being up and pulling some of the load away from B.
"A crashes for reason X -> load shifts to B -> B crashes due to load"
is a classic cascading failure.  And it's not limited to software
systems.  It's how most major blackouts occur (except with more than
three steps in the cascade, of course).
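
The dynamic shows up in even a toy model (Python; the capacities and
loads are invented numbers): environments sized with ~30% headroom
each, no functional dependency between them, and one failure still
takes down the lot.

    def cascade(n=3, capacity=100.0, load_each=70.0):
        # One environment fails for reason X; survivors split the whole
        # offered load; any survivor pushed past capacity fails too.
        total_load = load_each * n
        up = n - 1
        while up > 0:
            share = total_load / up
            if share <= capacity:
                return "stable with %d up (%.0f units each)" % (up, share)
            up -= 1          # the overloaded survivor trips offline too
        return "total outage -- the classic cascade"

    print(cascade())   # 210 units over 2 survivors = 105 > 100 -> cascade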

 -- Brett



Re: Dear Linkedin,

2012-06-10 Thread Brett Frankenberger
On Sun, Jun 10, 2012 at 04:34:55PM -0400, valdis.kletni...@vt.edu wrote:
 On Sun, 10 Jun 2012 12:29:46 -0700, Owen DeLong said:
  It is far preferable for the merchant to request ID and verify that the
  signature matches the ID _AND_ the picture in the ID matches the customer.
 
 Maybe from the anti-fraud standpoint, but not necessarily from the merchant's 
 viewpoint.
 
 It's only better if nobody's standing in line.  If matching the ID
 and signature and picture reduces fraud from 4% to 3%, but increases
 the time to serve the customer by 5%, you're losing money due to
 fewer sales/hour.

For the most part, fraud in a card present transaction isn't eaten by
the merchant.

But the same reasoning still applies.  The card issuers don't want you
to have to show ID, because you might decide it's too much trouble, and
just use some other method to pay.

Eliminating fraud isn't an objective of card issuers.  Making money is.
Fraud reduction is only done when the savings from the reduced fraud
exceed both the cost of the fraud-prevention measure and any revenue
that is lost because of inconveniencing customers.  And, sometimes,
they'll choose to accept a higher rate of fraud if it will generate
enough revenue to offset it ... consider how many places you can now
avoid signing for small dollar purchases.  The cost of accepting the
additional fraud was considered worth it in comparison to the revenue
generated from getting people to use their cards for small
transactions.
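
Valdis's hypothetical numbers upthread actually make the point.  Per
$100 of baseline sales (Python; all figures are his invented ones, not
real data):

    baseline   = 100.0
    with_id    = baseline * 0.95      # ID checks lose 5% of sales
    loss_no_id = baseline * 0.04      # fraud eaten at 4% without checks
    loss_id    = with_id * 0.03       # fraud eaten at 3% with checks

    print(baseline - loss_no_id)      # 96.0  net without ID checks
    print(with_id - loss_id)          # ~92.15 net with ID checks

With those numbers, checking ID is a money-loser even before you count
the cost of doing the checking.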

 -- Brett



Re: Dear Linkedin,

2012-06-10 Thread Brett Frankenberger
On Sun, Jun 10, 2012 at 03:47:20PM -0700, Owen DeLong wrote:
 
 On Jun 10, 2012, at 3:06 PM, Brett Frankenberger wrote:
  
  Eliminating fraud isn't an objective of card issuers.  Making money is.
  Fraud reduction is only done when the savings from the reduced fraud
  exceeds both the cost of the fraud preventing measure and any revenue
  that is lost because of inconveniencing customers.  And, sometimes,
  they'll choose to accept a higher rate of fraud if it will generate
  enough revenue to offset it ... consider how many places you can now
  avoid signing for small dollar purchases.  The cost of accepting the
  additional fraud was considered worth it in comparison to the revenue
  generated from getting people to use their cards for small
  transactions.
 
 Right, but eliminating fraud should be an objective of consumers
 because ultimately, we are the ones paying for it regardless of who
 eats it on the actual transaction.

That assumes that minimizing cost is an objective of consumers.  In
general, it's not.  Maximizing utility is.

For some, minimizing cost is a major part of that. 

For me, I routinely trade money for convenience.  And I'll gladly pay a
percentage point or two more in exchange for all my credit transactions
being handled more quickly.  I'm far from the only one.  Credit card
companies keep making it easier to use their card, because they've
found it more profitable to do so.  There doesn't seem to be a market
for a card that is harder to use, but saves consumers a little money
through reduced fraud.

 -- Brett



Re: IPv6 day and tunnels

2012-06-04 Thread Brett Frankenberger
On Mon, Jun 04, 2012 at 07:39:58AM -0700, Templin, Fred L wrote:

 https://datatracker.ietf.org/doc/draft-generic-v6ops-tunmtu/
 
 3) For IPv6 packets between 1281-1500, break the packet
into two (roughly) equal-sized pieces and admit each
piece into the tunnel. (In other words, intentionally
violate the IPv6 deprecation of router fragmentation.)
Assumption is that the final destination can reassemble
at least 1500, and that the 32-bit Identification value
inserted by the tunnel provides sufficient assurance
against reassembly mis-associations.

Fragmenting the outer packet, rather than the inner packet, gets around
the problem of router fragmentation of packets.  The outer packet is a
new packet and there's nothing wrong with the originator of that packet
fragmenting it.

Of course, that forces reassembly on the other tunnel endpoint, rather
than on the ultimate end system, which might be problematic with some
endpoints and traffic volumes.

(With IPv4 in IPv4 tunnels, this is what I've always done.  1500 byte
MTU on the tunnel, fragment the outer packet, let the other end of the
tunnel do the reassembly.  Not providing 1500 byte end-to-end (at least
within the network I control) for IPv4 has proven to consume lots of
troubleshooting time; fragmenting the inner packet doesn't work unless
you ignore the DF bit that is typically set by TCP endpoints who want
to do PMTU discovery.)
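
The arithmetic for that IPv4-in-IPv4 case, for concreteness (Python;
assumes a 20-byte outer header with no options): a 1500-byte inner
packet becomes a 1520-byte outer packet, which fragments into one
near-full piece and one tiny one.

    def ipv4_fragments(payload_len, mtu=1500, hdr=20):
        # All fragments except the last must carry a multiple of 8
        # payload bytes (RFC 791 fragment offsets are in 8-byte units).
        max_frag = (mtu - hdr) // 8 * 8       # 1480 for a 1500 MTU
        sizes = []
        while payload_len > mtu - hdr:
            sizes.append(max_frag)
            payload_len -= max_frag
        sizes.append(payload_len)
        return sizes

    # Outer payload is the whole 1500-byte inner packet:
    print(ipv4_fragments(1500))   # [1480, 20] -> 1500- and 40-byte frames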
 
 I presume no one here would object to clauses 1) and 2).
 Clause 3) is obviously a bit more controversial - but,
 what harm would it cause from an operational standpoint?

 -- Brett



Re: DNS anycasting - multiple DNS servers on same subnet Vs registrar/registry policies

2012-05-28 Thread Brett Frankenberger
On Mon, May 28, 2012 at 09:32:29PM +0200, Stephane Bortzmeyer wrote:
 On Tue, May 29, 2012 at 12:21:10AM +0530,
  Anurag Bhatia m...@anuragbhatia.com wrote 
  a message of 28 lines which said:
 
  I know few registry/registrars which do not accept both (or all)
  name servers of domain name on same subnet.
 
 Since my employer is one of these registries, let me mention that I
 fully agree with David Conrad here.

How does your employer know if two nameservers (two IP addresses) are
on the same subnet?

 -- Brett



Re: SORBS?!

2012-04-06 Thread Brett Frankenberger
On Thu, Apr 05, 2012 at 06:45:30PM +0100, Nick Hilliard wrote:
 On 05/04/2012 17:48, goe...@anime.net wrote:
  But they will care about a /24.
 
 I'm curious as to why they would want to stop at /24.  If you're going to
 take the shotgun approach, why not blacklist the entire ASN?

It's a balancing act.  Too little collateral damage and the provider
hosting the spammer isn't motivated to act.  Too much collateral
damage, and no one uses your blacklist because using it generates too
many user complaints, and then your list doesn't motivate anyone to do
anything because there's no real downside to being on the list.  Just
the right amount of collateral damage, and your list gets widely used,
and causes enough pain on the other users of the /24 that they clean
things up.

I'm not arguing for or against any particular amount of collateral
damage.  Just commenting on the effects of varying amounts of
collateral damage.

 -- Brett



Re: Quad-A records in Network Solutions ?

2012-03-28 Thread Brett Frankenberger
On Wed, Mar 28, 2012 at 04:13:53PM -0300, Carlos Martinez-Cagnazzo wrote:
 I'm not convinced. What you mention is real, but the code they need is
 little more than a regular expression that can be found on Google and a
 20-line script for testing lames. And a couple of weeks of testing, and
 I think I'm exaggerating.
 
 If they don't want to offer support for it, they can just put up some
 disclaimer.
 
 regards,
 
 Carlos
 
 
 On 3/28/12 3:55 PM, David Conrad wrote:
  On Mar 28, 2012, at 11:47 AM, Carlos Martinez-Cagnazzo wrote:
  I'm not a fan of conspiracy theories, but, c'mon. For a provisioning
  system, an  record is just a fragging string, just like any other
  DNS record. How difficult to support can it be ?
 
  Of course it is more than a string. It requires touching code, (hopefully) 
  testing that code, deploying it, training customer support staff to answer 
  questions, updating documentation, etc. Presumably Netsol did the 
  cost/benefit analysis and decided the potential increase in revenue 
  generated by the vast hordes of people demanding IPv6 (or the potential 
  lost in revenue as the vast hordes transfer away) didn't justify the 
  expense. Simple business decision.
 
  Regards,
  -drc
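
For what it's worth, the validation half really is small.  A minimal
sketch (Python; the lameness test would need an actual DNS query on
top of this, which is not included here):

    import socket

    def valid_aaaa(value):
        # An AAAA record's RDATA is an IPv6 address; inet_pton is a
        # sturdier check than a hand-rolled regular expression.
        try:
            socket.inet_pton(socket.AF_INET6, value)
            return True
        except (OSError, ValueError):
            return False

    print(valid_aaaa("2001:db8::1"))    # True
    print(valid_aaaa("2001:db8::g1"))   # False
    print(valid_aaaa("192.0.2.1"))      # False -- that's an A value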
 
 
 



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-23 Thread Brett Frankenberger
On Wed, Nov 23, 2011 at 05:45:08PM -0500, Jay Ashworth wrote:
 
 Yeah.  But at least that's stuff you have a hope of managing.  Firmware
 underwent bit rot is simply not visible -- unless there's, say, signature 
 tracing through the main controller.

I can't speak to traffic light controllers directly, but at least some
vital logic controllers do check signatures of their firmware and
programming and will fail into a safe configuration if the
signatures don't validate.

 -- Brett



Re: First real-world SCADA attack in US

2011-11-22 Thread Brett Frankenberger
On Mon, Nov 21, 2011 at 11:16:14PM -0500, Jay Ashworth wrote:
 
 Precisely.  THe case in point example these days is traffic light
 controllers.
 
 I know from traffic light controllers; when I was a kid, that was my dad's
 beat for the City of Boston.  Being a geeky kid, I drilled the guys in the
 signal shop, the few times I got to go there (Saturdays, and such).
 
 The old design for traffic signal controllers was that the relays that drove
 each signal/group were electrically interlocked: the relay that made N/S able 
 to engage it's greens *got its power from* the relay that made E/W red; if 
 there
 wasn't a red there, you *couldn't* make the other direction green.
 
 These days, I'm not sure that's still true: I can *see* the signal
 change propagate across a row of 5 LED signals from one end to the
 other.  Since I don't think the speed of electricity is slow enough
 to do that (it's probably on the order of 5ms light to light), I have
 to assume that it's processor delay as the processor runs a display
 list to turn on output transistors that drive the LED light heads.
 
 That implies to me that it is *physically* possible to get opposing greens
 (which we refer to, in technical terms as traffic fatalities) out of the
 controller box... in exactly the same way that it didn't used to be.
 
 That's unsettling enough that I'm going to go hunt down a signal mechanic
 and ask.

The typical implementation in a modern controller is to have a separate
conflict monitor unit that will detect when conflicting greens (for
example) are displayed, and trigger an (also separate) flasher unit that
will cause the signal to display a flashing red in all directions
(sometimes flashing yellow for one higher volume route). 

So the controller would output conflicting greens if it failed or was
misprogrammed, but the conflict monitor would detect that and restore
the signal to a safe (albeit flashing, rather than normal operation)
state.
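
The conflict check itself is tiny; a sketch of the idea (Python, with
an invented two-phase permissive table -- real monitors watch the
actual field outputs in hardware):

    # Pairs of phases allowed to be green simultaneously; anything
    # else showing green together is a conflict.
    PERMISSIVE = {frozenset(["N", "S"]), frozenset(["E", "W"])}

    def monitor(greens):
        for a in greens:
            for b in greens:
                if a != b and frozenset([a, b]) not in PERMISSIVE:
                    return "conflict: drop to all-red flash"
        return "ok"

    print(monitor({"N", "S"}))   # ok
    print(monitor({"N", "E"}))   # conflict: drop to all-red flash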

 -- Brett



Re: First real-world SCADA attack in US

2011-11-22 Thread Brett Frankenberger
On Tue, Nov 22, 2011 at 10:16:56AM -0500, Jay Ashworth wrote:
 - Original Message -
  From: Brett Frankenberger rbf+na...@panix.com
 
  The typical implementation in a modern controller is to have a separate
  conflict monitor unit that will detect when conflicting greens (for
  example) are displayed, and trigger a (also separate) flasher unit that
  will cause the signal to display a flashing red in all directions
  (sometimes flashing yellow for one higher volume route).
  
  So the controller would output conflicting greens if it failed or was
  misprogrammed, but the conflict monitor would detect that and restore
  the signal to a safe (albeit flashing, rather than normal operation)
  state.
 
 ... assuming the *conflict monitor* hasn't itself failed.
 
 There, FTFY.
 
 Moron designers.

Yes, but then you're two failures deep -- you need a controller
failure, in a manner that creates an unsafe condition, followed by a
failure of the conflict monitor.  Lots of systems are vulnerable to
multiple failure conditions.

Relays can have interesting failure modes also.  You can only protect
for so many failures deep.

 -- Brett



Re: OT: Traffic Light Control (was Re: First real-world SCADA attack in US)

2011-11-22 Thread Brett Frankenberger
On Tue, Nov 22, 2011 at 11:16:54AM -0500, Jay Ashworth wrote:
 - Original Message -
  From: Owen DeLong o...@delong.com
 
  As in all cases, additional flexibility results in additional
  ability to make mistakes. Simple mechanical lockouts do not scale
  to the modern world.  The benefits of these additional capabilities
  far outweigh the perceived risks of programming errors.

Relay logic has the potential for programming (i.e. wiring) errors
also.

It's not fair to compare a conflict monitor to properly programmed
relay logic.  We either have to include the risk of programming
failures (which means improper wiring in the case of relay logic) in
both cases, or exclude programming failures in both cases.

 The perceived risk in this case is multiple high-speed traffic fatalities.

Some of the benefits of the newer systems are safety related also.
 
 I believe we rank that pretty high; it's entirely possible that a traffic
 light controller is the most potentially dangerous artifact (in terms of 
 number of possible deaths) that the average citizen interacts with on a 
 daily basis.

Some other things to consider.

Relays are more likely to fail.  Yes, the relay architecture was
carefully designed such that most failures would not result in
conflicting greens, but that's not the only risk.  When the traffic
signal is failing, even if it's failing with dark or red in every
direction, the intersection becomes more dangerous.  Not as dangerous
as conflicting greens, but more dangerous than a properly operating
intersection.  If we can eliminate 1000 failures without conflicting
greens, at the cost of one failure with a conflicting green, it might
be a net win in terms of safety.

Modern intersections are often considerably more complicated than a
two-phase "allow N/S, then allow E/W, then repeat" system.  Wiring
relays to completely avoid conflict in that case is very complex, and,
therefore, more error prone.  Even if a properly configured relay
solution is more reliable than a properly configured solid-state
conflict-monitor solution, if the relay solution is more likely to be
misconfigured, then there's not necessarily a net win.

Cost is an object.  If implementing a solid state controller is less
expensive (on CapEx and OpEx basis) than a relay-based controller, then
it might be possible to implement traffic signals at four previously
uncontrolled intersections, instead of just three.  That's a pretty big
safety win.

And, yes, convenience is also an objective.  Most people wouldn't want
to live in a city where the throughput benefit of modern traffic
signalling weren't available, even if they have to accept a very, very
small increase in risk.
  
 -- Brett



Re: First real-world SCADA attack in US

2011-11-22 Thread Brett Frankenberger
On Tue, Nov 22, 2011 at 06:14:54PM -0500, Jay Ashworth wrote:
 - Original Message -
  From: Matthew Kaufman matt...@matthew.at
 
  Indeed. All solid-state controllers, microprocessor or not, are required
  to have a completely independent conflict monitor that watches the
  actual HV outputs to the lamps and, in the event of a fault, uses
  electromechanical relays to disconnect the controller and connect the
  reds to a separate flasher circuit.
  
  The people building these things and writing the requirements do
  understand the consequences of failure.
 
 If you mean an independent conflict monitor which, *in the event
 there is NO discernable fault*, *connects* the controller to the lamp
 outputs... so that in the event the monitor itself fails, gravity or
 springs will return those outputs to the flasher circuit, than I'll
 accept that latter assertion.

That protects against a conflicting output from the controller at the
same time the conflict monitor completely dies (assuming its death is
in a manner that removes voltage from the relays).  It doesn't protect
against the case of conflicting output from the controller which the
conflict monitor fails to detect.  (Which is one of the cases you
seemed to be concerned about before.)

 -- Brett



Re: Arguing against using public IP space

2011-11-13 Thread Brett Frankenberger
On Sun, Nov 13, 2011 at 06:29:39PM -0500, Jay Ashworth wrote:
 
 SCADA networks should be hard air-gapped from any other network.
 
 In case you're in charge of one, and you didn't hear that, let me say
 it again:
 
 *SCADA networks should he hard air-gapped from any other network.*
 
 If you're in administrative control of one, and it's attacked because you
 didn't follow this rule, and someone dies because of it, I heartily, and
 perfectly seriously, encourage that you be charged with homicide.
 
 We do it with Professional Engineers; I see no reason we shouldn't expect
 the same level of responsibility from other types.

What if you air-gap the SCADA network of which you are in
administrative control, and then there's a failure on it, and the people
responsible for troubleshooting it can't do it remotely (because of the
air gap), so the trouble continues for an extra hour while they drive
to the office, and that extra hour of failure causes someone to die. 
Should that result in a homicide charge?

What if you air-gap the SCADA network of which you are in
administrative control, but, having done so, you can't afford the level
of redundancy you could when it wasn't air-gapped, and a transport
failure leaves you without remote control of a system at a time when
it's needed to prevent a cascading failure, and that leads to someone
dying.  Should that result in a homicide charge?

Air-gap means you have to build your own facilities for the entire
SCADA network.  No MPLS-VPN service from providers.  Can't even use
point-to-point TDM circuits (T1, for example) from providers, since
those are typically mixed in with other circuits in the carrier's DACS,
so it's only logical separation.  And even if you want to redefine
"air-gap" to be "air-gap, or at least no co-mingling in any packet
switching equipment", you've ruled out any use of commercial wireless
service (i.e. cellular) for backup paths.

A good engineer weighs all the tradeoffs and makes a judgement.  In
some systems, there might be a safety component of availability that
justifies accepting some very small increase in the risk of outside
compromise.

You can argue that safety is paramount -- that the system needs to be
designed to never get into an unsafe condition because of a
communications failure (which, in fact is a good argument) -- that
there must always be sufficient local control to keep the system in a
safe state.  Then you can implement the air-gap policy, knowing that
while it might make remote control less reliable, there's no chance of,
say, the plant blowing up because of loss of remote control.  (Except,
of course, that that's only true if you have complete faith in the local
control system.  Sometimes remote monitoring can allow a human to see
and correct a developing unsafe condition that the control system was
never programmed to deal with.)

But even if the local control is completely safe in the loss-of-comm
failure case, it's still not as cut and dried as it sounds.  The plant
might not blow up.  But it might trip offline with there being no way
to restart it because of a comm failure.  Ok, fine, you say, it's still
in a safe condition.  Except, of course, that power outages, especially
widespread ones, can kill people.  Remote control of the power grid
might not be necessary to keep plants from blowing up, but it's
certainly necessary in certain cases to keep it online.  (And in this
paragraph, I'm using the power grid as an example.  But the point I'm
making in this post is the general case.)

Sure, anytime there's an attack or failure on a SCADA network that
wouldn't have occurred had it been air-gapped, it's easy for people to
knee-jerk a "SCADA networks should be air-gapped" response.  But that's
not really intelligent commentary unless you carefully consider what
risks are associated with air-gapping the network.

Practically speaking, non-trivial SCADA networks are almost never
completely air-gapped.  Have you talked to people who run them?

 -- Brett



Re: Nxdomain redirect revenue

2011-09-28 Thread Brett Frankenberger
On Tue, Sep 27, 2011 at 04:09:03PM -0700, Owen DeLong wrote:
 
  Yes, it is realistic to expect every mom-and-pop posting a personal
  web site to utilize a provider that implements SNI,  and the sooner
  they do it.
 
 No, it isn't because it requires you to send the domain portion of the URL
 in clear text and it may be that you don't necessarily want to disclose even
 that much information about your browsing to the public.

That's what happens without SNI.  Without SNI, the IP address of the
server is sent in the clear; anyone who captures that traffic knows the
IP address, and, without SNI, anyone who wants to translate the IP
address to a domain name need only connect to the server and see what
certificate is presented.
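
That last step is one call (Python stdlib sketch; 192.0.2.10 is a
placeholder for whatever address showed up in the capture):

    import ssl

    # Connect to the captured IP and read the certificate it presents
    # by default; the subject/subjectAltName maps the IP back to names.
    pem = ssl.get_server_certificate(("192.0.2.10", 443))
    print(pem)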

 -- Brett



Re: wet-behind-the-ears whippersnapper seeking advice on building a nationwide network

2011-09-20 Thread Brett Frankenberger
On Tue, Sep 20, 2011 at 04:13:57PM -0400, Dorn Hetzel wrote:
 
 full time connection to two or more providers should be satisfied when the
 network involved has (or has contracted for and will have) two or more
 connections that are diverse from each other at ANY point in their path
 between the end network location or locations and the far end BGP peers,
 whether or not the two or more connections are exposed to one or more common
 points of failure, as long as their are any failure modes for which one
 connection can provide protection against that failure mode somewhere in the
 other connection.

The GRE tunnel configuration being discussed in this thread passes this test. 
Consider the following:
   ISP #1 has transit connections to upstream A and B.
   ISP #2 has transit connections to upstream C and D
   ISP 1 and ISP 2 peer.

Customer gets a connection to ISP #1 and runs BGP, and, over that
connection, establishes a GRE tunnel to ISP #2, and runs BGP over that
also.

I assume your last clause requires that each connection provide
protection against a failure mode in the other connection (not just
that one of the two provide protection against a failure mode on the
other).  This is satisfied.  In my example:

ISP #1 provides protection against ISP #2 having a complete meltdown.

ISP #2 provides protection against ISP #1 losing both its upstream
connections.

 -- Brett



Re: Microsoft deems all DigiNotar certificates untrustworthy, releases

2011-09-13 Thread Brett Frankenberger
On Tue, Sep 13, 2011 at 09:45:39AM -0500, Chris Adams wrote:
 Once upon a time, Tei oscar.vi...@gmail.com said:
  He, I just want to self-sign my CERT's and remove the ugly warning that
  browsers shows.
 
 SSL without some verification of the far end is useless, as a
 man-in-the-middle attack can create self-signed certs just as easily.

It protects against attacks where the attacker merely monitors the
traffic between the two endpoints.

As you suggest, it does not protect against MITM, but that's different
from being useless.  

The value of protecting against the former but not the latter may vary
by situation, but it's not always zero.  Not all attackers/attacks that
can sniff also have the capability and willingness to MITM.

(And even SSL w/ endpoint verification isn't absolute security.  For
example, it doesn't protect against endpoint compromises.  But that
doesn't make endpoint verification useless.)

 -- Brett



Re: IPv6 end user addressing

2011-08-07 Thread Brett Frankenberger
On Sun, Aug 07, 2011 at 09:45:31PM -0400, valdis.kletni...@vt.edu wrote:
 On Sun, 07 Aug 2011 20:47:48 EDT, Randy Carpenter said:
  Does ATT seriously serve the entire state of Indiana from a single POP???
  Sounds crazy to me.
 
 It makes sense if they're managing to bill customers by the cable mile from
 their location to the POP.  Imagine a POP in Terre Haute or Indianapolis and
 1,500+ customers in the Gary area and another 1K in the South Bend and Fort
 Wayne areas...  Of course, some other provider would get a clue and  and offer
 the same price per mile your location to our POP - after putting a POP in
 Gary, South Bend, and Fort Wayne. :)

ATT doesn't serve the entire state of Indiana from a single POP.

The question at hand was how many POPs *with layer 3 service* they had. 
I don't know the answer to that question and don't claim that it is or
is not one, but the TDM or L2 backhaul from the nearest POP to
whatever other POP has the Layer 3 service isn't paid for by the
customer.

It's also not clear if they were talking about ATT the LEC (offering
services like DSL) or ATT the IXC (offering things like business
Internet service, V4VPN services, etc).  If the latter, it's not at all
surprising; legacy IXCs often have more POPs than POPs w/ Layer 3
services, and they backhaul L3 services over their legacy TDM and/or
Layer 2 (ATM or FR) networks to a POP that has a router.  This was a
way for them to get IP service everywhere without installing routers
everywhere; as the service took off, more POPs could be IP enabled to
reduce the about of TDM (etc.) backhaul.  But large legacy IXCs have a
lot of POPs and, in general, still don't have routers (customer facing
routers, anyway) in all of them.

 -- Brett



Re: Had an idea - looking for a math buff to tell me if it's possible?with today's technology.

2011-05-20 Thread Brett Frankenberger
On Fri, May 20, 2011 at 06:46:45PM +, Eu-Ming Lee wrote:
 To do this, you only need 2 numbers: the nth digit of pi and the number of 
 digits.
 
 Simply convert your message into a single extremely long integer. Somewhere, 
 in the digits of pi, you will find a matching series of digits the same as 
 your integer!
 
 Decompressing the number is relatively easy after some sort-of recent 
 advances in our understanding of pi.
 
 Finding out what those 2 numbers are--- well, we still have a ways to go 
 on that.

Even if those problems were solved, you'd need (on average) just as
many bits to represent which digit of pi to start with as you'd need to
represent the original message.
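
You can see it at small scale (Python with mpmath, which is
third-party; the targets are random, so outputs vary): the first
occurrence of a random n-digit string lands, on average, around index
10^n, so writing the index down costs about as many digits as just
writing the string.

    import random
    from mpmath import mp          # pip install mpmath

    mp.dps = 100_000                        # 100k digits of pi
    digits = str(mp.pi).replace(".", "")    # "31415926..."

    for n in (2, 3, 4):
        target = "".join(random.choices("0123456789", k=n))
        # Average first-hit index is ~10**n; any given string can of
        # course show up early (or, rarely, not at all in 100k digits).
        print(target, "first appears at index", digits.find(target))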

 -- Brett



Re: Had an idea - looking for a math buff to tell me if it's possible with today's technology.

2011-05-18 Thread Brett Frankenberger
On Thu, May 19, 2011 at 12:26:26AM +0100, Heath Jones wrote:
 I wonder if this is possible:
 
 - Take a hash of the original file. Keep a counter.
 - Generate data in some sequential method on sender side (for example simply
 starting at 0 and iterating until you generate the same as the original
 data)
 - Each time you iterate, take the hash of the generated data. If it matches
 the hash of the original file, increment counter.
 - Send the hash and the counter value to recipient.
 - Recipient performs same sequential generation method, stopping when
 counter reached.
 
 Any thoughts?

That will work.  Of course, the CPU usage will be overwhelming --
longer than the age of the universe to do a large file -- but,
theoretically, with enough CPU power, it will work.

For an 8,000,000,000-bit file and a 128-bit hash, you will need a
counter of at least 7,999,999,872 bits to cover the number of possible
collisions.

So you will need at least 7,999,999,872 + 128 = 8,000,000,000 bits to
send your 8,000,000,000-bit file.  If your goal is to reduce the number
of bits you send, this wouldn't be a good choice.
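
The counting argument, spelled out (Python, exact integers):

    file_bits = 8_000_000_000
    hash_bits = 128

    # 2**file_bits possible files map onto 2**hash_bits hash values,
    # so each hash value has about 2**(file_bits - hash_bits)
    # preimages; the counter has to be able to name any one of them.
    counter_bits = file_bits - hash_bits

    print(counter_bits + hash_bits == file_bits)   # True -- no savings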

 -- Brett



Re: Amazon diagnosis

2011-05-01 Thread Brett Frankenberger
On Sun, May 01, 2011 at 12:50:37PM -0700, George Bonser wrote:
 
 From my reading of what happened, it looks like they didn't have a
 single point of failure but ended up routing around their own
 redundancy.
 
 They apparently had a redundant primary network and, on top of that, a
 secondary network.  The secondary network, however, did not have the
 capacity of the primary network.
 
 Rather than failing over from the active portion of the primary network
 to the standby portion of the primary network, they inadvertently failed
 the entire primary network to the secondary.  This resulted in the
 secondary network reaching saturation and becoming unusable.
 
 There isn't anything that can be done to mitigate against human error.
 You can TRY, but as history shows us, it all boils down the human that
 implements the procedure.  All the redundancy in the world will not do
 you an iota of good if someone explicitly does the wrong thing.
   [ ... ]
 
 This looks like it was a procedural error and not an architectural
 problem.  They seem to have had standby capability on the primary
 network and, from the way I read their statement, did not use it.

The procedural error was putting all the traffic on the secondary
network.  They promptly recognized that error, and fixed it.  It's
certainly true that you can't eliminate human error.

The architectural problem is that they had insufficient error recovery
capability.  Initially, the system was trying to use a network that was
too small; that situation lasted for some number of minutes; it's no
surprise that the system couldn't operate under those conditions and
that isn't an indictment of the architecture.  However, after they put
it back on a network that wasn't too small, the service stayed
down/degraded for many, many hours.  That's an architectural problem. 
(And a very common one.  Error recovery is hard and tedious and more
often than not, not done well.)

Procedural error isn't the only way to get into that boat.  If the
wrong pair of redundant equipment in their primary network failed
simultaneously, they'd have likely found themselves in the same boat: a
short outage caused by a risk they accepted: loss of a pair of
redundant hardware; followed by a long outage (after they restored the
network) caused by insufficient recovery capability.

Their writeup suggests they fully understand these issues and are doing
the right thing by seeking to have better recovery capability.  They
spent one sentence saying they'll look at their procedures to reduce
the risk of a similar procedural error in the future, and then spent
paragraphs on what they are going to do to have better recovery should
something like this occur in the future.

(One additional comment, for whoever posted that NetFlix had a better
architecture and wasn't impacted by this outage.  It might well be that
NetFlix does have a better architecture and that might be why they
weren't impacted ... but there's also the possibility that they just
run in a different region.  Lots of entities with poor architecture
running on AWS survived this outage just fine, simply by not being in
the region that had the problem.)

 -- Brett



Re: Some truth about Comcast - WikiLeaks style

2010-12-21 Thread Brett Frankenberger
On Tue, Dec 21, 2010 at 12:42:09AM -0600, Robert Bonomi wrote:
 
  From: Leo Bicknell bickn...@ufp.org
 
  So if it's illegal for you to put a letter inside a FedEx box,
 
 Bzzt!  It's -not- illegal to put a letter inside a FedEx box.  It just has
 to have the appropriate (USPS) postage on it, _as_well_ as paying the FedEx
 service/delivery fee.  

Bzzt!  It is, in general, as a practical matter, completely legal to
send letters overnight via FedEx, without paying any US postage.  Under
the "Extremely Urgent" exception, any shipment for which the shipper
pays more than the greater of $3 and twice what the USPS would charge
to send it first class is deemed extremely urgent (whether or not it
really is) and is exempt from the requirement to use the USPS or pay
USPS postage.

Except in some pretty rare cases, any shipment of letters sent via
FedEx is going to cost more than $3 and more than double what the USPS
would have charged to send it first class.

 This is true if it is just the letter you're sending,
 or if it is a sealed letter -inside- a box/package being shipped..

Actually, if the sealed letter relates to the cargo in the box/package,
it's legal to include it, under an exception separate from the
Extremely Urgent exception listed above.

 -- Brett



Re: Did Internet Founders Actually Anticipate Paid, PrioritizedTraffic?

2010-09-13 Thread Brett Frankenberger
On Mon, Sep 13, 2010 at 10:15:02AM -0400, Jamie Bowden wrote:

 I was thinking more along the lines of the fact that I pay for access
 at home, my employer pays for access here at work, and Google, Apple,
 etc. pay for access (unless they've moved into the DFZ, which only
 happens when it's beneficial for all players that you're there).  

Moving into the DFZ is different from not paying for access.  Many
enterprises and providers take full BGP routes and have no default, but
they're still paying for connectivity.

 Why should we pay extra for what we're already supposed to be
 getting.  If the ISps can't deliver what we're already paying for,
 they're broken.

The little secret (for some values of "secret") that no one in this
thread is talking about is that consumer Internet access is a
low-margin, cutthroat business.  Consumers demand ever-increasing amounts of
bandwidth and don't want to pay more for it.  Providers figure out a
way to deliver or lose the business to another provider who figures out
a way.  Of course they're going to try to monetize the other end, so
they can charge the customer less and keep his business, and of course
they're going to do things that the purists object to and that are
harmful, because most of the customers won't care and they'll like the
low price.

It's the same reason we have NAT boxes in everyone's homes.  It saves
money, and consumers are heavily cost driven, and they don't know or
don't care what they are losing when they buy purely, or almost purely,
on price.

There's no NAT in my house, and I'll switch to commercial grade
Internet service (and pay the appropriate price) if residential service
drops to an unacceptable level of quality for me.  (Right now, I can
opt out of their attempts to monetize the other end -- for example, I
run my own DNS server rather than use my provider's that redirects
typos somewhere that gets them money.)  But my costs -- for more than
one IP address, for a real router rather than a consumer grade toy --
are considerably higher than what most people are willing to pay.

Companies of any significant size probably aren't going to fall prey to
net-non-neutrality ... but they're going to pay business prices for
Internet, and that's going to cover the costs of providing the service
and a reasonable profit.  If that's what you want at home, then pay
that price and you can get that.

But most people at home will choose to pay less for their service and
let their provider monetize both ends of the connection.  

To be clear, I'm not staking out a philosophical position here.  I'm a
purist -- see above, I don't NAT and I'll pay for a better connection
if my consumer connections become insufficiently neutral -- but most
people won't and there is and will be a real market in providing cheap,
less pure, bandwidth.

 -- Brett



Re: ISP port blocking practice

2010-09-06 Thread Brett Frankenberger
On Sun, Sep 05, 2010 at 09:18:54PM -0400, Jon Lewis wrote:

 Anti-spam is a never ending arms race.  

That's really the question at hand here -- whether or not there's any
benefit to continuing the never ending arms race game.  Some people
think there is.  Others question whether anything is really being
accomplished.  Certainly we're playing it out like an arms race -- ISPs
block something, spammers find a new way to inject spam, and so on. 
The end result: lots of time spent on blocking things, less
functionality for customers ... but no decrease in spam.

 Originally, the default config  
 for most SMTP servers was to relay for anyone.  10 years ago, sending 
 spam through open SMTP relays was quite common.   Eventually, the default 
 changed, nearly all SMTP relays now restrict access by either client IP 
 or password authentication, and the spammers adapted to open proxies.  
 Today, nobody in their right mind sets up an open HTTP proxy, because if 
 they do, it'll be found and abused by spammers in no time.  These too 
 have mostly been eliminated, so the spammers had to adapt again, this 
 time to botted end user systems.

 Getting rid of the vast majority of open relays and open proxies didn't  
 solve the spam problem, but there'd be more ways to send spam if those  
 methods were still generally available.  The idea that doing away with  
 open relays and proxies was ineffective, so we may as well not have done  
 and should go back to deploying open relays and open proxies it is silly.

Is it?  It's likely true that the amount of spam sent through open
relays today is smaller than the amount of spam sent through open
relays 10 years ago.  If the objective is less spam via open relays,
closing down open relays was a raging success.  But that's not the
objective.  The objective is less spam, and there's certainly not less
spam today than there was 10 years ago.

Of course, those who worked to close open relays might argue that there
would be even more spam today if there were still open relays.  But
they don't know that and there's no real evidence to support that.

The theory behind closing open relays, blocking port 25, etc., seems to
be:
(a) That will make it harder on spammers, and that will reduce spam --
some of the spammers will find other other ways to inject spam, but
some will just stop, OR
(b) Eventually, we'll find technical solutions to *all* the ways spam
is injected, and then there will be no more spam.

There's little evidence for either.

 -- Brett



Re: ISP port blocking practice

2010-09-06 Thread Brett Frankenberger
On Mon, Sep 06, 2010 at 10:38:15PM +, deles...@gmail.com wrote:

 Having worked in past @ 3 large ISPs with residential customer pools
 I can tell you we saw a very direct drop in spam issues when we
 blocked port 25.

No one is disputing that.  Or, at least, I'm not disputing that.  I'm
questioning whether or not the *Internet* has experienced any decrease
in aggregate spam as a result of ISPs blocking port 25.  Did the spam
you blocked disappear, or did it all get sent some other way?  I'm
questioning the evidence for the claim that it didn't just all get sent
some other way.  (By "all", I mean "almost all".  I'm sure at least one
piece of spam has been permanently prevented from getting sent as a
result of port 25 filters.)

I'm not suggesting ISPs should or shouldn't block port 25.  That's a
decision each must make for itself.  And I'm not questioning that
blocking offers benefits for those who choose to block it.  What I'm
questioning is whether or not it results in any meaningful reduction in
aggregate spam.  (That is, are people *receiving* less spam because the
three ISPs you describe above -- along with many others -- blocked port
25.)

 -- Brett



Re: Did your BGP crash today?

2010-08-29 Thread Brett Frankenberger
On Sun, Aug 29, 2010 at 12:30:21AM -0700, Paul Ferguson wrote:
 
 It would seem to me that there should actually be a better option, e.g.
 recognizing the malformed update, and simply discarding it (and sending the
 originator an error message) instead of resetting the session.
 
 Resetting of BGP sessions should only be done in the most dire of
 circumstances, to avoid a widespread instability incident.

The only thing you know for sure when you receive a malformed update
is that the router on the other end of the connection is broken (or
that there's something in between the other router and you that is
corrupting messages, but for the purposes of this, that's essentially
the same thing).

Accepting information received from a router known to be broken, and
then passing that on to other routers, is a bad idea and something that
could lead to a widespread instability incident.  Of course, in theory,
you discard the bad updates and only pass on the good updates, but
doing that relies on the assumption that the known-to-be-broken router
on the other end of the connection is broken in such a way that ensures
that all the corrupted messages it sends will be recognizable as
malformed and can be discarded.  There's plenty of corruption that
can't be detected on the receiving end.

On top of that, there are problems with being out of sync with the
router on the other end.  For example, suppose a router developed a
condition that caused it to malform all withdraw messages (or, more
precisely, all UPDATE messages where the Withdrawn Routes Length field
is non-zero).  If we implement what you suggest above, then we'll
accept all the advertisements from that router, but ignore all the
withdraws, and end up sending that router a bunch of traffic that it
won't actually be able to handle.
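
For concreteness, the field in question, per the RFC 4271 UPDATE
layout (a minimal Python sketch, with error handling trimmed):

    import struct

    def parse_update(body):
        # body = BGP UPDATE contents after the 19-byte common header.
        wlen = struct.unpack_from("!H", body, 0)[0]   # Withdrawn Routes Length
        withdrawn = body[2:2 + wlen]
        alen = struct.unpack_from("!H", body, 2 + wlen)[0]  # Total Path Attr Len
        attrs = body[4 + wlen:4 + wlen + alen]
        nlri = body[4 + wlen + alen:]
        if len(withdrawn) != wlen or len(attrs) != alen:
            raise ValueError("malformed UPDATE")   # discard, don't reset?
        return withdrawn, attrs, nlri

A peer that corrupts only the UPDATEs carrying non-zero withdrawn-routes
data would have every withdraw discarded and every advertisement
believed -- exactly the desynchronization described above.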

 -- Brett



Re: Did your BGP crash today?

2010-08-28 Thread Brett Frankenberger
On Sat, Aug 28, 2010 at 02:19:28PM +0200, Florian Weimer wrote:
 * Claudio Jeker:
 
  I think you blame the wrong people. The vendor should make sure that
  their implementation does not violate the very basics of the BGP
  protocol.
 
 The curious thing here is that the peer that resets the session, as
 required by the spec, causes the actual damage (the session reset),
 and not the peer producing the wrong update.
 
 This whole thread is quite schizophrenic because the consensus appears
 to be that (a) a *researcher is not to blame* for sending out a BGP
 message which eventually leads to session resets, and (b) an
 *implementor is to blame* for sending out a BGP messages which
 eventually leads to session resets.  You really can't have it both
 ways.

The researcher is not to blame because all the BGP messages he sent out
were properly formed.

The implementor is to blame because the code he wrote sent out BGP
messages which were not properly formed.

 I'm fed up with this situation, and we will fix it this time.  My take
 is that if you reset the session, you're part of the problem, and
 consequently deserve part of the blame.  So if you receive a
 properly-framed BGP update message you cannot parse, you should just
 log it, but not take down the session.

If you get your wish, and that gets implemented, in some number of
years there will be a NANOG posting (perhaps from you, perhaps not)
arguing that any malformed BGP message should result in the session
being torn down.  This will be after a router develops a failure that
causes it to send many incorrect messages, but only some of them
malformed.  So the malformed ones will be discarded, the remainder will
be propagated throughout the Internet.  If the ones that are incorrect
but not malformed are, say, filled with more specifics for large
portions of the Internet, someone will be asking: "How could all the
other routers accept these advertisements from a router known to be
broken ... it was sending malformed advertisements, but instead of
tearing down the sessions, you decided to trust all the validly formed
messages from this known-to-be-broken router."

My point is:  we can't always look at the most recent failure to decide
what the correct policy is.  We have good data on the cases where
NOTIFY on any malformed packet has caused significant outages in the
Internet.  We don't have nearly as good data on the cases where
NOTIFY-on-any-malformed-packet saved the Internet from a significant
outage.

I don't claim to know which is the bigger problem.  But any serious
argument to change the behavior needs to consider the risk from
propagating information received from a router known to be broken, on
the theory that the brokenness only causes malformed messages (which
can be discarded) and does not also cause incorrect but correctly
formed messages to be sent.

 -- Brett



Re: Lightly used IP addresses

2010-08-15 Thread Brett Frankenberger
On Sun, Aug 15, 2010 at 11:44:18AM -0400, Owen DeLong wrote:

 You and Randy operate from the assumption that these less certain
 rights somehow exist at all. I believe them to be fictitious in
 nature and contrary to the intent of number stewardship all the way
 back to Postel's original notebook. Postel himself is on record
 stating that disused addresses should be returned.

A non-trivial number of people likely believe they have property rights
in their legacy address space (or, more precisely, in the entry in the
ARIN database that corresponds to their legacy address space) and that
those property rights are much more extensive than the rights they have
under the LRSA.

John points out that the LRSA gives legacy address holders a degree of
certainty that they don't otherwise have.  That's almost certainly
true; I doubt any legacy address holders are in possession of legal
advice to the effect of "you absolutely have property rights in that
allocation; there's absolutely no chance you'd lose should you attempt
to assert those rights in court."  (On the other hand, no one really
knows that ARIN has the authority to make the guarantees it's making
under the LRSA.  The LRSA only binds ARIN ... there's nothing to say
the US Government won't step in and assert its own authority over
legacy space.  So, while the LRSA confers a degree of certainty, it
doesn't confer absolute certainty, or anything close to it.)

But John doesn't seem to want to acknowledge, at least directly, the
possibility that those property rights might reasonably be believed by
some to exist.  I suspect some entities are in possession of legal
advice to the effect of "you probably have property rights, probably
can do whatever you want with your space, and can probably get court
orders as needed to force ARIN to respond accordingly."  If one has
gotten such advice from one's lawyers, and one has discussed with
those lawyers just how probable "probably" is, it might well be that
signing the LRSA is legitimately perceived as giving up rights.

  Because that's intended to be part of the price, Randy. In exchange
  for gaining enforceable rights with respect to ARIN's provision of
  services, you quit any claim to your legacy addresses as property,

 I would say you acknowledge the lack of such a claim in the first
 place rather than quit claim. Thus you are not giving up anything and
 the only actual price is $100 per year with very limited possible
 increases over future years.

The reality is that *no one knows* whether or not there are property
rights.  The difference between "quit claim any rights you have" and
"acknowledge you never had any rights" isn't really relevant.  Either
way, you go from having whatever property rights you originally had
(and no one knows for sure what those rights are) to probably not
having any such rights.

With either language, if you never had any such rights, you aren't
giving up anything.  If you did previously have such rights, you
probably are giving up something.  Whether the language is written
presupposing the existence of such rights, or presupposing the
non-existence of such rights, has no real effect.

Of course, ARIN's position is that that clause merely clarifies a
situation that already exists.  But the fact that ARIN feels it needs
clarifying illustrates the ambiguity.

 Any belief that non-signatories enjoy rights not present in the RSA
 is speculative at best.

I suspect some people are in possession of legal advice to the
contrary.  (Well, sure, technically, it is speculative.  But I'd
imagine that some people have a pretty high degree of confidence in
their speculation.)

Let's put it this way:  (This is a hypothetical point; I'm not actually
making an offer here.) Say I'm willing to buy, for $10 per /24, any
property rights that anyone with legacy space has in their legacy
allocation, provided they have not signed an RSA or LRSA with respect
to that space, and provided that they agree to never sign any such
agreement, or any similar agreement, with respect to that space.

If there's no property rights, that's a free $10 per /24.  On the other
hand, if there are property rights, then that's a pretty low price for
giving me the authority to direct a transfer of the space whenever I
feel like it.

How many people do you think would rationally take me up on this offer? 
Would you advise an ISP with a legacy allocation that is temporarily
short on cash to engage in such a transaction?  If so, are you
confident enough in your position that you'd agree to personally
indemnify them against any loss they might incur if it turns out that
there are property rights and now I hold them?

And that's really the crux of this argument.  One side assumes there
are no property rights and argues from that premise; the other side
assumes there are and argues from that premise.  Both sides' arguments
are logically sound (more or less), but they start from different
premises, and arguing from different premises isn't going to settle
anything.

Re: Vyatta as a BRAS

2010-07-18 Thread Brett Frankenberger
On Sun, Jul 18, 2010 at 06:12:29PM +0100, Nick Hilliard wrote:
 On 18 Jul 2010, at 10:58, Dobbins, Roland rdobb...@arbor.net wrote:
  ASR1K, which is what I'm assuming you're referring to, is a
  hardware-based router.  Same for ASR9K.
 
 My c* SE swears that the asr1k is a software router.  I didn't push
 him on it's architecture though.

All routers have hardware, and any but the most overwhelmingly simple
hardware-based devices are using ASICs running software to push
packets around.  The line has been blurred for a long time, and the
ASR1K makes it very, very blurry.

It forwards packets in a relatively general-purpose (but not as general
purpose as, say, the Intel processors inside your servers) CPU that has
40 cores and is optimised (its architecture, instruction set, etc.)
for moving packets around.  Is that hardware forwarding?  Is that
software forwarding?  Depends on what you want to call it.

Do video cards with high-end GPUs do things in hardware or
software?  There are now development kits to allow you to easily use
those GPUs to do general purpose compute tasks.  The processors in the
ASR could do that, also, but Cisco hasn't written any code or released
any libraries to actually do that (at least not publicly; I wouldn't be
surprised to learn that some developer has hacked a 40-threaded
SETI@home or something like that onto it just to prove it could be
done).

So where do you draw the line?  Is the ASR hardware forwarding?  If so,
would it still be hardware if, instead of the specialized processor,
Cisco got Intel to develop a 40-core Pentium and used that?  What if
Cisco instead used 10 off-the-shelf 4-core processors from Intel or
AMD?  Where along this continuum do we cross the line from software
router to hardware router?

 -- Brett



Re: Vyatta as a BRAS

2010-07-18 Thread Brett Frankenberger
On Mon, Jul 19, 2010 at 07:13:46AM +0930, Mark Smith wrote:
 
 This document supports that. If the definition of a software router is
 one that doesn't have a fixed at the factory forwarding function, then
 the ASR1K is one.

The code running in the ASICs on line cards in 6500-series
chassis isn't fixed at the factory.  Same with the code running on the
PFCs in those boxes.  There's not a tremendous amount of flexibility to
make changes after the fact, because the code is so tightly integrated
with the hardware, but there is some.

(Not saying the 6500 is a software-based platform.  It's pretty clearly
a hardware-based platform under most peoples' definition.  But:  the
line is blurry.)

 -- Brett



Re: On the control of the Internet.

2010-06-13 Thread Brett Frankenberger
On Sun, Jun 13, 2010 at 03:23:06PM -0500, Larry Sheldon wrote:
 On 6/13/2010 14:59, Joe Greco wrote:
 
  How about the case where the master zone file has be amputated and the
  secondaries can no longer get updates?
 
 Mea culpa.
 
 That was suppose to say How about the case where the master zone file
 has beEN amputated and the secondaries can no longer get updates?

I'm really not sure what you're asking, and I don't know what "master
zone file has been amputated" means, but if the master server goes
unreachable, then, for each secondary, either:
  (a) it's not reachable from anywhere, in which case it doesn't really
matter what information it has because nothing will be querying it, or
  (b) it is reachable from somewhere, in which case you log in to it
from that somewhere, edit the configuration file, change "slave" to
"master", and restart BIND.  (Adjust as needed for whatever DNS server
is in use, if it's not BIND.)

 -- Brett



Re: RFID in datacenter (was Re: Default Passwords for World Wide Packets/Lightning Edge Equipment)

2010-01-13 Thread Brett Frankenberger
On Wed, Jan 13, 2010 at 01:51:41PM -0500, George Imburgia wrote:

 On Wed, 13 Jan 2010, Barry Shein wrote:

 The big advantage of RFIDs is that you don't need line of sight access
 like you do with bar codes, they use RF, radio frequency.

 Which is also a big disadvantage in a datacenter. Ever tried to use a  
 radio in one?

 The RF noise generated by digital equipment seriously erodes signal  
 quality. Considering the relatively weak signal returned from RFID tags,  
 I'd be surprised if you'd get any kind of useful range.

 Has anybody tried it out?




Re: Consumer-grade dual-homed connectivity options?

2009-12-30 Thread Brett Frankenberger
On Wed, Dec 30, 2009 at 11:13:24AM -0500, Steven Bellovin wrote:
 
 I know nothing of how to do this on a Catalyst; for PCs, my own guess
 is that you're looking far too high-end.  If the issue is relaying to
 the outside, I suspect that a small, dedicated Soekris or the like
 will do all you need -- there's no point in switching traffic faster
 than your DSL lines can run.  I'm not doing load-balancing, but all
 traffic from my house to the outside world (I have a cable modem)
 goes through a Soekris 4801, and I can download large files from my
 office at 12-13M bps.  Further, since the Soekris is bridging some
 networks, its interfaces are in promiscuous mode, so the box is
 seeing every packet on my home LAN. 

Really?  If it's connected to a switch, I'd expect it to only see
broadcast/multicast/unknown destination MACs, as well as traffic
actually flowing through the Soekris.

 -- Brett



Re: DMCA takedowns of networks

2009-10-24 Thread Brett Frankenberger
On Sat, Oct 24, 2009 at 11:06:29AM -0400, Patrick W. Gilmore wrote:
> On Oct 24, 2009, at 10:53 AM, Richard A Steenbergen wrote:
> > On Sat, Oct 24, 2009 at 09:36:05AM -0400, Patrick W. Gilmore wrote:
> > > On Oct 24, 2009, at 9:28 AM, Jeffrey Lyon wrote:
> > >
> > > > Outside of child pornography there is no content that I would ever
> > > > consider censoring without a court order nor would I ever purchase
> > > > transit from a company that engages in this type of behavior.
> > >
> > > A DMCA takedown order has the force of law.

It most certainly does not.

> > The DMCA defines a process by which copyright violations can be
> > handled.  One of the options in that process is to send a
> > counter-notice to the takedown notice.
> 
> Laws frequently have multiple options for compliance.  Doesn't mean you
> don't have to follow the law.

But you should understand the law.

The DMCA does NOT require that any provider, anywhere, ever, take down
material because they were notified that the material is infringing on
a copyright holder's rights.

What the DMCA does say is that if a provider receives such a
notification, and promptly takes down the material, then the ISP is
immune from being held liable for the infringement.  Many providers
routinely take down material when they receive a DMCA take-down notice. 
But if they do so out of the belief that they are required to do so,
they are confused.  They are not required to do so.  They can choose to
take it down in exchange for getting the benefit of immunity from being
sued (many, probably most, providers make this choice).  Or they can
choose to leave it up, which leaves them vulnerable to a lawsuit by the
copyright holder.  (In such a lawsuit, the copyright holder would have
to prove that infringement occurred and that the provider is liable for
it.)

(I'm not commenting on the merits of HE's actions here.  Just on what
the DMCA actually says.  It's certainly a good practice for providers
that don't want to spend time evaluating copyright claims and defending
copyright infringement suits (which, I think, is most providers) to
take advantage of the DMCA's safe-harbor provisions.  I'm not disputing
that.)

 -- Brett



Re: Important New Requirement for IPv4 Requests [re impacting revenue]

2009-04-25 Thread Brett Frankenberger
On Fri, Apr 24, 2009 at 01:12:42PM +0100, Michael Dillon wrote:
 
> I think that many company officers will ask to see the results of an audit
> before they sign this document, and they will want the audit to be performed
> by qualified CPAs.  Are your IPv4 records in good enough shape that an
> accountant will sign off on them?

My boss (who is an officer of the company within the meaning of the
term in the new ARIN requirement) will attest to my employer's next IP
assignment request to ARIN (we're an end user with PI space) on nothing
but my say-so that it is accurate.  He's not a network guy, has no good
way of verifying the data himself and won't require some external
entity to come audit the request.  He might ask me a few questions
before signing, but that will be it.  If he didn't trust me, he'd have
replaced me a long time ago.  (For the record, yes, my records are good
enough that an accountant would likely sign off on them.  But that
won't be necessary.)

Of course, I haven't been submitting fraudulent requests to ARIN and
don't plan to start, so I'm not the target of ARIN's new policy anyway.

There are many things the new policy won't stop.  It won't stop
fraudulent requests where the officer of the company is knowingly in
the loop of the fraud (this would include small organizations where the
entire network engineering staff is the VP of Engineering).  It won't
stop fraudulent requests where the requestors are willing to lie to
company executives (except in what I expect are relatively rare cases
where the executives independently verify the data before signing off
on it).

It *will* stop fraudulent requests where the requests are being made by
engineers who are (a) willing to lie to ARIN, but (b) not willing to
lie to their boss and boss's boss (through however many levels it takes
to get to an officer who meets ARIN's requirements).  I suspect that's
a non-trivial amount of the fraud that is going on.  ARIN can't fire
anyone.  Managers typically don't like to be lied to and might very
well fire an engineer caught lying ... many people won't take that sort
of chance with their job.  (Sure, some will tell their boss the truth
and then ask him to lie to ARIN, and some officers will go along with
that -- I covered that possibility in the previous paragraph -- but
nowhere near all will.)

Many of the attacks here against ARIN's policy are centered on the fact
that it isn't perfect and there are still lots of ways for fraud to
happen.  All of those attacks are valid, but they ignore the fact that
the policy probably wasn't intended to stop all fraud, just reduce
fraud.  I have no data, but my gut tells me it will reduce some fraud. 
I have no idea how much.

 -- Brett



Re: Shady areas of TCP window autotuning?

2009-03-17 Thread Brett Frankenberger
On Mon, Mar 16, 2009 at 10:48:42PM -0500, Frank Bulk - iName.com wrote:
> It was my understanding that (most) cable modems are L2 devices -- how it is
> that they have a buffer, other than what the network processor needs to
> switch it?

The Ethernet is typically faster than the upstream cable channel.  So
it needs some place to put the data that arrives from the Ethernet port
until it gets sent upstream.

This has nothing to do with layer 2 / layer 3.  Any device connecting
between media of different speeds (or connecting more than two ports --
creating the possibility of contention) would need some amount of
buffering.
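
Rough numbers, purely for illustration: a 1500-byte frame arrives on
100Mb/s Ethernet in about 0.12ms but takes about 6ms to send on a
2Mb/s upstream.  A burst of 50 such frames shows up in ~6ms and drains
in ~300ms, so the modem either buffers something on the order of 75KB
or starts dropping.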

 -- Brett



Re: What is the most standard subnet length on internet

2008-12-24 Thread Brett Frankenberger
On Tue, Dec 23, 2008 at 08:25:40AM -0600, Alex H. Ryu wrote:
> Also one of the reason why not putting default route may be because of
> recursive lookup from routing table.
> If you have multi-homed site within your network with static route, and
> if you use next-hop IP address instead of named interface, you will see
> the problem when you have default route in routing table.
> For an example, if you have ip route 1.0.0.0 255.0.0.0 2.2.2.2.
> If the interface for 2.2.2.2 is down, 1.0.0.0/8 will be still be in the
> routing table because 2.2.2.2 can be reached via default route
> (0.0.0.0/0) from routing table recursive lookup.
> Therefore the traffic for 1.0.0.0/8 will be forwarded to 0.0.0.0/0
> next-hop ip address, and customer fail-over scenario will not be working
> at all.
> 
> Only way to resolve this problem is... Actually three...
> 1) Use named interface such as serial 1/0 instead of x.x.x.x IP
> next-hop address.
> But sometimes this is not an option if you use ethernet circuit or
> something like Broadcast or NBMA network.

ip route 1.0.0.0 255.0.0.0 fa0/0 2.2.2.2
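
Naming both the interface and the next hop keeps the route from
recursing: IOS only installs it while fa0/0 is up, so on failure it is
withdrawn instead of resolving 2.2.2.2 through the default.  A minimal
sketch of the difference (addresses illustrative):

    ! broken under a default route: stays installed after fa0/0 fails,
    ! because 2.2.2.2 still resolves via 0.0.0.0/0
    ip route 0.0.0.0 0.0.0.0 3.3.3.3
    ip route 1.0.0.0 255.0.0.0 2.2.2.2

    ! fixed: validity is tied to the interface state
    ip route 1.0.0.0 255.0.0.0 FastEthernet0/0 2.2.2.2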

 -- Brett



Re: ARCOS Outage

2008-12-06 Thread Brett Frankenberger
On Fri, Dec 05, 2008 at 09:31:11AM -0500, Alex Rubenstein wrote:

> I wonder if having a spare card there would have been cheaper than
> this outage and resulting flights and labour?

It unquestionably would have been cheaper to have a spare for that card at
that location.  What might not have been cheaper, though, is having a
spare for *every* type of card that could fail, *everywhere* those
cards are deployed.
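
Back-of-envelope, with made-up numbers: 40 card types across 25 sites
at $5,000 per spare is 40 x 25 x $5,000 = $5M of sparing, against
which an occasional chartered flight and a few extra hours of outage
can look cheap.  Hence regional spare depots rather than a spare of
everything everywhere.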

> > Yup, there is a defective card in the Bahamas.  They should be flying in
> > this

 -- Brett



Re: Telecom Collapse?

2008-12-04 Thread Brett Frankenberger
On Thu, Dec 04, 2008 at 08:48:27AM -0600, Chris Adams wrote:
> Once upon a time, Paul Ferguson [EMAIL PROTECTED] said:
> > I deliberated for a while on whether to send this, or not, but I figure it
> > might be of interest to this community:
> > 
> > http://techliberation.com/2008/12/04/telecom-collapse/
> 
> One thing doesn't make sense in that article: it talks about POTS being
> subsidized by other services, and people cutting POTS lines.  Wouldn't
> that be _good_ for the companies and their other services?  The way the
> article describes things, fewer POTS lines = smaller subsidies taken
> from other services = better profits for other services and the company.

The marginal cost of POTS service isn't subsidized by other services;
at the margin, POTS is profitable.  The subsidy covers some of the
fixed costs (but not all of them, some of the fixed costs are covered
by POTS revenues).  So ... every time a POTS line is taken out, the
fixed costs that were being covered by the revenue from that line now
have to be covered from somewhere else (= More Subsidy).
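
A toy example, numbers assumed: a POTS line brings in $30/month, costs
$20/month at the margin, and carries a $25/month allocation of fixed
plant.  On a fully-allocated basis the line "loses" $15/month; but
disconnecting it saves only the $20 marginal cost while giving up $30
of revenue, so the $10/month that line was contributing toward fixed
costs now has to come from other services.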

 -- Brett