Re: ipmi access

2014-06-02 Thread Brian Dickson
Here's one useful method, which depends on having appropriate subnet and
VLAN capabilities.

Have all hosts at a given site run dot1q (802.1Q tagging) on their main
interface (switch config: trunked port).
The IPMI interfaces will all be on one VLAN (put those switch ports in that
VLAN).
The first VLAN carries the public routed subnet; the router needs to be
configured with this VLAN.
The second VLAN is for IPMI, and is NOT CONFIGURED ON THE ROUTER.

This second VLAN has only hosts and IPMIs on it. No connectivity to the
outside world, period.
(Be sure IP packet forwarding is disabled on the hosts!)

On (some subset of) the hosts' main interfaces, run a suitable remote
KVM-ish protocol.
For instance, use ssh with X forwarding, and/or VNC (or Xvnc), plus
whatever local IPMI client you want (e.g. a browser).

Connecting to IPMI is by IP on the second VLAN (or by name; left as an
exercise for the reader).

This avoids ACL fiddling, and is as secure as your host access method (but
no more secure, obviously).
YMMV.

Brian


Re: Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
On Wed, Dec 4, 2013 at 3:48 PM, Christopher Morrow
wrote:

> On Wed, Dec 4, 2013 at 3:43 PM, Brian Dickson
>  wrote:
> > Except that we have a hard limit of 1M total, which after a few 100K from
>
> where does the 1M come from?
>

FIB table sizes, usually dictated by TCAM size. Think deployed hardware,
lots of it.
(Most instances of TCAM are shared between IPv4 and IPv6, with each IPv6
route taking two TCAM slots, IIRC. And a few other things also consume
TCAM, though maybe not as significantly.)

(Newer boxes may handle more in some networks' cores, but I don't believe
that is ubiquitously the case across the DFZ.)
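
As a back-of-envelope illustration of the shared-TCAM arithmetic (a sketch:
the 1M-slot and two-slots-per-IPv6-route figures are from the text above,
but the table sizes below are assumed round numbers, not measurements):

```python
# Shared IPv4/IPv6 TCAM arithmetic. Assumptions: ~1M total slots and
# two slots per IPv6 route (per the post); the table sizes are
# illustrative round numbers only.
TOTAL_SLOTS = 1_000_000

ipv4_routes = 500_000   # assumed IPv4 global table
ipv6_routes = 100_000   # assumed IPv6 global table

slots_used = ipv4_routes + 2 * ipv6_routes
print(f"used {slots_used:,} of {TOTAL_SLOTS:,} TCAM slots "
      f"({slots_used / TOTAL_SLOTS:.0%})")
```

At these assumed sizes, another 500,000 internal IPv6 routes would cost
1,000,000 more slots and overflow the table outright.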

Brian


Re: Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
On Wed, Dec 4, 2013 at 3:09 PM, Owen DeLong  wrote:

>
> On Dec 4, 2013, at 10:21 , Brian Dickson 
> wrote:
>
> Second of all, what would make much more sense in your scenario is
> to aggregate at one or two of those levels. I'd expect probably the POP
> and the Border device levels most likely, so what you're really looking
> at is 5000*100 = 500,000 /48s per border. To make this even, we'll
> round that up to 524,288 (2^19) and actually to make life easy, let's
> take that to a nibble boundary (2^20) 1,048,576, which gives us a
> /28 per Border Device.
>
>
>
Except that we have a hard limit of 1M total; after a few hundred thousand
routes from the global routing tables (IPv4+IPv6), this 500,000 looks
pretty dicey.


>
> > And root of the problem was brought into existence by the insistence that
> > every network (LAN) must be a /64.
>
> Not really. The original plan was for everything to be 64 bits, so by
> adding
> another 64 bits and making every network a /64, we're actually better off
> than we would have been if we'd just gone to 64 bit addresses in toto.
>
> Thanks for playing.
>
> Owen
>
Understand, I am not saying anyone got it wrong, but rather that there is
a risk associated with continuing forever to use a fixed /64 LAN size.
Yes, we are better off than we were, but the point I'm making is that, if
push comes to shove, the /64 is a small thing to sacrifice (at very small
incremental cost: SEND + autoconf modifications).

I can't believe I just called 2**64 small.

Brian


Re: Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
On Wed, Dec 4, 2013 at 2:34 PM, Tony Hain  wrote:

> Brian Dickson wrote:
> > > And root of the problem was brought into existence by the insistence
> > > that every network (LAN) must be a /64.
>
[snip]

> about how many bits to add for hosts on the lan. The fact it came out to 64
>
>
The point I'm making (and have made before) is not how many bits, but that
it is a fixed number (64).

And again, the concern isn't about bits; it is about slots in routing
tables.
Routing table hardware is generally:
(a) non-upgradeable in currently deployed systems,
(b) very, very expensive (at least for TCAMs), and
(c) on many routers, a per-linecard (not per-router) cost.

If you look back at the need for CIDR (where we went from class A, B, and C
networks to variable-length subnet masks), it might be a bit more obvious.
CIDR fixed the problem of the total number of available routes, but
fundamentally cannot do anything about the number of route slots.

Fixed sizes do not have the scalability that variable sizes do,
when it comes to networks, even within a fixed total size (network + host).
Variable sizes are needed for aggregation, which is the only solution to
router slot issues.

IF deployed correctly by operators, the global routing table should be one
IPv6 route per ASN.
However, that is only feasible if each ASN can aggregate efficiently
forever (or for 50 years, at least).

The math shows that 61 bits, given out in 16-bit chunks to end users and
aggregated hierarchically, runs the risk of not lasting 50 years.
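
One way that 50-year arithmetic can be framed as a sketch. Everything here
is an assumed illustration: the /18-per-large-ISP figure comes from the
worked example elsewhere in this thread, and the growth rate is purely
hypothetical:

```python
# Back-of-envelope longevity estimate for hierarchical aggregation.
# Assumptions: global unicast space is a /3, a large ISP consumes a
# /18 (per the worked example in this thread), and each doubling of
# the subscriber/ISP population consumes roughly one more prefix bit.
apex_spare_bits = 18 - 3        # bits of headroom between /3 and /18
doubling_period_years = 3       # assumed doubling time (hypothetical)

years = apex_spare_bits * doubling_period_years
print(f"~{years} years of headroom at these assumptions")
```

With a 3-year doubling assumption this comes out under 50 years, which is
the shape of the concern; slower growth stretches it well past 50.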

[snip]

> Now
> go back to the concept of miserly central control of lan bits and figure
> out
> how one might come up with something like RFC3971 (SEND) that would work in
> any network.
>
>
There are two really easy ways, just off the top of my head:
(1) Prefix delegation: a host wanting SEND asks its upstream router to
delegate it a prefix of adequate size.
The prefix is routed to the host, with only the immediate upstream router
knowing the host's non-SEND address.
The host picks SEND addresses from the prefix and rotates them whenever it
wants; problem solved. QED.

(2) Pass the prefix size to the host, and have the SEND math use whatever
scheme it wants, modulo that size.
E.g. rand(2**64 - 1) becomes rand(2**prefix_size - 1). The host redoes
this whenever it changes addresses; problem solved. QED.
(There remains the question of how much space is adequate for the
protection SEND offers; I suggest that 32 bits, as a minimum, would still
be adequate.)
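
A minimal sketch of idea (2). This is illustrative only: real SEND/CGA
(RFC 3971/3972) defines a specific hash-input format and Sec parameter,
and `cga_style_iid` is a hypothetical name, not any standard API:

```python
import hashlib
import os

# Hypothetical sketch: derive a CGA-style interface identifier modulo
# the host-bit count implied by an arbitrary prefix length, rather
# than assuming a fixed /64.
def cga_style_iid(pubkey: bytes, prefix_len: int) -> int:
    host_bits = 128 - prefix_len
    modifier = os.urandom(16)       # re-roll this to rotate addresses
    digest = hashlib.sha256(modifier + pubkey).digest()
    return int.from_bytes(digest, "big") % (2 ** host_bits)

# With a /96, the host keeps 32 bits of hash output -- the suggested
# minimum for adequate protection.
iid = cga_style_iid(b"example-public-key", prefix_len=96)
assert 0 <= iid < 2 ** 32
```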

As far as I have been able to determine, there are very few places where a
fixed /64 (as a requirement) actually makes sense.
For example, the IPv6 code in Linux basically does "check that the prefix
is a /64, and fail if not", but otherwise has no /64 dependencies.

We can always do an IPv6-bis in the future: first fix SEND, then fix
autoconf, and finally remove /64 from IPv6 -- if we think the usage rate
justifies doing so.

Brian


Re: Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
Not necessarily transit-only: leaf-ASN ISP networks (which provide IP
transit to consumers, but have no BGP customers) would also be counted.
They do still exist, right?

Brian


On Wed, Dec 4, 2013 at 1:35 PM, Christopher Morrow
wrote:

> On Wed, Dec 4, 2013 at 1:32 PM, Rob Seastrom  wrote:
> >
> > Brian Dickson  writes:
> >
> >> [snip -- quoted text reproduced in full in the original "Naive IPv6"
> >> post below]

Re: Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
On Wed, Dec 4, 2013 at 1:32 PM, Rob Seastrom  wrote:

>
> Brian Dickson  writes:
>
> > [snip -- quoted text reproduced in full in the original "Naive IPv6"
> > post below]

Naive IPv6 (was AT&T UVERSE Native IPv6, a HOWTO)

2013-12-04 Thread Brian Dickson
Rob Seastrom wrote:

> "Ricky Beam"  gmail.com> writes:
>
> > On Fri, 29 Nov 2013 08:39:59 -0500, Rob Seastrom wrote:
> >
> >> So there really is no excuse on AT&T's part for the /60s on uverse
> >> 6rd...
> >
> > ...
> > Handing out /56's like Pez is just wasting address space -- someone
> > *is* paying for that space. Yes, it's waste; giving everyone 256
> > networks when they're only ever likely to use one or two (or maybe
> > four), is intentionally wasting space you could've assigned to
> > someone else. (or **sold** to someone else :-)) IPv6 may be huge to
> > the power of huge, but it's still finite. People like you are
> > repeating the same mistakes from the early days of IPv4...
>
> There's finite, and then there's finite. Please complete the
> following math assignment so as to calibrate your perceptions before
> leveling further allegations of profligate waste.
>
> Suppose that every mobile phone on the face of the planet was an "end
> site" in the classic sense and got a /48 (because miraculously,
> the mobile providers aren't being stingy).
>
> Now give such a phone to every human on the face of the earth.
> Unfortunately for our conservation efforts, every person with a
> cell phone is actually the cousin of either Avi Freedman or Vijay
> Gill, and consequently actually has FIVE cell phones on active
> plans at any given time.
>
> Assume 2:1 overprovisioning of address space because per Cameron
> Byrne's comments on ARIN 2013-2, the cellular equipment providers
> can't seem to figure out how to have N+1 or N+2 redundancy rather
> than 2N redundancy on Home Agent hardware.
>
> What percentage of the total available IPv6 space have we burned
> through in this scenario? Show your work.
>
> -r


Here's the problem with the math, presuming everyone gets roughly the same
answer:
the claimed efficiency (number of prefixes vs. total space) is only
achieved if there is a "flat" network, which carries every IPv6 prefix
(i.e. no aggregation is being done).

This means 1:1 router slots (for routes) vs prefixes, globally, or even
internally on ISP networks.

If any ISP has > 1M customers, oops. So, we need to aggregate.

Basically, the problem space (waste) boils down to the question, "How many
levels of aggregation are needed?"

If you have variable POP sizes, region sizes, and assign/aggregate towards
customers topologically, the result is:
- the need to maintain power-of-2 address block sizes (for aggregation),
plus
- the need to aggregate at each level (to keep #prefixes sane) plus
- asymmetric sizes which don't often end up being just short of the next
power-of-2
- equals (necessarily) low utilization rates
- i.e. much larger prefixes than would be suggested by "flat" allocation
from a single pool.

Here's a worked example, for a hypothetical big consumer ISP:
- 22 POPs with "core" devices
- each POP has anywhere from 2 to 20 "border" devices (feeding access
devices)
- each "border" has 5 to 100 "access" devices
- each access device has up to 5000 customers

Rounding up each, using max(count-per-level) as the basis, we get:
5000->8192 (2^13)
100->128 (2^7)
20->32 (2^5)
22->32 (2^5)
5+5+7+13=30 bits of aggregation
2^30 of /48 = /18
This leaves room for 2^10 such ISPs (a mere 1024), from the current /8.
A thousand ISPs seems like a lot, but consider this: the ISP we did this
for might only have 3M customers.
Scale this up (horizontally or vertically or both), and it is dangerously
close to capacity already.

The answer above (worked math) will be unique per ISP. It will also drive
consumption at the apex, i.e. the size of allocations to ISPs.
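
The rounding arithmetic in the worked example above can be checked
mechanically; a short sketch:

```python
import math

# Reproduce the worked example: each level's count is rounded up to a
# power of two so that it sits on an aggregatable prefix boundary.
def bits_for(count: int) -> int:
    return math.ceil(math.log2(count))

levels = [5000, 100, 20, 22]    # customers, access, borders, POPs
agg_bits = sum(bits_for(n) for n in levels)   # 13 + 7 + 5 + 5

isp_prefix = 48 - agg_bits      # /48 per customer => ISP needs a /18
isps_per_slash8 = 2 ** (isp_prefix - 8)
print(agg_bits, isp_prefix, isps_per_slash8)  # 30 18 1024
```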

And the root of the problem was brought into existence by the insistence
that every network (LAN) must be a /64.

That's my 2 cents/observation.

Brian


Re: route for linx.net in Level3?

2013-04-04 Thread Brian Dickson
Leo Bicknell wrote:

> Even if the exchange does not advertise the exchange LAN, it's probably
> the case that it is in the IGP (or at least IBGP) of everyone connected
> to it, and by extension all of their customers with a default route
> pointed at them.

Actually, that may not be the case, and probably *should* not be the case.

Here's why, in a nutshell:

If two regional ISPs on opposite sides of the planet both point default at
the same global ISP, even without peering with that ISP -- by using the IX
next-hop at IX A (for ISP A) and at IX B (for ISP B) -- then the global
ISP is now giving free on-net transit to A and B.

So, it turns out that pretty much the only way to prevent this at the
routing level is to not carry the IXP networks (in IGP or IBGP), but
rather to do next-hop-self.

The other way is to filter at the packet level on ingress, based on Layer
2 information, which on many kinds of IX-capable hardware is actually
impossible.

So, when it comes to IXPs: Next-Hop-Self.

(BCP 38 actually doesn't even enter into it, oddly enough.)

Brian


Re: Open Resolver Problems

2013-04-01 Thread Brian Dickson
For filtering to/from "client-only" networks, here are the filtering rules
for DNS (in pseudo-code; convert to the appropriate syntax for whatever
devices you operate).

The objective here is:
- prevent spoofed-source DNS reflection attacks from your customers, from
leaving your network
- prevent your customers' open DNS servers (regardless of what they are)
from being used in reflection attacks
- permit normal DNS usage by clients, regardless of whether they are
talking to an external DNS resolver, or doing their own local resolution
(e.g. local DNS resolver on a host, or SOHO router)

from client:
permit source=client-subnet dest=any port=53 proto=TCP (TCP only works if
the handshake reaches "established", i.e. spoofing is irrelevant, but we
stop spoofed SYNs here)
permit source=client-subnet dest=any port=53 proto=UDP QR=0 (QR is the
first/highest bit of the 3rd octet of the DNS payload in the UDP packet)
deny port=53 (anything reaching this rule has a spoofed source, or QR=1)

to client:
permit dest=any source=any port=53 proto=TCP
permit dest=any source=any port=53 proto=UDP QR=1 (first/highest bit of
the 3rd octet of the DNS payload)
deny port=53 proto=UDP (QR=0, which is what we want to stop)
(We don't have to check dest=client-subnet, since routing handles that
requirement.)
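
The QR test in the rules above can be sanity-checked in a few lines (a
sketch; real filters do this in hardware or firewall syntax, not Python):

```python
# In a DNS message (the UDP payload), the high bit of the third octet
# is the QR flag: 0 = query, 1 = response.
def is_dns_response(payload: bytes) -> bool:
    return bool(payload[2] & 0x80)

# 12-byte DNS headers: ID=0x1234; flags 0x0100 (query, RD set) vs
# 0x8180 (response, RD+RA set).
query_hdr = bytes.fromhex("123401000001000000000000")
resp_hdr = bytes.fromhex("123481800001000100000000")

assert not is_dns_response(query_hdr)   # permitted from clients
assert is_dns_response(resp_hdr)        # permitted toward clients
```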

If you have "eyeball" networks, please apply liberally.

Brian


puck.nether.net outage?

2013-02-13 Thread Brian Dickson
Anyone know about puck.nether.net?

I read the "outages" list via web archive there, but can't connect
currently.

(I know - irony or what.)

If you know what's going on, please post on NANOG?

kthanks,

Brian


Re: Update from the NANOG Communications Committee regarding recent off-topic posts

2012-07-30 Thread Brian Dickson
>
> As a quick update, we've implemented some list settings last week to help
> to
>
> keep spam off the list.  New subscribers are moderated until we're
> comfortable
> with their posts.  We rejected the idea of keyword based message filtering
> since not only is a lot of work to maintain, it's trivial to get around it
> if
> you really want to post banned words.
> Comments and suggestions are welcome.
> Matt Griswold, on behalf of the NANOG Communications Committee
>

I've always liked the idea found in xkcd.com/810 ;-).

Brian


Re: [Idr] draft-ietf-idr-as0-00 (bgp update destroying transit on redback routers ?)

2011-12-03 Thread Brian Dickson
Can anyone familiar with this knob and its usage, answer a question:

Would anything break, in terms of use of that knob, if instead of
"zeroing" the AGGREGATOR, the local AS (as seen from the outside
world, in the case of confederations) were used?

Would the functionality of the knob, in reducing updates, be preserved?

Would routes be considered malformed or would it trigger any other bad behavior?

Perhaps this is a way of resolving the conflict between this knob and
the AS0 draft?

Brian

On Fri, Dec 2, 2011 at 8:12 AM, Daniel Ginsburg  wrote:
> Hi,
>
> This is true that "no-aggregator-id" knob zeroes out the AGGREGATOR attribute.
>
> The knob, as far as I was able to find out, dates back to gated and there's a 
> reason why it was introduced - it helps to avoid unnecessary updates. Assume 
> that an aggregate route is generated by two (or more) speakers in the 
> network. These two aggregates differ only in AGGREGATOR attribute. One of the 
> aggregates is preferred within the network (due to IGP metric, for instance, 
> or any other reasons) and is announced out. Now if something changes within 
> the network and the other instance of the aggregate becomes preferred, the 
> network has to issue an outward update different from the previous only in 
> AGGREGATOR attribute, which is completely superfluous.
>
> If the network employs the "no-aggregator-id" knob to zero out the AGGREGATOR 
> attribute, both instances of the aggregate route are completely equivalent, 
> and no redundant outward updates have to be send if one instance becomes 
> better than another due to some internal event, which nobody in the Internet 
> cares about.
>
> In other words, the "no-aggregator-id" knob has valid operational reasons to 
> be used. And, IMHO, the draft-ietf-idr-as0-00 should not prohibit AS0 in 
> AGGREGATOR attribute.
>
> On 02.12.2011, at 1:56, Jeff Tantsura wrote:
>
>> Hi,
>>
>> Let me take it over from now on, I'm the IP Routing/MPLS Product Manager at 
>> Ericsson responsible for all routing protocols.
>> There's nothing wrong in checking ASN in AGGREGATOR, we don't really want 
>> see ASN 0 anywhere, that's how draft-wkumari-idr-as0 (draft-ietf-idr-as0-00) 
>> came into the worlds.
>>
>> To my knowledge - the only vendor which allows changing ASN in AGGREGATOR is 
>> Juniper, see "no-aggregator-id", in the past I've tried to talk to Yakov 
>> about it, without any results though.
>> So for those who have it configured - please rethink whether you really need 
>> it.
>>
>> As for SEOS - understanding that this badly affects our customers and not 
>> having draft-ietf-idr-error-handling fully implemented yet, we will 
>> temporarily disable this check in our code.
>> Patch will be made available.
>>
>> Please contact me for any further clarifications.
>>
>> Regards,
>> Jeff
>>
>> P.S. Warren has recently  included AGGREGATOR in the draft, please see
>>
>> 2. Behavior
>>   This document specifies that a BGP speaker MUST NOT originate or
>>   propagate a route with an AS number of zero.  If a BGP speaker
>>   receives a route which has an AS number of zero in the AS_PATH (or
>>   AS4_PATH) attribute, it SHOULD be logged and treated as a WITHDRAW.
>>   This same behavior applies to routes containing zero as the
>>   Aggregator or AS4 Aggregator.
>>
>



Re: Outgoing SMTP Servers

2011-10-25 Thread Brian Dickson
Owen wrote:

>On Oct 25, 2011, at 3:29 AM,  wrote:
>
>> On Tue, 25 Oct 2011 02:35:31 PDT, Owen DeLong said:
>>
>>> If they are using someone else's mail server for outbound, how, exactly do 
>>> you control
>>> whether or not they use AUTH in the process?
>>
>> 1) You don't even really *care* if they do or not, because...
>>
>> 2) if some other site is running with an un-AUTHed open port 587, the 
>> miscreants will
>> find it and abuse it just like any other open mail relay. The community will
>> deal with it quick enough so you don't have to. And at that point, it's the
>> open mail relay's IP that ends up on the block lists, not your mail relay's 
>> IP.
>>
>But that applies to port 25 also, so, I'm not understanding the difference.
>
>> Other people running open port 587s tends to be quite self-correcting.
>>
>
>At this point, so do open port 25s.
>
>Owen

I'll try to explain with text stick-diagrams...

The players are:
G - good user
B - botnet host
I - ISP
O - open relay
S - mail-submission relay
V - victim SMTP/mailbox host

It's all about how port-25 traffic containing SPAM gets to machine
"V". (Or not, which is the preferred situation.)

Possible routes include:
B.25 -> (I allows 25) -> O -> V (classic open relay) [SPAM]
B.25 -> (I allows 25) -> V (new mode, and what William Herrin is
talking about) [SPAM]
B.587 -> (I !allow 25) -> V (but that makes no sense - how does B
authenticate to the victim? She doesn't!!) [BLOCKED]
B.587 -> (I !allow 25) -> S (ditto - not an open unauthenticated
relay, only allows authenticated relaying!!!) [BLOCKED]

Meanwhile, we have:
G.587 -> (I !allow 25) -> S.g.587/.25 (mail submission gateway for G)
-> V.25 [NOT-SPAM && NOT-BLOCKED]

S.g is either G's enterprise mail server, or G's home mail server, or
G's ISP themselves, or some other S to which G can authenticate.
S.g receives on 587, and sends on 25, and is a generally reputable
port-25 host (whatever that means).

So, basically, blocking 25 while leaving 587 open removes all the avenues
for direct botnet spam.
Botnet sources that authenticate become trackable on the auth hosts, and
easy to shut down.
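
The path enumeration above reduces to a tiny decision table; a sketch (the
function and its parameters are illustrative, not any real mail-server
API):

```python
# Toy model of the paths above: the ISP blocks outbound port 25, and
# port-587 relays require authentication.
def spam_delivered(uses_port_25: bool, can_authenticate: bool) -> bool:
    if uses_port_25:
        return False          # B.25 -> ... paths: blocked at the ISP edge
    return can_authenticate   # B.587 paths: blocked unless bot can AUTH

assert spam_delivered(True, False) is False    # classic direct-to-MX bot
assert spam_delivered(False, False) is False   # bot without credentials
# A bot with stolen credentials gets through, but is trackable at the
# auth host and easy to shut down, per the text:
assert spam_delivered(False, True) is True
```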

Is there some path not listed above that could allow a spammer (botnet
host) behind the ISP to send email, without having a relay host to
which it can authenticate, that I'm not seeing?

Brian



Re: Cogent & HE

2011-06-09 Thread Brian Dickson
RAS wrote:
>On Thu, Jun 09, 2011 at 12:55:44AM -0700, Owen DeLong wrote:
>>
>> Respectfully, RAS, I disagree. I think there's a big difference
>> between being utterly unwilling to resolve the situation by peering
>> and merely refusing to purchase transit to a network that appears to
> offer little or no value to the purchaser or their customers.

>Owen, can you please name me one single instance in the history of the
>Internet where a peering dispute which lead to network partitioning did
>NOT involve one side saying "hey, we're willing to peer" and the other
>side saying "no thanks"? Being the one who wants to peer means
>absolutely NOTHING here, the real question is which side is causing the
>partitioning, and in this case the answer is very clearly HE.

I don't know if Owen can, but I know I can.

Back in the day, when there were many fewer Tier 1s (but the number was
growing), there were enough disputes over peering requests that there was
a real danger of things getting regulated (e.g. by the dreaded FCC).

As part of one of the many mergers, the biggest player at that time (AS
701) made their peering requirements public, *and* honored those
requirements.

So, long history short: there were in fact peering disputes that had one
side saying "hey, we want to peer" and the other side saying "you don't
have enough traffic", or "your ratio is too imbalanced", or "you're my
customer -- tough!".
And some of those got resolved, by the ratios changing or the traffic
levels becoming sufficiently high. (I can historically mention AS 6453.)

Some of the other early players didn't play fair, and to my knowledge
still don't. You have to know someone, or be named "Ren", to get peering
with them. (Sorry, Ren. :-))

IMHO, what Cogent is effectively trying to do is extort "paid peering",
masquerading as transit.

Personally, I think the global traffic patterns, loss/latency/jitter, and
general karma of the Internet would be improved if those who currently
peer with Cogent were to evaluate the impact of de-peering them:

- How many networks are *single*-homed behind Cogent?
- Is anyone who *needs* Internet connectivity that unwise (to be
single homed anywhere, let alone behind Cogent)?
- If they *are* single-homed-to-Cogent, they aren't *your* customers. :-)
- (This could be applied to both IPv6 *and* IPv4 - the logic is the same)

Brinksmanship, like virtue or stupidity, is its own reward.

Brian

P.S. In the ancient game of go, there's a special rule about the two
players alternately recapturing a single stone (the "ko" rule), which
limits the repetition to N times for very small N.
The game becomes futile and pointless beyond a certain number of repeated
moves.
Ditto for not peering.



RE: 1/8 and 27/8 allocated to APNIC

2010-01-22 Thread Brian Dickson
I think it would certainly be useful, both diagnostically and
operationally, for IANA and the RIRs to *actually announce* the unused
space, and to run either or both of tar pits and honey pots on it, for
just such a reason: to gauge problems that might exist in "unused" space
*before* it is assigned.

And, it'd be nice if there were a check-box for "I volunteer for the pain".
In the movies, where something bad can happen and they draw straws, it is almost
always drawing straws from among volunteers.

There are certainly reasons for wanting to identify, and not assign, space
that has "issues" to certain recipients, or when certain conditions exist.

E.g.: critical-infrastructure /24s; initial assignments (where the
recipient doesn't have another block into which to renumber internally);
less technically adept recipients (e.g. in developing nations, where
adding "pain" would be unusually cruel).

Brian

-Original Message-
From: Nick Hilliard [mailto:n...@foobar.org] 
Sent: January-22-10 3:09 PM
To: Brian Dickson
Cc: William Allen Simpson; nanog@nanog.org
Subject: Re: 1/8 and 27/8 allocated to APNIC

On 22/01/2010 16:32, Brian Dickson wrote:
> So, if the tainted *portions* of problem /8's are set aside

What portion of 1/8 is untainted?  Or any other /8 that the IANA has
identified as having problems?  How do you measure it?

How do you ensure that other /8s which don't _appear_ to have problems really 
don't have
problems due to invisible use?  

IANA hands out /8s.  We know that some of these are going to cause serious
problems, but life sucks and we just have to deal with what happens.




RE: 1/8 and 27/8 allocated to APNIC

2010-01-22 Thread Brian Dickson
Nick Hilliard wrote:

Someone else mentioned that we are now scraping the bottom of the ipv4
barrel.  As of two days ago, there were quantifiable problems associated
with 13 out of the 26 remaining /8s.  12 of these are known to be used to
one extent or another on internet connected networks, and are seen as
source addresses on various end-points around the place.  One of them
(223/8) has rfc-3330 issues (although later fixed in 5735).

So, the issue for IANA is how to allocate these /8s in a way which is
demonstrably unbiased towards any particular RIR.  The solution which
they've agreed on with the RIRs looks unbiased, unpredictable in advance,
calculable in retrospect and best of all, it's not open to abuse.  And
while Chuck Norris could probably predict the footsie, the djia and the
hang-seng weeks in advance, this sort of prognostication appears to be
beyond the capabilities of ICANN, IANA and the RIRs.  At least if it isn't,
no-one's saying anything.

Do you have a better suggestion about how to allocate tainted address space
in a way that is going to ensure that the organisations at the receiving
end aren't going to accuse you of bias?

My response:

The granularity of allocations is arbitrary, and when scraping the bottom of
the barrel, where there are known problems, it may be time to get more
granular.

There's really no difference in managing a handful of /N's rather than /8's, if 
N is not
that much bigger than 8.

The granularity boundary probably should stay around or above the biggest 
assignments a given
RIR is expected to make, or has made.

So, if the tainted *portions* of problem /8's are set aside, you end up with 
sets of varying
sizes of /N. E.g. if there is one /24 that is a problem, you set that aside, 
and end up with
a set that consists of one each of /9 through /24. Even if you set aside a /16, 
you get a set
which is /9, /10, /11, /12, /13, /14, /15, and /16.
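The complement-set arithmetic above is easy to check with Python's
`ipaddress` module; the /8 and the tainted /16 below are purely
illustrative, not any particular allocation:

```python
# Excluding a tainted /16 from a /8 leaves exactly one each of /9
# through /16, as described above. Prefixes here are illustrative.
import ipaddress

block = ipaddress.ip_network("1.0.0.0/8")
tainted = ipaddress.ip_network("1.2.0.0/16")

# address_exclude() yields the CIDR blocks covering block minus tainted
remainder = sorted(block.address_exclude(tainted), key=lambda n: n.prefixlen)
print([n.prefixlen for n in remainder])   # [9, 10, 11, 12, 13, 14, 15, 16]
```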

From a documentation and allocation perspective, you could even treat that as
giving the whole of the /8 to the RIR, and having them give back (assign) the
problem chunk to IANA.

Do this for the 13 problem /8's, and then group the resulting untainted sets 
and give them out.
Give them out according to the needs of the RIRs, and the larger community will 
consider it fair.
No one will think badly of giving the /9's to one of the big 3 (APNIC, ARIN, or 
RIPE), and the smaller
ones to the other RIRs, I'm sure.

As long as there are no tainted portions assigned to the RIRs, I don't see how 
this could be a problem.

Brian



RE: DNS question, null MX records

2009-12-16 Thread Brian Dickson
I realize we're a bit off-topic, but to be tangential to the original topic, 
and thus barely relevant:

(Presuming the "sink.arpa." thing succeeds, big presumption I realize...)

So, how about using sink.arpa. as a(n) MNAME?

Or perhaps, one of the hosts listed in AS112?

Maybe a new AS112 entry that resolves to one of the IP addresses mentioned 
already (127.0.0.1 and friends)?
(Too bad there isn't a reserved IP address that definitively goes to /dev/null 
locally. Or is there?)

Does an MNAME really need to be fully qualified?

Cute idea:  use the "search" of non-fully-qualified names, to direct bad UPDATE 
traffic to a very local server, e.g. "ns1" (no dot).

It would also be nice to have a place-holder in DNS that "resolves" (in the 
loose sense) to an appropriate pre-defined "thing", like the default nameserver 
for a client, or "yourself" for a resolver.

But I digress. :-)

Brian

-Original Message-

> When I attempted to document a similar idea (using an empty label in the =
> MNAME field of an SOA record in order to avoid unwanted DNS UPDATE =
> traffic) the consensus of the room was that the idea was both =
> controversial and bad :-)
> 
> http://tools.ietf.org/html/draft-jabley-dnsop-missing-mname-00

Well UPDATE traffic is supposed to go to the nameservers listed in
the NS RRset preferring the MNAME if and only if the MNAME is a
nameserver.  Lots of update clients don't do it quite right but
there are some that actually send to all the nameservers.

Setting the MNAME to "." does not actually address the problem.

Mark

> Joe
> 
> 
-- 
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: ma...@isc.org
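The target-selection rule Mark describes above (send UPDATEs to the NS
RRset, preferring the MNAME only when it is one of the listed
nameservers) can be sketched in a few lines; this is a toy
illustration with made-up names, not how any particular update client
is implemented:

```python
# Sketch of UPDATE-target selection: clients are supposed to send DNS
# UPDATEs to the zone's NS RRset, trying the SOA MNAME first only if it
# is actually one of the nameservers. Names below are illustrative.
def update_targets(mname, ns_rrset):
    """Order candidate UPDATE targets, MNAME first iff it's in the NS set."""
    targets = list(ns_rrset)
    if mname in targets:
        targets.remove(mname)
        targets.insert(0, mname)
    return targets

print(update_targets("ns1.example.", ["ns2.example.", "ns1.example."]))
# -> ['ns1.example.', 'ns2.example.']
print(update_targets(".", ["ns2.example.", "ns1.example."]))
# -> ['ns2.example.', 'ns1.example.']  (a "." MNAME is simply never preferred)
```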




RE: backupdns.com down?

2009-11-22 Thread Brian Dickson
Ditto (puck.nether.net +9).

Thanks on behalf of everyone, Jared.

For those wanting further diversity, I also recommend using any or all of:

everydns.net
twisted4life.com
afraid.org

Also easy to use and set-up, IMHO. YMMV.

Brian Dickson


From: Christopher Morrow [morrowc.li...@gmail.com]
Sent: Friday, November 20, 2009 6:43 PM
To: Jared Mauch
Cc: nanog@nanog.org
Subject: Re: backupdns.com down?

On Fri, Nov 20, 2009 at 8:26 AM, Jared Mauch  wrote:
> A plug for the secondary dns service I run (free!)
>

+9

it's also v6 enabled (if that matters to you) and pretty simple to setup/use.

Thanks Jared!
-Chris

> http://puck.nether.net/dns/
>
> Jared Mauch
>
> On Nov 20, 2009, at 7:09 AM, James Bensley  wrote:
>
>> Not working for me.
>>
>>
>> -- Regards,
>> James ;)
>>
>> Joan
>> Crawford<http://www.brainyquote.com/quotes/authors/j/joan_crawford.html>
>> - "I, Joan Crawford, I believe in the dollar. Everything I earn, I
>> spend."
>
>




RE: Upstream BGP community support

2009-11-02 Thread Brian Dickson
(Apologies for top-replying, but hey, it makes it easier to ignore stuff you've 
already read.)

I think the main things to consider in identifying what things "belong" in a 
standardized community are:
- is it something that is really global, and not local, in behaviour and scope?
- is it something that is mostly a signalling-of-intent "flag", and not a 
modification of path-selection?
--> (because) It should not require universal adoption to be useful - if it is 
ignored by A, and passed to B, will it still be useful/semantically correct?
- is it something that will either be applied by the originator to all 
instances of a given prefix (i.e. sending X to neighbour A and neighbour B, 
with community M);
- or something that will be applied by the originator to some instances of a 
given prefix (i.e. sending X to A with N, and X to B without N)
- is the intent being signalled easily understood, ideally in an absolute sense?

The two things I can think of, which would benefit from such a community, and 
where nearly-universal adoption would be of significant benefit, are:
(1) Blackhole this prefix, and;
(2) Use this prefix as a last resort only.
(Please add to this list if you think of any other cases meeting the above 
criteria).

Comments on particular proposed-standard-community cases follows...

I bring up (2) because it is otherwise *very* difficult to signal (or achieve), 
and often leads to potential wedgies.

The existing mechanisms to achieve (2) are often indistinguishable from Traffic 
Engineering - but this is very much not TE.

TE is "reduce load on this instance". (2) is "Don't use this for traffic unless 
you have no paths not marked with this community".

TE will typically be observed with small numbers of AS-prepending. (2) will be 
observed with large numbers of AS-prepending.

And, my guess would be, 80% of the prepending would not be needed if (2) were 
standardized and in use.

It might even reduce significantly the observed instances of wedgies and/or 
persistent oscillations in the wild.
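The receiving side of case (2) might look something like the following
sketch; the community value is invented for illustration, since no such
standardized value existed:

```python
# Sketch of honoring a hypothetical standardized "last resort only"
# community on the receiving side: tagged paths are eligible only when
# no untagged path to the prefix remains. The community value 65535:667
# is made up for this example, not a registered well-known community.
LAST_RESORT = "65535:667"

def usable_paths(paths):
    """Prefer paths without the last-resort tag; fall back if none exist."""
    normal = [p for p in paths if LAST_RESORT not in p["communities"]]
    return normal if normal else paths

paths = [
    {"via": "backup-transit", "communities": [LAST_RESORT]},
    {"via": "primary-transit", "communities": []},
]
print([p["via"] for p in usable_paths(paths)])        # ['primary-transit']
print([p["via"] for p in usable_paths(paths[:1])])    # ['backup-transit']
```

Note how this differs from TE knobs: the tag expresses an absolute intent ("don't use unless you must") rather than a relative preference, which is why prepending approximates it so poorly.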

Brian

-Original Message-
From: Jack Bates [mailto:jba...@brightok.net] 
Sent: November-02-09 4:03 PM
To: Joel Jaeggli
Cc: nanog@nanog.org
Subject: Re: Upstream BGP community support

Joel Jaeggli wrote:
> more accessible and therefore more likely to be used, I don't think
> traffic engineering is something I particularly want to encourage to
> excess but RTBH is a know that more people need access to quite frankly.

I think creating a standard or at least a template might push more 
people to adopt communities support and to use them. There are 
definitely times it is useful to redirect traffic 2 ASNs away through a 
longer route. In some cases (like my small self), I try to adopt policies 
that allow communities sent to me to be rewritten to the corresponding 
peers' communities, to alter things as far as 3 ASNs away from my customer.

Jack



RE:

2009-09-16 Thread Brian Dickson
RAS wrote:

[ lots of good stuff elided for brevity ]

> c) lower the mtu on the ds3 interface to 1500.

This will have another benefit, if it is done to all such interfaces on the two 
devices.
(Where by "all such interfaces", I mean "everything with set-able MTU > 1500".)

Configuring one common MTU size on the interfaces, means the buffer pool on the 
box will switch from several pools of varying sizes, to one pool.
The number of buffers per pool gets pro-rated by speed * buffer size, so 
high-speed, high-MTU interfaces get a gigantic chunk of buffer space.

Once you reduce things to one pool, you significantly reduce the likelihood of 
buffer starvation.

Note that the discussion on benefits to buffers is old info, and may even be 
moot these days, but buffers are fundamental enough that I doubt it.

However, the original problem, iBGP not working, will definitely be resolved by 
this.

Note also, changing this often won't take effect until a reboot, and/or may 
result in buffer re-carving with an attendant "hit" of up to 30 seconds of no 
forwarding packets (!!). You've been warned...

In other words, plan this carefully, and make sure you have remote hands 
available or are on site. This qualifies as "deep voodoo". ;-)

Brian



RE:

2009-09-15 Thread Brian Dickson
And more specifically, possibly an interface MTU (or ip mtu, I forget which).

If there is a mismatch between ends of a link, in one direction, MTU-sized 
packets get sent, and the other end sees those as "giants".

I've seen situations where the MTU is calculated incorrectly, when using some 
technology that adds a few bytes (e.g. VLAN tags, MPLS tags, etc.).

On Cisco boxes, when talking to other Cisco boxes, even.

Take a look at the interfaces over which the peering session runs, at both ends.
I.e., is this the only BGP session *over that interface*, for the local box?

(It might not be the end you think it's at, BTW.)

Oh, and if you find something, please, let us know.
War stories make for great bar BOFs at NANOG meetings. :-)

Brian

-Original Message-
From: Mikael Abrahamsson [mailto:swm...@swm.pp.se] 
Sent: September-14-09 2:39 PM
To: Michael Ruiz
Cc: nanog@nanog.org
Subject: Re: 

On Mon, 14 Sep 2009, Michael Ruiz wrote:

> I am having difficulty maintaining my BGP session from my 6509 with
> Sup-7203bxls to a 7206 VXR NPE-400.  The session bounces every 3
> minutes.  I do have other IBGP sessions that are established with no
> problems, however, this is the only IBGP peer that is bouncing
> regularly.
>
> What does exactly the message mean and how do I stabilize this?  Any
> help will be appreciated.

This is most likely an MTU problem. Your SYN/SYN+ACK goes thru, but then 
the first fullsize MSS packet is sent, and it's not getting to the 
destination. 3 minutes is the dead timer for keepalives, which are not 
getting thru either because of the stalled TCP session.

-- 
Mikael Abrahamsson    email: swm...@swm.pp.se




Re: Redundancy & Summarization

2009-08-21 Thread Brian Dickson
> My institution has a single /16 spread across 2 sites: the lower /17 is
> used at site A, the upper /17 at site B.  Sites A & B are connected
> internally.  Currently both sites have their own ISPs and only advertise
> their own /17's.  For redundancy we proposed that each site advertise
> both their own /17 and the whole /16, so that an ISP failure at either
> site would trigger traffic from both /17s to reconverge towards the
> unaffected location.

There are two different ways to achieve almost-identical results.

However, one is 50% more "green" than the other, i.e. friendlier to other 
network operators.

These two choices are functionally equivalent, and possible, only because 
things currently work for both your /17's.

Here are the two ways to do this:

One is:
- announce /17 "A" and /16 from uplink ISP-A
- announce /17 "B" and /16 from uplink ISP-B
- This results in 3 prefixes globally: A, B, and /16.

The other is:
- announce /17 "A" and /17 "B", with different policies (i.e. prepend your AS 
once or twice), at *both* ISPs.
- This results in 2 prefixes globally: A and B.
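The slot arithmetic behind the two options can be checked quickly; the
/16 below is a stand-in (from the benchmarking range), not the poster's
actual block:

```python
# Option one puts three prefixes in everyone's FIB (A, B, and the /16);
# option two puts only the two /17s there. The /16 is illustrative.
import ipaddress

whole = ipaddress.ip_network("198.18.0.0/16")     # stand-in /16
site_a, site_b = whole.subnets(prefixlen_diff=1)  # the two /17s

option_one = {site_a, site_b, whole}   # /17 + /16 at each ISP
option_two = {site_a, site_b}          # both /17s at both ISPs

# Both options cover exactly the same address space...
assert site_a.supernet() == whole and site_b.supernet() == whole
# ...but option two needs one fewer global routing slot.
print(len(option_one), len(option_two))   # 3 2
```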

In all cases, as long as one ISP link is up, there is a path to both A and B.
In most cases, the best path to A or B, is *mostly*, but not completely, under 
your influence.

So, the main difference to everyone else is, the presence or absence of a 
routing slot (/16), and/or extra copies of A and/or B.

The routing slot occupies a slot in data-forwarding-plane hardware that is very 
limited.

The extra copies of A and B (and extra copies of your AS in the AS-path) only 
eat cheap control-plane memory.

If everyone did things nicely, we would not have as much of a crisis on the 
hardware side as we (collectively) do.

Please consider being part of the solution (announcing only /17's, but in both 
places) rather than part of the problem (adding a new redundant /16 to 
everyone's routers, including in the hardware slots.)

Brian



Re: Great Suggestion for the DNS problem...?

2008-08-28 Thread Brian Dickson

Alex Pilosov wrote:

On Thu, 28 Aug 2008, Brian Dickson wrote:

  

However, if *AS-path* filtering is done based on IRR data, specifically
on the as-sets of customers and customers' customers etc., then the
attack *can* be prevented.

The as-path prepending depends on upstreams and their peers accepting
the prefix with a path which differs from the expected path (if the
upstreams register their as-sets in the IRR).


You are thinking about this specific exploit - which may in fact be
stopped by as-path-filtering. However, that's not the problem you are
solving. Problem is the hijacking. There are many other ways to reinject
traffic closer to victim - will require attacker to work a little harder,
but not really fix the problem. (Think, GRE tunnels, no-export,
no-export-to-specific-peer, etc).



  


Very true. Tying allocations and assignments to ASN-of-origin and 
providing a suitable place
to validate such, for building prefix and as-path filters, would do a 
lot towards further limiting
the ability to hijack prefixes - but only to the degree to which 
providers filter their customers.


And that's only on routes - to say nothing of packet filtering (BCP 38)...


So, if the upstreams of as-hijacker reject all prefixes with an as-path
which includes as-bar (because as-bar is not a member of any customer's
as-set expansion), the attack fails.


What's to stop me from adding as-bar into my as-set? To do what you are
describing, you will have to enforce "export AS-LEFT" and "import
AS-RIGHT" rules on every pair of AS-PATH adjacencies. And I'm not sure if
existing tools can do that - or how many existing adjacencies fail that
test.


  


True, there is nothing actually stopping you from doing this.

However (and this is both the key, and a big presumption), changes to 
as-sets are the kind of thing that automatic provisioning tools should 
alert on, and require confirmation of, before updating prefix-lists and 
as-path lists based on the new members of the as-set.

While prefixes are routinely added, new transit relationships at a BGP 
level don't happen all *that* often.


The idea isn't just to stop the prefix announcement, it is to trap 
activity that would otherwise permit the prefix hijack, at the time it 
occurs and before it takes effect.

The closer this occurs to the hijacker, the more likely it is that 
appropriate responses can be directed at the bad guy, and with a greater 
likelihood of success (e.g. prosecution).

The threat alone of such a response may act as a suitable deterrent...

As for the filter building and enforcement mechanisms:

The inbound as-path filters need to permit the permutations of paths 
that result from expanding as-sets, but that can be accomplished 
unilaterally on ingress, without enforcement beyond the immediate peering 
session. The expansion is recursive: behind each ASN in an as-set there 
may be a corresponding as-set of its own, and so on; each gets expanded 
and added to the permitted paths via that ASN. Lather, rinse, repeat.

All filtering is local, although the more places filtering happens, the 
better.
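The "lather, rinse, repeat" expansion can be sketched as a small
recursion; the registry contents here are invented for illustration,
and a real tool would fetch as-set members from the IRR:

```python
# Recursive as-set expansion for building as-path filters. AS_SETS is
# hypothetical illustrative data, not real IRR contents.
AS_SETS = {
    "AS-CUSTOMER": ["AS64500", "AS-DOWNSTREAM"],
    "AS-DOWNSTREAM": ["AS64501", "AS64502"],
}

def expand_as_set(name, registry, seen=None):
    """Expand an as-set into the plain ASNs it (transitively) contains."""
    seen = set() if seen is None else seen
    if name in seen:              # guard against circular as-set references
        return set()
    seen.add(name)
    asns = set()
    for member in registry.get(name, []):
        if member in registry:    # nested as-set: lather, rinse, repeat
            asns |= expand_as_set(member, registry, seen)
        else:                     # plain ASN
            asns.add(member)
    return asns

print(sorted(expand_as_set("AS-CUSTOMER", AS_SETS)))
# -> ['AS64500', 'AS64501', 'AS64502']
```

The permitted as-paths for the customer session are then the feasible orderings of these ASNs (plus prepending repeats), which is what makes an injected foreign ASN stand out and get rejected.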


Brian



Re: Revealed: The Internet's well known

2008-08-28 Thread Brian Dickson

(Sorry - repost with fixed Subject line. My bad. -briand)

Alex P wrote:


*) There is no currently deployable solution to this problem yet.

*) Filtering your customers using IRR is a requirement, however, it is 
not

a solution - in fact, in the demonstration, we registered the /24 prefix
we hijacked in IRR. RIRs need to integrate the allocation data with their
IRR data.

-alex [your former moderator]


Kind of true. When doing *prefix* filtering, this kind of hijack is not
preventable.

However, if *AS-path* filtering is done based on IRR data, specifically
on the as-sets
of customers and customers' customers etc., then the attack *can* be
prevented.

The as-path prepending depends on upstreams and their peers accepting
the prefix with a path
which differs from the expected path (if the upstreams register their
as-sets in the IRR).

If the as-path filter only allows generally-feasible as-paths from
customers, where the permitted
variations are just "N copies of ASfoo" (where "foo" is a member of an
as-set), then adding
a fake "ASbar" in the as-path will cause the prefix to be rejected.

If you follow the diagram from the presentation, information about the
*real* path to the victim,
from the perspective of the hijacker, requires that the AS's on that
path not see the hijacked prefix
as announced by the hijackers.

This means that if the AS's traversed are: as-hijacker, as-bar,
as-victim, then as-bar must *not* see
the hijacked victim, for the attack to work. By adding "as-bar" into the
as-path of the hijacked prefix,
the loop-prevention logic of BGP makes sure as-bar can't see the
hijacked prefix.
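The loop-prevention behaviour being exploited is simply this check,
sketched here with made-up ASNs:

```python
# BGP loop prevention: a router rejects any route whose AS_PATH already
# contains its own ASN. The hijacker exploits this by prepending as-bar
# so that as-bar never accepts (or propagates back) the hijacked route.
# All ASNs below are invented for illustration.
def accepts(local_asn, as_path):
    """eBGP loop check: reject paths already containing our own ASN."""
    return local_asn not in as_path

AS_HIJACKER, AS_BAR, AS_VICTIM = 64666, 64512, 64999
hijacked_path = [AS_HIJACKER, AS_BAR, AS_VICTIM]

print(accepts(AS_BAR, hijacked_path))   # False: as-bar drops the route
print(accepts(64513, hijacked_path))    # True: everyone else accepts it
```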

So, if the upstreams of as-hijacker reject all prefixes with an as-path
which includes as-bar (because as-bar is not
a member of any customer's as-set expansion), the attack fails.

I hope I haven't managed to confuse folks too much.

But, the short answer is:

If you use the IRR, the full value is best realized by adding *as-path*
filters to the things you build
from the IRR data, and applying them to your customers (and peers !!).

Oh, and if you already do IRR stuff, it's really quite easy to do.

Brian Dickson









Re: Great Suggestion for the DNS problem...?

2008-07-28 Thread Brian Dickson
What would the ip-blocking BGP feed accomplish? Spoofed source 
addresses are a staple of the DNS cache poisoning attack.
Worst case scenario, you've opened yourself up to a new avenue of 
attack where your nameservers are receiving spoofed packets intended 
to trigger a blackhole filter, blocking communication between your 
network and the legitimate owner of the forged ip address.




Yes, but what about blocking the addresses of recursive resolvers that 
are not yet patched?


That would certainly stop them from being poisoned, and incent their 
owners to patch...


1/2 :-)

Brian


Michael Smith wrote:

Still off topic, but perhaps a BGP feed from Cymru or similar to block 
IP addresses on the list?

Regards,

Mike










Re: cooling door

2008-03-30 Thread Brian Dickson


Paul Vixie wrote:

if you find me 300Ksqft along the caltrain fiber corridor in the peninsula
where i can get 10mW of power and have enough land around it for 10mW worth
of genset, and the price per sqft is low enough that i can charge by the
watt and floor space be damned and still come out even or ahead, then please
do send me the address.


Well, there are alternatives to the 10MW power feed and 10MW genset... 
10-20MW (or more) of nuclear power/gensets.

Get off the grid, sell some power back to the grid, reduce your costs 
for power, especially peak-rate power, and be able to brag about having 
your own nuke plant.

This may give you more flexibility in finding the space along the fibre 
corridor...


BTW, I did some quick googling and found a few interesting links related 
to smaller nuclear generator systems:


http://peswiki.com/index.php/Directory:Toshiba's_Micro_Nuclear_Reactor
http://www.eia.doe.gov/cneaf/nuclear/page/analysis/nucenviss2.html
http://www.eoearth.org/article/Small_nuclear_power_reactors
http://www.uic.com.au/nip60.htm

And if you have access to a deep harbor:
http://atomic.msco.ru/cgi-bin/common.cgi?lang=eng&skin=menu1&fn=projects

:-)

Brian


Mailing list newbies suggestion

2008-03-22 Thread Brian Dickson


Here's a suggestion - perhaps it should go to nanog-futures, not sure...

Since newbies have to first subscribe before they can post...

Why not have the subscription procedure include sending the newbie
a link to a page with newbie-related material,
much like the nanog page on tools and resources,
or perhaps a wiki (so that other list members can update it,
to minimize the impact to their mailing-list joy that newbies
appear to cause)?

Brian




Re: v6 subnet size for DSL & leased line customers

2008-01-02 Thread Brian Dickson


>> As to "there must be better knobs" I think it may be a little late 
for that; by design (or as a consequence of it) the set of IPv6 knobs is 
the same as the set of IPv4 knobs.
> The trouble is that BGP doesn't have a meaningful inter-AS metric. 
(Although there is something that is called that.) If I want to increase 
my path length by 10% through a certain neighboring AS, I don't get to 
do that. I only get to double or triple it. (Unless I was doing very 
heavy prepending to begin with.)


Actually, while it isn't a true metric as such, there *is* just such a knob.

The "origin" attribute can act as a fractional AS-path length. It's 
mandatory and transitive.

It gets evaluated after AS-path length, but before other attributes.

It has three possible values (IGP, EGP, and incomplete), and as such 
gives a limited ability to influence path choice at third-party 
locations quite distant.

The only caveat is that some parties may mess with it on prefixes they 
receive.


I've used it in the past, with considerable success.
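The tie-breaking behaviour can be sketched as a sort key; this is a toy
model of just the two relevant steps of best-path selection, with
made-up routes:

```python
# Origin is compared only after AS-path length, so it acts like a
# fractional path length: a lower-ranked origin wins a length tie.
# Routes below are illustrative.
ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}   # lower is preferred

def path_key(route):
    """Sort key covering the AS-path-length and origin steps only."""
    return (len(route["as_path"]), ORIGIN_RANK[route["origin"]])

routes = [
    {"via": "peer-A", "as_path": [64500, 64510], "origin": "incomplete"},
    {"via": "peer-B", "as_path": [64501, 64511], "origin": "igp"},
]
best = min(routes, key=path_key)
print(best["via"])   # peer-B: equal path lengths, so origin breaks the tie
```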

Brian Dickson