Re: Our first inbound email via IPv6

2012-06-10 Thread Paul Vixie
Livingood, Jason jason_living...@cable.comcast.com writes:

 In preparation for the World IPv6 Launch, inbound (SMTP) email to the
 comcast.net domain was IPv6-enabled today, June 5, 2012, at 9:34 UTC.
 Roughly one minute later, at 9:35:30 UTC we received our first
 inbound email over IPv6 from 2001:4ba0:fff4:1c::2. That first bit of mail
 was spam, and was caught by our Cloudmark messaging anti-abuse platform
 (the sender attempted a range of standard spam tactics in subsequent
 connections). ...

rim shot:

i suggest that the e-mail industry consider a two-level approach to
rejecting ipv6 spam based on source address.

for more information see:

http://www.circleid.com/posts/20110607_two_stage_filtering_for_ipv6_electronic_mail/

paul



Re: Our first inbound email via IPv6

2012-06-10 Thread Paul Vixie
Randy Bush ra...@psg.com writes:

  ...
 i have assiduously avoided gaining serious anti-spam fu.  but it seems
 to me that ipv6 does not create/enable significantly more spam-bots.

the malware will generally have complete control over the bottom 64 bits
of an ipv6 address. there's no reason to expect to ever receive more than
one spam message from any single ipv6 source.

so, we'll all be blackholing /64's.

moreover, there are going to be more native endpoints in ipv6 than there
were in ipv4, since the NAT incentives are very different in the larger
address pool.

so, we'll all need network operators to whitelist the parts of their
address spaces that they plan to send e-mail from, so that we can avoid
having to blackhole things one /64 at a time.
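to make that two-level idea concrete, here's a minimal sketch (the whitelist
prefix, the /64 granularity, and the function names are all invented for
illustration; a real MTA would load operator-published whitelists and a shared
reputation feed):

```python
import ipaddress

# hypothetical operator-published whitelist of mail-source ranges
MAIL_WHITELIST = [ipaddress.ip_network("2001:db8:100::/48")]

# reputation blackhole list, kept at /64 granularity
blackholed = set()

def blackhole(addr: str) -> None:
    """Blackhole the covering /64 of an offending source."""
    blackholed.add(ipaddress.ip_network(addr + "/64", strict=False))

def accept_smtp(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    net64 = ipaddress.ip_network(addr + "/64", strict=False)
    # stage 1: operator whitelist -- only listed space may send mail at all
    if not any(ip in allowed for allowed in MAIL_WHITELIST):
        return False
    # stage 2: per-/64 reputation
    return net64 not in blackholed

blackhole("2001:db8:100:1c::2")
print(accept_smtp("2001:db8:100:1c::9"))   # same /64 as the spammer: False
print(accept_smtp("2001:db8:100:2c::9"))   # different /64, whitelisted: True
```

the point of stage 1 is exactly the one above: without operator whitelists,
stage 2 degenerates into blackholing the ipv6 space one /64 at a time.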

as before: for more information see:

http://www.circleid.com/posts/20110607_two_stage_filtering_for_ipv6_electronic_mail/

paul



Re: ROVER routing security - its not enumeration

2012-06-10 Thread Paul Vixie
Doug Montgomery dougm.tl...@gmail.com writes:

  ...

 I think we debate the superficial here, and without sufficient imagination.
 The enumerations vs query issue is a NOOP as far as I am concerned. With
 a little imagination, one could envision building a box that takes a feed
 of prefixes observed, builds an aged cache of prefixes of interest, queries
 for their SRO records, re-queries for those records before their TTLs
 expire, and maintains a white list of SRO valid prefix/origin pairs that
 it downloads to the router.

this sounds like a steady state system. how would you initially populate it,
given for example a newly installed core router having no routing table yet?

if the answer is, rsync from somewhere, then i propose, rsync from RPKI.

if the answer is, turn off security during bootup, then i claim, bad idea.

 ...

 Point being, with a little imagination I think one could build components
 with either approach with similar black-box behavior.

i don't think so. and i'm still waiting for a network operator to say what
they think the merits of ROVER might be in comparison to the RPKI approach.
(noting, arguments from non-operators should and do carry less weight.)

-- 
Paul Vixie
KI6YSY



rate limiting (Re: Open DNS Resolver reflection attack Mitigation)

2012-06-10 Thread Paul Vixie
Joe Maimon jmai...@ttec.com writes:

 Is there any publicly available rate limiting for BIND?

 How about host-based IDS that can be used to trigger rtbh or iptables?

 Google and Level3 manage to run open resolvers, why can't I?

rate limiting on recursive servers is complicated by the lack of caching
in most stub resolvers and applications. this makes it hard to tell by
pure automation when a request flow is a spoof-source attack and when not.

for most of us this isn't a problem since we'll put access control lists
on our recursive name servers, only allowing queries from on-campus or
on-net.

for intentionally open resolvers, i expect there's a lot of monitoring
and hand tuning, and that many deliberately low-grade attacks get by.

noting that there are at least 15 million open recursive servers (most in
low-quality CPE boxes front-ending cable or DSL links), an attacker has
a long menu of places to send a small number of queries (to each) so that
any rate limiting done by any one of the open recursive servers would not
defend any victims against spoofed-source.

spoofed-source is becoming wildly more popular. that's probably where to
fix this. also the 15 million open recursives would be good to see fixed.

at the moment most attacks are using authority servers, where it's far
easier to automatically tell attack flows from non-attack flows. 
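on the authority side, the classification trick is that legitimate resolvers
cache, so they don't ask for the same answer at high rate; identical responses
toward one client network can therefore be rate limited. a toy token-bucket
sketch of that idea (illustrative only, not BIND's actual RRL code; the rate
numbers and the /24 aggregation are arbitrary choices for the example):

```python
import time
from collections import defaultdict

RATE = 5.0    # responses per second allowed per (client-net, answer) key
BURST = 10.0  # short-term burst allowance

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_response(client_ip: str, qname: str, qtype: str) -> bool:
    """Token-bucket limit keyed on the client /24 plus the answer tuple,
    so an attack flow (many identical answers to one spoofed network)
    is distinguishable from normal resolver traffic."""
    net = ".".join(client_ip.split(".")[:3])  # crude /24 aggregation
    b = buckets[(net, qname, qtype)]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # drop, or truncate to force a TCP retry, instead of answering
```

a spoofed burst of identical queries from one /24 exhausts its bucket after the
burst allowance, while ordinary query mixes never hit the limit.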

-- 
Paul Vixie
KI6YSY



Re: isc - a good business

2012-05-30 Thread Paul Vixie
On 2012-05-30 12:53 AM, Nabil Sharma wrote:
 Paul:

 Where can we read details about the services ISC provided to the FBI,
 and how they were compensated?

it's in the AP News article published a few weeks ago. for an example:

http://www.foxnews.com/scitech/2012/04/23/hundreds-thousands-may-lose-internet-in-july/

 As Mahatma Gandhi says: it is difficult, but not impossible, to
 conduct strictly honest business.

 Sincerely,
 Nabil

the FBI's business dealings are transparent in this case. judge for
yourself whether it's also strictly honest.

paul


Re: rpki vs. secure dns?

2012-05-29 Thread paul vixie
On 5/29/2012 10:27 AM, Stephane Bortzmeyer wrote:
 On Mon, May 28, 2012 at 10:01:59PM +,
  paul vixie vi...@isc.org wrote 
  a message of 37 lines which said:

 i can tell more than that. rover is a system that only works at all
 when everything everywhere is working well, and when changes always
 come in perfect time-order,
 Exactly like DNSSEC. 

no. dnssec for a response only needs that response's delegation and
signing path to work, not everything everywhere.

 So, DNSSEC is doomed :-)

i hope not. if we had to start over on something that can protect the
cache against trivial pollution and also enable new applications like
DANE, we'd be ten years from first prototype instead of ten years from
ubiquity.

paul



Re: rpki vs. secure dns?

2012-05-29 Thread Paul Vixie
On 2012-05-29 5:37 PM, Richard Barnes wrote:
 I agree with the person higher up the thread that ROVER seems like
 just another distribution mechanism for what is essentially RPKI data.

noting, that up-thread person also said "i haven't studied this in detail
so i'm probably wrong."

 But does that distribution method easily allow you to get the full set of 
 available data?
 From what little I know, it seems to me that ROVER is optimized for
 point queries, rather than bulk data access.  Which is the opposite of
 making it easy to get full data :)

that's close to the problem but it is not the problem.

RPKI is a catalogue. it's possible to fetch all of the data you could
need, before starting what's basically the batch job of computing the
filters you will use at BGP-reception-time to either accept or ignore an
incoming route. if your fetch and recompute steps don't work, then
you'll have to continue filtering using stale data. if that data becomes
too stale you're likely to have to turn off the filtering until you can
resynchronize.

ROVER is not a catalogue. it's impossible to know what data you could
need to precompute any route filters, and it's impossible to know what
'all possible rover data' is -- in fact that would be a non sequitur. you
could i suppose query for every possible netblock (of every possible
size) but that's an awful lot of queries and you'd have to do it every
day in order to see new stuff or to know when to forget old stuff.

the problem is in time domain bounding of data validity and data
reachability. ROVER expects you to be able to query for the information
about a route at the time you receive that route. that's point-in-time
validity and reachability, which you might not have depending on where
the DNS servers are and what order you're receiving routes in. RPKI+ROA
expects you to have periodic but complete access to a catalogue, and
then your future use of the data you fetched has only the risk of
staleness or invalidity, but never reachability.
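the catalogue model is visible in a few lines: given a fetched ROA snapshot,
origin validation is a pure offline computation with no reachability dependency
at validation time. a sketch with invented example ROAs, using the usual
valid/invalid/not-found origin-validation semantics:

```python
import ipaddress

# a hypothetical, already-fetched ROA snapshot: (prefix, max_length, origin_asn)
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin: int) -> str:
    """Catalogue-style origin validation: the whole snapshot is local,
    so this runs as a batch job, never as a query at route-reception time."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in ROAS:
        if net.subnet_of(roa_net):
            covered = True  # some ROA speaks for this space
            if asn == origin and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"
```

note that "not-found" is only computable because the snapshot is complete: with
point queries you can never distinguish "no statement exists" from "the server
that would have told me is unreachable".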

as others have stated, there is no reference collection of bad ideas.
otherwise we would have written this one up in 1996 when a couple of dns
people looked at the routing system and said 'hey what about something
like [ROVER]?' and the routing people explained in detail why it
wouldn't work.

paul



Re: rpki vs. secure dns?

2012-05-29 Thread Paul Vixie
ah, the force is strong in this one.

On 2012-05-30 3:52 AM, Shane Amante wrote:
 On May 29, 2012, at 9:23 AM, Alex Band wrote:
 ...

 As far as I know, ROVER doesn't work like that. You can make a positive
 statement about a Prefix+AS combination, but that doesn't mark the
 origination from another AS 'unauthorized' or 'invalid'; there merely isn't
 a statement for it. (Someone please confirm. I may be wrong.)
 Actually, I believe it does.  Specifically, there are two types of DNS RR's:
 a)  RLOCK: Route Lock
 b)  SRO: Secure Route Origin
 Please refer to the following URL for definitions of each: 
 http://tools.ietf.org/html/draft-gersch-grow-revdns-bgp-00#section-3.

as dns is an unreliable protocol, and is not atomic across multiple
questions, what is the effect of seeing one of those rrsets but not the
other? (here again we see the disadvantage of starting from incomplete
information.)

On 2012-05-30 4:24 AM, Shane Amante wrote:
 On May 29, 2012, at 8:44 PM, Paul Vixie wrote:
 ...

 the problem is in time domain bounding of data validity and data
 reachability. ROVER expects you to be able to query for the information
 about a route at the time you receive that route. that's point-in-time
 validity and reachability, which you might not have depending on where
 the DNS servers are and what order you're receiving routes in. RPKI+ROA
 expects you to have periodic but complete access to a catalogue, and
 then your future use of the data you fetched has only the risk of
 staleness or invalidity, but never reachability.

 as others have stated, there is no reference collection of bad ideas.
 otherwise we would have written this one up in 1996 when a couple of dns
 people looked at the routing system and said 'hey what about something
 like [ROVER]?' and the routing people explained in detail why it
 wouldn't work.
 Just one correction to the above.  As pointed out in Section 4 of 
 draft-gersch-grow-revdns-bgp-00 near-real-time route origin verification is 
 merely one instantiation of the ROVER concept.  Please refer to that 
 section for other potential uses of such published data.

my comment on that draft was (on the dnsop mailing list, march 10, 2012):

 your draft-gersch-dnsop-revdns-cidr-01 is very clean and simple; the
 draft and the design are of admirable quality. as a co-author of RFC
 2317 i agree that it does not suit the needs of bgp security since it
 seeks only to provide a method of fully naming hosts, not networks.

 importantly, i see no reference to RFC 1101 in your draft. RFC 1101
 describes a way to name networks, and while at first it did not seem to
 be compatible with CIDR, implementation (in netstat -r back in BSD/OS
 3.1) showed that RFC 1101 was in fact not as classful as it appeared.
...
 you may find that some of your work has already been done for you, or,
 you may find that this is related work that should be referenced in your
 draft along with the reasons why your proposed method is necessary.

joe and dan (authors of this draft) responded:

 thanks for your comments and support.  We will definitely reference RFC 1101
 in our next version.

and indeed this was done, but weakly:

Changes from version 01 to 02
...
   Expanded the related work discussion to include RFC 1101.

looking at draft -02 we see:

 4.1.  Naming via RFC 1101

The problem of associating records with network names dates back to
at least [RFC1101].  This work coincides with some of the early
development of DNS and discusses issues regarding hosts.txt files.
The RFC makes a key observation that one should provide
mappings for subnets, even when nested.

The approach taken here clearly states how to map an IPv4 prefix that
is on an octet boundary.  The RFC maps 10.IN-ADDR.ARPA for class A
net 10, 2.128.IN-ADDR.ARPA for class B net 128.2, etc.  This is
essentially the same as the approach proposed here, although we
append an m label (discussed later in this document).

[RFC1101] also mentions more specific subnets, but the details are
limited.  We believe the approach proposed here builds on the best
ideas from this RFC and expands on the details of how the naming
convention would work.

in other words, no explanation of why the existing RFC 1101 encoding
will not serve, even though RFC 1101 encoding has been used for network
naming in the CIDR era, without limitation.

this reader is left wondering, what's the real impetus here?
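for reference, the RFC 1101 encoding in question is just the reversed
significant octets under IN-ADDR.ARPA. a sketch for octet-boundary prefixes
(RFC 1101's subnet-mask RRs handle the non-octet cases, which this toy skips;
the function name is invented):

```python
def rfc1101_name(prefix: str) -> str:
    """Owner name for a network on an octet boundary, RFC 1101 style:
    the significant octets, reversed, under IN-ADDR.ARPA."""
    addr, plen = prefix.split("/")
    octets = addr.split(".")[: int(plen) // 8]
    return ".".join(reversed(octets)) + ".IN-ADDR.ARPA"

print(rfc1101_name("10.0.0.0/8"))     # 10.IN-ADDR.ARPA
print(rfc1101_name("128.2.0.0/16"))   # 2.128.IN-ADDR.ARPA
```

these are exactly the "net 10" and "net 128.2" examples the draft itself cites;
the draft's encoding differs mainly in appending its "m" label.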

 I would also ask people to expand their minds beyond the it must have a 
 (near-)real-time mechanism directly coupled to the Control Plane for a 
 variety of reasons.  Such a tight coupling of /any/ two systems inevitably, 
 and unfortunately, will only fail at scale in ways that likely would never 
 have been predicted a priori[1] -- 

i think you're paying insufficient attention to this discussion, if you
think that failure predictions have not already been well made with
respect to rover.

isc - a good business

2012-05-28 Thread paul vixie
greetings. i didn't notice this before, and i want to complete the record.

i'm paying more attention to the quoting this time, too.

 On Wed, May 23, 2012 at 04:33:28PM -0400, Christopher Morrow wrote:
  On Wed, May 23, 2012 at 1:40 AM,  bmanning at vacation.karoshi.com wrote:
   Paul will be there to turn things off when
  they no longer make money for his company.
  
  is the dns changer thingy making money for isc?

no. yes. depends on what you mean.

we're under contract to the department of justice to run the servers. the
amount is intended to be cost-recovery level. (doing stuff like this for
nothing does not scale, unless the business is successful enough elsewhere,
and even then there'll be limits.)

 From: bmanning at vacation.karoshi.com (bmanning at vacation.karoshi.com)
 Date: Wed, 23 May 2012 21:00:27 +
 Message-ID: 20120523210027.ga26...@vacation.karoshi.com.

   pretty sure.  a contract w/ the Feds, outsourcing contracts w/ affected ISPs
   when the Fed deal runs out, development funding to code these kinds of fixes
   into future versions of software, any number of second and third order fallout.

i don't know of any outsourcing contracts that will come into force when the
fed deal runs out. there is no development funding because there are no code
fixes for this kind of problem. i am certainly hoping for second and third
order fallout; we pay the people who write BIND using money collected from
other people who buy support for BIND, or consulting, or training, or custom
feature development. that's how we keep BIND free, and that's how we use a
BSD-like license rather than a GPL-like license -- no one who derives their
appliance or product from BIND has any obligation to anybody. (noting,
nlnetlabs for unbound and nsd also use a BSD-like license, so ISC is not
alone here.)

   No telling how effective constant self-promotion is.  One thing is clear,
   Paul is able to tell a great story.

thanks for your kind words. i've been trying to uplevel my storytelling
capability since i've long since lost my coding skills.

   but it's all speculation from here. ISC is well positioned to extract value
   from both ends of the spectrum.  They have a great business model. The optics
   look pretty odd from here, at least to me however - I am very glad for:
   1) open source 2) other vendors of DNS SW.

the women and men of ISC work their asses off to keep the world safer and
saner. as you puzzle over our business model please keep in mind that there is
no exit possibility here -- no IPO, no buyout. salaries at ISC are fair and
reasonable, but nobody's getting rich doing this work. so while i am likewise
glad for open source and for other vendors of open source DNS software, i'm
struggling to grasp the intended suggestion. should ISC stop giving away
software? should we stop adding the features to our software that make it
relevant and desirable? should we stop selling the support and training and
consulting that make it possible to give away this software?

if you have a specific accusation of evil-doing, or a specific suggestion for
how ISC can become more morally pure, then please say exactly what you mean,
and we can discuss that.

for more information, see:
http://www.isc.org/community/blog/201001/why-isc-not-profit.
the short version is: ISC is a good business, we do good things, mostly for
free, but also for our customers. and we're totally unapologetic about what
we do and who we are.

paul




Re: isc - a good business

2012-05-28 Thread paul vixie
On 5/28/2012 11:52 AM, Randy Bush wrote:
 ... maybe a bit too much layer ten for my taste. ...

on that, we're trying to improve. for example, we used to forego
features that some of us found repugnant, such as nxdomain remapping /
ad insertion. since the result was that our software was less relevant,
with no reduction in nxdomain remapping from BIND not providing it, we
dropped some of the layer ten stuff and moved more in the direction of
providing features the community of interest found relevant. some say we
sold out. maybe that's what manning is on about, i can't tell. the
software is free, and isc cherishes our relevance.

if you catch us doing weird layer ten stuff that bugs you, give a shout.
maybe we don't really mean it.

 ... and i run and appreciate the software.

that's why we're here.

paul



Re: rpki vs. secure dns?

2012-05-28 Thread Paul Vixie
more threads from the crypt as i catch up to 6000 missed nanog posts.

Dobbins, Roland rdobb...@arbor.net writes:

 On Apr 28, 2012, at 5:17 PM, Saku Ytti wrote:

 People might scared to rely on DNS on accepting routes, but is this
 really an issue?

 Yes, recursive dependencies are an issue.  I'm really surprised that
 folks are even seriously considering something like this, but OTOH, this
 sort of thing keeps cropping up in various contexts from time to time,
 sigh.

so, first, i think you mean "circular dependencies" not "recursive dependencies".

second, i'd agree that that's probably bad engineering.

third, rsync's dependencies on routing (as in the RPKI+ROA case) are not
circular (which i think was david conrad's point but i'll drag it to here.)

my reason for not taking ROVER seriously is that route filter preparation
is an essentially offline activity -- you do it from a cron job not live.
and to do this you have to know in advance what policy data is available
which may or may not have the same coverage as the routes you will receive
between one cron job and the next.

we could in other words use DNS to store route policy data if we wanted to
use a recursive zone transfer of all policy zones, as a replacement for
rsync. (but why would we do this? we have rsync, which worked for IRR data
for many years.)

ROVER expects that we will query for policy at the instant of need. that's
nuts for a lot of reasons, one of which is its potentially and unmanageably
circular dependency on the acceptance of a route you don't know how to
accept or reject yet.
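the cron-job shape of the batch approach is worth spelling out: fetch when you
can, keep the last good snapshot when you can't, and stop filtering only when
the data gets too stale. a sketch (the staleness cap and the fetch interface
are invented for illustration):

```python
import time

MAX_STALENESS = 24 * 3600  # assumed operator policy, not from any spec

class FilterState:
    def __init__(self):
        self.filters = None
        self.fetched_at = 0.0

    def refresh(self, fetch):
        """Run from cron: try to fetch a fresh policy snapshot (e.g. via
        rsync); on failure, keep the last good snapshot and retry later."""
        try:
            self.filters = fetch()
            self.fetched_at = time.monotonic()
        except OSError:
            pass  # stale data stays in force until the next successful run

    def active_filters(self):
        """Stale beyond the cap: disable filtering rather than act on junk,
        until we can resynchronize."""
        if self.filters is None or \
                time.monotonic() - self.fetched_at > MAX_STALENESS:
            return None
        return self.filters
```

the key property is that route acceptance at BGP time never waits on a network
fetch; the only live decision is "use the snapshot, or use nothing."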

my take-away from this thread is: very few people take RPKI seriously, but
even fewer take ROVER seriously.

-- 
Paul Vixie
KI6YSY



Re: isc - a good business

2012-05-28 Thread Paul Vixie
(all caught up after this.)

Jay Ashworth j...@baylink.com writes:

 - Original Message -
 From: paul vixie vi...@isc.org

 On 5/28/2012 11:52 AM, Randy Bush wrote:
  ... maybe a bit too much layer ten for my taste. ...
 
 on that, we're trying to improve. for example, we used to forego
 features that some of us found repugnant, such as nxdomain remapping /
 ad insertion. since the result was that our software was less relevant
 but that there was no reduction in nxdomain remapping as a result of
 BIND not providing it.

 To clarify that a bit...

let's keep trying.

 You're saying you used to decline to include in BIND the capability to
 break the Internet by returning things other than NXDOMAIN for names
 which do not exist...

no, that's not what i'm saying.

 but now you're *ok* with breaking the internet, and BIND now does that?

no, that's also not what i'm saying.

 If that's what you mean, I'll explain to you why that's a bad layer 10 call.

it's not, but i'm listening.

 *Now*, you see, we no longer have a canonical Good Engineering Example to 
 which we can point when yelling at people (and software vendors) which
 *do* permit that, to say see?  You shouldn't be doing that; it's bad.

 The Web Is Not The Internet.

i see what you mean, and i'm sad that this arrow is no longer in your
quiver. perhaps you can still refer to nlnetlabs unbound for this purpose.

if i thought there was even one isp anywhere who wanted to use nxdomain
remapping but didn't because bind didn't have that feature, i'd be ready to
argue the point. but all isc did by not supporting this feature was force
some isp's to not use bind, and: isc is not in the sour grapes business.

meanwhile isc continues to push for ubiquitous dnssec, through to the stub,
to take this issue off the table for all people and all time. (that's the
real fix for nxdomain remapping.)

-- 
Paul Vixie
KI6YSY



Re: rpki vs. secure dns?

2012-05-28 Thread paul vixie
On 5/28/2012 9:42 PM, David Conrad wrote:
 On May 28, 2012, at 1:59 PM, Paul Vixie wrote:
 third, rsync's dependencies on routing (as in the RPKI+ROA case) are not
 circular (which i think was david conrad's point but i'll drag it to here.)
 Nope.  My point was that anything that uses the Internet to fetch the data 
 (including rsync) has a circular dependency on routing. It's just a question 
 of timing.

when you say it's a question of timing, i agree, but then i think you
won't agree again. in batch mode i can express a policy that's true from
that moment forward until removed. in real time mode i have to be able
to express a policy which is both valid and reachable at the moment of
validation. that's a question of timing but the word just as in just
a question of timing trivializes a galaxy-sized problem space.


 ROVER expects that we will query for policy at the instant of need.
 Might want to review 
 https://ripe64.ripe.net/presentations/57-ROVER_RIPE_Apr_2012.pdf, 
 particularly the slide entitled Avoid a Cyclic Dependency.

i read it. this is also what draft-gersch-grow-revdns-bgp says. this
makes for a "fail open" design -- the only thing the system can reliably
tell you is "reject". this precludes islands of, or a mode of, "fail
closed". while i don't consider a mode of "fail closed" to be
practically likely, i'd like to preserve the fig leaf of theoretical
possibility. (and the more likely possibility that islands of "fail
closed" could be sewn in.)

 As far as I can tell, ROVER is simply Yet Another RPKI Access Method like 
 rsync and bittorrent with its own positives and negatives. 

i can tell more than that. rover is a system that only works at all when
everything everywhere is working well, and when changes always come in
perfect time-order, where that order is nowhere defined, and is in any
event uncontrollable.

rsync's punt of the ordering problem is batch mode with policy start
times rather than policy valid instants. to my mind anyone who doesn't
punt the ordering problem has to instead solve it. rover does neither.

paul



vixie, father of multitudes

2012-05-23 Thread paul vixie
thanks to several folks who let me know this was going on. i hadn't even
noticed that i wasn't getting nanog@. thanks to seclists.org for hosting
an archive i could use.

---

From: bmanning () vacation karoshi com
Date: Wed, 23 May 2012 05:40:16 +

On Tue, May 22, 2012 at 10:07:52PM -0700, Michael J Wise wrote:

On May 22, 2012, at 9:10 PM, bmanning () vacation karoshi com wrote:


On Tue, May 22, 2012 at 08:52:52PM -0700, Michael J Wise wrote:

On May 22, 2012, at 8:35 PM, Randy Bush wrote:


father of bind?  that's news.

  
http://boingboing.net/2012/03/29/paul-vixies-firsthand-accoun.html

He was there, and Put The Fix In, to down the network.

Certainly news to Phil Almquist and the entire BIND development team at UCB.
Paul was at DECWRL and cut his teeth on pre-existing code. While he (and ISC)
have since revised, gutted, tossed all the original code, rebuilt it twice -
and others have done similar for their DNS software, based on the BIND code
base, implementation assumptions, and with little or no ISC code, and they
call it BIND as well, it would be a HUGE leap of faith to call Paul Vixie
the father of BIND - The Berkeley Internet Naming Daemon.

Methinks we're talking at cross purposes.

maybe... :)  my comment was referring to the father of bind statement.

i don't describe myself that way. i inherited bind at 4.8.3 and fixed stuff.
i rewrote a lot of it for 4.9.

we (mostly me but with huge work by robert halley and mark andrews) rewrote
most of it for bind 8.1. (there was no 8.0.) other people (not me) wrote
bind 9.x. other people (mostly not the same people) are writing bind 10.

if my wikipedia entry is wrong in this regard i invite folks to fix it. last
i heard it's disallowed for people to edit their own entries, so i have not
tried.

i am not the father of anything, except four healthy kids. i do sometimes
call myself the "weird uncle of the internet" but "father of bind" is not
what i mean.


As for being there and Put The Fix In...  Makes for great PR but in actual
fact, it's a band-aid that is not going to stem the tide. An actual fix would
really need to change the nature of the creaky 1980's implementation
artifacts that this community loves so well.

I don't think we're talking about the same thing at all. Paul was there to
shut down the DNS changer system and replace it with something that restored
functionality to the infected machines. And I gather Paul will be one of the
people who will turn the lights out on it.

yes, and yes.

He didn't shut down DNS Changer, he put up an equivalent system to hijack DNS
traffic and direct it to the right place...  So folks didn't see any problem
and the DNS Changer infection grew and got worse. When he is legally required
to take his band-aid out of service, then the problem will resolve by folks
who will have to clean their systems.

it's true, the fbi team who powered all that stuff off and loaded it into a
u-haul truck are the ones who shut down dns changer. or perhaps it was the
police in estonia who arrested all those people. i'm not the shutter-downer.

As for turning the lights out - that will only happen when the value of DNS
hijacking drops. As it is now, ISC has placed DNS hijacking code into their
mainstream code base... because DNS hijacking is so valuable to folks. In a
modestly favorable light, ISC looks like an arms dealer (DNS redirection) to
the bad guys -AND- (via DNSSEC) the good guys. Either way, they make money.

well, no. but that seems off-topic. start a new thread if you care.
(and, cc me!)

And yes, I think I agree with you. Paul will be there to turn things off
when they no longer make money for his company.

well, no. when the court order runs out we will have to shut things down. but
the money FBI is paying us for this is just to cover costs. and, it's not my
company. isc is a 501(c)(3), basically a ward of the state of delaware,
having no shares and therefore no shareholders.
 

Your other comments are non-sequitur to the main issue.

Perhaps I am not a member of the Paul Vixie cult of personality.  

so sad.


When those servers are turned off, Customer Support folks at many ISPs will
prolly want to take their accrued vacation.

Amen.  And there will be thousands more of them when the court order expires
than existed when the Feds called him in.

um. no. hundreds of thousands fewer than before the feds called ISC in.
see dcwg.org.

it's lovely to have so many fans. keep those cards and letters coming.
(but, cc me!)

paul




rpki vs. secure dns?

2012-04-27 Thread Paul Vixie
http://tech.slashdot.org/story/12/04/27/2039237/engineers-ponder-easier-fix-to-internet-problem

 The problem: Border Gateway Protocol (BGP) enables routers to
 communicate about the best path to other networks, but routers don't
 verify the route 'announcements.' When routing problems erupt, 'it's
 very difficult to tell if this is fat fingering on a router or
 malicious
 http://www.itworld.com/security/272320/engineers-ponder-easier-fix-dangerous-internet-problem,'
 said Joe Gersch, chief operating officer for Secure64, a company that
 makes Domain Name System (DNS) server software. In a well-known
 incident, Pakistan Telecom made an error with BGP after Pakistan's
 government ordered in 2008 that ISPs block YouTube, which ended up
 knocking Google's service offline
 http://slashdot.org/story/08/02/25/1322252/pakistan-youtube-block-breaks-the-world.
 A solution exists, but it's complex, and deployment has been slow. Now
 experts have found an easier way.

this seems late, compared to the various commitments made to rpki in
recent years. is anybody taking it seriously?



Re: wet-behind-the-ears whippersnapper seeking advice on building a nationwide network

2011-09-22 Thread Paul Vixie
Benson Schliesser bens...@queuefull.net writes:

 Hi, Paul.

sorry for the delay.  i'll include the entirety of this short thread.

 For what it's worth, I agree that ARIN has a pretty good governance
 structure. (With the exception of NomCom this year, which is shamefully
 unbalanced.) ...
 
 as the chairman of the 2011 ARIN NomCom, i hope you'll explain further,
 either publically here, or privately, as you prefer.

 My understanding is that the NomCom consists of 7 people. Of those, 2
 come from the board and 2 come from the AC. Together, those 4 members of
 the existing establishment choose the remaining 3 NomCom members. In the
 past, there was at least the appearance of random selection for some of
 the NomCom members. But in any case, due to its composition, the NomCom
 has the appearance of a body biased in favor of the existing
 establishment.

 Please correct any misunderstanding that I might have. Otherwise, I
 encourage an update to the structure of future NomComs.

can you explain what it was about prior nomcoms that gave the appearance
of random selection?  to the best of my knowledge, including knowledge i
gained as chair of the 2008 ARIN NomCom, we've been doing it the same way
for quite a while now.  so i do not understand your reference to at least
the appearance of random selection in the past.

since ARIN members-in-good-standing elect the board and advisory council,
and also make up three of the four seats of the nominations committee, i
do not share your view on bias as expressed above.  i think it shows
that ARIN is clearly governed by its members -- which is as it should be.

by your two references to the existing establishment do you intend to
imply that ARIN's members don't currently have the establishment that they
want, or that they could not change this establishment if they wanted to,
or that ARIN's members are themselves part of the existing establishment
in some way that's bad?

ARIN's bylaws firmly place control of ARIN into the hands of its members.
if you think that's the wrong approach, i'm curious to hear your reasoning
and your proposed alternative.
-- 
Paul Vixie
KI6YSY



Re: wet-behind-the-ears whippersnapper seeking advice on building a nationwide network

2011-09-22 Thread Paul Vixie
On Thu, 22 Sep 2011 21:05:51 -0500
Benson Schliesser bens...@queuefull.net wrote:

 Earlier this year I received the following from ARIN member
 services:  This year the NomCom charter was changed by the Board.
 In the past the 3 Member volunteers were selected at random.  This
 year the 3 volunteers will be chosen by the 4 current members of the
 NomCom (2 from the Board 2 from the AC)

yow.  i should have remembered this, you'd think.

 The above quote was sent to me in response to a query I made,
 inquiring how the NomCom would be chosen in 2011.  It is consistent
 with what I was told in 2010, when I was chosen to be part of the
 2010 NomCom.  At that time I was told that Member volunteers were
 chosen randomly.  During my NomCom tenure, however, it was suggested
 to me privately that there was very little randomness involved in the
 selection process; I was told that individuals were specifically
 chosen for NomCom.  I don't know what to make of this disparity,
 honestly, which is why I referenced the "appearance of random
 selection".

suggested to you privately by arin staff?

 The NomCom acts as a filter, of sorts.  It chooses the candidates
 that the membership will see.  The fact that the NomCom is so closely
 coupled with the existing leadership has an unfortunate appearance
 that suggests a bias.  I'm unable to say whether the bias exists, is
 recognized, and/or is reflected in the slate of candidates.  But it
 seems like an easy enough thing to avoid.

you seem to mean that the appearance of bias would be easy to avoid,
then.

 As for my use of "existing establishment":  I'm of the impression
 that a relatively small group of individuals drive ARIN, that most
 ARIN members don't actively participate.  I have my own opinions on
 why this is, but they aren't worth elaborating at this time - in
 fact, I suspect many ARIN members here on NANOG can speak for
 themselves if they wanted to.  In any case, this is just my
 impression.  If you would rather share some statistics on member
 participation, election fairness, etc, then such facts might be more
 useful.

i think our participation level in elections is quite high and i'll ask
for details and see them published here.

  ARIN's bylaws firmly place control of ARIN into the hands of its
  members. if you think that's the wrong approach, i'm curious to
  hear your reasoning and your proposed alternative.
 
 One of ARIN's governance strengths is the availability of petition at
 many steps, including for candidates rejected by the NomCom.
 Likewise, as you noted, leaders are elected by the membership.  For
 these reasons I previously noted that ARIN has a pretty good
 governance structure and I continue to think so.  It could be
 improved by increased member involvement, as well as broader
 involvement from the community. (For instance, policy petitions
 should include responses from the entire affected community, not just
 PPML.)  But my criticisms should be interpreted as constructive, and
 are not an indictment of the whole approach.

thanks for saying so.
-- 
Paul Vixie



Re: wet-behind-the-ears whippersnapper seeking advice on building a nationwide network

2011-09-20 Thread Paul Vixie
Benson Schliesser bens...@queuefull.net writes:

 For what it's worth, I agree that ARIN has a pretty good governance
 structure. (With the exception of NomCom this year, which is shamefully
 unbalanced.) ...

as the chairman of the 2011 ARIN NomCom, i hope you'll explain further,
either publically here, or privately, as you prefer.
-- 
Paul Vixie
KI6YSY



not operational -- call for nominations for ARIN council and board

2011-08-09 Thread Paul Vixie
gentlefolk, ARIN is the community's self-generated steward for internet
numbering resources (ip addresses and autonomous system numbers) and it
is governed by volunteers from the community who serve on its advisory
council and executive board.  every year ARIN holds an election to fill
or renew several expiring terms.  candidates need not be ARIN members.

please see https://www.arin.net/announcements/2011/20110725_elec.html
and think about who you can nominate, or whether you can self-nominate.

paul vixie
chairman, 2011 arin nomcom




Re: ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
David Conrad d...@virtualized.org writes:

 I believe the root server operators have stated (the equivalent of) that
 it is not their job to make editorial decisions on what the root zone
 contains.  They distribute what the ICANN/NTIA/Verisign gestalt
 publishes.

yes.  for one example, see:

http://www.icann.org/en/announcements/announcement-04jan08.htm

other rootops who have spoken about this have said similar/compatible things.
-- 
Paul Vixie
KI6YSY



Re: unqualified domains, was ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
Adam Atkinson gh...@mistral.co.uk writes:

 It was a very long time ago, but I seem to recall being shown http://dk,
 the home page of Denmark, some time in the mid 90s.

 Must I be recalling incorrectly?

no you need not must be.  it would work as long as no dk.this or dk.that
would be found first in a search list containing 'this' and 'that', where
the default search list is normally the parent domain name of your own
hostname (so for me on six.vix.com the search list would be vix.com and
so as long as dk.vix.com did not exist then http://dk/ would reach dk.)
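the lookup ordering just described can be sketched in a few lines of
python; this is a simplified model of a stub resolver's search-list
logic (the function name and the vix.com example are illustrative, not
any real resolver's code):

```python
def candidate_names(name, search_list, ndots=1):
    """Return the FQDNs a stub resolver would try, in order (toy model)."""
    if name.endswith("."):            # already fully qualified: no search list
        return [name]
    names = []
    if name.count(".") >= ndots:      # "enough" dots: try the name as-given first
        names.append(name + ".")
    for suffix in search_list:        # then each search-list suffix in turn
        names.append(f"{name}.{suffix}.")
    if name.count(".") < ndots:       # too few dots: as-given is tried last
        names.append(name + ".")
    return names

# on six.vix.com, "dk" matches dk.vix.com first if it exists, else dk.
print(candidate_names("dk", ["vix.com"]))
# prints ['dk.vix.com.', 'dk.']
```

so http://dk worked only because dk.vix.com (or dk.whatever your suffix
was) didn't happen to exist.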
-- 
Paul Vixie
KI6YSY



Re: ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
Jay Ashworth j...@baylink.com writes:

 ... and that the root wouldn't be affected by the sort of things that
 previously-2LD now TLD operators might want to do with their
 monocomponent names...

someone asked me privately a related question which is, if there's a .SONY
and someone's web browser looks up http://sony/ and a root name server gets
a query for SONY./IN/ then what will happen?  the answer is happily that
the result will be a delegation (no AAAA RR in the answer section even if
the root name server knows one for some reason).  the answer section will be
empty, the authority section will have a SONY/IN/NS RRset in it, and the
additional section will have the nec'y IN/AAAA and IN/A RRsets for those NSs.

this is sometimes called "the BIND9 behaviour" in contrast to BIND8/BIND4,
which would have answered the question if they knew the answer, even if they
also knew that the qname had been delegated.  BIND9 changed this, and NSD
does it the same way.  RFC 1034/1035 is pretty clear about this, so to me
this should not be called "the BIND9 behaviour" but rather simply "correct".
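the referral rule can be modeled in a handful of lines; this is a toy
sketch of the zone-cut behaviour with an invented delegation table, not
real server code:

```python
def root_response(qname, delegations):
    """RFC 1034-style referral: never answer below a zone cut, even when
    the server happens to know an address for the qname (toy model)."""
    label = qname.rstrip(".").split(".")[-1].lower()
    if label in delegations:
        cut = delegations[label]
        return {"answer": [],               # answer section stays empty
                "authority": cut["ns"],     # e.g. the SONY/IN/NS RRset
                "additional": cut["glue"]}  # glue A/AAAA for those NSs
    return {"rcode": "NXDOMAIN"}

delegations = {"sony": {"ns": ["a.ns.sony."],
                        "glue": {"a.ns.sony.": "2001:db8::53"}}}
resp = root_response("SONY.", delegations)
print(resp["answer"], resp["authority"])
# prints [] ['a.ns.sony.']
```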

 which as someone pointed out, a 3-digit RFC forbids for security reasons
 anyway.

three digit?  i was thinking of http://www.ietf.org/rfc/rfc1535.txt which
was written as air cover for me when i added the ndots:NNN behaviour to
BIND4's stub resolver.  and, looking at firefox on my workstation just now:

[58] 2011-06-19 23:27:49.906040 [#4 em1 0] \
[24.104.150.12].24003 [24.104.150.2].53  \
dns QUERY,NOERROR,57397,rd \
1 sony.vix.com,IN,A 0 0 0
[58] 2011-06-19 23:27:49.909895 [#5 em1 0] \
[24.104.150.12].26356 [24.104.150.2].53  \
dns QUERY,NOERROR,57398,rd \
1 sony.vix.com,IN,AAAA 0 0 0
[50] 2011-06-19 23:27:49.910489 [#6 em1 0] \
[24.104.150.12].23228 [24.104.150.2].53  \
dns QUERY,NOERROR,57399,rd \
1 sony,IN,A 0 0 0
[50] 2011-06-19 23:27:49.930022 [#7 em1 0] \
[24.104.150.12].37238 [24.104.150.2].53  \
dns QUERY,NOERROR,57400,rd \
1 sony,IN,AAAA 0 0 0
[58] 2011-06-19 23:27:49.931059 [#8 em1 0] \
[24.104.150.12].17401 [24.104.150.2].53  \
dns QUERY,NOERROR,33742,rd \
1 www.sony.com,IN,A 0 0 0
[124] 2011-06-19 23:27:50.112451 [#9 em1 0] \
[24.104.150.2].53 [24.104.150.12].17401  \
dns QUERY,NOERROR,33742,qr|rd|ra \
1 www.sony.com,IN,A \
1 www.sony.com,IN,A,600,72.52.6.10 \
2 sony.com,IN,NS,172800,pdns1.cscdns.net \
sony.com,IN,NS,172800,pdns2.cscdns.net 0

...i see that the browser's stub is indeed looking at the search list first
when there are no dots in the name.  that's correct behaviour by the RFC
and also anecdotally since if i had an internal web server here called
sony.vix.com i would want my web browser to assume that that was the one i
wanted when i typed "http://sony/".  having it go outside my network and
hit a TLD first would be a dangerous information leak.  (this also shows
DNS's lack of a formal presentation layer as clearly as anything ever
could.)

inevitably there will be folks who register .FOOBAR and advertise it as
"http://foobar/" on a billboard and then get burned by all of the local
foobar.this.tld and foobar.that.tld names that will get reached
instead of their TLD.  i say inevitable; i don't know a way to avoid it
since there will be a lot of money and a lot of people involved.
-- 
Paul Vixie
KI6YSY



Re: unqualified domains, was ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
 Date: Sun, 19 Jun 2011 19:30:58 -0500
 From: Jeremy jba...@gmail.com
 
 DK may not be hierarchical, but DK. is. If you try to resolve DK
 on its own, many (most? all?) DNS clients will attach the search
 string/domain name of the local system in order to make it a FQDN. The
 same happens when you try and resolve a non-existent domain. Such as
 alskdiufwfeiuwdr3948dx.com, in wireshark I see the initial request
 followed by alskdiufwfeiuwdr3948dx.com.gateway.2wire.net. However if I
 qualify it with the trailing dot, it stops after the first lookup.
 DK. is a valid FQDN and should be considered hierarchical due to the
 dot being the root and anything before that is a branch off of the
 root. see RFC1034

i think he's seen RFC 1034 :-).  anyway, i don't see the difference between
http://sony/ and http://sony./ and if a technology person tried to explain
to a marketing person that single-token TLD names *can* be used as long as
there's a trailing dot, the result would hopefully be that glazed look of
nonunderstanding but would far more likely be an interpretation of "oh, so
it's OK after all, we'll use it that way, thanks!"

furthermore, the internet has more in it than just the web, and i know that
foo@sony. will not have its RHS (sony.) treated as a hierarchical name.

i think we have to just discourage lookups of single-token names, universally.



Re: unqualified domains, was ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Sun, 19 Jun 2011 16:04:09 -1000
 
 On Jun 19, 2011, at 3:24 PM, Paul Vixie wrote:
 
  i think we have to just discourage lookups of single-token names,
  universally.
 
 How?

that's a good question.  marka mentioned writing an RFC, but i expect
that ICANN could also have an impact on this by having applicants sign
something that says "i know that my single-label top level domain name
will not be directly usable the way normal domain names are and i intend
to use it only to register subdomain names which will work normally."



Re: unqualified domains, was ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
 Date: Sun, 19 Jun 2011 19:22:46 -0700
 From: Michael Thomas m...@mtcc.com
 
  that's a good question.  marka mentioned writing an RFC, but i expect
  that ICANN could also have an impact on this by having applicants sign
  something that says "i know that my single-label top level domain name
  will not be directly usable the way normal domain names are and i intend
  to use it only to register subdomain names which will work normally."
 
 Isn't this problem self regulating? If sufficient things break with a
 single label, people will stop making themselves effectively unreachable,
 right?

alas, no.  if someone adds something to the internet that doesn't work right
but they ignore this and press onward until they have market share, then the
final disposition will be based on market size not on first mover advantage.

if you live in the san francisco bay area you probably know about the sound
walls along the US101 corridor.  the freeway was originally built a long way
from where the houses were, but then a few generations of people built their
houses closer and closer to the freeway.  then their descendants or the folks
who bought these houses third or fourth hand complained about the road noise
and so we have sound walls.  no harm exactly, and no foul, except no one likes
the result much.

here's this quote again:

Distant hands in foreign lands
are turning hidden wheels,
causing things to come about
which no one seems to feel.
All invisible from where we stand,
the connections come to pass
and though too strange to comprehend,
they affect us nonetheless, yes.
James Taylor, _Migrations_

good stewardship and good governance means trying to avoid such outcomes.



Re: unqualified domains, was ICANN to allow commercial gTLDs

2011-06-19 Thread Paul Vixie
 Date: Sun, 19 Jun 2011 22:32:59 -0700
 From: Doug Barton do...@dougbarton.us
 
 ... the highly risk-averse folks who won't unconditionally enable IPv6
 on their web sites because it will cause problems for 1/2000 of their
 customers.

let me just say that if i was making millions of dollars a day and i had
the choice of reducing that by 1/2000th or not i would not choose to
reduce it.  as much as i love the free interchange of ideas i will point
out that commerce is what's paid the internet's bills all these years.



Re: v6 proof of life

2011-06-07 Thread Paul Vixie
Jima na...@jima.tk writes:

 44 2001:db8::230:48ff:fef2:f340
 44 2001:db8::230:48ff:fef0:1de

 How can 2001:db8::/32 reach your machines ?

  Lack of ingress filtering on Mr. Vixie's part, ...

indeed.  i had no idea.

 and lack of egress
 filtering on whoever-owns-those-Supermicro-board's part.
  That's not to say there's a route back, by any means.

i'll bet i'm not alone in seeing traffic from this prefix.  as a rootop
i can tell you that we see plenty of queries from ipv4 rfc1918 as well.
-- 
Paul Vixie
KI6YSY



v6 proof of life

2011-06-06 Thread Paul Vixie
it's been a while since i looked at the query stream still hitting
{rbl,dul}.maps.vix.com.  this was the world's first RBL but it was
renamed from maps.vix.com to mail-abuse.org back in Y2K or so.  i
have not sent anything but NXDOMAIN in response to one of these
queries for at least ten years, yet the queries just keep coming.

here's a histogram of source-ip.  feel free to remove yourself :-).
it's just a half hour sample.  i'll put this up on a web page soon.

importantly and happily, there's a great deal of IPv6 happening here.

re:

 524 200.59.134.164
 492 2001:288:8201:1::14
 455 209.177.144.50
 418 193.124.83.73
 392 140.144.32.205
 360 143.54.204.171
 355 208.76.170.121
 282 2403:2c00:1::5
 262 2001:1a80:103:5::2
 225 2001:288:8201:1::2
 195 2604:3500::ad:1
 186 2001:288:8201:1::10
 174 209.157.254.10
 167 2001:b30:1:100::100
 158 12.47.198.68
 142 2001:288:0:2::60
 125 2002:d58e:8901::d58e:8901
 118 77.72.136.2
 115 140.118.31.99
 113 66.240.198.100
 102 2001:9b0:1:601:230:48ff:fe8a:c7f4
 100 2001:888:0:24:194:109:20:107
  99 212.73.100.132
  92 2405:d000:0:100::214
  86 202.177.26.107
  83 2405:d000:0:100::228
  82 195.101.253.253
  77 210.175.244.210
  77 2001:558:1014:f:69:252:96:24
  76 64.168.228.129
  76 2001:2c0:11:1::c02
  71 2001:b30:1::190
  68 2001:558:1014:f:68:87:76:185
  68 2001:4dd0:100:1020:53:2:0:1
  67 2001:558:1014:f:69:252:96:22
  67 2001:558:1014:f:68:87:76:189
  63 2001:558:1014:f:69:252:96:25
  57 2607:f758:6000:13::e4
  56 2001:558:1014:f:68:87:76:181
  55 212.234.229.242
  52 2607:f758:e000:13::e4
  51 2607:fdb8:0:b:2::2
  51 208.188.98.249
  51 2001:558:1014:f:68:87:76:190
  49 66.192.109.211
  45 2001:558:1014:f:69:252:96:23
  44 2001:db8::230:48ff:fef2:f340
  44 2001:db8::230:48ff:fef0:1de
  42 213.171.61.117
  41 201.116.43.232
  40 2607:fdb8:0:b:1::2
  40 2001:470:1f0a:c1b::2
  40 190.82.65.243
  38 190.169.30.2
  36 2605:d400:0:27::3
  35 2001:d10:2:3::1:2
  31 2605:d400:0:27::7
  30 220.110.24.250
  28 84.55.220.139
  28 2a00:1db0:16:2::347:2
  23 2607:f2e0::1:3:2
  23 2001:15c8:8::2:1
  22 218.44.167.26
  22 2001:6c8:2:100::53
  22 2001:6b0:1::201
  21 193.169.45.5
  20 2607:f470:1003::3:3
  19 2a01:8c00:ff60:2:230:48ff:fe85:d47e
  19 221.133.36.229
  19 213.178.66.2
  19 202.169.240.10
  19 2001:c28:1:1:dad3:85ff:fee1:30ec
  19 163.29.248.1
  18 195.58.224.34
  17 2a02:460:4:0:250:56ff:fea7:23d
  17 221.245.76.99
  17 2001:c28:1:1:dad3:85ff:fee0:4f68
  16 2001:630:200:8120::d:1
  15 2a01:1b8::1
  15 218.44.236.98
  15 2001:ad0::
  15 2001:41d8:1:8028::54
  14 2a00:1b50::2:2
  14 192.149.202.9
  13 200.74.222.140
  12 98.130.2.250
  12 2a00:1eb8:0:1::1
  12 2600:c00:0:1::301
  12 2001:c28:1:1:dad3:85ff:fee0:3f20
  12 2001:380:124::4
  12 2001:218:4c0:1:2e0:81ff:fe55:a018
  12 2001:12d0::3
  12 195.228.156.150
  12 194.42.134.132
  11 66.111.66.240
  11 2607:f010:3fe:100:0:ff:fe00:1
  11 211.25.195.236
  11 210.162.229.210
  11 2001:c28:1:1:dad3:85ff:fee0:be80
  11 2001:a10:1:::2:c918
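a tally in this shape is straightforward to reproduce from any query
log; a minimal sketch (the addresses here are invented documentation
examples, and real input would come from a packet capture rather than a
hardcoded list):

```python
from collections import Counter

# one source address per query, e.g. extracted from a capture
srcs = ["2001:db8::1", "192.0.2.7", "2001:db8::1", "2001:db8::1", "192.0.2.7"]

# count per source and print busiest-first, mirroring the format above
for addr, n in Counter(srcs).most_common():
    print(f"{n:>5} {addr}")
# prints:
#     3 2001:db8::1
#     2 192.0.2.7
```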



Re: Yahoo and IPv6

2011-05-17 Thread Paul Vixie
 Date: Tue, 17 May 2011 11:07:17 +0200
 From: Mans Nilsson mansa...@besserwisser.org
 
   ... It's not like you can even reach anything at home now, let alone
   reach it by name.
  
  that must and will change.  let's be the generation who makes it possible.
 
 I'd like to respond to this by stating that I support this fully, but
 I'm busy making sure I can reach my machines at home from the IPv6
 Internet. By name. ;-)

:-).

to be clear, the old pre-web T1 era internet did not have much content
but what content there was, was not lopsided.  other than slip and ppp
there weren't a lot of networks one would call access and a smaller
number of networks one would call content.  i am not wishing for that,
i like the web, i like content, i know there will be specialized networks
for access and content.  but i also think (as jim gettys does) that we
ought to be able to get useful work done without being able to reach the
whole internet all the time.  that's going to mean being able to reach
other mostly-access networks in our same neighborhoods and multitenant
buildings and towns and cities, directly, and by name.  it does not mean
being able to start facebook 2.0 out of somebody's basement, but it does
mean being able to run a personal smtp or web server in one's basement
and have it mostly work for the whole internet and work best for accessors
who are close by and still work even when the upstream path for the
neighborhood is down.



Re: Yahoo and IPv6

2011-05-17 Thread Paul Vixie
 Date: Tue, 17 May 2011 11:49:47 -0400
 From: Steve Clark scl...@netwolves.com
 
 This is all very confusing to me. How are meaningful names going to assigned
 automatically?

It'll probably be a lot like Apple's and Xerox's various multicast naming
systems if we want it to work in non-globally connected networks.

 Right now I see something like ool-6038bdcc.static.optonline.net for
 one of our servers, how does this mean anything to anyone else?

It wouldn't of course.  I'm sorry if my earlier words on this were useless.

Dave Taht gave a wonderful talk a few weeks ago ("Finishing the Internet",
see http://amw.org/prog11.pdf) during which he had us start an rsync
from his wireless laptop to as many of ours as could run rsync, and then
had the conference organizer turn off the upstream link.  He noted that
those of us using the local resource (a giant file, either an ISO or a
MPEG or similar) were still getting work done whereas those of us trying
to use the internet were dead in the water.  Then, referring to his
time in Nicaragua he said that he has "a lot of days like this" and he'd
like more work to be possible when only local connectivity was available.

Compelling stuff.  Pity there's no global market for localized services
or we'd already have it.  Nevertheless this must and will get fixed, and
we should be the generation who does it.



Re: Yahoo and IPv6

2011-05-16 Thread Paul Vixie
 Date: Mon, 16 May 2011 14:37:46 -0400
 From: Jim Gettys j...@freedesktop.org
 
  perhaps i'm too close to the problem because that solution looks quite
  viable to me.  dns providers who don't keep up with the market (which
  means ipv6+dnssec in this context) will lose business to those who do.
 
 I don't believe it is currently viable for any but the hackers out there,
 given my experience during the Comcast IPv6 trial.  Typing V6 addresses
 (much less remembering them) is a PITA.

 You are asking people who don't even know DNS exists, to bother to
 establish another business relationship (or maybe DNS services might
 someday be provided by their ISP).

actually, i'm asking the opposite.  only hackers run their own dns mostly;
the vast majority of users who don't know what ipv6 or dnssec are, are
already outsourcing to ultradns/neustar, or verisign, or dyn.com, etc, or
for recursive they're using opendns, google dns, etc.  these companies can
either add the new services and do outreach to their customer bases, or
they can allow their competitors to do so.

of those who still run their own dns, the vast majority actually do know
the dnssec and ipv6 issues facing them.

 If you get past that hurdle they get to type long IPv6 addresses into a web
 page they won't remember where it was the year before when they did this
 the last time to add a machine to their DNS.

i've been using ipv6 dual stack for ten years at ISC and for one year at
home (i was comcast's first north american dual stack native customer) and
the only time i type long ipv6 addresses is when editing dns zone files or
configuring routers and hosts.  i think your experiences may have been
worse than mine and i'll be interested in knowing whether they're common.

 The way this ought to work for clueless home users (or cluefull users
 too, for that matter) is that, when a new machine appears on a network, it
 "just works", by which I mean that a globally routeable IPv6 address
 appears in DNS without fussing around using the name that was given to the
 machine when it was first booted, and that a home user's names are
 accessible via secondaries even if they are off line.

this is why ISC DHCP and ISC BIND can communicate using RFC 2136 DNS
dynamic updates, secured with RFC 2845 transaction signatures.  once you
get this running then you don't have to type ipv6 addresses anywhere.  and
i know that infoblox and other BIND Inside appliance vendors have the same
capability, and that Cisco and other DNS/DHCP vendors can also participate
in these open standards pretty much out of the box.  this is what i worked
on when i first found out about IETF back in 1995 or so.  it's all done now
you just have to learn it and deploy it.  (and if you don't think end users
ought to have to learn how to configure their DHCP to talk to their DNS,
i will point them at a half dozen appliance and outsourcing vendors who can
take the ones and zeroes out of this for them.)
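the pairing described above can be sketched as matching config
fragments; these are hedged examples only, with an invented zone name
and a placeholder key (the secret shown is not usable material, and
details vary by version -- consult the ISC DHCP and BIND documentation
for the authoritative syntax):

```
# dhcpd.conf fragment (hypothetical zone and key)
ddns-update-style interim;
key ddns-key {
    algorithm hmac-md5;
    secret "c2VjcmV0c2VjcmV0c2VjcmV0";   # placeholder, not a real secret
}
zone home.example. {
    primary 192.0.2.53;
    key ddns-key;
}

# named.conf fragment on the authoritative server
key "ddns-key" {
    algorithm hmac-md5;
    secret "c2VjcmV0c2VjcmV0c2VjcmV0";
};
zone "home.example" {
    type master;
    file "home.example.db";
    allow-update { key "ddns-key"; };
};
```

with something like this in place the DHCP server adds the forward
records itself when it hands out a lease, so nobody has to type an ipv6
address anywhere.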

 And NXDOMAIN should work the way it was intended, for all the reasons
 you know better than I.

while i agree, i don't think the people who are substituting positive
responses for NXDOMAIN care at all what you think or what i think, so i'm
going to focus on what can be done which is advancing robust solutions.

 This is entirely possible ;-).  Just go ask Evan Hunt what he's been up to
 with Dave Taht recently

more appliance vendors including open source are definitely welcome.  the
pool is large enough for everybody to swim in it.



Re: Yahoo and IPv6

2011-05-16 Thread Paul Vixie
 From: Owen DeLong o...@delong.com
 Date: Mon, 16 May 2011 16:12:27 -0700
 
 ... It's not like you can even reach anything at home now, let alone
 reach it by name.

that must and will change.  let's be the generation who makes it possible.



Re: Yahoo and IPv6

2011-05-14 Thread Paul Vixie
Matthew Kaufman matt...@matthew.at writes:

 My Desktop is not able to make any IPv4 socket connections anymore.  I get
 Protocol not supported. So there are IPv6-only users, already bitten by
 no .  So that's -1 from me.

 Sounds to me like you're not on The Internet any more.

in http://www.merit.edu/mail.archives/nanog/2001-04/msg00294.html we see:

(*2)Q: But what IS the Internet?
A: It's the largest equivalence class in the reflexive, transitive,
symmetric, closure of the relationship 'can be reached by an IP
packet from'. Seth Breidbart

by which definition, matthew's observation would be correct.  folks who want
to run V6 only and still be on the internet will need proxies for a long
while.  folks who want to run V6 only *today* and not have any proxies *today*
are sort of on their own -- the industry will not cater to market non-forces.
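breidbart's definition is concrete enough to compute; here is a toy
union-find sketch with invented host names (symmetric closure: edge
direction is ignored; transitive closure: components merge):

```python
def internet(hosts, can_reach):
    """Largest equivalence class under the reflexive, transitive, symmetric
    closure of 'can be reached by an IP packet from' (union-find)."""
    parent = {h: h for h in hosts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in can_reach:                 # symmetric: merge both endpoints
        parent[find(a)] = find(b)
    classes = {}
    for h in hosts:                        # group hosts by component root
        classes.setdefault(find(h), []).append(h)
    return max(classes.values(), key=len)

# a v6-only host with no proxy can't exchange packets with the v4 side
print(sorted(internet(["v4only", "dual", "v6only"], [("v4only", "dual")])))
# prints ['dual', 'v4only']
```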
-- 
Paul Vixie
KI6YSY



Re: Yahoo and IPv6

2011-05-14 Thread Paul Vixie
 From: Marshall Eubanks t...@americafree.tv
 Date: Sat, 14 May 2011 13:02:16 -0400
 
 I think that the real question is, when will people who are running
 IPv4 only not be on the Internet by this definition ?

is there an online betting mechanism we could use, that we all think will
still be in business decades from now when the truth is known?  if we're
going to start picking the month and year when IPv4 is the new PDP-11
compatibility mode (that's a VAX reference), where the winner is whoever
comes closest without going over, my pick is July 2021, and i'm in for $50.



Re: Yahoo and IPv6

2011-05-14 Thread Paul Vixie
Jim Gettys j...@freedesktop.org writes:

 ... we have to get naming squared away.  Typing IPv6 addresses is for the
 birds, and having everyone have to go fuss with a DNS provider isn't a
 viable solution.

perhaps i'm too close to the problem because that solution looks quite
viable to me.  dns providers who don't keep up with the market (which means
ipv6 and dnssec in this context) will lose business to those who do.
-- 
Paul Vixie
KI6YSY



Re: NTT as a service provider in the US

2011-02-27 Thread Paul Vixie
powerzo...@gmail.com powerzo...@gmail.com writes:

 Anyone have any thoughts on NTT as a service provider in the US ? Anyone
 currently or previously using them please chime in.

can't do it.  i have thoughts but i won't answer a freemail address.  i'm
taking the time to say so because your post looks like trolling to me.  if
you ask again with a real domain name and a real meatspace signature, i'll
be happy to say what i think about ntt as a service provider in the US.
-- 
Paul Vixie
KI6YSY



Re: Mac OS X 10.7, still no DHCPv6

2011-02-27 Thread Paul Vixie
there are two replies here.

---

Christopher Morrow morrowc.li...@gmail.com writes:

 ..., what's the harm in dhcpv6? (different strokes and all that)

only the egos and reputations of those who said that stateless autoconf
was all ipv6 needed.  (which is a small price to pay, according to me.)

---

Dobbins, Roland rdobb...@arbor.net writes:

 On Feb 28, 2011, at 10:47 AM, Steven Bellovin wrote:

 Also don't forget privacy-enhanced addresses.

 Yes, which have extremely negative opsec connotations in terms of
 complicating traceback.

/64 csma subnets with low order 64 bits controlled by infectable pc's means
we'll be blackholing by /64 when we blackhole in ipv6.  it's no big deal.
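the aggregation step is trivial with python's stdlib; a sketch (the
addresses are documentation-prefix examples, not real spam sources):

```python
import ipaddress

def blackhole_64s(sources):
    """Collapse individual v6 source addresses into the /64s that contain
    them -- one infected host can roam its whole bottom 64 bits."""
    return {ipaddress.ip_network(addr + "/64", strict=False)
            for addr in sources
            if ipaddress.ip_address(addr).version == 6}

seen = ["2001:db8:1:2::aaaa", "2001:db8:1:2::bbbb", "2001:db8:9:9::1"]
print(sorted(str(n) for n in blackhole_64s(seen)))
# prints ['2001:db8:1:2::/64', '2001:db8:9:9::/64']
```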
-- 
Paul Vixie
KI6YSY



Re: Leasing of space via non-connectivity providers

2011-02-10 Thread Paul Vixie
 Date: Thu, 10 Feb 2011 01:13:49 -0600
 From: Jimmy Hess mysi...@gmail.com
 
 With them not requiring a /8 in the first place (after CIDR); one
 begins to wonder how much of their /8 allocations they actually
 touched in any meaningful way.

i expect that after final depletion there will be some paid transfers
from some of the large legacy blocks.  i have no personal knowledge of
HP's situation or indeed any /8 holder's situation, but if the market
value of these transfers ever meaningfully exceeds the renumbering penalty
then their beancounters will find a way to get it done.  that's the way
of the world.

i can imagine this NOT happening.  most businesses are looking for long
term strategic investments not quick-fix short-term band-aids.  a buddy
loaned me a macbook after my thinkpad was stolen, and i loved it, and i
went down to the apple store to buy one of my own just like my buddy's
loaner and they said "we only sell that with the chicklet keyboard now"
and i tried it and hated it.  i could buy my buddy's laptop but without
applecare and without the ability to replace it if it's lost/stolen i'm
not willing to make that investment.  so for me it's another thinkpad.

so if a company who traditionally needs a lot of IPv4 to grow their
network knows that they can get one last quarter's worth of it from some
legacy /8 holder, they might do some kind of paid transfer, or they might
just hum some dire straits and keep moving with their ipv6 plans.

Now it's past last call for alcohol
Past recall has been here and gone
The landlord finally paid us all
The satin jazzmen have put away their horns
And we're standing outside of this wonderland
Looking so bereaved and so bereft
Like a Bowery bum when he finally understands
The bottle's empty and there's nothing left

(Your Latest Trick)

for some IPv4 based businesses a /8 would be more than a lifetime supply,
but there's a value ceiling imposed by the space other people can get.
(when everybody else has made other arrangements, the relative value of
one's own hoard decreases.)

 Perhaps the RIRs should personally and directly ask each /8 legacy
 holder to provide account of their utilization (which portions of the
 allocation is used, how many hosts), and ASK for each unused /22 [or
 shorter] to be returned.
 
 The legacy holders might (or might not) refuse.  They might (or might
 not) tell the RIRs "Hell no."  In any case, the registry should ASK and
 publish an indication for each legacy /8 at least.
 
 So the community will know which (if any) legacy /8 holders are likely
 to be returning the community's IPv4 addresses that they obtained but
 don't have need for.
 
 The community should also know which /8 legacy holders say "Hell no,
 we're keeping all our /8s, and not telling you how much of the
 community's IPv4 resources we're actually using."

this gets into the controversial topic of an RIR's standing with respect
to legacy space, and i'll leave that to the lawyers to talk about.  but
as owen and others have said, if a legacy holder were approached in this
way, knowing that their answer was going to be on the public record, they
probably would see no incentive at all to answer the question.



Re: Leasing of space via non-connectivity providers

2011-02-09 Thread Paul Vixie
David Conrad d...@virtualized.org writes:

 I'm curious: when HP acquired the assets of Compaq (or when Compaq
 acquired the assets of Digital), is it your position that HP (or Compaq)
 met the same criteria as if they were requesting an IP address directly
 from the IR. for 16.0.0.0/8?

since i was the guy to do the initial carving on 16.0.0.0/8 i pondered this
at the time of the CPQ and HP acquisitions.  my research revealed that the
network that DEC had numbered using 16.0.0.0/8 was still in existence and
had been part of the acquisition process.  there's an interesting question
as to whether the acquirer should have had to renumber, since the acquirer
had their own /8 and probably had the ability to contain both the old and
new networks in the same /8.  there's another interesting question as to
whether either DEC or HP could have qualified for a /8 under current rules,
since the basis for these (pre-RIR) allocations was that they needed more
than a /16 and these were the days before CIDR.  (at the time i received
the /8 allocation at DEC, we had a half dozen /16's and several dozen /24's that
we wanted to stop using because we worried about the size of the global
routing table... what whacky kids we all were.  hint: i had hair back then.)
-- 
Paul Vixie
KI6YSY



Re: Verizon acquiring Terremark

2011-02-02 Thread Paul Vixie
 Date: Wed, 2 Feb 2011 03:22:39 -0500
 From: Jeffrey Lyon jeffrey.l...@blacklotus.net
 
 I'm sure everything will be fine in practice as others have indicated,
 I was merely making a point of the inherent conflict of interest.

ah.  if you mean it's unusual or it's difficult rather than it
cannot be then i have no arguments.  the reason PAIX got traction
at all, coming late to the game (1995-ish) as we did, was because MFS
was then able to charge circuit prices for many forms of cross connect
down at MAE West.  and i did face continuous pressure from MFN to go
after a share of PAIX's carriers' circuit revenue.  (which i never did
and which none of my successors have done either.)

noting, the game has moved on.  if verizon behaves badly as terremark's
owner then the presence of equinix in the market will act as a relief
valve.  i think the neutral and commercial model is very well
established and that verizon will not want to be the only carrier in
those facilities nor have their circuit-holders be the only customers
for the real estate.  it's an awful lot of space to use just as colo,
and it's both over- and underbuilt for colo (vs. an IX).

re:

 On Wed, Feb 2, 2011 at 1:38 AM, Paul Vixie vi...@isc.org wrote:
  Jeffrey Lyon jeffrey.l...@blacklotus.net writes:
 
  One cannot be owned by a carrier and remain carrier neutral.
 
  My two cents,
 
  my experience running PAIX when it was owned by MFN was not
  like you're saying.



Re: Verizon acquiring Terremark

2011-02-01 Thread Paul Vixie
Jeffrey Lyon jeffrey.l...@blacklotus.net writes:

 One cannot be owned by a carrier and remain carrier neutral.

 My two cents,

my experience running PAIX when it was owned by MFN was not like you're saying.
-- 
Paul Vixie
KI6YSY



Re: [arin-announce] ARIN Resource Certification Update

2011-01-30 Thread Paul Vixie
 From: Alex Band al...@ripe.net
 Date: Sun, 30 Jan 2011 11:39:36 +0100
 
 I think my question is very pertinent. Of course the number of signed
 prefixes directly influences the number of validators. Do you think
 the RIPE NCC Validator tool would have been downloaded over 100 times
 in the last month if there were only 5 certified prefixes?

i think we may be talking past each other.  the number of production
validators will be unrelated to any difference between prefixes signed
because signing is easy and prefixes signed because operators are
willing to do something hard.  the operators who will sign even if it's
hard (for example, deploying up/down) and also the operators who will
only do it if it's easy (for example, hosted at an RIR) will each not
care how many production validators there are at that moment -- their
decision will be made on some other basis.

 Practically, in the real world, why would anyone invest time and
 effort in altering their current BGP decision making process to
 accommodate for resource certification if the technology is on
 nobody's radar, it's hard to get your feet wet and there are just a
 handful of certified prefixes out there. Wouldn't it be good if
 network operators think: Because it helps increase global routing
 security, it's easy to get started and lots of people are already
 involved, perhaps I should have a look at (both sides of) resource
 certification too.

the reasoning you're describing is what we had in mind when we built DLV
as an early deployment aid for DNSSEC.  we had to break stiction where
if there were no validators there would be incentive to sign, and if
there were no signatures there would be no incentive to validate.  are
you likewise proposing the hosted solution only as an early deployment
aid?  i'm really quite curious as to whether you'll continue operating
an RPKI hosting capability even if it becomes unnec'y (as proved some
day if many operators of all sizes demonstrate capability for up/down).
if so, can you share the reasoning behind that business decision?

i know it sounds like i'm arguing against a hosted solution, but i'm
not.  i'm just saying that network operators are going to make business
decisions (comparing cost and risk to benefit) as to whether to sign and
whether to validate, and RIR's are going to make business decisions
(comparing cost and risk to benefit) as to what provisioning mode to
offer, and i don't plan to try to tell any network operators to sign or
validate based on my own criteria, nor do i plan to try to tell any RIR
that they should host or do up/down based on my own criteria.  it's
their own criteria that matters.  let's just get the best starting
conditions we can get, and then expect that everybody will make the best
decision they can make based on those conditions.

at ISC i have been extremely interested in participating in RPKI
development and i think that sra and randy (and the whole RPKI team
inside IETF and among the RIRs) have done great work improving the
starting conditions for anyone who has to make a business decision of
whether to deploy and if so what mode to deploy in.  on the ARIN BoT i
have likewise been very interested in and supportive of RPKI and i'm
happy to repeat john curran's words which were, ARIN is looking at the
risks and benefits of various RPKI deployment scenarios, and we expect
to do more public and member outreach and consultation at our upcoming
meeting in san juan PR.

Paul Vixie
Chairman and Chief Scientist, ISC
Member, ARIN BoT

re:

  i don't agree that that question is pertinent.  in deployment scenario
  planning i've come up with three alternatives and this question is not
  relevant to any of them.  perhaps you know a fourth alternative.  here
  are mine.
  
  1. people who receive routes will prefer signed vs. unsigned, and other
  people who can sign routes will sign them if it's easy (for example,
  hosted) but not if it's too hard (for example, up/down).
  
  2. same as #1 except people who really care about their routes (like
  banks or asp's) will sign them even if it is hard (for example, up/down).
  
  3. people who receive routes will ignore any unsigned routes they hear,
  and everyone who can sign routes will sign them no matter how hard it is.
  
  i do not expect to live long enough to see #3.  the difference between #1
  and #2 depends on the number of validators not the number of signed routes
  (since it's an incentive question).  therefore small differences in the
  size of the set of signed routes does not matter very much in 2011, and
  the risk:benefit profile of hosted vs. up/down still matters far more.
  ...



Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread Paul Vixie
 From: Alex Band al...@ripe.net
 Date: Sat, 29 Jan 2011 16:26:55 +0100
 
 ... So the question is, if the RIPE NCC would have required everyone
 to run their own certification setup using the open source tool-sets
 Randy mentions, would there be this much certified address space now?

i don't agree that that question is pertinent.  in deployment scenario
planning i've come up with three alternatives and this question is not
relevant to any of them.  perhaps you know a fourth alternative.  here
are mine.

1. people who receive routes will prefer signed vs. unsigned, and other
people who can sign routes will sign them if it's easy (for example,
hosted) but not if it's too hard (for example, up/down).

2. same as #1 except people who really care about their routes (like
banks or asp's) will sign them even if it is hard (for example, up/down).

3. people who receive routes will ignore any unsigned routes they hear,
and everyone who can sign routes will sign them no matter how hard it is.

i do not expect to live long enough to see #3.  the difference between #1
and #2 depends on the number of validators not the number of signed routes
(since it's an incentive question).  therefore small differences in the
size of the set of signed routes does not matter very much in 2011, and
the risk:benefit profile of hosted vs. up/down still matters far more.

 Looking at the depletion of IPv4 address space, it's going to be
 crucially important to have validatable proof who is the legitimate
 holder of Internet resources. I fear that by not offering a hosted
 certification solution, real world adoption rates will rival those of
 IPv6 and DNSSEC. Can the Internet community afford that?

while i am expecting a rise in address piracy following depletion, i am
not expecting #3 (see above) and i think most of the piracy will be of
fallow or idle address space that will therefore have no competing route
(signed or otherwise).  this will become more pronounced as address
space holders who care about this and worry about this sign their routes
-- the pirates will go after easier prey.  so again we see no material
difference between hosted and up/down on the deployment side or if there
is a difference it is much smaller than the risk:benefit profile
difference on the provisioning side.

in summary, i am excited about RPKI and i've been pushing hard for it in
both my day job and inside the ARIN BoT, but... let's not overstate the
case for it or kneejerk our way into provisioning models whose business
sense has not been closely evaluated.  as john curran said, ARIN will
look to the community for the guidance he needs on this question.  i
hope to see many of you at the upcoming ARIN public policy meeting in
san juan PR where this is sure to be discussed both at the podium and in
the hallways and bar rooms.

Paul Vixie
Chairman and Chief Scientist, ISC
Member, ARIN BoT



Re: AltDB?

2011-01-08 Thread Paul Vixie
 Date: Sat, 08 Jan 2011 15:47:51 +0900
 From: Randy Bush ra...@psg.com
 ...
 more recent rumors, and john's posting here, seem to indicate that
 ...

even to the extent that i know what's really happened or happening, i'd
be loathe to comment on rumours.  i have high confidence in arin's board
and staff, and i believe that the right things are happening, even with
the delays.  right things as in what's best for the community and for
the internet industry in the arin service region.  as a strong proponent
of rpki and of all things like rpki that will strengthen infrastructure,
i remain delay-tolerant if review is the cost of getting it right.

 first, it would really help if the arin bot and management were much
 more open about these issues and decisions.  at the detailed level.  we
 are all not fools out here, present company excepted :).  for a radical
 example, considering that arin is managing a public resource for the
 community, why are bot meetings not streamed a la cspan?

can you cite some examples of nonprofit companies whose boards operate at
the level of transparency you're asking me to consider in this example?

the process of rolling out something like rpki involves some checks and
balances, it's no longer just a simple matter of the technical people doing 
the right thing even though i remember older times when that was the way
most things on the internet worked.

 i do not see how you are going to get rid of the liability.  you have it
 now in whois/irr if i use it for routing (except they are so widely known
 to be bad data that the world knows i would be a fool to bet on them).
 whether the source of a roa is a user whacking on an arin web page or by
 other means, you still attested to the rights to that address space.

my own belief here (not speaking for ARIN or for the ARIN BoT) is that the
folks who use IRR/whois data to build route filters have a confidence level
much lower than those who will use RPKI to do the same will have.  i know
that if i still had enable on anything other than my home router, that's
how i'd feel.  also, liability isn't just got rid of it's also documented
and risk-managed, and doing that may require some kind of internal review.

 but all this is based on inference and rumor.  can you please be more
 open and direct about this?  thanks.

i don't know.  john (speaking for ARIN) gave an excellent and complete answer
that i completely agree with.  you're repeating some rumours which i won't
comment on one way or the other.  if you have specific questions which were
not answered by john's response or which were raised by john's response you
should ask them.  saying i heard a rumour, would anyone care to refute it?
is not going to move the conversational line of scrimmage at all.

paul



Re: AltDB?

2011-01-08 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Fri, 7 Jan 2011 21:01:52 -1000
 
  do you have a specific proposal? i've noted in the past that arin tries
  hard to stick to its knitting, which is allocation and allocation policy.
 
 Yes. This is a positive (IMHO), however it seems that occasionally,
 ARIN's knitting tangles up folks who don't necessarily involve
 themselves with ARIN's existing interaction mechanisms (at least
 directly).

the price of changing what ARIN does is, at a minimum: participation.

  it seems to me that if some in the community wanted arin to run SIGs
  or WGs on things like routing policy arin could do it but that a lot
  of folks would say that's mission creep and that it would be arin
  poaching on nanog lands.
 
 The issue I see is that there are non-address allocation{, policy}
 topics that can deeply affect network operations in which ARIN has a
 direct role, yet network operators (outside of the normal ARIN
 participants) have no obvious mechanism in which to
 comment/discuss/etc.  Examples would include reverse DNS operations,
 whois database-related issues (operations, schema, access methods,
 etc.), (potentially?) RPKI, etc.  It doesn't seem appropriate to me
 for these to be discussed in relation to addressing policy nor are the
 issues associated with those examples necessarily related to address
 allocation, hence I wouldn't think they'd be fodder for ppml.

they are, though.  i understand the subtlety of the question, is that a
policy matter? but discussions on ppml@ have led to determinations of
what is lameness? and when is a nameserver so lame that it's better to
remove it from in-addr than to leave it in?  i hear in what you're saying
a desire to have a way to impact ARIN's behaviour outside of NRPM edits
and perhaps ARIN does need to address this with some new online forum for
things which aren't allocation policy but which should still be decided
using community input.  (as i recall my first act as a new ARIN trustee
was to sign onto a policy proposal that would have changed the way e-mail
templates worked, and at the end of the process the ARIN BoT shot it down
because it wasn't a policy, and i understood that decision.  strange, eh?)

 ...
 
 So, in other words, no, I don't really have a specific proposal.

perhaps others will chime in.  i will continue to think about it also.



Re: AltDB?

2011-01-08 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Fri, 7 Jan 2011 23:11:32 -1000
 
 On Jan 7, 2011, at 10:24 PM, Paul Vixie wrote:
  the price of changing what ARIN does is, at a minimum: participation.
 
 Another view is that ARIN's whole and sole reason for being is to
 provide services to the network operators in the ARIN region.

yes.

 As such, it would be ill-advised for ARIN to change those services
 without consulting the community that ARIN serves and getting their
 buy-in.

that's very much what i mean by participation.  arin could never exist
without a community to serve.  if there are better ways to serve the
community or better ways for the community to participate in steering
arin's services, then i'm very interested in discovering them.

 Hopefully, there's a middle ground.

this *is* the middle ground.  we're beyond the span of decades when a
couple of smart engineers could bang out a working solution that the
rest of the community would just adopt out of opportunity and inertia.
and let's not just blame-the-lawyers for that.  the stakeholders in
the infrastructure of the information economy now number in the 'many'
and their views and needs have to be represented in the decisions that
get made by places like ICANN, IETF, the RIRs, and similar.

  i hear in what you're saying a desire to have a way to impact ARIN's
  behaviour outside of NRPM edits and perhaps ARIN does need to address
  this with some new online forum for things which aren't allocation
  policy but which should still be decided using community input.
 
 Yep.  Not sure it should be an ARIN-operated thing (nor am I sure that
 it shouldn't be), but something a bit more focused on the operation of
 services ARIN provides than ppml might be helpful.

count me as 'intrigued' and expect me to be thinking more about this.



Re: AltDB?

2011-01-08 Thread Paul Vixie
 Date: Sat, 08 Jan 2011 18:17:55 +0900
 From: Randy Bush ra...@psg.com
 
 let me be a bit more clear on this

thanks.

   o you affect the operational community, you talk with (not to) the
 operational community where the operational community talks

i think arin does this today.  certainly that is the intent.  on the other
fork of this thread, drc has noted some ways that this engagement area can
be further improved, and i have counted myself as intrigued.

also, i neglected to mention in my earlier notes on this thread that in
addition to public policy meetings and the public policy mailing list
which are open to the entire community not just arin members and which
allow for remote participation not just those who can travel, arin has a
consultation and suggestion process (URL below).  i urge all operators
and interested parties of the operational community to consider sharing
their perspectives and their wisdom with arin to guide it going forward.

ARIN Consultation and Suggestion Process:
https://www.arin.net/participate/acsp/index.html

ARIN Public Policy Mailing List:
http://lists.arin.net/mailman/listinfo/arin-ppml

Meetings:
https://www.arin.net/participate/meetings/index.html
https://www.arin.net/participate/meetings/reports/ARIN_XXVI/index.html
https://www.arin.net/participate/meetings/ARIN-XXVI/remote.html
https://www.arin.net/participate/meetings/ARIN-XXVII/index.html
https://www.arin.net/participate/meetings/ARIN-XXVIII/index.html

Fellowships:
https://www.arin.net/participate/meetings/fellowship.html

Scholarships:
https://www.arin.net/participate/meetings/scholarships.html



Re: AltDB?

2011-01-07 Thread Paul Vixie
note that while i am also an ARIN trustee, i am speaking here as what randy
calls just another bozo on this bus.  for further background, ISC has done
some rpki work and everybody at ISC including me likes rpki just fine.  when
the ARIN board was first considering funding ISC to do some early rpki work,
i went out into the hallway until the discussion was over (ending positively.)

On Jan 5, 2011, at 12:32 PM, Randy Bush wrote:
 i have a rumor that arin is delaying and possibly not doing rpki that
 seems to have been announced on the ppml list (to which i do not
 subscribe).  

john curran has explained that arin is doing its due diligence on some
concerns that were brought up during a review of the rpki rollout.  there
is no sense in which arin has said that it is not doing rpki although the
current review does technically qualify as delaying rpki.  i'm treating
the above rumour as false.

David Conrad d...@virtualized.org writes:
 I heard about the delay, but not about ARIN possibly not doing RPKI. That
 would be ... surprising.  [...]

it would be very much surprising to me as well.

[bush]
 as it has impact on routing, not address policy, across north america
 and, in fact the globe, one would think it would be announced and
 discussed a bit more openly and widely.

even if i thought that the operational impact could be felt in these early
days when rpki remains an almost completely nonproduction service, and i
don't think this by the way, i would still say that an internal review of
a new service is not really something the whole community cares about.

[conrad]
 The definition of what comes under the public policy mailing list
 umbrella has always been a bit confusing to me.  Too bad something like
 the APNIC SIGs and RIPE Working Groups don't really exist in the ARIN
 region.

do you have a specific proposal?  i've noted in the past that arin tries
hard to stick to its knitting, which is allocation and allocation policy.
it seems to me that if some in the community wanted arin to run SIGs or WGs
on things like routing policy arin could do it but that a lot of folks would
say that's mission creep and that it would be arin poaching on nanog lands.
-- 
Paul Vixie
Chairman and Chief Scientist, ISC
Trustee, ARIN



Re: Comcast enables 6to4 relays

2010-08-29 Thread Paul Vixie
John Jason Brzozowski john_brzozow...@cable.comcast.com writes:

 This does not alter our plans for our native dual stack trials, in fact, I
 hope to have more news on this front soon.

comcast native dual stack is working fine at my house.
traceroute6 -q1 mol.redbarn.org shows details.



Re: [Bruce Hoffman] Thank-you for your recent participation.

2010-06-27 Thread Paul Vixie
Rich Kulawiec r...@gsp.org writes:

 Amusingly, this was sent to me *after* I replied to ab...@internap
 complaining about getting spammed.

 This spam came from the icontact spammers-for-hire: they're absolute
 filth who have been abusing individuals and mailing lists for years.
 I recommend blacklisting them permanently.

domains and/or cidrs, plz?
-- 
Paul Vixie
KI6YSY



Re: Nato warns of strike against cyber attackers

2010-06-09 Thread Paul Vixie
d...@bungi.com (Dave Rand) writes:
 ...
 With more than 100,000,000 compromised computers out there, it's really
 time for us to step up to the plate, and make this happen.

+1.
-- 
Paul Vixie
KI6YSY



Re: getting the hint

2010-04-17 Thread Paul Vixie
Larry Sheldon larryshel...@cox.net writes:

 The only response that works -- and even this is not guaranteed -- is
 shunning.
 
 Drop the message.  Do not respond.  Ever.

 And for the love of Pete, when somebody (as I have) makes a mistake and
 does his bidding, tell the miscreant VIA PRIVATE EMAIL or a note tied to
 a brick, but do not prate incessantly about it on the list.

+1.
-- 
Paul Vixie
KI6YSY



Re: legacy /8

2010-04-12 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Sun, 11 Apr 2010 13:52:24 -1000
 
 On Apr 11, 2010, at 10:57 AM, Paul Vixie wrote:
  ... i'd like to pick the easiest problem and for that reason i'm urging
  dual-stack ipv4/ipv6 for all networks new or old.
 
 Is anyone arguing against this?

yes.  plenty of people have accused ipv6 of being a solution in search of
a problem.  on this very mailing list within the last 72 hours i've seen
another person assert that ipv6 isn't needed.  while i tend to agree
with tony li, who famously said of ipv6 that it was too little and too soon,
but we have been Overtaken By Events and we now have to deploy it or else.  the
only way we're going to do that is with widescale dual-stack, either
native dual-stack (which is generally easy since ipv6 address space is
cheap and plentiful) or dual-stack-lite (which is ipv4-NAT ipv6-native
with aggregated encap/decap at the POP or edge) or with any other method
(or trick) that comes to mind or seems attractive.

what we can't do is presume that any form of ipv4 steady state forever or
wait for something better than ipv6 before abandoning ipv4 is practical,
or that these would be less expensive (in both direct cost, indirect cost,
and network/market stability) than dual-stack now, ipv6-mostly soon, and
ipv6-only eventually.

 The problem is what happens when there isn't sufficient IPv4 to do dual
 stack.

that problem has many low hanging solutions, some of which mark andrews
gave in his response to your note.  one popular address allocation policy
proposal is reserving the last IPv4 /8 for use in IPv6 deployment, for
example as public-facing dual-stack-lite gateways.

which brings me to the subject of address allocation policies, and meetings
that happen periodically to discuss same.  one such address allocator is
ARIN (American Registry for Internet Numbers) and one such public policy
meeting is next week in toronto.  details of this meeting can be found at:

https://www.arin.net/participate/meetings/ARIN-XXV/

and anyone, not just from the ARIN service region and not just ARIN members,
can attend.  there are also remote participation options, see above web page.
--
Paul Vixie
Chairman, ARIN BoT



Re: legacy /8

2010-04-11 Thread Paul Vixie
William Warren hescomins...@emmanuelcomputerconsulting.com writes:

 We've been dealing with the IPV4 myth now for over 7 years that i have
 followed it.  It's about as valid as the exaflood myth.  Part fo the
 reason folks aren't rushing to the V6 bandwagon is it's not needed.  Stop
 doing the chicken little dance folks.  V6 is nice and gives us tons of
 more addresses but I can tell you V4 is more than two years form dying
 just by seeing all the arm flailing going around.

anyone claiming that the then-existing ipv4 will stop working when the free
pool runs out is indeed doing the chicken little dance.  however, for many
networks, growth is life, and for them, free pool depletion is a problem.
-- 
Paul Vixie
Chairman, ARIN BoT




Re: legacy /8

2010-04-11 Thread Paul Vixie
David Conrad d...@virtualized.org writes:

 Growth in IPv4 accessible hosts will stop or become significantly more
 expensive or both in about 2.5 years (+/- 6 months).

 Growth stopping is extremely unlikely. Growth becoming significantly more
 expensive is guaranteed.  ...

more expensive for whom, though?  if someone has to find existing address
space and transfer it at some payment to the current holder in order to
grow their own network, that's a direct expense.  if this became common
and resulted in significant deaggregation, then everybody else attached in
some way to the global routing table would also have to pay some costs,
which would be indirect expenses.

unless a market in routing slots appears, there's no way for the direct
beneficiaries of deaggregation to underwrite the indirect costs of same.

at a systemic level, i'd characterize the cost of that kind of growth as
instability rather than mere expense.

 Address utilization efficiency will increase as people see the value in
 public IPv4 addresses.  ISPs interested in continuing to grow will do
 what it takes to obtain IPv4 addresses and folks with allocated- but-
 unused addresses will be happy to oblige (particularly when they accept
 that they only need a couple of public IP addresses for their entire
 network).  At some point, it may be that the cost of obtaining IPv4 will
 outstrip the cost of migrating to IPv6.  If we're lucky.

the cost:benefit of using ipv6 depends on what other people have deployed.
that is, when most of the people that an operator and their customers want
to talk to are already running ipv6, then the cost:benefit will be
compelling compared to any form of continued use of ipv4.  arguments about
the nature and location of that tipping point amount to reading tea leaves.

nevertheless if everybody who can deploy dual-stack does so, we'll reach
that tipping point sooner and it'll be less spectacular.
-- 
Paul Vixie
Chairman, ARIN BoT



Re: legacy /8

2010-04-11 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Sun, 11 Apr 2010 10:30:05 -1000
 
  unless a market in routing slots appears, there's no way for the direct
  beneficiaries of deaggregation to underwrite the indirect costs of same.
 
 And that's different from how it's always been in what way?

when 64MB was all anybody had, deaggregation was rendered ineffective by
route filtering.  what i've seen more recently is gradual monotonic
increase in the size of the full table.  if the systemic cost of using
all of ipv4 includes a 10X per year step function in routing table size
then it will manifest as instability (in both the network and the market).

as you have pointed out many times, ipv6 offers the same number of /32's
as ipv4.  however, a /32 worth of ipv6 is enough for a lifetime even for
most multinationals, whereas for ipv4 it's one NAT or ALG box.  so, i'm
thinking that making ipv4 growth happen beyond pool exhaustion would be a
piecemeal affair and that the routing system wouldn't accommodate it
painlessly.  the rate of expansion of other people's routers seems to
fit the growth curve we've seen, but will it fit massive deaggregation?

 My tea leaf reading is that history will repeat itself.  As it was in the
 mid-90's, as soon as routers fall over ISPs will deploy prefix length (or
 other) filters to protect their own infrastructure as everybody scrambles
 to come up with some hack that won't be a solution, but will allow folks
 to limp along.  Over time, router vendors will improve their kit, ISPs
 will rotate out routers that can't deal with the size/flux of the bigger
 routing table (passing the cost on to their customers, of course), and
 commercial pressures will force the removal of filters.  Until the next
 go around since IPv6 doesn't solve the routing scalability problem.

instability like we had in the mid-1990's would be far more costly today,
given that the internet is now used by the general population and serves a
global economy.  if the rate of endpoint growth does not continue beyond
ipv4 pool exhaustion we'll have a problem.  if it does, we'll also have a
problem but a different problem.  i'd like to pick the easiest problem and
for that reason i'm urging dual-stack ipv4/ipv6 for all networks new or old.
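the prefix length filters mentioned above can be sketched in a couple of
lines.  a toy illustration only -- the threshold and names are mine, and
real filters live in router configs, not python:

```python
import ipaddress

MAX_IPV4_PREFIXLEN = 24  # illustrative threshold, not anyone's real policy

def accept_route(prefix):
    """Toy prefix-length filter of the kind ISPs deployed in the
    mid-90's: drop announcements more specific than the threshold."""
    return ipaddress.ip_network(prefix).prefixlen <= MAX_IPV4_PREFIXLEN

print(accept_route("192.0.2.0/24"))  # True: aggregate-sized, accepted
print(accept_route("192.0.2.0/25"))  # False: deaggregated, filtered
```

the point being that such a filter protects the filtering router at the
cost of reachability for whoever deaggregated.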
--
Paul Vixie
Chairman, ARIN BoT



Re: DNS server software

2010-02-22 Thread Paul Vixie
Claudio Lapidus clapi...@gmail.com writes:

 We are a mid-sized carrier (1.2M broadband subscribers) and we are
 looking for an upgrade in our public DNS resolver infrastructure, so we
 are interested in getting to know what are you guys using in your
 networks.  Mainly what kind/brand of software and which architecture did
 you use to deploy it, and how did you do the sizing, all of it would be
 most helpful information.

Unsurprisingly, we (AS1280, AS3557) run BIND 9.  See http://www.isc.org/.
We have at least two recursives in each AS1280 site, and one in each
AS3557 location (f-root).  Stubs (either /etc/resolv.conf or DHCP) each use
all local plus some non-local, for a minimum of three total.  Recursive DNS
servers do not use forwarding or other cache-sharing techniques, each is
fully independent.  Most have DNSSEC validation enabled, and of those, all
are subscribed to ISC DLV, see http://dlv.isc.org/.  Most server hosts
here run FreeBSD on AMD64/EM64T or else i386.
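for the curious, the validation setup described above looks roughly like
this in a BIND 9 named.conf of that era.  a sketch, not our production
config, and only the options relevant here:

```
options {
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;
    // DNSSEC lookaside validation via ISC DLV, as described above
    dnssec-lookaside . trust-anchor dlv.isc.org.;
};
```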
-- 
Paul Vixie
KI6YSY



Re: Spamhaus...

2010-02-21 Thread Paul Vixie
Rich Kulawiec r...@gsp.org writes:

 On Fri, Feb 19, 2010 at 08:20:36PM -0500, William Herrin wrote:
 Whine all you want about backscatter but until you propose a
 comprehensive solution that's still reasonably compatible with RFC
 2821's section 3.7 you're just talking trash.

 We're well past that.  Every minimally-competent postmaster on this
 planet knows that clause became operationally obsolete years ago [1], and
 has configured their mail systems to always reject, never bounce. [2]

for smtp, i agree.  yet, uucp and other non-smtp last miles are not dead.

 [2] Yes, there are occasionally some edge cases of limited scope and
 duration that can be tough to handle.  ...  The key points here are
 limited scope and limited duration.  There is never any reason or
 need in any mail environment to permit these problems to grow beyond
 those boundaries.

so, a uucp-only site should have upgraded to real smtp by now, and by not
doing it they and their internet gateway are a joint menace to society?

that seems overly harsh.  there was a time (1986 or so?) when most of the
MX RR's in DNS were smtp gateways for uucp-connected (or decnet-connected,
etc) nodes.  it was never possible to reject nonexist...@uucpconnected at
their gateway since the gateway didn't know what existed or not.  i'm not
ready to declare that era dead.

william herrin had a pretty good list of suggested tests to avoid sending
useless bounce messages:

No bounce if the message claimed to be from a mailing list.
No bounce if the spam scored higher than 8 in spamassassin
No bounce if the server which you received the spam from doesn't match
my domain's published SPF records evaluated as if ~all and ?all
are -all

i think if RFC 2821 is to be updated to address the backscatter problem, it
ought to be along those lines, rather than everything must be synchronous.
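those tests are simple enough to sketch.  a minimal illustration -- the
field and parameter names here are mine, not from any particular MTA:

```python
def should_bounce(headers, spam_score, spf_result):
    """Apply the suggested tests above to decide whether a bounce
    (non-delivery notice) is worth sending at all."""
    h = {k.lower(): v for k, v in headers.items()}
    # no bounce if the message claimed to be from a mailing list
    if "list-id" in h or h.get("precedence", "").lower() in ("list", "bulk"):
        return False
    # no bounce if the spam scored higher than 8 in spamassassin
    if spam_score > 8:
        return False
    # no bounce if the sender fails SPF with ~all and ?all treated as -all
    if spf_result in ("fail", "softfail", "neutral"):
        return False
    return True

print(should_bounce({"List-Id": "<nanog.nanog.org>"}, 0.0, "pass"))  # False
print(should_bounce({}, 9.1, "pass"))                                # False
print(should_bounce({}, 1.0, "pass"))                                # True
```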
-- 
Paul Vixie
KI6YSY



Re: DNS queries for . IN A return rcode 2 SERVFAIL from windows DNS recursing resolvers

2010-01-12 Thread Paul Vixie
Joe Maimon jmai...@ttec.com writes:

 Hey all,

 This must be old news for everyone else. While looking at a dns monitor
 on a load balancer that defaulted to . A queries to check liveliness on
 DNS resolvers, it became quite clear that windows 2000/2003 DNS server
 appears to return rcode=2 for queries looking for an A record for the
 root. The resolvers appear to work properly in all other regards.

well, there is no A RR for the root domain.  RCODE=2 is still an error,
you should receive RCODE=0 ANCOUNT=0 for an unused RR type.  but many
resolvers get confused when the root domain is the QNAME, so let's assume
that you're using one of those.
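to make the RCODE/ANCOUNT distinction concrete, here is a minimal sketch
that unpacks the fixed 12-byte DNS header (RFC 1035, section 4.1.1) and
contrasts a correct empty answer with the SERVFAIL behavior described above:

```python
import struct

def parse_dns_header(packet):
    """Unpack the fixed 12-byte DNS header (RFC 1035 section 4.1.1)."""
    qid, flags, qdcount, ancount, nscount, arcount = struct.unpack(
        "!6H", packet[:12])
    rcode = flags & 0x000F          # RCODE lives in the low 4 bits of flags
    return {"rcode": rcode, "ancount": ancount}

# a correct answer for ". IN A": NOERROR (rcode 0), zero answer records,
# one authority record (the SOA, for negative caching)
good = struct.pack("!6H", 0x1234, 0x8180, 1, 0, 1, 0)
assert parse_dns_header(good) == {"rcode": 0, "ancount": 0}

# the buggy behavior described above: SERVFAIL (rcode 2)
bad = struct.pack("!6H", 0x1234, 0x8182, 1, 0, 0, 0)
assert parse_dns_header(bad)["rcode"] == 2
```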

 So the monitors were switched to localhost. A

 (Is this a bad idea?)

probably.  there is no localhost in the root zone.  this name is a TCP/IP
stack convention, not a standard.  for health monitoring purposes you should
probably choose one of your own local names, since there's almost certainly
no local intelligence in your resolver about them.  that means to look up
one of your own names the resolver probably has to iterate downward from the
root zone to the top level and all the way down to your authority nameservers.
(the problem here is, you may be testing more than you intend, and a failure
in your own authority server or in the delegation path to it would look the
same as an IP path failure or a resolver problem.)

 A little testing later and the results for . A are:

 Windows NT 4, ancount=0, authority=1, rcode=0
 Windows 2000, rcode=2
 Windows 2003, rcode=2
 bind, ancount=0, authority=1, rcode=0

 To my (inexpert) eyes that doesn't seem quite right.

probably resolver bugs, either in those TCP/IP stacks or in the recursive
nameserver they are using.  (is the same recursive nameserver used in all
four tests?)

 I can't seem to find any online information regarding this difference of
 behavior.

 Enlightenment appreciated.

i suggest re-asking this over on dns-operati...@lists.dns-oarc.net, since it's
a bit deep in the DNS bits for a general purpose list like NANOG.
-- 
Paul Vixie
KI6YSY



EDNS (Re: Are the Servers of Spamhaus.rg and blackholes.us down?)

2010-01-01 Thread Paul Vixie
Jason Bertoch ja...@i6ix.com writes:

 Dec 31 10:12:37 linux-1ij2 named[14306]: too many timeouts resolving
 'XXX.YYY.ZZZ/A' (in 'YYY.ZZZ'?): disabling EDNS

 Do you have a firewall in front of this server that limits DNS packets to
 512 bytes?

statistically speaking, yes, most people have that.  which is damnfoolery,
but well supported by the vendors, who think either that udp/53 datagrams
larger than 512 octets are amplification attacks, or that udp packets having
no port numbers because they are fragments lacking any udp port information,
are evil and dangerous.  sadly, noone has yet been fired for buying devices
that implement this kind of overspecification.  hopefully that will change
after the DNS root zone is signed and udp/53 responses start to generally
include DNSSEC signatures, pushing most of them way over the 512 octet limit.

it's going to be another game of chicken -- will the people who build and/or
deploy such crapware lose their jobs, or will ICANN back down from DNSSEC?
-- 
Paul Vixie
KI6YSY



Re: EDNS (Re: Are the Servers of Spamhaus.rg and blackholes.us down?)

2010-01-01 Thread Paul Vixie
 Date: Fri, 1 Jan 2010 22:16:31 +
 From: bmann...@vacation.karoshi.com
 
   It would help if the BIND EDNS0 negotiation would not fall back to
   the 512 byte limit - perhaps you could talk with the ISC developers
   about that.

i don't agree that your proposed change would help with this problem at all.
but in any case nanog isn't the place to ask ISC to change BIND, nor is it
the place to discuss protocol implementation or interpretation.  i suggest
bind-users@, bind-workers@, dns-operations@, dnsop@, and/or namedroppers@,
depending on what aspect of your above-described concerns you focus on.



Re: Article on spammers and their infrastructure

2009-12-30 Thread Paul Vixie
Randy Bush ra...@psg.com writes:

 If ARIN and/or RIPE and/or ICANN and/or anyone else were truly
 interested in making a dent in the problem, then they would have already
 paid attention to our collective work product.

 the rirs, the ietf, the icann, ... each think they are the top of the
 mountain.  we are supposed to come to them and pray.  more likely that
 the itu will come to them and prey.

ARIN (an RIR) does not think in terms of mountains.  the staff and company
does what members and the elected board and elected advisory council ask.
ARIN is a 501(c)(6) and sticks to its knitting, which thus far means no
distinguished role in spammers and their infrastructure but that could
change if someone writes a policy proposal which is adopted after the
normal policy development process.

please do consider whether ARIN could help with spammers and their
infrastructure and if so, write a policy draft to that effect.  ARIN is
responsive to community input, and has well established and well publicized
mechanisms for receiving and processing community input.  nobody has to
come and pray, but likewise, nobody should expect ARIN to look for mission
creep opportunities.  ARIN will go on doing what the community asks, no
less, no more.  ARIN has no mechanism, as a company, for [paying]
attention to [your] collective work product.  our members, and the public
at large who participates in ARIN's policy development process, do that.
-- 
Paul Vixie
Chairman, ARIN BoT
KI6YSY



Re: DNS question, null MX records

2009-12-16 Thread Paul Vixie
Douglas Otis do...@mail-abuse.org writes:

 If MX TEST-NET became common, legitimate email handlers unable to
 validate messages prior to acceptance might find their server
 resource constrained when bouncing a large amount of spam as well.

none of this will block spam.  spammers do not follow RFC 974 today
(since i see a lot of them come to my A RR rather than an MX RR, or
in the wrong order).  any well known pattern that says "don't try
to deliver e-mail here" will only be honoured by friendly people who
don't want to send us e-mail we don't want to get.
-- 
Paul Vixie
KI6YSY



Re: DNS question, null MX records

2009-12-16 Thread Paul Vixie
Douglas Otis do...@mail-abuse.org writes:

 Agreed. But it will impact providers generating a large amount of bounce
 traffic, and some portion of spam sources that often start at lower
 priority MX records in an attempt to find backup servers without valid
 recipient information.  In either case, this will not cause extraneous
 traffic to hit roots or ARPA.

if you're just trying to stop blowback from forged-source spam, and not
trying to stop the spam itself, then some mechanism like an unreachable
MX does seem called for.  note that those approaches will cause queuing
on the blowerbackers, rather than outright reject/die.  other approaches
that could cause outright reject/die would likely direct the blowback to
the blowback postmasters, who are as innocent as the spam victims.  i'm
not sure there's a right way to do this in current SMTP.  i used to think
we could offer to verify that a piece of e-mail had come from us using
some kind of semi-opaque H(message-id) scheme, but in studying it i
found that as usual with spam the economic incentives are all backwards.
-- 
Paul Vixie
KI6YSY



Re: Breaking the internet (hotels, guestnet style)

2009-12-08 Thread Paul Vixie
Steven Bellovin s...@cs.columbia.edu writes:

 It's why I run an ssh server on 443 somewhere -- and as needed, I
 ssh-tunnel http to a squid proxy, smtp, and as many IMAP/SSL connections
 as I really need...

me too, more or less.  but steve, if we were only trying to build digital
infrastructure for people who know how to do that, then we'd all still be
using Usenet over modems.  we're trying to build digital infrastructure for
all of humanity, and that means stuff like the above has to be unnecessary.
-- 
Paul Vixie
KI6YSY



Re: What DNS Is Not

2009-11-26 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Thu, 26 Nov 2009 07:42:15 -0800
 
 As you know, as long as people rely on their ISPs for resolution
 services, DNSSEC isn't going to help.  Where things get really offensive
 if when the ISPs _require_ customers (through port 53 blocking, T-Mobile
 Hotspot, I'm looking at you) to use the ISP's resolution services.

the endgame for provider-in-the-middle attacks is enduser validators, which
is unfortunate since this use case is not well supported by current DNSSEC
and so there's some more protocol work in our future (n!!).

i also expect to see DNS carried via HTTPS, which providers tend to leave
alone since they don't want to hear from the lawyers at 1-800-flowers.com.
(so, get ready for https://ns.vix.com/dns/query/www.vix.com/in/a?rd=1&ad=1).



Re: What DNS Is Not

2009-11-26 Thread Paul Vixie
 From: David Conrad d...@virtualized.org
 Date: Thu, 26 Nov 2009 13:25:39 -0800
 
 At some point, we may as well bite the bullet and redefine http{,s} as IPv7.

since products and services designed to look inside encrypted streams and
inspect, modify, or redirect them are illegal in most parts of the world:

yes, inevitably.




Re: What DNS Is Not

2009-11-25 Thread Paul Vixie
Jorge Amodio jmamo...@gmail.com writes:

 What needs to be done to have ISPs and other service providers stop
 tampering with DNS ?

we have to fix DNS so that provider-in-the-middle attacks no longer work.
(this is why in spite of its technical excellence i am not a DNSCURVE fan,
and also why in spite of its technical suckitude i'm working on DNSSEC.)

http://queue.acm.org/detail.cfm?id=1647302 lays out this case.
-- 
Paul Vixie
KI6YSY



Re: What DNS Is Not

2009-11-12 Thread Paul Vixie
Kevin Oberman ober...@es.net writes:

 I find it mildly amusing that my first contact with Paul was about 25
 years ago when he was at DEC and I objected to his use of a wildcard for
 dec.com.

I was only an egg.

 The situations are not parallel and the Internet was a very different
 animal in those days (and DEC was mostly DECnet), but still I managed to
 maintain a full set of MX records for all of our DECnet systems.

Based partly on my conversation with you, I ended up pulling over the
list of DECnet nodes and generating MX's for each one, just to remove
that wildcard.  You were right, and I listened.  Probably I forgot to
thank you until now.  Thanks.
-- 
Paul Vixie
KI6YSY



Re: What DNS Is Not

2009-11-09 Thread Paul Vixie
i loved the henry ford analogy -- but i think henry ford would have said that
the automatic transmission was a huge step forward since he wanted everybody
to have a car.  i can't think of anything that's happened in the automobile
market that henry ford wouldn't've wished he'd thought of.

i knew that the incoherent DNS market would rise up on its hind legs and
say all kinds of things in its defense against the ACM Queue article, and i'm
not going to engage with every such speaker.

there are three more-specific replies below.

Dave Temkin dav...@gmail.com writes:

 Alex Balashov wrote:

 For example, perhaps in the case of CDNs geographic optimisation should
 be in the province of routing (e.g. anycast) and not DNS?

 In most cases it already is.  He completely fails to address the concept
 of Anycast DNS and assumes people are using statically mapped resolvers.

anycast DNS appears to mean different things to different people.  i didn't
mention it because to me anycast dns is a bgp level construct whereby the
same (coherent) answer is available from many servers having the same IP
address but not actually being the same server.  see for example how several
root name servers are distributed.  http://www.root-servers.org/.  if you
are using anycast DNS to mean carefully crafted (noncoherent) responses
from a similarly distributed/advertised set of servers, then i did address
your topic in the ACM Queue article.

David Andersen d...@cs.cmu.edu writes:

 This myth ... was debunked years ago:

 DNS Performance and the Effectiveness of Caching
 Jaeyeon Jung, Emil Sit, Hari Balakrishnan, and Robert Morris
 http://pdos.csail.mit.edu/papers/dns:ton.pdf

my reason for completely dismissing that paper at the time it came out was
that it tried to predict the system level impact of DNS caching while only
looking at the resolver side and only from one client population having a
small and uniform user base.  show me a trace driven simulation of the
whole system, that takes into account significant authority servers (which
would include root, tld, and amazon and google) as well as significant
caching servers (which would not include MIT's or any university's but
which would definitely include comcast's and cox's and att's), and i'll
read it with high hopes.  note that ISC SIE (see http://sie.isc.org/) may
yet grow into a possible data source for this kind of study, which is one
of the reasons we created it.

Simon Lyall si...@darkmere.gen.nz writes:

 I heard some anti-spam people use DNS to distribute big databases of
 information. I bet Vixie would have nasty things to say to the guy who
 first thought that up.

someone made this same comment in the slashdot thread.  my response there
and here is: the MAPS RBL has always delivered coherent responses where the
answer is an expressed fact, not kerned in any way based on the identity of
the querier.  perhaps my language in the ACM Queue article was imprecise
("delivering facts rather than policy") and i should have stuck with the
longer formulation ("incoherent responses crafted based on the identity of
the querier rather than on the authoritative data").
-- 
Paul Vixie
KI6YSY



Re: Gmail Down?

2009-09-24 Thread Paul Vixie
cabenth cabe...@gmail.com writes:

 Gmail is definitely been having a hard time this morning.

 Stand by for the comments from the peanut gallery about allowing google
 to scan your e-mail, etc like last time Gmail had an issue.

i recently explored webmail for my family and found prayer, which is a
pure C application (no php, no perl) built on the uw-imap c-client library.
it's blindingly fast even for thousands of huge mailboxes stored in MH
format.  anyone who was using the cloud because they couldn't stand the
poor performance of the apache-based webmail systems should take a look.

http://www-uxsup.csx.cam.ac.uk/~dpc22/prayer/ is the home page.  though i
found it in freebsd /usr/ports/mail/prayer.
-- 
Paul Vixie
KI6YSY



Re: DNS hardening, was Re: Dan Kaminsky

2009-08-06 Thread Paul Vixie
Christopher Morrow morrowc.li...@gmail.com writes:

 how does SCTP ensure against spoofed or reflected attacks?

there is no server side protocol control block required in SCTP.  someone
sends you a create-association request, you send back "ok, here's your
cookie" and you're done until/unless they come back and say "ok, here's my
cookie, and here's my DNS request."  so a spoofer doesn't get a cookie and
a reflector doesn't burden a server any more than a ddos would do.

because of the extra round trips nec'y to create an SCTP association (for
which you can think, lightweight TCP-like session-like), it's going to be
nec'y to leave associations in place between iterative caches and authority
servers, and in place between stubs and iterative caches.  however, because
the state is mostly on the client side, a server with associations open to
millions of clients at the same time is actually no big deal.
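the stateless-cookie idea can be sketched with an HMAC, in the spirit of
SCTP's four-way handshake (RFC 4960): the server keeps only a secret key,
and the cookie itself carries everything needed to validate the
association.  this is an illustration of the mechanism, not SCTP's actual
wire format:

```python
import hmac
import hashlib
import os
import time

SECRET = os.urandom(16)   # server-side secret; the only server state needed

def make_cookie(client_addr):
    """INIT-ACK analogue: hand out a verifiable cookie, keep no state."""
    ts = int(time.time()).to_bytes(8, "big")
    mac = hmac.new(SECRET, client_addr.encode() + ts, hashlib.sha256).digest()
    return ts + mac

def check_cookie(client_addr, cookie, max_age=60):
    """COOKIE-ECHO analogue: accept only if the MAC verifies and is fresh."""
    ts, mac = cookie[:8], cookie[8:]
    expect = hmac.new(SECRET, client_addr.encode() + ts, hashlib.sha256).digest()
    fresh = int(time.time()) - int.from_bytes(ts, "big") <= max_age
    return hmac.compare_digest(mac, expect) and fresh

c = make_cookie("192.0.2.1")
assert check_cookie("192.0.2.1", c)         # real client echoes its cookie
assert not check_cookie("198.51.100.7", c)  # a spoofed source can't reuse it
```

since the cookie is self-validating, a spoofed INIT costs the server one
HMAC and no memory, which is what keeps the reflection/state-exhaustion
surface small.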
-- 
Paul Vixie
KI6YSY



Re: DNS hardening, was Re: Dan Kaminsky

2009-08-06 Thread Paul Vixie
note, i went off-topic in my previous note, and i'll be answering florian
on namedroppers@ since it's not operational.  chris's note was operational:

 Date: Thu, 6 Aug 2009 10:18:11 -0400
 From: Christopher Morrow morrowc.li...@gmail.com
 
 awesome, how does that work with devices in the f-root-anycast design?
 (both local hosts in the rack and if I flip from rack to rack) If I send
 along a request to a host which I do not have an association created do I
 get a failure and then re-setup? (inducing further latency)

yes.  so, association setup cost will occur once per route-change event.
note that the f-root-anycast design already hashes by flow within a rack
to keep TCP from failing, so the only route-change events of interest to
this point are in wide area BGP.

 ...: Do loadbalancers, or loadbalanced deployments, deal with this
 properly? (loadbalancers like F5, citrix, radware, cisco, etc...)

as far as i know, no loadbalancer understands SCTP today.  if they can be
made to pass SCTP through unmodified and only do their enhanced L4 on UDP
and TCP as they do now, all will be well.  if not then a loadbalancer
upgrade or removal will be nec'y for anyone who wants to deploy SCTP.

it's interesting to me that existing deployments of L4-aware packet level
devices can form a barrier to new kinds of L4.  it's as if the internet is
really just the web, and our networks are TCP/UDP networks not IP networks.



Re: Dan Kaminsky

2009-08-04 Thread Paul Vixie
Curtis Maurand cmaur...@xyonet.com writes:

 What does this have to do with Nanog, the guy found a critical
 security bug on DNS last year.

 He didn't find it.  He only publicized it.  the guy who wrote djbdns found
 it years ago.

first blood on both the DNS TXID attack, and on what we now call the
Kashpureff attack, goes to chris schuba who published in 1993:

http://ftp.cerias.purdue.edu/pub/papers/christoph-schuba/schuba-DNS-msthesis.pdf

i didn't pay any special heed to it since there was no way to get enough
bites at the apple due to negative caching.  when i saw djb's announcement
(i think in 1999 or 2000, so, seven years after schuba's paper came out) i
said, geez, that's a lot of code complexity and kernel overhead for a
problem that can occur at most once per DNS TTL.  and sure enough when we
did finally put source port randomization into BIND it crashed a bunch of
kernels and firewalls and NATs, and is still paying painful dividends for
large ISP's who are now forced to implement it.

why forced?  what was it about kaminsky's announcement that changed this
from a once-per-TTL problem that didn't deserve this complex/costly solution
into a once-per-packet problem that made the world sit up and care?  if you
don't know the answer off the top of your head, then maybe do some reading
or ask somebody privately, rather than continuing to announce in public that
bernstein's problem statement was the same as kaminsky's problem statement.
and, always give credit to chris schuba, who got there first.

 Powerdns was patched for the flaw a year and a half before
 Kaminsky published his article.

nevertheless bert was told about the problem and was given a lengthy window
in which to test or improve his solutions for it.  and i think openbsd may
have had source port randomization first, since they do it in their kernel
when you try to bind(2) to port 0.  most kernels are still very predictable
when they're assigning a UDP port to an outbound socket.
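the difference is easy to see in a sketch: instead of trusting bind() to
port 0 (where many kernels assigned ports sequentially), the application
draws its own random ephemeral port and retries on collision, roughly what
BIND ended up doing in userland:

```python
import random
import socket

def bind_random_port(sock, attempts=16):
    """Pick a random ephemeral port in userland rather than trusting
    bind(0); many kernels assigned outbound UDP ports predictably."""
    for _ in range(attempts):
        port = random.randrange(1024, 65536)
        try:
            sock.bind(("127.0.0.1", port))
            return port
        except OSError:
            continue          # port already in use; draw again
    raise RuntimeError("no free port found")

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
port = bind_random_port(s)
assert 1024 <= port < 65536
s.close()
```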
-- 
Paul Vixie
KI6YSY



Re: Fwd: Dan Kaminsky

2009-08-03 Thread Paul Vixie
William Allen Simpson william.allen.simp...@gmail.com writes:

 Are we paying enough attention to securing our systems?

almost certainly not.  skimming RFC 2196 again just now i find three things.

1. it's out of date and needs a refresh -- yo barb!
2. i'm not doing about half of what it recommends
3. my users complain bitterly about the other half

in terms of cost:benefit, it's more and more the case that outsourcing looks
cheaper than doing the job correctly in-house.  not because outsourcing *is*
more secure but because it gives the user somebody to sue rather than fire,
where a lawsuit could recover some losses and firing someone usually won't.

digital security is getting a lot of investor attention right now.  i wonder
if this will ever consolidate or if pandora's box is just broken for all time.
-- 
Paul Vixie
KI6YSY



Re: White House net security paper

2009-06-02 Thread Paul Vixie
Randy Bush ra...@psg.com writes:

 ...  a few battalions of B's and C's, if wisely deployed, could bridge
 that gap.

 there is a reason Bs and Cs have spare round-tuits.

 fred brooks was no fool.  os/360 taught some of us some lessons.
 batallions work in the infantry, or so i am told.  this is rocket
 science.

to me wisely means backfilling 80% of what the Good Guys do that isn't
rocket science.  (most A's are not doing only what only A's can do.)
-- 
Paul Vixie
KI6YSY



Re: White House net security paper

2009-05-31 Thread Paul Vixie
Randy Bush ra...@psg.com writes:

 As hire As.  Bs hire Cs.  Lots of Cs.

 this problem needs neurons, not battalions.

this problem needs round-tuits, which Good Guys are consistently short of,
but which Bad Guys always have as many of as they can find use for.  a few
battalions of B's and C's, if wisely deployed, could bridge that gap.  the
key to all this is therefore not really neurons but rather wiselyness.

i promise to, um, mention this, or maybe more, in my nanog-philly keynote.
-- 
Paul Vixie
KI6YSY



Re: White House net security paper

2009-05-31 Thread Paul Vixie
Sean Donelan s...@donelan.com writes:

 How many ISPs have too many network security people?

network security is a loss center.  not just a cost center, a *loss* center.
non-bankrupt ISP's whose investors will make good multiples only staff their
*profit* centers.  the Good Guys and Bad Guys all know this -- the difference
is that the Good Guys try not to think about this whereas the Bad Guys think 
about it all the time.
-- 
Paul Vixie
KI6YSY



Re: Why choose 120 volts?

2009-05-26 Thread Paul Vixie
Leo Bicknell bickn...@ufp.org writes:

...
 http://www.apcmedia.com/salestools/NRAN-6CN8PK_R0_EN.pdf
...
 But what you'll find in the paper is that the change allows you to
 re-architect the power plant in a way that saves you money on PDU's,
 transformers, and other stuff.  Thus this makes the most sense to
 consider in a green field deployment.

noting also that architect is a noun, i find that on large plants the
cost of copper wire and circuit breakers adds up, where sizes (and prices)
are based on amperage not wattage.  in the old days when a rack needed
6kW, that was 208V 30A (10 gauge wire) or it was two of 120V 30A (also 10
gauge wire).  somewhere near the first hundred or so racks, the price of
the wire and breakers starts to seem high, and very much worth halving.
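the arithmetic behind that 6kW example is just I = P/V (ignoring power
factor and continuous-load derating):

```python
def amps(watts, volts):
    """Current drawn by a load: I = P / V (resistive load assumed)."""
    return watts / volts

rack_watts = 6000
# at 208V the rack fits on one 30A branch circuit (10 gauge wire)
assert amps(rack_watts, 208) < 30        # ~28.8 A
# at 120V the same rack draws 50A, hence the two 30A circuits
assert amps(rack_watts, 120) == 50.0
```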

once in a while some crashcart CRT monitor won't run on anything but 120V
but for $50 NRC it can be replaced with an LCD.  everything else that's
still worth plugging in (that is, having a power/heat cost per performance
better than that of a blow dryer) doesn't care what voltage it lives on.
-- 
Paul Vixie
KI6YSY



Re: Colo on the West Coast

2009-05-26 Thread Paul Vixie
Pshem Kowalczyk pshe...@gmail.com writes:

 (answers can be off-list)

See http://www.vix.com/personalcolo/.  (updates still welcomed, btw.)
-- 
Paul Vixie
KI6YSY



Re: integrated KVMoIP and serial console terminal server

2009-04-25 Thread Paul Vixie
Owen DeLong o...@delong.com writes:

 My favorite front end for serial console management is conserver.

 It's really great software and the price is right.

 http://www.conserver.com

see also /usr/ports/sysutils/rtty on freebsd, which pulls from 

MASTER_SITES=   ftp://ftp.isc.org/isc/rtty/ \
ftp://gatekeeper.research.compaq.com/pub/misc/vixie/

since the ftp server mentioned here in 1996

http://www.merit.edu/mail.archives/nanog/1996-08/msg00223.html

is dead.
-- 
Paul Vixie
KI6YSY



Re: IXP

2009-04-23 Thread Paul Vixie
Bill Woodcock wo...@pch.net writes:

 ... Nobody's arguing against VLANs.  Paul's argument was that VLANs
 rendered shared subnets obsolete, and everybody else has been rebutting
 that. Not saying that VLANs shouldn't be used.

i think i saw several folks, not just stephen, say virtual wire was how
they'd do an IXP today if they had to start from scratch.  i know that
for many here, starting from scratch isn't a reachable worldview, and so
i've tagged most of the defenses of shared subnets with that caveat.  the
question i was answering was from someone starting from scratch, and when
starting an IXP from scratch, a shared subnet would be just crazy talk.
-- 
Paul Vixie



Re: IXP

2009-04-18 Thread Paul Vixie
 From: Paul Vixie vi...@isc.org
 Date: Sat, 18 Apr 2009 00:08:04 +
 ...
 i should answer something said earlier: yes there's only 14 bits of tag and
 yes 2**14 is 4096.  in the sparsest and most wasteful allocation scheme,
 tags would be assigned 7:7 so there'd be a max of 64 peers.  

i meant of course 12 bits, that 2**12 is 4096, and 6:6.  apologies for slop.
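the arithmetic of that sparse 6:6 scheme, as a sketch: pack two 6-bit peer
ids into one 12-bit VLAN tag, which caps the fabric at 64 peers but easily
fits a full mesh of two-port VLANs:

```python
def tag_for_pair(a, b, bits=6):
    """Sparse 6:6 allocation: one 12-bit VLAN tag per peering pair."""
    assert a != b and 0 <= a < 2**bits and 0 <= b < 2**bits
    lo, hi = sorted((a, b))
    return (lo << bits) | hi        # always fits in 2**(2*bits) = 4096 tags

max_peers = 2**6                    # 64 peers maximum under this scheme
mesh = max_peers * (max_peers - 1) // 2
assert mesh == 2016                 # a full mesh of two-port VLANs fits
assert tag_for_pair(3, 17) == 209   # (3 << 6) | 17
assert tag_for_pair(17, 3) == 209   # order-independent
```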



Re: IXP

2009-04-18 Thread Paul Vixie
stephen, any idea why this hasn't hit the nanog mailing list yet?
it's been hours, and things that others have sent on this thread
has appeared.  is it stuck in a mail queue? --paul

re:

 To: Deepak Jain dee...@ai.net
 cc: Matthew Moyle-Croft m...@internode.com.au,
 Arnold Nipper arn...@nipper.de, Paul Vixie vi...@isc.org,
 na...@merit.edu na...@merit.edu
 Subject: Re: IXP 
 Date: Sat, 18 Apr 2009 05:30:41 +
 From: Stephen Stuart stu...@tech.org
 
  Not sure how switches handle HOL blocking with QinQ traffic across trunks,
  but hey...
  what's the fun of running an IXP without testing some limits?
 
 Indeed. Those with longer memories will remember that I used to
 regularly apologize at NANOG meetings for the DEC Gigaswitch/FDDI
 head-of-line blocking that all Gigaswitch-based IXPs experienced when
 some critical mass of OC3 backbone circuits was reached and the 100
 MB/s fabric rolled over and died, offered here (again) as a cautionary
 tale for those who want to test those particular limits (again).
 
 At PAIX, when we upgraded to the Gigaswitch/FDDI (from a DELNI; we
 loved the DELNI), I actually used a feature of the switch that you
 could black out certain sections of the crossbar to prevent packets
 arriving on one port from exiting certain others at the request of
 some networks to align L2 connectivity with their peering
 agreements. It was fortunate that the scaling meltdown occurred when
 it did, otherwise I would have spent more software development
 resources trying to turn that capability into something that was
 operationally sustainable for networks to configure the visibility of
 their port to only those networks with which they had peering
 agreements. That software would probably have been thrown away with
 the Gigaswitches had it actually been developed, and rewritten to use
 something horrendous like MAC-based filtering, and if I recall
 correctly the options didn't look feasible at the time - and who wants
 to have to talk to a portal when doing a 2am emergency replacement of
 a linecard to change registered MAC addresses, anyway?. The port-based
 stuff had a chance of being operationally feasible.
 
 The notion of a partial pseudo-wire mesh, with a self-service portal
 to request/accept connections like the MAEs had for their ATM-based
 fabrics, follows pretty well from that and everything that's been
 learned by anyone about advancing the state of the art, and extends
 well to allow an IXP to have a distributed fabric benefit from
 scalable L2.5/L3 traffic management features while looking as much
 like wires to the networks using the IXP.
 
 If the gear currently deployed in IXP interconnection fabrics actually
 supports the necessary features, maybe someone will be brave enough to
 commit the software development resources necessary to try to make it
 an operational reality. If it requires capital investment, though, I
 suspect it'll be a while.
 
 The real lesson from the last fifteen or so years, though, is that
 bear skins and stone knives clearly have a long operational lifetime.
 
 Stephen



Re: IXP

2009-04-18 Thread Paul Vixie
 Date: Sat, 18 Apr 2009 10:09:00 +
 From: bmann...@vacation.karoshi.com
 
   ... well...  while there is a certain childlike obession with the
   byzantine, rube-goldburg, lots of bells, knobs, whistles type
   machines... for solid, predictable performance, simple clean
   machines work best.

like you i long for the days when a DELNI could do this job.  nobody
makes hubs anymore though.  but the above text juxtaposes poorly against
the below text:

 Date: Sat, 18 Apr 2009 16:35:51 +0100
 From: Nick Hilliard n...@foobar.org
 
 ... These days, we have switches which do multicast and broadcast storm
 control, unicast flood control, mac address counting, l2 and l3 acls,
 dynamic arp inspection, and they can all be configured to ignore bpdus in
 a variety of imaginative ways. We have arp sponges and broadcast
 monitors. ...

in terms of solid and predictable i would take per-peering VLANs with IP
addresses assigned by the peers themselves, over switches that do unicast
flood control or which are configured to ignore bpdu's in imaginative ways.

but either way it's not a DELNI any more.  what i see is inevitable
complexity and various different ways of layering that complexity in.  the
choice of per-peering VLANs represents a minimal response to the problems
of shared IXP fabrics, with maximal impedance matching to the PNI's that
inevitably follow successful shared-port peerings.




Re: IXP

2009-04-18 Thread Paul Vixie
 Date: Sat, 18 Apr 2009 16:35:51 +0100
 From: Nick Hilliard n...@foobar.org
 
 ... i just don't care if people use L2 connectivity to get to an exchange
 from a router somewhere else on their LAN. They have one mac address to
 play around with, and if they start leaking mac addresses towards the
 exchange fabric, all they're going to do is hose their own
 connectivity.

yeah we did that at PAIX.  if today's extremenetworks device has an option
to learn one MAC address per port and no more, it's because we had a
terrible time getting people to register their new MAC address when they'd
change out interface cards or routers.  hilarious levels of fingerpointing
and downtime later, our switch vendor added a knob for us.  but we still
saw typos in IP address configurations whereby someone could answer ARPs
for somebody else's IP.  when i left PAIX (the day MFN entered bankruptcy)
we were negotiating for more switch knobs to prevent accidental and/or
malicious ARP poisoning.  (and note, this was on top of a no-L2-devices
rule which included draconian auditing rights for L2/L3 capable hardware.)

 As you've noted, there is a natural progression for services providers
 here from shared access to pni, which advances according to the business
 and financial requirements of the parties involved. If exchange users
 decide to move from shared access peering to PNI, good for them - it
 means their business is doing well. But this doesn't mean that IXPs don't
 offer an important level of service to their constituents. Because of
 them, the isp industry has convenient access to dense interconnection at
 a pretty decent price.

yes, that's the progression of success.  and my way of designing for
success is to start people off with VNI's (two-port VLANs containing one
peering) so that when they move from shared-access to dedicated they're
just moving from a virtual wire to a physical wire without losing any of
the side-benefits they may have got from a shared-access peering fabric.

  Q in Q is not how i'd build this... cisco and juniper both have
  hardware tunnelling capabilities that support this stuff...  it just
  means as the IXP fabric grows it has to become router-based.
 
 Hey, I have an idea: you could take this plan and build a tunnel-based or
 even a native IP access IXP platform like this, extend it to multiple
 locations and then buy transit from a bunch of companies which would give
 you a native L3 based IXP with either client prefixes only or else an
 option for full DFZ connectivity over the exchange fabric.  You could
 even build a global IXP on this basis!  It's a brilliant idea, and I just
 can't imagine why no-one thought of it before.

:-).

i've been known to extend IXP fabrics to cover a metro, but never beyond.



Re: IXP

2009-04-18 Thread Paul Vixie
 Date: Sat, 18 Apr 2009 13:17:11 -0400
 From: Steven M. Bellovin s...@cs.columbia.edu
 
 On Sat, 18 Apr 2009 16:58:24 +
 bmann...@vacation.karoshi.com wrote:
 
  i make the claim that simple, clean design and execution is
  best. even the security goofs will agree.   

 Even?  *Especially* -- or they're not competent at doing security.

wouldn't a security person also know about

http://en.wikipedia.org/wiki/ARP_spoofing

and know that many colo facilities now use one customer per vlan due
to this concern?  (i remember florian weimer being surprised that we
didn't have such a policy on the ISC guest network.)

if we maximize for simplicity we get a DELNI.  oops that's not fast
enough we need a switch not a hub and it has to go 10Gbit/sec/port.
looks like we traded away some simplicity in order to reach our goals.



Re: IXP

2009-04-17 Thread Paul Vixie
 Large IXP have 300 customers. You would need up to 45k vlan tags,
 wouldn't you?

the 300-peer IXP's i've been associated with weren't quite full mesh
in terms of who actually wanted to peer with whom, so, no.



www.vix.com/personalcolo (Re: US west coast personal colo)

2009-04-17 Thread Paul Vixie
i just restored http://www.vix.com/personalcolo/ from backup.  last update
2007.  i guess this calls for another round of send me your updates, folks.

re:

Sean Donelan s...@donelan.com writes:

 Is anyone still doing personal colo on the west coast?  I'm looking for a
 new home for my personal server on the west coast, and it seems like
 the economy has taken out most of the old personal colo offers. Even the
 old web page on www.vix.com/personalcolo is gone.




-- 
Paul Vixie



Re: [OT] Re: Fiber cut in SF area

2009-04-11 Thread Paul Vixie
Christopher Morrow morrowc.li...@gmail.com writes:

 and I also would ask.. what's the cost/risk here? 'We' lost at best
 ~1day for some folks in the outage, nothing  global and nothing
 earth-shattering... This has happened (this sort of thing) 1 time in
 how many years? Expending $$ and time and people to go 'put padlocks
 on manhole covers' seems like spending in the wrong place...

as long as the west's ideological opponents want terror rather than panic,
and also to inflict long term losses rather than short term losses, that's
true.  in this light you can hopefully understand why bollards to protect
internet exchanges against truck bombs are not only penny wise pound foolish
(since the manholes a half mile away won't be hardened or monitored or even
locked) but also completely wrongheaded (since terrorists need publicity
which means they need their victims to be fully able to communicate.)
-- 
Paul Vixie



Re: ISC DLV

2009-04-05 Thread Paul Vixie
Paul Ferguson fergdawgs...@gmail.com writes:

 On Sat, Apr 4, 2009 at 9:55 PM, Marcelo Gardini do Amaral
 mgard...@gmail.com wrote:

 Guys,

 are you having problems to validate DNSEC using ISC DLV?


 No idea, but I did see another reference to this over on the OARC dns-ops
 list:

 https://lists.dns-oarc.net/pipermail/dns-operations/2009-April/003726.html

note, this isn't a ddos, so it's probably not related to the other dns ddos
events that have been discussed here recently.

see also geoff's reply on that thread:

Date: Sat, 04 Apr 2009 23:15:55 -0700
From: Geoffrey Sisson ge...@geoff.co.uk
To: dns-operati...@lists.dns-oarc.net
Subject: Re: [dns-operations] ISC DLV broken?
Sender: dns-operations-boun...@lists.dns-oarc.net

m...@ucla.edu (Michael Van Norman) wrote:

 Starting a bit after 18:00, my home machines starting failing DNSSEC
 validation using the ISC DLV.
...
 Are other people seeing this?

Yes, starting at around the same time (PDT).

peter_los...@isc.org (Peter Losher) wrote:

 ISC is aware that there is a issue with lookups against dlv.isc.org and
 are investigating the cause behind it.  You may want to disable DNSSEC
 validation against dlv.isc.org at this time.

It appears as if the RRSIG RRset returned by the DLV nameservers for
dlv.isc.org is missing the RRSIG for the KSK, so validation for
dlv.isc.org is failing.  It _does_ contain the RRSIG for the ZSK (key
id 64263).

As a test I tried changing the trusted key to the ZSK, and DLV validation
appeared to work correctly.  This is, of course, not a recommended
work-around.

Geoff
___
dns-operations mailing list
dns-operati...@lists.dns-oarc.net
https://lists.dns-oarc.net/mailman/listinfo/dns-operations
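The key-id detail in Geoff's note (ZSK key id 64263) comes from the RFC 4034 Appendix B key tag, which is how a validator matches an RRSIG to a DNSKEY: a simple 16-bit folded checksum over the DNSKEY RDATA. A minimal sketch of that computation, using synthetic RDATA bytes rather than ISC's actual keys:

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Compute the RFC 4034 Appendix B key tag over DNSKEY RDATA.

    Sum the RDATA as big-endian 16-bit words (an odd trailing byte
    acts as the high byte of a final word), fold the carries back
    in, and mask to 16 bits.
    """
    total = 0
    for i in range(0, len(rdata), 2):
        word = rdata[i] << 8
        if i + 1 < len(rdata):
            word |= rdata[i + 1]
        total += word
    total += (total >> 16) & 0xFFFF
    return total & 0xFFFF


# synthetic 4-byte RDATA: flags=0x0101, protocol=3, algorithm=8
example = bytes([0x01, 0x01, 0x03, 0x08])
print(dnskey_key_tag(example))  # 0x0101 + 0x0308 = 0x0409 = 1033
```

A validator that trusts only the KSK's tag will fail exactly as described above if the signed RRset omits the RRSIG made with that key, even though a ZSK-made RRSIG (key id 64263) is present.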



Re: ISC DLV

2009-04-05 Thread Paul Vixie
David Conrad d...@virtualized.org writes:

 ...  I'm sure the folks at ISC will attempt to minimize reoccurrence.

yes.  though with two outages in the last month, some early DLV adopters
might be getting a bit nervous.  as with DNSSEC itself when folks first
started turning it on a few years ago, the failure codepaths for DLV are
inevitably not as well oiled as the success codepaths.  (we're on it.)
-- 
Paul Vixie



Re: Global Blackhole Service

2009-02-14 Thread Paul Vixie
  where you lose me is where the attacker must always win.
 
 Do you have a miraculous way to stop DDOS? Is there now a way to quickly
 and efficiently track down forged packets? Is there a remedy to shutting
 down the *known* botnets, not to mention the unknown ones?

there are no silver bullets.  anyone who says otherwise is selling something.

 The attacker will always win if he has a large enough attack platform/...
 
 While all this is worked out, we have one solution we know works.

we had to destroy the village in order to save it.

 If we null route the victim IP, the traffic stops at the null route.
 Since most attackers don't care to DOS the ISP, but just to take care of
 that end point, they usually don't start shifting targets to try and keep
 the ISP itself out.

if you null route the victim IP, the victim is off the air, so the DDoS is
a success even though it mostly does not reach its target.  you're proposing
that we lower an attacker's costs.  in a war of economics that's bad juju,
and all wars are about economics.

there are no silver bullets.  isp's who permit random source addresses on
packets leaving their networks are creating a global hazard, and since they
are defending their practices on the basis of thin profit margins it's right
to call this the chemical polluter business model.  as long as the rest of
us continue to peer with these chemical polluters, then anyone on the
internet can be the victim of a devastating DDoS at any time and at low cost.

that's not a silver bullet however.  if most ISP's controlled their source
addresses there would still be DDoS's and then the new problem would be lack
of real-time cooperation along the lines of hi i'm in the XYZ NOC and we're
tracking a DDoS against one of our customers and 14% of it is coming from
your address space, here's the summary of timestamp-ip-volume and here's a
pointer to your share of the netflows, can you remediate?  the answer will
start out just like today's BCP38 answer, no we can't afford the staff or
technology to do that, and then lawyers would worry about liability, and we'd
all have to worry about monopolies, censorship, social engineering, and so on.

in all of these cases the problem is the margins themselves.  just as the full
cost of a fast food cheeseburger is probably about $20 if you count all the
costs that the corporations are shifting onto society, so it is that the full
cost of a 3MBit/sec DSL line is probably $300/month if you count all the costs
that ISPs shift onto digital society.  the usual argument goes (and i'm just
putting it out here to save time, though i'm betting several respondents will
not read closely and so will just spew this out as though it's their original
idea and as though i had not dismissed it many times over the decades): we
cannot build a digital economy without cost shifting since no one would pay
what it really costs during the ramp-up.  i don't dignify that with a reply,
either here in effigy, or if anyone happens to trot it out again.



Re: Global Blackhole Service

2009-02-14 Thread Paul Vixie
a minor editorial comment:

Jens Ott - PlusServer AG j@plusserver.de writes:

 Jack Bates schrieb:
 Paul Vixie wrote:
 
 Do you have a miraculous way to stop DDOS? Is there now a way to quickly
 and efficiently track down forged packets? Is there a remedy to shutting
 down the *known* botnets, not to mention the unknown ones?

the quoted text was written by jack bates, not paul vixie.
-- 
Paul Vixie



Re: Global Blackhole Service

2009-02-13 Thread Paul Vixie
blackholing victims is an interesting economics proposition.  you're saying
the attacker must always win but that they must not be allowed to affect the
infrastructure.  and you're saying victims will request this, since they know
they can't withstand the attack and don't want to be held responsible for
damage to the infrastructure.

where you lose me is where the attacker must always win.



Re: v6 DSL / Cable modems

2009-02-05 Thread Paul Vixie
Ricky Beam jfb...@gmail.com writes:

 ... In the mid-80's, /8's were handed out like candy because there were
 lots of address space and we'll never use it all. ...

ahem.  allow me to introduce myself.  i was alive and actively using the
internet in the mid-80's, and when we got our /8 it was justified very
differently than what you said.  we had field offices in 100 countries and
we had 130,000 employees and our internal network spanned five continents.
(we thought long and hard about netmasks before we started rolling it out.)

it was not true in Digital Equipment Corporation's (DEC's) case that a /8
was handed out like candy or that the justification was anything like
lots of address space or we'll never use it all.

 IPv6 was designed to not need DHCP.  DHCPv6 has come about since people
 need more than just an address from autoconfiguration.

IPv6 promised a lot of things, like no-forklift insertion of IPv6 into the
existing IPv4 network, and some hosts, such as printers, might need never
be upgraded.  a lot of those promises were trash, just stuff that folks had
to say to get through whatever they were getting through.  as much as i'd
like a time machine to go back and whisper yo, dude, that's *so* not gonna
happen in some ears, what matters to us now is not what IPv6 was promised
to be or even what it could have been but instead: what it could now become.

 I can recall many posts over the years from the IPng WG telling people
 they didn't need DHCP.

some people drink their own kool-aid.  advice: get better at ignoring them.

i dislike the compromises and mistakes other people will make when faced
with NAT, and i don't want to live in a world dominated by products and
services containing those compromises or those mistakes.  i want end-to-end
so i can stop budgeting half a day for each VoIP phone i send home with an
employee.  i don't want to remap addresses mid-path because i just know that
the best programmers are the lazy ones and they WILL encode endpoint IP addrs
in their sessions no matter what we tell them or how much it hurts us all.

IPv6 coulda been and shoulda been lots of better things than we're getting,
but due to circumstances beyond our present control, it's what we've got to
work with, and it could still avoid a lot of problems whose alternative
costs could be higher (NAT, double NAT, triple NAT, IPv4 markets, IPv4
black markets, IPv4 route piracy, explosive deaggregation, to name some).

the most fundamental re-think required to wrap a brain around IPv6 compared
to IPv4 is that we will never run out of addresses again unless someone
(ignorantly) assigns a /125 to a LAN and needs more than 7 hosts thereon,
or something similar.  that part of IPv4's dark past will not follow us to
IPv6 and we can stop thinking all related or derivative thoughts, for IPv6.
but, and this matters so please pay attention, IPv6 does nothing to solve
the routing table problem that IPv4 has had since 1995 or so, and IPv6 can
amplify this part of IPv4's dark past and make it much worse since there can
be so many more attached devices.
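the /125 arithmetic above can be checked directly; a quick sketch using Python's standard ipaddress module, with the 2001:db8::/32 documentation prefix standing in for a real allocation:

```python
import ipaddress

lan = ipaddress.IPv6Network("2001:db8::/125")
print(lan.num_addresses)    # 8 addresses in a /125

# excluding the all-zeros subnet-router anycast address leaves 7
# usable hosts, hence "more than 7 hosts" exhausts the subnet
usable = lan.num_addresses - 1
print(usable)               # 7

# a standard /64 LAN, by contrast, cannot run out in practice
lan64 = ipaddress.IPv6Network("2001:db8::/64")
print(lan64.num_addresses)  # 2**64 addresses
```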

the fundamental implication is, forget about address space, it's paperwork
now, it's off the table as a negotiating item or any kind of constraint.
but the size of the routing table is still a bogeyman, and IPv6 arms that
bogeyman with nukes.
-- 
Paul Vixie



Re: DNS Amplification attack?

2009-01-21 Thread Paul Vixie
Mark Andrews mark_andr...@isc.org writes:

   Authoritative servers need a cache.  Authoritative servers
   need to ask queries.  The DNS protocol has evolved since
   RFC 1034 and RFC 1035 and authoritative servers need to
   translate named to addresses for their own use.

   See RFC 1996, A Mechanism for Prompt Notification of Zone
   Changes (DNS NOTIFY).

if i had RFC 1996 to do over again i would either limit outbound notifies
to in-zone servernames, or recommend that primary server operators
configure stealth slaves for servername-containing zones, or (most likely)
i would point out that the need to look up secondary servernames requires
that an authority-only nameserver be able to act as a stub resolver and
that such a server must have access to an independent recursive nameserver.

it's not too late to implement it that way.  no authority-only server
should need a cache of any kind.  the above text from marka represents
a BIND implementation detail, not a protocol requirement, evolved or not.

   The real fix is to get BCP 38 deployed.  Reflection
   amplification attacks can be effective if BCP 38 measures
   have not been deployed.  Go chase down the offending
   sources.  BCP 38 is nearly 10 years old.

my agreement with this statement is tempered by the fact that BCP38
deployment cannot be continuously assured, nor tested.  therefore we will
need protocols, implementations, and operational practices that take
account of packet source address spoofing as an enduring property of the
internet.
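the BCP 38 check itself is simple: an edge router forwards a customer packet only if its source address falls inside the prefixes assigned to that customer port. a toy sketch of that ingress filter (the prefixes are invented for illustration):

```python
import ipaddress

# prefixes assigned to a hypothetical customer port
assigned = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def bcp38_permit(src: str) -> bool:
    """Permit a packet only if its source lies in an assigned prefix."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in assigned)

print(bcp38_permit("192.0.2.7"))     # True: legitimate source, forwarded
print(bcp38_permit("203.0.113.9"))   # False: spoofed source, dropped
```

the hard part, as the thread notes, is not the check but getting every edge network to deploy it, and verifying from the outside that they have.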

   We all should be taking this as a opportunity to find where
   the leaks are in the BCP 38 deployment and correct them.

   Mark

yea, verily.  and maybe track down rfc1918-sourced spew while you're at it.
-- 
Paul Vixie



DNSSEC vs. X509 (Re: Security team successfully cracks SSL...)

2009-01-05 Thread Paul Vixie
Joe Abley jab...@hopcount.ca writes:

 On 2009-01-05, at 15:18, Jason Uhlenkott wrote:

 If we had DNSSEC, we could do away with SSL CAs entirely.  The owner
 of each domain or host could publish a self-signed cert in a TXT RR,

 ... or even in a CERT RR, as I heard various clever people talking about
 in some virtual hallway the other day.
 http://www.isi.edu/in-notes/rfc2538.txt.

i wasn't clever but i was in that hallway.  it's more complicated than
RFC 2538, but there does seem to be a way forward involving SSL/TLS (to
get channel encryption) but where a self-signed key could be verified
using a CERT RR (to get endpoint identity authentication).  the attacks
recently have been against MD5 (used by some X.509 CA's) and against an
X.509 CA's identity verification methods (used at certificate granting
time).  no recent attack has shaken my confidence in SSL/TLS negotiation
or encryption, but frankly i'm a little worried about nondeployability
of X.509 now that i see what the CA's are doing operationally when they
start to feel margin pressure and need to keep volume up + costs down.

i don't have a specific proposal.  (yet.)  but i'm investigating, and i
recommend others do likewise.
-- 
Paul Vixie



Re: Sprint v. Cogent, some clarity facts

2008-11-05 Thread Paul Vixie
(note: i don't think sprint or cogent is being evil in this situation.)

[EMAIL PROTECTED] writes:

 Has anyone heard of a backup route? With a longer path so it is never
 used unless there is a real emergency? Why was there no backup route
 available to carry the Sprint - Cogent traffic? Because there was a
 political failure in both Sprint and Cogent.

what you're calling a political failure could be what others call a rate
war.  i'd imagine that cogent's cost structure is lower than most networks'
(since their published prices are so low and since they're not pumping
new money in from the outside and since they have no non-IP business they
could be using to support dumping).  if cogent's rates are also lowest
then the other large networks might be losing customers toward cogent and
those other large networks might feel they are hurting their own cause by
peering, settlement free, with this new-economy competitor.  if that's
the case then the political failure you describe might be a matter of
cogent saying we don't want our prices to our customers to reflect the
capital inefficiencies of other networks and those other networks saying
we do.  note, i'm not on the inside and i'm working only from public
knowledge here and so this is all supposition.  but calling it political
failure when these possibilities exist seems like a stretch.  there'd be
no other leverage whereby cogent could protect the price point its
customers seem to like so well.  i'm not saying a cogent customer will be
glad to trade some instability to get those prices, but i am saying that
if this long chain of guesses is accurate it likely also represents the
ONLY way to drive efficiency in a competitive capital-intensive market.

 Back in 2000 it was acceptable for the big New York banks to have all
 their eggs in one basket in central Manhattan. In 2002, it was no longer
 acceptable.  Do we really need a 911 magnitude of disaster on the
 Internet for people to wake up and smell the coffee? The Internet is no
 longer a kewl tool built and operated by the cognoscenti to meet their
 own interests. It is now part of every nation's and everybody's critical
 infrastructure. It needs to be engineered and operated better so that it
 does not end up partitioning for dumb reasons.

that sounds like justification for government regulation, if true.
-- 
Paul Vixie



Re: Sprint / Cogent dispute over?

2008-11-03 Thread Paul Vixie
Daniel Senie [EMAIL PROTECTED] writes:

 At 06:54 PM 11/2/2008, Daniel Roesen wrote:
 https://www.sprint.net/cogent.php

 ...
 
 Also in this document is a complaint that Cogent failed to disconnect.
 Excuse me?  This was a trial PEERING agreement.  That implies one or a
 series of point-to-point connections.  That implies EITHER party can
 disconnect the circuits (in reality, the physical circuit doesn't even
 matter, just shut down the BGP session(s)).

 ...

Not having read the contract in question, my assumption when I read Sprint's
account of their depeering of Cogent was that the trial peering contract says
Sprint will notify Cogent of its qualification status after 90 days; if in
Sprint's estimation Cogent does not qualify, and Sprint notifies Cogent of
that fact, then Cogent will either disconnect or start paying.  Sprint's
document's wording is careful even if their TITLE is not.  If they are
involved in litigation with Cogent then actual lawyers would have seen that
text (if not necessarily the TITLE) before it went out.  The heart of the
lawsuit might be whether Cogent did or didn't implicitly agree to pay, as
signalled by their lack of disconnection after their 90 day notice.  None of
us who aren't parties to the dispute can do other than wonder, ponder, guess.
-- 
Paul Vixie


