Re: IPv6 Multi-homing (was IPv6 /64 links)

2012-06-26 Thread Douglas Otis
On 6/25/12 10:33 PM, Mikael Abrahamsson wrote:
 On Mon, 25 Jun 2012, Cameron Byrne wrote:
 
 SCTP is coming along, and it has a lot of promise.
 
 Doesn't SCTP suffer from the same problem as SHIM6 was said to be
 suffering from, ie that now all of a sudden end systems control where
 packets go and there is going to be a bunch of people on this list
 complaining that they no longer can do traffic engineering?

Dear Mikael,

SCTP permits hosts to be supported by multiple providers where instant
fail-over is needed.  When DNS returns multiple IP addresses, an
application calls sctp_connectx() with this list combined into an
association endpoint belonging to a single host.  This eliminates the
need for PI addresses and related router table growth when high
availability service becomes popular.
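
As an illustration of that call pattern, a client can hand the entire DNS
answer set to sctp_connectx() in one step.  A minimal sketch, assuming a
Linux host with the lksctp headers and RFC 6458 style calls; names and
error handling are simplified, and mixing v4 and v6 answers would require
an AF_INET6 socket:

#include <netinet/sctp.h>
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open one SCTP association using every address DNS returned for the peer. */
int connect_all(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;     /* note: mixing v4/v6 needs AF_INET6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_SCTP;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    /* sctp_connectx() wants the sockaddrs packed back to back. */
    char packed[16 * sizeof(struct sockaddr_storage)];
    size_t used = 0;
    int count = 0;
    for (ai = res; ai && used + ai->ai_addrlen <= sizeof packed; ai = ai->ai_next) {
        memcpy(packed + used, ai->ai_addr, ai->ai_addrlen);
        used += ai->ai_addrlen;
        count++;
    }

    int sd = socket(res->ai_family, SOCK_STREAM, IPPROTO_SCTP);
    freeaddrinfo(res);
    if (sd < 0)
        return -1;

    /* One association; the remaining addresses become fail-over paths. */
    sctp_assoc_t assoc;
    if (sctp_connectx(sd, (struct sockaddr *)packed, count, &assoc) < 0) {
        close(sd);
        return -1;
    }
    return sd;
}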

Unlike multi-homing implemented at the router, SCTP fail-over does not
require 20 second delays, nor will fail-over cause a sizable shift in
traffic that might introduce other instabilities.  Although not all
details related to multi-homing remain hidden, SCTP offers several
significant advantages related to performance and reliability.

SCTP can isolate applications while using fewer ports.  Unlike TCP, SCTP
can combine thousands of independent streams into a single association
and port.  SCTP offers faster setup and can eliminate head-of-queue
blocking and the associated buffering.  SCTP also compensates for
reduced Ethernet error detection rates when jumbo frames are used.
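
To make the multi-stream point concrete, each logical flow can be given
its own stream number within one association, so a lost packet on one
stream does not stall the others.  A hedged sketch using the
sctp_sendmsg() convenience call from RFC 6458; sd is assumed to be an
already connected SCTP socket and the stream numbers are arbitrary:

#include <netinet/sctp.h>
#include <stdint.h>
#include <string.h>

/* Send one message on a chosen stream of an existing association.  Streams
 * are independent, so retransmission on one stream does not block the rest. */
int send_on_stream(int sd, uint16_t stream, const char *msg)
{
    return sctp_sendmsg(sd, msg, strlen(msg),
                        NULL, 0,   /* use the association's primary peer */
                        0,         /* payload protocol id */
                        0,         /* flags */
                        stream,    /* stream number within the association */
                        0,         /* time to live */
                        0);        /* context */
}

Calling send_on_stream(sd, 7, ...) and send_on_stream(sd, 8, ...) shares
a single port and association while keeping the two flows independent.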

Providers able to control multiple routers will likely prefer router
based methods.  A router approach will not always offer a superior
solution nor will it limit router table growth, but traffic engineering
should remain feasible when SCTP is used instead.

 I don't mind. I wish more would use SCTP so it would get wider use. I
 also wish http://mosh.mit.edu/ would have used SCTP instead of trying
 to invent that part again (the transport part of it at least).

Perhaps MIT could have implemented SCTP over UDP as a starting point.
An adoption impediment has been desktop OS vendors.  This may change
once SCTP's advantages become increasingly apparent with the rise of
data rates and desires for greater resiliency and security.

Regards,
Douglas Otis


IPv6 Multi-homing (was IPv6 /64 links)

2012-06-25 Thread Douglas Otis
On 6/25/12 7:54 AM, Owen DeLong wrote:
 It would have been better if IETF had actually solved this instead 
 of punting on it when developing IPv6.

Dear Owen,

The IETF offered an HA solution that operates at the transport level.  It
solves jumbo frame error detection rate issues, head of queue
blocking, instant fail-over, better supports high data rates with
lower overhead, offers multi-homing transparently across
multiple providers, offers fast setup and anti-packet source spoofing.
The transport is SCTP, used by every cellular tower and for
media distribution.

This transport's improved error detection is now supported in hardware
by current network adapters and processors.  Conversely, TCP suffers
from high undetected stuck bit errors, head of queue blocking, complex
multi-homing, slow setup, high process overhead and is prone to source
spoofing.  It seems OS vendors rather than the IETF hampered progress in
this area.  Why put a band-aid on a solved problem?

Regards,
Douglas Otis




Re: IPv6 Multi-homing (was IPv6 /64 links)

2012-06-25 Thread Douglas Otis
On 6/25/12 10:17 AM, Christopher Morrow wrote:
 On Mon, Jun 25, 2012 at 1:09 PM, Douglas Otis
 do...@mail-abuse.org wrote:
 On 6/25/12 7:54 AM, Owen DeLong wrote:
 It would have been better if IETF had actually solved this
 instead of punting on it when developing IPv6.
 
 Dear Owen,
 
 The IETF offered a HA solution that operates at the transport
 level.  It solves jumbo frame error detection rate issues, head
 of queue blocking, instant fail-over, better supports high data
 rates with lower overhead, offers multi-homing transparently
 across multiple providers, offers fast setup and anti-packet
 source spoofing. The transport is SCTP, used by every cellular
 tower and for media distribution.
 
 This transport's improved error detection is now supported in
 hardware by current network adapters and processors.  Conversely,
 TCP suffers from high undetected stuck bit errors, head of queue
 blocking, complex multi-homing, slow setup, high process overhead
 and is prone to source spoofing.  It seems OS vendors rather than
 the IETF hampered progress in this area.  Why band-aid on a
 solved problem?
 
 can I use sctp to do the facebooks?

Dear Christopher,

Not now, but you could.  SCTP permits faster page loads and more
efficient use of bandwidth.  OS vendors could embrace SCTP to achieve
safer, faster networks that are also better able to scale.  Instead,
vendors are hacking HTTP to provide experimental protocols like SPDY,
which requires extensions like:

http://tools.ietf.org/search/draft-agl-tls-nextprotoneg-00

The Internet should use more than port 80 and port 443.  Is extending
entrenched TCP cruft really taking the Internet to a better and safer
place?

Regards,
Douglas Otis



Re: IPv6 Multi-homing (was IPv6 /64 links)

2012-06-25 Thread Douglas Otis
On 6/25/12 12:20 PM, William Herrin wrote:
 On Mon, Jun 25, 2012 at 1:09 PM, Douglas Otis
 do...@mail-abuse.org wrote:
 On 6/25/12 7:54 AM, Owen DeLong wrote:
 It would have been better if IETF had actually solved this
 instead of punting on it when developing IPv6.
 
 The IETF offered a HA solution that operates at the transport
 level. The transport is SCTP
 
 Hi Douglas,
 
 SCTP proposes a solution to multihoming by multi-addressing each 
 server. Each address represents one of the leaf node's paths to
 the Internet and if one fails an SCTP session can switch to the
 other. Correct?

Dear William,

Yes. An SCTP association periodically checks alternate path
functionality.
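
That checking uses SCTP HEARTBEAT chunks, and RFC 6458 exposes the knobs
through the SCTP_PEER_ADDR_PARAMS socket option.  A rough sketch,
assuming an lksctp environment; the 5 second interval and threshold of 3
are arbitrary illustrations:

#include <netinet/sctp.h>
#include <string.h>
#include <sys/socket.h>

/* Probe the peer's addresses every 5 seconds; declare a path failed after
 * three unanswered heartbeats so traffic moves to an alternate address. */
int tune_path_monitoring(int sd, sctp_assoc_t assoc)
{
    struct sctp_paddrparams pp;
    memset(&pp, 0, sizeof pp);
    pp.spp_assoc_id   = assoc;        /* leave spp_address zeroed: all paths */
    pp.spp_hbinterval = 5000;         /* heartbeat interval in milliseconds */
    pp.spp_pathmaxrxt = 3;            /* retransmissions before fail-over */
    pp.spp_flags      = SPP_HB_ENABLE;

    return setsockopt(sd, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS, &pp, sizeof pp);
}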

 How does SCTP address the most immediate problem with
 multiaddressed TCP servers: the client doesn't rapidly find a
 currently working address from the set initially offered by A and
 AAAA DNS records. Is there anything in the SCTP protocol for this?
 Or does it handle it exactly the way TCP does (nothing at all in
 the API; app-controlled timeout and round robin)?

This is addressed by deprecating use of TCP, since SCTP offers a
super-set of the socket API.  It can also dramatically expand the
number of virtual associations supported in a manner similar to that
of UDP while still mitigating source spoofing.

 Is the SCTP API drop-in compatible with TCP where a client can
 change a parameter in a socket() call and expect it to try SCTP and
 promptly fall back to TCP if no connection establishes? On the
 server side, does it work like the IPv6 API where one socket
 accepts both protocols? Or do the apps have to be redesigned to
 handle both SCTP and TCP?

The SCTP socket API is defined by:
http://tools.ietf.org/html/rfc6458
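
It is not a drop-in change to an existing TCP socket() call, but the
one-to-many style defined there lets a single SOCK_SEQPACKET socket serve
many associations much as one UDP socket serves many peers.  A minimal
server sketch, assuming lksctp on Linux, an arbitrary port, and no error
handling:

#include <netinet/sctp.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    /* One-to-many style: a single socket carries many associations. */
    int sd = socket(AF_INET6, SOCK_SEQPACKET, IPPROTO_SCTP);

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof addr);
    addr.sin6_family = AF_INET6;
    addr.sin6_port   = htons(5001);      /* arbitrary port for illustration */
    addr.sin6_addr   = in6addr_any;

    bind(sd, (struct sockaddr *)&addr, sizeof addr);
    listen(sd, 128);                     /* permit new associations */

    for (;;) {
        char buf[2048];
        struct sctp_sndrcvinfo info;
        int flags = 0;

        /* Each message arrives tagged with its association and stream. */
        int n = sctp_recvmsg(sd, buf, sizeof buf, NULL, NULL, &info, &flags);
        if (n <= 0)
            continue;
        printf("assoc %d stream %u: %d bytes\n",
               (int)info.sinfo_assoc_id, info.sinfo_stream, n);
    }
}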

As the world adopts IPv6, NAT issues become a bad memory of insecure
middle boxes, replaced by transports that can be as robust as necessary.
IMHO, TCP is the impediment preventing simple (hardware-based)
high-speed interfaces able to avoid buffer bloat.

Regards,
Douglas Otis


Re: Most energy efficient (home) setup

2012-04-19 Thread Douglas Otis

On 4/18/12 8:09 PM, Steven Bellovin wrote:


 On Apr 18, 2012, at 5:55:32 PM, Douglas Otis wrote:
 Dear Jeroen,

 In the work that led up to RFC3309, many of the errors found on the
 Internet pertained to single interface bits, and not single data
 bits. Working at a large chip manufacturer that removed internal
 memory error detection to foolishly save space, cost them dearly in
 then needing to do far more exhaustive four corner testing.
 Checksums used by TCP and UDP are able to detect single bit data
 errors, but may miss as much as 2% of single interface bit errors.
 It would be surprising to find memory designs lacking internal
 error detection logic.

 mallet:~ smb$ head -14 doc/ietf/rfc/rfc3309.txt | sed 1,7d | sed 2,5d; date
 Request for Comments: 3309
 Stanford September 2002

 Wed Apr 18 23:07:53 EDT 2012

 We are not in a static field... (3309 is one of my favorite RFCs --
 but the specific findings (errors happen more often than you think),
 as opposed to the general lesson (understand your threat model), may be
 OBE.)

Dear Steve,

You may be right.  However, back then most were only considering random
single bit errors.  Although there was plentiful evidence for where
errors might be occurring, it seems many worked hard to ignore the
clues.


Reminiscent of a drunk searching for keys dropped in the dark under a
light post, the mathematics for random single bit errors offers easier
calculations and simpler solutions.  While there are indeed fewer
parallel buses today, these structures still exist in memory modules and
other networking components.  Manufacturers confront increasingly
temperamental bit storage elements, where most include internal error
correction to minimize manufacturing and testing costs.  Error sources
are not easily ascertained with simple checksums when errors are not random.
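
To illustrate the non-random failure mode, the 16-bit one's-complement
sum used by TCP and UDP is order independent, so a bus fault that swaps
two 16-bit words leaves the checksum unchanged.  A small self-contained
sketch with made-up data:

#include <stdint.h>
#include <stdio.h>

/* RFC 1071 style 16-bit one's-complement checksum. */
static uint16_t inet_checksum(const uint16_t *words, int count)
{
    uint32_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += words[i];
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);   /* fold carries */
    return (uint16_t)~sum;
}

int main(void)
{
    uint16_t a[4] = { 0x1234, 0xabcd, 0x0f0f, 0x5555 };
    uint16_t b[4] = { 0xabcd, 0x1234, 0x0f0f, 0x5555 };  /* words 0 and 1 swapped */

    printf("original: 0x%04x  corrupted: 0x%04x\n",
           inet_checksum(a, 4), inet_checksum(b, 4));
    /* Both print the same value: the swap is invisible to the checksum. */
    return 0;
}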


Regards,
Douglas Otis



Re: Most energy efficient (home) setup

2012-04-18 Thread Douglas Otis

On 4/18/12 12:35 PM, Jeroen van Aart wrote:

 Laurent GUERBY wrote:
 Do you have reference to recent papers with experimental data about
 non ECC memory errors? It should be fairly easy to do
 Maybe this provides some information:

 http://en.wikipedia.org/wiki/ECC_memory#Problem_background

 Work published between 2007 and 2009 showed widely varying error
 rates with over 7 orders of magnitude difference, ranging from
 10^-10 to 10^-17 error/bit·h, roughly one bit error, per hour, per
 gigabyte of memory to one bit error, per century, per gigabyte of
 memory.[2][4][5] A very large-scale study based on Google's very
 large number of servers was presented at the
 SIGMETRICS/Performance’09 conference.[4] The actual error rate found
 was several orders of magnitude higher than previous small-scale or
 laboratory studies, with 25,000 to 70,000 errors per billion device
 hours per megabit (about 3–10 × 10^-9 error/bit·h), and more than 8% of
 DIMM memory modules affected by errors per year.

Dear Jeroen,

In the work that led up to RFC3309, many of the errors found on the
Internet pertained to single interface bits, and not single data bits.
While I was working at a large chip manufacturer, removing internal
memory error detection to foolishly save space cost them dearly, as far
more exhaustive four corner testing was then needed.  Checksums used by
TCP and UDP are able to detect single bit data errors, but may miss as
much as 2% of single interface bit errors.  It would be surprising to
find memory designs lacking internal error detection logic.


Regards,
Douglas Otis




Re: using ULA for 'hidden' v6 devices?

2012-01-26 Thread Douglas Otis

On 1/26/12 7:35 AM, Cameron Byrne wrote:

 1. You don't want to disclose what addresses you are using on your
 internal network, including to the rir

 2. You require or desire an address plan that your rir may consider
 wasteful.

 3. You don't want to talk to an rir for a variety of personal or
 business process reasons

 4. When troubleshooting both with network engineers familiar with
 the network as well as tac engineers, seeing the network for the
 first time, ula sticks out like a sore thumb and can lead to some
 meaningful and clarifying discussions about the devices and flows.

 5. Routes and packets leak. Filtering at the perimeter? Which
 perimeter? Mistakes happen. Ula provides a reasonable assumption that
 the ISP will not route the leaked packets. It is one of many possible
 layers of security and fail-safes.

 Cb

Dear Cameron,

For a reference to something taking advantage of ULAs per RFC4193, see:
http://tools.ietf.org/html/rfc6281#page-11

Regards,
Doug Otis





Re: Outgoing SMTP Servers

2011-10-25 Thread Douglas Otis

On 10/25/11 12:31 PM, Ricky Beam wrote:

 On Tue, 25 Oct 2011 12:55:58 -0400, Owen DeLong o...@delong.com
 wrote:
 Wouldn't the right place for that form of rejection to occur be at
 the mail server in question?



 In a perfect world, yes. When you find a perfect world, send us an
 invite.



 I reject lots of residential connections...

 The real issue here is *KNOWING* who is residential or not. Only
 the ISP knows for sure; and they rarely tell others. The various
 blocklists are merely guessing. Using a rDNS name is an even worse
 guess.


Agreed.  Don't expect a comprehensive list based upon rDNS containing
specific host names with IPv6; collecting one would be a never-ending
process.



 However, senders who authenticate legitimately or legitimate
 sources of email (and yes, some spam sources too) connect just
 fine.



 Authenticated sources can be traced and shutoff. If a random
 cablemodem user has some bot spewing spam, the only way to cut off
 the spam is to either (gee) block outbound port 25, or turn their
 connection off entirely. As a responsible admin, I'll take the
 least disruptive path. (I'll even preemptively do so.)


Blocking ports is not free, but don't expect all DSL providers to 
unblock port 25 unless it is for a business account.  Price 
differentials help pay for port blocking.


In a perfect world, all SMTP transactions would cryptographically
authenticate the managing domains for the MTA.  With less effort and
fewer resources than needed to check block lists, IPv6 could continue to
work through LSNs aimed at helping those refusing to offer IPv6
connectivity.  Blocking at the prefix requires block list resources 65k
times greater than what is currently needed for IPv4.  IPv6
announcements seem likely to expand another six-fold fairly soon as well.


In comparison, cryptographic authentication would be more practical, but 
a hybrid Kerberos scheme supported by various third-party service 
providers could reduce the overhead.  Is it time for AuthenticatedMTP?


-Doug





Re: Steve Jobs has died

2011-10-11 Thread Douglas Otis

On 10/6/11 7:26 PM, Paul Graydon wrote:

On 10/6/2011 4:02 PM, Wayne E Bouchard wrote:

In some circles, he's being compared to Thomas Edison. Apply your own
opinion there whether you feel that's accurate or not. I'll just state
this: Both men were passionate about what they did. They each changed
the world and left it better than they found it.
It's probably not a bad analogy, like Ford and many other champions of 
industry he didn't invent groundbreaking technology (Edison's only 
invention was the phonograph IIRC, all else was improvements on 
existing technology).  They took what was already in existence and did 
something amazing with it: made it accessible, be it through price, 
ease of use or whatever.

Steve demonstrated any number of times that when excellent hardware +
software engineering + quality control is applied, even commodity
products are able to provide good returns.  In this view, the analogy
holds when price alone is not considered.


-Doug



Re: NAT444 or ?

2011-09-02 Thread Douglas Otis

On 9/1/11 11:52 AM, Cameron Byrne wrote:

On Thu, Sep 1, 2011 at 11:36 AM, Serge Vautour sergevaut...@yahoo.ca wrote:

Hello,

Things I understand: IPv6 is the long term solution to IPv4 exhaustion. For IPv6 to 
work correctly, most of the IPv4 content has to be on IPv6. That's not there yet. 
IPv6 deployment to end users is not trivial (end user support, CPE support, etc...). 
Translation techniques are generally evil. IPv6-IPv4 still requires 1 IPv4 IP per 
end user or else you're doing NAT. IPv4-IPv6 (1-1) doesn't solve our main problem 
of giving users access to the IPv4 Internet.

Correct, all content is not there yet... but World IPv6 Day showed
that Google, Facebook, Yahoo, Microsoft and 400+ others are just about
ready to go.

http://en.wikipedia.org/wiki/World_IPv6_Day

IPv6-IPv4 does not require 1 to 1,  any protocol translation is a
form of NATish things, and stateful NAT64 has many desirable
properties IF you already do NAT44.  Specifically, it is nice that
IPv6 flows bypass the NAT  and as more content becomes  IPv6, NAT
becomes less and less used.  In this way, unlike NAT44 or NAT444,
NAT64 has an exit strategy that ends with proper E2E networking with
IPv6... the technology and economic incentives push the right way
(more IPv6...)

Have a look at http://tools.ietf.org/html/rfc6146

There are multiple opensource and big vendor (C, J, B, LB guys...)
implementation of NAT64 / DNS64 ... I have trialed it and plan to
deploy it, YMMV... It works great for web and email, not so great for
gaming and Skype.

http://tools.ietf.org/html/rfc6333
http://tools.ietf.org/html/draft-bpw-pcp-nat-pmp-interworking-00
moves CPE NAT to the ISP tunneled over 192.0.0.0/29.

Has anyone deployed NAT444? Can folks share their experiences? Does it really 
break this many apps? What other options do we have?

Yes, expect it to be deployed in places where the access gear can only
do IPv4 and there is no money or technology available to bring in
IPv6.

A false economy when support costs outweigh CPE cost.

-Doug



Re: OSPF vs IS-IS

2011-08-12 Thread Douglas Otis

On 8/12/11 8:29 AM, Jeff Wheeler wrote:

I thought I'd chime in from my perspective, being the head router
jockey for a bunch of relatively small networks.  I still find that
many routers have support for OSPF but not IS-IS.  That, plus the fact
that most of these networks were based on OSPF before I took charge of
them, in the absence of a compelling reason to change to another IGP,
keeps me from taking advantage of IS-IS.  I'd like to, but not so
badly that I am willing to work around those routers without IS-IS, or
weight that feature more heavily when purchasing new equipment.

There are many routers with OSPF but no IS-IS.  I haven't seen any
with IS-IS but no OSPF.  I don't think such router would be very
marketable to most non-SP networks.

TRILL uses IS-IS.  It seems IS-IS may play a role beyond the router.
http://tools.ietf.org/html/rfc6326

-Doug




Re: Why does abuse handling take so long ?

2011-03-14 Thread Douglas Otis

On 3/14/11 9:11 AM, William Allen Simpson wrote:

On 3/13/11 9:35 PM, goe...@anime.net wrote:
the real cesspool is POC registries. i wish arin would start revoking 
allocations for entities with invalid POCs.



Hear, hear!

Leo's remembering the old days (80s - early '90s), when we checked whois
and called each others' NOCs directly.  That stopped working, and we
started getting front line support, whose whole purpose was to filter.
Nowadays, I've often been stuck in voice prompt or voice mail hell,
unable to get anybody on the phone, and cannot get any response from
email, either.  Ever.  The big ILECs are the worst.

What we need is an abuse for ARIN, telling them the contacts don't work
properly, which ARIN could verify, revoke the allocation, and send
notice to the upstream telling them to withdraw the route immediately.

Force them to go through the entire allocation process from the
beginning, and always assign a new block.  That might make them take
notice!  And shrink the routing table!  Win, win!

Since we'd only send notification to ARIN about an actual problem, we'd
only drop the real troublemakers.  To help enforce that, ARIN would also
verify the reporter's contacts. :-)

Distributing lists of abusive IP addresses within IPv6 is not likely
sustainable, nor would be authenticating network reporters and actors.
Filtering routes could be more manageable, and would leave dealing with
compromised systems within popular networks.  Calling for abuse
management by ISPs might be an effective approach when structured not to
conflict with maximizing profits.  A Carbon Tax for abuse imposed by a
governing organization to support an Internet remediation fund? :^)


-Doug



Re: NIST and SP800-119

2011-02-16 Thread Douglas Otis

On 2/16/11 10:57 PM, Joe Abley wrote:

On 2011-02-16, at 02:44, Douglas Otis wrote:

Routers indicate local MTUs, but minimum MTUs are not assured to have 1280 
octets when IPv4 translation is involved.
See Section 5 in rfc2460.

I've heard that interpretation of 2460 before from Bill Manning, but I still 
don't see it myself. The text seems fairly clear that 1280 is the minimum MTU 
for any interface, regardless of the type of interface (tunnel, PPP, whatever). 
In particular,

Links that have a configurable MTU (for example, PPP links [RFC-
1661]) must be configured to have an MTU of at least 1280 octets; it
is recommended that they be configured with an MTU of 1500 octets or
greater, to accommodate possible encapsulations (i.e., tunneling)
without incurring IPv6-layer fragmentation.

That same section indicates that pMTUd is strongly recommended in IPv6 rather 
than mandatory, but in the context of embedded devices that can avoid 
implementing pMTUd by never sending a packet larger than the minimum MTU. Such 
devices would break if there was an interface (of any kind) in the path with a 
sub-1280 MTU.

Bill makes a good point.  Ensuring a minimum MTU of 1280 octets over v6
connections carrying protocol 41 will not allow subsequent v4 routers to
fragment based upon discovered PMTUs.  This could influence maximum UDP
packets from a DNS server, for example, where path MTU discovery is
impractical.  To be assured of continued operation for critical
infrastructure, the maximum payload safely assumed for v6 connections
that might handle protocol 41 packets becomes 1280 - 40 - 8 = 1232
octets or less, as indicated in RFC2460.  As suggested, there might be
another 18 octet header, like L2TP, where the maximum safely assumed
becomes 1214.


Perhaps IPv6 should have specified a required minimum of 1346 octets, 
where 1280 octets could be safely assumed available.  A SHOULD is not a 
MUST, but critical operations MUST be based upon the MUSTs.  How much 
longer will native v4 be carried over the Internet anyway? :^)


-Doug



Re: NIST and SP800-119

2011-02-15 Thread Douglas Otis

On 2/15/11 11:09 PM, Joe Abley wrote:

On 2011-02-14, at 21:41, William Herrin wrote:

On Mon, Feb 14, 2011 at 7:24 PM, TR Shaw ts...@oitc.com wrote:

Just wondering what this community thinks of NIST in
general and their SP800-119 (
http://csrc.nist.gov/publications/nistpubs/800-119/sp800-119.pdf )
writeup about IPv6 in particular.

Well, according to this document IPv4 path MTU discovery is "optional,
not widely used".

Optional seems right. Have there been any recent studies on how widely pMTUd is 
actually used in v4?

More contentious is that Path MTU discovery is strongly recommended in IPv6. 
Surely it's mandatory whenever you're exchanging datagrams larger than 1280 octets? 
Otherwise the sender can't fragment.

Routers indicate local MTUs, but minimum MTUs are not assured to be 1280
octets when IPv4 translation is involved.  See Section 5 in rfc2460.
(1280 minus 40 for the IPv6 header and 8 for the Fragment header.)  Bill
suggested this could even be smaller.  This also ignores likely limited
resources to resolve addresses within a /64.  Public facing servers
might be placed into much smaller ranges to avoid supporting 16M
multicast.  Also there might be a need to limit ICMPv6 functions as
well, depending upon the features found in layer-2 switches.


-Doug


Re: Using IPv6 with prefixes shorter than a /64 on a LAN

2011-01-26 Thread Douglas Otis

On 1/25/11 6:00 PM, Fernando Gont wrote:

On 24/01/2011 08:42 p.m., Douglas Otis wrote:

It seems efforts related to IP address specific policies are likely
doomed by the sheer size of the address space, and to be pedantic, ARP
has been replaced with multicast neighbor discovery which dramatically
reduces the overall traffic involved.

This has nothing to do with the number of entries required in the
Neighbor Cache.

Secondly, doesn't Secure Neighbor
Discovery implemented at layer 2 fully mitigate these issues?  I too
would be interested in hearing from Radia and Fred.

It need not. Also, think about actual deployment of SEND: for instance,
last time I checked Windows Vista didn't support it.

First, it should be noted that ND, compared with ARP, offers a ~16M-to-2
reduction in traffic.  Secondly, services offered within a facility can
implement Secure Neighbor Discovery, since a local network's data link
layer, by definition, is isolated from the rest of the Internet.  While
ICMPv6 supports ND and SeND using standard IPv6 headers, only stateful
ICMPv6 Packets Too Big messages should be permitted.  Nor are Vista,
ISATAP, or Teredo wise choices for offering Internet services.  At least
there are Java implementations of Secure Neighbor Discovery.
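
The reduction comes from Neighbor Solicitations being sent to the
solicited-node multicast group, which is derived from only the low 24
bits of the target address, so roughly one host in 16M listens rather
than every host hearing an ARP broadcast.  A small sketch of that
derivation, using a documentation address purely as an example:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Documentation address used purely for illustration. */
    const char *target = "2001:db8::aabb:ccdd";
    struct in6_addr t, sn;
    char buf[INET6_ADDRSTRLEN];

    inet_pton(AF_INET6, target, &t);
    inet_pton(AF_INET6, "ff02::1:ff00:0", &sn);  /* solicited-node prefix */
    memcpy(&sn.s6_addr[13], &t.s6_addr[13], 3);  /* append low 24 bits */

    inet_ntop(AF_INET6, &sn, buf, sizeof buf);
    printf("Neighbor Solicitation for %s is sent to %s\n", target, buf);
    return 0;
}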


When one considers what is needed to defend a facility's resources, 
Secure Neighbor Discovery seems desirable since it offers hardware 
supported defenses from a wide range of threats.  While it is easy to 
understand a desire to keep specific IP addresses organized into small 
segments, such an approach seems at greater risk and more fragile in the 
face of frequent renumbering.  In other words, it seems best to use IPv6 
secure automation whenever possible.


The make before break feature of IPv6 should also remove most 
impediments related to renumbering.  In other words, fears expressed 
about poorly considered address block assignments also seem misplaced.


-Doug





Re: Using IPv6 with prefixes shorter than a /64 on a LAN

2011-01-24 Thread Douglas Otis

On 1/24/11 11:04 AM, bmann...@vacation.karoshi.com wrote:

  well... you are correct - he did say shorter.  me - i'd hollar for my good
friends Fred and Radia (helped w/ the old vitalink mess) on the best way to
manage an arp storm and/or cam table of  a /64 of MAC addresses. :)  It was
hard enough to manage a lan/single broadcast domain that was global in scope
and had 300,000 devices on it.

route when you can, bridge when you must

Bill,

It seems efforts related to IP address specific policies are likely 
doomed by the sheer size of the address space, and to be pedantic, ARP 
has been replaced with multicast neighbor discovery which dramatically 
reduces the overall traffic involved.  Secondly, doesn't Secure Neighbor 
Discovery implemented at layer 2 fully mitigate these issues?  I too 
would be interested in hearing from Radia and Fred.


-Doug




Re: Is NAT can provide some kind of protection?

2011-01-15 Thread Douglas Otis

On 1/15/11 3:24 PM, Brandon Ross wrote:

On Sat, 15 Jan 2011, Owen DeLong wrote:


I really doubt this will be the case in IPv6.
I really hope you are right, because I don't want to see that either, 
however...


Why do you suppose they did that before with IPv4?  Sure you can make 
the argument NOW that v4 is in scarce supply, but 10 years ago it was 
still the case.


Has Comcast actually come out and committed to allowing me to have as 
many IPs as I want on a consumer connection in the most basic, cheapest
package?  Has any other major provider?

As a customer of Comcast, you can set up a tunnel to he.net and obtain
your own prefix, which then enables 18 x 10^18 IP addresses at no
additional cost.  See: http://tunnelbroker.net/ and http://www.comcast6.net/


-Doug





Re: Is NAT can provide some kind of protection?

2011-01-14 Thread Douglas Otis

On 1/14/11 11:49 AM, Jack Bates wrote:

On 1/14/2011 1:43 PM, Owen DeLong wrote:

Ah, but, the point here is that NAT actually serves as an enabling
technology for part of the attack he is describing. Another example
where NAT can and is a security negative. The fact that you refuse
to acknowledge these is exactly what you were accusing me of
doing in my previous emails.


Explain how it acts as an enabler.

Consider the impact the typical NAT or firewall has on DNS.

-Doug



Re: Is NAT can provide some kind of protection?

2011-01-14 Thread Douglas Otis

On 1/14/11 4:10 PM, William Herrin wrote:

On Fri, Jan 14, 2011 at 2:43 PM, Owen DeLong o...@delong.com wrote:

Ah, but, the point here is that NAT actually serves as an enabling
technology for part of the attack he is describing.

As for strictly passive attacks, like the so-called drive by download,
it is not obvious to me that they would operate differently in a NAT
versus non-NAT stateful firewall environment. Please elucidate.

Systems having poor integrity are often _incorrectly_ considered 'safe'
behind typical firewalls, but their exposure often includes more than
just the IP address contacted in a URI.  Once a connection is initiated,
internal hosts often remain reachable by any IP address through
non-symmetric NATs for some period beyond the initial exchange, a
behavior promoted to support Teredo, for example.  Don't think no one is
using IPv6, even when there is only IPv4 access.


http://www.symantec.com/avcenter/reference/Teredo_Security.pdf


Explain how [NAT] acts as an enabler.

Consider the impact the typical NAT or firewall has on DNS.

Hi Doug,

You'd make the argument that NAT aggravates Kaminsky? If you have
something else in mind, I'll have to ask you to spell it out for me.

Many of these products are themselves insecure due to bugs in their
reference design, dutifully replicated by CPE manufacturers.  These
devices often keep no logs, and might even redirect specific DNS queries
when owned, where a power-cycling removes all evidence.  Even Cisco
firewalls were mapping a range of IP addresses, rather than port
mapping, which exposed systems unable to endure that type of exposure to
the Internet.  While it is possible to have a well implemented NAT,
many are unable to support DNS TCP exchanges or handle DNSsec.  The same
devices often restrict port ranges, where prior access to an attacker's
authoritative servers gives significant poisoning clues on subsequent
exchanges driven by injected iFrames.  A system not safe on the
Internet often is also not safe behind the typical CPE NAT/firewall.


-Doug





Re: Is NAT can provide some kind of protection?

2011-01-13 Thread Douglas Otis

On 1/13/11 5:48 PM, William Herrin wrote:

On Wed, Jan 12, 2011 at 10:02 PM, Mark Andrews ma...@isc.org wrote:

In message aanlktikixf_mbuo-oskpjsw98vn5_d5wznui_pl37...@mail.gmail.com,
William Herrin writes:

There's actually a large difference between something that's
impossible for a technology to do (even in theory), something that the
technology has been programmed not to do and something that a
technology is by default configured not to do.

Well ask the firewall vendor not to give you the knob to open it
up completely.

Hi Mark,

Why would I do that? I still have toes left; I *want* to be able to
shoot myself in the foot.

Still, you do follow the practical difference between can't,
programmed not to and configured not to, right? Can't is 0% chance of
a breach on that vector. The others are varying small percentages with
configured the highest of the bunch.


Note the CPE NAT boxes I've seen all have the ability to send
anything that isn't being NAT'd to a internal box so it isn't like
NAT boxes don't already have the flaw you are complaining about.
Usually it's labeled as DMZ host or something similar.

Fair enough. Implementations that can't target -something- for
unsolicited inbound packets have gotten rare.

The core point remains: a hacker trying to push packets at an
arbitrary host behind a NAT firewall has to not only find flaws in the
filtering rules, he also has to convince the firewall to send the
packet to the right host. This is more difficult. The fact that the
firewall doesn't automatically send the packet to the right host once
the filtering flaw is discovered adds an extra layer of security.
Practically speaking, the hacker will have better luck trying to
corrupt data actually solicited by interior hosts that the difficulty
getting the box to send unsolicited packets to the host the hacker
wants to attack puts an end to the whole attack vector.


On Thu, Jan 13, 2011 at 4:21 PM, Lamar Owen lo...@pari.edu wrote:

On Wednesday, January 12, 2011 03:50:28 pm Owen DeLong wrote:

That's simply not true. Every end user running NAT is
running a stateful firewall with a default inbound deny.

This is demonstrably not correct.

Hi Lamar,

I have to side with Owen on this one. When a packet arrives at the
external interface of a NAT device, it's looked up in the NAT state
table. If no matching state is found, the packet is discarded. However
it came about, that describes a firewall and it is stateful.

Even if you route the packets somewhere instead of discarding them,
you've removed them from the data streams associated with the
individual interior hosts that present on the same exterior address.
Hence, a firewall.

There's no such thing as a pure router any more. As blurry as the line
has gotten it can be attractive to think of selectively acting on
packets with the same IP address pairs as a routing function, but it's
really not... and where the function is to divert undesired packets
from the hosts that don't want them (or the inverse -- divert desired
packets to the hosts that do want them), that's a firewall.

Hi Bill,

Unfortunately, a large number of web sites have been compromised, where
an unseen iFrame might be included in what is normally safe content.  A
device accessing the Internet through a NAT often creates opportunities
for unknown sources to reach the device as well.  Once an attacker
invokes a response, exposures persist, where more can be discovered.
There are also exposures related to malicious scripts enabled by a
general desire to show users dancing fruit.  Microsoft now offers a
toolkit that allows users a means to 'decide' what should be allowed to
see fruit dance.  Users that assume local networks are safe are often
disappointed when someone on their network wants an application to do
something that proves unsafe.  Methods to penetrate firewalls are often
designed into 'fun' applications or poorly considered OS features.


-Doug





Re: Some truth about Comcast - WikiLeaks style

2010-12-14 Thread Douglas Otis

On 12/14/10 2:38 PM, Richard A Steenbergen wrote:

On Tue, Dec 14, 2010 at 03:39:07PM -0600, Aaron Wendel wrote:

  To what end?  And who's calling the shots there these days?  Comcast
  has been nothing but shady for the last couple years.  Spoofing
  resets, The L3 issue, etc.  What's the speculation on the end game?

I believe Comcast has made clear their position that they feel content
providers should be paying them for access to their customers.

The Internet would offer less value if access providers were allowed to
hold their customers hostage.  Clearly, such providers are not acting in
their customers' interests when inhibiting access to desired and
legitimate content.  What is net neutrality expected to mean?


Providers should charge a fair price for bandwidth offered, not oversell
the bandwidth, and not constrain bandwidth below advertised rates.
Congestion pricing rewards bad practices that lead to the congestion.


-Doug



Re: Jumbo frame Question

2010-11-29 Thread Douglas Otis

On 11/29/10 1:18 PM, Jack Bates wrote:

 On 11/29/2010 1:10 PM, John Kristoff wrote:
 In a nutshell, as I recall, one of the prime motivating factors for
 not standardizing jumbos was interoperability issues with the
 installed base, which penalizes other parts of the network (e.g.
 routers having to perform fragmentation) for the benefit of a
 select few (e.g. modern server to server comms).

 Given that IPv6 doesn't support routers performing fragmentation, and
 many packets are sent df-bit anyways, standardized jumbos would be
 nice. Just because the Internet as a whole may not support them, and
 ethernet cards themselves may not exceed 1500 by default, doesn't
 mean that a standard should be written for those instances where
 jumbo frames would be desired.

 Let's be honest, there are huge implementations of baby giants out
 there. Verizon for one requires 1600 byte support for cell towers
 (tested at 1600 bytes for them, so slightly larger for transport gear
 depending on what is wrappers are placed over that). None of this
 indicates larger than 1500 byte IP, but it does indicate larger L2
 MTU.

 There are many in-house setups which use jumbo frames, and having a
 standard for interoperability of those devices would be welcome. I'd
 personally love to see standards across the board for MTU from
 logical to physical supporting even tiered MTU with future proof
 overheads for vlans, mpls, ppp, intermixed in a large number of ways
 and layers (IP MTU support for X sizes, overhead support for Y
 sizes).


The level of undetected errors by TCP or UDP checksums can be high.  The
summation scheme is remarkably vulnerable to bus-related bit errors,
where as much as 2% of parallel bus related bit errors might go
undetected.  Use of SCTP, TLS, or IPSEC can supplant weak TCP/UDP
summation error detection schemes.  While jumbo frames reduce the serial
error detection rate of the IEEE CRC (which SCTP's CRC32c restores),
serial detection is less of a concern when compared to bus-related bit
error detection rates.  CRC32c addresses both the bus and jumbo frame
error detection problems and is found in 10Gb/s NICs and math coprocessors.
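
For reference, CRC32c is the Castagnoli CRC (reflected polynomial
0x82F63B78) that RFC 3309 adopted for SCTP and that hardware now
accelerates; a bit-at-a-time sketch, not tuned for speed:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32c (Castagnoli), the checksum SCTP carries per RFC 3309. */
static uint32_t crc32c(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xffffffffu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82f63b78u & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    const char *msg = "123456789";          /* standard check string */
    printf("crc32c = 0x%08x\n", crc32c((const uint8_t *)msg, strlen(msg)));
    /* Expected check value for "123456789" is 0xe3069283. */
    return 0;
}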


-Doug



Re: do you use SPF TXT RRs? (RFC4408)

2010-10-05 Thread Douglas Otis

 On 10/4/10 6:55 PM, Kevin Stange wrote:

The most common situation where another host sends on your domain's
behalf is a forwarding MTA, such as NANOG's mailing list.  A lot of MTAs
will only trust that the final MTA handling the message is a source
host.  In the case of a mailing list, that's NANOG's server.  All
previous headers are untrustworthy and could easily be forged.  I'd bet
few, if any, people have NANOG's servers listed in their SPF, and
delivering a -all result in your SPF could easily cause blocked mail for
anyone that drops hard failing messages.

Kevin,

Neither nanog.org nor mail-abuse.org publishes SPF or TXT records
containing SPF content.  If your MTA expects a message's MailFrom or
EHLO to be confirmed using SPF, then you will not receive this message,
refuting the "a lot of MTAs" claim.


This also confuses SPF with Sender-ID. SPF confirms the EHLO and 
MailFrom, whereas Sender-ID confirms the PRA.  However, the PRA 
selection is flawed since it permits forged headers most consider to be 
the originator.  To prevent Sender-ID from misleading recipients or 
failing lists such as nanog.org, replicate SPF version 2 records at the 
same node declaring mfrom.  This is required but doubles the DNS 
payload. :^(   Many consider -all to be an ideal, but this reduces 
delivery integrity.  MailFrom local-part tagging or message id 
techniques can instead reject spoofed bounces without a reduction in 
delivery integrity.


-Doug


Re: do you use SPF TXT RRs? (RFC4408)

2010-10-04 Thread Douglas Otis

 On 10/4/10 12:47 PM, Greg Whynott wrote:

A partner had a security audit done on their site.  The report said they were 
at risk of a DoS due to the fact they didn't have a SPF record.

I commented to his team that the SPF idea has yet to see anything near mass 
deployment and of the millions of emails leaving our environment yearly,  I 
doubt any of them have ever been dropped due to us not having an SPF record in 
our DNS.  When a client's email doesn't arrive somewhere,  we will hear about 
it quickly,  and its investigated/reported upon.  I'm not opposed to 
putting one in our DNS,  and probably will now - for completeness/best practice 
sake..


how many of you are using SPF records?  Do you have an opinion on their use/non 
use of?

It is ironic to see recommendations requiring use of SPF due to DoS
concerns.  SPF is a macro language expanded by recipients that may
combine cached DNS information with MailFrom local-parts to synthesize
100 DNS transactions targeting any arbitrary domain unrelated to those
seen within any email message.  A free 300x DDoS attack while spamming.


SPF permits the use of 10 mechanisms that then require targets to be
resolved, which introduces a 10x multiplier.  The record could end with
+all, in which case any message would pass.  Since SPF-based attacks are
unlikely to target email providers, it seems few recommending SPF
consider that resolving these records containing active content might
also be a problem.
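
A hypothetical record of the sort described above, with every name
invented purely for illustration, might be published in the TXT record
of a throw-away spamming domain:

v=spf1 exists:%{l}.%{i}._a.victim.example
       exists:%{l}.%{i}._b.victim.example
       ...
       exists:%{l}.%{i}._j.victim.example +all

Each exists mechanism forces the evaluating receiver to query the
victim's name servers, the %{l} and %{i} macros make every query unique
so caches offer no relief, and +all still yields a pass so delivery of
the spam is not hindered.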


-Doug





Re: [OT]Bounce Back

2010-05-20 Thread Douglas Otis

On 5/20/10 4:08 PM, Jeroen van Aart wrote:

James Bensley wrote:

Got the below message back from Hotmail when emailing a friend I email
every week. I have never experienced this particular error before, is
this just an indication of high traffic between Google Mail and
Hotmail?


Yes, high traffic of an abusive nature, i.e. google's email servers 
spew out a lot of spam. Just because they're a big company doesn't 
mean they should get special treatment when they're sending spam. 
Google should try it bit harder to fight their abuse problem.

It seems the year began with major providers' user accounts being
hacked.  More than just speed is needed to defend these services.  Other
services that send password reset links via email may also want to
rethink this strategy, in light of the situation.  Clearly, the accounts
are not being disabled.


-Doug





Re: DNS TXT field usage ?

2010-03-29 Thread Douglas Otis

On 3/29/10 12:06 PM, Tarig Yassin wrote:

Hi Jul


Dkim, SPF, and Domainkey are sender authentication methods for email system. 
Which use Public Key Cryptography.
   
DKIM and DomainKeys use public key cryptography to authenticate the
signature source, which signs at least the email From header and the
signature header itself.


However,  SPF uses chained IP address lists to establish source 
authorization, but not authentication.  Since outbound MTAs might handle 
multiple domains, it would be incorrect to assume authorization implies 
authentication and to expect email domains have been previously verified 
by the source.  For example, Sender-ID might use the same SPF record, 
but this expects Purported Responsible Addresses (PRA) rather than Mail 
Froms have been verified.  On the other hand, SPF was designed to ignore 
the PRA, and neither section 2.2 nor 2.4 of RFC4408 imposes prior
verification demands on Mail From or HELO, which would conflict with 
normal forwarding. :^(


Both DKIM and Domainkey share the same domain label of 
domain-holding-key._domainkey.admin-domain, whereas the first SPF 
record in a chain would be accessed without any prefix label.  While bad 
actors could use either scheme to obscure encoded DNS tunnel traffic, 
ascertaining abnormal use would be especially difficult whenever the 
first SPF record in a chain includes local-part encoding for subsequent
SPF record prefixes. :^(
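
To illustrate the label difference, with the selector, domain, and key
purely as placeholders:

sel2010._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=<base64 key data>"
example.com.                     IN TXT  "v=spf1 include:_spf.example.com -all"

The key record hangs off the _domainkey label, while the first SPF
record in a chain sits at the bare domain.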


-Doug



Re: DNS question, null MX records

2009-12-17 Thread Douglas Otis

On 12/17/09 4:54 AM, Tony Finch wrote:

On Wed, 16 Dec 2009, Douglas Otis wrote:


To avoid server access and hitting roots:

host-1.example.com. IN A 192.0.2.0
host-10.example.com. IN A 192.0.2.9

 example.com.    IN MX 0  host-1.example.com.
 example.com.    IN MX 90 host-10.example.com.


This is not very good from the point of view of a legitimate but
mistaken sender, because their messages will be queued and retried.
The advantage of pointing MX records at nonexistent hosts is most
MTAs (and all common ones) will stop trying to deliver the message
immediately. It is perhaps more polite to use a nonexistent name that
you control, but that doesn't allow the source MTA to skip further
DNS lookups, unlike the nullmx or sink.arpa ideas.


"." or *.ARPA. are domains that won't resolve A records.  Omitting the A
record in the above example accomplishes the same thing.  DNS traffic
can be reduced with long TTLs by using the TEST-NET technique.

Pointing MX records toward root or ARPA domains exposes shared
infrastructure to nuisance traffic from perhaps millions of sources
expecting NSEC responses at negative caching rates; this is traffic that
should be handled by the name server declaring the service hostname.

Better operators handling large email volumes reduce bounces and use
retry back-off. Those who don't will find themselves disproportionally
affected by a TEST-NET scheme. This seems to be a good thing, since 
there are far too many operators who carelessly accept email and expect 
others to deal with spoofed DSNs.


Often the problem is due to servers being behind a border server lacking 
a valid recipient list that filters spam. The subsequent server with the 
valid recipient lists then aggressively attempts to deliver a growing 
number of DSNs having spoofed addresses holding spam that gets past 
filters. Why be friendly toward this type of behavior, especially at the 
expense of shared infrastructure?


-Doug




Re: DNS question, null MX records

2009-12-16 Thread Douglas Otis

On 12/16/09 3:59 AM, Tony Finch wrote:

On Wed, 16 Dec 2009, Mark Andrews wrote:

Douglas Otis wrote:


One might instead consider using:

example.com.    IN MX 0  192.0.2.0
                IN MX 10 192.0.2.1
...
                IN MX 90 192.0.2.9


Which will expand to:

example.com. IN MX 0 192.0.2.0.example.com.
IN MX 10 192.0.2.1.example.com.

IN MX 90 192.0.2.9.example.com.

MX records DO NOT take IP addresses.


Sorry for the embarrassing mistake.

To avoid server access and hitting roots:

host-1.example.com. IN A 192.0.2.0
...
host-10.example.com. IN A 192.0.2.9

example.com.    IN MX 0  host-1.example.com.
...
example.com.    IN MX 90 host-10.example.com.

-Doug


Re: DNS question, null MX records

2009-12-16 Thread Douglas Otis

On 12/16/09 4:48 PM, Paul Vixie wrote:

Douglas Otis do...@mail-abuse.org writes:


If MX TEST-NET became common, legitimate email handlers unable to
validate messages prior to acceptance might find their server
resource constrained when bouncing a large amount of spam as well.


none of this will block spam.  spammers do not follow RFC 974 today
(since i see a lot of them come to my A RR rather than an MX RR, or
in the wrong order).  any well known pattern that says don't try
to deliver e-mail here will only be honoured by friendly people who
don't want us to get e-mail we don't want to get.


Agreed. But it will impact providers generating a large amount of bounce 
traffic, and some portion of spam sources that often start at lower 
priority MX records in an attempt to find backup servers without valid 
recipient information.  In either case, this will not cause extraneous 
traffic to hit roots or ARPA.


-Doug


Re: DNS question, null MX records

2009-12-15 Thread Douglas Otis

On 12/15/09 8:06 AM, Andy Davidson wrote:

Eric J Esslinger wrote:

I have a domain that exists solely to cname A records to another domain's 
websites.

[...]

I found a reference to a null MX proposal, constructed so:
example.com    IN    MX 0 .

[...]

Question: Is this a valid dns construct or did the proposal die?


It's valid, but you will probably find people still try to spam to
machines on the A records, and all of the other weird and wonderful things
that spambots try to do to find a path that will deliver mail...


SRV records documented the hostname "." as representing no service.
However, errors made by non-RFC-compliant clients still generate a fair
amount of root traffic attempting to resolve A records for ".".  The MX
record never defined a hostname of "." to mean no service, so it would
be unwise to expect email clients to interpret this as a special case
meaning no service as well.  One might instead consider using:


example.com.    IN MX 0  192.0.2.0
                IN MX 10 192.0.2.1
...
                IN MX 90 192.0.2.9

where 192.0.2.0/24 represents a TEST-NET block.

This should ensure traffic will not hit the roots or your servers. 
Assuming a sender tries all of the MX addresses listed, they may still 
attempt to resolve A records for example.com.  This MX approach will 
affect those failing to validate email prior to acceptance, and, of 
course, spammers.


-Doug




Re: SPF Configurations

2009-12-07 Thread Douglas Otis

On Dec 7, 2009, at 9:51 AM, Michael Holstein wrote:

 
 The problem we face is that some people we work with can't do that
 
 Then explain that client-side (their users, to whom they send mail) are 
 probably using Hotmail, et.al. and SPF will simply not allow spoofing which 
 is what they want to do, unless they either :
 
 A) add the SPF record as previously mentioned. It's a TXT record under their 
 root and isn't hard at all.

An authorization tied to a PRA or Mail From will not prevent spoofing; it just
constrains the risks to those with access to a provider's service.

A provider could ensure a user controls the From email-address, but this would
conflict with the IP path registration schemes.
 
 B) permit you to use a subdomain (like 
 u...@theircompanymail.yourdomain.com).

A provider can ensure any signed From email-address is controlled by its users 
by using ping-back email confirmations appended to user profiles.

There is a proposal aimed at reducing DNS overhead and scalability issues 
associated with the all-inclusive IP address path registration scheme with its 
inability to cope with forwarded email:

http://tools.ietf.org/html/draft-otis-dkim-tpa-label-03

Use of this DKIM extension can safely accommodate a user's desire to authorize 
third-party signatures to protect acceptance of From headers within domains 
that differ from the DKIM signature.  DKIM does not need to change.

Once IPv6 and international TLDs come into the mix, having users vote 
(authorize) DKIM providers could better determine what new domains can be 
trusted, and help ensure users are allowed to utilize their own language and to 
seek assistance in obtaining acceptable IPv6 connectivity.  

-Doug




Re: Repeated Blacklisting / IP reputation, replaced by registered use

2009-09-14 Thread Douglas Otis

On 9/13/09 12:49 PM, joel jaeggli wrote:

Frank Bulk wrote:

[]

If anything, there's more of a disincentive than ever before for
ARIN to spend time on netblock sanitization.


This whole thread seems to be about shifting (I.E. by externalizing)
the costs of remediation. presumably the entities responsible for the
poor reputation aren't likely to pay... So heck, why not ARIN?
perhaps because it's absurd on the face of it? how much do my fees go
up in order to indemnify ARIN against the cost of a possible future
cleanup? how many more staff do they need? Do I have to buy prefix
reputation insurance as contingent requirement for a new direct
assignment?


Perhaps ICANN could require registries to establish a clearing-house
where, at no cost, those assigned a network would register their intent
to initiate bulk traffic, such as email, from specific addresses.  Such a
use registry would make dealing with compromised systems more tractable.



I do think that ARIN should inform the new netblock owner if it was
previously owned or not.


We've got high quality data extending back through a least 1997 on
what prefixes have been advertised in the DFZ, and of course from the
ip reputation standpoint it doesn't so much matter if something was
assigned, but rather whether it was ever used. one assumes moreover
that beyond a certain point in the not too distant future it all will
have been previously assigned (owned is the wrong word).


But if ARIN tried to start cleaning up a netblock before releasing
it, there would be no end to it.  How could they check against the
probably hundreds of thousands private blocklist?


Note that they can't insure routability either, though as a community
we've gotten used to testing for stale bogon filters.


The issues created by IPv4 space churn are likely to be dwarfed by
eventual adoption of IPv6.  Registering intent to initiate bulk traffic,
such as with SMTP, could help consolidate the administration of filters,
since abuse is often from addresses that network administrators did not
intend.  A clearing-house approach could reduce the costs of
administering filters and better ensure against unintentional impediments.


This approach should also prove more responsive than depending upon 
filters embedded within various types of network equipment.  By limiting 
registration to those controlling the network, this provides a low cost 
means to control use of address space without the need to impose 
expensive and problematic layer 7 filters that are better handled by the 
applications.  The size of the registered use list is likely to be 
several orders of magnitude smaller than the typical block list. 
Exceptions to the use list will be even smaller still.


This registry would also supplant the guesswork involved with divining 
meaning of reverse DNS labels.


-Doug



Re: DNS hardening, was Re: Dan Kaminsky

2009-08-10 Thread Douglas Otis

This was responded to on the DNSEXT mailing list.

Sorry, but your question was accidentally attributed to Paul who 
forwarded the message.


DNSEXT Archive: http://ops.ietf.org/lists/namedroppers/

-Doug



Re: dnscurve and DNS hardening, was Re: Dan Kaminsky

2009-08-06 Thread Douglas Otis

On 8/5/09 7:05 PM, Naveen Nathan wrote:

On Wed, Aug 05, 2009 at 09:17:01PM -0400, John R. Levine wrote:

...

It seems to me that the situation is no worse than DNSSEC, since in both
cases the software at each hop needs to be aware of the security stuff, or
you fall back to plain unsigned DNS.


I might misunderstand how dnscurve works, but it appears that dnscurve
is far easier to deploy and get running. The issue is merely coverage.


There might be issues related to intellectual property use. :^(

-Doug



Re: DNS hardening, was Re: Dan Kaminsky

2009-08-05 Thread Douglas Otis

On 8/5/09 9:48 AM, John Levine wrote:

Other than DNSSEC, I'm aware of these relatively simple hacks to add
entropy to DNS queries.

1) Random query ID

2) Random source port

3) Random case in queries, e.g. GooGLe.CoM

4) Ask twice (with different values for the first three hacks) and
compare the answers


DNSSEC introduces vulnerabilities, such as reflected attacks and
fragmentation-related exploits that might poison glue, so perhaps asking
twice might still be needed.


Modern implementations use random 16 bit transaction IDs.  Interposed 
NATs may impair effectiveness of random source ports.  Use of random 
query cases may not offer an entropy increase in some instances.  Asking 
twice, although doubling resource consumption and latency, offers an 
increase in entropy that works best when queried serially.


Establishing SCTP as a preferred DNS transport offers a safe harbor for 
major ISPs.  SCTP protects against both spoofed and reflected attack. 
Use of persistent SCTP associations can provide lower latency than that 
found using TCP fallback, TCP only, or repeated queries.  SCTP also 
better deals with attack related congestion.


Once UDP is impaired by EDNS0 response sizes that exceed reassembly 
resources, or are preemptively dropped as a result, TCP must then 
dramatically scale up to offer the resilience achieved by UDP anycast. 
In this scenario, SCTP offers several benefits.  SCTP retains 
initialization state within cryptographically secured cookies, which 
provides significant protection against spoofed source resource 
exhaustion.  By first exchanging cookies, the network extends server 
state storage.  SCTP also better ensures against cache poisoning whether 
DNSSEC is used or not.


Having major providers support the SCTP option will mitigate disruptions 
caused by DNS DDoS attacks using less resources.  SCTP will also 
encourage use of IPv6, and improve proper SOHO router support.  When 
SCTP becomes used by HTTP, this further enhances DDoS resistance for 
even critical web related services as well.


-Doug


Re: DNS hardening, was Re: Dan Kaminsky

2009-08-05 Thread Douglas Otis

On 8/5/09 11:38 AM, Skywing wrote:

That is, of course, assuming that SCTP implementations someday clean up their act a bit.  
I'm not so sure I'd suggest that they're really ready for prime time at this 
point.


SCTP DNS would be intended for ISPs validating DNS where there would be 
fewer issues regarding SOHO routers.  It seems likely DNS will require 
some kernel adjustments to support persistent SCTP.  SCTP has been 
providing critical SS7 and H.248.1 services for many years now, where 
TCP would not be suitable.  FreeBSD 7 represents a solid SCTP reference 
implementation.


SCTP has far fewer issues going to homes connected via IPv6.

-Doug





Re: DNS hardening, was Re: Dan Kaminsky

2009-08-05 Thread Douglas Otis

On 8/5/09 11:31 AM, Roland Dobbins wrote:


On Aug 6, 2009, at 1:12 AM, Douglas Otis wrote:


Having major providers support the SCTP option will mitigate disruptions caused 
by DNS DDoS attacks using less resources.


Can you elaborate on this (or are you referring to removing the spoofing 
vector?)?


SCTP is able to simultaneously exchange chunks (DNS messages) over an 
association.  Initialization of associations can offer alternative 
servers for immediate fail-over, which might be seen as a means to arrange 
anycast style redundancy.  Unlike TCP, resource commitments are only 
retained within the cookies exchanged.  This avoids consumption of 
resources for tracking transaction commitments for what might be spoofed 
sources.  Confirmation of the small cookie also offers protection 
against reflected attacks by spoofed sources.  In addition to source 
validation, the 32 bit verification tag and TSN would add a significant 
amount of entropy to the DNS transaction ID.


The SCTP stack is able to perform the housekeeping needed to allow 
associations to persist beyond a single transaction, nor would there be a 
need to push partial packets, as is needed with TCP.


-Doug


Re: DNS hardening, was Re: Dan Kaminsky

2009-08-05 Thread Douglas Otis

On 8/5/09 2:49 PM, Christopher Morrow wrote:

and state-management seems like it won't be too much of a problem on
that dns server... wait, yes it will.


DNSSEC UDP will likely become problematic.  This might be due to 
reflected attacks, fragmentation related congestion, or packet loss. 
When it does, TCP fallback will be tried.  TCP must retain state for every 
attempt to connect, and will require significantly greater resources for 
comparable levels of resilience.


SCTP instead uses cryptographic cookies, with the client retaining this
state information.  SCTP can bundle several transactions into a common
association, which reduces overhead and latency compared against TCP.
SCTP ensures against source-spoofed reflected attacks and related
resource exhaustion.  TCP and UDP do not.  Under load, SCTP can
redirect services without using anycast.  TCP cannot.


-Doug





Re: [policy] When Tech Meets Policy...

2007-08-13 Thread Douglas Otis



On Aug 12, 2007, at 6:41 AM, John Levine wrote:

The problems with domain tasting more affect web users, with vast  
number of typosquat parking pages flickering in and out of existence.


Domain tasting clearly affects assessments based upon domains.  With  
millions added and removed daily as part of no cost domain tasting  
programs, the number of transitioning domains has been increased by  
an order of magnitude.  Many of these new domains often appear as  
possible phishing domains.  The high number of tasting domains  
obscures which are involved in criminal activities.  This high number  
also makes timely notification of possible threats far less practical.


There is no advanced notification of new domains nor reliable  
information pertaining to domain ownership.  There are significant  
costs associated with analyzing and publishing domain assessment  
information.  Registries blithely ignore this reality by permitting  
the dangerous activity to continue free of charge.  Perhaps those  
harmed by the resulting chaos that domain tasting creates could start  
a class action.   A coalition of financial institutions might prevail  
in both getting this program to end, and perhaps even require  
advanced notification of new domains.


Domain tasting is clearly buying criminals critical time due to the  
resulting high flux created for domain assessments.


-Doug


Re: Interesting new dns failures

2007-05-25 Thread Douglas Otis



On May 24, 2007, at 10:45 PM, John Levine wrote:

I ask you: What would you suggest? It's quite hard to craft  
technical solutions to policy failures.


Since the registrar business has degenerated into a race to the  
bottom, I don't see anything better than setting a floor that is  
the minimal allowable bottom.  Since ICANN has neither the  
inclination nor the competence to do that, and they have no control  
over ccTLDs anyway, that means (egad!) regulation.


Yeah, I know the Internet is all over the world, but as a  
participant in the London Action Plan, an informal talking shop of  
the bits of governments that deal with online crime, spam, etc., I  
can report that pretty much all of the countries that matter  
realize there's a problem, and a lot of them have passed or will  
pass laws whether we like it or not.  So it behooves us to engage  
them and help them pass better rather than worse laws.


Agreed, but adding a preview process doesn't cost much and would help  
establish stability.  There are millions of domains churning every  
day.  Just keeping track of which domains are new is costly.  Once it  
becomes commonplace for providers to withhold DNS information of new  
domains, does it really make sense to permit domain records to change  
frequently and within milliseconds after some holding period?  While  
provisions should be established for granting exceptions, requiring a  
12 hour zone preview before going live should lead to significant  
reductions in the amount of criminal activity depending upon this  
insane agility that thwarts tracking and takedowns.


Allow security entities time to correlate upcoming domain changes,  
and this swamp will drain rapidly.


-Doug