Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Adam Novak
You're right that a flat mesh is not the best topology for
long-distance communication, especially with current routing
protocols, which require things like global lists of all routable
prefixes.

On the protocol front, I suggest that the IETF develop routing
protocols that can work well in a flat mesh topology. Parallelizing
traffic streams over many available routes, so that it doesn't all try
to take the shortest path, would be a particularly important
feature, as would preventing all of a node's links from being swamped
by through traffic that other nodes want it to route.
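
To make the spreading idea concrete, here's a rough Python sketch of
hashing flows onto any next hop within a couple of hops of the best
path, rather than always picking the single shortest route. The
neighbor names and costs are made up for illustration; a real protocol
would learn them.

import hashlib

# Hypothetical next hops toward some destination, with made-up path costs.
# In a real mesh protocol these would come out of the route computation.
candidate_next_hops = {
    "neighbor-a": 3,  # hop count to the destination via this neighbor
    "neighbor-b": 4,
    "neighbor-c": 4,
}

def pick_next_hop(flow_id, candidates, slack=2):
    """Spread flows over every next hop within `slack` hops of the best path,
    so traffic doesn't all pile onto the single shortest route."""
    best = min(candidates.values())
    usable = sorted(h for h, cost in candidates.items() if cost <= best + slack)
    # Hash the flow identifier so one flow sticks to one path (no reordering)
    # while different flows land on different paths.
    digest = hashlib.sha256(flow_id.encode()).digest()
    return usable[digest[0] % len(usable)]

for flow in ("10.0.0.1:4433->192.0.2.9:443", "10.0.0.2:5000->192.0.2.9:80"):
    print(flow, "->", pick_next_hop(flow, candidate_next_hops))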

The problem with long-distance traffic over flat mesh networks is less
one of throughput (assuming everything isn't taking the shortest path)
than of the latency involved in sending traffic over a very large
number of hops. I think the solution there is to send traffic that's
leaving your local area over the existing (tappable) long-distance
infrastructure. The idea is to make tapping expensive, not impossible.
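
To put rough numbers on the latency point (the per-hop and backbone
figures below are assumptions, not measurements):

# Back-of-the-envelope comparison; all delay figures are assumed for illustration.
mesh_hop_delay_ms = 2.0     # assumed per-hop forwarding/queueing delay in a wireless mesh
backbone_delay_ms = 40.0    # assumed one-way delay over existing long-haul infrastructure

mesh_hops_across_region = 50  # hypothetical hop count for an all-mesh long-distance path
hops_to_nearest_uplink = 3    # hops to hand traffic off to the long-distance network

print("all-mesh path:      %.0f ms" % (mesh_hops_across_region * mesh_hop_delay_ms))
print("mesh plus backbone: %.0f ms" % (hops_to_nearest_uplink * mesh_hop_delay_ms + backbone_delay_ms))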

There's also the point to be made that current traffic patterns depend
to a significant extent on current Internet architectural decisions.
If everyone had a gigabit connection to their neighbors, but only a 10
megabit uplink to route long-distance traffic over, they might find a
use for all that extra local bandwidth.

On Fri, Sep 6, 2013 at 7:22 AM, Noel Chiappa  wrote:
> > One way to frustrate this sort of dragnet surveillance would be to
> > reduce centralization in the Internet's architecture.
> > ...
> > [If] The IETF focused on developing protocols (and reserving the
> > necessary network numbers) to facilitate direct network peering between
> > private individuals, it could make it much more expensive to mount
> > large-scale traffic interception attacks.
>
> I'm not sure this is viable (although it's an interesting concept).
>
> With our current routing tools, switching to a flat mesh, as opposed to the
> current fairly-structured system, would require enormous amounts of
> configuration/etc work on the part of smaller entities.
>
> Also, traffic patterns being what they are (e.g. most of my traffic goes
> quite a distance, and hardly any to things close by), everyone would wind up
> handling a lot of 'through' traffic - orders of magnitude more than their
> current traffic load.
>
> Noel


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread Adam Novak

On 09/05/2013 08:19 PM, Brian E Carpenter wrote:

Tell me what the IETF could be doing that it isn't already doing.

I'm not talking about what implementors and operators and users should
be doing; still less about what legislators should or shouldn't be
doing. I care about all those things, but the question here is what
standards or informational outputs from the IETF are needed, in addition
to what's already done or in the works.

I don't intend that to be a rhetorical question.

  Brian


One way to frustrate this sort of dragnet surveillance would be to 
reduce centralization in the Internet's architecture. Right now, the way 
the Internet works in practice for private individuals, all your traffic 
goes up one pipe to your ISP. It's trivial to tap, since the tapping can 
be centralized at the ISP end.


If the IETF focused on developing protocols (and reserving the necessary 
network numbers) to facilitate direct network peering between private 
individuals, it could make it much more expensive to mount large-scale 
traffic interception attacks.


Re: https

2011-09-01 Thread Adam Novak
Who is the appropriate net monkey to handle this?


Re: Hyatt Taipei cancellation policy?

2011-08-27 Thread Adam Novak
On Aug 28, 2011 12:06 AM, "Glen Zorn"  wrote:
> Good to hear.  Getting rid of cookies can save a _lot_ of money & since
> you're "at IETF to work" I'm sure you wouldn't mind if the conditions
> more closely approximated those of the typical office; no office I've
> ever worked in has provided free cookies & beverages twice a day...
>

In my experience, computer industry employers often provide free snacks and
beverages, since hungry/thirsty/caffeine-deprived developers write bad code.
It's also a better value than paying them more, in terms of how much they
like you.


Re: Routing at the Edges of the Internet

2011-08-26 Thread Adam Novak
On Fri, Aug 26, 2011 at 3:49 PM, Doug Barton  wrote:
>
> I have a related-but-different example of how end nodes being able to
> know/discover direct paths to one another could be useful. Imagine a
> busy server network with some web servers over here, some sql servers
> over there, etc. All of these systems are on the same network, same
> switch fabric, and have the same gateway address. In an ideal world I
> would like them to be able to know that they can speak directly to one
> another without having to go through the gateway (and without my having
> to manually inject static routes on the hosts, which of course is both
> painful and un-scale'y.

Shouldn't that be covered by the subnet mask? As long as they know
they're on the same subnet (and ARP broadcasts will reach everyone)
they should just ARP for each other and not involve the router at all.

If they are on different IP subnets, but the same Ethernet, then we
can either come up with a new way to do routing, or tell people not to
number things that way. Perhaps a subnet mask or CIDR prefix is not
expressive enough?
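
For concreteness, the decision a host makes is just a prefix match
against its own interface configuration; a small Python sketch of that
logic, with arbitrary example addresses:

import ipaddress

# Arbitrary example addressing; only the prefix-match logic matters here.
my_interface = ipaddress.ip_interface("192.0.2.10/24")

for dst in ("192.0.2.20", "198.51.100.5"):
    if ipaddress.ip_address(dst) in my_interface.network:
        print(dst, "is on-link: ARP for it and deliver directly")
    else:
        print(dst, "is off-link: hand it to the configured gateway")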


Re: https

2011-08-26 Thread Adam Novak
On Fri, Aug 26, 2011 at 1:12 PM, Ned Freed  wrote:
>> It ensures that what you're getting is the same as what the IETF has
>> on file,
>
> No, it really doesn't do that. There can be, and usually are, a lot of steps
> involves besides the one https hop.
>

What other steps are there? HTTPS prevents what the web server sends
from being tampered with on its way to the browser. If the web server
storing the archives is itself trusted, this would seem to be all that
is needed.

Obviously HTTPS is not a solution to long-term preservation of the
integrity of the mailing list database itself. What protections, if
any, are currently in place to protect the archive from tampering on
IETF's end? Perhaps the whole thing should be signed by someone
official at regular intervals?
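
As a sketch of what "signed at regular intervals" might look like (the
paths and signing key below are hypothetical, and it assumes GnuPG is
installed): hash each archive file, write the digests to a manifest,
and detach-sign the manifest.

import hashlib
import pathlib
import subprocess

# Hypothetical locations; substitute wherever the archive actually lives.
archive_dir = pathlib.Path("/var/lib/mailman/archives/ietf")
manifest = pathlib.Path("ietf-archive.sha256")

lines = []
for mbox in sorted(archive_dir.glob("*.mbox")):
    digest = hashlib.sha256(mbox.read_bytes()).hexdigest()
    lines.append(f"{digest}  {mbox.name}")
manifest.write_text("\n".join(lines) + "\n")

# Detached, armored signature over the manifest; the key name is made up.
subprocess.run(
    ["gpg", "--local-user", "archive-signing@example.org",
     "--armor", "--detach-sign", str(manifest)],
    check=True,
)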

>
> I'm sorry, but the notion that the present use of https provides any
> sort of protection against this sort of thing is just silly. The simple
> facts that the archives are also available via regular http and that
> there is no stated policy about the availability of archives via https
> means the present setup is completely vulnerable to downgrade attacks.
>

A downgrade attack is certainly possible. But there are measures
available to ensure that, once an HTTPS connection has been
established, future plain-HTTP sessions will be rejected client-side.
And creating an official policy and going HTTPS-only could be done
relatively easily. If HTTPS everywhere is where the Web should be
going, then IETF should be leading the charge.
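
The client-side measure I have in mind is the Strict-Transport-Security
header (HSTS): once a browser has seen it over HTTPS, it refuses to
fall back to plain HTTP for that host. A quick Python check for whether
a site sends it (any HTTPS URL can be substituted):

import urllib.request

# Report whether a site advertises Strict-Transport-Security (HSTS).
url = "https://www.ietf.org/"
with urllib.request.urlopen(url) as resp:
    hsts = resp.headers.get("Strict-Transport-Security")

if hsts:
    print("HSTS advertised:", hsts)
else:
    print("No HSTS header; clients may still retry this host over plain HTTP")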

> 10+ years of experience with multiple certificates having multiple failures
> says this is, at best, a crapshoot.
>

For what reasons? Is it that things scheduled every year or every ten
years are easy for admins to miss? Or is it that it's hard to stay on
top of certificate revocations when they occur?


Re: https

2011-08-26 Thread Adam Novak
On Fri, Aug 26, 2011 at 1:13 PM, Hector Santos  wrote:
> Makes you wonder. Why is the concept of expiration required?  Did the IETF
> expire, die?  Did its value as an Organization go down and only valid on a
> year to year basis?

As I understand it, expiration is supposed to solve the problem of
someone getting their hands on your old certificates and impersonating
you. In order to impersonate you, not only do they have to get into
your system, they have to have done it in the last year or so.

It also keeps certificates for domains from outliving domain
registrations for too long. If you don't have the domain when you go
to renew the certificate, the CA shouldn't renew it.

I guess it also keeps revocation lists short. You only have to
remember that a certain certificate was compromised until it expires,
instead of forever.


Re: https

2011-08-26 Thread Adam Novak
On Fri, Aug 26, 2011 at 11:14 AM,   wrote:
>
> +1. If you want signatures, do them properly. Don't pretend a transfer
> protection mechanism covering exactly one hop provides real object security,
> because it doesn't.
>

It ensures that what you're getting is the same as what the IETF has
on file, and (assuming you trust the IETF's archive integrity, which
is a separate problem) what everyone else on the list received. It
seems to me it's more important to know that than to know that stuff
sent to the list actually came from who it claims to come from. Does
it really matter if a proposed standard wasn't really designed by who
the archive says it was designed by?

> And as for the "encrypt so the really secret stuff doesn't stand out" 
> argument,
> that's fine as long as it doesn't cause inconvenience to anyone. That's 
> clearly
> not the case here. And I'm sorry, the "mistakes were made" notion doesn't
> really fly: Certificates aren't a "set it and forget it" thing, so if you
> haven't noted expiration dates on someone's to-do list so they can be updated
> before expiration, you're not doing it right.

Yeah, it seems like the IETF would be on top of certificate
expiration, having invented it. But I think having the encryption is a
very good thing. Some government interests would be happy to keep
information about some aspects of IETF business away from their
citizenry, and have the resources to launch a MITM attack. (Of course,
those governments may also have the resources to sign a fake
certificate, but that is, again, a separate problem.)

Once the certificate expiration business is fixed, it should be fairly
simple to make sure it's kept up to date so that this sort of thing
doesn't happen again.
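
Keeping it up to date could be as simple as a periodic check along
these lines; the hostname and warning threshold are just examples.

import socket
import ssl
import time

# Warn when a server certificate is close to its notAfter date.
host, port, warn_days = "www.ietf.org", 443, 30

context = ssl.create_default_context()
with socket.create_connection((host, port)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = int((expires_at - time.time()) // 86400)
print("%s: certificate expires in %d days" % (host, days_left))
if days_left < warn_days:
    print("Renew soon!")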


Routing at the Edges of the Internet

2011-08-25 Thread Adam Novak
I trust that some of you have seen this article from a while back:



An informative excerpt:

"When I open my laptop, I see over ten different wifi access points.
Say I wanted to send data to my friend in the flat next to mine. It is
idiotic that nowadays, I would use the bottleneck subscriber line to
my upstream ISP and my crippled upload speed and push it all the way
across their infrastructure to my neighbors ISP and back to the Wifi
router in reach of mine. The Internet is not meant to be used that
way. Instead, all these wifi networks should be configured to talk to
each other."

I also trust that you are aware of what happened to the Internet in
Egypt (and elsewhere) this spring, where Internet connectivity was
disrupted by shutting down major ISP networks.

I would like to bring the attention of the IETF to what I see as a
fundamental problem with the current architecture of the Internet:

The Internet is not a network.

As part of the development of the Internet, fault-tolerant routing
protocols have been developed that allow a connection to
be maintained, even if the link that was carrying it goes down, by
routing packets around the problem. Similarly, packets can be
load-balanced over multiple links for increased bandwidth. However,
the benefits of these technologies are not available to end users. If
I have a smartphone with both a 3G and a Wi-Fi connection, downloads
cannot currently be load-balanced across them. The two interfaces are
on two different networks, which are almost certainly part of two
different autonomous systems. Packets must be addressed to one of the
two interfaces, not the device, and packets addressed to one interface
have no way to be routed to the other. Similar problems arise when a
laptop has both a wired and a wireless connection. Wired networks also
suffer from related difficulties: If I have Verizon and my friend has
Comcast, and we string an Ethernet cable between our houses, packets
for me will still all come down my connection, and packets for my
friend will still all come down theirs.
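
About the closest an application can get today is to do the splitting
itself: open one connection per interface, bound to that interface's
address, and fetch different byte ranges over each. A rough Python
sketch (the local addresses, host, and file size are placeholders, and
whether the bound address actually steers traffic onto that interface
depends on the host's routing setup):

import http.client

# Placeholder addresses for the Wi-Fi and 3G interfaces; substitute real ones.
wifi_addr = "192.168.1.23"
cell_addr = "10.64.12.7"
host, path, size = "example.com", "/big-file.bin", 1_000_000  # assumed total size

def fetch_range(local_addr, first, last):
    """Fetch bytes [first, last] over a connection bound to local_addr."""
    conn = http.client.HTTPConnection(host, 80, source_address=(local_addr, 0))
    conn.request("GET", path, headers={"Range": "bytes=%d-%d" % (first, last)})
    return conn.getresponse().read()

# The "load balancing" lives entirely in the application, one connection per
# interface, because the network will not route a single flow over both.
first_half = fetch_range(wifi_addr, 0, size // 2 - 1)
second_half = fetch_range(cell_addr, size // 2, size - 1)
data = first_half + second_half
print(len(data), "bytes fetched over two interfaces")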

The Internet, as it currently appears to end-users, has a logical tree
topology: computers connect to your home router, which connects to
your ISP, which connects to the rest of the Internet. Cell phones
connect to the tower, which connects through a backhaul link to the
rest of the Internet. Almost all of the devices involved have multiple
physical interfaces and full IP routing implementations, but only the
default route is ever used. This results in a brittle Internet: the
failure of one ISP router can disconnect a large number of end-users
from the Internet, as well as interrupting communication between those
users, even when those users are, physically, only a few feet from
each other.

My question is this: what IETF work would be needed to add more
routing to the edges of the Internet? If each home or mobile device
were essentially its own autonomous system, what would this do to
routing table size? To ASN space utilization? How can individuals
interconnect home networks when RIRs do not assign address and AS
number resources to individuals? How might individuals interconnect
home networks without manual routing configuration? Under what
circumstances could an ISP trust a client's claim to have a route to
another client or to another ISP? How might packets sent to a device's
address on one network be routed to that device's address on another
network, while packets to immediately adjacent addresses take the
normal path?