Iljitsch van Beijnum wrote:

The basic assumption in byzantine robustness is that as long as the
number of dishonest participants does not exceed a threshold ratio
(e.g. one third of all participants), the system does not fail.

Ok, I'm going to read up on this stuff, but in the meantime: it's pretty obvious that if you want to be sure that a certain node doesn't deceive you, you must check with a significant number of different nodes to see whether they agree. This is going to cost you performance.

It is going to cost some performance, I agree. However, if you
arrange your data so that it is self-authenticating, then the only
remaining reason for byzantine robustness is protection from DoS and
forged negative answers. That is, if you can arrange your data so
that whenever you do receive a positive response, you can check that
the response is genuine, then the task is somewhat simplified.


Whether you can arrange your data to be self-authenticating or not
depends on many factors.  One factor is whether your primary identifiers
are cryptographic in nature or not.
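
To make that concrete, here is a toy sketch of the simplest possible
self-authenticating arrangement, where the identifier is just a hash
of the data itself. The names are purely illustrative, not from any
actual proposal:

    import hashlib

    def make_identifier(data: bytes) -> str:
        # The identifier is the SHA-256 hash of the data itself, so
        # the data is self-authenticating: anyone who holds the
        # identifier can verify a response without trusting the server.
        return hashlib.sha256(data).hexdigest()

    def verify_response(identifier: str, response: bytes) -> bool:
        # A positive response checks out if and only if it hashes
        # back to the identifier we asked for.  A dishonest server
        # can still withhold data (DoS) or lie with a negative
        # answer, but it cannot forge a positive one.
        return hashlib.sha256(response).hexdigest() == identifier

With cryptographic identifiers of this kind, forged positive answers
are detectable locally; only withheld and negative answers still call
for byzantine-style cross-checking.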

In the case of DHT servers, the more servers there are, the less
likely it is that the threshold would be exceeded.  Hence, even if
you are relying on strangers, you are relying on them in a random
manner, making any collusion highly unlikely, and impossible in
practice.

I just don't see it happening. If we reach a million multihomers at some point in the future, 900,000 of which are SOHO/private persons, and every piece of information is replicated in 10 places, there is still a 35% chance that a Fortune 500 company has to rely exclusively on "basement multihomers" for this incredibly sensitive stuff. And remember: no SLAs, nothing.

I do understand your view, but I don't share it. I am sure someone will sooner or later produce a statistical analysis that can be used to quantify the danger.
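
As a zeroth-order version of such an analysis, here is the arithmetic
behind your 35% figure, assuming the 10 replica locations are drawn
independently and uniformly from the server population:

    # Probability that all 10 replicas of a record land on
    # "basement" servers, given 900,000 basement-class servers
    # out of 1,000,000 and uniform replica placement.
    p_basement = 900_000 / 1_000_000
    print(f"{p_basement ** 10:.4f}")   # 0.3487, i.e. roughly 35%
    print(f"{p_basement ** 20:.4f}")   # 0.1216 if we replicate 20 times

Note how quickly the number falls with the replication factor; that
knob alone changes the picture considerably.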

We have to remember that even if we took the very simplistic
approach and distributed the data over all of those basement
servers, data about each separate identifier would be located
on different basement servers. Furthermore, the exact set of
servers serving data about a particular identifier would be
extremely dynamic, since hosts would be joining and leaving
the DHT service network all the time. Hence, even though I
agree that such an arrangement might scare the pants off an
IT manager, it might actually work extremely well and be extremely
robust in practice.
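
A rough sketch of why the replica set is both spread out and dynamic,
using plain consistent hashing. This is a simplification; real DHTs
such as Chord or Kademlia differ in the details:

    import hashlib

    def _point(name: str) -> int:
        # Map a name onto a point on the hash ring.
        return int(hashlib.sha256(name.encode()).hexdigest(), 16)

    def replica_set(identifier: str, servers: list[str], k: int = 10) -> list[str]:
        # An identifier is served by the k servers whose ring points
        # follow its own point (wrapping around at the end).
        ring = sorted(servers, key=_point)
        start = _point(identifier)
        after = [s for s in ring if _point(s) >= start]
        return (after + ring)[:k]

    servers = [f"node{i}" for i in range(1000)]
    set_before = replica_set("some.identifier.example", servers)
    servers.remove(set_before[0])     # one replica server leaves the DHT
    set_after = replica_set("some.identifier.example", servers)
    # Typically only the departed server's slot changes hands, and a
    # different identifier hashes to an entirely different server set.

So an attacker cannot choose which identifiers it serves, and the set
it happens to serve keeps changing under its feet.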


I think we need some experimentation first. Maybe set up a "shadow DNS" that uses a DHT?

Excellent idea. I wish I had more time to study the few DHT experiments that are already ongoing.

Yes, mobility is complex. But don't you have the home agent as a fallback? So even in this case the regular PA home address should work most of the time.

Well, yes, you would need some kind of a "home" agent for rendezvous. However, if you bind your identity to your initial rendezvous server, you create an additional single point of failure. That happens to be one of the problems with the current MIP and MIPv6 practices.

On the other hand, if you do separate your identifiers and
locators, you can move your "home" servers around, or even
have several of them, thereby increasing resiliency.
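
A sketch of the difference, with hypothetical names: the identity
maps to a *set* of rendezvous servers rather than being bound to any
single one, so any one reachable server suffices:

    HOME_SERVERS = ["rvs1.example.net", "rvs2.example.net", "rvs3.example.net"]

    def query_rendezvous(server: str, identifier: str) -> str:
        # Stand-in for the real rendezvous protocol; here we simply
        # pretend that everything but the last server is unreachable.
        if server != HOME_SERVERS[-1]:
            raise OSError(f"{server} unreachable")
        return "2001:db8::1"    # current locator registered for the id

    def locate(identifier: str) -> str:
        # A single server failure no longer strands the identity;
        # we just walk the set until one of them answers.
        for server in HOME_SERVERS:
            try:
                return query_rendezvous(server, identifier)
            except OSError:
                continue
        raise RuntimeError("all rendezvous servers unreachable")

The point is that the binding lives in the identifier-to-set mapping,
not in any particular box.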

So you're saying: use PI first, PA as fallback?

I am saying that it depends on the application.

No, that's no way to build something reliable. Either you always try PA first and only use the potentially non-routable addresses as a fallback, or you need two-faced DNS or complex IGP tricks.

I agree that you need two-faced DNS. I don't see any specific reason why you should always do PA first and then PI, or vice versa. You do feed the DNS names to your applications anyway, and you can do what I proposed (before you have real identifiers), if you wish. That is one reason why I see some value in using PI addresses temporarily (for 3-5 years or so) as identifiers. And that is one reason why I understand that many network managers would like to have them.
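
To make the "feed the DNS names to your applications" point concrete,
here is the kind of fallback I have in mind: try whatever addresses
the name resolves to, in whatever order local policy dictates. This
is a simplification of what well-behaved applications already do:

    import socket

    def connect_by_name(host: str, port: int) -> socket.socket:
        # Resolve the name and try each returned address in turn.
        # Whether PA or PI addresses come first is a resolver/policy
        # question; the application only ever sees the name.
        last_error = None
        for family, type_, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, type_, proto)
            try:
                s.connect(addr)
                return s
            except OSError as e:
                last_error = e
                s.close()       # unreachable (e.g. a non-routed PI address)
        raise OSError(f"no usable address for {host}: {last_error}")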

Personally, I don't much like PI addresses.  IMHO, they seem to
create more complications than they solve.  However, as a step towards
a proper separation of identifiers and locators, they are probably
needed.  Not by me, but by many network managers.

We need to protect the infrastructure.  As I wrote, the danger is not
so much to the communicating hosts as to the infrastructure.

I'm not sure what you mean. Routers? DNS servers?

All end-hosts and all subnets. Basically, everything.


   an attacker may launch a denial-of-service attack on a given node A
   by contacting a large number of nodes, claiming to be A, and
   subsequently diverting the traffic at these other nodes so that A is
   harmed.

Not really big news. ... But it has nothing to do with multihoming, mobility or IPv6.

It may not be big news, but it was news back when MIPv6 was at the IESG for the first time. And yes, what I am writing about has everything to do with mobility and multihoming. And perhaps even IPv6.
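
For the record, the defence eventually adopted in MIPv6 against
exactly this redirection attack is a return-routability check: before
diverting traffic for A to a new locator, verify that whoever asked
can actually receive packets at A's address. A bare-bones sketch of
the idea, not the actual MIPv6 message formats:

    import os, hmac, hashlib

    SECRET = os.urandom(32)   # node-local secret, so tokens are stateless

    def challenge(claimed_addr: str) -> bytes:
        # Send this token *to the claimed address*.  Only a node that
        # can actually receive packets there can learn and echo it.
        return hmac.new(SECRET, claimed_addr.encode(), hashlib.sha256).digest()

    def accept_binding(claimed_addr: str, echoed: bytes) -> bool:
        # Accept the redirection only if the challenge came back,
        # i.e. the claimant really is on the path to that address.
        return hmac.compare_digest(challenge(claimed_addr), echoed)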

Unfortunately neither IPsec nor TLS pixie dust helps here.

It does if you use it right. :-)

I think it is better to take this part off-line until you lose your illusions about fairies being real :-)

--Pekka


--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------
