On Thursday 30 October 2003 04:38 am, Some Guy wrote:
>  --- Tom Kaitchuck <[EMAIL PROTECTED]> wrote:
> > > > Is there some way to make a hash like function that is trivial to
> > > > verify, but hard to generate? Maybe something like: index under the
> > > > 3rd hash and include the second hash as well as the next greater
> > > value whose last X bits match the last X bits of the third hash. (
> > > > then set some bound of how close that number has to be to the
> > > > original hash.) Anyone have a better algorithm?
> > >
> > > Yeah, it's called hash cash - but it'd slow down requests...
> > >
> > > > Anyway, then to brute force 2 bytes it would take nearly 2^16 times
> > > > as long as whatever is deemed an acceptable delay on a normal
> > > > computer.
> >
> > Why would this particularly slow down requests? You could just as easily
> > include the number values in the link to the content. Then the
> > intermediate nodes only have to do two hashes of ints and a comparison.
> > Wouldn't this take much less time than verifying the file when it comes
> > back? If that really slows things down, it could be done only at nodes
> > where HTL%3==1 or something like that. (Then it would still be caught
> > before it gets too far.) It would certainly slow down inserts, but I see
> > no problem with that.
>
> Look, with "hash cash" you can charge the initiator of a
> request/insert/etc. some average CPU time. That's all well and good for
> stopping a few jerks from overusing the system, but it only scales down
> the attack someone can mount against a targeted area in hashspace.
>
> Think of it like this.  If some guy has your address and sends you 2 tons
> of junk mail a day, you'll have trouble reading your mail.  If the postal
> service doubles the cost of postage stamps, he'll still send you 1 ton of
> junk mail.  If the postal service ups the cost by factor 10000, nobody will
> be able to afford to send you anything anyway.

See my second idea. (above)

> Hey wait, will this work?!?  Maybe it's different.
> ----Route based on the hashcash!!!
> User calculates a (b+c)-bit x such that hash(<key,x>) has its first b bits 0.
> There should be about 2^c possible x's.
> Route in hashspace to hash(<key,x>) not hash(key).

I don't quite understand this; can you explain it better?

> Real numbers: b=10, c=2
> In order to use a key I have to do about 2^b=1024 hashes, which should
> take a few ms of CPU time. There could be about 2^c=4 possible solutions;
> that's OK, a little redundancy.
>
> Since an adversary doesn't know what x is until he calculates it, he can't
> know where in hashspace a key will wind up until after he has "paid" for it
> already.
>
> Does this work, or am I missing something?

This is basically the same idea as my second proposal. It would work, but it 
would require a lot of changes. Suppose we set it up so that it takes one 
second to generate a key on a normal computer. Then brute forcing two bytes 
of the routing information would take about 2^16 seconds, most of a day. And 
if an average node were handling something like 20000 qph, then it would 
take over 500 normal computers 3 days to DOS that node for one hour. (The 
network would recover about an hour after that.) 
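The generate/verify asymmetry the scheme relies on can be sketched roughly like this (a minimal illustration, assuming SHA-256 as the hash and an 8-byte encoding of x; the original proposal does not fix either choice):

```python
import hashlib
import itertools

def generate(key: bytes, b: int) -> int:
    """Search for an x such that hash(key || x) starts with b zero bits.
    Expected cost: about 2**b hash evaluations."""
    for x in itertools.count():
        digest = hashlib.sha256(key + x.to_bytes(8, "big")).digest()
        # b leading zero bits means the digest, read as a big integer,
        # is below 2**(256 - b).
        if int.from_bytes(digest, "big") < 1 << (256 - b):
            return x

def verify(key: bytes, x: int, b: int) -> bool:
    """Checking a claimed x costs a single hash."""
    digest = hashlib.sha256(key + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 1 << (256 - b)

x = generate(b"some-key", 10)   # roughly 1024 hashes, a few milliseconds
assert verify(b"some-key", x, 10)
```

The node would then route toward hash(<key,x>) rather than hash(key); since x is unknown until the 2^b hashes have been done, the requester has already "paid" before learning where the key lands.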

That's good enough to stop the attack, but each URI would have to include the 
number that you are incrementing by, which would mean adding two or three 
characters to each URI. If the characters were not there, your computer could 
generate them in about a second. That is good enough that the existing 
content of Freenet would not have to be rewritten; however, the hash space is 
all wrong. This would mean that the contents of everyone's data store, while 
still valid, are located in a random place on the network, and everything 
needs to be moved. (The network has to totally re-specialize.)
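To make "two or three characters" concrete, here is one hypothetical way the solution x could be packed into a URI suffix (URL-safe base64 is an assumption on my part, not part of the proposal):

```python
import base64

def encode_x(x: int) -> str:
    """Pack the solution x into a short URL-safe suffix."""
    n = max(1, (x.bit_length() + 7) // 8)
    return base64.urlsafe_b64encode(x.to_bytes(n, "big")).decode().rstrip("=")

def decode_x(s: str) -> int:
    """Recover x from the suffix (re-adding the stripped padding)."""
    pad = "=" * (-len(s) % 4)
    return int.from_bytes(base64.urlsafe_b64decode(s + pad), "big")

# With b=10 a typical x fits in two bytes, i.e. about three characters:
suffix = encode_x(1234)
assert decode_x(suffix) == 1234
```

A node seeing a URI without the suffix could regenerate x itself in about a second, which is why old URIs would keep working.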

Some applications (Frost) need to generate more than one guessed key every 
second. That means that in order to accommodate these applications, we MUST 
implement TUKs.

> Attacks Left:
>
> I suppose an adversary with a certain fraction of nodes would probably
> still be able to use the keys already mapped to a target area to do
> damage.  An adversary with f nodes compromised would get about
> f/N * <average route hops> of the messages in the system (for f << N).
> If he collected all the keys in the target keyspace and requested them
> all repeatedly from all over the place, it might overload the area of
> the network responsible.  Caching would help some.

The data is actually there, so it would not make the nodes look bad. Caching 
would help a LOT.
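The f/N * <average route hops> estimate above is easy to check numerically (the node counts below are made-up numbers, purely for illustration):

```python
def intercept_fraction(f: int, N: int, hops: float) -> float:
    """Expected fraction of messages seen by f compromised nodes out of N,
    when each message traverses about `hops` nodes (valid for f << N)."""
    return min(1.0, f / N * hops)

# e.g. 50 compromised nodes out of 10000, with ~10-hop routes:
frac = intercept_fraction(50, 10000, 10)   # about 0.05, i.e. 5% of messages
```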

> There's still the SSK attack, but you can fight this with some caching
> and a realistic limit on the update rate.

What attack is this?

> If given enough time an adversary could build up enough crap to still do
> the attack for a short period.  The only thing I can think of to fight this
> is a time-dependent hash, or one based on a global random oracle which
> spits out salt every now and then.
>
>
> Damnit why didn't I think of this before?  It beats the crap out of my key
> sharing stuff I came up with!  Please somebody tell me if I'm right about
> this.


_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
