I looked over both papers.

First paper:

The proof in the first one isn't a big surprise. The exact predecessor
links are a technicality used in the proofs of polylog routing; they
aren't necessary in general, but without them you do need more density in
the network. If you scale the node degree as log(n) with the network
size, you have sufficient density that everything still holds. Of course,
it is an open question whether you would get such scaling in practice
(but it is also an open question whether the real network topology
resembles the small-world models at all, so...)

The routing modification they suggest sounds fine, however: it basically
says that a node should be allowed to route back to a previously visited
node as long as that node still has a neighbor closer to the target than
itself. This could be useful even for the opennet part of Freenet. I think
there are other ways of achieving the same effect, though. For example,
you can put in the message the ID of the closest not-chosen neighbor at
each step, and if a later node has no neighbor closer than one an earlier
node skipped, the message backtracks.
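
To make that alternative concrete, here is a rough toy sketch in Python
(the graph/location structures and all names are made up for
illustration; this is not Freenet's routing code): the message carries the
distance-to-target of the closest neighbor that some earlier node skipped,
and a node backtracks whenever it cannot beat that.

# Toy sketch (not Freenet code): greedy routing where the message carries
# the distance-to-target of the closest neighbor an earlier node skipped.
# A node backtracks when none of its unvisited neighbors beats that.

def circ_dist(a, b):
    """Circular distance on the [0, 1) keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def route(graph, locs, start, target, max_hops=200):
    """graph: node -> list of neighbors; locs: node -> location in [0, 1)."""
    t = locs[target]
    path, visited = [start], {start}
    best_skipped = None   # distance-to-target of the best neighbor not chosen
    current = start
    for _ in range(max_hops):
        if current == target:
            return path
        # Unvisited neighbors, closest to the target first.
        candidates = sorted(
            (n for n in graph[current] if n not in visited),
            key=lambda n: circ_dist(locs[n], t),
        )
        # Backtrack if an earlier node skipped something strictly better.
        if not candidates or (
            best_skipped is not None
            and circ_dist(locs[candidates[0]], t) > best_skipped
        ):
            if len(path) <= 1:
                return None   # dead end
            path.pop()
            current = path[-1]
            continue
        nxt = candidates[0]
        # Record the best neighbor we are *not* taking at this step.
        if len(candidates) > 1:
            skipped = circ_dist(locs[candidates[1]], t)
            if best_skipped is None or skipped < best_skipped:
                best_skipped = skipped
        visited.add(nxt)
        path.append(nxt)
        current = nxt
    return None

The point is just that the backtracking decision can be driven by a single
number carried in the message rather than by a full list of marked nodes.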


Second paper:

The reason I never let the nodes independently draw and change IDs is that
it seemed to me (and this was borne out in experiments) that this would
cause all the IDs to converge to a single point. The math that motivates
the formula c(u) in their paper really only works when the IDs live on a
fixed grid: if you let the space be continuous, the distribution
degenerates as the distance between points approaches zero, so there is no
guarantee that the Markov chain converges to any meaningful distribution
at all.
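
As a toy illustration of the kind of collapse I mean (using a made-up
re-drawing rule, not the chain from the paper): if each node just keeps
independently re-drawing its ID close to its neighbors' IDs, the IDs
contract to a single value on any connected graph.

# Toy illustration only (NOT the paper's chain): nodes on a ring, each
# connected to its 4 nearest neighbors, repeatedly re-draw their ID near
# the mean of their neighbors' IDs. IDs live on a line rather than the
# circular keyspace, just to keep the example trivial. The spread shrinks
# towards zero - the degenerate outcome described above.
import random

N = 50
graph = {i: [(i - 2) % N, (i - 1) % N, (i + 1) % N, (i + 2) % N]
         for i in range(N)}
ids = {i: random.random() for i in range(N)}

def spread(ids):
    return max(ids.values()) - min(ids.values())

print("initial spread:", round(spread(ids), 6))
for _ in range(100000):
    u = random.randrange(N)
    ids[u] = sum(ids[v] for v in graph[u]) / len(graph[u])
print("final spread:  ", round(spread(ids), 6))  # essentially zero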

If it works, then it works, of course. But I would approach it with some
caution.

(Note that if a node is running the Markov chain all on its own it should
be possible to analytically calculate, or at least estimate, its stationary
distribution from a particular set of neighbors, making actually running
the chain unnecessary.)
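
For example, a sketch of what I have in mind (assuming, purely for
illustration, a Metropolis-style chain over a discretized keyspace with a
uniform proposal; score() is a stand-in, not the actual formula from the
paper): build the one-node transition matrix and read the stationary
distribution off it directly, rather than sampling it.

# Sketch: compute the stationary distribution of a single node's
# ID-choosing Markov chain from its transition matrix instead of running
# the chain. Assumes a Metropolis-style chain over a discretized keyspace
# with a uniform proposal; score() is a placeholder, not the formula from
# the paper.
import numpy as np

def circ_dist(a, b):
    d = abs(a - b)
    return min(d, 1.0 - d)

def score(loc, neighbor_locs):
    # Placeholder preference: locations closer to the neighbors score higher.
    return -sum(circ_dist(loc, n) for n in neighbor_locs)

def stationary_distribution(neighbor_locs, bins=256):
    grid = np.arange(bins) / bins
    s = np.array([score(x, neighbor_locs) for x in grid])
    # Transition matrix: propose a uniformly random grid point, accept it
    # with probability min(1, exp(s_new - s_old)), otherwise stay put.
    P = np.zeros((bins, bins))
    for i in range(bins):
        acc = np.minimum(1.0, np.exp(s - s[i])) / bins
        acc[i] = 0.0
        P[i] = acc
        P[i, i] = 1.0 - acc.sum()
    # The stationary distribution is the left eigenvector for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return grid, pi / pi.sum()

grid, pi = stationary_distribution([0.12, 0.15, 0.6])
print("most likely ID:", grid[int(np.argmax(pi))])

(For a chain of exactly this Metropolis form the answer is simply
proportional to exp(score), but the eigenvector route works for any
single-node chain you can write down.)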



On Fri, May 3, 2013 at 2:31 PM, Michael Grube <michael.gr...@gmail.com> wrote:

>
>
>> Also, the reality may be even worse: Even without attackers, churn in the
>> network causes "natural" degeneration; we randomise periodically to try to
>> correct this.
>>
>> Note that we are probably immune to their third attack due to using a
>> commit/reveal on swapping. However the first two are real enough and can't
>> be beaten by commit/reveal.
>>
>> We could implement this tomorrow. What exactly do we need to determine
>> before doing so? We could validate the implementation, and compare it to
>> current swapping, in a many-nodes-one-VM simulator (we can simulate approx
>> 100 to 500 "real" nodes on one high end system).
>>
>
> That's interesting. What do you consider high end? If it will actually
> help, I'll rent some cloud hardware. I'm not sure about supporting 10,000
> nodes, but maybe 2 or 3 thousand from me.
>
>
>>
>> The old probe requests were intended to find the "successor" of a
>> particular location/node, and often saw huge gaps. However they were often
>> so large that I assumed it was a bug in the implementation, or a
>> particularly perverse topology glitch combined with overload... if the
>> latter, that's a serious problem in itself...
>>
>> The paper talks about not having peers-of-peers. We do have locations for
>> peers-of-peers for FOAF routing purposes. I wonder if this is important...
>>
>> Average hops is good, but their max htl is nearly 200. This is kind of
>> worrying! Fortunately we can afford some failure thanks to key-level
>> redundancy...
>>
>> The new algorithm has impressive performance on big networks - swapping
>> doesn't scale, LMC works a lot better with the limited number of swaps.
>> Having said that, they only show 10k nodes in this paper - the slides from
>> the related talk (
>> http://www.icsi.berkeley.edu/icsi/events/2013/03/strufe-dark-social-networking)
>>  suggest it works well on larger networks too; clearly swapping is running
>> into scalability problems even at 10k nodes, at least with 6000|V| swaps.
>>
>
>
>> >
>> > On Thu, May 2, 2013 at 6:42 PM, Matthew Toseland
>> > <t...@amphibian.dyndns.org>wrote:
>> >
>> > > On Tuesday 05 Mar 2013 15:57:58 Matthew Toseland wrote:
>> > > > A paper accepted by INFOCOM 2013 claims to:
>> > > > 1) Show that our view of Kleinberg is unrealistic because we do not
>> > > > have the sideways lattice links - the closest connection to the
>> > > > target isn't always closer to the target than we've been so far.
>> > > > 2) Prove that we can't have polylog routing, and show that perverse
>> > > > cases where the best route will take us further away are
>> > > > increasingly common as networks grow, and
>> > > > 3) Propose a new algorithm that is provably polylog.
>> > > >
>> > > > http://www.p2p.tu-darmstadt.de/publications/details/?no_cache=1&tx_bibtex_pi1[pub_id]=TUD-CS-2013-0036
>> > > > ("A Contribution to Analyzing and Enhancing Darknet Routing", Roos &
>> > > > Strufe)
>> > > >
>> > > > Performance simulations show it is similar to ours or better
>> > > > depending on the properties of the network, via a parameter C, "the
>> > > > maximal distance to the closest neighbor over all nodes", i.e. if
>> > > > our closest links are close enough, our current algorithm should
>> > > > work, but if not, their version is significantly faster.
>> > > >
>> > > > The new algorithm changes backtracking and keeps a list of marked
>> > > > nodes - all nodes visited AFAICS, or at least nodes where it
>> > > > backtracked. Obviously this has security implications, but on
>> > > > darknet this is less important than on opennet, and we really should
>> > > > have a separate tunnel setup system for the very paranoid (e.g.
>> > > > based on Pisces).
>> > > >
>> > > > Having said that:
>> > > > - All the topologies simulated showed acceptable HTL for a 100,000
>> > > > node network, even using our current simplistic algorithm. Granted
>> > > > that's somewhat idealized...
>> > > > - We can use redundancy to some degree to avoid perverse parts of
>> > > > the keyspace breaking important functionality. I've suspected that
>> > > > "perverse parts of the keyspace" exist for some time, and that seems
>> > > > to be the paper's main concern; and it does not solve the problem
>> > > > either, it merely limits it.
>> > > > - Right now hardly anyone uses darknet, and Michael Grube still
>> > > > hasn't finished dealing with Pitch Black.
>> > > >
>> > > > His homepage also has some rather vague ideas under "open theses"
>> > > > on censorship attacks on opennet via flooding; I suspect this was
>> > > > written by his supervisor who may not have investigated the matter
>> > > > fully. IMHO flooding opennet is a rather expensive attack, it's
>> > > > probably much easier to MAST your target and take them offline
>> > > > IRL! :)
>> > > > http://www.p2p.tu-darmstadt.de/people/stefanie-roos/
>> > > >
>> > > And they provide a new swapping algorithm too, which is simpler to
>> > > implement, only uses local information, is faster and far more attack
>> > > resistant:
>> > >
>> > > http://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_P2P/share/publications/Attack_Resistant_Network_Embeddings_for_Darknets.pdf
>> > >
>> > > Now if only we really had a 10,000 node darknet to run these
>> > > algorithms on! Making it easy to build a darknet has to be a high
>> > > priority.
>> > >
>> >
>>
>
>