So I followed up a few of the refs from the Kleinberg paper...

Katz came up with the idea of propagating trust/status/etc. across a 
graph in 1953. In 1976, Pinski and Narin suggested normalising each 
node's output: the score sent along each outgoing edge is the node's 
score divided by its outdegree. Geller pointed out in 1978 that their 
model was equivalent to a Markov chain: the scores assigned to the 
nodes follow the chain's stationary distribution. In other words, 
propagating trust/status/etc. with normalisation at each node is 
equivalent to taking long random walks from random starting points and 
counting how often you end up at each node. (So either my assertion 
that a random walk would be likely to end at a Sybil node was wrong, 
or trust propagation isn't Sybil-resistant.)
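
Here's a toy sketch of the equivalence in Python (the three-node graph 
and the iteration/walk counts are made up for illustration; the graph 
has to be ergodic - strongly connected and aperiodic - for the walk 
counts to converge to the stationary distribution):

import random
from collections import Counter

# Toy directed graph (made up for illustration): node -> successors.
# Strongly connected and aperiodic, so the chain is ergodic.
graph = {
    'a': ['b', 'c'],
    'b': ['c'],
    'c': ['a', 'b'],
}
nodes = list(graph)

# Geller's view: iterate the outdegree-normalised propagation.
scores = {n: 1.0 / len(nodes) for n in nodes}
for _ in range(200):
    new = {n: 0.0 for n in nodes}
    for n, succs in graph.items():
        for s in succs:                       # split n's score equally
            new[s] += scores[n] / len(succs)  # among its outgoing edges
    scores = new

# Random-walk view: walk from random starts, count where you end up.
ends = Counter()
for _ in range(20_000):
    node = random.choice(nodes)
    for _ in range(30):                       # long enough to mix
        node = random.choice(graph[node])
    ends[node] += 1

total = sum(ends.values())
for n in nodes:
    print(n, round(scores[n], 3), round(ends[n] / total, 3))

The two columns come out roughly equal, which is Geller's point.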

The only difference between Geller's model and PageRank is the damping 
factor: in PageRank you continue your random walk with probability d or 
jump to a random node with probability 1-d. (Incidentally, when the 
algorithm's described this way rather than in terms of a transition 
matrix, it's easy to see how you could implement it on a web spider.)
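
The damped version is just the same walk with a coin flip added. A 
sketch (d=0.85 is the commonly quoted value; I'm also assuming we know 
the full node set up front for the random jump, which a real spider 
wouldn't):

import random
from collections import Counter

def pagerank_by_walking(graph, d=0.85, walks=20_000, steps=30):
    """Estimate PageRank by simulating the damped random walk.

    At each step: with probability d follow a random outgoing link;
    with probability 1-d (or at a dead end) jump to a random node.
    """
    nodes = list(graph)
    visits = Counter()
    for _ in range(walks):
        node = random.choice(nodes)
        for _ in range(steps):
            if graph[node] and random.random() < d:
                node = random.choice(graph[node])
            else:
                node = random.choice(nodes)
        visits[node] += 1
    total = sum(visits.values())
    return {n: visits[n] / total for n in nodes}

print(pagerank_by_walking({'a': ['b', 'c'], 'b': ['c'], 'c': ['a', 'b']}))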

Conclusion: we should be safe from the PageRank patent as long as we 
don't use a damping factor - the rest of the algorithm was published 
(and applied to social networks) 20 years before the patent.

Cheers,
Michael
