> I'm also not sure where you got the number (1/(2*3))^r for the probability. If
> it increments from 1 to 2 with a .5 probability, then the probability of Depth
> being k less than the actual depth of the message would be (.5)^(k+1), AFAIK.
> Where do you get the 3 from? (not that it matters, just add 1 to your
> suggested r value).
The calculation takes into account the probability of the return message
outliving its HTL (because of probabilistic decrementing of the HTL) in
addition to the probability of Depth being too low. The probability is
(1/2^r)/3 (contrary to what you quoted), which is the result of:
 inf
SUM[ 1/2^(2*n) * 1/2^r ] = (1/2^r)/3
n=1
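To sanity-check that closed form, here is a quick numerical sketch (plain
Python, nothing protocol-specific; it only verifies the geometric series
above):

```python
# Check numerically that the series above converges to (1/2^r)/3:
#   sum_{n=1}^inf (1/2)^(2n) * (1/2)^r
#     = (1/2^r) * (1/4) / (1 - 1/4)
#     = (1/2^r) / 3
def series(r, terms=60):
    return sum((0.5 ** (2 * n)) * (0.5 ** r) for n in range(1, terms + 1))

for r in range(1, 6):
    closed_form = (1 / 2 ** r) / 3
    assert abs(series(r) - closed_form) < 1e-12
```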
I hadn't realized that the protocol quoted on sourceforge was not
implemented. Perhaps I'll wait until I have an understanding of the real
protocol before I critique it more.
> I also support the suggestion that nodes which are currently serving below
> their desired capacity can reset it more often, while nodes that are beginning
> to feel bogged down should be able to reset it less often or not at all. That
> sort of load balancing seems to fit very well with the nature of the network.
Since node behaviour cannot be controlled with any certainty, this may be
what nodes end up doing anyway when people start attempting to optimize
their performance.
> Clients look like transient nodes to the nodes they connect to. Having a bunch
> of bad references in nodes stores is not worth the very weak pseudo anonymity
> of having them look almost just like a normal node - someone really wanting to
> check could simply make a request to the DataSource address anyways. People
> should run their own nodes, or use a trusted parties node for the first step.
What about having nodes occasionally look like clients? That is, nodes
that occasionally pretend to be transient even though they are not. Of
course, a busy node can't possibly hope to appear transient to its
neighbours if only one in a hundred of its messages claims it is
transient. Clients will always appear transient, so they will probably be
easy to pick out.
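To put a number on that: if only one message in a hundred claims to be
transient, the chance of a neighbour seeing nothing but transient-looking
traffic vanishes after a handful of messages (a back-of-the-envelope
sketch, not protocol code; the 1% figure is the example from above):

```python
# If a busy node marks only 1% of its messages as transient, the
# probability that a neighbour observes *only* transient-looking
# messages after n of them is 0.01^n -- effectively zero immediately.
P_CLAIM_TRANSIENT = 0.01

def appears_transient(n_messages):
    return P_CLAIM_TRANSIENT ** n_messages

print(appears_transient(3))  # ~1e-06: the pretense collapses at once
```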
Let me return to the idea of resetting DataSource with high probability.
When the client inserts a document to their trusted node, that node will
then propagate the document to an appropriate neighbour. That neighbour
will see DataSource==Source with probability 1 (for that message). I
don't want the neighbour assuming that the document originated from your
trusted node just because DataSource==Source. As long as DataSource is
reset to Source by nodes with reasonably high probability, neighbouring
nodes won't make any assumptions regarding the ultimate source of a
document.
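A toy simulation of that idea (the node names and the reset probability
here are illustrative assumptions, not part of any real protocol):

```python
import random

# Each forwarding node resets DataSource to itself with probability
# RESET_P. A downstream node that sees DataSource == Source therefore
# cannot conclude the document originated at its immediate neighbour.
RESET_P = 0.8

def observed_data_source(origin, forwarders, rng):
    """Return the DataSource value the final recipient observes."""
    data_source = origin
    for node in forwarders:
        if rng.random() < RESET_P:  # probabilistic reset
            data_source = node
    return data_source

rng = random.Random(0)
trials = 10_000
# The recipient's immediate Source is the last forwarder ("C").
hits = sum(observed_data_source("origin", ["A", "B", "C"], rng) == "C"
           for _ in range(trials))
print(hits / trials)  # close to RESET_P: seeing DataSource == Source
                      # tells the recipient almost nothing about origin
```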
> As for having the Meta-data within the values hashed for the CHK - for CHK
> indexed data it obviously will be. The two-way hash will allow the validity
> of the meta-data to be checked without having the entire rest of the data
> either.
Good.
Chris.
_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev