[freenet-dev] Wininstaller deployed

2009-05-13 Thread Zero3
Matthew Toseland wrote:
> I have deployed the new wininstaller, for Vista/win7 users and anyone who 
> clicks on "Windows instructions". Win2K/XP users with working JWS will still 
> see the old installer for now.

Cool! :)

- Zero3



[freenet-dev] What to do on SSK collisions: f86448d51c2e3248e1dfec513eefde50902aac30

2009-05-13 Thread Daniel Cheng
2009/5/13 Matthew Toseland :
> On Wednesday 13 May 2009 01:05:40 you wrote:
>> 2009/5/13 Matthew Toseland :
>> > On Tuesday 12 May 2009 09:33:11 you wrote:
>> >> On Tue, May 12, 2009 at 7:49 AM, Matthew Toseland
>> >>  wrote:
>> >> > On Tuesday 12 May 2009 00:45:54 Matthew Toseland wrote:
>> >> >> commit f86448d51c2e3248e1dfec513eefde50902aac30
>> >> >> Author: Daniel Cheng (???) 
>> >> >> Date:   Fri May 8 21:04:28 2009 +0800
>> >> >>
>> >> >> FreenetStore: Simplify code, remove "overwrite" parameter
>> >> >>
>> >> >> This parameter is always "false".
>> >> >> (Except when doing BDB->SaltedHash migration, which does not have to
>> >> >> overwrite)
>> >> >>
>> >> >>
>> >> >> The reason I introduced the overwrite parameter was that when we send
>> >> >> an SSK insert and get a DataFound with different data, we should not
>> >> >> only propagate the data downstream, but also replace our local copy of
>> >> >> it. However apparently I never implemented this. Is it a good idea?
>> >> >>
>> >> > Looks like we do store the collided data for a local insert
>> >> > (NodeClientCore.java:XXX we have collided), but not for a remote one.
>> >> > Except that a bug prevents the former from working. So we need to fix
>> >> > getBlock(). Ok...
>> >> >
>> >>
>> >> Let's see if these two commit make sense: (this is on my fork, not
>> >> committed to the main staging yet)
>> >>
>> >>
>> >
> http://github.com/j16sdiz/fred/commit/8e2ef42c286450813dbfa575bcd3f54dc8cb4c83
>> >>
>> >
> http://github.com/j16sdiz/fred/commit/7e6040ce3359486557bdd832c526e473a4f95577
>> >>
>> >> Regards,
>> >> Daniel
>> >
>> > Does this deal with the case where the request is remote, i.e. came from
>> > outside via a SSKInsertHandler?
>> >
>>
>> I think the SSKInsertSender code already handles the remote request case:
>>
> http://github.com/freenet/fred-staging/commit/6a341ed359a9ef6800a9830685c97072e9845912#diff-3
>
> No it doesn't, store(,,false)! If there is a collision we want to
> store(,,true).

You mean overwrite the local SSKBlock with the remote one, even if we are not
inserting?

Should we?
What if an attacker tries to overwrite an SSK?
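For concreteness, here is a toy model of the store(,,overwrite) semantics under discussion (a Python sketch, even though fred itself is Java; SSKStore, put and CollisionError are invented names, not the actual FreenetStore API):

```python
# Toy model of the store(block, overwrite) decision under discussion.
# SSKStore / put / CollisionError are invented names for illustration,
# not the real fred API.

class CollisionError(Exception):
    """Same SSK slot, different (validly signed) data."""

class SSKStore:
    def __init__(self):
        self.blocks = {}  # routing key -> data

    def put(self, key, data, overwrite=False):
        old = self.blocks.get(key)
        if old is not None and old != data:
            if not overwrite:
                # Normal insert path: report the collision upstream.
                raise CollisionError(key)
            # overwrite=True: replace our local copy with the colliding data.
        self.blocks[key] = data
```

On a collision the inserting node would re-store the remote data with overwrite=True; whether a remote peer should ever be able to trigger that path is exactly the attack question above.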

>>
>> If it does not, I have no idea where I should fix it.
>



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread xor
On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
> Thomas Sachau wrote:
> > Luke771 schrieb:
> >> I can't comment on the technical part because I wouldn't know what I'm
> >> talking about.
> >> However, I do like the 'social' part (being able to see an identity even
> >> if the censors mark it down right away as it's created)
> >
> > "The censors"? There is no central authority to censor people. "Censors"
> > can only censor the web-of-trust for those people that trust them and
> > which want to see a censored net. You can't and should not prevent them
> > from this, if they want it.
>
> This has been discussed a lot.
> The fact that censorship isn't done by a central authority but by mob
> rule is irrelevant.
> Censorship in this context is "blocking users based on the content of
> their messages".
>
> The whole point is basically this: "A tool created to block flood
> attacks is being used to discriminate against a group of users."
>
> Now, it is true that they can't really censor anything because users can
> decide what trust lists to use, but it is also true that this abuse of
> the WoT does create problems. They are social problems and not
> technical ones, but still 'freenet problems'.
>
> If we see the experience with FMS as a test for the Web of Trust, the
> result of that test is in my opinion something in between a miserable
> failure and a catastrophe.
>
> The WoT never got to prove itself against a real flood attack; we have
> no idea what would happen if someone decided to attack FMS, not even
> whether the WoT would stop the attempted attack at all, let alone
> how fast and/or how well it would do it.
>
> In other words, for what we know, the WoT may very well be completely
> ineffective against a DoS attack.
> All we know about it is that the WoT can be used to discriminate against
> people, we know that it WILL be used in that way, and we know that
> because of a proven fact: it's being used to discriminate against people
> right now, on FMS.
>
> That's all we know.
> We know that some people will abuse the WoT, but we don't really know if it
> would be effective at stopping DoS attacks.
> Yes, it "should" work, but we don't 'know'.
>
> The WoT has never been tested to actually do the job it's designed to do,
> yet the Freenet 'decision makers' are acting as if the WoT had proven
> its validity beyond any reasonable doubt, and at the same time they
> decide to ignore the one proven fact that we have.
>
> This whole situation is ridiculous; I don't know whether it's more funny or
> sad... it's grotesque. It reminds me of our beloved politicians, always
> knowing what's the right thing to do, except that it never works as
> expected.
>

No, it is not ridiculous; your point of view is just not abstract enough:

If there is a shared medium (= Freenet, Freetalk, etc.) which is writable by 
EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by writing an 
intelligent software") distinguish spam from useful uploads, because 
"EVERYONE" can be evil. 

EITHER you manually view every single piece of information which is uploaded 
and decide yourself whether you consider it as spam or not OR you adopt the 
ratings of other people so each person only has to rate a small subset of the 
uploaded data. There are no other options.

And what the web of trust does is exactly the second option: it "load 
balances" the content rating equally between all users.



-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/a5bcc0d1/attachment.pgp>


[freenet-dev] Current uservoice top 5

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 19:00:54 Matthew Toseland wrote:
> We could pause most of the node relatively easily; there will still be some
> background activity, and therefore some garbage collection, but it can be
> kept minimal...

That would be great. 

As long as it doesn't access its memory very often, my system will put most of 
it to swap, so this should also free most of the memory. 

> > But I don't want to have that all the time. When I compile something in
> > the background, I want freenet to take precedence (that's already well
> > covered with the low scheduling priority, though).
>
> How would Freenet tell the difference?

When I click "pause" I want it to reduce its activity (ideally there'd be a 
"take a break for X hours" option instead of "pause now and stay paused", 
because otherwise I'm prone to forget that I paused it). 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/d9f342c2/attachment.pgp>


[freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 18:12:52 Robert Hailey wrote:
> On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:
> > On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:

> >> be out in a few days), and hopefully Bloom filter sharing, a new
> >> feature
> >> enabling nodes to know what is in their peers' datastores, greatly
> >> improving performance, combined with some related security
> >> improvements.

> > Bloom filter sharing will enable nodes to know what is in their peers
> > datastores without impacting anonymity and should result in much
> > improved
> > performance and better security."

> Except that it's not true... bloom filter sharing comes at a large cost
> to security & anonymity (as said in the roadmap).

Ouch! If you're right, I completely misunderstood the announcement, and its 
last part should definitely be reworked. 

I didn't check the validity of the statements but simply tried to make them 
easier to understand. 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/cf0795b6/attachment.pgp>


[freenet-dev] Release schedule

2009-05-13 Thread Matthew Toseland
We are going to release 0.7.5 in the near future, and then 0.8 later. 0.7.5 
may or may not include Freetalk, and will not be delayed for Freetalk.

Schedule:
Wednesday 20th of May - Release 0.7.5 beta, at the latest.
Wednesday 10th of June - Release 0.7.5 final.

Major feature work should be postponed until after 0.7.5-final is out. Minor 
usability tweaks and debugging are of course vital; this is why we are not 
releasing immediately.

If Freetalk is ready for 0.7.5-final, then great. If it's not, we ship without 
it.

Freetalk will however be a requirement for 0.8. But 0.8 will also have some 
major feature work, including bloom filter sharing (and tunnels if that is 
the only way to secure bloom filter sharing, but IMHO it *probably* isn't). 
That's what we said on the announcement on the website, that's what we said 
to Google, that's how it should be IMHO.
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/ff3ee49f/attachment.pgp>


[freenet-dev] Wininstaller deployed

2009-05-13 Thread Juiceman
On Wed, May 13, 2009 at 5:22 PM, Zero3  wrote:
> Matthew Toseland wrote:
>> I have deployed the new wininstaller, for Vista/win7 users and anyone who
>> clicks on "Windows instructions". Win2K/XP users with working JWS will still
>> see the old installer for now.
>
> Cool! :)
>
> - Zero3
> ___
> Devl mailing list
> Devl at freenetproject.org
> http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
>

I just did a test install on a clean virtual machine.  It is missing
wget.exe in the \bin folder of the installer!  This will break the
update.cmd script; please pull the new installer until this is fixed!

-- 
I may disagree with what you have to say, but I shall defend, to the
death, your right to say it. - Voltaire
Those who would give up Liberty, to purchase temporary Safety, deserve
neither Liberty nor Safety. - Ben Franklin



[freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Matthew Toseland
On Tuesday 12 May 2009 20:36:30 Matthew Toseland wrote:
> I will post this on the website tomorrow if there are no objections. If
> anyone can suggest any improvements, please do so; sometimes what I write
> isn't readable by human beings!
> 
> 7th May, 2009 - Another big donation!
> 
> Google's Open Source team has donated US$18,000 to the Freenet Project to 
> support the ongoing development of the Freenet software (thanks again 
> Google!).
> 
> Their last donation funded the db4o project, which has now been merged into 
> Freenet, greatly improving performance for large download queues while 
> reducing memory usage, amongst other benefits.
> 
> We are currently working on Freenet 0.8, which will be released later this 
> year, and will include additional performance improvements, usability work, 
> and security improvements, as well as the usual debugging. Features are not 
> yet finalized but we expect it to include Freetalk (a new anonymous web 
> forums tool), a new Vista-compatible installer for Windows (that part will
> be out in a few days), and hopefully Bloom filter sharing, a new feature
> enabling nodes to know what is in their peers' datastores, greatly improving 
> performance, combined with some related security improvements.
> 
I have posted the announcement.
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/427931f1/attachment.pgp>


[freenet-dev] Wininstaller deployed

2009-05-13 Thread Matthew Toseland
I have deployed the new wininstaller, for Vista/win7 users and anyone who 
clicks on "Windows instructions". Win2K/XP users with working JWS will still 
see the old installer for now.
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/2fe71f10/attachment.pgp>


[freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 17:12:52 Robert Hailey wrote:
> 
> On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:
> 
> > On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:
> >> We are currently working on Freenet 0.8, which will be released  
> >> later this
> >> year, and will include additional performance improvements,  
> >> usability work,
> >> and security improvements, as well as the usual debugging. Features  
> >> are not
> >> yet finalized but we expect it to include Freetalk (a new anonymous  
> >> web
> >> forums tool), a new Vista-compatible installer for Windows (that  
> >> part will
> >> be out in a few days), and hopefully Bloom filter sharing, a new  
> >> feature
> >> enabling nodes to know what is in their peers' datastores, greatly
> >> improving performance, combined with some related security  
> >> improvements.
> >
> > "...Bloom filter sharing.
> >
> > Bloom filter sharing will enable nodes to know what is in their peers
> > datastores without impacting anonymity and should result in much  
> > improved
> > performance and better security."
> >
> > That would be my suggestion.
> >
> > Best wishes,
> > Arne
> 
> 
> Except that it's not true... bloom filter sharing comes at a large cost  
> to security & anonymity (as said in the roadmap).

If you have security issues with bloom filters *as currently envisaged*, that 
is, with the related caching changes, then please explain them on the relevant 
threads (not this one).
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/d8752f6e/attachment.pgp>


[freenet-dev] Bloom filters and store probing, after the fact splitfile originator tracing

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 17:07:39 Robert Hailey wrote:
> 
> On May 12, 2009, at 5:24 PM, Matthew Toseland wrote:
> 
> > On Tuesday 12 May 2009 17:57:04 Robert Hailey wrote:
> >> If the bloom filters are recovering already-failed requests, then
> >> surely latency is not the issue being addressed.
> >>
> >> I thought that the point of having bloom filters was to increase the
> >> effectiveness of fetches (a more reliable fallback).
> >
> > So your proposal is that we broadcast all our requests to all our  
> > peers, and
> > then what, wait for all their responses before routing outwards?
> 
> It might sound laughable like that, but the same problems have to be  
> overcome with sending a failed request to a peer with a bloom filter  
> hit. 

The difference is we send it to ONE node, not to all of them. And we only have 
to wait for one node.

> This is all operating on the presumption that the bloom filter  
> check is intended to recover a failing request (no htl / dnf / etc.),  
> and not to fundamentally change routing.

Bloom filters would be checked first. If we can get it in one hop, then great, 
it saves a lot of load, time, exposure, etc.
> 
> It is a given that the primary request mechanism has failed (be it  
> from network topology, overloaded peer, etc).

No, this is a short-cut, to save a few hops, as well as to consider more 
nodes' stores for unpopular keys.
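As a rough illustration of this "check the filters first" shortcut (a Python sketch; the filter parameters and the peer representation are assumptions for illustration, not fred's actual data structures):

```python
# Sketch of "check peers' shared Bloom filters before routing": on a hit we
# probe just that ONE peer (a single hop); on a miss we route normally.
# Filter size/hash count and the peer list shape are illustrative choices.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        for i in range(self.hashes):
            h = hashlib.sha1(b"%d:%s" % (i, key)).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

def route_request(key, peers):
    """peers: list of (name, shared_filter). A false positive only costs one
    wasted probe, never a wrong answer."""
    for name, shared_filter in peers:
        if key in shared_filter:
            return ("probe", name)   # ask just this one node first
    return ("route", None)           # fall back to normal routing
```

Note the asymmetry with broadcasting: the filter check is local and free, so the extra load of a hit is one probe to one node, which is the point Matthew makes above.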
> 
> I propose adding a new network message "QueueLocalKey" A response to  
> which is not expected, but that a node may then "OfferKey" to us at  
> some time down the road.

So it's an extension to ULPRs? We do not in fact wait for a response, but if 
the data is found, we are offered it, and we forward it to those who have 
asked for it? The difference to ULPRs being that we ask all our peers for the 
key and not just one of them...
> 
> As you said, upon a conventional request failing (htl, dnf, etc) a  
> node will broadcast this message to all peers (even backed off), or at  
> least queue the message to be sent (later/low priority).
> 
> Upon a node receiving this message, it can discard it (if too many  
> offers are pending to the requestor). Otherwise it will check its  
> bloom filter for the key; discarding the message if not found, and  
> adding the key to an "offer-queue" if it is found (to be transferred  
> whenever). If the bloom filter check is too expensive to be done at  
> request time, we can have a single thread process all these requests  
> serially at a low priority. If an open net peer disconnects we can  
> dump all their requests.
> 
> The actual request has already failed, we do not hold up processing;  
> and it is a step towards passive requests.

It is very similar to ULPRs but we broadcast requested keys to all peers 
rather than relying on the peer we had routed to.
> 
> I guess that the net effect of this message on data which does not  
> exist amounts to a bunch of bloom filter checks, so it is equivalent.  
> And to data which does exist (but somehow failed), it will simply move  
> the data (at some rate) into caches along a valid request path.

First, it doesn't shortcut. Second, the bandwidth cost may be higher. Third, 
the negative security impact is at least comparable.
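For comparison, the proposed QueueLocalKey / OfferKey flow could look roughly like this (Python sketch; the message names come from Robert's proposal, everything else, including the cap, is invented for illustration):

```python
# Sketch of the proposed QueueLocalKey / OfferKey exchange: after a failed
# request the requester broadcasts the key to all peers; each peer checks
# its own store (standing in for its Bloom filter) and queues an offer only
# on a hit. The data structures and the pending-offer cap are assumptions.

MAX_PENDING_OFFERS = 100  # illustrative per-requester cap

class Peer:
    def __init__(self, store_keys):
        self.store = set(store_keys)   # stands in for the store's Bloom filter
        self.offer_queue = {}          # requester -> keys to OfferKey later

    def handle_queue_local_key(self, requester, key):
        pending = self.offer_queue.setdefault(requester, [])
        if len(pending) >= MAX_PENDING_OFFERS:
            return            # too many offers pending: discard
        if key not in self.store:
            return            # filter miss: discard silently
        pending.append(key)   # transfer whenever convenient, low priority

def broadcast_failed_request(key, requester, peers):
    # After htl/dnf failure, queue the key to ALL peers, even backed off.
    for p in peers:
        p.handle_queue_local_key(requester, key)
```

This makes the trade-off above visible: no response is awaited, but every failed request costs one message per peer, versus one probe to a single filter-hit peer in the shortcut scheme.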
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/4af9eb13/attachment.pgp>


[freenet-dev] Current uservoice top 5

2009-05-13 Thread Matthew Toseland
On Sunday 10 May 2009 20:50:00 Arne Babenhauserheide wrote:
> On Wednesday 6 May 2009 00:23:54 Matthew Toseland wrote:
> > Isn't using a reasonably low scheduling priority enough? And we already do
> > that!
> 
> Not really, since I can't disable it (when I want full speed), and it sadly 
> doesn't work really well for memory consumption. 
> 
> I'd like an option to have freenet go inactive as soon as the system load
> gets too high. It will lose connections anyway (low scheduling priority
> leads to far too high response times), so it could just explicitly take a
> break until my system runs well again.

We could pause most of the node relatively easily; there will still be some 
background activity, and therefore some garbage collection, but it can be 
kept minimal...
> 
> But I don't want to have that all the time. When I compile something in the 
> background, I want freenet to take precedence (that's already well covered 
> with the low scheduling priority, though). 

How would Freenet tell the difference?
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/57ba399b/attachment.pgp>


[freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Matthew Toseland
irrelevant.

It is solvable with positive trust, because the spammer will gain trust from 
posting messages, and lose it by spamming. The second party will likely be 
the stronger in most cases, hence we get a zero or worse outcome.
> 
> >> The solution, imho, is mundane: if the occasional trusted identity
> >> starts a spam campaign, I mark them as a spammer. This is optionally
> >> published, but can be ignored by others to maintain the positive trust
> >> aspects of the behavior. Locally, it functions as a slightly stronger
> >> killfile: their messages get ignored, and their identity's trust
> >> capacity is forced to zero.
> >
> > Does not protect against a spammer's parent identity introducing more
> > spammers. IMHO it is important that if an identity trusts a lot of
> > spammers it gets downgraded - and that this be *easy* for the user.
> 
> The Advogato algorithm protects against this, though passively.  The
> spammer's node has a capacity limit based on how far from the tree
> root it is.  How much trust it has is limited by how much it receives
> from upstream nodes, capped by its capacity.  Regardless of the number
> of identities it trusts, the number that get accepted as trusted is
> limited by the amount of trust the main node can get.  The only issue
> I see is that you would want to limit the churn rate -- it does no
> good to have limited the spammer to 5 child identities if he can send
> a few messages, unmark those identities, and mark some new ones.

This is logical.
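A much-simplified sketch of the capacity capping Evan describes (the real Advogato metric is a network-flow computation over the trust graph; the per-depth capacity values and the breadth-first traversal here are illustrative only):

```python
# Simplified illustration of Advogato-style capacities: identities further
# from the trust root get smaller caps, so a spammer deep in the tree cannot
# certify unlimited children. NOT the real max-flow metric, just the capping.
from collections import deque

CAPACITIES = [800, 200, 50, 12, 4, 2, 1]  # illustrative per-depth caps

def accepted_identities(root, trusts):
    """trusts: dict mapping identity -> list of identities it certifies.
    Breadth-first from the root; each node's fan-out is capped by the
    capacity of the next depth level."""
    accepted = {root}
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        cap = CAPACITIES[min(depth + 1, len(CAPACITIES) - 1)]
        for child in trusts.get(node, [])[:cap]:  # capacity limits fan-out
            if child not in accepted:
                accepted.add(child)
                queue.append((child, depth + 1))
    return accepted
```

Even in this crude form the property Evan relies on is visible: an identity at depth 1 certifying a hundred sock puppets gets only its capped share accepted, regardless of how many it marks.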
> 
> Having a way for me to easily realize that an identity I have trusted
> is trusting spammers is important.  However, I think part of avoiding
> censorship is making this be a manually verified process, and *not* an
> automated, recursive part of the normal computation algorithm.

Yes, we should ask the user, but the converse is if a user doesn't visit his 
node for a while, everyone will have blacklisted him because he didn't 
blacklist the spammers he trusted.
> 
> >> In the context of the routing and data store algorithms, Freenet has a
> >> strong prejudice against alchemy and in favor of algorithms with
> >> properties that are both useful and provable from reasonable
> >> assumptions, even though they are not provably perfect. Like routing,
> >> the generalized trust problem is non-trivial. Advogato has such
> >> properties; the current WoT and FMS algorithms do not: they are
> >> alchemical. In addition, the Advogato metric has a strong anecdotal
> >> success story in the form of the Advogato site (I've not been active
> >> on FMS/Freetalk recently enough to speak to them). Why is alchemy
> >> acceptable here, but not in routing?
> >
> > Because the provable metrics don't work for our scenario. At least they
> > don't work given the current assumptions and formulations.
> 
> Could you be more specific?  This thread is covering several closely
> related but distinct subjects, so I'm not really sure exactly which
> assumptions you're referring to.  Also, do you mean that they don't
> work in the sense that the proof is no longer applicable or
> mathematically valid, or in the sense that the results of the proof
> aren't useful?

The latter. Pure positive trust only works if every user can be trusted to 
continually evaluate his peers' messages in all contexts, and their 
relationships to other users, and can therefore be blocked if they propagate 
messages from spammers.
> 
> Evan Daniel
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 835 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/25707415/attachment.pgp>


[freenet-dev] Recent progress on Interdex

2009-05-13 Thread Ximin Luo
Matthew Toseland wrote:
> On Wednesday 13 May 2009 04:53:11 Evan Daniel wrote:
>> On Tue, May 12, 2009 at 4:26 PM, Ximin Luo  wrote:
>>
>>> (one way of storing it which would allow token-deflate would be having each
>>> indexnode as a CHK, then you'd only have to INS an updated node and all its
>>> parents up to the root, but i chose not to do this as CHKs have a higher 
>>> limit
>>> for being turned into a splitfile. was this the right decision?)
>> My impression is that most of the time to download a key is the
>> routing time to find it, not the time to transfer the data once found.
>>  So a 32KiB CHK is only somewhat slower to download than a 1KiB SSK.
>> (Though I haven't seen hard numbers on this in ages, so I could be
>> completely wrong.)
>>
>> My instinct is that the high latency for a single-key lookup that is
>> the norm for Freenet means that if using CHKs instead results in an
>> appreciably shallower tree, that will yield a performance improvement.
> 
> Agreed.

OK, for larger indexes this may be better, then, where there are enough keys to
fill up a 32KB lookup table. I could implement a CHKSerialiser which uses this
format, which automatically gets used once your index grows beyond a certain size.
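The depth argument above can be made concrete with a rough calculation (the bytes-per-entry figure is an assumption; real index entries vary):

```python
# Rough illustration of the CHK-vs-SSK depth argument: if lookup latency is
# dominated by routing rather than transfer, a wider 32 KiB CHK node holds
# more child pointers, giving a shallower tree and fewer serial fetches.
# The 64-byte entry size is an assumption for illustration.
import math

def tree_depth(num_keys, block_bytes, entry_bytes=64):
    """Depth of a B-tree-like index whose branching factor is the number
    of entries that fit in one block."""
    branching = max(2, block_bytes // entry_bytes)
    return max(1, math.ceil(math.log(num_keys, branching)))
```

Under these assumptions a million-key index needs five serial lookups with 1 KiB SSK nodes but only three with 32 KiB CHK nodes, which is the "appreciably shallower tree" Evan mentions.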

>>  The other effect to consider is how likely the additional data
>> fetched is to be useful to some later request.  Answering that is
>> probably trickier, since it requires reasonable assumptions about
>> index size and usage.
>>
>> It would be nice if there was a way to get some splitfile-type
>> redundancy in these indexes; otherwise uncommonly searched terms won't
>> be retrievable.  However, there's obviously a tradeoff with common
>> search term latency.
> 
> Yeah. Generally fetching a single block with no redundancy is something to be 
> avoided IMHO. You might want to use the directory insertion code (maybe 
> saces' DefaultManifestPutter), but you may want to tweak the settings a bit.

Yeah, I was going to look into that at some point. Thanks.

X






[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 4:28 PM, xor  wrote:
> On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
>> Thomas Sachau wrote:
>> > Luke771 schrieb:
>> >> I can't comment on the technical part because I wouldn't know what I'm
>> >> talking about.
>> >> However, I do like the 'social' part (being able to see an identity even
>> >> if the censors mark it down right away as it's created)
>> >
>> > "The censors"? There is no central authority to censor people. "Censors"
>> > can only censor the web-of-trust for those people that trust them and
>> > which want to see a censored net. You can't and should not prevent them
>> > from this, if they want it.
>>
>> This has been discussed a lot.
>> The fact that censorship isn't done by a central authority but by mob
>> rule is irrelevant.
>> Censorship in this context is "blocking users based on the content of
>> their messages".
>>
>> The whole point is basically this: "A tool created to block flood
>> attacks is being used to discriminate against a group of users."
>>
>> Now, it is true that they can't really censor anything because users can
>> decide what trust lists to use, but it is also true that this abuse of
>> the WoT does create problems. They are social problems and not
>> technical ones, but still 'freenet problems'.
>>
>> If we see the experience with FMS as a test for the Web of Trust, the
>> result of that test is in my opinion something in between a miserable
>> failure and a catastrophe.
>>
>> The WoT never got to prove itself against a real flood attack, we have
>> no idea what would happen if someone decided to attack FMS, not even if
>> the WoT would stop the attempted attack at all, let alone finding out
>> how fast and/or how well it would do it.
>>
>> In other words, for what we know, the WoT may very well be completely
>> ineffective against a DoS attack.
>> All we know about it is that the WoT can be used to discriminate against
>> people, we know that it WILL be used in that way, and we know that
>> because of a proven fact: it's being used to discriminate against people
>> right now, on FMS
>>
>> That's all we know.
>> We know that some people will abuse the WoT, but we don't really know if it
>> would be effective at stopping DoS attacks.
>> Yes, it "should" work, but we don't 'know'.
>>
>> The WoT has never been tested to actually do the job it's designed to do,
>> yet the Freenet 'decision makers' are acting as if the WoT had proven
>> its validity beyond any reasonable doubt, and at the same time they
>> decide to ignore the one proven fact that we have.
>>
>> This whole situation is ridiculous; I don't know whether it's more funny or
>> sad... it's grotesque. It reminds me of our beloved politicians, always
>> knowing what's the right thing to do, except that it never works as
>> expected.
>>
>
> No, it is not ridiculous; your point of view is just not abstract enough:
>
> If there is a shared medium (= Freenet, Freetalk, etc.) which is writable by
> EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by writing an
> intelligent software") distinguish spam from useful uploads, because
> "EVERYONE" can be evil.
>
> EITHER you manually view every single piece of information which is uploaded
> and decide yourself whether you consider it as spam or not OR you adopt the
> ratings of other people so each person only has to rate a small subset of the
> uploaded data. There are no other options.
>
> And what the web of trust does is exactly the second option: it "load
> balances" the content rating equally between all users.

While your statement is trivially true (assuming we ignore some fairly
potent techniques, like bayesian classifiers, that rely neither on
additional work by the user nor on the opinions of others...), it
misses the real point: the fact that WoT spreads the work around
does not mean it does so efficiently or effectively, or that the
choices it makes wrt various design tradeoffs are actually the choices
that we, as its users, would make if we considered those choices
carefully.
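For reference, the bayesian-classifier aside can be made concrete: such a filter needs only the local user's own spam/ham judgements, no trust lists at all. A toy sketch (not a proposal for Freetalk's actual design):

```python
# Minimal naive Bayes spam scorer: trained purely from the local user's own
# delete/keep decisions, with no reliance on other identities' trust lists.
# Toy code for illustration; add-one smoothing keeps unseen words harmless.
import math
from collections import Counter

class NaiveBayes:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, words):
        self.counts[label].update(words)
        self.totals[label] += 1

    def spam_score(self, words):
        # Log-odds of spam vs ham; > 0 means "more likely spam".
        score = math.log((self.totals["spam"] + 1) / (self.totals["ham"] + 1))
        for w in words:
            ps = self.counts["spam"][w] + 1
            ph = self.counts["ham"][w] + 1
            score += math.log(ps / ph)
        return score
```

The trade-off versus a web of trust is the one named above: no censorship-by-mob is possible, but each user does their own rating work and gets no protection before their first few judgements.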

A web of trust is a complex system, the entire purpose of which is to
create useful emergent behaviors.  Too much focus on the micro-level
behavior of the parts of such a system, instead of the emergent
properties of the system as a whole, means that you won't get the
emergent properties you wanted.

Evan Daniel



[freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 15:03:13 Matthew Toseland wrote:
> Perhaps some form of feedback/ultimatum system? Users who are affected by
> spam from an identity can send proof that the identity is a spammer to the
> users they trust who trust that identity. If the proof is valid, those who
> trust the identity can downgrade him within a reasonable period; if they
> don't do this they get downgraded themselves?

I remember another alternative which was proposed (and implemented) for 
Gnutella (but LimeWire chose not to merge the code for unknown reasons): 

Voting not on users but on messages (objects): 

- Main site: http://credence-p2p.org
- Papers: http://credence-p2p.org/paper.html
- Overview: http://credence-p2p.org/overview.html

I tested it back then and it worked quite well. 

You could have two different settings: "ignore messages marked as spam" and 
"only see messages marked as good". 

They had the same problem of people voting not on "spam/not spam" but on "I 
like it / I hate it", and their solution was a differentiated voting 
mechanism. 
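My understanding of the mechanism (from the overview page linked above; this is a sketch, not their actual code) is that each peer's votes are weighted by how well that peer has historically agreed with your own votes, so vote-stuffing identities with no track record carry no weight:

```java
import java.util.List;
import java.util.Map;

/**
 * Sketch of Credence-style vote weighting (illustrative reading of the
 * linked overview, not their actual code): a peer's votes on objects
 * count in proportion to how well the peer has agreed with yours.
 */
public class VoteCorrelation {
    /** Agreement over objects both have voted on, mapped to [-1, 1].
     *  Vote values: +1 = good object, -1 = spam/decoy. */
    public static double weight(Map<String, Integer> mine, Map<String, Integer> theirs) {
        int shared = 0, agree = 0;
        for (Map.Entry<String, Integer> e : mine.entrySet()) {
            Integer t = theirs.get(e.getKey());
            if (t == null) continue;
            shared++;
            if (t.equals(e.getValue())) agree++;
        }
        if (shared == 0) return 0.0;          // unknown peer: no influence either way
        return 2.0 * agree / shared - 1.0;    // full agreement -> 1, full disagreement -> -1
    }

    /** Weighted vote total for one object across all known peers. */
    public static double score(String object, Map<String, Integer> mine,
                               List<Map<String, Integer>> peers) {
        double s = 0.0;
        for (Map<String, Integer> p : peers) {
            Integer v = p.get(object);
            if (v != null) s += weight(mine, p) * v;
        }
        return s;
    }

    public static void main(String[] args) {
        Map<String, Integer> mine = Map.of("a", 1, "b", -1);
        Map<String, Integer> honest = Map.of("a", 1, "b", -1, "c", -1);
        Map<String, Integer> liar = Map.of("a", -1, "b", 1, "c", 1);
        // The honest peer's -1 on "c" outweighs the liar's +1.
        System.out.println(score("c", mine, List.of(honest, liar)));
    }
}
```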

It's implemented in Java, but the GUI ties into LimeWire. The core is mostly 
independent, though (iirc). 

It only depends on a limit on account creation: creating massive numbers of 
accounts (which are allowed to vote) can break the system. 

This limit could be realized by only allowing people with a minimum message 
count to vote. 

I don't know if it can be ported perfectly to Freenet, but it should be worth 
a look - it's also GPL-licensed. 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de
-- next part --
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part.
URL: 
<https://emu.freenetproject.org/pipermail/devl/attachments/20090513/913165e0/attachment.pgp>


[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Daniel Cheng
On Wed, May 13, 2009 at 4:01 PM, Luke771  wrote:
> Thomas Sachau wrote:
>> Luke771 schrieb:
>>
>>> I can't comment on the technical part because I wouldn't know what I'm
>>> talking about.
>>> However, I do like the 'social' part (being able to see an identity even
>>> if the censors mark it down right away as it's created)
>>>
>>
>> "The censors"? There is no central authority to censor people. "Censors" can 
>> only censor the
>> web-of-trust for those people that trust them and which want to see a 
>> censored net. You cant and
>> should not prevent them from this, if they want it.
>>
>>
> This has been discussed a lot.
> The fact that censorship isn't done by a central authority but by mob
> rule is irrelevant.
> Censorship in this context is "blocking users based on the content of
> their messages".
>
> The whole point is basically this: "A tool created to block flood
> attacks is being used to discriminate against a group of users
> [pedophiles / gays / terrorists / dissidents / ...]."

You don't have to repeat this again and again.
We *are* aware of this problem.
We need a solution, not a restatement of the problem.

Don't tell me Frost is the solution -- it is being DoS'ed again.

In FMS, you can always adjust the MinLocalMessageTrust to read whatever
messages you please. Yes, you may call that censorship -- but it is
censorship that every reader can opt out of with two clicks. Even if the
majority abuse the system, the poster can always post, and the reader
can see who is being censored and adjust accordingly.

In Frost, when somebody DoSes the system, the poster cannot post
anything, and there is nothing a reader can do.

Now, tell me, which one is better?
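The reader-side opt-out described above can be sketched as follows (names are illustrative, not FMS's actual API): each reader keeps local trust values and a MinLocalMessageTrust threshold, all filtering happens locally, and lowering the threshold instantly reveals everything again:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of reader-side trust filtering, as described above.
 * Illustrative only; not FMS's actual API or defaults.
 */
public class LocalTrustFilter {
    private final Map<String, Integer> localTrust = new HashMap<>();
    private int minLocalMessageTrust;

    public LocalTrustFilter(int threshold) {
        this.minLocalMessageTrust = threshold;
    }

    public void setLocalTrust(String identity, int trust) {
        localTrust.put(identity, trust);
    }

    /** The "two clicks": lowering the threshold reveals more messages. */
    public void setThreshold(int threshold) {
        this.minLocalMessageTrust = threshold;
    }

    public boolean shouldShow(String authorIdentity) {
        // Unknown identities get a neutral default rather than being hidden.
        int trust = localTrust.getOrDefault(authorIdentity, 50);
        return trust >= minLocalMessageTrust;
    }

    public static void main(String[] args) {
        LocalTrustFilter f = new LocalTrustFilter(50);
        f.setLocalTrust("spammer", 0);
        System.out.println(f.shouldShow("spammer"));  // false
        f.setThreshold(0);                            // reader opts out
        System.out.println(f.shouldShow("spammer"));  // true
    }
}
```

The key property is that the poster can always post; only each reader's local view changes.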

[...]

>> Why use this sort of announcement, if it takes several days? Announcement 
>> over captchas takes only
>> around 24 hours, which is faster and needs less resources. So i dont see any 
>> real reason for
>> hashcash-introductions.
>>
[...]
> On the other hand, a malicious user who is able to create new identities
> quickly enough (slave labor would do the trick) would still be capable
> of sending 75 messages per announced ID... so the 'grace period' should be
> as small as possible to minimize this problem. Maybe 25 or 30 messages?

As long as creating a new identity is free, one message is enough to
flood the whole system.

Not only that: even ZERO messages are enough to flood the whole system
--- if you can introduce thousands of identities in a few days,
everybody will be busy polling the "fake" identities.

--





[freenet-dev] Recent progress on Interdex

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 04:53:11 Evan Daniel wrote:
> On Tue, May 12, 2009 at 4:26 PM, Ximin Luo  wrote:
> 
> > (one way of storing it which would allow token-deflate would be having 
each
> > indexnode as a CHK, then you'd only have to INS an updated node and all 
its
> > parents up to the root, but i chose not to do this as CHKs have a higher 
limit
> > for being turned into a splitfile. was this the right decision?)
> 
> My impression is that most of the time to download a key is the
> routing time to find it, not the time to transfer the data once found.
>  So a 32KiB CHK is only somewhat slower to download than a 1KiB SSK.
> (Though I haven't seen hard numbers on this in ages, so I could be
> completely wrong.)
> 
> My instinct is that the high latency for a single-key lookup that is
> the norm for Freenet means that if using CHKs instead results in an
> appreciably shallower tree, that will yield a performance improvement.

Agreed.

>  The other effect to consider is how likely the additional data
> fetched is to be useful to some later request.  Answering that is
> probably trickier, since it requires reasonable assumptions about
> index size and usage.
> 
> It would be nice if there was a way to get some splitfile-type
> redundancy in these indexes; otherwise uncommonly searched terms won't
> be retrievable.  However, there's obviously a tradeoff with common
> search term latency.

Yeah. Generally fetching a single block with no redundancy is something to be 
avoided IMHO. You might want to use the directory insertion code (maybe 
saces' DefaultManifestPutter), but you may want to tweak the settings a bit.
> 
> Evan Daniel
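To make the depth argument concrete: assuming a fixed entry size, a 32 KiB CHK index node can hold roughly 32 times as many entries as a 1 KiB SSK node, so the tree needs far fewer levels -- and each level costs one high-latency Freenet fetch. A back-of-the-envelope sketch (all figures are assumed for illustration, not measured):

```java
/**
 * Back-of-the-envelope comparison of index tree depth, under the
 * assumed figures in the lead-in (entries-per-node values illustrative).
 */
public class TreeDepth {
    /** Levels needed for a B-tree-like index of n terms with given fanout. */
    public static int depth(long n, int fanout) {
        int d = 1;
        long capacity = fanout;
        while (capacity < n) {
            capacity *= fanout;
            d++;
        }
        return d;
    }

    public static void main(String[] args) {
        long terms = 1_000_000;   // index size (assumed)
        int sskFanout = 32;       // ~1 KiB node, ~32 bytes/entry (assumed)
        int chkFanout = 1024;     // ~32 KiB node, same entry size (assumed)
        // Each level is one high-latency fetch, so depth ~ lookup time.
        System.out.println("SSK tree depth: " + depth(terms, sskFanout));  // 4
        System.out.println("CHK tree depth: " + depth(terms, chkFanout));  // 2
    }
}
```

Under these assumptions a term lookup costs two fetches with CHK nodes versus four with SSK nodes, which is the "appreciably shallower tree" argument in miniature.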


[freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 12:58 PM, Matthew Toseland
 wrote:
> On Wednesday 13 May 2009 15:47:24 Evan Daniel wrote:
>> On Wed, May 13, 2009 at 9:03 AM, Matthew Toseland
>>  wrote:
>> > On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
>> >> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
>> >>  wrote:
>> >> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
>> >> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau 
>> > wrote:
>> >> >> > Evan Daniel schrieb:
>> >> >> >> I don't have any specific ideas for how to choose whether to ignore
>> >> >> >> identities, but I think you're making the problem much harder than
> it
>> >> >> >> needs to be. The problem is that you need to prevent spam, but at
> the
>> >> >> >> same time prevent malicious non-spammers from censoring identities
> who
>> >> >> >> aren't spammers. Fortunately, there is a well documented algorithm
>> >> >> >> for doing this: the Advogato trust metric.
>> >> >> >>
>> >> >> >> The WoT documentation claims it is based upon the Advogato trust
>> >> >> >> metric. (Brief discussion:
> http://www.advogato.org/trust-metric.html
>> >> >> >> Full paper: http://www.levien.com/thesis/compact.pdf ) I think
> this
>> >> >> >> is wonderful, as I think there is much to recommend the Advogato
>> >> >> >> metric (and I pushed for it early on in the WoT discussions).
>> >> >> >> However, my understanding of the paper and what is actually
>> >> >> >> implemented is that the WoT code does not actually implement it.
>> >> >> >> Before I go into detail, I should point out that I haven't read the
>> >> >> >> WoT code and am not fully up to date on the documentation and
>> >> >> >> discussions; if I'm way off base here, I apologize.
>> >> >> >
>> >> >> > I think, you are:
>> >> >> >
>> >> >> > The advogato idea may be nice (i did not read it myself), if you
> have
>> >> > exactly 1 trustlist for
>> >> >> > everything. But xor wants to implement 1 trustlist for every app as
>> > people
>> >> > may act differently e.g.
>> >> >> > on firesharing than on forums or while publishing freesites. You
>> > basicly
>> >> > dont want to censor someone
>> >> >> > just because he tries to disturb filesharing while he may be tries
> to
>> >> > bring in good arguments at
>> >> >> > forum discussions about it.
>> >> >> > And i dont think that advogato will help here, right?
>> >> >>
>> >> >> There are two questions here. The first question is given a set of
>> >> >> identities and their trust lists, how do you compute the trust for an
>> >> >> identity the user has not rated? The second question is, how do you
>> >> >> determine what trust lists to use in which contexts? ?The two
>> >> >> questions are basically orthogonal.
>> >> >>
>> >> >> I'm not certain about the contexts issue; Toad raised some good
>> >> >> points, and while I don't fully agree with him, it's more complicated
>> >> >> than I first thought. I may have more to say on that subject later.
>> >> >>
>> >> >> Within a context, however, the computation algorithm matters. The
>> >> >> Advogato idea is very nice, and imho much better than the current WoT
>> >> >> or FMS answers. You should really read their simple explanation page.
>> >> >> It's really not that complicated; the only reasons I'm not fully
>> >> >> explaining it here is that it's hard to do without diagrams, and they
>> >> >> already do a good job of it.
>> >> >
>> >> > It's nice, but it doesn't work. Because the only realistic way for
>> > positive
>> >> > trust to be assigned is on the basis of posted messages, in a purely
>> > casual
>> >> > way, and without the sort of permanent, universal commitment that any
>> >> > pure-positive-trust scheme requires: If he spams on any board, if I
> ever
>> > gave
>> >> > him trust and haven't changed that, then *I AM GUILTY* and *I LOSE
> TRUST*
>> > as
>> >> > the only way to block the spam.
>> >>
>> >> How is that different than the current situation? Either the fact
>> >> that he spams and you trust him means you lose trust because you're
>> >> allowing the spam through, or somehow the spam gets stopped despite
>> >> your trust -- which implies either that a lot of people have to update
>> >> their trust lists before anything happens, and therefore the spam
>> >> takes forever to stop, or it doesn't take that many people to censor
>> >> an objectionable but non-spamming poster.
>> >>
>> >> I agree, this is a bad thing. I'm just not seeing that the WoT system
>> >> is *that* much better. ?It may be somewhat better, but the improvement
>> >> comes at a cost of trading spam resistance vs censorship ability,
>> >> which I think is fundamentally unavoidable.
>> >
>> > So how do you solve the contexts problem? The only plausible way to add
> trust
>> > is to do it on the basis of valid messages posted to the forum that the
> user
>> > reads. If he posts nonsense to other forums, or even introduces identities
>> > that spam other forums, the user adding trust probably does not know about
>> > this, so it is

[freenet-dev] [freenet-cvs] r25585 - in trunk/apps/simsalabim: . darknet rembre utils

2009-05-13 Thread Matthew Toseland
ighbors) {
> > > + if (k.dist(n.pos) < k.dist(pos))
> > > + return false;
> > > + }
> > > + return true;
> > > + }
> > 
> > We have uptime requirements in this in the real node, it'd be nice to have 
> > some theoretical support for that decision... I guess it would require 
some 
> > sort of distribution of uptimes though...
> 
> Yes, but it could also be based on some heuristic. Storing join and leave 
time
> in the simulation is no problem. Do you have any suggestion for why it is 
needed?
> 
Well, we typically store in 3 nodes' stores. So if these are all low-uptime, 
or worse, newbie nodes that appear and then vanish forever, it will be 
impossible to find the data, or it will take a very long time.

> > > +
> > > + public boolean hasData(CircleKey k) {
> > > + return data.contains(k) || cache.contains(k);
> > > + }
> > > +
> > > + public Data findData(CircleKey k) {
> > > + Data cached = cache.get(k);
> > > +
> > > + return (cached != null) ? cached : data.get(k);
> > > + }
> > > +
> > > + /**
> > > +  * Always store in cache
> > > +  * @sink: Whether to store in the long-term storage
> > > +  */
> > > +   
> > > + public void storeData(Data d, boolean sink) {
> > > + if (sink) {
> > > + if (!data.contains(d.key()))
> > > + d.addedTo(this);
> > > + Data r = data.put(d);
> > > + if (r != null)
> > > + r.removedFrom(this);
> > > + }
> > 
> > We don't store in the cache if we have stored in the store.
> 
> Are you sure?!

Hmmm, in fact we do store in both. Do you have any opinion on this?
> > > +
> > > + /**
> > > +  * Calculates the log distance to the neighbors of this node from 
newpos. 
> > If
> > > +  * a neighbor has position newpos, then it is given my current 
position.
> > > +  */
> > > + private double logdist(CircleKey newpos) {
> > > + double val = 0.0f;
> > > + for (Iterator it = neighbors.iterator() ; 
it.hasNext() ;) {
> > > + DarknetNode dn = it.next();
> > > + val += Math.log(dn.pos == newpos ? pos.dist(newpos) : 
> > > + dn.pos.dist(newpos));
> > 
> > Doh! We just ignore it if we are neighbours in LocationManager.java! 
Granted 
> > this is a pretty small effect, but it needs to be fixed...
> 
> ??? 

In LocationManager, when we are deciding whether to do a swap, if the location 
of a neighbour of the swap target is equal to our location (or theirs, 
depending on the bit of the calculation), and would thus introduce a zero, we 
don't include it in the calculation:

double A = 1.0;
for(int i=0;i 
> > Hmm. Because of network bugs / race conditions, it is occasionally 
possible 
> > for two nodes to have the same location... Should we ignore when a peer of 
> > his has our location, or should we assume they are us and calculate it for 
> > their location? Is this different for before vs after calculations?
> 
> If they have the same location it does not matter... right? The switch can
> not lead to any difference.

So the code above is correct?
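As I understand it, the rule under discussion is a Metropolis-style acceptance step over the product of distances to neighbours; here is a simplified sketch (not the real LocationManager code), including the skip-exact-collisions behaviour described above. All names are illustrative:

```java
import java.util.Random;

/**
 * Outline of the swap decision being discussed (simplified; the real
 * LocationManager handles many more cases). Locations live on a circle
 * of circumference 1, and swapping aims to shrink the product of
 * distances to neighbours.
 */
public class SwapDecision {
    /** Circular distance between two locations in [0, 1). */
    static double dist(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    /** Product of distances from loc to each neighbour location,
     *  skipping exact collisions so a zero cannot wipe out the product. */
    static double product(double loc, double[] neighbours) {
        double p = 1.0;
        for (double n : neighbours)
            if (dist(loc, n) > 0)
                p *= dist(loc, n);
        return p;
    }

    /** Metropolis rule: always accept an improvement, otherwise accept
     *  with probability before/after. */
    static boolean shouldSwap(double myLoc, double peerLoc,
                              double[] myNeighbours, double[] peerNeighbours,
                              Random rand) {
        double before = product(myLoc, myNeighbours) * product(peerLoc, peerNeighbours);
        double after = product(peerLoc, myNeighbours) * product(myLoc, peerNeighbours);
        if (after <= before) return true;
        return rand.nextDouble() < before / after;
    }

    public static void main(String[] args) {
        // Swapping 0.0 and 0.5 moves each node closer to its neighbours.
        System.out.println(shouldSwap(0.0, 0.5,
                new double[]{0.4}, new double[]{0.1}, new Random())); // true
    }
}
```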
> 
> > > + }
> > > + return val;
> > > + } 
> > > +
> > > +
> > ...
> > > Added: trunk/apps/simsalabim/DarknetRoute.java
> > > ===
> > > --- trunk/apps/simsalabim/DarknetRoute.java   
> > > (rev 
0)
> > > +++ trunk/apps/simsalabim/DarknetRoute.java   2009-02-11 13:53:49 UTC 
> > > (rev 
> > 25585)
> > ...
> > > +
> > > + public Data findData(CircleKey k) {
> > > + for (Iterator it = route.iterator() ; it.hasNext() 
> > > ;) {
> > > + Data d  = it.next().findData(k);
> > > + if (d != null)
> > > + return d;
> > > + }
> > > + return null;
> > > + }
> > 
> > You don't check on each hop as you reach it? Is this some idea about 
visiting 
> > all the nodes on the route even if we find the data early on, so we can 
store 
> > it everywhere for better data robustness? (And considerably worse 
performance 
> > on popular data!)
> 
> It's just a matter of implementation. The routes terminate for different 
reasons.
> When a route has terminated, the implementation checks if the data was 
found. It
> corresponds to the node checking the message directly in the real node.

But you only cache it on nodes before the one where the data was found?


[freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Matthew Toseland
s posted to the forum that the user 
reads. If he posts nonsense to other forums, or even introduces identities 
that spam other forums, the user adding trust probably does not know about 
this, so it is problematic to hold him responsible for that. In a positive 
trust only system this is unsolvable afaics?

Perhaps some form of feedback/ultimatum system? Users who are affected by spam 
from an identity can send proof that the identity is a spammer to the users 
they trust who trust that identity. If the proof is valid, those who trust 
the identity can downgrade him within a reasonable period; if they don't do 
this they get downgraded themselves?
> 
> There's another reason I don't see this as a problem: I'm working from
> the assumption that if you can force a spammer to perform manual
> effort on par with the amount of spam he can send, then the problem
> *has been solved*.  The reason email spam and Frost spam is a problem
> is not that there are lots of spammers; there aren't.  It's that the
> spammers can send colossal amounts of spam.

Agreed. However positive trust as currently envisaged does not have this 
property, because spammers can gain trust by posting valid messages and then 
use it to introduce spamming identities. Granted there is a limited capacity, 
but they can gain lots of trust by posting, and can therefore send a lot of 
spam via their trusted identities: the multiplier is still pretty good, 
although maybe not hideous.
> 
> The solution, imho, is mundane: if the occasional trusted identity
> starts a spam campaign, I mark them as a spammer.  This is optionally
> published, but can be ignored by others to maintain the positive trust
> aspects of the behavior.  Locally, it functions as a slightly stronger
> killfile: their messages get ignored, and their identity's trust
> capacity is forced to zero.

Does not protect against a spammer's parent identity introducing more 
spammers. IMHO it is important that if an identity trusts a lot of spammers 
it gets downgraded - and that this be *easy* for the user.
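For comparison, the capacity idea at the heart of Advogato (discussed earlier in this thread) can be caricatured as follows. This is a drastic simplification -- the real metric computes a maximum network flow over the trust graph (see the linked paper) -- and all names and capacity values here are illustrative. The point is that an identity can only introduce a bounded number of further identities, which bounds the spam multiplier:

```java
import java.util.*;

/**
 * Drastically simplified sketch of Advogato's capacity idea (the real
 * metric computes a maximum network flow; see the paper linked earlier
 * in this thread). Each level away from the seed gets a smaller
 * capacity, so a trusted identity can introduce only a bounded number
 * of further identities.
 */
public class CapacitySketch {
    /**
     * @param trust identity -> identities it trusts
     * @param caps  caps[d] = introductions a node at distance d may make
     *              (assumed values)
     */
    public static Set<String> accepted(String seed,
                                       Map<String, List<String>> trust,
                                       int[] caps) {
        Set<String> in = new LinkedHashSet<>();
        in.add(seed);
        Deque<String> queue = new ArrayDeque<>(List.of(seed));
        Map<String, Integer> dist = new HashMap<>(Map.of(seed, 0));
        while (!queue.isEmpty()) {
            String u = queue.poll();
            int d = dist.get(u);
            if (d >= caps.length) continue;     // too far from the seed: no budget
            int budget = caps[d];
            for (String v : trust.getOrDefault(u, List.of())) {
                if (budget == 0) break;
                if (in.add(v)) {
                    dist.put(v, d + 1);
                    queue.add(v);
                    budget--;
                }
            }
        }
        return in;
    }

    public static void main(String[] args) {
        Map<String, List<String>> trust = Map.of(
                "seed", List.of("a", "b", "c"),
                "a", List.of("spam1", "spam2"));
        // Level 0 may introduce 2 identities, level 1 only 1.
        System.out.println(accepted("seed", trust, new int[]{2, 1}));
    }
}
```

However many identities "a" creates, its level-1 budget caps how many get accepted -- which is the bounded-multiplier property the current positive-trust scheme lacks.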
> 
> In the context of the routing and data store algorithms, Freenet has a
> strong prejudice against alchemy and in favor of algorithms with
> properties that are both useful and provable from reasonable
> assumptions, even though they are not provably perfect.  Like routing,
> the generalized trust problem is non-trivial.  Advogato has such
> properties; the current WoT and FMS algorithms do not: they are
> alchemical.  In addition, the Advogato metric has a strong anecdotal
> success story in the form of the Advogato site (I've not been active
> on FMS/Freetalk recently enough to speak to them).  Why is alchemy
> acceptable here, but not in routing?

Because the provable metrics don't work for our scenario. At least they don't 
work given the current assumptions and formulations.



[freenet-dev] Release schedule

2009-05-13 Thread Matthew Toseland
We are going to release 0.7.5 in the near future, and then 0.8 later. 0.7.5 
may or may not include Freetalk, and will not be delayed for Freetalk.

Schedule:
Wednesday 20th of May - Release 0.7.5 beta, at the latest.
Wednesday 10th of June - Release 0.7.5 final.

Major feature work should be postponed until after 0.7.5-final is out. Minor 
usability tweaks and debugging are of course vital; that is why we are not 
releasing immediately.

If Freetalk is ready for 0.7.5-final, then great. If it's not, we ship without 
it.

Freetalk will however be a requirement for 0.8. But 0.8 will also have some 
major feature work, including bloom filter sharing (and tunnels if that is 
the only way to secure bloom filter sharing, but IMHO it *probably* isn't). 
That's what we said on the announcement on the website, that's what we said 
to Google, that's how it should be IMHO.


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread xor
On Wednesday 13 May 2009 10:01:31 Luke771 wrote:
> Thomas Sachau wrote:
> > Luke771 schrieb:
> >> I can't comment on the technical part because I wouldn't know what I'm
> >> talking about.
> >> However, I do like the 'social' part (being able to see an identity even
> >> if the censors mark it down right away as it's created)
> >
> > "The censors"? There is no central authority to censor people. "Censors"
> > can only censor the web-of-trust for those people that trust them and
> > which want to see a censored net. You cant and should not prevent them
> > from this, if they want it.
>
> This has been discussed a lot.
> The fact that censorship isn't done by a central authority but by mob
> rule is irrelevant.
> Censorship in this context is "blocking users based on the content of
> their messages".
>
> The whole point is basically this: "A tool created to block flood
> attacks is being used to discriminate against a group of users."
>
> Now, it is true that they can't really censor anything because users can
> decide what trust lists to use, but it is also true that this abuse of
> the WoT does create problems. They are social problems and not
> technical ones, but still 'freenet problems'.
>
> If we see the experience with FMS as a test for the Web of Trust, the
> result of that test is in my opinion something in between a miserable
> failure and a catastrophe.
>
> The WoT never got to prove itself against a real flood attack, we have
> no idea what would happen if someone decided to attack FMS, not even if
> the WoT would stop the attempted attack at all, let alone finding out
> how fast and/or how well it would do it.
>
> In other words, for what we know, the WoT may very well be completely
> ineffective against a DoS attack.
> All we know about it is that the WoT can be used to discriminate against
> people, we know that it WILL be used in that way, and we know that
> because of a proven fact: it's being used to discriminate against people
> right now, on FMS
>
> That's all we know.
> We know that some people will abuse WoT, but we dont really know if it
> would be effective at stopping DoS attacks.
> Yes, it "should" work, but we don't 'know'.
>
> The WoT has never been tested to actually do the job it's designed to do,
> yet the Freenet 'decision makers' are acting as if the WoT had proven
> its validity beyond any reasonable doubt, and at the same time they
> decide to ignore the only one proven fact that we have.
>
> This whole situation is ridiculous,  I don't know if it's more funny or
> sad...  it's grotesque. It reminds me of our beloved politicians, always
> knowing what's the right thing to do, except that it never works as
> expected.
>

No, it is not ridiculous, you are just having a point of view which is not 
abstract enough:

If there is a shared medium (= Freenet, Freetalk, etc.) which is writable by 
EVERYONE, it is absolutely IMPOSSIBLE to *automatically* (as in "by writing an 
intelligent software") distinguish spam from useful uploads, because 
"EVERYONE" can be evil. 

EITHER you manually view every single piece of information which is uploaded 
and decide yourself whether you consider it as spam or not OR you adopt the 
ratings of other people so each person only has to rate a small subset of the 
uploaded data. There are no other options.

And what the web of trust does is exactly the second option: it "load 
balances" the content rating equally between all users.






Re: [freenet-dev] Current uservoice top 5

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 19:00:54 Matthew Toseland wrote:
> We could pause most of the node relatively easily, there will still be some
> background activity, and therefore some garbage collection, but it can be
> kept minimal...

That would be great. 

As long as it doesn't access its memory very often, my system will put most of 
it to swap, so this should also free most of the memory. 

> > But I don't want to have that all the time. When I compile something in
> > the background, I want freenet to take predecence (that's already well
> > covered with the low scheduling priority, though).
>
> How would Freenet tell the difference?

When I click "pause" I want it to reduce its activity (ideally there'd be 
"take a break for X hours" instead of "pause now and stay paused", because 
otherwise I'm prone to forget that I paused it). 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de



Re: [freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 18:12:52 Robert Hailey wrote:
> On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:
> > On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:

> >> be out in a few days), and hopefully Bloom filter sharing, a new
> >> feature
> >> enabling nodes to know what is in their peers' datastores, greatly
> >> improving performance, combined with some related security
> >> improvements.

> > Bloom filter sharing will enable nodes to know what is in their peers
> > datastores without impacting anonymity and should result in much
> > improved
> > performance and better security."

> Except that it's not true... bloom filter sharing is at a large cost
> to security & anonymity (as said in the roadmap).

Ouch! - if you're right I completely misunderstood the announcement, and its 
last part should definitely be reworked. 

I didn't check the validity of the statements but simply tried to make them 
easier to understand. 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de
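For readers following the thread: the feature under discussion works by each node publishing a compact probabilistic summary of its datastore, so peers only ask a node for a key when the summary says "maybe present". A minimal Bloom filter sketch (the hash scheme and sizes here are illustrative, not Freenet's actual format):

```java
import java.util.BitSet;

/**
 * Minimal Bloom filter sketch to make the feature above concrete.
 * Hash scheme and parameters are illustrative, not Freenet's format.
 */
public class BloomSketch {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomSketch(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    /** Double hashing from two derived values (illustrative). */
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (key + "#salt").hashCode() | 1; // force odd second hash
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++)
            bits.set(index(key, i));
    }

    /** false => definitely absent; true => possibly present. */
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(index(key, i)))
                return false;
        return true;
    }

    public static void main(String[] args) {
        BloomSketch b = new BloomSketch(1 << 16, 3);
        b.add("CHK@example");
        System.out.println(b.mightContain("CHK@example")); // true
    }
}
```

Whether sharing such summaries costs anonymity (the disputed point above) depends on what an observer can infer from knowing which keys a node probably stores; the sketch only shows the mechanism, not the security analysis.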



Re: [freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 12:58 PM, Matthew Toseland
 wrote:
> On Wednesday 13 May 2009 15:47:24 Evan Daniel wrote:
>> On Wed, May 13, 2009 at 9:03 AM, Matthew Toseland
>>  wrote:
>> > On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
>> >> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
>> >>  wrote:
>> >> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
>> >> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau 
>> > wrote:
>> >> >> > Evan Daniel schrieb:
>> >> >> >> I don't have any specific ideas for how to choose whether to ignore
>> >> >> >> identities, but I think you're making the problem much harder than
> it
>> >> >> >> needs to be.  The problem is that you need to prevent spam, but at
> the
>> >> >> >> same time prevent malicious non-spammers from censoring identities
> who
>> >> >> >> aren't spammers.  Fortunately, there is a well documented algorithm
>> >> >> >> for doing this: the Advogato trust metric.
>> >> >> >>
>> >> >> >> The WoT documentation claims it is based upon the Advogato trust
>> >> >> >> metric.  (Brief discussion:
> http://www.advogato.org/trust-metric.html
>> >> >> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think
> this
>> >> >> >> is wonderful, as I think there is much to recommend the Advogato
>> >> >> >> metric (and I pushed for it early on in the WoT discussions).
>> >> >> >> However, my understanding of the paper and what is actually
>> >> >> >> implemented is that the WoT code does not actually implement it.
>> >> >> >> Before I go into detail, I should point out that I haven't read the
>> >> >> >> WoT code and am not fully up to date on the documentation and
>> >> >> >> discussions; if I'm way off base here, I apologize.
>> >> >> >
>> >> >> > I think, you are:
>> >> >> >
>> >> >> > The advogato idea may be nice (i did not read it myself), if you
> have
>> >> > exactly 1 trustlist for
>> >> >> > everything. But xor wants to implement 1 trustlist for every app as
>> > people
>> >> > may act differently e.g.
>> >> >> > on firesharing than on forums or while publishing freesites. You
>> > basicly
>> >> > dont want to censor someone
>> >> >> > just because he tries to disturb filesharing while he may be tries
> to
>> >> > bring in good arguments at
>> >> >> > forum discussions about it.
>> >> >> > And i dont think that advogato will help here, right?
>> >> >>
>> >> >> There are two questions here.  The first question is given a set of
>> >> >> identities and their trust lists, how do you compute the trust for an
>> >> >> identity the user has not rated?  The second question is, how do you
>> >> >> determine what trust lists to use in which contexts?  The two
>> >> >> questions are basically orthogonal.
>> >> >>
>> >> >> I'm not certain about the contexts issue; Toad raised some good
>> >> >> points, and while I don't fully agree with him, it's more complicated
>> >> >> than I first thought.  I may have more to say on that subject later.
>> >> >>
>> >> >> Within a context, however, the computation algorithm matters.  The
>> >> >> Advogato idea is very nice, and imho much better than the current WoT
>> >> >> or FMS answers.  You should really read their simple explanation page.
>> >> >>  It's really not that complicated; the only reasons I'm not fully
>> >> >> explaining it here is that it's hard to do without diagrams, and they
>> >> >> already do a good job of it.
>> >> >
>> >> > It's nice, but it doesn't work. Because the only realistic way for
>> > positive
>> >> > trust to be assigned is on the basis of posted messages, in a purely
>> > casual
>> >> > way, and without the sort of permanent, universal commitment that any
>> >> > pure-positive-trust scheme requires: If he spams on any board, if I
> ever
>> > gave
>> >> > him trust and haven't changed that, then *I AM GUILTY* and *I LOSE
> TRUST*
>> > as
>> >> > the only way to block the spam.
>> >>
>> >> How is that different than the current situation?  Either the fact
>> >> that he spams and you trust him means you lose trust because you're
>> >> allowing the spam through, or somehow the spam gets stopped despite
>> >> your trust -- which implies either that a lot of people have to update
>> >> their trust lists before anything happens, and therefore the spam
>> >> takes forever to stop, or it doesn't take that many people to censor
>> >> an objectionable but non-spamming poster.
>> >>
>> >> I agree, this is a bad thing.  I'm just not seeing that the WoT system
>> >> is *that* much better.  It may be somewhat better, but the improvement
>> >> comes at a cost of trading spam resistance vs censorship ability,
>> >> which I think is fundamentally unavoidable.
>> >
>> > So how do you solve the contexts problem? The only plausible way to add
> trust
>> > is to do it on the basis of valid messages posted to the forum that the
> user
>> > reads. If he posts nonsense to other forums, or even introduces identities
>> > that spam other forums, the user adding trust probably does not know about
>> > this, so it is

[freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Robert Hailey

On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:

> On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:
>> We are currently working on Freenet 0.8, which will be released  
>> later this
>> year, and will include additional performance improvements,  
>> usability work,
>> and security improvements, as well as the usual debugging. Features  
>> are not
>> yet finalized but we expect it to include Freetalk (a new anonymous  
>> web
>> forums tool), a new Vista-compatible installer for Windows (that  
>> part will
>> be out in a few days), and hopefully Bloom filter sharing, a new  
>> feature
>> enabling nodes to know what is in their peers' datastores, greatly
>> improving performance, combined with some related security  
>> improvements.
>
> "...Bloom filter sharing.
>
> Bloom filter sharing will enable nodes to know what is in their peers
> datastores without impacting anonymity and should result in much  
> improved
> performance and better security."
>
> That would be my suggestion.
>
> Best wishes,
> Arne


Except that it's not true... bloom filter sharing is at a large cost  
to security & anonymity (as said in the roadmap).

--
Robert Hailey


On Nov 13, 2008, at 12:52 PM, Matthew Toseland wrote:
> TENTATIVE ROADMAP
>
> 0.8:
> [...]
> - Tunnels. Greatly improve security, significant performance cost.
> - Bloom filters. Greatly improve performance, significant security  
> cost.
> - BUT PUT THEM TOGETHER, and you get greatly improved security for the
> paranoid at a slighty performance cost, improved security and current
> performance or better for the moderately paranoid, and greatly  
> improved
> performance for the not so paranoid.



[freenet-dev] Bloom filters and store probing, after the fact splitfile originator tracing

2009-05-13 Thread Robert Hailey

On May 12, 2009, at 5:24 PM, Matthew Toseland wrote:

> On Tuesday 12 May 2009 17:57:04 Robert Hailey wrote:
>> If the bloom filters are recovering already-failed requests, then
>> surely latency is not the issue being addressed.
>>
>> I thought that the point of having bloom filters was to increase the
>> effectiveness of fetches (a more reliable fallback).
>
> So your proposal is that we broadcast all our requests to all our  
> peers, and
> then what, wait for all their responses before routing outwards?

Put that way it might sound laughable, but the same problems have to be  
overcome when sending a failed request to a peer with a bloom filter  
hit. This is all operating on the presumption that the bloom filter  
check is intended to recover a failing request (no htl / dnf / etc.),  
and not to fundamentally change routing.

It is a given that the primary request mechanism has failed (be it due  
to network topology, an overloaded peer, etc.).

I propose adding a new network message, "QueueLocalKey". A response to  
it is not expected, but a node may then "OfferKey" to us at some time  
down the road.

As you said, upon a conventional request failing (htl, dnf, etc) a  
node will broadcast this message to all peers (even backed off), or at  
least queue the message to be sent (later/low priority).

Upon receiving this message, a node can discard it (if too many  
offers are already pending to the requestor). Otherwise it will check its  
bloom filter for the key, discarding the message if the key is not found,  
and adding the key to an "offer-queue" if it is found (to be transferred  
whenever). If the bloom filter check is too expensive to be done at  
request time, we can have a single thread process all these requests  
serially at a low priority. If an opennet peer disconnects we can  
dump all their requests.

The actual request has already failed, so we do not hold up processing;  
and it is a step towards passive requests.

I guess that the net effect of this message on data which does not  
exist amounts to a bunch of bloom filter checks, so it is equivalent.  
And to data which does exist (but somehow failed), it will simply move  
the data (at some rate) into caches along a valid request path.
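The proposed handling could be sketched roughly as below. This is a minimal sketch under stated assumptions: the class name, the queue cap, the int key stand-in, and the use of a plain BitSet for the store's bloom filter are all invented for illustration and do not match Freenet's actual store or message classes.

```java
import java.util.ArrayDeque;
import java.util.BitSet;
import java.util.Queue;

// Sketch of the proposed "QueueLocalKey" handling described above.
// All names (OfferQueueSketch, MAX_PENDING_OFFERS) are hypothetical.
public class OfferQueueSketch {
    static final int MAX_PENDING_OFFERS = 64; // arbitrary overload cutoff

    private final BitSet bloomFilter; // stand-in for the store's bloom filter
    private final Queue<Integer> offerQueue = new ArrayDeque<>();

    OfferQueueSketch(BitSet bloomFilter) {
        this.bloomFilter = bloomFilter;
    }

    /**
     * Handle an incoming QueueLocalKey: discard if too many offers are
     * already pending or the bloom filter says the key is not in the store;
     * otherwise queue the key so an OfferKey can be sent at low priority.
     * @return true if the key was queued for a later offer.
     */
    boolean handleQueueLocalKey(int keyHash) {
        if (offerQueue.size() >= MAX_PENDING_OFFERS) return false; // overloaded
        if (!bloomFilter.get(keyHash)) return false; // definitely not stored
        offerQueue.add(keyHash); // offer later, whenever
        return true;
    }

    int pendingOffers() { return offerQueue.size(); }
}
```

Note the bloom check only ever gives false positives, so a queued offer may still turn out to be for data we no longer have; the OfferKey step would have to tolerate that.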

--
Robert Hailey




[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 10:24:52 Daniel Cheng wrote:
> In fms, you can always adjust the MinLocalMessageTrust to get whatever
> message you please to read.  -- ya, you may call it censorship..
> but it is the one every reader can opt-out with 2 clicks. --- Even
> if majority abuse the system, the poster can always post, the reader
> may know who is being censored and adjust accordingly .

As long as I can just disable the censorship (and I'm aware that it exists) I 
don't care about it. No one has the right to make me listen, but I also don't 
have the right to prevent someone from speaking. 

Luckily the internet allows us to join these two goals: you can speak, but 
maybe no one will hear you. 

What's important here is that there must not be a way to check whether I join 
in the censorship; otherwise people can create social pressure. 

I don't really use FMS yet, so I need to ask: is there a way to check that? If 
yes: how can we get rid of it? 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de
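The opt-out behaviour described above can be sketched as a purely local filter: the threshold lives on the reader's side, so changing it "uncensors" instantly and affects no one else. A hedged sketch; the class and method names, the flat trust map, and the neutral default of 50 for unrated authors are assumptions, not the actual FMS schema.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of per-reader filtering by MinLocalMessageTrust, as described
// in the quoted message. Names are illustrative, not FMS's real code.
public class LocalTrustFilter {
    static final int UNRATED_DEFAULT = 50; // assumed neutral score

    /** Authors whose local message trust meets this reader's threshold. */
    static List<String> visibleMessages(Map<String, Integer> localTrust,
                                        List<String> authors,
                                        int minLocalMessageTrust) {
        return authors.stream()
                .filter(a -> localTrust.getOrDefault(a, UNRATED_DEFAULT)
                             >= minLocalMessageTrust)
                .collect(Collectors.toList());
    }
}
```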


[freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 9:03 AM, Matthew Toseland
 wrote:
> On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
>> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
>>  wrote:
>> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
>> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau 
> wrote:
>> >> > Evan Daniel schrieb:
>> >> >> I don't have any specific ideas for how to choose whether to ignore
>> >> >> identities, but I think you're making the problem much harder than it
>> >> >> needs to be.  The problem is that you need to prevent spam, but at the
>> >> >> same time prevent malicious non-spammers from censoring identities who
>> >> >> aren't spammers.  Fortunately, there is a well documented algorithm
>> >> >> for doing this: the Advogato trust metric.
>> >> >>
>> >> >> The WoT documentation claims it is based upon the Advogato trust
>> >> >> metric.  (Brief discussion: http://www.advogato.org/trust-metric.html
>> >> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think this
>> >> >> is wonderful, as I think there is much to recommend the Advogato
>> >> >> metric (and I pushed for it early on in the WoT discussions).
>> >> >> However, my understanding of the paper and what is actually
>> >> >> implemented is that the WoT code does not actually implement it.
>> >> >> Before I go into detail, I should point out that I haven't read the
>> >> >> WoT code and am not fully up to date on the documentation and
>> >> >> discussions; if I'm way off base here, I apologize.
>> >> >
>> >> > I think, you are:
>> >> >
>> >> > The advogato idea may be nice (i did not read it myself), if you have
>> > exactly 1 trustlist for
>> >> > everything. But xor wants to implement 1 trustlist for every app as
> people
>> > may act differently e.g.
>> >> > on firesharing than on forums or while publishing freesites. You
> basicly
>> > dont want to censor someone
>> >> > just because he tries to disturb filesharing while he may be tries to
>> > bring in good arguments at
>> >> > forum discussions about it.
>> >> > And i dont think that advogato will help here, right?
>> >>
>> >> There are two questions here.  The first question is given a set of
>> >> identities and their trust lists, how do you compute the trust for an
>> >> identity the user has not rated?  The second question is, how do you
>> >> determine what trust lists to use in which contexts?  The two
>> >> questions are basically orthogonal.
>> >>
>> >> I'm not certain about the contexts issue; Toad raised some good
>> >> points, and while I don't fully agree with him, it's more complicated
>> >> than I first thought.  I may have more to say on that subject later.
>> >>
>> >> Within a context, however, the computation algorithm matters.  The
>> >> Advogato idea is very nice, and imho much better than the current WoT
>> >> or FMS answers.  You should really read their simple explanation page.
>> >>  It's really not that complicated; the only reasons I'm not fully
>> >> explaining it here is that it's hard to do without diagrams, and they
>> >> already do a good job of it.
>> >
>> > It's nice, but it doesn't work. Because the only realistic way for
> positive
>> > trust to be assigned is on the basis of posted messages, in a purely
> casual
>> > way, and without the sort of permanent, universal commitment that any
>> > pure-positive-trust scheme requires: If he spams on any board, if I ever
> gave
>> > him trust and haven't changed that, then *I AM GUILTY* and *I LOSE TRUST*
> as
>> > the only way to block the spam.
>>
>> How is that different than the current situation?  Either the fact
>> that he spams and you trust him means you lose trust because you're
>> allowing the spam through, or somehow the spam gets stopped despite
>> your trust -- which implies either that a lot of people have to update
>> their trust lists before anything happens, and therefore the spam
>> takes forever to stop, or it doesn't take that many people to censor
>> an objectionable but non-spamming poster.
>>
>> I agree, this is a bad thing.  I'm just not seeing that the WoT system
>> is *that* much better.  It may be somewhat better, but the improvement
>> comes at a cost of trading spam resistance vs censorship ability,
>> which I think is fundamentally unavoidable.
>
> So how do you solve the contexts problem? The only plausible way to add trust
> is to do it on the basis of valid messages posted to the forum that the user
> reads. If he posts nonsense to other forums, or even introduces identities
> that spam other forums, the user adding trust probably does not know about
> this, so it is problematic to hold him responsible for that. In a positive
> trust only system this is unsolvable afaics?
>
> Perhaps some form of feedback/ultimatum system? Users who are affected by spam
> from an identity can send proof that the identity is a spammer to the users
> they trust who trust that identity. If the proof is valid, those who trust
> the identity can downgrade him within a reasonable p
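For reference, the Advogato metric repeatedly invoked in this thread can be sketched as a capacity-constrained flow computation: identities get capacities that shrink with distance from a seed, each identity is split into in/out vertices, and an identity is "accepted" only if max flow from the seed saturates its unit edge to a supersink. This is a toy sketch (adjacency matrix, unit-augmenting Edmonds-Karp), not the real Advogato or WoT code, and every name in it is invented.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy sketch of an Advogato-style trust computation.
public class AdvogatoSketch {
    static final int INF = 1_000_000;

    /**
     * @param trusts         who each identity lists as trusted
     * @param seed           the identity all flow originates from
     * @param capsByDistance node capacity by BFS distance from the seed
     *                       (last entry reused for larger distances)
     * @return identities whose unit edge to the supersink is saturated
     */
    static Set<String> accepted(Map<String, List<String>> trusts, String seed,
                                int[] capsByDistance) {
        List<String> ids = new ArrayList<>(trusts.keySet());
        int n = ids.size();
        Map<String, Integer> idx = new HashMap<>();
        for (int i = 0; i < n; i++) idx.put(ids.get(i), i);

        // BFS distance from the seed along trust edges.
        int[] dist = new int[n];
        Arrays.fill(dist, -1);
        ArrayDeque<Integer> bfs = new ArrayDeque<>();
        dist[idx.get(seed)] = 0;
        bfs.add(idx.get(seed));
        while (!bfs.isEmpty()) {
            int u = bfs.poll();
            for (String v : trusts.get(ids.get(u))) {
                Integer j = idx.get(v);
                if (j != null && dist[j] < 0) { dist[j] = dist[u] + 1; bfs.add(j); }
            }
        }

        // Split identity i into in-vertex 2i and out-vertex 2i+1.
        int sink = 2 * n, V = 2 * n + 1, src = 2 * idx.get(seed);
        int[][] cap = new int[V][V];
        boolean[] eligible = new boolean[n];
        for (int i = 0; i < n; i++) {
            if (dist[i] < 0) continue; // unreachable: capacity 0
            int c = capsByDistance[Math.min(dist[i], capsByDistance.length - 1)];
            if (c > 0) {
                cap[2 * i][2 * i + 1] = c - 1; // capacity left after own unit
                cap[2 * i][sink] = 1;          // the unit that means "accepted"
                eligible[i] = true;
            }
            for (String v : trusts.get(ids.get(i))) {
                Integer j = idx.get(v);
                if (j != null) cap[2 * i + 1][2 * j] = INF; // trust edge
            }
        }

        // Edmonds-Karp max flow, augmenting one unit per path.
        while (true) {
            int[] prev = new int[V];
            Arrays.fill(prev, -1);
            prev[src] = src;
            ArrayDeque<Integer> q = new ArrayDeque<>();
            q.add(src);
            while (!q.isEmpty() && prev[sink] < 0) {
                int u = q.poll();
                for (int v = 0; v < V; v++)
                    if (prev[v] < 0 && cap[u][v] > 0) { prev[v] = u; q.add(v); }
            }
            if (prev[sink] < 0) break; // no augmenting path left
            for (int v = sink; v != src; v = prev[v]) {
                cap[prev[v]][v]--;
                cap[v][prev[v]]++;
            }
        }

        Set<String> result = new TreeSet<>();
        for (int i = 0; i < n; i++)
            if (eligible[i] && cap[2 * i][sink] == 0) result.add(ids.get(i));
        return result;
    }
}
```

The point of the sketch is the property argued over above: a spammer army hanging off one over-trusting identity is bounded by that identity's onward capacity, so it cannot flood everyone; the flip side, as Toad notes, is that the certifier's own capacity is what gets consumed.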

Re: [freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Matthew Toseland
On Tuesday 12 May 2009 20:36:30 Matthew Toseland wrote:
> I will post this on the website tomorrow if there are no objections. If 
anyone 
> can suggest any improvements, please do so; sometimes what I write isn't 
> readable by human beings!
> 
> 7th May, 2009 - Another big donation!
> 
> Google's Open Source team has donated US$18,000 to the Freenet Project to 
> support the ongoing development of the Freenet software (thanks again 
> Google!).
> 
> Their last donation funded the db4o project, which has now been merged into 
> Freenet, greatly improving performance for large download queues while 
> reducing memory usage, amongst other benefits.
> 
> We are currently working on Freenet 0.8, which will be released later this 
> year, and will include additional performance improvements, usability work, 
> and security improvements, as well as the usual debugging. Features are not 
> yet finalized but we expect it to include Freetalk (a new anonymous web 
> forums tool), a new Vista-compatible installer for Windows (that part will 
be 
> out in a few days), and hopefully Bloom filter sharing, a new feature 
> enabling nodes to know what is in their peers' datastores, greatly improving 
> performance, combined with some related security improvements.
> 
I have posted the announcement.



[freenet-dev] Wininstaller deployed

2009-05-13 Thread Matthew Toseland
I have deployed the new wininstaller, for Vista/win7 users and anyone who 
clicks on "Windows instructions". Win2K/XP users with working JWS will still 
see the old installer for now.



Re: [freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 17:12:52 Robert Hailey wrote:
> 
> On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:
> 
> > On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:
> >> We are currently working on Freenet 0.8, which will be released  
> >> later this
> >> year, and will include additional performance improvements,  
> >> usability work,
> >> and security improvements, as well as the usual debugging. Features  
> >> are not
> >> yet finalized but we expect it to include Freetalk (a new anonymous  
> >> web
> >> forums tool), a new Vista-compatible installer for Windows (that  
> >> part will
> >> be out in a few days), and hopefully Bloom filter sharing, a new  
> >> feature
> >> enabling nodes to know what is in their peers' datastores, greatly
> >> improving performance, combined with some related security  
> >> improvements.
> >
> > "...Bloom filter sharing.
> >
> > Bloom filter sharing will enable nodes to know what is in their peers
> > datastores without impacting anonymity and should result in much  
> > improved
> > performance and better security."
> >
> > That would be my suggestion.
> >
> > Best wishes,
> > Arne
> 
> 
> Except that it's not true... bloom filter sharing is at a large cost  
> to security & anonymity (as said in the roadmap).

If you have security issues with bloom filters *as currently envisaged*, that 
is with the related caching changes, then please explain them on the relevant 
threads (not this one).



Re: [freenet-dev] Bloom filters and store probing, after the fact splitfile originator tracing

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 17:07:39 Robert Hailey wrote:
> 
> On May 12, 2009, at 5:24 PM, Matthew Toseland wrote:
> 
> > On Tuesday 12 May 2009 17:57:04 Robert Hailey wrote:
> >> If the bloom filters are recovering already-failed requests, then
> >> surely latency is not the issue being addressed.
> >>
> >> I thought that the point of having bloom filters was to increase the
> >> effectiveness of fetches (a more reliable fallback).
> >
> > So your proposal is that we broadcast all our requests to all our  
> > peers, and
> > then what, wait for all their responses before routing outwards?
> 
> It might sound laughable like that, but the same problems have to be  
> overcome with sending a failed request to a peer with a bloom filter  
> hit. 

The difference is we send it to ONE node, not to all of them. And we only have 
to wait for one node.

> This is all operating on the presumption that the bloom filter  
> check is intended to recover a failing request (no htl / dnf / etc.),  
> and not to fundamentally change routing.

Bloom filters would be checked first. If we can get it in one hop, then great, 
it saves a lot of load, time, exposure, etc.
> 
> It is a given that the primary request mechanism has failed (be it  
> from network topology, overloaded peer, etc).

No, this is a short-cut, to save a few hops, as well as to consider more 
nodes' stores for unpopular keys.
> 
> I propose adding a new network message "QueueLocalKey" A response to  
> which is not expected, but that a node may then "OfferKey" to us at  
> some time down the road.

So it's an extension to ULPRs? We do not in fact wait for a response, but if 
the data is found, we are offered it, and we forward it to those who have 
asked for it? The difference to ULPRs being that we ask all our peers for the 
key and not just one of them...
> 
> As you said, upon a conventional request failing (htl, dnf, etc) a  
> node will broadcast this message to all peers (even backed off), or at  
> least queue the message to be sent (later/low priority).
> 
> Upon a node receiving this message, it can discard it (if too many  
> offers are pending to the requestor). Otherwise it will check it's  
> bloom filter for the key; discarding the message if not found, and  
> adding the key to an "offer-queue" if it is found (to be transfered  
> whenever). If the bloom filter check is too expensive to be done at  
> request time, we can have a single thread process all these requests  
> serially at a low priority. If an open net peer disconnects we can  
> dump all their requests.
> 
> The actual request has already failed, we do not hold up processing;  
> and it is a step towards passive requests.

It is very similar to ULPRs but we broadcast requested keys to all peers 
rather than relying on the peer we had routed to.
> 
> I guess that the net effect of this message on data which does not  
> exist amounts to a bunch of bloom filter checks, so it is equivalent.  
> And to data which does exist (but somehow failed), it will simply move  
> the data (at some rate) into caches along a valid request path.

First, it doesn't shortcut. Second, the bandwidth cost may be higher. Third, 
the negative security impact is at least comparable.



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Luke771
Thomas Sachau wrote:
> Luke771 schrieb:
>   
>> I can't comment on the technical part because I wouldnt know what im 
>> talking about.
>> However, I do like the 'social' part (being able to see an identity even 
>> if the censors mark it down it right away as it's created)
>> 
>
> "The censors"? There is no central authority to censor people. "Censors" can 
> only censor the
> web-of-trust for those people that trust them and which want to see a 
> censored net. You cant and
> should not prevent them from this, if they want it.
>
>   
This has been discussed a lot.
The fact that censorship isn't done by a central authority but by mob 
rule is irrelevant.
Censorship in this context is "blocking users based on the content of 
their messages".

The whole point is basically this: a tool created to block flood 
attacks is being used to discriminate against a group of users.

Now, it is true that they can't really censor anything because users can 
decide what trust lists to use, but it is also true that this abuse of 
the WoT does create problems. They are social problems and not 
technical ones, but still 'freenet problems'.

If we see the experience with FMS as a test for the Web of Trust, the 
result of that test is in my opinion something in between a miserable 
failure and a catastrophe.

The WoT never got to prove itself against a real flood attack. We have 
no idea what would happen if someone decided to attack FMS, not even 
whether the WoT would stop the attempted attack at all, let alone how 
fast and/or how well it would do it.

In other words, for what we know, the WoT may very well be completely 
ineffective against a DoS attack.
All we know about it is that the WoT can be used to discriminate against 
people, and we know that it WILL be used in that way, because of a 
proven fact: it's being used to discriminate against people right now, 
on FMS.

That's all we know.
We know that some people will abuse the WoT, but we don't really know if 
it would be effective at stopping DoS attacks.
Yes, it "should" work, but we don't 'know'.

The WoT has never been tested to actually do the job it's designed to do, 
yet the Freenet 'decision makers' are acting as if the WoT had proven 
its validity beyond any reasonable doubt, and at the same time they 
decide to ignore the only proven fact that we have.

This whole situation is ridiculous; I don't know whether it's more funny 
or sad... it's grotesque. It reminds me of our beloved politicians, always 
knowing the right thing to do, except that it never works as expected.


Quickly back to our 'social problem': we have seen on FMS that as soon 
as a bunch of idiots figured out that they had an instrument of power in 
their hands, they decided to use it to play "holier than thou" and 
discriminate against people deemed "immoral", namely pedophiles and/or 
kiddie porn users.

Now, I'm not justifying pedophilia, kiddie porn or anything. In fact, 
I'm not even discussing it. What I'm doing is pointing out that it is 
extremely easy to single out pedophiles as "bad guys" who "should" be 
discriminated against.
It's like asking people "would you discriminate against unrepentant 
sadistic serial killers?"
Hell yeah.
Anyone would.
Same thing with pedophiles: they're so "bad" that our hate towards their 
acts takes all of our attention, making us miss the really important stuff.

In this case, the problem isn't about discriminating against pedophiles 
(false target); the problem is about setting a precedent, making us accept 
that discriminating against $group is OK as long as the group in 
question is "bad" enough.
THIS IS DANGEROUS!
Today pedophiles, tomorrow gays.
Today terrorists, tomorrow dissidents.

I hope I made it clear enough this time, because I don't think I can 
explain it any better than this. And by the way, if I still can't get my 
point across, I'll probably give up.

>> On the other hand tho, if a user knows that it will take his system 
>> three days or a week to finish the job, he may decide to do it anyway.
>> I mean the real problem is 'not knowing' that it may take a long time.
>> A user that starts a process and doesnt see any noticeable progress 
>> would probably abort, but the same user would let it run to completion 
>> if he expects it to take several days.
>> 
>
> Why use this sort of announcement, if it takes several days? Announcement 
> over captchas takes only
> around 24 hours, which is faster and needs less resources. So i dont see any 
> real reason for
> hashcash-introductions.
>
>   
The long calculation thing wouldn't work after all; as has been 
pointed out, computer power increases too fast for this kind of solution 
to be effective.

The other idea was good: a 'grace period' of say 75 "free" messages for 
every new identity before the WoT kicks in would definitely be a good 
idea, because it would greatly reduce the power in the hands of the "wot 
abusers" (if you don't like the term 'censor

Re: [freenet-dev] Current uservoice top 5

2009-05-13 Thread Matthew Toseland
On Sunday 10 May 2009 20:50:00 Arne Babenhauserheide wrote:
> Am Mittwoch 06 Mai 2009 00:23:54 schrieb Matthew Toseland:
> > Isn't using a reasonably low scheduling priority enough? And we already do
> > that!
> 
> Not really, since I can't disable it (when I want full speed), and it sadly 
> doesn't work really well for memory consumption. 
> 
> I'd like an option to have freenet go inactive as soon as the system load 
gets 
> too high. It will lose connections anyway (low scheduling priority leads to 
> far too high answer-times), so it could just explicitely take a break until 
my 
> system runs well again. 

We could pause most of the node relatively easily; there will still be some 
background activity, and therefore some garbage collection, but it can be 
kept minimal...
> 
> But I don't want to have that all the time. When I compile something in the 
> background, I want freenet to take predecence (that's already well covered 
> with the low scheduling priority, though). 

How would Freenet tell the difference?
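One answer suggested elsewhere in this thread is to sidestep detection entirely: let the user pause explicitly, but make the pause expire on its own ("take a break for X hours"), so a forgotten pause cannot keep the node idle forever. A minimal sketch of that idea; the class and method names are invented, not Freenet API:

```java
// Sketch of a "take a break for X hours" pause that expires on its own.
// Time is passed in explicitly so the logic is trivially testable.
public class PauseTimer {
    private long pausedUntilMillis = 0;

    /** Pause node activity for the given number of hours from 'now'. */
    void pauseForHours(long nowMillis, int hours) {
        pausedUntilMillis = nowMillis + hours * 3_600_000L;
    }

    /** Should the node throttle itself at time 'now'? */
    boolean isPaused(long nowMillis) {
        return nowMillis < pausedUntilMillis;
    }
}
```

The node would consult `isPaused(System.currentTimeMillis())` wherever it currently checks whether to schedule work; once the deadline passes, normal activity resumes with no user action.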


signature.asc
Description: This is a digitally signed message part.
___
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 15:47:24 Evan Daniel wrote:
> On Wed, May 13, 2009 at 9:03 AM, Matthew Toseland
>  wrote:
> > On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
> >> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
> >>  wrote:
> >> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
> >> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau 
> > wrote:
> >> >> > Evan Daniel schrieb:
> >> >> >> I don't have any specific ideas for how to choose whether to ignore
> >> >> >> identities, but I think you're making the problem much harder than 
it
> >> >> >> needs to be.  The problem is that you need to prevent spam, but at 
the
> >> >> >> same time prevent malicious non-spammers from censoring identities 
who
> >> >> >> aren't spammers.  Fortunately, there is a well documented algorithm
> >> >> >> for doing this: the Advogato trust metric.
> >> >> >>
> >> >> >> The WoT documentation claims it is based upon the Advogato trust
> >> >> >> metric.  (Brief discussion: 
http://www.advogato.org/trust-metric.html
> >> >> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think 
this
> >> >> >> is wonderful, as I think there is much to recommend the Advogato
> >> >> >> metric (and I pushed for it early on in the WoT discussions).
> >> >> >> However, my understanding of the paper and what is actually
> >> >> >> implemented is that the WoT code does not actually implement it.
> >> >> >> Before I go into detail, I should point out that I haven't read the
> >> >> >> WoT code and am not fully up to date on the documentation and
> >> >> >> discussions; if I'm way off base here, I apologize.
> >> >> >
> >> >> > I think, you are:
> >> >> >
> >> >> > The advogato idea may be nice (i did not read it myself), if you have
> >> >> > exactly 1 trustlist for everything. But xor wants to implement 1
> >> >> > trustlist for every app as people may act differently e.g. on
> >> >> > firesharing than on forums or while publishing freesites. You basicly
> >> >> > dont want to censor someone just because he tries to disturb
> >> >> > filesharing while he may be tries to bring in good arguments at forum
> >> >> > discussions about it.
> >> >> > And i dont think that advogato will help here, right?
> >> >>
> >> >> There are two questions here.  The first question is given a set of
> >> >> identities and their trust lists, how do you compute the trust for an
> >> >> identity the user has not rated?  The second question is, how do you
> >> >> determine what trust lists to use in which contexts?  The two
> >> >> questions are basically orthogonal.
> >> >>
> >> >> I'm not certain about the contexts issue; Toad raised some good
> >> >> points, and while I don't fully agree with him, it's more complicated
> >> >> than I first thought.  I may have more to say on that subject later.
> >> >>
> >> >> Within a context, however, the computation algorithm matters.  The
> >> >> Advogato idea is very nice, and imho much better than the current WoT
> >> >> or FMS answers.  You should really read their simple explanation page.
> >> >>  It's really not that complicated; the only reasons I'm not fully
> >> >> explaining it here is that it's hard to do without diagrams, and they
> >> >> already do a good job of it.
> >> >
> >> > It's nice, but it doesn't work. Because the only realistic way for
> >> > positive trust to be assigned is on the basis of posted messages, in a
> >> > purely casual way, and without the sort of permanent, universal
> >> > commitment that any pure-positive-trust scheme requires: If he spams on
> >> > any board, if I ever gave him trust and haven't changed that, then *I AM
> >> > GUILTY* and *I LOSE TRUST* as the only way to block the spam.
> >>
> >> How is that different than the current situation?  Either the fact
> >> that he spams and you trust him means you lose trust because you're
> >> allowing the spam through, or somehow the spam gets stopped despite
> >> your trust -- which implies either that a lot of people have to update
> >> their trust lists before anything happens, and therefore the spam
> >> takes forever to stop, or it doesn't take that many people to censor
> >> an objectionable but non-spamming poster.
> >>
> >> I agree, this is a bad thing.  I'm just not seeing that the WoT system
> >> is *that* much better.  It may be somewhat better, but the improvement
> >> comes at a cost of trading spam resistance vs censorship ability,
> >> which I think is fundamentally unavoidable.
> >
> > So how do you solve the contexts problem? The only plausible way to add
> > trust is to do it on the basis of valid messages posted to the forum that
> > the user reads. If he posts nonsense to other forums, or even introduces
> > identities that spam other forums, the user adding trust probably does not
> > know about this, so it is problematic to hold him responsible for that. In
> > a positive trust only system this is unsolvable afaics?
> >
> > Perhaps some form of feedback/ultimatum system?

Re: [freenet-dev] Recent progress on Interdex

2009-05-13 Thread Ximin Luo
Matthew Toseland wrote:
> On Wednesday 13 May 2009 04:53:11 Evan Daniel wrote:
>> On Tue, May 12, 2009 at 4:26 PM, Ximin Luo  wrote:
>>
>>> (one way of storing it which would allow token-deflate would be having each
>>> indexnode as a CHK, then you'd only have to INS an updated node and all its
>>> parents up to the root, but i chose not to do this as CHKs have a higher 
>>> limit
>>> for being turned into a splitfile. was this the right decision?)
>> My impression is that most of the time to download a key is the
>> routing time to find it, not the time to transfer the data once found.
>>  So a 32KiB CHK is only somewhat slower to download than a 1KiB SSK.
>> (Though I haven't seen hard numbers on this in ages, so I could be
>> completely wrong.)
>>
>> My instinct is that the high latency for a single-key lookup that is
>> the norm for Freenet means that if using CHKs instead results in an
>> appreciably shallower tree, that will yield a performance improvement.
> 
> Agreed.

ok, for larger indexes this may be better, then, where there are enough keys to
fill up a 32KB lookup table. i could implement a CHKSerialiser which uses this
format, and which automatically gets used once an index grows beyond a certain size.
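For intuition on why wider nodes flatten the tree: fanout is roughly node size divided by entry size, and depth grows with log (base fanout) of the key count. A quick sketch in Java; the entry size and key count are assumed figures for illustration, not Interdex measurements:

```java
// Rough estimate of index-tree depth for SSK-sized (1 KiB) vs CHK-sized
// (32 KiB) index nodes. Entry size and key count are illustrative assumptions.
public class IndexDepth {
    // Number of levels needed to index `entries` keys with nodes of
    // `nodeBytes` holding entries of `bytesPerEntry` each.
    static int depth(long entries, int nodeBytes, int bytesPerEntry) {
        int fanout = Math.max(2, nodeBytes / bytesPerEntry);
        int d = 1;
        long reach = fanout;              // keys reachable at depth d
        while (reach < entries) { reach *= fanout; d++; }
        return d;
    }

    public static void main(String[] args) {
        long keys = 1_000_000L;           // hypothetical index size
        int entry = 64;                   // assumed bytes per entry (hash + reference)
        System.out.println("SSK (1 KiB) depth:  " + depth(keys, 1024, entry));
        System.out.println("CHK (32 KiB) depth: " + depth(keys, 32 * 1024, entry));
    }
}
```

Under these assumptions a million-entry index needs five 1 KiB levels but only three 32 KiB levels, which is where the latency win from a shallower tree would come from.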

>>  The other effect to consider is how likely the additional data
>> fetched is to be useful to some later request.  Answering that is
>> probably trickier, since it requires reasonable assumptions about
>> index size and usage.
>>
>> It would be nice if there was a way to get some splitfile-type
>> redundancy in these indexes; otherwise uncommonly searched terms won't
>> be retrievable.  However, there's obviously a tradeoff with common
>> search term latency.
> 
> Yeah. Generally fetching a single block with no redundancy is something to be 
> avoided IMHO. You might want to use the directory insertion code (maybe 
> saces' DefaultManifestPutter), but you may want to tweak the settings a bit.

yeah, was going to look into that at some point. thanks.

X

___
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl


Re: [freenet-dev] Recent progress on Interdex

2009-05-13 Thread Matthew Toseland
On Wednesday 13 May 2009 04:53:11 Evan Daniel wrote:
> On Tue, May 12, 2009 at 4:26 PM, Ximin Luo  wrote:
> 
> > (one way of storing it which would allow token-deflate would be having
> > each indexnode as a CHK, then you'd only have to INS an updated node and
> > all its parents up to the root, but i chose not to do this as CHKs have a
> > higher limit for being turned into a splitfile. was this the right
> > decision?)
> 
> My impression is that most of the time to download a key is the
> routing time to find it, not the time to transfer the data once found.
>  So a 32KiB CHK is only somewhat slower to download than a 1KiB SSK.
> (Though I haven't seen hard numbers on this in ages, so I could be
> completely wrong.)
> 
> My instinct is that the high latency for a single-key lookup that is
> the norm for Freenet means that if using CHKs instead results in an
> appreciably shallower tree, that will yield a performance improvement.

Agreed.

>  The other effect to consider is how likely the additional data
> fetched is to be useful to some later request.  Answering that is
> probably trickier, since it requires reasonable assumptions about
> index size and usage.
> 
> It would be nice if there was a way to get some splitfile-type
> redundancy in these indexes; otherwise uncommonly searched terms won't
> be retrievable.  However, there's obviously a tradeoff with common
> search term latency.

Yeah. Generally fetching a single block with no redundancy is something to be 
avoided IMHO. You might want to use the directory insertion code (maybe 
saces' DefaultManifestPutter), but you may want to tweak the settings a bit.
> 
> Evan Daniel



Re: [freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Robert Hailey


On May 12, 2009, at 7:28 PM, Arne Babenhauserheide wrote:


On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:
We are currently working on Freenet 0.8, which will be released later this
year, and will include additional performance improvements, usability work,
and security improvements, as well as the usual debugging. Features are not
yet finalized, but we expect it to include Freetalk (a new anonymous web
forums tool), a new Vista-compatible installer for Windows (that part will be
out in a few days), and hopefully Bloom filter sharing, a new feature
enabling nodes to know what is in their peers' datastores, greatly improving
performance, combined with some related security improvements.


"...Bloom filter sharing.

Bloom filter sharing will enable nodes to know what is in their peers'
datastores without impacting anonymity, and should result in much improved
performance and better security."

That would be my suggestion.

Best wishes,
Arne



Except that it's not true... bloom filter sharing comes at a large cost
to security & anonymity (as stated in the roadmap).


--
Robert Hailey


On Nov 13, 2008, at 12:52 PM, Matthew Toseland wrote:

TENTATIVE ROADMAP

0.8:
[...]
- Tunnels. Greatly improve security, significant performance cost.
- Bloom filters. Greatly improve performance, significant security cost.

- BUT PUT THEM TOGETHER, and you get greatly improved security for the
paranoid at a slight performance cost, improved security and current
performance or better for the moderately paranoid, and greatly improved
performance for the not so paranoid.



Re: [freenet-dev] Bloom filters and store probing, after the fact splitfile originator tracing

2009-05-13 Thread Robert Hailey

On May 12, 2009, at 5:24 PM, Matthew Toseland wrote:

> On Tuesday 12 May 2009 17:57:04 Robert Hailey wrote:
>> If the bloom filters are recovering already-failed requests, then
>> surely latency is not the issue being addressed.
>>
>> I thought that the point of having bloom filters was to increase the
>> effectiveness of fetches (a more reliable fallback).
>
> So your proposal is that we broadcast all our requests to all our  
> peers, and
> then what, wait for all their responses before routing outwards?

It might sound laughable put that way, but the same problems have to be
overcome when sending a failed request to a peer with a bloom filter hit.
This all operates on the presumption that the bloom filter check is intended
to recover a failing request (no htl / dnf / etc.), not to fundamentally
change routing.

It is a given that the primary request mechanism has failed (be it  
from network topology, overloaded peer, etc).

I propose adding a new network message, "QueueLocalKey". No response to it is
expected, but a node may then "OfferKey" to us at some time down the road.

As you said, upon a conventional request failing (htl, dnf, etc) a  
node will broadcast this message to all peers (even backed off), or at  
least queue the message to be sent (later/low priority).

Upon a node receiving this message, it can discard it (if too many offers
are pending to the requestor). Otherwise it will check its bloom filter for
the key, discarding the message if the key is not found, and adding the key
to an "offer-queue" if it is found (to be transferred whenever). If the bloom
filter check is too expensive to be done at request time, we can have a
single thread process all these requests serially at a low priority. If an
opennet peer disconnects, we can dump all their requests.

The actual request has already failed, so we do not hold up processing; and
it is a step towards passive requests.

I guess that the net effect of this message on data which does not  
exist amounts to a bunch of bloom filter checks, so it is equivalent.  
And to data which does exist (but somehow failed), it will simply move  
the data (at some rate) into caches along a valid request path.
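The queue-and-offer flow described above can be sketched roughly as follows. QueueLocalKey and OfferKey are the proposed message names from this thread; the limit value, class, and method names are invented for illustration, and this is not fred code:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the proposed QueueLocalKey handling: cheap admission check at
// receive time, bloom check deferred to a single low-priority worker.
public class OfferQueue {
    private final int maxPendingPerPeer = 100;            // assumed limit
    private final Map<String, Integer> pendingPerPeer = new ConcurrentHashMap<>();
    private final BlockingQueue<String[]> toCheck = new LinkedBlockingQueue<>();
    private final Set<String> bloomApprox;                // stand-in for the real filter

    public OfferQueue(Set<String> bloomApprox) { this.bloomApprox = bloomApprox; }

    // Called when a QueueLocalKey message arrives from `peer` for `key`.
    public boolean onQueueLocalKey(String peer, String key) {
        int pending = pendingPerPeer.getOrDefault(peer, 0);
        if (pending >= maxPendingPerPeer) return false;   // discard: too many offers pending
        pendingPerPeer.put(peer, pending + 1);            // decremented when offered/expired (not shown)
        return toCheck.offer(new String[] { peer, key }); // bloom check deferred
    }

    // Single low-priority worker: drain the queue, check the filter, and
    // collect {peer, key} pairs that should get an OfferKey later.
    public List<String[]> drainOffers() {
        List<String[]> offers = new ArrayList<>();
        String[] req;
        while ((req = toCheck.poll()) != null) {
            if (bloomApprox.contains(req[1]))             // "filter hit": offer later
                offers.add(req);
        }
        return offers;
    }
}
```

The point of the sketch is the ordering: the expensive check never blocks the message path, matching the "process serially at low priority" idea above.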

--
Robert Hailey



[freenet-dev] Recent progress on Interdex

2009-05-13 Thread Daniel Cheng
On Wed, May 13, 2009 at 6:28 AM, Matthew Toseland
 wrote:
> On Tuesday 12 May 2009 21:26:53 Ximin Luo wrote:
>> Matthew Toseland wrote:
>> > Is it a good idea to use MD5? I guess you're using it the same way that
>> > XMLLibrarian does, but it may be more of a problem for your application?
>>
>> you mean collisions? with md5 the expected rate of collisions is around 1 in
>> 2^64, so that would give an average word length of 4 assuming 2^16 possible
>> letters, or word length of 8 with 2^8 possible letters... it seems ok, but
>> maybe sha1 is safer, then.
>
> What about deliberate collisions? Could they cause you to use more memory or
> anything like that? md5 is broken...

2^64 is still much larger than our bloom filter size.
This is just a performance trick, not a security measure.
I can't see why we can't use MD5 here.
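As an illustration of that point: the 64 bits kept from the digest are already far wider than any realistic filter, so the hash only needs to spread words evenly, not resist attack, and stored plaintext keywords catch the rare collision. A minimal sketch; the method names are illustrative, not XMLLibrarian or Interdex code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// The "performance trick": use the first 64 bits of MD5 as a compact word
// identifier, then fold that onto a much smaller filter.
public class WordId {
    // 64-bit identifier for a keyword (first 8 of the 16 MD5 digest bytes).
    static long id(String word) throws Exception {
        byte[] h = MessageDigest.getInstance("MD5")
                                .digest(word.getBytes(StandardCharsets.UTF_8));
        return ByteBuffer.wrap(h).getLong();
    }

    // Map the 64-bit id onto a filter with `bits` positions; collisions here
    // are expected and resolved by comparing the stored plaintext keyword.
    static int filterIndex(long id, int bits) {
        return (int) Long.remainderUnsigned(id, bits);
    }

    public static void main(String[] args) throws Exception {
        long a = id("freenet");
        System.out.println(a + " -> bit " + filterIndex(a, 1 << 20));
    }
}
```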

>> in any case, the actual plain keyword would be stored with each index so
>> collisions could be detected.
>>
>> > From your docs...
>> > An IndexNode corresponds to an SSK subspace, it contains a filter (e.g.
>> > bloom filter) for quickly ruling out an index based on the sought
>> > keywords, and a bunch of entries, which can each either be redirects to
>> > other indexes, or can be pointers to files.
>>
>> what's a "subspace" - the entire first "directory" of an SSK,
>
> Yes.
>
>> or any
>> subdirectory at any level? (i'm not familiar with the terminology here,
>> sorry)
>> an IndexTree is the former; an IndexNode is the latter.
>>
>> yes, entries can be redirects to other indexes, or point to files that
>> contain the actual index data = {(keyword, freenetURI that matches it,
>> other relevant information)*}. is this what you meant by files? they
>> *don't* point to non-index files of content.
>
> Hmmm, ok.
>>
>> > What does it mean to inflate or deflate the index?
>>
>> inflate = REQ the relevant data from freenet, and use it to build internal
>> data structure
>>
>> deflate = INS the internal data structure into freenet. for the
>> SSKSerialiser, SSK/USKs can't be partially updated without updating the
>> whole subspace (or so i thought), which is why token-deflate throws
>> UnsupportedOperationException.
>
> Ok.
>>
>> (one way of storing it which would allow token-deflate would be having each
>> indexnode as a CHK, then you'd only have to INS an updated node and all its
>> parents up to the root, but i chose not to do this as CHKs have a higher
>> limit for being turned into a splitfile. was this the right decision?)
>
> Or you could store them as separate SSKs, but I wouldn't recommend it. SSKs
> can have any name after the slash. But inserting it all at once adds
> redundancy etc, it's generally a good idea.
>
--



[freenet-dev] What to do on SSK collisions: f86448d51c2e3248e1dfec513eefde50902aac30

2009-05-13 Thread Daniel Cheng
2009/5/13 Matthew Toseland :
> On Tuesday 12 May 2009 09:33:11 you wrote:
>> On Tue, May 12, 2009 at 7:49 AM, Matthew Toseland
>>  wrote:
>> > On Tuesday 12 May 2009 00:45:54 Matthew Toseland wrote:
>> >> commit f86448d51c2e3248e1dfec513eefde50902aac30
>> >> Author: Daniel Cheng (???) 
>> >> Date:   Fri May 8 21:04:28 2009 +0800
>> >>
>> >> FreenetStore: Simplify code, remove "overwrite" parameter
>> >>
>> >> This parameter is always "false".
>> >> (Except when doing BDB->SaltedHash migration, which does not have to
>> >> overwrite)
>> >>
>> >>
>> >> The reason I introduced the overwrite parameter was that when we send an
>> >> SSK insert and get a DataFound with different data, we should not only
>> >> propagate the data downstream, but also replace our local copy of it.
>> >> However apparently I never implemented this. Is it a good idea?
>> >>
>> > Looks like we do store the collided data for a local insert
>> > (NodeClientCore.java:XXX we have collided), but not for a remote one.
>> > Except that a bug prevents the former from working. So we need to fix
>> > getBlock(). Ok...
>> >
>>
>> Let's see if these two commits make sense: (this is on my fork, not
>> committed to the main staging yet)
>>
>> http://github.com/j16sdiz/fred/commit/8e2ef42c286450813dbfa575bcd3f54dc8cb4c83
>> http://github.com/j16sdiz/fred/commit/7e6040ce3359486557bdd832c526e473a4f95577
>>
>> Regards,
>> Daniel
>
> Does this deal with the case where the request is remote, i.e. came from
> outside via a SSKInsertHandler?
>

I think the SSKInsertSender code handles the remote request case:
http://github.com/freenet/fred-staging/commit/6a341ed359a9ef6800a9830685c97072e9845912#diff-3

If it does not, I have no idea where I should fix it.



Re: [freenet-dev] What to do on SSK collisions: f86448d51c2e3248e1dfec513eefde50902aac30

2009-05-13 Thread Daniel Cheng
2009/5/13 Matthew Toseland :
> On Wednesday 13 May 2009 01:05:40 you wrote:
>> 2009/5/13 Matthew Toseland :
>> > On Tuesday 12 May 2009 09:33:11 you wrote:
>> >> On Tue, May 12, 2009 at 7:49 AM, Matthew Toseland
>> >>  wrote:
>> >> > On Tuesday 12 May 2009 00:45:54 Matthew Toseland wrote:
>> >> >> commit f86448d51c2e3248e1dfec513eefde50902aac30
>> >> >> Author: Daniel Cheng (鄭郁邦) 
>> >> >> Date:   Fri May 8 21:04:28 2009 +0800
>> >> >>
>> >> >> FreenetStore: Simplify code, remove "overwrite" parameter
>> >> >>
>> >> >> This parameter is always "false".
>> >> >> (Except when doing BDB->SaltedHash migration, which does not have to
>> >> >> overwrite)
>> >> >>
>> >> >>
>> >> >> The reason I introduced the overwrite parameter was that when we send
>> >> >> an SSK insert and get a DataFound with different data, we should not
>> >> >> only propagate the data downstream, but also replace our local copy of
>> >> >> it. However apparently I never implemented this. Is it a good idea?
>> >> >>
>> >> > Looks like we do store the collided data for a local insert
>> >> > (NodeClientCore.java:XXX we have collided), but not for a remote one.
>> >> > Except that a bug prevents the former from working. So we need to fix
>> >> > getBlock(). Ok...
>> >> >
>> >>
>> >> Let's see if these two commits make sense: (this is on my fork, not
>> >> committed to the main staging yet)
>> >>
>> >> http://github.com/j16sdiz/fred/commit/8e2ef42c286450813dbfa575bcd3f54dc8cb4c83
>> >> http://github.com/j16sdiz/fred/commit/7e6040ce3359486557bdd832c526e473a4f95577
>> >> Regards,
>> >> Daniel
>> >
>> > Does this deal with the case where the request is remote, i.e. came from
>> > outside via a SSKInsertHandler?
>> >
>>
>> I think the SSKInsertSender code handles the remote request case:
>> http://github.com/freenet/fred-staging/commit/6a341ed359a9ef6800a9830685c97072e9845912#diff-3
>
> No it doesn't, store(,,false)! If there is a collision we want to
> store(,,true).

You mean overwrite the local SSKBlock with the remote one, even if we are not
inserting?

Should we?
What if an attacker tries to overwrite an SSK?

>>
>> If it do not, I have no idea where should I fix it.
>
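For reference, the overwrite semantics being debated can be sketched like this. The class and in-memory map are invented for illustration and only stand in for the real FreenetStore implementations; the overwrite flag mirrors the store(,,true) / store(,,false) calls discussed above:

```java
import java.util.*;

// Sketch of an SSK store put: on a collision, the existing block is kept
// unless the caller explicitly allows overwriting (the collided-data case).
public class SskStore {
    private final Map<String, byte[]> blocks = new HashMap<>();

    /** @return true if stored; false on a collision that was left in place. */
    public boolean put(String routingKey, byte[] block, boolean overwrite) {
        byte[] old = blocks.get(routingKey);
        if (old != null && !Arrays.equals(old, block) && !overwrite)
            return false;                 // collision: keep our current copy
        blocks.put(routingKey, block);    // fresh key, identical data, or overwrite
        return true;
    }
}
```

The security question in this thread is exactly when `overwrite` may safely be true: for verified collided data seen during our own insert, versus letting an arbitrary remote peer replace a block.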


Re: [freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Evan Daniel
On Wed, May 13, 2009 at 9:03 AM, Matthew Toseland
 wrote:
> On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
>> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
>>  wrote:
>> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
>> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau 
> wrote:
>> >> > Evan Daniel schrieb:
>> >> >> I don't have any specific ideas for how to choose whether to ignore
>> >> >> identities, but I think you're making the problem much harder than it
>> >> >> needs to be.  The problem is that you need to prevent spam, but at the
>> >> >> same time prevent malicious non-spammers from censoring identities who
>> >> >> aren't spammers.  Fortunately, there is a well documented algorithm
>> >> >> for doing this: the Advogato trust metric.
>> >> >>
>> >> >> The WoT documentation claims it is based upon the Advogato trust
>> >> >> metric.  (Brief discussion: http://www.advogato.org/trust-metric.html
>> >> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think this
>> >> >> is wonderful, as I think there is much to recommend the Advogato
>> >> >> metric (and I pushed for it early on in the WoT discussions).
>> >> >> However, my understanding of the paper and what is actually
>> >> >> implemented is that the WoT code does not actually implement it.
>> >> >> Before I go into detail, I should point out that I haven't read the
>> >> >> WoT code and am not fully up to date on the documentation and
>> >> >> discussions; if I'm way off base here, I apologize.
>> >> >
>> >> > I think, you are:
>> >> >
>> >> > The advogato idea may be nice (i did not read it myself), if you have
>> >> > exactly 1 trustlist for everything. But xor wants to implement 1
>> >> > trustlist for every app as people may act differently e.g. on
>> >> > firesharing than on forums or while publishing freesites. You basicly
>> >> > dont want to censor someone just because he tries to disturb
>> >> > filesharing while he may be tries to bring in good arguments at forum
>> >> > discussions about it.
>> >> > And i dont think that advogato will help here, right?
>> >>
>> >> There are two questions here.  The first question is given a set of
>> >> identities and their trust lists, how do you compute the trust for an
>> >> identity the user has not rated?  The second question is, how do you
>> >> determine what trust lists to use in which contexts?  The two
>> >> questions are basically orthogonal.
>> >>
>> >> I'm not certain about the contexts issue; Toad raised some good
>> >> points, and while I don't fully agree with him, it's more complicated
>> >> than I first thought.  I may have more to say on that subject later.
>> >>
>> >> Within a context, however, the computation algorithm matters.  The
>> >> Advogato idea is very nice, and imho much better than the current WoT
>> >> or FMS answers.  You should really read their simple explanation page.
>> >>  It's really not that complicated; the only reasons I'm not fully
>> >> explaining it here is that it's hard to do without diagrams, and they
>> >> already do a good job of it.
>> >
>> > It's nice, but it doesn't work. Because the only realistic way for
>> > positive trust to be assigned is on the basis of posted messages, in a
>> > purely casual way, and without the sort of permanent, universal
>> > commitment that any pure-positive-trust scheme requires: If he spams on
>> > any board, if I ever gave him trust and haven't changed that, then *I AM
>> > GUILTY* and *I LOSE TRUST* as the only way to block the spam.
>>
>> How is that different than the current situation?  Either the fact
>> that he spams and you trust him means you lose trust because you're
>> allowing the spam through, or somehow the spam gets stopped despite
>> your trust -- which implies either that a lot of people have to update
>> their trust lists before anything happens, and therefore the spam
>> takes forever to stop, or it doesn't take that many people to censor
>> an objectionable but non-spamming poster.
>>
>> I agree, this is a bad thing.  I'm just not seeing that the WoT system
>> is *that* much better.  It may be somewhat better, but the improvement
>> comes at a cost of trading spam resistance vs censorship ability,
>> which I think is fundamentally unavoidable.
>
> So how do you solve the contexts problem? The only plausible way to add trust
> is to do it on the basis of valid messages posted to the forum that the user
> reads. If he posts nonsense to other forums, or even introduces identities
> that spam other forums, the user adding trust probably does not know about
> this, so it is problematic to hold him responsible for that. In a positive
> trust only system this is unsolvable afaics?
>
> Perhaps some form of feedback/ultimatum system? Users who are affected by spam
> from an identity can send proof that the identity is a spammer to the users
> they trust who trust that identity. If the proof is valid, those who trust
> the identity can downgrade him within a reasonable period; if they don't do
> this they get downgraded themselves?
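For readers following the thread, the Advogato idea referenced above can be caricatured as trust flowing outward from a seed with shrinking per-level capacities; identities no flow reaches stay unaccepted. The real metric computes a maximum flow over a transformed graph (see the Levien paper linked above), and the capacities and greedy level walk below are invented simplifications for illustration only:

```java
import java.util.*;

// Very reduced sketch of the Advogato-style computation: a breadth-first
// walk from a trusted seed, with per-level capacities limiting how many
// identities each level can certify. Not the WoT implementation.
public class TrustFlow {
    // Capacity per distance-from-seed level (assumed values).
    static final int[] CAP = { 800, 200, 50, 12, 4, 2, 1 };

    static Set<String> accepted(Map<String, List<String>> trusts, String seed) {
        Set<String> ok = new LinkedHashSet<>();
        Deque<String> level = new ArrayDeque<>(List.of(seed));
        Set<String> seen = new HashSet<>(List.of(seed));
        for (int d = 0; d < CAP.length && !level.isEmpty(); d++) {
            int budget = CAP[d];                  // how much "flow" this level has
            Deque<String> next = new ArrayDeque<>();
            while (!level.isEmpty() && budget-- > 0) {
                String id = level.poll();
                ok.add(id);                       // reached by flow: accepted
                for (String t : trusts.getOrDefault(id, List.of()))
                    if (seen.add(t)) next.add(t);
            }
            level = next;
        }
        return ok;  // identities never reached (e.g. spam farms) stay unaccepted
    }
}
```

The property the thread is debating falls out of the capacities: a spammer who mass-creates identities only gets as many of them accepted as the flow through his own node allows.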

Re: [freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 15:03:13 Matthew Toseland wrote:
> Perhaps some form of feedback/ultimatum system? Users who are affected by
> spam from an identity can send proof that the identity is a spammer to the
> users they trust who trust that identity. If the proof is valid, those who
> trust the identity can downgrade him within a reasonable period; if they
> don't do this they get downgraded themselves?

I remember another alternative which was proposed (and implemented) for 
Gnutella (but LimeWire chose not to merge the code for unknown reasons): 

Voting not on users but on messages (objects): 

- Main site: http://credence-p2p.org
- Papers: http://credence-p2p.org/paper.html
- Overview: http://credence-p2p.org/overview.html

I tested it back then and it worked quite well. 

You could have two different settings: "ignore messages marked as spam" and 
"only see messages marked as good". 

They had the same problem of people not voting on "spam/not spam" but on "I
like it / I hate it", and their solution was a differentiated voting
mechanism.

It's implemented in Java, but the GUI ties into LimeWire. The core is mostly 
independent, though (iirc). 

It only depends on a limit on account creation: creating massive numbers of
accounts (which are allowed to vote) can break the system.

This limit could be realized by only allowing people with a minimum message 
count to vote. 

I don't know whether it can be ported perfectly to Freenet, but it should be
worth a look - it's also GPL licensed.

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de



Re: [freenet-dev] [freenet-cvs] r25585 - in trunk/apps/ simsalabim: . darknet rembre utils

2009-05-13 Thread Matthew Toseland
On Friday 24 April 2009 20:03:41 vive wrote:
> On Fri, Apr 10, 2009 at 09:03:12PM +0100, Matthew Toseland wrote:
> > On Wednesday 11 February 2009 13:53:50 v...@freenetproject.org wrote:
> > > Author: vive
> > > Date: 2009-02-11 13:53:49 + (Wed, 11 Feb 2009)
> > > New Revision: 25585
> > > Added:
> 
> > > Log:
> > > simsalabim: Freenet 0.7 simulator
...
> > > +
> > > + public DarknetRoute findRoute(CircleKey k, DarknetRoute dnr) {
> > > +
> > > + if (!inNetwork)
> > > + throw new Error("Node not in Network routed to: "  + 
> > > this);
> > > + if (!isActive())
> > > + throw new Error("Node that is active routed to: " + 
> > > this);
> > > +
> > > + /*
> > > +  * Routing with
> > > +  * 1) Backtracking
> > > +  * 2) HTL (fixed decrease per node, not probabilistic as in 
> > > freenet)
> > > +  *
> > > +  * Cache/Store: when route has completed
> > > +  */
> > > +
> > > + if (dnr == null)
> > > + dnr = new DarknetRoute(god.N_BEST, god.HTL);
> > > +
> > > + dnr.atNode(this, k);
> > > + numQueries++;
> > > +
> > > + if (data.contains(k) || cache.contains(k))  // 
> > > terminate on success
> > > + return dnr;
> > > +
> > > + while (dnr.htl > 0) {
> > > +   double bd = Double.MAX_VALUE;
> > > +   DarknetNode cand = null;
> > > +
> > > +   for (Iterator it = neighbors() ; 
> > > it.hasNext() ;) {
> > > +   DarknetNode t = it.next();
> > > +   if (dnr.contains(t))
> > > +  continue;
> > 
> > We never route to the same node twice even to be rejected in your
> > simulation. I wonder what impact this has? I mean, it would be easy enough
> > to use a local set in this function, and check contains at the beginning
> > of the function...
> 
> Yes, routes only cross specific nodes once (except when they backtrack).
> This is the best approximation I have found, since I was unable to
> understand a few heuristics in the node. I am looking forward to a good
> (but short!) summary of routing heuristics, or I will need to read the code
> myself to make routing more realistic in further versions.

Hmmm, I always thought RejectedLoop was pretty clear.
> 
> > Is there some basic load simulation? Your conclusions about network
> > merging may be way off if you're not taking into account the limited
> > capacity of the bridge nodes...? Or have you eliminated that somehow?
>
> It may be way off, but I checked that the load on the border nodes was
> fairly low. I found that the success rate depends on the data finding its
> way over the network borders once or twice, before becoming available in a
> different component. Swapping specialization happens very slowly with few
> border nodes, which leads to content specialization and failed routes in
> the other component. With more border nodes, specialization goes faster,
> but at the same time there are more border nodes able to take the load...
> 
> No, there is no simulation of the load-mgmt that nodes do. It's not
> event-based down to a fine level of detail; high-level events such as
> routing and swapping only *complete* at certain times. I am currently
> working on an event-based version.
> 
Cool.
...
> > > + }
> > > + }
> > > + 
> > > + for (Iterator it = openNeighbors.iterator() ; 
> > it.hasNext() ; ) {
> > > + DarknetNode open = it.next();
> > > + open.openNeighbors.remove(this);
> > > + }
> > > + openNeighbors.clear();
> > > +
> > > + // Dormant nodes dont remove data
> > > + if (permanent) {
> > > + for (Iterator it = data.data() ; it.hasNext() ;) {
> > > + it.next().removedFrom(this);
> > > + }
> > > +
> > > + for (Iterator it = cache.data() ; it.hasNext() ;) {
> > > + it.next().removedFrom(this);
> > > + }
> > > + }
> > > +
> > > + return n;
> > > + }
> > > +
> > > + public boolean isSink(CircleKey k) {
> > > + for (DarknetNode n : neighbors) {
> > > + if (k.dist(n.pos) < k.dist(pos))
> > > + return false;
> > > + }
> > > + return true;
> > > + }
> > 
> > We have uptime requirements in this in the real node; it'd be nice to
> > have some theoretical support for that decision... I guess it would
> > require some sort of distribution of uptimes, though...
>
> Yes, but it could also be based on some heuristic. Storing join and leave
> time in the simulation is no problem. Do you have any suggestion for why
> it is needed?
> 
Well, we typically store on 3 nodes' stores. So if these are all low-uptime
nodes, or worse, newbie nodes that appear and then vanish forever, it will be
impossible to find the data, or it will take a very long time.

> >

Re: [freenet-dev] Question about an important design decision of the WoT plugin

2009-05-13 Thread Matthew Toseland
On Friday 08 May 2009 02:12:21 Evan Daniel wrote:
> On Thu, May 7, 2009 at 6:33 PM, Matthew Toseland
>  wrote:
> > On Thursday 07 May 2009 21:32:42 Evan Daniel wrote:
> >> On Thu, May 7, 2009 at 2:02 PM, Thomas Sachau  
wrote:
> >> > Evan Daniel schrieb:
> >> >> I don't have any specific ideas for how to choose whether to ignore
> >> >> identities, but I think you're making the problem much harder than it
> >> >> needs to be.  The problem is that you need to prevent spam, but at the
> >> >> same time prevent malicious non-spammers from censoring identities who
> >> >> aren't spammers.  Fortunately, there is a well documented algorithm
> >> >> for doing this: the Advogato trust metric.
> >> >>
> >> >> The WoT documentation claims it is based upon the Advogato trust
> >> >> metric.  (Brief discussion: http://www.advogato.org/trust-metric.html
> >> >> Full paper: http://www.levien.com/thesis/compact.pdf )  I think this
> >> >> is wonderful, as I think there is much to recommend the Advogato
> >> >> metric (and I pushed for it early on in the WoT discussions).
> >> >> However, my understanding of the paper and what is actually
> >> >> implemented is that the WoT code does not actually implement it.
> >> >> Before I go into detail, I should point out that I haven't read the
> >> >> WoT code and am not fully up to date on the documentation and
> >> >> discussions; if I'm way off base here, I apologize.
> >> >
> >> > I think, you are:
> >> >
> >> > The advogato idea may be nice (i did not read it myself), if you have
> >> > exactly 1 trustlist for everything. But xor wants to implement 1
> >> > trustlist for every app as people may act differently e.g. on
> >> > firesharing than on forums or while publishing freesites. You basicly
> >> > dont want to censor someone just because he tries to disturb
> >> > filesharing while he may be tries to bring in good arguments at forum
> >> > discussions about it.
> >> > And i dont think that advogato will help here, right?
> >>
> >> There are two questions here.  The first question is given a set of
> >> identities and their trust lists, how do you compute the trust for an
> >> identity the user has not rated?  The second question is, how do you
> >> determine what trust lists to use in which contexts?  The two
> >> questions are basically orthogonal.
> >>
> >> I'm not certain about the contexts issue; Toad raised some good
> >> points, and while I don't fully agree with him, it's more complicated
> >> than I first thought.  I may have more to say on that subject later.
> >>
> >> Within a context, however, the computation algorithm matters.  The
> >> Advogato idea is very nice, and imho much better than the current WoT
> >> or FMS answers.  You should really read their simple explanation page.
> >>  It's really not that complicated; the only reasons I'm not fully
> >> explaining it here is that it's hard to do without diagrams, and they
> >> already do a good job of it.
> >
> > It's nice, but it doesn't work. Because the only realistic way for
> > positive trust to be assigned is on the basis of posted messages, in a
> > purely casual way, and without the sort of permanent, universal
> > commitment that any pure-positive-trust scheme requires: If he spams on
> > any board, if I ever gave him trust and haven't changed that, then *I AM
> > GUILTY* and *I LOSE TRUST* as the only way to block the spam.
> 
> How is that different than the current situation?  Either the fact
> that he spams and you trust him means you lose trust because you're
> allowing the spam through, or somehow the spam gets stopped despite
> your trust -- which implies either that a lot of people have to update
> their trust lists before anything happens, and therefore the spam
> takes forever to stop, or it doesn't take that many people to censor
> an objectionable but non-spamming poster.
> 
> I agree, this is a bad thing.  I'm just not seeing that the WoT system
> is *that* much better.  It may be somewhat better, but the improvement
> comes at a cost of trading spam resistance vs censorship ability,
> which I think is fundamentally unavoidable.

So how do you solve the contexts problem? The only plausible way to add trust 
is to do it on the basis of valid messages posted to the forum that the user 
reads. If he posts nonsense to other forums, or even introduces identities 
that spam other forums, the user adding trust probably does not know about 
this, so it is problematic to hold him responsible for that. In a positive 
trust only system this is unsolvable afaics?

Perhaps some form of feedback/ultimatum system? Users who are affected by spam 
from an identity can send proof that the identity is a spammer to the users 
they trust who trust that identity. If the proof is valid, those who trust 
the identity can downgrade him within a reasonable period; if they don't do 
this they get downgraded themselves?
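For reference, the Advogato metric that keeps coming up in this thread is a
network-flow computation, not a weighted average: each identity is split into
an in/out node pair whose capacity shrinks with distance from a seed, and an
identity is accepted only if it receives flow, which is what bounds how many
puppets any one trusted node can certify. A minimal sketch (the node names,
capacity schedule and example graph here are invented for illustration; the
real construction is in Levien's thesis, and real identity names must not
collide with the reserved "s"/"t" labels used below):

```python
from collections import defaultdict, deque

def advogato_accept(trust_edges, seed, capacities):
    """Accept identities via a simplified Advogato-style flow computation.

    trust_edges: list of (truster, trustee) pairs.
    capacities: per-level node capacities, e.g. [4, 3, 2, 1];
                level = BFS distance from the seed.
    Returns the set of accepted identity names.
    """
    # BFS distances from the seed over the trust graph.
    graph = defaultdict(list)
    for a, b in trust_edges:
        graph[a].append(b)
    dist = {seed: 0}
    q = deque([seed])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)

    INF = 10 ** 9
    cap = defaultdict(int)
    # Split each node n into n_in -> n_out with a level-dependent capacity;
    # trust edges get "infinite" capacity; every node drains 1 unit to the
    # sink "t", so receiving flow == being accepted.
    for a, b in trust_edges:
        cap[(a + "_out", b + "_in")] = INF
    nodes = set(dist)
    for n in nodes:
        level = min(dist[n], len(capacities) - 1)
        cap[(n + "_in", n + "_out")] = capacities[level]
        cap[(n + "_out", "t")] = 1
    cap[("s", seed + "_in")] = INF  # super-source feeds the seed

    # Edmonds-Karp max flow from "s" to "t".
    adj = defaultdict(set)
    for (u, v) in list(cap):
        adj[u].add(v)
        adj[v].add(u)  # residual direction
    flow = defaultdict(int)

    def bfs():
        parent = {"s": None}
        q = deque(["s"])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == "t":
                        return parent
                    q.append(v)
        return None

    while True:
        parent = bfs()
        if parent is None:
            break
        path, v = [], "t"
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] - flow[e] for e in path)
        for e in path:
            flow[e] += aug
            flow[(e[1], e[0])] -= aug

    return {n for n in nodes if flow[(n + "_out", "t")] > 0}
```

The attack-resistance property shows up directly: if "bob" (two hops from the
seed, capacity 2) certifies three puppet identities, at most one of them can
receive flow, no matter how many he adds.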
> 
> There's another reason I don't see this as a problem: I'm working from
> the assumpt

[freenet-dev] Request for proofreading: Announcing donation from Google

2009-05-13 Thread Arne Babenhauserheide
On Tuesday, 12. May 2009 21:36:30 Matthew Toseland wrote:
> We are currently working on Freenet 0.8, which will be released later this
> year, and will include additional performance improvements, usability work,
> and security improvements, as well as the usual debugging. Features are not
> yet finalized but we expect it to include Freetalk (a new anonymous web
> forums tool), a new Vista-compatible installer for Windows (that part will
> be out in a few days), and hopefully Bloom filter sharing, a new feature
> enabling nodes to know what is in their peers' datastores, greatly
> improving performance, combined with some related security improvements.

"...Bloom filter sharing. 

Bloom filter sharing will enable nodes to know what is in their peers' 
datastores without impacting anonymity, and should result in much improved 
performance and better security."

That would be my suggestion. 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de


Re: [freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Arne Babenhauserheide
On Wednesday, 13. May 2009 10:24:52 Daniel Cheng wrote:
> In fms, you can always adjust the MinLocalMessageTrust to get whatever
> message you please to read.  -- ya, you may call it censorship..
> but it is the one every reader can opt-out with 2 clicks. --- Even
> if majority abuse the system, the poster can always post, the reader
> may know who is being censored and adjust accordingly .

As long as I can just disable the censorship (and I'm aware that it exists), I 
don't care about it. No one has the right to make me listen, but I also don't 
have the right to prevent someone from speaking. 

Luckily the internet allows us to join these two goals: you can speak, but 
maybe no one will hear you. 

The important point is that there must be no way to check whether I join in 
the censorship; otherwise people can create social pressure. 

I don't really use FMS yet, so I need to ask: is there a way to check that? If 
yes: how can we get rid of it? 

Best wishes, 
Arne

--- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- 
   - singing a part of the history of free software -
  http://infinite-hands.draketo.de


___
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Daniel Cheng
On Wed, May 13, 2009 at 4:01 PM, Luke771  wrote:
> Thomas Sachau wrote:
>> Luke771 schrieb:
>>
>>> I can't comment on the technical part because I wouldnt know what im
>>> talking about.
>>> However, I do like the 'social' part (being able to see an identity even
>>> if the censors mark it down it right away as it's created)
>>>
>>
>> "The censors"? There is no central authority to censor people. "Censors" can 
>> only censor the
>> web-of-trust for those people that trust them and which want to see a 
>> censored net. You cant and
>> should not prevent them from this, if they want it.
>>
>>
> This have been discussed  a lot.
> the fact that censoship isnt done by a central authority but by a mob
> rule is irrelevant.
> Censorship in this contest is "blocking users based on the content of
> their messages"
>
>  The whole point  is basically this: "A tool created to block flood
> attacks  is being used to discriminate against a group of users.
> [pedophiles / gays / terrorist / dissidents / ...]

You don't have to repeat this again and again.
We *are* aware of this problem.
We need a solution, not a re-statement of the problem.

Don't tell me Frost is the solution -- it is being DoS'ed again.

In FMS, you can always adjust MinLocalMessageTrust to get whatever messages 
you please to read. Yes, you may call it censorship, but it is one that every 
reader can opt out of with two clicks. Even if the majority abuse the system, 
the poster can always post, and the reader can know who is being censored and 
adjust accordingly.

In Frost, when somebody DoSes the system, the poster cannot post anything; 
there is nothing a reader can do.

Now, tell me, which one is better?
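To make the comparison concrete: reader-side filtering of this kind is just a
local threshold, and nothing is removed globally. A toy sketch (the function
name and the treat-unrated-authors-as-zero rule are my assumptions for
illustration, not FMS's actual code):

```python
def visible_messages(messages, trust, min_local_message_trust=50):
    """Reader-side filtering in the spirit of FMS's MinLocalMessageTrust.

    messages: list of (author, text) pairs.
    trust: this reader's map of author -> trust value (0..100).
    Unrated authors are treated as trust 0 here; FMS's actual handling of
    unrated identities may differ.
    """
    return [(author, text) for author, text in messages
            if trust.get(author, 0) >= min_local_message_trust]
```

Lowering the threshold to 0 is the "two clicks" opt-out: every message becomes
visible again, because the filter only ever existed in the reader's own view.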

[...]

>> Why use this sort of announcement, if it takes several days? Announcement 
>> over captchas takes only
>> around 24 hours, which is faster and needs less resources. So i dont see any 
>> real reason for
>> hashcash-introductions.
>>
[...]
> On the other hand, a malicious user who is able to create new identities
> quickly enough (slave labor would do the trick) would still be capable
> to send 75 messages per announced ID... so the 'grace period' should be
> as small as possible to minimize this problem. Maybe 25 or 30 messages?

As long as creating a new identity is free, one message is enough to flood 
the whole system.

Not only that: even ZERO messages are enough to flood the whole system -- if 
you can introduce thousands of identities in a few days, everybody will be 
busy polling the "fake" identities.
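A back-of-the-envelope calculation shows why zero-message identities are
already a flood; all the numbers below are illustrative assumptions, not
measured Freenet figures:

```python
def polling_load(identities, polls_per_identity_per_day, seconds_per_poll):
    """Estimate the cost every reader pays just polling announced identities.

    Even if none of the identities ever posts a message, each one must be
    polled for new content, so the attacker's cost is paid once (announcing)
    while the polling cost is paid by every reader, every day.
    Returns (fetches per day, busy hours per day).
    """
    fetches_per_day = identities * polls_per_identity_per_day
    busy_hours = fetches_per_day * seconds_per_poll / 3600.0
    return fetches_per_day, busy_hours
```

With, say, 5000 fake identities, 24 polls per identity per day and 2 seconds
per fetch, a reader would need 120,000 fetches, about 66.7 busy-hours per
day -- more than a day of work per day, i.e. the node is saturated before it
fetches a single real message.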



[freenet-dev] a social problem with Wot (was: Hashcash introduction, was: Question about WoT )

2009-05-13 Thread Luke771
Thomas Sachau wrote:
> Luke771 schrieb:
>   
>> I can't comment on the technical part because I wouldnt know what im 
>> talking about.
>> However, I do like the 'social' part (being able to see an identity even 
>> if the censors mark it down it right away as it's created)
>> 
>
> "The censors"? There is no central authority to censor people. "Censors" can 
> only censor the
> web-of-trust for those people that trust them and which want to see a 
> censored net. You cant and
> should not prevent them from this, if they want it.
>
>   
This has been discussed a lot.
The fact that censorship isn't done by a central authority but by mob rule 
is irrelevant.
Censorship in this context is "blocking users based on the content of 
their messages".

The whole point is basically this: "A tool created to block flood attacks 
is being used to discriminate against a group of users."

Now, it is true that they can't really censor anything, because users can 
decide what trust lists to use, but it is also true that this abuse of 
the WoT does create problems. They are social problems and not 
technical ones, but still 'Freenet problems'.

If we see the experience with FMS as a test for the Web of Trust, the 
result of that test is in my opinion something in between a miserable 
failure and a catastrophe.

The WoT never got to prove itself against a real flood attack; we have 
no idea what would happen if someone decided to attack FMS -- not even 
whether the WoT would stop the attempted attack at all, let alone how fast 
and/or how well it would do it.

In other words, for all we know, the WoT may very well be completely 
ineffective against a DoS attack.
All we know about it is that the WoT can be used to discriminate against 
people, and we know that it WILL be used that way -- we know that 
because of a proven fact: it's being used to discriminate against people 
right now, on FMS.

That's all we know.
We know that some people will abuse the WoT, but we don't really know if it 
would be effective at stopping DoS attacks.
Yes, it "should" work, but we don't 'know'.

The WoT has never been tested to actually do the job it's designed to do, 
yet the Freenet 'decision makers' are acting as if the WoT had proven 
its validity beyond any reasonable doubt, and at the same time they 
decide to ignore the one proven fact that we have.

This whole situation is ridiculous; I don't know whether it's more funny or 
sad... it's grotesque. It reminds me of our beloved politicians, always 
knowing what's the right thing to do, except that it never works as 
expected.


Quickly back to our 'social problem': we have seen on FMS that as soon 
as a bunch of idiots figured out that they had an instrument of power in 
their hands, they decided to use it to play "holier than thou" and 
discriminate against people deemed "immoral", namely pedophiles and/or 
kiddie porn users.

Now, I'm not justifying pedophilia, kiddie porn or anything. In fact, 
I'm not even discussing it. What I'm doing is pointing out that it is 
extremely easy to single out pedophiles as "bad guys" who "should" be 
discriminated against.
It's like asking people "would you discriminate against unrepentant 
sadistic serial killers?"
Hell yeah.
Anyone would.
Same thing with pedophiles: they're so "bad" that our hate towards their 
acts takes all of our attention, making us miss the really important stuff.

In this case, the problem isn't about discriminating against pedophiles 
(false target); the problem is about setting a precedent, making us accept 
that discriminating against $group is OK as long as the group in 
question is "bad" enough.
THIS IS DANGEROUS!
Today pedophiles, tomorrow gays.
Today terrorists, tomorrow dissidents.

I hope I made it clear enough this time, because I don't think I can 
explain it any better than that. And by the way, if I still can't get my 
point across, I'll probably give up.

>> On the other hand tho, if a user knows that it will take his system 
>> three days or a week to finish the job, he may decide to do it anyway.
>> I mean the real problem is 'not knowing' that it may take a long time.
>> A user that starts a process and doesnt see any noticeable progress 
>> would probably abort, but the same user would let it run to completion 
>> if he expects it to take several days.
>> 
>
> Why use this sort of announcement, if it takes several days? Announcement 
> over captchas takes only
> around 24 hours, which is faster and needs less resources. So i dont see any 
> real reason for
> hashcash-introductions.
>
>   
the long calculation thing wouldn't work after all; as it has been 
pointed out, computer power increases too fast for this kind of solution 
to be effective.

The other idea was good: a 'grace period' of, say, 75 "free" messages for 
every new identity before the WoT kicks in would definitely be a good 
idea, because it would greatly reduce the power in the hands of the "WoT 
abusers" (if you don't like the term 'censor