On Wed, May 27, 2009 at 3:11 PM, Thomas Sachau <m...@tommyserver.de> wrote:
> Evan Daniel wrote:
>> On Wed, May 27, 2009 at 1:29 PM, Thomas Sachau <m...@tommyserver.de> wrote:
>>>> A small number could still be rather large.  Having thousands see it
>>>> ought to suffice.  For the current network, I see no reason not to
>>>> have the (default) limits such that basically everyone sees it.
>>> If your small number is that big, you should say so, because for me,
>>> "small" is not around "thousands". Additionally, if you allow them to
>>> reach thousands (will a Freenet-based message system ever reach more
>>> people?), is there any value in restricting this anyway?
>>
>> Currently, the total number of people using Freenet is small.
>> Hopefully that will not always be the case.  Designing a new system
>> that assumes it will always be the case seems like a rather bad idea
>> to me.
>>
>> In this context, I would say small means sublinear growth with the
>> size of the entire network.  Having the new-identity spam reach
>> thousands of recipients is far better than having it reach tens of
>> thousands or millions.
>
> Why not let the WoT solve the problem? In practice, not all of those will
> pull the spam at the same time. So some will get it first, see that it is
> spam, and mark it as such. Later ones will then see the spammer mark and
> not even fetch the message. On the other hand, if it is not spam, it will
> get fetched.

If WoT can solve it, fine.  If it can't, that's fine too.  Neither
case has any bearing on Advogato's abilities, merely the standard of
comparison.

>
>>
>>>> If the post is really that valuable, some people will mark the poster
>>>> as trusted.  Then everyone will see it.
>>> Why should they? People are lazy, so most, if not all, will just read
>>> it, maybe answer it, but who thinks about rating someone because of a
>>> single post? People are and will always be lazy.
>>
>> If the post is only somewhat valuable, it might take a few posts.  If
>> it's a provocative photo that escaped from an oppressive regime, I
>> suspect it wouldn't.
>
> A few? I do sometimes check some FMS trust lists, and those I did check
> did not set a trust value for many people. Additionally, remember that
> FMS is used by people who are willing to do something, so I would expect
> much less from the default WoT inside Freenet. With your suggestion,
> someone will have to wait until someone "uncensors" him. IMHO, no one
> should be censored by default, so it should be exactly the other way
> round.

See below on captchas.

>
>>
>> Granting trust automatically on replies is an idea that has been
>> discussed before.  It has a great deal of merit.  I'm in favor of it.
>> I just don't think that should be the highest level of trust.
>
> It may be an additional option, but this would only make those who write
> many posts well-trusted, while others with fewer posts get less trust. It
> would be another place where a spammer could do something to make his
> attacks more powerful.

It is my firm belief that if the system makes the spammer perform
manual work per identity they wish to spam with, the problem is
solved.  Do you have evidence or sound reasoning to the contrary?  All
systems I know of -- such as email and Frost -- have spam problems
because the spammer can automate all the steps.

>
>>
>>>> You may think that everyone should be equal; I don't.  If newbies are
>>>> posting stuff that isn't spam (be it one message or many), I'm willing
>>>> to believe someone my web can reach will mark them trusted.  You
>>>> obviously aren't; that's fine too.  Fortunately, there is no
>>>> requirement we use the same capacity limiting functions -- that should
>>>> be configurable for each user.  If you want to make the default
>>>> function fairly permissive, that's fine.  I think you'd be making the
>>>> wrong choice, but personally I wouldn't care that much because I'd
>>>> just change it away from the default if new-identity spam was a
>>>> problem.
>>> So you want the default to be more censoring, and you trust people not
>>> to be lazy. I oppose both. First, if you really want to implement such
>>> censorship, make the default open, with thousands of trusted users; it
>>> won't make a difference anyway. Second, why should people mark new
>>> identities as trusted? I use FMS and I don't change the trust of every
>>> identity I see there, and I do somehow manage a trust list there. If
>>> someone is lazy (and the majority is), they will do nothing.
>>
>> If one of your design requirements is that new identities can post and
>> be seen by everyone, you have made the spam problem unsolvable BY
>> DEFINITION.  That is bad.
>
> Wrong. The initial barrier is the proof of solving a problem, which
> should be one that is hard for computers and easy for humans. But this
> just prevents automated, computer-based identity creation.

Please cite evidence that such a problem exists in a form that is
user-friendly enough that we can use it.  Unless I am greatly mistaken,
Freenet's goal as a project is not to solve the captcha problem when
no one else has.  Taking on oppressive governments is a sufficiently
herculean task for Freenet, imho.

Given that captchas can and will be broken, but that they will likely
slow the spammers down somewhat, we need a system that works even in
the face of new identity spam.

> But you should never start to mistrust any new identity and censor it by
> default, only allowing him to reach more people if he posts enough first.
> For example, a person who just wants to post some interesting content
> will probably not take that work; he will just post and leave. With your
> idea, only a limited number of people will see this, and they can decide
> if others will see it too. With FMS, everyone can see it by default, so
> this information would reach more people. Isn't that the basic idea of
> Freenet? Make it possible for everyone to get the inserted information
> without the possibility of censoring it?

If you prefer, set the captcha trust capacity function to something
permissive.  I don't really care, and I'm tired of arguing about it.
I don't even really care what the default is, as long as I can set it
to do something similar to what I describe above.
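To make that concrete, here is a minimal sketch (Python, all names invented) of the kind of per-user capacity function I have in mind: a captcha-only identity gets capacity 1, so it is accepted itself but cannot vouch for anyone else, while a permissive user could hand the same identity more capacity.

```python
# Hypothetical sketch of a per-user captcha trust capacity function.
# Capacity 1 means the identity itself is accepted (its posts are seen),
# but it has no capacity left over to grant trust to further identities.

def captcha_capacity(manual_trust=None, permissive=False):
    """Capacity assigned to an identity, given an optional manual rating."""
    if manual_trust is not None:
        return manual_trust         # manual ratings always win
    return 10 if permissive else 1  # captcha-only identities

# Strict default: a captcha-only identity is visible but cannot vouch.
assert captcha_capacity() == 1
# A permissive user grants the same identity room to vouch for others.
assert captcha_capacity(permissive=True) == 10
```

The point is only that this is a per-user knob, not a global policy.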

>
>>
>> The whole point of Advogato or other web of trust systems is that you
>> don't have to mark everyone you see as trusted, only some of them.  As
>> long as a reasonable number of people do the same thing, so that the
>> whole graph is well connected, that will suffice.
>
> With FMS, not everyone needs to mark someone as a spammer, only some
> people. And if enough of your trusted peers do that, the spammer is out.
> It is the exact opposite of your suggestion and will usually result in
> fewer people having to do something: usually there will be more people
> not spamming, so fewer people have to mark others down in FMS than have
> to mark others up in Advogato.

Taking Frost or email as a basis for comparison, you are wrong.  There
are more spamming identities than non-spamming ones.

>
>>
>>>> Also, you seem to be mistaken about what I mean by limiting CAPTCHA
>>>> identity capacity.  Limiting it to 1 means it's nonzero.  That means
>>>> the identity can receive trust and be accepted, so the message will be
>>>> read.  All it means is that they can't grant trust to anyone else.  It
>>>> says nothing about their own ability to post messages.  They wouldn't
>>>> need to solve lots of CAPTCHAs any more than they would under e.g. FMS.
>>>> A few should suffice, for redundancy vs collisions and the poster
>>>> having gone offline.
>>> ???
>>>
>>> Who told you that someone would have to solve many captchas, and
>>> forever? You only need to solve one captcha that is not already solved
>>> and which is from a trusted person who publishes their trust list. And
>>> I don't think he is mistaken. You still require people to mark
>>> identities as trusted to get them visible and keep them visible to
>>> others. This won't happen, so people will lose their captcha trust and
>>> will have to solve more captchas. Annoying for everyone, and most
>>> annoying for the lazy majority.
>>
>> The captcha problem is exactly the same as with FMS or WoT.  You could
>> implement it exactly as either of those does with Advogato.  How many
>> and how often a new user must solve captchas is only peripherally
>> related to which algorithm you run on the trust graph.  IIRC, trust in
>> FMS does not propagate very far at all, which means that for more than a
>> few people to see you, you need to be on many trust lists.  That means
>> solving many captchas or getting lots of manual ratings.  Advogato or
>> WoT (AIUI, anyway) both improve on this.
>
> Please read the FMS site for details. A solved captcha itself just
> announces your identity; it does not add any trust to it. And you only
> need to solve one captcha as long as the trust network is well connected,
> since others will get that new identity from other trust lists they
> trust. Now, since there is no trust value assigned, in a corner case one
> person could be enough to remove the spammer from your view. Basically,
> you need more neutral or good ratings than bad ratings, but all of them
> are done manually; none of that is done via solving a captcha.

Except you are wrong -- solving a captcha in FMS *does* add trust.  It
means that I see your post.  That means I have trusted you.  All you
are saying is that captcha trust is completely overridden by manual
settings.  It does still exist.  For comparison, note the case where I
solve no captchas and am not on anyone's trust list: you won't see my
post.
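In other words (a sketch in Python, names invented): the captcha grants a default trust that is enough for the identity's posts to be fetched, and any manual rating completely overrides it.

```python
# Sketch of the argument above: captcha-derived trust exists as a
# default, and any manual rating completely overrides it.

DEFAULT_CAPTCHA_TRUST = 50  # enough for the identity's posts to be fetched

def effective_trust(manual_rating=None, solved_my_captcha=False):
    """Trust I effectively grant an identity; None means posts not fetched."""
    if manual_rating is not None:
        return manual_rating          # manual settings always override
    if solved_my_captcha:
        return DEFAULT_CAPTCHA_TRUST  # implicit trust from the captcha
    return None                       # unknown identity: posts not fetched

assert effective_trust(solved_my_captcha=True) == 50  # captcha alone: seen
assert effective_trust(0, True) == 0                  # marked as a spammer
assert effective_trust() is None                      # never seen at all
```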

>
>>
>> I am proposing an improved solution.  Currently, in FMS or WoT, Sam
>> can solve a captcha Alice published.  Since he then has trust from
>> Alice, he can mark a large number of fake identities as trusted.
>
> So you did not read the idea behind FMS. See above. Additionally, you
> need trust-list trust assigned to you to get your fake identities onto
> other trust lists. You get neither message trust nor trust-list trust via
> solving captchas.

If Alice can see Sam's posts, then she has granted Sam trust.  I see
no other useful definition of the term.  The fact that trust is
granted by default does not make it not trust.  Whether granting trust
by default or only on the basis of captchas is a good idea or not is a
different question.  I don't really care which you do as long as I
have the option to do it my way.
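A rough sketch of how the capacity limit blocks that attack (heavily simplified; real Advogato computes a maximum flow over the trust graph, and all the data here is invented): Sam, known only through a captcha, gets capacity 1, so he is accepted himself but his fake identities receive nothing.

```python
# Simplified sketch of capacity-limited trust propagation.  An identity
# with capacity 1 is accepted but has nothing left to pass on, so a
# captcha-solving spammer cannot bootstrap a farm of fake identities.

from collections import deque

def accepted(graph, capacity, root):
    """Identities reached by capacity-limited trust propagation."""
    seen = {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if capacity.get(node, 0) <= 1:
            continue  # capacity 1: accepted, but cannot vouch onward
        for peer in graph.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

graph = {"alice": ["sam"], "sam": ["fake1", "fake2"]}
capacity = {"alice": 100, "sam": 1}  # Sam only solved a captcha
assert accepted(graph, capacity, "alice") == {"alice", "sam"}
```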

>
>>
>> If you assume that people will not maintain trust lists, then it
>> doesn't matter what algorithm you run on the trust graph.  There won't
>> be one.  FMS, WoT, and Advogato all fail completely under that
>> assumption.
>
> It is not the same. The system that needs the most manual work will work
> worse than one that needs less trust-list maintenance. Since most users
> are not spammers, they will need people to mark them as good with
> Advogato, while with FMS only the minority of spammers need to be marked
> as bad. Fewer marks, less work to do, so a better chance that it works.

All you are saying is that FMS has not been attacked by a motivated
spammer.  The cases of Frost and email suggest that when it is, you
will be wrong.

>
>>
>>>> Fundamentally, it's a question of whether you believe CAPTCHAs work.
>>>> I don't.  If you start with an assumption that CAPTCHAs are a minor
>>>> hindrance at most, then if you require that everyone sees messages
>>>> sent by identities that have only solved CAPTCHAs and not gained
>>>> manual trust, then you've made it a design criterion to permit
>>>> unlimited amounts of spam.  (That's bad.)  If you believe CAPTCHAs
>>>> work, then things are a bit easier...  but I think the balance of the
>>>> evidence is against that belief.
>>> Captchas may not be the ultimate solution, but they are one way to let
>>> people in while proving they are human. And you will need this limit
>>> (proof of humanity), so you will always need some sort of captcha or a
>>> real-friends trust network.
>>
>> Captchas do not prove someone is human.  They prove that someone
>> solved a problem.  If your captchas are good, that means they are more
>> likely to be human.  I work from an assumption that captchas are
>> marginally effective at best.  If you think I am mistaken in that,
>> please explain why.  From that assumption, I conclude that we need a
>> system that is reasonably effective against a spammer who can solve
>> significant numbers of captchas, but still is capable of making use of
>> the information that solving a captcha does provide.
>
> You cannot. Whatever you use as an entry barrier, if someone is able to
> break it in some automated way or with another massive attack, you are
> lost one way or another. The already-existing community may still work
> and stay, but new users won't be able to join.

I have proposed a solution to the problem.  Your response to that
solution has not been "that won't work" but rather "that won't work
perfectly."  I agree that it won't work perfectly.  I have yet to see
an alternate solution to the problem proposed that is better.

Evan Daniel
_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
