Re: Proper form for a public FMS outproxy?

2019-07-21 Thread yanmaani

On 2019-07-21 08:11, Arne Babenhauserheide wrote:

> yanma...@cock.li writes:
>
>> On 2019-07-20 07:59, Arne Babenhauserheide wrote:
>>
>>> yanma...@cock.li writes:
>>>
>>>> Frontends would be disposable and dime-a-dozen
>>>
>>> To get to this situation, you must make it very, very easy to host
>>> them. This might be a major endeavor (but one which would benefit
>>> Freenet a lot).
>
> If you want to make it dime-a-dozen, you need to make it easy to
> install Freenet with FMS already setup.

Aren't Freenet and FMS already trivial to install programmatically?

> If you have actual IDs, you must provide a secure way to log in — not
> secure against the server, but secure against other users
> impersonating you.
>
> Visibility is also based on the ID, otherwise you don’t get real spam
> defense (you’d have to rely on the site hoster to manage spam for you).

Well, doesn't the cookie mechanism I described do this? Everyone is
anonymous, but personally I think that goes under "it's not a bug, it's
a feature". No need for an actual login field, although you could give
users a link that sets the cookie for them, which they can use as a
bookmark if they want.


If you need reliable names, you could implement tripcode functionality. 
For a bit more security, secure tripcodes. This also simplifies 
implementation. On the other hand, it's ugly. But then again, Worse is 
Better, and if you want your software to spread you should make it as 
simple as possible, like a virus.


https://en.wikipedia.org/wiki/Imageboard#Tripcodes

Recap: when posting, you enter a name and a secret, separated by one or
two pound symbols - for instance, yanmaani#NDl5tlZY. The server then
hashes whatever comes after the pound symbol(s) and replaces it with
its (possibly truncated) hash, so an example output would be
yanmaani#Ri2dTa5T. With two pound symbols, a secure tripcode is
computed: the server appends a secret salt, which makes offline
brute-forcing infeasible. That has the downside of not being portable
between servers, of course.


Maybe you'd want to replace the pound symbol with something else, but
that's an implementation detail.
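
To make the recap concrete, here is a minimal sketch of the server side
in Python. (Assumptions: truncated SHA-256 stands in for the classic
crypt(3)-based imageboard algorithm, and the salt value is made up.)

    import hashlib

    SERVER_SALT = b"long-random-server-secret"  # hypothetical; keep private

    def tripcode(field):
        """Turn 'name#secret' or 'name##secret' into 'name#hash'."""
        if "##" in field:                  # secure tripcode: salted server-side
            name, secret = field.split("##", 1)
            digest = hashlib.sha256(SERVER_SALT + secret.encode()).hexdigest()
        elif "#" in field:                 # plain tripcode: reproducible anywhere
            name, secret = field.split("#", 1)
            digest = hashlib.sha256(secret.encode()).hexdigest()
        else:
            return field                   # no tripcode requested
        return name + "#" + digest[:8]     # truncate, imageboard-style

    print(tripcode("yanmaani#NDl5tlZY"))   # -> yanmaani# plus 8 hex chars

Plain tripcodes can be brute-forced offline by anyone; secure tripcodes
only by whoever holds the salt.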


A user who is really serious about becoming famous should either
GPG-sign their messages or download Freenet. I don't think "secure
identities" are useful for much more than spam control, and for that
purpose tripcodes are more than enough.


Anyway, the upside of predictable names is that you could separate the
"announce" and "post" parts. Posting could then require, for instance,
one captcha, and announcing ten. And there would be no need for the
server to do any book-keeping.


So the workflow for a new user would be
a1) visit gateway.com/announce
a2) type in their name (optional) and tripcode
a3) solve 10 captchas
b1) visit gateway.com/board/thread
b2) make their post, type in tripcode
b3) solve 1 captcha, for rudimentary flood/tripcode bruteforce control

Steps B can be done before steps A, but a minor UX issue is users doing
steps B and only B. There could be a warning if the server believes that
to be the case, though. A sketch of the two endpoints follows below.
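
For illustration, a sketch of that workflow with no server-side
book-keeping, reusing tripcode() from above; check_captchas() and the
publish_*() calls are made-up stand-ins for real captcha verification
and FMS posting:

    ANNOUNCE_COST = 10   # captchas to announce a name/tripcode
    POST_COST = 1        # captchas per post, for flood control

    def check_captchas(solutions, required):   # hypothetical verifier
        return len(solutions) >= required      # placeholder logic only

    def publish_announcement(trip):            # hypothetical FMS announce
        print("ANNOUNCE", trip)

    def publish_post(thread, trip, body):      # hypothetical FMS post
        print("POST", thread, trip, body[:40])

    def announce(name_field, captcha_solutions):
        if not check_captchas(captcha_solutions, ANNOUNCE_COST):
            raise PermissionError("captchas failed")
        trip = tripcode(name_field)
        publish_announcement(trip)
        return trip

    def post(thread, name_field, body, captcha_solutions):
        if not check_captchas(captcha_solutions, POST_COST):
            raise PermissionError("captcha failed")
        # The trip is recomputed from the secret on every post, so the
        # server keeps no user database: announcements live in FMS itself.
        publish_post(thread, tripcode(name_field), body)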


Kind of like on this mailing list: first you solve a captcha to get your 
e-mail added to the list, and then you can post. Except for the part 
where you can't first post and then solve captchas.


Super ugly but probably decently functioning hybrid approach: posting, 
if done from a clean IP, gets a very low trust from a seed node, 
provided the IP hasn't been used in the past week or whatever. This 
would make it possible for "normal users" (e.g. casuals dropping by who 
just want to try it out) to post without much hassle, while still making 
it possible for Tor users to post. And the network could handle 
blacklisting that seed node if botnets are raising too much hell.


Of course now the implementation is much harder, since you need to pass 
the IP on to the onion service without terminating SSL. So I would 
rather not go down that route.


The network could also handle blacklisting old identities that suddenly
resurface - an attacker stands to gain nothing more from solving 20
captchas and getting a new identity from the service than from solving
20 captchas and getting a new identity with its own key, so I wouldn't
imagine it being that big a problem.


Another way could be for posts without well-announced identities to go 
into some kind of "pool" where you can vote on them. Or you could just 
have low standards for initial identity verification.


Probably, this can be fine-tuned on the fly as long as you get the
concept working. So I wouldn't bike-shed it too much.

>> What I'm curious about is how the identity generation should
>> proceed. In particular, can the WoT have multiple identities sharing
>> the same key?
>
> No, and that wouldn’t be a good idea, since they could switch to the
> other ID if they’d manage to trick the server into using another public
> name.

But there's no issue if the server is well-coded? Or does WoT flat-out
not have the ability to deal with multiple identities sharing the same
key?

Re: Proper form for a public FMS outproxy?

2019-07-21 Thread Arne Babenhauserheide

yanma...@cock.li writes:

> On 2019-07-20 07:59, Arne Babenhauserheide wrote:
>> Hi,
>>
>> yanma...@cock.li writes:
>>
>>> Now, my idea is this: You set up a public (onion or clearnet) frontend
>>> where you can make and read posts, with its back-end being FMS.
>> …
>>> Frontends would be disposable and dime-a-dozen; a front-end with too
>> To get to this situation, you must make it very, very easy to host
>> them. This might be a major endeavor (but one which would benefit
>> Freenet a lot).
>> …
> Well, would it? You can pass through FMS, and only intercept the parts
> related to posting. You'd also want to intercept the progress screens
> for downloads, which might be a bit harder.

If you want to make it dime-a-dozen, you need to make it easy to install
Freenet with FMS already setup.

> All you'd need to do is write the code that does the filtering.

If you have actual IDs, you must provide a secure way to log in — not
secure against the server, but secure against other users impersonating
you.

Visibility is also based on the ID, otherwise you don’t get real spam
defense (you’d have to rely on the site hoster to manage spam for you).

> What I'm curious about is how the identity generation should
> proceed. In particular, can the WoT have multiple identities sharing
> the same key?

No, and that wouldn’t be a good idea, since they could switch to the
other ID if they’d manage to trick the server into using another public
name.

> That makes implementation much simpler too, since you don't need to
> pass on the IP info or treat onions as a special case. What you could
> do otherwise is to use for instance the Spamhaus RBL. That would block

If you block open proxies, then you exclude all tor users, but you don’t
get real security, because botnets are horribly cheap.

> Doesn't FMS already limit posting rate on the client side?

Not that I know of. It delays messages to provide more anonymity.

> Solving an additional captcha per week would be trivial to add.

> This might be overkill though. Adds implementation cost, and now the
> server gets access to non-public information (although it never has to
> save it). Easier to just tell people to make a new identity once a
> month.

The server always has non-public information about the users. The
question is just how to represent it.

>>> Specifically, a user that didn't like this would set list trust of the
>>> master identity to 0. Do you reckon this would happen?
>>
>> Yes, I think this would happen, because one bad apple would spoil the
>> whole identity.
>>
>> But if you would find a way to pre-generate IDs and then assign them to
>> new users (so the standard FMS spam-defense would work), then this idea
>> could work.
>>
>> If the proxy had a main ID which gives trust-list-trust to these IDs,
>> then people could decide whether they want to see the new IDs.
>>
> Well, this is what I'm concerned about. Do you reckon they would
> blacklist the main ID's trust list, because it has too many children
> which are rotten apples?

Yes. It would then be the same as those public IDs (where the secret key
was published intentionally) which get blocked after abuse.

> Then the bots could agree on some protocol; they make posts announcing
> themselves somewhere, and then these are assumed to take effect after
> X seconds. If other bots find X too low, they rate them negatively,
> but they all get to specify X. And a similar parameter, let's call it
> Y.

There are distributed leader election protocols. You could use a simple
bully protocol:
https://en.wikipedia.org/wiki/Leader_election#Asynchronous_ring
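
A toy, single-process sketch of the idea (real bots would exchange FMS
messages with timeouts instead of direct calls; the IDs are invented):

    # Bully election: the highest ID that answers becomes leader.
    class Bot:
        def __init__(self, bot_id, peers):
            self.id = bot_id
            self.peers = peers     # shared list of all bots, incl. self
            self.alive = True

        def election(self):
            higher = [p for p in self.peers if p.id > self.id and p.alive]
            if higher:
                # A live higher-ID bot takes over the election.
                return max(higher, key=lambda p: p.id).election()
            return self.id         # nobody higher answered: lead

    bots = []
    for i in (3, 7, 9, 12):
        bots.append(Bot(i, bots))
    bots[-1].alive = False         # current leader 12 disappears
    print(bots[0].election())      # -> 9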

> Bots which "jump the gun" would get blacklisted by the other bots
> programmatically. Bots which censor messages would get blacklisted,
> provided they didn't block all messages sent within a certain
> timeframe.

You’d likely have to block them via FMS and only consider bots in the
distributed algorithm which are not blacklisted by given moderator IDs.

> Another question is if FCP already supports a "stripped-down mode",
> where it doesn't expose internal material, only stuff that's on the
> network. I know SIGAINT ran a Freenet <-> Tor proxy, do you know how
> they did it?

There is public gateway mode, but I would not vouch for its security —
it might have deteriorated over the past years of little usage.

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Re: Proper form for a public FMS outproxy?

2019-07-20 Thread yanmaani

On 2019-07-20 07:59, Arne Babenhauserheide wrote:

> Hi,
>
> yanma...@cock.li writes:
>
>> Now, my idea is this: You set up a public (onion or clearnet) frontend
>> where you can make and read posts, with its back-end being FMS.
> …
>> Frontends would be disposable and dime-a-dozen; a front-end with too
> To get to this situation, you must make it very, very easy to host
> them. This might be a major endeavor (but one which would benefit
> Freenet a lot).
> …

Well, would it? You can pass through FMS, and only intercept the parts
related to posting. You'd also want to intercept the progress screens
for downloads, which might be a bit harder.


But it shouldn't be too much work.

All you'd need to do is write the code that does the filtering. For HTTP 
-> Freenet, you'd want to block out everything like config pages, and 
only show the actual data. To make things easier, you could just expose 
the FMS HTTP frontend directly and pass all GET requests through.


The other side is a bit more complicated. You'd filter POST requests,
checking whether they target the boxes in the FMS frontend used for
posting. If so, check the cookie. If the request doesn't carry the
cookie, 302 it to some page like /auth, which gets a special exemption.
The easiest implementation I can think of is a small PHP script with a
hardcoded secret. It checks whether the captcha is OK; if so, it sets a
cookie to (time || id || hash(secret || time || id)), where || is
concatenation. The frontend then has the same hardcoded secret.


Ugly, but it would probably work fine. For a smaller attack surface, it
can be replaced with a tiny CGI binary or whatever.
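
A minimal sketch of that cookie scheme (Python for brevity; an HMAC
replaces the bare hash(secret || time || id), since that avoids
length-extension forgery, and the secret value is made up):

    import hmac, hashlib, time

    SECRET = b"shared-hardcoded-secret"   # same value in /auth and frontend

    def mint_cookie(user_id):
        ts = str(int(time.time()))
        mac = hmac.new(SECRET, (ts + "|" + user_id).encode(),
                       hashlib.sha256).hexdigest()
        return ts + "|" + user_id + "|" + mac        # time || id || mac

    def check_cookie(cookie, max_age=7 * 86400):
        """Return the id if the cookie is genuine and fresh, else None."""
        try:
            ts, user_id, mac = cookie.split("|")
        except ValueError:
            return None
        good = hmac.new(SECRET, (ts + "|" + user_id).encode(),
                        hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, good):
            return None                              # forged or corrupted
        if int(time.time()) - int(ts) > max_age:
            return None                              # expired: /auth again
        return user_id

Note that the server stores nothing; the cookie authenticates itself.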


If it does have the cookie, then make a post using the ID in the cookie.

What I'm curious about is how the identity generation should proceed. In 
particular, can the WoT have multiple identities sharing the same key?


If so, it's quite trivial: you have a master key, let's call it
"master@asdasd...". Then any user posts from "<user_ID>@asdasd...", and
each post triggers a script that tells WoT to add the identity to the
trust database.

The end result would be a binary that you download, give an FCP port,
an FMS port, and a captcha provider (with a built-in fallback, of
course), and execute. On port 443 you get your frontend, no config
needed.


In theory, there's no need to rate limit or do anything else
complicated; the network would handle it fine. In practice, it might be
best to limit it to 1 post/minute and put in some simple Bayesian
filtering, so the master identity doesn't get completely sullied. Or
does FMS do rate limiting already?
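
The 1 post/minute limit is only a few lines on top of that, e.g. keyed
by the cookie identity (sketch; the names are invented):

    import time

    MIN_INTERVAL = 60.0                # seconds between posts per identity
    _last_post = {}

    def allow_post(identity):
        """True (and the post is recorded) if under the rate cap."""
        now = time.monotonic()
        last = _last_post.get(identity)
        if last is not None and now - last < MIN_INTERVAL:
            return False               # too soon; ask the user to wait
        _last_post[identity] = now
        return True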


Certainly, it would take you some time, but it's not strictly speaking 
difficult work as long as this identity generation problem gets solved.

>> My idea is that each user posting would get some kind of unique name
>> (e.g. truncated salted IP hash, or for Tor users a cookie they need to
>> solve say 20 captchas to get - maybe you could do JS PoW or something
>> like that). Then the frontend would post with its key but that
>> name. It would also assign message trust slightly above zero, but no
>> list trust.
>>
>> Do you think this would work? It's a bit ugly taking the IPs, but not
>> disastrously bad. The server wouldn't need to do any IP banning of
>> pathological cases. It could carry out basic spam filtering
>> (e.g. Bayes), but it wouldn't have to. Captchas might be possible to
>> replace with rate limits.
>
> I’m thinking about what I, as an attacker, would do. If I did not like
> your forums, I would simply DoS them by posting from many different
> IPs.
>
> Providing this with an ID just tied to solved captchas via cookies
> could work. That would then be ephemeral identities. If combined with
> limited posting rate and limited lifetime (i.e. solve one additional
> captcha per week so you cannot just collect IDs and then use them all
> at once without maintenance cost), this would prevent using this
> system to DoS FMS.

That makes implementation much simpler too, since you don't need to
pass on the IP info or treat onions as a special case. What you could
do otherwise is use, for instance, the Spamhaus RBL. That would block
all the open proxies. An attacker could pay to rent botnets or whatnot,
but that costs money. So does solving captchas, so it's not a perfect
defense either.
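
Checking an RBL is a single DNS lookup: reverse the IPv4 octets and
query them under the list's zone; an answer in 127.0.0.0/8 means
listed, NXDOMAIN means clean. A sketch (note that Spamhaus limits free
use, e.g. queries routed through large public resolvers are refused):

    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        """DNSBL check for an IPv4 address."""
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            answer = socket.gethostbyname(query)
        except socket.gaierror:
            return False                   # NXDOMAIN: not listed
        return answer.startswith("127.")   # 127.0.0.x codes mean listed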


Doesn't FMS already limit posting rate on the client side?

Solving an additional captcha per week would be trivial to add. For
more privacy, you could have that hand out a brand-new identity with
trust assigned based on the old one. Ideally this would be of limited
resolution (has good history / has no history / has poor history), so
the identities can't be linked together, but burned identities still
can't be "laundered". And this would be assigned by a new identity, so
people's distrust of that doesn't interfere with their distrust of
master.
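
The limited-resolution carry-over could be as coarse as this (bucket
boundaries invented; FMS trust values run 0-100):

    def trust_bucket(old_trust):
        """Collapse a retiring identity's record into three buckets, so
        the new identity can't be linked back via an exact score."""
        if old_trust is None:
            return "no-history"
        return "good-history" if old_trust >= 50 else "poor-history"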


This might be overkill though. It adds implementation cost, and now the
server gets access to non-public information (although it never has to
save it). Easier to just tell people to make a new identity once a
month.

Re: Proper form for a public FMS outproxy?

2019-07-20 Thread Arne Babenhauserheide
Hi,

yanma...@cock.li writes:

> Now, my idea is this: You set up a public (onion or clearnet) frontend
> where you can make and read posts, with its back-end being FMS.
…
> Frontends would be disposable and dime-a-dozen; a front-end with too
To get to this situation, you must make it very, very easy to host
them. This might be a major endeavor (but one which would benefit
Freenet a lot).
…
> My idea is that each user posting would get some kind of unique name
> (e.g. truncated salted IP hash, or for Tor users a cookie they need to
> solve say 20 captchas to get - maybe you could do JS PoW or something
> like that). Then the frontend would post with its key but that
> name. It would also assign message trust slightly above zero, but no
> list trust.
>
> Do you think this would work? It's a bit ugly taking the IPs, but not
> disastrously bad. The server wouldn't need to do any IP banning of
> pathological cases. It could carry out basic spam filtering
> (e.g. Bayes), but it wouldn't have to. Captchas might be possible to
> replace with rate limits.

I’m thinking about what I, as an attacker, would do. If I did not like
your forums, I would simply DoS them by posting from many different IPs.

Providing this with an ID just tied to solved captchas via cookies could
work. That would then be ephemeral identities. If combined with limited
posting rate and limited lifetime (i.e. solve one additional captcha per
week so you cannot just collect IDs and then use them all at once
without maintenance cost), this would prevent using this system to DoS
FMS.

> Specifically, a user that didn't like this would set list trust of the
> master identity to 0. Do you reckon this would happen?

Yes, I think this would happen, because one bad apple would spoil the
whole identity.

But if you would find a way to pre-generate IDs and then assign them to
new users (so the standard FMS spam-defense would work), then this idea
could work.

If the proxy had a main ID which gives trust-list-trust to these IDs,
then people could decide whether they want to see the new IDs.

Best wishes,
Arne
--
Unpolitisch sein
heißt politisch sein
ohne es zu merken




Proper form for a public FMS outproxy?

2019-07-19 Thread yanmaani

Hi,

There are a lot of projects aiming to create decentralized forums. One
of them is, as you know, FMS.


FMS has the obvious downside that you have to install Freenet, along
with the various Web of Trust hassle.


Another one was Usenet. And a later project following the same idea, 
NNTPchan.


Now, my idea is this: You set up a public (onion or clearnet) frontend 
where you can make and read posts, with its back-end being FMS.


(It would be possible to extend this concept further, too. For
instance, what's stopping anyone from registering a new FMS identity
and a new e-mail account, writing some code, and exposing this mailing
list over FMS? The possibilities are endless, and all such applications
would automatically peer with one another by virtue of using the same
common back-end.)


Frontends would be disposable and dime-a-dozen; a front-end with too 
picky trust lists could get replaced by any other, a front-end with too 
lax posting standards would get blacklisted, and a front-end with too 
harsh posting standards wouldn't get used by anybody.


This has some pretty significant upsides. For one, it's not possible to 
take out the whole network of nodes, as happened to NNTPchan. So 
assuming Freenet isn't taken down, you have a server with absolutely no 
interesting content and an indestructible backend. With the backend on a 
hidden service and a clearnet server just passing through HTTPS, 
adversaries wouldn't even have anything to go on. And in Europe, the 
operator of a proxy has absolutely no legal responsibility anyway, as 
long as he doesn't interfere with the content.


This makes the FMS -> frontend direction trivial: copy all messages
with a high enough trust score, carry out no filtering at all, done. No
need to filter piracy/CP any more than the network already does.
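
For this direction the frontend can simply pull from FMS's built-in
NNTP interface. A sketch, assuming FMS's default NNTP port 1119 and a
made-up board name (nntplib ships with Python up to 3.12):

    import nntplib

    # FMS exposes each board as a newsgroup over NNTP.
    server = nntplib.NNTP("127.0.0.1", 1119)

    _, count, first, last, name = server.group("freenet")  # board assumed
    _, overviews = server.over((max(first, last - 50), last))  # ~50 headers

    for number, over in overviews:
        # FMS applies its trust threshold before serving messages, so
        # everything here is already above the configured score.
        print(number, over["subject"], over["from"])

    server.quit()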


But how about the other way around? Since users would be anonymous or
at best pseudonymous, the FMS model for spam filtering breaks down. If
you put a CAPTCHA in front of posting, the amount of spam would be very
low, but there still might be some, which would cause the front-end to
get blacklisted by some users.


My idea is that each user posting would get some kind of unique name 
(e.g. truncated salted IP hash, or for Tor users a cookie they need to 
solve say 20 captchas to get - maybe you could do JS PoW or something 
like that). Then the frontend would post with its key but that name. It 
would also assign message trust slightly above zero, but no list trust.
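
A sketch of the name derivation, assuming a per-deployment salt
(rotating the salt unlinks all past names):

    import hashlib

    NAME_SALT = b"per-deployment-random-salt"  # hypothetical; keep secret

    def name_for_ip(ip):
        """Truncated salted hash of the poster's IP, e.g. 'anon-3f9c21'."""
        digest = hashlib.sha256(NAME_SALT + ip.encode()).hexdigest()
        return "anon-" + digest[:6]

    # Tor users would instead get a name bound to their captcha-earned
    # cookie, since there is no meaningful IP to hash.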


Do you think this would work? It's a bit ugly taking the IPs, but not 
disastrously bad. The server wouldn't need to do any IP banning of 
pathological cases. It could carry out basic spam filtering (e.g. 
Bayes), but it wouldn't have to. Captchas might be possible to replace 
with rate limits.


Specifically, a user that didn't like this would set list trust of the 
master identity to 0. Do you reckon this would happen?