On 2019-07-20 07:59, Arne Babenhauserheide wrote:
> Hi,
>
> yanma...@cock.li writes:
>
>> Now, my idea is this: You set up a public (onion or clearnet) frontend
>> where you can make and read posts, with its back-end being FMS.
>> Frontends would be disposable and dime-a-dozen; a front-end with too
>> […]
>
> To get to this situation, you must make it very, very easy to host
> them. This might be a major endeavor (but one which would benefit
> Freenet a lot).
…
Well, would it? You can pass requests straight through to FMS and only intercept the parts related to posting. You'd also want to intercept the progress screens for downloads, which might be a bit harder.

But it shouldn't be too much work.

All you'd need to do is write the code that does the filtering. For HTTP -> Freenet, you'd want to block out everything like config pages, and only show the actual data. To make things easier, you could just expose the FMS HTTP frontend directly and pass all GET requests through.
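Roughly like this, in Python (the FMS port and the blocked path prefixes here are made up, just to show the shape of it):

    # Pass GET requests through to the local FMS web UI, hiding "internal"
    # pages. Port numbers and the blocked prefixes are made-up placeholders.
    import http.server
    import urllib.request

    FMS_UI = "http://127.0.0.1:8080"      # assumed local FMS HTTP frontend
    BLOCKED = ("/options", "/peers")      # assumed config/status pages to hide

    class FilterProxy(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith(BLOCKED):
                self.send_error(403, "Not exposed by this frontend")
                return
            with urllib.request.urlopen(FMS_UI + self.path) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "text/html"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8443), FilterProxy).serve_forever()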

The other side is a bit more complicated. You'd filter POST requests, checking whether they target the posting form in the FMS frontend. If they do, check for the auth cookie. If there's no cookie, 302 to some page like /auth, which gets a special exemption from the filtering. The easiest implementation I can think of is a small PHP script with a hardcoded secret: it checks whether the captcha is OK, and if so sets a cookie to (time || id || hash(secret || time || id)), where || is concatenation. The frontend has the same hardcoded secret, so it can verify the cookie.

Ugly but would probably work fine. For smaller attack surface, it can be replaced with a tiny CGI binary or whatever.

If it does have the cookie, then make a post using the ID in the cookie.
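A rough Python sketch of the /auth + proxy side (I'm using HMAC instead of the bare hash(secret || time || id), same idea; the secret, lifetime and "|" layout are placeholders):

    # Mint the cookie in /auth after a good captcha, verify it in the proxy
    # before letting a POST through.
    import hmac, hashlib, time, secrets

    SECRET = b"hardcoded-shared-secret"   # same value in /auth and the proxy
    MAX_AGE = 7 * 24 * 3600               # accept cookies for a week

    def mint_cookie() -> str:
        """Called by /auth once the captcha checks out."""
        ts = str(int(time.time()))
        uid = secrets.token_hex(8)        # the ID the frontend will post under
        tag = hmac.new(SECRET, f"{ts}|{uid}".encode(), hashlib.sha256).hexdigest()
        return f"{ts}|{uid}|{tag}"

    def check_cookie(cookie: str):
        """Called by the proxy on POSTs; returns the user ID, or None."""
        try:
            ts, uid, tag = cookie.split("|")
        except ValueError:
            return None
        good = hmac.new(SECRET, f"{ts}|{uid}".encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, good):
            return None
        if time.time() - int(ts) > MAX_AGE:
            return None
        return uid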

What I'm curious about is how the identity generation should proceed. In particular, can the WoT have multiple identities sharing the same key?

If so, it's quite trivial: you have a master key, let's call it "master@asdasd...", and then any user posts from "user_<unique ID>@asdasd...". For each post, a script runs that tells WoT to add it to the trust database.

The end result would be a binary that you download, give an FCP and an FMS port plus a captcha provider (with a built-in fallback, of course), and execute. On port 443 you get your frontend, no config needed.

In theory, there's no need to rate limit or do anything else complicated; the network would handle it fine. In practice, it might be best to limit it to 1 post/minute and put in some simple Bayesian filtering, so the master identity doesn't get completely sullied. Or does FMS do rate limiting already?
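If it doesn't, the frontend-side limiter is about ten lines (the one-minute interval being just the number above):

    # Per-identity rate limit; one post per minute as floated above.
    import time

    MIN_INTERVAL = 60.0                   # seconds between posts per identity
    _last_post: dict[str, float] = {}

    def allow_post(uid: str) -> bool:
        """Return True if this identity is allowed to post right now."""
        now = time.monotonic()
        last = _last_post.get(uid)
        if last is not None and now - last < MIN_INTERVAL:
            return False
        _last_post[uid] = now
        return True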

Certainly, it would take you some time, but it's not strictly speaking difficult work as long as this identity generation problem gets solved.
>> My idea is that each user posting would get some kind of unique name
>> (e.g. truncated salted IP hash, or for Tor users a cookie they need to
>> solve say 20 captchas to get - maybe you could do JS PoW or something
>> like that). Then the frontend would post with its key but that
>> name. It would also assign message trust slightly above zero, but no
>> list trust.
>>
>> Do you think this would work? It's a bit ugly taking the IPs, but not
>> disastrously bad. The server wouldn't need to do any IP banning of
>> pathological cases. It could carry out basic spam filtering
>> (e.g. Bayes), but it wouldn't have to. Captchas might be possible to
>> replace with rate limits.

> I’m thinking about this the way an attacker would. If I did not like
> your forums, I would simply DoS them by posting from many different IPs.
>
> Providing this with an ID just tied to solved captchas via cookies could
> work. Those would then be ephemeral identities. Combined with a limited
> posting rate and a limited lifetime (i.e. solve one additional captcha
> per week, so you cannot just collect IDs and then use them all at once
> without maintenance cost), that would prevent using this system to DoS
> FMS.

That makes implementation much simpler too, since you don't need to pass on the IP info or treat onions as a special case. Otherwise, you could use, for instance, the Spamhaus RBL; that would block all the open proxies. An attacker could pay to rent botnets or whatnot, but that costs money. So does solving captchas, so it's not a perfect defense either.
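The RBL check itself is tiny, if anyone wants it (standard DNSBL query format; what you do with a hit is policy):

    # Standard DNSBL lookup: reverse the IPv4 octets and query them under
    # the list's zone. Any answer means "listed"; NXDOMAIN means "clean".
    import socket

    def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False

    # e.g. require a captcha (or refuse) when is_listed(client_ip) is True.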

Doesn't FMS already limit posting rate on the client side?

Solving an additional captcha per week would be trivial to add. For more privacy, you could have that give you a brand-new identity, with trust assigned based on the old one. Ideally this would be of limited resolution (has good history / has no history / has poor history), so the identities can't be linked together, but burned identities still can't be "laundered". And this trust would be assigned by a separate identity, so people's distrust of that doesn't interfere with their distrust of master.

This might be overkill though. Adds implementation cost, and now the server gets access to non-public information (although it never has to save it). Easier to just tell people to make a new identity once a month.
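For what it's worth, the coarse-resolution trust above could be as small as this (the bucket boundaries and trust numbers are pulled out of thin air):

    # Coarse-resolution trust for a rolled-over identity; the bucket
    # boundaries and the trust numbers are arbitrary placeholders.
    def rollover_trust(posts_seen: int, spam_reports: int) -> int:
        if spam_reports > 0:
            return 0      # "has poor history"
        if posts_seen >= 10:
            return 20     # "has good history"
        return 10         # "has no history" yet, slightly above zero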

>> Specifically, a user that didn't like this would set list trust of the
>> master identity to 0. Do you reckon this would happen?

> Yes, I think this would happen, because one bad apple would spoil the
> whole identity.
>
> But if you found a way to pre-generate IDs and then assign them to
> new users (so the standard FMS spam-defense would work), then this idea
> could work.
>
> If the proxy had a main ID which gives trust-list-trust to these IDs,
> then people could decide whether they want to see the new IDs.

Well, this is exactly what I'm concerned about. Do you reckon they would blacklist the main ID's trust list because it has too many children that are rotten apples?
> Best wishes,
> Arne
> --
> To be apolitical
> is to be political
> without noticing it.

Another question: Say you want to forward mailing lists over Freenet, such as this one. What'd be the best approach?

Either you could have a single bot with a single username, and rely on the central moderation to kick spammers out. It then posts to FMS.

Or you could do Freemail mailing lists.

Or you could use the approach I outlined above.

But how do you get redundancy? If you have two such bots running, you'd get duplicated messages. If you have one, you get atrocious fault tolerance.

Does FMS support cancelling messages? Obviously you can never delete them, but you could ask user agents politely not to download them or judge you for their contents.

Then the bots could agree on some protocol: they make posts announcing themselves somewhere, and these announcements are assumed to take effect after X seconds. If other bots find a bot's X too low, they rate it negatively, but each bot gets to specify its own X. And a similar parameter, let's call it Y.

Then each bot computes hash(key || message hash) for all the bots it knows about and sorts the list of hashes. If it's first in the list, it's the one that should propagate the message first. If it's second, it should propagate the message if nobody else has sent it after Y seconds. If it's third, it waits Y*2 seconds and checks. If it's nth, it waits Y*(n-1) seconds. And so on, and so forth.
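In code, the ordering rule would be something like this (SHA-256 and the function shape are my guesses, assuming each bot knows the others' keys):

    # Every bot hashes (its key || the message hash), sorts the results,
    # and its rank in that order decides how long it waits before relaying.
    import hashlib

    def propagation_delay(my_key: bytes, message_hash: bytes,
                          all_keys: list[bytes], y_seconds: float) -> float:
        def rank(key: bytes) -> bytes:
            return hashlib.sha256(key + message_hash).digest()
        order = sorted(all_keys, key=rank)
        n = order.index(my_key)           # 0 = first in line, relays at once
        return y_seconds * n              # nth bot (1-based) waits Y*(n-1)

    # A bot relays only if nobody else has done so by the time its delay expires.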

Bots which "jump the gun" would get blacklisted by the other bots programmatically. Bots which censor messages would get blacklisted too, provided they didn't block all messages sent within a certain timeframe (blocking everything in a window looks like downtime rather than censorship).

Maybe this is a solved problem already in CS, though.

Another question is whether FCP already supports a "stripped-down mode", where it doesn't expose internal material, only stuff that's on the network. I know SIGAINT ran a Freenet <-> Tor proxy; do you know how they did it?

Cheers
