Toad wrote:

The thing is, the lack of search capabilities reduces
the usability of Freenet.

Of course. There are ways to implement search, however. Sooner or later
somebody will implement a good spider-based anonymous search.

I searched a bit on the web. At http://conferences.oreillynet.com/cs/p2pweb2001/view/e_sess/1669 I found someone claiming that searching Freenet would be possible real soon, to quote: "right about now". That was in 2001. At http://www.freenet.org.nz/search/ I found a totally defunct search engine, obviously based on the same principle I'm trying to apply now.

I fully agree with you that anonymous search is much better
than a non-anonymous one. However, as I mentioned, the problem
of anonymity has two sides: that of the publisher and that
of the user. If a non-anonymous search solves one part without
affecting the other, what's the harm in it?

This would probably have two components: 1. A spider, which would
spider out from known freesites, scan NIMs and Frost traffic, and
insert index files. 2. A client, probably integrated into fproxy,
which would fetch the index files appropriate to the given search.

You mean creating index files before a search has been made? Wouldn't that be highly inaccurate and/or produce massive volumes of indices?
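
For what it's worth, here is roughly what I imagine "index files before any search" could look like: a minimal Python sketch, assuming indices are bucketed by keyword hash so that no single file grows huge and the client only ever fetches the one bucket its query maps to. Every name in it (index_key, fetch_file, the SSK-style naming) is made up for illustration, not real Freenet API:

    # Hypothetical sketch: pre-built, keyword-bucketed index files.
    # The spider writes one small index per keyword hash bucket; the
    # client fetches only the bucket its query hashes to.
    import hashlib
    import json

    def index_key(keyword):
        # Predictable per-keyword key, so a client can compute where
        # the index for a given keyword would have been inserted.
        digest = hashlib.sha1(keyword.lower().encode("utf-8")).hexdigest()
        return "SSK@search-index/" + digest[:8]

    # Spider side: documents is an iterable of (freenet_uri, text).
    def build_indices(documents):
        indices = {}
        for uri, text in documents:
            for word in set(text.lower().split()):
                bucket = indices.setdefault(index_key(word), {})
                bucket.setdefault(word, []).append(uri)
        return indices  # each value gets inserted as one index file

    # Client side: fetch_file stands in for whatever node call
    # retrieves a key's contents as a JSON string.
    def search(keyword, fetch_file):
        raw = fetch_file(index_key(keyword))
        return json.loads(raw).get(keyword.lower(), [])

The accuracy worry stands, of course: such indices are only as fresh as the spider's last pass.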

I can publish stuff anonymously all I want
but, unless I post a URL somewhere, nobody is going to
find my publications.

Indeed. Thus we have NIMs, FreeMail and Frost within Freenet, and
outside it we have Mixmaster remailers, IIP, I2P, various kinds of
proxies and so on. Sadly some people use Hushmail too, which is not
exactly the safest option. But there are many possibilities.

All this put together is still a *very* small world. If I found and published, say, the Bush administration's plans to invade Cuba, or detailed information on Israel's chemical and biological weapons, I wouldn't want this information to reach only the users of Freenet and Hushmail; I'd want it to reach the huge and clueless masses who watch CNN and use Hotmail. And I'd also want to protect my anonymity damn well. The way to go? Publish on Freenet and let automation, i.e. nobody, make the bridge to the web.
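
To illustrate what I mean by "the bridge": a minimal sketch of such an automated gateway, assuming a local node's fproxy answers HTTP on 127.0.0.1:8888. This is just the shape of the idea, not working gateway code:

    # Sketch of a Freenet-to-web bridge: ordinary web visitors hit
    # this plain HTTP server, which forwards the requested path to a
    # local node's fproxy. The port numbers are assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    FPROXY = "http://127.0.0.1:8888"

    class Bridge(BaseHTTPRequestHandler):
        def do_GET(self):
            # The gateway, not the visitor, talks to Freenet; the
            # visitor needs nothing but an ordinary browser.
            try:
                with urlopen(FPROXY + self.path) as resp:
                    body = resp.read()
            except OSError:
                self.send_error(502, "node unreachable")
                return
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Bridge).serve_forever()

The point is that the operator of such a gateway learns nothing about who published the content, and the CNN-and-Hotmail crowd never has to install anything.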

How do you propose to protect against spam and plain malicious content?

I don't. I'm not Google. As you have already gathered, my financial capacity is enough to run a 39-euro server, but not a 78-euro one. Because of that, things get very simple: if I build a Freenet search engine, it will be just as well or as ill protected from spam and malicious content as Freenet itself is.

> Freenet does not know the
> URIs of data that passes through the node, only those requested locally.

It does know the requests that pass through the node.

Nope. It doesn't. It only knows the routing keys, which are insufficient
to decrypt the actual data. Any other URIs in the logs will be locally
originated. Example:

CHK@<routing key>,<decrypt key>/<human readable key>

Uhm, there's something eluding me here. You know Freenet's internals; I don't. If you say so, then so it is. Yet I stuck some of the URIs I found in my logs into my browser and got sites I had never visited before.

Taking what you say here for granted, the entire discussion
up to this point is probably a meaningless exchange based
on some misunderstanding on my part. But what?
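
Just to spell out how I now read your example, here's a toy Python sketch; the key names are placeholders, and I'm taking the textual format above at face value:

    # Toy sketch of the CHK URI anatomy quoted above (placeholder
    # key names; nothing here is real Freenet code).
    def split_chk(uri):
        assert uri.startswith("CHK@")
        keys, _, readable = uri[len("CHK@"):].partition("/")
        routing_key, _, decrypt_key = keys.partition(",")
        return routing_key, decrypt_key, readable

    routing, decrypt, name = split_chk(
        "CHK@routingKeyBytes,decryptKeyBytes/example.html")
    # A forwarding node sees, and can log, only routing_key; without
    # decrypt_key it cannot decrypt the data, and it cannot
    # reconstruct the full URI from the routing key alone.

If that's right, then a complete, working URI should never appear in my logs unless it was requested locally. Which is exactly what puzzles me about what I saw.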

[URIs from logs]

Would be interested to see some of this list.

Duh. So am I by now, but with all the messing around today I deleted them. I can try again though.

Are you running a public
gateway? Are you fetching lots of stuff locally?

Neither.

Z


--
The future is like a baboon's arse, gaudy and full of shit. (Arne Anka)
