On Wed, May 30, 2001 at 10:10:29AM +1200, David McNab wrote:
> >It does create the problem that if people start passing around
> >www.xxx.free URLs then these can only be used with Freeweb.  Freeweb
> >should probably just interpret http://localhost:8081/ requests through
> >the proxy, it isn't as pretty, but at least it doesn't drive people away
> >from an existing cross-platform standard.
> 
> There's no reason why www.xxx.free can't work as a cross-platform standard.

Perhaps not, except that there is already a perfectly good cross-platform
standard which is implemented and has been proven to work.  If you would like to
argue for .free as a standard, then we should have that discussion, but it would
be counter-productive to try to force the issue by adopting it in your software
before it has been more widely accepted.  Personally, I think it is something of
a kludge; freenet:xxx URLs are more elegant and more in keeping with the URI
standard, although encouraging people to use freenet:xxx URIs at this stage (and
relying on browser plugins) also has issues.

The best solution to date is the simple, HTTP-compatible
http://localhost:8081/... approach (with Freegle-style configurability if
desired), and for the moment this is the only approach I endorse: it works
anywhere Freenet does, and doesn't require any hacking around with web browsers.
Your proxy could easily be modified to intercept these requests.
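
To make the point concrete, the whole "standard" here is nothing more than
dropping a percent-encoded Freenet key into the path of an ordinary HTTP request
to the local node.  A rough sketch in Python; the host, port and escaping rules
below are my assumptions, not a spec of what FProxy actually does:

    from urllib.parse import quote

    def fproxy_url(key, host="localhost", port=8081):
        # Percent-encode the key so characters like '@' survive ordinary
        # HTTP clients, but leave '/' alone so subpaths stay intact.
        return "http://%s:%d/%s" % (host, port, quote(key, safe="/"))

    print(fproxy_url("KSK@hello"))
    # -> http://localhost:8081/KSK%40hello

Any browser, spider or download tool that speaks HTTP can use such a URL with no
extra configuration, which is the whole attraction.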

> >How is that possible if it is just going through FProxy?
> Haven't you been reading the lists?
> FreeWeb is no longer using FProxy - it's getting Freenet data totally
> through FCP

Hadn't seen that, interesting.

> I hate to say this, but I suggest you may be digging in to a position
> without considering things with full objectivity.

I wasn't aware that FreeWeb used FCP rather than FProxy, but it really doesn't
affect the debate.  Creating new standards where an existing standard would work
just as well is still a bad thing.

> From previous discussions on lists and chat, I notice that Freenet
> developers aren't sharing your staunch objection against freenet: URI
> handlers, on the grounds that some weird browsers may not handle them.

In this project we choose the correct solutions; these are not always the most
popular solutions.  Anyway, most Freenet developers have not expressed an opinion
on the subject so far as I am aware; if I recall, it was Brandon who was arguing
the pro-freenet: side.  I still maintain that requiring browser plugins just to
achieve the illusion that Freenet is a protocol with similar standing to HTTP and
FTP is a very bad idea.  We go from a situation where people can use Freenet
easily with any web browser to one where we must provide plugins for every web
browser and update those plugins for every new browser release.  Not only will
this require much more work on our part, it will seriously complicate the
installation process, particularly for users with uncommon browsers.  Now if we
got some big benefit from this it might be worth considering, but we don't.

> >Perhaps, but the problem is that it is a paradigm which locks people
> >into FreeWeb, and which forces people to change their Proxy settings
> >which should not really be necessary and definitely should not be
> >forced upon people.
> 
> Once again, please RTFL (read the lists).
> I published source to fwproxy, which is the proxy component of FreeWeb, and
> now *compiles and runs on Linux*. Cross platform. And, *every* browser worth
> using supports proxy servers, and fwproxy supports external proxies. What's
> the hassle here? Are you saying we should support some obscure browser which
> Uncle Winston wrote, that doesn't support the use of proxy servers?

No, I am saying that if you want to change a fundamental Freenet protocol, then
it requires discussion, not just the release of a piece of software.  Even if I
agreed that this kind of approach was a good idea at this stage of Freenet
development, I would go for freenet:xxx plugins long before creating an
artificial TLD for Freenet.

> And are you saying that setting up a browser to use external proxies is
> harder than mastering the Freenet 'alphabet soup' URIs?

"hello" is a valid Freenet key (interpreted as freenet:KSK at hello), how is 
this
alphabet soup?
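
If anything, the defaulting rule makes plain keys about as simple as they can
get.  A sketch of that rule as I have just described it (the exact list of
recognised prefixes is an assumption on my part):

    def normalize_key(raw):
        # A bare name defaults to the KSK keyspace, so "hello" is read
        # as freenet:KSK@hello; already-qualified keys pass through.
        if raw.startswith("freenet:"):
            return raw
        if not any(raw.startswith(p) for p in ("KSK@", "SSK@", "MSK@", "CHK@")):
            raw = "KSK@" + raw
        return "freenet:" + raw

    assert normalize_key("hello") == "freenet:KSK@hello"
    assert normalize_key("MSK@KSK@freeweb/sitename//") == \
        "freenet:MSK@KSK@freeweb/sitename//"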

> >>Perhaps, but what happens when we start to see http://xxx.free/ URLs in
> >>webpages which can only be used with FreeWeb (where there is no good
> >>reason that these freesites could not be used on any platform that
> >>supports Freenet)?  This will simply restrict the audiences for FreeWeb
> >>sites, and simultaneously lower the amount of generally available
> >>content on Freenet - all for a cosmetic improvement.
> 
> What's the problem if everyone knows that www.sitename.free maps to
> freenet:MSK@KSK@freeweb/sitename// ?

Because everybody won't know that.
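
For anyone following along, the rewrite being described is simple enough to
state.  This is a reconstruction from the mapping quoted above, not FreeWeb's
actual code; the handling of the "www." prefix and of errors is my guess:

    def free_host_to_key(host):
        # www.sitename.free -> freenet:MSK@KSK@freeweb/sitename//
        host = host.lower()
        if host.startswith("www."):
            host = host[4:]
        if not host.endswith(".free"):
            raise ValueError("not a .free hostname: %r" % host)
        return "freenet:MSK@KSK@freeweb/%s//" % host[:-len(".free")]

    assert free_host_to_key("www.sitename.free") == \
        "freenet:MSK@KSK@freeweb/sitename//"

The mapping itself is trivial; the objection is that a reader of a random web
page has no way of knowing it is in effect, or of applying it without FreeWeb
installed.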

> There would have been those who resisted the advent of http on a mass scale,
> expressing similar disdain for new syntaxes and protocols, and playing down
> the value of 'cosmetic improvements'. But the developers of http weren't
> discouraged, they pushed forward and found mass acceptance of this standard
> among a new generation (and open-minded members of current generations).

I have nothing against cosmetic improvements, but not when they:

a) are unnecessary (there are already many online Freesites, none of which have
   expressed a need for the approach you are trying to foist on everyone)
b) increase installation complexity
c) encourage a reliance on insecure KSKs
d) are an ugly kludge on top of an existing protocol

I think that trying to equate your mechanism with the rise of the WWW is
somewhat humorous.  There are far more examples of people making things worse by
trying to solve non-existent problems in the name of elegance.  Your solution
does not increase ease of use; in fact, it decreases it, due to the added
complexity of having to play with browser settings.

> If I recall correctly, it was partly due to your insistence (2 conversations
> on #freenet) that I abandoned the more secure system of mapping .free URLs
> into a secure SSK DNS tree. If I was a bit cynical I could speculate that
> you encouraged this so you could employ the argument you're now using here.

No, KSKs are better than your last solution, but they still suck.

> And another thing - there is a total horde of excellent tools, such as web
> accelerators, website downloaders etc, many of which choke on their own
> vomit when fed the 'politically correct' freenet URLs - when they see
> 'http://127.0.0.1/MSK%40SSK%40alphabet-soup/subkey//path/file.ext', they
> convert the '//' into '/', which makes them totally unusable via FProxy.

I have not heard a single example of this.  If true, then it should be
considered, and the MSK protocol may even need to be modified, but it really
doesn't mean that we should convert to your .free protocol.
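
For reference, the failure mode being reported would look something like the
following.  posixpath.normpath stands in here for whatever path cleanup such a
tool applies; I am taking the report at face value and have not verified it
against a real web accelerator:

    import posixpath

    # The '//' separating the MSK document name from the in-site path is
    # significant; naive path normalisation collapses it and mangles the key.
    path = "/MSK@SSK@alphabet-soup/subkey//path/file.ext"
    print(posixpath.normpath(path))
    # -> /MSK@SSK@alphabet-soup/subkey/path/file.ext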

> Lastly, Ian, please have a serious think about some of the positions you are
> digging into here.

I have been thinking about these issues for quite some time now, probably much
longer than you have.  You would do well to listen.

Ian.