On Tue, 18 Aug 2009 21:08:25 -0400, Jonas Sicking <jo...@sicking.cc> wrote:

On Tue, Aug 18, 2009 at 12:39 PM, Michael A. Puls
II<shadow2...@gmail.com> wrote:
On Tue, 18 Aug 2009 15:09:53 -0400, Jonas Sicking <jo...@sicking.cc> wrote:

On Tue, Aug 18, 2009 at 12:03 PM, Michael A. Puls
II<shadow2...@gmail.com> wrote:

O.K. Thanks. filedata: wouldn't work then if the user has to choose the file.

Maybe it would help if you started with a use case. What type of thing
are you trying to build?

Many times when people deal with file:// urls it is because they are
building a website on a local file system, and then at appropriate
times publish that website by copying the local files to a web server.

I support that convenience (when dealing with static files) very much. (And, I don't think this use case should be dismissed, just in case anyone thinks that.)

Is that what you are doing?

A lot of times, yes. I believe things should work the same between http: and file: in static cases (not PHP etc., of course). They basically do with DOM3 Load and Save.

Ok, so for this use case something simpler than what you proposed at the beginning of this thread should be enough. For example, the status codes 401, 403, 405, 501, 414, 415 do not seem needed. 404 and 200 seem enough. Or am I missing something?

*Just* 404 and 200 would definitely make things better. But, see below.
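As a rough illustration, here's what consuming just those two simulated codes might look like from script. This is only a sketch of the idea: it assumes a UA that maps "file read successfully" to 200 and "file not found" to 404 for file: requests (no UA does this today), and process and reportMissing are placeholder functions.

var req = new XMLHttpRequest();
req.onreadystatechange = function() {
    if (this.readyState !== 4)
        return;
    // Hypothetical: assumes a UA that simulates HTTP status
    // codes for file: URLs
    if (this.status === 200) {
        process(this.responseXML);   // the local file was read
    } else if (this.status === 404) {
        reportMissing();             // the local file doesn't exist
    }
};
req.open("GET", "test.xml");
req.send();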

Or is there another reason you end up
using file:// urls?

Yes, one thing I'm doing is loading a local XSPF file from a local web page (via a script) and parsing it into an ordered list with registered listeners. This page isn't meant to be published over http (but it should work just the same).

I can do that now with XHR, but it's a mess and error handling isn't good enough, nor is it interoperable. DOM3 L&S would be nice, but no one wants to
support it.

What is different about DOM3 L&S that makes it possible to use here,
but XHR not?

Look at the following for example:

var parser = document.implementation.createLSParser(
    document.implementation.MODE_ASYNCHRONOUS, null);
// Reports a DOMError for anything from "file not found" to
// "not well-formed"
parser.domConfig.setParameter("error-handler", function(e) {
    alert(e.message);
});
// In MODE_ASYNCHRONOUS, a "load" event fires when parsing finishes
parser.addEventListener("load", function(e) {
    alert(e.newDocument);
}, false);
parser.parseURI("test.xml");

1. It behaves the same whether you're on file:// or http:// in this case. You don't have to shoehorn the JS or fuss with readyStates and status codes to make it work with file:.

2. Setting an error handler gives you a DOMError object whose message getter gives info on all the different types of errors, from file not found to parse errors.
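For what it's worth, DOMError (from DOM3 Core) exposes more than the message, so an error handler can in principle distinguish error classes. A sketch, assuming the implementation fills in severity and type as the spec describes:

parser.domConfig.setParameter("error-handler", function(e) {
    // e is a DOMError; severity is SEVERITY_WARNING (1),
    // SEVERITY_ERROR (2) or SEVERITY_FATAL_ERROR (3)
    if (e.severity === 3) {
        alert("Fatal (" + e.type + "): " + e.message);
    } else {
        alert(e.message);
    }
    return true; // ask the parser to keep going where it can
});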

With that said though, if this XHR2 way:

var req = new XMLHttpRequest();
// Fires when the resource loaded successfully
req.addEventListener("load", function(e) {
    alert(this.responseXML);
}, false);
// Fires when the request failed, but carries no error details
req.addEventListener("error", function(e) {
    alert(e);
}, false);
req.open("GET", "test.xml");
req.send();

works, where the error handler reports all the different types of errors and one can avoid readystatechange entirely, then that might do.

But, it seems the error progress event doesn't give any error info.
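To illustrate, here's roughly everything a script can get at in the error handler today (a sketch; the exact values for file: are implementation-dependent):

req.addEventListener("error", function(e) {
    // e is just a ProgressEvent: type, lengthComputable,
    // loaded and total. Nothing says *why* the request failed.
    alert(e.type);          // just "error"
    alert(this.status);     // typically 0 for file: URLs
    alert(this.statusText); // typically an empty string
}, false);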

Basically, I'm looking for an API that actually supports local, static, web-based apps instead of trying to force them into APIs that don't. That's why I also proposed <http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/0680.html>, just in case the idea of simulating HTTP status codes wasn't taken well.

The three things that are different about file:// vs. http:// in Gecko I can think of off the top of my head are:

1. Status codes (200, 404, 50x) etc.
2. Missing http features. CGI isn't supported on local files, which means things like request headers and request methods have no effect. In fact, no http methods other than GET seem to make sense. Unless you want to get into the ability to write to the file system, which is a whole other can of worms.
3. Security. In http it's (fairly) clear what constitutes a security
context. http://foo.com/ can't access data from http://bar.com/. But
http://foo.com/somefile can read data located at
http://foo.com/otherfile. With file:// that's much less clear. Do you
want file://users/sicking/Desktop/downloaded_files/file.html to be
able to read from file://etc/passwd? How about from
file://users/sicking/Documents/status_report_2009.xls?

If it's a file page I create, it's a non-issue. If I downloaded a page and ran it locally, I would indeed have to worry about it accessing private data and then tricking me (if it can't do it automatically) into sending it somewhere.

With Firefox's security, I'd just make sure not to run any untrusted local web pages in directories that contain private stuff.

Working with an evil plug-in might make things harder to lock down.

1 seems fixable, 2 and 3 are much much harder.

Thanks

--
Michael
