On 4/27/13 8:49 AM, Lee Braiden wrote:
> This would be a relatively ugly approach, to my way of thinking. Why
> should a dead stream be returned at all, if the code to create it
> failed?  Why should I be able to call write() on something that could
> not be created?

Two reasons:

1. If `open` returned a result type, you'd then have to call `.get()` on it to achieve task failure. A lot of people dislike this approach.

2. We have to have the concept of a "dead stream" or a "stream in the error state" already, because the OS and standard libraries have this concept. Given that, it seems simpler to just piggyback on that idea rather than bifurcating the I/O methods into two: ones that return a result and ones that set the error state on the stream.
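To make the contrast concrete, here is a minimal sketch of the two shapes. The names (`open_result`, `open_stream`, `IoError`) and the modern Result/Option spelling are illustrative only, not the actual library API:

    // Shape 1: `open` returns a result; the caller has to unwrap it (the
    // analogue of calling `.get()`) before an error turns into task failure.
    #[derive(Debug)]
    struct IoError;
    struct FileStream { error: Option<IoError> }

    fn open_result(path: &str) -> Result<FileStream, IoError> {
        if path.is_empty() { Err(IoError) } else { Ok(FileStream { error: None }) }
    }

    // Shape 2: `open` always hands back a stream; a failed open yields a stream
    // already in the error state, and later reads/writes on it are no-ops.
    fn open_stream(path: &str) -> FileStream {
        FileStream { error: if path.is_empty() { Some(IoError) } else { None } }
    }

    fn main() {
        let s1 = open_result("data.txt").unwrap(); // failure surfaces here, explicitly
        let s2 = open_stream("data.txt");          // failure, if any, rides along in the stream
        let _ = (s1.error.is_some(), s2.error.is_some());
    }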

>> Operations like read or write on dead streams should be no-ops.

> I think this should cause a task failure (maybe even a hard program
> assertion failure), rather than doing nothing

By default that would cause a task failure. Only if you override the default by installing a condition handler would it be possible to continue past that point.
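For readers unfamiliar with conditions, the sketch below fakes the idea with a plain thread-local handler (the real 2013 condition API was different; `raise_io_error` and friends are made up for illustration). The default is failure; installing a handler is what makes continuing possible:

    use std::cell::Cell;

    // Stand-in for an I/O condition: if a handler is installed it decides what
    // happens; otherwise the default is to fail the task (a panic, nowadays).
    thread_local! {
        static ON_IO_ERROR: Cell<Option<fn(&str)>> = Cell::new(None);
    }

    fn raise_io_error(msg: &str) {
        match ON_IO_ERROR.with(|h| h.get()) {
            Some(handler) => handler(msg),     // a handler was installed: continue
            None => panic!("io error: {msg}"), // default: task failure
        }
    }

    fn log_and_continue(msg: &str) {
        eprintln!("ignoring io error: {msg}");
    }

    fn main() {
        ON_IO_ERROR.with(|h| h.set(Some(log_and_continue as fn(&str))));
        raise_io_error("write on dead stream"); // continues only because of the handler
    }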

> BUT, it should be almost
> impossible to cause, because nothing should return a bad stream, and
> code that "breaks" a stream should be forced to deal with it or be
> unwound, rather than simply continuing.

Because we don't know how many aliases there are to a given stream, there's no way to force the code that "breaks" a stream to deal with the breakage in a way that relinquishes all references to the stream. (Actually, we could do it with unique types, but that would be quite a burden -- you couldn't straightforwardly have a `@FileStream`, for example.)
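A small sketch of the aliasing problem, in modern terms (the `Stream` type is hypothetical): with shared handles the "breaker" can't be made to give up the other aliases, whereas a unique handle can be consumed by a fallible operation and only handed back on success.

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Stream { broken: bool }

    // Shared handles: one alias breaks the stream, but a second alias still
    // holds it and can keep calling I/O methods, so the error state has to
    // live in the stream itself.
    fn shared_aliases() {
        let a: Rc<RefCell<Stream>> = Rc::new(RefCell::new(Stream { broken: false }));
        let b = Rc::clone(&a);            // an independent alias
        a.borrow_mut().broken = true;     // one alias "breaks" the stream...
        let _still_usable = &b;           // ...yet the other alias is untouched
    }

    // Unique handle: a fallible operation consumes the stream and returns it
    // only on success, so the caller must handle failure before reusing it.
    fn write_all(s: Stream, _buf: &[u8]) -> Result<Stream, ()> {
        if s.broken { Err(()) } else { Ok(s) }
    }

    fn main() {
        shared_aliases();
        let s = Stream { broken: false };
        let s = write_all(s, b"hi").expect("stream still usable");
        let _ = s;
    }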

> For instance, why should a try_read_uint exist, if you can:
>
>     // assign a handler for read failures
>     { read(); }
>     // remove the handler
>
> AND have that handler code implemented just once, in a library?

What does `read_uint` return if the handler wants to continue?
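A sketch of that dilemma with hypothetical signatures: if `read_uint` must return an integer unconditionally, a handler that continues has nowhere to get one except by making it up.

    // If `read_uint` has to hand back a u64 no matter what, a handler that wants
    // to "continue" past malformed input can only fabricate a value.
    fn read_uint(input: &str, on_malformed: impl Fn() -> u64) -> u64 {
        input.trim().parse::<u64>().unwrap_or_else(|_| on_malformed())
    }

    fn main() {
        // The 0 is pure invention by the handler; the caller can't distinguish
        // it from a genuine 0 in the input.
        assert_eq!(read_uint("not a number", || 0), 0);
    }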

> To me, the extra keywords/work seem like a small price, for what we'd
> gain in clarity, elegance, and flexibility.  We'd have the best of two
> popular error handling models, to choose from at will, with a nice
> syntax, too.

This seems to basically just be exceptions. While I agree that exceptions are a nice model from the programmer's point of view (although Graydon strongly disagrees), I do have concerns about the performance model. Exceptions are expensive. As language implementors, we basically have three options, none of them very appealing:

1. Burden every call site of every function with a check of an error code, even when no exception was thrown.

2. Burden every destructor with a call to `setjmp()`, even if an exception is not thrown.

3. Use table-driven unwinding, which makes exceptions extremely slow--so slow that programmers can't use them for simple "is there an integer here?" queries.

To me (3) is the only realistic option for Rust, but it means that we can't use exceptions for "try to parse an integer". Or rather, it means that if we try, programmers will end up inventing `try_read_uint` themselves. The upshot of this, in my mind, is that `try_read_uint`-style methods are inevitable--if we don't provide them, programmers will.
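For concreteness, this is roughly the shape such a method ends up having (modern spelling; the 2013 types differed): the absence of an integer is reported through the return type, so the query costs a branch rather than an unwind.

    // A `try_` variant answers "is there an integer here?" without failing the
    // task or unwinding; callers just branch on the result.
    fn try_read_uint(input: &str) -> Option<u64> {
        input.trim().parse::<u64>().ok()
    }

    fn main() {
        assert_eq!(try_read_uint("42"), Some(42));
        assert_eq!(try_read_uint("forty-two"), None);
    }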

After talking with Brian some, though, I feel as though libraries should provide `try_foo` methods only if it makes sense to do so. For example, it probably doesn't make sense for HTTP parsing libraries to supply a `try_parse_http` method. You only provide a `try` method if both of these are true:

1. Programmers will want to recover extremely quickly from invalid input.

2. The function returns something other than unit. (Otherwise, a condition handler is fine.)
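As a rough illustration of the second criterion (hypothetical signature): a unit-returning `write` has no value to salvage, so a handler that decides whether to continue or fail covers recovery without any `try_write` variant.

    // `write` returns unit, so failure produces no value to recover; the handler
    // only has to decide whether execution continues.
    fn write(buf: &[u8], on_error: impl Fn(&str)) {
        if buf.is_empty() {
            on_error("nothing to write"); // continue (or fail inside the handler)
            return;
        }
        // ... perform the actual write here ...
    }

    fn main() {
        write(b"", |e| eprintln!("write skipped: {e}")); // recovers; no `try_write` needed
    }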

Patrick
