On 13-05-06 07:05 AM, james wrote:

> I think the issue here is that it is not entirely clear what kind of
> problem the libraries are trying to solve, and who for.  Plenty of us
> have experience writing server processes and this has historically
> involved quite low-level coding.  Its quite reasonable for Rust to
> target something else (its not as if Ruby does, for example) and yet be
> useful.  I thought this was a real discussion about what the requirement
> (or multiple diverse requirements sets) are.

Ok. If so, let me put forth a few requirements that may have been
implicit in this conversation, but that have emerged over previous
iterations of the IO system (and tasking system -- they are related)
and are presently informing the design Brian's working on.

  - A serial interface to IO that doesn't block other tasks during the
    IO operation. This is for users who want to treat tasks as serial
    logic -- "thread-like" -- and have the IO-and-tasking system
    multiplex their activities in a best-effort fashion, exploiting
    whatever parallelism and concurrency it can find. (A short sketch
    of this model follows the list.)

  - A single event loop responsible for issuing/scheduling tasks and IO,
    rather than >1. This is to simplify implementation and improve
    performance, correctness and ease of use over previous attempts
    where the two were disconnected (and fighting with one another).

  - A _replaceable_ event loop. This is to permit running rust code
    -- with the task and IO multiplexing model in the second point --
    under "external" event loops such as those in GUI toolkits, larger
    "hosting" applications, game engines, etc.

It sounds to me like you're adding, or asking to add, a 4th requirement:

  - A convenient, stable, well-crafted way to program the event loop
    directly, without using the abstraction of a task to encapsulate
    sequential logic (roughly sketched below). This is on the argument
    (correct me if I'm wrong) that the task abstraction will always be
    too costly, and that to get maximum performance a server-writer
    will need access to this API.
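
For concreteness, here is one rough shape such a surface could take --
and, not coincidentally, it's also roughly the kind of boundary the
third (replaceable-loop) requirement implies. Every name in it
(EventLoop, Token, register_readable, ToyLoop) is invented for
illustration; it doesn't reflect Brian's sketches or any committed API.

    // Hypothetical surface for coding against the event loop directly:
    // readiness callbacks are registered by hand instead of writing
    // straight-line logic inside a task.
    type Token = usize;

    trait EventLoop {
        // Invoke `cb` when the source identified by `token` is readable.
        fn register_readable(&mut self, token: Token, cb: Box<dyn FnMut()>);
        // Block, dispatching callbacks as sources become ready.
        fn run(&mut self);
    }

    // A toy implementation that "fires" each callback once, just so the
    // sketch compiles and runs; a real loop would sit on epoll, kqueue,
    // IOCP or a host application's loop.
    struct ToyLoop {
        ready: Vec<(Token, Box<dyn FnMut()>)>,
    }

    impl EventLoop for ToyLoop {
        fn register_readable(&mut self, token: Token, cb: Box<dyn FnMut()>) {
            self.ready.push((token, cb));
        }
        fn run(&mut self) {
            for (token, mut cb) in self.ready.drain(..) {
                println!("token {token} readable");
                cb();
            }
        }
    }

    fn main() {
        let mut ev = ToyLoop { ready: Vec::new() };
        ev.register_readable(0, Box::new(|| println!("accept here")));
        ev.run();
    }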

If so, I don't object to it. I personally do not know -- and I was
claiming in earlier messages that I doubt any of us can know yet --
whether the numerical argument ("tasks can't be fast enough") is true,
or how significant it is. But to the extent that it's true, I don't mind
this requirement being pursued in parallel, so long as it's not at the
expense of the others.

I should also reiterate, as I mentioned up-thread, that this "convenient
and stable" event-loop API is likely to emerge anyway, in the process
of meeting the first 3 goals: a replaceable interface-point at which one
can switch out IO-and-task event loops is very likely to develop a
"nice" interface, just to make it pleasant to implement the task model
on top of it. I think most of the sketching Brian's been doing is along
these lines.

> Maybe I'm old fashioned, but I can't imagine day-job being materially
> different to:
>  - requirement
>  - analysis
>  - estimation
>  - sponsorship
>  - resourcing
>  - implementation and testing

We (here I'm speaking as a Mozilla employee, as we're discussing
day-jobs) iterate through such things relatively often, rather than in a
single pass; and scheduling is pretty ad hoc, depending on who has
effort, interest and ideas at any given time, and on the inter-group
schedule commitments we've made. We discover a lot of requirements
as we go. For example, the 2nd and 3rd requirements above really only
emerged with experience. And the 1st has just been deferred while we
sort out the others; synchronous IO via libc / stdio works well enough
during development.

> Its clear that there are iterations to be had in much of this, but with
> something very fundamental, failing to understand and agree and record
> and publicise the requirements is a basis for later pain.

Fair enough. Have I missed any others, aside from the attempt made
above at summarizing yours?

> In this particular case, it is not easy to see how a reliance on
> lightweight switching at a task layer can work in practice on all
> interesting platforms; but its also true that it doesn't matter whether
> the behaviour now is 'good enough' so long as there is some clarity
> about what the expected goal is.

I'm curious why you think it wouldn't _work_. The relevant
context-switch function is very short, essentially the old POSIX.1
swapcontext() call. It consists of saving registers into a structure and
then restoring them from another. I find it difficult to think of a
platform it _doesn't_ work on.

I'll grant it may not be _fast_ enough (in terms of integrating with
kernel knowledge of threads, IO subsystems, CPU affinity, power
management and whatnot), which is why I mentioned up-thread that we
intend to have a variant that also maps 1:1 onto normal OS threads.

Beyond that, we're getting into the debate about "maximum efficiency"
kernel IO interfaces. I claimed (and still claim) that it's probably
platform-dependent, and even kernel-dependent (in the sense that
different iterations of the same IO API will run at different speeds
over different kernel releases, sometimes radically different) and
hardware-dependent. Very hard to generalize.

If, beyond such discussion, you're still convinced you need to code to
the underlying async API, I am perfectly happy with acquiring the 4th
requirement above, again assuming it doesn't compromise the first 3.

> And that _is_ necessarily abstract for something like this, but that
> does not preclude clarity.

I appreciate your attempts at provoking that clarity, and hope I can
be similarly lucid in my reply.

-Graydon
