On Wed, Apr 18, 2012 at 17:49, Jenna Fox <a...@creativepony.com> wrote:
> I think the trouble with streaming over the Rack interface is that it's
> confusing. I'm fairly good at Ruby, but I'm not entirely sure how it would
> even work. I guess I need to run my app in a threaded web server, running
> every request in its own thread? Then inside the each iterator in the
> response object it just sleeps until it's got more data, using some sort of
> global message queue object to organise messaging between all the different
> threads? What if I'm deploying to Passenger? What about FastCGI? Does that
> mean one Ruby process per stream? Right now I have a few Thins running with
> an nginx proxy. Will the proxy be okay with sending multiple concurrent
> requests into the Thins, or will it need a process per user?

And more importantly: how can I do I/O inside the callback without
blocking the server? In this case, many servers (e.g. Thin) would only
be able to serve one client at a time, because you're using IO#gets,
which blocks the whole process (and you can't use Thread.new in Thin).
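
The queue-and-sleep approach described above can be sketched in plain Ruby. The class and method names here are made up for illustration; the point is that #each blocks its calling thread between chunks, which is exactly why a single-threaded server stalls on it:

```ruby
require 'thread'

# A hypothetical streaming body: #each blocks on a thread-safe queue
# until a producer pushes chunks, then yields them to the server.
class QueueBody
  def initialize
    @queue = Queue.new
  end

  # Producer side: push a chunk from anywhere in the app.
  def push(chunk)
    @queue << chunk
  end

  def close
    @queue << nil # sentinel: end of stream
  end

  # Rack calls #each to read the body. Queue#pop blocks the calling
  # thread until a chunk arrives, so a server that serves requests
  # on one thread cannot do anything else in the meantime.
  def each
    while (chunk = @queue.pop)
      yield chunk
    end
  end
end

body = QueueBody.new
producer = Thread.new do
  body.push("hello ")
  body.push("world")
  body.close
end

chunks = []
body.each { |c| chunks << c }
producer.join
```

This works under a threaded server (one request thread parked per open stream), but as noted below, it is exactly the pattern that scales poorly to many thousands of idle connections.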

Also, env['async.callback'] is not a standard; different servers may
support it differently (e.g. Thin only lets you use I/O that goes
through EventMachine).
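
For the curious, here is a rough sketch of the Thin-style async.callback pattern. The "server" side is simulated with a plain lambda here; in Thin the callback is only safe to invoke from EventMachine's reactor, which is the non-portability being complained about:

```ruby
# Thin treats a -1 status as "I'll deliver the response later via
# env['async.callback']". This marker is server-specific, not Rack spec.
ASYNC_RESPONSE = [-1, {}, []].freeze

app = lambda do |env|
  callback = env['async.callback']
  if callback
    # Pretend some evented I/O just finished and we have a response:
    callback.call([200, { 'Content-Type' => 'text/plain' }, ['deferred']])
    ASYNC_RESPONSE
  else
    # Fallback for servers without async support.
    [200, { 'Content-Type' => 'text/plain' }, ['synchronous']]
  end
end

# Fake server side: capture whatever the app delivers asynchronously.
captured = nil
env = { 'async.callback' => lambda { |response| captured = response } }

immediate = app.call(env)
# immediate is the -1 marker; captured holds the real response
```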

> It's well and truly away from being the simple Rack thing everyone liked. It
> only gets worse when you start wanting WebSockets - which don't fit the Rack
> model at all (and rightly so! but they still need to be supported).
>
> In the end what I really want is to be able to return a Rack::Stream.new as
> the response, which will do the each magic and deal with the web server in
> some way where it's the server's responsibility to make sure it works - none
> of my concern - and where I can keep around a reference to that Stream object
> and send it messages. It's actually a pretty simple problem to solve, except
> for getting the different Ruby servers to implement one common standard on
> how to deal with Ruby apps which have lots of long-running connections open.
> Maybe it could be made to work somewhat okay, but I cannot imagine
> ten thousand sleeping threads waiting for something to
> stream out being very performant. There's also the Fibers and
> Continuations stuff, which is probably about as close as we can get to a good
> workaround for a completely artificial problem created by the Rack
> interface.

Fibers and continuations don't solve the problem. Fibers/callcc can
make callback-based code look blocking (without actually being
blocking), but they can't turn blocking code into non-blocking code.
As long as the server assumes that #call will block until it has a
response, it's not going to handle other clients until #call returns.
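
To illustrate the distinction: a Fiber can suspend while waiting for a callback, so the calling code reads like a blocking read, but the underlying I/O still has to be evented. A minimal sketch, with a made-up async_read and an array standing in for an event loop's completion queue:

```ruby
pending = [] # stands in for an event loop's completion queue

# Callback-style "async read": the result is delivered later,
# when the "event loop" runs the completion.
def async_read(pending, &callback)
  pending << lambda { callback.call("data from socket") }
end

result = nil
f = nil
f = Fiber.new do
  # Register the callback, then suspend. The callback resumes the
  # fiber with the data, so Fiber.yield returns it -- this LOOKS
  # like a blocking read, but nothing here made the I/O itself
  # non-blocking; we just handed control back to the caller.
  async_read(pending) { |data| f.resume(data) }
  result = Fiber.yield
end

f.resume            # runs the fiber up to Fiber.yield
pending.shift.call  # the "event loop" delivers the completion
```

If the server simply calls #call and waits for it to return, none of this helps: the suspension has to hand control back to something (a reactor) that can serve other clients in the meantime.
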
_______________________________________________
Camping-list mailing list
Camping-list@rubyforge.org
http://rubyforge.org/mailman/listinfo/camping-list
