On 10/15/07, Zed A. Shaw <[EMAIL PROTECTED]> wrote:
>
> On Mon, 15 Oct 2007 16:43:34 -0700
> "Brian Williams" <[EMAIL PROTECTED]> wrote:
>
> > We recently ran into exactly this issue. Some rails requests were
> > making external requests that were taking 5 minutes (networking
> > issues out of our control).
>
> Now that's a design flaw. If you're expecting the UI user to wait for a
> backend request that takes 5 minutes then you need to redesign the
> workflow and interface. Do it like asynchronous email where the user
> "sends a request", "awaits a reply", "reads the reply", and doesn't deal
> with the backend processing chain of events.
>
> If done right, you'll even get a performance boost and you can
> distribute the load of these requests out to other servers. It's also a
> model most users are familiar with from SMTP processing.
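The "send a request / await a reply / read the reply" workflow Zed describes can be sketched in plain Ruby. This is a minimal, hypothetical illustration (the `ReplyBox` class and the in-process thread/queue are stand-ins; a real app would use a jobs table or message broker so the work can move to other servers):

```ruby
require 'thread'

# Holds replies until the UI comes back to read them.
class ReplyBox
  def initialize
    @replies = {}
    @mutex = Mutex.new
  end

  # Called by the worker when the slow backend work finishes.
  def deliver(id, result)
    @mutex.synchronize { @replies[id] = result }
  end

  # Called by the UI; returns nil while the reply is still pending.
  def read(id)
    @mutex.synchronize { @replies[id] }
  end
end

box = ReplyBox.new
queue = Queue.new

# The worker runs on its own thread (or its own server), so the web
# front end never blocks on the slow backend call.
worker = Thread.new do
  while (job = queue.pop)
    id, payload = job
    sleep 0.01                      # stand-in for the slow backend request
    box.deliver(id, "processed #{payload}")
  end
end

queue << [1, "report"]              # "send a request": returns immediately
sleep 0.01 until box.read(1)        # "await a reply": UI polls
puts box.read(1)                    # "read the reply" => "processed report"

queue << nil                        # shut the worker down
worker.join
```

The point is the decoupling: the request handler only enqueues and returns, so a five-minute backend stall ties up a worker, not a Mongrel/Rails process serving users.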
Just to clarify, we were accessing a web service that typically returns results in < 1 second. But due to network issues out of our control, these requests were going into a black hole and waiting for TCP timeouts.

Admittedly, since this was an external service, we could shift to a model where all updates are asynchronous, but that doesn't help in the cases Paul mentions, such as slower reporting queries or slow actions caused by programmer error, which end up degrading the experience for all users of the site.

Assuming we did switch to an asynchronous model, I would think it would be more like: show me the latest FOO, trigger a backend update to get the latest FOO, return the last cached FOO. Or, if you know what FOO is, you periodically update it and don't bother triggering an update. The first request would then return something like 'Fetching results', right?

--Brian
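The cached-FOO model described above can be sketched as a small stale-while-revalidate cache: return the last cached value immediately, trigger a background refresh, and show a "Fetching results" placeholder on the very first request. `FooCache` is a hypothetical name, and the refresh uses an in-process thread where a real app would hand off to a worker:

```ruby
require 'thread'

class FooCache
  def initialize(&fetcher)
    @fetcher = fetcher              # the slow external call
    @value = nil
    @mutex = Mutex.new
    @refreshing = false
  end

  # Returns immediately: the cached value if we have one, a placeholder
  # if not. Either way, kicks off at most one background refresh.
  def latest
    trigger_refresh
    @mutex.synchronize { @value } || "Fetching results"
  end

  def trigger_refresh
    @mutex.synchronize do
      return if @refreshing
      @refreshing = true
    end
    Thread.new do
      fresh = @fetcher.call         # slow external request happens here
      @mutex.synchronize do
        @value = fresh
        @refreshing = false
      end
    end
  end
end

cache = FooCache.new { sleep 0.01; "latest FOO" }

puts cache.latest                   # => "Fetching results" (first request)
sleep 0.05                          # let the background fetch finish
puts cache.latest                   # => "latest FOO"
```

Note the request path never waits on the network, so even a black-holed external call only delays freshness, not the response.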
_______________________________________________
Mongrel-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/mongrel-users
