Ross, I had one more thought.

Are most browsers capable of using persistent HTTP for XMLHttpRequests?  If
so, and *assuming your main motivation is simply to be more efficient by
using a single HTTP connection* for your traffic, then I think you might be
able to exploit persistent HTTP implicitly to get what you want.  If you use
a pull model rather than a push model, have the client make its requests
frequently enough that browsers' keep-alive timeouts don't tear down the
persistent HTTP connection, and ensure that your responses are suitable for
use with persistent HTTP (i.e., they contain a valid Content-Length header),
then I think you can satisfy your goal.
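
Here's a rough sketch of the server side of that pull model, just to
illustrate; it assumes a hypothetical latest_reading() helper for your
cars-per-second number.  The point is only that each poll returns a small
body with an explicit Content-Length, which keeps the response friendly to
keep-alive connections:

    from django.http import HttpResponse
    from django.utils import simplejson  # JSON support bundled with Django

    def traffic_poll(request):
        # Hypothetical data source; swap in however you track cars per second.
        data = {'cars_per_second': latest_reading()}
        body = simplejson.dumps(data)
        response = HttpResponse(body, mimetype='application/json')
        # Set Content-Length explicitly so the server and any intermediaries
        # can frame the response and reuse the TCP connection for the next poll.
        response['Content-Length'] = str(len(body))
        return response

On the client side you'd simply fire the XMLHttpRequest from a timer every
second or two, short enough that the keep-alive timeout isn't likely to
expire between polls.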

  -- Scott

On Thu, Jun 26, 2008 at 10:33 AM, Scott Moonen <[EMAIL PROTECTED]> wrote:

> Ross, it seems to me that there are a number of potential hurdles to
> holding a connection open for a long period:
>
>    1. Some server configurations may require that requests be satisfied
>    within a certain time period or else the connection will be reset.  Of
>    course, since the server is under your control, you can probably ensure
>    this is not an issue.
>    2. It seems possible to me that some browsers may have problems with
>    this approach.  Specifically, they might not be willing to hold connections
>    open for longer than a certain period, or they may not have a reliable
>    means for JavaScript to access the partial data, or they may not have a
>    means for JavaScript to discard the already-read data (resulting in memory
>    creep).  I'm not sure any of these are the case, lacking personal
>    experience in this area, but they are things you'd need to research and
>    satisfy yourself of across multiple browsers and platforms.  Certainly
>    there are Flash streaming applications out there, so if you can elect to
>    use Flash (or some other plugin) on the client side then some of these
>    client-side questions may go away.
>    3. Many firewalls, intrusion detection systems, proxies, and NATs will
>    likely have issues with long-running HTTP connections.  It is possible that
>    a firewall or IDS will reset such connections; it is possible that a NAT
>    mapping expiration will sever the connection (especially if the data is
>    intermittent rather than continuous); and it is possible that a proxy
>    reassignment (think AOL) or other configured proxy restrictions (e.g.,
>    connection lifetime limits) will disrupt the connection as well.  Again,
>    there are a large number of environments here you need to consider.  Some of
>    these issues may go away if you use a port other than 80, but many
>    intermediate hosts are still smart enough to discover that you are using
>    HTTP regardless of the port value.
>    4. In cases where the data to be pushed is intermittent, you may also
>    need to enable and configure TCP keepalives to ensure that your
>    connections and server threads don't linger any longer than necessary
>    (see the first sketch after this list).
>    5. How many clients do you expect to have?  If it's a large number,
>    then you should weigh the potential cost of having many server-side request
>    processing threads open simultaneously.  You may also find that you hit
>    configuration limits on Apache threads (and Django threads if you are
>    running under WSGI) that need to be raised.
>    6. Performance is also a consideration here; what sort of model will
>    these threads use to suspend or throttle themselves?  Will they select on
>    some event that wakes them up to trigger a new notification?  Will they
>    sleep for a specified amount of time and execute a query to determine if
>    new data is to be sent?  How will you ensure they aren't doing busy
>    waits?  (The second sketch after this list illustrates the difference
>    between those two models.  By comparison, performance is of course still
>    an issue for polling requests initiated by the client; there are just a
>    different set of questions to be answered in that case.)
>    7. If #5/#6 brought up a number of limitations, you might need to
>    consider a server model that didn't couple threads to requests, but instead
>    to some smaller transactional unit of work.  A thread could complete a
>    transaction without the overall HTTP request being considered complete or
>    the connection being closed.  Here you're ranging far out of normal HTTP
>    territory and I'm not sure that most HTTP servers will suit your needs.
>    The good news is that you could still use Django's ORM for your database
>    needs, but you'd probably not be able to use its url/view/template
>    architecture. :)
>
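> To make item 4 a bit more concrete, here is a minimal sketch (Python, purely
> illustrative) of the kind of keepalive settings I mean at the socket level.
> The TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT constants are Linux-specific, and
> how you'd actually reach the underlying socket depends on your server stack:
>
>     import socket
>
>     # Sketch only: enable TCP keepalives on a server-side socket so that
>     # dead client connections are eventually detected and their threads
>     # released, rather than lingering indefinitely.
>     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>     sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
>
>     # Linux-specific tuning knobs; other platforms expose different ones.
>     if hasattr(socket, 'TCP_KEEPIDLE'):
>         sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before probing
>         sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
>         sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before dropping
>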
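> And for item 6, a minimal sketch of the two wake-up models: blocking on a
> shared threading.Event versus sleeping and re-querying.  The send_update,
> should_stop, and has_new_data callables and the new_data_event object are
> hypothetical placeholders for whatever your server code actually provides:
>
>     import threading
>     import time
>
>     # Shared flag that whatever produces the traffic numbers sets when
>     # there is something new to push out to waiting clients.
>     new_data_event = threading.Event()
>
>     def notifier_loop(send_update, should_stop):
>         # Event-driven variant: wakes as soon as data arrives, burns no CPU
>         # while idle.  The 30-second timeout just lets it notice should_stop().
>         while not should_stop():
>             new_data_event.wait(30)
>             if new_data_event.is_set():
>                 new_data_event.clear()
>                 send_update()
>
>     def polling_loop(send_update, should_stop, has_new_data):
>         # Sleep-and-query variant: simpler, but adds latency and issues a
>         # query every couple of seconds whether or not anything changed.
>         while not should_stop():
>             time.sleep(2)
>             if has_new_data():
>                 send_update()
>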
> It's certainly a very interesting set of problems to solve!  And this is
> not at all a bad model in general for some client-server applications.  But
> there are enough issues unique to using HTTP as a transport that solving
> these problems for all possible environments, and with good performance
> characteristics, would be fairly costly.
>
>   -- Scott
>
>
> On Thu, Jun 26, 2008 at 9:40 AM, RossGK <[EMAIL PROTECTED]> wrote:
>
>>
>>
>>
>> On Jun 25, 4:56 pm, "Richard Dahl" <[EMAIL PROTECTED]> wrote:
>> > Generally with HTTP, you would configure your server to continue to
>> > respond to requests ;)  Which is exactly what Django does anyway.
>>
>> That's my question - rather than one response to a request I'd like to
>> have several responses.  For example, a user requests continuous
>> traffic information and the server replies with a continuous stream of
>> the number of cars per second.  So one reply with multiple answers,
>> with the TCP connection left open until the data flow reaches a
>> condition (e.g. fewer than 5 cars per second).
>>
>> > HTTP is a connection-based (TCP) protocol, but the connection is closed
>> > once the return has been sent.  Hence the need to store a 'session'
>> > variable in the server and use a cookie on the browser with a
>> > corresponding session id.  Data does not live beyond the request.  You
>> > cannot define a variable within your Django view or Apache process and
>> > pass it back to the client on the next request.
>>
>> Not something I'm interested in doing - just want to send _one_
>> request then start receiving a couple of messages containing a few
>> characters every second or two.
>>
>>
>> > You can only put a variable within a session record, at which
>> > point it goes out of memory for all useful purposes, and then put the
>> > variable back into memory from the session record on the next request
>> > matching the session id.
>> >
>> > AFAIK (and I *think* I understand TCP/IP/HTTP fairly well) there is no
>> > way to have an HTTP server initiate a connection to a browser.  Browsers
>> > (HTTP clients) do not listen on TCP ports for incoming requests;
>> > otherwise, they would be HTTP servers.
>>
>> I think you're right, but I don't want to do that anyway.  :)
>>
>> >
>> > As far as streaming (in the video sense) over HTTP goes, you are just
>> > downloading a file, albeit usually a really big one, via HTTP, and the
>> > player makes it play before the download is finished.  It really is,
>> > though, nothing more than a 'single' response, in the sense that once
>> > the file is done downloading the request is terminated.
>>
>> I suppose if I could make my small messages (5, 10, 6, 8, 9...) be
>> readable during the stream that would be fine.  Like a big file whose
>> content isn't known by the server 'cause it hasn't all been created
>> yet.  But I'd need the browser to be able to start opening and using
>> the data before the 'all done' flag is raised.
>>
>> >
>> > Now, this is a pretty simple explanation of how this works.  In reality,
>> > when you request a web page, you usually are getting more than one file
>> > (hence more than one request/response), as all external files (CSS, JS,
>> > images, etc.) referenced on a page will be requested by the browser as
>> > well.  But the browser does the requesting.  Look at your Django console
>> > messages and you'll see this.
>> >
>> > It is, however, possible to have a JavaScript function request
>> > information asynchronously or request a page refresh (oh the horror) at
>> > a given interval of time.
>>
>> Yes, that is my current tack - repeated Ajax requests from the
>> browser.  It seems a little less elegant from a client-server
>> perspective.  I want a flow of information, so artificially constructing
>> the flow as a series of requests sounds doable, but more sensible
>> would be a single "give me a flow of info" request and a server
>> construct that can keep open a TCP socket that provides messages until
>> I'm ready to terminate.  Hence my question.
>>
>> > This is how many sites implement live coverage of events,
>> > like Apple's WWDC.  Perhaps if you explained a bit more of what you are
>> > trying to accomplish, someone may be able to suggest some JavaScript
>> > library or function that can do this...
>>
>> As above - the cars-per-second message delivery.  Or perhaps think of
>> a continuous weather report: "start giving me second-by-second
>> precipitation, temperature and wind-speed 3-tuples."
>>
>> > hth,
>> > -richard
>>
>> -Ross
>> >>
>>
>
>
> --
> http://scott.andstuff.org/ | http://truthadorned.org/




-- 
http://scott.andstuff.org/ | http://truthadorned.org/
