I started looking at this a bit. You can find the spec for the Comet
protocol, such as it is, at
http://svn.cometd.com/trunk/bayeux/bayeux.html It's built on top of
JSON but isn't quite JSON-RPC; it's more of a publish/subscribe model.
The cometd project just put out a beta release, available at
http://download.cometd.org/, which includes a JavaScript library for
both Dojo and jQuery (well, it's written in/for Dojo, but it has a
jQuery-style interface as well). I started poking at it a bit, but I
haven't ever done any jQuery, so progress will probably be slow.
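
For reference, the messages themselves are just JSON objects sent on a
handful of "/meta/..." channels plus the application channels. Here is
a rough sketch of what a subscribe exchange looks like on the wire (the
values are made up; the spec above is the authoritative reference):

# Rough sketch of a Bayeux /meta/subscribe exchange (values invented,
# see the spec linked above for the real field list).
import json

subscribe_request = {
    "channel": "/meta/subscribe",    # meta channel for subscriptions
    "clientId": "Un1q31d3nt1f13r",   # id handed out during /meta/handshake
    "subscription": "/chat/demo",    # application channel to listen on
}

subscribe_response = {
    "channel": "/meta/subscribe",
    "clientId": "Un1q31d3nt1f13r",
    "subscription": "/chat/demo",
    "successful": True,
}

# Bayeux messages travel in JSON arrays, even when there is only one.
print(json.dumps([subscribe_request], indent=2))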

My plan is to require basic auth, put the client ID into the session,
deliver any data on subscribed channels over that connection, and just
use long-polling to keep it open.
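
Roughly what I have in mind for the controller is below. It's only a
sketch: check_channels() is a made-up helper that would return whatever
has been queued for this client's subscriptions, and it shows the
obvious downside that each open poll ties up a server thread.

# Sketch of a long-polling controller in web2py (not working code).
# check_channels() is a hypothetical helper returning queued messages
# for the channels this client is subscribed to.
import time
from gluon.serializers import json as to_json

@auth.requires_login()   # basic auth can be allowed via the Auth settings
def poll():
    client_id = session.client_id     # stored when the client handshakes
    deadline = time.time() + 25       # hold the request open ~25 seconds
    while time.time() < deadline:
        messages = check_channels(client_id)
        if messages:
            return to_json(messages)
        time.sleep(1)                 # crude wait; blocks a server thread
    return to_json([])                # timed out, the client just re-polls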

Since there's no real way for a web2py app to be notified of internal
state changes, I'm not sure, long term, how I would handle actually
looking for anything to send out over the long poll. I've had some
thoughts about writing a scheduler for web2py with a granularity of a
second or so, though.
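
By "scheduler" I mean something like the loop below, run as a
background script in the app's environment (for example with
python web2py.py -S yourapp -M -R applications/yourapp/private/tick.py).
The table, fields and publish() helper are all invented for
illustration:

# Very rough scheduler loop with ~1 second granularity (names invented).
import time

def run_due_tasks():
    now = time.time()
    for task in db(db.scheduled_task.next_run <= now).select():
        # publish() stands in for "write a message onto whatever table
        # or queue the long-poll controller watches"
        publish(task.channel, task.payload)
        task.update_record(next_run=now + task.interval)
    db.commit()

while True:
    run_due_tasks()
    time.sleep(1)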


On Tue, May 25, 2010 at 7:24 PM, Allard <docto...@gmail.com> wrote:
> Comet is a nice way to get this done, but I wonder how to implement
> comet efficiently in web2py. Massimo, does web2py use a threadpool
> under the hood? For comet you would then quickly run out of threads.
> If you tried to do this with a thread per connection, things would get
> out of hand pretty quickly, so the best way is to do the work
> asynchronously like Orbited does. An alternative would be to use one
> of the contemporary Python asynchronous libraries. These libraries
> provide monkey patching of synchronous calls like your URL fetching.
> Some suggestions:
>
> Gevent: now with support for Postgres, probably the fastest out there
> Eventlet: used at Lindenlab / Second Life
> Concurrence: with handy async mysql interface
> Tornado: full async webserver in Python
>
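
The monkey patching mentioned above looks roughly like this with gevent
(just a sketch; I haven't tried wiring it into web2py itself, and the
URLs are placeholders):

# Minimal gevent monkey-patching sketch (assumes gevent is installed).
from gevent import monkey
monkey.patch_all()    # blocking socket calls now yield to the gevent hub

import gevent
import urllib2        # patched, so urlopen() no longer blocks the process

def fetch(url):
    return urllib2.urlopen(url).read()

urls = ['http://example.com/a', 'http://example.com/b']
jobs = [gevent.spawn(fetch, u) for u in urls]
gevent.joinall(jobs, timeout=10)
pages = [job.value for job in jobs]
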
> Massimo: what do you think of an asynchronous model for web2py? It'd
> be great to have asynchronous capabilities. I am writing an app that
> will require quite a bit of client-initiated background processing
> (sending emails, resizing images) which I would rather hand off to a
> green thread than block one of the web2py threads. Curious about
> your thoughts.
>
> BTW - my first post here. I started to use web2py for a community
> site and enjoy working with it a lot! Great work.
>
> On May 25, 9:39 pm, Candid <roman.bat...@gmail.com> wrote:
>> Well, actually there is a way for the server to trigger an action in
>> the browser. It's called comet. Of course under the hood it's
>> implemented on top of HTTP, so it's the browser that initiates the
>> request, but from the developer's perspective it looks like there is
>> a dual-channel connection between the browser and the server, and
>> they can both send messages to each other asynchronously. There are
>> several implementations of comet technology. I've used Orbited
>> (http://orbited.org/) and it worked quite well for me.
>>
>> On May 25, 9:00 pm, mdipierro <mdipie...@cs.depaul.edu> wrote:
>>
>> > I would use a background process that does the work and adds the items
>> > to a database table. The index function would periodically refresh or
>> > pull an updated list via ajax from the database table. There is no way
>> > for the server to trigger an action in the browser unless 1) the
>> > browser initiates it or 2) the client code embeds an HTTP server.
>> > I would stay away from 1 and 2 and
>> > use reloads or ajax.
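
In a web2py app that pattern would look roughly like the sketch below
(the table, field and path names are made up, and the spider part is
only hinted at in comments). The index view could then call web2py's
ajax() helper on a timer to refill a div from matches().

# Sketch of "background process writes to a table, page polls via ajax".
#
# Model (models/db.py), with an invented table:
#   db.define_table('match', Field('url'), Field('keyword'))
#
# Background worker, run outside the request cycle, e.g.:
#   python web2py.py -S yourapp -M -R applications/yourapp/private/spider.py
# It keeps inserting rows as the existing search() spider finds hits:
#
#   for url, keyword in search():
#       db.match.insert(url=url, keyword=keyword)
#       db.commit()          # background scripts must commit explicitly

# Controller (controllers/default.py) that the page fetches every couple
# of seconds, returning only the rows found so far:
def matches():
    rows = db(db.match).select(orderby=~db.match.id)
    return dict(rows=rows)
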
>>
>> > On May 25, 5:33 pm, Giuseppe Luca Scrofani <glsdes...@gmail.com>
>> > wrote:
>>
>> > > Hi all, as promised I'm here to prove you are patient and nice :)
>> > > I have to make this little app where there is a function that reads
>> > > the HTML content of several pages of another website (like a spider)
>> > > and, if a specified keyword is found, refreshes a page showing the
>> > > growing list of "matches".
>> > > Now, the spider part is already coded; it is called search(), and it
>> > > uses twill to log in to the target site, read the HTML of a list of
>> > > pages, perform some searching procedures and keep adding the results
>> > > to a list. I integrated this into a default.py controller and call it
>> > > in def index():
>> > > This makes the index.html page load for a long time, because it now
>> > > has to finish scanning all the pages before returning any results.
>> > > What I want to achieve is to automatically refresh index every 2
>> > > seconds to keep in touch with what is going on, seeing the list of
>> > > matches grow in "realtime". Even better, I'd like to use some sort
>> > > of ajax magic to avoid refreshing the entire page... but this is not
>> > > vital; a simple page refresh would be sufficient.
>> > > The question is: do I have to use threading to solve this problem?
>> > > Are there alternative solutions?
>> > > Do I have to make the list of matches a global to read it from
>> > > another function? Would it be simpler to have it write a text file,
>> > > adding a line for every match, and read that from the index
>> > > controller? If I have to use threads, will it run on GAE?
>>
>> > > Sorry for the long text and for my bad English :)
>>
>> > > gls
>
