On Monday, April 8, 2013 7:11:57 PM UTC+2, Arnon Marcus wrote:
>
>
>> oh my.... SSE are unidirectional, so of course the example shows you just 
>> the server --> client part and not the client-->server one.
>> you can do the client--> server part as usual with an ajax post.
>>
> (I would appreciate you refrain from using expressions with 
> condescending implications such as "oh my...")
>

Sorry, it wasn't my intention.... I'm not a native English speaker, and 
writing doesn't always convey "emotions" the way a face-to-face discussion does. 
By all means, feel free to put a :-) everywhere .... 
It's just that seeing web2py "bashed" with expressions like "the example is 
half-assed" or "something crucial is missing", for something that is clearly 
not a problem of web2py itself, "sounds bad". I'm trying to follow you 
and explain, or give an alternative to, the problem(s) you're pointing to.
 

>
>
>> EDIT: you don't need to have one-and-only sse capable controller. 
>> You just need to code into a single one of them what is required by the 
>> view who will call it (i.e. you can have a page for a chat that will "call" 
>> the sse that deals with the chat,  the page of the calendar that listens to 
>> the calendar sse and so on)
>>
>
> Now you are getting closer... Of course I understand that I can have more 
> than a single SSE-enabled controller-action, but as you said - this would 
> mean that, say, a "chat" view may ONLY invoke a "chat" 
> SSE-enabled controller-action, and a "calendar" view may ONLY invoke a 
> "calendar" SSE-enabled controller-action...
> What if I want 2 users to collaborate on the same data, using different 
> views, and still get real-time updates?
> Let's say we have 2 views, a calendar, and a scheduling-run-chart - 
> Different views of the same (or partially-shared) data, for different 
> use-cases.
> How can I have one updating the calendar, and getting live-updates from 
> another user updating the schedule (and vice-versa) ?
> If it is not clear verbally, perhaps a picture is in order...
>
> <https://lh4.googleusercontent.com/-m04hkb0vV40/UWL6Vj2_efI/AAAAAAAAAFE/O0zGPg55KnQ/s1600/SSE.jpg>

Picture definitely helps.

What needs to be clarified is this (sorry if it repeats something that is 
already clear): you can have as many SSE "hooks" defined in a single page as 
you want, but **usually** you'd want a single one per page and send different 
events (in the SSE spec, a different "event:" key) through the same 
connection, because that way you can "allocate" a single connection per user. 
That being said, if your machine can hold 1000 connections and you have no 
more than 50 users, use as many "hooks" as you wish.
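
Just to make the "different events over one connection" idea concrete, here is 
a minimal sketch of the SSE wire format in plain Python. The format_sse() 
helper and the event names are made up for illustration; on the client, an 
EventSource can attach a separate listener per event name.

def format_sse(data, event=None):
    """Frame a payload according to the SSE spec ("event:" / "data:" lines)."""
    lines = []
    if event:
        lines.append("event: %s" % event)
    lines.append("data: %s" % data)
    return "\n".join(lines) + "\n\n"   # a blank line terminates each event

# Both "chat" and "calendar" updates travel on the same stream; the browser
# dispatches them to different listeners based on the event name.
chat_chunk = format_sse('{"msg": "hello"}', event="chat")
calendar_chunk = format_sse('{"slot": "2013-04-10 09:00"}', event="calendar")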

Every SSE "hook" will at all effects "hold a greenlet captive" for all the 
duration of the "streaming" of responses. 
Not to stress out on the core concept, but in the end you choose SSE over a 
recurring poll with ajax just to "spare" the reconnection times.
Given that a new "greenlet" will be costantly active to send events to the 
client (a greenlet per page per user), you can't "expect" the normal 
request/response cycle: the "method" to hold a connection open is not to 
return something, it's yielding "small blocks" in a never-ending loop. 
This "requires" that the logic, e.g., to check for new messages, "happens" 
in that while loop.
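
Here is a minimal sketch of what such a "yielding" action could look like in a 
web2py controller (say controllers/sse.py), assuming a gevent-capable server 
so the open stream only ties up a greenlet; get_new_messages() is a 
hypothetical helper standing in for whatever backend check you implement (see 
below).

import time

def index():
    response.headers['Content-Type'] = 'text/event-stream'
    response.headers['Cache-Control'] = 'no-cache'

    def event_stream():
        while True:                              # never return: returning closes the connection
            for payload in get_new_messages():   # hypothetical backend check
                yield "data: %s\n\n" % payload   # a "small block" per event
            time.sleep(3)                        # the logic lives inside this while loop

    return event_stream()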

Whatever you choose to implement that logic with is up to you: when I said 
"database, redis, cache, etc." I just pointed out some of the possible 
implementations:
- a messages table in a database
- a key in cache.ram
- a list in redis
but it may as well be leveraging the pubsub features of redis.

So, let's take a messages table: you define topics, types of events, content 
of the event, recipients....
User 1 opens the page /app/default/index.html, which has a piece of 
javascript to hook to the SSE on /app/sse/index. 
User 2 opens /app/default/index.html and you want him to book an 
appointment. 
When he books it, web2py receives the booking (a normal ajax POST in 
response to a click on a button) and stores it in the messages table. 
Inside your "yielding" loop on /app/sse/index you check for new 
appointments every 3 seconds, and user 1 receives the update.
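
In code, a sketch of that flow could look like the following (table, field and 
action names are made up, and auth/error handling is left out):

# models/db.py
db.define_table('messages',
    Field('topic'),                       # e.g. 'calendar'
    Field('event_type'),                  # becomes the SSE "event:" name
    Field('body', 'text'),                # payload for the client (e.g. JSON)
    Field('created_on', 'datetime', default=request.now))

# controllers/default.py -- the normal ajax POST fired by user 2's button
def book():
    db.messages.insert(topic='calendar', event_type='booking',
                       body=request.vars.payload)
    return 'ok'

# controllers/sse.py -- the yielding loop user 1's page is hooked to
import time

def index():
    response.headers['Content-Type'] = 'text/event-stream'

    def event_stream():
        last_id = 0
        while True:
            rows = db((db.messages.id > last_id) &
                      (db.messages.topic == 'calendar')).select(
                          orderby=db.messages.id)
            for row in rows:
                last_id = row.id
                yield "event: %s\ndata: %s\n\n" % (row.event_type, row.body)
            db.commit()                   # close the transaction so new rows become visible
            time.sleep(3)                 # the "check every 3 seconds"

    return event_stream()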

Let's take instead a redis pubsub topic: 
User 1 opens the page /app/default/index.html, which has a piece of 
javascript to hook to the SSE on /app/sse/index. 
User 2 opens /app/default/index.html and you want him to book an 
appointment. When he books it, web2py receives the booking and publishes it 
to the redis topic. Inside your "yielding" loop on /app/sse/index you 
subscribe to the topic and wait for redis to send you the payload, which 
user 1 receives.
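
The same flow with redis could be sketched like this, using the redis-py 
client directly (channel and variable names are illustrative, and connection 
handling is kept to the bare minimum):

import json
import redis

r = redis.StrictRedis(host='localhost', port=6379)

# controllers/default.py -- the booking action publishes instead of inserting
def book():
    r.publish('calendar', json.dumps({'event': 'booking',
                                      'slot': request.vars.slot}))
    return 'ok'

# controllers/sse.py -- the yielding loop subscribes and forwards payloads
def index():
    response.headers['Content-Type'] = 'text/event-stream'

    def event_stream():
        pubsub = r.pubsub()
        pubsub.subscribe('calendar')
        for message in pubsub.listen():        # blocks until redis pushes something
            if message['type'] == 'message':
                yield "data: %s\n\n" % message['data']

    return event_stream()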

As stated before, you "need" to build your own routing mechanism: if you 
leverage redis pubsub, some things can be easier than with a database table, 
but you could just as well store each message in a flat file and read from 
that ....
The yielding loop can very well "subscribe" to different redis topics, just 
as it can watch for different types of records in your messages table.
Now, taking your graph as an example, the green arrows can be done (see the 
sketch after this list):
- with the "schedule" controller putting a record into redis or into a 
table, so that "controller 2 sse", when it checks for new updates, can "see" 
the added ones 
- on the other end, controller 2, which sets the updates, can put a record 
into what "schedule controller sse" is watching over

Basically, if you need that kind of functionality, where shared state is 
needed between two different connections, you need a place where both of 
them can look. The "normal" action can have a request/response cycle 
that closes as soon as the new "event" is submitted to the "message queue", 
while the "sse" action needs to check for new messages in the "queue" every 
once in a while, never returning from it (because as soon as you return, 
"wsgi dictates" that the connection closes).

The "every once in a while" is a loop. 
If the backend you choose for storage don't has the ability to notify 
something like "hey, I have a new entity for you", you need to loop and 
sleep a bit (e.g. a database table). 
That's more or less what web2py's scheduler worker(s) does: it uses a table 
to communicate its state to other workers (so they can coordinate among 
each others), and the web2py "web" process uses those tables to communicate 
from/to the workers (to-->queueing new tasks, from-->looking for stored 
results).
 
When you start 4 workers and a webserver, you have 5 processes, each of which 
knows what is happening in the other 4 just by looking into those shared 
tables. 
It's not a much different "paradigm" from your separate-controllers 
"situation": they just need a place to talk to each other.

Redis pubsub has a "blocking call": this means that the method itself 
sleeps "automatically" until a new entity is available, in which case you 
can avoid the sleep() call altogether. 
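
To make the contrast explicit, here is a small sketch: with a backend that 
can't notify you (e.g. a plain table) the loop has to sleep between checks, 
while redis pubsub's listen() only wakes up when something arrives, so no 
sleep() is needed. fetch_new and pubsub are hypothetical stand-ins for the 
pieces shown earlier.

import time

def polling_stream(fetch_new):             # fetch_new: callable returning new payloads
    while True:
        for payload in fetch_new():
            yield "data: %s\n\n" % payload
        time.sleep(3)                      # explicit sleep between checks

def blocking_stream(pubsub):               # pubsub: a subscribed redis-py PubSub object
    for message in pubsub.listen():        # sleeps "automatically" until data arrives
        if message['type'] == 'message':
            yield "data: %s\n\n" % message['data']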

