I will elaborate. I have two different interests:
The first has to do with my personal wish to understand the inner workings of the implementation of messaging options in web2py, at the architectural level. I do NOT want to reinvent the wheel by any means, but I do want to understand the architectural structure of interaction between the various optional components, so I can better judge which option of which component can co-exist/communicate with which other. The second interest is to be able to decide on the best approach to messaging with web2py for my use-case, which I will elaborate on more below.

As for the first interest: I would like to understand how web2py interacts with a client in any non-standard request-response methodology. I have been researching SSE and WebSockets quite extensively in the past few days, as well as other messaging protocols and libraries, like XMPP, AMQP, RabbitMQ and ZeroMQ. The way I understand it, from a performance and scalability stand-point, best practice would suggest some kind of separate process built on a single-threaded, non-blocking-IO, event-loop type of web server. This sent me researching Eventlet, gevent, Gunicorn, Tornado, Mongrel2, Twisted, and the like. I also know that web2py has an option of running on a gevent server, and I know that this would require some extensions to the libraries I use, such as green_psycopg for my Postgres driver and uGreen for my uWSGI server. This might be advisable regardless of my interest in messaging, but might have side-effects for other modules I use that don't support gevent's co-routines (greenlets).

There are two questions I need to answer for myself:

1. In case all is done within web2py, and given that it is by default not a non-blocking, event-loop type of system, then whether it is listening on a long-poll, a server-sent-event stream, a web-socket, or an AMQP/0MQ socket, how does/can it handle long-lasting requests/connections without blocking all the other "regular" HTTP requests?

2. In case it can't, this means a separate process needs to be working in parallel; a separate Python interpreter means another PVM (Python virtual machine) process. How does web2py interact with external processes of other servers? An inter-thread communication socket? An inter-process communication socket? Sub-process spawning? A co-routine? An OS socket? A TCP socket? Maybe a ZeroMQ socket is in order? :)

As for the second interest, I'll elaborate on my use-case(s). I have a web application I am designing, for collaborative project management. There are three main use-cases I am targeting:

1. It should have a topic/subscription-based messaging system, where users can subscribe themselves to topics, as well as add/remove other users as "topic watchers" if they want to join them to a conversation and have them be notified immediately of any new update to that topic (obviously only topic owners can add/remove watchers, and other watchers can only remove themselves from a subscription to a topic they are watching). This requires a pub/sub fan-out messaging topology. For this use-case, an SSE transport should suffice, perhaps with an XMPP protocol layered on top. ZeroMQ also seems lucrative...

2. It should have collaborative screens, like a Gantt chart and a scheduling run-chart, where each user that is using one would be "subscribing" to that view, have his changes reflected immediately to all other users currently "subscribed" to that view, and receive any changes that any other user makes to that view. For this use-case, WebSockets seem ideal, but a ZeroMQ socket seems lucrative as well...

3. It has a CMS system that should interact with 3rd-party desktop applications. For most uses, an internal application's view would interact with web2py, either through an RPC'ish connection (like xmlrpc/jsonrpc), a RESTful architecture (web2py's REST API), or some kind of messaging architecture (XMPP or ZeroMQ).
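For what it's worth, the shape of question 1 and of the pub/sub fan-out in use-case 1 can be sketched outside web2py entirely. Below is a minimal, framework-agnostic sketch in plain Python (the names `Hub`, `sse_frame` and `sse_stream` are mine for illustration, not any web2py or library API): an in-memory topic hub fans messages out to per-subscriber queues, and a generator drains one queue as SSE wire-format frames. In a threaded/blocking server, each such open generator pins one worker thread for the life of the connection, which is exactly why a separate event-loop process (gevent, Tornado, etc.) is usually split off for this kind of traffic.

```python
import json
import queue
import threading

class Hub:
    """Tiny in-memory topic hub (pub/sub fan-out). Each subscriber gets
    its own queue; publish() fans a message out to every watcher of a topic."""
    def __init__(self):
        self._topics = {}          # topic -> list of subscriber queues
        self._lock = threading.Lock()

    def subscribe(self, topic):
        q = queue.Queue()
        with self._lock:
            self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        with self._lock:
            subscribers = list(self._topics.get(topic, ()))
        for q in subscribers:
            q.put(message)

def sse_frame(data, event=None):
    """Format one Server-Sent Events frame (text/event-stream wire format)."""
    lines = []
    if event:
        lines.append("event: %s" % event)
    lines.append("data: %s" % json.dumps(data))
    return "\n".join(lines) + "\n\n"

def sse_stream(q, limit=None):
    """Generator a controller could return with Content-Type: text/event-stream.
    In a blocking server, q.get() parks a whole worker thread per client --
    which is the crux of question 1 above."""
    sent = 0
    while limit is None or sent < limit:
        yield sse_frame(q.get())
        sent += 1
```

For example, after `q = hub.subscribe('topic-42')` and `hub.publish('topic-42', {'txt': 'hi'})`, `next(sse_stream(q, limit=1))` yields a `data: ...` line followed by the blank line that terminates an SSE frame. Swapping the `Hub` for a Redis or ZeroMQ pub/sub backend would not change the shape of the streaming side.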
In addition, I might choose to add the same pub/sub topic-based messaging system into the inner-app view. For this use-case, I am currently using web2py's built-in xmlrpc and amfrpc, but ZeroRPC looks lucrative as an xmlrpc replacement: it's a Python RPC layered on top of ZeroMQ sockets.

I also would like to use Redis as a caching mechanism for my web2py application, and this may suggest it also acting as a persistence layer for a centralized messaging broker.

- For the desktop-application integration, I am currently leaning towards using some non-blocking, event-loop type of server that would use Redis as both a session cache and a general data cache. web2py can interact with it as needed, to fill the cache with results from my database.

- For the browser-targeted use-cases, I am not sure which road I should take... I would like to stay within the confines of web2py so I can have DAL access and store the messages and collaborative-view changes, but I would also like to take advantage of pub/sub libraries for managing the queues and communications, and would rather have this communication not block the other regular HTTP traffic that comes to web2py... So it's a dilemma... I would appreciate any suggestions...

Ideally, I would create another web2py server, running on top of gevent, and have it talk to the same database using the same DAL object (or at least the same model file) that the main web2py instance uses. This way I don't need to have the two web2py instances interacting at all; each one would be targeted at different use-cases of the same application. It could ideally also handle all the desktop-application communications through ZeroRPC. For the browser fronts, this architecture might have issues I am not thinking about, such as cross-origin issues...

--
You received this message because you are subscribed to the Google Groups "web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.