Interestingly enough, here is a recent take on the queue CTE mentioned above: 
https://brandur.org/postgres-queues   



On 28 November 2014 09:50:17 GMT, Lukas Eder <[email protected]> wrote:Hi 
Ben,Wow, that is quite a bit of very interesting history you've shared right 
here on the jOOQ user group! I didn't know you were part of the RabbitMQ team. 
Very interesting to learn about all these backgrounds 2014-11-27 17:11 
GMT+01:00 Ben Hood <[email protected]>:Hi Lukas,  On Tue, Nov 25, 2014 at 7:14 
AM, Lukas Eder <[email protected]> wrote: > Hi Ben, > > Hmm, I thought that 
this was indeed a queue implementation. Very > sophisticated, with recursive 
SQL. I'm personally curious (having recently > written a somewhat controversial 
article on the subject: > 
http://blog.jooq.org/2014/09/26/using-your-rdbms-for-messaging-is-totally-ok/). 
> Were you at the point where you also evaluated the performance and locking > 
implications of your implementation? It looks like you have implemented > 
something similar to Oracle's magical FOR UDPATE SKIP LOCKED clause, which > is 
the implementation basis of Oracle AQ that has been made public (= > 
documented) with Oracle 10g, I believe.  TL;DR: I haven't benchmarked this 
implementation in real world conditions, so I can't comment on real world 
performance.  And to be clear - the idea with the advisory lock is not my own - 
I shamelessly stole it from here: 
https://github.com/chanks/que/blob/master/lib/que/sql.rb  Back in the day I 
worked at an organization where AQ was our only officially supported 
distributed choice of FIFO (i.e. Oracle was only the piece of middleware we 
were allowed to run in production). 10g used to have a limitation (not sure 
whether this is still true) that the AQ proc for enqueuing and consuming point 
to point "messages" was generally available, but the equivalent proc for 
pub-sub "messaging" was implemented in a different package called something 
like DBMS_AQ_ADMIN. Because of the name x_ADMIN, the DBAs refused execute 
privileges to this package for non-DBA usage. So we ended up hacking a polling 
based pub-sub solution. Basically we re-invented the wheel and made turned it 
into a pentagon, but even pentagons can roll.   DBAs... ;-)  Interesting. The 
enqueueing and dequeueing part is certainly not in DBMS_AQADM (anymore). It's 
all in DBMS_AQ. But setting up queues is in DBMS_AQADM for both 
producer/consumer and pub/sub queues: 
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_aqadm.htm#CACDAABD  
But in addition to the EXECUTE grant on the DBMS_AQ package, you also need 
additional "grants" that aren't really SQL grants but some PL/SQL admissions... 
It's a bit quirky - as many things in PL/SQL  I think it has always been like 
this, though...? Very interesting to implement a competing implementation of AQ 
on Oracle itself :-)  At the time, I was also part of the core RabbitMQ dev 
team, so I benchmarked our solution against a Rabbit solution, and the AQ based 
solution turned out to be a bit of a tortoise (legal disclaimer - for the use 
case we were looking at, and also because I was fully aware of the internals of 
Rabbit, so I knew where its sweet spot lay), but it was fast enough for our 
purposes, so we stuck with. Rabbit was still pre-1.0 and besides which, we 
would never have been allowed to run it in production anyway.  I did briefly 
peruse at the AQ proc at the time, saw that it was using a skip list, but I 
didn't go as far as profiling the IO characteristics on the server, so I'm not 
exactly sure how smart the AQ implementation is under the covers.   I haven't 
gone far either, but I believe that your PostgreSQL solution is pretty much the 
same thing. Except that it is very weird for a FOR UPDATE SKIP LOCKED clause to 
inject additional WHERE predicate semantics "after the fact", i.e. after ORDER 
BY has been applied, etc. So implementing FIFO queues with this SKIP LOCKED 
clause outside of AQ is rather tricky - albeit feasible.  As far as performance 
is concerned, AQ was a bottleneck in 11gR1 because of a significant concurrency 
bug that lead to Oracle having to be restarted in production every night for a 
couple of months. I believe that this is now fixed in 11gR2 and 12c, and AQ is 
running smoothly in that E-Banking system that I used to work on...  The main 
point of contention with using a DB as a queue is that to get the FIFO 
semantics right in conjunction with exclusive consumption, in the general case 
you need to lock a row. That's where Postgres has this handy little proc which 
allows you to try to acquire an in-memory mutex of your own liking. So cobbled 
together with a re-entrant query, you can try to lock head of the queue, and in 
the instant that you have an outstanding acknowledgement (i.e. a row has gone 
out over the wire without a commit), you can skip to the nearest unlocked row. 
Because the lock is in memory, there is no IO penalty. The downside is that you 
can't scale this across multiple instances (assuming you've mastered the art of 
multi-master writes in Postgres). But for old school people like me who still 
believe in CAP, I avoid this issue by only having a single master (i.e. I'm 
trading off write-availability for consistency).   I'm in the same boat here. 
Don't scale out as long as you don't absolutely must. With RAM prices 
crumbling, scaling up is going to remain a very interesting and cheap (in terms 
of total cost of ownership) option for years to come. As I always say: Do not 
anger the mainframe :-)  If you need distributed FIFOs, using a proper MQ 
usually doesn't get you sacked these days. But even grown-up MQ's are subject 
to the CAP triangle, so YMMV.   Yep, it's hard. JavaEE makes a lot of promises 
about distributed transactions between databases and MQs. I have some 
experience with WLS, Oracle DB and IBM MQ, but we still occasionally had lost 
messages with years in production at a previous employer. Nonetheless, the 
person that acquired the unpayable IBM MQ licenses certainly didn't get sacked. 
Even if the licenses were so unreasonably expensive (price per MB transferred) 
that we rebuilt a complete document generation system for banking E-Documents 
just to avoid transferring the documents over the wire. Boy, I started loving 
XSL-FO layouting ;-)  Ironically, back in the day, the design philosophy of 
RabbiMQ was to deliver messages, not to queue them. It was only the fact that 
the product had the letter Q in the name that led people to believe it was a 
good idea to queue up messages. So then the MQ turned into a database. 
Everything went full circle.   Well, people never listen :-) At this point, 
citing Gavin King seems appropriate:  ​Full circle. Hibernate can now handle 
unmanaged DTOs and batch DML ;-)   Personally I don't have any strong feelings 
either way. If you can achieve you FIFO distribution requirements with your DB, 
you don't have any flow control considerations and the performance penalty is 
negligible, why not go for that? It saves you from having to deploy, manage 
monitor another piece of infrastructure.   Exactly. The RDBMS is a hammer, many 
problems are nails. I mean, the hammer is a crazy Swiss Army Knife Hammer, so 
it won't let you down for quite a while before you need something else.  
Cheers, Lukas           

-- 
You received this message because you are subscribed to the Google Groups "jOOQ 
User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.

Reply via email to