Hope this thread isn't dead.

Mike - thanks for highlighting some really key issues at scale.

On a related note, can someone from the Ceilometer team comment on the
store-and-forward requirement? Scaling RabbitMQ is currently non-trivial.
Cells help make the problem smaller but, as Paul Mathews points out in the
video below, they don't make it go away. And looking at the experience in the
community, Qpid isn't an option either.

Cheers,
Subbu


On Dec 9, 2013, at 4:36 PM, Mike Wilson <geekinu...@gmail.com> wrote:

> This is the first time I've heard of the dispatch router, I'm really excited 
> now that I've looked at it a bit. Thx Gordon and Russell for bringing this 
> up. I'm very familiar with the scaling issues associated with any kind of 
> brokered messaging solution. We grew an OpenStack installation to about 7,000 
> nodes and started having significant scaling issues with the qpid broker. 
> We've talked about our problems at a couple summits in a fair amount of 
> detail[1][2]. I won't bother repeating the information in this thread.
> 
> I really like the idea of separating the logic of routing away from the 
> message emitter. Russell mentioned the 0mq matchmaker; we essentially ditched 
> the qpid broker for direct communication via 0mq and its matchmaker. It 
> still has a lot of problems which dispatch seems to address. For example, in 
> ceilometer we have store-and-forward behavior as a requirement. This kind of 
> communication requires a broker but 0mq doesn't really officially support 
> one, which means we would probably end up with some broker as part of 
> OpenStack. Matchmaker is also a fairly basic implementation of what is 
> essentially a directory. For any sort of serious production use case you end 
> up sprinkling JSON files all over the place or maintaining a Redis backend. I 
> feel like the matchmaker needs a bunch more work to make modifying the 
> directory simpler for operations. I would rather put that work into a 
> separate project like dispatch than have to maintain essentially a one-off in 
> OpenStack's codebase.
> 
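For anyone who hasn't run the 0mq driver, the "directory" Mike describes is,
if I'm remembering the ring-file format right, just a JSON map from topic to
the hosts that consume it, and every node needs an up-to-date copy. Roughly
like this (hostnames made up):

    {
        "scheduler": ["ctl-001"],
        "conductor": ["ctl-001"],
        "compute":   ["cmp-001", "cmp-002", "cmp-003"]
    }

The alternative is pointing the matchmaker at Redis and keeping the same
mapping there. Either way it's state that operators have to keep correct by
hand, which is exactly the pain described above.
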
> I wonder how this fits into messaging from a driver perspective in OpenStack, 
> or even how it fits into oslo.messaging. Right now we have topics for 
> binaries (compute, network, consoleauth, etc.), hostname.service_topic for 
> nodes, a fanout queue per node (not sure if kombu also has this), and 
> different exchanges per project. If we can abstract the routing from the 
> emission of the message, all we really care about is the emitter, the 
> endpoint, and the messaging pattern (fanout, store-and-forward, etc.). I'm 
> also not sure whether there's a dispatch analogue in the rabbit world; if 
> not, we need some mapping of concepts between the implementations.
> 
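To make that concrete: everything listed above is already expressible as an
oslo.messaging Target, so in principle a dispatch-style backend would only
have to honour those fields. A rough sketch, as I understand the current API
(hostnames are placeholders):

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)

    # Topic per binary: any scheduler worker may pick this up.
    scheduler = messaging.Target(exchange='nova', topic='scheduler')

    # The hostname.service_topic case: pin the message to one node.
    one_compute = messaging.Target(exchange='nova', topic='compute',
                                   server='cmp-001')

    # Fanout: every compute node gets a copy.
    all_computes = messaging.Target(exchange='nova', topic='compute',
                                    fanout=True)

    client = messaging.RPCClient(transport, scheduler)

None of that says anything about brokers, rings or routers, which is the
point: the emitter names an endpoint and a pattern, and the driver underneath
decides how the message gets there. Store-and-forward is the one piece a
Target doesn't capture; that's a property of the transport, which is why the
broker question for ceilometer still stands.
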
> So many questions, but in general I'm really excited about this and eager to 
> contribute. For sure I will start playing with this in Bluehost's 
> environments that haven't been completely 0mqized. I also have some lingering 
> concerns about qpid in general. Beyond scaling issues I've run into some 
> other terrible bugs that motivated our move away from it. Again, these are 
> mentioned in our presentations at summits and I'd be happy to talk more about 
> them in a separate discussion. I've also been able to talk to some other 
> qpid+openstack users who have seen the same bugs. Another large installation 
> that comes to mind is Qihoo 360 in China. They run a few thousand nodes with 
> qpid for messaging and are familiar with the snags we ran into.
> 
> Gordon,
> 
> I would really appreciate if you could watch those two talks and comment. The 
> bugs are probably separate from the dispatch router discussion, but it does 
> dampen my enthusiasm a bit not knowing how to fix issues beyond scale :-(. 
> 
> -Mike Wilson
> 
> [1] 
> http://www.openstack.org/summit/portland-2013/session-videos/presentation/using-openstack-in-a-traditional-hosting-environment
> [2] 
> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/going-brokerless-the-transition-from-qpid-to-0mq
> 
> 
> 
> 
> On Mon, Dec 9, 2013 at 4:29 PM, Mark McLoughlin <mar...@redhat.com> wrote:
> On Mon, 2013-12-09 at 16:05 +0100, Flavio Percoco wrote:
> > Greetings,
> >
> > As $subject mentions, I'd like to start discussing the support for
> > AMQP 1.0[0] in oslo.messaging. We already have rabbit and qpid drivers
> > for earlier (and different!) versions of AMQP, the proposal would be
> > to add an additional driver for a _protocol_ not a particular broker.
> > (Both RabbitMQ and Qpid support AMQP 1.0 now).
> >
> > By targeting a clear mapping on to a protocol, rather than a specific
> > implementation, we would simplify the task in the future for anyone
> > wishing to move to any other system that spoke AMQP 1.0. That would no
> > longer require a new driver, merely different configuration and
> > deployment. That would then allow OpenStack to more easily take
> > advantage of any emerging innovations in this space.
> 
> Sounds sane to me.
> 
> To put it another way, assuming all AMQP 1.0 client libraries are equal,
> all the operator cares about is that we have a driver that connects into
> whatever AMQP 1.0 messaging topology they want to use.
> 
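Agreed. From the deployer's side the whole switch would ideally be a couple
of lines of configuration rather than a new driver. Something like this,
where the option names are only a guess at what a protocol driver might
register (neither exists today):

    [DEFAULT]
    # today: a broker-specific driver
    # rpc_backend = rabbit

    # hypothetical protocol driver: same services, same code paths, whether
    # the far end is RabbitMQ, Qpid or a dispatch router
    rpc_backend = amqp10
    amqp10_hosts = msg-001:5672

That is what makes the "merely different configuration and deployment"
argument so attractive from an operations point of view.
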
> Of course, not all client libraries will be equal, so if we don't offer
> the choice of library/driver to the operator, then the onus is on us to
> pick the best client library for this driver.
> 
> (Enjoying the rest of this thread too, thanks to Gordon for his
> insights)
> 
> Mark.
> 
> 


_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
