Initial PR has been created: https://github.com/apache/activemq-artemis/pull/2892
On Wed, Nov 6, 2019 at 7:20 AM Christopher Shannon <[email protected]> wrote:

Thanks for taking a look, and yes, I can see your comments, so I will respond on the branch.

The XML configuration will absolutely be updated; I just hadn't gotten around to it, but adding the new XML config plus documentation will be necessary for the final PR.

I'm still tweaking a few things and polishing some stuff, but I should submit the PR pretty soon. I have a couple more things in the works as follow-on PRs, including adding plugin support/hooks for the federation lifecycle, as well as adding support for generating demand based on other types of bindings being created, such as divert bindings.

On Wed, Nov 6, 2019 at 3:13 AM <[email protected]> wrote:

Hi Chris,

In general it looks good.

I've tried adding comments inline on the commit; hopefully you see them.

Could an XML config example be added like there was for the upstream bits? (I could have missed it.)

Looks good though, great stuff!

Mike

On Mon, Nov 4, 2019 at 11:52 AM +0000, "Christopher Shannon" <[email protected]> wrote:

Michael,

I pushed up the branch I've been working on here:

https://github.com/cshannon/activemq-artemis/tree/downstreamFederationPrototype

So you can take a look and see what you think. There are updated tests implemented in both the FederatedQueueTest and FederatedAddressTest classes if you want to see it in action.

On Fri, Nov 1, 2019 at 7:13 PM Christopher Shannon <[email protected]> wrote:

I will push up my branch to GitHub on Monday so you can take a look.

On Fri, Nov 1, 2019 at 4:36 AM, ... wrote:

Do you have a branch with it at all, even if it is not PR ready?

On Mon, Oct 28, 2019 at 1:36 PM +0000, "Christopher Shannon" <[email protected]> wrote:

As an update, I have a decent prototype now for downstream configurations that I am still polishing and working on tests for. I'm only in the office a couple of days this week, so I will probably have a PR ready the following week.
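One of the follow-on items mentioned above is plugin support/hooks for the federation lifecycle. Purely as an illustration of the idea (this is not an existing Artemis plugin interface; all names below are made up), such hooks might expose callbacks along these lines:

    /**
     * Hypothetical sketch only; not an actual Artemis plugin API.
     * Illustrates the kind of callbacks a federation lifecycle hook could expose.
     */
    public interface FederationLifecycleHookSketch {

        /** Called when a federation connection (upstream or downstream) is started. */
        default void federationStarted(String federationName) { }

        /** Called when a federated consumer is created because local demand appeared. */
        default void federatedConsumerCreated(String federationName, String queueName) { }

        /** Called when a federation connection is stopped or torn down. */
        default void federationStopped(String federationName) { }
    }

Default no-op methods would let existing plugins opt in to only the events they care about.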
On Fri, Oct 18, 2019 at 6:18 AM Christopher Shannon <[email protected]> wrote:

Gary,

That sounds like a good idea, as I think you're right that AMQP could help solve some of the issues with flow control. Plus the broker supports native AMQP now, so performance would be good. In regards to duplex, that is a good point I forgot about, since in general I set up the same credentials on both brokers of a bridge (plus I just use TLS, so all brokers have certs), but re-using the same connection certainly does make authentication a lot easier. So I think the duplex case probably does (or at least should have the ability to) share the same connection like on 5.x. I figure ultimately we could have lots of bridge types: maybe this new AMQP bridge, the existing federated address/queue stuff, and there is still clustering, so users will have options to decide what is best for their use case.

For now, to make things simple, I've decided to start work on a PR to allow configuring a downstream broker with the existing setup (not going with duplex), as that should be a good start. I'm just going to send the config info to the remote broker, and then that broker will establish an upstream link based on the config. After that, the next things I want to target are adding metrics and supporting divert bindings for driving demand (equivalent to the 5.x virtual destination demand feature I added in 5.13.x).

Chris

On Fri, Oct 18, 2019 at 4:29 AM Gary Tully wrote:

Hi Christopher,

This is timely; I started peeking at federation this week as well, to see if I can make it a "better bridge" from the perspective of only moving messages that are needed. The idea is to use AMQP as the protocol and flow messages across the bridge based on aggregate AMQP credit, i.e. rather than have all messages move between brokers when local consumers are slow, only move enough to satisfy remote/upstream credit and react to it dynamically, which is a fundamental part of AMQP flow control.

I need to pull together a POC of this to verify how easy/hard it will be to aggregate credit demand etc. and have outbound AMQP calls, but I think it can be really good and fix an age-old problem with the 5.x bridge.

It would also help with the duplex part because of the symmetric nature of AMQP.

On the duplex and configuration command, authentication was one problem in 5.x, in that the same users needed to exist on all brokers because the user/pass etc. was part of the bridge config. I think the "reuse of the same connection" may be important to avoid that need. It will typically need to be TLS and maybe cert-based authentication, so maybe SASL would also come into play.

The duplex case in my mind was always about hub/spoke, where the hub did not need to be aware of the spokes' configuration. Each spoke could initiate a duplex/two-way bridge to the hub and not require any additional firewall ports. To my mind, propagation of config and reuse of the connection were always related. But for sure, small steps. And maybe AMQP can help!
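To make the aggregate-credit idea above concrete, here is a minimal conceptual sketch; it is not Artemis or AMQP client code, and every name in it is invented. The point is only the bookkeeping: the bridge issues credit to the remote broker in proportion to the credit local consumers have granted, so slow local consumers throttle the bridge instead of messages piling up on the local broker.

    import java.util.concurrent.atomic.AtomicLong;

    /**
     * Conceptual sketch of credit-aggregated flow control across a bridge.
     * Nothing here is a real Artemis or AMQP API; it only shows the bookkeeping.
     */
    public class CreditAggregatingLinkSketch {

        private final AtomicLong aggregateCredit = new AtomicLong();

        /** A local consumer has granted credit (it is ready for this many more messages). */
        public void onLocalCreditGranted(int credits) {
            aggregateCredit.addAndGet(credits);
            // In a real bridge this is where AMQP link credit would be issued to the
            // remote broker, so it can only send what local consumers can actually take.
            issueRemoteCredit(credits);
        }

        /** A message arrived over the bridge and was dispatched to a local consumer. */
        public void onMessageForwarded() {
            aggregateCredit.decrementAndGet();
        }

        /** Current aggregate demand; zero means no more messages should be pulled. */
        public long outstandingCredit() {
            return Math.max(0, aggregateCredit.get());
        }

        /** Placeholder for issuing credit on the (hypothetical) receiver link to the remote broker. */
        protected void issueRemoteCredit(int credits) {
            System.out.println("issuing " + credits + " credits to the remote broker");
        }
    }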
On Thu, 17 Oct 2019 at 16:34, Christopher Shannon wrote:

Duplex is still up in the air, as I was going to do the downstream portion first. A true duplex bridge would share the same connection, which is what happens in 5.x: it establishes the bridge and then the remote broker gets a command to also send messages back over the same connection.

So we could do something similar, or we could make it easier and just automatically create two connections. For example, we could define a duplex connection as part of the federation config, and under the covers the federation would just create one upstream and one downstream connection automatically. Having two connections could be better for performance anyway and prevent traffic in each direction from getting in the way of the other. We could also support both options, etc.

On Thu, Oct 17, 2019 at 11:26 AM Justin Bertram wrote:

I think your implementation idea makes sense, and it is actually quite similar to what is done for clustering (i.e. each broker tells all the other brokers how they can connect back to it). This makes sense to me as a way to configure downstream brokers, but I'm still fuzzy on the "duplex" part. Does this idea fulfill both the configuration aspect and the "duplex" aspect? Could you clarify what you mean by "duplex"? I always conceived that implementing "duplex" would require modifying the bridge to be able to "pull" messages rather than only "push" them.

Justin

On Thu, Oct 17, 2019 at 8:13 AM Christopher Shannon <[email protected]> wrote:

I recently started to dive into the federation support as I try to migrate 5.x brokers to Artemis, since I need something similar to how 5.x does bridging, and federated queues/addresses seem more in line with what I need than clustering.

However, I've noticed several shortcomings and enhancements that will be necessary to make it useful. The first is that right now you can only configure an upstream broker, which is backwards from how 5.x configures a bridge (it configures a one-way downstream). So I wanted to go ahead and enhance federation support to allow configuring downstream brokers, and hopefully duplex as well.

For the approach, I was thinking we could add a configuration option for downstream brokers. Then, when the connection is made to the remote broker, we could send a new CORE packet command with the info for the federation config. The remote broker could receive this config, parse it, and then establish an upstream link based on that information back to the broker that made the connection... essentially creating a downstream link but re-using the existing upstream way of creating the bridge to simplify things.

I can work on the PR and the different enhancements, but I wanted to get some agreement on the approach before spending a bunch of time on it.

Thoughts? Or other ideas on how to accomplish configuring a downstream broker?
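As a rough sketch of the handshake proposed above (all class, field, and method names are invented for illustration and are not the actual CORE packets or Artemis federation classes): the initiating broker sends its downstream settings, and the remote broker simply turns them into an upstream entry pointing back over the connector it was told to use.

    import java.util.List;

    /**
     * Conceptual sketch of the downstream-configuration handshake discussed in
     * this thread. Every type here is hypothetical and for illustration only.
     */
    public class DownstreamHandshakeSketch {

        /** What the initiating broker sends in its (hypothetical) federation-config packet. */
        public static class DownstreamConfig {
            final String federationName;
            final String connectorForCallback;   // connector the remote broker should use to connect back
            final List<String> policyRefs;       // queue/address policies to federate

            DownstreamConfig(String federationName, String connectorForCallback, List<String> policyRefs) {
                this.federationName = federationName;
                this.connectorForCallback = connectorForCallback;
                this.policyRefs = policyRefs;
            }
        }

        /** Hypothetical stand-in for the existing upstream federation configuration. */
        public static class UpstreamConfig {
            final String name;
            final String connectorRef;
            final List<String> policyRefs;

            UpstreamConfig(String name, String connectorRef, List<String> policyRefs) {
                this.name = name;
                this.connectorRef = connectorRef;
                this.policyRefs = policyRefs;
            }
        }

        /** On the remote broker: receive the config and build an upstream pointing back at the sender. */
        public static UpstreamConfig onDownstreamConfigReceived(DownstreamConfig received) {
            // Re-use the existing upstream machinery: the "downstream" is just an upstream
            // created on the other side, aimed at the broker that sent the config.
            return new UpstreamConfig(received.federationName, received.connectorForCallback, received.policyRefs);
        }
    }

Re-using the upstream machinery on the remote side keeps the new code path small, which matches the "simplify things" goal stated in the proposal.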
