[ https://issues.apache.org/jira/browse/ARTEMIS-4325?focusedWorklogId=867858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-867858 ]
ASF GitHub Bot logged work on ARTEMIS-4325:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 27/Jun/23 19:46
Start Date: 27/Jun/23 19:46
Worklog Time Spent: 10m

Work Description: AntonRoskvist commented on PR #4522:
URL: https://github.com/apache/activemq-artemis/pull/4522#issuecomment-1610117640

@brusdev @jbertram Thanks for your feedback, much appreciated!

In the case of failback between HA live/backup pairs, that should already be covered: failback would only follow a failover, and a failover only happens after the retry logic has failed first, i.e. both the live and the backup server are unavailable.

I think failing over (and back) between different live/backup pairs in a scenario like this makes some sense, as long as it doesn't happen internally within the groups. If not, would it make sense to add an additional check to inhibit failback between live/backup pairs?

I might have missed something with the topology listener, though. It would be nice to have this happen without having to "ping" the old node... but would this work if the nodeID of the broker changes (say the broker cluster is part of a blue/green deploy or similar, such that the FQDN/IP remains the same but the underlying broker is replaced by one started on a new journal)? I can give that a try regardless, but it might take some time due to vacation.

Br,
Anton

Issue Time Tracking
-------------------

Worklog Id: (was: 867858)
Time Spent: 40m (was: 0.5h)

> Ability for core client to failback after failover
> --------------------------------------------------
>
>                Key: ARTEMIS-4325
>                URL: https://issues.apache.org/jira/browse/ARTEMIS-4325
>            Project: ActiveMQ Artemis
>         Issue Type: New Feature
>           Reporter: Anton Roskvist
>           Priority: Major
>         Time Spent: 40m
> Remaining Estimate: 0h
>
> This would be similar to the "priorityBackup" functionality in ActiveMQ
> "Classic."
> The primary use case for this is to more easily maintain a good distribution
> of consumers and producers across a broker cluster over time.
> The intended behavior for my own purposes would be something like:
> * Ensure an even distribution across the broker cluster when first connecting
>   a high-throughput client.
> * When a broker becomes unavailable (network outage, patch, crash, whatever),
>   move the affected client workers to another broker in the cluster to
>   maintain throughput.
> * When the original broker comes back, move the recently failed-over
>   resources back to the original broker.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
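For context on the "priorityBackup" behavior the issue refers to: in ActiveMQ "Classic" this is an option on the client's failover transport URI, which makes the client reconnect to a preferred broker whenever it becomes available again. A minimal sketch (broker hostnames and ports are illustrative placeholders, not taken from the issue):

```
# Classic failover transport: prefer "primary"; fail over to "secondary" when
# primary is down, and automatically fail back once primary returns.
failover:(tcp://primary:61616,tcp://secondary:61616)?priorityBackup=true&priorityURIs=tcp://primary:61616
```

The feature requested here would give the Artemis core client an equivalent "return to the original broker" behavior after a failover.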