> I got send_event_hashed to work via a bit of a hack 
> (https://github.com/JustinAzoff/broker_distributed_events/blob/master/distributed_broker.bro),
> but it needs support from inside broker or at least the bro/broker 
> integration to work properly in the case of node failure.
> 
> My ultimate vision is a cluster with 2+ physical datanode/manager/logger 
> boxes where one box can fail and the cluster will continue to function 
> perfectly.
> The only thing this requires is a send_event_hashed function that does 
> consistent ring hashing and is aware of node failure.
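
For reference, a failure-aware consistent-hash ring along those lines could be sketched as below. This is just an illustrative Python sketch, not what distributed_broker.bro does; the node names and replica count are made up. The key property is that when a node fails, only the keys that mapped to that node get reassigned.

```python
# Hypothetical sketch of a failure-aware consistent-hash ring.
# Nodes are identified by strings; events are routed by an opaque key.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes=(), replicas=100):
        # Multiple virtual points per node smooth out the key distribution.
        self.replicas = replicas
        self.ring = {}        # virtual point -> node
        self.points = []      # sorted list of virtual points
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            point = self._hash("%s:%d" % (node, i))
            self.ring[point] = node
            bisect.insort(self.points, point)

    def remove_node(self, node):
        # Called on node failure: only keys that hashed to this node's
        # virtual points move; every other key keeps its assignment.
        for i in range(self.replicas):
            point = self._hash("%s:%d" % (node, i))
            del self.ring[point]
            self.points.remove(point)

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key.
        h = self._hash(key)
        idx = bisect.bisect(self.points, h) % len(self.points)
        return self.ring[self.points[idx]]
```

With 2+ data nodes in the ring, losing one box only re-routes the events that were hashed to it, so the rest of the cluster keeps functioning with stable assignments.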

Yeah, that sounds like a good idea that I can try to work into the design.  
What is a “data node”, though?  We don’t currently have that concept, do we?

More broadly, it sounds like a user needs a way to specify which nodes they 
want to belong to a worker pool, do you still imagine that is done like you had 
in the example broctl.cfg from the earlier thread?  Do you need to be able to 
specify more than one type of pool?

> For things that don't necessarily need consistent partitioning - like 
> maybe logs if you were using Kafka, a way to designate that a topic should be 
> distributed round-robin between subscribers would be useful too.

Yeah, that seems like it would require pretty much the same set of 
functionality to get working, and then the user can just specify a different 
function to use for distributing events (e.g. hash vs. round-robin).
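
To make that concrete, the two strategies could share one dispatch interface, with the distribution function swapped per topic. A minimal Python sketch, assuming illustrative node names (this is not an existing Broker API):

```python
# Hypothetical sketch: two interchangeable distribution strategies for
# picking which pool member receives an event.
import itertools
import zlib

def make_round_robin(nodes):
    # Ignores the key entirely; cycles through the pool in order.
    it = itertools.cycle(nodes)
    return lambda key: next(it)

def make_hashed(nodes):
    # Same key always maps to the same node (simple modular hash here;
    # a consistent ring would minimize reassignment on node failure).
    return lambda key: nodes[zlib.crc32(key.encode()) % len(nodes)]
```

Code sending an event would then just call `dispatch(key)` with whichever strategy the topic was configured to use.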

- Jon

_______________________________________________
bro-dev mailing list
[email protected]
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev