Right… the routers export flow to the VIP and also “think” they are BGP peering with 
the VIP. The LB then has a static rule that forwards both the BGP sessions and the 
flow export to the correct collector. The goal is that if the collector IP changes 
for some reason, I don’t have to go touch the router configs. 
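
A minimal sketch of that static rule, assuming nginx is the LB in front of the
VIP (the addresses and ports here are made up for illustration, not my actual
config):

*-*-*-*-*-*-*-*
stream {
    # UDP flow export (NetFlow/IPFIX) from the routers, arriving on the VIP
    server {
        listen 9055 udp;
        proxy_responses 0;
        proxy_pass 10.20.10.10:9055;
    }

    # TCP BGP sessions the routers open toward the VIP
    server {
        listen 179;
        proxy_pass 10.20.10.10:179;
    }
}
*-*-*-*-*-*-*-*

If the collector moves, only the proxy_pass targets change; the routers keep
pointing at the VIP.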


> On Sep 5, 2017, at 11:35 AM, Aaron Finney <aaron.fin...@openx.com> wrote:
> 
> I'm not sure I follow - do you mean setting up BGP peering of the collectors 
> to your source devices using the collector VIP as the neighbor address?
> 
> On Sep 5, 2017 10:11 AM, "Paul Mabey" <p...@mabey.net> wrote:
> Has anyone had success in pushing BGP sessions through an LB along with 
> netflow? I'm interested in the solution below, but would like to have BGP 
> aligned with netflow as well. 
> 
>> On Sep 4, 2017, at 9:48 AM, Aaron Finney <aaron.fin...@openx.com> wrote:
>> 
>> Great to hear, nice work! 
>> 
>> Aaron
>> 
>> On Sep 4, 2017 1:55 AM, "Yann Belin" <y.belin...@gmail.com> wrote:
>> Hi all,
>> 
>> Updating on this, in case someone is interested.
>> 
>> Consul was indeed the way to go:
>> 
>> * nginx does the actual UDP load balancing, based on a source IP
>> hash (to optimize aggregation).
>> * consul keeps track of the nfacctd collectors, of their health, and of
>> the health of their dependencies (rabbitmq in my case).
>> * consul-template uses the information provided by consul (servers +
>> health) to generate the nginx configuration files and reloads the nginx
>> service if needed; if a collector becomes unhealthy (e.g. rabbitmq
>> crashes), it is removed from the nginx configuration and stops
>> receiving flows.
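>> 
>> Roughly, the consul-template source looks like this (the file name and
>> the "nfacctd" service name are illustrative, not my actual setup):
>> 
>> *-*-*-*-*-*-*-*
>> # nginx_upstream.ctmpl -- rendered by consul-template
>> upstream ipfix_traffic {
>>     hash $binary_remote_addr;
>> {{- range service "nfacctd" }}
>>     server {{ .Address }}:9055;
>> {{- end }}
>> }
>> *-*-*-*-*-*-*-*
>> 
>> By default {{ range service "nfacctd" }} only returns instances whose
>> checks are passing, which is what drops unhealthy collectors from the
>> upstream block.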
>> 
>> The great thing about consul is that you can write your own checks. For
>> now my checks are relatively basic (process and port-binding checks), but
>> I am working on a more advanced one for rabbitmq (e.g. queue length /
>> RAM usage). I'm still thinking about better ways to check
>> nfacctd health, if anyone has a suggestion.
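>> 
>> For what it's worth, a basic check definition of the kind I mean (service
>> name, port, and the check command are illustrative; script checks also
>> have to be enabled on the consul agent):
>> 
>> *-*-*-*-*-*-*-*
>> {
>>   "service": {
>>     "name": "nfacctd",
>>     "port": 9055,
>>     "check": {
>>       "id": "nfacctd-alive",
>>       "name": "nfacctd process and UDP port",
>>       "script": "pgrep nfacctd && ss -lun | grep -q ':9055'",
>>       "interval": "10s"
>>     }
>>   }
>> }
>> *-*-*-*-*-*-*-*
>> 
>> Exit code 0 marks the check passing, 1 warning, anything else critical.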
>> 
>> Cheers,
>> 
>> Yann
>> 
>> 
>> On Mon, Aug 21, 2017 at 4:02 PM, Aaron Finney <aaron.fin...@openx.com> wrote:
>> > Hi Yann
>> >
>> > We use Consul for this, it works very well.
>> >
>> > https://www.consul.io
>> >
>> >
>> > Aaron
>> >
>> >
>> >
>> > On Aug 21, 2017 6:44 AM, "Yann Belin" <y.belin...@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I have been looking into solutions to achieve reliable load balancing
>> > of my incoming flows across multiple nfacctd servers / daemons.
>> >
>> > Basic load balancing is relatively easy (see the Nginx configuration
>> > below), but *reliable* load balancing (only sending flows to servers
>> > that have a running nfacctd daemon) is considerably more complicated.
>> > For instance, Nginx normally monitors UDP responses from the remote
>> > servers to determine whether those servers are healthy, but this
>> > approach will not work in the case of NetFlow or IPFIX.
>> >
>> > Has anybody already managed to solve this? Or perhaps has a suggestion?
>> >
>> > Thanks in advance!
>> >
>> > *-*-*-*-*-*-*-*
>> > stream {
>> >     upstream ipfix_traffic {
>> >         hash $binary_remote_addr;
>> >         server 10.20.10.10:9055;
>> >         server 10.20.10.20:9055;
>> >     }
>> >
>> >     server {
>> >         listen 9055 udp;
>> >         proxy_responses 0;
>> >         proxy_pass ipfix_traffic;
>> >         proxy_bind $remote_addr transparent;
>> >         error_log /var/log/nginx/ipfix_traffic.error.log;
>> >     }
>> > }
>> > *-*-*-*-*-*-*-*
>> >
>> > Kind regards,
>> >
>> > Yann
>> >
>> > _______________________________________________
>> > pmacct-discussion mailing list
>> > http://www.pmacct.net/#mailinglists
>> 
> 
