I would think you'd just peer every collector to every device in a full
mesh, unless I'm missing something obvious. Having peering sessions going
up and down constantly between the network devices and one of n collectors
behind a load balancer does not seem feasible.
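
For what it's worth, a rough nfacctd sketch of that direct-peering
approach (addresses and peer count are hypothetical), where each
collector terminates its own BGP sessions and only the flow export
goes through the LB:

*-*-*-*-*-*-*-*
! each collector runs its own BGP daemon and peers directly with the devices
bgp_daemon: true
bgp_daemon_ip: 10.20.10.10
bgp_daemon_max_peers: 100
! flow export still comes in through the LB, which forwards to this address/port
nfacctd_ip: 10.20.10.10
nfacctd_port: 9055
*-*-*-*-*-*-*-*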

On Tue, Sep 5, 2017 at 10:46 AM, Paul Mabey <p...@mabey.net> wrote:

> Right… routers export flow to the VIP and also “think” they are BGPing
> with the VIP. The LB then has a static rule that forwards both BGP and
> flow to the correct collector. The goal is that if the collector IP
> changes for some reason, I don’t have to go touch the router configs.
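>
> Something like this, say, as a static nginx stream rule (collector
> address hypothetical), i.e. a fixed one-to-one forward of both
> protocols rather than real balancing:
>
> *-*-*-*-*-*-*-*
> stream {
>     upstream collector_bgp  { server 10.20.10.10:179; }
>     upstream collector_flow { server 10.20.10.10:9055; }
>
>     # BGP: plain TCP pass-through to the active collector
>     server {
>         listen 179;
>         proxy_pass collector_bgp;
>     }
>
>     # NetFlow/IPFIX: one-way UDP, no responses expected
>     server {
>         listen 9055 udp;
>         proxy_responses 0;
>         proxy_pass collector_flow;
>     }
> }
> *-*-*-*-*-*-*-*
>
> (Without transparent proxying the collector will see the LB address,
> rather than the router's address, as the BGP peer source.)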
>
>
> On Sep 5, 2017, at 11:35 AM, Aaron Finney <aaron.fin...@openx.com> wrote:
>
> I'm not sure I follow - do you mean setting up BGP peering of the
> collectors to your source devices using the collector VIP as the neighbor
> address?
>
> On Sep 5, 2017 10:11 AM, "Paul Mabey" <p...@mabey.net> wrote:
>
>> Has anyone had success in pushing BGP sessions through an LB along with
>> netflow? I am interested in the solution below, but would like to have
>> BGP aligned with netflow as well.
>>
>> On Sep 4, 2017, at 9:48 AM, Aaron Finney <aaron.fin...@openx.com> wrote:
>>
>> Great to hear, nice work!
>>
>> Aaron
>>
>> On Sep 4, 2017 1:55 AM, "Yann Belin" <y.belin...@gmail.com> wrote:
>>
>> Hi all,
>>
>> Updating on this, in case someone is interested.
>>
>> Consul was indeed the way to go:
>>
>> * nginx is doing the actual UDP load balancing, based on source IP
>> hash (to optimize aggregation).
>> * consul keeps track of nfacctd collectors, of their health, and of
>> the health of their dependencies (rabbitmq in my case).
>> * consul-template uses the information provided by consul (servers +
>> health) to generate the nginx configuration files, and reloads the
>> nginx service if needed; if a collector becomes unhealthy (e.g.
>> rabbitmq crashes), it is removed from the nginx configuration and
>> stops receiving flows (see the template sketch below).
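>>
>> To illustrate, a minimal consul-template sketch (paths and the
>> "nfacctd" service name are just examples); the service function only
>> returns instances whose checks pass, so unhealthy collectors drop out
>> of the upstream automatically:
>>
>> *-*-*-*-*-*-*-*
>> # /etc/consul-template/ipfix-upstream.ctmpl
>> upstream ipfix_traffic {
>>     hash $binary_remote_addr;
>> {{ range service "nfacctd" }}    server {{ .Address }}:{{ .Port }};
>> {{ end }}}
>>
>> # consul-template config: render the file and reload nginx on changes
>> template {
>>   source      = "/etc/consul-template/ipfix-upstream.ctmpl"
>>   destination = "/etc/nginx/stream.d/ipfix_upstream.conf"
>>   command     = "nginx -s reload"
>> }
>> *-*-*-*-*-*-*-*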
>>
>> The great thing with consul is that you can write your own checks. For
>> now my checks are relatively basic (process + port binding checks), but
>> I am working on a more advanced one for rabbitmq (e.g. queue length /
>> RAM usage). I'm still thinking about better ways to check nfacctd
>> health itself, if anyone has a suggestion.
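>>
>> For reference, a rough Consul service definition along those lines
>> (service name, pgrep path and the rabbitmq port are assumptions, and
>> the script-check syntax varies a bit between Consul versions). If the
>> process check exits non-zero or the rabbitmq port closes, the instance
>> stops being advertised as passing:
>>
>> *-*-*-*-*-*-*-*
>> {
>>   "service": {
>>     "name": "nfacctd",
>>     "port": 9055,
>>     "checks": [
>>       {
>>         "name": "nfacctd process",
>>         "args": ["/usr/bin/pgrep", "-x", "nfacctd"],
>>         "interval": "10s"
>>       },
>>       {
>>         "name": "rabbitmq listener",
>>         "tcp": "127.0.0.1:5672",
>>         "interval": "10s"
>>       }
>>     ]
>>   }
>> }
>> *-*-*-*-*-*-*-*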
>>
>> Cheers,
>>
>> Yann
>>
>>
>> On Mon, Aug 21, 2017 at 4:02 PM, Aaron Finney <aaron.fin...@openx.com>
>> wrote:
>> > Hi Yann
>> >
>> > We use Consul for this, it works very well.
>> >
>> > https://www.consul.io
>> >
>> >
>> > Aaron
>> >
>> >
>> >
>> > On Aug 21, 2017 6:44 AM, "Yann Belin" <y.belin...@gmail.com> wrote:
>> >
>> > Hello,
>> >
>> > I have been looking into solutions to achieve reliable load balancing
>> > of my incoming flows across multiple nfacctd servers / daemons.
>> >
>> > Basic load balancing is relatively easy (see the Nginx configuration
>> > below), but *reliable* load balancing (only sending flows to servers
>> > that have a running nfacctd daemon) is quite a bit more complicated.
>> > For instance, Nginx normally monitors UDP responses from the remote
>> > servers to determine whether those servers are healthy, but this
>> > approach does not work with netflow or ipfix, since collectors never
>> > send anything back to the exporters.
>> >
>> > Has anybody already managed to solve this? Or does anyone have a
>> > suggestion?
>> >
>> > Thanks in advance!
>> >
>> > *-*-*-*-*-*-*-*
>> > stream {
>> >     upstream ipfix_traffic {
>> >         # hash on the exporter source address so a given device
>> >         # always lands on the same collector (better aggregation)
>> >         hash $binary_remote_addr;
>> >         server 10.20.10.10:9055;
>> >         server 10.20.10.20:9055;
>> >     }
>> >
>> >     server {
>> >         listen 9055 udp;
>> >         # collectors never reply, so do not wait for UDP responses
>> >         proxy_responses 0;
>> >         proxy_pass ipfix_traffic;
>> >         # preserve the original exporter source address towards the
>> >         # collectors (needs worker processes with superuser privileges)
>> >         proxy_bind $remote_addr transparent;
>> >         error_log /var/log/nginx/ipfix_traffic.error.log;
>> >     }
>> > }
>> > *-*-*-*-*-*-*-*
>> >
>> > Kind regards,
>> >
>> > Yann
>> >



_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
