Re: [pmacct-discussion] Load balancing nfacctd

2017-09-05 Thread Aaron Finney
I would think you'd just peer every collector to every device in a full
mesh, unless I'm missing something obvious. Having peering sessions
constantly going up and down between the network devices and one of n
collectors behind a load balancer does not seem feasible.

On Tue, Sep 5, 2017 at 10:46 AM, Paul Mabey  wrote:

> Right: routers export flow to the VIP, and also "think" they are BGPing
> with the VIP. The LB then has a static rule that forwards both BGP and flow to
> the correct collector. The goal is that if the collector IP changes for
> some reason, I don't have to go touch the router configs.
>
>
> On Sep 5, 2017, at 11:35 AM, Aaron Finney  wrote:
>
> I'm not sure I follow - do you mean setting up BGP peering of the
> collectors to your source devices using the collector VIP as the neighbor
> address?
>
> On Sep 5, 2017 10:11 AM, "Paul Mabey"  wrote:
>
>> Has anyone had success in pushing BGP sessions through an LB along with
>> netflow? Interested in the solution below, but would like to have BGP
>> aligned with netflow as well.
>>
>> On Sep 4, 2017, at 9:48 AM, Aaron Finney  wrote:
>>
>> Great to hear, nice work!
>>
>> Aaron
>>
>> On Sep 4, 2017 1:55 AM, "Yann Belin"  wrote:
>>
>> Hi all,
>>
>> Updating on this, in case someone is interested.
>>
>> Consul was indeed the way to go:
>>
>> * nginx is doing the actual UDP load balancing, based on source IP
>> hash (to optimize aggregation).
>> * consul keeps track of nfacctd collectors, of their health, and of
>> the health of their dependencies (rabbitmq in my case).
>> * consul-template uses the information provided by consul (servers +
>> health) to generate nginx configuration files, and reloads nginx
>> service if needed; if a collector becomes unhealthy (e.g. rabbitmq
>> crashes), it will be removed from nginx configuration and will stop
>> receiving flows.
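
As an illustration of the consul-template step described above, the nginx upstream block could be generated from a template along these lines (a rough sketch, not from the original post; the Consul service name "nfacctd" is an assumption):

```
upstream ipfix_traffic {
    hash $binary_remote_addr;
{{range service "nfacctd"}}
    server {{.Address}}:{{.Port}};
{{end}}
}
```

consul-template would watch the "nfacctd" service in the Consul catalog, rewrite this file whenever membership or health changes, and could be configured to run something like `nginx -s reload` as its post-render command.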
>>
>> The great thing with Consul is that you can write your own checks. For
>> now my checks are relatively basic (process + port-binding checks), but
>> I am working on a more advanced one for rabbitmq (e.g. queue length /
>> RAM usage). I'm still thinking about more advanced ways to check
>> nfacctd health, if anyone has a suggestion.
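
For reference, a Consul service definition with checks of the kind described might look like the following (a sketch under assumptions: the script path is hypothetical, and 5672 assumes rabbitmq's default AMQP port):

```json
{
  "service": {
    "name": "nfacctd",
    "port": 9055,
    "checks": [
      {
        "args": ["/usr/local/bin/check_nfacctd.sh"],
        "interval": "10s"
      },
      {
        "tcp": "localhost:5672",
        "interval": "15s"
      }
    ]
  }
}
```

If either check fails, Consul marks the service unhealthy and consul-template drops the node from the generated nginx upstream.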
>>
>> Cheers,
>>
>> Yann
>>
>>
>> On Mon, Aug 21, 2017 at 4:02 PM, Aaron Finney 
>> wrote:
>> > Hi Yann
>> >
>> > We use Consul for this, it works very well.
>> >
>> > https://www.consul.io
>> >
>> >
>> > Aaron
>> >
>> >
>> >
>> > On Aug 21, 2017 6:44 AM, "Yann Belin"  wrote:
>> >
>> > Hello,
>> >
>> > I have been looking into solutions to achieve reliable load balancing
>> > of my incoming flows across multiple nfacctd servers / daemons.
>> >
>> > Basic load balancing is relatively easy (see the Nginx configuration
>> > below), but *reliable* load balancing (only sending flows to servers
>> > that have a running nfacctd daemon) is considerably more complicated.
>> > For instance, Nginx normally monitors UDP responses from the remote
>> > servers to determine whether those servers are healthy, but this
>> > approach will not work in the case of NetFlow or IPFIX.
>> >
>> > Has anybody already managed to solve this, or does anyone have a suggestion?
>> >
>> > Thanks in advance!
>> >
>> > *-*-*-*-*-*-*-*
>> > stream {
>> >     upstream ipfix_traffic {
>> >         hash $binary_remote_addr;
>> >         server 10.20.10.10:9055;
>> >         server 10.20.10.20:9055;
>> >     }
>> >
>> >     server {
>> >         listen 9055 udp;
>> >         proxy_responses 0;
>> >         proxy_pass ipfix_traffic;
>> >         proxy_bind $remote_addr transparent;
>> >         error_log /var/log/nginx/ipfix_traffic.error.log;
>> >     }
>> > }
>> > *-*-*-*-*-*-*-*
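
The `hash $binary_remote_addr` directive in the config above matters because NetFlow v9 / IPFIX templates are per-exporter state: hashing on source IP keeps each exporter pinned to a single collector. A toy sketch of that assignment logic (illustrative only; nginx uses its own internal hash, and the example IPs are made up):

```python
import hashlib

def pick_collector(exporter_ip, collectors):
    """Pin each exporter to one collector by hashing its source IP.

    Mirrors the idea behind nginx's `hash $binary_remote_addr`
    (illustrative only; not nginx's actual hash function).
    """
    digest = hashlib.md5(exporter_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(collectors)
    return collectors[index]

collectors = ["10.20.10.10:9055", "10.20.10.20:9055"]

# The same exporter always maps to the same collector, so flows
# (and IPFIX templates) from one router stay together.
assert pick_collector("192.0.2.1", collectors) == pick_collector("192.0.2.1", collectors)
for ip in ("192.0.2.1", "198.51.100.7", "203.0.113.42"):
    print(ip, "->", pick_collector(ip, collectors))
```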
>> >
>> > Kind regards,
>> >
>> > Yann
>> >
>> > ___
>> > pmacct-discussion mailing list
>> > http://www.pmacct.net/#mailinglists



-- 

*Aaron Finney* | Network Engineer | OpenX
888 East Walnut Street, 2nd Floor | Pasadena, CA 91101
o: +1 (626) 466-1141 x6035 | aaron.fin...@openx.com

Re: [pmacct-discussion] MySQL plugin and dynamic table names

2017-09-05 Thread Mathias Gumz
> I see now. Can you try with 'nfacctd_time_new: true' It will cause
> time-binning to use arrival time of the flow to the collector (that
> time should be reasonably close to flow end time and stamp_updated).

That is exactly what I switched to, and it does its job. :)

Since I don't use it for real accounting, I have no use for
"sql_history" per se. But since I need "sql_history" to trigger the
dynamic tables, I initially went with 1s resolution. For whatever
reason, this did not work, or rather: I don't see any data in any bin.
I have since increased the "sql_history" config option to "60s" (and
"sql_refresh_time" is also set to "60s").

Last question so far: Why am I not seeing data in the database when
using "sql_history: 1" (or "10")? I have both "sql_history" and
"sql_refresh_time" set to the same amount of seconds.
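
For comparison, the working setup described above might look roughly like this in nfacctd configuration (a sketch; the table-name pattern and plugin line are assumptions, not from the thread):

```
! nfacctd sketch: bin on flow arrival time, 60s bins, dynamic tables
nfacctd_time_new: true
plugins: mysql
sql_table: acct_%Y%m%d_%H%M
sql_history: 60s
sql_refresh_time: 60
```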

Thanks in advance,
-- 
Mathias Gumz

Email: mathias.g...@travelping.com
Phone: +49-391-819099-228

--- enabling your networks --

Travelping GmbH Phone:  +49-391-81 90 99 0
Roentgenstr. 13 Fax:+49-391-81 90 99 299
39108 Magdeburg Email:  i...@travelping.com
GERMANY Web:http://www.travelping.com

Company Registration: Amtsgericht StendalReg No.:   HRB 10578
Geschaeftsfuehrer: Holger Winkelmann  VAT ID No.: DE236673780
-

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists