Hi Paolo,

This is with the normal nfacctd running (that is gathering my real stats):

# ps -ef | grep acct
root      9255     1  0 Dec24 ?        00:00:09 nfacctd: Core Process [default]
root      9256  9255  0 Dec24 ?        00:00:18 nfacctd: MySQL Plugin [internet]
root     29235 29205  0 11:12 pts/0    00:00:00 grep acct

# netstat -pln | grep acct
udp        0      0 0.0.0.0:6789            0.0.0.0:*                          
9255/nfacctd: Core


Now I start a new nfacctd process on a different port with a basic config 
file:


# nfacctd -f /etc/pmacct/test.conf
WARN ( /etc/pmacct/test.conf ): No plugin has been activated; defaulting to 
in-memory table.

# ps -ef | grep acct
root      9255     1  0 Dec24 ?        00:00:09 nfacctd: Core Process [default]
root      9256  9255  0 Dec24 ?        00:00:18 nfacctd: MySQL Plugin [internet]
root     29294     1  0 11:16 ?        00:00:00 nfacctd: Core Process [default]
root     29295 29294  0 11:16 ?        00:00:00 nfacctd: IMT Plugin [default]
root     29299 29205  0 11:16 pts/0    00:00:00 grep acct

# netstat -pln | grep acct
udp        0      0 0.0.0.0:6789            0.0.0.0:*                          
9255/nfacctd: Core
udp        0      0 0.0.0.0:5678            0.0.0.0:*                          
29294/nfacctd: Core
unix  2      [ ACC ]     STREAM     LISTENING     108126129 29295/nfacctd: IMT  
/tmp/collect.pipe


So at this point I have my original nfacctd (PIDs 9255 & 9256) running on port 
udp/6789 and my new nfacctd (PIDs 29294/29295) listening on udp/5678.

Now I start another nfacctd with the same config file (test.conf):


# nfacctd -f /etc/pmacct/test.conf
WARN ( /etc/pmacct/test.conf ): No plugin has been activated; defaulting to 
in-memory table.

# ps -ef | grep acct
root      9255     1  0 Dec24 ?        00:00:09 nfacctd: Core Process [default]
root      9256  9255  0 Dec24 ?        00:00:18 nfacctd: MySQL Plugin [internet]
root     29294     1  0 11:16 ?        00:00:00 nfacctd: Core Process [default]
root     29295 29294  0 11:16 ?        00:00:00 nfacctd: IMT Plugin [default]
root     29328     1  0 11:18 ?        00:00:00 nfacctd: Core Process [default]
root     29329 29328  0 11:18 ?        00:00:00 nfacctd: IMT Plugin [default]
root     29331 29205  0 11:18 pts/0    00:00:00 grep acct

# netstat -pln | grep acct
udp        0      0 0.0.0.0:6789            0.0.0.0:*                          
9255/nfacctd: Core
udp        0      0 0.0.0.0:5678            0.0.0.0:*                          
29328/nfacctd: Core
udp        0      0 0.0.0.0:5678            0.0.0.0:*                          
29294/nfacctd: Core
unix  2      [ ACC ]     STREAM     LISTENING     108126129 29295/nfacctd: IMT  
/tmp/collect.pipe
unix  2      [ ACC ]     STREAM     LISTENING     108126210 29329/nfacctd: IMT  
/tmp/collect.pipe


Now BOTH of the new processes appear to be listening on udp/5678: no errors 
in the logs, and the second instance did not fail to start (I would have 
expected an EADDRINUSE / "can't bind to port" type error).
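For what it's worth, a plain second bind on a UDP port does normally fail with EADDRINUSE; on Linux the kernel only allows the duplicate bind when both sockets set SO_REUSEADDR before binding. I don't know offhand whether nfacctd sets that option, so this is just a guess at the mechanism, but a quick Python sketch shows both cases (it picks a free port dynamically rather than touching any real collector port):

```python
import errno
import socket

def can_double_bind(port, reuse):
    """Try to bind two UDP sockets to the same port; True if both binds succeed."""
    socks = []
    try:
        for _ in range(2):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            if reuse:
                # Must be set on BOTH sockets, before bind().
                s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(("127.0.0.1", port))
            socks.append(s)
        return True
    except OSError as e:
        assert e.errno == errno.EADDRINUSE
        return False
    finally:
        for s in socks:
            s.close()

# Grab a free port to experiment with.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

print(can_double_bind(port, reuse=False))  # False on Linux: EADDRINUSE
print(can_double_bind(port, reuse=True))   # True on Linux: duplicate bind allowed
```

If nfacctd does set SO_REUSEADDR on its listening socket, that would explain why the second instance starts silently.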

The following entries in /var/log/messages seem to back up the theory that 
they are both bound to the same port:

Dec 30 11:16:36 mx2 nfacctd[29294]: INFO ( default/core ): Start logging ...
Dec 30 11:16:36 mx2 nfacctd[29294]: WARN ( default/memory ): defaulting to SRC 
HOST aggregation.
Dec 30 11:16:36 mx2 nfacctd[29294]: INFO ( default/core ): waiting for NetFlow 
data on 0.0.0.0:5678
Dec 30 11:16:36 mx2 nfacctd[29295]: OK ( default/memory ): waiting for data on: 
'/tmp/collect.pipe'
Dec 30 11:18:35 mx2 nfacctd[29328]: INFO ( default/core ): Start logging ...
Dec 30 11:18:35 mx2 nfacctd[29328]: WARN ( default/memory ): defaulting to SRC 
HOST aggregation.
Dec 30 11:18:35 mx2 nfacctd[29329]: OK ( default/memory ): waiting for data on: 
'/tmp/collect.pipe'
Dec 30 11:18:35 mx2 nfacctd[29328]: INFO ( default/core ): waiting for NetFlow 
data on 0.0.0.0:5678


The "test.conf" file has a very basic config:

daemonize: true
nfacctd_port: 5678
nfacctd_time_new: true
syslog: local4


This is running on a Debian 4.0 system (kernel 2.6.18-5-686) and is 
reproducible (on my box, anyway). I haven't tested whether the second instance 
receives the NetFlow packets, or whether both end up recording them twice, as 
I don't want to mess with real data at this point.
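If it helps, one way to check delivery without going anywhere near the real collector is to reproduce the double bind in isolation and watch where a unicast datagram lands. A hypothetical Python sketch (again using a dynamically chosen port, not any of the nfacctd ports):

```python
import select
import socket

def bound_udp_sock(port):
    """UDP socket with SO_REUSEADDR set before bind, mirroring a duplicate-bind setup."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    return s

# Grab a free port, then bind two sockets to it.
probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

first = bound_udp_sock(port)
second = bound_udp_sock(port)

# Send one unicast datagram at the shared port ...
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"flow?", ("127.0.0.1", port))

# ... and see which socket(s) the kernel hands it to.
readable, _, _ = select.select([first, second], [], [], 1.0)
for s in readable:
    print("second bound socket" if s is second else "first bound socket")
```

In my (limited) understanding, a unicast datagram should only be delivered to one of the two sockets, so one instance would quietly get all the data and the other none; but I haven't verified which one wins.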



regards,
Tony.

--- On Thu, 30/12/10, Paolo Lucente <pa...@pmacct.net> wrote:

> From: Paolo Lucente <pa...@pmacct.net>
> Subject: Re: [pmacct-discussion] potentially missing data
> To: "Tony" <td_mi...@yahoo.com>
> Cc: pmacct-discussion@pmacct.net
> Received: Thursday, 30 December, 2010, 3:26 AM
> Hi Tony,
> 
> Good to know issue is solved.
> 
> I can say pmacct doesn't make use at any rate of shared
> memory segments that
> can be attached by distinct (say, nfacctd) instances. Also,
> correct behaviour
> from the underlying OS when trying to bind two processes to
> the same port is
> to get an error back. Puzzling.
> 
> Cheers,
> Paolo
> 


      

_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
