Hi Stig,

I've also looked at the Vyatta ticket, but I seem unable to reproduce it.
Since it appears it's the core process that is failing, would you mind
running it under gdb and sending me a backtrace once it crashes? Let's
follow up privately, as the debugger output might not be of much general
interest.
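
Something along these lines should do; a minimal sketch, assuming the
config doesn't set daemonize and you run it as root like before:

  gdb --args uacctd -f uacctd-i.conf
  (gdb) run
  ... reproduce the crash ...
  (gdb) bt full

Then just paste me the output of 'bt full'.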

Cheers,
Paolo

On Wed, Nov 20, 2013 at 04:17:28PM -0800, Stig Thormodsrud wrote:
> Great to hear from you again too, Paolo.  I knew I should've checked if
> you had fixed it already.
> 
> Anyway, another issue I'm looking into.  I don't think this is a new issue,
> because there was a bug open for it back at Vyatta (
> https://bugzilla.vyatta.com/show_bug.cgi?id=7693).  Basically, if I'm using
> either netflow or sflow along with IMT, often times the netflow/sflow
> daemon will die.  Running it in the foreground I see:
> 
> root@ubnt-netflow:/etc/pmacct# uacctd -f uacctd-i.conf
> OK ( default/memory ): waiting for data on: '/tmp/uacctd-i.pipe'
> INFO ( default/core ): Successfully connected Netlink ULOG socket
> INFO ( default/core ): Netlink receive buffer size set to 2097152
> INFO ( default/core ): Netlink ULOG: binding to group 2
> INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
> INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
> INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
> INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
> WARN ( default/memory ): Unable to allocate more memory pools, clear stats manually!
> INFO: connection lost to '10.1.7.227-6343-sfprobe'; closing connection.
> 
> 
> After the connection lost message the sflow daemon is gone, but IMT is
> still fine.  Any thoughts on how to further debug this (other than not
> using IMT ;-)?
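
P.S. The "Unable to allocate more memory pools" warning above looks like
the memory plugin hitting its pool limit; I doubt that's what kills the
sfprobe plugin, but to rule it out you could raise the IMT pool limits in
your config, e.g. (values are only examples, see CONFIG-KEYS for details):

  ! 0 = let the memory plugin allocate pools without an upper bound
  imt_mem_pools_number: 0
  imt_mem_pools_size: 1048576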

> _______________________________________________
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
