Hello Paolo,

I configured MySQL and everything works like a charm now. Because of
the low CPU power of my NAS, I had to increase `plugin_buffer_size` to
102400 to make sure I do not miss any packets. The memory plugin was
very useful for the tuning.
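For reference, the relevant part of my config now looks roughly like
this (directive names as per pmacct's CONFIG-KEYS document; the plugin
list and values are just my setup):

```
! pmacctd.conf excerpt - memory plugin kept alongside MySQL for tuning
daemonize: true
plugins: memory, mysql
plugin_buffer_size: 102400
```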

Note: in DEBUG mode, I had the impression that the dropped-packet
message did not show up, but I may be mistaken.

I can now track a 2 GB transfer without losing anything. Of course, if
I log my internal network transfers to my NAS, I see a 50% drop in
transfer rate, because the transfers are CPU-limited rather than
network- or disk-limited (thanks to the low-power ARM processor in the
NAS). But my goal is to log the slower extra-network traffic anyway.
With proper filters, I incur only a 10-15% performance hit on intranet
transfers to the NAS, caused by the minimal CPU time pmacctd needs to
decide it should not log these packets.
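For what it's worth, the filter I use is just a BPF expression roughly
along these lines (the addresses are placeholders for my actual
intranet and NAS):

```
! skip intranet transfers to the NAS; everything else is accounted
pcap_filter: not (src net 192.168.0.0/24 and dst host 192.168.0.10)
```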

Apart from what is in the FAQ, do you have any suggestions to reduce
the load caused by the packet filtering? Of course, that will not be a
problem for anyone with a decent processor.

Interesting fact: when I tried the ipfm program to log everything,
including internal traffic, I lost only 10% of my transfer rate
instead of 50% with pmacct. But then I realized it was silently
dropping 80% of the packets. So bravo, pmacct.

I think I will try MySQL for a while. It does not seem to be too
resource-hungry. If you still want to debug the sqlite3 problem, let
me know. I suspect it has something to do with the slightly
non-traditional Linux setup on the DNS-323.

By the way, I intend to run a cron task that will do DNS lookups on my
external hosts (they have dynamic addresses) and store the addresses
in another SQL table, so I can link traffic with dynamically addressed
host names.
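A rough sketch of what I have in mind (the hostnames are placeholders,
and I use the stdlib sqlite3 module here just to illustrate; the real
table would live in MySQL):

```python
import socket
import sqlite3
import time

# Placeholder names for my dynamically addressed external hosts.
HOSTS = ["home.example.org", "office.example.org"]

def update_host_addresses(conn, hosts, resolve=socket.gethostbyname):
    """Resolve each host and append a (timestamp, hostname, ip) row."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS dyn_hosts"
        " (stamp INTEGER, hostname TEXT, ip TEXT)"
    )
    now = int(time.time())
    for host in hosts:
        try:
            ip = resolve(host)
        except OSError:
            continue  # host did not resolve this run; try again next time
        conn.execute("INSERT INTO dyn_hosts VALUES (?, ?, ?)",
                     (now, host, ip))
    conn.commit()

if __name__ == "__main__":
    update_host_addresses(sqlite3.connect("dyn_hosts.db"), HOSTS)
```

Cron would run this every few minutes; joining the pmacct traffic rows
against the timestamp ranges then ties addresses back to host names.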

Many thanks,

Jean-François

On Mon, Nov 9, 2009 at 1:03 PM, JF Cliche <jfcli...@jfcliche.com> wrote:
> Hello Paolo,
>
> Testing MySQL was my next step, but I'll need to learn how to set it
> up first. So I'll do my homework and I'll keep you posted.
>
> Thanks,
>
> JF
>
>
> On Mon, Nov 9, 2009 at 12:38 PM, Paolo Lucente <pa...@pmacct.net> wrote:
>> Hi JF,
>>
>> On Mon, Nov 09, 2009 at 10:26:35AM -0500, JF Cliche wrote:
>>
>>> In any case, I cleaned up my config file and made sure I filter
>>> nothing (see config below). I rechecked pmacctd using the memory
>>> plugin and data is being gathered. Then I relaunched with the sqlite3
>>> plugin. 'pmacct -s' still generates an error: the pipe file is not
>>> created. And a large number of sqlite3 writer processes are created.
>>
>> The "pmacct" client tool is only useful to interact with the memory
>> plugin; when you are using only the sqlite3 plugin, it tries to see
>> if a memory plugin is listening on the pipe file - but nothing is
>> there and hence it triggers the error. I would say this is all OK.
>>
>>> ...
>>> ( in/sqlite3 ) *** Purging cache - START ***
>>> ( in/sqlite3 ) *** Purging cache - END (QN: 0, ET: 0) ***
>>> WARN ( in/sqlite3 ): Maximum number of SQL writer processes reached (10).
>>> ( in/sqlite3 ) *** Purging cache - START ***
>>> ( in/sqlite3 ) *** Purging cache - END (QN: 0, ET: 0) ***
>>> ...
>>
>> Now, this part is interesting. As you said, writer processes do not
>> seem to be replenished - while they should be, shortly after each
>> "END" debug message you see above. It also seems the writer process
>> is unable to dump the SQL cache content into the database: the query
>> number, "QN", stays at zero while you produce some traffic; moreover,
>> that traffic is reaching the sqlite3 plugin with no problems, since
>> you get the buffering error "ERROR ( in/sqlite3 ): We are missing
>> data" from the plugin.
>>
>> To rule out a configuration issue, I copy/pasted your config onto a
>> testbed - and I see it working fine. This, and the fact that the
>> memory plugin works fine (it works differently from the SQL plugins),
>> suggests to me that the issue could be at a lower level.
>>
>> I can propose some solutions: 1) allow me to troubleshoot the issue;
>> this would require access to the NAS (feel free to take this
>> off-list); 2) if that is not possible, a clean alternative might be
>> to let pmacct write to a remote SQL database, either MySQL or
>> PostgreSQL; it has to be checked first that you don't run into the
>> same issue as with SQLite; 3) check whether what you want to achieve
>> can be done with the memory plugin.
>>
>> Cheers,
>> Paolo
>>
>>
>
>
>
> --
> ------------------------------------------------
> Jean-François Cliche, Ph.D., P. Eng
>



-- 
------------------------------------------------
Jean-François Cliche, Ph.D., P. Eng

_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
