On 9/25/2009 4:03 PM, Aaron Glenn wrote:
On Fri, Sep 25, 2009 at 4:59 AM, Tony wrote:
Is there a way to sort it properly by IP address (so that .2 comes after .1) in
either an SQL query or an XLS sheet?
I hesitate to be 'that guy', but you should look at using PostgreSQL.
I don
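Sorting dotted-quad strings as text puts .10 before .2 because the comparison is character by character. In PostgreSQL the inet type compares addresses numerically (e.g. ORDER BY ip_column::inet, where ip_column is a placeholder name); outside the database, a minimal sketch of the same idea in Python:

```python
import ipaddress

# Unsorted dotted-quad strings: a plain text sort would put .10 before .2
ips = ["192.168.1.10", "192.168.1.2", "192.168.1.1"]

# ipaddress.ip_address turns each string into an object that compares
# numerically per octet, so the sort order matches the address order
ips_sorted = sorted(ips, key=ipaddress.ip_address)
print(ips_sorted)  # ['192.168.1.1', '192.168.1.2', '192.168.1.10']
```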
Peter Nixon wrote:
> As you can see the machine is not waiting on disk, but rather on the locks.
> Before I go about trying to optimise this any further, is there something
> basic I have wrong that can speed this up?
>
The configuration looks fine. It seems like there's no indexes or
somethi
Peter Nixon wrote:
> For my purposes the number of bytes for "in" and "out" could be summed (which
> would fit with this system just fine) but it could also be useful to have
> "in_bytes" and "out_bytes" in separate columns...
>
> Does anyone have any comments on this? Is there an easy way to do
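One way to keep the two columns separate but still get the combined figure is to add them at query time. A sketch of that pattern, using SQLite as a stand-in backend (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (ip_src TEXT, in_bytes INTEGER, out_bytes INTEGER)")
conn.executemany("INSERT INTO acct VALUES (?, ?, ?)", [
    ("10.0.0.1", 100, 40),
    ("10.0.0.1", 50, 10),
    ("10.0.0.2", 200, 0),
])

# Separate columns preserve the traffic direction; summing them in the
# SELECT gives the combined total when that is all that is needed
rows = conn.execute(
    "SELECT ip_src, SUM(in_bytes + out_bytes) FROM acct "
    "GROUP BY ip_src ORDER BY ip_src"
).fetchall()
print(rows)  # [('10.0.0.1', 200), ('10.0.0.2', 200)]
```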
Sven Anderson wrote:
> Hi all,
>
> I want to share with you an interesting experience I just had.
>
> I did the following SELECT:
>
> SELECT "ip_dst",SUM("bytes"),SUM("packets"),SUM("flows") FROM
> "tcom_v5_20060530" WHERE "stamp_inserted">='2006-05-30 00:00:00' AND
> "stamp_inserted"<'2006-05-30 0
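The query above is cut off in the archive, but its shape is a plain GROUP BY aggregate over a timestamp window. A self-contained sketch of the same pattern, with SQLite standing in for the original backend and an assumed upper bound on stamp_inserted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tcom_v5 (ip_dst TEXT, bytes INTEGER, packets INTEGER, "
    "flows INTEGER, stamp_inserted TEXT)"
)
conn.executemany("INSERT INTO tcom_v5 VALUES (?, ?, ?, ?, ?)", [
    ("10.0.0.9", 1500, 10, 1, "2006-05-30 00:05:00"),
    ("10.0.0.9", 3000, 20, 2, "2006-05-30 00:10:00"),
    ("10.0.0.7",  500,  5, 1, "2006-05-31 00:00:00"),  # outside the window
])

# Sum per destination over one day; ISO-formatted timestamps compare
# correctly as strings, so the range test works without date functions
rows = conn.execute(
    "SELECT ip_dst, SUM(bytes), SUM(packets), SUM(flows) FROM tcom_v5 "
    "WHERE stamp_inserted >= '2006-05-30 00:00:00' "
    "AND stamp_inserted < '2006-05-31 00:00:00' "
    "GROUP BY ip_dst"
).fetchall()
print(rows)  # [('10.0.0.9', 4500, 30, 3)]
```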
Sven Anderson wrote:
> Hi all,
>
> Sven Anderson, 21.04.2006 21:34:
>
>> I think the problem is not the updating of the data itself, but updating
>> the complex primary key. An index of (ip_src, ip_dst, stamp_inserted) is
>> fast enough to find entries, and easy enough to maintain.
>>
>
> i
Sebastien Guilbaud wrote:
I always have a difference of one hour in my database
between these columns:
stamp_inserted stamp_updated
2006-02-02 10:10:00 2006-02-02 09:16:26
I've tried to activate debug mode, and INSERTs contain
timestamps with a one-hour variation
Jakub Wartak wrote:
On Sunday, 20 November 2005 at 23:59, Paolo Lucente wrote:
The hardware is ok and the number of tuples in both the tables seems fine.
I would suggest to upgrade to either 0.9.3, 0.9.4p1 or the development
snapshot 0.9.5 in order to further troubleshoot the issue (the
I just (painfully) ran into a situation where I had added multiple
aggregation plugins to a configuration, for example:
! ...
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.1.0/24
aggregate_filter[out]: src net 192.168.1.0/24
plugins: pgsql[in], pgsql[out
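The config above is cut off, but the general shape of a multi-plugin setup can be sketched like this. This is only a sketch: the table names are made up, and the bracketed per-plugin scoping of sql_table is an assumption based on pmacct's `directive[plugin_name]` syntax shown above:

```
! sketch: two pgsql plugins, each with its own aggregation, filter and table
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 192.168.1.0/24
aggregate_filter[out]: src net 192.168.1.0/24
sql_table[in]: acct_in
sql_table[out]: acct_out
```

Keeping each plugin pointed at its own table avoids the two streams colliding in a single table.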
Here's an interesting one that I just noticed in dmesg on a Linux 2.6.10
traffic accounting machine. Not sure if it's caused by pmacctd, network
conditions, or a bad kernel... but a bit scary anyhow :-)
pmacctd: page allocation failure. order:0, mode:0x20
[] __alloc_pages+0x1c2/0x370
[] __kmall
Jamie Wilkinson wrote:
This one time, at band camp, Wim Kerkhoff wrote:
I've got 'vlan and ...' in my pcap_filter variable, and that works.
With e1000? What kernel?
A pair of e100s actually, but I'm surprised that you think the driver is the
problem.
I only m
I've got 'vlan and ...' in my pcap_filter variable, and that works.
With e1000? What kernel?
Wim
Recently I ran into an interesting but frustrating problem when trying
to perform traffic accounting on a VLAN trunk port on a Linux 2.6 router.
Using libpcap tools like tcpdump, tethereal, and pmacctd to sniff
traffic on the physical ethernet port where 802.1Q trunking is enabled
will simply
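On a trunk port like this, the `vlan` keyword has to appear in the BPF filter before the host/net primitives will match inside 802.1Q-tagged frames, since it shifts the filter offsets past the tag. A minimal pmacctd sketch (the interface name and network are placeholders):

```
! sketch: account tagged traffic on an 802.1Q trunk port
interface: eth1
pcap_filter: vlan and net 192.168.1.0/24
plugins: pgsql
```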
Hi Paolo,
How's it going?
Can you remember why we made the pgsql_plugin do an exclusive table
lock? I vaguely remember us discussing it privately, but cannot recall
the rationale.
Everything I have read about PostgreSQL indicates that it's designed to
work well with row-level locking, in c
Hi,
I think this is something trivial and I'm doing something obviously
wrong, but thought I'd post as I haven't been able to figure it out yet...
My config file has (among other variables):
aggregate[in]:dst_host
aggregate[out]:src_host
sql_optimize_clauses: true
sql_history: 1h
sql_table_ve
Pajlatek wrote:
Thanks for the reply guys, I think a port mirror will be enough.
I wonder what processor and disk I would have to buy for pmacct to
count gigabit traffic per IP (about 2000 IPs). :P
Any suggestions?
Just out of curiosity...
It depends on how you wish to account. For example, perpetually
Pajlatek wrote:
Hello
I was reading some posts about NetFlow daemons and Cisco routers.
Does anybody know if it's possible to use it with a CISCO CATALYST 2970
Gigabit Switch?
Correction to my last message... I discovered:
http://www.splintered.net/sw/flow-tools/docs/flow-tools-examples.html
The ip
Pajlatek wrote:
Hello
I was reading some posts about NetFlow daemons and Cisco routers.
Does anybody know if it's possible to use it with a CISCO CATALYST 2970
Gigabit Switch?
And if there is another way to collect something from Cisco switches,
please also write when you have spare time.
Hi,
I did some research
David Maple wrote:
Hi all,
I have pmacctd running on a dedicated machine listening on a gigabit
Ethernet port. It is only collecting half of the data that the router
is passing to it. For example, at this time, the 5 minute average from
the router (confirmed with ifconfig) is 136,087,
Has anybody worked on setting up counters based on the service? Eg, not
just the TCP/UDP ports, but HTTP, SMTP, IMAP, Gnutella, Kazaa,
Bittorrent, FTP, Other, etc...
NTop seems to do this quite accurately, but doesn't seem to work too
well with a database. Or, would it be better to have Ntop expo
Chris Koutras wrote:
Wim,
Thanks for the info, particularly the config file values. It is always
good to see what works for someone on a large network. Are you using
the "sql_dont_try_update" directive? I have been thinking of trying
this to reduce the database bottleneck.
I didn't even
Hey
I'm monitoring 7500+ hosts and pmacct keeps up fine. Biggest bottleneck
is the backend database and pure disk I/O.
Before starting pmacctd, my start script does:
echo 512000 > /proc/sys/net/core/wmem_max
echo 512000 > /proc/sys/net/core/rmem_max
Here are some values from the config file:
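As an aside to the echo lines above: the same socket buffer sizes can be made persistent across reboots via /etc/sysctl.conf (a sketch, assuming a 2.6-era Linux; the values are the ones from the start script):

```
# persistent equivalents of the echo lines in the start script
net.core.wmem_max = 512000
net.core.rmem_max = 512000
```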
Pajlatek wrote:
Hello, I am preparing stats for over 1800 users, and we need
to have daily limits like 18GB; at night from 1:00 am to 8:00 we
allow unlimited download.
If you have hourly counters, then you can do some fancy grouping in
PostgreSQL/MySQL to do this.
The easiest
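The grouping idea can be sketched like this: extract the hour from each timestamp and exclude the free night window when summing toward the daily limit. SQLite stands in for PostgreSQL/MySQL here, and the table layout is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (ip_src TEXT, bytes INTEGER, stamp TEXT)")
conn.executemany("INSERT INTO acct VALUES (?, ?, ?)", [
    ("10.0.0.1", 100, "2005-01-10 02:00:00"),  # night window: free
    ("10.0.0.1", 300, "2005-01-10 12:00:00"),  # daytime: counts
    ("10.0.0.1", 200, "2005-01-10 23:00:00"),  # daytime: counts
])

# Only traffic outside 01:00-08:00 counts toward the 18GB daily limit:
# hours '01' through '07' fall inside the free window and are skipped
rows = conn.execute(
    "SELECT ip_src, SUM(bytes) FROM acct "
    "WHERE strftime('%H', stamp) NOT BETWEEN '01' AND '07' "
    "GROUP BY ip_src"
).fetchall()
print(rows)  # [('10.0.0.1', 500)]
```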
Hi Steve,
On Tue, Jan 04, 2005 at 05:28:23PM -, Steve Wright wrote:
>
> As a n00b to pmacct, I am trying to get my Netflow output into MySQL.
> However, my MySQL server has just died under the load put upon it (about
> 4 rows in 30 minutes).
>
> Is it expected that MySQL will not be able
Michael Ralston wrote:
You might want to look at using something like ntop in
parallel with pmacctd... it's quite handy for identifying
unusual traffic.
Ntop is more of an instantaneous thing though, right? Like top is for CPU... I
need something that will log the unusual traffic so
Michael Ralston wrote:
Hi
I've got a couple of ideas I wanted to put forward for pmacct
Recording only top percentage of hosts
I've been a victim of some denial of service attacks and I'd like to use
pmacctd to record where they are coming from. It is fairly obvious that
src_host summarisation
Derek Fedel wrote:
Hi All,
I was just wondering, is there a way to add another column into the
database so that pmacct also logs the resolved hostname? We use a
large network with dynamic-DNS'd hosts, so it's much more practical to
keep track of them via fqdn rather than ip (but I'd still lik