Take a look at Cacti, which is available in the EPEL repo:
https://www.cacti.net/
It's not just for network accounting: it polls multiple hosts for all
kinds of data, stores the results in RRD tables, and provides a web
interface that displays the data as charts. You'll need to install
pl
On 9/13/21 18:47, MRob wrote:
While you probably can't recover such information for past events,
going forward, iptables can help you figure this out. Putting an
iptables rule in the OUTPUT chain prior to ACCEPTing the packets can
help, e.g.:
iptables -A OUTPUT -p tcp -m owner --uid-owner
See "man iptables-extensions" and "man iptables". I don't know how this
works with firewall-cmd, but I imagine firewalld "just" manages
iptables?
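A hedged sketch of the accounting idea described above (requires root; "nginx" is an assumed user name, so adjust it to whatever UID you want to meter). A rule with no -j target does nothing but count, so inserting one near the top of OUTPUT gives per-user packet/byte counters:

```shell
# Sketch only -- requires root; "nginx" is an assumed user name.
# A rule with no jump target just counts matching packets.
iptables -I OUTPUT 1 -p tcp -m owner --uid-owner nginx \
    -m comment --comment "nginx egress accounting"

# Read the packet/byte counters back later (-x prints exact byte counts):
iptables -L OUTPUT -v -n -x | grep "nginx egress accounting"

# Zero the counters at the start of a measurement window:
iptables -Z OUTPUT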
Yes, that's right.
I am running CentOS Linux release 7.9.2009 (Core). Is there a way to
find out which process consumed network bandwidth during a specific
time period?
On 06/09/2021 19:35, Kaushal Shriyan wrote:
Hi Kaushal,
> I am running CentOS Linux release 7.9.2009 (Core). Is there a way to find
> out which process consumed network bandwidth during a specific time period?
>
> For example, the Nginx process consumed how much network traffic on Sept
> 01, 2021.
Hi,
I am running CentOS Linux release 7.9.2009 (Core). Is there a way to find
out which process consumed network bandwidth during a specific time period?
For example, the Nginx process consumed how much network traffic on Sept
01, 2021.
Best Regards,
Kaushal
Hi:
I used to find all the leaf packages with "package-cleanup
--leaves --all", but the "--all" parameter is no longer valid under
CentOS 8. Is there an alternative command I can use to find all the
leaf packages?
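No answer appears in this thread, but the dnf side has rough equivalents worth trying. A hedged sketch (both commands are real, though their semantics differ somewhat from package-cleanup --leaves --all, and the second needs the dnf-plugins-extras "leaves" plugin installed):

```shell
# Packages that were pulled in as dependencies and are no longer
# required by anything installed (autoremove candidates):
dnf repoquery --unneeded

# The "leaves" plugin lists true leaf packages, i.e. installed
# packages that nothing else depends on:
dnf leaves
```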
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
On 2015-04-27, Les Mikesell
wrote:
--SNIP--
> And I guess the other piece of this would be finding individual
> packages that are not encompassed by the groups - or pulled in by
> dependencies. Is there some database-like approach to take the full
> list of packages, then reduce it to the min
On Mon, Apr 27, 2015 at 4:52 PM, Les Mikesell wrote:
> On Mon, Apr 27, 2015 at 4:34 PM, Matthew Miller wrote:
>> On Mon, Apr 27, 2015 at 04:04:41PM -0500, Les Mikesell wrote:
>>> Interesting, but it seems to _only_ show groups that weren't included
>>> in the anaconda install. For example wher
On Mon, Apr 27, 2015 at 4:34 PM, Matthew Miller wrote:
> On Mon, Apr 27, 2015 at 04:04:41PM -0500, Les Mikesell wrote:
>> Interesting, but it seems to _only_ show groups that weren't included
>> in the anaconda install. For example where the saved anaconda-ks-cfg
>> shows @gnome-desktop and @de
On Mon, Apr 27, 2015 at 04:04:41PM -0500, Les Mikesell wrote:
> Interesting, but it seems to _only_ show groups that weren't included
> in the anaconda install. For example where the saved anaconda-ks-cfg
> shows @gnome-desktop and @development, 'yum grouplist' only shows
> 'MATE Desktop' which
On Mon, Apr 27, 2015 at 03:45:16PM -0500, Johnny Hughes wrote:
> But, I think that is a YUM database and not based on the RPM database,
> so it is possible that you can have all the RPMs for a group installed
> and not actually have it listed as installed.
> At least I sometimes find myself in that
On Mon, Apr 27, 2015 at 1:47 PM, Matthew Miller wrote:
> On Mon, Apr 27, 2015 at 11:58:08AM -0500, Les Mikesell wrote:
>> Is there an 'after the fact' way to find what yum groups are
>> installed, including ones that were added with 'yum groupinstall'
>> instead of the initial anaconda install?
>
On 04/27/2015 01:47 PM, Matthew Miller wrote:
> On Mon, Apr 27, 2015 at 11:58:08AM -0500, Les Mikesell wrote:
>> Is there an 'after the fact' way to find what yum groups are
>> installed, including ones that were added with 'yum groupinstall'
>> instead of the initial anaconda install?
>
> Yes. "y
On Mon, Apr 27, 2015 at 11:58:08AM -0500, Les Mikesell wrote:
> Is there an 'after the fact' way to find what yum groups are
> installed, including ones that were added with 'yum groupinstall'
> instead of the initial anaconda install?
Yes. "yum grouplist" will tell you the groups that are currently installed.
Is there an 'after the fact' way to find what yum groups are
installed, including ones that were added with 'yum groupinstall'
instead of the initial anaconda install?
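For reference, a sketch of the commands discussed above as they behave on a CentOS 7 box (note the caveat from earlier in the thread: this reads yum's own group database, which can disagree with what's actually on disk):

```shell
# Installed and available groups, per yum's group database:
yum grouplist

# Verbose form also shows group IDs, which match the @names
# recorded in the saved anaconda-ks.cfg:
yum grouplist -v
```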
--
Les Mikesell
lesmikes...@gmail.com
On Sat, Jan 24, 2015 at 12:32:01PM -0600, Valeri Galtsev wrote:
> One other thing I would try: disable selinux, and see if that lets
> apache read file, e.g.:
>
> setenforce 0
Setting SELinux to permissive temporarily is a good start, although
it's also helpful to check the audit logs, with:
au
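The command above is cut off, but the usual SELinux triage with the audit toolchain looks roughly like this (a sketch; requires root and the audit/policycoreutils packages):

```shell
# Show recent SELinux denials from the audit log:
ausearch -m avc -ts recent

# Explain why each denial happened (missing boolean, wrong label, etc.):
ausearch -m avc -ts recent | audit2why
```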
On Sat, January 24, 2015 11:27 am, Tim Dunphy wrote:
> Hey guys,
>
> Unless you're using auditd (or a similar service) to watch the file,
> no. You could probably use the logs and `last` to see who was logged
> in at the time and make a guess.
>
>
>
> Also, you can look into shell history files (
Hey guys,
Unless you're using auditd (or a similar service) to watch the file,
no. You could probably use the logs and `last` to see who was logged
in at the time and make a guess.
Also, you can look into shell history files (though that might be cleaned
by users). Admin is allowed to do that
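Going forward, the auditd watch mentioned above looks roughly like this (root required; the path and key name are made-up examples):

```shell
# Watch a file for reads, writes and attribute changes, tagged with a key:
auditctl -w /etc/myapp.conf -p rwa -k myapp-watch

# Later, list matching records; -i interprets numeric fields such as
# auid, the original login uid, which survives su/sudo:
ausearch -k myapp-watch -i
```

To make the watch permanent, the same -w line would go in a rules file under /etc/audit/rules.d/.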
On Fri, January 23, 2015 3:13 pm, Jonathan Billings wrote:
> On Fri, Jan 23, 2015 at 03:50:44PM -0500, Tim Dunphy wrote:
>> Is there any way to find out the last user to access a file on a CentOS
>> 6.5 system?
>
> Unless you're using auditd (or a similar service) to watch the file,
> no. You co
On Fri, Jan 23, 2015 at 03:50:44PM -0500, Tim Dunphy wrote:
> Is there any way to find out the last user to access a file on a CentOS
> 6.5 system?
Unless you're using auditd (or a similar service) to watch the file,
no. You could probably use the logs and `last` to see who was logged
in at the
Hey guys,
Is there any way to find out the last user to access a file on a CentOS
6.5 system?
Thanks
Tim
--
GPG me!!
gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>
> In centos, the apache package is named httpd, not apache. try removing the
> packages first. (yum remove httpd)
Yup! Already done. I did say I removed apache packages, realizing the name
of the package is actually httpd in centos. My bad for not communicating
clearly. This exercise is just t
Hi
find / -name "*httpd*" -type d | grep -v www
Thanks. Ideally I'd like to use the -delete flag to find once I have the
right command. But with that I suppose I could use find / -name "*httpd*"
-type d | grep -v www | xargs rm -rfv
Assuming that the initial find doesn't do anything too scary
Hey guys,
Sorry, I'm not sure what's wrong with this statement. I've tried a few
variations of trying to exclude the /var/www directory.
[root@224432-24 apr-1.5.1]# find / -name "*httpd*" -type d \( ! -name www \)
/usr/lib/httpd
/usr/lib64/httpd
/var/www/vhosts/johnnyenglish/httpdocs
/var/www/lpadde
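The underlying gotcha: -name tests only the basename of each visited path and does not stop descent, so "! -name www" still walks into /var/www and prints its matching children. A self-contained sketch against a throwaway tree (directory names are stand-ins):

```shell
# Demo: "! -name www" does not exclude the contents of www;
# pruning on the path does.
set -e
root=$(mktemp -d)
mkdir -p "$root/usr/lib/httpd" "$root/var/www/vhosts/site/httpdocs"

# Broken form: httpdocs under www still matches "*httpd*".
broken=$(find "$root" -name '*httpd*' -type d ! -name www)

# Working form: prune the www subtree, then print the remaining matches.
fixed=$(find "$root" -path "$root/var/www" -prune -o \
             -name '*httpd*' -type d -print)

echo "broken: $broken"
echo "fixed:  $fixed"
rm -rf "$root"
```

A "! -path '*/www/*'" test is the other common fix, but it still wastes time descending into the excluded tree, which -prune avoids.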
Hi John,
Thx, it works.
On Fri, Jul 18, 2014 at 1:45 PM, John R Pierce wrote:
> On 7/17/2014 6:32 PM, Benjamin Fernandis wrote:
> > Is there any command or way to grab disk / storage /volume path for
> > particular guest vm?
>
> # virsh domblklist kfat
> Target Source
> -
On 7/17/2014 6:32 PM, Benjamin Fernandis wrote:
> Is there any command or way to grab disk / storage /volume path for
> particular guest vm?
# virsh domblklist kfat
Target     Source
------------------------------------------
hda        /var/lib/libvirt/images/kfat.img
hdb        /var/lib/li
Hi,
We use KVM-based virtualization on CentOS 6.5.
Is there any command or way to grab the disk/storage/volume path for a
particular guest VM?
Thx
Benjo
Don't forget to escape that exclamation point if you're typing it in bash.
On Tue, May 13, 2014 at 1:50 AM, Nicolas Thierry-Mieg <
nicolas.thierry-m...@imag.fr> wrote:
> > On Mon, May 12, 2014 at 4:44 AM, Tim Dunphy
> wrote:
> >
> >> Thanks. But what if I want to turn that statement into one that will
>
> On Mon, May 12, 2014 at 4:44 AM, Tim Dunphy wrote:
>
>> Thanks. But what if I want to turn that statement into one that will delete
>> everything it finds? I need to preserve the contents of that directory.
>>
>> As in : find / -path '/usr/local/digitalplatform/*' -prune -o -name
>> "*varnish*"
Why not copy the directory elsewhere, then delete the rest and move it
back? You'd take a copy of it anyway, if it is important, right?
Cheers,
Cliff
On Mon, May 12, 2014 at 4:44 AM, Tim Dunphy wrote:
> Thanks. But what if I want to turn that statement into one that will delete
> everything i
On 05/11/2014 01:06 PM, Tim Dunphy wrote:
> Hal & Jack
>
> Both are perfect! Thanks
>
> [root@uszmpwsls014lb ~]# find / -print | grep -v digitalplatform | grep
> varnish
> /var/lib/varnish
> /var/lib/varnish/uszmpwsls014lb
> /var/lib/varnish/uszmpwsls014lb/_.vsl
> /var/lib/varnish/varnish_storage
On Sun, May 11, 2014 at 12:33:47PM -0400, Tim Dunphy wrote:
> find / -path '/usr/local/digitalplatform/*' -prune -o -name "*varnish*"
Try
find / -path /usr/local/digitalplatform -prune -o -name '*varnish*' -print
Without the explicit -print, find will implicitly add one
e.g
find / \( -path ..
Hal & Jack
Both are perfect! Thanks
[root@uszmpwsls014lb ~]# find / -print | grep -v digitalplatform | grep
varnish
/var/lib/varnish
/var/lib/varnish/uszmpwsls014lb
/var/lib/varnish/uszmpwsls014lb/_.vsl
/var/lib/varnish/varnish_storage.bin
/usr/lib64/libvarnish.so.1
/usr/lib64/libvarnishapi.so.1
On Sun, 2014-05-11 at 12:33 -0400, Tim Dunphy wrote:
> Hey all,
>
> I'm trying to do a find of all files with the phrase 'varnish' in the
> name, but want to exclude a user home directory called
> /usr/local/digitalplatform.
find / -path /usr/local/digitalplatform -prune -name \*varnish\* doesn'
find / -print | grep -v digitalplatform | grep varnish | xargs rm
But test this first - you don't want to remove anything by accident.
On Sun, May 11, 2014 at 11:44 AM, Tim Dunphy wrote:
> Thanks. But what if I want to turn that statement into one that will delete
> everything it finds? I need
Thanks. But what if I want to turn that statement into one that will delete
everything it finds? I need to preserve the contents of that directory.
As in : find / -path '/usr/local/digitalplatform/*' -prune -o -name
"*varnish*" -exec rm -rfv {} \;
I'm thinking the grep -v would be a visual thing,
So:
find / -print | grep -v digitalplatform | grep varnish
On Sun, May 11, 2014 at 11:39 AM, Hal Wigoda wrote:
> Just grep it out.
>
> find . -print | grep -v digitalplatform
>
> -v excludes
>
> On Sun, May 11, 2014 at 11:33 AM, Tim Dunphy wrote:
>> Hey all,
>>
>> I'm trying to do a find of
Just grep it out.
find . -print | grep -v digitalplatform
-v excludes
On Sun, May 11, 2014 at 11:33 AM, Tim Dunphy wrote:
> Hey all,
>
> I'm trying to do a find of all files with the phrase 'varnish' in the
> name, but want to exclude a user home directory called
> /usr/local/digitalplatform.
Hey all,
I'm trying to do a find of all files with the phrase 'varnish' in the
name, but want to exclude a user home directory called
/usr/local/digitalplatform.
Here's what I was able to come up with:
find / -path '/usr/local/digitalplatform/*' -prune -o -name "*varnish*"
Which results in thi
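The -prune idiom being discussed, run against a throwaway tree (a sketch; directory names are stand-ins). Note the explicit -print on the second branch: without it, find's implicit print applies to the whole expression and the pruned directory itself shows up in the output.

```shell
# Demo of: find PATH -path EXCLUDED -prune -o -name PATTERN -print
set -e
root=$(mktemp -d)
mkdir -p "$root/usr/local/digitalplatform" "$root/var/lib/varnish"
touch "$root/usr/local/digitalplatform/varnish.vcl"   # must be excluded
touch "$root/var/lib/varnish/storage.bin"

found=$(find "$root" -path "$root/usr/local/digitalplatform" -prune \
             -o -name '*varnish*' -print)
echo "$found"
rm -rf "$root"
```

Only once the printed list looks right would one swap -print for an action such as -exec rm -rf {} +.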
> Order of operations
> find /path/to/files/ -type f -mtime -2 -name *.xml.gz -print0
Thanks!
On Thu, Oct 25, 2012 at 03:41:51PM -0500, Sean Carolan wrote:
> If I run this:
> find /path/to/files/ -type f -mtime -2 -name *.xml.gz
> find /path/to/files/ -print0 -type f -mtime -2 -name *.xml.gz
Order of operations
find /path/to/files/ -type f -mtime -2 -name *.xml.gz -print0
--
rgds
Ste
If I run this:
find /path/to/files/ -type f -mtime -2 -name *.xml.gz
I get the expected results, files with modify time less than two days old.
But, if I run it like this, with the print0 flag:
find /path/to/files/ -print0 -type f -mtime -2 -name *.xml.gz
I get older files included as well. A
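The "order of operations" point above, demonstrated in a sandbox: find evaluates its expression left to right with an implied AND, and -print0 is an action that always succeeds. Placed first, it fires for every file before the tests run; placed last, it only fires for files that passed -type/-mtime/-name.

```shell
# Demo: position of -print0 changes which files get printed.
set -e
d=$(mktemp -d)
touch "$d/new.xml.gz"
touch -d '10 days ago' "$d/old.xml.gz"   # too old for -mtime -2

early=$(find "$d" -print0 -type f -mtime -2 -name '*.xml.gz' \
            | tr '\0' '\n' | grep -c 'xml\.gz')
late=$(find "$d" -type f -mtime -2 -name '*.xml.gz' -print0 \
            | tr '\0' '\n' | grep -c 'xml\.gz')

echo "with -print0 first: $early matches"
echo "with -print0 last:  $late matches"
rm -rf "$d"
```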
On 05.08.2012 00:19, Tim Dunphy wrote:
> I'm trying to write a script that will search through a directory of trace
> logs [...] and it's not possible to know the exact
> names of the files before they are created. The purpose of this is to
> create service checks in nagios.
[...]
> The problem
On Sat, Aug 04, 2012 at 06:19:39PM -0400, Tim Dunphy wrote:
> hello list,
>
> I'm trying to write a script that will search through a directory of trace
> logs for an oracle database. From what I understand new files are always
> being created in the directory and it's not possible to know the ex
hello list,
I'm trying to write a script that will search through a directory of trace
logs for an oracle database. From what I understand new files are always
being created in the directory and it's not possible to know the exact
names of the files before they are created. The purpose of this is
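The thread is cut off, but the stated problem (file names unknown in advance, only recency matters) is usually handled with find's age tests rather than names. A sandboxed sketch; the directory, the 60-minute window and the ORA- pattern are invented placeholders:

```shell
# Sketch of a Nagios-style check over recently written trace files.
set -e
tracedir=$(mktemp -d)          # stand-in for the Oracle trace directory
echo "ORA-00600 internal error" > "$tracedir/ora_1234.trc"
touch -d '2 hours ago' "$tracedir/ora_old.trc"   # outside the window

# Files modified in the last 60 minutes, whatever they are called:
recent=$(find "$tracedir" -type f -mmin -60)

# Grep only those recent files for an error pattern:
errors=$(grep -l 'ORA-' $recent 2>/dev/null || true)
echo "recent: $recent"
echo "errors: $errors"
rm -rf "$tracedir"
```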
On Fri, Dec 09, 2011 at 03:26:26PM +, Always Learning wrote:
>
> It's not intellectual enough and it's too short and it's also simple.
You left out "incorrect".
John
--
Like its politicians and its wars, society has the teenagers it deserves.
On Fri, Dec 09, 2011 at 10:09:27AM -0500, Windsor Dave L. (AdP/TEF7.1) wrote:
> I like:
>
> find . -type f -printf '%TY/%Tm/%Td %TH:%TM:%TS %p\n' | sort -n | tail -1
>
> which shows the last access date/time in a human-readable format that also
> sorts nicely (YYYY/MM/DD HH:MM:SS).
>
> Note tha
From: "m.r...@5-cent.us"
> John R. Dennison wrote:
>> On Fri, Dec 09, 2011 at 03:15:53PM +0100, Mogens Kjaer wrote:
>>>
>>> Try something like:
>>>
>>> find . -type f -printf '%A@ %p\n' | sort -n | tail -1
>>
>> I believe you want %T@ instead of %A@ (modification time versus access
>> tim
Always Learning wrote:
>
> On Fri, 2011-12-09 at 10:23 -0500, m.r...@5-cent.us wrote:
>
>> What's wrong with ls -laFrt?
>
> Everything !
>
> Its not intellectual enough and its too short and its also simple.
>
Ok, then ls -ZlaFrt | tail -1 | sort | tail -1
That better?
mark "is the obfus
On Fri, 2011-12-09 at 10:23 -0500, m.r...@5-cent.us wrote:
> What's wrong with ls -laFrt?
Everything !
It's not intellectual enough and it's too short and it's also simple.
Paul.
John R. Dennison wrote:
> On Fri, Dec 09, 2011 at 03:15:53PM +0100, Mogens Kjaer wrote:
>>
>> Try something like:
>>
>> find . -type f -printf '%A@ %p\n' | sort -n | tail -1
>
> I believe you want %T@ instead of %A@ (modification time versus access
> time). I would also suggest sort -nr to sort fr
On 12/9/2011 9:27 AM, John R. Dennison wrote:
> On Fri, Dec 09, 2011 at 03:15:53PM +0100, Mogens Kjaer wrote:
>>
>> Try something like:
>>
>> find . -type f -printf '%A@ %p\n' | sort -n | tail -1
>
> I believe you want %T@ instead of %A@ (modification time versus access
> time). I would also sug
thank you!
Helmut
On 09.12.2011 15:15, Mogens Kjaer wrote:
> find . -type f -printf '%A@ %p\n' | sort -n | tail -1
On Fri, Dec 09, 2011 at 03:15:53PM +0100, Mogens Kjaer wrote:
>
> Try something like:
>
> find . -type f -printf '%A@ %p\n' | sort -n | tail -1
I believe you want %T@ instead of %A@ (modification time versus access
time). I would also suggest sort -nr to sort from most recent to least
recent.
On 12/09/2011 02:41 PM, Helmut Drodofsky wrote:
> Hello,
>
> I am trying to find the most recent file-modification time in a directory hierarchy.
>
> I think, there could be a solution with find?
Try something like:
find . -type f -printf '%A@ %p\n' | sort -n | tail -1
Mogens
--
Mogens Kjaer, m...@lemo
Hello,
I am trying to find the most recent file-modification time in a directory hierarchy.
I think there could be a solution with find?
Thank you for help in advance
Best regards
Helmut Drodofsky
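Putting the corrections in this thread together: the one-liner should use %T@ (modification time) rather than %A@ (access time). A self-contained demo:

```shell
# Demo: newest file in a tree by modification time.
set -e
d=$(mktemp -d)
touch -d '2014-01-01 00:00' "$d/older.txt"
touch -d '2015-06-15 12:00' "$d/newest.txt"

latest=$(find "$d" -type f -printf '%T@ %p\n' | sort -n | tail -1)
echo "$latest"
rm -rf "$d"
```

With sort -nr, the newest file comes first instead and "head -1" replaces "tail -1".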
It's not a bug, it's a feature ... and this is not a joke.
You [de|re]fine the search with the suffixes you supply, making it
possible to hand "find" a granularity mechanism that indeed makes
find a very powerful utility.
Jobst
On Thu, Mar 25, 2010 at 06:16:40PM -0500, Les Mikesell (lesmikes..
On 3/25/2010 6:07 PM, Jobst Schmalenbach wrote:
> On Thu, Mar 25, 2010 at 07:45:14AM +0100, Ala1n Sp1neu8 (aspin...@gmail.com)
> wrote:
>> Hello
>> find /etc -size -1G
>
> Very interesting way of finding all files with a file size of 0 ;-)
What's interesting is that the program doesn't do the uni
On Thu, Mar 25, 2010 at 07:45:14AM +0100, Ala1n Sp1neu8 (aspin...@gmail.com)
wrote:
> Hello
> find /etc -size -1G
Very interesting way of finding all files with a file size of 0 ;-)
Jobst
>
> should return all files less than 1Giga byte in /etc, but return a
> list of empty file (size=0)
>
>
Hello
find /etc -size -1G
should return all files smaller than 1 gigabyte in /etc, but returns a
list of empty files (size=0).
find /etc -size -2G
works fine and returns all the files.
This works the same on my Fedora 11 and my CentOS 5!
Did I miss something, or is it a bug?
Regards
--
Alain Spineu
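What makes this a feature rather than a bug: with a unit suffix, find rounds each file's size *up* to whole units before comparing. Every non-empty file under 1G rounds up to exactly one 1G unit, so "-size -1G" (strictly less than one unit) matches only empty files. A sandboxed demonstration, with the byte-exact "c" suffix for contrast:

```shell
# Demo of find's -size rounding rule.
set -e
d=$(mktemp -d)
: > "$d/empty"            # 0 bytes -> rounds to 0 units of 1G
echo data > "$d/small"    # 5 bytes -> rounds UP to 1 unit of 1G

rounded=$(find "$d" -type f -size -1G)
exact=$(find "$d" -type f -size -1073741824c | sort)

echo "size -1G matched:          $rounded"
echo "size -1073741824c matched: $exact"
rm -rf "$d"
```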
> Your "-path" argument is wrong. Try this:
>
> find /var/data/foo -path '/var/data/foo/.snapshot' -prune -o -exec chown
> usera:groupb {} +
>
> You need the whole path, and there is no need to escape the '.' character.
> I've also used "+" as the terminator. That's just an efficiency issu
Tom Brown wrote:
> Hi
>
> I have to use find to change the perms of a directory and files within
> that directory recursively but i need to exclude a directory within the
> top level directory, as its a netapp and so contains a read only
> .snapshot dir.
>
> I have tried...
>
> # find /var/da
Hi
I have to use find to change the perms of a directory and the files within
that directory recursively, but I need to exclude a directory within the
top-level directory: as it's a NetApp, it contains a read-only
.snapshot dir.
I have tried...
# find /var/data/foo -path '\.\/\.snapshot' -prune
On 2009-12-31 15:13, Noob Centos Admin wrote:
> Just an concluding update to anybody who might be interested :)
>
> My apologies for blaming spamassassin in the earlier email. It was
> taking so long because of the real problem.
>
> Apparently the odd exim processes that was related to the mail loo
Just a concluding update to anybody who might be interested :)
My apologies for blaming spamassassin in the earlier email. It was
taking so long because of the real problem.
Apparently the odd exim processes that were related to the mail loop
problem I nipped were still the culprit. I had overlook
I initiated services shutdown as previously planned and once the
external services like exim, dovecot, httpd, crond (because it kept
restarting these services), the problem child stood out like a sore
thumb.
There were two exim instances that didn't go away despite 'service exim
stop'. Once I killed
Hi,
> I do not know about now but I had to unload the modules in question.
> Just clearing the rules was not enough to ensure that the netfilter
> connection tracking modules were not using any cpu at all.
Thanks for pointing this out. Being a noob admin as my pseudonym
states, I'd assumed stoppi
Noob Centos Admin wrote:
> Hi,
>
>> Yes, these figures indicate that you are fairly close to being cpu bound.
>>
>> What kind of filtering are you doing? If you have any connection
>> tracking/state related rules set, you will need to be using a fair
>> amount of cpu.
>
> Initially, when the load
Hi,
> Yes, these figures indicate that you are fairly close to being cpu bound.
>
> What kind of filtering are you doing? If you have any connection
> tracking/state related rules set, you will need to be using a fair
> amount of cpu.
Initially, when the load start going up, I had thought the APF
Christoph Maser wrote:
On Thursday, 31.12.2009 at 12:34 +0100, Chan Chung Hang Christopher wrote:
Look at the first two columns. Which column has higher numbers? If r,
you're CPU-bound. If b, you're I/O bound.
>>> procs ---memory-- ---swap-- -io --syste
On Thursday, 31.12.2009 at 12:34 +0100, Chan Chung Hang Christopher wrote:
> >> Look at the first two columns. Which column has higher numbers? If r,
> >> you're CPU-bound. If b, you're I/O bound.
> >
> > procs -----------memory---------- ---swap-- -----io---- --system-- ------cpu------
>> Look at the first two columns. Which column has higher numbers? If r,
>> you're CPU-bound. If b, you're I/O bound.
>
> procs -----------memory---------- ---swap-- -----io---- --system-- ------cpu------
>  r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa st
>
Hi,
> Dstat could at least tell you if your problem is CPU or I/O.
This was the result of running the following command which I obtained
from reading up about two weeks ago when I started trying to
investigate the abnormal server behaviour.
dstat -c --top-cpu -d --top-bio --top-latency
usr sys
Hi,
> You should also try out "atop" instead of just using top. The major
> advantage is that it gives you more information about the disk and
> network utilization.
Thanks for the tip, I tried it and if the red lines are any
indication, it seems that atop thinks my disks (md raid 1) are the
pr
Hi,
> > since initially it seems like the high load may be due to I/O wait
> Maybe this will help you to identify the IO loading process:
>
> http://dag.wieers.com/blog/red-hat-backported-io-accounting-to-rhel5
Thanks for the suggestion, I did install dstat earlier while trying to
figure things
On 2009-12-29 23:44, Noob Centos Admin wrote:
> My CentOS 5 server has seen the average load jump through the roof
> recently despite having no major additional clients placed on it.
> Previously, I was looking at an average of less than 0.6 load, I had a
> monitoring script that sends an email w
On 12/29/2009 11:44 PM, Noob Centos Admin wrote:
> My CentOS 5 server has seen the average load jump through the roof
> recently despite having no major additional clients placed on it.
> Previously, I was looking at an average of less than 0.6 load, I had a
> monitoring script that sends an emai
On Wednesday, 30.12.2009 at 05:44 +0100, Noob Centos Admin wrote:
> since initially it seems like the high load may be due to I/O wait
Maybe this will help you to identify the IO loading process:
http://dag.wieers.com/blog/red-hat-backported-io-accounting-to-rhel5
Chris
financial.com AG
Mun
On Dec 30, 2009, at 1:05 AM, Noob Centos Admin
wrote:
> Hi,
>
>> Try blocking the IPs on the router and see if that helps.
>
> Unfortunately the server's in a DC so the router is not under our
> control.
That sucks, oh well.
>> You can also run iostat and look at the disk usage which also
Noob Centos Admin wrote:
> However, iostat reports much lower %user and %system compared to top
> running at the same time so I'm not quite sure if I can rely on its
> figures.
> ...
> iostat
> Linux 2.6.18-128.1.16.el5xen 12/30/2009
> avg-cpu: %user %nice %system %iowait %steal %idle
>
Hi,
> Try blocking the IPs on the router and see if that helps.
Unfortunately the server's in a DC so the router is not under our control.
> You can also run iostat and look at the disk usage which also
> generates load.
I did try iostat and its iowait% did coincide with top's report, which
is
Hi,
> last time I saw something like that, it was a bunch of chinese 'bots'
> hammering on my public services like ssh.
> another admin had turned
> pop3 on too, this created a very heavy load yet they didn't show up in
> top (bunches of pop3 and ssh processes showed up in ps -auxww,
> however, plu
On Dec 29, 2009, at 11:44 PM, Noob Centos Admin
wrote:
> My CentOS 5 server has seen the average load jump through the roof
> recently despite having no major additional clients placed on it.
> Previously, I was looking at an average of less than 0.6 load, I had
> a monitoring script th
Noob Centos Admin wrote:
> My CentOS 5 server has seen the average load jump through the roof
> recently despite having no major additional clients placed on it.
> Previously, I was looking at an average of less than 0.6 load, I had a
> monitoring script that sends an email warning me if the c
My CentOS 5 server has seen the average load jump through the roof
recently despite having no major additional clients placed on it.
Previously, I was looking at an average of less than 0.6 load, I had a
monitoring script that sends an email warning me if the current load stayed
above 0.6 for mor
From: john blair
> I want to write a script to find the latest version of rpm of a given package
> available from a mirror, e.g.:
> http://mirror.centos.org/centos/5/os/x86_64/CentOS/
> Is there any existing script that does this? Or can someone give me a general
> idea on how to go about this
I should have mentioned that I am looking for a solution that I can even run
from my Debian box (i.e. no yum).
--- On Thu, 12/10/09, john blair wrote:
> From: john blair
> Subject: [CentOS] find latest version of rpms from a mirror
> To: centos@centos.org
> Date: Thursday, December
john blair wrote:
> I want to write a script to find the latest version of rpm of a given package
> available from a mirror, e.g.:
> http://mirror.centos.org/centos/5/os/x86_64/CentOS/
> Is there any existing script that does this? Or can someone give me a general
> idea on how to go about this
I want to write a script to find the latest version of the rpm of a given
package available from a mirror, e.g.:
http://mirror.centos.org/centos/5/os/x86_64/CentOS/
Is there any existing script that does this? Or can someone give me a
general idea on how to go about this?
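One offline sketch of the filename-sorting approach. The listing is hard-coded here; a real script would scrape the mirror's directory index or, better, parse repodata/primary.xml. Note that GNU "sort -V" only approximates rpm's own version-comparison rules, so a robust script would use rpmdev-vercmp or the rpm Python bindings instead:

```shell
# Pick the highest version of one package from a list of rpm filenames.
set -e
listing='bash-3.2-24.el5.x86_64.rpm
bash-3.2-32.el5_9.1.x86_64.rpm
bash-3.2-33.el5.1.x86_64.rpm
httpd-2.2.3-91.el5.x86_64.rpm'

latest=$(printf '%s\n' "$listing" | grep '^bash-' | sort -V | tail -1)
echo "$latest"
```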
Lucian @ lastdot.org wrote:
> What you need is this:
> http://choon.net/php-mail-header.php
>
> But this requires recompiling PHP..
>
You're assuming this is being done via PHP; it could just as easily be
coming from a bad Perl CGI or another similarly exploitable web service.