There is no optimal.  It's better used as a baseline.  If it's normally
around 1.5 and today it's running at 10, something is possibly wrong.  All
that really matters is whether the system is responsive enough to suit the
task.  It could be at 99 and it wouldn't really matter if RADIUS responses
are coming back fast.
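The baseline idea above can be sketched in a few lines of Python. The factor of 5 and the function name are arbitrary illustrations, not anything from the thread; as the advice says, the only real test is whether the service is responsive.

```python
import os

def is_anomalous(current, baseline, factor=5.0):
    """Flag a load average far above its historical baseline.

    The factor of 5 is an arbitrary choice for illustration --
    a flagged value is worth investigating, not necessarily a problem.
    """
    return current > baseline * factor

# On a Unix-like system the current 1-minute load comes from:
#   one_min, five_min, fifteen_min = os.getloadavg()
print(is_anomalous(10.0, baseline=1.5))   # True: far above the usual 1.5
print(is_anomalous(1.8, baseline=1.5))    # False: within normal range
```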

Sent from my iPhone

On Sep 13, 2016, at 4:58 AM, Torry, Andrew <andrew.to...@fxplus.ac.uk> wrote:

This is all very useful and a fine exercise in Linux semantics, but it does
not really help me much when I have a manager asking me what this stuff all
means and whether the server is up to the job or not.

What is a good (aka safe) figure for a system with, say, 16 GB RAM and 8 cores?

It strikes me that the vertical scale for this graph might as well be
'Chickens' or 'Bananas', as it does not seem to indicate what is or is not
a 'good' figure; it just shows ups and downs.

Perhaps the 'Developers' could add an optimal horizontal line to the graph
(i.e. go above that and your server is struggling). After all, the number
of Portal connections and other httpd performance settings are
pre-calculated based on resources, so it should not be difficult to do.

Andrew



-----------------------------
    Falmouth University
-----------------------------

-----Original Message-----
From: Sallee, Jake [mailto:jake.sal...@umhb.edu]
Sent: 12 September 2016 14:29
To: packetfence-users@lists.sourceforge.net
Subject: Re: [PacketFence-users] Server Load metric

Load average is more complex than number of (logical or otherwise)
CPUs vs the load average number. The reason being load takes into
account the processor state of "waiting for disk I/O".


Ah, yes. I forgot about that.

You can use a command like iostat to get more detailed info about I/O.

The iowait field will give you the % of time your CPU was idle due to
waiting on system I/O (i.e. reading from the hard disk).
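The same iowait share can be computed by hand from the aggregate "cpu" line in /proc/stat. A minimal sketch, with a made-up sample line (the counters are cumulative jiffies since boot; iostat diffs two samples over an interval, which this sketch does not do):

```python
def iowait_fraction(stat_line):
    """Return the fraction of CPU time spent in iowait, given the
    aggregate 'cpu' line from /proc/stat.

    Fields after the 'cpu' label are jiffies spent in:
    user, nice, system, idle, iowait, irq, softirq, steal, ...
    """
    fields = [int(x) for x in stat_line.split()[1:]]
    return fields[4] / sum(fields)

# Illustrative numbers, not real output:
sample = "cpu 4705 150 1120 16250 1775 0 0 0 0 0"
print(round(iowait_fraction(sample), 3))  # 0.074
```

On a real system the line would come from `open("/proc/stat").readline()`.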

As far as my experience goes, when load is driven up, it is almost
always due to IO saturation, not CPU saturation. However, I don't have
much experience with PF systems, so they might have CPU saturation
issues.


Interesting.  My experience has been almost the opposite. But most of my
workloads tend to be RAM-centric and not disk-centric, which could account
for that.

Jake Sallee
Godfather of Bandwidth
System Engineer
University of Mary Hardin-Baylor
WWW.UMHB.EDU

900 College St.
Belton, Texas
76513

Fone: 254-295-4658
Phax: 254-295-4221

________________________________________
From: Matt Zagrabelny <mzagr...@d.umn.edu>
Sent: Friday, September 9, 2016 3:07 PM
To: packetfence-users@lists.sourceforge.net
Subject: Re: [PacketFence-users] Server Load metric

On Fri, Sep 9, 2016 at 2:37 PM, Sallee, Jake <jake.sal...@umhb.edu> wrote:

I always assumed that came from the same source that 'top' pulls from.



If I am correct, then the number represents the workload of your system. In
simplified terms, you want this number to always be less than the number of
processor cores in your system.



If you have a quad-core system and a system load of 3.00, then you are
effectively running 3 of your cores at 100%.



If on a quad-core system you have a value of 8.00, this means you have
overloaded your system: there are 4 processes waiting while 4 other
processes are fully utilizing all the cores on your system.



Here is a bit more explanation if you're interested.



http://www.howtogeek.com/194642/understanding-the-load-average-on-linux-and-other-unix-like-systems/



TL;DR: the load score should always be less than the number of logical
cores in your system; if it's not, then your system is overworked and you
need to do something about it.
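The arithmetic in the quad-core examples above can be sketched in a couple of lines. Note this treats load as pure CPU demand, which (as pointed out elsewhere in the thread) is an oversimplification on Linux, where tasks waiting on disk I/O also count toward load:

```python
def tasks_beyond_cores(load, cores):
    """Average number of runnable tasks that had no core to run on,
    assuming load measures CPU demand only."""
    return max(0.0, load - cores)

print(tasks_beyond_cores(3.0, 4))  # 0.0 -- one core's worth of headroom
print(tasks_beyond_cores(8.0, 4))  # 4.0 -- four tasks queued, four running
```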


Load average is more complex than number of (logical or otherwise)
CPUs vs the load average number. The reason being load takes into
account the processor state of "waiting for disk I/O".

From man proc:


       /proc/loadavg
              The first three fields in this file are load average figures
              giving the number of jobs in the run queue (state R) or waiting
              for disk I/O (state D) averaged over 1, 5, and 15 minutes.  They
              are the same as the load average numbers given by uptime(1) and
              other programs.  The fourth field consists of two numbers
              separated by a slash (/).  The first of these is the number of
              currently runnable kernel scheduling entities (processes,
              threads).  The value after the slash is the number of kernel
              scheduling entities that currently exist on the system.  The
              fifth field is the PID of the process that was most recently
              created on the system.
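The five fields described in that excerpt are easy to pull apart programmatically. A minimal sketch with a made-up sample line in the documented format (on a real system the text would come from reading /proc/loadavg):

```python
def parse_loadavg(text):
    """Split a /proc/loadavg line into the five documented fields."""
    one, five, fifteen, entities, last_pid = text.split()
    runnable, total = entities.split("/")
    return {
        "1min": float(one),
        "5min": float(five),
        "15min": float(fifteen),
        "runnable": int(runnable),        # state R or D, per the man page
        "total_entities": int(total),     # all kernel scheduling entities
        "last_pid": int(last_pid),
    }

# Made-up sample in the documented format:
sample = "1.53 1.48 1.40 2/512 12345"
print(parse_loadavg(sample)["runnable"])  # 2
```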

Thus, you could have a high load average, throw a bunch of CPUs at the
issue, and not change the problem one bit. It could be I/O bound.

As far as my experience goes, when load is driven up, it is almost
always due to IO saturation, not CPU saturation. However, I don't have
much experience with PF systems, so they might have CPU saturation
issues.

-m

------------------------------------------------------------------------------
_______________________________________________
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users
