> Take obytes64/rbytes64 at two different points in
> time, say 5 seconds apart. Add the differences in
> bytes to get the total bytes transferred during the
> interval.
> 
> Divide by 5 secs and you have the bytes/sec
> throughput.  From there you can just divide by the
> Gbit speed.

Having written a script to do this, I suggest that you grab the timestamp from 
each of the kstat queries and subtract to find the actual interval.  I hate 
seeing impossible numbers (mainly because the 107.3% threw off my output 
formatting!)
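
As a minimal sketch of that idea: every kstat snapshot carries its own
capture timestamp (snaptime, in seconds since boot), so you can divide by
the real interval instead of trusting your sleep.  Something like the
Python below, assuming the GLDv3 "link" kstats and a hypothetical
interface name of e1000g0 -- adjust the module/instance/name triplet for
your driver:

    #!/usr/bin/python
    # Sketch only: assumes the GLDv3 "link" kstats and a NIC named
    # e1000g0; substitute your own module:instance:name triplet.
    import subprocess
    import time

    LINK = "e1000g0"                 # hypothetical interface name
    WIRE_BPS = 1e9 / 8.0             # 1Gbit/s wire speed in bytes/sec

    def sample():
        # kstat -p prints "module:instance:name:statistic<TAB>value";
        # snaptime is kstat's own capture time, seconds since boot.
        sel = ["link:0:%s:%s" % (LINK, s)
               for s in ("obytes64", "rbytes64", "snaptime")]
        out = subprocess.check_output(["kstat", "-p"] + sel)
        stats = {}
        for line in out.decode().splitlines():
            key, _, value = line.partition("\t")
            stats[key.rsplit(":", 1)[1]] = float(value)
        return stats["obytes64"] + stats["rbytes64"], stats["snaptime"]

    b0, t0 = sample()
    time.sleep(5)
    b1, t1 = sample()

    # Divide by the *measured* interval, not the nominal 5 seconds,
    # so utilization can never come out at an impossible 107.3%.
    bps = (b1 - b0) / (t1 - t0)
    print("%.1f MB/s, %.1f%% of wire speed"
          % (bps / 1e6, 100.0 * bps / WIRE_BPS))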

> A 1Gbit interface does 128MB/s ... but isn't this
> theoretical? Yes, I've seen 12MB/s on 100Mbit
> ethernet... but I seldom see >100MB/s on Gbit
> ethernet. What would be the correct value to consider
> as 100% utilization?

I can't imagine using anything other than wire speed for 100%.  Even if a 
particular box or OS can't push it that fast, there's no other good number that 
would be a "hard" limit.
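
For a concrete number: link speeds are specified in decimal units, so
1Gbit/s wire speed is 125MB/s (the 128MB/s figure above looks like
1Gbit treated as 1024Mbit).  A quick check, with a made-up observed
rate:

    # Wire speed for 1GbE, decimal units as link speeds are specified:
    wire = 1e9 / 8.0        # 125,000,000 bytes/s = 125 MB/s, not 128
    observed = 118e6        # hypothetical measured throughput, bytes/s
    print("%.1f%% utilization" % (100.0 * observed / wire))  # -> 94.4%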

-- 
Darren
 
 