Additional thought... by watching the client-side response time,
we could declare a server outage when a watcher determines
that the response times exceed some statistical threshold.
For example, if the upper limit is 3 sigma above the 20-minute
rolling average (and the variance is suitable
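A rough sketch of what such a watcher could look like (illustrative names only; a fixed-size sample window stands in for the 20-minute rolling average, and a real watcher would age samples by time rather than by count):

/* Illustrative sketch: flag an outage when the latest client response
 * time is more than 3 sigma above the mean of a rolling window. */
#include <math.h>
#include <stddef.h>

#define WINDOW 1200                 /* e.g. one sample per second for 20 minutes */

static double samples[WINDOW];
static size_t nsamples, next;

void record_response(double ms)
{
    samples[next] = ms;
    next = (next + 1) % WINDOW;
    if (nsamples < WINDOW)
        nsamples++;
}

int looks_like_outage(double latest_ms)
{
    double sum = 0.0, sumsq = 0.0, mean, var;
    size_t i;

    if (nsamples < WINDOW)          /* not enough history yet */
        return 0;
    for (i = 0; i < nsamples; i++) {
        sum += samples[i];
        sumsq += samples[i] * samples[i];
    }
    mean = sum / nsamples;
    var = sumsq / nsamples - mean * mean;
    if (var < 0.0)                  /* guard against rounding */
        var = 0.0;
    return latest_ms > mean + 3.0 * sqrt(var);
}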
> The default for filebench today doesn't produce a throughput summary
> at regular intervals during the run. It would be quite easy to add
> this feature, which is something Roch has asked for.
I don't think this will be easy to interpret from a client's
perspective. When something is broken
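(For reference, the interval reporting being discussed amounts to roughly the following reporter loop. This is only a sketch of the idea, not filebench code; 'completed_ops' is a stand-in for whatever operation counter the benchmark already keeps.)

/* Sketch of per-interval throughput reporting; not filebench code. */
#include <stdio.h>
#include <unistd.h>

extern volatile unsigned long completed_ops;   /* assumed global op counter */

void report_loop(unsigned interval_sec)
{
    unsigned long last = 0, now;

    for (;;) {
        sleep(interval_sec);
        now = completed_ops;
        printf("%lu ops in last %u s (%.1f ops/s)\n",
            now - last, interval_sec,
            (double)(now - last) / interval_sec);
        last = now;
    }
}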
Eric Lowe wrote:
While it sounds interesting from an academic point of view, I wonder
how relevant this work (and any work related to paging) is when memory
can be purchased for under $200 a gigabyte.
When we're talking about "green" computing and TB of main memory,
compression and paging optim
> The question is how to lower the CPU utilization of an application.
> - Are there any general rules?
Wow - this is a very, very general question. There are, however, a couple of
general rules which usually stand you in good stead when looking at
optimisation:
* There are only two ways to make so
On Mon, Nov 14, 2005 at 02:34:46PM -0600, James Dickens wrote:
| > paging candidate determination very costly). ZFS is very fast (being copy-
| > on-write it does sequential I/Os whereas swapfs does random disk access),
| > and it supports compression. ZFS is already slated to become the new
| > re
On 11/14/05, Eric Lowe <[EMAIL PROTECTED]> wrote:
On Mon, Nov 14, 2005 at 10:28:38AM -0800, Nitin Gupta wrote:
| Hi,
| I've been working on porting the 'compressed cache' feature
| (http://linuxcompressed.sourceforge.net/ - feature explained in first few lines)
| to linux 2.6 kernel.
| I'm wonderi
On Mon, Nov 14, 2005 at 10:28:38AM -0800, Nitin Gupta wrote:
| Hi,
| I've been working on porting the 'compressed cache' feature
| (http://linuxcompressed.sourceforge.net/ - feature explained in first few lines)
| to linux 2.6 kernel.
| I'm wondering why this project is dead even when it show
Hi,
I've been working on porting the 'compressed cache' feature
(http://linuxcompressed.sourceforge.net/ - feature explained in first few lines)
to linux 2.6 kernel.
I'm wondering why this project is dead even though it showed a great
performance improvement when the system is under memory pressure.
Yes; a simple change to the workload definition will create threads
vs processes.
The clause:
define process, instances=n
{
  thread, instances=n
  {
    ...
  }
}
You can arrange the process/thread relationship any way you like. In
the simplest case to make some
I've extended filebench to write through a device-specific API, and so far
my rudimentary support works OK. But I've found a platform-specific difference
that raises a usage question of threads versus processes.
If USE_PROCESS_MODEL is defined, procflow_createproc() will fork/exec a process,
other
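(For anyone following along, the two models differ roughly as sketched below. This is only an illustration of fork/exec versus pthread_create, not the actual procflow_createproc() code; worker_main and argv are made-up names.)

/* Illustration of the two worker-creation models; not filebench code. */
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>

extern void *worker_main(void *arg);        /* hypothetical worker entry point */

#ifdef USE_PROCESS_MODEL
/* Process model: each worker gets its own address space, created by
 * fork/exec'ing the benchmark binary again. */
int create_worker(char *const argv[])
{
    pid_t pid = fork();

    if (pid == 0) {
        execv(argv[0], argv);
        _exit(1);                           /* exec failed */
    }
    return pid > 0 ? 0 : -1;                /* parent: 0 on success */
}
#else
/* Thread model: all workers share one address space. */
int create_worker(void *arg)
{
    pthread_t tid;

    return pthread_create(&tid, NULL, worker_main, arg);
}
#endif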
The question is how to lower the CPU utilization of an application.
- Are there any general rules?
- What kinds of operations consume a large amount of CPU time?
- How can we make sure multiple instances of an application don't interfere
with each other too badly?
For example, when using gzip to compress a big file, the CP
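One concrete lever for the last point (keeping several instances from hurting each other) is to lower the process's scheduling priority before the CPU-heavy work starts, e.g. with setpriority(); the nice value 10 below is arbitrary:

/* Lower this process's scheduling priority so that several CPU-bound
 * instances (e.g. parallel gzip runs) coexist more politely. */
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    if (setpriority(PRIO_PROCESS, 0, 10) != 0)
        perror("setpriority");

    /* ... CPU-intensive work, e.g. the compression loop, goes here ... */
    return 0;
}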