Rusty Russell wrote:
Is the noop scheduler significantly worse than hooking directly into
q->make_request_fn?
The noop scheduler does do request merging, and has the same device
plug latency as other schedulers.
so long,
Carsten
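The request merging Carsten refers to can be illustrated with a minimal userspace sketch (hypothetical data structures, not kernel code — the kernel's struct request is far richer): a request that starts exactly where a queued one ends is coalesced into it instead of being queued separately, which is what the noop elevator's back-merging does for contiguous I/O.

```c
#include <assert.h>

/* Toy stand-in for a block request: a contiguous run of sectors. */
struct req {
    unsigned long sector;   /* first sector */
    unsigned long nr;       /* number of sectors */
};

/* Back-merge: if the new request starts exactly where 'q' ends,
 * extend 'q' instead of queueing a second request.
 * Returns 1 if the merge happened, 0 otherwise. */
static int try_back_merge(struct req *q, const struct req *new_req)
{
    if (q->sector + q->nr == new_req->sector) {
        q->nr += new_req->nr;
        return 1;
    }
    return 0;
}
```

Merging two adjacent 8-sector requests this way yields a single 16-sector request, halving the number of submissions the lower layer sees.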
Jens Axboe wrote:
On Fri, Jun 01 2007, Carsten Otte wrote:
With regard to compute power needed, almost none. The penalty is
latency, not overhead: A small request may sit on the request queue to
wait for other work to arrive until the queue gets unplugged. This
penalty is compensated by
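The latency-versus-merging trade-off described above can be modeled in a few lines (a toy model, not kernel code): a plugged queue holds requests back until either enough work has accumulated or a timer fires, so a lone small request pays the full delay.

```c
#include <assert.h>

/* Toy model of queue plugging: requests accumulate while the queue is
 * plugged and are only dispatched once 'depth' requests have arrived
 * or the deadline has passed. */
struct toy_queue {
    unsigned int queued;     /* requests currently held back */
    unsigned int depth;      /* unplug once this many are queued */
    unsigned long deadline;  /* ...or no later than this tick */
};

/* Returns 1 if the queue should be unplugged (work dispatched) now. */
static int should_unplug(const struct toy_queue *q, unsigned long now)
{
    return q->queued >= q->depth || now >= q->deadline;
}
```

A single small request (`queued == 1`) sits until the deadline; a burst of requests reaches the depth threshold and is dispatched immediately — the merging opportunity that compensates the latency.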
Jens Axboe wrote:
Most people should not fiddle with it, the defaults are there for good
reason. I can provide a blk_queue_unplug_thresholds(q, depth, delay)
helper that you could use for the virtualized drivers, perhaps that
would be better for that use?
Yea, we shouldn't change the defaults
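The helper Jens proposes could plausibly look like the sketch below. This is a userspace mock-up with a stubbed queue type, not a real kernel API — the helper name comes from the thread, and the `unplug_thresh`/`unplug_delay` field names are assumed from the request-queue tunables of that era.

```c
#include <assert.h>

/* Stub standing in for struct request_queue; only the unplug
 * tunables such a helper would touch. */
struct request_queue_stub {
    unsigned int unplug_thresh;   /* unplug after this many requests */
    unsigned long unplug_delay;   /* ...or after this many ticks */
};

/* Sketch of the proposed per-driver tuning helper: lets a driver set
 * its own unplug thresholds instead of fiddling with the defaults. */
static void blk_queue_unplug_thresholds(struct request_queue_stub *q,
                                        unsigned int depth,
                                        unsigned long delay)
{
    q->unplug_thresh = depth;
    q->unplug_delay = delay;
}
```

A virtualized driver, where host-side merging is cheap, might call it with `depth = 1, delay = 0` to dispatch every request immediately while leaving the global defaults alone.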
On Fri, 2007-06-01 at 09:10 +0200, Carsten Otte wrote:
Rusty Russell wrote:
What's the overhead in doing both?
Rusty Russell wrote:
Now my lack of block-layer knowledge is showing. I would have thought
that if we want things like ionice(1) to work, we have to do some
guest scheduling or pass that information down to the host.
Yea that would only work on the host: one can use ionice to set the io
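For context on what passing that priority down would involve: ionice(1) works through the ioprio_set() syscall, whose priority value packs a scheduling class and a level into one integer. The encoding below follows include/linux/ioprio.h (class in the top bits above a 13-bit shift, level 0-7 in the low bits).

```c
#include <assert.h>

/* I/O priority encoding as used by the ioprio_set() syscall:
 * scheduling class in the high bits, priority level in the low bits.
 * Values follow include/linux/ioprio.h. */
#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_PRIO_VALUE(class, data) \
        (((class) << IOPRIO_CLASS_SHIFT) | (data))

enum {
    IOPRIO_CLASS_NONE,  /* 0: no explicit class */
    IOPRIO_CLASS_RT,    /* 1: realtime */
    IOPRIO_CLASS_BE,    /* 2: best-effort (ionice -c2) */
    IOPRIO_CLASS_IDLE,  /* 3: idle (ionice -c3) */
};
```

Forwarding guest priorities to the host would mean carrying such a value across the virtio boundary so the host scheduler can honor it; nothing in the guest-side noop/make_request_fn path would otherwise preserve it.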
Troy Benjegerdes wrote:
This kind of a claim needs some benchmark data to document it.
We've implemented both for our vdisk driver on 390. At least on our
platform, merging in the host is preferable because vmenter/vmexit is
very fast and we would merge twice because we submit the result via
On Thu, 2007-05-31 at 14:57 +0200, Carsten Otte wrote:
Rusty Russell wrote:
Example block driver using virtio.
The block driver uses outbufs with sg[0] being the request information
(struct virtio_blk_outhdr) with the type, sector and inbuf id. For a
write, the rest of the sg will
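A sketch of the out-header described above, as a plain C struct. The field names and widths here are assumed from the description in the text (type, sector, inbuf id), not copied from the actual patch.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the request out-header placed in sg[0]: the request type,
 * the starting sector, and an id tying the request to its inbuf so the
 * completion can be matched up. Layout assumed, not from the patch. */
struct virtio_blk_outhdr {
    uint32_t type;    /* e.g. read or write */
    uint64_t sector;  /* first sector of the transfer */
    uint32_t id;      /* inbuf id for completion matching */
};
```

For a write, the remaining sg entries would then carry the data payload itself, as the description indicates.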