Looks like set zfs:zfs_vdev_max_pending = 1 fixes this problem __very__
elegantly. Now with a 16GB file copy in the background I can launch an
intensive application like Eclipse very fast.
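For reference, a sketch of how this tunable is typically applied on Solaris. The /etc/system line is the one named above; the mdb commands follow the usual live-tuning convention (run as root, and note the value must be given in decimal with the 0t prefix):

```shell
# Persistent: add to /etc/system, takes effect on next boot
set zfs:zfs_vdev_max_pending = 1

# Live on a running kernel (no reboot; 0t1 = decimal 1)
echo zfs_vdev_max_pending/W0t1 | mdb -kw

# Check the current value
echo zfs_vdev_max_pending/D | mdb -k
```

Lowering the per-vdev queue depth like this trades some streaming throughput for latency, which is why the background copy stops starving interactive reads.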
--
This message posted from opensolaris.org
On Jan 12, 2010, at 10:27 AM, bank kus wrote:
>> I think if you look at the majority of performance
>> problems reported on this
>> forum, they are latency and not bandwidth bound.
>
> __latency__ of reads in a highly contested system (lots of reads from
> different processes ahead of you)
>
> I think if you look at the majority of performance
> problems reported on this
> forum, they are latency and not bandwidth bound.
__latency__ of reads in a highly contested system (lots of reads from
different processes ahead of you)
OR
the speed of light case (here's a read where queues are empty
On Jan 11, 2010, at 8:21 PM, bank kus wrote:
>> resource management policies implemented. By default,
>> ZFS will queue
>> 35 I/Os to each leaf vdev, so it is not clear that
>> scheduling above the ZFS
>> level will be as effective
>
> It doesn't have to be above the _ZFS_ layer, no? In place of a
On Mon, Jan 11, 2010 at 10:30 PM, Richard Elling
wrote:
> I misinterpreted the question. My answer assumes reads from the same file.
>
> AFAIK, there is no thread-level I/O scheduler in Solaris. ZFS uses a priority
> scheduler which is based on the type of I/O and there are some other
> resource management policies implemented.
On Mon, Jan 11, 2010 at 9:29 PM, bank kus wrote:
>> Then you're actually asking for a fair I/O scheduler.
>
> yes are we currently fair? any good documentation on the priority model as it
> exists today?
I doubt it, first come-first go is most common. The same holds for
memory as well.
Regards,
Then you're actually asking for a fair I/O scheduler.
Regards,
Andrey
On Mon, Jan 11, 2010 at 8:12 PM, bank kus wrote:
> I was asking from the starvation point of view, to see if B can be starved by
> a long burst from A
> resource management policies implemented. By default,
> ZFS will queue
> 35 I/Os to each leaf vdev, so it is not clear that
> scheduling above the ZFS
> level will be as effective
It doesn't have to be above the _ZFS_ layer, no? In place of a single queue one
could maintain separate queues that
Per Posix there's no read ordering guarantees for a file with
concurrent non-exclusive readers. Use queue/locks in the application
if you need ordering like this.
Regards,
Andrey
On Mon, Jan 11, 2010 at 7:05 PM, bank kus wrote:
> As of 2009.06 what is the policy with reordering ZFS file reads
On Jan 11, 2010, at 11:41 AM, Andrey Kuzmin wrote:
> On Mon, Jan 11, 2010 at 10:30 PM, Richard Elling
> wrote:
>> I misinterpreted the question. My answer assumes reads from the same file.
>>
>> AFAIK, there is no thread-level I/O scheduler in Solaris. ZFS uses a priority
> scheduler which is based on the type of I/O
I misinterpreted the question. My answer assumes reads from the same file.
AFAIK, there is no thread-level I/O scheduler in Solaris. ZFS uses a priority
scheduler which is based on the type of I/O and there are some other
resource management policies implemented. By default, ZFS will queue
35 I/Os to each leaf vdev, so it is not clear that scheduling above the ZFS
level will be as effective
On Jan 11, 2010, at 8:05 AM, bank kus wrote:
> As of 2009.06 what is the policy with reordering ZFS file reads i.e.,
> consider the following timeline:
> T0: Process A issues read of size 20K and gets its thread switched out
>
> T1: Process B issues reads of size 8 bytes and gets its thread switched out
> I doubt it, first come-first go is most common. The
> same holds for
> memory as well.
> Regards,
> Andrey
and that is because it was considered and rejected for XYZ reasons (or for
lack of sufficient reasons), or is it simply something that's not been evaluated? I
would argue the following problem
> Then you're actually asking for a fair I/O scheduler.
yes are we currently fair? any good documentation on the priority model as it
exists today?
I was asking from the starvation point of view, to see if B can be starved by a
long burst from A
As of 2009.06 what is the policy with reordering ZFS file reads i.e., consider
the following timeline:
T0: Process A issues read of size 20K and gets its thread switched out
T1: Process B issues reads of size 8 bytes and gets its thread switched out
Are the 8 byte reads from B going to fall in