Yours,
Markus Kovero
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Donald Stahl
Sent: 9 June 2011 6:27
To: Ding Honghui
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Weird write performance
> Here is a snapshot of the metaslab layout; the last 51 metaslabs have 64G
> of free space.
After we added all the disks to our system we had lots of free
metaslabs, but that didn't seem to matter. I don't know if perhaps the
system was attempting to balance the writes across more of our devices
but whate
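A snapshot like the one referenced above can be regenerated with `zdb -m <pool>`, which prints one line per metaslab including its free space. The sketch below counts the large-free metaslabs; it assumes the OpenSolaris-era output format shown in the sample lines, and the pool name `tank` and the 64G cutoff are illustrative, not from the thread:

```shell
# summarize_free: count metaslabs with >= 64G free, from lines shaped like
# "metaslab 140 offset 8c00000000 spacemap 0 free 64.0G"
summarize_free() {
  awk '$8 ~ /G$/ { sub(/G$/, "", $8); if ($8 + 0 >= 64) n++ }
       END { print n+0 " metaslab(s) with >= 64G free" }'
}

# In practice: zdb -m tank | summarize_free
printf 'metaslab 140 offset 8c00000000 spacemap 0 free 64.0G\nmetaslab 1 offset 0 spacemap 33 free 2.1G\n' | summarize_free
```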
On 06/09/2011 10:14 AM, Ding Honghui wrote:
On 06/09/2011 12:23 AM, Donald Stahl wrote:
> Another (less satisfying) workaround is to increase the amount of free space
> in the pool, either by reducing usage or adding more storage. Observed
> behavior is that allocation is fast until usage crosses a threshold, then
> performance hits a wall.
We actually tried this solution. We were at
On 06/08/11 01:05, Tomas Ögren wrote:
And if pool usage is >90%, then there's another problem (the algorithm
for finding free space changes).
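The usage threshold discussed above can be watched with `zpool list`. A sketch (pool names and the 90% limit are illustrative; above that point the allocator reportedly falls back to a slower free-space search):

```shell
# parse_cap: flag pools at or above a usage threshold (90 here).
# Input mirrors `zpool list -H -o name,cap`: name<TAB>cap%
parse_cap() {
  awk -F'\t' -v limit=90 '
    { sub(/%/, "", $2)
      if ($2 + 0 >= limit) print $1 " is at " $2 "% -- above the " limit "% mark" }'
}

# In practice: zpool list -H -o name,cap | parse_cap
printf 'tank\t92%%\nrpool\t41%%\n' | parse_cap
```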
> In Solaris 10u8:
> root@nas-hz-01:~# uname -a
> SunOS nas-hz-01 5.10 Generic_141445-09 i86pc i386 i86pc
> root@nas-hz-01:~# echo "metaslab_min_alloc_size/K" | mdb -kw
> mdb: failed to dereference symbol: unknown symbol name
Fair enough. I don't have anything older than b147 at this point so I
was
On 06/08/2011 09:15 PM, Donald Stahl wrote:
"metaslab_min_alloc_size" is not in use when block allocator isDynamic block
allocator[1].
So it is not tunable parameter in my case.
May I ask where it says this is not a tunable in that case? I've read
through the code and I don't see what you are ta
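For reference, on builds where the symbol does exist (Donald mentions b147 and later; Solaris 10u8 evidently lacks it), the tunable can be changed live with mdb or set persistently in /etc/system. This is a sketch from community tuning notes of that era, not a recommendation, and the 0x1000 value and the assumption that the symbol lives in the zfs module are both illustrative:

```shell
# Read the current value (fails with "unknown symbol" where it doesn't exist):
echo "metaslab_min_alloc_size/K" | mdb -k

# Set it live to 4 KB (0x1000); /Z writes an 8-byte value:
echo "metaslab_min_alloc_size/Z 0x1000" | mdb -kw

# Persistent equivalent, assuming the symbol is in the zfs module
# (add to /etc/system and reboot):
#   set zfs:metaslab_min_alloc_size = 0x1000
```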
On 06/08/2011 04:05 PM, Tomas Ögren wrote:
On 08 June, 2011 - Donald Stahl sent me these 0,6K bytes:
One day, the write performance of zfs degraded.
The write performance decreased from 60MB/s to about 6MB/s for sequential
writes.
Command:
date;dd if=/dev/zero of=block bs=1024*128 count=1;date
Ding Honghui
Sent: 8 June 2011 6:07
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Weird write performance problem
On 06/08/2011 12:12 PM, Donald Stahl wrote:
>> One day, the write performance of zfs degraded.
>> The write performance decreased from 60MB/s to about 6MB/s for sequential
>> writes.
>>
>> Command:
>> date;dd if=/dev/zero of=block bs=1024*128 count=1;date
See this thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=139317&tstart=45
And one comment:
When we do a write operation (the dd command above), heavy read activity
increases from zero to 3M on each disk,
and the write bandwidth is poor.
The disk io %b increases from 0 to about 60.
I don't understand why this happens.
capacity o
Hi,
I ran into a weird write performance problem and need your help.
One day, the write performance of zfs degraded.
The write performance decreased from 60MB/s to about 6MB/s for sequential writes.
Command:
date;dd if=/dev/zero of=block bs=1024*128 count=1;date
The hardware configuration is 1 Dell MD3000 an
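For repeatable numbers, the dd test from the thread can be wrapped so elapsed time is computed for you. The path and sizes below are examples (64 MiB in 128 KiB records, matching the thread's bs=1024*128), and `date +%s` assumes a GNU or BSD userland:

```shell
# Sequential-write sketch: write 64 MiB of zeros in 128 KiB records,
# then sync and report elapsed wall-clock seconds.
TESTFILE=${TESTFILE:-/tmp/zfs-seqwrite-test}
start=$(date +%s)
dd if=/dev/zero of="$TESTFILE" bs=131072 count=512 2>/dev/null
sync
end=$(date +%s)
echo "wrote 64 MiB in $((end - start)) s"
rm -f "$TESTFILE"
```

Note that on a dataset with compression enabled, zeros compress away almost entirely, so a /dev/zero test can overstate real throughput.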