On 12-05-07 8:45 PM, Bob Friesenhahn wrote:
I see that there are a huge number of reads and hardly any writes. Are
you SURE that deduplication was not enabled for this pool? This is
the sort of behavior that one might expect if deduplication was
enabled without enough RAM or L2 read cache.
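Bob's RAM point can be made concrete with a back-of-envelope sketch. The ~320 bytes/entry figure for an in-core dedup-table entry is a common rule of thumb, not something measured on Karl's system, and the 1 TiB / 128 KiB numbers are illustrative assumptions:

```shell
# Rough DDT RAM sizing (assumed figures, not from this thread):
ENTRY_BYTES=320                              # assumed in-core cost per DDT entry
BLOCK_BYTES=$((128 * 1024))                  # default 128 KiB recordsize
POOL_BYTES=$((1024 * 1024 * 1024 * 1024))    # 1 TiB of unique data
BLOCKS=$((POOL_BYTES / BLOCK_BYTES))
DDT_MIB=$((BLOCKS * ENTRY_BYTES / 1024 / 1024))
echo "${DDT_MIB} MiB of RAM just for the DDT"   # prints: 2560 MiB of RAM just for the DDT
```

If a table that size doesn't fit in ARC/L2ARC, writes turn into extra DDT reads, which would match the read-heavy iostat pattern above. "zpool get dedup <pool>" and "zdb -DD <pool>" will confirm whether a DDT exists at all.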
Bob
On Mon, 7 May 2012, Karl Rossing wrote:
On 12-05-07 12:18 PM, Jim Klimov wrote:
During the send you can also monitor "zpool iostat 1" and usual
"iostat -xnz 1" in order to see how busy the disks are and how
many IO requests are issued. The snapshots are likely sent in
the order of block age (TXG number), which for a busy pool may
mean
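A minimal filter for spotting saturated disks in that "iostat -xnz" output might look like the following. The two sample lines are invented for illustration (they are not Karl's actual disks); on the real system you would pipe "iostat -xnz 1" through the same awk:

```shell
# Flag any device whose %b column (second-to-last field) is >= 80.
# The sample data below is made up for demonstration purposes.
sample='    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  412.0    3.1 3296.0   12.4  0.0  7.9    0.0   19.1   0  97 c0t0d0
    5.2    1.0   41.6    4.0  0.0  0.1    0.0    2.3   0   4 c0t1d0'
echo "$sample" | awk 'NR > 1 && $(NF-1) >= 80 { print $NF, "is", $(NF-1) "% busy" }'
# prints: c0t0d0 is 97% busy
```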
On Mon, 7 May 2012, Edward Ned Harvey wrote:
Apparently I pulled it down at some point, so I don't have a URL for you
anymore, but I did, and I posted. Long story short, both raidzN and mirror
configurations behave approximately the way you would hope they do. That
is...
Approximately, as com
On 05/ 8/12 08:36 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
On a Solaris 11 (SR3) system I have a zfs destroy process that appears
to be doing nothing and can't be killed. It has used 5 seconds o
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Paul Kraus
>
> Even with incompressible data I measure better performance with
> compression turned on rather than off.
*cough*
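For what it's worth, the compressible-vs-incompressible distinction behind Paul's claim is easy to demonstrate outside ZFS. Here gzip stands in for lzjb and the scratch files are invented for the demo; lzjb's cheap early-abort on data that won't shrink is one reason compression=on can still help overall throughput:

```shell
# Incompressible (random) data barely shrinks; highly compressible
# (zero) data collapses to almost nothing.
head -c 65536 /dev/urandom > /tmp/rand.bin
head -c 65536 /dev/zero    > /tmp/zero.bin
gzip -c /tmp/rand.bin > /tmp/rand.bin.gz
gzip -c /tmp/zero.bin > /tmp/zero.bin.gz
wc -c /tmp/rand.bin.gz /tmp/zero.bin.gz   # random stays ~64K, zeros shrink to ~100 bytes
```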
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>
> Has someone done real-world measurements which indicate that raidz*
> actually provides better sequential read or write than simple
> mirroring with the same number of disks
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
>
> On a Solaris 11 (SR3) system I have a zfs destroy process that appears
> to be doing nothing and can't be killed. It has used 5 seconds of CPU
> in a day and a half, but truss
Hi Karl,
Someone sitting across the table from me (who saw my posting)
informs me that CR 7060894 would not impact Solaris 10 releases,
so please disregard my comment about CR 7060894.
Thanks,
Cindy
On 5/7/12 11:35 AM, Cindy Swearingen wrote:
Hi Karl,
I'd like to verify that no dead or dying disk is killing pool
performance and your zpool status looks good. Jim has replied
with some ideas to check your individual device performance.
Otherwise, you might be impacted by this CR:
7060894 zfs recv is excruciatingly slow
This CR covers bo
2012-05-07 20:45, Karl Rossing wrote:
I'm wondering why the zfs send could be so slow. Could the other server
be slowing down the sas bus?
I hope other posters would have more relevant suggestions, but
you can see if the buses are contended by dd'ing from the drives.
At least that would give yo
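Jim's dd probe, sketched. /dev/zero stands in for the raw disk device so the command is safe to copy verbatim, and the c0t*d0 names are placeholders, not Karl's actual devices:

```shell
# Read a fixed amount and note the rate dd reports on its last line;
# then run one dd per disk in parallel and see whether the per-disk
# rates drop, which would indicate bus contention.
dd if=/dev/zero of=/dev/null bs=1M count=64 2>&1 | tail -1
# On the real system, something like (placeholder device names):
#   for d in c0t0d0 c0t1d0; do
#     dd if=/dev/rdsk/${d}s0 of=/dev/null bs=1M count=1024 &
#   done; wait
```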
Hi,
I'm seeing slow zfs send on a v29 pool, about 25 MB/sec.
bash-3.2# zpool status vdipool
  pool: vdipool
 state: ONLINE
  scan: scrub repaired 86.5K in 7h15m with 0 errors on Mon Feb  6 01:36:23 2012
config:

        NAME        STATE     READ WRITE CKSUM
        vdipool
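To put 25 MB/sec in perspective, a quick back-of-envelope; the 1 TiB figure is illustrative, since the thread doesn't say how large the send actually is:

```shell
# How long a send runs at the reported rate (integer arithmetic).
MBPS=25
SIZE_MIB=$((1024 * 1024))          # 1 TiB, expressed in MiB
SECS=$((SIZE_MIB / MBPS))
echo "approx $((SECS / 3600)) hours per TiB at ${MBPS} MB/s"
# prints: approx 11 hours per TiB at 25 MB/s
```

A handful of modern SAS disks should sustain several times that rate on a streaming read, so the pool is clearly not the bottleneck by itself.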