I have been attempting introspection into the zpool thread that is
using CPU, but without much luck finding anything meaningful. Occasionally
the CPU usage for that thread drops, and when it does, performance of the
filesystem increases.
> On Wed, 2011-05-04 at 15:40 -0700, Adam Serediuk wrote:
Dedup enabled and the DDT no longer fits in RAM? That would create a huge
performance cliff.
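If the DDT no longer fitting in RAM is the suspect, `zdb -DD <pool>` reports the actual dedup table statistics. As a rough back-of-envelope check (not from the thread; the ~320 bytes per in-core DDT entry and the 20 TiB / 128 KiB figures below are assumptions commonly cited on this list, not measurements):

```shell
#!/bin/sh
# Rough DDT memory estimate -- a sketch only. Assumes ~320 bytes of RAM
# per in-core DDT entry and that the pool holds average-sized blocks.
pool_bytes=$((20 * 1024 * 1024 * 1024 * 1024))   # 20 TiB of data (example)
avg_block=$((128 * 1024))                        # 128 KiB average block size
bytes_per_entry=320                              # assumed in-core entry size

entries=$((pool_bytes / avg_block))
ddt_bytes=$((entries * bytes_per_entry))
echo "unique blocks: $entries"
echo "estimated DDT size: $((ddt_bytes / 1024 / 1024 / 1024)) GiB"
# -> unique blocks: 167772160
# -> estimated DDT size: 50 GiB
```

With small average block sizes the estimate grows proportionally, which is why a pool can fall off the cliff long before RAM looks full.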
> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org on behalf of Eric D. Mudama
> Sent: Wed 5/4/2011 12:55 PM
> To: Adam Serediuk
> Cc: zfs-discuss@opensolaris.org
On May 4, 2011, at 12:28 PM, Michael Schuster wrote:
> On Wed, May 4, 2011 at 21:21, Adam Serediuk wrote:
We have an X4540 running Solaris 11 Express snv_151a that has developed an
issue where its write performance is absolutely abysmal. Even touching a file
takes over five seconds both locally and remotely.
/pool1/data# time touch foo

real    0m5.305s
user    0m0.001s
sys     0m0.004s
/pool1/data#
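A single `time touch` only shows one sample; a loop like the following would show whether the latency is constant or bursty. This is a sketch, not from the thread: the target directory is a placeholder, and GNU date's `%N` is assumed (on Solaris 11 Express, ptime(1) would be the substitute).

```shell
#!/bin/sh
# Sample file-creation latency repeatedly -- a sketch. The directory is a
# placeholder; substitute /pool1/data on the affected system. GNU date's
# nanosecond format (%N) is assumed.
dir=${1:-/tmp}
i=0
while [ "$i" -lt 5 ]; do
  start=$(date +%s%N)
  touch "$dir/latency_probe.$i"
  end=$(date +%s%N)
  echo "touch $i: $(( (end - start) / 1000000 )) ms"
  rm -f "$dir/latency_probe.$i"
  i=$((i + 1))
done
```

If latency tracks the CPU spikes of the zpool thread, that points at the same root cause rather than two independent problems.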
> Sounds like many of us are in a similar situation.
>
> To clarify my original post. The goal here was to continue with what was
> a cost effective solution to some of our Storage requirements. I'm
> looking for hardware that wouldn't cause me to get the run around from
> the Oracle support folks.
, etc. all make a large difference when dealing with very
large data sets.
On 24-Feb-10, at 2:05 PM, Adam Serediuk wrote:
I manage several systems with near a billion objects (largest is
currently 800M) on each and also discovered slowness over time. This
is on X4540 systems with average file sizes being ~5KB. In our
environment the following readily sped up performance significantly:
Do not use RAID-Z; use mirrored vdevs instead.
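The reasoning behind avoiding RAID-Z for ~5KB files is IOPS, not bandwidth: a RAID-Z vdev delivers roughly the random-read IOPS of a single disk, while each mirror contributes a disk's worth for writes and two for reads. A back-of-envelope comparison (the per-disk IOPS figure and layouts below are illustrative assumptions, not measurements from the thread):

```shell
#!/bin/sh
# Small-file IOPS back-of-envelope for a 48-disk X4540 -- a sketch.
# Assumes ~100 random IOPS per 7200rpm SATA disk, one disk's worth of
# random-read IOPS per RAID-Z vdev, and ideal scaling across vdevs.
disks=48
disk_iops=100

# RAID-Z2 layout of 8 vdevs x 6 disks -> roughly 8 disks' worth of IOPS.
raidz_vdevs=8
echo "raidz2 (8x6):   ~$((raidz_vdevs * disk_iops)) random read IOPS"

# Mirrored layout of 24 pairs -> ~24 disks' worth for writes, ~48 for reads.
pairs=$((disks / 2))
echo "mirrors (24x2): ~$((pairs * disk_iops)) write IOPS, ~$((disks * disk_iops)) read IOPS"
```

With hundreds of millions of small objects the workload is almost entirely random metadata and small-block I/O, so the mirror layout's several-fold IOPS advantage dominates.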
would have been with it enabled but I wasn't
about to find out.
Thanks
On 20-Nov-09, at 11:48 AM, Richard Elling wrote:
On Nov 20, 2009, at 11:27 AM, Adam Serediuk wrote:
I have several X4540 Thor systems with one large zpool that replicate
data to a backup host via zfs send/recv. The process works quite well
when there is little to no usage on the source systems. However, when
the source systems are under load, replication slows to a near crawl.
Without