Re: [OpenIndiana-discuss] Slow zfs writes

2013-02-12 Thread Ian Collins

Ram Chander wrote:

So it looks like a redistribution issue. Initially there were two vdevs with
24 disks (disks 0-23) for close to a year, after which we added 24 more
disks and created additional vdevs. The initial vdevs are filled up, and so
write speed declined. Now, how do I find the files that are present on a
given vdev or disk? That way I can remove them and copy them back to
redistribute the data. Is there any other way to solve this?


Please stick to one list or cross-post; multi-posting tends to waste 
responders' time.


--
Ian.


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Slow zfs writes

2013-02-11 Thread Robbie Crash
If you weren't having any speed issues before and they've progressively
gotten worse, I'd look at dedup. If you're using dedup, make sure you have
2.5GB of RAM for every TB of unique data; otherwise you'll be swapping your
dedup tables constantly and your read/write performance will die.
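As a rough back-of-the-envelope check of that rule of thumb (a hedged sketch: `unique_tb` is a placeholder; on a live system you would derive the unique-data size from the DDT statistics that `zdb -DD <pool>` reports):

```shell
# Rule-of-thumb DDT RAM estimate: ~2.5 GB of RAM per TB of unique
# (post-dedup) data. unique_tb is a placeholder value.
unique_tb=20
ram_gb=$(awk -v t="$unique_tb" 'BEGIN { printf "%.1f", t * 2.5 }')
echo "estimated RAM needed for dedup tables: ${ram_gb} GB"
```

If the estimate exceeds the RAM you can dedicate to ARC metadata, dedup will thrash.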

On Mon, Feb 11, 2013 at 1:30 PM, Ian Collins i...@ianshome.com wrote:

 Ram Chander wrote:

 Hi,

 My OI box is experiencing slow ZFS writes (around 30 times slower than
 normal). iostat reports the errors below, though the pool is healthy. This
 has been happening for the past 4 days, though no change was made to the
 system. Are the hard disks faulty?
 Please help.


 Does iostat -xtcMn 10 show any anomalies such as long wait times or high
 %b?

 Your pool configuration is a bit odd, I assume this isn't a production
 system?

 --
 Ian.







-- 
Seconds to the drop, but it seems like hours.

http://www.openmedia.ca
https://robbiecrash.me


Re: [OpenIndiana-discuss] Slow zfs writes

2013-02-11 Thread Sašo Kiselkov
On 02/11/2013 09:54 PM, Robbie Crash wrote:
 If you weren't having any speed issues before and they've progressively
 gotten worse, I'd look at dedup. If you're using dedup, make sure you have
 2.5GB of RAM for every TB of unique data; otherwise you'll be swapping your
 dedup tables constantly and your read/write performance will die.

Also, always remember to tune zfs_arc_meta_limit if you have lots of
deduped data, since DDT entries count as metadata and by default the
meta_limit is 1/4 of arc_c_max (which, on machines with lots of DRAM,
defaults to RAM size minus 1GB).
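For reference, on illumos/OpenIndiana that tunable is usually set in /etc/system (the value here is illustrative only; size it to your actual DDT and reboot for it to take effect):

```
* /etc/system: raise the ARC metadata cap so DDT entries are not
* evicted early. 0x100000000 (4 GiB) is an illustrative value.
set zfs:zfs_arc_meta_limit=0x100000000
```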

Cheers,
--
Saso



Re: [OpenIndiana-discuss] Slow zfs writes

2013-02-11 Thread Jason Matthews


I am going to offer the obvious advice...

How full is your pool?  Zpool performance degrades as the pool fills up,
and the tools don't tell you how close you are to the cliff -- you find the
cliff on your own by falling off of it. As a rule of thumb, I keep
production systems at less than 70% utilization.
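A quick way to check yourself against that rule (a hedged sketch: on a live system you would read the percentage straight from `zpool list -H -o capacity <pool>`; the figures from the 14.5TB example below are hard-coded here so the arithmetic is visible):

```shell
# Fullness check against the ~70% rule of thumb. free_gb/total_gb are
# placeholders; normally: cap=$(zpool list -H -o capacity <pool> | tr -d '%')
free_gb=250
total_gb=14500
cap=$(( (total_gb - free_gb) * 100 / total_gb ))
echo "pool is ${cap}% full"
if [ "$cap" -ge 70 ]; then
  echo "over 70% -- expect write performance to degrade"
fi
```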

Here is a real-life example. On a 14.5TB (configured) pool, I found the
cliff with 250+GB still reported as free. The system continued to write to
the pool, but throughput was dismal.

Is your pool full?

j.

-Original Message-
From: Ram Chander [mailto:ramqu...@gmail.com] 
Sent: Monday, February 11, 2013 4:48 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] Slow zfs writes

Hi,

My OI box is experiencing slow ZFS writes (around 30 times slower than
normal). iostat reports the errors below, though the pool is healthy. This
has been happening for the past 4 days, though no change was made to the
system. Are the hard disks faulty? Please help.


root@host:~# zpool status -v
  pool: test
 state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on software that does not support
feature flags.
config:

NAME STATE READ WRITE CKSUM
test   ONLINE   0 0 0
  raidz1-0   ONLINE   0 0 0
c2t0d0   ONLINE   0 0 0
c2t1d0   ONLINE   0 0 0
c2t2d0   ONLINE   0 0 0
c2t3d0   ONLINE   0 0 0
c2t4d0   ONLINE   0 0 0
  raidz1-1   ONLINE   0 0 0
c2t5d0   ONLINE   0 0 0
c2t6d0   ONLINE   0 0 0
c2t7d0   ONLINE   0 0 0
c2t8d0   ONLINE   0 0 0
c2t9d0   ONLINE   0 0 0
  raidz1-3   ONLINE   0 0 0
c2t12d0  ONLINE   0 0 0
c2t13d0  ONLINE   0 0 0
c2t14d0  ONLINE   0 0 0
c2t15d0  ONLINE   0 0 0
c2t16d0  ONLINE   0 0 0
c2t17d0  ONLINE   0 0 0
c2t18d0  ONLINE   0 0 0
c2t19d0  ONLINE   0 0 0
c2t20d0  ONLINE   0 0 0
c2t21d0  ONLINE   0 0 0
c2t22d0  ONLINE   0 0 0
c2t23d0  ONLINE   0 0 0
  raidz1-4   ONLINE   0 0 0
c2t24d0  ONLINE   0 0 0
c2t25d0  ONLINE   0 0 0
c2t26d0  ONLINE   0 0 0
c2t27d0  ONLINE   0 0 0
c2t28d0  ONLINE   0 0 0
c2t29d0  ONLINE   0 0 0
c2t30d0  ONLINE   0 0 0
  raidz1-5   ONLINE   0 0 0
c2t31d0  ONLINE   0 0 0
c2t32d0  ONLINE   0 0 0
c2t33d0  ONLINE   0 0 0
c2t34d0  ONLINE   0 0 0
c2t35d0  ONLINE   0 0 0
c2t36d0  ONLINE   0 0 0
c2t37d0  ONLINE   0 0 0
  raidz1-6   ONLINE   0 0 0
c2t38d0  ONLINE   0 0 0
c2t39d0  ONLINE   0 0 0
c2t40d0  ONLINE   0 0 0
c2t41d0  ONLINE   0 0 0
c2t42d0  ONLINE   0 0 0
c2t43d0  ONLINE   0 0 0
c2t44d0  ONLINE   0 0 0
spares
  c5t10d0AVAIL
  c5t11d0AVAIL
  c2t45d0AVAIL
  c2t46d0AVAIL
  c2t47d0AVAIL



# iostat -En

c4t0d0   Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
Vendor: iDRAC    Product: Virtual CD   Revision: 0323 Serial No:
Size: 0.00GB 0 bytes
Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0
c3t0d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC    Product: LCDRIVE  Revision: 0323 Serial No:
Size: 0.00GB 0 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c4t0d1   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: iDRAC    Product: Virtual Floppy   Revision: 0323 Serial No:
Size: 0.00GB 0 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0


root@host:~# fmadm faulty
---   --
-
TIMEEVENT-ID  MSG-ID
SEVERITY
---   --
-
Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9

Re: [OpenIndiana-discuss] Slow zfs writes

2013-02-11 Thread Ram Chander
So it looks like a redistribution issue. Initially there were two vdevs with
24 disks (disks 0-23) for close to a year, after which we added 24 more
disks and created additional vdevs. The initial vdevs are filled up, and so
write speed declined. Now, how do I find the files that are present on a
given vdev or disk? That way I can remove them and copy them back to
redistribute the data. Is there any other way to solve this?
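One hedged sketch of that remove-and-re-copy idea (the path is a placeholder): rewriting a file makes ZFS allocate fresh blocks, and the allocator favors emptier vdevs, so rewriting data that currently sits on the full vdevs spreads it across the pool. Note that snapshots or clones will keep the old blocks referenced, so nothing is freed until they are destroyed.

```shell
# Rewrite files in place so their blocks are reallocated across all
# vdevs. /tank/olddata is a placeholder; this simple loop assumes
# filenames without whitespace and enough free space for one extra
# copy of the largest file at a time.
files=$(find /tank/olddata -type f)
for f in $files; do
  cp -p "$f" "$f.rebal" && mv "$f.rebal" "$f"
done
```

The same effect can be had at dataset granularity with `zfs send | zfs recv` into a new dataset and a rename afterwards.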

Total capacity of pool - 98TB
Used - 44TB
Free - 54TB

root@host:# zpool iostat -v
capacity operationsbandwidth
pool alloc   free   read  write   read  write
---  -  -  -  -  -  -
test   54.0T  62.7T 52  1.12K  2.16M  5.78M
  raidz1 11.2T  2.41T 13 30   176K   146K
c2t0d0   -  -  5 18  42.1K  39.0K
c2t1d0   -  -  5 18  42.2K  39.0K
c2t2d0   -  -  5 18  42.5K  39.0K
c2t3d0   -  -  5 18  42.9K  39.0K
c2t4d0   -  -  5 18  42.6K  39.0K
  raidz1 13.3T   308G 13100   213K   521K
c2t5d0   -  -  5 94  50.8K   135K
c2t6d0   -  -  5 94  51.0K   135K
c2t7d0   -  -  5 94  50.8K   135K
c2t8d0   -  -  5 94  51.1K   135K
c2t9d0   -  -  5 94  51.1K   135K
  raidz1 13.4T  19.1T  9455   743K  2.31M
c2t12d0  -  -  3137  69.6K   235K
c2t13d0  -  -  3129  69.4K   227K
c2t14d0  -  -  3139  69.6K   235K
c2t15d0  -  -  3131  69.6K   227K
c2t16d0  -  -  3141  69.6K   235K
c2t17d0  -  -  3132  69.5K   227K
c2t18d0  -  -  3142  69.6K   235K
c2t19d0  -  -  3133  69.6K   227K
c2t20d0  -  -  3143  69.6K   235K
c2t21d0  -  -  3133  69.5K   227K
c2t22d0  -  -  3143  69.6K   235K
c2t23d0  -  -  3133  69.5K   227K
  raidz1 2.44T  16.6T  5103   327K   485K
c2t24d0  -  -  2 48  50.8K  87.4K
c2t25d0  -  -  2 49  50.7K  87.4K
c2t26d0  -  -  2 49  50.8K  87.3K
c2t27d0  -  -  2 49  50.8K  87.3K
c2t28d0  -  -  2 49  50.8K  87.3K
c2t29d0  -  -  2 49  50.8K  87.3K
c2t30d0  -  -  2 49  50.8K  87.3K
  raidz1 8.18T  10.8T  5295   374K  1.54M
c2t31d0  -  -  2131  58.2K   279K
c2t32d0  -  -  2131  58.1K   279K
c2t33d0  -  -  2131  58.2K   279K
c2t34d0  -  -  2132  58.2K   279K
c2t35d0  -  -  2132  58.1K   279K
c2t36d0  -  -  2133  58.3K   279K
c2t37d0  -  -  2133  58.2K   279K
  raidz1 5.42T  13.6T  5163   383K   823K
c2t38d0  -  -  2 61  59.4K   146K
c2t39d0  -  -  2 61  59.3K   146K
c2t40d0  -  -  2 61  59.4K   146K
c2t41d0  -  -  2 61  59.4K   146K
c2t42d0  -  -  2 61  59.3K   146K
c2t43d0  -  -  2 62  59.2K   146K
c2t44d0  -  -  2 62  59.3K   146K



On Tue, Feb 12, 2013 at 6:39 AM, Jason Matthews ja...@broken.net wrote:



 I am going to offer the obvious advice...

 How full is your pool?  Zpool performance degrades as the pool fills up,
 and the tools don't tell you how close you are to the cliff -- you find the
 cliff on your own by falling off of it. As a rule of thumb, I keep
 production systems at less than 70% utilization.

 Here is a real-life example. On a 14.5TB (configured) pool, I found the
 cliff with 250+GB still reported as free. The system continued to write to
 the pool, but throughput was dismal.

 Is your pool full?

 j.

 -Original Message-
 From: Ram Chander [mailto:ramqu...@gmail.com]
 Sent: Monday, February 11, 2013 4:48 AM
 To: Discussion list for OpenIndiana
 Subject: [OpenIndiana-discuss] Slow zfs writes

 Hi,

 My OI box is experiencing slow ZFS writes (around 30 times slower than
 normal). iostat reports the errors below, though the pool is healthy. This
 has been happening for the past 4 days, though no change was made to the
 system. Are the hard disks faulty? Please help.


 root@host:~# zpool status -v
   pool: test
  state: ONLINE
 status: The pool is formatted using a legacy on-disk format.  The pool can
 still be used, but some features are unavailable.
 action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
 pool will no longer be accessible on software that does not support
 feature flags.
 config:

 NAME STATE READ WRITE CKSUM
 test   ONLINE   0 0 0
   raidz1-0   ONLINE   0 0 0
 c2t0d0   ONLINE   0 0 0
 c2t1d0   ONLINE