Re: [zfs-discuss] zpool upgrade -v

2008-07-03 Thread Walter Faleiro
Hi,
I reinstalled our Solaris 10 box using the latest update available.
However, I could not upgrade the zpool:

bash-3.00# zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history

For more information on a particular version, including supported releases,
see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.

bash-3.00# zpool upgrade -a
This system is currently running ZFS version 4.

All pools are formatted using this version.


The Sun docs said to use zpool upgrade -a. Looks like I have missed
something.


--Walter

On Fri, Jun 13, 2008 at 7:55 PM, Al Hopper [EMAIL PROTECTED] wrote:

 On Fri, Jun 13, 2008 at 4:48 PM, Dick Hoogendijk [EMAIL PROTECTED] wrote:
  I have a disk on ZFS created by snv_79b (sxde4) and one on ZFS created
  by snv_90 (sxce). I wonder, how do I know a ZFS version has to be
  upgraded or not? I.e. are the ZFS versions of sxde and sxce the same?
  How do I verify that?

 Hi Dick (from the solaris on x86 list),

 - First off, and you may already know this, you can upgrade - but it's
 a one-way ticket.  You can't change your mind and go backwards, as
 in, down-grade to a previous release.  And, what if you want to
 restore a snapshot to a box running an older release of ZFS...

 - Secondly, you're not *required* to upgrade.  If there is even a 1 in
 a 1,000,000 chance that you might want to use the pool with a previous
 release of *olaris - *don't* do it!  And this includes moving the pool
 over to a different *olaris release - which is a requirement that
 cannot always be foreseen.

 - 3rd, in many cases, there is no loss of features by not upgrading.
  Again - I say - in most cases - not in all cases.

 To examine the current version or to upgrade it, please read the latest
 version of the ZFS admin guide, doc # 817-2271.  Look at 'zpool get
 version poolName' and 'zpool upgrade'.
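
 A minimal sketch of those commands (the pool name is only a placeholder):

 # zpool get version mypool   - shows the on-disk format version of one pool
 # zpool upgrade              - lists pools formatted with older versions
 # zpool upgrade -a           - one-way upgrade of all pools, only if needed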

 Recommendation (based on personal experience): leave the on-disk format
 at the SXDE default version for now.

 Regards,

  --
  Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
  ++ http://nagual.nl/ + SunOS sxce snv90 ++
 
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 



 --
 Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
  Voice: 972.379.2133 Timezone: US CDT
 OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
 http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Quotas Locking down a system

2008-06-06 Thread Walter Faleiro
Folks,
I am running into an issue with a quota-enabled ZFS file system. I checked
the ZFS properties but could not figure out a workaround.

I have a file system /data/project/software which has a 250G quota set. There
are no snapshots enabled for this file system. When the quota is reached,
users cannot delete any files and get a "disk quota exceeded" error. I then
have to log in as root on the ZFS-exporting server, increase the quota so the
files can be deleted, and afterwards revert the quota.
As a workaround, I have a script which checks the usage on the file system
every few minutes and deletes dummy files that I created if the usage is
100%, or re-creates the dummy files if the usage is not 100%. But I assume
there must be a better way of handling this via ZFS.
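
One alternative sometimes suggested (a hedged sketch; whether it helps
depends on the ZFS release, and the file name below is only a placeholder)
is to truncate a large file first, which releases its data blocks, and then
unlink it:

bash-3.00# cp /dev/null /data/project/software/some-large-file
bash-3.00# rm /data/project/software/some-large-file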

Thanks,
--Walter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving ZFS file system to a different system

2007-12-09 Thread Walter Faleiro
Hi Robert,
Thanks it worked like a charm.

--Walter

On Dec 7, 2007 7:33 AM, Robert Milkowski [EMAIL PROTECTED] wrote:

  Hello Walter,


 Thursday, December 6, 2007, 7:05:54 PM, you wrote:

 Hi All,

 We are currently having a hardware issue with our ZFS file server, hence
 the file system is unusable.

 We are planning to move it to a different system.


 The setup on the file server when it was running was

 bash-3.00# zpool status
   pool: store1
  state: ONLINE
  scrub: none requested
 config:

 NAME        STATE     READ WRITE CKSUM
 backup      ONLINE       0     0     0
   c1t2d1    ONLINE       0     0     0
   c1t2d2    ONLINE       0     0     0
   c1t2d3    ONLINE       0     0     0
   c1t2d4    ONLINE       0     0     0
   c1t2d5    ONLINE       0     0     0

 errors: No known data errors

   pool: store2
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: none requested
 config:

 NAME        STATE     READ WRITE CKSUM
 store       ONLINE       0     0     1
   c1t3d0    ONLINE       0     0     0
   c1t3d1    ONLINE       0     0     0
   c1t3d2    ONLINE       0     0     1
   c1t3d3    ONLINE       0     0     0
   c1t3d4    ONLINE       0     0     0

 errors: No known data errors


 The store1 was an external raid device with a slice configured to boot the
 system + swap, and the remaining disk space configured for use with ZFS.

 The store2 was a similar external raid device which had all slices
 configured for use with ZFS.

 Since both are SCSI raid devices, we are thinking of booting up the former
 using a different Sun box.

 Are there some precautions to be taken to avoid any data loss?

 Thanks,

 --W



 Just make sure the external storage is not connected to both hosts at the
 same time.

 Once you connect it to another host, simply import both pools with -f (as
 the pools weren't cleanly exported, I guess).
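
 A minimal sketch of the import on the replacement host (pool names are the
 ones shown in the zpool status above; adjust if they differ):

 # zpool import               - lists pools visible on the attached storage
 # zpool import -f store1     - force-import, since it was not cleanly exported
 # zpool import -f store2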



 Please also notice that you've encountered one uncorrectable error in
 the store2 pool.

 Well, actually it looks like it was corrected, judging from the message.

 IIRC it's a known bug (it should already have been fixed) - a metadata
 cksum error propagates to the top-level vdev unnecessarily.


 --

 Best regards,

  Robert Milkowski   mailto:[EMAIL PROTECTED]   [EMAIL PROTECTED]

http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Moving ZFS file system to a different system

2007-12-06 Thread Walter Faleiro
Hi All,
We are currently having a hardware issue with our ZFS file server, hence the
file system is unusable.
We are planning to move it to a different system.

The setup on the file server when it was running was

bash-3.00# zpool status
  pool: store1
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
backup      ONLINE       0     0     0
  c1t2d1    ONLINE       0     0     0
  c1t2d2    ONLINE       0     0     0
  c1t2d3    ONLINE       0     0     0
  c1t2d4    ONLINE       0     0     0
  c1t2d5    ONLINE       0     0     0

errors: No known data errors

  pool: store2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
store       ONLINE       0     0     1
  c1t3d0    ONLINE       0     0     0
  c1t3d1    ONLINE       0     0     0
  c1t3d2    ONLINE       0     0     1
  c1t3d3    ONLINE       0     0     0
  c1t3d4    ONLINE       0     0     0

errors: No known data errors

The store1 was an external raid device with a slice configured to boot the
system + swap, and the remaining disk space configured for use with ZFS.

The store2 was a similar external raid device which had all slices
configured for use with ZFS.

Since both are SCSI raid devices, we are thinking of booting up the former
using a different Sun box.

Are there some precautions to be taken to avoid any data loss?

Thanks,
--W
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow file system access on zfs

2007-11-07 Thread Walter Faleiro
Hi Lukasz,
The output of the first script gives:
bash-3.00# ./test.sh
dtrace: script './test.sh' matched 4 probes
CPU IDFUNCTION:NAME
  0  42681:tick-10s

  0  42681:tick-10s

  0  42681:tick-10s

  0  42681:tick-10s

  0  42681:tick-10s

  0  42681:tick-10s

  0  42681:tick-10s



and it goes on.

The second script gives:

checking pool map size [B]: filer
mdb: failed to dereference symbol: unknown symbol name
423917216903435

Regards,
--Walter

On 11/7/07, Łukasz K [EMAIL PROTECTED] wrote:

 Hi,

   I think your problem is filesystem fragmentation.
 When available space is less than 40% ZFS might have problems with
 finding free blocks. Use this script to check it:

 #!/usr/sbin/dtrace -s

 fbt::space_map_alloc:entry
 {
    self->s = arg1;
 }

 fbt::space_map_alloc:return
 /arg1 != -1/
 {
    self->s = 0;
 }

 fbt::space_map_alloc:return
 /self->s && (arg1 == -1)/
 {
    @s = quantize(self->s);
    self->s = 0;
 }

 tick-10s
 {
    printa(@s);
 }

 Run the script for a few minutes.


 You might also have problems with the space map size.
 This script will show you the size of the space map on disk:
 #!/bin/sh

 echo '::spa' | mdb -k | grep ACTIVE \
   | while read pool_ptr state pool_name
 do
   echo "checking pool map size [B]: $pool_name"

   echo "${pool_ptr}::walk metaslab | ::print -d struct metaslab ms_smo.smo_objsize" \
     | mdb -k \
     | nawk '{sub("^0t","",$3); sum+=$3} END {print sum}'
 done

 In memory the space map takes about 5 times more space.
 Not all of the space map is loaded into memory all the time, but during,
 for example, a snapshot removal the whole space map might be loaded, so
 check that you have enough RAM available on the machine.
 Check ::kmastat in mdb.
 The space map uses kmem_alloc_40 (on Thumpers this is a real problem).
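
 A quick hedged check of that kmem cache (a sketch; cache names can vary
 between releases):

 echo '::kmastat' | mdb -k | grep kmem_alloc_40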

 Workaround:
 1. First, you can change the pool recordsize:
    zfs set recordsize=64K POOL

 Maybe you will have to use 32K or even 16K.

 2. You will have to disable the ZIL, because the ZIL always takes 128kB
 blocks.

 3. Try disabling the cache, or tune the vdev cache. Check:
 http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide

 Lukas Karwacki

 On 7-11-2007 at 01:49, Walter Faleiro wrote:
  Hi,
  We have a zfs file system configured using a Sunfire 280R with a 10T
  Raidweb array
 
  bash-3.00# zpool list
  NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
  filer   9.44T   6.97T   2.47T    73%  ONLINE  -
 
 
  bash-3.00# zpool status
pool: backup
   state: ONLINE
   scrub: none requested
  config:
 
  NAME        STATE     READ WRITE CKSUM
  filer       ONLINE       0     0     0
    c1t2d1    ONLINE       0     0     0
    c1t2d2    ONLINE       0     0     0
    c1t2d3    ONLINE       0     0     0
    c1t2d4    ONLINE       0     0     0
    c1t2d5    ONLINE       0     0     0
 
 
  The file system is shared via NFS. Of late we have seen that file system
  access slows down considerably. Running commands like find and du on the
  ZFS file system did slow it down, but the intermittent slowdowns cannot
  be explained.

  Is there a way to trace the I/O on the ZFS pool so that we can identify
  the heavy reads/writes to the file system that are responsible for the
  slowness?
 
  Thanks,
  --Walter
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS accessed via nfs is slow

2007-11-06 Thread Walter Faleiro
Hi,
I have a ZFS file system that consists of a
Sunfire V280R + 10T of attached Raidweb array.


bash-3.00# zpool status
  pool: filer
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
backup      ONLINE       0     0     0
  c1t2d1    ONLINE       0     0     0
  c1t2d2    ONLINE       0     0     0
  c1t2d3    ONLINE       0     0     0
  c1t2d4    ONLINE       0     0     0
  c1t2d5    ONLINE       0     0     0

bash-3.00# zpool list
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
filer   9.44T   6.97T   2.47T    73%  ONLINE  -



This file system is shared via NFS on the network. Of late we have started
noticing considerable slowness over the network. Even an ls or a vi command
takes time to execute.

Things that we have noticed slow the system down are commands like du and
find running on the NFS server.
Is there a way the NFS traffic to the ZFS pool can be monitored?

Also, the file system is not slow at all times; the slowness is only
intermittent.

The assumption is that too much I/O is causing it, but what we want to
know is how to capture it.
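
A few server-side commands that can show where the load is coming from (a
hedged sketch; the pool name 'filer' and the 5-second interval are just the
values from this setup, and fsstat/nfsstat are the stock Solaris 10 tools):

bash-3.00# zpool iostat filer 5      # per-pool bandwidth and operations
bash-3.00# fsstat zfs 5              # VFS-level operation counts for ZFS
bash-3.00# nfsstat -s                # cumulative NFS server call counts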

Thanks,
--Walter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Slow file system access on zfs

2007-11-06 Thread Walter Faleiro
Hi,
We have a zfs file system configured using a Sunfire 280R with a 10T
Raidweb array

bash-3.00# zpool list
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
filer   9.44T   6.97T   2.47T    73%  ONLINE  -


bash-3.00# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
filer       ONLINE       0     0     0
  c1t2d1    ONLINE       0     0     0
  c1t2d2    ONLINE       0     0     0
  c1t2d3    ONLINE       0     0     0
  c1t2d4    ONLINE       0     0     0
  c1t2d5    ONLINE       0     0     0


The file system is shared via NFS. Of late we have seen that file system
access slows down considerably. Running commands like find and du on the
ZFS file system did slow it down, but the intermittent slowdowns cannot be
explained.

Is there a way to trace the I/O on the ZFS pool so that we can identify the
heavy reads/writes to the file system that are responsible for the slowness?
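
A hedged DTrace sketch (the io provider and its fields are standard; the
30-second window is an arbitrary choice) that totals physical I/O bytes per
process on the server:

#!/usr/sbin/dtrace -s

io:::start
{
        /* Sum bytes of physical I/O per process name. Note that ZFS issues
           much of its I/O asynchronously, so a large share may be charged
           to sched rather than the originating process. */
        @bytes[execname] = sum(args[0]->b_bcount);
}

tick-30s
{
        printa("%-16s %@d bytes\n", @bytes);
        exit(0);
}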

Thanks,
--Walter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Moving default snapshot location

2007-10-09 Thread Walter Faleiro
Hi,
We have implemented a ZFS file system for home directories and have enabled
it with quotas + snapshots. However, the snapshots are causing an issue with
the user quotas. The snapshots appear under ~username/.zfs/snapshot, which is
part of the user's file system, so if the quota is 10G and the snapshots
total 2G, the snapshot space adds to the disk space charged against the user.
Is there any workaround for this? One option is to increase the quota for the
user, which we don't want to implement. Can the default snapshots be kept in
some other location outside the user's home directory?
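
A hedged sketch, assuming each home directory is its own dataset (e.g.
pool/home/username) and that the ZFS release in use is new enough to have the
refquota property, which limits only the space referenced by the live file
system so snapshot space no longer counts against the user's limit:

bash-3.00# zfs set refquota=10G pool/home/username
bash-3.00# zfs set quota=none pool/home/username   # or keep a larger overall cap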

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss