[zfs-discuss] Available Space Discrepancy

2010-10-15 Thread David Stewart
Using snv_111b; yesterday both the Mac OS X Finder and the Solaris File
Browser started reporting that I had 0 space available on the SMB shares.
Earlier in the day I had copied some files from the Mac to the SMB shares
and the Mac reported no problems (Automator will report errors if the
destination is full and it is unable to copy the remaining files).  Later I
tried to move a folder from one share to another share and the Mac Finder
crashed and restarted.  I tried it again, and after the Finder counted the
number of files it was going to move, it reported that there wasn't enough
space available when there should have been.  Now, I know I did at least
one thing I had not intended: dragging from one share to another does not
MOVE, but instead COPIES.  That was not my intention.

I have 5 shares on the pool (data, movies, music, photos, scans) and zfs list 
reports:
NAME     USED   AVAIL
mediaz1  4.00T      0
data      760K      0
movies   2.57T      0
music     874G      0
photos    360G      0
scans     235G      0

zpool list reports:
NAME      SIZE   USED  AVAIL
mediaz1  5.44T  5.35T  86.7G

and

zpool iostat reports:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
mediaz1     5.35T  86.7G    248      2  30.1M  10.4K

There should be about 86G free, and that sounds about right, but I don't
understand why the Finder and File Browser GUIs report 0, as does zfs list.
And how do I correct this (or myself)?

David

BTW, I DID search the forums and Google and did not find a solution.


[zfs-discuss] zpool is very slow

2009-10-02 Thread David Stewart
I created a raidz zpool and shares, and now the OS is very slow.  I timed
it: I get about eight seconds of use before I get ten seconds of a frozen
screen.  I can be doing anything or barely anything (moving the mouse an
inch side to side repeatedly).  This makes the machine unusable.  If I
detach the SATA card that the raidz zpool is attached to, everything is
fine.  The slowdown occurs regardless of the user that I log in as (admin,
regular user), and the speedup occurs only when the SATA card is removed.
This leads me to believe that something is going on with the zpool.  There
are no files on the zpool (I don't have the patience for the constant
freezing to copy files over to it).

The zpool is 4TB in size.  I previously had the system up and running for a 
week before I did something stupid and decided to start from scratch and 
reinstall and recreate the zpool.

The "zpool status" command shows no errors with the zpool and iostat:
mediaz used 470k avail 5.44T read 0 write 0 bandwidth read 37 bandwidth write 44

How do I find what is accessing the zpool and stop it?
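One way to see what is hitting the disks, sketched on the assumption that
DTrace is available (it ships with OpenSolaris): the io provider can count
block I/O by the process that issued it.

    # Count disk I/O events by process name; run as root,
    # Ctrl-C to print the totals.
    dtrace -n 'io:::start { @[execname] = count(); }'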

David


Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-02 Thread David Stewart
Cindy:

I believe I may have been mistaken.  When I recreated the zpools, you are
correct: "zpool list" and "zfs list" report different numbers for the
sizes.  I must have typed one command and then the other when creating the
different pools.

Thanks for the assist.  Sheepish grin.

David


Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
Cindy:

I am not at the machine right now, but I installed from the OpenSolaris 2009.06 
LiveCD and have all of the updates installed.  I have solely been using "zfs 
list" to look at the size of the pools.

from a saved file on my laptop:

me...@opensolarisnas:~$ zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
mediapool  3.58T   432G  29.9K  /mediapool

I destroyed the zpool and created another one, this time using "raidz"
instead of "raidz1" in the zpool create command, and it showed 0 used and
5.3T available.

I am happy to have the extra TB of space, but just wanted to make sure
that I had performed the create correctly each time.  When I created a
RAIDZ pool in VMWare Fusion and typed "raidz" instead of "raidz1", I came
up with equal-sized pools, but that was a virtual machine and only 2GB
disks were used.

David


[zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools.  The 
sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively.  The man page 
for RAIDZ states that "The raidz vdev type is an alias for raidz1."  So why was 
there a difference between the sizes for RAIDZ and RAIDZ1?  Shouldn't the size 
be the same for "zpool create raidz ..." and "zpool create raidz1 ..." if I am 
using the exact same drives?
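For reference, the arithmetic that reconciles the figures, assuming (as
the follow-ups above conclude) that the 5.3TB number was read from "zpool
list" (raw space) and the 4.0TB number from "zfs list" (usable space):

    # 1.5 TB drive = 1.5e12 bytes ~ 1.36 TiB (the tools report binary units)
    # zpool list, raw with parity:        4 x 1.36 TiB ~ 5.45 TiB
    # zfs list, usable after one parity:  3 x 1.36 TiB ~ 4.09 TiB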

David


Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
> You can try reading from each raw device, and looking for a blinky-light
> to identify which one is active.  If you don't have individual lights,
> you may be able to hear which one is active.  The "dd" command should do.

  I received an email from another member (Ross) recommending the same solution 
and I tested this out on my VMWare machine.  I'll give it a try once I am home 
on the hardware machine.
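For the archive, a sketch of the dd invocation in question; the p0 device
(whole disk on x86) is an assumption, and the target should be cycled
through each disk in turn:

    # Read ~1 GB from one raw disk so its activity LED blinks;
    # repeat for c8t0d0p0 through c8t3d0p0 to map lights to names.
    dd if=/dev/rdsk/c8t3d0p0 of=/dev/null bs=1024k count=1000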

When the drive went offline, did it reduce the size of the RAIDZ
filesystem?  The amount of space used and free only adds up to ~2.9TB, and
not the 4TB that it should be.

Once again, thanks,

David


Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
Before I try the options you outlined, I do have a question.  I went into
VMWare Fusion and removed one of the drives from the virtual machine that
was used to create a RAIDZ pool (there were five drives: one for the OS
and four for the RAIDZ).  Instead of receiving the "removed" status that I
am getting with the "real" system, I receive "UNAVAIL 0 0 0 cannot open".
So, do I really need to remove and RMA the drive, or is it just not being
recognized by OpenSolaris, and can I do something nondestructive to find
and repair the RAIDZ?

I am so not looking forward to moving the 2.4TB of data around again.
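A nondestructive check, sketched on the assumption that the disk hangs off
the SATA framework: cfgadm shows whether the OS sees a device at the
attachment point at all, and a raw read shows whether the disk answers,
without touching the pool.

    # Does the SATA framework see all four disks?
    cfgadm -al

    # Does the suspect disk answer reads? (read-only; pool untouched)
    dd if=/dev/rdsk/c8t3d0p0 of=/dev/null bs=1024k count=100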

David


Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
How do I identify which drive it is?  I hear each drive spinning (I listened to 
them individually) so I can't simply select the one that is not spinning.

 David


[zfs-discuss] [ZFS-discuss] RAIDZ drive "removed" status

2009-09-29 Thread David Stewart
Having casually used IRIX in the past, and BeOS, Windows, and MacOS as
primary OSes, last week I set up a RAIDZ NAS with four Western Digital
1.5TB drives and copied over data from my WinXP box.  All of the hardware
was fresh out of the box, so I did not expect any hardware problems, but
when I ran zpool status after a few days of uptime and copying 2.4TB of
data to the system, I received the following:

da...@opensolarisnas:~$ zpool status mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mediapool   DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  FAULTED      0     0     0  too many errors

errors: No known data errors
da...@opensolarisnas:~$

I read the Solaris documentation and it seemed to indicate that I needed to run 
zpool clear.

da...@opensolarisnas:~$ zpool clear mediapool

And then the fun began.  The system froze and rebooted, and I was stuck in
a constant reboot cycle: it would get to GRUB, I would select
“opensolaris-2”, the boot process would start, and the machine would
crash.  Removing the SATA card that the RAIDZ disks were attached to
resulted in a successful boot.  I reinserted the card, went through a few
unsuccessful reboots, and magically it booted all the way for me to log
in.  I then received the following:

me...@opensolarisnas:~$ zpool status -v mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub in progress for 0h2m, 0.29% done, 16h12m to go
config:

        NAME        STATE     READ WRITE CKSUM
        mediapool   DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  UNAVAIL      7     0     0  experienced I/O failures

errors: No known data errors
me...@opensolarisnas:~$

I shut the machine down, unplugged the power supply, removed the SATA card
and reinserted it, removed each of the SATA cables individually and
reinserted them, and removed each of the SATA power cables and reinserted
them.  Rebooted:

da...@opensolarisnas:~# zpool status -x mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h20m, 2.68% done, 12h29m to go
config:

        NAME        STATE     READ WRITE CKSUM
        mediapool   DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c8t3d0  REMOVED      0     0     0

errors: No known data errors
da...@opensolarisnas:~#


The resilvering completed, everything seemed fine, and I shut the machine
down; when I rebooted later I went through the same boot-and-crash cycle
that never got me to the login screen, until it finally did for unknown
reasons.  The machine is currently resilvering, with the zpool status the
same as above.  What happened, why did it happen, and how can I stop it
from happening again?  Does OpenSolaris believe that c8t3d0 is not
connected to the SATA card?  The SATA card's BIOS sees all four drives.

What is the best way for me to figure out which drive is c8t3d0?  Some
operating systems will tell you which drive is which by reporting the
serial number of the drive.  Does OpenSolaris do this?  If so, how?

I looked through all of the Solaris/OpenSolaris documentation re: ZFS and
RAIDZ for a mention of a “removed” status for a drive in a RAIDZ
configuration, but could not find it mentioned outside of mirrors.  Page
231 of the OpenSolaris Bible mentions reattaching a drive in the “removed”
status from a mirror.  Does this mean physically reattaching the drive
(unplugging it and plugging it back in) or does it mean reattaching it in
software?  If I run “zpool offline -t mediapool c8t3d0”, reboot, then run
“zpool replace mediapool c8t3d0”, then “zpool online mediapool c8t3d0”,
will this solve all my issues?
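On the serial-number question, a hedged pointer: Solaris' iostat has a
per-device inventory/error report that typically includes a "Serial No:"
field, which can be matched against the label printed on the physical
drive.

    # Per-device inventory: vendor, product, and serial number.
    # Look for the entry corresponding to c8t3d0.
    iostat -En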

There is another issue, and I don’t know if it is related or not.  If it
isn’t related, I will start another thread.