Re: [zfs-discuss] recover raidz from fried server ??

2011-07-19 Thread Brett
root@san:~# zdb -l /dev/dsk/c7t6d0s0
cannot open '/dev/rdsk/c7t6d0s0': I/O error
root@san:~# zdb -l /dev/dsk/c7t6d0p1

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3
root@san:~#


Re: [zfs-discuss] recover raidz from fried server ??

2011-07-19 Thread Brett
OK, I went with the Windows and VirtualBox solution. I could see all 5 of my raid-z
disks in Windows. I encapsulated them as entire disks in vmdk files and
subsequently took them offline in Windows.
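
For reference, the raw-disk vmdks were created with something along these lines (the
path and drive number here are illustrative, not copied from my session; repeat for
PhysicalDrive2 through PhysicalDrive5):

VBoxManage internalcommands createrawvmdk -filename C:\vbox\raidz-disk1.vmdk -rawdisk \\.\PhysicalDrive1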

I then installed a sol11exp VirtualBox instance, attached the 5 virtualized disks, and
can see them in sol11exp (they are disks #1 to #5).

root@san:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c7t0d0 
  /pci@0,0/pci8086,2829@d/disk@0,0
   1. c7t2d0 
  /pci@0,0/pci8086,2829@d/disk@2,0
   2. c7t3d0 
  /pci@0,0/pci8086,2829@d/disk@3,0
   3. c7t4d0 
  /pci@0,0/pci8086,2829@d/disk@4,0
   4. c7t5d0 
  /pci@0,0/pci8086,2829@d/disk@5,0
   5. c7t6d0 
  /pci@0,0/pci8086,2829@d/disk@6,0
Specify disk (enter its number):

Great, I thought, all I need to do is import my raid-z.
root@san:~# zpool import
root@san:~# 
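
One thing I have not tried yet is pointing the import at an explicit device directory,
or asking it to list pools that look destroyed, along these lines (not output from my
box, just what I plan to try next):

zpool import -d /dev/dsk
zpool import -d /dev/dsk -D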

Damn, that would have been just too easy, I guess. Help!!!

How do I recover my data? I know it's still hiding on those disks. Where do I go
from here?

Thanks Rep


[zfs-discuss] recover raidz from fried server ??

2011-07-12 Thread Brett
Hi Folks,

Situation: an x86-based Solaris 11 Express server with 2 pools (rpool and data)
got fried. I need to recover the raidz pool "data", which consists of 5 x 1TB
SATA drives. I have individually checked the disks with the Seagate diagnostic tool
and they are all physically OK.

Issue: a new Sandy Bridge based x86 machine was purchased and I attempted to rebuild
with Solaris 11 Express, but the onboard SATA controllers are not recognised by
the OS (no disks found). Assumption: sol11exp does not yet have drivers for
the SATA controllers on the new motherboard.

I need a solution that lets me build a functional NAS on the new hardware and
allows me to reconstitute and read the raidz zpool "data".

Any thoughts?

My latest thoughts are:
1) Try FreeBSD as an alternative OS, hoping it has more recently updated
drivers that support the SATA controllers. According to the ZFS wiki, FreeBSD 8.2
supports zpool version 28. My concern is that when I updated the old (fried)
server to sol11exp it upgraded the zpool to version 31, so FreeBSD 8.2 may still
not be able to read the pool on the raidz1 (see the version check sketched after
these two options).

2) Install Windows to get full hardware support (the drivers that came with the
motherboard are Windows only) and run sol11exp in a VirtualBox environment
with full access to the raidz disks. Not sure if this is possible, but
maybe worth a try.
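
For reference, once the disks are visible to some OS, the on-disk pool version should
be readable straight off a member disk's label; something like this (the device name
is just a placeholder, not one of my actual disks) is what I plan to check:

zdb -l /dev/dsk/c1t0d0s0 | grep version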

Any help / suggestions to recover my data would be appreciated.

Regards Rep


[zfs-discuss] zfs acl issue - transmission not moving files on completion of torrent

2011-01-25 Thread Brett
Hi Folks,

I used to run transmission for torrents on OpenSolaris snv_134. My working ZFS dataset
was newsan/tmp, and when a torrent completed I had transmission configured to
move the files to newsan/incoming.

I upgraded to sol11exp and the related ZFS updates, and now the files no longer
move to newsan/incoming on completion.

I believe this to be a ZFS ACL issue and could use some assistance in
diagnosing it.

My ideal situation would be to set permissions on /newsan/tmp and
/newsan/incoming with inheritance turned on, so those permissions are passed down to
all files and subdirectories below them.
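
What I think I need is something along these lines (the group name "media" and the
exact permission set are just what I have in mind, not what is currently configured):

zfs set aclinherit=passthrough newsan/incoming
# "media" is a placeholder for whatever group the transmission user runs as
chmod A+group:media:read_data/write_data/add_file/add_subdirectory/execute:file_inherit/dir_inherit:allow /newsan/incoming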

Is there a way to get the move error to be logged somewhere like syslog?

bash-4.0$ /bin/ls -ladV /newsan/tmp
drwxr-x---+ 42 memedia 50 Jan 25 17:26 /newsan/tmp
 owner@:rwxpdDaARWcCos:fd-:allow
 group@:r-x---a-R-c--s:---:allow
  everyone@:--a-R-c--s:---:allow
bash-4.0$ /bin/ls -ladV /newsan/incoming
drwxr-xr-x+ 11 memedia326 Jan 25 17:37 /newsan/incoming
 owner@:rwxpdDaARWcCos:fd-:allow
 group@:r-x---a-R-c--s:---:allow
  everyone@:r-x---a-R-c---:fd-:allow
bash-4.0$ 

thanks in advance 
Rep


[zfs-discuss] benefits of zfs root over ufs root

2010-03-31 Thread Brett
Hi Folks,

I'm in a shop that's very resistant to change. The management here are looking
for major justification for a move away from UFS to ZFS for root file systems.
Does anyone know of any whitepapers/blogs/discussions extolling the
benefits of ZFS root over UFS root?

Regards in advance
Rep


Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-13 Thread Brett
Well, after a couple of weeks of beating my head against this, I finally got my data
back, so I thought I would post the process that recovered it.

I ran the Samsung ESTool utility, ran auto-scan, and for each disk that was showing
the wrong physical size I:
- chose "set max address"
- chose "recover native size"

After that, when I booted back into Solaris, format showed the disks at the
correct size again and I was able to zpool import:

AVAILABLE DISK SELECTIONS:
   0. c3d0 
  /p...@0,0/pci8086,2...@1c,4/pci-...@0/i...@0/c...@0,0
   1. c3d1 
  /p...@0,0/pci8086,2...@1c,4/pci-...@0/i...@0/c...@1,0
   2. c4d1 
  /p...@0,0/pci-...@1f,2/i...@0/c...@1,0
   3. c5d0 
  /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
   4. c5d1 
  /p...@0,0/pci-...@1f,2/i...@1/c...@1,0
   5. c6d0 
  /p...@0,0/pci-...@1f,5/i...@0/c...@0,0
   6. c7d0 
  /p...@0,0/pci-...@1f,5/i...@1/c...@0,0
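
From there the remaining steps were just the standard sequence (reconstructed from
memory rather than copied from the terminal, so treat as approximate):

zpool import            # newsan showed up as importable again
zpool import newsan
zpool status newsan

and a zpool scrub newsan afterwards seems like a sensible sanity check.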

I will just say, though, that there is something in ZFS that caused this in the
first place: when I first replaced the faulty SATA controller, only 1 of the
4 disks showed the incorrect size in format, but as I messed around trying
to zpool export/import I eventually wound up in the state where all 4 disks
showed the wrong size.

Anyhow, I'm happy I got it all back working again, and I hope this solution
assists others.

Regards Rep


Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-07 Thread Brett
Here is the requested output of raidz_open2.d upon running a zpool status:

[EMAIL PROTECTED]:/export/home/brett# ./raidz_open2.d
run 'zpool import' to generate trace

60027449049959 BEGIN RAIDZ OPEN
60027449049959 config asize = 4000755744768
60027449049959 config ashift = 9
60027507681841 child[3]: asize = 1000193768960, ashift = 9
60027508294854 asize = 4000755744768
60027508294854 ashift = 9
60027508294854 END RAIDZ OPEN
60027472787344 child[0]: asize = 1000193768960, ashift = 9
60027498558501 child[1]: asize = 1000193768960, ashift = 9
60027505063285 child[2]: asize = 1000193768960, ashift = 9

I hope that helps; it means little to me.

One thought I had was that maybe I somehow messed up the cables and the devices are
not in their original sequence. Would this make any difference? I have seen
examples for raid-z suggesting that the import of a raid-z should figure out
the devices regardless of their order or of new device numbers, so I
was hoping it didn't matter.
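
As I understand it (and I may be wrong), each member's label records the pool and
device GUIDs along with the last known device path, which is why cabling order should
not matter; a rough way to eyeball this per member would be something like:

zdb -l /dev/dsk/c9d0s0 | egrep 'guid|path'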

Thanks Rep


[zfs-discuss] help please - The pool metadata is corrupted

2008-12-04 Thread Brett
As a result of a power spike during a thunderstorm I lost a SATA controller
card. This card supported my ZFS pool called newsan, which is a 4 x Samsung 1TB
SATA2 disk raid-z. I replaced the card and the devices have the same
controller/disk numbers, but I now have the following issue.

-bash-3.2$ zpool status
  pool: newsan
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
newsan      FAULTED       1     0     0  corrupted data
  raidz1    ONLINE        6     0     0
    c10d1   ONLINE       17     0     0
    c10d0   ONLINE       17     0     0
    c9d1    ONLINE       24     0     0
    c9d0    ONLINE       24     0     0

Something majorly weird is going on, as when I run format I see this:
-bash-3.2$ pfexec format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c3d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],4/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c3d1 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED],4/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c9d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c9d1 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c10d0 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   5. c10d1 
  /[EMAIL PROTECTED],0/pci8086,[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0

??? 31.50 MB ??? They all used to show as 1TB I believe (or 931GB or whatever)

Specify disk (enter its number): 2
selecting c9d0
NO Alt slice
No defect list found
[disk formatted, no defect list found]
/dev/dsk/c9d0s0 is part of active ZFS pool newsan. Please see zpool(1M).
format> p
partition> p
Current partition table (original):
Total disk sectors available: 1953503710 + 16384 (reserved sectors)

Part         Tag    Flag    First Sector        Size    Last Sector
  0          usr     wm              256    931.50GB     1953503710
  1   unassigned     wm                0           0              0
  2   unassigned     wm                0           0              0
  3   unassigned     wm                0           0              0
  4   unassigned     wm                0           0              0
  5   unassigned     wm                0           0              0
  6   unassigned     wm                0           0              0
  8     reserved     wm       1953503711      8.00MB     1953520094

So the partition table looks correct. I don't believe all 4 disks died
concurrently.

Any thoughts on how to recover? I don't particularly want to restore the couple
of terabytes of data if I don't have to.
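
One check I am considering (commands sketched here, not yet run) is comparing the ZFS
labels across the four members to see whether they still agree on the pool configuration:

for d in c9d0 c9d1 c10d0 c10d1; do
    echo "== $d =="
    zdb -l /dev/dsk/${d}s0 | egrep 'name|guid|txg'
done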

analyze> read
Ready to analyze (won't harm SunOS). This takes a long time, 
but is interruptable with CTRL-C. Continue? y
Current Defect List must be initialized to do automatic repair.

Oh, and what's this defect list thing? I haven't seen that before.

defect> print
No working list defined.
defect> create
Controller does not support creating manufacturer's defect list.
defect> extract
Ready to extract working list. This cannot be interrupted
and may take a long while. Continue? y
NO Alt slice
NO Alt slice
Extracting defect list...No defect list found
Extraction failed.
defect> commit
Ready to update Current Defect List, continue? y
Current Defect List updated, total of 0 defects.
Disk must be reformatted for changes to take effect.
analyze> read
Ready to analyze (won't harm SunOS). This takes a long time, 
but is interruptable with CTRL-C. Continue? y

pass 0
   64386  

pass 1
   64386  

Total of 0 defective blocks repaired.

So the read test seemed to work fine.

Any suggestions on how to proceed? Thoughts on why the disks are showing up so
weirdly in format? Any way to recover/rebuild the zpool metadata?

Any help would be appreciated

Regards Rep


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Brett Monroe
> Yup, and the Supermicro card uses the "Marvell
> Hercules-2 88SX6081 (Rev. C0) SATA Host Controller",
> which is part of the series supported by the same
> driver:
>  http://docs.sun.com/app/docs/doc/816-5177/marvell88sx
> 7d?a=view.  I've seen the Supermicro card mentioned
> in connection with the Thumpers many times on the
> forums.

Ahh, I was unfamiliar with Supermicro's products...I'll shut up now. :)

--Brett
 
 


Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Brett Monroe
Ross,

The X4500 uses 6x Marvell 88SX SATA controllers for its internal disks.  They 
are not Supermicro controllers.  The new X4540 uses an LSI chipset instead of 
the Marvell chipset.

--Brett
 
 


[zfs-discuss] how do i set a zfs mountpoint for use NEXT mount??

2007-12-11 Thread Brett
Folks,

Not sure if any of this is possible, but I thought I would ask. This is all part
of simplifying my two Indiana zfsboot environments.

I am wondering if there is a way to set the mountpoint of a ZFS filesystem and not
have it immediately actioned. I want this so I can set the mountpoint of my alternate
ZFS boot environment (zpl_slim/root2) to "/" (even though I currently have an active
root fs of zpl_slim/root) and then have the grub bootfs parameter control booting from
one root or the other.

Are the ZFS mountpoints stored in zpool.cache, on disk, or both? I was
wondering if I can tweak zpool.cache or use zdb to achieve this.

Essentially this is so the ZFS filesystems under zpl_slim/root get mounted
correctly through inheritance. Currently I achieve booting from the alternate root by
having the roots set as legacy and referenced in the vfstab. But what this
means is that the underlying ZFS filesystems (root2/opt, root2/usr, etc.) don't get
mounted, as they inherit legacy mode.
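
For what it is worth, my current understanding (which may be wrong, hence the question)
is that mountpoint is a per-dataset property stored in the pool itself, while
zpool.cache only records which pools to open at boot; the current values can be
listed with something like:

zfs get -r -o name,value,source mountpoint zpl_slim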

Any assistance would be appreciated.

Rep
 
 


[zfs-discuss] Mirrored RAID-z2

2007-05-29 Thread Brett
Hi All,

I've been reading through the documentation for ZFS and have noted in several 
blogs that ZFS should support more advanced layouts like RAID1+0, RAID5+0, etc. 
I am having a little trouble getting these more advanced configurations to play 
nicely.

I have two disk shelves, each with 9x 300GB SCSI drives attached to a Dell 
PowerEdge 1850 with dual XEON CPUs and 4GB RAM running the 64-bit Solaris OS.

Ideally, I would like to have a RAID-Z2 on each disk shelf and a mirror
between the two disk shelves, so that my pool would remain available even if I
lost an entire shelf.

So far, I've been able to configure a single pool with two RAID-Z volumes (one
per shelf) in a stripe (the command for that is sketched below, after the mirror
example), though this doesn't help me if I lose one of the arrays. I've also been
able to configure 4x 4-disk mirrors.

e.g.

zpool create zfsdata mirror c0t0d0 c0t1d0 c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0 
c0t8d0 c0t9d0 mirror c1t0d0 c1t1d0 c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 c1t8d0 
c1t9d0

That gives me two mirrors per disk shelf (4 in total) but only 1.1TB of usable 
disk capacity. 
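
For reference, the striped RAID-Z attempt was along these lines (typed from memory,
so the exact device list may be slightly off):

zpool create zfsdata \
  raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t8d0 c0t9d0 \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t8d0 c1t9d0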

Is there any way to get the configuration I want, i.e. two raid-z2 volumes in a 
mirrored configuration?

If anyone out there has some suggestions for a better configuration, please let 
me know :-). I'd like to be able to lose two disks per shelf before losing the 
shelf (if possible) but still be able to recover from a total array failure.

Thanks in advance.

Sincerely,

Brett.
 
 