Re: [zfs-discuss] separate home partition?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 4:10 AM, noz sf2...@gmail.com wrote:


 Here's my solution:
 (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0

 n...@holodeck:~# zfs list
 NAME                     USED  AVAIL  REFER  MOUNTPOINT
 epool                     69K  15.6G    18K  /epool
 rpool                   3.68G  11.9G    72K  /rpool
 rpool/ROOT              2.81G  11.9G    18K  legacy
 rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
 rpool/dump               383M  11.9G   383M  -
 rpool/export             632K  11.9G    19K  /export
 rpool/export/home        612K  11.9G    19K  /export/home
 rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
 rpool/swap               512M  12.4G  21.1M  -
 n...@holodeck:~#

 (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
 (3) n...@holodeck:~# zfs send -R rpool/exp...@now > /tmp/export_now
 (4) n...@holodeck:~# zfs destroy -r -f rpool/export
 (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now

 The above is very dangerous, if it will even work.

The output of the zfs send is redirected to /tmp, which is a ramdisk.  If
you have enough space (RAM + Swap), it will work, but if there is a reboot
or crash before the zfs receive completes then everything is gone.

Instead, do the following:
(2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
(3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
(4) Check that all the data looks OK in epool
(5) n...@holodeck:~# zfs destroy -r -f rpool/export
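(6) If the mountpoint did not come across with the -R stream, re-point the new
copy at the old path - a minimal sketch, assuming the receive created epool/export:
    n...@holodeck:~# zfs set mountpoint=/export epool/export
    n...@holodeck:~# zfs list -r epool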


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home partition?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 9:55 AM, hardware technician figh...@yahoo.com wrote:

 I want to create a separate home, shared, read/write zfs partition on a
 tri-boot OpenSolaris, Ubuntu, and CentOS system.  I have successfully
 created and exported the zpools that I would like to use, in Ubuntu using
 zfs-fuse.  However, I boot into OpenSolaris, and I type zpool import with no
 options.  The only pool I see to import is on the primary partition, and I
 haven't been able to see or import the pool that is on the extended
 partition.  I have tried importing using the name, and ID.

 In OpenSolaris /dev/dsk/c3d0 shows 15 slices, so I think the slices are
 there, but then I type format, select the disk, and the partition option,
 but it doesn't show (zfs) partitions from linux.  In format, the fdisk
 option recognizes the (zfs) linux partitions.  The partition that I was able
 to import is on the first partition, and is named c3d0p1, and is not a
 slice.

 Are there any ideas how I could import the other pool?


I have this situation working and use my shared pool between Linux and
Solaris.  Note:  The shared pool needs to reside on a whole physical disk or
on a primary fdisk partition.  Unless something changed since I last checked,
Solaris' support for logical partitions is... not quite there yet.
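
As a quick way to see what Solaris can even look at (device names are just
examples): a pool on a whole disk or on a primary fdisk partition sits on one
of the p0..p4 nodes and should turn up with a plain scan,

   ls /dev/dsk/c3d0p*
   pfexec zpool import

whereas, as far as I know, logical partitions inside an extended partition do
not get a device node at all, so there is nothing for zpool import to find.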

P.S. I blogged about my setup (Linux + Solaris with a Shared ZFS pool) here
http://initialprogramload.blogspot.com/search?q=zfs-fuse+linux ...  However
this was a long time ago and I don't know whether the statement about Grub
ZFS support in point 3 is still true.

Apparently some bugs pertaining to time stomping between Ubuntu and Solaris
have been fixed, so you may not need to do step 4. An alternative to step 4
is to run this in Solaris: pfexec /usr/sbin/rtc -z UTC

In addition, at point nr 7, use bootadm list-menu to find out where
Solaris has decided to save the grub menu.lst file.





-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-09 Thread Ian Collins
Joel Buckley wrote:

 Search http://store.sun.com for the item that matches your
 needs and run with it.  Sun currently has a promotion on X4150
 Servers...  That will easily be able to serve NFS, SunRay,
 etc... to your home.

   
Do they come with free ear plugs for the family? :)

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I/O error when import

2009-01-09 Thread Steve Goldthorpe
I wonder if your problem is related to mine:
(can't import zpool after upgrade to solaris 10u6 - 
http://www.opensolaris.org/jive/thread.jspa?messageID=324994#324994).

What does zdb -l give you?

zdb -l /dev/dsk/c1d1
zdb -l /dev/dsk/c2d0
zdb -l /dev/dsk/c2d1
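
Depending on how the pools were created, the labels may be on the s0 slices
rather than the whole-disk/pN nodes, so it may also be worth trying e.g.:

zdb -l /dev/dsk/c1d1s0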

-Steve
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-09 Thread The Moog
I wouldn't know which laptops (besides MacBooks) specifically support zfs, 
but I'm sure with a little twiddling around and some general know-how, many a 
system would run the latest version of opensolaris.  Driver support is 
always my biggest worry. 
Sent from my BlackBerry Bold® 
http://www.blackberrybold.com

-Original Message-
From: JZ j...@excelsioritsolutions.com

Date: Thu, 8 Jan 2009 18:52:10 
To: m...@pixelshift.com; zfs-discuss-boun...@opensolaris.org; Scott 
Lairdsc...@sigkill.org
Cc: Orvar Korvarknatte_fnatte_tja...@yahoo.com; 
zfs-discuss@opensolaris.org; Peter Kornpeter.k...@sun.com
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


OMG!
what a critical factor I just didn't think about!!!
stupid me!

Moog, please, which laptops are supporting ZFS today?
I will only buy within those.

z, at home, feeling better, but still a bit confused


- Original Message - 
From: The Moog m...@pixelshift.com
To: JZ j...@excelsioritsolutions.com; 
zfs-discuss-boun...@opensolaris.org; Scott Laird sc...@sigkill.org
Cc: Orvar Korvar knatte_fnatte_tja...@yahoo.com; 
zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
Sent: Thursday, January 08, 2009 6:50 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 Are you planning to run Solaris on your laptop?

 Sent from my BlackBerry Bold®
 http://www.blackberrybold.com

 -Original Message-
 From: JZ j...@excelsioritsolutions.com

 Date: Thu, 8 Jan 2009 18:27:52
 To: Scott Lairdsc...@sigkill.org
 Cc: Orvar Korvarknatte_fnatte_tja...@yahoo.com; 
 zfs-discuss@opensolaris.org; Peter Kornpeter.k...@sun.com
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 Thanks much Scott,
 I still don't know what you are talking about -- my $3000 to $800 laptops
 all never needed to swap any drive.

 But yeah, I got hit on all of them when I was in china, by the china web
 virus that no U.S. software could do anything [then a china open source
 thing did the job]

 So, without the swapping HD concern, what should I do???

 z at home still confused


 - Original Message - 
 From: Scott Laird sc...@sigkill.org
 To: JZ j...@excelsioritsolutions.com
 Cc: Toby Thain t...@telegraphics.com.au; Brandon High
 bh...@freaks.com; zfs-discuss@opensolaris.org; Peter Korn
 peter.k...@sun.com; Orvar Korvar knatte_fnatte_tja...@yahoo.com
 Sent: Thursday, January 08, 2009 6:20 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 You can't trust any hard drive.  That's what backups are for :-).

 Laptop hard drives aren't much worse than desktop drives, and 2.5
 SATA drives are cheap.  As long as they're easy to swap, then a drive
 failure isn't the end of the world.  Order a new drive ($100 or so),
 swap them, and restore from backup.

 I haven't dealt with PC laptops in years, so I can't really compare
 models.


 Scott

 On Thu, Jan 8, 2009 at 2:40 PM, JZ j...@excelsioritsolutions.com wrote:
 Thanks Scott,
 I was really itchy to order one, now I just want to save that open $ for
 Remy+++.

 Then, next question, can I trust any HD for my home laptop? should I go
 get
 a Sony VAIO or a cheap China-made thing would do?
 big price delta...

 z at home

 - Original Message - From: Scott Laird sc...@sigkill.org
 To: JZ j...@excelsioritsolutions.com
 Cc: Toby Thain t...@telegraphics.com.au; Brandon High
 bh...@freaks.com; zfs-discuss@opensolaris.org; Peter Korn
 peter.k...@sun.com; Orvar Korvar knatte_fnatte_tja...@yahoo.com
 Sent: Thursday, January 08, 2009 5:36 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


 Today?  Low-power SSDs are probably less reliable than low-power hard
 drives, although they're too new to really know for certain.  Given
 the number of problems that vendors have had getting acceptable write
 speeds, I'd be really amazed if they've done any real work on
 long-term reliability yet.  Going forward, SSDs will almost certainly
 be more reliable, as long as you have something SMART-ish watching the
 number of worn-out SSD cells and recommending preemptive replacement
 of worn-out drives every few years.  That should be a slow,
 predictable process, unlike most HD failures.


 Scott

 On Thu, Jan 8, 2009 at 2:30 PM, JZ j...@excelsioritsolutions.com wrote:

 I was think about Apple's new SSD drive option on laptops...

 is that safer than Apple's HD or less safe? [maybe Orvar can help me 
 on
 this]

 the price is a bit hefty for me to just order for experiment...
 Thanks!
 z at home


 - Original Message - From: Toby Thain
 t...@telegraphics.com.au
 To: JZ j...@excelsioritsolutions.com
 Cc: Scott Laird sc...@sigkill.org; Brandon High
 bh...@freaks.com;
 zfs-discuss@opensolaris.org; Peter Korn peter.k...@sun.com
 Sent: Thursday, January 08, 2009 5:25 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?



 On 7-Jan-09, at 9:43 PM, JZ wrote:

 ok, Scott, that sounded sincere. I am not going to do the pic thing
 on
 you.

 But do I have to spell this out 

[zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Pål Baltzersen
Why do they throw these fancy RAID controllers at us when we have plenty of CPU 
power to do zfs mirror and even raidz1 and raidz2?

I have 12 SATA disks and would like to prepare to add 12 new internal SATA 
disks to my home server.
The cabinet (Lian Li Modular Cube 
http://www.microplex.no/aspx/produkt/prdinfovnet.aspx?plid=33415#) takes 24 
3.5" drives (or an insane 72 2.5") in front with suitable HDD frames/backplanes.

I have used two Supermicro AOC-SAT2-MV8 PCI-X true 8-port JBOD SATA HBAs and I'm 
quite happy with them; they are cheap and work in plain PCI slots (limited by the 
PCI speed of course, but for my 12-disk home NAS it's good enough).
The only problem is that no mainstream motherboards come with plenty of PCI-X 
slots. My MB has none and PCI/PCI-X is not future proof. So adding a third 
AOC-SAT2-MV8 seems awkward.

So I'm looking for a PCIe HBA. The Adaptec 31605 or 52445 seem tempting as I 
could reuse the old AOC-SAT2-MV8 elsewhere. Or I could add an 8-port Adaptec 
3805 or an LSI SAS3081E-R.

What scares me off (aside from the price of these) is that I've bumped into both 
Sun OEM Adaptec and HP OEM MegaRAID controllers at work and none of them would do 
true JBOD; no disks showed up in format. I had to configure 1-disk volumes in the 
BIOS to simulate JBOD, and from what I understand this writes the config to disk, 
destroying any existing data and partitioning, and what shows up in format is 
*not* the disks (i.e. SEAGATE/IBM/whatever) but the logical volumes (i.e. 
Sun-STK RAID EXT or HP LOGICAL VOLUME).

On the other hand the Sun X4xxx series uses, AFAIK, some LSI chipset for the 
boot disks and they show the physical disks unless you deliberately configure a 
RAID0.

Also Adaptec claims the 3- and 5-series can do JBOD, as does LSI for the Advanced 
Connectivity Line, though 
http://www.lsi.com/DistributionSystem/AssetDocument/LSI_HBA_v_Adaptec_WP_072108.pdf
was more confusing than clarifying to me on this point.

My almost absolute requirement is that I *must* be able to move disks with data 
from one controller to another of a different brand (and back!), doing only a 
zpool export and import, which implies the HBA must be able to run in JBOD mode 
without storing or modifying anything on the disks.

So could anyone confirm whether any of these adapters can operate in true 
JBOD mode by my definition of JBOD, i.e. Just a Bunch Of Disks and not a bunch 
of volumes that merely looks like or simulates JBOD?

I would like to have an HBA that is just (like the AOC-SAT2-MV8):
+ Plug the pieces together
+ power on
+ devfsadm
+ zpool create

(Though my brain-dead MSI BIOS thinks any added or replaced disk would be a 
better boot-candidate than the existing one :/ but I think that's a MB problem 
only..)

Pål
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Erik Trimble
I'm pretty darned sure that the LSI 1068-based HBAs will do true 
JBOD.  Supermicro makes two such beasts: the AOC-USAS-L8i and the 
AOC-USASLP-L8i.  Both are 8-port PCI-Express x4 cards.

The low-cost LSI SATA HBAs should also give you what you want, e.g. the 
LSISAS3081E-R.  These use the same family of chips that are on the 
X4xxx-series machines from us (Sun).


That said, there are still a number of available PCI-X motherboards out 
there, particularly if you are looking at a socket-940 Opteron.  There's 
very little PCI-X support for single-socket systems of any sort (I think 
I've seen  2-slot PCI-X boards for both socket-940 and LGA775, but 
that's about it).

Supermicro has a nice matrix which shows their various motherboards and 
the slots available:

Intel-based:
http://www.supermicro.com/products/motherboard/matrix/index.cfm
AMD-based:  http://www.supermicro.com/Aplus/motherboard/matrix/


Tyan's complete motherboard matrix is here:
http://www.tyan.com/tech/product_matrix.aspx


Additionally, I'd look at Asus's Workstation and Server motherboard 
selection - they have some interestingly weird configurations.

http://usa.asus.com/products.aspx?l1=9&l2=39
and
http://usa.asus.com/products.aspx?l1=3&l2=82





Pål Baltzersen wrote:
 Why do they throw these fancy RAID-controllers at us when we have plenty CPU 
 force to do zfs mirror and even raidz1 and raidz2?

 I have 12 SATA disks and would like to prepare to add 12 new internal SATA 
 disks to my home server.
 The cabinet (Lian Li Modular Cube 
 http://www.microplex.no/aspx/produkt/prdinfovnet.aspx?plid=33415#) takes 24 
 3.5 or (insane 72 2.5) in front with suitable HDD frames/backplanes.

 I have used two Supermicro AOC-SAT2-MV8 PCI-X true 8-port JBOD SATA HBA and 
 I'm quite happy with them; They are cheap and works in plain PCI (limited by 
 the PCI-speed aof course, but for my 12-disk home NAS it's good enough)
 The only problem is that no mainstream motherboards come with plenty PCI-X 
 slots. My MB has none and PCI/PCI-X is not future proof. So adding a third 
 AOC-SAT2-MV8 seems awkward

 So I'm looking for a PCIe HBA. The Adaptec 31605  or 52445  seems tempting as 
 I could reuse the old AOC-SAT2-MV8 elsewhere. Or I could add an 8-port 
 Adaptec 3805  or an LSI SAS3081E-R.

 What scares me off (aside the price for these) is that I've bumped into both 
 Sun OEM Adaptec and HP OEM Megaraid at work and none of them would do true 
 JBOD; No disks showed up in format. I had to configure 1-disk volumes in BIOS 
 to simulate JBOD, and from what I understand this writes this config to disk 
 destroying any existing data and partitioning, and what shoes up in format is 
 *not* the disks (i.e. SEAGATE/IBM/whatever) but the logical volumes (i.e. 
 Sun-STK RAID EXT or HP-LOGICAL VOLUME)

 On the other hand the Sun X4xxx series uses, AFAIK, some LSI chipset for the 
 boot disks and they show the physical disks unless you deliberately configure 
 a RAID0.

 Also Adaptec claims the 3- and 5-series can do JBOD, as do LSI for the 
 Advanced Connectivity Line, though 
 http://www.lsi.com/DistributionSystem/AssetDocument/LSI_HBA_v_Adaptec_WP_072108.pdf
  was more confusing than clarifying on to me on this.

 My almost absolute requirement is that I *must* be able to move disks with 
 data from one controller to another of different brands (and back!), only 
 doing zpool export and import, which implies the HBA must be able to run in 
 JBOD-mode without storing or modify anything on the disks.

 So could anyone confirm whether any of these adapters can operate in true 
 JBOD-mode by my definition of JBOD; i.e. Just a Bunch Of Disks and not a 
 Bunch of Volumes that looks like or simulates JBOD!?

 I would like to have a HBA that is just (like the AOC-SAT2-MV8):
 + Plug the pieces together
 + power on
 + devfsadm
 + zpool create

 (Though my brain-dead MSI BIOS thinks any added or replaced disk would be a 
 better boot-candidate than the existing one :/ but I think that's a MB 
 problem only..)

 Pål
   


-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs, raidz, spare and jbod

2009-01-09 Thread Diego Remolina
Could you explain if you did any specific configuration on the Areca RAID 
controller other than setting it to RAID and manually marking every disk as 
pass-through so that the disks are viewable from opensolaris?

I have an ARC-1680ix-16. I have tried two configurations: JBOD, and RAID with 
all drives made pass-through as you suggest.

In both instances, I can only see two hard drives from opensolaris. I have 
tried a RHEL 5.2 installation with the Areca controller and I can see all the 
drives using RAID with pass-through. Do I need any boot parameters for 
opensolaris or something else?

The Areca controller assigned the following settings for the drives when 
configured as pass-through

Channel-SCSI_ID-LUN Disk#
0-0-0 01
0-0-1 02
0-0-2 03
0-0-3 04
0-0-4 05
0-0-5 06
0-0-6 07
0-0-7 08
--
0-1-0 09
0-1-1 10
0-1-2 11
0-1-3 12
0-1-4 13
0-1-5 14
0-1-6 15
0-1-7 16

My firmware is 1.45 and I am using the Areca driver that comes with OpenSolaris 
2008.11.
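
(I am assuming the standard way of rescanning and listing devices applies here,
i.e. something like:

devfsadm -Cv
cfgadm -al
format

Please correct me if there is an Areca-specific step I am missing.)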

Any help would be greatly appreciated.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-09 Thread Steve Goldthorpe
After having a think I've come up with the following hypothesis:

1) When I was on Solaris 10u4 things were working fine.
2) When I re-installed with Solaris 10u6 and imported the zpool (with zpool 
import -f), it created a zpool.cache file and didn't update the on disk data 
structures for some reason.
3) When I re-installed Solaris 10u6, I lost the zpool.cache file and now zfs 
looks at the data structures on the disk and they are inconsistent.

Could the above have actually happened?  It would explain what I'm seeing.

-Steve
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Will Murnane
On Fri, Jan 9, 2009 at 09:28, Erik Trimble erik.trim...@sun.com wrote:
 I'm pretty darned sure that the LSI 1068-based HBAs will do true
 JBOD.  Supermicro makes two such beasts:  AOC-USAS-L8i  and
 AOC-USASLP-L8i Both are 8-port PCI-Express x4 cards.
No, both are UIO cards.  They're compatible with PCI express x8 (not
x4) but they're mirrored top-for-bottom.  Take a closer look at the
pictures [1] [2]; these cards are designed to share a slot with a
standard one (think of the PCI+ISA boards that used to exist).  On a
standard PCI device, the PCB is toward the top when mounted in a
standard case, but these cards go in the other way around.  I haven't
heard anything definitive about whether these work in a standard x8
slot, but I wouldn't assume that they work.

 ... I've bumped into both Sun OEM Adaptec and HP OEM Megaraid at work and 
 none of them would do true JBOD; No disks showed up in format. I had to 
 configure 1-disk volumes in BIOS to simulate JBOD, and from what I 
 understand this writes this config to disk destroying any existing data and 
 partitioning, and what shoes up in format is *not* the disks (i.e. 
 SEAGATE/IBM/whatever) but the logical volumes (i.e. Sun-STK RAID EXT or 
 HP-LOGICAL VOLUME)
This is still the case, judging from my x4150s.

 I would like to have a HBA that is just (like the AOC-SAT2-MV8):
 + Plug the pieces together
 + power on
 + devfsadm
 + zpool create
The LSI cards fulfill this requirement.

You might consider a case with a SAS expander in it; you can plug sata
disks into it, and it eliminates the need for a large number of
controller ports.  The Supermicro SC846e1 [3] does this; you can plug
24 disks into one SAS controller.  The power supply fans are fairly
noisy, though, for a home box.  You could also take a look at a
standalone SAS expander; Chenbro sells one called the CK12801 [4] that
would suit the purpose, and might be cheaper than buying additional
controller ports.  An 8-port LSI card with two of those expanders
would run about $750, and support 32 drives.  Compared to an Areca
card of that size, that's a good deal.  I haven't seen anyone running
the Chenbro expanders, but if they perform as specified they're
convenient.

Will

[1]: http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
[2]: http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
[3]: http://www.supermicro.com/products/chassis/4U/846/SC846E1-R900.cfm
[4]: http://www.valleyseek.com/product.action?itemID=83883
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home partition?

2009-01-09 Thread noz
 The above is very dangerous, if it
 will even work. The output of the zfs send is
 redirected to /tmp, which is a ramdisk.  If you
 have enough space (RAM + Swap), it will work, but if
 there is a reboot or crash before the zfs receive
 completes then everything is gone.

 In stead, do the following:
 (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
 (3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
 (4) Check that all the data looks OK in epool
 (5) n...@holodeck:~# zfs destroy -r -f rpool/export

Thanks for the tip.  Is there an easy way to do your revised step 4?  Can I use 
a diff or something similar?  e.g.  diff rpool/export epool/export
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Carson Gaspar
On 1/9/2009 8:21 AM, Will Murnane wrote:

 You might consider a case with a SAS expander in it; you can plug sata

Except Solaris still lacks support for port expanders, or did last I 
checked. Has this changed?

-- 
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-09 Thread David Dyer-Bennet

On Thu, January 8, 2009 15:35, Tim wrote:
 On Wed, Jan 7, 2009 at 8:43 PM, JZ j...@excelsioritsolutions.com wrote:

 Can we focus on commercial usage?
 please!

 I dunno about you, but I need somewhere to store that music so I can
 stream
 it throughout the house while I'm drinking that wine ;)  A single disk
 windows box isn't really my cup-o-tea.  Plus, I'm a geek, my vmware farm
 needs it's nfs mounts on some solid, high performing gear.

While my music has ended up there, it's my digital photos that actually
pushed me into an NAS-type environment.  I wanted something better than
single-disk reliability plus backups, plus I've found my backups happen
better on the Solaris-based NAS than they did under windows (I never found
an adequate Windows backup product, whereas rsync to external USB drives
works perfectly, with the added benefit that my backup isn't locked up in
a proprietary format).

The enterprise features figure prominently, especially snapshots.  And
it's a BIG DEAL for me to know that a scrub has verified data even if I
haven't accessed it lately; old photos in my collection are still
important to me.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-09 Thread Miles Nordin
 re == Richard Elling richard.ell...@sun.com writes:

re Flash has been around for well over 25 years

there is NOR flash and NAND flash, though.  I think NOR is 25 years
old, and MLC and SLC FLASH are both NAND right?  NOR and NAND have
completely different behavior and implementation, and even within NAND
the number of tolerated write cycles varies wildly MLC vs SLC and
vendor vs vendor.

Also wasn't someone saying the cheapo USB sticks do wear leveling in
16MB chunks, so if one of the chunks is hotter than others you might
blow it sooner than you expect based on device-wide write cycles *
size / bandwidth?  software people would assume the wear leveling
chunk size is the entire device, otherwise what does ``level'' mean,
but apparently the electrical engineer monkeys have a different idea.
The quality or chunk-size of wear leveling could vary from one device
to another.

I think hard disks are a little different in their failure behavior
after increasing 100x in capacity, too, though.

re Trivia: Sun has been shipping flash memory for nearly its
re entire history.

are you talking about the firmware?  because that's NOR FLASH which is
completely different.

I'm not saying don't use it, but this sounds too much like Apple
telling us 400 megaBIT/s firewire is faster than 80 megaBYTE/s
parallel-SCSI.

re It occurs to me that you might be too young to remember that
re format(1m) was the tool used to do media analysis and map bad
re sectors before those smarts were moved onto the disk ? ;-)

Yeah, I'm old enough to remember.

The smarts stayed redundantly in format long after they were moved into
the disk.

I thought one of those netapp .pdf's said they deliberately tell some
of their SCSI/FC disks to stop doing reallocation and pass bad block
errors up the stack.  But aside from that, all these SCSI disks do it,
even the 5.25" ones.  I'm old enough to remember that every SCSI Sun
system I've used, including even VME-based systems and Sun3/60's, used
SCSI disks which would do their own bad block remapping.

I haven't used SMD disks.  I used ST506 and ESDI disks in peecees, and
with those you got a sheet of dot-matrix printout taped to the top of
the drive by the manufacturer.  The factory tests for bad sectors with
special controller boards that you don't have, to find marginal
sectors you will miss if you do your own ``low-level format''
scan---although the disk layer does have to do remapping, and although
you do scan for bad sectors during low-level format, with ST506 and
ESDI disks you will not find any bad sectors not marked on the
printout unless your disk is failing, and you must use the printout to
avoid marginal sectors.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Brandon High
On Fri, Jan 9, 2009 at 9:45 AM, Carson Gaspar car...@taltos.org wrote:
 On 1/9/2009 8:21 AM, Will Murnane wrote:

 You might consider a case with a SAS expander in it; you can plug sata

 Except Solaris still lacks support for port expanders, or did last I
 checked. Has this changed?

A SAS expander is different from a SATA port multiplier (PMP). I'm
not sure if the SAS expander is supported, but it might be.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-09 Thread Richard Morris - Sun Microsystems - Burlington United States
On 01/09/09 01:44, Ross wrote:
 Can I ask why we need to use -c or -d at all?  We already have -r to 
 recursively list children, can't we add an optional depth parameter to that?

 You then have:
 zfs list : shows current level (essentially -r 0)
 zfs list -r : shows all levels (infinite recursion)
 zfs list -r 2 : shows 2 levels of children

An optional depth argument to -r has already been suggested:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-January/054241.html

However, other zfs subcommands such as destroy, get, rename, and snapshot
also provide -r options without optional depth arguments.  And it's probably
good to keep the zfs subcommand option syntax consistent.  On the other hand,
if all of the zfs subcommands were modified to accept an optional depth argument
to -r, then this would not be an issue.  But, for example, the top level(s) of
datasets cannot be destroyed if that would leave orphaned datasets.

BTW, when no dataset is specified, zfs list is the same as zfs list -r (infinite
recursion).  When a dataset is specified then it shows only the current level.

Does anyone have any non-theoretical situations where a depth option 
other than
1 or 2 would be used?  Are scripts being used to work around this problem?
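
(The sort of workaround script I have in mind filters on the number of '/'
separators in the dataset name - a rough sketch, with tank as a placeholder:

  zfs list -rH -o name tank | awk -F/ 'NF <= 2'

which limits the name listing to the pool and its first-level children.)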

-- Rich








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-09 Thread Jerry K
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.

In reviewing the change logs (URLs below) I did not see anything
mentioning that this had come to pass.  It's going to be another week
before I have a chance to play with b105.

Does anyone know specifically if b105 has ZFS encryption?

Thanks,

Jerry


 Original Message 
Subject: [osol-announce] SXCE Build 105 available
Date: Fri, 09 Jan 2009 08:58:40 -0800


Please find the links to SXCE Build 105 at:

 http://www.opensolaris.org/os/downloads/

This is still a DVD only release.

-
wget work around:

  http://wikis.sun.com/pages/viewpage.action?pageId=28448383

---
Changelogs:

ON (The kernel, drivers, and core utilities):

  http://dlc.sun.com/osol/on/downloads/b105/on-changelog-b105.html

X Window System:

http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_100/

- Derek
___
opensolaris-announce mailing list
opensolaris-annou...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/opensolaris-announce

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs root, jumpstart and flash archives

2009-01-09 Thread Jerry K
I understand that currently, at least under Solaris 10u6, it is not 
possible to jumpstart a new system with a zfs root using a flash archive 
as a source.

Can anyone comment as to whether this restriction will be lifted in the near 
term, or if it will be a while (6+ months) before this is possible?

Thanks,

Jerry
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-01-09 Thread Lori Alt

This is in the process of being resolved right now.  Stay tuned
for when it will be available.  It might be a patch to Update 6.

In the meantime, you might try this:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs

- Lori


On 01/09/09 12:28, Jerry K wrote:
 I understand that currently, at least under Solaris 10u6, it is not 
 possible to jumpstart a new system with a zfs root using a flash archive 
 as a source.

 Can anyone comment as to whether this restriction will pass in the near 
 term, or if this is a while out (6+ months) before this will be possible?

 Thanks,

 Jerry
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-09 Thread Richard Elling
Jerry K wrote:
 It was rumored that Nevada build 105 would have ZFS encrypted file
 systems integrated into the main source.
   

No ZFS crypto, but it has lofi crypto.  You can use lofi for ZFS,
though.  Perhaps that was confusing?
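
If you want to experiment anyway, the rough recipe is something like this
(untested sketch; names and sizes are placeholders and the lofiadm options
are from memory):

 mkfile 1g /export/secret.img
 lofiadm -c aes-256-cbc -a /export/secret.img   # prompts for a passphrase, prints e.g. /dev/lofi/1
 zpool create cryptpool /dev/lofi/1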
 -- richard

 In reviewing the Change logs (URL's below) I did not see anything
 mentioned that this had come to pass.  Its going to be another week
 before I have a chance to play with b105.

 Does anyone know specifically if b105 has ZFS encryption?

 Thanks,

 Jerry


  Original Message 
 Subject: [osol-announce] SXCE Build 105 available
 Date: Fri, 09 Jan 2009 08:58:40 -0800


 Please find the links to SXCE Build 105 at:

  http://www.opensolaris.org/os/downloads/

 This is still a DVD only release.

 -
 wget work around:

   http://wikis.sun.com/pages/viewpage.action?pageId=28448383

 ---
 Changelogs:

 ON (The kernel, drivers, and core utilities):

   http://dlc.sun.com/osol/on/downloads/b105/on-changelog-b105.html

 X Window System:

 http://opensolaris.org/os/community/x_win/changelogs/changelogs-nv_100/

 - Derek
 ___
 opensolaris-announce mailing list
 opensolaris-annou...@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/opensolaris-announce

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to find out the zpool of an uberblock printed with the fbt:zfs:uberblock_update: probes?

2009-01-09 Thread Bernd Finger
Marcelo,

I just finished writing up my test results. Hopefully it will answer 
most of your questions. You can find it in my blog, as permalink

http://blogs.sun.com/blogfinger/entry/zfs_and_the_uberblock_part
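
For the archives, one way to tie each update back to its pool is a probe along
these lines (a sketch only; it assumes the signature
uberblock_update(uberblock_t *, vdev_t *, uint64_t txg) as in the current ON
sources, so args[1] is the pool's root vdev):

dtrace -n 'fbt:zfs:uberblock_update:entry
  { printf("%s txg %d", stringof(args[1]->vdev_spa->spa_name), args[2]); }'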

Regards,

Bernd

Marcelo Leal wrote:
 Marcelo,
  Hello there... 
 I did some more tests.
 
 You are getting very useful information with your tests. Thanks a lot!!
 
 I found that not each uberblock_update() is also
 followed by a write to 
 the disk (although the txg is increased every 30
 seconds for each of the 
 three zpools of my 2008.11 system). In these cases,
 ub_rootbp.blk_birth 
 stays at the same value while txg is incremented by
 1.
  Are you sure about that? I mean, what I could understand from the 
 on-disk format is that there is a 1:1 correlation between txg, creation time, 
 and uberblock. Each time there is a write to the pool, we have another state 
 of the filesystem. Actually, we just need another valid uberblock when we 
 change the filesystem state (write to it). 
  
 But each sync command on the OS level is followed by
 a 
 vdev_uberblock_sync() directly after the
 uberblock_update() and then by 
 four writes to the four uberblock copies (one per
  copy) on disk.
  Hmm, maybe the uberblock_update is not really important in our discussion... 
 ;-)
  
 And a change to one or more files in any pool during
 the 30 seconds 
 interval is also followed by a vdev_uberblock_sync()
 of that pool at the 
 end of the interval.
 
  So, what is the uberblock_update? 
 So on my system (a web server) during time when there
 is enough activity 
 that each uberblock_update() is followed by
 vdev_uberblock_sync(),

 I get:
   2 writes per minute (*60)
 
  I'm totally lost... 2 writes per minute?
 
  120 writes per hour (*24)
  2880 writes per day
 But only each 128th time to the same block -
 = 22.5 writes to the same block on the drive per day.

 If we take the lower number of max. writes in the
 referenced paper which 
 is 10.000, we get 10.000/22.5 = 444.4 days or one
 year and 79 days.

 For 100.000, we get 4444.4 days or more than 12
 years.
 
  Ok, but I think the number is 10.000. 100.000 would require static wear 
 leveling, and that is a non-trivial implementation for USB pen drives, right?
 During times without http access to my server, only
 about each 5th to 
 10th uberblock_update() is followed by
 vdev_uberblock_sync() for rpool, 
 and much less for the two data pools, which means
 that the corresponding 
 uberblocks on disk will be skipped for writing (if I
 did not overlook 
 anything), and the device will likely be worn out
 later.
  I need to know what uberblock_update is... it seems not related to 
 txg, sync of disks, labels, anything... ;-) 
 
  Thanks a lot Bernd.
 
  Leal
 [http://www.eall.com.br/blog]
 Regards,

 Bernd

 Marcelo Leal wrote:
 Hello Bernd,
  Now i see your point... ;-)
  Well, following a very simple math:

  - One txg each 5 seconds = 17280/day;
  - Each txg writing 1MB (L0-L3) = 17GB/day
  
  In the paper the math was 10 years = (2.7 * the size of the USB drive) 
 written per day, right? 
  So, on a 4GB drive, that would be ~10GB/day. Then just the label updates 
 would make our USB drive live for 5 years... and if each txg updates 5MB of 
 data, our drive would live for just a year.
  Help, I'm not good with numbers... ;-)

  Leal
 [http://www.eall.com.br/blog]
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discu
 ss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-09 Thread Nicolas Williams
On Fri, Jan 09, 2009 at 12:13:17PM -0800, Richard Elling wrote:
 Jerry K wrote:
  It was rumored that Nevada build 105 would have ZFS encrypted file
  systems integrated into the main source.

 
 No ZFS crypto, but it has lofi crypto.You can use lofi for ZFS,
 though.  Perhaps that was confusing?

Probably :)

I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home partition?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 6:25 PM, noz sf2...@gmail.com wrote:

  The above is very dangerous, if it
  will even work. The output of the zfs send is
  redirected to /tmp, which is a ramdisk.  If you
  have enough space (RAM + Swap), it will work, but if
  there is a reboot or crash before the zfs receive
  completes then everything is gone.

  In stead, do the following:
  (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
  (3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
  (4) Check that all the data looks OK in epool
  (5) n...@holodeck:~# zfs destroy -r -f rpool/export

 Thanks for the tip.  Is there an easy way to do your revised step 4?  Can I
 use a diff or something similar?  e.g.  diff rpool/export epool/export


Personally I would just browse around the structure, open a few files at
random, and consider it done.  But that is me, and my data, of which I _DO_
make backups.

You could use find to create an index of all the files and save these in
files, and compare those.  Depending on exactly how you do the find, you
might be able to just diff the files.

Of course if you want to be really pedantic, you would do
cd /rpool/export; find . | xargs cksum > /rpool_checksums
cd /epool/export; find . | xargs cksum > /epool_checksums
diff /?pool_checksums

But be prepared to wait a very very very long time for the two checksum
processes to run.  Unless you have very little data.

Cheers,
  _J



-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-09 Thread Richard Elling
Nicolas Williams wrote:
 On Fri, Jan 09, 2009 at 12:13:17PM -0800, Richard Elling wrote:
   
 Jerry K wrote:
 
 It was rumored that Nevada build 105 would have ZFS encrypted file
 systems integrated into the main source.
   
   
 No ZFS crypto, but it has lofi crypto.You can use lofi for ZFS,
 though.  Perhaps that was confusing?
 

 Probably :)

 I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.
   

+1
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Richard Elling
Brandon High wrote:
 On Fri, Jan 9, 2009 at 9:45 AM, Carson Gaspar car...@taltos.org wrote:
   
 On 1/9/2009 8:21 AM, Will Murnane wrote:

 
 You might consider a case with a SAS expander in it; you can plug sata
   
 Except Solaris still lacks support for port expanders, or did last I
 checked. Has this changed?
 

 A SAS expander is different than a SATA  port multiplier (PMP). I'm
 not sure if the SAS expander is supported, but it might be
   

Sun sells many products which use SAS expanders.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool add dumping core

2009-01-09 Thread Brad Plecs
I'm trying to add some additional devices to my existing pool, but it's not 
working.  I'm adding a raidz group of 5 300 GB drives, but the command always 
fails: 

r...@kronos:/ # zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
Assertion failed: nvlist_lookup_string(cnv, path, path) == 0, file 
zpool_vdev.c, line 631
Abort (core dumped)

The disks all work, were labeled easily using 'format' after zfs and other 
tools refused to look at them. 
Creating a UFS filesystem with newfs on them runs with no issues, but I can't 
add them to the existing zpool.  

I can use the same devices to create a NEW zpool without issue. 
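(I.e. a quick sanity test along these lines works without complaint - hypothetical 
scratch pool name:

r...@kronos:/ # zpool create -f testpool raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
r...@kronos:/ # zpool destroy testpool

so the devices themselves seem fine.)
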

I fully patched up this system after encountering this problem, no change. 

The zpool to which I am adding them is fairly large and in a degraded state 
(three resilvers running, one that never seems to complete and two related to 
trying to add these new disks), but I didn't think that should prevent me from 
adding another vdev. 

For those who suggest waiting 20 minutes for the resilver to finish, it's been 
estimating < 30 minutes
for the last 12 hours, and we're running out of space, so I wanted to add the 
new devices sooner rather than later. 

Can anyone help? 

extra details below:  

r...@kronos:/ # uname -a
SunOS kronos 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-480R

r...@kronos:/ # smpatch analyze 
137276-01 SunOS 5.10: uucico patch
122470-02 Gnome 2.6.0: GNOME Java Help Patch
121430-31 SunOS 5.8 5.9 5.10: Live Upgrade Patch
121428-11 SunOS 5.10: Live Upgrade Zones Support Patch

r...@kronos:patch # zpool list
NAME   SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
raid  4.32T  4.23T  92.1G  97%  DEGRADED  -

r...@kronos:patch # zpool status   
  pool: raid
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
 scrub: resilver in progress for 12h22m, 97.25% done, 0h20m to go
config:

NAME              STATE     READ WRITE CKSUM
raid              DEGRADED     0     0     0
  raidz1          ONLINE       0     0     0
    c9t0d0        ONLINE       0     0     0
    c6t0d0        ONLINE       0     0     0
    c2t0d0        ONLINE       0     0     0
    c4t0d0        ONLINE       0     0     0
    c10t0d0       ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    c9t1d0        ONLINE       0     0     0
    c6t1d0        ONLINE       0     0     0
    c2t1d0        ONLINE       0     0     0
    c4t1d0        ONLINE       0     0     0
    c10t1d0       ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    c9t3d0        ONLINE       0     0     0
    c6t3d0        ONLINE       0     0     0
    c2t3d0        ONLINE       0     0     0
    c4t3d0        ONLINE       0     0     0
    c10t3d0       ONLINE       0     0     0
  raidz1          DEGRADED     0     0     0
    c9t4d0        ONLINE       0     0     0
    spare         DEGRADED     0     0     0
      c5t13d0     ONLINE       0     0     0
      c6t4d0      FAULTED      0 12.3K     0  too many errors
    c2t4d0        ONLINE       0     0     0
    c4t4d0        ONLINE       0     0     0
    c10t4d0       ONLINE       0     0     0
  raidz1          DEGRADED     0     0     0
    c9t5d0        ONLINE       0     0     0
    spare         DEGRADED     0     0     0
      replacing   DEGRADED     0     0     0
        c6t5d0s0/o  UNAVAIL    0     0     0  cannot open
        c6t5d0    ONLINE       0     0     0
      c11t13d0    ONLINE       0     0     0
    c2t5d0        ONLINE       0     0     0
    c4t5d0        ONLINE       0     0     0
    c10t5d0       ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    c5t9d0        ONLINE       0     0     0
    c7t9d0        ONLINE       0     0     0
    c3t9d0        ONLINE       0     0     0
    c8t9d0        ONLINE       0     0     0
    c11t9d0       ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    c5t10d0       ONLINE       0     0     0
    c7t10d0       ONLINE       0     0     0
    c3t10d0       ONLINE       0     0     0
    c8t10d0       ONLINE       0     0     0
    c11t10d0      ONLINE       0     0     0
  raidz1          ONLINE       0     0     0
    c5t11d0       ONLINE       0     0     0
    c7t11d0

[zfs-discuss] Solaris destroys large discs?? Bug in install?

2009-01-09 Thread Orvar Korvar
I have taken a Samsung 500GB from my old ZFS raid. I have created a 100GB 
Windows XP partition and installed WinXP. The rest of the disk is unformatted. 
Then I wanted to install SXCE b104, so I started the SXCE install with ZFS. 
But it refused to install. It said that the partitions overlap and told me to 
edit and fix that. But it wasn't possible to edit: the cursor jumped directly 
to the top and nothing happened, each time I wanted to edit a disk.

Strange. I only have one partition with WinXP, and still it says the 
partitions overlap??? Is this a bug?

So I restarted Solaris install with UFS and everything went fine. No error 
reports, I could allocate space and install SXCE.




But upon reboot, to finish the install, it sets up SMF(?) with 213 services. 
That took ages, and when the disk loads data it sounds horrible: slow, and 
noisy. And when I run format and fdisk on my partition, it says

Specify disk (enter its number): 5
selecting c2d0
Controller working list found
[disk formatted, defect list found]
Warning: Current Disk has mounted partitions.
/dev/dsk/c2d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c2d0s1 is currently used by  swap. Please see swap(1M).


What is this "defect list found"? Why does this happen? Is Solaris not capable 
of installing ZFS above 100GB? Should I make the install smaller? What is 
happening? 

When I boot into Windows XP, the disk is silent and works fast. Clearly this is 
something with SXCE. So what is happening? Is SXCE corrupting my disk? What 
should I do? Reinstall?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris destroys large discs?? Bug in install?

2009-01-09 Thread Ian Collins
Orvar Korvar wrote:
 I have taken a Samsung 500GB from my old ZFS raid. I have created a 100GB 
 Windows XP partition and installed WinXP. The rest of the disk is 
 unformatted. And then I wanted to install SXCE b104, so I started the SXCE 
 install with ZFS. But it refused to install. Said that the partitions overlap 
 and told me to edit and fix that. But it wasnt possible to edit. The cursor 
 jumped directly to the top and nothing happened, each time I wanted to edit a 
 disk.
   
Last time I had that problem, I used a non-Solaris tool (from a Linux
disk tools CD) to create a second partition as Linux swap which the
Solaris installer recognises as a Solaris partition (same ID).
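
From the Linux CD that is just a matter of adding a primary partition and
setting its type to 82 (Linux swap / Solaris), e.g. with fdisk - a sketch,
the device name is only an example:

fdisk /dev/sda    # 'n' to add the partition, 't' to set its type to 82, 'w' to write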

Or you could create a small slice for a UFS install then migrate to the
rest of the disk as ZFS.

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread JZ
Hi B, thx for hitting on the expander issue that I need to finish your job -

On top of as Rich said, Sun sells many expander-based stuff,

please also understand why you even need that --
the LSI SAS HBA that Sun resells does up to over 200 JBOD disks by itself.

But the new HDS AMS2000 SAS implementation uses the expander technology for 
some some..., just that you have to buy the whole storage controller (a not 
so open storage server if you can't make the connection).

So, expander or not to expander or how to expander is debate-able in my 
view.
[and yea Orvar, they are all pretty safe for your baby silo data management 
approach...]
;-)

best,
z

- Original Message - 
From: Richard Elling richard.ell...@sun.com
To: Brandon High bh...@freaks.com
Cc: zfs-discuss@opensolaris.org
Sent: Friday, January 09, 2009 4:23 PM
Subject: Re: [zfs-discuss] Looking for new SATA/SAS HBA;JBOD is not always 
JBOD


 Brandon High wrote:
 On Fri, Jan 9, 2009 at 9:45 AM, Carson Gaspar car...@taltos.org wrote:

 On 1/9/2009 8:21 AM, Will Murnane wrote:


 You might consider a case with a SAS expander in it; you can plug sata

 Except Solaris still lacks support for port expanders, or did last I
 checked. Has this changed?


 A SAS expander is different than a SATA  port multiplier (PMP). I'm
 not sure if the SAS expander is supported, but it might be


 Sun sells many products which use SAS expanders.
 -- richard

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-09 Thread JZ
Ok, since this thread is the official spot for home based chatting, I am off 
work now.

Similar feeling from an enterprise perspective --

I am the last person at work that has not even once, logged on to FaceBook. 
[no need, I insist]

And I can do a zFaceBook on 100% zCode anytime, only if I want to spend my 
open time on that.

And security -- I cannot afford the time and efforts to do my own secure 
enterprise at home, so, sorry, the important personal stuff, probably just 
as Mr. Tucci, are all on company infrastructure.   ;-)

best,
z at home


- Original Message - 
From: David Dyer-Bennet d...@dd-b.net
To: zfs-discuss@opensolaris.org
Sent: Friday, January 09, 2009 12:58 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?



 On Thu, January 8, 2009 15:35, Tim wrote:
 On Wed, Jan 7, 2009 at 8:43 PM, JZ j...@excelsioritsolutions.com wrote:

 Can we focus on commercial usage?
 please!

 I dunno about you, but I need somewhere to store that music so I can
 stream
 it throughout the house while I'm drinking that wine ;)  A single disk
 windows box isn't really my cup-o-tea.  Plus, I'm a geek, my vmware farm
 needs it's nfs mounts on some solid, high performing gear.

 While my music has ended up there, it's my digital photos that actually
 pushed me into an NAS-type environment.  I wanted something better than
 single-disk reliability plus backups, plus I've found my backups happen
 better on the Solaris-based NAS than they did under windows (I never found
 an adequate Windows backup product, whereas rsync to external USB drives
 works perfectly, with the added benefit that my backup isn't locked up in
 a proprietary format).

 The enterprise features figure prominently, especially snapshots.  And
 it's a BIG DEAL for me to know that a scrub has verified data even if I
 haven't accessed it lately; old photos in my collection are still
 important to me.

 -- 
 David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
 Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
 Photos: http://dd-b.net/photography/gallery/
 Dragaera: http://dragaera.info

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-09 Thread Stephen Yum
Hi James,

I set 'mpxio_disable = no' in the /kernel/drv/iscsi.conf file (not 
/kernel/drv/scsi_vhci.h) and tried running
/usr/bin/stmsboot -e. It then listed the pci ids of the NICs, and I said yes 
and rebooted. Still no joy.

At this point, I'm assuming that my Solaris/ZFS box is enabled for MPXIO. But 
I'm having a b of a time trying to get Vista64 to recognize MPIO.

Using the MS iSCSI initiator, I tried mounting the iSCSI volume with one login 
and with two logins, but MPIO doesn't work. Mounted with one login, I tried adding 
another connection, but it bonks out saying "Too many connections". When I 
log in using both 192.168.1.102 and 192.168.2.102, they both appear in the 
identifier section, pointing to the same one volume, but under devices they 
appear as two separate "Disk drive" devices, not MPIO. And when I down the 
main NIC, the volume disappears. When I down the secondary NIC, it keeps 
working, meaning that it's not really using the secondary NIC.

Am I making any sense? Can you help me?

Is there anyone out there using MPXIO successfully with a ZFS machine as the 
iSCSI target?

S



- Original Message 
From: James C. McPherson james.mcpher...@sun.com
To: Dave Brown dbr...@csolutions.net
Cc: zfs-discuss@opensolaris.org; storage-disc...@opensolaris.org
Sent: Thursday, January 8, 2009 6:26:11 PM
Subject: Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

On Thu, 08 Jan 2009 17:29:10 -0800
Dave Brown dbr...@csolutions.net wrote:

 S,
Are you sure you have MPXIO turned on?  I haven't dealt with
 Solaris for a while (will again soon as I get some virtual servers
 setup) but in the past you had to manually turn it on.  I believe the
 path was /kernel/drv/scsi_vhci.h (I may be missing some of the path)
 and you changed the line that said mpxio_disabled = yes to
 mpxio_disabled = no and rebooted.

That used to be the case prior to Solaris 10 Update 1.

Since S10u1 the supported way of turning on MPxIO is
to run the command

# /usr/sbin/stmsboot -e


If you manually edit /kernel/drv/fp.conf to change the mpxio-disable property, 
you *must* also run 

# /usr/sbin/stmsboot -u


Please see stmsboot(1m) for more details.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcphttp://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Looking for new SATA/SAS HBA; JBOD is not always JBOD

2009-01-09 Thread Dale Ghent
On Jan 9, 2009, at 9:28 AM, Erik Trimble wrote:

 I'm pretty darned sure that the LSI 1068-based HBAs will do true
 JBOD.

Indeed they do, and the mpt driver works fine with these cards.

/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-09 Thread Ian Collins
Ian Collins wrote:
 Send/receive speeds appear to be very data dependent.  I have several 
 different filesystems containing differing data types.  The slowest to 
 replicate is mail and my guess it's the changes to the index files that takes 
 the time.  Similar sized filesystems with similar deltas where files are 
 mainly added or deleted appear to replicate faster.  

   
Has anyone investigated this?  I have been replicating a server today
and the differences between incremental processing times are huge, for example:

filesystem A:

received 1.19Gb stream in 52 seconds (23.4Mb/sec)

filesystem B:

received 729Mb stream in 4564 seconds (164Kb/sec)

I can delve further into the content if anyone is interested.
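
(For anyone wanting to reproduce the measurement: the figures are what
zfs receive -v prints for an incremental, i.e. runs of the form, names changed:

zfs send -i tank/fs@prev tank/fs@now | zfs recv -v -d backup
)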

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-09 Thread Brent Jones
On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
 Ian Collins wrote:
 Send/receive speeds appear to be very data dependent.  I have several 
 different filesystems containing differing data types.  The slowest to 
 replicate is mail and my guess it's the changes to the index files that 
 takes the time.  Similar sized filesystems with similar deltas where files 
 are mainly added or deleted appear to replicate faster.


 Has anyone investigated this?  I have been replicating a server today
 and the differences between incremental processing is huge, for example:

 filesystem A:

 received 1.19Gb stream in 52 seconds (23.4Mb/sec)

 filesystem B:

 received 729Mb stream in 4564 seconds (164Kb/sec)

 I can delve further into the content if anyone is interested.

 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


What hardware is this going to/from?

How are those filesystems laid out, what is their total size, used
space, and guessable file count / file size distribution?

I'm also trying to put together the puzzle to provide more detail to a
case I opened with Sun regarding this.

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss