Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread hardware technician
I want to create a separate home, shared, read/write ZFS partition on a 
tri-boot OpenSolaris, Ubuntu, and CentOS system.  I have successfully created 
and exported the zpools that I would like to use, in Ubuntu using zfs-fuse.  
However, when I boot into OpenSolaris and run zpool import with no options, the 
only pool it offers to import is the one on the primary partition; I haven't 
been able to see or import the pool that is on the extended partition.  I have 
tried importing it by both name and ID.

In OpenSolaris, /dev/dsk/c3d0 shows 15 slices, so I think the slices are there, 
but when I run format, select the disk, and choose the partition option, it 
doesn't show the (ZFS) partitions from Linux.  Within format, the fdisk option 
does recognize the (ZFS) Linux partitions.  The pool that I was able to import 
is on the first fdisk partition, c3d0p1, not on a slice.

Are there any ideas how I could import the other pool?
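
For reference, a sketch of what I have been running, plus two checks I have not 
tried yet (zpool import -d points the scan at an explicit device directory, and 
the ls shows which fdisk-partition device nodes actually exist):

zpool import                  # only offers the pool on c3d0p1
zpool import -d /dev/dsk      # untried: scan the device directory explicitly
ls -l /dev/dsk/c3d0p*         # untried: check which pN device nodes exist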
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Ross
Can I ask why we need to use -c or -d at all?  We already have -r to 
recursively list children, can't we add an optional depth parameter to that?

You then have:
zfs list : shows current level (essentially -r 0)
zfs list -r : shows all levels (infinite recursion)
zfs list -r 2 : shows 2 levels of children
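
In the meantime, a rough way to fake a depth limit with the existing flags is 
to filter the -r output on the number of name components (just a sketch; 'tank' 
is a placeholder pool name):

zfs list -H -o name -r tank | awk -F/ 'NF <= 3'   # pool plus two levels of children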
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] I/O error when import

2009-01-08 Thread Matthew Zhang
I have a three-disk raidz configuration, and one disk was reporting lots of 
errors, so I decided to replace it.
At the same time I replaced the system disk and reinstalled the OS (the same 
version).
When I try to import mypool, I get an I/O error.
What can I do to import it?  Should I swap the new disk (c2d0) back out for the 
original disk used in the raidz and try again?

bash-3.2# zpool import -f 
  pool: mypool
    id: 4052179541023957932
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        mypool      FAULTED  corrupted data
          raidz1    DEGRADED
            c1d1    ONLINE
            c2d0    UNAVAIL  cannot open
            c2d1    ONLINE
bash-3.2# zpool import -f mypool
cannot import 'mypool': I/O error
bash-3.2# uname -a
SunOS snv 5.11 snv_104 i86pc i386 i86pc
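
For the record, the sequence I am considering (untested, and c3d0 below is just 
a placeholder for whatever the replacement disk ends up being called): put the 
original, error-prone disk back so the raidz has all three of its old members, 
import, and only then replace the bad disk online:

zpool import -f mypool            # with the original disk reattached as c2d0
zpool status -v mypool            # pool should come up ONLINE or DEGRADED
zpool replace mypool c2d0 c3d0    # swap in the new disk and let it resilver
zpool status -v mypool            # wait for the resilver before pulling c2d0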
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
OMG, Rich, that did help and solved all my confusion and now I can go to 
sleep...

So now I have to consider Sun and EMC vs Intel in my home $ spending?!
Forget it, Lenovo it is!
at least my folks get a cut.

Goodnight!
best,
z


- Original Message - 
From: "Richard Elling" 
To: "Scott Laird" 
Cc: "JZ" ; "Orvar Korvar" 
; ; "Peter 
Korn" 
Sent: Friday, January 09, 2009 1:09 AM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> Scott Laird wrote:
>> Today?  Low-power SSDs are probably less reliable than low-power hard
>> drives, although they're too new to really know for certain.  Given
>> the number of problems that vendors have had getting acceptable write
>> speeds, I'd be really amazed if they've done any real work on
>> long-term reliability yet.
>
> Eh?  Flash has been around for well over 25 years and the
> technology is well understood.  Trivia: Sun has been shipping
> flash memory for nearly its entire history.  What hasn't happened
> until relatively recently is that the vendors married high density
> flash with a decent controller which expects and manages failures --
> like the disk drive guys did 20 years ago.  It occurs to me that
> you might be too young to remember that format(1m) was the
> tool used to do media analysis and map bad sectors before those
> smarts were moved onto the disk ? ;-)  Why, we used to have to
> regularly scan the media, reserve spare cylinders, and map out
> bad sectors in the snow, walking uphill, in our bare feet because
> shoes hadn't been invented yet... ;-)
>
>> Going forward, SSDs will almost certainly
>> be more reliable, as long as you have something SMART-ish watching the
>> number of worn-out SSD cells and recommending preemptive replacement
>> of worn-out drives every few years.  That should be a slow,
>> predictable process, unlike most HD failures.
>>
>
> I think you will find that failures can still be catastrophic.
> But from a typical reliability analysis, the SSDs will be more
> reliable than HDDs.  The enterprise SSDs have DRAM
> front-ends and plenty of spare cells to accommodate expected
> enterprise use.  FWIW, I expect an MTBF of 3-4M hours for
> enterprise SSDs as compared to 1.6M hours for a top-tier
> enterprise HDD.  More worrying is the relative newness of the
> firmware... but software reliability is a whole different ballgame.
>
> Rumor was that STEC won one of the Apple contracts
> http://webfeet.sp360hosting.com/Lists/Research%20News/DispForm.aspx?ID=32
> STEC also supplies Sun and EMC. But the competition is
> really heating up with Intel and Samsung having made several
> recent announcements.  We do live in interesting times :-)
> -- richard
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Richard Elling
Scott Laird wrote:
> Today?  Low-power SSDs are probably less reliable than low-power hard
> drives, although they're too new to really know for certain.  Given
> the number of problems that vendors have had getting acceptable write
> speeds, I'd be really amazed if they've done any real work on
> long-term reliability yet.  

Eh?  Flash has been around for well over 25 years and the
technology is well understood.  Trivia: Sun has been shipping
flash memory for nearly its entire history.  What hasn't happened
until relatively recently is that the vendors married high density
flash with a decent controller which expects and manages failures --
like the disk drive guys did 20 years ago.  It occurs to me that
you might be too young to remember that format(1m) was the
tool used to do media analysis and map bad sectors before those
smarts were moved onto the disk ? ;-)  Why, we used to have to
regularly scan the media, reserve spare cylinders, and map out
bad sectors in the snow, walking uphill, in our bare feet because
shoes hadn't been invented yet... ;-)

> Going forward, SSDs will almost certainly
> be more reliable, as long as you have something SMART-ish watching the
> number of worn-out SSD cells and recommending preemptive replacement
> of worn-out drives every few years.  That should be a slow,
> predictable process, unlike most HD failures.
>   

I think you will find that failures can still be catastrophic.
But from a typical reliability analysis, the SSDs will be more
reliable than HDDs.  The enterprise SSDs have DRAM
front-ends and plenty of spare cells to accommodate expected
enterprise use.  FWIW, I expect an MTBF of 3-4M hours for
enterprise SSDs as compared to 1.6M hours for a top-tier
enterprise HDD.  More worrying is the relative newness of the
firmware... but software reliability is a whole different ballgame.

Rumor was that STEC won one of the Apple contracts
http://webfeet.sp360hosting.com/Lists/Research%20News/DispForm.aspx?ID=32
STEC also supplies Sun and EMC. But the competition is
really heating up with Intel and Samsung having made several
recent announcements.  We do live in interesting times :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Odd network performance with ZFS/CIFS

2009-01-08 Thread Tim
On Thu, Jan 8, 2009 at 5:54 PM, gnomad  wrote:

> Ok, I'm going to reply to my own question here.  After a few hours of
> thinking, I believe I know what is going on.
>
> I am seeing the initial high network throughput as the 4GB of RAM in the
> server fills up with data.  In fact, in this case, I am bound by the speed
> of the source drive, which tops out at about 40 MB/s -- just what I am
> seeing as the copy starts.  Eventually, the network speed settles down to
> the write speed of the local pool.  Copying files locally (on and off the
> pool) shows that the sustained write speeds are, in fact, about 17-20 MB/s.
>
> So, this brings up a new question, are these speeds typical?  For
> reference, my pool is built from 6 1TB drives configured as RAIDZ2 driven by
> an ICH9(R) configured in AHCI mode. I am aware that RAIDZ2 performance will
> always be less than the speed of individual disks, but this is a little bit
> more than I was expecting.  Individually, these drives benchmark around
> 60-70 MB/s, so I am looking at a fairly substantial penalty for the
> reliability of RAIDZ2.
>
> I'll CC this message to the CIFS and Networking lists to prevent anyone
> else from waiting time writing a reply, as the appropriate place for this
> thread is now confirmed to be zfs-discuss.
>
> -g.
> --
>


That seems really, really low.  What are your sustained read speeds?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hung when import zpool

2009-01-08 Thread Qin Ming Hua
It's a 2 GB filesystem, just for testing.

I waited about half an hour yesterday, but the import succeeded in only about
20 seconds when I retried today.

Meanwhile, ZFS didn't find any disk issue (though by the demo it should have).


On Thu, Jan 8, 2009 at 6:03 PM, Carsten Aulbert
wrote:

> Hi
>
> Qin Ming Hua wrote:
> > bash-3.00# zpool import mypool
> > ^C^C
> >
> > it hung when i try to re-import the zpool, has anyone  see this before?
> >
>
> How long did you wait?
>
> Once a zfs import took 1-2 hours to complete (it was seemingly stuck at
> a ~30 GB filesystem which it needed to do some work on).
>
> Cheer
>
> Carsten
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
Best regards,
Colin Qin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
You can edit the /etc/user_attr file.
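
A minimal sketch of the change, assuming a stock OpenSolaris install (the rest 
of root's entry varies by install and stays as it is; back the file up first):

  before:  root::::type=role;auths=solaris.*,solaris.grant;...
  after:   root::::type=normal;auths=solaris.*,solaris.grant;...

After that, root can log in directly at the login screen.  Logging in as your 
normal user and running "su -" (or prefixing admin commands with pfexec) avoids 
the edit entirely.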

Sent from my iPhone

On Jan 9, 2009, at 11:13 AM, noz  wrote:

>> To do step no 4, you need to login as root, or create
>> new user which
>> home dir not at export.
>>
>> Sent from my iPhone
>>
>
> I tried to login as root at the login screen but it wouldn't let me,  
> some error about roles.  Is there another way to login as root?
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread noz
> To do step no 4, you need to login as root, or create
> new user which  
> home dir not at export.
> 
> Sent from my iPhone
> 

I tried to log in as root at the login screen but it wouldn't let me; I got some 
error about roles.  Is there another way to log in as root?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs_space function

2009-01-08 Thread Ian Collins
On Thu 08/01/09 20:36 , kavita kavita_kulka...@qualexsystems.com sent:
> What exactly does the zfs_space function do?
> The comments suggest it allocates and frees space in a file. What does this
> mean? And through what operation can I invoke this function? For example,
> whenever I edit/write to a file, zfs_write is called. So what operation can
> be used to call this function?

The code list is a better place to ask, or just check the source:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_vnops.c#zfs_space

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
To do step no. 4, you need to log in as root, or create a new user whose  
home dir is not under /export.

Sent from my iPhone

On Jan 9, 2009, at 10:10 AM, noz  wrote:

> Kyle wrote:
>> So if preserving the home filesystem through
>> re-installs are really
>> important, putting the home filesystem in a separate
>> pool may be in
>> order.
>
> My problem similar to the original thread author, and this scenario  
> is exactly the one I had in mind.  I figured out a workable solution  
> from the zfs admin guide, but I've only tested this in virtualbox.   
> I have no idea how well this would work if I actually had hundreds  
> of gigabytes of data.  I also don't know if my solution is the  
> recommended way to do this, so please let me know if anyone has a  
> better method.
>
> Here's my solution:
> (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                     69K  15.6G    18K  /epool
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/export             632K  11.9G    19K  /export
> rpool/export/home        612K  11.9G    19K  /export/home
> rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (2) n...@holodeck:~# zfs snapshot -r rpool/export@now
> (3) n...@holodeck:~# zfs send -R rpool/export@now > /tmp/export_now
> (4) n...@holodeck:~# zfs destroy -r -f rpool/export
> (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                    756K  15.6G    18K  /epool
> epool/export             630K  15.6G    19K  /export
> epool/export/home        612K  15.6G    19K  /export/home
> epool/export/home/noz    592K  15.6G   592K  /export/home/noz
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (6) n...@holodeck:~# zfs mount -a
>
> or
>
> (6) reboot
>
> The only part I'm uncomfortable with is when I have to destroy  
> rpool's export filesystem (step 4), because trying to destroy  
> without the -f switch results in a "filesystem is active" error.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread James C. McPherson
On Thu, 08 Jan 2009 17:29:10 -0800
Dave Brown  wrote:

> S,
>Are you sure you have MPXIO turned on?  I haven't dealt with
> Solaris for a while (will again soon as I get some virtual servers
> setup) but in the past you had to manually turn it on.  I believe the
> path was /kernel/drv/scsi_vhci.h (I may be missing some of the path)
> and you changed the line that said mpxio_disabled = yes to
> mpxio_disabled = no and rebooted.

That used to be the case prior to Solaris 10 Update 1.

Since S10u1 the supported way of turning on MPxIO is
to run the command 

# /usr/sbin/stmsboot -e


If you manually edit /kernel/drv/fp.conf (or the equivalent .conf file for
your HBA driver) to change the mpxio-disable property, you *must* also run 

# /usr/sbin/stmsboot -u


Please see stmsboot(1m) for more details.
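
For reference, the global setting in /kernel/drv/fp.conf looks roughly like
this ("no" enables MPxIO, "yes" disables it; the stock file also carries
comments and a per-port syntax, so treat this as a sketch):

mpxio-disable="no";

followed by stmsboot -u and a reboot, as above.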


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread noz
Kyle wrote:
> So if preserving the home filesystem through
> re-installs are really
> important, putting the home filesystem in a separate
> pool may be in
> order.

My problem is similar to the original thread author's, and this scenario is 
exactly the one I had in mind.  I figured out a workable solution from the ZFS 
admin guide, but I've only tested this in VirtualBox.  I have no idea how well 
this would work if I actually had hundreds of gigabytes of data.  I also don't 
know if my solution is the recommended way to do this, so please let me know if 
anyone has a better method.

Here's my solution:
(1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0

n...@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                     69K  15.6G    18K  /epool
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/export             632K  11.9G    19K  /export
rpool/export/home        612K  11.9G    19K  /export/home
rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
rpool/swap               512M  12.4G  21.1M  -
n...@holodeck:~# 

(2) n...@holodeck:~# zfs snapshot -r rpool/export@now
(3) n...@holodeck:~# zfs send -R rpool/export@now > /tmp/export_now
(4) n...@holodeck:~# zfs destroy -r -f rpool/export
(5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now

n...@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                    756K  15.6G    18K  /epool
epool/export             630K  15.6G    19K  /export
epool/export/home        612K  15.6G    19K  /export/home
epool/export/home/noz    592K  15.6G   592K  /export/home/noz
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/swap               512M  12.4G  21.1M  -
n...@holodeck:~# 

(6) n...@holodeck:~# zfs mount -a

or

(6) reboot

The only part I'm uncomfortable with is when I have to destroy rpool's export 
filesystem (step 4), because trying to destroy without the -f switch results in 
a "filesystem is active" error.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread Dave Brown
S,
   Are you sure you have MPxIO turned on?  I haven't dealt with Solaris 
for a while (will again soon, once I get some virtual servers set up), but in 
the past you had to turn it on manually.  I believe the file was 
/kernel/drv/scsi_vhci.conf (I may be missing some of the path) and you 
changed the line that said mpxio-disable="yes" to mpxio-disable="no" 
and rebooted.

D


JZ wrote:
> Hi S,
> sorry, as much as I am Super z,
> this is beyond me.
> maybe you can go to china town for a seafood dinner (they are on sale 
> worldwide now), and see if Sun folks would reply?
>
> best,
> z
>
>
>
> - Original Message - 
> From: "Stephen Yum" 
> To: ; 
> Sent: Thursday, January 08, 2009 7:27 PM
> Subject: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI
>
>
>   
>> I'm trying to set up a iscsi connection (with MPXIO) between my Vista64 
>> workstation and a ZFS storage machine running OpenSolaris 10 (forget the 
>> exact version).
>>
>> On the ZFS machines, I have two NICS. NIC #1 is 192.168.1.102, and NIC #2 
>> is 192.168.2.102. The NICs are connected to two separate switches serving 
>> two separate IP spaces.
>>
>> On my Vista64 machine, I also have two NICs connected in a similar 
>> fashion, with NIC #1 assigned with 192.168.1.103, and NIC #2 with 
>> 192.168.2.103.
>>
>> Now, I fiddled around with the MS iSCSI intiator seemingly endlessly, and 
>> I can't get it to recognize my ZFS iSCSI volume as being MPIO enabled. It 
>> just shows up in the initiator panel simply as 'Disk'. Either I have not 
>> configured the ZFS end correctly to do MPXIO, or I'm not able to set the 
>> volume up as MPIO volume on the Vista64 end.
>>
>> I Googled endlessly to find some sort of a howto, but I came up virtually 
>> with nothing. Can any enlightened guru out there point me to a good howto 
>> or explain to me via this mailing list how to set it up correctly? 
>> Please??? I'm at my wit's end here.
>>
>> Thank you so much in advance
>>
>> S
>>
>>
>>
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
>> 
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
>   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread JZ
Dear S,
that's a regional question beyond our global chinatown initiatives.

in NYC,
we have the Old, Original chinatown in the city;
we have the newer [but I don't go much since that one is more Taiwan than 
PRC] in Flushing Queens;
we have the Cantonese chinatowns in Brooklyn 8th Ave, and Ave U, which have 
really nice seafood that I think would be suitable for your Hong Kong 
taste...
And we have some private ones...

yours,
z


- Original Message - 
From: "Stephen Yum" 
To: "JZ" ; ; 

Sent: Thursday, January 08, 2009 8:13 PM
Subject: Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI


> No prob z. When seeing your name, it keeps reminding me of the famous 
> rapper.
> Which Chinatown are they at? SF?
>
> S
>
>
>
> - Original Message 
> From: JZ 
> To: Stephen Yum ; zfs-discuss@opensolaris.org; 
> storage-disc...@opensolaris.org
> Sent: Thursday, January 8, 2009 4:31:05 PM
> Subject: Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI
>
> Hi S,
> sorry, as much as I am Super z,
> this is beyond me.
> maybe you can go to china town for a seafood dinner (they are on sale 
> worldwide now), and see if Sun folks would reply?
>
> best,
> z
>
>
>
> - Original Message - From: "Stephen Yum" 
> To: ; 
> Sent: Thursday, January 08, 2009 7:27 PM
> Subject: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI
>
>
>> I'm trying to set up a iscsi connection (with MPXIO) between my Vista64 
>> workstation and a ZFS storage machine running OpenSolaris 10 (forget the 
>> exact version).
>>
>> On the ZFS machines, I have two NICS. NIC #1 is 192.168.1.102, and NIC #2 
>> is 192.168.2.102. The NICs are connected to two separate switches serving 
>> two separate IP spaces.
>>
>> On my Vista64 machine, I also have two NICs connected in a similar 
>> fashion, with NIC #1 assigned with 192.168.1.103, and NIC #2 with 
>> 192.168.2.103.
>>
>> Now, I fiddled around with the MS iSCSI intiator seemingly endlessly, and 
>> I can't get it to recognize my ZFS iSCSI volume as being MPIO enabled. It 
>> just shows up in the initiator panel simply as 'Disk'. Either I have not 
>> configured the ZFS end correctly to do MPXIO, or I'm not able to set the 
>> volume up as MPIO volume on the Vista64 end.
>>
>> I Googled endlessly to find some sort of a howto, but I came up virtually 
>> with nothing. Can any enlightened guru out there point me to a good howto 
>> or explain to me via this mailing list how to set it up correctly? 
>> Please??? I'm at my wit's end here.
>>
>> Thank you so much in advance
>>
>> S
>>
>>
>>
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
>
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread Stephen Yum
No prob z. When seeing your name, it keeps reminding me of the famous rapper.
Which Chinatown are they at? SF?

S



- Original Message 
From: JZ 
To: Stephen Yum ; zfs-discuss@opensolaris.org; 
storage-disc...@opensolaris.org
Sent: Thursday, January 8, 2009 4:31:05 PM
Subject: Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

Hi S,
sorry, as much as I am Super z,
this is beyond me.
maybe you can go to china town for a seafood dinner (they are on sale worldwide 
now), and see if Sun folks would reply?

best,
z



- Original Message - From: "Stephen Yum" 
To: ; 
Sent: Thursday, January 08, 2009 7:27 PM
Subject: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI


> I'm trying to set up a iscsi connection (with MPXIO) between my Vista64 
> workstation and a ZFS storage machine running OpenSolaris 10 (forget the 
> exact version).
> 
> On the ZFS machines, I have two NICS. NIC #1 is 192.168.1.102, and NIC #2 is 
> 192.168.2.102. The NICs are connected to two separate switches serving two 
> separate IP spaces.
> 
> On my Vista64 machine, I also have two NICs connected in a similar fashion, 
> with NIC #1 assigned with 192.168.1.103, and NIC #2 with 192.168.2.103.
> 
> Now, I fiddled around with the MS iSCSI intiator seemingly endlessly, and I 
> can't get it to recognize my ZFS iSCSI volume as being MPIO enabled. It just 
> shows up in the initiator panel simply as 'Disk'. Either I have not 
> configured the ZFS end correctly to do MPXIO, or I'm not able to set the 
> volume up as MPIO volume on the Vista64 end.
> 
> I Googled endlessly to find some sort of a howto, but I came up virtually 
> with nothing. Can any enlightened guru out there point me to a good howto or 
> explain to me via this mailing list how to set it up correctly? Please??? I'm 
> at my wit's end here.
> 
> Thank you so much in advance
> 
> S
> 
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 


  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread JZ
Hi S,
sorry, as much as I am Super z,
this is beyond me.
maybe you can go to china town for a seafood dinner (they are on sale 
worldwide now), and see if Sun folks would reply?

best,
z



- Original Message - 
From: "Stephen Yum" 
To: ; 
Sent: Thursday, January 08, 2009 7:27 PM
Subject: [zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI


> I'm trying to set up a iscsi connection (with MPXIO) between my Vista64 
> workstation and a ZFS storage machine running OpenSolaris 10 (forget the 
> exact version).
>
> On the ZFS machines, I have two NICS. NIC #1 is 192.168.1.102, and NIC #2 
> is 192.168.2.102. The NICs are connected to two separate switches serving 
> two separate IP spaces.
>
> On my Vista64 machine, I also have two NICs connected in a similar 
> fashion, with NIC #1 assigned with 192.168.1.103, and NIC #2 with 
> 192.168.2.103.
>
> Now, I fiddled around with the MS iSCSI intiator seemingly endlessly, and 
> I can't get it to recognize my ZFS iSCSI volume as being MPIO enabled. It 
> just shows up in the initiator panel simply as 'Disk'. Either I have not 
> configured the ZFS end correctly to do MPXIO, or I'm not able to set the 
> volume up as MPIO volume on the Vista64 end.
>
> I Googled endlessly to find some sort of a howto, but I came up virtually 
> with nothing. Can any enlightened guru out there point me to a good howto 
> or explain to me via this mailing list how to set it up correctly? 
> Please??? I'm at my wit's end here.
>
> Thank you so much in advance
>
> S
>
>
>
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Desperate question about MPXIO with ZFS-iSCSI

2009-01-08 Thread Stephen Yum
I'm trying to set up an iSCSI connection (with MPxIO) between my Vista64 
workstation and a ZFS storage machine running OpenSolaris (I forget the exact 
version).

On the ZFS machine, I have two NICs. NIC #1 is 192.168.1.102, and NIC #2 is 
192.168.2.102. The NICs are connected to two separate switches serving two 
separate IP spaces.

On my Vista64 machine, I also have two NICs connected in a similar fashion, 
with NIC #1 assigned 192.168.1.103, and NIC #2 192.168.2.103.

Now, I have fiddled around with the MS iSCSI initiator seemingly endlessly, and 
I can't get it to recognize my ZFS iSCSI volume as being MPIO-enabled. It just 
shows up in the initiator panel simply as 'Disk'. Either I have not configured 
the ZFS end correctly to do MPxIO, or I'm not able to set the volume up as an 
MPIO volume on the Vista64 end.
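
For context, the ZFS end is nothing fancy; roughly the following, from memory 
rather than a copied transcript, with placeholder names (in case the problem 
turns out to be on that side):

zfs create -V 100G tank/iscsivol       # tank/iscsivol is a placeholder zvol name
zfs set shareiscsi=on tank/iscsivol    # share it as an iSCSI target
iscsitadm list target -v               # confirm the target details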

I Googled endlessly to find some sort of a howto, but came up with virtually 
nothing. Can any enlightened guru out there point me to a good howto, or explain 
to me via this mailing list how to set it up correctly? Please??? I'm at my 
wit's end here.

Thank you so much in advance

S



  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Odd network performance with ZFS/CIFS

2009-01-08 Thread gnomad
test
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Odd network performance with ZFS/CIFS

2009-01-08 Thread gnomad
Ok, I'm going to reply to my own question here.  After a few hours of thinking, 
I believe I know what is going on.

I am seeing the initial high network throughput as the 4GB of RAM in the server 
fills up with data.  In fact, in this case, I am bound by the speed of the 
source drive, which tops out at about 40 MB/s -- just what I am seeing as the 
copy starts.  Eventually, the network speed settles down to the write speed of 
the local pool.  Copying files locally (on and off the pool) shows that the 
sustained write speeds are, in fact, about 17-20 MB/s.
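
(For anyone who wants a comparable local number without the source drive in the 
picture, a rough sketch of a direct way to measure it; /tank/ddtest is a 
placeholder path on the pool, and compression, if it were enabled, would make a 
/dev/zero test meaningless:)

ptime dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096   # ~4 GB sequential write
ptime dd if=/tank/ddtest of=/dev/null bs=1024k              # read-back; ARC caching will inflate this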

So, this brings up a new question, are these speeds typical?  For reference, my 
pool is built from 6 1TB drives configured as RAIDZ2 driven by an ICH9(R) 
configured in AHCI mode. I am aware that RAIDZ2 performance will always be less 
than the speed of individual disks, but this is a little bit more than I was 
expecting.  Individually, these drives benchmark around 60-70 MB/s, so I am 
looking at a fairly substantial penalty for the reliability of RAIDZ2.

I'll CC this message to the CIFS and Networking lists to prevent anyone else 
from wasting time writing a reply, as the appropriate place for this thread is 
now confirmed to be zfs-discuss.

-g.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
OMG!
what a critical factor I just didn't think about!!!
stupid me!

Moog, please, which laptops are supporting ZFS today?
I will only buy within those.

z, at home, feeling better, but still a bit confused


- Original Message - 
From: "The Moog" 
To: "JZ" ; 
; "Scott Laird" 
Cc: "Orvar Korvar" ; 
; "Peter Korn" 
Sent: Thursday, January 08, 2009 6:50 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> Are you planning to run Solaris on your laptop?
>
> Sent from my BlackBerry Bold®
> http://www.blackberrybold.com
>
> -Original Message-
> From: "JZ" 
>
> Date: Thu, 8 Jan 2009 18:27:52
> To: Scott Laird
> Cc: Orvar Korvar; 
> ; Peter Korn
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
> Thanks much Scott,
> I still don't know what you are talking about -- my $3000 to $800 laptops
> all never needed to swap any drive.
>
> But yeah, I got hit on all of them when I was in china, by the china web
> virus that no U.S. software could do anything [then a china open source
> thing did the job]
>
> So, without the swapping HD concern, what should I do???
>
> z at home still confused
>
>
> - Original Message - 
> From: "Scott Laird" 
> To: "JZ" 
> Cc: "Toby Thain" ; "Brandon High"
> ; ; "Peter Korn"
> ; "Orvar Korvar" 
> Sent: Thursday, January 08, 2009 6:20 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> You can't trust any hard drive.  That's what backups are for :-).
>>
>> Laptop hard drives aren't much worse than desktop drives, and 2.5"
>> SATA drives are cheap.  As long as they're easy to swap, then a drive
>> failure isn't the end of the world.  Order a new drive ($100 or so),
>> swap them, and restore from backup.
>>
>> I haven't dealt with PC laptops in years, so I can't really compare
>> models.
>>
>>
>> Scott
>>
>> On Thu, Jan 8, 2009 at 2:40 PM, JZ  wrote:
>>> Thanks Scott,
>>> I was really itchy to order one, now I just want to save that open $ for
>>> Remy+++.
>>>
>>> Then, next question, can I trust any HD for my home laptop? should I go
>>> get
>>> a Sony VAIO or a cheap China-made thing would do?
>>> big price delta...
>>>
>>> z at home
>>>
>>> - Original Message - From: "Scott Laird" 
>>> To: "JZ" 
>>> Cc: "Toby Thain" ; "Brandon High"
>>> ; ; "Peter Korn"
>>> ; "Orvar Korvar" 
>>> Sent: Thursday, January 08, 2009 5:36 PM
>>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>>
>>>
 Today?  Low-power SSDs are probably less reliable than low-power hard
 drives, although they're too new to really know for certain.  Given
 the number of problems that vendors have had getting acceptable write
 speeds, I'd be really amazed if they've done any real work on
 long-term reliability yet.  Going forward, SSDs will almost certainly
 be more reliable, as long as you have something SMART-ish watching the
 number of worn-out SSD cells and recommending preemptive replacement
 of worn-out drives every few years.  That should be a slow,
 predictable process, unlike most HD failures.


 Scott

 On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:
>
> I was think about Apple's new SSD drive option on laptops...
>
> is that safer than Apple's HD or less safe? [maybe Orvar can help me 
> on
> this]
>
> the price is a bit hefty for me to just order for experiment...
> Thanks!
> z at home
>
>
> - Original Message - From: "Toby Thain"
> 
> To: "JZ" 
> Cc: "Scott Laird" ; "Brandon High"
> ;
> ; "Peter Korn" 
> Sent: Thursday, January 08, 2009 5:25 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>>
>> On 7-Jan-09, at 9:43 PM, JZ wrote:
>>
>>> ok, Scott, that sounded sincere. I am not going to do the pic thing
>>> on
>>> you.
>>>
>>> But do I have to spell this out to you -- somethings are invented
>>> not
>>> for
>>> home use?
>>>
>>> Cindy, would you want to do ZFS at home,
>>
>> Why would you disrespect your personal data? ZFS is perfect for home
>> use,
>> for reasons that have been discussed on this list and elsewhere.
>>
>> Apple also recognises this, which is why ZFS is in OS X 10.5 and will
>> presumably become the default boot filesystem.
>>
>> Sorry to wander a little offtopic, but IMHO - Apple needs to
>> acknowledge,
>> and tell their customers, that hard drives are  unreliable
>> consumables.
>>
>> I am desperately looking forward to the day when they recognise the
>> need
>> to ship all their systems with:
>> 1) mirrored storage out of the box;
>> 2) easy user-swappable drives;
>> 3) foolproof fault notification and rectification.
>>
>> There is no reason why an Apple customer should not have this level
>> of
>> protection for her photo and video library, Great American Novel,  or
>> whatever. Time Machine is a go

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread The Moog
Are you planning to run Solaris on your laptop?

Sent from my BlackBerry Bold® 
http://www.blackberrybold.com

-Original Message-
From: "JZ" 

Date: Thu, 8 Jan 2009 18:27:52 
To: Scott Laird
Cc: Orvar Korvar; 
; Peter Korn
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


Thanks much Scott,
I still don't know what you are talking about -- my $3000 to $800 laptops 
all never needed to swap any drive.

But yeah, I got hit on all of them when I was in china, by the china web 
virus that no U.S. software could do anything [then a china open source 
thing did the job]

So, without the swapping HD concern, what should I do???

z at home still confused


- Original Message - 
From: "Scott Laird" 
To: "JZ" 
Cc: "Toby Thain" ; "Brandon High" 
; ; "Peter Korn" 
; "Orvar Korvar" 
Sent: Thursday, January 08, 2009 6:20 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> You can't trust any hard drive.  That's what backups are for :-).
>
> Laptop hard drives aren't much worse than desktop drives, and 2.5"
> SATA drives are cheap.  As long as they're easy to swap, then a drive
> failure isn't the end of the world.  Order a new drive ($100 or so),
> swap them, and restore from backup.
>
> I haven't dealt with PC laptops in years, so I can't really compare 
> models.
>
>
> Scott
>
> On Thu, Jan 8, 2009 at 2:40 PM, JZ  wrote:
>> Thanks Scott,
>> I was really itchy to order one, now I just want to save that open $ for
>> Remy+++.
>>
>> Then, next question, can I trust any HD for my home laptop? should I go 
>> get
>> a Sony VAIO or a cheap China-made thing would do?
>> big price delta...
>>
>> z at home
>>
>> - Original Message - From: "Scott Laird" 
>> To: "JZ" 
>> Cc: "Toby Thain" ; "Brandon High"
>> ; ; "Peter Korn"
>> ; "Orvar Korvar" 
>> Sent: Thursday, January 08, 2009 5:36 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>
>>
>>> Today?  Low-power SSDs are probably less reliable than low-power hard
>>> drives, although they're too new to really know for certain.  Given
>>> the number of problems that vendors have had getting acceptable write
>>> speeds, I'd be really amazed if they've done any real work on
>>> long-term reliability yet.  Going forward, SSDs will almost certainly
>>> be more reliable, as long as you have something SMART-ish watching the
>>> number of worn-out SSD cells and recommending preemptive replacement
>>> of worn-out drives every few years.  That should be a slow,
>>> predictable process, unlike most HD failures.
>>>
>>>
>>> Scott
>>>
>>> On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:

 I was think about Apple's new SSD drive option on laptops...

 is that safer than Apple's HD or less safe? [maybe Orvar can help me on
 this]

 the price is a bit hefty for me to just order for experiment...
 Thanks!
 z at home


 - Original Message - From: "Toby Thain"
 
 To: "JZ" 
 Cc: "Scott Laird" ; "Brandon High" 
 ;
 ; "Peter Korn" 
 Sent: Thursday, January 08, 2009 5:25 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


>
> On 7-Jan-09, at 9:43 PM, JZ wrote:
>
>> ok, Scott, that sounded sincere. I am not going to do the pic thing 
>> on
>> you.
>>
>> But do I have to spell this out to you -- somethings are invented 
>> not
>> for
>> home use?
>>
>> Cindy, would you want to do ZFS at home,
>
> Why would you disrespect your personal data? ZFS is perfect for home
> use,
> for reasons that have been discussed on this list and elsewhere.
>
> Apple also recognises this, which is why ZFS is in OS X 10.5 and will
> presumably become the default boot filesystem.
>
> Sorry to wander a little offtopic, but IMHO - Apple needs to
> acknowledge,
> and tell their customers, that hard drives are  unreliable 
> consumables.
>
> I am desperately looking forward to the day when they recognise the 
> need
> to ship all their systems with:
> 1) mirrored storage out of the box;
> 2) easy user-swappable drives;
> 3) foolproof fault notification and rectification.
>
> There is no reason why an Apple customer should not have this level 
> of
> protection for her photo and video library, Great American Novel,  or
> whatever. Time Machine is a good first step (though it doesn't  often
> work
> smoothly for me with a LaCie external FW drive).
>
> These are the neglected pieces, IMHO, of their touted Digital 
> Lifestyle.
>
> --Toby
>
>
>> or just having some wine and music?
>>
>> Can we focus on commercial usage?
>> please!
>>
>>
>>
>> - Original Message -
>> From: "Scott Laird" 
>> To: "Brandon High" 
>> Cc: ; "Peter Korn" 
>> Sent: Wednesday, January 07, 2009 9:28 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
Thanks much Scott,
I still don't know what you are talking about -- my $3000 to $800 laptops 
all never needed to swap any drive.

But yeah, I got hit on all of them when I was in china, by the china web 
virus that no U.S. software could do anything [then a china open source 
thing did the job]

So, without the swapping HD concern, what should I do???

z at home still confused


- Original Message - 
From: "Scott Laird" 
To: "JZ" 
Cc: "Toby Thain" ; "Brandon High" 
; ; "Peter Korn" 
; "Orvar Korvar" 
Sent: Thursday, January 08, 2009 6:20 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> You can't trust any hard drive.  That's what backups are for :-).
>
> Laptop hard drives aren't much worse than desktop drives, and 2.5"
> SATA drives are cheap.  As long as they're easy to swap, then a drive
> failure isn't the end of the world.  Order a new drive ($100 or so),
> swap them, and restore from backup.
>
> I haven't dealt with PC laptops in years, so I can't really compare 
> models.
>
>
> Scott
>
> On Thu, Jan 8, 2009 at 2:40 PM, JZ  wrote:
>> Thanks Scott,
>> I was really itchy to order one, now I just want to save that open $ for
>> Remy+++.
>>
>> Then, next question, can I trust any HD for my home laptop? should I go 
>> get
>> a Sony VAIO or a cheap China-made thing would do?
>> big price delta...
>>
>> z at home
>>
>> - Original Message - From: "Scott Laird" 
>> To: "JZ" 
>> Cc: "Toby Thain" ; "Brandon High"
>> ; ; "Peter Korn"
>> ; "Orvar Korvar" 
>> Sent: Thursday, January 08, 2009 5:36 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>
>>
>>> Today?  Low-power SSDs are probably less reliable than low-power hard
>>> drives, although they're too new to really know for certain.  Given
>>> the number of problems that vendors have had getting acceptable write
>>> speeds, I'd be really amazed if they've done any real work on
>>> long-term reliability yet.  Going forward, SSDs will almost certainly
>>> be more reliable, as long as you have something SMART-ish watching the
>>> number of worn-out SSD cells and recommending preemptive replacement
>>> of worn-out drives every few years.  That should be a slow,
>>> predictable process, unlike most HD failures.
>>>
>>>
>>> Scott
>>>
>>> On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:

 I was think about Apple's new SSD drive option on laptops...

 is that safer than Apple's HD or less safe? [maybe Orvar can help me on
 this]

 the price is a bit hefty for me to just order for experiment...
 Thanks!
 z at home


 - Original Message - From: "Toby Thain"
 
 To: "JZ" 
 Cc: "Scott Laird" ; "Brandon High" 
 ;
 ; "Peter Korn" 
 Sent: Thursday, January 08, 2009 5:25 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


>
> On 7-Jan-09, at 9:43 PM, JZ wrote:
>
>> ok, Scott, that sounded sincere. I am not going to do the pic thing 
>> on
>> you.
>>
>> But do I have to spell this out to you -- somethings are invented 
>> not
>> for
>> home use?
>>
>> Cindy, would you want to do ZFS at home,
>
> Why would you disrespect your personal data? ZFS is perfect for home
> use,
> for reasons that have been discussed on this list and elsewhere.
>
> Apple also recognises this, which is why ZFS is in OS X 10.5 and will
> presumably become the default boot filesystem.
>
> Sorry to wander a little offtopic, but IMHO - Apple needs to
> acknowledge,
> and tell their customers, that hard drives are  unreliable 
> consumables.
>
> I am desperately looking forward to the day when they recognise the 
> need
> to ship all their systems with:
> 1) mirrored storage out of the box;
> 2) easy user-swappable drives;
> 3) foolproof fault notification and rectification.
>
> There is no reason why an Apple customer should not have this level 
> of
> protection for her photo and video library, Great American Novel,  or
> whatever. Time Machine is a good first step (though it doesn't  often
> work
> smoothly for me with a LaCie external FW drive).
>
> These are the neglected pieces, IMHO, of their touted Digital 
> Lifestyle.
>
> --Toby
>
>
>> or just having some wine and music?
>>
>> Can we focus on commercial usage?
>> please!
>>
>>
>>
>> - Original Message -
>> From: "Scott Laird" 
>> To: "Brandon High" 
>> Cc: ; "Peter Korn" 
>> Sent: Wednesday, January 07, 2009 9:28 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>
>>
>>> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High  
>>> wrote:

 On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
 wrote:
>
> How much is your time worth?

 Quite a bit.

> Consider the engineering eff

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
You can't trust any hard drive.  That's what backups are for :-).

Laptop hard drives aren't much worse than desktop drives, and 2.5"
SATA drives are cheap.  As long as they're easy to swap, then a drive
failure isn't the end of the world.  Order a new drive ($100 or so),
swap them, and restore from backup.

I haven't dealt with PC laptops in years, so I can't really compare models.


Scott

On Thu, Jan 8, 2009 at 2:40 PM, JZ  wrote:
> Thanks Scott,
> I was really itchy to order one, now I just want to save that open $ for
> Remy+++.
>
> Then, next question, can I trust any HD for my home laptop? should I go get
> a Sony VAIO or a cheap China-made thing would do?
> big price delta...
>
> z at home
>
> - Original Message - From: "Scott Laird" 
> To: "JZ" 
> Cc: "Toby Thain" ; "Brandon High"
> ; ; "Peter Korn"
> ; "Orvar Korvar" 
> Sent: Thursday, January 08, 2009 5:36 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> Today?  Low-power SSDs are probably less reliable than low-power hard
>> drives, although they're too new to really know for certain.  Given
>> the number of problems that vendors have had getting acceptable write
>> speeds, I'd be really amazed if they've done any real work on
>> long-term reliability yet.  Going forward, SSDs will almost certainly
>> be more reliable, as long as you have something SMART-ish watching the
>> number of worn-out SSD cells and recommending preemptive replacement
>> of worn-out drives every few years.  That should be a slow,
>> predictable process, unlike most HD failures.
>>
>>
>> Scott
>>
>> On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:
>>>
>>> I was think about Apple's new SSD drive option on laptops...
>>>
>>> is that safer than Apple's HD or less safe? [maybe Orvar can help me on
>>> this]
>>>
>>> the price is a bit hefty for me to just order for experiment...
>>> Thanks!
>>> z at home
>>>
>>>
>>> - Original Message - From: "Toby Thain"
>>> 
>>> To: "JZ" 
>>> Cc: "Scott Laird" ; "Brandon High" ;
>>> ; "Peter Korn" 
>>> Sent: Thursday, January 08, 2009 5:25 PM
>>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>>
>>>

 On 7-Jan-09, at 9:43 PM, JZ wrote:

> ok, Scott, that sounded sincere. I am not going to do the pic thing  on
> you.
>
> But do I have to spell this out to you -- somethings are invented  not
> for
> home use?
>
> Cindy, would you want to do ZFS at home,

 Why would you disrespect your personal data? ZFS is perfect for home
 use,
 for reasons that have been discussed on this list and elsewhere.

 Apple also recognises this, which is why ZFS is in OS X 10.5 and will
 presumably become the default boot filesystem.

 Sorry to wander a little offtopic, but IMHO - Apple needs to
 acknowledge,
 and tell their customers, that hard drives are  unreliable consumables.

 I am desperately looking forward to the day when they recognise the need
 to ship all their systems with:
 1) mirrored storage out of the box;
 2) easy user-swappable drives;
 3) foolproof fault notification and rectification.

 There is no reason why an Apple customer should not have this level  of
 protection for her photo and video library, Great American Novel,  or
 whatever. Time Machine is a good first step (though it doesn't  often
 work
 smoothly for me with a LaCie external FW drive).

 These are the neglected pieces, IMHO, of their touted Digital Lifestyle.

 --Toby


> or just having some wine and music?
>
> Can we focus on commercial usage?
> please!
>
>
>
> - Original Message -
> From: "Scott Laird" 
> To: "Brandon High" 
> Cc: ; "Peter Korn" 
> Sent: Wednesday, January 07, 2009 9:28 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High  wrote:
>>>
>>> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
>>> wrote:

 How much is your time worth?
>>>
>>> Quite a bit.
>>>
 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.
>>>
>>> A few years ago, I put together the NAS box currently in use at home
>>> for $300 for 1TB of space. Mind you, I recycled the RAM from another
>>> box and the four 250GB disks were free. I think 250 drives were
>>> around
>>> $200 at the time, so let's say the system price was $1200.
>>>
>>> I don't think there's a Sun server that takes 4+ drives anywhere near
>>> $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
>>> more drives ups the cost further. That means the afternoon I spent
>>> setting my server up was worth $3000. I should tell my boss that.
>>>
>>> A mo

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
Scott??
I am really at a major cross-point in my decision making process --

until today, all my home stuff are Sony,
from TV, projector, stereo bricks, all the way to USB SSD sticks.
[besides speakers I use Bose]

but this laptop thing is really bothering my religious love for Sony.
should I or should I not...  OMG!

???!

z, at home don't know how to spend $



- Original Message - 
From: "JZ" 
To: "Scott Laird" 
Cc: "Orvar Korvar" ; 
; "Peter Korn" 
Sent: Thursday, January 08, 2009 5:40 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> Thanks Scott,
> I was really itchy to order one, now I just want to save that open $ for
> Remy+++.
>
> Then, next question, can I trust any HD for my home laptop? should I go 
> get
> a Sony VAIO or a cheap China-made thing would do?
> big price delta...
>
> z at home
>
> - Original Message - 
> From: "Scott Laird" 
> To: "JZ" 
> Cc: "Toby Thain" ; "Brandon High"
> ; ; "Peter Korn"
> ; "Orvar Korvar" 
> Sent: Thursday, January 08, 2009 5:36 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> Today?  Low-power SSDs are probably less reliable than low-power hard
>> drives, although they're too new to really know for certain.  Given
>> the number of problems that vendors have had getting acceptable write
>> speeds, I'd be really amazed if they've done any real work on
>> long-term reliability yet.  Going forward, SSDs will almost certainly
>> be more reliable, as long as you have something SMART-ish watching the
>> number of worn-out SSD cells and recommending preemptive replacement
>> of worn-out drives every few years.  That should be a slow,
>> predictable process, unlike most HD failures.
>>
>>
>> Scott
>>
>> On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:
>>> I was think about Apple's new SSD drive option on laptops...
>>>
>>> is that safer than Apple's HD or less safe? [maybe Orvar can help me on
>>> this]
>>>
>>> the price is a bit hefty for me to just order for experiment...
>>> Thanks!
>>> z at home
>>>
>>>
>>> - Original Message - From: "Toby Thain"
>>> 
>>> To: "JZ" 
>>> Cc: "Scott Laird" ; "Brandon High" 
>>> ;
>>> ; "Peter Korn" 
>>> Sent: Thursday, January 08, 2009 5:25 PM
>>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>>
>>>

 On 7-Jan-09, at 9:43 PM, JZ wrote:

> ok, Scott, that sounded sincere. I am not going to do the pic thing 
> on
> you.
>
> But do I have to spell this out to you -- somethings are invented  not
> for
> home use?
>
> Cindy, would you want to do ZFS at home,

 Why would you disrespect your personal data? ZFS is perfect for home
 use,
 for reasons that have been discussed on this list and elsewhere.

 Apple also recognises this, which is why ZFS is in OS X 10.5 and will
 presumably become the default boot filesystem.

 Sorry to wander a little offtopic, but IMHO - Apple needs to
 acknowledge,
 and tell their customers, that hard drives are  unreliable consumables.

 I am desperately looking forward to the day when they recognise the
 need
 to ship all their systems with:
 1) mirrored storage out of the box;
 2) easy user-swappable drives;
 3) foolproof fault notification and rectification.

 There is no reason why an Apple customer should not have this level  of
 protection for her photo and video library, Great American Novel,  or
 whatever. Time Machine is a good first step (though it doesn't  often
 work
 smoothly for me with a LaCie external FW drive).

 These are the neglected pieces, IMHO, of their touted Digital 
 Lifestyle.

 --Toby


> or just having some wine and music?
>
> Can we focus on commercial usage?
> please!
>
>
>
> - Original Message -
> From: "Scott Laird" 
> To: "Brandon High" 
> Cc: ; "Peter Korn" 
> Sent: Wednesday, January 07, 2009 9:28 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High 
>> wrote:
>>>
>>> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
>>> wrote:

 How much is your time worth?
>>>
>>> Quite a bit.
>>>
 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.
>>>
>>> A few years ago, I put together the NAS box currently in use at home
>>> for $300 for 1TB of space. Mind you, I recycled the RAM from another
>>> box and the four 250GB disks were free. I think 250 drives were
>>> around
>>> $200 at the time, so let's say the system price was $1200.
>>>
>>> I don't think there's a Sun server that takes 4+ drives anywhere
>>> near
>>> $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
>

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
Thanks Scott,
I was really itchy to order one, now I just want to save that open $ for 
Remy+++.

Then, next question, can I trust any HD for my home laptop? should I go get 
a Sony VAIO or a cheap China-made thing would do?
big price delta...

z at home

- Original Message - 
From: "Scott Laird" 
To: "JZ" 
Cc: "Toby Thain" ; "Brandon High" 
; ; "Peter Korn" 
; "Orvar Korvar" 
Sent: Thursday, January 08, 2009 5:36 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> Today?  Low-power SSDs are probably less reliable than low-power hard
> drives, although they're too new to really know for certain.  Given
> the number of problems that vendors have had getting acceptable write
> speeds, I'd be really amazed if they've done any real work on
> long-term reliability yet.  Going forward, SSDs will almost certainly
> be more reliable, as long as you have something SMART-ish watching the
> number of worn-out SSD cells and recommending preemptive replacement
> of worn-out drives every few years.  That should be a slow,
> predictable process, unlike most HD failures.
>
>
> Scott
>
> On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:
>> I was think about Apple's new SSD drive option on laptops...
>>
>> is that safer than Apple's HD or less safe? [maybe Orvar can help me on
>> this]
>>
>> the price is a bit hefty for me to just order for experiment...
>> Thanks!
>> z at home
>>
>>
>> - Original Message - From: "Toby Thain" 
>> 
>> To: "JZ" 
>> Cc: "Scott Laird" ; "Brandon High" ;
>> ; "Peter Korn" 
>> Sent: Thursday, January 08, 2009 5:25 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>
>>
>>>
>>> On 7-Jan-09, at 9:43 PM, JZ wrote:
>>>
 ok, Scott, that sounded sincere. I am not going to do the pic thing  on
 you.

 But do I have to spell this out to you -- somethings are invented  not
 for
 home use?

 Cindy, would you want to do ZFS at home,
>>>
>>> Why would you disrespect your personal data? ZFS is perfect for home 
>>> use,
>>> for reasons that have been discussed on this list and elsewhere.
>>>
>>> Apple also recognises this, which is why ZFS is in OS X 10.5 and will
>>> presumably become the default boot filesystem.
>>>
>>> Sorry to wander a little offtopic, but IMHO - Apple needs to 
>>> acknowledge,
>>> and tell their customers, that hard drives are  unreliable consumables.
>>>
>>> I am desperately looking forward to the day when they recognise the 
>>> need
>>> to ship all their systems with:
>>> 1) mirrored storage out of the box;
>>> 2) easy user-swappable drives;
>>> 3) foolproof fault notification and rectification.
>>>
>>> There is no reason why an Apple customer should not have this level  of
>>> protection for her photo and video library, Great American Novel,  or
>>> whatever. Time Machine is a good first step (though it doesn't  often 
>>> work
>>> smoothly for me with a LaCie external FW drive).
>>>
>>> These are the neglected pieces, IMHO, of their touted Digital Lifestyle.
>>>
>>> --Toby
>>>
>>>
 or just having some wine and music?

 Can we focus on commercial usage?
 please!



 - Original Message -
 From: "Scott Laird" 
 To: "Brandon High" 
 Cc: ; "Peter Korn" 
 Sent: Wednesday, January 07, 2009 9:28 PM
 Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High  
> wrote:
>>
>> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
>> wrote:
>>>
>>> How much is your time worth?
>>
>> Quite a bit.
>>
>>> Consider the engineering effort going into every Sun Server.
>>> Any system from Sun is more than sufficient for a home server.
>>> You want more disks, then buy one with more slots.  Done.
>>
>> A few years ago, I put together the NAS box currently in use at home
>> for $300 for 1TB of space. Mind you, I recycled the RAM from another
>> box and the four 250GB disks were free. I think 250 drives were 
>> around
>> $200 at the time, so let's say the system price was $1200.
>>
>> I don't think there's a Sun server that takes 4+ drives anywhere 
>> near
>> $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
>> more drives ups the cost further. That means the afternoon I spent
>> setting my server up was worth $3000. I should tell my boss that.
>>
>> A more reasonable comparison would be the Ultra 24. A system with
>> 4x250 drives is $1650. I could build a 4 TB system today for *less*
>> than my 1TB system of 2 years ago, so let's use 3x750 + 1x250 
>> drives.
>> (That's all the store will let me) and the price jumps to $2641.
>>
>> Assume that I buy the cheapest x64 system (the X2100 M2 at $1228) 
>> and
>> add a drive tray because I want 4 drives ... well I can't. The
>> cheapest drive tray is $7465.
>>
>> I have trouble justifying Sun hardware for many bus

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
Today?  Low-power SSDs are probably less reliable than low-power hard
drives, although they're too new to really know for certain.  Given
the number of problems that vendors have had getting acceptable write
speeds, I'd be really amazed if they've done any real work on
long-term reliability yet.  Going forward, SSDs will almost certainly
be more reliable, as long as you have something SMART-ish watching the
number of worn-out SSD cells and recommending preemptive replacement
of worn-out drives every few years.  That should be a slow,
predictable process, unlike most HD failures.


Scott

On Thu, Jan 8, 2009 at 2:30 PM, JZ  wrote:
> I was think about Apple's new SSD drive option on laptops...
>
> is that safer than Apple's HD or less safe? [maybe Orvar can help me on
> this]
>
> the price is a bit hefty for me to just order for experiment...
> Thanks!
> z at home
>
>
> - Original Message - From: "Toby Thain" 
> To: "JZ" 
> Cc: "Scott Laird" ; "Brandon High" ;
> ; "Peter Korn" 
> Sent: Thursday, January 08, 2009 5:25 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>>
>> On 7-Jan-09, at 9:43 PM, JZ wrote:
>>
>>> ok, Scott, that sounded sincere. I am not going to do the pic thing  on
>>> you.
>>>
>>> But do I have to spell this out to you -- somethings are invented  not
>>> for
>>> home use?
>>>
>>> Cindy, would you want to do ZFS at home,
>>
>> Why would you disrespect your personal data? ZFS is perfect for home  use,
>> for reasons that have been discussed on this list and elsewhere.
>>
>> Apple also recognises this, which is why ZFS is in OS X 10.5 and will
>> presumably become the default boot filesystem.
>>
>> Sorry to wander a little offtopic, but IMHO - Apple needs to  acknowledge,
>> and tell their customers, that hard drives are  unreliable consumables.
>>
>> I am desperately looking forward to the day when they recognise the  need
>> to ship all their systems with:
>> 1) mirrored storage out of the box;
>> 2) easy user-swappable drives;
>> 3) foolproof fault notification and rectification.
>>
>> There is no reason why an Apple customer should not have this level  of
>> protection for her photo and video library, Great American Novel,  or
>> whatever. Time Machine is a good first step (though it doesn't  often work
>> smoothly for me with a LaCie external FW drive).
>>
>> These are the neglected pieces, IMHO, of their touted Digital Lifestyle.
>>
>> --Toby
>>
>>
>>> or just having some wine and music?
>>>
>>> Can we focus on commercial usage?
>>> please!
>>>
>>>
>>>
>>> - Original Message -
>>> From: "Scott Laird" 
>>> To: "Brandon High" 
>>> Cc: ; "Peter Korn" 
>>> Sent: Wednesday, January 07, 2009 9:28 PM
>>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>>
>>>
 On Wed, Jan 7, 2009 at 4:53 PM, Brandon High   wrote:
>
> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
> wrote:
>>
>> How much is your time worth?
>
> Quite a bit.
>
>> Consider the engineering effort going into every Sun Server.
>> Any system from Sun is more than sufficient for a home server.
>> You want more disks, then buy one with more slots.  Done.
>
> A few years ago, I put together the NAS box currently in use at home
> for $300 for 1TB of space. Mind you, I recycled the RAM from another
> box and the four 250GB disks were free. I think 250 drives were  around
> $200 at the time, so let's say the system price was $1200.
>
> I don't think there's a Sun server that takes 4+ drives anywhere  near
> $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
> more drives ups the cost further. That means the afternoon I spent
> setting my server up was worth $3000. I should tell my boss that.
>
> A more reasonable comparison would be the Ultra 24. A system with
> 4x250 drives is $1650. I could build a 4 TB system today for *less*
> than my 1TB system of 2 years ago, so let's use 3x750 + 1x250  drives.
> (That's all the store will let me) and the price jumps to $2641.
>
> Assume that I buy the cheapest x64 system (the X2100 M2 at $1228)  and
> add a drive tray because I want 4 drives ... well I can't. The
> cheapest drive tray is $7465.
>
> I have trouble justifying Sun hardware for many business  applications
> that don't require SPARC, let alone for the home. For custom systems
> that most tinkerers would want at home, a shop like Silicon  Mechanics
> (http://www.siliconmechanics.com/) (or even Dell or HP) is almost
> always a better deal on hardware.

 I agree completely.  About a year ago I spent around $800 (w/o  drives)
 on a NAS box for home.  I used a 4x PCI-X single-Xeon Supermicro  MB, a
 giant case, and a single 8-port Supermicro SATA card.  Then I dropped
 a pair of 80 GB boot drives and 9x 500 GB drives into it.  With  raidz2
 plus a spare, that gives me around 2.7T of usable space.  When I
 fil

Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSScompatible?

2009-01-08 Thread Stephen Yum
Okay, so is there an implementation of HyperV or VSS or whatever in the 
Solaris+ZFS environment?

Also, is there something like this if I were to access ZFS-based storage from a 
Linux client, for example?

Since most of my clients will be running some version of Windows while 
accessing a ZFS backend array through a Windows 2003 or Windows 2008 server, 
just a solution that can mimic HyperV or VSS would be great.

Thanks so much in advance

S



- Original Message 
From: JZ 
To: Jason J. W. Williams ; Mr Stephen Yum 

Cc: zfs-discuss@opensolaris.org; Tim 
Sent: Wednesday, January 7, 2009 4:45:05 PM
Subject: Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSScompatible?

OMG, no safety feature?!
Sorry, even on ZFS turf,
if you use HyperV, and the HyperV VSS Writer, it could be a lot safer
-- if you don't know how to do a block-level Super thing...

best,
zStorageAnalyst

- Original Message - From: "Jason J. W. Williams" 

To: "Mr Stephen Yum" 
Cc: ; 
Sent: Wednesday, January 07, 2009 7:30 PM
Subject: Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSScompatible?


> Since iSCSI is block-level, I don't think the iSCSI intelligence at
> the file level you're asking for is feasible. VSS is used at the
> file-system level on either NTFS partitions or over CIFS.
> 
> -J
> 
> On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum  wrote:
>> Hi all,
>> 
>> If I want to make a snapshot of an iscsi volume while there's a transfer 
>> going on, is there a way to detect this and either 1) not include the file 
>> being transferred, or 2) wait until the transfer is finished before making 
>> the snapshot?
>> 
>> If I understand correctly, this is what Microsoft's VSS is supposed to do. 
>> Am I right?
>> 
>> Right now, when there is a transfer going on while making the snapshot, I 
>> always end up with a corrupt file (understandably so, since the file 
>> transfer is unfinished).
>> 
>> S
>> 
>> 
>> 
>> 
>> 
>> ___
>> storage-discuss mailing list
>> storage-disc...@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/storage-discuss
>> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 


  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
I was thinking about Apple's new SSD drive option on laptops...

is that safer than Apple's HD or less safe? [maybe Orvar can help me on 
this]

the price is a bit hefty for me to just order for experiment...
Thanks!
z at home


- Original Message - 
From: "Toby Thain" 
To: "JZ" 
Cc: "Scott Laird" ; "Brandon High" ; 
; "Peter Korn" 
Sent: Thursday, January 08, 2009 5:25 PM
Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?


>
> On 7-Jan-09, at 9:43 PM, JZ wrote:
>
>> ok, Scott, that sounded sincere. I am not going to do the pic thing  on 
>> you.
>>
>> But do I have to spell this out to you -- somethings are invented  not 
>> for
>> home use?
>>
>> Cindy, would you want to do ZFS at home,
>
> Why would you disrespect your personal data? ZFS is perfect for home  use, 
> for reasons that have been discussed on this list and elsewhere.
>
> Apple also recognises this, which is why ZFS is in OS X 10.5 and will 
> presumably become the default boot filesystem.
>
> Sorry to wander a little offtopic, but IMHO - Apple needs to  acknowledge, 
> and tell their customers, that hard drives are  unreliable consumables.
>
> I am desperately looking forward to the day when they recognise the  need 
> to ship all their systems with:
> 1) mirrored storage out of the box;
> 2) easy user-swappable drives;
> 3) foolproof fault notification and rectification.
>
> There is no reason why an Apple customer should not have this level  of 
> protection for her photo and video library, Great American Novel,  or 
> whatever. Time Machine is a good first step (though it doesn't  often work 
> smoothly for me with a LaCie external FW drive).
>
> These are the neglected pieces, IMHO, of their touted Digital Lifestyle.
>
> --Toby
>
>
>> or just having some wine and music?
>>
>> Can we focus on commercial usage?
>> please!
>>
>>
>>
>> - Original Message -
>> From: "Scott Laird" 
>> To: "Brandon High" 
>> Cc: ; "Peter Korn" 
>> Sent: Wednesday, January 07, 2009 9:28 PM
>> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>>
>>
>>> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High   wrote:
 On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
 wrote:
> How much is your time worth?

 Quite a bit.

> Consider the engineering effort going into every Sun Server.
> Any system from Sun is more than sufficient for a home server.
> You want more disks, then buy one with more slots.  Done.

 A few years ago, I put together the NAS box currently in use at home
 for $300 for 1TB of space. Mind you, I recycled the RAM from another
 box and the four 250GB disks were free. I think 250 drives were  around
 $200 at the time, so let's say the system price was $1200.

 I don't think there's a Sun server that takes 4+ drives anywhere  near
 $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
 more drives ups the cost further. That means the afternoon I spent
 setting my server up was worth $3000. I should tell my boss that.

 A more reasonable comparison would be the Ultra 24. A system with
 4x250 drives is $1650. I could build a 4 TB system today for *less*
 than my 1TB system of 2 years ago, so let's use 3x750 + 1x250  drives.
 (That's all the store will let me) and the price jumps to $2641.

 Assume that I buy the cheapest x64 system (the X2100 M2 at $1228)  and
 add a drive tray because I want 4 drives ... well I can't. The
 cheapest drive tray is $7465.

 I have trouble justifying Sun hardware for many business  applications
 that don't require SPARC, let alone for the home. For custom systems
 that most tinkerers would want at home, a shop like Silicon  Mechanics
 (http://www.siliconmechanics.com/) (or even Dell or HP) is almost
 always a better deal on hardware.
>>>
>>> I agree completely.  About a year ago I spent around $800 (w/o  drives)
>>> on a NAS box for home.  I used a 4x PCI-X single-Xeon Supermicro  MB, a
>>> giant case, and a single 8-port Supermicro SATA card.  Then I dropped
>>> a pair of 80 GB boot drives and 9x 500 GB drives into it.  With  raidz2
>>> plus a spare, that gives me around 2.7T of usable space.  When I
>>> filled that up a few weeks back, I bought 2 more 8-port SATA cards, 2
>>> Supermicro CSE-M35T-1B 5-drive hot-swap bays, and 9 1.5T drives, all
>>> for under $2k.  That's around $0.25/GB for the expansion and $0.36
>>> overall, including last year's expensive 500G drives.
>>>
>>> The closest that I can come to this config using current Sun hardware
>>> is probably the X4540 w/ 500G drives; that's $35k for 14T of usable
>>> disk (5x 8-way raidz2 + 1 spare + 2 boot disks), $2.48/GB.  It's much
>>> nicer hardware but I don't care.  I'd also need an electrician  (for 2x
>>> 240V circuits), a dedicated server room in my house (for the fan
>>> noise), and probably a divorce lawyer :-).
>>>
>>> Sun's hardware really isn't price-competitive on the low end,
>>> espe

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Toby Thain

On 7-Jan-09, at 9:43 PM, JZ wrote:

> ok, Scott, that sounded sincere. I am not going to do the pic thing  
> on you.
>
> But do I have to spell this out to you -- somethings are invented  
> not for
> home use?
>
> Cindy, would you want to do ZFS at home,

Why would you disrespect your personal data? ZFS is perfect for home  
use, for reasons that have been discussed on this list and elsewhere.

Apple also recognises this, which is why ZFS is in OS X 10.5 and will  
presumably become the default boot filesystem.

Sorry to wander a little offtopic, but IMHO - Apple needs to  
acknowledge, and tell their customers, that hard drives are  
unreliable consumables.

I am desperately looking forward to the day when they recognise the  
need to ship all their systems with:
1) mirrored storage out of the box;
2) easy user-swappable drives;
3) foolproof fault notification and rectification.

There is no reason why an Apple customer should not have this level  
of protection for her photo and video library, Great American Novel,  
or whatever. Time Machine is a good first step (though it doesn't  
often work smoothly for me with a LaCie external FW drive).

These are the neglected pieces, IMHO, of their touted Digital Lifestyle.

--Toby


> or just having some wine and music?
>
> Can we focus on commercial usage?
> please!
>
>
>
> - Original Message -
> From: "Scott Laird" 
> To: "Brandon High" 
> Cc: ; "Peter Korn" 
> Sent: Wednesday, January 07, 2009 9:28 PM
> Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?
>
>
>> On Wed, Jan 7, 2009 at 4:53 PM, Brandon High   
>> wrote:
>>> On Wed, Jan 7, 2009 at 7:45 AM, Joel Buckley 
>>> wrote:
 How much is your time worth?
>>>
>>> Quite a bit.
>>>
 Consider the engineering effort going into every Sun Server.
 Any system from Sun is more than sufficient for a home server.
 You want more disks, then buy one with more slots.  Done.
>>>
>>> A few years ago, I put together the NAS box currently in use at home
>>> for $300 for 1TB of space. Mind you, I recycled the RAM from another
>>> box and the four 250GB disks were free. I think 250 drives were  
>>> around
>>> $200 at the time, so let's say the system price was $1200.
>>>
>>> I don't think there's a Sun server that takes 4+ drives anywhere  
>>> near
>>> $1200. The X4200 uses 2.5" drives, but costs $4255. Actually adding
>>> more drives ups the cost further. That means the afternoon I spent
>>> setting my server up was worth $3000. I should tell my boss that.
>>>
>>> A more reasonable comparison would be the Ultra 24. A system with
>>> 4x250 drives is $1650. I could build a 4 TB system today for *less*
>>> than my 1TB system of 2 years ago, so let's use 3x750 + 1x250  
>>> drives.
>>> (That's all the store will let me) and the price jumps to $2641.
>>>
>>> Assume that I buy the cheapest x64 system (the X2100 M2 at $1228)  
>>> and
>>> add a drive tray because I want 4 drives ... well I can't. The
>>> cheapest drive tray is $7465.
>>>
>>> I have trouble justifying Sun hardware for many business  
>>> applications
>>> that don't require SPARC, let alone for the home. For custom systems
>>> that most tinkerers would want at home, a shop like Silicon  
>>> Mechanics
>>> (http://www.siliconmechanics.com/) (or even Dell or HP) is almost
>>> always a better deal on hardware.
>>
>> I agree completely.  About a year ago I spent around $800 (w/o  
>> drives)
>> on a NAS box for home.  I used a 4x PCI-X single-Xeon Supermicro  
>> MB, a
>> giant case, and a single 8-port Supermicro SATA card.  Then I dropped
>> a pair of 80 GB boot drives and 9x 500 GB drives into it.  With  
>> raidz2
>> plus a spare, that gives me around 2.7T of usable space.  When I
>> filled that up a few weeks back, I bought 2 more 8-port SATA cards, 2
>> Supermicro CSE-M35T-1B 5-drive hot-swap bays, and 9 1.5T drives, all
>> for under $2k.  That's around $0.25/GB for the expansion and $0.36
>> overall, including last year's expensive 500G drives.
>>
>> The closest that I can come to this config using current Sun hardware
>> is probably the X4540 w/ 500G drives; that's $35k for 14T of usable
>> disk (5x 8-way raidz2 + 1 spare + 2 boot disks), $2.48/GB.  It's much
>> nicer hardware but I don't care.  I'd also need an electrician  
>> (for 2x
>> 240V circuits), a dedicated server room in my house (for the fan
>> noise), and probably a divorce lawyer :-).
>>
>> Sun's hardware really isn't price-competitive on the low end,
>> especially when commercial support offerings have no value to you.
>> There's nothing really wrong with this, as long as you understand  
>> that
>> Sun's really only going to be selling into shops where Sun's support
>> and extra engineering makes financial sense.  In Sun's defense, this
>> is kind of an odd system, specially built for unusual requirements.
>>
>> My NAS box works well enough for me.  It's probably eaten ~20  
>> hours of
>> my time over the past year, partially because my Solaris is really
>> rus

Re: [zfs-discuss] Intel SS4200-E?

2009-01-08 Thread Nicholas Lee
I've got mine sitting on the floor at the moment. Need to find the time to
try out the install.
Do you know why it would not work with the DOM? I'm planning to use a spare
4GB DOM and keep the EMC one for backup if nothing works.

Did you use a video card to install?

On Fri, Jan 9, 2009 at 10:46 AM, Guido Glaus wrote:

> I've done it but could not make it to run from the the dom, had to use a
> usb stick :-)
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel SS4200-E?

2009-01-08 Thread Guido Glaus
I've done it but could not make it run from the DOM; I had to use a USB 
stick :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread JZ
But, Tim, you are a super IT guy, and your data is not baby...

I just have so many copies of my home baby data, since storage is so so cheap 
today compared to the wine... 
[and a baby JAVA thing to keep them in sync...]

(BTW, I am not a wine guy, I only do Remy+++)
;-)

best,
z
  - Original Message - 
  From: Tim 
  To: JZ 
  Cc: Scott Laird ; Brandon High ; zfs-discuss@opensolaris.org ; Peter Korn 
  Sent: Thursday, January 08, 2009 4:35 PM
  Subject: Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?





  On Wed, Jan 7, 2009 at 8:43 PM, JZ  wrote:

ok, Scott, that sounded sincere. I am not going to do the pic thing on you.

But do I have to spell this out to you -- somethings are invented not for
home use?

Cindy, would you want to do ZFS at home, or just having some wine and music?

Can we focus on commercial usage?
please!





  I dunno about you, but I need somewhere to store that music so I can stream 
it throughout the house while I'm drinking that wine ;)  A single disk windows 
box isn't really my cup-o-tea.  Plus, I'm a geek, my vmware farm needs it's nfs 
mounts on some solid, high performing gear.   

  --Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Tim
On Wed, Jan 7, 2009 at 8:43 PM, JZ  wrote:

> ok, Scott, that sounded sincere. I am not going to do the pic thing on you.
>
> But do I have to spell this out to you -- somethings are invented not for
> home use?
>
> Cindy, would you want to do ZFS at home, or just having some wine and
> music?
>
> Can we focus on commercial usage?
> please!
>
>
>
I dunno about you, but I need somewhere to store that music so I can stream
it throughout the house while I'm drinking that wine ;)  A single disk
windows box isn't really my cup-o-tea.  Plus, I'm a geek, my vmware farm
needs its nfs mounts on some solid, high-performing gear.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread Richard Elling
David W. Smith wrote:
> On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote:
>   
>> David Smith wrote:
>> 
>>> I was wondering if anyone has any experience with how long a "zfs destroy" 
>>> of about 40 TB should take?  So far, it has been about an hour...  Is there 
>>> any good way to tell if it is working or if it is hung?
>>>
>>> Doing a "zfs list" just hangs.  If you do a more specific zfs list, then it 
>>> is okay... zfs list pool/another-fs
>>>
>>> Thanks,
>>>
>>> David
>>>   
>>>   
>> I can't voice to something like 40 TB, but I can share a related story 
>> (on Solaris 10u5).
>>
>> A couple days ago, I tried to zfs destroy a clone of a snapshot of a 191 
>> GB zvol. It didn't complete right away, but the machine appeared to 
>> continue working on it, so I decided to let it go overnight (it was near 
>> the end of the day). Well, by about 4:00 am the next day, the machine 
>> had completely ran out of memory and hung. When I came in, I forced a 
>> sync from prom to get it back up. While it was booting, it stopped 
>> during (I think) the zfs initialization part, where it ran the disks for 
>> about 10 minutes before continuing. When the machine was back up, 
>> everything appeared to be ok. The clone was still there, although usage 
>> had changed to zero.
>>
>> I ended up patching the machine up to the latest u6 kernel + zfs patch 
>> (13-01 + 139579-01). After that, the zfs destroy went off without a 
>> hitch.
>>
>> I turned up bug 6606810 'zfs destroy  is taking hours to 
>> complete' which is supposed to be fixed by 139579-01. I don't know if 
>> that was the cause of my issue or not. I've got a 2GB kernel dump if 
>> anyone is interested in looking.
>>
>> -Brian
>>
>> 
>
> Brian,
>
> Thanks for the reply.  I'll take a look at the 139579-01 patch.  Perhaps
> as well a Sun engineer will comment about this issue being fixed with
> patches, etc.
>   

My pleasure :-).  6606810 was closed as a dup of 6573681 which was
fixed in NV 94 and patch 139579-01.
http://bugs.opensolaris.org/view_bug.do?bug_id=6606810
http://bugs.opensolaris.org/view_bug.do?bug_id=6573681
http://sunsolve.sun.com/search/document.do?assetkey=1-21-139579-01-1
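
(A quick way to check whether a given box already has that patch, assuming
the stock Solaris 10 patch tools:

   showrev -p | grep 139579

will show the revision installed, if any.)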
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Will Murnane
On Thu, Jan 8, 2009 at 14:38, Richard Morris - Sun Microsystems -
Burlington United States  wrote:
> As you point out, the -c option is user friendly while the -depth (or
> maybe -d) option is more general.  There have been several requests for
> the -c option.  Would anyone prefer the -depth option?  In what cases
> would this be used?
>
> I was thinking when I logged the bug, that -depth (or -d) would be
> useful in cases where you've got a "jurassic-like" filesystem layout,
> and are interested in seeing just one or two levels.
What about an optional argument to -c specifying the depth:
zfs list tank
  tank
zfs list -c tank
  tank
  tank/home
  tank/foo
zfs list -c 2 tank
  tank
  tank/home
  tank/home/Ireland
  tank/home/UK
  tank/home/France
  tank/home/Germany
  tank/foo
  tank/foo@now
  tank/foo/bar
That leaves -d free, at the expense of ugliness in the argument
parsing.  I would also suggest that 2 is a more logical number than 3
for the last set listed if -c is given an argument, since I would
think of -c as "dataset and children", and -c 2 as "dataset and
children squared": grandchildren, as compared to "datasets of depth
3".

I do think having the more general form available is a good thing to have.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread JZ

[just for the beloved Orvar]

Ok, rule of thumb to save you some open time -- anything with "z", or "j", 
would probably be safe enough for your baby data.

And yeah, I manage my own lunch hours BTW.
:-)

best,
z

- Original Message - 
From: "Orvar Korvar" 

To: 
Sent: Thursday, January 08, 2009 10:01 AM
Subject: Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?



Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Richard Morris - Sun Microsystems - Burlington United States

On 01/08/09 06:39, Tim Foster wrote:

hi Rich,

On Wed, 2009-01-07 at 10:51 -0500, Richard Morris - Sun Microsystems -
Burlington United States wrote:
As you point out, the -c option is user friendly while the -depth (or 
maybe -d) option is more general.  There have been several requests for 
the -c option.  Would anyone prefer the -depth option?  In what cases 
would this be used?


I was thinking when I logged the bug, that -depth (or -d) would be 
useful in cases where you've got a "jurassic-like" filesystem layout,

and are interested in seeing just one or two levels.

zfs list -d 3 tank

  tank/home
  tank/home/Ireland
  tank/home/UK
  tank/home/France
  tank/home/Germany
  tank/foo
  tank/foo/bar

allowing you to look at just the level of hierarchy that you're
interested in (eg. "How much disk space are users from different
countries taking up taking up?"), without needing to grep, or hardcode
a list of datasets somewhere.

More importantly, with hopefully faster performance than showing
all children of tank/home just to get the size of the immediate children.


Hi Tim,

Both the -c and -d options would eliminate the need to grep or hardcode
a list of datasets.  And they would both improve zfs list performance by
eliminating unnecessary recursion.  So adding one of these options probably
makes sense.  But which one?  Is the added complexity of the -d option over
the -c option justified?  In the above example, wouldn't the question "how
much disk space per country" also be answered by zfs list -c /tank/home?

Perhaps a layout like this might be a better argument for the -d option?

   tank/america/Canada
   tank/america/Mexico
   tank/america/USA
   tank/europe/France
   tank/europe/Germany
   tank/europe/Ireland

But how often would the -d option be provided a value other than 1 or 2?
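
To make the comparison concrete, the two proposals would be driven roughly
like this against that layout (both options are of course still hypothetical,
and the exact depth counting is part of what is being discussed here):

   zfs list -c tank/america    # proposed: one continent plus its children
   zfs list -d 1 tank          # proposed: one level below tank, both continents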

As a point of reference, the ls command also has this issue and does not
provide an option to limit the depth of recursion.  And ls has no shortage
of options (aAbcCdeEfFghHilLmnopqrRstuvVx1@)!  Of course, this does not
necessarily mean that the -d option would not be useful for zfs list.

Other opinions?

-- Rich

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Richard Morris - Sun Microsystems - Burlington United States
On 01/08/09 06:28, Mike Futerko wrote:
> I'd have a few more proposals how to improve zfs list if they don't
> contravene the concept of zfs list command.
>
> Currently zfs list returns error "operation not applicable to datasets
> of this type" if you try to list for ex.: "zfs list -t snapshot
> file/system" returns above error while it could return what you actually
> asked - the list of all snapshots of "file/system". 

When a specific dataset is provided, zfs list does not return info about
child datasets or snapshots unless the -r option is specified.  So to get
the list of all snapshots of file/system:

zfs list -r -t snapshot file/system

In this particular case, it might be possible for zfs list to infer that
the -r option was intended.

> Similar case is if
> you try "zfs list file/system@snapshot" - can zfs be more smart to
> return the snapshot instead of error message if dataset name contains
> "@" in its name?

zfs list already handles this case correctly.  If you are getting an error
message then you are probably hitting CR 6758338 which is fixed in SNV_106.
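
For instance, with that fix in place, naming the snapshot directly does what
you would expect:

   zfs list file/system@snapshot    # dataset name taken from the example above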

> Other thing is zfs list performance... even if you want to get the list
> of snapshots with no other properties "zfs list -oname -t snapshot -r
> file/system" it still takes quite long time if there are hundreds of
> snapshots, while "ls /file/system/.zfs/snapshot" returns immediately.
> Can this also be improved somehow please?

The fix for CR 6755389 (also in SNV_106) should significantly improve the
performance of zfs list when there are hundreds or thousands (or hundreds
of thousands) of datasets and/or snapshots.

-- Rich
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs cp hangs when the mirrors are removed ..

2009-01-08 Thread Brian Leonard
Karthik, did you ever file a bug or this? I'm experiencing the same hang and 
wondering how to recover.

/Brian
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread Will Murnane
On Thu, Jan 8, 2009 at 10:01, Orvar Korvar
 wrote:
> Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
Raid-2 is much less used, for one, uses many more disks for parity,
for two, and is much slower in any application I can think of.
Suppose you have 11 100G disks.  Raid-2 would use 7 for data and 4 for
parity, total capacity 700G, and would be able to recover from any
single-bit flip per data row (e.g., if any disk were lost or
corrupted (!), it could recover its contents).  This is not done using
checksums, but rather ECC.  One could implement checksums on top of
this, I suppose.  A major downside of raid-2 is that "efficient" use
of space only happens when the raid groups are of size 2**k-1 for some
integer k; this is because the Hamming code includes parity bits at
certain intervals (see [1]).

Raidz2, on the other hand, would take your 11 100G disks and use 9 for
data and 2 for parity, and put checksums on blocks.  This means that
recovering any two corrupt or missing disks (as opposed to one with
raid-2) is possible; with any two pieces of a block potentially
damaged, one can calculate all the possibilities for what the block
could have been before damage and accept the one whose calculated
checksum matches the stored one.  Thus, raidz2 is safer and more
storage-efficient than raid-2.

This is all mostly academic, as nobody uses raid-2.  It's only as safe
as raidz (can repair one error, or detect two) and space efficiency
for normal-sized arrays is fairly atrocious.  Use raidz{,2} and forget
about it.
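
(For completeness: building the raidz2 layout above is a single command; the
device names here are just placeholders for your 11 disks:

   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
       c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0

and ZFS takes care of the checksumming and reconstruction described above.)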

Will

[1]: http://en.wikipedia.org/wiki/Hamming_code#General_algorithm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David W. Smith

On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote:
> David Smith wrote:
> > I was wondering if anyone has any experience with how long a "zfs destroy" 
> > of about 40 TB should take?  So far, it has been about an hour...  Is there 
> > any good way to tell if it is working or if it is hung?
> >
> > Doing a "zfs list" just hangs.  If you do a more specific zfs list, then it 
> > is okay... zfs list pool/another-fs
> >
> > Thanks,
> >
> > David
> >   
> 
> I can't voice to something like 40 TB, but I can share a related story 
> (on Solaris 10u5).
> 
> A couple days ago, I tried to zfs destroy a clone of a snapshot of a 191 
> GB zvol. It didn't complete right away, but the machine appeared to 
> continue working on it, so I decided to let it go overnight (it was near 
> the end of the day). Well, by about 4:00 am the next day, the machine 
> had completely ran out of memory and hung. When I came in, I forced a 
> sync from prom to get it back up. While it was booting, it stopped 
> during (I think) the zfs initialization part, where it ran the disks for 
> about 10 minutes before continuing. When the machine was back up, 
> everything appeared to be ok. The clone was still there, although usage 
> had changed to zero.
> 
> I ended up patching the machine up to the latest u6 kernel + zfs patch 
> (13-01 + 139579-01). After that, the zfs destroy went off without a 
> hitch.
> 
> I turned up bug 6606810 'zfs destroy  is taking hours to 
> complete' which is supposed to be fixed by 139579-01. I don't know if 
> that was the cause of my issue or not. I've got a 2GB kernel dump if 
> anyone is interested in looking.
> 
> -Brian
> 

Brian,

Thanks for the reply.  I'll take a look at the 139579-01 patch.  Perhaps
as well a Sun engineer will comment about this issue being fixed with
patches, etc.

David


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread Brian H. Nelson
David Smith wrote:
> I was wondering if anyone has any experience with how long a "zfs destroy" of 
> about 40 TB should take?  So far, it has been about an hour...  Is there any 
> good way to tell if it is working or if it is hung?
>
> Doing a "zfs list" just hangs.  If you do a more specific zfs list, then it 
> is okay... zfs list pool/another-fs
>
> Thanks,
>
> David
>   

I can't speak to something like 40 TB, but I can share a related story 
(on Solaris 10u5).

A couple days ago, I tried to zfs destroy a clone of a snapshot of a 191 
GB zvol. It didn't complete right away, but the machine appeared to 
continue working on it, so I decided to let it go overnight (it was near 
the end of the day). Well, by about 4:00 am the next day, the machine 
had completely run out of memory and hung. When I came in, I forced a 
sync from prom to get it back up. While it was booting, it stopped 
during (I think) the zfs initialization part, where it ran the disks for 
about 10 minutes before continuing. When the machine was back up, 
everything appeared to be ok. The clone was still there, although usage 
had changed to zero.

I ended up patching the machine up to the latest u6 kernel + zfs patch 
(13-01 + 139579-01). After that, the zfs destroy went off without a 
hitch.

I turned up bug 6606810 'zfs destroy  is taking hours to 
complete' which is supposed to be fixed by 139579-01. I don't know if 
that was the cause of my issue or not. I've got a 2GB kernel dump if 
anyone is interested in looking.

-Brian

-- 
---
Brian H. Nelson Youngstown State University
System Administrator   Media and Academic Computing
  bnelson[at]cis.ysu.edu
---

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Bob Friesenhahn
On Thu, 8 Jan 2009, Carsten Aulbert wrote:
>>
>> My experience with iozone is that it refuses to run on an NFS client of
>> a Solaris server using ZFS since it performs a test and then refuses to
>> work since it says that the filesystem is not implemented correctly.
>> Commenting a line of code in iozone will get over this hurdle.  This
>> seems to be a religious issue with the iozone maintainer.
>
> Interesting, I've been running this on a Linux client accessing a ZFS
> file system from one of our Thumpers without any source modifications
> and problems.

I think that the problem only occurs when the client is also Solaris.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David Smith
A few more details:

The system is a Sun x4600 running Solaris 10 Update 4.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Odd network performance with ZFS/CIFS

2009-01-08 Thread gnomad
I have just built an opensolaris box (2008.11) as a small fileserver (6x 1TB 
drives as RAIDZ2, kernel CIFS) for home media use and I am noticing an odd 
behavior copying files to the box.

My knowledge of monitoring/analysis tools under Solaris is very limited, and so 
far I have just been using the System Monitor that pops up with ctrl-alt-del, 
and the numbers I am reporting come from that.

When copying files (a small number of large files from a Mac to the 
Solaris/CIFS server) I initially see network usage of 40-45 MB/s which is 
pretty much what I would expect from single spindle disks over GigE through a 
SoHo switch that does not support jumbo frames.  However, I only see this 
performance for perhaps 10 seconds, then it drops to 25-30 MB/s for about 15-20 
seconds, and then it drops again to 17-20 MB/s where it remains for the 
duration of file transfer.

This is not an occasional issue, it happens this way each and every time.  At 
each of the three levels, the speeds are consistent.  There is a brief period 
of inactivity (0.5 s) when the speeds are reduced, leading me to believe that 
*something* is throttling speeds back.

Has anyone else seen this behavior?  Any idea where it might be coming from, 
and what I could do to keep a sustained 40-45 MB/s transfer rate?

Any suggestions as to what tools I might use to help diagnose this would be 
appreciated.  At the moment, I am in the process of putting an old Windows box 
together to see if I can replicate the problem and eliminate the possibility of 
a cause outside of the Solaris box.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-08 Thread Steve Goldthorpe
Here's what I did:
* had a t1000 with a zpool under /dev/dsk/c0t0d0s7 on solaris 10u4
* re-installed with solaris 10u6 (disk layout unchanged)
* imported zpool with zpool import -f (I'm forever forgetting to export them 
first) - this was ok
* re-installed with solaris 10u6 and more up-to-date patches (again forgetting 
to export it)

When I do zpool import i get the following:
# zpool import 
  pool: zpool
id: 17419375665629462002
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

zpool   FAULTED  corrupted data
  c0t0d0s7  ONLINE

So I thought I'd done something wrong, but I checked the partition layout and 
it hasn't changed.  However, after a bit of poking about, I've found some 
weird stuff - what zdb -l shows and what's actually on the disk don't 
seem to tally: I can't find that transaction ID from zdb, and there seems to be 
a mixture of version 4 and version 10 uberblocks on disk (all with 
bigger transaction IDs than zdb is showing).

Am I missing something?

-Steve

# zdb -l /dev/dsk/c0t0d0s7

LABEL 0

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 1

version=4
name='zpool'
state=0
txg=1809157
pool_guid=17419375665629462002
top_guid=12174008987990077602
guid=12174008987990077602
vdev_tree
type='disk'
id=0
guid=12174008987990077602
path='/dev/dsk/c0t0d0s7'
devid='id1,s...@n5000cca321ca2647/h'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=129904410624
DTL=24

LABEL 2


LABEL 3


-- (sample output from a little script I knocked up)

Uberblock Offset: 002 (131072)
Uber version: 4
Transaction group: 1831936
Timestamp: 2008-11-20:11:14:49
GUID_SUM: 9ab0d28ccc7d2e94

Uberblock Offset: 0020400 (132096)
Uber version: 4
Transaction group: 1831937
Timestamp: 2008-11-20:11:14:54
GUID_SUM: 9ab0d28ccc7d2e94
...
Uber version: 10
Transaction group: 114560
Timestamp: 2009-01-07:09:59:11
GUID_SUM: 9f8d9ef301489223

Uberblock Offset: 0e18400 (14779392)
Uber version: 10
Transaction group: 114561
Timestamp: 2009-01-07:09:59:41
GUID_SUM: 9f8d9ef301489223
...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs destroy is taking a long time...

2009-01-08 Thread David Smith
I was wondering if anyone has any experience with how long a "zfs destroy" of 
about 40 TB should take?  So far, it has been about an hour...  Is there any 
good way to tell if it is working or if it is hung?

Doing a "zfs list" just hangs.  If you do a more specific zfs list, then it is 
okay... zfs list pool/another-fs

Thanks,

David
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread Scott Laird
RAID 2 is something weird that no one uses, and really only exists on
paper as part of Berkeley's original RAID paper, IIRC.  raidz2 is more
or less RAID 6, just like raidz is more or less RAID 5.  With raidz2,
you have to lose 3 drives per vdev before data loss occurs.


Scott

On Thu, Jan 8, 2009 at 7:01 AM, Orvar Korvar
 wrote:
> Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Carsten Aulbert
Hi Bob.

Bob Friesenhahn wrote:
>> Here is the current example - can anyone with deeper knowledge tell me
>> if these are reasonable values to start with?
> 
> Everything depends on what you are planning do with your NFS access. For
> example, the default blocksize for zfs is 128K.  My example tests
> performance when doing I/O with small 8K blocks (like a database), which
> will severely penalize zfs configured for 128K blocks.
> [...]

My plans don't count here; I need to optimize for what the users want, and
they don't have a clue what they will do 6 months from now, so I
guess all detailed planning will fail anyway and I'm just searching for
the one size that fits almost all...

> 
> My experience with iozone is that it refuses to run on an NFS client of
> a Solaris server using ZFS since it performs a test and then refuses to
> work since it says that the filesystem is not implemented correctly. 
> Commenting a line of code in iozone will get over this hurdle.  This
> seems to be a religious issue with the iozone maintainer.

Interesting, I've been running this on a Linux client accessing a ZFS
file system from one of our Thumpers without any source modifications
and problems.

Cheers

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Bob Friesenhahn
On Thu, 8 Jan 2009, Carsten Aulbert wrote:

> for the people higher up the ladder), but someone gave a hint to use
> multiple threads for testing the ops/s and here I'm a bit at a loss how
> to understand the results and if the values are reasonable or not.

I will admit that some research is required to understand what is 
meant by "Parent" and "Children".  It seems that "Parent" takes an 
extra hit by communicating with the "Children".

> Here is the current example - can anyone with deeper knowledge tell me
> if these are reasonable values to start with?

Everything depends on what you are planning to do with your NFS access. 
For example, the default blocksize for zfs is 128K.  My example tests 
performance when doing I/O with small 8K blocks (like a database), 
which will severely penalize zfs configured for 128K blocks.  While 
NFS writes are synchronous, most NFS I/O is sequential reads and 
writes of bulk data without much random access.  This means that 
typical NFS I/O will produce larger reads and writes which work ok 
with ZFS's default configuration.  The main penalty for NFS comes 
when doing small operations like creating/deleting files, or 
changing file attributes.
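
If you know a dataset will mostly see that kind of small random I/O, the 
usual knob is to match the recordsize to the application's block size before 
the data is written; a minimal sketch, with the dataset name invented for 
the example:

   zfs set recordsize=8k tank/nfs/db    # dataset name is just an example

Bulk sequential NFS traffic is generally fine at the 128K default.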

My experience with iozone is that it refuses to run on an NFS client 
of a Solaris server using ZFS since it performs a test and then 
refuses to work since it says that the filesystem is not implemented 
correctly.  Commenting a line of code in iozone will get over this 
hurdle.  This seems to be a religious issue with the iozone 
maintainer.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: Log device for rpool (/ root partition) not supported?

2009-01-08 Thread Lin Ling

This is bug 6727463.

On 01/07/09 13:49, Robert Bauer wrote:
> Why is it impossible to have a ZFS pool with a log device for the rpool 
> (device used for the root partition)?
> Is this a bug?
> I can't boot a ZFS partition / on a zpool which uses also a log device. Maybe 
> its not supported because then grub should support it too?
>   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Will Murnane
On Wed, Jan 7, 2009 at 17:12, Volker A. Brandt  wrote:
>> The Samsung HD103UJ drives are nice, if you're not using
>> NVidia controllers - there's a bug in either the drives or the
>> controllers that makes them drop drives fairly frequently.
>
> Do you happen to have more details about this problem?  Or some
> pointers?
We have 3 x2200m2 servers that we added pairs of these drives
(specifically, the HD753UJ variant: 750GB instead of 1TB) to.  We set
up small (40G or so, I forget; we didn't really need the space, but
buying smaller disks wasn't significantly cheaper) SVM mirrors on two
of these machines, and a small SVM mirror plus a large zpool on the
third.  Within two weeks, all three machines had dropped a disk in
some manner.  The behavior we saw goes like this: metastat reports
errors, output of 'format' changes for the dropped disk but still
shows the disk.  If the disk is moved to another machine (a different
chipset; i.e., with another controller) then it shows up fine, all
data intact, everything hunky-dory.  We didn't lose data, but we did
lose an SVM array and had to restore from backups.

We replaced the drives with 4 Maxtors and 2 Seagate ES2s.  None have
reported problems yet.  I don't know of any other solution, if you
don't want to add a controller.  It doesn't appear to be a problem
with the drives, or a problem with the chipset, but the combination of
drive+chipset causes wonkiness.  Google shows some users having
problems with this under XP, so it's probably not just a driver issue.
 This was what made me suspect the combination was a bad one, and
further testing shows that that's probably the case: the drives work
on other controllers, and other drives work on these controllers.

The drives themselves are still working fine; we moved them to a
SCSI->sata jbod with a non-nV controller and they're happy there.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-08 Thread Orvar Korvar
A question: why do you want to use HW raid together with ZFS? I thought ZFS 
performs better when it is in total control? Would the results have been 
better with no HW raid controller, and only ZFS?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSS compatible?

2009-01-08 Thread James Dean
I don't know if VSS has this capability, but essentially if it can temporarily 
quiesce a device the way a database does for "warm standby", then a snapshot 
should work. This would be a very simple Windows-side script/batch:

1) Q-Disk
2) Remote trigger snapshot
3) Un Q-Disk

I have no idea where to even begin researching VSS unfortunately...
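
Step 2 at least is easy to sketch. Assuming ssh access from the Windows box
(e.g. via PuTTY's plink) to the Solaris host, and with the host, pool and
volume names all made up for the example, it is a one-liner:

   rem host, pool and volume names below are invented
   plink admin@zfshost "zfs snapshot tank/iscsivol@nightly"

Steps 1 and 3 - getting Windows to quiesce and release the volume - are
exactly the part VSS is supposed to provide.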

 James

(Sent from my mobile)


-Original Message-
From: Tim 
Sent: Wednesday, 07 Jan 2009 23:18
To: Jason J. W. Williams 
Cc: zfs-discuss@opensolaris.org; storage-disc...@opensolaris.org
Subject: Re: [storage-discuss] [zfs-discuss] ZFS iscsi snapshot - VSS 
compatible?



On Wed, Jan 7, 2009 at 6:30 PM, Jason J. W. Williams  
wrote:
Since iSCSI is block-level, I don't think the iSCSI intelligence at
 the file level you're asking for is feasible. VSS is used at the
 file-system level on either NTFS partitions or over CIFS.

 -J
 

VSS integration with block protocols is most definitely possible.  It just 
requires *intelligent* software running on the host side.  That intelligence 
would likely need to come from Sun directly in the case of windows on raw 
hardware as I don't know of any third party apps that work universally with any 
storage system.

 --Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread Orvar Korvar
Thank you. How does raidz2 compare to raid-2? Safer? Less safe?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Tim Foster
hi Rich,

On Wed, 2009-01-07 at 10:51 -0500, Richard Morris - Sun Microsystems -
Burlington United States wrote:
> As you point out, the -c option is user friendly while the -depth (or 
> maybe -d) option is more general.  There have been several requests for 
> the -c option.  Would anyone prefer the -depth option?  In what cases 
> would this be used?

I was thinking when I logged the bug, that -depth (or -d) would be 
useful in cases where you've got a "jurassic-like" filesystem layout,
and are interested in seeing just one or two levels.

zfs list -d 3 tank

  tank/home
  tank/home/Ireland
  tank/home/UK
  tank/home/France
  tank/home/Germany
  tank/foo
  tank/foo/bar

allowing you to look at just the level of hierarchy that you're
interested in (eg. "How much disk space are users from different
countries taking up taking up?"), without needing to grep, or hardcode
a list of datasets somewhere.

More importantly, with hopefully faster performance than showing
all children of tank/home just to get the size of the immediate children.


It's particularly important for snapshots - as the number of snapshots
grows, zfs list without limits like this can take a long time (even with
the massive zfs list performance improvements :-)

[ hacks around listing the contents of .zfs/snapshots/ only work when
filesystems are mounted unfortunately, so I'd been avoiding doing that
in the zfs-auto-snapshot code ]

cheers,
tim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problems at 90% zpool capacity 2008.05

2009-01-08 Thread Paul Bartholdi
On 1/8/09, Bill Sommerfeld  wrote:
>
>
> On Tue, 2009-01-06 at 22:18 -0700, Neil Perrin wrote:
> > I vaguely remember a time when UFS had limits to prevent
> > ordinary users from consuming past a certain limit, allowing
> > only the super-user to use it. Not that I'm advocating that
> > approach for ZFS.



The man page of newfs, on Solaris 8 (5.8), gives the option:

   -m free
 The minimum percentage of free space to maintain
 in   the   file  system  (between  1%  and  99%,
 inclusively). This space is off-limits to normal
 users.  Once  the  file system is filled to this
 threshold,  only  the  super-user  can  continue
 writing  to  the file system. This parameter can
 be subsequently  changed  using  the  tunefs(1M)
 command.

 The default is  ((64  Mbytes/partition  size)  *
 100),  rounded  down  to the nearest integer and
 limited between 1% and 10%, inclusively.

We always kept it to 1 % but were very glad to have it when, for any reason,
the users had nothing left... I should add that we were running most of the
time above 90 % (it is just thermodynamics, a gas occupies all available space!)
and could not see any real slowdown between 40 % and 99 % full (ufs+logging
on sparc Solaris 8).
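
The nearest ZFS equivalent I know of is to park a reservation on an otherwise
unused dataset, so that ordinary writers hit "no space" while the admin still
has headroom to give back; a rough sketch, with the pool and dataset names
invented for the example:

   zfs create tank/spare                 # empty dataset, example names
   zfs set reservation=5G tank/spare     # holds 5G back from everyone else

Dropping the reservation (or destroying the dataset) later frees the space.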

Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Mike Futerko
Hello

> This seems like a reasonable proposal to enhance zfs list.  But it would 
> also be good to add as few new options to zfs list as possible.  So it 
> probably makes sense to add at most one of these new options.  Or 
> perhaps add an optional depth argument to the -r option instead?
> 
> As you point out, the -c option is user friendly while the -depth (or 
> maybe -d) option is more general.  There have been several requests for 
> the -c option.  Would anyone prefer the -depth option?  In what cases 
> would this be used?


I have a few more proposals for improving zfs list, if they don't
contravene the concept of the zfs list command.

Currently zfs list returns the error "operation not applicable to datasets
of this type" in cases where it could simply do what you asked. For example,
"zfs list -t snapshot file/system" returns that error instead of the list of
all snapshots of "file/system". A similar case is "zfs list
file/sys...@snapshot" - could zfs be smart enough to return the snapshot
instead of an error message when the name you pass contains an "@"?

The other thing is zfs list performance... even if you only want the list
of snapshot names with no other properties, "zfs list -o name -t snapshot -r
file/system" still takes quite a long time when there are hundreds of
snapshots, while "ls /file/system/.zfs/snapshot" returns immediately.
Can this also be improved somehow, please?
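
(A rough way to see the difference - the file system name is just an
example:

time zfs list -o name -t snapshot -r file/system > /dev/null
time ls /file/system/.zfs/snapshot > /dev/null

On my systems the first can take many seconds with lots of snapshots,
while the second is essentially instant.)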



Thanks
Mike


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-08 Thread Mike Futerko
Hello

> Yah, the incrementals are from a 30TB volume, with about 1TB used.
> Watching iostat on each side during the incremental sends, the sender
> side is hardly doing anything, maybe 50iops read, and that could be
> from other machines accessing it, really light load.
> The receiving side however, for about 3 minutes it is peaking around
> 1500 iops reads, and no writes.


Have you tried truss on both sides? From my experiments I found that the
sending side mostly sleeps at the beginning of the transfer, while the
receiving side lists all available snapshots on the file system being
synced. So if you have a lot of snapshots on the receiving side (as in my
case), the process will spend a long time sending no data, just listing
snapshots. The worst case is a recursive sync of hundreds of file systems
with hundreds of snapshots on each. I'm sure this must be optimized somehow,
otherwise it's almost useless in practice.
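
(A rough sketch of what I mean - the pool, snapshot and host names are
made up:

truss -f -p <pid-of-zfs-send>     # attach to the running send on the sender

zfs send -i tank/fs@snap1 tank/fs@snap2 | \
    ssh otherhost 'truss -f -o /tmp/recv.truss zfs recv -d tank'

Logging the receiving side with -o makes it easy to see how long it sits
in the snapshot listing before any data actually moves.)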


Regards
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Benchmarking ZFS via NFS

2009-01-08 Thread Carsten Aulbert
Hi all,

among many other things, I recently restarted benchmarking ZFS over NFSv3
performance between an X4500 (host) and Linux clients. I last used iozone
quite a while ago and am still a bit at a loss understanding the results.
The automatic mode is pretty OK (and generates nice 3D plots for the people
higher up the ladder), but someone gave me a hint to use multiple threads
for testing ops/s, and here I'm a bit at a loss as to how to interpret the
results and whether the values are reasonable or not.

Here is the current example - can anyone with deeper knowledge tell me
if these are reasonable values to start with?
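
(One sanity check I still plan to do is to run the same workload locally
on the X4500, to separate the NFSv3 overhead from the pool's own
performance - roughly:

iozone -m -t 8 -T -O -r 8k -o -s 4G    # same parameters, run directly on the server

and then compare those ops/s numbers against the NFS run below.)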

Thanks a lot

Carsten

Iozone: Performance Test of File I/O
Version $Revision: 3.315 $
Compiled for 64 bit mode.
Build: linux-AMD64

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby
Collins
 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
 Randy Dunlap, Mark Montague, Dan Million, Gavin
Brebner,
 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

Run began: Wed Jan  7 09:31:49 2009

Multi_buffer. Work area 16777216 bytes
OPS Mode. Output is in operations per second.
Record Size 8 KB
SYNC Mode.
File size set to 4194304 KB
Command line used: ../iozone3_315/src/current/iozone -m -t 8 -T
-O -r 8k -o -s 4G iozone
Time Resolution = 0.01 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 8 threads
Each thread writes a 4194304 Kbyte file in 8 Kbyte records

Children see throughput for  8 initial writers  =4925.20 ops/sec
Parent sees throughput for  8 initial writers   =4924.65 ops/sec
Min throughput per thread   = 615.61 ops/sec
Max throughput per thread   = 615.69 ops/sec
Avg throughput per thread   = 615.65 ops/sec
Min xfer=  524219.00 ops

Children see throughput for  8 rewriters=4208.45 ops/sec
Parent sees throughput for  8 rewriters =4208.42 ops/sec
Min throughput per thread   = 525.88 ops/sec
Max throughput per thread   = 526.22 ops/sec
Avg throughput per thread   = 526.06 ops/sec
Min xfer=  523944.00 ops

Children see throughput for  8 readers  =   11986.99 ops/sec
Parent sees throughput for  8 readers   =   11986.46 ops/sec
Min throughput per thread   =1481.13 ops/sec
Max throughput per thread   =1512.71 ops/sec
Avg throughput per thread   =1498.37 ops/sec
Min xfer=  513361.00 ops

Children see throughput for 8 re-readers=   12017.70 ops/sec
Parent sees throughput for 8 re-readers =   12017.22 ops/sec
Min throughput per thread   =1486.72 ops/sec
Max throughput per thread   =1520.35 ops/sec
Avg throughput per thread   =1502.21 ops/sec
Min xfer=  512761.00 ops

Children see throughput for 8 reverse readers   =   25741.62 ops/sec
Parent sees throughput for 8 reverse readers=   25735.91 ops/sec
Min throughput per thread   =3141.50 ops/sec
Max throughput per thread   =3282.11 ops/sec
Avg throughput per thread   =3217.70 ops/sec
Min xfer=  501956.00 ops

Children see throughput for 8 stride readers=1434.73 ops/sec
Parent sees throughput for 8 stride readers =1434.71 ops/sec
Min throughput per thread   = 122.51 ops/sec
Max throughput per thread   = 297.87 ops/sec
Avg throughput per thread   = 179.34 ops/sec
Min xfer=  215638.00 ops

Children see throughput for 8 random readers= 529.83 ops/sec
Parent sees throughput for 8 random readers = 529.83 ops/sec
Min throughput per thread   =  55.63 ops/sec
Max throughput per thread   = 101.03 ops/sec
Avg throughput per thread   =  66.

Re: [zfs-discuss] hung when import zpool

2009-01-08 Thread Carsten Aulbert
Hi

Qin Ming Hua wrote:
> bash-3.00# zpool import mypool
> ^C^C
> 
> It hung when I tried to re-import the zpool; has anyone seen this before?
> 

How long did you wait?

A zfs import once took 1-2 hours to complete for me (it was seemingly stuck
on a ~30 GB filesystem that it needed to do some work on).
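
(To check whether the import is actually making progress rather than being
truly hung, watching the pool disks from a second terminal is a rough
indicator, e.g.:

iostat -xn 5

Steady reads or writes on the disks in the pool usually mean the import is
still working its way through the metadata.)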

Cheers

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] hung when import zpool

2009-01-08 Thread Qin Ming Hua
Hi All,

I would like to try the ZFS self-healing feature as described at
http://www.opensolaris.org/os/community/zfs/demos/selfheal/
but have hit an issue; please see my process below.

bash-3.00# zpool create mypool mirror c3t5006016130603AE5d7
c3t5006016130603AE5d8
bash-3.00# cd /mypool/
bash-3.00# cp /export/iozone3_315.tar .
bash-3.00# digest -a md5 iozone3_315.tar
e5997fa99c538e067bf5eefde90dd423
bash-3.00# dd if=/dev/zero of=/dev/dsk/c3t5006016130603AE5d8 bs=1024
count=20480
20480+0 records in
20480+0 records out
bash-3.00# zpool status mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

NAME   STATE READ WRITE CKSUM
mypool ONLINE   0 0 0
  mirror   ONLINE   0 0 0
c3t5006016130603AE5d7  ONLINE   0 0 0
c3t5006016130603AE5d8  ONLINE   0 0 0

errors: No known data errors
bash-3.00# cd /
bash-3.00# zpool export mypool
bash-3.00# zpool import mypool
^C^C

It hung when I tried to re-import the zpool; has anyone seen this before?

bash-3.00# uname -vi
Generic_120012-14 i86pc
bash-3.00# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
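
For reference, the remaining steps I was planning to run once the import
comes back (just a sketch of the demo, as I understand it):

bash-3.00# digest -a md5 /mypool/iozone3_315.tar
    (should still print e5997fa99c538e067bf5eefde90dd423)
bash-3.00# zpool scrub mypool
bash-3.00# zpool status -v mypool
    (checksum errors on the overwritten half should show up and be
     repaired from the good side of the mirror)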


-- 
Best regards,
Colin Qin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss