Re: [zfs-discuss] 'zfs recv' is very slow

2008-12-06 Thread Ian Collins
Ian Collins wrote:
 Andrew Gabriel wrote:
   
 Ian Collins wrote:
 
 I've just finished a small application to couple zfs_send and
 zfs_receive through a socket to remove ssh from the equation, and the
 speed-up is better than 2x.  I have a small (140K) buffer on the sending
 side to keep the number of packets sent to a minimum.

 The times I get for 3.1GB of data (b101 ISO and some smaller files) to a
 modest mirror at the receive end are:

 1m36s for cp over NFS,
 2m48s for zfs send through ssh and
 1m14s through a socket.
   
 So the best speed is equivalent to 42MB/s.

 Can't tell from this what the limiting factor is (might be the disks).
 
 It probably is.

   
 It would be interesting to try putting a buffer (5 x 42MB = 210MB
 initial stab) at the recv side and see if you get any improvement.
 
It took a while...

I was able to get about 47MB/s with a 256MB circular input buffer.  I
think that's about as fast as it can go: the buffer fills, so receive
processing is the bottleneck.  Bonnie++ shows the pool (a mirror) block
write speed is 58MB/s.

When I reverse the transfer to the faster box, the rate drops to 35MB/s
with neither the send nor receive buffer filling.  So send processing
appears to be the limit in this case.
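
For anyone who wants to try the same idea without writing code, a rough
equivalent can be sketched with netcat (a sketch only -- the listen flags
vary between netcat builds, and the host, port and dataset names here are
placeholders):

  # receiving side: listen on a TCP port and feed the stream to zfs receive
  nc -l 9090 | zfs receive -F tank/backup

  # sending side: pipe the replication stream straight to the receiver
  zfs send tank/data@snap | nc recvhost 9090

This removes the ssh cipher overhead in the same way, though without the
large application-level buffers described above.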

-- 
Ian.



Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-06 Thread Tomas Ögren
On 05 December, 2008 - Brian Cameron sent me these 1,5K bytes:

 
 I am the maintainer of GDM, and I am noticing that GDM has a problem when
 running on a ZFS filesystem, as with Indiana.
 
 When GDM (the GNOME Display Manager) starts the login GUI, it runs the
 following commands on Solaris:
 
/usr/bin/setfacl -m user:gdm:rwx,mask:rwx /dev/audio
/usr/bin/setfacl -m user:gdm:rwx,mask:rwx /dev/audioctl
 
 It does this because the login GUI programs are run as the gdm user,
 and in order to support text-to-speech via orca, for users with
 accessibility needs, the gdm user needs access to the audio device.
 We were using setfacl because logindevperm(3) normally manages the
 audio device permissions and we only want the gdm user to have
 access on-the-fly when the GDM GUI is started.
 
 However, I notice that when using ZFS on Indiana the above commands fail
 with the following error:
 
File system doesn't support aclent_t style ACL's.
See acl(5) for more information on ACL styles support by Solaris.
 
 What is the appropriate command to use with ZFS?

chmod
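
With NFSv4-style ACLs, the equivalent of the two setfacl calls would look
something like this (a sketch; see chmod(1) for the exact ACE syntax):

  /usr/bin/chmod A+user:gdm:read_data/write_data/execute:allow /dev/audio
  /usr/bin/chmod A+user:gdm:read_data/write_data/execute:allow /dev/audioctl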

 If different commands are needed based on the file system type, then
 how can GDM determine which command to use?

Do both? :)

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Orvar Korvar
It's not me. There are people on Linux forums who want to try out Solaris + ZFS,
and this is a concern for them. What should I tell them? That it is not fixed?
That they have to reboot every week? Does anyone know?


Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-06 Thread Mark Shellenbaum

 However, I notice that when using ZFS on Indiana the above commands fail
 with the following error:
 
File system doesn't support aclent_t style ACL's.
See acl(5) for more information on ACL styles support by Solaris.
 
 What is the appropriate command to use with ZFS? 

You can use pathconf() with _PC_ACL_ENABLED to determine what flavor of 
ACL the file system supports.

Check out these links:

http://docs.sun.com/app/docs/doc/816-5167/fpathconf-2?a=view
http://blogs.sun.com/alvaro/entry/detecting_the_acl_type_you

The example in the blog isn't quite correct.  The returned value is a 
bit mask, and it is possible for a file system to support multiple ACL 
flavors.

Here is an example of pathconf() as used in acl_strip(3sec)

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libsec/common/aclutils.c#390


   -Mark


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Toby Thain

On 6-Dec-08, at 7:10 AM, Orvar Korvar wrote:

 It's not me. There are people on Linux forums who want to try out
 Solaris + ZFS, and this is a concern for them. What should I tell
 them? That it is not fixed? That they have to reboot every week?
 Does anyone know?


That it's not recommended for 32-bit systems. There may also be
unfixed atomicity issues in 64-bit operations, according to past
posts on this list.

--Toby



[zfs-discuss] SMART data

2008-12-06 Thread Joe S
How do I get SMART data from my drives?

I'm running snv_101 on AMD64.

I have 6x SATA disks.


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Brian Hechinger
On Sat, Dec 06, 2008 at 11:31:06AM -0500, Toby Thain wrote:
 
  It's not me. There are people on Linux forums who want to try out
  Solaris + ZFS, and this is a concern for them. What should I tell
  them? That it is not fixed? That they have to reboot every week?
  Does anyone know?
 
 That it's not recommended for 32-bit systems. There may also be
 unfixed atomicity issues in 64-bit operations, according to past
 posts on this list.

Well, he's talking Linux, which means FUSE, so the issues related to 32-bit
on Solaris don't apply, as ZFS isn't running in kernel space, which is where
the horrid performance issues come from (the ARC cache and the kernel fighting
for address space).  Maybe it would run well under 32-bit Linux?  I can't
speak to that as I refuse to run Linux.

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Tim
On Sat, Dec 6, 2008 at 10:59 AM, Brian Hechinger [EMAIL PROTECTED] wrote:

 On Sat, Dec 06, 2008 at 11:31:06AM -0500, Toby Thain wrote:
 
   It's not me. There are people on Linux forums who want to try out
   Solaris + ZFS, and this is a concern for them. What should I tell
   them? That it is not fixed? That they have to reboot every week?
   Does anyone know?
 
  That it's not recommended for 32-bit systems. There may also be
  unfixed atomicity issues in 64-bit operations, according to past
  posts on this list.

 Well, he's talking Linux, which means FUSE, so the issues related to 32-bit
 on Solaris don't apply, as ZFS isn't running in kernel space, which is where
 the horrid performance issues come from (the ARC cache and the kernel
 fighting
 for address space).  Maybe it would run well under 32-bit Linux?  I can't
 speak to that as I refuse to run Linux.

 -brian


Solaris + ZFS and this is a concern

Sounds to me like they want to try out solaris + zfs, not zfs on fuse.

--Tim


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Brian Hechinger
On Sat, Dec 06, 2008 at 12:42:44PM -0600, Tim wrote:
 Solaris + ZFS and this is a concern
 
 Sounds to me like they want to try out solaris + zfs, not zfs on fuse.

Ooops, misread what he said.  Sorry about that.

I suppose my original comment still stands then. :)

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] 'zfs recv' is very slow

2008-12-06 Thread Ian Collins
Richard Elling wrote:
 Ian Collins wrote:
 I was able to get about 47MB/s with a 256MB circular input buffer.  I
 think that's about as fast as it can go: the buffer fills, so receive
 processing is the bottleneck.  Bonnie++ shows the pool (a mirror) block
 write speed is 58MB/s.

 When I reverse the transfer to the faster box, the rate drops to 35MB/s
 with neither the send nor receive buffer filling.  So send processing
 appears to be the limit in this case.  
 Those rates are what I would expect writing to a single disk.
 How is the pool configured?

The slow system has a single mirror pool of two SATA drives, the
faster one a stripe of 4 mirrors and an IDE SD boot drive.

ZFS send through ssh from the slow to the fast box takes 189 seconds, the
direct socket connection send takes 82 seconds.

-- 
Ian.



[zfs-discuss] How to compile mbuffer

2008-12-06 Thread Julius Roberts
Hi guys,

i've been following the [zfs-discuss] 'zfs recv' is very slow thread
and i believe i have the same issue; we get ~10MB/sec sending large
incremental data sets using zfs send | ssh | zfs recv. I'd like to try
mbuffer.

We're running Solaris Express Developers Edition (SunOS murray 5.11
snv_79a i86pc i386 i86pc).  I found the download page
http://www.maier-komor.de/mbuffer.html and i have the source files on
Murray.

How do i compile mbuffer for our system, and what syntax do i use to
invoke it within the zfs send | zfs recv pipeline?

Any help appreciated!

-- 
Kind regards, Jules

free. open. honest. love. kindness. generosity. energy. frenetic.
electric. light. lasers. spinning spotlights. stage dancers. heads
bathed in yellow light. silence. stillness. awareness. empathy. the
beat. magic, not mushrooms. thick. tight. solid. commanding.
compelling. uplifting. euphoric. ecstatic, not e. ongoing. releasing.
reforming. meandering. focussing. quickening. quickening. quickening.
aloft. floating. then the beat. fat exploding thick bass-line.
eyes, everywhere. smiling. sharing. giving. trust. understanding.
tolerance. peace. equanimity. emptiness (Earthcore, 2008)


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Joseph Mocker
Does PAE help things at all on 32-bit?





Re: [zfs-discuss] How to compile mbuffer

2008-12-06 Thread Mike Futerko
Hello

 i've been following the [zfs-discuss] 'zfs recv' is very slow thread
 and i believe i have the same issue; we get ~10MB/sec sending large
 incremental data sets using zfs send | ssh | zfs recv. I'd like to try
 mbuffer.
 
 We're running Solaris Express Developers Edition (SunOS murray 5.11
 snv_79a i86pc i386 i86pc).  I found the download page
 http://www.maier-komor.de/mbuffer.html and i have the source files on
 Murray.
 
 How do i compile mbuffer for our system, and what syntax do i use to
 invoke it within the zfs send | zfs recv pipeline?
 
 Any help appreciated!


I used to compile it this way:

1) wget http://www.maier-komor.de/software/mbuffer/mbuffer-20081113.tgz
2) gtar -xzvf mbuffer-20081113.tgz
3) cd mbuffer-20081113
4) ./configure --prefix=/usr/local --disable-debug CFLAGS=-O MAKE=gmake
If you are on a 64-bit system you may want to compile a 64-bit version:
./configure --prefix=/usr/local --disable-debug CFLAGS="-O -m64" MAKE=gmake
5) gmake && gmake install
6) /usr/local/bin/mbuffer -V
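
As for invoking it inside the send/receive pipeline, the usual pattern is
something like the following (a sketch -- the host, port, dataset names and
buffer sizes are placeholders; check mbuffer(1) for the exact options in
your build):

  # on the receiving host: listen on a TCP port, buffer, feed zfs receive
  /usr/local/bin/mbuffer -s 128k -m 512M -I 9090 | zfs receive tank/backup

  # on the sending host: pipe the stream into mbuffer, which connects out
  zfs send tank/fs@snap | /usr/local/bin/mbuffer -s 128k -m 512M -O recvhost:9090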


Regards
Mike


Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-06 Thread Ross
If I remember right, the code needed for this has implications for a lot of 
things:

- defrag
- adding disks to raidz zvols
- removing disks from vols
- restriping volumes (to give consistent performance after expansion)

In fact, I just found the question I asked a year or so back, which had a good
reply from Jeff:
http://opensolaris.org/jive/message.jspa?messageID=186561

... and while typing this, I also just found this blog post from Adam Leventhal 
in April, which is also related:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z


Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-06 Thread Al Tobey
They also mentioned this at some of the ZFS talks at LISA 2008.  The general
argument is that, while plenty of hobbyists are clamoring for this, not enough
paying customers are asking for it to make it a high enough priority to get done.

If you think about it, the code is not only complicated but will be incredibly 
hard to get right and _prove_ it's right.

Maybe the ZFS guys can just borrow the algorithm from Linux mdraid's 
experimental CONFIG_MD_RAID5_RESHAPE:

http://git.kernel.org/?p=linux/kernel/git/djbw/md.git;a=blob;f=drivers/md/raid5.c;h=224de022e7c5d6574cf46747947b3c9e326c8632;hb=HEAD#1885


Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-06 Thread Brian Hechinger
On Sat, Dec 06, 2008 at 01:36:35PM -0800, Joseph Mocker wrote:
 Does PAE help things at all on 32-bit?

No.

-brian
-- 
Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix. -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Thanks, but compared to what?
To Windows, are you sure we can say a lot of additional?
To Linux, maybe, since I am not a Linux fan.
To leading NAS appliances, these are not competitive advantages.
opensolaris.org posted this, I would like an official answer!

The Open-spirit should be encouraged, but the wrong marketing positioning 
messages are not!!!
Please, don't bring shame to the open community.

Thank you!
zStorageAnalyst

- Original Message - 
From: William D. Hathaway [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Sent: Thursday, December 04, 2008 7:35 AM
Subject: Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun 
X4150/X4450


 Keep in mind that if you use ZFS you get a lot of additional functionality 
 like snapshots, compression, clones.


Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-06 Thread Joseph Zhou
Ian, Tim, again, thank you very much in answering my question.

I am a bit disappointed that the whole discussion group does not have one
person to stand up and say yeah, OpenSolaris absolutely outperforms Linux
and Windows, because...

But I wish, one day, we can argue not on a basis of belief, but on a
basis of facts (referenceable data).
I can test all I want, but the results don't mean anything in official
arguments because I am not VERITEST, and my firm is not funding my testing.

With all the love for Sun Storage, and all the disappointments, please, keep 
this in mind. Thank you!
zStorageAnalyst

- Original Message - 
From: Ian Collins [EMAIL PROTECTED]
To: Joseph Zhou [EMAIL PROTECTED]
Cc: Tim [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Sent: Wednesday, December 03, 2008 5:43 PM
Subject: Re: [zfs-discuss] OpenSolaris vs Linux


 Joseph Zhou wrote:
 Thanks Ian, Tim,
 Ok, let me really hit one topic instead of trying to see in general
 what data are out there...

 Let's say OpenSolaris doing Samba vs. Linux doing Samba, in CIFS
 performance.
 (so I can link to the Win2008 CIFS numbers and NetApp CIFS numbers
 myself.)

 Is there any data to this specific point?

 I think what we are telling you is the only way to find the numbers you
 want for your configuration is to do your own tests.  There are just too
 many variables for other people's data to be truly relevant.

 One of the benefits of Open Source is you only have to pay for your time
 to run tests.

 As Tim said, there's no point in limiting OpenSolaris to Samba.

 -- 
 Ian.
 



Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-06 Thread Joseph Zhou
Tim, thanks, yeah, I have highlighted Sun Storage SSD, see my blog, if you are 
really interested.

http://ideasint.blogs.com/ideasinsights/2008/10/ssd-shines-new.html

note the Sun Storage comment to the blog and my reply.
Happy holidays!
z
  - Original Message - 
  From: Tim 
  To: Joseph Zhou 
  Cc: Ian Collins ; zfs-discuss@opensolaris.org 
  Sent: Wednesday, December 03, 2008 5:22 PM
  Subject: Re: [zfs-discuss] OpenSolaris vs Linux





  On Wed, Dec 3, 2008 at 4:15 PM, Joseph Zhou [EMAIL PROTECTED] wrote:

haha, Tim, yes, I see the Open spirit in this reply!   ;-)

As I said, I am just exploring data.

The Sun J4000 SPC1 and SPC2 benchmark results were nice, just lacking other 
published results with the iSCSI HBA as DAS, not as a network storage device 
(as 7000).  Though I would attempt to say those results can be a basis for 7000 
block-performance...

any comment?
Thanks!
z

  I'd imagine you'll see far better performance out of the 7000 with their use 
of flash.  Only time will tell though :)

  --Tim 




Re: [zfs-discuss] help diagnosing system hang

2008-12-06 Thread Ethan Erchinger


Ethan Erchinger wrote:
 Here is a sample set of messages at that time.  It looks like timeouts 
 on the SSD for various requested blocks.  Maybe I need to talk with 
 Intel about this issue.
   
Keeping everyone up to date, for those who care: I've RMA'd the Intel
drive, and will retest when the replacement arrives.  I'm working under
the assumption that I have a bad drive.

Ethan


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread William D. Hathaway
I don't understand your statement/questions.  This wasn't a response to ZFS 
versus every possible storage platform in the world.  The original poster was 
asking about comparing  ZFS versus hardware RAID on specific machines as 
mentioned in the title.  AFAIK you don't get compression, snapshots and clones 
with standard hardware RAID cards.


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Yeah?
http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
Snapshot is a big deal?

Windows OS does that too.

Compression -- where is the performance data showing that compression in
OpenSolaris has little overhead?

Clones -- tell me the benefit of a clone when we have point-in-time copies
with continuous, policy-based protection?  And snapshot images are mostly
writable and sync-able today?

Man, I am an open storage analyst, please, tell me I am wrong!
zStorageAnalyst



Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Is Jeff Cheeney still on this list? He had an open mind.

Jeff, if you can see this, tell me if I am wrong! Please!

Thanks!
z



Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Richard Elling
Joseph Zhou wrote:
 Yeah?
 http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
 Snapshot is a big deal?
   

Snapshot is a big deal, but you will find most hardware RAID implementations
are somewhat limited: the above Adaptec only supports 4 snapshots, and it is
an optional feature.  You will find many array vendors will be happy to
charge lots of money for the snapshot feature.

 Windows OS does that too.
   

Not the Windows OS I run on my laptop.  But the feature seems to be best
integrated on Mac OS X.

 Compression -- where is the performance data showing compression in 
 OpenSolaris has little overhead?
   

If you search these archives you will find instances where compression
makes things much faster, and instances where it has significant
overhead.  YMMV.  As with most things, there are engineering and design
trade-offs that you should consider.

 Clones -- tell me the benefit of Clone when we have point-in-time copies 
 with continuous, policy-based protection?  And snapshot images are mostly 
 writable and sync-able today?
   

In ZFS, snapshots are read-only.  Clones are created from a snapshot
and can be writable.  We use clones extensively for OS upgrading and
patching.  For example, when you upgrade OpenSolaris, the OS file
systems are cloned and the clone is upgraded, so that you can move
forward or roll back to different versions.  Many people use clones
for virtual machines.
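
The snapshot-plus-clone sequence itself is short (dataset names here are
illustrative):

  zfs snapshot rpool/ROOT/b101@pre-upgrade
  zfs clone rpool/ROOT/b101@pre-upgrade rpool/ROOT/b102

The clone initially shares all of its blocks with the snapshot, so it is
nearly free to create, and it is independently writable from then on.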

 Man, I am an open storage analyst, please, tell me I am wrong!
   

I suggest you read the docs, particularly the ZFS Administration Guide.
http://www.opensolaris.org/os/community/zfs/docs
 -- richard


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Richard, thank you so very much!
This is the kind of answer I expected from Sun Storage.
I will do more studies before I speak again.
Happy holidays!!!
z



Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Torrey McMahon
Richard Elling wrote:
 Joseph Zhou wrote:
   
 Yeah?
 http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
 Snapshot is a big deal?
   
 

 Snapshot is a big deal, but you will find most hardware RAID implementations
 are somewhat limited: the above Adaptec only supports 4 snapshots, and it is
 an optional feature.  You will find many array vendors will be happy to
 charge lots of money for the snapshot feature.

On top of that, since the ZFS snapshot is at the file system level, it's
much easier to use. You don't have to quiesce the file system first or
hope that when you take the snapshot you get a consistent data set. I've
seen plenty of folks take hw raid snapshots without locking the file
system first, let alone quiescing the app, and end up with garbage.
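
A ZFS snapshot is also a single atomic operation, even taken recursively
across a whole tree of file systems, e.g. (the pool and snapshot names are
placeholders):

  zfs snapshot -r tank@nightly

versus coordinating lock/snapshot/unlock steps between the application,
the file system and the array.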


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Torrey, now this is as impressive as the old days with Sun Storage.

Ok, ZFS PiT is only a software solution.
The Windows VSS is not only a software solution, but also a 3rd party 
integration standard from MS.
What's your case that ZFS PiT is better than MS PiT, in light of openness
and 3rd-party integration???

Talking about garbage!
z




Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Ok, I am tired and going to bed.
Thanks, Real Sun Storage folks, this is the best discussion I have had in 
months.

I am satisfied.   ;-)

Goodnight, and long live the open spirit!!!

zStorageAnalyst




Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Torrey McMahon
Compared to hw-raid-only snapshots, ZFS is still, imho, easier to use.

If you start talking about VSS, aka shadow copy for Windows, you're now
at the fs level. I can see that VSS offers an API for 3rd parties to use
but, as I literally just started reading about it, I'm not an expert.
From a quick glance I think the ZFS feature set is comparable. Is there
a C++ API to ZFS? Not that I know of. Do you need one? Can't think of a
reason off the top of my head given the way the zpool/zfs commands work.



Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Joseph Zhou
Ok, Torrey, I like you, so one more comment before I go to bed --

Please go study the EMC NetWorker 7.5, and why EMC can claim leadership in 
VSS support.
Then, if you still don't understand the importance of VSS, just ask me in an 
open fashion, I will teach you.

The importance of storage in system and application optimization can be very 
significant.
You do coding; do you know what TGT from IBM in COBOL is, to be able to claim
enterprise technology?
If not, please study.
http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp?topic=/com.ibm.entcobol.doc_4.1/PGandLR/ref/rpbug10.htm

Open Storage is a great concept, but we can only win with real advantages,
not fake marketing lines.
I hope everyone enjoyed the discussion. I did.

zStorageAnalyst

