[zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Mertol Ozyoney
Hi Everyone ;

 

I will soon be making a presentation comparing ZFS against Veritas Storage
Foundation. Do we have any document comparing their features?

 

regards

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]

 

 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Frank Hofmann


On Fri, 28 Dec 2007, Darren Reed wrote:
[ ... ]
 Is this behaviour defined by a standard (such as POSIX or the
 VFS design) or are we free to innovate here and do something
 that allowed such a shortcut as required?

Wrt. standards, quoting from:

http://www.opengroup.org/onlinepubs/009695399/functions/rename.html

ERRORS
The rename() function shall fail if:
[ ... ]
[EXDEV]
[CX]  The links named by new and old are on different file systems and the
implementation does not support links between file systems.

Hence, it's implementation-dependent, as per IEEE1003.1.
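
To make the practical consequence concrete, here is a minimal userland sketch
(illustrative only, not how mv(1) is actually implemented) of the fallback a
mover has to perform when rename(2) reports EXDEV; attribute preservation,
directories and error cleanup are omitted:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Illustrative sketch only: try rename(2) first; if the kernel answers
 * EXDEV, fall back to copy-and-unlink.  A real mv(1) also preserves
 * owners, modes, times, ACLs, handles directories, etc.
 */
int
move_file(const char *from, const char *to)
{
    char buf[8192];
    ssize_t n;
    int in, out;

    if (rename(from, to) == 0)
        return (0);                 /* same file system: cheap */
    if (errno != EXDEV)
        return (-1);                /* a real error */

    /* EXDEV: different file systems, so copy the data */
    if ((in = open(from, O_RDONLY)) < 0)
        return (-1);
    if ((out = open(to, O_WRONLY | O_CREAT | O_TRUNC, 0600)) < 0) {
        (void) close(in);
        return (-1);
    }
    while ((n = read(in, buf, sizeof (buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) {
            n = -1;
            break;
        }
    }
    (void) close(in);
    (void) close(out);
    if (n < 0)
        return (-1);
    return (unlink(from));
}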

FrankH.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Robin Bowes
Mertol Ozyoney wrote:
 Hi Everyone ;
 
 I will soon be making a presentation comparing ZFS against Veritas
 Storage Foundation. Do we have any document comparing their features?

Mertol,

I don't have any experience of Veritas - I've only recently come to the
Solaris world purely because of zfs.

For me, the attractive features of zfs (so much so that I've moved to
OpenSolaris from Linux (CentOS)) are, in no particular order:

1. data integrity - built-in checksumming
2. ease of administration
   * easy to create new storage
   * easy to manage storage
3. Integration with nfs and (recently) cifs
   * set a property on a zfs dataset to create an nfs/cifs share

zfs really is revolutionary. I am constantly amazed at what it can do
and how easy it is to do it.

Storage management tools in linux are good (md, lvm, etc) but the zfs
toolset is better, and far easier to use.

R.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] how to create whole disk links?

2007-12-28 Thread Kent

I just installed SXCE_b78 while having one SuperMicro AOC-SAT2-MV8 card 
installed and disks connected to the first 6 sata ports.  Now I've 
installed two more AOC-SAT2-MV8 cards and added some more drives, but 
I'm not getting the whole disk (:wd) links for them.  For instance, 
the following shows just the original six drives having :wd entries - 
how do I get all 24?

# ls -l /dev/dsk/ | grep :wd$
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t0d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t1d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t2d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t3d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t4d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t5d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd

Following are the vdevs for my 24-disk array (6 rows x 4 cols):

c3t0d0 c3t1d0 c3t2d0 c3t3d0
c3t4d0 c3t5d0 c3t6d0 c3t7d0
c4t0d0 c4t1d0 c4t2d0 c4t3d0
c4t4d0 c4t5d0 c4t6d0 c4t7d0
c5t0d0 c5t1d0 c5t2d0 c5t3d0
c5t4d0 c5t5d0 c5t6d0 c5t7d0

The 3 AOC-SAT2-MV8 cards are c3, c4, and c5 (I just added c4 and c5).

Thanks!
Kent



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Joerg Schilling
Darren Reed [EMAIL PROTECTED] wrote:

        if (fromvp != tovp) {
                vattr.va_mask = AT_FSID;
                if (error = VOP_GETATTR(fromvp, &vattr, 0, CRED(), NULL))
                        goto out;
                fsid = vattr.va_fsid;
                vattr.va_mask = AT_FSID;
                if (error = VOP_GETATTR(tovp, &vattr, 0, CRED(), NULL))
                        goto out;
                if (fsid != vattr.va_fsid) {
                        error = EXDEV;
                        goto out;
                }
        }
 
  ZFS will never even see such a rename request.

 Is this behaviour defined by a standard (such as POSIX or the
 VFS design) or are we free to innovate here and do something
 that allowed such a shortcut as required?

EXDEV means cross-device link, not cross-filesystem link.
A ZFS pool acts as the underlying storage device, so everything that
is within a single ZFS pool may be a candidate for a rename.

POSIX guarantees that st_dev and st_ino together uniquely identify a file
on a system. As long as neither st_dev nor st_ino changes during the
rename(2) call, POSIX does not prevent this rename operation.
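
As a quick illustration (a sketch, not authoritative): as far as I can tell,
each ZFS file system today reports its own st_dev even within one pool, which
is the fsid difference the kernel check quoted earlier turns into EXDEV. A
check like this shows it for any two paths you pass in:

#include <stdio.h>
#include <sys/stat.h>

/*
 * Sketch: print st_dev for two paths.  rename(2) can only avoid EXDEV
 * when the kernel sees the same fsid for both, which is what stat(2)
 * surfaces as st_dev.  Paths are whatever you pass on the command line.
 */
int
main(int argc, char **argv)
{
    struct stat a, b;

    if (argc != 3) {
        (void) fprintf(stderr, "usage: %s path1 path2\n", argv[0]);
        return (1);
    }
    if (stat(argv[1], &a) != 0 || stat(argv[2], &b) != 0) {
        perror("stat");
        return (1);
    }
    (void) printf("%s: st_dev=%lx\n", argv[1], (unsigned long)a.st_dev);
    (void) printf("%s: st_dev=%lx\n", argv[2], (unsigned long)b.st_dev);
    (void) printf("rename(2) between them would %s\n",
        a.st_dev == b.st_dev ? "stay on one device" :
        "be refused with EXDEV");
    return (0);
}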

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Joerg Schilling
Frank Hofmann [EMAIL PROTECTED] wrote:

 I don't think the standards would prevent us from adding cross-fs rename
 capabilities. It's beyond the standards as of now, and I'd expect that
 were it ever added, it'd be an optional feature as well, to be
 queried for via e.g. pathconf().

Why do you believe there is a need for a pathconf() call?
Either rename(2) succeeds or it fails with a cross-device error.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Frank Hofmann


On Fri, 28 Dec 2007, Joerg Schilling wrote:

 Frank Hofmann [EMAIL PROTECTED] wrote:

  I don't think the standards would prevent us from adding cross-fs rename
  capabilities. It's beyond the standards as of now, and I'd expect that
  were it ever added, it'd be an optional feature as well, to be
  queried for via e.g. pathconf().

  Why do you believe there is a need for a pathconf() call?
  Either rename(2) succeeds or it fails with a cross-device error.

Why do you have a NAME_MAX / SYMLINK_MAX query? You can just as well let
such requests fail with ENAMETOOLONG.

Why do you have a FILESIZEBITS query? There's EOVERFLOW to tell you.


There's no _need_. But the convenience exists for others as well.
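
For what it's worth, the advance-query pattern being compared to looks like
this in userland (illustrative sketch; a cross-fs-rename capability query of
this kind does not exist today and the _PC_ name in the comment is made up):

#include <stdio.h>
#include <unistd.h>

/*
 * Sketch: query per-path limits up front rather than waiting for
 * ENAMETOOLONG or EOVERFLOW at use time.  A cross-fs rename capability
 * query (say, a hypothetical _PC_XDEV_RENAME) would follow the same
 * pattern if it were ever standardized.
 */
int
main(void)
{
    long name_max = pathconf("/tmp", _PC_NAME_MAX);
    long fsizebits = pathconf("/tmp", _PC_FILESIZEBITS);

    (void) printf("NAME_MAX on /tmp: %ld\n", name_max);
    (void) printf("FILESIZEBITS on /tmp: %ld\n", fsizebits);
    return (0);
}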


FrankH.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Frank Hofmann


On Fri, 28 Dec 2007, Joerg Schilling wrote:
[ ... ]
  POSIX guarantees that st_dev and st_ino together uniquely identify a file
  on a system. As long as neither st_dev nor st_ino changes during the
  rename(2) call, POSIX does not prevent this rename operation.

Clarification request: Where's the piece in the standard that forces an 
interpretation:

rename() operations shall not change st_ino/st_dev

I don't see where such a requirement would come from.


FrankH.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Joerg Schilling
Frank Hofmann [EMAIL PROTECTED] wrote:



 On Fri, 28 Dec 2007, Joerg Schilling wrote:
 [ ... ]
   POSIX guarantees that st_dev and st_ino together uniquely identify a file
   on a system. As long as neither st_dev nor st_ino changes during the
   rename(2) call, POSIX does not prevent this rename operation.

 Clarification request: Where's the piece in the standard that forces an 
 interpretation:

   rename() operations shall not change st_ino/st_dev

 I don't see where such a requirement would come from.

See: http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html

The st_ino and st_dev fields taken together uniquely identify the file within 
the system.

The identity of an open file cannot change during the lifetime of a process.
Note that the renamed file may be open and the process may call fstat(2)
on the open file before and after the rename(2). As rename(2) does not change
the content of the file, it may only affect the time stamps of the file.

Note that some programs call stat/fstat on files in order to compare file
identities. What happens if program A calls stat(file1), then program B
calls rename(file1, file2), and after that program A calls stat(file2)?
A POSIX compliant system will guarantee that stat(file1) and stat(file2) will
return the same st_dev/st_ino identity.
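
A small sketch of that invariant (illustrative only; the file names are
arbitrary scratch names):

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Sketch: the st_dev/st_ino identity of an open file must not change
 * across a rename(2) within one file system.  "file1"/"file2" are
 * arbitrary scratch names.
 */
int
main(void)
{
    struct stat before, after;
    int fd;

    if ((fd = open("file1", O_RDWR | O_CREAT, 0600)) < 0 ||
        fstat(fd, &before) != 0) {
        perror("open/fstat");
        return (1);
    }
    if (rename("file1", "file2") != 0) {
        perror("rename");
        return (1);
    }
    if (fstat(fd, &after) != 0) {
        perror("fstat");
        return (1);
    }
    (void) printf("identity %s across rename\n",
        (before.st_dev == after.st_dev && before.st_ino == after.st_ino) ?
        "preserved" : "changed");
    return (0);
}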



Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Mike Gerdts
On Dec 28, 2007 8:40 AM, Sengor [EMAIL PROTECTED] wrote:
 Real comparison of features should include scenarios such as:

 - how ZFS/VxVM compare in BCV like environments (eg. when volumes are
 presented back to the same host)
 - how they all cope with various multipathing solutions out there
 - Filesystem vs Volume snapshots
  - Portability within cluster-like environments (SCSI reserves & LUN
  visibility to multiple synchronous hosts)
 - Disaster recovery scenarios
 - Ease/Difficulty with data migrations across physical arrays
 - Boot volumes
 - Online vs Offline attribute/parameter changes

Very good list!

 I can't think of more right now, it's way past midnight here ;)

How about these?

- Integration with backup system
- Active-active cluster (parallel file system) capabilities
- Integration with OS maintenance activities (install, upgrade, patching, etc.)
- Relative performance on anticipated workload
- Staffing issues (what do people know, how many hours to train, how
long before proficiency)
- Supportability on multiple platforms at the site (e.g. Solaris,
Linux, HP-UX, AIX, ...)
- Impact of failure modes (missing license keys, especially during major
system changes; on-disk corruption)
- Opportunities to do things previously not possible

ZFS doesn't win on many of those, but with the improvements that I
have seen throughout the storage stack it is somewhat likely that the
required improvements are already on the roadmap.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Richard Elling
While you could have a wart-by-wart comparison, please remember that the
biggest difference is that ZFS is free ($) and open source, while SF is 
costly
(sometimes very costly) and closed source.  The warts are just minor, mostly
temporary, skin-deep issues.
 -- richard

Mike Gerdts wrote:
 On Dec 28, 2007 8:40 AM, Sengor [EMAIL PROTECTED] wrote:
   
 Real comparison of features should include scenarios such as:

 - how ZFS/VxVM compare in BCV like environments (eg. when volumes are
 presented back to the same host)
 - how they all cope with various multipathing solutions out there
 - Filesystem vs Volume snapshots
  - Portability within cluster-like environments (SCSI reserves & LUN
  visibility to multiple synchronous hosts)
 - Disaster recovery scenarios
 - Ease/Difficulty with data migrations across physical arrays
 - Boot volumes
 - Online vs Offline attribute/parameter changes
 

 Very good list!

   
 I can't think of more right now, it's way past midnight here ;)
 

 How about these?

 - Integration with backup system
 - Active-active cluster (parallel file system) capabilities
 - Integration with OS maintenance activities (install, upgrade, patching, 
 etc.)
 - Relative performance on anticipated workload
 - Staffing issues (what do people know, how many hours to train, how
 long before proficiency)
 - Supportability on multiple platforms at the site (e.g. Solaris,
 Linux, HP-UX, AIX, ...)
  - Impact of failure modes (missing license keys, especially during major
  system changes; on-disk corruption)
 - Opportunities to do things previously not possible

 ZFS doesn't win on many of those, but with the improvements that I
 have seen throughout the storage stack it is somewhat likely that the
 required improvements are already on the roadmap.

   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to create whole disk links?

2007-12-28 Thread Eric Schrock
These 'whole disk' links are an artifact of the Solaris EFI
implementation, so they only appear once you have labeled a disk using
an EFI label.  ZFS itself doesn't need them to exist in order to
automatically slap an EFI label down.

If you're curious, this comes from the fact that the VTOC label
represents the label portion of the disk within the first slice, so if
you write over the first 8k of your slice, you'll trash your label (this
is why ZFS never writes to the first 8k of any device).  With EFI, the
goal was to separate out the label area from the slices themselves.  But
the label portion of the disk needed to be accessible to utilities, so
the end result was the 'c3t0d0' links without slices.

If you re-label your disks using EFI labels ('format -e') you will see
these links.  Or just let ZFS work its magic ;-)

- Eric

On Fri, Dec 28, 2007 at 07:35:09AM -0500, Kent wrote:
 
 I just installed SXCE_b78 while having one SuperMicro AOC-SAT2-MV8 card 
 installed and disks connected to the first 6 sata ports.  Now I've 
 installed two more AOC-SAT2-MV8 cards and added some more drives, but 
 I'm not getting the whole disk (:wd) links for them.  For instance, 
 the following shows just the original six drives having :wd entries - 
 how do I get all 24?
 
 # ls -l /dev/dsk/ | grep :wd$
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t0d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t1d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t2d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t3d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t4d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t5d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
 
 Following are the vdevs for my 24-disk array (6 rows x 4 cols):
 
 c3t0d0 c3t1d0 c3t2d0 c3t3d0
 c3t4d0 c3t5d0 c3t6d0 c3t7d0
 c4t0d0 c4t1d0 c4t2d0 c4t3d0
 c4t4d0 c4t5d0 c4t6d0 c4t7d0
 c5t0d0 c5t1d0 c5t2d0 c5t3d0
 c5t4d0 c5t5d0 c5t6d0 c5t7d0
 
 The 3 AOC-SAT2-MV8 cards are c3, c4, and c5 (I just added c4 and c5).
 
 Thanks!
 Kent
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, FishWorks    http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] how to create whole disk links?

2007-12-28 Thread Kent Watsen

Eric Schrock wrote:
 Or just let ZFS work its magic ;-)
   


Oh, I didn't realize that `zpool create` could be fed vdevs that didn't 
exist in /dev/dsk/ - and, as a bonus, it also creates the /dev/dsk/ links!

# zpool create -f tank raidz2 c3t0d0 c3t4d0 c4t0d0 c4t4d0 c5t0d0 c5t4d0
# ls -l /dev/dsk/ | grep :wd$
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t0d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t1d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t2d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t3d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t4d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  78 Dec 27 17:32 c3t5d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED],1/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  76 Dec 28 12:45 c4t0d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  76 Dec 27 22:38 c4t1d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  76 Dec 27 22:38 c4t4d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  76 Dec 28 12:45 c5t0d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd
lrwxrwxrwx   1 root root  76 Dec 28 12:45 c5t4d0 -> ../../devices/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1033,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:wd



Thanks for the pointer!

Kent



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Mertol Ozyoney
Good points. I will try to focus on these areas.

Very best regards

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]

 

 

From: Sengor [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 28, 2007 4:41 PM
To: [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

 

Perhaps a few that might help:

http://www.sun.com/software/whitepapers/solaris10/zfs_veritas.pdf
http://www.symantec.com/enterprise/stn/articles/article_detail.jsp?articleid=SF_and_ZFS_whitepaper_44545
http://www.serverwatch.com/tutorials/article.php/3663066 

I have yet to see a side-by-side feature comparison.

Real comparison of features should include scenarios such as:

- how ZFS/VxVM compare in BCV like environments (eg. when volumes are
presented back to the same host) 
- how they all cope with various multipathing solutions out there
- Filesystem vs Volume snapshots
- Portability within cluster-like environments (SCSI reserves & LUN
visibility to multiple synchronous hosts)
- Disaster recovery scenarios
- Ease/Difficulty with data migrations across physical arrays
- Boot volumes
- Online vs Offline attribute/parameter changes

I can't think of more right now, it's way past midnight here ;) 



On 12/28/07, Mertol Ozyoney [EMAIL PROTECTED] wrote:

Hi Everyone ;

 

I will soon be making a presentation comparing ZFS against Veritas Storage
Foundation. Do we have any document comparing their features?

 

regards

 

 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email [EMAIL PROTECTED]

 

 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
_/ sengork.blogspot.com /

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Richard Elling
Sengor wrote:
 While on a VCS course at a Symantec site, I was told VxVM is planned
 to be open sourced some time in the near future. In either case the cost
 is a large factor here; VxVM does not come cheap (unless you use VxSF
 Basic http://www.symantec.com/business/theme.jsp?themeid=sfbasic which
 is free).

VxVM has far fewer features than ZFS; you really can't compare them.
You could compare VxVM to SVM more directly.

 I see  VxSF Basic being an immediate competitor to ZFS where the cost
 does not count as much.

Yes, I think VxSF Basic is a good thing, but it would not exist if there
were no competitive pressures from the OS *and* hardware vendors.  The
competitive landscape for the low-end systems clearly dictates that, for
today, (software and hardware) RAID is free($).
  -- richard

 On 12/29/07, Richard Elling [EMAIL PROTECTED] wrote:
 While you could have a wart-by-wart comparison, please remember that the
 biggest difference is that ZFS is free ($) and open source, while SF is
 costly
 (sometimes very costly) and closed source.  The warts are just minor, mostly
 temporary, skin-deep issues.
  -- richard
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Carisdad
Sengor wrote:
 While on a VCS course at a Symantec site, I was told VxVM is planned
 to be open sourced some time in the near future. In either case the cost
 is a large factor here; VxVM does not come cheap (unless you use VxSF
 Basic http://www.symantec.com/business/theme.jsp?themeid=sfbasic which
 is free).

 I see  VxSF Basic being an immediate competitor to ZFS where the cost
 does not count as much.

 On 12/29/07, Richard Elling [EMAIL PROTECTED] wrote:
   
 While you could have a wart-by-wart comparison, please remember that the
 biggest difference is that ZFS is free ($) and open source, while SF is
 costly
 (sometimes very costly) and closed source.  The warts are just minor, mostly
 temporary, skin-deep issues.
  -- richard
 


   
VxSF Basic sounds like good cost competition, until you realize it is
limited to 4 data volumes and/or 4 filesystems and 2 or fewer CPU sockets.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [xen-discuss] ZFS very slow under xVM

2007-12-28 Thread James Dickens
one more time


On Dec 28, 2007 9:22 PM, James Dickens [EMAIL PROTECTED] wrote:

 s

 On Dec 5, 2007 4:48 AM, Jürgen Keil [EMAIL PROTECTED] wrote:

I use the following on my snv_77 system with 2
internal SATA drives
that show up with the 'ahci' driver.
  
   Thanks for the tip!  I saw my BIOS had a setting for
   SATA mode, but the selections are IDE or RAID.  It
   was in IDE and I figured RAID mode just enabled one
   of those silly low performance 0/1 settings...Didn't
   know it kicked it into AHCI...But it did!
  
    Unfortunately my drives aren't recognized now...I've
   asked over in the device list what's up
 
  That's the expected behaviour :-/   The physical device path
  for the root disk has changed by switching the S-ATA
  controller between P-ATA emulation and AHCI mode, and
  for that reason the root disk now uses a different device name
  (e.g. /dev/dsk/c2t0d0s0 instead of /dev/dsk/c0d0s0).
 
  The old device name that can be found in /etc/vfstab isn't
  valid any more.
 
  If you have no separate /usr filesystem, this can be fixed
  with something like this:
 
  - start boot from the hdd, this fails when trying to remount the
   root filesystem in read/write mode, and offers a single user login
 
  - login with the root password
 
  - remount the root filesystem read/write, using the physical
   device path for the disk from the /devices/... filesystem.
   The mount command should show you the physical device
   path that was used to mount the / filesystem read only.
 
   Example for the remount command:
 
    # mount -o remount,rw /devices/[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a /
 
  - Run devfsadm -v to create new /dev links for the disks (on
   the ahci controller)
 
  - run format; the AVAILABLE DISK SELECTIONS menu should show
   you the new device name for the disk
 
  # format
  Searching for disks...done
 
 
  AVAILABLE DISK SELECTIONS:
     0. c2t0d0 <DEFAULT cyl 48638 alt 2 hd 255 sec 63>
   /[EMAIL PROTECTED],0/pci1043,[EMAIL PROTECTED]/[EMAIL PROTECTED],0
 
  - now that we know the new disk device name, edit /etc/vfstab
   and update all entries that reference the old name with the new
   name
 
  - reboot
 
  Excellent tip; it appears to have solved my problem. Though my motherboard
 had a 3rd option that was AHCI mode, the RAID mode didn't work with Solaris.
 See my blog for system details.


 James Dickens
 uadmin.blogspot.com


 
  This message posted from opensolaris.org
  ___
  xen-discuss mailing list
  [EMAIL PROTECTED]
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed ZFS vs Veritas Comparison

2007-12-28 Thread Sengor
I believe it will work on systems which have more than 2 cores; however,
only 2 would actually end up being used by VxSF, and the 4-volume restriction
is not a hard software limit, from what I understand.

It's important to note it will not come with any support; perhaps this
is another point where ZFS rises above in terms of features?

  VxSF Basic sounds like good cost competition, until you realize it is
  limited to 4 data volumes and/or 4 filesystems and 2 or fewer CPU sockets.


-- 
_/ sengork.blogspot.com /
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Carson Gaspar
Frank Hofmann wrote:
 
 On Fri, 28 Dec 2007, Joerg Schilling wrote:
...
  Why do you believe there is a need for a pathconf() call?
  Either rename(2) succeeds or it fails with a cross-device error.
 
  Why do you have a NAME_MAX / SYMLINK_MAX query? You can just as well let
  such requests fail with ENAMETOOLONG.
  
  Why do you have a FILESIZEBITS query? There's EOVERFLOW to tell you.
 
 
 There's no _need_. But the convenience exists for others as well.

Because those two involve variable types and/or buffer allocations,
knowing them in advance is a major advantage. rename() will either succeed
or fail.

-- 
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Jonathan Loran

Joerg Schilling wrote:
 See: http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/stat.h.html

 The st_ino and st_dev fields taken together uniquely identify the file 
 within 
 the system.

 The identity of an open file cannot change during the lifetime of a process.
 Note that the renamed file may be open and the process may call fstat(2)
 on the open file before and after the rename(2). As rename(2) does not change
 the content of the file, it may only affect the time stamps of the file.

  Note that some programs call stat/fstat on files in order to compare file
  identities. What happens if program A calls stat(file1), then program B
  calls rename(file1, file2), and after that program A calls stat(file2)?
  A POSIX compliant system will guarantee that stat(file1) and stat(file2) will
  return the same st_dev/st_ino identity.

   

And consider what happens if the originating zfs is exported via NFS, 
and the destination isn't.  If an NFS client has the subject file open, 
we need to ensure the correct behavior after the move. 

The Unix file system behavior (sorry, I don't have references to POSIX or
RFCs here, just 25 years of experience..) would be that if a file is
moved between file systems, it is removed from the source, yet the file
storage will continue to exist until the last process which has the
file open closes it.  In effect, this means the file in the old location
(file system) should continue to exist indefinitely, if it is open by a
long-running process.  I fear if we aren't careful, we will introduce a
boatload of bugs.
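
A tiny sketch of that long-standing semantic (illustrative only; the file
name is arbitrary): data reached through an open descriptor stays available
even after the name is gone, which is what a copy-then-unlink mv relies on
today:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch: after unlink(2), the last name is gone but the open
 * descriptor (and the file's storage) remains usable until close(2).
 * "scratch" is an arbitrary example name.
 */
int
main(void)
{
    char buf[16];
    ssize_t n;
    int fd;

    if ((fd = open("scratch", O_RDWR | O_CREAT | O_TRUNC, 0600)) < 0) {
        perror("open");
        return (1);
    }
    (void) write(fd, "still here\n", 11);
    (void) unlink("scratch");                   /* name removed ... */
    (void) lseek(fd, 0, SEEK_SET);
    n = read(fd, buf, sizeof (buf) - 1);        /* ... data is not */
    if (n > 0) {
        buf[n] = '\0';
        (void) printf("%s", buf);
    }
    (void) close(fd);
    return (0);
}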

Hey, here's an idea:  We snapshot the file as it exists at the time of 
the mv in the old file system until all referring file handles are 
closed, then destroy the single file snap.  I know, not easy to 
implement, but that is the correct behavior, I believe.

All this said, I would love to have this feature introduced.  Moving
large file stores between zfs file systems would be so handy!  From my
own sloppiness, I've suffered dearly from the lack of it.

Jon

-- 


- _/ _/  /   - Jonathan Loran -   -
-/  /   /IT Manager   -
-  _  /   _  / / Space Sciences Laboratory, UC Berkeley
-/  / /  (510) 643-5146 [EMAIL PROTECTED]
- __/__/__/   AST:7731^29u18e3
 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss