Re: [zfs-discuss] Is ZFS file system supports short writes ?

2007-02-17 Thread dudekula mastan
If a write call attempts to write X bytes of data but only x bytes are actually written (where x < X), then we call that write a short write.
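
For illustration, here is a minimal C sketch (mine, not from the original poster; the helper name write_all() is made up for this example) of the usual way callers cope with short writes: check the return value of write(2) and retry with the remaining bytes until everything is written or a real error occurs.

#include <errno.h>
#include <unistd.h>

/*
 * Write all "len" bytes of "buf" to "fd", retrying after short writes.
 * Returns 0 on success, -1 on error (errno is left set by write()).
 */
static int
write_all(int fd, const void *buf, size_t len)
{
        const char *p = buf;

        while (len > 0) {
                ssize_t n = write(fd, p, len);
                if (n < 0) {
                        if (errno == EINTR)
                                continue;   /* interrupted, just retry */
                        return (-1);        /* real error */
                }
                /* Short write: advance past the bytes that were written. */
                p += n;
                len -= (size_t)n;
        }
        return (0);
}

Whether a given filesystem ever returns short writes for ordinary files is a separate question (which is what the original post was asking about); the loop above simply makes the caller robust either way.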
   
  -Masthan

Torrey McMahon <[EMAIL PROTECTED]> wrote:
> Robert Milkowski wrote:
>> Hello dudekula,
>>
>> Thursday, February 15, 2007, 11:08:26 AM, you wrote:
>>
>>> Hi all,
>>>
>>> Please let me know whether ZFS supports short writes?
>>
>> And what are short writes?
>
> http://www.pittstate.edu/wac/newwlassignments.html#ShortWrites :-P


 


[zfs-discuss] ZFS with SAN Disks and multipathing

2007-02-17 Thread Vikash Gupta
Hi,

I just deployed ZFS on a SAN-attached disk array and it's working fine.
How do I get the dual-pathing advantage of the disks (like DMP in Veritas)?

Can someone point me to the correct documentation and setup?

Thanks in Advance.

Rgds
Vikash Gupta
 
 


Re: [zfs-discuss] ZFS with SAN Disks and multipathing

2007-02-17 Thread Louwtjie Burger

http://docs.sun.com/source/819-0139/index.html

On 2/17/07, Vikash Gupta <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I just deployed ZFS on a SAN-attached disk array and it's working fine.
> How do I get the dual-pathing advantage of the disks (like DMP in Veritas)?
>
> Can someone point me to the correct documentation and setup?
>
> Thanks in Advance.
>
> Rgds
> Vikash Gupta






[zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-17 Thread Nicholas Lee

Is there a best-practice guide for using ZFS as a basic rackable small
storage solution?

I'm considering ZFS on a 2U, 12-disk, Xeon-based server system vs.
something like a second-hand FAS250.

The target environment is a mixture of Xen and VI hosts via iSCSI and NFS/CIFS.

Being able to take snapshots of running (or maybe paused) Xen iSCSI
volumes and re-export them for cloning and remote backup replication is
important. The aspect I like about ZFS is that the offsite storage
system can also be generic hardware, and thus much cheaper. Being able
to run PostgreSQL or MySQL directly on the storage server is a positive
as well, although a generic storage appliance has a better admin
profile.

Some questions:
1. How stable is ZFS? I'm tolerant of some sweat work to fix problems,
but data loss is unacceptable.
2. If drives need to be pulled and put into a new chassis, does ZFS
handle them having new device names and being out of order?
3. Is it possible to hot-swap drives with raidz(2)?
4. How does performance compare with 'brand name' storage systems?

Thanks
Nicholas


Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-17 Thread Jason J. W. Williams

Hi Nicholas,

ZFS itself is very stable and very effective as a fast FS in our
experience. If you browse the list archives you'll see that NFS
performance is pretty acceptable, with some performance/RAM quirks
around small files:

http://www.opensolaris.org/jive/message.jspa?threadID=19858
http://www.opensolaris.org/jive/thread.jspa?threadID=18394

To my understanding, the iSCSI driver is undergoing significant
performance improvements... maybe someone closer to this can help?


If by VI you are referring to VMware Infrastructure... you won't get
any support from VMware if you're using the iSCSI target on Solaris, as
it's not approved by them. Not that this is really a problem in my
experience, as VMware tech support is pretty terrible anyway.



> Some questions:
> 1. How stable is ZFS? I'm tolerant of some sweat work to fix problems,
> but data loss is unacceptable.


We haven't experienced any data loss, and have had some pretty nasty
things thrown at it (FC array rebooted unexpectedly).


> 2. If drives need to be pulled and put into a new chassis, does ZFS
> handle them having new device names and being out of order?


My understanding and experience here is yes. It'll read the ZFS labels
off the drives/slices.


> 3. Is it possible to hot-swap drives with raidz(2)?


That depends on your underlying hardware. To my knowledge, hot-swapping
is not dependent on the RAID level at all.


> 4. How does performance compare with 'brand name' storage systems?


No clue if you're referring to NetApp. Does anyone else know?

-J


[zfs-discuss] Google paper on disk reliability

2007-02-17 Thread Akhilesh Mritunjai
Hi Folks

I believe word has gone around already: Google engineers have published
a paper on disk reliability. It might supplement the ZFS FMA
integration and, well, all the numerous debates here about spares, etc.

To quote /. (Slashdot):

The Google engineers just published a paper on Failure Trends in a Large Disk 
Drive Population. Based on a study of 100,000 disk drives over 5 years they 
find some interesting stuff. To quote from the abstract: 'Our analysis 
identifies several parameters from the drive's self monitoring facility (SMART) 
that correlate highly with failures. Despite this high correlation, we conclude 
that models based on SMART parameters alone are unlikely to be useful for 
predicting individual drive failures. Surprisingly, we found that temperature 
and activity levels were much less correlated with drive failures than 
previously reported.'

The link to the paper is http://labs.google.com/papers/disk_failures.pdf
 
 


[zfs-discuss] Re: ZFS with SAN Disks and multipathing

2007-02-17 Thread JS
I'm using ZFS on both EMC and Pillar arrays with PowerPath and MPxIO, 
respectively. Both work fine - the only caveat is to drop your sd_queue to 
around 20 or so, otherwise you can run into an ugly display of bus resets.
 
 