Re: [zfs-discuss] Snapshot / Rollback at a file level

2006-05-31 Thread Roland Mainz
Scott Dickson wrote: > A customer asked today about the ability to snapshot and rollback > individual files. They have an environment where users might generate > lots of files, but want only a portion of them to be included in a > snapshot. Moreover, typically when they recover a file, they only

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Roland Mainz
[EMAIL PROTECTED] wrote: > > >Well, I don't know about his particular case, but many QFS clients > >have found the separation of data and metadata to be invaluable. The > >primary reason is that it avoids disk seeks. We have QFS customers who > >are running at over 90% of theoretical bandwidth o

[zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Roland Mainz
[EMAIL PROTECTED] wrote: > >UNIX admin wrote: > >> > There's still an opening in the shared filesystem > >> > space (multi-reader > >> > and multi-writer). Fix QFS, or extend ZFS? > >> > >> That one's a no-brainer, innit? Extend ZFS and plough on. > > > >Uhm... I think this is not that easy. Based

Re: [zfs-discuss] ZFS root filesystem and sys-suspend(1M) ?

2006-05-31 Thread Roland Mainz
Lori Alt wrote: > Roland Mainz wrote: > > Will the initial ZFS root filesystem putback include support for system > > suspend (see sys-suspend(1M)) on SPARC ? > It is our intention to support system suspend on SPARC > when booted off a zfs root file system. That would be very cool! :-) BTW: Plea

Re: [zfs-discuss] 3510 configuration for ZFS

2006-05-31 Thread grant beattie
On Wed, May 31, 2006 at 03:28:12PM +0200, Roch Bourbonnais - Performance Engineering wrote: > Hi Grant, this may provide some guidance for your setup; > > it's somewhat theoretical (take it for what it's worth) but > it spells out some of the tradeoffs in the RAID-Z vs Mirror > battle: > > >

Re[2]: [zfs-discuss] ZFS writes something...

2006-05-31 Thread Robert Milkowski
Hello Matty, Wednesday, May 31, 2006, 10:46:29 PM, you wrote: M> On Wed, 31 May 2006, Robert Milkowski wrote: >> Hello zfs-discuss, >> >> I noticed on a nfs server with ZFS that even with atime set to off >> and clients only reading data (almost 100% reads - except some >> unlinks()) I still

Re: [zfs-discuss] ZFS writes something...

2006-05-31 Thread Matthew Ahrens
On Wed, May 31, 2006 at 12:26:55AM +0200, Robert Milkowski wrote: > Hello zfs-discuss, > > I noticed on a nfs server with ZFS that even with atime set to off > and clients only reading data (almost 100% reads - except some > unlinks()) I still can see some MB/s being written according to >

Re: [zfs-discuss] ZFS writes something...

2006-05-31 Thread Matty
On Wed, 31 May 2006, Robert Milkowski wrote: Hello zfs-discuss, I noticed on a nfs server with ZFS that even with atime set to off and clients only reading data (almost 100% reads - except some unlinks()) I still can see some MB/s being written according to zpool iostat. What could be the c

Re[2]: [zfs-discuss] Re: Big IOs overhead due to ZFS?

2006-05-31 Thread Robert Milkowski
Hello Matthew, Wednesday, May 31, 2006, 8:09:08 PM, you wrote: MA> There are a few related questions that I think you want answered. MA> 1. How does RAID-Z affect performance? MA> When using RAID-Z, each filesystem block is spread across (typically) MA> all disks in the raid-z group. So to a f

Re: [zfs-discuss] Re: Big IOs overhead due to ZFS?

2006-05-31 Thread Matthew Ahrens
There are a few related questions that I think you want answered. 1. How does RAID-Z affect performance? When using RAID-Z, each filesystem block is spread across (typically) all disks in the raid-z group. So to a first approximation, each raid-z group provides the iops of a single disk (but the
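The rule of thumb above can be put into numbers. This is a back-of-the-envelope sketch, not from the thread: the per-disk IOPS figure and disk count are assumptions, chosen only to show how the two layouts scale for random reads.

```python
# Sketch of Matthew Ahrens's point: for random reads, a raid-z group
# behaves roughly like a single disk (every filesystem block is spread
# across all members, so every read touches every disk), while a stripe
# of mirrors lets each disk service reads independently.
# DISK_IOPS and DISKS are hypothetical numbers, not from the thread.

DISK_IOPS = 150   # assumed random-read IOPS of one spindle
DISKS = 8         # assumed total disks in the pool

def raidz_read_iops(disks, disk_iops):
    """One raid-z group: all disks move together, so random-read
    IOPS approximate those of a single disk."""
    return disk_iops

def mirror_read_iops(disks, disk_iops):
    """Stripe of mirrors: reads are independent per disk, so
    random-read IOPS scale with the disk count."""
    return disks * disk_iops

print(raidz_read_iops(DISKS, DISK_IOPS))   # 150
print(mirror_read_iops(DISKS, DISK_IOPS))  # 1200
```

Under these assumptions the mirror layout offers roughly `DISKS` times the random-read IOPS of a single raid-z group of the same width, which is the tradeoff the thread is weighing.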

Re: [zfs-discuss] 3510 configuration for ZFS

2006-05-31 Thread Erik Trimble
For things like the 3510FC which (can) have hardware RAID, I've been hearing that ZFS is preferable to using the HW RAID controller to define arrays. I understand the rationale and logic behind these arguments. However, most HW RAID controllers have a large amount of NVRAM, which _really_ helps write per

Re: [zfs-discuss] ZFS root filesystem and sys-suspend(1M) ?

2006-05-31 Thread Lori Alt
If sys-suspend is ever made to work in general for Solaris x86, I expect it will work with zfs root file systems too. Lori Nathan Kroenert wrote: Not X86? :( (Yes - I know there are lots of other things that need to happen first, but :( nonetheless... ) Nathan. On Wed, 2006-05-31 at 01:51,

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Anton Rang
On May 31, 2006, at 10:21 AM, Bill Sommerfeld wrote: Hunh. Gigabit ethernet devices typically implement some form of interrupt blanking or coalescing so that the host cpu can batch I/O completion handling. That doesn't exist in FC controllers? Not in quite the same way, AFAIK. Usually there

[zfs-discuss] Re: Big IOs overhead due to ZFS?

2006-05-31 Thread Robert Milkowski
Another pool - different array, different host, different workload. And again - summary read throughput to all disks in a pool is 10x bigger than to the pool itself. Any idea? bash-3.00# zpool iostat -v 1 capacity operations bandwidth pool

[zfs-discuss] Re: Big IOs overhead due to ZFS?

2006-05-31 Thread Robert Milkowski
That's interesting - 'zpool iostat' shows quite a small read volume to any pool, however if I run 'zpool iostat -v' then I can see that while the read volume to a pool is small, the read volume to each disk is actually quite large, so in summary I get over 10x the read volume if I sum all disks in a pool than o
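One hedged explanation for this kind of gap (an assumption on my part, not a conclusion from the thread): ZFS reads and checksums whole records, so an application issuing small reads against large records pulls far more bytes off the disks than it returns from the pool. The record and request sizes below are illustrative, not measured values from Robert's pools.

```python
# Hypothetical read-amplification arithmetic: if the application reads
# small chunks but ZFS must fetch (and checksum) the entire record,
# per-disk bytes read can dwarf the logical bytes the pool delivers.
# Both sizes here are assumptions, not numbers from the thread.

RECORDSIZE = 128 * 1024   # default ZFS recordsize
APP_READ = 8 * 1024       # assumed small application read

amplification = RECORDSIZE // APP_READ
print(amplification)      # 16
```

An amplification factor of 16 under these assumed sizes is at least the right order of magnitude for the "over 10x" that 'zpool iostat -v' reports, though the thread's actual cause may differ.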

Re: Re[2]: [zfs-discuss] cluster features

2006-05-31 Thread Joe Little
Well, here's my previous summary off list to different Solaris folk (regarding NFS serving via ZFS and iSCSI): I want to use ZFS as a NAS with no bounds on the backing hardware (not restricted to one box's capacity). Thus, there are two options: FC SAN or iSCSI. In my case, I have multi-building

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Bill Sommerfeld
On Wed, 2006-05-31 at 10:48, Anton Rang wrote: >We generally take one interrupt for each I/O >(if the CPU is fast enough), so instead of taking one >interrupt for 8 MB (for instance), we take 64. Hunh. Gigabit ethernet devices typically implement some form of interrupt blanking or coa
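Anton's quoted figure is straightforward arithmetic: with one interrupt per I/O and a 128K maximum I/O size, a large transfer pays one interrupt per 128K chunk. The sketch below just restates that division; the 8 MB transfer size comes from his example.

```python
# Anton Rang's interrupt arithmetic: one interrupt per I/O at a 128K
# I/O size means an 8 MB transfer costs 64 interrupts instead of the
# single interrupt a controller doing one 8 MB I/O would take.

IO_SIZE = 128 * 1024            # 128K per-I/O limit
TRANSFER = 8 * 1024 * 1024      # 8 MB transfer from Anton's example

interrupts = TRANSFER // IO_SIZE
print(interrupts)   # 64
```

This is the cost that interrupt coalescing (as on gigabit ethernet NICs, per Bill Sommerfeld's question) would amortize if FC controllers supported it.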

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Anton Rang
On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance Engineering wrote: I'm not taking a stance on this, but if I keep a controller full of 128K I/Os, and assuming they are targeting contiguous physical blocks, how different is that to issuing a very large I/O? There are d

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Roch Bourbonnais - Performance Engineering
> I think ZFS should do fine in streaming mode also, though there are > currently some shortcomings, such as the mentioned 128K I/O size. It may eventually. The lack of direct I/O may also be an issue, since some of our systems don't have enough main memory bandwidth to support data be

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Roch Bourbonnais - Performance Engineering
Anton wrote: (For what it's worth, the current 128K-per-I/O policy of ZFS really hurts its performance for large writes. I imagine this would not be too difficult to fix if we allowed multiple 128K blocks to be allocated as a group.) I'm not taking a stance on this, but if I keep a co

Re: [zfs-discuss] 3510 configuration for ZFS

2006-05-31 Thread Roch Bourbonnais - Performance Engineering
Hi Grant, this may provide some guidance for your setup; it's somewhat theoretical (take it for what it's worth) but it spells out some of the tradeoffs in the RAID-Z vs Mirror battle: http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to As for serving NFS, the user e

Re: [zfs-discuss] Re: [osol-discuss] Re: I wish Sun would open-source "QFS"... / was: Re: Re: Distributed File System for Solaris

2006-05-31 Thread Darren J Moffat
Anton Rang wrote: It's also worth noting that the customers for whom streaming is a real issue tend to be those who are willing to spend a lot of money for reliability (think replicating the whole system+storage) rather than compromising performance; for them, simply the checksumming overhead and