Re: [Fwd: Re: [zfs-discuss] Re: disk write cache, redux]

2006-06-15 Thread Pawel Wojcik
Only SATA drives that operate under the SATA framework and SATA HBA drivers have this option available to them via format -e. That's because they are treated and controlled by the system as SCSI drives. From your e-mail it appears that you are talking about SATA drives connected to legacy control
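For reference, a write-cache check with format -e on a drive under the SATA framework looks roughly like this (a sketch from memory; menu names and output strings may differ by release):

    # format -e
    (select the disk)
    format> cache
    cache> write_cache
    write_cache> display
    Write Cache is enabled
    write_cache> disable
    write_cache> quit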

Re: [zfs-discuss] devid support for EFI partition improved zfs usability

2006-06-15 Thread Artem Kachitchkine
> I'm not sure if an unplug event generates an FMA event, but it will
> generate an unplug event (I think it is somewhat different from an error)
> which will be caught by Tamarack.

Tamarack is mostly for the desktop. Sure, you can use it to do an automatic 'zpool import' on hotplug, but not for fault recovery. FMA

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Gregory Shaw
I've got a pretty dumb question regarding SATA and write cache. I don't see options in 'format -e' on SATA drives for checking/setting the write cache. I've seen the options for SCSI drives, but not SATA. I'd like to help on the SATA write cache enable/disable problem, if I can. What am I

Re: [zfs-discuss] devid support for EFI partition improved zfs usability

2006-06-15 Thread Freeman Liu
Yes, this is all the functionality we have at the moment, until the next phase of ZFS/FMA integration. Seth put back some basic hotplug support in the sx4500 diagnosis engine, which we hope to generalize through libtopo and have a ZFS agent which understands how to behave in response to hotplug e

[zfs-discuss] Re: ZFS panic while mounting lofi device?

2006-06-15 Thread Nathanael Burton
Mark, I might know a little bit more about what's causing this particular panic. I'm currently running OpenSolaris as a guest OS under VMware Server RC1 on a CentOS 4.3 host OS. I have 3 x 300GB (~280GB usable) SATA disks in the server that are all formatted under CentOS like so: [EMAIL PROT
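For reference, a minimal sketch of the lofi-plus-ZFS combination from the subject line (hypothetical file and pool names; assumes a recent OpenSolaris build):

    # mkfile 256m /export/zpool.img
    # lofiadm -a /export/zpool.img
    /dev/lofi/1
    # zpool create testpool /dev/lofi/1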

Re: [zfs-discuss] RAID 0+1

2006-06-15 Thread Rich Teer
On Thu, 15 Jun 2006, Tom Gendron wrote:

> I may have missed this somewhere but I don't see a way to make mirrored
> stripes. I'm not sure I want or need to in real life, but I am curious.

I don't know if it is possible, but it's certainly not desirable. RAID 1+0 (stripes made from mirrors) has t
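For reference, ZFS builds the preferred RAID 1+0 layout when you list multiple mirror vdevs on the command line; a sketch with hypothetical device names:

    # zpool create tank mirror c0t0d0 c1t0d0 mirror c0t1d0 c1t1d0

ZFS stripes writes across the two mirror vdevs, giving stripes made from mirrors; there is no syntax for the reverse (a mirror of stripes).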

[zfs-discuss] RAID 0+1

2006-06-15 Thread Tom Gendron
Hi, I may have missed this somewhere, but I don't see a way to make mirrored stripes. I'm not sure I want or need to in real life, but I am curious. Am I right? -tomg

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Philip Brown
Roch wrote:
> Check here:
> http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157

distilled version:

    vdev_disk_open(vdev_t *vd, uint64_t *psize, uint64_t *ashift)
    /*...*/
        /*
         * If we own the whole disk, try to enable disk write caching.
         * We ignore error

Re: [zfs-discuss] ZFS + NFS, confused...

2006-06-15 Thread Casper . Dik
You're missing some of the daemons:

    daemon     337     1   0 11:41:03 ?        0:00 /usr/sbin/rpcbind
    daemon     469     1   0 11:41:04 ?        0:00 /usr/lib/nfs/lockd
    daemon     437     1   0 11:41:04 ?        0:00 /usr/lib/nfs/statd
    daemon     541     1   0 11:41:07 ?        0:00
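For reference, a sketch of bringing the NFS server and its dependencies online under SMF (standard service FMRIs; -r enables the dependencies as well):

    # svcadm enable -r svc:/network/nfs/server:default
    # svcs | grep nfs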

[zfs-discuss] ZFS + NFS, confused...

2006-06-15 Thread Patrick
Hi, I'm using Solaris Express build 40, and I'm trying to share a ZFS(-ed) 3310 array, but it doesn't seem to be working and there's some strange stuff happening in dmesg. Anyone have any ideas?

SVC:

    bash-3.00# svcs | grep nfs
    online         14:06:34 svc:/network/nfs/nlockmgr:default
    onlin
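For reference, the usual way to export a ZFS filesystem over NFS is the sharenfs property rather than /etc/dfs/dfstab; a sketch with a hypothetical dataset name:

    # zfs set sharenfs=on tank/export
    # zfs get sharenfs tank/export
    # share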

Re: [zfs-discuss] devid support for EFI partition improved zfs usability

2006-06-15 Thread Eric Schrock
On Thu, Jun 15, 2006 at 10:30:02AM +0800, Freeman Liu wrote:

> Hi, guys,
>
> I have added devid support for EFI (not putback yet) and tested it with a
> zfs mirror; now the mirror can recover even if a usb harddisk is unplugged
> and replugged into a different usb port.
>
> But there is still somet

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Roch
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157

-r

Phil Brown writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > I'm puzzled by 2 things.
> >
> > Naively I'd think a write_cache should not help a throughput
> > test, since the c

[zfs-discuss] zfs snapshot: cannot snapshot, dataset is busy

2006-06-15 Thread Jürgen Keil
> http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229

The problem is back, on a different system: a laptop running on-20060605 bits. Compared to snv_29, the error message has improved, though:

    # zfs snapshot hdd/[EMAIL PROTECTED]
    cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is b

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Phil Brown
Roch Bourbonnais - Performance Engineering wrote:
> I'm puzzled by 2 things. Naively I'd think a write_cache should not help a
> throughput test, since the cache should fill up, after which you should
> still be throttled by the physical drain rate. You clearly show that it
> helps; does anyone know why/how a

[zfs-discuss] Re: Re: disk write cache, redux

2006-06-15 Thread Anton B. Rang
The write cache decouples the actual write to disk from the data transfer from the host. For a streaming operation, this means that the disk can typically stream data onto tracks with almost no latency (because the cache can aggregate multiple I/O operations into full tracks which can be written
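A rough worked example of the latency the cache hides (illustrative numbers, not from the original post): a 7200 RPM disk completes a revolution every 60/7200 s, about 8.3 ms. If every synchronous 8 KB write had to wait out a revolution, the drive would top out near 120 writes/s, i.e. roughly 1 MB/s; with the cache aggregating those writes into full tracks, the same drive can write at something close to its media rate instead.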

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Roch
I was just on the phone with Andy Bowers. He cleared up that our SATA device drivers need some work. We basically do not have the necessary I/O concurrency at this stage. So the write_cache is actually a good substitute for tagged queuing. So that explains why we get more throughput _on SATA_ d
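To put numbers on that substitution (illustrative, not from the thread): with neither tagged queuing nor a write cache, only one command is in flight at a time, so at 5 ms of service time per I/O a stream of 128 KB writes is capped near 25 MB/s. With the write cache on, the drive acknowledges each write as soon as it lands in cache, so the host can issue the next command immediately and approach the media rate even without queued commands.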

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Jonathan Edwards
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering wrote:
> Naively I'd think a write_cache should not help a throughput test, since
> the cache should fill up, after which you should still be throttled by
> the physical drain rate. You clearly show that it helps; does anyone
> know why

Re: [zfs-discuss] Re: disk write cache, redux

2006-06-15 Thread Roch Bourbonnais - Performance Engineering
I'm puzzled by 2 things. Naively I'd think a write_cache should not help a throughput test, since the cache should fill up, after which you should still be throttled by the physical drain rate. You clearly show that it helps; does anyone know why/how a cache helps throughput? And the second thing...q