Only SATA drives that operate under the SATA framework and SATA HBA drivers
have this option available to them via format -e. That's because they are
treated and controlled by the system as SCSI drives.
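For drives that do go through the SATA framework, the setting lives under
format's expert menu; roughly (menu item names from memory, device selection
and output omitted):

  # format -e
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable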
From your e-mail it appears that you are talking about SATA drives
connected to legacy control
I'm not sure if an unplug event generates an FMA event, but it will generate
an unplug event (which I think is somewhat different from an error) that will
be caught by Tamarack.
Tamarack is mostly for the desktop. Sure, you can use it to do an automatic
'zfs import' on hotplug, but not for fault recovery. FMA
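To illustrate, what such a hotplug hook would automate is just the manual
import; the pool name below is a placeholder:

  # once the disk reappears, bring the pool back in
  zpool import tank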
I've got a pretty dumb question regarding SATA and write cache. I
don't see options in 'format -e' on SATA drives for checking/setting
write cache.
I've seen the options for SCSI drives, but not SATA.
I'd like to help on the SATA write cache enable/disable problem, if I
can.
What am I
Yes, this is all the functionality we have at the moment until the next
phase of ZFS/FMA integration. Seth putback some basic hotplug support
in the sx4500 diagnosis engine, which we hope to generalize through
libtopo and have a ZFS agent which understands how to behave in response
to hotplug e
Mark,
I might know a little bit more about what's causing this particular panic. I'm
currently running OpenSolaris as a guest OS under VMware Server RC1 on a CentOS
4.3 host OS. I have 3 x 300GB (~280GB usable) SATA disks in the server that
are all formatted under CentOS like so:
[EMAIL PROT
On Thu, 15 Jun 2006, Tom Gendron wrote:
> I may have missed this somewhere but I don't see a way to make mirrored
> stripes. I'm not sure I want or need to in real life, but I am curious.
I don't know if it is possible, but it's certainly not desirable. RAID 1+0
(stripes made from mirrors) has t
Hi,
I may have missed this somewhere but I don't see a way to make mirrored
stripes. I'm not sure I want or need to in real life, but I am curious.
Am I right?
-tomg
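For what it's worth, ZFS builds the 1+0 shape directly: each 'mirror' keyword
in 'zpool create' starts a new mirrored vdev, and the pool stripes across all
of them. A sketch with made-up device names:

  # stripe across two two-way mirrors (RAID 1+0)
  zpool create tank mirror c1t0d0 c1t1d0 mirror c2t0d0 c2t1d0
  zpool status tank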
Roch wrote:
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
distilled version:
vdev_disk_open(vdev_t *vd, uint64_t *psize, uint64_t *ashift)
/* ... */
/*
 * If we own the whole disk, try to enable disk write caching.
 * We ignore errors because it's OK if we can't do it.
 */
if (vd->vdev_wholedisk == 1) {
        int wce = 1;
        (void) ldi_ioctl(dvd->vd_lh, DKIOCSETWCE, (intptr_t)&wce, FKIOCTL, kcred, NULL);
}
You're missing some of the daemons:
daemon 337 1 0 11:41:03 ? 0:00 /usr/sbin/rpcbind
daemon 469 1 0 11:41:04 ? 0:00 /usr/lib/nfs/lockd
daemon 437 1 0 11:41:04 ? 0:00 /usr/lib/nfs/statd
daemon 541 1 0 11:41:07 ? 0:00
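If it's just the services that are down, enabling the NFS server together
with its dependencies should bring these back; a sketch (the filesystem name
is a placeholder):

  # enable nfs/server and, recursively, everything it depends on
  svcadm enable -r svc:/network/nfs/server:default
  # then share the filesystem from the ZFS side
  zfs set sharenfs=on pool/fs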
Hi,
I'm using Solaris Express build 40, and I'm trying to share a ZFS filesystem
on a 3310 array over NFS, but it doesn't seem to be working and there's some
strange stuff happening in dmesg. Anyone have any ideas?
SVC :
bash-3.00# svcs | grep nfs
online 14:06:34 svc:/network/nfs/nlockmgr:default
onlin
On Thu, Jun 15, 2006 at 10:30:02AM +0800, Freeman Liu wrote:
>
> Hi, guys,
>
> I have added devid support for EFI (not putback yet) and tested it with a
> ZFS mirror; now the mirror can recover even if a USB hard disk is unplugged
> and replugged into a different USB port.
>
> But there is still somet
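For a quick sanity check after the replug (pool name made up), something like

  zpool status -x usbpool

should report the pool healthy again once the resilver completes.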
Check here:
http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/fs/zfs/vdev_disk.c#157
-r
Phil Brown writes:
> Roch Bourbonnais - Performance Engineering wrote:
> > I'm puzzled by 2 things.
> >
> > Naively I'd think a write_cache should not help a throughput
> > test, since the c
> http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229
The problem is back, on a different system: a laptop running on-20060605 bits.
Compared to snv_29, the error message has improved, though:
# zfs snapshot hdd/[EMAIL PROTECTED]
cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is b
Roch Bourbonnais - Performance Engineering wrote:
I'm puzzled by 2 things.
Naively I'd think a write_cache should not help a throughput
test, since the cache should fill up, after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; does anyone know why/how a
The write cache decouples the actual write to disk from the data transfer from
the host. For a streaming operation, this means that the disk can typically
stream data onto tracks with almost no latency (because the cache can aggregate
multiple I/O operations into full tracks which can be written
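A back-of-envelope illustration (assumed, round numbers): at 7200 rpm one
revolution takes about 8.3 ms, so a drive that has to wait out rotational
latency on every small write is capped near 120 writes/s no matter how fast
the interface is. With the write cache acknowledging immediately and
coalescing those writes into full-track operations, the same drive can run at
its media streaming rate instead.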
I was just on the phone with Andy Bowers. He cleared up that
our SATA device drivers need some work. We basically do not
have the necessary I/O concurrency at this stage. So the
write_cache is actually a good substitute for tag queuing.
That explains why we get more throughput _on SATA_ d
On Jun 15, 2006, at 06:23, Roch Bourbonnais - Performance Engineering
wrote:
Naively I'd think a write_cache should not help a throughput
test, since the cache should fill up, after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; does anyone know why
I'm puzzled by 2 things.
Naively I'd think a write_cache should not help a throughput
test, since the cache should fill up, after which you should still be
throttled by the physical drain rate. You clearly show that
it helps; does anyone know why/how a cache helps throughput?
And the second thing...q