I would like to test ZFS boot on my home server, but according to bug
6486493 ZFS boot cannot be used if the disks are attached to a SATA
controller handled by a driver using the new SATA framework (which
is my case: the si3124 driver). I have never heard of anyone having
successfully used ZFS boot
Rayson Ho writes:
1) Modern DBMSs cache database pages in their own buffer pool because
it is less expensive than accessing the data through the OS. (IIRC, MySQL's
MyISAM is the only engine that relies on the FS cache, but a lot of MySQL
sites use InnoDB, which has its own buffer pool.)
The DB
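For context, InnoDB's buffer pool is sized in my.cnf, along these lines (the 4G value is purely illustrative, not a recommendation):
[mysqld]
# InnoDB caches pages itself rather than relying on the FS cache
innodb_buffer_pool_size = 4G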
I know that the first release of ZFS boot will support single-disk and
mirrored configurations. With ZFS copies support in Solaris 10 U5 (I
hope), I was wondering about breaking my current mirror and using both
disks in stripe mode, protecting the
Jesus Cea wrote:
I know that the first release of ZFS boot will support single-disk and
mirrored configurations. With ZFS copies support in Solaris 10 U5 (I
hope), I was wondering about breaking my current mirror and using both
disks in stripe
Darren J Moffat wrote:
Why would you do that, when it would reduce your protection and ZFS boot
can boot from a mirror anyway?
I guess ditto blocks would be protection enough, since the data would be
duplicated between both disks. Of course,
Jesus Cea wrote:
Darren J Moffat wrote:
Why would you do that, when it would reduce your protection and ZFS boot
can boot from a mirror anyway?
I guess ditto blocks would be protection enough, since the data would be
duplicated between both disks. Of course, backups are your friend.
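For reference, ditto blocks for user data are enabled per dataset via the copies property; something like this, with hypothetical pool/dataset names (note it only affects newly written blocks):
# zfs set copies=2 tank/export
# zfs get copies tank/export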
Moore, Joe wrote:
It would be really nice if there were some sort of enforced ditto
separation (fail with "device full" if unable to satisfy it), but
that doesn't exist currently.
How would that be different from a mirror?
I guess it is different from a mirror because only some datasets in the
pool
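A sketch of that distinction, with made-up dataset names (copies is a per-dataset property, whereas a mirror covers the whole pool):
# zfs set copies=2 tank/critical
# zfs get -r copies tank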
Matty writes:
On 10/3/07, Roch - PAE [EMAIL PROTECTED] wrote:
Rayson Ho writes:
1) Modern DBMSs cache database pages in their own buffer pool because
it is less expensive than accessing the data through the OS. (IIRC, MySQL's
MyISAM is the only engine that relies on the FS cache, but
Richard,
Having read your blog regarding the copies feature, do you have an
opinion on whether mirroring or copies are better for a SAN situation?
It strikes me that since we're discussing a SAN and not local physical
disk, for a system needing 100GB of usable storage (size chosen
for
On Wed, Oct 03, 2007 at 10:42:53AM +0200, Roch - PAE wrote:
Rayson Ho writes:
2) Also, direct I/O is faster because it avoids double buffering.
A piece of data can be in one buffer, 2 buffers, 3
buffers. That says nothing about performance. More below.
So I guess you mean DIO is
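For reference, the classic way to get DIO on Solaris UFS is the forcedirectio mount option, e.g. (device and mount point are made up):
# mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /oradata
That bypasses the page cache, so the DB buffer pool holds the only copy of the data.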
Hi,
I hope someone can help, cos at the moment ZFS's logic seems a little askew.
I just swapped a failing 200GB drive that was one half of a 400GB gstripe
device which I was using as one of the devices in a 3-device raidz1. When the
OS came back up after the drive had been changed, the necessary metadata
Hi,
we are running a V240 with a ZFS pool mirrored across two 3310s (SCSI). During
a redundancy test, when offlining one 3310, all ZFS data became unusable:
- zpool hangs without displaying any info
- trying to read the filesystem hangs the command (df, ls, ...)
- /var/log/messages keeps logging errors for the
This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We
now store physical device paths with the vnodes, so even though the
SATA framework doesn't correctly support open-by-devid in early boot, we
can fall back to the device path just fine. ZFS root works great on
thumper, which
Has anyone figured out a way to make pfinstall work
sufficiently to just pkgadd all the packages in a DVD
(or netinstall) image into a new ZFS root?
I have a ZFS root pool and an initial root FS that was
copied in from a cpio archive of a previous UFS root.
That much works great. BFU works for
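For anyone trying the same thing, the manual equivalent presumably boils down to pkgadd with an alternate root, roughly (media path and package list are illustrative, not exhaustive):
# pkgadd -R /a -d /cdrom/cdrom0/Solaris_10/Product SUNWcsr SUNWcsu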
On 10/3/07, Roch - PAE [EMAIL PROTECTED] wrote:
We do not retain 2 copies of the same data.
If the DB cache is made large enough to consume most of memory,
the ZFS copy will quickly be evicted to stage other I/Os on
their way to the DB cache.
What problem does that pose?
Hi Roch,
1) The
MP wrote:
Hi,
I hope someone can help, cos at the moment ZFS's logic seems a little askew.
I just swapped a failing 200GB drive that was one half of a 400GB gstripe
device which I was using as one of the devices in a 3-device raidz1. When the
OS came back up after the drive had been changed, the
On Oct 3, 2007, at 10:31 AM, Roch - PAE wrote:
If the DB cache is made large enough to consume most of memory,
the ZFS copy will quickly be evicted to stage other I/Os on
their way to the DB cache.
What problem does that pose?
Personally, I'm still not completely sold on the performance
Hey Roch -
We do not retain 2 copies of the same data.
If the DB cache is made large enough to consume most of memory,
the ZFS copy will quickly be evicted to stage other I/Os on
their way to the DB cache.
What problem does that pose?
Can't answer that question empirically, because we
more below...
MP wrote:
On 03/10/2007, Richard Elling [EMAIL PROTECTED] wrote:
Yes. From the fine manual on zpool:
zpool replace [-f] pool old_device [new_device]
Replaces old_device with new_device. This is equivalent
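For example (device names are hypothetical):
# zpool replace tank c1t2d0 c1t3d0   (migrate to a different disk)
# zpool replace tank c1t2d0          (new disk in the same slot)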
On Wed, Oct 03, 2007 at 12:10:19PM -0700, Richard Elling wrote:
# zpool scrub tank
# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices could not be used because the label is
        missing or invalid. Sufficient replicas exist for the
I think I might have run into the same problem. At the time I assumed I was
doing something wrong, but...
I made a b72 raidz out of three new 1GB virtual disks in VMware. I shut the VM
off, replaced one of the disks with a new 1.5GB virtual disk. No matter what
command I tried, I couldn't
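The obvious things to try would be a same-slot replace, possibly forced (device name is made up):
# zpool replace tank c0t1d0
# zpool replace -f tank c0t1d0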
Rayson Ho wrote:
On 10/3/07, Roch - PAE [EMAIL PROTECTED] wrote:
We do not retain 2 copies of the same data.
If the DB cache is made large enough to consume most of memory,
the ZFS copy will quickly be evicted to stage other I/Os on
their way to the DB cache.
What problem does that pose?
On Oct 3, 2007, at 5:21 PM, Richard Elling wrote:
Slightly off-topic: in looking at some field data this morning (looking
for something completely unrelated) I notice that the use of directio
on UFS is declining over time. I'm not sure what that means... hopefully
not more performance
Would the nv_sata driver also be used on the nForce 590 SLI? I found an Asus M2N32 WS
PRO at my hw shop which has nine internal SATA connectors.
I believe so. The Solaris device detection tool will show the MCP version, too.
http://www.sun.com/bigadmin/hcl/hcts/device_detect.html
-- richard
Christopher wrote:
Would the nv_sata driver also be used on the nForce 590 SLI? I found an Asus M2N32
WS PRO at my hw shop which has nine internal
Hi Dale,
We're testing out the enhanced arc_max enforcement (tracking DNLC
entries) using Build 72 right now. Hopefully it will fix the memory
creep, which is the only real downside to ZFS for DB work, it seems to
me. Frankly, our DB loads have improved performance with ZFS. I
suspect it's because
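For reference, until then the usual workaround is to cap the ARC in /etc/system; the value below is just an example (1 GB), not a recommendation:
* limit the ZFS ARC to 1 GB
set zfs:zfs_arc_max = 0x40000000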
Postgres assumes that the OS takes care of caching:
PLEASE NOTE. PostgreSQL counts a lot on the OS to cache data files
and hence does not bother with duplicating its file caching effort.
The shared_buffers parameter assumes that the OS is going to cache a lot
of files and hence it is generally
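So a Postgres box that leans on the OS/ARC would keep shared_buffers modest in postgresql.conf, something like (value purely illustrative):
# let the OS/ARC do the heavy caching
shared_buffers = 32MB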
Anyhow, in the case of DBs, the ARC indeed becomes a vestigial organ. I'm
surprised that this is being met with skepticism, considering that
Oracle highly recommends direct I/O be used, and, IIRC, Oracle
performance was the main motivation for adding DIO to UFS back in
Solaris 2.6. This isn't a
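For reference, on the Oracle side the knob is (IIRC) the filesystemio_options init parameter, e.g. in init.ora/spfile:
# ask Oracle for direct and async I/O where the filesystem supports it
filesystemio_options = setall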
On Oct 3, 2007, at 3:44 PM, Dale Ghent wrote:
On Oct 3, 2007, at 5:21 PM, Richard Elling wrote:
Slightly off-topic: in looking at some field data this morning (looking
for something completely unrelated) I notice that the use of directio
on UFS is declining over time. I'm not sure what
Some people are just dumb. Take me, for instance... :)
Was just looking into ZFS on iSCSI and doing some painful and unnatural
things to my boxes and dropped a panic I was not expecting.
Here is what I did.
Server: (S10_u4 SPARC)
- zpool create usb /dev/dsk/c4t0d0s0
(on a 4GB USB stick,