Hi Matt, ZFS-team,
Problem
---
When calling pwrite(2), libzpool.so splits each write into two. This is
done to simulate partial disk writes. A side effect is that the
resulting writes are not block aligned, so when the underlying device is
a raw device, the write fails.
Note: ztest always
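A minimal Python sketch of the alignment problem described above. The 100-byte split point and the 512-byte sector size are assumptions for illustration only; the actual split logic lives in libzpool's C code:

```python
# Sketch: one sector-aligned pwrite(2) is split into two sub-writes to
# simulate a partial disk write, and neither piece is sector aligned
# any more. Split point and sector size are made-up illustrative values.
DEV_BSIZE = 512  # assumed raw-device sector size

def split_write(offset, length, split=100):
    """Return the (offset, length) pairs of the two sub-writes."""
    split = min(split, length)
    return [(offset, split), (offset + split, length - split)]

def is_aligned(offset, length, bsize=DEV_BSIZE):
    return offset % bsize == 0 and length % bsize == 0

subs = split_write(0, 4096)
assert is_aligned(0, 4096)                         # the original write is fine
assert not any(is_aligned(o, l) for o, l in subs)  # both pieces would fail on a raw device
```

On a raw device each pwrite must start and end on a sector boundary, which is why the split version errors where the original succeeded.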
On Sun, Aug 19, 2007 at 09:32:02AM +0200, Louwtjie Burger wrote:
http://blogs.sun.com/realneel/entry/zfs_and_databases
http://www.sun.com/servers/coolthreads/tnb/parameters.jsp
http://www.sun.com/servers/coolthreads/tnb/applications_oracle.jsp
Be careful with long-running single
Marko,
The ZFS Admin Guide has been updated to include the delegated
administration feature.
See Chapter 8, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Matthew Ahrens wrote:
Marko Milisavljevic wrote:
Hmm... my b69 installation understands zfs allow, but man zfs
On Sun, Aug 19, 2007 at 05:45:18PM -0700, Mark wrote:
Basically, the setup is a large volume of Hi-Def video is being streamed
from a camera, onto an editing timeline. This will be written to a
network share. Due to the large amounts of data, ZFS is a really good
option for us. But we need a
Damian,
Are you using compression=on? There was a bug in the past (since
fixed) where, if compression was turned on, it was computed by a
single thread. The ZFS team fixed the user-data part of it (i.e.,
user data is compressed in parallel now), but the metadata part
is still compressed by one
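A hedged Python sketch of the serial-vs-parallel distinction described above, using zlib as a stand-in for ZFS's compression (the block sizes and worker count are arbitrary; this is not ZFS code):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Eight 128K "blocks" of user data (sizes chosen arbitrarily).
blocks = [bytes([i]) * 128 * 1024 for i in range(8)]

# Old behavior (and, per the message above, still the case for metadata):
# every block is compressed by one thread, one after another.
serial = [zlib.compress(b) for b in blocks]

# Fixed user-data path: blocks are compressed concurrently.
# zlib releases the GIL while compressing, so threads genuinely overlap.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(zlib.compress, blocks))

# The output is identical either way; only the degree of concurrency
# (and hence throughput on multi-core machines) differs.
assert serial == parallel
```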
On Fri, Aug 10, 2007 at 02:20:42PM +0100, Alec Muffett wrote:
Does anyone on this list have experience with a recent board with 6 or more
SATA ports that they know is supported?
Well so far I have only populated 5 of the ports I have available,
but my writeup with my 9-port SATA ASUS
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
I am asking because I could have sworn that I read somewhere that it
isn't, but I can't find the reference.
Thanks,
Brian
--
- Brian Gupta
Brandorr wrote:
Is ZFS efficient at handling huge populations of tiny-to-small files -
for example, 20 million TIFF images in a collection, each between 5
and 500k in size?
Do you mean efficient in terms of space used? If so, then in general it is
quite efficient. E.g., files 128k space is
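A rough Python sketch of the space accounting for small files, assuming the commonly described ZFS behavior that a file smaller than the recordsize (128K by default) is stored in a single block just large enough to hold it, rounded up to a sector multiple. The 512-byte sector size is an assumption, and compression, metadata, and redundancy overhead are ignored:

```python
# Rough model of per-file space use under ZFS's variable block sizes.
# Assumptions: 128K default recordsize, 512-byte sectors, no compression,
# no metadata or raidz/mirror overhead. Illustrative only, not ZFS code.
RECORDSIZE = 128 * 1024
SECTOR = 512

def allocated(size):
    """Approximate bytes allocated for a file of the given logical size."""
    if size >= RECORDSIZE:
        # Large files are stored as a series of full recordsize blocks.
        nblocks = -(-size // RECORDSIZE)  # ceiling division
        return nblocks * RECORDSIZE
    # Small files: a single block, rounded up to a whole sector.
    return max(SECTOR, -(-size // SECTOR) * SECTOR)

# A 5K TIFF consumes ten sectors, not a full 128K record, so the
# worst-case waste per small file is under one sector.
assert allocated(5 * 1024) == 5 * 1024
assert allocated(500) == 512
```

Under this model, 20 million files of 5-500K would waste at most one sector each to rounding, which is why small-file collections are generally space-efficient on ZFS.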
After reinstalling a snv62 machine and upgrading a snv64a machine to
snv70, I'm running into "no pools available to import" when I try to
import two existing pools (that were previously mounted on these
machines).
On one host (a new install, I wiped out the snv62 install) the hint
I'm getting is: