Robert Milkowski writes:
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as
Wee Yeh Tan writes:
Robert,
On 4/27/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
okay, let's say that it is not. :)
Imagine that I setup a box:
-
Chad Mynhier writes:
On 4/27/07, Erblichs [EMAIL PROTECTED] wrote:
Ming Zhang wrote:
Hi All
I wonder if anyone has an idea about the performance loss caused by COW
in ZFS? If you have to read old data out before writing it to some other
place, it involves a disk seek.
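The seek cost Ming is asking about follows from how copy-on-write relocates rewritten blocks. Below is a toy sketch of that effect, assuming nothing about ZFS's real allocator (the bump allocator and block size here are made up for illustration): a block rewritten in the middle of a file ends up physically after the file's last block, so a later sequential read needs an extra seek.

```python
# Toy copy-on-write block store (illustrative only -- this is NOT ZFS's
# actual allocator, just a sketch of why COW scatters rewritten blocks).

class CowStore:
    BLOCK = 4096

    def __init__(self):
        self.next_free = 0   # simple bump allocator: always append
        self.ptr = {}        # logical block number -> physical offset
        self.disk = {}       # physical offset -> data

    def write(self, logical, data):
        # COW: never overwrite in place.  Allocate a fresh physical
        # block, write the new data there, then repoint the logical block.
        phys = self.next_free
        self.next_free += self.BLOCK
        self.disk[phys] = data
        self.ptr[logical] = phys

    def read(self, logical):
        return self.disk[self.ptr[logical]]

store = CowStore()
for i in range(4):               # initial write: file is physically sequential
    store.write(i, f"v1-block{i}")
store.write(1, "v2-block1")      # rewrite one block in the middle of the file

# Logical block 1 now lives past block 3 on "disk", so a sequential read
# of blocks 0..3 needs an extra seek to fetch it.
print([store.ptr[i] for i in range(4)])   # [0, 16384, 8192, 12288]
```

Note the converse, which is COW's payoff: the rewrite itself is an append, so random writes can become sequential even though later reads may pay for it.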
cedric briner writes:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your nfs clients.
Just to understand better (I know that I'm quite slow :( ):
when you say _nfs clients_ are you specifically
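For context on the tunable Roch mentions: on that era of Solaris, zil_disable could be set persistently via /etc/system. This is only an illustration of the setting under discussion, not a recommendation; as Roch says, it sacrifices the synchronous-write guarantees that NFS clients depend on.

```
* /etc/system -- takes effect at next boot; unsafe for an NFS server
set zfs:zil_disable = 1
```

The same variable could also be poked on a live system with mdb, with exactly the same exposure to client-visible corruption after a server crash.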
Hello ZFS community,
I do not have such a strong love for *probability*, and even less
when probability characterizes the true, solid, tangible stuff that I
have to administer.
I started doing some math.
Don't get scared: I'm not going to show you the little scribbles that
I've done.
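The back-of-the-envelope math alluded to here presumably looks something like the following. The 3% annualized failure rate (AFR) and the disk counts are made-up numbers for illustration; the point is how quickly the chance of seeing at least one failure grows with the number of directly attached disks.

```python
# Back-of-the-envelope disk-failure math.  The 3% AFR and the disk
# counts are assumed, illustrative numbers -- not vendor figures.

def p_any_failure(n_disks, afr):
    """Chance that at least one of n independent disks fails in a year."""
    return 1 - (1 - afr) ** n_disks

afr = 0.03  # assumed per-disk annualized failure rate
for n in (1, 12, 48):
    print(f"{n:2d} disks: {p_any_failure(n, afr):.1%} chance of a failure per year")
```

With these assumed numbers, a dozen disks already gives roughly a one-in-three chance of a failure in a year, which is why redundancy and scrubbing come up in the rest of the thread.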
I just threw a truss into the SMF script and rebooted the test system, and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
thanks,
Ben
This message posted from opensolaris.org
___
zfs-discuss mailing
Manoj Joseph wrote:
Brian Hechinger wrote:
After having set my desktop to install (to a pair of 140G SATA disks
that zfs is mirroring) at work, I was trying to skip the dump slice
since in this case, no, I don't really want it. ;)
Don't underestimate the usefulness of a dump device. You
Just so I'm clear:
You are waiting on the release of Samba 3.0.25, which allows vfs_* modules for
ACLs.
Then you will release vfs_zfsacl.c for Samba 3.0.25+, which would allow ACLs to
work?
Also I would love to beta test if needed -- I've been running Samba 3.0.25 from
subversion just to try
On 4/27/07, Ben Miller [EMAIL PROTECTED] wrote:
I just threw a truss into the SMF script and rebooted the test system, and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
324:read(7, 0x000CA00C, 5120) = 0
324:
cedric briner wrote:
Hello ZFS community,
I do not have such a strong love for *probability*, and even less
when probability characterizes the true, solid, tangible stuff that I
have to administer.
I started doing some math.
Don't get scared: I'm not going to show you the little scribbles
Richard Elling wrote:
cedric briner wrote:
Hello ZFS community,
I do not have such a strong love for *probability*, and even less
when probability characterizes the true, solid, tangible stuff
that I have to administer.
I started doing some math.
Don't get scared: I'm not going to show
On Fri, 2007-04-27 at 11:01 +0200, Roch - PAE wrote:
Chad Mynhier writes:
On 4/27/07, Erblichs [EMAIL PROTECTED] wrote:
Ming Zhang wrote:
Hi All
I wonder if anyone has an idea about the performance loss caused by COW
in ZFS? If you have to read old data out
Hi,
I was wondering about the ARC and its interaction with the VM
pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
cache get mapped to the process' virtual memory? Or is there another copy?
-Manoj
For Brian, et al: Thank you very much. I burned the DVD, and did some minor
tweaking to the profile, and it is up and running (now to just n00b admin
troubleshooting)
For everyone else who is new to this and trying it out, a couple things I
noticed:
1. Don't waste your time trying to format the
I'm building a system with two Apple RAIDs attached. I have hardware
RAID5 configured so no RAIDZ or RAIDZ2, just a basic zpool pointing at
the four LUNs representing the four RAID controllers. For on-going
maintenance, will a zpool scrub be of any benefit?
If some problem is causing data
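Even with redundancy delegated to the hardware RAID5, a scrub still re-reads every block in the pool and verifies its ZFS checksum, so it can at least detect corruption the array passed through silently (and repair ditto-blocked metadata). A sketch of the routine, with "tank" as a placeholder pool name:

```shell
# Kick off a full verification pass over every allocated block
zpool scrub tank

# Watch progress and see any checksum errors it has found
zpool status -v tank
```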
Hello Darren,
Saturday, April 28, 2007, 1:03:00 AM, you wrote:
DD Also, in-pool metadata should be redundant (via ditto blocks). Errors
DD in such data can be detected and repaired during a scrub.
DD Because of ditto blocks, the in-pool metadata is duplicated (or
DD triplicated). Since you
In order to prevent the so-called poor man's cluster from
corrupting your data, we now store the hostid and verify it upon
importing a pool.
Check it out at:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
This is bug:
6282725 hostname/hostid should be stored in the label
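Per the blog entry's description, an accidental dual import by a "poor man's cluster" would now be caught at import time, roughly like this (error text approximate; "tank" is a placeholder pool name):

```shell
# On host B, while host A still has the pool marked active:
zpool import tank
#   cannot import 'tank': pool may be in use from other system

# Only after confirming host A is really down:
zpool import -f tank
```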
I assume that a zvol has a VTOC. What tags are supported?
Thanks,
Brian
I assume that a zvol has a VTOC. What tags are supported?
The VTOC on a Solaris disk is managed by the disk driver.
Since there's no disk driver managing the zvol, there's no VTOC (unless
you put one there yourself). Conceptually it's very similar to an
SVM metadevice.
How would you want
On Fri, Apr 27, 2007 at 02:44:02PM -0700, Malachi de Ælfweald wrote:
For Brian, et al: Thank you very much. I burned the DVD, and did some minor
tweaking to the profile, and it is up and running (now to just n00b admin
troubleshooting)
For everyone else who is new to this and trying it out,
On Mon, Apr 23, 2007 at 01:56:53PM -0700, Richard Elling wrote:
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize to those not in the US
On 4/23/07, Richard Elling [EMAIL PROTECTED] wrote:
FYI,
Sun is having a big, 25th Anniversary sale. X4500s are half price --
24 TBytes for $24k. ZFS runs really well on a X4500.
http://www.sun.com/emrkt/25sale/index.jsp?intcmp=tfa5101
I apologize to those not in the US or UK and
I seem to be having nfs server issues myself, on a fresh install of b62 with a
mirrored root pool. I try and I try but my nfs/server process always reverts to
disabled.
I unfortunately have no ufs slices to try out a basic nfs configuration.
No matter how I try and share out a directory or