On Jan 2, 2008 11:46 AM, Darren Reed [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
...
That's a sad situation for backup utilities, by the way - a backup
tool would have no way of finding out that file X on fs A already
existed as file Z on fs B. So what? If the file got copied, byte
Our test engineer for the ZFS Crypto project discovered that it isn't
possible to enable encryption on the top filesystem in a pool - the
one that gets created by default.
The intent here is that the default top-level filesystem gets the
encryption property, not the pool itself (because the
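As a rough sketch of that intent (the pool, dataset, and device names here are made up, and the exact option names may differ in whatever the crypto project eventually ships), the encryption property would be supplied for the root dataset at pool creation time via zpool create -O, which sets file system properties on the top-level dataset rather than on the pool object:

# zpool create -O encryption=on tank mirror c0t0d0 c0t1d0
# zfs get encryption tank

Child datasets can be created encrypted later with zfs create -o encryption=on, but the property cannot simply be switched on for the already-existing top-level dataset afterwards, which is the behaviour the test engineer ran into.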
Hi
I faced a similar problem when I was adding a property for per-dataset dnode
sizes. I got around it by adding a ZPOOL_PROP_DNODE_SIZE and adding the dataset
property in dsl_dataset_stats(). That way the root dataset gets the property
too. I am not very sure if this is the cleanest solution
Hi again
In the meantime I upgraded to s10u4, including the recommended patches.
Then I tried again to import the zpool, with the same behaviour.
The stack dump is exactly the same as in the previous message.
For completeness, here is the label print:
# zdb -lv /dev/rdsk/c2t0d0s0
Kalpak Shah wrote:
Hi
I faced a similar problem when I was adding a property for per-dataset dnode
sizes. I got around it by adding a ZPOOL_PROP_DNODE_SIZE and adding the
dataset property in dsl_dataset_stats(). That way the root dataset gets the
property too. I am not very sure if this
James C. McPherson wrote:
You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require Qfs to solve - since you're operating
within just one kernel instance.
Is there any significant performance
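For reference, a minimal sketch of the kind of loopback (lofs) mount being discussed, delegating one ZFS file system into a zone; the zone name and paths are invented:

# zonecfg -z webzone
zonecfg:webzone> add fs
zonecfg:webzone:fs> set dir=/shared/data
zonecfg:webzone:fs> set special=/tank/data
zonecfg:webzone:fs> set type=lofs
zonecfg:webzone:fs> end
zonecfg:webzone> commit

Repeating the same fs block in a second zone's configuration gives both zones a view of /tank/data, and since everything goes through the one global-zone kernel there is no multiple-writer coordination problem of the kind shared QFS exists to solve.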
I didn't find any clear answer in the documentation, so here it goes:
I've got a 4-device RAIDZ array in a pool. I then add another RAIDZ array to
the pool. If one of the arrays fails, would all the data in the pool be lost,
or would it be like disk spanning, where only the data on the failed
Your data will be striped across both vdevs after you add the 2nd
vdev. In any case, failure of one stripe device will result in the
loss of the entire pool.
I'm not sure, however, if there is any way to recover any data from the
surviving vdevs.
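A sketch of the configuration being discussed, with invented device names; the add creates a second top-level raidz vdev, and new writes are striped across both:

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zpool status tank

zpool status should then show two raidz1 groups under the one pool. Each group can survive a single disk failure, but losing an entire group (for example, two disks in the same raidz group) takes the whole pool with it rather than leaving the other group usable on its own.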
On 1/2/08, Austin [EMAIL PROTECTED] wrote:
I didn't
On Mon, Dec 31, 2007 at 07:20:30PM +1100, Darren Reed wrote:
Frank Hofmann wrote:
http://www.opengroup.org/onlinepubs/009695399/functions/rename.html
ERRORS
The rename() function shall fail if:
[ ... ]
[EXDEV]
[CX] The links named by new and old are on different
Oof, I see this has been discussed since (and, actually, IIRC it was
discussed a long time ago too).
Anyways, IMO, this requires a new syscall or syscalls:
xdevrename(2)
xdevcopy(2)
and then mv(1) can do:
if (rename(old, new) != 0) {
        if (xdevrename(old, new) != 0)
                ...     /* fall back to copying and unlinking */
}
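For context, the behaviour this is meant to improve on can be observed today with something along these lines (the dataset and file names are invented): mv(1) gets EXDEV back from rename(2) when old and new are on different file systems, and then falls back to copying the data and unlinking the original.

# zfs create tank/a
# zfs create tank/b
# truss -t rename,open,unlink mv /tank/a/bigfile /tank/b/bigfile

The proposed xdevrename(2) would give mv a way to hand that cross-device move off to the file system instead of doing the copy itself in user space.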
I AM NOT A ZFS DEVELOPER. These suggestions should work, but there
may be other people who have better ideas.
Aaron Berland wrote:
Basically, I have a 3-drive raidz array on internal Seagate
drives, running build 64nv. I purchased 3 add'l USB drives
with the intention of mirroring and then
Moore, Joe wrote:
I AM NOT A ZFS DEVELOPER. These suggestions should work, but there
may be other people who have better ideas.
Aaron Berland wrote:
Basically, I have a 3-drive raidz array on internal Seagate
drives, running build 64nv. I purchased 3 add'l USB drives
with the
Hi Joe,
Thanks for trying. I can't even get the pool online because there are 2
corrupt drives according to zpool status. Yours and the other gentlemen's
insights have been very helpful, however!
I lucked out and realized that I did have copies of 90% of my data, so I am
just going to
On Dec 25, 2007 3:19 AM, Maciej Olchowik [EMAIL PROTECTED] wrote:
Hi Folks,
I have a 3510 disk array connected to a T2000 server running:
SunOS 5.10 Generic_118833-33 sun4v sparc SUNW,Sun-Fire-T200
12 disks (300G each) are exported from the array and ZFS is used
to manage them (raidz with one hot
Bob Scheifler wrote:
James C. McPherson wrote:
You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require Qfs to solve - since you're operating
within just one kernel instance.
Is there any
On Dec 23, 2007, at 7:53 PM, David Dyer-Bennet wrote:
Just out of curiosity, what are the dates ls -l shows on a snapshot?
Looks like they might be the pool creation date.
The ctime and mtime are from the file system creation date. The
atime is the current time. See:
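A quick way to look at this yourself (pool, dataset, and snapshot names are invented): take a snapshot and examine its directory entry under the hidden .zfs directory. Per the explanation above, the mtime and ctime shown should track the file system's creation time, while the atime is simply the time you looked.

# zfs snapshot tank/home@demo
# ls -ld /tank/home/.zfs/snapshot/demo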
Nicolas Williams wrote:
On Mon, Dec 31, 2007 at 07:20:30PM +1100, Darren Reed wrote:
Frank Hofmann wrote:
http://www.opengroup.org/onlinepubs/009695399/functions/rename.html
ERRORS
The rename() function shall fail if:
[ ... ]
[EXDEV]
[CX] The links named by