On 4/3/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
> As promised. I got my 6140 SATA delivered yesterday and I hooked it
> up to a T2000 on S10u3. The T2000 saw the disks straight away and has been
> "working" for the last hour. I'll be running some benchmarks on it.
> I'll probably have a week w
Bertrand Sirodot wrote:
I am trying to back up the pool, but when I tar some of the filesystems, the
kernel panics with the following message:
This error is occurring because a critical piece of metadata can't be
read while we are trying to write out changes. Try ensuring that you
aren't mak
Gino,
I just had a similar experience and was able to import the pool when I
added the readonly option (zpool import -f -o ro )
Ernie
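For anyone hitting the same thing, a rough sketch of that read-only import (the pool name 'data' is only an example):

  # force the import, but keep the pool read-only so nothing further is written
  zpool import -f -o ro data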
Gino Ruopolo wrote:
Hi Matt,
trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in
/etc/system gives us:
Apr 3 20:35:56 SER
Hi,
I have been wrestling with ZFS issues since yesterday when one of my disks sort
of died. After much wrestling with "zpool replace" I managed to get the new
disk in and got the pool to resilver, but since then I have one error left that
I can't clear:
pool: data
state: ONLINE
status: One
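A hedged sketch of the usual sequence for chasing a leftover error after a resilver (pool name taken from the status output above; whether the error actually clears depends on what is damaged):

  zpool clear data       # reset the error counters
  zpool scrub data       # re-read and verify every block in the pool
  zpool status -v data   # see whether the error is reported again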
Hello Neil,
Tuesday, April 3, 2007, 2:43:55 PM, you wrote:
NP> Hi Robert,
NP> Robert Milkowski wrote On 04/02/07 17:48,:
>> Right now a symlink should consume one dnode (320 bytes)
NP> dnode_phys_t is actually 512 bytes:
Yep, right - I mistook it for the bonus buffer size, which is 320B.
>> ::
Hi Matt,
trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in
/etc/system gives us:
Apr 3 20:35:56 SERVER141 ^Mpanic[cpu3]/thread=fffec3860f20:
Apr 3 20:35:56 SERVER141 genunix: [ID 603766 kern.notice] assertion failed:
ss->ss_start <= start (0x67b800 <= 0x67
On 4/3/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
You can work around this by setting the quota on an ancestor of the
to-be-created clone. Also, implementing RFE 6364688 "method to preserve
properties when making a clone" would make workaround #1 (set a quota on
the first fs) work for the cl
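A rough illustration of that first workaround, with made-up dataset names; the point is that a quota on tank/clones also bounds any clone created beneath it:

  zfs set quota=10g tank/clones                       # quota on the ancestor, not the clone itself
  zfs snapshot tank/src@before-clone
  zfs clone tank/src@before-clone tank/clones/test    # new clone sits under the capped ancestor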
I assume this is the case. These changes will get rolled up with
all the others.
Lori
oliver soell wrote:
So I can expect that the zfs root bits will be in the weekly ON Consolidation
bfu archives (http://dlc.sun.com/osol/on/downloads/current/) tomorrow or so?
I've been a solaris admin for
Robert Thurlow wrote:
Richard Elling wrote:
Peter Eriksson wrote:
ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work,
same with rsync.
ufsrestore obviously won't work on ZFS.
ufsrestore works fine; it only reads from a 'ufsdump' format medium and
writes through generic files
Richard Elling wrote:
Peter Eriksson wrote:
ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same
with rsync.
ufsrestore obviously won't work on ZFS.
ufsrestore works fine; it only reads from a 'ufsdump' format medium and
writes through generic filesystem APIs. I did some
> > I currently use Solaris tar like this:
> >cd $DIR && tar [EMAIL PROTECTED] - . | rsh $HOST "cd $NEWDIR && tar
> > [EMAIL PROTECTED] -"
>
> seems simple enough :-)
>
> > ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work,
> > same with rsync.
>
> ufsrestore obviously won't w
Peter Eriksson wrote:
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9 server
to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now... What's the
"best" way to move all these files? Should one use Solaris tar, Solaris cpio,
ufsdump/ufsrestore, rsync
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9
server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now...
What's the "best" way to move all these files? Should one use Solaris tar,
Solaris cpio, ufsdump/ufsrestore, rsync or what?
I currently us
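For what it's worth, a tar pipeline like the one quoted above has the general shape below; the exact option letters for carrying extended attributes and ACLs vary by release, so treat the flags here as placeholders to verify against tar(1) on both ends:

  cd $DIR && tar cf - . | rsh $HOST "cd $NEWDIR && tar xpf -"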
Joseph Barbey wrote:
Also, all 3 pools are still 'formatted' as v2. I'll try upgrading all 3
before Sunday, and see if that helps as well.
That won't change any performance; upgrading to v3 just enables new
features (hot spares and double parity raidz).
--matt
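For completeness, bumping the on-disk version is a one-liner per pool; a sketch with an illustrative pool name:

  zpool upgrade -v        # list the versions this ZFS build supports
  zpool upgrade mypool    # move the named pool to the newest supported version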
Matthew Ahrens wrote:
Joseph Barbey wrote:
Robert Milkowski wrote:
JB> So, normally, when the script runs, all snapshots finish in maybe a minute
JB> total. However, on Sundays, it continues to take longer and longer. On
JB> 2/25 it took 30 minutes, and this last Sunday, it took 2:11. The
On Tue, 2007-04-03 at 10:54 -0400, Luke Scharf wrote:
> Tim Foster wrote:
> > You can add a disk to a raidz configuration, but then that makes a pool
> > containing 1 raidz + 1 additional disk in a dynamic stripe configuration
> > (which ZFS will warn you about, since you have different fault tole
Tim Foster wrote:
And is it possible to add 1 new disk to a raidz configuration
without backups and recreating the zpool from scratch?
You can add a disk to a raidz configuration, but then that makes a pool
containing 1 raidz + 1 additional disk in a dynamic stripe configuration
(which ZFS will w
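A sketch of what that looks like in practice, with a made-up device name; zpool add refuses the mismatched vdev unless it is forced:

  zpool add tank c2t5d0      # fails: mismatched replication level (raidz vs. plain disk)
  zpool add -f tank c2t5d0   # forces the add, leaving a single unprotected stripe member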
Carson Gaspar wrote:
Mark Shellenbaum wrote:
Can you post the full ACL on the directory and on the file you are
being allowed to delete.
Simple test:
carson:gandalf 2 $ uname -a
SunOS gandalf.taltos.org 5.10 Generic_125101-02 i86pc i386 i86pc
carson:gandalf 0 $ mkdir foo
carson:gandalf 0 $
Mark Shellenbaum wrote:
Can you post the full ACL on the directory and on the file you are being
allowed to delete.
Simple test:
carson:gandalf 2 $ uname -a
SunOS gandalf.taltos.org 5.10 Generic_125101-02 i86pc i386 i86pc
carson:gandalf 0 $ mkdir foo
carson:gandalf 0 $ ls -dv foo
drwxr-xr-x
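For readers trying to reproduce the report, the ACEs being discussed can be set with Solaris chmod's A-syntax; a rough sketch with placeholder names:

  mkdir foo
  chmod A0+user:foo:add_file/add_subdirectory:allow foo    # let user foo create entries
  chmod A0+user:foo:delete_child/delete:deny foo           # deny entry lands at index 0
  ls -dv foo                                               # inspect the resulting ACL, as above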
Carson Gaspar wrote:
we give user foo the right to add folders (this
user cannot delete anything by default). After that
we give the right to create files, and then user foo gains
the ability to delete everything. How is that possible,
even though we add another rule like
"0:user:foo:delete_child/delete:deny"?
Carson Gaspar wrote:
we give user foo the right to add folders (this
user cannot delete anything by default). After that
we give the right to create files, and then user foo gains
the ability to delete everything. How is that possible,
even though we add another rule like
"0:user:foo:delete_child/delete:deny"?
Can anyone comment?
-Brian
Brian H. Nelson wrote:
Adam Leventhal wrote:
On Tue, Mar 20, 2007 at 06:01:28PM -0400, Brian H. Nelson wrote:
Why does this happen? Is it a bug? I know there is a recommendation of
20% free space for good performance, but that thought never occurred to
me when
> we give user foo the right to add folders (this
> user cannot delete anything by default). After that
> we give the right to create files, and then user foo gains
> the ability to delete everything. How is that possible,
> even though we add another rule like
> "0:user:foo:delete_child/delete:deny"? Again it d
Hi Robert,
Robert Milkowski wrote On 04/02/07 17:48,:
Right now a symlink should consume one dnode (320 bytes)
dnode_phys_t is actually 512 bytes:
> ::sizeof dnode_phys_t
sizeof (dnode_phys_t) = 0x200
> if the name it points to is less than 67 bytes, otherwise a data block is
allocated
add
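A hedged way to see that 67-byte cutoff in practice (the pool/filesystem name and the zdb usage here are assumptions on my part):

  ln -s /short/target shortlink
  ln -s /a/deliberately/long/target/path/meant/to/run/well/past/the/sixty/seven/byte/limit longlink
  ls -i shortlink longlink       # note the object numbers
  zdb -dddd tank/fs <object>     # the short target should appear inline; the long one needs a data block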