Hello Bart,
we're using NFSv3 by default but also tried version 2. There is no difference
between the two. Version 4 is not possible with the AIX 4.3.3 clients.
Regards
On Tue, Apr 10, 2007 at 09:43:39PM -0700, Anton B. Rang wrote:
>
> That's only one cause of panics.
>
> At least two of gino's panics appear due to corrupted space maps, for
> instance. I think there may also still be a case where a failure to
> read metadata during a transaction commit leads to a panic.
>
> Without understanding the underlying pathology it's impossible to "fix" a ZFS
> pool.
Sorry, but I have to disagree with this.
The goal of fsck is not to bring a file system into the state it "should" be in
had no errors occurred. The goal, rather, is to bring a file system to a
self-consistent state.
>> please stop crashing the kernel.
>
> This is:
>
> 6322646 ZFS should gracefully handle all devices failing (when writing)
That's only one cause of panics.
At least two of gino's panics appear due to corrupted space maps, for instance.
I think there may also still be a case where a failure to read metadata
during a transaction commit leads to a panic.
> How would you access the data on that device?
Presumably, zpool import.
This is basically what everyone does today with mirrors, isn't it? :-)
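Concretely, that would look something like this (a sketch; the pool name
"tank" is hypothetical, and as other replies in this thread point out, a
detached device may not keep an importable label, which is why exporting is
suggested instead):

# zpool import -d /dev/dsk        (scan available devices for importable pools)
# zpool import -d /dev/dsk tank   (import the pool found on the third device)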
Anton
> You'd want to export them, not detach them.
But you can't export just one branch of the mirror, can you?
> Off the top of my head (i.e. untested):
>
> - zpool create tank mirror
> - zpool export tank
But this will unmount all the file systems, right?
-- Anton
3.0.25rc1 was released 2 days ago, so the "final version" will be available
soon. The vfs_zfsacl.c module will be tested soon, so I think it is a question
of 2-3 weeks.
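For reference, once the module ships, enabling it would presumably be a
per-share setting in smb.conf along these lines (a sketch; the share name and
path are hypothetical):

[tank]
    path = /tank/share
    vfs objects = zfsacl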
while playing around with ZFS and USB memory sticks or USB harddisks,
rmvolmgr tends to get in the way, which results in a
can't open "/dev/rdsk/cNt0d0p0", device busy
Do you remember exactly what command/operation resulted in this error? It is
something that tries to open the device exclusively.
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 03/16 - 03/31
=
Size of all threads during period:
Rich Teer wrote:
Hi all,
I have a pool called tank/home/foo and I want to rename it to
tank/home/bar. What's the best way to do this (the zfs and
zpool man pages don't have a "rename" option)?
Are you sure you have a pool with that name, and not a filesystem in a pool
with that name? See zfs(1M).
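A quick way to check is to compare the two listings (a sketch, reusing the
names from the question):

# zpool list              (pools only; this should show "tank")
# zfs list -r tank/home   (datasets; this is where tank/home/foo would appear)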
The release notes:
http://docs.sun.com/app/docs/doc/817-0552/6mgbi4fgg?a=view
say an alternative to fixing kernelbase is to upgrade to 64-bit; I'm already
running on 64-bit SPARC. Maybe I have a different problem: my drives have
spun down to sleepy mode, but ZFS is still burning coal.
Thanks
I noticed that there is still an open bug regarding removing devices
from a zpool:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4852783
Does anyone know if or when this feature will be implemented?
Cindy Swearingen wrote:
Hi Mike,
Yes, outside of the hot-spares feature, you can
On Tue, 10 Apr 2007, Mark J Musante wrote:
> On Tue, 10 Apr 2007, Rich Teer wrote:
>
> > I have a pool called tank/home/foo and I want to rename it to
> > tank/home/bar. What's the best way to do this (the zfs and zpool man
> > pages don't have a "rename" option)?
>
> In fact, there is a rename option for zfs:
On Tue, 10 Apr 2007, Rich Teer wrote:
> I have a pool called tank/home/foo and I want to rename it to
> tank/home/bar. What's the best way to do this (the zfs and zpool man
> pages don't have a "rename" option)?
In fact, there is a rename option for zfs:
# zfs create tank/home
# zfs create tank/home/foo
# zfs rename tank/home/foo tank/home/bar
I am having a similar problem: system hung on zfs destroy snapshot, 50% CPU
utilization, running for hours. How can I know if I have the same problem?
Can you be specific about how to set kernelbase?
Hi all,
I have a pool called tank/home/foo and I want to rename it to
tank/home/bar. What's the best way to do this (the zfs and
zpool man pages don't have a "rename" option)?
One way I can think of is to create a clone of tank/home/foo
called tank/home/bar, and then destroy the former. Is that the best approach?
Joseph Barbey wrote:
Matthew Ahrens wrote:
Joseph Barbey wrote:
Robert Milkowski wrote:
JB> So, normally, when the script runs, all snapshots finish in maybe a
JB> minute total. However, on Sundays, it continues to take longer and
JB> longer. On 2/25 it took 30 minutes, and this last Sunday
Anton B. Rang wrote:
This sounds a lot like:
6417779 ZFS: I/O failure (write on ...) -- need to reallocate writes
Which would allow us to retry write failures on alternate vdevs.
Of course, if there's only one vdev, the write should be retried to a
different block on the original vdev ... right?
> one quick&dirty way of backing up a pool that is a mirror of two
> devices is to zpool attach a third one, wait for the resilvering to
> finish, then zpool detach it again.
>
> The third device then can be used as a poor man's simple backup.
How would you access the data on that device?
--
Dar
On Tue, Apr 10, 2007 at 12:48:49AM -0700, Gino wrote:
> Hi All
>
> I'd like to raise two points about ZFS that I think are a must before even
> trying to use it in production:
>
>
> 1) ZFS must stop forcing kernel panics!
> As you know, ZFS panics the kernel when a corrupted zpool is found or if
> it's unable to reach a device, and so on...
Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per-disk basis?
I was expecting there to be something in rmmount.conf to allow a specific device
or pattern to be excluded but there appears to be nothing. Maybe this is an RFE?
On Tue, 10 Apr 2007, Constantin Gonzalez wrote:
> Has anybody tried it yet with a striped mirror? What if the pool is
> composed out of two mirrors? Can I attach devices to both mirrors, let
> them resilver, then detach them and import the pool from those?
You'd want to export them, not detach them.
Hi,
one quick&dirty way of backing up a pool that is a mirror of two devices is to
zpool attach a third one, wait for the resilvering to finish, then zpool detach
it again.
The third device then can be used as a poor man's simple backup.
Has anybody tried it yet with a striped mirror? What if the pool is composed
out of two mirrors? Can I attach devices to both mirrors, let them resilver,
then detach them and import the pool from those?
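Spelled out (untested, with hypothetical device names), the question is
whether this sequence leaves you with an importable copy:

# zpool status tank               (say: mirror c0t0d0/c0t1d0 plus mirror c1t0d0/c1t1d0)
# zpool attach tank c0t0d0 c2t0d0
# zpool attach tank c1t0d0 c2t1d0
(wait for both resilvers to finish, per zpool status, then)
# zpool detach tank c2t0d0
# zpool detach tank c2t1d0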
This may be a very stupid question, but in the current procedure, we have
to install onto UFS and then convert the UFS
to ZFS...
During install, we can launch a terminal and run ZFS commands...
Would it be possible, doing a fresh install, to use the terminal to run some
ZFS commands and install
Dirk Jakobsmeier wrote:
Hello Bart,
thanks for your answer. The filesystems on different projects are sized
between 20 and 400 GB. Those filesystem sizes were no problem on the earlier
installation (VxFS) and should not be a problem now. I can reproduce this
error with the 20 GB filesystem.
Regards
Robert Milkowski wrote:
Hello Lori,
Any chances to get 'how_to_netinstall_zfsboot' to public?
I'm really close to putting it out there. I'm updating
the install procedure and tool to support two things
that it didn't support before:
* setup of a dump slice, since zfs doesn't yet support dump devices
On Tue, 10 Apr 2007, Martin Girard wrote:
> Is it possible to make my zpool redundant by adding a new disk in the pool
> and making it a mirror with the initial disk?
Sure, by using zpool attach:
# mkfile 64m /tmp/foo /tmp/bar
# zpool create tank /tmp/foo
# zpool status
pool: tank
state: ONLINE
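The attach step that turns the single disk into a mirror would then be (a
sketch continuing the example):

# zpool attach tank /tmp/foo /tmp/bar
# zpool status                    (tank now shows /tmp/foo and /tmp/bar as a mirror)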
Hi Martin,
Yes, you can do this with the zpool attach command.
See the output below.
An example in the ZFS Admin Guide is here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
Cindy
# zpool create mpool c1t20d0
# zpool status mpool
pool: mpool
state: ONLINE
scrub: none requested
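The attach step itself would then be along these lines (c1t21d0 is a
hypothetical second disk):

# zpool attach mpool c1t20d0 c1t21d0
# zpool status mpool              (shows c1t20d0 and c1t21d0 as a mirror, resilvering)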
Read the man page for zpool. Specifically, zpool attach.
On 4/10/07, Martin Girard <[EMAIL PROTECTED]> wrote:
Hi,
I have a zpool with only one disk. No mirror.
I have some data in the file system.
Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror
Hi,
I have a zpool with only one disk. No mirror.
I have some data in the file system.
Is it possible to make my zpool redundant by adding a new disk in the pool
and making it a mirror with the initial disk?
If yes, how?
Thanks
Martin
Hi,
while playing around with ZFS and USB memory sticks or USB harddisks,
rmvolmgr tends to get in the way, which results in a
can't open "/dev/rdsk/cNt0d0p0", device busy
error.
So far, I've just said svcadm disable -t rmvolmgr, did my thing, then
said svcadm enable rmvolmgr.
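Spelled out (device name hypothetical), that workaround is:

# svcadm disable -t rmvolmgr
# zpool create usbpool c2t0d0
# svcadm enable rmvolmgr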
Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per-disk basis?
There was some discussion on the "always panic for fatal pool failures" issue
in April 2006, but I haven't seen if an actual RFE was generated.
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/017276.html
Hello Lori,
Any chances to get 'how_to_netinstall_zfsboot' to public?
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Atul Vidwansa wrote:
Hi,
I have a few questions about the way a transaction group is created.
1. Is it possible to group transactions related to multiple operations
in the same group? For example, an "rmdir foo" followed by "mkdir bar",
can these end up in the same transaction group?
2. Is it possible
Hi All
I'd like to raise two points about ZFS that I think are a must before even
trying to use it in production:
1) ZFS must stop forcing kernel panics!
As you know, ZFS panics the kernel when a corrupted zpool is found or if it's
unable to reach a device, and so on...
We need to have
> Gino,
>
> Can you send me the corefile from the zpool command?
This is the only case where we are unable to import a corrupted zpool but do
not get a kernel panic:
SERVER144@/# zpool import zpool3
internal error: unexpected error 5 at line 773 of ../common/libzfs_pool.c
SERVER144@/#
> This l
> Gino,
>
> Were you able to recover by setting zfs_recover?
>
Unfortunately no :(
Using zfs_recover did not allow us to recover any of the 5 corrupted zpools
we had...
Please note that we lost this pool after a panic caused by trying to import a
corrupted zpool!
tnx,
gino
I'm using Open Solaris 10, doing some tests on ZFS and zpool. I've
encountered a situation that caused the system to crash.
There are two SCSI disks connected to my computer: c1t0d0 was used as the
bootable disk, c1t1d0 was used to test ZFS and zpool.
1. Formatting c1t1d0 into four partitions:
format> fdisk