I have been working on the same problem now for almost 48 straight hours. I
have managed to recover some of my data using
zpool import -f pool
The command never completes, but you can do a
zpool list
and
zpool status
and you will see the pool.
Then you do
zfs list
and the file systems show up.
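For anyone else stuck in the same spot, the sequence described above looks roughly like this; it is only a sketch, and "pool" is simply the pool name used here:

# terminal 1: force the import; it may appear to hang for a long time
zpool import -f pool

# terminal 2: while the import is still running, the pool and its datasets become visible
zpool list
zpool status pool
zfs list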
I've seen similar issues. However, it appears most of my problems stem from
ZFS. I'd do something ZFS doesn't like and then I'd have to power cycle the
server to get it back. I actually wrote a large post about my experiences with
b134 and ZFS:
http://opensolaris.org/jive/thread.jspa?message
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ross Walker
>
> If that's the case why not create a second pool called 'backup' and
> 'zfs send' periodically to the backup pool?
+1
This is what I do.
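A minimal sketch of that approach, assuming a source dataset tank/data and a second pool named backup (both names are only examples):

# initial full copy to the backup pool
zfs snapshot tank/data@weekly-1
zfs send tank/data@weekly-1 | zfs receive -F backup/data

# later runs send only the changes since the previous snapshot
zfs snapshot tank/data@weekly-2
zfs send -i tank/data@weekly-1 tank/data@weekly-2 | zfs receive -F backup/data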
Nope, mailed freebsd-fs mailing list.
On 2010-Jul-26 20:32:41 +0800, Eugen Leitl wrote:
>FreeBSD 8.1 features version 14 of the ZFS subsystem, the addition of the ZFS
>Loader (zfsloader), allowing users to boot from ZFS,
Only on i386 or amd64 systems at present, but you can boot RAIDZ1 and
RAIDZ2 as well as mirrored roots.
Note that
I might be mistaken, but it looks like 3ware does have a driver, several in
fact:
http://www.3ware.com/support/downloadpageprod.asp?pcode=9&path=Escalade9500SSeries&prodname=3ware%209500S%20Series
Any comment on this? I'm thinking about picking up a server with this card,
and it would be cool
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin wrote:
>> "mg" == Mike Gerdts writes:
> mg> it is rather common to have multiple 1 Gb links to
> mg> servers going to disparate switches so as to provide
> mg> resilience in the face of switch failures. This is not unlike
> mg> (at a
It should be possible to do, though, if you are really serious about it. You can create two zfs
zvols (volumes) which are hopefully in two different raidz-based zfs
pools, and then create a new zfs pool using those two devices. The
end result would be three zfs pools. It is probably not a wise idea.
On Mon, Jul 26 at 11:51, Dav Banks wrote:
I wanted to test it as a backup solution. Maybe that's crazy in
itself but I want to try it.
Basically, once a week detach the 'backup' pool from the mirror,
replace the drives, add the new raidz to the mirror and let it
resilver and sit for a week.
Si
On Mon, July 26, 2010 14:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but
> I want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace
> the drives, add the new raidz to the mirror and let it resilver and sit
> for
On Jul 26, 2010, at 2:51 PM, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I
> want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the
> drives, add the new raidz to the mirror and let it resilver and sit
> "mg" == Mike Gerdts writes:
> "sw" == Saxon, Will writes:
sw> I think there may be very good reason to use iSCSI, if you're
sw> limited to gigabit but need to be able to handle higher
sw> throughput for a single client.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?
You might look at the zpool split feature, where you can
split off the disks from a mirrored pool to create an identical
pool, described here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Admin Guide, p. 87
Thanks,
Cindy
On 07/26/10 12:51, Dav Banks wrote:
I wanted to tes
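For what it's worth, the zpool split approach Cindy describes looks roughly like this on a mirrored pool; "tank" and "backup" are only example names:

# detach one side of each mirror and turn it into a new, exported pool
zpool split tank backup

# import the split-off pool (on the same host, or after moving the disks elsewhere)
zpool import backup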
On 26 Jul 2010, at 19:51, Dav Banks wrote:
> I wanted to test it as a backup solution. Maybe that's crazy in itself but I
> want to try it.
>
> Basically, once a week detach the 'backup' pool from the mirror, replace the
> drives, add the new raidz to the mirror and let it resilver and sit for
I wanted to test it as a backup solution. Maybe that's crazy in itself but I
want to try it.
Basically, once a week detach the 'backup' pool from the mirror, replace the
drives, add the new raidz to the mirror and let it resilver and sit for a week.
A small follow-up is that creating pools from components of other pools
can cause system deadlocks.
This approach is not recommended.
Thanks,
Cindy
On 07/26/10 12:19, Saxon, Will wrote:
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensola
On Mon, July 26, 2010 14:17, Dav Banks wrote:
> Ah. Thanks! I should have said RAID51 - a mirror of RAID5 elements.
>
> Thanks for the info. Bummer that it can't be done.
Out of curiosity, any particular reason why you want to do this?
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Dav Banks
> Sent: Monday, July 26, 2010 2:02 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Mirrored raidz
>
> This may have been covered somewhere
Ah. Thanks! I should have said RAID51 - a mirror of RAID5 elements.
Thanks for the info. Bummer that it can't be done.
On Mon, 26 Jul 2010, Dav Banks wrote:
This may have been covered somewhere but I couldn't find it.
Is it possible to mirror two raidz vdevs? Like a RAID50 basically.
This config is not supported by zfs. It should be possible to do
though if you are really serious about it. You can create t
Hi,
> Is it possible to mirror two raidz vdevs? Like a RAID50 basically.
RAID 50 is striped, not mirrored. In ZFS that is basically:
zpool create tank raidz c0t0d0 c0t0d1 c0t0d2 raidz c1t0d0 c1t0d1 c1t0d2
Other than that, I believe it is not possible to create a mirrored
pool from raidz vdevs.
Regards,
Serge Fonville
This may have been covered somewhere but I couldn't find it.
Is it possible to mirror two raidz vdevs? Like a RAID50 basically.
Have you posted on the FreeBSD forums?
> -Original Message-
> From: Garrett D'Amore [mailto:garr...@nexenta.com]
> Sent: Monday, July 26, 2010 2:27 AM
> To: Mike Gerdts
> Cc: Saxon, Will; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] NFS performance?
>
> On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote:
> > O
On Mon, Jul 26, 2010 at 1:27 AM, Garrett D'Amore wrote:
> On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote:
>> On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore wrote:
>> > On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
>> >>
>> >> I think there may be very good reason to use iSCSI, if
http://www.h-online.com/open/news/item/FreeBSD-8-1-arrives-1044996.html
FreeBSD 8.1 arrives
Originally scheduled for the 9th of July, the FreeBSD Release
Engineering Team has now issued version 8.1 of its popular free Unix
derivative, the first stable major point update to version
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Shawn Ferry
>
> I am given to understand that you can delete snapshots in current
> builds (I don't have anything recent where I can test).
So ... You believe the "can't-delete-snap-because-di
Ok I played around with the physical configuration and placed them on the
original controller and zdb -l is now able to unpack LABEL 0,1,2,3 for all
drives in the pool. I also changed the hostname in opensolaris to
"freenas.local" as that is what was listed in the zdb -l(although I doubt this
I have a ZFS volume exported to one of my LDoms, but now the LDom does not see
the data and complains about a missing device. Is there any way I can mount or see
what is in the volume, or check whether the volume got corrupted or there is some other
issue?