Thinking about it, I think Darren is right. An automatic send/receive to the
external drive may be preferable, and it sounds like it has many advantages:
1. It's directional: your backups will always go from your live drive to the
backup, never the other way unless you actually force it with
Err... you can't remove drives that are in use, but for what you're describing
can't you just use zpool replace and then remove the old drive?
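For anyone following along, a minimal sketch of that approach, with
hypothetical pool and device names (zpool replace resilvers onto the new
disk and detaches the old one when it finishes):
bash-3.00# zpool replace tank c1t0d0 c2t0d0   # old disk, then new disk (hypothetical names)
bash-3.00# zpool status tank                  # watch the resilver complete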
In fact, thinking about it, could this be more generic than just a USB backup
service?
If this were a scheduled backup system, regularly sending snapshots to another
device, with a nice end user GUI, there's nothing stopping it from working with
any device the user points it at. So you could use
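Something like the following could sit underneath such a service; a minimal
sketch assuming hypothetical names (tank/home live, backup/home on the
external device) and an initial full send already done:
#!/bin/sh
# minimal sketch, hypothetical names; run periodically from cron.
# Assumes a first full send has already populated backup/home.
PREV=`cat /var/run/last-backup-snap`
NOW=`date +%Y%m%d-%H%M`
zfs snapshot tank/home@$NOW
# send only the changes since the previous snapshot, one direction only
zfs send -i tank/home@$PREV tank/home@$NOW | zfs receive -F backup/home &&
    echo $NOW > /var/run/last-backup-snap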
hi all,
I've got a box here with zfs and /var on rpool/ROOT/snv/var.
I wanted to rename it to rpool/var and made sure that canmount=on, but it
didn't mount after reboot.
What else do I need to change?
thanks,
Philip
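For what it's worth, a hedged sketch of the usual sequence; whether the
mountpoint line is the missing piece depends on how /var was mounted before
(datasets under rpool/ROOT often use a legacy mountpoint, which a rename
alone won't change):
bash-3.00# zfs rename rpool/ROOT/snv/var rpool/var
bash-3.00# zfs set canmount=on rpool/var
bash-3.00# zfs set mountpoint=/var rpool/var   # needed if the old mountpoint was legacy/inherited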
It is unfortunate that you asked this question after you've installed the
new disks; now both the old and the new disks are part of the same zpool.
That's awesome, I did not know that this would work. I'm glad I made this post.
I actually have not yet replaced any drive; in fact, this very
On Wed, Dec 17, 2008 at 12:05:50AM -0800, Ross wrote:
Thinking about it, I think Darren is right. An automatic send/receive to the
external drive may be preferable, and it sounds like it has many advantages:
You forgot *the* most important advantage of using send/recv instead of
mirroring as
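Whatever point that sentence was heading toward, one advantage worth
illustrating is incremental replication: after one full send, each later
send carries only the delta. A minimal sketch with hypothetical names:
# full send once, then only the changes between snapshots
zfs send tank/data@monday | zfs receive backup/data
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data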
What serious compat issues? There has been one and only one incompatible
change in the stream format, and that only impacted really really early
(before S10 FCS IIRC) adopters.
Here are the issues that I am aware of:
- Running zfs upgrade on a zfs filesystem will cause the zfs send stream
On Wed, Dec 17, 2008 at 10:23 AM, cindy.swearin...@sun.com wrote:
Hi Alex,
Sorry, I missed the 1.5 TB disk/boot issue previously.
A project is underway to provide booting for disks that are larger
than 1 TB. This project is separate from a future project to provide
booting from an EFI-labeled
On Wed, Dec 17, 2008 at 9:09 AM, Johan Hartzenberg jhart...@gmail.com wrote:
On Wed, Dec 17, 2008 at 2:46 PM, Thanos McAtos mca...@ics.forth.gr wrote:
I have two problems:
1) I don't know how to properly age a file-system. As already said, I need
traces of a decade's workload to properly do
On Wed, Dec 17, 2008 at 08:51:54AM -0800, Niall Power wrote:
What serious compat issues? There has been one and only one incompatible
change in the stream format, and that only impacted really really early
(before S10 FCS IIRC) adopters.
Here are the issues that I am aware of:
-
Hey Niall,
Here are the issues that I am aware of:
- Running zfs upgrade on a zfs filesystem will cause the zfs send
stream output format to be incompatible with older versions of the
software. This is according to the zfs man page.
- Again from the zfs man page: The format of the stream is
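As an aside, the versions in play can at least be checked before committing
to an upgrade; a minimal sketch with a hypothetical dataset name:
bash-3.00# zfs upgrade -v            # list the filesystem versions this build supports
bash-3.00# zfs get version tank/home # current version of one dataset (hypothetical name)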
In the long run some USB stick problems may surface because the wear
leveling is done in 16MB sections, and you could blow your stick if
you have a 16MB region which is "hot". I wonder if parts of a zpool
are hotter than others? With AVS the dirty bitmap might be
I followed Anton's idea but didn't see any difference.
My tests were repetitive PostMark runs, and each run was different. For
instance, I didn't let PostMark delete the files once it finished, deleted the
odd-numbered ones by hand, put the rest in a different directory, and re-ran
PostMark that
On Wed, Dec 17, 2008 at 2:46 PM, Thanos McAtos mca...@ics.forth.gr wrote:
I have two problems:
1) I don't know how to properly age a file-system. As already said, I need
traces of a decade's workload to properly do this, and to the best of my
knowledge there is no easy way to do this
Bob Friesenhahn wrote:
On Tue, 16 Dec 2008, Reed Gregory wrote:
8 hardware RAID-5 groups (5 drives each) and 2 SAN hot spares.
A raidz of these 8 RAID groups: ~14 TB usable.
I did read in a FAQ that doing double redundancy is not recommended,
since parity would have to be calculated twice.
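For illustration only, the layering being described would look roughly like
this, with hypothetical LUN names; each cXtYd0 below stands for one hardware
RAID-5 group, so parity really is computed at both layers:
bash-3.00# zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 \
               c4t4d0 c4t5d0 c4t6d0 c4t7d0 spare c4t8d0 c4t9d0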
On Wed, Dec 17, 2008 at 01:57:37PM -0600, Tim wrote:
On Wed, Dec 17, 2008 at 10:23 AM, cindy.swearin...@sun.com wrote:
Hi Alex,
Sorry, I missed the 1.5 TB disk/boot issue previously.
A project is underway to provide booting for disks that are larger
than 1 TB. This project is outside
I'm having a very difficult time destroying a zone. Here's the skinny:
bash-3.00# zfs get origin | grep d01
r12_data/d01          origin  r12_data/d01/.clone.12052...@12042008  -
r12_data/d01-receive  origin  -                                      -
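If the sticking point is that the dataset is the origin of a clone, promoting
the clone is the usual way to break the dependency; a hedged sketch with
hypothetical names rather than the ones above:
bash-3.00# zfs promote tank/myclone      # clone takes ownership of the shared snapshots
bash-3.00# zfs destroy -r tank/original  # former origin can now be destroyed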