Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-23 Thread David Dyer-Bennet

On Sun, February 22, 2009 23:37, Frank Cusack wrote:
> On February 22, 2009 9:56:02 PM -0600 David Dyer-Bennet 
> wrote:
>>
>> On Sun, February 22, 2009 21:06, Frank Cusack wrote:

>>> Your example worked because you are "only" replicating a filesystem
>>> within the root pool.  This works because after setting the altroot
>>> the new (replicated) filesystem mounts in a different location than
>>> the original.
>>
>> That's all I need to do; what I need to back up is the user home
>> directories.
>
> Ah.  Sorry to introduce any confusion, then.  I only mentioned it because
> in your introductory text you said you needed to backup the root pool and
> didn't say "filesystem within".  I'm pretty sure it doesn't matter what
> pool a filesystem is in, so your mention of the root pool threw me.  In
> fact AFAICT the problem with the root pool is not that it's some magic
> pool called the root pool, it's that the root filesystem mounts on '/'.
> If you had some other filesystem/pool with the mountpoint set to '/',
> you'd have the same problem.

Well, we're eventually getting things straightened out.  I think of it as
backing up the parts of the root pool worth backing up (since the software
installation is very easily recreated).  Furthermore, the "real" home
directories are elsewhere; it's just the emergency holographic home
directories (for local-only accounts used only in emergencies) that are in
rpool/export/home.  So just saying "home directories" didn't feel right to
me.

The problem I'm having isn't at the *end* of the zfs receive; it's at the
beginning.  Also, the receiving pool is mounted with an altroot, so stuff
within it shouldn't be trying to appear at /; or rather, when it tries, it
won't conflict with the real /.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
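
A rough sketch of the altroot behavior being relied on above, using the pool
and path names from this thread (an untested illustration, not a captured
transcript): importing with -R prefixes every mountpoint in the pool with the
altroot, so the received filesystems show up under /backups/bup-ruin/...
rather than competing with the live mounts.

# zpool import -R /backups/bup-ruin bup-ruin
# zpool get altroot bup-ruin                 # should report /backups/bup-ruin
# zfs list -o name,mountpoint -r bup-ruin    # mountpoints resolve under the altroot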



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Frank Cusack
On February 22, 2009 9:56:02 PM -0600 David Dyer-Bennet wrote:
> On Sun, February 22, 2009 21:06, Frank Cusack wrote:
>> On February 22, 2009 8:03:38 PM -0600 David Dyer-Bennet wrote:
>>> On Sun, February 22, 2009 18:11, Frank Cusack wrote:
>>>> Did you see my other thread on this specific topic?  You can't backup
>>>> the root pool using zfs send -R | zfs recv.
>>>
>>> Nope, somehow missed the import of that.
>>>
>>> I'm only trying to back up the rpool/export/home portion of the root
>>> pool;
>>> is that still impossible?
>>>
>>> Because so far as I can tell, *that* part is working; it's adding a
>>> second
>>> fs that I'm having trouble with.
>>>
>>> In the example I posted, this bit WORKED in one case:
>>>
>>> # zfs create bup-ruin/fsfs/rpool
>>> # zfs send -R rpool/export/h...@bup-20090216-044512utc | zfs recv -d
>>> bup-ruin/fsfs/rpool
>>>
>>> I'll find that other thread and check up; is it a consistent failure or
>>> intermittent, though?
>>
>> Intermittent?  That would indicate some type of transient problem, not
>> "can't".
>
> I was asking because if the problem you were describing was consistent
> then it's NOT my problem.
>
>> Your example worked because you are "only" replicating a filesystem
>> within the root pool.  This works because after setting the altroot
>> the new (replicated) filesystem mounts in a different location than
>> the original.
>
> That's all I need to do; what I need to back up is the user home
> directories.

Ah.  Sorry to introduce any confusion, then.  I only mentioned it because
in your introductory text you said you needed to backup the root pool and
didn't say "filesystem within".  I'm pretty sure it doesn't matter what
pool a filesystem is in, so your mention of the root pool threw me.  In
fact AFAICT the problem with the root pool is not that it's some magic
pool called the root pool, it's that the root filesystem mounts on '/'.
If you had some other filesystem/pool with the mountpoint set to '/',
you'd have the same problem.

-frank


Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Lori Alt



Dave wrote:
> Frank Cusack wrote:
>> When you try to backup the '/' part of the root pool, it will get
>> mounted on the altroot itself, which is of course already occupied.
>> At that point, the receive will fail.
>>
>> So far as I can tell, mounting the received filesystem is the last
>> step in the process.  So I guess maybe you could replicate everything
>> except '/', finally replicate '/' and just ignore the error message.
>> I haven't tried this.  You have to do '/' last because the receive
>> stops at that point even if there is more data in the stream.
>
> Wouldn't it be relatively easy to add an option to 'zfs receive' to
> ignore/not mount the received filesystem, or set the canmount option
> to 'no' when receiving? Is there an RFE for this, or has it been added
> to a more recent release already?


see:

http://bugs.opensolaris.org/view_bug.do?bug_id=6794452

now fixed in both the community edition and in Update 8
of Solaris 10.

- Lori
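
For the archives, the way such a fix is typically used is the -u flag to zfs
receive, which leaves received filesystems unmounted (whether -u is exactly
what this CR delivered, the bug report above is the authority).  A sketch
using the dataset names from this thread:

# zfs send -R rpool/export/home@bup-20090216-044512UTC | zfs receive -d -u bup-ruin/fsfs/rpool

With nothing mounted at receive time, the '/'-over-altroot collision described
earlier in the thread shouldn't arise, and mountpoint or canmount can be
adjusted afterwards, before anything is mounted.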



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread David Dyer-Bennet

On Sun, February 22, 2009 21:06, Frank Cusack wrote:
> On February 22, 2009 8:03:38 PM -0600 David Dyer-Bennet 
> wrote:
>>
>> On Sun, February 22, 2009 18:11, Frank Cusack wrote:
>>> Did you see my other thread on this specific topic?  You can't backup
>>> the root pool using zfs send -R | zfs recv.
>>
>> Nope, somehow missed the import of that.
>>
>> I'm only trying to back up the rpool/export/home portion of the root
>> pool;
>> is that still impossible?
>>
>> Because so far as I can tell, *that* part is working; it's adding a
>> second
>> fs that I'm having trouble with.
>>
>> In the example I posted, this bit WORKED in one case:
>>
>># zfs create bup-ruin/fsfs/rpool
>># zfs send -R rpool/export/h...@bup-20090216-044512utc | zfs recv -d
>> bup-ruin/fsfs/rpool
>>
>> I'll find that other thread and check up; is it a consistent failure or
>> intermittent, though?
>
> Intermittent?  That would indicate some type of transient problem, not
> "can't".

I was asking because if the problem you were describing was consistent
then it's NOT my problem.

> Your example worked because you are "only" replicating a filesystem
> within the root pool.  This works because after setting the altroot
> the new (replicated) filesystem mounts in a different location than
> the original.

That's all I need to do; what I need to back up is the user home directories.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Dave

Frank Cusack wrote:

> When you try to backup the '/' part of the root pool, it will get
> mounted on the altroot itself, which is of course already occupied.
> At that point, the receive will fail.
>
> So far as I can tell, mounting the received filesystem is the last
> step in the process.  So I guess maybe you could replicate everything
> except '/', finally replicate '/' and just ignore the error message.
> I haven't tried this.  You have to do '/' last because the receive
> stops at that point even if there is more data in the stream.


Wouldn't it be relatively easy to add an option to 'zfs receive' to 
ignore/not mount the received filesystem, or set the canmount option to 
'no' when receiving? Is there an RFE for this, or has it been added to a 
more recent release already?



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Frank Cusack
On February 22, 2009 8:03:38 PM -0600 David Dyer-Bennet wrote:
> On Sun, February 22, 2009 18:11, Frank Cusack wrote:
>> Did you see my other thread on this specific topic?  You can't backup
>> the root pool using zfs send -R | zfs recv.
>
> Nope, somehow missed the import of that.
>
> I'm only trying to back up the rpool/export/home portion of the root pool;
> is that still impossible?
>
> Because so far as I can tell, *that* part is working; it's adding a second
> fs that I'm having trouble with.
>
> In the example I posted, this bit WORKED in one case:
>
> # zfs create bup-ruin/fsfs/rpool
> # zfs send -R rpool/export/h...@bup-20090216-044512utc | zfs recv -d
> bup-ruin/fsfs/rpool
>
> I'll find that other thread and check up; is it a consistent failure or
> intermittent, though?

Intermittent?  That would indicate some type of transient problem, not
"can't".

Your example worked because you are "only" replicating a filesystem
within the root pool.  This works because after setting the altroot
the new (replicated) filesystem mounts in a different location than
the original.

When you try to backup the '/' part of the root pool, it will get
mounted on the altroot itself, which is of course already occupied.
At that point, the receive will fail.

So far as I can tell, mounting the received filesystem is the last
step in the process.  So I guess maybe you could replicate everything
except '/', finally replicate '/' and just ignore the error message.
I haven't tried this.  You have to do '/' last because the receive
stops at that point even if there is more data in the stream.

-frank
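
A rough sketch of the "everything except '/' first, '/' last" idea, untested
per the caveat above; rpool/ROOT as the subtree holding the root filesystem is
an assumption about this particular system, and the snapshot name is the one
from earlier in the thread:

# everything that does not mount at '/' first (the part already shown to work):
# zfs send -R rpool/export/home@bup-20090216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
# then the subtree containing the '/' filesystem, last; expect a mount error at
# the very end of the receive and ignore it:
# zfs send -R rpool/ROOT@bup-20090216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool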


Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread David Dyer-Bennet

On Sun, February 22, 2009 18:11, Frank Cusack wrote:
> On February 22, 2009 1:14:44 PM -0600 David Dyer-Bennet 
> wrote:
>> (Note that I need to back up two pools, rpool and zp1, from the desktop
>> onto the single external pool bup-ruin.  I'm importing bup-ruin with
>> altroot to avoid the mountpoints of the backed-up filesystems on it
>> conflicting with each other or with stuff already mounted on the
>> desktop.)
>
> Did you see my other thread on this specific topic?  You can't backup
> the root pool using zfs send -R | zfs recv.

Nope, somehow missed the import of that.

I'm only trying to back up the rpool/export/home portion of the root pool;
is that still impossible?

Because so far as I can tell, *that* part is working; it's adding a second
fs that I'm having trouble with.

In the example I posted, this bit WORKED in one case:

# zfs create bup-ruin/fsfs/rpool
# zfs send -R rpool/export/h...@bup-20090216-044512utc | zfs recv -d
bup-ruin/fsfs/rpool

I'll find that other thread and check up; is it a consistent failure or
intermittent, though?
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread David Dyer-Bennet

On Sun, February 22, 2009 16:31, Blake wrote:
> I'm actually working on this for an application at my org.  I'll try
> to post my work somewhere when done (hopefully this week).

That'd be cool.  I'm converting from rsync to send/receive because I
upgraded to 2008.11 and started using CIFS, so I care about more things
than I used to, and rsync doesn't handle them all.  And I'm having trouble
getting send/receive to do what I need so far.  I'll look forward to
seeing what you do!

> Are you keeping in mind the fact that the '-i' option needs a pair of
> snapshots (original and current) to work properly?

Yes, on the incremental sends I believe I've got that part right.  I can
get the original send on a different fs, and incremental sends, to work;
it was when I added this second fs, which I also wanted to back up, that I
ran into problems.  Dunno why yet.  The tests below are of the initial
send, which will happen only once per backup volume, but they are still
necessary.

I'm going to fall back to writing manual scripts that do initial and
incremental backups for the filesystems I care about, in hopes I can get
that working, and because it gives me a precisely reproducible problem
when I run into trouble.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
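
A sketch of what the incremental step could look like once the initial send is
in place, illustrating Blake's point that -i needs both the previously sent
snapshot and a newer one (the second snapshot name below is made up for the
example):

# zfs snapshot -r rpool/export/home@bup-20090223-000000UTC
# zfs send -R -i @bup-20090216-044512UTC rpool/export/home@bup-20090223-000000UTC | zfs recv -d bup-ruin/fsfs/rpool

On the receive side -d keeps routing the updates under bup-ruin/fsfs/rpool, so
each later run only needs the pair of snapshot names to advance the backup to
the newer snapshot.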



Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Frank Cusack
On February 22, 2009 1:14:44 PM -0600 David Dyer-Bennet wrote:
> (Note that I need to back up two pools, rpool and zp1, from the desktop
> onto the single external pool bup-ruin.  I'm importing bup-ruin with
> altroot to avoid the mountpoints of the backed-up filesystems on it
> conflicting with each other or with stuff already mounted on the desktop.)

Did you see my other thread on this specific topic?  You can't backup
the root pool using zfs send -R | zfs recv.

-frank


Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread Blake
I'm actually working on this for an application at my org.  I'll try
to post my work somewhere when done (hopefully this week).

Are you keeping in mind the fact that the '-i' option needs a pair of
snapshots (original and current) to work properly?



On Sun, Feb 22, 2009 at 2:14 PM, David Dyer-Bennet  wrote:
>
> On Sun, February 22, 2009 00:15, David Dyer-Bennet wrote:
>> First, it fails because the destination directory doesn't exist.  Then it
>> fails because it DOES exist.  I really expected one of those to work.  So,
>> what am I confused about now?  (Running 2008.11)
>>
>> # zpool import -R /backups/bup-ruin bup-ruin
>> # zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
>> bup-ruin/fsfs/zp1"
>> cannot receive: specified fs (bup-ruin/fsfs/zp1) does not exist
>> # zfs create bup-ruin/fsfs/zp1
>> # zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
>> "bup-ruin/fsfs/zp1"
>> cannot receive new filesystem stream: destination 'bup-ruin/fsfs/zp1'
>> exists
>> must specify -F to overwrite it
>
> I've tried some more things.  Aren't these the "same" operations as above,
> but with different (and more reasonable) results?
>
> # zfs list -r bup-ruin
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> bup-ruin  79.5K   913G18K  /backups/bup-ruin
> # zfs create bup-ruin/fsfs
> # zfs send -R rpool/export/h...@bup-2009\
> 0216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
> cannot receive: specified fs (bup-ruin/fsfs/rpool) does not exist
> r...@fsfs:/export/home/localddb/src/bup2# zfs create bup-ruin/fsfs/rpool
> r...@fsfs:/export/home/localddb/src/bup2# zfs send -R
> rpool/export/h...@bup-2009\
> 0216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
>
> These second results are what I expected after reading the error messages
> and the manual.  But the first example is what I actually got originally
> (with slightly different pools).
>
> Here's what I'm trying to do:
>
> I'm trying to store backups of multiple pools (which are on disks mounted
> in the desktop chassis) on external pools, each consisting of a single
> external USB drive.
>
> My concept is to do a send -R for the initial backup of each, and then to
> do send -i with suitable params for the later backups.  This should keep
> the external filesystems in synch with the internal filesystems up to the
> snapshot most recently synced.  But I can't find commands to make this
> work.
>
> (Note that I need to back up two pools, rpool and zp1, from the desktop onto
> the single external pool bup-ruin.  I'm importing bup-ruin with
> altroot to avoid the mountpoints of the backed-up filesystems on it
> conflicting with each other or with stuff already mounted on the desktop.)
>
>
> --
> David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
> Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
> Photos: http://dd-b.net/photography/gallery/
> Dragaera: http://dragaera.info
>


Re: [zfs-discuss] Confused about zfs recv -d, apparently

2009-02-22 Thread David Dyer-Bennet

On Sun, February 22, 2009 00:15, David Dyer-Bennet wrote:
> First, it fails because the destination directory doesn't exist.  Then it
> fails because it DOES exist.  I really expected one of those to work.  So,
> what am I confused about now?  (Running 2008.11)
>
> # zpool import -R /backups/bup-ruin bup-ruin
> # zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
> bup-ruin/fsfs/zp1"
> cannot receive: specified fs (bup-ruin/fsfs/zp1) does not exist
> # zfs create bup-ruin/fsfs/zp1
> # zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
> "bup-ruin/fsfs/zp1"
> cannot receive new filesystem stream: destination 'bup-ruin/fsfs/zp1'
> exists
> must specify -F to overwrite it

I've tried some more things.  Aren't these the "same" operations as above,
but with different (and more reasonable) results?

# zfs list -r bup-ruin
NAME   USED  AVAIL  REFER  MOUNTPOINT
bup-ruin  79.5K   913G18K  /backups/bup-ruin
# zfs create bup-ruin/fsfs
# zfs send -R rpool/export/h...@bup-2009\
0216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
cannot receive: specified fs (bup-ruin/fsfs/rpool) does not exist
r...@fsfs:/export/home/localddb/src/bup2# zfs create bup-ruin/fsfs/rpool
r...@fsfs:/export/home/localddb/src/bup2# zfs send -R
rpool/export/h...@bup-2009\
0216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool

These second results are what I expected after reading the error messages
and the manual.  But the first example is what I actually got originally
(with slightly different pools).

Here's what I'm trying to do:

I'm trying to store backups of multiple pools (which are on disks mounted
in the desktop chassis) on external pools, each consisting of a single
external USB drive.

My concept is to do a send -R for the initial backup of each, and then to
do send -i with suitable params for the later backups.  This should keep
the external filesystems in synch with the internal filesystems up to the
snapshot most recently synced.  But I can't find commands to make this
work.

(Note that I need to back up two pools, rpool and zp1, from the desktop onto
the single external pool bup-ruin.  I'm importing bup-ruin with
altroot to avoid the mountpoints of the backed-up filesystems on it
conflicting with each other or with stuff already mounted on the desktop.)


-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
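
A consolidated sketch of the initial pass for both pools, using the names
above; it is untested, and the -F on the zp1 receive is only following the
hint in the error message quoted at the top, so treat it as a starting point
for experiment rather than a recipe:

# zpool import -R /backups/bup-ruin bup-ruin
# zfs create bup-ruin/fsfs
# zfs create bup-ruin/fsfs/rpool
# zfs send -R rpool/export/home@bup-20090216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
# zfs create bup-ruin/fsfs/zp1
# zfs send -R zp1@bup-20090222-054457UTC | zfs recv -d -F bup-ruin/fsfs/zp1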



[zfs-discuss] Confused about zfs recv -d, apparently

2009-02-21 Thread David Dyer-Bennet
First, it fails because the destination directory doesn't exist.  Then it
fails because it DOES exist.  I really expected one of those to work.  So,
what am I confused about now?  (Running 2008.11)

# zpool import -R /backups/bup-ruin bup-ruin
# zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
bup-ruin/fsfs/zp1"
cannot receive: specified fs (bup-ruin/fsfs/zp1) does not exist
# zfs create bup-ruin/fsfs/zp1
# zfs send -R "z...@bup-20090222-054457utc" | zfs receive -dv
"bup-ruin/fsfs/zp1"
cannot receive new filesystem stream: destination 'bup-ruin/fsfs/zp1' exists
must specify -F to overwrite it

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
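
One reading of why the two attempts above behave differently, based on how the
zfs receive man page describes -d (an illustration, not a confirmed diagnosis):
with -d, the target dataset name is the destination plus the sent snapshot's
name with its pool name stripped off.  For a snapshot of a pool's top-level
dataset, stripping the pool name leaves nothing to append, so the target is
the destination itself.

# zfs send -R zp1@bup-20090222-054457UTC | zfs recv -d bup-ruin/fsfs/zp1
#   target = bup-ruin/fsfs/zp1 itself: recv -d needs it to exist, but once it
#   exists a full stream on top of it needs -F; hence both errors above
# zfs send -R rpool/export/home@bup-20090216-044512UTC | zfs recv -d bup-ruin/fsfs/rpool
#   target = bup-ruin/fsfs/rpool/export/home, which is new, so this one works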
