Thanks Reginald, your idea is much simpler than mine using lofi!
It's what I need :)
Sometimes we get lost in complicated stuff :D
Thanks so much!
Gabriele.
----------------------------------------------------------------------------------
From: Reginald Beardsley via illumos-discuss
To: [email protected] Gabriele Bulfon
Date: 10 July 2014 22:50:43 CEST
Subject: Re: [discuss] zfs lofi file pool double mount
I concur with the assessment that tape is senseless at present.  Drives and 
media cost too much.
I mount my backup system via NFS and do a zfs incremental send to backup the 
client.  I did this primarily because it was easy to get correct and I don't 
need much.   As I recall I observed gigabit wire speed performance doing this.  
You could then do a zfs receive to unpack the incremental.  I'm not running a 
normal production shop, so I don't bother.  I rely on zfs on the client and the 
server to take care of things.  I copy the incremental blobs to a USB drive to 
store offsite.  USB disk is under $0.05/GB.  I'm not aware of any tape that is 
that cheap available in 3 TB unit sizes.
In summary, consider this:
- NFS mount a scratch space on the client
- zfs send incrementals to the NFS-mounted scratch space
- after that completes, zfs receive the incremental to unpack on the backup system
- delete the zfs send blob
If you lose the client filesystem, just NFS export the backup to the client 
until you can replace the hardware.
The NFS traffic for this scenario is large files.  All the rest is local to the 
server or client. If that doesn't provide adequate performance I think there's 
some other problem that you need to look into.
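The procedure Reg outlines might look like this in practice (a sketch only, with hypothetical pool, dataset, and path names; the first steps run on the client, the last two on the backup server):

```shell
# On the client: mount the scratch space and write the incremental there
mount -F nfs backupserver:/export/scratch /mnt/scratch
zfs snapshot tank/data@today
zfs send -i tank/data@yesterday tank/data@today > /mnt/scratch/data.incr

# On the backup server: unpack the incremental locally, then delete the blob
zfs receive backup/data < /export/scratch/data.incr
rm /export/scratch/data.incr
```

The point of the receive step is that the backup server ends up with a browsable filesystem, not just a blob, while the NFS traffic stays limited to one large sequential file.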
Reg
--------------------------------------------
On Thu, 7/10/14, Gabriele Bulfon via illumos-discuss wrote:
Subject: Re: [discuss] zfs lofi file pool double mount
To: "Reginald Beardsley", [email protected]
Date: Thursday, July 10, 2014, 2:09 AM
NFS is slow, especially with small files. So what I'm trying to do is:
- have the NFS client run its daily work on local zfs pools, fast.
- have periodic zfs incremental sends of these pools to the NFS lofi files
- have the NFS server access these lofi files periodically to back them up the old way (tape)
Just experimenting, to have some kind of zfs incremental backup over NFS
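The periodic-send step of this plan might be sketched like so (hypothetical pool, dataset, and snapshot names; assumes the lofi-backed pool on the NFS share is already imported on the client):

```shell
# Incremental send from the fast local pool into the pool
# backed by the lofi file on the NFS share
zfs snapshot localpool/work@backup-new
zfs send -i localpool/work@backup-prev localpool/work@backup-new | \
    zfs receive -F nfspool/work

# Roll the snapshot names forward for the next cycle
zfs destroy localpool/work@backup-prev
zfs rename localpool/work@backup-new localpool/work@backup-prev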
----------------------------------------------------------------------------------
From: Reginald Beardsley
To: [email protected] Gabriele Bulfon
Date: 9 July 2014 18:16:16 CEST
Subject: Re: [discuss] zfs lofi file pool double mount
What are you trying to do?  It's not possible for the server to have RO access and the client to have R/W access. The server has to have R/W to service the client requests.
As for making certain clients RO and others R/W, that's provided by NFS. You can even make the NFS mountpoint RO on the server, but the physical mountpoint still has to be R/W at least for nfsd. You *could* make the physical mountpoint completely inaccessible to anything other than nfsd by setting ownership and permissions properly.
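The ownership-and-permissions idea can be sketched like this (a generic illustration with a hypothetical path; on a real server you would apply it to the exported filesystem's mountpoint, and nfsd, running as root, can still traverse it):

```shell
# Make a mountpoint unreadable to ordinary users; root-owned
# daemons such as nfsd can still access it.
mkdir -p /tmp/demo_export
chown root /tmp/demo_export 2>/dev/null || true   # needs privileges
chmod 700 /tmp/demo_export
ls -ld /tmp/demo_export
```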
You can actually do pretty much anything you can imagine using NFS maps. Folding maps based on system architecture is one of my favorites, i.e. /app/bin -> /app/${ARCH}/bin, but it's not needed much now that so much of the *nix world has converged on Linux. Back during the workstation wars it was very useful.
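A folding map of the kind Reg describes might look like this on illumos/Solaris (hypothetical server and paths; the automounter expands built-in variables such as $ARCH at mount time):

```shell
# /etc/auto_master entry: an indirect map for /app
/app    auto_app

# /etc/auto_app: fold per-architecture binaries under one path,
# so every client sees /app/bin regardless of its architecture
bin     server:/export/app/$ARCH/bin
```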
But again, what's the objective?
Reg
--------------------------------------------
On Wed, 7/9/14, Gabriele Bulfon via
illumos-discuss
wrote:
Subject: [discuss] zfs lofi
file pool double mount
To:
[email protected]
Date: Wednesday,
July 9, 2014, 8:44 AM
Hi,
as an experiment:
Let's say I have a zfs fs shared as an NFS resource to another illumos client.
Let's say I have created a 152GB file on this resource.
Let's say I have added the file as a lofi dev on the NFS server, and created a 512GB zpool on it.
Finally, let's say I have added the file as a lofi dev on the NFS client too.
Now I can export / import the pool both from the NFS server and the NFS client. Obviously I can't import the pool from both machines.
But... what if one would "import -f -o readonly=on" and the other would import -f read/write? Would it be possible?
This would let me zfs send the client pool to the lofi pool on the NFS share, while having the server be able to read the contained files.
Is it safe?
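For reference, the setup being described might be sketched as follows (hypothetical paths, device numbers, and pool names; a sketch of the experiment, not a recommendation):

```shell
# On the NFS server: create the backing file on the shared fs,
# attach it as a block device, and build a pool on it
mkfile 152g /export/share/poolfile
lofiadm -a /export/share/poolfile        # prints e.g. /dev/lofi/1
zpool create lofipool /dev/lofi/1
zpool export lofipool

# On the NFS client: the same file is visible over the NFS mount
lofiadm -a /net/server/export/share/poolfile
zpool import -d /dev/lofi lofipool       # read/write here...

# ...while the server would attempt:
#   zpool import -f -o readonly=on -d /dev/lofi lofipool
```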
Gabriele.
-------------------------------------------
illumos-discuss
Archives: https://www.listbox.com/member/archive/182180/=now
RSS Feed: https://www.listbox.com/member/archive/rss/182180/21175541-02f10c6f
Modify Your Subscription: 
https://www.listbox.com/member/?&id;secret=21175541-29e3e0ee
Powered by Listbox: http://www.listbox.com


