One thing to be aware of with partial incremental backups is the danger
of backing up the same data multiple times when mount points are nested.
For instance:

/mnt/backup/some-dir
/mnt/backup/some-dir/another-dir

Under normal operation, a node with DOMAIN set to
"/mnt/backup/some-dir /mnt/backup/some-dir/another-dir" will back up the
contents of /mnt/backup/some-dir/another-dir as a separate filespace,
*and also* back up another-dir as a subdirectory of the
/mnt/backup/some-dir filespace. We reported this as a bug, and IBM
pointed us at the following flag, which can be passed as a scheduler
option to prevent the duplication:

-TESTFLAG=VMPUNDERNFSENABLED
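
For reference, a sketch of how this might look in practice; the DOMAIN
line just reuses the example paths above, and the exact placement and
syntax should be checked against your own dsm.sys / dsm.opt before
relying on it:

DOMAIN   /mnt/backup/some-dir /mnt/backup/some-dir/another-dir
TESTFLAG VMPUNDERNFSENABLED

or, passed on the scheduler command line:

dsmc schedule -testflag=VMPUNDERNFSENABLED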

On Tue, Jul 17, 2018 at 04:12:17PM +0200, Bjørn Nachtwey wrote:
> Hi Zoltan,
> 
> OK, I will translate my text, as there are some more approaches discussed :-)
> 
> Breaking up the filesystems across several nodes will work as long as the
> nodes are of sufficient size.
> 
> I'm not sure a PROXY node will solve the problem, because each "member
> node" will back up the whole mount point. You will need to do partial
> incremental backups. I expect you will do this based on folders, won't you?
> So, some questions:
> 1) How will you distribute the folders to the nodes?
> 2) How will you ensure new folders are processed by one of your "member
> nodes"? On our filers many folders are created and deleted, sometimes a
> whole bunch every day, so maintaining the option file manually was not an
> option for me. The approach of my script / "MAGS" handles this more or
> less "automatically".
> 3) What happens if the folders do not grow evenly and all the big ones end
> up backed up by one of your nodes? (OK, you can change the distribution or
> even add another node.)
> 4) Are you going to map each backup node to a different node of the Isilon
> cluster to distribute the traffic / workload across the Isilon nodes?
> 
> best
> Bjørn

-- 
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
