On Sun, Feb 28, 2010 at 3:18 PM, Johannes H. Jensen
<[email protected]>wrote:
> It appears I'm running into the actual hardlink limit:
>
> rsync: link "/backup/backuppc/pc/..." =>
> cpool/7/c/6/7c67493bd72ceff21059c3d924d17518 failed: Too many links
> (31)
>
Hi, is 31 an error code or the number of links on this file? ext3 should
allow 32000 links on a single inode.
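For reference, the per-inode link count can be read with stat(1) from GNU
coreutils; a minimal demo in a temp directory (rather than the real pool):

```shell
# Each ln adds another hard link to the same inode;
# stat -c %h reports how many directory entries point at it.
d=$(mktemp -d)
touch "$d/a"
ln "$d/a" "$d/b"
ln "$d/a" "$d/c"
stat -c '%h' "$d/a"   # prints 3: three names share one inode
rm -rf "$d"
```

Running `stat -c '%h'` on the pool file named in the rsync error would show
whether it has actually reached ext3's 32000-link ceiling.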
Michael
> tune2fs reports:
>
> Inode count: 37429248
> Free inodes: 33290944
>
> So either I use another filesystem with a higher limit (though we would
> eventually run into the same problem), or I somehow have to make
> sure that the backuppc pool is intact before I --delete... Do you see
> any other options?
>
> Best regards,
>
> Johannes H. Jensen
>
>
>
> On Sat, Feb 27, 2010 at 5:22 PM, dan <[email protected]> wrote:
> > Are you running into the actual hardlink limit or an inode limit? ext3
> > has a hard-coded hardlink limit, but hardlinks are also limited by
> > available inodes. You can check your available inodes with
> >
> > tune2fs -l /dev/disk | grep -e "Free inodes" -e "Inode count"
> >
> > If you have very few or none left then this is your problem. You can't
> > change the inode count on an existing ext3 filesystem as far as I know,
> > but if you re-create the filesystem you can do
> >
> > mkfs.ext3 -N ##### /dev/disk
> >
> > Change the ##### to suit your needs. You should know the current number
> > from the tune2fs command above. I would just take your current filesystem
> > usage (let's say 62% for the math) and take the `current number` * 3 / .62,
> > so that you have enough inodes for today PLUS you are compensated for
> > when the disks are fuller.
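With the tune2fs numbers reported later in this thread, that sizing rule
works out as below. This is only a sketch: the 62% usage figure is dan's
example, not a measured value.

```shell
# Inode sizing sketch: inodes in use today, times 3 for headroom,
# divided by the assumed 62% disk usage (integer math, hence *100/62).
total=37429248        # "Inode count" from tune2fs -l
free=33290944         # "Free inodes" from tune2fs -l
used=$((total - free))
needed=$((used * 3 * 100 / 62))
echo "inodes in use: $used"
echo "suggested: mkfs.ext3 -N $needed /dev/disk"
```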
> >
> >
> >
> > On Sat, Feb 27, 2010 at 6:12 AM, Johannes H. Jensen
> > <[email protected]> wrote:
> >>
> >> Thank you for your input,
> >>
> >> On Sat, Feb 27, 2010 at 3:38 AM, dan <[email protected]> wrote:
> >> > if [ -e /var/lib/backuppc/testfile ]; then
> >> >     rsync xxxx
> >> > else
> >> >     echo "uh oh!"
> >> > fi
> >> >
> >> > should make sure that the filesystem is mounted.
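An alternative to the marker-file test is to ask the mount table directly
with mountpoint(1) from util-linux; a sketch, using this thread's pool path:

```shell
# mountpoint -q exits 0 only if the directory is itself a mount point,
# so an absent or unmounted backup filesystem is caught before syncing.
dir=/var/lib/backuppc
if mountpoint -q "$dir"; then
    echo "$dir is mounted: safe to rsync"
else
    echo "$dir is NOT a mount point: refusing to sync" >&2
fi
```

Unlike a marker file, this cannot be fooled by a stale copy of the marker
left on the root filesystem under the mount point.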
> >>
> >> Yes, that's definitely a good idea. However, it does not verify the
> >> integrity of the BackupPC pool. If only a small subset of the backup
> >> pool gets removed/corrupted/etc., this would still get reflected in
> >> the remote mirror. I would prefer some
> >> BackupPC-oriented way of doing this (maybe BackupPC_serverMesg status
> >> info?) if someone could provide me with the details.
> >>
> >> > You could also first do a dry run:
> >> > rsync -avnH --delete /source /destination > /tmp/list
> >> > then identify what will be deleted:
> >> > grep deleting /tmp/list | sed 's/^deleting /\//'
> >> >
> >> > now you have a list of everything that WOULD be deleted with the
> >> > --delete
> >> > option. Run your normal sync and save this file for later
> >> >
> >> > You could take this file list and send it to the remote system:
> >> >
> >> > scp /tmp/list remotehost:/list-`date +%Y%m%d`
> >> >
> >> > on remote system
> >> >
> >> > cat /list-* | xargs -d '\n' rm -f
> >> >
> >> > to delete the listed files. You could do this weekly or monthly or
> >> > whenever you needed.
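The extraction step above can be sketched end to end. Here the rsync
dry-run output is simulated with printf so the pipeline itself is visible;
the paths are made up, and in practice you would redirect the real
`rsync -avnH --delete` dry-run output into the list file instead.

```shell
# Simulated dry-run output from `rsync -avnH --delete /source /destination`.
list=$(mktemp)
printf '%s\n' \
    'sending incremental file list' \
    'deleting pc/host/1/old-file' \
    'deleting cpool/0/1/2/0123456789abcdef' > "$list"
# Keep only the "deleting " lines and prepend a leading slash:
sed -n 's/^deleting /\//p' "$list"
rm -f "$list"
```

The leading slash makes the entries absolute paths on the remote side; drop
it if you instead run rm relative to the pool root on the remote system.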
> >>
> >> That's a good idea. My original thought was to manually run the rsync
> >> with the --delete option once a week or so, but we've already run into
> >> filesystem (ext3) problems where we exceed the maximum link count after a
> >> few days because we don't --delete... I guess we could use another
> >> filesystem with a higher limit instead...
> >>
> >>
> >> Best regards,
> >>
> >> Johannes H. Jensen
> >>
> >>
> >>
> >> > On Fri, Feb 26, 2010 at 6:27 AM, Johannes H. Jensen
> >> > <[email protected]> wrote:
> >> >>
> >> >> Hello,
> >> >>
> >> >> We're currently syncing our local BackupPC pool to a remote server
> >> >> using rsync -aH /var/lib/backuppc/ remote:/backup/backuppc/
> >> >>
> >> >> This is executed inside a script which takes care of stopping
> >> >> BackupPC while rsync is running, as well as logging and e-mail
> >> >> notification. The script runs nightly as a cronjob.
> >> >>
> >> >> This works fairly well, except it won't remove old backups from the
> >> >> remote server. Apart from using up unnecessary space, this has also
> >> >> caused problems like hitting the remote filesystem's hard link limit.
> >> >>
> >> >> Now I'm aware of rsync's --delete option, but I find this very risky.
> >> >> If for some reason the local backup server fails and
> >> >> /var/lib/backuppc/ is somehow empty (disk fail etc), then --delete
> >> >> would cause rsync to remove *all* of the mirrored files on the remote
> >> >> server. This kind of ruins the whole point of having a remote
> >> >> mirror...
> >> >>
> >> >> So my question is then - how do I make sure that the local backup
> >> >> pool is sane and up-to-date without risking losing the entire
> >> >> remote pool?
> >> >>
> >> >> I have two ideas of which I'd love some input:
> >> >>
> >> >> 1. Perform some sanity check before running rsync to ensure that the
> >> >> local backuppc directory is indeed healthy. I'm not sure how this
> >> >> sanity check should be performed - maybe check for the existence of
> >> >> some file, or examine the output of `BackupPC_serverMesg status info'?
> >> >>
> >> >> 2. Run another instance of BackupPC on the remote server, using the
> >> >> same pc and hosts configuration as the local server but with
> >> >> $Conf{BackupsDisable} = 2 in the global config. This instance should
> >> >> then keep the remote pool clean (with BackupPC_trashClean and
> >> >> BackupPC_nightly), or am I mistaken? Of course, this instance also
> >> >> has to be stopped while rsyncing from the local server.
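For idea 1, one crude sanity gate (an assumption for illustration, not an
established BackupPC check) is to refuse the --delete sync unless the local
pool still contains a plausible number of files; the pool path and the
threshold of 100 are placeholders to tune.

```shell
# Crude health proxy before a destructive sync: count a bounded sample of
# pool files and refuse if the count is suspiciously low (e.g. after a
# disk failure left the directory empty).
pool=${POOL:-/var/lib/backuppc}
count=$(find "$pool/cpool" -type f 2>/dev/null | head -n 100 | wc -l)
if [ "$count" -lt 100 ]; then
    echo "pool has only $count sampled files: refusing rsync --delete" >&2
else
    echo "pool looks populated: proceeding with sync"
fi
```

This only guards against the catastrophic empty-pool case; it says nothing
about partial corruption, which is exactly the gap noted above.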
> >> >>
> >> >> If someone could provide some more info on how this can be done
> >> >> safely, it would be greatly appreciated!
> >> >>
> >> >>
> >> >> Best regards,
> >> >>
> >> >> Johannes H. Jensen
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> ------------------------------------------------------------------------------
> >> >> Download Intel® Parallel Studio Eval
> >> >> Try the new software tools for yourself. Speed compiling, find bugs
> >> >> proactively, and fine-tune applications for parallel performance.
> >> >> See why Intel Parallel Studio got high marks during beta.
> >> >> http://p.sf.net/sfu/intel-sw-dev
> >> >> _______________________________________________
> >> >> BackupPC-users mailing list
> >> >> [email protected]
> >> >> List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> >> >> Wiki: http://backuppc.wiki.sourceforge.net
> >> >> Project: http://backuppc.sourceforge.net/
> >> >
> >> >
> >> >
> >> >
> >>
> >>
> >>
> >
> >
> >
> >
> >
>
>
>