Re: [BackupPC-users] Convert incremental/unfilled to full/filled
Jeff,

No, there isn't a built-in way to do this. BackupPC_backupDuplicate does something similar to what you want, but it would need some significant modifications.

Craig

On Mon, Apr 29, 2019 at 10:35 PM wrote:
> I just converted some old v3 backups to v4.
> Is there any way to manually convert an incremental/unfilled to a
> full/filled backup under v4?

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
[BackupPC-users] Merging and migrating 2 separate v3 backups
I have 2 separate v3 backup trees that have been run on 2 separate v3 BackupPC servers, say A and B. I would like to migrate both of them to a new combined v4 backup tree, say C. I believe the following will work, based on looking at the code for BackupPC_migrateV3toV4:

1. Migrate A to v4 using BackupPC_migrateV3toV4.
2. Copy the pc trees from B that you want to migrate into the pc tree of the now-migrated A. [Copy only the tree, not the pools.]
3. Use BackupPC_fixBackupSummary to add the new backups from B to the backups files for each host on A (note you may need to do some renumbering to prevent backup collisions).
4. Run BackupPC_migrateV3toV4 again.
5. The resulting pc tree should now contain upgraded versions of both A and B, yielding the desired new merged backup tree, C.

I think this will work since I believe that BackupPC_migrateV3toV4 doesn't look at the pool/cpool but rather reads the files directly from the v3 tree (which was previously hard-linked to the pool/cpool but is no longer after the copy from B). Also, BackupPC_migrateV3toV4 tests for the unlikely event of md5sum collisions, so any new files will be added to the new pool/cpool while any duplicate files won't.

Does this make sense? If so, I will test it out and report back...
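The renumbering mentioned in step 3 could be scripted along these lines. This is only a sketch: `renumber_backups` is a hypothetical helper, it assumes v3 backups are plain numbered directories under a host's pc tree, and the per-host backups summary still has to be repaired afterwards with BackupPC_fixBackupSummary.

```shell
# Hypothetical helper for step 3: shift the copied backups' numbers up by
# OFFSET so they don't collide with the backup numbers already present on A.
renumber_backups() {
    dir="$1"; offset="$2"
    # Rename highest-numbered first so renames never clobber each other.
    for n in $(ls "$dir" | grep -E '^[0-9]+$' | sort -rn); do
        mv "$dir/$n" "$dir/$((n + offset))"
    done
}

# e.g. renumber_backups /var/lib/backuppc/pc/somehost 100
```

The path and offset above are placeholders; pick an offset larger than A's highest existing backup number for that host.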
Re: [BackupPC-users] FEATURE REQUEST - "lock" designated backups from being deleted...
Matthias Meyer wrote at about 06:41:49 +0200 on Tuesday, April 9, 2019:
> On Sunday, 7 April 2019 at 17:11:47, backu...@kosowsky.org wrote:
> > Sometimes you want to save a special backup that, for example,
> > corresponds to a specific change (pre/post) on your system. The
> > trouble is that with exponential deleting there is no way to
> > guarantee that your specific designated backup won't be deleted
> > automatically later on.
> >
> > In the past, I have simply renamed the backup number to, say, -save,
> > which prevents it from being deleted.
> > But it also prevents the backup from being part of /backups
> > and thus being browsable from the web interface.
> >
> > Ideally, it would be nice if one could prevent a specific backup from
> > being deleted (or even being part of the exponential schema) by
> > either:
> > 1. Adding a designated "LOCK" file to the top directory (just under the
> > backup number)
> > 2. Prefixing the entry in the /backups file with a
> > character that says, essentially, skip over me for deleting purposes
> > but otherwise I am still here.
> >
> > Any suggestions better than my renaming of the backup tree itself?
> >
> > Jeff
>
> You could use my patch
> ftp://www.backup4u.at/BackupPC-V3.3.2-FullCntYearly.patch and rewrite it to
> skip directories containing such a file instead of, or in addition to, the
> first full of a year.
>
> Br
> Matthias
>
> PS: user/password = ftpuser/Backup4U4FTP

The following simple patch worked for me:

--- BackupPC_dump.jnew.~1~	2019-04-29 23:07:13.903654687 -0400
+++ BackupPC_dump.jnew	2019-04-30 01:36:15.788988030 -0400
@@ -1842,6 +1842,9 @@
         my $noDelete = $i + 1 < @$Backups ? $Backups->[$i+1]{noFill} : 0;
         $noDelete = 0 if ( !$Backups->[$i]{preV4} );
+        # Don't delete full backups that have 'JJKSave' in their root directory
+        $noDelete = 1 if -e $Dir . "/" . $Backups->[$i]{num} . "/JJKSave";
+
         if ( !$noDelete && ($fullKeepIdx >= @$fullKeepCnt || $k > 0
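With the patch above applied, protecting a backup is just a matter of dropping the 'JJKSave' marker into that backup's root directory under the pc tree. A minimal sketch (the TopDir path, host name, and backup number below are placeholders for your installation):

```shell
# Create the 'JJKSave' marker that the patched BackupPC_dump checks for.
# Usage: lock_backup TOPDIR HOST BACKUPNUM
lock_backup() {
    touch "$1/pc/$2/$3/JJKSave"
}

# e.g. lock_backup /var/lib/backuppc somehost 123
```

Removing the marker file again makes the backup eligible for normal expiry.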
[BackupPC-users] Convert incremental/unfilled to full/filled
I just converted some old v3 backups to v4. Is there any way to manually convert an incremental/unfilled to a full/filled backup under v4?
Re: [BackupPC-users] Windows client restore
Hello Team,

Please share the Windows client backup and restore process, and also any step-by-step document for the restore process.

Thank you
Gopal Dhapa

On Sun, Apr 28, 2019, 10:47 PM Gopal Dhapa wrote:
> How can I back up or restore a Windows client using another xfer method?
> Please let me know.
>
> Thank you!
> Gopal Dhapa
>
> On Sun, Apr 28, 2019, 10:44 PM Michael Stowe
> <michael.st...@member.mensa.org> wrote:
>> On 2019-04-28 09:15, Gopal Dhapa wrote:
>>
>> Hello support Team.
>>
>> When I use BackupPC with a Linux client it works okay, but with a Windows
>> client I cannot specify a directory to back up; it takes a full C drive
>> backup. Also, the Windows client backup does not restore; it gives me
>> status "access denied" using the smb xfer method.
>>
>> Please guide me.
>>
>> Thanks in advance.
>>
>> Thank you Gopal Dhapa
>>
>> You may just want to use one of the "download archive" options instead of
>> a direct restore.
>>
>> Direct restore via smb *can* work, but Windows can be picky about
>> permissions and semantics for overwriting files, and due to the nature of
>> the SMB protocol, if you do get it working, there may be additional work to
>> do to make the files readable & writable by others. If you're backing up
>> using one of the Administrative Shares (e.g., C$ or Admin$) you'll run into
>> additional problems; it's better not to back up or restore via these shares.
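On the Administrative Shares point: pointing BackupPC at a purpose-made share is one way around those problems. A per-host config sketch using the standard $Conf{Smb*} settings (the share name, user, and password below are placeholders, not values from this thread):

```perl
# In the per-host config (e.g. pc/somehost.pl):
# back up a dedicated share rather than an Administrative Share like C$.
$Conf{XferMethod}       = 'smb';
$Conf{SmbShareName}     = ['BackupShare'];   # dedicated share, not 'C$'
$Conf{SmbShareUserName} = 'backupuser';
$Conf{SmbSharePasswd}   = 'secret';
```

The dedicated share also lets you scope the backup to a specific directory tree instead of the whole C drive.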
Re: [BackupPC-users] Partial backup is not saved
On 2019-04-29 08:08, Tapio Lehtonen wrote:

> I split a host into two backups, because the dump directory contains
> virtual machine dump files which are big and are taken every night. So I
> needed a shorter keep time for those backups to not run out of disk on the
> backup host. But this new backup setup does not seem to work. It takes a
> long time, but the XferLOG.bad shows this:
>
> create 644 0/0 100609398737 rottank/bupool/dump/vzdump-qemu-5099-2019_04_25-00_16_31.vma.lzo
> pool 644 0/0 11286 rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.log
> finish: removing in-process file rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.vma.lzo
> Child is aborting
> Done: 89 files, 910772510921 bytes
> Got fatal error during xfer (aborted by signal=ALRM)
> Backup aborted by user signal
> Not saving this as a partial backup since it has fewer files than the prior
> one (got 89 and 89 files versus 176002)
>
> Does the last line mean BackupPC thinks this backup ought to have 176002
> files, and since only 89 files got backed up this backup is thrown away?
> The same host is backed up twice; I added an exclude for that dump/
> directory, and this new backup only backs up that dump/ directory.

There are a few things to address here:

* The removed in-process file: presumably you want to back this up. Since it's a huge file, there may be issues with timeout settings, et al., but the log is saying it's not backed up.
* "Splitting a host into two backups" isn't necessarily the best idea. While it can work for non-overlapping shares, you're working against the grain in terms of fills, etc. What you probably want to do is split the host into two hosts and manage them independently, so BackupPC is making the right assumptions in terms of how to do fills, what to keep, etc.
* As you work through these issues, you probably want to include the relevant configs, e.g., rsync parameters for 100 GB files and how you're splitting the hosts.
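Splitting into two hosts backed by the same machine is typically done with $Conf{ClientNameAlias}, so both host entries resolve to the same client. A sketch under that assumption (the second host name, paths, and retention counts below are illustrative, not taken from the thread):

```perl
# The hosts file gains a second entry, e.g. "rottank" and "rottank-dump".

# pc/rottank.pl -- everything except the huge dump directory:
$Conf{BackupFilesExclude} = { '*' => ['/bupool/dump'] };

# pc/rottank-dump.pl -- only the dump directory, with short retention:
$Conf{ClientNameAlias} = 'rottank';
$Conf{BackupFilesOnly} = { '*' => ['/bupool/dump'] };
$Conf{FullKeepCnt}     = 2;   # keep far fewer of these large backups
```

Each host then gets its own backup chain, so fills, expiry, and the partial-backup file-count comparison all operate on consistent file sets.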
[BackupPC-users] Misconfigured or bad backup detection
Hi guys,

I just ran into an interesting situation. I was looking for some files from my backup, and noticed that nothing had been backed up in my home directory since late 2017. Upon investigation, it looks like the Fedora packages started adding "--one-file-system" by default in the "RsyncArgs", which excludes my "/home" directory! My bad: I merged in the upstream changes without thinking deeply about the ramifications of each change. Thankfully I've not needed these backups but, scary!

Mistakes like this happen, and it got me thinking about ways to prevent this kind of thing in the future. What mechanisms are people using today to avoid this?

If a feature to prevent this were to be added in BackupPC, I was thinking of possibly something like "canary files": such files would be known "canaries" that, if they were present in a prior backup but no longer present in a new backup, would cause BackupPC to raise an alert / send an email. Or, simply add paths to such canary files or directories in the configuration.

Regards,
Raman
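Pending a built-in feature, the canary comparison itself is easy to prototype outside BackupPC. A sketch, assuming something else (e.g. parsing the XferLOG for the newest backup) has already produced a one-path-per-line file listing; the mail command at the end is just an example alert action:

```shell
# canary_check FILELIST CANARY...
# Prints each canary path absent from FILELIST (one backed-up path per line)
# and returns nonzero if any canary is missing.
canary_check() {
    list="$1"; shift
    rc=0
    for c in "$@"; do
        if ! grep -qxF "$c" "$list"; then
            echo "canary missing from latest backup: $c"
            rc=1
        fi
    done
    return $rc
}

# e.g. from cron, after each backup:
#   canary_check latest_files.txt /home/raman/.canary \
#       || mail -s "BackupPC canary alert" root </dev/null
```

A native implementation would presumably hook the same check into the post-dump processing, driven by a list of canary paths in the config.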
[BackupPC-users] Partial backup is not saved
I split a host into two backups, because the dump directory contains virtual machine dump files which are big and are taken every night. So I needed a shorter keep time for those backups to not run out of disk on the backup host. But this new backup setup does not seem to work. It takes a long time, but the XferLOG.bad shows this:

> create 644 0/0 100609398737
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_25-00_16_31.vma.lzo
> pool 644 0/0 11286
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.log
> finish: removing in-process file
> rottank/bupool/dump/vzdump-qemu-5099-2019_04_26-00_16_26.vma.lzo
> Child is aborting
> Done: 89 files, 910772510921 bytes
> Got fatal error during xfer (aborted by signal=ALRM)
> Backup aborted by user signal
> Not saving this as a partial backup since it has fewer files than the prior
> one (got 89 and 89 files versus 176002)

Does the last line mean BackupPC thinks this backup ought to have 176002 files, and since only 89 files got backed up this backup is thrown away?

The same host is backed up twice; I added an exclude for that dump/ directory, and this new backup only backs up that dump/ directory.

--
Tapio Lehtonen
OSK Satatuuli
http://satatuuli.fi/
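Regarding the "aborted by signal=ALRM" line: a single ~100 GB .vma.lzo file can exceed $Conf{ClientTimeout} (default 72000 seconds), which kills the dump mid-file. Raising it is worth a try as a first step (this is a guess at the cause, and the value below is illustrative):

```perl
# Per-host config: allow very long transfers for hosts with huge dump files.
# The default is 72000 seconds (20 hours).
$Conf{ClientTimeout} = 259200;   # 72 hours
```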