Re: [BackupPC-users] Full backups taking a very long time
On Thu, 2021-03-11 at 10:26 -0500, backu...@kosowsky.org wrote:
> Sorin Srbu wrote at about 08:31:35 +0100 on Thursday, March 11, 2021:
> > On Wed, 2021-03-10 at 14:04 +, David Williams wrote:
> > > I have recently upgraded to Ubuntu 20.04 and since then I have noticed
> > > that my full backups are taking much longer than they used to do. I’m
> > > only using BackupPC to back up two machines at home: the Ubuntu machine
> > > and a Mac laptop. I don’t recall exactly how long the full backups were
> > > taking previously, but now they are taking close to 21 hours. The
> > > content on both machines hasn’t changed much at all since the upgrade,
> > > so I was surprised by the increase in time.
> > >
> > > A full backup on the Linux machine is around 892MB. This is the local
> > > machine that BackupPC is installed on. The drive that the backups are
> > > stored on is an SSD, as are most, if not all (sorry, can’t remember) of
> > > the drives in the Linux box. Backup method is tar.
> > >
> > > A full backup on the Mac laptop is around 700MB. It’s connected to the
> > > same router as the Linux machine via ethernet. Backup method is rsync.
> > >
> > > I’m not sure how to troubleshoot this increase in timing, so any help
> > > would be much appreciated.
> >
> > I saw this happening when small files are backed up.
>
> Unless there is a truly pathological number of small files (think tens
> if not hundreds of millions), I don't think you can explain a 21-hour
> backup period.
>
> My Raspberry Pi 4 on a home network backs up my Ubuntu 18.04 machine, with
> 2.7GB and 321K files, in under 12 minutes. And I think that is with
> several simultaneous backups.

I had that particular case going on at my previous work: twentyish Linux
workstations, all with home folders containing tens of millions of small
files. I resorted to decreasing the number of concurrently backed-up
computers, for a small increase in overall speed, as well as excluding some
folders. Some clients took hours to complete; not 21 hours, but maybe a
quarter of that.
We later moved to faster networks and hard drives, which helped too.

--
Kind regards,
Sorin Srbu
Find my OpenPGP public key here:
https://cloud.srbu.se/index.php/s/KeEsCCDsG7PZG7N

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/
Re: [BackupPC-users] Full backups taking a very long time
Hi Dave,

does each client individually take so long, or is it just one? (Or perhaps
one client is taking almost all resources, so the other one is slow.) If
both clients suddenly have problems, an issue with the BackupPC server is
likely.

Otherwise, I've seen similar issues when some service on the client suddenly
decided to include a virtual-memory map file somewhere in /var/run, which
pretends to be of **huge** size, like 2 TB or something, and I didn't use
the -x, --one-file-system flag for rsync. The file mostly consists of
(implicitly represented) zeros and compresses really nicely, so BackupPC
doesn't choke on it, technically; but the processing takes forever, and
eventually it always hit some timeout.

So I recommend a `sudo find /share -size +1G` to find potential huge files
that you don't expect to be in the backups. Or try
`sudo ls -l /proc/$(pgrep rsync)/fd` on the clients during a backup to see
whether rsync is still progressing and, if not, on which file it idles.
Same for tar, obviously.

Cheers,
Alex

On 3/10/21 3:04 PM, David Williams wrote:
> I have recently upgraded to Ubuntu 20.04 and since then I have noticed
> that my full backups are taking much longer than they used to do. I’m
> only using BackupPC to back up two machines at home: the Ubuntu machine
> and a Mac laptop. I don’t recall exactly how long the full backups were
> taking previously, but now they are taking close to 21 hours. The content
> on both machines hasn’t changed much at all since the upgrade, so I was
> surprised by the increase in time.
>
> A full backup on the Linux machine is around 892MB. This is the local
> machine that BackupPC is installed on. The drive that the backups are
> stored on is an SSD, as are most, if not all (sorry, can’t remember) of
> the drives in the Linux box. Backup method is tar.
>
> A full backup on the Mac laptop is around 700MB. It’s connected to the
> same router as the Linux machine via ethernet. Backup method is rsync.
>
> I’m not sure how to troubleshoot this increase in timing, so any help
> would be much appreciated.
>
> Regards,
> Dave Williams
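[Editor's note] The two checks suggested above can be combined into a quick triage script. This is only a sketch: the 1 GB threshold is the poster's example, the sparse file merely simulates the kind of huge virtual-memory map described, and the temp directory stands in for your real backup root (e.g. /share, run with sudo).

```shell
#!/bin/sh
# Simulate the problem case: a sparse file with a huge apparent size
# (like a /var/run virtual-memory map) but almost no allocated blocks.
tmpdir=$(mktemp -d)
truncate -s 2G "$tmpdir/vm_map"

# Check 1: hunt for unexpectedly huge files under the backup root.
# (Replace "$tmpdir" with the real share and run with sudo.)
find "$tmpdir" -size +1G

# Check 2: if an rsync is running, list the files it has open so you can
# see where it idles. Harmless no-op when no rsync is running.
for pid in $(pgrep -x rsync); do
    ls -l "/proc/$pid/fd"
done

rm -rf "$tmpdir"
```

The same /proc inspection works for a stuck tar; just swap `pgrep -x rsync` for `pgrep -x tar`.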
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
On 3/11/21 4:40 PM, backu...@kosowsky.org wrote:
> Sounds like the shadow creation script or your implementation of it is
> broken.

The precmd script fails to create the shadow volume when it is run from the
backuppc user account on the backup server, but works when it's run from
just about any other account. So I'm thinking user admin rights, but the
Windows admin hasn't yet found exactly which rights are the issue (assuming
my theory is even correct; I don't really know the admin side of Windows).

> Also, why are you using rsyncd? rsync itself is cleaner and generally
> more secure as it uses rsa/dsa keys vs. unencrypted secrets.

I'll ask the Windows admin, but I suspect the primary reason is "because
that's how the BackupPC client for Windows on SourceForge says to do it".
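[Editor's note] For reference, switching a host from rsyncd to rsync over ssh is mostly a server-side config change. A hedged sketch of what the per-host override might look like under BackupPC 4.x; the file name `pc/winpc1.pl` and the login user are illustrative assumptions, and the client must already accept the backuppc user's ssh key:

```perl
# pc/winpc1.pl -- hypothetical per-host override (BackupPC 4.x config style)
$Conf{XferMethod}   = 'rsync';                         # rsync over ssh instead of rsyncd
$Conf{RsyncSshArgs} = ['-e', '$sshPath -l backuppc'];  # key-based auth, no shared secret
```

This sidesteps the rsyncd secrets file entirely, at the cost of setting up key-based ssh logins on each Windows client (presumably via the cygwin environment this client already installs).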
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:
> I don't see how this would make sense at a per-host level. And any
> behavior to have it differ by host is undocumented and not necessarily
> predictable.

That's why I asked; I can't predict what it would do if it were allowed. :D

> Look at the code that I recently submitted to the group to streamline
> creation/deletion of shadow backups.

I saved those posts, but, honestly, I don't see the advantage of using a
large script for the per-host config files over having two lines to set the
pre/post-dump commands. Yes, the pre/post-dump commands require scripts to
be installed on each target host, but those scripts come along as part of
the overall BackupPC client installation that they (presumably) need anyhow,
in order for rsync and such to be available.

> > So I'm thinking that it might work to temporarily set the windows hosts
> > to MaxBackups = 1, if that would prevent multiple windows hosts from
> > running at the same time and free up slots for the linux hosts to run.
> > If it would also prevent linux hosts from running when a windows host
> > is in progress, though, then that would just make things worse.
>
> To me this sounds backasswards. Why not just INCREASE MaxBackups to allow
> for a few hung Windows machines? It's not like they are consuming any
> server bandwidth or CPU.

Because if I increase it to, say, 20, then that will allow 20 Linux backups
(which are all using bandwidth) just as happily as 20 stalled Windows
backups. If there were a way to say "this pool of resources is only for
Linux clients and that pool is only for Windows clients" (short of having
two completely separate BackupPC installs), then I'd have my workaround,
but it appears that no such capability exists.

> Alternatively, I might divide the blackout periods into one period for
> Linux and one for Windows machines. That way Linux machines don't compete
> with Windows machines for slots. You can then write a script to kill any
> hanging Windows backups that bleed into your Linux slot.

That could work, yes.

> But the only real solution is to figure out why and where the Windows
> backups are hanging...

Of course.
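[Editor's note] The kill-script idea could be as small as a cron job that sweeps for overstaying dumps at the start of the Linux window. A sketch under stated assumptions: the host names are placeholders, the dump process is assumed to show `BackupPC_dump` plus the host name in its command line, and the four-hour cutoff is arbitrary.

```shell
#!/bin/sh
# Sweep: flag (or kill) BackupPC_dump processes for the listed Windows
# hosts once they have run longer than MAX_SECS. Dry run by default.
MAX_SECS=$((4 * 3600))        # assumed cutoff; align with your blackout split
WIN_HOSTS="winpc1 winpc2"     # placeholder host names

for host in $WIN_HOSTS; do
    for pid in $(pgrep -f "BackupPC_dump.*$host"); do
        # etimes = elapsed seconds since the process started (procps ps)
        elapsed=$(ps -o etimes= -p "$pid" | tr -d ' ')
        if [ -n "$elapsed" ] && [ "$elapsed" -gt "$MAX_SECS" ]; then
            echo "would kill pid $pid for $host after ${elapsed}s"
            # kill "$pid"     # uncomment once the dry run looks right
        fi
    done
done
```

Run it from the backuppc user's crontab at the moment the Linux slot opens; anything it kills will show up as a failed backup and be retried in the next Windows window.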
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
Dave Sherohman wrote at about 15:14:01 +0100 on Thursday, March 11, 2021:
> I'm just the Linux admin around here, but the Windows admin is working on
> it. At this point, I'm mostly just looking for a workaround until he
> figures it out. Plan B for dealing with it is to reduce the backup
> frequency for the Windows machines and run some manually to spread them
> out so they run on different days. But that's undesirable, since it means
> they're going multiple days between backups, of course.
>
> At this point, the extent of what I know about the underlying problem is
> that the xferlog shows:
>
>     Creates event in log
>     backuppc_start couldn't register event source, error code 5
>     Logevent not completed after trying for 6 milliseconds.
>     The target server or the domain controller might be unavailable.
>     Hint: Increase the TimeOut parameter or try again later.
>     Waits 30 s while the shadow copies are created and the file
>     "C:\cygwin\backuppc\rsyncd.pid" is created
>     (repeated a variable number of times)

Sounds like the shadow creation script or your implementation of it is
broken.

Also, why are you using rsyncd? rsync itself is cleaner and generally more
secure, as it uses rsa/dsa keys vs. unencrypted secrets.

> from the prerun script, then a number of "device or resource busy" errors
> while doing the actual backup, and finally a repeat of the "couldn't
> register event source, error code 5" and a lot of "Waits 60 s while the
> file "C:\cygwin\backuppc\shadow_del.pid" is created" when the postrun
> script executes.
>
> This is using the windows client from
> https://sourceforge.net/p/backuppc-windows-client/code/ci/master/tree/

But the bottom line is that shadow creation is "tricky", and all of us who
have written such scripts use a collection of tricks and kluges, some
cleaner than others. For example, my latest version is much better than my
original version from ~10 years ago.
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
Dave Sherohman wrote at about 14:03:05 +0100 on Thursday, March 11, 2021:
> If I were to set $Conf{MaxBackups} = 1 for one specific host, how would
> that be handled? Would it prevent that specific host from running backups
> unless there are no other backups in progress? Would it prevent any other
> backups from being started before that host finished? Would it do both?
> Or is that an inherently global setting that has no effect if set for a
> single host?

I don't see how this would make sense at a per-host level. And any behavior
to have it differ by host is undocumented and not necessarily predictable.
It would seem, though, to do the opposite of what you want, in that one
hung rsync backup would prevent any other hosts from starting.

> My use-case here is that I've got a lot of Linux hosts and a handful of
> Windows machines. The Linux hosts work great with a standard ssh/rsync
> configuration, no problems there.
>
> The Windows machines, on the other hand, are using a Windows BackupPC
> client that our Windows admin found on SourceForge, and it's having...
> problems... with handling shadow volumes. As in, it appears to be failing
> to create them, which causes backup runs to take many hours as it waits
> for "device or resource busy" files to time out. Which ties up available
> slots in the MaxBackups limit and prevents the Linux machines from being
> scheduled.

Look at the code that I recently submitted to the group to streamline
creation/deletion of shadow backups.

If the long wait is in the pre-dump command, then there is not much you can
do other than to make sure that the pre-dump command times out faster. If
the long wait is in a hung rsync, then you can change $Conf{ClientTimeout}
to a smaller number.

> So I'm thinking that it might work to temporarily set the Windows hosts
> to MaxBackups = 1, if that would prevent multiple Windows hosts from
> running at the same time and free up slots for the Linux hosts to run.
> If it would also prevent Linux hosts from running when a Windows host is
> in progress, though, then that would just make things worse.

To me this sounds backasswards. Why not just INCREASE MaxBackups to allow
for a few hung Windows machines? It's not like they are consuming any
server bandwidth or CPU.

Alternatively, I might divide the blackout periods into one period for
Linux and one for Windows machines. That way Linux machines don't compete
with Windows machines for slots. You can then write a script to kill any
hanging Windows backups that bleed into your Linux slot.

But the only real solution is to figure out why and where the Windows
backups are hanging...

> Or is there some other way I could specify "run four backups at once, BUT
> only one of these six can run at a time (alongside three others which
> aren't in that group)"?
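[Editor's note] Concretely, the two levers suggested here (a shorter client timeout and split blackout periods) are plain config settings. A hedged sketch of a per-host override for one of the Windows machines; the file name, hours, and timeout value are illustrative assumptions, not recommendations:

```perl
# pc/winpc1.pl -- hypothetical per-host override for a Windows client

# Give up on a hung transfer after 2 hours instead of the 72000 s default.
$Conf{ClientTimeout} = 7200;

# Black out the window reserved for the Linux hosts (here 19:00-01:00,
# every day of the week), so Windows backups can only start outside it.
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 19.0,
        hourEnd   =>  1.0,
        weekDays  => [0, 1, 2, 3, 4, 5, 6],
    },
];
```

Note that blackout periods only take effect once a host has accumulated $Conf{BlackoutGoodCnt} consecutive good pings, so freshly added hosts may ignore them at first.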
Re: [BackupPC-users] Full backups taking a very long time
Sorin Srbu wrote at about 08:31:35 +0100 on Thursday, March 11, 2021:
> On Wed, 2021-03-10 at 14:04 +, David Williams wrote:
> > I have recently upgraded to Ubuntu 20.04 and since then I have noticed
> > that my full backups are taking much longer than they used to do. I’m
> > only using BackupPC to back up two machines at home: the Ubuntu machine
> > and a Mac laptop. I don’t recall exactly how long the full backups were
> > taking previously, but now they are taking close to 21 hours. The
> > content on both machines hasn’t changed much at all since the upgrade,
> > so I was surprised by the increase in time.
> >
> > A full backup on the Linux machine is around 892MB. This is the local
> > machine that BackupPC is installed on. The drive that the backups are
> > stored on is an SSD, as are most, if not all (sorry, can’t remember) of
> > the drives in the Linux box. Backup method is tar.
> >
> > A full backup on the Mac laptop is around 700MB. It’s connected to the
> > same router as the Linux machine via ethernet. Backup method is rsync.
> >
> > I’m not sure how to troubleshoot this increase in timing, so any help
> > would be much appreciated.
>
> I saw this happening when small files are backed up.

Unless there is a truly pathological number of small files (think tens if
not hundreds of millions), I don't think you can explain a 21-hour backup
period.

My Raspberry Pi 4 on a home network backs up my Ubuntu 18.04 machine, with
2.7GB and 321K files, in under 12 minutes. And I think that is with several
simultaneous backups.
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
I'm just the Linux admin around here, but the Windows admin is working on
it. At this point, I'm mostly just looking for a workaround until he
figures it out. Plan B for dealing with it is to reduce the backup
frequency for the Windows machines and run some manually to spread them
out so they run on different days. But that's undesirable, since it means
they're going multiple days between backups, of course.

At this point, the extent of what I know about the underlying problem is
that the xferlog shows:

    Creates event in log
    backuppc_start couldn't register event source, error code 5
    Logevent not completed after trying for 6 milliseconds.
    The target server or the domain controller might be unavailable.
    Hint: Increase the TimeOut parameter or try again later.
    Waits 30 s while the shadow copies are created and the file
    "C:\cygwin\backuppc\rsyncd.pid" is created
    (repeated a variable number of times)

from the prerun script, then a number of "device or resource busy" errors
while doing the actual backup, and finally a repeat of the "couldn't
register event source, error code 5" and a lot of "Waits 60 s while the
file "C:\cygwin\backuppc\shadow_del.pid" is created" when the postrun
script executes.

This is using the Windows client from
https://sourceforge.net/p/backuppc-windows-client/code/ci/master/tree/

On 3/11/21 2:29 PM, Adam Goryachev via BackupPC-users wrote:
> On 12/3/21 00:03, Dave Sherohman wrote:
> > If I were to set $Conf{MaxBackups} = 1 for one specific host, how would
> > that be handled? Would it prevent that specific host from running
> > backups unless there are no other backups in progress? Would it prevent
> > any other backups from being started before that host finished? Would
> > it do both? Or is that an inherently global setting that has no effect
> > if set for a single host?
> >
> > My use-case here is that I've got a lot of Linux hosts and a handful of
> > Windows machines. The Linux hosts work great with a standard ssh/rsync
> > configuration, no problems there.
> >
> > The Windows machines, on the other hand, are using a Windows BackupPC
> > client that our Windows admin found on SourceForge, and it's having...
> > problems... with handling shadow volumes. As in, it appears to be
> > failing to create them, which causes backup runs to take many hours as
> > it waits for "device or resource busy" files to time out. Which ties up
> > available slots in the MaxBackups limit and prevents the Linux machines
> > from being scheduled.
> >
> > So I'm thinking that it might work to temporarily set the Windows hosts
> > to MaxBackups = 1, if that would prevent multiple Windows hosts from
> > running at the same time and free up slots for the Linux hosts to run.
> > If it would also prevent Linux hosts from running when a Windows host
> > is in progress, though, then that would just make things worse.
> >
> > Or is there some other way I could specify "run four backups at once,
> > BUT only one of these six can run at a time (alongside three others
> > which aren't in that group)"?
>
> I'm pretty sure this has been discussed before, and it is not possible.
>
> However, I would suggest spending a bit more time to resolve the issues
> with the Windows server backups. There is an updated set of instructions
> posted recently to the list (check the archives); if you need some help
> to get something working, the list is a great place to ask. Once it
> works, the Windows machines will back up just as well as the Linux ones.
>
> HTH
>
> Regards,
> Adam
Re: [BackupPC-users] Per-host $Conf{MaxBackups} effects
On 12/3/21 00:03, Dave Sherohman wrote:
> If I were to set $Conf{MaxBackups} = 1 for one specific host, how would
> that be handled? Would it prevent that specific host from running backups
> unless there are no other backups in progress? Would it prevent any other
> backups from being started before that host finished? Would it do both?
> Or is that an inherently global setting that has no effect if set for a
> single host?
>
> My use-case here is that I've got a lot of Linux hosts and a handful of
> Windows machines. The Linux hosts work great with a standard ssh/rsync
> configuration, no problems there.
>
> The Windows machines, on the other hand, are using a Windows BackupPC
> client that our Windows admin found on SourceForge, and it's having...
> problems... with handling shadow volumes. As in, it appears to be failing
> to create them, which causes backup runs to take many hours as it waits
> for "device or resource busy" files to time out. Which ties up available
> slots in the MaxBackups limit and prevents the Linux machines from being
> scheduled.
>
> So I'm thinking that it might work to temporarily set the Windows hosts
> to MaxBackups = 1, if that would prevent multiple Windows hosts from
> running at the same time and free up slots for the Linux hosts to run.
> If it would also prevent Linux hosts from running when a Windows host is
> in progress, though, then that would just make things worse.
>
> Or is there some other way I could specify "run four backups at once, BUT
> only one of these six can run at a time (alongside three others which
> aren't in that group)"?

I'm pretty sure this has been discussed before, and it is not possible.

However, I would suggest spending a bit more time to resolve the issues
with the Windows server backups. There is an updated set of instructions
posted recently to the list (check the archives); if you need some help to
get something working, the list is a great place to ask. Once it works, the
Windows machines will back up just as well as the Linux ones.

HTH

Regards,
Adam
[BackupPC-users] Per-host $Conf{MaxBackups} effects
If I were to set $Conf{MaxBackups} = 1 for one specific host, how would
that be handled? Would it prevent that specific host from running backups
unless there are no other backups in progress? Would it prevent any other
backups from being started before that host finished? Would it do both? Or
is that an inherently-global setting that has no effect if set for a single
host?

My use-case here is that I've got a lot of linux hosts and a handful of
windows machines. The linux hosts work great with standard ssh/rsync
configuration, no problems there.

The windows machines, on the other hand, are using a windows backuppc
client that our windows admin found on sourceforge and it's having...
problems... with handling shadow volumes. As in it appears to be failing to
create them, which causes backup runs to take many hours as it waits for
"device or resource busy" files to time out. Which ties up available slots
in the MaxBackups limit and prevents the linux machines from being
scheduled.

So I'm thinking that it might work to temporarily set the windows hosts to
MaxBackups = 1, if that would prevent multiple windows hosts from running
at the same time and free up slots for the linux hosts to run. If it would
also prevent linux hosts from running when a windows host is in progress,
though, then that would just make things worse.

Or is there some other way I could specify "run four backups at once, BUT
only one of these six can run at a time (alongside three others which
aren't in that group)"?