Re: [BackupPC-users] Starting BackupPC server manually
On 2019-08-03 2:56 p.m., Michael Stowe wrote:
> Thanks for raising that possibility. Since it's a simple type, there's
> nothing special that systemd does except ... exactly what you did at
> the command line. (N.B.: it's possible to override systemd with unit
> files that take priority, but I imagine if something like that were at
> play you'd know about it.) What this leaves is environmental
> differences between the user backuppc's shell and systemd's backuppc
> environment -- though I can't imagine what difference would affect only
> the web service... What is the configuration for that (from backuppc's
> config)?

Thanks for thinking about this.
Here is what I do to set up BP:

  htpasswd -c /etc/BackupPC/apache.users backuppc

Edit /etc/httpd/conf/httpd.conf: change 'User apache' to 'User backuppc'.

Here is the backuppc environment:

  $ env -v
  SHELL=/bin/bash
  SUDO_GID=1000
  HOSTNAME=melodic
  HISTSIZE=1000
  SUDO_COMMAND=/usr/bin/su -s /bin/bash backuppc
  SUDO_USER=norm30
  PWD=/usr/share/BackupPC/bin
  LOGNAME=backuppc
  HOME=/var/lib/BackupPC
  USERNAME=norm30
  LANG=en_CA.UTF-8
  LS_COLORS= many lines of stuff :-)
  TERM=xterm-256color
  USER=backuppc
  DISPLAY=:0
  SHLVL=1
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  SUDO_UID=1000
  MAIL=/var/spool/mail/norm30
  _=/usr/bin/env
  OLDPWD=/home/save/src/dev/msrc/NN

The reason I want to debug into BackupPC is that I am not able to run
backups from the GUI -- the backup requested is always set to "idle".
However, BackupPC_dump runs fine from the backuppc command prompt, for
both incremental and full backups. The GUI problem only started
happening a few weeks ago. My disk systems are nowhere near full.

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
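One way to chase the environment-difference theory is to diff the xterm
environment against a scrubbed one. A minimal sketch, assuming the HOME,
USER, and PATH values from the dump above; `env -i` only approximates
the near-empty environment a Type=simple systemd service receives:

```shell
# Hedged sketch: compare the interactive backuppc environment against a
# scrubbed one resembling what systemd hands the service.  The variable
# values mirror the env dump above; this is an approximation, not what
# systemd literally does.
env | sort > /tmp/xterm.env
env -i HOME=/var/lib/BackupPC USER=backuppc LOGNAME=backuppc \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
    sh -c 'env | sort' > /tmp/service.env
# Every line in the diff is a variable only one of the two sessions has.
diff /tmp/xterm.env /tmp/service.env || true
```

If BackupPC misbehaves the same way when launched from the scrubbed
environment, the environment theory gains weight; if not, one of the
diffed variables is a suspect.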
Re: [BackupPC-users] Starting BackupPC server manually
On 2019-08-03 14:14, Norman Goldstein wrote:
> Yes, I invoke BackupPC from a backuppc xterm:
>
>   $ groups
>   backuppc
>   $ whoami
>   backuppc

Thanks for raising that possibility. Since it's a simple type, there's
nothing special that systemd does except ... exactly what you did at the
command line. (N.B.: it's possible to override systemd with unit files
that take priority, but I imagine if something like that were at play
you'd know about it.) What this leaves is environmental differences
between the user backuppc's shell and systemd's backuppc environment --
though I can't imagine what difference would affect only the web
service...
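The override possibility Michael mentions is easy to rule out. A hedged
sketch checking the standard systemd override locations for this unit
(on a live system, `systemctl cat backuppc` shows the effective result
including drop-ins):

```shell
# Hedged sketch: look for override units or drop-in directories that
# would take priority over /usr/lib/systemd/system/backuppc.service.
# These are the standard systemd search locations.
for p in /etc/systemd/system/backuppc.service \
         /etc/systemd/system/backuppc.service.d \
         /run/systemd/system/backuppc.service; do
    if [ -e "$p" ]; then
        echo "override present: $p"
    else
        echo "no override: $p"
    fi
done
```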
What is the configuration for that (from backuppc's config)?
Re: [BackupPC-users] Starting BackupPC server manually
On 2019-08-03 2:08 p.m., Michael Stowe wrote:
> Perhaps not special enough, but surely you've noticed these lines?
>
>   User=backuppc
>   Group=backuppc

Yes, I invoke BackupPC from a backuppc xterm:

  $ groups
  backuppc
  $ whoami
  backuppc

Thanks for raising that possibility.
Re: [BackupPC-users] Starting BackupPC server manually
On 2019-08-03 08:24, Norman Goldstein wrote:
>> [Service]
>> Type=simple
>> User=backuppc
>> Group=backuppc
> so there does not seem to be anything special about invoking the BP
> server. The same problem if I use the -d flag when invoking BackupPC.

Perhaps not special enough, but surely you've noticed these lines?

  User=backuppc
  Group=backuppc
Re: [BackupPC-users] RsyncIncrArgsExtra
Hi,

On 03.08.19 18:59, Ted Toal wrote:
> If I understand it correctly, incrementals DO NOT read the entire file
> contents and compute a checksum, but work strictly off the file
> modification date, so finding the files requires only reading the
> directories and not the files themselves, right?

Correct. FWIW, `find` with inspection of the modification date (-newer)
calls getdents64 (via readdir) to list directory entries, then lstat for
each entry. `rsync` does exactly the same, so for unchanged files both
should behave identically. (In other words, I don't think an additional
mirroring script based on find buys you anything over BackupPC's use of
rsync.)

What *might* be a problem: I remember the painful experience of listing
directories with more than a couple of files via NFS. [1] explains a
possible reason: readdir is not exactly the most efficient way to get
such lists, in particular if the latency to fetch another chunk of the
directory listing is significant. But that probably won't matter if you
have to call lstat per file anyway.
[1]: http://be-n.com/spw/you-can-list-a-million-files-in-a-directory-but-not-with-ls.html

Alex
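The mtime-only selection described above can be illustrated with plain
find. A hedged sketch using throwaway paths; the point is that nothing
here ever opens file contents, it only reads directories and stats
entries:

```shell
# Hedged illustration: an mtime-based incremental only has to find
# files modified since a stamp -- directory reads and stats, no
# content reads or checksums.  Paths are throwaway examples.
demo=/tmp/mtimedemo
mkdir -p "$demo"
touch -t 202001010000 "$demo/old.dat"   # pretend this was already backed up
touch "$demo/stamp"                     # marks the time of the last run
sleep 1
touch "$demo/new.dat"                   # modified after the stamp
find "$demo" -type f -newer "$demo/stamp" -print
# prints /tmp/mtimedemo/new.dat
```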
Re: [BackupPC-users] RsyncIncrArgsExtra
Ged,

> BackupPC shines, I think, in less well-constrained situations.
>
> Given the boundaries I wonder if you wouldn't do better with something
> simple like a script which runs 'find' to find the files to be backed
> up, plain vanilla rsync to do the actual transfers, and de-duplication
> provided (if necessary) by one of several filesystems which offer it.

We looked at a lot of different solutions, and BackupPC seemed best. I
really like it. I'm not sure that any script we set up could do a better
job of finding the files to back up than rsync via BackupPC with the
file-size-constraint option specified. If I understand it correctly,
incrementals DO NOT read the entire file contents and compute a
checksum, but work strictly off the file modification date, so finding
the files requires only reading the directories and not the files
themselves, right?

Ted
Re: [BackupPC-users] RsyncIncrArgsExtra
Hello again,

On Sat, 3 Aug 2019, Ted Toal wrote:
> I am NOT sure whether bandwidth limitation is what I want. ... only
> backing up our lab's small portion of the data ... only backing up
> files less than 1 MB ...

BackupPC shines, I think, in less well-constrained situations.

Given the boundaries I wonder if you wouldn't do better with something
simple like a script which runs 'find' to find the files to be backed
up, plain vanilla rsync to do the actual transfers, and de-duplication
provided (if necessary) by one of several filesystems which offer it.

--
73,
Ged.
[BackupPC-users] Starting BackupPC server manually
Am running Fedora 30 x86-64, BackupPC 4.3.1. I am not able to start the
BackupPC server manually; I would like to run it manually so that I can
debug into it. When I do, as root:

  systemctl stop backuppc

this stops the server. Then, as user backuppc, to start the server
manually:

  /usr/share/BackupPC/bin/BackupPC

but the BackupPC web page does not come back to life. I kill this, and
do:

  systemctl start backuppc

and the web page comes back to life. I checked the systemd unit file:

  > cat /usr/lib/systemd/system/backuppc.service
  [Unit]
  Description=BackupPC server
  After=syslog.target local-fs.target remote-fs.target

  [Service]
  Type=simple
  User=backuppc
  Group=backuppc
  ExecStart=/usr/share/BackupPC/bin/BackupPC
  ExecReload=/bin/kill -HUP $MAINPID
  PIDFile=/var/run/BackupPC/BackupPC.pid

so there does not seem to be anything special about invoking the
BackupPC server. The same problem occurs if I use the -d flag when
invoking BackupPC.
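After a manual start attempt, it helps to confirm whether the daemon is
actually alive and what it logged. A hedged sketch; the log path is a
guess at a typical Fedora package layout (BackupPC writes to
$LogDir/LOG), not confirmed from this system:

```shell
# Hedged sketch: check whether a BackupPC process survived the manual
# start, then peek at the daemon log.  The log path is an assumption.
if pgrep -u backuppc -f BackupPC >/dev/null 2>&1; then
    echo "BackupPC is running"
else
    echo "BackupPC is not running"
fi
tail -n 20 /var/log/BackupPC/LOG 2>/dev/null || true
```

If the process dies immediately when started from the xterm, the LOG
file (or stderr with -d) is the first place the difference from the
systemd start should show up.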
Re: [BackupPC-users] RsyncIncrArgsExtra
Hi Ted,

On 02.08.19 20:09, Ted Toal wrote:
> Hi Alex,
>
> Ok, thanks for that suggestion. I'd thought of it, but wasn't sure if
> rsync would complain if the arg appeared twice; apparently it doesn't.
>
> I am NOT sure whether bandwidth limitation is what I want. I am
> actually trying to throttle down not only the network bandwidth used
> but also the I/O load. This is a shared file system with hundreds of
> users accessing it. I'm only backing up our lab's small portion of the
> data, and I'm only backing up files less than 1 MB in size. The full
> backups are done separately by someone else in a different manner. For
> my <1 MB files, I am doing a full backup once a year and an incremental
> backup once an hour. I want to have essentially zero impact on the
> network bandwidth and on the I/O load between the server that talks to
> BackupPC and the network storage device.

I'm not 100% sure, but this sounds way more complicated than throttling
the bandwidth between the BackupPC server and the host. IIUC, your
situation is:

  BPC (1) ---(a)--- host (2) ---(b)--- NAS (3)

BPC (1) is the BackupPC server; host (2) is the system you want to back
up, i.e., the client from BackupPC's perspective; and NAS (3) is the
server providing the shared file system. You want to limit I/O on 3 as
well as bandwidth on link b, with privileged access only to 1, no access
to 3, and probably no chance of changing the way 2 communicates with 3,
correct? (E.g., to set up a dedicated NFS connection where the server
side (3) is I/O-limited.)

Here's my gut feeling (disclaimer: unconfirmed, highly dependent on your
exact setup, and I'm not an expert on NFS setups): in that situation,
ionice on 2 won't help; the rsync instance running on host 2 is purely
cpu- and network-bound and has negligible local I/O (which is what
ionice controls). Limiting cpu (via nice) and network bandwidth (via
trickle, e.g.) on 2 won't help, either: just listing files on an NFS
mount is usually a bottleneck, because the individual requests have to
cross link b, and if you somehow manage to limit the bandwidth across b,
actual *content* transfer will be horribly slow. (And I expect this to
be difficult, as the NFS share is probably pre-mounted via a mechanism
you can't control.) The only reasonable idea, AFAICS, would be to
rate-limit the *number* of files accessed, but I do not see how this
could be done short of modifying the rsync sender on host 2.

IMHO, the one and only *proper* way to install such a backup solution
would be to ask the friendly staff managing NAS 3 (hopefully experts on
how their setup works, if it serves 100+ users) to grant you access to
their backups (which they surely have), or to give you read-only direct
access to NAS 3 with proper limits. What you're trying to do sounds like
their job, and even if you have reasons to think that you might do
better, or have specific requirements they won't be able to fulfill,
you're not in the best position to implement it.

Just my 2 pennies from someone who enjoys not having to deal with NFS a
lot...

Alex

> Since I'm just starting, I'm doing the first full backups, and they
> are taking forever. I have a bandwidth limit of 1 MB/s, very low. I
> need to explore how high I can go without impacting others' access,
> and how high I need to go to finish the full and incremental backups
> in a timely fashion. I'm thinking a higher bandwidth limit for the
> full backups would get them done quicker with still little impact. For
> the incrementals, I haven't done one yet so I don't know how long it
> will take, but I may discover I have to increase that bandwidth also,
> and/or decrease the frequency of the incrementals.
>
> Based on that, do you think I should be using ionice too? And by the
> way, I do not have root access to the server.
>
> Ted
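For reference, a hedged config fragment (not from the thread) showing
where the limits Ted mentions could be expressed in a BackupPC 4.x
config.pl; the option name follows the BackupPC 4 documentation, the
values are the 1 MB/s cap and under-1-MB size filter discussed above:

```
# Hypothetical config.pl fragment -- adjust per the BackupPC 4.x docs.
$Conf{RsyncArgsExtra} = [
    '--bwlimit=1024',     # rsync takes KB/s, so 1024 is roughly 1 MB/s
    '--max-size=1m',      # skip files of 1 MB and larger
];
```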