Re: [BackupPC-users] Upgrading from Ubuntu 14.04 to 16.04
On 12 September 2017 17:29:34 CEST, Gerald Brandt wrote:
> Hi,
>
> Has anyone done an upgrade from Ubuntu 14.04 to 16.04 on an active
> BackupPC system? Normally, I'd clonezilla the system drive before I did
> an upgrade, but it's not working right on my Linux RAID 1 boot drives.
>
> Gerald

Works. I've done it: 12.04 to 14.04 to 16.04.

BR, BR
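For anyone following along, a minimal sketch of the upgrade sequence, assuming the stock Ubuntu backuppc package; paths and service names may differ on other installs:

  # Stop BackupPC so no backup is mid-flight during the upgrade
  sudo service backuppc stop

  # Keep a copy of the config and host definitions outside the pool
  sudo tar czf /root/backuppc-etc-$(date +%F).tar.gz /etc/backuppc

  # Run the release upgrade, then reboot and verify before re-enabling schedules
  sudo do-release-upgrade
  sudo service backuppc status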
Re: [BackupPC-users] Incremental backups
On 11/01/2016 13:07, Adam Goryachev wrote:
> On 1/11/16 21:59, Gandalf Corvotempesta wrote:
>> 2016-11-01 11:35 GMT+01:00 Johan Ehnberg:
>>> Changes in BackupPC 4 are especially geared towards allowing very high
>>> full periods. The most recent backup being always filled (as opposed to
>>> rsnapshot's hardlink pointing to the first), a full backup is not
>>> required to maintain a recent and complete representation of all the
>>> files and folders.
>> So, with the current v4, deleting a full backup doesn't break the
>> following incrementals?
>> For example, with Bacula, if you delete a "full" backup, all following
>> backups are lost.
>> In rsnapshot, you can delete whatever you want; it doesn't break
>> anything as long as you keep at least 1 backup, obviously [...]
> So, can you explain the need to delete random backups manually?
> Generally, if you need to do something weird like that, then either you
> are doing something wrong, or you are using the wrong tool.

Some of us have to purge files that have to go out of all backups...

... due to copyright issues -- I had that.
... because wrongly defined excludes were _heavily bloating_ the archive -- I'm guilty.

Regards,
Benjamin
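If what you need is to drop a whole backup (purging individual files out of every backup is a different, harder problem), BackupPC 4 ships a BackupPC_backupDelete utility. A hedged sketch, assuming a source install under /usr/local/BackupPC (Debian/Ubuntu packages put these under /usr/share/backuppc/bin); host name and backup number are illustrative, and you should check the tool's usage on your version first:

  # Delete backup number 42 of host 'alpha' (run as the backuppc user)
  sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_backupDelete -h alpha -n 42

  # Space is reclaimed when unreferenced pool files are cleaned up
  sudo -u backuppc /usr/local/BackupPC/bin/BackupPC_serverMesg BackupPC_nightly run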
Re: [BackupPC-users] Version 4 vs 3
On 28.10.2016 at 18:45, Gandalf Corvotempesta wrote:
> 2016-10-28 17:36 GMT+02:00 Alain Mouette:
>> Please, I went reading about rsnapshot and it also makes extensive use of
>> hard-links. Does it perform differently than BackupPC in this respect?
> rsnapshot is much faster than BackupPC because it doesn't have to do any
> checks against the pool.
> No deduplication or compression, so it will run a plain rsync at
> maximum speed.

I'm quite happy with borgbackup as a secondary backup method alongside
BackupPC and wouldn't go back to rsnapshot and the like. Borgbackup is IMO
fast, and does deduplication, compression and encryption with relatively
little effort -- even offsite. And its "pool" of chunks is quite easy to copy.

I didn't try borg-rsyncimport, which is advertised as being able to "import
existing rsync+hardlink or rsnapshot based backups into a borgbackup
repository."

Nonetheless I wouldn't want to miss BackupPC.

Regards,
Benjamin
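Since borgbackup comes up here, a minimal sketch of the workflow described above; repository paths and the offsite hostname are made up:

  # Initialize an encrypted repository (repokey keeps the key inside the repo)
  borg init --encryption=repokey /mnt/backup/borg-repo

  # Create a compressed, deduplicated archive of /home
  borg create --stats --compression lz4 \
      /mnt/backup/borg-repo::home-{now:%Y-%m-%d} /home

  # The same works against a remote repository over ssh ("offsite")
  borg create --compression lz4 \
      ssh://backup@offsite.example/./borg-repo::home-{now:%Y-%m-%d} /home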
Re: [BackupPC-users] Moving /var/lib/BackupPC to a new disk fails with rsync and OOM
On 05.09.2016 at 17:07, Colin wrote:
> # df -h /var/lib/BackupPC
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdb1       493G  394G   74G  85% /var/lib/BackupPC
>
> # rsync -aH /var/lib/BackupPC/. /mnt/.
>
> So far I've tried just rsync'ing individual directories under
> /var/lib/BackupPC but in the end, the destination appears to use almost
> double the space of the current one.

rsync'ing to an empty dir/disk never(?) makes any sense here. Additionally,
any file-based approach will be hit hard by the hard links.

> Another solution I tried was dump/restore but it just takes too long:
> longer than a day, and backups are daily, so this solution doesn't fit.
>
> Any ideas?

At least an idea to get the same file system onto a bigger disk: dd -- or
anything else that clones a partition. I migrated a full 1TB partition via
dd to a 2.2TB one relatively fast -- from a remote VM to a local host; the
throughput was limited by that VM anyway. I only had to resize the file
system afterwards.

The main drawback of that method is that you are stuck with the same file
system type.

Regards,
Benjamin
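A sketch of the dd route, assuming an ext3/ext4 file system and illustrative device names -- triple-check the devices before running, and keep both partitions unmounted:

  # Clone the old partition onto the new, larger one
  dd if=/dev/sdb1 of=/dev/sdc1 bs=64M conv=fsync status=progress

  # Then grow the file system to fill the new partition
  e2fsck -f /dev/sdc1
  resize2fs /dev/sdc1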
Re: [BackupPC-users] unable to use rsync with NOPASSWD
Hi,

have you added the public key to the authorized keys? It sounds like it is
asking for the login password, not the sudo password.

B

On 5 June 2016 00:17:07 CEST, Mike Bosschaert wrote:
> Hi,
> I've been using backuppc for years now with no problems. Recently I've
> had to reinstall my OS (openSUSE Leap) on one of my clients. I added the
> user backuppc, added the user to the sudoers file (using visudo) with
> the NOPASSWD: /usr/bin/rsync option. But when I issue the backup command
> from the backup server:
>
> /usr/bin/ssh -q -x -l backuppc 192.168.2.102 nice -n 19 sudo
> /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group
> -D --links --hard-links --times --block-size=2048 --recursive . /
>
> the client keeps asking for the password.
> Is there someone who could help me solve this problem?
> Thx
> Mike
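For completeness, a sketch of the key setup that rules out the login password; the IP matches the original post, and the key path assumes the backuppc home directory is /var/lib/backuppc (it differs between distros):

  # On the server, as the backuppc user: create a key if there is none yet
  sudo -u backuppc ssh-keygen -t ed25519 -N '' -f /var/lib/backuppc/.ssh/id_ed25519

  # Install it on the client
  sudo -u backuppc ssh-copy-id backuppc@192.168.2.102

  # This must log in without any password prompt before sudo is even involved
  sudo -u backuppc ssh backuppc@192.168.2.102 true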
[BackupPC-users] [Solved] missed "override" (Re: rsyncd excludes problem (3.2.1))
On 2016-04-13 17:32, Michael Stowe wrote:
> On 2016-04-13 03:29, Benjamin Redling wrote:
> Start with the basics: xferlog. You'll want to review your excludes in
> two ways. First, you'll want to confirm that they show up here:
>
> Sent exclude: Users/*/AppData/Local/Temp/*

Huge thank you! That got me to question my "config.pl" and have a look at
the host's config, because the XferLOG was missing any "Sent exclude" line
-- that made me ask the right questions.

And it brought the solution, which indeed had nothing to do with syntax.
Normally I only use the global settings as long as possible -- I really
miss a central SPOT (single point of truth):

1. I must have set the _erroneous_, new excludes once at the host level --
   so "override" got set.
2. Then I removed the excludes in the host's config via the CGI, _but_
   missed disabling the "override".
3. Whatever settings I made and tested globally got overridden by the now
   non-existent/empty host excludes. *argh*

... and starting to edit the text file directly, at least in this case,
made me _less_ aware of the host-specific settings.

Regards,
Benjamin
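Two quick checks that would have shortened this hunt, assuming the Debian/Ubuntu package layout; 'somehost' is a placeholder:

  # Which host configs override the exclude list? An empty per-host value
  # silently masks the global one.
  grep -n BackupFilesExclude /etc/backuppc/*.pl

  # And do the excludes actually reach the client? They are logged per dump:
  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_zcat \
      /var/lib/backuppc/pc/somehost/XferLOG.0.z | grep 'Sent exclude'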
Re: [BackupPC-users] rsyncd excludes problem (3.2.1)
Sadly coming back for the same reason:

On 03/29/2016 19:57, Michael Stowe wrote:
> On 2016-03-29 10:17, Benjamin Redling wrote:
>> my exclude list seems to be defunct [...] XferMethod is rsyncd,
>> shouldn't '*/tmp' avoid this?
> No, [...]
> If you want to exclude anything in a subdirectory named tmp, then 'tmp/'
> should do the trick.

No variant works in my case -- tmp/, tmp/***, */tmp/*, home => /tmp/

If anybody is using the same package successfully, I would be happy to not
annoy the package maintainer and to keep trying to understand what I am
doing wrong.

>> Did a restart of backuppc after every change.
> That's unnecessary.

I avoided that at first. But when all variants (tmp/, */tmp/*, tmp/***)
failed, I made sure no process was running and restarted backuppc.

Benjamin
Re: [BackupPC-users] rsyncd excludes problem (3.2.1)
On 2016-03-29 19:57, Michael Stowe wrote:
> On 2016-03-29 10:17, Benjamin Redling wrote:
>> [...] /tmp subdirectories and absolute
>> paths (/anonuser... see below) are filling up the discs.
>> XferMethod is rsyncd, shouldn't '*/tmp' avoid this?
> No, '*/tmp' will avoid backing up any files named "tmp" or any empty
> subdirectories named "tmp" as long as they are NOT in the root of the
> share -- but it will not exclude any files IN directories named "tmp",
> which, from your description, is probably what you want.

Ok, I reread the section "INCLUDE/EXCLUDE PATTERN RULES" of the rsync man
page in the hope of grasping the patterns. There it says:

"a trailing "dir_name/***" will match both the directory (as if "dir_name/"
had been specified) and everything in the directory (as if "dir_name/**"
had been specified). This behavior was added in version 2.6.7."

So, to my understanding, tmp/*** (and all the other dirs accordingly)
should be the correct pattern.

[...]
> Also note that exclude paths are relative, so if you want to match /tmp
> in the root of the share, the proper exclude to use is simply 'tmp'.
[...]
> If you want to exclude anything in a subdirectory named tmp, then 'tmp/'
> should do the trick.

I need the latter case: tmp in the home dirs. Thanks for pointing out the
importance of the trailing slash!

Maybe I changed more than I am willing to admit to myself, and even worse:
this wasn't part of config management / version control -- really bad.

Regards,
Benjamin
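A quick way to check what a pattern actually matches, independent of BackupPC, is to replay it with a local rsync dry run; a sketch with throwaway paths:

  # Toy tree: a tmp dir inside a "home" directory
  mkdir -p /tmp/xtest/src/alice/tmp
  touch /tmp/xtest/src/alice/tmp/junk /tmp/xtest/src/alice/keep.txt

  # -n is a dry run; -v lists what would transfer. Compare patterns:
  rsync -avn --exclude='tmp/***' /tmp/xtest/src/ /tmp/xtest/dst/
  rsync -avn --exclude='*/tmp'   /tmp/xtest/src/ /tmp/xtest/dst/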
[BackupPC-users] rsyncd excludes problem (3.2.1)
Hello everybody,

my exclude list seems to be defunct since adding a few absolute paths via
the web interface. Recently(?) /tmp subdirectories and absolute paths
(/anonuser... see below) are filling up the discs. XferMethod is rsyncd,
shouldn't '*/tmp' avoid this?

Did a restart of backuppc after every change.

From the config.pl:

$Conf{BackupFilesExclude} = {
  '*' => [
    '*/tmp', '*/.cache', '*/temp', '*/Cache', '*/cache',
    '/anonuser1/usenet-de/per-group',
    '/anonuser2/sub-dir1/sub-dir2/models',
    '/anonuser3/downloads',
    '*/privat*', '*/ImapMail',
    '*.iso', '*.ISO', '*/.macromedia', '*/.local/share/Trash',
    '*.ogg', '*.OGG', '*.mp3', '*.MP3', '*.mp4', '*.MP4'
  ]
};

I've also tried putting the new paths into an explicitly defined share:

$Conf{BackupFilesExclude} = {
  '*' => [
    '*/tmp', '*/.cache', '*/temp', '*/Cache', '*/cache'
  ],
  'home' => [
    '/anonuser1/usenet-de/per-group',
    '/anonuser2/sub-dir1/sub-dir2/models',
    '/anonuser3/downloads',
    '*/privat*', '*/ImapMail',
    '*.iso', '*.ISO', '*/.macromedia', '*/.local/share/Trash',
    '*.ogg', '*.OGG', '*.mp3', '*.MP3', '*.mp4', '*.MP4'
  ]
};

BackupPC 3.2.1-2ubuntu1.1 on Ubuntu 12.04 LTS, x86_64

What am I doing wrong?

Regards,
Benjamin
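One way to see what the daemon actually does with a given config, assuming the Ubuntu package layout and a placeholder host name, is to trigger one verbose dump by hand and watch the exclude arguments go by:

  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f somehost 2>&1 \
      | grep -i exclude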
Re: [BackupPC-users] Using Amazon AWS S3/Glacier with BackupPC
On 03/18/2016 00:51, Jim Wilcoxson wrote:
> Marcel Meckel (foobar0815.de) writes:
>> Hi there,
>>
>> Amazon offers amongst other services one named S3* (Simple Storage
>> Service, moderate price with low latency) and Glacier* (extremely cheap
>> storage, retrieval can take hours, perfect for backups only needed when
>> disaster strikes).
>>
>> With the correct config rules in place, files uploaded to S3 can be
>> moved to Glacier automatically, e.g. when the file's age is >= 14 days.
>
> Hi Marcel - I'd suggest you store files in S3 Infrequent Access instead of
> Glacier. S3IA costs 1.2 cents/GB/mo - not much more than .7 cents/GB/mo for
> Glacier, and is much more flexible. There's also Google Nearline for 1
> cent/GB/mo and Backblaze B2 for .5 cents/GB/mo.

If redundancy isn't needed, you get 10TB for 47.48€ with 20TB of free
traffic with the biggest Hetzner storage box. The smaller boxes have a
higher cost per GB but a much higher traffic ratio.
https://www.hetzner.de/hosting/produktmatrix/storagebox-produktmatrix
In the US it is even cheaper:
https://www.hetzner.de/us/hosting/produktmatrix/storagebox-produktmatrix
Click the flag in the header to choose from different countries.

BackupPC can live with that as an archive host (plus a NAS in another
building; internal, 1Gb bandwidth). And we get a monthly bill. That is
easier for me at a university than with most cloud vendors -- until
recently Amazon required a credit card, and I've no interest in paying a
middleman.

Server bidding (without setup costs, no minimum contract period) is also
more attractive for our workloads than most cloud instances, as we keep
the hosts for at least a month and traffic is practically free. I've yet
to see a financially comparable offer from a cloud vendor. For our use
cases (NAS, on-demand instances) Amazon is 3-4 times more expensive as
soon as traffic between them and our site comes into play. Avoiding that
would mean going cloud-only -- no, thanks.

Benjamin
Re: [BackupPC-users] BackupPC usage as localhost
On 2015-10-29 10:20, a.d...@accenture.com wrote:
> - Is it possible, instead of sending the files to /home/backuppc/storage,
> to tell BackupPC to send the files to a remote computer, via ssh/scp for
> example?
>
> Does BackupPC have this feature, or can it be implemented? This would
> remove the need to script an additional cron job to export
> /home/backuppc/storage to a remote server when using bpc as the solution
> to back up the server itself.

Just to make sure: are you not simply looking for an "archive host"?
http://backuppc.sourceforge.net/faq/BackupPC.html#Archive-functions

Regards,
Benjamin
Re: [BackupPC-users] Setup of rsync via SSH with unprivileged user 'backuppc'
On 2015-03-15 12:40, Adam Goryachev wrote:
> On 14/03/2015 22:08, Angus Kerr wrote:
[...]
>> # Sudoers file for backuppc user to run rsync
>>
>> backuppc ALL=NOPASSWD: /usr/bin/rsync
>
> Note that this will give the user root access easily enough. The user
> could create the file they want in /tmp, and then use sudo rsync to
> overwrite the target file (or copy a file they don't have read access to,
> to a location they do have access to, including another machine). Therefore,
> this entire process is hardly worth the effort and additional complexity
[...]

A lot of sources at least agree on that being unsafe. AFAIK rrsync is the
proper way and justifies the effort, e.g.:
http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/

Regards,
Benjamin
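For the archives, a hedged sketch of the rrsync approach: instead of unrestricted sudo rsync, the client's authorized_keys pins the server's key to a read-only rsync rooted at /. The rrsync script ships with rsync, though its location varies by distro, and the key material is elided:

  # Install rrsync on the client (Debian-style path shown)
  zcat /usr/share/doc/rsync/scripts/rrsync.gz > /usr/local/bin/rrsync
  chmod 755 /usr/local/bin/rrsync

  # Then prefix the key in /root/.ssh/authorized_keys (one line; key elided):
  #   command="/usr/local/bin/rrsync -ro /",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... backuppc@server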
Re: [BackupPC-users] BackupPC backup speed is about 60Mbps ... slow?
Mbps is bandwidth. More often than not, IOPS are the more interesting
number. I've seen a RAID with 12 spindles degrade to kbps.

/B

Adam Goryachev wrote:
> On 16/11/14 21:19, yashiahru wrote:
>> iostat -M
>> 7.6 MBps (~60mbps)
>>
>> so it's normal because of the hdd speed limitation ...
>>
>> Except for upgrading the HDD, is there any way to shorten the backup time?
>
> Some options:
> 1) Back up less data
> 2) Change your backup method
> 3) Defrag your drive
> 4) Optimise your data so you don't have huge numbers of files in a single
> directory
>
> The first one is sure to solve the issue of course :)
>
> The second one can make more efficient use of your existing available
> resources (maybe); generally this means using rsync. Of course, any
> method will still ultimately have some bottleneck somewhere.
>
> PS, I don't think I've used a drive that is limited to ~60MB/s; even a
> single drive should be capable of 100MB/s or more, though of course if
> there is any other IO on the same drive at the same time, then you will
> massively reduce the available throughput (due to seek times).
>
> Regards,
> Adam
> --
> Adam Goryachev  Website Managers  www.websitemanagers.com.au
Re: [BackupPC-users] BackupPC backup speed is about 60Mbps ... slow?
Your number of spindles, and the IOPS your setup is able to achieve?

You post about your network setup first ... and only after that do you talk
about the data on your disc and your backup speed. So why not check IO
first? Apart from benchmarking your drive setup, have a look at iotop and
iostat.

Regards,
Benjamin

On 15.11.2014 at 20:52, yashiahru wrote:
> PC network interface: 1000Mbps with Cat6 cable
> backup server network interface: 1000Mbps with Cat6 cable
> router: 1000Mbps (puts through about 700Mbps)
>
> CentOS 6.4
> BackupPC latest version
> 10TB of data from the PC backed up to the backup server by SMB (shared
> from Windows)
>
> When the full backup starts:
> the average is about 170Mbps (iftop)
> after 24 hours ... still processing ...
> the average is about 60Mbps (iftop)
>
> Yes, file size and numbers matter:
> the data includes GB-sized videos and millions of small files.
>
> Is it normal speed or below average?
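A sketch of those two checks (iostat comes with the sysstat package; iotop needs root):

  # Per-device utilization and latency, refreshed every 5 seconds; ~100%
  # util with high await means the disks, not the network, are the limit
  iostat -dxm 5

  # Per-process I/O, to confirm the BackupPC/rsync processes are seek-bound
  iotop -o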
Re: [BackupPC-users] Incremental Backup fail
Holger was right. (BTW, I deleted all your postings upfront because of
their format.)

Have a look at your mail at:
http://sourceforge.net/p/backuppc/mailman/message/32569459/

When not viewed as HTML it looks horrible. You can bet that the more
experienced users won't display any HTML in their MUA.

/B

On 15.07.2014 at 10:40, raceface wrote:
> Hello Holger,
>
> thank you for being that kind to non-professionals. I have only 3 blank
> lines in my last posting and 21 non-blank; I don't know why you got 50
> times more blank lines. Using this script is a suggestion of the
> backuppc FAQ and not my personal idea. This script helps me get backuppc
> running full backups. Using $tarPath ends in the error "sudo: no tty
> present and no askpass program specified". Root also has no rights to
> login via ssh, so ssh is no option. Giving the user backuppc sudo rights
> is no option, to prevent having too many users with too many rights.
>
> Best, Andy.
>
>> -----Original Message-----
>> From: Holger Parplies [mailto:wb...@parplies.de]
>> Sent: Monday, 14 July 2014 18:14
>> To: raceface
>> Cc: backuppc-users@lists.sourceforge.net
>> Subject: Re: [BackupPC-users] Incremental Backup fail
>>
>> Hi,
>>
>> thank you for sending us 175 blank lines. Unfortunately, the content in
>> your 28 non-blank lines doesn't make up for it, so I'll quote sparingly.
>>
>> raceface wrote on 2014-07-13 11:20:42 +0200 [[BackupPC-users]
>> Incremental Backup fail]:
>>> [...] I have a problem [...]
>>
>> Obviously.
>>
>>> [...] /bin/tar: Option --after-date: Treating date `2014-07-10'
>>> as 2014-07-10 00:00:00
>>
>> Obvious.
>>
>>> [ skipped 10072 lines ]
>>
>> That's what I feel like, too.
>>
>>> My tarCreate is
>>
>> Nonsense. I've said that before. If you don't understand shell scripts,
>> don't use them. In the very least, don't use them where there's no
>> point. If you do, don't waste our time with it. This is the BackupPC
>> users list, not a "my first steps with shell scripting and quoting
>> problems" forum.
>>
>>> exec /bin/tar -c $*
>>
>> That won't work. See bash(1).
>>
>> Regards, Holger
>>
>> P.S.: If you don't want to take advice, don't ask for any.
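For the record, a sketch of a fixed wrapper plus matching sudoers entries; names and paths are illustrative, not from the thread. "$@" preserves each argument intact where the unquoted $* splits on whitespace, and !requiretty addresses the "sudo: no tty present" error on distros whose sudoers requires a tty by default:

  #!/bin/sh
  # /usr/local/bin/tarCreate -- illustrative wrapper, run via sudo by backuppc
  exec /bin/tar -c "$@"

  # Matching /etc/sudoers.d/backuppc entries (edit with: visudo -f):
  #   Defaults:backuppc !requiretty
  #   backuppc ALL=(root) NOPASSWD: /usr/local/bin/tarCreate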
Re: [BackupPC-users] BackupPC Server Backs Itself Up Instead Of The Client
On 2014-04-11 07:58, inodeman wrote:
> [...] apt-get update/upgrade on the Debian Jessie based server, [...]
> (exactly when, I don't remember), [...]

What about not remembering, but looking it up?

/var/log/apt/ (term.log, history.log)
/var/log/dpkg.log
(/var/log/aptitude)

... and when time permits: /var/lib/dpkg/status, 'apt-get changelog
<package>', and the man pages for dpkg and apt-get.

/BR
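For example, to pin down when that upgrade ran:

  # Every install/upgrade/remove, with timestamps
  grep ' upgrade ' /var/log/dpkg.log | tail

  # Whole apt transactions (start time, command line, packages)
  less /var/log/apt/history.log
  zless /var/log/apt/history.log.1.gz    # rotated logs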