Re: [BackupPC-users] achieving 3-2-1 backup strategy with backuppc

2022-06-01 Thread Ray Frush
I have always interpreted the 3-2-1 strategy to apply to copies of your data, 
not the number of backups 
(https://www.backblaze.com/blog/the-3-2-1-backup-strategy/)

As such, I’ve used two strategies over time.
1) Use BackupPC to back up local devices in the same building/LAN, and have a 
second BackupPC instance in a separate space also running backups of the same 
devices. (3 copies of the data: the source, one on local backup, one on remote 
backup. Requires good network speeds between your local site and your remote 
site.)

2) Use BackupPC to back up local devices to a NAS. Use NAS replication to push 
a copy of the BackupPC data to a remote device.
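
For the replication leg of strategy 2, a minimal sketch might look like the
following (the paths, NAS mount point, and bucket name are placeholders, not
from this thread; with a v4 pool an rsync-style copy is practical, while v3
pools are much harder to copy this way):

  # Replicate the local BackupPC data to a NAS mount, then push the NAS
  # copy to S3 (requires the AWS CLI with configured credentials).
  rsync -aH --delete /var/lib/backuppc/ /mnt/nas/backuppc/
  aws s3 sync /mnt/nas/backuppc/ s3://example-offsite-bucket/backuppc/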




--
Ray Frush "Either you are part of the solution
T:970.491.5527  or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IT | Systems Engineer

> On Jun 1, 2022, at 00:46, Sharuzzaman Ahmat Raslan  
> wrote:
> 
> Hello,
> 
> I have been using BackupPC for a long time, and even implement it
> successfully for several clients.
> 
> Recently I came across several articles about the 3-2-1 backup
> strategy and tried to rethink my previous implementation and how to
> achieve it with BackupPC
> 
> For anyone who is not familiar with the 3-2-1 backup strategy, the
> idea is you should have 3 copies of backups, 2 copies locally on
> different media or servers, and 1 copy remotely on cloud or remote
> server
> 
> I have previously implemented BackupPC + NAS, where I create a Bash
> script to copy the backup data into NAS. That should fulfil the 2
> local backup requirements, and I could extend it further by having
> another Bash script copying from the NAS to cloud storage (eg. S3
> bucket)
> 
> My concern right now is the experience is not seamless for the user,
> and they have no indicator/report about the status of the backup
> inside the NAS and also in the S3 bucket.
> 
> Restoring from NAS and S3 is also manual and is not simple for the user.
> 
> Anyone has come across a similar implementation for the 3-2-1 backup
> strategy using BackupPC?
> 
> Is there any plan from the developers to expand BackupPC to cover this 
> strategy?
> 
> Thank you.
> 
> -- 
> Sharuzzaman Ahmat Raslan
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backup aborted (No files dumped for share

2020-02-21 Thread Ray Frush
For machines with antique rsync versions, we’ve had success backing them up 
with the ’tar’ method instead.
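
For reference, switching a host to the tar transfer method is a one-line
override in that host's config file ($Conf{XferMethod} is BackupPC's standard
setting name):

  $Conf{XferMethod} = 'tar';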



> On Feb 21, 2020, at 06:13, Gerald Brandt  wrote:
> 
> Sadly, no. The machine is due for decommissioning (I've been trying for years, 
> but management...). I still have to back it up though.
> 
> If it's an old rsync issue, I can keep BackupPC 3 running for a while longer.
> 
> The other issue is a stranger one. BackupPC 4 backs up 3 of the 4 
> directories. It says the 4th has no files, yet it's quite large.
> 
> Gerald
> 
> 
> 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Pool storage for BackupPC on a virtual machine

2019-12-18 Thread Ray Frush

The reason we didn't do that (put the VM disk images on the external storage) 
was more related to the complexity of presenting another storage pool to our VM 
environment from the external storage, which would break our design rules. That 
said, absent those design rules, we could have presented a storage pool from 
the external storage to our VM infrastructure and done exactly as you propose.

The rationale for us is that the storage that hosts our VM disk images is quite 
robust (dual controllers), and we choose not to use our external storage (which 
doesn’t have the same dual controller redundancy)  for running VM disk images 
so that if we need to take it off line for patching, there’s zero impact.  


> On Dec 18, 2019, at 02:08, orsomannaro  wrote:
> 
> On 16/12/19 16:54, Ray Frush wrote:
>> I run BackupPC as a VM in my environment which backs up all of the other 
> 8<
> 
> Exactly the same configuration for me, now.
> 
> But I'm asking myself: why not put the entire VM on the external storage?
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Pool storage for BackupPC on a virtual machine

2019-12-16 Thread Ray Frush
I run BackupPC as a VM in my environment, backing up all of the other VMs in 
that environment, and I expressly use external storage (an NFS appliance) for 
the pool file system so that backups are not stored on the same storage as the 
systems being backed up.

If, for some reason, the primary storage becomes corrupted, the backups should 
still be accessible once a replacement BackupPC server is built and the 
external pool of data is presented to it.

--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator



> On Dec 16, 2019, at 06:08, orsomannaro  wrote:
> 
> Considering that:
> 
> - pool must point to a single file system
> - size of OS+application may become negligible compared to pool storage
> - it is really hard to browse/restore backup data without BackupPC interface
> 
> Assuming BackupPC runs on a virtual machine, is there any reason to mount 
> the pool on an external storage?
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Recommended settings for BackupPC v4 on ZFS

2019-09-10 Thread Ray Frush
We back up to a ZFS-based appliance; we allow ZFS to do compression and 
disable compression in BackupPC. We do not allow ZFS to de-duplicate. However, 
since you're looking at running ZFS on the same box that's running BackupPC, 
it probably doesn't matter which layer has compression turned on, since it's 
the same processors. Just don't do both.
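
As a sketch of that split (the dataset name is a placeholder;
$Conf{CompressLevel} is BackupPC's standard compression setting):

  zfs set compression=lz4 tank/backuppc   # compress at the filesystem layer
  zfs set dedup=off tank/backuppc         # leave deduplication to BackupPC's pool
  # and in config.pl, turn off BackupPC's own compression:
  #   $Conf{CompressLevel} = 0;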





> On Sep 10, 2019, at 12:26, Carl Soderstrom  
> wrote:
> 
> We've been a BackupPC v3 shop since there's been a v3 and we're looking at
> building our first v4 BackupPC server. The boss wants to put it on ZFS and a
> JBOD controller.
> 
> I believe that for BackupPC v3 the advice was to turn off ZFS
> filesystem-level deduplication and compression. 
> 
> Is that still true for BackupPC v4?
> Are there any other suggestions for filesystem settings for ZFS when hosting
> a BackupPC pool?
> 
> -- 
> Carl Soderstrom
> Systems Administrator
> Real-Time Enterprises
> www.real-time.com
> 
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/



___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_Admin cannot write to LOCK

2019-08-28 Thread Ray Frush
Ahh, I didn’t install from an RPM.  We install from the official tar files as 
BackupPC packaging has been spotty in the past, and based on your SELinux 
issues, continues to be spotty.   



> On Aug 28, 2019, at 14:12, Jamie Burchell  wrote:
> 
> Ah, perhaps it's due to which package I have. I'm on CentOS 7 with the package 
> from hobbes1069-BackupPC which appears to be from the Fedora project.
> 
> Kind regards,
> 
> Jamie
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_Admin cannot write to LOCK

2019-08-28 Thread Ray Frush

So, I’m running on a RedHat 7 flavored box.   ’semanage fcontext -l’ returns no 
items for specific paths for BackupPC on my system.  Also my systems have no 
content in /usr/share/selinux/packages/, which is why I wrote my own.



> On Aug 28, 2019, at 13:23, Jamie Burchell  wrote:
> 
> Thanks for those details. There does appear to be a policy in place already 
> (/usr/share/selinux/packages/BackupPC/BackupPC.pp)
> 
> # semanage fcontext -l | grep BackupPC
> 
> /etc/BackupPC(/.*)?      all files   system_u:object_r:httpd_sys_rw_content_t:s0
> /var/run/BackupPC(/.*)?  all files   system_u:object_r:var_run_t:s0
> /var/log/BackupPC(/.*)?  all files   system_u:object_r:httpd_log_t:s0
> /etc/BackupPC/LOCK       all files   system_u:object_r:httpd_lock_t:s0
> 
> No mention of /var/lib/BackupPC though. Interesting that the LOCK file is 
> mentioned here yet is trying to write it to the data folder?
> 

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_Admin cannot write to LOCK

2019-08-28 Thread Ray Frush


Our setup is a little different than yours, but this is the SELinux module I 
deploy to my BackupPC server with these steps:
semodule -r backuppc
checkmodule -M -m -o /tmp/backuppc.mod /tmp/backuppc.te
semodule_package -o /tmp/backuppc.pp -m /tmp/backuppc.mod
semodule -i /tmp/backuppc.pp

We also set these SELinux Booleans
setsebool httpd_read_user_content 1
setsebool httpd_use_nfs 1    # our data store is on NFS
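
Note that setsebool changes made without -P last only until the next reboot;
the persistent form of the same booleans is:

setsebool -P httpd_read_user_content 1
setsebool -P httpd_use_nfs 1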


Contents of /tmp/backuppc.te:

module backuppc 1.0;

require {
type etc_t;
type var_log_t;
type net_conf_t;
type user_tmp_t;
type httpd_sys_script_t;
class file { write rename read create unlink open };
class dir { search read write getattr remove_name open add_name };
}

#= httpd_sys_script_t ==
allow httpd_sys_script_t etc_t:dir { write search read open getattr add_name remove_name };
allow httpd_sys_script_t etc_t:file { write rename create unlink };
allow httpd_sys_script_t var_log_t:dir read;
allow httpd_sys_script_t var_log_t:file { read open };
allow httpd_sys_script_t net_conf_t:file { read write open rename create unlink };
allow httpd_sys_script_t user_tmp_t:dir { write search read open getattr add_name remove_name };
allow httpd_sys_script_t user_tmp_t:file { write rename create unlink };



> On Aug 28, 2019, at 09:45, Jamie Burchell  wrote:
> 
> Hi
>  
> I’m having trouble with SELinux reporting:
>  
> avc:  denied  { write } for  pid=15496 comm="BackupPC_Admin" name="LOCK" 
> dev="sda1" ino=201443561 scontext=system_u:system_r:httpd_t:s0 
> tcontext=system_u:object_r:var_lib_t:s0 tclass=file permissive=0
>  
> The issue (and supposed answer) is mentioned here:
>  
> https://lists.fedoraproject.org/pipermail/selinux/2013-March/015287.html 
> 
>  
> I have replaced /var/lib/BackupPC with a symlink to 
> /mnt/volume_lon1_01_part1/BackupPC
>  
> As far as I can tell, the default context for /var/lib/BackupPC is 
> “system_u:object_r:var_lib_t:s0” and this is what I have set on 
> “/mnt/volume_lon1_01_part1/BackupPC”.
>  
> So the context appears to be correct, and I’ve run restorecon -R 
> /var/lib/BackupPC but the messages still persist.
>  
> Anybody know how to fix this?
>  
> I should mention that everything appears to be working fine.
>  
> Thanks,
> Jamie
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net 
> 
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users 
> 
> Wiki:http://backuppc.wiki.sourceforge.net 
> 
> Project: http://backuppc.sourceforge.net/ 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Zfs Deduplication vs Backuppc Pooling

2019-04-02 Thread Ray Frush
I'll echo Jean-Yves' sentiment and advise against turning on ZFS 
deduplication. For a BackupPC pool, which is already significantly deduplicated 
(via the hash pool), ZFS deduplication probably won't buy you as much as you'd 
hope. We recently moved to ZFS-backed storage and rely on ZFS's compression 
instead of BackupPC's. You don't want to do both, as you'll take a small 
penalty on writes while LZ4 decides not to compress your already-compressed 
blocks.

To fill in the 'pooling' question: BackupPC's central architectural feature is 
its hash pool, which is how BackupPC achieves its speed and efficient use of 
disk space. There's no way to turn that off.
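
As a conceptual sketch of what pooling means (illustrative only; this is not
BackupPC's actual code, and the real on-disk fan-out differs): each unique
file is stored once under a content digest, and every backup containing the
same content references that single pool entry.

  # bash sketch: content-addressed storage of one file
  file=$1
  digest=$(md5sum "$file" | awk '{print $1}')
  pooldir="cpool/${digest:0:2}/${digest:2:2}"
  mkdir -p "$pooldir"
  [ -e "$pooldir/$digest" ] || cp "$file" "$pooldir/$digest"   # store only if new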




> On Apr 2, 2019, at 06:25, Stefan Schumacher 
>  wrote:
> 
> Hello,
> 
> I want to set up a new zfs volume as storage for Backuppc. I plan on using 
> the zfs features encryption, deduplication and compression. 
> 
> According to my understanding activating these feature on the filesystem 
> level should be followed by disabling them on the application level, meaning 
> Backuppc. I have found an option to deactive compression in the 
> configuration, but none for pooling.
> 
> My questions are:
> 
> 1) Am I correct in assuming that I should disable pooling and compression in 
> Backuppc?
> 2) How can I disable pooling on Backuppc?
> 
> Yours sincerely
> Stefan Malte Schumacher
> Stefan Schumacher
> Systemadministrator
> NetFederation GmbH
> Sürther Hauptstraße 180 B - 
> Fon:+49 (0)2236/3936-701
> 
> E-Mail: stefan.schumac...@net-federation.de 
> 
> Internet: http://www.net-federation.de 
> 
> *
> NetFederation GmbH
> Geschäftsführung: Christian Berens, Thorsten Greiten 
> Amtsgericht Köln, HRB Nr. 32660
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net 
> 
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users 
> 
> Wiki:http://backuppc.wiki.sourceforge.net 
> 
> Project: http://backuppc.sourceforge.net/ 
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] What happens when you 'run out' of inodes...

2019-03-20 Thread Ray Frush
My NFS storage was doing some interesting reporting, erroneously showing that 
we were using >95% of the available inodes. It's interesting that BackupPC 
reported the following:

Yesterday 274 hosts were skipped because the file system containing
/mnt/backups/BackupPC was too full.  The threshold in the configuration
file is currently 95%, while yesterday the file system was up
to 68% full.  The maximum inode usage yesterday
was 99% and the threshold is currently 95%.


I only have 160 hosts configured in BackupPC, and the logs indicate that all of 
them were backed up, so it’s not clear to me where BackupPC thinks it skipped 
274 hosts!   

Here’s the log excerpt.
...
2019-03-19 20:27:16 Started incr backup on servr101 (pid=26521, share=/)
2019-03-19 20:28:13 Finished incr backup on servr121
2019-03-19 20:28:13 Started incr backup on aix02 (pid=26651, share=/)
2019-03-19 20:29:48 Finished incr backup on servr501
2019-03-19 20:30:00 Disk too full (usage 67%; inode 99%; thres 95%/95%); 
skipped 114 hosts
2019-03-19 20:30:13 Finished incr backup on aix01
2019-03-19 20:31:57 Started incr backup on servr002 (pid=26807, share=/)
2019-03-19 20:33:17 Finished incr backup on aix02
2019-03-19 20:33:34 Finished incr backup on servr101
2019-03-19 20:34:34 Finished incr backup on servr002
2019-03-19 21:00:01 Disk too full (usage 67%; inode 99%; thres 95%/95%); 
skipped 160 hosts
2019-03-19 21:00:01 Next wakeup is 2019-03-19 22:00:00
2019-03-19 21:28:18 Finished incr backup on servr301
2019-03-19 22:00:00 Next wakeup is 2019-03-19 23:00:00
2019-03-19 22:00:01 Started incr backup on orocitym (pid=3142, share=/)
2019-03-19 22:00:01 Started incr backup on servr301 (pid=3143, share=/)
2019-03-19 22:00:01 Started incr backup on servr401 (pid=3144, share=/)
2019-03-19 22:00:01 Started incr backup on servr571 (pid=3145, share=/)
2019-03-19 22:00:01 Started incr backup on servr203 (pid=3146, share=/)
...


What I believe is happening is that at the 20:00 wakeup, the hosts were queued 
up to run, but when the inode count (erroneously) crossed the 95% threshold, 
all of the remaining queued backups got skipped. The problem still persisted at 
21:00, and all 160 hosts were queued and skipped; the 114 skipped at 20:30 plus 
the 160 skipped at 21:00 account for the 274 in the daily report. By 22:00 the 
problem had self-corrected, and the systems queued and ran backups as expected. 
Backups that were in progress continued to run to completion because we didn't 
actually run out of inodes on the backend storage.
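
If you want to check inode headroom on the pool filesystem yourself (same
mount point as in the log above), df -i reports it directly:

  df -i /mnt/backups/BackupPC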

Reporting as an FYI to let people know how BackupPC responds to some of the new 
threshold checking.  


--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can't write config.pl files through CGI interface.

2019-02-21 Thread Ray Frush
All-

I had to write the following SELinux type enforcement policy file 'backuppc.te' 
to allow the httpd daemon access to the required files under /etc/BackupPC, 
even after getting httpd set up to run as the 'backuppc' user. The alternative 
is to set SELinux to permissive, which is not really allowed in our 
environment.


module backuppc 1.0;

require {
type etc_t;
type var_log_t;
type net_conf_t;
type user_tmp_t;
type httpd_sys_script_t;
class file { write rename read create unlink open };
class dir { search read write getattr remove_name open add_name };
}

#= httpd_sys_script_t ==
allow httpd_sys_script_t etc_t:dir { write search read open getattr add_name remove_name };
allow httpd_sys_script_t etc_t:file { write rename create unlink };
allow httpd_sys_script_t var_log_t:dir read;
allow httpd_sys_script_t var_log_t:file { read open };
allow httpd_sys_script_t net_conf_t:file { read write open rename create unlink };
allow httpd_sys_script_t user_tmp_t:dir { write search read open getattr add_name remove_name };
allow httpd_sys_script_t user_tmp_t:file { write rename create unlink };
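
For completeness, the module is compiled and loaded with the standard SELinux
tooling (the same steps I use for the module in the LOCK-file thread; the
/tmp paths are placeholders):

checkmodule -M -m -o /tmp/backuppc.mod /tmp/backuppc.te
semodule_package -o /tmp/backuppc.pp -m /tmp/backuppc.mod
semodule -i /tmp/backuppc.pp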



I top post on purpose.

--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator

> On Feb 21, 2019, at 15:40, Adam Goryachev 
>  wrote:
> 
> On 22/2/19 8:36 am, Hubert SCHMITT wrote:
>> Thanks for your answer Jean Yves,
>> 
>> But I really don't understand what's wrong.
>> 
>> The rights are the same on my side : 
>> -rw-r-   1 backuppc apache  85K 21 févr. 20:31 config.pl 
>> <http://config.pl/>
>> -rw-r-   1 backuppc apache  82K 27 déc.   2014 config.pl_20141227_OK
>> -rw-r-   1 backuppc apache  82K 17 avril  2016 config.pl.old
>> -rw-r-   1 backuppc apache  86K 19 févr. 14:16 config.pl.pre-4.3.0
>> 
>> Apache is running with : User backuppc and Group apache in httpd.conf
>> 
> I think you will need to confirm your apache settings, because if the user is 
> backuppc and group apache, you should have write access to the above file.
> 
> One other thing to confirm is the permissions of the directory, and also 
> whether the web interface is attempting to write to the same file you think 
> it is. To check directory permissions:
> 
> ls -ld /path/to/check
> 
> Regards,
> Adam
> 
> 
> 
> -- 
> Adam Goryachev Website Managers www.websitemanagers.com.au 
> <http://www.websitemanagers.com.au/>
> 
> -- The information in this e-mail is confidential and may be legally 
> privileged. It is intended solely for the addressee. Access to this e-mail by 
> anyone else is unauthorised. If you are not the intended recipient, any 
> disclosure, copying, distribution or any action taken or omitted to be taken 
> in reliance on it, is prohibited and may be unlawful. If you have received 
> this message in error, please notify us immediately. Please also destroy and 
> delete the message from your computer. Viruses - Any loss/damage incurred by 
> receiving this email is not the sender's responsibility.
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC 4.3.0 and NFSv3 don't appear to work well together.

2019-02-15 Thread Ray Frush
I observed a bit of odd behavior with a new install of BackupPC 4.3.0 on RHEL 
7.5. My data store is NFS on a TrueNAS appliance, which was initially set for 
NFSv3 only. In my investigation, I recalled that NFSv3 file locking is a bit of 
a hack, and apparently BackupPC was really having trouble with it. I'm sharing 
this with the group in hopes that it will save someone some time if they hit 
this issue.

Switching to NFSv4 on the TrueNAS (required restart), then re-mounting the 
directory and restarting BackupPC, I find that the behavior now is correct.  
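
If it helps anyone reproduce the fix, forcing the client mount to NFSv4 is a
one-line change (the server export path and mount point are placeholders):

  mount -t nfs -o vers=4 truenas.example.com:/mnt/pool/backuppc /mnt/backups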


Here’s what I’ve observed with NFSv3.  All of these issues resolved by 
switching to NFSv4:

- New install with a new host definition before any backups, I can select the 
host and get the host screen and I can view the Host Summary which is empty.

- Once I start a backup, the Host Summary screen and the Host screen both get 
‘gateway timeout’ errors.  The processes connected with the CGI go into a ‘D’ 
state in ps, indicating some sort of blocking read wait.

- The initial stages of the backup seemed to take longer than usual, but the 
backup itself then fires off and runs normally (with decent speed) and then 
hangs again.  


--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Adding file Prefix upon restoring

2018-09-27 Thread Ray Frush

Typically how I do this is to restore to a different directory than the
original.   This solves the problem of wanting to have the old and new
file available, so that they can be compared for forensics.
During the restore process you are presented with this dialog:

Option 1: Direct Restore
You can start a restore that will restore these files directly onto
129.82.128.53. Warning: any existing files that match the ones you have
selected will be overwritten!
  Restore the files to host: [drop-down host list omitted]
  Restore the files to share: ...
  Restore the files below dir (relative to share): ...
Replace the path presented here with a new directory to restore the files into.
There is no option to rename the files into the existing directory.
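
One workaround (a sketch, not a built-in feature; the host, share, and file
paths are placeholders) is to extract the old copy with BackupPC_tarCreate and
then copy it back next to the current file under a new name. Here -n -1
selects the most recent backup, and the paths inside the tar depend on your
share layout:

  /usr/local/BackupPC/bin/BackupPC_tarCreate -h somehost -n -1 -s /data \
      /sheets/Sheet001.xls | tar -xf - -C /tmp/restore
  cp /tmp/restore/sheets/Sheet001.xls /data/sheets/Restore_Sheet001.xls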

On Thu, 2018-09-27 at 11:06 -0300, Gabriel Doring wrote:
> Hello, this is my first time using this software for backup, and as
> such, I need help.
> Sometimes users ask me to restore some sheets from older days, and
> they need to keep both the new and the old sheet when I restore it.
> Is there any way I can add a prefix or suffix to a restored file?
> e.g. Restore_Sheet001.xls
> 
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying backups to other host

2018-08-22 Thread Ray Frush
If it were me, I wouldn't trust rsync. The hardlink tree is nasty
enough that 'rsync' will probably take longer than 'dd'. I'm not
familiar with using 'dump' and cannot comment. It's been quite some
time since I've considered this problem, and there may be newer methods
for making efficient block-for-block copies, but at the end of the day,
'dd' is a tool I've used over and over and trust.
This relatively recent article points out several methods for cloning a
disk. I've used Clonezilla and know that it's good at skipping empty
blocks on a disk, but I have no idea if it scales well to 200TB.
https://www.makeuseof.com/tag/2-methods-to-clone-your-linux-hard-drive/
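
For the 'dd' route, a minimal sketch (device names are placeholders; both
devices must be unmounted, the target must be at least as large as the source,
and status=progress needs GNU dd):

  dd if=/dev/sdX of=/dev/sdY bs=64M status=progress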



On Wed, 2018-08-22 at 18:15 +, Nino Bosteels wrote:
> Hi,
>  
> I had found some information online that with the -H option
> rsync preserves hardlinks?
>  
> The data is mixed, v3-v4, so we still have hardlinks. Should we try
> to migrate these v3 to v4 before copying? Would it be faster?
>  
> I read that instead of dd you could use dump, which actually is aware
> of free disk space for ext(3-4). Any ideas?
>  
> And would it be an idea to let Nightlies run a couple of times?
>  
> Thanks for your answer in any case ! It’s going to be.. weeks yeah :s
>  
> Nino
>  
> 
> 
> From: Ray Frush [mailto:fr...@rams.colostate.edu]
> Sent: Wednesday, August 22, 2018 7:06 PM
> To: backuppc-users@lists.sourceforge.net
> Subject: Re: [BackupPC-users] Copying backups to other host
> 
> Nino-
> 
> If your old BackupPC instance is version 3.x, then you're advised to
> use 'dd' to make a perfect image of the device that the cpool folder
> resides in. BackupPC 3.x makes extensive use of hardlinks, and the only
> reliable way to copy the cpool is to take an image of the block device
> with 'dd'.
> 
> BackupPC 4.x is a little more forgiving, and you may be able to move
> things with rsync.
> 
> In either case, 200TB is a pretty large pool, and it's going to take
> some time to move that!
> 
> -- 
> Ray Frush "Either you are part of the solution
> T:970.491.5527 or part of the precipitate."
> -*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
> Colorado State University | IS | System Administrator
> 
> On Wed, 2018-08-22 at 13:25 +, Nino Bosteels wrote:
> 
> > Dear list,
> >  
> > We’re trying to copy the backups from one instance to another to
> > use them as an archive, whilst starting a fresh backuppc on another
> > instance.
> >  
> > But we’re not sure which folders to copy and even more so, how.
> > We've been using rsync -avzH --delete to copy the pc folder over.
> > But now we're in doubt if that is sufficient? I think that we'd
> > need to copy the cpool folder too (at least).
> >  
> > Can anybody clarify? The sole resource (old) I found was somebody
> > suggesting to use dd.
> >  
> > Thanks for your time and answer(s). We’re kind of stressing out,
> > since we’d like to cancel the subscription end of the month. And
> > we’re talking 200TB in the cpool folder !!
> >  
> > Kr,
> > Nino
> >  
> >  
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Copying backups to other host

2018-08-22 Thread Ray Frush
Nino-
If your old BackupPC instance is version 3.x, then you're advised to
use 'dd' to make a perfect image of the device that the cpool folder
resides in. BackupPC 3.x makes extensive use of hardlinks, and the
only reliable way to copy the cpool is to take an image of the block
device with 'dd'.
BackupPC 4.x is a little more forgiving, and you may be able to move
things with rsync.
In either case, 200TB is a pretty large pool, and it's going to take
some time to move that!


On Wed, 2018-08-22 at 13:25 +, Nino Bosteels wrote:
> Dear list,
>  
> We’re trying to copy the backups from one instance to another to use
> them as an archive, whilst starting a fresh backuppc on another
> instance.
>  
> But we're not sure which folders to copy and even more so, how. We've
> been using rsync -avzH --delete to copy the pc folder over. But now
> we're in doubt if that is sufficient? I think that we'd need to copy
> the cpool folder too (at least).
>  
> Can anybody clarify? The sole resource (old) I found was somebody
> suggesting to use dd.
>  
> Thanks for your time and answer(s). We’re kind of stressing out,
> since we’d like to cancel the subscription end of the month. And
> we’re talking 200TB in the cpool folder !!
>  
> Kr,
> Nino
>  
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] about migrate to a new server

2018-05-10 Thread frush
Luigi-
There is no way I know of to convert your V3 pool to V4, and I can confirm that 
it is
impractical to try to copy or duplicate your V3 pool to a new system due to the 
way it is
hard-linked internally.
The earlier recommendation to run your new BackupPC server (v4) while the old
server (v3) remains available is generally a good practice when making this
kind of shift.
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator


On Thu, 2018-05-10 at 21:37 +0200, Luigi Augello wrote:
> It is V3, and the new BackupPC server has V4.
> 
> I cannot preserve the old server I will return it because it was on
> leasing.
> 
> What is the solution? Can I convert V3 to V4 and then make the
> migration to the new server?
> 
> Thanks
> 
> Luigi
> 
> 
> 
> On 10/05/2018 21:27, Carl W. Soderstrom wrote:
> 
> > Is this BackupPC v3 or v4? With v3, the number of symlinks makes it
> > impractical to copy data from one machine to another (you can do it
> > if you dd the partition to a partition on the new machine, but don't
> > try a file-level copy unless you use some specialized scripts and
> > really know what you're doing).
> > It's generally easiest to just set up a new server and keep the old
> > one around as a backup until the data can be considered 'expired'.
> > Much simpler and more reliable than trying to migrate data.
> > For BackupPC v4 I have no experience.
> > On 05/10 08:27 , Luigi Augello wrote:
> > > as from subject I need to migrate user data from an old server to
> > > a new server. Is it right to copy the data directory from the old
> > > server into the new server, or will I have problems with
> > > compressed data?

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] considerations on i-nodes

2018-04-18 Thread frush
Torsten-
Nice writeup of a way to help manage the BackupPC file system.
On Wed, 2018-04-18 at 13:25 +0200, f...@igh.de wrote:
> Dear List, 
> 
> running BackupPC v4 I sometimes ran out of i-nodes...
> 
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] offsite server

2018-04-10 Thread frush
Depending on the size of your environment, it may be impractical to try to 
rsync the pool.
Our BackupPC server covers about 120 systems and uses >4TB (and 100M inodes) to 
do
so.  Such a file system would not rsync quickly to a remote location.
I believe the best thing to do is to have a local BackupPC server that backs
up EVERYTHING (Prod, non-Prod, experimental, etc.) and a remote BackupPC
instance that independently backs up everything you can't afford to lose
(Prod).
-- 
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator

On Tue, 2018-04-10 at 10:06 +, Philip Parsons (Velindre - Medical Physics) 
wrote:
> Dear list,
>  
> I’m sure this has been discussed previously, but I couldn’t see anything in 
> recent
> archives that specifically related to v4 of BackupPC.
>  
> I am using BackupPC v4.1.5 to backup hospital data to an on-site server.  We 
> would
> however, like to backup this data to another off-site server.  Has anyone had 
> any
> experience of this?
>  
> I was wondering if it would be a good idea to set up another instance of 
> backuppc on the
> remote server, turn off all the backup functions, copy the config settings, 
> rsync the
> pool and just have that instance as a restorative method (should
>  something happen to our on-site copy).  Is this feasible?
>  
> I guess there are a number of ways that this could be achieved.  CPool size 
> is currently
> approximately 10Tb.  Off-site network speed is going to be pretty good 
> (apologies for
> the vagueness here).
>  
> I’d be very interested in anyone’s thoughts, or experiences of setting up an 
> off-site
> replication server with BackupPC v4.
>  
> Thanks,
> Phil
> 
> 
> 
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! 
> http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Serious error: last backup ... directory doesn't exist!!! - reason found

2018-03-08 Thread frush
One of the more challenging aspects of this is the number of inodes that 
BackupPC consumes
is directly related to how many inodes you're backing up.   If I recall 
correctly, in
BackupPC 4.x, each 'filled' backup will use 2 inodes per object backed up in 
addition to
the inode consumed by the unique hashed file.
One has to understand well the environment they are backing up in order to
make an informed choice about the number of inodes required by BackupPC. I know
I didn't on my first pass! Our current set of backups (~30 days) uses 93
million inodes and 4TB of space to back up 126 hosts. Metadata alone takes up
357MB on our NAS.
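
A rough way to size that up front, using the two-inodes-per-object figure
recalled above (a back-of-envelope sketch only; the share path is a
placeholder):

  objects=$(find / -xdev | wc -l)              # objects in one client's share
  echo "approx inodes per filled backup: $((objects * 2))"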
--
Ray Frush
Colorado State University
On Thu, 2018-03-08 at 19:54 +0300, Alexander Moisseev via BackupPC-users wrote:
> On 3/8/2018 6:59 PM, f...@igh.de wrote:
> > Craig,
> > 
> > again I return to my issue "No space left on device".
> > 
> > Meanwhile I found the reason: the partition ran out of inodes. As you
> > wrote under "How much disk space do I need?" one has to have "plenty
> > of inodes". But what does that mean?
> > 
> > May I ask the following:
> > 
> > - in the "General Server Information" you give some statistical
> >information about disk usage; would it be a good idea also to give
> >information about inode consumption?
> > 
> 
> It is a really good idea, but obtaining inode consumption with du command is 
> complicated
> since it returns different sets of columns on different OSes.
> I think the simplest way is to replace CheckFileSystemUsage subroutine with
> Filesys::DiskSpace module.
> 
> Craig, is it ok to introduce another dependency?
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Help with monthly schedule configuration

2017-11-16 Thread Ray Frush
The incremental period of 0.97 results in a daily backup, so that's
probably what you want to keep.


My schedule ends up giving you something like this:   ~32 daily backups + a
couple of older ones just in case you need an older file.

Backup# Type Filled Level Start Date Duration/mins Age/days
0 full yes 0 2017-09-20 18:00 0.3 56.9
14 incr yes 1 2017-10-04 19:00 0.1 42.9
26 incr no 1 2017-10-16 19:00 0.1 30.9
27 incr no 1 2017-10-17 19:00 0.1 29.9
28 incr yes 1 2017-10-18 19:00 0.1 28.9
29 incr no 1 2017-10-19 19:00 0.1 27.9
30 incr no 1 2017-10-20 19:00 0.1 26.9
31 incr no 1 2017-10-21 19:00 0.1 25.9
32 incr no 1 2017-10-22 19:00 0.1 24.9
33 incr no 1 2017-10-23 19:00 0.1 23.9
34 incr no 1 2017-10-24 19:00 0.1 22.9
35 incr yes 1 2017-10-25 22:00 0.1 21.8
36 incr no 1 2017-10-26 22:00 0.1 20.8
37 incr no 1 2017-10-27 22:00 0.1 19.8
38 incr no 1 2017-10-28 22:00 0.1 18.8
39 incr no 1 2017-10-29 22:00 0.1 17.8
40 incr no 1 2017-10-30 22:00 0.1 16.8
41 incr no 1 2017-10-31 22:00 0.1 15.8
42 incr yes 1 2017-11-01 22:00 0.1 14.8
43 incr no 1 2017-11-02 22:00 0.1 13.8
44 incr no 1 2017-11-03 22:00 0.1 12.8
45 incr no 1 2017-11-04 22:00 0.1 11.8
46 incr no 1 2017-11-05 21:00 0.1 10.8
47 incr no 1 2017-11-06 21:00 0.1 9.8
48 incr no 1 2017-11-07 21:00 0.1 8.8
49 incr yes 1 2017-11-08 21:00 0.2 7.8
50 incr no 1 2017-11-09 21:00 0.1 6.8
51 incr no 1 2017-11-10 21:00 0.1 5.8
52 incr no 1 2017-11-11 21:00 0.1 4.8
53 incr no 1 2017-11-12 21:00 0.1 3.8
54 incr no 1 2017-11-13 21:00 0.1 2.8
55 incr no 1 2017-11-14 21:00 0.1 1.8
56 incr yes 1 2017-11-15 21:00 0.1 0.8

On Thu, Nov 16, 2017 at 2:16 AM, Jamie Burchell  wrote:

> Thanks Ray
>
>
>
> I’ll be honest, I still don’t understand those settings after re-reading
> several times.
>
>
>
> What should the incremental period be here? 0.97?
>
>
>
> I’m also only interested in one month’s worth, so can that schedule be
> simplified?
>
>
>
> Jamie
>
>
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
>


-- 
Time flies like an arrow, but fruit flies like a banana.
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Help with monthly schedule configuration

2017-11-15 Thread Ray Frush
Jamie-

Another source of advice is the listserv archives.   Assuming you're
running BackupPC 4.  If not, you'll want to upgrade!

Our organization wants to keep at least one month of daily backups.

Here's a snippet of a post I did on this subject the last time someone
asked about schedules.

The values you'll want to check:
$Conf{IncrKeepCnt} = 26;  # This is the number of total 'unfilled'
backups kept.

$Conf{FillCycle} = 7;# This is how often a filled backup is kept (1 per
week) which strongly influences the next setting

$Conf{FullKeepCnt} = [  4,  3,  0, ];  # This defines how many 'filled'
backups are retained.

The combination of filled and unfilled backups result in ~32 days of daily
backups plus a couple of older ones just in case a user needs a file from
more than a month ago.



Remember, in BackupPC 4,  a "filled" backup is kinda equivalent to a 'full'
backup for the purposes of "FullKeepCnt", which should be renamed
"FilledKeepCnt".

Missing from that post was:
  $Conf{FullPeriod} = 90.97

This defines the time between 'full' backups where a full checksum is done
against the source.

But, as it turns out, my schedule above deletes the last 'full' backup
after about 70 days, and BackupPC ends up taking a full immediately after
that, so the "FullPeriod" never ends up being triggered.
If my "FullKeepCnt" looked like [4, 3, 1, ]then my FullPeriod would
trigger before the last full backup was aged out.  The Filled vs. Full
backups are a bit confusing.
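
Pulling the values from this thread together, the retention block in config.pl
would look something like the following (the comments are my paraphrase of the
explanations above, not authoritative documentation):

  $Conf{IncrPeriod}  = 0.97;        # roughly one backup attempt per day
  $Conf{IncrKeepCnt} = 26;          # total 'unfilled' backups kept
  $Conf{FillCycle}   = 7;           # keep one filled backup per week
  $Conf{FullKeepCnt} = [4, 3, 0];   # filled backups retained, at increasing ages
  $Conf{FullPeriod}  = 90.97;       # days between true fulls (full checksum pass)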


--
Ray Frush


On Wed, Nov 15, 2017 at 2:59 PM, Jamie Burchell <ja...@ib3.co.uk> wrote:

> Hi!
>
>
>
> Hoping someone can give me that “ah ha!” moment that I’m so desperately
> craving after pouring over the documentation, mailing lists and various
> forum posts.
>
>
>
> I want to move from BackupPC’s default schedule to keeping ~1 month’s
> worth of backups, but I cannot fathom if I should:
>
>
>
> -  Do a full backup every day and keep 30 of them
>
> -  Do a full backup every week and keep 4 of them, with
> incrementals in between
>
> -  Do a full backup each month and keep 30 incrementals.
>
>
>
> BackupPC is so efficient with storage and transferring only what is needed
> between backups that I don’t understand the difference between the three
> approaches. All backups can be browsed like full backups, BackupPC only
> ever transfers files it doesn’t have, all storage is deduplicated and rsync
> can detect changes, new files and deletions, so why does it matter? I am
> using rsync (over SSH), network speed and reliability is good and the
> drives are all SSD.
>
>
>
> The settings I currently have are:
>
>
>
> FullPeriod 6.97
>
> FullKeepCnt 4
>
> IncrPeriod 0.97
>
> IncrKeepCnt 24
>
>
>
> I **think** this will give me 4 full backups with incrementals in
> between, but I think I could have equally have gone with:
>
>
>
> FullPeriod 30
>
> FullKeepCnt 1
>
> IncrPeriod 0.97
>
> IncrKeepCnt 29
>
>
>
> I don’t understand what is meant by a “filled backup” either.
>
>
>
> Thanks for any clarity/help in advance!
>
>
>
> Regards
>
> Jamie
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
>


-- 
Time flies like an arrow, but fruit flies like a banana.
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Ray Frush
Gandalf-

The server you're trying to back up doesn't seem that large.   A similarly
sized server in my environment (210GB, 3M files) does a full backup in
~80-90 minutes, and incrementals run 10-15 minutes.  Monitoring suggests
that the server never exceed 1Gbps outbound on the network connection.
This indicates to me that BackupPC is capable of doing the job, so that
leaves elements of your environment that are variables.

Here's a list that comes to mind:

1) target disk performance:
You indicate that your ZFS store does 50-70MB/s.  That's pretty slow in
today's world.  I get bothered when storage is slower than a single 10K RPM
drive (~100-120MB/sec).  I wonder how fast metadata operations are.
bonnie++ benchmarks might indicate an issue here, as BackupPC is metadata
intensive and has to read a lot of metadata to properly place files in the
CPOOL.   Compare those results with other storage to gauge how well your
ZFS is performing.   I'm not a ZFS expert.

2) rsyncd vs rsync:
When BackupPC uses the 'rsync' method, it uses ssh to start a dedicated
rsync server on the client system with parameters picked by BackupPC
developers.
When you use the 'rsyncd' method,  the options on the client side were
picked by you, and may not play well with BackupPC.  It would be easy to
test around this by setting up backupPC to use the 'rsync' method instead
(setting up ssh correctly of course) and seeing if you note any
improvement.  That will isolate any issues with your rsyncd configs.
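
A minimal sketch of that ssh setup (the key type, file paths, and client name
are placeholders):

  sudo -u backuppc ssh-keygen -t ed25519 -N '' -f ~backuppc/.ssh/id_ed25519
  ssh-copy-id -i ~backuppc/.ssh/id_ed25519.pub root@client.example.com
  # verify the non-interactive login BackupPC will use:
  sudo -u backuppc ssh -l root client.example.com /usr/bin/rsync --version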


A 4x 1Gbps network link will look exactly like a single 1Gbps per network
channel (stream) unless you've got some really nice port aggregation
hardware that can spray data at 4Gbps across those.   As such, unless you
have parallel jobs running (multithreaded), I wouldn't expect to see any
product do better than 1Gbps from any single client in your environment.
The BackupPC server, running multiple backup jobs could see a benefit from
the bonded connection, being able to manage 4 1Gpbs streams at the same
time, under optimal conditions, which never happens.




On Wed, Sep 20, 2017 at 8:34 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-09-20 16:14 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > Question:   Just how big is the host you're trying to backup?  GB?
> number
> > of files?
>
> From BackupPC web page: 147466.4 MB, 3465344 files.
>
> > What is the network connection between the client and the backup
> > server?
>
> 4x 1GbE bonded on both sides.
>
>
> > I'm curious about what it is about your environment that is making
> > it so hard to back up.
>
> It's the same with ALL servers that i'm trying to backup.
> BPC is about 12 times slower than any other tested solution.
> Obviously, same backup server, same source servers.
>
> > I believe I've mentioned my largest, hairiest server is 770GB with 6.8
> > Million files.   Full backups on that system take 8.5 hours to run.
> > Incrementals take 20-30 minutes.   I have no illusions that the
> > infrastructure I'm using to back things up is the fastest, but it's fast
> > enough for the job.
>
> The long running backup (about 38 hours, still running) is an incremental.
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



-- 
Time flies like an arrow, but fruit flies like a banana.
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC 4 hang during transfer

2017-09-20 Thread Ray Frush
Question:   Just how big is the host you're trying to backup?  GB?  number
of files?  What is the network connection between the client and the backup
server?  I'm curious about what it is about your environment that is making
it so hard to back up.

I believe I've mentioned my largest, hairiest server is 770GB with 6.8
Million files.   Full backups on that system take 8.5 hours to run.
Incrementals take 20-30 minutes.   I have no illusions that the
infrastructure I'm using to back things up is the fastest, but it's fast
enough for the job.


--
Ray Frush
Colorado State University.



On Wed, Sep 20, 2017 at 12:41 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-09-19 18:01 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com>:
> > Removed "--inplace" from the command like and running the same backup
> > right now from BPC.
> > It's too early to be sure, but seems to go further. Let's see during the
> night.
>
> It is still running. This is OK on one side, as "--inplace" may have caused
> the issue, but on the other side,
> there is something not working properly in BPC. An incremental backup
> is still running (about 80%)
> from yesterday at 17:39.
>
> rsync, rsnapshot, bacula, rdiff-backup, bareos, and borg took about 3
> hours (some a little bit more, some a little bit less) to backup this
> host in the same way (more or less, same final size of backup)
> BackupPC is taking 10 times more of any other backup software, this
> makes BPC unusable with huge hosts. An order of magnitude is totally
> unacceptable and can only mean some bugs in the code.
>
> As I wrote many months ago, I think there is something not working
> properly in BPC; it's impossible that BPC deduplication is slowing
> down backups in this way.
> Also, I've removed the compression, because I'm using ZFS with native
> compression, thus BPC doesn't have to decompress, check local file,
> compress the new one and so on.
>
> And after the backup, refCnt and fsck is ran. For this server, the
> "post-backup" phase takes another hours or two.
>
> Maybe I have hardware issue on this backup server, but even all other
> backup software that i've tried are running in this server with no
> issue at all. Only BPC is slow as hell.
>
> 
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-19 Thread Ray Frush
Gandalf-

Hopefully, someone with more rsyncd experience can step in and help.   I
haven't used the rsyncd method for 3-4 years, and don't have any current
examples to help you with.

Good luck!
--
Ray Frush

On Tue, Sep 19, 2017 at 9:32 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> rsyncd is running on all servers, as I'm able to back them up properly
> with plain rsync or rsnapshot.
> Only BPC is freezing.
>
> I don't use SSH at all; I'm connecting directly to rsyncd via rsync.
>
> 2017-09-19 17:19 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > Gandalf-
> >
> > It looks like you're using the "rsyncd" method vs. "rsync"; is that
> correct?
> > I don't have experience using the 'rsyncd' method, so my ability to
> continue
> > troubleshooting ends here.  The main thing that jumps out at me is to
> check
> > that rsyncd is actually running on your clients, and that you can connect
> > from the backuppc server using the command you found above.
> >
> >
> > I use the 'rsync' method, and the rest of my answer below is predicated
> on
> > that scheme:
> >
> > I kicked off a backup and did a 'ps -elf | grep backuppc' to get these
> from
> > the BackupPC server:
> >
> > 1) the BackupPC_dump command
> >
> > backuppc  9603  3682  0 09:05 ?00:00:00 /usr/bin/perl
> > /usr/local/BackupPC/bin/BackupPC_dump -i isast201
> >
> > 2) the local rsync_bpc instance:
> >
> > backuppc  9606  9603  8 09:05 ?00:00:03 /usr/local/bin/rsync_bpc
> > --bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
> > --bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3
> --bpc-bkup-prevnum
> > 117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
> > --bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
> > --super --recursive --protect-args --numeric-ids --perms --owner --group
> -D
> > --times --links --hard-links --delete --partial --log-format=log: %o %i
> %B
> > %8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000
> --exclude=stuff
> > isast201:/ /
> >
> > 3) the ssh command initiated by rsync_bpc to the client to initiate the
> > server:  THIS IS THE IMPORTANT ONE to test next:
> >
> > backuppc  9607  9606  1 09:05 ?00:00:00 /usr/bin/ssh -l root
> > isast201 /usr/bin/rsync --server --sender -slHogDtpre.iLsf --iconv=UTF-8
> >
> > 4) The active portion of process 9606 above:
> >
> > backuppc  9608  9606  0 09:05 ?00:00:00 /usr/local/bin/rsync_bpc
> > --bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
> > --bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3
> --bpc-bkup-prevnum
> > 117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
> > --bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
> > --super --recursive --protect-args --numeric-ids --perms --owner --group
> -D
> > --times --links --hard-links --delete --partial --log-format=log: %o %i
> %B
> > %8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000
> --exclude=stuff
> > isast201:/ /
> >
> >
> > In my example, I have set up ssh keys to allow the BackupPC user to access
> > the clients.
> >
> >
> >
> > On Tue, Sep 19, 2017 at 8:51 AM, Gandalf Corvotempesta
> > <gandalf.corvotempe...@gmail.com> wrote:
> >>
> >> I can't get rsync command from the client system, as "ps aux" doesn't
> >> show the command invocation by the server.
> >> BackupPC is running the following:
> >>
> >> /usr/bin/perl /usr/local/backuppc/bin/BackupPC_dump -i myhost
> >>
> >> spawning two identical processes:
> >>
> >> /usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
> >> --bpc-host-name myhost --bpc-share-name everything --bpc-bkup-num 1
> >> --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> >> --bpc-bkup-inode0 7047290 --bpc-attrib-new --bpc-log-level 0 --super
> >> --recursive --protect-args --numeric-ids --perms --owner --group -D
> >> --times --links --hard-links --delete --delete-excluded --partial
> >> --log-format=log: %o %i %B %8U,%8G %9l %f%L --stats
> >> --block-size=131072 --inplace --timeout=72000
> >> --password-file=/var/backups/backuppc/pc/myhost/.rsyncdpw24363
> >> --exclude=var/backups/* --exclude=admin_backups/*
> >> --exclude=reseller_backups/* --exclude=user_backups/* --exclude=tmp/*
> >> --exclude=proc/* --exclude=sys/* --exclude=media/* --exclude=mnt/*
> >> --exclude=tmp/*

Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-19 Thread Ray Frush
Gandalf-

It looks like you're using the "rsyncd" method vs. the "rsync" method; is
that correct?  I don't have experience using the 'rsyncd' method, so my
ability to continue troubleshooting ends here.  The main thing that jumps
out at me is to check that rsyncd is actually running on your clients, and
that you can connect from the backuppc server using the command you found
above.
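For what it's worth, a quick hand test of an rsyncd module looks roughly
like this (the module name and user mirror the rsync_bpc command quoted
below; the password file is one you create containing just the rsync
password):

    # list the module's contents without transferring anything
    rsync --list-only --password-file=/tmp/rsyncd.pw \
        backuppc@myhost::everything | head

If that hangs or errors out, the problem is below BackupPC.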


I use the 'rsync' method, and the rest of my answer below is predicated on
that scheme:

I kicked off a backup and did a 'ps -elf | grep backuppc' to get these from
the BackupPC server:

1) the BackupPC_dump command

backuppc  9603  3682  0 09:05 ?00:00:00 /usr/bin/perl
/usr/local/BackupPC/bin/BackupPC_dump -i isast201

2) the local rsync_bpc instance:

backuppc  9606  9603  8 09:05 ?00:00:03 /usr/local/bin/rsync_bpc
--bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
--bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3 --bpc-bkup-prevnum
117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
--bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
--super --recursive --protect-args --numeric-ids --perms --owner --group -D
--times --links --hard-links --delete --partial --log-format=log: %o %i %B
%8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000 --exclude=stuff
isast201:/ /

3) the ssh command initiated by rsync_bpc to the client to initiate the
server:  THIS IS THE IMPORTANT ONE to test next:

backuppc  9607  9606  1 09:05 ?00:00:00 /usr/bin/ssh -l root
isast201 /usr/bin/rsync --server --sender -slHogDtpre.iLsf --iconv=UTF-8

4) The active portion of process 9606 above:

backuppc  9608  9606  0 09:05 ?00:00:00 /usr/local/bin/rsync_bpc
--bpc-top-dir /mnt/backups/BackupPC --bpc-host-name isast201
--bpc-share-name / --bpc-bkup-num 118 --bpc-bkup-comp 3 --bpc-bkup-prevnum
117 --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 203221 --bpc-attrib-new
--bpc-log-level 1 -e /usr/bin/ssh -l root --rsync-path=/usr/bin/rsync
--super --recursive --protect-args --numeric-ids --perms --owner --group -D
--times --links --hard-links --delete --partial --log-format=log: %o %i %B
%8U,%8G %9l %f%L --stats --iconv=utf8,UTF-8 --timeout=72000 --exclude=stuff
isast201:/ /


In my example, I have set up ssh keys to allow the BackupPC user to access
the clients.
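
A quick way to confirm that path end to end (a sketch; the hostname is the
example above, and it assumes the backuppc user's key is installed for
root on the client):

    sudo -u backuppc ssh -l root isast201 /usr/bin/rsync --version

If that prints the rsync version without prompting for a password, the
ssh/rsync transport is healthy.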



On Tue, Sep 19, 2017 at 8:51 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> I can't get the rsync command from the client system, as "ps aux" doesn't
> show the command invocation from the server.
> BackupPC is running the following:
>
> /usr/bin/perl /usr/local/backuppc/bin/BackupPC_dump -i myhost
>
> spawning two identical processes:
>
> /usr/local/bin/rsync_bpc --bpc-top-dir /var/backups/backuppc
> --bpc-host-name myhost --bpc-share-name everything --bpc-bkup-num 1
> --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> --bpc-bkup-inode0 7047290 --bpc-attrib-new --bpc-log-level 0 --super
> --recursive --protect-args --numeric-ids --perms --owner --group -D
> --times --links --hard-links --delete --delete-excluded --partial
> --log-format=log: %o %i %B %8U,%8G %9l %f%L --stats
> --block-size=131072 --inplace --timeout=72000
> --password-file=/var/backups/backuppc/pc/myhost/.rsyncdpw24363
> --exclude=var/backups/* --exclude=admin_backups/*
> --exclude=reseller_backups/* --exclude=user_backups/* --exclude=tmp/*
> --exclude=proc/* --exclude=sys/* --exclude=media/* --exclude=mnt/*
> --exclude=tmp/* --exclude=wp-content/cache/object/*
> --exclude=wp-content/cache/page_enhanced/*
> --exclude=wp-content/cache/db/*
> --exclude=usr/local/directadmin/data/tickets/* --exclude=var/cache/*
> --exclude=var/log/directadmin/* --exclude=var/log/lastlog
> --exclude=var/log/rsync* --exclude=var/log/bacula/*
> --exclude=var/log/ntpstats --exclude=var/lib/mlocate
> --exclude=var/lib/mysql/* --exclude=var/lib/apt/lists/*
> --exclude=var/cache/apt/archives/* --exclude=usr/local/php55/sockets/*
> --exclude=var/run/* --exclude=var/spool/exim/*
> backuppc@myhost::everything /
>
>
>
> standard rsync works.
> rsnapshot works too (i'm using rsnapshot to backup this host, as BPC
> freeze)
>
> 2017-09-19 16:42 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > Gandalf-
> >
> > As a troubleshooting step, collect the actual running rsync commands from
> > the client system, and from the BackupPC server (found in the Xferlog).
> > Post them here to get a wider audience.
> >
> > Try running an rsync manually  using the same parameters, and see if it
> > works.  My guess is not, and there is a misconfiguration that will leap
> out
> > at you as you work through this.
> >
> > I had to do the same thing when I was doing an initial install.
> >
> > --
> > Ray Frush
> > Colorado State University.

Re: [BackupPC-users] BackupPC 4 hang during transfer

2017-09-19 Thread Ray Frush
Gandalf-

As a troubleshooting step, collect the actual running rsync commands from
the client system, and from the BackupPC server (found in the Xferlog).
Post them here to get a wider audience.

Try running an rsync manually  using the same parameters, and see if it
works.  My guess is not, and there is a misconfiguration that will leap out
at you as you work through this.

I had to do the same thing when I was doing an initial install.

--
Ray Frush
Colorado State University.

On Tue, Sep 19, 2017 at 2:52 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Still getting the same issue.
> Backups for a couple of hosts are impossible; bpc hangs (at different
> progress points) and doesn't continue anymore.
>
> No load on backup server and on source server. Simply, bpc doesn't
> transfer.
>
> 2017-09-18 15:38 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com>:
> > 2017-09-18 14:30 GMT+02:00 G.W. Haywood via BackupPC-users
> > <backuppc-users@lists.sourceforge.net>:
> >> When I first used version 4 I ran into a very similar issue, there
> >> were one or two bug-fixes which addressed it.  You have not stated
> >> exactly what version you are using, but first make sure that all the
> >> BPC software is up to date.
> >
> > I'm using the latest version: 4.1.3
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Ray Frush
Gandalf-

Sounds like you need a bigger backup server.


BackupPC keeps the transfer logs compressed, even the most recent one.

Typical log sizes for my largest host (768GB, 6.7 million files), which
also has a significant amount of churn.  You can see that for the full
(backup 65), even compressed, the log was huge.  There are monthly logs
kept for activity, and a per-backup log for transfers.  This server is
also limited to running tar as the Xfer method, whose logs are extremely
chatty compared to the rsync-based transfer logs.

This server is my worst case scenario.
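
As an aside, the logs listed below are in BackupPC's own compressed
format; to read one from a shell, use the bundled BackupPC_zcat (the path
assumes a /usr/local install):

    /usr/local/BackupPC/bin/BackupPC_zcat XferLOG.65.z | less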

-rw-r-. 1 backuppc backuppc  26K May 31 01:45 LOG.052017.z
-rw-r-. 1 backuppc backuppc 1.5K Jun 30 22:27 LOG.062017.z
-rw-r-. 1 backuppc backuppc  39M Jul 31 03:38 LOG.072017.z
-rw-r-. 1 backuppc backuppc  18M Sep  1 00:02 LOG.082017.z
-rw-r-. 1 backuppc backuppc0 Sep  1 01:14 LOG.092017
drwxr-x---. 2 backuppc backuppc  16K Sep  1 00:03 refCnt
-rw-r-. 1 backuppc backuppc 2.3M Jul  3 22:33 XferLOG.35.z
-rw-r-. 1 backuppc backuppc 2.2M Jul 17 23:39 XferLOG.49.z
-rw-r-. 1 backuppc backuppc 2.2M Jul 31 03:40 XferLOG.62.z
-rw-r-. 1 backuppc backuppc 2.3M Aug  1 04:07 XferLOG.63.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  2 04:23 XferLOG.64.z
-rw-r-. 1 backuppc backuppc  41M Aug  3 12:22 XferLOG.65.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  4 20:29 XferLOG.66.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  5 20:31 XferLOG.67.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  6 20:46 XferLOG.68.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  7 21:00 XferLOG.69.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  8 20:57 XferLOG.70.z
-rw-r-. 1 backuppc backuppc 2.2M Aug  9 20:57 XferLOG.71.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 10 21:04 XferLOG.72.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 11 23:27 XferLOG.73.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 12 22:23 XferLOG.74.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 13 22:21 XferLOG.75.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 14 22:23 XferLOG.76.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 15 22:22 XferLOG.77.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 16 22:23 XferLOG.78.z
-rw-r-. 1 backuppc backuppc 2.3M Aug 17 22:25 XferLOG.79.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 18 23:02 XferLOG.80.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 19 22:20 XferLOG.81.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 20 22:21 XferLOG.82.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 21 22:23 XferLOG.83.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 22 22:22 XferLOG.84.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 23 22:22 XferLOG.85.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 24 22:24 XferLOG.86.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 25 23:07 XferLOG.87.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 26 23:23 XferLOG.88.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 27 23:44 XferLOG.89.z
-rw-r-. 1 backuppc backuppc 2.3M Aug 28 23:48 XferLOG.90.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 29 23:46 XferLOG.91.z
-rw-r-. 1 backuppc backuppc 2.2M Aug 30 23:45 XferLOG.92.z
-rw-r-. 1 backuppc backuppc 2.2M Sep  1 00:03 XferLOG.93.z


A more typical server (220GB, 3 million files, less churn): this server
uses rsync transfers; note that the log for the full backup (#72) is not
noticeably larger than the incremental logs.

-rw-r-. 1 backuppc backuppc 1.2K May 31 01:06 LOG.052017.z
-rw-r-. 1 backuppc backuppc 1.5K Jun 30 22:13 LOG.062017.z
-rw-r-. 1 backuppc backuppc  40M Jul 31 02:29 LOG.072017.z
-rw-r-. 1 backuppc backuppc  41M Aug 31 21:23 LOG.082017.z
-rw-r-. 1 backuppc backuppc0 Sep  1 01:13 LOG.092017
drwxr-x---. 2 backuppc backuppc  16K Aug 31 21:23 refCnt
-rw-r-. 1 backuppc backuppc  15K Jul  4 22:13 XferLOG.42.z
-rw-r-. 1 backuppc backuppc  26K Jul 18 23:37 XferLOG.56.z
-rw-r-. 1 backuppc backuppc 9.3K Jul 31 02:30 XferLOG.68.z
-rw-r-. 1 backuppc backuppc 9.1K Aug  1 02:42 XferLOG.69.z
-rw-r-. 1 backuppc backuppc  52K Aug  2 02:54 XferLOG.70.z
-rw-r-. 1 backuppc backuppc 9.8K Aug  3 04:34 XferLOG.71.z
-rw-r-. 1 backuppc backuppc  35K Aug  4 20:54 XferLOG.72.z
-rw-r-. 1 backuppc backuppc 9.0K Aug  5 20:04 XferLOG.73.z
-rw-r-. 1 backuppc backuppc 9.3K Aug  6 19:32 XferLOG.74.z
-rw-r-. 1 backuppc backuppc 9.3K Aug  7 19:32 XferLOG.75.z
-rw-r-. 1 backuppc backuppc  12K Aug  8 19:34 XferLOG.76.z
-rw-r-. 1 backuppc backuppc 9.9K Aug  9 19:30 XferLOG.77.z
-rw-r-. 1 backuppc backuppc 9.3K Aug 10 19:31 XferLOG.78.z
-rw-r-. 1 backuppc backuppc 9.7K Aug 11 20:25 XferLOG.79.z
-rw-r-. 1 backuppc backuppc 9.1K Aug 12 20:01 XferLOG.80.z
-rw-r-. 1 backuppc backuppc 9.2K Aug 13 19:33 XferLOG.81.z
-rw-r-. 1 backuppc backuppc 9.7K Aug 14 19:32 XferLOG.82.z
-rw-r-. 1 backuppc backuppc  63K Aug 15 19:32 XferLOG.83.z
-rw-r-. 1 backuppc backuppc  16K Aug 16 19:32 XferLOG.84.z
-rw-r-. 1 backuppc backuppc  14K Aug 17 19:43 XferLOG.85.z
-rw-r-. 1 backuppc backuppc  13K Aug 18 20:24 XferLOG.86.z
-rw-r-. 1 backuppc backuppc 

Re: [BackupPC-users] Scheduling advice

2017-09-01 Thread Ray Frush
Longish answer below...

On Fri, Sep 1, 2017 at 3:22 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-08-31 16:33 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
> > The values you'll want to check:
> > $Conf{IncrKeepCnt} = 26;  # This is the number of total 'unfilled'
> > backups kept.
> >
> > $Conf{FillCycle} = 7;# This is how often a filled backup is kept (1
> per
> > week) which strongly influences the next setting
> >
> > $Conf{FullKeepCnt} = [  4,  3,  0, ];  # This defines how many 'filled'
> > backups are retained.
> >
> > The combination of filled and unfilled backups result in ~32 days of
> daily
> > backups plus a couple of older ones just in case a user needs a file from
> > more than a month ago.
>
> So, to archieve a 7 days of daily backups with 1 filled backup every 4
> months, the following would be ok ?
>
> $Conf{IncrKeepCnt} = 7; # 7 days of incrementals
> $Conf{FillCycle} = 120; # 1 filled every 120 days (4 months)
> $Conf{FullKeepCnt} = [ 0 ]; # keep only the latest full
>
>

BackupPC's retention rules are not necessarily the easiest to understand.
Your proposed schedule would result in having only 7 days of backups, which
is probably not what you want.

Here are some examples from my schedule (shown above).  Our goal (an SLA,
actually) is to keep a month of daily-resolution backups; I keep a couple
of extra weeks around just in case, since the incremental storage cost is
really low.  Note the "duration" of the backups: 'filled' backups don't
cost extra time, as they are still 'incremental' backups.

Here's a newly added host; the retention schedule isn't even fully filled
in yet.  You can see the "full" backup taken the first time, and the
subsequent 'filled' backups controlled by the "$Conf{FillCycle} = 7;"
setting.  Once a backup is 'filled' it is treated the same whether it was
a full or a 'filled' incremental.


Backup#  Type  Filled  Level  Start Date   Duration/mins  Age/days  Server Backup Path
0        full  yes     0      8/7 10:39    20.9           24.9      /mnt/backups/BackupPC/pc/[hostname]/0
1        incr  no      1      8/8 10:02    1.7            23.9      /mnt/backups/BackupPC/pc/[hostname]/1
2        incr  no      1      8/9 10:00    2.0            22.9      /mnt/backups/BackupPC/pc/[hostname]/2
3        incr  no      1      8/10 10:00   2.0            21.9      /mnt/backups/BackupPC/pc/[hostname]/3
4        incr  no      1      8/11 10:00   3.9            20.9      /mnt/backups/BackupPC/pc/[hostname]/4
5        incr  no      1      8/12 10:00   2.0            19.9      /mnt/backups/BackupPC/pc/[hostname]/5
6        incr  no      1      8/13 10:00   1.6            18.9      /mnt/backups/BackupPC/pc/[hostname]/6
7        incr  yes     1      8/14 18:00   2.5            17.6      /mnt/backups/BackupPC/pc/[hostname]/7
----- (your proposed schedule would cut off here) -----
8        incr  no      1      8/15 19:03   2.9            16.6      /mnt/backups/BackupPC/pc/[hostname]/8
9        incr  no      1      8/16 19:03   3.2            15.6      /mnt/backups/BackupPC/pc/[hostname]/9
10       incr  no      1      8/17 19:05   3.8            14.6      /mnt/backups/BackupPC/pc/[hostname]/10
11       incr  no      1      8/18 19:11   3.0            13.6      /mnt/backups/BackupPC/pc/[hostname]/11
12       incr  no      1      8/19 19:05   3.0            12.6      /mnt/backups/BackupPC/pc/[hostname]/12
13       incr  no      1      8/20 19:06   3.1            11.6      /mnt/backups/BackupPC/pc/[hostname]/13
14       incr  yes     1      8/21 19:05   3.0            10.6      /mnt/backups/BackupPC/pc/[hostname]/14
15       incr  no      1      8/22 19:10   2.4            9.6       /mnt/backups/BackupPC/pc/[hostname]/15
16       incr  no      1      8/23 19:10   2.8            8.6       /mnt/backups/BackupPC/pc/[hostname]/16
17       incr  no      1      8/24 19:11   2.8            7.6       /mnt/backups/BackupPC/pc/[hostname]/17
18       incr  no      1      8/25 19:24   3.2            6.6       /mnt/backups/BackupPC/pc/[hostname]/18
19       incr  no      1      8/26 19:25   2.3            5.5       /mnt/backups/BackupPC/pc/[hostname]/19
20       incr  no      1      8/27 19:17   2.4            4.6       /mnt/backups/BackupPC/pc/[hostname]/20
21       incr  yes     1      8/28 19:16   2.7            3.6       /mnt/backups/BackupPC/pc/[hostname]/21
22       incr  no      1      8/29 19:14   2.4            2.6       /mnt/backups/BackupPC/pc/[hostname]/22
23       incr  no      1      8/30 19:12   2.9            1.6       /mnt/backups/BackupPC/pc/[hostname]/23
24       incr  yes     1      8/31 19:14   2.9            0.6       /mnt/backups/BackupPC/pc/[hostname]/24

Notice that the most recent backup is always 'filled'.


This system has been in service long enough that the original 'full'
backup has been aged out.  $Conf{FullKeepCnt} = [ 4, 3, 0 ]; suggests that
a total of 7 filled backups (excluding the most recent backup) are
retained in addition to the most recent backup.

Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Ray Frush
On Thu, Aug 31, 2017 at 10:45 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

>
> Ok but let's simulate a crash in your example:
>
> On day 2, before the incremental backup, the filled one (day0) is lost.
> Is backup made on day1 still available with "all" files or only with
> the latest 5GB added between day0 and day1 ?


If you went to disk and manually deleted backup "0", backup 1 would still
exist, with references to all the new files, but I cannot predict how
BackupPC would react to the missing 'filled' full backup, nor am I willing
to test that for you.  You would be in a really unusual, unexpected state,
which is why the BackupPC documentation suggests that you keep regular
'filled' backups available.


Finally, don't confuse a 'filled' backup with a 'full' backup.  A full
backup is 'filled' by default, but an incremental backup can be filled
(every 7 days using default settings) to help BackupPC keep track of
things without any additional data transfer or checksum time.  A 'full'
backup is used to prevent bit rot.  Pick a timeframe that makes sense in
your environment; quarterly works for us, but I considered every 6 months.


I would suggest that you try it out, and before you depend on it, try
breaking BackupPC in all the ways you're worried about and see how it
behaves.   A lot of what you're asking us results in speculative answers
because we haven't tried deliberately breaking  our backups in the ways you
suggest.


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Ray Frush
On Thu, Aug 31, 2017 at 10:23 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
>
> So, with a "full" run, the second "full" is still seen as an
> incremental by rsync?
> Let's assume a 100GB host.
> bpc will backup that host for the first time. 100GB are transferred.
> The next day, only 5GB are added on that host. i'll force bpc to make
> a "full/filled" backup.
> How many GB are transferred, 105GB or only 5GB ?
>

I'll extend the example:

Day 0: full backup, 100GB transferred
Day 1: add 5GB; incremental runs, ~5GB transferred
Day 2: add 5GB; incremental runs, ~5GB transferred
Day 3: add 5GB; full runs.  ALL files are check-summed; files with
       identical checksums are skipped, and new/changed files are
       transferred: ~5GB.  You do incur extra TIME related to
       checksumming all the files, but you only transfer what's
       changed/new.




>
> > With BackupPC 4.x  if you delete a 'filled' backup (why would you do that
> > anyway?)  It just makes BackupPC work harder since it has to rebuild
> > references back to an older filled backup, which cost time while doing a
> > restore.  So you'll only lose the single day that you delete.
>
> And what If I don't have any other filled backup but only incrementals
> made from the deleted "filled" ?


BackupPC requires a minimum of one filled backup.  If you gracefully
delete a filled backup, I believe that BackupPC does intelligently fill
the next backup.  If you were to delete the filled backup from the
filesystem directly, it is simply missing, and BackupPC would have to
build a restore tree referencing all of the available incrementals back
to the most recent available filled backup.
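
For the graceful route, v4 ships a helper for exactly this; a sketch from
memory (check the script's usage text before relying on the flags):

    /usr/local/BackupPC/bin/BackupPC_backupDelete -h myhost -n 5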









-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Ray Frush
On Thu, Aug 31, 2017 at 9:16 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-08-31 16:33 GMT+02:00 Ray Frush <fr...@rams.colostate.edu>:
>
> Thanks for the reply.
> In this case, you are making some full backups.
> I don't want to run any full backup except for the first one, like
> with rsnapshot.
> With rsnapshot, only incrementals are made. I don't want to run full
> backups because
> I have some very very huge servers that will took 2 or 3 days to
> transfer everything, but only
> a couple of hours to transfer the changed files during an incremental run.


With BackupPC 4.x we only take a 'full' every 90 days, and because we're
using rsync, subsequent fulls aren't as painful as the first one.  We run
the full to ensure that all checksums match, to avoid silent data
corruption on the storage.



>
> So, what happens if I delete the filled backup?  Will I only lose that
> single backup point, or are some subsequent incrementals also lost
> because some files were located in the "filled" backup?
>

With BackupPC 4.x, if you delete a 'filled' backup (why would you do that
anyway?), it just makes BackupPC work harder, since it has to rebuild
references back to an older filled backup, which costs time when doing a
restore.  So you'll only lose the single day that you delete.


>
> rsnapshot makes use of hardlinks; thus, the only way to lose a file
> is to lose all hardlinks pointing to that file.
> On the first run, all files are transferred. On following runs
> (incrementals) only changed files are transferred; everything else is
> hardlinked to the first backup. If you lose the "first" backup, the
> hardlink is still resolved.
>
> Is it the same with BPC?
>
>
BackupPC 4.x does not use hardlinks extensively, which is a big
improvement.   BackupPC 4.x keeps a hash tree of unique files, and a system
of file reference 'pointers' to build the backup tree.








-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Scheduling advice

2017-08-31 Thread Ray Frush
Gandalf-

BackupPC is relatively easy to set up for a schedule like the one you
propose.  We keep a 30-day backup history with a few extra weeks tacked on
to get out to ~70 days, so the values below reflect our schedule:

The values you'll want to check:
$Conf{IncrKeepCnt} = 26;  # This is the number of total 'unfilled'
backups kept.

$Conf{FillCycle} = 7;# This is how often a filled backup is kept (1 per
week) which strongly influences the next setting

$Conf{FullKeepCnt} = [  4,  3,  0, ];  # This defines how many 'filled'
backups are retained.

The combination of filled and unfilled backups results in ~32 days of
daily backups plus a couple of older ones, just in case a user needs a
file from more than a month ago.
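
As a sanity check, here's the rough arithmetic behind those numbers as I
read the docs (an approximation; verify against your own host summaries):

    # 26 unfilled incrementals            ~ 26 days of daily backups
    # FullKeepCnt = [ 4, 3, 0 ] with FillCycle = 7 keeps:
    #   4 filled backups, 1 week apart    ~ 4 weeks
    #   3 filled backups, 2 weeks apart   ~ 6 more weeks
    # so the oldest retained backup ends up roughly 10 weeks (~70 days) old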

To answer your second question: BackupPC does a good job of managing the
'filled' (think 'full') backups if you decide to delete one.  I have found
that BackupPC is pretty good at self-healing from issues.  We had a number
of backups impacted by running out of inodes during a cycle.  While the
files lost by the lack of inodes cannot be recovered, BackupPC recovered
gracefully on the next cycle after the file system was expanded.

--
Ray Frush



On Thu, Aug 31, 2017 at 4:39 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Additionally, what happens if I delete/lose/break the full backups?
> Will any subsequent incremental backups be broken, or will the following
> incremental backup automatically become a "full", like with rsnapshot?
>
> 2017-08-30 21:54 GMT+02:00 Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com>:
> > Hi to all
> > Currently I use rsnapshot with success to backup about 20 hosts
> >
> > Our configuration is simple: every night I'll start 4 concurrent backups
> > keeping at least 10 days of old backups
> >
> > In this way, thanks to rsnapshot hardlinks, I'm able to restore any
> > file from up to 10 days ago, and the backup chain stays consistent
> > even if I delete multiple backup trees.
> >
> > How can I get the same with bpc 4?
> > Last time I tried, I had difficulty understanding filled/unfilled
> > backups and retention times.
> >
> > Any help?  I would like to move to BPC due to its deduplication and
> > compression features, but keeping the ability to destroy backup trees
> > without compromising the whole host backup is mandatory.
> >
> > (In other words, with Bacula, if you lose the full backup you also
> > lose the whole backup chain; with rsnapshot there is no full, and if
> > you lose a backup point, you only lose that backup point.)
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Dealing with large directories.

2017-08-14 Thread Ray Frush
Craig-


On Mon, Aug 14, 2017 at 11:54 AM, Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Ray,
>
> The behavior you are seeing is expected for tar (and smb and ftp)
> XferMethods.  Incrementals don't detect deleted or renamed files.  So if
> you have a directory with lots of changes, incrementals will backup the new
> files (and any prior files with recent mtimes), and the "view" of the
> backup will grow to be the union of all new and previous files.  A full
> backup correctly captures the directory contents, so the process you see
> starts over.
>

That's good to know.  I think I'll implement a more traditional full
cycle on the systems I am forced to back up with 'tar' to help deal with
this.



> Unfortunately BackupPC_restore isn't easy to use from the command line
> since it requires a lot of information that is passed via a parameter file
> (using similar syntax to config.pl).
>
> How about you use BackupPC_tarCreate?  It's easy to create a tar archive
> of any set of directories or files from a specific backup, and then you can
> use tar to extract it on the client.
>
>
The tarCreate method looks like it would be just about as good as we can
get given the size of the directory in question.
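
If it helps the archive, a sketch of the invocation we'd use (host, backup
number, share, and path are placeholders; flags per BackupPC_tarCreate's
usage text):

    /usr/local/BackupPC/bin/BackupPC_tarCreate -h hostname -n 82 -s / \
        /dir/path > /tmp/restore.tar
    # inspect, then extract the needed file(s) on the client
    tar -tvf /tmp/restore.tar | less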

Thanks again for your help!


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Dealing with large directories.

2017-08-14 Thread Ray Frush
Craig-

Thanks for taking a look at this.



On Sun, Aug 13, 2017 at 8:06 PM, Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Ray,
>
> What is the XferMethod and backup schedule (ie, how often do you do
> incrementals and fulls)?  Which backup are you viewing (ie, how many
> incrementals need to be merged to view it)?  Are you running v3 or v4?
>
>

We're running v4.1.3.  For this host we're using the 'tar' backup method;
the host doesn't support an rsync new enough to work with BackupPC.

$Conf{XferMethod} = 'tar';

Here's our schedule

$Conf{FullPeriod} = '90.97';
$Conf{IncrPeriod} = '0.97';
$Conf{FillCycle} = 7;
$Conf{FullKeepCnt} = [  4,3,0,0,0,0];


For this host, the last 'full' backup was number 72, taken on Aug-04.
In this output, I count the number of lines of output for each backup for
the directory in question.  Normally, this directory appears to hold
approximately 40K files (+/- 1K), and there is a lot of churn month to
month.  However, BackupPC seems to think the directory just keeps growing.
The file counts support the behavior we saw: the new full backup seems to
have temporarily reset the count to something approaching correct, but
then the counts just keep incrementing.

$ for i in $(ls /mnt/backups/BackupPC/pc/hostname/ | grep -v -E '[^0-9]'); do
      echo -n "BKP $i:"
      /usr/local/BackupPC/bin/BackupPC_ls -h hostname -n $i -s / /dir/path | wc -l
  done
BKP 14:36503  (Filled)
BKP 28:62162  (Filled)
BKP 42:85323  (Filled)
BKP 51:104152
BKP 52:104998
BKP 53:105337
BKP 54:105379
BKP 55:106075
BKP 56:106912  (Filled)
BKP 57:113966
BKP 58:115212
BKP 59:118045
BKP 60:120110
BKP 61:120955
BKP 62:122617
BKP 63:125500  (Filled)
BKP 64:132740
BKP 65:135597
BKP 66:138578
BKP 67:140102
BKP 68:140899
BKP 69:142810
BKP 70:145427  (Filled)
BKP 71:149962
BKP 72:40163   (FULL)
BKP 73:47590
BKP 74:49129
BKP 75:49931
BKP 76:52125
BKP 77:54851
BKP 78:57530
BKP 79:60417  (Filled)
BKP 80:64111
BKP 81:65563
BKP 82:66360  (Filled)


> Two options are to:
>
>- use BackupPC_ls so you can see the backup tree using the command line
>- look in the XferLOG files to figure out which files are in which
>backup.
>
> Craig
>

Both of these are great pointers.   Thanks.

Question: once I do find the file using the manual method, how best to
trigger the restore?  The "BackupPC_restore" command doesn't seem to ask
for enough information (like the backup number) to specify the correct
file version to restore.

$ ./BackupPC_restore

usage: ./BackupPC_restore [-p] [-v] <host> <hostIP> <client> <reqFileName>


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Backing up the BackupPC pool

2017-08-09 Thread Ray Frush
A snapshot of the BackupPC Filesystem does not protect from gross hardware
failure of the storage that destroys both the data and the snapshots.

--
Ray Frush

On Wed, Aug 9, 2017 at 3:42 PM, Alexander Moisseev via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> On 8/9/2017 11:47 PM, Hannes Elvemyr wrote:
>
> Sounds great, but how do I know that BackupPC is not reading/writing to the
>> pool during the copying process (maybe some backup is running or
>> BackupPC_Nightly could start doing some cleaning). Copying a large pool
>> over a bad Internet connection could take hours…
>>
>> Option 1. Stop BackupPC.
> Option 2. Make a snapshot of the file system.
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Backing up the BackupPC pool

2017-08-09 Thread Ray Frush
Hannes-

Option 2, copying the pool (tar, then copy) to a remote server, appears
to be viable for small installations, and we've tested that with success.
But at a certain point, if you have a lot of machines and your pool gets
large, that process becomes unmanageable.  (Our pool is about 3.5TiB for
just over 100 hosts backed up.)  Just tar-ing the data store up takes most
of the day.
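
A minimal sketch of that tar-then-copy flow (paths, remote host, and
service name are examples; stop BackupPC first so the pool is quiescent):

    systemctl stop backuppc
    tar -cf - -C /mnt/backups/BackupPC . | \
        ssh offsite 'cat > /backups/backuppc-pool.tar'
    systemctl start backuppc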

So, Option 1 starts to look more appealing.   It may be that your primary
instance (local) has a more aggressive retention schedule than the remote
copy, or other differences like how frequently backups run.

--
Ray Frush



On Wed, Aug 9, 2017 at 2:47 PM, Hannes Elvemyr <hanne...@gmail.com> wrote:

> Hi!
>
> I'm using BackupPC for all my machines and it's great! I would now like to
> protect my BackupPC pool somehow (if my BackupPC server crashes, gets
> stolen or burns up I don't want to lose the data). As I see it, I have at
> least two options:
>
> 1. Run a second instance of BackupPC off-site
>
> This of course creates a new second pool, but that could actually be an
> advantage if one of them somehow gets corrupted.
>
> Pros: Two independent pools.
>
> Cons: Complicated setup. I need a VPN between my network and the off-site
> machine to get this to work, which turned out to be more complicated than I
> first thought (my current router does not support static routing, so a new
> router would be the first step). I would also need to sync the BackupPC
> configuration from my main BackupPC instance to the second one (if I change
> some configuration on my main instance, I would also like the second
> instance to get that change).
>
> 2. Copying the pool and send it off-site
>
> Pros: No need for a second BackupPC instance. Seems to be the easier
> solution if I can find out how to make a reliable copy of the pool.
>
> Cons: How to copy the pool? The version 4 documentation says that “In V4,
> since hardlinks are not used permanently, duplicating a V4 pool is much
> easier, allowing remote copying of the pool.” Sounds great, but how do I
> know that BackupPC is not reading/writing to the pool during the copying
> process (maybe some backup is running or BackupPC_Nightly could start doing
> some cleaning). Copying a large pool over a bad Internet connection could
> take hours…
>
> Any thought on this? How do you get redundancy of your BackupPC data?
>
> Thanks!
>
> --
> /Hannes Elvemyr
>
>


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] Question about v4

2017-08-07 Thread Ray Frush
Jean-Yves-

I believe you may have been looking at v3 documentation.   BackupPC V4
 does _not_ make extensive use of hard links.

See: https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0

--
Ray Frush
Colorado State University.


On Sun, Aug 6, 2017 at 6:10 PM, B <lazyvi...@gmx.com> wrote:

> Hi backuppcers,
>
> I'm gonna switch from v3 to v4 and have a question about it:
>
> * doc says hardlinked files inodes are stored in each backup tree,
>   I guess that BPC partition being formatted in XFS, any optimization
>   (that might move any file) of this partition is out of the question ?
>
> Jean-Yves



-- 
Time flies like an arrow, but fruit flies like a banana.


[BackupPC-users] Dealing with large directories.

2017-08-03 Thread Ray Frush
This is a two-part question/problem report.

We back up a file system that has a subdirectory that generally contains
around 39K small files, usually adding up to 16GB.  The files see a fair
amount of churn month to month, and we were pulling from a backup taken
about 2 weeks ago.

When we tried to browse the backup to restore a specific file, the
listing took more than 5 minutes to render in the web browser, and we were
not able to use the web interface to browse to and restore the file
directly.  If we hadn't already known pretty much the exact date of the
file, it would have been 'difficult' to determine which backup to start
with.

Question 1: how do people generally deal with this type of situation,
where the number of files in a directory makes it difficult to display
via the BackupPC web interface?


PART 2:

Our workaround was to use "Download Tar File" on the entire subdirectory
so that we could pull out the file.  Here's where we ran into an
interesting issue.

The downloaded tar file was 110GB, larger than the expected size of 16GB,
and even larger than the file system is capable of holding (100GB).  The
tar file contained 106K files, more than the expected 39K files that live
in the directory.

This may also explain why browsing the backup via the web interface timed
out: it was trying to list 106K entries!


Problem Report:  A tar file download is much larger than expected, and
contains more data than could possibly have been on the disk at the time of
the backup.

Any theories on what would cause this behavior?


-- 
Time flies like an arrow, but fruit flies like a banana.


[BackupPC-users] Time to run a 'fsck'?

2017-07-28 Thread Ray Frush
Each night I get a list of warnings about 'missing pool file' in my main
log.   I believe this stems from an issue caused by running out of inodes.
  I'm wondering if doing something like manually running this command would
clean up the pool and fix the missing pool file messages.

  BackupPC_refCountUpdate -m -f -F

Thoughts?  Is there any reason I should not run this?


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs

2017-07-21 Thread Ray Frush
Les-

Putting the Mac vs. (anything else) argument aside, as it detracts from
the discussion, I would like to add that even low-end NAS devices like the
QNAP offer filesystem snapshots that are, for the purposes of a home
office or small business, seamless.  At $WORK, I've used NetApp, EMC
Isilon, and IBM's Storwize V7000 Unified storage, which all do a nice job
of snapshots, and we've never experienced issues with overhead during the
snapshot process.  Point is, there are a lot of options out there,
including using modern file system features.

--
Ray Frush
Colorado State University.


On Fri, Jul 21, 2017 at 8:27 AM, Les Mikesell <lesmikes...@gmail.com> wrote:

> On Fri, Jul 21, 2017 at 8:59 AM, B <lazyvi...@gmx.com> wrote:
> >
> > @GW Haywood: this would be limited to executive people that usually know
> > what they're doing and are the only ones that are working on not-to-lose
> > docs ie: big spreadsheets  - the idea is to entirely pull off any admin
> > from the restoration process, which isn't the case w/ snapshots.
>
> The quick fix here is to use a Mac with an external or network drive
> for time machine.  If you aren't familiar with it, it does exactly
> what you suggested with easy access for the user and filesystem tricks
> for efficiency.  For a more enterprise flavor, NetApp fileservers have
> done filesystem snapshots at configurable intervals into hidden but
> accessible directories for decades now with next to no overhead, again
> with filesystem tricks.  (But their patents are probably what has kept
> everyone else from doing it well for so long...).
>
> --
>Les Mikesell
>  lesmikes...@gmail.com
>



-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] I'd like to make a suggestion to the BPC devs

2017-07-20 Thread Ray Frush
I believe you could do something like you propose with the current
BackupPC by setting 'IncrPeriod' to 0.04 (1/24 of a day).  You'd have to
make some interesting settings for 'FillCycle' and 'FullKeepCnt' to make
it keep a usable schedule, but you could then have 'hourly' incrementals.
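
A sketch of what that might look like in config.pl (numbers are
illustrative only; I haven't run this schedule myself):

    $Conf{IncrPeriod}     = 0.04;         # ~1 hour between incremental attempts
    $Conf{WakeupSchedule} = [ 0 .. 23 ];  # let the server wake up every hour
    $Conf{IncrKeepCnt}    = 48;           # e.g. two days of hourly points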

As Les Mikesell just pointed out, the downside would be that for large
instances, you'd be doing a lot of fairly expensive (compute time)
operations every hour to scan the file system for changes.

I believe that FS snapshots are faster, and more efficient than BackupPC
could ever be for this.

--
Ray Frush

On Thu, Jul 20, 2017 at 7:54 AM, B <lazyvi...@gmx.com> wrote:

> On Thu, 20 Jul 2017 10:26:18 +0200
> Daniel Berteaud <dan...@firewall-services.com> wrote:
>
> > I think this is out of BackupPC's scope
>
> Please develop, don't drop me dry, why is that?
> Why adding a kinda-Xtiple-fugitive-daily-snapshots of only touched files
> is out of the BPC's scope ? On the other hand, I see this as the missing
> complement to get a professional ubiquitous backup system.
>
> JY
>



-- 
Time flies like an arrow, but fruit flies like a banana.


[BackupPC-users] BackupPC 4 practical limits

2017-07-18 Thread Ray Frush
Our BackupPC 4.1.3 pool hit >80% utilization, and I'm contemplating
extending it past 4TiB as we continue to add systems to our backups.

   - Pool is 3237.37GiB comprising 12776990 files and 16512 directories (as
   of 7/18 01:44),

In a previous $JOB, I never had a pool get much larger than about 2.5TiB,
so I'm not entirely certain what the limits of BackupPC are.

Are there reliable reports available from people who have a BackupPC file
system >4TiB?  Are there any gotchas the developers are aware of as a pool
starts to get into this size range?  What is a practical upper end for
BackupPC to manage?

Thanks


-- 
Ray Frush
Colorado State University.


Re: [BackupPC-users] Could Not Open Password File

2017-07-13 Thread Ray Frush
Do you have SELinux enabled?  I encountered significant challenges
getting BackupPC and SELinux to play nicely with each other.  Check
/var/log/audit/audit.log for a report.
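
If it is enabled, something like this usually surfaces the denials
(ausearch is part of the audit package; adjust the time window to taste),
and setenforce can rule SELinux in or out temporarily:

    ausearch -m avc -ts recent | grep -i backuppc
    setenforce 0    # permissive mode, for testing only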

--
Ray Frush
Colorado State University.

On Thu, Jul 13, 2017 at 6:48 AM, Akibu Flash <akibufl...@outlook.com> wrote:

> So should I change the owner and group for apache.users to owner backuppc
> and group backuppc?
>
>
>
> From: Richard Shaw [mailto:hobbes1...@gmail.com]
> Sent: Thursday, July 13, 2017 8:47 AM
> To: Akibu Flash <akibufl...@outlook.com>
> Cc: General list for user discussion, questions and support
> <backuppc-users@lists.sourceforge.net>
> Subject: Re: [BackupPC-users] Could Not Open Password File
>
>
>
> On Thu, Jul 13, 2017 at 7:35 AM, Akibu Flash <akibufl...@outlook.com>
> wrote:
>
> Thanks Richard.  See below:
>
>
>
> -r--rw-rw-. 1 root root  47 Jul 12 21:36 apache.users
>
>
>
>
>
> I don't really have a good way to deal with this from a packaging
> perspective; I'll review the READMEs and see if I can make it clearer.
>
>
>
> You obviously created the apache.users file as root; the file needs to
> have the same owner and group as the other files (backuppc/apache).
> After that the usual attributes should work (644).
>
>
>
> Thanks,
>
> Richard
>


-- 
Time flies like an arrow, but fruit flies like a banana.


Re: [BackupPC-users] A cleanup question for orphaned backup directories.

2017-06-20 Thread Ray Frush
Craig-

(just returning from vacation, so pardon the late response)

The problem seems to have been caused by an odd 'out of inodes' condition
on the NFS storage appliance we use for our storage.  After rebooting the
appliance, the inode issue went away, and we've had no further instances
of it.

However, in the meantime, a number of 'orphaned' backup directories were
left behind.  They exist, but are not referenced by the
.../BackupPC/pc/[system]/backups file.

For example:

[backuppc@isift090 isemt101]$ pwd
/mnt/backups/BackupPC/pc/isemt101

[backuppc@isift090 isemt101]$ ls -ld 10
drwxr-x---. 5 backuppc backuppc 4096 May 31 23:05 10

[backuppc@isift090 isemt101]$ ls 10
attrib_cf06051de16efa13986a642e2dff5f6c  backupInfo  f%2f  inode  refCnt

File/directory ownership is correct, and it looks like an orphaned backup.
  So far, I've found that just deleting the orphaned directories seems like
the best course of action, but I had to write a small script to help find
them.

cd /mnt/backups/BackupPC/pc
for dir in $(for i in $(ls); do ls -d $i/[0-9]* | sort -n; done); do
    echo -n "$dir:"
    host=$(echo $dir | cut -f 1 -d '/')
    backup=$(echo $dir | cut -f 2 -d '/')
    # the backups file is tab-delimited with the backup number in field 1;
    # match the whole field so backup 1 doesn't also match 10, 15, ...
    if awk -v b="$backup" '$1 == b { found = 1 } END { exit !found }' $host/backups; then
        echo FOUND
    else
        echo -n "MISSING  "
        ls -l $host/XferLOG.$backup.z
        rm -rf $host/$backup
        rm $host/XferLOG.$backup.z
    fi
done





On Sat, Jun 10, 2017 at 10:42 PM, Craig Barratt via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Ray,
>
> Sorry about the delay in replying.  I'm not sure why those two directories
> are still around; BackupPC_backupDelete didn't report any errors.  It also
> didn't delete the XferLOG.[56].z files.
>
> It looks like #3 was correctly deleted, and XferLOG.3.z too.
>
> What's in those directories?  Are they owned by the backuppc user?
>
> Craig
>
> On Wed, May 24, 2017 at 4:03 PM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> I've encountered an interesting issue:
>>
>> In $TopDir/pc/host,   I have some orphaned directories that don't appear
>> in $TopDir/pc/host/backups.
>>
>> For example:   this host shows two extra directories, and XferLOGs for
>> backup #5 and #6  that does not appear in the 'backups' file:
>>
>> $ ls isxxt004
>> 0   17  19  21  23  6  backups  LOCK  LOG.052017
>>  RestoreInfo.0   restores XferLOG.15.z  XferLOG.18.z  XferLOG.20.z
>>  XferLOG.22.z  XferLOG.5.z  XferLOG.8.z
>> 15  18  20  22  5   8  backups.old  LOG.042017.z  refCnt
>>  RestoreLOG.0.z  XferLOG.0.z  XferLOG.17.z  XferLOG.19.z  XferLOG.21.z
>>  XferLOG.23.z  XferLOG.6.z  XferLOG.bad.z.old
>>
>> $ cat isxxt004/backups | cut -f 1,2 -d '  '
>> 0 full
>> 8 incr
>> 15 incr
>> 17 incr
>> 18 incr
>> 19 full
>> 20 incr
>> 21 incr
>> 22 incr
>> 23 incr
>>
>> The odd thing is that the LOG suggests that Backups #5 and #6 should have
>> been deleted:
>>
>> ...
>> 2017-05-11 20:04:32 incr backup started for directory /
>> 2017-05-11 20:08:38 incr backup 12 complete, 182376 files, 182376 bytes,
>> 0 xferErrs (0 bad files, 0 bad shares, 0 other)
>> 2017-05-11 20:08:38 Removing unfilled backup 3
>> 2017-05-11 20:08:38 BackupPC_backupDelete: removing #3
>> 2017-05-11 20:08:38 BackupPC_backupDelete: No prior backup for merge
>> 2017-05-11 20:08:46 BackupPC_refCountUpdate: host isxxt004 got 0 errors
>> (took 7 secs)
>> 2017-05-11 20:08:46 Finished BackupPC_backupDelete, status = 0 (running
>> time: 8 sec)
>> 2017-05-12 20:03:29 incr backup started for directory /
>> 2017-05-12 20:05:03 incr backup 13 complete, 182381 files, 182381 bytes,
>> 0 xferErrs (0 bad files, 0 bad shares, 0 other)
>> 2017-05-12 20:05:04 Removing unfilled backup 5
>> 2017-05-12 20:05:04 BackupPC_backupDelete: removing #5
>> 2017-05-12 20:05:04 BackupPC_backupDelete: No prior backup for merge
>> 2017-05-12 20:05:08 BackupPC_refCountUpdate: host isxxt004 got 0 errors
>> (took 4 secs)
>> 2017-05-12 20:05:08 Finished BackupPC_backupDelete, status = 0 (running
>> time: 4 sec)
>> 2017-05-13 20:03:17 incr backup started for directory /
>> 2017-05-13 20:05:20 incr backup 14 complete, 182388 files, 182388 bytes,
>> 0 xferErrs (0 bad files, 0 bad shares, 0 other)
>> 2017-05-13 20:05:20 Removing unfilled backup 6
>> 2017-05-13 20:05:21 BackupPC_backupDelete: removing #6
>> 2017-05-13 20:05:21 BackupPC_backupDelete: No prior backup for merge
>> 2017-05-13 20:05:25 BackupPC_refCountUpdate: host isxxt004 got 0 errors
>> (took 4

Re: [BackupPC-users] Question about transient inodes

2017-05-30 Thread Ray Frush
Holger-

Thanks for the followup. That's not exactly the answer I was expecting
based on the behavior we experienced.   Once all of the initial 'full'
backups of our systems were taken, the inode situation seems to have calmed
down.

--
Ray.

On Tue, May 30, 2017 at 4:48 PM, Holger Parplies <wb...@parplies.de> wrote:

> Hi,
>
> Ray Frush wrote on 2017-05-23 15:37:36 -0600 [[BackupPC-users] Question
> about transient inodes]:
> > [...]
> > Can a developer comment on under what conditions BackupPC might be
> > temporarily allocating a lot of extra inodes, and then quickly releasing
> > them?
>
> none.
>
> Regards,
> Holger
>



-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] A cleanup question for orphaned backup directories.

2017-05-24 Thread Ray Frush
I've encountered an interesting issue:

In $TopDir/pc/host,   I have some orphaned directories that don't appear in
$TopDir/pc/host/backups.

For example: this host shows two extra directories and XferLOGs, for
backups #5 and #6, that do not appear in the 'backups' file:

$ ls isxxt004
0   17  19  21  23  6  backups  LOCK  LOG.052017  RestoreInfo.0
  restores XferLOG.15.z  XferLOG.18.z  XferLOG.20.z  XferLOG.22.z
 XferLOG.5.z  XferLOG.8.z
15  18  20  22  5   8  backups.old  LOG.042017.z  refCnt
 RestoreLOG.0.z  XferLOG.0.z  XferLOG.17.z  XferLOG.19.z  XferLOG.21.z
 XferLOG.23.z  XferLOG.6.z  XferLOG.bad.z.old

$ cat isxxt004/backups | cut -f 1,2
0 full
8 incr
15 incr
17 incr
18 incr
19 full
20 incr
21 incr
22 incr
23 incr

The odd thing is that the LOG suggests that Backups #5 and #6 should have
been deleted:

...
2017-05-11 20:04:32 incr backup started for directory /
2017-05-11 20:08:38 incr backup 12 complete, 182376 files, 182376 bytes, 0
xferErrs (0 bad files, 0 bad shares, 0 other)
2017-05-11 20:08:38 Removing unfilled backup 3
2017-05-11 20:08:38 BackupPC_backupDelete: removing #3
2017-05-11 20:08:38 BackupPC_backupDelete: No prior backup for merge
2017-05-11 20:08:46 BackupPC_refCountUpdate: host isxxt004 got 0 errors
(took 7 secs)
2017-05-11 20:08:46 Finished BackupPC_backupDelete, status = 0 (running
time: 8 sec)
2017-05-12 20:03:29 incr backup started for directory /
2017-05-12 20:05:03 incr backup 13 complete, 182381 files, 182381 bytes, 0
xferErrs (0 bad files, 0 bad shares, 0 other)
2017-05-12 20:05:04 Removing unfilled backup 5
2017-05-12 20:05:04 BackupPC_backupDelete: removing #5
2017-05-12 20:05:04 BackupPC_backupDelete: No prior backup for merge
2017-05-12 20:05:08 BackupPC_refCountUpdate: host isxxt004 got 0 errors
(took 4 secs)
2017-05-12 20:05:08 Finished BackupPC_backupDelete, status = 0 (running
time: 4 sec)
2017-05-13 20:03:17 incr backup started for directory /
2017-05-13 20:05:20 incr backup 14 complete, 182388 files, 182388 bytes, 0
xferErrs (0 bad files, 0 bad shares, 0 other)
2017-05-13 20:05:20 Removing unfilled backup 6
2017-05-13 20:05:21 BackupPC_backupDelete: removing #6
2017-05-13 20:05:21 BackupPC_backupDelete: No prior backup for merge
2017-05-13 20:05:25 BackupPC_refCountUpdate: host isxxt004 got 0 errors
(took 4 secs)
2017-05-13 20:05:25 Finished BackupPC_backupDelete, status = 0 (running
time: 5 sec)
...

I'm looking for some advice on the best or safest way to clean up, or
re-link in the extra directories.
At this point, I'm not concerned about losing the incremental backup
history, and just want to get the '$TopDir/pc' tree into a consistent state.

Thanks!

-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] 4.1.2 CGI Wipes Config.pl Settings

2017-05-24 Thread Ray Frush
Michael-

This was very recently covered in the "BackupFilesExclude issue" thread
started by Mark Moorcroft.

A fix for the issue has been posted:

On Fri, May 12, 2017 at 11:21 AM, Craig Barratt
<cbarratt@users.sourceforge.net> wrote:

> I pushed a fix
> <https://github.com/backuppc/backuppc/commit/7936184a9ec049fef3d0d67e012b23d79eb336f1>
>  for
> this last night.
> Craig
>

--
Ray Frush


On Wed, May 24, 2017 at 10:07 AM, Michael McGregor <mcgrm...@isu.edu> wrote:

> Hello,
>
> I have a problem with the BackupFilesExclude setting not persisting or
> populating the website correctly.
>
> If I modify the BackupFilesExclude setting on the global config in the CGI
> interface, the settings make it into the config.pl correctly. But when I
> reload the global config webpage, the "values" for the hash are all gone.
> They are still in the config.pl file, but the CGI interface must not be
> parsing the setting correctly to populate the webpage. If I save any
> settings, the configuration is wiped out.
>
> I found this out the hard way as a server backup was backing up /proc when
> I found it.
>
> After editing the CGI interface to add the proper values, config.pl reads:
>
> $Conf{BackupFilesExclude} = {
>   '/' => [
> 'dev',
> 'media',
> 'mnt',
> 'proc',
> 'run',
> 'sys',
> 'tmp'
>   ]
> };
>
> After reediting the configuration through the CGI interface and saving,
> config.pl reads:
>
> $Conf{BackupFilesExclude} = {
>   '/' => []
> };
>
> Any advice?
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
>


-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Question about transient inodes

2017-05-23 Thread Ray Frush
I'm adding systems in earnest to my new BackupPC 4.1.2 installation, and
I've encountered an interesting problem.

My current pool uses a decent number of inodes:

cpool:7864978
pc:11504955

For a total of 19,369,933 objects.

However, my NFS filesystem, which had 50M pre-allocated inodes and a max
of 80M inodes, briefly ran out of inodes!  I increased the max, and it
finally settled at 88M inodes allocated, but currently reports ~20M used.
A number of double checks confirm the current usage.  I was never able
to show that the file system actually had 80M+ inodes, making me think they
were quickly allocated and then released before I could catch the state.
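
For anyone chasing something similar, a minimal watcher sketch (assumes the
pool is mounted at /mnt/backups; adjust the path and interval to taste):

# Sample inode usage once a minute to try to catch a transient spike.
while true; do
    printf '%s ' "$(date '+%F %T')"
    df -i /mnt/backups | tail -1
    sleep 60
done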

Since BackupPC is the only process pointed at this file system, the
conclusion is that BackupPC is, at some point in the process, requesting
far more inodes for a short time than it is using.

It looks like, based on the numbers, that BackupPC temporarily allocates
a guess at the number of inodes needed to store the $Conf{FullKeepCnt}
backups, which in my case was [4, 0, 4, 0, 0, 2], i.e. 10 fulls.

Can a developer comment on under what conditions BackupPC might be
temporarily allocating a lot of extra inodes, and then quickly releasing
them?


Thanks.

-- 
Time flies like an arrow, but fruit flies like a banana.
Ray Frush
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Apache user...

2017-05-14 Thread Ray Frush
Bob-

I found the instructions to run httpd (Apache) as the BackupPC user
(backuppc) to be a graceful way to get BackupPC to play well with all the
required file ownership.
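
For reference, a sketch of that change on a Red Hat-style layout (the config
path and restart command vary by distribution; back up httpd.conf first):

# Make httpd run as the backuppc user so the CGI can read the pool files.
sudo sed -i -e 's/^User .*/User backuppc/' \
            -e 's/^Group .*/Group backuppc/' /etc/httpd/conf/httpd.conf
sudo systemctl restart httpd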

--
Ray Frush


On Sat, May 13, 2017 at 5:50 PM, Robert Katz <bobk...@digido.com> wrote:

> Guys:
>
> Can the apache user remain as apache or should I follow Gotham's
> instructions to make the apache user be 'backuppc'?
>
> Also in that case would I add 'apache' as the additional group for the
> user 'backuppc'?
>
> I'm far from done, as you can imagine, I'm just hacking away at the
> details!
>
>
> Thanks
>
> Bob
>
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupFilesExclude issue

2017-05-12 Thread Ray Frush
Craig-

The fix appears to work in my instance.  I can no longer replicate the
problem, and the config file, managed via CGI, appears as expected.



Thanks!



On Fri, May 12, 2017 at 11:21 AM, Craig Barratt <
cbarr...@users.sourceforge.net> wrote:

> I pushed a fix
> <https://github.com/backuppc/backuppc/commit/7936184a9ec049fef3d0d67e012b23d79eb336f1>
> for this last night.
>
> Craig
>
> On Fri, May 12, 2017 at 9:56 AM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> I’m seeing this issue too!
>>
>> I noticed that with 4.1.2  if I use the WebUI (CGI) to edit a host config
>> that has an existing $Conf{BackupFilesExclude} set, under the’xfer’ tab,
>> there’s a checkmark for the Override of “BackupFilesExclude”, but the CGI
>> does not show the list of excludes.
>>
>> Saving the config leaves an empty set:
>>
>> $Conf{BackupFilesExclude} = {};
>>
>> or (on a second test)
>>
>> $Conf{BackupFilesExclude} = {
>>   '*' => []
>> };
>>
>>
>> But the LOG reports, incorrectly:
>>
>> 017-05-12 10:44:30 backuppc changed BackupFilesExclude in
>> host grover config to {} from {'*' => ['/proc','/sys','/mnt','/net',
>> '/dev','/var/log/lastlog','/var/run','/ais02','/backup','/ex
>> port','/software','/oracledba','/syswork’]}
>> or
>> 2017-05-12 10:52:00 backuppc changed BackupFilesExclude in
>> host grover config to {'*' => []} from {'*' =>
>> ['/proc','/sys','/mnt','/net','/dev','/var/log/lastlog','/va
>> r/run','/ais02','/backup','/export','/software','/oracledba','/syswork']}
>>
>>
>>
>> Please let me know if there's any additional information we can provid.
>>
>>
>> --
>> Ray Frush
>>
>> On Thu, May 11, 2017 at 5:38 PM, Moorcroft, Mark (ARC-TS)[Analytical
>> Mechanics Associates, INC.] <mark.moorcr...@nasa.gov> wrote:
>>
>>> Using the 4.1.2 COPR on CentOS7. If I set the BackupFilesExclude in
>>> config.pl it will be overwritten the second I change any other setting.
>>> If I change the setting in the GUI it saves, but reverts to empty the next
>>> time any other setting is saved.
>>>
>>>
>>> 2017-05-11 16:29:05 tsadmin changed MaxOldPerPCLogFiles in main config
>>> to '12' from '10'
>>> 2017-05-11 16:29:05 Reloading config/host files via CGI request
>>> 2017-05-11 16:29:05 Next wakeup is 2017-05-11 17:00:00
>>> 2017-05-11 16:35:20 tsadmin changed BackupFilesExclude in main config to
>>> {'*' => ['lastlog']} from {'*' => []}
>>> 2017-05-11 16:35:20 Reloading config/host files via CGI request
>>> 2017-05-11 16:35:20 Next wakeup is 2017-05-11 17:00:00
>>> 2017-05-11 16:35:39 tsadmin changed BackupFilesExclude in main config to
>>> {'*' => []} from {'*' => ['lastlog']}
>>> 2017-05-11 16:35:39 tsadmin changed MaxOldPerPCLogFiles in main config
>>> to '10' from '12'
>>> 2017-05-11 16:35:39 Reloading config/host files via CGI request
>>> 2017-05-11 16:35:39 Next wakeup is 2017-05-11 17:00:00
>>>
>>>
>>> Ideas?
>>>
>>> --
>>>
>>> Contractor: AMA Incorporated
>>> Mark Moorcroft – Lead Admin
>>> ESTRAD Linux Computer Support
>>> NASA Ames Research Center
>>> Moffett Field, CA 94035-1000
>>> N230/108 (MS N230-1)
>>>
>>>
>>> 
>>> ___
>>> BackupPC-users mailing list
>>> BackupPC-users@lists.sourceforge.net
>>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>>> Wiki:http://backuppc.wiki.sourceforge.net
>>> Project: http://backuppc.sourceforge.net/
>>>
>>
>>
>>
>> --
>> Time flies like an arrow, but fruit flies like a banana.
>>
>> 
>> ___
>> BackupPC-users mailing list
>> BackupPC-users@lists.sourceforge.net
>> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>> Wiki:http://backuppc.wiki.sourceforge.net
>> Project: http://backuppc.sourceforge.net/
>>

Re: [BackupPC-users] BackupFilesExclude issue

2017-05-12 Thread Ray Frush
I’m seeing this issue too!

I noticed that with 4.1.2, if I use the WebUI (CGI) to edit a host config
that has an existing $Conf{BackupFilesExclude} set, under the ’xfer’ tab,
there’s a checkmark for the Override of “BackupFilesExclude”, but the CGI
does not show the list of excludes.

Saving the config leaves an empty set:

$Conf{BackupFilesExclude} = {};

or (on a second test)

$Conf{BackupFilesExclude} = {
  '*' => []
};


But the LOG reports, incorrectly:

2017-05-12 10:44:30 backuppc changed BackupFilesExclude in
host grover config to {} from {'*' =>
['/proc','/sys','/mnt','/net','/dev','/var/log/lastlog','/var/run','/ais02','/backup','/export','/software','/oracledba','/syswork’]}
or
2017-05-12 10:52:00 backuppc changed BackupFilesExclude in
host grover config to {'*' => []} from {'*' =>
['/proc','/sys','/mnt','/net','/dev','/var/log/lastlog','/var/run','/ais02','/backup','/export','/software','/oracledba','/syswork']}



Please let me know if there's any additional information we can provide.


--
Ray Frush

On Thu, May 11, 2017 at 5:38 PM, Moorcroft, Mark (ARC-TS)[Analytical
Mechanics Associates, INC.] <mark.moorcr...@nasa.gov> wrote:

> Using the 4.1.2 COPR on CentOS7. If I set the BackupFilesExclude in
> config.pl it will be overwritten the second I change any other setting.
> If I change the setting in the GUI it saves, but reverts to empty the next
> time any other setting is saved.
>
>
> 2017-05-11 16:29:05 tsadmin changed MaxOldPerPCLogFiles in main config to
> '12' from '10'
> 2017-05-11 16:29:05 Reloading config/host files via CGI request
> 2017-05-11 16:29:05 Next wakeup is 2017-05-11 17:00:00
> 2017-05-11 16:35:20 tsadmin changed BackupFilesExclude in main config to
> {'*' => ['lastlog']} from {'*' => []}
> 2017-05-11 16:35:20 Reloading config/host files via CGI request
> 2017-05-11 16:35:20 Next wakeup is 2017-05-11 17:00:00
> 2017-05-11 16:35:39 tsadmin changed BackupFilesExclude in main config to
> {'*' => []} from {'*' => ['lastlog']}
> 2017-05-11 16:35:39 tsadmin changed MaxOldPerPCLogFiles in main config to
> '10' from '12'
> 2017-05-11 16:35:39 Reloading config/host files via CGI request
> 2017-05-11 16:35:39 Next wakeup is 2017-05-11 17:00:00
>
>
> Ideas?
>
> --
>
> Contractor: AMA Incorporated
> Mark Moorcroft – Lead Admin
> ESTRAD Linux Computer Support
> NASA Ames Research Center
> Moffett Field, CA 94035-1000
> N230/108 (MS N230-1)
>
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does BackupPC 4.1.1 calculate Pools size?

2017-05-05 Thread Ray Frush
I spotted two files stored in the cpool totaling 23.5GB, which is about the
same as the discrepancy between the pool size that BackupPC reports and
what I'm counting on the file system.

...
drwxr-x---. 130 backuppc backuppc 8.0K May  4 01:00 18
-rw-rw.   1 backuppc backuppc  16G Apr 27 14:38 19524.1398.0
-rw-rw.   1 backuppc backuppc 7.5G Apr 27 14:44 19524.1399.0
drwxr-x---. 130 backuppc backuppc 8.0K May  4 01:00 1a
drwxr-x---. 130 backuppc backuppc 8.0K May  4 01:00 1c
...

Can you help me understand what these files are for?
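
For anyone hunting for similar stray files, a quick sketch (adjust the path
to wherever your TopDir lives):

# List regular files larger than 1GiB sitting directly under the cpool root.
find /var/lib/BackupPC/cpool -maxdepth 1 -type f -size +1G -ls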



-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does BackupPC 4.1.1 calculate Pools size?

2017-05-04 Thread Ray Frush
Craig-
I set "$Conf{PoolSizeNightlyUpdatePeriod} = 1;" yesterday, and really
didn't see a change in the size discrepancy between what BackupPC reports
and the file system still remains, after letting it run with the updated
config for two nights.  The estimate from my last message grew from 26GiB
to 29GiB, while the actual space used on the disk grew only 1GB.   Is is
possible that the process is still using the default value of 16 anyway?
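
One quick sanity check is to confirm what is actually on disk and then make
sure the server re-reads it (a sketch; the config path varies by install):

# Verify the setting, then ask the daemon to reload its config.
grep PoolSizeNightlyUpdatePeriod /etc/BackupPC/config.pl
sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_serverMesg server reload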


The WebUI shows: "Pool is 29.08GiB comprising 779885 files and 16512
directories (as of 5/4 01:00),"

And a check of the base file system shows:
# du -bhs *
52G cpool
2.2G pc
4.0K pool


My log output from last night suggests that the Nightly job really didn't
take very long, and I question if it had time to scan the pool for actual
size used:

2017-05-04 01:00:00 24hr disk usage: 5% max, 5% recent, 0 skipped hosts
2017-05-04 01:00:00 Aging LOG files, LOG -> LOG.0 -> LOG.1 -> ... -> LOG.13
...
2017-05-04 01:00:00 Running 2 BackupPC_nightly jobs from 0..15 (out of
0..15)
2017-05-04 01:00:00 Running BackupPC_nightly -m -P 6 0 127 (pid=21588)
2017-05-04 01:00:00 Running BackupPC_nightly -P 6 128 255 (pid=21589)
2017-05-04 01:00:00 Next wakeup is 2017-05-04 02:00:00
2017-05-04 01:00:01 BackupPC_nightly now running BackupPC_refCountUpdate -m
-s -c -P 6 -r 0-127
2017-05-04 01:00:01 BackupPC_nightly now running BackupPC_refCountUpdate -m
-s -c -P 6 -r 128-255
2017-05-04 01:00:01  admin1 : __bpc_pidStart__ 21604
2017-05-04 01:00:01  admin : __bpc_pidStart__ 21603
2017-05-04 01:00:36  admin : __bpc_pidEnd__ 21603
2017-05-04 01:00:36 BackupPC_nightly now running BackupPC_sendEmail
2017-05-04 01:00:37  admin1 : __bpc_pidEnd__ 21604
2017-05-04 01:00:37 Finished  admin1  (BackupPC_nightly -P 6 128 255)
2017-05-04 01:00:40 Finished  admin  (BackupPC_nightly -m -P 6 0 127)
2017-05-04 01:00:40 Pool nightly clean removed 0 files of size 0.00GB
2017-05-04 01:00:40 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max
links), 0 directories
2017-05-04 01:00:40 Cpool nightly clean removed 0 files of size 0.00GB
2017-05-04 01:00:40 Cpool is 0.00GB, 0 files (0 repeated, 0 max chain, 0
max links), 0 directories
2017-05-04 01:00:40 Pool4 nightly clean removed 0 files of size 0.00GB
2017-05-04 01:00:40 Pool4 is 0.00GB, 0 files (0 repeated, 0 max chain, 0
max links), 0 directories
2017-05-04 01:00:40 Cpool4 nightly clean removed 603 files of size 0.06GB
2017-05-04 01:00:40 Cpool4 is 29.77GB, 779885 files (0 repeated, 0 max
chain, 4334 max links), 16512 directories
2017-05-04 01:00:40 Running BackupPC_rrdUpdate (pid=21628)
2017-05-04 01:00:43  admin-1 : 2017-05-04 01:00:43 RRD updated: date
1493942400; cpoolKb 0.00; total 719841210.985352; poolKb 0.00;
pool4Kb 0.00; cpool4Kb 30488312.00
2017-05-04 01:00:45 Finished  admin-1  (BackupPC_rrdUpdate)


Thanks again for taking a look at this.



On Mon, May 1, 2017 at 10:50 PM, Craig Barratt
<cbarratt@users.sourceforge.net> wrote:

> The nightly pool check (BackupPC_nightly) only traverses a portion of the
> pool each night.  See $Conf{PoolSizeNightlyUpdatePeriod}.  The default is
> 16, meaning it takes 16 nightly runs to get through the whole pool.
>
> It looks like your installation is quite small, so you could
> set $Conf{PoolSizeNightlyUpdatePeriod} to 1.
>
> Craig
>
> On Mon, May 1, 2017 at 3:48 PM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> My instance of Backuppc 4.1.1 reports:
>>
>> "Pool is 26.56GiB comprising 764047 files and 16512 directories (as of
>> 5/1 01:00)"
>>
>> However, when I check the file system, I get 'slightly' different numbers:
>>
>> # du -bhs *
>> 50G cpool
>> 2.1G pc
>> 4.0K pool
>>
>>
>> 26GiB  vs ~52GB.
>>
>> The target file system is an NFS file system.   Is there something I
>> should be doing different to get a more accurate report of the pool size?
>> How does BackupPC calculate the pool size?   (I'm trying to grok the source
>> code, but haven't found the method yet.)
>>
>>
>> Thanks
>>
>>
>> --
>> Ray Frush
>> Colorado State University
>>
>>

-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] How does BackupPC 4.1.1 calculate Pools size?

2017-05-01 Thread Ray Frush
My instance of Backuppc 4.1.1 reports:

"Pool is 26.56GiB comprising 764047 files and 16512 directories (as of 5/1
01:00)"

However, when I check the file system, I get 'slightly' different numbers:

# du -bhs *
50G cpool
2.1G pc
4.0K pool


26GiB  vs ~52GB.

The target file system is an NFS file system.   Is there something I should
be doing differently to get a more accurate report of the pool size?   How
does BackupPC calculate the pool size?   (I'm trying to grok the source
code, but haven't found the method yet.)


Thanks


-- 
Ray Frush
Colorado State University
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] SELinux for v4.1.1

2017-05-01 Thread Ray Frush
I believe the install documentation acknowledges that BackupPC isn't
SELinux aware, and advises you to disable SELinux on the server you're
using to run BackupPC.

An interesting project would be to create a backuppc module for SELinux.
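
As a rough starting point (an untested sketch; assumes the standard
policycoreutils tools are installed), one could run a backup cycle with
SELinux permissive and turn the logged denials into a local module:

# Generate and load a local policy module from the recorded AVC denials.
sudo grep backuppc /var/log/audit/audit.log | audit2allow -M backuppc_local
sudo semodule -i backuppc_local.pp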


On Fri, Apr 28, 2017 at 9:06 PM, Kenneth Porter 
wrote:

> I was able to successfully complete an rsyncd backup but it doesn't show in
> the web interface unless I set SELinux to Permissive. Any hints on what to
> tweak to let Apache read /var/lib/BackupPC?
>
> Here's a directory listing to show how the host's files are labeled:
>
> drwxr-x---. backuppc backuppc system_u:object_r:unlabeled_t:s0 .
> drwxr-x---. backuppc root unconfined_u:object_r:unlabeled_t:s0 ..
> drwxr-x---. backuppc backuppc unconfined_u:object_r:unlabeled_t:s0 0
> drwxr-x---. backuppc backuppc system_u:object_r:unlabeled_t:s0 1
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 LOCK
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 LOG.042017
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 XferLOG.0.z
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 XferLOG.1.z
> -rw-r-. backuppc backuppc unconfined_u:object_r:unlabeled_t:s0
> XferLOG.bad.z.old
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 backups
> -rw-r-. backuppc backuppc system_u:object_r:unlabeled_t:s0 backups.old
> drwxr-x---. backuppc backuppc system_u:object_r:unlabeled_t:s0 refCnt
>
>
> ---
> This email has been checked for viruses by Avast antivirus software.
> https://www.avast.com/antivirus
>
>
> 
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>



-- 
Time flies like an arrow, but fruit flies like a banana.
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4.1.1 Restore Issue

2017-05-01 Thread Ray Frush
Craig-

Thank you.  I appreciate you taking a look at this.

On Sun, Apr 30, 2017 at 4:26 PM, Craig Barratt <
cbarr...@users.sourceforge.net> wrote:

> Ray,
>
> This is now fixed in master (248f192).
>
> It was getting too aggressive stripping common portions of the remote
> directory path off, and it was empty (instead of "/") in the case you
> mentioned, causing the restore to the home directory, not /.
>
> Craig
>
>
> On Mon, Apr 24, 2017 at 9:20 AM, Ray Frush <fr...@rams.colostate.edu>
> wrote:
>
>> I’ve been away from BackupPC for a few years, and have just installed
>> 4.1.1 as a proposed solution in our environment.  My previous experience
>> has been with 3.x versions of BackupPC, which I’ve had great experiences
>> with.
>>
>> My question/observation is this:
>>
>> In my Test Case, I deleted ‘/opt' from my Linux system that is being
>> backed up by BackupPC.
>>
>> Pass 1:
>> In my first attempt to restore ‘/opt’,  I navigated to the most recent
>> backup and selected the checkbox next to the folder “opt” and hit “Restore
>> selected files”.
>> The next screen, I kept the default host, share and dir  (testserver, /,
>> /) and hit “Start Restore”.  The final confirmation screen showed:
>>
>> You are about to start a restore directly to the machine testserver. The
>> following files will be restored to share /, from backup number 3:
>> Original file/dir Will be restored to
>> testserver://opt testserver://opt
>>
>> However, the files were restored into "testserver:/root/opt”.   Not
>> exactly what I was expecting.   I did this test twice to be sure.
>>
>>
>> Pass 2:
>> In my next attempt, instead of selecting “/opt” from the top level of the
>> restore screen, I navigated into the "/opt” directory which showed a list
>> of all the files/directories in /opt, and then selected “Select all” and
>> proceeded with the restore.   Once again, I navigated through the next
>> screen keeping the defaults, and the final confirmation screen again showed:
>>
>> You are about to start a restore directly to the machine testserver. The
>> following files will be restored to share /, from backup number 3:
>> Original file/dir Will be restored to
>> testserver://opt testserver://opt/
>>
>> In this example, the files were flawlessly restored to the correct
>> location, ’testserver:/opt’.
>>
>>
>> I’m wondering if anyone else has observed this behavior, or can suggest
>> what I might be doing incorrectly to get the unexpected result in my first
>> test case.   Otherwise it sounds like I may have hit a bug.
>>
>> Thanks.

[BackupPC-users] BackupPC 4.1.1 Restore Issue

2017-04-24 Thread Ray Frush
I’ve been away from BackupPC for a few years, and have just installed 4.1.1
as a proposed solution in our environment.  My previous experience has been
with 3.x versions of BackupPC, which I’ve had great experiences with.

My question/observation is this:

In my Test Case, I deleted ‘/opt' from my Linux system that is being backed
up by BackupPC.

Pass 1:
In my first attempt to restore ‘/opt’,  I navigated to the most recent
backup and selected the checkbox next to the folder “opt” and hit “Restore
selected files”.
On the next screen, I kept the default host, share and dir (testserver, /, /)
and hit “Start Restore”.  The final confirmation screen showed:

You are about to start a restore directly to the machine testserver. The
following files will be restored to share /, from backup number 3:
Original file/dir Will be restored to
testserver://opt testserver://opt

However, the files were restored into "testserver:/root/opt”.   Not exactly
what I was expecting.   I did this test twice to be sure.


Pass 2:
In my next attempt, instead of selecting “/opt” from the top level of the
restore screen, I navigated into the "/opt” directory which showed a list
of all the files/directories in /opt, and then selected “Select all” and
proceeded with the restore.   Once again, I navigated through the next
screen keeping the defaults, and the final confirmation screen again showed:

You are about to start a restore directly to the machine testserver. The
following files will be restored to share /, from backup number 3:
Original file/dir Will be restored to
testserver://opt testserver://opt/

In this example, the files were flawlessly restored to the correct
location, ’testserver:/opt’.


I’m wondering if anyone else has observed this behavior, or can suggest
what I might be doing incorrectly to get the unexpected result in my first
test case.   Otherwise it sounds like I may have hit a bug.

Thanks.

--
Ray Frush "Either you are part of the solution
T:970.491.5527 or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up macs

2013-12-06 Thread Ray Frush
Here's some notes from our internal Wiki for enabling BackupPC.  With this
method, there's no reason that you couldn't backup the entire system.


Enable ssh access
 System Preferences/Sharing
   Check - Remote Login

Create and populate /var/root/.ssh/authorized_keys
 Applications/Utilities/Terminal
   sudo su -
   mkdir .ssh
   chmod 700 .ssh
   echo '[backuppc user public key]' >> .ssh/authorized_keys
   chmod 600 .ssh/authorized_keys
   exit


Note the backuppc user's public key looks something like:

ssh-rsa B3NzaC1yc... [a couple lines of hash]  ISXXYosqZQ==
backuppc@server
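
A quick way to verify the key from the BackupPC server afterwards (a sketch;
'mac-client' is a placeholder hostname) -- it should print 'root' without
prompting for a password:

sudo -u backuppc ssh -l root mac-client whoami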

Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies | APD Technical Computing | IT Engineer


On Fri, Dec 6, 2013 at 7:45 AM, Michael Stowe
mst...@chicago.us.mensa.org wrote:


  Thanks Ray and Kris for the advice. I checked and found I’ve got 3GB of
  memory in my BackupPC Server, so it seems that it should be sufficient.
 
  What user context does BackupPC connect to the machine under in order to
  get it to back up properly? Do you enable the root user on the Mac? Or do
  you create an administrative user of some sort on the macintosh and run
  rsync as that user?
 
  Justin Best
  360.513.1489

 I enable the root user and back them up just like any [other] *nix system.
  I only bother excluding temporary files, and I've never had a problem.


 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up macs

2013-12-04 Thread Ray Frush
Like Kris, we back up a number of MacBooks here using rsync via ssh, and
have never had an issue.

Also like Kris, we only backup /Users which limits what we're backing up.
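
In config terms that is just the share list in the per-host config file (a
sketch of that kind of setup, not our exact file):

# Per-host config.pl override: only back up home directories.
$Conf{RsyncShareName} = ['/Users'];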

Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies | APD Technical Computing | IT Engineer


On Wed, Dec 4, 2013 at 10:48 AM, Justin Best jus...@mjbest.com wrote:

 I’m a user of BackupPC from way back (2005-ish). Love the product (thanks
 Craig!!) and I”m looking forward to hearing any experiences with 4.0

 I’ve got a couple sites with about 10-15 Macintosh machines (OS X 10.6,
 10.7, 10.8, and 10.9) that I’m considering using BackupPC at. Does anybody
 else use BackupPC for backing up OSX machines? I’ve tried testing with
 rsync, but it seemed to simply hang when I tried to back up the entire
 folder structure (beginning with /). rsync also choked when I tried to
 backup my home directory in its entirety. It (rsync) seems only to work
 when I backup a small subset of data (I can successfully backup
 /Users/Justin/Desktop without any problems)

 I’ve seen some suggestions that say “use xtar”, but don’t explain how to
 do this. I’ve tried installing xtar from helios.de, assuming that’s what
 others are referring to, but attempts to back up (choosing xfer method
 “tar”, setting the command to /usr/bin/xtar) and using this result in
 errors:

 Contents of file /var/lib/backuppc/pc/
 justins-air.nwtechs.com/XferLOG.bad.z, modified 2013-12-04 09:47:29

 

 Running: /usr/bin/ssh -q -x -n -l root justins-air.nwtechs.com env LC_ALL=C 
 /usr/bin/xtar -c -v -f - -C /Users/Justin/Desktop --totals .
 full backup started for directory /Users/Justin/Desktop
 Xfer PIDs are now 1317,1316
 tarExtract: Use of qw(...) as parentheses is deprecated at 
 /usr/share/backuppc/lib/BackupPC/Storage/Text.pm line 302.
 tarExtract: Use of qw(...) as parentheses is deprecated at 
 /usr/share/backuppc/lib/BackupPC/Lib.pm line 1425.
 Tar exited with error 65280 () status
 tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 
 filesTotal, 0 sizeTotal
 Got fatal error during xfer (No files dumped for share /Users/Justin/Desktop)
 Backup aborted (No files dumped for share /Users/Justin/Desktop)
 Not saving this as a partial backup since it has fewer files than the prior 
 one (got 0 and 0 files versus 0)

 

 Anyone have advice for me? Should I give up and use a commercial tool like 
 Retrospect for the macs, or should I press on?


 Justin Best
 360.513.1489



 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync on Windows 8?

2013-08-01 Thread Ray Frush
The BackupPC cygwin-rsyncd / 3.0.9.0 that I helped to contribute is only
missing the  cygintl-8.dll file.  I don't have Windows 8 to test with, but
without the file we've had no issues on Windows 7.

cygwin-rsyncd (3.0.9.0) also has the same behavior with regard to requiring
/cygwin/c/path in the config file.

Mark-

Have you tried the cygwin-rsyncd package from
http://sourceforge.net/projects/backuppc/files/cygwin-rsyncd/3.0.9.0 ?   It
would be good to know if the package needs to be tweaked to support Windows
8.
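
For anyone editing by hand, a module stanza under that convention would look
roughly like this (a sketch; the module name, path, and user are examples,
not the package defaults):

[cDrive]
    path = /cygwin/c
    comment = Whole C: drive
    auth users = backupadmin
    secrets file = /cygwin/c/rsyncd/rsyncd.secrets
    read only = true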


On Thu, Aug 1, 2013 at 12:54 PM, Mark Campbell mcampb...@emediatrade.com wrote:

 So I figured out the problem!  Apparently, by adding in the new
 version/additional .dlls, C:/path is no longer recognized in
 rsyncd.conf.  It needs to be /cygwin/c/path.  So, bearing that in mind,
 the list of files below, which were pulled from cygwin as of yesterday, are
 confirmed to work on Windows 8, Windows 7, and Windows XP (I would assume
 that the server variants would work as well too):

 · cygiconv-2.dll
 · cygintl-8.dll
 · cygpopt-0.dll
 · cygrunsrv.exe
 · cygwin1.dll
 · rsync.exe
 · rsyncd.conf

 Thanks,

 --Mark

 From: Craig Barratt [mailto:cbarr...@users.sourceforge.net]
 Sent: Thursday, August 01, 2013 1:01 AM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] rsync on Windows 8?

 Sorry I can't directly help (since I don't have a Win8 machine).  But if
 someone can figure out the right set of cygwin files and the right recipe -
 ideally one that works across Win8, Win7 etc - then I would be happy to
 update the recently released cygwin-rsyncd package (which was helpfully
 provided by Ray Frush).

 Craig

 On Wed, Jul 31, 2013 at 7:14 AM, Mark Campbell mcampb...@emediatrade.com
 wrote:

 Apparently, my issue is not as solved as hoped.  The service does start up
 fine now, and doing an rsync --list-only to another machine lists the
 contents of the module on the remote machine just fine, but when trying to
 perform a backup of this machine with BackupPC (and even when trying to do
 an rsync --list-only from another machine to it), it generates the error:

 [root@emtbackup2 /]# rsync --list-only
 rsync://backupadmin@win8testvm/users
 Password:
 @ERROR: chdir failed
 rsync error: error starting client-server protocol (code 5) at
 main.c(1503) [receiver=3.0.6]

 Online research suggests that this may be a permissions issue, but I've
 tried running the service both as the Local System Account, and as
 Administrator, with no variations on the error.  Any suggestions on this?

 Thanks,

 --Mark



 -Original Message-
 From: Mark Campbell [mailto:mcampb...@emediatrade.com]
 Sent: Tuesday, July 30, 2013 1:59 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] rsync on Windows 8?

 Andrew,

 I did fail to mention that once I decided to grab additional .dlls from
 the install, that I copied over the corresponding updates to the original
 files mentioned, to no avail.

 But your idea inspired me to try running the rsync that I installed in the
 rsyncd directory in an administrator command line, and lo and behold, it
 gave me an error message with a new missing .dll to get!  Once I got that
 one and dropped it in the rsyncd folder, I tried running it on the command
 line again, and it gave me the rsync help menu.  Once that worked, I tried
 starting the service again, and it succeeded!  Woo hoo!  :)

 For the record (for anyone else that might be doing the same that I am
 doing), these are all the files required by rsync to run as a service in
 windows 8:
 -cygiconv-2.dll
 -cygintl-8.dll
 -cygpopt-0.dll
 -cygrunsrv.exe
 -cygwin1.dll
 -rsync.exe
 -rsyncd.conf



 Thanks,

 --Mark


 -Original Message-
 From: Andrew Schulman [mailto:and...@alumni.utexas.net]
 Sent: Tuesday, July 30, 2013 1:17 PM
 To: backuppc-users@lists.sourceforge.net
 Subject: Re: [BackupPC-users] rsync on Windows 8?

  So I've been successfully backing up Windows XP, Vista  7 machines to
 my BackupPC instance using Cygwin1.dll, Cygrunsrv.exe,  rsync.exe as a
 windows service.  Then came a Windows 8 laptop to my work.  I tried to
 install rsync on it like I've done with all of the other Windows flavors,
 but it failed.  The files are located where they should be, and the service
 gets registered, but when it tries to start the service, it fails.  Event
 log throws up entries like rsyncd: PID 3092: `rsyncd' service stopped,
 exit status: 127.  Cygwin itself gave some slightly more verbose errors in
 C:\var\log\rsyncd.log:
 
  /rsyncd/rsync.exe: error while loading shared libraries:
  cygpopt-0.dll: cannot open shared object file:  No such file or
  directory
  /rsyncd/rsync.exe: error while loading shared

Re: [BackupPC-users] rsync on Windows 8?

2013-08-01 Thread Ray Frush
Thanks for the followup.

I'll work with Craig to get an updated rsync package released that has the
missing DLL for Windows 8.  May I send you a test case or two in the
meantime to verify we're working in the right direction?




On Thu, Aug 1, 2013 at 1:30 PM, Mark Campbell mcampb...@emediatrade.com wrote:

 Ray,

 I can confirm that Windows 8 is NOT happy without cygintl-8.dll.  When
 trying to start the service without this dll, it spits out the following
 error in C:\var\log\rsyncd.log:

 /rsyncd/rsync.exe: error while loading shared libraries:
 ?: cannot open shared object file:  No such file or directory

 And yes, it had just the ?, so I was utterly confused as to how to
 troubleshoot that, when Andrew inspired me to try running rsync directly
 from the command line.  That was where it told me that it was missing
 cygintl-8.dll.

 Thanks,

 --Mark

 From: Ray Frush [mailto:ray.fr...@avagotech.com]
 Sent: Thursday, August 01, 2013 3:14 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] rsync on Windows 8?

 The BackupPC cygwin-rsyncd / 3.0.9.0 that I helped to contribute is only
 missing the cygintl-8.dll file.  I don't have Windows 8 to test with, but
 without the file we've had no issues on Windows 7.

 cygwin-rsyncd (3.0.9.0) also has the same behavior with regard to requiring
 /cygwin/c/path in the config file.

 Mark-

 Have you tried the cygwin-rsyncd package from
 http://sourceforge.net/projects/backuppc/files/cygwin-rsyncd/3.0.9.0 ?
 It would be good to know if the package needs to be tweaked to support
 Windows 8.

 On Thu, Aug 1, 2013 at 12:54 PM, Mark Campbell mcampb...@emediatrade.com
 wrote:

 So I figured out the problem!  Apparently, by adding in the new
 version/additional .dlls, C:/path is no longer recognized in
 rsyncd.conf.  It needs to be /cygwin/c/path.  So, bearing that in mind,
 the list of files below, which were pulled from cygwin as of yesterday, are
 confirmed to work on Windows 8, Windows 7, and Windows XP (I would assume
 that the server variants would work as well too):

 · cygiconv-2.dll

 · cygintl-8.dll

 · cygpopt-0.dll

 · cygrunsrv.exe

 · cygwin1.dll

 · rsync.exe

 · rsyncd.conf

  

 Thanks,

  

 --Mark

  

 From: Craig Barratt [mailto:cbarr...@users.sourceforge.net]
 Sent: Thursday, August 01, 2013 1:01 AM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] rsync on Windows 8?

  

 Sorry I can't directly help (since I don't have a Win8 machine).  But if
 someone can figure out the right set of cygwin files and the right recipe -
 ideally one that works across Win8, Win7 etc - then I would be happy to
 update the recently released cygwin-rsyncd package (which was helpfully
 provided by Ray Frush).

  

 Craig

  

 On Wed, Jul 31, 2013 at 7:14 AM, Mark Campbell mcampb...@emediatrade.com
 wrote:

 Apparently, my issue is not as solved as hoped.  The service does start up
 fine now, and doing an rsync --list-only to another machine lists the
 contents of the module on the remote machine just fine, but when trying to
 perform a backup of this machine with BackupPC (and even when trying to do
 an rsync --list-only from another machine to it), it generates the error:

 [root@emtbackup2 /]# rsync --list-only
 rsync://backupadmin@win8testvm/users
 Password:
 @ERROR: chdir failed
 rsync error: error starting client-server protocol (code 5) at
 main.c(1503) [receiver=3.0.6]

 Online research suggests that this may be a permissions issue, but I've
 tried running the service both as the Local System Account, and as
 Administrator, with no variations on the error.  Any suggestions on this?

 Thanks,

 --Mark



 -Original Message-
 From: Mark Campbell [mailto:mcampb...@emediatrade.com]
 Sent: Tuesday, July 30, 2013 1:59 PM
 To: General list for user discussion, questions and support
 Subject: Re: [BackupPC-users] rsync on Windows 8?

 Andrew,

 I did fail to mention that once I decided to grab additional .dlls from
 the install, that I copied over the corresponding updates to the original
 files mentioned, to no avail.

 But your idea inspired me to try running the rsync that I installed in the
 rsyncd directory in an administrator command line, and lo and behold, it
 gave me an error message with a new missing .dll to get!  Once I got that
 one and dropped it in the rsyncd folder, I tried running it on the command
 line again, and it gave me the rsync help menu.  Once that worked, I tried
 starting the service again, and it succeeded!  Woo hoo!  :)

 For the record (for anyone else that might be doing the same that I am

Re: [BackupPC-users] cygwin-rsyncd outdated No recent version ?

2013-07-09 Thread Ray Frush
I have added this task to my list of things to do.  I'll try to get
something posted to this list within a week that other folks can use.


On Tue, Jul 9, 2013 at 1:21 PM, Richard Zimmerman 
rzimmer...@riverbendhose.com wrote:

 Yes, there is interest from this corner of the world…


 Many thanks,





-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Challenges with backups

2013-07-02 Thread Ray Frush
Gregory-

From your attached documents, the issue you're having is not clear.  I see
most of your smaller hosts are being backed up fine.  You have some larger
hosts that are taking a while because of their size. (It looks like they
took 11+ hours to backup).   Can you be more specific about the issues you
are facing?


One issue you may be having is if those large file servers have a LARGE
number of small files.  The BackupPC server's disks may be causing a
bottleneck during incremental backups.  There has been a lot of discussion
about disk speed on the mailing list in the last month or two, so you may
find some ideas in the archives.



On Tue, Jul 2, 2013 at 5:50 AM, Gregory Malsack gmals...@coastalacq.com wrote:

 **
 Hello All,

 I've become somewhat frustrated with BackupPC as it's not maintaining a
 valid set of backups, and I guess I'm not clear where to start. So I am
 coming to you for some assistance. Attached are various portions of data
 that should provide you with a very clear view of what I am attempting to
 accomplish.

 If you would be so kind, please review the information and respond with
 your thoughts/questions...


-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] New backup server

2013-05-24 Thread Ray Frush
I have found that the limiting speed for backups with lots of files is how
fast the backup server can walk the BackupPC pool, so going with faster
disks will help some.   You will probably find that 4x 500GB drives in a
RAID 10 will give you more throughput, but at more cost.
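
A crude way to compare candidate disk setups is to time a cold-cache walk of
a large file tree, such as an existing pool (a sketch; dropping the page
cache needs root):

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
time find /var/lib/BackupPC/cpool -type f | wc -l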

Your ML150 has 8 slots, so you've got a lot of options.  Do you know which
controller you have?



On Fri, May 24, 2013 at 10:15 AM, Erik Hjertén erik.hjer...@companion.se wrote:

  Hi all

 I have invested in a used HP Proliant ML150 G5 server as a new backup
 server. I have about 500 GB of data in 40 000 files spread over 8 clients
 to backup. Data doesn't grow fast so I'm aiming at two 1TB disks in a raid
 1 configuration.

 Do I go with more expensive, but faster (and more reliable?), SAS-disks.
 Or is cheaper, but slower, S-ATA disks sufficient? I'm guessing that disk
 speed will be the bottle neck in performance?

 Your thoughts on this would be appreciated.
 /Erik






 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Get the list of changed files for a backup?

2012-10-29 Thread Ray Frush
On Mon, Oct 29, 2012 at 8:29 AM, Johan Wilfer li...@jttech.se wrote:


 The XferLOG's are quite verbose and lists every file even if it's the
 same so it isn't very useful.


cd /var/lib/BackupPC/pc/host
/usr/share/BackupPC/bin/BackupPC_zcat XferLOG.103.z | grep create   


This will spit out all the new files that were transferred for that host
for that backup.   Wrapping it in a for loop to report on all your systems
is a separate exercise.
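A minimal sketch of such a loop (not from the original post; it assumes the
stock /var/lib/BackupPC layout and picks each host's newest XferLOG by
version sort):

#!/bin/bash
# For every host, decompress its newest transfer log and print the
# files that were actually transferred (the "create" lines).
for host in /var/lib/BackupPC/pc/*; do
    log=$(ls -v "$host"/XferLOG.*.z 2>/dev/null | tail -1)
    [ -n "$log" ] || continue
    echo "== $(basename "$host") =="
    /usr/share/BackupPC/bin/BackupPC_zcat "$log" | grep create
done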


-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] stumped by ssh

2012-10-11 Thread Ray Frush
On Thu, Oct 11, 2012 at 12:34 AM, Robert E. Wooden rbrte...@comcast.net wrote:

 I ran admin@wdnbkup01:~$ sudo -u backuppc rsync -aP root@[myclientip]:/tmp
 /tmp/client

 [sudo] password for admin:
 receiving incremental file list
 created directory /tmp/client


I was hoping for some kind of informative failure, but we got none.  That
rules out the .ssh keys and any obvious misconfigurations.

When you run BackupPC_dump, part of what it runs is an rsync server on
the client to be backed up.   The rest of the script is a Perl-based rsync
receiver.

backuppc 32396 24278  2 09:31 ?00:00:00 /usr/bin/perl
/usr/share/BackupPC/bin/BackupPC_dump -i hostname
backuppc 32406 32396  0 09:31 ?00:00:00 /usr/bin/ssh -q -x -l root
hostname /usr/bin/rsync --server --sender --numeric-ids --perms --owner
--group -D --links --hard-links --times --block-size=2048 --recursive .


This is where my knowledge breaks down.  BackupPC_dump tries to talk to the
'rsync --server' process running on the client, but in your case it is
failing.  In our environment that's usually because the client disconnects
from the network before any data is transferred.  (We back up a lot of
laptops that wander around.)   You'll have to see if there's any evidence
that the client becomes unavailable after a backup is started.   Also check
that the 'rsync --server' process is actually getting started on the client.
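If it helps, two hedged checks along those lines ('hostname' stands in for
the client):

# From the BackupPC server, as the backuppc user: does the ssh leg work,
# and is rsync present on the client?
sudo -u backuppc ssh -q -x -l root hostname /usr/bin/rsync --version

# On the client, while a dump is in progress: is the server-side rsync
# actually running?  (The bracket trick keeps grep from matching itself.)
ps -ef | grep '[r]sync --server'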


-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] stumped by ssh

2012-10-10 Thread Ray Frush
What happens when you run

 sudo -u backuppc rsync -aP root@client:/tmp /tmp/client


On Wed, Oct 10, 2012 at 11:19 AM, Robert E. Wooden rbrte...@comcast.net wrote:

  I have been using Backuppc for a few years now. Recently I upgraded my
 machine to newer, faster hardware. Hence, I have experience exchanging ssh
 keys, etc.

 It seems I have one client that refuses to connect via ssh. When I
 exchanged keys and ran ssh -l root *[clientip]* whoami the client
 properly returns 'root'. When I sudo -u backuppc
 /usr/share/backuppc/BackupPC_dump -v -f *[clienthostname]* I get 'dump
 failed: Unable to read 4 bytes'.


-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] No more free inodes

2012-10-05 Thread Ray Frush
On Fri, Oct 5, 2012 at 6:42 AM, Les Mikesell lesmikes...@gmail.com wrote:

  I wonder what caused this. My BackupPC filesystem was created with default
  mkfs.ext4, and has used far more disk space than inodes:
 
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/md0        3.6T  1.6T  2.1T  43% /var
 
  Filesystem        Inodes   IUsed     IFree IUse% Mounted on
  /dev/md0       244195328 4966307 239229021    3% /var


Les-
We're using standard ext4 settings as well, and are doing just fine in
the inode department.  Our disk is smaller, so we've got just over 100
million inodes instead of your 244M.
I found it interesting that your system and mine are using about the
same number of inodes, even though you're using 2x the space.



# df -i /dev/backuppc
Filesystem        Inodes   IUsed    IFree IUse% Mounted on
/dev/backuppc  100663296 4212939 96450357    5% /var/lib/BackupPC
# df -h /dev/backuppc
Filesystem     Size  Used Avail Use% Mounted on
/dev/backuppc  1.5T  764G  672G  54% /var/lib/BackupPC

That's only 5514 inodes/GByte used.
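(That figure is simply used inodes divided by used gigabytes:)

# 4212939 used inodes / 764 GB used ~= 5514 inodes per GB
echo $(( 4212939 / 764 ))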

From Frédéric Massot's original message:

 The df command indicates that all inodes are used.

  # df -h
  Filesystem   Size  Used Avail Use% Mounted on
 /dev/mapper/vg01-lv_backup  918G  627G  246G  72% /var/lib/backuppc

 # df -i
 Filesystem  Inodes IUsed  IFree IUse% Mounted on
 /dev/mapper/vg01-lv_backup  60186624 60186624  0  100% /var/lib/backuppc

That's 65562 inodes/GB used.


His original file system was more closely sized to his demand for
space, but was still close to 1TB so he got only 60M inodes.
Compared to our environments, as you pointed out, he's got a huge
number of unique files that must be quite small.

Out of curiosity, I checked some of our primary storage, where we
have a mix of lots (over 1 billion) of really small files and some
large databases, and found we're using about 7 inodes/GB.

Frédéric's environment appears to have an unusually high file density,
so he could not reasonably have anticipated that his filesystem would
run out of inodes.
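Not from the thread, but relevant when rebuilding after this kind of
failure: ext4 fixes the inode count at mkfs time via the bytes-per-inode
ratio, so a high-density pool wants a lower ratio.  A sketch (the device
name is illustrative; the stock ratio is 16384):

# Measure density on an existing filesystem: used inodes vs. used GB.
df -i /var/lib/backuppc
df -BG /var/lib/backuppc

# When recreating the filesystem, lower bytes-per-inode to get roughly
# 4x the default inode count.  WARNING: mkfs destroys existing data.
mkfs.ext4 -i 4096 /dev/mapper/vg01-lv_backup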


--
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] No more free inodes

2012-10-05 Thread Ray Frush
I can't math today, I have the dumb...

On Fri, Oct 5, 2012 at 9:24 AM, Ray Frush ray.fr...@avagotech.com wrote:
 Out of curiosity, I checked some of our primary storage, where we
 have a mix of lots (over 1Billion) of really small files and some
 large databases, and found we're using about 7 inodes/GB

Our primary storage example has a total of over 1B inodes, which I
used in the calculation above.  We're only using 17% of them, or 1918
inodes/GB.  I read off the wrong column.


-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to prove backups for Audits

2012-09-25 Thread Ray Frush
On Tue, Sep 25, 2012 at 10:58 AM, Derek Belcher dbelc...@alertlogic.com wrote:

 I have been tasked with taking a screenshot for our auditors showing that
 we are keeping backups for a year.

 My current scheduling looks like this:
 FullKeepCnt = 1,0,1,0,0,1   one full week, one full month, one 64 weeks

 Is there a way to display the oldest backup with a time stamp, proving 52+
 weeks, in the command line or GUI?

 Thank you in advance,
 --Derek


This will point you in the right direction.

#!/bin/bash
# For each host, print when its oldest retained backup started.  The
# first line of the per-host "backups" file is the oldest backup;
# fields 3 and 4 are its start and end times in epoch seconds.
for i in `ls /var/lib/BackupPC/pc/`
do
  echo -n "$i   "
  read start end < <(head -1 /var/lib/BackupPC/pc/$i/backups | awk '{print $3, " ", $4}')
  echo $start | awk '{print strftime("Backup Started %c", $1)}'
  #echo "( $end - $start ) / 60" | bc   # uncomment for backup duration in minutes
done





 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Backing up Servers

2012-08-21 Thread Ray Frush
You need to exclude /proc from your backups.  It's a virtual file
system maintained by the kernel, and does not need to be backed up.


Here's the excludes we use for Linux hosts:

$Conf{BackupFilesExclude} = {
  '*' => [
'/dev',
'/proc',
'/tmp_mnt',
'/var/tmp',
'/tmp',
'/net',
'/var/lib/nfs',
'/sys'
  ]
};
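(Not part of the original reply, and paths vary by version and packaging;
a sketch of where that stanza lives:)

# Global config, applies to every host, e.g.:
#   /etc/BackupPC/config.pl   or   /etc/backuppc/config.pl
# Per-host override shadowing the global value, e.g. (3.x layout):
#   /etc/BackupPC/pc/<hostname>.pl
# Reload BackupPC after editing:
/etc/init.d/backuppc reload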




On Tue, Aug 21, 2012 at 9:19 AM, gshergill backuppc-fo...@backupcentral.com
 wrote:

 Hi Oliver,

  Just a random question, but do you back up linux machines?

 It keeps failing for me at the point where it tries the file;

 /proc/kcore

 At this point it sees it's in use and aborts the backup... any way around
 that which you know?

 Thanks again.

 Kind Regards,

 gshergill

 +--
 |This was sent by gasherg...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Backing up Servers

2012-08-21 Thread Ray Frush
Gshergill-

Please refer to the abundant documentation on this topic.  Here's a good
starting point, though the section on Linux excludes is just plain wrong.
The Windows excludes are quite good.


http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Common_backup_excludes

On Tue, Aug 21, 2012 at 9:48 AM, gshergill backuppc-fo...@backupcentral.com
 wrote:

 Hi Ray,

 =
 You need to exclude /proc from your backups.  It's a virtual file system
 maintained by the kernel, and does not need to be backed up.

 Here's the excludes we use for Linux hosts:
 ...
 =

 This sounds to me like a stupid question, but I assume I add that to a
 config file, rather than individually adding it to every host I create?

 Which configuration file do I add it to?

 I assume there's a way to do this for Windows too? It may be a solution to
 my problem.

 Thank you.

 Kind Regards,

 gshergill

 +--
 |This was sent by gasherg...@gmail.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




 ___
 BackupPC-users mailing list
 BackupPC-users@lists.sourceforge.net
 List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
 Wiki:http://backuppc.wiki.sourceforge.net
 Project: http://backuppc.sourceforge.net/




-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Backing up Servers

2012-08-20 Thread Ray Frush
A command line like:

/usr/share/BackupPC/bin/BackupPC_zcat XferLOG.0.z

Will get at these files.  Not sure why the developers didn't use standard
gzip format, but at least there's a tool to handle them.
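A hypothetical follow-on use of the same tool: scan the failed transfer
log for error strings without decompressing it to disk first.

/usr/share/BackupPC/bin/BackupPC_zcat XferLOG.bad.z.old | egrep -i 'error|denied|fail'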

On Mon, Aug 20, 2012 at 8:53 AM, gshergill backuppc-fo...@backupcentral.com
 wrote:

 In the directory for the Windows machine, the log files are:

 -rw-r- 1 backuppc backuppc  544 Aug 15 17:39 XferLOG.1.z
 -rw-r- 1 backuppc backuppc  567 Aug 16 17:00 XferLOG.2.z
 -rw-r- 1 backuppc backuppc  546 Aug 20 15:32 XferLOG.5.z
 -rw-r- 1 backuppc backuppc  570 Aug 20 15:33 XferLOG.6.z
 -rw-r- 1 backuppc backuppc  545 Aug 20 15:42 XferLOG.7.z
 -rw-r- 1 backuppc backuppc  456 Aug 20 14:00 XferLOG.bad.z.old

 Unable to open those files and paste the contents here as they all open
 with a collection of symbols.

 How should I proceed from here?




-- 
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error Backing up Servers

2012-08-20 Thread Ray Frush
On Mon, Aug 20, 2012 at 10:48 AM, Olivier Ragain orag...@chryzo.net wrote:

 PS: what is the rule on this group about post responding or pre
 responding to emails ^^ ?

Do what makes sense in the context of the discussion.   Avoid doing
both at the same time. ;-)




--
Ray Frush   Either you are part of the solution
T:970.288.6223   or part of the precipitate.
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
 Avago Technologies, Inc. | Technical Computing | IT Engineer

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/