[BackupPC-users] Wildcards on paths & filenames

2008-05-01 Thread Peter
I have set up BackupPC to back up my Windows XP Professional
shares. As I have several users on my machine, I want to make
sure that I back up each of their "My Documents" folders. I also
have a folder that can appear in many different sub-folders that
I do not want backed up. On top of that, I want to prevent
certain files from being backed up, for example *.tmp and *.log.
My first attempt was to use "\Documents and Settings\*\My
Documents"; however, I received the following error:
NT_STATUS_OBJECT_NAME_INVALID listing \Documents and Settings\*\My Documents
tar: dumped 15 files and directories

As I mentioned earlier, I have a folder, "Do not archive", that
may appear at different levels, e.g. "\data\do not archive" or
"\data\Peter\doc\do not archive". So, I would prefer to specify
something like "*\do not archive" to exclude all of these folders.
I also entered *.log and *.tmp in the exclude list under the key
"*"; however, a test file I created was still backed up. When I
instead placed *.log under the key for the share (that is, Zleo),
the incremental did not back up the file.
So, I originally had:
* => *.tmp
* => *.log
which did not work; replacing the "*" with the share name made it
work.
So, my questions are:
1) Is there a feature that lets me use wildcards on folders?
2) Does the exclude syntax require a key naming a specific share
rather than "*"?
3) The manual's section on excluding files contains this
statement: "If a hash is used, a special key ``*'' means it
applies to all shares that don't have a specific entry." What is
meant by "If a hash is used"?
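For what it's worth, the "hash" the manual refers to is the Perl hash form of $Conf{BackupFilesExclude} in config.pl. A sketch of how I understand it (the share name Zleo is from my setup, the patterns are illustrative, and whether the "*" key is honoured by every transfer method is exactly what I'm asking):

```perl
# Hash form of the exclude list: keys are share names, values are
# lists of patterns. The special key '*' is documented to apply to
# any share that has no entry of its own.
$Conf{BackupFilesExclude} = {
    'Zleo' => ['*.tmp', '*.log'],   # applies only to share Zleo
    '*'    => ['*.tmp'],            # documented fallback for other shares
};
```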
Thanks
Peter
-
This SF.net email is sponsored by the 2008 JavaOne(SM) Conference 
Don't miss this year's exciting event. There's still time to save $100. 
Use priority code J8TL2D2. 
http://ad.doubleclick.net/clk;198757673;13503038;p?http://java.sun.com/javaone___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Wildcards on paths & filenames

2008-05-02 Thread Peter
Hi Paul,

Thanks for the detailed response.

Hmm, sounds like a serious design question and one I may help to
influence (well, a little at least).

Firstly, I turned to BackupPC because I like that it does not
back up duplicate files and that simple configuration changes
don't force a complete re-backup of everything (something that
Bacula does). Plus, I thought the interface looked very user
friendly. So, I have liked everything I have seen and think it
works quite nicely.

As you mentioned, BackupPC needs to be able to deal with many
different protocols (I guess you mean for connecting to systems).
My original thought, when I configured the system and sent this
support request, was that entering *.tmp would exclude/include
that file in every directory (the whole share). But after reading
your email, I think it could mean just the root level. To be more
explicit: for the root level you would enter "/*.tmp"; for the
whole share (i.e. every directory and sub-directory) you would
enter "*/*.tmp". This would allow matching zero or more
characters, including path components, before the file in
question. So, from my original example: I have a directory, "Do
not archive", which normally appears as "/Data/Do not archive"
but can just as easily appear as "/Data/Peter/Docs/do not
archive". I could therefore enter the exclusion as "*/do not
archive".

This would also help with the Windows "My Documents" folder,
which appears for each user (on XP at least) as "/Documents and
Settings/Peter/My Documents"; you would therefore configure
"/Documents and Settings/*/My Documents".
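If this proposed wildcard syntax were adopted, the configuration might look something like the sketch below. This is hypothetical: current releases do not support wildcards here, and the share name Cleo is just my C drive share.

```perl
# Hypothetical config under the proposed wildcard syntax -- not
# supported by current BackupPC releases.
$Conf{BackupFilesOnly} = {
    'Cleo' => ['/Documents and Settings/*/My Documents'],
};
$Conf{BackupFilesExclude} = {
    'Cleo' => ['*/do not archive'],
};
```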

One question: what happens when a folder name is the same as a
file name? E.g., I have a folder "Do not archive" but also a file
"Do not archive" without any extension. Which one does "*/Do not
archive" match? This matters especially when I want to include
the file in the backup but exclude the folder.

I hope this answers your question. Let me know if you need more details.
If you feel it is reaching off topic to the mailing list, then feel free
to email me directly.

Ah, this time I remembered to check who the TO was going to :)

Regards
Peter

PS Thanks for the link to your site. I keep talking about finding out
about IRC and your blog has helped to reinforce that thought.


- Original message -
From: "Paul Mantz" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Cc: "BackupPC User List" 
Date: Thu, 1 May 2008 15:57:57 -0700
Subject: Re: [BackupPC-users] Wildcards on paths & filenames

[... cut ...]

Hi!
[... cut ...]

The problem with file inclusions and exclusions is that they are not
uniform at the moment.  Different protocols have different syntaxes
and implementations of file inclusion/exclusion.  At the moment, there
is no support for wildcards, since globbing in Perl (the syntax for
UNIX path specification) interfaces directly with the file system.
However, we are working to make a uniform schema for directory
specification that will work for as many protocols as possible, with
the inclusion of some more fluid syntax.  Implementing wildcards is a
tricky thing, since we assume that the root is the sharename, and
saying '*.tmp'  would be ambiguous as far as depth is concerned.
Would you just want to delete any file matching that pattern on the
root level, or for the whole share?  We are still working on this.

[... cut ...]



[BackupPC-users] Changing your login user

2008-05-05 Thread Peter
I logged into the web interface as the user of the client;
however, now I want to log in as the administrator of BackupPC
(i.e., user backuppc). But I do not have a logout button, and
when I try to start another session, or delete the session and
start another, I still end up logged in as the client user.
How do I change users?
Thanks
Peter


Re: [BackupPC-users] RE Changing your login user

2008-05-05 Thread Peter
Hi Romain
Yep, in Firefox, Tools > Clear Private Data, did the trick.
Thanks
Peter


- Original message -
From: [EMAIL PROTECTED]
To: "BackupPC User List" ,
[EMAIL PROTECTED]
Date: Mon, 5 May 2008 15:16:12 +0200
Subject: [BackupPC-users] RE  Changing your login user

Hi,

I think I've got a solution, though it may not be the best one (I
don't know).
Can you delete your authentication session in your browser?
For example, in Firefox: Tools / Erase my footsteps (I think).
Then, if you reload the page, you can enter another user/password
(if it's allowed).

Regards,

Romain



   


Re: [BackupPC-users] Changing your login user

2008-05-05 Thread Peter
Thanks for all the tips and suggestions. In Firefox you can clear
the login (for all sites) by selecting "Tools > Clear Private
Data". In IE, the only way I found to clear the login information
was to close the browser.

I've created a Tip & Trick on the wiki.

http://backuppc.wiki.sourceforge.net/Tips+and+Tricks

You will find it at the bottom.
# How to become another user (i.e., how do I log out of BackupPC)

I also created one on moving data, from my earlier question when
I had problems. Please correct them if you feel I have missed
something or wasn't clear in my descriptions.

Regards
Peter

- Original message -
From: "Adam Goryachev" <[EMAIL PROTECTED]>
To: "backuppc-users" 
Date: Tue, 06 May 2008 09:09:38 +1000
Subject: Re: [BackupPC-users] Changing your login user

Les Mikesell wrote:
> [EMAIL PROTECTED] wrote:
>> Yes, I agree with you. In my case it's not really necessary but I think
>> it's a good idea.
>> Will it be in BackupPC 3.1.1 ?
> 
> It can't be done when you use basic http authentication.  There is no 
> way to tell a browser to forget that you have logged in - it will always 
> continue to send the original credentials so you never get a prompt for 
> a new login. The only way to make it forget is to close the browser (all 
> windows) and restart it.

As someone else mentioned, it can be done by temporarily
rejecting the authentication for the same realm. I'm pretty sure
I've done this with perl scripts in the past; I just don't recall
all the details, since it was so long ago. I'm pretty sure you
can send/fake the auth-denied error the web server would normally
send.
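A minimal sketch of that trick as a CGI script: reply once with 401 for the same realm, so the browser throws away its cached Basic-auth credentials and prompts again. The realm name "BackupPC" is an assumption; it must match whatever realm your web server config actually uses.

```perl
#!/usr/bin/perl
# logout.cgi -- force the browser to forget Basic-auth credentials by
# replying 401 for the same realm the site normally authenticates under.
use strict;
use warnings;

print "Status: 401 Unauthorized\r\n";
print "WWW-Authenticate: Basic realm=\"BackupPC\"\r\n";  # realm is an assumption
print "Content-Type: text/html\r\n\r\n";
print "<html><body>Logged out. Reload the page to log in as a different user.</body></html>\n";
```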



Re: [BackupPC-users] Changing your login user

2008-05-07 Thread Peter
Hi Stephen,
Thanks for the information.

I tried entering the suggestion in the CGI interface but nothing
appears on the index. I replaced myserver.domain with my IP
address (figuring the simplest approach should work first). I
also changed https to http.
Do I need to restart something?
Thanks
Peter


- Original message -
From: "Stephen Joyce" <[EMAIL PROTECTED]>
To: "Peter" <[EMAIL PROTECTED]>
Cc: "BackupPC User List" 
Date: Tue, 6 May 2008 07:59:36 -0400 (EDT)
Subject: Re: [BackupPC-users] Changing your login user

You can add something like the following to the
$Conf{CgiNavBarLinks} area of your config.pl. This works in
Firefox; I haven't tested IE.

  {
    'link'  => 'https://nouser:[EMAIL PROTECTED]/cgi-bin/BackupPC_Admin',
    'lname' => undef,
    'name'  => 'Logout of BackupPC'
  },

Replace myserver.domain with your own, and change https to http
if you're not using SSL (though you should be). This edit can
also be made via the CGI: Edit Config -> CGI -> CgiNavBarLinks ->
Add.

Cheers, Stephen
--
Stephen Joyce
Systems Administrator
[... cut ...]



Re: [BackupPC-users] Western Digital Mybook drives

2008-05-16 Thread Peter
I haven't used it natively on Linux, only via a virtual
environment: a Linux guest OS on top of a Windows XP Professional
host. The drive is a 320 GB USB model.

In this respect, I have used it both over a Samba connection and
with some virtual disks placed on the drive.
Peter

- Original message -
From: "Chris Baker" <[EMAIL PROTECTED]>
To: backuppc-users@lists.sourceforge.net
Date: Fri, 16 May 2008 15:52:51 -0500
Subject: [BackupPC-users] Western Digital Mybook drives



Has anyone used the Western Digital My Book studio drives with Linux? Do
they work with Linux? Western Digital says that they don't support them.
 

Chris Baker -- [EMAIL PROTECTED]
systems administrator
Intera Inc. -- 512-425-2006


 


 



Re: [BackupPC-users] RsyncdUserName missing in client config

2008-05-27 Thread Peter
Thanks for the update.
Peter


- Original message -
From: "Craig Barratt" <[EMAIL PROTECTED]>
To: "Peter" <[EMAIL PROTECTED]>
Cc: "BackupPC User List" 
Date: Tue, 27 May 2008 17:02:40 -0700
Subject: Re: [BackupPC-users] RsyncdUserName missing in client config 

Peter writes:

> When I went to the client config file I could not see the username field 
> either.

It's a trivial bug in 3.0.0, fixed in 3.1.0.

Craig



Re: [BackupPC-users] Stolen backuppc server

2008-06-10 Thread Peter
When you are successful in achieving this (and relieved that you
have your backups back), it would be nice if you could jot down
your experiences in the BackupPC wiki.
Regards, and I wish you success.
Peter


- Original message -
From: "dan" <[EMAIL PROTECTED]>
To: "Johan Ekh" <[EMAIL PROTECTED]>
Cc: "BackupPC users' mailing list"

Date: Mon, 9 Jun 2008 20:34:59 -0600
Subject: Re: [BackupPC-users] Stolen backuppc server

Install a new BackupPC server (or one in a virtual machine) and
mount your NFS network drive at $TopDir, which is likely to be
/var/lib/backuppc if you are using Debian or Ubuntu. You can
download a ready-to-run Ubuntu 8.04 server image from VMware, do
a quick 'apt-get install backuppc', and when that is done run
'mount -t nfs *.*.*.*:/nfs/path/to/backuppc/files
/var/lib/backuppc'. You then need to get the config files for the
old hosts; that can be done by logging in to the new BackupPC
server's web interface, selecting localhost (which should be
populated from the localhost section of $TopDir), and restoring
the backuppc directory out of the /etc backup.

Once the config files are restored, you are ready to continue
using BackupPC, or you can restore from any of the old hosts,
since you have everything needed: the config file, the pool, and
the pc directory.

I think you can get this accomplished in an hour or less if you
have high-speed internet to download the VMware image or Ubuntu
server ISO.

On Sun, Jun 8, 2008 at 5:45 AM, Johan Ekh <[EMAIL PROTECTED]> wrote:

> Hi all,
> I use backuppc on a suse linux server to back up a number of computers on a
> NFS mounted network disk.
> Now somebody has stolen my suse server. I still have the network disk
> though.
>
> What is the best way to retrieve all my data?
>
> Please help with simple instructions if possible. I am not an expert.
>
> Best regards,
> Johan
>



Re: [BackupPC-users] Split share into several shares

2008-06-11 Thread Peter
Hi Kurt,
I would be interested in reading your howto for cygwin and
rsyncd, as I tried that on Windows and it was so slow that I went
back to Samba.

Let me know when you have finished your howto and I'll put it
through its paces and add whatever else I find.

Thanks
Peter


- Original message -
From: "Kurt Tunkko" <[EMAIL PROTECTED]>
To: "'BackupPC User List'" 
Date: Wed, 11 Jun 2008 10:53:35 +0200
[snip]

I had a similar problem when backing up Windows clients to my
BackupPC server: transfer speed was very (no, I mean very) slow.
I tried rsync over SSH (very slow with cygwin), I tried Samba
(had problems with user rights, and you cannot use excludes and
includes at the same time), and now I'm using rsyncd and
everything seems fine:
[snip]

I'm currently writing a howto, 'Backup Windows Clients with
(cygwin) rsyncd and BackupPC', and I would be glad to hear
feedback.

Best regards from Berlin

=/ Kurt
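Until Kurt's howto appears, the BackupPC-side settings for an rsyncd transfer look roughly like this (a sketch; the module name, user, and password are placeholders that must match whatever your cygwin rsyncd.conf defines):

```perl
$Conf{XferMethod}     = 'rsyncd';
$Conf{RsyncShareName} = ['docs'];       # rsyncd module name (placeholder)
$Conf{RsyncdUserName} = 'backuppc';     # placeholder; must match rsyncd secrets
$Conf{RsyncdPasswd}   = 'secret';       # placeholder
```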







Re: [BackupPC-users] BackupPC vs. Bacula

2008-07-23 Thread Peter
I agree with Ralf.

I used Bacula for a while, and while studying the backups I
noticed I wanted to eliminate a few more temporary files from the
backup. Making changes to Bacula's configuration would cause it
to perform a full backup, which meant I then had 28 GB stored
twice (minus those extra temporary files in the second backup).
With BackupPC, you can make these changes and it carries on doing
incrementals and fulls, taking the changes into account.

Plus, the web interface is great (as was mentioned as well).

Regards
Peter


- Original message -
From: "Ralf Gross" <[EMAIL PROTECTED]>
To: backuppc-users@lists.sourceforge.net
Date: Sun, 20 Jul 2008 21:38:13 +0200
Subject: Re: [BackupPC-users] BackupPC vs. Bacula

[... snip ...]

IMHO the biggest difference is the pooling feature backuppc offers.
There is nothing like this in bacula at the moment.

Ralf

[... snip ...]



Re: [BackupPC-users] Backups getting smaller and missing data

2008-09-08 Thread Peter
Well, I moved the whole 6 GB folder of images into my "Do not
archive" folder (which doesn't get backed up by BackupPC),
executed a manual full backup, and it completed successfully; the
size is back to 28,895.7 (well, a slight increase).

Now that I know what caused the problem, I need to find out why
introducing 6 GB of images caused it.

Any ideas or thoughts on how to resolve this?

Thanks
Peter


- Original message -
From: "Peter" <[EMAIL PROTECTED]>
To: "BackupPC User List" 
Date: Mon, 08 Sep 2008 19:29:13 +0800
Subject: [BackupPC-users] Backups getting smaller and missing data

My last full backup, on July 31, had a size of 28,895.5. When the
system did a backup the other day it reported 15,169.5, and today
I forced a manual full backup and it reports 7,471.0. However, it
should be getting larger: firstly, my music collection has not
changed (it's the bulk of the 28,895.5), and secondly, I added
most of my photo images, about 6 GB, so I would have expected
around 35,000 MB.

[snip snip]

I think the issue is really only with the D drive (i.e. Dleo);
the others seem to be okay from the spot check I have done. Dleo
has the music collection and the recently added image collection.
I'll look at moving the image collection to a folder outside of
the backup to see what difference it makes.

BTW, the NT_STATUS_BAD_NETWORK_NAME is not an issue: it relates
to the Zleo drive, which needs to be made a share before each
backup. It is a TrueCrypt drive, and shares are not retained.

Any tips on what to look for would be appreciated.
Thanks
Peter




[BackupPC-users] Backups getting smaller and missing data

2008-09-08 Thread Peter
My last full backup, on July 31, had a size of 28,895.5. When the
system did a backup the other day it reported 15,169.5, and today
I forced a manual full backup and it reports 7,471.0. However, it
should be getting larger: firstly, my music collection has not
changed (it's the bulk of the 28,895.5), and secondly, I added
most of my photo images, about 6 GB, so I would have expected
around 35,000 MB.

The reason for the gap is that I have been away for a month and just
returned and reactivated the virtual machine to perform the backup.

I also noticed I'm getting folders like "M" that do not exist on
the drive.

System setup:
- VMware 2.0 Beta running Kubuntu 8.04
- BackupPC v3.1.0
- Windows XP Professional running VMware
- The Windows machine is the owner of the drives and folders being backed up
- Using the SMB method for backup

I have been running this environment for some 4-6 months without issue.
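For context, the relevant host settings for the SMB method look roughly like this (a sketch; the share names are the ones from my log below, the credentials are placeholders):

```perl
$Conf{XferMethod}       = 'smb';
$Conf{SmbShareName}     = ['Cleo', 'Dleo', 'Fleo', 'Zleo'];  # my shares
$Conf{SmbShareUserName} = 'peter';     # placeholder
$Conf{SmbSharePasswd}   = 'secret';    # placeholder
```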

BackupPC log:
2008-09-06 16:00:02 removing incr backup 83
2008-09-06 16:00:02 removing incr backup 82
2008-09-06 16:00:02 removing incr backup 81
2008-09-06 16:00:02 removing incr backup 80
2008-09-06 16:00:02 removing incr backup 79
2008-09-06 16:00:02 full backup started for share Cleo
2008-09-06 16:00:47 full backup started for share Dleo
2008-09-06 16:49:21 Got fatal error during xfer (Call timed out: server
did not respond after 2 milliseconds opening remote file
\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\03 Oro.mp3
(\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\))
2008-09-06 16:49:29 Backup aborted (Call timed out: server did not
respond after 2 milliseconds opening remote file
\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\03 Oro.mp3
(\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\))
2008-09-06 16:49:30 Saved partial dump 88
2008-09-06 17:00:01 full backup started for share Cleo
2008-09-06 17:00:35 full backup started for share Dleo
2008-09-06 17:23:26 full backup started for share Fleo
2008-09-06 17:27:48 full backup started for share Zleo
2008-09-06 17:35:05 full backup 88 complete, 14659 files, 15906708844
bytes, 213 xferErrs ( bad files,  bad shares, 213 other)
2008-09-06 17:35:05 removing full backup 70
2008-09-07 17:00:06 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Cleo
2008-09-07 17:00:17 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Dleo
2008-09-07 17:00:43 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Fleo
2008-09-07 17:00:48 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Zleo
2008-09-07 17:00:50 Got fatal error during xfer (tree connect failed:
NT_STATUS_BAD_NETWORK_NAME)
2008-09-07 17:00:55 Backup aborted (tree connect failed:
NT_STATUS_BAD_NETWORK_NAME)
2008-09-07 21:00:01 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Cleo
2008-09-07 21:00:07 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Dleo
2008-09-07 21:00:47 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Fleo
2008-09-07 21:00:53 incr backup started back to 2008-09-06 16:00:01 
(backup #88) for share Zleo
2008-09-07 21:01:06 incr backup 89 complete, 40 files, 93306822 bytes, 1
xferErrs ( bad files,  bad shares, 1 other)
2008-09-07 21:01:06 removing incr backup 87
2008-09-07 21:01:06 removing incr backup 86
2008-09-07 21:01:06 removing incr backup 85
2008-09-08 17:06:24 full backup started for share Cleo
2008-09-08 17:07:57 full backup started for share Dleo
2008-09-08 17:24:43 full backup started for share Fleo
2008-09-08 17:27:41 full backup started for share Zleo
2008-09-08 17:31:42 full backup 90 complete, 12569 files, 7833891298
bytes, 67 xferErrs ( bad files,  bad shares, 67 other)
2008-09-08 17:31:42 removing full backup 71

I think the issue is really only with the D drive (i.e. Dleo);
the others seem to be okay from the spot check I have done. Dleo
has the music collection and the recently added image collection.
I'll look at moving the image collection to a folder outside of
the backup to see what difference it makes.

BTW, the NT_STATUS_BAD_NETWORK_NAME is not an issue: it relates
to the Zleo drive, which needs to be made a share before each
backup. It is a TrueCrypt drive, and shares are not retained.

Any tips on what to look for would be appreciated.
Thanks
Peter



Re: [BackupPC-users] Backups getting smaller and missing data

2008-09-08 Thread Peter
Looking back at the log for the first backup I did a few days
ago, I found the following:

Receiving SMB: Server stopped responding
This backup will fail because: Call timed out: server did not respond
after 2 milliseconds opening remote file \Data\Music\iTunes\iTunes
Music\Máire Brennan\Maire\03 Oro.mp3 (\Data\Music\iTunes\iTunes
Music\Máire Brennan\Maire\)
Call timed out: server did not respond after 2 milliseconds opening
remote file \Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\03
Oro.mp3 (\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\)

That backup completed with an error and ran again the following
hour. That run appeared to have completed successfully, except it
hadn't, as I later found out the backup size was far too small. I
then found that it had the following error:

write_data: write failure. Error = Connection reset by peer
write_socket: Error writing 59 bytes to socket 4: ERRNO = Connection
reset by peer
Error writing 59 bytes to client. -1 (Connection reset by peer)
Error reading file \Data\Music\iTunes\iTunes Music\Compilations\Best Of
The Sensational Alex Harvey Band\08 Next.mp3 : Write error: Connection
reset by peer
Didn't get entire file. size=6027392, nread=2162160
Write error: Connection reset by peer opening remote file
\Data\Music\iTunes\iTunes Music\Compilations\Best Of The Sensational
Alex Harvey Band\0 (\Data\Music\iTunes\iTunes Music\Compilations\Best Of
The Sensational Alex Harvey Band\)

The manual full backup that I tried to perform had the same type
of error, i.e. "Connection reset by peer", but in a different
spot: this time amongst the images. Once I removed the images
folder and its sub-folders, I achieved a successful full backup.

BTW, I'm backing up to a Western Digital 320 GB USB drive that
holds a VMware-created virtual disk. The disk has been allocated
about 300 GB and only about 30 GB is currently in use.

I'll keep looking but would appreciate any clues. Will report back if I
find any solution.

Thanks
Peter

- Original message -
From: "Peter" <[EMAIL PROTECTED]>
To: "General list for user discussion,  questions and support"

Date: Mon, 08 Sep 2008 20:45:58 +0800
Subject: Re: [BackupPC-users] Backups getting smaller and missing data

Well, I moved the whole 6 GB of image folders to my "Do not archive"
folder (which doesn't get backed up by BackupPC), executed a manual full
backup, and it completed successfully. Back to the previous size (well,
a slight increase): 28895.7.

Now that I know what triggered the problem, I need to find out why
introducing 6 GB of images caused it.

Any ideas or thoughts on how to resolve?

Thanks
Peter


- Original message -
From: "Peter" <[EMAIL PROTECTED]>
To: "BackupPC User List" 
Date: Mon, 08 Sep 2008 19:29:13 +0800
Subject: [BackupPC-users] Backups getting smaller and missing data

My last full backup in July 31 was a size of 28,895.5 and when the
[snip snip]

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backups getting smaller and missing data

2008-09-08 Thread Peter
Hi Craig
Thanks for that information. I'm using ClamWin, which I thought doesn't
do on-access scanning, only the scheduled scans I've created. Anyway,
I'll try disabling the AV and see if it makes any difference. At the
moment I want to keep it simple and let BackupPC take care of the music
and image directories.

Thanks
Peter

- Original message -
From: "Craig Barratt" <[EMAIL PROTECTED]>
To: "Peter" <[EMAIL PROTECTED]>
Cc: "BackupPC User List" 
Date: Mon, 08 Sep 2008 18:35:23 -0700
Subject: Re: [BackupPC-users] Backups getting smaller and missing data 

Peter writes:

> 2008-09-06 16:49:21 Got fatal error during xfer (Call timed out: server
> did not respond after 2 milliseconds opening remote file
> \Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\03 Oro.mp3
> (\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\))

This is the well-known timeout in smbclient caused by antivirus
software having to scan large files in the directory.  This
causes smbclient to exit and the backup stops.

Disabling AV should solve the problem; perhaps you can have it
skip your music directories?

Tim Demarest reported several years ago that he could change
the hard-coded timeout value in smbclient to avoid this issue.

Craig



Re: [BackupPC-users] Backups getting smaller and missing data

2008-09-09 Thread Peter
I stopped ClamWin, executed a manual full backup, and ran into the same
timeout messages.

Since getting Craig's email I have thought about it, and yes, backing up
music and images is possibly overkill, as they do not change regularly.
However, having now encountered this problem, I would rather fix it than
turn my back on it.

If the issue is with SMB, should I then switch to rsyncd? If so, I have
the problem that when I tried rsyncd some months ago, the backup ran for
hours and hours when it should have completed in about one hour in
total. That was using the rsyncd from the BackupPC download for Cygwin.
I read a message from a month or so ago in which someone recommended
using the rsyncd from the standard Cygwin site. I'll go back over the
postings to see if I can find that one and look at it more closely. Will
that resolve the slowness issue?

Peter

- Original message -----
From: "Peter" <[EMAIL PROTECTED]>
To: "BackupPC User List" 
Date: Tue, 09 Sep 2008 09:55:25 +0800
Subject: Re: [BackupPC-users] Backups getting smaller and missing data

Hi Craig
Thanks for that information. I'm using ClamWin, which I thought doesn't
do on-access scanning, only the scheduled scans I've created. Anyway,
I'll try disabling the AV and see if it makes any difference. At the
moment I want to keep it simple and let BackupPC take care of the music
and image directories.

Thanks
Peter

- Original message -----
From: "Craig Barratt" <[EMAIL PROTECTED]>
To: "Peter" <[EMAIL PROTECTED]>
Cc: "BackupPC User List" 
Date: Mon, 08 Sep 2008 18:35:23 -0700
Subject: Re: [BackupPC-users] Backups getting smaller and missing data 

Peter writes:

> 2008-09-06 16:49:21 Got fatal error during xfer (Call timed out: server
> did not respond after 2 milliseconds opening remote file
> \Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\03 Oro.mp3
> (\Data\Music\iTunes\iTunes Music\Máire Brennan\Maire\))

This is the well-known timeout in smbclient caused by antivirus
software having to scan large files in the directory.  This
causes smbclient to exit and the backup stops.

Disabling AV should solve the problem; perhaps you can have it
skip your music directories?

Tim Demarest reported several years ago that he could change
the hard-coded timeout value in smbclient to avoid this issue.

Craig



Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-09-16 Thread Peter
Thanks for replying, Nils. From the thread I read that rsync 3.0.3 has
fixed many problems, in particular with memory. Using version 2.6.9 I
have had the Windows system blue-screen, i.e. crash.
Regards
Peter


- Original message -
From: "Nils Breunese (Lemonbit)" <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], "General list for user discussion,
 questions and support" 
Date: Tue, 16 Sep 2008 14:26:04 +0200
Subject: Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC
3.1.0 ?

Peter wrote:

> I've run setup.exe on Windows XP to install rsync and other packages  
> for
> cygwin. However, it only provides version 2.6.9.  I want to install,  
> as
> mentioned in this topic, version 3.0.3, however I am unable to find it
> via setup.exe nor visiting the site cygwin.com. Any pointers where I  
> can
> find the executable would be appreciated.

I don't know where you could find it, but there is no advantage to  
using rsync 3.x instead of 2.6.9 with BackupPC at this moment.

Nils Breunese.



[BackupPC-users] No space left on device at /usr/share/backupPC/Xfer/RsyncFileIO.pm

2008-10-05 Thread Peter
Setup:

- VMware Server version 2.0
- Kubuntu 8.04.1 on VMware
- Kubuntu rsync v3.0.4
- Windows XP Professional
- Windows rsync v3.0.4

I have upgraded rsync to version 3.0.4, and trying to back up my
Windows XP client (the machine that hosts VMware and, inside it,
BackupPC) aborts with the error "No space left on device at
/usr/share/backupPC/Xfer/RsyncFileIO.pm".



I've had a look at my system: I am only using 2.9 GB out of 7.6 GB on
the system disk, and /var/lib/backuppc resides on an external drive
with about 300 GB on it. I considered moving /usr/share/backuppc to the
external drive, but the BackupPC documentation notes that it is best
not to move it or you would need to do a fresh install. Looking around
the web for clues on the error did not produce any results.



I would appreciate if someone could point me to how this can be
resolved.



Thanks

Peter


Re: [BackupPC-users] No space left on device at /usr/share/backupPC/Xfer/RsyncFileIO.pm

2008-10-05 Thread Peter
Hi,
I now notice that I'm getting "Disk too full (100%); skipped 2 hosts".

I'm confused about why I would get this message when I look at the
drive and it has heaps of space. What am I missing?
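For what it's worth, the "Disk too full" message appears to come from BackupPC's own free-space check rather than from the kernel: it runs df on the pool directory and compares the Use% column against $Conf{DfMaxUsagePct} (95 by default). A minimal sketch of that check, with the pool path as an assumption:

```shell
# Emulate BackupPC's disk-full test.  TOPDIR is an assumption: point it
# at your actual $Conf{TopDir} (e.g. /var/lib/backuppc) to see which
# filesystem df reports -- it may not be the external drive you expect.
TOPDIR=${TOPDIR:-/}
usage=$(df -P "$TOPDIR" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
echo "filesystem for $TOPDIR is ${usage}% full"
if [ "$usage" -ge 95 ]; then
    echo "BackupPC would report: Disk too full (${usage}%)"
fi
```

If df reports 100% for a mount you believe is nearly empty, check that $Conf{TopDir} really resides on the external drive and that the root filesystem (or the VMware virtual disk) hasn't filled up instead.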
Thanks
Peter

- Original message -
From: "Peter" <[EMAIL PROTECTED]>
To: "BackupPC User List" 
Date: Sun, 05 Oct 2008 16:28:14 +0800
Subject: [BackupPC-users] No space left on device at   
/usr/share/backupPC/Xfer/RsyncFileIO.pm

Setup:

- vmWare server version 2.0

- Kubuntu 8.04.1 on vmWare

- Kubuntu Rsync v3.0.4

- Windows XP Professional

-  windows Rsync v3.0.4



I have upgraded Rsync to be version 3.0.4 and trying to backup my
Windows XP client (which has the vmWare and BackupPC running on
it) is aborting with the error "No space left on device at
/usr/share/backupPC/Xfer/RsyncFileIO.pm"



I've had a look at my system I am only using 2.9G out of 7.6G of
the system. The  /var/lib/backuppc is residing on an external
drive with about 300G on it. I considered moving
/usr/share/backuppc to the external drive but noted in the
BackupPC documentation that it was best not to move it or you
would need to do a fresh install. Looking around the web for some
clues on the error did not produce any results.



I would appreciate if someone could point me to how this can be
resolved.



Thanks

Peter



[BackupPC-users] Error in Digest.c after upgrade Fedora 13 to Fedora 14

2010-11-16 Thread Peter
Hello,

I upgraded my BackupPC server from Fedora 13 to Fedora 14 yesterday, and
now I have a problem with backing up machines.

I get the following error messages when trying to back up a machine.

2010-11-16 10:00:12 apollo: Assertion ((svtype)((_svi)->sv_flags & 0xff)) >= 
SVt_PV failed: file "Digest.c", line 68 at 
/usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.
2010-11-16 10:38:17 User peter requested backup of uvax (uvax)
2010-11-16 10:38:18 uvax: Assertion ((svtype)((_svi)->sv_flags & 0xff)) >= 
SVt_PV failed: file "Digest.c", line 68 at 
/usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.
2010-11-16 10:41:42 User peter requested backup of uvax (uvax)
2010-11-16 10:41:42 uvax: Assertion ((svtype)((_svi)->sv_flags & 0xff)) >= 
SVt_PV failed: file "Digest.c", line 68 at 
/usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.

I have tried re-compiling File-RsyncP-0.70, but the same error occurs.

I have looked at the two lines where the error is flagged but am not
sure how to fix it.

Line 75 in RsyncFileIO.pm includes a call to
File::RsyncP::Digest->new()
which then triggers the failure at line 68 of Digest.c:
Perl_croak(aTHX_ "Usage: %s::%s(%s)", hvname, gvname, params);

Are there any suggestions as to how I can fix this?

Thanks in advance

Peter






Re: [BackupPC-users] Error in Digest.c after upgrade Fedora 13 to Fedora 14

2010-11-17 Thread Peter
On Tuesday, November 16, 2010, Peter wrote:
> Hello,
> 
> I upgraded my BackupPC from Fedora 13 to Fedora 14 yesterday, and now I
> have a problem with backing up machines.
> 
> I get the following error messages when trying to backup a machine.
> 
> 2010-11-16 10:00:12 apollo: Assertion ((svtype)((_svi)->sv_flags & 0xff))
> >= SVt_PV failed: file "Digest.c", line 68 at
> /usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.
> 2010-11-16 10:38:17 User peter requested backup of uvax (uvax)
> 2010-11-16 10:38:18 uvax: Assertion ((svtype)((_svi)->sv_flags & 0xff)) >=
> SVt_PV failed: file "Digest.c", line 68 at
> /usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.
> 2010-11-16 10:41:42 User peter requested backup of uvax (uvax)
> 2010-11-16 10:41:42 uvax: Assertion ((svtype)((_svi)->sv_flags & 0xff)) >=
> SVt_PV failed: file "Digest.c", line 68 at
> /usr/local/BackupPC/lib/BackupPC/Xfer/RsyncFileIO.pm line 75.
> 
> I have tried re-compiling File-RsyncP-0.70 but the same error occurs.
> 
> I have looked at the two lines where the error is flagged but am not too
> sure what to try to fix it.
> 
> line 75 in RsyncFileIO.pm includes a reference to
>   File::RsyncP::Digest->new()
> which then results in Digest.c crashing at line 68,
>   Perl_croak(aTHX_ "Usage: %s::%s(%s)", hvname, gvname, params);
> 
> Are there any suggestions as to how to I can fix this?
> 
> Thanks in advance
> 
> Peter
Hi there,

I managed to track down the problem. BackupPC was still using v0.68 of
File-RsyncP rather than v0.70, and even after compiling v0.70 it kept
using v0.68. To resolve this I deleted all versions of File-RsyncP from
my local machine, then re-compiled and re-installed v0.70. I then
issued the command

perl -e 'use File::RsyncP'

which failed with

Can't locate File/RsyncP.pm in @INC (@INC contains: /usr/local/lib/perl5
/usr/local/share/perl5 /usr/lib/perl5 /usr/share/perl5
/usr/lib/perl5 /usr/share/perl5
/usr/local/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi
/usr/local/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi
/usr/local/lib/perl5/site_perl/5.10.0
/usr/lib/perl5/vendor_perl/5.10.0/i386-linux-thread-multi
/usr/lib/perl5/vendor_perl /usr/lib/perl5/site_perl .) at -e line 1.
BEGIN failed--compilation aborted at -e line 1.

This did not make sense, as I had just installed the module without
error. I then tracked the problem down to the permissions/ownership of
the directories: the owner was root and the permissions were
drwxr-x---. After I changed the directory, and sub-directory,
permissions to drwxr-xr-x,

perl -e 'use File::RsyncP'

returned without an error.

The directories (on my installation) with the permission issues were
/usr/local/lib/perl5/File
/usr/local/lib/perl5/auto/File
and the subdirectories below them.

Looking at the paths in @INC, it appears that when I thought I was
using v0.70 I was in fact using v0.68, which was still installed in
/usr/local/lib/perl5/site_perl/5.10.0/i386-linux-thread-multi and was
causing the backups to fail with the error above. The code was doing as
expected; I should have looked harder yesterday before posting the
first email.

Apologies for the long-winded reply, but it may prove helpful to others
out there.
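The permission repair described above can be sketched on a throwaway directory (the real directories were /usr/local/lib/perl5/File and /usr/local/lib/perl5/auto/File; the paths below are scratch, and this demo only shows the chmod, not a real module install):

```shell
# Recreate the broken drwxr-x--- state, then apply the fix.  a+rX adds
# read for everyone and execute (search) on directories only, turning
# 750 into the drwxr-xr-x (755) that lets non-root perl read the tree.
demo=$(mktemp -d)
mkdir -p "$demo/File/RsyncP"
chmod 750 "$demo/File" "$demo/File/RsyncP"
chmod -R a+rX "$demo/File"
stat -c '%a' "$demo/File" "$demo/File/RsyncP"    # GNU stat; prints 755 twice
```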


Peter

-- 
Peter M. Bloomfield
Physicist,
PET Centre,
Centre for Addiction and Mental Health,
250 College St.,
Toronto, Ontario,
Canada M5T 1R8
Tel: 416 535 8501 Ext. 4243



[BackupPC-users] Message: "Got fatal error during xfer" but no error indication

2007-10-28 Thread Peter
Hi,
I'm completely stuck with the following problem:

"Got fatal error during xfer (Gesamtzahl geschriebener Bytes: 21678080
(21MiB, 401KiB/s))"

There is no indication of what went wrong, and other (non-tar) backups
do work. There is space available on the backup volume (88% used).
I used BackupPC version 3.0, but the problem appeared with 2.x as well.
I ran the tar command as user backuppc from the console and it returned
with an rc of 0:

(backuppc # /usr/bin/sudo /bin/tar -c -v -f /tmp/tetc -C /etc - [ ... ]
;echo $? )

I have checked this mailing list and other web resources but did not
find any clue, and I have no other idea of what to try or where else to
look.

Any help would be very much appreciated. :-)

Regards, Peter


Error log:--

Running: /usr/bin/sudo /bin/tar -c -v -f - -C /etc --totals 
--exclude=./proc --exclude=./dev --exclude=./cdrom --exclude=./media 
--exclude=./floppy --exclude=./mnt --exclude=./var/lib/backuppc 
--exclude=./lost+found --exclude=./home/notes --exclude=./var/run 
--exclude=./var/log .
full backup started for directory /etc
Xfer PIDs are now 18275,18274
[ skipped 3801 lines ]
Gesamtzahl geschriebener Bytes: 21678080 (21MiB, 401KiB/s)
[ skipped 38 lines ]
tarExtract: Done: 0 errors, 3283 filesExist, 19119670 sizeExist, 7436606 
sizeExistComp, 3287 filesTotal, 19122011 sizeTotal
Got fatal error during xfer (Gesamtzahl geschriebener Bytes: 21678080 
(21MiB, 401KiB/s))
Backup aborted (Gesamtzahl geschriebener Bytes: 21678080 (21MiB, 401KiB/s))

Config: --
$Conf{XferMethod} = 'tar';
$Conf{CompressLevel} = '3';
$Conf{BackupFilesExclude} = {
  '*' => [
'/proc',
'/dev',
'/cdrom',
'/media',
'/floppy',
'/mnt',
'/var/lib/backuppc',
'/lost+found',
'/home/notes',
'/var/run',
'/var/log'
  ]
};
$Conf{RsyncShareName} = [
  '/etc',
  '/web',
  '/var',
  '/home'
];
$Conf{TarShareName} = [
  '/etc',
  '/var',
  '/web',
  '/home'
];
$Conf{TarClientCmd} = '/usr/bin/sudo $tarPath -c -v -f - -C $shareName+ 
--totals';
$Conf{TarClientRestoreCmd} = '$sshPath -q -x -l root $host /usr/bin/env 
LC_ALL=C $tarPath -x -p --numeric-owner --same-owner -v -f - -C 
$shareName+';
$Conf{XferLogLevel} = '3';




Re: [BackupPC-users] Message: "Got fatal error during xfer" but no error indication

2007-10-30 Thread Peter
Hey Guys

I think I found a bug, or at least an issue that should be either
documented or fixed. The case described below occurs if the tar
XferMethod is used and any language other than en_EN is selected in the
environment. The dump script seems to depend on tar's completion
message and will report an error if that message appears in another
language (in this case German). The problem can be worked around by
entering

$ENV{'LANG'} = 'en_EN';

below

$ENV{'PATH'} = '/bin:/usr/bin';

in config.pl; at least in my case that fixed the problem.
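The locale dependence is easy to demonstrate: GNU tar localizes its --totals summary, and BackupPC's tar transfer method appears to match the English "Total bytes written" line, so a German "Gesamtzahl geschriebener Bytes" summary is treated as an unrecognized, fatal message:

```shell
# Show the --totals summary tar emits under the C locale; under a
# German locale the same line reads "Gesamtzahl geschriebener Bytes:
# ...", which BackupPC does not recognize.
tmp=$(mktemp -d)
echo "hello" > "$tmp/file"
summary=$(LC_ALL=C tar --totals -cf "$tmp/out.tar" -C "$tmp" file 2>&1)
echo "$summary"
```

Since the stock $Conf{TarClientRestoreCmd} shown below already forces LC_ALL=C via /usr/bin/env, another option (an assumption, not what the poster tested) would be to prefix $Conf{TarClientCmd} the same way instead of setting $ENV{'LANG'} globally.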

Regards, Peter

Peter wrote:
> Hi,
> I got completely stuck with the following Problem:
> 
> "Got fatal error during xfer (Gesamtzahl geschriebener Bytes: 21678080 
> (21MiB, 401KiB/s))"
> 
> No indication of what went wrong, other (non tar) backups do work. There 
> ist space available
> on the backup volumne (88% used)
> I used BackupPc Version 3.0 but the Problem did appear with 2.x as well.
> I run the tar command as user backuppc from the console and it returned 
> with rc of 0
> 
> (backuppc # /usr/bin/sudo /bin/tar -c -v -f /tmp/tetc -C /etc - [ ... ] 
> ;echo $? )
> 
> I have checked this mailing list and other web resources but did not 
> find any clue.
> I have no other Idea what can be done or where I could look after.
> 
> Any help would be very much apreciated. :-)
> 
> Regards, Peter
> 
> 
> Error log:--
> 
> Running: /usr/bin/sudo /bin/tar -c -v -f - -C /etc --totals 
> --exclude=./proc --exclude=./dev --exclude=./cdrom --exclude=./media 
> --exclude=./floppy --exclude=./mnt --exclude=./var/lib/backuppc 
> --exclude=./lost+found --exclude=./home/notes --exclude=./var/run 
> --exclude=./var/log .
> full backup started for directory /etc
> Xfer PIDs are now 18275,18274
> [ skipped 3801 lines ]
> Gesamtzahl geschriebener Bytes: 21678080 (21MiB, 401KiB/s)
> [ skipped 38 lines ]
> tarExtract: Done: 0 errors, 3283 filesExist, 19119670 sizeExist, 7436606 
> sizeExistComp, 3287 filesTotal, 19122011 sizeTotal
> Got fatal error during xfer (Gesamtzahl geschriebener Bytes: 21678080 
> (21MiB, 401KiB/s))
> Backup aborted (Gesamtzahl geschriebener Bytes: 21678080 (21MiB, 401KiB/s))
> 
> Config: --
> $Conf{XferMethod} = 'tar';
> $Conf{CompressLevel} = '3';
> $Conf{BackupFilesExclude} = {
>   '*' => [
> '/proc',
> '/dev',
> '/cdrom',
> '/media',
> '/floppy',
> '/mnt',
> '/var/lib/backuppc',
> '/lost+found',
> '/home/notes',
> '/var/run',
> '/var/log'
>   ]
> };
> $Conf{RsyncShareName} = [
>   '/etc',
>   '/web',
>   '/var',
>   '/home'
> ];
> $Conf{TarShareName} = [
>   '/etc',
>   '/var',
>   '/web',
>   '/home'
> ];
> $Conf{TarClientCmd} = '/usr/bin/sudo $tarPath -c -v -f - -C $shareName+ 
> --totals';
> $Conf{TarClientRestoreCmd} = '$sshPath -q -x -l root $host /usr/bin/env 
> LC_ALL=C $tarPath -x -p --numeric-owner --same-owner -v -f - -C 
> $shareName+';
> $Conf{XferLogLevel} = '3';
> 
> 



Re: [BackupPC-users] Building an inexpensive backup pc.

2007-11-07 Thread Peter

> It might be interesting to build a BackupPC "appliance" using a Linksys 
> NSLU2. It's a Linux computer about the size of a paperback book [ ... ]
I am currently working on getting this box up and running with BackupPC.
For testing purposes I loaded the NSLU2 with the Debian Etch distro
(see http://www.cyrius.com/debian/nslu2/unpack.html, which is a very
good HowTo). Attached is a 2.5" 40 GB hard disk on port 2 (see the
discussion at
http://www.nslu2-linux.org/wiki/Unslung/WhichUSBPortforUnslung6).

So far I have it working with BackupPC version 2.1.2. Version 3 failed
to install because it requires File::RsyncP 0.68, which I was unable to
install because of difficulties with make and cc.

Tests with version 2 resulted in the following findings:
- Performance is slow (what did you expect?)
- Compression is a no-no; forget about it on this box
- I only tested rsync backups with Windows clients
- Xfer rate is about 600 KB/s on a 100 Mbit LAN

More tests will follow.

Peter




[BackupPC-users] Unable to open for writing

2007-12-24 Thread Peter
Hi,
I sometimes get this type of message when running backups from Windows
PCs using rsyncd:

Unable to open /var/lib/backuppc/pc/deva/new/fuser/f$User_E.ZIP for 
writing\n
Botch, no matches on /var/lib/backuppc/pc/deva/new/fuser/f$User_E.ZIP 
(d9d6deb5b141db776364b3f7b7a17254)\n
create   644   400/401  1053931146 $User_E.ZIP
...
Unable to open /var/lib/backuppc/pc/deva/new/fuser/attrib for writing\n
Botch, no matches on /var/lib/backuppc/pc/deva/new/fuser/attrib 
(ee60a1bc73dfc0e5e929bfe0d243e959)\n
   create d 755   400/401   0 .
   create d 755   400/401   0 Brio
...
followed by a number of create, pool etc. messages
followed by:
Done: 41 files, 1063072351 bytes


I have no idea what may have gone wrong, but the file $User_E.ZIP does
not appear in the list of backed-up files. I have not tried any other
backup method, nor can I say whether this happens only on specific PCs
or with specific files. I am running version 3.0.0.

Any clues?






[BackupPC-users] How to get rid of PaxHeader folders

2019-10-22 Thread Peter Horn

Dear all,

I've been using BackupPC for several years on Debian. After updating to
Debian 9, PaxHeader folders began to appear in the backups of Windows
hosts whenever file names with German umlauts are present. This has
already been mentioned by another user here:


https://sourceforge.net/p/backuppc/mailman/message/35232861/


The problem has been fixed in Release 4.2.0:
https://github.com/backuppc/backuppc/releases/tag/4.2.0


I've updated to Debian 10 and installed BackupPC 4.3.0 from the Debian 
10 packages mentioned here:


https://github.com/backuppc/backuppc/wiki/Build-Your-Own-Packages

But the PaxHeader folders still exist in the backups taken after the
update, and their time stamp is always updated to the time of the most
recent backup. Inspection of the XferLog shows no PaxHeader folders at
all.



How do I get rid of those PaxHeader folders created by BackupPC 3.x? I 
don't want to delete old backups if possible.



Thanks,
Peter
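As a first step, it may help to confirm where the leftover folders actually live on disk. BackupPC 3.x stores names with an "f" mangling prefix, so a PaxHeader directory should appear as fPaxHeader inside the per-host trees (the TopDir path below is the Debian default and an assumption):

```shell
# Count leftover PaxHeader directories in the v3-era backup trees.
# Adjust TOPDIR to your $Conf{TopDir}; /var/lib/backuppc is the Debian
# default.  This only locates them -- it does not delete anything.
TOPDIR=${TOPDIR:-/var/lib/backuppc}
hits=$(find "$TOPDIR/pc" -type d -name 'fPaxHeader*' 2>/dev/null | wc -l)
echo "found $hits mangled PaxHeader directories"
```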




Re: [BackupPC-users] How to get rid of PaxHeader folders

2019-10-28 Thread Peter Horn

Some more observations:

On 22.10.2019 at 14:28, Peter Horn wrote:
> But the PaxHeader folders still exist in the backups taken after the
> update

After the second full backup the PaxHeader folders disappear, so the
problem seems to be self-curing.

> and their time stamp is always updated to the time of the most
> recent backup. Inspection of the XferLog shows no PaxHeader folders at
> all.

The timestamp isn't the time of the most recent backup, but the time at
which the backup is displayed in the web interface.





Re: [BackupPC-users] A Perl error?

2023-07-23 Thread Peter Major
Hi,
I don't usually wade into discussions like these, but as I understand
Perl fairly well, I feel the need to point out that you are on the
wrong track, G.W. Haywood. The error message does not refer to line 742
of config.pl, but to that of configure.pl, presumably the script which
reads config.pl. Unless Jan has modified configure.pl, the actual error
is still likely to be in config.pl.

Jan, please give more information. You said 'I started to notice
following error.' - following what? Did you make a configuration change
just before you noticed the error? Have you had BPC working, or are you
still tuning the configuration? Are you using the web interface to do
the configuration, or are you editing config.pl by hand? Which version
of BPC are you using? Without further information, it's unlikely that
other people are going to be able to help you.

Kind regards,
Peter Major
On Sun, 2023-07-23 at 13:59 +0100, G.W. Haywood via BackupPC-users
wrote:
> Hi there,
> On Sun, 23 Jul 2023, Jan Stransky wrote:
> > I started to notice following errorCan't use string ("1") as a
> > HASH ref while "strict refs" in use at configure.pl line 742
> 
> Looks like you broke it.
> Please let us see what you have around line 742 of configure.pl.
> In the vanilla configure.pl that would be somewhere around the
> partwhich sets up things to be backed up and/or ignored, but if
> yourversion of config.pl has been heavily modified it could be
> anything.This is from a current config.pl here:
> 8<------
> $ cat -n /etc/BackupPC/config.pl | head -n 770 | tail -n 45
>    726  # Examples:
>    727  #    $Conf{BackupFilesExclude} = '/temp';
>    728  #    $Conf{BackupFilesExclude} = ['/temp'];     # same as first example
>    729  #    $Conf{BackupFilesExclude} = ['/temp', '/winnt/tmp'];
>    730  #    $Conf{BackupFilesExclude} = {
>    731  #       'c' => ['/temp', '/winnt/tmp'],         # these are for 'c' share
>    732  #       'd' => ['/junk', '/dont_back_this_up'], # these are for 'd' share
>    733  #    };
>    734  #    $Conf{BackupFilesExclude} = {
>    735  #       'c' => ['/temp', '/winnt/tmp'],         # these are for 'c' share
>    736  #       '*' => ['/junk', '/dont_back_this_up'], # these are for other shares
>    737  #    };
>    738  #
>    739  $Conf{BackupFilesExclude} = undef;
>    740
>    741  #
>    742  # PCs that are always or often on the network can be backed up after
>    743  # hours, to reduce PC, network and server load during working hours. For
>    744  # each PC a count of consecutive good pings is maintained. Once a PC has
>    745  # at least $Conf{BlackoutGoodCnt} consecutive good pings it is subject
>    746  # to "blackout" and not backed up during hours and days specified by
>    747  # $Conf{BlackoutPeriods}.
>    748  #
>    749  # To allow for periodic rebooting of a PC or other brief periods when a
>    750  # PC is not on the network, a number of consecutive bad pings is allowed
>    751  # before the good ping count is reset. This parameter is
>    752  # $Conf{BlackoutBadPingLimit}.
>    753  #
>    754  # Note that bad and good pings don't occur with the same interval. If a
>    755  # machine is always on the network, it will only be pinged roughly once
>    756  # every $Conf{IncrPeriod} (eg: once per day). So a setting for
>    757  # $Conf{BlackoutGoodCnt} of 7 means it will take around 7 days for a
>    758  # machine to be subject to blackout. On the other hand, if a ping is
>    759  # failed, it will be retried roughly every time BackupPC wakes up, eg,
>    760  # every one or two hours. So a setting for $Conf{BlackoutBadPingLimit} of
>    761  # 3 means that the PC will lose its blackout status after 3-6 hours of
>    762  # unavailability.
>    763  #
>    764  # To disable the blackout feature set $Conf{BlackoutGoodCnt} to a negative
>    765  # value.  A value of 0 will make all machines subject to blackout.  But
>    766  # if you don't want to do any backups during the day it would be easier
>    767  # to just set $Conf{WakeupSchedule} to a restricted schedule.
>    768  #
>    769  $Conf{BlackoutBadPingLimit} = 3;
>    770  $Conf{BlackoutGoodCnt}      = 7;
> 8<------
> 
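A "HASH ref" error like the one quoted typically means that a setting BackupPC expects to be a hash (or array) reference was given a plain scalar. A minimal sketch of the wrong and right shapes, with illustrative values only (not the poster's actual config):

```perl
# Illustrative only -- the real cause depends on the modified config.pl.
# Wrong: a bare scalar where BackupPC expects a reference; dereferencing
# it as a hash raises "Can't use string ("1") as a HASH ref" under strict.
$Conf{BackupFilesExclude} = 1;

# Right: a hash reference keyed by share name ('*' covers all other shares).
$Conf{BackupFilesExclude} = {
    'c' => ['/temp', '/winnt/tmp'],
    '*' => ['/junk', '/dont_back_this_up'],
};
```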
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Can't write 32780 bytes to socket

2014-05-26 Thread Stefan Peter
Hi Guilherme,

On 26.05.2014 15:57, Guilherme Cunha wrote:
> Hi, 
> 
> Full lines about this: "
> 
> Can't write 32780 bytes to socket
> 
> Read EOF: 
> 

The real error message is probably above the lines you quoted. I guess
it is something like:

Remote[1]: rsync error: timeout in data send/receive (code 30) at
io.c(239) [sender=3.0.3]

This means that the rsync server did not send any data for the time
specified by the timeout value in /etc/rsyncd.conf to the BackupPC
client. This can happen due to large deltas, complex settings, a busy
client or a couple of other reasons.

If I were you, I'd double the "timeout" value in the /etc/rsyncd.conf
of the system you want to back up. If this does not help, run fsck on
that system, because file system problems on the system being backed
up can leave the rsyncd server unable to respond to requests from the
server.
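The "double the timeout" step can be scripted; a minimal sketch, demonstrated on a temporary copy rather than a real /etc/rsyncd.conf (the starting value of 600 is an assumption):

```shell
# Work on a temp copy; on a real client you would edit /etc/rsyncd.conf
# and restart rsyncd afterwards.
conf=$(mktemp)
printf 'timeout = 600\n' > "$conf"

# Read the current timeout and double it in place.
cur=$(awk -F'=' '/^[[:space:]]*timeout/ {gsub(/ /,"",$2); print $2}' "$conf")
sed -i "s/^timeout = .*/timeout = $((cur * 2))/" "$conf"
cat "$conf"   # timeout = 1200
```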

With kind regards

Stefan Peter


-- 
"In summary, I think you are trying to solve a problem that may not
need to be solved, using a tool that is not meant to solve it, without
understanding what is causing your problems and without knowing how
the tool actually works in the first place :)"
Jeffrey J. Kosowsky on the backuppc mailing list





Re: [BackupPC-users] Forbidden You don't have permission to access /BackupPC on this server.

2015-09-21 Thread Stefan Peter
Dear Bob of Donelson Trophy

Please find my comments interspersed below:
(this is the stanza I use for noobs)

On 21.09.2015 14:03, Bob of Donelson Trophy wrote:
> First, my only comment on top posting.
> 
> Here in the states most email clients default to top posting. Therefore,
> if I set my defaults to bottom post (as requested but not mandatory by
> all mailing list I participate in) my US customers (who, grant you, are
> NOT logical and think email should be in bottom post, logical order)
> reply to my emails with "your email response was empty" or "You didn't
> answer my question" and then I would need to explain to all of my
> customers where my answer is (at the bottom, where, yes, I think,
> logically it should be.) I cannot redirect the masses to bottom post,
> sorry. I, like you, have to live with top posts occasionally, sorry.

You are not supposed to bottom post, either. The idea behind the whole
eMail rule is to have a dialogue, to allow a reader who is subscribed to
a couple of mailing lists to be able to follow the conversation without
digging through a 2000+ line email you have to read *bottom up* if
she/he wants to understand the context.

> 
> We mailing list normal users, although annoyed with top posting, can
> learn to follow the combination of top posts with bottom posts and keep
> up with answers.

One could, but I have other emails to read, my boss does not pay me for
reading eMails and I don't have the time to dig through the mess top
posters produce. Result: Your eMail most probably will be ignored by a
majority of the participants of a tech related mailing list. Not what
you aim at, I suppose.


With kind regards

Stefan Peter

-- 
Any technology that does not appear magical is insufficiently advanced.
~ Gregory Benford



Re: [BackupPC-users] Creating a "read everywhere" user to backup Windows profiles

2016-01-14 Thread Stefan Peter
On 14.01.2016 00:13, Adam Goryachev wrote:
> On 14/01/16 07:11, Andreas Piening wrote:
>> I wonder what the easiest / best way is to create a „read everywhere“ user 
>> on ms windows to create backups with via CIFS / SMBFS.
>>
>> Ideally I would like to run a short .cmd script or do a couple of clicks to 
>> give a local windows user (let’s assume ‚backuppc‘) full read access to 
>> everything under c:\Users. Even better with write access to be able to 
>> restore in place.
>> I know that I can enable inheritance for permissions in c:\Users and 
>> overwrite all permissions on subfolders with the current one. But this would 
>> also enable read for everyone for every user on other users profiles which I 
>> don’t like. And even this does not work everywhere, even not with an 
>> administrative account. I need to take ownership recursively in order to do 
>> that and I don’t want to own other users files.
>>
>> Is there a better way?
> 
> Isn't there a specific "Backup Operator" account on windows which has 
> "super" permissions for exactly this reason? I'm not sure if that 
> account will work over samba though?

Additionally, isn't there something like junction points on windows that
can not be read by an ordinary user? I seem to remember that there have
been lists floating around on this mailing list with directories to
exclude for the various windows versions.

A list of the failure messages you encountered may help, btw.

With kind regards

Stefan Peter

-- 
Any technology that does not appear magical is insufficiently advanced.
~ Gregory Benford



Re: [BackupPC-users] R: Re: R: Keep getting "Aborting backup up after signal PIPE"

2016-03-02 Thread Stefan Peter
Dear Nicolas Göddel
Am 02.03.2016 um 11:41 schrieb Nicolas Göddel:
> Hi,
> 
> thank you. I did an incremental backup with verbose output. The last few lines
> in the log are these:
> 
> attribWrite(dir=fbenutzer/.Trashes/501/Recovered files
> #1/com.apple.iBooksAuthor_409_SFED_368025537_2/SpookySpooky.ibooks/OPS/assets/thumbs/content13)
> -> /var/lib/backuppc/pc/suw03/new/fbenutzer/f.Trashes/f501/fRecovered files
> #1/fcom.apple.iBooksAuthor_409_SFED_368025537_2/fSpookySpooky.ibooks/fOPS/fassets/fthumbs/fcontent13/attrib
> Done: 0 files, 0 bytes
> Got fatal error during xfer (aborted by signal=PIPE)
> Backup aborted by user signal
> dump failed: aborted by signal=PIPE

This didn't reveal additional information, so I would propose to have a
look at the other end of the backup.

According to the first mail you wrote, you are using the rsyncd transfer
method. In my experience, signal=PIPE errors are most of the time
caused by timeouts. Although rsyncd has no default I/O timeout, most
default rsyncd.conf files use something along the lines of 600 seconds.

Can you check your /etc/rsyncd.conf for a timeout=... parameter?
If you find one, double the value, restart rsyncd and retry your backup.

If there is no timeout to be found in rsyncd.conf, it may be specified
on the command line of the rsyncd startup script. A 'ps faxw|grep rsync'
should reveal it.

If this does not help, finding the logs for rsyncd may shed some more
light on the problem. Because AFAIK there is no standard rsyncd.conf
for Debian/Ubuntu, check your config file for 'log file' and 'syslog
facility' entries. These define where rsyncd will put its log output.
You can find out more about these entries by issuing "man rsyncd.conf".
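Locating those entries can be sketched with a quick grep; the demo below runs against a temporary file with assumed contents, since there is no standard rsyncd.conf:

```shell
# Demo on a temp file; on a real system, grep /etc/rsyncd.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
timeout = 600
log file = /var/log/rsyncd.log
syslog facility = daemon
EOF

# Show where rsyncd would log, and its I/O timeout.
grep -E '^(timeout|log file|syslog facility)' "$conf"
```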

With kind regards

Stefan Peter





Re: [BackupPC-users] Using Amazon AWS S3/Glacier with BackupPC

2016-03-19 Thread Stefan Peter
Hi list,

On 17.03.2016 13:16, Marcel Meckel wrote:
> I'm curious if anybody managed to use Amazon S3 or Glacier with 
> BackupPC,

Being responsible for a small setup that cannot afford a symmetric
internet connection (the pool size is around 8 TB at the moment), full
backups to S3 were out of the question, so I was confronted with this
issue, too.

In the end, I used s3ql (http://www.rath.org/s3ql-docs/) for a very
limited subset of the total backup. I bypassed BackupPC, though: I just
rsync the relevant files to the s3ql file system using a cron job. But I
feel that BackupPC's archive feature could be used as well when not using
compression (which would throw off s3ql's block de-duplication).
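The cron-driven rsync push described above might be sketched like this; every path and mountpoint here is an assumption, and the command is only echoed as a dry run:

```shell
# Hypothetical stand-in for the poster's cron job; a real crontab entry
# might look like:  0 2 * * *  rsync -a --delete SRC DEST
SRC=/srv/important/
DEST=/mnt/s3ql/backup/
echo rsync -a --delete "$SRC" "$DEST"
```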

I use standard Amazon S3 storage and limit the number of backups to
the last 8 days. The total backup size is around 200 GB and the
monthly costs have never exceeded $10 yet. We currently have 1 Mbit/s
upstream; the s3ql backup starts at 02:00 AM and I have not yet
noticed it interfering with daily business.


With kind regards

Stefan Peter

-- 
Any technology that does not appear magical is insufficiently advanced.
~ Gregory Benford



Re: [BackupPC-users] missing libsocket6-perl dependency Backuppc 3.3.0 on Ubuntu 15.10

2016-03-28 Thread Stefan Peter
Hi Robert,

Am 28.03.2016 um 13:16 schrieb Robert Wooden:
> A few months ago I built a Ubuntu 15.10 backuppc (current 3.3.0 version, I 
> think) and had an issue regarding missing dependencies. I got busy and forgot 
> to post my incident. Well, had a hard drive failure and have had to rebuild 
> the machine. Install Ubuntu 15.10 server with a "minicd" with "basic server", 
> "lamp", and "openssh" (tasksel) options only. On start up, installed 
> "vim-nox" and set ssh to allow root password and installed "backuppc". On 
> restart I could not access the "ipaddress/backuppc" and received a long 
> complaint regarding "Socket6" missing instead of the usual backuppc gui. Some 
> Internet research and discovered that to solve my issue I had to install 
> "libsocket6-perl" package and my issue was gone. Backuppc now properly offers 
> it's gui. 
> 
> I just thought someone should know that all the dependencies are not included 
> when installing from repository against U15.10. Perhaps someone could correct 
> this?

This is a packaging issue. Please file a bug at
https://launchpad.net/ubuntu/+source/backuppc/+bugs

With kind regards

Stefan Peter





Re: [BackupPC-users] Random Daily Backups remain idle, can be started manually 4.11.16

2016-04-14 Thread Stefan Peter
Dear tschmid4

Please stop top posting (see
https://en.wikipedia.org/wiki/Posting_style).

On 14.04.2016 18:06, tschmid4 wrote:
> I looked in the Gui LOG file.
> 
> (there are 15 servers scheduled to backup)
> 
> 2016-04-13 23:00:00 Running 2 BackupPC_nightly jobs from 0..15 (out of 0..15)
> 2016-04-13 23:00:00 Running BackupPC_nightly -m 0 127 (pid=42153)
> 2016-04-13 23:00:00 Running BackupPC_nightly 128 255 (pid=42154)
> 2016-04-13 23:00:00 Next wakeup is 2016-04-14 23:00:00
> 
> (2 or 3 backups COMPLETE)
> Then,
> 2016-04-13 23:42:13 Finished  adm  (BackupPC_nightly 128 255)
> 2016-04-13 23:42:25 BackupPC_nightly now running BackupPC_sendEmail
> 2016-04-13 23:46:38 Finished  admin  (BackupPC_nightly -m 0 127)
> 2016-04-13 23:46:38 Pool nightly clean removed 0 files of size 0.00GB
> 2016-04-13 23:46:38 Pool is 0.00GB, 0 files (0 repeated, 0 max chain, 0 max 
> links), 1 directories
> 2016-04-13 23:46:38 Cpool nightly clean removed 155 files of size 36.09GB
> 2016-04-13 23:46:38 Cpool is 2201.54GB, 1705438 files (2697 repeated, 184 max 
> chain, 31999 max links), 4369 directories
> 

The log file lines you sent only cover the BackupPC_nightly job. This is
a housekeeping task that should run after the backups, so this
information does not really help to find your problem.

Please point your browser to your BackupPC server, navigate to "Old
LOGs" and select "LOG.0.z" (or "LOG.1.z" or whatever). Then post the
_complete_ output in _text_ format on this mailing list.

If you really want to find a solution, we need more information, though.

Select one problematic host, browse to the "Host Summary" of your
BackupPC server and copy the status line of your host into the email (no
screenshots, please). Then attach the /etc/backuppc/config.pl and
/etc/backuppc/.pl to the email. But first check these two
files for user names and passwords and redact them if needed. If you
want to be considerate about the amount of data you send out to all the
participants of this mailing list, put the files on a sharing
service like pastebin and only post the links, or at least run them
through 'grep -v ^#' prior to attaching.

The output of "df -h" may be helpful, too.
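The comment-stripping step suggested above can be sketched as follows (on a temporary file standing in for config.pl):

```shell
# Demo on a temp file standing in for /etc/backuppc/config.pl.
conf=$(mktemp)
printf '# a comment line\n$Conf{XferMethod} = "rsyncd";\n' > "$conf"

# Keep only non-comment lines -- much smaller to post or attach.
grep -v '^#' "$conf"
```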


With kind regards

Stefan Peter

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?




Re: [BackupPC-users] Random Daily Backups remain idle, can be started manually 4.11.16

2016-04-15 Thread Stefan Peter
Am 15.04.2016 um 17:01 schrieb Les Mikesell:
> On Fri, Apr 15, 2016 at 9:06 AM, tschmid4  wrote:
>>
>> Here is a link to the .PL files.
>>
>> http://pastebin.com/ZbQTMG7d
>>
> 
> There may be more, but this looks wrong - and not the default setting:
> #
> $Conf{WakeupSchedule} = [
>   23
> ];
> 
> Normally that should be a list of hours when you want it to wake up
> and schedule/start potential backups (see the nearby comments).


Another one, I think:

$Conf{MaxBackups} = 2;

So you only wake up once a day and then schedule only two backups.

So why could you run them manually? Because no wakeup is needed for
that, and you have

$Conf{MaxUserBackups} = 4;
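For comparison, a more conventional shape for these two settings might look like this (illustrative values only, not a recommendation for this particular site):

```perl
# Illustrative values only -- tune for your own site.
$Conf{WakeupSchedule} = [1..23];   # check for due backups every hour
$Conf{MaxBackups}     = 4;         # allow several concurrent automatic backups
```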



With kind regards

Stefan Peter




Re: [BackupPC-users] Random Daily Backups remain idle, can be started manually 4.11.16

2016-04-15 Thread Stefan Peter
Am 15.04.2016 um 16:06 schrieb tschmid4:
> Thank you for all of your help.
> -Copy the status line of your host:
> ? Selecting Host Summary from the WebGUI, I see a list of hosts. Providing 
> User, #Full, Full Aga (days) Full Size(GB) etc... 
> No exactly sure what you need here.

There are several crucial pieces of information to be gleaned from the
Host Summary page that you should be aware of if you operate a BackupPC
instance. Some of them:
o LastBackup (days) : This should never go above 1.0 if you do
  daily backups.
o #XFer Errs: This should be 0, otherwise your backup may be
  incomplete. You will not get an eMail warning from
  BackupPC in this case!
  But some errors may be unavoidable, for example the ones
  caused by files being deleted during the backup on a live
  server. Inspect the errors periodically and evaluate their
  severity.
o State: My hosts all show "idle" or "auto disabled" but
  I suppose that yours don't. And I know exactly why some
  show "auto disabled".

With kind regards

Stefan Peter





Re: [BackupPC-users] Copyright protection

2016-05-18 Thread Stefan Peter
Dear David Cramblett,
Am 18.05.2016 um 18:45 schrieb David Cramblett:
> I think most GPL projects still use a CLA to help protect the project in
> the case of future litigation. The Linux Kernel for example is GPL v2,
> but still requires CLA language appended to each patch submission.

This is simply wrong: The Linux Kernel does _not_ require a CLA. But it
requires a Certificate of Origin, also known as Signed-off-by. See
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches?id=HEAD
for details.

Additionally, there are countries where you can not legally transfer
your copyright to someone else unless you have been hired to do the work
in question. This would mean that Backuppc would either have to hire the
contributors or reject their contributions in order to make sure the CLA
is effective.

With kind regards

Stefan Peter



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
On 19.05.2016 19:01, David Cramblett wrote:
> Yeah, or "CofO", or "Sign-Off", or whatever term folks feel the most
> comfortable with. It doesn't have to be "CLA", it could be whatever
> makes sense. Just something protecting the project from the unlikely "evil
> company scenario".

Sorry, but I seem to have missed something. What "evil company" scenario
exactly are you talking about?

In large open source projects like the Linux kernel, multiple contributors
provide patches/pull requests that eventually get merged. Every
contributor retains the copyright to the code she or he has contributed.
Using git, a list of all contributors is relatively easy to compile when
Signed-off-by tags are used.

If a license change were mandated, all contributors would have
to agree to it. If they don't, the project could still change
the license, but the code from the contributors rejecting the change
would have to be removed and/or replaced with code from other
contributors who support the change.


> 
> David
> 
> On Thu, May 19, 2016 at 9:52 AM, Kris Lou <k...@themusiclink.net> wrote:
> 
> How about making the above explanation in a "How to contribute to
> BackupPC development", and requiring a short note in the pull
> request?  Something as simple as "CLA:agreed"?

I'm not a lawyer but I seriously doubt that this is considered as
legally binding.

And I definitely would not want to participate in a project that
requires me to sign a CLA. I would even consider refraining from _using_
BackupPC if this were implemented, simply because in that case there would
be a single entity who could change the rules from one day to the next.
And I would not want to risk the ability to restore my own backups from
last year.

And no, I would never expect Craig to do something evil like that. But
this is because I trust him personally. I would not have the same trust
in whatever organization that would have to be created or mandated to
serve as the final copyright holder.


Just my 2 cents. And, as mentioned above, I'm not a lawyer.

Regards

Stefan Peter
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
(See https://en.wikipedia.org/wiki/Posting_style for details)



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
On 19.05.2016 18:28, David Cramblett wrote:
> If you read this (from Linux kernel link provided):
> 
> |(d) I understand and agree that this project and the contribution are
> public and that a record of the contribution (including all personal
> information I submit with it, including my sign-off) is maintained
> indefinitely and may be redistributed consistent with this project or
> the open source license(s) involved.|
> 
> 
> It's kind of splitting hairs to say they don't require CLA, "it's a
> Certificate of Origin". It's essentially accomplishing the same goal.

No, definitively not. With a CLA, I give up my copyright. I have no
rights on the code submitted anymore and the entity that I had to
transfer my copyrights to can do with my contribution as it pleases.
They can even go closed source.

The (d) point you cited above just informs me that anyone can look up my
contribution as it will be publicly available indefinitely.


> The developer is agreeing that the patch belongs to the project and may
> be redistributed indefinitely.

Where does it say that?

> I'm not particularly concerned one way or the other. If any evil person
> or company tries to convert a useful OSS project into a pay-for software
> (or do other disruptive things), the community is going to fork the
> project and move on, e.g. LibreOffice, MariaDB, etc.

There is no easy way to prohibit this. Look at The Gimp, for example:
there are dozens of outlets on the internet that _sell_ precompiled
versions, sometimes "enhanced" with adware and sometimes even under
the "Photoshop" label.

As a side note, even the current infrastructure supplier of BackupPC,
sourceforge.net, is known to do such things. Just google "sourceforge
grabs gimp".

Now, the GPL does not prohibit selling software. It demands that you
make the source code available and that you cannot mix GPL with
non-GPL software (oversimplified, I know). But you are actually allowed
to charge for compilation, media, documentation, support and so on.

How can an open source project defend against the misuse of their project?

Mostly by making sure the source code stays open source. Try to attract
as many contributors as possible, so that the number of copyright holders
makes it improbable that the source can be taken over.

In the case of BackupPC, the risk of misuse is small: The target
audience are system administrators who will not download BackupPC from
some shady operation for a handful of megabucks if they can install it
from their OS repository for free. So another target would be to closely
cooperate with OS package maintainers in order to make the inclusion of
BackupPC as hassle free for them as possible.

Most OS packagers carry patches for the packages they maintain. Some of
the patches are distribution specific, some are bug fixes, some are even
functional extensions. And all package maintainers would prefer not to
have to carry these patches, because they may break and the maintainer
then has to redo them for every new release of the upstream
project. So, harvesting these patches and incorporating them into the main
BackupPC source makes sure we stay in the distribution.

Another thing most OS packagers do is accept bug reports. We should
harvest these bug reports and fix the issues in our source code. The
more bug reports a maintainer can close with a "fixed upstream" notice,
the better the reputation of a project is and the larger the chances of
a project to stay in the distribution (or even get a special exposure in
the distribution) are.

With kind regards

Stefan Peter

--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
(See https://en.wikipedia.org/wiki/Posting_style for details)



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
Dear Les Mikesell

On 19.05.2016 23:12, Les Mikesell wrote:
> On Thu, May 19, 2016 at 3:45 PM, Stefan Peter  wrote:
>>
>>
>> How can an open source project defend against the misuse of their project?
> 
> First you would have to define misuse - of something the author wanted
> to give away.   Personally, I think perl got it right with the
> dual-license that keeps it from being either locked into GPL-only code
> or locked out.

But Perl is a language, not a program. In order to be able to penetrate
commercial entities, Perl had to have a non-GPL license, because the GPL
does not allow modifications without open-sourcing them. This is frowned
upon by companies because of scenarios like this:
Salesman: Hey, this is our xyz product, it is so much better than the
competition  and you can get a license for only !!!
Customer: But it is Perl, so I can download it for free from the internet!
Salesman: No, you can't, because we have added our secret sauce to Perl
and because it is licensed under the Artistic License, we do not have
to reveal the secret! So you can only get it from us!!
Customer: Ok, where do I have to sign?

How does this relate to BackupPC?

Not at all, I think. Because we do not want commercial business to
resell "enriched" BackupPC applications. So we demand that whoever
enhances BackupPC does so by adhering to the GPL by opensourcing their
additions/modifications. Re-licensing or dual-licensing under one of the
permissive licenses would open the door to the scenario above. Whatever
Perl (or Apache or OpenOffice or ..) has done in regards of licensing
does not automatically server or apply to BackupPC.


With kind regards

Stefan Peter

-- 
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?
(See https://en.wikipedia.org/wiki/Posting_style for details)



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
Dear Les Mikesell,

On 20.05.2016 00:19, Les Mikesell wrote:
>>
>> How does this relate to BackupPC?
>>
>> Not at all, I think. Because we do not want commercial business to
>> resell "enriched" BackupPC applications.
> 
> Why not?  If someone had a commercial device that needed a bit of
> proprietary code added to backuppc to access correctly, why would
> anyone be against allowing it to be linked into backuppc?

What would stop them from providing these bits to the main BackupPC
source? This is exactly how companies/entities like Intel, IBM, AMD, the
Raspberry Pi Foundation and so on contribute to the Linux kernel: they
make sure Linux runs on their hardware by providing the needed bits and
bytes in order to bring their hardware into the mainline kernels. When
did you last buy a Linux machine where you had to get the kernel as a
binary from the manufacturer?


  That would
> be the equivalent, say, of gluing a proprietary database client into a
> perl module which is a better example of the need for dual-licensing.

So only those who pay for this solution get to use it? Translating to
BackupPC: imagine that Microsoft decides to provide a BackupPC client
that relies on some obscure transfer protocol MS is unwilling to open
source. If you pay $$$ per server/user/year, you get seamless BackupPC
integration for MS systems. Then OracleOS releases a similar solution,
but the two are not compatible, because neither has to release its
sources and neither is willing or able to integrate the other's
protocol. Then Ubuntu, RedHat (... you name it) all follow this path.

Where does this leave you as a burdened administrator who just wants to
make sure the company survives the next crypto locker surge? And would
you recommend BackupPC to a fellow sysadmin, knowing full well that you
need a separate instance for every client OS you need to back up?


> Or even if someone created a much better user interface and packaged
> it as a product - why would that be a problem as long as the original
> still existed?
> 

Why shouldn't they release it to the public so all BackupPC users
benefit? Who knows, the BackupPC developers may be kind (and fair)
enough to openly praise them, or even put that contributor's logo in
the open-sourced user interface.

And, by the way, providing a new BackupPC user interface is already
possible. You just have to write something from scratch that drives the
BackupPC server component, and you can even make it closed source,
because the front end only has to use the back end API in order to
provide the required functionality. No license violation is involved
(at least not until the US courts finally decide whether an API is
copyrightable (and if so, one would hope this would apply to the US
only)).


With kind regards

Stefan Peter



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
Dear David Cramblett

On 19.05.2016 23:59, David Cramblett wrote:
> 
> On Thu, May 19, 2016 at 1:00 PM, Stefan Peter wrote:
> 
> On 19.05.2016 19:01, David Cramblett wrote:
> > Yeah, or "CofO", or "Sign-Off", or whatever term folks feel the most
> > comfortable with. It doesn't have to be "CLA", it could be whatever
> > makes sense. Just something protecting the project from the unlikely
> > "evil company scenario".
> 
> Sorry, but I seem to have missed something. What "evil company" scenario
> exactly do you talk about?
> 
> 
> We're talking about a scenario where an individual submits code to the
> project while working for "abc company", who later says they own the
> copyright of the code and not the individual.

So what? In order to "capture" the project and make it their own, they
would have to get the consent of all other copyright holders
(contributors). Unless you actually introduce this CLA and the "enemy"
company manages to get hold of it, you have nothing to fear.

The worst that can happen is that the new copyright holder wants to
withdraw the contribution. Then you remove the individual's code and
replace it with independent, fresh code from another contributor. But I
have never heard of such an event, and I'm not sure it is actually
possible to retract a license. And any company that actually tried to
pull off a stunt like this would probably be taken apart on the
internet so fast they would not survive it.


With kind regards

Stefan Peter



Re: [BackupPC-users] Copyright protection

2016-05-19 Thread Stefan Peter
Dear David Cramblett

On 20.05.2016 01:28, David Cramblett wrote:
> Stefan - Based on all this discussion, is your recommendation the GPL
> lic alone is enough, and we don't need anything additional?

The license is given, more or less. If Craig and the other contributors
do not agree to a license change, there is nothing anyone else can do.
But I do not see any reason to change the license, so I'd suggest we
forget the whole license discussion for now.

A contributor agreement, roughly cut along the lines of the Linux
kernel's, would be the next item on my wish list.


In order to become a mature, well-functioning open source project,
there are a lot of other requirements, as you may well know. But I'm
too tired right now to go into details; I might miss something crucial.


> 
> 
> All - Does anyone know what the situation is with this Zamanda site
> http://backuppc.com/ ?

I have no idea; why don't you contact them and ask? But be polite: they
may actually be the driving force behind the resurrection of BackupPC
development, just using their private e-mail instead of a @zamanda one.

Another issue may be backupcentral.com. They feed our mailing list into
their forum (probably displaying ads and monetizing that, but I could
not be bothered to have a closer look), and the quality of input this
mailing list gets from them normally "leaves something to be desired".
I do not remember any constructive input from their side, but I may be
wrong, as usual. This situation raises the question for me: what shall
we do with pilot fish?

With kind regards

Stefan Peter



Re: [BackupPC-users] Copyright protection

2016-05-20 Thread Stefan Peter
All,
On 20.05.2016 20:22, David Cramblett wrote:
> I think it's more that this general situation is common. My employer
> uses BackupPC as well, and much of my work would be on company time -
> although I work for a government agency. So not necessarily singling
> you out, just the situation you're in. I think it's a common
> situation.

Sorry, but I think there is a misunderstanding here. What I said is
that, under some jurisdictions, a contributor cannot transfer his or
her copyright unless he or she is paid to do the work.

Have a look at the source of the Linux kernel: vast parts of the code
are copyrighted by Intel, AMD, ARM, Broadcom and so on.

But all of the code is licensed under the GPLv2, which means that those
companies have agreed to distribute their code under the rules the
GPLv2 defines. It also means that they want to retain the rights to
their code: they want to be able to sue anyone who takes their code and
redistributes it under an incompatible license (or even closed source).
This is the copyright holder's right (and plight).

A rather interesting read is
http://opensource.stackexchange.com/questions/2046/how-can-the-linux-kernels-main-c-file-say-that-one-of-its-copyrights-is-all-ri
for example.

There have been mentions of the infamous SCO trials. But there the
situation is different: SCO claims that vast amounts of their code have
been lifted and transplanted into the Linux kernel. Such claims can be
made by anyone, against any project. This is just lawyers having a
blast and investors gambling.

Another issue that may creep up in this regard is patents. I'm quite
sure that somewhere, some "ingenious" guy has patented "a method to
back up a computer" or "a method to reduce backup footprints by pooling
the files/blocks". There is no real defence against this, other than
not having an entity responsible for the project that could be sued.
And if it happens regardless, move the project to a country where
software patents do not exist.

I feel that this whole discussion is pointless and a real time sink.
All the effort going into it would be better spent working on BackupPC
and the infrastructure to support it. All I wanted to say is that CLAs
may not be legally possible for all contributors, and that they would
require some institution, which might have to be tightly controlled in
order to mitigate the risk of losing control over the project.

And I'm still not a lawyer.

Therefore, I will refrain from participating in this discussion from now on.

With kind regards

Stefan Peter





Re: [BackupPC-users] Backuppc appears to be ignoring home directories on full and incrementals

2016-06-14 Thread Stefan Peter
Dear G Jones

On 14.06.2016 04:53, G Jones wrote:
> The logs don't really have anything significant in them
> 
> /var/lib/Backuppc/pc//LOG.062016
> 
> 2016-06-13 17:59:24 full backup started for directory /
> 2016-06-13 18:00:03 full backup 0 complete, 79580 files, 1979593737
> bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other)

This is the log from a local backup.

Please note that the rsync backup method cannot access other hosts by
itself. For this, you need to use rsyncd (and set up an rsyncd server
on the other host), or tunnel rsync over ssh by defining
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
and making sure the user backuppc on the backup server can ssh to
root@client without a password. Please see the docs (link is in the
footer) for further instructions.

Your config has
$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList+';
which will only back up locally.
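
As a quick check of the passwordless-ssh requirement (the client name
below is just a placeholder; substitute your own), the backuppc user on
the server should be able to run a remote command without being asked
for a password:

  # on the BackupPC server
  su - backuppc -s /bin/sh
  ssh -q -x -l root yourclient whoami
  # should print "root" with no password prompt

If this asks for a password or hangs, the key setup (or the host key
acceptance) still needs fixing before BackupPC can use ssh.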

I have no idea why you see 3 backups.

With kind regards

Stefan Peter




Re: [BackupPC-users] question about backuppc topdir

2016-09-13 Thread Stefan Peter
Dear Juan Manuel,

On 13.09.2016 16:57, Juan Manuel wrote:
> 
> Hello, we have a BackupPC server and it works perfectly, but it is
> near to disk full.
> 
> So, is it possible to add another resource/filesystem for BackupPC to
> use, another TopDir?
> 
> We don't have more space on the same disk, but we can add other disks.
> 
> The resource (TopDir) /backuppc is actually in an LVM logical volume.
> We are trying to find a solution that does not imply expanding the
> logical volume with another physical volume in LVM.


Why don't you want to expand the LVM volume? The ability to expand,
shrink, migrate and take snapshots are the only valid reasons to use
LVM, IMHO. And all these operations are more or less painless (I just
migrated a 12 TByte BackupPC volume from an old RAID to a new one,
which is possible without even unmounting the volume!).

You can find more information about LVM at
http://tldp.org/HOWTO/LVM-HOWTO/
by the way.
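
For illustration only (the device and volume names below are made up;
substitute your own, and make sure you have a copy of anything
irreplaceable first), adding a new disk to an existing volume group and
growing the BackupPC volume usually boils down to:

  # the new disk appears as /dev/sdb
  pvcreate /dev/sdb
  vgextend backupvg /dev/sdb
  # grow the LV and its filesystem in one step (-r resizes the fs)
  lvextend -r -l +100%FREE /dev/backupvg/backuppc

With a filesystem that supports online growing (ext4, xfs), this works
while the volume stays mounted.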

With kind regards

Stefan Peter



[BackupPC-users] Can't write len=1048576 to /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp

2016-09-22 Thread Peter Thurner
Hi Guys,

I'm running a BackupPC server installation on Debian 8, using the rsync
+ filling method on all clients. I'm backing up several Debian 8
clients, one of which causes problems. When I start the backup, the
hook script works fine and the rsync seems to run through - I strace it
on the client and see that it does stuff, and the new directory fills
up. After the rsync, however, "nothing" more happens and the client
hangs. If I wait for two days, the backup aborts. When I abort it
myself, I get the following errors in the bad log (I set the log level
to 2 and added --verbose as an rsync option):

  skip 600   0/0  934080 var/log/installer/syslog
  skip 600   0/0 1162350 var/log/installer/partman
Can't write len=1048576 to
/var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
Can't write len=1048576 to
/var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
[...]
lots of those can't-write messages
[...]
Parent read EOF from child: fatal error!
Done: 0 files, 0 bytes
Got fatal error during xfer (Child exited prematurely)
Backup aborted by user signal


I tried writing to the RStmp file during a backup - if I touch it or
echo foo > RStmp, I can write to it. If I dd if=/dev/zero of=RStmp
bs=1M count=1000, the file disappears right away, as in:

dd if=/dev/zero of=RStmp ... ; ls RStmp
no such file or directory

Any Ideas to what might cause this?


With kind regards,


Peter Thurner

--

Blunix GmbH - Professional Linux Service
Linux: Consulting, Management, Training

Glogauer Straße 21, 10999 Berlin

Web: www.blunix.org
Phone: +49 30 / 301 338 38
Fax: +49 30 / 233 288 28





Re: [BackupPC-users] Can't write len=1048576 to /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp

2016-10-01 Thread Peter Thurner
Hi Adam,

thanks for your reply.


> I'm assuming you are using BPC v3 but you didn't let us know

My Version is 3.3.0-2+deb8u1 on Debian 8.


> I would guess that there is a large (possibly sparse) file that is in
> the process of being backed up, and it takes a long time.

I searched the server to be backed up and didn't find any files above
2GB. I finally found it - it was a logfile that was never rotated for
some reason:

root@gb-srv08.igzev.intern /var/log # ls -lha lastlog
-rw-rw-r-- 1 root utmp 198G Okt  1 17:03 lastlog

root@gb-srv08.igzev.intern /var/log # du -sh .
687M.

Using strace on the client's rsync showed me that information. Thank
you for the hint!
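
For anyone else hunting a file like this: the giveaway in the listing
above is that the apparent size (ls -l: 198G) is huge while the disk
usage (du: 687M) is tiny, i.e. the file is sparse. A search along these
lines (illustrative only; adjust the path) turns up such candidates:

  # files whose apparent size exceeds 2GB
  find /var/log -xdev -type f -size +2G -exec ls -lhs {} +
  # a first column (allocated blocks) much smaller than the size
  # column indicates a sparse file

lastlog in particular is indexed by UID, so a single login by a
high-UID user makes it appear enormous while using almost no disk.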



On 09/23/2016 02:31 AM, Adam Goryachev wrote:
> On 22/09/16 19:54, Peter Thurner wrote:
>> Hi Guys,
>>
>> I'm running a backuppc Server Installation on Debian 8. I'm using rsync
>> + filling method on all clients. I'm backing up several Debian 8
>> Clients, one of which makes problems. When I start the the backup, the
>> hook Script works fine, the rsync seems to run through - I strace it on
>> the client and see that it does stuff, also the new directory fills up.
>> After the rsync however "nothing" more happens and the client hangs. If
>> I wait for two days, the backup aborts. When I abort myself I get the
>> following errors in the bad log (I set log level to 2 and added
>> --verbose as rsync option)
>>
>>skip 600   0/0  934080 var/log/installer/syslog
>>skip 600   0/0 1162350 var/log/installer/partman
>> Can't write len=1048576 to
>> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
>> Can't write len=1048576 to
>> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
>> [...]
>> lots of those cant write
>> [...]
>> Parent read EOF from child: fatal error!
>> Done: 0 files, 0 bytes
>> Got fatal error during xfer (Child exited prematurely)
>> Backup aborted by user signal
>>
>>
>> I tried writing to the RStmp file during a backup - if I touch or echo
>> fo > RStmp it, I can write to it. If I dd if=/dev/zero of=RStmp bs=1M
>> count=1000, the file disappears right away, as in:
>>
>> dd if=/dev/zero of=RStmp ... ; ls RStmp
>> no such file or directory
>>
>> Any Ideas to what might cause this?
> Ummm, backuppc is in the process of backing up data, and you want to 
> start stepping on its toes by writing to its temp file? That doesn't 
> make any sense to me, but I guess you have your reasons.
> I think you 
> might see more information by examining the client rsync process with 
> either strace, or when it is "stalled" (ie, backing up the large file), 
> look at ls -l /proc//fds which will show which file it has 
> open. Then you can check what is wrong with the file (unexpected large 
> file, or sparse file, or whatever). Once identified, you can either 
> arrange for the file to be deleted, or excluded from the backup, or be 
> more patient and/or extend your timeout to allow this large file to 
> complete.
> 
> If you are unable to solve the issue, please provide some additional 
> details. Especially a look at strace of rsync on the client while it is 
> "stalled" will help identify what it is doing.
> 
> Regards,
> Adam


Best,
Peter



[BackupPC-users] Archive backup with encryption

2016-12-09 Thread Peter Viskup
Dear all,
I would like to ask whether it is possible to use BackupPC to store
encrypted archives of sensitive directories from the clients *only*.

We do have server with ssh connection to other clients.
We need to create GPG encrypted archives from some directories from all clients.
We do not want to store any unencrypted files from clients on this
backup server.

Is it possible to achieve that with BackupPC?

-- 
Peter



Re: [BackupPC-users] BackupPC fails with aborted by signal=PIPE, Can't write to socket

2017-01-20 Thread Stefan Peter
On 20.01.2017 04:57, John Spitzer wrote:
> On 01/19/2017 1:28 PM, Les Mikesell wrote:
>> On Thu, Jan 19, 2017 at 2:25 PM, John Spitzer 
>> wrote:
>>
>> The most likely suspect is that rsync timeout shown in the log snippet
>> you posted.  But you didn't provide any details about why or how your
>> rsync had timeouts enabled.
>>
> That rsync timeout is being set 'under the hood'.

No, I don't think so. If you use the rsync method, make sure your
RsyncClientCmd does not use the --timeout parameter, or at least sets a
reasonably high value.

from man rsync:
 --timeout=TIMEOUT
 This  option allows you to set a maximum I/O timeout in seconds.
 If no data is transferred for the specified time then rsync will
 exit. The default is 0, which means no timeout.

An additional issue I had with this method at some point was that the
ssh connection was terminated, most probably by one of the two
firewalls involved. When doing large file transfers, rsync tends to
take quite some time computing checksums and the like in order to
reduce the amount of data to be transferred. This results in long
periods when nothing is transferred over the ssh connection, which in
turn may lead a firewall to believe that the connection is dead.
Setting ClientAliveCountMax and ClientAliveInterval in sshd_config on
the client fixed this problem for me. Again, man sshd_config can help
you with the details.
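
As an illustration (the values are a reasonable starting point, not a
recommendation from the BackupPC docs), the relevant lines in the
client's /etc/ssh/sshd_config could look like:

  # send a keepalive probe every 60s; give up after 3 unanswered probes
  ClientAliveInterval 60
  ClientAliveCountMax 3

With these settings, the server side of the ssh connection generates
traffic even while rsync is silently checksumming, which keeps stateful
firewalls from expiring the connection.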



If you use the rsyncd method, you will have to set the timeout value in
the rsyncd.conf file on the client. From man rsyncd.conf:
 timeout
 This parameter allows you to override the clients choice for I/O
 timeout for this module. Using this  parameter  you  can  ensure
 that  rsync  won’t wait on a dead client forever. The timeout is
 specified in seconds. A value of zero means no  timeout  and  is
 the  default.  A  good choice for anonymous rsync daemons may be
 600 (giving a 10 minute timeout).

On one notoriously busy and slow server I had to set this to 3600
seconds in order to be able to back up DVD images.
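
A minimal sketch of the corresponding module in the client's
rsyncd.conf (the module name is a placeholder):

  [backup]
      path = /etc
      read only = yes
      # allow up to one hour of I/O silence before giving up
      timeout = 3600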

With kind regards

Stefan Peter



Re: [BackupPC-users] BackupPC fails with aborted by signal=PIPE, Can't write to socket

2017-01-20 Thread Stefan Peter
On 20.01.2017 09:13, Stefan Peter wrote:
> If you use the rsync method, make sure your
> RsyncClientCmd does not use the --timeout parameter or at least sets a
> reasonably high paramter value.

As an afterthought, I suppose the BackupPC parameter
$Conf{ClientTimeout} from the client's backup settings may be used to
set the rsync timeout. But I did not verify this, and the BackupPC help
does not explicitly mention rsync.

With kind regards

Stefan Peter



Re: [BackupPC-users] backuppc 3.3.1-2ubuntu3 breaks pool graphs on the Status page.

2017-02-23 Thread Stefan Peter
Dear Joe Leavy
On 23.02.2017 14:34, Joe Leavy wrote:
> Thanks, but
> 
> james@store-01:~$ sudo chmod 644 /var/log/backuppc/pool.rrd
> [sudo] password for james:
> chmod: cannot access '/var/log/backuppc/pool.rrd': No such file or directory

Well, on my systems the path is
/var/lib/backuppc/log/pool.rrd
and not
/var/log/backuppc/pool.rrd

With kind regards

Stefan Peter





Re: [BackupPC-users] backuppc 3.3.1-2ubuntu3 breaks pool graphs on the Status page.

2017-02-23 Thread Stefan Peter
Dear Bob of Donelson Trophy

On 23.02.2017 16:42, Bob of Donelson Trophy wrote:
>  
> 
> Stefan,
> 
> Is you OS Ubuntu/Debian?

Yes, I use Debian for all my servers and both of my BackupPC are Debian
Jessie.

With kind regards

Stefan Peter



Re: [BackupPC-users] back to ssh class, I guess

2017-03-28 Thread Stefan Peter
Dear Bob of Donelson Trophy,

On 28.03.2017 19:24, Bob of Donelson Trophy wrote:
> I now have a functional VM running Ubuntu 16.04LTS and BackupPC 4.0
> (from the master) source.
> 
> I have "su - backuppc -s /bin/sh" and acquired the "$" prompt.
> 
> Generated the keys with "ssh-keygen".
> 
> Now when I:
> 
> $ ssh-copy-id root@<client ipaddress>   <<< sanitized for security
> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:
> "/var/lib/backuppc/.ssh/id_rsa.pub"
> /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to
> filter out any that are already installed
> /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you
> are prompted now it is to install the new keys
> Permission denied (publickey,password).

Your client most probably is not configured to accept a password for
the user root. This is fine for a configured system where you have
already installed the public key, but it will not allow you to install
one using a password.

You will have to
o change the "PermitRootLogin" entry in /etc/ssh/sshd_config to yes
o restart sshd on the client
o re-issue your ssh-copy-id
o change the "PermitRootLogin" entry back to
  "prohibit-password" or "without-password"
o restart sshd again
o test your connection.

Or you may have to find another way to transfer id_rsa.pub.

You may want to read the "PermitRootLogin" stanza in
man sshd_config
for further information. There seems to be a way to even limit logins
with a public key to specific commands. This explicitly mentions backup
purposes.

All information above is from a fairly recent Debian system, so your
mileage may vary depending on the OS and release you are using.
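
The steps above, as a rough transcript on a Debian-style client (paths,
patterns and the service name may differ on your system, so treat this
as a sketch):

  # on the client, as root
  sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
  systemctl restart ssh

  # on the BackupPC server, as user backuppc
  ssh-copy-id root@client

  # back on the client, lock password logins for root out again
  sed -i 's/^PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
  systemctl restart ssh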

With kind regards

Stefan Peter



Re: [BackupPC-users] Backing up the server computer

2017-07-05 Thread Stefan Peter
On 05.07.2017 17:26, Bob Katz wrote:
> Dear Les and group:
> 
> 
> So it seems the daemon is not running on the port? This ps command
> seems to say the daemon is running:
> 
> 
> [bobkatz@localhost Documents]$ ps x | grep rsync
> 17690 pts/1S+ 0:00 grep --color=auto rsync


The output of this command should look like this:
 3411 ?        S    0:00 /usr/bin/rsync --no-detach --daemon --config
/etc/rsyncd.conf
13490 pts/0    S+   0:00 grep rsync

The line with "grep rsync" comes from the ps x command you entered.

So, no, your rsync server is not running.


Try to start it with
service rsync start
(or whatever your server OS requires to start rsyncd)
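
Another quick check (illustrative; ss is part of iproute2 on modern
Linux systems) is to look for a listener on the rsync port:

  ss -tln | grep 873
  # a running rsync daemon shows a LISTEN entry on port 873;
  # no output means nothing is listening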

With kind regards

Stefan Peter



Re: [BackupPC-users] Backing up the server computer

2017-07-14 Thread Stefan Peter
Hi Bob

On 14.07.2017 21:32, Bob Katz wrote:

... snip ...
> 
> 
> 2017/07/14 15:06:17 [3292] connect from localhost (::1)
> 2017/07/14 15:06:17 [3292] rsync denied on module Backup-Data-Folder
> from localhost (::1)
> 

... snip ...

> 
> ###
> ##
> ##  RSYNCD config file for the backuppc server
> ##
> ###
> 
> 
> transfer logging = false
>  lock file = /var/run/rsync.lock
>  log file = /var/log/rsyncd.log
>  pid file = /var/run/rsyncd.pid
>  port=873
>  address=localhost.localdomain
>  uid=root
>  gid=root
>  use chroot=yes
>  read only = no
> ## host allow: this is important.
> ## In my case leaving the subnet-mask leads to a failure,
> ## so I only provide the IP.
>hosts allow = 192.168.0.217, localhost.localdomain

I'd use
hosts allow = 192.168.0.217, 127.0.0.1

because something has to translate localhost.localdomain to an IP
address, and if that fails due to whatever name-resolution glitch of
the day, your backup will fail too.


>  motd file=/etc/rsyncd/rsyncd.motd
>  
> ## Now you have to declare, in brackets, the RSYNC 'module', or "share
> name" as it is called within backuppc
>[Backup-Data-Folder]
>## Next, set the path you want backed up. Be sure to use a trailing
> slash
>path= /

Please don't do that:
o a Linux system has virtual file systems mounted (/proc, /tmp,
  /sys, ...) that will either not be readable, change during
  access, or may lead to endless loops.

o this includes your /var/lib/backuppc directory. That's where
  your backups go, so you would be backing up your backup. It won't
  take long until the backup of backups of backups (...) fills up
  your drive. I'd go for /etc at this point and add additional
  paths later, once this works.
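A more conservative module to start with might look like this (a sketch; the module name is made up, and the 'auth users'/secrets wiring is as discussed elsewhere in this thread):

```ini
[etc-only]
path = /etc
read only = yes
auth users = backuppc
```

Once backups of /etc work end to end, further modules for real data directories can be added one at a time.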

>read only   = no
>list= yes
>auth users = root

Most probably, BackupPC will try to connect as user backuppc, not root.
At least, that's the default.

See 'man rsyncd.conf' (and search for 'auth users'). Please note that
'auth users' is just a user name (which needs a corresponding password
in the secrets file). It does not translate to an actual user on the
system; the user name (and the corresponding password from the secrets
file) are only used to govern access to the rsyncd share. The access
rights to the files are defined by the user rsyncd runs under, so in
your case

>  uid=root
>  gid=root



And have a look at the 'secrets file' section right below.


BTW, it is cool that you are still at it and, despite all the trouble
you went through, did not abandon the whole project!

With kind regards

Stefan Peter



Re: [BackupPC-users] Backing up the server computer

2017-07-15 Thread Stefan Peter
Dear Bob Katz

On 15.07.2017 00:22, Bob Katz wrote:
> 
> I have root as the user for backuppc for all my other hosts and it
> works. And it's also currently set up as root for backing up the server.
> I did try "backuppc" as the user before and it failed, maybe for
> different reasons. Anyway, I'm not sure how confused the app backuppc
> would be in the case of trying to back up itself. The server app
> backuppc is running as the user "backuppc", but I do know that it can
> call through port 873 as the user "root" when it is backing up all the
> other clients. Yeah, I'm confused  :-)

Yes, I can imagine. Maybe this will help:

The rsyncd user has nothing to do with the operating system. Its name
and password are only used by the rsync client on the BackupPC server
(the machine that does the backup) and the rsyncd daemon on the BackupPC
client (the machine you want to back up).
So, if your /etc/backuppc/myClient.pl contains
$Conf{RsyncdUserName} = 'JoeDoe';
$Conf{RsyncdPasswd} = 'Password';

you need to have the line
auth users = JoeDoe
in your /etc/rsyncd.conf on the client and
JoeDoe:Password
in the secrets file referenced from /etc/rsyncd.conf (e.g. /etc/rsyncd.secrets).

The rsyncd on the client will operate with the privileges of the user
you have specified in /etc/rsyncd.conf:
uid = root
gid = root
In this case, I use root so the rsyncd daemon has all read and write
rights on all directories and files.

Now, in order to get more information about what goes wrong in your
setup, I recommend enabling logging. Just add
log file=/var/log/rsyncd.log
max verbosity = 3
to your /etc/rsyncd.conf file and then stop and restart the rsyncd daemon:
systemctl stop rsync
systemctl start rsync

and verify that rsyncd is loaded:
systemctl status rsync

which should result in something like

● rsync.service - fast remote file copy program daemon
   Loaded: loaded (/lib/systemd/system/rsync.service; enabled)
   Active: active (running) since ...


Then, try again to start a backup. Afterwards, have a look at
/var/log/rsyncd.log, it should contain error messages that help to find
the problem in your setup.

With kind regards

Stefan Peter




Re: [BackupPC-users] Backing up the server computer

2017-07-15 Thread Stefan Peter
Dear Kenneth Porter

On 15.07.2017 10:23, Kenneth Porter wrote:
> 
> Does --one-file-system work with rsyncd (daemonized rsync)?

No; at least, there is no such parameter documented in rsyncd.conf. You
may be able to pass a --one-file-system parameter on daemon startup,
though (untested).

With kind regards

Stefan Peter




Re: [BackupPC-users] BPC4: checksum

2017-10-27 Thread Stefan Peter
Dear Gandalf Corvotempesta
On 27.10.2017 17:11, Gandalf Corvotempesta wrote:
> I'm using ZFS, so checksumming is done by ZFS itself, is not an issue
> for me to skip any data corruption check, as zfs does this automatically

But this won't help BackupPC decide which files have changed and,
therefore, need to be transferred from the client to the server.

With kind regards

Stefan Peter




Re: [BackupPC-users] File System containing backup was too full

2017-12-18 Thread Stefan Peter
Dear
On 18.12.2017 14:42, Adrien Coestesquis wrote:
> Hi BackupPC Users !!
> 
> Since few days I receive notifications which warn me about hosts skipped.
> I recently backuped a server which took a lot of place. I was trying to
> make only one full backup, then i disabled the backup with the
> BackupsDisable setting.
> 

...

> 
> In the notification, it says that my threshold in the configuration file
> is 95% and says that yesterday the system was up to 96% full
> I already tried to modify this value (*
> DfMaxUsagePct)
> * to 98% but the notification still says 95%

This most probably is caused by the formatting of your disk. From man mke2fs:

 -m reserved-blocks-percentage
  Specify the percentage of the filesystem blocks reserved for the
  super-user.   This  avoids  fragmentation, and allows root-owned
  daemons, such as syslogd(8), to continue to  function  correctly
  after non-privileged processes are prevented from writing to the
  filesystem.  The default percentage is 5%.


So if you did not add an -m parameter, the last 5% of your disk can be
used by root only.

You can change this percentage using tune2fs -m
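To get a feel for how much space that 5% represents at this scale, here is a quick back-of-the-envelope calculation in shell (the block count is taken from a tune2fs -l listing like the one posted later in this thread; the 4 kB block size is an assumption):

```shell
# Reserved-blocks arithmetic for a large ext4 filesystem (illustrative numbers).
block_count=3906469376        # total blocks, as reported by tune2fs -l
reserved_pct=5                # mke2fs default (-m 5)
block_size=4096               # bytes per block (assumed)

reserved_blocks=$(( block_count * reserved_pct / 100 ))
reserved_gib=$(( reserved_blocks * block_size / 1024 / 1024 / 1024 ))
echo "$reserved_blocks blocks reserved, ~$reserved_gib GiB usable by root only"
```

On a volume of this size, the default 5% is roughly 745 GiB that df counts against ordinary processes, which is why lowering -m via tune2fs frees a surprising amount of space.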

But I'd definitely recommend enlarging the volume in question or
setting up an additional server.


With kind regards

Stefan Peter




Re: [BackupPC-users] File System containing backup was too full

2017-12-18 Thread Stefan Peter
Dear Adrien Coestesquis
On 18.12.2017 16:15, Adrien Coestesquis wrote:
> Block count:              3906469376

It does not look like you have hit the reserved block limit:
> Reserved block count:     195323468
> Free blocks:              1613863920

The inodes look fine, too:
> Free inodes:              476144318
> Inode count:  488308736

What system is this? Did you set any disk quota? Is there anything
besides BackupPC living on /dev/sda1?

With kind regards

Stefan Peter



Re: [BackupPC-users] File System containing backup was too full

2017-12-19 Thread Stefan Peter
Dear Adrien Coestesquis
On 19.12.2017 13:10, Adrien Coestesquis wrote:
> i don't think so, the BPC arborescence is somewhere else

And BackupPC knows this for real? If I remember correctly, you can
deviate from /var/lib/backuppc only if you install directly from the
sources. If you use an upstream package, you cannot.

Could it be that your /var partition is at 95%?


With kind regards

Stefan Peter




Re: [BackupPC-users] No files dumped for share for localhost

2018-02-06 Thread Stefan Peter
Dear RAKOTONDRAINIBE Harimino Lalatiana

On 06.02.2018 07:26, RAKOTONDRAINIBE Harimino Lalatiana wrote:
> Hi Alexander ,
> 
> I did what you said so I created three module in rsyncd.conf
> 
> [boot]
> path = /boot
> comment = all boot files to be backupc
> 
> [var]
> path = /var
> comment = all files from var
> 
> [all]
> path = /
> comment = it's a test
> 

Are you aware that both the 'var' and 'all' modules include
/var/lib/backuppc, the default location for your backups? This will
definitely lead to issues, unless you have excluded this directory in
the module definitions, of course.
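For reference, newer rsync daemons let you do that exclusion directly in the module definition (a sketch; the 'exclude' module parameter requires a reasonably recent rsync, and the pattern is relative to the module's 'path'):

```ini
[var]
path = /var
comment = all files from var
# keep the BackupPC pool out of its own backups
exclude = /lib/backuppc
```

On older rsync versions the same effect needs to be achieved on the BackupPC side instead, via the host's exclude settings.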

with kind regards

Stefan Peter



Re: [BackupPC-users] Can't switch users to BackupPC in terminal?

2018-03-12 Thread Stefan Peter
On 12.03.2018 15:09, Marc Gilliatt wrote:
> I'm sorry, I'm not too sure on what you mean by working in /srv/backuppc?
> 
> 
> I copied and pasted the RSA key from my backuppc server to my host I
> would like backing up. I ran the following commands, 
> 
> 
> On my backuppc server:
> 
> cat /root/.ssh/id_rsa.pub > id_rsa.pub_copy
> 
> 
> On my host server:
> 
> cat id_rsa.pub_copy >> /root/.ssh/authorized_keys
> 

I don't see any command here that actually copies the key from the
backuppc server to the client.

Why don't you use

ssh-copy-id -i /var/lib/backuppc/.ssh/id_rsa.pub root@backuppc-client

(execute this as root user on the backuppc server!)


With kind regards

Stefan Peter




Re: [BackupPC-users] Backuppc hangs on certain files.

2018-05-31 Thread Stefan Peter
Hi Brent

On 31.05.2018 09:13, Brent Clark wrote:
> 
> I too set rsyncd debugging (max verbosity = 2) and in the log I get:
> 
> 2018/05/31 08:59:52 [4524] name lookup failed for REMOVED: Name or
> service not known
> 2018/05/31 08:59:52 [4524] connect from UNKNOWN (REMOVED)
> 2018/05/31 08:59:52 [4524] rsync to REMOVED/download/cougar/ from
> REMOVED@UNKNOWN (REMOVED)
> 2018/05/31 08:59:52 [4524] receiving file list
> 2018/05/31 08:59:52 [4524] ABORTING due to unsafe pathname from sender:
> /COU-IMMPRO.pdf

Could it be that you are using Debian stretch and are bitten by this bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=863334

Does the download/cougar directory exist on the system you want to
restore to?

With kind regards

Stefan Peter





Re: [BackupPC-users] Backuppc hangs on certain files.

2018-05-31 Thread Stefan Peter
Dear Brent Clark,

On 31.05.2018 13:29, Brent Clark wrote:
> As you can see they are the same.
> 
> 3.1.1-3+deb8u1 vs 3.1.1-3ubuntu1.2

They may share the same upstream version, but they differ in
distro-specific patches.

Anyway, the message

> 2018/05/31 08:59:52 [4524] ABORTING due to unsafe pathname from sender:
> /COU-IMMPRO.pdf

seems to point to the rsyncd server dropping the connection. I'd
investigate in this direction.

With kind regards

Stefan Peter




[BackupPC-users] BackupPC skips directories

2008-07-10 Thread Peter Carlsson
Hello!

I just realised that BackupPC seems to skip directories
during a backup.

In my localhost.pl I have:

$Conf{TarShareName} = [
'/etc',
'/home',
'/root'
];

But '/home/peter/Maildir' (among others) is not
backed up. I noticed that the permissions were different for
Maildir compared to directories that did get backed up.

I therefore added read permission for everyone with:
  chmod -R +r /home/peter/Maildir

debian:~# ls -l /home/peter/
drwxr--r-- 30 peter peter 4096 2008-07-09 22:14 Maildir

But tonight's incremental backup didn't make a difference.

Otherwise BackupPC seems to work just fine.
Is there something else I am missing?
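For what it's worth, two hedged observations on the chmod above: 'chmod -R +r' adds only the read bit, while a directory also needs the execute (search) bit before its contents can be listed; and since chmod changes ctime rather than mtime, an mtime-based incremental may not notice the change anyway. The permission half can be sketched like this (paths are illustrative):

```shell
# Reproduce the drwxr--r-- mode from the ls -l output above and fix it with +X.
tmp=$(mktemp -d)
mkdir -p "$tmp/Maildir/cur"
chmod 744 "$tmp/Maildir"        # drwxr--r--: readable but not searchable
chmod -R a+rX "$tmp/Maildir"    # capital X adds execute only where the target
                                # is a directory (or already executable)
mode=$(stat -c %a "$tmp/Maildir")
echo "$mode"                    # → 755
rm -rf "$tmp"
```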

Best regards, 
Peter Carlsson



Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-14 Thread Peter Nankivell
Bruno,

I have been chasing this problem also, but I have additional
problems.  One is that the backup occasionally hangs without
any error message, the other, more frequently, is that the backup
fails with an additional message "corrupted MAC on input".
The "MAC" is not the address of the NIC, but
"Message Authentication Code", part of the ssh protocol.

It appears that most people believe that this is due to ssh's
reaction to a hardware error.  People have reported faults
with memory, cables, NIC's, wireless and routers.  This is
consistent with the fact that the only machine having the problem
on my home network is the one using a wireless NIC.

The solution seems to be to fix the hardware, which as yet
I have been unable to do, or to reduce the strain that
"rsync" puts on the system.  I have done this by adding the "rsync"
options "--whole-file" and "--bwlimit=500".  This more than doubles
the time for the backup, but it has worked so far.
The first option causes "rsync" to transmit the whole file rather
than only the bits that have changed; the other limits the
bandwidth to 500 kB/s.
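For BackupPC 3.x hosts, one place to put those options is the per-host config file, by appending to the rsync argument list (a sketch; $Conf{RsyncArgs} is the stock 3.x setting name, and the file path is only an example):

```perl
# In e.g. /etc/backuppc/myclient.pl:
push @{$Conf{RsyncArgs}}, '--whole-file', '--bwlimit=500';
```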

If it is all due to ssh and hardware, it makes me worry about
the resilience of ssh.  Surely its protocol should be able to
retry on corrupted data?  I don't know...

Another solution may be to use "rsyncd".  This would avoid
using ssh as the transport.  I haven't tried this yet.

Hope this helps, Peter.

On Tuesday 15 July 2008 04:11:45 Bruno Faria wrote:
> Hey guys,
>
> This is the third message that I'm posting about this problem, but so far I
> haven't gotten any solutions.
>
> I'm getting the error message: aborted by signal=PIPE in some of the hosts
> that I'm backing up. Some hosts get this error while doing incremental
> backups, while other hosts get during full backups.
>
> I searched google about this error message, and it seems like many backuppc
> users have encountered this before, although I could not find any
> solutions.
>
> I'm starting to think that this could be bug in backupPC since many people
> have had this before but I haven't seen any solution.
>
> Please let me know if anyway knows of a solution to backups "aborted by
> signal=PIPE".
>
> Thanks in advance!

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-14 Thread Peter Nankivell
Hello,

Heaps of data goes over this link, apparently without error, and it has
been doing so for at least a year. Isn't it likely that I would have
seen TCP errors cause problems with other software by now?

If the data is being tampered with, it would have to be by ssh or the
kernel, wouldn't it?

Cheers, Peter.

On Tuesday 15 July 2008 10:02:42 Holger Parplies wrote:
> Hi,
>
> Peter Nankivell wrote on 2008-07-15 09:17:48 +1000 [Re: [BackupPC-users] 
aborted by signal=PIPE errors.]:
> > [...]
> > If it is all due to ssh and hardware it makes me worry about
> > the resilience of ssh.  Surely its protocol should be able to
> > retry on corrupted data?  I don't know...
>
> actually, TCP should provide ssh with a reliable data stream. If ssh *sees*
> corrupted data, then either
>
>   1) the data has been tampered with or
>   2) TCPs CRC failed to identify network data corruption.
>
> While (2) is conceivable, it should *not* be happening on a regular basis
> (if it is, something else is wrong). I would guess it to be so rare that
> ssh ignores the possibility, since there is no way to distinguish between
> both cases. In the case of tampering, retransmissions would not make much
> sense ("Hey, I noticed that! Be more subtle next time." :-).
>
> > Another solution maybe to use "rsyncd".  This should avoid
> > using ssh as the transport.  I haven't tried this yet.
>
> Providing you really have a corrupted TCP data stream, that would mean
> corrupted backups ... or an rsync protocol failure.
>
> Regards,
> Holger





Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-14 Thread Peter Nankivell
Adam,

Yes.  It is conceivable that something on the motherboard could be
causing a TCP problem if the corruption occurs after the TCP packets
are unpacked.  I'll check the memory on both the client and server tonight.
Although I suspect the client as the server backs up other machines OK.

Thanks, Peter.

On Tuesday 15 July 2008 13:17:16 Adam Goryachev wrote:
> Holger Parplies wrote:
> > Hi,
> >
> > Peter Nankivell wrote on 2008-07-15 09:17:48 +1000 [Re: [BackupPC-users] 
aborted by signal=PIPE errors.]:
> >> [...]
> >> If it is all due to ssh and hardware it makes me worry about
> >> the resilience of ssh.  Surely its protocol should be able to
> >> retry on corrupted data?  I don't know...
> >
> > actually, TCP should provide ssh with a reliable data stream. If ssh
> > *sees* corrupted data, then either
> >
> >   1) the data has been tampered with or
> >   2) TCPs CRC failed to identify network data corruption.
> >
> > While (2) is conceivable, it should *not* be happening on a regular basis
> > (if it is, something else is wrong). I would guess it to be so rare that
> > ssh ignores the possibility, since there is no way to distinguish between
> > both cases. In the case of tampering, retransmissions would not make much
> > sense ("Hey, I noticed that! Be more subtle next time." :-).
> >
> >> Another solution maybe to use "rsyncd".  This should avoid
> >> using ssh as the transport.  I haven't tried this yet.
> >
> > Providing you really have a corrupted TCP data stream, that would mean
> > corrupted backups ... or an rsync protocol failure.
>
> I recently experienced a very long battle with this exact problem.
>
> A brand new server was being installed using an NFS root which was
> mounted via TCP NFS. I was seeing random crashes usually related to NFS
> disk errors and similar. After using wireshark on the NFS server to
> 'watch' the traffic, I noticed a number of corrupted TCP (NFS) packets
> arriving. I concluded it was a network issue, and proceeded to replace
> the network card on the NFS client, the cable, and the switch. I was
> still seeing the same errors, so I finally gave up and ran a memtest,
> which found faulty memory. I replaced the memory and it has been stable
> ever since.
>
> Just a suggestion, because again, TCP was not resolving the corrupt
> packet issue, the packets were getting through to the NFS layer, causing
> other issues since NFS doesn't seem to handle corrupt packets very well.
>
> Regards,
> Adam



Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-15 Thread Peter Nankivell
Hello,

I think I have fixed my problem.

TCP segmentation offload was turned on for the clients network card.

I'm running kubuntu 8.04 on both client and server.  The server was
loaded with kubuntu onto a blank disk, the client was upgraded from
7.10 to 8.04.

Both are running Intel motherboards with integrated ethernet controllers.
The client's is an 8254EM, the server's an 8254EI.

I've been testing by rsyncing / on the client to a blank directory (folder)
on the server using the command

rsync -av --one-file-system client:/ .

where "client" is the hostname of the client.  This is outside of "backuppc".
The problem was not with "backuppc".

Every time I ran the rsync command, it would eventually fail with a
message from "ssh"

corrupted MAC on input

It had become apparent to me that the TCP packets were being corrupted at
some stage.  People had reported that cables and routers could be the
problem, but if that were so, then the CRC protection of the packet data
was failing - unlikely.

Memory, or something else connected directly to the internal buses with
access to the data after the packets are unpacked, had to be the culprit.

28 passes of all the memory tests overnight produced no errors.
Prompted by the experience of others, I checked the ethernet card
settings on both machines using

# ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: off
udp fragmentation offload: off
generic segmentation offload: off

For the client "tcp segmentation offload" was "on".  Turning it off
using

# ethtool -K eth0 tso off

worked!  The test "rsync" now works wonderfully.
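One caveat worth adding: ethtool settings are lost on reboot, so the workaround needs to be reapplied at boot. A boot-script fragment along these lines would do it (the interface name and file location are assumptions for illustration):

```shell
# e.g. in /etc/rc.local (before any final 'exit 0'),
# or in a distro-specific network post-up hook:
ethtool -K eth0 tso off
```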

Peter.

On Tuesday 15 July 2008 15:01:03 Peter Nankivell wrote:
> Adam,
>
> Yes.  It is conceivable that something on the motherboard could be
> causing a TCP problem if the corruption occurs after the TCP packets
> are unpacked.  I'll check the memory on both the client and server tonight.
> Although I suspect the client as the server backs up other machines OK.
>
> Thanks, Peter.
>
> On Tuesday 15 July 2008 13:17:16 Adam Goryachev wrote:
> > Holger Parplies wrote:
> > > Hi,
> > >
> > > Peter Nankivell wrote on 2008-07-15 09:17:48 +1000 [Re:
> > > [BackupPC-users]
>
> aborted by signal=PIPE errors.]:
> > >> [...]
> > >> If it is all due to ssh and hardware it makes me worry about
> > >> the resilience of ssh.  Surely its protocol should be able to
> > >> retry on corrupted data?  I don't know...
> > >
> > > actually, TCP should provide ssh with a reliable data stream. If ssh
> > > *sees* corrupted data, then either
> > >
> > >   1) the data has been tampered with or
> > >   2) TCPs CRC failed to identify network data corruption.
> > >
> > > While (2) is conceivable, it should *not* be happening on a regular
> > > basis (if it is, something else is wrong). I would guess it to be so
> > > rare that ssh ignores the possibility, since there is no way to
> > > distinguish between both cases. In the case of tampering,
> > > retransmissions would not make much sense ("Hey, I noticed that! Be
> > > more subtle next time." :-).
> > >
> > >> Another solution maybe to use "rsyncd".  This should avoid
> > >> using ssh as the transport.  I haven't tried this yet.
> > >
> > > Providing you really have a corrupted TCP data stream, that would mean
> > > corrupted backups ... or an rsync protocol failure.
> >
> > I recently experienced a very long battle with this exact problem.
> >
> > A brand new server was being installed using an NFS root which was
> > mounted via TCP NFS. I was seeing random crashes usually related to NFS
> > disk errors and similar. After using wireshark on the NFS server to
> > 'watch' the traffic, I noticed a number of corrupted TCP (NFS) packets
> > arriving. I concluded it was a network issue, and proceeded to replace
> > the network card on the NFS client, the cable, and the switch. I was
> > still seeing the same errors, so I finally gave up and ran a memtest,
> > which found faulty memory. I replaced the memory and it has been stable
> > ever since.
> >
> > Just a suggestion, because again, TCP was not resolving the corrupt
> > packet issue, the packets were getting through to the NFS layer, causing
> > other issues since NFS doesn't seem to handle corrupt packets very well.
> >
> > Regards,
> > Adam
>
> -

Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-16 Thread Peter Nankivell
No problems.  Thanks to everyone who contributed.  There were plenty
who did so unwittingly through their posts to other lists.

As a measure of how happy I feel about the solution, my first successful full
dump of this machine took 22 minutes compared to 137 minutes when I
had to use the "--whole-file" and "--bwlimit=500" rsync options.

Cheers Peter.

On Wednesday 16 July 2008 21:41, Carl Wilhelm Soderstrom wrote:
> 
> Thanks for the update Peter! This might be very useful troubleshooting
> information in the future.
> 



Re: [BackupPC-users] aborted by signal=PIPE errors.

2008-07-16 Thread Peter Nankivell
On Thursday 17 July 2008 00:46, Les Mikesell wrote:
> Peter Nankivell wrote:
> > No problems.  Thanks to everyone who contributed.  There were plenty
> > who did so unwittingly through their posts to other lists.
> > 
> > As a measure of how happy I feel about the solution, my first successful 
> > full
> > dump of this machine took 22 minutes compared to 137 minutes when I
> > had to use the "--whole-file" and "--bwlimit=500" rsync options.
> 
> Did you run across anything that indicated that you should expect this 
> problem on all similar NICs with that software version or do you think 
> this is a quirk of your particular machine or settings?
> 

I saw a few instances of people having problems with NICs
that use the e1000 driver; some with non-specific problems,
most referring to "TCP segmentation offload" not working.

Of course it's not really a "card", it's a chipset integrated onto
the motherboard, and there are a few versions of it, I believe.

Why it was switched on on one machine running the same OS I don't
know.  As I mentioned, the NICs were slightly different models
of the same chipset, and the history of the OSes was slightly
different also.

Regards, Peter.



Re: [BackupPC-users] Deleting files from a backup

2008-07-21 Thread Peter Nankivell
On Tuesday 22 July 2008 01:39:57 Bowie Bailey wrote:
> Joe Bordes wrote:
> > Bowie Bailey escribió:
> > > I forgot to exclude a directory from my backup and now I have an
> > > extra 115GB of files backed up that I don't need.  Is it safe to go
> > > into the pc directory for the backup and delete the files manually?
> >
> > Have a look here:
> > http://backuppc.wiki.sourceforge.net/How+to+delete+backups
>
> Thanks, but I don't want to delete the entire backup, just one directory
> from it.

I did just that yesterday. I didn't delete the directory itself, just its
contents.  It doesn't seem to have caused any problems: subsequent
incremental dumps appear to have worked OK, and when I "Browse Backups"
things are as I expect.

I checked all the directory trees /var/lib/backuppc/pc/HOST/NUMBER,
where HOST is the hostname and NUMBER is the backup number.  File and
directory names have an "f" prepended.  Entries whose names don't start
with "f" are, I think, put there by BackupPC itself; I wouldn't touch
those unless they are inside the directories you want to delete.
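To illustrate the naming, here is a rough sketch (not BackupPC's actual code) of the "f" prefixing: every path component inside a backup tree gets an "f" prepended. The real mangling additionally percent-escapes characters such as "/" and "%" (a share named /etc, for instance, shows up as f%2fetc), which this demo skips.

```shell
# Rough sketch of BackupPC's name mangling: prefix every path
# component with "f". (The real code also percent-escapes "/" and
# "%" within a component; this illustration omits that.)
mangle() { printf '%s\n' "$1" | sed 's|/|/f|g'; }

mangle "/data/do not archive"   # -> /fdata/fdo not archive
```

So to remove one subtree from a backup you would delete the mangled path, e.g. .../NUMBER/fdata/"fdo not archive".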

Peter.



[BackupPC-users] Email claims no backups, but it's incorrect

2008-10-17 Thread Peter Doherty
Hello

My BackupPC server is sending out emails claiming there are no
backups for some of the machines we back up.  The logs clearly show
the backups are occurring correctly.
The Email Summary does not even show that BackupPC sent these emails.
One of the emails claimed there had been no backup in over 14,000 days;
14,167.5 days before October 2008 is roughly 1 January 1970, i.e. a
zero Unix timestamp, so something is clearly confused.
I think all this might have started after we had a disk failure in the
RAID and had to restore the array.
Any thoughts?

Excerpt from email:

> Dear backups,
>
> Your PC () has not been successfully backed up  
> for 14167.5 days.
> Your PC has been correctly backed up 1 times from 0.9 to 14167.5
> ago.  PC backups should occur automatically when your PC is connected
> to the network.



Cheers,
Peter




Re: [BackupPC-users] Email claims no backups, but it's incorrect

2008-10-17 Thread Peter Doherty
Thanks for the response,
The status.pl file is present and is being written to: I see a
current file plus an hour-old "status.pl.old".
Do you have any other suggestions on where to look?

I realize I misspoke in my original email: we didn't have to fully
restore the array, it was just a simple rebuild.  One of the disks
crashed and was replaced, and the array was rebuilt.  The box did
crash once due to a problem with the rebuild, but everything else on
the box is running fine now; the email problem did seem to start
after the RAID was rebuilt.

Thanks.

--Peter




On Oct 17, 2008, at 11:10 AM, Stephen Joyce wrote:

> I suspect that you didn't restore BackupPC's status.pl file when you
> restored your RAID. On my hosts it's in /var/log/BackupPC/.
>
> Or I may be off base and you might have deeper problems. :-)
>
> On Fri, 17 Oct 2008, Peter Doherty wrote:
>
>> Hello
>>
>> My BackupPC server is sending out emails that claims there are no
>> backups on some of the machines we backup.  The logs clearly show the
>> backups are occurring correctly.
>> The email summary does not show that BackupPC even sent the emails.
>> One of the emails claimed there had been no backups in 14,000 days,
>> clearly a sign that something is confused.
>> I think all this might have started after we had a disk failure in  
>> the
>> RAID, and had to restore the array.
>> Any thoughts?
>>
>> Excerpt from email:
>>
>>> Dear backups,
>>>
>>> Your PC () has not been successfully backed up
>>> for 14167.5 days.
>>> Your PC has been correctly backed up 1 times from 0.9 to 14167.5
>>> ago.  PC backups should occur automatically when your PC is  
>>> connected
>>> to the network.
>>
>>
>>
>> Cheers,
>> Peter
>>
>>




Re: [BackupPC-users] Email claims no backups, but it's incorrect

2008-10-20 Thread Peter Doherty
Hi,

I'm still getting these emails from BackupPC, but the backups are
happening.  The Email Summary does not show these outgoing emails
being sent.  Why is this?
Where does BackupPC get its data on when the last backup happened?
I had a successful backup on one machine last night at 9pm, but an
email was sent 4 hours later claiming the machine hadn't been backed
up in 100 days.
The time on the box is correct, and the timestamps in
/var/log/BackupPC/status.pl look correct too.
Something is clearly not right.

Thank you for any suggestions on this one.

--Peter

On Oct 17, 2008, at 11:47 AM, Peter Doherty wrote:

> Thanks for the response,
> The status.pl file is present, and is being written to, I see a
> current file, plus an hour old file "status.pl.old"
> Do you have any other suggestions where to look?
>
> I realized I mis-spoke in my original email, we didn't have to fully
> restore the array, it was just a simple rebuild.  One of the disks
> crashed, and was replaced, and the array was rebuilt.  The box did
> crash one due to a problem with the rebuild, but everything else on
> the box is running fine now, but the email problem did seem to start
> after the RAID was rebuilt.
>
> Thanks.
>
> --Peter
>
>
>
>
> On Oct 17, 2008, at 11:10 AM, Stephen Joyce wrote:
>
>> I suspect that you didn't restore BackupPC's status.pl file when you
>> restored your RAID. On my hosts it's in /var/log/BackupPC/.
>>
>> Or I may be off base and you might have deeper problems. :-)
>>
>> On Fri, 17 Oct 2008, Peter Doherty wrote:
>>
>>> Hello
>>>
>>> My BackupPC server is sending out emails that claims there are no
>>> backups on some of the machines we backup.  The logs clearly show  
>>> the
>>> backups are occurring correctly.
>>> The email summary does not show that BackupPC even sent the emails.
>>> One of the emails claimed there had been no backups in 14,000 days,
>>> clearly a sign that something is confused.
>>> I think all this might have started after we had a disk failure in
>>> the
>>> RAID, and had to restore the array.
>>> Any thoughts?
>>>
>>> Excerpt from email:
>>>
>>>> Dear backups,
>>>>
>>>> Your PC () has not been successfully backed up
>>>> for 14167.5 days.
>>>> Your PC has been correctly backed up 1 times from 0.9 to 14167.5
>>>> ago.  PC backups should occur automatically when your PC is
>>>> connected
>>>> to the network.
>>>
>>>
>>>
>>> Cheers,
>>> Peter
>>>
>>>

[BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
Hi,
I'm trying to run the command rm /archives/*.gz as an ArchivePreUserCmd to
remove old archives on a removable disk before making the latest archive.
It doesn't work.
The archives themselves work fine, but the old ones are not deleted. If I
check UserCmdCheckStatus I get this error:
Executing ArchivePreUserCmd: rm /archives/*

rm: cannot remove `/archives/*': No such file or directory
Archive failed: ArchivePreUserCmd returned error status 256

I tried another command, dd if=/dev/zero of=/archives/test count=512, and it
worked fine, but cd /archives did not.
The archives folder has 777 permissions.

I can't figure this out. Any help would be appreciated.
Regards,
Peter McKenna



Re: [BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
Thanks for your help.
It almost worked, but came up with this error:
Executing ArchivePreUserCmd: /bin/sh -c 'rm /archives/*'

/archives/*": 1: Syntax error: Unterminated quoted string
Archive failed: ArchivePreUserCmd returned error status 512

I tried the exact same command in a shell and it worked fine.
Regards,
Peter



On Thu, 2008-10-23 at 15:53 +0200, Tino Schwarze wrote:

> On Fri, Oct 24, 2008 at 02:36:52AM +1300, Peter McKenna wrote:
> 
> > I'm trying to run a command rm /archives/*.gz as a ArchivePreUserCmd to
> > remove old archives on a removable disk before doing the latest archive.
> > It doesn't work.
> > The archives work fine but the old pnes are not deleted. If I check
> > UserCmdCheckStatus I get this error
> > Executing ArchivePreUserCmd: rm /archives/*
> > 
> > rm: cannot remove `/archives/*': No such file or directory
> > Archive failed: ArchivePreUserCmd returned error status 256
> > 
> > I tried another command like dd if=/dev/zero of=/archives/test count=512 
> > and it worked fine, but cd /archives did not.
> > The archives folder has 777 permissions. 
> > 
> > I can't figure this out. Any help would be appreciated.
> 
> Try /bin/sh -c 'rm /archives/*'
> 
> The shell does the expansion of '*'.
> 
> HTH,
> 
> Tino.
> 
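The failure mode is easy to reproduce outside BackupPC (the /tmp paths below are throwaway examples): glob expansion is performed by a shell, and BackupPC executes ArchivePreUserCmd directly, so the "*" reaches rm as a literal character and rm looks for a file actually named "*".

```shell
# Demo: globs are expanded by a shell, never by rm itself.
mkdir -p /tmp/glob_demo
touch /tmp/glob_demo/a.gz /tmp/glob_demo/b.gz

# A shell expands the glob before rm runs, so this works:
sh -c 'rm /tmp/glob_demo/*'
```

Run directly (no shell), rm would receive the literal string "/tmp/glob_demo/*" and fail exactly as in the log above.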


Re: [BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
Tried that, but no difference.
Executing ArchivePreUserCmd: /usr/local/bin/archivepreusercmd.sh

rm: cannot remove `/archives/*': No such file or directory
Archive failed: ArchivePreUserCmd returned error status 256
Regards,
Peter



On Thu, 2008-10-23 at 16:29 +0200, Tino Schwarze wrote:

> On Fri, Oct 24, 2008 at 03:16:45AM +1300, Peter McKenna wrote:
> > Thanks for your help.
> > It almost worked. It came up with this error
> > Executing ArchivePreUserCmd: /bin/sh -c 'rm /archives/*'
> > 
> > /archives/*": 1: Syntax error: Unterminated quoted string
> > Archive failed: ArchivePreUserCmd returned error status 512
> > 
> > I tried the exact same command in a shell and it worked fine.
> 
> Try using double quotes instead of the single quotes, e.g.
> /bin/sh -c "rm /archives/*"
> If it does not work, create a shell script, e.g. 
> /usr/local/sbin/archivepreusercmd.sh with the following content:
> 
> #!/bin/bash
> #
> 
> rm /archives/*
> 
> Make it executable, then use it as ArchivePreUserCmd.
> 
> Tino.
> 


Re: [BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
It's definitely something to do with how the * character is interpreted,
because rm /archives/test deletes the test file and the archive then runs
perfectly.
Regards,
Peter


On Thu, 2008-10-23 at 16:29 +0200, Tino Schwarze wrote:

> On Fri, Oct 24, 2008 at 03:16:45AM +1300, Peter McKenna wrote:
> > Thanks for your help.
> > It almost worked. It came up with this error
> > Executing ArchivePreUserCmd: /bin/sh -c 'rm /archives/*'
> > 
> > /archives/*": 1: Syntax error: Unterminated quoted string
> > Archive failed: ArchivePreUserCmd returned error status 512
> > 
> > I tried the exact same command in a shell and it worked fine.
> 
> Try using double quotes instead of the single quotes, e.g.
> /bin/sh -c "rm /archives/*"
> If it does not work, create a shell script, e.g. 
> /usr/local/sbin/archivepreusercmd.sh with the following content:
> 
> #!/bin/bash
> #
> 
> rm /archives/*
> 
> Make it executable, then use it as ArchivePreUserCmd.
> 
> Tino.
> 


Re: [BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
That works for a directory with files in it but fails if the directory
is empty. Adding exit 0 at the end seems to work in all cases.
Thanks again for your help and the same to Adam.
Regards,
Peter


On Thu, 2008-10-23 at 17:03 +0200, Tino Schwarze wrote:

> On Fri, Oct 24, 2008 at 03:51:55AM +1300, Peter McKenna wrote:
> > Tried that, but no difference.
> > Executing ArchivePreUserCmd: /usr/local/bin/archivepreusercmd.sh
> > 
> > rm: cannot remove `/archives/*': No such file or directory
> > Archive failed: ArchivePreUserCmd returned error status 256
> 
> Ah, I see. Fact is that there is no file '/archives/*'. Try the
> following as /usr/local/bin/archivepreusercmd.sh:
> 
> #!/bin/bash
> #
> 
> for file in /archives/* ; do
>   [ -f "$file" ] && rm -f "$file"
> done
> 
> This ensures that there is actually something to delete.
> 
> HTH!
> 
> Tino.
> 
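Putting the thread's fixes together, a pre-archive cleanup script might look like the sketch below. The /archives path and the script's location are assumptions: "rm -f" never prompts and ignores a non-matching glob, and the explicit "exit 0" keeps UserCmdCheckStatus from treating an already-empty directory as a failure.

```shell
# Hypothetical ArchivePreUserCmd wrapper. DIR defaults to /archives;
# pass another directory as the first argument when testing by hand.
cat > /tmp/archivepreusercmd.sh <<'EOF'
#!/bin/sh
DIR="${1:-/archives}"
rm -f "$DIR"/*    # -f: ignore nonexistent files, never prompt
exit 0            # always report success to UserCmdCheckStatus
EOF
chmod +x /tmp/archivepreusercmd.sh
```

Install it somewhere like /usr/local/bin and set it as ArchivePreUserCmd; note that "$DIR"/* will not match dotfiles.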


Re: [BackupPC-users] rm command does not work in ArchivePreUserCmd

2008-10-23 Thread Peter McKenna
Thanks Adam,
I tried the -f option within BackupPC (not in a script), and the backup
would work but the old files were not deleted.
I gather it passes an exit status of 0 to UserCmdCheckStatus.


On Fri, 2008-10-24 at 02:15 +1100, Adam Goryachev wrote:

> Tino Schwarze wrote:
> > On Fri, Oct 24, 2008 at 03:51:55AM +1300, Peter McKenna wrote:
> >> Tried that, but no difference.
> >> Executing ArchivePreUserCmd: /usr/local/bin/archivepreusercmd.sh
> >>
> >> rm: cannot remove `/archives/*': No such file or directory
> >> Archive failed: ArchivePreUserCmd returned error status 256
> > 
> > Ah, I see. Fact is that there is no file '/archives/*'. Try the
> > following as /usr/local/bin/archivepreusercmd.sh:
> > 
> > #!/bin/bash
> > #
> > 
> > for file in /archives/* ; do
> >   [ -f "$file" ] && rm -f "$file"
> > done
> > 
> > This ensures that there is actually something to delete.
> 
> Or you could do it in two lines:
> #!/bin/bash
> rm -f /archives/*
> 
> Or you could do it in three lines:
> #!/bin/bash
> touch /archives/test
> rm -f /archives/*
> 
> Or you could do it like this (approx)
> #!/bin/bash
> find /archives/ -type f -exec rm -f {} \;
> (I never seem to get my find -exec commands to work properly, but it
> should work if you get the parameters right.)
> 
> BTW, keep in mind /archives/* will never match /archives/.somefile etc
> 
> Finally, like all things in unix, there are dozens of ways to solve a
> problem :)
> 
> PPS, for most of the above "scripts" you probably don't need the
> #!/bin/bash on the first line, because it doesn't matter which shell
> interprets the commands, they should all "do the right thing"
> 
> BTW, "man rm" will provide some hints on how to use the rm command,
> including using -f "ignore nonexistent files, never prompt"
> 
> Regards,
> Adam
> 


Re: [BackupPC-users] force full save on weekends

2008-10-24 Thread Peter McKenna
Hi Sebastian,
I don't know about separate times for incremental and full but you can
use blackout periods to define when backups are done.
Go to Server->Edit Config->Blackout Periods.
Regards,
Peter McKenna 


On Fri, 2008-10-24 at 11:41 +0200, Sebastian Perkins wrote:
> Hello,
> 
> BackupPC has been up & running for 6 months. 
> 
> One problem remains : backuppc's full/incr solution launches a full
> save after a certain number of days. Unfortunately this happens most
> of the time during office hours.
> 
> The consequence is that certain servers are heavily loaded and users
> complain...
> 
> Is there any way to force the full on weekends, and just keep the
> incrementals on week days?
> 
> Thanks in advance,
> 
> 
> Sebastian Perkins 
> 
> 
> 


Re: [BackupPC-users] What is the latest on getting cygwin rsync/ssh to work for BackupPC

2008-10-26 Thread Peter McKenna
I find it more convenient to use includes rather than excludes. Setting
the rsync share to c:/Documents and Settings/ and then including
Favorites, My Documents and Desktop seems to work pretty well. I just
make it clear to users that anything not in those places will not be
backed up. Of course, if one machine has particular requirements you can
set that up on a per-machine basis as needed.
rsyncd seems to be the most reliable method for me; I don't bother with
ssh.
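An include-based setup like the one described can be expressed as a per-host config fragment. BackupPC config files are Perl; the share name and folder list below are illustrative examples, not the poster's actual settings.

```perl
# Hypothetical per-host config (e.g. /etc/backuppc/somehost.pl).
$Conf{XferMethod}      = 'rsyncd';
$Conf{RsyncShareName}  = ['cDrive'];
# Only these subfolders of the share are transferred:
$Conf{BackupFilesOnly} = {
    'cDrive' => [
        '/Documents and Settings/user/Favorites',
        '/Documents and Settings/user/My Documents',
        '/Documents and Settings/user/Desktop',
    ],
};
```

Anything outside the listed paths is simply never examined, which also sidesteps building a long exclude list.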



On Sun, 2008-10-26 at 16:53 -0600, Linux Punk wrote:
> On Sun, Oct 26, 2008 at 4:49 AM, Steen Eugen Poulsen <[EMAIL PROTECTED]> 
> wrote:
> > Jeffrey J. Kosowsky skrev:
> >>
> >> I have googled and read a lot of posts about people having trouble
> >> with cygwin rsync/ssh but haven't seen any definitive solutions.
> >
> > I don't think it has anything to do with cygwin or how you backup windows
> > machines, it's just that we keep running into file lock issues that messes
> > up backups.
> 
> I would tend to disagree because we have dozens of machines that
> backup successfully using rsyncd server (no ssh) but we ran into
> exactly the issues described when tunneling rsync in ssh. If it was a
> problem with locked files affecting rsync, we would see those problems
> in both instances. We've also tried backing up dummy directories that
> have no open files in them, and had that fail with rsync/ssh.
> 
> > To successfully backup Windows machine I need a huge exclude list, without
> > it things always seem to be very troubled.
> 
> We have a short exclude list, but we still get lots of failures on
> various files that should probably be excluded. We have no problems
> with the backups locking up on only certain files though.
> 
> Brian Oborn
> 


Re: [BackupPC-users] Shadow Copy in WindowsXP

2008-10-26 Thread Peter McKenna
Hi Dejan,
If you are trying to back up the client's PIM info from Outlook, why not
use Backup.pst? It's available free from the Microsoft web site, but
requires validation. It basically backs up Outlook data to a pst file
when the user exits Outlook; you can then back that file up.
Regards,
Peter


On Sun, 2008-10-26 at 08:48 -0400, [EMAIL PROTECTED] wrote:
> hi,
> 
> When searching the Internet, snapshots and shadow copy are associated with
> Server versions of Windows.
> 
> I have to use shadow copy on client machines to be able to backup Outlook
> files but the only way I found to do that is by using XP's Backup program.
> This Backup program unfortunately makes bkf archive which creates unwanted
> additional steps in backing up/restoring.
> 
> Any solution for this?
> 
> Dejan
> 
> 


[BackupPC-users] Where do I report problems with this mailing list?

2008-10-26 Thread Peter McKenna
I could not find an admin address for this mailing list, so hopefully
someone knows where this should go.
Attached is a bounced message from the SourceForge mail server. It looks
like it's being flagged as spam or something.
Regards,
Peter McKenna
--- Begin Message ---
The original message was received at Mon, 27 Oct 2008 02:18:54 GMT
from [EMAIL PROTECTED]

   - The following addresses had permanent fatal errors -
[EMAIL PROTECTED]
(expanded from: [EMAIL PROTECTED])

   - Transcript of session follows -
553 5.0.0 General list for user discussionquestions and support - 
backuppc-users@lists.sourceforge.net <[EMAIL PROTECTED]>
questions and support" ... 
Unbalanced '"'
Reporting-MTA: dns; gourmet.spamgourmet.com
Arrival-Date: Mon, 27 Oct 2008 02:18:54 GMT

Final-Recipient: RFC822; bpc.20.hypatia@xoxy.net
Action: failed
Status: 5.0.0
Last-Attempt-Date: Mon, 27 Oct 2008 02:18:55 GMT
Return-Path:  [EMAIL PROTECTED]
Received: (from [EMAIL PROTECTED])
	by gourmet.spamgourmet.com (8.13.8/8.13.8/Submit) id m9R2Is0a021026
	for [EMAIL PROTECTED]; Mon, 27 Oct 2008 02:18:54 GMT
Received: from lists.sourceforge.net (lists.sourceforge.net [216.34.181.88])
	by gourmet.spamgourmet.com (8.13.8/8.13.7) with ESMTP id m9R2Ipmu020900
	for <[EMAIL PROTECTED]>; Mon, 27 Oct 2008 02:18:51 GMT
Received: from localhost ([127.0.0.1] helo=sfs-ml-4.v29.ch3.sourceforge.com)
	by 335xhf1.ch3.sourceforge.com with esmtp (Exim 4.69)
	(envelope-from <[EMAIL PROTECTED]>)
	id 1KuHff-0001Rn-06; Mon, 27 Oct 2008 02:17:31 +
Received: from sfi-mx-1.v28.ch3.sourceforge.com ([172.29.28.121]
	helo=mx.sourceforge.net)
	by 335xhf1.ch3.sourceforge.com with esmtp (Exim 4.69)
	(envelope-from <[EMAIL PROTECTED]>) id 1KuHfd-0001Rg-Pm
	for backuppc-users@lists.sourceforge.net;
	Mon, 27 Oct 2008 02:17:29 +
X-ACL-Warn: 
Received: from mx4.orcon.net.nz ([219.88.242.54])
	by 29vjzd1.ch3.sourceforge.com with esmtps (TLSv1:AES256-SHA:256)
	(Exim 4.69) id 1KuHfa-0007RV-OK
	for backuppc-users@lists.sourceforge.net;
	Mon, 27 Oct 2008 02:17:29 +
Received: from Debian-exim by mx4.orcon.net.nz with local (Exim 4.68)
	(envelope-from <[EMAIL PROTECTED]>) id 1KuHfS-00059r-1a
	for backuppc-users@lists.sourceforge.net;
	Mon, 27 Oct 2008 15:17:18 +1300
Received: from 60-234-208-121.bitstream.orcon.net.nz ([60.234.208.121]
	helo=[192.168.1.100])
	by mx4.orcon.net.nz with esmtpsa (SSL 3.0:DHE_RSA_AES_256_CBC_SHA1:32)
	(Exim 4.68) (envelope-from <[EMAIL PROTECTED]>)
	id 1KuHfR-00059U-Ma for backuppc-users@lists.sourceforge.net;
	Mon, 27 Oct 2008 15:17:17 +1300
From: "Peter McKenna - [EMAIL PROTECTED]" <>
To: "General list for user discussion,
	questions and support" 
In-Reply-To: <[EMAIL PROTECTED]>
References: <[EMAIL PROTECTED]>
Date: Mon, 27 Oct 2008 15:17:16 +1300
Message-Id: <[EMAIL PROTECTED]>
Mime-Version: 1.0
X-Mailer: Evolution 2.22.3.1 
X-DSPAM-Check: by mx4.orcon.net.nz on Mon, 27 Oct 2008 15:17:17 +1300
X-DSPAM-Result: Innocent
X-DSPAM-Processed: Mon Oct 27 15:17:18 2008
X-DSPAM-Confidence: 0.7619
X-DSPAM-Probability: 0.
X-Spam-Score: 0.5 (/)
X-Spam-Report: Spam Filtering performed by mx.sourceforge.net.
	See http://spamassassin.org/tag/ for more details.
	0.5 AWL AWL: From: address is in the auto white-list
X-Headers-End: 1KuHfa-0007RV-OK
Subject: Re: [BackupPC-users] Shadow Copy in WindowsXP
X-BeenThere: backuppc-users@lists.sourceforge.net
X-Mailman-Version: 2.1.9
Precedence: list
Reply-To:  [EMAIL PROTECTED]
List-Id: "General list for user discussion,
	questions and support" 
List-Unsubscribe: <https://lists.sourceforge.net/lists/listinfo/backuppc-users>, 
	
List-Archive: <http://sourceforge.net/mailarchive/forum.php?forum_name=backuppc-users>
List-Post: 
List-Help: 
List-Subscribe: <https://lists.sourceforge.net/lists/listinfo/backuppc-users>, 
	
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: [EMAIL PROTECTED]
--- End Message ---


[BackupPC-users] One of my backups are failing

2008-12-15 Thread maillist . peter
Hello!

One of my backups (which has worked for over a month) has recently stopped
working, and I get thousands of errors in the log file:

attribSet(dir=f%2fetc, file=)
attribSet(dir=f%2fetc, file=)
makeSpecial(/var/lib/backuppc/pc/localhost-etc/new//f%2fetc/, 9, )
Can't open /var/lib/backuppc/pc/localhost-etc/new//f%2fetc/ for empty output\n
[ skipped 1 lines ]

server:~# cat /etc/backuppc/localhost-etc.pl 
$Conf{ClientNameAlias} = 'server';
$Conf{XferMethod} = 'rsync';
$Conf{XferLogLevel} = 9;
$Conf{RsyncShareName} = [ '/etc' ];

Could anyone tell me how to fix this problem?

Best regards,
Peter
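
[A note on the odd directory name in the log: `f%2fetc` is just BackupPC's
file-name mangling of the share name `/etc`. A small illustrative sketch of
the scheme (Python, not BackupPC's actual Perl code):]

```python
def mangle(name):
    """Sketch of BackupPC-style name mangling: prefix 'f' and
    percent-escape '%' and '/', so any client path becomes a safe
    single directory entry on the server (illustration only)."""
    return "f" + name.replace("%", "%25").replace("/", "%2f")

print(mangle("/etc"))  # -> f%2fetc
```

[So the error above means BackupPC could not create files under the mangled
share directory in `pc/localhost-etc/new/`, not that the name itself is wrong.]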



Re: [BackupPC-users] Designing a BackupPC install over a WAN - minimising Full backups

2009-01-20 Thread Peter Wright
On 21/01 14:01:19, Adam Goryachev wrote:
> David Crisp wrote:
> > Ideally I would love the system to be able to sit in place and make
> > regular incrementals of the data with the most minimal of network
> > traffic required to back up the data.

> Contrary to popular belief, you really want to have regular full
> backups as part of your plan to reduce data transferred over your wan.
> An incremental backup will transfer changed data from the previous
> full or incremental of a lower level.

Can backuppc be configured to transfer changed data from the previous
incremental of the *same* level, or would this require a patch?

> Regards,
> Adam

Pete.
-- 
If at first you don't succeed, failure may be your thing.
-- Warren Miller



Re: [BackupPC-users] Designing a BackupPC install over a WAN - minimising Full backups

2009-01-22 Thread Peter Wright
On 21/01 10:48:50, Tino Schwarze wrote:
> On Wed, Jan 21, 2009 at 02:08:31PM +0900, Peter Wright wrote:
> > > An incremental backup will transfer changed data from the
> > > previous full or incremental of a lower level.
> > 
> > Can backuppc be configured to transfer changed data from the previous
> > incremental of the *same* level, or would this require a patch?
> 
> Why would you want to do that? The previous incremental of the same
> level is a lot older and you'd need to transfer a lot more changes.

Take my words in the context of responding to Adam - though perhaps I
could have put it more clearly :).

Adam stated that "An incremental backup will transfer changed data
from the previous full or incremental of a lower level."

What I was thinking is that if my highest priority was minimising
bandwidth per backup, why shouldn't I be able to configure backuppc to
*only* do incrementals and consider them all to be the same "level"?


But I suspect I've probably misunderstood one or more key backuppc
principles, especially re: what the "levels" really mean.

> In default configuration (IIRC), your backups with weekly full would
> look like this:
[ snip description ]
> The second level-1 incremental will transfer all changes since the
> full backup.

Thanks for the clarification - that was how I thought it worked, but
it's good to make sure.
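
[The reference rule Tino describes can be sketched like this (a hypothetical
helper for illustration, not BackupPC source code):]

```python
def reference_index(levels, new_level):
    """Return the index of the backup a new incremental of `new_level`
    is taken against: the most recent earlier backup whose level is
    strictly lower (sketch of the rule, not BackupPC's implementation)."""
    for i in range(len(levels) - 1, -1, -1):
        if levels[i] < new_level:
            return i
    raise ValueError("no backup of a lower level exists")

# Weekly full (level 0) followed by daily level-1 incrementals:
# every level-1 incremental refers back to the full, so later ones
# re-transfer everything changed since the full, not since yesterday.
history = [0, 1, 1, 1]
print(reference_index(history, 1))  # -> 0
```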

My key issue is that if I'm interested in bandwidth minimisation above
all else, why would I want to do anything other than
incremental-since-the-most-recent-previous-incremental, regardless of
the "level" concept?


As I understand Holger's earlier response, the major (only?) downside
is the cost of building the backup view - and I can certainly see how
that'd become significant after a while.

I suspect I've got my brain too hooked into the rsnapshot model
(http://www.rsnapshot.org/) and I'm not quite understanding the
different philosophy of backuppc.

> Tino.

Pete.
-- 
"But now I've taken my leave of that whole sick, navel-gazing mess we
called the software industry. Now I'm in a more honest line of work:
now I sell beer."  -- jwz



Re: [BackupPC-users] Designing a BackupPC install over a WAN - minimising Full backups

2009-01-22 Thread Peter Wright
On 21/01 14:44:36, Holger Parplies wrote:
> Tino Schwarze wrote on 2009-01-21 10:48:50 +0100 [Re: [BackupPC-users] 
> Designing a BackupPC install over a WAN -?minimising Full backups]:
> > On Wed, Jan 21, 2009 at 02:08:31PM +0900, Peter Wright wrote:
> > > Adam Goryachev, Wed, Jan 21 2009, 14:01:19 +1100:
> > > > David Crisp, Wed, Jan 21 2009, 12:31:35 +1100:
> > > > > Ideally I would love the system to be able to sit in place
> > > > > and make regular incrementals of the data with the most
> > > > > minimal of network traffic required to back up the data.
> 
> this is a frequently asked question. Minimal bandwidth usage with
> rsync is achieved by alternating full and incremental backups, or
> using full backups only.

I may be misunderstanding what you mean here.

By "full backup" do you mean "transfer the entire share (albeit
compressed) over the appropriate medium"? (generally network).

Or is it a different concept, more to do with how backuppc manages
data on the server?

> Using multi-level incremental backups can achieve the same bandwidth
> savings, but at increasing costs for building the backup view
> (meaning you can probably take it to level 3 or 4 but not level 365).
> 
> You will need to decide for yourself to which degree bandwidth costs
> outweigh other costs. If you have little change in your backup data,
> you will prefer doing more incrementals in between full backups,

What I'm thinking of is the (fairly common?) scenario of a fileserver
containing lots of Microsoft Office and/or PDF documents. Most of the
documents don't change at all once added. Some of them are modified
for a while (maybe a few weeks or at most a few months) and then left
unchanged. New stuff is added fairly steadily.

Given that usage model, where files more than a year old are almost
certainly *never* changed (and there's easily more than a decade's
worth of files stored), do you see how it'd be hard to justify the
bandwidth cost of doing a full backup (especially every *week* :-))?


Assuming my first understanding (my second paragraph above) is
correct, do you think it could be plausible to use the
incremental-backup stored data to build something that *looks* to
backuppc like a full backup?

> > > > An incremental backup will transfer changed data from the
> > > > previous full or incremental of a lower level.
> > > 
> > > Can backuppc be configured to transfer changed data from the
> > > previous incremental of the *same* level, or would this require
> > > a patch?
> 
> This would simply be incorrect.

Okay...

> If you backup relative to a level 1 backup, you get a level 2
> backup, whatever you call it. BackupPC presents an identical view to
> you through web interface and restore functionality, meaning you
> don't have to restore (level 0 + level 1 + level 2) - the level 2
> backup *appears like* a full backup.

Aha.

So we *could* take the backup data and construct something that
backuppc would view as a genuine full backup?

I suspect this is probably quite possible, but I'm not sure as to
whether I'm missing something that'd make it completely inappropriate
and/or impractical :). 

Any advice on this point would be much appreciated.

> Patching BackupPC to call (and store) your level 2 backup as a level
> 1 gains you something in terms of costs for constructing the view,
> but not in terms of exactness (which is very good for rsync in any
> case, though). There does not seem to be much point in doing that.

Wouldn't it mean that (given my example usage model described above)
keeping backup bandwidth costs more under control?



Would it be fair to say that the most common usage model for backuppc
is over a local network? Do many people tend to use backuppc over an
internet connection?

[ snip ]
> An incremental backup, in contrast, does not check file contents
> that appear unchanged, so errors are possible (if unlikely) - with
> increasing level you potentially get more errors.

Okay, fair point.

> Regards,
> Holger

Pete.
-- 
"There are two types of programming languages; the ones that people
bitch about and the ones that no one uses."  -- Bjarne Stroustrup



Re: [BackupPC-users] Designing a BackupPC install over a WAN - minimising Full backups

2009-01-22 Thread Peter Wright
On 23/01 16:35:09, Adam Goryachev wrote:
> Peter Wright wrote:
> > My key issue is that if I'm interested in bandwidth minimisation above
> > all else, why would I want to do anything other than
> > incremental-since-the-most-recent-previous-incremental, regardless
> > of the "level" concept?
> 
> Actually, it is the opposite. If bandwidth minimisation is your only
> concern, then every backup should be a full. Since each backup
> will only transfer the changed portions of existing files, or new
> files, compared to the previous (full) backup. This will minimise
> your bandwidth usage.
[ snip rest ]

Okay, in that case I think I've completely and utterly misunderstood
what a full backup is in backuppc terminology (and how it differs from
an "incremental" backup).

I may have to resort to reading the documentation and/or the source
code a little more carefully. Or conduct tests to see what *actually*
happens on backup runs.

Thanks for your conscientious efforts in trying to explain. :)


Pete.
-- 
What part of "Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn"
don't you understand?



[BackupPC-users] Backing up a BackupPc host

2009-03-07 Thread Peter Walter
All,

I have implemented backuppc on a Linux server in my mixed OSX / Windows 
/ Linux environment for several months now, and I am very happy with the 
results. For additional disaster recovery protection, I am considering 
implementing an off-site backup of the backuppc server using rsync to 
synchronize the backup pool to a remote server. However, I have heard 
that in a previous release of backuppc, rsyncing to another server did 
not work because backuppc kept renaming files and directories in the 
backup pool, forcing the remote rsync server to re-transfer the entire 
backup pool (because it treats the renamed files as new files).

I have searched the wiki and the mailing list and can't find any 
discussion of this topic. Can anyone confirm that the way backuppc 
manages the files and directories in the backup pool would make it 
difficult to rsync to another server, and, if so, can anyone suggest a 
method for "mirroring" the backuppc server at an offsite backup machine?

Regards,
Peter



Re: [BackupPC-users] Can $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd} be used to stop and start Exchange Server 5.5 on a Windows NT 4.0 Server as part of a BackupPC job?

2009-03-08 Thread Peter Walter

Jessie,

It is possible to issue the commands from Linux to stop / start Exchange 
as you require if you use a third-party utility.

See http://eol.ovh.org/winexe/index.php for the details.
You may also be able to use the utility and back up Exchange without 
stopping the Exchange server *at all* by using the "shadow copy" feature 
of Windows. See http://www.goodjobsucking.com/?p=62 for an example of 
such an implementation.


Note: the version of winexe at the link above requires a glibc version 
of 2.9 or greater. An alternate version which supports a glibc version 
of 2.3.4 or later can be found at 
http://eol.ovh.org/winexe/winexe-static-081123-glibc-2.3.4.bz2 , but I 
haven't tested it completely - your mileage may vary.
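
A per-host BackupPC config along these lines could drive winexe from the dump 
hooks. This is a sketch only - the winexe path, credentials, host name and the 
Exchange service name below are all placeholders, not values from this thread:

```perl
# Sketch only - every name and credential below is a placeholder.
$Conf{DumpPreUserCmd}  = '/usr/local/bin/winexe -U DOMAIN/backupuser%secret'
                       . ' //exchange-host "net stop MSExchangeIS"';
$Conf{DumpPostUserCmd} = '/usr/local/bin/winexe -U DOMAIN/backupuser%secret'
                       . ' //exchange-host "net start MSExchangeIS"';
# In BackupPC 3.x, this makes a failed pre-command abort the dump, so
# the backup never reads the database files while Exchange has them open.
$Conf{UserCmdCheckStatus} = 1;
```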


Regards,
Peter

Jessie M. Bunker-Maxwell wrote:


We are just beginning to use BackupPC running on a Debian Linux server 
to backup our Exchange Server 5.5 databases via rsyncd running on the 
Windows NT 4.0 server where Exchange is installed.


Currently, we have bat files scheduled to run on our Windows NT 4.0 
Exchange Server to stop Exchange services before the BackupPC job runs 
and start Exchange services afterwards.  But this means we're 
estimating the length of time needed for the backups and scheduling 
these jobs with a safe margin in either direction.


I am concerned that: 1) we are leaving our Exchange database down 
longer than necessary; and 2) if something goes wrong and the 
procedures that stop or start the Exchange database run while BackupPC 
has the Exchange files open for backup, we may corrupt our Exchange 
data (am I being paranoid here?).


It would be nice if the BackupPC job itself could issue the "net stop" 
and "net start" commands and I'm assuming I would use the 
$Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd} but I don't 
understand from the documentation how this would be done if you're 
issuing commands to a Windows server.  Maybe this is not possible?


Can someone tell me if/how I could use $Conf{DumpPreUserCmd}  and 
$Conf{DumpPostUserCmd} to remotely stop and start Exchange?


Does anyone know if there is any risk around attempting to stop or 
start the Exchange services while BackupPC has the files open for backup?


Thanks much.

Jess

*Jessie M. Bunker-Maxwell*
*Network Access Services*
*Santa Cruz Public Library*
*224 Church St.*
*Santa Cruz, CA 95060*
*v: 831-420-5764*
*f: 831-459-7936*
*e: je...@santacruzpl.org*




Re: [BackupPC-users] Backing up a BackupPc host

2009-03-10 Thread Peter Walter
I checked the cpool directory on my server and it contained 803,023 
files in the directory tree. If rsync is not a scaleable solution for 
backing up a backuppc pool over the internet, does anyone have a 
suggestion for a better solution?


Peter

Les Mikesell wrote:
> Carl Wilhelm Soderstrom wrote:
> >> Is there a problem with my approach?
> >
> > The problem with rsync'ing a backuppc data pool is the sheer number of
> > files. You'll run your machine out of space with the lists as they get
> > loaded into memory.
> >
> > Up to a certain point it will work, and with newer versions of rsync it
> > will work better, but it is not a scaleable solution.
>
> More specifically it is the number of hardlinks that most file-oriented
> copying methods have to track by keeping tables of inode numbers in
> memory and matching them up - and the fact that to work, the entire tree
> containing pool/cpool/pc directories must be processed at once.
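
[The inode-table point can be illustrated with a toy model (hypothetical
code showing the bookkeeping, not how rsync is actually implemented):]

```python
def plan_copy(entries):
    """Toy model of why hardlink-preserving copies (e.g. rsync -H) need
    a large in-memory table: every inode seen must be remembered so that
    later links to it are recreated as links instead of re-sent.
    entries: list of (path, inode_number) pairs."""
    first_seen = {}            # inode number -> first path copied
    transfers, links = [], []
    for path, inode in entries:
        if inode in first_seen:
            links.append((path, first_seen[inode]))  # link, no data sent
        else:
            first_seen[inode] = path
            transfers.append(path)
    return transfers, links

# One pool file hardlinked from a pc/ tree: only the pool copy moves
# data, but the table must hold an entry for every pooled file.
t, l = plan_copy([("cpool/a/b/c/hash", 7), ("pc/host/0/fetc/fpasswd", 7)])
print(t, l)
```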


  


Re: [BackupPC-users] Backing up a BackupPc host

2009-03-10 Thread Peter Walter

Les,

All I have access to is "cloud storage" offsite - running a remote 
backuppc server is not an option. Basically, I have access to rsync / 
sftp / scp as file transfer protocols. What do you think about running a 
directory change monitor based upon the Linux kernel inotify() facility, 
such as fsniper or gamin, then triggering a rsync script to synchronize 
only the directories that have changed?


Peter
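
[The "sync only what changed" idea can be sketched in poll form (an
illustration of the selection step only; inotify/fsniper would push these
events instead of polling, and note that per-directory rsync runs would
still not preserve hardlinks that cross directories):]

```python
import os

def dirs_changed_since(root, since):
    """Walk the tree and return directories modified after the last
    sync timestamp, so only those are handed to rsync (sketch, not a
    drop-in tool)."""
    changed = []
    for dirpath, _dirnames, _filenames in os.walk(root):
        if os.path.getmtime(dirpath) > since:
            changed.append(dirpath)
    return sorted(changed)
```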

Les Mikesell wrote:

Peter Walter wrote:
  
I checked the cpool directory on my server and it contained 803,023 
files in the directory tree. If rsync is not a scaleable solution for 
backing up a backuppc pool over the internet, does anyone have a 
suggestion for a better solution?



What matters more is how many links you have to these files in the pc 
tree.  Rsync may work for you if you have plenty of ram and use the 
latest version.  If it doesn't, I'm not sure if there are any 
alternatives that will work over the internet.  There are several 
approaches to making a local image copy of the partition, but even those 
don't scale well to archives that won't fit on an external drive.


In some cases you can just let an independent offsite copy of backuppc 
run through a vpn.


  


[BackupPC-users] Rsyncd and WinXP

2009-05-01 Thread Peter Bloomfield
Dear All,

I apologise for re-visiting this, but I am having difficulty backing up a WinXP 
machine over rsync.

I am using Backuppc version 3.1.0, and cygwin's rsync on the WinXP box.

When I issue a full backup, I get the following message in backuppc:

Executing DumpPostUserCmd: /home/backuppc/Bin/NotifyUsers -h  -u 
*...@ -m ** -c 0 -t full -s hr_info
Got fatal error during xfer (chdir failed)
Backup aborted (chdir failed)

I have blanked out the email address, host and user for this email. I then went 
and looked on the WinXP
in the rsync.log file and I get the following message,

2009/05/01 16:39:23 [4024] rsync: chdir 
/cygdrive/c/WINDOWS/system32/f:/Alvina_PET/HR_Info failed

Has anyone else seen this, or can someone point me in the direction of a 
solution, thanks

Peter
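
[The failing chdir path in the log ends in `f:/Alvina_PET/HR_Info`, a
Windows drive-letter path, which cygwin rsync has appended to its own
working directory. If the rsyncd module on the XP box is defined with that
Windows path, one thing to try is rewriting it cygwin-style (module name
guessed from the share name in the log; a sketch, not a confirmed fix):]

```
[hr_info]
    # cygwin rsync needs a POSIX-style absolute path, not "f:/..."
    path = /cygdrive/f/Alvina_PET/HR_Info
    read only = true
```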






[BackupPC-users] Installation problems on SME Server 8.0 B3

2009-05-05 Thread Peter Walter
SME Server 8.0 B3 is a variant of Centos 5. I installed 
BackupPC-3.1.0-4.el4.noarch.rpm on a system running 8.0 B3, plus two 
rpms that are customized to support the distribution:
smeserver-BackupPC-0.1-6.el4.sme.noarch.rpm
smeserver-remoteuseraccess-1.2-30.el4.sme.noarch.rpm

Installation appeared to go normally. However, when displaying the 
administrative web page, the administrative functions are missing and 
there does not seem to be any way to configure the software from the web 
page. You can see the screen I am getting at 
http://www.sorolo.com/backuppc.jpg .

Can anyone suggest a starting point for troubleshooting this?

Thanks,
Peter





Re: [BackupPC-users] Installation problems on SME Server 8.0 B3

2009-05-06 Thread Peter Walter

Paul Mantz wrote:
> You should check what user you are authenticating as to apache, then
> check the CgiAdminUser value in config.pl.  It sounds like you are
> logged in as a user with no permissions.
>
> Adios,

Paul, thank you very much. For some reason, CgiAdminUser in config.pl 
was empty. It was supposed to be 'admin'. I will take the issue up with 
the rpm packager.


Peter

