Re: [BackupPC-users] Download file directly from browse fails...

2008-02-19 Thread Joe Krahn
Mirco Piccin wrote:
 Hi, and thanks for the reply.
 
 I'm trying to retrieve a file from a backup.

 Its size is about 12 GB.
 I tried to download it directly from Browse Backup, but the first time
 the download froze at about 3.99 GB, and the second time at about
 2.76 GB.

 Maybe there's a timeout on direct downloads?

 I'm also going to try to restore that file by:
 1. selecting it in the Browse Backup tree
 2. clicking on Restore Selected Files
 3. choosing the zip compression level and downloading the .zip file.
 
 Restoring that file by choosing to download the zip file does not
 work either.
 The download that way goes at about 1 kb/s.
 
 For a restore that large I'd use the command line interface, just to
 make sure a browser timeout won't be an issue.
 
 A browser timeout shouldn't be an issue after only 3.99 GB - I'm
 thinking of the Ubuntu 4.4 GB DVD .iso image, which downloads fine from the web.
 
 You can use BackupPC_tarCreate as your BackupPC user to create
  a tar archive of the files you want to restore.
 
 Well, the restore must be done by a Windows user.
 The restore I'm talking about is of a single large file (about 12 GB).
 Backup of that file works perfectly.
 
 But downloading that file - whether by selecting it in the Browse Backup
 tree or by creating the zip file - does not seem to be possible.
 Any help/tips?
 
 Regards
 M
Did you check the Apache error logs? Even though you can download a 4.4 GB
file, there may still be some timeout problems, especially if the server
is under heavy load. It may be the timeout settings in Apache rather than
the browser. (Just guessing.)

Hanging at 3.99 GB makes me wonder if there is some sort of 4 GB file size
limitation somewhere, but maybe that was just a coincidence.
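
In the meantime, a command-line restore avoids Apache and the browser
entirely. Roughly (the host, share and file path here are placeholders,
-n -1 means the most recent backup, and the install path may differ):

  su - backuppc -c '/usr/local/BackupPC/bin/BackupPC_tarCreate -h host1 -n -1 -s /data /path/to/bigfile' > bigfile.tar

The resulting tar file can then be copied to the Windows machine and
unpacked there.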

Joe Krahn



Re: [BackupPC-users] BackupPC_link and TopDir

2008-02-12 Thread Joe Krahn
Joe Krahn wrote:
 It seems to be a known problem that moving the backup location by
 setting $Conf{TopDir} does not work, because it is used to define the
 location of config.pl when $useFHS is off. But, changing $TopDir breaks
 BackupPC_link even when $useFHS is on. It is confusing that the config
 value is used in some places, but the default value is used in others.
 
 If certain values have to be hard-coded, then you shouldn't also allow
 those as configuration options. There is also no reason to force the
 backup pool to be in the same location as the config file, because it
 will be quite common to have a separate filesystem for backup data, so
 the backup locations should definitely not be one of the hard-coded items.
 
 Since BackupPC is designed to run with a backuppc account, you could
 require that the backuppc home directory be the config.pl location, and
 then you wouldn't need any hard-coded paths.
 
 Joe Krahn
 
(More info)
Here is where the pool and cpool paths are set in Lib.pm:

#
# Clean up %ENV and setup other variables.
#
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};
$bpc->{PoolDir}  = "$bpc->{TopDir}/pool";
$bpc->{CPoolDir} = "$bpc->{TopDir}/cpool";
if ( defined(my $error = $bpc->ConfigRead()) ) {
    print(STDERR $error, "\n");
    return;
}

#
# Update the paths based on the config file
#
foreach my $dir ( qw(TopDir ConfDir InstallDir LogDir) ) {
    next if ( $bpc->{Conf}{$dir} eq "" );
    $paths->{$dir} = $bpc->{$dir} = $bpc->{Conf}{$dir};
}
$bpc->{storage}->setPaths($paths);


Wouldn't the $TopDir value work correctly for PoolDir and CPoolDir if
they were set after reading the config?
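
A minimal sketch of that reordering, against the excerpt above (untested):

if ( defined(my $error = $bpc->ConfigRead()) ) {
    print(STDERR $error, "\n");
    return;
}

#
# Update the paths based on the config file
#
foreach my $dir ( qw(TopDir ConfDir InstallDir LogDir) ) {
    next if ( $bpc->{Conf}{$dir} eq "" );
    $paths->{$dir} = $bpc->{$dir} = $bpc->{Conf}{$dir};
}
$bpc->{storage}->setPaths($paths);

#
# Derive the pool paths only after TopDir reflects the config value
#
$bpc->{PoolDir}  = "$bpc->{TopDir}/pool";
$bpc->{CPoolDir} = "$bpc->{TopDir}/cpool";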

Joe



[BackupPC-users] BackupPC_link and TopDir

2008-02-12 Thread Joe Krahn
It seems to be a known problem that moving the backup location by
setting $Conf{TopDir} does not work, because it is used to define the
location of config.pl when $useFHS is off. But, changing $TopDir breaks
BackupPC_link even when $useFHS is on. It is confusing that the config
value is used in some places, but the default value is used in others.

If certain values have to be hard-coded, then you shouldn't also allow
those as configuration options. There is also no reason to force the
backup pool to be in the same location as the config file, because it
will be quite common to have a separate filesystem for backup data, so
the backup locations should definitely not be one of the hard-coded items.

Since BackupPC is designed to run with a backuppc account, you could
require that the backuppc home directory be the config.pl location, and
then you wouldn't need any hard-coded paths.

Joe Krahn



[BackupPC-users] Incorrect $user substitution?

2008-02-12 Thread Joe Krahn
It appears that $user substitution in $Conf{ArchivePreUserCmd} and
$Conf{ArchivePostUserCmd} is mis-documented. BackupPC_archive is passed
$user on the command line, but it comes from the CGI user, not the user
from the hosts file, as it does for Dump and Restore and as the Archive
documentation claims.
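
For example (the logging script here is made up), with a host config entry
like the following, $user currently expands to the CGI user who requested
the archive, not the hosts-file user that the documentation describes:

  $Conf{ArchivePreUserCmd} = '/usr/local/bin/log_archive_request $user';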

I found this because I was looking into adding the CGI user as a general
substitution for all commands that can be invoked via CGI. Why not have
a consistent set of substitution strings for all commands?

One problem is that the CGI user must be passed on the command line, but
only BackupPC_archive takes the user as an argument, and most of the
commands are designed with a static set of arguments instead of option
flags. Maybe a future release could be more flexible.

Joe Krahn



Re: [BackupPC-users] Rsync proxy for access control

2008-02-08 Thread Joe Krahn
I looked into rsync access security a bit further. It seems that there
are still some possible security risks, with symlinks being able to
access files outside of the rsync root directory. That is probably why
Fedora's SELinux policy is configured to prevent general file access by an
rsync daemon, which is probably not worth trying to circumvent for BackupPC.

It is possible to run rsync in daemon mode over ssh, without actually
running an rsync daemon. Look for "USING RSYNC-DAEMON FEATURES VIA A
REMOTE-SHELL CONNECTION" in the rsync man page. This gives all the
controls of rsyncd.conf without having to actually run a daemon. That
way, rsyncd is not left open for local privileged access, and it is
possible to use the chroot option. I think this will avoid problems with
the SELinux rsyncd configuration as well.
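
A minimal sketch of that man-page setup, where the key, config path and
module name are placeholders: the client's authorized_keys entry forces a
single-use daemon with its own config, and the connecting side uses the
double-colon daemon syntax together with -e ssh:

  command="/usr/bin/rsync --server --daemon --config=/etc/rsyncd-backuppc.conf ." ssh-rsa AAAA... backuppc@server
  rsync -av -e ssh client::module/some/path /local/dest

The rsyncd-backuppc.conf can then set "read only", "use chroot" and the
module paths exactly as a normal daemon would.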

Also, I think that sudo can be used effectively by giving permission to
run an rsync proxy instead of rsync itself. That gives the normal sudo access
control, but also allows for additional restrictions built into the
rsync proxy.

I just need to figure out how to get BackupPC to directly use the rsyncd
protocol over ssh, and the rest will be easy.

Joe




Re: [BackupPC-users] Rsync hardlink efficiency

2008-02-08 Thread Joe Krahn
Les Mikesell wrote:
 Joe Krahn wrote:
 Rsync is a high quality tool, but I was surprised to learn how poorly
 the hardlink preservation is handled. It keeps a list of inodes for ALL
 files, not just ones with outstanding hardlink counts. The good news is
 that plans for rsync 3.0 include fixing this.  This could be even more
 efficient by tracking the sender's inode instead of file names, with the
 receiver keeping a remote-to-local inode lookup table. I don't know what
 the current development version actually does. Maybe someone here facing
 backup mirroring problems could try out an rsync-3 pre-release.
 
 That's not going to make a huge difference in terms of handling a
 backuppc archive since every file in the cpool and pc directories will
 have at least 2 links and thus have to be included in the table. Mapping
 local to remote inodes would greatly widen the already present window
 where wrong things can happen if the filesystems are active while being
 copied.
 
It does help, because you can still remove files from the list once all
of their links are known. Right now, the list stays full even after all
links are resolved for most of the files. Depending on how directories are
traversed, you might then hold the full list for only part of the run.
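
As a rough illustration of the bookkeeping (a sketch, not rsync's actual
code): keep an entry per device:inode pair only while some of its links are
still unseen.

my %pending;    # "dev:ino" => number of links not yet encountered

sub saw_hardlinked_file {
    my ($dev, $ino, $nlink, $name) = @_;
    return if $nlink < 2;                   # nothing to correlate
    my $key = "$dev:$ino";
    $pending{$key} = $nlink unless exists $pending{$key};
    # ... record $name as one occurrence of $key ...
    $pending{$key}--;
    delete $pending{$key} if $pending{$key} == 0;   # all links seen; forget it
}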

As for problems with mapping inodes, I don't see how that is any worse
than filename lists. In fact, contents are more likely to change with a
filename reference than by an inode reference. The remote system could
create temporary files for outstanding links (maybe named by the remote
inode), to hold an inode reference even if the other files got deleted.

Joe



[BackupPC-users] Rsync hardlink efficiency

2008-02-08 Thread Joe Krahn
Rsync is a high quality tool, but I was surprised to learn how poorly
the hardlink preservation is handled. It keeps a list of inodes for ALL
files, not just ones with outstanding hardlink counts. The good news is
that plans for rsync 3.0 include fixing this.  This could be even more
efficient by tracking the sender's inode instead of file names, with the
receiver keeping a remote-to-local inode lookup table. I don't know what
the current development version actually does. Maybe someone here facing
backup mirroring problems could try out an rsync-3 pre-release.

Joe Krahn



Re: [BackupPC-users] Rsync hardlink efficiency

2008-02-08 Thread Joe Krahn
Les Mikesell wrote:
 Joe Krahn wrote:

 As for problems with mapping inodes, I don't see how that is any worse
 than filename lists. In fact, contents are more likely to change with a
 filename reference than by an inode reference.
 
 At least if you link by name, even if the contents are replaced you'll
 end up linked to something that is probably somehow related to what you
 expected.  If you make a table of inode numbers, and then something removes
 some files and replaces them, even with files of the same names and
 contents, there would not be much reason to expect the numbers in your
 table to still reference the right files.
 
 The remote system could
 create temporary files for outstanding links (maybe named by the remote
 inode), to hold an inode reference even if the other files got deleted.
 
 I'd expect the contents to come along with the first reference.  But,
 suppose your source filesystem is live and changing, and someone starts
 two or more instances of rsync copying to the same destination at once
 and they scramble each other's views of the inode numbers - or any other
 similar activity happens on the destination side.  If that sort of thing
 never happened, we wouldn't need to be doing all these backups...
 
But, if the contents are replaced, they will get a new inode, and the
original data is still under the original inode. That just means that
the receiver has to create a temporary file link. As long as those are
protected from other users/processes, you are safe. If you link by
filename, you end up linking to the modified content on the receiving
end, which means having to re-check for changes just before doing the
actual link.

If you create an extra hardlink for each inode data reference, and
protect it from being deleted, the inode-referenced data can't change.
Then at the actual file/link transfer, if the sender's file still
references the same sender inode, you can do a hardlink at the receiver
end to an inode that has to contain the first transfer's content, even
if someone on the remote system has deleted the originally transferred
file. The only disadvantage is that you have to create a lot of
temporary hardlinks, and ensure that at least those are protected from
being deleted. But, if they do get deleted, you will know it, and can
fall back to less efficient methods.
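
A sketch of that guard-link idea in Perl; every variable here is a
placeholder for whatever the receiver tracks:

# Pin the inode of the first transferred copy with a protected hard link.
my $guard = "$guard_dir/ino-$remote_dev-$remote_ino";
link($first_copy, $guard)
    or die "cannot pin inode for $first_copy: $!\n";
# ... later, when the sender reports another name for the same remote inode:
link($guard, $new_name)
    or warn "guard link is gone; fall back to a full transfer of $new_name\n";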

Joe




[BackupPC-users] rsyncd over ssh

2008-02-08 Thread Joe Krahn
I decided that the best way to handle rsync security is with rsync in
daemon mode, over ssh, with sudo. The advantage is that rsync daemon
mode allows for chroot and a lot of access controls, while running a
normal rsync daemon could leave some local access security holes and is
very restricted by many SELinux security configurations.

I have now set up an ssh authorized_keys entry for an unprivileged account,
which runs sudo to start rsync in daemon mode with a specific
rsyncd.conf. (Earlier, I didn't realize that sudo can limit command
arguments as well as executables.) The rsyncd.conf enforces restrictions
better than my previous attempt at an rsync proxy command.

I have this working for BackupPC. It required a bit of hacking to merge
the rsync and rsyncd connection handling, which is implemented outside of
the File::RsyncP module. I hacked the option to use it into
BackupPC/Xfer/Rsync.pm, selecting this method when rsyncd is configured
with a port value of zero.
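
Roughly, the per-host config for this ends up looking like the following
(the module name is just a placeholder):

  $Conf{XferMethod}       = 'rsyncd';
  $Conf{RsyncdClientPort} = 0;        # port 0 selects the ssh transport in the patched Rsync.pm
  $Conf{RsyncShareName}   = 'backup'; # module name from the client-side rsyncd.conf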

Joe Krahn




Re: [BackupPC-users] Fwd: Fwd: cgi interface non admin user

2008-02-07 Thread Joe Krahn
dan wrote:
 they are not users on your system, they are just users in your
 htpasswd file.
 
 htpasswd /etc/backuppc/htpasswd username
 
 will allow access to the username you type, with the password you enter at
 the prompt.  This can be anything you like and is not limited to the
 users on your system.
 
 You could create a script to add all users from /etc/passwd and their
 passwords to that file if you really wanted local users to
 match BackupPC users. Just google it; you will get a few hits that will
 help.
 
...
If you are backing up computers in a Windows domain, you can use
Kerberos authentication, so that BackupPC access will always use their
current password. One disadvantage of Kerberos is that you can't mix
Kerberos with other authentication types, but it is possible to
configure two virtual directories to the same actual directory, each
with a different AuthType. I can give an example if anyone wants to try
this.

Also, it is important to realize that HTTP passwords are sent as plain
text, and that this password can easily allow root access to the
client machine if you have unrestricted write access. So, it is a good
idea to use HTTPS.

Joe Krahn



Re: [BackupPC-users] What about cpio?

2008-02-07 Thread Joe Krahn
Robin Lee Powell wrote:
 Y'all have made it clear that rsync -H doesn't work too well with
 backuppc archives; what about cpio?  Does it do a decent job of
 preserving hard links without consuming all your RAM?
 
 -Robin
 
Preserving hard links is always inefficient because, unfortunately,
normal filesystems don't keep track of which files point to a given
inode. You have to search for files with matching inodes. But, maybe
rsync is not as efficient as it could be. The rsync NEWS mentions a
major hard link speed-up at version 2.6.1, several years ago. Is it
possible you're using an old rsync?

It's also possible that some Rsync improvements need a newer protocol
version than BackupPC can handle. Try running rsync directly, and see if
it performs better.

Joe Krahn



[BackupPC-users] Rsync proxy for access control

2008-02-07 Thread Joe Krahn
Here is a first draft of an rsync proxy command for access control on
the client computer (as an attachment; I hope that's OK). I think that
remote root access done this way is probably more secure than a non-privileged
ssh tunnel, because the tunnel still relies on a plain rsyncd password for security.

This is a Perl script that allows only very limited access. You can
restrict the read and write access paths, and it automatically rejects
paths containing /../. It restricts the rsync arguments, and may need
adjustment if you want arguments like --backup-suffix=. It also
executes rsync in a way that avoids sh argument processing.

There are many possibilities for developing this further. Comments are
welcome.

Joe Krahn
#!/usr/bin/perl
# File: BackupPC_rsync
# Author: Joe Krahn [EMAIL PROTECTED]
# (First draft; no version/license/copyright stuff yet)
#
# This is an rsync proxy to restrict access from the BackupPC server.
# It is intended to be defined as the command= option in the authorized_keys
# file, for example:
#   command="/usr/local/sbin/BackupPC_rsync"[,opt,opt] ssh-rsa ...
#
# You may want to add some or all of the following restriction options as well:
#   from="BackupPC-hostname"
#   no-port-forwarding
#   no-agent-forwarding
#   no-X11-forwarding
#   no-pty
#
# Options for this file:
#
#   $log   Name of access log file.
#   $rsync full path of the rsync executable.
#   $read_regexp   Perl RegExp for the allowed source path(s).
#   $write_regexp  Perl RegExp for the allowed destination path(s).
#   $nice  Nice level. In Linux, 19 is lowest. (See the setpriority system call.)
#
# Access expressions may be blank or undefined to disallow read or write access.
# This example allows read and write access to /tmp and /home.
#
use strict;
my $log='/var/log/BackupPC_rsync.log';
my $rsync='/usr/bin/rsync';
my $write_regexp=qr/^\/(?:tmp|home)\//;
my $read_regexp=qr/^\/(?:tmp|home)\//;
my $nice=10;

my $now=localtime;
my $cmd=$ENV{SSH_ORIGINAL_COMMAND};

open F, ">>$log";
printf F ('%s: COMMAND=%s; ', scalar localtime, $cmd);

if ($cmd !~ s:^$rsync --server ::) { fail('UNAUTHORIZED COMMAND'); }
my @command=($rsync,'--server');
while ($cmd =~ s/^(--?[A-Za-z0-9-]+(?:=\w+)?) *//) { push @command,$1; }
my $sender = ($command[2] eq '--sender');
if ( $cmd !~ m/^\. (\/(?:[\\].|[^\\])*\/) *$/ ) { fail("INVALID ARGUMENTS AT \"$cmd\""); }
my $path = $1;
# Unescape special characters (BackupPC does not use quotes)
$path =~ s/\\(.)/$1/g;
# Check for and reject paths with /../ in them.
if ($path =~ m{/\.\./}) { fail('INVALID PATH'); }
if ( $sender ) {
  if (not defined $read_regexp or $read_regexp eq '') { fail('READ ACCESS DISABLED'); }
  if ($path !~ $read_regexp) { fail('SOURCE PATH DISALLOWED'); }
} else {
  if (not defined $write_regexp or $write_regexp eq '') { fail('WRITE ACCESS DISABLED'); }
  if ($path !~ $write_regexp) { fail('DESTINATION PATH DISALLOWED'); }
}
push @command,'.',$path;
print F "ALLOWED\n";
close F;
setpriority(0,0,$nice);
# exec() is safer than system(), because it can use a pre-parsed
# argument list, which avoids sh argument processing.
exec(@command); # NO RETURN FROM THIS CALL

sub fail {
  my $err = $_[0];
  print F "DENIED: $err\n";
  close F;
  die "ACCESS DENIED: $err\n";
}


Re: [BackupPC-users] rsync-like dd/netcat script

2008-02-07 Thread Joe Krahn
Timothy J. Massey wrote:
 Hello!
 
 Some time ago, someone e-mailed a script that performed a dd/netcat in 
 an rsync-like manner:  it hashed blocks of the disk and if they matched 
 between the two sides they were not sent.  If they didn't, the block was 
 sent.  The idea was to limit the amount of data that would be sent in a 
 dd to the relative minimum amount of data that has changed.
 
 I've tried and tried to find this thing, but I just plain cannot--either 
 in the list archives or anywhere else Google searches.  Does anyone have 
 such a script--or even know what I'm talking about?
 
 I think this might have been a home-grown script, but it would be very 
 useful, given that currently the only way to effectively clone a backup 
 server is dd, and dd'ing a 500GB partition is not exactly practical, 
 especially when only a few gig has actually changed from the last time...
 
 Tim Massey
 

I did some searching and found that several people have expressed
interest in a block-device feature for rsync, but nothing has come of it
yet. I also found DRBD (Distributed Replicated Block Device), which
probably does exactly what you want.

Joe



Re: [BackupPC-users] rsync-like dd/netcat script

2008-02-07 Thread Joe Krahn
Les Mikesell wrote:
 Timothy J. Massey wrote:
 
 Some time ago, someone e-mailed a script that performed a dd/netcat in 
 an rsync-like manner:  it hashed blocks of the disk and if they matched 
 between the two sides they were not sent.  If they didn't, the block was 
 sent.  The idea was to limit the amount of data that would be sent in a 
 dd to the relative minimum amount of data that has changed.

 I've tried and tried to find this thing, but I just plain cannot--either 
 in the list archives or anywhere else Google searches.  Does anyone have 
 such a script--or even know what I'm talking about?

 I think this might have been a home-grown script, but it would be very 
 useful, given that currently the only way to effectively clone a backup 
 server is dd, and dd'ing a 500GB partition is not exactly practical, 
 especially when only a few gig has actually changed from the last time...
 
 If you are willing to trade disk space for bandwidth, you could dd a 
 snapshot of the partition to a file locally, then rsync the file to a 
 remote copy.  You'll need twice the space on the remote side if you use 
 rsync's default behavior of building a complete new copy before 
 replacing the old.
 
Is it possible to just rsync the raw disk device? I don't see the point
of a dd snapshot, unless you can't get rsync to read from a block
device. It certainly can't write to a block device, which is why rsync
really won't work.

But, why do you really need to clone a raw disk instead of just rsyncing
the content? A raw device copy means that you will end up synchronizing
deleted file fragments, and you will need to have the filesystem unmounted.

Joe




Re: [BackupPC-users] Rsync proxy for access control

2008-02-07 Thread Joe Krahn
Jonathan Dill wrote:
 Hmm, interesting, basically an rsync wrapper.  I was also thinking:  How
 about an unprivileged account with sudo access to run rsync as root?  I
 found this discussion:
 
 http://lists.samba.org/archive/rsync/2004-August/010439.html
 
 Turns out that is also in the BackupPC FAQ:
 
 http://backuppc.sourceforge.net/faq/ssh.html
I originally considered that approach, but sudo can only limit which
executables are allowed, and cannot restrict rsync to read-only access.
This script enforces more restrictions. But, it might be useful to use
sudo as well.
 
 If you are tunneling the rsync command through an ssh shell, then why
 would rsyncd with plain passwords be used at all?  rsync would be run
 within the shell on the client, ssh does the authentication
 (preferably by keys), and rsyncd would not be used at all.
You still need access control on the local host to keep regular users
from using rsync to gain root privileges, even if you trust the local users.
 
 tcp_wrappers and / or iptables can be used to reinforce restrictions in
 case somebody figures out a way to fool rsync or try spoofing /
 man-in-the-middle.  The unprivileged account could have a restricted
 shell as the shell to limit which commands could be accessed.
 
A restricted shell is more effort to make, and still would be much less
restrictive than this simple Perl script.
 You should also use the complete absolute path and avoid adding layers
 of shell scripting, to avoid e.g. a rootkit that adds to $PATH to
 redirect commands to new ones installed by the kit; also avoid referencing
 env variables.
This script uses the full path to rsync, but a complete version should
enforce that more carefully. The only environment variable it uses is one set by sshd.

One important thing that I did not implement here is to chroot to the
destination directory. Is it possible to get the rsync receiver to go
outside of the destination directory by sending ../ over the rsync
protocol? If so, is it possible to get chroot protection without the
/lib, etc., preparation of chroot?

Joe

 Joe Krahn wrote:
 Here is a first draft at an rsync proxy command for access control on
 the client computer (as an attachment; I hope that's OK). I think that
 remote root access is probably more secure than a non-privileged ssh
 tunnel, because that still relies on a plain rsyncd password for
 security.

 This is a Perl script that gives very limited access control. You can
 restrict the read and write access paths, and it automatically rejects
 paths with /../. It restricts the rsync arguments, and may need
 adjustment if you want arguments like --backup-suffix= It also
 executes rsync in a way that avoids sh processing.

 There are many possibilities for developing this further. Comments are
 welcome.

 Joe Krahn
  
 

 




Re: [BackupPC-users] rsync-like dd/netcat script

2008-02-07 Thread Joe Krahn
Les Mikesell wrote:
 Joe Krahn wrote:

 If you are willing to trade disk space for bandwidth, you could dd a
 snapshot of the partition to a file locally, then rsync the file to a
 remote copy.  You'll need twice the space on the remote side if you
 use rsync's default behavior of building a complete new copy before
 replacing the old.

 Is it possible to just rsync the raw disk device?
 
 Most unix-like systems treat devices as much like files, but rsync has a
 special check to make sure it is only working with files.  It also
 normally builds a new copy, then renames when complete, but there is an
 option to override that.
 
 I don't see the point
 of a dd snapshot, unless you can't get rsync to read from a block
 device. It certainly can't write to a block device, which is why rsync
 really won't work.
 
 Rsync theoretically could read/write to block devices, it just refuses.
 But, you'd have to keep the filesystem unmounted for the duration and if
 you don't let it build a separate new copy you'll end up with your
 remote copy corrupted if the live system dies mid-run.
I guess the right approach is to have two remote filesystems. There's no
reason the 'temporary' copy couldn't also be a block device. You could
keep one remote device unmounted and clone it from the unmounted
source, and then clone the second remote device from the first. That way,
one of the three copies would always be accessible.

 
 But, why do you really need to clone a raw disk instead of just rsyncing
 the content? A raw device copy means that you will end up synchronizing
 deleted file fragments, and you will need to have the filesystem
 unmounted.
 
 In the context of backuppc, the number of files and hardlinks often
 makes it impractical to rsync the archive contents.
 

OK; many files alone shouldn't be too hard, but I can see the problem when
combined with large numbers of hardlinks. If it is only for BackupPC
files, the most efficient approach would be to build a feature into
BackupPC. Any other tool is going to have to hunt for the hard links.

Joe



Re: [BackupPC-users] rsync-like dd/netcat script

2008-02-07 Thread Joe Krahn
Les Mikesell wrote:
 Joe Krahn wrote:
...
 If it is only for BackupPC
 files, the most efficient approach would be to build a feature into
 BackupPC. Any other tool is going to have to hunt for hard links.
 
 Backuppc doesn't really know anything about the hardlinks either once
 they are made - which is why they work so well...  There is a tool to
 help copy an archive but it is fairly slow too.
 
That is why I say it would need to be a built-in feature rather than
just a backup tool. File system actions, including hard links, could be
replicated when they are made, so that you never have to go back and
find them.

To avoid performance problems, you could log a sort of file-action
journal, and replay it during the day while backups are idle.

Joe



Re: [BackupPC-users] limit restore to a share

2008-02-06 Thread Joe Krahn
ADNET Ghislain wrote:
 I modified the rsync code to limit restore to a share:
 
 against (# Version 3.0.0, released 28 Jan 2007)
 
 (root) diff /usr/local/BackupPC/lib/BackupPC/Xfer/Rsync.pm /usr/local/BackupPC/lib/BackupPC/Xfer/Rsync.pm.orig
 134,149d133
 < ## AQUEOS debut
 <     if ( defined $conf->{rsyncRestoreLimitToShare} ) {
 <         my $aqflag = 0;
 <         my $aqshare;
 <         foreach $aqshare ( @{$conf->{rsyncRestoreLimitToShare}} ) {
 <             $aqflag = 1 if $remoteDir =~ /^$aqshare/;
 <         }
 <         if ( $aqflag == 0 ) {
 <             my $str = "Erreur vous devez restorer dans " . join(' ou ', @{$conf->{rsyncRestoreLimitToShare}})
 <                       . " uniquement et non pas $remoteDir\n";
 <             $t->{XferLOG}->write(\$str);
 <             $t->{hostError} = "none";
 <             return;
 <         }
 <     }
 < ## AQUEOS fin
 
 
 You have to define this parameter in your host config file:
 
 $Conf{rsyncRestoreLimitToShare} = ['/var','/home'];
 
 This way you should not be able to restore any files outside of those
 directories.
 
 I am a bad coder, so perhaps someone here could help make this better and
 do the same for the other backup methods like tar, etc...
 
 Legal boilerplate: any ownership of this wonderful piece of code is given
 to Craig, so if by any chance he takes it into BackupPC, he has the rights
 to it ;)
 
 

If you are worried about security issues with remote root write access,
a better approach is to restrict write access at the client computer,
but that means different implementations depending on the Xfer method.
But, it is good to have it on the server end as well.

I am using rsync over ssh, with an ssh key that restricts access using
the command= feature of the authorized_keys file. The proxy command can
analyze the command requested over ssh and enforce any restrictions you want.

Joe Krahn



[BackupPC-users] Several Rsync options result in a fileListReceive failed error

2008-02-06 Thread Joe Krahn
I attempted to include extended attributes, and found that doing so results in
the error message "fileListReceive failed". I also found an old message
on this list about having to use --devices instead of -D. I think the
problem is that BackupPC's Perl rsync client (File::RsyncP) is an incomplete
implementation. It would be good to have documentation about which flags
are supported. If there are only a few useful variations, maybe the CGI
config editor should just offer a few checkbox options instead of argument
strings.
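
For reference, a conservative baseline - roughly the long-form equivalents
of the stock arguments - seems to be safe; anything beyond these (such as
--xattrs, or the combined -D) apparently needs testing against File::RsyncP:

  $Conf{RsyncArgs} = [
      '--numeric-ids', '--perms', '--owner', '--group', '--devices',
      '--links', '--times', '--block-size=2048', '--recursive',
  ];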

Joe Krahn



Re: [BackupPC-users] Strange backup failures when updated to new kernel

2008-02-06 Thread Joe Krahn
Michael Mansour wrote:
 Hi,
 
 Ever since I updated to the latest Linux kernel (Use Scientific Linux 4.5), I
 get these same backup errors everyday on each of the SL4.5 servers:
 
 Xfer PIDs are now 26468,26747
 [ skipped 20459 lines ]
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2100/Makefile:
 md4 doesn't match: will retry in phase 1; file removed
 [ skipped 1 lines ]
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2200/Makefile:
 md4 doesn't match: will retry in phase 1; file removed
 [ skipped 563 lines ]
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/include/config/MARKER: md4 doesn't
 match: will retry in phase 1; file removed
 [ skipped 25874 lines ]
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2100/Makefile:
 fatal error: md4 doesn't match on retry; file removed
 MD4 does't agree: fatal error on #194475
 (usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2100/Makefile)
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2200/Makefile:
 fatal error: md4 doesn't match on retry; file removed
 MD4 does't agree: fatal error on #194477
 (usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/drivers/net/wireless/ipw2200/Makefile)
 usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/include/config/MARKER: fatal error:
 md4 doesn't match on retry; file removed
 MD4 does't agree: fatal error on #195064
 (usr/src/kernels/2.6.9-67.0.1.EL-smp-x86_64/include/config/MARKER)
 Done: 20589 files, 1215394420 bytes
 
 Any ideas what would be causing this?
 
 Thanks.
 
 Michael.
My guess would be that you have files getting modified during the
backup. Maybe you have some sort of automated module-recompile script
that is scheduled at the same time as the backups. Check the timestamps on
those files.

Joe Krahn



[BackupPC-users] Improving security, and user options

2008-02-05 Thread Joe Krahn
(Maybe this should be posted to -devel?)
Unrestricted remote root access by a non-root user is generally not a
secure design. There are many ways to restrict the access to backup
activities, but they can't be enforced if the access includes
unrestricted write access. I think that the secure approach is to
require that restores be run by root from the local machine, rather than
allowing a remote push. (Isn't that true for other backup systems?)

I think the best approach is for remote restores to be allowed for
non-privileged files, but run under the account of the user requesting
the restore. Remote restoration of privileged files should
require some sort of authentication from the local root account.

This should not be too hard to set up using ssh restrictions, if
BackupPC includes the user name as one of the arguments substituted into
the backup command, along with some per-user ssh key management. You can restrict
remote-root access to read-only using the command= setting in the ssh
authorized_keys file; it runs a pre-defined command in place of the
requested ssh command. The proxy command could handle authentication for
write access, or you could just require that restores are handled
by downloading a tar/zip archive, or restored to a chrooted temporary directory.

Does this sound like a good plan to other BackupPC users?

Most of this can be done just by getting a $User variable into the rsync
command substitutions. To do it well, BackupPC needs user-specific
configuration to handle the ssh keys for each user. That would also allow
for user-specific e-mail settings. It would also be good to allow different
user names for the same person; we have several people with Linux user
names that are different from their Windows domain user names.

I think that these would be fairly easy to implement for someone
familiar with the BackupPC source code.

Joe Krahn



Re: [BackupPC-users] Improving security, and user options

2008-02-05 Thread Joe Krahn
Rich Rauenzahn wrote:
 
 
 Joe Krahn wrote:
 (Maybe this should be posted to -devel?)
 Unrestricted remote root access by a non-root user is generally not a
 secure design. There are many ways to restrict the access to backup
   
 
 This seems like a good chance to explain how I handle the rsync security
 -- I prefer it over the sudo method and did not like the idea of a
 remote ssh root login.
 
 For remote backups, I set up a nonprivileged account that I configure for
 password-less login from the backup server.  I then set up rsyncd to
 listen only on localhost on the remote host.  I also set up an
 rsyncd.secrets file and configure the rsyncd.conf shares to be read-only.
 To back up, I create a tunnel using the password-less login and then
 back up over the tunnel.  For local backups, you obviously don't need the
 tunnel -- just connect to localhost.
 
 Rich
There are several secure ways to set up a read-only backup system, but
that loses the convenience of browsing and restoring files via the web
interface. But, users can still directly download files or tar archives,
so it is a reasonable approach, and probably the right thing to do for now.
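
For reference, the per-host config that setup implies is roughly the
following (port, module name and credentials are placeholders; the tunnel
itself would be something like "ssh -f -N -L 5873:127.0.0.1:873 backuppc@client"):

  $Conf{XferMethod}       = 'rsyncd';
  $Conf{ClientNameAlias}  = 'localhost';   # connect to the local end of the tunnel
  $Conf{RsyncdClientPort} = 5873;          # forwarded to port 873 on the client
  $Conf{RsyncShareName}   = 'root';        # module defined in the client's rsyncd.conf
  $Conf{RsyncdUserName}   = 'backuppc';
  $Conf{RsyncdPasswd}     = 'secret';      # matches the client's rsyncd.secrets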

Joe




Re: [BackupPC-users] (no subject)

2008-02-04 Thread Joe Krahn
I just figured this out myself. You have to run ssh with the flags -t -t
(or just -tt). The flag says to allocate a tty, but you need two of them to
force a tty when ssh itself is not called from a tty.

BUT, I also see that you are using ssh -l root. The point of using
sudo is that you don't need remote-root access. The default
configuration needs to be better designed to get this right, with proper
security considerations.
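
Something along these lines addresses both points (the unprivileged login
name is a placeholder, and it assumes sudoers lets that account run rsync
as root without a password):

  $Conf{RsyncClientCmd} = '$sshPath -q -x -tt -l backup $host nice -n 19 sudo $rsyncPath $argList+';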

Another problem I realized is that rsync cannot preserve user/group
values unless the local rsync is run as root.

I am going to try to work on this. Hasn't someone else on this list
worked on this??


Mariano Sokal wrote:
 Hello again, I tried to change what you said... (sudo -i) and now I get a 
 different error:
 
 Fatal error (bad version): stdin: is not a tty
 
 This is the configuration file:
 
 #
 ## Backup For Host: host1
 ##
 $Conf{XferMethod} = 'rsync';
 $Conf{RsyncClientPath} = '/usr/bin/rsync';
 $Conf{RsyncClientCmd} = '$sshPath -l root xxx.xxx.xxx.xxx nice -n 19 sudo -i $rsyncPath $argList+';
 $Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l root xxx.xxx.xxx.xxx nice -n 19 sudo $rsyncPath $argList+';
 $Conf{RsyncShareName} = ['/etc', '/var/www'];
 $Conf{BackupFilesExclude} = ['swapfile', 'access_log', 'error_log'];
 $Conf{PingMaxMsec} = 300;
 #
 
 What would be wrong? 
 
 Thanks and regards,
 Mariano 
 
 -Original Message-
 From: Joe Krahn [mailto:[EMAIL PROTECTED] 
 Sent: viernes, 01 de febrero de 2008 07:54 p.m.
 To: Mariano Sokal
 Subject: Re: [BackupPC-users] (no subject)
 
 Mariano Sokal wrote:
 Hello from Buenos Aires.

  

 I have just installed backuppc, and I was doing some tests with
 ./BackupPC_dump -v -f host1.

  

 And I´m getting the following error:

  

 Running: /usr/bin/ssh -l root xx.xx.xx.xx nice -n 19 sudo i
 /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group
 --devices --links --times --block-size=2048 --recursive -D
 --exclude=swapfile --exclude=access_log --exclude=error_log
 --ignore-times . /etc/

  

  

 Got remote protocol 1868854643

 *Fatal error (bad version): sudo:*

 Can't write 43 bytes to socket

  

 Any ideas?

  

 Best regards,

 Mariano Sokal


 
 Is that sudo i supposed to be sudo -i?
 
 Joe Krahn
 
 




Re: [BackupPC-users] (no subject)

2008-02-04 Thread Joe Krahn
Les Mikesell wrote:
 Joe Krahn wrote:
 

 BUT, I also see that you are using ssh -l root. The point of using
 sudo is that you don't need remote-root access. The default
 configuration needs to be better designed to get this right, with proper
 security considerations.
 
 When you need this remote access to have read/write permission on all of
 your target files, how much more secure do you think you can make it?
 
Right! So sudo really is not useful with the BackupPC design. Ideally,
automatic restores should be executed as the user that requested
them, and restoration of privileged files should require that the local
restore command be invoked or authenticated locally by root. If you
disallow remote root write access, then some access restrictions can
actually be enforced.

Some security can be added with ssh; see http://www.linux.com/feature/113847

Joe Krahn



[BackupPC-users] Suggestion for email aliases, etc.

2008-02-01 Thread Joe Krahn
BackupPC looks really good, but could use some enhancements for more
flexibility. Here are some ideas.

It would be nice to have an e-mail alias map for user names, instead of
just $user$domain. I put together a simple sendmail proxy to convert
email addresses, but it would be much better if the web interface
allowed a user to define an address, as well as to disable it temporarily.

It would also be nice to allow better customization of specific shares.
Right now, there is a special case for Outlook files. Is this because
they have to be backed up differently, or just because they are
[possibly] a higher priority? Why not make special backup groups more
general?

One thing I liked about BoxBackup is that it stored files encrypted,
which means that users don't have to trust the backup admin, and it
means that you never have to worry about properly erasing a failing
disk. However, that makes it impossible to pool common files. It would
be nice to have a scheme where you can define a given share that uses
encryption, without the performance/storage disadvantages of using it
for the whole system. Of course, some admins may want to be able to see
everything.

Joe



[BackupPC-users] SELinux problems with rsync

2008-02-01 Thread Joe Krahn
I am using Fedora 7. SELinux blocks rsync access to all files that don't
have a public_content label. The SELinux policy includes the boolean
option rsync_export_all_ro, but enabling it did not help. It seems to
apply only to rsyncd, and not to rsync over ssh.

I switched to rsync over ssh, but I now find the strange problem that
rsync fails if given the --xattrs option, which saves SELinux labels. It
did not generate an SELinux alert, so perhaps there are some limitations
in rsync when run with explicit --server args the way BackupPC does?
Has anyone else figured this out?

I was able to configure backups using tar, but rsync is more reliable
for incremental backups. OTOH, rsync takes more CPU to check for changed
files.

Also, the default config for ssh is to directly log in as root. It is
better to log in as an unprivileged user and use sudo, which can be
restricted for backups. It is easy enough to customize this, but it is
better for the defaults to encourage better security.

Joe Krahn
