[rdiff-backup-users] Mozy Online Back-Up Problems

2009-09-22 Thread simsam

My experience with Mozy was not good. I installed the Mozy Free Backup
software on my computer as per their instructions. After installation, my
computer kept freezing, and eventually it CRASHED my computer. Thank God I
have an additional backup of my data. I need storage space, but with a more
organized approach. Does anybody provide automated backups?

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Piotr Karbowski
On Tue, Sep 22, 2009 at 4:36 PM, Dominic Raferd domi...@timedicer.info wrote:
 Piotr Karbowski wrote:

 On Tue, Sep 22, 2009 at 10:51 AM, Dominic Raferd domi...@timedicer.info
 wrote:


 Piotr Karbowski wrote:


 On Mon, Sep 21, 2009 at 8:21 PM, Dominic Raferd domi...@timedicer.info
 wrote:



 Piotr Karbowski wrote:



 On Mon, Sep 21, 2009 at 2:02 PM, Matthew Miller mat...@mattdm.org
 wrote:




 On Mon, Sep 21, 2009 at 01:58:11PM +0200, Piotr Karbowski wrote:




 local rdiff-backup dir with remote server but how? If I use, for
 example, rsync, it still needs to check whole files for changes (read,
 download them) and upload only what is new. I hope you will understand
 what I need and help me.




 rsync won't check whole files unless you give the -c flag. Otherwise,
 it
 just compares metadata. I don't know if that's also the case with
 rdiff-backup, but I assume so.




 So I need to know how rdiff-backup compares data by default; if it is by
 size and mod-time it will not be so painful, but it will still download
 the changed files to generate the diff.



 Rdiff-backup is designed to be ultra-efficient at this activity. It
 only
 sends the changes in a file over the wire, not the whole file. To do
 this
 it
 uses the librsync library which is effectively the same as rsync. You
 can
 read more about the technique at http://en.wikipedia.org/wiki/Rsync.
 rdiff-backup does not use file times to determine whether to do
 backups.
 It
 can backup very large files with small changes very quickly.

 Dominic




 You don't understand me: rdiff-backup is efficient, but to make the diff
 it must read the WHOLE file, and on remote nfs or sshfs that is SLOW and
 painful.


 Sorry I get it now. But I think rdiff-backup and rsync require a separate
 computer at the remote end in order to optimise transfers, so if you are
 just accessing a remote share using sshfs or similar then they can still
 work of course but as you realise they will be slow. I guess it is not
 possible for you to run rdiff-backup (or rsync) at the remote end as
 well?

 You could run rdiff-backup locally to create a backup store and then
 mirror this store to the remote share using rcp. Still it will be slow,
 because rdiff-backup always stores the latest copy of each file in full,
 and so if this changes even slightly then the whole file will have to be
 transferred by rcp.

 Duplicity http://duplicity.nongnu.org/ might work better for you, because
 it
 uses forward diffs. Also its archives are secure.

 Although not directly relevant I found a page here
 http://www.psc.edu/networking/projects/hpn-ssh/ which provides a patch to
 greatly speed up OpenSSH in some situations.


 Duplicity is an interesting project. What do you think about using
 rdiff-backup to create a local backup, for example in /backups, and then
 sending this /backups to the remote server with duplicity? As far as I
 know duplicity encrypts its archives, so I DON'T need to use encfs,
 dm-crypt or anything else - only ssh access is needed (do I really not
 need duplicity on the remote server?).

 I just want to be able to send _ENCRYPTED_ backups to a remote server
 where I have only ssh access (sftp/scp work).

 I have not used duplicity myself, I use rdiff-backup. But I am not sure you
 need to run rdiff-backup first, I think duplicity may make its own local
 copies of backup increments so that it can send future increments without
 having to access the earlier increments from the remote share. And yes
 duplicity sends encrypted files so you don't need other encryption.

 Because duplicity uses forward diffs you have to keep all backups forever,
 and if there is any corruption of a file you lose all backups that occurred
 *after* the date of this file. Rdiff-backup uses reverse diffs so
 corruption, if it occurs, affects backups *before* the date of the corrupted
 backup. With rdiff-backup you can delete backups before a certain date
 (though in my experience the storage is so efficient it is not usually worth
 bothering), which is not possible with duplicity.

 Still it sounds like duplicity would better suit your needs.


So the best approach will be to use duplicity to make a local backup and
run rsync to send the new files (diffs) to the remote server, and if the
backup gets TOO big, just mv backup backup_old and start a new backup
(every 4 weeks, for example).
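
Roughly something like this, I think (hostnames, paths and the passphrase
handling below are just placeholders, and the exact options should be
checked against the duplicity and rsync man pages):

  #!/bin/sh
  # 1. encrypted backup into a local archive directory (duplicity uses GnuPG)
  export PASSPHRASE=mysecret
  duplicity /home file:///backups/current

  # 2. push only the new archive volumes to the remote server
  rsync -av /backups/current/ user@remote:/backups/current/

  # 3. rotation, e.g. every 4 weeks: park the old chain so the next
  #    duplicity run starts a fresh full backup
  # mv /backups/current /backups/current_old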


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Piotr Karbowski
On Tue, Sep 22, 2009 at 5:45 PM, Dominic Raferd domi...@timedicer.info wrote:
 Piotr Karbowski wrote:

 So the best approach will be to use duplicity to make a local backup and
 run rsync to send the new files (diffs) to the remote server, and if the
 backup gets TOO big, just mv backup backup_old and start a new backup
 (every 4 weeks, for example).


 I think so. There is a duplicity mailing list where you could probably get
 more help: http://lists.nongnu.org/mailman/listinfo/duplicity-talk


Thanks to you all!


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Dominic Raferd

Piotr Karbowski wrote:

On Mon, Sep 21, 2009 at 2:02 PM, Matthew Miller mat...@mattdm.org wrote:
  

On Mon, Sep 21, 2009 at 01:58:11PM +0200, Piotr Karbowski wrote:


local rdiff-backup dir with remote server but how? If I use, for
example, rsync, it still needs to check whole files for changes (read,
download them) and upload only what is new. I hope you will understand
what I need and help me.
  

rsync won't check whole files unless you give the -c flag. Otherwise, it
just compares metadata. I don't know if that's also the case with
rdiff-backup, but I assume so.



So I need to know how rdiff-backup compares data by default; if it is by
size and mod-time it will not be so painful, but it will still download
the changed files to generate the diff.
Rdiff-backup is designed to be ultra-efficient at this activity. It only 
sends the changes in a file over the wire, not the whole file. To do 
this it uses the librsync library which is effectively the same as 
rsync. You can read more about the technique at 
http://en.wikipedia.org/wiki/Rsync. rdiff-backup does not use file times 
to determine whether to do backups. It can backup very large files with 
small changes very quickly.
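
For example (hostname and paths are made up, and this assumes rdiff-backup
is installed at both ends so that librsync can compute the deltas remotely):

  # back up /home over ssh; only the changed blocks of changed files
  # cross the wire
  rdiff-backup /home user@backuphost::/backups/home

  # list the increments that have accumulated in the remote repository
  rdiff-backup --list-increments user@backuphost::/backups/home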


Dominic


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Dominic Raferd

Piotr Karbowski wrote:

On Mon, Sep 21, 2009 at 8:21 PM, Dominic Raferd domi...@timedicer.info wrote:
  

Piotr Karbowski wrote:


On Mon, Sep 21, 2009 at 2:02 PM, Matthew Miller mat...@mattdm.org wrote:

  

On Mon, Sep 21, 2009 at 01:58:11PM +0200, Piotr Karbowski wrote:



local rdiff-backup dir with remote server but how? If I use, for
example, rsync, it still needs to check whole files for changes (read,
download them) and upload only what is new. I hope you will understand
what I need and help me.

  

rsync won't check whole files unless you give the -c flag. Otherwise, it
just compares metadata. I don't know if that's also the case with
rdiff-backup, but I assume so.



So I need to know how rdiff-backup compares data by default; if it is by
size and mod-time it will not be so painful, but it will still download
the changed files to generate the diff.
  

Rdiff-backup is designed to be ultra-efficient at this activity. It only
sends the changes in a file over the wire, not the whole file. To do this it
uses the librsync library which is effectively the same as rsync. You can
read more about the technique at http://en.wikipedia.org/wiki/Rsync.
rdiff-backup does not use file times to determine whether to do backups. It
can backup very large files with small changes very quickly.

Dominic




You don't understand me: rdiff-backup is efficient, but to make the diff
it must read the WHOLE file, and on remote nfs or sshfs that is SLOW and
painful.
Sorry I get it now. But I think rdiff-backup and rsync require a 
separate computer at the remote end in order to optimise transfers, so 
if you are just accessing a remote share using sshfs or similar then 
they can still work of course but as you realise they will be slow. I 
guess it is not possible for you to run rdiff-backup (or rsync) at the 
remote end as well?


You could run rdiff-backup locally to create a backup store and then
mirror this store to the remote share using rcp. Still it will be slow,
because rdiff-backup always stores the latest copy of each file in full,
and so if this changes even slightly then the whole file will have to be
transferred by rcp.
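
A rough sketch of that idea (paths and hostname are made up; I show scp
here, rcp would be used the same way):

  # keep the rdiff-backup repository on local disk
  rdiff-backup /home /backups/home

  # then mirror the whole repository to the remote share in one go -
  # every changed file is copied in full, which is why this stays slow
  scp -r /backups/home user@remote:/backups/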


Duplicity http://duplicity.nongnu.org/ might work better for you, 
because it uses forward diffs. Also its archives are secure.


Although not directly relevant I found a page here 
http://www.psc.edu/networking/projects/hpn-ssh/ which provides a patch 
to greatly speed up OpenSSH in some situations.



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Mozy Online Back-Up Problems

2009-09-22 Thread Dominic Raferd

simsam wrote:

My experience with Mozy was not good. I installed the Mozy Free Backup
software on my computer as per their instructions. After installation, my
computer kept freezing, and eventually it CRASHED my computer. Thank God I
have an additional backup of my data. I need storage space, but with a more
organized approach. Does anybody provide automated backups?
  
For individual users it looks as if Windows 7 comes with some pretty
reasonable MS backup software (try googling "windows 7 backup"); it can do
scheduled backups to network shares too. Vista also had a similar
capability, but it is much less flexible. Windows XP's built-in backup is
pretty useless, but there are many other choices. Rdiff-backup is the
engine behind TimeDicer, http://www.timedicer.info/, a script
specifically aimed at Windows XP backups to a Linux machine.



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Dominic Raferd

Piotr Karbowski wrote:

On Tue, Sep 22, 2009 at 10:51 AM, Dominic Raferd domi...@timedicer.info wrote:
  

Piotr Karbowski wrote:


On Mon, Sep 21, 2009 at 8:21 PM, Dominic Raferd domi...@timedicer.info
wrote:

  

Piotr Karbowski wrote:



On Mon, Sep 21, 2009 at 2:02 PM, Matthew Miller mat...@mattdm.org
wrote:


  

On Mon, Sep 21, 2009 at 01:58:11PM +0200, Piotr Karbowski wrote:




local rdiff-backup dir with remote server but how? If I use, for
example, rsync, it still needs to check whole files for changes (read,
download them) and upload only what is new. I hope you will understand
what I need and help me.


  

rsync won't check whole files unless you give the -c flag. Otherwise,
it
just compares metadata. I don't know if that's also the case with
rdiff-backup, but I assume so.




So I need to know how rdiff-backup compares data by default; if it is by
size and mod-time it will not be so painful, but it will still download
the changed files to generate the diff.

  

Rdiff-backup is designed to be ultra-efficient at this activity. It only
sends the changes in a file over the wire, not the whole file. To do this
it
uses the librsync library which is effectively the same as rsync. You can
read more about the technique at http://en.wikipedia.org/wiki/Rsync.
rdiff-backup does not use file times to determine whether to do backups.
It
can backup very large files with small changes very quickly.

Dominic




You don't understand me: rdiff-backup is efficient, but to make the diff
it must read the WHOLE file, and on remote nfs or sshfs that is SLOW and
painful.
  

Sorry I get it now. But I think rdiff-backup and rsync require a separate
computer at the remote end in order to optimise transfers, so if you are
just accessing a remote share using sshfs or similar then they can still
work of course but as you realise they will be slow. I guess it is not
possible for you to run rdiff-backup (or rsync) at the remote end as well?

You could run rdiff-backup locally to create a backup store and then
mirror this store to the remote share using rcp. Still it will be slow,
because rdiff-backup always stores the latest copy of each file in full,
and so if this changes even slightly then the whole file will have to be
transferred by rcp.

Duplicity http://duplicity.nongnu.org/ might work better for you, because it
uses forward diffs. Also its archives are secure.

Although not directly relevant I found a page here
http://www.psc.edu/networking/projects/hpn-ssh/ which provides a patch to
greatly speed up OpenSSH in some situations.



Duplicity is an interesting project. What do you think about using
rdiff-backup to create a local backup, for example in /backups, and then
sending this /backups to the remote server with duplicity? As far as I
know duplicity encrypts its archives, so I DON'T need to use encfs,
dm-crypt or anything else - only ssh access is needed (do I really not
need duplicity on the remote server?).

I just want to be able to send _ENCRYPTED_ backups to a remote server
where I have only ssh access (sftp/scp work).
I have not used duplicity myself, I use rdiff-backup. But I am not sure 
you need to run rdiff-backup first, I think duplicity may make its own 
local copies of backup increments so that it can send future increments 
without having to access the earlier increments from the remote share. 
And yes duplicity sends encrypted files so you don't need other encryption.
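
As a rough example of what that could look like (host, paths and key id are
placeholders, and the exact sftp/scp backend syntax should be checked in
the duplicity man page for your version):

  # encrypt with a GnuPG key and send straight to an sftp-only server
  duplicity --encrypt-key ABCD1234 /home sftp://user@remote//backups/home

  # restore later, needing nothing but ssh access on the far side
  duplicity restore sftp://user@remote//backups/home /tmp/restored-home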


Because duplicity uses forward diffs you have to keep all backups 
forever, and if there is any corruption of a file you lose all backups 
that occurred *after* the date of this file. Rdiff-backup uses reverse 
diffs so corruption, if it occurs, affects backups *before* the date of 
the corrupted backup. With rdiff-backup you can delete backups before a 
certain date (though in my experience the storage is so efficient it is 
not usually worth bothering), which is not possible with duplicity.
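
The pruning I mean is rdiff-backup's --remove-older-than option; the
repository path below is just an example:

  # delete increments more than a year old; 1Y could also be e.g. 8W or 30D
  # (rdiff-backup may ask for --force when several increments would go)
  rdiff-backup --remove-older-than 1Y /backups/home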


Still it sounds like duplicity would better suit your needs.


___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.

2009-09-22 Thread Dominic Raferd

Piotr Karbowski wrote:

So the best approach will be to use duplicity to make a local backup and
run rsync to send the new files (diffs) to the remote server, and if the
backup gets TOO big, just mv backup backup_old and start a new backup
(every 4 weeks, for example).
  
I think so. There is a duplicity mailing list where you could probably 
get more help: http://lists.nongnu.org/mailman/listinfo/duplicity-talk



___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki


[rdiff-backup-users] Error

2009-09-22 Thread prateekmoturi

Hi, I am getting the following error message when I try to back up.
I am using rdiff-backup 1.2.8.
Can anyone please help me find a solution to the following error?


Traceback (most recent call last):
  File "/usr/bin/rdiff-backup", line 30, in ?
    rdiff_backup.Main.error_check_Main(sys.argv[1:])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 304, in error_check_Main
    try: Main(arglist)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 324, in Main
    take_action(rps)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 280, in take_action
    elif action == "backup": Backup(rps[0], rps[1])
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 337, in Backup
    backup_final_init(rpout)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 501, in backup_final_init
    checkdest_if_necessary(rpout)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/Main.py", line 920, in checkdest_if_necessary
    dest_rp.conn.regress.Regress(dest_rp)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 71, in Regress
    for rf in iterate_meta_rfs(mirror_rp, inc_rpath): ITR(rf.index, rf)
  File "/usr/lib/python2.4/site-packages/rdiff_backup/regress.py", line 197, in iterate_meta_rfs
    for raw_rf, metadata_rorp in collated:
  File "/usr/lib/python2.4/site-packages/rdiff_backup/rorpiter.py", line 100, in Collate2Iters
    try: relem2 = riter2.next()
  File "/usr/lib/python2.4/site-packages/rdiff_backup/metadata.py", line 274, in iterate
    for record in self.iterate_records():
  File "/usr/lib/python2.4/site-packages/rdiff_backup/metadata.py", line 283, in iterate_records
    next_pos = self.get_next_pos()
  File "/usr/lib/python2.4/site-packages/rdiff_backup/metadata.py", line 266, in get_next_pos
    newbuf = self.fileobj.read(self.blocksize)
  File "/usr/lib/python2.4/gzip.py", line 225, in read
    self._read(readsize)
  File "/usr/lib/python2.4/gzip.py", line 273, in _read
    self._read_eof()
  File "/usr/lib/python2.4/gzip.py", line 309, in _read_eof
    raise IOError, "CRC check failed"
IOError: CRC check failed
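
Would something like this be the right way to find which of the compressed
metadata files is damaged? (the repository path below is just a placeholder
for my real one)

  # test the integrity of every gzip file in the rdiff-backup-data directory
  find /backups/rdiff-backup-data -name '*.gz' -exec gzip -t {} \;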


Thanks in advance

___
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki