Re: [BackupPC-users] Problems excluding files.

2008-08-05 Thread Gabriel Landais
On Tue, Aug 5, 2008 at 05:36, Steve Blackwell [EMAIL PROTECTED] wrote:
 Aahh!!! Thanks Holger.
 I had not understood the use of the hash key. I thought it was just a
 kind of group name for some excludes.
 Now everything is working as expected.

 Steve

Hi,
 it is not well explained in the documentation, so I'll just set * for all my
exclude keys, directly in the .pl files, as the web interface does not allow
that :(
 Cheers
 Gabriel

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-05 Thread Rob Owens
Holger Parplies wrote:
 Your best options remain to either do a block level copy of the file system
 (dd) or to start over. 

One suggestion I've heard on this list before, which may be a good one 
for you, is to simply start over with a new pool but save the existing 
pool for a few weeks/months/years.  Then if you never need to restore a 
backup from the old pool, you will have saved yourself a lot of effort.

-Rob








Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-05 Thread Kurt Tunkko
Hello Holger,

thanks for your detailed answer, even though I now have the feeling that 
I don't want to copy the pool data :-/

As far as I understood, preserving hardlinks while copying the massive 
number of files may be a problem.

Other options:

1) Using dd to copy the old hard drive to the new one. Because the old 
hard drive uses LVM and the new one is a RAID, I don't know if this will work

2) Using LVM and appending the RAID to the LVM volume. This sounds like a 
good solution, but I just want to have /var/lib/backuppc on the RAID, no 
other files.

3) Rename the 'old' /var/lib/backuppc directory (keeping it until 
everything is working) and mount the RAID to /var/lib/backuppc.
This means starting with a new (empty) pool. After some time all 
backups should be completed, and as soon as everything is working and I'm 
sure I don't need the old backups, I can delete the old 
/var/lib/backuppc directory.

I'll try Option 3 and let you know how, and whether, this works.
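For the archive, a rough sketch of what Option 3 amounts to (the device name, init script path and ownership are assumptions; adjust to your distribution):

```shell
# Stop BackupPC so the pool is not written to while we switch.
/etc/init.d/backuppc stop

# Keep the old pool around until the new one has proven itself.
mv /var/lib/backuppc /var/lib/backuppc.old
mkdir /var/lib/backuppc

# Mount the RAID1 device (assumed to be /dev/md0) as the new, empty pool.
mount /dev/md0 /var/lib/backuppc
chown backuppc:backuppc /var/lib/backuppc

/etc/init.d/backuppc start
```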

- Kurt



Holger Parplies wrote:
 Hi,
 
 Kurt Tunkko wrote on 2008-08-04 23:00:35 +0200 [[BackupPC-users] Just to make 
 sure: How to move/copy /var/lib/backuppc to another place (RAID1)]:
 [...]
 I found: 'change archive directory' on the backuppc wiki
 http://backuppc.wiki.sourceforge.net/change+archive+directory

 Option 1 suggests using:

  cp -pR /var/lib/backuppc /mnt/md0

 while Option 3 suggests moving the directory to another place.

 In order to be safe in case something bad happens while transferring the 
 data to the RAID, I don't want to use 'move'.
 
 ['mv'? Really? Just suppose *that* gets interrupted part way through ...]
 
 Just to make sure that I don't do something stupid before copying tons 
 of GB of backup data, I would like to have some short feedback regarding 
 the command in option 1. Will this do the job?
 
 It is *guaranteed* not to. Whoever put that in the wiki either does not have
 the slightest clue what he is writing about, or he is talking about an empty
 pool (read: BackupPC was freshly installed and *no backups done*) and didn't
 make that unmistakably clear.
 
 Even the potentially correct 'cp -dpR ...' will not work in the general case.
 The command from the wiki does *not* preserve hard links. Your pool will
 explode to at least twice the size, and that's assuming every pooled file is
 only used once (which would practically mean you've only got one backup). If
 you've got the space, you *could* get away with it, because future backups
 would be pooled, but for current backups, the benefits of pooling would be
 forever lost.
 The next run of BackupPC_nightly would empty the pool (so you might as well
 not copy it in the first place), and the files would need to be re-compressed
 during future backups.
 So: while it is conceivable that someone might use this as a last resort, you
 don't want to migrate your pool like this.
 
 The version that *does* preserve hard links (the cp -d option) will work for
 structures up to a certain limited size.
 
 There seem to be people on the list who repeatedly insist that it worked for
 them, so it will work for you (despite the thread already containing an
 explanation to the contrary). Apparently it has even made it into the wiki.
 
 
 On the other hand, there have also been countless reports of problems with
 *any* file-based copying of the BackupPC pool using general-purpose tools -
 cp, rsync, and tar spring to mind. They either run out of memory or take too
 long (read: days to weeks, meaning they are usually aborted at some point; I'm
 not sure whether they would eventually finish). This is, basically, due to the fact
 that you cannot create a hard link to an inode based on the inode number. You
 need a path name, i.e. a name of the file to link to. For a few hundred files
 with a link count of more than one, it's no problem to store the information
 in memory (and that is what general-purpose tools are probably expecting). For
 100 million files with more than one link, that obviously won't work any more.
 Add to that the delays of chaotically seeking from one end of the pool disk to
 the other (the kernel needs to look up the paths you're linking to, and
 there's not much chance of finding anything in the cache ...), and you'll get
 an idea of where the problem is. Lots of memory will help, preferably enough
 to fit the pool into cache altogether ;-).
 
 
 Your best options remain to either do a block level copy of the file system
 (dd) or to start over. You can, of course, *try* cp/rsync/tar and hope your
 pool is small enough (hint: count your pool files). I'm not saying it never
 works, only to be aware of what you're facing. Remember, for cp you need
 -d, for rsync -H and for dd a destination partition at least as big
 as the source file system. I haven't heard reports of problems with file
 system resizers and BackupPC pools, but I'd be cautious just the same.
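To see the difference Holger describes between 'cp -pR' and 'cp -dpR', you can try it on a toy tree (GNU cp and stat assumed):

```shell
demo=$(mktemp -d)
mkdir "$demo/src"
echo pooled > "$demo/src/a"
ln "$demo/src/a" "$demo/src/b"        # a and b now share one inode

cp -pR  "$demo/src" "$demo/plain"     # -p alone does NOT preserve hard links
cp -dpR "$demo/src" "$demo/linked"    # -d (--no-dereference --preserve=links) does

stat -c %h "$demo/plain/a"            # prints 1: a and b became independent copies
stat -c %h "$demo/linked/a"           # prints 2: still one inode with two names
```

Scaled up to a BackupPC pool, the first form is exactly what at least doubles your disk usage.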
 
 Can I just remount my RAID to 
 /var/lib/backuppc afterwards and be sure that everything is working?
 
 If anyone has 

Re: [BackupPC-users] move a specific backup (share) from one pool to another pool

2008-08-05 Thread Jon Craig
I am inferring from your email that your data filesystem for
BackupPC is maxed out and you cannot grow it.  Your thought is to
create a new filesystem and start splitting things up between the two.  If
this is true, then I believe the answer is no.  BackupPC does
de-duplication using hard-links between the files in the pc/[host]/*
directory and the (c)pool directory.  Hard-links cannot cross
filesystems, so creating a second filesystem won't work.  Your only
choice would be to create a second BackupPC instance and move clients
over to the new instance.
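The same-filesystem restriction is easy to demonstrate (the cross-device path below is illustrative only):

```shell
work=$(mktemp -d)
echo data > "$work/cpool_file"
ln "$work/cpool_file" "$work/pc_file"   # same filesystem: succeeds
stat -c %h "$work/cpool_file"           # prints 2: one inode, two names

# Across a mount point the same ln fails with EXDEV
# ("Invalid cross-device link"), e.g.:
#   ln "$work/cpool_file" /mnt/other-filesystem/pc_file
```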

I also anticipate that moving clients' past backups between instances
would prove very difficult. You may be able to do something with
BackupPC_tarCreate and BackupPC_tarExtract.
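A very rough sketch of that idea, using BackupPC's bundled tools (the host name, share path and compression level are placeholders; check the flags against the BackupPC documentation before relying on this):

```shell
# On the old instance, as the backuppc user: dump the most recent
# backup (-n -1) of one share to a tar stream.
BackupPC_tarCreate -h myhost -n -1 -s /home/user . > /tmp/myhost_home.tar

# Transfer the tar file, then on the new instance feed it to
# BackupPC_tarExtract (arguments: client, share, compression level).
BackupPC_tarExtract myhost /home/user 3 < /tmp/myhost_home.tar
```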

On 8/5/08, sabujp [EMAIL PROTECTED] wrote:

  Is it possible to move all of the backups and incrementals of a particular 
 share (/home/user) from one data directory (pool) into another?

  Let's say I've got 100 home directory shares that use up 16TB of data in one 
 top level directory (pool). I cannot expand this volume beyond 16TB but I can 
 get another 16TB volume.

  Is it possible to extract the backup data for a particular share from one 
 top level data directory (pool) and move it to another top level data 
 directory (pool) on another file system? If each of the home directories were 
 equivalent in the amount of space used, then I could move 50 of the backed up 
 shares to the other volume and then spread the original 16TB across two 16TB 
 volumes (leaving 8TB on both volumes).







-- 
Jonathan Craig



Re: [BackupPC-users] Problems excluding files.

2008-08-05 Thread Bowie Bailey
Gabriel Landais wrote:
 On Tue, Aug 5, 2008 at 05:36, Steve Blackwell [EMAIL PROTECTED]
 wrote: 
  Aahh!!! Thanks Holger.
  I had not understood the use of the hash key. I thought it was just
  a kind of group name for some excludes.
  Now everything is working as expected.
  
  Steve
 
 Hi,
  it is not well explained in the documentation, so I'll just set * for all my
 exclude keys, directly in the .pl files, as the web interface does not allow
 that :(
  Cheers
  Gabriel

The web interface didn't give me any problems creating a * key.  What
happened when you tried it?

-- 
Bowie



Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-05 Thread Adam Goryachev

Kurt Tunkko wrote:
 Hello Holger,
 
 thanks for your detailed answer, even though I now have the feeling that 
 I don't want to copy the pool data :-/
 
 As far as I understood, preserving hardlinks while copying the massive 
 number of files may be a problem.
 
 Other options:
 
 1) Using dd to copy the old hard drive to the new one. Because the old 
 hard drive uses LVM and the new one is a RAID, I don't know if this will work
 
 2) Using LVM and appending the RAID to the LVM volume. This sounds like a 
 good solution, but I just want to have /var/lib/backuppc on the RAID, no 
 other files.

There should be no problem using dd to move an LVM volume to a RAID
volume or anything else. Basically, somewhere you have a block device
(the thing you created your filesystem on). Just dd that to your new
block device and you are done.

Put another way: whatever you would pass as HERE in the command below
is your old block device:
mount HERE /var/lib/backuppc

Sure, it is all confusing as hell, until you remember that you simply
want to copy whatever the FS level is looking at, and your destination
device is again some RAID/LVM/loopback file/whatever block device.

If your source device is equal to or smaller than the destination device,
and it is feasible to copy the entire data from the source to the
destination, then dd is the perfect tool, and this is the ideal solution
to the problem of moving the pool.

The only reasons why you would not use it:
1) You can't physically connect that many HDDs at the same time to the
same machine
2) The source and destination are far apart (i.e., a slow network connection
between them)
3) The destination is too small (should you really be doing this anyway?)
4) You want to use a different filesystem format on the destination
5) The source and destination are actually the same device, and you just
want to re-arrange them (e.g., migrating from LVM to md or similar).
6) Probably some others, but by now, you should realise that dd is
probably the ideal tool for the job, and you should go ahead and make
use of it.

Personally, I would run my filesystem's fsck and force a check afterward,
just to let myself sleep a little better at night.
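Putting the above together as a sketch (all device names and paths are assumptions; double-check if= and of= before running dd, since swapping them destroys the source):

```shell
/etc/init.d/backuppc stop            # never copy a live pool
umount /var/lib/backuppc

# Block-level copy from the old LVM logical volume to the RAID device.
dd if=/dev/vg0/backuppc of=/dev/md0 bs=4M

# Force a filesystem check on the copy.
fsck -f /dev/md0

mount /dev/md0 /var/lib/backuppc
/etc/init.d/backuppc start
```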

I hope someone works out a way to add the above to the wiki, in a more
meaningful/less wordy way.

Regards,
Adam



Re: [BackupPC-users] Just to make sure: How to move/copy /var/lib/backuppc to another place (RAID1)

2008-08-05 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2008-08-06 01:28:21 +1000 [Re: [BackupPC-users] Just to 
make sure: How to move/copy /var/lib/backuppc to another place (RAID1)]:
 Kurt Tunkko wrote:
  Other options:
  
  1) Using dd and copy the old harddrive to the new one.

as Adam pointed out, block device is block device. There is no reason not to
do this.

  2) Using LVM and append the RAID to the LVM-Volume - this sounds like a 
  good solution, but I just want to have /var/lib/backuppc on the RAID, no 
  other files.

Actually, you can get what you want along this path too.

- vgextend your VG to include the RAID device
- pvmove your backup LV off the current disk onto the RAID device
- vgsplit your backup LV out of the current VG into a new one.
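In commands, the three steps might look like this (VG, LV and device names are assumptions):

```shell
pvcreate /dev/md0                        # prepare the RAID device as a PV
vgextend vg0 /dev/md0                    # add it to the existing VG
pvmove -n backuppc /dev/sda2 /dev/md0    # move only the backup LV onto the RAID
vgsplit vg0 vg_backup /dev/md0           # give the RAID PV its own VG
```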

The benefit is that you can - in theory - do all of this without stopping
BackupPC and unmounting the file system, even during running backups. It
probably wouldn't be good for performance, so you'd preferably choose a
period of time when BackupPC is idle :).

If you're familiar with LVM (and want to keep your pool file system in an LV
as opposed to on the RAID with no LVM in between), this gives you slightly
more protection against user errors. 'dd' won't complain if you accidentally
swap the source and destination devices; with LVM this can't really happen. If you
lose power during the copy operation (pvmove), LVM should be able to resume
where it was interrupted.

On the other hand, the 'dd' approach is simpler, and you get to keep a working
backup of your pool.

If in doubt, stick with 'dd'.

 I hope someone works out a way to add the above to the wiki,

I agree with that part :) [the above referring to Adam's message].

 in a more meaningful/less wordy way.

There was nothing wrong with your explanation. I'd prefer correct content
over well-written content in the wiki any day - not meaning to imply yours
was not well-written.

Regards,
Holger



[BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-08-05 Thread Aleksey Tsalolikhin
Hi.  I use rsync as my transfer mechanism.

I am considering upgrading rsync from 2.6.9 to 3.x, as I am unable to
run a full backup with rsync 2.6.9 -- memory utilization is through the
roof (lots of files to transfer) and the server slows to an unusable
crawl.

rsync 3.x has this fixed. Is there any reason I can't tear out 2.6.9 and
plug in 3.0.3? Has anybody done that?

Best,
-at



Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-08-05 Thread Michael Mansour
Hi,

 Hi.  I use rsync as my transfer mechanism.
 
 I am considering upgrading rsync from 2.6.9 to 3.x, as I am unable to
 run a full backup with rsync 2.6.9 -- memory utilization is through the
 roof (lots of files to transfer) and the server slows to an unusable
 crawl.
 
 rsync 3.x has this fixed. Is there any reason I can't tear out 2.6.9 and
 plug in 3.0.3? Has anybody done that?

I have been using rsync 3.0.3 on my BackupPC server ever since it was
released, and have seen no issues from it at all.

Regards,

Michael.

 Best,
 -at
 




Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-08-05 Thread Bernhard Egger

I had similar issues (backups wouldn't complete), and after upgrading
both servers and clients to rsync 3.0.2, the problems went away.

I use BackupPC in two different locations with quite a few clients and
have not had any issues caused by rsync 3.0.2 so far.

- --be

Aleksey Tsalolikhin wrote:
 Hi.  I use rsync as my transfer mechanism.
 
 I am considering upgrading rsync from 2.6.9 to 3.x, as I am unable to
 run a full backup with rsync 2.6.9 -- memory utilization is through the
 roof (lots of files to transfer) and the server slows to an unusable
 crawl.
 
 rsync 3.x has this fixed. Is there any reason I can't tear out 2.6.9 and
 plug in 3.0.3? Has anybody done that?
 
 Best,
 -at



Re: [BackupPC-users] clustered file system and multiple servers

2008-08-05 Thread dan
I have investigated cluster filesystems for BackupPC.  You will get very
poor I/O on all cluster filesystems.  I/O is the most important factor for
BackupPC, and a cluster filesystem can be an order of magnitude slower
than a local filesystem (or even iSCSI or AoE).  I had hoped to use a
cluster filesystem to maintain a redundant copy of my BackupPC data, but
performance was incredibly bad.  I find FUSE filesystems such as mysqlfs
or zfs-fuse to be much faster.



On Mon, Aug 4, 2008 at 10:56 PM, Holger Parplies [EMAIL PROTECTED] wrote:

 Hi,

 sabujp wrote on 2008-08-04 23:18:41 -0400 [[BackupPC-users]  clustered file
 system and multiple servers]:
 
  Can a dev let me know if the files in the pool are FLOCK'd before writing,

 use the force, read the source. 'grep -r flock backuppc-3.1.0' suggests that
 flock is used, but not on pool files. Not surprising, considering
 BackupPC_link and BackupPC_nightly (or two instances of BackupPC_link) may
 not run concurrently (BackupPC_link is responsible for entering new files
 into the pool).

 Come to think of it, the reason for this restriction is of a different nature:
 BackupPC_nightly sometimes needs to rename pool files (with a common BackupPC
 hash, when one or more files out of the chain are deleted), while BackupPC_link
 may insert a new file with the same BackupPC hash. You can't prevent the
 resulting race condition with flock() - at least you don't effectively change
 anything in the single-threaded case (you'd need a rather global lock).

  i.e. is there a chance that two servers backing up into the same top level
  data directory could mangle a file in the pool in this manner?

 I don't think you'd have mixed file contents (or, effectively, a corrupt
 compressed file), but there seems to be a chance of linking a file in a backup
 to the wrong pool file (wrong contents altogether).

 You could probably use flock() to prevent two instances of
 BackupPC_link/BackupPC_nightly running simultaneously on different servers,
 but there are more things you would want to think about (running
 BackupPC_nightly on more than one server does not make much sense, even if
 they don't run concurrently; limiting simultaneous backups over all servers;
 ensuring BackupPC_dump is not run twice simultaneously for the same host ...).

 In short: sharing a pool between servers is currently not supported.
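As an illustration of the flock() idea (not something BackupPC does itself): the flock(1) utility can serialize a maintenance job on a single machine. Note that flock() is advisory and historically unreliable over NFS, which is one more reason pool sharing between servers is hard. The lock path is an assumption:

```shell
LOCK=/tmp/backuppc_maintenance.lock

(
  # Fail immediately (-n) if another instance already holds the lock.
  flock -n 9 || { echo "another instance is running"; exit 1; }
  echo "lock held, safe to run maintenance"
  # BackupPC_nightly would run here
) 9>"$LOCK"
```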

 Regards,
 Holger




Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-08-05 Thread dan
rsync 3 will work, but it will function just like rsync 2 did, without the
new features.

On Tue, Aug 5, 2008 at 8:00 PM, Bernhard Egger [EMAIL PROTECTED] wrote:


 I had similar issues (backups wouldn't complete), and after upgrading
 both servers and clients to rsync to 3.0.2, the problems went away.

 I use BackupPC in two different locations with quite many clients and
 have not had any issues caused by rsync 3.0.2 so far.

 - --be

 Aleksey Tsalolikhin wrote:
  Hi.  I use rsync as my transfer mechanism.
 
  I am considering upgrading rsync from 2.6.9 to 3.x, as I am unable to
  run a full backup with rsync 2.6.9 -- memory utilization is through the
  roof (lots of files to transfer) and the server slows to an unusable
  crawl.
 
  rsync 3.x has this fixed. Is there any reason I can't tear out 2.6.9 and
  plug in 3.0.3? Has anybody done that?
 
  Best,
  -at




Re: [BackupPC-users] Is it OK to use rsync 3.0.3 with BackupPC 3.1.0 ?

2008-08-05 Thread Bernhard Egger

Yes, because BackupPC uses its own implementation of the rsync protocol.
Nevertheless, backups that wouldn't complete with 2.6.9 now work
perfectly, so I guess they must have fixed some problems on the client
as well.

dan wrote:
 rsync 3 will work but will function just like rsync 2 did without the
 new features.
 
 On Tue, Aug 5, 2008 at 8:00 PM, Bernhard Egger [EMAIL PROTECTED] wrote:
 
 I had similar issues (backups wouldn't complete), and after upgrading
 both servers and clients to rsync to 3.0.2, the problems went away.
 
 I use BackupPC in two different locations with quite many clients and
 have not had any issues caused by rsync 3.0.2 so far.
 
 --be
 
 Aleksey Tsalolikhin wrote:
 Hi.  I use rsync as my transfer mechanism.
 
  I am considering upgrading rsync from 2.6.9 to 3.x, as I am unable to
  run a full backup with rsync 2.6.9 -- memory utilization is through the
  roof (lots of files to transfer) and the server slows to an unusable
  crawl.
 
  rsync 3.x has this fixed. Is there any reason I can't tear out 2.6.9 and
  plug in 3.0.3? Has anybody done that?
 
 Best,
 -at
