Re: [BackupPC-users] Unable to connect to BackupPC server

2007-03-30 Thread Winston Chan

> Date: Thu, 29 Mar 2007 04:28:25 +0200
> From: Holger Parplies <[EMAIL PROTECTED]>
> Subject: Re: [BackupPC-users] Unable to connect to BackupPC server
>   error
> To: Winston Chan <[EMAIL PROTECTED]>
> Cc: backuppc-users@lists.sourceforge.net
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=us-ascii
> 
> Hi,
> 
> Winston Chan wrote on 28.03.2007 at 21:04:08 [Re: [BackupPC-users] Unable to 
> connect to BackupPC server error]:
> > I had been running BackupPC on an Ubuntu computer for several months to
> > back up the computer to a spare hard drive without problems. About the time
> > I added a new host (Windows XP computer using Samba), I started getting
> > the following behavior:
> 
> first of all, your problem seems unrelated to the new host.
> 
> > When I try to touch a file as root, I get "touch: cannot touch
> > `/var/lib/backuppc/log/LOG': Read-only file system."
> 
> What you're seeing is that your file system is mounted read/write when you
> boot your machine, as it should be. BackupPC works. Then something comes
> along and remounts the file system read-only. That might crash BackupPC (in
> fact, I'd expect it to try to log a fatal error, which won't work, because
> it can't write to its log files, and then terminate). What can remount your
> file system read-only?
> 
> 1.) The kernel. It does this if you mount the file system with the option
> "errors=remount-ro" or if the option is set in the file system metadata
> (and file system corruption is detected during operation, of course :).
> You can check with 'tune2fs -l /dev/whatever' (replace /dev/whatever
> with the name of the block device your file system is on, see the
> output of 'df /var/lib/backuppc', left column, if you're unsure) under
> the label "Errors behavior".
> Is /var/log on a different partition from /var/lib/backuppc? If so, you
> should be able to find a message in /var/log/messages if this happened.
> If both are on the same partition, your system log files won't have been
> written to after remounting either (which would indicate the approximate
> time it happened though).
> 
> 2.) Some software doing something it's probably not supposed to. I wouldn't
> know who should 'mount /var/lib/backuppc -oremount,ro' or the like, but
> it's a possibility.
> 
> 3.) A user pressing Alt+SysRq+U at the console. That would affect *all*
> file systems however. Remove either these three keys or the user who did
> it ;-). Or 'echo 0 > /proc/sys/kernel/sysrq' (see /etc/sysctl.conf if
> you really want to do that, but I strongly doubt that is your problem).
> 
> > Wasn't that Windoze, where you occasionally have to reboot because something
> > stops working for no good reason? ;-)
> 
> ... my point being that, with Linux, instead of rebooting, you'd simply
> 
>   % mount /var/lib/backuppc -oremount,rw
> 
> (presuming /var/lib/backuppc is the relevant mount point), and you'll
> probably get an error message stating the file system has errors, which
> you'd need to fix with fsck (unmount the file system first!) [I haven't got
> a file system with errors available, so I can't check if remounting rw is
> really rejected; it might just work despite errors on the FS, so you should
> probably run fsck (after unmounting) anyway]. Let's hope it is something that
> *can* be reasonably fixed, considering it's grave enough for the kernel to
> remount the file system. Rebooting is not a solution in this case, it only
> hides the problem until it gets bad enough that all of your pool is lost.
> 
> Are, by any chance, regular checks of the file system in question turned off
> ("Mount count" and "Maximum mount count" in the tune2fs output)?
> 
> You should probably try to figure out whether the underlying disk has a
> problem (/var/log/messages is your friend and probably the smartmontools) or
> if it was only a glitch caused by software or a power failure or a user
> failure (you wouldn't believe what I found on my favorite messed up file
> system). Presuming you don't simply have a cron job that remounts the file
> system every few days :-).
> 
> Good luck.
> 
> Regards,
> Holger
> 
> 
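The checks and recovery steps Holger walks through above can be sketched as a
small script. Everything machine-specific here is an assumption to substitute:
the pool path, the resulting device name, and the Debian-style init script.
The destructive steps are guarded by DRY_RUN, so by default the sketch only
prints them.

```shell
#!/bin/sh
# Sketch of the checks above. Assumptions: pool at /var/lib/backuppc,
# ext2/ext3 filesystem (tune2fs needs root), Debian-style init script.
POOL=${POOL:-/var/lib/backuppc}

# 1. Which block device backs the pool? ('df' left column, as described)
DEV=$(df -P "$POOL" 2>/dev/null | awk 'NR==2 {print $1}')
echo "pool device: ${DEV:-unknown}"

# 2. Error behavior and periodic-fsck settings ("Errors behavior",
#    "Mount count" / "Maximum mount count" in the tune2fs output)
if [ -n "$DEV" ]; then
    tune2fs -l "$DEV" 2>/dev/null | grep -Ei 'errors behavior|mount count'
fi

# 3. Recovery sequence. DRY_RUN=1 (default) only prints the commands;
#    set DRY_RUN=0 and run as root to execute them for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run /etc/init.d/backuppc stop   # nothing should write to the pool
run umount "$POOL"              # never fsck a mounted filesystem
run fsck -y "$DEV"              # repair, answering yes to fix prompts
run mount "$POOL"               # remounts read/write per /etc/fstab
run /etc/init.d/backuppc start
```

With DRY_RUN left at 1 the script is safe to run anywhere; it only shows the
commands that a real recovery would execute.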
Holger,

You have correctly identified the source of the problem. I followed your
advice and found that the directory is corrupted.

Thanks.

Winston



-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Rename a machine

2007-03-30 Thread Bowie Bailey
Chris Ernst wrote:
> Brien Dieterle wrote:
> > To try to answer it; yes I think you can add another host and just
> > rename the folder under "pc" to match the name, but I'm not positive
> > about that :-)
> 
> For the record, I am positive about that.  I've done exactly this and
> it does work  =)
> 
>   - Chris

Perfect!  I'll give it a try.  :)

Bowie



Re: [BackupPC-users] Rename a machine

2007-03-30 Thread Bowie Bailey
I want to keep the full backup records of the machine available via
BackupPC for longer than the backup schedule would allow.  This is so
that I can recover files if they were lost in the rebuild process.  I
have made a tar archive of the last backup, but it is much easier to
restore from BackupPC than from a tar archive.

If no one else chimes in here, I'll try renaming directories and see what
happens.

Bowie

Brien Dieterle wrote:
> To not answer your question; why don't you just let the new machine
> use the existing configs, assuming you keep a few fulls you'll still
> have access to the old files just the same.  You could also archive
> it if you really wanted to preserve it as-is.  Basically, what I'm
> saying is OS changes shouldn't really affect your backup scheme if
> you keep plenty of fulls/revisions...
> 
> To try to answer it; yes I think you can add another host and just
> rename the folder under "pc" to match the name, but I'm not positive
> about that :-)
> 
> brien
> 
> Bowie Bailey wrote:
> > I have a machine in my office which has been backed up with
> > BackupPC. 
> > The owner of the machine has rebuilt it with the same name, but a
> > different os.  I want to keep the old backups while still
> > continuing to make backups of the new machine.
> > 
> > Can I change the name of the old backup sets so that they can be
> > maintained under a different name while the new backups come in
> > under 
> > the original name?
> > 
> > Thanks,
> > 
> > Bowie



Re: [BackupPC-users] Rename a machine

2007-03-30 Thread Chris Ernst
Brien Dieterle wrote:
> To try to answer it; yes I think you can add another host and just
> rename the folder under "pc" to match the name, but I'm not positive 
> about that :-)

For the record, I am positive about that.  I've done exactly this and
it does work  =)

- Chris



Re: [BackupPC-users] Rename a machine

2007-03-30 Thread Brien Dieterle
To not answer your question; why don't you just let the new machine use 
the existing configs, assuming you keep a few fulls you'll still have 
access to the old files just the same.  You could also archive it if you 
really wanted to preserve it as-is.  Basically, what I'm saying is OS 
changes shouldn't really affect your backup scheme if you keep plenty of 
fulls/revisions...

To try to answer it; yes I think you can add another host and just 
rename the folder under "pc" to match the name, but I'm not positive 
about that :-)
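A sketch of that directory move, simulated in a scratch TopDir (on a real
install TopDir is typically /var/lib/backuppc, BackupPC should be stopped
first, and the host names here are hypothetical):

```shell
# Simulate keeping old backups under a new host name while the rebuilt
# machine keeps its original name.
TOP=$(mktemp -d)                 # stands in for /var/lib/backuppc
mkdir -p "$TOP/pc/oldpc"         # existing backups for host "oldpc"
touch "$TOP/pc/oldpc/backups"    # per-host metadata travels with the dir

# Rename the per-host directory; pool hardlinks are unaffected by this.
mv "$TOP/pc/oldpc" "$TOP/pc/oldpc-retired"

# The retired name must also get a hosts-file entry so the CGI lists it,
# e.g. a line like "oldpc-retired 0 backuppc" (hosts-file path varies).
ls "$TOP/pc"                     # -> oldpc-retired
```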

brien

Bowie Bailey wrote:
> I have a machine in my office which has been backed up with BackupPC.
> The owner of the machine has rebuilt it with the same name, but a
> different os.  I want to keep the old backups while still continuing to
> make backups of the new machine.
>
> Can I change the name of the old backup sets so that they can be
> maintained under a different name while the new backups come in under
> the original name?
>
> Thanks,
>
> Bowie
>



Re: [BackupPC-users] very slow backup speed

2007-03-30 Thread David Rees
On 3/30/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> David Rees wrote:
> > How long are full and incremental backups taking now?
>
> In one machine it went down from 900 minutes to 175 minutes. I expect better
> performance when more memory is added (today or tomorrow they will add it),
> and I don't think all files had checksums cached when this full was run.

Wow, that is a huge difference! I didn't expect performance to
increase that much, apparently the checksum caching is really reducing
the number of disk IOPs.

> I could try tar for testing purposes if you like? I think rsync will be
> sufficiently fast. I am guessing that with checksum-seed the difference
> shouldn't be so large, since tar probably transfers much more data in full
> backups. Rsync could perhaps be faster if --ignore-times were removed when
> taking full backups. I am thinking of removing the --ignore-times option
> from full backups with rsync to see how much difference it makes.

Tar is definitely worth a shot if its shortcomings for incremental
backups are acceptable and network bandwidth isn't an issue.

Removing rsync's --ignore-times may also be an option if the reduction in
possible data-integrity checking is acceptable.
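A quick way to check whether checksum caching is actually enabled (the config
path is an assumption; Debian installs config.pl under /etc/backuppc).
BackupPC's documented knob is adding --checksum-seed=32761 to the rsync
arguments:

```shell
# Look for the checksum-caching option in the BackupPC config (path assumed).
CONF=${CONF:-/etc/backuppc/config.pl}
if grep -q 'checksum-seed' "$CONF" 2>/dev/null; then
    echo "checksum caching configured in $CONF"
else
    echo "no --checksum-seed in $CONF; consider adding --checksum-seed=32761"
    echo "to \$Conf{RsyncArgs} and \$Conf{RsyncRestoreArgs}"
fi
```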

-Dave



[BackupPC-users] Rename a machine

2007-03-30 Thread Bowie Bailey
I have a machine in my office which has been backed up with BackupPC.
The owner of the machine has rebuilt it with the same name, but a
different os.  I want to keep the old backups while still continuing to
make backups of the new machine.

Can I change the name of the old backup sets so that they can be
maintained under a different name while the new backups come in under
the original name?

Thanks,

Bowie



Re: [BackupPC-users] very slow backup speed

2007-03-30 Thread Mike Dresser
On Fri, 30 Mar 2007, Sylvain MAURIN wrote:

>> MaxLine III 300 (7200 rpm, ATA, 8 meg cache), ext3 w/noatime.  I'd change 
>> it to XFS, but backing up 1.7TB would take weeks/months because of the 
>> hardlinks.

> Hi,
>
> I use your hardware clone except for the HDs (which
> are Seagate 750GB) and maybe the motherboard
> (Tyan) and RAM (4GB), and chose XFS for running
> BackupPC on an amd64 sarge distribution with a backported
> kernel.
>
> I recently purchased a large tape library to include
> BackupPC and all the other servers in our Amanda tape
> backups.
>
> You are telling us that xfsdump can't handle
> hardlinks well, but I get around 5MB/s... In fact that's
> worse than the aggregated 15~20MB/s
> I see while BackupPC dumps, but it is in line
> with the other servers' dump/tar speeds... It just
> takes 4~5 days for a 1.5TB pool!

xfsdump should handle that fine; in fact I'd expect it to be relatively
speedy.

My weeks/months estimate was for rsync, as I can't just dump ext3 to xfs; I
have to use something that can handle the hardlinks.  ext3-to-ext3
upgrades are easy, just dd the partition and resize2fs it later.. and
similarly for xfs to xfs.

I'm actually in the process of moving the system from ext3 to xfs; it's
been 3 days so far just to copy the cpool, and then I'll start on the pc/*
directories using 3.0.x's BackupPC_tarPCCopy script.  I'm hoping that'll
be faster than rsync.. the last time I used rsync was on a 200 gig pool,
and it took 4 days to run.

> I admit that I can't do a full restore for now (I haven't
> got another free TB-sized partition), but partials are
> fine with all hardlinks on a small excerpt (50K hardlinks
> against a 50*100KB file pool).
>
> So, where did you get your assumptions about
> xfsdump durations (months)? Have I missed
> a point of configuration and just got lucky in my small
> restore tests? Should I expect a bad surprise like
> an exponential increase in tape backup time?
>
> Please help me and feel free to share your experience
> and my question by forwarding to backuppc ML.
>
> Sylvain
>
> PS: isn't 1GB of RAM small for a 4TB partition?

Except for insane rsync runs, I don't have any problems with memory 
usage.. most of the backups are samba, so I don't need a lot of memory for 
rsync backups.

I'm pondering putting another 4x256 memory in it anyways though, as ram is 
cheap.

Mike




Re: [BackupPC-users] Server hangs after power outage

2007-03-30 Thread Brien Dieterle
It sounds very much like a hardware problem, perhaps slightly toasted 
ide controllers?  It sounds like a commodity box, can you move all the 
disks to another machine and fire it up?  Oh and go get a decent UPS! :-)

brien

Klaas Vantournhout wrote:
> Dear all,
>
> The real questions are at the bottom, the rest is just a nice intro 
> which introduces you to the nature of the questions.
>
> Two days ago, we had a power outage in our department which caused a 
> rather brutal shutdown of the computers.  All of the computers survived, 
> which is a good thing.  But only one gained a peculiar character, and of 
> course it had to be the backup server.
>
> At the current point I am not blaming BackupPC at all, I'm just trying 
> to isolate the problem, and that is why I would need your help in this.
>
> Okay, so what does the bastard (read: server) do now?  Well, not much; it
> just hangs or reboots from time to time, in a rather random way.
>
> The first thing we noticed was in /var/log/messages: after the
> power outage, the ntpd daemon could not set its clock right anymore.
>
> 
> # cat /var/log/messages | grep ntpd
> Mar 29 10:39:43 inwtheo1 ntpd: ntpd startup succeeded
> Mar 29 10:39:43 inwtheo1 ntpd[5689]: ntp engine ready
> Mar 29 08:40:06 inwtheo1 ntpd[5689]: peer 157.193.40.37 now valid
> Mar 29 10:40:57 inwtheo1 ntpd[5688]: adjusting local clock by 166.241134s
> Mar 29 10:41:59 inwtheo1 ntpd[5688]: adjusting local clock by 166.240065s
> Mar 29 10:44:13 inwtheo1 ntpd[5688]: adjusting local clock by 166.238681s
> Mar 29 10:45:13 inwtheo1 ntpd[5688]: adjusting local clock by 166.174413s
> Mar 29 10:46:15 inwtheo1 ntpd[5688]: adjusting local clock by 187.903248s
> Mar 29 10:55:11 inwtheo1 ntpd: ntpd startup succeeded
> Mar 29 10:55:11 inwtheo1 ntpd[5607]: ntp engine ready
> Mar 29 08:55:32 inwtheo1 ntpd[5607]: peer 157.193.40.37 now valid
> 
>
> While trying to understand this problem, I noticed that changing from
> openntpd to ntp did the trick to get the time correct.  Although unsure
> about this solution, we switched off the daemon to be 100% sure this was
> not the cause of the reboots and/or crashes.
>
> init 1 and 2 ran stable (backuppc is not running in init 2);
> init 3 didn't (backuppc runs there).
> Starting all services by hand to go from 2 to 3 also did not give any
> problem, but using the command 'init 3' does.  If we remove backuppc
> from init 3, the server is stable.
>
> So at this point we started to suspect something is going on when 
> backuppc is running, but we also noticed that sometimes something was 
> going on when backuppc was not running.  So no conclusion yet.
>
> Although it frequently happens that backuppc initiates the crashes, we 
> are wondering why this could be, that is why i write here.
>
> Our server is very basic.  We are running version 3.0.0, the whole 
> system is located on /dev/hda in several partitions, and the backup 
> config files and data is in raid 5 on 3 separate disks
>
> [EMAIL PROTECTED] ~]$ df
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/hda7 9.9G  1.4G  8.1G  15% /
> /dev/hda1 479M   12M  443M   3% /boot
> /dev/hda8  44G  172M   44G   1% /home
> /dev/hda6  20G  729M   18G   4% /var
> /dev/md0  461G  194G  243G  45% /var/backups
> [EMAIL PROTECTED] ~]$ cat /proc/mdstat
> Personalities : [raid5]
> md0 : active raid5 hdb1[0] hdg1[2] hde1[1]
>490223232 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
>
> unused devices: <none>
>
>
> A test also showed that init 3 without backuppc and without /dev/md0 
> mounted, was very stable.
>
> I also have to mention that one time when the system rebooted
> unexpectedly, the raid system lost 2 of its drives, for no apparent reason.
> The next bootup just repaired the raid system.  Hence we are starting to
> think something is wrong with the raid.  fsck gives no problems whatsoever.
>
> ** If you skipped the top, here are the questions **
>
> What we are wondering now is: does backuppc initiate some other system
> commands which could trigger the hang?
>
> The power outage was in the middle of some full backups; is it possible
> that this causes problems? We have, for example, in a couple of client
> directories a directory new/, even without a backup going on.  Can I safely
> delete this directory?
>
> Is there more going on that I am not aware of, and how can I see it?
>
> Did anybody have the same?  And if so, how did you solve it?
>
> Regards
> klaas
>
>
>
>   


[BackupPC-users] Server hangs after power outage

2007-03-30 Thread Klaas Vantournhout
Dear all,

The real questions are at the bottom, the rest is just a nice intro 
which introduces you to the nature of the questions.

Two days ago, we had a power outage in our department which caused a 
rather brutal shutdown of the computers.  All of the computers survived, 
which is a good thing.  But only one gained a peculiar character, and of 
course it had to be the backup server.

At the current point I am not blaming BackupPC at all, I'm just trying 
to isolate the problem, and that is why I would need your help in this.

Okay, so what does the bastard (read: server) do now?  Well, not much; it
just hangs or reboots from time to time, in a rather random way.

The first thing we noticed was in /var/log/messages: after the
power outage, the ntpd daemon could not set its clock right anymore.


# cat /var/log/messages | grep ntpd
Mar 29 10:39:43 inwtheo1 ntpd: ntpd startup succeeded
Mar 29 10:39:43 inwtheo1 ntpd[5689]: ntp engine ready
Mar 29 08:40:06 inwtheo1 ntpd[5689]: peer 157.193.40.37 now valid
Mar 29 10:40:57 inwtheo1 ntpd[5688]: adjusting local clock by 166.241134s
Mar 29 10:41:59 inwtheo1 ntpd[5688]: adjusting local clock by 166.240065s
Mar 29 10:44:13 inwtheo1 ntpd[5688]: adjusting local clock by 166.238681s
Mar 29 10:45:13 inwtheo1 ntpd[5688]: adjusting local clock by 166.174413s
Mar 29 10:46:15 inwtheo1 ntpd[5688]: adjusting local clock by 187.903248s
Mar 29 10:55:11 inwtheo1 ntpd: ntpd startup succeeded
Mar 29 10:55:11 inwtheo1 ntpd[5607]: ntp engine ready
Mar 29 08:55:32 inwtheo1 ntpd[5607]: peer 157.193.40.37 now valid


While trying to understand this problem, I noticed that changing from
openntpd to ntp did the trick to get the time correct.  Although unsure
about this solution, we switched off the daemon to be 100% sure this was
not the cause of the reboots and/or crashes.

init 1 and 2 ran stable (backuppc is not running in init 2);
init 3 didn't (backuppc runs there).
Starting all services by hand to go from 2 to 3 also did not give any
problem, but using the command 'init 3' does.  If we remove backuppc
from init 3, the server is stable.

So at this point we started to suspect something is going on when 
backuppc is running, but we also noticed that sometimes something was 
going on when backuppc was not running.  So no conclusion yet.

Although it frequently happens that backuppc initiates the crashes, we 
are wondering why this could be, that is why i write here.

Our server is very basic.  We are running version 3.0.0; the whole
system is located on /dev/hda in several partitions, and the backup
config files and data are in RAID 5 on 3 separate disks:

[EMAIL PROTECTED] ~]$ df
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda7             9.9G  1.4G  8.1G  15% /
/dev/hda1             479M   12M  443M   3% /boot
/dev/hda8              44G  172M   44G   1% /home
/dev/hda6              20G  729M   18G   4% /var
/dev/md0              461G  194G  243G  45% /var/backups
[EMAIL PROTECTED] ~]$ cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 hdb1[0] hdg1[2] hde1[1]
      490223232 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>


A test also showed that init 3 without backuppc and without /dev/md0 
mounted, was very stable.

I also have to mention that one time when the system rebooted
unexpectedly, the raid system lost 2 of its drives, for no apparent reason.
The next bootup just repaired the raid system.  Hence we are starting to
think something is wrong with the raid.  fsck gives no problems whatsoever.

** If you skipped the top, here are the questions **

What we are wondering now is: does backuppc initiate some other system
commands which could trigger the hang?

The power outage was in the middle of some full backups; is it possible
that this causes problems? We have, for example, in a couple of client
directories a directory new/, even without a backup going on.  Can I safely
delete this directory?

Is there more going on that I am not aware of, and how can I see it?

Did anybody have the same?  And if so, how did you solve it?

Regards
klaas



-- 
"Several billion trillion tons of superhot
exploding hydrogen nuclei rose slowly above
the horizon and managed to look small, cold
and slightly damp."
Douglas Adams - The Hitchhiker's
Guide to the Galaxy



Re: [BackupPC-users] link counting fixed in 3.0.0? [Was: Subversion working copies cause too many links]

2007-03-30 Thread Craig Barratt
Gregor writes:

> and related messages for background information. In his final mail on
> that thread, Craig said
> 
> > Upon further inspection, it turns out the rsync XferMethod
> > doesn't check the hardlink limit when it is linking to
> > an identical file.  So there are two cases you have found
> > where the hardlink limit is not checked: when there is
> > a transfer error, and when rsync detects the file is
> > identical.  The latter case happens a lot.
> > 
> > These are both bugs that I need to fix...
> 
> That was March '05. Unfortunately the problem still occurs in 2.1.3.
> 
> Before I go ahead and upgrade to 3.0.0: Is the bug fixed in that
> version? If not, are there plans to fix it?

Yes, it's fixed in 3.0.0.

Craig



Re: [BackupPC-users] BackupPC_tarCreate - make one tar for many hosts (don't loose hardlinks)?

2007-03-30 Thread Mike Dresser
I'm wondering about this as well, as DLT-V4 only holds 160 gig of 
backups.. when you're backing up full systems without the benefits of 
hardlinks, that doesn't hold much.

I assume you're looking for something you can use as an emergency restore 
in case the backuppc server dies or is otherwise unavailable...

I was thinking that if you used a modified BackupPC_tarPCCopy to create your
list of cpool files (or that script people were talking about to list
files, or just read it out of the log files created by BackupPC), and
then made a modified cpool with just the files you need for all the
systems you're backing up, you could create a .tar that would restore just
the modified cpool; your BackupPC_tarPCCopy run then dumps out the needed
references.  You could then use the standard BackupPC tools to create a .tar
file out of that (decompressing the cpool files, etc.).

You probably don't need hardlinks; wouldn't softlinks work in this case?
You're likely deleting the cpool file afterwards anyway, so you don't care
whether there are 4 or 40 computers linking to the modified cpool.

I'm picturing this could be done even in just regular bash: create a list
of files with BackupPC_tarPCCopy, sort that file, run uniq on it.. now you
know which cpool files you need.  The tar created from
BackupPC_tarPCCopy will then reference ../cpool/x/x/x/file, and that should
work.


and then redone properly in perl :)
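The sort/uniq core of that pipeline can be sketched directly; the cpool paths
below are simulated with printf, since BackupPC_tarPCCopy's actual output
format should be checked before relying on this:

```shell
# Simulated list of cpool files referenced by two hosts' backups; the same
# pool file appears twice because both hosts hardlink it.
printf '%s\n' \
    'cpool/1/2/3/123abc' \
    'cpool/1/2/3/123abc' \
    'cpool/4/5/6/456def' > needed.list

sort -u needed.list > cpool.list     # each needed pool file exactly once
wc -l < cpool.list                   # -> 2

# On a real server the deduped list could then feed tar, e.g. (hypothetical):
#   tar -cf pool-subset.tar -C /var/lib/backuppc -T cpool.list
rm -f needed.list cpool.list
```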


Mike




Re: [BackupPC-users] link counting fixed in 3.0.0? [Was: Subversion working copies cause too many links]

2007-03-30 Thread Les Mikesell
Gregor Schmid wrote:

> The "solution" of excluding .svn directories from backups, as
> mentioned in the previous discussion, is not acceptable for me as SVN
> working copies are among the most valuable stuff we're backing up...

If your work is committed (which is kind of the point of using svn...) 
you should be able to reproduce any working copy by checking it out of 
the repository again.  And no new work should be under the .svn 
directories - they are used to maintain state matching the repository at 
the last update/commit.  To recover without them you'd have to check out 
a new copy, then move the modified-but-not-committed files from your 
restored backup into the right places - a bit of work but you wouldn't 
have to lose anything.
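That recovery can be sketched as follows, simulated in temp directories (in
practice FRESH would come from a real 'svn checkout URL' and RESTORED from the
BackupPC restore; rsync is assumed to be installed, and --exclude keeps the
stale restored .svn metadata from clobbering the fresh checkout's):

```shell
# FRESH stands in for a fresh 'svn checkout'; RESTORED for the restored backup.
FRESH=$(mktemp -d); RESTORED=$(mktemp -d)
mkdir -p "$FRESH/.svn" "$RESTORED/.svn" "$RESTORED/src"
echo "uncommitted change" > "$RESTORED/src/main.c"   # work since last commit

# Overlay everything except the stale .svn state onto the fresh checkout.
rsync -a --exclude='.svn' "$RESTORED/" "$FRESH/"

ls "$FRESH/src"              # the modified file carried over
# Then 'svn status' on the fresh copy shows what differs before committing.
```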


-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Backup of windows pc problem

2007-03-30 Thread nilesh vaghela

Thanks for your answer.

On the Windows PCs, the size of the backup is not more than 200MB.

This is observed on only 20% to 25% of the PCs; the others are working at
quite acceptable speed.

Again, I am taking backups with the rsync method.

What should be checked?

On 3/28/07, Jason Hughes <[EMAIL PROTECTED]> wrote:


nilesh vaghela wrote:
> Other 25% of the PCs' backup is dead slow; the data transfer is around 20kbps.
> I found a few things that might cause the problem:
>
> 1. Spaces within the directory name (I do not know for sure, but it seems so).
> 2. Tree structure.
> 3. A single quote (') in a directory name causes problems.
>
> Presently we have solved this problem with the following long procedure.
>
> If we want to take a backup of the /data dir,
> I list all the subdirs of /data in the include file list, per PC.
>
> But if the subdirs are large in number, that is a problem.
>
> I think it is some problem with the naming conventions of Windows and Linux.
>
> I am using BackupPC 3.0 with the rsync method.
>
> Is anybody else facing the same problem?
>
Hmm.  So, you're saying that by explicitly stating the directory names
in the included-files list, it runs faster?  It might be that the
codepage/character set the Windows boxes are installed with differs from
the backup server's, maybe?

Out of curiosity... do these clients have any folders with thousands of
files in them?  Traditionally, FAT32 has horrible performance in such
directories, so much so that copying them can be tens to hundreds of
times slower than the device's capability, due to the file system
overhead of finding the directory entry corresponding to the filename.
FAT32's long filename support is pretty nasty and bloated.  From memory,
a directory of mp3s with lots of characters in each name is my worst
case; it has about 4000 files in it, and it's very slow just to pull up
a directory listing of it.  NTFS is better in this regard, but I'm not
going to say "good".
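
If you want to check for this, a quick scan for crowded directories is easy
to script. A minimal sketch in Python; the mount point /mnt/client below is a
hypothetical example path, substitute wherever the client's share is mounted:

```python
import os

# Walk a tree and report directories whose entry count (files + subdirs)
# meets a threshold, largest first. Unreadable directories are skipped.
def find_crowded_dirs(root, threshold=1000):
    crowded = []
    for dirpath, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        count = len(dirnames) + len(filenames)
        if count >= threshold:
            crowded.append((count, dirpath))
    return sorted(crowded, reverse=True)

if __name__ == "__main__":
    # /mnt/client is a placeholder for the mounted share to be inspected.
    for count, path in find_crowded_dirs("/mnt/client"):
        print(f"{count:8d}  {path}")
```

Any directory this reports with several thousand entries is a candidate for
the FAT32 slowdown described above.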

JH





--
Nilesh Vaghela
ElectroMech
Redhat Channel Partner and Training Partner
74, Nalanda Complex, Satellite Rd, Ahmedabad
25, The Emperor, Fatehgunj, Baroda.
www.electromech.info


Re: [BackupPC-users] very slow backup speed

2007-03-30 Thread Evren Yurtesen
David Rees wrote:

> On 3/29/07, Evren Yurtesen <[EMAIL PROTECTED]> wrote:
> 
>>I didn't blame anybody, just said BackupPC was working slow, very slow
>>indeed. The checksum-seeds option seems to be doing its trick, though.
> 
> 
> How long are full and incremental backups taking now?

On one machine it went down from 900 minutes to 175 minutes. I expect better
performance when more memory is added (they will add it today or tomorrow),
and I don't think all files had their checksums cached when this full was run.

         Totals                        Existing Files     New Files
Backup#  Type  #Files  Size/MB MB/sec  #Files  Size/MB    #Files  Size/MB
245      full  280030  7570.4  0.14    274205  6797.6     10578   776.3
252      full  283960  8020.8  0.76    276665  6959.3     12232   1065.0

                            Existing Files           New Files
Backup#  Type  Comp Level   Size/MB Comp/MB Comp     Size/MB Comp/MB Comp
245      full  9            6797.6  3868.9  43.1%    776.3   368.7   52.5%
252      full  9            6959.3  4056.9  41.7%    1065.0  539.0   49.4%

> 
>>I am thankful to the people who wrote suggestions here in this forum; I
>>tried all of those suggestions one by one. I think that shows I took them
>>seriously, even though some of them looked like long shots. Eventually one
>>of the suggestions seems to be working.
> 
> 
> You only tried 2 things: mounting the backup partition async and
> turning on checksum-seeds. Are you going to try the 2 others? (Add
> memory and try tar instead of rsync.)

The memory will be added. As I mentioned before, the machine is at a remote
location and the guys there have to add it.

I could try tar for testing purposes if you like, but I think rsync will be
sufficiently fast. I am guessing that with checksum-seeds the difference
shouldn't be that large; tar probably transfers much more data in full
backups. Rsync could perhaps be faster still if ignore-times were removed
for full backups. I am thinking of removing the ignore-times option from
full backups with rsync to see how much difference it makes.
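
For reference, this is roughly where those knobs live in the host or global
config. A sketch based on my reading of a 3.0 config.pl; double-check the
parameter names and defaults against your own installation before relying on
it:

```perl
# Sketch of the relevant pieces of a BackupPC 3.0 config.pl -- verify
# against your own installation.
$Conf{XferMethod} = 'rsync';
$Conf{RsyncArgs}  = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive',
    '--checksum-seed=32761',   # enables rsync checksum caching
];
# Note (my understanding, not verified in the source): --ignore-times is
# appended internally by the rsync transfer code for full backups rather
# than listed here, so dropping it for fulls would mean patching
# BackupPC::Xfer::Rsync, not just editing config.pl.
```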

> -Dave


[BackupPC-users] link counting fixed in 3.0.0? [Was: Subversion working copies cause too many links]

2007-03-30 Thread Gregor Schmid

Hi,

I'm using BackupPC 2.1.3 and am mostly very happy with it. But now I'm
running into the "too many links" problem with large subversion
directories. Please see

http://www.arcknowledge.com/gmane.comp.sysutils.backup.backuppc.general/2005-03/msg00145.html

and related messages for background information. In his final mail on
that thread, Craig said

> Upon further inspection, it turns out the rsync XferMethod
> doesn't check the hardlink limit when it is linking to
> an identical file.  So there are two cases you have found
> where the hardlink limit is not checked: when there is
> a transfer error, and when rsync detects the file is
> identical.  The latter case happens a lot.
> 
> These are both bugs that I need to fix...

That was March '05. Unfortunately the problem still occurs in 2.1.3.

Before I go ahead and upgrade to 3.0.0: Is the bug fixed in that
version? If not, are there plans to fix it?

The "solution" of excluding .svn directories from backups, as mentioned in
the previous discussion, is not acceptable for me, as SVN working copies are
among the most valuable things we're backing up...
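
For what it's worth, the usual defensive pattern when pooling files by
hardlink is to fall back to a plain copy once the file is near the
filesystem's per-file link limit (classically about 32000 on ext2/ext3).
A sketch of the idea in Python, not BackupPC's actual code:

```python
import os
import shutil

# Classic ext2/ext3 per-file hardlink limit; other filesystems differ.
LINK_MAX = 32000

def link_or_copy(pool_file, dest, link_max=LINK_MAX):
    """Hardlink pool_file to dest, falling back to a copy when pool_file
    is at or near the filesystem's hardlink limit."""
    if os.stat(pool_file).st_nlink < link_max:
        try:
            os.link(pool_file, dest)
            return "linked"
        except OSError:
            pass  # e.g. EMLINK raised anyway; fall through to a copy
    shutil.copy2(pool_file, dest)
    return "copied"
```

The bug Craig describes is, as I understand it, exactly the missing
"at the limit, copy instead" branch on two of the code paths.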

Any help would be greatly appreciated.

Best regards,
Greg

-- 
Gregor Schmid
