Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread Jason Hughes

Evren Yurtesen wrote:
I am saying that it is slow. I am not complaining that it is crap. I 
think that when something is really slow, I should have the right to say so, right?


There is such a thing as tact.  Many capable and friendly people have 
been patient with you, and you fail to show any form of respect.  You're 
the one with the busted system; you should be nicer to the important 
people who are giving you their experience for free.


I disagree; I have given all the information requested of me. I have 
tried different things, like mounting filesystems with async, even though 
I know that it is not the problem, just to make you guys happy. Now you 
say that my attitude is not helping the resolution? What makes you say so?


If you read the way you write, you're argumentative, combative, and 
rude.  You have not been forthcoming in describing your situation, which 
wastes a lot of time and energy that is "free" to you, but not free to 
the rest of us trying to help you.  In short, you're taking advantage of 
a friendly group of people, and show no gratitude whatsoever.  In fact, 
you act as if you blame us for setting up the same software and being 
satisfied with its performance.  Its performance is very acceptable in my experience.


Other people have been using it with dual-processor systems with 2-3GB 
of memory and RAID setups. I would hardly consider these 'similar' 
conditions. BackupPC is very inefficient at handling files, so you guys 
have to use very fast hardware to speed it up. So you are covering up 
the problem with hardware and then saying there is no problem.


  
I have told you several times that I get considerably better performance 
with a single slow 5400 RPM, 2 MB buffer IDE drive on a 450 MHz P3.  The 
hardware is *NOT* as big a factor as you make it out to be.



3) Disk utilisation/bandwidth on both client and server



I have sent this information also. I didn't send it for the client, 
actually, but the client is almost idle disk-wise. The main disk load is on the server.
  


How do you know your clients aren't the bottleneck, then?  What if 
they're set up in PIO mode?  Or are swapping like mad?  You need to 
diagnose the problem on both ends.
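
Some quick client-side checks while a backup is running, as a sketch 
(Linux tool names, and the device name is just an example; on FreeBSD 
the rough equivalents are systat -vmstat and atacontrol):

  vmstat 5              # nonzero swap-in/swap-out columns -> memory pressure
  hdparm -I /dev/hda    # confirm the drive is doing DMA, not PIO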


  

4) Network utilisation/bandwidth on both client and server



The network links are idle. I can send this information if you don't 
believe me, but there is maybe 200-300 kbit/s of other usage while 
backups are running.


Have you checked with ttcp that your network is configured properly, and 
that you can get the speeds you expect?  A misconfigured switch or multiple 
MACs associated with the same IP can do a lot of damage to performance 
through collisions.  Same with improper cabling from the IDE controller 
to the drive.  Same with badly set jumpers if both drives are on the 
same IDE port.  There are many problems that could exist.  You need to 
measure the device's raw performance, then measure the protocol on that 
device to /dev/null, then over the network to the server's /dev/null, then to 
the server's hard drive.  Then check out your Perl install and make sure 
you haven't got something going on there.
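
Spelled out as commands, that staged measurement might look like the 
following sketch; the host, path, and device names are placeholders:

  ttcp -r -s                                           # on the server: receive
  ttcp -t -s backupserver                              # on the client: raw network speed
  dd if=/dev/ad0 of=/dev/null bs=1m count=1000         # raw disk read (bs=1M on Linux)
  tar cf /dev/null /data                               # local protocol cost only
  tar cf - /data | ssh backupserver 'cat > /dev/null'  # add the network hop
  tar cf - /data | ssh backupserver 'cat > /backup/t'  # add the server's disk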


There are a hundred things you could try.  Don't wait for us to come up 
with them all.


JH



Re: [BackupPC-users] Backup of windows pc problem

2007-03-27 Thread Jason Hughes
nilesh vaghela wrote:
> The other 25% of PC backups are dead slow. The data transfer is around 20 kbps.
> I found a few things that might cause the problem:
>
> 1. Spaces within the directory name. (I do not know, but it seems so.)
> 2. Tree structure.
> 3. A single quote (') in a directory name causes problems.
>
> Presently we have solved this problem with the following long procedure.
>
> If I want to take a backup of the /data dir, I list all the subdirs of 
> /data in the include file list, per PC.
>
> But if the subdirs are large in number, then it is a problem.
>
> I think it is some problem with the naming conventions of Windows and Linux.
>
> I am using BackupPC 3.0 with the rsync method.
>
> Is anybody else facing the same problem?
>
>
Hmm.  So, you're saying that by explicitly stating the directory names 
in the include file list, it runs faster?  Maybe the codepage/character 
set the Windows boxes were installed with differs from the backup 
server's?

Out of curiosity... do these clients have any folders with thousands of 
files in them?  Traditionally, FAT32 has horrible performance on such 
directories, so much so that copying them can be tens to hundreds of 
times slower than the device's capability, due to the file system 
overhead of finding the directory entry corresponding to the filename.  
Its long filename support is pretty nasty and bloated.  From memory, a 
directory of MP3s with lots of characters in each name is my worst case, 
and it has about 4000 files in it.  It's very slow just to pull up a 
directory listing of it.  NTFS is better in this regard, but I'm not 
going to say "good".

JH




Re: [BackupPC-users] very slow backup speed

2007-03-27 Thread Jason Hughes

Evren Yurtesen wrote:

                  Totals                     Existing Files      New Files
Backup#   Type    #Files   Size/MB   MB/sec   #Files   Size/MB   #Files   Size/MB
245       full    152228   2095.2    0.06     152177   2076.9    108      18.3
246       incr    118      17.3      0.00     76       0.2       69       17.1


Can you post the duration that these backups took?  All that these stats 
tell us is how much of your data is churning, and how big your average 
file size is.


I don't know if the problem is hard links. This is not a FreeBSD or Linux 
problem; it exists on both. The fact that even people using ultra-fast 
5-disk RAID-5 setups are seeing 2 MB/s transfer rates means that BackupPC 
is very, very inefficient.


For example, this guy is using Linux (the problem is OS-independent):
http://forum.psoft.net/showpost.php?p=107808&postcount=16

  


Er, RAID-5 is slower than a single disk except on sustained reads, which 
is not the typical case with BackupPC.  I only use a single disk and get 
between 0.5 MB/s and 10 MB/s, depending on the client and its file 
size/file count distribution.  The question is whether other people have 
gotten the system to work correctly under the same or worse conditions, 
and that has already been answered in the affirmative.  Be prepared to 
accept that the software isn't broken just because you haven't gotten it 
working to your satisfaction in your specific situation...  It could be 
rsync, UFS, BSD, or any number of other factors that are causing you grief.


Whatever, this is not the problem here. The fact is that, according to 
the ReiserFS developers, ReiserFS is more or less the same speed as ext2. 
I don't think the problem is related to any filesystem, as it occurs on 
both Linux and FreeBSD.

Your argument lacks logic.  If a filesystem can be configured to be slow 
on multiple OSes, does that mean BackupPC is failing to do its job?  No, 
it means multiple people have managed to set up a system that performs 
badly using it.  That's not so uncommon.  BackupPC does not exist in a 
vacuum: its performance is sensitive to the environment of the server 
and its clients.  Many people are using it without issue, right here on 
this very list.  The question you should be asking is: what makes your 
system perform badly?  Start by picking out things that you do 
differently, or that definitely affect performance: transport protocol 
(rsync, tar), UFS, BSD, sync on your filesystem.  Start changing 
those one at a time and measure the performance.


On Linux with RAID setups, async I/O, etc., people are getting slightly 
better results. I think UFS2 is just fine. I wonder if there is 
something in my explanations... The problem is BackupPC. People are 
getting ~2 MB/s (was it 2 or 5?) with RAID-5 and 5 drives, 
using Linux. It is a miracle that a backup even finishes in 24 hours 
using a standard IDE drive.


  


If you want people to help you, it's probably best if you refrain from 
blaming the program until you have proof.  So far, you have argued with 
people who provide evidence that the system works fine.  That puts people 
on the defensive, and you may find them less willing to help you in 
the future.  We do know BackupPC behaves well on other file systems and 
operating systems... maybe UFS or BSD is doing something poorly--how 
would you know they aren't?  We have fewer data points to draw from there.


I suspect it has a lot more to do with what the MB/s stats really mean.  
Maybe Craig can give a precise definition?


This is like the sphere in the movie 'Contact': it took 30 seconds to 
download, but there were 18 hours of recording. If what you said were 
true and BackupPC were backing up a very small number of files and 
skipping most, then backups would probably take less time than 2-4 
hours each.


  
With rsync, an incremental backup still has to check the metadata 
for each file to determine the changed file set.  With millions of 
files, that will take a while.  If your host or server is low on memory 
for any reason, this may bog it down and start VM swapping.  I would 
recommend trying out tar to see if the protocol behavior matters.  Try 
mount options aimed at higher performance.  Others have 
made similar suggestions as well.
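
To see how much of the time is rsync's metadata walk rather than data 
transfer, you can time the two protocols outside BackupPC; a rough 
sketch with placeholder host and path names:

  time ssh clienthost 'tar cf - /data' > /dev/null    # pure streaming cost
  mkdir -p /tmp/empty
  time rsync -an clienthost:/data /tmp/empty          # -n: dry run, file-list walk only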


Good luck,
JH



Re: [BackupPC-users] RSync v. Tar

2007-03-26 Thread Jason Hughes
Jesse Proudman wrote:
> I've got one customer whose server has taken 3600 minutes to
> back up.  77 GB of data.  1,972,859 small files.  Would tar be
> better or make this faster?  It's directly connected via 100 Mbit to
> the backup box.
>

First, determine your bottleneck.  Is it disk i/o or cpu limited?
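
A rough way to tell the two apart while a backup is running (Linux tool 
names; the iostat flags are the sysstat ones):

  iostat -x 5    # pool disk near 100% busy with long waits -> disk-bound
  top            # BackupPC_dump or rsync pegged near 100% -> cpu-bound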

There is another thread going on about small files and seek times.  A 
quick calculation, assuming 8 ms per seek and two seeks per file, gives 
me about 540 minutes' worth of seeks for 2M files, and ~183 minutes of 
transfer time (assuming ~70% efficiency, best case).  I don't think the 
protocol is the limiting factor, necessarily.  :-)
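
The arithmetic, for anyone who wants to rerun it with their own numbers 
(7 MB/s is a rough effective rate for a 100 Mbit link after overhead; 
these reproduce the figures above):

  echo '2000000 * 2 * 8 / 1000 / 60' | bc -l    # seek cost: ~533 minutes
  echo '77 * 1000 / 7 / 60' | bc -l             # transfer cost: ~183 minutes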

Tar uses less cpu and more bandwidth, so if that's the place you're 
having trouble (cpu), switching might help.  It also has lower per-file 
transfer latency (rsync calculates checksums and sends extra packets to 
determine what to send), which might help.  But in any case, I think the 
backups will take half a day under theoretical best conditions.

Thanks,
JH



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Jason Hughes


Evren Yurtesen wrote:
> Jason Hughes wrote:
>> That drive should be more than adequate.  Mine is a 5400 RPM, 2 MB 
>> buffer clunker.  Works fine.
>> Are you running anything else on the backup server, besides 
>> BackupPC?  What OS?  What filesystem?  How many files total?
>
> FreeBSD, UFS2+softupdates, noatime.
>
> There are 4 hosts that have been backed up, for a total of:
>
> * 16 full backups of total size 72.16GB (prior to pooling and 
> compression),
> * 24 incr backups of total size 13.45GB (prior to pooling and 
> compression).
>
> # Pool is 17.08GB comprising 760528 files and 4369 directories (as of 
> 3/27 05:54),
> # Pool hashing gives 38 repeated files with longest chain 6,
> # Nightly cleanup removed 10725 files of size 0.40GB (around 3/27 05:54),
> # Pool file system was recently at 10% (3/27 07:16), today's max is 
> 10% (3/27 01:00) and yesterday's max was 10%.
>
> Host    #Full   Full Age (days)   Full Size (GB)   Speed (MB/s)   #Incr   Incr Age (days)   Last Backup (days)   State   Last attempt
> host1   4       5.4               3.88             0.22           6       0.4               0.4                  idle    idle
> host2   4       5.4               2.10             0.06           6       0.4               0.4                  idle    idle
> host3   4       5.4               7.57             0.14           6       0.4               0.4                  idle    idle
> host4   4       5.4               5.56             0.10           6       0.4               0.4                  idle    idle
>
>

Hmm.  This is a tiny backup setup, even smaller than mine.  However, it 
appears that your average file size is only about 22KB, which is quite 
small.  For comparison's sake, this is from my own server:
Pool is 172.91GB comprising 217311 files and 4369 directories (as of 
3/26 01:08),

The fact that you have tons of little files will probably incur 
significantly higher overhead when doing file-oriented work, simply 
because the inode must be fetched for each file before seeking to the 
file itself.  If we assume no files are shared between hosts (very 
conservative) and an 8 ms access time, you have 190132 files per host; 
at two seeks per file, neglecting actual I/O time, that gives you about 
50 minutes just to seek them all.  If you have a high degree of 
sharing, it can be up to 4x worse.  Realize that the same number of seeks 
must be made on the server as well as the client.
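
The same arithmetic with this pool's numbers (760528 pool files across 
4 hosts, 2 seeks per file, 8 ms per seek):

  echo '760528 / 4 * 2 * 8 / 1000 / 60' | bc -l    # ~50 minutes of seeks per host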

Are you sure you need to be backing up everything that you're putting 
across the network?  Maybe excluding some useless directories, maybe 
temp files or logs that haven't been cleaned up?  Perhaps you can 
archive big chunks of it with a cron job?

I'd start looking for ways to cut down the number of files, because the 
overhead of per-file accesses is probably eating you alive.  I'm also 
no expert on UFS2 or FreeBSD, so it may be worthwhile to research their 
behavior with hard links and small files.

JH



Re: [BackupPC-users] Client Push ?

2007-03-26 Thread Jason Hughes
Use rsyncd.  It runs as a service on each client box as root (or some 
other user with appropriate disk privileges), and the BackupPC server 
gains no user privileges on the client box; it merely communicates with 
the daemon to retrieve data.  There is no real client-push model for 
BackupPC, only protocols with lower security risk.  Besides, a 
client-push model, meaning a client initiates a transfer, would be 
impossible to schedule on a heavily loaded server.

JH

John Hannfield wrote:
> Hello
>
> I've just installed BackupPC and love it. It's really great, and
> great to see an open source application which competes with
> similar enterprise level products.
>
> I only need to back up Linux servers with rsync over SSH, and have set
> up a test deployment of BackupPC as described in the docs. But the
> current model is a server pull, which means the backup server has
> potential root on all my client machines. I would prefer a client-push
> model. Has anyone devised a method of using BackupPC with rsync in
> a push model?
>
> If so, I would love to hear how you have done it.
>
>   



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Jason Hughes
Evren Yurtesen wrote:
>> And, you could consider buying a faster drive, or one with a larger 
>> buffer.  Some IDE drives have pathetically small buffers and slow 
>> rotation rates.  That makes for a greater need for seeking, and worse 
>> seek performance.
>
> Well, this is a Seagate Barracuda 7200 RPM drive with 8 MB cache (ST3250824A):
> http://www.seagate.com/support/disc/manuals/ata/100389997c.pdf
>
> Perhaps it is not the maximum amount of cache one can have on a drive 
> but it is not that bad really.

That drive should be more than adequate.  Mine is a 5400 RPM, 2 MB buffer 
clunker.  Works fine. 

Are you running anything else on the backup server, besides BackupPC?  
What OS?  What filesystem?  How many files total?

> I read your posts about wifi etc. on the forum. The processor is not the 
> problem, but adding memory probably might help buffer-wise. I think 
> this idea can actually work. :) Thanks! I am seeing swapping problems, 
> but the disk the swap is on is almost idle. The backup drive is 
> working all the time.

Hmm.  That's a separate disk, not a separate partition of the same disk, 
right?  If it's just a separate partition, I'm not sure how well the OS 
will be able to attribute wait states to logical devices sharing the same 
physical media... in other words, what looks like waiting on ad2 may be 
waiting on ad0.  Someone more familiar with device drivers and kernel 
internals would have to chime in here.  I'm not an expert.

>
> I have to say that slow performance with BackupPC is a known problem. 
> I have heard it from several other people who are using BackupPC, and 
> it is the #1 reason for changing to another backup program, from what I 
> hear.
>
> Things must improve on this area.
>

I did quite a lot of research and found only one other program that was 
near my needs, and it was substantially slower due to encryption 
overhead, and didn't have a central pool to combine backup data.  I may 
have missed an app out there, though.  What are these people switching 
to, if you don't mind?

Re: what must improve is more people helping Craig.  He's doing it all 
for free.  I think if it's important enough to have fixed, it's 
important enough to pay for.  Or dive into the code and start making 
those changes.  It is open source, after all.

My $.02,
JH



Re: [BackupPC-users] very slow backup speed

2007-03-26 Thread Jason Hughes
Evren Yurtesen wrote:
> I know that the bottleneck is the disk. I am using a single IDE disk to 
> take the backups, only 4 machines and 2 backups running at a time (if I 
> am remembering correctly).
>
> I see that it is possible to use RAID to solve this problem to some 
> extent, but the real solution is to change BackupPC in such a way that 
> it won't use so many disk operations.
>


The whole purpose of live backup media is to use the media.  What you 
may be noticing is that your drive is mounted with access times 
being tracked.  You should check that your fstab has "noatime" as a 
parameter for your mounted data volume.  That probably cuts the seeks 
down by nearly half or more.
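
You can test the theory without editing fstab, as a sketch (FreeBSD 
syntax, since that's what you're running; /backup is a placeholder 
mount point):

  mount | grep backup            # check the current flags on the pool volume
  mount -u -o noatime /backup    # remount; on Linux: mount -o remount,noatime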

And, you could consider buying a faster drive, or one with a larger 
buffer.  Some IDE drives have pathetically small buffers and slow 
rotation rates.  That makes for a greater need for seeking, and worse 
seek performance.

Also, if your server is a single-proc box, you'll probably want to reduce 
it to 1 simultaneous backup, not 2.  Heck, if you are seeing bad thrashing 
on the disk, it will have better access coherence if you stick to 1 anyway.  
Increase your memory and you may see less virtual memory swapping as well. 

It seems that your setup is very similar to mine, and I'm not seeing the 
kind of performance problems you're reporting.  A full backup of about 
65 GB using rsyncd over a slow wifi link takes only about 100 minutes.  
Incrementals are about 35 minutes.  Using SMB on a different machine 
with about 30 GB, it takes 300 minutes for a full, even over gigabit, but 
only a couple of minutes for an incremental (because it doesn't detect 
as many changes as rsync).  So it varies dramatically with the protocol 
and hardware.

JH



Re: [BackupPC-users] Poor backup performance

2007-03-22 Thread Jason Hughes
This has been discussed before, several times.  Most of the 
recommendations say, in no particular order:
  * drop your compression level
  * don't use RAID5
  * try using something other than rsync
  * reduce the number of simultaneous backups
  * increase memory on the server
  * etc...


For what it's worth, I have a small installation that runs a mix of 
rsyncd and SMB.  The mothership machine runs rsyncd over a half-speed 
wifi hookup.  My backup machine is only a PIII 450 MHz with 128 MB of 
RAM.  Small.  The network is low-end gigabit equipment, except for the 
wifi link.  If I had to guess, I think with rsyncd the host probably 
matters as much as or more than the server.  I did notice an improvement 
when I went from a PentiumPro to a PIII, and a fair jump in improvement 
again by going to gigabit.


Hope that helps,
JH

Host        User      #Full   Full Age (days)   Full Size (GB)   Speed (MB/s)   #Incr   Incr Age (days)   Last Backup (days)   State   Last attempt
dev1        panther   4       2.6               21.59            1.23           8       0.6               0.6                  idle    idle
dev2        panther   4       4.5               2.47             1.11           7       0.5               0.5                  idle    idle
mothership  panther   4       0.3               64.68            10.33          6       1.3               0.3                  idle    idle
ringer      panther   4       2.5               1.65             0.18           8       0.5               0.5                  idle    idle
sol         panther   4       3.8               39.06            6.80           6       0.8               0.8                  idle    idle




Jamie Lists wrote:

We're having pretty much the exact same problem with the exact same
setup. I can scp files between servers at 9 MB/s, but the backup
runs at less than 1 MB/s.

I'm not sure if I should renice the process to be more aggressive or what.

We have so much data, and users aren't gone long enough for a full
backup to even complete.

I'm not sure if we should switch to tar instead of rsync or what.
If you have any speed tips, please let us know. - jamie



On 3/22/07, John T. Yocum <[EMAIL PROTECTED]> wrote:
  

I'm seeing terrible backup performance on my backup servers; the speed
has slowly degraded over time, although I have never seen speeds higher
than 1MB/s. (We have it set to do no more than 2 backups at a time.)

Here is our setup:

Our network is all 100Mb between servers and switches, and 1Gb between
switches. So, there is enough network capacity for decent performance.

The servers being backed up, all have either RAID1 or RAID5 arrays
consisting of 15K RPM SCSI drives. All RAID is done in hardware.

The backup servers are using 7200 RPM SATA drives, connected to 3ware
8500 or 9000 series controllers. Two of the backup servers are using RAID1,
and the other is using RAID5.

Our backup servers are running CentOS 4.4, with ext3fs for the backup
partition. I have noatime enabled, and data=writeback set to hopefully
improve performance.

On our backup servers, they are all showing a very high wait during
backups. Here's a screenshot from one of them
http://www.publicmx.com/fh/backup2.jpg. At the time it was doing two
backups, and a nightly.

Any advice on improving performance, would be much appreciated.

Thank you,
John






  

Re: [BackupPC-users] Timing out full backups

2007-03-21 Thread Jason Hughes

Michael Mansour wrote:
I'm wondering why the full backups, numbering 2, are not going back 
down to 1 to free up some space on the server.


In the global Schedule, I have the following:

FullPeriod: 6.97
FullKeepCnt: 1
FullKeepCntMin: 1
FullAgeMax: 7

and it's my understanding that BackupPC should cycle those full 
backups above to the trash because of the above globals, yes?



Did anyone have any ideas on this?

  
I responded to this a couple of days ago.  Here's what I wrote, in case 
you missed it:


It depends on how many *incrementals* you have, too.  The number of 
backups you keep will be FullKeepCnt + IncrKeepCnt, plus any fulls 
that must be kept to support those incrementals.  So, it's possible you 
ask for a certain number of full backups on a schedule, but in order to 
retain the incrementals specified, extra baseline full backups are kept 
as well.  In your situation, with FullKeepCnt set to 1, if you have ANY 
incrementals saved, then once a new full is taken, the last incremental 
will be *older* than the most recent full backup, so the full backup 
just prior to that incremental must be kept as well, resulting in 2 
fulls and 1 incremental.


Does that make sense?

JH




Re: [BackupPC-users] What do to when backuppc dies?

2007-03-20 Thread Jason Hughes

Frej Eriksson wrote:
I sent an e-mail to the list last week and got good answers, so now I 
have tested BackupPC for a short time, and the result has been satisfying. 
But as always, some new questions have popped up. Let's presume that the 
server that runs BackupPC and stores all backed-up data crashes. The 
system disk is unusable, but the data disks seem to be fine. At the 
same time I need to restore several backups from the data disk; what 
is the fastest and easiest way to get my backed-up data back? Doing a 
manual restore is no problem if it's possible to access the backed-up 
data without BackupPC.


At least a few on this list have attested to keeping the data disks in 
an external case so they can be switched to another machine with the same 
version of BackupPC installed, presumably with a cron job rsync'ing the 
program directory every night.  Others have described using a RAID with 
mirroring, so you always have a backup drive available in the event of a 
failure.  If those drives are in a removable chassis, you could 
occasionally pull a drive and insert a new one, forcing a re-sync and 
giving you a backup that you can carry off to a safe storage location, 
in the event of a total meltdown or break-in.  Still others lay off the 
data to tape, in a non-BackupPC format, from the most recent full backup. 

I think it really depends on your level of comfort with technology, your 
budget, and your anticipated level of direct involvement.


 
Are there any compatibility problems with backups from different versions 
of BackupPC?




I believe, though I haven't tried it, you can restore files from an 
older version.  You will have more problems if you decide to switch 
transport layer protocols, though, because the way certain kinds of 
links are stored in the pools differs.  I don't know the details, but if 
you search the archives, you will find more answers.


JH


Re: [BackupPC-users] FTP storage

2007-03-19 Thread Jason Hughes
You cannot create hard links on an FTP site.  BackupPC won't really do 
what you want without them.
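
A quick way to confirm that on any candidate pool filesystem; a sketch 
using the mount point from the message below:

  touch /myftpmountpoint/linktest
  ln /myftpmountpoint/linktest /myftpmountpoint/linktest.hl && echo 'hard links OK'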

JH

Henrik Genssen wrote:
> Hi,
>
> I have some FTP storage at my provider as backup space.
> On my server I have BackupPC installed to back up some VMs.
>
> Is it possible to use that FTP device as storage, using e.g. fuse?
> I tried to mount that FTP server and copy all archives to it, but
> I get errors like:
> cp: cannot create directory `/myftpmountpoint/pc/localhost/1/f%2fetc/frc3.d': 
> Operation not permitted
>
> I can create the directory f%2fetc/ but no file or folder in it.
>
> Is there a known way to do this?
>
> Hinnack
>



Re: [BackupPC-users] Fwd: Help with Samba restore

2007-03-19 Thread Jason Hughes

Rick,

This may actually be something like what I'd run into a while back, but 
with read permissions and on XP Pro.  For whatever reason, using Samba, 
I could not get some directories to backup properly unless I set a 
password on the account I was using to access the shares remotely.  I 
verified this by running the backup on the account, removing the 
password (empty string) and watching it fail, then adding it back and 
watching it succeed.  Try putting a password on your account and re-run 
the backup.  You never know, it may help.


Hope that helps,
JH

Rick DeNatale wrote:

Craig was kind enough to try to help me off list.

It looks like I'm running into permissions problems as I descend the
windows directory structure.

Does anyone have enough Samba/Windows XP fu to help me?  It's looking
like Windows XP Home only lets you allow write permission one directory
at a time.  See my experiment with smbclient in the forwarded note.

I can't figure out (from the GUI at least) a way to permit write
sharing recursively.

Can anyone help?

-- Forwarded message --
From: Rick DeNatale <[EMAIL PROTECTED]>
Date: Mar 19, 2007 2:34 PM
Subject: Re: [BackupPC-users] Help with Samba restore
To: Craig Barratt <[EMAIL PROTECTED]>


No.  I made sure to check the 'let others change' or whatever the
wording is when I set up the share, which is to the C: drive of the
Windows machine.

I've just done a test using smbclient.  I think I might have a
permissions problem somewhere down the directory hierarchy:
$ smbclient //arwen/CDrive

smb: \> put test.txt
putting file test.txt as \test.txt (0.2 kb/s) (average 0.2 kb/s)

smb: \> ls
...
test.txtA6  Mon Mar 19 14:22:59 2007
 ...

smb: \> cd "Documents and Settings"
smb: \Documents and Settings\> put test.txt
putting file test.txt as \Documents and Settings\test.txt (2.9 kb/s)
(average 0.3 kb/s)

smb: \Documents and Settings\> ls
  .   D0  Mon Mar 19 14:23:34 2007
  ..  D0  Mon Mar 19 14:23:34 2007
  All Users   D0  Wed Oct 20 09:12:06 2004
  Compaq_OwnerD0  Mon Mar 19 10:24:47 2007
  deborah D0  Mon Mar 19 11:24:44 2007
  Default User   DH0  Mon Mar 19 12:15:42 2007
  Julia   D0  Wed May 25 11:36:15 2005
  LocalService  DHS0  Wed Oct 20 09:16:32 2004
  NetworkServiceDHS0  Wed Oct 20 09:16:31 2004
  rickD0  Mon Sep 26 18:17:46 2005
  test.txtA6  Mon Mar 19 14:23:34 2007

36807 blocks of size 4194304. 34157 blocks available
smb: \Documents and Settings\> cd deborah
smb: \Documents and Settings\deborah\> put test.txt
NT_STATUS_ACCESS_DENIED opening remote file \Documents and
Settings\deborah\test.txt
smb: \Documents and Settings\deborah\> cd ..
smb: \Documents and Settings\> cd rick
smb: \Documents and Settings\rick\> put test.txt
NT_STATUS_ACCESS_DENIED opening remote file \Documents and
Settings\rick\test.txt
smb: \Documents and Settings\rick\> quit

This is from an account on the linux machine which has the same name
(rick) and password as an administrative account on the XP machine.

It won't let me write to the Documents and Settings directory
of that account, or my wife's either.

Do I need to set sharing permissions on each sub-directory?  Is there
a way to do that recursively in XP?

On 3/19/07, Craig Barratt <[EMAIL PROTECTED]> wrote:
  

Rick writes:



I'm trying to restore the backup and it isn't working.  I think the
problem is with my Samba setup.  I was hoping someone here might be
able to shed some light.
  

Is the share read-only?

Craig





--
Rick DeNatale


  


Re: [BackupPC-users] Timing out full backups

2007-03-18 Thread Jason Hughes
Michael Mansour wrote:
> I'm wondering why the full backups, numbering 2, are not going back down to 1 to
> free up some space on the server.
>
> In the global Schedule, I have the following:
>
> FullPeriod: 6.97
> FullKeepCnt: 1
> FullKeepCntMin: 1
> FullAgeMax: 7
>
> and it's my understanding that BackupPC should cycle those full backups above
> to the trash because of the above globals, yes?
>

It depends on how many *incrementals* you have, too.  The number of 
backups you keep will be FullKeepCnt + IncrKeepCnt, plus any fulls 
that must be kept to support those incrementals.  So, it's possible you 
ask for a certain number of full backups on a schedule, but in order to 
retain the incrementals specified, extra baseline full backups are kept 
as well.  In your situation, with FullKeepCnt set to 1, if you have ANY 
incrementals saved, then once a new full is taken, the last incremental 
will be *older* than the most recent full backup, so the full backup 
just prior to that incremental must be kept as well, resulting in 2 
fulls and 1 incremental.
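
A hypothetical timeline may make it concrete (backup numbers and dates 
invented for illustration):

  full #5  (Mar 10)  kept -- incr #6 was built against it
  incr #6  (Mar 14)  kept -- covered by IncrKeepCnt
  full #7  (Mar 17)  kept -- FullKeepCnt = 1
  => 2 fulls + 1 incremental on disk, even though FullKeepCnt = 1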

Does that make sense?

JH



Re: [BackupPC-users] Problems with backuppc

2007-03-15 Thread Jason Hughes

Peter,

For testing purposes, you may reduce the alarm period, but under 
practical circumstances, it must be large enough that it doesn't cut off 
backups that would finish, had they been given the time to collect 
enough file information.  The behavior also depends on the transport 
mechanism you use, so sometimes a 5 minute timeout will be fine, but if 
you switch transports the host never completes, and vice-versa.


Chances are, if you're not getting data in the pc/<host>/new 
directory, there's something wrong.  When you run at the command line, 
are there files in the 'new' subdirectory?  Make sure you do this test 
as the backuppc user.  Also, try to ssh to the remote host and check 
that you have proper file permissions.  Something may have changed with 
the configuration since you set it up.


One last thing: check the host for any really, really deep directory 
structures that are new.  It would be dated just after your last valid 
backup.  If there's a directory with millions of files, or huge numbers 
of subdirectories full of files, it could seriously bog down rsync while 
it builds a file list, because it pulls that info into memory.  With a 
small enough memory configuration, it may be virtual memory swapping on 
the host, making it take up to 100x longer than normal.  With low enough 
disk space on the client, it may be hanging or thrashing the disk 
looking for a place to put the VM, making it even slower.  Of course, 
the simple solution would be to exclude such a directory tree until the 
issue can be remedied.


Hope that helps,
JH

Peter Nearing wrote:

Aaron,

  When I ran the command line that it's trying, the data isn't coming; 
rsync is running on the client, but it stops there.  The BackupPC logs 
state that it's saving the data as a partial, though.  The ClientTimeout 
may be it; I think I'll reduce it to something a little more 
sane... like 5 min... I hope this works, but I'll let you know 
either way.  Thanks for the quick reply.


Peter N.

On 3/15/07, Ciarlotta, Aaron <[EMAIL PROTECTED]> wrote:


You didn't say if it appeared as though data was coming across the
wire or not.
 
This is possibly the culprit:
 
$Conf{ClientTimeout} = 72000;


Timeout in seconds when listening for the transport program's
(smbclient, tar etc) stdout. If no output is received during
this time, then it is assumed that something has wedged during
a backup, and the backup is terminated.

Note that stdout buffering combined with huge files being
backed up could cause longish delays in the output from
smbclient that BackupPC_dump sees, so in rare cases you might
want to increase this value.

Despite the name, this parameter sets the timeout for all
transport methods (tar, smb etc).



*From:* [EMAIL PROTECTED]

[mailto:[EMAIL PROTECTED]
] *On Behalf
Of *Peter Nearing
*Sent:* Thursday, March 15, 2007 3:50 PM
*To:* backuppc-users@lists.sourceforge.net

*Subject:* [BackupPC-users] Problems with backuppc

Hey,

   I'm Peter, and I have been using BackupPC for about 8 months
now, and I am an addict... oops, wrong opening... anyway.

I am a network admin for a prof. at Queens University, and I have
8 computers that I back up using BackupPC, using rsync over SSH.  I
have recently run into a problem with BackupPC failing while
backing up one of my desktops, giving me the reason of SIGALRM. 
I was wondering if anyone else here has had this problem, and if
so, what it's caused by.  The xfer seems to start fine, but after
about 5 hours, it fails... originally I thought the problem might be
due to a partially corrupt filesystem, since I am using XFS, and
it does crap out from time to time, but it would seem as though
that isn't the case, as I have done an fsck on all the drives, and
repaired the one error I got on the client.  I tried the backup
again, and again it failed.  So now, I'm not so sure as to the
problem.  If anyone has any suggestions, or knows why BackupPC
would be getting a SIGALRM, it would be greatly appreciated.

Oh, and a huge thanks to the guys who made this great piece of
software.

Peter N.


Re: [BackupPC-users] Hardware choices for a BackupPC server

2007-03-14 Thread Jason Hughes
John,

IMO, the point behind BackupPC is to use cheap, easily upgradeable disk 
media to make backups available and easy.  That kind of steers me in the 
direction of several low-end backup servers, either with separate 
storage or all sharing a big fat fiber channel NAS.  Buying a high end 
machine and trying to handle what is essentially a very parallelizable 
task on a single box is sort of self-defeating, I think.  So, my 
response would be to keep what you've got and buy another machine to 
offload some clients.

There is definitely some tuning involved with respect to server 
performance, given the various file systems, operating systems, physical 
hardware choices.  But BackupPC can be tuned considerably based on the 
transfer protocols and number of simultaneous backups.  Have you 
exhausted these options?

I'm sure you'll get a different answer from every person on the list, 
since most of what you're asking is kid-in-the-candy-store questions.  
If you're maxing out a single server, chances are, you'll be better off 
with two (or more) servers, or the one you have isn't configured for 
maximum efficiency.  And get a good 3ware RAID card.  :-)

Hope that helps,
JH

John Pettitt wrote:
>
>
> It's time to build a new server.  My old one (a re-purposed Celeron D 
> 2.9GHz / 768MB FreeBSD box with a 1.5 TB RAID on a Highpoint card) 
> has hit a wall in both performance and capacity. gstat on FreeBSD 
> shows me that the Highpoint RAID array is the main bottleneck (partly 
> because it's in a regular PCI slot and partly because it's really 
> software RAID with crappy drivers), and CPU is a close second. I'm 
> going to build a new box with SATA disks and a better RAID card.
>
> So my question: has anybody done any actual benchmarks on BackupPC 
> servers?
>
> Which OS & filesystem is best?  (I'm leaning toward Ubuntu and 
> ReiserFS.)
>
> RAID cards that work well?  Allow for on-the-fly expansion? 
>
> RAID mode?  5?  6?  10?  500GB drives seem to be the sweet spot in 
> the price curve right now - I'd like to get 1.5TB after RAID, so 6 
> drives in RAID 10.
>
> I'm leaning towards a Core 2 Duo box with 2GB of RAM.
>
> Any hardware to avoid?
>
> John



Re: [BackupPC-users] Experience with check_backuppc

2007-03-14 Thread Jason Hughes
You might not get a response that helps you here.  This list is 
specifically for supporting backuppc users, and your question is 
regarding a 3rd party plugin for some other system entirely.  Check any 
readme files provided with the plugin to locate the author.  I couldn't 
easily find the contact info for you.

Sorry,
JH

komodo wrote:
> Hi
>
> I am using BackupPC on several servers and I want to check status with Nagios. 
> That is why I am using the check_backuppc plugin from 
> http://n-backuppc.sourceforge.net/.
>
> Everything works well, but there is one feature that is not good for me. I 
> want to see if backups fail, but some of the hosts I back up are notebooks 
> that are not often on the network, and when I run check_backuppc I get a 
> warning - host not found. But the backups are OK.
> Is there any possibility of changing the plugin's behaviour to report only 
> failed or old backups, and not the last error from backuppc?
>
> Thanks
>
>   



Re: [BackupPC-users] Fatal error for empty directory

2007-03-01 Thread Jason Hughes
If the whole share is empty, that is considered indistinguishable from a 
general failure.  You can control that with 
$Conf{BackupZeroFilesIsFatal}.  Check the docs for more details.

JH

Brendan Simon wrote:
> I'm getting a fatal error when backing up an empty directory.
> BackupPC server is running Debian Sarge (backuppc 2.1.1-2sarge2)
> Server being backed up is running Debian Etch (rsync 2.6.9-2)
>
> Surely it must be legitimate to back up an empty directory.  Is this a 
> bogus error message, or maybe rsync returns an error code that 
> older BackupPC versions don't handle correctly???
>
> Any ideas how to fix this (besides removing the share from the backup 
> list)???
>
> Done: 89 files, 26547311 bytes
> Running: /usr/bin/ssh -q -x -l root chief /usr/bin/rsync --server 
> --sender --numeric-ids --perms --owner --group -D --links --times 
> --block-size=2048 --recursive --one-file-system --ignore-times . /srv/
> Xfer PIDs are now 3934
> Got remote protocol 29
> Xfer PIDs are now 3934,3935
>   create d 755   0/04096 .
> Done: 0 files, 0 bytes
> Running: /usr/bin/ssh -q -x -l root chief /usr/bin/rsync --server 
> --sender --numeric-ids --perms --owner --group -D --links --times 
> --block-size=2048 --recursive --one-file-system --ignore-times . /opt/
> Xfer PIDs are now 3936
> Got remote protocol 29
> Xfer PIDs are now 3936,3937
>   create d 755   0/04096 .
> Done: 0 files, 0 bytes
> Running: /usr/bin/ssh -q -x -l root chief /usr/bin/rsync --server 
> --sender --numeric-ids --perms --owner --group -D --links --times 
> --block-size=2048 --recursive --one-file-system --ignore-times . /initrd/
> Xfer PIDs are now 3938
> Got remote protocol 29
> Xfer PIDs are now 3938,3939
>   create d 755   0/04096 .
> Done: 0 files, 0 bytes
> Running: /usr/bin/ssh -q -x -l root chief /usr/bin/rsync --server 
> --sender --numeric-ids --perms --owner --group -D --links --times 
> --block-size=2048 --recursive --one-file-system --ignore-times . /emul/
> Xfer PIDs are now 3940
> Got remote protocol 29
> Xfer PIDs are now 3940,3941
>   
> Done: 247 files, 6989413 bytes
> Got fatal error during xfer (No files dumped for share /srv)
> Backup aborted (No files dumped for share /srv)
>   
>
>
> Thanks,
> Brendan.
>



Re: [BackupPC-users] smb files truncated

2007-03-01 Thread Jason Hughes
OverlordQ wrote:
> The Unicode versions of several functions permit a maximum path length
> of approximately 32,000 characters composed of components up to 255
> characters in length. To specify that kind of path, use the "\\?\" prefix.
>
> http://msdn2.microsoft.com/en-us/library/aa365247.aspx
>
>   
Thanks for looking that up.  The relevant bits were actually on that 
page.  The question was whether a path-length limit was intervening to 
prevent backups from occurring correctly.  I suggested perhaps UTF-16 or 
Unicode might be causing trouble, if set as the codepage on some 
system.  From MSDN:

"Windows stores the long file names on disk in Unicode, which means that 
the original long file name is always preserved, even if it contains 
extended characters, and regardless of the code page that is active 
during a disk read or write operation. The case of the file name is 
preserved, but the file system is not case-sensitive."

So, it's highly unlikely this is the case, as the long filenames are 
already stored as Unicode on disk.

JH



Re: [BackupPC-users] smb files truncated

2007-02-28 Thread Jason Hughes
All versions of Windows have a limit of roughly 260 characters (MAX_PATH) 
for a full path, including the filename and extension, regardless of file 
system.  I'm not aware of a lower limit imposed by the file system or 
OS, but it's likely related.  Are you running a UCS-2 or UTF-16 character 
set on the server or the Windows box, perhaps?  That might halve the 
limit--I'm not sure.

Hope that helps,
JH

Brendan Simon wrote:
> I'm getting xfer errors on long path/filenames; the total length is over 100 chars.
> Is there a limit on the total path/filename size???
>
> Example error:
>
> Read error: Connection reset by peer opening remote file 
> \bjsimon\ABCD\svn-sandbox\branches\firmware\P20-NiceBigProject-2\F1234.humongous-mutation\project\F1234\src\datapath\src\rx_manager\src\r
>  
> (\bjsimon\CTAM\svn-sandbox\branches\firmware\P20-NiceBigProject-2\F1234.humongous-mutation\project\F1234\src\datapath\src\rx_manager\src\)
>
> There are 6 files in that directory starting with 'r'.
> Examples:
> rx_foo1.v
> rx_foo2.v
>
> How can I get BackupPC to back up long Windows XP file names?
> Is it a length problem or some other issue?
>
> Thanks,
> Brendan.
>



Re: [BackupPC-users] excludes on windows with spaces in names

2007-02-28 Thread Jason Hughes

Jim,

Here is a snippet from my exclude list, which works using rsyncd on a 
Win2k box:

/Documents and Settings/*/Local Settings/Temporary Internet Files/*

The spaces are not a problem, for me at least.  But I did have 
considerable difficulty getting rsyncd.conf to behave when I placed the 
share anywhere on my drive that had spaces in its path.  I had to resort 
to using 8.3 contractions (dir /x) to make it work.  This was true for 
the log file location, secret file location, etc.  Other people have 
reported no problems using the same package, so I'm at a loss to explain it.


Anyway, what I'm getting at is: if you set your share to be a directory 
with spaces in it, perhaps it is failing to match a folder at all, so 
you get the root instead.  That would mean your excludes fail to match 
as well, because / would not be /Documents and Settings/ for you; 
instead, / would be c:/.  You could try changing the share folder to 
/docume~1/ and see if that fixes your problem with excludes.


Good luck,
JH

Jim McNamara wrote:
I did, and unfortunately it made no difference. Here is the rsyncd 
exclude info I based my file on -


# --exclude "*.o"   would exclude all filenames matching *.o
# --exclude "/foo"  would exclude a file in the base directory called foo

# --exclude "foo/"  would exclude any directory called foo.
# --exclude "/foo/*/bar"  would exclude any file called bar two levels below
#   a base directory called foo.
# --exclude "/foo/**/bar" would exclude any file called bar two or more levels
#   below a base directory called foo.
the full page is http://www.ss64.com/bash/rsync.html

I'm guessing the translation from bash to winworld is where my problem 
is occurring.


Peace,
Jim

On 2/28/07, Brien Dieterle <[EMAIL PROTECTED]> wrote:


perhaps the leading "/"s are causing it not to match?  Have you
tried just

'Administrator/'

brien



Jim McNamara wrote:

The modified parts of the files now look like:
$Conf{BackupFilesExclude} = {
  '/Administrator/' => [
''
  ],
  '/Application\ Data/' => [
''
  ],
  '/All\ Users/' => [
''
  ],
  '/Default\ User/' => [
''
  ]
};

and on windows:

exclude = "/Administrator/" "/All\ Users/" "/Application\ Data/"
"/Default\ User/" "/Jennie\ and\ Andy/.java/" "/Jennie\ and\
Andy/.javaws/" "/Jennie\ and\ Andy/.jpi_cache/" "/Jennie\ and\
Andy/Application\ Data/" "/Jennie\ and\ Andy/Cookies/" "/Jennie\
and\ Andy/Desktop/" "/Jennie\ and\ Andy/Favorites/" "/Jennie\
and\ Andy/Local\ Settings/" "/Jennie\ and\ Andy/My\ Documents/"
"/Jennie\ and\ Andy/NetHood/" "/Jennie\ and\ Andy/PrintHood/"
"/Jennie\ and\ Andy/Recent/" "/Jennie\ and\ Andy/SendTo/"
"/Jennie\ and\ Andy/Start\ Menu/" "/Jennie\ and\ Andy/Temp/"

I didn't think it would be necessary on the Windows machine, as it
handled c:\Documents and Settings without special regard to the
whitespace in the path, but figured it was better safe than sorry.

Unfortunately, it still grabs the entire contents of Documents
and Settings.

Peace,
Jim


On 2/28/07, Brien Dieterle <[EMAIL PROTECTED]> wrote:

Have you tried escaping the spaces with a \ ?  Like:
'/Application\ Data/'

Not sure if that will work, but it sounds like it's worth a shot.

brien

Jim McNamara wrote:

Hello again list!

I'm running into some trouble with excluding directories in
rsyncd.conf on a windows machine. The machine in question is
dying quickly, and rarely stays "alive" for more than 30
minutes or so. Because of that, I'm trying to slowly
increment what is being backed up to the debian server.

The main problem is the excludes list is supposed to be
separated by spaces, and of course everyone's favorite OS
has spaces in directory names. I tried to get around this
with quotation marks, but the things I ask to be excluded
are still included. I also tried adding explicit excludes to
the config on the backuppc, and that similarly didn't take.
Here are the key configs -

From the host.pl on the backuppc -

$Conf{RsyncShareName} = [
  'documents'
];
$Conf{BackupFilesExclude} = {
  '/Administrator/' => [
''
  ],
  '/Application Data/' => [
''
  ],
  '/All Users/' => [
''
  ],
  '/Default User/' => [
''
  ]
};

This is the rsyncd.conf that is running from the
cygwin-rsyncd 2.6.8_0 package from the backuppc page at
sourceforge. I am trying to start with the smallest possible
amount of data from this machine, then I'll include things
   

Re: [BackupPC-users] Backup OS/2 share via SMB

2007-02-27 Thread Jason Hughes
Tareq,

That error means you logged in, but for some reason (usually permissions 
problems), your logged-in user cannot see those files.

Have you tried to log into that share manually on your Linux box, using 
a similar command line?  You can leave off a few flags and just log in 
to poke around.  You may find that either 1) your share is indeed empty 
given your user's credentials, or 2) if your login has no password, the 
SMB server may refuse to export certain files or folders for viewing to 
an insecure user.  WinXP sometimes does that.  I have no idea what OS/2 
does.
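
For poking around by hand, something along these lines should drop you 
at an interactive prompt (host, share, and user taken from your log; 
add -N if the account really has no password):

smbclient //pc-60/D -U ADMIN
smb: \> ls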

Have you gotten any OS/2 systems working with SMB?  Are certain shares 
working, but not others on the same machine?

Hope that helps,
JH

Tareq Djamous wrote:
> Hello,
> I'm trying to back up my OS/2 Warp4 clients with BackupPC v.3.0.0.  I 
> created a share that I can access/read/write from a Windows machine and 
> my linux server that runs BackupPC.
> But every time I want to run a full backup of that share it says:
>
> Running: /usr/bin/smbclient pc-60\\D -U ADMIN -E -N -d 1 -c tarmode\ 
> full -tc -
> full backup started for share D
> Xfer PIDs are now 8369,8368
> tarmode is now full, system, hidden, noreset, verbose
> tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 
> 0 filesTotal, 0 sizeTotal
> Got fatal error during xfer (No files dumped for share D)
> Backup aborted (No files dumped for share D)
>
> although there are files to backup.
>
> Thanks in advance for your help
>
>   

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Searching for a command line tool

2007-02-23 Thread Jason Hughes

Jeff Schmidt wrote:

On Fri, 2007-02-23 at 16:47 -0600, Jason Hughes wrote:
  
Basically, I'm looking for a quick way to find the filenames and/or 
backup numbers I should use to get at all versions of a particular file.




how 'bout a commandline browser?

something like:
links "http://BACKUPHOST/cgi-bin/BackupPC_Admin?action=dirHistory&host=HOST&share=SHARE&dir=PATH"

  


Interesting, I hadn't really considered that approach.  With a little 
parsing it could be done.  It would be nicer if a simple suite of 
command line tools existed for managing and scripting the backup pool, 
but the www interface could be coerced to do the right thing.
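
For example, something like this might get you most of the way there 
(host, share, path, and the grep pattern are all placeholders; -dump 
renders the page as plain text for parsing):

links -dump "http://backuphost/cgi-bin/BackupPC_Admin?action=dirHistory&host=myhost&share=cdrive&dir=/etc" | grep -i config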


Thanks,
JH
-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Searching for a command line tool

2007-02-23 Thread Jason Hughes
Hey all,

I suddenly had an urge to do some work on a particular configuration 
file and wanted to determine all the changes that had occurred to it 
over the lifetime of its backups.  Is there a simple command line tool 
that shows all the revisions that have been transferred to my pool, 
either per-pc or globally, based on the fully qualified filename?  
Basically, I'm looking for a quick way to find the filenames and/or 
backup numbers I should use to get at all versions of a particular file.

Any ideas?

Thanks,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up large directories times out with signal=ALRM or PIPE

2007-02-20 Thread Jason Hughes
Jason B wrote:
> However, the transfer always times out with signal=ALRM.
>   
[...]
> Somewhat unrelated, but of all these attempts, it hasn't ever kept a
> partial - so it transfers the files, fails, and removes them. I have
> one partial from 3 weeks ago that was miraculously kept, so it keeps
> coming back to it.
>
> Would anybody have any ideas on what I can do? I've set
> $Conf{ClientTimeout} = 7200; in the config.pl... enabled
> --checksum-seed... disabled compression to rsync... no other ideas.
> Running BackupPC-3.0.0 final. I'm guessing the connection gets broken
> at some point (using rsyncd), but is there any way to make BackupPC
> attempt to reconnect and just continue from where it left off?
>   

Not exactly.  It's a gripe that has come up before.  The way BackupPC 
works is by completing a job; anything incomplete is essentially thrown 
away the next time it runs.  You might try bumping ClientTimeout up to a 
higher number, but chances are you're actually seeing the pipe break 
because the connection is cut, or TCP errors occur that prevent routing, 
or who knows what.  Larger transfers are much more susceptible to this: 
if there is some small chance the connection is cut at any moment, then 
the longer the connection runs, the more likely it breaks, and any 
unrecoverable transfer tends toward impossible to complete as the 
transfer time increases.  :-(
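
If you want to rule the timeout out entirely, you could set it absurdly 
high in that host's config.pl (the value here is just illustrative):

$Conf{ClientTimeout} = 72000;   # 20 hours; raise further if the full backup is slow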

> On a final note: interestingly, backups from the SAME physical host
> using a different hostname (to back up another, much smaller,
> virtualhost directory) work perfectly every day, never failed. So I'm
> guessing it's just having a problem with the size / # of files. What
> can I do?
>
>
>   

I have a machine that has a lot of video (120gb) across a wifi WDS link 
(half 802.11g speed, at best).  I could never get an initial backup to 
succeed, because it could take 30-50 hours.  What I did was set up 
excludes on tons of directories, so the first backup was very short.  I 
kicked it off manually and waited until it completed.  Then I removed 
one excluded directory and kicked off another.  BackupPC skips files 
that have been entered into the pool due to a completed backup, so it is 
kind of like biting off smaller pieces of a single larger backup.  
Repeat until all your files have made it into the pool.   At that point, 
your total backups will be very short and only include deltas.
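
For illustration, the first pass of that staging might look like this 
in the client's config.pl (share and directory names made up):

$Conf{BackupFilesExclude} = {
    '/' => ['/video', '/music', '/photos'],
};

After the first full completes, drop one entry from the list and kick 
off another backup; repeat until the list is empty.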

Other people have had success by moving the server physically to the 
client's LAN and doing the first backup over a fast, stable connection, 
to populate the pool with files initially.  That may not be an option 
for you.

Good luck,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] {Fraud?} {Disarmed} rsyncd problem

2007-02-19 Thread Jason Hughes

Jim McNamara wrote:
I am having a problem with rsyncd between backuppc and a remote 
windows box running the rsync package provided on the backuppc page at 
sourceforge. I installed a clean version of backuppc 3.0.0, moving all 
the old backups from 2.1.1 to an alternate machine should I need them 
later.


The problem is that rsyncd will start this job, get through the 21 Gb 
of data in the apps share from the windows server, then crash out on 
getting the 60 Mb from the UPC share.




It looks like a permissions problem.  I haven't experimented much with 
multiple rsync shares, but perhaps you need to set up your secrets file 
so that the user has access to both folders?  It may be that the user 
you are connecting with on the command line is *not* the same user 
BackupPC is using... ie:

$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncShareName} = ['UPC', 'apps'];
$Conf{RsyncdUserName} = 'backuppc';


Maybe you need to try:

$Conf{RsyncdUserName} = ['backuppc', 'backuppc'];

I haven't done this sort of thing, so I'm not even sure it's allowed, 
but if you let guest logins to rsync, I could see how an unprivileged 
guest account would not have access to the files in your UPC share.  
Just a thought.


Good luck,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 3 load

2007-02-19 Thread Jason Hughes
Nils Breunese (Lemonbit) wrote:
> Any ideas on how we can reduce the load? More/less nightly jobs? Less 
> concurrent backups? Other tips? We used to backup 15 servers onto one 
> BackupPC server, but now almost all of our backups are failing and the 
> load is through the roof. Can we just go and install BackupPC 2.1.3 
> again?
>

Are you sure the only change in your environment was BackupPC?  I mean, 
did the file usage on any of the servers shift in some way so that more 
files are being backed up?  Did you change transport mechanisms on any 
servers?  File systems?

I noticed my SMB backups began a lot faster when I upgraded from a 
10/100 network port to a gigabit port on my backup machine, though rsync 
wasn't affected much.  Adding memory also helped, I think (from 128mb up 
to 256mb, to reduce paging; I measured the high-water mark and surpassed 
it).  I recently reduced it to a maximum of 2 backups at a time, to help 
spread them out a little: I have a blackout period during the work day, 
so they tended to bunch up in the off-period.  I also checked out hdparm 
to tune my disk a little... it didn't help me, but it might help you.
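
For reference, these are the sorts of hdparm checks I mean (device name 
is illustrative; be careful with flags that change settings rather than 
just measuring):

hdparm -tT /dev/hda    # measure buffered and cached read speeds
hdparm -d1 /dev/hda    # try enabling DMA if it is off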

Depending on how loaded your server is, you may notice some backups that 
normally take a short while suddenly take longer, because it's paired 
with a backup that is hogging the system bus, disk i/o or cpu time.  It 
would be nice to have more direct control over pairing, rather than 
having to set per-backup-set blackout times.

JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] variable substitution

2007-02-15 Thread Jason Hughes
Perl likes this:
 $string = 'Hello ' . `executethis` . " test\n";

You probably want to keep the literal pieces in single quotes, and use 
dot-concatenation to string the pieces of the command together.  I 
didn't try what you have below, but I did notice that backticks aren't 
executed if they're inside a quoted string (for me, running perl on 
win32 at least).
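
Applied to your TarIncrArgs case, a sketch might look like this in 
config.pl (the path to lasttime.txt is made up; note the backticks run 
when the config file is loaded, not at each backup):

my $lastTime = `cat /etc/BackupPC/lasttime.txt`;
chomp $lastTime;
$Conf{TarIncrArgs} = '--newer=' . $lastTime . ' $fileList+';

The '$fileList+' part stays inside single quotes so BackupPC can 
substitute it at run time.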

Good luck,
JH

Brien Dieterle wrote:
> How can I get this to work? I am storing the data inside lasttime.txt 
> (don't ask why) :-)
>
> $Conf{TarIncrArgs} = '--newer=`cat lasttime.txt` $fileList+';
>
> the shell command within ` ` does not get executed, so of course this 
> doesn't work at all.  Any ideas?
>
> Thanks!
>
> Brien
>
>   

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] restore a backup to samba

2007-02-13 Thread Jason Hughes
Rob Shepherd wrote:
> Thanks for the reply.
>
> Forgive my ignorance, but if the files are not in a "direct access" 
> format, then how does rsync work?
> rsync compares local and remote file trees before sending deltas etc.
>
> Does the rsync perl module do some translation magic or somesuch?
>   

I don't know the exact answer to this (Craig?), but the files are 
definitely compressed with hashed filenames, and each backup stores the 
real filename and metadata as a hardlink to these compressed, hashed 
files.  So if you allowed a user to work directly with any one version 
of a file on the server, it would necessarily 'corrupt' the backups, not 
only where the user is working, but going back in time as well.  In 
short, this is not the solution you're looking for.

> Yes, real copies on ZFS would be nice, however I need to cater for users 
> who work on the train, in the airport etc and can't necessarily VPN all 
> the time. As you say, if it was left up to the user to sort out 
> versioning, we may as well format C:\ now :)
>
> It there a way of exploiting the BackupPC_Restore to dump to a local 
> folder, even if piped through tar/gtar/star?
>
> Not through the web interface, but from _my_ terminal.
>
>   

What you are describing is simply rsync or subversion/cvs/arch.  If you 
want people to work independently of a central store, but have access to 
that store in the absence of their primary work machine, you want a 
central repository of their files.  A backup system of some sort may do 
that for you, but I think that is going too far: a revision control 
system would be more appropriate if the users need to go back to 
previous versions of files.  Otherwise, rsync to a central server 
frequently and automatically, whenever a secure network connection to 
your server can be built.  Rsync'ing that data back to a local machine 
as needed is pretty straightforward, especially if wrapped in a simple UI.
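
The sync itself is a pair of one-liners (host and paths are 
placeholders; -a preserves permissions and times, -z compresses over 
the wire):

rsync -az ~/work/ user@central:/srv/store/work/    # push up when connected
rsync -az user@central:/srv/store/work/ ~/work/    # pull down elsewhere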

My $.02, at least.

Good luck,
JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem with version 3.0.0 (FEDORA CORE)

2007-02-08 Thread Jason Hughes
You might want to check whether your perl really is in /bin/perl; it's 
probably in /usr/local/bin/perl or /usr/bin/perl instead.  Simply change 
the line at the top of the script to whatever this command tells you:

which perl
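
For example, if 'which perl' prints /usr/bin/perl, the first line of 
/usr/local/backuppc/bin/BackupPC should read:

#!/usr/bin/perl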

Hope that helps,
JH

Dienelt Václav wrote:
>> Hello,
>> I have a problem starting the backuppc service.  First I corrected the 
>> rights on the folder /usr/local/backuppc, but now I get the following message.
>>
>> I start backuppc with /etc/init.d/backuppc start
>>
>> Then I have following :
>> "Starting BackupPC: -bash:
>> /usr/local/backuppc/bin/BackupPC: /bin/perl/: bad
>> interpreter: No such file or directory
>>
>> Could you help me please
>>
>> Thank you very much
>> Vaclav
>> 

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


[BackupPC-users] Minor quip about ParPath

2007-02-05 Thread Jason Hughes
Hi Craig,

First and foremost, love the software.  Thanks so much.

I noticed that in 3.0.0, if you try to edit any of the config from the 
web interface, it refuses to save for me on the basis that ParPath 
points to a utility that I do not have installed.  Funny thing is, I 
don't recall par being a requirement.  I am going to install it today 
just to get the UI working, but since the vanilla Centos 4.4 install 
doesn't put it on a machine, I am guessing there may be others with the 
same issue.  It could be that 2.1.2pl2 placed a path in the config and 
3.0.0 didn't clear it when upgrading (although it seems to require it).  
I certainly didn't put a path in there manually.

Would setting the path to an empty string shut the error up?
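
In other words, something like this in config.pl (untested; that's the 
workaround I have in mind):

$Conf{ParPath} = '';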

Thanks,
JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] GUI problems

2007-02-05 Thread Jason Hughes
Could be many things:
 - Make sure you have added the <Directory> tags in your 
httpd.conf that point to BackupPC. 
 - Make sure you have restarted httpd so it reads the config.
 - Check that your htpasswd file has been created for authorization 
purposes.  This file will contain all the 'users' that can use the web 
interface.
 - Make sure you have declared some user to be an admin user in the 
BackupPC configuration.  The name should match a user name in your 
htpasswd file.  This user will have access to all hosts in the web 
interface.

There are probably other mistakes one could make, but without more 
details, it's hard to give more direction.
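
For the password file and admin user, the usual incantation is 
something like this (the file path is Apache-dependent, and the user 
name is just an example):

htpasswd -c /etc/httpd/conf/BackupPC.users backuppc

and then in config.pl:

$Conf{CgiAdminUsers} = 'backuppc';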

Good luck,
JH

Gerard J. Leonardo wrote:
> Hi there, I have a problem accessing my installed BackupPC via the web.  I 
> checked mod_perl and related pieces and they are there.  Can you please 
> tell me what to do?  I have CentOS 4.4, Apache installed, and BackupPC 3.0.
>
> Please, thanks in advance.
>
> Gerard
>
>   

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC pings by hostname, but backs up by ip?

2007-02-02 Thread Jason Hughes
Dave Fancella wrote:
> I went ahead and did this for now, but it's still not quite the right 
> solution.  The laptop is dhcp because it periodically goes wardriving, so a 
> solution where I can have it dhcp is best.  :)  Still, it'll be some months 
> again before I might need it to leave the house, in the meantime it's staying 
> put, so this will work for awhile.  Thanks.
>   
>

You can set most recent consumer router/gateway boxes to assign a static 
IP through DHCP when it recognizes a MAC address.  This way, you can 
still have your laptop set up for automatic IP discovery through DHCP, 
but at home, you always get the same IP address, and the laptop is 
none-the-wiser.

With regard to setting up a DNS server: it's trivial to set up simple 
DNS, but it can be somewhat challenging for a first-timer to set up 
reverse DNS.  Not all router/gateway boxes support overriding just the 
DNS server address when handing out IP addresses via DHCP, though.  My 
Zyxel P-330W allows this, and it works really well.  Windows and Linux 
boxes can easily be set up to register their IP addresses with their DNS 
server when they receive a DHCP ack, so the server can reverse the 
mapping as needed.  IP-to-hostname resolution is called reverse DNS, 
which is what you need for dynamic IP clients to resolve correctly with 
traceroute and nslookup, among other things.

So the typical alternative is to run DNS and DHCP from a single Linux 
box instead of your router, which gives you the safest mechanism for 
updating the reverse DNS.  (If clients register themselves, there's a 
security hole: someone else could log into your network, pretend their 
IP maps to one of your other machine names, and be trusted by the DNS 
server to update it.  If you run DHCP and DNS on the same machine, the 
clients don't need to register themselves, because DHCP can tell DNS 
directly over the loopback interface.)  There are plenty of HowTos out 
there for DNS, in case you want a weekend project.  :-)

But, by far, the simplest solution is to use a static assignment from DHCP.
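
If the box handing out leases runs dnsmasq, for instance (just one 
common choice; the MAC, name, and address below are placeholders), the 
static assignment is a single line in dnsmasq.conf:

dhcp-host=00:16:3e:aa:bb:cc,laptop,192.168.1.50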

JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] RHEL4 fresh load - child exited prematurely

2007-02-01 Thread Jason Hughes


When I say 3 different locations, I don't mean 3 different floors of 
the same building.  I mean three different client sites, miles apart, 
with completely different *everything*, including network hardware 
brand.  Some of them are HP ProCurve switches (our preferred brand) 
but nowhere near all of them:  everything from Netgear and Linksys to 
Cisco.


It is *NOT* the networking hardware!  This is NOT an isolated 
situation.  It's not just **ONE** server that is having this problem. 
 And despite the fact that others have made this configuration work, I 
am *very* confident that you can reproduce it too.  Take 2 Pentium III 
computers, load RHEL4 (or CentOS 4.0 or 4.1) on one, boot Knoppix 
4.0.2 on the other.  rsync a couple of gig of data.  It will fail.
Ok.  I missed that detail in your previous messages. It is unlikely to 
be hardware then.




Maybe if you use a newer version of RHEL/CentOS it won't.  That's what 
I'm testing next.  Haven't gotten that far yet.  I'm trying to 
systematically step from version to version:  I'm at CentOS 4.1 right 
now.  But I have tried nearly a dozen different kernels and 
distributions on the host end, on 3 different models of hardware, and 
6 different physical devices.  All of them fail in exactly the same 
way.  I have intentionally changed out *every* hardware component 
(server computer, host computer and network) and *every* software 
component on the server.  The only thing that has remained constant 
through this entire process is RHEL4 on the host.  And, of course, the 
failures are pretty constant, too...
Why use an old kernel for the backup server, if you can choose the 
server OS?  I have Centos 4.4 Server and it works like a dream.  I ran 
it on a PPro 200mhz, and it worked fine.  I literally swapped the hard 
drive into a P2-450mhz and it works fine there as well (and it 
automatically picked up all the necessary drivers without futzing--very 
nice!).


I actually wish someone would take Centos 4.4 and wrap it up onto a 
bootable ISO, so I could completely protect my server machine 1) from 
intrusion, 2) from hardware failure.  My PPro one day just decided not 
to boot, so I had to risk swapping the whole thing to another box.  I'd 
really love it if that were as simple as unplugging the archive drive 
from USB or Firewire and plugging it into another machine.  
Unfortunately, setting up a working distro on a CD is beyond my talents...


JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] RHEL4 fresh load - child exited prematurely

2007-02-01 Thread Jason Hughes
Timothy J. Massey wrote:
> [EMAIL PROTECTED] wrote on 02/01/2007 
> 12:22:18 AM:
>
>  > Timothy J. Massey wrote:
>  >
>  > > rsync: read error: No route to host
>   
This one would concern me most.  I thought there was a note somewhere in 
the docs that says clients should have reverse DNS set up for them, 
though I can't find that mention at the moment.  But if you're starting 
backups and they don't fail immediately, it's probably not the issue 
you're searching to solve.
>  >
>  > This is almost certainly a network error, unrelated to rsync or whatever
>  >   application might be running.  
I agree with this statement.
> My notebook can copy the data maybe 30% of the time.  The slower 
> computers cannot.
>   
Your exhaustive tests have shown that faster machines tend to do 
better.  That points to a timing issue, where the error is likely to 
happen in X minutes, and if your machine is quick enough, sometimes it 
doesn't happen to you.  That definitely points to something external to 
the backup server.  Do you have anything going on in the network at a 
periodic interval, such as a reset of some switch?

The best suggestion so far is to take the network out of the equation 
and hook your laptop up to the switch closest to the server and see if 
it works two or three times in a row.  If so, start moving further away 
topologically until it fails.  I'm sure you'll find a pattern in there, 
somewhere.  The spirits in the hardware will be appeased and you will 
get some sleep.  :-)

JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] 2.1 versus 3.0

2007-01-31 Thread Jason Hughes
James Ward wrote:
> it looks like they're going to all get started at the same time again  
> due to waiting on the nightly process to complete after the longest  
> of these backups.
>
> Does version 3 get me away from this scenario?  
Yes.  Version 3 doesn't need nightly processing to be mutually exclusive 
with backups; they should fire off whenever they're due to start.  
However, if one host takes more than 24 hours to back up, and the 
period between backups is less than that, it will pretty much always 
be backing up that machine and falling further behind.  Since you 
mention having multiple backup servers, perhaps putting the largest file 
server hosts onto different BackupPC servers would help?
> And on my other  
> server that's backing up 200 machines (some remote), will it be able  
> to just backup 24x7 with version 3?  Right now it spends most of  
> every day from the wee hours until the afternoon doing the nightly  
> cleanup.
>   
Again, this should be alleviated in version 3.  Even if the processing 
is still lengthy, it should not bunch up your backups anymore, so 
theoretically, the same server has greater capacity in version 3 than in 
version 2.

JH

-
Using Tomcat but need to do more? Need to support web services, security?
Get stuff done quickly with pre-integrated technology to make your job easier.
Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Found a Bug

2007-01-29 Thread Jason Hughes
Willem Viljoen wrote:
> I have inserted the username and password required to make backups, and 
> it works, full and incremental. When turning "Use simple File Sharing" 
> off, incremental backups fail with the message: "backup failed (session 
> setup failed: NT_STATUS_LOGON_FAILURE)". My printer monitoring server 
> requires that "Use simple File Sharing" be turned off.
>
>   
Are you sure that full backups are working?  The only way a 
NT_STATUS_LOGON_FAILURE can occur is if the authentication for your 
user/pass is denied by SMB on the Windows XP box.  If it's wrong for 
incremental, it's wrong for full backups as well, since it's the same 
setting.

You might want to re-create the root$ share now that Use Simple File 
Sharing is turned off, as sometimes SMB likes to have its configuration 
recreated after certain settings change.  Also, I would mention that if 
your username has an empty password associated with it, you may 
experience strange behavior using Samba as a transfer method, because 
users without passwords may be considered 'guests' through that protocol 
and arbitrarily be denied access to some directories.
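
Recreating the share from a Windows command prompt would look something 
like this (assuming root$ exports the whole drive, as is common in 
BackupPC SMB setups):

net share root$ /delete
net share root$=C:\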

Good luck,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Bug: Cannot use rsyncd on RHEL4 hosts: child exited prematurely

2007-01-29 Thread Jason Hughes
[EMAIL PROTECTED] wrote:
> Maybe I shouldn't chime in, because I've only been half following this
> thread, but I can't help wondering if you've looked into all the
> firewall/timeout possibilities?  Sometimes those settings get hosed during
> an upgrade too.
>
>   
Not a bad thing to look into.  I remember someone saying earlier that 
they found a switch that would drop a TCP connection after only a few 
minutes of inactivity.  On a large file, is it possible rsync is busy 
calculating a checksum for a long time, especially on a busy system, 
causing that period of inactivity to trigger some denial-of-service 
rules for open ports?

One thing you could do is hook up a packet sniffer (tcpdump) and see 
what is happening on the wire during that time.  If it's nothing, that's 
your first clue.
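
Something like this would do (interface and client address are 
placeholders; 873 is rsyncd's default port):

tcpdump -i eth0 -n host 192.168.1.42 and port 873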

Also, you might crank up the timeout settings for just the failing hosts 
and see if it allows your backups to run longer before failing.  Your 
failures are right about 20 minutes into them, which is a suspiciously 
round number.

JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Data Directory on a Network Attached Storage

2007-01-25 Thread Jason Hughes
Simon Köstlin wrote:
> I think TCP is a safer connection or plays that none rolls?
> Also when I click on a PC in the web interface it takes around 20-30 seconds
> until the web page appears with the backups which were made. I thought that
> would be better with an other connection. But that time is not dependent on
> the size of the backups. I made backups with just some files and it takes
> that time to load also if I have Backups with 3GB.
>   
Actually, UDP is a lighter-weight transport than TCP--both sit directly 
on top of IP, but UDP skips TCP's connection and retransmission 
overhead, so it is generally faster.  I'm not certain TCP would be 
safer, except in the event you have a firewall between your backup 
server and the NAS, and you planned to implement a secure layer for it 
to talk over. 

If you are seeing performance problems, perhaps it is slow because you 
are using the CGI interface through Apache, rather than installing it 
with mod_perl.  Did you configure the backup server to use mod_perl?

JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Exclude not working

2007-01-24 Thread Jason Hughes

All of my excludes look like this:
$Conf{BackupFilesExclude} = ['/proc', '/var/named/chroot/proc', '/mnt', 
'/sys', '/media'];

$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncShareName} = 'wholedrive';

They seem to work fine.  I'm using 3.0.0beta3.  Is your rsync share name 
correct?  Shouldn't your exclude path be _relative_ to that directory 
then?  My "wholedrive" is "/", so my directories are relative.  Yours 
aren't.
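
In other words, I'd expect the relative form to look like this 
(untested, but that's the shape I mean):

$Conf{RsyncShareName} = ['/usr/local'];
$Conf{BackupFilesExclude} = ['/var/backups'];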


JH

James Kyle wrote:
I figure I'm doing something wrong here, but I wish to backup 
/usr/local, but not /usr/local/var/backups


_TOPDIR_/pc/localhost/config.pl:

$Conf{RsyncShareName} = ['/usr/local'];
$Conf{BackupFilesExclude} = ['/usr/local/var/backups'];
$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList';
$Conf{RsyncClientRestoreCmd} = '/usr/bin/sudo $rsyncPath $argList';

localhost full backup log:

Contents of file /usr/local/var/backups/pc/localhost/XferLOG.0.z, 
modified 2007-01-24 12:03:32 (Extracting only Errors)


Running: /usr/bin/sudo /usr/bin/rsync --server --sender --numeric-ids --perms 
--owner --group --devices --links --times --block-size=2048 --recursive 
--exclude=/usr/local/var/backups --ignore-times . /usr/local/
Xfer PIDs are now 315
Got remote protocol 28
Negotiated protocol version 26
Sent exclude: /usr/local/var/backups
Xfer PIDs are now 315,316
[ skipped 2037 lines ]
Done: 1892 files, 214682497 bytes
  
And yet:

Name		Type	Mode	#	Size	Date Modified
backups		dir	0755	0	544	2007-01-24 -8:00

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Migrating to V3

2007-01-19 Thread Jason Hughes

Clemens von Musil wrote:
When version 3 turns from beta to stable, will it be possible to 
migrate an existing system, with file pool etc., to the newer version?

Is it already possible to outline what I need to do for the migration?
You mostly just download and install over the existing 2.1.x version, 
then restart Apache if you run mod_perl.  In a production environment, 
you'd probably want to backup your config in case something goes 
horribly wrong, naturally.  It retains the existing pools and per-pc 
settings.  And from that point on, you can mostly edit configs via CGI, 
so you can look it over pretty quickly.
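
The mechanics are roughly (configure.pl detects an existing install and 
reuses its settings; adjust the tarball name to the release you grab):

tar xzf BackupPC-3.0.0.tar.gz
cd BackupPC-3.0.0
perl configure.pl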


JH
-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backupp Errors

2007-01-17 Thread Jason Hughes
Byron Trimble wrote:
> All,
>
> All of a sudden, none of my backups (rsync) are working. I'm getting "unable
> to read 4 bytes" for each backup. Any insight?
>
>
>   
I had this happen to me when I had an old File::RsyncP version using 
protocol 26 trying to connect to rsyncd that was at protocol 29.  Check 
your logs and see what protocol they negotiate to.  If this is your 
issue, update with CPAN on the server.
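
To check the installed version and update it, something like:

perl -MFile::RsyncP -e 'print $File::RsyncP::VERSION, "\n"'
perl -MCPAN -e 'install File::RsyncP'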

My problem was complicated further by having two different versions of 
Perl installed (one from yum, another from sources), and CPAN was 
updating one version, but BackupPC ran the other.  If CPAN checks out as 
having the latest version, you might check for someone installing 
another Perl on your system.

Hope that helps,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] bare metal ?

2007-01-16 Thread Jason Hughes
As silly as it may sound, I have had some success using VirtualPC or 
VMware or similar PC simulators rather than trying to restore a Windows 
PC from scratch.  The beauty of it is that you can keep several machine 
images sitting on the hard drive of the host OS, and when one crashes 
and burns (as Windows inevitably does), I just launch one of the most 
recent copies.  A full backup of a running virtual machine is as simple 
as pausing the VM and copying the disk file that encloses it.


Not saying it's appropriate for all users, but it's really darned handy.  :-)

JH

[EMAIL PROTECTED] wrote:


Aloha,

Is there any hope for adding bare-metal restore capabilities to 
BackupPC for Windows clients?



Thanks,

Richard
-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] chdir failed

2007-01-12 Thread Jason Hughes
Arlequín wrote:
> Hello, David.
>
> I use a stand alone rsync + cygrunsrv install.
> The service rsync.exe is reported as running by user SYSTEM.
>
> SYSTEM has all the perms activated on directory
> C:\Documents and Settings\jdoe\Desktop
>
> But I'm getting "chdir failed" when rsync'ing.
> rsync -av [EMAIL PROTECTED]::johndoeDesktop
>
> What else do I have to check on WindowsXP permission? :-\
>
> Thanks in advance.
>   
>
Did you put any paths in the rsyncd.conf file that have spaces in them?  
You must use 8.3 filenames (dir /x) in the configuration file, or rsyncd 
will fail to understand the paths.  Check that first.

JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] backing up FROM removable media (3.0.0beta3)

2007-01-11 Thread Jason Hughes
Joe Casadonte wrote:
> Using 3.0.0beta3, backup client is WinXP Pro via rsyncd.
>
> I have an 80 GB USB hard drive that I'd like to back up if it's
> connected.  If it's not, then I'd like the rest of the laptop backed
> up.  I have 'BackupZeroFilesIsFatal' unchecked.  Here's what I get in
> the log:
>
>   
If you want to do this completely on the client's side, you could Google 
for junction points.  They're symbolic links for Windows, allowing you 
to make an empty subdirectory somewhere on your client and junction in 
the removable drive.  I think that would give you what you want without 
changing the backup server config.
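
With the Sysinternals junction tool, for instance (tool choice and 
paths are illustrative), creating the link is one command:

junction c:\backups\usbdrive e:\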

Good luck,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Question

2007-01-11 Thread Jason Hughes
The BackupPC system is a server-pull model.  There is no such thing as a 
missed backup because the server keeps the schedule.  If the server is 
down, the backups will run as soon as they are allowed to run (taking 
into account blackout periods and minimum uptime requirements).  Making 
two or more backups in rapid succession because a few days went by when 
backups should have occurred does not improve your data security.

Hope that helps,
JH

Jonathan Caum wrote:
> Hey, I am currently using BackupPC at home as a test of its ability, 
> and I noticed one thing - if the server is ever down, will the 
> computer know it has backups that should have run during the down 
> time, and then run the missed backups?
>
> Thanks!
>

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Defunct BackupPC Process

2007-01-11 Thread Jason Hughes
This was happening to me when I was using rsyncd and File::RsyncP on the 
server that ran protocol version 26.  Upgrading it to run protocol 28 
with CPAN fixed my problem.  You said ssh+rsync, not rsyncd tunneled 
through SSH right?  So maybe this doesn't apply to you.

JH

Randy Barlow wrote:
> Howdy all.  I started a backup manually of a remote host (read: backup
> using rsync over ssh through the public internet) yesterday morning at
>   

> 16928 ?00:00:13 BackupPC_dump 
> 16942 ?01:56:27 BackupPC_dump
>
>
>   

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup a host 1 time and 1 time only.

2007-01-10 Thread Jason Hughes

From the documentation:


   Other installation topics

*Removing a client*

   If there is a machine that no longer needs to be backed up (eg: a
   retired machine) you have two choices. First, you can keep the
   backups accessible and browsable, but disable all new backups.
   Alternatively, you can completely remove the client and all its backups.

   To disable backups for a client there are two special values for
   $Conf{FullPeriod} in that client's per-PC config.pl file:

*-1*

   Don't do any regular backups on this machine. Manually requested
   backups (via the CGI interface) will still occur.

*-2*

   Don't do any backups on this machine. Manually requested backups
   (via the CGI interface) will be ignored.
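
So for this case, something like the following in that host's per-PC 
config.pl (straight from the values above):

$Conf{FullPeriod} = -1;   # or -2 to ignore manually requested backups as well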



Ryan Turnbull wrote:
Is there a way to have BackupPC back up a system once and keep the backup 
indefinitely?  I have backed up a crashed PC and restored some data from 
the backup, but I DON'T want to have BackupPC back up that host 
again.  The PC host is at the same IP address as before.  I also 
need to be able to view the backup within the CGI.


Please let me know if there is a way.

Thanks

Ryan

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/

  
-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] Transferred data lost

2007-01-10 Thread Jason Hughes
Unfortunately, yes.

What you might want to do is put some of the larger directories in the 
BackupFilesExclude list for that client.  Then do a full backup.  
After that backup succeeds, remove one of the excluded folders and 
trigger another backup.  Rinse, repeat.

This way you will populate the set of files that BackupPC knows about, 
and subsequent backups will skip them quickly and move on to the 'new' 
files, transferring only those.  Once the whole machine has had all its 
files backed up once, even a slow connection is pretty dependable for 
backups.

In my experience, having a single very long backup over a slow 
connection is doomed to fail repeatedly for various reasons (loss of 
connection, reboot, network hiccup, etc).

Hope that helps,
JH

Yves Trudeau wrote:
> Hi,
> We are experimenting with BackupPC and we are backing up a 30 GB share 
> over the Internet with rsyncd.  This morning, after more than 30 hours of 
> transfer, the remote host was accidentally rebooted, so the connection 
> was lost for a few minutes.  BackupPC restarted the backup very nicely, 
> but the content of the "new" folder seems to be lost and all the files 
> are being transferred again.   Is this the normal behavior of BackupPC?  
> We use 3.0.0beta3.
>
> Yves
>
>   

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] CPU Load statistics (Was: Re: OK, how about changing the server's backuppc process niceness?)

2007-01-09 Thread Jason Hughes
Timothy J. Massey wrote:
> The C3 is slow.  I  get it.  I already *knew* that.  However, the 
> performance numbers I posted demonstrate pretty clearly that the 
> failure is not in a simple lack of CPU power, but in truly how *much* 
> CPU power rsync demands.  I get triple the performance in switching 
> from rsync to SMB.  Same CPU, same network, same hard drives, 
> different protocol.
Sure, I see that.  I was only pointing out that the conditions of this 
code might be right to cause worse than normal performance for the C3 
for any number of reasons, making it buckle under the load sooner.  Not 
that it's a bad config at all.
>
> And interestingly enough, the load is far higher on the computer I'm 
> backing up, not the poor little C3.  It takes a 2.6GHz Xeon with SCSI 
> RAID drives to force my anemic C3 to 100% utilization.  I can live 
> with that.  So while my C3 is leaving *some* performance behind, it's 
> nowhere near enough to cause me to switch from the tiny, attractive, 
> quiet, reliable, affordable package I have develeped around the EPIA.  
> (The DMA issue, though, might be.  At least to a different motherboard.)
All my machines here are older except my primary dev box (2.4ghz P4 
w/HT).  On that machine I use SMB to back it up, simply because it's a 
WinXP box and it responded well to SMB.  I had another Win2k box over a 
quarter-speed WDS 802.11g link that would fail over SMB due to 
occasional interference; I *had* to switch it to rsyncd just so backups 
could complete.  Even then, I had to stage the introduction of 
directories with large files into the backup system so I could get them 
into the pool and have the backups complete at all.  (An incomplete 
backup due to a lost connection does *not* enter new files into the 
pool, even the ones that transferred completely, wasting that transfer 
time... the single most serious flaw in BackupPC, if you ask me.)
>
> The whole reason I got involved in this discussion is because others 
> have repeatedly said that they get outstanding performance from 
> cast-off  machines.  That may be true:  but it wasn't true for me.  I 
> wanted to know why.  And the reason is not the C3.  It's rsync.
My PPro200mhz server connects with SMB for my faster WinXP client with a 
quick network connection.  I use rsyncd for a Win2k box over a slow 
wireless link.  I used rsync+ssh on two older linux boxes and switched 
them to rsyncd to avoid privilege escalation and to hopefully improve 
performance a little.
>
> Are there others out here using rsyncd?  What type of performance do 
> you get?

Maybe that statistic doesn't mean what you think it means.  The number 
of MB/s that you get *should* decrease after your first full backup.  
That number (Craig, correct me) is the amount of changed data 
transferred to the server, divided by the total time the backup took to 
run.  The time the backup spends inspecting files that did not change or 
get transferred is included, which reduces the apparent throughput; 
rsync itself has lousy throughput too, which makes it look worse.  The 
only time this would accurately reflect protocol throughput is when the 
pool is bone dry.  The more files match the pool, the more time 
(relatively) the client machine spends on files that will ultimately 
not be transferred and will not increase the data transferred.  This is 
a good thing.
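
As a worked example (numbers invented for illustration):

reported speed = bytes actually transferred / total wall-clock time
e.g. 120 MB of changed files over a 1200-second run reports 0.1 MB/s,
even if the wire moved those bytes at full link speed.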

So, the statistic is a little misleading.  If you *were* getting 
something very high for that number, I think it means you've got 
something wrong with your pool, because no files are matching their 
checksums.

For reference, none of my backups are over 1MB/s, and some are about 
0.1MB/s because nothing changes except a few log files.

One take-away from this is, if you have a number of large files that 
have scattered changes every week, maybe rsync is not the right protocol 
for that client?

Thanks,
JH

-
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/


Re: [BackupPC-users] CPU Load statistics (Was: Re: OK, how about changing the server's backuppc process niceness?)

2007-01-09 Thread Jason Hughes
Sorry, I'm not great at deciphering linux diagnostics (I'm relatively 
new to it--a year or two), but I did a little poking around to see what 
might be causing trouble.  Wikipedia had these choice bits to say about 
the C3 chip design:



 C3

   * Because memory performance is the limiting factor in many
     benchmarks, VIA processors implement large primary caches, large
     TLBs, and aggressive prefetching, among other enhancements.  While
     these features are not unique to VIA, memory access optimization is
     one area where they have not dropped features to save die space.
     In fact, generous primary caches (128K) have always been a
     distinctive hallmark of Centaur / VIA designs.

   * Clock frequency is in general favored over increasing instructions
     per cycle.  Complex features such as out-of-order instruction
     execution are deliberately not implemented, because they impact the
     ability to increase the clock rate, require a lot of extra die
     space and power, and have little impact on performance in several
     common application scenarios.  Internally, the C7 has 16 pipeline
     stages.

   * The pipeline is arranged to provide one-clock execution of the
     heavily used register--memory and memory--register forms of x86
     instructions.  Several frequently used instructions require fewer
     pipeline clocks than on other x86 processors.

   * Infrequently used x86 instructions are implemented in microcode and
     emulated.  This saves die space and reduces power consumption.  The
     impact upon the majority of real world application scenarios is
     minimized.

   * These design guidelines are derived from the original RISC
     advocates, who stated that a smaller set of instructions, better
     optimized, would deliver faster overall CPU performance.



And they give stats on L1/L2 cache sizes that are pertinent:
Processor      Secondary cache (K)   Die size, 130 nm (mm²)   Die size, 90 nm (mm²)
C3 / C7        64/128                52                       30
Athlon XP      256                   84                       N/A
Athlon 64      512                   144                      84
Pentium M      2048                  N/A                      84
P4 Northwood   512                   146                      N/A
P4 Prescott    1024                  N/A                      110


What I would take from this is A) the C3 does not have out of order 
instruction scheduling, so a lot of places where a Pentium class chip 
would fly through a hunk of code that has numerous data dependencies 
will stall like crazy on a C3, causing tons of wasted cycles which show 
up as CPU usage (the pipe is 16 instructions long on the C3, so a stall 
is at least that many cycles).  Calculating hashes is a pretty tight 
loop, so that will probably increase the total clocks required to 
perform a hash computation.  B) The C3 has a pretty small L2 chip cache, 
but a large L1.  It may be adequate for this task, it may not... hard to 
say without getting performance counters straight from the chip while 
running a backup, to see how many cache misses you have.  Chances are, 
running the OS, Perl, plus the large data sets that are flooding through 
the chip are demanding a lot from such a small cache.  It may be that 
the data itself stays in cache but the code for other tasks gets 
evicted, making task switches very expensive; or possibly the data is so 
large, or iterated in just the wrong way, that ~150k is too small a 
working space for computing hashes.  A cache miss on every 64 bytes 
would show up as an incredible CPU hit.


You can test situation B) by going into the CMOS and disabling the L2 
cache, or both if you have to, and re-run a 'quick' backup to compare 
the times.  If the cache is being blown constantly, this will have 
little effect.  Otherwise, it should run about 5x slower with the cache 
disabled, indicating the L2 cache is not the bottleneck.


I will say that your server has a ton of files more than mine do, so 
perhaps you're also being hit by a per-file overhead... maybe packet 
processing costs are eating your lunch?  It's possible your network 
driver is doing a lot in software that counts toward your CPU usage?  If 
your server is also experiencing high load, I would suspect it to be a 
per-file overhead of the transfer protocol rather than a specific 
hardware problem with the C3, since it would also be reflected on a much 
beefier box.


Hope this helps,
JH


Re: [BackupPC-users] Kinda OT: Perform mke2fs in CGI script intelligently

2007-01-08 Thread Jason Hughes
You might consider writing a little Perl script rather than shell for 
the formatting step.  That way, you can launch the format command as a 
pipe, read its output (the 11/25000 followed by a bunch of ^H 
characters backing up over itself), parse it, and then output something 
more meaningful for a web page, like a bar of stars or somesuch.  I'm 
not certain what buffering applies when outputting a page from a 
long-running CGI program, so you might need to flush the STDOUT stream 
as well so the user sees the bar of stars growing.
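
Something along these lines is what I have in mind -- a rough sketch 
only, with the device path, mke2fs flags, and bar width lifted straight 
from your script rather than anything canonical:

#!/usr/bin/perl
use strict;
use warnings;

$| = 1;                           # unbuffer STDOUT so the browser sees progress
print "Content-type: text/html\n\n<pre>\n";
print "Formatting drive...\n";

# 2>&1 folds any stderr chatter into the pipe as well
open my $fh, '-|', 'sudo /sbin/mke2fs -j -LDataDrive /dev/hdc1 2>&1'
    or die "can't run mke2fs: $!";

$/ = "\b";                        # mke2fs redraws its counter with backspaces
my $stars = 0;
while (my $chunk = <$fh>) {
    next unless $chunk =~ m{(\d+)/(\d+)};    # e.g. "11/1222"
    my $want = int(50 * $1 / $2);            # scale progress to a 50-star bar
    if ($want > $stars) {
        print '*' x ($want - $stars);
        $stars = $want;
    }
}
close $fh;
print "\ndone\n</pre>\n";

Setting $/ to "\b" is the key trick: each read hands you one counter 
update instead of one giant line.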

Adding new functions to the BackupPC interface isn't very hard.  I added 
one that allows you to get the output of 'ps' on the server, so you can 
see what's running and if it's gone zombie or not.  It only took an hour 
or so to do.  Unfortunately, the way the HTML has been treated for 
translation is quite painful--it's replicated in each language file and 
translated in place, so you have to make and maintain multiple copies of 
each page.  But it's straightforward, at least.

As for the advisability of formatting from a web page... Let's just say 
you have a lot of trust in your users.  :-)

JH

Timothy J. Massey wrote:
> Hello!
>
> I've written a CGI script to allow the user to change the media in a 
> BackupPC using the web GUI.  There's really just two parts:  shut down 
> the server, and initialize the media.  I have a shell script that works, 
> but I would like to improve it.  I have put zero effort into the HTML: I 
> will eventually wrap the standard BackupPC look and feel around it.  My 
> biggest problem is the terrible delay while the format is taking place.
>
> #!/bin/sh
> echo "Content-type: text/html"
> echo ""
> echo "Resetting drive"
> echo "Erasing and configuring drive"
> echo "Unmounting drive..."
> sudo /bin/umount /mnt/removable
> echo "Formatting drive (this may take up to 10 minutes)..."
> sudo /sbin/mke2fs -j -LDataDrive /dev/hdc1 > /dev/null
> echo "Mounting drive..."
> sudo /bin/mount /mnt/removable
> echo "Drive change complete!"
> echo "Return to Backup 
> System."
> echo ""
> exit
>
> I am *very* open to suggestions on how I might implement this more 
> cleanly.  The script works, but there is a very long pause while the 
> script does the format.  If I take the "> /dev/null" out, I get lots of 
> lines like this:
>
> 2/1222 3/1222 4/1222 5/1222 6/1222 7/1222 8/1222 9/1222 10/1222 
> [... continuing one counter update at a time ...]
> 90/1222 91/1222 92/1222
>
> Those lines repeat for hundreds of lines.
>
> What I would really like would be some way of letting the user know that 
> progress is being made, but obviously in a slightly more attractive way!  :)
>
> Does anyone have any experience on how you might be able to format a 
> drive via a CGI script that is more attractive?  Google has been no 
> help.  I have a feeling that formatting drives remotely is not exactly a 
> common goal... :)
>
> Thank you very much for any help you might be able to provide.  For 
> completeness, I've added my script for shutting the server down below as 
> well.  "shutdown.html" is merely a BackupPC page saved as HTML and 
> modified for my purposes.
>
> Tim Massey
>
>
> CGI Script:
> #!/bin/sh
> echo "Content-type: text/html"
> echo ""
> cat shutdown.html
> echo ""
> echo ""
> sudo /sbin/shutdown -h now
> exit
>
> shutdown.html:
> 
> 
> BackupPC: Shut Down Server
>  href="/backuppc/image/BackupPC_stnd.css" title="CSSFile">
> 
>  onLoad="document.getElementById('NavMenu'

Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?

2007-01-08 Thread Jason Hughes



[EMAIL PROTECTED] wrote:
I routinely hit 100% CPU utilization on the Via C3 1GHz Mini-ITX 
systems I use as backup servers.  I will grant you that the C3 is not 
the most efficient processor, but I am definitely CPU-limited.  I too 
have 512MB RAM, but the machines are not swapping.  And that's with 1 
backup at a time running (there is only one system to back up).

[...]
Like I said, these machines are not swapping during a backup.  They're 
just busy.  Is this not normal?  Is there a way to improve this?


Yes, do not use VIA C3 processors or motherboards.  They are severely 
underpowered at any clock rate.  :-)


Seriously, I looked into using them for some dedicated processing 
machines, and when I saw the benchmarks, I ran screaming.  They make 
fine toy web servers, MP3 streamers, or maybe low-end office/browsing 
boxes.  Chances are, yours are spending most of their time calculating 
hashes, which is exactly where the C3 fails to perform.  Like I said, I 
have a PPro 200 doing two backups simultaneously; one of the machines 
takes about 25-30 hours to back up ~120GB over 802.11g *the first time 
only*, but afterwards even full backups take only about 2 hours.  The 
data is a mix of many, many small files (a typical Windows development 
box) and a few large video files (a video editing station).  The GUI is 
slower during this time, but not unusable.  I am running mod_perl, 
though, which I'm told matters a lot.


A system with only small files means more work to do: more packets to 
send and more hashes to compute, so YMMV.  If that is your bottleneck, 
that is.  I don't know the VIA chipset personally; perhaps it has real 
problems with memory throughput.  If it doesn't have a dedicated DMA 
controller, it has to move memory using the CPU, which shows up as 
usage the same as running a program would.  Being a mini-ITX, I would 
suspect this.


It would be nice if there were a chart of people's backup system 
configurations and their backup time performance over a specific network 
type, just so users have some idea what to expect.  The variability in 
performance might make more sense with more data points.


Thanks,
JH



Re: [BackupPC-users] OK, how about changing the server's backuppc process niceness?

2007-01-02 Thread Jason Hughes
Holger Parplies wrote:
> Paul Harmor wrote on 01.01.2007 at 20:51:43 [[BackupPC-users] OK, how about 
> changing the server's backuppc process niceness?]:
>   
>> I have only 2 machines (at the moment) being backed up, but every time
>> the backups start, the server system slows to an UNUSEABLE crawl, until
>> I can, slowly, start top, and renice the 4 backuppc processes.
>>
>> Any way to make them start nicer, to begin with?
>> 
[...]
> I wouldn't be surprised, though, if your problem is either not CPU related
> or caused by a misconfiguration if it really is. Please someone correct me
> if I'm wrong, but a 1.3 GHz Duron doesn't sound slow enough by far to me to
> be legitimately overloaded by 2 backups. 512MB of memory maybe, but renicing
> wouldn't help then, would it?
>
> Have you tried what happens, if you only run *one* backup at a time
> ("$Conf{MaxBackups} = 1;")?
>
>   
Good recommendations, Holger.  I would add that "nice"ing a process 
only changes its CPU scheduling priority; it does not in any way modify 
its hard disk activity or DMA priority, so until the original poster 
understands what exactly makes the server slow, he's shooting in the 
dark.  A busy hard drive usually makes a system feel slower than a busy 
CPU process, because each disk access requires a 6-10ms seek minimum, 
plus streaming the data into RAM, depending on what other processes are 
doing.

Personally, I dedicate a Pentium Pro 200MHz with a whopping 128MB of 
RAM, and it backs up about 130GB across four machines, all starting at 
about the same time, using rsyncd on 3 and SMB on 1.  It's true the CGI 
interface is slow to respond during two simultaneous backups, but 
otherwise it's usable.  I wouldn't put another app on that machine, 
though, for obvious reasons.

IMHO, the original poster would do well to diagnose which bottleneck is 
the culprit before attempting to fix it.  My suggestion for reducing 
CPU usage further is to drop compression to 0; that should be about the 
only thing BackupPC spends much CPU time on during a backup.  But my 
PPro 200 can handle level 3 compression; a >1GHz Duron should manage it 
too.
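
For reference, both knobs live in config.pl (or a per-PC override); a 
minimal sketch:

$Conf{MaxBackups}    = 1;   # serialize dumps, per Holger's suggestion
$Conf{CompressLevel} = 0;   # 0 disables compression entirely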

JH



Re: [BackupPC-users] Backup successful - now to exclude

2006-12-19 Thread Jason Hughes

> I'm wondering now how to exclude things like "/proc" globally and per-PC.
>   
You cannot exclude something globally and then exclude more per-PC.  
The per-PC settings simply override whatever was set globally, since 
each is just setting a Perl variable.  I suppose you could actually 
write Perl code that pushes more directories onto the array (a sketch 
follows), but it would get cryptic very fast.
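
For instance (the paths are just examples, and this assumes the global 
value is an array ref; BackupPC reads the per-PC config after the main 
one, so the global value is already in $Conf when the push runs):

# Global config.pl:
$Conf{BackupFilesExclude} = ['/proc', '/tmp'];

# Per-PC config.pl:
push @{ $Conf{BackupFilesExclude} }, '/var/cache';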

Look into setting something globally if you have several machines that 
share a similar set of excludes, and override where different 
configurations are needed.  The documentation explains how to write 
per-PC configs.  If you're using 3.0.0beta, you can actually edit the PC 
config straight from the CGI interface.
> Also wondering how through the CGI interface I can have "normal" users who 
> can view/start/restore their backups while different from the "admin" users 
> (my terminology may be wrong here).
>   
You have to create multiple 'users' in your auth file for the CGI to 
allow people to log in.  Once you have that, you assign these users to 
their respective machines in the BackupPC hosts file.  This way, when 
one of those users accesses the CGI pages, it requests them to log in... 
which they do, and are presented with only the machines they are 
associated with in hosts.
> Can't seem to find a config variable other than CgiAdminUsers available.
>   
This variable only tells what users should have access to everything.  
You set the per-PC owners in the hosts file.
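
The hosts file is just whitespace-separated columns; something like 
this (usernames invented for the example, extra owners comma-separated 
in the moreUsers column):

host        dhcp    user    moreUsers
mercury     0       alice
venus       0       bob     carol,dave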

Hope this helps,
JH



Re: [BackupPC-users] rsync error ssh_askpass w/ cygwin

2006-12-14 Thread Jason Hughes
From what I can tell, all that does is insert and remove registry 
entries in the Windows services list.  cygrunsrv has separate commands 
for stopping and starting services, and I'm not sure (without testing 
it) whether removing a service also stops a running instance of it.

The definitive way to tell if a service is running is to open 
Settings/ControlPanel/AdministrativeTools/Services and scroll down until 
you see 'rsyncd' and a status next to it.  Right click on it to manually 
start/stop the service.  This definitely works.

Re: File::RsyncP, I think that is the most recent version.

Hope this helps,
JH

Jim McNamara wrote:
> I used:
>
> cygrunsrv --remove rsyncd
>
> and then checked with:
> cygrunsrv --list
>
> which only showed sshd running. I still get the chroot error, and it
> seems that rsync not running should produce an error long before
> anything about chroot. Since cygwin shell seems to be crap (no
> /etc/init.d/rsyncd) and ps aux shows nothing more than bash running,
> how can I be certain rsyncd is stopped?
>
> I had updated the server to File-RsyncP-0.68 when I started this last
> week. I think that is current enough?
>
> Thanks again,
> Jim
>
>
> On 12/14/06, Jason Hughes <[EMAIL PROTECTED]> wrote:
>> You may need to stop/restart the rsyncd service to make it read the
>> rsyncd.conf file on windows.
>>
>> I wanted to mention that there was some bug that I ran into (you're not
>> seeing it yet) when the backuppc was using protocol version 26 and
>> windows running rsyncd.  You might want to update the server's
>> File::RsyncP in cpan.
>>
>> JH
>>
>>
>> Jim McNamara wrote:
>> > And the new error makes even less sense to me,
>> >
>> > Contents of file /var/lib/backuppc/pc/mas90-server/XferLOG.bad.z,
>> > modified 2006-12-14 22:12:40
>> >
>> > Connected to mas90-server:873, remote version 29
>> > Negotiated protocol version 26
>> > Error connecting to module UPC at mas90-server:873: chroot failed
>> > Got fatal error during xfer (chroot failed)
>> > Backup aborted (chroot failed)
>> >
>> > 
>> >
>> > I don't have chroot enabled anymore, it is completely absent from the
>> > rsyncd.conf on the winbox.
>> >
>> >
>> >
>>
>



Re: [BackupPC-users] rsync error ssh_askpass w/ cygwin

2006-12-14 Thread Jason Hughes
You may need to stop/restart the rsyncd service to make it read the 
rsyncd.conf file on windows.

I wanted to mention that there was some bug that I ran into (you're not 
seeing it yet) when the backuppc was using protocol version 26 and 
windows running rsyncd.  You might want to update the server's 
File::RsyncP in cpan.

JH


Jim McNamara wrote:
> And the new error makes even less sense to me,
>
> Contents of file /var/lib/backuppc/pc/mas90-server/XferLOG.bad.z,
> modified 2006-12-14 22:12:40
>
> Connected to mas90-server:873, remote version 29
> Negotiated protocol version 26
> Error connecting to module UPC at mas90-server:873: chroot failed
> Got fatal error during xfer (chroot failed)
> Backup aborted (chroot failed)
>
> 
>
> I don't have chroot enabled anymore, it is completely absent from the
> rsyncd.conf on the winbox.
>
>   
>



Re: [BackupPC-users] Backuppc Win2k Errors

2006-12-14 Thread Jason Hughes
I had this happen to me.  In my case, I had an old version of 
File::RsyncP.  If you go into cpan and type 'install File::RsyncP', it 
will tell you whether you are up to date.  The older module version 
(0.66, I think) had a bug in it; I recall the current one is 0.68 or 
0.69.

Adjusting the timeout will not change whether the protocol has locked 
up.  It'll just delay how long it waits before killing the backup.

One other thing to look for: I actually had TWO versions of Perl 
installed on my machine, which made troubleshooting harder.  CPAN would 
update one, but BackupPC was using the other, so even though I thought 
I had updated, I really had not.  Locate all the copies of perl on your 
system and make sure you only have one.
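
A couple of one-liners make that check quick.  Run the second one with 
the interpreter named on BackupPC_dump's #! line (/usr/bin/perl below 
is just a stand-in):

# every directory on PATH that holds a perl binary
perl -le 'print "$_/perl" for grep { -x "$_/perl" } split /:/, $ENV{PATH}'

# which File::RsyncP a given interpreter actually loads
/usr/bin/perl -MFile::RsyncP -le 'print File::RsyncP->VERSION'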

Hope this helps,
JH

Byron Trimble wrote:
> All,
>
> I have Backuppc running on Linus and I have a Win2k Client running rsyncd.
> The rsync starts fine, but it gets to a point where it doesn't write to the
> log file for 2 hours, then it errors out with:
>
> [2476] rsync: writefd_unbuffered failed to write 4092 bytes [sender]:
> Software caused connection abort (113)
> 2006/12/14 12:48:41 [2476] rsync error: error in rsync protocol data stream
> (code 12) at io.c(1119) [sender=2.6.8]
>
> I have changed the Timeout parameter also. Nothing seems to be happening
> during the 2 hours.
>
> Thanks,
>
> Byron J. Trimble
> Technical Specialist/Systems Administrator
> 407-515-8613
> [EMAIL PROTECTED]
> Skype ID: btrimble
>
>
> -Original Message-
> From: Diaz Rodriguez, Eduardo [mailto:[EMAIL PROTECTED]
> Sent: Thursday, December 14, 2006 5:27 AM
> To: Byron Trimble; Backuppc-Users (E-mail)
> Subject: Re: [BackupPC-users] Backuppc Win2k Errors
>
>
> Did you use the SAME version of rsyncd in all systems? rsync --version
>
> On Wed, 13 Dec 2006 11:15:39 -0500, Byron Trimble wrote
>   
>> All,
>>
>> This the error message that I'm getting when trying backup my Win2k server
>> with Backuppc using rsyncd:
>>
>> rsync: writefd_unbuffered failed to write 4092 bytes [sender]: Software
>> caused connection abort
>> 
>



Re: [BackupPC-users] Per PC config files issue

2006-12-12 Thread Jason Hughes
For what it's worth, I started with 2.1.2pl2 stable and had per-machine 
configs working fine for each machine.  When I installed 3.0.0beta3 as 
an upgrade OVER the existing install, it also worked fine.  Maybe the 
install script behaves differently for an upgrade than for a fresh 
install?

I seem to remember reading that the config files changed for 3.0.0 to be 
/pc/machinename.pl rather than /pc/machinename/config.pl?  For sure, 
though, I didn't move or rename any files by hand and it works for me.

If you want to use the newest beta, you could try installing the stable, 
then update to the beta over your install...

JH

JW Smythe wrote:
>   Ahhh, so I'm not crazy. :)
>
>   I posted the same thing a few days ago.  I was trying to do a fresh
> install with the beta, which made me think I was doing something
> stupid.  I reinstalled with the stable version and it's all working
> for me now.
>



Re: [BackupPC-users] backing up more directory's

2006-12-12 Thread Jason Hughes
scartomail wrote:
> Hi Everyone,
>  
> Let's say I've got this in my rsyncd.conf on my Windows box:
> [Foto]
> path = E:/FOTO
>  
> Is there any way to add more directories to the path variable?
It wouldn't make much sense to do that, because multiple paths would 
then need to be "merged" into a single view when requesting files from 
rsyncd.  Instead, add multiple shares to rsyncd.conf on the client, 
e.g.:

[Foto]
path = E:/FOTO
...

[MorePaths]
path = E:/MORE

[Others]
path = C:/OTHERS

Then the BackupPC per-machine config file for this machine can list 
multiple shares to connect to and back up, as sketched below.
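
On the BackupPC side, the per-machine config would then look roughly 
like this (module names matching the rsyncd.conf above):

$Conf{XferMethod}     = 'rsyncd';
$Conf{RsyncShareName} = ['Foto', 'MorePaths', 'Others'];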

> I do know how to do it with more of these "directivs" and edit the 
> *.pl file on the server.
> But I was just wondering if it could be done just by the client.
However, there is another way to do it purely on the client...  Search 
for the term 'junction point' and look for the utility from 
Sysinternals that creates them (with some restrictions on file system 
and OS version, I think).  It basically lets you create an empty 
subdirectory in Windows and then make a symbolic link from another 
drive/path to that empty directory -- a kind of hacky version of 
symbolic links a la Unix.  But it means you can mount additional files 
and directories inside your existing share without changing anything on 
the server, or even in the client's rsyncd.conf.  I would be careful 
not to create directory-structure loops, or all hell breaks loose.  :-)

JH



[BackupPC-users] Rsync errors, sig=ALRM

2006-12-05 Thread Jason Hughes
I have a Win2k box running the rsyncd package.  It is on an 802.11g 
link (about 1MB/s throughput when copying via Windows shares manually, 
but closer to 350k over rsync).  Thus it takes about 40 or so hours to 
back up the system.  I've taken to excluding tons of stuff just to get 
a full backup to complete; then, with each incremental, I remove one or 
two directories from the exclude paths to introduce those files to the 
backup pool.  Painful.

Well, I put a directory with a few very large files back in, and the 
backup started failing at around the 23-hour mark.  Mind, I've extended 
the ClientTimeout to 72.  Here's what the rsyncd.log has in it:

2006/12/05 08:37:42 [6824] rsync: writefd_unbuffered failed to write 
4092 bytes [sender]: Connection reset by peer (104)
2006/12/05 08:37:42 [6824] rsync error: error in rsync protocol data 
stream (code 12) at io.c(1119) [sender=2.6.8]
2006/12/05 08:56:25 [6900] connect from backup.flaredev.com (192.168.0.196)
2006/12/05 08:56:27 [6900] rsync on . from 
[EMAIL PROTECTED] (192.168.0.196)
2006/12/05 09:05:41 [6900] rsync: writefd_unbuffered failed to write 4 
bytes [sender]: Software caused connection abort (113)
2006/12/05 09:05:41 [6900] rsync error: error in rsync protocol data 
stream (code 12) at io.c(1119) [sender=2.6.8]

The server status says the program was interrupted by sig=ALRM, which 
I'm assuming is the client timeout.

Any ideas?

Thanks,
JH



Re: [BackupPC-users] Problems getting Samba connection to work - XP

2006-12-04 Thread Jason Hughes
Quickly try making "backup" an administrator and re-run the smbclient 
command.  If it's a permission thing, this will solve it for sure.  (In 
that case, you'll have to figure out what permission Backup Operators 
lack... perhaps access to C$ has been restricted or broken.  You could 
try deleting the C$ share and rebooting.  Or you might find this 
helpful: http://support.microsoft.com/kb/314984)


If being an administrator doesn't help, try using the actual 
administrator user to access C$.  If that doesn't work, there's got to 
be something wrong with the share, especially if other shares work.  I'm 
no SMB guru, though.


JH

Eric Snyder wrote:
Yes, I created a user called "backup" and they are a member of the 
backup operators group. I gave that user a password. I am not escaping 
the $ but use a command similar to what you show below and get 
"NT_STATUS_ACCESS_DENIED". Again, If I change the command from 
"/usr/bin/smbclient gandolf\\C$ -U backup" to "/usr/bin/smbclient 
gandolf\\Eric -U backup" it connects to the Eric folder just fine.


I don't know that much about sharing and networking but it's like the 
default share on "C" is messed up.


What next?

Jason Hughes wrote:

Eric,

You may need to do two things... Are you creating a user on the 
Windows box that is a member of the backup operators group?  You can 
either do that, or use the administrator account.  In either case, 
you will probably want to use a username that exists on the Windows 
machine that *requires a password*.  Using an account that has a 
blank password may work only for certain folders, but fail when 
accessing others, inexplicably.


Second, don't escape the $.  I just did this:
/usr/bin/smbclient dev1\\C$ -U jhughes

It worked fine.  However, if I use a username that doesn't exist on 
the windows machine, it gives NT_STATUS_ACCESS_DENIED.


Good luck,
JH

Eric Snyder wrote:
Using the following commands in a terminal su'd as backuppc the 
following commands work or don't work:

*Works:*
/usr/bin/smbclient -L gandolf\\ -U backuppc
This produces a list of shares on the XP machine gandolf.

*Also Works:*
/usr/bin/smbclient gandolf\\Eric -U backuppc
This connects to the share Eric on the XP machine gandolf.

*But does not work (frustrated Gr):*
/usr/bin/smbclient gandolf\\C/$ -U backuppc
Gives me "NT_STATUS_BAD_NETWORK_NAME"

/usr/bin/smbclient gandolf\\C\$ -U backuppc
Gives me "NT_STATUS_ACCESS_DENIED"

How to fix?
  


Re: [BackupPC-users] Installation on Fedora Core 5 login to CGI pagetrouble.

2006-12-04 Thread Jason Hughes
Are you missing a double-quote on the AuthName line?  That might confuse 
the parser, causing who knows what problems.

JH

Krsnendu dasa wrote:
> AuthName "BackupPC
>
>   
>



Re: [BackupPC-users] rsyncd exclude

2006-12-04 Thread Jason Hughes
I had no luck getting rsyncd on Windows to work with spaces in 
filenames through the config file.  I resorted to using the 8.3 
filenames instead, e.g.:
secrets file = c:/Progra~1/rsyncd/rsyncd.secrets

The easy way to find them is to do a "dir /x" and you get both the long 
and short names.

Hope this helps,
JH

Algobit wrote:
> I try to use the exclude directive in rsyncd.conf on my windows machine
>
> [docs]
> # 
> # 
> #
> path = C:/Documents and Settings/ENRICO/Documenti
> exclude = C:/Documents and Settings/ENRICO/Documenti/Macchine virtuali
>
> but the directory isn't excluded.
>
> I have to use in config.pl $Conf{BackupFilesExclude} ?
>
>   



Re: [BackupPC-users] Problems getting Samba connection to work - XP

2006-12-04 Thread Jason Hughes

Eric,

You may need to do two things... Are you creating a user on the Windows 
box that is a member of the backup operators group?  You can either do 
that, or use the administrator account.  In either case, you will 
probably want to use a username that exists on the Windows machine that 
*requires a password*.  Using an account that has a blank password may 
work only for certain folders, but fail when accessing others, inexplicably.


Second, don't escape the $.  I just did this:
/usr/bin/smbclient dev1\\C$ -U jhughes

It worked fine.  However, if I use a username that doesn't exist on the 
windows machine, it gives NT_STATUS_ACCESS_DENIED.


Good luck,
JH

Eric Snyder wrote:
Using the following commands in a terminal su'd as backuppc the 
following commands work or don't work:

*Works:*
/usr/bin/smbclient -L gandolf\\ -U backuppc
This produces a list of shares on the XP machine gandolf.

*Also Works:*
/usr/bin/smbclient gandolf\\Eric -U backuppc
This connects to the share Eric on the XP machine gandolf.

*But does not work (frustrated Gr):*
/usr/bin/smbclient gandolf\\C/$ -U backuppc
Gives me "NT_STATUS_BAD_NETWORK_NAME"

/usr/bin/smbclient gandolf\\C\$ -U backuppc
Gives me "NT_STATUS_ACCESS_DENIED"

How to fix?
  


Re: [BackupPC-users] dump failed: can't find Compress::Zlib

2006-12-04 Thread Jason Hughes
You can do this:
cpan
install Compress::Zlib

It should either fetch and compile the perl module, or tell you that it 
is already up to date.
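
If cpan says it's up to date but BackupPC still complains, check which 
interpreter and which copy of the module you're actually getting -- the 
same two-Perls trap I fell into.  Run it with the perl named on 
BackupPC_dump's #! line (/usr/bin/perl here is just a stand-in):

/usr/bin/perl -MCompress::Zlib -le 'print Compress::Zlib->VERSION, "  ", $INC{"Compress/Zlib.pm"}'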

JH

Ariyanto Dewo wrote:
> Hi all,
> Thanks for the response to my last message, 'backup backuppc'; I was
> able to figure out how to make it work.  But now I have a problem with
> backuppc.  It complains it can't find Compress::Zlib, even though
> perldoc says it is installed.  This problem started just a couple of
> days ago.  What should I do about it?
> Thanks for the enlightenment.
>
> Rgds,
> -dewo-
>
> /"\
> \ / ASCII RIBBON CAMPAIGN
>  X  ~ AGAINST HTML MAIL ~
> / \



Re: [BackupPC-users] Problem with Web client 2.1.2

2006-12-04 Thread Jason Hughes

Fabio,

Usually, when it complains that Apache can't connect to BackupPC, it's 
because BackupPC isn't running.  Log in as root, run 'service backuppc 
restart', and watch what it does.  It should shut down first, then 
start.  I expect it to say 'failed' on the shutdown step, because I 
don't think it's running.  Then it should start, and your web interface 
will work.

However, if it says 'ok' when shutting down, you've got other problems: 
files in the wrong place, a user permissions issue (httpd needs to run 
as a user that can access BackupPC's files), etc.

You can also check whether it is running by doing a 'ps ux' and looking 
for programs running as your backuppc user.


Good luck,
JH

Fabio Parodi wrote:
The problem I have is about the BackupPC Web client. I'm pretty sure I 
did something wrong during the installation, but I can't fix it now.

As soon as I use my Firefox browser to  access:
http://localhost/cgi-bin/BackupPC_Admin
I get the BackupPC frame window, but with the following error:
"Error: Unable to connect to BackupPC server:



Re: [BackupPC-users] Upgrade to 3.0.0beta2

2006-11-30 Thread Jason Hughes
Craig Barratt wrote:
> Jason writes:
>
>   
>> Since I finally got 2.1.2pl2 working, I decided to upgrade to 3.0.0beta2 
>> (a glutton for punishment, I am).  Everything went swimmingly until I 
>> tried to look at any logs or view the config files either for clients or 
>> the general system, via the CGI interface.
>>
>> 
> Are you using mod_perl?  If so, please restart apache.
>
>   
Doh!  Maybe that would be a good thing to put in the final reminders 
after running the configure script?  Or detect an upgrade and ask if it 
should restart Apache.  Everything works great now.

One thing, though.  I was running a really, really long backup (50+ 
hours due to slow link and large video files) and was about 40 hours 
through it when I started piddling around with the upgrade.  I wasn't 
going to start it until my backup finished, but I wanted to check out 
what the configure.pl script would do.  It A) detected the service was 
still running and stopped it for me (killing my backup in the process), 
then B) dropped out of the script to tell me to stop the service.  It'd 
be nice if it just detected and exited rather than killing anything.  
But I suppose I was being naughty and deserved what I got.  :-)

Thanks,
JH



[BackupPC-users] Upgrade to 3.0.0beta2

2006-11-29 Thread Jason Hughes
Since I finally got 2.1.2pl2 working, I decided to upgrade to 3.0.0beta2 
(a glutton for punishment, I am).  Everything went swimmingly until I 
tried to look at any logs or view the config files either for clients or 
the general system, via the CGI interface.

Here's what I get from the web server when looking at the LOG file:

Global symbol "$LogDir" requires explicit package name at 
/home/backuppc/lib/BackupPC/CGI/View.pm line 99.
Compilation failed in require at /var/www/cgi-bin/BackupPC_Admin line 107.


Here's what I get when trying to view a client's config file:

Undefined subroutine &BackupPC::CGI::View::action called at 
/var/www/cgi-bin/BackupPC_Admin line 109.


I traced through the logic and it looks like it should work, at least 
requiring the View.pm package.  I checked the path and dates of all the 
files, and they were definitely installed a few hours ago.  Any ideas?

Thanks,
JH



Re: [BackupPC-users] unable to login with normal account

2006-11-28 Thread Jason Hughes
scartomail wrote:
> I cannot log in with a normal user.
> The only user I can log in with at http://backuppc-server/backuppc/ is 
> "backuppc"?
> [...] 
> Still unable to log on to backuppc.
> I did notice the file /etc/backuppc/htpasswd, in which only the user 
> "backuppc" is mentioned.
> Here might be the problem!!
> Only the password is scrambled, and I don't know how to add a user to 
> this file with a scrambled password.
>  
> I could not find anything on the htpasswd file in the documentation, so 
> I am unable to RTFM myself out of this one.
>  
> Am I on the right track here?
> Is there something I missed?
Try:

htpasswd /etc/backuppc/htpasswd edo

It will prompt for edo's password (twice) and store the hash for you; 
add the -b switch if you'd rather pass the password on the command 
line.  Check out the htpasswd command's man page for more details.

JH



[BackupPC-users] Segfault with Rsync or RsyncD

2006-11-14 Thread Jason Hughes
I was using rsync and was consistently getting BackupPC_dump to 
segfault, leaving an orphaned child process doing the rsync that would 
never terminate.  So I switched to rsyncd.  Same business, except now I 
have an rsyncd log on the client machine.  Segfaults.

So I upgraded my File::RsyncP on all my machines (I only needed to do 
this on the backup server, right?) and restarted all the xinetd 
processes, etc.  Now I'm getting segfaults earlier.  Is anyone else 
experiencing this, or is it just my luck?  Has anyone else upgraded to 
File::RsyncP 0.66 yet to test it?

Thanks,
JH



[BackupPC-users] Seg fault

2006-11-10 Thread Jason Hughes
Using 2.1.2pl2:

I recently switched from rsync (it was getting a few hundred megs into 
a backup, then giving me an (Unable to read 4 bytes) error and not even 
keeping a partial) to rsyncd.  I got this working earlier today and 
kicked off a manual full backup, i.e. BackupPC_dump -v -f client1.  It 
ran for a little while, downloaded some files, then gave a Segmentation 
Fault.  Here's the tail end of it:
...
  create   644   0/07272 boot/grub/ufs2_stage1_5
  create   644   0/06612 boot/grub/vstafs_stage1_5
  create   644   0/09308 boot/grub/xfs_stage1_5
Segmentation fault

I re-ran it a couple of times and got a segfault in the same place 
twice, and once in a different place further on.  I tried running 
BackupPC_dump in the perl debugger and it seemed to run quite a ways 
until it just locked up.

For what it's worth, to deal with BackupPC, my server and clients have 
all been upgraded to the newest Perl modules and newest rsync.  There 
was no error in rsync.log, just "building file list".

Any suggestions on how to proceed?  I've basically never gotten any of 
my Linux boxes through a single backup in over a month.  It's very 
frustrating, given the lack of debugging aids, and because files do 
transfer but no partials are ever registered, so it has to start over 
each time.

Thanks,
JH



Re: [BackupPC-users] ok iam lost

2006-11-10 Thread Jason Hughes
Make sure you have set an admin user to be the user name that should 
have complete access to BackupPC from the CGI:

$Conf{CgiAdminUsers} = 'panther';

Without this set, anyone you log in as is only a user, and can only see 
the machines that the hosts file declares to be associated with that user.

Hope this helps,
JH

William McInnis wrote:
> ok so I have the GUI up, and I don't have half of the options you have 
> on your screen shots.  I am missing:
>
> config file 
> hosts file
> current queues 
> log file
> old logs 
> email summary
>
> is there a good how-to on this software?  because I am so lost.  
> please, someone help me
>
>   



Re: [BackupPC-users] SMB backup failing

2006-11-08 Thread Jason Hughes






Craig Barratt wrote:

  Jason writes:

  
  
This took 40 hours to run, and backed up a lot, but when it got to a 12gb file, it choked.

Here's the XferLog Errors:

Error reading file \video\2005\video2005raw.avi : Call timed out: server did not respond after 2 milliseconds
Didn't get entire file. size=12776697964, nread=-1340081072

  
  
The first problem is the timeout.  That could be due to anti-virus SW
that completely checks a file (or directory) before you can open the
file.  You can increase the timeout but you need to recompile smbclient.
  

Thanks.  I'm not running any kind of antivirus software, but the link 
is a wireless one, so it's possible there are other sources of 
interference that make it impossible to transfer data for a while.  I 
have a server that automatically checks for link loss, and I get an 
email every so often saying it went down; by the next check 15 minutes 
later, it's back up.  So there are intermittent, brief, complete 
failures of the link.

Unfortunately, when smbclient times out, it may actually be that the 
server is completely unreachable, and bumping up the timeout won't fix 
that.  It would be nice if, on a failed transfer, a second attempt were 
queued that starts where the first left off.  I don't know enough about 
backuppc to say whether that involves an interceding link step to put 
the correctly downloaded files into a pool, or a special step that 
manually tries to reget the partial file before moving it into the 
/pc//new hierarchy, or what.  Even automatically excluding the partial 
file on the second try, and sending an email that it had to restart and 
skip the file, would be preferable to failing with a partial backup.  
In fact, because of this one large file, nothing on my PC beyond that 
file will ever get into the backup.  A slow internet connection, or a 
poor wireless connection, will have the same problem with very large 
files.

Is smbclient based on UDP or TCP?  If it's UDP, I could raise the
server timeout to something really big, like 10 minutes or so, and if
my link comes back up it'll work.  With TCP, I'm just screwed, right?


  
The second problem is a 32-bit roll-over bug in smbclient that only happens
after you get the "Didn't get entire file" error when smbclient tries to pad
the tar file.  A bug and fix are filed on samba's bugzilla, but I doubt the
fix has been applied yet.

  


Ok, so it's a totally innocuous display bug.  I'll ignore it.

Thanks,
JH




Re: [BackupPC-users] SMB backup failing

2006-11-08 Thread Jason Hughes




I did get it to transfer 10gb of the 12gb file manually using 
smbclient.  For whatever reason, I guess there was a 20 second gap in 
the transfer and it timed out.  I had to shut down smbclient, then open 
it again to establish a good connection to the server, and I'm using 
'reget' to fetch the remainder of the file.  It apparently would have 
"worked", had the timeout not occurred.

Thanks,
JH

Call timed out: server did not respond after 2 milliseconds listing
\video\2005\\*
Error in dskattr: Call timed out: server did not respond after 2
milliseconds

[EMAIL PROTECTED] wrote:

  Can you manually use smbclient to transfer this 12gig file?

  
  
This took 40 hours to run, and backed up a lot, but when it got to a 12gb file, it choked.


  
  







Re: [BackupPC-users] SMB backup failing

2006-11-07 Thread Jason Hughes




Les Stott wrote:

  
  
  
I thought maybe excluding that particular file would help, but exclusions aren't
working well for me.  I tried to exclude like this:
$Conf{BackupFilesExclude} = 
  ['Documents and Settings/Administrator/Local Settings/Temporary Internet Files/*'];
And it backed up all 20,000 files in that directory anyway.

  
  
Remember though that exclusions are based on directory paths from your
client share name.
  
i.e. if your client smb share is the c drive then full paths of the
excludes should be
  
  $Conf{BackupFilesExclude} = ['/Documents and Settings/*/Local Settings/Temporary Internet Files/*'];
  
if you share at a different level then your exclude paths should change
accordingly, relative to the share.
  
The above should be all on one line; I assume it's just the mail client
that wrapped it to the second line.

In Perl, it shouldn't matter, so long as you don't break the string
portion, right?

However if you do split the 
lines then try the syntax like so
  
$Conf{BackupFilesExclude} = {
   ['/Documents and Settings/*/Local Settings/Temporary Internet
Files/*'],  # these are for 'c' share
    };

The above provides a hash of string arrays (with only one string in the 
array), which is a different construct and has different semantics.  
The docs say that if you back up multiple shares, you can supply one 
exclusion array per share this way, keyed by share name.
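
In other words, the two working shapes are roughly these ('C' standing 
in for whatever your share is actually named):

# Array form: the same excludes apply to every share on this host.
$Conf{BackupFilesExclude} =
    ['/Documents and Settings/*/Local Settings/Temporary Internet Files/*'];

# Hash form: one exclude array per share, keyed by share name.
$Conf{BackupFilesExclude} = {
    'C' => ['/Documents and Settings/*/Local Settings/Temporary Internet Files/*'],
};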

Thanks Les, I'll try putting a leading '/' on my excludes.  I am using 
a full C-drive share, so I figured a relative path would work fine; 
maybe they need to be rooted with a leading slash.  I also didn't know 
you could use multiple wildcards.  I'll have to try that too.

Thanks,
JH




Re: [BackupPC-users] SMB backup failing

2006-11-07 Thread Jason Hughes




I'm using it to download the file right now.  It's over a slower 
connection (475k/s sustained rate), so it will take at least 7 hours 
even if it runs at max speed the whole time.  I'll let it run and see 
what happens.  Good suggestion.

Thanks,
JH

[EMAIL PROTECTED] wrote:

  Can you manually use smbclient to transfer this 12gig file?

  
  
This took 40 hours to run, and backed up a lot, but when it got to a 12gb file, it choked.


  
  







[BackupPC-users] SMB backup failing

2006-11-07 Thread Jason Hughes




Hi all.  Nobody has responded to my other messages requesting help, so 
I'm trying again.  I'm using version 2.1.2.

I have one Windows machine that backs up flawlessly (other than 
unavoidable NT_SHARING_VIOLATIONs).  I have another that fails when it 
gets to a very large file.  Because it had failed before, I increased 
the ClientTimeout to 72 (10x higher than default).  I deleted 
everything about the failing host and started again.

Contents of file /var/backuproot/pc/mothership/LOG,
modified 2006-11-06 18:52:34 
2006-11-05 02:46:59 full backup started for share CDrive
2006-11-06 18:52:28 Got fatal error during xfer (Unexpected end of tar archive)
2006-11-06 18:52:33 Backup aborted (Unexpected end of tar archive)
2006-11-06 18:52:34 Saved partial dump 0

This took 40 hours to run, and backed up a lot, but when it got to a 12gb file, it choked.

Here's the XferLog Errors:

Error reading file \video\2005\video2005raw.avi : Call timed out: server did not respond after 2 milliseconds
Didn't get entire file. size=12776697964, nread=-1340081072
This backup will fail because: Call timed out: server did not respond after 2 milliseconds opening remote file \video\2005\v (\video\2005\)
Call timed out: server did not respond after 2 milliseconds opening remote file \video\2005\v (\video\2005\)
Call timed out: server did not respond after 2 milliseconds listing \video\2\*
Call timed out: server did not respond after 2 milliseconds opening remote file \video\h (\video\)
Call timed out: server did not respond after 2 milliseconds opening remote file \video\h (\video\)
Call timed out: server did not respond after 2 milliseconds opening remote file \video\h (\video\)
Call timed out: server did not respond after 2 milliseconds opening remote file \video\p (\video\)
Call timed out: server did not respond after 2 milliseconds opening remote file \video\p (\video\)
Call timed out: server did not respond after 2 milliseconds listing \video\v\*
Call timed out: server did not respond after 2 milliseconds listing \video\w\*
Call timed out: server did not respond after 2 milliseconds opening remote file \video\w (\video\)
Call timed out: server did not respond after 2 milliseconds listing \video\w\*
Call timed out: server did not respond after 2 milliseconds listing \W\*
Call timed out: server did not respond after 2 milliseconds listing \w\*
[ skipped 9 lines ]
tarExtract: Unexpected end of tar archive (tot = 1048576, num = 859648, posn = )
tarExtract: Removing partial file video/2005/video2005raw.avi
tarExtract: BackupPC_tarExtact aborting (Unexpected end of tar archive)
tarExtract: Done: 1 errors, 17145 filesExist, 31438592171 sizeExist, 30715709004 sizeExistComp, 86172 filesTotal, 51652355017 sizeTotal
Got fatal error during xfer (Unexpected end of tar archive)
Backup aborted (Unexpected end of tar archive)


This looks like there's a 32 bit quantity that is rolling over and crashing the BackupPC server process.

I thought maybe excluding that particular file would help, but exclusions aren't
working well for me.  I tried to exclude like this:
$Conf{BackupFilesExclude} = 
  ['Documents and Settings/Administrator/Local Settings/Temporary Internet Files/*'];
And it backed up all 20,000 files in that directory anyway.

Any help would be appreciated.
Thanks,
JH






Re: [BackupPC-users] BackupPC Errors

2006-11-02 Thread Jason Hughes




Yes.  I did:
[EMAIL PROTECTED] ~]$ ssh [EMAIL PROTECTED] echo \$USER
root

JH

Les Mikesell wrote:

  On Thu, 2006-11-02 at 15:24, Byron Trimble wrote:
  
  
I'm using 'rsync' and I have setup all the ssh keys and tested them.

  
  
Did you test them running as the backuppc user on the server,
connecting as root on the target? 

  





[BackupPC-users] Fixed! tree connect failed: NT_STATUS_ACCESS_DENIED

2006-11-02 Thread Jason Hughes




Using SMB to connect to my Windows 2000 machine with an account that 
has no password configured works fine.  However, this is not true on a 
(freshly installed) Windows XP.  If an account has no password, XP will 
deny access to many, but not all, directories with an error that looks 
like this:

tree connect failed: NT_STATUS_ACCESS_DENIED

If the account is not password-protected, certain folders are denied 
(apparently at random, but most system folders and some user folders).  
I did nothing more than add and remove a password, then use smbclient 
to log in and out as that user and "cd \windows", to reproduce this.  
It appears to be a Windows XP security "feature".  This explains why my 
backups were missing a huge chunk of data but otherwise appeared to 
work, when they were actually partially failing.
Does anyone have any idea what the rsync error is that I asked about
below?

Thanks,
JH

Jason Hughes wrote:

  
I'm having trouble getting backups to work with rsync.  I have two
hosts using smb that are working (sort of), and two with rsync that are
not.  Here's the log file I get (the machine name is 'sol'):

[...]

The status is:
  
  

  
  sol   panther   0   0.00   0   idle   (Unable to read 4 bytes)
  

  
  
For whatever reason, it always fails with '(Unable to read 4 bytes)',
which is weird, because the ./sol/new directory has 502MB of files in
it, yet there are no entries in the backups table at all.  My other
host has identical issues.  By the way, SSH keys are set up, so I have
tested the obvious things, and backups are mostly working.  But how do
I debug this?
  
Any ideas on where to start?
  





[BackupPC-users] Failure to backup over rsync

2006-11-02 Thread Jason Hughes




I'm having trouble getting backups to work with rsync.  I have two
hosts using smb that are working (sort of), and two with rsync that are
not.  Here's the log file I get (the machine name is 'sol'):

Contents of file /var/backuproot/pc/sol/LOG, modified
2006-11-01 12:21:42 
2006-11-01 12:21:42 full backup started for directory /


The per-machine config file is very straightforward, but maybe I'm
missing a setting?

Contents of file /var/backuproot/pc/sol/config.pl,
modified 2006-10-24 02:26:31 
$Conf{XferMethod} = 'rsync';
$Conf{RsyncShareName} = '/';
$Conf{BackupFilesExclude} = '/proc';
$Conf{FullPeriod} = 6.97;  # -1 or -2 to disable backups


The status is:


  

  sol   panther   0   0.00   0   idle   (Unable to read 4 bytes)

  


For whatever reason, it always fails with '(Unable to read 4 bytes)',
which is weird, because the ./sol/new directory has 502MB of files in
it, yet there are no entries in the backups table at all.  My other
host has identical issues.  By the way, SSH keys are set up, so I have
tested the obvious things, and backups are mostly working.  But how do
I debug this?

Any ideas on where to start?
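
My current theory: BackupPC speaks the rsync protocol over the ssh
pipe, so any stray output on stdout (a login banner, something echoed
from a shell profile) corrupts the stream, and the first symptom the
server would see is exactly a failed 4-byte length read.  So one thing
I'll verify is that the ssh command stays quiet.  I believe the stock
setting looks like this (quoting from memory, so check it against the
distributed config.pl):

# The -q keeps ssh's own diagnostics off the protocol stream; -x
# disables X11 forwarding.  $sshPath, $host, etc. are BackupPC's own
# substitution variables.
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';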

Thanks,
JH




[BackupPC-users] Partial failures

2006-10-27 Thread Jason Hughes
Hi all,

I've just got BackupPC set up for the first time on a CentOS 4.4 box 
using the provided RPMs, with mod_perl.  It was a real challenge, because 
it somehow seems to be using mod_perl2, whereas CentOS only has 
1.999xxx.  Very confusing.  At any rate, it's working with Apache 
running as user backuppc, on a box totally dedicated to it.  I've got 
ssh keys set up for two Linux boxes and tested them manually; that appears 
to work fine.  I also have it using smbclient to get to two Windows boxes 
(XP and 2000).  That also appears to work, though I'm not sure how, given 
it's not using any passwords to access my shares.

The problem I'm getting is that only one machine has had a complete backup 
succeed, and even its log for that 'success' has a number of entries like this:

NT_STATUS_ACCESS_DENIED opening remote file \boot.ini (\)
NT_STATUS_ACCESS_DENIED listing \Documents and Settings\All Users\Application Data\*
NT_STATUS_ACCESS_DENIED listing \Documents and Settings\All Users\DRM\*
NT_STATUS_ACCESS_DENIED listing \Documents and Settings\jhughes\*
NT_STATUS_ACCESS_DENIED listing \Documents and Settings\LocalService\*
NT_STATUS_ACCESS_DENIED listing \Documents and Settings\NetworkService\*
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\apps\hd_stress_test_bst514.zip (\download\archive\apps\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\apps\TweakUiPowertoySetup.exe (\download\archive\apps\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\apps\VirtualDub-1.6.16.zip (\download\archive\apps\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\apps\winamp53_lite.exe (\download\archive\apps\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\codecs\XviD-1.1.0-30122005.exe (\download\archive\codecs\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\dev\smartsvn-win32-setup-nojre-2_0_7.zip (\download\archive\dev\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\drivers\brother_7820inst.EXE (\download\archive\drivers\)
NT_STATUS_ACCESS_DENIED opening remote file \download\archive\drivers\logitech_setpoint310.exe (\download\archive\drivers\)
NT_STATUS_ACCESS_DENIED opening remote file \download\ASCII.pfb (\download\)
NT_STATUS_ACCESS_DENIED opening remote file \download\boxbackup-0.10.tgz (\download\)
NT_STATUS_ACCESS_DENIED opening remote file \download\deleteWindowsGenuineAdvantage\RemoveWGA.exe (\download\deleteWindowsGenuineAdvantage\)
NT_STATUS_ACCESS_DENIED opening remote file \download\LogiGamer.NET.2.2.0.zip (\download\)
NT_STATUS_ACCESS_DENIED opening remote file \download\Softphone-X-Lite_Win32_1003l_30942.exe (\download\)
NT_STATUS_ACCESS_DENIED opening remote file \media\frknmail.wav (\media\)
NT_STATUS_ACCESS_DENIED opening remote file \NTDETECT.COM (\)
NT_STATUS_ACCESS_DENIED opening remote file \ntldr (\)
NT_STATUS_SHARING_VIOLATION opening remote file \pagefile.sys (\)
NT_STATUS_ACCESS_DENIED listing \Program Files\*
NT_STATUS_ACCESS_DENIED listing \RECYCLER\S-1-5-21-507921405-436374069-725345543-1003\*
NT_STATUS_ACCESS_DENIED listing \System Volume Information\*
NT_STATUS_ACCESS_DENIED listing \WINDOWS\*


This was for the full backup.  It otherwise succeeded and transferred 
30GB, which I figured was good, but it's a far cry from the 90GB on the 
machine.  It looks like I was denied access to basically any file that 
my local user on that machine has added.  My SMB share is my whole C 
drive, and I think I'm using the administrator account, which seems to 
log in without a password.  Being a member of the administrators group, 
shouldn't it have access to everything?  Or am I misunderstanding how 
smbclient works, in that a password-less login has some access, but you 
need a password to get full access, or something bizarre like that?

Also, every backup I've run so far has become wedged.  Every one.  I 
let it run for two or three days, and the processes fall from about 50% 
CPU each (two concurrent max) to zero.  Any ideas?  As you can see, I'm 
having a tough time with it.  I like the interface, but I need to figure 
out what I've done to make it so unstable for me.


Thanks,
JH
