Re: [BackupPC-users] Advice on BackupPC

2007-05-16 Thread Holger Parplies
Hi,

Les Mikesell wrote on 16.05.2007 at 13:55:04 [Re: [BackupPC-users] Advice on 
BackupPC]:
> Vetch wrote:
> > I have a two site network [...] Our bandwidth is limited [...]
> > I want to backup my data from one site to the other...
> > In order to assess whether that would be do-able, I went to an 
> > exhibition of backup technologies.
> > One that caught my eye was a company called Data Domain, who claimed to 
> > de-duplicate data at the block level of 16KB chunks...
> > Apparently, all they send are the changed chunks and the schema to 
> > retrieve the data.
> 
> Backuppc can use rsync to transfer the data.  Rsync works by reading 
> through the file at both ends, exchanging block checksums to find the 
> changed parts.

the important part about this is that rsync compares a file with the version
in the reference backup (the last incremental of a lower level, or the full
backup). Consequently, a new file will be transferred in full even if an
identical file exists in the pool. De-duplication happens at the file level,
after the transfer.

As far as I know, rsync uses 2KB chunks of the file, so in some cases you may
need to transfer less data than with 16KB chunks. On the other hand, more
checksums will need to be transferred in the general case. rsync incremental
backups take file attributes into account (modification time, permissions
etc.) and only transfer apparently changed files, using block checksums as
with full backups.

> > Does it send the changed data down the line and then check to see if it 
> > already has a copy, or does it check then send?

In general, it sends data and then checks (on-the-fly, without creating a
temporary copy for existing files). With rsync, it is possible to cut down
bandwidth requirements by comparing against the previous version of the
respective file.

> > The other thing is, can BackupPC de-duplicate at the block level or is 
> > it just file level?
> > I'm thinking that block level might save considerable amounts of 
> > traffic, because we will need to send file dumps of Exchange databases 
> > over the wire...
> > ... Which I assume will mean that we've got about 16GB at least to copy 
> > every day, since it'll be creating a new file daily...

File level. That means you'll have a new file every day. Unless you happen
to have other files with identical contents, pooling won't gain you anything
for these files, though compression might.

> > On the other hand, would 16KB blocks be duplicated that regularly - I 
> > imagine there is a fair amount of variability in 16KB of ones and zeros, 
> > and the chances of them randomly reoccurring without being part of the 
> > same file, I would say are slim...

Well, for your database dumps, that would be sufficient, wouldn't it? If
you've got multiple copies of a 16GB database file and each differs only by
a few MB, that would leave a lot of identical blocks.

Considering we're talking about a Microsoft product, I wouldn't bet on the
dump format being especially convenient, though. They've probably got a
variable length header format just for the sake of defeating block-level
de-duplication strategies :-).

> > What do you think?
> 
> I think rsync will do it as well as it can be done.

For the transfer: yes - if the database dumps are always stored in the same
file. If you have a new file name each day (including the date, for
instance), then rsync won't help you at all.
For storage, the transfer method is irrelevant.

> You can test the transfer efficiency locally first to get an idea of how 
> well the common blocks are handled.

Correct. You can do this for single files (database dumps) or for the whole
file tree you want to back up. For your database dumps, rsync should also
give you a hint as to how much block-level de-duplication could save you. If
rsync can't speed up the transfer, de-duplication likely won't save any disk
space either.
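
A concrete way to run that test - a minimal sketch with hypothetical paths;
note that rsync skips its delta algorithm for local copies unless you pass
--no-whole-file:

    # day 1: seed the reference copy
    cp /dumps/exchange-monday.bak /tmp/rsync-test/exchange.bak

    # day 2: update it from the new dump and look at the statistics;
    # "Literal data" is what actually had to be sent, "Matched data"
    # is what was found unchanged at the destination
    rsync -av --no-whole-file --stats /dumps/exchange-tuesday.bak \
        /tmp/rsync-test/exchange.bak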


BackupPC is not difficult to set up. You could simply test how well it works
for you before deciding to spend money on a commercial product. BackupPC has
its limits which may make a commercial product the better choice for you.
But then, the commercial product probably also has its limits, and the
question is whether they are so well documented. If it's only the block-level
de-duplication, disk space might be cheaper than software.

Regards,
Holger

P.S.: For LVM snapshots, the problem is also that de-duplication takes place
  at file level.



Re: [BackupPC-users] Fwd: Incremental backup and SMB protocol don't work correctly

2007-05-16 Thread Jason M. Kusar
Jesús Martel wrote:
> -- Forwarded message --
> From: Jesús Martel <[EMAIL PROTECTED]>
> Date: 16-may-2007 22:32
> Subject: Re: [BackupPC-users] Incremental backup and SMB protocol
> don't work correctly
> To: Holger Parplies <[EMAIL PROTECTED]>
>
>
> I don't understand. If the file is transferred in the first incremental
> backup, why is it downloaded again? The file has not been modified. If
> the amount of data were larger (GBs), this would not be efficient.
>
>   

See here: 
http://backuppc.sourceforge.net/faq/BackupPC.html#item__conf_incrlevels_
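
In short (a sketch, assuming BackupPC 3.x defaults): with
$Conf{IncrLevels} = [1], every incremental is taken relative to the last
full, so a file that appeared after the full is fetched again on each
incremental until the next full. Increasing the levels makes each night's
incremental relative to the previous lower-level backup instead, e.g.

    $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];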

--Jason
> 2007/5/16, Holger Parplies <[EMAIL PROTECTED]>:
>   
>> Hi,
>>
>> Jesús Martel wrote on 16.05.2007 at 19:09:41 [[BackupPC-users] Incremental 
>> backup and SMB protocol don't work correctly]:
>> 
>>> Backup Summary:
>>> ===============
>>>
>>> Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days
>>> 0        full  yes     0      5/7  21:21  0.0            8.9  <= No files
>>> 2        incr  no      1      5/8  21:00  5.8            7.9  <= Added one file ~647MB
>>> 3        incr  no      1      5/9  21:00  5.9            6.9
>>> 4        incr  no      1      5/10 22:00  6.1            5.9
>>> 5        incr  no      1      5/11 22:00  9.4            4.9
>>> 6        incr  no      1      5/14 01:00  7.1            2.7
>>> 7        full  yes     0      5/15 01:00  5.5            1.7
>>> 8        incr  no      1      5/16 01:00  0.0            0.7
>>>
>>>
>>> File Size/Count Reuse Summary:
>>> ==============================
>>>                        Totals          Existing Files    New Files
>>> Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
>>> 0        full  1       0.0      0.00    0       0.0      2       0.0
>>> 2        incr  1       647.6    1.88    1       647.6    0       0.0
>>> 3        incr  1       647.6    1.84    1       647.6    0       0.0
>>> 4        incr  1       647.6    1.78    1       647.6    0       0.0
>>> 5        incr  1       647.6    1.14    1       647.6    0       0.0
>>> 6        incr  1       647.6    1.52    1       647.6    0       0.0
>>> 7        full  2       647.6    1.98    2       647.6    1       0.0
>>> 8        incr  0       0.0      0.00    0       0.0      0       0.0
>>>
>>> The new file is always downloaded by the server until the next full
>>> backup. Is that correct?
>> yes.
>>
>> Regards,
>> Holger
>>
>> 
>




Re: [BackupPC-users] Advice on BackupPC

2007-05-16 Thread Les Mikesell
Vetch wrote:

>> It can do either, depending on whether you use the tar, smb, or rsync
>> transfer methods.
>
> The Rsync method presumably from your previous comment would check then
> send...?

Yes - if a matching file exists in the previous backup, only the 
differences are sent.

>> I think rsync will do it as well as it can be done.  However, it is hard
>> to tell how much two different Exchange database dumps will have in
>> common.  Then there is the issue that you could reduce the size by
>> compressing the file but doing so will make the common parts impossible
>> to find from one version to another.  You can work around this by using
>> ssh compression or something like an openvpn tunnel with lzo compression
>> enabled, leaving the file uncompressed.
>
> I see - so you wouldn't compress the file, you'd compress the tunnel...
> Makes sense...

This takes some extra CPU work but otherwise it would be impossible to 
find the matching parts.
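
With BackupPC's rsync method, one way to get that effect - a sketch based on
the standard $Conf{RsyncClientCmd} setting, adding ssh's -C option so the
tunnel is compressed while the file itself stays uncompressed:

    $Conf{RsyncClientCmd} = '$sshPath -C -q -x -l root $host $rsyncPath $argList+';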

> Would it then still get compressed when stored at the other end?

Yes, in fact the backuppc side will be running a perl implementation of 
rsync that performs the comparison on the fly against the compressed 
copy (but pretends it is the uncompressed version to match the other end).

> So I would output a copy of the database to the same file name, and 
> rsync would just take the changes...
> I'll try it out...

Yes, though depending on the structure of the database dump and the changes 
each day, there may not be much in common.

> How well would that work for something like LVM snapshotting?
> I'm thinking of migrating my windows servers to Xen Virtual Machines on 
> LVM drives
> If I take a snapshot of the drive and then mount it somewhere, could I 
> get BackupPC to copy only the changed data as rsynch files?

Rsync will not work directly against devices so you'd have to make a 
file copy first.  Also, when constructing the destination file after 
differences are found you need room for 2 complete copies as the new 
version is built out of a combination of chunks from the old plus the 
transferred differences.  If I were going to try this, I'd probably dd 
the snapshot image and pipe it to split to break it up into some number 
of chunks first, then back up the directory of chunks.  I'm not sure 
what might be a good size, though.
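
Something along those lines - only a sketch, with made-up volume names and
an arbitrary chunk size:

    # snapshot the VM's logical volume (assumes a volume vg0/winvm exists)
    lvcreate --snapshot --size 2G --name winvm-snap /dev/vg0/winvm

    # image the snapshot and split it into fixed-size chunks for backuppc
    dd if=/dev/vg0/winvm-snap bs=1M | split -b 256m - /backups/winvm/chunk.

    # drop the snapshot once the chunk directory has been backed up
    lvremove -f /dev/vg0/winvm-snap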

> With regards to the storage - does it keep copies of all the versions of 
> the file that is backed up, with differences stored and are they 
> separated into chunks at that level, or are they stored as distinctive 
> files?

All files that are exactly identical are pooled into a single instance 
(so you might get lucky with the chunking approach if some parts are 
unchanged).  However, if there is any difference at all they are stored 
as different complete files.  Something like rdiff-backup might be 
better for huge files with small changes.

-- 
   Les Mikesell
[EMAIL PROTECTED]




Re: [BackupPC-users] Advice on BackupPC

2007-05-16 Thread Randy Barlow

Vetch wrote:
> The Rsync method presumably from your previous comment would check then
> send...?

Correct.

> I see - so you wouldn't compress the file, you'd compress the tunnel...
> Makes sense...
> Would it then still get compressed when stored at the other end?

Yes, if you set the backuppc server to do so.  Compression of the tunnel
just sends the bits across the line more efficiently; at the other end they
are decompressed back into the same bits that were sent.  Then the backuppc
server can optionally store them in a compressed pool.  If you don't
compress them, my understanding is that they will be stored in a much
easier to access format on the filesystem, which is handy if the backup
server goes down for some reason, though I've never tried it since I
always use compression...

> How well would that work for something like LVM snapshotting?
> I'm thinking of migrating my windows servers to Xen Virtual Machines on LVM
> drives
> If I take a snapshot of the drive and then mount it somewhere, could I get
> BackupPC to copy only the changed data as rsynch files?

I've not done this, but it should work if you dd the LV to a file
regularly...

> With regards to the storage - does it keep copies of all the versions of
> the
> file that is backed up, with differences stored and are they separated into
> chunks at that level, or are they stored as distinctive files?

It does intelligent pooling as far as I understand, meaning it will
store the big file once, and then store the next versions as differences
to the original. Am I correct on this, list readers?

R



[BackupPC-users] Fwd: Incremental backup and SMB protocol don't work correctly

2007-05-16 Thread Jesús Martel
-- Forwarded message --
From: Jesús Martel <[EMAIL PROTECTED]>
Date: 16-may-2007 22:32
Subject: Re: [BackupPC-users] Incremental backup and SMB protocol
don't work correctly
To: Holger Parplies <[EMAIL PROTECTED]>


I don't understand. If the file is transferred in the first incremental
backup, why is it downloaded again? The file has not been modified. If
the amount of data were larger (GBs), this would not be efficient.


2007/5/16, Holger Parplies <[EMAIL PROTECTED]>:
> Hi,
>
> Jesús Martel wrote on 16.05.2007 at 19:09:41 [[BackupPC-users] Incremental 
> backup and SMB protocol don't work correctly]:
> > Backup Summary:
> > ===============
> >
> > Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days
> > 0        full  yes     0      5/7  21:21  0.0            8.9  <= No files
> > 2        incr  no      1      5/8  21:00  5.8            7.9  <= Added one file ~647MB
> > 3        incr  no      1      5/9  21:00  5.9            6.9
> > 4        incr  no      1      5/10 22:00  6.1            5.9
> > 5        incr  no      1      5/11 22:00  9.4            4.9
> > 6        incr  no      1      5/14 01:00  7.1            2.7
> > 7        full  yes     0      5/15 01:00  5.5            1.7
> > 8        incr  no      1      5/16 01:00  0.0            0.7
> >
> >
> > File Size/Count Reuse Summary:
> > ==============================
> >                       Totals          Existing Files    New Files
> > Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
> > 0        full  1       0.0      0.00    0       0.0      2       0.0
> > 2        incr  1       647.6    1.88    1       647.6    0       0.0
> > 3        incr  1       647.6    1.84    1       647.6    0       0.0
> > 4        incr  1       647.6    1.78    1       647.6    0       0.0
> > 5        incr  1       647.6    1.14    1       647.6    0       0.0
> > 6        incr  1       647.6    1.52    1       647.6    0       0.0
> > 7        full  2       647.6    1.98    2       647.6    1       0.0
> > 8        incr  0       0.0      0.00    0       0.0      0       0.0
> >
> > The new file is always downloaded by the server until the next full
> > backup. Is that correct?
>
> yes.
>
> Regards,
> Holger
>



Re: [BackupPC-users] Advice on BackupPC

2007-05-16 Thread Vetch

Hi Les,

Thanks for the info...

Sounds like an incredibly powerful tool!

See responses below:-

On 5/16/07, Les Mikesell <[EMAIL PROTECTED]> wrote:


Vetch wrote:

> I have a two site network, one in the US, and one in the UK.
> Our bandwidth is limited, though will be increasing at some point in the
> future, though I couldn't say how much...
> I want to backup my data from one site to the other...
> In order to assess whether that would be do-able, I went to an
> exhibition of backup technologies.
> One that caught my eye was a company called Data Domain, who claimed to
> de-duplicate data at the block level of 16KB chunks...
> Apparently, all they send are the changed chunks and the schema to
> retrieve the data.

> Backuppc can use rsync to transfer the data.  Rsync works by reading
> through the file at both ends, exchanging block checksums to find the
> changed parts.



Ok - so Rsync sounds like the format to use...


> What I am wondering is would BackupPC be a suitable open source
> replacement for that technology...?
> Does it send the changed data down the line and then check to see if it
> already has a copy, or does it check then send?

> It can do either, depending on whether you use the tar, smb, or rsync
> transfer methods.



The Rsync method presumably from your previous comment would check then
send...?


> Presumably it would save significant bandwidth if it checks first...
> The other thing is, can BackupPC de-duplicate at the block level or is
> it just file level?
> I'm thinking that block level might save considerable amounts of
> traffic, because we will need to send file dumps of Exchange databases
> over the wire...
> ... Which I assume will mean that we've got about 16GB at least to copy
> every day, since it'll be creating a new file daily...
>
> On the other hand, would 16KB blocks be duplicated that regularly - I
> imagine there is a fair amount of variability in 16KB of ones and zeros,
> and the chances of them randomly reoccurring without being part of the
> same file, I would say are slim...
>
> What do you think?

> I think rsync will do it as well as it can be done.  However, it is hard
> to tell how much two different Exchange database dumps will have in
> common.  Then there is the issue that you could reduce the size by
> compressing the file but doing so will make the common parts impossible
> to find from one version to another.  You can work around this by using
> ssh compression or something like an openvpn tunnel with lzo compression
> enabled, leaving the file uncompressed.



I see - so you wouldn't compress the file, you'd compress the tunnel...
Makes sense...
Would it then still get compressed when stored at the other end?

> You can test the transfer efficiency locally first to get an idea of how
> well the common blocks are handled.  Use the command line rsync program
> to make a copy of one day's dump, then repeat the process the next day
> with the same filename.  Rsync will display the size of the file and
> the data actually transferred.



So I would output a copy of the database to the same file name, and rsync
would just take the changes...
I'll try it out...

How well would that work for something like LVM snapshotting?
I'm thinking of migrating my windows servers to Xen Virtual Machines on LVM
drives
If I take a snapshot of the drive and then mount it somewhere, could I get
BackupPC to copy only the changed data as rsynch files?

With regards to the storage - does it keep copies of all the versions of the
file that is backed up, with differences stored and are they separated into
chunks at that level, or are they stored as distinctive files?

Cheers,

Jx





Re: [BackupPC-users] Advice on BackupPC

2007-05-16 Thread Les Mikesell
Vetch wrote:

> I have a two site network, one in the US, and one in the UK.
> Our bandwidth is limited, though will be increasing at some point in the 
> future, though I couldn't say how much...
> I want to backup my data from one site to the other...
> In order to assess whether that would be do-able, I went to an 
> exhibition of backup technologies.
> One that caught my eye was a company called Data Domain, who claimed to 
> de-duplicate data at the block level of 16KB chunks...
> Apparently, all they send are the changed chunks and the schema to 
> retrieve the data.

Backuppc can use rsync to transfer the data.  Rsync works by reading 
through the file at both ends, exchanging block checksums to find the 
changed parts.

> What I am wondering is would BackupPC be a suitable open source 
> replacement for that technology...?
> Does it send the changed data down the line and then check to see if it 
> already has a copy, or does it check then send?

It can do either, depending on whether you use the tar, smb, or rsync 
transfer methods.

> Presumably it would save significant bandwidth if it checks first...
> The other thing is, can BackupPC de-duplicate at the block level or is 
> it just file level?
> I'm thinking that block level might save considerable amounts of 
> traffic, because we will need to send file dumps of Exchange databases 
> over the wire...
> ... Which I assume will mean that we've got about 16GB at least to copy 
> every day, since it'll be creating a new file daily...
> 
> On the other hand, would 16KB blocks be duplicated that regularly - I 
> imagine there is a fair amount of variability in 16KB of ones and zeros, 
> and the chances of them randomly reoccurring without being part of the 
> same file, I would say are slim...
> 
> What do you think?

I think rsync will do it as well as it can be done.  However, it is hard 
to tell how much two different Exchange database dumps will have in 
common.  Then there is the issue that you could reduce the size by 
compressing the file but doing so will make the common parts impossible 
to find from one version to another.  You can work around this by using 
ssh compression or something like an openvpn tunnel with lzo compression 
enabled, leaving the file uncompressed.

You can test the transfer efficiency locally first to get an idea of how 
well the common blocks are handled.  Use the command line rsync program 
to make a copy of one day's dump, then repeat the process the next day 
with the same filename.   Rsync will display the size of the file and 
the data actually transferred.

-- 
   Les Mikesell
[EMAIL PROTECTED]



Re: [BackupPC-users] Incremental backup and SMB protocol don't work correctly

2007-05-16 Thread Holger Parplies
Hi,

Jesús Martel wrote on 16.05.2007 at 19:09:41 [[BackupPC-users] Incremental 
backup and SMB protocol don't work correctly]:
> Backup Summary:
> ===============
>
> Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days
> 0        full  yes     0      5/7  21:21  0.0            8.9  <= No files
> 2        incr  no      1      5/8  21:00  5.8            7.9  <= Added one file ~647MB
> 3        incr  no      1      5/9  21:00  5.9            6.9
> 4        incr  no      1      5/10 22:00  6.1            5.9
> 5        incr  no      1      5/11 22:00  9.4            4.9
> 6        incr  no      1      5/14 01:00  7.1            2.7
> 7        full  yes     0      5/15 01:00  5.5            1.7
> 8        incr  no      1      5/16 01:00  0.0            0.7
>
>
> File Size/Count Reuse Summary:
> ==============================
>                       Totals          Existing Files    New Files
> Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
> 0        full  1       0.0      0.00    0       0.0      2       0.0
> 2        incr  1       647.6    1.88    1       647.6    0       0.0
> 3        incr  1       647.6    1.84    1       647.6    0       0.0
> 4        incr  1       647.6    1.78    1       647.6    0       0.0
> 5        incr  1       647.6    1.14    1       647.6    0       0.0
> 6        incr  1       647.6    1.52    1       647.6    0       0.0
> 7        full  2       647.6    1.98    2       647.6    1       0.0
> 8        incr  0       0.0      0.00    0       0.0      0       0.0
>
> The new file is always downloaded by the server until the next full
> backup. Is that correct?

yes.

Regards,
Holger



[BackupPC-users] Incremental backup and SMB protocol don't work correctly

2007-05-16 Thread Jesús Martel
Hello! I have a problem with BackupPC 3.0.0. The incremental backups
don't work correctly.
Here is an example:

Backup Summary:
===============

Backup#  Type  Filled  Level  Start Date  Duration/mins  Age/days
0        full  yes     0      5/7  21:21  0.0            8.9  <= No files
2        incr  no      1      5/8  21:00  5.8            7.9  <= Added one file ~647MB
3        incr  no      1      5/9  21:00  5.9            6.9
4        incr  no      1      5/10 22:00  6.1            5.9
5        incr  no      1      5/11 22:00  9.4            4.9
6        incr  no      1      5/14 01:00  7.1            2.7
7        full  yes     0      5/15 01:00  5.5            1.7
8        incr  no      1      5/16 01:00  0.0            0.7


File Size/Count Reuse Summary:
==============================
                      Totals          Existing Files    New Files
Backup#  Type  #Files  Size/MB  MB/sec  #Files  Size/MB  #Files  Size/MB
0        full  1       0.0      0.00    0       0.0      2       0.0
2        incr  1       647.6    1.88    1       647.6    0       0.0
3        incr  1       647.6    1.84    1       647.6    0       0.0
4        incr  1       647.6    1.78    1       647.6    0       0.0
5        incr  1       647.6    1.14    1       647.6    0       0.0
6        incr  1       647.6    1.52    1       647.6    0       0.0
7        full  2       647.6    1.98    2       647.6    1       0.0
8        incr  0       0.0      0.00    0       0.0      0       0.0

The new file is always downloaded by the server until the next full
backup. Is that correct?

I use Debian GNU/Linux (Lenny / testing) [backuppc_3.0.0-2_all.deb by
Ludovic Drolez <[EMAIL PROTECTED]>].

Thanks.



[BackupPC-users] incremental backups are failing, fulls are ok

2007-05-16 Thread Dawn Susini Wallis
Hello,

I'm running BackupPC-3.0.0  On a full backup in the Xferlog, this 
command succeeds:
/usr/bin/smbclient computername\\share -A 
/computerdirectory/computername/passwordfile -E -N -d 1 -c tarmode\ full 
-Tc -

On an incremental backup in the Xferlog, this command fails:
/usr/bin/smbclient computername\\share -A 
/computerdirectory/computername/passwordfile -E -N -d 1 -c tarmode\ full 
-TcN /directory/computername/timeStamp.level0 -

Can anyone with some smbclient experience or backuppc experience (or 
both!) tell me why I get this error on incremental backups:

incr backup started back to 2007-05-08 10:08:25  (backup #2) for share 
share$
Xfer PIDs are now 27970,27969
cmdExecOrEval: about to exec /usr/bin/smbclient computername\\share 
-A /directory/computername/passwordfile -E -N -d 0 -c tarmode\ full -TcN 
/directory/computername/timeStamp.level0 -
session setup failed: NT_STATUS_LOGON_FAILURE
session setup failed: NT_STATUS_LOGON_FAILURE
tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 
0 filesTotal, 0 sizeTotal
Got fatal error during xfer (session setup failed: NT_STATUS_LOGON_FAILURE)
Backup aborted (session setup failed: NT_STATUS_LOGON_FAILURE)


Any help would be greatly appreciated. Thanks.

Dawn
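
(One quick check, using the same auth file as the failing command - just a
suggestion, since NT_STATUS_LOGON_FAILURE points at the credentials rather
than at the tar options:

    /usr/bin/smbclient computername\\share -A \
        /computerdirectory/computername/passwordfile -c 'ls'

If a plain directory listing fails the same way, the password file or the
account is the problem, not the incremental options.)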



[BackupPC-users] Re Call timed out: server did not respond

2007-05-16 Thread Regis Gras
Les Stott wrote:
> Regis Gras wrote:

>> I have still problems with backuppc-3.0.0.
>>
>> Backup stops with the message
>> Error reading file \Local Settings\Temp\Cover picture.tiff : Call timed
>> out: server did not respond after 20000 milliseconds
>> I am using samba-client-3.0.10-1.4E.11 for smbclient
>>
>> I saw that the problem depended on the samba version, so I installed
>> backuppc-3.0.0 on another server with samba-client-3.0.23c-2.el5.2.0.2.
>>
>> Now, exclude doesn't work ...
>> For a test_pc, test_pc.pl is:
>> $Conf{SmbShareName} = 'Documents';
>> $Conf{SmbShareUserName} = 'rgras';
>> $Conf{SmbSharePasswd} = 'x';
>>
>> With this configuration, backuppc works fine.
>>
>> Now, I want to exclude some directory.  The test_pc.pl becomes
>> $Conf{SmbShareName} = 'Documents';
>> $Conf{SmbShareUserName} = 'rgras';
>> $Conf{SmbSharePasswd} = 'x';
>> $Conf{BackupFilesExclude} = [ '\Personnel' ];
>>
>>   
>
> Change the "\" to a "/":
>
>    $Conf{BackupFilesExclude} = [ '/Personnel' ];
>
> Remember also that excludes are relative to the share, so the above
> assumes that the Personnel directory is at the root of the share called
> Documents.
>
> Regards,
>
> Les

Thank you Les, but with samba-client-3.0.23c-2.el5.2.0.2 the problem
remains: BackupPC crashes with the message

Last error is "session setup failed: NT_STATUS_LOGON_FAILURE".

Régis


-- 
==
| Régis Gras | http://www-ledss.ujf-grenoble.fr  |
|   D.C.M.   | mailto:[EMAIL PROTECTED] |
| 301, rue de la chimie  | --|
| DU BP 53   | Tel 04 76 51 41 76|
| 38041 Grenoble Cedex 9 | Fax 04 76 51 40 89|
==




Re: [BackupPC-users] Problems using smb

2007-05-16 Thread Les Mikesell
Markus Mehrwald wrote:
> Thank you for the information, but my problem was not like the one
> discussed before. As I wrote, the problem was the user and not the
> password. I tried your workaround but it did not change anything. After
> some small tests I figured out that I must give a username, because
> backuppc obviously uses an empty string as the user if none is given, and
> this does not work (at least not in my case).
> After setting the guest user it works fine even without the workaround,
> so maybe the bug in the Red Hat implementation is already fixed, or it
> does not affect my use of samba/backuppc.

That bug is specific to the smbclient version running on the backuppc
server.  I've only hit it on Fedora FC6 - which probably means it will
also be in CentOS 5 when I get around to moving my main server. 
Mine would only do fulls, and I didn't find the workaround to make 
incrementals work normally.  Does anyone have a link?


-- 
   Les Mikesell
[EMAIL PROTECTED]



[BackupPC-users] Advice on BackupPC

2007-05-16 Thread Vetch

Hi all,

I've just found BackupPC, and I was wondering whether it will achieve what I
need it to.

I have a two site network, one in the US, and one in the UK.
Our bandwidth is limited, though will be increasing at some point in the
future, though I couldn't say how much...
I want to backup my data from one site to the other...
In order to assess whether that would be do-able, I went to an exhibition of
backup technologies.
One that caught my eye was a company called Data Domain, who claimed to
de-duplicate data at the block level of 16KB chunks...
Apparently, all they send are the changed chunks and the schema to retrieve
the data.

What I am wondering is would BackupPC be a suitable open source replacement
for that technology...?
Does it send the changed data down the line and then check to see if it
already has a copy, or does it check then send?
Presumably it would save significant bandwidth if it checks first...
The other thing is, can BackupPC de-duplicate at the block level or is it
just file level?
I'm thinking that block level might save considerable amounts of traffic,
because we will need to send file dumps of Exchange databases over the
wire...
... Which I assume will mean that we've got about 16GB at least to copy
every day, since it'll be creating a new file daily...

On the other hand, would 16KB blocks be duplicated that regularly - I
imagine there is a fair amount of variability in 16KB of ones and zeros, and
the chances of them randomly reoccurring without being part of the same
file, I would say are slim...

What do you think?

Any help would be greatly appreciated.

Jx


Re: [BackupPC-users] Problems using smb

2007-05-16 Thread Markus Mehrwald
Thank you for the information, but my problem was not like the one discussed
before. As I wrote, the problem was the user and not the password. I tried
your workaround but it did not change anything. After some small tests I
figured out that I must give a username, because backuppc obviously uses an
empty string as the user if none is given, and this does not work (at least
not in my case).
After setting the guest user it works fine even without the workaround, so
maybe the bug in the Red Hat implementation is already fixed, or it does not
affect my use of samba/backuppc.

Regards,
Markus

-------- Original Message --------
Date: Wed, 16 May 2007 11:42:15 -0400
From: "Jason M. Kusar" <[EMAIL PROTECTED]>
To: Markus Mehrwald <[EMAIL PROTECTED]>
Cc: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Problems using smb

> Markus Mehrwald wrote:
> > I found the problem. Having no user is not allowed, but Windows accepts
> > the user "gast" (or on English systems it may be "guest"). Obviously
> > backuppc passes "" as the username, and this is not allowed.
>
> This has actually been discussed before and if you want a little more 
> background (and a temporary fix until RedHat fixes their problem), 
> search the archive for a thread entitled "Backup PC smbclient and 
> passwords."
> 
> --Jason
> 
> > -------- Original Message --------
> > Date: Wed, 16 May 2007 15:18:20 +0200
> > From: "Markus Mehrwald" <[EMAIL PROTECTED]>
> > To: backuppc-users@lists.sourceforge.net
> > Subject: [BackupPC-users] Problems using smb
> >
> >   
> >> Hello,
> >>
> >> I have got a big problem using smb for backups. If I execute the command
> >> smbclient pcbackup -I  -U -E -N -d 1 -c tarmode\ full -Tc -
> >> /test.txt everything works fine and my display is full of spam. This is
> >> what I copied from backuppc (without /test.txt), after which the
> >> execution fails with a samba error "tree connect failed:
> >> NT_STATUS_ACCESS_DENIED", in fact two times (maybe because there are two
> >> files in the backup dir?!). Why does the command work on the command
> >> line and not from backuppc?
> >> I use smb 3.0.24-5.fc6 and the current stable version of backuppc.
> >>
> >> Thanks for your help,
> >> Markus
> >>



Re: [BackupPC-users] Problems using smb

2007-05-16 Thread Jason M. Kusar
Markus Mehrwald wrote:
> I found the problem. Having no user is not allowed, but Windows accepts
> the user "gast" (or on English systems it may be "guest"). Obviously
> backuppc passes "" as the username, and this is not allowed.
>
>   
This has actually been discussed before and if you want a little more 
background (and a temporary fix until RedHat fixes their problem), 
search the archive for a thread entitled "Backup PC smbclient and 
passwords."

--Jason

> -------- Original Message --------
> Date: Wed, 16 May 2007 15:18:20 +0200
> From: "Markus Mehrwald" <[EMAIL PROTECTED]>
> To: backuppc-users@lists.sourceforge.net
> Subject: [BackupPC-users] Problems using smb
>
>   
>> Hello,
>>
>> I have got a big problem using smb for backups. If I execute the command
>> smbclient pcbackup -I  -U -E -N -d 1 -c tarmode\ full -Tc -
>> /test.txt everything works fine and my display is full of spam. This is
>> what I copied from backuppc (without /test.txt), after which the execution
>> fails with a samba error "tree connect failed: NT_STATUS_ACCESS_DENIED",
>> in fact two times (maybe because there are two files in the backup
>> dir?!). Why does the command work on the command line and not from
>> backuppc?
>> I use smb 3.0.24-5.fc6 and the current stable version of backuppc.
>>
>> Thanks for your help,
>> Markus
>>




Re: [BackupPC-users] Problems using smb

2007-05-16 Thread Markus Mehrwald
I found the problem. Having no user is not allowed, but Windows accepts the
user "gast" (or on English systems it may be "guest"). Obviously backuppc
passes "" as the username, and this is not allowed.

-------- Original Message --------
Date: Wed, 16 May 2007 15:18:20 +0200
From: "Markus Mehrwald" <[EMAIL PROTECTED]>
To: backuppc-users@lists.sourceforge.net
Subject: [BackupPC-users] Problems using smb

> Hello,
>
> I have got a big problem using smb for backups. If I execute the command
> smbclient pcbackup -I  -U -E -N -d 1 -c tarmode\ full -Tc -
> /test.txt everything works fine and my display is full of spam. This is
> what I copied from backuppc (without /test.txt), after which the execution
> fails with a samba error "tree connect failed: NT_STATUS_ACCESS_DENIED",
> in fact two times (maybe because there are two files in the backup dir?!).
> Why does the command work on the command line and not from backuppc?
> I use smb 3.0.24-5.fc6 and the current stable version of backuppc.
>
> Thanks for your help,
> Markus
> 



[BackupPC-users] incremental doesn't stay within filter

2007-05-16 Thread Mark Sopuch
Hi,

Can someone please direct me to an answer on ensuring that incremental
backups only back up what the include filter defines in the per-host
configuration files. Full backups appear to limit processing to just the
included directories, as expected, but incrementals process the whole
share, confusing the user restore display with unintended items in their
browse trees.

Thanks

Mark
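
(For reference, the include filter in question is presumably a per-host
setting along these lines - the paths are hypothetical:

    $Conf{BackupFilesOnly} = ['/home', '/profiles'];

Full backups reportedly restrict themselves to these directories; the
report above is that incrementals do not.)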



Re: [BackupPC-users] Call timed out: server did not respond after 20000 milliseconds

2007-05-16 Thread Les Stott
Regis Gras wrote:
> I have still problems with backuppc-3.0.0.
>
> Backup stops with the message
> Error reading file \Local Settings\Temp\Cover picture.tiff : Call timed
> out: server did not respond after 20000 milliseconds
> I am using samba-client-3.0.10-1.4E.11 for smbclient
>
> I saw that the problem depended on the samba version, so I installed
> backuppc-3.0.0 on another server with samba-client-3.0.23c-2.el5.2.0.2.
>
> Now, exclude doesn't work ...
> For a test_pc, test_pc.pl is:
> $Conf{SmbShareName} = 'Documents';
> $Conf{SmbShareUserName} = 'rgras';
> $Conf{SmbSharePasswd} = 'x';
>
> With this configuration, backuppc works fine.
>
> Now, I want to exclude some directory.  The test_pc.pl becomes
> $Conf{SmbShareName} = 'Documents';
> $Conf{SmbShareUserName} = 'rgras';
> $Conf{SmbSharePasswd} = 'x';
> $Conf{BackupFilesExclude} = [ '\Personnel' ];
>
>   
Change the "\" to a "/"

$Conf{BackupFilesExclude} = [ '/Personnel' ];

Remember also that excludes are relative to the share, so the above assumes 
that the Personnel directory is at the root of the share called Documents.

Regards,

Les




Re: [BackupPC-users] Webinterface on different host

2007-05-16 Thread Maikel Punie
On 16/05/2007 15:31, Nils Breunese (Lemonbit Internet) wrote:
> Maikel Punie wrote:
>>> I don't know if it'll be much faster though. What is the typical load on
>>> your BackupPC server?
>> Well almost 24/24 it's between 4 and 8
>
> That's pretty high. Is this server only doing backups? What transfer
> method are you using?
>
> Nils Breunese.
I'm using rsync over ssh with the following config:

$Conf{XferMethod} = 'rsync';
$Conf{XferLogLevel} = 1;
$Conf{RsyncClientPath} = '/usr/bin/rsync';
$Conf{RsyncClientCmd} = '$sshPath -c blowfish -q -x -l root $host
$rsyncPath $argList+';
$Conf{RsyncClientRestoreCmd} = '$sshPath -c blowfish -q -x -l root $host
$rsyncPath $argList+';

maikel
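
Not part of the thread, but possibly relevant to 40-hour rsync runs: the
BackupPC 3.x documentation describes checksum caching, which spares the
server from uncompressing and re-checksumming pool files on every full.
A sketch, assuming rsync 2.6.3 or later on the clients:

    # append to the existing argument lists in config.pl
    push @{$Conf{RsyncArgs}},        '--checksum-seed=32761';
    push @{$Conf{RsyncRestoreArgs}}, '--checksum-seed=32761';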



Re: [BackupPC-users] Webinterface on different host

2007-05-16 Thread Nils Breunese (Lemonbit Internet)
Maikel Punie wrote:

>> I don't know if it'll be much faster though. What is the typical load on
>> your BackupPC server?
> Well almost 24/24 its between 4 and 8

That's pretty high. Is this server only doing backups? What transfer
method are you using?

Nils Breunese.





Re: [BackupPC-users] Webinterface on different host

2007-05-16 Thread Maikel Punie





> Yes, it is possible. See
> .
> I don't know if it'll be much faster though. What is the typical load on
> your BackupPC server?

OK, this server port setting is done; backuppc is now listening on the
server.

But how do I get the client to connect to that place?
I have set $Conf{ServerHost} to the correct server, and I added the secret
message.

But what else is needed? Does the client need the same config files, the
per-host ones, the hosts file itself?

Maikel
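
As far as I understand the docs, the pieces on the CGI host are roughly
these - a sketch, hostname and secret made up:

    $Conf{ServerHost}       = 'backuppc.internal';  # the backup server
    $Conf{ServerPort}       = 10101;                # a free TCP port; the
                                                    # default -1 disables it
    $Conf{ServerMesgSecret} = 'some-shared-secret'; # same value on both hosts

The CGI host also needs the config visible, and for browsing or restoring
files it needs to reach the pool (e.g. an NFS mount of the data directory).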





[BackupPC-users] Problems using smb

2007-05-16 Thread Markus Mehrwald
Hello,

I have got a big problem using smb for backups. If I execute the command
smbclient pcbackup -I  -U -E -N -d 1 -c tarmode\ full -Tc -
/test.txt everything works fine and my display is full of spam. This is
what I copied from backuppc (without /test.txt), after which the execution
fails with a samba error "tree connect failed: NT_STATUS_ACCESS_DENIED",
in fact two times (maybe because there are two files in the backup dir?!).
Why does the command work on the command line and not from backuppc?
I use smb 3.0.24-5.fc6 and the current stable version of backuppc.

Thanks for your help,
Markus



[BackupPC-users] Call timed out: server did not respond after 20000 milliseconds

2007-05-16 Thread Regis Gras
I have still problems with backuppc-3.0.0.

Backup stops with the message
Error reading file \Local Settings\Temp\Cover picture.tiff : Call timed
out: server did not respond after 20000 milliseconds.
I am using samba-client-3.0.10-1.4E.11 for smbclient

I saw that the problem depended on the samba version, so I installed
backuppc-3.0.0 on another server with samba-client-3.0.23c-2.el5.2.0.2.

Now, exclude doesn't work ...
For a test_pc, test_pc.pl is:
$Conf{SmbShareName} = 'Documents';
$Conf{SmbShareUserName} = 'rgras';
$Conf{SmbSharePasswd} = 'x';

With this configuration, backuppc works fine.

Now, I want to exclude some directory.  The test_pc.pl becomes
$Conf{SmbShareName} = 'Documents';
$Conf{SmbShareUserName} = 'rgras';
$Conf{SmbSharePasswd} = 'x';
$Conf{BackupFilesExclude} = [ '\Personnel' ];

Backuppc crashes with the message:
Last error is "session setup failed: NT_STATUS_LOGON_FAILURE".

Could someone tell me how to solve this problem?

Thanks

-- 
==
| Régis Gras | http://www-ledss.ujf-grenoble.fr  |
|   D.C.M.   | mailto:[EMAIL PROTECTED] |
| 301, rue de la chimie  | --|
| DU BP 53   | Tel 04 76 51 41 76|
| 38041 Grenoble Cedex 9 | Fax 04 76 51 40 89|
==




Re: [BackupPC-users] Webinterface on different host

2007-05-16 Thread Maikel Punie




On 16/05/2007 11:58, Nils Breunese (Lemonbit Internet) wrote:
> Maikel Punie schreef:
>> i'm running backuppc for around 7 servers, this is all working perfectly
>> but the webinterface is very slow, it sometimes takes up to 5 minutes to
>> open up the webinterface.
>>
>> So now i was thinking, maybe it would be good to host the webinterface
>> on a different host inside the network, this has a couple of extra
>> advantages
>> - first of all, one Apache server less to maintain
>> - second, the webinterface could be much faster
>> - third, the backuppc server has all the memory and cpu to use for
>> backing up only.
>>
>> So now my question: is this possible, or do you guys have another idea
>> on how we can solve our problem?
>
> Yes, it is possible. See
> .
> I don't know if it'll be much faster though. What is the typical load on
> your BackupPC server?

Well almost 24/24 it's between 4 and 8.

> Large pool? How many hosts are you backing up?

5 hosts, one with around 400 GB of data; the others are just for config
backups, so that's not worth the space.
We want to add 3 other hosts, but we just can't: a backup (400 GB) now
typically takes around 40 hours.

> I've found that the web interface is not such a heavyweight, but all the
> dumping, compressing, pooling, linking, etc. can be a heavy load for a
> system.
>
> Nils Breunese.






[BackupPC-users] [Fwd: Re: SSH Tunnel HOWTO for BackupPC]

2007-05-16 Thread Johan Ehnberg

Just to make the thread complete.
--- Begin Message ---
Thanks Johan,

You pointed me in the right direction: my ssh-wrapper script did not have
the full path for ssh. My server is getting backed up right now! I found it
by removing the redirection of the output to null.

Francis
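
For the archive, a minimal version of such a wrapper - only a sketch, the
point being the absolute path, since the environment BackupPC runs in may
not have ssh in its PATH:

    #!/bin/sh
    # /etc/BackupPC/ssh-wrapper: hand everything to ssh, by absolute path
    exec /usr/bin/ssh "$@"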

-------- Original Message --------
From: Johan Ehnberg [mailto:[EMAIL PROTECTED]
Sent: 14 May 2007 03:13
To: Francis Lessard
Subject: RE: [BackupPC-users] SSH Tunnel HOWTO for BackupPC

Hi,

The tunnel is not opened correctly from the script. You can see what is
going wrong by doing the following:

Run the tunnel manually, and start the backup from BackupPC within 20 secs.
If that works, you may have something wrong in the usernames or such (is
your key-based authentication OK, or is SSH still asking for a password?).

The most probable scenario is that your SSH is waiting for a password
because the wrapper never returns with success. In that case, read the
BackupPC FAQ for the solution.

If that doesn't help we'll look into it further. It's not very complicated.

/johan
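
For anyone following along, the intended setup appears to be roughly this -
a sketch assembled from the log below, pointing BackupPC's rsyncd transfer
at the local end of the tunnel instead of passing --port on a command line:

    $Conf{XferMethod}       = 'rsyncd';
    $Conf{ClientNameAlias}  = 'localhost';  # connect to the local tunnel end
    $Conf{RsyncdClientPort} = 7001;
    $Conf{DumpPreUserCmd}   = '/etc/BackupPC/ssh-wrapper -p 3022 -f'
                            . ' -L 7001:internalremoteip:873'
                            . ' [EMAIL PROTECTED] sleep 20';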


Quoting Francis Lessard <[EMAIL PROTECTED]>:

> Hi Johan,
>
> Here is the output of ./BackupPC_dump -v -f myhost :
>
> Results
>
> cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 localhost
> cmdSystemOrEval: finished: got output PING localhost (127.0.0.1) 56(84)
> bytes of data.
> 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.048 ms
>
> --- localhost ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
>
> cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 localhost
> cmdSystemOrEval: finished: got output PING localhost (127.0.0.1) 56(84)
> bytes of data.
> 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.046 ms
>
> --- localhost ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
>
> CheckHostAlive: returning 0.046
> Executing DumpPreUserCmd: /etc/BackupPC/ssh-wrapper -p 3022 -f -L
> 7001:internalremoteip:873 [EMAIL PROTECTED] sleep 20
> cmdSystemOrEval: about to system /etc/BackupPC/ssh-wrapper -p 3022 -f -L
> 7001:internalremoteip:873 [EMAIL PROTECTED] sleep 20
> cmdSystemOrEval: finished: got output
> full backup started for directory fortune
> started full dump, share=hidden
> Error connecting to rsync daemon at localhost:7001: inet connect:
Connection
> refused
> Got fatal error during xfer (inet connect: Connection refused)
> cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 localhost
> cmdSystemOrEval: finished: got output PING localhost (127.0.0.1) 56(84)
> bytes of data.
> 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.049 ms
>
> --- localhost ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
>
> cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 localhost
> cmdSystemOrEval: finished: got output PING localhost (127.0.0.1) 56(84)
> bytes of data.
> 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.047 ms
>
> --- localhost ping statistics ---
> 1 packets transmitted, 1 received, 0% packet loss, time 0ms
> rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
>
> CheckHostAlive: returning 0.047
> Backup aborted (inet connect: Connection refused)
> dump failed: inet connect: Connection refused
>
>
> I have not managed to see what 'exact' command is sent after the
> wrapper...
>
> Thanks for your support !
>
> Francis
>
>
>
>
>
> -------- Original Message --------
> From: Johan Ehnberg [mailto:[EMAIL PROTECTED]
> Sent: 10 May 2007 09:59
> To: Francis Lessard
> Cc: 'Craig Barratt'; 'BackupPC Users'
> Subject: Re: [BackupPC-users] SSH Tunnel HOWTO for BackupPC
>
> Hi,
>
> Can you try running the Dump command manually and post me the output?
> The documentation tells you how.
>
> Changing the SSH port will not mess with anything, as long as you have
> the same commands in BackupPC and on the command line.
>
> I want to see your tunnel working, so hang in there :).
>
> /johan
>
> Francis Lessard wrote:
>> Hi Johan,
>>
>> Your document on how to use ssh tunneling with BackupPC is brilliant. I
>> tried it and it works in test, but not in BackupPC.
>> In my shell, logged in as backuppc (I replaced internalip, username,
>> gateway):
>>
>>[EMAIL PROTECTED]:/home$ /etc/BackupPC/ssh-wrapper -p 3022 -f -L
>> 7001:internalip:873 [EMAIL PROTECTED] sleep 20
>>
>> works good : SSH Started succesfully
>>
>> After, ONLY this command have worked :
>>
>>[EMAIL PROTECTED]:/home$ rsync --port=7001
>> [EMAIL PROTECTED]::myrsyncservice
>>
>> I tried to use the --port=7001 argument in the BackupPC CGI, plus several
>> combos, with no success. I have not found in the logs the complete rsync
>> command that BackupPC sends. Maybe that could help me debug... My only
>> hint is the port 3022 I use instead of the standard port 22 on the ssh
>> gateway. Could that mix things up?
>>
>> Thank you for your opinion on that.
>>
>>
>> Re

Re: [BackupPC-users] Webinterface on different host

2007-05-16 Thread Nils Breunese (Lemonbit Internet)
Maikel Punie schreef:

> i'm running backuppc for around 7 servers, this is all working perfectly
> but the webinterface is very slow, it sometimes takes up to 5 minutes to
> open up the webinterface.
>
> So now i was thinking, maybe it would be good to host the webinterface
> on a different host inside the network, this has a couple of extra
> advantages
> - first of all, one Apache server less to maintain
> - second, the webinterface could be much faster
> - third, the backuppc server has all the memory and cpu to use for
> backing up only.
>
> So now my question: is this possible, or do you guys have another idea
> on how we can solve our problem?

Yes, it is possible. See
.
I don't know if it'll be much faster though. What is the typical load on
your BackupPC server? Large pool? How many hosts are you backing up?
I've found that the web interface is not such a heavyweight, but all the
dumping, compressing, pooling, linking, etc. can be a heavy load for a
system.

Nils Breunese.





[BackupPC-users] Webinterface on different host

2007-05-16 Thread Maikel Punie

hey,

i'm running backuppc for around 7 servers, this is all working perfectly but
the webinterface is very slow, it sometimes takes up to 5 minutes to open up
the webinterface.

So now i was thinking, maybe it would be good to host the webinterface on a
different host inside the network, this has a couple of extra advantages
- first of all, one Apache server less to maintain
- second, the webinterface could be much faster
- third, the backuppc server has all the memory and cpu to use for backing
up only.

So now my question: is this possible, or do you guys have another idea on
how we can solve our problem?

--
Greets,
Maikel


Re: [BackupPC-users] Can't call method "abort"..... SOLVED

2007-05-16 Thread komodo
Hi, thanks for the reply.

I get sig=ALRM directly after I run BackupPC_dump, so I had no idea where
the problem was.

The problem turned out to be that, a long time ago, I set
$Conf{ClientTimeout} = 99; for testing purposes.
Now I have changed the kernel and the problem appeared. When I change the
timeout to 86400, everything is OK. So the problem is somewhere in the
kernel: they changed the type of some variable or something like that, and
this number was too big.

Anyway, thanks for the help.

Martin



On Tuesday 15 May 2007 23:02, you wrote:
> Hi,
>
> komodo wrote on 15.05.2007 at 14:34:20 [[BackupPC-users] Can't call
> method "abort".]:
> > Nobody can help with this problem ?
>
> well, I could re-ask the same questions as yesterday. Would that help?
>
> > Here is output with perl -w switch, maybe it helps more.
>
> Nope.
>
> Regards,
> Holger

-- 
komodo

http://komodo.webz.cz
