Re: [BackupPC-users] Tricks for Win 10 SMB Backup?

2018-07-30 Thread Tim Evans

On 07/30/2018 02:24 PM, Tim Evans wrote:

On 07/30/2018 01:44 AM, Craig Barratt wrote:
What is the value of $Conf{SmbClientFullCmd} and what is the smb 
command that is being run (see the XferLOG file)?


$Conf{SmbClientFullCmd} = '$smbClientPath $host\\$shareName 
$I_option -U $userName -E -d 1 -c tarmode\\ full -Tc$X_option - $fileList';


XferLOG file /raptor/pc-backups//pc/new-pelican/XferLOG.0.z created 
2018-07-30 14:11:20
Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0, 
newBkupNum = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx = (FillCycle 
= 0, noFillCnt = )
Running: /usr/bin/smbclient new-pelican\\C\$ -U backup -E -d 1 -c 
tarmode\ full -Tc -

full backup started for share C$
Xfer PIDs are now 31549,31548
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract: got Full = 1
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract starting... 
(XferLogLevel = 1)

XferErr tree connect failed: NT_STATUS_ACCESS_DENIED
XferErr tree connect failed: NT_STATUS_ACCESS_DENIED

The C drive share on the PC is manually accessible:

# smbclient //new-pelican/c -Ubackup%XXX
Try "help" to get a list of possible commands.
smb: \>

The same password is used for user 'backup', and I can 'get' files here.

Adding the dollar sign (i.e., referencing the default 'c$' share) to the 
above:


smbclient //new-pelican/c\$ -Ubackup%XXX
tree connect failed: NT_STATUS_ACCESS_DENIED


OK, I believe I have resolved this. (YMMV).

In Windows 10, sharing "C:" is apparently not the same as sharing "C$". 
The administrative shares (e.g., "C$") aren't remotely accessible by 
default; a registry edit is required:


reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system 
/v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f


Once this key was added and the PC rebooted, C$ is visible remotely as a 
share, and BackupPC appears to be happy.
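
A quick way to confirm the change took effect is to retry the manual 
connect that failed earlier:

smbclient //new-pelican/c\$ -Ubackup%XXX

With the registry change in place (and after the reboot), this should 
drop to the smb: \> prompt instead of failing with 
NT_STATUS_ACCESS_DENIED.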

--
Tim Evans   |5 Chestnut Court
443-394-3864|Owings Mills, MD 21117



Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-30 Thread Guillermo Rozas
>
> IIUC, you want a way to check the integrity of the pool files on the
> server side.
>

Yes


> BackupPC 3 used to have such a function, by re-checksumming and
> verifying some percentage of the pool during a nightly (can't remember
> the details, and I don't have the v3 docs available).
>

Found it here:
https://backuppc.github.io/backuppc/BackupPC-3.3.2.html#Rsync-checksum-caching

The wording further confirms that V4 won't checksum the files once they're
added to the pool, contrary to what I believed.


> If you want to do this for yourself, it's pretty easy with a cronjob.
> Just compare, for all files in $topDir/pool/*/*/, their md5sum with the
> filename. Same = good, not the same = bad.
> If your pool is compressed, pipe the compressed files in
> $topDir/cpool/*/*/ through pigz [1] (which, as opposed to gzip, can
> handle the headerless gz format used there), as in the following piece
> of bash:
>
>digest=$(pigz -dc $file | md5sum -b | cut -d' ' -f1)
>
> Now, check if $digest == $file, and you have a sanity check. (It's
> slightly more annoying to find out where $file was referenced in case it
> is corrupted; but it's possible, and I recommend not to worry about that
> until it happens.)
>

Perfect, thanks! I can then use --checksum to verify the client, and a
script to checksum the server off-line from time to time. The best of both
worlds :)
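
For the archive, here's a rough sketch of the kind of script I have in 
mind (untested; assumptions: the stock /var/lib/BackupPC topDir, pigz 
installed, and the $topDir/pool/*/*/ and $topDir/cpool/*/*/ layout 
described above; adjust paths for your install):

  #!/bin/bash
  # BackupPC v4 pool sanity check: a pool file's name is the md5 digest
  # of its uncompressed contents, so re-hash each file and compare the
  # result against the file's basename.
  topDir=/var/lib/BackupPC

  # Uncompressed pool files can be hashed directly.
  for file in "$topDir"/pool/*/*/*; do
      [ -f "$file" ] || continue
      digest=$(md5sum -b "$file" | cut -d' ' -f1)
      [ "$digest" = "$(basename "$file")" ] || echo "POSSIBLY CORRUPT: $file"
  done

  # Compressed pool files go through pigz, which (unlike gzip) handles
  # the headerless gz format used in the cpool.
  for file in "$topDir"/cpool/*/*/*; do
      [ -f "$file" ] || continue
      digest=$(pigz -dc "$file" | md5sum -b | cut -d' ' -f1)
      [ "$digest" = "$(basename "$file")" ] || echo "POSSIBLY CORRUPT: $file"
  done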

Regards,
Guillermo


Re: [BackupPC-users] Tricks for Win 10 SMB Backup?

2018-07-30 Thread Michael Stowe

On 2018-07-30 11:24, Tim Evans wrote:

On 07/30/2018 01:44 AM, Craig Barratt wrote:
What is the value of $Conf{SmbClientFullCmd} and what is the smb 
command that is being run (see the XferLOG file)?


$Conf{SmbClientFullCmd} = '$smbClientPath $host\\$shareName
$I_option -U $userName -E -d 1 -c tarmode\\ full -Tc$X_option -
$fileList';

XferLOG file /raptor/pc-backups//pc/new-pelican/XferLOG.0.z created
2018-07-30 14:11:20
Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0,
newBkupNum = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx =
(FillCycle = 0, noFillCnt = )
Running: /usr/bin/smbclient new-pelican\\C\$ -U backup -E -d 1 -c
tarmode\ full -Tc -
full backup started for share C$
Xfer PIDs are now 31549,31548
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract: got Full = 1
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract starting...
(XferLogLevel = 1)
XferErr tree connect failed: NT_STATUS_ACCESS_DENIED
XferErr tree connect failed: NT_STATUS_ACCESS_DENIED

The C drive share on the PC is manually accessible:

# smbclient //new-pelican/c -Ubackup%XXX
Try "help" to get a list of possible commands.
smb: \>

The same password is used for user 'backup', and I can 'get' files here.

Adding the dollar sign (i.e., referencing the default 'c$' share) to 
the above:


smbclient //new-pelican/c\$ -Ubackup%XXX
tree connect failed: NT_STATUS_ACCESS_DENIED


This is normal for Administrative Shares since approximately Vista.

Did you override the policy that disables remote access to 
Administrative Shares?


Re: [BackupPC-users] Tricks for Win 10 SMB Backup?

2018-07-30 Thread Tim Evans

On 07/30/2018 01:44 AM, Craig Barratt wrote:
What is the value of $Conf{SmbClientFullCmd} and what is the smb command 
that is being run (see the XferLOG file)?


$Conf{SmbClientFullCmd} = '$smbClientPath $host\\$shareName 
$I_option -U $userName -E -d 1 -c tarmode\\ full -Tc$X_option - $fileList';


XferLOG file /raptor/pc-backups//pc/new-pelican/XferLOG.0.z created 
2018-07-30 14:11:20
Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0, 
newBkupNum = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx = 
(FillCycle = 0, noFillCnt = )
Running: /usr/bin/smbclient new-pelican\\C\$ -U backup -E -d 1 -c 
tarmode\ full -Tc -

full backup started for share C$
Xfer PIDs are now 31549,31548
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract: got Full = 1
tarExtract: /usr/share/BackupPC/bin/BackupPC_tarExtract starting... 
(XferLogLevel = 1)

XferErr tree connect failed: NT_STATUS_ACCESS_DENIED
XferErr tree connect failed: NT_STATUS_ACCESS_DENIED

The C drive share on the PC is manually accessible:

# smbclient //new-pelican/c -Ubackup%XXX
Try "help" to get a list of possible commands.
smb: \>

The same password is used for user 'backup', and I can 'get' files here.

Adding the dollar sign (i.e., referencing the default 'c$' share) to the 
above:


smbclient //new-pelican/c\$ -Ubackup%XXX
tree connect failed: NT_STATUS_ACCESS_DENIED
--
Tim Evans   |   5 Chestnut Court
|   Owings Mills, MD 21117
|   443-394-3864



[BackupPC-users] Can only access 10 most recent backups, all others say the directory is empty.

2018-07-30 Thread Kirk Miesle via BackupPC-users
I'm running version 4.2.1, and it seems all new hosts we've set up after May
7th, when 4.2.1 was released, are exhibiting this issue. Hosts that were
set up prior to May 7th seem fine.

Here's the issue: We have a host we've been backing up daily since the
beginning of June. There are no errors in the logs. When we attempt to do a
restore via the CGI interface, we are unable to access any backups older
than about 10 days old. When we select one of the older backup sets it says
"Error: Directory /var/lib/BackupPC//pc/server2.xxx.net/21 is empty". If we
select one within 10 days, we're able to see the content and restore with
no issues. It doesn't seem to matter whether it's a full or incremental
backup, or whether it's filled or not filled.

The directory reported as "empty" is on the server. Permissions are fine,
as far as I can tell.


Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-30 Thread Alexander Kobel

Hi,

On 2018-07-28 20:04, Guillermo Rozas wrote:
Agreed, that is my situation. I'm reasonably sure of the system (UPS, 
Debian stable, ext4), but as my backups are relatively small, I can 
trade some extra hours of backup once in a while for the extra peace 
of mind.


IIUC, you want a way to check the integrity of the pool files on the 
server side.
BackupPC 3 used to have such a function, by re-checksumming and 
verifying some percentage of the pool during a nightly (can't remember 
the details, and I don't have the v3 docs available).


If you want to do this for yourself, it's pretty easy with a cronjob. 
Just compare, for all files in $topDir/pool/*/*/, their md5sum with the 
filename. Same = good, not the same = bad.
If your pool is compressed, pipe the compressed files in 
$topDir/cpool/*/*/ through pigz [1] (which, as opposed to gzip, can 
handle the headerless gz format used there), as in the following piece 
of bash:


  digest=$(pigz -dc $file | md5sum -b | cut -d' ' -f1)

Now, check if $digest == $file, and you have a sanity check. (It's 
slightly more annoying to find out where $file was referenced in case it 
is corrupted; but it's possible, and I recommend not to worry about that 
until it happens.)


Of course, you can easily scrub only a part of your pool, just choose 
how many subdirectories you want to process each night.
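
For instance, something like this (untested sketch; assumes the 
compressed-pool layout above and GNU date for %-d; the day-of-month 
slicing is just one arbitrary way to spread the work):

  #!/bin/bash
  # Scrub one slice of the cpool per night, cycling through the
  # top-level subdirectories so the whole pool is covered roughly
  # once a month.
  topDir=/var/lib/BackupPC
  day=$(date +%-d)   # day of month, no leading zero (GNU date)

  i=0
  for dir in "$topDir"/cpool/*/; do
      i=$((i + 1))
      # Only process the directories assigned to today's slot.
      [ $((i % 31)) -eq $((day % 31)) ] || continue
      for file in "$dir"*/*; do
          [ -f "$file" ] || continue
          digest=$(pigz -dc "$file" | md5sum -b | cut -d' ' -f1)
          [ "$digest" = "$(basename "$file")" ] || echo "POSSIBLY CORRUPT: $file"
      done
  done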



  1: https://zlib.net/pigz/


HTH,
Alex


