Re: [BackupPC-users] Wrong blocksize (!=2048) in rsync checksums for some files

2011-02-17 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 16:23:49 -0500 on Thursday, February 17, 
2011:
 > Jeffrey J. Kosowsky wrote at about 15:41:26 -0500 on Thursday, February 17, 
 > 2011:
 >  > I have been running my BackupPC_digestVerify.pl program to check the
 >  > rsync digests in my pool.
 >  > 
 >  > Looking through the 1/x/x/ tree, I found 3 new bad digests out of
 >  > about 36000 when using the default blocksize of 2048.
 >  > 
 >  > It turns out that those 3 digests have a blocksize !=2048 -- and
 >  > indeed the digests do verify if you use that blocksize.
 >  > These files have block size 2327 and 9906 (twice).
 >  > Note the file sizes are 99MB, 11MB, and 16MB.
 >  > 
 >  > This seems *weird* and *wrong* since I thought the blocksize was fixed
 >  > to 2048 according to the (default) parameters passed to rsync in the
 >  > config.pl file. Specifically,
 >  >   '--block-size=2048',
 >  > 
 >  > Any idea why rsync may be ignoring this and using a larger blocksize
 >  > for these files?
 > 
 > OK this is weird... the block size used is the *uncompressed* file
 > size divided by 10,000 (rounded to integer). 
 > 
 > This too is weird since the normal rsync algorithm uses the rounded
 > sqrt of the (uncompressed) file length for the blocksize (as long as
 > it is >700 and < MAX_BLOCK_SIZE which I think may be 16,384).
 > 
 > So what is going on here and why is rsync neither using the
 > --block-size=2048 value nor the heuristic sqrt(filesize) number?
 > 

OK - I see some code in RsyncDigest.pm that seems to set the
blocksize to:
  defaultBlkSize    if filesize/10000 < defaultBlkSize
  filesize/10000    otherwise
  16384             if filesize/10000 > 16384
where it seems that defaultBlkSize = 700

Not sure why filesize/10000 is chosen, though, rather than
sqrt(filesize) as per the regular rsync algorithm heuristic.
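
For concreteness, here is a minimal Perl sketch of that clamping logic as I
read it (my paraphrase, not the actual RsyncDigest.pm code):

  # Blocksize as RsyncDigest.pm appears to compute it: uncompressed
  # file size divided by 10,000, clamped to [defaultBlkSize, 16384].
  sub digest_blocksize {
      my ($filesize) = @_;
      my $defaultBlkSize = 700;
      my $blkSize = int($filesize / 10000);
      $blkSize = $defaultBlkSize if $blkSize < $defaultBlkSize;
      $blkSize = 16384           if $blkSize > 16384;
      return $blkSize;
  }

That would at least be consistent with the 9906 I saw on the ~99MB file.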

Also, I'm confused about how this reconciles with the rsync parameter
that would seemingly force the block size to 2048. And indeed nearly
all the cpool files do have a blocksize of 2048.

Now, since the appended rsync digest doesn't record the blocksize (only
the number of blocks), how does BackupPC on the next round know whether
the blocksize is 2048 or the one set by the above heuristic? And if
BackupPC does not know which, then it would seem that the rsync
checksum is not going to be helpful.
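
In case anyone else is verifying their pool: given the above, a checker
apparently has to try both candidates for each file. A hypothetical helper,
using the same assumed constants:

  # Candidate blocksizes to try when verifying a cpool digest: the
  # forced --block-size=2048 value, then the clamped filesize/10000.
  sub candidate_blocksizes {
      my ($filesize) = @_;
      my $bs = int($filesize / 10000);
      $bs = 700   if $bs < 700;
      $bs = 16384 if $bs > 16384;
      return (2048, $bs);
  }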

In particular, if rsync is given the rsync arg of --block-size=2048,
then won't cpool files with a blocksize != 2048 cause rsync to waste
time trying to align blocks based on incompatible block sizes?

So, either I am missing something here (very likely) or something is
broken...

And again, this blocksize != 2048 seems to only affect a *small*
fraction of all the files with a rsync digest (maybe about 1-2 per
1000 files with digests).



[BackupPC-users] TopDir host override to back one host to alt volume

2011-02-17 Thread unclecameron
I want to put different hosts on different mounted volumes, since I have an OS
partition size limitation (without GPT) of 2TB, which is not large enough to
back up all my hosts. I have tried creating a similar directory structure on
the new volume and symlinking the host folder, but it fails with a hardlink
error; and when I put an alternate TopDir directive in
somehostforthenewvolume.pl, that doesn't work either. What should I be doing?
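
For clarity, here is roughly what I tried for the symlink approach (the paths
are made up for illustration; my real host and volume names differ):

  mkdir -p /mnt/newvol/backuppc/pc
  mv /var/lib/backuppc/pc/bighost /mnt/newvol/backuppc/pc/
  ln -s /mnt/newvol/backuppc/pc/bighost /var/lib/backuppc/pc/bighost

That's when backups started failing with the hardlink error; I assume this is
because hardlinks can't cross filesystems and the pool stays on the old volume.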






Re: [BackupPC-users] Wrong blocksize (!=2048) in rsync checksums for some files

2011-02-17 Thread Jeffrey J. Kosowsky
Jeffrey J. Kosowsky wrote at about 15:41:26 -0500 on Thursday, February 17, 
2011:
 > I have been running my BackupPC_digestVerify.pl program to check the
 > rsync digests in my pool.
 > 
 > Looking through the 1/x/x/ tree, I found 3 new bad digests out of
 > about 36000 when using the default blocksize of 2048.
 > 
 > It turns out that those 3 digests have a blocksize !=2048 -- and
 > indeed the digests do verify if you use that blocksize.
 > These files have block size 2327 and 9906 (twice).
 > Note the file sizes are 99MB, 11MB, and 16MB.
 > 
 > This seems *weird* and *wrong* since I thought the blocksize was fixed
 > to 2048 according to the (default) parameters passed to rsync in the
 > config.pl file. Specifically,
 >   '--block-size=2048',
 > 
 > Any idea why rsync may be ignoring this and using a larger blocksize
 > for these files?

OK this is weird... the block size used is the *uncompressed* file
size divided by 10,000 (rounded to integer). 

This too is weird since the normal rsync algorithm uses the rounded
sqrt of the (uncompressed) file length for the blocksize (as long as
it is >700 and < MAX_BLOCK_SIZE which I think may be 16,384).

So what is going on here and why is rsync neither using the
--block-size=2048 value nor the heuristic sqrt(filesize) number?

[Note: my numbers in my original post were wrong in that the actual
blocksizes weren't 9906 twice, but rather 9906 and 4081.]
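
For reference, stock rsync's heuristic as I understand it would look
something like this (a sketch only; the clamping constants are from memory,
so treat them as assumptions):

  # Stock rsync default blocksize: rounded sqrt of the file length,
  # clamped between 700 and MAX_BLOCK_SIZE (16384, I believe).
  sub rsync_default_blocksize {
      my ($len) = @_;
      my $bs = int(sqrt($len));
      $bs = 700   if $bs < 700;
      $bs = 16384 if $bs > 16384;
      return $bs;
  }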



[BackupPC-users] Wrong blocksize (!=2048) in rsync checksums for some files

2011-02-17 Thread Jeffrey J. Kosowsky
I have been running my BackupPC_digestVerify.pl program to check the
rsync digests in my pool.

Looking through the 1/x/x/ tree, I found 3 new bad digests out of
about 36000 when using the default blocksize of 2048.

It turns out that those 3 digests have a blocksize !=2048 -- and
indeed the digests do verify if you use that blocksize.
These files have block size 2327 and 9906 (twice).
Note the file sizes are 99MB, 11MB, and 16MB.

This seems *weird* and *wrong* since I thought the blocksize was fixed
to 2048 according to the (default) parameters passed to rsync in the
config.pl file. Specifically,
  '--block-size=2048',

Any idea why rsync may be ignoring this and using a larger blocksize
for these files?



Re: [BackupPC-users] Imported local backups, now remote ones in wrong directory

2011-02-17 Thread Boniforti Flavio
Hello Christian,
 
I'll try to explain what I do to import them. Also, thanks for your
thoughts; I'll apply them right now.
 
I'm using the rsync-over-ssh method for backing up WAN hosts, and it works
almost flawlessly: the only troubles arise when *huge* changes happen on the
remote server.
 
To import the data I do as follows:
 
1. I put the data from my USB HDD into a folder on my backuppc server
   (you must have enough free space, of course!);
2. I change /etc/rsyncd.conf, configuring the exact number and type of
   modules I need to back up from the remote server, pointing each one to
   the correct path in the LOCAL copy (see the sketch below);
3. I start rsync manually:
   rsync -v --daemon --config=/etc/rsyncd.conf --no-detach
4. in the BackupPC web GUI I simply use the defaults:
   RsyncdClientPort [873]
   DumpPreShareCmd [empty]
5. finally, I start the first full backup *manually* from the web interface.
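
For step 2, a hypothetical module in /etc/rsyncd.conf would look something
like this (the module name and path are invented for the example):

  [Public]
      path = /srv/import/customer10/Public
      read only = yes
      uid = backuppc
      gid = backuppc

That is, one module per share, each pointing at the local copy of that
share's data.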
 
After this backup ends successfully, I simply stop rsync, delete the data
from the temporary location, set up the correct RsyncdClientPort (as the
connection is tunneled I use other ports, like 8883, 8884 and so on,
different from remote host to remote host), and set my DumpPreShareCmd to
start the ssh tunnel.
 
That's it, but if you need more depth I can help!
 
Ciao,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch  
E-mail: fla...@piramide.ch 




From: Christian Völker [mailto:chrisc...@knebb.de]
Sent: Thursday, February 17, 2011 4:26 PM
To: backuppc-users@lists.sourceforge.net
Subject: Re: [BackupPC-users] Imported local backups, now remote ones in wrong directory


Hi Flavio,


> Now, what happened on my last duty is that I "imported" the data into
> the wrong directory level. To be explicit:
>
> I have it in
>
> /var/lib/backuppc/pc/customer10/0/fPublic/fPublic
>
> but it should be in
>
> /var/lib/backuppc/pc/customer10/0/fPublic
>
> So, what could I do now to avoid re-transferring the data?
> Could I simply mv the directory "one level up"?
> Or should I change some more?
>

Just from my thoughts:
You should be good moving it up one level. As "additional" files/folders
would be removed by any following backup, this should not be an issue.

Just one question of my own: what backup type do you use? I have a
similar issue (slow WAN link) - can I just copy the files over to the
local directory and that's it? Is this your "import"?

Greetings

Christian




Re: [BackupPC-users] Imported local backups, now remote ones in wrong directory

2011-02-17 Thread Christian Völker
Hi Flavio,


> > Now, what happened on my last duty is that I "imported" the data into
> > the wrong directory level. To be explicit:
> >
> > I have it in
> >
> > /var/lib/backuppc/pc/customer10/0/fPublic/fPublic
> >
> > but it should be in
> >
> > /var/lib/backuppc/pc/customer10/0/fPublic
> >
> > So, what could I do now to avoid re-transferring the data?
> > Could I simply mv the directory "one level up"?
> > Or should I change some more?
> >
Just from my thoughts:
You should be good moving it up one level. As "additional" files/folders
would be removed by any following backup, this should not be an issue.
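
For example, using the paths from your message (fPublic can't be moved onto
itself directly, hence the rename dance; this assumes the extra level
contains nothing else):

  cd /var/lib/backuppc/pc/customer10/0
  mv fPublic fPublic.tmp
  mv fPublic.tmp/fPublic fPublic
  rmdir fPublic.tmp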

Just one question of my own: what backup type do you use? I have a
similar issue (slow WAN link) - can I just copy the files over to the
local directory and that's it? Is this your "import"?

Greetings

Christian




[BackupPC-users] Imported local backups, now remote ones in wrong directory

2011-02-17 Thread Boniforti Flavio
Hello list.
I'll try to explain my situation and what happened.

I'm doing remote backups of remote servers, but as one can imagine, the
*first* backup is not being done remotely: too many GBs of data would
take ages to come across the internet.
Thus I pick up the data from the remote server on a USB HDD and take it
to my backuppc site. Here I "import" those directories and files
"locally" and manually. When I subsequently start the remote backups, I
already have the correct data structure on the backuppc server and thus
it works like a charm.

Now, what happened on my last duty is that I "imported" the data into
the wrong directory level. To be explicit:

I have it in

/var/lib/backuppc/pc/customer10/0/fPublic/fPublic

but it should be in

/var/lib/backuppc/pc/customer10/0/fPublic

So, what could I do now to avoid re-transferring the data?
Could I simply mv the directory "one level up"?
Or should I change some more?

Thanks for any advice...

Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 



[BackupPC-users] Distributing user configs from a central host?

2011-02-17 Thread Robin Lee Powell
I have a central server (it happens to be the puppetmaster) that has
various users on it.  I would like to copy their information (name, uid,
password, .bashrc, etc.) out to all my other hosts, but I want to let the
users change their stuff on that host, so I don't want to just stick it
in puppet.

My inclination is to just make a script that runs through the passwd
file, generates puppet instructions, and copies the user files in
question into a place in the puppetmaster directories.
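
Something like this, roughly (a sketch only; the UID cutoff and the
generated resource attributes are just illustrative):

  # Walk the passwd database and emit a puppet user resource per account.
  while (my @pw = getpwent()) {
      my ($name, undef, $uid, undef, undef, undef, undef, $home, $shell) = @pw;
      next if $uid < 1000;    # skip system accounts (assumption)
      print "user { '$name':\n",
            "  ensure => present,\n",
            "  uid    => $uid,\n",
            "  home   => '$home',\n",
            "  shell  => '$shell',\n",
            "}\n\n";
  }
  endpwent();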

Is there a more-idiomatic way to do that?

-Robin

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] Distributing user configs from a central host?

2011-02-17 Thread Robin Lee Powell
That was *not* sent to the right mailing list.  Sorry.

-Robin

On Thu, Feb 17, 2011 at 06:34:58AM -0800, Robin Lee Powell wrote:
> I have a central server, that happens to be the puppetmaster, that
> has various users on it.  I would like to copy out their information
> (name, uid, password, .bashrc, etc) to all my other hosts, but I
> want to let the users change their stuff on that host, so I don't
> want to just stick it in puppet.
> 
> My inclination is to just make a script that runs through the passwd
> file and generates puppet instructions out, and also copies the user
> files in question into a place in the puppetmaster directories.
> 
> Is there a more-idiomatic way to do that?
> 
> -Robin
> 
> -- 
> http://singinst.org/ :  Our last, best hope for a fantastic future.
> Lojban (http://www.lojban.org/): The language in which "this parrot
> is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
> is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/
> 

-- 
http://singinst.org/ :  Our last, best hope for a fantastic future.
Lojban (http://www.lojban.org/): The language in which "this parrot
is dead" is "ti poi spitaki cu morsi", but "this sentence is false"
is "na nei".   My personal page: http://www.digitalkingdom.org/rlp/



Re: [BackupPC-users] One more time

2011-02-17 Thread Ryan Blake
David,

Did you make sure you have File and Printer Sharing enabled on the computer
(right-click on your network device, choose Properties, and look for File and
Printer Sharing to be checked) and that the firewall is open to allow traffic
(you may even want to temporarily disable it when doing the testing you did
earlier)?

Also, I am guessing the dwilliams account is not part of a domain at work?
If it is, you'll need to use username=domain\\username (note the \\). Your
other option would be to create a local admin account on your workstation
for backup and recovery.

You should also right-click on your "C" drive and ensure that under the
Sharing tab the C$ share shows up.  You can do a more thorough check if you
go into your Control Panel, select "Small Icons" (top left of the window),
and find "Folder Options."  From there, hit the "View" tab and, under
"Advanced Settings," uncheck Simple file sharing.  Now go back and check for
that C$ share.  It would also be a good time to see what permissions are set
on it.
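
For example, the domain form of the mount test from the earlier mail would
be something like the following (the host, share, and domain name are
invented for illustration):

  mount -t cifs //192.168.15.50/c$ /mnt/test/ -o username=dwilliams,domain=MYDOMAIN

(mount.cifs also accepts the domain as part of the username, e.g.
username=MYDOMAIN/dwilliams.)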

Those few steps should at least get you started towards finding your solution.

~Ryan


From: David Williams 
Sent: Thursday, February 17, 2011 8:30 AM
To: General list for user discussion, questions and support 
Subject: Re: [BackupPC-users] One more time


Can anyone help me with the samba issue?
Will also try another group, thanks.




David Williams
Check out our WebOS mobile phone app for the Palm Pre and Pixi:
   Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.





On 2/15/2011 9:36 AM, David Williams wrote: 
  Ryan,

  Thanks for the info.  Tried what you suggested and here is what happened.

  smbclient -L laptop1 -U dwilliams
  Enter dwilliams's password:
  session setup failed: SUCCESS - 0

  mount -t cifs //192.168.15.50/c$ /mnt/test/ -o username=dwilliams
  Password:
  mount error(13): Permission denied

  dwilliams is setup as an admin user on laptop1.

  smbfs did not work and I got the following message: "smbfs is deprecated and 
will be removed from the 2.6.27 kernel. Please migrate to cifs"

  Perhaps this is a Windows 7 security issue, not sure.  I know several years 
ago I was using SMB to back up an XP machine and didn't have these issues.
  I'm not familiar with rsync so would have to look into that.


--

  David Williams
  Check out our WebOS mobile phone app for the Palm Pre and Pixi:
 Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.





  On 2/15/2011 9:29 AM, Ryan Blake wrote: 
David,

Being a "mobile warrior," I believe you would be best served using rsync 
(cwRsync for Windows) instead of smb.  You also need to keep in mind that 
your system will need to transfer all files across the pipe to check their 
checksums.  If you are connected via wireless when you are home, I would not 
suggest it, unless your drive is practically unused.

With that said, if you still would prefer to use smb, I would suggest doing 
some basic troubleshooting from another PC (if possible) by simply connecting 
to the \\laptop1 device (you will want to connect to the c$ share).  If you can 
connect successfully or if you don't have another device to test, I would 
suggest running the following command on your linux box:

smbclient -L ComputerName -U Administrator*

If the system comes back with your open network shares, then good.  If it 
does not, then something is wrong and you'll need to look into the error 
message it gives back.  If it does work, you can try one last thing, actually 
mounting the drive to your server to ensure that works.

1. Create a folder in /mnt using mkdir /mnt/test
2. Run this command: mount -t cifs //IP.AddressOrHostName.Of.Computer/c$ 
/mnt/test/ -o username=Administrator*
3. If that does not work, exchange cifs for smbfs: mount -t smbfs 
//IP.AddressOrHostName.Of.Computer/c$ /mnt/test/ -o username=Administrator*

You should be able to cd to /mnt/test and do an ls -aL to see all the data 
in the root of your C: on your laptop.  If that works, it *should* work 
assuming your credentials are correct in BackupPC's configs.

P.S. Once you test mounting, you'll probably want to unmount with umount 
/mnt/test

* [Or whatever username you want to use to connect]

~Ryan


From: David Williams 
Sent: Tuesday, February 15, 2011 8:49 AM
To: General list for user discussion, questions and support 
Subject: Re: [BackupPC-users] One more time


On 2/7/2011 3:49 PM, Tyler J. Wagner wrote: 
On Mon, 2011-02-07 at 12:39 -0500, Ryan Blake wrote:
However, if that's not an option for whatever reason, the only other option 
would be to ensure that your dhcpd service is properly connected/integrated 
with bind [named] (assuming you are using these).
That's what I do at my office. However, I use dnsmasq at home, which
provides both DNS and DHCP. It automatically integrates them, so when
you supply your hostname with the DHCP request (as most clients do), it
gets added to the local DNS domain.

[BackupPC-users] blackout-period

2011-02-17 Thread Levkovich Andrew
Hi all!
I need some help with this question.

So, I have added two blackout periods:

$Conf{WakeupSchedule} = [  '1',  '2',  '3',  '4',  '5',  '6',  '7',  '8',  
'17',  '18',  '19',  '20',  '21',  '22',  '23' ];


$Conf{BlackoutPeriods} = [
  {
'hourEnd' => '17.5',
'weekDays' => [
  '1'
],
'hourBegin' => '9'
  },
  {
'hourEnd' => '16.5',
'weekDays' => [
  '2',
  '3',
  '4',
  '5'
],
'hourBegin' => '9'
  }
];

This should mean that backups will not run on Monday from 9:00 to 17:30, and
on Tuesday, Wednesday, Thursday, and Friday from 9:00 to 17:00, right?
But on Monday it ran a backup at 17:00, and on Thursday it didn't run at
17:00. I can't understand it.
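
For what it's worth, here is my understanding of the blackout test as a
sketch (not the actual BackupPC code; weekDays counts 0 = Sunday, and hours
are fractional, so 17.5 = 17:30):

  # Is a wakeup at hour $h on weekday $wd inside a blackout period?
  sub in_blackout {
      my ($periods, $wd, $h) = @_;
      for my $p (@$periods) {
          next unless grep { $_ == $wd } @{ $p->{weekDays} };
          return 1 if $h >= $p->{hourBegin} && $h <= $p->{hourEnd};
      }
      return 0;
  }

By that reading, Monday (weekday 1) at 17:00 should still be inside the
9-17.5 period, and Tuesday through Friday at 17:00 should be outside the
9-16.5 one - the opposite of what I'm seeing.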



Re: [BackupPC-users] One more time

2011-02-17 Thread David Williams


  
  
Can anyone help me with the samba issue?
Will also try another group, thanks.

David Williams
Check out our WebOS mobile phone app for the Palm Pre and Pixi:
   Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.


On 2/15/2011 9:36 AM, David Williams wrote:

  Ryan,

  Thanks for the info.  Tried what you suggested and here is what happened.

  smbclient -L laptop1 -U dwilliams
  Enter dwilliams's password:
  session setup failed: SUCCESS - 0

  mount -t cifs //192.168.15.50/c$ /mnt/test/ -o username=dwilliams
  Password:
  mount error(13): Permission denied

  dwilliams is setup as an admin user on laptop1.

  smbfs did not work and I got the following message: "smbfs is deprecated
  and will be removed from the 2.6.27 kernel. Please migrate to cifs"

  Perhaps this is a Windows 7 security issue, not sure.  I know several
  years ago I was using SMB to back up an XP machine and didn't have these
  issues.  I'm not familiar with rsync so would have to look into that.

  David Williams
  Check out our WebOS mobile phone app for the Palm Pre and Pixi:
     Golf Caddie | Golf Caddie Forum | Golf Caddie FAQ by DTW-Consulting, Inc.


  On 2/15/2011 9:29 AM, Ryan Blake wrote:

    David,

    Being a "mobile warrior," I believe you would be best served using
    rsync (cwRsync for Windows) instead of smb.  You also need to keep in
    mind that your system will need to transfer all files across the pipe
    to check their checksums.  If you are connected via wireless when you
    are home, I would not suggest it, unless your drive is practically
    unused.

    With that said, if you still would prefer to use smb, I would suggest
    doing some basic troubleshooting from another PC (if possible) by
    simply connecting to the \\laptop1 device (you will want to connect to
    the c$ share).  If you can connect successfully or if you don't have
    another device to test, I would suggest running the following command
    on your linux box:

    smbclient -L ComputerName -U Administrator*

    If the system comes back with your open network shares, then good.  If
    it does not, then something is wrong and you'll need to look into the
    error message it gives back.  If it does work, you can try one last
    thing: actually mounting the drive on your server to ensure that works.

    1. Create a folder in /mnt using mkdir /mnt/test
    2. Run this command: mount -t cifs //IP.AddressOrHostName.Of.Computer/c$
       /mnt/test/ -o username=Administrator*
    3. If that does not work, exchange cifs for smbfs: mount -t smbfs
       //IP.AddressOrHostName.Of.Computer/c$ /mnt/test/ -o
       username=Administrator*

    You should be able to cd to /mnt/test and do an ls -aL to see all the
    data in the root of your C: on your laptop.  If that works, it *should*
    work assuming your credentials are correct in BackupPC's configs.

    P.S. Once you test mounting, you'll probably want to unmount with
    umount /mnt/test

    * [Or whatever username you want to use to connect]

    ~Ryan


    From: David Williams
    Sent: Tuesday, February 15, 2011 8:49 AM
    To: General list for user discussion, questions and support
    Subject: Re: [BackupPC-users] One more time

    On 2/7/2011 3:49 PM, Tyler J. Wagner wrote:
    > On Mon, 2011-02-07 at 12:39 -0500, Ryan Blake wrote:
    > > However, if that's not an option for whatever reason, the only other
    > > option would be to ensure that your dhcpd service is properly
    > > connected/integrated with bind [named] (assuming you are using these).
    >
    > That's what I do at my office. However, I use dnsmasq at home, which
    > provides both DNS and DHCP. It automatically integrates them, so when
    > you supply your hostname with the DHCP request (as most clients do), it
    > gets added to the local DNS domain