Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-14 Thread James Gray
On 13/02/2010, at 11:42 AM, Ken Foskey wrote:
 I use a simpler approach and to some extent more flexible.
 
 I create a script in a known directory,  for
 example /usr/sbin/run_copy.sh.  I then only authorise the admin group to
 run  only that specific script.  This keeps complicated command lines to
 a minimum.
 
 The run_copy command might for example do a tar of the specified files.
 You can then pipe that tar across the link to the recipient system. I
 would write another script to untar into a working set, verify the copy
 somehow then install it using another script.
 
 visudo add this line 
 
 #  allow admin group to run the rsync script
 %admin ALL=NOPASSWD: /usr/sbin/run_copy.sh

Hi Ken,

Thanks for the suggestion.  Unfortunately this incurs the penalty of copying 
everything, every time (unless I missed something).  Hence the desire to use 
rsync.  I guess if I didn't do anything special (like encrypting the tarball), 
rsync could still handle the deltas with a certain degree of efficiency, 
but it would mean doing an update on the tar file each time.  Total data 
requiring synchronisation is approx 12GB, every 15-30min...that's a heck of a 
lot of I/O and network bandwidth if rsync doesn't do a stellar job.  I also 
noticed a --super option in the rsync manual, but I don't really understand 
how it works or what it achieves.

On the upside, I've had an e-mail discussion with the notoriously suspicious 
Security Team and they have agreed (in principle) to relax the no-remote-root-login 
rule by allowing the use of PermitRootLogin forced-commands-only in 
sshd_config, coupled with the method described here: 
http://troy.jdmz.net/rsync/index.html - sanity and sensibility prevail.
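For the curious, the forced-commands approach usually boils down to a small validator script named in the key's authorized_keys entry. A minimal sketch (the script name, key, and exact rsync server string below are illustrative, not from the linked page):

```shell
#!/bin/sh
# Sketch of the forced-command idea.  sshd_config gets:
#   PermitRootLogin forced-commands-only
# and ~root/.ssh/authorized_keys pins the backup key to a validator:
#   command="/root/validate-rsync.sh",no-pty,no-port-forwarding ssh-rsa AAAA... backup@prod
# The validator inspects $SSH_ORIGINAL_COMMAND and refuses anything that
# is not the rsync server invocation.

validate() {
    case "$1" in
        rsync\ --server*) echo allowed ;;   # the server-side command rsync-over-ssh issues
        *)                echo denied  ;;
    esac
}

validate "rsync --server --sender -ar . /vital_data/"   # allowed
validate "rm -rf /"                                     # denied
```

In the real script the "allowed" branch would `exec $SSH_ORIGINAL_COMMAND`; everything else exits non-zero, so the key is useless for anything but the one sync job.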

Now to go through the motions of change control and security approval.  Ugh.  
Why is nothing easy? :(

Thanks for all the input people.

Cheers,

James

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-14 Thread Ken Foskey
On Sun, 2010-02-14 at 20:04 +1100, James Gray wrote:
 [snip]

You can still use rsync - you just put the rsync command in the script, as
per above.

Ta
Ken



Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-14 Thread Peter Rundle

Has anyone suggested using setuid?

Why don't you write a program to do the backup?  Set ownership to root, group to backup, chmod 770, and then set the setuid bit on the program.
You can then log in remotely as a member of the backup group and execute the program with root privileges to do just the things you put in the
code.  If this isn't acceptable to the security team then you'd better also disable the passwd program.
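Sketched on a throwaway file (the path is illustrative; the chown to root:backup needs root so it is shown as a comment, and note that modern Linux only honours setuid on compiled binaries, not scripts):

```shell
# Stand-in file for the backup program (illustrative path).
install -m 0755 /dev/null /tmp/dr_backup_demo

# In production, as root:  chown root:backup /usr/local/sbin/dr_backup
chmod 4770 /tmp/dr_backup_demo      # mode 770 plus the setuid bit

stat -c '%a' /tmp/dr_backup_demo    # GNU stat prints 4770
```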


Just a thought.

Pete




Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-14 Thread james
On Monday 15 February 2010 09:00:05 slug-requ...@slug.org.au wrote:
 Has anyone suggested using setuid?
 
 Why don't you write a program to do the backup. Set ownership root, group
  to backup, chmod 770 and then setuid on the program  and you can remote
  login as the backup group and execute the program with root privileges
  to do just the things you put in the code. If this isn't acceptable to the
  security team then you'd better also disable the password program.
 
 Just a thought.

setuid does not work for this on modern distros - PAM limits apply.  See 
http://www.mythtv.org/docs/mythtv-HOWTO-5.html#ss5.4 (enabling real-time 
priority).

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-13 Thread Ken Foskey
On Fri, 2010-02-12 at 16:22 +1100, Jeremy Visser wrote:
 On Fri, 2010-02-12 at 15:37 +1100, Ken Foskey wrote:
  I have done this using sudo.  I write a script on the called machine,
  sign on as my user and run the script using sudo which I authorise (very
  specifically) to root without password.
 
 Agreed. Not only that, but you can restrict sudo to only be able to run
 certain commands -- rsync being the case in point.
 
 Something like the following oughta do the trick (assuming you have a
 group called 'backup' that the backup user is in — remove the % to make
 it refer to a user instead):
 
 %backup ALL = NOPASSWD: /usr/bin/rsync -ar server1:/vital_data/ /vital_data/
 
 (The above should enforce that rsync is only called with those
 particular parameters, if I read the sudoers man page correctly.)

I use a simpler, and to some extent more flexible, approach.

I create a script in a known directory, for
example /usr/sbin/run_copy.sh.  I then authorise the admin group to
run only that specific script.  This keeps complicated command lines to
a minimum.

The run_copy command might, for example, tar up the specified files.
You can then pipe that tar across the link to the recipient system.  I
would write another script to untar into a working set, verify the copy
somehow, then install it using another script.

Run visudo and add this line:

#  allow admin group to run the copy script
%admin ALL=NOPASSWD: /usr/sbin/run_copy.sh
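A minimal sketch of what such a run_copy.sh might look like, using local directories to stand in for the two hosts (all paths and the receive-side script name are illustrative; the commented ssh line shows where the real link would go):

```shell
#!/bin/sh
# Hypothetical /usr/sbin/run_copy.sh: tar the specified files, ship them
# across the link, then verify the copy before installing it.
set -eu

SRC="/tmp/run_copy_demo/src"      # files to replicate (illustrative)
STAGE="/tmp/run_copy_demo/stage"  # working set on the recipient (illustrative)
mkdir -p "$SRC" "$STAGE"
echo "hello" > "$SRC/file.txt"

# In production the second tar would run on the DR host, e.g.:
#   tar -C "$SRC" -cf - . | ssh dr-host 'sudo /usr/sbin/receive_copy.sh'
tar -C "$SRC" -cf - . | tar -C "$STAGE" -xf -

# Verify the copy before installing it into place.
if cmp -s "$SRC/file.txt" "$STAGE/file.txt"; then
    echo "copy verified"
fi
```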








Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 4:19 PM, Amos Shapira wrote:

 On 12 February 2010 15:37, Ken Foskey kfos...@tpg.com.au wrote:
 
 On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
 need to sync a number of files between these servers and some require
 elevated (root) privileges at *both* ends.  Here lies the problem; we
 don't allow remote root logins (via SSH or any other method
 either...sudo, console or nadda).
 
 I have done this using sudo.  I write a script on the called machine,
 sign on as my user and run the script using sudo which I authorise (very
 specifically) to root without password.
 
 He says that he can't use sudo.

Actually, sudo and using a physical console are the only ways to get root ;)  
Sorry if that was a little unclear in the original.  Of course there are other 
ways of getting root, but I'm talking about normal operations here, not 
nefarious black-hats roaming my network!

 However Google'ing for offline rsync reminded me of rdiff - here is
 a use case which sounds similar to yours:
 http://users.softlab.ece.ntua.gr/~ttsiod/Offline-rsync.html

Nice article - thanks.

Cheers,

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 3:37 PM, Ken Foskey wrote:

 On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
 Hi All,
 
 I've googled this one for a while and can't find any examples of people 
 doing *system* file sync with rsync.  So I thought I'd throw it out to the 
 collective wisdom of SLUG.  Here's the full story.
 
 We have a SuSE-based production application/DB server pair and a 
 corresponding pair in a disaster recovery location (offsite, bandwidth 
 consumption needs to be minimised).  We need to sync a number of files 
 between these servers and some require elevated (root) privileges at *both* 
 ends.  Here lies the problem; we don't allow remote root logins (via SSH or 
 any other method either...sudo, console or nadda).
 
 I want to use rsync because of its ability to transfer 
 differential/incremental changes and thus bandwidth friendly, however any 
 other tool would be fine too.  However, due to the inability for root to 
 login directly, how the heck do I synchronise particular files in privileged 
 locations (like /etc/shadow)?  I can start whatever services I need at 
 either end (like an rsync server) but the main thing is all files maintain 
 the same owner/group/mode at each end.
 
 Ideas?
 
 I have done this using sudo.  I write a script on the called machine,
 sign on as my user and run the script using sudo which I authorise (very
 specifically) to root without password.


Hi Ken,

Don't suppose you'd be willing to share that little gem, would you?  I hadn't 
considered this option, but conceptually it's not too hard to get my head 
around.  However, the devil is always in the detail, so I was hoping you might 
be able to share some details ;)

Cheers,

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 4:18 PM, Daniel Pittman wrote:

 James Gray ja...@gray.net.au writes:
 
 I've googled this one for a while and can't find any examples of people
 doing *system* file sync with rsync.  So I thought I'd throw it out to the
 collective wisdom of SLUG.  Here's the full story.
 
 We have a SuSE-based production application/DB server pair and a
 corresponding pair in a disaster recovery location (offsite, bandwidth
 consumption needs to be minimised).  We need to sync a number of files
 between these servers and some require elevated (root) privileges at *both*
 ends.  Here lies the problem; we don't allow remote root logins (via SSH or
 any other method either...sudo, console or nadda).
 
 I want to use rsync because of its ability to transfer
 differential/incremental changes and thus bandwidth friendly, however any
 other tool would be fine too.  However, due to the inability for root to
 login directly, how the heck do I synchronise particular files in privileged
 locations (like /etc/shadow)?
 
 ...if you allow this tool to write to /etc/shadow[1], just allow root logins:
 you have added *nothing* by forbidding them.  Why?  An attacker with access to
 the rsync tool can add an additional root user with a known password anyhow,
 so additional security doesn't actually change the problem space at all.

I'm not going to get into the "allow root or not" holy war.  This is a big 
multi-billion-dollar company and the security team have decreed no direct 
root logins.  End of story; it's not an option.  Whether I agree with them or 
not is irrelevant; it's just not an acceptable solution in the environment I 
work in.

 I can start whatever services I need at either end (like an rsync server)
 but the main thing is all files maintain the same owner/group/mode at each
 end.
 
 Ideas?
 
 Just use root, if you want to go down this path.

See above.

 Alternately, I would suggest using something like puppet which is designed to
 do system management like this in an automated fashion; it is a completely
 different approach, but one that will probably solve your underlying problem
 without needing to change your security model so much.

I've used bconfig before and was moving towards puppet when that employer 
decided to commit corporate suicide and ended my tenure (along with everyone 
else's!) before I really got stuck into it.  This will probably be the 
long-term solution, but for now we simply need something to make the auditors go 
away with happy thoughts.

 [1]  ...and, by implication, /etc/passwd, since the latter isn't much use
 without the former being updated too.

Yep - we also need to sync /etc/group and a bunch of other application-specific 
fru-fru like CIFS passwords for /etc/fstab that are stored in root-readable (and 
no-one else) files too.  The reason for highlighting /etc/shadow is that, 
unlike its /etc/passwd counterpart, it is only root-readable ;)  This is why using 
privileged accounts at both ends is necessary.

Thanks for the input.

Cheers,

James




Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 5:35 PM, james wrote:

 On Friday 12 February 2010 13:23:18 slug-requ...@slug.org.au wrote:
 On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
 
 need to sync a number of files between these servers and some require
 elevated (root) privileges at both ends.  Here lies the problem; we
 don't allow remote root logins (via SSH or any other method
 either...sudo, console or nadda).
 
 I have done this using sudo.  I write a script on the called machine,
 sign on as my user and run the script using sudo which I authorise (very
 specifically) to root without password.
 
 He says that he can't use sudo.
 
 However Google'ing for offline rsync reminded me of rdiff - here is
 a use case which sounds similar to yours:
 http://users.softlab.ece.ntua.gr/~ttsiod/Offline-rsync.html
 
 So you want root privilege without using any of the standard root-privilege-
 mechanisms
 Wow, he said scathingly, that deserves a prize.

Something was lost between my brain and the keyboard.  What I was trying 
(unsuccessfully) to indicate was: sudo is fine, as are direct root logins on the 
console.  Heck, "su -" is cool too if you have the root password.  We just can't 
allow direct root logins via network services (ssh etc.).

Sorry for the confusion.  I'll endeavour to imbibe a second (or third) coffee 
before attempting e-mail on Monday!  Heheh - it's been a long week ;)

Cheers,

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread Tony Sceats
I may have missed something, or maybe someone else has suggested this
already, but why not pull instead of push?

I.e., from the machine that is the backup, connect to the master server and
rsync that way:

  - this will mean that anything that's world-readable but only writable by
root won't be a problem (you can write locally, and read with a normal user)
  - anything that's readable only by root, well, you'd need root to back it
up; I don't think you can escape that.






On Fri, Feb 12, 2010 at 7:29 PM, James Gray ja...@gray.net.au wrote:


 [snip]



Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 7:38 PM, Tony Sceats wrote:

 I may have missed something, or maybe someone else has suggested this
 already, but why not pull instead of push?
 
 ie, from the machine that is the backup, connect to the master server and
 rsync that way
 
  - this will mean that anything that's world readable but only writable by
 root wont be a problem (you can write locally, and read with a normal user)
  - anything that's readable only by root, well, you'd need root to back it
 up, I don't think you can escape that.

Hi Tony,

THAT is exactly the problem, and why we need root at both ends (keep it clean 
people!).  I'm not fussed if I push some data and pull the rest, but stuff like 
/etc/shadow is a real pain (there are others, but this one is well known).  I'm 
thinking I might just use root to tar up the problem files (they aren't big), 
transfer them using an unprivileged account, then get root to unpack at the 
destination.  Obviously the tarball will need to be packed and dropped in a 
secure way at the destination (encrypted file using PKI or some such).  This 
would work, but it would be ugly :(
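That round trip can be sketched with a symmetric passphrase (all paths, the passphrase, and the sample file content are purely illustrative; the PKI variant would swap in public-key encryption such as gpg):

```shell
# Root packs and encrypts the sensitive files (illustrative paths).
mkdir -p /tmp/drsync/src /tmp/drsync/dst
echo "root:x:14650:0:99999:7:::" > /tmp/drsync/src/shadow.copy

tar -C /tmp/drsync/src -cf - . \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-only -out /tmp/drsync/payload.enc

# ...the unprivileged account ships payload.enc across the link...

# Root unpacks at the destination, preserving owner/group/mode from the tar.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-only -in /tmp/drsync/payload.enc \
  | tar -C /tmp/drsync/dst -xf -
```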

Eventually, the whole /etc/passwd and /etc/shadow problem will go away when we 
implement Likewise Enterprise to hook into our Active Directory (cough, hack, 
spit), which will manage all the USER accounts.  Administrators are so few, and 
so rarely turned over, that we can manage those through the normal *nix tools; 
and eventually puppet :)

*Sigh*.  I hate the audit-season :(  Deloitte, you suck.

Cheers,

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread Tony Sceats
lol, yes, that's the bit I missed :)

I guess ultimately you either have to relax the permissions on the files
(eg, add a new backup group, chgrp and chmod the files), or relax the
system access restrictions (eg, using sudo, as already suggested by Ken).

I wonder which would have larger implications.  I would expect that setting up
extremely limited sudo commands allows more flexibility in the sorts of
things you can do, as well as not being a pain to keep stable over upgrades
and installations.




On Fri, Feb 12, 2010 at 7:48 PM, James Gray ja...@gray.net.au wrote:


 [snip]



Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread Tony Sceats
Of course, running some sort of backup client/server application that
installs as root is also an option, as it will presumably have some sort of
secured access mechanism as part of the app (I hope, anyway ;)

although I don't actually know of one to recommend


On Fri, Feb 12, 2010 at 8:31 PM, Tony Sceats tony.sce...@gmail.com wrote:

 [snip]




Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 8:31 PM, Tony Sceats wrote:

 lol, yes, that's the bit I missed :)
 
 I guess ultimately you either have to relax the permissions on the files
 (eg, add a new backup group, chrgrp and chmod the files), or relax the
 system access restrictions (eg, using sudo, as already suggested by Ken)

sudo is fine, and I like the concept of Ken's suggestion.  Just need to flesh 
out some details, but conceptually, it sounds like a good approach.

 I wonder which would have larger implications.. I would expect setting up
 extremely limited sudo commands allows more flexibility in the sorts of
 things you can do as well as not being a pita to keep stable over upgrades
 and installations

Agreed.  Tweaking sudo can be done through the normal change management 
channels.  Relaxing network security (such as allowing direct root login via ssh) 
would involve an entire world of pain, starting with the security team.  Mind 
you, they have some rather odd ideas of what constitutes security.  So far, it 
seems, obscurity is just as good as security, as long as the auditors are happy 
(clueless imbeciles...all of them) and PCI compliance isn't affected.

DO NOT get me started on PCI compliance...<g>: "Hey, look at me, I'm PCI 
compliant!  I'm as thick as two short planks and read a security appliance 
catalogue once...you need two of everything in it!"  *slaps forehead*

Cheers,

James




Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread James Gray

On 12/02/2010, at 8:37 PM, Tony Sceats wrote:

 O of course running some sort of backup client/server application that
 installs as root is also an option, as it will presumably have some sort of
 secured access mechanisms as part of the app (I hope anyway ;)
 
 although I don't actually know one to recommend

IDEALLY, we'd just use NetApp SnapMirror between the sites to keep the SAN 
volumes in sync and leave the DR site powered down unless needed (the machines 
are virtualised anyway).  However, some bright spark decided to run this system 
on a non-standard (read: cheap and nasty) SAN that doesn't talk to NetApp.  
Another stellar decision from the top that leaves us poor peons scrambling 
to meet deadlines for audit requirements that wouldn't have existed if they'd 
LISTENED to those who knew better.

Bah - corporate ignorance!


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-12 Thread Rodolfo Martínez
Hi James,

Since remote root logins are not allowed, maybe you could consider backing
up the system to an external storage device.  I have attached a
mini-howto; I hope you find it useful.


MINI-HOWTO: backing up data to an encrypted storage device

Author: Rodolfo Martinez rmt...@gmail.com


Objectives
--
 * File system backups to an external storage device.
 * The storage device has to be encrypted.
 * Incremental backups.
 * 7-day retention period.


Hardware requirements
-
 * An external storage device, like a USB hard drive. The size will depend on
   how much information you want to back up and the retention period.


Software requirements
-
 * bash

 * rsync

 * cryptsetup-luks
   http://clemens.endorphin.org/
   http://code.google.com/p/cryptsetup/
   A utility for setting up encrypted filesystems using Device Mapper and the
   dm-crypt target. cryptsetup is usually part of any modern GNU/Linux
   distribution.


Limitations
---
 * For certain applications, like databases or anything else that manages its
   own I/O caching (O_DIRECT), a file system backup is NOT an option.


Initialize the storage device with LUKS support
---
I will assume that your external storage device was recognised as /dev/sdb. Of
course, you can configure udev to always assign the same device name; for
example, /dev/backup.

IMPORTANT: Use a good password, you know what I mean, something like,
   Mt9%I?!RnXE1_lL9O41j

NOTE: You can specify a specific cipher using the -c option; for example,
  -c aes-cbc-essiv:sha256

cryptsetup -y -h sha512 -s 256 luksFormat /dev/sdb


Open the external storage device

cryptsetup luksOpen /dev/sdb encrypted

You will need to type the password that you used in the previous step.

Now, there should be an encrypted block device in the /dev/mapper directory.


Format the encrypted device
---------------------------
You can put any file system on the encrypted device. I will use
ext3.

mkfs.ext3 /dev/mapper/encrypted


Mount the encrypted device
--------------------------
You can use any mount point for your encrypted device. I will use /mnt/encrypted.

mkdir /mnt/encrypted
mount /dev/mapper/encrypted /mnt/encrypted


Unmount and close the encrypted device
--------------------------------------
umount /mnt/encrypted
cryptsetup luksClose encrypted

NOTE: Make sure the encrypted device in the /dev/mapper directory is gone.


Automate the backups
--------------------
Save the script below as /root/bin/backup.sh and schedule it as a daily cron
job. Set the proper permissions and ownership on the script:

chown -R root:root /root/bin
chmod 700 /root/bin
chmod 700 /root/bin/backup.sh
chattr +i /root/bin/backup.sh

NOTE: If you don't want to have the password embedded in the script, then you
  will have to run the backups manually and type the password.

IMPORTANT: Make sure someone else knows the password in case you die.
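If embedding the passphrase is unpalatable, a LUKS key slot can instead hold a
root-only key file; a sketch (paths are illustrative):

```shell
# Generate a random key file, lock it down, and add it to a spare LUKS key slot
dd if=/dev/urandom of=/root/backup.key bs=512 count=1
chmod 400 /root/backup.key
cryptsetup luksAddKey /dev/sdb /root/backup.key

# The backup script can then open the device without echoing a password:
cryptsetup --key-file /root/backup.key luksOpen /dev/sdb encrypted
```

The original passphrase still works (it occupies its own key slot), so the key
file can be revoked later with luksRemoveKey without re-encrypting the disk.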


#!/bin/bash

ECHO=/bin/echo;
CRYPTSETUP=/sbin/cryptsetup;
DEV=/dev/sdb;    # the LUKS device initialised above (adjust to your system)
MOUNT=/bin/mount;
UMOUNT=/bin/umount;
RSYNC=/usr/bin/rsync;
SYNC=/bin/sync;
DATE=/bin/date;

function lastBackup() {
typeset -i lastBackup=$($DATE +%u)-1;

[ $lastBackup -eq 0 ] && lastBackup=7;

$ECHO $lastBackup;
}


# Open the encrypted device
$ECHO 'Mt9%I?!RnXE1_lL9O41j' | $CRYPTSETUP luksOpen $DEV encrypted;

# Mount the encrypted device
$MOUNT /dev/mapper/encrypted /mnt/encrypted;

# Backup
$RSYNC --archive \
   --partial \
   --delete \
   --delete-excluded \
   --exclude=*~ \
   --exclude=/dev \
   --exclude=/media \
   --exclude=/misc \
   --exclude=/mnt \
   --exclude=/proc \
   --exclude=/sys \
   --exclude=/tmp \
   --exclude=/var/cache \
   --exclude=/var/spool \
   --exclude=/var/tmp \
   --link-dest=../$(lastBackup) \
   / \
   /mnt/encrypted/$($DATE +%u);

# Sync before unmounting
$SYNC;

# Unmount the encrypted device
$UMOUNT /mnt/encrypted;

# Remove the encrypted device
$CRYPTSETUP luksClose encrypted;
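The --link-dest=../$(lastBackup) option makes each day's tree hard-link
unchanged files against the previous day's, giving cheap daily increments with
a rolling 7-day window. The weekday arithmetic in lastBackup() can be
sanity-checked with a quick sketch (Python used purely for illustration):

```python
def last_backup(today: int) -> int:
    """Previous ISO weekday (1=Mon .. 7=Sun), wrapping Monday back to Sunday."""
    prev = today - 1
    return 7 if prev == 0 else prev

# Monday's backup hard-links against Sunday's tree; every other day
# links against the day before.
assert last_backup(1) == 7
assert last_backup(4) == 3
```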




Rodolfo Martínez
Socio director

Aleux Mexico
www.aleux.com



On Thu, Feb 11, 2010 at 5:24 PM, James Gray ja...@gray.net.au wrote:
 Hi All,

 I've googled this one for a while and can't find any examples of people doing 
 *system* file sync with rsync.  So I thought I'd throw it out to the 
 collective wisdom of SLUG.  Here's the full story.

 We have a SuSE-based production application/DB server pair and a 
 corresponding pair in a disaster recovery location (offsite, bandwidth 
 consumption needs to be minimised).  We need to sync a number of files 
 between these servers and some require elevated (root) privileges at *both* 
 ends.  Here lies the problem; we don't allow remote root logins (via SSH or 
 any other method either...sudo, console or nada).

 I want to use rsync because of its ability to 

[SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread James Gray
Hi All,

I've googled this one for a while and can't find any examples of people doing 
*system* file sync with rsync.  So I thought I'd throw it out to the collective 
wisdom of SLUG.  Here's the full story.

We have a SuSE-based production application/DB server pair and a corresponding 
pair in a disaster recovery location (offsite, bandwidth consumption needs to 
be minimised).  We need to sync a number of files between these servers and 
some require elevated (root) privileges at *both* ends.  Herein lies the problem: 
we don't allow remote root logins (via SSH or any other method either...sudo, 
console or nada).

I want to use rsync because its ability to transfer differential/incremental 
changes makes it bandwidth-friendly, but any other tool would be fine too.  
However, due to the inability for root to log in 
directly, how the heck do I synchronise particular files in privileged 
locations (like /etc/shadow)?  I can start whatever services I need at either 
end (like an rsync server) but the main thing is all files maintain the same 
owner/group/mode at each end.

Ideas?

Thanks in advance,

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread Ken Foskey
On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
 Hi All,
 
 I've googled this one for a while and can't find any examples of people doing 
 *system* file sync with rsync.  So I thought I'd throw it out to the 
 collective wisdom of SLUG.  Here's the full story.
 
 We have a SuSE-based production application/DB server pair and a 
 corresponding pair in a disaster recovery location (offsite, bandwidth 
 consumption needs to be minimised).  We need to sync a number of files 
 between these servers and some require elevated (root) privileges at *both* 
 ends.  Here lies the problem; we don't allow remote root logins (via SSH or 
 any other method either...sudo, console or nada).
 
 I want to use rsync because its ability to transfer 
 differential/incremental changes makes it bandwidth-friendly, but any 
 other tool would be fine too.  However, due to the inability for root to 
 login directly, how the heck do I synchronise particular files in privileged 
 locations (like /etc/shadow)?  I can start whatever services I need at either 
 end (like an rsync server) but the main thing is all files maintain the same 
 owner/group/mode at each end.
 
 Ideas?

I have done this using sudo.  I write a script on the target machine, sign
on as my own user, and run the script using sudo, which I authorise (very
specifically) to run as root without a password.

Ken




Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread Amos Shapira
On 12 February 2010 15:37, Ken Foskey kfos...@tpg.com.au wrote:

 On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
need to sync a number of files between these servers and some require
elevated (root) privileges at *both* ends.  Here lies the problem; we
don't allow remote root logins (via SSH or any other method
either...sudo, console or nada).

 I have done this using sudo.  I write a script on the called machine,
 sign on as my user and run the script using sudo which I authorise (very
 specifically) to root without password.

He says that he can't use sudo.

However, googling for offline rsync reminded me of rdiff - here is
a use case that sounds similar to yours:
http://users.softlab.ece.ntua.gr/~ttsiod/Offline-rsync.html
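For reference, rdiff (from librsync) splits the rsync algorithm into three
offline steps, so the delta can be carried over any channel; the file names
here are illustrative:

```shell
rdiff signature basis.file sig.bin          # receiver: summarise what it already has
rdiff delta sig.bin updated.file delta.bin  # sender: diff the new file against that signature
rdiff patch basis.file delta.bin new.file   # receiver: reconstruct the updated file
```

Only sig.bin and delta.bin ever travel between the hosts, which keeps the
bandwidth profile close to a normal rsync run.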

Cheers,

--Amos


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread Jeremy Visser
On Fri, 2010-02-12 at 15:37 +1100, Ken Foskey wrote:
 I have done this using sudo.  I write a script on the called machine,
 sign on as my user and run the script using sudo which I authorise (very
 specifically) to root without password.

Agreed. Not only that, but you can restrict sudo to only be able to run
certain commands -- rsync being the case in point.

Something like the following oughta do the trick (assuming you have a
group called 'backup' that the backup user is in — remove the % to make
it refer to a user instead):

%backup ALL = NOPASSWD: /usr/bin/rsync -ar server1:/vital_data/ /vital_data/

(The above should enforce that rsync is only called with those
particular parameters, if I read the sudoers man page correctly.)
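To make the sudo route preserve root ownership end-to-end, rsync's
--rsync-path option can elevate only the remote rsync process; a sketch (the
group name, paths, and the server flag string are assumptions -- the exact
flags vary by rsync version and client options):

```shell
# /etc/sudoers fragment (edit with visudo): pin the backup group to one exact
# remote rsync server command.  Capture the real flag string from sudo's logs,
# since it changes with rsync version and the client-side options used.
%backup ALL = NOPASSWD: /usr/bin/rsync --server --sender -logDtpre.iLsfx . /vital_data/

# Client side, as an ordinary user: only the remote rsync runs as root,
# so owner/group/mode survive at both ends.
rsync -a --rsync-path='sudo rsync' server1:/vital_data/ /vital_data/
```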



Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread Daniel Pittman
James Gray ja...@gray.net.au writes:

 I've googled this one for a while and can't find any examples of people
 doing *system* file sync with rsync.  So I thought I'd throw it out to the
 collective wisdom of SLUG.  Here's the full story.

 We have a SuSE-based production application/DB server pair and a
 corresponding pair in a disaster recovery location (offsite, bandwidth
 consumption needs to be minimised).  We need to sync a number of files
 between these servers and some require elevated (root) privileges at *both*
 ends.  Here lies the problem; we don't allow remote root logins (via SSH or
 any other method either...sudo, console or nada).

 I want to use rsync because its ability to transfer
 differential/incremental changes makes it bandwidth-friendly, but any
 other tool would be fine too.  However, due to the inability for root to
 login directly, how the heck do I synchronise particular files in privileged
 locations (like /etc/shadow)?

...if you allow this tool to write to /etc/shadow[1], just allow root logins:
you have added *nothing* by forbidding them.  Why?  An attacker with access to
the rsync tool can add an additional root user with a known password anyhow,
so additional security doesn't actually change the problem space at all.

 I can start whatever services I need at either end (like an rsync server)
 but the main thing is all files maintain the same owner/group/mode at each
 end.

 Ideas?

Just use root, if you want to go down this path.

Alternately, I would suggest using something like puppet which is designed to
do system management like this in an automated fashion; it is a completely
different approach, but one that will probably solve your underlying problem
without needing to change your security model so much.
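For example, a single puppet file resource declares the desired state once,
and the agent re-converges owner, group, and mode on every run (the path and
module name below are assumptions):

```
file { '/etc/vital_data.conf':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0600',
  source => 'puppet:///modules/vital/vital_data.conf',
}
```

The agent runs as root locally, so no remote root login or rsync-over-SSH
plumbing is needed; the master only ever serves file content.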

Regards,
Daniel

Footnotes: 
[1]  ...and, by implication, /etc/passwd, since the latter isn't much use
 without the former being updated too.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread james
On Friday 12 February 2010 13:23:18 slug-requ...@slug.org.au wrote:
  On Fri, 2010-02-12 at 10:24 +1100, James Gray wrote:
 
 need to sync a number of files between these servers and some require
 elevated (root) privileges at both ends.  Here lies the problem; we
 don't allow remote root logins (via SSH or any other method
 either...sudo, console or nada).
 
  I have done this using sudo.  I write a script on the called machine,
  sign on as my user and run the script using sudo which I authorise (very
  specifically) to root without password.
 
 He says that he can't use sudo.
 
 However Google'ing for offline rsync reminded me of rdiff - here is
 a use case which sounds similar to yours:
 http://users.softlab.ece.ntua.gr/~ttsiod/Offline-rsync.html

So you want root privilege without using any of the standard root-privilege
mechanisms?
"Wow," he said scathingly, "that deserves a prize."

Actually, you should start at the beginning, take a deep breath, and clearly 
decide what you are trying to achieve, then work out how to do that securely, 
including physical access to the remote machine.  That is a very, very easy 
way to compromise your server (hint: Knoppix or any live CD).

"You were mugged on the train and lost your rdiff mem stick" illustrates the 
foolhardy nature of your thinking.

James


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread Amos Shapira
On 12 February 2010 17:35, james j...@tigger.ws wrote:
 You were mugged on the train and lost your rdiff mem stick illustrates the
 foolhardy nature of your thinkings

A USB key can be encrypted.


Re: [SLUG] Replicate Production to DR file system with rsync

2010-02-11 Thread James Polley
On Fri, Feb 12, 2010 at 6:35 PM, Amos Shapira amos.shap...@gmail.com wrote:
 On 12 February 2010 17:35, james j...@tigger.ws wrote:
 You were mugged on the train and lost your rdiff mem stick illustrates the
 foolhardy nature of your thinkings

 USB key can be encrypted.

Which is great, no-one else can read your files.

Unfortunately, neither can you.
