Re: Best way to duplicate HDs

2002-01-07 Thread Ted Deppner
On Mon, Jan 07, 2002 at 04:11:57PM +0800, Patrick Hsieh wrote:
> > > But when I ssh from debianclient to backupserver, it gives me a password
> > > prompt, so I enter the password, then rsync begins.
> > 
> > and ?
> > 
> Thanks for your patience.
> My question is: since I want automated rsync backups, I hope to avoid any
> password prompt in the whole procedure. In this case I still have to
> enter a password. Any ideas?

See man ssh-keygen, and don't add a passphrase to the key.
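
For example, something along these lines (hostnames, paths, and the key
filename are only placeholders):

  # on debianclient: make a key with an empty passphrase
  ssh-keygen -t rsa -N "" -f ~/.ssh/backup-key
  # append the public half to the backup server's authorized_keys2
  cat ~/.ssh/backup-key.pub | ssh backupserver 'cat >> ~/.ssh/authorized_keys2'
  # after that, no password prompt:
  rsync -a -e 'ssh -i ~/.ssh/backup-key' /dirtobackup backupserver:/backup/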

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs

2002-01-07 Thread Patrick Hsieh
> On Mon, Jan 07, 2002 at 03:03:12PM +0800, Patrick Hsieh wrote:
> > >   - obviously this doesn't preclude a bad guy checking out
> > > backup-server:backups/otherhostname (use ssh keys, and invoke cmd="cd
> > > backups/hostname; rsync with whatever daemon options" will limit that)
> > Now I know how to use "command=" in ~/.ssh/authorized_keys2.
> > Suppose I have backupserver and debianclient.
> > In the "command=" section of ~/.ssh/authorized_keys2 on backupserver, what
> > command should I put to automate the backup procedure between
> > backupserver and debianclient? I tried:
> > command="cd /backup; /usr/bin/rsync -av debianclient:/dirtobackup ./"
> 
> run the rsync without a command= statement, and do a ps awux | grep rsync
> on the target (as I already suggested).  That command, or something close
> to it, will be the basis for your command="".
> 
> > But when I ssh from debianclient to backupserver, it gives me a password
> > prompt, so I enter the password, then rsync begins.
> 
> and ?
> 
Thanks for your patience.
My question is: since I want automated rsync backups, I hope to avoid any
password prompt in the whole procedure. In this case I still have to
enter a password. Any ideas?

-- 
Patrick Hsieh <[EMAIL PROTECTED]>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg




Re: Best way to duplicate HDs

2002-01-07 Thread Ted Deppner
On Mon, Jan 07, 2002 at 03:03:12PM +0800, Patrick Hsieh wrote:
> >   - obviously this doesn't preclude a bad guy checking out
> > backup-server:backups/otherhostname (use ssh keys, and invoke cmd="cd
> > backups/hostname; rsync with whatever daemon options" will limit that)
> Now I know how to use "command=" in ~/.ssh/authorized_keys2.
> Suppose I have backupserver and debianclient.
> In the "command=" section of ~/.ssh/authorized_keys2 on backupserver, what
> command should I put to automate the backup procedure between
> backupserver and debianclient? I tried:
> command="cd /backup; /usr/bin/rsync -av debianclient:/dirtobackup ./"

run the rsync without a command= statement, and do a ps awux | grep rsync
on the target (as I already suggested).  That command, or something close
to it, will be the basis for your command="".

> But when I ssh from debianclient to backupserver, it gives me a password
> prompt, so I enter the password, then rsync begins.

and ?

> I don't understand what "command=" means.  Does it only specify what
> the server will do upon ssh login? Can it specify commands and
> parameters to restrict ssh?

command= on the target machine is the command that will be run when the
client successfully authenticates.

Try it yourself.

Try a command='/bin/cat /etc/motd', and then ssh to the target...  use an
identity key, of course.

The command given on the ssh command line will have no effect if a command=
is present.
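
For example, an authorized_keys2 entry along these lines (the key material
is abbreviated; the no-pty/no-port-forwarding options are just optional
hardening):

  # on the target, in ~/.ssh/authorized_keys2:
  command="/bin/cat /etc/motd",no-pty,no-port-forwarding ssh-rsa AAAA...

  # from the client; the requested command is ignored, you get the motd:
  ssh -i ~/.ssh/backup-key target 'ls /'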

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs

2002-01-07 Thread Patrick Hsieh
> On Tue, Jan 01, 2002 at 08:39:39AM -0500, Keith Elder wrote:
> > This brings up a  question. How do you rsync something but keep the
> > ownership and permissions the same.  I am pulling data off site nightly
> > and that works, but the permissions are all screwed up.
> 
> rsync -avxrP --delete $FILESYSTEMS backup-server:backups/$HOSTNAME
> 
> Some caveats if you want to fully automate this...
>   - remove -vP (verbose w/ progress)
>   - --delete is NECESSARY to make sure deleted files get deleted from the
> backup
>   - FILESYSTEMS should be any local filesystems you want backed up (-x
> won't cross filesystem boundaries, which makes backing up in an NFS
> environment easier)
>   - obviously this doesn't preclude a bad guy checking out
> backup-server:backups/otherhostname (use ssh keys, and invoke cmd="cd
> backups/hostname; rsync with whatever daemon options" will limit that)
Hello Ted,
Now I know how to use "command=" in ~/.ssh/authorized_keys2.
Suppose I have backupserver and debianclient.
In the "command=" section of ~/.ssh/authorized_keys2 on backupserver, what
command should I put to automate the backup procedure between
backupserver and debianclient? I tried:
command="cd /backup; /usr/bin/rsync -av debianclient:/dirtobackup ./"

But when I ssh from debianclient to backupserver, it gives me a password
prompt, so I enter the password, then rsync begins.

I don't understand what "command=" means.  Does it only specify what
the server will do upon ssh login? Can it specify commands and
parameters to restrict ssh?


>   - on backup-server, rotate the backup every 12 hours or whatever.  
> - rsync -ar --delete store/hostname.2 store/hostname.3 
> - rsync -ar --delete store/hostname.1 store/hostname.2 
> - rsync -ar --delete backups/hostname store/hostname.1
> # that could be better optimized, but you get the idea
> 
> I've used this rsync system to successfully maintain up-to-date backups w/
> great ease, AND restore very quickly...  use a LinuxCare Bootable Business
> Card to get the target fdisked and ready, then mount the filesystems as
> you desire, and rsync -avrP backup-server:backups/hostname /target.  I got
> a 700MB server back online in under 20 minutes from powerup to the server
> serving requests (the rsync itself is 3 to 5 minutes).  Making sure you
> do (cd /target; lilo -r . -C etc/lilo.conf) is the only tricky part.
> 
> -- 
> Ted Deppner
> http://www.psyber.com/~ted/

-- 
Patrick Hsieh <[EMAIL PROTECTED]>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-07 Thread Jeff Waugh


> > 3) Add this to authorized_keys for the above account, specifying the
> > command that logins with this key are allowed to run. See command="" in
> > sshd(1).
> 
> I can't find the documentation about this; can you show me
> some reference or examples? Many thanks.

man sshd, down the bottom.

- Jeff

-- 
   No clue is good clue.




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-06 Thread Patrick Hsieh
> 3) Add this to authorized_keys for the above account, specifying the
> command that logins with this key are allowed to run. See command="" in
> sshd(1).

I can't find the documentation about this; can you show me
some reference or examples? Many thanks.

-- 
Patrick Hsieh <[EMAIL PROTECTED]>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Ted Deppner
[cc: trimmed to something a little more sane]

On Wed, Jan 02, 2002 at 04:21:33PM -0500, [EMAIL PROTECTED] wrote:
> We're pulling **from** a read-only rsyncd.  It has to run as root because we
> require the right archive, permissions, etc.  I'm confused; is that much
> different from running rsync otherwise, except for the convenience of the
> [modules] thing?  Or is rsync the wrong tool for the job?

To, from, no difference.  rsyncd uses cleartext transport (it appears to
do a challenge/response for the password).  Using ssh for the transport
(no rsyncd) gives you encrypted data on the network, and password
management in the form of identity keys.

I trust rsync to move files around in a convenient manner.  I trust ssh to
transport data in a secure manner.  I do not trust rsync to be secure.

If you deeply trust your "private network", trust programs not written
with security in mind to be secure, and don't mind your data being exposed
(during transport) as a result of your backup system, maybe this isn't a
big concern for you.

> We want to reduce the load on the production servers.  Some clients need
> 4x daily backups, but for others nothing changes for months at a time.
> The new system is going to snapshot and archive only the changed
> versions, not every day.  All the zipping, sorting and file checking
> will take place on the backup machine, not on the servers, so we don't
> care how greedy the process gets as long as the process pulling the mirror
> off the production machine is as light as possible.  Is there something
> better than rsync for that?

rsync is a fine tool for that.  All I'm suggesting is that you don't use
rsyncd for your data transport, and that you use something more secure,
e.g. ssh.

[rsyncd]
backup-server# rsync -avrP production::everything /backups/production/

  - relying on rsync for password (if used) and transport security
  - keys are stored plaintext (or not at all w/ your read-only rsyncd
    design)

becomes

[ssh]
production# RSYNC_RSH='ssh -c blowfish -i pathto/bkup-identity' rsync -avrP / backup-server:/backups/production/

  - relying on ssh for password (or identity key), and transport security
  - keys stored encrypted (or passphrase-less identities, or via ssh-agent)

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs

2002-01-02 Thread Nick Jennings
On Tue, Jan 01, 2002 at 02:28:28PM +0800, Jason Lim wrote:
> Hi all,
> 
> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?
> 

 I've been using tar on my system. Works great: no downtime, and all
 permissions are maintained.

-- 
  Nick Jennings




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread cfm
On Wed, Jan 02, 2002 at 10:17:38AM -0800, Ted Deppner wrote:

> > The [modules] in rsyncd.conf provide a nice way to package what you want to
> > back up.  You can also specify which IP addresses may connect to rsyncd.  So
> > in theory only the backup machine can connect to the rsyncd daemons; we've
> > set those to read-only.
> 
> Ack!  If you're doing file level rsync backups to rsyncd, rsyncd *must* be
> running as root (DON'T DO THAT), else your perms will be useless.  rsyncd
> just isn't something that should run with root perms... therefore it's
> rather useless for file level rsync backups.

We're pulling **from** a read-only rsyncd.  It has to run as root because we
require the right archive, permissions, etc.  I'm confused; is that much
different from running rsync otherwise, except for the convenience of the
[modules] thing?  Or is rsync the wrong tool for the job?

We want to reduce the load on the production servers.  Some clients need
4x daily backups, but for others nothing changes for months at a time.
The new system is going to snapshot and archive only the changed
versions, not every day.  All the zipping, sorting and file checking
will take place on the backup machine, not on the servers, so we don't care
how greedy the process gets as long as the process pulling the mirror off
the production machine is as light as possible.  Is there something
better than rsync for that?
> 
> If you tar up the source, and send those to your rsyncd that's less of a
> security risk from rsyncd itself, HOWEVER your root only file data is now
> in a userland tar file, so your data is now less secure on the backup
> server than it was on the source machine.  Very bad backup design.

I must have described it poorly: dedicated backup machine, no other services,
no random users, private routing on a physically separate LAN, outbound
connections only.  I'd hope that would be better than a production server.

-- 

Christopher F. Miller, Publisher   [EMAIL PROTECTED]
MaineStreet Communications, Inc   208 Portland Road, Gray, ME  04039
1.207.657.5078 http://www.maine.com/
Content/site management, online commerce, internet integration, Debian linux




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Ted Deppner
On Wed, Jan 02, 2002 at 09:19:11AM -0500, [EMAIL PROTECTED] wrote:
> Automation with keys stored on machines is better than doing it manually
> and forgetting to back up.  :-)

Agreed.  Like exercise, the kind you do is better than the kind you
don't.

> It **does** provide a path by which someone can gain access from one
> machine to another.  Even accounts with minimal privs can be
> compromised.

A universal fact, hopefully known to all list members.

> The [modules] in rsyncd.conf provide a nice way to package what you want to
> back up.  You can also specify which IP addresses may connect to rsyncd.  So
> in theory only the backup machine can connect to the rsyncd daemons; we've
> set those to read-only.

Ack!  If you're doing file level rsync backups to rsyncd, rsyncd *must* be
running as root (DON'T DO THAT), else your perms will be useless.  rsyncd
just isn't something that should run with root perms... therefore it's
rather useless for file level rsync backups.

If you tar up the source and send the tarballs to your rsyncd, that's less of
a security risk from rsyncd itself; HOWEVER, your root-only file data is now
in a userland tar file, so your data is now less secure on the backup
server than it was on the source machine.  Very bad backup design.

> It **seems** that even though we are pulling the data off with rsync -e
> ssh there is no need for a key on the server machine.  Maybe I was
> working on it too late last night; at any rate, tcpdump will tell.  Can
> it build an ssh tunnel without keys at both ends?  YMMV.

No need to guess: if you're using one :, you're using rsh by default, unless
modified by -e or RSYNC_RSH.  If you're using two ::, you're using rsyncd.
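
That is (host and module names illustrative):

  rsync -a host:/etc /backup/      # one colon: remote shell (rsh, or ssh via -e)
  rsync -a host::module /backup/   # two colons: the rsyncd protocol, port 873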

> The idea is that if someone got root on the client machines, the only
> additional path they would have to backups is an interface on the
> private LAN.  Not foolproof, but lower hanging fruit elsewhere would be
> easier picking.

Maybe, maybe not.  If they can get all the goodies off your backup server
without having to break all the security of the source machines, you're
still just as compromised.

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Brian Sniffen

ssh-agent does help here.  Have the cron job which is doing the backup
look to see if there's an ssh-agent running as its user (presumably
'backup', maybe root) and, if not, send mail to somebody's pager
complaining about the missing agent.  If the agent is running, the
cron job can reconnect to it and use it for authentication.
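
A rough sketch of such a wrapper (the agent-environment file, paths, and the
pager address are made up; ssh-add -l exits non-zero when it can't reach a
usable agent):

  #!/bin/sh
  # run from cron as the backup user; pick up SSH_AUTH_SOCK from a file
  # the agent wrote at login
  . $HOME/.agent-env 2>/dev/null
  if ssh-add -l >/dev/null 2>&1; then
      rsync -a -e ssh /dirtobackup backup-server:/backups/
  else
      echo "no usable ssh-agent" | mail -s "backup: agent missing" [EMAIL PROTECTED]
  fi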

It's still possible for a cracker to get the passphrased key, and to
plant a keystroke logger to get your passphrase.  Getting a usable key
out of the agent is *hard*.

-Brian

-- 
Brian Sniffen [EMAIL PROTECTED]





Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread cfm
On Wed, Jan 02, 2002 at 03:35:43PM +0800, Patrick Hsieh wrote:
> OK. My problem is: if I use rsync+ssh with a blank passphrase among
> servers to automate the rsync+ssh backup procedure without a password
> prompt, then a cracker will not need to supply any password or
> passphrase when logging in via ssh to another server, right?
> 
> Is there a good way to automate the rsync+ssh procedure without a
> password/passphrase prompt, while a password/passphrase is still required
> when someone attempts an interactive ssh login?
> 
> > 
> > 
> > > I am sorry, I could be kind of off-topic. But I want to know how to
> > > do cross-site rsync without interactive authentication, say ssh auth?
> > 
> > That's the best way.
> > 
> > > I've read some docs on using ssh-keygen to generate key pairs, appending the
> > > public keys to ~/.ssh/authorized_keys on another host to avoid the ssh
> > > authentication prompt. Is it very risky? Chances are a cracker could
> > > compromise one machine and ssh into others without any authentication.
> > 
> > It's not "without authentication" - you're still authenticating, you're
> > just using a different means. There are two parts to rsa/dsa authentication
> > with ssh; first there's the key, then there's the passphrase.
> > 
> > If a cracker gets your key, that's tough, but they'll need the passphrase to
> > authenticate. If you make a key without a passphrase (generally what you'd
> > do for scripted rsyncs, etc) then they *only need the key*. So, you should
> > keep the data available with passphrase-less keys either read-only or backed
> > up, depending on its importance, etc.


Automation with keys stored on machines is better than doing it manually and
forgetting to back up.  :-)

It **does** provide a path by which someone can gain access from one machine to
another.  Even accounts with minimal privs can be compromised.

We happen to be in the process of overhauling our backup architecture.
We're installing rsyncd (daemons) on the client machines, and initiating
rsync -e ssh backups from a dedicated backup machine on a private LAN with
non-routable addresses.  That machine packages up the backups and spools
them off for storage elsewhere.

The [modules] in rsyncd.conf provide a nice way to package what you want to
back up.  You can also specify which IP addresses may connect to rsyncd.  So
in theory only the backup machine can connect to the rsyncd daemons; we've
set those to read-only.
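
For reference, such a module might look roughly like this in rsyncd.conf
(the path and address are made up):

  [everything]
      path = /
      read only = yes
      hosts allow = 10.0.0.5    # the backup machine's private address

  # which the backup machine then pulls with something like:
  rsync -a client::everything /backups/client/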

It **seems** that even though we are pulling the data off with rsync -e ssh
there is no need for a key on the server machine.  Maybe I was working on it
too late last night; at any rate, tcpdump will tell.  Can it build an ssh
tunnel without keys at both ends?  YMMV.

The idea is that if someone got root on the client machines, the only 
additional path they would have to backups is an interface on the private 
LAN.  Not foolproof, but lower hanging fruit elsewhere would be easier picking.

cfm

-- 

Christopher F. Miller, Publisher   [EMAIL PROTECTED]
MaineStreet Communications, Inc   208 Portland Road, Gray, ME  04039
1.207.657.5078 http://www.maine.com/
Content/site management, online commerce, internet integration, Debian linux




Re: Best way to duplicate HDs

2002-01-02 Thread I. Forbes
Hello All

I am not sure that I understand what the original poster wishes to 
achieve, nor have I followed the lengthy discussions that ensued.

But, a thread with the above subject line would not be complete 
without a mention of "mirrordir".

Someone wrote:

> > Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
> > /mnt/disk2/ 

Try

apt-get install mirrordir

mirrordir /mnt/sourcedisk /mnt/targetdisk

Everything, including soft links, hard links, device files, FIFOs,
permissions, etc., will be mirrored, with a minimum of changes on
the target disk.

Mind that you do not mix up the "source" and "target" paths, 
otherwise you will end up wiping your original drive.

If you want to "ghost" a complete linux file system to replace a small 
drive with a larger one, the recipe is this:

- power down and install the target disk on the secondary port, reboot.
- partition the target disk (fdisk, cfdisk).
- create file systems (mkfs) and a swap partition (mkswap) on the
target disk.
- mount the target disk on /mnt
- create mount points and mount the other partitions on the target drive
(e.g. mkdir /mnt/boot; mount /dev/hdc1 /mnt/boot).
- change into single-user mode (init s)
- mirror the drive: "mirrordir --exclude /mnt --exclude /proc / /mnt"
(These excludes save a lot of trouble.)
- mkdir /mnt/proc, mkdir /mnt/mnt (This also saves a lot of problems
later).
- power down and remove original disk
- reboot with the target disk mounted as root / using an external 
recovery disk.
- run install-mbr to put a boot record on the target
- run lilo to make the target bootable.
- reboot.

The original poster could probably achieve what he wants by 
running the "mirrordir" statement from crontab every 24 hours.
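
Something like this in root's crontab would do it (the time is arbitrary;
the paths are the ones from the example above):

  # m h dom mon dow  command
  0 3 * * *  mirrordir /mnt/sourcedisk /mnt/targetdisk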

Have fun

Ian

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Jeff Waugh


> OK. My problem is: if I use rsync+ssh with a blank passphrase among servers
> to automate the rsync+ssh backup procedure without a password prompt, then a
> cracker will not need to supply any password or passphrase when logging in
> via ssh to another server, right?

No, password and rsa/dsa authentication are different authentication
mechanisms.

> Is there a good way to automate the rsync+ssh procedure without a
> password/passphrase prompt, while a password/passphrase is still required
> when someone attempts an interactive ssh login?

1) Use a minimally-privileged account for the rsync process, and disable
the password on this account so it cannot be used to log in.

2) Generate a passphrase-less ssh key with ssh-keygen.

3) Add this to authorized_keys for the above account, specifying the
command that logins with this key are allowed to run. See command="" in
sshd(1).

Thus no one can actually log in with the account normally; you can only
connect with the rsa/dsa key, and you can only run a particular process.
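
Concretely, the three steps might look something like this (the account name
and key path are placeholders, and the forced command is whatever your rsync
actually runs on the server - per Ted's suggestion elsewhere in the thread,
ps on the target while a normal rsync runs will show it):

  # 1) on the server: account with no usable password
  adduser --disabled-password backup
  # 2) on the client: passphrase-less key
  ssh-keygen -t dsa -N '' -f ~/.ssh/backup-key
  # 3) one line in ~backup/.ssh/authorized_keys on the server (key abbreviated):
  command="rsync --server ..." ssh-dss AAAA...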

ssh-agent doesn't really help you in this instance; it's generally used to
provide single-passphrase authentication for a user's session. (I use it to
log in to the ~30-40 machines I have my public key on, without typing
passwords every five minutes.)

- Jeff

-- 
 "jwz? no way man, he's my idle" - James Wilkinson  




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Patrick Hsieh
Hello Ted,

Your mail is very informative to me.
I wonder how to define a cmd to run automatically in authorized_keys?
I thought there was nothing but public keys in the authorized_keys file.

And do I need ssh-agent in this case? Do I need to leave the passphrase
blank?

Thank you for your patience and kindness.

> On Wed, Jan 02, 2002 at 03:15:20PM +0800, Patrick Hsieh wrote:
> > I've read some docs on using ssh-keygen to generate key pairs, appending
> > the public keys to ~/.ssh/authorized_keys on another host to avoid the
> > ssh authentication prompt. Is it very risky? Chances are a cracker could
> > compromise one machine and ssh into others without any authentication.
> 
> use ssh-keygen to generate a new key for *every* machine, and *every*
> application you want to use.  In the authorized_keys entry, you limit
> what a single key can do by specifying a cmd that is run automatically...
> in other words, use of the key executes only the command you want, and not
> simply a shell.
> 
> That does not stop an attacker from exploiting whatever passwordless
> identity cmds you've set up, but they can't run rampant w/ root over an
> entire machine.
> 
> -- 
> Ted Deppner
> http://www.psyber.com/~ted/
> 

-- 
Patrick Hsieh <[EMAIL PROTECTED]>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Ted Deppner
On Wed, Jan 02, 2002 at 03:15:20PM +0800, Patrick Hsieh wrote:
> I've read some docs on using ssh-keygen to generate key pairs, appending
> the public keys to ~/.ssh/authorized_keys on another host to avoid the
> ssh authentication prompt. Is it very risky? Chances are a cracker could
> compromise one machine and ssh into others without any authentication.

use ssh-keygen to generate a new key for *every* machine, and *every*
application you want to use.  In the authorized_keys entry, you limit
what a single key can do by specifying a cmd that is run automatically...
in other words, use of the key executes only the command you want, and not
simply a shell.

That does not stop an attacker from exploiting whatever passwordless
identity cmds you've set up, but they can't run rampant w/ root over an
entire machine.

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Patrick Hsieh
OK. My problem is: if I use rsync+ssh with a blank passphrase among
servers to automate the rsync+ssh backup procedure without a password
prompt, then a cracker will not need to supply any password or
passphrase when logging in via ssh to another server, right?

Is there a good way to automate the rsync+ssh procedure without a
password/passphrase prompt, while a password/passphrase is still required
when someone attempts an interactive ssh login?

> 
> 
> > I am sorry, I could be kind of off-topic. But I want to know how to
> > do cross-site rsync without interactive authentication, say ssh auth?
> 
> That's the best way.
> 
> > I've read some docs on using ssh-keygen to generate key pairs, appending the
> > public keys to ~/.ssh/authorized_keys on another host to avoid the ssh
> > authentication prompt. Is it very risky? Chances are a cracker could
> > compromise one machine and ssh into others without any authentication.
> 
> It's not "without authentication" - you're still authenticating, you're
> just using a different means. There are two parts to rsa/dsa authentication
> with ssh; first there's the key, then there's the passphrase.
> 
> If a cracker gets your key, that's tough, but they'll need the passphrase to
> authenticate. If you make a key without a passphrase (generally what you'd
> do for scripted rsyncs, etc) then they *only need the key*. So, you should
> keep the data available with passphrase-less keys either read-only or backed
> up, depending on its importance, etc.
> 
> - Jeff
> 
> -- 
>"I think we agnostics need a term for a holy war too. I feel all left
> out." - George Lebl 
> 

-- 
Patrick Hsieh <[EMAIL PROTECTED]>

GPG public key http://pahud.net/pubkeys/pahudatpahud.gpg




Re: Best way to duplicate HDs--talk more about rsync+ssh system

2002-01-02 Thread Jeff Waugh


> I am sorry, I could be kind of off-topic. But I want to know how to
> do cross-site rsync without interactive authentication, say ssh auth?

That's the best way.

> I've read some docs on using ssh-keygen to generate key pairs, appending the
> public keys to ~/.ssh/authorized_keys on another host to avoid the ssh
> authentication prompt. Is it very risky? Chances are a cracker could
> compromise one machine and ssh into others without any authentication.

It's not "without authentication" - you're still authenticating, you're
just using a different means. There are two parts to rsa/dsa authentication
with ssh; first there's the key, then there's the passphrase.

If a cracker gets your key, that's tough, but they'll need the passphrase to
authenticate. If you make a key without a passphrase (generally what you'd
do for scripted rsyncs, etc.) then they *only need the key*. So you should
keep the data available with passphrase-less keys either read-only or backed
up, depending on its importance, etc.

- Jeff

-- 
   "I think we agnostics need a term for a holy war too. I feel all left
out." - George Lebl 




Re: Best way to duplicate HDs

2002-01-01 Thread Jorge . Lehner
Hello!

As other people have already sort of pointed out, your situation can
probably be handled more easily by dividing it into two tasks:

- fast recovery from data damage

- prevention of changes made by hackers/viruses

each of which can be better handled by an individual approach.

While the former has been addressed (three HDs in a software RAID-1
configuration), the latter also has some rather easy-to-set-up
solutions.

You can for example set up Cfengine on your network, and monitor/fix
critical files from a CD.  This is similar to tripwire, but "better"
(according to the authors of Cfengine):

You make a copy of your known-good binaries and configuration files and
burn it to a CD (or a HD on a [well protected] backup server!! :).

You set up cfengine so it checks, each hour or so, the integrity of
the files on your production server with respect to the backup, and
overwrites any modified file it encounters - this part is almost
trivial: name the file/directory and cfengine will do the job for
you.

When your system crashes you recover from the spare Raid HD. Cfengine
will automatically put everything straight if it does not comply with
the backup server.
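
(Cfengine's own syntax aside, the hourly check-and-restore idea can be
sketched in plain shell - the mount point and directories are
illustrative. rsync -c compares checksums, so only modified files are put
back; note it does not remove files an intruder may have *added*:)

  #!/bin/sh
  # run hourly from cron: restore modified files from the read-only master
  rsync -ac /mnt/master/etc/ /etc/
  rsync -ac /mnt/master/bin/ /bin/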

Best Regards,

 Jorge-León


On Wed, Jan 02, 2002 at 06:40:39AM +0800, Jason Lim wrote:
...
> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is suppose to be
> synced every 12 hours or 24 hours.
> 
> The idea being that if there is a virus, a cracker, or hardware
> malfunction, then the backup drives can be immediately pulled out and
> inserted into a backup computer, and switch on to provide immediate
> restoration of services (with data up to 12 hours old, but better than
> having up-to-date information that may be corrupted or "cracked" versions
> of programs).
...

P.S.: I like cfengine a lot; however, I have never (had the chance to)
  try this approach out.  I can only dream of 60G HDs :)





Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
> /mnt/disk2/  :-/

This is the point at which we have one of those "Brady Bunch Moments", when
everyone stands around chuckling at what they've learned, and the credits
roll.

- Jeff

-- 
"And that's what it sounds like if you *download* it!" - John, They 
  Might Be Giants   




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Wed, 2 Jan 2002 00:55, Jason Lim wrote:
> Not really... I think of it as helping to cure the disease and helping to
> clean up the problem, not eliminating both because it is impossible to
> cure the disease completely. Unfortunately if you work with a medium to
> large number of machines (or even a small number if you're
> unlucky) you're bound to be cracked sooner or later. Even the strictest
> security policy and such won't guarantee 100% protection. Another way to
> do this would be to go Russell's way (the ideal way) and run a RAID array
> with 3 drives, 2 live and 1 spare, and then sync the spare up every 24
> hours. However, this would require 3 drives instead of 2... $$$ and space.
> For the average server between 2-4 drives, this would mean a minimum of
> 6-12 drives compared to 4-8. The server cases wouldn't even hold 12
> drives. They could hold up to 8 or so. So money isn't the only
> consideration. Then you have to consider that even if we could somehow
> place that many drives in the average rackmount case, overheating... power
> supply issues... etc. come into play.

So go with my original suggestion and use only double the number of drives.

> You might say "tape backup"... but keep in mind that it doesn't offer a
> "plug n play" solution if a server goes down. With the above method, a
> dead server could be brought to life in a minute or so (literally) rather
> than half an hour... an hour... or more.

You can make a tape backup into plug-and-play.  It wouldn't be that difficult 
to do it.

With the amount of time I've spent discussing this issue on this list I could 
have created some floppy boot disks that restore an entire system from a tape 
(presuming that I had a tape drive to test things with).

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Wed, 2 Jan 2002 02:58, Jason Lim wrote:
> You are thinking of resource-intensive work, which would require more than a
> basic or low-level sysadmin to do. I would not trust a low-level sysadmin
> to start performing restoration work on a system. If we catch it
> within 12 or 24 hours, then the sysadmin can at least pull out the
> backup hard disks from the drive caddies, plug them into the backup system
> on standby (basically has everything except hard disks) and have a working
> system up and running instantly. A high-level sysadmin can slowly sift
> through the original information carefully once the system is up and running.

If you believe that someone has cracked your security then the absolute last 
thing you want is a junior person messing with the system and destroying 
evidence.

If you believe that your network has been cracked then your best person 
should work on it, the second best person should watch, and everyone else 
should leave the room.

> Your assumption is that you can have a sysadmin onsite within a certain
> amount of time to perform said restoration work on the filesystem, which
> may not be possible especially with cutbacks everywhere and everyone
> tightening their belts. Calling in a high-level sysadmin at 3am to
> perform such tasks is not always possible resource-wise.

Then the network stays down overnight.

If the network isn't important enough to deserve good quality hardware and 
payment for overtime for skilled people then it's not important enough to be 
running 24*7 all the time.

It's that simple.

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> On Wed, 2 Jan 2002 00:44, Jason Lim wrote:
> > > > The idea being that if there is a virus, a cracker, or hardware
> > > > malfunction
> > >
> > > And if you discover this within 12 hours...  Most times you won't.
> >
> > We've got file integrity checkers running on all the servers, and they run
> > very often (mostly every hour or so) so unless the first thing the cracker
> > does is to "fix" or disengage the checkers then we SHOULD notice this
> > within the 12 hours... or make that 24 hours to give a bit more leeway.
>
> The first thing any serious cracker will do is to stop themselves being
> trivially discovered.  This means changing the database for the file
> integrity checkers etc.  Also if you're running a standard kernel from a
> major distribution then they can have pre-compiled kernel modules ready to be
> inserted into your kernel.  They can prevent the rootkit from being
> discovered (system calls such as readdir() will not show the file names,
> stat() won't stat them, only chdir() and exec() will see them).
>
> To alleviate this risk you need grsecurity or NSA SE-Linux.
>
> At the moment if you want to make your servers more secure in a way that is
> Debian supported then grsecurity is your best option.  It allows you to
> really control what the kernel allows.  Run something in a chroot() and it'll
> NEVER get out.  Run all the daemons that are likely to get attacked in
> chroot() environments and the rest of the system will be safe no matter what
> happens.
>
> Grsecurity allows you to prevent stack-smashing and prevent chroot() programs
> from calling chroot(), mknod(), ptrace(), and other dangerous system calls.
> Also it does lots more.
>
> I used to maintain the package, but I have given it up as I'm more interested
> in SE Linux.  The URL is below.
> http://www.earth.li/~noodles/grsec/
>
> Any comparison of amount of time/effort/money vs result will show that
> grsecurity is much better than any of these ideas regarding swapping hard
> drives etc for improving security.
>
> > Well, as said before, unless the cracker spends a considerable amount of
> > time learning the setup of the system, going through the cron files,
> > config files, etc. then hopefully there will be enough things set up that
> > the cracker will be unable to destroy or "fix" everything.
>
> Heh.  When setting things up I presume that anyone who can crack my systems
> knows more about such things than I do.  Therefore once they get root I can't
> beat them or expect to detect them in any small amount of time.
>
> Once they get root it's game over - unless you run SE Linux.

Of course, hackers/crackers are only one thing that needs to be "solved".
Hardware failure, etc. also needs to be taken into consideration. But your
points are well taken, and I will investigate grsecurity as an
additional means to secure the systems. My hope is only that by
increasing the number of detection methods, backups, etc. it will
become harder, though not impossible (is it ever impossible?), for a
cracker to cripple/destroy a system.

> > And regarding data loss, so far the most common thing for us is to have a
> > HD become the point of failure. We've had quite a few CPUs burn out too
> > (no bad ram yet, lucky...), but nothing except a bad HD has seemed to
> > cause severe data loss.
>
> So RAID is what you need.

Actually your three-disk RAID solution sounded pretty good... I'll look
more carefully into the "standby" mode for a RAID disk and such, and see
how well it would work.

Sigh... and I was hoping for a simple solution like cp /mnt/disk1/*
/mnt/disk2/  :-/




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> > You might say "tape backup"... but keep in mind that it doesn't offer a
> > "plug n play" solution if a server goes down. With the above method, a
> > dead server could be brought to life in a minute or so (literally) rather
> > than half an hour... an hour... or more.
>
> It occurs to me that in most cases, recovery from a catastrophic
> failure is not going to be as easy as plug and play. Let's take some
> common situations where we need to recover a system.
>
> Virus -
> The way I traditionally deal with a virus is to never have it touch
> my system. As a system admin it is my job to keep viruses from hitting
> machines in the first place, not clean them up once they arrive.
> Cleaning up is the mentality of the Microsoft security world, and I
> refuse to accept such polluted doctrine. However, I do have a contingency
> plan should I miss a virus. I have a master OS image burnt onto a disk,
> and each of my systems makes a backup of its data nightly (simple tar).
> The backups rotate and are incremental, so I can restore data to the
> current date, masking out any infected paths. This, however, is not a
> plug and play solution; it requires manual control.
>
> Hardware failure-
> I run around and scream a lot. This kind of failure is mostly luck
> of the draw, but I try to follow the same strategy as above.
>
> Hacker-
> If they wipe the disk, then the OS image and data backup will work
> nicely. If they do something else, then I wipe the disk myself (no
> backdoors that way), and recover.
>
> In none of these situations do I see any value in making a replica of a
> tainted or damaged disk every 12 hours.

You are thinking of resource-intensive work, which would require more than a
basic or low-level sysadmin to do. I would not trust a low-level sysadmin
to start performing restoration work on a system. If we catch it
within 12 or 24 hours, then the sysadmin can at least pull out the
backup hard disks from the drive caddies, plug them into the backup system
on standby (basically has everything except hard disks) and have a working
system up and running instantly. A high-level sysadmin can slowly sift
through the original information carefully once the system is up and running.

Your assumption is that you can have a sysadmin onsite within a certain
amount of time to perform said restoration work on the filesystem, which
may not be possible especially with cutbacks everywhere and everyone
tightening their belts. Calling in a high-level sysadmin at 3am to
perform such tasks is not always possible resource-wise.




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Wed, 2 Jan 2002 00:44, Jason Lim wrote:
> > > The idea being that if there is a virus, a cracker, or hardware
> > > malfunction
> >
> > And if you discover this within 12 hours...  Most times you won't.
>
> We've got file integrity checkers running on all the servers, and they run
> very often (mostly every hour or so) so unless the first thing the cracker
> does is to "fix" or disengage the checkers then we SHOULD notice this
> within the 12 hours... or make that 24 hours to give a bit more leeway.

The first thing any serious cracker will do is to stop themselves being 
trivially discovered.  This means changing the database for the file 
integrity checkers etc.  Also if you're running a standard kernel from a 
major distribution then they can have pre-compiled kernel modules ready to be 
inserted into your kernel.  They can prevent the rootkit from being 
discovered (system calls such as readdir() will not show the file names, 
stat() won't stat them, only chdir() and exec() will see them).

To alleviate this risk you need grsecurity or NSA SE-Linux.

At the moment if you want to make your servers more secure in a way that is 
Debian supported then grsecurity is your best option.  It allows you to 
really control what the kernel allows.  Run something in a chroot() and it'll 
NEVER get out.  Run all the daemons that are likely to get attacked in 
chroot() environments and the rest of the system will be safe no matter what 
happens.

Grsecurity allows you to prevent stack-smashing and prevent chroot() programs 
from calling chroot(), mknod(), ptrace(), and other dangerous system calls.  
Also it does lots more.

I used to maintain the package, but I have given it up as I'm more interested 
in SE Linux.  The URL is below.
http://www.earth.li/~noodles/grsec/

Any comparison of amount of time/effort/money vs result will show that 
grsecurity is much better than any of these ideas regarding swapping hard 
drives etc for improving security.

> Well, as said before, unless the cracker spends a considerable amount of
> time learning the setup of the system, going through the cron files,
> config files, etc. then hopefully there will be enough things set up that
> the cracker will be unable to destroy or "fix" everything.

Heh.  When setting things up I presume that anyone who can crack my systems 
knows more about such things than I do.  Therefore once they get root I can't 
beat them or expect to detect them in any small amount of time.

Once they get root it's game over - unless you run SE Linux.

> And regarding data loss, so far the most common thing for us is to have a
> HD become the point of failure. We've had quite a few CPUs burn out too
> (no bad ram yet, lucky...), but nothing except a bad HD has seemed to
> cause severe data loss.

So RAID is what you need.

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Carpman
On Tuesday, January 1, 2002, at 05:55 PM, Jason Lim wrote:

> You might say "tape backup"... but keep in mind that it doesn't offer a
> "plug n play" solution if a server goes down. With the above method, a
> dead server could be brought to life in a minute or so (literally) rather
> than half an hour... an hour... or more.

It occurs to me that in most cases, recovery from a catastrophic
failure is not going to be as easy as plug and play. Let's take some
common situations where we need to recover a system.

Virus -
	The way I traditionally deal with a virus is to never have it touch
my system. As a system admin it is my job to keep viruses from hitting
machines in the first place, not clean them up once they arrive.
Cleaning up is the mentality of the Microsoft security world, and I
refuse to accept such polluted doctrine. However, I do have a contingency
plan should I miss a virus. I have a master OS image burnt onto a disk,
and each of my systems makes a backup of its data nightly (simple tar).
The backups rotate and are incremental, so I can restore data to the
current date, masking out any infected paths. This, however, is not a
plug and play solution; it requires manual control.
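
(A minimal sketch of such a nightly job, assuming GNU tar - paths are
illustrative. The .snar file carries state between runs, so each archive
holds only what changed, and the weekday in the name gives a 7-day
rotation:)

  #!/bin/sh
  tar --create --gzip \
      --listed-incremental=/var/backups/home.snar \
      --file=/backup/home.$(date +%a).tar.gz /home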

Hardware failure-
	I run around and scream a lot. This kind of failure is mostly luck
of the draw, but I try to follow the same strategy as above.

Hacker-
	If they wipe the disk, then the OS image and data backup will work 
nicely. If they do something else, then I wipe the disk myself (no
backdoors that way), and recover.

In none of these situations do I see any value in making a replica of a 
tainted or damaged disk every 12 hours.




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Wed, 2 Jan 2002 00:32, Jason Lim wrote:
> > It's called RAID-1.
>
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking?

RAID is generally configured to be always active, but there's a lot you can
do with software RAID; that's the advantage of software products in an
open-source OS - you can make them do whatever you want.

> From what you have said, basically the only advantage of the Arcoide
> products is that they reduce load on the system, as they can perform the
> RAID-1 mirror process in the background independent of the OS.

RAID-1 mirroring isn't that intensive.  Compare the amount of resources 
required for copying the fastest available hard drives (that can sustain 
40MB/s) to the power of a 1.4GHz Athlon CPU...

> An alternative spin on what you have said (nearly identical) would be to
> put double the hard disks in each server (eg. a server has 2 hds, put in 2
> "backup" hds). Configure them in RAID-1 mode, marking the 2 backups as a
> spare, and then "adding" them to the RAID array every day via cron. This
> would cause the 2 live HDs to be mirrored to the backups, and then
> disengage the 2 "backup" HDs so they aren't constantly synced.

That sounds excessive.

Why not have three drives configured in a RAID-1 setup and then add a drive 
and remove it as soon as it's re-synced (so it's a three-drive RAID-1 with 
only two drives active most of the time).
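
A sketch of that layout in /etc/raidtab, as used by the raidtools of the
day (device names are illustrative):

  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        1
      persistent-superblock 1
      device                /dev/hda1
      raid-disk             0
      device                /dev/hdc1
      raid-disk             1
      device                /dev/hde1
      spare-disk            0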

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
>
> Sorry, but I don't see any benefit to having maximum 12 hour old data when
> you could have 0. The hardware solution you mentioned was RAID 1 anyway.
> Easiest thing to do is use it, and have both spare drives and spare machines
> ready to roll should you need to swap either.

> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction, then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked" versions
> > of programs).
>
> Well, there's your benefit to having old data. Who's to say you're going to
> know within 12 hours? This is not a particularly interesting problem, mostly
> because you're not curing the disease, you're trying to clean up after
> infection.

Not really... I think of it as helping to cure the disease and helping to
clean up the problem, not eliminating both because it is impossible to
cure the disease completely. Unfortunately if you work with a medium to
large number of machines (or even a small number if you're
unlucky) you're bound to be cracked sooner or later. Even the strictest
security policy and such won't guarantee 100% protection. Another way to
do this would be to go Russell's way (the ideal way) and run a RAID array
with 3 drives, 2 live and 1 spare, and then sync the spare up every 24
hours. However, this would require 3 drives instead of 2... $$$ and space.
For the average server between 2-4 drives, this would mean a minimum of
6-12 drives compared to 4-8. The server cases wouldn't even hold 12
drives. They could hold up to 8 or so. So money isn't the only
consideration. Then you have to consider that even if we could somehow
place that many drives in the average rackmount case, overheating... power
supply issues... etc. come into play.

You might say "tape backup"... but keep in mind that it doesn't offer a
"plug n play" solution if a server goes down. With the above method, a
dead server could be brought to life in a minute or so (literally) rather
than half an hour... an hour... or more.





Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> > It's called RAID-1.
> 
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly

That's what they do post-sync.

> and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking?

That's what they do when they sync (in very rough terms).

> This would cause the 2 live HDs to be mirrored to the backups, and then
> disengage the 2 "backup" HDs so they aren't constantly synced.
> 
> Would the above work? Sorry if I seem naive, but I haven't tried this
> "once in a while" RAID method before.

It's a dirty hack to make it do what you want it to, that's all. Russell's
solution was better, as at least you were getting the benefit of the running
mirror if a drive failed (and buying three disks is not expensive).

- Jeff

-- 
  "And up in the corporate box there's a group of pleasant  
   thirtysomething guys making tuneful music for the masses of people who   
can spell "nihilism", but don't want to listen to it in the car." - 
Richard Jinman, SMH 




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
> >
> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction
>
> And if you discover this within 12 hours...  Most times you won't.

We've got file integrity checkers running on all the servers, and they run
very often (mostly every hour or so) so unless the first thing the cracker
does is to "fix" or disengage the checkers then we SHOULD notice this
within the 12 hours... or make that 24 hours to give a bit more leeway.

> > then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked" versions
> > of programs).
>
> If the drive is in the cracked machine then it should not be trusted.  If a
> drive is in a machine that has hardware damage then there's no guarantee
> it'll still have data on it...

Well, as said before, unless the cracker spends a considerable amount of
time learning the setup of the system, going through the cron files,
config files, etc. then hopefully there will be enough things set up that
the cracker will be unable to destroy or "fix" everything.

And regarding data loss, so far the most common thing for us is to have a
HD become the point of failure. We've had quite a few CPUs burn out too
(no bad ram yet, lucky...), but nothing except a bad HD has seemed to
cause severe data loss.




Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.

Sorry, but I don't see any benefit to having maximum 12 hour old data when
you could have 0. The hardware solution you mentioned was RAID 1 anyway.
Easiest thing to do is use it, and have both spare drives and spare machines
ready to roll should you need to swap either.

> The idea being that if there is a virus, a cracker, or hardware
> malfunction, then the backup drives can be immediately pulled out and
> inserted into a backup computer, and switched on to provide immediate
> restoration of services (with data up to 12 hours old, but better than
> having up-to-date information that may be corrupted or "cracked" versions
> of programs).

Well, there's your benefit to having old data. Who's to say you're going to
know within 12 hours? This is not a particularly interesting problem, mostly
because you're not curing the disease, you're trying to clean up after
infection.

- Jeff

-- 
 "The GPL is good. Use it. Don't be silly." - Michael Meeks 




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> > I know of a few hardware solutions that do something like this, but would
> > like to do this in software. They claim to perform a "mirror" of one HD to
> > another HD while the system is live and in use.
>
> It's called RAID-1.

I dunno... whenever I think of "RAID" I always think of live mirrors that
operate constantly and not a "once in a while" mirror operation just to
perform a backup (when talking about RAID-1). Am I mistaken in this
thinking?

> > I have no idea how it does
> > this without corruption of some type (as you mentioned above, doing dd on
> > a live HD will probably cause errors, especially if the live HD is in
> > use). For example, http://www.arcoide.com/ . To quote the function we're
> > looking at " the DupliDisk2 automatically switches to the remaining drive
>
> So set up three disks in a software RAID-1 configuration with one disk being
> marked as a "spare" disk.  Then have a script run from a cron job every day
> which marks the first disk as failed; this will cause the spare disk to be
> added to the RAID set and have the data copied to it.  After setting one disk
> as failed the script can then add it back to the RAID as a spare disk.
>
> This means that apart from the RAID copying time (at least 20 minutes on an
> idle array - longer on a busy array) you will always have two live active
> copies of your data.  Before your script runs you'll also have an old
> snapshot of your data which can be used to recover from operator error.
>
> This will do everything that the arcoide product appears to do.

From what you have said, basically the only advantage of the Arcoide
products is that they reduce load on the system, as they can perform the
RAID-1 mirror process in the background independent of the OS.

An alternative spin on what you have said (nearly identical) would be to
put double the hard disks in each server (eg. a server has 2 hds, put in 2
"backup" hds). Configure them in RAID-1 mode, marking the 2 backups as a
spare, and then "adding" them to the RAID array every day via cron. This
would cause the 2 live HDs to be mirrored to the backups, and then
disengage the 2 "backup" HDs so they aren't constantly synced.

Would the above work? Sorry if I seem naive, but I haven't tried this
"once in a while" RAID method before.

Sincerely,
Jason







Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Tue, 1 Jan 2002 23:40, Jason Lim wrote:
> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.
>
> The idea being that if there is a virus, a cracker, or hardware
> malfunction

And if you discover this within 12 hours...  Most times you won't.

> then the backup drives can be immediately pulled out and
> inserted into a backup computer, and switched on to provide immediate
> restoration of services (with data up to 12 hours old, but better than
> having up-to-date information that may be corrupted or "cracked" versions
> of programs).

If the drive is in the cracked machine then it should not be trusted.  If a 
drive is in a machine that has hardware damage then there's no guarantee 
it'll still have data on it...

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Tue, 1 Jan 2002 22:49, Jason Lim wrote:
> Right now one of the things we are testing is:
> 1) mount up the "backup" hard disk
> 2) cp -a /home/* /mnt/backup/home/
> 3) umount "backup" hard disk
>
> The way we do it right now is:
> 1) a backup server with a few 60Gb HDs
> 2) use "dump" to cp the partitions over to the backup server
> 3) use "export" to restore stuff
> (not very elegant... which is why we're trying to set up a better way)
>
> Unless a cracker spends quite a bit of time going through everything, they
> would most probably miss this part. True... if they do spend enough time
> going through everything, then as you said, it is potentially gone.

Yes.  However if you NFS export the file system to the backup machine then,
as long as the backup machine is safe, there shouldn't be any problems.

NFS over 100baseT full-duplex performs pretty well really, why not use it?
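
For example, one line in /etc/exports on the machine being backed up
(hostname illustrative), exported read-only so the backup server can pull
but never modify:

  /home   backupserver(ro,root_squash)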

> > The most common problem in this regard I've encountered when running
> > ISPs
> > (see at many sites with all distributions of Linux, Solaris, and AIX) is
> > when
> > someone makes a change which results in a non-bootable system.  Then
> > several
> > months later the machine is rebooted and no-one can remember what they
> > changed.
>
> Haven't had that yet... because every time we make a massive system change
> that might upset the "rebootability" of the server (eg. fiddle with lilo,
> partition settings, etc.) we do a real reboot. This might not be practical
> on a system that needs 99.% uptime, but ensures it will work in
> future.

Lots of things may seem like minor changes but have unforseen impacts in 
complex systems.  I've seen machines become unbootable because of changes to 
/etc/hosts, changes to NFS mounting options (worst case is machines mounting 
each other's file systems and having mounts occur before exports so that they 
can't boot at the same time), changes to init.d scripts (a script that hangs 
will stop the boot process), and daemons that hang on startup when there is 
no disk space (so lack of space triggers a crash and an automatic reboot and 
then the machine is dead).

Also when you have multiple machine dependencies you sometimes have to reboot 
all machines to test everything properly.

Unfortunately some of the companies I work for refuse to allow me to perform 
basic tests such as "reboot all machines at once", so if there is ever a 
power failure then they are likely to discover some problem...

> > > but the system must stay up and operational at all times.
> >
> > LVM.  Create a snapshot of the LV and then use dd to copy it.
>
> Eep... setting up LVM for the SOLE purpose of doing this mirroring? Seems
> a bit like overkill and would add an extra level of complexity :-/

True.  Much easier to use software RAID.
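
(For reference, the snapshot-and-dd approach quoted above would look
roughly like this - volume names and sizes are illustrative:)

  # freeze a point-in-time view of the volume, copy it, then drop it
  lvcreate --snapshot --size 1G --name homesnap /dev/vg0/home
  dd if=/dev/vg0/homesnap of=/mnt/backup/home.img bs=64k
  lvremove -f /dev/vg0/homesnap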

> > I think that probably your whole plan here is misguided.  Please tell us
> > exactly what you are trying to protect against and we can probably give
> > better advice.
>
> I know of a few hardware solutions that do something like this, but would
> like to do this in software. They claim to perform a "mirror" of one HD to
> another HD while the system is live and in use.

It's called RAID-1.

> I have no idea how it does
> this without corruption of some type (as you mentioned above, doing dd on
> a live HD will probably cause errors, especially if the live HD is in
> use). For example, http://www.arcoide.com/ . To quote the function we're
> looking at " the DupliDisk2 automatically switches to the remaining drive

So set up three disks in a software RAID-1 configuration with one disk being
marked as a "spare" disk.  Then have a script run from a cron job every day
which marks the first disk as failed; this will cause the spare disk to be
added to the RAID set and have the data copied to it.  After setting one disk 
as failed the script can then add it back to the RAID as a spare disk.

This means that apart from the RAID copying time (at least 20 minutes on an 
idle array - longer on a busy array) you will always have two live active 
copies of your data.  Before your script runs you'll also have an old 
snapshot of your data which can be used to recover from operator error.

This will do everything that the arcoide product appears to do.
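
A sketch of that cron script, shown with mdadm for brevity (raidtools'
raidsetfaulty/raidhotremove/raidhotadd do the same job; device names are
illustrative):

  #!/bin/sh
  # fail the first active disk; the spare takes over and re-syncs
  mdadm /dev/md0 --fail /dev/hda1
  mdadm /dev/md0 --remove /dev/hda1
  # yesterday's copy goes back in as today's spare
  mdadm /dev/md0 --add /dev/hda1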

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim
> > For example, http://www.arcoide.com/ . To quote the function we're looking
> > at " the DupliDisk2 automatically switches to the remaining drive and
> > alerts the user that a drive has failed. Then, depending on the model, the
> > user can hot-swap out the failed drive and re-mirror in the background.".
> > So it "re-mirrors" in the background... how do they perform that
> > reliably?
>
> That's just RAID 1, which has done it since the dawn of time [1]. You can
> achieve the same thing with Linux software RAID; you just pull out one of
> the drives and you have half a mirrored RAID set. It's pretty neat to watch
> /proc/mdstat as your drives are resyncing, too. ;)
>
> The advantage you get with this hardware is the hot-swap rack... and that's
> about it.
>

Except that I've pointed out already that we're specifically NOT looking
at a live RAID solution. This is a backup drive that is supposed to be
synced every 12 hours or 24 hours.

The idea being that if there is a virus, a cracker, or hardware
malfunction, then the backup drives can be immediately pulled out and
inserted into a backup computer, and switched on to provide immediate
restoration of services (with data up to 12 hours old, but better than
having up-to-date information that may be corrupted or "cracked" versions
of programs).

Sincerely,
Jason




Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> For example, http://www.arcoide.com/ . To quote the function we're looking
> at " the DupliDisk2 automatically switches to the remaining drive and
> alerts the user that a drive has failed. Then, depending on the model, the
> user can hot-swap out the failed drive and re-mirror in the background.".
> So it "re-mirrors" in the background... how do they perform that
> reliably?

That's just RAID 1, which has done it since the dawn of time [1]. You can
achieve the same thing with Linux software RAID; you just pull out one of
the drives and you have half a mirrored RAID set. It's pretty neat to watch
/proc/mdstat as your drives are resyncing, too. ;)
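
(For instance:)

  watch -n 2 cat /proc/mdstat    # refresh the resync status every 2 seconds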

The advantage you get with this hardware is the hot-swap rack... and that's
about it.

- Jeff

[1] May not be chronologically correct.

-- 
   "A rest with a fermata is the moral opposite of the fast food
   restaurant with express lane." - James Gleick, Faster




Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
>
> Sorry, but I don't see any benefit to having maximum 12 hour old data when
> you could have 0. The hardware solution you mentioned was RAID 1 anyway.
> Easiest thing to do is use it, and have both spare drives and spare machines
> ready to roll should you need to swap either.

> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction, then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked" versions
> > of programs).
>
> Well, there's your benefit to having old data. Who's to say you're going to
> know within 12 hours? This is not a particularly interesting problem, mostly
> because you're not curing the disease, you're trying to clean up after
> infection.

Not really... I think of it as helping to cure the disease and helping to
clean up the problem, not eliminating both, because it is impossible to
cure the disease completely. Unfortunately if you work with a medium to
large amount of varied equipment (or even a small amount if you're
unlucky) you're bound to be cracked sooner or later. Even the strictest
security policy won't guarantee 100% protection. Another way to
do this would be to go Russell's way (the ideal way) and run a RAID array
with 3 drives, 2 live and 1 spare, and then sync the spare up every 24
hours. However, this would require 3 drives instead of 2... $$$ and space.
For the average server with 2-4 drives, this would mean a minimum of
6-12 drives compared to 4-8. The server cases wouldn't even hold 12
drives; they could hold up to 8 or so. So money isn't the only
consideration. And even if we could somehow place that many drives in the
average rackmount case, overheating... power supply issues... etc. come
into play.

You might say "tape backup"... but keep in mind that it doesn't offer a
"plug n play" solution if a server goes down. With the above method, a
dead server could be brought to life in a minute or so (literally) rather
than half an hour... an hour... or more.







Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim


> On Tue, 1 Jan 2002 07:28, Jason Lim wrote:
> > What do you think would be the best way to duplicate a HD to another
> > (similar sized) HD?
> >
> > I'm thinking that a live RAID solution isn't the best option, as (for
> > example) if crackers got in and fiddled with the system, all the HDs
> > would end up having the same fiddled files.
>
> If crackers get in then anything which involves online storage is
> (potentially) gone.

Right now one of the things we are testing is:
1) mount up the "backup" hard disk
2) cp -a /home/* /mnt/backup/home/
3) umount "backup" hard disk
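
As a shell sketch (device name and mount point are illustrative; copying
/home/. rather than /home/* also picks up any top-level dot-files):

  mount /dev/hdc1 /mnt/backup        # attach the backup disk
  cp -a /home/. /mnt/backup/home/    # -a preserves owners, perms, symlinks
  umount /mnt/backup                 # detach it again between runs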

The way we do it right now is:
1) a backup server with a few 60Gb HDs
2) use "dump" to cp the partitions over to the backup server
3) use "export" to restore stuff
(not very elegant... which is why we're trying to set up a better way)

Unless a cracker spends quite a bit of time going through everything, they
would most probably miss this part. True... if they do spend enough time
going through everything, then as you said, it is potentially gone.

> > If the HD is duplicated every 12 hours or 24 hours, then there would
> > always be a working copy, so if something is detected as being altered,
> > we could always swap the disks around and get a live working system up
> > and running almost instantly (unless we detect the problem more than 24
> > hours later, and then it would be too late since the HDs already synced).
>
> The most common problem in this regard I've encountered when running ISPs
> (seen at many sites with all distributions of Linux, Solaris, and AIX) is
> when someone makes a change which results in a non-bootable system.  Then
> several months later the machine is rebooted and no-one can remember what
> they changed.

Haven't had that yet... because every time we make a massive system change
that might upset the "rebootability" of the server (eg. fiddle with lilo,
partition settings, etc.) we do a real reboot. This might not be practical
on a system that needs 99.% uptime, but it ensures the machine will boot
in future.

> Better off having an online RAID for protection against hardware failures
> and securing everything as much as possible to alleviate the problem of
> the machines being cracked.
>
> > So... what do you think the best way would be to duplicate a HD on a
> > live working system (without bringing it down or anything like that).
> > Performance can drop for a while (maybe do this at 5am in the morning),
> > but the system must stay up and operational at all times.
>
> LVM.  Create a snapshot of the LV and then use dd to copy it.

Eep... setting up LVM for the SOLE purpose of doing this mirroring? Seems
a bit like overkill and would add an extra level of complexity :-/

> > Maybe dd... or cp -a /drive1/* /drive2/... or something?
>
> Doing that on the device for a mounted file system will at best add a
> fsck to your recovery process, and at worst result in a file system so
> badly corrupted that an mkfs is needed.  LVM solves this, but adds its
> own set of problems.

> I think that probably your whole plan here is misguided.  Please tell us
> exactly what you are trying to protect against and we can probably give
> better advice.
>

I know of a few hardware solutions that do something like this, but would
like to do this in software. They claim to perform a "mirror" of one HD to
another HD while the system is live and in use. I have no idea how it does
this without corruption of some type (as you mentioned above, doing dd on
a live HD will probably cause errors, especially if the live HD is in
use). For example, http://www.arcoide.com/ . To quote the function we're
looking at: "the DupliDisk2 automatically switches to the remaining drive
and alerts the user that a drive has failed. Then, depending on the model,
the user can hot-swap out the failed drive and re-mirror in the
background." So it "re-mirrors" in the background... how do they perform
that reliably?

Sincerely,
Jason





Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh



> > It's called RAID-1.
> 
> I dunno... whenever I think of "RAID" I always think of live mirrors that
> operate constantly

That's what they do post-sync.

> and not a "once in a while" mirror operation just to
> perform a backup (when talking about RAID-1). Am I mistaken in this
> thinking?

That's what they do when they sync (in very rough terms).

> This would cause the 2 live HDs to be mirrored to the backups, and then
> disengage the 2 "backup" HDs so they aren't constantly synced.
> 
> Would the above work? Sorry if I seem naive, but I haven't tried this
> "once in a while" RAID method before.

It's a dirty hack to make it do what you want it to, that's all. Russell's
solution was better, as at least you were getting the benefit of the running
mirror if a drive failed (and buying three disks is not expensive).

- Jeff

-- 
  "And up in the corporate box there's a group of pleasant  
   thirtysomething guys making tuneful music for the masses of people who   
can spell "nihilism", but don't want to listen to it in the car." - 
Richard Jinman, SMH 






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > Except that I've pointed out already that we're specifically NOT looking
> > at a live RAID solution. This is a backup drive that is supposed to be
> > synced every 12 hours or 24 hours.
> >
> > The idea being that if there is a virus, a cracker, or hardware
> > malfunction
>
> And if you discover this within 12 hours...  Most times you won't.

We've got file integrity checkers running on all the servers, and they run
very often (mostly every hour or so), so unless the first thing the cracker
does is "fix" or disable the checkers, we SHOULD notice this within the 12
hours... or make that 24 hours to give a bit more leeway.

> > then the backup drives can be immediately pulled out and
> > inserted into a backup computer, and switched on to provide immediate
> > restoration of services (with data up to 12 hours old, but better than
> > having up-to-date information that may be corrupted or "cracked"
> > versions of programs).
>
> If the drive is in the cracked machine then it should not be trusted.
> If a drive is in a machine that has hardware damage then there's no
> guarantee it'll still have data on it...

Well, as said before, unless the cracker spends a considerable amount of
time learning the setup of the system, going through the cron files,
config files, etc., then hopefully there will be enough things set up that
the cracker will be unable to destroy or "fix" everything.

And regarding data loss, so far the most common thing for us is to have a
HD become the point of failure. We've had quite a few CPUs burn out too
(no bad ram yet, lucky...), but nothing except a bad HD has seemed to
cause severe data loss.






Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh



> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.

Sorry, but I don't see any benefit to having maximum 12 hour old data when
you could have 0. The hardware solution you mentioned was RAID 1 anyway.
Easiest thing to do is use it, and have both spare drives and spare machines
ready to roll should you need to swap either.

> The idea being that if there is a virus, a cracker, or hardware
> malfunction, then the backup drives can be immediately pulled out and
> inserted into a backup computer, and switched on to provide immediate
> restoration of services (with data up to 12 hours old, but better than
> having up-to-date information that may be corrupted or "cracked" versions
> of programs).

Well, there's your benefit to having old data. Who's to say you're going to
know within 12 hours? This is not a particularly interesting problem, mostly
because you're not curing the disease, you're trying to clean up after
infection.

- Jeff

-- 
 "The GPL is good. Use it. Don't be silly." - Michael Meeks 






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > I know of a few hardware solutions that do something like this, but would
> > like to do this in software. They claim to perform a "mirror" of one HD to
> > another HD while the system is live and in use.
>
> It's called RAID-1.

I dunno... whenever I think of "RAID" I always think of live mirrors that
operate constantly and not a "once in a while" mirror operation just to
perform a backup (when talking about RAID-1). Am I mistaken in this
thinking?

> > I have no idea how it does
> > this without corruption of some type (as you mentioned above, doing dd on
> > a live HD will probably cause errors, especially if the live HD is in
> > use). For example, http://www.arcoide.com/ . To quote the function we're
> > looking at " the DupliDisk2 automatically switches to the remaining drive
>
> So set up three disks in a software RAID-1 configuration with one disk being
> marked as a "spare" disk.  Then have a script run from a cron job every day
> which marks the first disk as failed; this will cause the spare disk to be
> added to the RAID set and have the data copied to it.  After setting one
> disk as failed the script can then add it back to the RAID as a spare disk.
>
> This means that apart from the RAID copying time (at least 20 minutes on an
> idle array - longer on a busy array) you will always have two live active
> copies of your data.  Before your script runs you'll also have an old
> snapshot of your data which can be used to recover from operator error.
>
> This will do everything that the arcoide product appears to do.

From what you have said, basically the only advantage of the Arcoide
products is that they reduce load on the system, as they can perform the
RAID-1 mirror process in the background independent of the OS.

An alternative spin on what you have said (nearly identical) would be to
put double the hard disks in each server (eg. a server has 2 hds, put in 2
"backup" hds). Configure them in RAID-1 mode, marking the 2 backups as a
spare, and then "adding" them to the RAID array every day via cron. This
would cause the 2 live HDs to be mirrored to the backups, and then
disengage the 2 "backup" HDs so they aren't constantly synced.

Would the above work? Sorry if I seem naive, but I haven't tried this
"once in a while" RAID method before.

Sincerely,
Jason









Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker

On Tue, 1 Jan 2002 23:40, Jason Lim wrote:
> Except that I've pointed out already that we're specifically NOT looking
> at a live RAID solution. This is a backup drive that is supposed to be
> synced every 12 hours or 24 hours.
>
> The idea being that if there is a virus, a cracker, or hardware
> malfunction

And if you discover this within 12 hours...  Most times you won't.

> then the backup drives can be immediately pulled out and
> inserted into a backup computer, and switched on to provide immediate
> restoration of services (with data up to 12 hours old, but better than
> having up-to-date information that may be corrupted or "cracked" versions
> of programs).

If the drive is in the cracked machine then it should not be trusted.  If a 
drive is in a machine that has hardware damage then there's no guarantee 
it'll still have data on it...

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page






Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker

On Tue, 1 Jan 2002 22:49, Jason Lim wrote:
> Right now one of the things we are testing is:
> 1) mount up the "backup" hard disk
> 2) cp -a /home/* /mnt/backup/home/
> 3) umount "backup" hard disk
>
> The way we do it right now is:
> 1) a backup server with a few 60Gb HDs
> 2) use "dump" to cp the partitions over to the backup server
> 3) use "export" to restore stuff
> (not very elegant... which is why we're trying to set up a better way)
>
> Unless a cracker spends quite a bit of time going through everything, they
> would most probably miss this part. True... if they do spend enough time
> going through everything, then as you said, it is potentially gone.

Yes.  However if you NFS export the file system to the backup machine, then 
as long as the backup machine is safe there shouldn't be any problems.

NFS over 100baseT full-duplex performs pretty well really, why not use it?
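
For example, a minimal sketch (hostnames and paths are illustrative;
no_root_squash lets root on the backup machine read root-owned files):

  # /etc/exports on the machine being backed up:
  /home    backup-server(ro,no_root_squash)

  # on backup-server:
  mount -t nfs client:/home /mnt/client-home -o ro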

> > The most common problem in this regard I've encountered when running
> > ISPs (seen at many sites with all distributions of Linux, Solaris, and
> > AIX) is when someone makes a change which results in a non-bootable
> > system.  Then several months later the machine is rebooted and no-one
> > can remember what they changed.
>
> Haven't had that yet... because every time we make a massive system change
> that might upset the "rebootability" of the server (eg. fiddle with lilo,
> partition settings, etc.) we do a real reboot. This might not be practical
> on a system that needs 99.% uptime, but it ensures the machine will boot
> in future.

Lots of things may seem like minor changes but have unforeseen impacts in 
complex systems.  I've seen machines become unbootable because of changes to 
/etc/hosts, changes to NFS mounting options (worst case is machines mounting 
each other's file systems and having mounts occur before exports so that they 
can't boot at the same time), changes to init.d scripts (a script that hangs 
will stop the boot process), and daemons that hang on startup when there is 
no disk space (so lack of space triggers a crash and an automatic reboot and 
then the machine is dead).

Also when you have multiple machine dependencies you sometimes have to reboot 
all machines to test everything properly.

Unfortunately some of the companies I work for refuse to allow me to perform 
basic tests such as "reboot all machines at once", so if there is ever a 
power failure then they are likely to discover some problem...

> > > but the system must stay up and operational at all times.
> >
> > LVM.  Create a snapshot of the LV and then use dd to copy it.
>
> Eep... setting up LVM for the SOLE purpose of doing this mirroring? Seems
> a bit like overkill and would add an extra level of complexity :-/

True.  Much easier to use software RAID.

> > I think that probably your whole plan here is misguided.  Please tell us
> > exactly what you are trying to protect against and we can probably give
> > better advice.
>
> I know of a few hardware solutions that do something like this, but would
> like to do this in software. They claim to perform a "mirror" of one HD to
> another HD while the system is live and in use.

It's called RAID-1.

> I have no idea how it does
> this without corruption of some type (as you mentioned above, doing dd on
> a live HD will probably cause errors, especially if the live HD is in
> use). For example, http://www.arcoide.com/ . To quote the function we're
> looking at " the DupliDisk2 automatically switches to the remaining drive

So set up three disks in a software RAID-1 configuration with one disk being 
marked as a "spare" disk.  Then have a script run from a cron job every day 
which marks the first disk as failed; this will cause the spare disk to be 
added to the RAID set and have the data copied to it.  After setting one disk 
as failed the script can then add it back to the RAID as a spare disk.

This means that apart from the RAID copying time (at least 20 minutes on an 
idle array - longer on a busy array) you will always have two live active 
copies of your data.  Before your script runs you'll also have an old 
snapshot of your data which can be used to recover from operator error.

This will do everything that the arcoide product appears to do.
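
A minimal sketch of such a cron script using the raidtools commands of the
day (md device and partition names are illustrative, and a real script
should check /proc/mdstat rather than assume the array is healthy):

  #!/bin/sh
  # Cycle /dev/hda1 out of /dev/md0: the configured spare takes its place
  # and is resynced; the old disk is then re-attached as the new spare.
  MD=/dev/md0
  DISK=/dev/hda1
  raidsetfaulty $MD $DISK   # kernel fails the disk, resync onto spare begins
  raidhotremove $MD $DISK   # detach the "failed" disk from the set
  raidhotadd $MD $DISK      # re-attach it; it now sits as the spare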

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page






Re: Best way to duplicate HDs

2002-01-01 Thread Jason Lim

> > For example, http://www.arcoide.com/ . To quote the function we're
> > looking at " the DupliDisk2 automatically switches to the remaining drive
> > and alerts the user that a drive has failed. Then, depending on the model,
> > the user can hot-swap out the failed drive and re-mirror in the
> > background.". So it "re-mirrors" in the background... how do they perform
> > that reliably?
>
> That's just RAID 1, which has done it since the dawn of time [1]. You
can
> achieve the same thing with Linux software RAID; you just pull out one
of
> the drives and you have half a mirrored RAID set. It's pretty neat to
watch
> /proc/mdstat as your drives are resyncing, too. ;)
>
> The advantage you get with this hardware is the hot-swap rack... and
that's
> about it.
>

Except that I've pointed out already that we're specifically NOT looking
at a live RAID solution. This is a backup drive that is supposed to be
synced every 12 hours or 24 hours.

The idea being that if there is a virus, a cracker, or hardware
malfunction, then the backup drives can be immediately pulled out and
inserted into a backup computer, and switched on to provide immediate
restoration of services (with data up to 12 hours old, but better than
having up-to-date information that may be corrupted or "cracked" versions
of programs).

Sincerely,
Jason






Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Tue, 1 Jan 2002 21:06, Jeff Waugh wrote:
> > I've just done some tests on that with 33G partitions of 46G IDE drives.
> > The drives are on different IDE buses, and the CPU is an Athlon 800.
> >
> > So it seems to me that page size is probably a good buffer size to use.
>
> Cool! Nothing like Real Proper Testing to prove a point. ;)

;)

> I'm surprised the difference between 512b and 4k wasn't greater though; I'm
> sure I've had more spectacular differences in the past.

I have too.  If you use cat then it seems that buffer size has more of an 
impact (I think I posted to one of the debian lists about the performance 
benefits of my hacked cat which uses a minimum buffer size of page size).

Although that was with an older kernel, and I think that the buffering and 
caching has been changed since then.

If the caching in the kernel works optimally then there should be no 
difference in wall-clock time between 512b and 4k buffers.  My Athlon CPU 
wasn't really being stressed with 512b buffers (unlike my previous tests with 
cat - I guess dd is better written).  For write caching the write-back cache 
should bundle everything into 4K writes, the read-caching should be reading 
ahead (thus sending requests to the disk drive 100K ahead or more) so the 
results should be the same regardless of buffer size as far as the hardware 
is concerned.

> ... and I won't bring up anything about SCSI or IDE at this point. ;)

;)

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> I've just done some tests on that with 33G partitions of 46G IDE drives.
> The drives are on different IDE buses, and the CPU is an Athlon 800.
> 
> So it seems to me that page size is probably a good buffer size to use.

Cool! Nothing like Real Proper Testing to prove a point. ;)

I'm surprised the difference between 512b and 4k wasn't greater though; I'm
sure I've had more spectacular differences in the past.

... and I won't bring up anything about SCSI or IDE at this point. ;)

- Jeff

-- 
"I wanted to be Superman, but all I got were these special powers of
 self-deprecation." 




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Tue, 1 Jan 2002 07:28, Jason Lim wrote:
> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?
>
> I'm thinking that a live RAID solution isn't the best option, as (for
> example) if crackers got in and fiddled with the system, all the HDs would
> end up having the same fiddled files.

If crackers get in then anything which involves online storage is 
(potentially) gone.

> If the HD is duplicated every 12 hours or 24 hours, then there would
> always be a working copy, so if something is detected as being altered, we
> could always swap the disks around and get a live working system up and
> running almost instantly (unless we detect the problem more than 24 hours
> later, and then it would be too late since the HDs already synced).

The most common problem in this regard I've encountered when running ISPs 
(seen at many sites with all distributions of Linux, Solaris, and AIX) is when 
someone makes a change which results in a non-bootable system.  Then several 
months later the machine is rebooted and no-one can remember what they 
changed.

Better off having an online RAID for protection against hardware failures and 
securing everything as much as possible to alleviate the problem of the 
machines being cracked.

> So... what do you think the best way would be to duplicate a HD on a live
> working system (without bringing it down or anything like that).
> Performance can drop for a while (maybe do this at 5am in the morning),
> but the system must stay up and operational at all times.

LVM.  Create a snapshot of the LV and then use dd to copy it.
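
A rough sketch of that, assuming a volume group vg0 with a logical volume
"home" (names, snapshot size and target device are illustrative):

  lvcreate --snapshot --size 500M --name homesnap /dev/vg0/home
  dd if=/dev/vg0/homesnap of=/dev/hdc1 bs=4k   # copy the frozen image
  lvremove -f /dev/vg0/homesnap                # drop the snapshot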

> Maybe dd... or cp -a /drive1/* /drive2/... or something?

Doing that on the device for a mounted file system will at best add a fsck to 
your recovery process, and at worst result in a file system so badly 
corrupted that an mkfs is needed.  LVM solves this, but adds its own set of 
problems.

I think that probably your whole plan here is misguided.  Please tell us 
exactly what you are trying to protect against and we can probably give 
better advice.

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Russell Coker
On Tue, 1 Jan 2002 09:13, Jeff Waugh wrote:
> > What do you think would be the best way to duplicate a HD to another
> > (similar sized) HD?
>
> dd, using a large buffer size for reasonable performance

I've just done some tests on that with 33G partitions of 46G IDE drives.  The 
drives are on different IDE buses, and the CPU is an Athlon 800.

It seems that a buffer size of 4K (page size on Intel) is much better than 
the default of 512b (no surprise there).  However a buffer size of 1M uses 
more system time than 4K and gives the same elapsed time.  So it seems to me 
that page size is probably a good buffer size to use.

# time dd if=/dev/discs/disc0/part6 of=/dev/discs/disc1/part6
dd: reading `/dev/discs/disc0/part6': Input/output error
67376544+0 records in
67376544+0 records out

real    61m59.876s
user    1m6.110s
sys     13m46.200s
# time dd if=/dev/discs/disc0/part6 of=/dev/discs/disc1/part6 bs=4k
dd: reading `/dev/discs/disc0/part6': Input/output error
8422068+0 records in
8422068+0 records out

real    44m56.078s
user    0m5.350s
sys     5m28.270s
# time dd if=/dev/discs/disc0/part6 of=/dev/discs/disc1/part6 bs=1024k
dd: reading `/dev/discs/disc0/part6': Input/output error
32898+1 records in
32898+1 records out

real    44m55.056s
user    0m0.170s
sys     7m46.130s

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: Best way to duplicate HDs

2002-01-01 Thread Ted Deppner
On Tue, Jan 01, 2002 at 08:39:39AM -0500, Keith Elder wrote:
> This brings up a  question. How do you rsync something but keep the
> ownership and permissions the same.  I am pulling data off site nightly
> and that works, but the permissions are all screwed up.

rsync -avxrP --delete $FILESYSTEMS backup-server:backups/$HOSTNAME

Some caveats if you want to fully automate this...
  - remove -vP (verbose w/ progress)
  - --delete is NECESSARY to make sure deleted files get deleted from the
    backup
  - FILESYSTEMS should be any local filesystems you want backed up (-x
    won't cross filesystems, makes backing up in an NFS environment easier)
  - obviously this doesn't preclude a bad guy checking out
    backup-server:backups/otherhostname (use ssh keys, and invoking cmd="cd
    backups/hostname; rsync with whatever daemon options" will limit that;
    see the sketch after this list)
  - on backup-server, rotate the backup every 12 hours or whatever.
    - rsync -ar --delete store/hostname.2 store/hostname.3
    - rsync -ar --delete store/hostname.1 store/hostname.2
    - rsync -ar --delete backups/hostname store/hostname.1
    # that could be better optimized, but you get the idea
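
A sketch of the restricted key mentioned above, as one line in
~/.ssh/authorized_keys2 on backup-server (key material elided; the exact
rsync --server argument string is an assumption that varies with rsync
version and options, so verify it against a real run before relying on it):

  command="cd backups/hostname && rsync --server -logDtprx . .",no-port-forwarding,no-pty ssh-dss AAAAB3Nza... backup@client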

I've used this rsync system to successfully maintain up to date backups w/
great ease, AND restore very quickly...  use a LinuxCare Bootable Business
Card to get the target fdisked and ready, then mount the filesystems as
you desire, and rsync -avrP backup-server:backups/hostname /target.  I got
a 700mb server back online in under 20 minutes from powerup to the server
serving requests (the rsync itself is 3 to 5 minutes).  Making sure you
do (cd /target; lilo -r . -C etc/lilo.conf) is the only tricky part.

-- 
Ted Deppner
http://www.psyber.com/~ted/




Re: Best way to duplicate HDs

2002-01-01 Thread Christian Jaeger

At 8:39 -0500 on 01.01.2002, Keith Elder wrote:
>This brings up a  question. How do you rsync something but keep the
>ownership and permissions the same.  I am pulling data off site nightly
>and that works, but the permissions are all screwed up.

I'm using

rsync -aHx --numeric-ids

and then protect the root folder on the target machine so nobody can 
enter it and make use of the wrong ownerships. When playing the thing 
back it will be correct again.
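
For example (path illustrative):

  chmod 700 /backups   # only root can descend into the backup tree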

(No problem when copying to another local disk.)

chj.






Re: Best way to duplicate HDs

2002-01-01 Thread David Stanaway


On Wednesday, January 2, 2002, at 12:39  AM, Keith Elder wrote:

> This brings up a  question. How do you rsync something but keep the
> ownership and permissions the same.  I am pulling data off site nightly
> and that works, but the permissions are all screwed up.


I use rsync -avz as root

You may want to check that the UIDs/GIDs that appear on the primary 
system are the same as on the secondary system.
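
One rough way to compare them, assuming bash on both machines and that the
backup host is reachable as backup-server (names illustrative):

  diff <(cut -d: -f1,3 /etc/passwd | sort) \
       <(ssh backup-server cut -d: -f1,3 /etc/passwd | sort)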

==
David Stanaway
Personal: [EMAIL PROTECTED]
Work: [EMAIL PROTECTED]






Re: Best way to duplicate HDs

2002-01-01 Thread Keith Elder

This brings up a question. How do you rsync something but keep the
ownership and permissions the same.  I am pulling data off site nightly
and that works, but the permissions are all screwed up.

Keith

* Christian Jaeger ([EMAIL PROTECTED]) wrote:
> Date: Tue, 1 Jan 2002 04:24:21 +0100
> To: "Jason Lim" <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]>
> From: Christian Jaeger <[EMAIL PROTECTED]>
> Subject: Re: Best way to duplicate HDs
> 
> Use cpbk or even better rsync (cpbk is problematic with large 
> filesystems because it takes a lot of memory to hold the tree info - 
> rsync does the same with lower memory needs). They can be made to only 
> copy the changed parts of the fs and keep old versions of altered 
> files.
> 
> chj.


###
  Keith Elder
   Email: [EMAIL PROTECTED] 
Phone: 1-734-507-1438
 Text Messaging (145 characters): [EMAIL PROTECTED]
Web: http://www.zorka.com (Howto's, News, and hosting!)
  
 "With enough memory and hard drive space
   anything in life is possible!"
###






Re: Best way to duplicate HDs

2002-01-01 Thread Christian Jaeger
Use cpbk or even better rsync (cpbk is problematic with large 
filesystems because it takes a lot of memory to hold the tree info - 
rsync does the same with lower memory needs). They can be made to only 
copy the changed parts of the fs and keep old versions of altered 
files.

chj.



Re: Best way to duplicate HDs

2002-01-01 Thread Jeff Waugh


> What do you think would be the best way to duplicate a HD to another
> (similar sized) HD?

dd, using a large buffer size for reasonable performance
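
For example (device names illustrative):

  dd if=/dev/hda of=/dev/hdc bs=1024k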

- Jeff

-- 
  "Linux continues to have almost as much soul as James Brown." - Forrest   
 Cook, LWN  



