Re: Machine Replication

2005-07-21 Thread asym

At 15:20 7/21/2005, Eli K. Breen wrote:

All,

Does anyone have a good handle on how to replicate (read: image) a FreeBSD 
machine onto an ostensibly similar machine?


So far I've used countless variations and combinations of the following:

dd            (Slow; not useful if the hardware isn't identical?)
tar           (Doesn't replicate the MBR)
rsync         (No MBR support)
Norton Ghost  (Doesn't support UFS/UFS2?)
G4U           (Little experience with this)


I've found a combination of dd + tar works great, as documented.

Stick the new drive in the box to be duplicated, use dd on the first few 
sectors (I forget exactly how many) to copy the MBR and partition table 
over, then use a tar pipe to copy from one drive to the other, preserving 
all permissions and so forth.
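
For anyone who wants it spelled out, here's a rough sketch of that 
procedure -- illustrative only, not a tested recipe.  It assumes the source 
disk is ad0, the blank target is ad1, and everything lives in one slice 
with root on ad0s1a; substitute your own device names and layout.

  #!/bin/sh
  # Rough sketch of the dd + tar duplication described above (untested).
  # Assumes: source disk ad0, blank target ad1, single slice s1, root on
  # ad0s1a.  Adjust device names and partitions to your own layout.

  # Copy the MBR (boot code + slice table) from the first 512-byte sector.
  # Some people copy the whole first track instead, to be safe.
  dd if=/dev/ad0 of=/dev/ad1 bs=512 count=1

  # Duplicate the BSD label for the slice (-B also reinstalls the
  # partition bootstrap), then newfs the target partition.
  bsdlabel /dev/ad0s1 > /tmp/label
  bsdlabel -B -R /dev/ad1s1 /tmp/label
  newfs /dev/ad1s1a

  # Tar-pipe the filesystem across, preserving permissions; the exclude
  # keeps tar from descending into the target mount point (check your
  # tar(1) for the exact exclude syntax it wants).
  mount /dev/ad1s1a /mnt
  tar -cpf - --exclude ./mnt -C / . | tar -xpf - -C /mnt
  umount /mnt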


Barring that, commercial single-disk duplicators aren't THAT 
expensive.  Hell, you could just use a cheap RAID card to build a RAID-1 
mirror of the drive, then yank it out and toss it in another box -- which 
I've done on occasion when pressed.





Re: dangerous situation with shutdown process

2005-07-14 Thread asym

At 15:19 7/14/2005, Wilko Bulte wrote:

On Thu, Jul 14, 2005 at 12:14:49PM -0700, Kevin Oberman wrote:
  Date: Thu, 14 Jul 2005 20:38:15 +0200
  From: Anatoliy Dmytriyev [EMAIL PROTECTED]
  Sender: [EMAIL PROTECTED]
 
  Hello, everybody!
 
  I have found an unusual and dangerous situation with the shutdown process:
  I copied 200 GB of data onto an 870 GB partition (with softupdates
  enabled) using cp.
  When I unmounted this partition right after the cp, the umount took a long
  time, but the procedure finished correctly.
  If instead I ran "shutdown -h(r)" right after the cp, the shutdown
  procedure waited for "sync" (unmounting of the file system), but the sync
  process was terminated by a timeout, and fsck checked and corrected
  the file system after boot.
 
  System 5.4-stable, RAM 4GB, processor P-IV 3GHz.
 
  How can I fix it on my system?


The funny thing about all the replies here is that this guy is not saying 
that sync doesn't work.


He's saying that the timeout built into shutdown causes it to *terminate* 
the sync forcibly before it's done, and then reboot.


All finger-pointing about IDE, SCSI, softupdates, and journals aside, I 
think all he wants/needs is a way to increase that timer.




Re: atapci VIA 82C596B UDMA66 controller: problem for 5.X ?

2005-02-16 Thread asym
At 08:47 2/16/2005, Rob wrote:
--- Mars Trading [EMAIL PROTECTED] wrote:
 This idea may seem useless but what have you got to
 lose?

 Have you tried changing bios setting for hard drive
 mode to auto or
 something other than LBA?  Maybe LARGE or CHS?
Is there a risk that I lose all data on my disk when
changing this in the BIOS?

I wouldn't exactly call it a risk -- if you change the disk geometry in 
the BIOS, you will lose access to all the data on that drive, guaranteed.  
Of course, if you don't write anything to it, you can just change the 
setting back and it'll still be there.  But if you don't write anything to 
it, it's not going to boot under the new geometry either.


BTW: During the fresh FreeBSD install, I have never
encountered a choice for formatting with or without
LBA. In the Fdisk window, I choose 'use entire disk
for FreeBSD', and in the partition window I have set
'newfs' for all partitions.

LBA is a BIOS thing, as was mentioned, not a FreeBSD install thing.  It's 
an abstraction layer between the drive and the controller.



Re: drive failure during rebuild causes page fault

2004-12-15 Thread asym
At 18:16 12/15/2004, Gianluca wrote:
barracudas and at this point I wonder if it's best to go w/ a small hw
raid controller like the 3ware 7506-4LP or use sw raid. I don't really
care about speed (I know RAID5 is not the best for that) nor hot
swapping, my main concern is data integrity. I tried to look online
but I couldn't find anything w/ practical suggestions except for
tutorials on how to configure vinum.

If you don't care about hot-swapping, then you don't really care about (or 
need) RAID-5.  It doesn't offer any additional data integrity -- but then, 
no RAID level does.  What RAID does for you is allow you to survive an 
outright drive failure without losing any data.  No RAID level can save you 
from buggy software writing garbage to the disk, transient disk errors, or 
the myriad other events that are far more common than a single drive just 
dying on you.

Using RAID-5 as an example: during normal operation, a chunk is written to 
the disk and the controller (or software) calculates the bitwise XOR of 
all the blocks involved and writes that value into the parity 
stripe.  During read operations, this parity data is not read or verified 
-- doing so would be pointless, because there is no way to tell whether it's 
the parity stripe or a data stripe that's lying if the two don't agree.

So, during normal operation (all drives up and functioning), RAID-5 
functions, read-wise, as a RAID-0 with one less disk than you really have, 
and as a somewhat slower array during writes.

If a drive completely fails, then the parity stripe is always read, and 
the missing data stripe is reconstructed from the parity data -- unless the 
parity stripe happens to fall on the missing drive for the stripe set 
you're currently accessing, in which case it is ignored, and for that single 
access the array functions just as it would if no drive had failed.
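
If it helps to see the arithmetic, here's a toy sh illustration of the 
parity math using made-up byte values (real implementations do this per 
stripe chunk, not per byte):

  #!/bin/sh
  # Toy RAID-5 parity demo with three made-up data chunks (one byte each).
  d0=165 d1=60 d2=15          # 0xA5, 0x3C, 0x0F

  # On a normal write, the data chunks are XORed together and the result
  # is stored in the parity stripe.
  p=$(( d0 ^ d1 ^ d2 ))
  printf 'parity chunk:  0x%02X\n' "$p"

  # If the drive holding d1 dies, d1 is rebuilt by XORing the survivors
  # with the parity -- which is why a degraded read touches every drive.
  printf 'rebuilt d1:    0x%02X (original was 0x3C)\n' "$(( d0 ^ d2 ^ p ))"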

If you're thinking of using RAID instead of good timely backups, you need 
to go back to the drawing board, because that is not what RAID is intended 
to replace -- and is something it cannot replace.



Re: drive failure during rebuild causes page fault

2004-12-15 Thread asym
At 18:57 12/15/2004, Gianluca wrote:
actually all the data I plan to keep on that server is gonna be backed up, 
either to cdr/dvdr or in the original audio cds that I still have. what I 
meant by integrity is trying to avoid having to go back to the backups to 
restore 120G (or more in this case) that were on a dead drive. I've done 
that before, and even if it's no mission-critical data, it remains a huge 
PITA :)

That's true.  Restoring is always a pain in the ass, no matter the media 
you use.


thanks for the detailed explanation of how RAID5 works, somehow I didn't 
really catch the distinction between the normal and degraded operations on 
the array.

what would be your recommendations for this particular (and very limited) 
application?

Honestly I'd probably go for a RAID 1+0 setup.  It wastes half the space in 
total for mirroring, but it has none of the performance penalties of 
RAID-5, and up to half the drives in the array can fail (as long as no two 
are from the same mirror pair) without anything but speed being degraded.  
You can sort of think of this as having a second dedicated array for 
'backups' if you want, with the normal caveats -- namely that data you 
destroy yourself, such as files deliberately deleted, is gone from both 
copies.

RAID5 sacrifices write speed and redundancy for the sake of space.  Since 
you're using IDE and the drives are pretty cheap, I don't see the need for 
such a sacrifice.

Just make sure the controller can do real 1+0.  Several vendors are 
confused about the differences between 1+0, 0+1, and 10 -- they 
mistakenly call their RAID 0+1 support RAID-10.

The difference is pretty important, though.  If you have, say, 8 drives, in 
RAID 1+0 (aka 10) you would first create 4 RAID-1 mirrors with 2 disks 
each, and then use these 4 virtual disks in a RAID-0 stripe setup.  This 
would be optimal, as up to 4 drives could fail, provided they all came from 
different RAID-1 pairs.

In 0+1, you first create two 4-disk RAID-0 arrays and then use one as a 
mirror of the other to create one large RAID-1 disk.  In this setup, which 
has *no* benefits over 1+0, if any drive fails, the entire 4-disk RAID-0 
stripe set that the disk is in goes offline and you are left with no 
redundancy -- the entire array runs degraded off the remaining 4-disk 
RAID-0 array, and if any drive in that array fails, you're smoked.
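
For what it's worth, if you end up doing it in software instead of on a 
controller, the same 1+0 layering can be built on 5.x with GEOM: gmirror(8) 
for the RAID-1 pairs and gstripe(8) across them.  A rough sketch, scaled 
down to four hypothetical disks (ad4/ad6/ad8/ad10 -- substitute your own, 
and note this wipes whatever is on them):

  #!/bin/sh
  # Rough sketch of software RAID 1+0 via GEOM (hypothetical device names).
  gmirror load
  gstripe load

  # Two RAID-1 pairs first...
  gmirror label -v gm0 /dev/ad4 /dev/ad6
  gmirror label -v gm1 /dev/ad8 /dev/ad10

  # ...then a RAID-0 stripe across the two mirrors.
  gstripe label -v -s 65536 st0 /dev/mirror/gm0 /dev/mirror/gm1

  newfs /dev/stripe/st0
  mkdir -p /data
  mount /dev/stripe/st0 /data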

If you want redundancy so you can avoid having to restore data, and you 
can afford more disks, go 1+0.  If you can't afford more disks, then one of 
the striped+parity solutions (-3, -4, -5) is all you can do, but be ready 
to see write performance anywhere from OK on a $1500 controller, to 
annoying on a sub-$500 controller, to downright painfully slow on 
anything down in the cheap end, including most IDE controllers.  Look up 
the controller, find out what I/O chip it's using (most are Intel-based, 
either StrongARM or i960), and see if the chip supports hardware XOR.  If 
it doesn't, you'll really wish it did.
