Re: When a System Dies; Getting back in operation again.

2009-05-06 Thread n j
 ... What is the best way to restore the full system?
 Can I use the FreeBSD installation disk in rescue mode?

I experienced such a situation just 2 weeks ago. My primary problem
was that I had to do restore over the network (no attached tape
drives, no external HDDs). I wanted to use ssh to grab the dump from
the backup server, but ended up using netcat which worked great.

Here's basically what I did, including the backup from the not-yet-dead
machine (note: I used an intermediate backup server, but it should be
possible to pipe dump directly to restore):

1. dump -0Laf - / | ssh backup-server 'cat > dump.root'
2. boot the new machine from CD disc1 or the livefs disc (FreeBSD 7)
3. create and newfs partitions as explained in this thread (at least
the size of backup, can be larger)
4. go into the rescue (fixit) mode, create mount points for the created
partitions (mkdir /mnt.root), mount the partitions (e.g. mount /dev/da0s1a
/mnt.root), change directory to the mount point (cd /mnt.root), and
configure the NIC (ifconfig)
5. start netcat (nc -l 5 | restore -rvf -)
6. on backup-server: cat dump.root | nc new-machine 5
7. repeat for usr and var partitions

Notes:
1. if security is an issue, ssh out from the new machine to the backup
server with port forwarding (ssh -R 5:localhost:5
backup-server) and pipe the backup to localhost (cat dump.root | nc
localhost 5);
my initial idea was to start sshd in fixit mode (see my post to the
list, 'fixit console with sshd'), which turned out to be too much
trouble.
2. restore uses TMPDIR to store temporary files during the restore
process; the fixit environment has limited free space, and when it is
exhausted the restore will fail, so it is a good idea to point TMPDIR
at an already-available partition (e.g. export TMPDIR=/mnt.var while
restoring the usr partition, and later use a subdirectory of usr as
TMPDIR while restoring the var partition)
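The TMPDIR mechanism in note 2 can be demonstrated outside the fixit
environment; this is only a sketch, using mktemp as a stand-in for
restore's scratch files (the directory here is made up, not from the
thread):

```shell
# restore(8) honors TMPDIR for its scratch files.  In the fixit shell
# you would point it at an already-mounted partition with free space,
# e.g.  export TMPDIR=/mnt.var  before restoring usr.  mktemp uses the
# same variable, so it shows the effect:
scratch=$(mktemp -d)   # stand-in for a mounted partition such as /mnt.var
export TMPDIR=$scratch
tmpfile=$(mktemp)      # lands under $TMPDIR, as restore's files would
echo "$tmpfile"
```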
3. [IMPORTANT!] after the restore process is over, manually check
restored etc/fstab and etc/rc.conf (currently mounted as
/mnt.root/...) to fix:
a) partition names (e.g. /dev/da0s1a might become /dev/amrd0s1a)
b) ethernet interface names (e.g. em0 might become bge0)
c) IP addresses in case you still have the old box running to avoid IP conflict
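Note 3a can be scripted rather than hand-edited. A hedged sketch: the
sample fstab lines and the da0-to-amrd0 rename below are illustrative
(amrd0 being the example controller name from the note), and a temp
directory stands in for /mnt.root:

```shell
mnt=$(mktemp -d)                         # stands in for /mnt.root
mkdir -p "$mnt/etc"
printf '%s\n' '/dev/da0s1a / ufs rw 1 1' \
              '/dev/da0s1d /usr ufs rw 2 2' > "$mnt/etc/fstab"
# Rewrite the old disk device to the new controller's name (note 3a):
sed 's|/dev/da0|/dev/amrd0|g' "$mnt/etc/fstab" > "$mnt/etc/fstab.new" &&
    mv "$mnt/etc/fstab.new" "$mnt/etc/fstab"
cat "$mnt/etc/fstab"
```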

You should now be able to safely reboot and log into your new machine.

Regards,
-- 
Nino
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: When a System Dies; Getting back in operation again.

2009-05-04 Thread Jonathan McKeown
On Friday 01 May 2009 22:43:51 Jerry McAllister wrote:
 On Fri, May 01, 2009 at 12:07:22PM -0500, Martin McCormick wrote:
  Let's say we have a system that is backed up regularly and it
  vanishes in a puff of smoke one day. One can get FreeBSD
  installed on a new drive in maybe half an hour or so but we also
  need to get back to the right patch level and then we can say we
  are back where we started.

 What you want to do is use the fixit image to set up the disk.
 That means fdisk and bsdlabel and newfs it.   You can actually
 use sysinstall to do this as well.  Just let the installer come
 up and do the disk stuff, choose minimal install and then after
 it finishes making the disks, kill the rest of the install (or
just let it finish and then overwrite it).

 But, I find it actually easier to do the fdisk, bsdlabel and newfs-s
 myself.  But, then I am used to it.

 Right after you get done making sure where your fixit is living,
 then use fdisk and bsdlabel to check for the way you have the disk
 set up currently.   Write it down or print it out and keep it
 near that installation/fixit disk.

[Lots of good stuff about creating the partitions]

 Now all you have to do is newfs each partition.   Just take the
 defaults.   Remember that newfs wants the full device spec, not
 just the drive identifier.

If you have kept the right information beforehand, you can actually restore 
your dumps onto ``bare metal'' without doing a partial install first, and 
with the same newfs settings for each partition as you originally had. You 
need to use bsdlabel and dumpfs -m and keep the output for rebuilding. The 
rest of this message gives the details.

On your running system, create and keep two files. My system has one slice, 
ad6s1, and the usual partitions - a for root, d for /tmp, e for /var, f 
for /usr, and I've shown the commands you need, and the resulting file 
contents on my current system, below:

bsdlabel ad6s1 > ad6s1.label

ad6s1.label contains:

# /dev/ad6s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:   1048576        0    4.2BSD   2048 16384     8
  b:   8388608  1048576      swap
  c: 156296322        0    unused      0     0  # "raw" part, don't edit
  d:  20971520  9437184    4.2BSD   2048 16384 28552
  e:   1048576 30408704    4.2BSD   2048 16384     8
  f: 124839042 31457280    4.2BSD   2048 16384 28552

I usually put all the spare space on a disk into /usr, so changing the first 
field on the f: line (the size) from 124839042 to * tells bsdlabel to do 
exactly that in case the replacement disk is a different size from the 
original.
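As a quick sanity check on a saved label, you can print each
partition's start and end block so gaps or overlaps stand out. This
one-liner is an addition of mine, shown against the a: and b: lines
from the label above:

```shell
# Column 2 of a bsdlabel data line is the size, column 3 the offset;
# contiguous partitions should tile (one's end equals the next start).
out=$(printf '%s\n' '  a: 1048576 0 4.2BSD 2048 16384 8' \
                    '  b: 8388608 1048576 swap' |
      awk '{ printf "%s start=%s end=%d\n", $1, $3, $2 + $3 }')
printf '%s\n' "$out"
```

Here partition a ends at block 1048576, exactly where b starts, so the
layout is contiguous.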

We now need the newfs settings for all the 4.2BSD filesystems except c, so (in 
sh syntax)

for i in a d e f; do dumpfs -m ad6s1$i; done > newfscmds.ad6s1

newfscmds.ad6s1 now contains:

# newfs command for ad6s1a (/dev/ad6s1a)
newfs -O 2 -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 262144 /dev/ad6s1a
# newfs command for ad6s1d (/dev/ad6s1d)
newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 5242880 /dev/ad6s1d
# newfs command for ad6s1e (/dev/ad6s1e)
newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 262144 /dev/ad6s1e
# newfs command for ad6s1f (/dev/ad6s1f)
newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 31209760 /dev/ad6s1f

Take out the -s 31209760 in the command for ad6s1f (this is the size
of the new filesystem, and it defaults to the size of the partition,
which we made take up the rest of the disk).
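That edit can also be done with sed instead of an editor. A small
demonstration, using the ad6s1f command from above (rejoined onto one
line) as sample input:

```shell
# Recreate the last line of newfscmds.ad6s1 as sample input:
printf '%s\n' 'newfs -O 2 -U -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 31209760 /dev/ad6s1f' > newfscmds.sample
# Drop the explicit -s size so newfs fills the whole partition:
sed 's/ -s 31209760//' newfscmds.sample > newfscmds.sample.new &&
    mv newfscmds.sample.new newfscmds.sample
cat newfscmds.sample
```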

Now you can save these two files somewhere. When it comes to a catastrophic 
failure and restore, boot a liveCD. Use fdisk to create your single large 
slice on the new disk with

fdisk -BI ad6

Use 

bsdlabel -R ad6s1 ad6s1.label

to restore the disklabel.

If your device name is different from before, you need to edit newfscmds.ad6s1 
to change ad6 to the new device name wherever it occurs; you then run 
the newfs commands in the file to create your filesystems with the same 
parameters (softupdates on/off, etc.) as before.
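The device-name edit can likewise be scripted; in this sketch, da0 is
a hypothetical new device name, and a single recreated line stands in
for the real newfscmds.ad6s1:

```shell
# Sample line from the saved newfs commands:
printf '%s\n' 'newfs -O 2 -a 8 -b 16384 -d 16384 -e 2048 -f 2048 -g 16384 -h 64 -m 8 -o time -s 262144 /dev/ad6s1a' > newfscmds.ad6s1.sample
# Replace every occurrence of the old device name with the new one:
sed 's/ad6/da0/g' newfscmds.ad6s1.sample > newfscmds.da0
cat newfscmds.da0
```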

You now have the basic structure of your previous disk, ready to have the 
root, /var and /usr dumps restored to make a running system identical to the 
destroyed one, with one last step:

bsdlabel -B ad6s1

to put the boot code on the slice. (I haven't tried this bit, so if you're 
going to use the boot code from your root partition, which is stored 
at /boot/boot, you'll need to check whether you can run bsdlabel -B on a 
mounted disk. If you can, the command would be

bsdlabel -B -b /mnt/boot/boot /dev/ad6s1

assuming you mounted /dev/ad6s1a on /mnt).

If you have a different device name, of course, you also need to edit your 
fstab before rebooting.

Jonathan

Re: When a System Dies; Getting back in operation again.

2009-05-04 Thread Jonathan McKeown
On Monday 04 May 2009 15:59:14 Jerry McAllister wrote:
 On Mon, May 04, 2009 at 10:31:16AM +0200, Jonathan McKeown wrote:

  If you have kept the right information beforehand, you can actually
  restore your dumps onto ``bare metal'' without doing a partial install
  first, and with the same newfs settings for each partition as you
  originally had. You need to use bsdlabel and dumpfs -m and keep the
  output for rebuilding. The rest of this message is the details.

 If you have a specific reason to want your new filesystems to have
 identical superblock info, you can use dumpfs -m, but you don't need
 to worry about all that.   Just fdisk, bsdlabel and then let newfs
 take its defaults.

Which of your filesystems currently has softupdates disabled? You may not 
care - but the point is that using dumpfs in the way I described will 
preserve that information (along with all the other tuning options) for 
people who do care.

If you're restoring a complete machine from backup, the less you have to think 
about, the better. Knowing that my filesystems are going to be restored with 
whatever tuning options I was previously running with, without my having to 
try and remember, gives me peace of mind ahead of time.
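A quick way to see which filesystems had softupdates is to grep the
saved newfs commands for the -U flag; in the example earlier in the
thread, root (ad6s1a) is the one without it. The shortened sample
lines below stand in for the real file:

```shell
printf '%s\n' \
  'newfs -O 2 -a 8 -b 16384 -m 8 -o time /dev/ad6s1a' \
  'newfs -O 2 -U -a 8 -b 16384 -m 8 -o time /dev/ad6s1d' \
  'newfs -O 2 -U -a 8 -b 16384 -m 8 -o time /dev/ad6s1e' > newfscmds.su
# Filesystems created without softupdates (no -U flag):
grep -v -- ' -U ' newfscmds.su
```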

Jonathan


Re: When a System Dies; Getting back in operation again.

2009-05-04 Thread Jerry McAllister
On Mon, May 04, 2009 at 10:31:16AM +0200, Jonathan McKeown wrote:

 On Friday 01 May 2009 22:43:51 Jerry McAllister wrote:
  On Fri, May 01, 2009 at 12:07:22PM -0500, Martin McCormick wrote:
   Let's say we have a system that is backed up regularly and it
   vanishes in a puff of smoke one day. One can get FreeBSD
   installed on a new drive in maybe half an hour or so but we also
   need to get back to the right patch level and then we can say we
   are back where we started.
 
  What you want to do is use the fixit image to set up the disk.
  That means fdisk and bsdlabel and newfs it.   You can actually
  use sysinstall to do this as well.  Just let the installer come
  up and do the disk stuff, choose minimal install and then after
  it finishes making the disks, kill the rest of the install (or
  just let it finish and then overwrite it.
 
  But, I find it actually easier to do the fdisk, bsdlabel and newfs-s
  myself.  But, then I am used to it.
 
  Right after you get done making sure where your fixit is living,
  then use fdisk and bsdlabel to check for the way you have the disk
  set up currently.   Write it down or print it out and keep it
  near that installation/fixit disk.
 
 [Lots of good stuff about creating the partitions]
 
  Now all you have to do is newfs each partition.   Just take the
  defaults.   Remember that newfs wants the full device spec, not
  just the drive identifier.
 
 If you have kept the right information beforehand, you can actually restore 
 your dumps onto ``bare metal'' without doing a partial install first, and 
 with the same newfs settings for each partition as you originally had. You 
 need to use bsdlabel and dumpfs -m and keep the output for rebuilding. The 
 rest of this message is the details.

If you have a specific reason to want your new filesystems to have
identical superblock info, you can use dumpfs -m, but you don't need
to worry about all that.   Just fdisk, bsdlabel and then let newfs
take its defaults.   You do not need an identical filesystem to
do a restore(8) on it.   Restore builds it from scratch in the correct
way - in fact in a better way than what it was before the system
was whacked.So, just build the new disk either manually or with
sysinstall and then restore the dumps within the filesystems.

Make sure you cd in to the mounted filesystem - note, since you
are running from a fixit, you are making up new mount points and
mounting the filesystems from the new disk.   Something like:

  mkdir /newroot
  mount /dev/ad0s1a /newroot
  cd /newroot
  restore -rf /dev/nsa0 
   (replace /dev/nsa0 with wherever you are reading the dump.  don't
forget to position the tape with mt fsf nn if it is a tape)

You can also skip the fdisk if you are running only FreeBSD from that
disk and don't mind using what is called a 'dangerously dedicated' disk.
It isn't really all that dangerous.  No weird creatures will climb out
and grab you by the throat at night.

If you do dangerously dedicated, the device addressing leaves out
the slice specifier (s1, s2, s3 or s4) and would look something 
like:   /dev/ad0a   instead of  /dev/ad0s1a.

jerry

 

Re: When a System Dies; Getting back in operation again.

2009-05-04 Thread Doug Poland
On Mon, May 04, 2009 at 04:30:53PM +0200, Jonathan McKeown wrote:
 On Monday 04 May 2009 15:59:14 Jerry McAllister wrote:
  On Mon, May 04, 2009 at 10:31:16AM +0200, Jonathan McKeown wrote:
 
   If you have kept the right information beforehand, you can
   actually restore your dumps onto ``bare metal'' without doing a
   partial install first, and with the same newfs settings for each
   partition as you originally had. You need to use bsdlabel and
   dumpfs -m and keep the output for rebuilding. The rest of this
   message is the details.
 
  If you have a specific reason to want your new filesystems to have
  identical superblock info, you can use dumpfs -m, but you don't need
  to worry about all that.   Just fdisk, bsdlabel and then let newfs
  take its defaults.
 
 Which of your filesystems currently has softupdates disabled? You may
 not care - but the point is that using dumpfs in the way I described
 will preserve that information (along with all the other tuning
 options) for people who do care.
 
 If you're restoring a complete machine from backup, the less you have
 to think about, the better. Knowing that my filesystems are going to
 be restored with whatever tuning options I was previously running
 with, without my having to try and remember, gives me peace of mind
 ahead of time.
 
Excellent discussion.  Along the lines of "the less you have to think
about", is there a technique for restoring geom metadata on bare
metal?  Say you have a system built upon gmirror and gjournal.  One must
manually create the mirror and journal before restoring from dump.  But
the vital geom metadata describing your mirror/journal is on the dump.


-- 
Regards,
Doug


When a System Dies; Getting back in operation again.

2009-05-01 Thread Martin McCormick
Let's say we have a system that is backed up regularly and it
vanishes in a puff of smoke one day. One can get FreeBSD
installed on a new drive in maybe half an hour or so but we also
need to get back to the right patch level and then we can say we
are back where we started. If you do not have hot-swappable
drives, which we mostly do not, what is the best way to restore
the full system?

Can I use the FreeBSD installation disk in rescue mode?
The idea would be to boot the CDROM, go in to rescue mode, mount
the new drive which may be blank right now, and then use restore
based on the last dump of the system we are trying to revive.

Thanks.

Martin McCormick WB5AGZ  Stillwater, OK 
Systems Engineer
OSU Information Technology Department Telecommunications Services Group


Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread Craig Butler
Bacula is your friend, tried and tested

http://www.bacula.org/en/dev-manual/Disast_Recove_Using_Bacula.html#SECTION004315

/Craig

 

On Fri, 2009-05-01 at 12:07 -0500, Martin McCormick wrote:
 Let's say we have a system that is backed up regularly and it
 vanishes in a puff of smoke one day. One can get FreeBSD
 installed on a new drive in maybe half an hour or so but we also
 need to get back to the right patch level and then we can say we
 are back where we started. If you do not have hot-swappable
 drives which we mostly do not, What is the best way to restore
 the full system?
 
   Can I use the FreeBSD installation disk in rescue mode?
 The idea would be to boot the CDROM, go in to rescue mode, mount
 the new drive which may be blank right now, and then use restore
 based on the last dump of the system we are trying to revive.
 
   Thanks.
 
 Martin McCormick WB5AGZ  Stillwater, OK 
 Systems Engineer
 OSU Information Technology Department Telecommunications Services Group


Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread Polytropon
On Fri, 01 May 2009 12:07:22 -0500, Martin McCormick 
mar...@dc.cis.okstate.edu wrote:
   Can I use the FreeBSD installation disk in rescue mode?

Yes, you can. The only thing you have to ensure is that
you have a means to access the dump files, for example
via network or from optical media (DVD).

A more comfortable option is to use a live file system
such as FreeSBIE.



 The idea would be to boot the CDROM, go in to rescue mode, mount
 the new drive which may be blank right now, and then use restore
 based on the last dump of the system we are trying to revive.

I'd suggest using FreeBSD's sysinstall to make the new
disk bootable, i.e. create slices, create partitions,
and format them. If you've done this correctly, you can
easily use restore to read in the dump files and put
their contents back on the respective partitions (from
which they were created).



-- 
Polytropon
From Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread Warren Block

On Fri, 1 May 2009, Martin McCormick wrote:


 Let's say we have a system that is backed up regularly and it
 vanishes in a puff of smoke one day. One can get FreeBSD
 installed on a new drive in maybe half an hour or so but we also
 need to get back to the right patch level and then we can say we
 are back where we started. If you do not have hot-swappable
 drives which we mostly do not, What is the best way to restore
 the full system?

 Can I use the FreeBSD installation disk in rescue mode?
 The idea would be to boot the CDROM, go in to rescue mode, mount
 the new drive which may be blank right now, and then use restore
 based on the last dump of the system we are trying to revive.


I've had success doing a minimal install from CD, booting from the new 
drive, and then restoring dumpfiles right over it.


-Warren Block * Rapid City, South Dakota USA


Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread Jerry McAllister
On Fri, May 01, 2009 at 06:21:06PM +0100, Craig Butler wrote:

 Bacula is your friend, tried and tested
 

The guy is making nice reliable dump(8)s of his file systems.
He doesn't need to waste time and energy with yet another thing.

Dump and restore work just fine, are part of the system and
handle situations like these most reliably.

jerry


 http://www.bacula.org/en/dev-manual/Disast_Recove_Using_Bacula.html#SECTION004315
 
 /Craig
 
  
 
 On Fri, 2009-05-01 at 12:07 -0500, Martin McCormick wrote:
  Let's say we have a system that is backed up regularly and it
  vanishes in a puff of smoke one day. One can get FreeBSD
  installed on a new drive in maybe half an hour or so but we also
  need to get back to the right patch level and then we can say we
  are back where we started. If you do not have hot-swappable
  drives which we mostly do not, What is the best way to restore
  the full system?
  
  Can I use the FreeBSD installation disk in rescue mode?
  The idea would be to boot the CDROM, go in to rescue mode, mount
  the new drive which may be blank right now, and then use restore
  based on the last dump of the system we are trying to revive.
  
  Thanks.
  
  Martin McCormick WB5AGZ  Stillwater, OK 
  Systems Engineer
  OSU Information Technology Department Telecommunications Services Group


Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread Jerry McAllister
On Fri, May 01, 2009 at 12:07:22PM -0500, Martin McCormick wrote:

 Let's say we have a system that is backed up regularly and it
 vanishes in a puff of smoke one day. One can get FreeBSD
 installed on a new drive in maybe half an hour or so but we also
 need to get back to the right patch level and then we can say we
 are back where we started. If you do not have hot-swappable
 drives which we mostly do not, What is the best way to restore
 the full system?
 
   Can I use the FreeBSD installation disk in rescue mode?
 The idea would be to boot the CDROM, go in to rescue mode, mount
 the new drive which may be blank right now, and then use restore
 based on the last dump of the system we are trying to revive.

Yes.

By the way, dump/restore are the best for backup/recovery because
they handle the odd situations best - such as you replace the old
failed disk with a newer either larger or smaller (but still big
enough to hold everything) disk.   Other utilities cannot handle
that gracefully.  Dump/restore does.   There are a few other odd
cases as well.

I think you want what is called 'fixit' mode.   You can select
that when you boot from it.   I am not absolutely sure all sets
of disks are populated identically.  Check now that your CD has
the fixit and if it is on a different image, download that one,
burn it and stash it somewhere safe.

What you want to do is use the fixit image to set up the disk.
That means fdisk and bsdlabel and newfs it.   You can actually
use sysinstall to do this as well.  Just let the installer come
up and do the disk stuff, choose minimal install and then after
it finishes making the disks, kill the rest of the install (or
just let it finish and then overwrite it.

But, I find it actually easier to do the fdisk, bsdlabel and newfs-s
myself.  But, then I am used to it.

Right after you get done making sure where your fixit is living,
then use fdisk and bsdlabel to check for the way you have the disk
set up currently.   Write it down or print it out and keep it
near that installation/fixit disk.

If you do   fdisk ad0   or   fdisk da0   (depending on IDE/SATA or SCSI/SAS
respectively) without any other parameters, it will print out what it
thinks the disk is currently like.Of course, if it is other 
than disk 0, use the correct number.

Then do a similar thing with bsdlabel:   bsdlabel -e ad0s1
or bsdlabel -e da0s1.   If you have more than one slice and FreeBSD
is not on slice 1, then use the correct slice identifier here.
So, if it is the second SATA drive and the third slice on it
that might look like  bsdlabel -e ad1s3.   
Note that drives number from 0, but slices number from 1.

Anyway, then copy the information it shows in the table down or
print it out.   Ignore the stuff on top - anything above where
it says:   '8 partitions:'.   You are just interested in the
partition identifiers and the sizes and offsets, types 
and the fsize, bsize and bps/cpg.Actually, you can normally
just take whatever defaults it gives you for fsize, bsize and bps/cpg
unless you are doing something extra exotic.

Then just get out of the edit session without writing/saving:
just type ESC :q!

Those numbers don't have to be the exact same on the new disk and
probably will not be, but you will want to have the information 
handy rather than have to recalculate it at a bad time.

NOTE, I am mostly writing this presuming that you have the machine
only running FreeBSD.  If you have it dual booted, you will want
the information on the other OS slices too.   fdisk will give you
what you need to know.   The FreeBSD fdisk is smart enough to report
on all slices -(what MS calls primary partitions) even if they are
not FreeBSD slices.   It does not report on extended partitions, but
it does not need to.  You only need to know about the primaries/slices.
You let those other OSen deal with 'extended' stuff.

If you have an MS or Linux OS on it, then those should be put back first.
Whatever you did to divide the old disk will have to be done to make
the slices on the new disk.  Maybe Partition Magic or Gparted was used.

Once you have it divided in those major divisions (slices/primary partitions)
then use fdisk to make at least the FreeBSD slice bootable.  Those
other OSen will probably take care of it for theirs.

The easy thing is if the whole disk is being used by FreeBSD.
Then just do:
  fdisk -BI da0

That will make the whole disk FreeBSD and bootable.

Then do two bsdlabels.  The first sets up the label and the
second edits it to have the partitions you want.

  bsdlabel -w -B da0s1

  bsdlabel -e da0s1

You will see an edit session about like the one you saw when you
collected the information to stash away, except it will only show
a 'c' partition.

Leave that c partition alone, but make the other ones similar
to what you had on the old disk. You only need to put in
the '0' value for the offset on the first (a) partition and then
put '*' in for the rest of the offsets.
Make the rest of the 

Re: When a System Dies; Getting back in operation again.

2009-05-01 Thread craig001
 On Fri, May 01, 2009 at 06:21:06PM +0100, Craig Butler wrote:

 Bacula is your friend, tried and tested


 The guy is making nice reliable dump(8)s of his file systems.
 He doesn't need to waste time and energy with yet another thing.

 Dump and restore work just fine, are part of the system and
 handle situations like these most reliably..

 jerry
Sorry, I just skim-read it and wasn't sure he was using dump, but agreed:
dump/restore is easy peasy, included and quick.



 http://www.bacula.org/en/dev-manual/Disast_Recove_Using_Bacula.html#SECTION004315

 /Craig



 On Fri, 2009-05-01 at 12:07 -0500, Martin McCormick wrote:
  Let's say we have a system that is backed up regularly and it
  vanishes in a puff of smoke one day. One can get FreeBSD
  installed on a new drive in maybe half an hour or so but we also
  need to get back to the right patch level and then we can say we
  are back where we started. If you do not have hot-swappable
  drives which we mostly do not, What is the best way to restore
  the full system?
 
 Can I use the FreeBSD installation disk in rescue mode?
  The idea would be to boot the CDROM, go in to rescue mode, mount
  the new drive which may be blank right now, and then use restore
  based on the last dump of the system we are trying to revive.
 
 Thanks.
 
  Martin McCormick WB5AGZ  Stillwater, OK
  Systems Engineer
  OSU Information Technology Department Telecommunications Services Group


