On 02/22/2019 10:02 AM, David Wright wrote:
On Fri 22 Feb 2019 at 09:19:53 (-0500), Stephen P. Molnar wrote:
On 02/22/2019 09:13 AM, Dan Ritter wrote:
Stephen P. Molnar wrote:
My Debian Stretch system has three HDs. I want to remove one of the HDs
(not sda) and replace it with a new HD.
What I need to be sure of is: if I remove the old drive from the fstab and
delete the mount point, will the system boot after I put in the new HD, so
that I can edit the fstab and create a mount point for the new drive?
Hence, the request for the sanity check.

The system needs the following to boot:
- the BIOS or UEFI needs to know which drive has a boot loader.
- that drive needs a boot loader (usually grub)
- the boot loader needs to know where to load the kernel and
    possibly an init filesystem from
- the kernel needs to be able to mount /, the root partition.

Typically, all of those things will be on one drive, and that's
usually /dev/sda. However, it's possible to change all of them.
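
If you want to check which drive the boot loader went to (assuming
grub-pc, the BIOS flavour; on a UEFI system the package is
grub-efi-amd64 instead), debconf remembers the install-time choice:

  $ sudo debconf-show grub-pc | grep install_devices
  * grub-pc/install_devices: /dev/sda

(That output is only an example of what it might show.)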

You're probably safe. If you want to be sure, run a test:
- shutdown to power off
- unplug power from the drive you're going to replace
- try to boot

If that succeeds, shut down again and go ahead with the
replacement. If it fails, you need to trace the boot process
above and find out what's on the drive you're replacing,
and arrange for that to be changed or copied.
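
A couple of stock commands make that tracing easier:

  $ lsblk -f                  # every partition, its UUID, and its mount point
  $ sudo fdisk -l /dev/sdc    # partition table of the drive in question

(Substitute whichever drive you're replacing for /dev/sdc.)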

Thanks for the reply.

The OS is on /dev/sda.  The disk I'm changing is /dev/sdc.
I think we're assuming that you have something better than
/dev/sdaX in your /etc/fstab: UUIDs or LABELs. With modern
PCs, you can be surprised by how these device names are
assigned.
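
For instance, blkid prints what a given partition actually carries:

  $ sudo blkid /dev/sda1      # shows UUID, LABEL and TYPE for that partition

and you can compare that against the UUID= entries in /etc/fstab.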

Cheers,
David.

Many thanks to those who have answered my cry for a 'sanity check'.

It has become obvious to me that I am having problems.

Before I elaborate, I am using the UUIDs for the drives. Here is my fstab:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type> <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=ce25f0e1-610d-4030-ab47-129cd47d974e / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=a8f6dc7e-13f1-4495-b68a-27886d386db0 none swap sw 0 0
/dev/sr0        /media/cdrom0   udf,iso9660 user,noauto     0 0

UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07 /sdb1 ext4 errors=remount-ro 0 1

UUID=d65867da-c658-4e35-928c-9dd2d6dd5742 /sdc1 ext4 errors=remount-ro 0 1

UUID=007c1f16-34a4-438c-9d15-e3df601649ba /sdc2 ext4 errors=remount-ro 0 1

And I have the corresponding mount points.
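
(One detail I'm not sure matters here: the last field, the fsck pass
number, is conventionally 1 only for the root filesystem and 2 for
everything else, e.g.

UUID=900b5f0b-4f3d-4a64-8c91-29aee4c6fd07 /sdb1 ext4 errors=remount-ro 0 2

whereas the entries above all use 1.)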

Now, as to what is happening.

Before disconnecting the power to the drives, I edited their lines out of fstab. I disconnected the power to sdb and sdc and started the computer. It booted for a few lines until it encountered a line starting with 'a start job is running for dev-disk-by...' (at least that's what I jotted down). It then waited on the three HDs (two of which had the power unplugged) for 1 minute and 30 seconds, and then told me that I could log in as root or press Ctrl-D to continue. Ctrl-D didn't work, so I logged in as root.
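
(For what it's worth, 1 minute 30 seconds is systemd's default device
timeout, so it looks as though something was still waiting on those
fstab entries even though I had edited them out. If the drives ever
need to stay listed while unplugged, my understanding is that marking
them nofail lets the boot continue without them, e.g.

UUID=d65867da-c658-4e35-928c-9dd2d6dd5742 /sdc1 ext4 errors=remount-ro,nofail 0 2

but that is a guess on my part.)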

At that point I did 'journalctl -xb' and got 1237 lines, which were meaningless to me. startx got me to the root desktop.
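
(Filtering the journal helps with that volume; for example

journalctl -b -p err

shows only messages of priority 'err' or worse from the current boot.)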

The only option open to me at that point was to log out as root; the options to restart and shut down were grayed out as unavailable.

At this point I admitted defeat, did 'shutdown -h now' in a terminal, and put the system back in its original state.

Obviously, I'm missing something!


--
Stephen P. Molnar, Ph.D.
Consultant
www.molecular-modeling.net
(614)312-7528 (c)
Skype: smolnar1
