On Sat, Aug 21 at 04:09AM +0100, David Leggett wrote:
> Doing stuff like this remotely is fun ;)
> 
> I would recommend that you use LVM to manage the size of your "partitions" so 
> you can easily assign space wherever you store your data.
> 
> Also I would recommend upgrading the kernel to the latest 2.4 series before 
> you start playing with partitions; that way you can also enable LVM support.
> 
> > so i (in indiana) am thinking i can
> 
> Install new kernel

hmm. i think it's already got 2.4 -- not sure at the moment.
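
(easy enough to check once i'm on the box; the package name below is
just a guess from what it already runs:)

    uname -r                           # what's running now?
    apt-cache search kernel-image-2.4  # candidate 2.4 images on woody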

> > - split the raid (in boston) back into two hd* drives,

where's the HOWTO on this split-the-raid part? rwfm?
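
(from skimming the mdadm manpage, i *think* the split is just failing
and removing one half of the mirror -- untested, device names guessed
from the dmesg below:)

    mdadm /dev/md0 --fail /dev/hdb3     # mark one mirror half faulty
    mdadm /dev/md0 --remove /dev/hdb3   # pull it out of the array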

> > - repartition the non-booted one,
> 
> into / of about 500M to 1G, swap of whatever, and the remainder into a single
>   partition
> use _mdadm_ to create your raid arrays on the non-booted disk
>   (i say mdadm because it doesn't need a config file, and imho it's easiest)
> turn the large raid array into an lvm pv, create a vg and a few lvs
>   (explained at http://www.tldp.org/HOWTO/LVM-HOWTO/)
> 
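
(if i follow that, the mdadm + lvm end of it would look roughly like
this -- untested; md1, vg0 and the lv size are names/numbers i made up:)

    # degraded raid1 on the repartitioned disk; second member added later
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb3 missing
    # layer lvm on top of the array
    pvcreate /dev/md1
    vgcreate vg0 /dev/md1
    lvcreate -L 4G -n home vg0
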
> > - shuffle stuff over to the new partitions,
> 
> which are now lvm logical volumes
> edit fstab! (for non-booted system)
> 
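
(the shuffle itself, something like this -- lv and mountpoint names
are hypothetical:)

    mkdir -p /mnt/newhome
    mount /dev/vg0/home /mnt/newhome
    cp -ax /home/. /mnt/newhome/   # -a preserves, -x stays on one fs
    # and the matching fstab line on the non-booted system, e.g.:
    #   /dev/vg0/home  /home  ext3  defaults  0  2
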
> > - reconfigure lilo,
> 
> grub would be better because it enables you (or your client) to edit the boot 
>   params at the boot prompt

don't have access to the machine -- and client has it set up as
a faceless server anyhow...

> > - boot from the newly-partitioned drive,
> > - repartition the first drive to match the booted one,
> 
> sfdisk -d /dev/hdc | sfdisk /dev/hda
> where hdc is the LVM+Raid disk and hda is the disk with ugly partitioning

now THAT's cool! :)
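
(worth eyeballing the dump before replaying it, i'd think:)

    sfdisk -d /dev/hdc > parts.dump    # dump partition table
    less parts.dump                    # sanity-check it first
    sfdisk /dev/hda < parts.dump       # replay onto the other disk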

> > - re-establish raid parameters,
> > - lilo some more,
> 
> or grub
> 
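
(re-establishing, i'd guess, is just adding the freshly repartitioned
half back in and watching the resync -- again guessing at md1/hda3:)

    mdadm /dev/md1 --add /dev/hda3     # re-add the second mirror half
    cat /proc/mdstat                   # watch the rebuild progress
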
> > - and then reboot again.
> >
> > is that a sane/possible approach?
> 
> perfectly. just make sure your client has someone who is happy to receive a
>   phone call from you talking them through how to fix stuff if things don't
>   go to plan

i feel more like i'd be on the receiving end of such a call. :)

> > since we're NOT anywhere near the client machine, this seems to
> > be a reasonable way of repartitioning the thing, remotely. if
> > not, other pointers welcome.
> >
> > so how do we split the raid up without borking the remote
> > computer into a non-bootable/non-reachable state?
> 
> if you have a raid1 array of /dev/hda1 and /dev/hdc1 you can mount both the 
> member partitions as if they were not part of the raid array. 

beg pardon? (and right now it's hda/hdb.)
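
(ok, thinking about it: the md superblock sits at the *end* of a raid1
member, so the front of the partition is a plain filesystem -- so
presumably something like this works, read-only to be safe:)

    mount -o ro /dev/hdb3 /mnt    # peek at one mirror half directly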

> > <dmesg snippet="in case it helps">
> > VFS: Mounted root (cramfs filesystem).
> > Freeing unused kernel memory: 128k freed
> > md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
> 
> oo i see you boot from initrd with md support already.. fun :)

that makes it tougher to split up, doesn't it?
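
(at minimum i suppose the initrd needs regenerating if the root device
changes, so it assembles the right array at boot -- on woody i believe
that's initrd-tools, something like:)

    mkinitrd -o /boot/initrd.img-2.4.18-bf2.4 2.4.18-bf2.4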

> >  [events: 00000014]
> >  [events: 00000014]
> > md: autorun ...
> > md: considering hdb3 ...
> > md:  adding hdb3 ...
> > md:  adding hda3 ...
> 
> If it's possible it would be a very good idea to get hdb moved to another ide 
> bus; in the current configuration performance is going to be seriously bad 
> because all writes have to be written twice down the same ide bus, so your 
> write performance is half that of a single disk.
> 
> If the disk is moved, write performance will be that of a single disk, and 
> read performance should probably improve, although that depends on how 
> paranoid the md raid 1 driver is about making sure the data it's giving the 
> kernel isn't corrupted.
> 
> Hope this helps.

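it does, thanks. i'll try to benchmark before and after moving the
disk -- hdparm's rough read timing should be enough to see it:

    hdparm -t /dev/hda    # timed buffered disk reads
    hdparm -t /dev/hdb
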
-- 
I use Debian/GNU Linux version 3.0;
Linux boss 2.4.18-bf2.4 #1 Son Apr 14 09:53:28 CEST 2002 i586 unknown
 
DEBIAN NEWBIE TIP #55 from Alvin Oga <[EMAIL PROTECTED]>:
Been thinking about HOW TO BACK UP YOUR DEBIAN SYSTEM? There's
a whole website just for you:
        http://www.Linux-Backup.net/app.gwif.html
Concepts, methods, applications, procedures... Have a look!

Also see http://newbieDoc.sourceForge.net/ ...

