After further investigation I can say that, although I'm not using LVM,
the following command is able to populate /dev/mapper/*:

/sbin/vgscan --mknodes --config "${config}" >/dev/null

The command lives in /lib64/rcscripts/addons/lvm-start.sh,
which is called by /etc/init.d/lvm start.
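
If I read that addon right, the node creation itself shouldn't need a
full LVM setup, so something like the following should be roughly
equivalent by hand (just my guess; the ${config} variable only seems
to pass an inline lvm.conf override):

    # recreate /dev/mapper nodes for whatever device-mapper already knows about
    /sbin/vgscan --mknodes >/dev/null

    # dmsetup seems to be able to do the same node (re)creation on its own
    /sbin/dmsetup mknodes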

Really strange, as I'm not using LVM, so the command just complains
"No volume groups found".

I also noticed that LVM is able to communicate with device-mapper
through /etc/lvm.conf:

    # Whether or not to communicate with the kernel device-mapper.
    # Set to 0 if you want to use the tools to manipulate LVM metadata
    # without activating any logical volumes.
    # If the device-mapper kernel driver is not present in your kernel
    # setting this to 0 should suppress the error messages.
    activation = 1

I don't know if that's why it's able to create the nodes. Any help?
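
In case it helps to test that, the setting can apparently be
overridden per command through the same --config mechanism the init
script already uses (untested guess on my side):

    # run the node creation with device-mapper communication disabled,
    # to check whether 'activation' is really what makes the nodes appear
    /sbin/vgscan --mknodes --config 'global { activation = 0 }' >/dev/null

and the lvm-free way I'm after in the mail below would then probably
be just:

    # activate every RAID set dmraid finds and make sure the nodes exist
    /sbin/dmraid -ay
    /sbin/dmsetup mknodes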

2010/12/14 Pau Peris <sibok1...@gmail.com>:
> Hi, I'm currently running Gentoo on a RAID0 setup on some SATA disks
> using a JMicron chip on an Asus P6T board. I'm using a fakeraid due
> to dual-boot restrictions. My whole Gentoo system is on the RAID0
> device, so I use an initramfs to boot up. I've been running this
> setup for some time, but since I migrated to baselayout2+openrc I
> haven't understood why I need /etc/init.d/lvm to start at boot, as I
> have no LVM setup. Today I was doing some research and a few
> questions came up:
>
> * Everywhere it says I need "<*> RAID support -> <*> RAID-0
> (striping) mode" in the kernel for fakeraid to work, but my system
> still boots with those options disabled. Are they really needed? I
> don't understand why they would be needed. (Are they only for mdadm
> usage?)
>
> * Is "SCSI device support -> <*> RAID Transport Class" option needed?
> What is supposed to do? I think raid features are provided by jmicron
> driver and kernel understaands how RAID works due to "Multiple devices
> driver support (RAID and LVM) -> <*>   Device mapper support ", isn't
> it?
>
> * Last question: after migrating to openrc I noticed that the lvm2
> package provides the device-mapper tools to manage the array, but I
> do not want /etc/init.d/lvm to start at boot as I do not use any LVM
> setup; I would just like to get /dev/mapper/ correctly populated
> using something like dmraid -ay. I've tried removing lvm from boot
> and adding device-mapper instead, but /dev/mapper is not populated.
> How should I proceed to get rid of the lvm script and still get
> /dev/mapper populated? Do I need /etc/dmtab for that to work? Also,
> why would one want to use mdadm instead of device-mapper; what are
> the advantages/disadvantages of each? I know mdadm needs an optional
> /etc/mdadm.conf file to work, and with mdadm one can stop/start the
> array, add more disks to the array, etc. But does mdadm need a boot
> script to work?
>
> As you can see, I'm a bit confused about which suits me best, mdadm
> or device-mapper, and why those kernel settings are needed. Thanks a
> lot in advance :)
>
