Re: Can't load Nvidia driver on Fedora 9 x86_64

2008-06-25 Thread Richard Michael
Hugh,

I know it doesn't help much, but just to give you a positive data point:
that driver is working for my integrated GeForce 6150 on x86_64:

$grep -i nvidia /var/log/Xorg.0.log

(II) Module nvidia: vendor="NVIDIA Corporation"
(II) NVIDIA dlloader X Driver  173.14.09  Wed Jun  4 23:48:23 PDT 2008
(II) NVIDIA Unified Driver for all Supported NVIDIA GPUs


$lsmod | grep -i nvid
nvidia   8108912  24 
i2c_core   28448  2 nvidia,i2c_nforce2

$lspci -v | grep -i nvid

00:05.0 VGA compatible controller: nVidia Corporation C51PV [GeForce 6150] (rev a2) (prog-if 00 [VGA controller])
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nvidia


$rpm -qa | grep nvidi
xorg-x11-drv-nvidia-libs-173.14.09-1.lvn9.x86_64
kmod-nvidia-2.6.25.4-30.fc9.x86_64-173.14.05-3.lvn9.x86_64
  (I have an old kernel still installed.)
kmod-nvidia-173.14.09-1.lvn9.x86_64
kmod-nvidia-2.6.25.6-55.fc9.x86_64-173.14.09-1.lvn9.x86_64
xorg-x11-drv-nvidia-173.14.09-1.lvn9.x86_64

$rpm -qf /lib/modules/2.6.25.6-55.fc9.x86_64/extra/nvidia/nvidia.ko
kmod-nvidia-2.6.25.6-55.fc9.x86_64-173.14.09-1.lvn9.x86_64

$uname -a
Linux localhost.localdomain 2.6.25.6-55.fc9.x86_64 #1 SMP Tue Jun 10 16:05:21 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

I guess it's a problem with the rebuilding of the module by the
kmod-nvidia rpm, so I'd start by removing and re-adding that package,
and/or checking its pre/post scripts to find out how it rebuilds the
module, then doing the same by hand just to get it working.
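
Something along these lines, roughly (untested from here; substitute
whatever version string "rpm -qa" actually shows on your box):

$ rpm -q --scripts kmod-nvidia            # see what the pre/post scriptlets do
$ rpm -e kmod-nvidia                      # remove the module package
$ yum install kmod-nvidia                 # re-add it so the scriptlets run again
$ modprobe nvidia && lsmod | grep nvidia  # check the module actually loads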

I assume you have the akmods/kmodtool rpms installed?  I believe they're
related to livna's rebuilding process, but I haven't needed to confirm
that.

$ rpm -qa | grep kmod
kmodtool-1-11.lvn9.noarch
akmods-0.3.1-1.lvn9.noarch


Regards,
Richard

-- 
fedora-list mailing list
fedora-list@redhat.com
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list


Re: Changing initrd contents and grub

2008-06-23 Thread Richard Michael
On Mon, Jun 23, 2008 at 10:39:35AM -0600, Phil Meyer wrote:
> Richard Michael wrote:
>> Hello list,
>>
>> I've changed my RAID and LVM configuration and need to modify the
>> respective commands in the /init of my initrd.
>>
>> I made those changes by decompressing and extracting the cpio archive,
>> editing the init script (added a couple of lines for mdadm, changed the
>> activated volume group name), rebuilding a cpio archive (using the
>> correct "-c"/"-H newc" SVR4 format), and feeding it back through gzip
>> (max compression); then I just moved the old initrd aside, replacing it
>> with my new one:
>>
>> mkdir /boot/tmp
>> cd !$
>> gzip -dc ../initrd- | cpio -id
>> vi init
>> find . -depth -print | cpio -oc | gzip -9 > ../initrd-.new
>> cd ..
>> mv initrd- initrd-.orig
>> mv initrd-.new initrd-
>>
>> The kernel now panics (paraphrase) "can't find /init".
>>
>> It does not do this if I restore the original initrd.
>>
>> I have not changed the name of the initrd, filenames match grub.conf and
>> grub's boot menu, etc.  I have done this type of modification
>> successfully in the past, but only changing a single character in /init.
>>   
>
> Just a thought here, since I have also tried this several times with 
> limited success:
>
> The whole point of mkinitrd is to avoid these 'by hand' operations.
>
> After you make your changes, run mkinitrd to generate a new initrd.  It 
> will pick up changes in /etc/modprobe.conf and /etc/fstab and try to do the 
> right thing.  Besides that, mkinitrd will accept arguments that allow 
> additional drivers to be loaded, with arguments if needed, as well as many 
> other options.

I understand what you are saying, but I am of the opposite opinion.

I have a decent picture of what needs to change and where to change it;
for example, which RAID arrays must be activated and which must not.
What I obviously need is a more detailed understanding of the kernel
boot process, the BIOS and grub.  (One problem here is that
troubleshooting the kernel and the boot process is tricky when the
output scrolls past so quickly on the screen!)

Automated tools often frustrate me because (a) I don't learn anything,
so I can't fix it when something goes wrong, and (b) something will go
wrong, thanks to all the guesswork an automated tool has to do.  As I
say, I've fixed problems with initrds on other systems this way before.
(In fact, I once had to change the uuid of an array that was being
activated.  If I hadn't known anything about the contents of an initrd
and how to modify it, it would have been quite hard to fix with
mkinitrd, because the system wouldn't boot!)

In this situation, mkinitrd is hard to employ because the system is
booted from the Fedora DVD in rescue mode.  I'm moving the entire system
to a new configuration (RAID1 on RAID5).  This means I can't run
mkinitrd on *the* system to have it autoprobe, etc. (chrooting from a
rescue image always seems broken because the /dev entries never exist
in the newly mounted root tree).  Moreover, I think nested RAID arrays
will really confuse any of the automated tools, because that
configuration doesn't appear to be supported (at least not at install
time).
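
(If I were to try it anyway: my understanding is that the /dev problem
can be worked around by bind-mounting the rescue environment's /dev into
the chroot before running mkinitrd.  Roughly, from memory, with the
kernel version string only as an example:

mount --bind /dev /mnt/sysimage/dev
mount --bind /proc /mnt/sysimage/proc
mount --bind /sys /mnt/sysimage/sys
chroot /mnt/sysimage
mkinitrd -f /boot/initrd-2.6.25.6-55.fc9.x86_64.img 2.6.25.6-55.fc9.x86_64

But I'd still rather understand what belongs in the image than trust the
autoprobing.)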

Thanks though.

Regards,
Richard

> I am pretty sure that a modern mkinitrd makes almost all manual edits
> of an initrd image unnecessary.
>
> Good luck!
>

-- 
fedora-list mailing list
fedora-list@redhat.com
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list


Changing initrd contents and grub

2008-06-23 Thread Richard Michael
Hello list,

I've changed my RAID and LVM configuration and need to modify the
respective commands in the /init of my initrd.

I made those changes by decompressing and extracting the cpio archive,
editing the init script (added a couple of lines for mdadm, changed the
activated volume group name), rebuilding a cpio archive (using the
correct "-c"/"-H newc" SVR4 format), and feeding it back through gzip
(max compression); then I just moved the old initrd aside, replacing it
with my new one:

mkdir /boot/tmp
cd !$
gzip -dc ../initrd- | cpio -id
vi init
find . -depth -print | cpio -oc | gzip -9 > ../initrd-.new 
cd ..
mv initrd- initrd-.orig
mv initrd-.new initrd-

The kernel now panics (paraphrase) "can't find /init".

It does not do this if I restore the original initrd.

I have not changed the name of the initrd, filenames match grub.conf and
grub's boot menu, etc.  I have done this type of modification
successfully in the past, but only changing a single character in /init.

So, it appears the kernel is not using my new initrd.  Perhaps it is not
prepared correctly (file magic for both the new and old initrd files
suggests they are the same, however)?  Is grub involved somehow?
Perhaps it can't find my new initrd?
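
One thing I still mean to double-check is that the rebuilt archive at
least lists cleanly and that /init really is at the top of it, along the
lines of:

file initrd-                               # should report gzip data, like the original
gzip -dc initrd- | cpio -itv | grep init   # init should appear at the top level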

/boot is on a raid1 partition, ext2fs.  Grub knows about this, and the
system used to boot without problem.

Any advice?

Thanks,
Richard

-- 
fedora-list mailing list
fedora-list@redhat.com
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list


F9 install onto LVM on RAID1 on RAID5

2008-06-22 Thread Richard Michael
Hello list,

I am trying to install a new F9 system onto a RAID/LVM setup.

As anaconda doesn't let me create the RAID/LVM configuration I require,
I created these devices using mdadm and lvm in the shell during
installation.  I then created the filesystems and swap space, with
labels, in the LVM volume group, and I can mount them, read, write, etc.
So all is well with the underlying setup.

However, returning to the installer's "custom layout" partitioning page,
anaconda displays the volume group, and the names and sizes of the
members are correct; but in the "TYPE" column it indicates "foreign", and
the mount point and other fields are empty.

If I click "LVM" or highlight one of the members and click "Edit" (to
set the mount points and formatting options), anaconda responds with:

--
Not enough physical volumes 
(...)
Create a partition or RAID array of type "physical volume (LVM)"
and click "LVM" again.
--

Consequently, I cannot edit the member details to set mount points and
formatting options and continue with the installation.

How does anaconda determine the "type" of a RAID array; do md devices
have types (as partitions do)?  How can I convince it that there are
indeed physical volumes for LVM (and that I have already configured
them)?

Alternatively, how can I tell anaconda definitively to simply skip all
partitioning and let me specify which /dev entries to use for which
partitions?
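
For reference, from the shell on tty2 the pieces do all show up (roughly;
I don't have the exact output in front of me):

  cat /proc/mdstat     # all four arrays assembled, md4/md5 degraded as expected
  mdadm --detail /dev/md5
  lvm pvs ; lvm vgs ; lvm lvs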


Details
===

There are four disks in the system, and I will add three more.  The
intention is to have a mirror of three disks, six in total, plus one
spare on one half.  So:

md0 is a raid1 (mirror) of four small partitions of each disk.
  mdadm --create /dev/md0 --level=raid1 --raid-devices=4 /dev/sd[abcd]1

md1 is a raid5 of the remaining portion of three disks plus a spare.
  mdadm --create /dev/md1 --level=raid5 --raid-devices=3 --spare-devices=1 --assume-clean /dev/sd[abcd]2

md4 is a raid1 (mirror) of md0 and "missing", degraded because its other
disks aren't installed yet.
  mdadm --create /dev/md4 --level=raid1 --raid-devices=2 /dev/md0 missing

md5 is a raid1 (mirror) of md1 and "missing", likewise degraded.
  mdadm --create /dev/md5 --level=raid1 --raid-devices=2 /dev/md1 missing

I configured my lvm volumes on md5, and as I mentioned, anaconda does
see the members.

  lvm> pvcreate /dev/md5
  lvm> vgcreate -s 32m vg0 /dev/md5
  lvm> lvcreate -L 1024m -n root vg0 ; ...
  lvm> vgchange -a y vg0
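
The filesystems and swap mentioned above were created with labels roughly
like this (LV names other than "root" are only illustrative):

  mke2fs -j -L "/" /dev/vg0/root
  mkswap -L SWAP /dev/vg0/swap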


As an aside, anaconda displays md0 and md1 in the list of RAID volumes
(both as type "foreign"), but *not* md4 and md5, even though they are
just ordinary mirror arrays.  Is this because they are degraded?

I suspect anaconda lists the LVM members because it notices which VGs
are active.  It doesn't believe md5 contains a physical volume suitable
for LVM use.  (In fact, I don't think anaconda believes there are any
physical volumes for LVM on the system at all; and, as above, it doesn't
show md5 at all.)

If I cannot get anaconda to cooperate, I'll install onto a raid5 array
on temporary disks, then move the entire system into the proper
nested-RAID5/RAID1/LVM setup.

Thanks for any suggestions.


Regards,
Richard

-- 
fedora-list mailing list
fedora-list@redhat.com
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list