running 3TB disks in RAID5 on SL 5.4
Hi all,

My lab uses a Dell PowerEdge R905 server running Scientific Linux 5.4. It has three 1TB disks combined in a RAID5 configuration to create one 2TB partition. I would like to replace these with three 3TB disks to create one 6TB RAID5 partition. I have read online that there may be issues with running 3TB disks: some people have hit an upper limit of 2.1TB for a single partition due to addressing issues. However, this should not be a problem when running the Extensible Firmware Interface (EFI). Can anyone confirm that this will work on my system with the current OS?

Cheers, Diederick
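As an aside on where the 2.1TB figure comes from: a legacy DOS/MBR partition table stores sector offsets and counts in 32-bit fields, so with 512-byte sectors the largest addressable partition is 2^32 sectors. A quick shell (bash) check of the arithmetic:

```shell
# Largest partition a 32-bit LBA field can describe with 512-byte sectors
bytes=$(( 2**32 * 512 ))
echo "$bytes"                  # 2199023255552 bytes
echo "$(( bytes / 1000**4 ))"  # 2 -- i.e. about 2.2 decimal TB, the "2.1TB" limit quoted online
```

Anything larger needs a GPT label (64-bit fields), and booting from it needs EFI-aware firmware plus installer support.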
Re: running 3TB disks in RAID5 on SL 5.4
On Fri, 12 Nov 2010, Diederick Stoffers wrote:
> [...] I read online there might be issues with running 3TB disks, some
> people have encountered an upper limit of 2.1TB for a single partition
> due to addressing issues. However, this should not be a problem when
> running Extensible Firmware Interface (EFI). Can anyone confirm that
> this will work on my system with the current OS?

Sadly SL5 would not install on the one machine I tested using EFI firmware. RH have a document (sorry, can't find it right now) explaining that support for EFI in anaconda will be added in EL6.

However, since EL5/SL5 won't boot from an md RAID-5 set, you are probably using a hardware RAID controller, like the PERC/6 or H700 or similar. Those support slicing a single RAID-5 up into several smaller 'Virtual Disks' (VDs) that are presented to the OS as different disks. The 2TB restriction in the BIOS firmware only applies to the 'boot disk', so it is easy enough to slice off a little to install into (using the old-fashioned DOS disk label) and have the rest configured using GPT labels, or in fact chop it all up into 2TB chunks if you prefer. E.g.:
We have one Dell PE2950 box with a PERC/6i with 8x750G disks set up with RAID-5, presenting 3 VDs each under 2TB, thus (output from MegaCli -LdInfo -LALL -aALL):

Adapter 0 -- Virtual Drive Information:

Virtual Disk: 0 (Target Id: 0)
Name: templ-vd0
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size: 180MB
State: Optimal
Stripe Size: 64kB
Number Of Drives: 7
Span Depth: 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default

Virtual Disk: 1 (Target Id: 1)
Name: templ-vd1
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size: 180MB
State: Optimal
Stripe Size: 64kB
Number Of Drives: 7
Span Depth: 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default

Virtual Disk: 2 (Target Id: 2)
Name: templ-vd2
RAID Level: Primary-5, Secondary-0, RAID Level Qualifier-3
Size: 689280MB
State: Optimal
Stripe Size: 64kB
Number Of Drives: 7
Span Depth: 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Access Policy: Read/Write
Disk Cache Policy: Disk's Default

The PERC VDs show up in /proc/partitions as:

$ cat /proc/partitions
major minor    #blocks  name
   8     0     184320  sda
   8     1     104391  sda1
   8     2 1843089255  sda2
   8    16     184320  sdb
   8    17 1843193646  sdb1
   8    32  705822720  sdc
   8    33  705815743  sdc1

... anyway SL5 has been working on that box for about 3 years. In fact we join most of them back together as a single LVM VG (really, don't ask why...). The reason that all these PERC VDs are 2TB is that I was being very, very conservative.

--
/\ | Computers are different from telephones.  Computers do not ring. |
   |          -- A. Tanenbaum, Computer Networks, p. 32               |
-| | Jon Peatfield, Computer Officer, DAMTP, University of Cambridge  |
   | Mail: jp...@damtp.cam.ac.uk  Web: http://www.damtp.cam.ac.uk/    |
\/
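To make the "GPT labels" part of the suggestion above concrete, here is a hedged sketch. The device name /dev/sdb is an assumption for the large data VD, and the exact mkpart syntax varies a little between parted versions:

```
# DANGER: this destroys the partition table on /dev/sdb -- adjust the device name.
parted -s /dev/sdb mklabel gpt           # GPT lifts the 2TiB DOS-label limit
parted -s /dev/sdb mkpart primary 0 100% # one large data partition
mkfs.ext3 /dev/sdb1                      # or your filesystem of choice
```

The small boot VD keeps its ordinary DOS label so the BIOS can boot from it; only the large data VDs need GPT.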
SL 5.4 to 5.5 upgrade-problem
When installing SL 5.5 over SL 5.4, about midway into disk 2 I get an error message telling me to REBOOT: "204 Meg on /mnt/sysimage/usr". When I look at /mnt it is empty after the reboot. My guess is that the update ran out of disk space, and that sysimage/usr is removed after the error is detected and the REBOOT message is displayed.

Is there any way to change the location of sysimage/usr to some other disk on the system? The sda? partitions contain all the system directories and total 36G, about half of it uncommitted. /usr is 8G and 94% full. The other directories each have at least a gig of spare space. I need to change partition sizes but hate to waste a lot of time guessing.

Thank You
Larry Linder
Re: SL 5.4 to 5.5 upgrade-problem
On Thu, 22 Jul 2010, Larry Linder wrote:
> When installing SL5.5 over SL5.4 about mid way into the Disk 2 I get an
> Error message to REBOOT. 204 Meg on /mnt/sysimage/usr When I look at
> /mnt it is empty after reboot.

During an install, the partition that is going to become / is mounted on /mnt/sysimage, and the one that will become /usr is mounted on /mnt/sysimage/usr.

> My guess is that update ran out of disk space, and the sysimage/usr is
> removed after ERROR is detected and REBOOT message is displayed.

You shouldn't actually have to do an anaconda install to get from SL 5.4 to SL 5.5; a yum upgrade should work. But yes, you ran out of disk space on /usr, which is by far where most of the distribution lives.

> Is there anyway to change the location of sysimage/usr to some other
> disk on the system?

How big is your / partition? Is it enough to contain all that it holds now, plus the 8GB of /usr, plus the extra stuff you still have to put in? Do you have any other partition with 2G or more free? If so, you could boot in single-user mode and move the contents of /usr to whichever partition is free. But it may be better to first clean unused user files out of /usr, because there is certainly not 8GB worth of OS files that should be there.

--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.
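For the "boot single-user and move /usr" option mentioned above, a hedged sketch; the target partition /dev/sda5 is a made-up example, so substitute whichever partition df shows has the space:

```
# From single-user mode, so nothing holds /usr files open:
mkdir /mnt/newusr
mount /dev/sda5 /mnt/newusr   # the partition with enough free space (example name)
cp -a /usr/. /mnt/newusr/     # -a preserves ownership, permissions, symlinks
# verify the copy, then point the /usr entry in /etc/fstab at /dev/sda5 and reboot
```

Cleaning stray files out of /usr first, as suggested, may make all of this unnecessary.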
Re: SL 5.4 to 5.5 upgrade-problem
Larry Linder wrote:
> [...] Is there anyway to change the location of sysimage/usr to some
> other disk on the system?

Hi Larry,
One quick question before I proceed: are you doing a real upgrade or an install?

An upgrade is where SL 5.4 stays there and you just update the packages in it. If that is the case, using the installer isn't the recommended way of updating; it is much easier to do an upgrade via yum:
http://www.scientificlinux.org/documentation/howto/upgrade.5x

An install is where you wipe and reformat everything except maybe your home and data partitions. I am going to assume that you are doing an install of SL 5.5 over a previously installed SL 5.4. If you are doing this, then you *need* to reformat the partitions that do not contain your home area or data. Otherwise the install starts adding to what is already there, and, as you saw, it can fill up.

Partitions you should format if you are doing an install: /, /usr, /var, /boot.

As Steve said in a previous email, if you can fit everything onto / there is often no reason to create a separate /usr. Take that space and add it to /.

Hopefully this is enough information, along with Steve's, to get you going.
Troy
--
Troy Dawson  daw...@fnal.gov  (630)840-6468
Fermilab  ComputingDivision/LSCS/CSI/USS Group
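The yum route in the howto linked above boils down to something like the following (run as root; the authoritative steps are in the howto, this is just the shape of it):

```
yum clean all   # drop stale 5.4 metadata
yum upgrade     # pull in the 5.5 package set
reboot          # boot into the new kernel
```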
Re: SL 5.4 to 5.5 upgrade-problem
Because we are in a little town (wide spot in the road), the DSL is not the most reliable. That is why I downloaded the 8 disks from another source and wanted to update our systems to SL 5.5. This install attempt was an update and not a new installation.

There have been a lot of additions to this system for doing FFTs, power spectral density, and a lot of signal processing. Unfortunately /usr has expanded beyond our first guess while /usr/local is almost empty, and these may not be adjacent partitions. Is there any way to expand /usr and shrink /usr/local?

Thanks for the insight.
Larry Linder

On Thursday 22 July 2010 10:32, you wrote:
> [...] An upgrade is where SL 5.4 stays there and you just update the
> packages in it. If that is the case, using the installer isn't the
> recommended way of updating it. It is much easier to do an upgrade via
> yum. http://www.scientificlinux.org/documentation/howto/upgrade.5x
Re: SL 5.4 to 5.5 upgrade-problem
What you want to do can be done with a combination of parted and resize2fs. Take good backups first. It would have been easier if you were using LVM.

The basic strategy: use resize2fs to shrink the /usr/local filesystem, then parted to shrink the /usr/local partition, then parted to grow the /usr partition. The last time I did this I actually had to delete one of the partitions out of the partition table and then re-create it in the same spot. Not for amateurs.

Steve

On Thu, 22 Jul 2010, Larry Linder wrote:
> [...] Unfortunately /usr has expaned beyond our first guess and
> /usr/local is almost empty. Since these may not be adjacent partitions.
> Is there anyway to expand /usr and shrink /usr/local.

--
Steven C. Timm, Ph.D  (630) 840-8525
t...@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.
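A sketch of that strategy, assuming for illustration that /usr is /dev/sda5 and /usr/local is the adjacent /dev/sda6 (your device names will differ; again, take backups first):

```
umount /usr/local
e2fsck -f /dev/sda6
resize2fs /dev/sda6 1G   # shrink the filesystem *before* the partition
parted /dev/sda          # interactively shrink sda6, then grow sda5
                         # (or delete and re-create in place, as described)
e2fsck -f /dev/sda5
resize2fs /dev/sda5      # with no size argument, grows to fill the partition
```

The ordering matters: shrink filesystem then partition, but grow partition then filesystem.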
Re: SL 5.4 to 5.5 upgrade-problem
Larry Linder wrote:
> [...] Unfortunately /usr has expaned beyond our first guess and
> /usr/local is almost empty.

Steven Timm has the right suggestion, but I believe he missed this part:

> Since these may not be adjacent partitions. Is there anyway to expand
> /usr and shrink /usr/local.

So, how about posting your /etc/fstab, along with the results of:

df -B 1024 | grep /dev/ | sort

Do you have any room for an additional large hard drive? Do you have another system on which you can mount the hard drive that you are having space problems with? No further commitment or comments until you reply to the above, as it will determine how to proceed.

--
peace out. tc, hago. g.

in a free world without fences, who needs gates.
** help microsoft stamp out piracy - give linux to a friend today.
** to mess up a linux box, you need to work at it.
   to mess up an ms windows box, you just need to *look* at it.
** learn linux:
   'Rute User's Tutorial and Exposition' http://rute.2038bug.com/index.html
   'The Linux Documentation Project' http://www.tldp.org/
   'LDP HOWTO-index' http://www.tldp.org/HOWTO/HOWTO-INDEX/index.html
   'HowtoForge' http://howtoforge.com/
Ethernet drivers in SL 5.4
Hi, I have installed SL 5.4 on my new desktop and have a problem with Ethernet drivers. The adapter integrated on the motherboard is a clone of the 3c501. Linux tells me that no driver is present. I have also mounted an old card, a clone of the NE2000, and get the same message. The kernel version is 2.6.18-164.2.1.el5. What can I do?
Thx
--
Elio Fabri
Re: Clean SL 5.4 Install and Anaconda Disk Partitioning Madness
Thanks for all the suggestions. Anaconda still did some rearranging, but I eventually figured out what I needed to do to get what I wanted. I had to use `Edit' to set the starting and ending cylinders as well as check the Primary Partition box, and I had to use `Edit' for all the partitions. I tried `Add' for the last partition and let it go to the end of the drive automatically, but it stopped short of the end, so I had to use `Edit' in order to use the full drive space.

I assembled all my partitions into their respective software RAIDs, set file system types and mount points, and clicked `Next'. New problem: it now says XFS is not valid for a boot partition. Hogwash! All my systems use XFS for boot partitions. I've got systems running SL 3.0.5, SL 4.5, and one running SL 5.3, all with ONLY XFS file systems, including /boot. Any ideas on getting past this new hurdle? Thanks.

Oh, in order to get XFS listed as a valid file system at all, I had to add `xfs' to the `linux' line when I started up the install.
--
Brent L. Bates (UNIX Sys. Admin.)  M.S. 912  Phone: (757) 865-1400, x204
NASA Langley Research Center  FAX: (757) 865-8177
Hampton, Virginia 23681-0001  Email: b.l.ba...@larc.nasa.gov
http://www.vigyan.com/~blbates/
Clean SL 5.4 Install and Anaconda Disk Partitioning Madness
I've been searching Google for answers and can't find any, so I decided to check here. I'm trying to do a clean install of SL 5.4, booting from an SL 5.4 x86_64 DVD. When I get to the point of custom partitioning my drives, Anaconda makes a mess of things.

I have 4 drives and I want 4 partitions on each drive. The first partition will be `/boot', next `/', then `/data', and finally a swap partition. As I create each partition on each drive, Anaconda will suddenly rearrange the order of the partitions. When I go on to another drive, the order may be different than on the last drive I just partitioned. When I try to create the 4th and last partition, I get an `Extended' 4th partition (which is empty) and a real 5th partition, instead of a simple single partition.

In the end, what I want is the first partition on each drive combined into a software RAID 1 as `/boot'. The next 2 sets will be software RAID 0s for `/' and `/data'. The final partitions will be 4 separate swap partitions that the OS will take care of.

I've tried creating all the partitions on one drive and then moving on to the next one and the next one, but it scrambles things up. I've tried creating the first partition on each drive, then combining them into the RAID 1 md0 device and specifying the file system type and mount point `/boot'. Next I go on to the next partition, which I've tried as a software RAID 0 partition and as swap at various times. This one usually works, but not always. When I get to the 3rd partition, it will suddenly rearrange the partitions on that drive. I've even seen it suddenly create a swap partition on a different drive than the one I'm actually working on.

I've done this with earlier versions of SL, but I don't remember having this much trouble with Anaconda randomly rearranging things and creating an extra unneeded partition. Any insights would be greatly appreciated. Thanks.
--
Brent L. Bates (UNIX Sys. Admin.)  M.S. 912  Phone: (757) 865-1400, x204
NASA Langley Research Center  FAX: (757) 865-8177
Hampton, Virginia 23681-0001  Email: b.l.ba...@larc.nasa.gov
http://www.vigyan.com/~blbates/
Re: Clean SL 5.4 Install and Anaconda Disk Partitioning Madness
Brent L. Bates wrote:
> [...] I've done this with earlier versions of SL, but I don't remember
> having this much trouble with Anaconda randomly rearranging things and
> creating an extra unneeded partion. Any insights would be greatly
> appreciated.

I've noticed anaconda likes to try to be smarter than you. Quite annoying.

You can probably solve this by specifying things explicitly in a kickstart file, by pre-partitioning the disks using fdisk/sfdisk, or by fiddling with the force-primary-partition check box and the order in which you specify the partitions in anaconda. Over the years I've worked through those in reverse order: manually tricking anaconda, then forcing an sfdisk dump in, and finally resorting to tailoring a kickstart to do my bidding, because disk sizes aren't constant for me.

Hope that helps a bit.
Cheers,
Mark
--
Mr. Mark V. Stodola
Digital Systems Engineer
National Electrostatics Corp.
P.O. Box 620310
Middleton, WI 53562-0310 USA
Phone: (608) 831-7600  Fax: (608) 831-9591
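For the kickstart route, a fragment matching the layout Brent described might look like this; the disk names, sizes (in MB), and md numbering are illustrative, not tested against his hardware:

```
clearpart --all --initlabel
# RAID1 /boot across all four disks
part raid.11 --size=200 --ondisk=sda --asprimary
part raid.12 --size=200 --ondisk=sdb --asprimary
part raid.13 --size=200 --ondisk=sdc --asprimary
part raid.14 --size=200 --ondisk=sdd --asprimary
# RAID0 /
part raid.21 --size=20000 --ondisk=sda --asprimary
part raid.22 --size=20000 --ondisk=sdb --asprimary
part raid.23 --size=20000 --ondisk=sdc --asprimary
part raid.24 --size=20000 --ondisk=sdd --asprimary
# RAID0 /data, growing to fill each disk
part raid.31 --size=1 --grow --ondisk=sda --asprimary
part raid.32 --size=1 --grow --ondisk=sdb --asprimary
part raid.33 --size=1 --grow --ondisk=sdc --asprimary
part raid.34 --size=1 --grow --ondisk=sdd --asprimary
# one swap partition per disk, managed individually by the OS
part swap --size=4096 --ondisk=sda --asprimary
part swap --size=4096 --ondisk=sdb --asprimary
part swap --size=4096 --ondisk=sdc --asprimary
part swap --size=4096 --ondisk=sdd --asprimary
raid /boot --level=1 --device=md0 raid.11 raid.12 raid.13 raid.14
raid /     --level=0 --device=md1 raid.21 raid.22 raid.23 raid.24
raid /data --level=0 --device=md2 raid.31 raid.32 raid.33 raid.34
```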
Re: Clean SL 5.4 Install and Anaconda Disk Partitioning Madness
Brent L. Bates wrote:
> [...] I don't remember having this much trouble with Anaconda randomly
> rearranging things and creating an extra unneeded partion. Any insights
> would be greatly appreciated. Thanks.

What *I* would do with something that complicated: I would do a Ctrl-Alt-F2 on the screen before partitioning and create all the partitions by hand. Then, on the customizing screen, you just have to link them together.

But if you are going to do it through the graphical install, make *sure* that you select Primary Partition for each and every partition. The one partition you don't do that for is the one that is going to get popped over to an extended partition.

Troy
--
Troy Dawson  daw...@fnal.gov  (630)840-6468
Fermilab  ComputingDivision/LSCS/CSI/USS Group
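The by-hand approach can be scripted from that console with sfdisk, so all four disks get identical tables. The sizes below are placeholders (type fd is Linux raid autodetect, 82 is swap), and old sfdisk's -uM switch (sizes in MB) is assumed:

```
# On tty2 (Ctrl-Alt-F2) during the install:
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  sfdisk -uM $d <<'EOF'
,200,fd
,20000,fd
,400000,fd
,4096,82
EOF
done
```

Back on the anaconda screen, the pre-made partitions then only need RAID membership and mount points assigned.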
Re: Clean SL 5.4 Install and Anaconda Disk Partitioning Madness
And when you are doing your custom layout, select RAID rather than New in the menu for making partitions. The RAID option allows you to set up partitions on different disks and then combine them into a software RAID; you can select the RAID type and which partitions go into each RAID device. I have used this many times, and Troy is correct: as long as you use the force-primary-partition option, the partitions do not get moved around.

Eve

On Wed, 24 Mar 2010, Troy Dawson wrote:
> [...] But if you are going to do it by the graphical install, make
> *sure* that you select Primary Partition for each and every partition.
> The one partition that you don't do that to is going to get popped over
> to an extended partition.

***
Eve Kovacs
Argonne National Laboratory, Room E-217, Bldg. 362, HEP
9700 S. Cass Ave. Argonne, IL 60439 USA
Phone: (630)-252-6208  Fax: (630)-252-5047
email: kov...@hep.anl.gov
***
Re: (New user) SL 5.4-Any rpms for madwifi-hal?
Well, those modules didn't get wireless working... and when I tried blacklisting ath5k, the mouse stopped working (?). But I realized that those rpms are for standard madwifi. (I had been asking about a newer version with some different code that allows using AR5007 chipsets, madwifi-hal-10.5.6; it appears that the madwifi-hal rpm is part of standard madwifi.) So the long and short of it is that I'll just have to be content, or see if the newest RHEL kernel does anything.

Thank you,
Ibidem

On Thu, 04 Mar 2010 08:47:56 -0600 Troy Dawson daw...@fnal.gov wrote:
> We have not been providing madwifi for SL 5.4 because of the conflict
> it causes with the modules already in the kernel. But we are providing
> it for the older releases, so you can always find the
> kernel-module-madwifi and kernel-module-madwifi-hal rpm's in the older
> releases' security updates area.
> http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/
> If you have the regular 32 bit kernel installed the rpm's would be here.
> http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/kernel-module-madwifi-2.6.18-164.11.1.el5-0.9.4-15.sl5.i686.rpm
> http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/kernel-module-madwifi-hal-2.6.18-164.11.1.el5-0.9.4-15.sl5.i686.rpm
> Troy
SL 5.4 - XFS project quota - XFS_QUOTAON: Invalid argument
Hi all,

I posted this a few weeks ago on the XFS mailing list but did not receive an answer, so I'm trying here. We have an XFS filesystem on SL 5.4 with pquota accounting and enforcement. It was working well, but recently I had to disable enforcement for a short time. Now I want to switch it back on, but I get the error "XFS_QUOTAON: Invalid argument". Does anyone know what to do? Any suggestions are welcome. Thanks in advance.

[r...@coffein raid]# xfs_quota -x -c state -a /raid
User quota state on /raid (/dev/sdb1)
  Accounting: OFF
  Enforcement: OFF
  Inode: #18446744073709551615 (0 blocks, 0 extents)
Group quota state on /raid (/dev/sdb1)
  Accounting: OFF
  Enforcement: OFF
  Inode: #259 (24 blocks, 3 extents)
Project quota state on /raid (/dev/sdb1)
  Accounting: ON
  Enforcement: OFF
  Inode: #259 (24 blocks, 3 extents)
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]

[r...@coffein raid]# xfs_quota -x -c enable -p -v /raid
XFS_QUOTAON: Invalid argument

Best Regards,
Jan
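One hedged guess, based on how XFS quota flags generally behave on kernels of this vintage: once enforcement has been switched off, it often cannot be re-armed on a live filesystem, only at mount time. If that is what is happening here, a remount with the project-quota enforcement option may bring it back:

```
umount /raid
mount -o prjquota /dev/sdb1 /raid
xfs_quota -x -c 'state -p' /raid   # check whether Enforcement now shows ON
```

This is a suggestion to test, not a confirmed fix for this particular error.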
Re: (New user) SL 5.4-Any rpms for madwifi-hal?
Ibidem wrote:
> [...] After looking around, I've found that only madwifi-hal works with
> my chipset (AR5007), but I can't seem to find a recent
> (2.6.18-164.11.1) rpm.

We have not been providing madwifi for SL 5.4 because of the conflict it causes with the modules already in the kernel. But we are providing it for the older releases, so you can always find the kernel-module-madwifi and kernel-module-madwifi-hal rpm's in the older releases' security updates area:
http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/

If you have the regular 32-bit kernel installed, the rpm's would be here:
http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/kernel-module-madwifi-2.6.18-164.11.1.el5-0.9.4-15.sl5.i686.rpm
http://ftp.scientificlinux.org/linux/scientific/52/i386/updates/security/kernel-module-madwifi-hal-2.6.18-164.11.1.el5-0.9.4-15.sl5.i686.rpm

Troy
--
Troy Dawson  daw...@fnal.gov  (630)840-6468
Fermilab  ComputingDivision/LSCS/CSI/USS Group
(New user) SL 5.4-Any rpms for madwifi-hal?
Hello all, I've recently started using SL 5.4 (installed from mini-livecd) on my Aspire One, with kernels 2.6.18-164.6.1.el5 (original) and 2.6.18-164.11.1.el5 (updated). Ath5k is functional on the live cd, halfway working on the old kernel, somewhat better on the updated kernel; but it comes nowhere near what the latest ath5k (2.6.31 32) can do or what madwifi-hal can do. I've tried installing madwifi, but it did not provide the interfaces (ifconfig did not recognize ath0 and eth1). After looking around, I've found that only madwifi-hal works with my chipset (AR5007), but I can't seem to find a recent (2.6.18-164.11.1) rpm. If I can install an rpm with dkms or kmod, that would be ideal; I'd prefer to avoid compiling it myself. I might have missed something; if so, please point it out. Ibidem
SL 5.4 installs perfectly
Made DVDs and installed it on a test system that has an NVIDIA chipset and SATA drives. It installed perfectly. I downloaded it from the last site in the list via FTP, not the HTML sites. Larry Linder MicroControls LLC
SL 5.4 - SATA Disks, NVIDIA
SL 5.4 now works with SATA drives, and the NVIDIA chipset now works. The first attempt to set up video was strange, but a clean reload did not exhibit the same problem; we assume it had to do with something left in the NVIDIA chipset.

Adding disks to a working system manually is not as easy as it used to be on SL 5.4. openSUSE 11 and YaST have a pretty good disk management scheme. However, openSUSE 11.3 has two issues that caused us to abandon it: the video system quit working during installation, and LVM was always getting in the way of setting up a system for engineering work. Nice if you are running a DB, but not cool when you are supporting a bunch of maverick engineers who want it done their way. In openSUSE 11 you could at least not select LVM.

The first application for SL 5.4 is some high-speed atmospheric data analysis. The sample rate is 10 usec for 4 channels of 14-bit, broadband data that sometimes lasts for half an hour or more. Once this is running we will add a few terabytes of disk space, let it run for a few weeks, and see how it progresses. This is written in C, but with a C++ flavor.

A second application is to analyze wind data taken from an array of 4 remote anemometers placed on a 200 ft grid. The data system we built wakes up every 5 minutes, reads the data, and goes back to sleep. We service it every 30 days to swap out the camera chip and the rechargeable battery pack. To visualize the data we need to be able to run Scilab to plot it in at least 3D, and maybe 4D if we can add time as the fourth dimension. We plan to eventually put a number of boxes into a cluster to get the analysis done quicker.

Thank you for fixing the basic problems so we can use existing hardware. Regards Larry Linder
Re: SL 5.4 freeze on 1st boot when setting hostname
Final update: after a very productive exchange via the Red Hat bug tracking system, it turned out my BIOS was causing the problem. Flashed the latest version and voila, it works like a charm (Samsung Q45 BIOS ST7 to version ST11). Before that, I could get it to boot with the kernel options pci=nomsi acpi=off added, with the crashkernel and rhgb quiet options removed. Maybe someone will find this useful in the future and won't have to feel as stupid as I do at the moment for not catching something as easy as this. regards, Matt
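For anyone trying the same workaround before a BIOS update is available: those options go on the kernel line of the affected entry in /boot/grub/grub.conf. A sketch of what such an entry might look like (title, device, and root label are illustrative):

```
title Scientific Linux (2.6.18-164.2.1.el5)
        root (hd0,0)
        # "crashkernel=..." and "rhgb quiet" removed; workaround options appended
        kernel /vmlinuz-2.6.18-164.2.1.el5 ro root=LABEL=/ pci=nomsi acpi=off
        initrd /initrd-2.6.18-164.2.1.el5.img
```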
SL 5.4 -
After reading a lot of the failure reports and noise: are the available distributions what we are going to live with for the immediate future? When I read the release notes for SL 5.4 there is no mention of the SATA problem or of it being fixed. The NVIDIA problems I had with SL 5.3 can be taken care of by disabling the video module at install and boot-up and just setting the desired display resolution; thanks for the tip from an SL user. When I tried to install SL 5.3 on a 64-bit system with Serial ATA disks, the installation just hangs. I have SUSE 11 on this system but really would like to use SL. SUSE 11.1 - 11.3 has the same NVIDIA problem. The ISO images for SL 5.4 were downloaded a few weeks ago and I wonder if it's worth the effort to make the disks and install SL 5.4, since the release notes never said anything about the SATA interface, or at least I didn't find it. Regards Larry Linder
Re: SL 5.4 -
Larry J. Linder wrote: After reading a lot of the failure reports and noise.

We had some problems with one of the download servers. This has been fixed. Where people were able to get a good download and DVD burn, there haven't been install problems.

Are the available distributions what we are going to live with for the immediate future?

It's released, yes. Until SL 5.5.

When I read the release notes for SL 5.4 there is no mention of the SATA problem or it being fixed.

SATA bug fixes are really a driver and upstream vendor issue. They have changed the way they do their release notes, so those issues are not pulled out separately and are covered in more detail. You can find them here: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html-single/Technical_Notes/ And there are a few bug fixes that might have addressed your SATA problem:
- when using the aic94xx driver, a system with SATA drives may not boot due to a bug in libsas.
- a bug in aic94xx may have caused kernel panics during boot on some systems with certain SATA disks.
- fixed an issue in which sata_nv was not included in the initramfs.
That last one might have fixed the problem you were seeing, since you were using a motherboard with an NVIDIA chipset.

The NVIDIA problems I had with SL 5.3 can be taken care of by disabling the video module at install and boot-up and just setting the desired display resolution. Thanks for the tip from an SL user. When I tried to install SL 5.3 on a 64-bit system with Serial ATA disks, the installation just hangs. I have SUSE 11 on this system but really would like to use SL. SUSE 11.1 - 11.3 has the same NVIDIA problem. The ISO images for SL 5.4 were downloaded a few weeks ago and I wonder if it's worth the effort to make the disks and install SL 5.4, since the release notes never said anything about the SATA interface, or at least I didn't find it.

A few weeks ago SL 5.4 was *not* released.
If you are worried about all of the above things and yet were willing to install a beta instead of the final release, I'm a bit surprised. Download the final release, do an md5sum and/or sha1sum on the image, and use that image. Troy
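The verification step Troy mentions is quick; here is a self-contained sketch using a small dummy file in place of the ISO, so the commands can be tried anywhere (the real image name and published checksums will differ):

```shell
# A dummy file stands in for the downloaded ISO in this sketch.
printf 'hello\n' > image.iso

# Record the checksum the way mirrors publish it, then verify against it.
md5sum image.iso > MD5SUM
md5sum -c MD5SUM     # reports "image.iso: OK" when the file is intact
sha1sum image.iso    # sha1 verification works the same way
```

Against a real download, compare the md5sum/sha1sum output with the checksum files published next to the ISOs on the mirror.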
Re: SL 5.4 -
Hi Larry, After reading a lot of the failure reports and noise. I agree you'll read about a lot of problems people are having on this list, but that's typical, as people wouldn't email noise to the list saying things work fine. For me, I had SL 5.3 i386 crash the server (kernel panic) when running a BackupPC backup job; this happened a couple of months ago, after it had been working fine for many months. I updated to SL 5.4 i386 on the weekend, and BackupPC works fine again on that same server (it must have refreshed some perl modules or something). I have since updated 90% of my SL 5.3 servers to SL 5.4 and have encountered no drawbacks from doing so. Regards, Michael. Are the available distributions what we are going to live with for the immediate future? When I read the release notes for SL 5.4 there is no mention of the SATA problem or it being fixed. The NVIDIA problems I had with SL 5.3 can be taken care of by disabling the video module at install and boot-up and just setting the desired display resolution. Thanks for the tip from an SL user. When I tried to install SL 5.3 on a 64-bit system with Serial ATA disks, the installation just hangs. I have SUSE 11 on this system but really would like to use SL. SUSE 11.1 - 11.3 has the same NVIDIA problem. The ISO images for SL 5.4 were downloaded a few weeks ago and I wonder if it's worth the effort to make the disks and install SL 5.4, since the release notes never said anything about the SATA interface, or at least I didn't find it. Regards Larry Linder --- End of Original Message ---
SL 5.4 freeze on 1st boot when setting hostname
SL 5.4, kernel 2.6.18-164.2.1.el5, x86_64, Samsung Q45. Hello! I've managed to install SL 5.4 from DVDs but cannot boot into the system. The boot sequence freezes when reaching, or shortly after, "setting hostname [OK]". I've tried booting with nousb set and rhgb disabled, to no avail. Any ideas? Thanks!
No xfs filesystem support in SL 5.4 RC 2
Hi there, I can't find the kernel-module-xfs packages, and the xfs filesystem is missing from the kernel too. On an original Red Hat system the last kernel with xfs was 2.6.18-157.el5; in the current Red Hat kernel, 2.6.18-164.2.1.el5, xfs is missing as well. Bye, Thomas
Re: No xfs filesystem support in SL 5.4 RC 2
On Tue, Nov 3, 2009 at 1:12 AM, Thomas Koppe thomas.ko...@hrz.tu-chemnitz.de wrote: Hi there, I can't find the kernel-module-xfs packages, and the xfs filesystem is missing from the kernel too. On an original Red Hat system the last kernel with xfs was 2.6.18-157.el5; in the current Red Hat kernel, 2.6.18-164.2.1.el5, xfs is missing as well.

In 5.4, xfs is included in the kernel itself; there is no external module. You can see it with 'locate xfs.ko' or 'find /lib/modules -name xfs.ko'. Also, it is available only in x86_64, not i686. Akemi
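The check Akemi suggests is just a name search under /lib/modules; here is a self-contained demonstration using a scratch directory in place of the real module tree (since not every machine has the module installed):

```shell
# Build a scratch tree shaped like /lib/modules/<kernel>/... and drop a
# dummy xfs.ko into it.
mkdir -p demo/2.6.18-164.2.1.el5/kernel/fs/xfs
touch demo/2.6.18-164.2.1.el5/kernel/fs/xfs/xfs.ko

# Locate it the same way as on a real system:
find demo -name xfs.ko
# On a real SL 5.4 x86_64 box the equivalent is: find /lib/modules -name xfs.ko
```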
SL 5.4 Questions and Answers
Hello, Because we usually get the same questions at the beginning of each release, I'll try to answer them before they are asked.

Q: Have you heard that RHEL 5.4 is released?
A: Yes. We managed to start getting the update downloaded as soon as it was up on the ftp servers. We have already downloaded it and started rebuilding it.

Q: When will you release SL 5.4?
A: When it's done. But as a guess, we usually release between 2 and 3 months later. We usually have our first alpha release somewhere around 2 weeks after Red Hat has released the source RPMs.

Q: Why is Troy Dawson so good looking?
A: He was born that way.

Q: Will you be releasing the security kernel (2.6.18-164.el5) that came out with RHEL 5.4 as a security kernel for the rest of SL 5?
A: No. Red Hat had just released a security kernel before 5.4 came out, and that earlier security kernel addressed the worst security issues. The first kernel after a minor update tends to do bad things to the older releases, such as SL 5.0. So we will wait for the next security kernel, which will also hopefully have the major bugs shaken out. BUT, we reserve the right to change our mind on this.

Q: Will you wait until SL 5.4 is released before releasing SL 5 security errata?
A: No. But it will probably be a couple of weeks before we release them, just to check and make sure that there aren't hidden problems.

I hope this answers some of the questions before they get asked. Thanks Troy
Trying KVM out before RHEL/SL 5.4?
We were hoping to evaluate KVM in the coming weeks. However, as RHEL 5.4 is not yet out, and it'll be a few weeks after that before Scientific Linux releases 5.4, does anyone know of an easy way to get KVM working on SL 5.3? Or would it be better to wait for SL 5.4? If so, any idea when that is likely to be out? Thanks Tim Edwards
Re: Trying KVM out before RHEL/SL 5.4?
Hi, On Thursday 25 June 2009 11:52:31 you wrote: We were hoping to evaluate KVM in the coming weeks. However as RHEL 5.4 is not yet out, and it'll be a few weeks after that the Scientific Linux releases 5.4, does anyone know of any easy way to go about getting KVM working on SL 5.3? On SL5.3 I've downloaded KVM, built it including the kernel module and installed the .ko in /lib/modules... and the rest in /usr/local. Since the KVM executables are called with their full hardcoded path by some utils, I also had to symlink the executables in /usr/bin. After modprobing the .ko you can use virt-manager or the libvirt based utils to manage your VMs. I haven't had any problems with that setup. Cheers, Andreas
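The sequence Andreas describes might look roughly like this; the release tarball name, module paths, and binary names are illustrative assumptions (untested), not his exact commands:

```
# Build an external KVM release against the running SL 5.3 kernel
# (requires the matching kernel-devel package); version is illustrative.
tar xzf kvm-84.tar.gz && cd kvm-84
./configure --prefix=/usr/local
make && make install

# Copy the module(s) into the running kernel's tree and load them
# (kvm-intel on Intel CPUs, kvm-amd on AMD).
cp kernel/x86/*.ko /lib/modules/$(uname -r)/extra/
depmod -a && modprobe kvm-intel

# Some utilities invoke the emulator by a hardcoded path, so symlink
# the installed binaries into /usr/bin as Andreas notes.
ln -s /usr/local/bin/qemu-system-x86_64 /usr/bin/qemu-kvm
```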
Re: Trying KVM out before RHEL/SL 5.4?
Tim Edwards wrote: We were hoping to evaluate KVM in the coming weeks. However as RHEL 5.4 is not yet out, and it'll be a few weeks after that before Scientific Linux releases 5.4, does anyone know of any easy way to go about getting KVM working on SL 5.3? Or would it be better to wait for SL 5.4 - if so any ideas when that is likely to be out? Thanks Tim Edwards

They have it in centos-extras. You can either download it directly, or try out a config file / mirror I've been setting up: yum --enablerepo=sl-testing install yum-conf-centos-extras Troy