Re: [CentOS] Ext4 on CentOS 5.5 x64
> -Original Message-
> From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf Of cpol...@surewest.net
> Sent: Friday, January 28, 2011 5:02 PM
> To: CentOS mailing list
> Subject: Re: [CentOS] Ext4 on CentOS 5.5 x64
>
> Sorin Srbu wrote:
>
>> Anyway, I get a bad block message when running fsck, and am not sure
>> whether this is an interface problem between the chair and the monitor
>> or something with the tech preview.
>
> Having just lived through this issue, I recommend you run
> the extended (long) SMART test on all your drives and check
> the reports. The relevant package to install is smartmontools.
> It's worth investing a little time in setting up the package.
> I ended up with this incantation in /etc/smartd.conf:
>
> /dev/hda -T normal -p -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost
>
> To execute the extended test (it doesn't touch your data):
> # smartctl --test=long /dev/hda
>
> To view the test results about 80 minutes later:
> # smartctl --log=selftest /dev/hda
>
> and so on.

Good info, thanks!

--
/Sorin
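[Editorial note: for reference, a commented sketch of that smartd.conf entry and the follow-up commands. The device name and mail address are placeholders to adapt; on newer SATA hardware the disk will usually appear as /dev/sda rather than /dev/hda.]

# /etc/smartd.conf -- sketch only; adjust the device and mail address
# -a          the default set of health, attribute and error-log checks
# -o on       enable the drive's automatic offline data collection
# -S on       enable attribute autosave
# -p          report changes in prefailure attributes
# -T normal   normal tolerance for missing SMART capabilities
# -s REGEXP   schedule self-tests: short (S) daily at 02:00, long (L) Saturdays at 03:00
# -m ADDR     mail warnings to this address
/dev/sda -T normal -p -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost

# pick up the config and make it persistent
service smartd restart
chkconfig smartd on

# run a long self-test by hand, then read the results and overall health later
smartctl --test=long /dev/sda
smartctl --log=selftest /dev/sda
smartctl --health /dev/sda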
Re: [CentOS] How to relocate $HOME directory
On Mon, Jan 31, 2011 at 12:18 AM, Rudi Ahlers wrote: > On Sun, Jan 30, 2011 at 11:07 PM, Soo-Hyun Choi wrote: >> >> As you know, $HOME is generally located at "/home/$username" by default. >> >> I would like to re-locate all users' $HOME directories to something like >> "/export/home/$username" without having a hassle/trouble. >> >> Initially, I've thought of just copying them to the new directory (under >> /export/home/xxx), but guessed it might trouble for the normal use (I'm >> pretty new to CentOS, although many experiences with Debian/Ubuntu). >> >> Is there any good tricks (or caveats) when moving users' home directory >> cleanly with CentOS? (I'm with CentOS 5.5 x86_64) > > The easiest way would be to move (or copy) everything in /home to > /export/home, and then remount /home on /export/home in your fstab. > > Before you remount it, you may want to rename it to say /oldhome or > /home2 or something like that, and then if everything works fine then > you simply delete it :) If you're changing the root of /home to another mount point or directory, say "/export/home", you'll also have to use semanage to set its selinux context to "home_root_t", etc. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
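[Editorial note: a minimal sketch of that SELinux relabelling step, assuming the new root is /export/home. semanage is provided by policycoreutils on CentOS 5; the genhomedircon step may differ on other releases.]

# mark the new location as a home-directory root, then relabel it
semanage fcontext -a -t home_root_t "/export/home"
restorecon -Rv /export/home

# once /etc/passwd points the accounts at /export/home/..., regenerate
# the per-user home contexts and relabel again
genhomedircon
restorecon -Rv /export/home

# verify the contexts
ls -dZ /export/home /export/home/*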
Re: [CentOS] How to relocate $HOME directory
On Mon, Jan 31, 2011 at 12:18 AM, Rudi Ahlers wrote:
> On Sun, Jan 30, 2011 at 11:07 PM, Soo-Hyun Choi wrote:
>> Hi there,
>>
>> As you know, $HOME is generally located at "/home/$username" by default.
>>
>> I would like to re-locate all users' $HOME directories to something like
>> "/export/home/$username" without having a hassle/trouble.
>>
>> Initially, I've thought of just copying them to the new directory (under
>> /export/home/xxx), but guessed it might trouble for the normal use (I'm
>> pretty new to CentOS, although many experiences with Debian/Ubuntu).
>>
>> Is there any good tricks (or caveats) when moving users' home directory
>> cleanly with CentOS? (I'm with CentOS 5.5 x86_64)
>>
>> Cheers,
>> Soo-Hyun
>
> The easiest way would be to move (or copy) everything in /home to
> /export/home, and then remount /home on /export/home in your fstab.
>
> Before you remount it, you may want to rename it to say /oldhome or
> /home2 or something like that, and then if everything works fine then
> you simply delete it :)

This tends to break symlinks and hard-coded script locations. In particular, Samba and Apache make some assumptions about where home directories live that you may need to revisit if you enable home-directory or public_html access through those tools.
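[Editorial note: if public_html or Samba home shares are in use, a quick sketch of the knobs to re-check after the move. The boolean names are as shipped on CentOS 5; verify with "getsebool -a | grep home" on your system.]

# Apache: mod_userdir follows the home directory recorded in /etc/passwd,
# so update passwd (or the mount point) and keep the SELinux boolean on
setsebool -P httpd_enable_homedirs on
grep -n "UserDir" /etc/httpd/conf/httpd.conf

# Samba: the [homes] share also follows /etc/passwd, but needs its own boolean
setsebool -P samba_enable_home_dirs on
testparm -s | grep -A3 "\[homes\]"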
Re: [CentOS] How to relocate $HOME directory
On Sun, Jan 30, 2011 at 11:07 PM, Soo-Hyun Choi wrote: > Hi there, > > As you know, $HOME is generally located at "/home/$username" by default. > > I would like to re-locate all users' $HOME directories to something like > "/export/home/$username" without having a hassle/trouble. > > Initially, I've thought of just copying them to the new directory (under > /export/home/xxx), but guessed it might trouble for the normal use (I'm > pretty new to CentOS, although many experiences with Debian/Ubuntu). > > Is there any good tricks (or caveats) when moving users' home directory > cleanly with CentOS? (I'm with CentOS 5.5 x86_64) > > Cheers, > Soo-Hyun > > ___ The easiest way would be to move (or copy) everything in /home to /export/home, and then remount /home on /export/home in your fstab. Before you remount it, you may want to rename it to say /oldhome or /home2 or something like that, and then if everything works fine then you simply delete it :) -- Kind Regards Rudi Ahlers SoftDux Website: http://www.SoftDux.com Technical Blog: http://Blog.SoftDux.com Office: 087 805 9573 Cell: 082 554 7532 ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
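[Editorial note: a concrete sketch of that approach; the device name and LVM volume below are hypothetical, so adjust them to whatever is in your fstab.]

# if /home is already its own filesystem, just move the mount point:
mkdir -p /export/home
umount /home
# in /etc/fstab change the mount point column from /home to /export/home, e.g.
#   /dev/VolGroup00/LogVol02  /export/home  ext3  defaults  1 2
mount /export/home

# otherwise copy the data across, keeping ownership and permissions:
mkdir -p /export/home
cp -a /home/. /export/home/        # or: rsync -aH /home/ /export/home/
mv /home /oldhome                  # keep the old copy until everything works

# either way, point the accounts at the new location:
usermod -d /export/home/username username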
Re: [CentOS] How to relocate $HOME directory
On Mon, Jan 31, 2011 at 06:58:36AM +0900, Soo-Hyun Choi wrote: > > Well, yes and no. In case of Debian/Ubuntu, we need to modify apparmor > settings (e.g., by changing "etc/apparmor.d/tunables/home" information) > to get it right apart from just copying them and changing passwd file. > > I wondered if CentOS has such an issue when relocating $HOME directories. Not that I'm aware. But you could always relocate the home directories and then create appropriate symlink(s). For example, if you're relocating everyone, you can move the users' directories, then remove /home and create /home as a symlink to the new top-level home directory (/export/blah). Or, if your users will be in various different places, you can keep /home and create a symlink for each user (/home/user1 -> /export1/user1 ; /home/user2 -> /export3/user2). There are probably many other ways to deal with this; AFAIK CentOS shouldn't have any difficulties with any of these situations. --keith -- kkel...@wombat.san-francisco.ca.us pgpRTV7Toi40X.pgp Description: PGP signature ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
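[Editorial note: a sketch of the two symlink layouts described above; the paths are only examples, and on an SELinux-enforcing box the semanage/restorecon step mentioned earlier in the thread still applies.]

# whole-tree relocation: /home becomes a symlink
mkdir -p /export/home
mv /home/* /export/home/
rmdir /home
ln -s /export/home /home

# per-user relocation: /home stays, individual entries become symlinks
mv /home/user1 /export1/user1
ln -s /export1/user1 /home/user1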
Re: [CentOS] Groups
On Sun, Jan 30, 2011 at 11:14 PM, Jason S-M wrote:
> Hi All,
>
> On one of my servers I have a personal account and root. I disable root for
> ssh logins and run ssh on an alternative port. When 'scp'ing files I usually
> scp them up, then ssh in 'su' root and move them to /var/www/html.
>
> I can sftp I realize, but what group can I add my personal account to, but
> not root, so I can sftp in and put the files in /var/www/html?

There are a dozen ways to do this. One is to upload with WebDAV over HTTPS, which is built into Apache on CentOS and has plenty of usable clients such as lftp. Another is simply to designate a directory under /var/www/html/, owned by you personally, that the apache user can browse. That gives you direct upload access as yourself.
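[Editorial note: a sketch of the second suggestion, plus the group-based variant the original poster asked about. The group and user names are just examples.]

# option A: a personal upload area under the docroot
mkdir /var/www/html/uploads
chown jason:jason /var/www/html/uploads      # hypothetical user name
chmod 755 /var/www/html/uploads              # apache only needs read access to serve it

# option B: make the docroot writable by a web team group (not root)
groupadd webdev
usermod -aG webdev jason                     # takes effect at the next login
chgrp -R webdev /var/www/html
chmod -R g+w /var/www/html
find /var/www/html -type d -exec chmod g+s {} \;   # new files inherit the group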
[CentOS] Groups
Hi All,

On one of my servers I have a personal account and root. I disable root for ssh logins and run ssh on an alternative port. When scp'ing files I usually scp them up, then ssh in, 'su' to root and move them to /var/www/html.

I realize I can sftp, but what group can I add my personal account to (but not root) so I can sftp in and put the files in /var/www/html directly? Secondly, /var/www/html/ is owned by root:root; can I change this to something else so my sftp'ing is easier? apache:apache as owner?

-Jason
Re: [CentOS] OT: Recommendations for a virtual storage server
> Personally, I don't think that the OP really knows what they want,
> or they want the best of all worlds without compromise.

I can't help but feel this is another classroom-based scenario. I mean, 5GB of RAM on an actual host seems kinda silly to me. There's no real mention of what these hosts will actually do, what load the NFS server will be under, or even the data set being served. Not trying to flame here, but it just doesn't seem serious. And if this is real, then the OP needs to try harder and sell the idea of spending more $$$ to the higher-ups.

- aurf
Re: [CentOS] OT: Recommendations for a virtual storage server
- Original Message - | On 1/29/11 5:05 AM, carlopmart wrote: | >> | >> |> It is very important that the virtual machine consumes the | >> |> least | >> |> resources | >> |> possible (host has 5GB RAM and i need to run three virtual | >> |> machines | >> |> minimum, | >> |> including this storage server as a virtual machine). | >> | | >> | What's the point of adding an extra virtual layer compared to an | >> | nfs | >> | or | >> | iscsi share from the host (nfs if it is shared, iscsi if it is | >> | the VM | >> | image store)? This seems like it would be more efficient if you | >> | run | >> | exsi on the hardware with centos and the others as guests anyway. | >> | | >> | >> There are some advantages that I can see in that if your hardware | >> dies you can migrate the entire host and disks over to another | >> VMWare hosts. | >> | >> If your NFS host is not H/A a loss of the host would take down the | >> virtual machines too. Additionally, virtualization offers the | >> ability to migrate the VM and disk to newer hardware somewhat | >> transparently allowing you to take advantage of the | >> latest/greatest/buggy tech. | >> | >> Just my 2c ;) | >> | > | > Correct. | | But I don't see how any of those things apply here. If the host fails | your vm's | are going to fail in any case, and there's not much magic involved in | exporting | an NFS share even if you need to move it. Iscsi targets are slightly | more | complicated because it's not included in the base Centos install but | you can | find howto's to set it up. When your resources are limited it looks | like a big | waste to add an unnecessary virtual layer to storage. I've done it the | other | way around, though, with NFS exports from the host being mounted by | the guest VM's. | | -- | Les Mikesell | lesmikes...@gmail.com I made no claims that it solved anything. I merely noted why someone might want to virtualize in place of NFS. Personally, I don't think that the OP really knows what they want, or they want the best of all worlds without compromise. I don't see how it is possible to provide what is being asked for. Really I think a minimum of two ideally a third server providing iSCSI or NFS is needed for the solution to work. That third machine should have all of the possible host level redundancy possible to keep it running. If H/A is required at least two machines are required. -- James A. Peltier IT Services - Research Computing Group Simon Fraser University - Burnaby Campus Phone : 778-782-6573 Fax : 778-782-3045 E-Mail : jpelt...@sfu.ca Website : http://www.sfu.ca/itservices http://blogs.sfu.ca/people/jpeltier ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] how to move forward/undo/revert/fix re: a failed CentOS 5.5 to SL 5.5 migration ... [SOLVED?]
On Wed, Jan 26, 2011 at 11:25 AM, Larry Vaden wrote:
> For various reasons which seemingly fail the necessary/sufficient
> tests with the benefit of hindsight, I attempted to migrate a shell
> machine which is the beach front from which I work (not a production
> server) from CentOS 5.5 to Scientific Linux 5.5 yesterday.
>
> Karanbir is quoted on this list as having said something like:
>
> "All that you should need to do is install centos-release, remove
> redhat-release rpms and just yum update the machine, which should
> bring in all packages changed by CentOS ( since they will have a
> slightly higher E-V-R )."
>
> In other words, is there a get out of jail card based on Karanbir's
> stanza which will return the machine to a consistent state without a
> fresh install?

With apologies for replying to my own post, the final solution (possibly regarded as draconian and puerile by others) that seemed to return the machine to a consistent state was to download Oracle R5U6 and install it with 'rpm -ivh', after setting aside the few rpms that refuse to coexist with their replacements (e.g., bind vs. bind97 et al.). My thought is that it should work just as well once the CentOS and Scientific Linux R5U6 releases are out.

It was also necessary to pay special attention to the output of:

updatedb; locate .rpm | egrep ".rpmnew$|.rpmorig$|.rpmsave$"

No data was lost, but I watch some Dennis Miller, and like him at the end of a rant, "I could be wrong about that" :)

kind regards/ldv
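[Editorial note: for anyone repeating this, a sketch of the post-migration sanity checks implied above. The egrep pattern here escapes the dots, which the original relied on matching loosely; release package names vary by distribution.]

# refresh the locate database, then look for configs the migration left behind
updatedb
locate .rpm | egrep "\.rpmnew$|\.rpmorig$|\.rpmsave$"

# cross-check which release packages and vendors are actually installed
rpm -qa | grep -i release
rpm -qa --qf '%{VENDOR}\n' | sort | uniq -c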
Re: [CentOS] RHEL-6 vs. CentOS-5.5 (was: Static assignment of SCSI device names?)
On Sun, Jan 30, 2011 at 2:37 PM, Chuck Munro wrote:
> Hello list members,
>
> My adventure into udev rules has taken an interesting turn. I did
> discover a stupid error in the way I was attempting to assign static
> disk device names on CentOS-5.5, so that's out of the way.
>
> But in the process of exploring, I installed a trial copy of RHEL-6 on
> the new machine to see if anything had changed (since I intend this box
> to run CentOS-6 anyway).
>
> Lots of differences, and it's obvious that RedHat does things a bit
> differently here and there. My focus has been on figuring out how best
> to solve my udev challenge, and I found that tools like 'scsi_id' and
> udev admin/test commands have changed. The udev rules themselves seem
> to be the same.
>
> Regarding networking, all of my 'ifcfg-*' files from CentOS-5.5 worked
> well, including bridging for KVM. Of course, one of the first things I
> did was remove that atrocious NetworkManager package ... it needs to be
> rewritten to make it a *lot* more intuitive. RedHat uses it during
> installation to manually configure the NICs, which I think is a
> mistake. I much prefer the way CentOS anaconda has done it so far, as a
> separate install menu form.

Unfortunately, working out all the dependencies and preventing it from activating is trickier. I suggest putting "NM_CONTROLLED=no" in all your /etc/sysconfig/network-scripts/ifcfg-* files; it just makes the whole thing a lot safer for server-class installations. I don't know if it's more stable or workable in the latest Fedora releases, but I'm still unhappy with it in CentOS and RHEL.

> The performance of the machine seemed to be better with the newer
> kernel, which is encouraging. I suspect we can look forward to a number
> of improvements. I've just managed to scratch the surface. I do expect
> there may be a few challenges for those of us upgrading a system from
> 5.x to 6, where some user-written admin scripts could break depending on
> the system commands they use.

Oh, yes. This is inevitable with OS updates this far apart: just the switch from the default sendmail and syslog to postfix and rsyslog caught me somewhat by surprise.
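[Editorial note: for example, a static server-style ifcfg file with that setting. The addresses are placeholders.]

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- sketch, addresses are examples
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
NM_CONTROLLED=no

# let the classic network service own the interfaces
chkconfig NetworkManager off
chkconfig network on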
Re: [CentOS] How to relocate $HOME directory
Hi, > This is not a CentOs issue or problem. This plain Jane UNIX. $HOME can > be anything you want or need it to be. Copy the user's home directory to > where you want and make the appropriate changes in the passwd file or > automount maps. > Well, yes and no. In case of Debian/Ubuntu, we need to modify apparmor settings (e.g., by changing "etc/apparmor.d/tunables/home" information) to get it right apart from just copying them and changing passwd file. I wondered if CentOS has such an issue when relocating $HOME directories. Cheers, Soo-Hyun ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] How to relocate $HOME directory
This is not a CentOS issue or problem. This is plain Jane UNIX. $HOME can be anything you want or need it to be. Copy the user's home directory to where you want and make the appropriate changes in the passwd file or automount maps.

--
Thanks,
Gene Brandt SCSA
8625 Carriage Road
River Ridge, LA 70123
home 504-737-4295
cell 504-452-3250

On Mon, 2011-01-31 at 06:07 +0900, Soo-Hyun Choi wrote:
> Hi there,
>
> As you know, $HOME is generally located at "/home/$username" by default.
>
> I would like to re-locate all users' $HOME directories to something like
> "/export/home/$username" without having a hassle/trouble.
>
> Initially, I've thought of just copying them to the new directory (under
> /export/home/xxx), but guessed it might trouble for the normal use (I'm
> pretty new to CentOS, although many experiences with Debian/Ubuntu).
>
> Is there any good tricks (or caveats) when moving users' home directory
> cleanly with CentOS? (I'm with CentOS 5.5 x86_64)
>
> Cheers,
> Soo-Hyun
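[Editorial note: in practice that boils down to something like the following per user; the account name is a placeholder, and usermod -m moves the directory contents for you.]

# move the home directory and update /etc/passwd in one step
usermod -d /export/home/alice -m alice

# or, if the data was already copied by hand, just update the passwd entry
usermod -d /export/home/alice alice

# for automounted homes, edit the corresponding map instead (e.g. /etc/auto.home)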
[CentOS] How to relocate $HOME directory
Hi there,

As you know, $HOME is generally located at "/home/$username" by default.

I would like to relocate all users' $HOME directories to something like "/export/home/$username" without any hassle/trouble.

Initially, I thought of just copying them to the new directory (under /export/home/xxx), but guessed it might cause trouble for normal use (I'm pretty new to CentOS, although I have a lot of experience with Debian/Ubuntu).

Are there any good tricks (or caveats) for moving users' home directories cleanly with CentOS? (I'm on CentOS 5.5 x86_64)

Cheers,
Soo-Hyun
Re: [CentOS] RHEL-6 vs. CentOS-5.5 (was: Static assignment of SCSI device names?)
On 1/30/11 1:37 PM, Chuck Munro wrote: > Hello list members, > > My adventure into udev rules has taken an interesting turn. I did > discover a stupid error in the way I was attempting to assign static > disk device names on CentOS-5.5, so that's out of the way. > > But in the process of exploring, I installed a trial copy of RHEL-6 on > the new machine to see if anything had changed (since I intend this box > to run CentOS-6 anyway). > > Lots of differences, and it's obvious that RedHat does things a bit > differently here and there. My focus has been on figuring out how best > to solve my udev challenge, and I found that tools like 'scsi_id' and > udev admin/test commands have changed. The udev rules themselves seem > to be the same. Do any of the names under /dev/disk/* work for your static identifiers? You should be able to use them directly instead of using udev to map them to something else, making it more obvious what you are doing. And are these names the same under RHEL6? -- Les Mikesell lesmikes...@gmail.com ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
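[Editorial note: for example, the persistent names udev already creates can be listed and used directly in fstab or mdadm.conf; the UUID shown is only a placeholder.]

# persistent, hardware-derived names maintained by udev
ls -l /dev/disk/by-id/
ls -l /dev/disk/by-path/
ls -l /dev/disk/by-uuid/

# e.g. an fstab entry that survives device reordering
# UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /data  ext3  defaults  1 2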
Re: [CentOS] RAID support in kernel?
Kai Schaetzl wrote: >> I'm setting up a computer that will run 'CentOS 6 server'. > > Sure about that? This is your first experience with CentOS/RHEL? It'll run zoneminder (to create a dvr for video surveillance). I've been using Cent5.3 (non-server) since it was released, and I used Fedora before that (since FC 5 days). Actually, I might go back to Fedora for my regular machine. I decided on CentOS for the server for a couple reasons: * I'm sticking with RH-style distros (I even got my 80 year old non-computer savvy Mother to be comfortable with it!) * zoneminder supports CentOS (they have a guide to set it up on Cent). Robert wrote: > You are generally *better off* to *disable* the motherboard RAID > controller and use native Linux software RAID. After my research, I'm realizing that linux doesn't quite support it. So, I'll probably do as you suggested. Rudi wrote: > CentOS 6 hasn't been released yet. Well, I meant when it gets released.:) ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] RHEL-6 vs. CentOS-5.5 (was: Static assignment of SCSI device names?)
At Sun, 30 Jan 2011 11:37:19 -0800 CentOS mailing list wrote: > > Hello list members, > > My adventure into udev rules has taken an interesting turn. I did > discover a stupid error in the way I was attempting to assign static > disk device names on CentOS-5.5, so that's out of the way. > > But in the process of exploring, I installed a trial copy of RHEL-6 on > the new machine to see if anything had changed (since I intend this box > to run CentOS-6 anyway). > > Lots of differences, and it's obvious that RedHat does things a bit > differently here and there. My focus has been on figuring out how best > to solve my udev challenge, and I found that tools like 'scsi_id' and > udev admin/test commands have changed. The udev rules themselves seem > to be the same. > > Regarding networking, all of my 'ifcfg-*' files from CentOS-5.5 worked > well, including bridging for KVM. Of course, one of the first things I > did was remove that atrocious NetworkManager package ... it needs to be > rewritten to make it a *lot* more intuitive. RedHat uses it during > installation to manually configure the NICs, which I think is a > mistake. I much prefer the way CentOS anaconda has done it so far, as a > separate install menu form. NetworkManager is fine for something like a laptop with a 'varying' network environment, since it interfaces with the gnome-network-manager applet, which gives one a little GUI thingy for selecting this or that wireless hot spot, etc. For a machine with a *fixed* network connection, NetworkManager just gets in the way and trys to be excessively 'clever'. > > The performance of the machine seemed to be better with the newer > kernel, which is encouraging. I suspect we can look forward to a number > of improvements. I've just managed to scratch the surface. I do expect > there may be a few challenges for those of us upgrading a system from > 5.x to 6, where some user-written admin scripts could break depending on > the system commands they use. > > I look forward to CentOS-6 and all the goodies we can expect, and I'm > quite happy to wait until the CentOS crew does their thing before > releasing it ... they deserve a lot of credit for doing a thorough job > all these years. > > Chuck > > ___ > CentOS mailing list > CentOS@centos.org > http://lists.centos.org/mailman/listinfo/centos > > > -- Robert Heller -- 978-544-6933 / hel...@deepsoft.com Deepwoods Software-- http://www.deepsoft.com/ () ascii ribbon campaign -- against html e-mail /\ www.asciiribbon.org -- against proprietary attachments ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
[CentOS] RHEL-6 vs. CentOS-5.5 (was: Static assignment of SCSI device names?)
Hello list members, My adventure into udev rules has taken an interesting turn. I did discover a stupid error in the way I was attempting to assign static disk device names on CentOS-5.5, so that's out of the way. But in the process of exploring, I installed a trial copy of RHEL-6 on the new machine to see if anything had changed (since I intend this box to run CentOS-6 anyway). Lots of differences, and it's obvious that RedHat does things a bit differently here and there. My focus has been on figuring out how best to solve my udev challenge, and I found that tools like 'scsi_id' and udev admin/test commands have changed. The udev rules themselves seem to be the same. Regarding networking, all of my 'ifcfg-*' files from CentOS-5.5 worked well, including bridging for KVM. Of course, one of the first things I did was remove that atrocious NetworkManager package ... it needs to be rewritten to make it a *lot* more intuitive. RedHat uses it during installation to manually configure the NICs, which I think is a mistake. I much prefer the way CentOS anaconda has done it so far, as a separate install menu form. The performance of the machine seemed to be better with the newer kernel, which is encouraging. I suspect we can look forward to a number of improvements. I've just managed to scratch the surface. I do expect there may be a few challenges for those of us upgrading a system from 5.x to 6, where some user-written admin scripts could break depending on the system commands they use. I look forward to CentOS-6 and all the goodies we can expect, and I'm quite happy to wait until the CentOS crew does their thing before releasing it ... they deserve a lot of credit for doing a thorough job all these years. Chuck ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
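[Editorial note: as an illustration of the kind of change being described, the scsi_id invocation inside a udev rule differs between the two releases. The rule below is only a sketch with made-up serial/WWID strings, and the exact option syntax should be checked against scsi_id(8) on each release.]

# /etc/udev/rules.d/99-disknames.rules -- hypothetical example
# CentOS 5: scsi_id takes a sysfs path
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", RESULT=="SATA_ST31000528AS_9VP12345", SYMLINK+="bay3"

# RHEL 6: scsi_id takes the device node and a --whitelisted flag
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --device=/dev/%k", RESULT=="35000c50012345678", SYMLINK+="bay3"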
Re: [CentOS] RAID support in kernel?
Thanks. I hadn't really looked in any of this for a few years since I used RAID to combine 2 smaller hard drives into one larger volume. At work, I'm either just a user of a remote server that uses netapp filers for storage, or am running more disposable installs on lower end systems (with 1 hard drive) that can be wiped and reinstalled easily. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] RAID support in kernel?
At Sun, 30 Jan 2011 09:01:56 -0600 CentOS mailing list wrote:
>
> On Jan 30, 2011, at 7:36 AM, Robert Heller wrote:
>
> > At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list wrote:
>
> > Many of the SATA (so-called) hardware raid controllers are not really
> > hardware raid controllers, they are 'fakeraid' and requires lots of
> > software RAID logic. You are generally *better off* to *disable* the
> > motherboard RAID controller and use native Linux software RAID.
>
> The only caveat I can think of is if you wanted to BOOT off of the
> raid configuration. The BIOS wouldn't understand the Linux RAID
> implementation.

Not really a problem: make /boot its own RAID 1 set. The BIOS will boot off /dev/sda and Grub will read /dev/sda1 (typically) to load the kernel and init ramdisk. The Linux RAID1 superblock is at the *end* of the disk -- the ext2/3 superblock is in its normal place, where grub will see it. /dev/sda1 and /dev/sdb1 will be kept identical by the Linux RAID logic, so if /dev/sda dies, it can be pulled and /dev/sdb will become /dev/sda. You'll want to replicate the boot loader install on /dev/sdb (e.g. grub-install ... /dev/sdb).

> But for RAID 1, especially, you probably want a minimum of 3 drives.
> A boot drive with Linux, and the other 2 RAIDed together for speed.
> That way, the logic to handle the failure of one of the drives isn't on
> the drive that may have failed.

No, two drives alone will be just fine. Even if one drive fails, you can still boot the RAID set in 'degraded' mode, and then add the replacement disk to the running system. Make two partitions on each drive: a small one for /boot and the rest for everything else; make this second raid set an LVM volume group and carve out swap, root (/), /home, etc. as LVM volumes. That is what I have:

sauron.deepsoft.com% cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1003904 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
      155284224 blocks [2/2] [UU]
unused devices:

sauron.deepsoft.com% df -h /boot/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              965M  171M  746M  19% /boot

sauron.deepsoft.com% sudo /usr/sbin/pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               sauron
  PV Size               148.09 GB / not usable 768.00 KB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              37911
  Free PE               23
  Allocated PE          37888
  PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee

sauron.deepsoft.com% df -h / /usr /var /home
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/sauron-c5root  2.0G  905M  1.1G  47% /
/dev/mapper/sauron-c5usr   9.9G  4.9G  4.5G  53% /usr
/dev/mapper/sauron-c5var   4.0G  1.4G  2.5G  36% /var
/dev/mapper/sauron-home    9.9G  8.7G  759M  93% /home

(I have a pile of other file systems.)

> Of course, if it is the Linux drive that failed, you replace that
> (from backup?) and your data should all still be available.

--
Robert Heller             -- 978-544-6933 / hel...@deepsoft.com
Deepwoods Software        -- http://www.deepsoft.com/
()  ascii ribbon campaign -- against html e-mail
/\  www.asciiribbon.org   -- against proprietary attachments
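[Editorial note: a sketch of how such a two-disk layout is typically built by hand. Device names and sizes are examples, and the CentOS installer can also set all of this up from its partitioning screen.]

# partition both disks identically: a small partition for /boot, the rest for LVM,
# both with partition type "fd" (Linux raid autodetect)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# create the two mirrors
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# put LVM on the big mirror and carve out volumes
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 2G -n root vg0
lvcreate -L 4G -n swap vg0
lvcreate -l 100%FREE -n home vg0

# install the boot loader on both disks so either one can boot alone
# (with grub legacy you may need to do this from the grub shell instead)
grub-install /dev/sda
grub-install /dev/sdb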
Re: [CentOS] RAID support in kernel?
On Sun, Jan 30, 2011 at 5:01 PM, Kevin K wrote: > > On Jan 30, 2011, at 7:36 AM, Robert Heller wrote: > >> At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list >> wrote: > > >> Many of the SATA (so-called) hardware raid controllers are not really >> hardware raid controllers, they are 'fakeraid' and requires lots of >> software RAID logic. You are generally *better off* to *disable* the >> motherboard RAID controller and use native Linux software RAID. > > The only caveat I can think of is if you wanted to BOOT off of the raid > configuration. The BIOS wouldn't understand the Linux RAID implementation. > > But for RAID 1, especially, you probably want a minimum of 3 drives. A boot > drive with Linux, and the other 2 RAIDed together for speed. That way, the > logic to handle the failure of one of the drives isn't on the drive that may > have failed. > > Of course, if it is the Linux drive that failed, you replace that (from > backup?) and your data should all still be available. > > > > ___ You can install Linux on software RAID1 :) -- Kind Regards Rudi Ahlers SoftDux Website: http://www.SoftDux.com Technical Blog: http://Blog.SoftDux.com Office: 087 805 9573 Cell: 082 554 7532 ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] RAID support in kernel?
On Jan 30, 2011, at 7:36 AM, Robert Heller wrote: > At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list > wrote: > Many of the SATA (so-called) hardware raid controllers are not really > hardware raid controllers, they are 'fakeraid' and requires lots of > software RAID logic. You are generally *better off* to *disable* the > motherboard RAID controller and use native Linux software RAID. The only caveat I can think of is if you wanted to BOOT off of the raid configuration. The BIOS wouldn't understand the Linux RAID implementation. But for RAID 1, especially, you probably want a minimum of 3 drives. A boot drive with Linux, and the other 2 RAIDed together for speed. That way, the logic to handle the failure of one of the drives isn't on the drive that may have failed. Of course, if it is the Linux drive that failed, you replace that (from backup?) and your data should all still be available. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] RAID support in kernel?
On Sun, Jan 30, 2011 at 5:33 AM, Michael Klinosky wrote:
> Hello.
> I'm setting up a computer that will run 'CentOS 6 server'. The MB is an
> Asus with a hw raid controller (Promise PDC-20276), which I want to use
> in RAID-1 mode. I noted (from a MB website) that it also needs a driver
> - which is probably why it's called a 'fakeraid'.
>
> So, I've been trying to determine if any recent kernels support this
> chip. Using google.com/linux, I found lots of hits dating about 2002 -
> 04, and referencing the 2.4 kernel (which had the driver compiled into
> it). But, nothing newer.
>
> I checked kernel.org and kernelnewbies.org - I see that raid-1 is
> supported. But, I can't find any reference to this chip.
>
> How can I find out what drivers are compiled into a given kernel? Or,
> basically what hardware a given kernel supports?

CentOS 6 hasn't been released yet. The card that you want to use isn't a real RAID card and uses the PC's CPU for RAID calculations, so you're better off using Linux md software RAID for this purpose.

--
Kind Regards
Rudi Ahlers
SoftDux
Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
Re: [CentOS] RAID support in kernel?
At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list wrote:
>
> Hello.
> I'm setting up a computer that will run 'CentOS 6 server'. The MB is an
> Asus with a hw raid controller (Promise PDC-20276), which I want to use
> in RAID-1 mode. I noted (from a MB website) that it also needs a driver
> - which is probably why it's called a 'fakeraid'.
>
> So, I've been trying to determine if any recent kernels support this
> chip. Using google.com/linux, I found lots of hits dating about 2002 -
> 04, and referencing the 2.4 kernel (which had the driver compiled into
> it). But, nothing newer.
>
> I checked kernel.org and kernelnewbies.org - I see that raid-1 is
> supported. But, I can't find any reference to this chip.
>
> How can I find out what drivers are compiled into a given kernel? Or,
> basically what hardware a given kernel supports?

Many of the SATA (so-called) hardware raid controllers are not really hardware raid controllers; they are 'fakeraid' and require lots of software RAID logic. You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.

--
Robert Heller             -- 978-544-6933 / hel...@deepsoft.com
Deepwoods Software        -- http://www.deepsoft.com/
()  ascii ribbon campaign -- against html e-mail
/\  www.asciiribbon.org   -- against proprietary attachments
Re: [CentOS] py-yaml complaints from yum
It's not a good idea to have all these repos enabled at the same time without protectbase or priorities. There's likely a conflict because of this, and it makes debugging more complex now. Try yum clean metadata, and if that doesn't help, try running with a higher debug level. Maybe an update wants to replace that package, somehow breaking the dependency. Also, remove that i386 version.

Kai
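[Editorial note: in command form, the suggestions above look roughly like this. The plugin names are the CentOS 5 package names, and the exact name of the affected package is taken from the thread subject, so it may differ on the actual system.]

# flush cached repository metadata and retry with more verbosity
yum clean metadata
yum -d 5 check-update

# protect the base repos from third-party replacements (pick one plugin)
yum install yum-priorities      # then set priority=N in each .repo file
yum install yum-protectbase     # then set protect=1 on the repos to protect

# drop the stray 32-bit copy of the package
yum remove PyYAML.i386          # hypothetical exact package name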
Re: [CentOS] RAID support in kernel?
Michael Klinosky wrote on Sat, 29 Jan 2011 22:33:50 -0500:

> I'm setting up a computer that will run 'CentOS 6 server'.

Sure about that? Is this your first experience with CentOS/RHEL?

> But, I can't find any reference to this chip.

I can't tell you either. I think the support for this comes from dmraid; a search for that should reveal more for you. Such drivers are usually not compiled into the kernel at all (perhaps they were in the 2.4 days); there are various methods to use them as kernel modules. In general, people recommend using normal software RAID if you have only a fakeraid controller.

And, a second "in general": if you want to find something out about CentOS, looking at kernel.org or for "recent kernels" won't help too much. I suggest you first read a bit about CentOS on www.centos.org before you make your OS choice. Don't get me wrong, I don't want to discourage you, but you should *know* what you get, you shouldn't assume.

Kai
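[Editorial note: if you want to see what the fakeraid layer makes of the controller before deciding, a quick check looks something like this. dmraid ships with CentOS 5; the output will vary by chipset.]

# list any BIOS/fakeraid sets dmraid can recognise
dmraid -r
dmraid -s

# compare with what plain Linux software RAID sees
cat /proc/mdstat

# identify the controller and the modules currently loaded
lspci | grep -i raid
lsmod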
Re: [CentOS] OT: Recommendations for a virtual storage server
On 01/30/2011 01:35 AM, John R Pierce wrote:
> On 01/29/11 11:42 AM, carlopmart wrote:
>> All OS will be UNix based: linux, BSD or Solaris ...
>
> Solaris is by design quite memory intensive, since on modern servers,
> memory is cheap.
>
> ZFS in particular is designed to use large amounts of memory to optimize
> storage performance.

Correct. At this point, I am thinking of a solution based on Linux or FreeBSD (without ZFS) ...

--
CL Martinez
carlopmart {at} gmail {d0t} com
Re: [CentOS] OT: Recommendations for a virtual storage server
On 01/29/2011 09:32 PM, Les Mikesell wrote:
> On 1/29/11 5:05 AM, carlopmart wrote:
>>> |> It is very important that the virtual machine consumes the least
>>> |> resources possible (host has 5GB RAM and i need to run three virtual
>>> |> machines minimum, including this storage server as a virtual machine).
>>> |
>>> | What's the point of adding an extra virtual layer compared to an nfs or
>>> | iscsi share from the host (nfs if it is shared, iscsi if it is the VM
>>> | image store)? This seems like it would be more efficient if you run
>>> | exsi on the hardware with centos and the others as guests anyway.
>>>
>>> There are some advantages that I can see in that if your hardware dies you
>>> can migrate the entire host and disks over to another VMWare hosts.
>>>
>>> If your NFS host is not H/A a loss of the host would take down the virtual
>>> machines too. Additionally, virtualization offers the ability to migrate
>>> the VM and disk to newer hardware somewhat transparently allowing you to
>>> take advantage of the latest/greatest/buggy tech.
>>>
>>> Just my 2c ;)
>>
>> Correct.
>
> But I don't see how any of those things apply here. If the host fails your
> vm's are going to fail in any case, and there's not much magic involved in
> exporting an NFS share even if you need to move it. Iscsi targets are
> slightly more complicated because it's not included in the base Centos
> install

Sorry Les, the iSCSI target is included in the CentOS 5 base repository (package scsi-target-utils).

> but you can find howto's to set it up. When your resources are limited it
> looks like a big waste to add an unnecessary virtual layer to storage. I've
> done it the other way around, though, with NFS exports from the host being
> mounted by the guest VM's.

This is the first step. The next step is to build a physical HA infrastructure with the hypervisors.

--
CL Martinez
carlopmart {at} gmail {d0t} com
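[Editorial note: for completeness, a minimal sketch of exporting a volume with scsi-target-utils. The target name and backing device are made up, and tgtadm changes made this way do not persist across a restart of tgtd.]

yum install scsi-target-utils
service tgtd start
chkconfig tgtd on

# define a target and attach a backing store (the LVM volume here is hypothetical)
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2011-01.com.example:storage.vm1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/vg0/iscsi_vm1

# allow initiators to connect (open to all here; restrict this in production)
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL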