Re: [CentOS] Linux malware attack
On 3/19/2014 2:50 PM, Ned Slider wrote: Just to add, I'm sure everyone has already read and implemented many of the suggestions here: http://wiki.centos.org/HowTos/Network/SecuringSSH Numbers 2 and 7 have already been highlighted in this thread.

#1 These days I would say that 8 characters minimum length is too few, even if they are completely random (and most won't be). If you're not willing to type gibberish, then a more reasonable minimum length is 12-14, especially for your root password (or other administration accounts). If you have your users creating 15+ character passwords, don't make them change them every 30/60/90 days. Password aging hurts more than it helps as passwords grow longer; users are more likely to adopt poor behavior like simply adding or incrementing numbers from month to month. Longer durations, like 3-5 years, give users time to memorize the password rather than just keeping it on a sticky note on the desk.

#2 (disable root login) is a must for any public-facing box, and a strong recommendation for all other boxes. It's the top target of attack, so why allow it to be attacked at all?

#5 (non-standard port) is very useful. Not for protecting yourself against attack, but for keeping your log files from filling up with all of the automated attack scripts, which makes it easier to spot the more serious attackers who have taken the time and effort to find your SSH port.

#7 (public-key pairs) is also a must for any public-facing box. It defeats all attempts to brute-force account passwords remotely. Now you just have to worry that someone will steal your private key files. But if someone has gotten far enough inside to steal your private key file, then you have bigger security problems to worry about.

___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
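For reference, points 2, 5 and 7 map onto a handful of sshd_config directives. This is only an illustrative fragment (the port number is arbitrary), and you should keep an existing session open while testing so you don't lock yourself out:

```
# /etc/ssh/sshd_config (fragment) -- example values only
Port 2022                    # non-standard port (#5); pick your own
PermitRootLogin no           # no direct root logins (#2)
PasswordAuthentication no    # public-key auth only (#7)
PubkeyAuthentication yes
```

Then restart sshd (service sshd restart on CentOS 5/6) and verify you can still log in before closing your current session.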
Re: [CentOS] Apache warns Web server admins of DoS attack tool
On 8/28/2011 12:37 PM, Les Mikesell wrote: On Sun, Aug 28, 2011 at 10:20 AM, Keith Roberts ke...@karsites.net wrote: The CentOS forums are a very good resource for many people, and the people spending time managing and posting there are doing a very good job. I'm guessing you were unable to get value from the forums since your expectations and the forums' deliverables don't match. That's fine, but it does not imply that the entire forums are 'useless'. I'd say the forums have a lot of useful info, but the main CentOS activity is centered, IMHO, on this list - I cannot speak for the other CentOS lists, as I'm only on this one for now.

The problem with forums is that if you have more than a couple of interests you kill the whole day bouncing around in a web browser, logging into them and figuring out their user interface differences. Could the RSS feed be made a little more obvious? It might work to plug it into Google Reader or another feed consolidator. Someday, perhaps, we'll end up back on an authenticated version of NNTP, with support for bbcode, images, and the front-end reader of your choice... Maybe a merger of some sort between forums, email discussion threads and NNTP. There are things that the web forum does well, things that NNTP does well and things that mailing lists do well.
Re: [CentOS] Apache warns Web server admins of DoS attack tool
On 8/25/2011 7:05 PM, Always Learning wrote: On Thu, 2011-08-25 at 14:36 -0700, John R Pierce wrote: On 08/25/11 1:45 PM, Always Learning wrote: I have broken up the very large conf file (/etc/httpd/conf/httpd.conf) into 3 main parts. Part 1 is left in situ. Parts 2 and 3 are located elsewhere. the existing EL httpd.conf includes /etc/httpd/conf.d/*.conf and any changes are expected to be made there rather than editing the stock file.

Hi John, No CentOS updates are likely to interfere with my Apache server options and virtual hosts. The existing /etc/httpd/conf/httpd.conf is large and laborious to read and fully understand, especially with so many useful comments. 'Including' the parts that do change and are not operating system dependent, meaning putting them somewhere which has no connection to the operating system, for example

/data/config/apache/server.conf
/data/config/apache/domain.*

means, I believe, that if a change to one small file goes wrong then there is absolutely no danger of 'damaging' any of the other files, and the source of the problem is quick and easy to identify. Thus 'change damage' is strictly limited to one small self-contained file and cannot affect any of the other files. I have too much experience of so-called collateral damage inadvertently caused to other parts of a file being changed. It costs time and money to trace and diagnose problems, so economically it is a good idea to eliminate non-involved configuration parameters as much as possible. As you will have noticed, Apache actually offers the ability to split configuration parameters out to other files by supplying - for the benefit of people like me - the 'Include' facility. If Apache never wanted folks to use this useful facility, it would never have offered it. Anyone who has ever worked on the nightmare called Windoze will know that one tiny fault in the Registry can cause the entire operating system to malfunction.
Spreading the risk with Apache configuration files is my chosen method to minimise potential disruption, and it works very successfully for me on CentOS 5.3, 5.4, 5.5, 5.6 and hopefully on 5.7 and 6.1 et al.

Which is why all of my server's config files are version controlled (I use FSVS with an SVN back-end repository, but there are dozens of tools). Being able to diff your config files when you mangle them to the breaking point is a wonderful thing.
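As a sketch, the split described above comes down to a couple of Include lines in the stock file (the /data/config paths are the poster's own; adjust to taste, and validate before reloading):

```
# /etc/httpd/conf/httpd.conf (fragment)
Include /data/config/apache/server.conf
Include /data/config/apache/domain.*

# after any change, validate before reloading:
#   apachectl configtest && service httpd reload
```

Running configtest first means a syntax error in one small included file is caught before it can take the whole server down.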
Re: [CentOS] Adding the [SOLVED] Tag to break threads -- multiple factors
On 7/28/2011 5:01 PM, Spiro Harvey wrote: the thing is that not all mail clients will set the in-reply-to headers, which is why clients like Thunderbird, Evolution and mutt will use the subject line as well to thread messages. Apple Mail does that too, and it makes the threading unusable IMO. If the clients are too dumb to adhere to a convention, I don't believe it's our job to baby them.

Personally, I like the idea of the [SOLVED] tags because they can indicate when help is no longer needed. However, I also like the way the Sun Managers list does it (did? it's been many years since I used it): post a question, work it out, then post a new SOLVED thread outlining the solution. While that would probably be a bit too formal for this list, it was a fantastic way of learning things, and having the solved thread made searching through archives way easier. Find a problem related to yours, then look for the SOLVED post. If you needed more detail, you went back to the main thread and read all the posts to see how they came to that conclusion.

Heck, I'd settle for people coming back to a problem/issue thread and updating on what the actual problem was or what they did to get the thing to work properly. So often you'll see a thread talking about trying X, Y and Z, and then the person having the problem never responds back as to whether X, Y or Z worked. Which is especially troublesome a year or two later when you're digging through threads in GMane trying to find a solution to a particular issue. (Pet peeve of the day -- dead-end threads on mailing lists.)
Re: [CentOS] yum segfault - rpmforge problem?
On 7/26/2011 12:12 PM, Karanbir Singh wrote: On 07/26/2011 04:59 PM, Leonard den Ottolander wrote: Seems an issue with yum too, seeing that it segfaults over bad data. This has been reported upstream: https://bugzilla.redhat.com/show_bug.cgi?id=725798 I don't really see that as a yum issue, the problem is bad metadata in rpmforge. - KB

While the problem is bad metadata, should yum really be let off the hook for segfaulting when it gets bad data? Shouldn't yum catch that error with checks, then exit gracefully with an error code? (If there are already sanity checks, adding one more to catch a null pointer being passed would be a natural extension of those checks.)
Re: [CentOS] Power-outage
On 7/1/2011 10:59 AM, Robert Heller wrote: APC UPSes are supported by apcupsd. Other brands, not so much. Some (read: cheaper models) have their own special protocol and don't include Linux support. These solutions are intended for the cheaper or otherwise 'unsupported' UPSes. It *sounds* like the OP does not need something smart and is probably looking for something cheap.

And the APC Smart-UPS 750 units are not all that expensive either. Even the 1500VA units are a lot less expensive than they were 5-10 years ago. $250-$300 to protect $2000-$6000 worth of hardware is worth it in my book. (I prefer the Smart-UPS units for a variety of reasons: line filtering, voltage regulation, and nice reporting features via apcupsd. We have MRTG polling the apcupsd data regularly and have graphs of line voltage / operating temperature. There are even variants with the audible alarm disabled, which is perfect for a home office where you don't need that high-powered screech.)
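As a sketch of the sort of numbers MRTG would poll, the snippet below pulls line voltage and internal temperature out of apcaccess-style output. The sample text is embedded so the snippet is self-contained; on a real box you would parse `apcaccess status` instead, and the exact field names depend on your UPS model:

```shell
#!/bin/sh
# Parse LINEV (line voltage) and ITEMP (internal temperature) from
# apcaccess-style "KEY : value units" output. Sample embedded for illustration.
sample='LINEV    : 230.0 Volts
ITEMP    : 29.1 C
STATUS   : ONLINE'

linev=$(printf '%s\n' "$sample" | awk -F':' '/^LINEV/ {print int($2)}')
itemp=$(printf '%s\n' "$sample" | awk -F':' '/^ITEMP/ {print int($2)}')
echo "$linev $itemp"
```

An MRTG target would simply run two such extractions and print the values on separate lines.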
Re: [CentOS] unofficial ext3 and ext4 compare
On 6/27/2011 8:10 AM, Jerry Geis wrote: I have something like 300G I routinely back up. This includes some large 12GB images and other files. I had been using ext3 on an external USB disk for part of the process. Under ext3, doing rsync -a /home /mnt/external_back/backup.jun.27.2011 took 200 minutes. I took the same computer and same external HD and reformatted it for ext4 (mkfs.ext4 /dev/sdd1). I then started the same rsync as above. The time was reduced to 170 minutes.

A time reduction for using ext4 in that scenario does not surprise me. Under ext3, deleting large multi-gigabyte files requires a lot of activity as it tracks down and marks the blocks as free. In ext4, this process is a lot faster due to the use of extents to track which blocks are used by large files. Just the faster deletion times with ext4 might account for the time difference. Under ext3, deleting a 10GB file might take a minute or two, but it will only take a few seconds under ext4. (It's the primary reason that I started using ext4 last year for any shares / file systems where I needed to store large files.)
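For a rough sense of scale, the quoted wall-clock times work out to the following average throughputs (treating 300G as 300*1024 MB, and ignoring that rsync skips unchanged files):

```shell
#!/bin/sh
# Average throughput for 300 GB in 200 minutes (ext3) vs 170 minutes (ext4)
awk 'BEGIN { printf "%.1f MB/s vs %.1f MB/s\n",
             300*1024/(200*60), 300*1024/(170*60) }'
```

That is about 25.6 MB/s versus 30.1 MB/s, a roughly 18% improvement for the same data on the same hardware.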
Re: [CentOS] ext4 in CentOS 5.6?
On 6/23/2011 12:16 PM, PJ wrote: I'm sure many are running ext4 FSs in production, but I just want to be reassured that there are not currently any major issues before starting a new project that looks like it will be using ext4. I've previously been using XFS, but the software for this project requires ext3/ext4. I'm always very cautious before jumping onto a new FS (new in the sense that it is officially supported now).

Works fine here. I think you would have been jumping the gun if you were asking this in 2009, but by 2011 it's well understood and the tools are fine. It's been around long enough. I use it anywhere that I have multi-gigabyte files that need to be handled with speed (deleting large files on ext3 is an exercise in patience) or where I have lots and lots of little files (which ext3 sometimes had trouble with).
Re: [CentOS] Possible to use multiple disk to bypass I/O wait?
On 6/9/2011 1:09 PM, Emmanuel Noobadmin wrote: On 6/10/11, Markus Falb markus.f...@fasel.at wrote: Yes, but before doing this be sure that your software does not need atime. For a brief moment, I had that sinking "Oh no... why didn't I see this earlier" feeling, especially since I've already remounted the filesystem with noatime. Fortunately, so far it seems that everything's still alive and working, keeping fingers crossed :D

The last access time is generally not needed, especially for Maildirs. On our setup, Postfix and Dovecot don't care. I always mount as many file systems as possible with 'noatime'. (Our IMAP Maildir storage is a 4-disk RAID 1+0 array with a few million individual messages across a lot of accounts.)
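In /etc/fstab this just means adding noatime to the options column; the device and mount point below are made up for illustration:

```
# /etc/fstab (fragment) -- example device and mount point
/dev/md0   /var/vmail   ext3   defaults,noatime   1 2

# apply to a live system without a reboot:
#   mount -o remount,noatime /var/vmail
```

The remount takes effect immediately, so you can test the workload before committing the change to fstab.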
Re: [CentOS] Possible to use multiple disk to bypass I/O wait?
On 6/9/2011 1:26 PM, John R Pierce wrote: On 06/09/11 2:24 AM, Emmanuel Noobadmin wrote: Alternatively, if I mdraid mirror the existing disk, would md be smart enough to read using the other disk while the first's tied up with the first process? that would be my first choice, and yes, queued read IO could be satisfied by either mirror, hence they'd have double the read performance. next step would be a raid 1+0 with yet more disks.

mdadm is good, but you'll never get double the read performance. Even on our 3-way mirrors (RAID 1, 3 active disks), we don't come close to twice the read performance. RAID 1+0 with 4/6/8 spindles is the best way to ensure that you get better performance. Adding RAM to the server so that you have a larger read buffer might also be an option.
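For the record, a 4-spindle RAID 1+0 array of the kind suggested is a one-liner with mdadm. The device names below are only examples, and the command is destructive, so triple-check them first:

```
# create the array (example devices -- destructive!)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# watch the initial sync
cat /proc/mdstat
```

md's raid10 level handles the striping-over-mirrors layout itself, so you don't need to build nested md devices by hand.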
Re: [CentOS] High system load but low cpu usage
On 6/9/2011 1:02 PM, Emmanuel Noobadmin wrote: On 6/9/11, Steven Tardy s...@its.msstate.edu wrote: top's Cpu(s) line is averaged over all cpus/cores. To display individual cpus/cores, press 1; you'll likely see one cpu/core being pegged with iowait. To identify the offending process within top, press f, then j, then Enter to display the P column (last used CPU). Watch top for a few minutes to see what is using all of the disk IO. Thanks for these tips, it really helped narrow down the issue. It became quite clear that cpu 0 was taking up most of the user and sys time, somewhere around 10x compared to the other 3.

Also consider installing atop, which I find to be a bit more self-explanatory than regular top.
Re: [CentOS] hard disk install failure
On 6/7/2011 11:22 AM, m.r...@5-cent.us wrote: Timothy Murphy wrote: Right, I just looked it up, and I see it's an ADSL modem. Look at your IP address, and I'll bet you're 192.168.0.x, or 192.168.1.x. Whatever it is, try pinging 192.168.[0 or 1].1. Whichever it is, pull up your browser, point it to that IP, and you should be at the modem's web interface, and you can go from there.

Or, assuming that it hands out a DHCP address with a default gateway (and the modem/NAT unit is acting as the default gateway):

a) Look for the default route (the line starting with 0.0.0.0 for IPv4):

# route -n
0.0.0.0    192.168.0.1    0.0.0.0    UG

b) Look at the dhclient.leases file. This can be hit or miss, depending on whether you can find the proper section. Other distros put it in a slightly different location.

/var/lib/dhclient/dhclient.leases:

lease {
  interface eth1;
  fixed-address 192.168.1.186;
  option subnet-mask 255.255.255.0;
  option routers 192.168.1.1;
  option dhcp-lease-time 3600;
  option dhcp-message-type 5;
  option domain-name-servers 192.168.1.1;
  option dhcp-server-identifier 192.168.1.1;
  option domain-name lan.example.org;
  renew 3 2009/4/8 11:57:39;
  rebind 3 2009/4/8 12:21:03;
  expire 3 2009/4/8 12:28:33;
}

c) Or the ip command:

$ ip route list
default via 192.168.1.1 dev eth0 proto static

(Guessing about IP addresses gets harder in a few years once IPv6 finally goes mainstream.)
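The default-route lookup can also be scripted; here the `ip route` output is embedded as a sample so the snippet is self-contained, but on a live box you would pipe `ip route list` in instead:

```shell
#!/bin/sh
# Pull the default gateway out of `ip route list`-style output.
sample='default via 192.168.1.1 dev eth0 proto static'
gw=$(printf '%s\n' "$sample" | awk '/^default/ {print $3}')
echo "$gw"
```

The third field of the `default via ...` line is the gateway address, which is almost always the modem/router's web interface address in a home setup.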
Re: [CentOS] hard disk install failure
On 6/7/2011 1:04 PM, Timothy Murphy wrote: No. I was rung one day by a lady at my ISP (Eircom), who told me I had been chosen as a recipient of their new Ultimate system, which would increase my speed from 5Mb/s to 14Mb/s, at no extra cost!

Could be newer DSL technology, or could simply be some sort of caching and compression system that you have to use Windows client software to take advantage of. But it sounds like new DSL technology according to the press release and sales site. http://pressroom.eircom.net/press_releases/article/eircom_launches_up_to_24mb_next_generation_broadband/

Digging through their FAQ site:

Do I need to change any equipment or settings on my computer? No, you don't need to change anything. Your modem will pick up the new speed automatically. A very limited number of customers may be required to 'turn off' and 'turn on' their modem for the upgrade to take effect.

Maybe the Windows CD needs to install new firmware.

How will I know that my line/broadband was upgraded? If you are an eligible customer eircom will send a letter to you in advance notifying you that you will be migrated to the Next Generation Broadband product in the near future. Alternatively, if you visit www.eircom.net/ngb and go to the "How Can I Get it?" section, then enter your eircom telephone number and account number, you will be informed if your broadband has been or is scheduled to be upgraded to a Next Generation Broadband product.

And maybe double-check that the line was actually upgraded?
Re: [CentOS] Good book on Linux Admin (Centos 5.5)
On 6/2/2011 4:18 PM, Les Mikesell wrote: The things I always look for and almost never find are (a) a split between tutorial (step-by-step for common uses) and reference sections (that have all the options). Once you've followed the tutorial, you won't want to wade through it again to find the option to make an obscure change.

For pure reference, I've always liked my Linux in a Nutshell book (O'Reilly), which has a huge section with all of the commands and options. It even has sections on vi and emacs. Google and man pages take care of the rest. (Also, since CentOS is so similar to RHEL, anything taught in a RHEL book tends to carry over.)
Re: [CentOS] Grep: show me this line and the next N lines?
On 5/31/2011 3:43 AM, Dotan Cohen wrote: On Tue, May 31, 2011 at 01:26, John R. Dennison j...@gerdesas.com wrote: On Tue, May 31, 2011 at 01:10:40AM +0300, Dotan Cohen wrote: Thanks, all. I did actually look at the grep manpage, but after a few screenfuls it became tl;dr and I started just skimming. I suppose that I skimmed too fast! Um, it's the first option described. I see now that the server's grep manpage (CentOS) does in fact put it right there at the top. I usually pull up manpages on localhost, not on what I'm SSHing into, and on this Debian-derived distro it is buried halfway down the third page of nine. That is interesting, and I'm sure that there is a lesson to be learned from that!

One help might be to use the slash key to search the man page:

/lines[enter]

Then use 'n' or 'N' to search forward/backward.
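For anyone landing here from a search, the option under discussion is -A (after-context); -B and -C give context before and around the match:

```shell
#!/bin/sh
# Print the matching line plus the next 2 lines.
printf 'one\ntwo\nthree\nfour\n' | grep -A 2 two
```

This prints "two", "three" and "four", i.e. the match and the two lines that follow it.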
Re: [CentOS] SSD for Centos SWAP /tmp /var/ partition
On 5/26/2011 8:04 AM, Ljubomir Ljubojevic wrote: John Hodrien wrote: On Thu, 26 May 2011, Emmanuel Noobadmin wrote: Personally, what makes me averse to using SSD for any important long-term data is the nightmare that I could one day wake up to find everything gone without any means of recovery. Compare that to a hard disk, where, barring catastrophic physical damage, I could pay somebody to just read the data off the platter. As performance-boosting intermediary storage, yes; long term... maybe not quite yet. That's what backups are for. Unless you are away on an important business trip and you lose your system just minutes before the meeting. Yes, it can happen to a regular HDD, but the probability is much lower for now.

In a situation like a business trip, where the machine absolutely has to boot in order to do the sales presentation or demo, a secondary traditional HD is a smart move. Mirror the system image onto the external drive just prior to the trip. If the internal drive dies, swap drives and carry on. It's a $50-$100 investment versus not having a bootable drive at all. If it's so important to you that a drive failure would kill the trip, then you should be doing this even now with traditional drives. All the user data should be backed up either to an external device or a server somewhere (including the data files required to do the presentation or configure one-of-a-kind software). Which means that even if the backup drive is a few days out of date, you should be able to drop it in and synchronize the user data back up with the external source within a few minutes.

I'd also still stick with the bigger names in SSDs right now. Intel for sure, then maybe consider the lesser players. The oldest SSD we have in use was bought back in '09 and that unit has shown zero issues.
Re: [CentOS] SSD for Centos SWAP /tmp /var/ partition
On 5/23/2011 7:03 AM, Timothy Murphy wrote: yonatan pingle wrote: On Sun, May 22, 2011 at 2:06 PM, Keith Roberts anyways - if it's for home usage don't think twice, get an SSD. Why? I've read most of the articles in this thread, and I haven't seen anything that persuades me SSD would be a good investment in my case, either in servers or laptops.

*whistles* If you have not tried out an SSD laptop or desktop then you're in for a big surprise, especially if you multi-task at all or work with a few thousand small files. It can make even a 10k RPM SATA drive seem slow when you try to do multiple things at once. Boot the machine up and start doing work while things are still loading, which is a situation that would bury a 7200 or 5400 RPM drive in seeks. After having a 10k RPM SATA drive on the desktop for a few years, 7200 RPM drives seem slow and 5400 RPM drives seem glacial. The SSD in the laptop can make the 10k RPM SATA drive seem slow in comparison. It's the difference between 200-300 seeks/second for a mechanical drive and a few thousand seeks per second.

The main downside right now is cost and how big a disk you can afford. SSDs are wonderful, but still in the $1.50-$2.00/GB range. Better than it was, but I was disappointed with Intel's 25nm pricing.
Re: [CentOS] Web based file versioning frontend
On 5/20/2011 4:00 PM, Les Mikesell wrote: On 5/20/11 1:16 PM, Joseph L. Casale wrote: Git and Gitweb? Thought of that; is there anything that can monitor for changes so I can avoid a commit command for every script? They all dump to an already well-organized tree, and I was hoping to monitor the top-level dir for changes and have it commit as they appear. Does something like that exist? It seems like you are approaching this backwards - whatever originates the changes should commit, and perhaps replace the rsyncs with updates at the other location(s). But if you use Subversion, it is smart enough to only commit actual differences, so it wouldn't hurt to just schedule a fairly frequent commit at the top level. If nothing changed, the commit has no effect. The downside is that Subversion wants a complete hidden copy under .svn in every subdirectory so the client can detect changes without contacting the repository. ViewVC is a good web server companion for Subversion to easily browse revisions and do color-coded diffs.

FSVS gets rid of the .svn issue and still stores the files in an SVN repository. Run it once to see if it detects any changes, then run it again to actually do the automated commit. That lets you schedule it to run every 10-20 minutes, but it won't create a bunch of empty "nothing changed" commits. (I make heavy use of FSVS to keep track of config file changes and other configuration changes made to the server. Helps when trying to figure out what has changed on the server.)
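A cron entry along these lines would implement the every-10-20-minutes idea. The path and schedule are made up for illustration, and the fsvs invocation is assumed to follow its svn-like syntax, so check it against the fsvs manual for your version:

```
# crontab fragment (illustrative path and schedule)
*/15 * * * *  cd /srv/scripts && fsvs commit -m "auto-commit" >/dev/null 2>&1
```

Since fsvs only commits actual changes, most runs are no-ops and the repository history stays clean.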
Re: [CentOS] [OT] 8-15 TB storage: any recommendations?
On 1/7/2010 12:28 PM, Joseph L. Casale wrote: I also heard that disks above 1TB might have reliability issues. Maybe it changed since then... I remember rumors about the early 2TB Seagates. Personally, I won't RAID SATA drives over 500GB unless they're enterprise-level ones with the limits on how long the drive waits before reporting a problem back to the host when it has a read error. Which should also take care of the reliability issue to a large degree. An often overlooked issue is the rebuild time with Linux software RAID and all hardware RAID controllers I have seen. On large drives the times are so long, as a result of the sheer size, that if the array is degraded you are exposed during the rebuild. ZFS's resilver addresses this about as well as you can by only copying actual data. With this in mind, it's wise to consider how you develop the redundancy into the solution...

Yeah, RAID-5 is a bad idea these days with the large drive sizes; RAID-6 or RAID-10 is a far better choice. I prefer RAID-10 because the rebuild time is based on the size of a drive pair, not the entire array.
Re: [CentOS] [OT] 8-15 TB storage: any recommendations?
On 1/7/2010 10:54 AM, John Doe wrote: From: Karanbir Singh mail-li...@karan.org On 01/07/2010 02:30 PM, Boris Epstein wrote: KB, thanks. When you say don't go over 1 TiB in storage per spindle, what are you referring to as spindle? A disk. It boils down to how much data you want to put under one read/write stream. The other thing is that these days 1.5TB disks are the best bang-for-the-buck in terms of storage/cost. So maybe that's something to consider: limit disk usage initially and expand later as you need. Even better if your HBA can support that; if not, then mdadm (have lots of CPU, right?), and make sure you understand recarving / reshaping before you do the final design. Refactoring filers with large quantities of data is no fun if you can't reshape and grow.

I also heard that disks above 1TB might have reliability issues. Maybe it changed since then... I remember rumors about the early 2TB Seagates. Personally, I won't RAID SATA drives over 500GB unless they're enterprise-level ones with the limits on how long the drive waits before reporting a problem back to the host when it has a read error. Which should also take care of the reliability issue to a large degree.
Re: [CentOS] unattended fsck on reboot
On 1/6/2010 2:36 PM, Alan McKay wrote: On Wed, Jan 6, 2010 at 1:05 PM, Brian Mathis brian.mat...@gmail.com wrote: No out-of-band management? My thoughts exactly. All servers should have this these days, be it an integrated card or an IP-based KVM. Oh believe me, I want to get there. It's high on my list this year... I'm still relatively new here.

At least they're nowhere near as expensive as they used to be. On my list as well for the new year, along with a weather duck.
Re: [CentOS] PAM configuration?
On 1/4/2010 12:42 PM, Roland Roland wrote: Also, is there a way I could enable the PAM module which uses the crack library to check the strength of a user's password? any help with this is truly appreciated...

/etc/pam.d/system-auth-ac

The default is:

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

A stronger version is:

# See: http://www.deer-run.com/~hal/sysadmin/pam_cracklib.html
password    requisite     pam_cracklib.so try_first_pass retry=3 minlen=20 difok=5
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=36
password    required      pam_deny.so

(Note that each line that starts with password should be all on one line. It'll make sense when you view the system-auth-ac file. There are 3 password lines by default, and all you're going to do is add options to the first two lines.)

The key changes that I made to my setup are:

- Added minlen=20. Note that cracklib gives a bonus point if you use a number, symbol or different-case character, so the minimum length is as short as 16 characters (4 less than what you set minlen to). But that minimum length is only achievable if you use both upper and lower case letters along with at least one symbol and at least one number. Minimum length should never be below 10 or 12 (in my opinion).

- The 'try_first_pass' option tells the module to first try the password the user already typed for an earlier module in the stack, rather than prompting again.

- Added difok=5, which says that the new password (when the user changes it) has to be at least 5 letters different from the old password. In general, you'll want this to be about 1/3 to 1/4 of the minlen value.

- Added remember=36, which tells pam_unix to remember the last 36 passwords. So if a user wants to change their password, it can't be any of the past 36 passwords. Which is probably overkill to the Nth degree.

...
(Minor ramble about why I say minlen should be set to at least 10-12.)

My current estimate is that a $1500 PC can brute-force about 2-4 billion MD5 password hashes per second now (using NVIDIA CUDA in a 4-way SLI setup). That's an offline attack where the attacker has a copy of your password hash. A completely random 8-character password can be found in about half a day. Add 2 more characters and it'll take more like 2 years.

Things go a lot slower if they're simply doing a dictionary attack on the SSH port (where they don't have the MD5 hash). Those attacks typically go after low-hanging fruit using common usernames + common passwords. Plus you can throttle the SSH logins or do other things (at the risk of locking yourself out). Keep in mind that modern SSH dictionary attacks ping your machine about once every few minutes from thousands of different IP addresses. Locking out an IP address after 3 incorrect attempts only works against attackers that aren't using a slow-attack botnet.

So in situations where you can throttle the attacker, 8 random characters is probably enough. But if you want something a little easier to type, you'd best go for 10-12 characters and assume that the attacker has the hash.
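As a rough check on those figures, here is the keyspace-over-rate arithmetic, assuming a 62-character set (upper, lower, digits) and the high end of the quoted 2-4 billion hashes/sec range. A larger symbol set or slower rig moves these considerably, which is presumably where the rounder estimates above come from:

```shell
#!/bin/sh
# Keyspace divided by hash rate, for 8- and 10-character random passwords.
awk 'BEGIN {
    rate = 4e9                                   # hashes/sec (assumed)
    printf "8 chars: %.0f hours\n", 62^8 / rate / 3600
    printf "10 chars: %.1f years\n", 62^10 / rate / 3600 / 24 / 365
}'
```

Two extra characters multiply the keyspace by 62^2, which is why the estimate jumps from hours to years.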
Re: [CentOS] PAM configuration?
On 1/5/2010 7:31 AM, Kai Schaetzl wrote: For what do you need the hash? You don't supply the hash for logging in. In the case of SSH login, you are correct that the hash is not used to log in. But the attacker may find a way to read the hash out of the /etc/shadow file, or the same password is used in other places and also stored with an MD5 hash. A lot of things would have to go wrong for a remote attacker to get access to /etc/shadow - but it's been known to happen. (Personally, I always move the SSH port to something other than 22, and we only allow authentication via public keys over the external port.)
Re: [CentOS] IPTABLEs and port scanning
On 1/5/2010 11:49 AM, Benjamin Franz wrote: If your brute force protection is not catching the repeated login failures, you should check its configuration. Or give up and move SSH to a non-standard port, at least from the outside. (I got tired a few years ago of watching my log files fill up with attack attempts.)
Re: [CentOS] Centos 5.4 and TYAN s4985 motherboard....
On 1/4/2010 10:09 AM, Tom Bishop wrote: Was wondering if anyone has had any luck with the Tyan S4985 motherboard. I had loaded up the latest 5.4 release and installed most everything that I wanted, but when I loaded it up it would lock up, processor-wise. Funny thing: I could still SSH to it, but not do a su - or anything from the console. I think it is kernel related; I tried the previous kernel but it still exhibited the same symptom. The only thing I saw in the log was a message about a soft lockup on cpu0. There is an open bug for that, but from the description it doesn't appear to cause any issues. I am running 4 Opteron 8356s, just for clarification.

Looks like 2.07 is the latest BIOS: http://www.tyan.com/support_download_bios.aspx?model=S.S4985 According to the OS support matrix: http://www.tyan.com/tech/OS_Support_AMDRedHat5.aspx not all embedded components have drivers (2c = SATA RAID), which should only indicate that the fake-RAID doesn't work in RHEL 5 64-bit. Try booting other live CDs? Including some from other 64-bit distros that might have a more recent kernel?
Re: [CentOS] Centos 5.4 and TYAN s4985 motherboard....
On 1/5/2010 1:52 PM, Tom Bishop wrote: Yeah, I know Fedora 12 works well, along with Ubuntu 8.04; that's what I am running now with a later kernel. I just like to run CentOS on my production servers... neither of the latter exhibits the same condition that I saw with CentOS 5.4. I have updated the BIOS to the latest 2.07, but had already loaded Ubuntu 8.04 and am running that right now :( since I needed to get something working. May wait till RHEL 6 comes along and switch when that sees the light of day, although I'm not sure when that might be... lol Or RH might backport a fix into the 2.6.18 kernel. I'm in the process of pricing out a new system for next year, but I'll be going with a dual-CPU board. Either the Tyan S8212 line or the S2937 (Thunder n3600T) line. Or maybe a SuperMicro H8DA6+-F board. I'll have to see whether those boards use the same chipset. (I have a 3ware 16-port SATA card that is currently the bane of my 2.6.18 kernel existence. Horrid performance under the default 2.6.18 kernel, and I haven't gotten around to building a custom kernel with the 3ware drivers.) As for when RHEL 6 will ship... well, if it's based off of FC12, which came out back in mid-Nov, then we'll probably see RHEL 6 in the 1st half of 2010 and it'll be based on 2.6.31. But if they decide to base it off of FC13 (which isn't scheduled until May 2010), then I'd guess late 2010. (RHEL 5 was based on FC 6. FC 6 shipped in late Oct 2006; RHEL 5 shipped in mid-Mar 2007.) Of course, nothing is final until it ships. The only concrete information from last September was that FC11 and FC12 would have tech previews for RHEL 6. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] IUD Number
On 1/5/2010 3:53 PM, Susan Day wrote: Hello; How do I get the IUD number for a user? You mean the UID number - use the id command (see also the groups command):

$ id -u thomas
999

And id -G thomas prints a list of all the group numbers that thomas belongs to. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Software RAID1 Disk I/O
On 1/5/2010 5:44 PM, Matt wrote: I just installed the CentOS 5.4 64-bit release on a 1.9GHz CPU with 8GB of RAM. It has 2 Western Digital 1.5TB SATA2 drives in RAID1.

[r...@server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              1.4T  1.4G  1.3T   1% /
/dev/md0               99M   19M   76M  20% /boot
tmpfs                 4.0G     0  4.0G   0% /dev/shm
[r...@server ~]#

It's bare-bones right now. Nothing really running. I intended to move our current email server over to it eventually. The thing is slow as mud due to disk I/O though. I have no idea what's going on. Here is a bit of iostat -x output.

[r...@server ~]# iostat -x
Linux 2.6.18-164.9.1.el5 (server.x.us)  01/05/2010

avg-cpu:  %user  %nice %system %iowait %steal  %idle
           0.13   0.10    0.03   38.23   0.00  61.51

Device: rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s avgrq-sz avgqu-sz    await   svctm %util
sda       0.18   0.33 0.04 0.20  27.77   4.22   132.30     3.70 15239.91 2560.21 61.91
sda1      0.00   0.00 0.00 0.00   0.34   0.00   654.72     0.01 12358.84 3704.24  0.19
sda2      0.18   0.00 0.04 0.00  27.30   0.00   742.93     0.30  8065.37 1930.35  7.09
sda3      0.00   0.33 0.01 0.20   0.14   4.22    21.29     3.40 16537.41 3008.03 61.52

(sda3 is running hot, with 62% utilization)

sdb       0.19   0.33 0.06 0.20  28.39   4.22   126.29     2.51  9676.49  862.49 22.27
sdb1      0.00   0.00 0.00 0.00   0.34   0.00   606.29     0.00  2202.03  643.13  0.04
sdb2      0.18   0.00 0.04 0.00  27.30   0.00   724.25     0.10  2579.45  745.76  2.81
sdb3      0.01   0.33 0.02 0.20   0.75   4.22    22.61     2.41 10913.16  988.40 21.74

(sdb3 is running at 22% utilization)

Does anyone have an idea what I have wrong here? This is my first software RAID install. I've built a number of CentOS servers without RAID and they have all worked fine. As mentioned by others, atop and lsof are good for figuring out what is touching the disk. Something is writing to the disk (wsec/s), doing it fairly evenly, while also reading from the 2nd partition. You could try hdparm (hdparm -tT /dev/sda) and see what speeds you get for the individual drives, then compare that to the speed you get off of, say, /dev/md2.
Or if you want a stronger load that lasts for a minute or two, try using dd and dumping to /dev/null (dd if=/dev/sda of=/dev/null bs=1M count=1000). The individual drive speeds should match up pretty closely with the software RAID speed. A pair of 1.5TB 7200RPM SATA drives should be able to handle up to a few hundred thousand to a million or so messages per month (Postfix / Dovecot). As long as that's at least a dual-core 1.9GHz CPU. YMMV of course, and watching server performance over the long term will be your best bet. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Lost mdadm.conf
On 12/31/2009 11:27 AM, James Bensley wrote: I can't say this with 100% certainty, but I would have thought that it would have been fine. I've lost my mdadm.conf (reinstalled OS) with a separate 4-disk RAID 5 array, and re-assembled the array and carried on as if nothing had happened. Yes, in general, you don't need the mdadm.conf at all, as long as the array is built out of partitions marked as type fd (Linux raid autodetect). However, whenever CentOS installs a new kernel and initrd image file, it creates (or uses?) an mdadm.conf file within the initial boot environment. Back when I was migrating a server to a new environment, I had to unpack the image, edit that copy of mdadm.conf, and then repack it all in order to get a proper boot. So I suspect (but am not certain) that the ARRAY lines in /etc/mdadm.conf are useless on a CentOS system, but that the ARRAY lines inside the initrd image file are the real ones used. The former may be used to generate the latter when you install a new kernel. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Find reason for heavy load
On 12/29/2009 11:44 PM, Noob Centos Admin wrote: My CentOS 5 server has seen the average load jump through the roof recently, despite having no major additional clients placed on it. Previously, I was looking at an average load of less than 0.6; I had a monitoring script that sends an email warning me if the current load stayed above 0.6 for more than 2 minutes. This script used to trigger perhaps once an hour during peak periods. Even so, I seldom saw numbers higher than 1.x. You should also try out atop instead of just using top. The major advantage is that it gives you more information about disk and network utilization. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
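A load-watch script of the kind the poster describes can be sketched as below. The 0.6 threshold and the commented-out mail command are illustrative; the poster's actual script is not shown in the thread:

```shell
#!/bin/sh
# Minimal load-watch sketch: read the 1-minute load average from
# /proc/loadavg and warn when it exceeds a threshold.
THRESHOLD=${THRESHOLD:-0.6}

check_load() {
    load1=$(cut -d' ' -f1 /proc/loadavg)
    # awk handles the floating-point comparison that plain sh cannot
    if awk -v l="$load1" -v t="$THRESHOLD" 'BEGIN { exit !(l > t) }'; then
        echo "WARNING: 1-min load $load1 is above $THRESHOLD"
        # a real script would notify here, e.g.:
        #   mail -s "load warning on $(uname -n)" root < /dev/null
    else
        echo "OK: 1-min load $load1"
    fi
}

check_load
```

Run it from cron every minute or two; the "above threshold for more than 2 minutes" condition would be handled by requiring consecutive warnings before mailing.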
[CentOS] [OT] CAT5 IP-capable rackmount KVM units?
Rather off-topic, but I'm looking for IP-based KVMs (~16 ports) that can handle both PS/2 and USB hookups on the server side. All of the answers over at Slashdot are a few years out of date, and it looks like prices on KVM head units have dropped a bit over the years. Some of the older units only worked with Windows, Internet Explorer and ActiveX. Others, like the ATEN KH1516i, supposedly use Java and are far better from a cross-platform point of view. I'm on the fence about the CAT5 cables versus the more traditional style; it seems like the CAT5 cable system would give a lot more flexibility in dealing with USB vs PS/2 servers (or even serial-only?). The bigger advantage with the CAT5 stuff seems to be fewer length limitations and less space used in the rack for the head unit. I'm guessing that the CAT5 adapters are going to be proprietary? ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Monitor Network Traffic
On 12/21/2009 9:08 AM, sadas sadas wrote: What is the best way to monitor the total incoming/outgoing network traffic of a CentOS server? I think that the solution is to monitor the network interfaces and to send SNMP packets to a remote server. But is it possible? MRTG is the simplest to set up, but it only does graphs. It's especially easy if you're trying to monitor the local host. You'll need to also install the net-snmp and possibly net-snmp-utils packages. Network monitoring solutions also do graphs (Cacti, Nagios, OpenNMS). You can also try ntop. It produces pretty graphs and also segregates network traffic by type/port. In the past I've used Nagios and NTop. Unfortunately, NTop was a bit of a CPU hog and I had stability issues with it that I never tracked down. So at the moment, we're mostly relying on MRTG to see traffic. You can also (ab)use MRTG to graph things like CPU usage, CPU temperature, disk utilization, or anything else that you can query via a remote shell command or SNMP query. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
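The "(ab)use MRTG" trick works because MRTG can run an external command instead of an SNMP query: put the command in backticks in the Target line, and the command must print exactly four lines - two values, an uptime string, and a name. A sketch graphing the 1- and 5-minute load averages (any two numbers you can collect would work; the script path below is a made-up example):

```shell
#!/bin/sh
# MRTG external-target sketch: print value1, value2, uptime, name.
mrtg_target() {
    read load1 load5 rest < /proc/loadavg
    echo "$load1"                                     # "in" value
    echo "$load5"                                     # "out" value
    echo "up $(cut -d' ' -f1 /proc/uptime) seconds"   # uptime (free text)
    uname -n                                          # target name
}

mrtg_target
```

Then in mrtg.cfg, something like: Target[load]: `/usr/local/bin/mrtg-load` (plus the usual MaxBytes/Title/Options lines).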
Re: [CentOS] Optimizing CentOS for gigabit firewall
On 12/18/2009 4:12 PM, Peter Serwe wrote: You can't patch the Berkeley Packet Filter into Linux. The Linux kernel doesn't support it. and... Despite a cacophonous chorus of replies directing you to the right tool for the job, you insist on sticking with Linux. If you want to use the wrong tool for the job, by all means, use ipset/iptables - have a great time with it. When it doesn't give you the performance you want, then you will probably go buy something else. Or wrap it up using Shorewall or one of the other meta tools that manage the iptables chains for you. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Subversion server: v1.4 (centos) vs. v1.6 (rpmforge)
On 12/16/2009 1:32 PM, Mathieu Baudier wrote: (We took advantage of repository sharding in 1.6, which is why we did a svn dump/load method. If we didn't need sharding, we probably could've just copied the directory tree across from the 1.4 to the 1.6 server.) Did you consider the type of filesystem when setting up sharding? Or would you consider ext3 as good enough? One other note on file systems. Our largest repository is 13GB with about 8000 revs. Our repository with the most revs has about 16,000 (but is only a few GB). So even the 16k rev database probably didn't need sharding yet. But it's growing at the rate of 5-6k revs per year. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] LVM, usb drives, Active Directory
On 12/15/2009 7:48 AM, Scott Ehrlich wrote: I have a client with a handful of USB drives connected to a CentOS box. I am charged with binding the USB drives together into a single LVM for a cheap storage data pool (10 x 1 TB usb drives = 10 TB cheap storage in a single mount point). (snip) What are my best options? Um, don't? Like other people said, go with eSATA, hopefully hooked up to a 4-drive or 8-drive enclosure (or even a 10-drive enclosure). Alternately, go with an external SAS storage rack that supports both SAS and SATA drives. A SAS card for PCIe is fairly inexpensive ($200?), and the external enclosures are probably (but not certainly) going to be better made than inexpensive SATA enclosures. The big problem with USB is that it only supports about 25MB/s per port, which means that it's going to be very, very slow. Modern hard drives can push 50-80MB/s easily. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] LVM, usb drives, Active Directory
On 12/16/2009 9:41 AM, William Warren wrote: On 12/16/2009 12:10 AM, Eero Volotinen wrote: Still going to need 10TB of backups. And I can guarantee you the chances of having a URE during rebuild are almost certain with this setup, so a backup is going to be crucial. Sounds like a nightmare even inside a SuperMicro or similar box. Yah, RAID-6 at a minimum; I wouldn't depend on RAID-5, even with a hot spare. So to get 10TB, you'd need 13 drives (10 data, 2 parity, 1 hot spare). And make sure you buy enterprise-level SATA disks (the 1TB models are about $150 right now). (You can fit 15 3.5" drives into a SuperMicro 4U 942i 760W case with the 5:3 SATA mobile racks. The 942i is also a very quiet case due to using 120mm fans inside.) ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Subversion server: v1.4 (centos) vs. v1.6 (rpmforge)
On 12/15/2009 4:22 AM, Mathieu Baudier wrote: Hi, I'm planning to upgrade an old public/internal development infrastructure and will use CentOS 5.4 x86_64 as basis. The Subversion version in CentOS 5.4 is v1.4, whereas RPMForge provides v1.6. I use the RPMForge version as my client on the desktop. - Has anyone of you experience running Subversion servers on CentOS? Yes, when we upgraded to SVN 1.6 on the server, we moved from our old Linux box to CentOS. We did a svn dump/load cycle to move from the 1.4 server to the 1.6 server. And kept the 1.4 dump files for posterity. - Would you in general consider as less secure / safe / stable to use RPMForge packages for such critical tasks? Nope. Works fine. Between the nightly hot-copy backups and the internal design of the SVN FSFS storage engine, I'm not terribly worried. (We took advantage of repository sharding in 1.6, which is why we did a svn dump/load method. If we didn't need sharding, we probably could've just copied the directory tree across from the 1.4 to the 1.6 server.) ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Migrating to RAID
On 12/10/2009 10:39 AM, Matt wrote: I have CentOS 4.x installed on a single 500GB SATA drive. The drive is about 10 percent used. I would like to migrate to software RAID 5 without reinstalling the OS. Was thinking 3 500GB drives. Is that possible or must I reinstall? Moving to RAID-1 is going to be fairly easy. Moving to RAID-5 or RAID-6 will be a good bit trickier. You're going to want a good bare-metal backup (maybe Mondo Rescue, http://www.mondorescue.org/) before you get started. Then your basic process is going to be:

- make sure that mdadm is loading
- partition the new 500GB disks similarly to the old disk
- build mdadm RAID-1 arrays on the new 500GB disk (with 1 drive missing)
- copy your files over (cp -a)
- make sure grub is on the new disk
- change your fstab on the new disk to mount the arrays (/dev/mdX) instead of the partitions (/dev/sdaX)
- remove the old disk and see if you can boot up on the new one

I'm sure I'm forgetting something; just remember that you'll want to make lots and lots of backups (on a 3rd and 4th disk). And be ready to rebuild or restore from the bare-metal backup if you screw up. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
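The step list above, as a rough command sketch. This is illustrative only, not runnable as-is: the device names are examples, the commands are destructive, and it assumes a RAID-1 migration done from a rescue environment with backups already made:

```
# old disk: /dev/sda; one new disk: /dev/sdb (example names)
sfdisk -d /dev/sda | sfdisk /dev/sdb      # mirror the partition layout
                                          # (then set the new types to fd)
# build degraded RAID-1 arrays, one member "missing" for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1  # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2  # /
mkfs.ext3 /dev/md0 ; mkfs.ext3 /dev/md1

mkdir -p /mnt/new && mount /dev/md1 /mnt/new
mkdir -p /mnt/new/boot && mount /dev/md0 /mnt/new/boot
cp -ax / /mnt/new && cp -a /boot/. /mnt/new/boot/

# edit /mnt/new/etc/fstab to mount /dev/md0 and /dev/md1 instead of /dev/sdaX
grub-install --root-directory=/mnt/new /dev/sdb   # grub on the new disk

# pull the old disk and test-boot from the new one; once happy, add the old
# disk's partitions into the arrays and let them resync:
#   mdadm /dev/md0 --add /dev/sda1 ; mdadm /dev/md1 --add /dev/sda2
```

The same general shape works for RAID-5/6, but the degraded-array and resync steps get trickier, which is the "good bit trickier" above.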
Re: [CentOS] Is ext4 safe for a production server?
On 12/9/2009 12:23 PM, Miguel Di Ciurcio Filho wrote: Miguel Medalha wrote: I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt. Just for the record, Theodore Ts'o marked ext4 as stable and ready for general usage more than one year ago [1]. On 25 December 2008, kernel 2.6.28 was released with ext4 considered ready for production. So, ext4 is not _that_ new anymore. A year later, Fedora 12 and Ubuntu 9.10 began using ext4 as the default. I believe that by 5.5, or even 5.6, ext4 will not be a tech preview anymore, considering that RH has extended the support so much, and how limited ext3 is with current and future disk capacities (fsck on a 1TB volume is not fun). The current ext4 module is close to the one in 2.6.29, plus lots of fixes [2]. [1] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=03010a3350301baac2154fa66de925ae2981b7e3 [2] rpm -q --changelog kernel|grep ext4 My leaning is that 5.4 would be a bit too soon for production data, unless you have a very specific need and very good backups. But it's darned close to ready. Waiting until 5.5 or 5.6 (or 6.0), or at least waiting until next spring, sounds like a reasonable middle ground. That gives the Ubuntu and FC hordes time to beat on it in less controlled settings. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] CentOS 5.4 x86_64 only detects 32GB RAM while Fedora x86_64 correctly lists 128GB
On 12/7/2009 7:24 AM, Diederick Stoffers wrote: [r...@localhost ~]# uname -a Linux localhost.localdomain 2.6.18-164.6.1.el5xen #1 SMP Tue Nov 3 16:48:13 EST 2009 x86_64 x86_64 x86_64 GNU/Linux If you dig back through the xen-users mailing list, there's a recent thread that discusses this. Look for a subject of "[Xen-users] Memory not seen in Dom0" back around Nov 8th. I tried to get GMane's web interface to toss up a direct link to the thread, but gave up. nntp://news.gmane.org/gmane.comp.emulators.xen.user Supposedly, it's not an issue unless you really, really want 32GB in the Dom0 host. The Xen 64-bit kernel (currently?) limits the Dom0 to 32GB, but any guest VMs that you create will initially use the RAM not used by the Dom0. Most folks recommend that you limit Dom0's memory usage in grub.conf (using the dom0_mem= argument). Or you can set it later using xm mem-set 0 ###M. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
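Capping Dom0 memory as suggested is a one-line change to the Xen entry in /boot/grub/grub.conf: append dom0_mem= to the hypervisor's kernel line. The 1024M figure, root= value, and exact filenames below are examples (match them to your installed kernel):

```
# /boot/grub/grub.conf (Xen entry) -- cap Dom0 at 1GB, leave the rest for DomUs
kernel /xen.gz-2.6.18-164.6.1.el5 dom0_mem=1024M
module /vmlinuz-2.6.18-164.6.1.el5xen ro root=/dev/VolGroup00/LogVol00
module /initrd-2.6.18-164.6.1.el5xen.img
```

The runtime equivalent is the xm mem-set 0 command mentioned above, but the grub.conf setting is what survives a reboot.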
Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid
On 12/3/2009 7:35 AM, Grant McWilliams wrote: You can talk theoretics, but I can tell you my real-world experience. I cannot speak for other vendors, but for 3ware this DOES work and is working so far with 100% success. I have a bunch of Areca controllers too, but the drives are never moved between them, so I can't say how they'd act in that circumstance. Brand probably matters a lot. The 3ware and Areca cards I'm inclined to trust. They're true hardware RAID controllers and not just fakeraid. Things get a lot murkier when you get into the bottom half of the market. But for smaller shops that can't afford to have 4+ of everything and don't need the CPU offload that a hardware RAID controller offers, Linux software RAID is a solid choice. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS] Latency Monitor
On 12/2/2009 4:41 PM, Matt wrote: Does anyone know of a utility I can run on a server to periodically ping several hosts and record the result? Does not need to be anything fancy at all. Various monitoring apps (cacti, nagios, etc)... or MRTG. All of which store their data in RRDTool. Which reminds me, I need to setup Nagios 3 this month. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] /etc/cron.weekly/99-raid-check
On 12/1/2009 8:05 AM, Paul Bijnens wrote: I have the problem on 2 servers, and both of those servers are also running a VMware image (very small, but constantly used) under VMware Server 2. Could it be that the .vmem file, or even the virtual disk, is constantly written to, and the raid is constantly out of sync because of that? (All my other VMware servers have hardware raid cards, or are still on CentOS 4.) ... that fills me with dread. The whole point of RAID-1 is supposed to be that data that gets written to one drive also gets written to the other drive. But yes, apparently you will see this on systems where a file is being constantly written to:

http://bergs.biz/blog/2009/03/01/startled-by-component-device-mismatches-on-raid1-volumes/

http://www.issociate.de/board/goto/1675787/mismatch_cnt_worries.html (a post from 2007 that discusses the issue)

http://forum.nginx.org/read.php?24,16699 (apparently, a non-zero number is common on RAID-1 and RAID-10 due to various (harmless?) issues like aborted writes in a swap file)

http://www.centos.org/modules/newbb/viewtopic.php?topic_id=23164&forum=37 (also mentions that it can happen with VMware VM files)

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=405919 ("And lastly, please explain mismatch_cnt so I can sleep better at night.")

So my take on all of that is: if you see it on RAID-5 or RAID-6, you should worry. But if it's on a RAID-1 or RAID-10 array holding memory-mapped files or swap files/partitions, it's less of a worry. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
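For reference, the counter being discussed is exposed per-array in sysfs, so checking it can be sketched like this (on a box without md arrays the loop simply finds nothing):

```shell
#!/bin/sh
# Print the mismatch counter for each md array, as updated by the weekly
# "check" action that the 99-raid-check cron job triggers.
report_mismatches() {
    found=0
    for f in /sys/block/md*/md/mismatch_cnt; do
        [ -r "$f" ] || continue
        found=1
        echo "$f: $(cat "$f")"
    done
    [ "$found" -eq 1 ] || echo "no md arrays found"
}

report_mismatches

# To refresh the counter on e.g. md0 by hand (what the cron job does weekly):
#   echo check > /sys/block/md0/md/sync_action
```

A persistent non-zero count after a "repair" pass on a RAID-5/6 array is the worrying case described above.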
Re: [CentOS] timekeeping on VM - ntpd running
On 11/30/2009 11:34 AM, Jonathan Garden wrote: This is a really stupid question, but referring to: http://lists.centos.org/pipermail/centos/2009-October/083791.html I don't see any line related to ntpd in my /var/log/messages. Do I need to turn on ntpd for timekeeping on VMs? Some people say not to use ntpd on VMs for timekeeping - or should it be an ntpdate cron job? Can someone please elaborate on this? On Xen guests, unless /proc/sys/xen/independent_wallclock is set to 1, the DomU will slave itself to the Dom0 clock. So you probably don't need NTP inside the DomU. Our Dom0 uses ntpd to keep itself synchronized with the internal time servers on our LAN. Whether that is better than using ntpd combined with setting that flag to zero... I'm not sure. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
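For reference, the flag mentioned above can be flipped at runtime or made persistent inside the DomU (a sketch; this sysctl key exists only under the Xen kernels):

```
# runtime, inside the DomU -- let the guest run its own clock:
echo 1 > /proc/sys/xen/independent_wallclock

# persistent, in the DomU's /etc/sysctl.conf:
xen.independent_wallclock = 1
```

With the flag at 0 (the default), the DomU tracks the Dom0 clock and ntpd inside the guest is redundant; with it at 1, the guest keeps its own time and needs ntpd or similar.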
Re: [CentOS] again, nic driver order (bonding)
On 11/22/2009 8:38 PM, Gordon McLellan wrote: I have two servers with identical hardware... TYAN i3210w system boards with dual Intel gigabit interfaces, and a PCI Intel gigabit NIC. I'm running CentOS 5.4, x86_64, 2.6.18-164.6.1.el5. Every other time I reboot, the NICs initialize in a different order. On the servers where I'm currently using bonding... (this is what Ross Walker said on the 23rd). Here's an example for a server with 4 total NICs, bonded into a pair of pairs.

/etc/modprobe.conf:

alias eth0 tg3
alias eth1 tg3
alias eth2 forcedeth
alias eth3 forcedeth
alias scsi_hostadapter sata_nv
# BONDING
# Set general bonding options (allows multiple bonds)
options bonding max_bonds=2
# Define the two bonds
alias bond0 bonding
alias bond1 bonding

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
BOOTPROTO=none
HWADDR=00:16:36:##:##:##
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
TYPE=Ethernet

/etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
TYPE=Ethernet
BONDING_OPTS="mode=1 miimon=100"
NETWORK=nnn.nnn.nnn.nnn
NETMASK=nnn.nnn.nnn.nnn
IPADDR=nnn.nnn.nnn.nnn
GATEWAY=nnn.nnn.nnn.nnn

Basically, we create one file for each Ethernet interface under /etc/sysconfig/network-scripts (ifcfg-eth0, ifcfg-eth1, ifcfg-eth2, ifcfg-eth3), then we create one file for each bonded interface there as well (ifcfg-bond0, ifcfg-bond1). Bond membership is defined in the ifcfg-eth# files, while the bond options are defined in the ifcfg-bond# file. You can find out the MACs by looking in /etc/sysconfig/hwconf. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Recommend Mail Server
On 11/23/2009 2:21 PM, John R. Dennison wrote: On Mon, Nov 23, 2009 at 01:59:40PM -0500, Robert Moskowitz wrote: It points you to: http://howtoforge.net/virtual-users-domains-postfix-courier-mysql-squirrelmail-fedora-10 Now granted this is for FC10, but I suspect it would be easy to fit into CentOS. Please, for the love of god and country, do not follow garbage like this. Under "1. Preliminary Note" is this text: You should make sure that the firewall is off (at least for now) and that SELinux is disabled (this is important!). Documents that advocate disabling SELinux should be tossed in a pile and set on fire. Documents that tell you to disable your firewall, with no mention in the remaining portion of the document of re-enabling it post-install or how to properly configure it, should join the burn pile. +1... While SELinux can be a PITA at times, it's not going to go away anytime soon, so a smart sysadmin needs to learn to work with it rather than against it. HowTos that tell me to disable SELinux or a firewall are held at arm's length and never followed literally. (They might contain some useful commands or configuration options... maybe.) (personal rant) You can do a lot of SELinux workarounds with brute-force egrep'ing of the audit log combined with audit2allow. It's not the best way to do it. If you have mislabeled files that are labeled with a generic var_t label, and you grant processes access to those files with blind acceptance of what audit2allow says, you're also granting access to every other file that is labeled as var_t. (The better choice would be to properly label the files that didn't get labeled correctly.) But even a brute-force application of audit2allow is still a step up from disabling SELinux entirely. (I have a love/hate relationship at times with SELinux. I need to spend another weekend reading up on it again and figuring out some of the things that I'm not sure about yet.)
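The "brute-force" triage described in the rant can be sketched as below. The denial record here is a made-up sample (note the generic var_t target type that the rant warns about); on a real box you would grep /var/log/audit/audit.log instead:

```shell
#!/bin/sh
# Step 1 of the workaround: pull AVC denials out of the audit log.
# (Sample record is fabricated for demonstration.)
cat > sample-audit.log <<'EOF'
type=AVC msg=audit(1259000000.123:456): avc:  denied  { read } for  pid=2509 comm="nagios" name="status.dat" dev=sda3 ino=12345 scontext=system_u:system_r:nagios_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=file
type=SYSCALL msg=audit(1259000000.123:456): arch=c000003e syscall=2 success=no exit=-13
EOF

grep 'avc:.*denied' sample-audit.log

# Step 2 (needs policycoreutils; not run here): feed the denials to
# audit2allow to build a supplemental module -- with the caveat that
# allowing access to a generic type like var_t allows far more than the
# one mislabeled file:
#   grep 'avc:.*denied' /var/log/audit/audit.log | audit2allow -M mylocal
#   semodule -i mylocal.pp
```

The better fix, per the rant, is to correct the file's label (restorecon / semanage fcontext) rather than widen the policy.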
___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Recommend Mail Server
On 11/23/2009 1:59 PM, Robert Moskowitz wrote: Susan Day wrote: Hi; I don't want sendmail. What's a good secure email server that I can yum? I really only need smtp right now, but who knows what the future will bring? See my slightly prior post on: Re: [CentOS] smtp+pop3+imap+tls+webmail+anti spam+anti virus We use postfix, dovecot, clamav milter (reject at SMTP time), spf policy check (with rejecting on SPF_FAIL at SMTP time), and AmavisD-New w/ SpamAssassin for scoring what's left. ... For us, reject_invalid_helo_hostname and reject_non_fqdn_helo_hostname in the smtpd_helo_restrictions end up blocking probably 80% of all inbound spam/virus attempts. In a few years, I have yet to see someone complain about a false-positive reject from those restrictions. Our users would see 4x-6x more mail that would have to be virus-scanned or spam-scored without those checks. The reject_unknown_helo_hostname check, OTOH, is much more likely to reject mail from a valid mail server. It's a good check, but for us the false-positive rate is in the 1:2000 to 1:3000 range. So we have a whitelist where we list the HELOs of misconfigured mail servers of companies that we do business with. We had to list a bunch of folks back when we started, but it's trickled down to about 1 per month now. And in 90% of the cases, you can tell from the HELO name that it's a Microsoft Exchange server. http://tools.ietf.org/html/rfc5321#section-2.3.5 We used to use some DNSBL-based rejects at SMTP time, but now we just let that stuff through and have SpamAssassin score it. Then we use server-side sieve scripts to quarantine stuff scoring higher than 8.0-9.0 directly into the server-side Junk folder. (We score and tag at 4.5, but don't quarantine until 8.0 or 9.0.) ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
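The HELO checks described above are configured in /etc/postfix/main.cf. A minimal sketch (the whitelist map name is a made-up example; the restriction names shown are the Postfix 2.3+ spellings, which is what CentOS 5 ships):

```
# /etc/postfix/main.cf -- HELO restriction sketch
smtpd_helo_required = yes
smtpd_helo_restrictions =
    permit_mynetworks,
    check_helo_access hash:/etc/postfix/helo_whitelist,
    reject_invalid_helo_hostname,
    reject_non_fqdn_helo_hostname
#   reject_unknown_helo_hostname  -- the stricter check, with the
#   false-positive rate noted above; the whitelist map handles the
#   misconfigured-but-legitimate senders
```

After editing the whitelist, rebuild the map with postmap /etc/postfix/helo_whitelist and reload Postfix.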
Re: [CentOS] Recommend Mail Server
On 11/25/2009 6:45 PM, Christopher Chan wrote: Thomas Harold wrote: We use postfix, dovecot, clamav milter (reject at SMTP time), spf policy check (with rejecting on SPF_FAIL at SMTP time), and AmavisD-New w/ SpamAssassin for scoring what's left. Have you looked at spamass-milter too? No, I must have overlooked that. We're taking advantage of a lot of the amavisd-new features that enhance SpamAssassin. OTOH, spamass-milter looks to be a lot simpler to configure and would've allowed us to reject the super-high-scoring spam (>=25.0) during the SMTP transaction. (I prefer to only reject on bogus HELO names, virus-infected messages caught by ClamAV, and SPF_FAILs at the moment. Rejecting on a spam score is trickier and more subjective.) One advantage of amavisd-new is that we could, if needed, move the spam scoring off to a secondary internal server and round-trip it back to the primary mail server. There are some other tricks that amavisd-new handles beyond that (such as the policy banks, or the ability to boost/lower the score for a sender's email address or domain by a few points instead of outright whitelisting/blacklisting). ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Best Motherboard
John Plemons wrote: I would look at Tyan, Soyo, and Intel for middle-of-the-road performance, but more for dependability... I have also had very good luck with MSI and Asus... Same here: Tyan for the really important systems (complete with ECC) inside a SuperMicro rack case, Asus for the desktops / less important servers. (I really like the Asus M2N designs, because they use heatpipes to cool the chipset. Which means one less fan, a.k.a. moving part, to worry about in our boxes.) ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Somewhat OT:
Ross S. W. Walker wrote: Nagios can start very simple, but has the ability to end up very complex. Its configs take a modular approach: you have monitors, monitors belong in groups, groups have operators/administrators, etc. We just finished setting up Nagios at our office. It's not that bad once you break things out into sensible filenames instead of using one big config file. We stripped it down to just the essentials and are slowly building out our configuration to monitor additional services and hosts. The other trick that we use is FSVS, which means that we have very good records of what configuration-file changes we made on the server. (FSVS is a front-end for storing stuff like /etc in an SVN repository.) It's extremely useful to be able to log configuration changes, browse past changes, do diffs on the files, etc. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] Somewhat OT: (Nagios)
Sergio Belkin wrote: 2008/5/13 [EMAIL PROTECTED]: OK, you won :) I'm going to test Nagios. I am using CentOS 5.1 x86_64. Do I lose much if I use the rpm from RPMForge (version 2.9)? We're running version 2.11 at the office (on CentOS 5.1 x86_64). I've looked at some of the things in 3.0, but there's nothing there that I needed yet. Hopefully you have some way to track changes in /etc/nagios (FSVS is what we use), because it will make your life much easier to have an audit trail. We created sub-folders under /etc/nagios to hold the various types of entities. For example, we have:

/etc/nagios/commands
/etc/nagios/contacts
/etc/nagios/contactgroups
/etc/nagios/hosts-switches
/etc/nagios/hosts-dmz
/etc/nagios/hosts-servers
/etc/nagios/hosts-lan
/etc/nagios/templates-hosts
/etc/nagios/templates-services

We then broke individual elements out of the default massive configuration folder into individual .cfg files. For example, we chose to create individual files for each contact rather than putting them all in a single file. So far it works well; it's a lot easier to get a feel for what users have been defined, what hosts are defined, and what the templates are. Because when I look in templates-services, I see from the directory listing that I have service templates named X, Y and Z (without having to open up the file to look). We currently put service checks for individual hosts in the same configuration file as the host. So you will have the following definitions in a typical host file (until you get into templating):

define host{
define hostextinfo{
define service{
define service{
...

Any plugins that we wrote ourselves, we put under a separate folder, which keeps them separate from /usr/local/lib64/nagios-plugins/
Stay away from advanced things like escalation, monitoring things like disk space on remote servers, or the like until you get the basics working. Oh, and SELinux will probably get in your way. So you'll need to play with audit2allow to create supplemental policy to give Nagios additional permissions. (Which may have been due to PEBKAC issues on my end - I plan on going back and looking at labeling and figuring out what I mislabeled.) I think that's the majority of the issues that we dealt with in the past 2 weeks. We're now in fine-tuning mode and getting ready to start monitoring remote services next week. ___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
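A minimal example of the host + service pair that goes in one of those per-host .cfg files. The host name, address, and template names here are invented; the `use` lines assume templates of the kind kept under templates-hosts / templates-services, which supply the remaining required directives:

```
# /etc/nagios/hosts-servers/web1.cfg -- hypothetical example
define host{
        use                     my-host-template      ; from templates-hosts
        host_name               web1
        alias                   Example web server
        address                 192.168.1.10
        check_command           check-host-alive
        }

define service{
        use                     my-service-template   ; from templates-services
        host_name               web1
        service_description     HTTP
        check_command           check_http
        }
```

Run nagios -v /etc/nagios/nagios.cfg after each change to catch missing directives before reloading.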