Re: [CentOS] Is Oracle a real alternative to Centos?
-- Original Message -- From: "Matti Pulkkinen" As someone who is considering moving to OL, I wonder if you could elaborate clearly on what specific concerns you have, without the insinuation and analogy? Oracle's proposition [1] seems pretty straightforward to me. That they'll eventually hand it to the same lawyers who changed Java licensing. ___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] [CentOS-devel] https://blog.centos.org/2020/12/future-is-centos-stream/
-- Original Message -- From: "Rainer Duffner" So, you will quickly be back to square one, unless you want to run stuff like Debian or Ubuntu, which are mainly Linux-kernel+some stuff nowadays, whereas RHEL + CentOS forms a complete system (with additional software that RedHat has developed or acquired over the years). Been reading along and literally laughed out loud at this silliness. The vast majority of that "system" was unavailable to CentOS, always, and WAS the "compromise" in running it. Stuff like beating your head against getting Satellite running, or realizing RH hid away the meta-data CentOS users needed to know whether an update was a security fix versus a feature or bugfix, which went against what RHEL itself was SUPPOSED to do, but never really did... and couldn't control massive upstream ABI, API, or feature changes throughout the lifespan of the promised "support", even for paid RHEL. This is definitely not true for most CentOS users and is hilarious. What "system"? It NEVER existed on CentOS. Can't even get patch management software to match up version numbers between RHEL and CentOS. We "put up with it all" for exactly one reason. It was a binary-compatible re-spin of RHEL without closed/proprietary things. That's it. The rest is just noise. If it isn't a re-spin anymore... well, we'll "put up with" other oddities of projects that don't reverse their multi-year commitments to support things, and even stop having to "fight" with years-old packages. The IT world wants "rolling" OSes and perma-garbage always-broken releases today, apparently. Our first company meeting about who we dump CentOS for was this morning. Flipping architectures is a year-long project at least, so we're out. They didn't announce alternatives THE DAY IT WAS KILLED, so we can't be bothered anymore. We literally don't have the time with piles of other commercial and cloud services following suit and capitalizing on WFH and everything else about Covid. 
We already literally have to "fire" our firewall/VPN vendor for doing it, we're extremely annoyed with both Google and Microsoft and their changes, and we already have the continuous nightmare of literally EVERYONE releasing so many critical security bugs constantly and patching ramping toward daily... that everybody who makes that harder is flipped the bird and summarily tossed. The good news: Covid business model changes at least highlighted who we're firing faster than any hemming and hawing as things deteriorate for years on any platform we use. Whoever is reaching into our (not very deep) pockets will lose a hand this year, we have lost our patience for it. RH and the so-called "CentOS Board" (majority of RedHat people) lost touch with what companies are already going through with multiple vendors bumping prices and lowering services. Flipping distros will ultimately seem tame this year for corporate users. We may have to switch entire cloud platforms and services to avoid the ultra-greedy companies. But annoy us this year, we have zero patience. We're done with it. DUMP. BYE. You ticked us off in a long line of companies we have doing that. Horrible timing for RH, but they'll survive on government graft and large contracts. Go Big Blue.
Re: [CentOS] Advice on partitioning a Dell MD1200 disk array
On Sep 4, 2012, at 5:10 AM, Tony Molloy wrote: > Just remember I'm due to retire at the end of this month so this will > be my last big job for the Dept. And due to financial constraints I > will not be replaced. So I will be handing this machine over to a > co-worker who is basically a Windoze admin with only a basic knowledge of > Linux so nothing too fancy. ;-) Hand him the machine and tell him to load Windows on it or whatever he wants to maintain for the next X years, relax for 30 days and enjoy retirement. It's his problem now. :-) :-) ;-) Nate
Re: [CentOS] Strange device labeling in 6.3
On Aug 9, 2012, at 7:43 PM, Kahlil Hodgson wrote: > On 10/08/12 09:18, Reindl Harald wrote: >> and that is why i use /etc/udev/rules.d/70-persistent-net.rules to >> pin device-name / MAC and no mac-address in ifconfig-scripts since >> many years > > +1 > > Alternatively, with biosdevname, you can pin the interface name to > the pci(e) slot. That way it's a trivial exercise to get 'remote hands' > to swap out a dead nic -- no need to fiddle with the mac address. And if you don't... Doing remote hands to swap a bad NIC with someone non-Linux qualified in another country just became the seriously sucky part of your day. BTDT. Got the t-shirt. Nate
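For reference, a MAC-pinning rule of the kind Reindl describes looks roughly like this (a sketch; the MAC address and interface name are placeholders, and the exact rule fields vary between udev versions):

```
# /etc/udev/rules.d/70-persistent-net.rules
# Pin the interface with this hardware address to the name eth0.
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```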
Re: [CentOS] Another NTP issue (fake leap second)
Yes, Java was grumpy. -- Nate Duehr denverpi...@me.com On Aug 2, 2012, at 1:24 AM, Timo Schoeler wrote: > Hi list, > > just out of curiosity: Was anybody affected by this? > > http://lists.ntp.org/pipermail/questions/2012-August/033611.html > > Cheers, > > Timo
Re: [CentOS] run fsck manually
On Jul 9, 2012, at 12:03 PM, m.r...@5-cent.us wrote: > Jerry Geis wrote: >> Is there a way in centos to just go ahead and do this automatically? >> >> /dev/sda1: Unexpected Inconsistency; [FAILED] >> >> Run fsck Manually >> >> (ie. Without -a or -p options) >> >> ***An error occurred during the File system check > > a) fsck -y -C [-c] /dev/sda1 (-c will check for bad blocks; it will take a *while*; run it overnight, or over dinner, or over the next looong meeting) > b) Buy new disk, *now*, insert, rsync over. He's not asking what to type at that point, he's asking how to keep the boot process from stopping there and just run the fsck (possibly destructive, though oftentimes all that gets damaged or moved to lost+found is logs that were open when the system went down). (Unfortunately I don't know how to tell the initial fsck to go ahead and do the destructive pass without human intervention, as I wouldn't want it to do that, but I see where the communication misunderstanding is happening in the e-mail chain.) He's saying the desktop machines are "throwaway" and he doesn't want to take the time to go over and look... do the fsck, and if it trashes the filesystem, he'll just re-image the machine later. Meanwhile, the user isn't confused or interrupted by the fsck message if the machine finds filesystem problems at boot time. I would assume this is often a desired behavior on machines that have poor AC power at remote sites. In other words: give the fsck a try even if I'm not there. Nate
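For what it's worth, the CentOS 5/6 boot scripts do appear to have a hook for this; the following is an untested sketch (verify against your own /etc/rc.d/rc.sysinit before trusting it on real machines):

```
# rc.sysinit passes the contents of /fsckoptions to fsck at boot.
# "-y" answers yes to every repair prompt -- i.e. the unattended,
# possibly destructive fsck being discussed.
echo "-y" > /fsckoptions

# Creating /forcefsck additionally forces a check on the next boot.
touch /forcefsck
```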
Re: [CentOS] Question about storage for virtualisation
Appreciate all the discussion, folks. Got some boxes that have some separate / and /usr, since we're a /usr/local religion shop. (GRIN...) Someone long long ago, in a galaxy far, far away picked /usr as the split point, instead of /usr/local itself. So... Layers 8 and 9 of the OSI model, bite again... Religion and Politics. Guess we'll be moving to /opt or /usr/local being the separate mountpoint. I'm sure this will be a happy internal discussion... hahaha... Nate
Re: [CentOS] Question about storage for virtualisation
On Jun 26, 2012, at 10:29 AM, Gordon Messmer wrote: > > I don't believe there's any more or less need to do so. I would > strongly recommend that you not segregate / and /usr. Fedora and future > versions of RHEL/CentOS will expect a unified / and /usr. I may be behind, but this is the first I've heard of this... Any good references as to WHY?! they want to break this decades-old convention? Thanks, Nate
Re: [CentOS] CentOS 6.2-x86-64
On Jun 20, 2012, at 3:27 PM, Gene Heskett wrote: > As for partiality, no way, synaptic, adapted for rpms is by far the best > package manager I've used in the last 5 years since I bailed on fedora at > about 6 or so. Understand that sentiment, Gene. I like aptitude myself for Debian-based systems, but everyone has a favorite. It sounds like you have much of your repo problem for RPM sources for the tools you want understood now. I would caution that some repos out there are set up by individuals who are interested in fixing something, they make a few packages, and then they disappear when they lose interest. I suggested EPEL because it's a large project, based off of an even larger one (Fedora) and there's probably not going to be any major disruptions in it as far as interested-parties goes, so security and version updates of most packages in it should keep flowing unless upstream sources abandon them. > I just pulled the clamav stuff, not terribly complete unless the utils are > part of the main package, but have not attempted an rpm -ivh on that kit > yet. I got the huge majority of the stuff with FF, at > <http://choonrpms.mirror.choon.net/centos/6.2/choonrpms/x86_64/> > which I found via a google search. Highly recommend adding reputable repos to your local system and then using yum search [packagename] or similar... I haven't seen the name choonrpms before, but I'd kinda want to know who they are before installing their packages. Just a thought. Take or leave. (Someone who knows who they are, may be chuckling at that... I don't know... I haven't researched it.) > I'm getting close to that in N. Central WV, phone and internet are on the > local cable, getting about 385k/sec dl speeds on average. But I have kept > my own email corpus here since 1998, over 7Gb of it now, and old, probably > bad habits are hard to break. Old being relative of course since I'm only > slightly younger than dirt at 77. 
Retired (almost, I take a small plane > ride tomorrow to go look at a transmitter that is off the air) from the > local CBS affiliate as the CE from 1984.9 to 2002.6. I will add my vote that even though running one's own mail server is a fun challenge, at some point in the past I decided to leave it to younger pros who get paid to wrangle with spammers and what-not, and now only run mail servers I'm paid to deal with. (GRIN!) I migrated old mail that I thought I couldn't live without to the IMAP account and said goodbye to the time-suck that a modern mail server has become. (I still operate mail servers for my employer, but at home... it's nice to just forget about it and read my mail. GRIN...) Neat to run into another RF "geek". Never made my living at it, but I maintain some Amateur Radio FM repeaters and some Public Safety FM and P-25 systems. Nothing high power, broadcast or TV, but as things are generally co-located on mountains... have seen many broadcast systems up close, and had the "5 cent tour" from the Station Engineers in the area. Be careful with that high-power stuff... but you know that already. No tower climbing at 77... let one of us young whippersnappers do that silly stuff. I'm about half your age, and I still don't really like it. Just a necessary evil in volunteer organizations... strap on the safety gear, and get going. You mentioned a small plane... I do small planes for fun these days, and I'd much rather be doing that, than climbing a tower. (GRIN!) > Thanks, I'll see if I can google that when I get back from the trip & get > over my aches & pains from crawling around in that elderly Harris 50kw > transmitter. CBS does love their Harris stuff, eh? I got to see the new solid state beast out here in Denver in person... 1KW modules, pop one out, pop one in... touch screen to gracefully fail one if you want to pull it... pretty amazing stuff. Paul (the Engineer here) really enjoys his broadcast radio and other radio toys. 
That was one of the "five cent tours", seeing his new setup in a shared building with various other DTV systems co-located. Newest site I've ever been in. Nice setup. Always interesting to see waveguide bigger than most sewer pipes and the associated filters. Looks like an old steamworks for a steamship, but all "filled" with RF instead of water vapor... Enjoy your "retirement"... and 73 if you're a Ham... WY0X here. Nate p.s. Apologies to the list for the personal notes and digressing a bit... I don't think I know Gene well enough to send him private messages. All the best.
Re: [CentOS] CentOS 6.2-x86-64
On Jun 20, 2012, at 1:59 PM, Gene Heskett wrote: > I found a yumex, which is not part of the 64 bit install, but it wants a > way older version of python-2.4 whereas we have 2.6.6-something after the > post install upgrade. > > 1st Question: > Is there anything that can be done about this? Or is there something > better, like a 64 bit synaptic to replace yumex? > You may be unfamiliar with CentOS in that many tools you might find "standard" on regular distros, aren't part of the upstream package list for "Enterprise" Linux. As delivered from upstream, it's a fairly "stripped" distro. Yumex is not packaged by upstream, so it's not part of CentOS proper. However, there are repositories that build packages to run under CentOS which aren't part of upstream or CentOS, such as EPEL: http://fedoraproject.org/wiki/EPEL yumex in particular, if you're partial to it, is available in various flavors, all ready to install. > Mail problem: > > I've been using a fetchmail -> procmail system here for years to unload > kmail from its mail pulling duties, making it many times easier to use. > > Adjuncts to that are spamassassin, clamav, and mailfilter > > There appears not to be any 64 bit builds of clamav and mailfilter. Many 32-bit packages run just fine on 64-bit CentOS via the use of 32-bit libraries. A quick look through a machine that has both stock CentOS software repositories and EPEL enabled shows that there are packages for all of the above, except mailfilter. Mailfilter appears to be somewhat Debian-centric and the Debian-distro-derivatives all seem to have updated packages. I've never used it, even though I'm a fan of both "camps" for various things. Looking it over, it looks like it utilizes POP to go take a look at mail and dump spam prior to the POP transfer of whatever is left over? Honestly, most folks have moved on to IMAP, long ago... IMHO. YMMV. 
The advent of large data pipes, even in residential service in most areas, and effective local filtering probably means that mailfilter is marginalized non-mainstream software, at best, these days. Doing some quick Googling, mailfilter doesn't seem very popular at all with the RedHat-derivative camp. The only distro that seems to have ever had it pre-packaged is Mandriva. You might look at whatever changes they made to make their x86_64 package. It's not in Fedora either... https://admin.fedoraproject.org/pkgdb/acls/list/m*?_csrf_token=1320052a8a44a38e84b472e63f9cba4db006ea38 Nate
Re: [CentOS] Swap Partition in CentOS 5.8
On Jun 18, 2012, at 5:55 PM, Woodchuck wrote: [snipped swap/pagefile list of statements] > Thanks to the list for any answers! Nothing in your list was phrased as a question. What are you trying to determine? If the list of statements was accurate? Nate
Re: [CentOS] system date using ntp client is drifting
On Jun 4, 2012, at 2:42 PM, John R Pierce wrote: > On 06/04/12 1:27 PM, Nate Duehr wrote: >> Additionally, ntp will refuse to sync if it's too far out. Use ntpdate >> [server IP] to force the issue first. If the machines have a bad CMOS >> battery and won't keep time, ntpdate package can be configured to force time >> sync (which is a bad hack) at boot-time. > > the standard EL5 /etc/rc.d/init.d/ntpd script does this automatically, > unless its been disabled via /etc/sysconfig/ntpd Interesting. I hadn't looked at the scripts in a while. The reason no one ever did it "in the old days" was for fear of an NTP server going out of whack. ntpdate should be used sparingly and with knowledge that one is doing it... automating it is usually a bad idea. (Especially by default.) As I said in the original reply, it's a nasty hack. IMHO, installing the ntpdate package should just get the tool installed, no automation. If it's installing automated setting by default, that's not right. One should have to turn ON automated ntpdate at startup/shutdown after understanding the risk. >> After getting the clock in sync, "hwclock --systohc" to push it into the >> CMOS clock. > > setting SYNC_HWCLOCK=yes in said above config file will do this, too. Was just giving him the way to do it in real time. The config file only does it at startup/shutdown, last I looked. Haven't looked recently. Sounds like there have been some goofy decisions made somewhere along the line for ntp and ntpdate... an NTP server can be compromised and date/time changed across an environment (or more likely, it can have some type of failure itself...) and the old tenets of ntp were there for a reason... 1. ntp doesn't sync if it's too far out of whack... 2. ntpdate should be operated by hand... not automated... Oh well. Those unwilling to learn from past mistakes are doomed to repeat them, right? (GRIN!) Nate
Re: [CentOS] system date using ntp client is drifting
On Jun 4, 2012, at 1:59 PM, Kaushal Shriyan wrote: > Hi, > > I have a set of servers whose system time is drifting. I am running ntp > client on CentOS 5.8. My config is here -> http://fpaste.org/s55U/ > Anything I am missing? Fire up ntpq, and type "peers" to see if they're seeing their upstream servers. If not, hunt down the firewall or other filter problem. Additionally, ntp will refuse to sync if it's too far out. Use ntpdate [server IP] to force the issue first. If the machines have a bad CMOS battery and won't keep time, the ntpdate package can be configured to force a time sync (which is a bad hack) at boot time. After getting the clock in sync, "hwclock --systohc" will push it into the CMOS clock. Nate
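To put the steps above in concrete terms, a sketch using the stock CentOS 5 ntp/ntpdate tooling (the server name is a placeholder; substitute your own):

```
# Inside ntpq, "peers" lists the upstream servers; a '*' in the first
# column marks the server currently selected for synchronization.
ntpq -p

# If the clock is too far off for ntpd to correct, step it once by hand
# (stop ntpd first so the two don't fight over the clock):
service ntpd stop
ntpdate pool.ntp.org
service ntpd start

# Then push the corrected system time into the CMOS clock:
hwclock --systohc
```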
[CentOS] Documentation question
Apologies if this is a FAQ or it's simply due to a lack of volunteers. I'm curious why the CentOS Documentation here: http://www.centos.org/docs/5/ ...stops at CentOS 5.5. And also why there is no: http://www.centos.org/docs/6/ *** Curiosity killed the cat. *** Thanks, Nate
Re: [CentOS] screen brightness changes depending on which application is run
On Apr 11, 2012, at 11:29 PM, allan wrote: > Is your monitor an LED type? It could have dynamic brightness. My tv does the > same thing - also annoying. I have also seen that "feature" on an HP monitor with LED backlight. I believe it was a feature one could turn off in the monitor's built-in OSD menu. Nate
Re: [CentOS] screen brightness changes depending on which application is run
On Apr 8, 2012, at 12:18 PM, Jeff Cen wrote: > Hi, > > I found my LCD screen brightness increases when I use firefox and other white > background editors and screen brightness decreases when I use dark background > applications, such as terminals with black background. The change in screen > brightness depending on the applications has been annoying. > > > My centos version is ver 5.7 64 bit. Has anyone seen a similar problem or > have a solution for that? Thanks in advance. > > Jeff Jeff, Does your machine have an ambient light sensor? Light from the monitor, reflecting off of you, will often trigger changes in the amount of light the ambient light sensor is seeing. This happens most with large changes, like switching from a bright white (browser) background to a dark one, just as you describe in your symptoms. Nate
Re: [CentOS] Faster and faster
On Mar 27, 2012, at 2:08 PM, Piero wrote: > Thanks for your hard work and great product, Agreed, other than that I'm struggling to keep all the machines up! (GRIN!) Nate
Re: [CentOS] Can anyone talk infrastructure with me?
On Jan 26, 2012, at 4:59 PM, John R Pierce wrote: > On 01/26/12 3:43 PM, Gordon Messmer wrote: >> Yes, the cost for a T1 will seem very high. It is antiquated telco >> tech. T1s are generally very reliable, but very very slow. >> >> 1.5Mbps is not faster than 40Mbps. There's nothing hidden in the way >> they advertise speeds. >> >> DSL and DOCSIS technologies have advanced and matured over the last >> couple of decades. T1 has not. A T1 connection is the same now as it >> has always been. > > a modern T1 (aka DS0) is likely delivered to the end premises over HDSL > using 2 pairs. while its slower than those consumer oriented > technologies you mention, its far more reliable and has a guaranteed SLA > (service level agreement) you won't get from DOCsis (cable) or end user > ADSL, and tends to have very deterministic latencies... Wow, that's just ... wrong. There's nothing to "mature" in a T1. It's a telco transport standard that is well-known, and utilized everywhere as part of the Bell System standards for multiplexing and demultiplexing from smaller circuits to larger and back down. Ratified by the ITU for decades. "T1" is a channelized synchronous telecommunications circuit type first designed in the late 60s, updated in the 70s. 1.544 Mb/s on the wire; 1.536 Mb/s of payload after the framing bits are removed. "DS0" is a sub-channel of a T1 when broken up into frames. Extended SuperFrame being the typical method these days. 24 of them at 64K per channel. "HDSL" is a completely different technology than T1. "DOCSIS" is the name of the standard utilized to deliver data services over a Cable Modem. "ADSL" is a single-pair high-speed connection that's very distance-limited from the origination point. "SLA" is a Service Level AGREEMENT. The key word being AGREEMENT. Your businesspeople are free to negotiate with any provider of ANY of the above technologies for anything they're willing to pay for. TYPICAL SLA's might be as stated above, but it's a contract... negotiate whatever you like. 
What you might want SLA's on when ordering IP bandwidth: - Maximum CONTINUOUS data rate upstream AND downstream simultaneously, and what thresholds are considered an OUTAGE on the SLA even if traffic is still flowing. - Latency from your end of the circuit to a known point will never EXCEED "X" amount or it will be considered an outage under your SLA. - Whether or not an UPSTREAM routing outage will be considered an SLA OUTAGE by your local carrier/ISP in terms of your bill. (In other words, how many backbone connections do they have and can they route around a problem, or are you stuck waiting for their one piddly edge router to be fixed in the case of fly-by-night providers.) - In the case of a cable cut, are trucks rolled 24/7, or only during business hours? Etc etc etc... there's more. Read up. SLA's are themselves a playground for lawyers and businesspeople to dicker over. Now the real world: - Any company relying on a single IP connection via a single route... is so far down the food chain they're not going to get service during a larger scale outage anyway. And... remember... - An SLA just gives you a refund of your money for the outage. It doesn't keep you in business if the service provider doesn't keep their side of the bargain. - If you have something that must be connected to the Internet 24/7 or you're out of business... buy more than one connection. An SLA won't matter at all when the backhoe cuts the only path out of your building. - Or... host it in a data center that has far more than one backbone connection via more than one physical route. Let's not mix all the technical details up with the business ones. That posting was the most misleading post I've read in quite a while, and shows a lot of the misconceptions out there. *** ANY of the above technologies can deliver a certain number of bits, at a certain latency, a certain direction, across a certain type of physical media, to some network at the other end. 
*** Whether that upstream provider has oversubscribed upstream connectivity, has latency issues, doesn't respond to fix their circuits in the middle of the night, pays you back for outages, scratches your back at the beach after signing that multi-million dollar bandwidth contract with giant SLA attached large enough to fund their entire fleet of trucks for a year... That's all up to the contract...
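The channel arithmetic behind those T1 numbers is easy to verify (a quick sketch):

```shell
# A T1 carries 24 DS0 channels of 64 kb/s each, plus 8 kb/s of framing:
payload=$(( 24 * 64 ))            # 1536 kb/s usable
line_rate=$(( payload + 8 ))      # 1544 kb/s = the familiar 1.544 Mb/s
echo "payload: ${payload} kb/s, line rate: ${line_rate} kb/s"
```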
Re: [CentOS] CentOS CVE "database"?
Appears it's back up, just as a follow-up. On Oct 8, 2011, at 12:01 AM, Nate Duehr wrote: > Was working on a project tonight to document CVE fixes applied to servers, > and noted that RedHat has completely jacked up their website. > > In the past, I've usually just used their website for links to their CVE > list, as well as links to their Errata to look up specifics for CentOS > machines. > > It sure looks like these links are either permanently gone from the public > pages to be hidden internally only available to Subscribers, or... > > RedHat's Marketing folks have completely destroyed what was once a valuable > information-filled website. > > Either way... the question now becomes... > > Is there something similar to RedHat's CVE listings by year and number hosted > by anyone in the CentOS community or by CentOS itself for CentOS? I haven't > had much luck with my GoogleFu tonight. > > Thanks, > Nate
[CentOS] CentOS CVE "database"?
Was working on a project tonight to document CVE fixes applied to servers, and noted that RedHat has completely jacked up their website. In the past, I've usually just used their website for links to their CVE list, as well as links to their Errata to look up specifics for CentOS machines. It sure looks like these links are either permanently gone from the public pages to be hidden internally only available to Subscribers, or... RedHat's Marketing folks have completely destroyed what was once a valuable information-filled website. Either way... the question now becomes... Is there something similar to RedHat's CVE listings by year and number hosted by anyone in the CentOS community or by CentOS itself for CentOS? I haven't had much luck with my GoogleFu tonight. Thanks, Nate
[CentOS] Intel ICH10R on CentOS 5.4
Hey there.. I was wondering if anyone could share experiences they have had with the Intel ICH10R SATA controller? I tried looking around but all I could find were RAID references. I have no interest in using the RAID functionality, just basic SATA JBOD. Wondering if there are any gotchas for performance, drivers, quality etc. I normally don't touch anything that is not hardware RAID, though this is a special project.. And even if I get demo gear I won't have enough time to put it through real paces (likely need weeks) before a decision needs to be made. I don't see anything that is causing alarm but just curious if anyone else has experiences. thanks nate
Re: [CentOS] clients obtaining dhcp addys slowly
aurfal...@gmail.com wrote: > Hi all, > > This is unusual for me to observe. > > I've a dhcp server running on Centos 5.3 and it takes a while to > answer clients asking for an address. You happen to be running STP at the edge switch ports? If so then disable it. nate
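For anyone hitting this: on Cisco gear the usual fix is PortFast on access ports, so the port starts forwarding immediately instead of sitting in the STP listening/learning states (roughly 30 seconds) while the client's DHCP requests time out. A hypothetical IOS snippet (the interface name is a placeholder):

```
interface FastEthernet0/1
 spanning-tree portfast
```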
[CentOS] tracing the source of a sector error from smartd
SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. --- /var/log/messages has tons of entries that say Device: /dev/sdc, 1 Currently unreadable (pending) sectors Since it's only 1 disk out of 72 I suspect it's the disk at fault rather than something with smartd. Any ideas? thanks nate
Re: [CentOS] Benchmark Disk IO
Matt Keating wrote: > What is the best way to benchmark disk IO? > > I'm looking to move one of my servers, which is rather IO intense. But > not without first benchmarking the current and new disk array, to make > sure this isn't a full waste of time. You can do a pretty easy calculation based on the #/type of drives to determine the approx number of raw IOPS that are available. Since it's I/O intensive you're probably best off with RAID 1+0, which further simplifies the calculation; parity-based RAID can make it really complicated.
7200 RPM disk = ~90 IOPS
10000 RPM disk = ~150-180 IOPS
15000 RPM disk = ~230-250 IOPS
SSD =
Otherwise, Iozone is a neat benchmark. SPC-1 is a great benchmark for SQL-type apps, though it's very high end and designed for testing full storage arrays, not a dinky server. nate
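As a sketch of that back-of-the-envelope calculation (the disk count and per-spindle figure are assumptions for illustration; in RAID 1+0 every spindle serves reads, while each logical write costs two physical writes, one per mirror side):

```shell
disks=8        # hypothetical RAID 1+0 set
per_disk=150   # ~10k RPM spindle, per the figures above
read_iops=$(( disks * per_disk ))       # all spindles serve reads
write_iops=$(( disks * per_disk / 2 ))  # each write is mirrored
echo "raw reads: ${read_iops} IOPS, raw writes: ${write_iops} IOPS"
```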
Re: [CentOS] iostat on multipath disks
Wahyu Darmawan wrote: > Hi Experts, > > How can I get info for my disks on multipath devices? > I mean, usually I use 'iostat' for internal disks, however I need to know > the status of my multipathing devices, because I'm monitoring a stress test for my > application.. iostat works fine for devices running on top of device-mapper MPIO, I run it often.. nate
Re: [CentOS] iSCSI / GFS shared web server file system
James wrote: > That was my impression from reading through the docs anyways. I've never > set it up. my impression is GFS requires shared storage, I believe there are ways around it, but take a look at this for setting up GFS for use with NFS http://sources.redhat.com/cluster/doc/nfscookbook.pdf I think it'd be much easier if you just replicate the data between the servers with rsync or something. GFS sounds like way overkill for a couple of web servers. nate
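If you go the rsync route, a minimal one-way sync might look like this (hostnames and paths are placeholders; --delete removes anything on the target that's gone from the source, so try a dry run with -n first):

```
# Push the docroot from this box to web2, preserving permissions
# and timestamps, deleting files that no longer exist on the source.
rsync -az --delete /var/www/html/ web2:/var/www/html/
```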
Re: [CentOS] OT: Caching synchronous writes
John R Pierce wrote: > Ray Van Dolson wrote: >>> I think what you want is a proper storage array with mirrored write >>> cache. >>> >> >> Which is what we have with ZFS + SSD-based ZIL for far less money than >> a NetApp. >> > > not unless you have a pair of them configured as an active/standby HA > cluster, sharing dual port disk storage, and some how (magic?) mirroring > the cache pool so that if the active storage controller/server fails, > the standby can take over wthout losing a single write. > OT too but really thought this was a good post/thread on ZFS http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg18898.html "ZFS is designed for high *reliability*" [..] "You want something completely different. You expect it to deliver *availability*. And availability is something ZFS doesn't promise. It simply can't deliver this." -- nate
Re: [CentOS] setting up 3 network cards
Les Mikesell wrote:
> That's only necessary if he wants it to act as a router. The machine
> itself should be able to access 3 subnets simultaneously and route
> through routers on each.

Oh OK, I must've read the post wrong; I haven't slept much this week.

nate
Re: [CentOS] setting up 3 network cards
Jerry Geis wrote:
> Is there something "special" about setting up 3 network cards that may
> help? What should I look into?

Enable IP forwarding?

nate
Re: [CentOS] OT: Caching synchronous writes
Ray Van Dolson wrote:
> The "delayed allocation" features in ext4 (and xfs, reiser4) sound
> interesting. Might give a little performance boost for synchronous
> write workloads

Doesn't delayed allocation defeat the purpose of a synchronous write? I
think what you want is a proper storage array with mirrored write cache.

nate
Re: [CentOS] Logserver recommendations
rai...@ultra-secure.de wrote:
> I'd like to hear of people who have used both Splunk and/or prelude in an
> environment with, say, 500 voice a few opinions.

I use Splunk with a few hundred systems and it works alright. Using it
well can take some time, though, creating the reports and such, but it
does make searching and reporting very easy.

Splunk licenses based on the amount of indexed data it collects per day,
so you should know how much data you're going to index before you buy,
and of course give yourself plenty of headroom.

I have a friend over at T-Mobile, which is one of the biggest Splunk
customers in the world; they do something well over 1TB of new data per
day, and it works OK for them (off the record it sucks, but it sucks FAR
less than everything else they have tried).

nate
Re: [CentOS] 12-15 TB RAID storage recommendations
Eero Volotinen wrote:
> err. you can get hitachi sms 100 with sata drives for about 9000e
> including 3 year maintenance.

Yes, and you get what you pay for with that.. As I mentioned earlier, I
won't go back to crap storage after seeing the light. Even the Hitachi
AMS 2k series doesn't compare.

nate
Re: [CentOS] 12-15 TB RAID storage recommendations
Joseph L. Casale wrote:
>> Unless you have a good storage system..
>>
>> a blog entry I wrote last year:
>> http://www.techopsguys.com/2009/11/24/81000-raid-arrays/
>>
>> Another one where I ripped into Equallogic's claims:
>> http://www.techopsguys.com/2010/03/26/enterprise-equallogic/
>
> Lol, Nate...
> The op was looking at spending a few grand, not a few million
> you show off:)

Million? Nowhere close to that. You can get a 12-15TB (raw) system in the
~$130-150k range (15k RPM); if you want SATA instead, say $80k. A few
million gets you a world-record-breaking array with more than a thousand
15k RPM drives, loaded with all the software they have. The capabilities
of the system are the same from the low end to the high end; the only
real difference is scale.

nate
Re: [CentOS] 12-15 TB RAID storage recommendations
John R Pierce wrote:
> well, IF your controller totally screams and can rebuild the drives at
> wire speeds with full overlap, you'll be reading 7 * 2TB of data at
> around 100MB/sec average and writing the XOR of that to the 8th drive in
> an 8-spindle raid5 (14TB total). just reading one drive at wirespeed is
> 2,000,000MB / 100MB == 20,000 seconds, or about 5.5 hours, so that's
> about the shortest it possibly could be done.

More likely you're looking at 24+ hours, because really no disk system is
going to read your SATA disks at 100MB/second during a rebuild. If you're
really lucky, perhaps you can get 10MB/second. With the fastest RAID
controllers in the industry, my own storage array (which does heavy
amounts of random I/O) averages about 2.2MB/second per SATA disk, with
peaks at around 4MB/second.

Our previous storage array averaged about 4-6 hours to rebuild a RAID 5
12+1 array with 146GB 10k RPM disks, on an array that was in excess of
90% idle. Rebuilding a 400GB SATA-I array often took upwards of 48 hours.

nate
Re: [CentOS] 12-15 TB RAID storage recommendations
Joseph L. Casale wrote:
> Rebuild times especially on busy arrays with large discs take lots of
> time...

Unless you have a good storage system..

A blog entry I wrote last year:
http://www.techopsguys.com/2009/11/24/81000-raid-arrays/

Another one where I ripped into Equallogic's claims:
http://www.techopsguys.com/2010/03/26/enterprise-equallogic/

Just checked my array again: nearly 200,000 RAID arrays on it, which
makes for a massively parallel many:many RAID rebuild and fast recovery
times with *no* service impact. Of course this stuff may be out of the
OP's budget, but just keep in mind that there are such systems on the
market. IBM XIV is another such system, though its scalability is too
limited to be useful IMO (~180 drives max).

You'd have to pull my toenails out with a rusty pair of pliers before I'd
go back to crap storage.

nate
Re: [CentOS] Question about dhcpd.leases
Niki Kovacs wrote:
> How come these machines never appear in dhcpd.leases? Do the
> respective leases only find their way into /var/log/messages?

I'd expect that because the addresses are hard coded, there is no reason
to keep track of them. The leases file, from what I understand, is used
to determine which IPs are in use so dhcpd can opt to hand out IPs that
are not in use. On top of that, dhcpd pings the unused IPs to make sure
it doesn't hand out IPs that may have been taken without authorization.

For hard-coded addresses the DHCP response will be the same every time,
and if you have duplicate MAC addresses on your layer 2 network you have
bigger things to worry about.

nate
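For reference, a hard-coded (fixed-address) entry in dhcpd.conf looks something like this; the host name, MAC, and IP are of course made up:

```
host printer1 {
    hardware ethernet 00:16:3e:aa:bb:cc;   # the client's MAC address
    fixed-address 192.168.1.50;            # always handed this IP
}
```

Clients matched by such a host block get their answer computed from the config, which is why nothing for them ever needs to land in dhcpd.leases.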
Re: [CentOS] Release 6?
MHR wrote:
> but I just don't like to do it.

30 systems? Yoik! Out of ~300..

> As for moving from 4 to 5, that's not a trivial thing at all - and
> it's not an "upgrade" per se unless you have LOTS of faith in the
> process. I always reinstall across releases, and that's a royal pain
> (though usually worth it for the new features, like a newer GNOME and
> all that goes with it).

Funny thing is, for me the hardest part is getting the downtime to do the
work. The OS reinstall is easy: the apps already support it, and cfengine
automatically configures the systems with everything they need. I can
re-install a system and get the apps re-installed in ~30 minutes, but
it's a real headache for the apps guys to take the apps down and/or move
customers off those systems to other systems. And I'm not in *that* big
of a hurry; I have other things I am working on, of course..

I came across a system a few days ago that had an uptime of over 1000
days... here it is:

[r...@us-mon001 ~]# uptime
19:22:02 up 1012 days, 4:24, 1 user, load average: 0.04, 0.20, 0.26
[r...@us-mon001 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux AS release 4 (Nahant Update 1)

Part of me doesn't want to re-install it.. (I have no immediate plans
to..) You know you're missing a kernel update or two when your uptime
gets over 3 years.

> BTW, certain specific upgrades would be really nice. For one thing,
> Google's Chrome browser is now available for Linux, but you have to
> have a newer version of (I think it was) gtk that's not available on
> RH/C 5 at all - yet.

Chrome.. Google. While I'm sure it's a nice browser, I don't trust Google
with my information..

> Ah, well, patience in this particular arena pays off - we get the best
> support and solid reliability for free, so a little wait, or even a
> long one, is worth it in my book.

Me too..

nate
Re: [CentOS] Release 6?
MHR wrote:
> On Thu, Apr 1, 2010 at 9:25 AM, nate wrote:
>>
>> I *just* finished upgrading to CentOS 5.4 6 days ago.
>>
>
> How many people got trampled in the rush?

You might be surprised how many outages it takes to co-ordinate such an
upgrade in a medium-large environment (and nobody, including me, likes to
take *everything* down at once, though we did have such an outage a few
weeks ago to move a storage array; I upgraded about 30 systems that day).
The fully redundant systems are easy to upgrade, of course, but there are
lots of systems that are not fully redundant (and can't be made as such
due to application design).

I tried doing some online upgrades for some of our more important systems
(minus the reboot for the kernel), but something in the update wreaked
havoc on our NFS cluster; the systems are very active doing NFS stuff
24/7. The NFS cluster recovered automatically, but each time it took
about 3 hours. I don't know what the upgrade might have restarted that
would have impacted NFS activity. Since the upgrade there have been no
repeats of the issue, but during the upgrade, upgrading active NFS
clients (while they were doing stuff) caused immediate headaches on the
cluster within 30 minutes.

I suspect it's the first OS "upgrade" my company has done, at least on
Linux. Looking through my inventory of systems, these are getting a bit
stale:

RHEL3/4:
 1  AS release 3 (Taroon Update 3)
 5  AS release 4 (Nahant Update 1)
 6  AS release 4 (Nahant Update 3)
36  AS release 4 (Nahant Update 4)
 1  AS release 4 (Nahant Update 6)

I don't count RHEL4 -> CentOS 5 as an upgrade since it is a complete
re-install. For the most part those will get upgraded when the systems
are retired, I think.

nate
Re: [CentOS] Release 6?
Niki Kovacs wrote:
> Recently a friend of mine complained his Debian stable system was "too
> conservative", given the somewhat outdated software. I told him not to
> mind, since Debian is bleeding edge compared to my OS of choice.

Maybe your friend needs another distro; of course, everyone knows it's
conservative for a reason. I've been a Debian user for 12 years now, and
still run it exclusively on my own systems, though I use CentOS/RHEL for
"work" stuff. For those 12 years I've run stable throughout, except for
about a year in ~2001 when I ran testing for a little while. Even on my
desktops I run stable. If the hardware is too new (desktops/laptops
only), I run Ubuntu since it has a similar package selection.

I *just* finished upgrading to CentOS 5.4 6 days ago.

nate
Re: [CentOS] Cron and Cluster
Joseph L. Casale wrote:
> I need an idea on how to accomplish this: I have a cluster that I only
> want to run cron jobs on when its active.

What kind of cluster? The term "cluster" can mean almost anything these
days.

nate
Re: [CentOS] Disable specific LUN on a SCSI bus
Lorenzo Quatrini wrote:
> This is what I'm doing right now; but I was searching for a way of
> doing it earlier on startup.
> I'm playing with a non partitionable DS4300 FC, and I would like to
> avoid LUN contention.

Since it appears to be a SAN of sorts, another option may be to use the
blacklist setting for dm-multipath; or, if it's a fibre-attached system,
you may be able to mask the LUN at the controller itself using the
vendor's tools for the controller.

nate
Re: [CentOS] Disable specific LUN on a SCSI bus
Lorenzo Quatrini wrote:
> Hi all,
> do you know if there is a way at boot time to disable specific LUNs on a
> SCSI bus of a particular controller?

What do you need to do this for? How about just echoing the command to
/proc/scsi/scsi:

echo "scsi remove-single-device X X X X" > /proc/scsi/scsi

Get the values for the various X's from /proc/scsi/scsi, e.g.:

Host: scsi0 Channel: 01 Id: 00 Lun: 00
  Vendor: MegaRAID  Model: LD 0 RAID1 69G  Rev: 521S
  Type: Direct-Access    ANSI SCSI revision: 02

would be:

echo "scsi remove-single-device 0 1 0 0" > /proc/scsi/scsi

nate
Re: [CentOS] VMWare vs. KVM - recommendations?
MHR wrote:
> Recommendations?

What are you going to use it for? If you're looking for something to act
like VMware Workstation, e.g. primarily for running a desktop OS on top
of X11, then I would stick to VMware Server 2, or more ideally VMware
Workstation; it has a TON of desktop optimization thingies which may be
useful if you're running XP as a guest.

I use VMware Server 2 on a pair of Debian systems (I only use CentOS at
work, and there I only use ESX as my hypervisor), without any issues. The
system I'm on now uses VMware Server 2 to run another copy of Debian as a
VPN client to my company (the VPN software screws with the routing
table). No issues going to 1080p resolution (running on a 47" Philips
1080p TV). My other VMware Server 2 system runs that because the hardware
is too old to run anything better (circa 2004). No problems..

nate
Re: [CentOS] Kickstart 8TB partition limit?
lheck...@users.sourceforge.net wrote:
> No filesystem is specified because we want to use xfs, which kickstart
> does not support out of the box. This is under 5.2, but the 5.3/5.4
> relnotes do not indicate that this problem has been fixed. Or has it?

Partition manually using %pre or %post?

nate
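A hedged sketch of what that could look like in a kickstart file; the device name /dev/sdb and the mount point are made up, and it assumes mkfs.xfs (xfsprogs) is available in the install/post environment:

```
%pre
# carve a single partition covering the whole (hypothetical) data disk;
# GPT because the disk is larger than MS-DOS labels handle
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%

%post
# anaconda won't create xfs for us, so make the filesystem after install
mkfs.xfs -f /dev/sdb1
echo "/dev/sdb1  /data  xfs  defaults  0 0" >> /etc/fstab
```

The trade-off is that anaconda never sees this partition at all, so keep it off the disk the installer is partitioning.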
Re: [CentOS] Multipath and iSCSI Targets
Joseph L. Casale wrote:
> Just started messing with multipath against an iSCSI target with 4 nics.
> What should one expect as behavior when paths start failing? My lab
> setup was copying some data on a mounted block device when I dropped 3
> of 4 paths and the responsiveness of the server completely tanked for
> several minutes.
>
> Is that still expected?

Depends on the target and the setup. Ideally, if you have 4 NICs you
should be using at least two different VLANs, and since you have 4 NICs
(I assume for iSCSI only) you should use jumbo frames.

With my current 3PAR storage arrays, each system has 4 iSCSI targets but
usually 1 NIC; at my last company (same kind of storage) I had 4 targets
and 2 dedicated NICs (each on its own VLAN, for routing purposes and
jumbo frames). In all cases MPIO was configured for round robin, and
failed over in a matter of seconds.

Failing BACK can take some time depending on how long the path was down.
At least on CentOS 4.x (not sure about 5.x) there were some hard-coded
timeouts in the iSCSI system that could delay path restoration for a
minute or more, because there was a somewhat exponential back-off timer
for retries. This caused me a big headache at one point during a software
upgrade on our storage array, which will automatically roll itself back
if all of the hosts do not re-login to the array within ~60 seconds of
the controller coming back online.

If your iSCSI storage system uses active/passive controllers, that may
increase fail-over and fail-back times and complicate things; my arrays
are all active-active.

nate
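For reference, a round-robin policy in /etc/multipath.conf looks roughly like this. The WWID and alias are made up, and exact option names vary a bit between device-mapper-multipath versions, so treat this as a sketch:

```
multipaths {
    multipath {
        wwid                  360002ac000000000000000000001234
        alias                 iscsi-lun0
        path_grouping_policy  multibus     # all paths in one active group
        rr_min_io             100          # I/Os per path before switching
        failback              immediate    # restore a path as soon as it returns
    }
}
```

With `multibus`, losing 3 of 4 paths should just reduce bandwidth once the dead paths are marked failed; the multi-minute stall usually comes from the SCSI/iSCSI timeouts that have to expire first.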
Re: [CentOS] MySQL max clustering package?
John R Pierce wrote:
> that may be OK for an order processing system, but it could be a serious
> problem for something like a banking system where you are dispersing
> cash.

Hopefully no such systems run on MySQL anyways :)

nate
Re: [CentOS] MySQL max clustering package?
Neil Aggarwal wrote:
> Our goal is to create redundancy. We want either system to be able
> to work if the other is not available. Designating one database
> as a write db and the other as a read defeats that.

Depending on the requirements, splitting reads from writes can greatly
improve scalability though, potentially using something like MySQL Proxy
and perhaps even a load balancer. The write systems can still be
clustered / multi-master replicated, but if the bulk of your work is
reads, then load balancing many independent databases for reads can
improve performance and even improve availability. It really depends on
the workload.

nate
Re: [CentOS] MySQL max clustering package?
JohnS wrote:
> I have always heard the replication of MySQL could not keep up with lots
> of writes.

I don't think MySQL replication has an issue with the number of writes,
at least with regular replication (I can't speak to the multi-master
stuff). All replication does is have the master send each write statement
to the other system(s) to execute, so provided the systems you're
replicating to are at least as fast as the system doing the real work,
the others should have no trouble keeping up.

nate
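A minimal master/slave configuration sketch; the server IDs and log name are just the usual conventions, nothing special:

```
# master's /etc/my.cnf
[mysqld]
server-id = 1
log-bin   = mysql-bin    # binary log the slave replays from

# slave's /etc/my.cnf
[mysqld]
server-id = 2            # must differ from the master's
```

The slave is then pointed at the master with `CHANGE MASTER TO MASTER_HOST=...` followed by `START SLAVE`, and `SHOW SLAVE STATUS` reports how far behind it is.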
Re: [CentOS] CPU Upgrade
Kevin Kempter wrote:
> OS? Or will CentOS simply see the second chip once I reboot and just
> work?

No changes needed; you're already running an SMP kernel since you have 4
cores now, so CentOS will just see the second chip after a reboot.

nate
Re: [CentOS] MySQL max clustering package?
jchase wrote:
> Can anyone tell me what the scoop is on this? I read that CentOS was
> releasing the enterprise/cluster capable MySQL in the CentOS-Plus repos,
> but I don't see it there. I don't want to use the mysql.com packages (if
> they are even available -- didn't they stop supplying the binaries for
> this to the public)?
>
> I'm trying to set up failover, load balancing clustering of MySQL. I'm a
> bit confused on what the current story is for doing this.

Last I recall, MySQL Max wasn't really MySQL but was based on some other
DB (SAP DB?). MySQL by itself has built-in "clustering", though there can
be significant limitations in it depending on your requirements:
http://www.mysql.com/downloads/cluster/

nate
Re: [CentOS] Whether need to run CentOS Certification test suite to get the CentOS logo
zhang hualan wrote:
> But I'm still not sure whether CentOS has a certification test suite
> that we need to run first, the way we can get the Red Hat "hardware
> certified" logo if we pass the Red Hat certification test suite, e.g.
> V7.

I say forget about certifying specifically with CentOS; certify with
RHEL. CentOS strives to be RHEL-compatible. If your product works with
RHEL and not CentOS, then it's a bug in CentOS (unless you're someone
like Oracle, which specifically looks for RHEL during installation).

nate
Re: [CentOS] Installing additional software from CD
Al Sparks wrote:
> I installed CentOS w/o Gnome, or X-Windows.
>
> I'd like to install that stuff from the CD. I don't want to try and
> install individual RPMs. What can I run off the CD that allows me to use
> the standard package manager (or whatever it's called)?

From /etc/yum.repos.d/CentOS-Media.repo:

# CentOS-Media.repo
#
# This repo is used to mount the default locations for a CDROM / DVD on
# CentOS-5. You can use this repo and yum to install items directly off
# the DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c5-media [command]
#
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c5-media [command]
Re: [CentOS] Motherboards for HPC applications
Pasi Kärkkäinen wrote:
> Wow, pretty cool system. Can you tell about the pricing?

I don't think I can, but it is competitive with Dell and HP as an
example, while the innovation put into the CloudRack is far beyond
anything Dell or HP offer to mere mortals. The closest HP offers is the
"SL" series of systems, which are pretty decent, though they offer
roughly half the density of SGI for our particular application:
http://h10010.www1.hp.com/wwpc/us/en/sm/WF02a/15351-15351-3896136.html?jumpid=re_R295_prodexp/busproducts/computing-server/proliant-sl-scalable-sys&psn=servers

Dell is coming out with something new soon:
http://www.theregister.co.uk/2010/02/03/dell_cloudedge/

I've seen them, and honestly they aren't all that creative; very similar
to the Supermicro Twin. They are decent for CPU- and memory-intensive
stuff, but not as good for (local) I/O-intensive work. They seem pretty
proud of these systems, though considering Supermicro has had similar
stuff on the market for quite some time now, there isn't much to get
excited about IMO.

SGI (formerly Rackable) has been pretty aggressive in patenting their
designs, which is probably what led vendors like Supermicro to build
their "Twin" systems:
http://www.sgi.com/company_info/newsroom/press_releases/rs/2007/05082007.html

Dell has a custom design division which can probably do some pretty crazy
things, but I'm told they have a ~1,500 server minimum to get anything
from that group.

nate
Re: [CentOS] Motherboards for HPC applications
Gordon McLellan wrote:
> If your application can't support GPU based processing, I think
> Peter's suggestion is most fitting. Load up a rack of dual socket
> 5520 servers from Dell or HP and then save some money by building your
> own shared-storage to feed the cluster. The big vendors crank out
> very inexpensive dual socket xeon servers, the only area they really
> seem to be price gouging in right now is storage.

For me, I have been working on spec'ing out a "HPC" cluster to run Hadoop
on large amounts of data, and fell in love with the SGI CloudRack C2. I
managed to come up with a configuration that had roughly 600 CPU cores,
1.2TB of memory and 300 1TB SATA disks in a single rack, and consumes
~16,000 watts of power, with 99%-efficient rack-level power supplies, N+1
power redundancy, and rack-level cooling as well. Very cost effective
too, at least for larger scale deployments, assuming you have a data
center that can support such density:
http://www.sgi.com/products/servers/cloudrack/cloudrackc2.html

My current data center does not support such density, so I came up with a
configuration of 320 CPU cores, 640GB memory, and 160x1TB disks that fits
in a single 24U rack, consumes roughly 8,000 watts (208V 30A 3-phase) and
weighs in at just under 1,200 pounds (everything included). Systems come
fully racked, cabled and ready to plug in.

Systems are built with commodity components wherever possible
(MB/RAM/CPU/HD); the only custom stuff is the enclosure, cooling, and
power distribution, which is how they achieve the extreme densities and
power efficiency.

nate
Re: [CentOS] compilers a security risk?
Geoff Galitz wrote:
> Making the bar higher, even in little increments, is a basic tenet of
> systems security. Never dismiss the power of baby steps.

Keep in mind diminishing returns with those baby steps.. Of the ~500-600
systems I've worked on over the past 10 years, the only ones that were
confirmed to be compromised were ones that were placed directly on the
internet (not by me) and weren't kept up to date with patches. I think I
worked on 3 such systems.

- Keep up to date on patches.
- If on the internet, lock ssh down to ssh key auth only, and try to run
  a tight firewall on other ports.
- Don't allow untrusted local accounts.
- Run only well-tested programs (especially when it comes to webapps)
  with a good track record wherever possible.
- If at all possible, do not put any server directly on the internet (98%
  of my systems reside behind load balancers, which is a form of firewall
  since only ports that are specifically opened are allowed through).

To date I haven't needed things like NIDS/HIDS (too many false positives)
or things like SELinux (PITA). After this long, and so many systems, I
don't think luck plays a big role at this point. The servers I manage for
my employer receive roughly 2 billion web hits per day.

If you can manage those things, the chance of being compromised is
practically zero, barring some remote evil organization that has bad guys
specifically out to get you.

nate
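As a sketch, the "tight firewall" point can be expressed as a default-deny iptables ruleset like this (the open ports are just examples; this is the /etc/sysconfig/iptables format that RHEL/CentOS loads with iptables-restore):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# let replies to our own connections back in, and allow loopback
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# only the services this box actually runs (example ports)
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```

Default-deny on INPUT means a forgotten daemon listening on an extra port is unreachable until a rule is deliberately added for it.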
Re: [CentOS] is it possible to recover LVM drive from accidental Fdisk?
Rudi Ahlers wrote:
> Hi all,
>
> Does anyone know if it's possible to recover an LVM partition from a
> drive that was fdisked? I accidentally fdisk'd the wrong drive (had to
> fdisk a lot of 160GB drives from old servers, and one still has
> important data on it that a client now wants) by running fdisk /dev/sdc
> and deleting the partitions. The drive is still in another machine and
> hasn't been rebooted yet, but there's now no partition on it.

Re-create the original partition table, which is just a map; as long as
you haven't formatted or overwritten data, everything should still be
there.

Also, if you're not already doing it, I suggest you set your LVM
partitions to type 8e so it's obvious they are LVM:

[r...@dc1-mysql001b:~]# fdisk -l /dev/sdc

Disk /dev/sdc: 2197.9 GB, 2197949513728 bytes
255 heads, 63 sectors/track, 267218 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End      Blocks   Id  System
/dev/sdc1             1    267218  2146428553+  8e  Linux LVM

nate
Re: [CentOS] compilers a security risk?
Dave Stevens wrote:
> I don't have enough experience to assess the security issues. Does
> anyone have an opinion on this? It would be simple and feasible to
> allocate another domain as suggested above.

Unless you're running an obscure platform, having a compiler on the
system shouldn't be a big deal; if someone can upload source code, they
can upload a precompiled binary.

nate
Re: [CentOS] New to VM
David Milholen wrote:
> I have managed these for so long on just a couple of machines but
> technology is changing and we are growing as a company and I have heard
> and read great things that can be done with VM.

Really depends on how much usage the systems get. If you are migrating
from physical systems to virtual systems, look at the CPU, load, and I/O
(if Linux, use iostat).

I run VMware Server on a 5-year-old system which has 2 VMs on it, running
apache, mysql, mail services, DNS, and a bunch of other small things. It
works fine, though my typical CPU usage on the *host* is 5%. It runs off
a pair of 250GB SATA drives connected to a 3Ware 8006-2 RAID card; dual
Xeon 3GHz, 6GB RAM, 32-bit. In my experience, most systems like the ones
you're using, hosting the apps you mention, are idle 99%+ of the time,
making them perfect VM candidates.

> I have another ibm Eserver with a couple of scsi 15k 50GB drives and 4
> GB of memory that I can configure from scratch to do VM or whatever I
> need. I guess I should start by asking how VM is configured and how it
> allocates resources on the server?

Resource allocation depends on the VM technology you're using. Myself, I
am a long-time VMware fan/user, so I stick to their stuff, but no matter
what, it really depends on how much load your system will be under.

From a VMware perspective, this PDF is informative. It's probably well
beyond the scale you're operating at, but you can get an idea as to the
complexity that "virtualization" entails:
http://portal.aphroland.org/~aphro/vmware/09Q3-perf_overview_and_tier1-pac_nw.pdf

Performance of bare-metal hypervisors like VMware ESX will dramatically
outperform the hypervisors that run on top of another OS (I think they
call them "type 2") like VMware Server, but bare-metal hypervisors have
very strict hardware requirements. I use VMware Server on my own system
since the hardware is not supported by ESX. At my full-time job I run
dozens of ESX systems on real hardware, with a proper SAN and networking
infrastructure.

nate
Re: [CentOS] Recover RAID
Jeff Sadino wrote:
> Ok, I'm learning a lot about raids and what to do, and what not to do.
> Looking at some info I had before, md1 was 200GB in size, which makes
> sense, but it was only 39GB full. The way I repartitioned drive 1, I
> probably overwrote only about 11GB. Does that make it any easier to
> recover any amount of the raid? Is there some sort of "recover lost
> partitions" option in Linux or gparted?

The partition table is just a map; if you can re-create the partitions
exactly the way they were before, the data should still be there if it
wasn't overwritten. As far as I know there isn't a backup of the
partition table stored anywhere, but if the disk is exactly the same as
the other member, then you can try duplicating the partition setup using
the first disk as a guide.

I don't know whether or not it will help restore a RAID 0 set, but it may
be worth a shot since the situation probably can't get much worse.

nate
Re: [CentOS] SSH Remote Execution - su?
Tim Nelson wrote:
> YESS. It prevents the tty error from showing up and asks me for a
> password as expected. BUT, how do I then automate the entering of the
> password?
>
> John Kennedy mentioned using expect, which I've used before but found it
> to be 'finnicky'. I may have to look at it again...
>
> Changing settings such as sudo configuration or ssh config may be
> daunting since I have a large number of systems (150+) that would need
> to be modified. :-/

Just log in as root with ssh keys? If you need to somehow block brute
force attacks against the root account, either globally disable password
auth, or it appears you can use the option "PermitRootLogin
without-password" to restrict remote root logins via SSH to keys only. I
haven't tried this option myself.

nate
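In sketch form, the two options mentioned above look like this in /etc/ssh/sshd_config (pick one or the other, then reload sshd):

```
# allow root to log in, but only with a key, never a password
PermitRootLogin without-password

# stricter alternative: disable password auth for everyone, keys only
#PasswordAuthentication no
#ChallengeResponseAuthentication no
```

Test the change from a second session before logging out, so a typo can't lock you out of all 150+ boxes at once.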
Re: [CentOS] DHCP client not working with Windows DHCP / dynamic DNS server
Florin Andrei wrote:
> Is there anything else I can do on my side to make it happen? Any
> particular options in dhclient.conf or something like that?

See the man page?

   DYNAMIC DNS
   The client now has some very limited support for doing DNS updates
   when a lease is acquired. This is prototypical, and probably doesn't
   do what you want. It also only works if you happen to have control
   over your DNS server, which isn't very likely. To make it work, you
   have to declare a key and zone as in the DHCP server (see
   dhcpd.conf(5) for details). You also need to configure the fqdn
   option on the client, as follows:

     send fqdn.fqdn "grosse.fugue.com.";
     send fqdn.encoded on;
     send fqdn.server-update off;

   The fqdn.fqdn option MUST be a fully-qualified domain name. You MUST
   define a zone statement for the zone to be updated. The fqdn.encoded
   option may need to be set to on or off, depending on the DHCP server
   you are using.

On my company's Windows network, the IT guy just assigns static IPs via
MAC addresses to those that want a fixed IP and creates a DNS name
associated with each. Myself, I've never liked dynamic DNS and never used
it; I like my zone files organized (and plain text, no binary crap), and
I suspect dynamic DNS would screw it all up. I've seen how horribly
polluted zones can get on Windows networks with dynamic DNS: overlapping
names, multiple DNS entries for the same IP, etc.

nate
Re: [CentOS] Temperature sensor
Dominik Zyla wrote: > You have right. While you checking sensors from few machines, you can > see the trend. Gotta think about changing the way of temperature monitoring > here. Myself, I wouldn't rely on internal equipment sensors to try to extrapolate ambient temperature from their readings. Most equipment will automatically spin its fans at faster RPMs as the temperature goes up, which can give false indications of ambient temperature. I do monitor the temperature of network equipment, but also have dedicated sensors for ambient readings. That already saved us some pain once: we opened a new location in London last year and the ambient temperature at our rack in the data center was 85+ degrees F. The SLA requires the temperature be between 64 and 78 degrees. Alarms were going off in Nagios. The facility claimed there was no issue, and opened up some more air vents, which didn't help. They still didn't believe us, so they installed their own sensor in our rack. The next day the temperature dropped by ~10 degrees; I guess they believed their own sensor. http://portal.aphroland.org/~aphro/rack-temperature.png People at my own company were questioning the accuracy of this sensor (there was only one, I prefer 2 but they are cheap bastards), but I was able to validate the increased temperature by noting that the internal temp of the switches and load balancers was significantly higher than at other locations. Though even with the ambient temperature dropping by 10+ degrees, the temperature of the gear didn't move nearly as much. The crazy part was that I checked the temperature probes at my former company (different/better data center) and the *exhaust* temperature of the servers was lower than the *input* temperature at this new data center. Exhaust temperature was around 78-80 degrees, several degrees below the 85+.
It seems the facility in London further improved their cooling in recent weeks, as the average temperature is down from 78 to about 70-72 now, and is much more stable; prior to the change we were frequently spiking above 80 and averaging about 78. Also, having ambient temperature sensors can be advantageous in the event you need to convince a facility they are running too hot (or out of SLA). As a tech guy myself (as you can probably see already), I am much less inclined to trust the results of internal equipment sensors than a standalone external sensor which can be put on the front of the rack. nate
Re: [CentOS] Temperature sensor
Bowie Bailey wrote: > Does anyone know of a cheap temperature sensor that will work with > Linux? I don't need a fancy monitoring appliance, I just want a simple > sensor that I can connect to one of my monitoring servers to let me know > if the server room is getting hot. I don't know what your idea of cheap is, but I use Servertech PDUs exclusively, and their Smart and Switched models are network aware and have optional temperature/humidity probes which you can query via SNMP. This is handy because you then have sensors essentially built into each and every rack. It's pretty simple to have Nagios query the SNMP value and alert; I also have Cacti graphing the data. On most of the Servertech PDUs the sensors plug directly into the PDU; on some an add-on module is required. Each PDU typically supports 2 sensors, and the cables are about 10 feet. About 5 years ago at a company I deployed a Sensatronics model E4, which has up to 16 probe inputs, and they have sensors with cables up to something like 300 feet. It's monitorable via a simple built-in HTTP server and I believe it has SNMP as well. They have fancier monitoring appliances with alerting and such too, though those weren't out when I was using their stuff. nate
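A sketch of what such a Nagios-style check can look like. The threshold logic is below; the snmpget shown in the comment is only illustrative, since the actual temperature OID is vendor-specific (check your PDU's MIB):

```shell
#!/bin/sh
# Threshold check in the Nagios plugin style. In practice the reading
# would come from the PDU, e.g. (OID below is a placeholder):
#   TEMP=`snmpget -v2c -c public -Ovq my-pdu <temperature-oid>`
check_temp() {
    if [ "$1" -gt 78 ]; then
        echo "CRITICAL - rack temperature ${1}F"
        return 2    # Nagios exit code for CRITICAL
    fi
    echo "OK - rack temperature ${1}F"
    return 0
}
check_temp 72
```

Hook it into Nagios as a check command, and point Cacti at the same OID for graphing.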
Re: [CentOS] rack configurator?
Alan McKay wrote: > Hey folks, > > Does anyone know of a rack configurator that runs on CentOS? It does > not have to even be very fancy - immediately I'm just looking for an > easy way to keep track of what is in my racks, and being able to have > a visual of it. Maybe juggle stuff around. Bonus if it does power > calculations based on model numbers and so on - or data I punch in for > each model. But really right now some kind of very specific CAD or > Draw tool that is specific to this purpose. Or a template for a more > general tool. Spreadsheet? nate
Re: [CentOS] Desperately need help with multi-core NIC performance
Pete Kay wrote: > Hi > > So is that the limit? I have heard people being able to run like 10K > call channels before max out CPU cap. I would verify the network throughput of your system to make sure the NIC/switch/etc are functioning normally. I use iperf to do this; it's a really simple tool, you just need two systems. On a good network you should be able to sustain roughly 900+Mbit/s with standard frame sizes and iperf on a single gigE link (hopefully with no tuning). Sample run: Client connecting to pd1-bgas01, TCP port 5001 TCP window size: 205 KByte (default) [ 3] local 10.16.1.12 port 54559 connected with 10.16.1.11 port 5001 [ 3] 0.0-10.0 sec 1.06 GBytes 912 Mbits/sec There are lots of options you can use to configure iperf to simulate various types of traffic. nate
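For anyone new to iperf, the basic invocations look like this (host name is a placeholder; these are standard iperf2 flags):

```shell
# on the receiving end:
iperf -s
# on the sending end, a 10 second TCP test:
iperf -c testhost -t 10
# some useful variations:
iperf -c testhost -P 4          # four parallel TCP streams
iperf -c testhost -u -b 500M    # UDP at a 500 Mbit/s target rate
```

If a single stream lands well below ~900 Mbit/s on gigE, look at duplex mismatches, drivers, and switch error counters before blaming the application.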
Re: [CentOS] Recommended PCIe SATA/SAS Controller?
Tim Nelson wrote: > Greetings all- > > I need to purchase a PCIe SATA or SAS controller(non-raid) for a Supermicro > 2U system. It should be directly bootable. Any recommendations? The system > will be running CentOS 5.4 as an LTSP system. Thanks! I've had good luck with an ATTO SAS PCIe HBA, using it with a tape library. The driver for this particular card wasn't included with CentOS at the time but might be with the latest version. http://www.attotech.com/ The drivers are open source. I had no past experience with ATTO; it was recommended to me by a tape backup software vendor for its performance and stability under Linux. I am using the "ExpressSAS H644 Low-Profile 4-Internal/4-External Port 6Gb/s SAS/SATA PCIe 2.0 Host Adapter" nate
Re: [CentOS] unattended fsck on reboot
m.r...@5-cent.us wrote: > It was dumping large amounts of data into his home directory... which was > NFS mounted from the server I needed to reboot. That's why I like HA clusters. Our NFS cluster runs on top of CentOS, and if we need to reboot a node it has minimal impact: the other system takes over the IPs and MAC addresses. To date, the only times we've rebooted the NFS systems have been for software updates (3 of them in the past year or so). At my previous company I was planning on trying to "roll my own" NFS cluster on RHEL but never got round to it before I left. http://sources.redhat.com/cluster/doc/nfscookbook.pdf nate
Re: [CentOS] unattended fsck on reboot
Rudi Ahlers wrote: > Is it "absolutely" necessary to run this on servers? Especially since they > don't reboot often, but when they do it takes ages for fsck to finish - > which on web servers causes extra unwanted downtime. > > Or is there a way to run fsck with the server running? I know it's a bad > idea, but is there any way to run it, without causing too much downtime? I > just had one server run fsck for 2+ hours, which is not really feasible in > our line of business. For me, at least on my SAN volumes, I disable the fsck that triggers after X number of days or X number of mounts. Of all the times over the years where I have seen that fsck triggered, I have never, ever seen it detect any problems. I don't bother changing the setting for local disks, as it is usually pretty quick to scan them. You must have a pretty big and/or slow file system for fsck to take 2+ hours. nate
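On ext2/ext3 those periodic checks are controlled per-filesystem with tune2fs; a sketch (the device name here is just an example):

```shell
# show the current mount-count and interval settings
tune2fs -l /dev/sdb1 | egrep -i 'mount count|check'
# disable both the every-N-mounts check and the time-based check
tune2fs -c 0 -i 0 /dev/sdb1
```

With both set to 0, fsck at boot only happens when the filesystem is actually marked dirty.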
Re: [CentOS] Login timed out after 60 seconds
Paul Heinlein wrote: > I'm trying to figure out if it's possible to lengthen that timeout > value from 60 to, say, 180. This isn't the first time I've wanted to > kill a runaway process and been unable to get a console because of > that timeout. I poked around a bunch but couldn't find a config that can be adjusted. I do see that the particular message comes from /bin/login: # strings login | grep -i timed Login timed out after %d seconds which seems to be part of the util-linux package, so perhaps poke around in the source; maybe there is a .h file you can adjust with a higher value and rebuild it. nate
Re: [CentOS] Syslog for chroot-jailed SFTP users?
Sean Carolan wrote: > In our environment the chroot jail is /home/username. Does this mean > we need a /home/username/dev/log for each and every user? If the > daemon is chroot'd to /home/username wouldn't this be the case? Yes. nate
Re: [CentOS] disk I/O problems with LSI Logic RAID controller
nate wrote: > Perhaps RAID 6, as I've never heard of RAID 5 with two parity (two > parity is dual parity which is RAID 6). Forgot to mention that my own personal preference, on my high end SAN at least, is RAID 5 with a 3:1 parity ratio, or a max of 5:1 or 6:1; really never higher than that unless activity is very low. The RAID controllers on my array are the fastest in the industry, and despite that, in the near future I am migrating to a parity ratio of 2:1 to get (even) better performance; that brings me to within about 3-4% of RAID 1+0 performance for typical workloads. nate
Re: [CentOS] disk I/O problems with LSI Logic RAID controller
Fernando Gleiser wrote: > yes, ita bunch of 12 7k2 RPM disks organized as 1 hot spare, 2 parity disks, > 9 data disks in a RAID 5 configuration. is 9/2 a "high ratio"? Perhaps that's RAID 6, as I've never heard of RAID 5 with two parity disks (two parity is dual parity, which is RAID 6). RAID 6 performance can vary dramatically between controllers. If it were me, unless you get any other responses shortly, I would test other RAID configurations and see how the performance compares: RAID 1+0; RAID 5+0 (striped RAID 5 arrays, in your case perhaps 3+1 * 4 with no hot spares, at least for testing); RAID 5+0 (5+1 * 2). RAID 1+0 should be first though; even if you don't end up using it in the end, it's good to get a baseline with the fastest configuration. I would expect the RAID card to support RAID 50, but not all do; if it doesn't, one option may be to perform striping using LVM at the OS level. nate
Re: [CentOS] disk I/O problems with LSI Logic RAID controller
Fernando Gleiser wrote: > we're having a weird disk I/O problem on a 5.4 server connected to an > external SAS storage with an LSI logic megaraid sas 1078. Not sure I know what the issue is, but telling us how many disks, the RPM of the disks, and the RAID level would probably help. It sounds like perhaps you have a bunch of 7200RPM disks in a RAID setup where the data:parity ratio is way out of whack (i.e. a high number of data disks per parity disk), which will result in very poor write performance. nate
Re: [CentOS] Best way to backup virtual machines from Citrix XenServer.
Simon Billis wrote: > Good quality storage (which usually comes at a price) will provide the > functionality that is needed to backup the VM's either as a complete VM > image or files from the VM filesystem. Entry level storage from suppliers > such as Equallogic/Dell comes with this functionality and it is possible to > have the storage up and attached to servers within 10 mins from un-boxing it > (but do allow a little longer to understand it ;-) .) I suggest reading this interesting piece, "3 years of Equallogic", before thinking about using its snapshot functionality: http://www.tuxyturvy.com/blog/index.php?/archives/61-Three-Years-of-Equallogic.html Of course not all snapshot solutions are created equal; Equallogic's appears to be especially poor in this regard. nate
Re: [CentOS] Clustering
cally connects to Array 2 and has it send all of the data up to the point Array 1 went down. I think you can get as close as something like a few milliseconds from the disaster that took out Array 1, and get all of the data to Array 3. Setting it up takes about 30 minutes, and it's all automatic. Prior to this, setting up such a solution would cost way more, as you'd only find it in the most high end systems. It's going to be many times cheaper to get a 2nd array and replicate than it is to try to design/build a single system that offers 100% uptime. Entry level pricing of this particular array starts at maybe $130k, and can go probably as high as $2-3M if you load it up with software (more than half the cost can be software add-ons). So it's not in the same league as most NetApp, Equallogic, or even EMC/HDS gear. Their low end stuff starts at probably $70k. nate
Re: [CentOS] VMWare ESXi & CentOS5.4
Thom Paine wrote: > Any thoughts to this, or should I just put on CentOS 5.4 and be done > with it? I know it's like asking what everyone's favourite colour is, > but maybe a few replies will give me some ideas. I like the VM approach because it gives a foolproof way to snapshot the guest and do testing/rollbacks easily; also the hardware configuration is usually significantly simpler as it's abstracted, and it makes the server more portable, easier to move to another system as a whole. Where performance is a real big concern I use native hardware, but those cases are fairly rare. nate
Re: [CentOS] Clustering
m.r...@5-cent.us wrote: > Except that VMware is *based* on RHEL. Why would you *not* have a > Linux-based console? A common misconception. The Linux-based console is a VM in itself, and is used for management purposes only; it runs on top of the hypervisor. nate
Re: [CentOS] Clustering
Les Mikesell wrote: > Have you investigated any of the mostly-software alternatives for this like > openfiler, nexentastor, etc., or rolling your own iscsi server out of > opensolaris or centos? I have, and it depends on your needs. I ran Openfiler a couple years ago with ESX and it worked ok. The main issue there was stability. I landed on a decent configuration that worked fine as long as you didn't touch it (kernel updates often caused kernel panics on the hardware, which was an older HP DL580). And when Openfiler finally came out with their newer "major" version, the only upgrade path was to completely re-install the OS (maybe that's changed now, I don't know). A second issue was availability. Openfiler (and others) have replication, and clustering in some cases, but I've yet to see anything come close to what the formal commercial storage solutions can provide (seamless fail over, online software upgrades etc). Mirrored cache is a big one as well. Storage can be the biggest pain point to address when dealing with a consolidated environment, since in many cases it remains a single point of failure. Network fault tolerance is fairly simple to address, and throwing in more servers to account for server failure is easy, but the data can often only live in one place at a time. Some higher end arrays offer synchronous replication to another system, though that replication is not application aware (i.e. only crash consistent), so you are at some risk of data loss when using it with applications that are not aggressive about data integrity (like Oracle for example). A local vmware consulting shop here that I have a lot of respect for says that in their experience doing crash consistent replication of VMFS volumes between storage arrays, there is about a 10% chance one of the VMs on the volume being replicated will not be recoverable; as a result they heavily promoted NetApp's VMware-aware replication, which is much safer.
My own vendor, 3PAR, released similar software a couple of weeks ago for their systems. Shared storage can also be a significant pain point for performance with a poor setup. Another advantage to a proper enterprise-type solution is support, mainly for firmware updates. My main array at work, for example, is using Seagate enterprise SATA drives. The vendor has updated the firmware on them twice in the past six months. So not only was the process easy since it was automatic, but since it's their product they work closely with the manufacturer, are kept in the loop when important updates/fixes come out, and have access to them; last I checked it was a very rare case to be able to get HDD firmware updates from a manufacturer's web site. The system "worked" perfectly fine before the updates. I don't know what the most recent update was for, but the one performed in August was around an edge case where silent data corruption could occur on the disk if a certain type of error condition was encountered, so the vendor sent out an urgent alert to all customers using the same type of drive to get them updated asap. A co-worker of mine had to update the firmware on some other Seagate disks (SCSI) in 2008 on about 50 servers due to a performance issue with our application; in that case he had to go to each system individually with a DOS boot disk and update the disks, a very time consuming process involving a lot of downtime. My company had spent almost a year trying to track down the problem before I joined; I ran some diagnostics and fairly quickly narrowed the problem down to systems running Seagate disks (some other systems running the same app had other brands of disks (stupid Dell) and were not impacted). A lot of firmware update tools, I suspect, don't work well with RAID controllers either, since the disks are abstracted, further complicating the issue of upgrading them.
So it all depends on what the needs are; you can go with the cheaper software options, just try to set expectations accordingly when using them. Which for me is basically: "don't freak out when it blows up". nate
Re: [CentOS] Clustering
Drew wrote: >> When you talk about the free version are your referring to Vmware server >> or is there a free version of Esxi? The website is a little misleading >> with "free trail" and such. > > ESXi is free to use. ESX / vSphere is the paid version. A common confusion point. While there is a free license available for ESXi and not for ESX, you can pay for ESXi to unlock additional functionality(such as live migration, HA, DRS etc) and still keep the "thin" hypervisor footprint that ESXi offers. nate
Re: [CentOS] Clustering
Bo Lynch wrote: > Whats your thoughts on Vmware server over esxi? > Really do not want to have to budget for Virtualization if I do not have to. Depends on the hardware; ideally ESXi, though it is very picky about hardware. And you should budget for it: storage will be a big concern if you want to provide high availability. A good small storage array (a few TB) starts at around $30-40k. nate
Re: [CentOS] Clustering
Bo Lynch wrote: > Right know we have about 30 or so linux servers scattered through out or > district. Was looking at ways of consolidating and some sort of redundancy > would be nice. > Will clustering not work with certain apps? We have a couple mysql dbases, > oracle database, smb shares, nfs, email, and web servers. It sounds like you're looking at putting them in a virtual environment; clustering applications like that is fairly complex. Oracle has its own clustering (RAC), MySQL has clustering (with some potentially serious limitations depending on your DB size), NFS clustering is yet another animal, and as for Samba clustering, CIFS is a stateful protocol so there really isn't a good way to do clustering there, at least with generic Samba that I'm aware of; if a server fails, the clients connected to it will lose their connection, and potentially data if they happened to be writing at the time. In any case it sounds like clustering isn't what you're looking for. I would look towards putting the systems in VMs with HA shared storage if you want to consolidate and provide high availability. nate
Re: [CentOS] best parallel / cluster SSH
Alan McKay wrote: > I was actually going to start another "configuration management redux" > thread as a follow up to a thread I started a few months ago. As Les mentioned, it's far more common in that situation to use ssh key authentication and a for loop; if your ssh key has a pass phrase, use an ssh agent. I still use it quite often even though I have a fairly extensive cfengine setup; sometimes I need something done right now, such as a mass restart, and can't wait for cfengine to run on each host. If you have servers say web01 -> web30, a sample script to restart apache: for i in `seq -w 1 30`; do ssh r...@web${i} "/etc/init.d/httpd restart"; done nate
Re: [CentOS] Block network at logoff on workstation
David McGuffey wrote: > I was wondering how to best block all network access to it when I log > off...then unblock it when I log on. Changing iptables requires root > access...as does running ifdown and ifup scripts. You could use sudo to call them. But I don't really understand your concern; if you're behind two pretty tight firewalls then there shouldn't be anything to worry about. Myself, I just have one firewall (OpenBSD) and no local firewall on my system (at home). If you're physically at the system (which I assume you are, since you're blocking network access while you're not logged on), perhaps simply pulling the network cable out of the system is simplest. nate
Re: [CentOS] atime, relatime query
Rob Kampen wrote: > I do not agree - every read of the db will update the filesystem with > noatime missing, thus specifying noatime does give performance > improvements - the size of the files does not matter as much - rather > the number of reads vs writes. Interesting, I hadn't thought about that aspect. I dug around, and at least for MySQL and PostgreSQL noatime doesn't appear to provide any noticeable benefit (it may be a measurable one in some cases): http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/ http://www.ffnn.nl/pages/articles/linux/server-wide-performance-benchmarking.php If you're doing a ton of reads and only have a few files, it's likely there aren't going to be many atime updates, as the file is kept open for an extended period of time (e.g. scanning a table with 100k rows). For DB performance there are a lot more useful areas to spend time tuning. As DBAs often say, you can get 10% more performance by tuning the OS and getting better hardware, and 1000% better performance by tuning the queries and data structures, or something like that :) nate
Re: [CentOS] atime, relatime query
Rajagopal Swaminathan wrote: > But in a production db server, which is backed up by HP DP, is it > advisable to mount with noatime? noatime typically helps when dealing with lots of files; most DB servers have a small number of files that are large in size, so noatime is likely not to provide any noticeable improvement, I think. nate
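If you do want to try noatime, it is easy to test on a live box and make permanent later; a sketch (device and mount point are examples):

```shell
# try it on a mounted filesystem without a reboot
mount -o remount,noatime /var/lib/mysql
# to make it permanent, add noatime to the options in /etc/fstab, e.g.:
# /dev/sdb1  /var/lib/mysql  ext3  defaults,noatime  1 2
```

Remounting back without the option (or rebooting) undoes the change, so it is cheap to benchmark both ways.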
Re: [CentOS] Logrotate in CentOS 5.4 more brutal (to httpd at least) than in 5.3?
Matthew Miller wrote: > That said, *both* of them are too brutal, in that they'll kill any open > connections. Signal USR1 (or service httpd graceful) is much nicer, since it > lets any open connections complete. The downside is that these old > connections might get written to the _rotated_ log instead of the new one, > but to me that's a small price to pay. I've always used copytruncate for httpd logs and logrotate, never bothered to send signals to apache at all... sample config:

"/path/to/logs/*www*log" {
    daily
    rotate 1
    nocompress
    notifempty
    copytruncate
    missingok
    sharedscripts
    olddir /path/to/archivedir
    postrotate
        DATE=`date --date=Yesterday +%y%m%d`
        cd /path/to/archivedir
        for FOO in `ls *.1`
        do
            mv $FOO `echo $FOO | cut -f1 -d.`.$DATE.log
        done
        gzip -9 *.$DATE.log
        sleep 60
        sync
        logger "[LOGROTATE] Rotated these logs: `echo *.${DATE}.log.gz`"
    endscript
}

nate
Re: [CentOS] glibc updates and rebooting
R-Elists wrote: > > i forgot... > > is it necessary to reboot after glibc* yum updates on 4.x and 5.x or any > centos for that matter... It should not be, but as with all library updates, applications that are running when the update is applied won't get the update until they are restarted. Often the updates are so minor that restarts are not critical. Rebooting is frequently the easiest way for the update to fully take effect, or you can manually restart things. I'm not sure if RPM automatically restarts services when system libraries change (I haven't seen it do this myself). In Debian, by contrast, when you update glibc it will scan for running services that use it and prompt you to restart them as part of the upgrade process. nate
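One way to see what still needs a restart after a library update is lsof, which flags files that are deleted but still mapped by running processes; a sketch (the exact output format varies by lsof version, so treat the field filtering as approximate):

```shell
# processes still mapping the old, now-deleted libc after an update;
# each command/PID listed needs a restart to pick up the new library
lsof | grep libc | egrep -w 'DEL|deleted' | awk '{print $1, $2}' | sort -u
```

An empty result suggests everything has already been restarted against the new library.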
Re: [CentOS] Lockup using stock r8169 on 4.8 in gigabit mode during heavy transfer on lan
Jason Pyeron wrote: > Ideas? realtek sucks(massive evidence on the net over the past 10+ years), get a real NIC(no pun intended). nate
Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways
Ross Walker wrote: > Even directio by itself won't do the trick, the OS needs to make sure > the disk drives empties it's write cache and currently barriers are > the only way to make sure of that. Well I guess by the same token nobody in their right mind would run an Oracle DB without a battery backed write cache :) nate
Re: [CentOS] NFS vs SMb vs iSCSI for remote backup mounts
Rudi Ahlers wrote: > nate, why not? Is it simply unavoidable at all costs to mount on system on > another, over a WAN? That's all I really want todo If what you have now works, stick with it. In general, network file systems are very latency sensitive. CIFS might work best *if* you're using a WAN optimization appliance; I'm not sure how much support NFS gets from those vendors. iSCSI is certainly the worst: block devices are very intolerant of latency. AFS may be another option, though quite a bit more complicated; as far as I know it's a layer on top of an existing file system that provides things like replication: http://www.openafs.org/ I have no experience with it myself. nate
Re: [CentOS] NFS vs SMb vs iSCSI for remote backup mounts
Rudi Ahlers wrote: > let's keep the question simple. WHICH filesystem would be best for this type > of operation? SMB, NFS, or iSCSI? none nate
Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways
Les Mikesell wrote: > I wonder if the generally-horrible handling that linux has always done > for fsync() is the real reason Oracle spun off their own distro? Do > they get it better? Anyone in their right mind with Oracle would be using ASM and direct I/O, so I don't think it was related. http://www.oracle.com/technology/pub/articles/smiley_10gdb_install.html#asm http://www.ixora.com.au/tips/avoid_buffered_io.htm "The file system cache should be used to buffer non-Oracle I/O only. Using it to attempt to enhance the caching of Oracle data just wastes memory, and lots of it. Oracle can cache its own data much more effectively than the operating system can." Which leads me back to my original response: forget about file system cache; if you want performance, go for application level caching, whether it's DB caching or other caching like the memcached mentioned by someone. Oracle did it because they wanted to control the entire stack. nate
Re: [CentOS] NFS vs SMb vs iSCSI for remote backup mounts
Rudi Ahlers wrote: > Hi, > > I would like to get some input from people who have used these options for > mounting a remote server to a local server. Basically, I need to replicate / > backup data from one server to another, but over the internet (i.e. insecure > channels) NFS, CIFS, and iSCSI are all terrible for WAN backups (assuming you don't have a WAN optimization appliance); there's tons of overhead. Use rsync over SSH, or rsync over HPN-SSH. I transfer over a TB of data a day using rsync over HPN-SSH across several WANs. nate
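A typical rsync-over-SSH invocation for this kind of backup looks like the following (paths, user, and host are placeholders):

```shell
# mirror /data to the backup host, sending only changes over the wire;
# -a preserves perms/times/links, -z compresses, --delete removes
# files on the destination that no longer exist on the source
rsync -az --delete -e ssh /data/ backupuser@backuphost:/backups/data/
```

HPN-SSH is a patched OpenSSH with larger internal buffers for high-bandwidth, high-latency links; the rsync command itself is unchanged, you just point -e at the patched ssh binary.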
Re: [CentOS] Installing an SSL Cert
ML wrote: > Since I have a domain that will be collecting data and processing payments. > > Where can I find instructions on how to install the certificate? It depends on what web server software you're running; it looks like GoDaddy has quite a few sets of instructions: http://help.godaddy.com/article/5346 > Do I have to run another domain or sub domain for the store? Or can I just > run the whole domain on https? You can run the whole domain on both http and https. Normally what sites do is run the regular site over http, then have redirect/rewrite rules set up so that when a user goes to a part of the site that should be "secure", the site automatically sends them to the https version of that page. nate
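As a sketch of that redirect approach, assuming Apache with mod_rewrite enabled (the /store path is purely an example):

```
# send any plain-http request for /store to the https version of the same URL
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/store(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```

Other servers (nginx, lighttpd) have equivalent mechanisms; the idea is the same: match the "secure" part of the site and issue a redirect to https.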
Re: [CentOS] Xen, Amazon, and /proc/cmdline
Kurt Newman wrote: > Is it /sbin/init? I can't seem to find any reference of that in any man > pages. Essentially, I'm trying to short-circuit this boot process to > execute a run level of my choosing, and not be forced to use 4. It's probably the kernel itself invoking whatever init= value is given on the kernel command line (which is what shows up in /proc/cmdline). For example, when I go to single user mode I often specify init=/bin/bash on the command line, which I'd expect takes /sbin/init completely out of the loop. nate
Re: [CentOS] Auto exit lftp on bash script
Alan Hoffmeister wrote:
> On 26/01/2010 16:54, Akemi Yagi wrote:
>> lftp -u user,password -e "mirror --reverse --delete --only-newer
>> --verbose /var/bkp /test_bkp" somehost.com
> Already tried the && exit, but no success...

Try ncftpput instead? http://www.ncftp.com/ncftp/doc/ncftpput.html

"The purpose of ncftpput is to do file transfers from the command-line without entering an interactive shell. This lets you write shell scripts or other unattended processes that can do FTP. It is also useful for advanced users who want to send files from the shell command line without entering an interactive FTP program such as ncftp."

nate
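An ncftpput equivalent of the lftp mirror above might look like this. The host, credentials, and paths are the thread's own placeholder examples, not real values:

```shell
# Non-interactive FTP upload: -u/-p pass credentials, -R copies the
# whole directory tree. All values here are placeholders.
CMD="ncftpput -R -u user -p password somehost.com /test_bkp /var/bkp"
echo "$CMD"
```

Because ncftpput exits when the transfer finishes, there is no interactive session left to "exit" from, which sidesteps the original problem.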
Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways
Noob Centos Admin wrote:
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.

This is what I use for MySQL (among other things):

log-queries-not-using-indexes
long_query_time=3
key_buffer = 50M
bulk_insert_buffer_size = 8M
table_cache = 1000
sort_buffer_size = 8M
read_buffer_size = 4M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 8M
thread_cache = 40
query_cache_size = 256M
query_cache_type=1
query_cache_limit=20M
default-storage-engine=innodb
innodb_file_per_table
innodb_buffer_pool_size=20G    <-- assumes you have a decent amount of RAM; this is the max I can set the buffers with 32G of RAM w/o swapping
innodb_additional_mem_pool_size=20M
innodb_log_file_size=1999M
innodb_flush_log_at_trx_commit=2
innodb_flush_method=O_DIRECT    <-- this turns on Direct I/O
innodb_lock_wait_timeout=120
innodb_log_buffer_size=13M
innodb_open_files=1024
innodb_thread_concurrency=16
sync_binlog=1
set-variable = tmpdir=/var/lib/mysql/tmp    <-- force tmp to be on the SAN rather than local disk

Running MySQL 5.0.51a (built from SRPMS).

nate
Re: [CentOS] OT: reliable secondary dns provider
Eero Volotinen wrote:
> Sorry about a bit offtopic, but I am looking for a reliable (not free)
> secondary DNS provider.

My company uses Dynect as primary, though they can do secondary as well: http://dyn.com/dynect

We also use their DNS-based global load balancing. So far 100% uptime (about 7-8 months of usage), thousands of queries per second.

nate
Re: [CentOS] Centos/Linux Disk Caching, might be OT in some ways
Noob Centos Admin wrote:
> I'm trying to optimize some database app running on a CentOS server
> and wanted to confirm some things about the disk/file caching
> mechanism.

If you want a fast database, forget about file system caching: use Direct I/O and put your memory to better use - application-level caching.

nate
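In MySQL terms (echoing the my.cnf posted elsewhere in this thread), that advice boils down to a couple of config lines; the buffer pool size is illustrative and depends on how much RAM the box has:

```ini
# Let InnoDB do its own caching in a large buffer pool, and bypass the
# OS page cache for data files with Direct I/O.
innodb_buffer_pool_size = 20G
innodb_flush_method     = O_DIRECT
```

With O_DIRECT the same pages are no longer cached twice (once by InnoDB, once by the kernel), so the RAM saved can go straight into the buffer pool.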