At 15:09 -0800 28/1/15, David C. Miller wrote:
> Although I hate Oracle with a fury, one good thing is that they put
> all the updates they rebuild for their RHEL clone on a publicly
> viewable site. I'm guessing they pay Red Hat for extended support on
> end-of-life RHEL4 to get access to the source
Hi,
For reasons which are too tiresome to bore you all with, I have an
obligation to look after a suite of legacy CentOS 4.x systems which
cannot be migrated upwards.
I note on https://access.redhat.com/articles/1332213 the following
comment from a RHN person:
We are currently working on a
At 12:58 -0500 14/5/14, Les Mikesell wrote:
>If you are running physical machines now, you don't have that
>ability anyway...
True, but that's a reason to try and migrate to a better environment
which would allow it.
>Does it have to be hosted? You could run under KVM/Virtualbox/Vmware,
>etc.
Dear all,
I look after a number of CentOS 4.x servers running legacy
applications that depend on ancient versions of various things (such
as MySQL 3.x) and which can't be upgraded without non-trivial
development effort.
I've been considering virtualising them and as a test have been
trialling
At 08:54 + 2/12/09, hadi motamedi wrote:
>Dear All
>Can you please do me a favor and let me know how I can compare two
>files, but not on a line-by-line basis, on my CentOS server? I mean,
>say row #1 in file1 has the same data as row #5 in file2, but
>comm compares them on a line-by-line basis
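For what it's worth, two stock tools cover the order-insensitive case
(assuming plain text files; file1/file2 are placeholders):

  # lines of file1 that don't appear anywhere in file2
  grep -Fxvf file2 file1

  # lines unique to each file, ignoring ordering (bash process substitution)
  comm -3 <(sort file1) <(sort file2)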
At 12:43 +0200 26/8/09, przemol...@poczta.fm wrote:
>Hello,
>
>I'd like to clone an existing CentOS server. Can anybody
>recommend a working solution to achieve that?
I've used the dd + netcat + live CD technique with success in the past, e.g.:
http://alma.ch/blogs/bahut/2005/02/wonders-of-dd-and-ne
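The gist of it, in case that link goes away, is along these lines (the nc
option syntax differs between netcat versions, and sdX and the port number
are placeholders; ideally both boxes are booted from a live CD so the disks
are quiescent):

  # on the destination machine:
  nc -l -p 9000 | dd of=/dev/sdX bs=64k

  # on the source machine:
  dd if=/dev/sdX bs=64k | nc destination-host 9000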
At 23:44 -0800 22/12/08, Michael A. Peters wrote:
>Thanks for any suggestions. I may try to find a "GIS for dummies" type
>book, though I've generally not been fond of dummy books, I kind of feel
>like one when it comes to GIS.
Hi Michael,
If you get no satisfactory answers here, you might try ta
At 16:43 +0200 17/6/08, Ralph Angenendt wrote:
>> Is your copy installed from rpm/yum or compiled from source?
>> Mine's the latter.
> rpmforge.
Ah - looking more deeply, my source was configured without
--with-dbdir=/var/lib/clamav which is why it defaulted to looking in
/usr/local/share/clamav
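For anyone else bitten by this: rebuilding from source with the database
directory pointed at the one freshclam actually updates is just (from
memory, so treat as a sketch):

  ./configure --with-dbdir=/var/lib/clamav
  make && make install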
At 14:48 +0200 17/6/08, Ralph Angenendt wrote:
> It doesn't here:
Is your copy installed from rpm/yum or compiled from source? Mine's the latter.
S.
At 13:16 +0200 17/6/08, Ralph Angenendt wrote:
> It does at least open freshclam.conf
True, but then it goes on to look in its compiled-in location too:
open("/etc/freshclam.conf", O_RDONLY) = 3
open("/var/lib/clamav/daily.cld", O_RDONLY) = 3
open("/usr/local/share/clamav/daily.cld", O_RDONLY)
Every day I see in logwatch that my signatures are updated, and the database
notified, but if I try to scan a file manually it tells me that my signatures
are 55 days old.
I think clamscan looks for the db files in a compiled-in default
location of /usr/local/share/clamav and doesn't consult th
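A stopgap that doesn't need a rebuild is to point clamscan at the directory
freshclam is really updating, e.g.:

  clamscan -d /var/lib/clamav /path/to/file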
At 12:59 -0400 2/10/07, Ross S. W. Walker wrote:
> Try running the same benchmark but use bs=4k and count=1048576
Just finished doing that now - comparison graphs are here:
http://community.novacaster.com/showarticle.pl?id=7492
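For reference, a run along those lines is just the following, whether
against a test file or the raw device (/tmp/ddtest here is an arbitrary
file on the array; 4k x 1048576 = 4GB written, then read back):

  dd if=/dev/zero of=/tmp/ddtest bs=4k count=1048576
  dd if=/tmp/ddtest of=/dev/null bs=4k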
> While these tests are running can you run any processes on another
At 13:49 -0400 2/10/07, Ross S. W. Walker wrote:
> Sounds like the issue is more of a CPU issue than a disk issue, so
> just upgrading the hardware and OS should make a big difference in
> itself,
Yeah, that was the plan :-) Basically, we worked out what we needed
to do (alleviate peak load CPU bott
At 13:03 -0400 2/10/07, Ross S. W. Walker wrote:
> Have you tried calculating the performance of your current drives on
> paper to see if it matches your "reality"? It may just be that your
> disks suck...
They're performing to spec for 7200rpm SATA II drives - your help in
determining which was the
At 12:41 -0400 2/10/07, Ross S. W. Walker wrote:
> If the performance issue is identical to the kernel bug mentioned
> in the posting then the only real fix that was mentioned was to
> switch to 32bit from 64bit or to down-rev your kernel, which on
> CentOS means to go down to 4.5 from 5.0.
The irony i
> What is the recurring performance problem you are seeing?
Pretty much exactly the symptoms described in
http://bugzilla.kernel.org/show_bug.cgi?id=7372 relating to read
starvation under heavy write IO causing sluggish system response.
I recently graphed the blocks in/blocks out from vmstat 1
At 09:24 -0400 2/10/07, Ross S. W. Walker wrote:
> Actually the real-real fix was to use the 'deadline' or 'noop' scheduler
> with this card as the default 'cfq' scheduler was designed to work with
> a single drive and not a multiple drive RAID, so it acts as a governor on
> the amount of IO that a single
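For anyone else following along: the scheduler can be switched per device
at runtime via sysfs, no reboot needed (sdb here stands in for whichever
device sits on the 3ware card):

  cat /sys/block/sdb/queue/scheduler          # current choice shown in [brackets]
  echo deadline > /sys/block/sdb/queue/scheduler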
At 12:30 +0200 2/10/07, matthias platzer wrote:
> What I did to work around them was basically switching to XFS for
> everything except / (3ware say their cards are fast, but only on
> XFS) AND using very low nr_requests for every blockdev on the 3ware
> card.
Hi Matthias,
Thanks for this. In my C
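The nr_requests knob Matthias mentions lives next to the scheduler setting
in sysfs; something like this, with 128 being the usual default and the
right lower value a matter of trial and error:

  cat /sys/block/sdb/queue/nr_requests
  echo 64 > /sys/block/sdb/queue/nr_requests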
At 12:01 -0400 26/9/07, Ross S. W. Walker wrote:
> CFQ is intended for single-disk workstations and its IO limits are
> based on that, so it actually acts as an IO governor on RAID setups.
> Only use 'cfq' on single-disk workstations.
> Use 'deadline' on RAID setups and servers.
Many thanks Ross, tha
At 09:14 -0400 26/9/07, Ross S. W. Walker wrote:
> Could you try the benchmarks with the 'deadline' scheduler?
OK, these are all with RHEL5, driver 2.26.06.002-2.6.18, RAID 1:
elevator=deadline:
Sequential reads:
| 2007/09/26-16:19:30 | START | 3065 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1
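To make deadline the default for every device at boot, rather than flipping
it per device in sysfs, elevator=deadline can be appended to the kernel line
in /boot/grub/grub.conf - the kernel version and root device below are just
placeholders for whatever the installer put there:

  kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline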
At 13:26 -0400 25/9/07, Ross S. W. Walker wrote:
> Off of 3ware's support site I was able to download and compile the
> latest stable release which has this modinfo:
> [EMAIL PROTECTED] driver]# modinfo 3w-9xxx.ko
> filename: 3w-9xxx.ko
> version: 2.26.06.002-2.6.18
OK, driver source from t
At 10:36 -0400 25/9/07, Ross S. W. Walker wrote:
Post the modinfo to the list just in case somebody
else knows of any issues with the version you are running.
This is from RHEL5 - it's the driver that comes built-in:
[EMAIL PROTECTED] ~]# modinfo 3w-9xxx
filename: /lib/modules/2.6.18-8.
At 13:35 -0400 24/9/07, Ross S. W. Walker wrote:
> Ok, so here is the command I would use:
Thanks - here are the results (tried CentOS 4.5 and RHEL5, with tests
on sdb when configured as both RAID 0 and as RAID 1):
Sequential reads:
disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 300 -r /dev/sdX
At 10:04 -0400 24/9/07, Ross S. W. Walker wrote:
> How about trying your benchmarks with the 'disktest' utility from the
> LTP (Linux Test Project),
Now fetched and installed - I'd be grateful for a suggestion as to an
appropriate disktest command line for a 4GB RAM twin CPU box with
250GB RAID 1
At 07:46 +0800 24/9/07, Feizhou wrote:
>> ... plus an Out of Memory kill of sshd. Second time around (logged
>> in on the console rather than over ssh), it's just the same except
>> it's hald that happens to get clobbered instead.
> Are you saying that running in RAID0 mode with this card and
> motherboar
At 17:34 +0800 14/9/07, Feizhou wrote:
...oh, do you have a BBU for your write cache on your 3ware board?
Not installed, but the machine's on a UPS.
Ugh. The 3ware code will not give OK then until the stuff has hit disk.
Having now installed BBUs, it's made no difference to the underlying
At 08:18 +0800 15/9/07, Feizhou wrote:
>> Is there any way to tell the card to forget about not having a BBU
>> and behave as if it did?
> Short of modifying the code...I do not know of any.
Well, I've now got BBUs on order for the three identical machines to
see if that does anything to improve mat
At 11:16 -0400 14/9/07, Ross S. W. Walker wrote:
> Yes, a write-back cache with a BBU will definitely help, also your config,
The write-cache is enabled, but what I've not known up to now is that
the absence of a BBU will impact IO performance in this way - which
seems to be what you and Feizho
At 23:07 +0800 14/9/07, Feizhou wrote:
> Well, I do not think it will help much with a larger journal...you
> want RAM speed, not single 250GB SATA disk speed.
Right now, I'd be happy with being able to configure the 3Ware card
as a plain old SATA II passthru interface and do software RAID1 with
At 09:41 -0400 14/9/07, Ross S. W. Walker wrote:
> Try getting another identical 3ware card and swapping them. If it
> produces the same problem, then try putting that card in another
> box with a different motherboard to see if it works then.
I've got three identical machines here - two as yet not u
At 15:43 +0200 14/9/07, Sebastian Walter wrote:
Simon Banton wrote:
> No, I haven't. This is 3ware hardware RAID-1 on two disks with a
> single LVM ext3 / partition - I'm afraid I don't know how to go about
> discovering the chunk size to plug into Ross's calcs.
You
At 08:09 -0400 14/9/07, Jim Perrin wrote:
> Have you done any filesystem optimization and tried matching the
> filesystem to the raid chunk size?
No, I haven't. This is 3ware hardware RAID-1 on two disks with a
single LVM ext3 / partition - I'm afraid I don't know how to go about
discovering the
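For what it's worth, the 3ware CLI will report a unit's stripe size (on
striped units only - a plain mirror doesn't have one), and mke2fs can be
told about it with -E stride; both lines below are from memory and the
controller/unit/device names are just examples:

  tw_cli /c0/u0 show                  # unit type, status and stripe size
  mkfs.ext3 -E stride=16 /dev/sdb1    # stride = chunk size / filesystem block size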
At 17:34 +0800 14/9/07, Feizhou wrote:
...oh, do you have a BBU for your write cache on your 3ware board?
Not installed, but the machine's on a UPS.
I see where you're going with the larger journal idea and I'll give that a go.
Cheers
S.
> Hmm, how are you creating your ext3 filesystem(s) that you test on?
> Try creating it with a large journal (maybe 256MB) and run it in
> full journal mode.
The filesystem was created during the initial CentOS installation,
and I've tried it with ext2 which made no difference.
S.
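For the record, creating a filesystem with a bigger journal and mounting it
in full data-journalling mode looks roughly like this (device and mount
point are examples; an existing filesystem would need the journal removed
and re-added with tune2fs instead):

  mke2fs -j -J size=256 /dev/sdb1               # ext3 with a 256MB journal
  mount -o data=journal /dev/sdb1 /mnt/test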
At 20:52 +0800 13/9/07, Feizhou wrote:
> Well, the first thing I noted was that the H8DA8 was not on the list
> of compatible motherboards on the 3ware website.
I challenged the vendor about that quite early on and was told that
they've used this combo before with no trouble, though I've yet to
p
Dear list,
I thought I'd just share my experiences with this 3Ware card, and see
if anyone might have any suggestions.
System: Supermicro H8DA8 with 2 x Opteron 250 2.4GHz and 4GB RAM
installed. 9550SX-8LP hosting 4x Seagate ST3250820SV 250GB in a RAID
1 plus 2 hot spare config. The array is