[CentOS] Recent CentOS4 kernel update

2008-10-20 Thread James Fidell
I see an update for the CentOS4 kernel, to 2.6.9-78.0.5.EL, has appeared
over the weekend.  I've not seen anything on the announce list for it
though.  Have I missed something?

James


Re: [CentOS] CentOS5.1 PHP 5.2 RPMs?

2008-05-06 Thread James Fidell

Karanbir Singh wrote:

James Fidell wrote:

Karanbir Singh wrote:

James Fidell wrote:
Is there a PHP 5.2 build for CentOS5.1 anywhere?  Unfortunately I need a
fix for a SOAP bug that is present in the current 5.1.6 release.


Not yet, but we are working on it - there should be something there in
the next few days.


Will this be as part of the centosplus repository, or elsewhere?


It will definitely be a part of the centosplus repo (to ensure we get 
some upgrade path sanity from centos-4); however, it might also be 
available as a separate repo itself.



If you have centosplus enabled, you will see it for sure!


What's the status of this now?  I don't see it in the centosplus repo
yet.
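
(For reference, centosplus is enabled here by flipping the stanza in the
stock repo file, and I'm checking with yum directly; the stanza name is
as shipped in CentOS-Base.repo:)

  # /etc/yum.repos.d/CentOS-Base.repo -- centosplus stanza
  [centosplus]
  enabled=1

  # then, to see what the repo currently offers:
  yum --enablerepo=centosplus list php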

James


Re: [CentOS] 5.1 did not detect marvell e-net controller

2008-04-01 Thread James Fidell

John wrote:


Since the errors are at 1000Mb/s, have you tried the Asus-provided Linux
driver?


I shall give that a go next.


BTW, you do have gig-e network capability, right?  Switches?  You can
try ethtool eth0 and try to force gig connectivity.  Or, like you said,
jumbo frames, but does your switching hardware support that?


Yes, I have a gig-e switch :)

I think there may be an interoperability problem with the card and
switch I'm using -- even at 100Mb it isn't entirely reliable, whereas
other devices are.  On a 100Mb switch it works fine.  Next step is to
put e100 and e1000 cards in and test those on the gig-e switch.  I'll
also try dropping back to the release on the Asus site, though it does
look like it's just an earlier release of the same driver as the
Marvell one, rather than a different codebase altogether.
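
For reference, when I force the link settings I'll be doing it along
these lines (assuming the driver honours forced settings):

  # show current link status and advertised modes
  ethtool eth0
  # force gigabit, full duplex, autonegotiation off
  ethtool -s eth0 speed 1000 duplex full autoneg off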

James


Re: [CentOS] 5.1 did not detect marvell e-net controller

2008-03-28 Thread James Fidell

John wrote:

On Wed, 2008-03-26 at 14:55 -0400, Clyde E. Kunkel wrote:

John wrote:

snip
If you are trying to add it, use system-config-network.  I run
basically the same motherboard on my home PC and it does work (the
driver).  GUI: System | Administration | Network | Hardware Tab | New.

  
Yeah, tried that till blue in the face.  The device is just not being 
seen for some reason.  I was able to boot Fedora 8, do a chroot to 
CentOS and bring up the network, but it didn't stick.  (chroot worked 
nicely for doing a yum update, though.)
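
(The chroot dance, for the record, was roughly the following -- the root
volume name is a placeholder for whatever the installer picked:)

  # booted from the Fedora 8 media, then:
  mount /dev/VolGroup00/LogVol00 /mnt/sysimage
  mount --bind /proc /mnt/sysimage/proc
  mount --bind /dev /mnt/sysimage/dev
  chroot /mnt/sysimage
  yum update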


Will try a reinstall from scratch.

Thanks for the suggestion.



BTW, ASUS.com has a Linux driver for that board.  They have one for
mine; mine is a P4P800-E.


There's a more recent driver available than the one on the Asus site.
Google found it for me.  Unfortunately I can't recall where I downloaded
it from now.

Unfortunately, whilst it appears to work OK at 100Mb/s, I get a huge
number of framing errors at 1000Mb/s.  I've not tried enabling jumbo
frames yet, though.
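
When I do get round to jumbo frames, it'll be something like this
(assuming both the driver and the switch support a 9000-byte MTU):

  ifconfig eth0 mtu 9000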

James


Re: [CentOS] 5.1 did not detect marvell e-net controller

2008-03-28 Thread James Fidell

Clyde E. Kunkel wrote:

James Fidell wrote:

John wrote:

snip


There's a more recent driver available than the one on the Asus site.
Google found it for me.  Unfortunately I can't recall where I downloaded
it from now.

Unfortunately, whilst it appears to work OK at 100Mb/s, I get a huge
number of framing errors at 1000Mb/s.  I've not tried enabling jumbo
frames yet, though.



The ASUS driver won't compile for me... it just says compile error,
look in the log.  And the log says compile error.


It wouldn't compile for me the first time, either, because it wasn't
looking in the right place for the kernel files.

After installing kernel-devel and kernel-headers for the current kernel,
I did something like:

  # ln -s /usr/src/kernels/`uname -r`-`uname -p` /usr/src/linux

and that fixed it for me.
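
Once it built, I checked the module had actually loaded with the usual
suspects (sk98lin being the module name in the Marvell package, if I
remember rightly):

  modprobe sk98lin
  lsmod | grep sk98lin
  dmesg | tail        # look for the link-up messages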

I was using the latest drivers I could find, from here:

  http://www.marvell.com/drivers/driverDisplay.do?dId=153&pId=38

James


Re: [CentOS] CentOS5.1 PHP 5.2 RPMs?

2008-03-10 Thread James Fidell

Karanbir Singh wrote:

James Fidell wrote:

Is there a PHP 5.2 build for CentOS5.1 anywhere?  Unfortunately I need a
fix for a SOAP bug that is present in the current 5.1.6 release.


Not yet, but we are working on it - there should be something there in
the next few days.


Will this be as part of the centosplus repository, or elsewhere?

James


[CentOS] CentOS5.1 PHP 5.2 RPMs?

2008-03-07 Thread James Fidell

Is there a PHP 5.2 build for CentOS5.1 anywhere?  Unfortunately I need a
fix for a SOAP bug that is present in the current 5.1.6 release.

James


[CentOS] CentOS 5.1 and Xen HVM guest networking problem

2008-01-14 Thread James Fidell

I've asked this on the Xen users list, but had no response so far:

I'm running CentOS 5.1 with all current updates:

xen-libs-3.0.3-41.el5
xen-3.0.3-41.el5
kernel-xen-devel-2.6.18-53.1.4.el5
kernel-xen-2.6.18-53.1.4.el5

and have a domU config file from another (5.0) server which works.  The
difference is that on the new machine I want to use routed rather than
bridged networking.  In bridging mode the domU works fine, but I
can't use bridging on this particular server because of the external
network configuration.

I have modified /etc/xen/xend-config.sxp, commenting out

  (network-script network-bridge)
  (vif-script vif-bridge)

and uncommenting

  (network-script network-route)
  (vif-script vif-route)

I've also changed the vif line in the domU config file to:

  vif = [ 'ip=aaa.bbb.ccc.ddd, vifname=veth1' ]

The domU now boots, but has no eth0 interface (the driver doesn't load
in the domU because it can't find a suitable device).

Is there anything obvious I might have done wrong or missed out?  There
are no ethernet-related or network device errors in the logs that I can
see.
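
For completeness, these are the sorts of checks I've been running from
dom0 (the domain name is a placeholder):

  xm list guest1      # the domU itself is up
  ifconfig veth1      # the vifname given in the config above
  ip route            # vif-route should add a host route for the
                      # domU's IP via that interface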

James


Re: [CentOS] GFS/LVM/RAID1 recovery question

2007-07-23 Thread James Fidell
Tru Huynh wrote:
 On Mon, Jul 23, 2007 at 04:07:57PM +0100, James Fidell wrote:
 ...
   lvcreate -m 1 ... /dev/sdb /dev/sdc /dev/sdd
 
 or use pvcreate /dev/md0 (md raid1 mirror of sda/sdb/sdc)?

AIUI, MD isn't cluster-{aware,safe} though, so I could end up with all
the servers that can see the physical disks trying to do stuff with the
mirrors individually, making a horrible mess?

(In my configuration, all servers mount the iSCSI devices and I'm using
LVM/clvmd to allow them to keep everything sane and remove any single
point of failure.)
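
(For context, the full version of the lvcreate quoted above was
something like this -- the size and names are just from my test setup:)

  lvcreate -m 1 -L 100G -n gfslv vg_iscsi /dev/sdb /dev/sdc /dev/sdd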

 where sd[bc] are the mirrored (iSCSI) PVs in the VG and sdd is the log.
 I have this working and can write data to the filesystem on one machine
 in the cluster and see it appear elsewhere etc.

 What I now want to do is to test what happens when I disable one
 of the mirrors and then restore it with clean disks.
 http://mirror.centos.org/centos/4/docs/4.5/SAC_Cluster_Logical_Volume_Manager/mirrorrecover.html

Perfect.  That's exactly what I was after.  Thank you.
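
While testing I'm keeping an eye on the mirror state with something
like:

  lvs -a -o +devices    # shows the mirror images/log and their devices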

James


Re: [CentOS] network raid file system/server

2007-06-19 Thread James Fidell
Feizhou wrote:

 In this scenario, iscsi provides the devices remotely and the server
 handles the raiding of the devices.
 
 can you explain it in a bit more detail?
 
 The boxes with disks are now just 'disk servers' and those disks are
 exported to the servers that will provide the filesystem layer. iscsi is
 the technology used to export the disks in this scenario.

Is there a neat way to start the md devices and get them mounted in
this setup?  I have a server on which I want to create /dev/md0 from two
iSCSI partitions.  I then want to export /md0 to other servers using
GFS.

Do I have to hand-craft my own scripts to start the md devices at boot
time and get the filesystems mounted before starting GFS, or is there
already a way to do that?  (I don't have a problem with doing so, but
if there's already a right way to do it...)
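
Failing anything better, I'm picturing a hand-rolled init script along
these lines -- device names are placeholders from my setup:

  #!/bin/sh
  # /etc/init.d/md-iscsi: assemble md0 from the iSCSI partitions after
  # the iscsi service has started and before clvmd/gfs start.
  case "$1" in
  start)
          mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
          ;;
  stop)
          mdadm --stop /dev/md0
          ;;
  esac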

James