Re: RHN Satellite Server?

2007-03-15 Thread Michael Mansour
Hi Miles,

> I hear RH is going to open source this.
> Does the SL team plan to build this as
> well?  Just curious.

Where did you hear this?

RHN is their bread and butter; I'm surprised to hear this news.

Michael.

> Thanks,
> Miles
--- End of Original Message ---


Re: RHN Satellite Server?

2007-03-22 Thread Michael Mansour
Hi,

> > I hear RH is going to open source this.
> > Does the SL team plan to build this as
> > well?  Just curious.
> 
> I guess we will research this.  If someone else rebuilds it we can 
> always add it to contrib.
> 
> Keep us updated if you hear more about this.

The biggest reason this surprised me (and I'll only believe it when I see it) is
that when I took the RH401 Satellite course, I wasn't even allowed an eval
certificate for a temporary Satellite install. The product was only available in
the classroom, to work through quickly before the exam on the last day.

I escalated this through our instructor, who escalated it further through Red
Hat, simply asking for a 30-day eval certificate so I could learn the product.
The answer came back as no.

It was quite frustrating and made no sense to me. So now, hearing they'll GPL
it... well, I won't be holding my breath.

Michael.

> -connie
> > 
> > Thanks,
> > Miles
> >
--- End of Original Message ---


Re: Using 64 bit instead 32 bits SL

2007-03-23 Thread Michael Mansour
Hi Johan,

> If you have dual core 64 bits processors in your server. Is it 
> recommended to install x86-64 version of a linux distro like SL ?
> What are the benefits ? Are there any disadvantages ? It would be 
> for a LAMP, so would Apache or MySQL perform better or wouldn't they 
> make no use of it ? Does anyone have any experience with this ?

I run 64-bit SL 4.4 on all servers whose hardware supports 64-bit mode.

I find everything just runs faster than the 32-bit equivalents, so why not?

The only reason I can think of to run 32-bit on a 64-bit platform is if a
software app I depend on isn't stable in 64-bit.

My advice is to install 64-bit and test it for a couple of weeks with your apps;
if you're happy with it, stay with it, and if not, reinstall 32-bit SL.
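
As a quick sanity check before installing, you can confirm the CPU really is
64-bit capable by looking for the "lm" (long mode) flag in /proc/cpuinfo; this
is a generic Linux check, nothing SL-specific:

# grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable"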

Regards,

Michael.

> Kind Regards,
> 
> Johan Mares
> 
> -- 
> VLIZ
--- End of Original Message ---


Re: Using 64 bit instead 32 bits SL

2007-03-23 Thread Michael Mansour
Hi,

> On Fri, 23 Mar 2007, Ioannis Vranos wrote:
> 
> > OK, I asked from the single application installation perspective, 32-bit one
> > if the 64-bit version is not available.
> 
> Ok, the short answer is "yes".
> 
> A slightly longer answer is "yes, unless it requires shared objects 
> not available as 64-bit builds, and unless your app is 'really 
> smart' and decides for itself whether it will work on this platform 
> or not".
> 
> > Since you mentioned firefox, is there an 64-bit adobe flash plug-in 
> > available
> > for firefox x86_64? What happens with the current 32-bit extensions for
> > firefox and thunderbird regarding firefox x86_64?
> 
> Short answers are "no", and "you use the 32-bit firefox or you don't 
> get to use the plugins which are not available as 64-bit objects".

Actually, that's not entirely true; you can use this:

http://www.cyberciti.biz/tips/linux-flash-java-realplayer-under-64bit-firefox.html

which describes a compatibility wrapper that lets you run 32-bit plugins in
64-bit Firefox.
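
If I remember correctly the wrapper covered in that article is nspluginwrapper;
assuming that's the tool you end up with, the rough usage is along these lines
(the plugin path is only an example and will differ on your system):

# nspluginwrapper -i /usr/lib/mozilla/plugins/libflashplayer.so

which generates a wrapped stub that the x86_64 Firefox can load.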

Regards,

Michael.

> -- 
> Stephan Wiesand
>   DESY - DV -
>   Platanenallee 6
>   15738 Zeuthen, Germany
--- End of Original Message ---


Re: RHN Satellite Server?

2007-03-23 Thread Michael Mansour
Hi Miles,

> Stephen John Smoogen said...
> 
> |Ok as far as I know there has been no official declaration that they
> |are open-sourcing the RHN server.
> 
> AFAIK you are correct.  However...
> 
> |There have been a couple of articles
> |about them having to do so at some point.. but that is speculation of
> |the authors, not an announcement.
> 
> Not unless an InfoWorld author is flat
> out lying.
> 
>   
http://weblog.infoworld.com/openresource/archives/2007/01/red_hat_to_open.html

I'm going to post this to the rhn-users mailing list on Monday and see what RH
say. The RH people who work on RHN day in and day out are on that mailing list
and will give us the flat yes or no we're looking for.

Regards,

Michael.

> While this is not definitive, and I can't find solid
> corroboration anywhere, it's also not (as far as I can
> tell) pure, authorial speculation.
> 
> -Miles
--- End of Original Message ---


Re: Can't run SMP kernel on Dell Optiplex 620

2007-03-29 Thread Michael Mansour
Hi Michael,

> Greetings.  One of the profs here has got a Dell Optiplex 620 
> running SL
> 4.4.  It has an Intel Pentium D chip that is dual-core-capable, and 
> it has the capability enabled.
> 
> From time to time the owner has had problems with the system hanging.
> He usually solves the problem with some stupid computer trick, such 
> as cycling the power, etc.  But yesterday he had one of the usual 
> hangs, except that it was one from which he could not recover.
> 
> The problem is very similar to one that was reported on the SL-users
> list not too long ago.  In more detail, the system either has a 
> kernel panic during the boot sequence, or it boots all the way and 
> allows a login, but almost immediately has a "hard freeze" that 
> requires a power cycle to thaw.
> 
> We've run the Dell diagnostic utilities to test processor, memory, 
> and video, but we didn't find any problems.
> 
> The system is running the latest kernel,  but it will not boot reliably
> with any of the four SMP kernels currently installed on it.
> 
> We've tried all of the voodoo that I saw mentioned in the previous
> discussion (run-level 3, no "rhgb quiet" on the command line), but 
> the only thing that seems to work reliably is to boot with the uni-processor
> kernel.
> 
> This is probably an acceptable work-around for the time being, and my
> hope is that when we do a fresh install with SL 5, we'll all be happy
> again.  But I wonder if any of y'all can provide any further insight
> into this.
> 
> Thanks.

You may also have found references in the list archives to:

noapic
apm=off

and other such options to add to the kernel command line of the SMP kernels.
Try those.
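
For reference, a rough sketch of what the kernel line in /boot/grub/grub.conf
might look like with those options appended (the kernel version and root device
below are only placeholders; yours will differ):

kernel /vmlinuz-2.6.9-55.ELsmp ro root=/dev/VolGroup00/LogVol00 noapic apm=off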

Michael.

>   - Mike
> --
> Michael Hannonmailto:[EMAIL PROTECTED]
> Dept. of Physics  530.752.4966
> University of California  530.752.4717 FAX
> Davis, CA 95616-8677
--- End of Original Message ---


Re: ERRATA ?

2007-04-04 Thread Michael Mansour
Hi Johan,

> I am new on this list and saw some messages with errata. Should I do 
> something special for this or just do yum update or apt-get update 
> follewed by apt-get upgrade ?

Errata are released with different priorities: critical, important, etc.

When they are released you should apply them, as they fix bugs and
vulnerabilities.

You can do a:

# yum check-update

to see what you need, then:

# yum -y update packagename1 packagename2 packagenameN

to update individually or:

# yum -y update

to update everything.

Regards,

Michael.

> Kind regards,
> 
> Johan Mares
> 
> -- 
> VLIZ
> Flanders Marine Institute
--- End of Original Message ---


Re: Migrating SL 4.4 installation from a 80 GB HDD to a 250 GB one

2007-04-12 Thread Michael Mansour
Hi,

> Hi all,
> 
> I got a 250 GB IDE HDD, and my current SL 4.4 x86 installation is on 
> an 80 GB one. Is there any easy way to "clone" the installation to 
> the new one? The current installation includes a standard lvm2 volume.

I have performed this process many times. The way I do it is:

* image the drive with G4L (Ghost for Linux). It's available on freshmeat.net

* redeploy the image onto the new drive

* create a new partition using fdisk for the extra space

* use vgextend to add the new partition

* use lvextend to add that extra space to where I want it

* use ext2online to grow the filesystem(s) on the LVs I've extended.

It's that easy.
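
For reference, a rough sketch of the LVM part of the procedure, using the volume
group and logical volume names from your df output (the new partition name and
the size to add are only examples and will differ on your system; run this
against the new 250 GB drive, of course):

# fdisk /dev/hdh            (create e.g. /dev/hdh3, type 8e, in the free space)
# pvcreate /dev/hdh3        (initialise the new partition as a physical volume)
# vgextend VolGroup00 /dev/hdh3
# lvextend -L +150G /dev/VolGroup00/LogVol00
# ext2online /dev/VolGroup00/LogVol00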

Michael.

> [EMAIL PROTECTED] download]# df -h
> FilesystemSize  Used Avail Use% Mounted on
> /dev/mapper/VolGroup00-LogVol00
> 71G   22G   48G  31% /
> /dev/hdh1  99M   14M   80M  15% /boot
> none  506M 0  506M   0% /dev/shm
> 
> [EMAIL PROTECTED] download]#
> 
> /dev/hdh is the current 80 GB HDD.
> 
> [EMAIL PROTECTED] download]# fdisk /dev/hdh
> 
> The number of cylinders for this disk is set to 9729.
> There is nothing wrong with that, but this is larger than 1024,
> and could in certain setups cause problems with:
> 1) software that runs at boot time (e.g., old versions of LILO)
> 2) booting and partitioning software from other OSs
> (e.g., DOS FDISK, OS/2 FDISK)
> 
> Command (m for help): p
> 
> Disk /dev/hdh: 80.0 GB, 80026361856 bytes
> 255 heads, 63 sectors/track, 9729 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
> Device Boot  Start End  Blocks   Id  System
> /dev/hdh1   *   1  13  104391   83  Linux
> /dev/hdh2  14972978043770   8e  Linux LVM
> 
> Command (m for help):
> 
> Thanks in advance.
--- End of Original Message ---


ATrpms x86_64 mirror on SL FTP server

2007-05-29 Thread Michael Mansour
Hi,

When trying to install from ATrpms x86_64 mirrored from SL's FTP, I get:

# yum --enablerepo=atrpms -y update DCC
Loading "kernel-module" plugin
Setting up Update Process
Setting up repositories
ftp://ftp.scientificlinux.org/linux/extra/atrpms/sl4-x86_64/stable/repodata/repomd.xml:
[Errno 4] IOError: HTTP Error 404: Not Found
Trying other mirror.
Cannot open/read repomd.xml file for repository: atrpms
failure: repodata/repomd.xml from atrpms: [Errno 256] No more mirrors to try.
Error: failure: repodata/repomd.xml from atrpms: [Errno 256] No more mirrors
to try.

which is expected, since the actual path is:

ftp://ftp.scientificlinux.org/linux/extra/atrpms/sl4-x86_64/atrpms.copy/stable

My atrpms.repo file is:

# rpm -qf /etc/yum.repos.d/atrpms.repo
yum-conf-44-1.SL.noarch

and contains:

# cat /etc/yum.repos.d/atrpms.repo
[atrpms]
name=ATrpms rpms
baseurl=ftp://ftp.scientificlinux.org/linux/extra/atrpms/sl4-$basearch/stable
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-atrpms

Could this path be corrected please?
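
In the meantime, a possible local workaround (untested, and based purely on the
path above) would be to point the baseurl in /etc/yum.repos.d/atrpms.repo at the
atrpms.copy directory:

baseurl=ftp://ftp.scientificlinux.org/linux/extra/atrpms/sl4-$basearch/atrpms.copy/stable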

Thanks.

Michael.


Can't update kdebase on SL308

2007-06-19 Thread Michael Mansour
Hi,

I get this problem when I try to update to the latest kdebase:

# yum check-update
Setting up repositories
Reading repository metadata in from local files

kdebase.i386 6:3.1.3-5.16   sl308errata

# yum -y update kdebase.i386
Setting up Update Process
Setting up repositories
Reading repository metadata in from local files
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Package kdebase.i386 6:3.1.3-5.16 set to be updated
--> Running transaction check
--> Processing Dependency: libsensors.so.1 for package: kdebase
--> Finished Dependency Resolution
Error: Missing Dependency: libsensors.so.1 is needed by package kdebase

Any ideas?

Regards,

Michael.


Re: Can't update kdebase on SL308

2007-06-20 Thread Michael Mansour
Hi Troy,

> Michael Mansour wrote:
> > Hi,
> > 
> > I get this problem when I try to update to the latest kdebase:
> > 
> > # yum check-update
> > Setting up repositories
> > Reading repository metadata in from local files
> > 
> > kdebase.i386 6:3.1.3-5.16   sl308errata
> > 
> > # yum -y update kdebase.i386
> > Setting up Update Process
> > Setting up repositories
> > Reading repository metadata in from local files
> > Resolving Dependencies
> > --> Populating transaction set with selected packages. Please wait.
> > ---> Package kdebase.i386 6:3.1.3-5.16 set to be updated
> > --> Running transaction check
> > --> Processing Dependency: libsensors.so.1 for package: kdebase
> > --> Finished Dependency Resolution
> > Error: Missing Dependency: libsensors.so.1 is needed by package kdebase
> > 
> > Any ideas?
> > 
> > Regards,
> > 
> > Michael.
> 
> I have just triple checked, and kdebase updates fine on a normally 
> installed machine. Do you have lm_sensors excluded in your yum.conf? 
> But then, how would you get kdebase installed in the first place.

I don't have the lm_sensors package excluded; I just don't use the lm_sensors
supplied by SL but the one supplied by ATrpms, as it's much more up to date and
supports the chipset on that server's motherboard.

Maybe this is what has caused the problem?

Michael.


Re: Scientific Linux 5.0 Virtual Machine

2007-06-20 Thread Michael Mansour
Hi Peter,

> * Tux Distro ([EMAIL PROTECTED]) [20070620 20:25]:
> 
> > http://www.tuxdistro.com/torrents-details.php?id=346 via
> > BitTorrent, and the only other thing you would need is the free
> > VMware Player.  We hope some may find this useful.
> 
> Please make it available via other means than BitTorrent as
> well (e.g. HTTP and/or FTP).  BitTorrent usage is not permitted
> everywhere.  Thank you.

I started to download the torrent earlier, it's 911Mb.

I'm not sure how many sites will host files this large over FTP/HTTP, as those
protocols are resource-hungry on the server side, and failed downloads often
have to be restarted from scratch when primitive clients that can't resume are
used.

Bittorrent is the most efficient way to deliver files this large.

Anyway, maybe someone with the capacity can host it, but if you visit VMware's
website you'll find that their hundreds of VMware appliance downloads are also
BitTorrent-only.

Michael.

> Peter
> 
> -- 
> .+'''+. .+'''+. .+'''+. .+'''+. .+''
>  Kelemen Péter /   \   /   \ [EMAIL PROTECTED]
> .+' `+...+' `+...+' `+...+' `+...+'
--- End of Original Message ---


Re: SL5 eth0 mystery

2007-06-27 Thread Michael Mansour
Hi Pan,

> Greetings,
> 
> Color me perplexed.
> 
> We just purchased a new compute node for our computing cluster. All the
> compute nodes are currently running SL4.4 x86_64 and are interconnected
> on the 192.168.1.0 private network.
> 
> I have a clean install of SL5 x86_64 (from CDs) on the new box. The only
> software selection I checked was [X] GUI Server (I may have the word
> order or capitalization wrong).
> 
> The box has two NICs. eth0 is configured for the private network, and
> eth1 is set to DHCP, not active on boot (and it's not connected to
> anything).
> 
> Of the "first boot" selections, the only thing I changed was to disable
> the firewall.
> 
> When I found I couldn't ping the gateway, I immediately rebooted ('cause
> I'm lazy and I wasn't sure disabling the firewall "took"). No change.
> 
> Here's some hopefully relevant information:
> 
> [EMAIL PROTECTED] ~]# /sbin/ifconfig eth0
> eth0  Link encap:Ethernet  HWaddr 00:A0:D1:E5:E9:EC
>   inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0
>   inet6 addr: fe80::2a0:d1ff:fee5:e9ec/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:65 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:0 (0.0 b)  TX bytes:8742 (8.5 KiB)
>   Interrupt:58 Base address:0x8000

Do an:

# ethtool eth0

and show us the output.

Michael.


perl-Net-DNS worth updating?

2007-07-08 Thread Michael Mansour
Hi,

Is perl-Net-DNS worth updating?

http://www.securityfocus.com/bid/24669/discuss

Checking the package on SL45 (RHEL4 U5) I can't see any backported fixes by
the upstream vendor yet:

* Tue Oct 12 2004 Warren Togami <[EMAIL PROTECTED]> 0.48-1

- #119983 0.48 fixes bugs

* Thu Sep 23 2004 Chip Turner <[EMAIL PROTECTED]> 0.45-4

- rebuild

etc.

Michael.


SL5 x86_64 DVD

2007-07-16 Thread Michael Mansour
Hi,

What's the correct way to get the SL5 x86_64 DVD?

Going to either of these links:

http://ftp.scientificlinux.org/linux/scientific/50/iso/ 
ftp://ftp.scientificlinux.org/linux/scientific/50/iso/ 

doesn't work, i.e.:

http://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso

does nothing and:

ftp://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso

is only 310Mb.

Regards,

Michael.


SL4.5 on download page

2007-07-16 Thread Michael Mansour
Hi,

Going here:

https://www.scientificlinux.org/download

I don't see the link for the Scientific Linux 4.5 release.

Can it be added please?

Michael.


Re: updates from other repos

2007-07-17 Thread Michael Mansour
Hi Ken,

> If I install packages from ATrpms or DAG, do I simply modify the 
> enable flag in the respective repo files in /etc/yum.repo.d so the 
> system gets properly updated from these other repos during the 
> nightly yum update cron job?  Is it really this simple or are there 
> side-effects I should be aware of?

Personally, I disable all third-party repos by default and enable them only
when I want packages from them.

With rpmforge (dag, dries), you can basically leave it enabled by default, as
most of the packages it supplies don't interfere with the core Linux
distribution (speaking from experience).

ATrpms doesn't work that way: it's geared more towards the "enable it by
default" approach, and it will update many core distribution packages, putting
you out of sync with updates from SL.

Depending on your requirements, this may or may not be what you want, and
ATrpms does support you if things go wrong.

You can instead be selective, pulling only certain packages from a repo, but
then you're on your own if things go wrong.
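
For example, with a repo left at enabled=0 in its .repo file, you can pull from
it for a one-off transaction only (the repo id must match what's in your
/etc/yum.repos.d files, and the package name here is just an example):

# yum --enablerepo=atrpms install somepackage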

Regards,

Michael.

> Thanks!
--- End of Original Message ---


Re: updates from other repos

2007-07-17 Thread Michael Mansour
Hi John,

> Ken Teh wrote:
> > If I install packages from ATrpms or DAG, do I simply modify the enable 
> > flag in the respective repo files in /etc/yum.repo.d so the system gets 
> > properly updated from these other repos during the nightly yum update 
> > cron job?  Is it really this simple or are there side-effects I should 
> > be aware of?
> 
> I enabled some repos on a CentOS4 box and did an upgrade.
> 
> I was unamused to find my postgresql got updated from 7.x to 8.x.
> 
> I've nothing against postgresql 8.x, but I really didn't want to 
> convert my database in a rush.

I'm sure we've all experienced this at some stage in our admin careers.

The lessons to be learned from such things are:

* don't blindly enable things without knowing what you're enabling.

* if using yum, don't use "yum -y" as that will assume a "yes" to all
questions. A standard "yum" without the -y option will prompt you before
applying the updates with "Are you sure you want to install y/N" with the
default being N. 

yum shows you the list of packages it's going to update before it proceeds;
it's easy to scan that list to see whether postgresql (or any other package you
care about) will be updated, and from which repo.

* if using yum, look at /var/log/yum.log to see what was updated; from there
you can easily downgrade to the previous packages.

* make a backup or image of your server/desktop if possible before applying
multiple updates.
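
On the downgrade point above, a rough example (the package name and version are
hypothetical): with the previous RPM still available locally or on a mirror, a
single package can be rolled back with:

# rpm -Uvh --oldpackage postgresql-7.4.17-1.i386.rpm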

Personally, I use rpmforge and atrpms only for the things I need but don't want
to spend too much time packaging myself; for the things they don't have, I put
in the time to package and maintain them myself.

Regards,

Michael.

> --
> 
> Cheers
> John
> 
> -- spambait
> [EMAIL PROTECTED]  [EMAIL PROTECTED]
> 
> Please do not reply off-list
--- End of Original Message ---


Re: The 'U' word

2007-07-17 Thread Michael Mansour
Hi Michael,

> (apologies to jmh for sending this twice, forgot to send it to the 
> list)
> 
> > I'd like to know how others are dealing with this.  Is anybody using
> > Ubuntu clients with SL servers for instance?  Any other words of wisdom
> > on this topic?

For servers I use SL308 and SL45 (i386 and x86_64), and will soon be using SL5
(i386 and x86_64).

For desktops, I use PCLinuxOS 2007 business edition. It's i386-only at the
moment (64-bit is coming), but it's Mandrake/Mandriva-based, so it's a snap to
administer and very easy to use.

> Personally, I don't think its scientific applications in general that
> are lacking, but Firefox 2.0, for example.  I don't mind installing
> the odd application in /usr/local/bin, but I don't want to have to
> install most of the things I use that way.
> 
> I've just started up a lab that is using Linux workstations and
> servers, and I felt it was a lot easier to just install Fedora on the
> workstations.  Lots of flexibility (and lots of rope to hang yourself
> with.)
> 
> But, for our servers, I need them to be compatible with Redhat and
> very stable.  In general, there aren't users using them directly.  SL
> seems like the way to go.
> 
> I've used Ubuntu, and I felt like it got a bit messy.  There are so
> many things available, but a lot of them didn't work very well.

Michael.


Re: Bizarre screen/htop interaction

2007-07-18 Thread Michael Mansour
Hi Paul,

> Greetings!
> 
> I've just added a new node to our computing cluster and it's exhibiting
> some odd behavior. I'm afraid the background information is rather 
> long, but I don't want to leave anything out that may be useful.
> 
> Our cluster has a single node available on the public network (ssh 
> only) and eight (now nine) additional nodes on a private network (accessible
> only via ssh from the login node).
> 
> The original nodes are eight Sun V20z dual-Opteron boxes and one Sun
> V40z quad-Opteron box. RAM varies from 4G to 32G.
> 
> We recently added a new node based on a Penguin Altus 600 box with 
> two dual-core Opterons and 16G of RAM.
> 
> One monitoring tool I use is htop (from dag) and screen. I log into the
> login node, run screen, run htop, and then ^Ac and ssh to one of the
> compute nodes and run htop. I repeat until I have htop running on all
> nodes and can ^An and ^Ap to move around. I can detach the screen and
> re-attach it (from anywhere) whenever I want to have a quick look at
> what's going on.
> 
> I do this as a normal user.
> 
> When I follow this scenario and start htop on the new node, screen
> consumes between 20-30% CPU and never updates. top runs fine.
> 
> Furthermore, if I just ssh from the login node to the new compute node
> (screen not involved), htop behaves normally.
> 
> And further-furthermore, if I use the original scenario, open a 
> screen on the new node, su to root and run htop, htop behaves normally.

Which version of htop are you using, and where did you install it from?

Michael.

> Any ideas, list-dwellers?
> 
> Cheers,
>  Pann
> -- 
> Pann McCuaig <[EMAIL PROTECTED]>212-854-8689
> Systems Coordinator, Economics Department, Columbia University
> Department Computing Resources:
>http://www.columbia.edu/cu/economics/computing/
--- End of Original Message ---


Re: Bizarre screen/htop interaction

2007-07-18 Thread Michael Mansour
Hi Pann,

> $ rpm -qi htop
> Name        : htop                          Relocations: (not relocatable)
> Version     : 0.6.6                              Vendor: Dag Apt Repository, http://dag.wieers.com/apt/
> Release     : 1.el5.rf                       Build Date: Sat 02 Jun 2007 04:30:01 AM EDT
> Install Date: Mon 02 Jul 2007 04:09:17 PM EDT  Build Host: lisse.leuven.wieers.com
> Group       : Applications/System            Source RPM: htop-0.6.6-1.el5.rf.src.rpm
> Size        : 149023                            License: GPL
> Signature   : DSA/SHA1, Sat 02 Jun 2007 10:11:07 AM EDT, Key ID a20e52146b8d79e6
> Packager    : Dag Wieers <[EMAIL PROTECTED]>
> URL         : http://htop.sourceforge.net/
> Summary     : Interactive process viewer
> Description :
> htop is an interactive process viewer for Linux.

Please don't top-post, Pann; it makes it hard for people searching the mailing
list archives to follow the progression of a troubleshooting exercise.

OK, I use the same htop from Dag. I've tried to reproduce this problem (but on
SL4.5) and unfortunately cannot.

I will have an SL5 (x86_64) machine ready sometime this week, and will try to
reproduce the problem there too.

Have you also tried raising this issue with the htop developers?

Michael.

> > Hi Paul,
> > 
> > > Greetings!
> > > 
> > > I've just added a new node to our computing cluster and it's exhibiting
> > > some odd behavior. I'm afraid the background information is rather 
> > > long, but I don't want to leave anything out that may be useful.
> > > 
> > > Our cluster has a single node available on the public network (ssh 
> > > only) and eight (now nine) additional nodes on a private network 
> > > (accessible
> > > only via ssh from the login node).
> > > 
> > > The original nodes are eight Sun V20z dual-Opteron boxes and one Sun
> > > V40z quad-Opteron box. RAM varies from 4G to 32G.
> > > 
> > > We recently added a new node based on a Penguin Altus 600 box with 
> > > two dual-core Opterons and 16G of RAM.
> > > 
> > > One monitoring tool I use is htop (from dag) and screen. I log into the
> > > login node, run screen, run htop, and then ^Ac and ssh to one of the
> > > compute nodes and run htop. I repeat until I have htop running on all
> > > nodes and can ^An and ^Ap to move around. I can detach the screen and
> > > re-attach it (from anywhere) whenever I want to have a quick look at
> > > what's going on.
> > > 
> > > I do this as a normal user.
> > > 
> > > When I follow this scenario and start htop on the new node, screen
> > > consumes between 20-30% CPU and never updates. top runs fine.
> > > 
> > > Furthermore, if I just ssh from the login node to the new compute node
> > > (screen not involved), htop behaves normally.
> > > 
> > > And further-furthermore, if I use the original scenario, open a 
> > > screen on the new node, su to root and run htop, htop behaves normally.
> > 
> > Which version of htop are you using? where did you install it from?
> > 
> > Michael.
> > 
> > > Any ideas, list-dwellers?
> > > 
> > > Cheers,
> > >  Pann
> 
> -- 
> Pann McCuaig <[EMAIL PROTECTED]>212-854-8689
> Systems Coordinator, Economics Department, Columbia University
> Department Computing Resources:
>http://www.columbia.edu/cu/economics/computing/
--- End of Original Message ---


Re: SL5 x86_64 DVD

2007-07-18 Thread Michael Mansour
Hi Troy,

> Michael Mansour wrote:
> > Hi,
> > 
> > What's the correct way to get the SL5 x86_64 DVD?
> > 
> > Going to either of these links:
> > 
> > http://ftp.scientificlinux.org/linux/scientific/50/iso/ 
> > ftp://ftp.scientificlinux.org/linux/scientific/50/iso/ 
> > 
> > don't work ie:
> > 
> >
http://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso
> > 
> > does nothing and:
> > 
> >
ftp://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso
> > 
> > is only 310Mb.
> > 
> > Regards,
> > 
> > Michael.
> 
> Hi Miachel,
> I've checked from a couple of different places.  These links all 
> work for me.  Are you still having a problem? Also, do you have some 
> type of proxy between you and ftp.scientificlinux.org?

OK, I checked from a few different places and the links worked. I then checked
from work (HP) again and they didn't work, changed the proxy server to another
one within HP, and they worked.

Obviously, one of their Singapore proxies is broken.

Thanks.

Michael.

> Troy
> 
> -- 
> __
> Troy Dawson  [EMAIL PROTECTED]  (630)840-6468
> Fermilab  ComputingDivision/LCSI/CSI DSS Group
> __
--- End of Original Message ---


Re: SL5 x86_64 DVD

2007-07-18 Thread Michael Mansour
Hi Igor,

> Hello
> First you must have a Windows NTFS file system and a minimum of 5 GB of
> free space on disk. If you use IE you may have trouble downloading the
> image; a better choice is to use Firefox. Visit
> ftp://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/ and move the
> mouse over SL-5.0-051607-x86_64-DVD.iso, then click the right mouse button
> and choose "save target as". If you would like to know how I install SL,
> please visit https://www.scientificlinux.org/developers/
> I'm downloading Linux without problems.

Thank you for your reply. It turned out to be a proxy cache problem at work;
after I changed the proxy I got to the "real" SL HTTP and FTP sites.

Michael.

> Igor
> 
> Michael Mansour pravi:
> > Hi,
> >
> > What's the correct way to get the SL5 x86_64 DVD?
> >
> > Going to either of these links:
> >
> > http://ftp.scientificlinux.org/linux/scientific/50/iso/ 
> > ftp://ftp.scientificlinux.org/linux/scientific/50/iso/ 
> >
> > don't work ie:
> >
> >
http://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso
> >
> > does nothing and:
> >
> >
ftp://ftp.scientificlinux.org/linux/scientific/50/iso/x86_64/SL-5.0-051607-x86_64-DVD.iso
> >
> > is only 310Mb.
> >
> > Regards,
> >
> > Michael.
> >
> >
--- End of Original Message ---


SL5 i386 vi/vim bug

2007-07-21 Thread Michael Mansour
Hi,

I found a vi/vim problem with SL5 i386.

SL5 is using /bin/vi (the old vi, which doesn't support colours). Looking at an
SL4.5 server, vi is aliased to vim there:

[EMAIL PROTECTED] profile.d]# which vi
alias vi='vim'
/usr/bin/vim

which is what I want for the colours.

The alias is set by the /etc/profile.d/vim.sh script when you log in.

So for SL4.5:

# cat vim.sh
if [ -n "$BASH_VERSION" -o -n "$KSH_VERSION" -o -n "$ZSH_VERSION" ]; then
  # for bash, pdksh and zsh, only if no alias is already set
  alias vi >/dev/null 2>&1 || alias vi=vim
fi

# sh -x vim.sh
+ '[' -n '3.00.15(1)-release' -o -n '' -o -n '' ']'
+ alias vi
+ alias vi=vim
[EMAIL PROTECTED] profile.d]#

runs through fine and works:

# which vi
alias vi='vim'
/usr/bin/vim

for SL5:

# sh -x ./vim.sh
+ '[' -n '3.1.17(1)-release' -o -n '' -o -n '' ']'
+ '[' -x /usr/bin/id ']'
++ /usr/bin/id -u
+ '[' 0 -le 100 ']'
+ return
./vim.sh: line 3: return: can only `return' from a function or sourced script
+ alias vi
+ alias vi=vim

has an error and doesn't work:

# which vi
/bin/vi

Should I raise a bugzilla for this for TUV?
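
Looking at the trace above, the SL5 script also returns early when the uid is
<= 100 (which includes root) before it ever reaches the alias line, so in the
meantime a simple per-account workaround should be to set the alias yourself,
e.g. in ~/.bashrc:

alias vi='vim'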

Michael.


Re: secure ftp with vsftp without remote shell login

2007-07-26 Thread Michael Mansour
Hi Johan,

> SL5.0 webserver, I created a user for every 'customer' with no shell 
> login (sbin/nologin). The users home = /var/www/html/name-of-site/htdocs.
> In vsftpd.conf I made sure that they cannot leave their home dir 
> (chroot jail). Some 'customers' want to use secure ftp, but then I 
> have to give them shell login. Is there a possibility to let users 
> choose wether they want to use FTP or SFTP without giving them 
> remote shell login ?

With SFTP yes, with vsftpd I'm not sure (maybe someone else can answer).

With SFTP it's a simple matter of chroot-jailing them and making their login
shell the sftp-server binary.
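
A minimal sketch of the shell part (the sftp-server path below is the usual
RHEL/SL location but should be verified on your system; the chroot jail itself
needs a chroot-patched OpenSSH or a helper such as rssh or scponly, which is an
assumption on my part, not something stock SL5 ships configured):

# echo /usr/libexec/openssh/sftp-server >> /etc/shells
# usermod -s /usr/libexec/openssh/sftp-server customer1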

Regards,

Michael.

> thanx,
> 
> Johan Mares
> 
> -- 
> VLIZ
> Flanders Marine Institute
--- End of Original Message ---


Re: system-config-netboot in SL/TUV 5.0?

2007-08-01 Thread Michael Mansour
Hi Michael,

> Hi, folks.  In SL 4.x there is a package:
> 
> system-config-netboot
> 
> that contains, among other things, the pxelinux stuff:
> 
> /tftpboot/linux-install/pxelinux.0
> /tftpboot/linux-install/pxelinux.cfg
> etc.
> 
> This seems to be missing in both SL 5.0 and TUV 5.0, so far as I can
> tell.  Is the pxelinux stuff now packaged elsewhere?  Thanks.

It's a good point you raise, as I'm migrating my PXE boot setup from SL4.5 to
SL5 at the moment and have just run into the same obstacle.

Here:

http://www.redhat.com/archives/rhelv5-list/2007-April/msg00123.html

seems to suggest that TUV couldn't include it in el5, but also that installing
the FC6 package:

http://ftp.univie.ac.at/systems/linux/fedora/core/6/i386/os/Fedora/RPMS/system-config-netboot-0.1.41-1.FC6.noarch.rpm

will work, or even recompiling the el4 package:

http://updates.redhat.com/enterprise/4ES/en/os/SRPMS/system-config-netboot-0.1.40-1_EL4.src.rpm

as a workaround.

Well, I've tried the first way on SL5:

# rpm -Uvh
http://ftp.univie.ac.at/systems/linux/fedora/core/6/i386/os/Fedora/RPMS/system-config-netboot-0.1.41-1.FC6.noarch.rpm
Retrieving
http://ftp.univie.ac.at/systems/linux/fedora/core/6/i386/os/Fedora/RPMS/system-config-netboot-0.1.41-1.FC6.noarch.rpm
warning: /var/tmp/rpm-xfer.1tCEzJ: Header V3 DSA signature: NOKEY, key ID 
4f2a6fd2
Preparing...### [100%]
   1:system-config-netboot  ### [100%]

and:

# rpm -ql system-config-netboot
/etc/pam.d/system-config-netboot
/etc/security/console.apps/system-config-netboot
/tftpboot/linux-install/msgs
/tftpboot/linux-install/msgs/boot.msg
/tftpboot/linux-install/msgs/expert.msg
/tftpboot/linux-install/msgs/general.msg
/tftpboot/linux-install/msgs/param.msg
/tftpboot/linux-install/msgs/rescue.msg
/tftpboot/linux-install/msgs/snake.msg
/tftpboot/linux-install/pxelinux.0
/tftpboot/linux-install/pxelinux.cfg
/usr/bin/system-config-netboot
/usr/sbin/pxeboot
/usr/sbin/pxeos
/usr/sbin/system-config-netboot
/usr/share/applications/system-config-netboot.desktop
/usr/share/doc/system-config-netboot-0.1.41
/usr/share/doc/system-config-netboot-0.1.41/COPYING
/usr/share/doc/system-config-netboot-0.1.41/ch-diskless.html
/usr/share/doc/system-config-netboot-0.1.41/ch-pxe.html
/usr/share/doc/system-config-netboot-0.1.41/figs
/usr/share/doc/system-config-netboot-0.1.41/figs/diskless
/usr/share/doc/system-config-netboot-0.1.41/figs/diskless/add-host.png
/usr/share/doc/system-config-netboot-0.1.41/figs/pxe
/usr/share/doc/system-config-netboot-0.1.41/figs/pxe/netboot-add-host-dialog.png
/usr/share/doc/system-config-netboot-0.1.41/figs/pxe/netboot-add-hosts.png
/usr/share/doc/system-config-netboot-0.1.41/figs/pxe/network-install-setup.png
/usr/share/doc/system-config-netboot-0.1.41/figs/pxe/temp.png
/usr/share/doc/system-config-netboot-0.1.41/index.html
/usr/share/doc/system-config-netboot-0.1.41/legalnotice.html
/usr/share/doc/system-config-netboot-0.1.41/netboot-performing.html
/usr/share/doc/system-config-netboot-0.1.41/rhdocs-man.css
/usr/share/doc/system-config-netboot-0.1.41/s1-diskless-booting.html
/usr/share/doc/system-config-netboot-0.1.41/s1-diskless-dhcp.html
/usr/share/doc/system-config-netboot-0.1.41/s1-diskless-hosts.html
/usr/share/doc/system-config-netboot-0.1.41/s1-diskless-netboot.html
/usr/share/doc/system-config-netboot-0.1.41/s1-diskless-nfs.html
/usr/share/doc/system-config-netboot-0.1.41/s1-netboot-add-hosts.html
/usr/share/doc/system-config-netboot-0.1.41/s1-netboot-dhcp.html
/usr/share/doc/system-config-netboot-0.1.41/s1-netboot-pxe-config.html
/usr/share/doc/system-config-netboot-0.1.41/s1-netboot-tftp.html
/usr/share/doc/system-config-netboot-0.1.41/s2-netboot-custom-msg.html
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images/caution.png
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images/important.png
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images/note.png
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images/tip.png
/usr/share/doc/system-config-netboot-0.1.41/stylesheet-images/warning.png
/usr/share/icons/hicolor/48x48/apps/system-config-netboot.png
/usr/share/locale/ar/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/bg/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/bn/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/bs/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/ca/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/cs/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/cy/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/da/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/de/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/el/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/es/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale/et/LC_MESSAGES/system-config-netboot.mo
/usr/share/locale

Re: XFS file system

2007-08-08 Thread Michael Mansour
Hi guys,

> Donald Tripp wrote:
> > This argument sounds vary familiar to NFS vs GFS vs Lustre vs GPFS... 
> > All file systems have their pros and cons, and no file system is fool 
> > proof. XFS is a good file system, so is Reiser, and ext3, and HFS 
> > (Apple), but they all have their own faults. 
> > 
> > Just my 2 cents...
> >
> 
> Yes and no,
> Faster and slower is one thing, data corruption is another.
> 
> OK, so I decided pull out the "way back" machine and go through old mail.
> 
>  From November 2006, from Peter Kelemen at CERN
> -
> Starting with RHEL4, the 4KSTACKS option is enabled when the
> kernel is compiled.  This limits each process' kernel stack to
> 4K with separate stack for interrupts.  XFS can have deep call
> chains (it's a complex filesystem doing complex stuff) and the
> codebase included in SL4 has not been updated to take this reduced
> stackspace into consideration (it's effectively the XFS codebase
> from the 2.6.9 times).  As a result, it is possible to load the
> machine so that XFS overflows its stack and then the game is
> over.  It can be easily triggered by stacking several software
> layers (SCSI+LVM/MD+XFS+NFS) on top of each other but it has
> been demonstrated that the stack overflow can be triggered with
> sufficient load on plain SCSI+XFS systems as well.
> -

I second Troy's description there. I used SL4 32-bit and ran into the above
problem after stacking SCSI+MD+DRBD+LVM+XFS; guess what, it just stopped
working and I was forced to move to ext3 even though I didn't want to.

Of course, the SL4 64-bit kernel doesn't have this problem.

> That is the most clear description of the problem I think
> 
> Now, this is from May, in a discussion about CentOS's XFS kernel 
> modules in CentOS 5 This is Axel Thimm responding
> ---
> > Weren't they having problems with it on 32 bit?  Just like before?
> > Or was I reading those e-mails wrong?
> 
> That is supposedly solved, and the two lead developers at SGI on XFS
> on Linux now have @redhat.com addresses and thus RH got some XFS 
> love, too (previously there was a strong Suse link). Still RH pushes 
> for its favourite filesystems, e.g. ext3/4 and gfs2.
> -
> 
> So there might be light at the end of the tunnel for 32 bit.
> But ... that also might mean that now the lead developers that were 
> fixing XFS had different projects at RedHat so the XFS development 
> has lost it's developers.  I personally don't know.

XFS is great and is my filesystem of choice, but you can make ext3 perform much
the same as XFS by turning off some of the safety features that ext3 enables by
default.
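
For what it's worth, the kind of tuning I mean is along these lines; a rough
sketch only (the device and mount point are placeholders, and data=writeback
trades crash-safety for speed, so weigh it carefully):

/dev/VolGroup00/LogVol01  /data  ext3  noatime,data=writeback  1 2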

The new kid on the block is Sun's ZFS, which blows all these other filesystems
out of the water when it comes to feature sets. For now, we have to stay a step
behind in the Linux world until such a filesystem becomes mainstream.

Regards,

Michael.

> Troy
> -- 
> __
> Troy Dawson  [EMAIL PROTECTED]  (630)840-6468
> Fermilab  ComputingDivision/LCSI/CSI DSS Group
> __
--- End of Original Message ---


Re: XFS file system

2007-08-08 Thread Michael Mansour
Hi Karl,

> Donald Tripp wrote:
> > This argument sounds vary familiar to NFS vs GFS vs Lustre vs GPFS... 
> > All file systems have their pros and cons, and no file system is fool 
> > proof. XFS is a good file system, so is Reiser, and ext3, and HFS 
> > (Apple), but they all have their own faults. 
> >
> > Just my 2 cents...
> >
> >
> > - Donald Tripp
> >  [EMAIL PROTECTED] 
> > --
> > HPC Systems Administrator
> > High Performance Computing Center
> > University of Hawai'i at Hilo
> > 200 W. Kawili Street
> > Hilo,   Hawaii   96720
> > http://www.hpc.uhh.hawaii.edu
> >
> >
> > On Aug 8, 2007, at 10:15 AM, Troy Dawson wrote:
> >
> >> Brent L. Bates wrote:
> >>>  The installer will not look in the contrib directories?  Ok.  
> >>> Could I as
> >>> part of my combining the CD's into a single DVD process, move the 
> >>> XFS RPM's
> >>> out of the contrib area and into the main stream directories?  Or 
> >>> perhaps,
> >>> with the DVD I've already burned, do some kind of shell escape out 
> >>> of the
> >>> install GUI and install them from there?  People want XFS and we 
> >>> really don't
> >>> care about the top level vendors prejudices.  I'm willing to work 
> >>> with the SL
> >>> people to get a more reasonable solution to this on going problem.
> >>
> >> This isn't a "top level vendors prejudices"
> >> This is a Scientific Linux Developers prejudices.
> >> "People want XFS" until they start loosing critical data.  Trust me, 
> >> it's happened here.  They scream for it and scream for it, and then 
> >> scream at you when it corrupts stuff and somehow it's all become my 
> >> fault.
> >>
> >> You can do all sorts of stuff in the %post install scripts, including 
> >> install stuff from the contrib area.  You are only limited by your 
> >> imagination.
> >>
> >> Troy
> >> -- 
> >> __
> >> Troy Dawson  [EMAIL PROTECTED]   (630)840-6468
> >> Fermilab  ComputingDivision/LCSI/CSI DSS Group
> >> __
> >
> Hi - This thread comes at an opportune time for me.  We are
> experiencing horrible performance on RAID5 arrays on 3Ware
> 9500S controllers (eg. wait for keystrokes to appear when
> doing any kind of IO).  The system is running SL4.4 and it's
> been suggested that the kernel scheduler in RHEL4 is at fault,
> so I'm currently slowly moving up to SL5 to see if that fixes it.
> On the other hand, trying to figure this out, I've seen it suggested
> on a gentoo forum from someone with a similar problem that
> going to XFS from ext3 was a solution.  However, these questions
> of data corruption worry me.  How common is it to lose data on
> an XFS filesystem?  Obviously, TUV thinks it's a problem, but

A couple of years ago I spoke to technical employees of TUV (I was doing RHEL
courses with them at the time; these were the "the buck stops here" techs who
solve the hardest problems in RHEL). What they said was that the reason TUV
doesn't support XFS had nothing to do with data corruption or the XFS
filesystem itself; it was simply a matter of "what for, why support another
filesystem" when ext3 is robust enough for the enterprise, and if you want more
performance with less redundancy (the kind of performance you get out of XFS)
you simply turn off the ext3 features that slow it down.

Regards,

Michael.

> I've only ever seen reference to their 'internal tests'. Do those of
> you using (or have used in the past) XFS, see any greater problems
> with data integrity?
> -Karl
> 
> -- 
> -
> | Karl A. Misselt  Office: Steward 254  |
> | Steward Observatory   Phone: 520-626-0196 |
> | University of Arizona   FAX: 520-621-9555 |
> | Tucson, AZ 85721-0065  [EMAIL PROTECTED] |
> -
> | "To be civilized is to restrain the ability to commit mayhem. |
> |  To be incapable of committing mayhem is not the mark of the  |
> |  civilized, merely the domesticated." |
> -
--- End of Original Message ---


Mirroring SL5.x tree via rsync

2007-08-15 Thread Michael Mansour
Hi,

I use mrepo to mirror SL. When I recently added SL5x to the mirror mix, I
started getting the following errors:

rsync: link_stat "/5x/i386/SL/RPMS/." (in scientific) failed: No such file or
directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)
[receiver=2.6.9]
mrepo: Mirroring failed for
rsync://rsync.scientificlinux.org/scientific/5x/i386/SL/RPMS/ with message:
  Failed with return code: 5888
rsync: link_stat "/5x/i386/contrib/RPMS/." (in scientific) failed: No such
file or directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)
[receiver=2.6.9]
mrepo: Mirroring failed for
rsync://rsync.scientificlinux.org/scientific/5x/i386/contrib/RPMS/ with message:
  Failed with return code: 5888
rsync: link_stat "/5x/i386/errata/SL/RPMS/." (in scientific) failed: No such
file or directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)
[receiver=2.6.9]
mrepo: Mirroring failed for
rsync://rsync.scientificlinux.org/scientific/5x/i386/errata/SL/RPMS/ with 
message:
  Failed with return code: 5888
rsync: link_stat "/5x/i386/errata/fastbugs/RPMS/." (in scientific) failed: No
such file or directory (2)
rsync error: some files could not be transferred (code 23) at main.c(1385)
[receiver=2.6.9]
mrepo: Mirroring failed for
rsync://rsync.scientificlinux.org/scientific/5x/i386/errata/fastbugs/RPMS/
with message:
  Failed with return code: 5888

Any idea what exact path I should be using for SL5x?
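
One way to see what the server actually exports is to list the rsync module
contents directly (a plain rsync invocation with no destination just lists the
remote directory); the exact layout under the module is what I'm unsure of:

# rsync rsync://rsync.scientificlinux.org/scientific/
# rsync rsync://rsync.scientificlinux.org/scientific/5x/i386/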

Thanks.

Michael.


SL and rsyslog installation

2007-08-18 Thread Michael Mansour
Hi,

For those SL users who wish to use rsyslog to log to MySQL and access the
entries from the web, I've just written this up on the rsyslog wiki:

http://wiki.rsyslog.com/index.php/Here_comes_the_first_story

Regards,

Michael.


perl-Log-Agent RPM

2007-08-18 Thread Michael Mansour
Hi,

Using Scientific Linux 4.5, I'm installing the FuzzyOCR plugin for
SpamAssassin. One of the dependencies is to have the Log::Agent perl module
available.

I've searched for a "perl-Log-Agent" RPM but can't find any. The closest I
could find is:

http://rpm2html.osmirror.nl/redhat-archive/6.2/cpan/i386/perl-Log-Agent-0.1.2-6.i386.html

but it seems quite old, and some of the files it contains seem to already be in
the SL 4.5 distribution?

If this is in fact a separate package I need for SL 4.5, could rpmforge package
it? If not, do I need to package this myself, since SL 4.5 doesn't provide all
the perl modules required?

Any advice is appreciated. Thanks.

Michael.


Re: [suggest] perl-Log-Agent RPM

2007-08-18 Thread Michael Mansour
Hi Dag,

> On Sun, 19 Aug 2007, Michael Mansour wrote:
> 
> > Using Scientific Linux 4.5, I'm installing the FuzzyOCR plugin for
> > SpamAssassin. One of the dependencies is to have the Log::Agent perl module
> > available.
> > 
> > I've searched for a "perl-Log-Agent" RPM but can't find any. The closest I
> > could find is:
> > 
> >
http://rpm2html.osmirror.nl/redhat-archive/6.2/cpan/i386/perl-Log-Agent-0.1.2-6.i386.html
> > 
> > but it seems quite old and some of the list of files it contains seem to
> > already be in the SL 4.5 distribution?
> > 
> > If this is in fact a separate package I need for SL 4.5, could rpmforge
> > package it? if not, do I need to package this myself since SL 4.5 doesn't
> > provide all the perl modules required?
> > 
> > Any advice is appreciated. Thanks.
> 
> Hi Michael,
> 
> I have written a small tool to generate perl SPEC files that are 
> close to 95% of what is required and works. You can find it at:
> 
>   http://svn.rpmforge.net/svn/trunk/tools/dar/dar-perl.py
> 
> And run it simply by doing:
> 
>   dar-perl.py Log::Agent
> ordar-perl.py perl-Log-Agent
> 
> and it will output the SPEC content to stdout and warnings to stderr.
> In this case the warnings are:
> 
>   License could not be determined.
>   No abstract found.
> 
> Which means that both the license and the abstract 
> (summary/description) needs to be taken from the website. A simple 
> build will then verify if the %files section is correct.
> 
> I have added perl-Log-Agent to subversion :)
> 
>   /dar/tools/dar/dar-perl.py -o perl-Log-Agent/perl-Log-Agent.spec 
> perl-Log-Agentvi perl-Log-Agent/perl-Log-Agent.spec   svn add perl-
> Log-Agent dar-build -v perl-Log-Agent/perl-Log-Agent.spec
> 
> All feedback is welcome.

Many thanks Dag, I'll give this a go today and see how it works out.

Michael.


gdm for sl5

2007-08-23 Thread Michael Mansour
Hi,

Just installed the latest gdm security update for SL5 and got this warning
about an unverifiable package signature (missing public key):

warning: gdm-2.16.0-31.0.1.sl.2: Header V3 DSA signature: NOKEY, key ID 82fd17b2
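
For anyone else hitting this: the NOKEY warning means the signing key isn't in
the rpm database yet. If the matching public key is shipped under
/etc/pki/rpm-gpg (which I haven't verified for this particular key ID),
importing it should silence the warning:

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY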

Regards,

Michael.


Re: gpg-pubkeys [was] gdm for sl5

2007-08-31 Thread Michael Mansour
Hi,

> - Original message from Jon Peatfield  on 2007-08-31 +0100 at 
> 14:00:52-
> 
> > On Thu, 30 Aug 2007, Alex Kruchkoff wrote:
> > 
> > >Yes, I've found a lots of signatures in sl-release-5.0-4.x86_64.
> > >And thinking about all these keys I wonder why all of them are not 
> > >installed as a part of the SL installation process?
> > 
> > They just never have been is probably the short answer...
> 
>Hello,
> 
>I don't think so, as I have never installed keys on my SL4.4 desktop
>but I got them :)
> 
>  goubert:/home/bob > ls -ltc /etc/pki/rpm-gpg/
> total 112
> 
> -rw-r--r--  1 root root 1726 Sat Jul 14 05:51:45 2007 RPM-GPG-KEY-adobe-linux
> -rw-r--r--  1 root root 1910 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY
> -rw-r--r--  1 root root 1718 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-atrpms
> 
> -rw-r--r--  1 root root 1706 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-beta
> -rw-r--r--  1 root root 1795 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-centos4
> 
> -rw-r--r--  1 root root 1565 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-cern
> -rw-r--r--  1 root root 1357 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-csieh
> -rw-r--r--  1 root root 1672 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-dag
> 
> -rw-r--r--  1 root root 1357 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-dawson
> 
> -rw-r--r--  1 root root 2161 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-dries
> 
> -rw-r--r--  1 root root 1519 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-fedora
> 
> -rw-r--r--  1 root root 1076 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-fedora-test
> -rw-r--r--  1 root root 1328 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-jpolok
> 
> -rw-r--r--  1 root root 1910 Tue Apr 10 23:51:19 2007 RPM-GPG-KEY-redhat
> 
>with a date corresponding to the installation date.

Yes they are installed _on your hard drive_ as part of the SL installation,
but they are not installed into the rpm database, so just because they're
there doesn't mean you're using them. The next step is to:

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*

and from what's being discussed here, the person above is suggesting that the
installer run the above command so the keys end up in the rpm database.

On a side note, the key:

error: /etc/pki/rpm-gpg/RPM-GPG-KEY-cern: import read failed(0).

fails to import.

Regards,

Michael.

>For the 'adobe-linux' one I got it when downloading the flash-plugin,
>and I noticed those files are mentioned in the yum repository,
>for example :
>
>/etc/yum.repos.d/sl-contrib.repo
>
>[sl-contrib]
>name=SL 4 base
>baseurl=http://distrib-coffee.ipsl.jussieu.fr/pub/linux/scientific-linux/44/$basearch/contrib/RPMS/
>        http://ftp.lip6.fr/pub/linux/distributions/scientific/44/$basearch/contrib/RPMS/
>        ftp://ftp.scientificlinux.org/linux/scientific/44/$basearch/contrib/RPMS/
>        ftp://linuxsoft.cern.ch/scientific/44/$basearch/contrib/RPMS/
>enabled=1
>gpgcheck=1
>gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-csieh file:///etc/pki/rpm-gpg/RPM-GPG-KEY-dawson file:///etc/pki/rpm-gpg/RPM-GPG-KEY-jpolok file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cern
> 
> -- 
>  Best regards,
>Robert FRANCHISSEUR
> 
>   Apollo_gist :-)___
> | Robert FRANCHISSEUR   |
> | Laboratoire de Météorologie Dynamique -  C.N.R.S. |
> | Equipe "R.A.M.S.E.S." -   UPMC   -   Tour 45-55 3ème 315C |
> | Boite 99 - 4, place JussieuF-75252 PARIS CEDEX 05  FRANCE |
> | Phone  : +33 (0)1 44 27 73 87  fax : +33 (0)1 44 27 62 72 |
> | e-mail : robert at lmd . jussieu . fr   http://www.lmd.jussieu.fr |
>  ---
--- End of Original Message ---


Re: gpg-pubkeys [was] gdm for sl5

2007-08-31 Thread Michael Mansour
Hi Franchisseur,

> > Hi,
> > 
> > > - Original message from Jon Peatfield  on 2007-08-31 +0100 at
14:00:52-
> > > 
> > > > On Thu, 30 Aug 2007, Alex Kruchkoff wrote:
> > > > 
> > > > >Yes, I've found a lots of signatures in sl-release-5.0-4.x86_64.
> > > > >And thinking about all these keys I wonder why all of them are not 
> > > > >installed as a part of the SL installation process?
> > > > 
> > > > [...]
> > > 
> > >with a date corresponding to the installation date.
> > 
> > Yes they are installed _on your hard drive_ as part of the SL installation,
> > but they are not installed into the rpm database, so just because they're
> > there doesn't mean you're using them. The next step is to:
> > 
> > # rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
> > 
> > and from what is being discussed here, the person above is suggesting that 
> > the
> > installation do the above command to install the keys into the rpm database.
> >
> 
>I am not sure I understand what is the result of the rpm --import
>command because if I do a:
> 
>rpm -qi gpg-pubkey

To get a list of all the keys in your rpm database system:

# rpm -qa gpg-pubkey\*|sort -n

which for me produces:

gpg-pubkey-0c98ff9d-3d4a527c.(none)
gpg-pubkey-1aa78495-3eb24301.(none)
gpg-pubkey-217521f6-45e8a532.(none)
gpg-pubkey-30c9ecf8-3f9da3f7.(none)
gpg-pubkey-443e1821-421f218f.(none)
gpg-pubkey-4f2a6fd2-3f9d9d3b.(none)
gpg-pubkey-66534c2b-41d57eae.(none)
gpg-pubkey-6b8d79e6-3f49313d.(none)
gpg-pubkey-82fd17b2-3ffdb083.(none)
gpg-pubkey-897da07a-3c979a7f.(none)
gpg-pubkey-a7048f8d-3ff1defa.(none)
gpg-pubkey-db42a60e-37ea5438.(none)
gpg-pubkey-e42d547b-3960bdf1.(none)
gpg-pubkey-e8562897-459f07a4.(none)

>I get :
> 
>Name        : gpg-pubkey                    Relocations: (not relocatable)
>Version     : a7048f8d                           Vendor: (none)
>Release     : 3ff1defa                       Build Date: Wed 11 Apr 2007 03:29:36 AM CEST
>Install Date: Wed 11 Apr 2007 03:29:36 AM CEST  Build Host: localhost
>Group       : Public Keys                    Source RPM: (none)
>Size        : 0                                 License: pubkey
>Signature   : (none)
>Summary     : gpg(Connie Sieh (Constance J. Sieh) <[EMAIL PROTECTED]>)
>Description :
>-----BEGIN PGP PUBLIC KEY BLOCK-----
>Version: rpm-4.3.3 (beecrypt-3.0.0)
>
>mQGiBD/x3voRBADonMg9Vjira9HR8AceZX3tUKZITeRFbWG6+vhQDJUffbrG7bb1rQqQWqd1
>[...]
>-----END PGP PUBLIC KEY BLOCK-----
>
>for each pubkey, still with the install date and build host: localhost.

The "rpm --import" command will import a RPM-GPG-KEY file into the rpm
database. You can import the same key multiple times (where the -qa option
above will show duplicates if you do), but when installing SL for the first
time, the keys in /etc/pki/rpm-gpg aren't all imported into the rpm database.

> > On a side note, the key:
> > 
> > error: /etc/pki/rpm-gpg/RPM-GPG-KEY-cern: import read failed(0).
> > 
> > fails to import.
> >
> 
>This one gives me :
> 
> goubert:/etc/yum.repos.d > rpm -qi gpg-pubkey-1d1e034b-42bfd0c5
>Name        : gpg-pubkey                    Relocations: (not relocatable)
>Version     : 1d1e034b                           Vendor: (none)
>Release     : 42bfd0c5                       Build Date: Wed 11 Apr 2007 03:29:36 AM CEST
>Install Date: Wed 11 Apr 2007 03:29:36 AM CEST  Build Host: localhost
>Group       : Public Keys                    Source RPM: (none)
>Size        : 0                                 License: pubkey
>Signature   : (none)
>Summary     : gpg(CERN Linux Support (RPM signing key for CERN Linux Support) <[EMAIL PROTECTED]>)
>Description :
>-----BEGIN PGP PUBLIC KEY BLOCK-----
>Version: rpm-4.3.3 (beecrypt-3.0.0)

Yeah, for me it's:

# rpm -qi gpg-pubkey-1d1e034b-42bfd0c5
package gpg-pubkey-1d1e034b-42bfd0c5 is not installed

because of the import failure. That's run on a newly built SL5 server.

Regards,

Michael.


Re: stupid nslookup

2007-09-19 Thread Michael Mansour
Hi,

> > Miles,
> > 
> > nslookup has been deprecated for years and shouldn't be used.  I'd
> 
> nslookup used to be deprecated, and nagged users who didn't say '-silent.'
> 
> Now, it doesn't nag, and behaves the same whether "-silent" is used 
> or not. Not only that, the man page doesn't mention "-silent."
> 
> Consequently, I guess there were howls of protest, and nslookup's 
> fortunes have revived so it will be with us for the foreseeable future.
> 
> The man page seems reasonable to me, try it.

Yes, I remember that myself, and even though the deprecation had been coming
for a while, I must admit I was one of those who thought "what's the point of
getting rid of it?". When I'm on Windows workstations and need to do lookups,
all Windows has is nslookup, no dig (and even though the Windows version isn't
fully functional, it still helps when diagnosing problems).

> > recommend looking at using the replacement called 'dig' for queries.  
> > See if that solves your problems.
> 
> I'm one who doesn't dig dig.

I'm happy to use dig, but must say I spend an equal amount of time in nslookup.

Regards,

Michael.

> --
> 
> Cheers
> John
> 
> -- spambait
> [EMAIL PROTECTED]  [EMAIL PROTECTED]
> 
> Please do not reply off-list
--- End of Original Message ---


Re: Changing BIOS settings on nodes

2007-09-25 Thread Michael Mansour
Hi,

> Greetings.
> 
> We would like to change the BIOS settings on our headless compute
> nodes to enable PXE booting which is disabled in the default
> configuration. Reading and writing with /dev/nvram appears to work 
> but on reboot the change in CMOS is recognised, assumed to be a corruption
> 
> (it might be a corruption if /dev/nvram is not properly supported) 
> and prompts to be reset. Is there a way round this?
> 
> There is no support for a serial console in the BIOS. The nodes have 
> a floppy disk. Any thoughts on alternatives?

What type of hardware is this, branded (HP, IBM, etc) or non-branded?

Michael.

> Thanks - Andy.
--- End of Original Message ---


Sendmail To and CC are not filled error

2007-09-27 Thread Michael Mansour
Hi,

Sorry if this is way off topic, but I've spent the past few hours web
searching and trying different things for the following sendmail error:

554 5.7.1 To: and CC: are not filled

which results when an email comes into the system where the sender has only
populated the "Bcc" field and not the "To" or "CC" fields.

I don't want these messages rejected, yet I can't seem to find any way within
sendmail to stop the rejection and allow the messages through.

Any ideas what I should try to get this working?

Thanks.

Michael.


Re: Sendmail To and CC are not filled error

2007-09-28 Thread Michael Mansour
Hi Daniel,

> Do you have the standard SL sendmail.mc and sendmail.cf files?  If 
> not, try them first.  I do, and It Works For Me.  On SL 4.x at least.

No I don't; I use various additional Options, Features and milters in sendmail
(as I do various virtual hosting etc.) that have been developed over many years.
I wouldn't be able to use the stock mc file at all since it wouldn't work for
my configuration.

I was hoping someone would have seen this problem before, but if you don't
have this problem with the stock mc then I may have to start from scratch on a
stock sendmail.mc and build from there option by option and feature by feature.

Please make sure it works by sending an email from Gmail (which allows an
empty "To:" field and many "Bcc:" recipients - Yahoo and Hotmail don't allow
it), sent to multiple recipients in the Bcc with one of those recipients on
your mail server running the stock sendmail.mc and sendmail.cf; that would then
be an identical test to what I'm doing.

If you could let me know what happens please.

Michael.

> Dan W.
> 
> On Fri, Sep 28, 2007 at 03:05:24PM +1000, Michael Mansour wrote:
> > Hi,
> > 
> > Sorry if this is way off topic, but I've spent the past few hours web
> > searching and trying different things for the following sendmail error:
> > 
> > 554 5.7.1 To: and CC: are not filled
> > 
> > which results when an email comes into the system where the sender has only
> > populated the "Bcc" field and not the "To" or "CC" fields.
> > 
> > I don't want these messages rejected yet can't seem to find anyway within
> > sendmail to stop the rejection and allow the messages through.
> > 
> > Any ideas what I should try to get this working?
> > 
> > Thanks.
> > 
> > Michael.
--- End of Original Message ---


Re: Sendmail To and CC are not filled error

2007-09-29 Thread Michael Mansour
Hi Daniel,

> > Please make sure it works by sending an email from gmail (which allows an
> > empty "To:" field and many "Bcc:" - yahoo and hotmail don't allow it) and 
> > send
> > to multiple recipients in the Bcc with one of those recipients as your mail
> > server with the stock sendmail.mc and sendmail.cf, this would then be an
> > identical test to what I'm doing.
> 
> I just tried (again) from mutt, which allows empty To: and Cc: 
> lines.  This time I Bcc:'ed you as well.  Let me know if you get it 
> (subject: test2).  I got this and my previous test just fine on my 
> stock sendmail server (well, stock minus "allow unresolved 
> hostnames" which I always comment out).

Many thanks for your help here, mate. After quite a bit more troubleshooting I
have found the issue.

I use the smf-zombie milter to block many of the botnets around; however, this
milter has the disadvantage that it also blocks those types of emails.
Searching its source shows the "To and CC field not filled in" error.

I tried to find a way to just stop the milter from rejecting the "undisclosed
recipient" emails, but I could not without recoding it myself, so basically I
just removed the milter (and have noticed an increase in spam entering the
environment since doing so).
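
For the archives: removing a milter like this normally just means dropping (or
commenting out) its INPUT_MAIL_FILTER line in sendmail.mc and rebuilding
sendmail.cf. The milter name and socket path below are only examples, so check
your own smf-zombie setup for the real values:

dnl INPUT_MAIL_FILTER(`smf-zombie', `S=unix:/var/run/smfs/smf-zombie.sock, T=S:30s;R:1m')dnl
# make -C /etc/mail
# service sendmail restart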

I have emailed the author and joined their smfs-users mailing list, and sent
an email yesterday asking about the best way forward for using the milter
while disabling the "undisclosed recipients" check. I haven't received any
replies yet.

So lastly, I have received your test email - thank you. The milter is
disabled, so it isn't rejecting those emails anymore.

Regards,

Michael.

> Dan W.
--- End of Original Message ---


Re: xen and the art of computer maintenance

2007-10-25 Thread Michael Mansour
Hi Michael,

> Hi, folks.  We're finally getting around to trying out the Xen
> hypervisor on an SL 5 (x86_64) system here.
> 
> I thought it would make sense to start with the simplest 
> configuration -- just put the whole virtual machine into a single file.
> 
> We ran virt-manager and added a virtual machine, evidently successfully.
> Here are the details of the file:
> 
> # pwd
> /virt
> 
> # ls -lh
> total 4.0G
> -rwxr-xr-x 1 root root 4.0G Oct 24 17:17 ptestv1-f7.img
> 
> # file ptestv1-f7.img
> ptestv1-f7.img: x86 boot sector; partition 1: ID=0x83, active, starthead
> 1, startsector 63, 6120702 sectors; partition 2: ID=0x82,
>  starthead 0,startsector 6120765, 2040255 sectors, code offset 0x48
> 
> We installed Fedora 7 on the virtual machine, i.e., during the 
> initial process of adding the new machine.  (We used Fedora 7 just 
> because we happened to have the distro on-line locally.  At this 
> point the whole process is nothing more than an exercise.)
> 
> We got through the Fedora installation without any evident problems, 
> but when we hit the "Reboot" button at the end of the installation,
>  that was the last we saw of the virtual machine.
> 
> Now, when we open virt-manager and connect to "Local Xen host", we 
> see only Domain-0.  If we go to the File menu, select "Restore saved 
> machine", and then specify the image indicated above (ptestv1-f7.img)
> , we get an error message from virt-manager:
> 
> Error restoring domain '/virt/ptestv1-f7.img'. Is
> the domain already running?
> 
> In the terminal window from which we started virt-manager, we get the
> following, additional error message:
> 
> # libvir: Xen Daemon error : POST operation failed: (xend.err
>   'Restore failed')
>   libvir: error : library call virDomainRestore failed, possibly
>   not supported
> 
> The system is fully patched, including the latest xen kernel (as of
> yesterday, 2007-10-24), and has been rebooted.
> 
> Can anybody tell me what I'm doing wrong here, and/or tell me a 
> better approach?  Thanks.

As you've discovered, the GUI tools provided by TUV are pretty basic. You
create the VM, and then what?

You need to learn how to use the tools provided by Xen, so read up on "xm"
through the man pages.

A command like this will get your VM booting, and the VM will also show up in
the virt-manager that TUV provides:

# xm create -c 
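
Assuming virt-manager wrote a config for the guest under /etc/xen (the name
below is just a guess based on your image name), the usual loop looks like:

# xm list                             (see which domains exist / are running)
# xm create -c /etc/xen/ptestv1-f7    (boot the guest and attach to its console)
# xm console ptestv1-f7               (re-attach to the console later)
# xm shutdown ptestv1-f7              (clean shutdown)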

Regards,

Michael.

>   - Mike
> --
> Michael Hannonmailto:[EMAIL PROTECTED]
> Dept. of Physics  530.752.4966
> University of California  530.752.4717 FAX
> Davis, CA 95616-8677
--- End of Original Message ---


ricci for sl5

2007-11-11 Thread Michael Mansour
Hi,

When I try to apply the latest sl-security release of ricci, I get the
following problem:

# yum -y update ricci
Loading "fastestmirror" plugin
Loading "changelog" plugin
Loading "kernel-module" plugin
Loading "allowdowngrade" plugin
Loading "skip-broken" plugin
Loading "downloadonly" plugin
Loading "protectbase" plugin
Loading "priorities" plugin
Setting up Update Process
Setting up repositories
Loading mirror speeds from cached hostfile
Reading repository metadata in from local files
0 packages excluded due to repository protections
0 packages excluded due to repository priority protections
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for ricci to pack into transaction set.
ricci-0.10.0-6.el5.i386.r 100% |=|  19 kB00:00
---> Package ricci.i386 0:0.10.0-6.el5 set to be updated
--> Running transaction check
--> Processing Dependency: modcluster >= 0.10.0 for package: ricci
--> Finished Dependency Resolution
Beginning Kernel Module Plugin
Finished Kernel Module Plugin
Error: Missing Dependency: modcluster >= 0.10.0 is needed by package ricci

How can this be resolved?

Thanks.

Michael.


Re: ricci for sl5

2007-11-12 Thread Michael Mansour
Hi Troy,

> Troy Dawson wrote:
> > Michael Mansour wrote:
> >> Hi,
> >>
> >> When I try to apply the latest sl-security release of ricci, I get the
> >> following problem:
> >>
> >> # yum -y update ricci
> >> Loading "fastestmirror" plugin
> >> Loading "changelog" plugin
> >> Loading "kernel-module" plugin
> >> Loading "allowdowngrade" plugin
> >> Loading "skip-broken" plugin
> >> Loading "downloadonly" plugin
> >> Loading "protectbase" plugin
> >> Loading "priorities" plugin
> >> Setting up Update Process
> >> Setting up repositories
> >> Loading mirror speeds from cached hostfile
> >> Reading repository metadata in from local files
> >> 0 packages excluded due to repository protections
> >> 0 packages excluded due to repository priority protections
> >> Resolving Dependencies
> >> --> Populating transaction set with selected packages. Please wait.
> >> ---> Downloading header for ricci to pack into transaction set.
> >> ricci-0.10.0-6.el5.i386.r 100% |=|  19 kB
> >> 00:00
> >> ---> Package ricci.i386 0:0.10.0-6.el5 set to be updated
> >> --> Running transaction check
> >> --> Processing Dependency: modcluster >= 0.10.0 for package: ricci
> >> --> Finished Dependency Resolution
> >> Beginning Kernel Module Plugin
> >> Finished Kernel Module Plugin
> >> Error: Missing Dependency: modcluster >= 0.10.0 is needed by package 
> >> ricci
> >>
> >> How can this be resolved?
> >>
> >> Thanks.
> >>
> >> Michael.
> > 
> > Hi Michael,
> > Thanks for letting us know.  We'll look into it.
> > 
> > Troy
> >
> 
> Fixed.
> luci and ricci (part of conga) were released as a security release,
>  while clustermon (which provides modcluster) was only released as 
> an "enhancement" so it didn't initially go into the security 
> updates. It's there now.
> 
> You'll probrubly have to do a
>yum clean all
> before it will be seen.

Thanks, all updated fine.

Michael.


Re: Building miscellaneous packages for SL

2007-11-12 Thread Michael Mansour
Hi,

> Hello, I'm new here, but now new in Linux.  I've been a RedHat/Fedora
> user about 10 years.
> 
> The rapid turnover in Fedora made me seek out a more longlasting
> distribution, now I'm testing Scientific Linux.  SL installed cleanly
> for me and starts fine.
> 
> I'm having trouble getting some packages installed that we really
> need.  In Fedora systems, we have been using the livna service to get
> RPMS for things that Fedora won't provide, such as nvidia proprietary
> video drivers.  On the SL, I tried to install the nvidia drivers at
> the optional repository server, but it doesn't work as configured by
> default.  I can build a new RPM for my systems, I don't think that is
> trouble.  I also want to run the newest lyx, and I've found that,
> after building and installing  aiksaurus, then lyx does build just
> fine.  I can post those RPMs in case anybody wants them.
> 
> The application where I'm really having trouble is Gnumeric. I'm
> trying to build the version that is supplied with Fedora 8, which is
> gnumeric-1.6.3-12. The problem is that trying to build it leads back
> into a dependency HELL.  First it wants goffice and libgda, and then
> when I try to build that, it wants version of several devel packages
> that are newer than the ones in SL5.  See:
> 
> error: Failed build dependencies:
> sqlite-devel >= 3.4.0 is needed by libgda-3.0.1-4SL5.x86_64
> freetds-devel is needed by libgda-3.0.1-4SL5.x86_64
> postgresql-devel is needed by libgda-3.0.1-4SL5.x86_64
> mdbtools-devel is needed by libgda-3.0.1-4SL5.x86_64
> xbase-devel is needed by libgda-3.0.1-4SL5.x86_64

I think for any of these dependencies you should try the rpmforge third-party
repository. Dag, Dries, or even Axel Thimm's atrpms.net provide a range of
RPMs for RHEL and Fedora releases.
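
As a rough sketch (the rpmforge-release filename changes with each revision,
so treat it as a placeholder), it's usually just:

# rpm -Uvh rpmforge-release-*.el5.rf.x86_64.rpm     (grabbed from the RPMforge/Dag site)
# yum install sqlite-devel freetds-devel postgresql-devel mdbtools-devel xbase-devel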

Regards,

Michael.


Determining order of perl modules loaded

2007-12-03 Thread Michael Mansour
Hi,

I have a couple of perl modules of the same name installed in my OS. This is
expected, as I've hand-compiled various apps.

What I'd like to know is: how do I determine which copy of the module is
actually used?

As an example, I have the Base64.pm in three locations - please don't ask why
:), and when I run:

# perl -MMIME::Base64 -e '{print "$MIME::Base64::VERSION\n"}'
3.07

Yet I do not know which one it's picking up, as I have one from Sep 20  2004,
another from Nov 30  2005 and another from this year Jun 15 01:35.

I know perl searches through paths in its environment (not sure where it gets
that from), but any ideas how I can find out which Base64.pm is being used?
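
For reference, perl keeps its search path in @INC and records the resolved
path of every module it loads in %INC, so these two one-liners show what's
actually going on:

# perl -MMIME::Base64 -le 'print $INC{"MIME/Base64.pm"}'
# perl -le 'print for @INC'          (the search path, in order)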

Thanks.

Michael.


Re: Documentation about install a cluster with SL

2007-12-11 Thread Michael Mansour
hi Fernando,

> El mar, 11-12-2007 a las 14:38 -0800, Akemi Yagi escribió:
> > On Dec 11, 2007 2:29 PM, Fernando C. Estrada <[EMAIL PROTECTED]> wrote:
> > > Hi all!
> > >
> > > I want to install a cluster with SL, if anyone know documentation about
> > > it, please send me the url.
> > >
> > > Thanks!
> > 
> > How about:
> > 
> > http://www.centos.org/docs/5/html/Cluster_Suite_Overview/
> > and
> > http://www.centos.org/docs/5/html/Cluster_Administration/
> > 
> > Akemi
> 
> Thanks fot the links Akemi, the installation of SL is the same to other
> GNU/Linux based in RedHat?, cuz I want a kind of cluster installation
> manual specify to SL.

Do you know what type of cluster you are after?

There are so many which all serve different purposes. What purpose are you
trying to serve?

Michael.


Re: modifying network config

2007-12-17 Thread Michael Mansour
Hi,

> > I have a Linux desktop for which lspci reports as having a realtek
> > 8110/8169. I could not install it via NFS so I popped in a 3COM card
> > and installed it with SL4.5. After installation, I tried to switch back
> > to the onboard LAN. I removed the 3com card and modified the network
> > config by editing/etc/sysconfig/network-scripts/ifcfg-eth0 and
> > /etc/modprobe.conf. Basically replacing the ifcfg-eth0 with the correct
> > MAC address and modprobe.conf with the right driver. There is an r8169.ko 
> > in 
> > the /lib/modules which I believe is the correct driver because I used a 
> > sysrescuecd image on the box and it correctly detected the chipset, loaded 
> > r8169, and activated the card without any difficulty.
> >
> > When I try to do an ifup eth0, I get an error message saying that the
> > device has a different mac address than what was expected and it ignored
> > my attempt.  I'm stumped by this.My understanding may be dated but
> > as far as I know the information to start the network correctly is in 
> > ifcfg-eth0 and modprobe.conf.  I notice that a modprobe r8169 does not 
> > generate any messages in dmesg.  So, it could be that r8169 is not the 
> > correct driver.  Except for the fact that sysrescuecd used this driver and 
> > it 
> > also has a 2.6 kernel.
> >
> > Can anyone shed any light on this?  Thanks
> 
> Just remove HWADDR line from ifcfg-eth0, it's not required.

Yes, he's correct, but just so you know, the HWADDR line is required when:

* you need to have an IP go onto a specific NIC, and

* the server may assign different PCI addresses to the NICs on boot

I personally always use HWADDR to guarantee that a NIC gets a specific IP; if
you don't use it, you do stand the chance that an IP assigned in ifcfg-eth? can
go to a different physical NIC.
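
A minimal ifcfg-eth0 pinned to a specific NIC looks something like this (the
MAC and IP addresses are obviously placeholders):

DEVICE=eth0
HWADDR=00:11:22:33:44:55
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes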

Regards,

Michael.


Re: Security ERRATA for firefox on SL5.x, SL4.x, SL3,x i386/x86_64

2008-01-14 Thread Michael Mansour
Hi Frank,

> Hi SL-folks,
> 
> well, it's a fairly old thread, but since I partially have the same 
> problem, I thought it's kind of better to revive it - hope that's ok 
> ...

I'd personally prefer Firefox 2.0.x for SL5 (or SL4) but have checked here and
there and can't really find anything relevant to that (I have checked CentOS
too but maybe I've just missed it?).

Plugins I use in Firefox 2 (on a Windows notebook) don't work with FF 1.5
which is why I want Firefox 2 for SL4/5.

Michael.

> Troy Dawson wrote:
> > Troy Dawson wrote:
> >> Troy Dawson wrote:
> >>> [EMAIL PROTECTED] wrote:
>  On Fri, 19 Oct 2007, Troy Dawson wrote:
> > SL 4.x
> >
> >  x86_64:
> > firefox-1.5.0.12-0.7.el4.i386.rpm
> > firefox-1.5.0.12-0.7.el4.x86_64.rpm
> >
>  Hi,
> 
>  The above update to firefox.i386 on the x86_64 architecture has 
>  broken the acrobat and java plugins.
>  (using acroread 7.0.9 and Sun Java 6u3)
> 
>  Clicking on a PDF file causes firefox to exit immediately
>  and launching webpages that require a java plugin no longer works.
> 
>  If I revert to firefox-1.5.0.12-0.3.el4.i386 everything works as 
>  expected.
> 
>  The reason we're using the i386 version on an x86_64 arch is
>  to get the various plugins working that have no x86_64 equivalent.
> 
>  I can get around the PDF issue by not using the plugin and just
>  loading the file externally in xpdf/acroread but we need
>  the java plugin to work too.
> 
>  cheers,
>  Ronnie
> >>>
> >>> Hi,
> >>> When it did the update, did it put in both the i386 and x86_64 
> >>> version?  Or just the i386 version?
> >>>
> >>> Troy
> >>
> >> I'm getting more reports of this.
> >> I'm still setting things up for a test, but as I do, is anyone having 
> >> problems with Adobe acroread 8?  or is it only acroread 7?
> >>
> >> Troy
> > 
> > OK, I can't get it to fail on me.
> > First question, for all of those who are having this problem.  Have you 
> > completely exited out of firefox and seamonkey?  I don't just mean one 
> > window, but all of firefox?
> > 
> > Second question, if the above answer was yes, can you send me 
> > information on how you setup your adobe plugin's?  Did you do it by 
> > hand?  Whet directory is the plugin in?  If it is a link, can you send 
> > me the full output of ls -l on the plugin.
> > 
> > Thanks
> > Troy
> >
> 
> apparently the "java-plugin not working" problem wasn't really 
> discussed !? Anyone got a solution to that one ?? Or anyone else has 
> the same problem (I do) ??
> 
> I have firefox-1.5.0.12-0.8.i386 installed on SL4.5 x86_64 systems. 
> The javaplugin comes from java-1.5.0-sun-1.5.0.13, and works 
> otherwise: running firefox-1.5.0.12-0.3.i386 (or firefox2) on the 
> same machine allows to open java-applets (like eg 
>
http://java.sun.com/products/plugin/1.5.0/demos/plugin/applets/Clock/example1.html)
  without problems. javaws works as well, so the java-installation itself
seems to be ok.
> 
> With firefox-1.5.0.12-0.8.i386 however, it simply won't work (for me)
> . Java apparently complains about "Could not start JavaVM!" and the 
> java-console never shows up. I searched quite a bit, but most people 
> (quite a few) running into similar problems usually have a mixup of 
> 32- and 64-bit installations (I don't). Any suggestions appreciated !
> 
> Ciao, Frank.
--- End of Original Message ---


Updating java on SL4.5

2008-01-15 Thread Michael Mansour
Hi,

I've just performed the update to the latest released packages on an SL4.5
x86_64 machine and got the following near the end of the yum update:

Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating  : libxml2  ### [ 1/15]
  Updating  : libxml2  ### [ 2/15]
  Installing: jdk  ### [ 3/15]
  Installing: java-1.5.0-sun-compat### [ 4/15]
  Updating  : postgresql-libs  ### [ 5/15]
  Updating  : libxml2-devel### [ 6/15]
  Updating  : java-1.4.2-sun-compat### [ 7/15]
  Updating  : libxml2-python   ### [ 8/15]
  Cleanup   : postgresql-libs  ### [ 9/15]
  Cleanup   : java-1.4.2-sun-compat### [10/15]
  Removing  : j2sdk### [11/15]
/var/tmp/rpm-tmp.7034: command substitution: line 62: unexpected EOF while
looking for matching `"'
/var/tmp/rpm-tmp.7034: command substitution: line 63: syntax error: unexpected
end of file
  Cleanup   : libxml2  ### [12/15]
  Cleanup   : libxml2-devel### [13/15]
  Cleanup   : libxml2  ### [14/15]
  Cleanup   : libxml2-python   ### [15/15]

Dependency Installed: java-1.5.0-sun-compat.noarch 0:1.5.0.14-1.sl4.jpp
jdk.i586 2000:1.5.0_14-fcs
Updated: java-1.4.2-sun-compat.i586 0:1.4.2.90-1jpp libxml2.x86_64
0:2.6.16-10.1 libxml2.i386 0:2.6.16-10.1 libxml2-devel.x86_64 0:2.6.16-10.1
libxml2-python.x86_64 0:2.6.16-10.1 postgresql-libs.x86_64 0:7.4.19-1.el4.1
Complete!

Is that a problem? Do I have to clean up anything as a result?

Thanks.

Michael.


Re: Zimbra on SL 5?

2008-01-16 Thread Michael Mansour
Hi,

> Greetings.  We're trying to install the free version of Zimbra on a SL
> 5.0 system (i386).  Most of the installation goes smoothly, but 
> toward the end we have a problem with a failure to initialize LDAP.  
> If we omit SL's version of LDAP (Zimbra has its own), we get a 
> failure to connect to port 389.  If we run the SL version of LDAP 

Have you checked that your server is listening on port 389?

Have you tried connecting to port 389?

Do you know if Zimbra's own LDAP is started and listening on port 389?

Shutting down openldap from SL5 means (to me) you'd need to start another LDAP
server (Zimbra's) before it will connect. You cannot have two services
listening on the same port at the same time.
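
A quick way to see what (if anything) owns port 389, and to keep SL's own
slapd out of the way while the Zimbra installer runs:

# netstat -ltnp | grep :389
# service ldap stop
# chkconfig ldap off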

> (shouldn't be necessary, AFAIK), the Zimbra setup script connects to 
> port 389 but complains that TLS is an unsupported option.

Then just recompile the openldap that comes with SL (recreate the RPM if you
like - src.rpms are readily available) and turn off TLS. Then start openldap
and that error will go away - possibly to be replaced by another one :)

> If you have any suggestions, please send 'em to me.  Thanks.

I personally wouldn't go the route you're going when using SL5. I'd instead
look for a Xen VM (appliance) of Zimbra (I'm pretty sure there's one out
there), load it in as a virtual machine and then spend my time on Zimbra config
instead of setup. That's one of the major reasons to go to SL5, and setup-time
reduction is one of the major reasons for VM appliances.

Regards,

Michael.

>   - Mike
> -- 
> Michael Hannonmailto:[EMAIL PROTECTED]
> Dept. of Physics  530.752.4966
> University of California  530.752.4717 FAX
> Davis, CA 95616-8677
--- End of Original Message ---


Re: Updating java on SL4.5

2008-01-17 Thread Michael Mansour
Hi,

> -- On 2008-01-15 -0600 at 20:44:15 Troy Dawson wrote --
> 
> > FRANCHISSEUR Robert wrote:
> > 
> > >
> > > rabeson:/home/bob >  rpm -ql java-1.4.2-sun-compat
> > >(contains no files)
> > >
> > > rabeson:/home/bob >  rpm -qi java-1.4.2-sun-compat
> > >Name: java-1.4.2-sun-compatRelocations: (not relocatable)
> > >Version : 1.4.2.90  Vendor: JPackage Project
> > >Release : 1jpp  Build Date: Thu 27 Dec 2007 
> > >11:12:55 PM CET
> > >Install Date: Tue 15 Jan 2008 05:47:25 PM CET  Build Host: 
> > >yort.fnal.gov
> > >Group   : Development/Interpreters  Source RPM: 
> > >java-1.4.2-sun-compat-1.4.2.90-1jpp.src.rpm
> > >Size: 0License: JPackage License
> > >Signature   : DSA/SHA1, Thu 27 Dec 2007 11:14:54 PM CET, Key ID 
> > >da6ad00882fd17b2
> > >URL : http://java.sun.com/j2se/1.4.2/
> > >Summary : JPackage Java compatibility package for Sun's JDK
> > >Description :
> > >This package provides JPackage compatibility symlinks and directories
> > >for the vendor's JDK rpm.
> > >
> > 
> > That's not j2sdk ... is it?
> > 
> >   rpm -q j2sdk
> > 
> > java-1.4.2-sun-compat is a shell rpm that generates links in the right 
> > place so that you get the right java.   In this case, it serves as a 
> > placeholder so that we could get j2sdk off your system.
> >
> 
>Oups, I don't have j2sdk any longer. I think I mixed up 
> between   java jdk j2sdk ... and I was confused by the empty 
> java-1.4.2-sun-compat.

j2sdk is removed when you install the java update. Note the transaction log:

Running Transaction
  Updating  : libxml2  ### [ 1/15]
  Updating  : libxml2  ### [ 2/15]
  Installing: jdk  ### [ 3/15]
  Installing: java-1.5.0-sun-compat### [ 4/15]
  Updating  : postgresql-libs  ### [ 5/15]
  Updating  : libxml2-devel### [ 6/15]
  Updating  : java-1.4.2-sun-compat### [ 7/15]
  Updating  : libxml2-python   ### [ 8/15]
  Cleanup   : postgresql-libs  ### [ 9/15]
  Cleanup   : java-1.4.2-sun-compat### [10/15]
  Removing  : j2sdk### [11/15]
/var/tmp/rpm-tmp.7034: command substitution: line 62: unexpected EOF while
looking for matching `"'
/var/tmp/rpm-tmp.7034: command substitution: line 63: syntax error: unexpected
end of file
  Cleanup   : libxml2  ### [12/15]
  Cleanup   : libxml2-devel### [13/15]
  Cleanup   : libxml2  ### [14/15]
  Cleanup   : libxml2-python   ### [15/15]

especially the bit above about item [11/15].

So it's there originally, but removed on the update.
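
If you want to double-check the end state on your box, something like this
should show j2sdk gone and which java you're now actually running:

# rpm -q j2sdk jdk java-1.5.0-sun-compat
# java -version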

Regards,

Michael.

>Sorry about that.
> 
> -- 
>  Best regards,
>Robert FRANCHISSEUR
> 
>   Apollo_gist :-)___
> | Robert FRANCHISSEUR   |
> | Laboratoire de Météorologie Dynamique -  C.N.R.S. |
> | Equipe "R.A.M.S.E.S." -   UPMC   -   Tour 45-55 3ème 315C |
> | Boite 99 - 4, place JussieuF-75252 PARIS CEDEX 05  FRANCE |
> | Phone  : +33 (0)1 44 27 73 87  fax : +33 (0)1 44 27 62 72 |
> | e-mail : robert at lmd . jussieu . fr   http://www.lmd.jussieu.fr |
>  ---
--- End of Original Message ---


Re: Zimbra on SL 5?

2008-01-17 Thread Michael Mansour
Hi Michael,

> On Thu, Jan 17, 2008 at 02:30:33PM +1000, Michael Mansour wrote:
> > Hi,
> > 
> > > Greetings.  We're trying to install the free version of Zimbra on a SL
> > > 5.0 system (i386).  Most of the installation goes smoothly, but 
> > > toward the end we have a problem with a failure to initialize LDAP.  
> > > If we omit SL's version of LDAP (Zimbra has its own), we get a 
> > > failure to connect to port 389.  If we run the SL version of LDAP 
> > 
> > Have you checked that your server is listening on port 389?
> > 
> > Have you tried connecting to port 389?
> > 
> > Do you know if Zimbra's own LDAP is started and listening on port 389?
> > 
> > Shutting down openldap from SL5 means (to me) you'd need to start another 
> > ldap
> > server (zimbra's) before it will connect. You cannot have two services
> > listening on the same at the same time.
> 
> Hi, Michael.  Yes, I have tried connecting to port 389 (telnet localhost
> 389), and, indeed, there is no response, unless I've started SL's 
> ldap server.  Also, I would assume that Zimbra's ldap server is NOT 
> started, as I get the complaint DURING the installation of Zimbra. 
>  There seems to be a Catch 22 here.

Yeah. I haven't ever installed Zimbra myself, so I'm not familiar with the
process, but it would seem odd to me that Zimbra would force a connection to
its own LDAP server without first starting it up itself.

Personally, I'd likely pull LDAP away from the local Zimbra server and have
directory services remotely (and clustered). That's one of the beauties of
LDAP. But if you're using an LDAP server that's not Zimbra's, I'd also imagine
you'd have schema problems which may be hard to resolve.

> > > (shouldn't be necessary, AFAIK), the Zimbra setup script connects to 
> > > port 389 but complains that TLS is an unsupported option.
> > 
> > Then just recompile the openldap that comes with SL (recreate the RPM if you
> > like - src.rpm's are readily available) and turn of TLS. Then start openldap
> > and that error will go away - possibly being replaced by another one :)
> 
> Heh.  That's certainly something I don't want to have to do.  I.e., I
> don't think it would be particularly difficult, but it introduces
> another support headache that I don't need.  And I really can't believe
> that it SHOULD be necessary to jump through those kinds of hoops to get
> this product installed.

I agree.

> > > If you have any suggestions, please send 'em to me.  Thanks.
> > 
> > I personally wouldn't go the route you're going when using SL5. I'd 
> > personally
> > look for a Xen VM (appliance) of Zimbra (I'm pretty sure there's one out
> > there), load into as a Virtual machine and then spend my time on Zimbra 
> > config
> > instead of setup. That's one of the major reasons to go to SL5 and setup 
> > time
> > reduction is one of the major reasons for VM appliances.
> 
> I agree that a VM instance of Zimbra looks appealing.  OTOH:
> 
> (a) The set-up is SUPPOSED to consist of no more than typing:
> 
> ./install.sh
> 
> (b) The stuff I'm doing at the moment is just proof (or disproof,
>  asthe case may be) of principle.
> 
> When/if I get this working on the ancient PIII machine in my office,
> I'll look into virtualizing it.

Does Zimbra have support/mailing lists/forums available? Maybe those guys can
help more.

Good luck mate. 

Michael.

> Thanks.
> 
>   - Mike
> -- 
> Michael Hannonmailto:[EMAIL PROTECTED]
> Dept. of Physics  530.752.4966
> University of California  530.752.4717 FAX
> Davis, CA 95616-8677
--- End of Original Message ---


RPM-GPG-KEY-cern in SL50

2008-01-30 Thread Michael Mansour
Hi,

Is the cern key a valid key?

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-cern
error: /etc/pki/rpm-gpg/RPM-GPG-KEY-cern: import read failed(0).

Michael.


Re: RPM-GPG-KEY-cern in SL50

2008-01-30 Thread Michael Mansour
Hi,

> On Thu, Jan 31, 2008 at 10:57:35AM +1000, Michael Mansour wrote:
> > Is the cern key a valid key?
> > # rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-cern
> > error: /etc/pki/rpm-gpg/RPM-GPG-KEY-cern: import read failed(0).
> 
> It has been like this "for years".

Hmm.. why not just remove it or get cern to fix their key?

Is this an issue for TUV or SL?

Michael.

> -- 
> Konstantin Olchanski
> Data Acquisition Systems: The Bytes Must Flow!
> Email: olchansk-at-triumf-dot-ca
> Snail mail: 4004 Wesbrook Mall, TRIUMF, Vancouver, B.C., V6T 2A3, Canada
--- End of Original Message ---


Re: Help! Can not build VSFTPD on my system!

2008-02-01 Thread Michael Mansour

Hi Wenji,

On Fri, 1 Feb 2008, Wenji Wu wrote:


Is there a reason you don't want to use a prebuilt binary? If not, try:

yum install vsftpd

It's in the sl-base repository and installed fine for me.

Installed: vsftpd.i386 0:2.0.5-10.el5



Thanks,

The binary does not work for my case. I am working on a research project, and 
need to modify the vsftp code.


Then why not just grab the src.rpm from an SL mirror, install that and 
modify the source code from there?


At least that way you know 100% that the src.rpm will compile with your SL
version.
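
As a rough sketch (the src.rpm filename is just inferred from the binary
version above, and the exact BUILD directory name may differ):

# rpm -ivh vsftpd-2.0.5-10.el5.src.rpm
# cd /usr/src/redhat/SPECS
# rpmbuild -bp vsftpd.spec                      (unpack + apply the distro patches into ../BUILD)
  ... edit the code under /usr/src/redhat/BUILD/vsftpd-2.0.5/ ...
# rpmbuild -bc --short-circuit vsftpd.spec      (recompile without re-running %prep)

Once the changes settle down, roll them into a patch, add it to the spec file,
and do a full rpmbuild -bb to get a proper binary RPM out the other end.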


Regards,

Michael.


wenji





On Fri, 2008-02-01 at 17:14 -0600, Wenji Wu wrote:

Hi,  I got some problem to install vsftpd on my system, any thought?

thanks in advance.


My system is:
[EMAIL PROTECTED] vsftpd-2.0.5]# uname -a
Linux wan-koi.fnal.gov 2.6.20 #52 SMP PREEMPT Tue Jan 15 15:49:20

CST 2008 x86_64 x86_64 x86_64 GNU/Linux


Building Problem is:
.
[EMAIL PROTECTED] vsftpd-2.0.5]# make
gcc -c main.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c utility.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c prelogin.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ftpcmdio.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c postlogin.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c privsock.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c tunables.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ftpdataio.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c secbuf.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ls.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c postprivparent.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c logging.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c str.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c netstr.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c sysstr.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c strlist.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c banner.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c filestr.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c parseconf.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c secutil.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ascii.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c oneprocess.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c twoprocess.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c privops.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c standalone.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c hash.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c tcpwrap.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ipaddrparse.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c access.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c features.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c readwrite.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c ssl.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c sysutil.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -c sysdeputil.c -O2 -Wall -W -Wshadow  -idirafter dummyinc
gcc -o vsftpd main.o utility.o prelogin.o ftpcmdio.o postlogin.o

privsock.o tunables.o ftpdataio.o secbuf.o ls.o postprivparent.o
logging.o str.o netstr.o sysstr.o strlist.o banner.o filestr.o
parseconf.o secutil.o ascii.o oneprocess.o twoprocess.o privops.o
standalone.o hash.o tcpwrap.o ipaddrparse.o access.o features.o
readwrite.o ssl.o sysutil.o sysdeputil.o -Wl,-s `./vsf_findlibs.sh`

/lib/libpam.so.0: could not read symbols: File in wrong format
collect2: ld returned 1 exit status
make: *** [vsftpd] Error 1
..

thanks,

wenji






Re: Automounting NTFS partitions under SL 5.1

2008-02-11 Thread Michael Mansour
Hi,

> OS: SL 5.1 x86.
> 
> How can one use NTFS partitions automatically, after he boots in SL 
> 5.1 GNOME? I have installed fuse-ntfs-3g from the dag repository but 
> I don't know what must be done after this. Or is the dkms version better?

It's been a while since I last did this, but I remember I also downloaded the
ntfs-3g RPM from Dag and from memory, I then just had to use the mount command
to mount the NTFS partition.

Do an rpm -ql on the package to see what it installs and what binaries are
available to you, then read the man pages for some of those binaries.

After you've figured that out, set up your fstab to automount those NTFS
partitions on boot.

Regards,

Michael.

> Anyway the ideal would be to be able to see all the NTFS partitions 
> in GNOME nautilus.
> 
> Thanks a lot.
--- End of Original Message ---


Re: Problem with ftp://ftp.scientificlinux.org/?

2008-02-21 Thread Michael Mansour
Hi,

> Greetings all,
> 
> Is similar happening again? Our lftp hung again last night & although
> ftp.scientificlinux.org = linux21.fnal.gov pings, it seems inaccessible
> otherwise.

I'm also having a problem with rsync, which just hangs at this point:
opening tcp connection to rsync.scientificlinux.org port 873
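
A quick way to tell whether it's the daemon or the network path is to poke the
port directly, e.g.:

# telnet rsync.scientificlinux.org 873
# rsync rsync.scientificlinux.org::          (lists the available modules if the daemon answers)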

Michael.

> Many thanks to the maintainers of Scientific Linux!!
> 
> On Mon, 11 Feb 2008, Winnie Lacesso wrote:
> > 
> > Greetings, 
> > 
> > Is there some problem with 
> > ftp://ftp.scientificlinux.org//linux/scientific/
> > & http://ftp.scientificlinux.org/linux/scientific/ ?
> > 
> > I can't seem to get to any of them, some of our nightly yum updates hung & 
> > also our nightly mirror hung for the last 2 nights.
> > 
> > I do beg your pardon, there didn't seem to be any scheduled downtime that 
> > could be found under "News" or Scientific-Linux-Announce archives.
> > 
> > Many thanks to the excellent maintainers of Scientific Linux.
> > And someone said maintenance was a thankless task. Not!!
> > 
> > Grateful Unit
> > 
> > 
> >
--- End of Original Message ---


prelink failures

2008-03-19 Thread Michael Mansour
Hi,

I've recently upgraded some 32bit servers to SL46 (from 45).

All the servers that were upgraded, exhibit this:

# prelink /usr/bin/pstree
prelink: /usr/lib/libncurses.so.5.4: .debug_loc adjusting unfinished

# prelink /usr/bin/less
prelink: /usr/lib/libncursesw.so.5.4: .debug_loc adjusting unfinished

Only on those two files. Any ideas how I can fix it?

Thanks.

Michael.


Re: prelink failures

2008-03-20 Thread Michael Mansour
Hi,

> On Wed, Mar 19, 2008 at 5:31 PM, Michael Mansour <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> >  I've recently upgraded some 32bit servers to SL46 (from 45).
> >
> >  All the servers that were upgraded, exhibit this:
> >
> >  # prelink /usr/bin/pstree
> >  prelink: /usr/lib/libncurses.so.5.4: .debug_loc adjusting unfinished
> >
> >  # prelink /usr/bin/less
> >  prelink: /usr/lib/libncursesw.so.5.4: .debug_loc adjusting unfinished
> >
> >  Only on those two files. Any ideas how I can fix it?
> >
> >  Thanks.
> >
> >  Michael.
> 
> This is apparently a known issue:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=240658
> 
> (Thanks to Johnny Hughes for this info)

Thanks for this; I guess we just have to wait until TUV releases patches.
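
One possible stopgap until then (untested here, so treat it as a guess) is to
blacklist the two libraries in /etc/prelink.conf so the nightly prelink run
stops tripping over them:

-b /usr/lib/libncurses.so.5.4
-b /usr/lib/libncursesw.so.5.4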

Regards,

Michael.

> Akemi
--- End of Original Message ---


SL5 tomcat server - does it work?

2008-07-01 Thread Michael Mansour
Hi,

I'm quite new to Tomcat but would like to try to set up a Tomcat server using SL5.

I have a freshly built SL5.2 server and a yum list on this showed:

# yum list tomcat*
Loading "changelog" plugin
Loading "skip-broken" plugin
Loading "allowdowngrade" plugin
Loading "fastestmirror" plugin
Loading "protectbase" plugin
Loading "kernel-module" plugin
Loading "priorities" plugin
Loading "downloadonly" plugin
Loading "security" plugin
Determining fastest mirrors
sl-security   100% |=|  951 B00:00
primary.xml.gz100% |=|  232 B00:00
sl-base   100% |=|  951 B00:00
primary.xml.gz100% |=| 929 kB00:02
sl-base   : ## 2597/2597
0 packages excluded due to repository protections
0 packages excluded due to repository priority protections
Installed Packages
tomcat5-jsp-2.0-api.i386 5.5.23-0jpp.7.el5  installed
tomcat5-servlet-2.4-api.i386 5.5.23-0jpp.7.el5  installed
Available Packages
tomcat5.i386 5.5.23-0jpp.7.el5  sl-base
tomcat5-admin-webapps.i386   5.5.23-0jpp.7.el5  sl-base
tomcat5-common-lib.i386  5.5.23-0jpp.7.el5  sl-base
tomcat5-jasper.i386  5.5.23-0jpp.7.el5  sl-base
tomcat5-jasper-javadoc.i386  5.5.23-0jpp.7.el5  sl-base
tomcat5-jsp-2.0-api-javadoc.i386 5.5.23-0jpp.7.el5  sl-base
tomcat5-server-lib.i386  5.5.23-0jpp.7.el5  sl-base
tomcat5-servlet-2.4-api-javadoc.i386 5.5.23-0jpp.7.el5  sl-base
tomcat5-webapps.i386 5.5.23-0jpp.7.el5  sl-base

so I went ahead and installed:

# yum -y install tomcat5.i386 tomcat5-admin-webapps.i386 tomcat5-webapps.i386
tomcat5-common-lib.i386 tomcat5-jasper.i386 tomcat5-jasper-javadoc.i386
tomcat5-jsp-2.0-api-javadoc.i386 tomcat5-server-lib.i386
tomcat5-servlet-2.4-api-javadoc.i386
Loading "changelog" plugin
Loading "skip-broken" plugin
Loading "allowdowngrade" plugin
Loading "fastestmirror" plugin
Loading "protectbase" plugin
Loading "kernel-module" plugin
Loading "priorities" plugin
Loading "downloadonly" plugin
Loading "security" plugin
Loading mirror speeds from cached hostfile
0 packages excluded due to repository protections
0 packages excluded due to repository priority protections
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package tomcat5-jasper.i386 0:5.5.23-0jpp.7.el5 set to be updated
---> Package tomcat5-servlet-2.4-api-javadoc.i386 0:5.5.23-0jpp.7.el5 set to
be updated
---> Package tomcat5-common-lib.i386 0:5.5.23-0jpp.7.el5 set to be updated
--> Processing Dependency: jaf >= 1.0.1 for package: tomcat5-common-lib
--> Processing Dependency: ant >= 1.6 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-logging >= 1.0.4 for package:
tomcat5-common-lib
--> Processing Dependency: mx4j >= 3.0.1 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-el >= 1.0 for package:
tomcat5-common-lib
--> Processing Dependency: eclipse-ecj >= 3.1.1 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-pool >= 1.2 for package:
tomcat5-common-lib
--> Processing Dependency: jakarta-commons-dbcp >= 1.2.1 for package:
tomcat5-common-lib
--> Processing Dependency: jaf >= 1.0.1 for package: tomcat5-common-lib
--> Processing Dependency: javamail >= 1.3.1 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-logging >= 1.0.4 for package:
tomcat5-common-lib
--> Processing Dependency: jta >= 1.0.1 for package: tomcat5-common-lib
--> Processing Dependency: eclipse-ecj >= 3.1.1 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-collections >= 3.1 for package:
tomcat5-common-lib
--> Processing Dependency: javamail >= 1.3.1 for package: tomcat5-common-lib
--> Processing Dependency: ant >= 1.6 for package: tomcat5-common-lib
--> Processing Dependency: jta >= 1.0.1 for package: tomcat5-common-lib
--> Processing Dependency: mx4j >= 3.0.1 for package: tomcat5-common-lib
--> Processing Dependency: jakarta-commons-el >= 1.0 for package:
tomcat5-common-lib
--> Processing Dependency: jakarta-commons-pool >= 1.2 for package:
tomcat5-common-lib
--> Processing Dependency: jakarta-commons-dbcp >= 1.2.1 for package:
tomcat5-common-lib
--> Processing Dependency: jakarta-commons-collections >= 3.1 for package:
tomcat5-common-lib
---> Package tomcat5-webapps.i386 0:5.5.23-0jpp.7.el5 set to be updated
--> Processing Dependency: jakarta-taglibs-standard >= 1.1.0 for package:
tomcat5-webapps
---> Package tomcat5-jasper-javadoc.i386 0:5.5.23-0jpp.7.el5 set to be updated
---> Package tomcat5-server-lib.i386 0:5.5.23-0jpp.7.el5 set to be updated
--> Processing Dependency: jakarta-commons-beanutils >= 1.7.0 for package:
tomca

Re: SL5 tomcat server - does it work?

2008-07-03 Thread Michael Mansour
### [18/29]
  Installing: axis ### [19/29]
  Installing: mx4j ### [20/29]
  Installing: jakarta-commons-modeler  ### [21/29]
  Installing: tomcat5-server-lib   ### [22/29]
  Installing: geronimo-specs   ### [23/29]
  Installing: geronimo-specs-compat### [24/29]
  Installing: jakarta-commons-launcher ### [25/29]
  Installing: ldapjdk  ### [26/29]
  Installing: ant  ### [27/29]
  Installing: tomcat5-common-lib   ### [28/29]
  Installing: tomcat5  ### [29/29]

Installed: tomcat5.i386 0:5.5.23-0jpp.7.el5
Dependency Installed: ant.i386 0:1.6.5-2jpp.2 axis.i386 0:1.2.1-2jpp.6
bcel.i386 0:5.1-8jpp.1 classpathx-jaf.i386 0:1.0-9jpp.1 classpathx-mail.i386
0:1.1.1-4jpp.2 geronimo-specs.i386 0:1.0-0.M2.2jpp.12
geronimo-specs-compat.i386 0:1.0-0.M2.2jpp.12 jakarta-commons-beanutils.i386
0:1.7.0-5jpp.1 jakarta-commons-collections.i386 0:3.2-2jpp.3
jakarta-commons-daemon.i386 1:1.0.1-6jpp.1 jakarta-commons-dbcp.i386
0:1.2.1-7jpp.1 jakarta-commons-digester.i386 0:1.7-5jpp.1
jakarta-commons-discovery.i386 1:0.3-4jpp.1 jakarta-commons-el.i386
0:1.0-7jpp.1 jakarta-commons-fileupload.i386 1:1.0-6jpp.1
jakarta-commons-httpclient.i386 1:3.0-7jpp.1 jakarta-commons-launcher.i386
0:0.9-6jpp.1 jakarta-commons-logging.i386 0:1.0.4-6jpp.1
jakarta-commons-modeler.i386 0:1.1-8jpp.3.el5 jakarta-commons-pool.i386
0:1.3-5jpp.1 ldapjdk.i386 0:4.18-2jpp.3.el5 log4j.i386 0:1.2.13-3jpp.2
mx4j.i386 1:3.0.1-6jpp.4 regexp.i386 0:1.4-2jpp.2 tomcat5-common-lib.i386
0:5.5.23-0jpp.7.el5 tomcat5-jasper.i386 0:5.5.23-0jpp.7.el5
tomcat5-server-lib.i386 0:5.5.23-0jpp.7.el5 wsdl4j.i386 0:1.5.2-4jpp.1
Complete!

# service tomcat5 start
Starting tomcat5:  [  OK  ]

and: 

# ps ax|grep tomcat
16468 ?Sl 0:02 /usr/lib/jvm/java/bin/java
-Dcatalina.ext.dirs=/usr/share/tomcat5/shared/lib:/usr/share/tomcat5/common/lib 
-Dcatalina.ext.dirs=/usr/share/tomcat5/shared/lib:/usr/share/tomcat5/common/lib
-Djava.endorsed.dirs=/usr/share/tomcat5/common/endorsed -classpath
/usr/lib/jvm/java/lib/tools.jar:/usr/share/tomcat5/bin/bootstrap.jar:/usr/share/tomcat5/bin/commons-logging-api.jar:/usr/share/java/mx4j/mx4j-impl.jar:/usr/share/java/mx4j/mx4j-jmx.jar
-Dcatalina.base=/usr/share/tomcat5 -Dcatalina.home=/usr/share/tomcat5
-Djava.io.tmpdir=/usr/share/tomcat5/temp org.apache.catalina.startup.Bootstrap
start

and:

# netstat -nap |grep :80
tcp0  0 127.0.0.1:8005  0.0.0.0:*  
LISTEN  16468/java 
tcp0  0 0.0.0.0:80090.0.0.0:*  
LISTEN  16468/java 
tcp0  0 0.0.0.0:80800.0.0.0:*  
LISTEN  16468/java 

So it now looks ok. Thanks.

Now, any ideas how to connect to this thing?

I've tried http://127.0.0.1:8080 but even though it's listening, I can't get a
connection from either firefox or links.
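
A few things worth checking from the box itself to narrow it down (the log
path is where the stock tomcat5 package puts it, if I remember right):

# curl -I http://127.0.0.1:8080/
# iptables -L -n                        (make sure nothing is dropping 8080)
# tail -f /var/log/tomcat5/catalina.out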

Michael.

> Hope this helps.
> Troy
> 
> Michael Mansour wrote:
> > Hi,
> > 
> > I quite new to Tomcat but would like to try and setup a Tomcat server
using SL5.
> > 
> > I have a freshly built SL5.2 server and a yum list on this showed:
> > 
> > # yum list tomcat*
> > Loading "changelog" plugin
> > Loading "skip-broken" plugin
> > Loading "allowdowngrade" plugin
> > Loading "fastestmirror" plugin
> > Loading "protectbase" plugin
> > Loading "kernel-module" plugin
> > Loading "priorities" plugin
> > Loading "downloadonly" plugin
> > Loading "security" plugin
> > Determining fastest mirrors
> > sl-security   100% |=|  951 B00:00
> > primary.xml.gz100% |=|  232 B00:00
> > sl-base   100% |=|  951 B00:00
> > primary.xml.gz100% |=| 929 kB00:02
> > sl-base   : ## 2597/2597
> > 0 packages excluded due to repository protections
> > 0 packages excluded due to repository priority protections
> > Installed Packages
> > tomcat5-jsp-2.0-api.i386 5.5.23-0jpp.7.el5  installed
> > tomcat5-servlet-2.4-api.i386 5.5.23-0jpp.7.el5  installed
> > Available Packages
> > tomcat5.i386 5.5.23-0jpp.7.el5  sl-base
> > tomcat5-admin-webapps.i386  

Update webpage for SL5.2

2008-07-07 Thread Michael Mansour
Hi,

Please update the webpage:

https://www.scientificlinux.org/download/

referencing the new 5.2 ISO downloads.

Thanks.

Michael.


Re: Recommended third-party repos

2008-07-12 Thread Michael Mansour
Hi,

> When you install SL5.2, in /etc/yum.repos.d I expect you to 
> find, as for SL5.1, the following set of files which I interpret as 
> referring to "recommended (and compatible) repos":
> 
> atrpms.repo, flash.repo, sl-contrib.repo, sl-fastbugs.repo, 
> sl-security.repo, sl-testing.repo, dag.repo, sl-bugfix-52.repo, 
> sl-debuginfo.repo, sl.repo, sl-srpms.repo
> 
> The repos are enabled by having "enabled=1" instead of "enabled=0" 
> lines in their corresponding files.  Only the sl and sl-security 
> ones are enabled by default.

These days I also always use the EPEL repo for my SL servers; on SL5-series
servers it's available as a yum install:

# rpm -q yum-conf-epel
yum-conf-epel-5-1.noarch

For the SL4 ones, I just grab the epel RPM from fedora:

# rpm -qf /etc/yum.repos.d/epel.repo
epel-release-4-6
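
i.e. download epel-release-4-*.noarch.rpm from any Fedora EPEL mirror (the
exact URL and revision vary) and:

# rpm -Uvh epel-release-4-*.noarch.rpm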

Maybe Connie or Troy could make a yum-conf-epel for SL4 servers?

Regards,

Michael.

> Steven Yellin
> 
> On Sun, 13 Jul 2008, Claudiu Tanaselia wrote:
> 
> > Hello all,
> >
> > I used SL Linux a few years ago and now I'm back, hopefully for a
> > longer time than before. I've searched the list for the following
> > question but didn't find a clear answer, so here it goes and please
> > apologize if the question was answered before: what are the (most)
> > recommended (and compatible) repos to use with SL 5.2 distribution?
> > (not considering the official ones, for updates and security patches,
> > provided by SL team). I've seen that livna only supports Fedora and
> > I'm not very familliar with rpm-based repos. Others I found where DAG
> > and ATrpms. I don't know if I remember correct, but I think that last
> > time I've enabled a non-oficial repository, compatible with RHEL, and
> > succesfully used it with SL.
> >
> > Thank you,
> > -- 
> > Claudiu Tanaselia
> > Researcher
> >
> > Research Institute for
> > Analytical Instrumentation (ICIA)
> > Donath 67
> > 400293 Cluj-Napoca, Romania
> >
> > Tel  +40 264 420 590
> > Fax  +40 264 420 667
> > Cell +40 744 670 782
> >
--- End of Original Message ---


Re: status of idea of SL and Centos merger [OFF]

2008-07-30 Thread Michael Mansour
Hi,

> Adrian Sevcenco wrote:
> > Hi,
> > Sorry to be offtopic but given that there is already talk on the next 
> > version of RHEL 
> >
http://www.redhatmagazine.com/2008/07/29/whats-next-in-red-hat-enterprise-linux-part-1/

> > 
> > i am wondering the idea of common repositories is still in place...
> > Thanks for any feedback,
> > Best regards,
> > Adrian
> 
> Although there is talk of the next RHEL, it's not going to be until 
> late next year.  So there is still a long way away. The answer is 
> that there is talk of working more with CentOS on the next release.  
> We currently haven't hammered out many details, so there isn't 
> really too much to say. One thing I do want to point out, because 
> the rumor keeps comming up.  There will be a Scientific Linux 6.  We 
> will keep our identity.  We will not completely merge with CentOS.

I remember first looking at all this after Fedora became unmanageable for me
in the enterprise. I originally looked at Whitebox Linux (yes, I'm talking
early days), Tao, Scientific Linux and CentOS.

The reason I chose Scientific Linux was simple: stability. My highest priority
was to have an OS which was rock solid and didn't require me to spend a large
part of my life fixing it, maintaining it, patching it and upgrading it.

Why didn't I choose CentOS? a couple of reasons:

* CentOS would immediately re-package and release updates straight after Red
Hat, bugs and all. SL would perform further tests, meaning I had something more
stable and tested than what Red Hat and CentOS would release, saving me
heartache, stress and time.

* at the time, CentOS forced upgrades to the latest released Red Hat update
kits. This meant that when I ran, say, CentOS 4.1 and CentOS 4.2 was out,
CentOS would no longer package and release the errata for 4.1. SL would
closely follow the same support regime as Red Hat, which supports releases for
8 years (although SL committed to 3, which is still OK), no matter what update
kit/release you're running. I don't want to be forced to do anything,
especially in enterprise production environments where things cannot go wrong.

Since making the decision to go SL over CentOS I've never looked back. I use
repos from Dag/Dries, ATrpms, EPEL and even CentOS extras/plus and
utterramblings (when I really need them for clients). But the point is, the
approaches were different for CentOS and SL when I was looking at this years
ago, and I needed/preferred the SL approach over the CentOS approach.

Regards,

Michael.

> Troy
> -- 
> __
> Troy Dawson  [EMAIL PROTECTED]  (630)840-6468
> Fermilab  ComputingDivision/LCSI/CSI DSS Group
> __
--- End of Original Message ---


Java patch on SL5.0

2008-08-03 Thread Michael Mansour
Hi,

I'm using:

# cat /etc/redhat-release
Scientific Linux SL release 5.0 (Boron)

and just applied some patches:

# yum -y update libxslt.x86_64 libxslt.i386 libxslt-python.x86_64
nfs-utils.x86_64 java-1.5.0-sun-compat.noarch jdk.i586 jdk.x86_64
Loading "kernel-module" plugin
Setting up Update Process
Setting up repositories
Reading repository metadata in from local files
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for libxslt to pack into transaction set.
libxslt-1.1.17-2.el5_2.2. 100% |=|  16 kB00:00
---> Package libxslt.x86_64 0:1.1.17-2.el5_2.2 set to be updated
---> Downloading header for libxslt to pack into transaction set.
libxslt-1.1.17-2.el5_2.2. 100% |=|  16 kB00:00
---> Package libxslt.i386 0:1.1.17-2.el5_2.2 set to be updated
---> Downloading header for jdk to pack into transaction set.
jdk-1.5.0_16-fcs.i586.rpm 100% |=| 280 kB00:01
---> Package jdk.i586 2000:1.5.0_16-fcs set to be updated
---> Downloading header for java-1.5.0-sun-compat to pack into transaction set.
java-1.5.0-sun-compat-1.5 100% |=|  55 kB00:01
---> Package java-1.5.0-sun-compat.noarch 0:1.5.0.16-1.1.sl5.jpp set to be 
updated
---> Downloading header for nfs-utils to pack into transaction set.
nfs-utils-1.0.9-35z.el5_2 100% |=|  34 kB00:01
---> Package nfs-utils.x86_64 1:1.0.9-35z.el5_2 set to be updated
---> Downloading header for libxslt-python to pack into transaction set.
libxslt-python-1.1.17-2.e 100% |=| 5.1 kB00:00
---> Package libxslt-python.x86_64 0:1.1.17-2.el5_2.2 set to be updated
---> Downloading header for jdk to pack into transaction set.
jdk-1.5.0_16-fcs.x86_64.r 100% |=| 238 kB00:01
---> Package jdk.x86_64 2000:1.5.0_16-fcs set to be updated
--> Running transaction check
Beginning Kernel Module Plugin
Finished Kernel Module Plugin

Dependencies Resolved

================================================================================
 Package                  Arch     Version                Repository      Size
================================================================================
Updating:
 java-1.5.0-sun-compat    noarch   1.5.0.16-1.1.sl5.jpp   sl-security     59 k
 jdk                      i586     2000:1.5.0_16-fcs      sl-security     46 M
 jdk                      x86_64   2000:1.5.0_16-fcs      sl-security     41 M
 libxslt                  x86_64   1.1.17-2.el5_2.2       sl-security    488 k
 libxslt                  i386     1.1.17-2.el5_2.2       sl-security    485 k
 libxslt-python           x86_64   1.1.17-2.el5_2.2       sl-security    136 k
 nfs-utils                x86_64   1:1.0.9-35z.el5_2      sl-security    387 k

Transaction Summary
================================================================================
Install      0 Package(s)
Update       7 Package(s)
Remove       0 Package(s)

Total download size: 89 M
Downloading Packages:
(1/7): libxslt-1.1.17-2.e 100% |=| 488 kB00:02
(2/7): libxslt-1.1.17-2.e 100% |=| 485 kB00:02
(3/7): jdk-1.5.0_16-fcs.i 100% |=|  46 MB03:35
(4/7): java-1.5.0-sun-com 100% |=|  59 kB00:01
(5/7): nfs-utils-1.0.9-35 100% |=| 387 kB00:02
(6/7): libxslt-python-1.1 100% |=| 136 kB00:01
(7/7): jdk-1.5.0_16-fcs.x 100% |=|  41 MB03:51
Running Transaction Test
warning: libxslt-1.1.17-2.el5_2.2: Header V3 DSA signature: NOKEY, key ID 
82fd17b2
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating  : libxslt  ### [ 1/14]
  Updating  : libxslt  ### [ 2/14]
  Updating  : jdk  ### [ 3/14]
  Updating  : jdk  ### [ 4/14]
  Updating  : java-1.5.0-sun-compat### [ 5/14]
  Updating  : nfs-utils### [ 6/14]
  Updating  : libxslt-python   ### [ 7/14]
  Cleanup   : libxslt  ### [ 8/14]
  Cleanup   : libxslt  ### [ 9/14]
  Cleanup   : jdk  ### [10/14]
touch: cannot touch `/usr/java/jdk1.5.0_14/lib/tools.pack': No such file or
directory
touch: cannot touch `/usr/java/jdk1.5.0_14/jre/lib/rt.pack': No such file or
directory
touch: cannot touch `/usr/java/jdk1.5.0_14/jre/lib/jsse.pack': No such file or
directory
touch: cannot touch `/usr/java/jdk1.5.0_14/jre/lib/charsets.pack': No such
file or directory
touch: cannot touch `/usr/java/jdk1.5.0_14/jre/lib/ext/localedata.pack': No
such file or directory
touch: cannot

Re: [5.1] Logged-in users aren't seen

2008-08-14 Thread Michael Mansour
Hi,

> Something strange is going on here... The `w`, `who` and `finger`
> programs do not seem to know about any logged-in users, even when issued
> as root:
> 
> -BEGIN SHELL I/O-
> ~# who
> ~# w
>  01:50:04 up 6 days, 14:03,  0 users,  load average: 0.21, 0.32, 0.29
> USER TTY  FROM  LOGIN@   IDLE   JCPU   PCPU WHAT
> ~# finger
> No one logged on.
> ~#
> --END SHELL I/O--
> 
> Does anyone on 5.1 get this kind of results? Does anyone know the reason?

I would run rkhunter and/or chkrootkit on your server as this would happen if
a rootkit was installed.
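
Both are quick to run once installed (they're available from the usual
third-party repos); roughly:

# rkhunter --check
# chkrootkit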

Regards,

Michael.

> Thanks,
> Andrea
--- End of Original Message ---


Large Hadron Collider using Scientific Linux :)

2008-09-11 Thread Michael Mansour
All I can say is congrats to SL for potentially being the end to all further
kernel patches in another two to three weeks :)

http://blog.internetnews.com/skerner/2008/09/large-hadron-collider---powere.html
 

Michael.


Re: Please use mirrors if possible

2008-09-18 Thread Michael Mansour
Hi Troy,

> FRANCHISSEUR Robert wrote:
> > Hi Troy,
> > 
> > is there a way to log which mirror we use when yum update ?
> > 
> > I have a mirror 2 floors upstair which comme first in my repos :
> > 
> >
baseurl=ftp://distrib-coffee.ipsl.jussieu.fr/pub/linux/scientific-linux/45/$basearch/errata/SL/RPMS/
> >
http://distrib-coffee.ipsl.jussieu.fr/pub/linux/scientific-linux/45/$basearch/errata/SL/RPMS/
> >
http://ftp.lip6.fr/pub/linux/distributions/scientific/45/$basearch/errata/SL/RPMS/
> >
http://ftp.scientificlinux.org/linux/scientific/45/$basearch/errata/SL/RPMS/
> >
ftp://ftp.scientificlinux.org/linux/scientific/45/$basearch/errata/SL/RPMS/
> > ftp://linuxsoft.cern.ch/scientific/45/$basearch/errata/SL/RPMS/
> > 
> > but I often notice, just doing a 'netstat', that I am using your server 
> > rather
> > than ours and I'd like to investigate this furter on a longer period
> > of time.
> 
> Hi,
> If you want to use your own mirrors in preference to the others you 
> need to include "failovermethod=priority" in your repo 
> configuration. The default is roundrobin which will select one at 
> random from the list.

So does that mean we specify the baseurl entry first, then the mirrorlist
entry? Like:

baseurl=blah1
mirrorlist=blah2
failovermethod=priority

and yum will try the baseurl first (if it can connect) before falling back
to the mirrors?
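
Something like this is what I'm picturing (just a sketch; the repo id and URLs
below are placeholders):

[sl-errata]
name=SL errata
# with failovermethod=priority the URLs are tried in the order listed,
# so the local mirror goes first
baseurl=http://mirror.example.local/scientific/45/$basearch/errata/SL/RPMS/
        http://ftp.scientificlinux.org/linux/scientific/45/$basearch/errata/SL/RPMS/
failovermethod=priority
enabled=1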

Michael.


Re: cache access denied

2008-09-19 Thread Michael Mansour
Hi,

> hi
> 
> I'm using a proxy server to allow internet access to my clients. It worked
> fine for two years, but now my clients are getting an error message when
> opening their browser: "cache access denied". I tried the following:
> 1. echo "" > /var/log/squid/cache.log
> 2. squid -z after clearing the cache.

None of the above will help with that error. You should check the ACLs in
/etc/squid/squid.conf to see what rules you've put in place, i.e. which
networks/subnets are allowed, etc.
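
As a rough illustration of the sort of block to look for (the subnet below is
only an example, use whatever your clients actually sit on):

acl localnet src 192.168.0.0/24
http_access allow localnet
http_access deny all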

Regards,

Michael.

> it starts working. But now the same problem has come up again. Where 
> is the problem?
> 
> -- 
> Vivek Chalotra
> GRID Project Associate,
> High Energy Physics Group,
> Department of Physics & Electronics,
> University of Jammu,
> Jammu 180006,
> INDIA.
--- End of Original Message ---


Re: Security Breach

2008-10-01 Thread Michael Mansour
Hi,

> Harry Enke wrote:
> > Hi,
> > there is an easy configurable tool for preventing brute force attacks, 
> > it's called "fail2ban". It sifts through logs for attacks on security 
> > critical ports and blocks login attempts from ip-addresses which fail 
> > too often in too short a timeframe (configurable).
> > 
> > http://www.fail2ban.org

I've personally been using:

http://www.aczoom.com/cms/blockhosts

for years now for customers that need ports open to the public internet (ftp,
ssh, etc). BlockHosts can work with various services out-of-the-box and
handles hosts.allow/deny files and/or iptables rules. It also has web
interfaces to display blocked lists and GeoIP maps if you want them.
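
For reference, the end result of these tools is just a per-address firewall
rule, something like (the address is only an example):

# iptables -I INPUT -s 203.0.113.5 -j DROP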

> Is this in error?
> "Fail2ban scans log files like /var/log/pwdfail or 
> /var/log/apache/error_log and bans IP that makes too many password 
> failures. It updates firewall rules to reject the IP address."
> 
> Examining logs after the event does not provide real-time protection.

I'm not after real-time; the above is good enough for me, but I'm interested
in your comment. Is there a better software solution out there?

Michael.


Re: How to run asp

2008-10-02 Thread Michael Mansour
Hi,

> Hello all,
> i have done full installation of scientific linux 4.5 and installed apache
> server on localhost but i m not able to run asp scripts and sql database

What SQL database do you want to run?

In terms of ASP, you're best off using a Windows IIS server to host ASP pages.

You can do it under Linux using things like the Apache::ASP perl module and
others, but they have limitations and aren't 100% compatible with every ASP
feature.
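
If you do want to try Apache::ASP, the mod_perl setup its documentation
describes looks roughly like this (a sketch; the Global path is just an
example):

PerlModule Apache::ASP
<Files ~ (\.asp)>
    SetHandler perl-script
    PerlHandler Apache::ASP
    PerlSetVar Global /var/www/asp-state
</Files>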

Regards,

Michael.

> scripts on the browser. What else i need to run them. Can anybody 
> help me
> 
> regards
> -- 
> Vivek Chalotra
> GRID Project Associate,
> High Energy Physics Group,
> Department of Physics & Electronics,
> University of Jammu,
> Jammu 180006,
> INDIA.
--- End of Original Message ---


Weird problem with SL5.0 and 5.1

2008-10-09 Thread Michael Mansour
Hi,

This is a simple but very "funny" problem I'd just like to report.

This has to do with the pcre package.

I upgraded an SL 5.0 server to 5.1. I needed to install pcre-devel, so using 
yum:

# yum -y install pcre-devel
Loading "kernel-module" plugin
Setting up Install Process
Setting up repositories
Reading repository metadata in from local files
Parsing package install arguments
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for pcre-devel to pack into transaction set.
pcre-devel-6.6-2.el5_1.7. 100% |=|  10 kB00:00
---> Package pcre-devel.i386 0:6.6-2.el5_1.7 set to be updated
---> Downloading header for pcre-devel to pack into transaction set.
pcre-devel-6.6-2.el5_1.7. 100% |=|  10 kB00:00
---> Package pcre-devel.x86_64 0:6.6-2.el5_1.7 set to be updated
--> Running transaction check
--> Processing Dependency: pcre = 6.6-2.el5_1.7 for package: pcre-devel
--> Finished Dependency Resolution
Beginning Kernel Module Plugin
Finished Kernel Module Plugin
Error: Missing Dependency: pcre = 6.6-2.el5_1.7 is needed by package pcre-devel

Ok, so what do I have installed:

# yum list pcre\*
Loading "kernel-module" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
pcre.x86_64  6.6-2.el5.7installed
pcre.i3866.6-2.el5.7installed
Available Packages
pcre.i3866.6-2.el5_1.7  sl-base
pcre.x86_64  6.6-2.el5_1.7  sl-base
pcre-devel.x86_646.6-2.el5_1.7  sl-base
pcre-devel.i386  6.6-2.el5_1.7  sl-base

Ok, so I try to install from the SL5.1 tree manually:

# rpm -Uvh
http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.i386.rpm
http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.x86_64.rpm
Retrieving
http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.i386.rpm
Retrieving
http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.x86_64.rpm
warning: /var/tmp/rpm-xfer.uTMUG9: Header V3 DSA signature: NOKEY, key ID 
a7048f8d
Preparing...### [100%]
package pcre-6.6-2.el5.7 (which is newer than pcre-6.6-2.el5_1.7) is
already installed

i.e. I cannot install pcre-devel 6.6-2.el5.7 because that only exists in the SL
5.0 security area:

http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/

and is not in 5.1 or 5.2. In other words, the pcre running on my SL5.1 box is
not the pcre SL5.1 (or 5.2) is supposed to ship, but the newer pcre from SL 5.0.

When I upgraded from 5.0 to 5.1 using yum, pcre wasn't "downgraded" to the
older pcre-6.6-2.el5_1.7 that SL5.1 carries.

To resolve this, I just did:

# rpm -Uvh
http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.x86_64.rpm
http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.i386.rpm
Retrieving
http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.x86_64.rpm
Retrieving
http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.i386.rpm
warning: /var/tmp/rpm-xfer.7Z5m3A: Header V3 DSA signature: NOKEY, key ID 
82fd17b2
Preparing...### [100%]
   1:pcre-devel ### [ 50%]
   2:pcre-devel ### [100%]

i.e. I installed the pcre-devel from SL 5.0 into the 5.1 environment.

Regards,

Michael.


Re: Weird problem with SL5.0 and 5.1

2008-10-09 Thread Michael Mansour
Hi,

> Hi,
> 
> This is a simple but very "funny" problem I'd just like to report.
> 
> This has to do with the pcre package.
> 
> I upgraded an SL 5.0 server to 5.1. I needed to install pcre-devel,
>  so using yum:
> 
> # yum -y install pcre-devel
> Loading "kernel-module" plugin
> Setting up Install Process
> Setting up repositories
> Reading repository metadata in from local files
> Parsing package install arguments
> Resolving Dependencies
> --> Populating transaction set with selected packages. Please wait.
> ---> Downloading header for pcre-devel to pack into transaction set.
> pcre-devel-6.6-2.el5_1.7. 100% |=|  10 kB
> 00:00
> ---> Package pcre-devel.i386 0:6.6-2.el5_1.7 set to be updated
> ---> Downloading header for pcre-devel to pack into transaction set.
> pcre-devel-6.6-2.el5_1.7. 100% |=|  10 kB
> 00:00
> ---> Package pcre-devel.x86_64 0:6.6-2.el5_1.7 set to be updated
> --> Running transaction check
> --> Processing Dependency: pcre = 6.6-2.el5_1.7 for package: pcre-devel
> --> Finished Dependency Resolution
> Beginning Kernel Module Plugin
> Finished Kernel Module Plugin
> Error: Missing Dependency: pcre = 6.6-2.el5_1.7 is needed by package 
> pcre-devel
> 
> Ok, so what do I have installed:
> 
> # yum list pcre\*
> Loading "kernel-module" plugin
> Setting up repositories
> Reading repository metadata in from local files
> Installed Packages
> pcre.x86_64  6.6-2.el5.7installed
> pcre.i3866.6-2.el5.7installed
> Available Packages
> pcre.i3866.6-2.el5_1.7  sl-base
> pcre.x86_64  6.6-2.el5_1.7  sl-base
> pcre-devel.x86_646.6-2.el5_1.7  sl-base
> pcre-devel.i386  6.6-2.el5_1.7  sl-base
> 
> Ok, so I try to install from the SL5.1 tree manually:
> 
> # rpm -Uvh
> http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.i386.rpm
> http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.x86_64.rpm
> Retrieving
> http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.i386.rpm
> Retrieving
> http://ftp.scientificlinux.org/linux/scientific/51/x86_64/SL/pcre-6.6-2.el5_1.7.x86_64.rpm
> warning: /var/tmp/rpm-xfer.uTMUG9: Header V3 DSA signature: NOKEY, key ID
> a7048f8d
> Preparing...### [100%]
> package pcre-6.6-2.el5.7 (which is newer than pcre-6.6-2.el5_1.7) is
> already installed
> 
> ie. I cannot install pcre-devel 6.6-2.el5.7 because that only exists 
> in the SL
> 5.0 security area:
> 
> http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/
> 
> and is not in 5.1 or in 5.2, and the pcre I have running in SL5.1 is 
> actually not what it should be running (or not what SL5.1 or 5.2 is 
> supposed to have installed), but the newer pcre from SL 5.0
> 
> When I upgraded using yum from 5.0 to 5.1, the pcre wasn't 
> "downgraded" to the older pcre-6.6-2.el5_1.7 in SL5.1
> 
> To resolve this, I just did:
> 
> # rpm -Uvh
> http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.x86_64.rpm
> http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.i386.rpm
> Retrieving
> http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.x86_64.rpm
> Retrieving
> http://ftp.scientificlinux.org/linux/scientific/50/x86_64/updates/security/pcre-devel-6.6-2.el5.7.i386.rpm
> warning: /var/tmp/rpm-xfer.7Z5m3A: Header V3 DSA signature: NOKEY, key ID
> 82fd17b2
> Preparing...### [100%]
>    1:pcre-devel ### [ 50%]
>    2:pcre-devel ### [100%]
> 
> ie. installed the pcre-devel from SL 5.0 into the 5.1 environment.

The same issue also exists for the libpng-devel package:

[EMAIL PROTECTED] php-4.4.9]# yum list libpng\*
Loading "kernel-module" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
libpng.x86_642:1.2.10-7.1.el5.1 installed
libpng.i386  2:1.2.10-7.1.el5.1 installed
Available Packages
libpng.x86_642:1.2.10-7.1.el5_0.1   sl-base
libpng.i386  2:1.2.10-7.1.el5_0.1   sl-base
libpng-devel.x86_64  2:1.2.10-7.1.el5_0.1   sl-base
libpng-devel.i3862:1.2.10-7.1.el5_0.1   sl-base

Regards,

Michael.


Re: port 10000

2008-10-10 Thread Michael Mansour
Hi,

> On Fri, Oct 10, 2008 at 01:09:30PM +0530, vivek chal wrote:
> > hi !
> > 
> > port 10000/tcp is not listed in my /etc/services, so how do I open it?
> 
> /etc/services is just a table to give a service a name
> (getservbyname() ). I don't know what service is running behind port
> 10000 at that site but your example showed that you can connect to 
> the service. You just have to know what protocol is run by the 
> service behind it. And actually, I'm wondering what your question 
> has got to do with this list :-)

By default port 10000 is used by Webmin, and 20000 by Usermin.

http://www.webmin.com

Check to see it's running by:

# ps ax|grep miniserv
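
If it's running you should also see it listening there (assuming net-tools is
installed):

# netstat -tlnp | grep :10000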

Regards,

Michael.


Re: Bug in crond?

2008-11-05 Thread Michael Mansour
Hi Jon,

> All,
> 
> I think I've found a bug in crond.  The man page (SL 5.2) says:
> 
>Daylight Saving Time and other time changes
> 
>Local time changes of less than three hours, such as those
>caused by the start or end of Daylight Saving Time, are 
> handled   specially. This only applies to jobs that run at a specific
>time and jobs that are run with a granularity greater than one
>hour. Jobs that run more frequently are scheduled normally.
> 
>If time has moved forward, those jobs that would have run in the
>interval that has been skipped will be run immediately. 
> Conversely,   if time has moved backward, care is taken to avoid 
> running   jobs twice.
> 
> ***Time changes of more than 3 hours are considered to be corrections
> ***to the clock or timezone, and the new time is used immediately.
> 
> However, today I found that when changing the system time from local
> time (MST) to UTC on a computer running SL 5.2, crond continued to 
> use local time.  It required a "/etc/rc.d/init.d/crond restart" to 
> make it use UTC, the new system time.  FYI, I changed the system 
> time by

This isn't a bug. When software starts up (pick a package, any package) it
reads the timezone information _only_ at startup. Once the daemon is running
it never needs to re-read the timezone information unless it's restarted.

If this is a bug with crond, then it's also a bug with every other piece of
software that's out there and runs as a daemon (oracle, mysql, apache, xinetd,
etc).

Software developers just do not write software that re-reads time zone
information periodically, and nor should they. 

In production environments, when you change time zone information it's
recommended you reboot your servers, even though that's often not convenient.
The reason? You may not know every single daemon/service/app running on the
server that would otherwise need to be restarted individually.

Regards,

Michael.

> "rm /etc/localtime ; ln -s /usr/share/zoneinfo/UTC /etc/localtime".
> 
> Jon
--- End of Original Message ---


Apache redirect help

2008-11-09 Thread Michael Mansour
Hi,

I realise this may not be the best mailing list for this query, but if someone
knows...

The problem I have is, I have an Apache website running on:

http(s)://site.example.local

For my local subnet (which exists in .local), I have Apache setup to do:

Redirect / https://site.example.local

for http (port 80) connections, so when anyone types http://site.example.local
on the .local subnet they're redirected to the SSL website.

When accessing this site externally on port 80, I go to:

http://site1.example.com

and (via DNS and PAT rules on the firewall) get:

https://site.example.local

as the URL in the external Web browser, which obviously doesn't work. This
makes sense though, because of my "Redirect / https://site.example.local"
entry in Apache.

How can I configure Apache to keep:

Redirect / https://site.example.local

for the .local subnet, while:

Redirect / https://site.example.com

for external subnets?

I have gone through Apache documents etc but can't find anything which helps.

Thanks.

Michael.


Re: Apache redirect help

2008-11-10 Thread Michael Mansour
Hi,

> > I realise this may not be the best mailing list for this query, but if 
> > someone
> > knows...
> >
> > The problem I have is, I have an Apache website running on:
> >
> > http(s)://site.example.local
> >
> > For my local subnet (which exists in .local), I have Apache setup to do:
> >
> > Redirect / https://site.example.local
> >
> > for http (port 80) connections, so when anyone types 
> > http://site.example.local
> > on the .local subnet they're redirected to the SSL website.
> >
> > When accessing this site externally on port 80, I go to:
> >
> > http://site1.example.com
> >
> > and (via DNS and PAT rules on the firewall) get:
> >
> > https://site.example.local
> >
> > as the URL in the external Web browser, which obviously doesn't work. This
> > makes sense though because of my "Redirect / https://site.example.local 
> > entry"
> > in Apache.
> >
> > How can I configure Apache to keep:
> >
> > Redirect / https://site.example.local
> >
> > for the .local subnet, while:
> >
> > Redirect / https://site.example.com
> >
> > for external subnets?
> 
> First, can you confirm that https://site.example.local works locally
> and https://site.example.com works externally (I suspect that you 
> will need two certificates) ?

Yes this works fine. The site.example.local is actually a PHP Help desk app,
so we use this internally every day (on https://site.example.local) and our
customers check the progress of their cases externally via
https://site.example.com

The problem is when customers forget to enter https and type http instead;
we'd just like the redirect to happen automatically when they make that
mistake in the URL.

> If the content is the same, can you redirect everyone to 
> https://site.example.com ?

Yes, the content is all the same, but since the PHP app is running on a server
on our local network (in our office) and listening on a Virtual IP on the
internal network, we cannot visit http(s)://site.example.com from our
local network.

The way the external people get to it is by giving the site.example.com an A
record which points to a dedicated WAN IP and a PAT rule on the firewall to
forward port 80 and 443 traffic to the internal Virtual IP.

In summary, you cannot go to your external WAN IP from your internal local
network.

So I need a way to tell Apache that if the visitor is coming from the WAN
(internet) then Redirect to https://site.example.com, if they're coming from
our local network then Redirect to https://site.example.local

I've searched the web and so far haven't been able to find a way to do this.
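
One approach that looks like it could fit is mod_rewrite keyed on the client
address in the port 80 vhost. Just a sketch (the internal subnet below is a
placeholder for whatever the .local network really uses):

RewriteEngine On
# internal clients get the .local name
RewriteCond %{REMOTE_ADDR} ^192\.168\.
RewriteRule ^/(.*) https://site.example.local/$1 [R,L]
# everyone else gets the public name
RewriteRule ^/(.*) https://site.example.com/$1 [R,L]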

Regards,

Michael.

> -- 
> Dr. Andrew C. Aitchison   Computer Officer, DPMMS, Cambridge
> [EMAIL PROTECTED] http://www.dpmms.cam.ac.uk/~werdna
--- End of Original Message ---


Re: more poorly named rpm's pulled from sl repo

2008-12-01 Thread Michael Mansour
Hi Troy,

> Hello,
> More poorly named rpm's have been pulled from the Scientific Linux 
> SL4 repositories.  Most people will not notice this at all.  But 
> those that are mirroring the Scientific Linux repositories will 
> notice that there are some rpm's deleted. This should be the last 
> time I have to do this for SL4, I believe all the poorly named rpm's 
> have been found.  There should also be an updated yum tomorrow which 
> has an updated versionfix.list.
> 
> rpm's that have been removed
> 
> flac-1.1.0-7.2.i386.rpm
> flac-1.1.0-7.2.x86_64.rpm
> flac-devel-1.1.0-7.2.i386.rpm
> flac-devel-1.1.0-7.2.x86_64.rpm
> libpng-1.2.7-3.el4_5.1.i386.rpm
> libpng-1.2.7-3.1.x86_64.rpm
> libpng-devel-1.2.7-3.el4_5.1.i386.rpm
> libpng-devel-1.2.7-3.1.x86_64.rpm
> xmms-flac-1.1.0-7.2.i386.rpm
> xmms-flac-1.1.0-7.2.x86_64.rpm
> 
> Thank you for your patience.

Thank you for taking the time to fix all of this for us!

Michael.

> Troy Dawson
> -- 
> __
> Troy Dawson  [EMAIL PROTECTED]  (630)840-6468
> Fermilab  ComputingDivision/LCSI/CSI DSS Group
> __
--- End of Original Message ---


Create LDAP account from Web form

2008-12-02 Thread Michael Mansour
Hi,

This area is quite new to me so I thought I'd ask this general question.

I have a requirement where I need to set up an LDAP server and then have a web
form available where people can fill out their details (name, address, etc)
and have that web form effectively create an account on the LDAP server.

In terms of the LDAP facility, I have previously installed and run OpenLDAP a
few times over the years, but never in production (just to learn it). So I'm
after some recommendations, given the requirement above.

* Should I use OpenLDAP for this?

* Should I use Fedora Directory Server for this?

* Should I use something else for LDAP directory services?

In terms of the Web form, is there anyone that knows what I can use here? like
a current project or current piece of software (non-commercial) that does this?
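
Whichever directory server and frontend it ends up being, I assume under the
hood the form just has to add an entry, roughly like this (a sketch; the DN,
attributes and bind credentials below are made up):

# newuser.ldif
dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
cn: John Smith
sn: Smith
uid: jsmith
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jsmith
loginShell: /bin/bash

# ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f newuser.ldif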

Thanks for any tips, recommendations and advice.

Michael.


Handling daily emails from multiple servers

2008-12-08 Thread Michael Mansour
Hi,

I'm currently in the process of rationalising the number of emails generated
by our servers.

Currently, there are plenty of processes cron'ed on each server that generate
multiple daily emails (Logwatch, etc) when they run.

I'm thinking of configuring and scripting the servers to generate their
nightly outputs to a file or directory, and then appending those outputs to a
formatted web page so I can just check one web page each morning instead of
receiving hundreds of emails per cron job.
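
Roughly what I have in mind (only a sketch; the script names and paths are
placeholders):

# on each server, e.g. in /etc/cron.d, instead of letting the job mail root:
0 4 * * * root /usr/local/bin/nightly-checks.sh > /var/reports/$(hostname -s).txt 2>&1

# on the web host, after pulling the files over (rsync, NFS, whatever):
{ echo '<html><body><pre>'
  for f in /var/reports/*.txt; do echo "== $f =="; cat "$f"; done
  echo '</pre></body></html>'; } > /var/www/html/daily-report.html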

Before I do this I was wondering how others handle this with their servers? 

Any software out there which can actually do this?

Thanks.

Michael.


Re: Handling daily emails from multiple servers

2008-12-09 Thread Michael Mansour
Hi Andrew,

> On Tue, 9 Dec 2008, Michael Mansour wrote:
> 
> > Hi,
> >
> > I'm currently in the process of starting to rationalise the amount of emails
> > generated from servers.
> >
> > Currently, there are plenty of processes cron'ed from each server that
> > generates multiple daily emails (Logwatch, etc) when processes are run.
> >
> > I'm thinking of configuring and scripting the servers to generate their
> > nightly outputs to a file or directory, and then appending those outputs to 
> > a
> > formatted web page so I can just check one web page each morning instead of
> > receiving hundreds of emails per cron job.
> 
> A hundred emails or a single giant web page ?
> Not clear that one is much better than the other.

IMHO the long web page is much better. 

Going through, say, 200 emails in a Webmail client (usually taking 5-10
seconds to click the delete/trash button) typically takes anywhere from 45
minutes to an hour for me. I have to read many of the emails too, so most of
them take more than 5-10 seconds.

Having all that on one web page, well, it could take me 10 minutes to go
through the web page, still better than what I'm doing now.

> If you ran a syslog server, all the logs could be in once place
> so logwatch could summarize across machines.
> It does, however, put your logs at the mercy of the network :-(

Yes I thought of this too, and have two central syslog servers which log into
MySQL, but it still really wouldn't solve my problem as many of those emails
won't properly fit into a syslog entry.

> > Before I do this I was wondering how others handle this with their servers?
> 
> I'm afraid I just wade through the emails,
> or choose which to ignore today.

:) What I've done in the meantime is set up a web ticketing system and
re-routed all those emails to "root" etc. into the ticketing system, which
gets them out of my Inbox and makes them easier to manage in a DB.
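
The re-routing itself is just a mail alias (addresses below are examples):

# /etc/aliases
root: helpdesk-queue@example.local

# then rebuild the alias db
newaliases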

I'm still looking for a better solution though, as hundreds of "cases" are now
raised each day in the ticketing system; that's easier to manage and delete
from, but still not ideal.

Regards,

Michael.

> -- 
> Dr. Andrew C. Aitchison   Computer Officer, DPMMS, Cambridge
> [EMAIL PROTECTED] http://www.dpmms.cam.ac.uk/~werdna
--- End of Original Message ---


Re: NFS server

2008-12-16 Thread Michael Mansour
Hi,

> hello all,
> 
> i have downloaded all the scientific linux versions in one machine 
> under a folder /home/vivek/scientific linux/
> 
> i have exported this folder via NFS.
>  when i am doing network install using bootable linux cd via nfs
> it is asking me to mount the directory .

Yes, that is not a problem; it's supposed to do that when you install without
using kickstart.

Just mount the directory manually through the GUI or text console.

Regards,

Michael.

> can anybody help me in making Network install server.
>
> regards
> 
> Vivek Chalotra
> GRID Project Associate,
> High Energy Physics Group,
> Department of Physics & Electronics,
> University of Jammu,
> Jammu 180006,
> INDIA.
--- End of Original Message ---


Re: NFS server

2008-12-17 Thread Michael Mansour
> Hi,
> 
> I am not sure whether the whitespace in your folder name will cause 
> some problems. How is your entry in /etc/exports file ?
> 
> regards
> 
> Udo
> 
> Michael Mansour wrote:
> > Hi,
> >
> >   
> >> hello all,
> >>
> >> i have downloaded all the scientific linux versions in one machine 
> >> under a folder /home/vivek/scientific linux/

If whitespace is the problem, you'll have to specify it as:

/home/vivek/scientific\ linux/
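
In /etc/exports the space can also be written as its octal code, which is what
exports(5) documents (the client and options below are only an example):

/home/vivek/scientific\040linux  192.168.1.0/24(ro,sync)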

Regards,

Michael.

> >> i have exported this folder via NFS.
> >>  when i am doing network install using bootable linux cd via nfs
> >> it is asking me to mount the directory .
> >> 
> >
> > Yes, that is not a problem? it's supposed to do that when you install 
> > without
> > using kickstart.
> >
> > Just mount the directory manually through the GUI or text console.
> >
> > Regards,
> >
> > Michael.
> >
> >   
> >> can anybody help me in making Network install server.
> >>
> >> regards
> >>
> >> Vivek Chalotra
> >> GRID Project Associate,
> >> High Energy Physics Group,
> >> Department of Physics & Electronics,
> >> University of Jammu,
> >> Jammu 180006,
> >> INDIA.
> >> 
> > --- End of Original Message ---
> >
--- End of Original Message ---


Re: AFS on XFS or ext3?

2009-02-18 Thread Michael Mansour
Hi,

> Hi Bob!
> 
> On Tue, 17 Feb 2009 09:55:41 -0700
>  Bob Barton  wrote:
> 
> > I am setting up a 2 TB file system to use for AFS volumes
> > on an AFS file server and I am wondering which file
> > system I should use - XFS or ext3. I plan to use
> > Scientific Linux 5.1 or 5.2 x86_64 as the operating
> > system on the file server machine.
> > Suggestions, comments and recommendations are very
> > welcome.
> Assuming that you use LVM anyway, I would recommend xfs.
> It is said to be slightly faster than ext3 (I have not

ext3 can be just as fast if you disable its "more redundant" features (atime,
etc). XFS isn't as redundant as ext3, so it's typically not recommended for
storing data of "very high importance".

> tested this by myself), it can take more directories,
> which in the case of AFS is not important and - the main
> point - you can modify the size of the filesystem
> with # xfs_growfs while xfs is mounted! 

You can do exactly the same thing with ext3 using resize2fs, from the man page:

   The resize2fs program will resize ext2 or ext3 file systems.  It can be
   used to enlarge or shrink an unmounted file system located on device.
   If the filesystem is mounted, it can be used to expand the size of the
   mounted filesystem, assuming the kernel supports on-line resizing.  (As
   of this writing, the Linux 2.6 kernel supports on-line resize for
   filesystems mounted using ext3 only.)

I've used it many times before and it works fine.
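
For example, growing an ext3 filesystem on LVM while it stays mounted (device
and size are placeholders):

# lvextend -L +50G /dev/vg0/data
# resize2fs /dev/vg0/data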

Regards,

Michael.

> E.g. if you want to add disks to your RAID ...
> I have very good experience extending LVM and xfs
> whithout stopping the service. 
> Please keep in mind that you cannot extend an xfs
> while it is 100.00% full. At least some blocks
> must be free :-)
> 
> Cheers
> 
> Anton J. Gamel
> 
> HPC und GRID-Computing
> Physikalisches Institut
> Abteilung Professor Herten
> 
> c/o Rechenzentrum der Universität Freiburg
> Arbeitsgruppe Dr. Winterer
> Hermann-Herder-Straße 10
> 79104 Freiburg
> 
> Tel.: ++49 (0)761 203 -4672
> 
> --
> Es bleibt immer ein Rest - und ein Rest vom Rest.
--- End of Original Message ---


Re: AFS on XFS or ext3?

2009-02-19 Thread Michael Mansour
Hi Brent,

> Michael Mansour, cut the CRAP/FUD out!  I would NOT depend on 

Hmm.. 

> ext3 if I CARED about what was stored on my disks.  I ONLY use ext3 
> if the data stored is NOT of "very high importance".  I use XFS when 
> I DO CARE, so I use it all the time.  XFS is the most reliable,

If XFS were that reliable then Red Hat would support it commercially. They
don't, specifically because ext3 is more reliable and robust.

Red Hat = Scientific Linux, so if it's not supported by TUV then it's not
supported by SL.

Don't believe me, raise a case with Red Hat and see.

>  dependable, and robust file system out there and independent tests 
> have consistently shown it to be much faster than ext3.  It has far 

Please read my first email, ext3 can perform just as fast with various
features turned off.

> more YEARS and Pentabytes of service under it's belt than ext3, a 
> LOT more!  I've had XFS do a much better job of surviving system 
> crashes and disk failures than ext3.

Different people will give the same arguments as you do. The fact is ext3 is
slower than XFS because it has more redundancy built in; turn off the
redundancy features and you get the same speeds as XFS.

Regards,

Michael.


Re: AFS on XFS or ext3?

2009-02-19 Thread Michael Mansour
Hi Bob,

> I certainly don't want to start any flame wars about choosing 
> between ext3 of XFS. One reason I was thinking of using XFS was 
> because recently when I set up an ext3 system, during the setup a 
> note popped up that an fsck would be forced on the file system after 
> 180 days. Having to take down a crucial resource for a long period 
> to do an fsck on 2TB of ext3 file system every 1/2 year is certainly 
> unattractive! I know there are ways to change this default using 
> tune2fs but I am uncertain what the implications of doing so are. My 

There is nothing wrong with extending the mount and disk check times using
tune2fs, especially in production.

Consider though that, as with any journaling filesystem, just because it's
journaled doesn't mean it's consistent. I personally extend the check
intervals using tune2fs on production servers, but I always allow an fsck to
run at some stage (either manually when I'm organising a reboot, or when
organising downtime with the customer).

fsck checks many aspects of the filesystem and should be run... eventually.
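
For reference, the knobs are tune2fs -c (maximum mount count) and -i (check
interval); the device below is only an example:

# tune2fs -c 50 -i 180d /dev/vg0/data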

> previous experience with AFS file servers has been with AIX3.x - 
> AIX4.x and Solaris 9 and I essentially turned the systems on and 
> left them alone for years (literally - they went down whenever the 
> building power failed for some reason or other). I am hoping that 
> XFS would have similar characteristics.

Some advice, do a web search of people that have had problems with XFS in
large environments (maybe do the same with ext3), and then make your decision.
But remember one important note, when using a Red Hat based system, Red Hat
themselves don't recommend XFS, they don't test it, they don't run it, so
getting any type of support if you had problems with XFS on SL is just that
much more difficult.

Regards,

Michael.

> -- 
> Bob Barton 
> Local Area Administrator (780) 492-5160
> 7-095 ECERF
> Chemical & Materials Engineering
> University of Alberta,
> Edmonton Alberta, T6G 2V4
--- End of Original Message ---


RE: AFS on XFS or ext3?

2009-02-19 Thread Michael Mansour
Hi Brunner,

> -Original Message-
> From: owner-scientific-linux-us...@listserv.fnal.gov
> [mailto:owner-scientific-linux-us...@listserv.fnal.gov] On Behalf Of 
> Bob Barton Sent: Thursday, February 19, 2009 3:16 PM To: scientific-
> linux-us...@fnal.gov Subject: Re: AFS on XFS or ext3?
> 
> I certainly don't want to start any flame wars about choosing between
> ext3 of XFS.
> One reason I was thinking of using XFS was because recently when I 
> set up an ext3 system, during the setup a note popped up that an 
> fsck would be forced on the file system after 180 days. Having to 
> take down a crucial resource for a long period to do an fsck on 2TB 
> of ext3 file system every 1/2 year is certainly unattractive! I know 
> there are ways to change this default using tune2fs but I am 
> uncertain what the implications of doing so are. My previous 
> experience with AFS file servers has been with AIX3.x - AIX4.x and 
> Solaris 9 and I essentially turned the systems on and left them 
> alone for years (literally - they went down whenever the building 
> power failed for some reason or other). I am hoping that XFS would have
> similar characteristics.
> 
> ==
> 
> >From my experience with ext3, the fsck takes place upon the first reboot
> that takes place 6+months after that pop-up. It doesn't watch it's
> wristwatch, say it's been 6 months, and do an fsck (THAT would be
> annoying).  So, it waits for a power-fail or shutdown, when disk service
> is normally not expected.

If I'm going to reboot a server and I don't want it to check the disks
afterwards, I use tune2fs beforehand to make sure the filesystems aren't due
for a check after n mounts, etc.

Get into the habit of doing that and there are no surprises on reboots :)
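
Checking whether a filesystem is due for a check is quick (device is an
example):

# tune2fs -l /dev/sda1 | grep -Ei 'mount count|check'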

Regards,

Michael.

> If an ext3 fs is mounted read-only (like my /usr partition) fsck 
> never runs on it at reboot, no matter what.
> 
> My systems, also, are left running for years; and they're located in
> charming get-aways like Kazakhstan: irate users are to be avoided at 
> all costs. ***
> This email and any files transmitted with it are confidential and
> intended solely for the use of the individual or entity to whom
> they are addressed. If you have received this email in error please
> notify the system manager. This footnote also confirms that this
> email message has been swept for the presence of computer viruses.
> www.Hubbell.com - Hubbell Incorporated**
--- End of Original Message ---


chroot SSH users on SL5

2009-03-03 Thread Michael Mansour
Hi,

I'm looking for a way to set up a chroot for SSH users, confining them to
their home directories.

Do people do this with SL5?

I've looked at the latest OpenSSH, which does support this, but it requires a
separate compilation. I'd rather find pre-built RPMs of the latest OpenSSH.

Any advice is appreciated.

Michael.


Re: chroot SSH users on SL5

2009-03-04 Thread Michael Mansour
Hi,

> Why, what is your threat model that you have to do this?

Thanks for your reply. Basically, we're managing the infrastructure for a
client, but not the (web) apps. 

The client has insisted his developers need SSH access. After quite some
discussion, we provided it.

The client has multiple developers and himself hosts sites for his own
clients, so multiple SSH accounts are being provided to multiple developers.

They really only need access to their home directories; they don't need access
to the rest of the server filesystem.

The new OpenSSH version makes this easier than other chroot hacks I've seen,
in that it uses its built-in sftp support to provide the chroot'ed environment.
So there's no need to copy every library for each command that has to be
available inside the chroot, and no upgrade headache whenever OpenSSH needs to
be updated.
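
For reference, the sftp-only variant of that configuration looks roughly like
this (a sketch; the group name is an example, and the chroot target must be
root-owned and not group/world-writable):

Subsystem sftp internal-sftp

Match Group webdevs
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no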

Regards,

Michael.

> Michael Mansour wrote:
> > Hi,
> > 
> > I'm looking for a way to setup the chroot for SSH users, into their home
> > directories.
> > 
> > Do people do this with SL5?
> > 
> > I've looked at the latest OpenSSH which does do this, but requires separate
> > compilation. I'd rather try and find pre-built RPM's of the latest OpenSSH.
> > 
> > Any advice is appreciated.
> > 
> > Michael.
> >
> 
> --
> 
> Please sign my petition:
> http://petitions.number10.gov.uk/alcohol-buying/
> 
> -
> Faye Gibbins, Computing Officer (Infrastructure Services)
>   GeoS KB; Linux, Unix, Security and Networks.
> Beekeeper  - The Apiary Project, KB -   www.bees.ed.ac.uk
> -
> 
>I grabbed at spannungsbogen before I knew I wanted it.
>   (x(x_(X_x(O_o)x_x)_X)x)
> 
> The University of Edinburgh is a charitable body,
> registered in Scotland, with registration number SC005336.
--- End of Original Message ---


Re: SL5.2 Machine freeze due to CPU100% in relation with the bond0 module

2009-03-30 Thread Michael Mansour
Hi,

> Hello,
> 
> On a recently installed SL5.2 x86_64 server configured with bonding
> of ethernet interfaces, the machine froze during the night with no
> activity. I found at least one reference to a similar 
> problem :
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=468027
> 
> I am wondering if some of you had the same problem and if there is
> a chance that it is fixed soon... I find it quite serious.
> 
> Here is the config :
> 
> /etc/sysconfig/network-scripts/ifcfg-eth0
> ONBOOT=yes
> DEVICE=eth0
> TYPE=Ethernet
> BOOTPROTO=none
> MASTER=bond0
> SLAVE=yes
>   /etc/sysconfig/network-scripts/ifcfg-eth1
> ONBOOT=yes
> DEVICE=eth1
> TYPE=Ethernet
> BOOTPROTO=none
> MASTER=bond0
> SLAVE=yes
> /etc/sysconfig/network-scripts/ifcfg-bond0
> ONBOOT=yes
> DEVICE=bond0
> TYPE=Ethernet
> BOOTPROTO=static
> IPADDR=134.158.24.132
> NETMASK=255.255.248.0
> BROADCAST=134.158.31.255
> /etc/modprobe.conf
> alias eth0 bnx2
> alias eth1 bnx2
> alias scsi_hostadapter megaraid_sas
> alias scsi_hostadapter1 ata_piix
> alias bond0 bonding
> options bonding miimon=100 mode=4

For this (on a 5.2 i386 box), I have:

options bond0 mode=1 miimon=100 use_carrier=1

and it's never failed.

I haven't yet used bonding on SL5.2 x86_64.

Regards,

Michael.

> Thanks
> 
> JM
> 
> -- 
> 
> Jean-michel BARBET| Tel: +33 (0)2 51 85 84 86
> Laboratoire SUBATECH Nantes France| Fax: +33 (0)2 51 85 84 79
> CNRS-IN2P3/Ecole des Mines/Universite | E-Mail: bar...@subatech.in2p3.fr
> 
--- End of Original Message ---


Apache "file -C -m magicfiles" errors

2009-05-03 Thread Michael Mansour
Hi,

I use SL5.2. I recently started getting errors in the Apache error_log that
go on for pages, like this:

Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.
Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.
Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.
Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.
Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.
Usage: file [-bcikLhnNsvz] [-f namefile] [-F separator] [-m magicfiles] file...
   file -C -m magicfiles
Try `file --help' for more information.

I have no idea where this error is coming from. I have tried analysing it for
hours, trying to find its source, but am no closer to a resolution.

The server hosts various websites for different clients.

Has anyone seen this error before? know how to fix it? can suggest more
trouble-shooting tips?

After my analysis, I believe the error is coming from the Apache
mod_mime_magic but am really not sure.

Thanks.

Michael.


Package a series of files into RPM

2009-05-04 Thread Michael Mansour
Hi,

I did this about 2 years ago when working elsewhere; however, I've forgotten
how I pulled it off, and now I need to do the work again.

I simply want to get a series of files and package them into an RPM. No
compiling, no building.

The way I did it before was simply tar.gz the files, put them into
/usr/src/redhat/SOURCES and then create my spec file which would package up
the tarball into RPM.

My attempts to re-create such a spec file have been unsuccessful.

Any ideas? or can someone supply a template for me?
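
For reference, the sort of no-build spec I have in mind looks roughly like
this (a sketch; the package name, version and install path are placeholders):

Name:      mytools
Version:   1.0
Release:   1
Summary:   Collection of local scripts
License:   GPL
Group:     Applications/System
Source0:   mytools-1.0.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-root
BuildArch: noarch

%description
Repackaged local files, nothing is compiled.

%prep
%setup -q

%build
# nothing to build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/opt/mytools
cp -a * $RPM_BUILD_ROOT/opt/mytools/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
/opt/mytools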

Thanks.

Michael.


Re: Package a series of files into RPM

2009-05-04 Thread Michael Mansour
Hi Fernando,

> Take a look at this spec file:
> 
> http://cosmos.astro.uson.mx/~favilac/downloads/IRAF/el5/SPECS/x11iraf.spec

Thanks, it looks like exactly what I'm looking for.

Michael.

>   Hope it helps.
> 
> --
> C.Dr. Fernando A. Avila Castro
> Responsable del Observatorio Astronomico
> del Centro Ecologico de Sonora
> http://www.astro.uson.mx/~favilac
--- End of Original Message ---


[Spam?BadBits] Re: [suggest] Possible bug in latest perl-DBD-SQLite package

2009-05-30 Thread Michael Mansour
Hi,

I thought this is relevant to post/forward here for SL developers as I'm in
the process of upgrading various SL4 servers to SL5, and will hit this problem
when/if I use the perl-DBD-SQLite supplied by RPMforge with SL5.

Regards,

Michael.

-- Forwarded Message ---
From: Kai Schaetzl 
To: sugg...@lists.rpmforge.net
Sent: Sat, 30 May 2009 13:52:52 +0200
Subject: Re: [suggest] Possible bug in latest perl-DBD-SQLite package

David Steinbrunner wrote on Thu, 28 May 2009 12:03:36 -0400:

> Is it possible for you to install the perl module from CPAN rather than
> rpmforge for testing purposes?  I have a feeling the issue is not the
> rpmforge package but rather the newer versions of the software itself.  If
> it is the DBD::SQLite software rather than the packaging, you could then
> submit this as a bug to the maintainers.

I just found the reason for this problem with a simple test script and a 
require.

Newer versions of DBD::SQLite need a DBI version of at least 1.57. The 
rpmforge package *does* include this requirement. However, CentOS includes 
DBI 1.52 and doesn't provide newer versions. As a good administrator I'm 
using yum priorities and thus the software is locked at the CentOS version. 
I would think that yum should not have upgraded perl-DBD-SQLite if it cannot 
fulfill the requirement of perl-DBI >= 1.57. I will follow this up on the 
CentOS list whether it has to be reported as a bug.
So, for anyone else using this module and priorities: you have to either 
stay on version 1.14 or exclude perl-DBI from the base and updates repos 
(which enables rpmforge to upgrade DBI to 1.607).

Kai

-- 
Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com

___
suggest mailing list
sugg...@lists.rpmforge.net
http://lists.rpmforge.net/mailman/listinfo/suggest
--- End of Forwarded Message ---


Missing file in perl-XML-SAX package

2009-06-20 Thread Michael Mansour
Hi,

After installing perl-LDAP, the other dependency packages were installed:

Jun 20 23:42:32 Installed: perl-XML-NamespaceSupport-1.09-1.2.1.noarch
Jun 20 23:42:32 Installed: perl-XML-SAX-0.14-5.noarch
Jun 20 23:42:33 Installed: 1:perl-LDAP-0.33-3.fc6.noarch

Since then, one of my perl packages has been doing strange things, and when
run it generates this error:

could not find ParserDetails.ini in /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX

Checking the package:

# rpm -ql perl-XML-SAX |grep ParserDetails
/usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini

but the file doesn't actually exist on the filesystem:

# ll /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini
ls: /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini: No such file
or directory

I've un-installed and re-installed, same problem. The RPM DB says it's there
when in fact it's not.

I'm using:

Scientific Linux SL release 5.3 (Boron)

Regards,

Michael.


Re: Missing file in perl-XML-SAX package - SOLVED

2009-06-20 Thread Michael Mansour
Hi,

> After installing perl-LDAP, the other dependency packages were installed:
> 
> Jun 20 23:42:32 Installed: perl-XML-NamespaceSupport-1.09-1.2.1.noarch
> Jun 20 23:42:32 Installed: perl-XML-SAX-0.14-5.noarch
> Jun 20 23:42:33 Installed: 1:perl-LDAP-0.33-3.fc6.noarch
> 
> Since then, one of my perl packages is doing strange things, and 
> generating an error when run with:
> 
> could not find ParserDetails.ini in /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX
> 
> Checking the package:
> 
> # rpm -ql perl-XML-SAX |grep ParserDetails
> /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini
> 
> but the file doesn't actually exist on the filesystem:
> 
> # ll /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini
> ls: /usr/lib/perl5/vendor_perl/5.8.8/XML/SAX/ParserDetails.ini: No 
> such file or directory
> 
> I've un-installed and re-installed, same problem. The RPM DB says 
> it's there when in fact it's not.

I found this:

http://perl-xml.sourceforge.net/faq/

which in this section:

3.18.   "could not find ParserDetails.ini"

Explains the following:

"
If you are packaging XML::SAX in an alternative distribution format (such as
RPM), your post-install script should check if ParserDetails.ini exists and if
it doesn't, run this command:

perl -MXML::SAX -e "XML::SAX->add_parser(q(XML::SAX::PurePerl))->save_parsers()"
  

Don't unconditionally run this command, or users who re-install XML::SAX may
find that any fast SAX parser they have installed will be replaced as the
default by the pure-Perl parser.
"

So this tells us that the post-install script in the RPM doesn't actually work
right?
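
Presumably the packaging fix would be a conditional %post scriptlet along
these lines (only a sketch; whether %{perl_vendorlib} is already defined
depends on the spec):

%post
if [ ! -f %{perl_vendorlib}/XML/SAX/ParserDetails.ini ]; then
    perl -MXML::SAX -e 'XML::SAX->add_parser(q(XML::SAX::PurePerl))->save_parsers()' || :
fi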

I ran that perl command above and the ParserDetails.ini file was then created
and the problem went away.

Should the issue with this RPM package be raised in bugzilla?

Regards,

Michael.

> I'm using:
> 
> Scientific Linux SL release 5.3 (Boron)
> 
> Regards,
> 
> Michael.
--- End of Original Message ---


Bind 9 DoS vulnerability

2009-07-28 Thread Michael Mansour
Is this something real and to be concerned about?

https://www.isc.org/node/474

Michael.


[Spam?BadBits] Re: Bind 9 DoS vulnerability

2009-07-29 Thread Michael Mansour
> Jan Kundrát wrote:
> > Michael Mansour wrote:
> >> Is this something real and to be concerned about?
> > 
> > Yes, it crashed our named instance running on a freshly updated SL5.2.
> > For reference, exploit is available from the Debian bugtracker [1]. Note
> > that the iptables snippet won't work on SL because it doesn't have the
> > u32 iptables module.
> > 
> > Cheers,
> > -jkt
> > 
> > [1] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=538975
> 
> For those interested, an upstream bug together with a patch is 
> available here:

Troy/Connie, how long before these will make it through to SL (fastbugs)?

I personally need both SL4 and SL5 ones.

Thanks.

Michael.


Re: Fwd: CentOS Project Administrator Goes AWOL

2009-07-30 Thread Michael Mansour
Hi,

> This affects us.  Imagine that all the CentOS users show up to use
> Scientific Linux.  Imagine all their maintainers and developers show
> up, too.

I personally don't think that's a bad thing, especially if it allows SL to
open more of its infrastructure to 3rd party "extensions", like the CentOS
team have done with CentOS Plus etc.

For example, EL5 is stuck in the php 5.1.6 and MySQL 5.0.45 days, and when you
want applications which rely on at least php 5.2.x (and so many do) you have
to go to 3rd party repos which may be incompatible with other repos used in
the environment.

It's really like opening a can of worms, and IMHO CentOS got the mix right by
having their own team of developers provide the packages which TUV doesn't.

In case there's a question, I use SL exclusively for over 30 Linux servers,
never used CentOS.

Regards,

Michael.

> Keith
> 
>  forwarded message ---
> 
> (http://linux.slashdot.org/story/09/07/30/130249/CentOS-Project-
> Administrator-Goes-AWOL):
> 
> Lance Davis, the main project administrator for CentOS, a popular 
> free 'rebuild' of Red Hat's Enterprise Linux, appears to have gone 
> AWOL. In an open letter* from his fellow CentOS developers, they 
> describe the precarious situation the project has been put in. There 
> have been attempts to contact him for some time now, as he's the 
> sole administrator for the centos.org domain, the IRC channels, and 
> apparently, CentOS funds. One can only hope that Lance gets in 
> contact with them and gets things sorted out.
> 
> * Open Letter (http://www.centos.org/):
> 
> July 30, 2009 04:39 UTC
> 
> This is an Open Letter to Lance Davis from fellow CentOS Developers
> 
> It is regrettable that we are forced to send this letter but we are
> left with no other options. For some time now we have been attempting
> to resolve these problems:
> 
> You seem to have crawled into a hole ... and this is not acceptable.
> 
> You have long promised a statement of CentOS project funds; to this
> date this has not appeared.
> 
> You hold sole control of the centos.org domain with no deputy; this 
> is not proper.
> 
> You have, it seems, sole 'Founders' rights in the IRC channels with 
> no deputy ; this is not proper.
> 
> When I (Russ) try to call the phone numbers for UK Linux, and for you
> individually, I get a telco intercept 'Lines are temporarily busy' 
> for the last two weeks. Finally yesterday, a voicemail in your voice 
> picked up, and I left a message urgently requesting a reply. 
> Karanbir also reports calling and leaving messages without your reply.
> 
> Please do not kill CentOS through your fear of shared management of the
> project.
> 
> Clearly the project dies if all the developers walk away.
> 
> Please contact me, or any other signer of this letter at once, to
> arrange for the required information to keep the project alive at the
> 'centos.org' domain.
> 
> Sincerely,
> 
> Russ Herrold
> Ralph Angenendt
> Karanbir Singh
> Jim Perrin
> Donavan Nelson
> Tim Verhoeven
> Tru Huynh
> Johnny Hughes
> 
> -- 
> Sincerely,
> 
> Michael Lauzon
> --
> The Toronto Linux Users Group.  Meetings: http://gtalug.org/
> TLUG requests: Linux topics, No HTML, wrap text below 80 columns
> How to UNSUBSCRIBE: http://gtalug.org/wiki/Mailing_lists
> 
> - end forwarded message ---
> 
> -- 
> Keith Lofstrom  kei...@keithl.com Voice (503)-520-
> 1993 KLIC --- Keith Lofstrom Integrated Circuits --- "Your Ideas in Silicon"
> Design Contracting in Bipolar and CMOS - Analog, Digital, and Scan ICs
--- End of Original Message ---


XEN guest that's never been able to come up again

2009-07-30 Thread Michael Mansour
Hi,

I have a XEN guest which fails to come up.

It's no drama at this point since I had a backup of the guest; however, I'd
like to understand why it failed and has never been able to come up since.

If possible, I'd also like to understand how to trouble-shoot such problems.

Basically, the XEN guest had a problem where the websites it was hosting
stopped responding. Getting in via ssh I noticed a bunch of "read only
filesystem" errors.

I rebooted it, filesystems checked ok and journals committed, but it sits at
this point on the console:

SELinux:  Disabled at runtime.
SELinux:  Unregistering netfilter hooks
type=1404 audit(1248725516.191:2): selinux=0 auid=4294967295 ses=4294967295

and never brings up the login prompt.

I can issue:

xm shutdown 

and it does shut itself down.

It's an SL 5.3 guest (32bit) running on an SL 5.3 host (32bit).

At the point above, no networking works so the guest is effectively cactus.

I am able to mount the img using loopback, kpartx etc. and the filesystem is
intact, but checking the logs I can't see anything obvious as to why it never
comes up. I've even tried booting the guest into its older kernel.
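
(For the record, mounting the image was along these lines; the paths are
examples:)

# losetup /dev/loop1 /var/lib/xen/images/guest.img
# kpartx -av /dev/loop1
# mount /dev/mapper/loop1p1 /mnt/guest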

This only started happening after I applied the following patches:

Jul 25 06:59:30 server yum: Updated: xulrunner-1.9.0.12-1.el5_3.i386
Jul 25 06:59:32 server yum: Updated:
tomcat5-servlet-2.4-api-5.5.23-0jpp.7.el5_3.2.i386
Jul 25 06:59:48 server yum: Updated: libtiff-3.8.2-7.el5_3.4.i386
Jul 25 06:59:59 server yum: Updated: firefox-3.0.12-1.el5_3.i386
Jul 25 07:00:03 server yum: Updated:
tomcat5-jsp-2.0-api-5.5.23-0jpp.7.el5_3.2.i386

Prior to that reboots would work fine.

I have several XEN guests under SL 5.3, and after the experience above I'm now
skeptical about relying on XEN outside of test environments.

What's an effective way to trouble-shoot the above type of problem to try and
understand what is going wrong?

Thanks.

Michael.

