Re: RH 7.2 Install Issues

2002-04-23 Thread Romney White

Chet:

You're right - RedHat built their kernel for stand-alone installation
only, requiring the use of the HMC integrated console, whereas VIF is
built to provide Linux images with a 3215 console. You need a kernel
with 3215 support to install under VIF.

Romney

On Mon, 22 Apr 2002 07:13:36 -0700 Chet Norris said:
clip
Any time you see a posting from Alan Altmark or
Romney White about some aspect of VM or VIF, pay particular attention,
as they're intimately familiar with the product from the inside out.

Sorry Romney, I sent the reply to a prior note before reading yours.
I've been talking to IBM, Endicott, and was echoing their conversation.

 Rob van der Heij wrote:
 If you don't want to recompile the driver yourself first:

   echo '$TCPIP' > /proc/net/iucv/iucv0/username

I'm not sure if I can do what you said. Not only can I not telnet into
the image, but I also cannot communicate with the image through any
console facility. The way I understand it, RedHat assumes that the
install is going to be performed from an HMC console, and VIF defines
the console as a 3215 device for the image. There is some mismatch in
device types that prevents me from entering any commands when logged on
through VIF.

=
Chet Norris
Marriott International,Inc.

__
Do You Yahoo!?
Yahoo! Games - play chess, backgammon, pool and more
http://games.yahoo.com/



gcc compiler on Linux/390

2002-04-23 Thread Reinald Verheij

Hi,

On Win32 and on other Unixes, some compilers can optimize instruction
scheduling for the Pentium pipeline architecture, etc.
Does anything like this exist in GCC, to optimize the instruction stream for
G4, G5 or G6 processors?
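For what it's worth, a sketch of how such tuning is usually requested from gcc. The S/390-specific -march/-mtune values shown here are assumptions (they arrived with later gcc releases), so check 'gcc --help' or the gcc info pages for the values your compiler actually accepts:

```shell
# Illustrative only: g5/g6 as -march/-mtune arguments are assumptions;
# verify against your gcc release before using them in a build.
CFLAGS="-O2 -march=g5 -mtune=g6"
echo "gcc $CFLAGS -c foo.c"
```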

Reinald



Re: MTBF

2002-04-23 Thread Thomas David Rivers

David Boyes [EMAIL PROTECTED] wrote:

 On Mon, Apr 22, 2002 at 04:26:39PM -0500, Holly, Jason wrote:
  has anyone established mean-time-between-failure numbers for linux instances
  running under vm?  anything general would be good information.  i'm curious
  about disk, memory or other system failures that compromise the vm
  instances.

 The record for us is about 9 months for a single Linux image. Average
 is about 3-4 months between reboots, depending on what's running in
 them -- things that suck up lots of memory like Websphere tend to
 shorten the lifespan of the machine by fragmenting storage. Machines
 that get a lot of interactive use tend to collect a few zombies after
 a while, so periodic reboots become a reasonably good idea.

 I have to say that I'm a little surprised at that recommendation.
 Let me see if I understand... you're saying that you need to
 reboot your Linux images about every 9 months because the Linux
 virtual memory manager has issues?  I hadn't heard that before...

 Seems like I've heard lots of tales of people with Linux up
 much longer than 9 months... doing web services, etc...  do you
 think your 9 month figure is a function of the 390 version
 of Linux, or Linux in general?  That is, would you recommend
 rebooting a PC version of Linux on the same interval (given
 the same workload?)

 We don't reboot our machines unless there is some configuration
 change... but, we don't do highly interactive things on them,
 just product builds every now-and-then.


 Also - just to ask - what about the BSD variants - would you
 also recommend 9 months for them?  It's my understanding that
 the virtual memory manager in FreeBSD is better than Linux, for
 example, and perhaps might not suffer the fragmentation issues?
 [Of course, that may be a biased understanding... I have no
 evidence to back that up.]



 For the VM side, I know of sites with uptimes measured in years;

 Yes - this is much better! :-)  And, I would expect it from
 Linux (even today's Linux) too...

 Could you relate more about this 9 month figure?  Do you have
 specific instances where it was required, that you can share
 of course...

 - Thanks! -
- Dave Rivers -

--
[EMAIL PROTECTED]Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com



Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01: 00

2002-04-23 Thread Jill Grine

Thanks for your response, Rob, but the website you have provided assumes
OS/390
is running on the machine on which Linux is being installed.  I am
attempting to run this under VM, and don't have OS/390 installed on this box.

I hope all will forgive me for the following, it is the only way to convey
the
whole picture.  (lin is a REXX that deletes any reader files, then reloads
the
LINUX files and executes.)

parmfile:  ramdisk_size=32768 dasd=200-202 root=/dev/ram0 ro ipldelay=30s

CTCA 0E68 COUPLED TO TCPIP 0E69

CTCA 0E69 COUPLED TO TCPIP 0E68

Ready; T=0.48/0.56 09:11:54

lin

003 FILES PURGED

RDR FILE 0103 SENT FROM LINUX1   PUN WAS 0103 RECS 019K CPY  001 A NOHOLD
NOKEEP
RDR FILE 0104 SENT FROM LINUX1   PUN WAS 0104 RECS 0001 CPY  001 A NOHOLD
NOKEEP
RDR FILE 0105 SENT FROM LINUX1   PUN WAS 0105 RECS 123K CPY  001 A NOHOLD
NOKEEP
003 FILES CHANGED

Linux version 2.2.16 ([EMAIL PROTECTED]) (gcc version 2.95.2 19991024
(releas
e)) #1 SMP Fri Nov 3 09:38:59 GMT 2000

Command line is:

We are running under VM

This machine has no IEEE fpu

Detected device 0E68 on subchannel  - PIM = 80, PAM = 80, POM = FF

Detected device 0E69 on subchannel 0001 - PIM = 80, PAM = 80, POM = FF

Detected device 0191 on subchannel 0002 - PIM = 80, PAM = 80, POM = FF

Detected device 0200 on subchannel 0003 - PIM = 80, PAM = 80, POM = FF

Detected device 0201 on subchannel 0004 - PIM = 80, PAM = 80, POM = FF

Detected device 0202 on subchannel 0005 - PIM = 80, PAM = 80, POM = FF
 (etc)
Detected device 0120 on subchannel 000E - PIM = 80, PAM = 80, POM = FF
Highest subchannel number detected (hex) : 000E

SenseID : device 0E68 reports: Dev Type/Mod = 3088/08

SenseID : device 0E69 reports: Dev Type/Mod = 3088/08

SenseID : device 0191 reports: CU  Type/Mod = 3990/C2, Dev Type/Mod =
3390/0A
SenseID : device 0200 reports: CU  Type/Mod = 3990/C2, Dev Type/Mod =
3390/0A
SenseID : device 0201 reports: CU  Type/Mod = 3990/C2, Dev Type/Mod =
3390/0A
SenseID : device 0202 reports: CU  Type/Mod = 3990/C2, Dev Type/Mod =
3390/0A
SenseID : device 0009 reports: Dev Type/Mod = 3215/00
  (etc.)
SenseID : device 0120 reports: CU  Type/Mod = 3880/01, Dev Type/Mod =
3370/00
Calibrating delay loop... 15.10 BogoMIPS
SenseID : device 019D reports: CU  Type/Mod = 3880/01, Dev Type/Mod =
3370/00
SenseID : device 0198 reports: CU  Type/Mod = 3880/01, Dev Type/Mod =
3370/00
SenseID : device 0120 reports: CU  Type/Mod = 3880/01, Dev Type/Mod =
3370/00
Calibrating delay loop... 15.26 BogoMIPS

Memory: 257080k/262144k available (1088k kernel code, 0k reserved, 3976k
data, 0
k init)

Dentry hash table entries: 32768 (order 6, 256k)

Buffer cache hash table entries: 262144 (order 8, 1024k)

Page cache hash table entries: 65536 (order 6, 256k)

debug: 16 areas reserved for debugging information

debug: reserved 4 areas of 4 pages for debugging ccwcache

VFS: Diskquotas version dquot_6.4.0 initialized

POSIX conformance testing by UNIFIX

Detected 1 CPU's

Boot cpu address  0

cpu 0 phys_idx=0 vers=FF ident=0D0607 machine=7490 unused=

Linux NET4.0 for Linux 2.2

Based upon Swansea University Computer Society NET3.039

NET4: Unix domain sockets 1.0 for Linux NET4.0.

NET4: Linux TCP/IP 1.0 for NET4.0

IP Protocols: ICMP, UDP, TCP, IGMP

TCP: Hash tables configured (ehash 262144 bhash 65536)
Linux IP multicast router 0.06 plus PIM-SM
Starting kswapd v 1.5
pty: 256 Unix98 ptys configured
RAM disk driver initialized:  16 RAM disks of 32768K size
loop: registered device at major 7
LVM version 0.8i  by Heinz Mauelshagen  (02/10/1999)
lvm -- Driver successfully initialized
md driver 0.36.6 MAX_MD_DEV=4, MAX_REAL=8
Partition check:
Kernel panic: VFS: Unable to mount root fs on 01:00
HCPGIR450W CP entered; disabled wait PSW 000A 8005684C



-Original Message-
From: Rob van der Heij [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 22, 2002 12:06 PM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


I followed SuSE's instructions on creating an IPL tape and used that for
the
IPL of the LPAR.  This worked fine - I did not get the Kernel panic
message this time.  The only reason why I chose the DASD IPL volume (using
instructions that I got from another Manual) was because it was easier to
IPL from DASD than from tape.

Should work though. Many reported that it works for them. Maybe
check that you had the latest code from
   http://www.rvdheij.com/linuxipl.html
If you get as far as this message it looks like Linux did pick up
your parm line but failed to load the initrd. Any messages in that
area? Do you get the message about the ramdisk found?

Rob



RH7.2 (vs other distros) Install Issues

2002-04-23 Thread Carey Schug

Is there a comparison chart on the various Linux/390 distributions
anywhere, showing such feature comparisons?  Possibly also listing what
platforms they are known to have been sucessfully installed upon (e.g.
under VM/VIF vs bare iron, and on P/390, 390 software emulation, classes of
real 390 hardware, etc)?  Maybe compiler issues? (If memory serves, at
least one has a version that will run on older hardware missing some of the
newer instructions, if the correct C compiler is used.


Open your home, open your heart, become a foster parent!


Romney White [EMAIL PROTECTED]@VM.MARIST.EDU on 04/23/2002 05:05:09

Please respond to Linux on 390 Port [EMAIL PROTECTED]

Sent by:Linux on 390 Port [EMAIL PROTECTED]


To:[EMAIL PROTECTED]
cc:
Subject:Re: [LINUX-390] RH 7.2  Install Issues


Chet:

You're right - RedHat built their kernel for stand-alone installation
only, requiring the use of the HMC integrated console, whereas VIF is
built to provide Linux images with a 3215 console. You need a kernel
with 3215 support to install under VIF.

Romney



Re: Distribution pricing comparisons

2002-04-23 Thread Nix, Robert P.

I called all three players the same day, talked to people at each that day,
and had pricing from all three within a week. I'm not sure what questions I
asked that were different than yours, or who I got that you didn't, but my
experience with all three has been very good. I even went back around with
the educational discount? question and got timely responses from all
three.


Robert P. Nixinternet: [EMAIL PROTECTED]
Mayo Clinic  phone: 507-284-0844
200 1st St. SW page: 507-255-3450
Rochester, MN 55905

In theory, theory and practice are the same,
 but in practice, theory and practice are different.


 -Original Message-
 From: James Melin [SMTP:[EMAIL PROTECTED]]
 Sent: Monday, April 22, 2002 8:27 AM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Distribution pricing comparisons

 I have to take exception to that statement, as it has been my experience
 that they send you a "we'll get right back to you" e-mail and then do not
 follow through. I've asked for pricing and support offerings three times
 now and I've been rebuffed three times.

 I'm having my IBM people pursue it for me this time.




LINUX on S390 - TAPE IPL Problem

2002-04-23 Thread Rivers, James E.

I am trying to IPL LINUX from tape on a S390/Multiprise 3000; I am receiving the
following error:

RAMDISK: Couldn't find valid RAM DISK image starting at 0.

These are the three files I copied to tape for the IPL.

LINUX390.IMAGE.TXT 

LINUX390.INITRD.TXT 

LINUX390.PARM.LINE 

Can you help?



Re: LINUX on S390 - TAPE IPL Problem

2002-04-23 Thread Peter Webb, Toronto Transit Commission

If you copied the three files in that order, then that is your problem. The
IMAGE file has to be first, followed by the PARM, and the INITRD last.
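For reference, a sketch of the copy steps in JCL, assuming the files live in MVS datasets (names taken from the post) and are written to an unlabeled tape. The job card, volser (LINTAP here is invented), and DCB attributes are omitted or assumed; take the exact values from your distribution's install documentation:

```jcl
//* Sketch only: file 1 = kernel image, file 2 = parm line, file 3 = initrd
//COPY1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=LINUX390.IMAGE.TXT,DISP=SHR
//SYSUT2   DD UNIT=TAPE,VOL=SER=LINTAP,LABEL=(1,NL),DISP=(NEW,KEEP)
//COPY2    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=LINUX390.PARM.LINE,DISP=SHR
//SYSUT2   DD UNIT=TAPE,VOL=SER=LINTAP,LABEL=(2,NL),DISP=(NEW,KEEP)
//COPY3    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=LINUX390.INITRD.TXT,DISP=SHR
//SYSUT2   DD UNIT=TAPE,VOL=SER=LINTAP,LABEL=(3,NL),DISP=(NEW,KEEP)
```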

 -Original Message-
 From: Rivers, James E. [SMTP:[EMAIL PROTECTED]]
 Sent: Tuesday, April 23, 2002 9:35 AM
 To:   [EMAIL PROTECTED]
 Subject:  LINUX on S390 - TAPE IPL Problem

 I am trying to IPL LINUX from tape on a S390/Multiprise 3000: I am
 receiving the following error -,

 RAMDISK: Couldn't find valid RAM DISK image starting at 0.

 These are the three files I copied to tape for the IPL.

 LINUX390.IMAGE.TXT

 LINUX390.INITRD.TXT

 LINUX390.PARM.LINE

 Can you help?



Anyone running Linux on zSeries in a DMZ?

2002-04-23 Thread Alan Altmark

If anyone is running their Linux for zSeries or Linux for S/390 in a DMZ,
whether app server or firewall, please send me a brief note describing
your use.  The information you provide will not be made public without
your consent.

Thanks.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development



Re: Sendmail Performance

2002-04-23 Thread Post, Mark K

Moloko,

You still have not answered the question of just how busy your S/390 CPU is
when the load average is at 10.  (Not 10% as you state.)  A load average
does not tell you anything about how busy the CPU is, only how many
processes are ready to run.  Try bumping your QueueLA and RefuseLA on
Linux/390 to something very large, say 99, and see what happens.  But
really, look at and report %CPU busy, not load average; load average by
itself doesn't tell us anything.
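The suggested change, in the sendmail.cf option syntax the original post uses (the 99 values are illustrative):

```
O QueueLA=99
O RefuseLA=99
```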

Mark Post

-Original Message-
From: Moloko Monyepao [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 9:35 AM
To: [EMAIL PROTECTED]
Subject: Sendmail Performance


I have sendmail 8.11.6-3 installed on Redhat 7.2 (server installation) on
S390(Lpar) with an IFL dedicated to this Lpar. We made the  O QueueLA=10 and
the O RefuseLA=12 which is the same setup as on my Intel machine. When using
top to check the load the following is what I get. The O
MaxDaemonChildren=100 on S390 and 300 on Intel.

Intel Machine = The Load average does not go over 10% when setup for both
incoming and outgoing mail.

S390 Machine = The Load average goes to more than 10% and it start rejecting
connections if we set it up for both incoming and outgoing mail and it gets
up to about 6 to 8% if we only set it up for outgoing mail which is not a
lot.
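To illustrate the distinction Mark draws: load average is a count of runnable processes, not a percentage. A minimal sketch parsing a /proc/loadavg-style line (the sample values are invented, not a measurement):

```python
# /proc/loadavg reports runnable-process counts, not CPU percentages.
sample = "10.02 8.51 6.00 3/120 4567"  # invented sample line

# First three fields: 1-, 5-, and 15-minute load averages.
one, five, fifteen = (float(f) for f in sample.split()[:3])

# Fourth field: currently runnable processes / total processes.
runnable, total = (int(n) for n in sample.split()[3].split("/"))

# A load average of 10 means ~10 processes competing to run; that can
# happen at low *or* high CPU utilization -- it is not "10% busy".
print(one, runnable, total)  # -> 10.02 3 120
```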

Please assist
Moloko



Re: Distribution pricing comparisons

2002-04-23 Thread James Melin

That's the difference. You called them. I e-mailed them. Since this linux
thing here is just an experiment at this point, I have to avoid long
distance charges and a rabid accounting department.

The nature of the beast.




From: Nix, Robert P. <Nix.Robert@mayo.edu>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
Date: 04/23/2002 07:38 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: Distribution pricing comparisons




I called all three players the same day, talked to people at each that day,
and had pricing from all three within a week. I'm not sure what questions I
asked that were different than yours, or who I got that you didn't, but my
experience with all three has been very good. I even went back around with
the educational discount? question and got timely responses from all
three.







How do we get iostat working in SLES7?

2002-04-23 Thread John P Taylor

We are having some problems at the moment getting iostat to output data
from the partitions defined to the system (e.g. iostat -x /dev/dasdb).
This works OK on Redhat 7.2 and the reason for this seems to be that
/proc/partitions contains rather more information on RedHat than SLES7.
We tried applying linux-2.4.0-sard.patch to SLES7:

--- linux/drivers/block/ll_rw_blk.c.~1~ Mon Jul 17 14:53:34 2000
--- linux/drivers/scsi/scsi_lib.c.~1~   Mon Jul 17 14:53:30 2000
--- linux/fs/partitions/check.c.~1~ Mon Jul 17 14:53:31 2000
--- linux/include/linux/blkdev.h.~1~Mon Jul 17 14:53:38 2000
--- linux/include/linux/genhd.h.~1~ Mon Jul 17 14:53:28 2000
--- linux/drivers/block/genhd.c.origFri Sep  7 12:05:30 2001

linux/drivers/block/ll_rw_blk.c.~1~ patches cleanly but the others do
not. Are we going about this the wrong way? Is there an alternative
utility that provides the same level of detail? Any suggestions welcome!
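As an illustration of what iostat is missing, a sketch that parses one sard-extended /proc/partitions data line. The column layout (major, minor, #blocks, name, then per-device I/O counters) is an assumption based on the sard patch and the numbers are invented; verify both against your kernel:

```python
# One data line from a sard-patched /proc/partitions (invented values).
line = "94 0 2403360 dasdb 113 202 1280 45 77 30 856 60 0 90 105"

fields = line.split()
major, minor, nblocks = (int(x) for x in fields[:3])  # device numbers, size
name = fields[3]                                      # device name
stats = [int(x) for x in fields[4:]]                  # I/O counters iostat reads

# An unpatched kernel stops after the name field, which is why
# 'iostat -x' has nothing to report there.
print(name, nblocks, len(stats))  # -> dasdb 2403360 11
```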

John P Taylor
([EMAIL PROTECTED])



Re: RH7.2 (vs other distros) Install Issues

2002-04-23 Thread Post, Mark K

During the Distributions Redbook residency, the team installed SuSE, Red
Hat, Turbolinux, and Millenux in an LPAR, under VIF, and under VM.  They all
worked (at that time).  Since then, there have been numerous changes, and I
am not aware of anyone that has gone through the same exercise since.
Chet's particular problem is due to a bug that was introduced to the IUCV
driver after the Redbook testing was done.  No chart is ever going to be
able to keep that kind of information current.

Mark Post

-Original Message-
From: Carey Schug [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 8:19 AM
To: [EMAIL PROTECTED]
Subject: RH7.2 (vs other distros) Install Issues


Is there a comparison chart on the various Linux/390 distributions
anywhere, showing such feature comparisons?  Possibly also listing what
platforms they are known to have been sucessfully installed upon (e.g.
under VM/VIF vs bare iron, and on P/390, 390 software emulation, classes of
real 390 hardware, etc)?  Maybe compiler issues? (If memory serves, at
least one has a version that will run on older hardware missing some of the
newer instructions, if the correct C compiler is used.


Open your home, open your heart, become a foster parent!


Romney White [EMAIL PROTECTED]@VM.MARIST.EDU on 04/23/2002 05:05:09

Please respond to Linux on 390 Port [EMAIL PROTECTED]

Sent by:Linux on 390 Port [EMAIL PROTECTED]


To:[EMAIL PROTECTED]
cc:
Subject:Re: [LINUX-390] RH 7.2  Install Issues


Chet:

You're right - RedHat built their kernel for stand-alone installation
only, requiring the use of the HMC integrated console, whereas VIF is
built to provide Linux images with a 3215 console. You need a kernel
with 3215 support to install under VIF.

Romney



Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01: 00

2002-04-23 Thread Post, Mark K

Jill,

I don't know why, but the kernel is not seeing your parmline:
Command line is:
We are running under VM

This is why you're getting the kernel panic.  Could you post the contents of
your lin exec, and where you got your kernel from?
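An aside: on an image that does come up, one quick way to see the parameter line the kernel actually received is:

```shell
# Shows the boot parameters as the kernel saw them; an empty line
# means the parm file never reached the kernel.
cat /proc/cmdline
```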

Mark Post

-Original Message-
From: Jill Grine [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 8:29 AM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


Thanks for your response, Rob, but the website you have provided assumes
OS/390
is running on the machine on which Linux is being installed.  I am
attempting to run this under VM, and don't have OS/390 installed on this box.

I hope all will forgive me for the following, it is the only way to convey
the
whole picture.  (lin is a REXX that deletes any reader files, then reloads
the
LINUX files and executes.)

parmfile:  ramdisk_size=32768 dasd=200-202 root=/dev/ram0 ro ipldelay=30s

 (console and boot log clipped - identical to the original message earlier
 in this digest, ending in the same kernel panic and disabled wait)



-Original Message-
From: Rob van der Heij [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 22, 2002 12:06 PM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


I followed SuSE's instructions on creating an IPL tape and used that for
the
IPL of the LPAR.  This worked fine - I did not get the Kernel panic
message this time.  The only reason why I chose the DASD IPL volume (using
instructions that I got from another Manual) was because it was easier to
IPL from DASD than from tape.

Should work though. Many reported that it works for them. Maybe
check that you had the latest code from
   http://www.rvdheij.com/linuxipl.html
If you get as far as this message it looks like Linux did pick up
your parm line 

Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01: 00

2002-04-23 Thread Jill Grine

Hi Mark,

I think I mentioned in my first email that the missing command line was the
problem, but I don't remember.  Anyway, here's the exec (very simple):
/*  REXX - punch the Linux IPL files to the reader and IPL from it */
'CLOSE RDR'                     /* close any open reader file       */
'PURGE RDR ALL'                 /* empty the reader                 */
'SPOOL PUNCH * RDR'             /* route punch output to own reader */
'PUNCH SUSE IMAGE A (NOH)'      /* kernel image, no header card     */
'PUNCH SUSE PARM A (NOH)'       /* parameter line                   */
'PUNCH SUSE INITRD A (NOH)'     /* initial ramdisk                  */
'CHANGE RDR ALL KEEP NOHOLD'    /* keep the files across re-IPLs    */
'IPL 00C CLEAR'                 /* IPL from the reader (device 00C) */

Basically empties the reader, loads 3 files to the reader from the LINUX1
minidisk, then IPLs
from the reader.

I downloaded the install from ftp.suse.com/pub/suse/s390/suse-us-390/images.
Bad choice?

-Original Message-
From: Post, Mark K [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 11:20 AM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


Jill,

I don't know why, but the kernel is not seeing your parmline:
Command line is:
We are running under VM

This is why you're getting the kernel panic.  Could you post the contents of
your lin exec, and where you got your kernel from?

Mark Post



Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01: 00

2002-04-23 Thread Jill Grine

Hey Mark,

Thanks so much for the new URL.  I will certainly give it a shot and let you

know.  This really isn't making sense, and unfortunately I don't get to
spend
solid chunks of time to concentrate on it.  It probably won't be until
tomorrow...
Gotta work on something that just came up.  Don't we love the life of a sys
prog?
(Actually I really do...)

Hope your day is as pretty as Georgia is today!  (That sounded really
odd...in too
much of a hurry to be literate, but you get the gist.)


-Original Message-
From: Post, Mark K [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 11:57 AM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


Jill,

No, not a bad choice, but there is an updated version available at
ftp://ftp.suse.com/pub/suse/s390/kernel/2.2.16-iucv2/images/

I don't know that using the files there will fix your problem, but it won't
hurt.

Also, you commented that Rob's recipe for writing the IPL files to disk was
based on the premise of having OS/390 available.  I'm pretty sure that VM
has the equivalent of ICKDSF available, so the method should be adaptable to
that environment.  (Of course, I don't think Rob or I ever thought it would
be necessary under VM.)

Mark Post

-Original Message-
From: Jill Grine [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 11:41 AM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


Hi Mark,

I think I mentioned in my first email that the command missing line was the
problem, but don't remember.  Anyway here's the exec (very simple):
/*  REXX  */
'CLOSE RDR'
'PURGE RDR ALL'
'SPOOL PUNCH * RDR'
'PUNCH SUSE IMAGE A (NOH)'
'PUNCH SUSE PARM A (NOH)'
'PUNCH SUSE INITRD A (NOH)'
'CHANGE RDR ALL KEEP NOHOLD'
'IPL 00C CLEAR'

Basically empties the reader, loads 3 files to the reader from the LINUX1
minidisk, then IPLs
from the reader.

I downloaded the install from ftp.suse.com/pub/suse/s390/suse-us-390/images.
Bad choice?

-Original Message-
From: Post, Mark K [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 11:20 AM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


Jill,

I don't know why, but the kernel is not seeing your parmline:
Command line is:
We are running under VM

This is why you're getting the kernel panic.  Could you post the contents of
your lin exec, and where you got your kernel from?

Mark Post



Enhanced CLAW support from UTS Global

2002-04-23 Thread Doug DeMers

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

  April 23, 2002

  UTS Global, LLC is pleased to announce commercial grade
  support for the Cisco 7500 Series Router on Linux/390
  (Linux on the mainframe). The commercial grade CLAW
  driver supports both CLAW packing protocol and CLAW
  protocol.

  The CLAW (Common Link Access to Workstation) channel
  protocol is used in TCP/IP environments to transport
  data between the mainframe and the Cisco Channel
  Interface Processor (CIP).

  CLAW packing is a Cisco proprietary enhancement to the
  CLAW protocol which enables the transport of multiple IP
  packets in a single channel operation. Substantial TCP/IP
  throughput improvement has been demonstrated.

  The UTS Global Enhanced CLAW driver transparently
  supports both PACKED and TCPIP (normal CLAW) interfaces.
  The driver first attempts a PACKED connection with the
  Cisco router, but will automatically attempt connection
  using the normal (TCPIP) CLAW protocol if the PACKED
  connection fails.

  Minor router configuration changes are required to enable
  the CLAW packing feature once the appropriate changes to
  the mainframe Linux kernel (CLAW driver) and host
  configurations are implemented.

  UTS Global has previously released a Cisco CLAW driver
  for Linux/390 under the UTS Global Public License.

  The latter product will continue to be available for
  non-commercial usage, pursuant to the terms and
  conditions of the UTS Global Public License.

  For more information regarding pricing, support and
  availability of the commercial CLAW packing driver,
  please contact:

  Douglas DeMers
  (408)496-4230

  or send email to ([EMAIL PROTECTED])
  Web address: www.utsglobal.com

  UTS Global is a trademark of UTS Global, LLC. Linux is a
  registered trademark of Linus Torvalds. All other
  registered trademarks belong to their respective holders.

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

BTW, we are looking for one or two sites with high network traffic
(especially small packets) through CIP-connected Cisco 7XXX routers to
thoroughly test the throughput improvements of this new driver.  Any
takers?  Please email me directly at [EMAIL PROTECTED]

Thank you!
__
Douglas DeMers (408-496-4230)
UTS Global, LLC [EMAIL PROTECTED]



backup software

2002-04-23 Thread Noll, Ralph

What does anyone use for backup software for
Linux/390...

Do you use TSM under VM,

or a Microsoft product to back up Linux?


thanks

Ralph Noll
Systems Programmer
City of Little Rock
Phone (501) 371-4884
Fax   (501) 371-4616
Cell  (501) 590-8626
mailto:[EMAIL PROTECTED]


   \\\|///
 \\\ ~ ~ ///
  (  @ @  )
 ===oOOo=(_)=oOOo===



Re: backup software

2002-04-23 Thread Noll, Ralph

Their product requires OS/390, MVS, or z/OS.e.

 -Original Message-
 From: Froberg, David C [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 23, 2002 12:52 PM
 To: [EMAIL PROTECTED]
 Subject: Re: backup software


 Ralph,

 You should check in to Innovation's Upstream product.
 Upstream is great,
 works nicely and easily with existing mainframe infrastructure, and I
 believe it has a Linux agent.

 David Froberg
 No warrantee is implied or explicitly stated.
 Do not bend, fold, spindle, or mutilate.

  -Original Message-
 From:   Noll, Ralph [mailto:[EMAIL PROTECTED]]
 Sent:   Tuesday, April 23, 2002 1:46 PM
 To: [EMAIL PROTECTED]
 Subject:backup software

 what does anyone use for backup software for
 linux/390...

 do you use TSM under VM

 or a Microsoft product to backup Linux


 thanks

 Ralph Noll
 Systems Programmer
 City of Little Rock
 Phone (501) 371-4884
 Fax   (501) 371-4616
 Cell  (501) 590-8626
 mailto:[EMAIL PROTECTED]


\\\|///
  \\\ ~ ~ ///
   (  @ @  )
  ===oOOo=(_)=oOOo===




Re: backup software

2002-04-23 Thread Froberg, David C

True.

I suspect a number of Linux/390 shops have MVS, OS/390, or z/OS with a variety
of DASD or tape pools that can be exploited for backup purposes.

 -Original Message-
From:   Noll, Ralph [mailto:[EMAIL PROTECTED]]
Sent:   Tuesday, April 23, 2002 2:13 PM
To: [EMAIL PROTECTED]
Subject:Re: backup software

their product requires os/390,mvs or z/os.e

 -Original Message-
 From: Froberg, David C [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 23, 2002 12:52 PM
 To: [EMAIL PROTECTED]
 Subject: Re: backup software


 Ralph,

 You should check in to Innovation's Upstream product.
 Upstream is great,
 works nicely and easily with existing mainframe infrastructure, and I
 believe it has a Linux agent.

 David Froberg
 No warrantee is implied or explicitly stated.
 Do not bend, fold, spindle, or mutilate.

  -Original Message-
 From:   Noll, Ralph [mailto:[EMAIL PROTECTED]]
 Sent:   Tuesday, April 23, 2002 1:46 PM
 To: [EMAIL PROTECTED]
 Subject:backup software

 what does anyone use for backup software for
 linux/390...

 do you use TSM under VM

 or a Microsoft product to backup Linux


 thanks

 Ralph Noll
 Systems Programmer
 City of Little Rock
 Phone (501) 371-4884
 Fax   (501) 371-4616
 Cell  (501) 590-8626
 mailto:[EMAIL PROTECTED]


\\\|///
  \\\ ~ ~ ///
   (  @ @  )
  ===oOOo=(_)=oOOo===




Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01: 00

2002-04-23 Thread Jill Grine

Agreed, the ICKDSF on VM would be painful.  Mark suggested loading from
a different URL.  I have loaded the files to my PC, but not to the mainframe
yet.  I had to drop work on Linux for the afternoon to work on another
project, but I'm hoping the new files might make the difference.  The
'003 FILES CHANGED' is the result of changing all files in the rdr to
KEEP NOHOLD in the REXX.

-Original Message-
From: Rob van der Heij [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 2:34 PM
To: [EMAIL PROTECTED]
Subject: Re: Subject: Kernel panic: VFS: Unable to mount root fs on 01:
00


At 17:56 23-04-02, you wrote:

Also, you commented that Rob's recipe for writing the IPL files to disk was
based on the premise of having OS/390 available.  I'm pretty sure that VM
has the equivalent of ICKDSF available, so the method should be adaptable
to
that environment.  (Of course, I don't think Rob or I ever thought it would
be necessary under VM.)

Oh, deity forbid... obviously I used that to develop the stuff, but
the process with ICKDSF on VM was painful. I misunderstood the
environment.

The logging shows no ramdisk recognized. Either the ramdisk was
corrupted during download, or maybe the wrong kernel is being used
(though I think the 0003 FILES CHANGED only appears when you have
the RDR IPL configured).

Rob



Re: How do we get iostat working in SLES7?

2002-04-23 Thread Jon R. Doyle

Kernel updates from SuSE have the hooks for iostat to give info similar to
Solaris or BSD. This was placed into 2.4.16 and beyond, as I recall. This
kernel patch was questionable in the past. The patch is also available on
the web; there is a maintainer in France, as I recall. Google it if you do
not have the updated kernel, but I would recommend the update route, as
several other fixes went into 2.4.18.

You might send a note to bernd or Jens at SuSE to check on updates.


Regards,

Jon

Jon R. Doyle
Sendmail Inc.
6425 Christie Ave
Emeryville, Ca. 94608


   (o_
   (o_   (o_   //\
   (/)_  (\)_  V_/_



On Tue, 23 Apr 2002, John P Taylor wrote:

 We are having some problems at the moment getting iostat to output data
 from the partitions defined to the system (e.g. iostat -x /dev/dasdb).
 This works OK on RedHat 7.2, and the reason for this seems to be that
 /proc/partitions contains rather more information on RedHat than SLES7.
 We tried applying linux-2.4.0-sard.patch to SLES7:

 --- linux/drivers/block/ll_rw_blk.c.~1~ Mon Jul 17 14:53:34 2000
 --- linux/drivers/scsi/scsi_lib.c.~1~   Mon Jul 17 14:53:30 2000
 --- linux/fs/partitions/check.c.~1~ Mon Jul 17 14:53:31 2000
 --- linux/include/linux/blkdev.h.~1~Mon Jul 17 14:53:38 2000
 --- linux/include/linux/genhd.h.~1~ Mon Jul 17 14:53:28 2000
 --- linux/drivers/block/genhd.c.origFri Sep  7 12:05:30 2001

 linux/drivers/block/ll_rw_blk.c.~1~ patches cleanly but the others do
 not. Are we going about this the wrong way? Is there an alternative
 utility that provides the same level of detail? Any suggestions welcome!

 John P Taylor
 ([EMAIL PROTECTED])
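A quick way to tell whether a given kernel exports the extended per-partition statistics that iostat -x parses is to count the fields in /proc/partitions; a hedged sketch (the exact field counts are assumptions: unpatched kernels show the 4-field major/minor/#blocks/name layout, sard-patched 2.4 kernels show many more):

```shell
# Count the fields on the first data line of /proc/partitions.
# 4 fields  -> unpatched layout (major minor #blocks name)
# more      -> extended statistics present, which "iostat -x" needs
fields=$(grep -v '^major' /proc/partitions | grep -v '^$' | head -1 | wc -w)
echo "fields per data line: $fields"
```

If the count is 4, the sard/statistics patch (or a kernel update that includes it) is still needed.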




Re: z/VM 3 and IFL engines - Oh No!

2002-04-23 Thread Rod Clayton

It is also my understanding that you have to buy another VM license to run guests
under your IFL engine when you get it.  The IFL VM will have to talk to the non-IFL VM
via an inter-LPAR communications technique.

Alan Altmark posted that z/VM 3 can't run in IFL engines. This is not
good news for us. We are currently running VM on a machine that can't
run z/VM 4. We are looking at getting newer machine with an IFL engine.
We wanted to do a Linux proof of concept on the new machine, on the IFL
to isolate the linux workload.
If z/VM 3 can't run on an IFL we'll have to upgrade VM - but after we
get the new machine since our current machine can't run z/VM 4.
So my question is can z/VM 3 really not run in an IFL, or is this
another license issue? We already are pursuing RACF licensing and I now
see we'll need a special bid for RSCS NJE and/or Passthru. Can the z/VM
issue also be pursued as a licensing issue or is this one real?


--
Rod Clayton KA3BHY
Systems Programmer
Howard County Public Schools
[EMAIL PROTECTED]



Re: gcc compiler on Linux/390

2002-04-23 Thread Mark Perry

Hi Reinald,
see
http://gcc.gnu.org/onlinedocs/gcc/S-390-and-zSeries-Options.html#S%2f390%20and%20zSeries%20Options

Mark

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Reinald Verheij
Sent: 23 April 2002 12:47
To: [EMAIL PROTECTED]
Subject: gcc compiler on Linux/390


Hi,

On Win32 and on other Unixes, some compilers can optimize instruction
scheduling for the Pentium pipeline architecture, etc.
Does anything like this exist in GCC, to optimize the instruction stream for
G4, G5 or G6 processors?

Reinald
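The page Mark links documents per-processor flags for gcc's S/390 backend; as a sketch (the flag spellings and the g5/g6 values are assumptions to check against the gcc release actually installed):

```shell
# Hypothetical CFLAGS for tuning generated code to a processor family.
# -march sets the minimum architecture level the code may assume;
# -mtune only affects instruction scheduling, not the instruction set.
CFLAGS="-O2 -march=g5 -mtune=g6"
echo "would compile with: gcc $CFLAGS -c prog.c"
```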



Re: MTBF

2002-04-23 Thread John Alvord

On Tue, 23 Apr 2002 12:28:23 -0400, David Boyes
[EMAIL PROTECTED] wrote:

  The record for us is about 9 months for a single Linux image. Average
  is about 3-4 months between reboots, depending on what's running in
  them -- things that suck up lots of memory like Websphere tend to
  shorten the lifespan of the machine by fragmenting storage. Machines
  that get a lot of interactive use tend to collect a few zombies after
  a while, so reboots become a reasonably good idea after a while.

  I have to say that I'm a little surprised at that recommendation.

No, THIS IS NOT A  RECOMMENDATION. This is a descriptive observation.

The failures we see appear to be memory related, and there are some cases
where if you cut interactive users loose and let them do their stuff, they
create random garbage, et al. This is pretty standard stuff for lots of
interactive processing sites -- clear the decks periodically even if it's
not sick.

  Seems like I've heard lots of tales of people with Linux up
  much longer than 9 months... doing web services, etc...  do you
  think your 9 month figure is a function of the 390 version
  of Linux, or Linux in general?

No, I think it's a function of how we make upgrade decisions and/or ops
policy.  I suspect that you could go longer, but I wanted to share a data
point.
The very long uptime reports tend to be for fixed workload
environments... like a router that runs for years.

john



Re: MTBF

2002-04-23 Thread Thomas David Rivers


   The record for us is about 9 months for a single Linux image. Average
   is about 3-4 months between reboots, depending on what's running in
   them -- things that suck up lots of memory like Websphere tend to
   shorten the lifespan of the machine by fragmenting storage. Machines
   that get a lot of interactive use tend to collect a few zombies after
   a while, so reboots become a reasonably good idea after a while.
 
   I have to say that I'm a little surprised at that recommendation.

 No, THIS IS NOT A  RECOMMENDATION. This is a descriptive observation.

 Ah - my fault - I didn't mean to mis-characterize your statements.
 My apologies.

...

 Sorry if the comment was confusing.  I don't intend it to be a
 recommendation, just an observation of our experience.

 Your clarification helps a great deal!  Many thanks!

- Dave Rivers -

--
[EMAIL PROTECTED]Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com



Re: LinuxWorld Article series

2002-04-23 Thread John Summerfield

 On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
 [EMAIL PROTECTED] wrote:

   ...
  This is nothing really new.  Sharing a VM system with early releases of
  MVS was unpleasant.
 
I hear that it's no problem with the two in different LPARs, and that
  running MVS as a guest under VM works well with a surprisingly small
  performance hit (in the 2-3% ballpark.)
  --
  --henry schaffer
 
 
 In the times when Sharing a VM system with early releases of MVS was
 unpleasant, IBM hadn't invented LPARs and I think Gene had just released
 (or was about to release) the S/470s.
 
 
 MVS+VM, I was told, made the 168 comparable in performance to a 135.

 One of my first projects at Amdahl was supporting a product called
 VM/PE, a boringly named, technically cool piece of software which
 shared the real (UP) system between VM and MVS. S/370 architecture is
 dependent on page zero and this code swapped page zeros between MVS
 and VM. It worked just fine for dedicated channels, nice low 1-2%
 overhead. When we started sharing control units and devices, things
 turned ugly.



I do believe we used VM/PE, before MDF became available.

We used to run two, occasionally three MVS systems on a 5860.

-
--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-23 Thread John Alvord

On Wed, 24 Apr 2002 03:46:04 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

 On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
 [EMAIL PROTECTED] wrote:

   ...
  This is nothing really new.  Sharing a VM system with early releases of
  MVS was unpleasant.
 
I hear that it's no problem with the two in different LPARs, and that
  running MVS as a guest under VM works well with a surprisingly small
  performance hit (in the 2-3% ballpark.)
  --
  --henry schaffer
 
 
 In the times when Sharing a VM system with early releases of MVS was
 unpleasant, IBM hadn't invented LPARs and I think Gene had just released
 (or was about to release) the S/470s.
 
 
 MVS+VM, I was told, made the 168 comparable in performance to a 135.

 One of my first projects at Amdahl was supporting a product called
 VM/PE, a boringly named, technically cool piece of software which
 shared the real (UP) system between VM and MVS. S/370 architecture is
 dependent on page zero and this code swapped page zeros between MVS
 and VM. It worked just fine for dedicated channels, nice low 1-2%
 overhead. When we started sharing control units and devices, things
 turned ugly.



I do believe we used VM/PE, before MDF became available.

We used to run two, occasionally three MVS systems on a 5860.

MDF was largely equal to the LPAR facility...

VM/PE had a very elegant development name: Janus - who was the Roman
God of portals, able to look two directions at the same time. 

It was originally written by Dewayne Hendricks and the original was
very nice indeed. [Anyone feel free to correct me]. I ran across an
original listing while at Amdahl and it was so much prettier than the
product version. He was no longer working at Amdahl by the time I
arrived. Robert Lerche was also involved, but I don't know whether he
worked jointly with DH or not.

john



netsaint per redbook ?

2002-04-23 Thread Lionel Dyck

I am using the Linux on IBM zSeries and S/390 ISP/ASP Solutions to guide
me and I have done everything in the chapter on installing netsaint and
configuring apache and when I connect to http://localhost/netsaint I am
told that:

The requested URL /netsaint was not found on this server.

This is my first foray into apache.

The two statements in the httpd.conf per the pub are:

Alias /netsaint/ /usr/local/netsaint/share/
ScriptAlias /cgi-bin/netsaint/ /usr/local/netsaint/bin/

Thanks

Lionel B. Dyck, Systems Software Lead
Kaiser Permanente Information Technology
25 N. Via Monte Ave
Walnut Creek, Ca 94598

Phone:   (925) 926-5332 (tie line 8/473-5332)
E-Mail:[EMAIL PROTECTED]
Sametime: (use Lotus Notes address)
AIM:lbdyck
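A common gotcha with those two lines: Alias /netsaint/ (with the trailing slash) only matches requests that also end in a slash, so http://localhost/netsaint will 404 while http://localhost/netsaint/ may work; and the aliased directory still needs an explicit access grant. A hypothetical httpd.conf fragment (the Directory block is an assumption, not from the redbook):

```apache
# Aliases as in the redbook; the trailing slash means the URL must be
# requested as http://localhost/netsaint/ (with the slash).
Alias /netsaint/ /usr/local/netsaint/share/
ScriptAlias /cgi-bin/netsaint/ /usr/local/netsaint/bin/

# Assumed addition: directories outside the DocumentRoot are denied
# unless access is granted explicitly.
<Directory "/usr/local/netsaint/share">
    Order allow,deny
    Allow from all
</Directory>
```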



FOREIGN BANKS SWITCHING TO LINUX

2002-04-23 Thread Wilson Correira Gil



MORE FOREIGN BANKS SWITCHING TO LINUX

"A New Zealand bank has become the latest institution to adopt
the open-source Linux operating system. According to reports,
the bank is to move all its branches to the Linux platform..."

COMPLETE STORY: http://zdnet.com.com/2100-1104-887961.html

Wilson Correia Gil
(0xx11) 3179-7462
(0xx11) 3179-7044 - Fax
Duratex S.A.


Re: netsaint per redbook ?

2002-04-23 Thread Post, Mark K

Lionel,

Did you restart Apache after updating httpd.conf?
Are the netsaint directories on the same file system as Apache's files?

Mark Post

-Original Message-
From: Lionel Dyck [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 4:09 PM
To: [EMAIL PROTECTED]
Subject: netsaint per redbook ?


I am using the Linux on IBM zSeries and S/390 ISP/ASP Solutions to guide
me and I have done everything in the chapter on installing netsaint and
configuring apache and when I connect to http://localhost/netsaint I am
told that:

The requested URL /netsaint was not found on this server.

This is my first foray into apache.

The two statements in the httpd.conf per the pub are:

Alias /netsaint/ /usr/local/netsaint/share/
ScriptAlias /cgi-bin/netsaint/ /usr/local/netsaint/bin/

Thanks

Lionel B. Dyck, Systems Software Lead
Kaiser Permanente Information Technology
25 N. Via Monte Ave
Walnut Creek, Ca 94598

Phone:   (925) 926-5332 (tie line 8/473-5332)
E-Mail:[EMAIL PROTECTED]
Sametime: (use Lotus Notes address)
AIM:lbdyck



Re: Sendmail Perfomance

2002-04-23 Thread John Summerfield

[EMAIL PROTECTED] said:
 We have also seen large performance gains on 2.4.18, especially with
 the SuSE patch from Andrea (VM33).


I don't know why, but disk performance improved on several Intel/AMD boxes I use
by around 30% at 2.4.17.

As measured by hdparm, but bonnie/bonnie++ backed that up on my Athlon system.

--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: Sendmail Perfomance

2002-04-23 Thread Jon R. Doyle

I think that is when Andrea's work went into mainline. On SuSE builds it
is there for all platforms. I think some of the -aa tree went into the
mainline, with other pieces from Andrew. I would have to look back over
the 10k lkml entries :~)

You will also see some gains over stock using .19pre7, which has a lot
of work on ReiserFS (some specific to Sendmail) as well as more
cleanup from Andrea. You can grab the source at

ftp.suse.de/pub.people/mantel

Regards,

Jon

Jon R. Doyle
Sendmail Inc.
6425 Christie Ave
Emeryville, Ca. 94608


   (o_
   (o_   (o_   //\
   (/)_  (\)_  V_/_



On Wed, 24 Apr 2002, John Summerfield wrote:

 [EMAIL PROTECTED] said:
  We have also seen large performance gains on 2.4.18, especially with
  the SuSE patch from Andrea (VM33).


 I don't know why, but disk performance improved on several Intel/AMD boxes I use
 by around 30% at 2.4.17.

 As measured by hdparm, but bonnie/bonnie++ backed that up on my Athlon system.

 --
 Cheers
 John Summerfield

 Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

 Note: mail delivered to me is deemed to be intended for me, for my disposition.

 ==
 If you don't like being told you're wrong,
 be right!




Samba article

2002-04-23 Thread Lionel Dyck

http://www.itweek.co.uk/News/1131114

Samba runs rings around Windows


Lionel B. Dyck, Systems Software Lead
Kaiser Permanente Information Technology
25 N. Via Monte Ave
Walnut Creek, Ca 94598

Phone:   (925) 926-5332 (tie line 8/473-5332)
E-Mail:[EMAIL PROTECTED]
Sametime: (use Lotus Notes address)
AIM:lbdyck



Re: Sendmail Perfomance

2002-04-23 Thread Robert Werner

Hi Jon,


 Significant performance increases will be seen using the ReiserFS for
 Queues dirs due to the small random file counts. You can find all kinds of
 info on ReiserFS on the IBM site. The other thing you would really want to
 investigate is Sendmail 8.12. Yes, yes, not just another go to the new
 rev: 8.10 actually performs better in some cases, and 8.12 has this
 performance plus advantages in the allocation of processes and queue
 definition. I do know that SuSE builds of 8.12 are available, have not
 checked right away on zSeries. You can always use source.

 We have also seen large performance gains on 2.4.18, especially with the
 SuSE patch from Andrea (VM33).


Up to now I thought that every filesystem with journaling lowers disk
performance? The journaling function produces additional CPU load and
I/Os. So I'm surprised that ReiserFS should be faster than ext2. Let's see
if I find time to try it out.

I just read the list of new features from sendmail 8.12. Seems that for our
purpose - a smtp-relay that is mx-backup for several domains - upgrading to
8.12.3 could result in much better performance. I think it's worth building
it on the zSeries.


Thanks,

Robert.



Re: netsaint per redbook ?

2002-04-23 Thread Lionel Dyck

This is under SLES 7.2, and yes, I did an rcapache restart.  The files are in
the same filesystem.

I have to believe there is something in the apache setup, but I don't know
where to look.  I also followed the info in the SHARE Linux hands-on lab
for SuSE to set up apache. And I am not running the firewall.

thx

Linux on 390 Port [EMAIL PROTECTED] wrote on 04/23/2002 01:55:46
PM:

 Lionel,

 Did you restart Apache after updating httpd.conf?
 Are the netsaint directories on the same file system as Apache's files?

 Mark Post

 -Original Message-
 From: Lionel Dyck [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 23, 2002 4:09 PM
 To: [EMAIL PROTECTED]
 Subject: netsaint per redbook ?


 I am using the Linux on IBM zSeries and S/390 ISP/ASP Solutions to guide
 me and I have done everything in the chapter on installing netsaint and
 configuring apache and when I connect to http://localhost/netsaint I am
 told that:

 The requested URL /netsaint was not found on this server.

 This is my first foray into apache.

 The two statements in the httpd.conf per the pub are:

 Alias /netsaint/ /usr/local/netsaint/share/
 ScriptAlias /cgi-bin/netsaint/ /usr/local/netsaint/bin/

 Thanks
 
 Lionel B. Dyck, Systems Software Lead
 Kaiser Permanente Information Technology
 25 N. Via Monte Ave
 Walnut Creek, Ca 94598

 Phone:   (925) 926-5332 (tie line 8/473-5332)
 E-Mail:[EMAIL PROTECTED]
 Sametime: (use Lotus Notes address)
 AIM:lbdyck



Re: MTBF

2002-04-23 Thread Robert Werner

I had the same experience with a heavily loaded nntp-server. We have to
reboot the system after about 10-12 weeks.  Looks like a memory leak in the
kernel (2.4.7).

Robert,
who has a machine running solaris 2.5.1 with an uptime of 1386 (!) days  ;-)



- Original Message -
From: David Boyes [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, April 23, 2002 6:28 PM
Subject: Re: MTBF


   The record for us is about 9 months for a single Linux image. Average
   is about 3-4 months between reboots, depending on what's running in
   them -- things that suck up lots of memory like Websphere tend to
   shorten the lifespan of the machine by fragmenting storage. Machines
   that get a lot of interactive use tend to collect a few zombies after
   a while, so reboots become a reasonably good idea after a while.
 
   I have to say that I'm a little surprised at that recommendation.

 No, THIS IS NOT A  RECOMMENDATION. This is a descriptive observation.

 The failures we see appear to be memory related, and there are some cases
 where if you cut interactive users loose and let them do their stuff, they
 create random garbage, et al. This is pretty standard stuff for lots of
 interactive processing sites -- clear the decks periodically even if it's
 not sick.

   Seems like I've heard lots of tales of people with Linux up
   much longer than 9 months... doing web services, etc...  do you
   think your 9 month figure is a function of the 390 version
   of Linux, or Linux in general?

 No, I think it's a function of how we make upgrade decisions and/or ops
 policy.  I suspect that you could go longer, but I wanted to share a data
 point.

  That is, would you recommend
   rebooting a PC version of Linux on the same interval (given
   the same workload?)

 Given my workload, probably. My users are rude, cranky, and badly behaved.
 They can break anything...8-).

   Also - just to ask - what about the BSD variants - would you
   also recommend 9 months for them?

 See above. If the users do stupid things, you're about in the same
position
 no matter what the OS.

   Could you relate more about this 9 month figure?  Do you have
   specific instances where it was required, that you can share
   of course...

 The 9 month one was a power failure on site with a P390 doing mail
delivery.
 The failure was that the customer was too ... funds limited... to go for
 backup setups.  Restart, and we were up and running, but that's the
maximum
 runtime we've observed.

 Sorry if the comment was confusing.  I don't intend it to be a
 recommendation, just an observation of our experience.

 -- db




try to load Gigabit OSA Ethernet driver qeth

2002-04-23 Thread Liang, Ih-Cheng

I have SuSE v7 running on an S390 LPAR.
I tried to load Gigabit OSA Ethernet driver with the following command:
insmod qeth qeth_options=noauto,0x0f0a,0x0f0b,0x0f0c
This command failed with the following messages:
Using /lib/modules/2.2.16/net/qeth.o
/lib/modules/2.2.16/net/qeth.o: init_module: Device or resource busy
Hint: this error can be caused by incorrect module parameters,
including invalid IO or IRQ parameters
Any suggestions what I did wrong?  Thanks.
IC Liang.



Re: Samba article

2002-04-23 Thread John Summerfield

 http://www.itweek.co.uk/News/1131114

 Samba runs rings around Windows

So says http://www.dwheeler.com/oss_fs_why.html - but it takes a while to read.



--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: Sendmail Perfomance

2002-04-23 Thread Alan Cox

 used in the Sendmail deployments we do, as the Queues or mailstores can
 become corrupt, especially with the large Cache on today's controllers.
 EXT2 does not fsync or dirsync correctly, we had to place patches into
 8.12 code base for this problematic issue (people that use EXT2 anyway).

This is incorrect. Ext2 is strictly standards-compliant here. Sendmail
at least used to make totally bogus assumptions about synchronous directory
updating. I'm surprised that you might not have fixed such code.
(Perhaps you have - I run exim because I want fast, efficient mail
handling and a human-readable configuration.)

 EXT3 will have the same set of problems; in fact I was just doing studies

Also incorrect. See the documentation. Ext3 also follows the posix/sus
requirements and allows you to select additional ordering guarantees.

As to reiserfs being very fast for very small files: it was certainly
designed to be, and Namesys are smart people.



Re: MTBF

2002-04-23 Thread Alan Cox

 I made the same experience with a heavy loaded nntp-server. We have to
 reboot the system after about 10-12 weeks.  Looks like a memory leak in the
 kernel (2.4.7).

2.4.7 has dcache and other vm balancing problems. That's one of the
reasons it's considered obsolete.



Re: try to load Gigabit OSA Ethernet driver qeth

2002-04-23 Thread David Rock

On Tue, Apr 23, 2002 at 03:12:27PM -0700, Liang, Ih-Cheng wrote:
 I have SuSE v7 running on an S390 LPAR.
 I tried to load Gigabit OSA Ethernet driver with the following command:
 insmod qeth qeth_options=noauto,0x0f0a,0x0f0b,0x0f0c
 This command failed with the following messages:
 Using /lib/modules/2.2.16/net/qeth.o
 /lib/modules/2.2.16/net/qeth.o: init_module: Device or resource busy
 Hint: this error can be caused by incorrect module parameters,
 including invalid IO or IRQ parameters

This sounds like the module has already been loaded. Try lsmod to see if
it is already loaded, and rmmod it if it is.

--
David Rock
[EMAIL PROTECTED]
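A minimal sketch of that check (the device numbers are copied from the original insmod; whether a prerequisite module such as qdio must be loaded first depends on the driver level, so that step is left commented):

```shell
# "Device or resource busy" from insmod often means the module is
# already resident; check /proc/modules before retrying.
if grep -qw '^qeth' /proc/modules 2>/dev/null; then
    echo "qeth already loaded; try: rmmod qeth"
else
    echo "qeth not loaded"
    # Then retry, loading any prerequisite module first if required:
    #   insmod qdio
    #   insmod qeth qeth_options=noauto,0x0f0a,0x0f0b,0x0f0c
fi
```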



Re: Sendmail Perfomance

2002-04-23 Thread Jon R. Doyle

On Wed, 24 Apr 2002, Alan Cox wrote:

  used in the Sendmail deployments we do, as the Queues or mailstores can
  become corrupt, especially with the large Cache on today's controllers.
  EXT2 does not fsync or dirsync correctly, we had to place patches into
  8.12 code base for this problematic issue (people that use EXT2 anyway).

 This is incorrect. Ext2 is strictly standards compliant here. Sendmail
 at least used to make totally bogus assumptions about synchronous directory
 updating. I'm suprised that you might not have fixed such code.
 (Perhaps you have - I run exim because I want fast efficient mail
 handling and a human readable configuration)


Whether or not it meets some standards, these issues were problematic, and
only on Linux, not on BSD with Soft Updates or the other likely suspects on
Unix (AIX etc.). When I uncovered this, the Opensource group did make fixes
specifically for Linux; it is in the release notes for 8.12. This is quite
standard with some mail systems; there are threads on this issue in the
SpecBench area that handles Specmail. This was really when it came to a head
for us. In fact, Domino was rejected until they chattr'd or mounted EXT2 -sync.
From my memory, when I spoke to Claus and Greg here, the ISV being required
to make specific calls to a FS was hard to swallow. However, we did do
this in 8.12.


  EXT3 will have the same set of problems; in fact I was just doing studies

 Also incorrect. See the documentation. Ext3 also follows the posix/sus
 requirements and allows you to select additional ordering guarantees

Humm, if you mean that you can mount EXT3 with options, then yes, but I
rarely find folks that follow the warning label. If you mean the forced
fsync, yes, that too can be disabled, but I have not read whether this is
wise; I just saw in places that you should still run fsync manually at
times. This last area was specific to HA environments.


 As to reiserfs being very fast for very small files. It was certainly
 designed to be , and Namesys are smart people.

Yes, smart. Hans just needs better PR for hot technology, or maybe RH
should put ReiserFS support in their installer; then all the Americans
would use it, since we buy and use everything from software to Wonder
Bread on marketing perception :~)

Regards,

Jon



Re: Sendmail Perfomance

2002-04-23 Thread Alan Cox

 In fact Domino was rejected until they chattr or mounted EXT2 in -sync.
 From my memory when I spoke to Claus and Greg here the ISV being required
 to make specific calls to a FS was hard to swallow. However, we did do
 this in 8.12

Standards exist for a reason. If there really is a problem with a lack of
standards here, it would be very good if Sendmail Inc raised it with
the Open Group for SUSv4. I support the need for a standard to handle
directory name synchronizing.

 Humm, if you mean that you can mount EXT3 with options, then yes, but I
 rarely find folks that follow the warning label. If you mean the forced
 fsync, yes, that too can be disabled, but I have not read wheter this is
 wise just saw in places that you should still run fsync manually at
 times. This last area was specific to HA environments.

No, I mean that it defines orderings, and those orderings can handle buggy
software that assumes ancient 4BSD unix behaviour.



Re: try to load Gigabit OSA Ethernet driver qeth

2002-04-23 Thread Post, Mark K

Did you load the qdio module first?

Mark Post

-Original Message-
From: Liang, Ih-Cheng [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, April 23, 2002 6:12 PM
To: [EMAIL PROTECTED]
Subject: try to load Gigabit OSA Ethernet driver qeth


I have SuSE v7 running on an S390 LPAR.
I tried to load Gigabit OSA Ethernet driver with the following command:
insmod qeth qeth_options=noauto,0x0f0a,0x0f0b,0x0f0c
This command failed with the following messages:
Using /lib/modules/2.2.16/net/qeth.o
/lib/modules/2.2.16/net/qeth.o: init_module: Device or resource busy
Hint: this error can be caused by incorrect module parameters,
including invalid IO or IRQ parameters
Any suggestions what I did wrong?  Thanks.
IC Liang.



Uncensored Redbooks - Revisited

2002-04-23 Thread Post, Mark K

Well, so far I haven't been able to line up anyone willing to host the
uncensored version of the Linux/390 Redbook.  If anyone on the mailing list
is able and willing to do that, please let me know.  The file is about 5MB
in size.  If someone wants to host the book, but doesn't have a copy of the
original, I can provide that.

Thanks,

Mark Post

-Original Message-
From: Post, Mark K
Sent: Friday, April 19, 2002 4:49 PM
To: 'Linux390'
Subject: Uncensored Redbooks


I've gotten permission from my web hoster to put a link to an uncensored
copy of SG24-4987 on the main page of linuxvm.org.  I will be updating the
main page very soon to point to it, along with a short explanation of why
there are two pointers.

Mark Post



Re: LinuxWorld Article series

2002-04-23 Thread Jon Nolting

Mark,

This is an Amdahl Millennium 700 which is ALS-2 compliant but does NOT have
IEEE.

At 09:58 AM 4/22/02 -0400, you wrote:
David,

No, I've been informed by a reliable source that this is an MSF'd Amdahl
0700 processor.

Mark Post

-Original Message-
From: David Boyes [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 22, 2002 8:29 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


If I'm reading it correctly, a 6070 is some kind of PowerPC box. Possibly a
R/390? If so, you're facing the same OS/2 based device emulation...

-- db

 Dave,

 Not really, sorry.  I'm just a user there, and it sits in
 Texas.  I can ask
 the VM guy that supports it if you're really curious.

 Mark Post

 -Original Message-
 From: Dave Jones [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, April 20, 2002 9:38 PM
 To: [EMAIL PROTECTED]
 Subject: Re: LinuxWorld Article series


 Mark,
 I don't recognize the CPU type in the CPUID field. can you
 explain what type
 of system you ran this test on?
 Thanks.

 DJ
  CPUID = FF0240760700


Jon Nolting
(925) 672-1249  -- Home office



Re: Sendmail Perfomance

2002-04-23 Thread Moloko Monyepao

Mark!

The %CPU goes up to 60 to 100 when the load gets high. The following are some of the
stats I got from top.

CPU states: 96.4% user,  3.5% system,  0.0% nice,  0.0% idle   Load= 8

CPU states: 68.1% user,  2.5% system,  0.0% nice, 29.2% idle   Load=10

I want to increase the load to maybe 30 or 40, but what I want to know is whether or
not I will get connection rejections. Looking at the stats as they are now, with
only outgoing mail, I get the impression that I will have problems when I increase the
load and accept both incoming and outgoing mail.

Please advise
Moloko


 -Original Message-
 From: Post, Mark K [SMTP:[EMAIL PROTECTED]]
 Sent: 23 April 2002 16:44
 To:   [EMAIL PROTECTED]
 Subject:   Re: Sendmail Perfomance
 
 Moloko,
 
 You still have not answered the question of just how busy your S/390 CPU is
 when the load average is at 10.  (Not 10% as you state.)  A load average
 does not tell you anything about how busy the CPU is, only how many
 processes are ready to run.  Try bumping your QueueLA and RefuseLA on
 Linux/390 to something very large, say 99, and see what happens.  But
 really, look at and report %CPU busy, not load average.  That really doesn't
 tell us anything.
 
 Mark Post
 
 -Original Message-
 From: Moloko Monyepao [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 23, 2002 9:35 AM
 To: [EMAIL PROTECTED]
 Subject: Sendmail Perfomance
 
 
 I have sendmail 8.11.6-3 installed on Redhat 7.2 (server installation) on
 S390(Lpar) with an IFL dedicated to this Lpar. We made the  O QueueLA=10 and
 the O RefuseLA=12 which is the same setup as on my Intel machine. When using
 top to check the load the following is what I get. The O
 MaxDaemonChildren=100 on S390 and 300 on Intel.
 
 Intel Machine = The Load average does not go over 10% when setup for both
 incoming and outgoing mail.
 
 S390 Machine = The Load average goes to more than 10% and it start rejecting
 connections if we set it up for both incoming and outgoing mail and it gets
 up to about 6 to 8% if we only set it up for outgoing mail which is not a
 lot.
 
 Please assist
 Moloko
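Mark's suggestion above, expressed as a sendmail.cf fragment (8.11 "O option" syntax; the value 99 is illustrative):

```
# Raise the load-average cutoffs so sendmail keeps accepting and queueing
# mail while %CPU is investigated (99 effectively disables the checks):
O QueueLA=99
O RefuseLA=99
```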



Re: Sendmail Perfomance

2002-04-23 Thread Moloko Monyepao

Robert!

According to our monitoring, we did not notice any performance problems on the disks or
the network. The disks use Raid0 across about 4 disks, which can be reached by about 4
ficon channels and 4 escon channels shared between 5 Lpars.

Thanx
Moloko
 -Original Message-
 From: Robert Werner [SMTP:[EMAIL PROTECTED]]
 Sent: 23 April 2002 16:41
 To:   [EMAIL PROTECTED]
 Subject:   Re: Sendmail Performance
 
 Hi Moloko,
 
 Our experience is that you cannot compare load averages directly
 between Intel and Linux/390. Also, the load average shown on our Linux systems is
 higher with a 2.4 kernel than with kernel 2.2.x (why?)
 
 But if there's really a problem with the performance on the S390 you have to
 find out where your bottleneck is. On our smtp-relay on the z900 (under VM,
 SuSE7.0 with kernel 2.4.7, sendmail 8.11) the disk performance slowed down
 the system and pushed up the load average. A solution is to attach the disks
 using more channels and to stripe the spool area over several disks and
 channels with LVM or software RAID0.
 
 Sendmail also slows down if the number of mails in your spool directory gets too
 high. So if you have a huge amount of spooled mail it is better to split
 the spool area into several subdirectories, with dedicated sendmail instances
 processing the queues.
 
 We also had bad performance with kernel 2.2.x.
 
 Robert.
 
 
 
 - Original Message -
 From: Moloko Monyepao [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, April 23, 2002 3:35 PM
 Subject: Sendmail Performance
 
 
 I have sendmail 8.11.6-3 installed on Redhat 7.2 (server installation) on
 S390 (LPAR) with an IFL dedicated to this LPAR. We set O QueueLA=10 and
 O RefuseLA=12, which is the same setup as on my Intel machine. When using
 top to check the load, the following is what I see. The O
 MaxDaemonChildren=100 on S390 and 300 on Intel.
 
 Intel Machine = The load average does not go over 10% when set up for both
 incoming and outgoing mail.
 
 S390 Machine = The load average goes to more than 10% and it starts rejecting
 connections if we set it up for both incoming and outgoing mail, and it gets
 up to about 6 to 8% if we only set it up for outgoing mail, which is not a
 lot.
 
 Please assist
 Moloko
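Robert's suggestion of splitting the spool into several queue subdirectories with a dedicated sendmail instance per queue can be sketched as below. The paths and the four-queue layout are illustrative, not from the thread; sendmail's long-form `-O QueueDirectory=...` command-line option is standard in the 8.x line, but verify against your version's documentation. The sketch uses /tmp so it is harmless to run; a real setup would put the queues under /var/spool/mqueue on a striped LVM or RAID0 volume, as Robert describes.

```shell
# Create one queue subdirectory per sendmail queue-runner instance
# (SPOOL is a demo path; substitute the real spool location)
SPOOL=${SPOOL:-/tmp/mqueue-demo}
for q in q1 q2 q3 q4; do
    mkdir -p "$SPOOL/$q"
done
ls "$SPOOL"

# Each dedicated instance then processes only its own queue, e.g.:
#   sendmail -O QueueDirectory=/var/spool/mqueue/q1 -q15m
#   sendmail -O QueueDirectory=/var/spool/mqueue/q2 -q15m
# (run as root against the real spool; shown here as comments only)
```

The win is that each queue runner scans a much smaller directory, avoiding the slowdown Robert mentions when one spool directory accumulates too many files.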