SLES-7 (31-bit) Samba crashed: Kernel BUG at fcntl.c:417!

2003-07-29 Thread Knoblauch, Josef
Hallo,
 
SuSE SLES-7 (s390) Kernel 2.4.7-SuSE-SMP
Samba 2.2.0a
 
After installation of Samba 2.2.0a on SLES-7 (31-bit) for zSeries I got a
system crash during access from a W2K workstation to Samba:
 
Kernel BUG at fcntl.c:417!
illegal operation : 0001
CPU : 0
Process smbd (pid: 18262, stackpage=36FE9000)
Kernel PSW: 070c 800601df6
task: 36fe8000 Ksp: 36fe9c88 pt_regs: 36fe9bf0
 
Kernel GPRS
  001b 0001
800601d4 0001 7a30 3d929000
0003 0015 36fe8000 0003
3a03f33c 80060134 800601d4 36fe9c88
 
Kernel ACRS
   
0001   
   
   
 
Kernel BackChain CallChain
   36fe9c88 [<000601d4>]
   36fe9d68 [<000602ca>]
   36fe9dc8 [<0006ab48>]
   36fe9e28 [<0005cbe8>]
   36fe9e88 [<0005ccd0>]
   36fe9f08 [<000128f0>]
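
For reference, the call-chain addresses above can be resolved by hand against
the kernel's System.map (the symbol at or just below each address names the
function). A minimal sketch, assuming the usual /boot/System.map location:

  # look up symbols near the first call-chain entry, 000601d4
  grep '^000601' /boot/System.map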
 

 

Can somebody help me out with this problem?

Thanks

Josef Knoblauch 

__ 
Josef Knoblauch 
ALLDATA SYSTEMS GmbH 
Systemtechnik 
Redlichstraße 2 
40239 Düsseldorf 
Telefon +49-(0)0211/964 - 1560; Telefax +49-(0)0211/964 - 1155 
mailto:[EMAIL PROTECTED]; 
http://www.alldata.de   

 


Re: Whither consolidation and what then?

2003-07-29 Thread Alan Altmark
On Tuesday, 07/29/2003 at 08:55 MST, Jim Sibley <[EMAIL PROTECTED]>
wrote:

> So my question is: What moves are afoot to reduce the
> number of required images by consolidating their
> functions and remove the TCP/IP communications between
> applications?
>
> Isn't this the next logical step?

You make two points:
1. Fewer, larger servers
2. Mixed-function servers

I think you'll get the first before you get the 2nd.  As long as "everyone
knows" (say it often enough and it will be true) that you can't mix
workloads on a single Linux instance, we'll have this problem.  BTW, it
won't eliminate TCP/IP comms; it just changes the latency and CPU
consumption characteristics.  Avoiding TCP/IP altogether would require
application changes that would be specific to VM.  Don't hold your breath
on that.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development


Re: Whither consolidation and what then?

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Jim Sibley wrote:

> Alan wrote:
>
> "Its just that PC's are so cheap its
> easier to use several for a job _IFF_ you can solve
> the management
> problem."
>
> That _IFF_ is not only non-trivial technically, but
> also non-trivial financially!
>
> You buy one cheap PC or a hundred cheap PCs, you
> still have a bunch of cheap PCs.
>
> One of my favorite examples is that our company still
> has MS pervasively in the office and once a month we
> get a note from IT security to put on a patch because
> MS did it again. So it takes me 15 minutes, so what?
> Well, with 300,000 in the company, that's 75,000
> MAN-HOURS. IT security doesn't care - the man-hours
> don't come out of its budget!

Here's a truly cheap PC, a Pentium II 233, bought at auction:
[EMAIL PROTECTED] summer]$ uptime
 11:28am  up 93 days, 18:38, 12 users,  load average: 0.15, 0.13, 0.09
 You have mail in /var/spool/mail/summer
[EMAIL PROTECTED] summer]$


All relevant fixes are applied.

Doesn't even have a UPS on it; I have one, waiting for the next power
failure. It's beginning to look like it will happen when we move house
next month.



--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Whither consolidation and what then?

2003-07-29 Thread Jim Sibley
Alan wrote:

"Its just that PC's are so cheap its
easier to use several for a job _IFF_ you can solve
the management
problem."

That _IFF_ is not only non-trivial technically, but
also non-trivial financially!

You buy one cheap PC or a hundred cheap PCs, you
still have a bunch of cheap PCs.

One of my favorite examples is that our company still
has MS pervasively in the office and once a month we
get a note from IT security to put on a patch because
MS did it again. So it takes me 15 minutes, so what?
Well, with 300,000 in the company, that's 75,000
MAN-HOURS. IT security doesn't care - the man-hours
don't come out of its budget!
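
A quick sanity check of that figure, as a shell one-liner using the 300,000
people and 15 minutes quoted above:

  echo $(( 300000 * 15 / 60 ))    # prints 75000, i.e. 75,000 hours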

=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Whither consolidation and what then?

2003-07-29 Thread Ulrich Weigand
Alan Cox wrote:

>crashme is part of the Linux cerberus test suite although it goes back
>many years before. Roughly speaking crashme does this
>
>Catch every exception
>Generate random data
>Execute it
>(catching the exception to repeat)
>
>It's found many things, including a K6 CPU bug. If you try it on your 390
>make sure someone has tried it before you  ;)

We do run crashme, and while we didn't find CPU bugs, we found a few
kernel bugs that way -- mostly related to some of the weirder s390
semi-privileged instructions that we didn't properly shut off ...

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  [EMAIL PROTECTED]


Re: Whither consolidation and what then?

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Alan Cox wrote:

> On Maw, 2003-07-29 at 19:53, Adam Thornton wrote:
> > Reading email shouldn't take much CPU, although if you insist on doing
> > it inside UltraWhizzy
> > K/Gnome/Mozilla/MultiMediaMailReaderNowWithGratuitousAnimation!!! then
> > it can find a way, I'm sure, to burn CPU.
>
> Even that is mostly RAM and I/O heavy (in terms of the weak PC disk
> subsystems). When I tested this with some CPU speed setting stuff at
> about 800MHz Evolution simply stopped getting any faster.

I was surprised that performance improved noticeably when I added 256
Mbytes of RAM.

>
> That's actually very much the market pitch of one of the CPU vendors
> now (VIA)
>

There are some nice boxes built around their products, some including a
motherboard with a soldered-on CPU that retails for much the same price
as a lot of mobos. The mobo is 17 cm square, so it's perfectly possible to
build a system with the profile of a slimmed-down lunchbox:
http://images.google.com.au/imgres?imgurl=www.tefalheads.com/programs/linux_jumpstart/images/sparclx.jpg&imgrefurl=http://www.tefalheads.com/programs/linux_jumpstart/&h=230&w=186&prev=/images%3Fq%3Dsparcstation%2BLX%26svnum%3D10%26hl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26sa%3DG

I chose this image because it goes to show how tiny these Sun boxes are,
and I quite like the story that goes with it ;-) And it's one of those I
just installed RHL 6.2 on.



That box at the bottom is 24 cm (9 1/2") square.

VIA specifically mentions it supports Linux.




--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Whither consolidation and what then?

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Michael Martin wrote:

> I've seen this behaviour, too. I once tried to move a large number of
> mp3 files from one physical drive to another with rsync, and the machine
> locked up, destroyed the reiserfs file systems on both drives, and I
> lost a bunch of files. That's the only time I've had a near catastrophic
> failure in ten years of running linux.
>

That's a different problem (and maybe is why Red Hat didn't include
reiserfs for some time).

My system never locks up, it just behaves like it's starved of RAM.


--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Many many processes with LINUX

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Ferguson, Neale wrote:

> New threads and processes are both created via a call to clone(). The former
> uses flags that tell clone not to duplicate everything (like virtual
> memory). The new thread or process gets a unique PID. In 2.6 there'll be
> something called a thread group ID and a new threading model known as NPTL.
> All threads created by a process will belong to such a group. The display of
> processes can then be restricted to far fewer beasts. (Mind you
> the practice of creating hundreds or even thousands of threads by many
> (typical?) Java apps. is pretty braindead. The use of a few threads with
> work queues seems (to me) to be a better practice.)


NPTL is in Red Hat Linux 9 and later.

You can try it right now (and probably should) by participating in Red
Hat's current beta of its ES line of software for

xBoxes
pBoxes
iBoxes
and importantly, zBoxes.

For the zBox, your choice: 31-bit or 64-bit.
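
If you want to see which threading library a given image actually has, one
quick check (assuming a glibc recent enough to answer the query) is:

  getconf GNU_LIBPTHREAD_VERSION   # prints something like "NPTL 2.3.2" or "linuxthreads-0.10"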


--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Whither consolidation and what then?

2003-07-29 Thread Alan Cox
On Maw, 2003-07-29 at 20:49, Dale Strickler wrote:
> Does anyone know of anyone doing this sort of research now?  Anyone running
> this or other crash tests like this on Linux (on or off the MVS environment?)
>
> It is simple code to write, just generate two random numbers, treat one as
> an address and one as data and write.  The hard part is doing it on a bunch
> of platforms, under a bunch of conditions and collect the numbers.

crashme is part of the Linux cerberus test suite, although it goes back
many years before that. Roughly speaking, crashme does this:

Catch every exception
Generate random data
Execute it
(catching the exception to repeat)

It's found many things, including a K6 CPU bug. If you try it on your 390
make sure someone has tried it before you  ;)


Alan


Re: Whither consolidation and what then?

2003-07-29 Thread Alan Cox
On Maw, 2003-07-29 at 19:53, Adam Thornton wrote:
> Reading email shouldn't take much CPU, although if you insist on doing
> it inside UltraWhizzy
> K/Gnome/Mozilla/MultiMediaMailReaderNowWithGratuitousAnimation!!! then
> it can find a way, I'm sure, to burn CPU.

Even that is mostly RAM and I/O heavy (in terms of the weak PC disk
subsystems). When I tested this with some CPU speed setting stuff at
about 800MHz Evolution simply stopped getting any faster.

That's actually very much the market pitch of one of the CPU vendors
now (VIA)


Re: Whither consolidation and what then?

2003-07-29 Thread Alan Cox
On Maw, 2003-07-29 at 20:35, Jim Sibley wrote:
> One of the driving factors of either the multiple
> virtual machines or the multiple user model is that,
> in most applications, most of the time, a single user
> is idle and your 300GHz of power is mostly idle.

But in the PC world CPU power is cheap. People have been trading
wasted CPU in vast quantities for convenience. The desktop is a
mind-boggling waste of CPU power (idle time, not pretty pictures),
and a lot of server stuff is too. It's just that PCs are so cheap it's
easier to use several for a job _IFF_ you can solve the management
problem.

My firewall is 99.5% idle but it's not economically interesting to
solve that problem.

> And those 300 small machines would probably only
> access 9 TB of data cut up into 30GB pieces.

I didn't think you could buy disks that small any more 8)
Try 80-240GB, assuming random IDE disks.


Re: Many many processes with LINUX

2003-07-29 Thread Alan Cox
On Maw, 2003-07-29 at 22:16, Fargusson.Alan wrote:
> I think that the reason the threads don't show up in ps on Solaris is that
> 'lightweight' processes are implemented in the library at user level.  The kernel
> does not know about them.  This was the case at one time anyway.

Modern Solaris certainly does LWPs in kernel space.


Re: Whither consolidation and what then?

2003-07-29 Thread Tom Duerbusch
Quite right.  I would think that once you have a reliable production
application running, you would just leave it alone.  When you get the
next release of that application, you would put it on a current level of
Linux.  And then kill off the old application and old level of Linux.

That is easy enough to do when you only have one application per
image.

Back to the original question of whether it is better to have one application
per image or multiple applications per image: so far, I haven't seen
much of a response from the "multiple applications per image" camp.

I might have to consolidate some just from the real memory
requirements.  But I can see the MVS types having a view of one large
image with multiple applications.  Also, LPAR types are limited in the
number of images.

Tom Duerbusch
THD Consulting


"When you do full distro upgrades, you upgrade everything and go
through
your QA routine.


Even then I can imagine that over time, the number of "standard
configurations" will proliferate.

I discovered recently that people are still using Red Hat Linux 6.2.
Their applications all work on it. It's a good stable release of RHL 6.2
and it works on their hardware.

Unfortunately, Red Hat's not shipping fixes for it any more, and it is
in need of fixes. Still, if it's properly shielded it's probably no
worse than MSWare.

Come to think of it, I installed RHL 6.2 myself a week or so ago.

I'm sure people here will be in that position wrt SLES7 or RHL 7.x in
time: the cost of disrupting it will be too much to contemplate."



--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Whither consolidation and what then?

2003-07-29 Thread Michael Martin
I've seen this behaviour, too. I once tried to move a large number of
mp3 files from one physical drive to another with rsync, and the machine
locked up, destroyed the reiserfs file systems on both drives, and I
lost a bunch of files. That's the only time I've had a near catastrophic
failure in ten years of running linux.


On Tue, 2003-07-29 at 13:02, John Summerfield wrote:
> On Tue, 29 Jul 2003, Tom Duerbusch wrote:
>
> > My take on multiple images is two fold.
> >
> > But first, the disclaimer:
> > This assumes you have sufficient resources in the first place to do
> > this (normally real memory).
> >
> > 1.  I don't know this to be true with Linux, but the Unix types have
> > always been leery of having multiple applications running on the same
> > box.  First, they say that they can't guarantee performance, then they
> > start talking about an application corrupting the memory of another
> > application.  So, one application per box if you want reliability.  I
> > haven't had the experience of memory problems in Linux, yet, so I still
> > tend to believe this.
>
> Linux doesn't handle memory very well. My Athlon has 512 Mbytes of RAM,
> and most of the time it works really well. However, I sometimes copy
> large - 600 Mbytes or more - files, either from disk to disk or across
> the LAN. When that happens, RAM gets filled with this data and
> performance is really bad for a while, even when the file processing is
> over.
>
>
>
> >
> > 2.  Once an application is running and running well, it should
> > continue to run correctly until something external happens, like putting
> > on maintenance.  So, why put on maintenance, other than security
> > patches?  A new application may need a different gcc library or such.
> > The original application, if not fully tested with the new changes, may
> > fail in production.
>
> Third-party software aside, this tends not to happen with Linux. At
> least with commercial distros, people are paid to fix things without
> causing such problems.
>
> When you do full distro upgrades, you upgrade everything and go through
> your QA routine.
>
> As we've seen in the past few hours, there can be problems with
> third-party software requiring specific versions of, potentially old,
> libraries. Mostly, there are compatibility libraries included to allow
> you to do this.
>
> If you go the "I'll just create this little symlink and see if it
> works," then you really are on your own. If it breaks guess who did it?
> The good news is the pieces are all yours.
>
> Sometimes it's happened in RHL that you needed compatibility libraries
> from a prior release.
>
>
> If vendor certifications are important to you, then life becomes more
> difficult.
>
>
> >
> > At least VM makes it a whole lot easier to define, maintain and control
> > multiple machines.
>
> Even then I can imagine that over time, the number of "standard
> configurations" will proliferate.
>
> I discovered recently that people are still using Red Hat Linux 6.2.
> Their applications all work on it. It's a good stable release of RHL 6.2
> and it works on their hardware.
>
> Unfortunately, Red Hat's not shipping fixes for it any more, and it is
> in need of fixes. Still, if it's properly shielded it's probably no
> worse than MSWare.
>
> Come to think of it, I installed RHL 6.2 myself a week or so ago.
>
> I'm sure people here will be in that position wrt SLES7 or RHL 7.x in
> time: the cost of disrupting it will be too much to contemplate.
>
>
>
> --
>
>
> Cheers
> John.
>
> Join the "Linux Support by Small Businesses" list at
> http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
> Copyright John Summerfield. Reproduction prohibited.
--
-
Michael Martin
[EMAIL PROTECTED]
(713) 918-2631



Re: Many many processes with LINUX

2003-07-29 Thread Fargusson.Alan
I think that the reason the threads don't show up in ps on Solaris is that
'lightweight' processes are implemented in the library at user level.  The kernel does
not know about them.  This was the case at one time anyway.

The disadvantage of this is that if any thread goes compute-bound for a long time, no
other thread can run.  I suspect that this is what causes some of the performance 
problems I hear about with Apache under Solaris.

I have a book called "Inside Linux", or something like that.  It has a section on 
process scheduling.  If you want I can give you the ISBN.  The book is at home, so I 
can't see it right now.

-Original Message-
From: Ann Smith [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 1:37 PM
To: [EMAIL PROTECTED]
Subject: Many many processes with LINUX


We recently moved a java app and some MQ clients and servers to
linux/390 for testing. Folks here are used to Solaris and are confused
by the number of processes that show up when you issue 'ps -ef'. Many
more than they are used to. If you ask Jeeves, there is info on the
threading model linux uses. Linux apparently doesn't have 'lightweight'
processes. Every thread is a process? I looked for an option on 'ps' to
try to make the display look more like Solaris but didn't see one. Does
anyone know of a good source of documentation on linux threading and
processes?


Re: Whither consolidation and what then?

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Tom Duerbusch wrote:

> My take on multiple images is two fold.
>
> But first, the disclaimer:
> This assumes you have sufficient resources in the first place to do
> this (normally real memory).
>
> 1.  I don't know this to be true with Linux, but the Unix types have
> always been leery of having multiple applications running on the same
> box.  First, they say that they can't guarantee performance, then they
> start talking about an application corrupting the memory of another
> application.  So, one application per box if you want reliability.  I
> haven't had the experience of memory problems in Linux, yet, so I still
> tend to believe this.

Linux doesn't handle memory very well. My Athlon has 512 Mbytes of RAM,
and most of the time it works really well. However, I sometimes copy
large - 600 Mbytes or more - files, either from disk to disk or across
the LAN. When that happens, RAM gets filled with this data and
performance is really bad for a while, even when the file processing is
over.



>
> 2.  Once an application is running and running well, it should
> continue to run correctly until something external happens, like putting
> on maintenance.  So, why put on maintenance, other than security
> patches?  A new application may need a different gcc library or such.
> The original application, if not fully tested with the new changes, may
> fail in production.

Third-party software aside, this tends not to happen with Linux. At
least with commercial distros, people are paid to fix things without
causing such problems.

When you do full distro upgrades, you upgrade everything and go through
your QA routine.

As we've seen in the past few hours, there can be problems with
third-party software requiring specific versions of, potentially old,
libraries. Mostly, there are compatibility libraries included to allow
you to do this.

If you go the "I'll just create this little symlink and see if it
works," then you really are on your own. If it breaks guess who did it?
The good news is the pieces are all yours.

Sometimes it's happened in RHL that you needed compatibility libraries
from a prior release.


If vendor certifications are important to you, then life becomes more
difficult.


>
> At least VM makes it a whole lot easier to define, maintain and control
> multiple machines.

Even then I can imagine that over time, the number of "standard
configurations" will proliferate.

I discovered recently that people are still using Red Hat Linux 6.2.
> Their applications all work on it. It's a good stable release of RHL 6.2
and it works on their hardware.

Unfortunately, Red Hat's not shipping fixes for it any more, and it is
> in need of fixes. Still, if it's properly shielded it's probably no
> worse than MSWare.

Come to think of it, I installed RHL 6.2 myself a week or so ago.

I'm sure people here will be in that position wrt SLES7 or RHL 7.x in
time: the cost of disrupting it will be too much to contemplate.



--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: Many many processes with LINUX

2003-07-29 Thread Ferguson, Neale
New threads and processes are both created via a call to clone(). The former
uses flags that tell clone not to duplicate everything (like virtual
memory). The new thread or process gets a unique PID. In 2.6 there'll be
something called a thread group ID and a new threading model known as NPTL.
All threads created by a process will belong to such a group. The display of
processes can then be restricted to far fewer beasts. (Mind you
the practice of creating hundreds or even thousands of threads by many
(typical?) Java apps. is pretty braindead. The use of a few threads with
work queues seems (to me) to be a better practice.)
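
A rough way to see that grouping from userland on a 2.6-style kernel, as a
sketch only ("java" here just stands in for whatever application you are
looking at):

  pid=$(pidof java | awk '{ print $1 }')   # first matching PID
  ls /proc/$pid/task | wc -l               # number of threads in that thread group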

-Original Message-
On Tue, 2003-07-29 at 15:37, Ann Smith wrote:
> We recently moved a java app and some MQ clients and servers to
> linux/390 for testing. Folks here are used to Solaris and are confused
> by the number of processes that show up when you issue 'ps -ef'. Many
> more than they are used to. If you ask Jeeves, there is info on the
> threading model linux uses. Linux apparently doesn't have 'lightweight'
> processes. Every thread is a process? I looked for an option on 'ps' to
> try to make the display look more like Solaris but didn't see one. Does
> anyone know of a good source of documentation on linux threading and
> processes?

ps will definitely show each thread.  Threads are lighter weight than
normal processes but still shown as processes by ps; there's some reason
in the kernel that this is the case.  I don't remember enough about this
to be more helpful, though, and I don't know if there's a way to make ps
be more selective.


Re: Many many processes with LINUX

2003-07-29 Thread Adam Thornton
On Tue, 2003-07-29 at 15:37, Ann Smith wrote:
> We recently moved a java app and some MQ clients and servers to
> linux/390 for testing. Folks here are used to Solaris and are confused
> by the number of processes that show up when you issue 'ps -ef'. Many
> more than they are used to. If you ask Jeeves, there is info on the
> threading model linux uses. Linux apparently doesn't have 'lightweight'
> processes. Every thread is a process? I looked for an option on 'ps' to
> try to make the display look more like Solaris but didn't see one. Does
> anyone know of a good source of documentation on linux threading and
> processes?

ps will definitely show each thread.  Threads are lighter weight than
normal processes but still shown as processes by ps; there's some reason
in the kernel that this is the case.  I don't remember enough about this
to be more helpful, though, and I don't know if there's a way to make ps
be more selective.
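
If the goal is just a less noisy overview on a 2.4/LinuxThreads system, a
crude per-command tally is about the best you can do; a sketch (each Java
thread still counts as one entry):

  ps -ef | awk 'NR > 1 { print $8 }' | sort | uniq -c | sort -rn | head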

Adam


Many many processes with LINUX

2003-07-29 Thread Ann Smith
We recently moved a java app and some MQ clients and servers to
linux/390 for testing. Folks here are used to Solaris and are confused
by the number of processes that show up when you issue 'ps -ef'. Many
more than they are used to. If you ask Jeeves, there is info on the
threading model linux uses. Linux apparently doesn't have 'lightweight'
processes. Every thread is a process? I looked for an option on 'ps' to
try to make the display look more like Solaris but didn't see one. Does
anyone know of a good source of documentation on linux threading and
processes?


Re: SCO not playing by Aussie Rules

2003-07-29 Thread Gregg C Levine
Hello from Gregg C Levine
Phil, everything you've been saying about those characters at SCO is
exactly appropriate. The reason your firm wasn't interviewed is
that they may not know of it. Besides, SCO wants positive data that
supports their unsupportable position, not a statement that'll
support, say, IBM. 
---
Gregg C Levine [EMAIL PROTECTED]

"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )



> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf
Of
> Phil Payne
> Sent: Tuesday, July 29, 2003 9:39 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [LINUX-390] SCO not playing by Aussie Rules
> 
> > Go Aussies!
> >
> > http://www.theregister.com/content/61/31910.html
> 
> http://www.theinquirer.net/?article=10743
> 
> "Certainly, SCO has succeeded in making lots of very smart people
extremely
> angry. This isn't
> a great strategy in almost any situation."
> 
> "But aside from a few shills for proprietary software at the vendor
supported
> publications and
> IT industry analyst firms, most reactions and responses are inimical
to SCO."
> 
> IT industry analysts? Which, one wonders?  Not this one.
> 
> --
>   Phil Payne
>   http://www.isham-research.com
>   +44 7785 302 803
>   +49 173 6242039


Re: Whither consolidation and what then?

2003-07-29 Thread Dale Strickler
As a sidelight to this topic, I remember an article I read in the late
'80s or early '90s where someone wrote some 'randomly poke storage'
programs.  Then they started them running under different platforms.  As I
remember it, some mainframe environment (I forget which), Win NT
3.??, OS/2, Win 3.1, Apple, SCO (Sun?) Unix and some others were
included.  It's been too long for me to remember the numbers but I do
remember being surprised that NT had fatal crashes around five orders of
magnitude less often than OS/2, Win 3.1 or Apple.  (OS/2, Win 3.1 and Apple
were about equal, I think OS/2 was a touch better.)
Using NT workstation since version 2, I experienced about the same
reliability as with the SCO Unix workstation of that same era.  Then when
NT 4.0 put the Win 95 front end on NT, the SCO workstation became far more
reliable.
Does anyone know of anyone doing this sort of research now?  Anyone running
this or other crash tests like this on Linux (on or off the MVS environment?)
It is simple code to write: just generate two random numbers, treat one as
an address and one as data, and write.  The hard part is doing it on a bunch
of platforms, under a bunch of conditions, and collecting the numbers.
-Dale

At 02:10 PM 2003_07_29, you wrote:
At one time I did a lot of work with Unix, and I never had any problems with
multiple processes corrupting the memory of other processes.  Have there
been some bugs introduced into Unix recently?  I have not been working with
Unix for a couple of years, unless you count z/OS USS.
On the other hand, I have done some work with Windows over the years, and I
would never try to put multiple applications on a Windows box.  It is hard
enough to keep one application running on one Windows box.
-Original Message-
From: Tom Duerbusch [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 10:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Whither consolidation and what then?
My take on multiple images is two fold.

But first, the disclaimer:
This assumes you have sufficient resources in the first place to do
this (normally real memory).
1.  I don't know this to be true with Linux, but the Unix types have
always been leery of having multiple applications running on the same
box.  First, they say that they can't guarantee performance, then they
start talking about an application corrupting the memory of another
application.  So, one application per box if you want reliability.  I
haven't had the experience of memory problems in Linux, yet, so I still
tend to believe this.
2.  Once an application is running and running well, it should
continue to run correctly until something external happens, like putting
on maintenance.  So, why put on maintenance, other than security
patches?  A new application may need a different gcc library or such.
The original application, if not fully tested with the new changes, may
fail in production.
At least VM makes it a whole lot easier to define, maintain and control
multiple machines.
Tom Duerbusch
THD Consulting
>>> [EMAIL PROTECTED] 07/29/03 11:33AM >>>
Philosophical question?
The heart of the matter lies in why so many images in the first place?
If I need a half dozen images of Linux to service the Web, but those
Linux images can all be running under VM, what is different between
Linux and VM that lets VM handle the concurrent workload better than
Linux can?
It is a variation of the old argument as to which is better, VM and
several VSE guests or one MVS instance.
Dale Strickler
Cole Software, LLC
Voice: 540-456-8896
Fax: 540-456-6658
Web: http://www.colesoft.com/


Re: SuSE SLES8 64 bits errors

2003-07-29 Thread Post, Mark K
Herve,

You're right, everything does look OK.  Perhaps bringing z/VM up to a more
current maintenance level will help?


Mark Post

-Original Message-
From: Herve Bonvin [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 12:44 AM
To: [EMAIL PROTECTED]
Subject: AW: SuSE SLES8 64 bits errors


Mark,

I double checked everything and it seems OK.

It is a Machine ESA,

Q CPLEVEL gives the following result :
z/VM Version 4 Release 3.0, service level 0201 (64-bit)
Generated at 05/09/02 17:30:26 CDT
IPL at 05/18/03 12:45:22 CDT

and uname -m : s390x

Herve

-Ursprüngliche Nachricht-
Von: Post, Mark K [mailto:[EMAIL PROTECTED]
Gesendet am: lundi, 28. juillet 2003 18:22
An: [EMAIL PROTECTED]
Betreff: Re: SuSE SLES8 64 bits errors

Herve,

I forwarded your question on to our z/VM support team, and got this answer:
1. Make sure the CP directory entry for the guest says "MACHINE ESA" in it.
2. Make sure your z/VM is running in 64-bit mode: "Q CPLEVEL"
3. Make sure you have the 64-bit Linux/390 code: "uname -m" and look for
s390x instead of s390.
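
From inside the guest itself you can cross-check the last two items and the
storage the kernel actually sees; a sketch (the exact MemTotal will vary):

  uname -m                      # s390x means a 64-bit kernel, s390 means 31-bit
  grep MemTotal /proc/meminfo   # should roughly match the storage defined for the guest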


Mark Post

-Original Message-
From: Herve Bonvin [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2003 2:22 AM
To: [EMAIL PROTECTED]
Subject: RE : RE : SuSE SLES8 64 bits errors


Mark,

Yes, we are running it as a z/VM guest. How can I be sure that this guest is
defined to run in 64-bit mode?

Hervé Bonvin

-Original Message-
From: Post, Mark K [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 25, 2003 10:59 PM
To: [EMAIL PROTECTED]
Subject: Re: RE : SuSE SLES8 64 bits errors


Herve,

Are you running this system as a z/VM guest?  If so, are you _sure_ you have
it defined to run in 64-bit mode?  I find it very strange that the system
will come up and run if you have 2GB of storage, but not if you have more.
That sounds a lot like some fields that should be 64-bit are being
interpreted as 31-bit instead.


Mark Post

-Original Message-
From: Herve Bonvin [mailto:[EMAIL PROTECTED]
Sent: Friday, July 25, 2003 1:47 AM
To: [EMAIL PROTECTED]
Subject: RE : SuSE SLES8 64 bits errors


here what I get on the console :



/dev/vg1/lv3 on /var type reiserfs (rw)
reiserfs: found format "3.6" with standard journal
reiserfs: checking transaction log (lvm(58,3)) for (lvm(58,3))
reiserfs: using ordered data mode
Using r5 hash to sort names
/dev/vg1/lv4 on /home type reiserfs (rw)
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
/dev/vg1/lv6 on /tmp type ext2 (rw) ..doneCreating /var/log/boot.msg
..done/etc/init.d/boot.d/S06boot.klog: line 48: /bin/dmesg: cannot execute
binary file
is_tree_node: node level 0 does not match to the expected one 1
vs-5150: search_by_key: invalid format found in block 11845. Fsck?
vs-13070: reiserfs_read_inode2: i/o failure occurred trying to find stat
data of [5086 5104 0x0 SD] Restore device permissions..done Activating
remaining swap-devices in /etc/fstab... ..doneMounting shared memory FS on
/dev/shm..done Setting up the system clock..done
is_tree_node: node level 0 does not match to the expected one 1
vs-5150: search_by_key: invalid format found in block 11806. Fsck?
vs-13070: reiserfs_read_inode2: i/o failure occurred trying to find stat
data of [5086 3797 0x0 SD] Setting up timezone data..done Setting up
hostname 'sbe12099'..done Setting up NIS domainname 'swissptt.ch'..done
Setting up loopback interface modprobe: modprobe: Can't locate module
binfmt-0100
modprobe: modprobe: Can't locate module binfmt-0100
/sbin/ifup: line 1: /bin/awk: cannot execute binary file
/sbin/ifup: line 577:   141 Doneecho $SCRIPT
   142 Illegal instruction | grep -q
'\(\.rpm\(save\|new\)$\)\|\(.~$\)'
 
..done
/etc/init.d/boot.d/S11boot.localnet: line 113:   148 Segmentation fault
chown root.tty /var/run/utmp
/etc/init.d/boot.d/S11boot.localnet: line 113:   151 Illegal instruction
/bin/ps >/dev/null 2>/dev/null
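
Since the failures above are all "cannot execute binary file", segfaults and
illegal instructions, one quick cross-check (run from a rescue system or a
known-good instance; the paths are just examples) is whether those binaries
are intact and of the expected ELF class:

  uname -m                          # s390 = 31-bit kernel, s390x = 64-bit
  file /bin/awk /bin/dmesg /bin/ps  # should all report a sane ELF executable of the same class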



-Original Message-
From: Jim Sibley [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 24, 2003 5:29 PM
To: [EMAIL PROTECTED]
Subject: SuSE SLES8 64 bits errors


"Has anyone been able to run SLES8 64 bits with more than 2GB of storage ?
It works for me when the storage is 2GB or less but with more, it crashes
during the boot."

We're running 3 instances of 64 bit SLES8 (SP2 level) with 3-5 GB of memory.
We were also running an instance of SLES8. We have encountered no problems.
What are your symptoms?

Regards, Jim
Linux S/390-zSeries Support, SEEL, IBM Silicon Valley Labs
t/l 543-4021, 408-463-4021, [EMAIL PROTECTED]
*** Grace Happens ***


Re: Whither consolidation and what then?

2003-07-29 Thread Jim Sibley
Alan wrote:

"You can run 100 sessions on a 390 but I don't think
you get the equivalent of 300GHz of CPU power."

With the new TREXX, you're probably talking 20-30GHz,
assuming 1.2 GHz engines x 32.

One of the driving factors of either the multiple
virtual machines or the multiple user model is that,
in most applications, most of the time, a single user
is idle and your 300GHz of power is mostly idle.

And a lot of time in most apps is waiting on I/O.
(Just listen to your PC disk click away when you're
starting an app, saving data, or moving from app to
app on the desktop).

With the multi-user model, when the user really does
wake up, he has access to multiple gigahertz
processors.

And those 300 small machines would probably only
access 9 TB of data cut up into 30GB pieces.


=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Whither consolidation and what then?

2003-07-29 Thread Ward, Garry
Which gets into the client and server question.

The server should be grinding data, not generating graphics. Graphics
are presentation and should be the responsibility of the workstation
(client). Digesting the data that is the basis of the graphics is the
server's business, which is going to require more I/O handling capacity,
which a mainframe is capable of. 

100 virtual machines to manage and grind the data that feeds 100 real
workstations to draw pretty pictures from the data.

-Original Message-
From: Alan Cox [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 2:30 PM
To: [EMAIL PROTECTED]
Subject: Re: Whither consolidation and what then?


On Maw, 2003-07-29 at 19:10, Fargusson.Alan wrote:
> At one time I did a lot of work with Unix, and I never had any problems with
> multiple processes corrupting the memory of other processes.  Have there
> been some bugs introduced into Unix recently?

Not that I've noticed. Multiuser has gone out of fashion

 19:31:25  up 10 days, 21:32, 53 users,  load average: 0.10, 0.07, 0.05

but it still works (reboot from upgrading the kernel)


Spreading load across a lot of PC's gets you colossal amounts of CPU
power but at a management cost. The big trick is now solving that
management problem - replicated system filestores, capacity management,
session dump/restore etc.

You can run 100 sessions on a 390 but I don't think you get the
equivalent of 300GHz of CPU power.


Confidentiality Warning:  This e-mail contains information intended only for the use 
of the individual or entity named above.  If the reader of this e-mail is not the 
intended recipient or the employee or agent responsible for delivering it to the 
intended recipient, any dissemination, publication or copying of this e-mail is 
strictly prohibited. The sender does not accept any responsibility for any loss, 
disruption or damage to your data or computer system that may occur while using data 
contained in, or transmitted with, this e-mail.   If you have received this e-mail in 
error, please immediately notify us by return e-mail.  Thank you.


Re: Whither consolidation and what then?

2003-07-29 Thread Adam Thornton
On Tue, 2003-07-29 at 13:30, Alan Cox wrote:
> You can run 100 sessions on a 390 but I don't think you get the
> equivalent of 300GHz of CPU power.

Of course you don't.  But you might well get enough CPU to keep your
users happy, depending on what they're doing.

Also of course, the dirty little secret of the PC world is that you have
a vast overabundance of CPU for most "real computing" tasks.
Serving web pages or delivering email doesn't take much CPU.

Reading email shouldn't take much CPU, although if you insist on doing
it inside UltraWhizzy
K/Gnome/Mozilla/MultiMediaMailReaderNowWithGratuitousAnimation!!! then
it can find a way, I'm sure, to burn CPU.  Still, interactive users
spend most of their time just sitting around, as you show:

> 53 users,  load average: 0.10, 0.07, 0.05

Playing Unreal Tournament, now, *that* takes quite a bit of CPU.

If you're trying to consolidate a CPU-heavy workload, S/390 is
definitely not the platform for you.

Adam


Re: Whither consolidation and what then?

2003-07-29 Thread Alan Cox
On Maw, 2003-07-29 at 19:10, Fargusson.Alan wrote:
> At one time I did a lot of work with Unix, and I never had any problems with
> multiple processes corrupting the memory of other processes.  Have there
> been some bugs introduced into Unix recently?

Not that I've noticed. Multiuser has gone out of fashion

 19:31:25  up 10 days, 21:32, 53 users,  load average: 0.10, 0.07, 0.05

but it still works (reboot from upgrading the kernel)


Spreading load across a lot of PC's gets you colossal amounts of CPU
power but at a management cost. The big trick is now solving that
management problem - replicated system filestores, capacity management,
session dump/restore etc.

You can run 100 sessions on a 390 but I don't think you get the
equivalent of 300GHz of CPU power.


Re: Who will be at linuxworld in SF next week?

2003-07-29 Thread Rich Smrcina
I'll be working various booths.

On Tuesday 29 July 2003 01:22 pm, you wrote:
> I'll be at the IBM booth helping answer zSeries
> questions next week (Tuesday and Wednesday).
>
> Who all will be there?
>
> =
> Jim Sibley
> Implementor of Linux on zSeries in the beautiful Silicon Valley
>
> "Computer are useless.They can only give answers." Pablo Picasso
>
> __
> Do you Yahoo!?
> Yahoo! SiteBuilder - Free, easy-to-use web site design software
> http://sitebuilder.yahoo.com

--
Rich Smrcina
Sr. Systems Engineer
Sytek Services, A Division of DSG
Milwaukee, WI
rsmrcina at wi.rr.com
rsmrcina at dsgroup.com

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2004 in Chattanooga, TN
April 30-May 4, 2004
For details see http://www.wavv.org


Who will be at linuxworld in SF next week?

2003-07-29 Thread Jim Sibley
I'll be at the IBM booth helping answer zSeries
questions next week (Tuesday and Wednesday).

Who all will be there?

=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Whither consolidation and what then?

2003-07-29 Thread Richard Troth
> What happens then? You still have dozens of copies of
> Linux running in dozens of EC machines. And they're
> talking to each other via TCP/IP stacks over a number
> of high speed connections. Have you really advanced
> the architecture and capabilities of Linux?

Yes,  this is a fabulous question
and we as aficionados of Linux-on-zSeries and of VM
should not be afraid of it.

> So my question is: What moves are afoot to reduce the
> number of required images by consolidating their
> functions and remove the TCP/IP communications between
> applications?

I know that there are  "moves afoot"  from several sectors.
My own employer has some offerings out there already.
Then there are home-grown strategies which leverage zSeries and/or VM
which are sometimes obvious  (and sadly sometimes NOT obvious).

Communication among Linux-on-VM instances can be through TCP/IP.
But don't forget that even there the guest LAN has advantages
over wired interconnect.   ALSO don't forget that TCP/IP is
not the only means of communication and resource sharing.

-- R;


Re: Whither consolidation and what then?

2003-07-29 Thread Fargusson.Alan
At one time I did a lot of work with Unix, and I never had any problems with
multiple processes corrupting the memory of other processes.  Have there
been some bugs introduced into Unix recently?  I have not been working with
Unix for a couple of years, unless you count z/OS USS.

On the other hand, I have done some work with Windows over the years, and I
would never try to put multiple applications on a Windows box.  It is hard
enough to keep one application running on one Windows box.

-Original Message-
From: Tom Duerbusch [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 10:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Whither consolidation and what then?


My take on multiple images is two fold.

But first, the disclaimer:
This assumes you have sufficient resources in the first place to do
this (normally real memory).

1.  I don't know this to be true with Linux, but the Unix types have
always been leery of having multiple applications running on the same
box.  First, they say that they can't guarantee performance, then they
start talking about an application corrupting the memory of another
application.  So, one application per box if you want reliability.  I
haven't had the experience of memory problems in Linux, yet, so I still
tend to believe this.

2.  Once an application is running and running well, it should
continue to run correctly until something external happens, like putting
on maintenance.  So, why put on maintenance, other than security
patches?  A new application may need a different gcc library or such.
The original application, if not fully tested with the new changes, may
fail in production.

At least VM makes it a whole lot easier to define, maintain and control
multiple machines.

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 07/29/03 11:33AM >>>
Philosophical question?

The heart of the matter lies in why so many images in the first place?
If I need a half dozen images of Linux to service the Web, but those
Linux images can all be running under VM, what is different between
Linux and VM that lets VM handle the concurrent workload better than
Linux can?

It is a variation of the old argument as to which is better, VM and
several VSE guests or one MVS instance.


Re: Whither consolidation and what then?

2003-07-29 Thread Tom Duerbusch
My take on multiple images is two fold.

But first, the disclaimer:
This assumes you have sufficient resources in the first place to do
this (normally real memory).

1.  I don't know this to be true with Linux, but the Unix types have
always been leery of having multiple applications running on the same
box.  First, they say that they can't guarantee performance, then they
start talking about an application corrupting the memory of another
application.  So, one application per box if you want reliability.  I
haven't had the experience of memory problems in Linux, yet, so I still
tend to believe this.

2.  Once an application is running and running well, it should
continue to run correctly until something external happens, like putting
on maintenance.  So, why put on maintenance, other than security
patches?  A new application may need a different gcc library or such.
The original application, if not fully tested with the new changes, may
fail in production.

At least VM makes it a whole lot easier to define, maintain and control
multiple machines.

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 07/29/03 11:33AM >>>
Philosophical question?

The heart of the matter lies in why so many images in the first place?
If I need a half dozen images of Linux to service the Web, but those
Linux images can all be running under VM, what is different between
Linux and VM that lets VM handle the concurrent workload better than
Linux can?

It is a variation of the old argument as to which is better, VM and
several VSE guests or one MVS instance.


Re: Whither consolidation and what then?

2003-07-29 Thread McKown, John
> -Original Message-
> From: Ward, Garry [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 29, 2003 11:34 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Whither consolidation and what then?
>
>
> Philosophical question?
>
> The heart of the matter lies in why so many images in the first place?
> If I need a half dozen images of Linux to service the Web, but those
> Linux images can all be running under VM, what is different between
> Linux and VM that lets VM handle the concurrent workload better than
> Linux can?
>
> It is a variation of the old argument as to which is better, VM and
> several VSE guests or one MVS instance.
>

The answer to this question may be:

Which can generate more aggregate throughput with acceptable response time?

1) A number of "single use" Linux images running under zVM

or

2) A single Linux image.

I would guess this would depend on the relative efficiencies of the two
schedulers. For example, what happens if an application running on Linux
(say a CGI) "goes crazy" and just eats up CPU? In a "single Linux image"
situation, will Linux take automatic action to prevent this process from
starving all the other processes? In a zVM scenario, with multiple "single
use" Linux images, will zVM better manage the CPU? I truly don't know.


--
John McKown
Senior Systems Programmer
UICI Insurance Center
Applications & Solutions Team
+1.817.255.3225

This message (including any attachments) contains confidential information
intended for a specific individual and purpose, and its content is
protected by law.  If you are not the intended recipient, you should delete
this message and are hereby notified that any disclosure, copying, or
distribution of this transmission, or taking any action based on it, is
strictly prohibited.


Re: Stripping trailing blanks?

2003-07-29 Thread Ferguson, Neale
You mean something like:

PIPE ftp  | strip | xlate from 437 to 1047 | spec w5.3 1 | sort | >
postproc file a

Yep.

No harping match. You use the tool(s) that work and that you are comfortable
with. Sometimes you don't know what a tool is capable of.

-Original Message-
But, can you:

ftp ... | 


Re: Whither consolidation and what then?

2003-07-29 Thread Ward, Garry
Philosophical question?

The heart of the matter lies in why so many images in the first place?
If I need a half dozen images of Linux to service the Web, but those
Linux images can all be running under VM, what is different between
Linux and VM that lets VM handle the concurrent workload better than
Linux can?

It is a variation of the old argument as to which is better, VM and
several VSE guests or one MVS instance.

-Original Message-
From: Jim Sibley [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 11:55 AM
To: [EMAIL PROTECTED]
Subject: Whither consolidation and what then?


I had a look at the ebay prototype and it was, well,
less than moving. What they have is a fibre cable
going into a switch, then dozens of cables going to
dozens of web servers in Intel boxes in racks, then
dozens of cables going to a switch to a single fibre
to a database server.

So, with web server consolidation, these dozens of
servers get put under VM and the dozens x 2 cables and
a switch get replaced by OSA cards and/or
hipersockets. Voila, less floor space, less power,
less manpower to maintain, less cost of total
ownership and maintenance. All true, no question in my
mind.

What happens then? You still have dozens of copies of
Linux running in dozens of EC machines. And they're
talking to each other via TCP/IP stacks over a number
of high speed connections. Have you really advanced
the architecture and capabilities of Linux?

So my question is: What moves are afoot to reduce the
number of required images by consolidating their
functions and remove the TCP/IP communications between
applications?

Isn't this the next logical step?

(On the backend, database side, one or a few large DB
servers seem to be able to handle the actual DB workload).

=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Confidentiality Warning:  This e-mail contains information intended only for the use 
of the individual or entity named above.  If the reader of this e-mail is not the 
intended recipient or the employee or agent responsible for delivering it to the 
intended recipient, any dissemination, publication or copying of this e-mail is 
strictly prohibited. The sender does not accept any responsibility for any loss, 
disruption or damage to your data or computer system that may occur while using data 
contained in, or transmitted with, this e-mail.   If you have received this e-mail in 
error, please immediately notify us by return e-mail.  Thank you.


Re: Stripping trailing blanks?

2003-07-29 Thread Lucius, Leland
>
> pipe < name type | ftp
> ftp://userid:[EMAIL PROTECTED]/place_to_put_it
> (If I put it as the 1st stage I can FTP to VM.)
>
But, can you:

ftp ... | 

(Personally, I'd rather this didn't turn into a harping match on the
benefits of either piping method.)

Leland


Whither consolidation and what then?

2003-07-29 Thread Jim Sibley
I had a look at the ebay prototype and it was, well,
less than moving. What they have is a fibre cable
going into a switch, then dozens of cables going to
dozens of web servers in Intel boxes in racks, then
dozens of cables going to a switch to a single fibre
to a database server.

So, with web server consolidation, these dozens of
servers get put under VM and the dozens x 2 cables and
a switch get replaced by OSA cards and/or
hipersockets. Voila, less floor space, less power,
less manpower to maintain, less cost of total
ownership and maintenance. All true, no question in my
mind.

What happens then? You still have dozens of copies of
Linux running in dozens of EC machines. And they're
talking to each other via TCP/IP stacks over a number
of high speed connections. Have you really advanced
the architecture and capabilities of Linux?

So my question is: What moves are afoot to reduce the
number of required images by consolidating their
functions and remove the TCP/IP communications between
applications?

Isn't this the next logical step?

(On the backend, database side, one or a few large DB
servers seem to be able to handle the actual DB workload).

=
Jim Sibley
Implementor of Linux on zSeries in the beautiful Silicon Valley

"Computer are useless.They can only give answers." Pablo Picasso

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Stripping trailing blanks?

2003-07-29 Thread Ferguson, Neale
pipe < name type | ftp ftp://userid:[EMAIL PROTECTED]/place_to_put_it (If I
put it as the 1st stage I can FTP to VM.)

PIPE stages use exactly the same philosophy as most UNIX commands. Do one
thing and do it well. Then put all these little stages together to make it
do interesting stuff. Unlike UNIX PIPEs you can have multiple pipelines
running concurrently each connecting back into the main pipe or diverging
from it. You can have primary, secondary, tertiary (etc etc) input and
output stages.

Being able to have user-written stages (just like awk scripts) that you can write
in REXX (primarily) or any other language allows you to extend it enormously
(just like the FTP stage above).
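
A very rough Unix approximation of even one extra stream needs a named pipe;
a sketch only ("producer" is a placeholder for whatever generates the data,
and the streams diverge rather than rejoining as a CMS pipeline can):

  mkfifo /tmp/side.$$
  grep ERROR < /tmp/side.$$ > errors.txt &        # side pipeline
  producer | tee /tmp/side.$$ | sort > main.txt   # main pipeline
  rm /tmp/side.$$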

-Original Message-
This isn't the same thing.  Somebody had to write the STRIP command for VM.
And the STRIP command only does that one thing.  The Unix/Linux cut|sed is a
more general facility that took much less programming effort than what you
have to do under VM to get the same facilities.

You didn't show how you would do an FTP transfer under VM.


Re: Stripping trailing blanks?

2003-07-29 Thread Fargusson.Alan
This isn't the same thing.  Somebody had to write the STRIP command for VM.
And the STRIP command only does that one thing.  The Unix/Linux cut|sed is a
more general facility that took much less programming effort than what you
have to do under VM to get the same facilities.

You didn't show how you would do an FTP transfer under VM.
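
For the record, the Unix form of the strip operation being discussed is short
enough; the file names here are placeholders, and it is the same idea as the
$PRE setting quoted elsewhere in this thread:

  cut -b 1-72 infile | sed -e 's/ *$//' > outfile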

-Original Message-
From: Coffin Michael C [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 8:17 AM
To: [EMAIL PROTECTED]
Subject: Re: Stripping trailing blanks?


Hmmm,

Looking at all of these "easy to remember" ways to strip trailing blanks
reminds me why I like VM/CMS and PIPES.  So instead of one of the incredibly
convoluted and "unfriendly" commands like this:

ncftpget -W "$GX" -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE)
>$W/$mbr

I can execute a simple command like:

PIPE < MY FILE A | STRIP | > MY NEWFILE A

I've never been a big fan of the "slash-dot" language.  :)


Michael Coffin, VM Systems Programmer
Internal Revenue Service - Room 6527
 Constitution Avenue, N.W.
Washington, D.C.  20224

Voice: (202) 927-4188   FAX:  (202) 622-3123
[EMAIL PROTECTED]



-Original Message-
From: Lucius, Leland [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 11:10 AM
To: [EMAIL PROTECTED]
Subject: Re: Stripping trailing blanks?


> I'm not sure it will work. One gotcha:
>
> # PRE="cut -b 1-72 | sed -e s/\ \*\$//"
>
> If this weren't remmed out, you probably would have had a non-functioning
> script.
>
Hmmm, did you actually try the script?  Did you look further down or did you
just stop right at that line and assume you knew there was a problem?  I
believe you'll find that the "eval" in this line resolves this issue:

ncftpget -W "$GX" -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE)
>$W/$mbr

Thanks for the input.

Leland


[rhelv3-announce-admin@redhat.com: Announcing Red Hat Enterprise Linux 3 (Taroon) Beta 1 Public Availability]

2003-07-29 Thread Florian La Roche
I haven't found any item that is not already covered in the
below announcement. You may test and report any mainframe bugs
you find.

greetings,

Florian La Roche



- Forwarded message from [EMAIL PROTECTED] -

From: [EMAIL PROTECTED]
Subject: Announcing Red Hat Enterprise Linux 3 (Taroon) Beta 1 Public
Availability
To: [EMAIL PROTECTED]
Date: 29 Jul 2003 11:15:06 -0400

Red Hat is pleased to announce the general availability of Red Hat
Enterprise Linux 3 Beta 1 (Taroon).

This is a public beta.  Please feel free to forward this announcement
to anyone within or outside your organization who may be interested
in testing this beta release.

Red Hat Enterprise Linux 3 is the next generation of our comprehensive
suite of Linux operating systems -- designed for mission-critical
enterprise computing and certified by top enterprise software vendors.
More information on the current Red Hat Enterprise Linux 2.1 product
line is available at http://www.redhat.com/software/rhel/.

This announcement includes details on obtaining the beta software,
reporting bugs, and communicating with Red Hat and other testers
via mailing lists during the beta period.

Red Hat Enterprise Linux 3 Beta 1 is available for the following
architectures:
 - x86 (i686/Athlon 32-bit)
 - ia64 (Intel Itanium2 64-bit)
 - x86_64 (AMD64 64-bit)
 - ppc (IBM iSeries and pSeries 64-bit)
 - s390 (IBM S/390 31-bit)
 - s390x (IBM zSeries 64-bit)

Red Hat Enterprise Linux 3 Beta 1 is available in two variants:

 - Red Hat Enterprise Linux AS
   * Designed for server applications, includes the core operating
 system as well as network server packages
   * Available for x86, ia64, x86_64, ppc, s390, s390x

 - Red Hat Enterprise Linux WS
   * Designed for workstation applications, includes the core
 operating system as well as desktop productivity, development,
 communications, and network client packages
   * Available for x86, ia64, x86_64

A third variant, Red Hat Enterprise Linux ES, designed for
mid-range server applications, has an identical package set to
Red Hat Enterprise Linux AS at Beta 1.  General users interested in
the ES product should test the AS Beta 1 release.

Red Hat Enterprise Linux 3 Beta 1 contains a wide range of new
features, including but not limited to the following:

 - Kernel based on 2.4.21 with numerous scalability enhancements:
   * Native Posix Threading Library (NPTL)
   * Thread Local Storage & Futex APIs
   * Per-device locks for block IO
   * Memory management enhancements: RMAP VM & large pages support
   * O(1) scheduler
   * Hyperthreading scheduler
   * Integrated Summit chipset support
   * NFS performance & stability enhancements
   * Large Translation Buffer pages - hugetlbfs
   * Ext3 updates for performance and stability
   * Semtimedop - semaphores with time limitation
   * Fine-grain process accounting (x86 only)
   * ACPI 2.0 (Itanium2 & AMD64 only)
   * Many driver updates and additions

 - 4GB/4GB Kernel/User Memory Split (x86 bigmem kernel only)
   * Support for up to 64GB on x86
   * 4GB of virtual address space for kernel and almost 4GB for each
 user process on x86

 - Development Environment
   * gcc 3.2.3 tool chain
   * gcc "ssa" tool chain included as a technology preview
   * gcj / libgcj  (Java gcc compiler front-end)
   * gdb 5.3.90 - including multi-threaded core dump and gcore
   * glibc 2.3.2
   * Eclipse 2.1 Developer Environment

 - Improved I/O subsystem
   * 64-bit SCSI/Fibre Channel DMA support
   * Up to 256 SCSI devices
   * VaryIO support (permits larger I/O transfers)
   * Serial ATA support - SATA1 (for Intel PIIX/ICH ATA ICH5)
   * Hotplug PCI framework (x86 and ia64 only)
   * Asynchronous I/O on sockets
   * Expanded Asynchronous I/O for disks support

 - Desktop enhancements
   * XFree86 4.3.0
   * Bluecurve (tm) graphical user interface (Unified GNOME/KDE look
 and feel)
   * OpenOffice.org 1.0.2 office productivity suite
   * Ximian Evolution 1.4.3
   * Mozilla 1.4

 - Improved serviceability
   * Logical Volume Manager (LVM1) support
   * Kernel crash dump and analysis enhancements
   * Configurable application core dump paths
   * Code profiling support included in the kernel (OProfile)
   * Support for diskless systems

 - Networking Enhancements
   * Improvements to channel bonding
   * Failover & bandwidth aggregation for servers w/ multiple NICs
   * More complete kernel IPv6 support
   * Kernel IGMP updated from V2 to V3
   * Samba 3.0 (Beta)
   * Apache 2.0 web server
   * TUX web accelerator update

 - Security enhancements
   * Filesystem ACLs
   * General purpose cryptographic API in the Kernel
   * Position Independent Executables
   * Kernel support for ipsec on IPV4

 - Red Hat Cluster Manager enhancement
   * Multinode high availability clustering with new GUI

Current features, packages, and naming are subject to change before
the final release.

The Red Hat Enterprise Linux development team would like to encourage
you to te

Re: Stripping trailing blanks?

2003-07-29 Thread Coffin Michael C
Hmmm,

Looking at all of these "easy to remember" ways to strip trailing blanks
reminds me why I like VM/CMS and PIPES.  So instead of one of the incredibly
convoluted and "unfriendly" commands like this:

ncftpget -W "$GX" -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE)
>$W/$mbr

I can execute a simple command like:

PIPE < MY FILE A | STRIP | > MY NEWFILE A

I've never been a big fan of the "slash-dot" language.  :)


Michael Coffin, VM Systems Programmer
Internal Revenue Service - Room 6527
 Constitution Avenue, N.W.
Washington, D.C.  20224

Voice: (202) 927-4188   FAX:  (202) 622-3123
[EMAIL PROTECTED]



-Original Message-
From: Lucius, Leland [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 29, 2003 11:10 AM
To: [EMAIL PROTECTED]
Subject: Re: Stripping trailing blanks?


> I'm not sure it will work. One gotcha:
>
> # PRE="cut -b 1-72 | sed -e s/\ \*\$//"
>
> If this weren't remmed out, you would probably have had a non-functioning
> script.
>
Hmmm, did you actually try the script?  Did you look further down or did you
just stop right at that line and assume you knew there was a problem?  I
believe you'll find that the "eval" in this line resolves this issue:

ncftpget -W "$GX" -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE)
>$W/$mbr

Thanks for the input.

Leland


Re: Stripping trailing blanks?

2003-07-29 Thread Lucius, Leland
> I'm not sure it will work. One gotcha:
>
> # PRE="cut -b 1-72 | sed -e s/\ \*\$//"
>
> If this weren't remmed out, you would probably have had a
> non-functioning script.
>
Hmmm, did you actually try the script?  Did you look further down or did you
just stop right at that line and assume you knew there was a problem?  I
believe you'll find that the "eval" in this line resolves this issue:

ncftpget -W "$GX" -d $W/get.$$ -a -c -u $U -p $P $H $F\($mbr\) | (eval $PRE)
>$W/$mbr

Thanks for the input.

Leland


Re: Runaway Processes

2003-07-29 Thread Chet Norris
Thanks.
--- Rich Smrcina <[EMAIL PROTECTED]> wrote:
> In the same situation I mentioned earlier, we changed the priority of
> the
> application server processes to be lower than any telnet process, so
> that
> telnet users could still gain access in the event of a loop.  The
> application
> server priority can be changed from the admin app.
>
> On Tuesday 29 July 2003 08:08 am, you wrote:
> > We're testing with Websphere under RH 7.2 Linux running under z/VM.
> > When a test process is invoked and it goes into a CPU loop, the
> only
> > option I can see to recover is to do a #CP IPL. This will
> eventually
> > result in a corrupted HFS. Any suggestions on how to better manage
> > process loops? During this loop activity I can't issue any commands
> > from any terminal for this image, the most I can do is CP logon.
> >
> > =
> > Chet Norris
> > Marriott International,Inc.
> >
> > __
> > Do you Yahoo!?
> > Yahoo! SiteBuilder - Free, easy-to-use web site design software
> > http://sitebuilder.yahoo.com
>
> --
> Rich Smrcina
> Sr. Systems Engineer
> Sytek Services, A Division of DSG
> Milwaukee, WI
> rsmrcina at wi.rr.com
> rsmrcina at dsgroup.com
>
> Catch the WAVV!  Stay for Requirements and the Free for All!
> Update your S/390 skills in 4 days for a very reasonable price.
> WAVV 2004 in Chattanooga, TN
> April 30-May 4, 2004
> For details see http://www.wavv.org


=
Chet Norris
Marriott International,Inc.

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: OSA Express in QDIO mode problems

2003-07-29 Thread Alan Altmark
On Tuesday, 07/29/2003 at 09:29 EST, James Melin
<[EMAIL PROTECTED]> wrote:
> I was told that varying the OSA CHP
> offline to all using LPARs will cause it to reload its configuration or
> something

Yes, that is true.  Once an OSA chpid (the whole chpid, not just all the
devices on it!) is offline to *all* LPARs, whether from a host VARY
command or from the HMC, the OSA will re-IML when it is brought back
online.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development


Re: Runaway Processes

2003-07-29 Thread Rich Smrcina
In the same situation I mentioned earlier, we changed the priority of the
application server processes to be lower than any telnet process, so that
telnet users could still gain access in the event of a loop.  The application
server priority can be changed from the admin app.
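As a rough OS-level sketch of the same idea (the WebSphere script path and the
process match below are placeholders, and the admin-app setting described above
remains the supported route):

   # start the app server at a lower (nicer) priority so telnet stays responsive
   nice -n 10 /opt/WebSphere/AppServer/bin/startServer.sh server1 &

   # or lower the priority of an already-running, looping server process
   renice +10 -p $(ps -eo pid,args | awk '/java.*WebSphere/ {print $1}')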

On Tuesday 29 July 2003 08:08 am, you wrote:
> We're testing with Websphere under RH 7.2 Linux running under z/VM.
> When a test process is invoked and it goes into a CPU loop, the only
> option I can see to recover is to do a #CP IPL. This will eventually
> result in a corrupted HFS. Any suggestions on how to better manage
> process loops? During this loop activity I can't issue any commands
> from any terminal for this image, the most I can do is CP logon.
>
> =
> Chet Norris
> Marriott International,Inc.
>
> __
> Do you Yahoo!?
> Yahoo! SiteBuilder - Free, easy-to-use web site design software
> http://sitebuilder.yahoo.com

--
Rich Smrcina
Sr. Systems Engineer
Sytek Services, A Division of DSG
Milwaukee, WI
rsmrcina at wi.rr.com
rsmrcina at dsgroup.com

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2004 in Chattanooga, TN
April 30-May 4, 2004
For details see http://www.wavv.org


Re: OSA Express in QDIO mode problems

2003-07-29 Thread James Melin
I had a similar problem to this. I did not need to do a POR to fix it.

What you need to do is, from the HMC, toggle the chpid for the device offline
to the Linux LPAR.
On EVERY OTHER OS/390 or z/OS LPAR, vary the devices for that OSA-E
offline, and then configure the chpid offline to each LPAR. After the
CHPID comes offline to all LPARs, configure the CHPID online to all LPARs
and then vary the device numbers associated with that device back online.
Then using the HMC, toggle the device back on for the Linux LPAR.

I can't remember if you will need to IPL Linux after that to get the
hardware to register properly, but I was told that varying the OSA CHP
offline to all using LPARs will cause it to reload its configuration or
something, and you should be good to go. It did seem to make things happy
when I put the Gigabit OSA card on my system without a POR.
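To sketch it out, the per-LPAR sequence on the OS/390 or z/OS side would be
something like the following operator commands (chpid 1C and devices F800-F802
are only example numbers; substitute your own):

   V F800-F802,OFFLINE      (vary the OSA-E devices offline)
   CF CHP(1C),OFFLINE       (configure the chpid offline to this LPAR)
      ... once the chpid is offline to every LPAR ...
   CF CHP(1C),ONLINE
   V F800-F802,ONLINE       (then vary the devices back online)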




"Post, Mark K" <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
07/28/2003 08:39 PM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject: Re: OSA Express in QDIO mode problems




Harold,

You say the OSA-E "was not initially defined to the newly created Linux
lpar. Even after correcting that problem, we have tried, with no success,
to
get this device working on Linux."  What, exactly, did you do to correct
that problem?  If it was a dynamic reconfiguration of the LPAR, I believe
that will not be sufficient.  You're possibly looking at a power-on reset.


Mark Post

-Original Message-
From: Kubannek, Harold [mailto:[EMAIL PROTECTED]
Sent: Monday, July 28, 2003 7:36 PM
To: [EMAIL PROTECTED]
Subject: OSA Express in QDIO mode problems


We installed Linux in a partition on a 9672-R46 mainframe last week in a
LPAR configuration (non-zVM). We were able to configure and use our OSA-2
card without any problems as an LCS device, but we have been unsuccessful
in
getting our OSA-Express running (in QDIO mode).

The OSA Express is shared across all our lpars (4 lpars running z/OS), but
was not initially defined to the newly created Linux lpar. Even after
correcting that problem, we have tried, with no success, to get this device
working on Linux. We are able to dynamically load the QDIO driver but the
QETH driver fails continuously with "device not found". We have applied all
the latest patches/upgrades for the "Linux for zSeries and S/390 - June
2003
stream" (
http://www10.software.ibm.com/developerworks/opensource/linux390/june2003_technical.shtml )
and have even linked the QDIO and QETH modules as part of
the Kernel.

After we linked the QETH module in the Kernel we started seeing the
following messages on the startup of Linux :


=
Jul 28 13:57:35 lpar5 kernel: qdio: loading QDIO base support version 2
($Revision: 1.145 $/$Revision: 1.66.4.1 $)
Jul 28 13:57:35 lpar5 kernel: qeth: loading qeth S/390 OSA-Express driver
($Revision: 1.337.4.5 $/$Revision: 1.113.4.1 $/$Revision: 1.42.4.1 $)
Jul 28 13:57:35 lpar5 kernel:  qeth: allocated 0 spare buffers
Jul 28 13:57:35 lpar5 kernel: qeth: Trying to use card with devnos
0xF800/0xF801/0xF802
Jul 28 13:57:35 lpar5 kernel:  qeth: IDX_ACTIVATE(rd) on read channel irq
0x1c38: timeout
Jul 28 13:57:35 lpar5 kernel:  qeth: There were problems in hard-setting up
the card.
Jul 28 13:57:35 lpar5 kernel: qeth: Trying to use card with devnos
0xF804/0xF805/0xF803
Jul 28 13:57:35 lpar5 kernel:  qeth: IDX_ACTIVATE(rd) on read channel irq
0x1c3c: timeout
Jul 28 13:57:35 lpar5 kernel:  qeth: There were problems in hard-setting up
the card.
Jul 28 13:57:35 lpar5 kernel: qeth: Trying to use card with devnos
0xF806/0xF807/0xF808
Jul 28 13:57:35 lpar5 kernel:  qeth

Re: Memory displays on Linux

2003-07-29 Thread Rich Smrcina
We've had an incident like this with a customer.  It turned out to be a
garbage collection loop, storage for a very large object was required and
there wasn't enough room in the heap for it.  It turns out to look like a run
away database query that is returning a 100MB+ result set.

On Tuesday 29 July 2003 09:02 am, you wrote:
> We recently encountered an application loop with a Linux Websphere
> instance. I was able to get a VM PER Branch trace, but I cannot find any
> command within Linux to display those memory locations or to determine
> where modules are actually loaded. The code does not have any 'eyecatchers'
> either, so doing VM displays with translate is not helpful either. I'm
> having trouble determining what modules are being executed.

--
Rich Smrcina
Sr. Systems Engineer
Sytek Services, A Division of DSG
Milwaukee, WI
rsmrcina at wi.rr.com
rsmrcina at dsgroup.com

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2004 in Chattanooga, TN
April 30-May 4, 2004
For details see http://www.wavv.org


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Eric Sammons
Terry,

I wanted to point something out, it is my understanding that in fact TAM
3.9 is not certified on zLinux.  Are you aware of that as well or have you
heard otherwise?

Thanks!
Eric Sammons
FRIT - Infrastructure Engineering





Terry Spaulding <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
07/29/2003 09:30 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject:Re: DB2 UDB V7.2 install problem on SuSE SLES7
S/390 with kernel timer patch 

Eric,

The link did the trick. I entered the link and used ./db2setup and
everything installed no problem.

I was also installing HTTP from WAS V5 with Fixpak 1 on another SuSE SLES7
which worked ok for the install.
When I tried to do ./apachectl start I received the same error as in the
DB2 UDB V7 install complaining about the
missing lib. I entered the same link and now ./apachectl start works and
HTTP is up.

Thanks for the tip

--
Eric wrote:
I installed TAM 4.1 and that is why I have UDB V8.  Ahh the
requirements

Anyhow, I installed the following RPMs instead of using the db2inst
script.
---

First I would recommend using IBM DB2 8.1.  I was able to get around this
error by doing the following:

ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3

I then executed the rpm installs with the following options


Regards,
Terry L. Spaulding
IBM Global Services
Tele: 781-895-2802Tie/L: 362-2802
Pager: 800-759-  pin:1718699
Fax: 781-895-2659
[EMAIL PROTECTED]


Re: Memory displays on Linux

2003-07-29 Thread Ferguson, Neale
/proc/<pid>/maps will show you where the executable and shared libraries are
loaded. You can use the nm command to display the offsets within the shared
library and executable for each of the entry points and then do the
relocation to work out where in storage the entry points are. (However, if
the object has been stripped, you may not find any symbols.) Once you have
the desired address you can use the #CP TR command and use those virtual
addresses as CP will catch them for you too.

resc004:/usr/src/linux # cat /proc/self/maps
0040-00404000 r-xp  5e:05 32821  /bin/cat
00404000-00405000 rw-p 3000 5e:05 32821  /bin/cat
00405000-00408000 rwxp  00:00 0
4000-40013000 r-xp  5e:05 227151 /lib/ld-2.2.5.so
40013000-40015000 rw-p 00012000 5e:05 227151 /lib/ld-2.2.5.so
40023000-4013f000 r-xp  5e:05 227152 /lib/libc.so.6
4013f000-40145000 rw-p 0011b000 5e:05 227152 /lib/libc.so.6
40145000-40148000 rw-p  00:00 0
40148000-40173000 r--p  5e:05 128512
/usr/lib/locale/en_US/LC_CTYPE
7fffa000-8000 rwxp b000 00:00 0

In the above case you are interested in the storage described by 'r-xp': this
is the r/o code.

resc004:/usr/src/linux # nm /lib/libc.so.6 | grep -v "\.L" | grep -v
"L[0-9]" | more
 A GCC_3.0
 A GLIBC_2.0
 A GLIBC_2.1
 A GLIBC_2.1.1
 A GLIBC_2.1.2
 A GLIBC_2.1.3
 A GLIBC_2.2
 A GLIBC_2.2.1
 A GLIBC_2.2.2
 A GLIBC_2.2.3
 A GLIBC_2.2.4
 A GLIBC_2.2.5
0011c9a4 d LogFacility
0011c9a0 d LogFile
0011c9a8 d LogMask
00122988 b LogStat
0012298c b LogTag
0011c99c d LogType
00115ff4 r OPSYS
001229a8 b SyslogAddr
00120488 a _DYNAMIC
000a3fc0 W _Exit
00120574 a _GLOBAL_OFFSET_TABLE_
0011ddb8 D _IO_2_1_stderr_
0011daf8 D _IO_2_1_stdin_
0011dc58 D _IO_2_1_stdout_
00074254 T _IO_adjust_column
0006bc60 T _IO_adjust_wcolumn
000760b4 t _IO_check_libio
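For example, a small (purely illustrative) script along these lines turns the
base address from the maps output plus an nm offset into the loaded address.
The pid, library and symbol below are placeholders, and this assumes the
library has not been prelinked:

   #!/bin/sh
   pid=1234                      # pid of the looping process (placeholder)
   lib=/lib/libc.so.6            # object of interest (placeholder)
   sym=_IO_adjust_column         # symbol seen in the PER trace (placeholder)

   # base address of the library's r-xp (code) mapping
   base=$(grep "r-xp .*$lib\$" /proc/$pid/maps | head -1 | cut -d- -f1)
   # offset of the symbol inside the library
   off=$(nm $lib | awk -v s=$sym '$3 == s { print $1 }' | head -1)

   printf '%s is loaded at %08x\n' $sym $(( 0x$base + 0x$off ))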

-Original Message-
We recently encountered an application loop with a Linux Websphere instance.
I was able to get a VM PER Branch trace, but I cannot find any command
within Linux to display those memory locations or to determine where modules
are actually loaded. The code does not have any 'eyecatchers' either, so
doing VM displays with translate is not helpful either. I'm having trouble
determining what modules are being executed.


Memory displays on Linux

2003-07-29 Thread Kinnear, Mike
We recently encountered an application loop with a Linux Websphere instance.
I was able to get a VM PER Branch trace, but I cannot find any command
within Linux to display those memory locations or to determine where modules
are actually loaded. The code does not have any 'eyecatchers' either, so
doing VM displays with translate is not helpful either. I'm having trouble
determining what modules are being executed.


Re: SCO not playing by Aussie Rules

2003-07-29 Thread Phil Payne
> Go Aussies!
>
> http://www.theregister.com/content/61/31910.html

http://www.theinquirer.net/?article=10743

"Certainly, SCO has succeeded in making lots of very smart people extremely angry. 
This isn't
a great strategy in almost any situation."

"But aside from a few shills for proprietary software at the vendor supported 
publications and
IT industry analyst firms, most reactions and responses are inimical to SCO."

IT industry analysts? Which, one wonders?  Not this one.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803
  +49 173 6242039


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Terry Spaulding
Eric,

The link did the trick. I entered the link and used ./db2setup and
everything installed no problem.

I was also installing HTTP from WAS V5 with Fixpak 1 on another SuSE SLES7
which worked ok for the install.
When I tried to do ./apachectl start I received the same error as in the
DB2 UDB V7 install complaining about the
missing lib. I entered the same link and now ./apachectl start works and
HTTP is up.

Thanks for the tip

--
Eric wrote:
I installed TAM 4.1 and that is why I have UDB V8.  Ahh the
requirements

Anyhow, I installed the following RPMs instead of using the db2inst
script.
---

First I would recommend using IBM DB2 8.1.  I was able to get around this
error by doing the following:

ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3

I then executed the rpm installs with the following options


Regards,
Terry L. Spaulding
IBM Global Services
Tele: 781-895-2802Tie/L: 362-2802
Pager: 800-759-  pin:1718699
Fax: 781-895-2659
[EMAIL PROTECTED]


Re: Stripping trailing blanks?

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, McKown, John wrote:

> > -Original Message-
> > From: John Summerfield [mailto:[EMAIL PROTECTED]
> > Sent: Monday, July 28, 2003 6:08 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Stripping trailing blanks?
> >
> >
> > On Mon, 28 Jul 2003, McKown, John wrote:
> >
>
> 
>
> > > I invoke it in a subdirectory with:
> > >
> > > for i in *;do ../nonum.sh $i $i.ext;done
> > >
> >
> > A lot of people use the dot-sh suffix. I presume some of them
> > think it's
> > needed.
> >
> 
>
> I know that UNIX does not require suffixes to tell it if a file is
> executable and whatnot. I just do that out of old habit, so that I know what
> all the shell scripts are without needing to actually do a "file" command or
> attempt to look at them. The same with my Perl stuff ending in ".pl".

Don't feel I was aiming specifically at you, you just provoked me enough
to comment. As I said, there's a lot of people who do it:
[EMAIL PROTECTED]:~/cvs$ locate '*.sh' | wc -l
458
[EMAIL PROTECTED]:~/cvs$ locate '*.pl' | wc -l
   1331
[EMAIL PROTECTED]:~/cvs$

A small number of the dot-sh scripts here are because of the (odd) way
Debian processes things in /etc/init.d.


Mostly, my own stuff I either know or don't care: if I want to change
a Perl or Shell script, gvim is the right tool.

If it's in /usr/bin and doesn't work, and it's a script, maybe I can fix
it.





--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Runaway Processes

2003-07-29 Thread Chet Norris
We're testing with Websphere under RH 7.2 Linux running under z/VM.
When a test process is invoked and it goes into a CPU loop, the only
option I can see to recover is to do a #CP IPL. This will eventually
result in a corrupted HFS. Any suggestions on how to better manage
process loops? During this loop activity I can't issue any commands
from any terminal for this image, the most I can do is CP logon.

=
Chet Norris
Marriott International,Inc.

__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com


Re: Stripping trailing blanks?

2003-07-29 Thread McKown, John
> -Original Message-
> From: John Summerfield [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 28, 2003 6:08 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Stripping trailing blanks?
>
>
> On Mon, 28 Jul 2003, McKown, John wrote:
>



> > I invoke it in a subdirectory with:
> >
> > for i in *;do ../nonum.sh $i $i.ext;done
> >
>
> A lot of people use the dot-sh suffix. I presume some of them
> think it's
> needed.
>


I know that UNIX does not require suffixes to tell it if a file is
executable and whatnot. I just do that out of old habit, so that I know what
all the shell scripts are without needing to actually do a "file" command or
attempt to look at them. The same with my Perl stuff ending in ".pl".

--
John McKown
Senior Systems Programmer
UICI Insurance Center
Applications & Solutions Team
+1.817.255.3225

This message (including any attachments) contains confidential information
intended for a specific individual and purpose, and its content is
protected by law.  If you are not the intended recipient, you should delete
this message and are hereby notified that any disclosure, copying, or
distribution of this transmission, or taking any action based on it, is
strictly prohibited.


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread John Summerfield
On Tue, 29 Jul 2003, Eric Sammons wrote:

> I installed TAM 4.1 and that is why I have UDB V8.  Ahh the
> requirements
>
> Anyhow, I installed the following RPMs instead of using the db2inst
> script.
>
> IBM_db2sp81-8.1.0-0
> IBM_db2icuc81-8.1.0-0
> IBM_db2jdbc81-8.1.0-0
> IBM_db2crte81-8.1.0-0
> IBM_db2conn81-8.1.0-0
> IBM_db2rte81-8.1.0-0
> IBM_db2repl81-8.1.0-0
> IBM_db2smpl81-8.1.0-0
> IBM_db2ca81-8.1.0-0
> IBM_db2msen81-8.1.0-0
> IBM_db2cj81-8.1.0-0
> IBM_db2cliv81-8.1.0-0
> IBM_db2engn81-8.1.0-0
> IBM_db2das81-8.1.0-0
> IBM_db2cucs81-8.1.0-0
> IBM_db2conv81-8.1.0-0
> IBM_db2jhen81-8.1.0-0
> IBM_db2chen81-8.1.0-0
> IBM_db2cc81-8.1.0-0
> IBM_db2pext81-8.1.0-0
> IBM_db2essg81-8.1.0-0
>
> These should match up with your DB2 components as well, only the version
> numbers will change.  But as I recall from 7.x to 8.x some of the names
> were a little off too so I had to wait around for rpm command to tell me
> IBM_db2xx  is required for IBM_db2xxx.  Also, watch that nodeps
> flag because it could get you here by not telling you what is required. So
> do the install with a large scroll back buffer so that you can scroll back

There's always this command:
rpm -Va



--


Cheers
John.

Join the "Linux Support by Small Businesses" list at
http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Copyright John Summerfield. Reproduction prohibited.


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Rich Smrcina
Terry,

On the SLES7 CD there should be a package called compat.  It contains the
correct version of the library that DB2 requires.
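Something along these lines should do it once the package is on the guest; the
exact file name of the compat RPM differs by service level, so treat it as a
placeholder:

   # assuming the compat rpm has already been copied (e.g. via ftp) to /tmp
   rpm -ihv /tmp/compat-*.s390.rpm
   ls -l /usr/lib/libstdc++-libc6.1-2.so.3   # the missing library should now be present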

On Tuesday 29 July 2003 07:14 am, you wrote:
> I installed TAM 4.1 and that is why I have UDB V8.  Ahh the
> requirements
>
> Anyhow, I installed the following RPMs instead of using the db2inst
> script.
>
> IBM_db2sp81-8.1.0-0
> IBM_db2icuc81-8.1.0-0
> IBM_db2jdbc81-8.1.0-0
> IBM_db2crte81-8.1.0-0
> IBM_db2conn81-8.1.0-0
> IBM_db2rte81-8.1.0-0
> IBM_db2repl81-8.1.0-0
> IBM_db2smpl81-8.1.0-0
> IBM_db2ca81-8.1.0-0
> IBM_db2msen81-8.1.0-0
> IBM_db2cj81-8.1.0-0
> IBM_db2cliv81-8.1.0-0
> IBM_db2engn81-8.1.0-0
> IBM_db2das81-8.1.0-0
> IBM_db2cucs81-8.1.0-0
> IBM_db2conv81-8.1.0-0
> IBM_db2jhen81-8.1.0-0
> IBM_db2chen81-8.1.0-0
> IBM_db2cc81-8.1.0-0
> IBM_db2pext81-8.1.0-0
> IBM_db2essg81-8.1.0-0
>
> These should match up with your DB2 components as well, only the version
> numbers will change.  But as I recall from 7.x to 8.x some of the names
> were a little off too so I had to wait around for rpm command to tell me
> IBM_db2xx  is required for IBM_db2xxx.  Also, watch that nodeps
> flag because it could get you here by not telling you what is required. So
> do the install with a large scroll back buffer so that you can scroll back
> to see what warnings may have appeared regarding missing IBM_db2
> components.
>
> Good luck!
>
> Eric Sammons
> (804)697-3925
> FRIT - Infrastructure Engineering
>
>
>
>
>
> Terry Spaulding <[EMAIL PROTECTED]>
> Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
> 07/29/2003 07:58 AM
> Please respond to Linux on 390 Port
>
> To: [EMAIL PROTECTED]
> cc:
> Subject:Re: DB2 UDB V7.2 install problem on SuSE SLES7
> S/390 with kernel timer patch 
>
> Eric,
>
> Based on the level of LDAP needed to be deployed DB2 V8 is not an option.
> The LDAP is part of TAM 3.9. It is only supported on DB2 UDB V7.
>
> In your response I enter a link statement to point the library to what DB2
> V7 requires.
>
> I am installing DB2 V7 using the script ./db2setup, what rpm installs are
> you referencing ?
>
> Thanks ...
>
> ---
>- Eric replied:
>
> First I would recommend using IBM DB2 8.1.  I was able to get around this
> error by doing the following:
>
> ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3
>
> I then executed the rpm installs with the following options
>
> rpm -ihv --nodeps
>
> Hope that helps.
>
> Otherwise I believe IBM has a patch and some documentation on their
> website.  But again I did this with DB2 8.1 and Secureway 5.1 and all
> works great and the performance has been really good.
>
> ---
>--
>
>
> Regards,
> Terry L. Spaulding
> IBM Global Services
> [EMAIL PROTECTED]

--
Rich Smrcina
Sr. Systems Engineer
Sytek Services, A Division of DSG
Milwaukee, WI
rsmrcina at wi.rr.com
rsmrcina at dsgroup.com

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2004 in Chattanooga, TN
April 30-May 4, 2004
For details see http://www.wavv.org


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Eric Sammons
I installed TAM 4.1 and that is why I have UDB V8.  Ahh the
requirements

Anyhow, I installed the following RPMs instead of using the db2inst
script.

IBM_db2sp81-8.1.0-0
IBM_db2icuc81-8.1.0-0
IBM_db2jdbc81-8.1.0-0
IBM_db2crte81-8.1.0-0
IBM_db2conn81-8.1.0-0
IBM_db2rte81-8.1.0-0
IBM_db2repl81-8.1.0-0
IBM_db2smpl81-8.1.0-0
IBM_db2ca81-8.1.0-0
IBM_db2msen81-8.1.0-0
IBM_db2cj81-8.1.0-0
IBM_db2cliv81-8.1.0-0
IBM_db2engn81-8.1.0-0
IBM_db2das81-8.1.0-0
IBM_db2cucs81-8.1.0-0
IBM_db2conv81-8.1.0-0
IBM_db2jhen81-8.1.0-0
IBM_db2chen81-8.1.0-0
IBM_db2cc81-8.1.0-0
IBM_db2pext81-8.1.0-0
IBM_db2essg81-8.1.0-0

These should match up with your DB2 components as well, only the version
numbers will change.  But as I recall from 7.x to 8.x some of the names
were a little off too so I had to wait around for rpm command to tell me
IBM_db2xx  is required for IBM_db2xxx.  Also, watch that nodeps
flag because it could get you here by not telling you what is required. So
do the install with a large scroll back buffer so that you can scroll back
to see what warnings may have appeared regarding missing IBM_db2
components.
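An alternative to a big scroll-back buffer is simply to capture the output,
for example:

   rpm -ihv IBM_db2*.rpm 2>&1 | tee db2install.log

so any warnings about missing IBM_db2 components end up in the log file.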

Good luck!

Eric Sammons
(804)697-3925
FRIT - Infrastructure Engineering





Terry Spaulding <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
07/29/2003 07:58 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject:Re: DB2 UDB V7.2 install problem on SuSE SLES7
S/390 with kernel timer patch 

Eric,

Based on the level of LDAP needed to be deployed DB2 V8 is not an option.
The LDAP is part of TAM 3.9. It is only supported on DB2 UDB V7.

In your response I enter a link statement to point the library to what DB2
V7 requires.

I am installing DB2 V7 using the script ./db2setup, what rpm installs are
you referencing ?

Thanks ...


Eric replied:

First I would recommend using IBM DB2 8.1.  I was able to get around this
error by doing the following:

ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3

I then executed the rpm installs with the following options

rpm -ihv --nodeps

Hope that helps.

Otherwise I believe IBM has a patch and some documentation on their
website.  But again I did this with DB2 8.1 and Secureway 5.1 and all
works great and the performance has been really good.

-


Regards,
Terry L. Spaulding
IBM Global Services
[EMAIL PROTECTED]


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Terry Spaulding
Eric,

Based on the level of LDAP needed to be deployed DB2 V8 is not an option.
The LDAP is part of TAM 3.9. It is only supported on DB2 UDB V7.

In your response I enter a link statement to point the library to what DB2
V7 requires.

I am installing DB2 V7 using the script ./db2setup, what rpm installs are
you referencing ?

Thanks ...


Eric replied:

First I would recommend using IBM DB2 8.1.  I was able to get around this
error by doing the following:

ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3

I then executed the rpm installs with the following options

rpm -ihv --nodeps

Hope that helps.

Otherwise I believe IBM has a patch and some documentation on their
website.  But again I did this with DB2 8.1 and Secureway 5.1 and all
works great and the performance has been really good.

-


Regards,
Terry L. Spaulding
IBM Global Services
[EMAIL PROTECTED]


Re: DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Eric Sammons
First I would recommend using IBM DB2 8.1.  I was able to get around this
error by doing the following:

ln -s /usr/lib/libstdc++-libc6.2-2.so.3 /usr/lib/libstdc++-libc6.1-2.so.3

I then executed the rpm installs with the following options

rpm -ihv --nodeps

Hope that helps.

Otherwise I believe IBM has a patch and some documentation on their
website.  But again I did this with DB2 8.1 and Secureway 5.1 and all
works great and the performance has been really good.

Thanks!

Eric Sammons
FRIT - Infrastructure Engineering





Terry Spaulding <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
07/29/2003 07:36 AM
Please respond to Linux on 390 Port

To: [EMAIL PROTECTED]
cc:
Subject:DB2 UDB V7.2 install problem on SuSE SLES7 S/390
with kernel timer patch 

I have SuSE Linux SLES7 S390 with kernel timer patch installed under zVM
4.3. z800. I need to install DB2 UDB V7.2 to support LDAP.

I started the DB2 install with ./db2setup and received the following error
message immediately.

" ./db2inst: error while loading shared libraries:
libstdc++-libc6.1-2.so.3: cannot open shared object file: No such file or
directory
pldapr01:/usr/temp # mc "

The libraries I find on SLES7 are:  libstdc++-libc6.2-2.so.3

I am told I need to get the "Compat" libraries off of the Developers
Edition CD's that came with the SuSE install material.

Can anyone provide me some guidance on what I am looking for on these
Developer Edition CD's and what I need to do to get the "Compat" libraries
downloaded to the SuSE instance ?

Any advice would be greatly appreciated.

Thanks..

Regards,
Terry L. Spaulding
IBM Global Services
[EMAIL PROTECTED]


DB2 UDB V7.2 install problem on SuSE SLES7 S/390 with kernel timer patch ............

2003-07-29 Thread Terry Spaulding
I have SuSE Linux SLES7 S390 with kernel timer patch installed under zVM
4.3. z800. I need to install DB2 UDB V7.2 to support LDAP.

I started the DB2 install with ./db2setup and received the following error
message immediately.

" ./db2inst: error while loading shared libraries:
libstdc++-libc6.1-2.so.3: cannot open shared object file: No such file or
directory
pldapr01:/usr/temp # mc "

The libraries I find on SLES7 are:  libstdc++-libc6.2-2.so.3

I am told I need to get the "Compat" libraries off of the Developers
Edition CD's that came with the SuSE install material.

Can anyone provide me some guidance on what I am looking for on these
Developer Edition CD's and what I need to do to get the "Compat" libraries
downloaded to the SuSE instance ?

Any advice would be greatly appreciated.

Thanks..

Regards,
Terry L. Spaulding
IBM Global Services
[EMAIL PROTECTED]


Re: Stripping trailing blanks?

2003-07-29 Thread Tzafrir Cohen
On Tue, Jul 29, 2003 at 02:51:40AM -0500, Lucius, Leland wrote:

> Will this work for ya John?  Let me know if ya have questions.
>
> I must confess.  I did this just for you.  I never intended to use it.  But,
> doggone it, I really like it.  Heck, if I start to get used to it, I might
> even modify it to work with VM as well and just do all my editing over on
> Linux.

I'm not sure it will work. One gotcha:

# PRE="cut -b 1-72 | sed -e s/\ \*\$//"

If this weren't remmed out, you would probably have had a non-functioning
script.

Consider:

  cmd='ls |head'
  $cmd

The result will be:

  ls: |head: No such file or directory

The problem is bash's expansion order. It will expand variables only
after it has separated the command line into "subcommands".
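A tiny illustration of why the eval matters, using the same hypothetical cmd
as above:

   cmd='ls |head'
   $cmd          # fails: ls is given the single argument "|head"
   eval "$cmd"   # works: the expanded text is re-parsed, so the pipe is honoured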

--
Tzafrir Cohen   +---+
http://www.technion.ac.il/~tzafrir/ |vim is a mutt's best friend|
mailto:[EMAIL PROTECTED]   +---+


Re: Stripping trailing blanks?

2003-07-29 Thread Lucius, Leland
> Now that I have your attention (grin) what I'd like is something like:
>
> download -host MYMVS -user MYUSER -password MYPASS file
> 'MYUSER.PDF.CNTL(member)' $EDITOR file upload -host MYMVS
> -user MYUSER -password MYPASS file 'MYUSER.PDS.CNTL(member)'
>
Will this work for ya John?  Let me know if ya have questions.

I must confess.  I did this just for you.  I never intended to use it.  But,
doggone it, I really like it.  Heck, if I start to get used to it, I might
even modify it to work with VM as well and just do all my editing over on
Linux.

Leland



em
Description: Binary data