Re: big and little endian

2003-08-06 Thread John Alvord
The way I understand it, the little endian scheme is optimized for
mini/micro hardware of the middle 1970s (4004, 8080, PDP, etc.). Those
had tiny memory caches. When adding numbers together, you start with
the least significant part of the number, store the result, and use the
overflow indicator with the next significant part. Think of how manual
decimal addition is performed.

Starting at the least significant part is simplest. Having the least
significant part at the lowest memory address is efficient for cases
with a minimal cache buffer... the fetch on one section will bring in
the next part. In the big-endian case, fetching the higher address
would not pull in the prior address.
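
A minimal C sketch of that byte-order difference (the test value, decimal
123456789 = hex 0x075BCD15, is just illustrative):
===
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x075BCD15;   /* decimal 123456789 */
    const unsigned char *p = (const unsigned char *)&value;

    /* Dump the bytes in order of ascending address. A little-endian
       machine (Intel) prints 15 CD 5B 07; a big-endian machine
       (S/390) prints 07 5B CD 15. */
    for (size_t i = 0; i < sizeof value; i++)
        printf("%02X ", p[i]);
    printf("\n");
    return 0;
}
===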

By now, this is ancient history. But the exigencies of binary
compatibility have kept the memory model congruent. And truthfully it
doesn't make any real difference these days.

john alvord

On Wed, 6 Aug 2003 09:44:45 -0700, Wolfe, Gordon W
[EMAIL PROTECTED] wrote:

Ah, yes, but when you look at a NUMBER, whether it be base-2, base-10 or base-16, you 
tend to look at it with the MOST significant digit on the LEFT.  It depends entirely 
upon whether you're looking at it as a NUMBER or as the contents of a STRING OF 
ADDRESSES.

Always do right.  This will gratify some people and confound the rest.  - Mark Twain
Gordon W. Wolfe, Ph.D, (425) 865 - 5940
VM Technical Services, the Boeing Company


-Original Message-
From: Ward, Garry [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 06, 2003 9:36 AM
To: [EMAIL PROTECTED]
Subject: Re: big and little endian


Perception of end.

Visually, most folks look at storage with the low addresses on the left
hand side, ascending to the right. Big endian puts the most significant
digits on the left, and hence at the lower address; it puts the big end of
the number at the lower end of storage.

There is also something about which end of a register the hardware
starts its arithmetic operations on, whether it operates in a right-to-left
pattern for bit level operations or in a left-to-right pattern.


-Original Message-
From: Bernd Oppolzer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, August 06, 2003 12:23 PM
To: [EMAIL PROTECTED]
Subject: Re: big and little endian


What I never understood about this: how can big ENDian be explained ?
Because, the number formats called big ENDian have the LEAST significant
byte at the END, and the little ENDians have the MOST significant byte
at the END.

Can anybody explain ?

Regards

Bernd


Am Mit, 06 Aug 2003 schrieben Sie:
 Has to do with the order of bytes and significance.  Whether, for
 example, decimal 123456789 which is hex 0x75BCD15 is stored as 07 5B CD 15
 (big endian) or 15 CD 5B 07 (little endian).  Intel is little endian.

 ~ Daniel





Re: Linux/390 Assembler Guru's...

2003-04-05 Thread John Alvord
On Sat, 5 Apr 2003 05:00:10 -0600, Phil Howard
[EMAIL PROTECTED] wrote:

On Fri, Apr 04, 2003 at 06:51:43PM -0600, Lucius, Leland wrote:

|   For instance, calling BZ2_bzBuffToBuffCompress() would
|  require you
|   to reserve the standard 96 bytes, plus 8 more since it has 7
|   parameters.  So:
|  
|   LR15,Stkptr Get address of stack
|   AL   R15,=F'4096'   Point to end of stack
|   SL   R15,=F'104'Reserve first frame
| 
|  These last two  seem equivalent to
|  LA R15,4096-104(,R15)
|  and faster (no storage reference) and 12 bytes shorter.
| 
|  Okay, I'll stop here;-)
| 
| Except that it was only an example to illustrate usage.  The 4096 was the
| stack size and might need to be enlarged based on compression routines used.

Still, you can combine the AL and SL to a single AL with pre-computed
difference.  Then switch to LA when it's less than 4096.

That is true for most circumstances. One difference is that in 31 bit
mode bit 0 is cleared and in 24 bit mode bits 0-7 are cleared. That
would not be true with the AL/SL instructions. Bit 0 might be
important information in some circumstances.
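
Purely as an illustration (a C stand-in, not the actual hardware), the masks
below mirror what LA does to the high-order result bits in each addressing
mode, while a plain AL/SL add/subtract keeps all 32 bits; the function names
and sample stack address are made up:
===
#include <stdio.h>
#include <stdint.h>

/* 31-bit mode: LA clears bit 0 of the result. */
static uint32_t la_31(uint32_t base, uint32_t disp)
{
    return (base + disp) & 0x7FFFFFFFu;
}

/* 24-bit mode: LA clears bits 0-7 of the result. */
static uint32_t la_24(uint32_t base, uint32_t disp)
{
    return (base + disp) & 0x00FFFFFFu;
}

/* AL/SL arithmetic: all 32 bits of the register are kept. */
static uint32_t al_sl(uint32_t base, uint32_t disp)
{
    return base + disp;
}

int main(void)
{
    uint32_t stkptr = 0x80001000u;   /* bit 0 set, carrying extra information */
    uint32_t disp = 4096 - 104;

    printf("AL/SL : %08X\n", al_sl(stkptr, disp));   /* 80001F98 */
    printf("LA 31 : %08X\n", la_31(stkptr, disp));   /* 00001F98 */
    printf("LA 24 : %08X\n", la_24(stkptr, disp));   /* 00001F98 */
    return 0;
}
===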

john


Re: Mysql start-up problem signal 11

2003-03-20 Thread John Alvord
On Thu, 20 Mar 2003 08:56:22 -0500, Gustavson, John (IDS ECCS)
[EMAIL PROTECTED] wrote:

We are running 2.4.7 kernel with mysql-shared-3.23.37-21 rpm installed.  We have a
test server and a production server running the same rpm and start-up scripts.  On
production only, mysql start-up fails with a signal 11 error.  Subsequently you have
to manually start it with the command safe_mysqld -user=root.  Any ideas why #1 it
fails to start, and #2 why it successfully starts with the safe_mysqld command?

signal 11 is an illegal memory access error (SIGSEGV). I suspect the users
involved have different ulimits. ulimit is a per-process mechanism which limits
the amount of system resources a process may use, such as virtual memory. root
would typically have no limits, which is why it starts up there.

[On a PC server, signal 11 is usually a hardware memory error... not
the case here.]
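
For what it's worth, here is a minimal sketch (assuming the standard
getrlimit() interface) that prints the limits most likely to trip a large
process such as mysqld when started from a restricted account:
===
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int res)
{
    struct rlimit rl;
    if (getrlimit(res, &rl) == 0)
        /* RLIM_INFINITY typically prints here as -1 */
        printf("%-12s soft=%ld hard=%ld\n", name,
               (long)rl.rlim_cur, (long)rl.rlim_max);
}

int main(void)
{
    show("RLIMIT_AS", RLIMIT_AS);       /* total virtual memory */
    show("RLIMIT_DATA", RLIMIT_DATA);   /* data segment size */
    show("RLIMIT_STACK", RLIMIT_STACK); /* stack size */
    return 0;
}
===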

john alvord


Re: batch compiles

2003-03-20 Thread John Alvord
On Thu, 20 Mar 2003 08:10:36 -0600, McKown, John
[EMAIL PROTECTED] wrote:

The problem with using make is that I must know *in advance* which
programs I want to compile and then code a make file for that set of
programs. What I want to do is more like:

Edit program1
Submit program1 to compile
Edit program2
Submit program2 to compile
... How ever many times

Each compile is independent of the others and I don't want to preplan and
build a make file to compile the set of programs since I don't often know
the members of the set in advance. I do see where a make file would be used
as I would use a compile proc in MVS. I.e. to compile a C program, I must
do steps 1 through 5. I could then create a generalized make file where I
pass in the name of the program to compile along with any parameters.

I may just be trying to force an MVS concept where it doesn't really belong.

I've done something similar, all in makefiles. In a simplified
description, you have a playpen directory with all the sources needed
for the project. You also have a production set of libraries. There is
a makefile which recognizes changed sources and 1) moves them to the
proper production libraries and 2) initiates a process to process them
[compile/link/move to another machine] etc.

So I do some editing, save files, and enter make [process makefile].

When I add some sources, the makefile has to be updated.

Following is a simple example. The purpose here is to move changed
files to a unix build platform. A copy of the files (on unix) is kept
in the buckeye subdirectory. After this, I would do an rexec command
to trigger the build process on unix.
===
_ECHO = @

_IB_FILES = ibface.hpp   \
ibpublic.hpp \
ko4async.cpp \
ko4cache.cpp \
ko4crtsq.cpp \
ko4ib.cpp\
ko4ibcur.cpp \
ko4ibput.cpp \
ko4ibuti.cpp \
ko4sdep.cpp  \
ko4sdep.hpp  \
ko4sitma.cpp \
ko4sod.cpp   \
ko4state.cpp \
ko4xref.cpp  \
ksmibxit.hpp \
ksmibdbg.hpp


_IB_REP = $(addprefix buckeye/,$(_IB_FILES))

all: .start .startib .ibfiles .quit .ftp

.start:
	$(_ECHO)rm -f ftp.in
	$(_ECHO)rm -f ftp.go
	$(_ECHO)echo user jalvo PASSWORD >> ftp.in
	$(_ECHO)echo verbose >> ftp.in

.startib:
	$(_ECHO)echo cd ~/b3502170/src/kib >> ftp.in

.ibfiles: $(_IB_REP)

buckeye/%: %
	$(_ECHO)echo put $@
	$(_ECHO)echo $@ >> ftp.go
	$(_ECHO)echo put $? $? >> ftp.in
	$(_ECHO)cp -f $? $@


.quit:
	$(_ECHO)echo quit >> ftp.in

.ftp:
	$(_ECHO)test -f ftp.go || echo no files to transfer
	$(_ECHO)test -f ftp.go || exit 1
	$(_ECHO)ftp -n buckeye < ftp.in

clean:
	$(_ECHO)rm -f buckeye/*.*
===
The example could use a lot of spiffing up, logging of results,
postponing copying of files into buckeye until the ftp was successful,
etc. It is just a quick one that demonstrates the process.

In a real environment you would also have to interface with some
Source Control System to get and store the files.

I love gnu makefiles, wish someone would pay me to work on them! (hint,
hint) I even made one once that would ask the invoker YES or NO and
take different logic paths... that was tough.

John Alvord


Re: Interesting perspective

2003-03-19 Thread John Alvord
On Wed, 19 Mar 2003 19:34:26 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

On Tue, 18 Mar 2003, Joseph Temple wrote:

 I would point out that clustering makes hardware more available, not more
 reliable.

If the application stays up, it's more reliable.

 The things actually fail more often because there is  more to
 fail,

...
We've discussed Google here before: would anyone notice if a few Google
servers went missing for a while? Seems to me, probably not, and
according to some in the discussion that is on low-cost hardware.

You always have to take the application into account. A Google session
can drop out with little effect. A money transfer of a million dollars
is quite another story.

john alvord


Re: Any old iron

2003-03-12 Thread John Alvord
On Thu, 13 Mar 2003 11:28:51 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

Not specially relevant, but some may be interested. It _has_ to be worth more
than 60c US.

For most people it would be a costly item to own. Transportation alone
would bust a budget... and the electricity, the special environmental
room, etc etc.  A high maintenance acquisition if ever there was
one.

john


Re: URGENT! really low performance. A related question...

2003-02-20 Thread John Alvord
On Thu, 20 Feb 2003 15:23:23 +, Alan Cox
[EMAIL PROTECTED] wrote:

On Thu, 2003-02-20 at 01:00, John Alvord wrote:
 And Lord protect you if the packaging accidently contained materials
 which generated gamma rays. Another tale of woe from the IBM 1980s

Gamma seems odd, it doesn't interact much most times, now alpha emitters
I could believe. Was it alpha or gamma emitters they got in their materials ?

It has been 15 years since I talked to the researcher involved. He
talked about some contamination from a granite purification
byproduct... something that had small amounts of uranium in the
material... It only occurred in one step of the process. It had been
there all along but showed up as a problem as density increased.

I Am Not A Scientist grin

john



Re: URGENT! really low performance. A related question...

2003-02-20 Thread John Alvord
On Thu, 20 Feb 2003 23:49:38 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

 On Thu, 2003-02-20 at 01:00, John Alvord wrote:
  And Lord protect you if the packaging accidently contained materials
  which generated gamma rays. Another tale of woe from the IBM 1980s

 Gamma seems odd, it doesn't interact much most times, now alpha emitters
 I could believe. Was it alpha or gamma emitters they got in their materials ?


I recall back when we were getting round to 256K chips that cosmic rays were
becoming a problem and that chips weren't going to be made much denser.

What happened?

The cosmic ray scientist I talked with at Research in the middle 1980s
said they spotted a pattern of No Trouble Found errors in channel cache
memory. The frequency was doubled in Denver - which has twice the
number of cosmic ray bursts compared to sea level. Eventually IBM set
up a several-month-long trial in a high altitude ghost town. The 308X
was set up with some PC controllers which monitored for these
transient conditions. At the same time, they arranged to get records
of cosmic ray bursts from a (New Mexico?) radio observatory. The
occurrence of transient channel cache memory errors matched the radio
observatory bursts quite closely.

He never told me how the problem was cured. Maybe some more shielding?
I seem to remember some customers who were advised to move their
mainframe to lower in a tall building... the concrete was an effective
barrier to the cosmic rays.

john alvord



Re: vi vs. ISPF

2003-02-20 Thread John Alvord
On Thu, 20 Feb 2003 09:05:29 -0500, Peter Flass
[EMAIL PROTECTED] wrote:

Paul Raulerson [EMAIL PROTECTED] wrote:
 Vi is very much simpler than ISPF, once you memorize about 12 often used
 commands, and another
 10 that are used often but don't need to be memorized.

Simpler, but extremely annoying.  The whole insert thing just blows my
mind.  I prefer the ISPF editor to xedit, but both are miles ahead of
vi, which should have long ago been scrapped.

It is sort of like the TRS-80 character mode editor, just presented on
a whole screen. All the modes are really confusing...

I am a dyed-in-the-wool EDGAR/XEDIT/KEDIT curmudgeon. I have kedit
macros which I have been using for 12 years which make it look very
much like EDGAR.

john



Re: URGENT! really low performance. A related question...

2003-02-19 Thread John Alvord
On Wed, 19 Feb 2003 17:33:38 +0100, Phil Payne
[EMAIL PROTECTED] wrote:

ECC wasn't pervasive in mainframes either. I remember hearing of a
problem with a 3081 which turned out to be cosmic rays (really) which
occasionally changed bits in a channel buffer cache... which had
neither parity nor ECC.

The first machine I know of that detected single bit errors throughout the system was 
the
Hitachi S7 - roughly equivalent to the 3083.

You can get single bit errors with no external influence at all - just from quantum 
mechanics.
And Lord protect you if the packaging accidentally contained materials
which generated gamma rays. Another tale of woe from the IBM 1980s,
one which shut down foundry production for several months...
You never know where an electron realy is - it's a probability thing.  There is a 
chance that
all of the electrons constituting a charge will jump to the left at once - creating a 
false
zero or one at the output to the gate.

I remember a discussion with a CPU designer.

How often does this happen?

Every million years or so with these transistors, more often with the smaller ones 
we plan in
the future.

Uh huh.  So why is it a problem?

Nineteen transistors per bit, eight bits per byte, 64MB.  One single bit every 
couple of
hours.

That was about 1985.

I liked the microcode store recovery system.  There were two banks with
identical contents, interleaved for speed.  If a single bit error occurred,
the machine just waited for the next half-cycle and took the value from the
other bank.



Re: URGENT! really low performance. A related question...

2003-02-19 Thread John Alvord
On Tue, 18 Feb 2003 09:43:43 -0800, Fargusson.Alan
[EMAIL PROTECTED] wrote:

I am not sure about the P4, but earlier chip did not pass the ECC bits through the 
processor bus, 
so you could not detect data errors between the processor and memory.  This prevents 
one from 
getting Mainframe reliability with an Intel processor.

ECC wasn't pervasive in mainframes either. I remember hearing of a
problem with a 3081 which turned out to be cosmic rays (really) which
occasionally changed bits in a channel buffer cache... which had
neither parity nor ECC.

I worked on an Amdahl machine once (a customer machine that got cooked)
and the last problem was an LRA that gave bad results when the index
register was used as the source. That path through the chips had
shorted (because of heat) and the result was always zero. No
parity/ECC there either.

john alvord



Re: URGENT! really low performance. A related question...

2003-02-18 Thread John Alvord
On Tue, 18 Feb 2003 00:19:12 -0500, Adam Thornton
[EMAIL PROTECTED] wrote:

On Tue, Feb 18, 2003 at 02:36:21AM +0200, Tzafrir Cohen wrote:
 Replace your faulty hardware. It's cheap.
 Or spend a bit more, and get a case without those cooling problems.

Yes, of course it's cheap.  'S'why I bought it.  And I'll buy a new
machine eventually, at a similarly low price point, because I'm cheap.

Point is, *most* PC hardware is cheap.  Because it, you know, costs less
that way.

Adam

I puzzled about all this for a long time. One example was a 4341
versus a Vax/780. The performance sheets I looked at said they were
about equal in performance. But the 4341 was much more capable in real
work situations, given equal workload.

Eventually I noticed that the 4341 could do about 10 times the I/O
(megabytes per second) compared to the Vax machine. The Vax was limited to
about 100K bytes per second and the 4341 could do about 1 megabyte per second.

So I propose that when analysing different architectures we go beyond
simple CPU benchmarks and also calculate 1) memory bandwidth and
2) I/O bandwidth. That hot PC might be great for CPU but might be
left in the dust in memory bandwidth and I/O bandwidth.

That type of analysis would explain why a 168 was so capable even
though the CPU benchmark (compared to a 2GHz Intel) would predict
otherwise. Programming efficiency probably has a measurable effect
too... C++ versus hand-crafted assembler can easily add a 5-10 times
efficiency differential in my experience.
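
As a rough sketch of the kind of measurement being proposed, the following C
fragment times a large memcpy to estimate memory bandwidth; the 64MB buffer
size and pass count are arbitrary illustrative choices, not figures from the
post:
===
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t n = 64 * 1024 * 1024;              /* 64 MB buffers */
    int passes = 10;
    char *src = malloc(n), *dst = malloc(n);
    if (!src || !dst)
        return 1;
    memset(src, 1, n);

    clock_t t0 = clock();
    for (int i = 0; i < passes; i++)
        memcpy(dst, src, n);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* each pass reads n bytes and writes n bytes */
    double mbytes = (double)passes * 2.0 * (double)n / (1024.0 * 1024.0);
    printf("approx %.0f MB/s\n", mbytes / secs);

    free(src);
    free(dst);
    return 0;
}
===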

Hey - IBM - I figured that out when I was working for IBM Research in
Yorktown in 1983-88 - so if someone wants to grab the idea and use it
in marketing... it's yours. I imagine a big heavy tank with 2000
horsepower duking it out with a compact... half plastic... auto.

grin

john



Re: URGENT! really low performance. A related question...

2003-02-17 Thread John Alvord
On Sun, 16 Feb 2003, John Summerfield wrote:

  There's also the fact that your cheapo-cheapo PC has one processor and has to
  do all the I/O for itself.  The PC's processor spends 90% of its time handling
  I/O, formatting data for some port or the screen, running a driver program,
  polling and waiting for a response from some peripheral and so on.
 

 I don't pretend that my Athlon-based system's overall design is anything like as
 good as the S370/168 I used to use so many years ago, but fair go.

 My PC has the on-board EIDE interfaces (EIDE{0,1}) and additionally, an add-on
 PCI card providing two more EIDE ports.

 At one time I had three drives in the box on each of three interfaces. I was
 running DD to do a disk-to-disk copy, and while it was running, I used hdparm to
 test the speed of the third drive. It tested at 35 Mbytes/sec, pretty close to
 its rated speed.

 My graphics card has its own processor, and if I add a SCSI card that too
 offloads a decent amount of work.

 Devices use interrupts to signal the end of operations, and many use DMA devices
 to provide direct access to system RAM.

 While IBM's mainframes do all these things better (except compute), if an IA32
 system uses more than about five percent of the CPU power to drive devices, the
 OS is broken.

 On Linux, we use (mostly) the same software you do. It does not need lots of CPU
 power to drive most I/O devices.

I understand the benchmark results, but does that mean that a current PC
could support the same workload? At John Hancock in the early 1970s a 168
supported a fairly hefty batch workload and an online inquiry system for
400+ file clerks.

If a current PC can't support that workload, what is the difference? Maybe
benchmarks don't mean that much...

john alvord



Re: IBM stops Linux Itanium effort

2003-02-17 Thread John Alvord
On Mon, 17 Feb 2003 23:23:45 +0100, Phil Payne
[EMAIL PROTECTED] wrote:

 we have a 360/20 at the (little) computer museum at our site,
 and on the control panel it has four wheels with hex digits on it where you
 could enter an address and two wheels for the byte value, and so you could
 change storage contents from the control panel. So I'm sure the address size
 was 16 bits. I never worked with it (I'm too young, only 44 years).

Oh, there's absolutely no doubt that the 360/20 was a 16-bit machine.

The debate is whether it really was a /360.

In the common meaning, it certainly wasn't. To a large extent programs
written for one 360 and operating system would operate on other 360
models. That was the prime invention and genius. Since that wasn't
true for a /20, it couldn't possibly be a 360. 

But of course it was sold and labeled as a 360, so it was.

A paradox.

john



Re: Regina/rexx SOCKET

2002-12-07 Thread John Alvord
On Sat, 7 Dec 2002 09:46:28 EST, A. Harry Williams
[EMAIL PROTECTED] wrote:

On Fri, 6 Dec 2002 13:01:10 +0800 John Summerfield said:
On Fri, 6 Dec 2002 12:42, you wrote:
 Oh no, I didn't mean to impugn Regina at all. I last saw it long ago
 (1996?), and gave up on it real quickly. I fuzzily recall that it only
 seemed like a close cousin to the IBM Rexx dialects. Many OS/2, TSO, and
 CMS execs I'd written broke in unexpected places -- IIRC, only the most

I tried it briefly back then too; I had a working REXX script that Regina
couldn't cope with so I dropped it too.

I've spent some time working on some Rexx execs to have them work the
same on CMS, TSO and Regina on Windows, most recently a TXT2PDF from
Leland Lucius that Lionel includes with XMITIP for TSO.  The biggest
problem is the IO.  There really isn't any IO that is consistent
on those three platforms, much less deal with the performance implications
of the IO routines on those platforms.  Except for that and an ASCII-EBCDIC
issue that needed to be handled, TXT2PDF is about 3,000 lines and it ported
flawlessly.  The same two issues for some SMF decoding routines I've
written.  I've been very happy with Regina and have used it to do some support
for TSM and Notes for SHARE.  All it does is make me miss CMS Pipelines even
more.



But that was six years ago. I don't think the fact OREXX was broken then
means much now either.

John Summerfield
/ahw

When I do work on creating compatible REXX programs, I determine the
environment and set some flags and variables. I have common routines like
eraseit/openit/readit/writeit/closeit which take name and (for MVS)
ddname parameters. The common routines check the environment flag and
do the right thing. For MVS it is usually EXECIO; for classic REXX on
NT it would be linein() calls. The input is a stemmed array, and so is
the output. There is a buffer variable which says how much to read at
a time.

I've been pasting that into new REXX programs for years.

john alvord



Re: IBM has no realistic entry-level offering in the mainframe space

2002-12-04 Thread John Alvord
On Wed, 4 Dec 2002 13:40:48 +, Alan Cox [EMAIL PROTECTED]
wrote:

On Wed, 2002-12-04 at 07:29, John Alvord wrote:
 What a hobbyist license would do is make it possible for z/OS and z/VM
 to survive the coming retirement of 80% of all the experienced
 programmers. By creating a very inexpensive training/development
 environment, IBM would make it possible for that market to continue.

Maybe IBM don't care? What's the business value of doing new stuff on it
as opposed to new platforms? The only thing I can see is helping to pin
customers to their proprietary hardware and increase the value by upping
the cost to switch that defines the actual price they can charge.

That assumes customers are dumb .. which I guess makes it a good
investment

The cost to switch is already very high.

There is an amazing inertia to production computing workloads. Once
value is being delivered - think payroll - the cost of moving to another
platform is gigantic. There are billions (literally) of lines of
COBOL/PL/I/Assembler code out there in customer S/390 shops doing
supposedly useful things. And in the US at least, payroll is HARD - it
changes each year, is different depending on work location and
sometimes by where the person lives. Reporting requirements, benefits,
vacation, sick days... whew.

[ Digression: my spouse was a DP consultant in one of her jobs. One
place had run into a brick wall - the monthly batch processes were
taking more than a month, and there was no money for upgrades. Over a
period of weeks she made a paper map of all the batch processes - on a
conference room wall. In the end she found enough dead-end processes,
ones that produced data that was unused by humans or programs, to
eliminate a week's worth of processing. That bought them a year's
leeway before upgrades were needed. I bet there is a lot of that going
on...]

One other poster here talked about the monumental effort to move
workloads to a new platform. After 10 years, about 20% of the workload
was about to move. And we have all heard of the failures. I am sure it
is equally difficult to move significant workloads off VMS,
Tandem/nonstop, Apache, etc etc

Imagine what happens when most of the IBM Mainframe savvy programmers
retire. Talk about the Y2K problem...

Computing on demand only solves part of the problem - getting the
system programmer talent to manage the systems. Some workloads can
migrate to service providers (payroll to ADP or competitors). But what
about the billions of lines of application code. Whose fingers will be
typing to update for that next new requirement? 

john



Re: IBM has no realistic entry-level offering in the mainframe space

2002-12-03 Thread John Alvord
On Tue, 3 Dec 2002 21:25:45 -0500, Jeffrey C Barnard
[EMAIL PROTECTED] wrote:

Dean,

Interesting thoughts ...

Basically IBM is a corporation with stockholders. A 'for profit' corporation.
They will do things that they believe will earn them money. IBM is very
interested in earning money (as are most if not all corporations). The key to
this discussion is to come up with a way for IBM to earn money on a 'hobbyist'
license. Something has to fund (pay for) the license and the IBM personnel
handling distribution/maintenance/support of said license. License security (legal
usage) is another issue (I will ignore that for now).

Find a way for IBM to make a profit on a hobbyist license and they will do it.
Remember IBM is a very large company. Their cost structure is much higher than
you think. Suggesting a way for IBM to make $100,000 is not going to make it.
$100K (or $1 million) is not even on any IBM managers radar screen.

Does PWD make money? Probably not but the $13K/$20K everyone complains about
probably does not even cover the cost of IBM running the project. Remember, NO
AD/CDs any more and PWD costs have largely been moved to T3/Cornerstone as
distributors. Most PWD personnel are now doing something else. IBM has lowered
their costs by moving the PWD program outside IBM but T3/Cornerstone have to
make money too.

Complain if you want but the reality is if you want a hobbyist license you have
to find a way for IBM to make money on it. Heck, you might get them to at least
listen to you if you could find a way for them to break-even on the license (but
I doubt it).

What a hobbyist license would do is make it possible for z/OS and z/VM
to survive the coming retirement of 80% of all the experienced
programmers. By creating a very inexpensive training/development
environment, IBM would make it possible for that market to continue.

That isn't return on investment - it is pure survival. Unless... IBM
has given up and is willing for that market to dissipate. Milking an
older market - minimizing investment and maximizing profits - is a
reasonable business strategy. I remember reading that RCA was the last
USA manufacturer of a large class of vacuum tubes. The last few years
they made a ton of money. I've got a friend who works for a small chip
company spun off from Intel... a roster of 8 of them is generating
revenues of 40-50 million a year on costs of 8-10 million. No R&D
costs for the hardware, some cost in developing new Windows drivers,
some support costs, and paying foundries to make the chips.

It would take a wild leap of imagination for IBM to make that move...
and I doubt they have it. They see a valid business strategy and are
investing in it.

john alvord



Re: IBM has no realistic entry-level offering in the mainframe space

2002-12-03 Thread John Alvord
On Tue, 3 Dec 2002 19:18:42 -0800, Dean Kent [EMAIL PROTECTED]
wrote:


 Complain if you want but the reality is if you want a hobbyist license you
have
 to find a way for IBM to make money on it. Heck, you might get them to at
least
 listen to you if you could find a way for them to break-even on the
license (but
 I doubt it).

Interestingly, I am quite sure IBM doesn't make money on Linux itself.
They make their money with it in other ways.   Where there is a will, there
is a way.  The question is simply whether there is a will.

I was at an IBM seminar where they talked about TCP/IP vs. Token Ring.  IBM
figured they would 'win' that battle because Token Ring was architecturally
superior.  They finally admitted that the limitations that TCP/IP had were
simply engineering problems, and that the cost of Token Ring made it
uncompetitive.  We now all use TCP/IP, however inferior it is.
It is a valid argument, but it was Token Ring versus Ethernet
(hardware) or TCP/IP versus SNA (software). In each case IBM hubris
led to a great fall.

MVS/VM/VSE will suffer the same fate unless IBM figures out a way to make it
cheap.  I'm sure it is simply a business problem that can be solved if one
thinks 'outside the box'.For example, one could offer a completely
unsupported copy of zOS, zVM or VSE and Flex-ES for the cost of
copying/delivery/etc. with a license that says non-commercial use (Intel
does this with their Linux compilers, for example).   If you want to later
develop a commercial product - you pay what everyone else pays.No loss
of revenues, as there is no support - but potentially a larger group of ISVs
later.  Anyway, it seems like it has worked elsewhere for other 'inferior'
platforms...  :-)

Regards,
Dean


 Regards,
 Jeff
 --
 Jeffrey C Barnard
 Barnard Software, Inc. http://www.bsiopti.com
 Phone 407-323-4773 Fax 407-323-4775



Re: Linux under VM on Hercules?

2002-11-26 Thread John Alvord
On Tue, 26 Nov 2002 09:49:20 -0500, David Boyes
[EMAIL PROTECTED] wrote:

 David,  I disagree with your characterization that IBM's certified
mainframe development
 platform costs a goodly sum a pop.  The guy asking the question is the
VP Engineering
 of Sendmail.com, and if his company produces offerings for zSeries, which
I believe they
 do, then they are eligible for a low-cost offering from IBM's Partnerworld
program.
 That program can provide the guy with a Linux-based Thinkpad (2Ghz, 60gig
drive, 1 gig
 RAM), Flex-ES, 3 years of Flex-ES maintenance, loan of  z/VM AD-CD's,
fully integrated
 and ready to IPL, all for around $13,000.

I am aware of the PID discount. I am also aware that the solution you
propose does not work well in a data center environment (ever tried to
reliably rack a Thinkpad? not easy), and that for me, the equivalent
Hercules environment (minus the ADCD CDs, which I can't license) costs me
the price of a 80 gig disk, which at the local discount outlet amounts to
about $115 plus tax, about $300 if I go super-duper Ultra160 SCSI.

If IBM were to offer a single-user hobbyist license for the ADCDs in the $2K
to 3K range, controlling the use via TCs, then developing for S/390 starts
to look like a reasonable proposition to Joe Average Developer -- at that
rate, it's in the ballpark of buying this week's MS Visual Whatsis per seat,
and everybody's legal and above-board. For that price, I'll buy multiple
copies of the ADCD for developers and we're set.

From IBM's failure to do so, I conclude that MVS (OS/390, zOS) is in
an end of life stage (5-15 years before death) and IBM is following
the common practice of minimizing investment and maximizing revenue.
Growth and new markets is the last thing they want.  Not pretty... but
with all those Cobol programmers retiring and dying, there isn't much
choice.

john alvord



Re: CPU Arch Security [was: Re: Probably the first published shell code]

2002-11-07 Thread John Alvord
On Thu, 7 Nov 2002 10:46:30 -0600, Linas Vepstas [EMAIL PROTECTED]
wrote:

On Wed, Nov 06, 2002 at 10:36:40AM +0800, John Summerfield was heard to remark:
 On Wed, 6 Nov 2002 05:45, you wrote:

 
  The core idea is actually so simple, its painful.  Today, most CPU's
  define two memory spaces: the one that the kernel lives in, and the
  one that the user-space lives in.  When properly designed, there is
  nothing a user-space program can do to corrupt kernel memory.  One
  'switches' between these memory spaces by making system calls, i.e.
  by the SVC instruction.
 
  The 390 arch has not two, but 16 memory spaces (a 4-bit key) with
  this type of protection.  (When I did the i370 port, I put the
  kernel in space 0 or 1 or someething like that, and user space
  programs ran in one of the others.)  The partitioning between them
  is absolute, and is just like the kernel-space/user-space division
  in other archs.  The mechanism is independent/orthogonal to the
  VM/TLB subsystem (you can have/use virtual memory in any of the
  spaces.)

I'm coming late to the party, but this information should be
corrected. The 4-bit key is not a memory space but an attribute of a
block of memory (4K or 2K) within a continuous address space. The PSW
also contains a protect key, and so does the CAW (Channel Address Word).
Key zero typically allows access to all memory. When a PSW is set to a
non-zero key, it can typically read any storage but it can only write
to storage with its own key. [The storage key byte has other items like a
read-only bit.]

Thus the storage key allows for memory isolation between programs
running in the same address space, with key zero being used for the
supervisor. That was MVT/MFT days... Nowadays 2K storage keys have
vanished. In MVS, program isolation is by address space. Most of
kernel memory is shared with the application program space. Key 8 is
for programs and key zero is for kernel. Subsystems like VTAM can take
over a storage key. And of course there are all sorts of multi-address
space stuff now - access registers, primary/secondary address space
instructions, etc etc.

But the storage key was originally used for program isolation. It was
a pretty nice advance for its time.

john alvord



Re: Mainframes Are Still A Mainstay

2002-10-28 Thread John Alvord
On Mon, 28 Oct 2002 15:12:52 -0500, Adam Thornton
[EMAIL PROTECTED] wrote:

On Mon, Oct 28, 2002 at 12:56:08PM -0600, Ward, Garry wrote:
 Not to mention, every time I've seen or used CMS, it has been under VM,
 not in an LPAR by itself. I don't think CMS can run native in an LPAR.

Hasn't been able to since CMS v3.  Head over to alt.folklore.computers
and ask Lynn Wheeler about it.  Basically, CMS uses a VM DIAGNOSE to do
its I/O, rather than talking to the metal (or, at least, the emulated
metal provides a function the physical metal doesn't).

Although it *might* be able to under Hercules.  I remember way back in
the 1.54 days or so that some of the DIAGNOSE instructions were being
put into the instruction set so that CMS could run native on Herc.  I
don't know if that was ever completed though.

Very early CMSes, back in CP/67 days could run standalone. A good way
to do maintenance, I have been told... like booting into DOS and
fiddling with a Windows system.

john alvord



Re: NSS-Support for Linux Kernel under VM

2002-10-16 Thread John Alvord
On Wed, 16 Oct 2002 15:11:34 -0500, Rick Troth [EMAIL PROTECTED] wrote:

NSS is Named Saved System.
You can take a snap-shot of a running (or runnable) system on VM
which CP (the hypervisor part of VM) will store into a spool file.
You can then IPL that system by name,  rather than boot by device.

The syntax of the IPL command is  (gross simplification)

[hcp] ipl device
[hcp] ipl device [clear]
[hcp] ipl name
[hcp] ipl name [parm parms]

where 'hcp' is optional and would be how you issue CP commands
from Linux.   If you're on a VM console,  there is no  'hcp',
you might prefix with  'cp'  instead on CMS,  or you
might simply omit that prefix and let  'ipl'
be recognized as a hypervisor command.

NSS is virtual ROM for a named system.
When booting from NSS,  the system to be booted comes up instantly,
rather than going through the motions of booting from device.
With care,  portions of a Named Saved System can be marked
READ ONLY  so that CP (the hypervisor portion of VM)
can share that storage among several virtual machines.

DCSS is related to NSS.
DCSS is a Discontiguous Shared Segment,
can be read-only or read-write,  can be shared or exclusive,
and appears to the guest operating system as attached storage.
(Need not be in the range of defined memory;  that is,  it can be
ABOVE the defined storage for your virtual machine.)
DCSS is not the same as NSS but is supported by the
same mechanisms within VM (CP).   DCSSs are named,  like NSSs,
but are attached by a DIAGNOSE code,  not booted.

The closest Linux parallel is an initrd, a ramdisk image which serves
as the initial root disk.

john



Re: Intel Architecture Emulated with Linux/390?

2002-09-12 Thread John Alvord

On Thu, 12 Sep 2002 08:57:33 -0400, Matt Zimmerman [EMAIL PROTECTED]
wrote:

On Wed, Sep 11, 2002 at 01:39:08PM -0400, Thomas David Rivers wrote:

  One item to add to this is byte order.  It's surprising how often that is
  a factor.  The Intel byte order is different from mainframes.

  What this can mean is that data directly written on an Intel box, cannot
  be directly read on a mainframe box (unless, of course, the programmer
  was aware of this issue.)

  I would hazard a guess that byte order is more of an issue than in-line
  assembly source for most people.

While byte order is a certainly a factor, it is not specific to the 390
platform, and so is less likely to be an issue than truly platform-specific
code.  Any programs which already work on SPARC or big-endian PowerPC, for
example, will already have dealt with any endianness issues.

I seem to remember that 64-bit byte order was somewhat of a
challenge... Linux worked on alpha just fine, but 390x used a
different byte-order.

john



Re: IBM Moves Key Applications To Linux

2002-08-15 Thread John Alvord

Neat history! I worked at IBM 1983-8 and was a manager in the area
that ran PCSHARE and VMSHARE and PCTOOLS. It was an incredible
foundry of experience and ideas. Somewhere I still have some 1.2meg
floppies of some of the forum discussions. QUALITY was a great one.
And the sad day when the Challenger crashed... some of the IBMers saw
it from their back yard. I wrote one of the VM/CMS speed-up tools that
decreased resource utilization by an amazing amount... 147 lines of
rexx, 1500 lines of assembler... and a cloud of dust.

Glad to hear it has found a happy home.

john alvord

On Thu, 15 Aug 2002 08:42:24 -0400, Coffin Michael C
[EMAIL PROTECTED] wrote:

Hi Dave,

That's funny!  When I left IBM in 1999 they were aggressively moving the
internal Forums to Bloatus Notes.  It was a HORRIBLE implementation (like
all of the conversions from CMS to Bloatus Notes) and people just plain
stopped using the internal Forums (which were, up until then when running on
CMS, a treasure-trove of great information).

This was during the era when IBM had dictated that ALL applications would be
migrated to Bloatus Notes (whether it made any sense or not).  I was a key
programmer for a suite of tools that ran on VM/CMS and we were constantly
being ordered to migrate to Bloatus Notes.  This was during the late 1990's,
and as of this writing these applications still run (perfectly!) on VM/CMS.
Maybe IBM has given up on forcing everything into Notes and is considering
more appropriate platforms!  :)

Thanks for sharing that, I got a real kick out of it!

Michael Coffin, VM Systems Programmer
Internal Revenue Service - Room 6527
 Constitution Avenue, N.W.
Washington, D.C.  20224

Voice: (202) 927-4188   FAX:  (202) 622-3123
[EMAIL PROTECTED]



-Original Message-
From: Dave Jones [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 8:17 AM
To: [EMAIL PROTECTED]
Subject: IBM Moves Key Applications To Linux


[Cross posted to Linux390 and VMESA-L lists].

IBM released the following press release late yesterday afternoon. I think
the community here might find it of interest:
http://biz.yahoo.com/iw/020814/045483.htm

The paragraph that I especially noticed, having used IBM customer (external)
forums for years now, is:

Internal IBM Forums

IBM runs its internal community collaboration systems using Linux on VM on
an IBM eServer zSeries to provide service to more than 300,000 IBM employees
worldwide. Collaboration is provided using forums, or newsgroups, to allow
IBM employees worldwide to discuss hundreds of different technical and
business topics. These forums provide a way to host written discussions on
any relevant topic, and new content is appended to each forum as users
contribute. The forums are open to all IBM employees across the company. The
mainframe running Linux is integrated into IBM's single Intranet solution
called w3.ibm.com. This award-winning portal for IBM employees has over 17
million hits per day. The forums have over 15,000 new posts every month and
provide a valuable source of information on 800 different topics.

Dave Jones
http://www.sinenomine.net/
Houston, TX
281.578.7544 (voice)



Re: CPU Scalability In single Linux Image Under VM?

2002-07-31 Thread John Alvord

On Tue, 30 Jul 2002 09:37:09 -0400, Bill Bitner [EMAIL PROTECTED]
wrote:

It depends. With Linux it depends even more.

Remember that scaling involves both a software and a hardware MP factor.
On zSeries, we have some advantages from the hardware implementation.
There are some workloads that scale very well on Linux for zSeries.
Klaus Bergmann did a presentation at the last SHARE that gave some
examples of excellent scaling on a 16-way. Of course I'm sure that's
not the case for all workloads. :-)

As for scaling with virtual MP support. There is a small bump/cost in
VM overhead in going from a virtual 1-way to a virtual n-way. However,
after that the overhead is basically linear up to the maximum of 64
for a given virtual machine (though defining more virtual processors
than real processors will not be of any benefit and may just add
overhead).

The ongoing work on NUMA support will eventually benefit the S/390 -
z/VM Linux/390 environment. If multiple virtual machines need to work
cooperatively, it is exactly like a NUMA machine where certain
storage is under local control, some storage is shared (shared
segments) but needs locking, and communication between machines has
some special requirements.

john



Re: OT: Original OS choice for IBM PC (was: Windows costs more - official)

2002-07-22 Thread John Alvord

On Sat, 20 Jul 2002 21:58:47 +0100, Alan Cox
[EMAIL PROTECTED] wrote:

On Fri, 2002-07-19 at 19:26, John Alvord wrote:
 The original IBM PC had three operating systems at announcement.
 PC-DOS, CP/M, and a UCSD P-system (interpreted pascal or something).
 We know what happened, no need to wonder... Care to speculate why?

CP/M was much more expensive, and the 8086 wasn't actually compatible
with Z80 so the huge applications base of CP/M counted for nothing.

I guess the UCSD P-system had even less momentum than the 8086 CP/M.
If I remember right, DRI was exerting a lot of effort toward MP/M, and
so CP/M kind of languished. Momentum and market timing count for a
lot...

john alvord



Re: Windows costs more - official

2002-07-17 Thread John Alvord

On Wed, 17 Jul 2002 15:12:24 -0400, Mark Earnest [EMAIL PROTECTED]
wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Wed, 17 Jul 2002, Barr Bill P wrote:
 I wonder what the odds of an MS-Linux distribution are?

Consider that they have publicly asserted that the GPL is cancerous,
unAmerican, and that open-source is an intellectual property destroyer.
I can't imagine something that could be worse than this for the software
business and the intellectual-property business..

Odds are probably not very good.

But that was six months ago... time for a new story.

john



Re: **OMEGAMON FOR VM V600 IS GA!**

2002-07-06 Thread John Alvord

On Fri, 5 Jul 2002 22:32:05 -0400, David Boyes [EMAIL PROTECTED]
wrote:

On Fri, Jul 05, 2002 at 08:39:16AM -0700, John Alvord wrote:

 Reminds me of my first wife.

Reminds me of why it takes more than one documented miracle to be
considered for canonization. Congratulations on taking a good first
step, but you've got to show some committment to get people excited
about Omegamon/VM again. We've been burned before -- once burned,
twice shy.

Good points.  I am a committed VM bigot, too, although I haven't
worked seriously in the area for about 20 years. I do have a Knights
of VM and everything. But these days C++ cross-platform programming
keeps my cookie jar filled.

It is really nice to see a new role for VM, hosting Linux/390, making
such a big splash. New roles = new sales = new interest. Let's hope it
continues.

john alvord



Re: **OMEGAMON FOR VM V600 IS GA!**

2002-07-05 Thread John Alvord

A disclaimer that I have worked for Candle for 12 years now...

I find it curious that some people complain about 1) a lack of VM support and
2) when first-day support is provided for the most recent VM level.
Reminds me of my first wife.

john alvord


On Wed, 3 Jul 2002 16:07:48 -0500, Stephen Frazier
[EMAIL PROTECTED] wrote:

First what?


[EMAIL PROTECTED] wrote:

 Well this is a first.

 Larry Davis,  \|/
 Nielsen Media Research   (. .)
 VM Systems Programmer  ___ooO-(_)-Ooo___
 mailto:[EMAIL PROTECTED]



Re: [Linux/390] Re: 2.4.17-may timer pop problems

2002-06-24 Thread John Alvord

There used to be a command... something like CP SET RUN ON which would
let the virtual machine continue processing after a console
interrupt...

john alvord


On Mon, 24 Jun 2002 15:34:02 -0400, Mike Kershaw
[EMAIL PROTECTED] wrote:

On Mon, Jun 24, 2002 at 09:34:39AM -0400, Coffin Michael C wrote:
 Hi Mike,

 When your Linux/390 guest is in CP READ mode, have you tried entering B
 (for BEGIN) to see if the virtual machine resumes running?  If you have, and
 your guest does NOT start running, you should see some CP error messages
 about why it hasn't started - that might be a good place to start.


No - once it drops dead I go over to the console and whack enter to see
what's going on, and it drops into CP.  I do 'B' to begin it again, and
it does nothing - no errors, no system.  Hit enter again, and it drops into
CP again.  Repeat.  It basically stops handling remote OR console input and
as soon as you try to pass it console data it drops into CP.

No errors anywhere.  It just drops dead.

 If entering B gets your system back up and running - it probably means
 somebody pressed ENTER on the virtual machine console (either a logged on
 console, or via SCIF) causing the virtual machine to go into a CP READ.
 That's something you'll have to address with your staff if you find it to be
 the case.

Yeah - I'm VM savvy enough to have thought of that one, this is actually just
on my test system running on my own console.

I've tried it with the console attached and disconnected.

-m



 -Original Message-
 From: Mike Kershaw [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, June 20, 2002 12:31 PM
 To: [EMAIL PROTECTED]
 Subject: 2.4.17-may timer pop problems


 I'm running 2.4.17-may with the no timer patch, qdio, and guest lans.

 I've noticed it has the highly unfortunate tendency to just completely drop
 dead with no errors on the console.  The console drops into CP READ on any
 attempt to interact with the linux/390 system, and will not resume. a re-IPL
 is the only route left.

 No panic message or anything useful.

 It doesn't seem to be load dependent - happens fully idle, happens in the
 middle of compiling.  Happens doing network traffic, happens sitting there
 doing nothing, happens with the console attached and with the console
 disconnected (hey, it was worth a shot).  Usually happens within 15-25
 minutes of IPL.

 Anyone else encounter this?

 -m

 --
 Michael Kershaw
 [EMAIL PROTECTED]
 Linux Systems Programmer, Information Technology

 Don't worry, I'm sure they'll listen to Reason. -- Fisheye, Snowcrash



Re: Addition of new CPUs

2002-06-13 Thread John Alvord

On Tue, 11 Jun 2002 23:15:12 -0700, Nish Deodhar
[EMAIL PROTECTED] wrote:

Hello,

I'm not sure if this is even possible, but I've defined additional CPUs
under VM and would like them enabled for Linux WITHOUT having to re-IPL the
Linux guest image. Is there a command within Linux or otherwise that will
let me do this

It can't work at the moment. There is some activity underway in
current Linux kernel development under the name of CPU hot plug,
but it is still quite experimental.

john



Re: Rudy de Haas' Defamation

2002-05-28 Thread John Alvord

On Mon, 27 May 2002 19:33:01 -0400, Post, Mark K [EMAIL PROTECTED]
wrote:

I thought quite a while before deciding to write this note.  I finally
decided that it was worth the trouble and risk of being ignored.

Rudy de Haas' (aka Paul Murphy) latest and final article in his series on
Linux/390 was really too much to have to deal with in what should be a
reputable publication such as LinuxWorld.  Inferring that the subscribers of
the Linux-390 mailing list are the same as gullible members of a religious
cult is crossing the line of responsible journalism into territory that I
don't even know how to categorize.

In his zeal to criticize IBM and the Linux/390 platform, he seems to be
unaware of one possible explanation for all the things he can't understand
about the platform and its supporters: they might be right and he might be
wrong.  In his last column he makes the comment The list members know
what's going on, most of them have daily access to Linux on the mainframe
and can see its costs and limitations far more clearly than outsiders can.
Which is true.  In his mind, though, that just makes them deluded cult
members, as opposed to intelligent professionals who know a good thing when
they see it.

If Mr. de Haas had actually listened to the responses he got, instead of
taking them as further evidence of a collective delusion, he might have
avoided libeling thousands of honest, hardworking IT professionals.
Instead, he chose to drape himself in the mantle of self-righteousness and
proceed in his own delusion.

To use a British turn of phrase, bad show, both on his part, and yours as
well.

Mark Post

Maybe some people here should author a rebuttal and see if LinuxWorld
would publish it.

john



Re: LinuxWorld Article series - bufferring etc...

2002-04-26 Thread John Alvord

On Fri, 26 Apr 2002 07:21:57 -0400, [EMAIL PROTECTED] wrote:

It took me a surprising amount of time to realize that /usr doesn't
retain any large quantities of data that would end up residing in a
buffer cache-  R/O data is of very limited utility.  I don't think
we're likely to be overrun by people calling up the same man page
across all of the systems.

 Binaries including shared libraries are extremely likely to be used on all
 guests. Take glibc for starters.

Shared libraries get paged in too, just like an executable, so
they don't live in the buffer cache.  There's been some talk
about making such segments shared between VMs but (IMHO) that
will take a huge quantity of code which (again, IMHO) Linus
(_and_ all his lieutenants) are unlikely to accept, considering
how specialized it is (and how inapplicable to all other
environments).

I was thinking about how the VM - Linux/390 environment is like a
NUMA architecture... so problems solved here might eventually find a
wider audience. Local memory is the VM address space. Shared memory
would be DCSSs that need special operations to attach at a distinct
memory address; reading is smooth, writing/locking need special
operations. The benefit of having a read-only DCSS glibc which
everyone shares would be amazing. Same for having a shared disk cache,
although management of it would be très hirsute.

john alvord



Re: MTBF

2002-04-23 Thread John Alvord

On Tue, 23 Apr 2002 12:28:23 -0400, David Boyes
[EMAIL PROTECTED] wrote:

  The record for us is about 9 months for a single Linux image. Average
  is about 3-4 months between reboots, depending on what's running in
  them -- things that suck up lots of memory like Websphere tend to
  shorten the lifespan of the machine by fragmenting storage. Machines
  that get a lot of interactive use tend to collect a few zombies after
  a while, so reboots become a reasonably good idea after a while.

  I have to say that I'm a little surprised at that recommendation.

No, THIS IS NOT A  RECOMMENDATION. This is a descriptive observation.

The failures we see appear to be memory related, and there are some cases
where if you cut interactive users loose and let them do their stuff, they
create random garbage, et al. This is pretty standard stuff for lots of
interactive processing sites -- clear the decks periodically even if it's
not sick.

  Seems like I've heard lots of tales of people with Linux up
  much longer than 9 months... doing web services, etc...  do you
  think your 9 month figure is a function of the 390 version
  of Linux, or Linux in general?

No, I think its a function of how we make upgrade decisions and/or ops
policy.  I suspect that you could go longer, but I wanted to share a data
point.
The very long uptime reports tend to be for fixed workload
environments... like a router that runs for years.

john



Re: LinuxWorld Article series

2002-04-23 Thread John Alvord

On Wed, 24 Apr 2002 03:46:04 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

 On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
 [EMAIL PROTECTED] wrote:

   ...
  This is nothing really new.  Sharing a VM system with early releases of
  MVS was unpleasant.
 
I hear that it's no problem with the two in different LPARs, and that
  running MVS as a guest under VM works well with a surprisingly small
  performance hit (in the 2-3% ballpark.)
  --
  --henry schaffer
 
 
 In the times when Sharing a VM system with early releases of MVS was
 unpleasant, IBM hadn't invented LPARs and I think Gene had just released
 (or was about to release) the S/470s.
 
 
 MVS+VM, I was told, made the 168 comparable in performance to a 135.

 One of my first projects at Amdahl was supporting a product called
 VM/PE, a boringly named, technically cool piece of software which
 shared the real (UP) system between VM and MVS. S/370 architecture is
 dependent on page zero and this code swapped page zeros between MVS
 and VM. It worked just fine for dedicated channels, nice low 1-2%
 overhead. When we started sharing control units and devices, things
 turned ugly.



I do believe we used VM/PE, before MDF became available.

We used to run two, occasionally three MVS systems on a 5860.

MDF was largely equal to the LPAR facility...

VM/PE had a very elegant development name: Janus - who was the Roman
God of portals, able to look two directions at the same time. 

It was originally written by Dewayne Hendricks and the original was
very nice indeed. [Anyone feel free to correct me.] I ran across an
original listing while at Amdahl and it was so much prettier than the
product version. He was no longer working at Amdahl by the time I
arrived. Robert Lerche was also involved, but I don't know whether he
worked jointly with DH or not.

john



Re: LinuxWorld Article series

2002-04-22 Thread John Alvord

On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
[EMAIL PROTECTED] wrote:

  ...
 This is nothing really new.  Sharing a VM system with early releases of
 MVS was unpleasant.

   I hear that it's no problem with the two in different LPARs, and that
 running MVS as a guest under VM works well with a surprisingly small
 performance hit (in the 2-3% ballpark.)
 --
 --henry schaffer


In the times when Sharing a VM system with early releases of MVS was
unpleasant, IBM hadn't invented LPARs and I think Gene had just released (or
was about to release) the S/470s.


MVS+VM, I was told, made the 168 comparable in performance to a 135.

One of my first projects at Amdahl was supporting a product called
VM/PE, a boringly named, technically cool piece of software which
shared the real (UP) system between VM and MVS. S/370 architecture is
dependent on page zero, and this code swapped page zeros between MVS
and VM. It worked just fine for dedicated channels, with a nice low 1-2%
overhead. When we started sharing control units and devices, things
turned ugly.

Of course PR/SM, which turned into the LPAR facility... and a parallel
Amdahl 580 feature, obsoleted the software in 4-5 years.

john alvord



Re: LinuxWorld Article series

2002-04-21 Thread John Alvord

On Sat, 20 Apr 2002, Jay G Phelps wrote:

 Despite the poorly written article, I have actually been somewhat
 disappointed by the test results I have been getting on my MP3000 P30 Linux
 system(s).  In particular, the Bonnie++ test I did last week showed poor
 results in most area's.  Granted, I am running under VM in an LPAR, but I
 still expected better results for I/O related work.

 On the other hand, running Tomcat and a Java/JSP based web site provided
 reasonable performance so I am not ready to give up yet ;-)

 Would anyone running Linux on mainframe with channel attached DASD be
 willing to do a Bonnie++ test and post the results?

I have several times read Linus on the subject of benchmarks like Bonnie
and dbench. They are designed to torture the environment and almost never
reflect actual workloads. With them, some corner-case bugs are detected and
solved, but performance-related problems based on those types of tests are
almost always discounted by top developers.

I haven't seen definitive Linux/390 test results. There have certainly
been enough published examples of problems that I would want to do a serious
performance test of any proposed workload before going ahead. One recent
case involved slow DASD performance, but the DASD performance was limited
independently of Linux... the DASD was emulated through some OS/2
subsystem. Linux is never going to give you better performance than the
base system.

john alvord



Re: LinuxWorld Article series

2002-04-20 Thread John Alvord

On Sat, 20 Apr 2002, Dave Jones wrote:

  -Original Message-
  From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
  Hall, Ken (ECSS)
  Sent: Friday, April 19, 2002 12:13 PM
  To: [EMAIL PROTECTED]
  Subject: LinuxWorld Article series
 
 
  Anyone seen this?
 
  Aside from some (fairly glaring) technical inaccuracies, I can't
  see much I'm qualified to dispute.
 
  http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html
 

 But the glaring technical inaccuracies lead me to question his conclusions
 about Linux on S/390. I suspect that while he knows a great deal about the
 Unix environment and the typical Unix user mindset, his grasp of the
 mainframe world is limited, to say the least. He seems to fixate on the
 mainframe as batch-oriented and Unix as interactive, and on the idea that
 interactive doesn't work well on mainframes. He obviously has never used CMS
 on VM (or CP/VM as he calls it...); it's as interactive and responsive as any
 Linux system I've used. And his statement that TSO and CMS load as batch
 jobs is just pure nonsense.

 One statement that struck me as clearly incorrect is the following:

 In contrast, most mainframe control environments, including loadable
 libraries and related systems level applications, are written and maintained
 very close to the hardware -- usually in PL/1 or assembler but often with
 handwritten or at least
 tweaked object code -- to use far fewer cycles than their C language Unix
 equivalents.

 This statement is wrong on two separate counts:

 1) most mainframe programming (well above 50%) is still done in COBOL, with
 PL/I, Assembler, Fortran, etc. splitting the rest.
 2) PL/I is lots of things, but close to the hardware ain't one of them.
 :-)

 Overall this article appears to be not so much concerned with Linux running
 in an S/390 environment as it is a diatribe against mainframes in general and
 the overall superiority of SUN boxes. That seems to be the whole thrust of
 the paragraphs on mutually contingent evolution (whatever that is).

 I suspect that Paul Murphy is a shill for SUN.

I found it interesting that he wrote about CP/40. That was the first
example of a 360-style operating system using virtual memory with the
equivalent of modern TLBs. [It had been done on other architectures.] The
hardware was a one-off created for the Cambridge Scientific Center
(Mass.). And CP/67 was hosted virtually, and CP/67 begat VM/370, and VM/370
begat ...

So it is interesting but not terribly important to current
understanding. He got the bit about CP/67-VM/370 wrong, too, calling it
CP/VM. [Brown University had a VM/360, but I digress.]

My conclusion is that he read/skimmed a history, such as Melinda Varian's
history of VM, and folded it in without real knowledge or anyone to
proofread the result.

It did make me tend to disbelieve any conclusions. If the author couldn't
understand and abstract (with credit) a well-written history, it tends to
suggest he doesn't understand the current environment.

Anyone with half a brain can see that IBM bet mucho $$$ on Linux/390 and
has sold a lot of machines and acquired a lot of mindshare... in some
quarters they are approaching cool status.  Big bet with a big payoff.

There is a lot of stuff bubbling around in IBM also. They have some top
guys working on NUMA machines who are regularly collaborating with (sending
code to) the Linux kernel development tree.

john alvord



Re: LinuxWorld Article series

2002-04-20 Thread John Alvord

On Sat, 20 Apr 2002, Phil Payne wrote:

  I found it interesting that he wrote about CP/40. That was the first
  example of a 360-style operating system using virtual memory with the
  equivalent of modern TLBs. [It had been done on other architectures.] The
  hardware was a one-off created for the Cambridge Scientific Center
  (Mass.). And CP/67 was hosted virtually, and CP/67 begat VM/370, and VM/370
  begat ...

 I would like to shake the hand of the guy who came up with 'Conversational'.

 --
   Phil Payne
   http://www.isham-research.com
   +44 7785 302 803
   +49 173 6242039

It was originally the Cambridge Monitor System and became the Conversational
Monitor System in VM/370. There was a Yorktown Monitor System, which leaked
EXEC2 into CMS.

This reads like one of those histories of rock bands, doesn't it?

And the ideas in VM/CMS didn't arise in a void. The Compatible Time Sharing
System ran on high-end 707X hardware (IBM second generation). I remember
seeing a list of its commands, like LISTFILE, and the outputs looked almost
identical in form. The virtual hardware had been presaged by the Atlas
computer over in England years before.

john alvord



Re: Missing redbook chapter found!

2002-04-19 Thread John Alvord

On Fri, 19 Apr 2002, Rick Troth wrote:

 Mike ...

 If RedBooks are copyrighted,
 please just double check that publishing this chapter is
 within the bounds of that copyright.

IBM would say... "Chapter? What chapter?", lending an air of 1984 to this
circumstance.

john alvord



Re: Lines of code question.

2002-04-03 Thread John Alvord

On Wed, 3 Apr 2002 14:20:25 +0200, Rob van der Heij [EMAIL PROTECTED]
wrote:

At 23:04 02-04-02, Alan Cox wrote:


For example I contributed -1000 lines (note the minus) to the aacraid driver.


Oh dear! Where did those 1000 lines go? Do we now have somewhere
the extra 1000 lines of code that probably have little function?
Should you not 'comment out' those lines instead?  <g>

I remember a manager at IBM Research who had an intern working for
her. He was working on a GML rendering program for the PC and was
regularly recording a 500-1,000 line decrease in total LOC per week. All this
while implementing new functions as asked. She loved the result but
hated that her code productivity metrics were being turned upside
down.

john



Re: Porting Large S/390 Assembler Applications

2002-03-11 Thread John Alvord

On Mon, 11 Mar 2002 08:57:20 -0500, David Boyes
[EMAIL PROTECTED] wrote:

 My question is if it is easy, difficult or impossible to port large existing
 S/390 applications written in assembler to run under LINUX on a 390 or
 z/series platform?  I am not talking about a batch application but about a
 server type application that currently uses S/390 facilities and operating
 system services such as TCP/IP Socket APIs, multi-tasking and Data Spaces.

This will be quite difficult. Most of the APIs either don't exist or are
quite different, multitasking will need to be restructured, and Linux
doesn't know anything about data spaces. You'll effectively need to
restructure most of the critical sections of the application (if not the
whole application), and at that point, you're better off switching to a
higher-level language such as C and starting over, taking advantage of the
Linux APIs.
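
As a rough illustration only - a minimal sketch, not taken from any actual
port, with the port number, buffer size and structure invented for the
example - the Linux-native shape of such a server tends to rest on the
ordinary POSIX socket and pthread calls, with mmap()/shared memory standing
in for data spaces:

/*
 * Hypothetical sketch: a tiny TCP echo server using the POSIX socket
 * and pthread APIs that a Linux rewrite in C would target in place of
 * the S/390 assembler services mentioned above (socket APIs,
 * subtasking, data spaces).  Port 8000 is an arbitrary example.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static void *worker(void *arg)
{
    int conn = *(int *)arg;
    free(arg);

    char buf[512];
    ssize_t n;

    /* Echo whatever the client sends until it closes the connection. */
    while ((n = read(conn, buf, sizeof buf)) > 0)
        if (write(conn, buf, (size_t)n) < 0)
            break;

    close(conn);
    return NULL;
}

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8000);       /* arbitrary example port */

    if (bind(lsock, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(lsock, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int *conn = malloc(sizeof *conn);
        if (!conn)
            break;
        *conn = accept(lsock, NULL, NULL);
        if (*conn < 0) { free(conn); continue; }

        /* One detached POSIX thread per connection takes the place of
         * a dispatched subtask in the original design. */
        pthread_t tid;
        pthread_create(&tid, NULL, worker, conn);
        pthread_detach(tid);
    }
    return 0;
}

Built with cc -pthread, this is trivially small, but even at this size the
control flow, error handling and storage model are nothing like the assembler
original - which is the real cost of the restructuring described above.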

 Have other S/390 software vendors ported assembler products of this
 complexity or is the effort so large as to not be feasible?

See above.  Unix applications are pretty easy; this stuff won't be.

You could always port Hercules to Linux/390, run z/OS under Hercules,
and then run the assembler systems there... <grin>

john alvord



Re: Sort Products

2002-01-26 Thread John Alvord

Syncsort has both Unix and Windows sorting products. See
www.syncsort.com. It is unclear from the website which versions of Unix
are supported, but I'd be real surprised if they weren't
supporting Linux/390 real soon.

john


On Sat, 26 Jan 2002 18:57:16 -0600, Tuomo Stauffer
[EMAIL PROTECTED] wrote:

My $0.02 - knowing Syncsort back from the 70s... Yes - most of
the Syncsort functions can be done with Unix / Linux tools.
Performance - forget it.

You can't even compare the native Unix (or Win or whatever) sorts
to it. It's in a class of its own. Sorting
is not just sorting some file - it's an art (IMHO). The
difference is so big that it's sometimes hard to believe. If
you depend on sorts (heavy reporting, batch runs, SQL -
the native SQL sorts are not very good, etc.) Syncsort is the
only way. Let's hope we get a free-as-in-beer Syncsort for Linux.

have a nice day - tuomo

ps. maybe pay a little - they also have to make a living.

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Chad Kerner
Sent: Friday, January 25, 2002 5:57 PM
To: [EMAIL PROTECTED]
Subject: Re: Sort Products


Thanks Mark.  We are prototyping one of our critical HP-UX apps on Linux/390.  How 
does the standard sort performance compare to
that of SyncSort?

-Original Message-
From: Mark Earnest [mailto:[EMAIL PROTECTED]]
Sent: Friday, January 25, 2002 5:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Sort Products


On Fri, 25 Jan 2002, Chad Kerner wrote:
 Hello,
 I was wondering what people are using for 3rd party sort
 products on Linux/390.  We have SyncSort for UNIX, but I haven't hit
 their web site yet to see if it will run on Linux/390.

Most of what you do with Syncsort can be done with standard Unix
utilities. Linux has a sort program aptly named sort. Type man sort
for usage instructions.
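
For instance - a hedged illustration only, with made-up file names - plain GNU
sort already covers the field-splitting, numeric-key and merge cases that come
up most often:

# sort /etc/passwd on its third colon-separated field (the uid),
# numerically, highest value first:
sort -t ':' -k 3,3nr /etc/passwd

# merge two already-sorted extract files, dropping duplicate records:
sort -m -u extract1.txt extract2.txt -o merged.txt

Whether that keeps up with Syncsort on big batch files is a separate question,
as the other posts in this thread point out.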