Re: LinuxWorld Article series - bufferring etc...

2002-04-26 Thread John Alvord

On Fri, 26 Apr 2002 07:21:57 -0400, [EMAIL PROTECTED] wrote:

>> >   It took me a surprising amount of time to realize that /usr doesn't
>> >   retain any large quantities of data that would end up residing in a
>> >   buffer cache-  R/O data is of very limited utility.  I don't think
>> >   we're likely to be overrun by people calling up the same man page
>> >   across all of the systems.
>>
>> Binaries including shared libraries are extremely likely to be used on all
>> guests. Take glibc for starters.
>
>Shared libraries get paged in too, just like an executable, so
>they don't live in the buffer cache.  There's been some talk
>about making such segments shared between VMs but (IMHO) that
>will take a huge quantity of code which (again, IMHO) Linus
>(_and_ all his lieutenants) are unlikely to accept, considering
>how specialized it is (and how inapplicable to all other
>environments).

I was thinking about how the VM - Linux/390 environment is like a
NUMA architecture... so problems solved here might eventually find a
wider audience. Local memory is the VM address space. Shared memory
would be DCSSs that need special operations to attach at a distinct
memory address; reading is smooth, while writing/locking need special
operations. The benefit of having a read-only DCSS glibc which
everyone shares would be amazing. Same for having a shared disk cache,
although managing it would be very hairy.

john alvord



Re: LinuxWorld Article series - bufferring etc...

2002-04-26 Thread soup

> >   It took me a surprising amount of time to realize that /usr doesn't
> >   retain any large quantities of data that would end up residing in a
> >   buffer cache-  R/O data is of very limited utility.  I don't think
> >   we're likely to be overrun by people calling up the same man page
> >   across all of the systems.
>
> Binaries including shared libraries are extremely likely to be used on all
> guests. Take glibc for starters.

Shared libraries get paged in too, just like an executable, so
they don't live in the buffer cache.  There's been some talk
about making such segments shared between VMs but (IMHO) that
will take a huge quantity of code which (again, IMHO) Linus
(_and_ all his lieutenants) are unlikely to accept, considering
how specialized it is (and how inapplicable to all other
environments).
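
To see this concretely, here is a minimal C sketch (not from the original post; Linux-specific, assumes /proc is mounted) that prints its own libc mappings - the r-xp text segment is mapped straight from the .so file and demand-paged, rather than read through the buffer cache:

/* Minimal sketch (Linux-specific; assumes /proc is mounted): print this
 * process's libc mappings.  The r-xp (text) segment is mapped straight
 * from the .so file and demand-paged, not read via the buffer cache. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *maps = fopen("/proc/self/maps", "r");
    char line[512];

    if (maps == NULL) {
        perror("/proc/self/maps");
        return 1;
    }
    while (fgets(line, sizeof line, maps) != NULL)
        if (strstr(line, "libc") != NULL)
            fputs(line, stdout);
    fclose(maps);
    return 0;
}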

> >   IIRC the commentary here on the list was that some folks are working
> >   at getting Linux better at peering w/ VM, which, IMHO, is a non-
> >   starter.  I don't see Linus admitting code like that into the mainline
>
> I don't see why not (but as I already hinted, I'm not a kernel developer).
> It's no different in kind from the PC BIOS, and the kernel does use some of
> that.

It's a non-trivial amount of code, it's exceptionally specialized,
it's not a new "device driver" so it's not well compartmented...

I've done _some_ kernel and device driver work, admittedly not
much on Linux, but LynxOS (almost 10 years ago) and FreeBSD
(about 5) so I can appreciate the size of this job.  It's a
serious amount of work that'll have to be spread pretty far and
wide and "talking" to VM's CP pretty often.  This is not good
for encapsulation.

> There IS something that seems very odd about paging.
>
> [summer@dugite summer]$ procinfo
> Linux 2.4.9-31 ([EMAIL PROTECTED]) (gcc 2.96 2731 ) #1 Tue Feb 26 06:25:35 EST 2002 1CPU [dugite]
> Memory:      Total        Used        Free      Shared     Buffers      Cached
> Mem:        118276      111356        6920          56       18244       48452
> Swap:       393200       79504      313696
>
> Bootup: Fri Mar 22 21:58:04 2002    Load average: 0.00 0.00 0.00 1/119 3343
>
> user  :  17:39:54.36   2.1%  page in :153050675  disk 1:  1087873r  634957w
> nice  :   0:40:06.56   0.1%  page out:109710024  disk 2: 3023r   0w
> system:  19:29:41.91   2.4%  swap in :   906529  disk 3:  3196130r 2349159w
> idle  :  32d 19:35:23.04  95.4%  swap out:   238519  disk 4:  3304476r 1724676w
> uptime:  34d  9:25:05.85 context :495785638
>
> irq  0: 297150587 timer irq  7:  40809857 parport0
> irq  1:  9426 keyboard  irq  8:   121 rtc
> irq  2: 0 cascade [4]   irq 10:14 aic7xxx
> irq  3: 4 serialirq 11: 124166017 eth0
> irq  4: 770936353 serialirq 14:   1720315 ide0
> irq  6: 2   irq 15:  10557958 ide1
>
> [summer@dugite summer]$
>
> See those "page out" numbers. I get page out numbers > 0 even if I run without
> swap

I'm not sure, but I'd suspect that page steals of code-segment
pages may be counted as page outs, even if they're just page
expirations.  I'll have to look at the relevant counters within
the kernel source and see where they get incremented, just to
be sure.
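
For reference, a minimal C sketch (mine, not from the post) of where those numbers come from: on 2.4-era kernels /proc/stat carries cumulative "page <in> <out>" and "swap <in> <out>" lines, and the gap between page out and swap out is activity that never touched the swap device. If your kernel's format differs, this simply prints nothing:

/* Minimal sketch: read the cumulative paging counters that procinfo
 * reports, assuming the 2.4-era /proc/stat "page"/"swap" lines. */
#include <stdio.h>

int main(void)
{
    FILE *st = fopen("/proc/stat", "r");
    char line[256];
    unsigned long in, out;

    if (st == NULL) {
        perror("/proc/stat");
        return 1;
    }
    while (fgets(line, sizeof line, st) != NULL) {
        if (sscanf(line, "page %lu %lu", &in, &out) == 2)
            printf("page in: %lu  page out: %lu\n", in, out);
        else if (sscanf(line, "swap %lu %lu", &in, &out) == 2)
            printf("swap in: %lu  swap out: %lu\n", in, out);
    }
    fclose(st);
    return 0;
}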

AIX is much weirder than Linux when it comes to the Virtual
Memory Manager, since the VMM does _all_ of the work managing
disk I/O (files/etc. get "mapped" as memory segments and reads
are handled as page faults.  Hence, AIX == AIn't uniX, at least
deep inside), so some of the numbers have to be carefully
interpreted.
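
For comparison, a minimal C sketch of that model on Linux, where you opt in with mmap(): every "read" below is really a page fault satisfied from the file (/etc/passwd is just a convenient stand-in):

/* Minimal sketch of file I/O done through the VM system: map a file
 * and read it with plain memory references, so each new page touched
 * is a page fault the kernel satisfies from the file. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/passwd", O_RDONLY);
    struct stat sb;
    char *p;
    off_t i;
    unsigned long sum = 0;

    if (fd < 0 || fstat(fd, &sb) < 0 || sb.st_size == 0)
        return 1;
    p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    for (i = 0; i < sb.st_size; i++)
        sum += p[i];          /* reads become page faults, not read(2) calls */
    printf("summed %ld bytes via page faults (checksum %lu)\n",
           (long)sb.st_size, sum);
    munmap(p, sb.st_size);
    close(fd);
    return 0;
}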

--
 John R. Campbell   Speaker to Machines [EMAIL PROTECTED]
 - As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!
   Disclaimer:  All opinions expressed above are those of John R. Campbell
alone and are seriously unlikely to reflect the opinions of
his employer(s) or lackeys thereof.  Anyone who says
differently is itching for a fight!



Re: LinuxWorld Article series - bufferring etc...

2002-04-26 Thread John Summerfield

>   I hate the lack of sensible quoting w/i Bloated Notes.
>
> >>>
> I thought we were talking about buffers for files, not storage allocated to
> programs during use (and that's what stack, bss are).
>
> Everything in /usr is supposed to be mountable r/o.
>
>
> However, Linux doesn't know that VM might be caching it, so Linux caches it
> too, and this leads to increased storage use by Linux as seen by VM. So, to
> reduce this caching, reduce the storage allocated to the Linux instance.
> <<<
>
>   It took me a surprising amount of time to realize that /usr doesn't
>   retain any large quantities of data that would end up residing in a
>   buffer cache-  R/O data is of very limited utility.  I don't think
>   we're likely to be overrun by people calling up the same man page
>   across all of the systems.

Binaries including shared libraries are extremely likely to be used on all
guests. Take glibc for starters.




> The other question deals with paging/swapping. As far as I can figure it,
> paging in Linux/Unix isn't what it is in MVS, and I've never discovered just
> what the correspondence is. So, I use the terms swapping and paging as I did
> in my MVS days.
>
> There was some discussion about this quite a while ago. As I recall, the
> best solution offered is to modify Linux so it recognises it's running in a
> VM environment and to discuss paging operations with VM. I think the feeling
> was that VM also needed to have some changes made, as the way it discusses
> these matters with other guests isn't ideal for Linux.
>
> I imagine that the IBM folk are beavering away at fixing this up properly as
> I type - I think it's working hours in ibm.de;-)
>
>
>   IIRC the commentary here on the list was that some folks are working
>   at getting Linux better at peering w/ VM, which, IMHO, is a non-
>   starter.  I don't see Linus admitting code like that into the mainline

I don't see why not (but as I already hinted, I'm not a kernel developer). It's
no different in kind from the PC BIOS, and the kernel does use some of that.



>   kernel.  A more generic approach (like tuning the disk buffer cache
>   mechanism to throttle new buffer requests) would be best, but that
>   needs to be done in such a way that a code segment won't get paged
>   out to make room for a disk buffer;  Only other disk buffers should
>   be eligible for flushing and reallocation.
>
>   (BTW, code segments don't get written out to the paging space;  They
>   get dropped because they'll just get re-loaded from the executable
>   file image when the page faults again.  As if anyone on this list
>   didn't already know or have an inkling of how this works.)



There IS something that seems very odd about paging.

[summer@dugite summer]$ procinfo
Linux 2.4.9-31 ([EMAIL PROTECTED]) (gcc 2.96 2731 ) #1 Tue Feb 26 06:25:35 EST 2002 1CPU [dugite]
Memory:      Total        Used        Free      Shared     Buffers      Cached
Mem:        118276      111356        6920          56       18244       48452
Swap:       393200       79504      313696

Bootup: Fri Mar 22 21:58:04 2002    Load average: 0.00 0.00 0.00 1/119 3343

user  :  17:39:54.36   2.1%  page in :153050675  disk 1:  1087873r  634957w
nice  :   0:40:06.56   0.1%  page out:109710024  disk 2: 3023r   0w
system:  19:29:41.91   2.4%  swap in :   906529  disk 3:  3196130r 2349159w
idle  :  32d 19:35:23.04  95.4%  swap out:   238519  disk 4:  3304476r 1724676w
uptime:  34d  9:25:05.85 context :495785638

irq  0: 297150587 timer irq  7:  40809857 parport0
irq  1:  9426 keyboard  irq  8:   121 rtc
irq  2: 0 cascade [4]   irq 10:14 aic7xxx
irq  3: 4 serialirq 11: 124166017 eth0
irq  4: 770936353 serialirq 14:   1720315 ide0
irq  6: 2   irq 15:  10557958 ide1

[summer@dugite summer]$




See those "page out" numbers. I get page out numbers > 0 even if I run without
swap


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Signatures, and taglines was RE: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread Gregg C Levine

Hello from Gregg C Levine, normally with Jedi Knight Computers.
As most of you have figured out, I happen to be a fan of that series of
films. All of them. Hence the facts recounted in my signature and
taglines. But I've noticed that John Campbell here, for example,
presents one that is unfamiliar to me. So, the question is, "What are
CLAIM Codes?" An answer is appreciated, but not necessary. Send it directly to
me, so as to not clog up the list.
---
Gregg C Levine [EMAIL PROTECTED]

"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )



> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED]] On Behalf Of
> John Campbell
> Sent: Thursday, April 25, 2002 5:34 PM
> To: [EMAIL PROTECTED]
> Subject: Re: LinuxWorld Article series - bufferring etc...
> 
> John Summerfield:
> >>>>>>>>>>>>>
> >   I try to maintain some recognition of weaknesses (no one system is
> >   ever good at _everything_).  Working w/ Xenix (and Unix, early on)
> >   one of the tunables was to set the buffer cache size.  While the new
> >   model of buffer cache management is wonderful for "regular" (non-
> >   shared) systems, it's not as good in the VM environment (though we
> >   wouldn't want to cripple this feature across the s/390 line, since
> >   this feature is not a problem for the bare metal or an LPAR).
>
> I did mean to comment on this too;-)
>
> Linux's caching for single-OS machines isn't so wonderful either. I've run a
> postgresql database load a few times by way of a benchmark/test, and a
> result is that my 256 Mbytes of RAM gets absolutely full of database stuff.
>
> Then my desktop (KDE or GNOME) gets very slow indeed for a while until the
> cache gets recharged with stuff from /usr.
> <<<<<<<<<<<<<<<
>
>   But the /usr stuff that gets re-loaded are executables and data (impure)
>   segments of the code to be run.  Unless the KDE stuff is all scripting
>   (yeah, like it's all done w/ tcl/TK, smoke and mirrors) then it's going
>   to consist of computational pages rather than persistent storage (which
>   is just a fancy AIX name for data that's reflected on disk;  the code
>   segment of an executable gets to be both, in a way).
>
>   It can be argued that the memory allocation mechanism needs to be looked
>   at to allow a memory request to have its own priority level, just like
>   each process has a priority within the scheduler.  H...
>
>   Doing this would benefit _all_ platforms.
> 
> 
> John R. Campbell, Speaker to Machines (GNUrd)  {813-356|697}-5322
> "Will Work for CLAIM Codes"
> IBM Certified: IBM AIX 4.3 System Administration, System Support



Re: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread John Campbell

John Summerfield:
>
>   I try to maintain some recognition of weaknesses (no one system is
>   ever good at _everything_).  Working w/ Xenix (and Unix, early on)
>   one of the tunables was to set the buffer cache size.  While the new
>   model of buffer cache management is wonderful for "regular" (non-
>   shared) systems, it's not as good in the VM environment (though we
>   wouldn't want to cripple this feature across the s/390 line, since
>   this feature is not a problem for the bare metal or an LPAR).

I did mean to comment on this too;-)

Linux's caching for single-OS machines isn't so wonderful either. I've run a
postgresql database load a few times by way of a benchmark/test, and a
result is that my 256 Mbytes of RAM gets absolutely full of database stuff.

Then my desktop (KDE or GNOME) gets very slow indeed for a while until the
cache gets recharged with stuff from /usr.
<<<

  But the /usr stuff that gets re-loaded are executables and data (impure)
  segments of the code to be run.  Unless the KDE stuff is all scripting
  (yeah, like it's all done w/ tcl/TK, smoke and mirrors) then it's going
  to consist of computational pages rather than persistent storage (which
  is just a fancy AIX name for data that's reflected on disk;  the code
  segment of an executable gets to be both, in a way).

  It can be argued that the memory allocation mechanism needs to be looked
  at to allow a memory request to have its own priority level, just like
  each process has a priority within the scheduler.  H...

  Doing this would benefit _all_ platforms.


John R. Campbell, Speaker to Machines (GNUrd)  {813-356|697}-5322
"Will Work for CLAIM Codes"
IBM Certified: IBM AIX 4.3 System Administration, System Support



Re: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread John Campbell

  I hate the lack of sensible quoting w/i Bloated Notes.

>>>
I thought we were talking about buffers for files, not storage allocated to
programs during use (and that's what stack, bss are).

Everything in /usr is supposed to be mountable r/o.


However, Linux doesn't know that VM might be caching it, so Linux caches it
too, and this leads to increased storage use by Linux as seen by VM. So, to
reduce this caching, reduce the storage allocated to the Linux instance.
<<<

  It took me a surprising amount of time to realize that /usr doesn't
  retain any large quantities of data that would end up residing in a
  buffer cache-  R/O data is of very limited utility.  I don't think
  we're likely to be overrun by people calling up the same man page
  across all of the systems.

>>>
However, this is a problem which I think needs a better long-term solution.
It's wrong for Linux to allocate lots of cache to /usr (but only when running
as a VM guest) but right for it to cache /var liberally.

Perhaps a mount option would address this best, but I'm certainly no Kernel
Guru.
<<<

  The "swap" (paging space) already has a priority flag;  Perhaps it
  can be borrowed for this?  I'll hafta "use the source" to see what
  it does now.

  Actually, the buffer cache allocation algorithm needs to have a
  "cost" associated with each percentage point of free space it
  consumes;  The rate of expense growth should be tunable, etc.  Some
  memory allocators used to have such a pricing mechanism to allow the
  system to balance out the load.
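
For reference, a minimal C sketch (mine, not from the post) of driving that existing priority knob programmatically through swapon(2); the shell equivalent is "swapon -p <prio>". /dev/hda3 is a placeholder device and the call needs root:

/* Minimal sketch: enable a swap area with an explicit priority via
 * swapon(2).  Higher-priority areas are used first. */
#include <stdio.h>
#include <sys/swap.h>

int main(void)
{
    int prio  = 10;   /* example priority */
    int flags = SWAP_FLAG_PREFER |
                ((prio << SWAP_FLAG_PRIO_SHIFT) & SWAP_FLAG_PRIO_MASK);

    if (swapon("/dev/hda3", flags) != 0) {   /* placeholder device */
        perror("swapon");
        return 1;
    }
    return 0;
}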

>>>
The other question deals with paging/swapping. As far as I can figure it,
paging in Linux/Unix isn't what it is in MVS, and I've never discovered just
what the correspondence is. So, I use the terms swapping and paging as I did
in my MVS days.

There was some discussion about this quite a while ago. As I recall, the
best solution offered is to modify Linux so it recognises it's running in a
VM environment and to discuss paging operations with VM. I think the feeling
was that VM also needed to have some changes made, as the way it discusses
these matters with other guests isn't ideal for Linux.

I imagine that the IBM folk are beavering away at fixing this up properly as
I type - I think it's working hours in ibm.de;-)
<<<


  IIRC the commentary here on the list was that some folks are working
  at getting Linux better at peering w/ VM, which, IMHO, is a non-
  starter.  I don't see Linus admitting code like that into the mainline
  kernel.  A more generic approach (like tuning the disk buffer cache
  mechanism to throttle new buffer requests) would be best, but that
  needs to be done in such a way that a code segment won't get paged
  out to make room for a disk buffer;  Only other disk buffers should
  be eligible for flushing and reallocation.

  (BTW, code segments don't get written out to the paging space;  They
  get dropped because they'll just get re-loaded from the executable
  file image when the page faults again.  As if anyone on this list
  didn't already know or have an inkling of how this works.)
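
A minimal C sketch (mine, not from the post) of that behaviour: map a file read-only, fault its pages in, then let the kernel discard them with madvise(). Because the file itself is the backing store, nothing is written to the paging space, and mincore() shows the pages simply ceasing to be resident. /bin/ls stands in for "a code file":

/* Minimal sketch: clean file-backed pages are dropped, not swapped. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static int resident(char *p, size_t len, long pagesz)
{
    unsigned char vec[4096];                 /* enough for small files */
    size_t npages = (len + pagesz - 1) / pagesz;
    size_t i;
    int n = 0;

    if (npages > sizeof vec || mincore(p, len, vec) != 0)
        return -1;
    for (i = 0; i < npages; i++)
        n += vec[i] & 1;
    return n;
}

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    int fd = open("/bin/ls", O_RDONLY);
    struct stat sb;
    volatile char c;
    char *p;
    off_t i;

    if (fd < 0 || fstat(fd, &sb) < 0)
        return 1;
    p = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    for (i = 0; i < sb.st_size; i += pagesz)
        c = p[i];                            /* fault every page in */
    (void)c;
    printf("resident after touching: %d\n", resident(p, sb.st_size, pagesz));
    madvise(p, sb.st_size, MADV_DONTNEED);   /* drop clean pages: no swap I/O */
    printf("resident after dropping: %d\n", resident(p, sb.st_size, pagesz));
    munmap(p, sb.st_size);
    close(fd);
    return 0;
}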

  I *really* need to get a life.


John R. Campbell, Speaker to Machines (GNUrd)  {813-356|697}-5322
"Will Work for CLAIM Codes"
IBM Certified: IBM AIX 4.3 System Administration, System Support



Re: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread John Summerfield

>   I try to maintain some recognition of weaknesses (no one system is
>   ever good at _everything_).  Working w/ Xenix (and Unix, early on)
>   one of the tunables was to set the buffer cache size.  While the new
>   model of buffer cache management is wonderful for "regular" (non-
>   shared) systems, it's not as good in the VM environment (though we
>   wouldn't want to cripple this feature across the s/390 line, since
>   this feature is not a problem for the bare metal or an LPAR).


I did mean to comment on this too;-)

Linux's caching for single-OS machines isn't so wonderful either. I've run a
postgresql database load a few times by way of a benchmark/test, and a result is
that my 256 Mbytes of RAM gets absolutely full of database stuff.

Then my desktop (KDE or GNOME) gets very slow indeed for a while until the cache
gets recharged with stuff from /usr.


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread John Summerfield



> This assumes that every Linux image is going to be using the same disk,
> does it not?
> 
>
>   I've thought that it should work the OTHER way once a mechanism to
>   throttle buffer allocation has been cooked up;  You'd best depend
>   upon VM to handle paging your system (to avoid double-paging) and
>   remove the paging space ("swap" partition) from Linux entirely.
>
>   So you could have a very large "virtual" instance but it wouldn't
>   have any "local" paging space, depending instead upon VM to manage
>   the paging of the instance.  Coupled with a buffer-leashing (we can
>   hope it's tunable via a /proc entry or some such) this'd make each
>   instance more likely to "play well with others".
>
>   As for replicated buffers in the cache, yes, I've seen the cookbooks
>   recommend building a single instance and then providing r/o access
>   to other instances for the /usr filesystem, so this would be a
>   concern.  While reducing the replication of files is a laudable goal
>   we're still stuck w/ replicated buffers.  The only real advantage
>   is with executables, since page misses in the code segments will
>   just pull it in from the file itself (computational pages) and data
>   (stack, bss) segments will be "unique" to each instance's processes
>   anyway.
>
>   Replicated buffers for persistent storage (i.e. data files) is less
>   of a problem since data will vary from instance to instance.
>

I thought we were talking about buffers for files, not storage allocated to
programs during use (and that's what stack, bss are).

Everything in /usr is supposed to be mountable r/o.


However, Linux doesn't know that VM might be caching it, so Linux caches it too
and this leads to increased storage use by Linux as seen by VM. So, to reduce
this caching, reduce the storage allocated to the Linux instance.

However, this is a problem which I think needs a better long-term solution. It's
wrong for Linux to allocate lots of cache to /usr (but only when running as a VM
guest) but right for it to cache /var liberally.

Perhaps a mount option would address this best, but I'm certainly no Kernel Guru.

The other question deals with paging/swapping. As far as I can figure it, paging
in Linux/Unix isn't what it is in MVS, and I've never discovered just what the
correspondences is. So, I use the terms swapping and paging as I did in my MVS
days.

There was some discussion about this quite a while ago. As I recall, the best
solution offered is to modify Linux so it recognises it's running in a VM
environment and to discuss paging operations with VM. I think the feeling was
that VM also needed to have some changes made as the way it discusses these
matters with other guests isn't ideal for Linux.

I imagine that the IBM folk are beavering away at fixing this up properly as I
type - I think it's working hours in ibm.de;-)







--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series - bufferring etc...

2002-04-25 Thread John Campbell

>>>>>>

"Post, Mark K" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 04/25/2002 11:37:05 AM

Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

Sent by:Linux on 390 Port <[EMAIL PROTECTED]>


To:    [EMAIL PROTECTED]
cc:
Subject:Re: LinuxWorld Article series



Yes, and it also assumes that the system administrator hasn't taken steps
to
minimize this.  Such as, reducing the amount of virtual storage allocated
to
the instance, and adding a v-disk as a paging device.  Putting "pressure"
on
the storage use algorithms will reduce the amount used for buffering and
cache, so only frequently used things will remain in storage.

Mark Post

-Original Message-
From: James Melin [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 25, 2002 11:10 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


This assumes that every Linux image is going to be using the same disk,
does it not?
<<<<<<<<<<<<

  I've thought that it should work the OTHER way once a mechanism to
  throttle buffer allocation has been cooked up;  You'd best depend
  upon VM to handle paging your system (to avoid double-paging) and
  remove the paging space ("swap" partition) from Linux entirely.

  So you could have a very large "virtual" instance but it wouldn't
  have any "local" paging space, depending instead upon VM to manage
  the paging of the instance.  Coupled with a buffer-leashing (we can
  hope it's tunable via a /proc entry or some such) this'd make each
  instance more likely to "play well with others".

  As for replicated buffers in the cache, yes, I've seen the cookbooks
  recommend building a single instance and then providing r/o access
  to other instances for the /usr filesystem, so this would be a
  concern.  While reducing the replication of files is a laudable goal
  we're still stuck w/ replicated buffers.  The only real advantage
  is with executables, since page misses in the code segments will
  just pull it in from the file itself (computational pages) and data
  (stack, bss) segments will be "unique" to each instance's processes
  anyway.

  Replicated buffers for persistent storage (i.e. data files) is less
  of a problem since data will vary from instance to instance.

  So it looks like a problem where you're replicating the contents of
  "shared" mdisk files across the instances but this replication will
  not be in the buffer cache but in the code segments of the programs
  running in each instance, and there's not much you can do to reduce
  this-  and it'd add a huge amount of overhead to even _think_ about
  doing so.

  If data doesn't vary instance-to-instance there's not much point to
  having multiple instances, eh?

  Mind you, I don't have my own root/shell access to an s/390 running
  linux;  I've worked w/ Linux since kernel 0.95 or thereabouts (does
  anyone out there remember the SLS distro?) and, despite my enthusiasm,
  I try to maintain some recognition of weaknesses (no one system is
  ever good at _everything_).  Working w/ Xenix (and Unix, early on)
  one of the tunables was to set the buffer cache size.  While the new
  model of buffer cache management is wonderful for "regular" (non-
  shared) systems, it's not as good in the VM environment (though we
  wouldn't want to cripple this feature across the s/390 line, since
  this feature is not a problem for the bare metal or an LPAR).

  VM's side effects of virtualization change a whole lot of the usual
  OS "rules";  I don't know if they've ever been codified.

  Changing the subject slightly, how does Linux run using FBA vs.
  CKD devices?  (It's not like _I_ have the ability to run tests.)


John R. Campbell, Speaker to Machines (GNUrd)  {813-356|697}-5322
"Will Work for CLAIM Codes"
IBM Certified: IBM AIX 4.3 System Administration, System Support



Re: LinuxWorld Article series

2002-04-25 Thread Ingo Adlung

This is something we're looking at. There's some risk of getting into
such a situation if ...

a) you significantly overcommitted your memory, and
b) you have oversized Linux images (more than the application
   working set requires), and
c) the images are "rather busy"

Then from a VM perspective it is hard to determine a page that can
be selected for paging in case of memory pressure, if everything
appears to be continuously in use ...

If the sum of the images requires that much storage for their working
sets (other than I/O buffering), and they are busily active, then there is
little we can do, though. In that case you must not excessively overcommit
your memory.

There is a school of thought that says to oversize Linux memory to be
prepared for arbitrary peak workloads, on the theory that VM can page
more efficiently. You may instead choose to have Linux use its memory
more restrictively, and let Linux page in case of memory pressure.
It depends on the workload ...

Best regards,
Ingo

--
Ingo Adlung,
Linux for zSeries - Strategy & Design

The box said, 'Requires Windows95 or better', ...so I installed LINUX.


Barton Robinson <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
25.04.2002 16:48:30

Please respond to Linux on 390 Port <[EMAIL PROTECTED]>

Sent by:Linux on 390 Port <[EMAIL PROTECTED]>


To:[EMAIL PROTECTED]
cc:
Subject:Re: [LINUX-390] LinuxWorld Article series



The author is correct. This has NOT been addressed for Linux
on zSeries.

>From: Werner Puschitz <[EMAIL PROTECTED]>
>
>Is the author right on this:
>
>http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
>"Linux memory management assumes control of a machine and so grabs up
>free memory for use in I/O buffering. Having multiple Linux instances do
>this to independently buffer I/O to the same files resident on a shared
>mini-disk not only wastes memory, but dramatically increases the paging
>effort."
>
>Or has this already been addressed for Linux on zSeries?
>
>Thanks
>Werner







"If you can't measure it, I'm Just NOT interested!"(tm)

//
Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, IncMailing Address:
 196-D Castro Street   P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012 Web Page:  WWW.VELOCITY-SOFTWARE.COM
//



Re: LinuxWorld Article series

2002-04-25 Thread Post, Mark K

Yes, and it also assumes that the system administrator hasn't taken steps to
minimize this.  Such as, reducing the amount of virtual storage allocated to
the instance, and adding a v-disk as a paging device.  Putting "pressure" on
the storage use algorithms will reduce the amount used for buffering and
cache, so only frequently used things will remain in storage.

Mark Post

-Original Message-
From: James Melin [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 25, 2002 11:10 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


This assumes that every Linux image is going to be using the same disk,
does it not?



Barton Robinson <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
04/25/2002 09:48 AM - please respond to Linux on 390 Port

To:      [EMAIL PROTECTED]
cc:
Subject: Re: LinuxWorld Article series




The author is correct. This has NOT been addressed for Linux
on zSeries.

>From: Werner Puschitz <[EMAIL PROTECTED]>
>
>Is the author right on this:
>
>http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
>"Linux memory management assumes control of a machine and so grabs up
>free memory for use in I/O buffering. Having multiple Linux instances do
>this to independently buffer I/O to the same files resident on a shared
>mini-disk not only wastes memory, but dramatically increases the paging
>effort."
>
>Or has this already been addressed for Linux on zSeries?
>
>Thanks
>Werner







"If you can't measure it, I'm Just NOT interested!"(tm)

//
Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, IncMailing Address:
 196-D Castro Street   P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012 Web Page:  WWW.VELOCITY-SOFTWARE.COM
//



Linux Memory Management [was Re: LinuxWorld Article series]

2002-04-25 Thread Werner Puschitz

Does anyone know if there are any plans to address it in the near
future? Isn't this a big drawback for Linux on zSeries?


On Thu, 25 Apr 2002, Barton Robinson wrote:

> The author is correct. This has NOT been addressed for Linux
> on zSeries.
>
> >From: Werner Puschitz <[EMAIL PROTECTED]>
> >
> >Is the author right on this:
> >
> >http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
> >"Linux memory management assumes control of a machine and so grabs up
> >free memory for use in I/O buffering. Having multiple Linux instances do
> >this to independently buffer I/O to the same files resident on a shared
> >mini-disk not only wastes memory, but dramatically increases the paging
> >effort."
> >
> >Or has this already been addressed for Linux on zSeries?
> >
> >Thanks
> >Werner
>
>
>
>
>
>
>
> "If you can't measure it, I'm Just NOT interested!"(tm)
>
> //
> Barton Robinson - CBW Internet: [EMAIL PROTECTED]
> Velocity Software, IncMailing Address:
>  196-D Castro Street   P.O. Box 390640
>  Mountain View, CA 94041   Mountain View, CA 94039-0640
>
> VM Performance Hotline:   650-964-8867
> Fax: 650-964-9012 Web Page:  WWW.VELOCITY-SOFTWARE.COM
> //
>



Re: LinuxWorld Article series

2002-04-25 Thread James Melin

This assumes that every Linux image is going to be using the same disk,
does it not?



Barton Robinson <[EMAIL PROTECTED]>
Sent by: Linux on 390 Port <[EMAIL PROTECTED]>
04/25/2002 09:48 AM - please respond to Linux on 390 Port

To:      [EMAIL PROTECTED]
cc:
Subject: Re: LinuxWorld Article series




The author is correct. This has NOT been addressed for Linux
on zSeries.

>From: Werner Puschitz <[EMAIL PROTECTED]>
>
>Is the author right on this:
>
>http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
>"Linux memory management assumes control of a machine and so grabs up
>free memory for use in I/O buffering. Having multiple Linux instances do
>this to independently buffer I/O to the same files resident on a shared
>mini-disk not only wastes memory, but dramatically increases the paging
>effort."
>
>Or has this already been addressed for Linux on zSeries?
>
>Thanks
>Werner







"If you can't measure it, I'm Just NOT interested!"(tm)

//
Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, IncMailing Address:
 196-D Castro Street   P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012 Web Page:  WWW.VELOCITY-SOFTWARE.COM
//



Re: LinuxWorld Article series

2002-04-25 Thread Barton Robinson

The author is correct. This has NOT been addressed for Linux
on zSeries.

>From: Werner Puschitz <[EMAIL PROTECTED]>
>
>Is the author right on this:
>
>http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
>"Linux memory management assumes control of a machine and so grabs up
>free memory for use in I/O buffering. Having multiple Linux instances do
>this to independently buffer I/O to the same files resident on a shared
>mini-disk not only wastes memory, but dramatically increases the paging
>effort."
>
>Or has this already been addressed for Linux on zSeries?
>
>Thanks
>Werner







"If you can't measure it, I'm Just NOT interested!"(tm)

//
Barton Robinson - CBW Internet: [EMAIL PROTECTED]
Velocity Software, IncMailing Address:
 196-D Castro Street   P.O. Box 390640
 Mountain View, CA 94041   Mountain View, CA 94039-0640

VM Performance Hotline:   650-964-8867
Fax: 650-964-9012 Web Page:  WWW.VELOCITY-SOFTWARE.COM
//



Re: LinuxWorld Article series

2002-04-25 Thread Werner Puschitz

Is the author right on this:

http://www.linuxworld.com/site-stories/2002/0416.mainframelinux-p7.html
"Linux memory management assumes control of a machine and so grabs up
free memory for use in I/O buffering. Having multiple Linux instances do
this to independently buffer I/O to the same files resident on a shared
mini-disk not only wastes memory, but dramatically increases the paging
effort."

Or has this already been addressed for Linux on zSeries?

Thanks
Werner



Re: LinuxWorld Article series

2002-04-24 Thread Ian McKay

The reliable source was moi...
I told him it was pool1 on one of the machines in Auburn Hills.
I also asked if it was z/VM 420 (i.e. did they have the IEEE hackware on) - yes.
Also talked to him about IEEE and where we stand with the latest 2.4 kernel
having just two instructions missing, and they are NOT used by either GCC or
Dignus Systems/C compilers.
Also asked if he was aware we did IFL... he came back with some quip about
not having enough pocket change... so I told him if he has a CPU and wants
to make it IFL we charge 2,500 bucks compared to ibm's 20k bucks.
So he and I have conversed quite a lot privately.




At 21:25 23/04/02 -0700, you wrote:
>Mark,
>
>This is an Amdahl Millennium 700 which is ALS-2 compliant but does NOT have
>IEEE.
>
>At 09:58 AM 4/22/02 -0400, you wrote:
> >David,
> >
> >No, I've been informed by a reliable source that this is an "MSF'd Amdahl
> >0700 processor."
> >
> >Mark Post
> >
> >-Original Message-
> >From: David Boyes [mailto:[EMAIL PROTECTED]]
> >Sent: Monday, April 22, 2002 8:29 AM
> >To: [EMAIL PROTECTED]
> >Subject: Re: LinuxWorld Article series
> >
> >
> >If I'm reading it correctly, a 6070 is some kind of PowerPC box. Possibly a
> >R/390? If so, you're facing the same OS/2 based device emulation...
> >
> >-- db
> >
> >> Dave,
> >>
> >> Not really, sorry.  I'm just a user there, and it sits in
> >> Texas.  I can ask
> >> the VM guy that supports it if you're really curious.
> >>
> >> Mark Post
> >>
> >> -Original Message-
> >> From: Dave Jones [mailto:[EMAIL PROTECTED]]
> >> Sent: Saturday, April 20, 2002 9:38 PM
> >> To: [EMAIL PROTECTED]
> >> Subject: Re: LinuxWorld Article series
> >>
> >>
> >> Mark,
> >> I don't recognize the CPU type in the CPUID field. can you
> >> explain what type
> >> of system you ran this test on?
> >> Thanks.
> >>
> >> DJ
> >> > CPUID = FF0240760700
> >
>
>Jon Nolting
>(925) 672-1249  -- Home office



Re: LinuxWorld Article series

2002-04-23 Thread Jon Nolting

Mark,

This is an Amdahl Millennium 700 which is ALS-2 compliant but does NOT have
IEEE.

At 09:58 AM 4/22/02 -0400, you wrote:
>David,
>
>No, I've been informed by a reliable source that this is an "MSF'd Amdahl
>0700 processor."
>
>Mark Post
>
>-Original Message-
>From: David Boyes [mailto:[EMAIL PROTECTED]]
>Sent: Monday, April 22, 2002 8:29 AM
>To: [EMAIL PROTECTED]
>Subject: Re: LinuxWorld Article series
>
>
>If I'm reading it correctly, a 6070 is some kind of PowerPC box. Possibly a
>R/390? If so, you're facing the same OS/2 based device emulation...
>
>-- db
>
>> Dave,
>>
>> Not really, sorry.  I'm just a user there, and it sits in
>> Texas.  I can ask
>> the VM guy that supports it if you're really curious.
>>
>> Mark Post
>>
>> -Original Message-----
>> From: Dave Jones [mailto:[EMAIL PROTECTED]]
>> Sent: Saturday, April 20, 2002 9:38 PM
>> To: [EMAIL PROTECTED]
>> Subject: Re: LinuxWorld Article series
>>
>>
>> Mark,
>> I don't recognize the CPU type in the CPUID field. can you
>> explain what type
>> of system you ran this test on?
>> Thanks.
>>
>> DJ
>> > CPUID = FF0240760700
>

Jon Nolting
(925) 672-1249  -- Home office



Re: LinuxWorld Article series

2002-04-23 Thread John Alvord

On Wed, 24 Apr 2002 03:46:04 +0800, John Summerfield
<[EMAIL PROTECTED]> wrote:

>> On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
>> <[EMAIL PROTECTED]> wrote:
>>
>> >> > ...
>> >> >This is nothing really new.  Sharing a VM system with early releases of
>> >> >MVS was unpleasant.
>> >>
>> >>   I hear that it's no problem with the two in different LPARs, and that
>> >> running MVS as a guest under VM works well with a surprisingly small
>> >> performance hit (in the 2-3% ballpark.)
>> >> --
>> >> --henry schaffer
>> >>
>> >
>> >In the times when "Sharing a VM system with early releases of MVS was
>> >unpleasant," IBM hadn't invented LPARs and I think Gene had just released (o
>> r
>> >was about to release) the S/470s.
>> >
>> >
>> >MVS+VM, I was told, made the 168 comparable in performance to a 135.
>>
>> One of my first projects at Amdahl was supporting a product called
>> VM/PE, a boringly named, technically cool piece of software which
>> shared the real (UP) system between VM and MVS. S/370 architecture is
>> dependent on page zero and this code swapped page zeros between MVS
>> and VM. It worked just fine for dedicated channels, nice low 1-2%
>> overhead. When we started sharing control units and devices, things
>> turned ugly.
>>
>>
>
>I do believe we used VM/PE, before MDF became available.
>
>We used to run two, occasionally three MVS systems on a 5860.

MDF was largely equal to the LPAR facility...

VM/PE had a very elegant development name: Janus - the Roman
God of portals, able to look in two directions at the same time.

It was originally written by Dewayne Hendricks and the original was
very nice indeed. [Anyone feel free to correct me]. I ran across an
original listing while at Amdahl and it was so much prettier than the
product version. He was no longer working at Amdahl by the time I
arrived. Robert Lerche was also involved, but I don't know whether he
worked jointly with DH or not.

john



Re: LinuxWorld Article series

2002-04-23 Thread John Summerfield

> On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
> <[EMAIL PROTECTED]> wrote:
>
> >> > ...
> >> >This is nothing really new.  Sharing a VM system with early releases of
> >> >MVS was unpleasant.
> >>
> >>   I hear that it's no problem with the two in different LPARs, and that
> >> running MVS as a guest under VM works well with a surprisingly small
> >> performance hit (in the 2-3% ballpark.)
> >> --
> >> --henry schaffer
> >>
> >
> >In the times when "Sharing a VM system with early releases of MVS was
> >unpleasant," IBM hadn't invented LPARs and I think Gene had just released (o
> r
> >was about to release) the S/470s.
> >
> >
> >MVS+VM, I was told, made the 168 comparable in performance to a 135.
>
> One of my first projects at Amdahl was supporting a product called
> VM/PE, a boringly named, technically cool piece of software which
> shared the real (UP) system between VM and MVS. S/370 architecture is
> dependent on page zero and this code swapped page zeros between MVS
> and VM. It worked just fine for dedicated channels, nice low 1-2%
> overhead. When we started sharing control units and devices, things
> turned ugly.
>
>

I do believe we used VM/PE, before MDF became available.

We used to run two, occasionally three MVS systems on a 5860.

--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-22 Thread Dennis Andrews

John,

As you should remember, :)  the feature on the 580s was MDF - Multiple
Domain Feature.

Dennis.

> Of course PR/SM which turned into the LPAR facility... and a parallel
> Amdahl 580 feature obsoleted the software in 4-5 years.
>
> john alvord



Re: LinuxWorld Article series

2002-04-22 Thread Gregg C Levine

Hello from Gregg C Levine
Funny that particular comment surfaced here. If I remember correctly,
MVS was originally built, and debugged, under VM - early releases of
MVS, that is - and, I would think (and I know everyone will correct me), VM
itself was also built, and debugged, under itself. Oh, and there were a
large number of complaints about early releases of MVS abnormally
ending under VM, so this issue is neither old nor new. Just different.
And I believe that discussion surfaced originally on the list that
discusses the H entity. Now if I have my facts wrong, I will cheerfully
accept any corrections, public or private.
---
Gregg C Levine [EMAIL PROTECTED]

"The Force will be with you...Always." Obi-Wan Kenobi
"Use the Force, Luke."  Obi-Wan Kenobi
(This company dedicates this E-Mail to General Obi-Wan Kenobi )
(This company dedicates this E-Mail to Master Yoda )



> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED]] On Behalf Of
> John Summerfield
> Sent: Monday, April 22, 2002 5:32 PM
> To: [EMAIL PROTECTED]
> Subject: Re: LinuxWorld Article series
> 
> > > ...
> > >This is nothing really new.  Sharing a VM system with early releases of
> > >MVS was unpleasant.
> >
> >   I hear that it's no problem with the two in different LPARs, and that
> > running MVS as a guest under VM works well with a surprisingly small
> > performance hit (in the 2-3% ballpark.)
> > --
> > --henry schaffer
> >
> 
> In the times when "Sharing a VM system with early releases of MVS was
> unpleasant," IBM hadn't invented LPARs and I think Gene had just
released (or
> was about to release) the S/470s.
> 
> 
> MVS+VM, I was told, made the 168 comparable in performance to a 135.
> 
> 
> 
> 
> --
> Cheers
> John Summerfield
> 
> Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
> 
> Note: mail delivered to me is deemed to be intended for me, for my
disposition.
> 
> ==
> If you don't like being told you're wrong,
> be right!



Re: LinuxWorld Article series

2002-04-22 Thread John Alvord

On Tue, 23 Apr 2002 05:32:03 +0800, John Summerfield
<[EMAIL PROTECTED]> wrote:

>> > ...
>> >This is nothing really new.  Sharing a VM system with early releases of
>> >MVS was unpleasant.
>>
>>   I hear that it's no problem with the two in different LPARs, and that
>> running MVS as a guest under VM works well with a surprisingly small
>> performance hit (in the 2-3% ballpark.)
>> --
>> --henry schaffer
>>
>
>In the times when "Sharing a VM system with early releases of MVS was
>unpleasant," IBM hadn't invented LPARs and I think Gene had just released (or
>was about to release) the S/470s.
>
>
>MVS+VM, I was told, made the 168 comparable in performance to a 135.

One of my first projects at Amdahl was supporting a product called
VM/PE, a boringly named, technically cool piece of software which
shared the real (UP) system between VM and MVS. S/370 architecture is
dependent on page zero and this code swapped page zeros between MVS
and VM. It worked just fine for dedicated channels, nice low 1-2%
overhead. When we started sharing control units and devices, things
turned ugly.

Of course PR/SM which turned into the LPAR facility... and a parallel
Amdahl 580 feature obsoleted the software in 4-5 years.

john alvord



Re: LinuxWorld Article series

2002-04-22 Thread John Summerfield

> > ...
> >This is nothing really new.  Sharing a VM system with early releases of
> >MVS was unpleasant.
>
>   I hear that it's no problem with the two in different LPARs, and that
> running MVS as a guest under VM works well with a surprisingly small
> performance hit (in the 2-3% ballpark.)
> --
> --henry schaffer
>

In the times when "Sharing a VM system with early releases of MVS was
unpleasant," IBM hadn't invented LPARs and I think Gene had just released (or
was about to release) the S/470s.


MVS+VM, I was told, made the 168 comparable in performance to a 135.




--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-22 Thread Henry Schaffer

> ...
>This is nothing really new.  Sharing a VM system with early releases of
>MVS was unpleasant.

  I hear that it's no problem with the two in different LPARs, and that
running MVS as a guest under VM works well with a surprisingly small
performance hit (in the 2-3% ballpark.)
--
--henry schaffer



Re: LinuxWorld Article series

2002-04-22 Thread Phil Payne

> It seems there are many performance concerns with Linux/390.  A lot of them
> seem to be getting worked on.  (Linux was conceived with the thought of
> dedicated resources.  Linux is being worked on to behave in a shared resource
> environment.)

This is nothing really new.  Sharing a VM system with early releases of
MVS was unpleasant.

--
 Phil Payne
 http://www.isham-research.com



Re: LinuxWorld Article series

2002-04-22 Thread Tom Duerbusch

As pointed out by many, hardware is the smallest cost in all of this.

Also, you need to consider software costs.  IF the application software isn't
free, multiple boxes are a negative.  Consider that Websphere is about $20K per
engine.  DB2/UDB is also around $20K per engine.  Now 4 Sun boxes times ($20K
Websphere plus $20K DB2) is $160,000 vs a single copy of each for a single
engine S/390  ($40K).

Add in test systems, development systems, spares, how to backup these systems.
Consider that people using these other platforms get real concerned with 30-40%
utilization and want to replace everything.  It seems that every time there is
a new "required" release of the software, they always seem to require new
hardware.  All of this costs a lot of money.

It seems there are many performance concerns with Linux/390.  A lot of them
seem to be getting worked on.  (Linux was conceived with the thought of
dedicated resources.  Linux is being worked on to behave in a shared resource
environment.)

Tom Duerbusch
THD Consulting

David Boyes wrote:

> >  Although the article did have issues, I'm most disconcerted
> >  with some of the bang-per-buck comparisons (one of the
> >  charts showed a mid-range SUN performs at 300% that of the
> >  z/900 at only 18% of the cost... and that was a *mid-range*
> >  SUN!)
>
> He's comparing apples and Brazil nuts. It depends a lot on the
> application -- there are cases where the Sun is the right answer, many where
> it's not.  You have to profile the application.
>
> >  If a mid-range SUN is only 18% of the cost of a (slower) mainframe,
> >  it will make selling mainframe Linux (vs. SUN Linux) a lot harder.
> >  Granted, the RAS facilities of the mainframe are nice, but for
> >  18% of the cost... if you had to, you could buy 3 or 4 SUN boxes,
> >  keeping most of them in the closet as "spares" and still be
> >  cheaper.
>
> I would argue that the figures in the article do not include the whole
> picture. For a *single* application, he may be close. It's when you deploy
> application n+1 and n+2 that the difference/advantage becomes apparent. He's
> falling into the usual trap of doing TCOs based only on hardware price --
> that isn't the whole story, and he's not including cost of operators, floor
> space, etc. Our studies indicate that the breakdown for TCO is nominally:
>
> 20-23% hw/sw cost
> 37% people
> remainder facilities (power, HVAC, floor space, network bandwidth, etc)
>
> It's kind of weird that people focus on the smallest portion of the problem
> while ignoring the other 70+% of the problem...
>
> -- db



Re: LinuxWorld Article series

2002-04-22 Thread David Boyes

>   But - he's comparing one mid-range sun to one z/900.  Seems like
>  the 37% people and remainder facilities would be the same in both
>  of those.  One sun should be just about as much work/power as one
>  z/900.. in fact, I'd expect one mid-range sun to be a little lower
>  on the power/HVAC requirements.

This is probably more garbled than I want it to be, but I'm short of time,
and running out of battery in the laptop.

For *one* application, one box, he's probably ok. It's taking the larger
view of the fact that most organizations don't have only one application,
nor do they have one box per application.

Let's think about box count first for a moment. Consider that for most
organizations, when you deploy Application X's server in production you need
some extra hardware to make the solution supportable. You need:

1) the production box itself
2) a backup server or hot spare in clustering environment (we are talking
mission crit apps)
3) a development box
4) a test/QA box
5) possibly a regression box in case more than one version is in production
at any given time.

So, the comparison of one z900/z800 is actually against 4, possibly 5 Sun
boxes per application deployed.  You can't double up on the test systems
because you need them to mirror production to be a valid test; and you sure
don't want developers testing on the same box.

So, assuming worst case of 5 boxes per application, we've erased most of
that 18% number down to about 2-3% overall.  What happens when the next
application comes along, call it Application Y? You now need new boxes for
that application.  Can't use the others because they're dedicated to
application X.  So, for Application Y, you now need 1+4 *more* servers.
We're now up to 8 servers for two applications. The trend is clear.
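
A trivial C sketch (mine, not from the post) of that trend, using the worst-case-minus-regression figure of four boxes per application:

/* Quick model of the box-count trend: each application on discrete
 * servers needs production + hot spare + development + test/QA (five
 * with a regression box), while a partitioned machine absorbs them
 * all as logical images. */
#include <stdio.h>

int main(void)
{
    int boxes_per_app = 4;   /* prod, spare, dev, test; 5 w/ regression */
    int apps;

    for (apps = 1; apps <= 5; apps++)
        printf("%d app(s): %2d discrete servers vs 1 partitioned machine\n",
               apps, apps * boxes_per_app);
    return 0;
}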

Note that we have not addressed the floor space or power costs yet -- which
increase each time we add a server. We also can assume no sharing of
resources; they're separate boxes, and you can't move MIPS or I/O w/o
disrupting service. We also are not computing additional overhead for
maintenance (you get a lot of that for free with VM; the Linux stuff takes
some thought to do, but is also doable with a much higher level of
automation).

The real kick for most people is that while the initial cost of the zSeries is
high, it amortizes quickly across multiple applications -- if you take the
model above where you need 4-5 servers to deploy an application, a solution
where the same physical server handles that load and gets partitioned to
supply the same configuration logically, your cost per application decreases
substantially -- 1/n instead of n*4 or more.  The part he's missing in the
article is that in the case of applications n+1 and n+2, there is not
necessarily a hardware acquisition component, or an additional facilities
costs, which are substantial, but fixed for the duration of the capacity
available in the z800/z900, which can be overlapped for normal applications
(the case of applications using 100% of the box is vanishingly rare). The
steps for environmentals and staff are larger, but much less frequent.

>   So - then the argument would be that for 16.4% more, you can get
>  all of the RAS of z/900 hardware, vs. the Sun box.   Is that
>  a fair statement?

Not really. The major argument is that you control the cost of deployment
and operations rather than focus on cost of hardware for a *number* of
applications, not just one, and that the investment you make in support
infrastructure and staffing is overall smaller for the same number of
logical images.  That cost is coming out of the 70+% of the TCO, and is much
more likely to be recurring cost, which is what makes any solution
expensive.



Re: LinuxWorld Article series

2002-04-22 Thread Post, Mark K

David,

No, I've been informed by a reliable source that this is an "MSF'd Amdahl
0700 processor."

Mark Post

-Original Message-
From: David Boyes [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 22, 2002 8:29 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


If I'm reading it correctly, a 6070 is some kind of PowerPC box. Possibly a
R/390? If so, you're facing the same OS/2 based device emulation...

-- db

> Dave,
>
> Not really, sorry.  I'm just a user there, and it sits in
> Texas.  I can ask
> the VM guy that supports it if you're really curious.
>
> Mark Post
>
> -Original Message-
> From: Dave Jones [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, April 20, 2002 9:38 PM
> To: [EMAIL PROTECTED]
> Subject: Re: LinuxWorld Article series
>
>
> Mark,
> I don't recognize the CPU type in the CPUID field. can you
> explain what type
> of system you ran this test on?
> Thanks.
>
> DJ
> > CPUID = FF0240760700



Re: LinuxWorld Article series

2002-04-22 Thread Thomas David Rivers

David Boyes <[EMAIL PROTECTED]>
>
> >  Although the article did have issues, I'm most disconcerted
> >  with some of the bang-per-buck comparisons (one of the
> >  charts showed a mid-range SUN performs at 300% that of the
>  z/900 at only 18% of the cost... and that was a *mid-range*
> >  SUN!)
>
> He's comparing apples and Brazil nuts. It depends a lot on the
> application -- there are cases where the Sun is the right answer, many where
> it's not.  You have to profile the application.

 Wonderful!  I'd be very delighted to have someone set me straight
 on these points...

>
> >  If a mid-range SUN is only 18% of the cost of a (slower) mainframe,
> >  it will make selling mainframe Linux (vs. SUN Linux) a lot harder.
> >  Granted, the RAS facilities of the mainframe are nice, but for
> >  18% of the cost... if you had to, you could buy 3 or 4 SUN boxes,
> >  keeping most of them in the closet as "spares" and still be
> >  cheaper.
>
> I would argue that the figures in the article do not include the whole
> picture. For a *single* application, he may be close. It's when you deploy
> application n+1 and n+2 that the difference/advantage becomes apparent. He's
> falling into the usual trap of doing TCOs based only on hardware price --
> that isn't the whole story, and he's not including cost of operators, floor
> space, etc. Our studies indicate that the breakdown for TCO is nominally:
>
> 20-23% hw/sw cost
> 37% people
> remainder facilities (power, HVAC, floor space, network bandwidth, etc)
>
> It's kind of weird that people focus on the smallest portion of the problem
> while ignoring the other 70+% of the problem...

  Hmm... that could be very true...

  But - he's comparing one mid-range sun to one z/900.  Seems like
 the 37% people and remainder facilities would be the same in both
 of those.  One sun should be just about as much work/power as one
 z/900.. in fact, I'd expect one mid-range sun to be a little lower
 on the power/HVAC requirements.

  So - if we accept that, then really we're talking about 18 percent
 of that 20-23% hardware figure... right?

  I may be just a little "slow" on the up-take here, so bear with
 me while I walk through this...  I _really_ want a nice compelling
 argument here.

  Let's say that the mainframe TCO costs $100.  The hardware costs would
 be $20, the "rest" of the cost (the part that's the same between
 the alternatives) is then $80.

  So - the Sun box would be 18% of the z/900 hardware cost.

  Thus, if the z/900 TCO is $100, the Sun TCO would be $83.6 - a savings
 of 16.4%.

  Granted, a savings of 16.4% is much better than a savings of 82%, but
 16.4% is still quite a significant savings.

  Am I understanding this correctly?  Or, have I missed the boat somewhere?
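
A quick C check of that back-of-the-napkin arithmetic, using the numbers assumed above:

/* Hardware ~20% of a $100 TCO, Sun hardware at 18% of mainframe
 * hardware, everything else assumed equal. */
#include <stdio.h>

int main(void)
{
    double tco_mainframe = 100.0;
    double hw_share      = 0.20;   /* hw/sw portion of TCO */
    double sun_hw_factor = 0.18;   /* Sun hw = 18% of z hw */

    double hw   = tco_mainframe * hw_share;           /* $20.00 */
    double rest = tco_mainframe - hw;                 /* $80.00 */
    double sun  = rest + hw * sun_hw_factor;          /* $83.60 */

    printf("Sun TCO: $%.2f, saving: %.1f%%\n",
           sun, 100.0 * (tco_mainframe - sun) / tco_mainframe);
    return 0;
}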

  So - then the argument would be that for 16.4% more, you can get
 all of the RAS of z/900 hardware, vs. the Sun box.   Is that
 a fair statement?

  Please don't get me wrong - I'm a big proponent of Linux on the
 mainframe; our company has quite a substantial investment in seeing
 it succeed.  I'm just trying to get together a fantastic response
 when asked the question myself, which does come up from time to time.
 What better place to get a reliable answer?

  Then, we need to understand (and address?) performance concerns.  This
 was all under the assumption that the z/900 runs as fast, and hopefully
 faster, than the mid-range Sun.  And, with this "back-of-the-napkin"
 calculation, there are several other issues to consider (virtualization
 technology for one.) And, as you mention, this ignores the very good
 point regarding testing your application "in the environment."

 - Thanks! -
- Dave Rivers -

--
[EMAIL PROTECTED]Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com



Re: LinuxWorld Article series

2002-04-22 Thread David Boyes

>  Although the article did have issues, I'm most disconcerted
>  with some of the bang-per-buck comparisons (one of the
>  charts showed a mid-range SUN performs at 300% that of the
>  z/900 at only 18% of the cost... and that was a *mid-range*
>  SUN!)

He's comparing apples and Brazil nuts. It depends a lot on the
application -- there are cases where the Sun is the right answer, many where
it's not.  You have to profile the application.

>  If a mid-range SUN is only 18% of the cost of a (slower) mainframe,
>  it will make selling mainframe Linux (vs. SUN Linux) a lot harder.
>  Granted, the RAS facilities of the mainframe are nice, but for
>  18% of the cost... if you had to, you could buy 3 or 4 SUN boxes,
>  keeping most of them in the closet as "spares" and still be
>  cheaper.

I would argue that the figures in the article do not include the whole
picture. For a *single* application, he may be close. It's when you deploy
application n+1 and n+2 that the difference/advantage becomes apparent. He's
falling into the usual trap of doing TCOs based only on hardware price --
that isn't the whole story, and he's not including cost of operators, floor
space, etc. Our studies indicate that the breakdown for TCO is nominally:

20-23% hw/sw cost
37% people
remainder facilities (power, HVAC, floor space, network bandwidth, etc)

It's kind of weird that people focus on the smallest portion of the problem
while ignoring the other 70+% of the problem...

-- db



Re: LinuxWorld Article series

2002-04-22 Thread Bernd Oppolzer

Hello all,

yes, I know this, but this is the OLD part of the OS. It could not have been
done in normal PL/1 because of too much overhead and because of the (too) many
features of the language, which prevent effective optimization.

C was designed as a systems programming language, so you have the
possibility to use it for system development. And IBM has finally started to
do it this way, and I'm pretty sure there was much testing (and
discussion) before this decision. So I think the author of the article is
completely wrong when he says that the mainframe does not work well with C.
And my personal observations show the same.

Regards

Bernd



On Sun, 21 Apr 2002, you wrote:
> Dave Jones <[EMAIL PROTECTED]> writes:
> >One statement struck me as clearly incorrect is the following:
> >
> >"In contrast, most mainframe control environments, including loadable
> >libraries and related systems level applications, are written and
> >maintained very close to the hardware -- usually in PL/1 or assembler
> >but often with handwritten or at least "tweaked" object code -- ...
>
> The author is right, almost.  If you read "PL/1"
> as "PL/S", "PL/AS", "PL/X", or whatever IBM calls
> their internal-use-only systems programming language
> these days, the comment makes sense and is even correct.
> Most mainframe control environments (SCPs) are indeed
> written in PL/whatever or Assembler.
>
> Ross Patterson
> Computer Associates



Re: LinuxWorld Article series

2002-04-21 Thread Patterson, Ross

Dave Jones <[EMAIL PROTECTED]> writes:
>One statement struck me as clearly incorrect is the following:
>
>"In contrast, most mainframe control environments, including loadable
>libraries and related systems level applications, are written and
>maintained very close to the hardware -- usually in PL/1 or assembler
>but often with handwritten or at least "tweaked" object code -- ...

The author is right, almost.  If you read "PL/1"
as "PL/S", "PL/AS", "PL/X", or whatever IBM calls
their internal-use-only systems programming language
these days, the comment makes sense and is even correct.
Most mainframe control environments (SCPs) are indeed
written in PL/whatever or Assembler.

Ross Patterson
Computer Associates



Re: LinuxWorld Article series

2002-04-21 Thread Ferguson, Neale

Actually, my warning was slightly off beam anyway. putc will use mutex logic
whether or not you use pthreads. If you know your program will not share the
stream across threads, that's when you use _IO_putc_unlocked().
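
A minimal C sketch of the difference, using putc_unlocked(), the POSIX name
for what glibc provides underneath as _IO_putc_unlocked() (the file name and
loop count here are made up for illustration):

    #include <stdio.h>

    int main(void)
    {
        int i;
        FILE *f = fopen("/tmp/putc-demo", "w");   /* illustrative path */
        if (!f)
            return 1;

        /* putc() takes the stream's mutex on every single call. */
        for (i = 0; i < 1000000; i++)
            putc('x', f);

        /* putc_unlocked() skips the lock, which is safe only if no other
           thread touches the stream; flockfile() takes it once instead. */
        flockfile(f);
        for (i = 0; i < 1000000; i++)
            putc_unlocked('x', f);
        funlockfile(f);

        fclose(f);
        return 0;
    }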



Re: LinuxWorld Article series

2002-04-21 Thread Post, Mark K

Dave,

Not really, sorry.  I'm just a user there, and it sits in Texas.  I can ask
the VM guy that supports it if you're really curious.

Mark Post

-Original Message-
From: Dave Jones [mailto:[EMAIL PROTECTED]]
Sent: Saturday, April 20, 2002 9:38 PM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


Mark,
I don't recognize the CPU type in the CPUID field. can you explain what type
of system you ran this test on?
Thanks.

DJ

> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
> Post, Mark K
> Sent: Saturday, April 20, 2002 3:43 PM
> To: [EMAIL PROTECTED]
> Subject: Re: LinuxWorld Article series
>
>
> For what it's worth:
> bonnie++-1.02a
>
> $ hcp q cpu
> CPUID = FF0240760700
>
> $ ./bonnie++ -s 256
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    256M  1334  99  9001  43  4506  10  1347  99  9945  10 433.8   6
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    94  96   641  99  2349  99    94  99   771  99   643  96
> glt3903,256M,1334,99,9001,43,4506,10,1347,99,9945,10,433.8,6,16,94,96,641,99,2349,99,94,99,771,99,643,96
>
> $ ./bonnie++ -s 125
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    125M  1344  99  9743  27  4430   9  1288  99  9445  10 514.5   7
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    97  99   643  99  2318 100    97  99   758  99   632  96
> glt3903,125M,1344,99,9743,27,4430,9,1288,99,9445,10,514.5,7,16,97,99,643,99,2318,100,97,99,758,99,632,96
>
>



Re: LinuxWorld Article series

2002-04-21 Thread Volker Bandke

can easily be.  I was referring to a long thread in IBM-MAIN where IBMers
commented on the development language they use, and it was neither C nor
Assembler (with one exception, which was in some rather old code that is
still in use today).

Of course, the IBM-MAIN list has such a high traffic rate, and so many
messages, that I was unable to find sufficiently exact keywords to locate
the messages in the archive.

I can trust my memory on the information it received, but of course you
can't (trust mine, that is)

As this is an OT message, let us just leave it as it is


 With kind Regards|\  _,,,---,,_
ZZZzz /,`.-'`'-.  ;-;;,
 Volker Bandke   |,4-  ) )-,_. ,\ (  `'-'
  (BSP GmbH)'---''(_/--'  `-'\_)

  Machines should work; people should think.

(Another bit of Wisdom from my fortune cookie jar)


-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Phil Payne
Sent: Sunday, April 21, 2002 10:12 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


> Nope, it is not.

Yes it is.  The License Manager, for instance.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803
  +49 173 6242039



Re: LinuxWorld Article series

2002-04-21 Thread John Alvord

On Sat, 20 Apr 2002, Jay G Phelps wrote:

> Despite the poorly written article, I have actually been somewhat
> disappointed by the test results I have been getting on my MP3000 P30 Linux
> system(s).  In particular, the Bonnie++ test I did last week showed poor
> results in most areas.  Granted, I am running under VM in an LPAR, but I
> still expected better results for I/O related work.
>
> On the other hand, running Tomcat and a Java/JSP based web site provided
> reasonable performance so I am not ready to give up yet ;-)
>
> Would anyone running Linux on mainframe with channel attached DASD be
> willing to do a Bonnie++ test and post the results?
>
I have several times read Linus on the subject of benchmarks like Bonnie
and dbench. They are designed to torture the environment and almost never
reflect actual workload. With them, some corner case bugs are detected and
solved, but performance related problems based on those types of tests are
almost always discounted by top developers.

I haven't seen definitive Linux/390 test results. There have certainly
been enough published examples of problems that I would want to do serious
performance test of any proposed workload before going ahead. One recent
case involved slow DASD performance, but the DASD performance was limited
independent of Linux... the DASD was emulated through some OS/2
subsystem. Linux is never going to give you better performance than the
base system.

john alvord



Re: LinuxWorld Article series

2002-04-21 Thread Rich Smrcina

There is development for VSE being done in C as well.  Some of the TCP/IP
functionality available for DB2 and also some of the new Connector support.

On Sunday 21 April 2002 06:41 am, Mark Perry wrote:
> Bernd is correct, New IBM product development (z/OS) is in C and C++ with
> support routines in Assembler when required/justified.
>
> Mark
>

--
Rich Smrcina
Sytek Services, Inc.
Milwaukee, WI
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2003 in Winston-Salem, NC.
April 25-29, 2003
For details see http://www.wavv.org



Re: LinuxWorld Article series (and PL/I)

2002-04-21 Thread Tuomo Stauffer

PL/I - still my favorite - even after 15 years with 'C' and other
obscure languages...

http://www.uni-muenster.de/ZIV/Mitarbeiter/EberhardSturm/PL1andC.html

ps. actually PL/I was defined by the Vienna Definition Language (or
something like that - failing memory...) to match the hardware and
(IMHO) looks better - how do you use pseudo-registers in "C"?
And how do you define a "task" for a "C" procedure?
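
(C itself has no language-level tasking to match PL/I's TASK option; the
usual Unix answer is POSIX threads. A minimal sketch, with the subtask
function and its message invented for illustration:)

    #include <pthread.h>
    #include <stdio.h>

    /* Roughly what PL/I expresses with CALL ... TASK: run a procedure
       concurrently with its caller. */
    static void *subtask(void *arg)
    {
        printf("subtask: %s\n", (const char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        if (pthread_create(&tid, NULL, subtask, "running concurrently") != 0)
            return 1;
        pthread_join(tid, NULL);   /* wait for the subtask to finish */
        return 0;
    }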

have a nice day - tuomo ( [EMAIL PROTECTED] )

- Original Message -
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, April 20, 2002 6:50 PM
Subject: Re: LinuxWorld Article series


> > And for some other topic: as mentioned earlier, PL/1 "close to the
> > hardware" is complete nonsense. I did much benchmarking in the past
> > with PL/1 and C/370, and I found that C/370 performs very well (better
> > than PL/1), and I don't see any performance problems with C on the
> > mainframe. It depends on the quality of the compiler, and I think, the
> > GNU compiler will generate very fast code on the mainframe also, cause
> > most optimization is done before the code generation steps. If there
> > were problems, you simply would have to do some work in the code
> > generation for the mainframe. But that's all. It could easily be done.
>
> C will look good compared to PL/I if for no other reason than:
>
> C combines the power of assembler language
> with the ease of use of assembler language.
>
> If you've studied any of the PDP-11's instruction set, C looks like
> some kind of macro-assembler for it.
>
> So given a reasonable compiler C will tend to look good since the
> base language is pretty low.
>
> The weakness comes in addressing "records" in a file since a
> "record" is a slippery concept w/i Unix-  Unlike VM, VSE or MVS.
>
> --
>  John R. Campbell   Speaker to Machines   [EMAIL PROTECTED]
>  - As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!
>    Disclaimer:  All opinions expressed above are those of John R. Campbell
>                 alone and are seriously unlikely to reflect the opinions of
>                 his employer(s) or lackeys thereof.  Anyone who says
>                 differently is itching for a fight!
>



Re: LinuxWorld Article series

2002-04-21 Thread Thomas David Rivers

Bernd Oppolzer <[EMAIL PROTECTED]> wrote:
>
> By the way: most of the new development on IBM systems (for example LE) is
> done in C, as you can see by looking at the LE modules.
>
> C is not very widely used by IBM customers; there are only a few large
> companies in Germany using C/370 for mission-critical apps. But I have the
> impression that an increasing part of system-related development for
> mainframes is done in C, by IBM and others.
>
> The guy who wrote the article has never heard of this, I guess.
>
> C is simple, working, portable, great fun (personal opinion).
>
> Regards
>
> Bernd

 Well... not to be too "advertisy", but we can certainly give you many
 examples of people using C and C++ for mainframe development, on
 various operating systems.  See http://www.dignus.com for more info.

 But - to offer my take on the article - and this is only my
 opinion

 Although the article did have issues, I'm most disconcerted
 with some of the bang-per-buck comparisons (one of the
 charts showed a mid-range SUN performs at 300% that of the
 z/900 at only 18% of the cost... and that was a *mid-range*
 SUN!)

 You really have to get past the history stuff (just skip through
 it, most of the people here already know the history) to get
 at the point of the article.  The point, to me, seems to be
 that Linux on the mainframe didn't make sense because a) it ran
 too slow, and b) the hardware/software was too expensive.

 These are quite significant allegations - which I hope someone
 (IBM?) will spend the time/effort to refute, or at least address
 in future hardware.

 If a mid-range SUN is only 18% of the cost of a (slower) mainframe,
 it will make selling mainframe Linux (vs. SUN Linux) a lot harder.
 Granted, the RAS facilities of the mainframe are nice, but for
 18% of the cost... if you had to, you could buy 3 or 4 SUN boxes,
 keeping most of them in the closet as "spares" and still be
 cheaper.

 Now - that's what I got from the article - I have absolutely
 no idea if these numbers are correct... I certainly hope there
 was significant room for error... and that someone will correct
 the impression.

 I'm also interested in seeing the bonnie and bonnie++ results.

- Dave Rivers -

--
[EMAIL PROTECTED]Work: (919) 676-0847
Get your mainframe programming tools at http://www.dignus.com


>
>
> >
> > There is a lot of stuff bubbling around in IBM also. They have some top
> > guys working on NUMA machines that are regularly collaborating (sending
> > code to) the Linux kernel development tree.
> >
> > john alvord
>



Re: LinuxWorld Article series

2002-04-21 Thread Mark Perry

Bernd is correct, New IBM product development (z/OS) is in C and C++ with
support routines in Assembler when required/justified.

Mark

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Volker Bandke
Sent: 21 April 2002 07:55
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


Nope, it is not.  I am not quite sure what the current name is, as the
compiler is not freely available.  Names used in the past were PL/S, PL/X,
PLAS, PLAS 3, etc



 With kind Regards|\  _,,,---,,_
ZZZzz /,`.-'`'-.  ;-;;,
 Volker Bandke   |,4-  ) )-,_. ,\ (  `'-'
  (BSP GmbH)'---''(_/--'  `-'\_)

  From an actual insurance claim: An airplane hit the house and came in.

(Another bit of Wisdom from my fortune cookie jar)


-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Bernd Oppolzer
Sent: Sunday, April 21, 2002 12:23 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


By the way: most of the new development on IBM systems (for example LE) is
done in C, as you can see by looking at the LE modules.

C is not very widely used by IBM customers; there are only a few large
companies in Germany using C/370 for mission-critical apps. But I have the
impression that an increasing part of system-related development for
mainframes is done in C, by IBM and others.

The guy who wrote the article has never heard of this, I guess.

C is simple, working, portable, great fun (personal opinion).

Regards

Bernd


>
> There is a lot of stuff bubbling around in IBM also. They have some top
> guys working on NUMA machines that are regularly collaborating (sending
> code to) the Linux kernel development tree.
>
> john alvord



Re: LinuxWorld Article series

2002-04-21 Thread Phil Payne

> Nope, it is not.

Yes it is.  The License Manager, for instance.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803
  +49 173 6242039



Re: LinuxWorld Article series

2002-04-20 Thread Volker Bandke

Nope, it is not.  I am not quite sure what the current name is, as the
compiler is not freely available.  Names used in the past were PL/S, PL/X,
PLAS, PLAS 3, etc



 With kind Regards|\  _,,,---,,_
ZZZzz /,`.-'`'-.  ;-;;,
 Volker Bandke   |,4-  ) )-,_. ,\ (  `'-'
  (BSP GmbH)'---''(_/--'  `-'\_)

  From an actual insurance claim: An airplane hit the house and came in.

(Another bit of Wisdom from my fortune cookie jar)


-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
Bernd Oppolzer
Sent: Sunday, April 21, 2002 12:23 AM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


By the way: most of the new development on IBM systems (for example LE) is
done in C, as you can see by looking at the LE modules.

C is not very widely used by IBM customers; there are only a few large
companies in Germany using C/370 for mission-critical apps. But I have the
impression that an increasing part of system-related development for
mainframes is done in C, by IBM and others.

The guy who wrote the article has never heard of this, I guess.

C is simple, working, portable, great fun (personal opinion).

Regards

Bernd


>
> There is a lot of stuff bubbling around in IBM also. They have some top
> guys working on NUMA machines that are regularly collaborating (sending
> code to) the Linux kernel development tree.
>
> john alvord



Re: LinuxWorld Article series

2002-04-20 Thread John Summerfield

[EMAIL PROTECTED] said:
> Please note that the use of putc in a multithreaded environment under
> any Linux yields horrible results due to locking/mutexes. Replace with
> _IO_putc_unlocked() and see the difference. I'm not sure if Bonnie
> uses pthreads (I ran it some months ago but can't recall).


Don't confuse bonnie and bonnie++ - they're two different programs and yield
results that aren't directly comparable.

From the bonnie++ changelog:
  * Reverted zcav to the 1.00a version and then added the code for -u, -g, and
the fix for large numbers of data points.  The multi-threaded zcav code
will go into 1.90 (the pre-2.00 tree).
Bonnie++ versions < 1.90 will never again have threading code.

and from earlier in its life:
  * Version 1.[0-8]0 will use fork().  Version 1.90 and above will use POSIX
threads and include the concurrent bonnie++ functionality I've been
promising for so long.


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-20 Thread Ferguson, Neale

Please note that the use of putc in a multithreaded environment under any
Linux yields horrible results due to locking/mutexes. Replace with
_IO_putc_unlocked() and see the difference. I'm not sure if Bonnie uses
pthreads (I ran it some months ago but can't recall).

> -Original Message-
> For what it's worth:
> bonnie++-1.02a
>
> $ hcp q cpu
> CPUID = FF0240760700
>
> $ ./bonnie++ -s 256
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    256M  1334  99  9001  43  4506  10  1347  99  9945  10 433.8   6
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    94  96   641  99  2349  99    94  99   771  99   643  96
> glt3903,256M,1334,99,9001,43,4506,10,1347,99,9945,10,433.8,6,16,94,96,641,99,2349,99,94,99,771,99,643,96
>
> $ ./bonnie++ -s 125
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    125M  1344  99  9743  27  4430   9  1288  99  9445  10 514.5   7
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    97  99   643  99  2318 100    97  99   758  99   632  96
> glt3903,125M,1344,99,9743,27,4430,9,1288,99,9445,10,514.5,7,16,97,99,643,99,2318,100,97,99,758,99,632,96



Re: LinuxWorld Article series

2002-04-20 Thread John Summerfield

> Here's the output from a G5 with a Shark:
>
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> websp.corporat 184M  2153  99 14790  14  3562   7  2311  98 98460  97 859.4  10
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16   141  99   791  99  4426  99   152  98  1659  99  1202  96
> websp,184M,2153,99,14790,14,3562,7,2311,98,98460,97,859.4,10,16,141,99,791,99,4426,99,152,98,1659,99,1202,96
>

I can't help myself;-) Here's my Athlon 1.4:
Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
numbat 128M 18244  99 24463  15  7563   4 11467  55 42416  12 464.7   1
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16  1077  99 + +++ + +++   973  98 + +++  2609  98
numbat,128M,18244,99,24463,15,7563,4,11467,55,42416,12,464.7,1,16,1077,99,+,+++,+,+++,973,98,+,+++,2609,98


Those pluses mean "too fast to measure."


okay, so I cheated a little. The machine has 256 Mbytes of RAM. Here's a proper test:
Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
numbat   4G 16842  92 21793  14  7058   4 16540  81 27016   9  78.2   0
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16  1059  97 + +++ + +++   900  98 + +++  2607  97
numbat,4G,16842,92,21793,14,7058,4,16540,81,27016,9,78.2,0,16,1059,97,+,+++,+,+++,900,98,+,+++,2607,97

This is on a single ATA disk, a year or so old - I think I bought it in 2000.

I note the G5 CPU was a little busier doing its 96 Mbytes/sec.

--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-20 Thread Rich Smrcina

Here's the output from a G5 with a Shark:

Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
websp.corporat 184M  2153  99 14790  14  3562   7  2311  98 98460  97 859.4  10
                --Sequential Create-- --Random Create--
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16   141  99   791  99  4426  99   152  98  1659  99  1202  96
websp,184M,2153,99,14790,14,3562,7,2311,98,98460,97,859.4,10,16,141,99,791,99,4426,99,152,98,1659,99,1202,96


> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    256M  1334  99  9001  43  4506  10  1347  99  9945  10 433.8   6
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    94  96   641  99  2349  99    94  99   771  99   643  96
> glt3903,256M,1334,99,9001,43,4506,10,1347,99,9945,10,433.8,6,16,94,96,641,99,2349,99,94,99,771,99,643,96
>


--
Rich Smrcina
Sytek Services, Inc.
Milwaukee, WI
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2003 in Winston-Salem, NC.
April 25-29, 2003
For details see http://www.wavv.org



Re: LinuxWorld Article series

2002-04-20 Thread soup

> And for some other topic: as mentioned earlier, PL/1 "close to the hardware" is
> complete nonsense. I did much benchmarking in the past with PL/1 and C/370, and
> I found that C/370 performs very well (better than PL/1), and I don't see any
> performance problems with C on the mainframe. It depends on the quality of the
> compiler, and I think, the GNU compiler will generate very fast code on the
> mainframe also, cause most optimization is done before the code generation
> steps. If there were problems, you simply would have to do some work in the
> code generation for the mainframe. But that's all. It could easily be done.

C will look good compared to PL/I if for no other reason than:

C combines the power of assembler language
with the ease of use of assembler language.

If you've studied any of the PDP-11's instruction set, C looks like
some kind of macro-assembler for it.

So given a reasonable compiler C will tend to look good since the
base language is pretty low.

The weakness comes in addressing "records" in a file since a
"record" is a slippery concept w/i Unix-  Unlike VM, VSE or MVS.

--
 John R. Campbell   Speaker to Machines [EMAIL PROTECTED]
 - As a SysAdmin, yes, I CAN read your e-mail, but I DON'T get that bored!
   Disclaimer:  All opinions expressed above are those of John R. Campbell
alone and are seriously unlikely to reflect the opinions of
his employer(s) or lackeys thereof.  Anyone who says
differently is itching for a fight!



Re: LinuxWorld Article series

2002-04-20 Thread Dave Jones

Mark,
I don't recognize the CPU type in the CPUID field. can you explain what type
of system you ran this test on?
Thanks.

DJ

> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
> Post, Mark K
> Sent: Saturday, April 20, 2002 3:43 PM
> To: [EMAIL PROTECTED]
> Subject: Re: LinuxWorld Article series
>
>
> For what it's worth:
> bonnie++-1.02a
>
> $ hcp q cpu
> CPUID = FF0240760700
>
> $ ./bonnie++ -s 256
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    256M  1334  99  9001  43  4506  10  1347  99  9945  10 433.8   6
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    94  96   641  99  2349  99    94  99   771  99   643  96
> glt3903,256M,1334,99,9001,43,4506,10,1347,99,9945,10,433.8,6,16,94,96,641,99,2349,99,94,99,771,99,643,96
>
> $ ./bonnie++ -s 125
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
>                 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> glt3903    125M  1344  99  9743  27  4430   9  1288  99  9445  10 514.5   7
>                 --Sequential Create-- --Random Create--
>                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>              16    97  99   643  99  2318 100    97  99   758  99   632  96
> glt3903,125M,1344,99,9743,27,4430,9,1288,99,9445,10,514.5,7,16,97,99,643,99,2318,100,97,99,758,99,632,96
>
>



Re: LinuxWorld Article series

2002-04-20 Thread Post, Mark K

Rich,

Go to http://www.coker.com.au/bonnie++/, download the source, ./configure,
make, ./bonnie++ and then wait for results.  Or ./bonnie++ -s sizeinMB,
where a size of at least twice the amount of RAM you have is preferred.

Mark Post

-Original Message-
From: Rich Smrcina [mailto:[EMAIL PROTECTED]]
Sent: Saturday, April 20, 2002 4:46 PM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


If you explain how, I will run a test and post the results.


--
Rich Smrcina
Sytek Services, Inc.
Milwaukee, WI
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2003 in Winston-Salem, NC.
April 25-29, 2003
For details see http://www.wavv.org



Re: LinuxWorld Article series

2002-04-20 Thread Bernd Oppolzer

I would like to add some personal views to this discussion.

First I must admit that I have no experience with LINUX-390. But I worked for
years with VM and CMS. At this shop there were always two or three production
VSE guests with CICS and SAP R/2 and some DB2/VSE (former SQL/DS) guests running
and lots of CMS users. We never had any performance problems. We had also
capacity left on the same machine to convert HPGL graphic files coming from
some workstations to a graphics metafile format called GKS and then plotting
them on a large Calcomp Plotter, with self-written PASCAL routines to optimize
the paper usage by shifting the pictures. And these were not
the big boxes from IBM; I don't remember the numbers exactly but I think the
first was a 4381/12 and then came a 3083 and so on. I think that virtualization
in VM is done in such a perfect way (with hardware support etc.) that you
cannot call it "emulation". Emulation, in my opinion, means that some hardware
"emulates" other hardware. That's slow, of course, but this has nothing to do
with VM.

And for some other topic: as mentioned earlier, PL/1 "close to the hardware" is
complete nonsense. I did much benchmarking in the past with PL/1 and C/370, and
I found that C/370 performs very well (better than PL/1), and I don't see any
performance problems with C on the mainframe. It depends on the quality of the
compiler, and I think, the GNU compiler will generate very fast code on the
mainframe also, cause most optimization is done before the code generation
steps. If there were problems, you simply would have to do some work in the
code generation for the mainframe. But that's all. It could easily be done.

In a large project in the past, we developed the same software (mathematical
calculations for an insurance company) in C with targets OS/390, OS/2, WIN NT
and 98, and (in recent months) Sun Solaris, consisting of several million
lines of source code. The same software runs on the mainframe and on the
laptops of the insurance agents, no difference. And we never had performance
problems on the mainframe, although the software is used as part of database
transactions with IMS/DC and used in parallel by many people. But on the PC,
there were heavy performance problems; we almost didn't meet our service
criteria. We had to do lots of advanced things, for example multithreading on
the laptops, although these are single-user machines. My personal experience
is: forget about performance issues on the mainframe, it will probably work,
no worry. But the same is not always true for PCs.

Best regards

Bernd Oppolzer



>
> One statement struck me as clearly incorrect is the following:
>
> "In contrast, most mainframe control environments, including loadable
> libraries and related systems level applications, are written and maintained
> very close to the hardware -- usually in PL/1 or assembler but often with
> handwritten or at least
> "tweaked" object code -- to use far fewer cycles than their C language Unix
> equivalents.
>
> This statement is wrong on two separate counts:
>
> 1) most mainframe programming (well above 50%) is still done in COBOL, with
> PL/I, Assembler, Fortran, etc. splitting the rest.
> 2) PL/I is lots of things, but "close to the hardware" ain't one of them.
> :-)
>



Re: LinuxWorld Article series

2002-04-20 Thread Post, Mark K

For what it's worth:
bonnie++-1.02a

$ hcp q cpu
CPUID = FF0240760700

$ ./bonnie++ -s 256
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
glt3903    256M  1334  99  9001  43  4506  10  1347  99  9945  10 433.8   6
                --Sequential Create-- --Random Create--
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16    94  96   641  99  2349  99    94  99   771  99   643  96
glt3903,256M,1334,99,9001,43,4506,10,1347,99,9945,10,433.8,6,16,94,96,641,99,2349,99,94,99,771,99,643,96

$ ./bonnie++ -s 125
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.02a   --Sequential Output-- --Sequential Input- --Random-
                -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine    Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
glt3903    125M  1344  99  9743  27  4430   9  1288  99  9445  10 514.5   7
                --Sequential Create-- --Random Create--
                -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
          files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
             16    97  99   643  99  2318 100    97  99   758  99   632  96
glt3903,125M,1344,99,9743,27,4430,9,1288,99,9445,10,514.5,7,16,97,99,643,99,2318,100,97,99,758,99,632,96

-Original Message-
From: Jay G Phelps [mailto:[EMAIL PROTECTED]]
Sent: Saturday, April 20, 2002 2:20 PM
To: [EMAIL PROTECTED]
Subject: Re: LinuxWorld Article series


Despite the poorly written article, I have actually been somewhat
disappointed by the test results I have been getting on my MP3000 P30 Linux
system(s).  In particular, the Bonnie++ test I did last week showed poor
results in most areas.  Granted, I am running under VM in an LPAR, but I
still expected better results for I/O related work.

On the other hand, running Tomcat and a Java/JSP based web site provided
reasonable performance so I am not ready to give up yet ;-)

Would anyone running Linux on mainframe with channel attached DASD be
willing to do a Bonnie++ test and post the results?



Re: LinuxWorld Article series

2002-04-20 Thread Bernd Oppolzer

By the way: most of the new development on IBM systems (for example LE) is done
in C, as you can see by looking at the LE modules.

C is not very widely used by IBM customers; there are only a few large
companies in Germany using C/370 for mission-critical apps. But I have the
impression that an increasing part of system-related development for mainframes
is done in C, by IBM and others.

The guy who wrote the article has never heard of this, I guess.

C is simple, working, portable, great fun (personal opinion).

Regards

Bernd


>
> There is a lot of stuff bubbling around in IBM also. They have some top
> guys working on NUMA machines that are regularly collaborating (sending
> code to) the Linux kernel development tree.
>
> john alvord



Re: LinuxWorld Article series

2002-04-20 Thread John Summerfield

> Despite the poorly written article, I have actually been somewhat
> disappointed by the test results I have been getting on my MP3000 P30 Linux
> system(s).  In particular, the Bonnie++ test I did last week showed poor
> results in most areas.  Granted, I am running under VM in an LPAR, but I
> still expected better results for I/O related work.
>
> On the other hand, running Tomcat and a Java/JSP based web site provided
> reasonable performance so I am not ready to give up yet ;-)
>
> Would anyone running Linux on mainframe with channel attached DASD be
> willing to do a Bonnie++ test and post the results?


I suggest bonnie too - it gives higher throughput than bonnie++ on my
desktop hardware.



--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my
disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-20 Thread Rich Smrcina

If you explain how, I will run a test and post the results.

On Saturday 20 April 2002 01:19 pm, you wrote:
> Despite the poorly written article, I have actually been somewhat
> disappointed by the test results I have been getting on my MP3000 P30 Linux
> system(s).  In particular, the Bonnie++ test I did last week showed poor
> results in most areas.  Granted, I am running under VM in an LPAR, but I
> still expected better results for I/O related work.
>
> On the other hand, running Tomcat and a Java/JSP based web site provided
> reasonable performance so I am not ready to give up yet ;-)
>
> Would anyone running Linux on mainframe with channel attached DASD be
> willing to do a Bonnie++ test and post the results?

--
Rich Smrcina
Sytek Services, Inc.
Milwaukee, WI
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Catch the WAVV!  Stay for Requirements and the Free for All!
Update your S/390 skills in 4 days for a very reasonable price.
WAVV 2003 in Winston-Salem, NC.
April 25-29, 2003
For details see http://www.wavv.org



Re: LinuxWorld Article series

2002-04-20 Thread soup

> > -Original Message-
> > From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
> > Hall, Ken (ECSS)
> > Sent: Friday, April 19, 2002 12:13 PM
> > To: [EMAIL PROTECTED]
> > Subject: LinuxWorld Article series
> >
> >
> > Anyone seen this?
> >
> > Aside from some (fairly glaring) technical inaccuracies, I can't
> > see much I'm qualified to dispute.
> >
> > http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html
>
> But the "glaring technical inaccuracies" lead me to question his conclusions
> about Linux on S/390. I suspect that
> while he knows a great deal about the Unix environment and the typical Unix
> user mindset, his grasp of the "mainframe"
> world is limited, to say the least. He seems to fixate on the mainframe as
> "batch-oriented" and Unix as interactive, and
> that "interactive" doesn't work well on mainframes. He obviously has never
> used CMS on VM (or CP/VM as he calls it...);
> it's as interactive and responsive as any Linux system I've used. And his
> statement that TSO and CMS "load as batch jobs" is just pure nonsense..

Actually, one relative I spoke with (in depth) about Unix back
over 20 years ago (working at Western Electric) indicated that Unix
is very good at character-oriented stuff since files have a "character"
granularity and there is no "innate" record-ness therein, which the
mainframe systems at the time were oriented around (LRECL, RECFM,
BLKSIZE, remember those?), so, for some workloads, a Unix-based system
was at a disadvantage-  and remains so.  FBA comes naturally to Unix
since it evolved on non-IBM hardware;  the Xerox Sigma series (5, 7,
9) disk drives that I worked with were FBA'd versions of the 3330 made
by CDC, as were the disk drives for the UNIVAC 1100, and the smaller
minicomputers all used hard-sectored disk drives too.  IBM was the
only purveyor I know of with CKD drive architectures (which is, really,
soft-sectoring).  (The RCA Spectra-70, which was reborn as the UNIVAC
System-80, may have been CKD-ish, but I didn't know much about it,
despite playing with TSOS on a -70/46.)

One of the other items I found annoying was ignoring the whole
issue of RAS (as explained in Appendix A's comparison between the
Intel x86 architecture and the s/390 CP architecture) which explains
that speed isn't as important as being able to rely on the results
with an ability to service portions of the systems while the rest
of it is still operating (the way memory is handled, for instance).

Totally ignored was the basic compromise implicit in going to a
mainframe:  the ability to trust ALL of the hardware.

The problem is that there are many disconnects here;  While my
understanding of virtualizing the hardware is limited (though Melinda
Varian's paper was extremely educational and put a lot into context
for me) even I could see some gaps in the logic.

Mind you, I've a fairly eclectic background, and most of the article
didn't _smell_ quite right to me.  There were all kinds of tangents
it seemed to go off on.

Personally, I want to see Linux be successful on the s/390 architecture
because it's neat, but I'd also have to agree that we can't do this
blindly.

And while a set of benchmarks is ludicrous for a virtual instance
(and, I suspect, even an LPAR), some kind of metrics for the bare metal
would not hurt (though nobody can afford to turn over a whole piece
of BFI like a z800 or z900 to run such tests since it's such expensive
hardware).  I suspect nobody at IBM can get enough bare metal set aside
long enough to run these tests anyway.

> Overall this article appears to be not so much concerned with Linux running
> on a S/390 environment, but a diatribe against
> mainframes in general and the overall superiority of SUN boxes. That seems
> to be the whole thrust of the paragraphs on
> "mutually contingent evolution." (whatever that is.).

There are some things a mainframe is good for:

   1)   Maximum single-thread performance
   2)   Maximum I/O Bandwidth
   3)   Maximum I/O Connectivity

You need single thread performance for many tasks in business-  like
balancing a binary tree structure (which means the I/O can't be
sluggish either) or performing the merge phase of a sortation job.

You won't be running SETI@Home on these things, y'know.  It ain't for
 

Re: LinuxWorld Article series

2002-04-20 Thread John Alvord

On Sat, 20 Apr 2002, Phil Payne wrote:

> > I found it interesting that he wrote about CP/40. That was the first
> > example of a 360-style operating system using virtual memory with the
> > equivalent of modern TLBs. [It had been done on other architectures.] The
> hardware was a one-off created for the Cambridge Scientific Center
> > (Mass). And CP/67 was hosted virtually, and CP/67 begat VM/370, and VM/370
> > begat .
>
> I would like to shake the hand of the guy who came up with 'Conversational'.
>
> --
>   Phil Payne
>   http://www.isham-research.com
>   +44 7785 302 803
>   +49 173 6242039
>
It was originally Cambridge Monitor System and became Conversational Monitor
System in VM/370. There was a Yorktown Monitor System, which leaked EXEC2 into
CMS.

This reads like one of those histories of rock bands, doesn't it?

And the ideas in VM/CMS didn't arise in a void. Compatible Time Sharing
System ran on high-end 707X hardware (IBM second generation). I remember
seeing a list of commands, like LISTFILE, and the outputs looked almost
identical in form. The virtual hardware had been presaged by the Atlas
computer over in England years before.

john alvord



Re: LinuxWorld Article series

2002-04-20 Thread Jay G Phelps

Despite the poorly written article, I have actually been somewhat
disappointed by the test results I have been getting on my MP3000 P30 Linux
system(s).  In particular, the Bonnie++ test I did last week showed poor
results in most areas.  Granted, I am running under VM in an LPAR, but I
still expected better results for I/O related work.

On the other hand, running Tomcat and a Java/JSP based web site provided
reasonable performance so I am not ready to give up yet ;-)

Would anyone running Linux on mainframe with channel attached DASD be
willing to do a Bonnie++ test and post the results?



Re: LinuxWorld Article series

2002-04-20 Thread Phil Payne

> I found it interesting that he wrote about CP/40. That was the first
> example of a 360-style operating system using virtual memory with the
> equivalent of modern TLBs. [It had been done on other architectures.] The
> hardware was a one-off created for the Cambridge Scientific Center
> (Mass). And CP/67 was hosted virtually, and CP/67 begat VM/370, and VM/370
> begat .

I would like to shake the hand of the guy who came up with 'Conversational'.

--
  Phil Payne
  http://www.isham-research.com
  +44 7785 302 803
  +49 173 6242039



Re: LinuxWorld Article series

2002-04-20 Thread John Alvord

On Sat, 20 Apr 2002, Dave Jones wrote:

> > -Original Message-
> > From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
> > Hall, Ken (ECSS)
> > Sent: Friday, April 19, 2002 12:13 PM
> > To: [EMAIL PROTECTED]
> > Subject: LinuxWorld Article series
> >
> >
> > Anyone seen this?
> >
> > Aside from some (fairly glaring) technical inaccuracies, I can't
> > see much I'm qualified to dispute.
> >
> > http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html
> >
>
> But the "glaring technical inaccuracies" lead me to question his conclusions
> about Linux on S/390. I suspect that
> while he knows a great deal about the Unix environment and the typical Unix
> user mindset, his grasp of the "mainframe"
> world is limited, to say the least. He seems to fixate on the mainframe as
> "batch-oriented" and Unix as interactive, and
> that "interactive" doesn't work well on mainframes. He obviously has never
> used CMS on VM (or CP/VM as he calls it...);
> it's as interactive and responsive as any Linux system I've used. And his
> statement that TSO and CMS "load as batch jobs" is just pure nonsense..
>
> One statement struck me as clearly incorrect is the following:
>
> "In contrast, most mainframe control environments, including loadable
> libraries and related systems level applications, are written and maintained
> very close to the hardware -- usually in PL/1 or assembler but often with
> handwritten or at least
> "tweaked" object code -- to use far fewer cycles than their C language Unix
> equivalents.
>
> This statement is wrong on two separate counts:
>
> 1) most mainframe programming (well above 50%) is still done in COBOL, with
> PL/I, Assembler, Fortran, etc. splitting the rest.
> 2) PL/I is lots of things, but "close to the hardware" ain't one of them.
> :-)
>
> Overall this article appears to be not so much concerned with Linux running
> on a S/390 environment, but a diatribe against
> mainframes in general and the overall superiority of SUN boxes. That seems
> to be the whole thrust of the paragraphs on
> "mutually contingent evolution." (whatever that is.).
>
> I suspect that Paul Murphy is a shill for SUN.

I found it interesting that he wrote about CP/40. That was the first
example of a 360-style operating system using virtual memory with the
equivalent of modern TLBs. [It had been done on other architectures.] The
hardware was a one-off created for the Cambridge Scientific Center
(Mass). And CP/67 was hosted virtually, and CP/67 begat VM/370, and VM/370
begat .

So it is interesting but not terribly important to current
understanding. He got the bit about CP/67->VM/370 wrong, too, calling it
CP/VM. [Brown University had a VM/360, but I digress.]

My conclusion is he read/skimmed a history, such as Melinda Varian's
history of VM and folded it in without real knowledge or anyone to
proofread the result.

It did make me tend to disbelieve any conclusions. If the author couldn't
understand and abstract (with credit) a well written history, it tends to
suggest he doesn't understand the current environment.

Anyone with half a brain can see that IBM bet mucho $$$ on Linux/390 and
have sold a lot of machines and acquired a lot of mindshare... in some
quarters they are approaching cool status.  Big bet with a big payoff.

There is a lot of stuff bubbling around in IBM also. They have some top
guys working on NUMA machines that are regularly collaborating (sending
code to) the Linux kernel development tree.

john alvord



Re: LinuxWorld Article series

2002-04-20 Thread Dave Jones

> -Original Message-
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED]]On Behalf Of
> Hall, Ken (ECSS)
> Sent: Friday, April 19, 2002 12:13 PM
> To: [EMAIL PROTECTED]
> Subject: LinuxWorld Article series
>
>
> Anyone seen this?
>
> Aside from some (fairly glaring) technical inaccuracies, I can't
> see much I'm qualified to dispute.
>
> http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html
>

But the "glaring technical inaccuracies" lead me to question his conclusions
about Linux on S/390. I suspect that
while he knows a great deal about the Unix environment and the typical Unix
user mindset, his grasp of the "mainframe"
world is limited, to say the least. He seems to fixate on the mainframe as
"batch-oriented" and Unix as interactive, and
that "interactive" doesn't work well on mainframes. He obviously has never
used CMS on VM (or CP/VM as he calls it...);
it's as interactive and responsive as any Linux system I've used. And his
statement that TSO and CMS "load as batch jobs" is just pure nonsense..

One statement struck me as clearly incorrect is the following:

"In contrast, most mainframe control environments, including loadable
libraries and related systems level applications, are written and maintained
very close to the hardware -- usually in PL/1 or assembler but often with
handwritten or at least
"tweaked" object code -- to use far fewer cycles than their C language Unix
equivalents.

This statement is wrong on two separate counts:

1) most mainframe programming (well above 50%) is still done in COBOL, with
PL/I, Assembler, Fortran, etc. splitting the rest.
2) PL/I is lots of things, but "close to the hardware" ain't one of them.
:-)

Overall this article appears to be not so much concerned with Linux running
on a S/390 environment, but a diatribe against
mainframes in general and the overall superiority of SUN boxes. That seems
to be the whole thrust of the paragraphs on
"mutually contingent evolution." (whatever that is.).

I suspect that Paul Murphy is a shill for SUN.

DJ



Re: LinuxWorld Article series

2002-04-20 Thread Scott Courtney

On Friday 19 April 2002 03:13 pm, Hall, Ken (ECSS) wrote:
> Anyone seen this?
>
> Aside from some (fairly glaring) technical inaccuracies, I can't see much
> I'm qualified to dispute.
>
> http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html

Unfortunately, there are ads that are 336x280 pixels, and the site designers
insisted on making the page a forced width of about 800 pixels. I don't know
about anyone else, but on my screen the article is all but unreadable because
the page layout compresses its columns so that there are only about three
words per line. Ugh!

I know these people need to make money, but they are working very hard to drive
at least one visitor from their site. I hate websites that hardwire a narrow
width, so that even though I have a larger screen I *still* can't widen the
page out to make it readable.

Sorry...rant mode off now.

Scott

--
---+--
Scott Courtney | "I don't mind Microsoft making money. I mind them
[EMAIL PROTECTED]   | having a bad operating system."-- Linus Torvalds
http://www.4th.com/| ("The Rebel Code," NY Times, 21 February 1999)



Re: LinuxWorld Article series

2002-04-19 Thread John Summerfield

[EMAIL PROTECTED] said:
> It doesn't appear that the author has a very good idea of the basic
> concepts

Be sure to tell Paul and the editor.

[EMAIL PROTECTED]
[EMAIL PROTECTED]

If enough people tell them, I guess it will get fixed.


--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/

Note: mail delivered to me is deemed to be intended for me, for my
disposition.

==
If you don't like being told you're wrong,
be right!



Re: LinuxWorld Article series

2002-04-19 Thread Dennis G. Wicks

It doesn't appear that the author has a very good idea of the basic concepts,
because he makes a lot of basic mistakes.

He apparently thinks that VM is used to "micro-partition" LPARs and refers to
it as CP/VM, a term I don't remember seeing used. CP-40 etc., but not CP/VM.

He also seems to think that the Linux timer pops are the same as I/O
interrupts, has no idea about paging/swapping ("My main question, however, is
how he got 41,400 instances to fit into a 128 megabyte machine."), keeps
referring to guests as "ghosts", and such gems as:

 "Since each such LPAR is independent of all others, it can run VM
 or any other OS, including Linux, separately although each remains
 dependent on the underlying hardware and microcode."

As if this is in any way different from any other computing system, and is
somehow a disadvantage of the IBM platform.

And I have only read the first few pages. I am sure that others can find a
lot more problems with this article.

Big grain of salt!










"Hall, Ken
(ECSS)"   To: [EMAIL PROTECTED]
     Subject: LinuxWorld Article series
Sent by: Linux
on 390 Port
<[EMAIL PROTECTED]
ARIST.EDU>


04/19/02 02:13
PM
Please respond
to Linux on 390
Port






Anyone seen this?

Aside from some (fairly glaring) technical inaccuracies, I can't see much
I'm qualified to dispute.

http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html



Re: LinuxWorld Article series

2002-04-19 Thread Post, Mark K

There are a lot of inaccuracies, bad assumptions, oversights, whatever.  The
one thing the author says that I totally agree with is that I also would
like to see some independent benchmarks that can answer some of the
questions that all of us have about Linux/390 performance.

Mark Post

-Original Message-
From: Hall, Ken (ECSS) [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 19, 2002 3:13 PM
To: [EMAIL PROTECTED]
Subject: LinuxWorld Article series


Anyone seen this?

Aside from some (fairly glaring) technical inaccuracies, I can't see much
I'm qualified to dispute.

http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html



LinuxWorld Article series

2002-04-19 Thread Hall, Ken (ECSS)

Anyone seen this?

Aside from some (fairly glaring) technical inaccuracies, I can't see much I'm 
qualified to dispute.

http://www.linuxworld.com/site-stories/2002/0416.mainframelinux.html