Re: z/OS X-Windows (was: ASCII (was: Unix path name))

2012-04-10 Thread Scott Chapman
XMing is also freely available on Windows and seems to work well enough as 
long as you check the no access control check box.  I've used it while 
running Java applications with GUIs on z/OS, notably for installing SAS, but 
also for running open source Java applications.  I was surprised at how easy 
it was to get those things working and how well they ran (with sufficient 
zAAP capacity).

On Mon, 9 Apr 2012 07:47:07 -0500, McKown, John john.mck...@healthmarkets.com 
wrote:

You can also use MS-Windows if you have an X server on it.  MS-Windows does 
not come with an X server.  I have successfully used Cygwin's X server.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Tapeless solution (IBM or Sun) Enterprise class

2012-03-29 Thread Scott Chapman
We're just finishing up migration from a B20 Peer to Peer solution to a 
7720-7720-7740 three-way grid solution.  Performance has been much better, no 
surprise there.  I'd say we've been pretty happy, but we're currently tracking 
down a cache management issue that cropped up with one of the 7720s when we 
lost the data links between the sites.  It may be working correctly, we just 
don't understand it yet.  

I'm not doing the migration work myself, so I can't really comment in detail, 
but my understanding is that Tivoli Tape Optimizer was problematic until we 
found the right combination of settings to get it to work smoothly.  Actually, 
I think there was a bug that IBM provided a work-around or fix for.  Just be 
sure to leave yourself adequate time to do the migration; it might take longer 
than you expect: the heavy read workload we're pushing through the B20 is 
causing our (8 year old) B20 to drop a drive or two a day.  The CE has been 
able to fix most of them, but that takes time, and we didn't really think about 
having to regularly run with less than the full complement of drives when 
planning out the migration.  

And don't trust a vendor's average compression ratio--go measure it out of 
your existing VTS for peak write times (it's in the SMF 94s for the VTCs).  If 
you're (for example) backing up lots of DB2 data that's already compressed, 
that data is not going to compress well in the VTS.  Assuming too high a 
compression ratio, and hence buying too little capacity, could become a 
significant issue if you're in a disk-only solution.
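As a back-of-envelope illustration of that sizing point (the helper names and all the numbers below are invented; the real byte counts would come from your own SMF 94 data at peak write times):

```python
# Rough sketch: size disk-only capacity from a measured compression ratio
# rather than a vendor's quoted average.  All values are illustrative.

def compression_ratio(channel_bytes_written, device_bytes_written):
    """Ratio of data sent down the channel to data actually stored."""
    return channel_bytes_written / device_bytes_written

def required_capacity_tb(logical_tb, measured_ratio):
    """Back-end capacity needed for a given logical workload."""
    return logical_tb / measured_ratio

# Vendor quotes 3:1; measurement of already-compressed DB2 backups
# at peak write times shows closer to 1.2:1.
vendor = required_capacity_tb(300, 3.0)    # 100 TB -- optimistic
measured = required_capacity_tb(300, 1.2)  # ~250 TB -- what you'd really need
print(round(vendor), round(measured))
```

The gap between the two numbers is the capacity shortfall you'd be buying into by trusting the quoted average.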

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: LPAR CPU usage from a program

2012-02-28 Thread Scott Chapman
Is the requirement no RMF or no authorized calls or both?  

If RMF is available, start the Distributed Data Server and you can fetch RMF 
III data with simple HTTP requests that return XML.  There are other RMF 
interfaces as well; I don't believe all of them require authorization.  
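A minimal sketch of that kind of pull.  The host name is a placeholder, port 8803 and the /gpm path are the commonly documented DDS defaults (verify against your own setup), and the sample XML element names are invented to show the general shape, not the exact DDS schema:

```python
# Sketch: build an RMF III Distributed Data Server request URL and parse
# an XML response.  Host, report/resource names, and the sample payload
# are illustrative only.
import urllib.parse
import xml.etree.ElementTree as ET

def dds_url(host, report, resource, port=8803):
    """Build a GPM-style request URL for an RMF III report."""
    query = urllib.parse.urlencode({"report": report, "resource": resource})
    return f"http://{host}:{port}/gpm/rmfm3.xml?{query}"

# In real use: xml_text = urllib.request.urlopen(dds_url(...)).read()
# Here we parse a tiny hand-made response instead of hitting the network.
sample = "<report><row><col>SYSA</col><col>85.2</col></row></report>"
root = ET.fromstring(sample)
values = [col.text for col in root.iter("col")]
print(dds_url("mvs1.example.com", "CPC", ",,SYSPLEX"))
print(values)  # ['SYSA', '85.2']
```

Any language with an HTTP client and an XML parser can consume this; that's what makes the DDS convenient for dashboards.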

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IBM announces 6% price increase for z/OS

2012-02-22 Thread Scott Chapman
And of course once you add in ISVs, it's even less than 2%.

The other thing that I think is interesting is to compare the cost between 
4,000 MSUs and 350: over 10x more (potential) workload for ~4x the cost.  
There are probably a number of interesting things one can say about that.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Effective AVG MIPS/CP on a Multiprocessor z10

2012-02-13 Thread Scott Chapman
I believe if you do this scenario in zPCR you will find a small impact on the 
GCPs.  Probably something on the order of 1-2%.  Which of course is well within 
the margin of error.  

Even though z/OS can't use those CPs, the work running on those CPs will impact 
the CPU caches, hence the modest impact on the GCP engines.

Now if the CPs are relatively unused, then there will be less impact.  E.g., if 
you add IFLs and never actually run anything on them, there's no overhead.  I 
believe zPCR assumes that the normal engines run at about 90% busy; I don't 
know what it assumes for IFLs or ICFs, but presumably some reasonable normal 
maximum.  

Scott Chapman

On Fri, 10 Feb 2012 11:15:08 -0700, Mark Post mp...@novell.com wrote:

 On 2/10/2012 at 12:37 PM, Jose Correa Saldeño jcor...@evertecinc.com 
 wrote:
 Effective AVG MIPS/CP on a Multiprocessor z10

 A z10 E26 model 701 AVG MIPS/CP is 924, a 702 866 MIPS, a 703 834, and 
 so forth.

 My question is, If we have a 703 and we add 4 specialty engines like two IFL
 and two  CF, what will be the  AVG MIPS/CP.

I don't believe the IFL specialty engines will have any impact, since z/OS 
can't even see them let alone use them.  I believe the same to be true of 
the CF engines.


Mark Post

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IBM announces 6% price increase for z/OS

2012-01-30 Thread Scott Chapman
Did your total IBM software bill decline or increase during that period?

I didn't get deeply involved in software costs until c. 2008, so it took some 
digging, but I did find our then current MLC costs c. 2005 when we were on 
z900s and looking to go to z9s.  In short, today, on z10s with vWLC, our 
monthly MLC cost is about 10% less (in total, all MLC products) than what it 
was then on the z900s.  

Now some of that is because we did eliminate a few unused MLC products (such as 
old compilers) over the past few years, but I believe that those only totalled 
to something on the order of 2-3% of the monthly cost.   If we hadn't done 
that, and if we were paying full-capacity VWLC, then I believe our cost would 
be very close to what it was in 2005.  But still, no net increase across a 
7-year period wouldn't be a bad run.

Note that I'm not trying to make the case that z/OS and associated products are 
by any means inexpensive.  Nor am I happy about the 6% increase now.  Whether 
the cost is a reasonable value is a much more complicated analysis that is 
going to be dependent on every organization's particular needs.  For some, I'm 
sure it's not such a good value.  Certainly it is a much better value (on a 
cost/unit of capacity basis) for much larger shops.  

Oh, and I didn't try to look at zOTC, because that's even more complicated as 
we've swapped multiple products between IBM and ISVs over that time period as 
well as eliminated some products. 

Scott

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IBM announces 6% price increase for z/OS

2012-01-28 Thread Scott Chapman
Over the last 9 years, we've held the line on MIPS growth to essentially 
rounding effects during machine life cycles.  As a result, our MIPS count is up 
only 16% over that period.  However, due to the technology dividends, our MSU 
count is actually down.  

I don't have all the cost data readily at hand, but I can say with confidence 
that our z/OS (only) cost declined over that time due to the declining MSU 
count and the implementation of VWLC.  

Whether it's a fair value or not is a reasonable question, and one that's not 
as easily answered.

Historically IBM has raised MLC prices at version changes.  Lately they've 
raised prices on older versions only after the newer version has been out 
for some time.  My presumption is that they're trying to incent people to stay 
current.  And make more money; after all, they are in business to make 
money.  

My theory about the z/OS price increase is that they did this so the September 
release can be z/OS v2r1, at the same price as the new (raised) z/OS v1r13 
price.  Otherwise if they just released v2 with a price increase there'd 
probably be a lot of people trying to hang on to v1r13 for as long as possible. 
 

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Help! When is DASD I/O time equal to Subchannel Start Count?

2012-01-26 Thread Scott Chapman
Answer: When you are using FICON.

I took a quick look at a semi-random sample of my data and saw the same thing.  
So I went to the SMF manual and found this for SMF30AIC:

DASD I/O connect time, in 128-microsecond units, for address
space plus dependent enclaves. Note that the value of RqsvAIC for
FICON® channel utilization cannot be calculated. For more
information, see “Note.”

The note says:
Note: The system adjusts the connect time for FICON DASD to be 1 millisecond
per request. This value differs from the channel reported connect time.
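The arithmetic implied by those two notes, as a quick sketch (the helper names are mine, not SMF field names):

```python
# SMF30AIC is recorded in 128-microsecond units.  With the FICON
# adjustment of 1 ms per request, each subchannel start contributes
# 1000 us = 1000/128 units, so connect time ends up tracking the
# start count: seconds = starts / 1000.

def smf30aic_seconds(smf30aic_units):
    """DASD I/O connect time in seconds from 128-microsecond units."""
    return smf30aic_units * 128 / 1_000_000

def expected_ficon_units(start_count):
    """SMF30AIC value implied by the 1 ms/request FICON adjustment."""
    return start_count * 1000 / 128

print(smf30aic_seconds(7812.5))    # 1.0 -- one second of connect time
print(expected_ficon_units(1000))  # 7812.5 -- units for 1000 starts
```

So the observation in the subject line is exactly what the manual predicts: on FICON, connect seconds equal start count divided by 1000.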

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: JAR sampling program to estimate load impact after system changes?

2012-01-23 Thread Scott Chapman
Cheryl Watson's BoxScore product may do part of what you're looking for.  
http://www.watsonwalker.com/boxscore.html

I have no personal experience with it though, and don't know if it looks at 
specialty engine utilization as part of its comparisons.

Also, I've noted significant variation in run-to-run CPU (zAAP) usage for batch 
Java work, even when running the exact same code with the same input data on 
the same configuration (run back to back).  While I have, and have heard, lots 
of good theories about why that might be, I've not come to any absolute 
conclusions for the particular instances I've researched in depth.  So before 
you go attributing some variation in the metrics for Java workloads to a 
system configuration change, make sure that you can get consistent runs 
without changing anything.
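One way to screen for that run-to-run noise before trusting a before/after comparison, sketched with made-up CPU times (the 5% threshold is arbitrary, not a standard):

```python
# Check the natural spread of back-to-back identical runs before
# attributing a CPU delta to a configuration change.
import statistics

def cpu_variability(cpu_seconds):
    """Mean and coefficient of variation for a list of run CPU times."""
    mean = statistics.mean(cpu_seconds)
    cv = statistics.stdev(cpu_seconds) / mean
    return mean, cv

runs = [41.2, 44.8, 39.9, 46.1, 42.5]  # invented zAAP CPU seconds
mean, cv = cpu_variability(runs)
if cv > 0.05:
    print(f"runs vary {cv:.1%} around {mean:.1f}s -- too noisy to compare")
```

If the baseline already varies by more than your claimed improvement, the comparison tells you nothing.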

(I believe BoxScore works by parsing through a pile of SMF data to find the 
consistent workloads and then does the before/after comparisons for those 
workloads.  Or something like that -- remember I have no experience actually 
using the product.)

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Possible impact of VTS on batch I/O times (also posted to IMS-L)

2011-12-10 Thread Scott Chapman
I know nothing of IMS.  But assuming that the process has run regularly in the 
past at the same relative time of the week, then my guess would be something 
was going on in the VTS.  The B20 does produce a fair bit of SMF data (type 94) 
that can be useful for tracking what was going on at the time.  I'd start by 
looking at that data from the timeframe in question.  

Some thoughts, absent that data:

1) As somebody else mentioned, reclaim processing can potentially impact 
things.  I haven't seen that be significant, but we also took care to force it 
to low-usage times.

2) If you're in a peer-to-peer VTS with one peer remote, then life can get 
unpleasant if your I/O happens to be going to the remote peer.  That can happen 
for any number of reasons: broken hardware, service prep mode, incorrect 
configuration, maybe bad luck.

3) If you've hit the max throughput of the VTS (or of the individual VTCs, I 
suppose) then you may see degradation too.  I ran into such a throughput issue 
on our B20 earlier this year related to overall throughput utilization during 
DB2 log offload processing.  I wrote it up for MeasureIT: 
http://www.cmg.org/measureit/issues/mit80/m_80_5.pdf

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS compressed VSAM datasets

2011-09-14 Thread Scott Chapman
I guess my first thought / question would be to identify where the bottleneck 
for the backups is: disk I/O, tape I/O, or CPU?  

Presumably disk I/O didn't change a whole lot for reading the data or you would 
have seen impacts elsewhere too.  My guess would be similar for the CPU.  
However, if either is true, it's possible it's simply more noticeable during 
backups than during your normal processing.

But my guess would be that it's writing to the tape.  You didn't mention what 
kind of tape subsystem you're writing to, but everything has its throughput 
limit, and possibly by pushing more data (uncompressed) down the channel, 
you've hit the throughput limit of the subsystem.  If you're going down ESCON 
channels to the tape, those aren't terribly fast by modern standards, and more 
data on the channel = slower elapsed time.
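A rough sketch of that effect, with invented numbers (the channel rate and compression ratio here are illustrative; an old ESCON channel tops out in the mid-teens of MB/s):

```python
# Back-of-envelope: elapsed time is bounded below by bytes actually
# moving down the channel divided by effective channel throughput.
# Dropping host-side compression multiplies the bytes on the channel,
# even if the tape subsystem re-compresses on arrival.

def min_elapsed_seconds(gb_on_channel, channel_mb_per_sec):
    """Lower bound on elapsed time for a given channel data rate."""
    return gb_on_channel * 1024 / channel_mb_per_sec

# 100 GB of data: previously 2:1 host-compressed (50 GB on the channel),
# now sent uncompressed (100 GB on the channel) over a ~15 MB/s link.
before = min_elapsed_seconds(50, 15)
after = min_elapsed_seconds(100, 15)
print(round(before), round(after))  # elapsed floor roughly doubles
```

The point being: the compression moved, but the channel didn't get any faster.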

I discovered a similar situation for our DB2 log archives earlier this year, 
but the interesting part was that I didn't initially recognize we were hitting 
the throughput limit because the data is compressed in the logs and the quoted 
throughput limits seem to assume you're sending uncompressed data to the 
subsystem. Of course you now are sending uncompressed data to the subsystem, 
but likely the tape subsystem compression ratio is different than what you get 
on disk from either SMS or BMC.  

As I was looking at this I also discovered that for my test jobs there was no 
significant difference between using 24K and 256K blocks.  YMMV.  

If you're interested, I wrote up that experience for one of my What I 
Learned This Month columns for MeasureIT:
http://www.cmg.org/measureit/issues/mit80/m_80_5.pdf

Scott Chapman

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Dashboard type software for monitoring z/OS

2011-08-23 Thread Scott Chapman
That's essentially what we did for the missing task list when we built our 
mainframe performance dashboard.  Most of the data comes from the RMF 
Distributed Data Server, but for missing tasks, automation periodically runs 
through the list of active tasks and updates an XML file with what's missing on 
each system.  The dashboard polls those XML files.

An idea that I've had, but not looked into, is to use the REXX SDSF support to 
pick up Health Checker alerts and push them out to an XML file that would also 
be picked up by the dashboard.  That way we could leverage the existing IBM HC 
alerts, add our own in that standard facility, and just consolidate them in 
a different interface.  It should be fairly simple to implement, but my work on 
the next version of our dashboard has been pushed back by more pressing 
matters.  
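The XML-file-for-polling half of that idea might look something like this (element names, file contents, and the alert list are invented for illustration; the alert collection itself would come from the REXX SDSF side):

```python
# Sketch: turn a list of (check name, status) pairs into a small XML
# document for a dashboard to poll.  Schema is invented, not a standard.
import xml.etree.ElementTree as ET

def alerts_to_xml(system, alerts):
    """Serialize Health Checker-style alerts for one system as XML."""
    root = ET.Element("healthchecks", {"system": system})
    for check, status in alerts:
        el = ET.SubElement(root, "check", {"name": check})
        el.text = status
    return ET.tostring(root, encoding="unicode")

xml_text = alerts_to_xml("SYSA", [("RSM_REAL", "EXCEPTION"),
                                  ("SDSF_CLASS_SDSF_ACTIVE", "OK")])
print(xml_text)
```

In practice you'd write that string to the file the dashboard already polls, alongside the missing-task XML.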

Scott Chapman

I had an idea, but I haven't tested it yet: a monitor which is
basically an HTML page on your z/OS HTTP server that is updated
accordingly.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WAS DB2 cpu times

2011-08-04 Thread Scott Chapman
If you're using a type 4 JDBC connection, the SQL will run on a DDF enclave and 
the CPU time will be associated with the enclave and the DDF address space, not 
WAS.  Type 2 is a local connection and won't trigger the separate DDF enclave.  
Which is best (type 2 vs. type 4) depends on the application and the hardware 
configuration; it's best to test and understand the implications of both.

Scott Chapman


On Wed, 3 Aug 2011 15:28:04 +, Pudukotai, Nagaraj S 
nagaraj.s.puduko...@jpmorgan.com wrote:

Hi
The set up in our environment is that applications running in Websphere 
Application Server (WAS) address spaces on z/OS run SQL against DB2 on z/OS.

I am trying to correlate the CPU time (field SMF1209CI) I get from WAS SMF type 
120 subtype 9 records and the DB2 class 1 CPU time (MXG field DB2TCBTM) for a 
bunch of WAS address spaces (our set up is one WAS server region with one WAS 
servant region). Value in DB2TCBTM is way more than what I get from SMF1209CI. 
But the CPUTM (MXG Variable) from type 30 interval records (SMFINTRV MXG SAS 
dataset) I get for the WAS Address spaces matches with SMF1209CI value I get 
from type 120 subtype 9 records for the same WAS address spaces.

Can anyone edify me as to what is wrong with what I am trying to reconcile here?

Thank you
Nagaraj



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: where 2 find SMS compression code

2011-08-02 Thread Scott Chapman
I realize you said you aren't testing for compression ratio or CPU usage, but 
you might still want to take a quick look at those with both tailored and 
generic/standard compression.   I just recently found that switching my SMF 
data to tailored compression saved about 40% of the space, but at the cost of 
about a 25% increase in CPU time.  Everybody probably has different views about 
that trade-off, but those percentages are big enough to make it worth looking 
at regardless of which resource is more precious to you at the moment.  

Your mileage may vary.  Past performance is not indicative of future gains.  

Scott Chapman

Thanks. these are small datasets. I don't know why they were compressed
with Data Accelerator. We greatly overused that product. Management at
the time said: Great! Compress everything and we don't need to get any
more DASD! Management today says: Use SMS compression and eliminate
the cost of Data Accelerator!  We did no testing to see how this will 
affect CPU usage or compression ratio.  Just say save money! and eyes 
glisten like a child's in a candy shop.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Eight Position TSO Logonid

2011-07-28 Thread Scott Chapman
I have indeed considered that.  Fairly trivial to do, I believe--the REXX SDSF 
interface is pretty straightforward and complete.  I've done one browser-based 
tool that analyzes spool utilization (owner, job name, class, etc.) and it 
was pretty straightforward.  I've also read and converted spool output to XML 
and displayed it in a browser, which can be slightly more convoluted than 
necessary due to having to figure out what carriage control is in use.  
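For example, a minimal normalization for ANSI/ASA carriage control ('1' page eject, '0' double space, '-' triple space, '+' overprint, blank single space) might look like the sketch below; real spool output has more wrinkles (machine carriage control, page boundaries) that this ignores:

```python
# Strip ASA carriage control from spool lines for browser display.
# '1' (page eject) is simplified to "no extra blank line" here; a real
# converter might emit a page-break marker instead.
ASA_BLANKS = {"1": 0, " ": 0, "0": 1, "-": 2}

def strip_asa(lines):
    out = []
    for line in lines:
        cc, text = (line[0], line[1:]) if line else (" ", "")
        if cc == "+" and out:   # overprint: crude merge onto prior line
            out[-1] = text
            continue
        out.extend([""] * ASA_BLANKS.get(cc, 0))
        out.append(text)
    return out

print(strip_asa(["1REPORT", "0LINE A", " LINE B"]))
# ['REPORT', '', 'LINE A', 'LINE B']
```

Detecting *which* carriage control convention a given SYSOUT uses is the genuinely fiddly part alluded to above.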

Getting to 90% of the functionality of SDSF via a browser interface and some 
only slightly clever server-side code would not be difficult.  I however lack 
the time and compelling reason: I know how to use SDSF and don't have a user 
base that needs browser based access to it.  It would be cool, but there are 
other more important things to do that may be just as cool.  

One would have to be somewhat more clever, but I'm not sure why we couldn't 
also replicate much of RD/z as well.  For my time/money that's potentially more 
useful than replicating SDSF in a browser.  I'm thinking one could either 
build one's own interface in Eclipse or go via a web app that provides an 
in-browser editor similar to (or using) Bespin/Skywriter/Ace.  But this is 
idle musing: I haven't explored the idea in detail; it just seems reasonable 
to believe it would be possible with relatively little code.  At least for 
the base functionality.  

Scott Chapman

On Wed, 27 Jul 2011 19:03:03 -0500, John McKown joa...@swbell.net wrote:

snip...
because it would decrease the demand for RD/z. SDSF could possibly be
done by one of us via the SDSF API in REXX or Java. Now, there's an
idea. If they did a non-3270 version of ISPF, that would likely mean

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: performance differences between java versions

2011-07-14 Thread Scott Chapman
At the risk of taking this thread too far afield: 

Well there's always room for personal opinion for which tools you use.  Some 
people may prefer a Bosch driver over a Makita and others will happily pay for 
the Festool.  Any of them will put screws in boards.  But some hands will 
prefer different drivers for various reasons.  Nothing wrong with that.  

Personally, as a programming language, I don't care for Java.  I've written 
some code in it and will undoubtedly write more.  But, IMO, it's difficult to 
use well if you aren't using it all the time.  OTOH, for those who do use it 
all the time, it's probably second nature, and they know that class X is 
better than class Y for the particular problem at hand, even though both do 
very similar things.  For those people, Java is probably a fine choice.  

But the concept of the JVM to enable portability across a wide range of 
platforms and architectures is a good one.  And from the mainframe perspective, 
getting work on the zAAPs is potentially good.  And as it turns out, Java 6 
contains Rhino, so I can write code in one of my favorite languages 
(JavaScript) on the mainframe.  That is another benefit of the JVM: if you have 
a language that runs in the JVM, it should run on the mainframe.  So if Groovy 
feels right in your hands, I believe you should be able to use that on the 
mainframe.

 I really, really don't understand objections to particular programming 
 languages. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: performance differences between java versions

2011-07-13 Thread Scott Chapman
As somebody else stated, I wouldn't draw any conclusions from just running java 
-version.  A couple thoughts though:

1) Java 1.4 is really pretty old.  Java 6 came out in something like 2006 or 
2007, IIRC.  I believe Java 7 is due soon.  It's unfortunate that Java doesn't 
do as good a job of maintaining backwards compatibility as one might like.  

2) I've noted some strange and significant variations in CPU time for Java 
workloads recently.  I didn't try the older versions. 

3) On my test system this morning, time shows java -version executing in 
about 0.3 to 1.3s for 1.4 and 0.7 to 1.8s for Java 6.  Java 5 seemed to be 
kind of in the middle.  That's on a z10 EC with zAAPs.  So I'm going to say 
that yes, for me there might be a half-second difference in elapsed time.  
Not sure whether that's really significant though.

4) Tuning to improve performance between releases undoubtedly would focus on 
real workloads, or at least workloads that should be somewhat representative of 
real workloads.  If that caused a regression for the trivial workload of 
java -version, I wouldn't view that as a problem.

5) Elapsed time is inherently variable, so I generally look to the CPU time 
consumed to determine performance changes.  But CPU time is getting more 
variable too, especially for Java workloads.  So the rule of running 
multiple test iterations is all the more important today.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: WLM Resource Group zIIP

2011-07-13 Thread Scott Chapman
According to what support told me, workloads do not accumulate specialty engine 
SUs towards the resource caps.  The SUs on the specialty engines count towards 
period aging, but not towards the resource group caps or minimums.  Which seems 
inconsistent, but probably makes some sense.  But the ideal answer would 
probably be that we'd have resource cap specifications per processor type.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Enclave help

2011-07-06 Thread Scott Chapman
Why not simply quiesce it?  This is readily available from SDSF.

In short, RQ does not seem to stop DDF threads from consuming 
resources.  At least in my quick test here.  

To be honest, I think I'd never tried it because I didn't have a test case 
to try it on in my sandbox and I'd only try that in production for the first 
time if something was really broken.  But a few months ago I created a 
test case in my sandbox, so I tried it there and all RQ seems to do is 
reset the period to period 1.  Interestingly, after doing so, it seems 
that the enclave doesn't age through the periods any more, at least 
not that SDSF shows.  But if I then do an R, it starts over again from 
period 1 and ages down through the periods (all 3 in this case).

So either my expectations of what RQ should do are wrong or we've 
stumbled across a bug someplace.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Enclave help

2011-07-05 Thread Scott Chapman
AFAIK, there's nothing available within z/OS itself to cancel an 
individual enclave, although I certainly have wanted that capability 
sometimes.  This makes sense when you think about what enclaves 
are: they're really sub-tasks from an address space more so than 
being their own address space.  So the correct answer is that to get 
rid of an enclave, you have to cancel the unit of work from the task 
that owns the enclave, in this case, apparently DB2.

You can change the service class of an active enclave, maybe to keep it 
from consuming resources.  But a couple of caveats: 

1) Beware dependent enclaves, as changing the service class of those 
will affect the owning address space.  I've seen CICS create 
dependent enclaves and you don't want to touch those.  

2) For DDF work that's running on the zIIP, if you're thinking to change 
the service class to one with a Resource Group cap to keep it from 
consuming zIIP resources, that probably won't work well, as work on 
specialty engines doesn't accumulate SUs towards RG caps.  At least that's 
what I've been told, and that's what I've generally seen.  

Finally, since this is DB2, we recently ran into a problem where QMF 
DDF threads wouldn't go away: they couldn't be cancelled from DB2 
and seemed to be stuck looping.  Turns out there was a bug in one of 
our DB2 monitoring products (from IBM, no less) that was keeping 
them from going away.  This was somewhat problematic because of 
point #2 above: the thread was consuming zIIP resources that other 
threads could have used.  To try to resolve this (at least partially) I put it in 
my SC with a RG cap of 1 SU, varied the zIIP off line to get it on the 
GCP, then put the zIIP back online.  This seemed to slow it down, 
although I'm still not sure why it worked as well as it did: I expected it 
to bounce back to the zIIP and run away again in short order.  While it 
did bounce back to the zIIP, it consumed at a much lower rate than 
before.  Again, I don't have an explanation, I only relate the 
experience in case you're stuck in a similar situation, maybe it might be 
something to try.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: RMF and DDS Options

2011-06-30 Thread Scott Chapman
I agree with what somebody else said: DDS resource consumption has 
been relatively minor for us.  You only need one per sysplex and 
looking at my biggest sysplex right now, over the past almost 2 
months since the last IPL, DDS has consumed less than half of what 
RMF + RMFGAT have consumed per LPAR.  But RMF and RMFGAT run on 
each of the 5 LPARs in the sysplex.  And we do have multiple users 
pulling data from DDS every 100 seconds.  

Now there is a valid point that I could do a better job of tuning RMF 
itself: there's some inefficiency in our RMF setup that I've been going 
to fix real soon now for a real long time.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: BatchPipes/MVS

2011-06-13 Thread Scott Chapman
As others have pointed out, you can do similar things with USS pipes, 
even from batch jobs, if you're creative/tricky.  I've played with that a 
little bit but haven't put anything in production with it.  

If I had BatchPipes, I'd probably play with it.  But for the applications I 
work with today, the most significant bottleneck is usually not the 
sequential I/O; it's DB2 and/or application design issues.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SDSF API - REXX or Java?

2011-05-18 Thread Scott Chapman
Funny you should ask.  I'm working on a CMG paper about Java on 
z/OS right now.  The overhead is not nearly as bad as it used to be, 
and on modern hardware, especially with zAAPs (or zAAP on zIIP to 
forestall that point), probably immaterial for a lot of things.  But you 
specifically said that zAAPs weren't in the mix, which is unfortunate.

I've also done a fair bit of REXX SDSF.  It works pretty well, most of the 
time.  And personally, I like REXX more than Java.  

If you're talking about using the JZOS MvsJobOutput class, the doc for 
that states: The class relies on the sample Rexx script jobOutput, 
spawned as a child process via the Exec class.  I haven't looked at 
that script, but my guess is that it's using the REXX SDSF interface.  So 
in this case the Java solution = Java overhead + REXX SDSF.  That sum 
will be greater than either part, so I'd just stick with REXX unless I was 
doing substantial further processing of the output that could be zAAP-
eligible.   Of course if I was a Java only programmer, and didn't know  
REXX programmer, then I'd probably have to use the Java class.

As for the overhead of instantiating the JVM, that's down to less than 1 CPU 
second for trivial work, even on a z10-504 (half speed CPs).  But it's 
still tenths of a second, so you probably don't want to be instantiating 
multiple JVMs per second unless you have the CPU to back it.  (Almost all of 
that is zAAP-eligible though.)  If you're only talking about a few requests 
per hour, you probably don't care.  

For truly trivial work (not much beyond helloWorld), look at the quickstart 
runtime option.  That does seem to have a noticeable impact on those 
trivial results.  It does seem to potentially have a negative impact on 
longer-running things though.  I think caching shared classes may also 
help in cases where you're starting the same Java class multiple times 
per day, but I haven't yet investigated this.  

And if you're in the Cincinnati area, I'm presenting on this topic at the 
Cincinnati System Z User Group on 6/23.  I expect to have a better 
summary of performance data regarding running Java in batch on z/OS.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: fact or fiction?

2011-03-22 Thread Scott Chapman
> Will upgrading to z/OS 1.11 from 1.9 add more utilization to the 
> CPU?  If so, how much?  We are two LPARs...one test and one 
> production.

SoftCap attempts to estimate the impact of such changes based on 
the current and future software levels and the machine you're running 
on.  I would of course expect your mileage to vary. 

Find it by searching for PRS268 on the IBM tech docs if this link doesn't 
come through correctly:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS268

Scott Chapman



Re: Cheryl's List #148

2011-03-14 Thread Scott Chapman
The one thing that I think is decidedly true is that the plethora of IBM 
pricing metrics makes it very difficult for customers to understand and 
optimize their costs.  Of course that may be intentional--IBM needs 
its revenue stream, after all.

On the other hand, IBM is fairly transparent about their pricing, at 
least for MLC software.  The ISVs, not so much.  Transparency is 
good regardless of how complicated it is to figure out and regardless 
of whether or not we like the final number.  At least we can predict 
that number--until we're surprised by some nuance in the rules that 
we had missed.  Which leads us back to the issue of things being too 
complicated.  

Regarding the lack of usage of sub-capacity VWLC, wasn't it only with 
the z10s that Group Capacity Limits became available?  Before that 
there wasn't any good (built-in) way of controlling the overall usage 
and so it was quite easy for your R4H to hit very near 100% at *some* 
time during the month.  GCL now gives us a way to ensure the R4H 
doesn't go over our defined limit just because somebody wrote some 
bad code or we needed to do a large testing cycle or some such.



Re: Is RMF Better?

2011-03-10 Thread Scott Chapman
I haven't ever used or even seen CMF, so I can't really comment on the 
comparison.  However, I will say that I really like RMF's Distributed 
Dataserver component which exposes RMF III interval data as XML 
that can then be used in all sorts of interesting ways.  IBM's included 
browser-based Data Portal leverages that and you can relatively easily 
write your own code to do something similar.  

But as I said, I haven't seen CMF so perhaps they have the same 
capability.  If they don't, put that as item #1 on my list for reasons to 
use RMF instead.

Scott Chapman



Java performance (was Re: SMF data question)

2011-03-08 Thread Scott Chapman
> I use JZOS. But we don't have a zAAP. Java is CPU intensive. 

There's no doubt that if you don't have a specialty engine, it's much 
harder to make the case for running Java on the mainframe.  But I've 
been playing around with it recently and the performance isn't as 
abysmal as it used to be.  

A quick fairly trivial test this morning based on some code I was playing 
with last week: 
Read a file with about a half million records.  
Split the records into fields by spaces.
Find the 3rd field and generate for it: the total, average, min and max.

I already had a REXX script that did this.  I wrote a JavaScript script to 
do essentially the same thing.  Note that the JavaScript language is 
not at all related to Java, but Java6 contains scripting engine support 
and includes Rhino (Mozilla's JS engine that's written in Java).  
Therefore it's possible to run JavaScript on the mainframe by wrapping 
it in a small Java program that reads the script and passes it off to the 
JS engine for execution.
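A minimal JavaScript sketch of the kind of script described above (the 
field splitting and stats logic only; the real script read an MVS dataset 
via the JZOS routines, which I've left out, and the sample records here 
are made up):

```javascript
// Toy sketch of the benchmark script: split each record on spaces,
// take the 3rd field, and accumulate total/average/min/max.
// (Illustrative only -- the real script read an MVS dataset via JZOS.)
function fieldStats(records, fieldIndex) {
  let total = 0, count = 0;
  let min = Infinity, max = -Infinity;
  for (const rec of records) {
    const fields = rec.trim().split(/\s+/);
    const value = parseFloat(fields[fieldIndex]);
    if (isNaN(value)) continue;           // skip malformed records
    total += value;
    count++;
    if (value < min) min = value;
    if (value > max) max = value;
  }
  return { total: total, average: total / count, min: min, max: max };
}

// Example with three records; field index 2 is the 3rd field.
const stats = fieldStats(["a b 10", "c d 20", "e f 30"], 2);
console.log(stats);  // { total: 60, average: 20, min: 10, max: 30 }
```

Rhino runs the same logic unchanged; only the I/O plumbing differs.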

So the comparison here is effectively REXX code being interpreted by 
the REXX interpreter (running under IKJEFT01 in batch) and JS code 
being interpreted by Java code (Rhino) running under JZOS.

Average of 3 executions, on a z10 504 machine.  (GCPs run at about 
half the zAAP speed.)  Both programs were reading the same MVS 
dataset--the JS code via the JZOS routines.  

REXX: 8.7 CPU seconds, 16.2s ET
Java (zAAP offline): 31.2 CPU seconds, 43.4s ET
Java (zAAP online): 0.4 CPU seconds (GCP), 26.9 normalized zAAP 
seconds, 17s ET

Clearly the JS code running in the JVM is significantly more CPU 
intensive overall.  But from my recollection of playing with Java from 
several years ago, I think this is a significant step forward from where 
we were.  And in the case with the zAAP online there's a net GCP 
savings.

Now a more interesting test might be to get the scripting languages 
out of the mix and compare a C program to a straight-up Java 
program.  Or test something that does more real work that's less 
trivial.  Those will have to wait for another day though.



Re: Java performance (was Re: SMF data question)

2011-03-08 Thread Scott Chapman
Kirk:

I agree completely: 

This wasn't intended to be a benchmark, but rather a general indication 
that Java performance isn't necessarily completely outside the ballpark 
(not even in the same county) any more.  I think there's a general 
perception that Java is a horribly bloated mess that should be avoided 
whenever possible.  That was at least partly true 10 years ago, maybe 
less.  But it's a lot better today.  I can use JZOS (thanks!) to start the 
JVM and run a trivial JS script in less than a second today.  I remember 
when standing up a JVM took minutes!

And I definitely agree that the code in question, regardless of the 
language, is likely the most important factor in the performance of any 
solution to a particular problem.  Well, mostly--I can probably come up 
with a scenario where bad Java code on a zAAP is better overall than 
good code in (pick your language) on the GCP.  At least when the zAAPs 
run faster than the GCPs.  And then there are the capacity/cost 
implications of running code on the zAAP vs. the GCP. 

The funny thing is I personally really don't like Java as a language, 
regardless of performance.  But I have excess zAAP capacity at the moment, 
so I'm looking at what it might be useful for.

Scott Chapman





Re: Java performance (was Re: SMF data question)

2011-03-08 Thread Scott Chapman
You can of course write C/C++ for the mainframe no problem.

There is a Perl port available, I'm not sure how up to date it is at the 
moment.  

You can apparently run Python within CICS, but I have no experience 
with that. 

I have no experience with Ruby or CL either.  Google implies that 
there are Java implementations of those, though.  I would think that 
you could then leverage those to run on the mainframe.  If they allow 
you to access any Java class from within Ruby/CL, then I would think 
that you could access the JZOS classes to get access to traditional 
MVS datasets from those languages.  

While I'm no fan of Java as a language, the idea of having the JVM be 
a portable run-time container for multiple languages has some merit, I 
think.  

Scott



Re: SMF data question - an opinion poll, of sorts

2011-03-07 Thread Scott Chapman
If you want to do it in Java, it seems like JZOS might be your friend.  
For example:

http://www.ibm.com/developerworks/java/zos/javadoc/jzos/com/ibm/jzos/sample/fields/Smf83BaseRecord.html

see also com.ibm.jzos.fields

Of course there are all sorts of options for Java classes to build the 
XML once you have the data in some structure within Java.

Scott Chapman



Re: DB2 Performance

2011-03-02 Thread Scott Chapman
As others have pointed out, the DB2 governor will let you kill threads 
that have exceeded some arbitrary amount of CPU time.  There are 
obviously pluses and minuses to that.  

Using WLM you can age those long-running queries so they drop to a 
low enough importance that they don't substantially get in the way of 
anybody else.  (Well, depending on how you set it up and how much 
you're willing to penalize those queries.)  

However, even discretionary work that is running because nothing else 
needs the CPU right now can drive up your R4H and may impact either 
your software costs and/or how quickly you reach your caps.  If that 
work was unnecessary because the user gave up on the query after 5 
minutes, then that was a shame and it maybe would have been nice 
for the governor to have killed that thread before it ran for 2 hours.  
But I'm not sure that you can readily do so without impacting other 
threads that maybe really do need to run that long.  

Be aware that the recent IBM PTF that allowed for 60% DDF offload 
really changes the way DDF work runs on the zIIP and GCP.  Previously 
the generosity factor for the DDF threads was set to 55% and SRM 
moved the enclave back and forth between the GCP and the zIIP.  
After that PTF, DB2 marks 3 of every 5 (or maybe 6 of every 10) DDF 
enclaves as being 100% offloaded and the remainder 0%.  (Effectively, 
anyway--I'm not sure exactly what they're doing under the covers.)

While this might average out to 60% of the CPU work offloaded over 
time, the less homogeneous your workload is, the more likely it is that 
any particular interval will show a significant variation from that value.  
So if you have large user queries coming in that use significant 
amounts of CPU time, there's a 40% chance that a given one will now 
run entirely on the GCP instead of 45% on the GCP, resulting in a 
possibly significant increase in CPU utilization during that interval.
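To illustrate (this is my own toy arithmetic, assuming a simple 
deterministic 3-of-5 marking--not IBM's actual algorithm):

```javascript
// Toy illustration of per-interval variation under all-or-nothing enclave
// marking: 3 of every 5 enclaves are treated as 100% zIIP-offloaded, the
// rest as 0%.  (The real marking is done by DB2; this is just arithmetic.)
function offloadedFraction(enclaveCpuSeconds) {
  let offloaded = 0, total = 0;
  enclaveCpuSeconds.forEach((cpu, i) => {
    total += cpu;
    if (i % 5 < 3) offloaded += cpu;   // 3 of every 5 enclaves offloaded
  });
  return offloaded / total;
}

// Homogeneous workload: the interval lands right at 60%.
console.log(offloadedFraction([1, 1, 1, 1, 1]));    // 0.6

// One 100-second query among small ones: the interval's offload fraction
// depends almost entirely on how that one enclave happened to be marked.
console.log(offloadedFraction([100, 1, 1, 1, 1]));  // ~0.98
console.log(offloadedFraction([1, 1, 1, 100, 1]));  // ~0.03
```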

Finally, if you're running knee-capped GCPs (less than full speed), your 
users will likely perceive a noticeable variation in run times between 
executions of the same query--because sometimes they run on the 
slower GCP and sometimes on the faster zIIP.  The bigger the 
discrepancy between the GCP and zIIP speeds, the bigger this 
potential runtime difference.  

As you might tell, we're not real happy with that PTF.  It works well for 
homogeneous workloads, but where you have a mixture of large and 
small queries, how lucky or unlucky you are with the large queries in 
any given period will determine how happy you are in that period.  My 
guess is that having a mixture of transaction sizes is more normal, and 
where you have ad hoc user queries, likely some of those are large to 
very large.

But if you just recently started having problems with your DDF, and if 
you believe the queries haven't changed you might look to see if you 
just recently applied that PTF (PM12256 I think?).  



Re: LPAR only CoD

2010-11-09 Thread Scott Chapman
It's not clear to me exactly what you mean, but I presume you mean 
you use On/Off Capacity on Demand to upgrade the processor and 
then some ISV (3rd party) software notices the processor change and 
stops working because it's not licensed for that processor.  

There's no way of limiting the processor change to one LPAR--changing 
the hardware affects the entire CEC.  You can play with weights, 
number of logical CPs, and potentially capping within the LPAR 
definitions so only one will gain the benefit of increased capacity of 
course. 

But if you upgrade a -704 to a -705, then you're running on a -705.  If 
your ISV software contracts say you pay based on the capacity of the 
entire machine, then you have a contractual issue to deal with.  And 
that's why we don't actually use OOCoD here to do temporary capacity 
changes--there are a lot of contractual issues to deal with first.  



Re: CPU time variance

2010-08-11 Thread Scott Chapman
I'm not familiar with any variability issues that may arise out of 
running under z/VM, but I would guess that guest systems might find 
more CPU variability than when running just under PR/SM.  But that's 
just a guess.

My experience has been that when we moved from z900 to z9s we 
saw an increase in CPU time variation.  When we moved from z9s to 
z10s we saw a larger increase in that variation.  I believe the reason 
for this is the increased sensitivity to processor cache misses.  (E.G. 
memory speed is not keeping up with CPU speed.)

I've seen 10% variations on my z10s, but primarily when they're more 
heavily loaded.  At some point I'd like to do a bit of studying on that 
and attempt to correlate the variation back to the numbers from HIS, 
but I haven't done that yet.



Re: Access z/OS 3270 TSO from smartphone?

2010-07-30 Thread Scott Chapman
I'm not sure about Android (being a WebOS person), but if you're 
worried about the Terms of Service, Verizon dropped the price of the 
Palm Pre/Pixi mobile hotspot to $0.  Previously they charged for it.  

See:
http://www.engadget.com/2010/04/01/verizon-mobile-hotspot-on-
webos-devices-now-free/

You can also tether the Pre on Sprint, but it's not sanctioned by Sprint.



Re: 2 versus 4 processors

2010-07-21 Thread Scott Chapman
I realize I'm late to this discussion, but if you're on the fence, why not 
set yourself up so you can do both?  You can now readily do so with 
the z10s.  

I take it you're waffling between a G04 and (perhaps) an L02.  From 
my chart, an L02 is very slightly more capacity than a G02.  So 
purchase an L02, but ask IBM to dial it down to an E01.  Also execute 
an On/Off Capacity on Demand contract.  

Once you've done that, you can use OOCoD to move to any capacity 
setting from an E01 to an L02, including (potentially) a G04, an E05, 
or whatever else is within an L02.  As long as you stay within the L02 
capacity setting, your daily OOCoD cost is $0.  (This is a change from 
the z9s, where OOCoD cost a minimum of $1K/day.)

The only negative is that you have to refresh your OOCoD every 6 
months.  But it could be very useful if your workload today is skewed 
towards a G04 or E05 but tomorrow an L02 would be better.  At the 
very least it gives you peace of mind that you can choose the correct 
processor count/speed based on what you observe once you have the 
machine installed.  

We did this with our z10 ECs last year for more complicated reasons.  
We did it this year with our z10 BC penalty box (related to those 
complicated reasons), and it's worked really well.  While testing on 
the penalty box, I've changed the capacity marker a half dozen times 
over the course of a morning, no problems at all.

Two final caveats: 
1) You can't reduce the processor count below whatever the machine 
was delivered dialed down to.  Hence the reason for bringing it in as a 
uniprocessor and then immediately changing it with OOCoD.  That 
gives you the flexibility of trying, for example, a P01.  But 
uniprocessors are generally not recommended.

2) ISV software that looks at the actual model number to determine 
license compliance may or may not have a problem with swapping the 
models around.  We've not yet run into any problems, but I can see 
where it would be possible.

I hope that helps.



Re: 2 versus 4 processors

2010-07-21 Thread Scott Chapman
Rolling 4 hour average utilization.  Capacity only sets the maximum 
utilization level you can achieve.  If you turn the machine up, obviously 
you now have headroom to do more work, and so the rolling 4 hour 
average would likely increase if you took advantage of that capacity 
(assuming you don't have caps set to prevent it).  On our little penalty 
box, while I was testing a couple of Sundays ago, I did end up pushing 
up the R4HA for that machine by an MSU.  Not particularly significant 
in that particular case because we actually had planned for more MSUs 
than what I used in our ELA, but we don't have all of the planned 
production work on that box yet, so it's running below our ELA budget 
at the moment.

But for IBM MLC pricing purposes, x MSUs on 4 CPUs is the same as x MSUs 
on 2 CPUs.  (Assuming x is the same: an L02 is really 30 MSUs and a G04 is 
29 MSUs.  But an I03 is 29 MSUs.)

We just started doing vWLC last year and it certainly does complicate life 
in that there are definitely more things to manage.  But the benefits, at 
least in our case, have been worth it. 

Scott




eric-ibmm...@wi.rr.com 
07/21/2010 03:10 PM
To: IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu
cc: Scott Chapman sachap...@aep.com
Subject: Re: 2 versus 4 processors

I'm just curious.  What does that do for your software bill for the month? 
 Do they bill you at the highest level you use, or does that use the 
highest rolling 4 hour average, or what?  If you adjusted it 5 times in 
one working day, you never got a 4 hour period at the same capacity level.

--
Eric Bielefeld
Systems Programmer
IBM MVS Technical Services
Dubuque, Iowa
563-845-4363

Scott Chapman sachap...@aep.com wrote: 
> We did this with our z10 ECs last year for more complicated 
> reasons.  We did it this year with our z10 BC penalty box (related to 
> those complicated reasons), and it's worked really well.  While 
> testing on the penalty box, I've changed the capacity marker a half 
> dozen times over the course of a morning, no problems at all.





Re: Mainframe Executive article on the death of tape

2010-03-25 Thread Scott Chapman
Thanks!  I was beginning to worry I was the only one that still 
occasionally gets woken up in the night for a tape error.  Tapes wear 
out, heads get dirty, writes fail.  These days the error is almost always 
caught on the write, but I did have a 3590 refuse to read a tape a few 
months ago.  First one that I can remember in a long, long time 
though.  

Scott Chapman

sy...@hotmail.com wrote:

> I had a tape error just last week.  Tapes and drives are about 5 
> years old.
>
> 10079 08:52:11.45 STC06292 0080  IOS000I 0121,2C,IOE,01,0600,,**,A00298,EXHPDM 504
>    504 0080  0A4410D050405050 0001FF00 030C003117335490 4B04E8205C5B2011
>    504 0080  WRITE ERROR DETECTED
>
> Cliff McNeill



Re: LPARs: More or Less?

2010-02-18 Thread Scott Chapman
One other advantage of having the separate LPARs is that by using 
the
LPAR weights I can give preference to the total production LPAR over
what is running is development or the sysprog sandbox.  In the 
instances
where the total CEC is near its max (month end processing), I would
prefer that development and the sandbox start feeling the pinch 
before
production does.

If your goal is to give preference to production work when the system 
is maxed out, I submit that you're better off having dev/test/prod all in 
the same system.  We have multiple LPARs here: sys prog sandbox, 
a legacy two-way plex that has dev/test/prod all together in two 
LPARs (across two boxes) and one with 4 LPARs--two of which are 
supposedly dev/test and two of which are supposedly production.  

When a box gets maxed out, the dev-test LPARs continue to process 
work--consuming their share.  On the legacy plex, dev/test work on 
the maxed out processor basically comes to a halt as WLM works to 
protect the production workload--as it probably rightly should.  Is your 
dev/test work really more important than your production work?  I 
agree that there are specific situations where it may be, e.g. some 
production batch that really doesn't need to be done right now but 
you have a developer sitting on their hands waiting on a compile.  

If you have dev/test in a separate LPAR on the machine, you probably 
aren't going to set the weight to zero to force dev/test to give up all 
their cycles, but WLM can come pretty close to that for work in the 
same LPAR.

Having been in development, I understand the need for keeping some 
development moving, and separate dev/test LPARs are good for that.  
And I understand the warm fuzzy feeling one gets from deploying z/OS 
to dev/test first.  But in my experience, new z/OS releases rarely cause 
application problems and in a production sysplex you can roll the OS to 
just one LPAR at a time, further reducing the risk.  And as was 
previously pointed out, dev/test should have their own instances of 
WAS, DB2, CICS, etc. so new versions of the things that are more likely 
to cause problems should be rolled out to dev/test first.

One final point: at least here, testing happens on the production LPAR 
even in that 4 way plex with the 2 dev/test LPARs.  Why?  Because 
those are the larger LPARs and so a long time ago the system test 
environment was put there so they could have the resources to run 
larger tests.  That did cause us a problem yesterday in one of the few 
ways that dev/test still can impact production: a series of looping test 
jobs filled the spool.  So even if you have separate dev/test LPARs, 
people will find a way to cause you trouble in production.



Re: OT (?): Are HTML emails unsafe

2010-02-03 Thread Scott Chapman
I agree with pretty much everything Phil said--except I don't use 
Outlook.  I'm still using Pegasus at home.

It's not that HTML email is inherently evil, it's just that you have to be 
as careful as you would surfing the web.  And few people would 
suggest you should completely stop using the web--although 
technically that would be safer.



Re: WEB Print Server Shopping

2010-01-25 Thread Scott Chapman
Indeed you can: we're working on a homegrown solution that does 
just that: pull the output from the spool with SDSF REXX, then wrap 
the output in some simple XML and store it in PDSEs.  HSM will 
manage the PDSEs--a control table determines what management 
class to use for each report.  The resulting data is served via the web 
by the HTTP server with MVSDS, which can be secured via RACF so 
that you need access rights to the PDSE to be able to view the 
report.  The web front end is a JavaScript program that runs in the 
browser and is also served up by the HTTP server.  So everything 
stays on z/OS and is managed on z/OS with native z/OS functions.
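As a rough sketch of the wrapping step (the element names and 
report format here are hypothetical illustrations, not our actual 
schema):

```javascript
// Hypothetical sketch of wrapping spool output lines in simple XML before
// storing them as a PDSE member.  Element names are made up for illustration.
function escapeXml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function wrapReport(name, lines) {
  const body = lines.map(l => '  <line>' + escapeXml(l) + '</line>').join('\n');
  return '<report name="' + escapeXml(name) + '">\n' + body + '\n</report>';
}

console.log(wrapReport('PAYROLL', ['PAGE 1', 'TOTAL < 100']));
```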

The major issues we've run into creating this:
1) Very, very large reports can be a bit problematic--we have one 
report that's somewhere around 1GB.  Being able to handle that large 
a report required making some compromises, in that we only 
download the reports to the PC in 1,000-page chunks--which impacts 
local searching and printing.  

2) This only works well for pure text reports, not AFP.  Even so, 
handling the control characters is a bit tricky, as REXX SDSF doesn't 
currently tell you the CC method for each file on the spool, although 
you can find it by chasing through the relevant control blocks.

3) Reprinting on a local printer also required a little bit of JS work, but 
I think we have that working well now.  We have not explored 
reprinting the whole thing on an MF-attached printer directly from the 
UI because it's not clear that we have that as a user requirement.  We 
can easily re-transform back to an MF flat file though.

So rolling your own is possible, but not without some work.  

Scott Chapman
AEP

> I think there are commercial products to do this, but with the HTTP 
> Server and REXX SPOOL interface you can do all these things.
>
> Jim Marshall wrote:
>
> > We are big into TN3270 Telnet Associated Printing here and are 
> > getting to the point where thousands of users will be added who 
> > need to get output off the IBM JES2 Spool.  Naturally the 
> > applications are not willing to modify anything as to where they 
> > might put this output except the Warm and Fuzzy JES2 Spool.



Re: TELNET Inherited Region

2010-01-14 Thread Scott Chapman
That's exactly what my notes from a few years ago, when I was trying 
to figure this all out, indicate.  I found that the source of the soft and 
hard limits was handled differently for telnet users compared to every 
other scenario I tested, including TSO OMVS, BPXBATCH SH, etc.  But 
for telnet users, the answer was consistently that the soft and hard 
limits were set to ASSIZEMAX (if specified on the user's RACF OMVS 
segment) or MAXASSIZE (if ASSIZEMAX wasn't specified).

My guess is that this is because my IEFUSI (per IBM recommendation) 
says to ignore unix tasks, to avoid overriding the limit for forked 
children. 

Additionally, if you are UID=0, it's trivial to override the hard and soft 
limits to whatever you want.  Not that I would recommend that, but I 
mention it because that has always irked me.



Re: RMF3 DDS (Data Portal) access of XML feeds programatically

2010-01-11 Thread Scott Chapman
Been there, done that, wrote the paper.  My method was presented at 
CMG '07 as a late breaking paper.  It seems that it's not in the 
proceedings, at least I can't find it searching the CMG archives, which 
is kind of annoying--presumably something to do with being a late 
breaking paper.  It was published in the Journal in 2008, if you happen 
to have those back issues, but again I don't see them in electronic 
form on cmg.org.

My technique is, as you are thinking about, JavaScript running in the 
browser, periodically (shortly after the RMF interval) pulling the XML 
from the DDS.  We actually have 4 separate sysplexes, so I can pull 
data from all 4 sysplexes and present it on one consolidated screen.  
And integrate it with historical data as well.  Not too difficult, and even 
easier if you can be allowed access to one of the common JS libraries.  

The negative is having to make multiple requests per interval per 
sysplex per user.  For a few users the overhead is trivial, but I 
wouldn't expect my solution to scale to scores or hundreds of users.  
And over a slow network link it's not ideal.  I'm currently thinking about 
writing a newer version that would use a started task (probably 
written in Java so it could run on a zAAP) that would do the RMF DDS 
queries and emit a single consolidated XML stream.  
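The consolidation step itself is simple once each DDS response has 
been parsed--something like this hypothetical sketch (the metric 
names and sysplex names are invented for illustration; the real data 
comes back as XML that you'd parse first):

```javascript
// Hypothetical consolidation step: given one metrics object per sysplex
// (already parsed from each DDS XML response), merge them into a single
// view keyed by sysplex name for one consolidated screen.
function consolidate(responses) {
  const view = {};
  for (const r of responses) {
    view[r.sysplex] = { cpuBusy: r.cpuBusy, interval: r.interval };
  }
  return view;
}

const merged = consolidate([
  { sysplex: 'PLEXA', cpuBusy: 72.5, interval: '10:15' },
  { sysplex: 'PLEXB', cpuBusy: 41.0, interval: '10:15' },
]);
console.log(merged.PLEXA.cpuBusy);  // 72.5
```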

In short, your idea is quite doable.  If you're interested in the paper, 
send me a message off list and I'll get it to you.



Re: hfs VS zfs

2010-01-05 Thread Scott Chapman
A couple of releases ago, I measured an increase in CPU time for some 
benchmark-type tasks that used ZFS vs. HFS when both were caching 
equally.  That pattern was later confirmed with one or two real 
workloads.  According to IBM this is not really unexpected because ZFS 
does more (journaling, more sophisticated caching), but I still was 
surprised--it was something on the order of 10% and for workloads 
that were doing a *lot* of HFS/ZFS cache friendly reads it did result in 
measurably longer run times.  

Nonetheless, we're slowly moving towards ZFS because it does a 
better job caching than HFS.  In fact, HFS caching is flat broken for us 
at times and IBM is not particularly interested in fixing it because they 
want everybody to go to ZFS.  (E.G. I still have an ETR open with them 
that's been open for something like 18+ months, waiting for L3 to have 
time to work on finding the problem.) So despite the increased CPU 
time for similarly cached workloads, workloads that are not being 
cached well in HFS may be cached better in ZFS and may have a 
significant performance improvement.  

The cache reporting and controls in ZFS are also much better.  So it's 
not all bad, I just wish there wasn't such a significant CPU hit for 
things that are already well-cached.

And we did move the root to ZFS.



Re: High CPU / channel ovhd w/3592 and DFDSS

2009-11-19 Thread Scott Chapman
Small blocksize maybe?  Just a guess.



Re: Daylight Saving Time changes effect on CICS Transaction Server

2009-10-21 Thread Scott Chapman
Agreed--or at least periodically check to see if the timezone has 
changed.  Or even better, listen for the ENF that indicates that the 
local timezone offset changed.  Of course applications may still have 
local time issues, but system-level things shouldn't.  

Surely there is a good reason why IBM hasn't fixed this obvious 
problem for so very long, but I can't figure out what it is.

Scott

Someone should teach those guys about UTC.

-- gil



Re: Sysplex Basic Question

2009-09-09 Thread Scott Chapman
Yes, but there are some significant caveats.

First off, the users logged on to the CICS TOR on the failing 
system will be knocked off and have to log back on.  Assuming there's 
a VTAM generic resource in place, that should be no big deal--they'll get 
directed to the surviving TOR and can continue to work once they log 
back in.  If you're coming in over IP, via WAS or even a direct socket, 
there are options for similarly routing people to the surviving member.

However, any locks that the failing DB2 had in place are retained until 
that DB2 member is restarted.  You can automate the restart of that 
DB2 on the surviving system to speed that process up, but the time it 
takes to do that is non-zero and related to how much activity was in-
flight at the time of the failure.  

Application design is what really impacts the availability of the 
application during the failure.  For example, it's not uncommon for 
applications to update a common table, and even sometimes a 
common row in that table, for most or all transactions.  In that case, 
it's quite possible for the DB2 that failed to have held a lock on that 
common resource.  That could prevent the application from running 
until that lock is resolved--that is, until the failed DB2 is restarted.
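Since the retained locks go away once the failed DB2 restarts, one 
common application-side mitigation is to retry "resource unavailable" 
failures with backoff rather than failing outright.  A hedged sketch in 
Python (`RetryableError` is a hypothetical stand-in for whatever 
resource-unavailable condition your environment surfaces):

```python
import random
import time

class RetryableError(Exception):
    """Hypothetical stand-in for a transient 'resource unavailable'
    condition, e.g. a lock retained by a failed data-sharing member."""

def run_with_retry(work, attempts=5, base_delay=0.5):
    """Run work(), retrying with exponential backoff plus jitter so
    the transaction can ride out the window while retained locks
    are still held."""
    for attempt in range(attempts):
        try:
            return work()
        except RetryableError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

How long the retry window needs to be depends on how quickly automation 
can restart the failed member, which is exactly the non-zero restart time 
discussed above.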

And of course if the application has affinities, life may be worse.  E.g., 
even our most sysplex-friendly app still uses a single DOR for some file 
access and still has affinities to particular MQ queue managers.  So if 
the system failure impacts those resources, those tasks need to be 
restarted before the application can continue.  Getting those affinities 
removed is an uphill battle because we take almost no unplanned system 
outages these days, so there's very little payback for resolving them.  
For planned outages we shift those resources to another system at a 
convenient time.

I hope that helps.

Scott Chapman

Can you set up a sysplex so that both machines have everything running 
on each CPU in the plex, and when one system crashes the other will 
automatically take over everything?  Say you have SYSA and SYSB.  
Each has 10 CICS regions running.  Half of the applications are routed 
to SYSA, and the other half to SYSB, but all 10 CICS regions are 
running on each system.  If SYSA crashes, can all transactions now be 
routed to SYSB?  I'm sure that any transactions on SYSA at the time of 
the crash will not finish, and have to be reentered, but I was under the 
impression that everything after the crash could then be routed to 
SYSB.  I'm not sure if that could be done automatically by automation, 
or would take someone to look at it and type in a command.



Re: Separate LPARs for Prod and Test or a single LPAR for both

2009-09-09 Thread Scott Chapman
By test do you mean sysprog test or application test?

I'm a firm believer that you need a sysprog sandbox to install the 
latest releases and try out new things.  We actually have a separate 
sandbox sysplex so any sysplex-wide things we might need to play 
with won't affect production.

However, it's less clear from an application perspective, and in fact we 
do it both ways.  We have one production two-member sysplex where 
application dev/test and prod are on both members.  We have another 
four-member plex where two members are nominally production and 
the other two are nominally dev/test.  In reality, though, the functions 
shift around over time so the distinction isn't so pure: user training 
goes on in the dev/test systems and application testing goes on in the 
production systems.  

WLM can do a good job of keeping the dev/test users from hurting the 
production users, and fewer systems mean fewer fixed overheads.  So 
my preference is to do application dev/test and production on the 
same systems.  The most significant negative to that strategy is that 
the application teams will feel better if you can give them a new 
version of the OS in just their test environment first.  But that's really 
only an issue for OS releases--all the major application subsystem 
software (CICS, DB2, MQ) rolls through dev/test before production 
anyway.  And when we roll out new OS releases, we do them one 
side of the production sysplex at a time, about a week apart for the 
sysplex that has dev/test and production on the same systems.  

Scott Chapman



Re: How often IPL a production LPAR (any good practice)

2009-09-04 Thread Scott Chapman
Right!  Or applying maintenance to individual subsystems.

Looking at my IPL history, I see 6 IPLs for most of our systems in the 
past 12 months, but that may be slightly high since over that time 
period we've done 2 DASD replacements, changed out the CPUs, 
and upgraded z/OS.  It looks like 10 IPLs over the last 24 months.  

I have noticed that the system CPU workload does seem to decrease 
a bit immediately after an IPL, but I've never tried to quantify it or track 
it down.  

In contrast, we still recycle CICS regions that run application code 
every night, DB2s weekly, and most WASes weekly.  Those 
are mostly non-disruptive for most of our applications--they're either 
data sharing across the sysplex or inactive at those times.  
