I have recently taken a job in the Performance and Capacity Management
area of the company, but I used to do z/Linux support (and AIX support).
Our setup is odd: z/Linux support was moved under the AIX area about two
years ago, and I hadn't had much time before that to really learn the
guts of Velocity.  Now that I've moved areas, I'm hoping I can get some
help on where else to look in Velocity for indicators of what might be
the problem with a particular application on z/Linux.

Some background: there is 1 production guest (wasp50) running all our
internal WebSphere applications (3 app servers on the guest and 5G of
memory).  The test workload is split between 2 z/Linux guests with 2 app
servers on each.  What is happening is that our Claims area is running a
batch process through z/Linux twice a day, and as you can see from the
numbers below, it really cranks the system.  So far I can dig to the
point that it's a java process using up most of the time, but are there
other screens in particular that can help me point the developers in the
right direction as to what they can do to make this process run better?
(The good news is it won't run during our prime shift once it's in
production.)

I have access to the browser-based screens for Velocity, but could also
ask the z/VM admin to look at things through the mainframe screens.  At
this point the AIX area is pushing to move everything off of z/Linux and
onto AIX because it will 'run better' there, and they've done a good job
of convincing management.  Any thoughts would be appreciated.

ESAMAIN

         <---Users----> Transact.      <Processor>  Cap- <--Storage (MB)-> <-Paging--> <-----I/O-----> <MiniDisk> Spool Communications
         <-avg number->  per Avg.      Utilization  ture Fixed Active Stor <pages/sec> <-DASD--> Other <-Cache-->  Page <-per second->
Time       On Actv In Q Sec. Time CPUs Total Virt. Ratio  User Resid. Load XStore DASD Rate Resp  Rate  Rate %Hit  Rate   IUCV    VMCF
-------- ---- ---- ---- ---- ---- ---- ----- ----- ----- ----- ------ ---- ------ ---- ---- ---- ----- ----- ---- ----- ------  ------
08:27:00   24   20  7.0  3.4 0.17    2 153.3 150.2   100    67  12063  0.4      1    0   36    1     0    13  100     0    198       0
08:26:00   24   21  8.0  4.0 0.13    2 149.6 146.4   100    67  12064  0.4      0    0   35    1     0     1  100     0    197       0
08:25:00   24   20 10.0  3.6 0.14    2 150.0 146.7   100    67  12063  0.4      0    0   34    1     0     1  100     0    194       0
08:24:00   24   20  8.0  4.0 0.13    2 153.5 150.2   100    67  12063  0.4      0    0   34    1     0     1 95.1     0    196       0
08:23:00   24   20 10.0  3.4 0.16    2 164.0 161.0   100    67  12063  0.4      0    0   23    2     0     1  100     0    194       0
08:22:00   24   20  9.0  4.0 0.13    2 153.6 150.7   100    67  12063  0.4      0    0   26    2     0     1 95.7     0    196       0
08:21:00   24   20  8.0  3.7 0.14    2 144.7 141.6   100    67  12063  0.4      0    0   26    2     0     1  100     0    194       0
08:20:00   24   20  9.0  4.1 0.10    2 138.5 135.1   100    67  12063  0.4      0    0   23    2     0     1 97.7     0    196       0
08:19:00   24   20  9.0  3.7 0.11    2 182.1 179.2   100    67  12063  0.4      0    0   26    2     0     1  100     0    194       0
08:18:00   24   21  8.0  3.9 0.13    2 155.1 151.9   100    67  12064  0.4      0    0   24    2     0     7 95.5     0    196       0
08:17:00   24   20 10.0  4.1 0.10    2 153.7 150.6   100    67  12063  0.4      0    0   24    2     0     6 97.3     0    194       0
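
One way to read the ESAMAIN numbers above: the Total column is percent of a
single engine, so with 2 CPUs the box tops out at 200.  A quick awk sanity
check (153.3 is the 08:27:00 sample from the display):

```shell
# "Total" CPU is percent of one engine; 2 engines = 200 capacity.
awk 'BEGIN { printf "%.0f%% of the 2-way busy\n", 153.3 / (2 * 100) * 100 }'
# -> 77% of the 2-way busy
```

So the batch run is driving the whole 2-way to roughly three-quarters busy,
sustained across every interval shown.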

ESAUMENU:ESATUSRS

                  <------CPU time-------> <----Main Storage (pages)-----> <---------Paging (pages)----------> <Spooling(pages)> Qed Resid Frame Address
         UserID   <----(seconds)----> T:V <Resident> Lock <---WSSize----> <-------Allocated-----> <---I/O--->       <---I/O---> Pg+    at  List Spaces
Time     /Class    Total     Virt     Rat Total Actv  -ed Total Actv Avg Total ExStg  Pref NPref  Read Write Alloc  Read Write Spl Reset Reord Avg Max
-------- -------- ---------- -------- --- ----- ---- ---- ----- ---- ---- ----- ----- ----- ----- ----- ----- ----- ----- ----- --- ----- ----- --- ---
08:30:00 System:      94.000   92.329 1.0 3088K   3M  824 3169K   3M 132K  158K 75085     0 83323     0     0  2624     0     0   0     0     0   0   0
08:30:00 WAST51       56.188   55.793 1.0  711K 711K   24  717K 717K 717K  4771     0     0  4771     0     0     0     0     0   0     0     0   0   0
08:30:00 WASP50       33.188   32.760 1.0 1273K   1M   25 1311K   1M   1M 36251  1295     0 34956     0     0     0     0     0   0     0     0   0   0
08:30:00 WAST50        2.031    1.959 1.0  692K 692K   24  717K 717K 717K 23824 16747     0  7077     0     0     0     0     0   0     0     0   0   0
08:30:00 LNPRODR2      1.917    1.362 1.4  106K 106K  176  105K 105K 105K 15251  4971     0 10280     0     0     0     0     0   0     0     0   0   0
08:30:00 LNOUS         0.155    0.151 1.0  119K 119K   13  131K 131K 131K 11214  8217     0  2997     0     0     0     0     0   0     0     0   0   0
08:30:00 DTCVSW1       0.148    0.000  3K    84   84   34    50   50   50  2502     1     0  2501     0     0     1     0     0   0     0     0   0   0
08:30:00 LNS10AP       0.095    0.093 1.0  107K 107K   13  107K 107K 107K 23617 20452     0  3165     0     0     0     0     0   0     0     0   0   0
08:30:00 LINUX1        0.090    0.087 1.0 75490  75K   13 75456  75K  75K 26449 22441     0  4008     0     0     0     0     0   0     0     0   0   0
08:30:00 ESATCP        0.060    0.045 1.4   513  513    1   512  512  512   647    17     0   630     0     0     2     0     0   0     0     0   0   0
08:30:00 ESAWRITE      0.018    0.017 1.1   639  639    1   638  638  638   467    22     0   445     0     0   945     0     0   0     0     0   0   0
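
The CPU-time columns in the display above are seconds consumed during the
one-minute interval, so dividing by 60 gives equivalent engines busy per
guest.  A quick sketch using the two WebSphere guests from the 08:30:00
interval:

```shell
# CPU seconds per one-minute interval -> equivalent engines busy.
# Figures are the Total CPU seconds from the display above.
awk 'BEGIN {
    printf "WAST51: %.2f engines busy\n", 56.188 / 60
    printf "WASP50: %.2f engines busy\n", 33.188 / 60
}'
# -> WAST51: 0.94 engines busy
# -> WASP50: 0.55 engines busy
```

That puts the test guest WAST51 at nearly a full engine of the 2-way by
itself during the batch run, with production WASP50 taking another half.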

ESAXMENU:ESALNXP

08:26:00 WAST51   snmpd      2352     1  2351  0.2  0.1  0.0    0    0  -10  0.1  0.1  0.0    0    0 6483 2496    2    0    0    0
08:26:00 WAST51   qpmon      4840  4811  4810  0.1  0.0  0.1    0    0    0  0.1  0.0  0.1    0    0  20K 2440    0    0    0    0
08:26:00 WAST51   smbd       6730  2528  2528  0.3  0.1  0.2    0    0    0  0.2  0.1  0.1    0    0  14K 4232    0    0    0    0
08:26:00 WAST51   java      28704     1 28704  0.2  0.0  0.2    0    0    0  0.1  0.0  0.1    0    0 463K 232K    0    0    0    0
08:26:00 WAST51   java      28847     1 28847 91.7  3.4 88.3    0    0    0 55.0  2.0 53.0    0    0   1M   1M   19    0    0    0
08:26:00 WAST51   java      29117     1 29117  0.5  0.1  0.4    0    0    0  0.3  0.1  0.3    0
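
The ESALNXP display above pins the load on the java process with PID 28847
(91.7% CPU in the interval).  One next step the developers could run inside
the guest, assuming standard Linux/JVM tools and using that PID purely as an
illustration, is to map the hottest native thread to a Java stack.  JVM
thread dumps print native thread IDs in hex (the "nid=0x..." field), so the
busy thread ID from top needs converting:

```shell
# Hypothetical thread ID for illustration; on the live guest, get the
# hottest TID from "top -H -p <pid>" (per-thread view), then convert it
# to hex to match the nid=0x... field in a JVM thread dump
# (trigger a dump with "kill -3 <pid>", or use jstack if available):
TID=28847
printf 'search the thread dump for nid=0x%x\n' "$TID"
# -> search the thread dump for nid=0x70af
```

Matching the busy nid to its Java stack usually names the offending code
path (GC thrash, a tight loop, heavy parsing) far faster than guest-level
counters can, which may give the developers something concrete to fix.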

Thanks!

Joell Chockley
System Capacity/Performance Specialist
Blue Cross Blue Shield of KS
1133 Topeka Blvd
Topeka, KS  66629-0001
Work (785)291-7837




----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
