STORBUF is the answer; the default is broken as designed. See first "http://velocitysoftware.com/faq.html", and then "http://velocitysoftware.com/present/CONFIG/", for configuration guidelines that will help you avoid other such avoidable issues.
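As a sketch of the kind of change being suggested here: the `q srm` output below shows the shipped defaults (STORBUF Q1=125% Q2=105% Q3=95%), which throttle long-running Q3 users such as Linux guests into the eligible list. The percentages in this example are an assumption on my part, not a quote from the linked FAQ; check Velocity Software's pages for their current recommendation before applying them.

```
set srm storbuf 300 250 200
set srm ldubuf 100 100 100
q srm
```

SET SRM takes effect immediately and is non-disruptive, but it does not survive an IPL; the usual practice is to issue the same commands from AUTOLOG1's PROFILE EXEC at system startup.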
Tyler Koyl wrote:
I am starting to get guests dropping off into E3. Here is what it looks like:

Ready; T=0.01/0.01 14:34:23
ind queues
MAINT    Q1 R00 00000212/00000191
TCPIP    Q0 PS  00000736/00000160
VSWCTRL1 Q0 PS  00000086/00000025
SWPLT01  Q0 PS  00015174/00015100
SWPLT02  Q0 PS  00007506/00007473
SWPLT13  Q3 PS  00031507/00037536
SWPLT52  Q3 PS  00008694/00010560
SWPLT05  Q0 PS  00083254/00083131
SWPLT07  Q3 PS  00178853/00202001
SWPLT53  Q3 PS  00048262/00084705
SWPLT04  Q3 PS  00046603/00053150
SWPLT55  E3 PS  00253430/00330662
Ready; T=0.01/0.01 14:34:48
q stor
STORAGE = 3G
Ready; T=0.01/0.01 14:34:55
q srm
IABIAS : INTENSITY=90%; DURATION=2
LDUBUF : Q1=100% Q2=75% Q3=60%
STORBUF: Q1=125% Q2=105% Q3=95%
DSPBUF : Q1=32767 Q2=32767 Q3=32767
DISPATCHING MINOR TIMESLICE = 5 MS
MAXWSS : LIMIT=9999%
...... : PAGES=999999
XSTORE : 0%
Ready; T=0.01/0.01 14:35:01
q xstore
XSTORE= 1024M online= 1024M
XSTORE= 1024M userid= SYSTEM usage= 99% retained= 0M pending= 0M
  XSTORE MDC min=0M, max=1024M, usage=0%
XSTORE= 1024M userid= (none) max. attach= 1024M
Ready; T=0.01/0.01 14:35:19
Ready; T=0.01/0.01 14:41:19
q alloc page
                EXTENT     EXTENT  TOTAL  PAGES   HIGH    %
VOLID  RDEV      START        END  PAGES IN USE   PAGE  USED
------ ---- ---------- ---------- ------ ------ ------ ----
VMTPG1 A724          0       3338 601020 239701 506877  39%
VMTPG2 A70F          0       3338 601020 235920 522720  39%
VMTPG3 A71F          0       3338 601020 245292 530640  40%
VMTPG4 A72F          0       3338 601020 235011 514799  39%
VMTPG5 A73F          0       3338 601020 236858 522710  39%
VMTPG6 A700          0       3338 601020  37478  39799   6%
VMTPG7 A710          0       3338 601020  37929  40396   6%
VMTPG8 A731          0       3338 601020  38418  40944   6%
                                  ------ ------        ----
SUMMARY                            4695K  1276K         27%
USABLE                             4695K  1276K         27%

I added the paging volumes this morning to ensure we had enough in place. This is our VM test LPAR, and the sum of the virtual storage of the Linux guests is 14G, so things are tight. What should be done first?
1. Further reduce guest storage. We have done this already, but there may be a way to squeeze out some more.
2. What about messing with SRM STORBUF, etc.?
3.
Anything else besides ordering more storage? I am actually looking for something non-disruptive: taking down a six-LPAR z9 to add more memory to a test LPAR does not go over well, so we tend to tag this sort of change onto production changes.

Tyler Koyl

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
--
Barton Robinson
Sr. Architect, Velocity Software
PO Box 390640, Mountain View, CA 94039-0640
[EMAIL PROTECTED] | 650-964-8867 | http://velocitysoftware.com
"If you can't measure it, I'm just not interested"