Re: Optimal VMTUNE Guidelines for a TSM Server
IMHO, your box looks overloaded:

1. You always have 20+ runnable processes.
2. User + system CPU exceeds 95% most of the time.
3. You are always scanning and freeing memory.
4. With (F - f) / R, you only have room for one read-ahead: (376 - 120) / 256 = 1. I don't think setting R above 64 helps. F should be higher, and I think f should be a bit higher too. See my post in the performance and tuning section on adsm.org. You might try f=256, F = R * number_of_read_aheads (say 768), and R=64.
5. You may try lowering P, to see if the scanning/freeing is lessened. You may want to set h=1 (BE CAREFUL!).
6. Increase b if vmtune -a shows fsbufwaitcnt is non-zero.
7. Your B looks high. Normally you increase it if vmtune -a shows hd_pendqblked is non-zero. If yours is zero, lower B, because that memory is pinned.
8. Since pi and po are zero, I think you're using too much memory for file cache. See #5.

Miles

>>> [EMAIL PROTECTED] 10/01/02 11:08 AM >>>
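Miles's point 4 can be checked with a couple of lines of shell. This is only an illustrative sketch: readahead_slots is a hypothetical helper name, and the figures come from the vmtune output quoted in this thread.

```shell
# Sketch: how many full read-aheads (maxpgahead pages each) fit in the
# gap between minfree (-f) and maxfree (-F). Hypothetical helper name.
readahead_slots() {
  minfree=$1; maxfree=$2; maxpgahead=$3
  echo $(( (maxfree - minfree) / maxpgahead ))
}

readahead_slots 120 376 256   # John's current f/F/R -> 1 slot
readahead_slots 256 768 64    # Miles's suggested f=256, F=768, R=64 -> 8 slots
```

The second call shows why Miles suggests raising f and F together rather than raising R alone: the headroom between minfree and maxfree is what bounds in-flight read-ahead.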
Re: Optimal VMTUNE Guidelines for a TSM Server
John,

Based on my experience, you're doing fine.

* Your system is doing zero paging to/from disk (pi and po are both consistently zero).
* Your scan/free ratio (sr/fr) is about 2, which is OK. The rule of thumb is that sr/fr should be 3 or less.

When my TSM server had only 512 MB, I was able to tune it for zero pi & po and a 97+% TSM DB cache hit ratio, but sr/fr was about 16.

Tab Trepagnier
TSM Administrator
Laitram Corporation
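The sr/fr rule of thumb Tab describes is easy to check mechanically. A minimal sketch (vmstat_sr_fr is a hypothetical helper; the two sample lines are taken from John's vmstat output elsewhere in this thread; column positions assume the AIX 5.1 vmstat layout, where fr and sr are fields 8 and 9):

```shell
# Sketch: average scan/free (sr/fr) ratio over the data lines of
# `vmstat <interval> <count>` output fed on stdin.
vmstat_sr_fr() {
  awk '/^ *[0-9]/ { fr += $8; sr += $9 }
       END { if (fr > 0) printf "%.1f\n", sr / fr }'
}

printf '%s\n' \
  "21 2 169239 225 0 0 0 2205 5151 0 3130 47524 17686 44 45 6 5" \
  "38 2 169284 564 0 0 0 2850 7623 0 3393 42361 14571 48 47 2 3" |
  vmstat_sr_fr   # prints 2.5
```

A result of 3 or less is fine by Tab's rule; a ratio climbing into double digits suggests the page-replacement daemon is scanning hard to find free pages.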
Re: Optimal VMTUNE Guidelines for a TSM Server
While following this discussion, I have also looked long and hard at our AIX 5.1 system (an H50 with 3 GB memory and 2 GB page space) running TSM 4.1.2.9. I am submitting vmtune and vmstat information. Any comments on this implementation would be appreciated. Are we running OK? Do you see any problems? Are there opportunities for improvement? (I think we are spending a lot of CPU resources chasing free pages when the page-replacement algorithm scans the PFT.) Comments?

%vmtune
vmtune: current values:
  -p       -P        -r         -R        -f       -F        -N         -W
minperm  maxperm  minpgahead maxpgahead  minfree  maxfree  pd_npages  maxrandwrt
 156664   392172      2         256        120      376      65536        0

  -M       -w       -k       -c        -b          -B         -u         -l      -d
maxpin   npswarn  npskill  numclust  numfsbufs  hd_pbuf_cnt lvm_bufcnt lrubucket defps
 627476   32768    8192       1        186        1728          9       131072     1

       -s             -n        -S          -L          -g          -h
sync_release_ilock  nokilluid  v_pinshm lgpg_regions lgpg_size strict_maxperm
        0               0         0          0           0           0

   -t
maxclient
  626656

number of valid memory pages = 784345     maxperm=50.0% of real memory
maximum pinable = 80.0% of real memory    minperm=20.0% of real memory
number of file memory pages = 640004      numperm=81.6% of real memory
number of compressed memory pages = 0     compressed=0.0% of real memory
number of client memory pages = 0         numclient=0.0% of real memory
# of remote pgs sched-pageout = 0         maxclient=79.9% of real memory

%vmstat 60 30
kthr     memory             page              faults        cpu
 r  b   avm    fre  re pi po   fr    sr  cy   in    sy    cs  us sy id wa
 4  0 169166   119   0  0  0    55   546  0    34  7234  1487 29 48 13  9
21  2 169239   225   0  0  0  2205  5151  0  3130 47524 17686 44 45  6  5
38  2 169284   564   0  0  0  2850  7623  0  3393 42361 14571 48 47  2  3
26  3 169291   421   0  0  0  2898  5654  0  3740 40477 14652 43 49  4  4
25  2 169752   296   0  0  0  2092  4600  0  3338 35895 14932 47 43  6  5
35  2 169321   376   0  0  0  2623  7652  0  3622 34551  9644 47 50  1  1
35  2 169335   420   0  0  0  1642  4355  0  3528 39442 14716 49 44  4  3
29  2 169445   203   0  0  0  2155  4655  0  3736 37539 14001 48 46  3  3
41  2 169375   254   0  0  0  2286  4969  0  3434 46253 15200 50 45  2  2
35  3 170207   454   0  0  0  2751  7763  0  3802 44231 12711 51 47  1  1
24  2 169385   207   0  0  0  4114  9283  0  3746 34910  9610 39 57  1  2
22  3 169389   346   0  0  0  4523  9576  0  4453 40497 12297 39 57  1  3
23  2 169378   228   0  0  0  3879 11433  0  4457 40795 12185 41 57  0  1
22  2 169421   286   0  0  0  4009  8959  0  3292 45895 14423 46 50  1  3
24  3 169850   462   0  0  0  3679  8738  0  3618 38413 12054 44 48  3  6
23  2 169468   386   0  0  0  3205  8526  0  3396 39838 13030 42 53  2  3
39  2 169533   416   0  0  0  3190  6098  0  3098 40905 13073 54 44  1  1
35  2 169534   368   0  0  0  3578  8490  0  2384 40938 12915 54 42  2  3
33  1 169705   173   0  0  0  2756  7917  0  2735 42542 14719 53 42  3  3
33  2 169821   338   0  0  0  4286  9351  0  3482 37009 11309 50 47  1  2
34  2 169714   427   0  0  0  3238  7982  0  3071 34298 11006 49 48  1  1
31  2 169764   224   0  0  0  1639  4340  0  2233 42900 16126 48 36  9  7
39  2 169791   430   0  0  0  1278  3897  0  2585 48103 16678 54 40  3  3
35  2 169830   141   0  0  0  2039  4740  0  3100 43696 18504 47 45  4  4
41  1 17
Re: Optimal VMTUNE Guidelines for a TSM Server
>Thanks everyone for the input. Keep the thread going. I will provide some
>feedback on what seems to work with my very high-end environment, 6H1,
>Shark Disk, Magstar tape, all fibre channel.

OOh Ooh Goodie!! My environment: 6M1, Shark ESS, 3494, fibre channel, Inrange director. Can I copy?

P.S. I mucked around with some vmtune settings and this is what I've got now (be gentle if they are all wrong!). This is in rc.local:

/usr/bin/vmtune -p 10 -P 20 -R 64 -f 256 -F 768 -b 128 -B 547 -u 32 -l 65536

minperm  maxperm  minpgahead maxpgahead  minfree  maxfree  pd_npages  maxrandwrt
 104855   209710      2          64        256      768     524288        0

maxpin   npswarn  npskill  numclust  numfsbufs  hd_pbuf_cnt lvm_bufcnt lrubucket defps
 838841   32000    8000       1        128         547          32       65536     1

sync_release_ilock  nokilluid  v_pinshm lgpg_regions lgpg_size strict_maxperm
        0               0         0          0           0           0

maxclient
  838020

number of valid memory pages = 1048551    maxperm=20.0% of real memory
maximum pinable = 80.0% of real memory    minperm=10.0% of real memory
number of file memory pages = 767596      numperm=73.2% of real memory
number of compressed memory pages = 0     compressed=0.0% of real memory
number of client memory pages = 0         numclient=0.0% of real memory
# of remote pgs sched-pageout = 0         maxclient=79.9% of real memory

"Seay, Paul"
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
09/16/2002 08:40 PM
Please respond to "ADSM: Dist Stor Manager"
Subject: Re: Optimal VMTUNE Guidelines for a TSM Server

Mark,

I am sure your recommendations will bring rain. But it is the most definitive response to this question yet. This thread is going to be a gold mine when done. I bet there are hundreds of TSM servers that could benefit from a little tuning in this area. We need to develop a simple calculator for the dedicated TSM server that generates a default vmtune recommendation to start with. This is a simple shell script.
Putting on my MVS hat, with 25 years of I/O and memory-management experience, and now having had it clearly explained how all of these knobs interact, along with some other stuff I have read, I now have a starting place. And, as always: change one knob at a time unless they are dependent on each other, then measure, then make the next adjustment. At the end of the day the hum from this machine should be audible around the world.

Thanks everyone for the input. Keep the thread going. I will provide some feedback on what seems to work with my very high-end environment: 6H1, Shark Disk, Magstar tape, all fibre channel.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180

-----Original Message-----
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 16, 2002 8:46 PM
To: [EMAIL PROTECTED]
Subject: Re: Optimal VMTUNE Guidelines for a TSM Server

Seay, Paul wrote:

>I am trying to figure out what these are. The defaults are not good on
>a large server.
>
>The suggestion is to figure out how much memory dsmserv needs and then
>work from there.
>
>So let's take the example of a 2 GB server with a buffer pool of 256 MB
>and an overall memory requirement of 400 MB. That would make you think
>there is about 1.6 GB left over. The problem is that the default
>filesystem maxperm is 80% of the 2 GB, or about 1.6 GB. That would mean
>nothing left for the rest of the processes, thus lots of paging. I am
>thinking a buffer of about 128 MB should be in there. So, in this case,
>maybe set maxperm to 65%.
>
>The real question is what other vmtune knobs should be considered on a
>TSM server. The I/O prefetch, large or small? Is there a book on how
>to do this?
>
>ETC, ETC.
>
>Paul D. Seay, Jr.
>Technical Specialist
>Naptheon Inc.
>757-688-8180

Paul,

You are asking very interesting questions. I teach the AIX performance tuning class and we spend the better part of a day discussing VMM. Needless to say, I can't review all of that in just one note.
However, I do want to take some time to explain the theory behind this in order to justify my settings. Also, you may want to choose different values based on your environment. The important thing is to see the big picture here and to realize that AIX VMM does not work like any other OS's virtual memory management. In addition, before adjusting anything with vmtune
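Seay's "simple calculator" idea above might be sketched as below. This is only a sketch under assumed rules of thumb (the 128 MB cushion and the 50% maxperm cap follow suggestions made in this thread, not IBM guidance), and vmtune_suggest is a hypothetical name:

```shell
# Hedged sketch: given total RAM (MB) and the memory dsmserv needs (MB),
# emit a starting vmtune command for a dedicated TSM server.
vmtune_suggest() {
  ram_mb=$1; dsmserv_mb=$2
  # keep dsmserv plus a 128 MB cushion out of the file cache
  maxperm_pct=$(( (ram_mb - dsmserv_mb - 128) * 100 / ram_mb ))
  [ "$maxperm_pct" -gt 50 ] && maxperm_pct=50   # cap at 50% per the thread
  minperm_pct=$(( maxperm_pct / 2 ))
  echo "/usr/samples/kernel/vmtune -p $minperm_pct -P $maxperm_pct -R 64 -f 256 -F 768"
}

# 2 GB box, dsmserv needs ~400 MB (Seay's example)
vmtune_suggest 2048 400
```

The f/F/R values echoed here are the ones floated earlier in the thread; a real script would also validate minfree + maxpgahead <= maxfree before printing anything.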
Re: Optimal VMTUNE Guidelines for a TSM Server
Hey Seay, here are a few hints on vmtune.

vmtune is provided in the bos.adt.samples fileset and lives in /usr/samples/kernel (not in the default path). It controls various aspects of the AIX virtual memory system, and the virtual memory system controls most filesystem activity on AIX. Changes to vmtune parameters do not survive a reboot, but a line can be added to /etc/inittab so that your vmtune settings are applied at boot.

If AIX detects that a file is being read sequentially, it can read ahead even though the application has not (yet) requested that data. This helps large-file backups on AIX clients and helps storage pool migrations from disk on an AIX TSM server. When raising the read-ahead parameter (-R) you must also raise the maxfree parameter (-F) so that there is enough free memory to hold the read-ahead data. The following equation MUST HOLD: minfree + maxpgahead <= maxfree. I recommend the maximum: -R 256.

If you look at vmtune's minperm/maxperm, they determine how much memory AIX sets aside for filesystem cache. AIX can and will throw out application memory, such as TSM's, in favor of caching filesystem data. This can cause paging of the database buffer pool, leading to slow database performance; paging of the database buffer pool can also make the database cache hit statistics overly optimistic. TSM does not often take advantage of filesystem caching, so lowering maxperm will make AIX retain more application memory. Most VM paging on a (dedicated) TSM server can be stopped by modifying the minperm/maxperm parameters. Exception: RAM-constrained systems where the database buffer pool is too large. A good starting point is setting aside a maximum of 50% (-P 50) for filesystem caching instead of the default 80%. Lower it further if that is not effective; these changes can be made on the fly. As maxperm approaches minperm, consider lowering minperm as well, and watch vmstat for progress: if po goes to zero, pi will eventually drop as well.

OK, I know of two good books that cover these concepts.
Please read this redbook: IBM Certification Study Guide - AIX Performance and System Tuning. And this book (not a redbook; you need to buy it): AIX Performance Tuning, by Frank Waters.

Hope this helps.

Pete out :-)

Kvedja/Regards,
Petur Eythorsson
Taeknimadur/Technician
IBM Certified Specialist - AIX
Tivoli Storage Manager Certified Professional
Microsoft Certified System Engineer
[EMAIL PROTECTED]

Nyherji Hf
Borgartun 37, 105 Iceland
TEL: +354-569-7700
URL: http://www.nyherji.is

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]] On Behalf Of Seay, Paul
Sent: 16. september 2002 22:00
To: [EMAIL PROTECTED]
Subject: Optimal VMTUNE Guidelines for a TSM Server

I am trying to figure out what these are. The defaults are not good on a large server.

The suggestion is to figure out how much memory dsmserv needs and then work from there.

So let's take the example of a 2 GB server with a buffer pool of 256 MB and an overall memory requirement of 400 MB. That would make you think there is about 1.6 GB left over. The problem is that the default filesystem maxperm is 80% of the 2 GB, or about 1.6 GB. That would mean nothing left for the rest of the processes, thus lots of paging. I am thinking a buffer of about 128 MB should be in there. So, in this case, maybe set maxperm to 65%.

The real question is what other vmtune knobs should be considered on a TSM server. The I/O prefetch, large or small? Is there a book on how to do this?

ETC, ETC.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180
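Petur's "MUST HOLD" constraint lends itself to a tiny sanity check before applying new settings. A hedged sketch (check_maxfree is a hypothetical helper; the numbers are values discussed in this thread):

```shell
# Sketch: verify minfree + maxpgahead <= maxfree before running vmtune.
# Returns success (0) when the constraint holds, failure (1) otherwise.
check_maxfree() {
  minfree=$1; maxpgahead=$2; maxfree=$3
  [ $(( minfree + maxpgahead )) -le "$maxfree" ]
}

check_maxfree 120 256 376 && echo ok || echo violated   # 120+256 <= 376: ok
check_maxfree 120 256 300 && echo ok || echo violated   # 376 > 300: violated
```

A wrapper script could run this check and refuse to call vmtune -R without a matching -F, which is exactly the coupling Petur warns about.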
Re: Optimal VMTUNE Guidelines for a TSM Server
Paul,

When tuning maxperm, one important thing should be noted: maxperm is not the limit on cached file pages! Under heavy JFS file I/O, the file cache can grow up to maxperm by paging out process data areas (I think the AIX documentation refers to them as "computational" pages). If there is free memory beyond maxperm, the file cache will continue to grow and consume all available free memory. In other words, maxperm defines the limit up to which the file cache will *compete* with processes for memory.

So if you have a system with a known "small" application load and no real need for VM paging, I don't see a reason not to set maxperm very low, to let the file cache consume only real "free" memory. As an example, here's one of our boxes:

number of valid memory pages = 2097141    maxperm=8.0% of real memory
maximum pinable = 80.0% of real memory    minperm=3.0% of real memory
number of file memory pages = 847688      numperm=40.4% of real memory

Of course, I'm pretty sure that if you use raw logical volumes, all of this goes out the window. Our TSM server is a Sun box (with raw logical volumes), so I can't tell exactly how AIX will behave.

Cheers,
Paul Ripke
UNIX/OpenVMS Sysadmin
CSC, Port Kembla 2502, NSW, Australia
Phone +61 2 4275 4101
Fax +61 2 4275 7801
Mobile +61 419 432 517

101 reasons why you can't find your Sysadmin:
68. It's 9 AM. He/she is not working that late.
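As a cross-check of the counters above: the numperm percentage vmtune prints is just file memory pages over valid memory pages. A minimal sketch (numperm_pct is a hypothetical helper; the figures are from Paul's box):

```shell
# Sketch: reproduce vmtune's "numperm=N% of real memory" line from the
# raw page counters it reports.
numperm_pct() {
  file_pages=$1; valid_pages=$2
  awk -v f="$file_pages" -v v="$valid_pages" \
    'BEGIN { printf "%.1f\n", f * 100 / v }'
}

numperm_pct 847688 2097141   # prints 40.4, matching the output above
```

Comparing numperm against maxperm this way shows whether the file cache is merely soaking up free memory (numperm well above a low maxperm, as on Paul's box) or actively competing with computational pages.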