As others have pointed out, the DB2 governor will let you kill threads 
that have exceeded some arbitrary amount of CPU time.  There are 
obviously pluses and minuses to that.  
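
For what it's worth, the governor's limits are expressed in CPU 
service units (ASUTIME) rather than wall-clock CPU seconds, and the 
service-units-per-second rate depends on the CPU model.  A 
back-of-the-envelope sketch in Python, with the SU/second figure 
purely a made-up placeholder:

    # Rough sketch: translating a CPU-seconds cap into an approximate
    # ASUTIME (CPU service unit) value.  The SU/second rate is model
    # dependent; 12000.0 below is a placeholder assumption, not a real
    # SRM constant for any particular machine.
    SERVICE_UNITS_PER_CPU_SECOND = 12000.0

    def asutime_for(cpu_seconds):
        """Approximate ASUTIME limit for a given CPU-seconds cap."""
        return int(cpu_seconds * SERVICE_UNITS_PER_CPU_SECOND)

    # e.g. cap a query at roughly 5 CPU minutes:
    print(asutime_for(5 * 60))   # 3600000 with the assumed constant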

Using WLM you can age those long-running queries so they drop to a 
low enough importance that they don't substantially get in the way of 
anybody else.  (Well, depending on how you set it up and how much 
you're willing to penalize those queries.)  
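
To make the aging idea concrete, here's a toy model of a multi-period 
service class (plain Python, not real WLM policy syntax, and the 
period durations and importances are invented) showing how a query 
slides to lower importance, and eventually to discretionary, as it 
accumulates service:

    # Toy model of WLM period aging -- not real WLM syntax, just the idea.
    # Each period is (duration in accumulated service units, importance);
    # the last period here is discretionary.
    periods = [
        (50_000,  2),      # short queries finish here at importance 2
        (500_000, 4),      # medium queries drop to importance 4
        (None,    None),   # everything else ages to discretionary
    ]

    def importance_for(consumed_su):
        """Importance a query runs at, given the service consumed so far."""
        spent = 0
        for duration, importance in periods:
            if duration is None or consumed_su < spent + duration:
                return importance if importance is not None else "discretionary"
            spent += duration
        return "discretionary"

    for su in (10_000, 200_000, 5_000_000):
        print(su, "->", importance_for(su))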

However, even discretionary work that is running only because nothing 
else needs the CPU right now can drive up your rolling four-hour 
average (R4H) and may impact your software costs and/or how quickly 
you reach your caps.  If that work was unnecessary because the user 
gave up on the query after 5 minutes, then that was a shame, and it 
might have been nice for the governor to have killed that thread 
before it ran for 2 hours.  But I'm not sure that you can readily do 
so without impacting other threads that really do need to run that 
long.  
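
For what it's worth, the R4H is just the average of the MSU readings 
over the trailing four hours, so even work that ran "for free" at a 
quiet moment keeps contributing to that average for hours afterward.  
A rough illustration with invented numbers (5-minute buckets):

    # Rough illustration of the rolling four-hour average (R4H).
    # 48 five-minute buckets of MSU consumption; the numbers are made up.
    baseline = [200] * 48                 # steady 200 MSU all afternoon
    spike = baseline.copy()
    for i in range(12):                   # one runaway query adds an extra
        spike[i] += 150                   # 150 MSU for an hour

    def r4h(buckets):
        return sum(buckets) / len(buckets)

    print("R4H without the query:", r4h(baseline))   # 200.0
    print("R4H with the query:   ", r4h(spike))      # 237.5, and it stays
                                                     # elevated for four hours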

Be aware that the recent IBM PTF that allowed for "60% DDF offload" 
really changes the way DDF work runs on the zIIP and GCP.  Previously 
the "generosity factor" for DDF threads was set to 55% and SRM moved 
the enclave back and forth between the GCP and the zIIP.  After that 
PTF, DB2 marks 3 of every 5 (or maybe 6 of every 10) DDF enclaves as 
being 100% offloaded and the remainder as 0%.  (Admittedly, I'm not 
exactly sure what they're doing under the covers.)

While this might average out to 60% of the CPU work offloaded over 
time, the less homogeneous your workload is, the more likely any 
particular interval will show a significant variation from that 
value.  So if you have large user queries coming in that use 
significant amounts of CPU time, there's a 40% chance that any one of 
them will now run entirely on the GCP instead of putting only 45% of 
its CPU on the GCP, resulting in a possibly significant increase in 
GCP utilization during that interval.
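
Here's a quick simulation of that effect (the workload numbers are 
invented; it just contrasts "every enclave split 55/45" with "a 
random 60% of enclaves marked 100% offloaded, the rest 0%"):

    import random, statistics

    random.seed(1)

    def interval_gcp_cpu(query_cpu_seconds, per_enclave_split):
        """GCP-side CPU seconds for one interval's worth of DDF enclaves."""
        gcp = 0.0
        for cpu in query_cpu_seconds:
            if per_enclave_split:
                gcp += cpu * 0.45   # old behavior: ~45% of every enclave on the GCP
            else:                   # new behavior: all-or-nothing per enclave
                gcp += 0.0 if random.random() < 0.60 else cpu
        return gcp

    # Heterogeneous mix: lots of tiny queries plus two monsters per interval.
    def one_interval():
        return [random.choice([0.1, 0.2, 0.5]) for _ in range(200)] + [600.0, 900.0]

    old = [interval_gcp_cpu(one_interval(), True)  for _ in range(1000)]
    new = [interval_gcp_cpu(one_interval(), False) for _ in range(1000)]

    print("old split  : mean %.0f  stdev %.0f" % (statistics.mean(old), statistics.stdev(old)))
    print("new marking: mean %.0f  stdev %.0f" % (statistics.mean(new), statistics.stdev(new)))

With that mix the averages only differ by the expected 45%-versus-40% 
shift, but the interval-to-interval spread on the GCP side goes from 
roughly one CPU second to several hundred.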

Finally, if you're running knee-capped GCPs (less than full speed), 
your users will likely perceive a noticeable variation in run times 
between executions of the same query--because sometimes they run on 
the slower GCP and sometimes on the faster zIIP.  The bigger the 
discrepancy between the GCP and zIIP speeds, the bigger this potential 
run-time difference.  
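
The size of that swing is easy to estimate; a trivial calculation, 
with the speed ratio and CPU time purely assumed for illustration:

    # Rough estimate of the run-time swing on knee-capped (sub-capacity) GCPs.
    # The 0.40 speed ratio and the 10-minute figure are assumptions.
    gcp_speed_ratio = 0.40        # knee-capped GCP runs at 40% of full speed
    cpu_minutes_on_ziip = 10.0    # CPU the query needs on a full-speed zIIP

    cpu_minutes_on_gcp = cpu_minutes_on_ziip / gcp_speed_ratio
    print("zIIP: %.0f min of CPU, GCP: %.0f min" %
          (cpu_minutes_on_ziip, cpu_minutes_on_gcp))   # 10 vs 25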

As you can probably tell, we're not real happy with that PTF.  It 
works well for homogeneous workloads, but where you have a mixture of 
large and small queries, how lucky or unlucky you are with the large 
queries in any given period will determine how happy you are in that 
period.  My guess is that having a mixture of transaction sizes is 
the more normal case, and where you have ad hoc user queries, some of 
those are likely large to very large.

But if you just recently started having problems with your DDF, and 
if you believe the queries haven't changed, you might look to see 
whether you just recently applied that PTF (PM12256 I think?).  

