How many IFLs are we talking about here? At a guess, if you are going from 4 to 8 IFLs, you won't see any MP effect, but if you are going from 30 to 60 IFLs, you will definitely see an MP effect.
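(Just to put made-up numbers on that intuition -- this is a back-of-the-envelope sketch of mine, not a measurement, and the 2%-per-engine loss is purely hypothetical:

    /* Illustrative only: assume each additional engine delivers a fixed
     * fraction of the previous one's capacity because of MP effect.
     * The loss is barely visible going from 4 to 8 engines, but very
     * visible going from 30 to 60. */
    #include <stdio.h>

    static double capacity(int engines, double mp_factor)
    {
        double total = 0.0, next = 1.0;
        for (int i = 0; i < engines; i++) {
            total += next;
            next *= mp_factor;        /* each added engine yields a bit less */
        }
        return total;
    }

    int main(void)
    {
        const double f = 0.98;        /* hypothetical 2% loss per added engine */
        int sizes[] = { 4, 8, 30, 60 };
        for (int i = 0; i < 4; i++)
            printf("%2d engines -> %5.2f engines' worth of work (%.0f%% per engine)\n",
                   sizes[i], capacity(sizes[i], f),
                   100.0 * capacity(sizes[i], f) / sizes[i]);
        return 0;
    }

With those assumed numbers, 8 engines still deliver about 93% per engine, but 60 engines deliver under 60% per engine -- which is why the jump from 30 to 60 is where you notice it.)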
What I can remember about the MP effect: The MP effect is determined by the operating system and by the workload. It is not simply a hardware effect at all.

If two or more processors access the same memory location, one will have to wait. Memory operations are much faster if the data is in cache than if it is in memory and has to be brought into cache. Further, if processor A has a particular piece of memory (a cache line) in cache, and processor B changes that piece of memory, then the data in processor A's cache must be removed (cast out), or A and B would see different values in the same memory location. This cast-out slows down processor A, since it must now access that piece of memory from storage instead of from cache.

If you were to set up each of the processors to access its own private area of memory, with no shared memory at all, then the MP effect would be zero. So it is the sharing of memory by operating systems and applications that causes the MP effect, not the hardware itself.

Shared resources are protected by locks. Locks in turn occupy a particular shared memory location, and (especially in the case of spin locks) may consume cycles just testing the lock. So shared resources also lead to MP effects.

If a virtual machine is moved from running on one processor to running on another one, it leaves its cached data behind, so it will run slower on the new processor for a while. This is also considered an MP effect.

The early VM/XA did away with most locks by allowing many operations only on the master processor. At the time, when the number of processors was small, this worked really well, but as the master processor gets busier (especially if it gets up close to 100% busy) the other processors end up waiting on the master processor. This is also an MP effect. As Barton said, IBM has done a lot to reduce this master-processor serialization, but at the cost of adding more locks, which can themselves cause MP effects.

VM has always had a much lower MP effect than MVS by its nature. Individual virtual machines have separate storage, except for shared segments. These are mostly read-only, so after loading the cache, the multiple copies can reside in different processor caches with no conflict, and there is never a need to cast them out since they never get updated. (The exception is the shared-write segments used by GCS.)

In MVS every address space shares common storage areas (LPA, SQA, low storage), so there is inherent conflict. In addition, CP provides far fewer services for VM guests than MVS does, so far fewer locks are needed. And with large numbers of small guests (a typical CMS workload) it was very easy for VM to maintain processor affinity, so each guest tended to be dispatched on the same processor each time, which means faster memory access since more data will still be in cache.

I don't know what the effect of Linux is on all this. I'm sure VM's algorithms and hardware are far more sophisticated than Linux or VMware on Intel, so the MP effect is much lower. I think I've seen the chart, and it was based on theoretical calculations and not on the actual workload measurements that used to be done by the Washington Systems Center to create the LSPR. The increase in the number of processors from 16 to 60 obviously means the MP effect is much more important to large systems.
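The cast-out effect described above is easy to demonstrate yourself, including on a Linux guest. Here is a small sketch of mine (not anything IBM published), assuming 64-byte cache lines, pthreads, and gcc/clang for the aligned attribute. Two threads update counters that share a cache line, then counters that sit on separate lines; the work is identical, but the shared-line run is typically much slower because every store casts the line out of the other processor's cache:

    /* Build: cc -O2 -pthread castout.c -o castout */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 100000000UL
    #define LINE  64                      /* assumed cache-line size */

    static struct {
        volatile unsigned long shared_a;  /* these two land on the same line   */
        volatile unsigned long shared_b;
        char pad1[LINE];
        volatile unsigned long alone_a;   /* these two get a line of their own */
        char pad2[LINE];
        volatile unsigned long alone_b;
    } c __attribute__((aligned(LINE)));

    static void *bump(void *p)
    {
        volatile unsigned long *ctr = p;
        for (unsigned long i = 0; i < ITERS; i++)
            (*ctr)++;                     /* volatile keeps the stores in the loop */
        return NULL;
    }

    static double run_pair(volatile unsigned long *x, volatile unsigned long *y)
    {
        pthread_t t1, t2;
        struct timespec t0, t9;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&t1, NULL, bump, (void *)x);
        pthread_create(&t2, NULL, bump, (void *)y);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t9);
        return (t9.tv_sec - t0.tv_sec) + (t9.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("same cache line:      %.2f seconds\n", run_pair(&c.shared_a, &c.shared_b));
        printf("separate cache lines: %.2f seconds\n", run_pair(&c.alone_a, &c.alone_b));
        return 0;
    }

No locks, no shared data in the logical sense -- the slowdown in the first run comes purely from two processors fighting over one cache line, which is the same mechanism behind the lock and shared-storage effects above.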
In addition, z10s are NUMA processors (Non-Uniform Memory Access), so memory access speed varies depending on where in memory a particular data item resides -- each group of processors has some data that is close to it and some that is farther away.

There is also another VM-like operating system, PR/SM, in every System z. It too can have MP effects, if you share processors between LPARs. If you have both IFLs and CPs, the CPs have no effect on the MP effect for the IFLs. (At least until we get to z/VM-mode LPARs that mix them.)

z/OS has a new feature, HiperDispatch, to help reduce the MP and NUMA effects. In the Austin SHARE proceedings, see 2831 - System z10: HiperDispatch From a Sysprog Perspective. It involves cooperation between PR/SM and the z/OS dispatcher.

I should warn some of you that I haven't done much performance work in the last 10 years. I'm sure I've missed or oversimplified things. If anyone knows of a good reference on the MP effect, I'd like to read it. Is anyone aware of any work on the MP effect when running Linux guests on z/VM?

Alan Ackerman
alan.acker...@bankofamerica.com