On 19.04.2024 at 10:32, Massimo Biancucci wrote:
Hi everybody,

In a presentation at GSE I saw a slide with a graph about the advantage of
having several small sysplex LPARs versus one big one. So, for instance,
it's better to have five LPARs with 4 processors each than one with 20.

There was a sentence: "It doesn't take an extremely large number of CPUs
before a single-image system will deliver less capacity than a sysplex
configuration of two systems, each with half as many CPUs".
And: "In contrast to a multiprocessor, sysplex scaling is near linear.
Adding another system to the sysplex may give you more effective capacity
than adding another CP to an existing system."

We've been told (by IBM Labs, it seems) that a 4-way data sharing group
with 8 CPUs each performs 20% better than a single LPAR with 32 CPUs.
Similarly, at another customer site, we were told that "having more than
8 CPUs in a single LPAR is counterproductive".
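
To make those claims concrete, here is the minimal back-of-the-envelope
model I have in mind, in Python. The numbers in it are invented for
illustration only - an assumed 1.5% capacity loss per additional CP within
one image, and an assumed flat 10% data sharing overhead per sysplex
member - not LSPR or measured figures:

# Back-of-the-envelope model only: mp_loss and overhead below are
# invented round numbers, not IBM LSPR or measured values.

def single_image_capacity(n_cps, mp_loss=0.015):
    """Capacity of one image, assuming each additional CP is degraded
    a little more by multiprocessor (MP) effects."""
    return sum(1.0 - mp_loss * i for i in range(n_cps))

def sysplex_capacity(images, cps_per_image, overhead=0.10, mp_loss=0.015):
    """Capacity of a data sharing sysplex: per-image capacity reduced
    by an assumed flat coupling/data-sharing overhead."""
    return (images * single_image_capacity(cps_per_image, mp_loss)
            * (1.0 - overhead))

for n in (8, 16, 20, 32):
    one = single_image_capacity(n)
    two = sysplex_capacity(2, n // 2)
    print(f"{n:2d} CPs: single image {one:5.2f} vs "
          f"2 x {n // 2}-way sysplex {two:5.2f}")

# The "4 x 8-way vs one 32-way" comparison, same assumptions:
print(f"4 x 8: {sysplex_capacity(4, 8):5.2f} vs "
      f"1 x 32: {single_image_capacity(32):5.2f}")

With these invented numbers the single image wins up to 20 CPs, the
two-way split wins at 32, and the 4 x 8-way split comes out about 11%
ahead of one 32-way image; a steeper MP-effect assumption reproduces the
20% figure. The point is only that the crossover depends entirely on
those two parameters, which is exactly why I'm asking for measurements.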

Putting all this information together, it seems better to have many small
partitions (how small?) in data sharing than, say, four bigger ones (also
in data sharing).

Does anybody have direct experience building and measuring such scenarios?
Mainly standard CICS/batch/DB2 applications.
Of course, I'm talking about well-defined LPARs with high-polarization
CPUs, so that factor can be set aside.

Could you share your thoughts (direct experience would be better) about
where the inefficiency comes from?
Excluding hardware issues (polarization and so on), could it come from
z/OS-related inefficiency (WLM queue management)?
If so, do zIIP CPUs contribute to the growing inefficiency?

I know the usual response is "it depends", but I'm looking for general
guidelines that will allow me to choose.

Thanks a lot in advance for your valuable time.
Max

Well, it is not my problem (I use smaller configurations).
However, I do have some remarks:
1. Sysplex overhead. Parallel Sysplex has a lot of advantages, except one: CPU. It is more effective to assign 1000 MSU to a single LPAR than to spread it across two or more LPARs.
2. Parallel Sysplex history. Many years ago IBM introduced a new CPU technology, CMOS. However, the new CMOS CPs were significantly less powerful than the old ECL ones, and there was no way to add more CPs to the CPC or LPAR (LPAR was quite a new concept at the time). So Parallel Sysplex was the only way to scale the machines. Today, however, a single CPC can have up to 200 CPs - a lot, more than you need - so scalability is no longer a problem.
3. Every multi-CP solution has some overhead, so it is never "1+1=2", it is rather "1+1=1.9". The delta grows with the number of CPs, but spreading the CPs across multiple system images introduces even more overhead (see the sketch after this list).
4. What do you prefer: 8 CPs at 100 MSU each, or a single CP at 800 MSU?
5. Everything depends on your workload. A single DB2, or a bunch of CICS+VSAM applications? A single batch critical path, or multiple "threads"? For a multi-application environment, though, it is IMHO still easier to cut a large pool of CPs into pieces than to aggregate multiple CPs into a single CP resource for one application.
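
To put a number on remark 3, here is a minimal sketch in Python. The
per-CP ratio r=0.9 is chosen only so that the model reproduces the
"1+1=1.9" rule of thumb - real LSPR MP-effect curves are considerably
gentler:

def marginal_capacity(n, r=0.9):
    """Capacity added by the n-th CP (1-based), assuming each CP adds
    r times what the previous one added (so 1 + 1 = 1.9)."""
    return r ** (n - 1)

for n in (2, 4, 8, 16):
    total = sum(marginal_capacity(i) for i in range(1, n + 1))
    print(f"{n:2d}-way image: last CP adds {marginal_capacity(n):.2f}, "
          f"total capacity ~{total:.1f} CPs")

With that assumed curve the second CP adds 0.90 of a CP but the sixteenth
adds only about 0.21 - the growing delta from remark 3. The counterweight
is that every extra system image in the sysplex brings its own flat
coupling overhead, which is the trade-off in question.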


--
Radoslaw Skorupka
Lodz, Poland
