strfmon() should do the trick. 

https://pubs.opengroup.org/onlinepubs/009604599/functions/strfmon.html
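A minimal sketch of the usage (the locale name and the exact output below are assumptions; strfmon() formats according to whatever LC_MONETARY locale is installed on the system):

#include <stdio.h>
#include <locale.h>
#include <monetary.h>   /* strfmon() */

int main(void)
{
    char buf[64];

    /* strfmon() is driven by LC_MONETARY, so select the locale first. */
    setlocale(LC_ALL, "en_US.UTF-8");

    /* %n is the locale's national currency format: symbol, grouping,
       decimal places. "#9" reserves nine digit positions left of the
       decimal point, giving the fixed-width, blank-padded style of an
       EDMK edit pattern. */
    if (strfmon(buf, sizeof buf, "%#9n", 1234.56) != -1)
        printf("[%s]\n", buf);  /* something like [$      1,234.56] */

    /* %i is the international format, e.g. "USD 1,234.56". */
    if (strfmon(buf, sizeof buf, "%i", 1234.56) != -1)
        printf("[%s]\n", buf);

    return 0;
}

The n,nn,nn,nnn.nn grouping falls out of the locale's own grouping rules (an en_IN locale groups by 3 then 2), so it needs no special casing. For plain integers without a currency symbol, POSIX printf() also accepts the apostrophe flag ("%'d") to insert the locale's thousands separators.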

> On 5 Mar 2023, at 4:06 am, Rupert Reynolds <rreyno...@cix.co.uk> wrote:
> 
> To explain, I'm writing new PC code. I want the equivalent of EDMK in
> (something like) snprintf() format strings, to print numbers with an
> optional floating currency symbol and spaces/commas between thousands
> (not forgetting the n,nn,nn,nnn.nn style used in at least one country).
> 
> As far as I can see, snprintf() format strings can't handle it, but PL/I
> and even assembly (EDMK) make this easy.
> 
> Is there a common standard I should look at, please?
> 
> e.g. format 1234.56 with "$999,999,990.00"
> leads to "  $1,234.56".
> 
> I'm not going to be choosy about input data type--it's the presentation
> that matters and I'd rather not reinvent the wheel.
> 
> Any suggestions, please?
> 
> Roops
> 
>> On Sat, 4 Mar 2023, 18:39 Mike Schwab, <mike.a.sch...@gmail.com> wrote:
>> 
>> 
>> https://www.researchgate.net/publication/342570694_Coupling_Facility_Configuration_Options_-_Updated_2020
>> 
>> CF usage is not counted on the SCRT, though it is shown on RMF reports.
>> It won't cost you on z/OS, but it may with some vendors' pricing.
>> 
>> Thin CFs go into an enabled wait when their work is completed and restart
>> when an interrupt says there is more work.
>> 
>> The estimate is 3% for light sharing up to 13% for heavy sharing (of the
>> z/OS workload).
>> 
>> A thin CF would use internal links, so there is no I/O overhead to
>> another CPU.
>> 
>>> On Fri, Mar 3, 2023 at 9:35 PM Laurence Chiu <lch...@gmail.com> wrote:
>>> 
>>> The situation.
>>> 
>>> We share a couple of z13s with another (larger) client. z13 B is where we
>>> run our development LPARs and z13 A is production.
>>> 
>>> For critical business reasons, an online application on our production
>>> LPAR needs to be highly available, and that means a parallel sysplex.
>>> But our outsourcer has told us it cannot be done, because there are no
>>> spare ICF engines on host B: all are being used by other CF instances,
>>> either to support production sysplexes or development ones (not ours).
>>> 
>>> Host A does potentially have a spare ICF engine we could use to support a
>>> production parallel sysplex, but good practice does recommend creating a
>>> test one first, of course.
>>> 
>>> I then asked the question: if host A has a spare ICF engine, can't it be
>>> used to support a CF for the test sysplex on B? I was advised this was
>>> not possible, since there are no spare connections between host A and
>>> host B (InfiniBand, possibly), so the sysplex on B could not actually
>>> communicate with the CF on A.
>>> 
>>> Our requirement for the sysplex is primarily to be able to share a VSAM
>>> dataset which is hit every time a transaction comes in, with a peak of
>>> about 99 TPS. So we would need VSAM RLS to share the dataset records
>>> between the two application instances. There is no Db2, CICS, or IMS, so
>>> I think the only structures in the CF are those to support VSAM RLS,
>>> plus maybe some XCF structures and the core system ones.
>>> 
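(On the structure list: as a sanity check on what would actually have to be defined, a minimal sketch of the CFRM policy entries for RLS might look like the following. IGWLOCK00 is the fixed name of the SMSVSAM lock structure; the policy and CF names, the hardware identity values, the cache structure name, and the sizes are all placeholders to be taken from your own configuration and a sizing tool such as CFSizer.)

//CFRMPOL  JOB ...
//DEFPOL   EXEC PGM=IXCMIAPU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(TESTPOL) REPLACE(YES)
    CF NAME(TESTCF)
       TYPE(002964)
       MFG(IBM)
       PLANT(02)
       SEQUENCE(000000000001)
       PARTITION(0E)
       CPCID(00)
       DUMPSPACE(2048)
    STRUCTURE NAME(IGWLOCK00)
       INITSIZE(14M)
       SIZE(28M)
       PREFLIST(TESTCF)
    STRUCTURE NAME(CACHE01)
       INITSIZE(32M)
       SIZE(64M)
       PREFLIST(TESTCF)
/*

(The RLS cache structure name is whatever your SMS base configuration's cache set maps to, so CACHE01 above is illustrative only.)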
>>> Knowing that we would only bring up the test sysplex to make sure
>>> transactions routed correctly across the two LPARs, and that most of the
>>> time we would have one member of the sysplex down, I suggested that the
>>> test CF could be built using a CP. To this suggestion I received the
>>> following (anti) advice:
>>> - There would be MSU costs. (We don't care, since we think the MIPS load
>>> on the CF would be low. Plus, we would ask that the CF be defined with
>>> Dynamic Coupling Facility Dispatch and set DYNDISP=THIN; since that CF is
>>> going to be idling most of the time, MSU consumption is not going to be a
>>> major cost.)
>>> - IBM strongly recommends against doing this. Yet when I read this
>>> document,
>>> 
>>> https://www.ibm.com/downloads/cas/JZB2E38Q
>>> 
>>> the option is discussed in great detail, and the only negatives are the
>>> MSU costs and some performance degradation if both a z/OS and a CF LPAR
>>> are trying to use the same CP at the same time. But this can be managed.
>>> 
>>> - A CF running on a CP would need a dedicated CP engine, and there are
>>> no spare engines in host B. That totally flies in the face of the
>>> information I have read in IBM docs.
>>> 
>>> Of course, for production, the CF on host A would be configured to use
>>> an ICF engine (or share one).
>>> 
>>> Finally, while I accepted at the time the argument that there were no
>>> connections between host A and host B, further reading suggests that you
>>> do not need to dedicate channels for communications: XCF signalling and
>>> InfiniBand subchannels allow more than one sysplex to share the same
>>> physical link. Then the issue of running the CF on a CP goes away, since
>>> I can ask for two CFs to be defined on host A, one for production and
>>> one for test, and dynamic CF dispatching ensures that the production CF
>>> is not impacted by the development one.
>>> 
>>> A lot to digest here, but I really want to have some authoritative data
>>> in order to refute most of the comments being made by our outsourcer.
>>> 
>>> Thanks
>>> 
>> 
>> 
>> 
>> --
>> Mike A Schwab, Springfield IL USA
>> Where do Forest Rangers go to get away from it all?
>> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
