On YOUR shores. :-) And another on MY shores. :-)
Cheers, Martin
Martin Packer,
zChampion, Principal Systems Investigator,
Worldwide Cloud & Systems Performance, IBM
+44-7802-245-584
email: martin_pac...@uk.ibm.com
Twitter / Facebook IDs: MartinPacker
Blog:
https://www.ibm.com/developerworks/
Elardus Engelbrecht wrote:
But then big blue is still recruiting "... currently has more than 25,000 open
positions."
The impression I got when contracting at IBM for 4 years was that IBM is sort
of like an independent city-state, a
transnational entity composed of people from all over the world.
Ed Gould wrote:
>http://www.channelregister.co.uk/2016/03/02/ibm_layoffs/
>IBM axed a wedge of workers today across the US as part of an "aggressive"
>shakeup of its business.
Same story every few years. In fact, the term 'dead wood' comes from IBM,
according to an ex-IBMer.
The big blue is
Clark Morris wrote:
>Check your licensing requirements. Normally there is only a short
>period when you can run 2 versions without having to pay for both.
It's normally 12 months for IBM Monthly License Charge products. However,
if you have IBM Country Multiplex Pricing, there is no time limit, al
Radoslaw Skorupka:
>Yes, there are other methods to skin the cat, but what's wrong with
>knowledge whether method #101 is possible?
I believe I offered an answer to your question. If there's no other test
that you could be running instead that is more important, then have fun.
http://www.channelregister.co.uk/2016/03/02/ibm_layoffs/
IBM axed a wedge of workers today across the US as part of an
"aggressive" shakeup of its business.
--
For IBM-MAIN subscribe / signoff / archive access instructions
I'm supposing that when you say HCD you mean IODF.
Further, when you mention REXX code, I'm guessing you are referring to Mark
Zelden's IPLINFO REXX.
This interrogates a couple of control blocks that I can't see documented, neither
in the Data Areas manuals nor in MACLIB/MODGEN (IOVT and CDA).
On Thu, 3 Mar 2016 10:23:12 +0800, David Crayford wrote:
>
>I've got no idea why Rocket would choose to use tarballs. It would have
>been a much better idea to use compressed pax archives like the original
>IBM ported tools.
>
Yes. But on (some) GNU Linux:
man pax
...
-z Use the gzip utility to compress or decompress the archive.
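A quick sketch of the round trip, runnable on most GNU/Linux systems. The file and archive names are invented for the demo, and the pax form is shown only in a comment because pax is not installed everywhere:

```shell
# Create and extract a gzip'd tar archive. Assumes GNU tar + gzip on PATH.
set -e
mkdir -p paxdemo && printf 'hello\n' > paxdemo/hello.txt
tar czf paxdemo.tar.gz paxdemo        # compress while archiving
rm -rf paxdemo
tar xzf paxdemo.tar.gz                # tar's z switch: gunzip on the fly
# Where pax accepts -z (GNU pax, z/OS pax), the same extract would be:
#   pax -rzf paxdemo.tar.gz
```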
On 3/03/2016 1:24 AM, Paul Gilmartin wrote:
I'm calling Rocket remiss in providing a gzip that can't be bootstrapped
using only base z/OS facilities. Hardly forgivable, in that many desktop
systems (which Rocket may have used for packaging) provide uncompress
but not compress because of (expired) patents.
Does anyone have REXX code to check whether an HCD change was activated dynamically
between IPLs? My issue is that I can get the IPL load parms when the system is
IPLed by using REXX code I have gotten off of websites. What I can't detect is
when the HCD changes between IPLs, because it seems the CVT area
When we went to pure virtual tape, this is what we got back from IBM DFHSM L2:
PARTIALTAPE(REUSE) vs PARTIALTAPE(MARKFULL)
When using a virtual tape system, IBM usually recommends using
PARTIALTAPE(MARKFULL).
RECYCLE
You (or automation) need to issue the RECYCLE command; it is not automatic.
With our VTS, we found ourselves in the position where the default
automatic DFHSM recycle was unable to keep up with scratch cartridge
release that was needed to maintain the required scratch count, which was
putting the whole VTS environment at risk (90+% of which was consumed by
DFHSM migration).
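For reference, a recycle kick-off of the sort described is often driven from automation with an MVS modify command. A hedged sketch only: the started-task name DFHSM and the 40% threshold are assumptions for illustration, not taken from this thread.

```
F DFHSM,RECYCLE ALL EXECUTE PERCENTVALID(40)
```

PERCENTVALID sets the valid-data threshold below which a cartridge is eligible for recycle; lowering it frees scratch volumes faster, at the cost of more data movement.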
We are using a VTS with the virtual tapes defined as 3490s, and we are using
the recommended 'Mark TAPE Full' option. That's all well and good.
It does, however, mean that there are a lot of 'tapes' and many of those may
have a mixture of valid and no-longer-valid data sets - a call for Recycle.
We are running AT-TLS and have a keyring with a certificate that is about to
expire. We have gotten a new certificate and added it to the keyring, but not
as the default. The question I have is: if we leave the old certificate in the
keyring as the default, when it expires will AT-TLS start using the new certificate?
Caveat: I'm a daily digester, so list responses are always delayed... (plus
someone's probably beaten me to it. *grin*)
A few more items to scratch off the possibilities list:
1) Verify you are *not* using 'ACTIVE' as the CDS name in the ISMF panels.
You *cannot* alter the 'ACTIVE' CDS. You must
On 03/02/2016 12:50 AM, Jack J. Woehr wrote:
So I go and download the GZip for z/OS package, and it says use
'gzip' as the 1st step to install from the supplied '*.tar.gz' file.
If you have Gnu tar on the system, tar takes a gzip switch
tar zxvf myfile.tgz
Right, but 'tar' handles GZipped
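When the local tar lacks the z switch, the classic portable form pipes `gzip -dc` into `tar xf -`. A sketch with invented file names, assuming gzip and tar are on the PATH:

```shell
# Portable decompress-and-extract when tar has no z switch.
set -e
mkdir -p pipedemo && printf 'payload\n' > pipedemo/data.txt
tar cf - pipedemo | gzip > myfile.tgz   # build a .tgz via a pipeline
rm -rf pipedemo
gzip -dc myfile.tgz | tar xf -          # decompress, then untar from stdin
```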
ESHEL Jonathan wrote:
We are trying to apply the PTFs that install the new JSON parser support under
z/OS 2.1 (as of 2.2 it's integrated into the base system), and have a problem
with one of the prereqs - UA71619. It's an assembler error when SMP/E is
compiling SDSF module ISFJREAD and the usa
Check your SMP/E DDDEFs for SYSLIB. Ensure SMPMTS is the first data set in the
concatenation...
HTH,
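If the concatenation order is the culprit, it can be corrected with UCLIN against the target zone's DDDEF entry. A hedged sketch only: the zone name TZONE and the trailing libraries are placeholders for your own values.

```
SET   BDY(TZONE) .
UCLIN .
  REP DDDEF(SYSLIB)
      CONCAT(SMPMTS,MACLIB,MODGEN) .
ENDUCL .
```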
We are trying to apply the PTFs that install the new JSON parser support under
z/OS 2.1 (as of 2.2 it's integrated into the base system), and have a problem
with one of the prereqs - UA71619. It's an assembler error when SMP/E is
compiling SDSF module ISFJREAD and the usage of the CALL macro see
The CBU test starts when the first processor is activated. That starts the 10
day clock. As long as 1 processor remains active, the customer can go up and
down as many times as they want within that 10 day period. As soon as all
processors on the record are deactivated, the test ends.
--
I think the OP's question was about the file system name during the COBOL
installation. Do you need to set up a new path, use an existing path, or how
does COBOL 5.2 know to use the COBOL 5.2 path vs. the other path? Or is the
file system not needed during compile/link-edit?
MOUNT FILESYSTEM('#dsn')
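Fleshing out the MOUNT above as a hedged TSO sketch; the data-set name, mount point, and TYPE(ZFS) are all hypothetical placeholders for site-specific values.

```
MOUNT FILESYSTEM('OMVS.COBOL520.ZFS') +
      MOUNTPOINT('/usr/lpp/cobol/v5r2') +
      TYPE(ZFS) MODE(READ)
```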
On 2016-03-02 at 06:50, Timothy Sipples wrote:
Radoslaw Skorupka wrote:
That's another option; surely it is legal and counts as one CBU test,
but it does not allow downsizing from 7nn to 6nn or 5nn or 4nn capacity
markers.
True, but you have lots of LPAR constraint settings to accomplish
fun
On 1 Mar 2016 22:02:30 -0800, in bit.listserv.ibm-main mainframe
mainframe wrote:
>Hello Group,
>We have COBOL V4.2 in our system and recently we
>installed v5.2 as well. Now my customers want to use both of these versions.
>Is it possible, or as we installed v5.2 on the same file sy
Just for completeness, you can call the program directly if you have not
yet set up the paths by running:
> /bin/gzip -V[ etc ]
That said, the bin directory should already be in the search path, so
check it via
echo $PATH
If not, add it to your profile (or for all users if wanted).
Vince
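Vince's check, sketched as a snippet. The /bin location comes from the message above; adjust it for your own install directory.

```shell
# Verify gzip's directory is on the search path; append it if missing.
dir=/bin
case ":$PATH:" in
  *:"$dir":*) echo "$dir already in PATH" ;;
  *)          PATH="$PATH:$dir"; export PATH
              echo "added $dir to PATH" ;;
esac
command -v gzip    # prints gzip's full path if it is reachable
```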
The JCL for the compiles is substantially different, so your customer will
have to select the correct option from a panel you provide (presumably).
There is no direct impediment to running two different COBOL compilers
concurrently. You'll probably have one just sitting entirely in a STEPLIB, a
Joel Ewing has made a valid point about programs potentially having LRECL
expectations. COBOL is good for that.
Tim Brown is silent on what he actually wants to do this for. Until then it's
difficult to suggest something concrete. Ditch the blocksize has been said,
making the LRECL smaller has
Why not just create a VBA file with a very long LRECL and not worry about it at
all? Longer LRECLs don't introduce any more overhead than short ones.
-teD
Original Message
From: Kjell Holmborg
Sent: Wednesday, March 2, 2016 02:54
To: IBM-MAIN@LISTSERV.UA.EDU
Reply To: IBM Mainframe Discussion List
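Ted's suggestion, sketched as a JCL DD statement. The data-set name, space, and unit are invented placeholders; 32756 is the maximum LRECL for variable-format records, and BLKSIZE=0 lets the system choose the block size (per the "ditch the blocksize" advice above).

```
//REPORT   DD  DSN=HLQ.REPORT.VBA,DISP=(NEW,CATLG),
//             RECFM=VBA,LRECL=32756,BLKSIZE=0,
//             SPACE=(CYL,(5,5),RLSE),UNIT=SYSDA
```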