HSM to open systems

2012-04-11 Thread Uriel Carrasquilla
My company is in the process of sunsetting our MF.
We have about 40 TBytes of data under HSM control.
I have been tasked with the responsibility of pulling the data off and putting 
it on a Unix/Linux system as archived data.
The only time we will need to reference it (without the MF) is in case some of 
our users need to retrieve one of the files (not to update), just to look at.
We are supposed to keep the data for up to 15 years.
Any suggestions will be greatly appreciated.
Uriel



Re: HSM to open systems

2012-04-11 Thread Uriel Carrasquilla
I think there are two options.

1)  Unload all the data from HSM into the original format and then FTP to a
Server.  Load libraries may be an issue.
2)  Contract with a 3rd party to be able to store your HSM data there for
access only.

I am unaware of any server product that could read Mainframe HSM data.


SAS and ISPF (Unix or Windows) can read MF data (EBCDIC).
I am only concerned about customer data (text) since anything else will not be 
needed.
(we will do the usual back up just in case).

I am wondering whether anybody has had to face this situation; I would like to 
learn how they went about it.
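
If option 1 wins, something like this REXX sketch could drive the recalls from 
TSO ahead of the FTP step.  The dataset name is a made-up placeholder; in 
practice you would loop over a real list of migrated datasets:

/* REXX sketch - recall one migrated dataset ahead of the FTP step */
Address TSO
"HRECALL 'PROD.CUSTOMER.DATA' WAIT"    /* DFSMShsm user command    */
If rc = 0 Then
   Say 'Recalled OK; ready for the FTP step'
Else
   Say 'HRECALL failed, rc='rc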



Re: HSM to open systems

2012-04-11 Thread Uriel Carrasquilla
Hi Miklos.
Interesting situation with the HSM exits.
Can you view the HSM ML2 files under USS using vi or more?
Is the data compressed under the USS file system?
I suspect I could use something like FTP (or a similar product) to copy to a 
Linux on Intel box from the HSM exit.  Is that possible?
Regards,
Uriel

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Miklos 
Szigetvari [miklos.szigetv...@isis-papyrus.com]
Sent: Wednesday, April 11, 2012 10:17 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: HSM to open systems

Hi

Not the same as your case, but we have HSM exits here to use an HFS (Unix)
dataset in place of ML2.
So with this exit we migrate/recall everything to/from HFS (Unix).







Re: HSM to open systems

2012-04-11 Thread Uriel Carrasquilla
Hi Lisette.
In response to your questions:
1) Time frame to move data to server?  I have from now until early 2014.
2) Efforts involved?  Do you have a large staff, small staff?  Do you have
time to write exits for HSM and redirect the HSM data? Once I have an idea of 
the size, then we will see how much we can do ourselves and how much help we 
will need.
3) Is the 40 TB all of the data or is the information you need to place on
the server a smaller number?  My HSM data is about 40 TB (between disk and 
tape).
4) Will you have a dedicated server for your mainframe storage? Yes, a Linux 
server.



Re: HSM to open systems

2012-04-11 Thread Uriel Carrasquilla
Kirk, thank you for the information.
I printed the Co:Z user guide and I think it has a lot of the pieces of what I 
need.



Re: SNA future

2012-03-30 Thread Uriel Carrasquilla
It is not only SNA I worry about, it is the entire MVS/zOS world.
In two years my company will have moved the last production application out of 
our Mainframe.  
This is a company that has been in business for 80+ years, with an MF since 
they first became commercially available.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of John 
Gilmore [johnwgilmore0...@gmail.com]
Sent: Friday, March 30, 2012 4:20 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: SNA future

SNA has certainly been eclipsed, often appropriately, by TCP/IP.

I am not certain, however, that SNA is even moribund; and it would
certainly be premature to prepare the funeral baked meats just yet.

In the interval conflicts with the established uses of 'SNA' as the
stock symbol of Snap-On, Inc. and as an acronym for the Student Nurses
Association have not been problematic.

John Gilmore, Ashland, MA 01721 - USA



Re: catalogued datasets in tapes and expiration dates

2012-03-29 Thread Uriel Carrasquilla
Let's say I have a tape with multiple datasets inside.
Some datasets may be catalogued, some may not.
I can understand that, once the uncatalogued ones expire, they no longer hold 
the tape back from the scratch pool.
But what about the cases where catalogued datasets hit their expiration date? 
What happens?
Am I also correct in assuming that the entire tape, with all the stacked 
datasets, is held until the last dataset on it expires?
Please share your thoughts.



Re: catalogued datasets in tapes and expiration dates

2012-03-29 Thread Uriel Carrasquilla
Forgot to mention, yes, I am using CA-1.
I ran a GENER to copy to an output file on our VTL (UNIT=V3590) with 
LABEL=EXPDT=99000 and a DISP to CATLG if successful.
It ran OK and put the dataset (no stacking) into VOL=SER=682436.
Then, via ISPF TMS, I checked the volume and it shows the expiration date with 
the CATALOG description.
Then, I went to ISPF 3.2 and uncatalogued the dataset.
TMS still has the same information with the expiration date set to CATALOG.
I was expecting to see SCRATCH instead.
Is this normal?

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of 
Jonathan Goossen [jonathan.goos...@assurant.com]
Sent: Thursday, March 29, 2012 10:23 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: catalogued datasets in tapes and expiration dates

The answer likely varies by the TMS in use.

CA-1 will keep the tape until every dataset on it is expired. This can be
confusing, as the volume record is also the first file on the tape. CA-1
will adjust its expiration to equal the highest expiration on the tape. If you
manually expire the first file (volume record), then the entire tape will
scratch regardless of the expiration of the rest of the files. That is one
reason that I am very careful about changing the expiration of volume
records.

In CA-1 there is an expiration called CATALOG. It will expire the file
when it is no longer cataloged. If a cataloged tape file has a Julian
expiration, then uncataloging the file will not affect expiration. But
scratching the tape will uncatalog the file.

Thank you and have a Terrific day!

Jonathan Goossen, DTM
ACT Mainframe Storage Group
Personal: 651-361-4541
Department Support Line: 651-361-
For help with communication and leadership skills checkout Woodwinds
Toastmasters






Re: catalogued datasets in tapes and expiration dates

2012-03-29 Thread Uriel Carrasquilla
OK, I found out that we run the clean and scratch right after midnight.
Thank you.
I also found out how to use EXPDT with a yyyy/ddd (Julian day) value.
Thank you all for your response.
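
For anyone searching the archives later, a quick REXX sketch of building a 
value in that yyyy/ddd (Julian day) form; the date produced here is just 
today's, not a real retention date:

/* REXX - build an EXPDT value in the yyyy/ddd (Julian day) form */
yyyy = Left(Date('S'), 4)          /* Date('S') returns yyyymmdd */
ddd  = Right(Date('D'), 3, '0')    /* Date('D') is day of year   */
Say 'LABEL=EXPDT='yyyy'/'ddd       /* e.g. LABEL=EXPDT=2012/089  */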


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of McKown, 
John [john.mck...@healthmarkets.com]
Sent: Thursday, March 29, 2012 11:25 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: catalogued datasets in tapes and expiration dates

The status does not change until you run a CA-1 maintenance job. It is not 
real time.

--
John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone *
john.mck...@healthmarkets.com * www.HealthMarkets.com



Re: megabytes per second

2012-03-21 Thread Uriel Carrasquilla
Maybe there is a reason why you are getting so many different answers.
In my case, the question I normally get from management is: how much is it 
going to cost us to maintain a copy of our application XXX data at our DR 
location?
I change this into a problem of money and budgets.
I figured that the biggest variable costs are the network and the disk space.
The network also adds other traffic overhead depending on the protocols and 
recovery, but it is highly unlikely that I will get a black-and-white estimate.
The good news about the network is that they sell bandwidth in big jumps, so I 
estimated my worst-case scenario by running some tests, made some assumptions 
(including my data compression), and added the next level of bandwidth (ended 
up with an OC-12).  I gave myself lots of headroom because I found that 
sometimes the equipment doing the compression can fail.
The amount of disk was easy: whatever I had needed to be on the other side.
In other words, maybe the problem that you are trying to solve needs to be 
revisited.  MB/sec is a moving target that can produce as many answers as 
there are assumptions you make and people you ask.
Regards,
Uriel


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Bill 
Fairchild [bfairch...@rocketsoftware.com]
Sent: Wednesday, March 21, 2012 10:03 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: megabytes per second

Everyone is missing the point.  Let me rephrase.
What do most users and managers think when they hear or use the phrase copy 
data at xxx number of MB/sec.?  Do they think that means the theoretically 
fastest possible rate at which data can be transferred under ideal conditions, 
or the actual rate at which that user's data can be transferred as that data 
exists at the instant when the copying is done?
Only scenario to consider:  an unsophisticated user runs a tool that tells him 
his data center's mission-critical, home-grown access method file Y has 10,000 
MB of data in it.  File Y, however, has been allocated to hold 20,000 MB of 
data.  Maybe their home-grown access method does not use DS1LSTAR properly.  
Maybe DS1LSTAR is maintained but a very inefficient block size is being used.  
Perhaps the tool reads every track and adds up the block size of all blocks in 
it.  Perhaps the tool looks at the RECFM, BLKSIZE, and device type and computes 
the size of the contents.  The point here is that there is really only 10,000 
MB of user data stored in this file that could theoretically hold twice as much 
data.  The backup process has been designed to copy every track in the file.  
Not knowing that each track in his file is only 50% full of data (inter-record 
gaps, inefficient block size, not enough data yet in the file to fill up each 
track completely, whatever), he runs a copy product that can copy 100MB/sec. 
and finds that it takes 200 seconds to copy his
10,000 MB of data stored in a 20,000 MB file.  He measures the elapsed time by 
looking at the start and end timestamp in his JES job log as any 
unsophisticated user would.  He wants to know why he is only getting 50MB/sec. 
of copy speed from his copy process that claims it can copy 100MB/sec from DASD 
to tape.
The worst-case scenario is that the file is only allocated and has never been 
loaded with data.  In this case, the actual data transfer rate should be 0 
MB/sec., but it would still take 200 seconds to copy the file to tape.

Bill Fairchild
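
The arithmetic in that scenario, restated as a small REXX calculation (all of 
the numbers come from the scenario above):

/* REXX - nominal vs. effective copy rate for the scenario above   */
allocated = 20000              /* MB allocated; every track copied */
used      = 10000              /* MB of real user data             */
rate      = 100                /* MB/sec the copy product can move */
elapsed   = allocated / rate   /* 200 seconds                      */
Say 'Effective rate:' used / elapsed 'MB/sec'   /* 50 MB/sec       */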

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
Ron Hawkins
Sent: Tuesday, March 20, 2012 8:59 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: megabytes per second

Bill,

It can also depend on where you are measuring the throughput:

Back end of the disk array - there is additional data to transfer due 
to the encapsulation of EBCDIC within SCSI FBA blocks
FICON - it's a 10-bit byte, so divide the data rate by 10 bits. A 1Gb 
channel is 1000Mb/10 = 100MB (yes, no little i)
Tape Drive - whatever you get after ICRC
Virtual Tape Drive - whatever you get after ICRC and de-duplication

This could be a fun topic.

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
 Behalf Of Bill Fairchild
 Sent: Tuesday, March 20, 2012 3:12 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: [IBM-MAIN] megabytes per second

 New thread.

 What exactly does MB/second mean when referring to how much data can
 be copied from a DASD to a tape?

To be more precise, I am not interested in big MB vs. little MiB, just
the philosophy.  Suppose I have a huge file on a 3390 virtual thing and I can
copy whole tracks to tape at the rate of 100 MB/sec.  Assume the tracks hold
50,000 bytes instead of 56,664 to make the math easier.  Does 100 MB/sec.
mean that I am copying 2,000 tracks per second?  Maybe.  What if there
is nothing written on the tracks, but I don't know that until I read
 

Re: Set Clock Command

2011-11-22 Thread Uriel Carrasquilla
At my company we use GMT and offset from it for the time on our zOS system.
We also use MXG/SAS.
I was not aware that TOD was a requirement for SAS.
Might be an MXG thing but our times are properly reported.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of McKown, 
John [john.mck...@healthmarkets.com]
Sent: Tuesday, November 22, 2011 10:22 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Set Clock Command

Most weird. We have the TOD clock set to GMT (more or less), and the TIMEZONE 
set to W.06 (US Central). The timestamps in the SMF records at our shop are in 
LOCAL time (1/100 seconds since local midnight). Back when we had SAS, we used 
these times directly. We did not do any offset manipulation on them at all. 
If any of the records have STCKE values, then those would need to be 
manipulated.
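
A small REXX sketch of unpacking such a timestamp (the raw value below is 
made up):

/* REXX - SMF time is hundredths of a second since local midnight */
smftime = 3783000                  /* hypothetical raw SMF value   */
secs = smftime % 100               /* whole seconds                */
hh = Right(secs % 3600, 2, '0')
mm = Right((secs // 3600) % 60, 2, '0')
ss = Right(secs // 60, 2, '0')
Say hh':'mm':'ss                   /* 3783000 -> 10:30:30          */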




 -Original Message-
 From: IBM Mainframe Discussion List
 [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Paul Gilmartin
 Sent: Tuesday, November 22, 2011 9:10 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: Set Clock Command

 On Tue, 22 Nov 2011 08:55:04 -0600, Mark Hammack wrote:

 At my last position (major corporation), I tried to get them
 to go to GMT with setting an offset.  The biggest push back I
 got was from our tuning expert because he would have to put
 an adjustment into the SMF data manipulator (based on SAS).
 I never bought the argument, but then again, I wasn't too
 familiar with SMF data so wasn't prepared to argue.
 
 Or, you could just let the SMF manipulator produce its
 reports showing GMT.
 It's better than transgressing IBM's recommendations for
 setting the TOD clock.

 I've seen lots of sympathy in this forum for abandoning local
 time altogether.

 -- gil






Re: HSM Journal dataset is almost full

2011-11-21 Thread Uriel Carrasquilla
OK, mission accomplished.  The Journal is now larger.
Thank you all for your participation and help, particularly Walter.
Once the HSM re-start was disabled in OPS/MVS, it all worked as advertised.



Re: HSM Journal dataset is almost full

2011-11-17 Thread Uriel Carrasquilla
 I hope you were now able to enlarge your hsm journal, which was your goal.

I am coming to work this SAT to try again.
This time I will be showing up in person so I can see what is going on with 
all the LPARs.
We found the OPS/MVS rule and it is being disabled.  They are also checking in 
all the LPARs to make sure the rule is disabled in all of them.
I'll let you know how it goes.



Re: HSM Journal dataset is almost full

2011-11-15 Thread Uriel Carrasquilla
I don't think so. I'm not aware of any hsm parameter which would restart the 
address space once it is
shut down. As Dave already mentioned, ask your automation folks. It is possible 
your automation product
triggers on ARC0002I on the main LPAR, or hsm is defined under ARM

We are taking a closer look at OPS/MVS.  
When we shut down HSM in all of our LPARs without setting them in EMERGENCY 
mode, we never had any problems with HSM re-starting.
What do you mean when you say hsm is defined under ARM?



Re: HSM Journal dataset is almost full

2011-11-15 Thread Uriel Carrasquilla
 Have a look at RESTART keyword in your procedure.

I took a look at the HSM PROC; EMERG=NO is there, but no sign of RESTART.
I also checked SYS1.PARMLIB(ARCCMDxx) and could not find any trace of a 
RESTART keyword.

When I look at the LOG, I can see that OPS/MVS is involved and might be the 
culprit:

F DFSMSHSM,STOP 
ARC0016I DFSMSHSM SHUTDOWN HAS BEEN REQUESTED   
OPS3092H OI STATESET CPU1TBL.DFSMSHSM CURRENT(DOWN) 
ARC0002I DFSMSHSM SHUTDOWN HAS COMPLETED
OPS3092H OPS3730I CPU1TBL.DFSMSHSM RESOURCE: CURRENT=DOWN DESIRED=UP
MODE=ACTIVE PMD=ACTIVE RMD=ACTIVE AMD=ACTIVE
OPS7902H STATEMAN ACTION FOR CPU1TBL.DFSMSHSM: DOWN_UP MVSCMD=START 
DFSMSHSM
OPS1181H OPSMAIN   (*Local*) MVS N/A OPSYSTZS START DFSMSHSM
START DFSMSHSM  
OPS3092H READY  
$HASP100 DFSMSHSM ON STCINRDR  



Re: HSM Journal dataset is almost full

2011-11-14 Thread Uriel Carrasquilla
 yes, every hsm instance sharing the journal dataset must be put in emergency 
  mode (1), and also be stopped (3).

 Walter Marguccio



We ran into problems.  
We put HSM in emergency mode in 3 LPARs (all of them sharing HSM).
We ran the BACKVOL command in the main LPAR.
Then, we stopped HSM in all 3 LPARs.
But, for some reason, in the main LPAR, HSM would restart automatically 
every time we stopped it.
So I could not drop the existing Journal to switch to a bigger one.

The procedure we followed:
F DFSMSHSM,SETSYS EMERGENCY  (in all 3 LPARs)
F DFSMSHSM,BACKVOL CONTROLDATASETS  (in the main LPAR)
F DFSMSHSM,STOP  (in all 3 LPARs)

But then we noticed that in the main LPAR, DFSMSHSM would just start by itself.

I could not switch to a bigger Journal file, so we issued the S DFSMSHSM 
command in the remaining 2 LPARs.

Why is DFSMSHSM re-starting by itself?
We successfully took DFSMSHSM down in the main LPAR earlier that day and it 
would stay down.
Is the emergency request creating a situation where at least one of the LPARs 
must be running HSM?

 



Re: HSM Journal dataset is almost full

2011-11-02 Thread Uriel Carrasquilla
 yes, every hsm instance sharing the journal dataset must be put in emergency 
  mode (1), and also be stopped (3).

 Walter Marguccio

Walter, very kind of you.  I will do it this weekend before we IPL with the new 
time here in the US.



Re: HSM Journal dataset is almost full

2011-10-31 Thread Uriel Carrasquilla
I would do what the hsm Storage Admin Guide suggests under Moving the 
Journal:
1. Put DFSMShsm in emergency mode. SETSYS EMERGENCY
2. Issue the CONTROLDATASETS parameter of the BACKVOL command to back up the
control and journal data sets. BACKVOL CONTROLDATASETS
3. Stop DFSMShsm. F DFSMSHSM,STOP
4. Delete the old journal data set
5. Allocate a new journal data set on a different DASD device.
6. Start DFSMShsm. S DFSMSHSM
Obviously, I'd allocate a much bigger journal at step 5.

 Walter Marguccio

Walter, thank you for your response.
I have one more question.
When you put DFSMShsm in emergency mode, do you do so for every zOS sharing the 
same DFSMShsm?
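
For step 5, a minimal TSO ALLOCATE sketch from REXX.  The size, unit and 
volume are placeholders, and the journal's real attribute requirements 
(single volume, contiguous space, and so on) should come from the Storage 
Admin Guide:

/* REXX sketch - allocate the replacement journal, bigger this time */
Address TSO
"ALLOC DA('HSM1.JRNL') NEW CATALOG SPACE(300,0) CYLINDERS" ,
      "UNIT(3390) VOLUME(HSM001)"
Say 'ALLOC rc =' rc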



Re: Maintenance at two in the afternoon? On a Friday?

2011-10-30 Thread Uriel Carrasquilla
 An obvious corollary is that system maintenance should be done when
 the local time is 0300 everywhere.

Not a bad idea unless you happen to be the one doing the maintenance half awake.



Re: HSM Journal dataset is almost full

2011-10-27 Thread Uriel Carrasquilla
Walter,
thank you so much for your help.
Regards,
Uriel





Re: HSM Journal dataset is almost full

2011-10-26 Thread Uriel Carrasquilla
I keep on getting messages that the Journal dataset is 80% full (which I 
confirmed under ISPF 3.2).
My plan is to shutdown HSM, rename the existing file, create a new one with the 
same characteristics, copy the renamed file, and bring up HSM.
Is my thinking appropriate?
Thank you.



Re: HSM Journal dataset is almost full

2011-10-26 Thread Uriel Carrasquilla
What I don't get is that the automatic jobs are running but the Journal file 
remains with the same amount of disk space allocated after execution.
We also have a batch job that compresses the HSM files 
(backup/delete/create/reload) but the Journal is not in the list.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of 
O'Brien, David W. (NIH/CIT) [C] [obrie...@mail.nih.gov]
Sent: Wednesday, October 26, 2011 12:00 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: HSM Journal dataset is almost full

Correction, that should be Backvol cds

-Original Message-
From: O'Brien, David W. (NIH/CIT) [C]
Sent: Wednesday, October 26, 2011 11:45 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: HSM Journal dataset is almost full

No, it is not correct.

Issue a Backup cds command. That will null the Journal as well as give you a 
set of CDS Backups.




Re: HSM Journal dataset is almost full

2011-10-26 Thread Uriel Carrasquilla
I made the wrong comment.  The F DFSMSHSM,BACKVOL CDS command is running on a 
schedule.  When it runs (last time was yesterday), the HSM1.JRNL dataset does 
get reset to empty.  In my case, it is allocated with 150 CYLS and since 
yesterday it has used up 22 CYLS.
The problem is that I am getting hit with messages that it is 80% full quite 
often.  So my thinking is that I either run the BACKVOL command more often or 
increase the size of the HSM1.JRNL dataset.
If I were to increase the size of the HSM1.JRNL dataset, what would be the best 
practice?
Run the BACKVOL CDS command, take down DFSMSHSM, and increase the JRNL file 
size (by deleting and re-allocating), or is it something else?
Thanks.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of McKown, 
John [john.mck...@healthmarkets.com]
Sent: Wednesday, October 26, 2011 12:48 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: HSM Journal dataset is almost full

I think the OP was wondering how to expand the size of the journal so that it 
would not fill up as quickly, not how to automate the backvol command or how 
to do the backvol cds command. He knows how; he just wants to do it less 
often. At least, that's how I read his intent.


 -Original Message-
 From: IBM Mainframe Discussion List
 [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Ed Gould
 Sent: Wednesday, October 26, 2011 11:33 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: HSM Journal dataset is almost full

  Uriel,
The real question you should be looking at is why the
automatic jobs that should be running are not.

 Ed






Re: HSM Journal dataset is almost full

2011-10-26 Thread Uriel Carrasquilla
Ed,
It is ignorance on my part.
Through John and the group asking me questions, I learned what is going on.
Regards,
Uriel


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Ed 
Gould [ps2...@yahoo.com]
Sent: Wednesday, October 26, 2011 2:29 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: HSM Journal dataset is almost full

 John,
Possibly, but I guess I don't do Vulcan mind melds as well as you do :)

Ed



Re: Cobol dynamic file allocation using SETENV and C run time environment

2011-10-20 Thread Uriel Carrasquilla
I found CEE.SCEELKED with the SETENV member in the PDS.
Am I correct in thinking that this library should be placed in the link-edit 
step after compiling, via the //SYSLIB DD statement?
Or should it be placed on the LINKLIST?
Thank you.



From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of McKown, 
John [john.mck...@healthmarkets.com]
Sent: Wednesday, October 19, 2011 5:33 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

SCEELKED contains SETENV and PUTENV.









Re: Cobol dynamic file allocation using SETENV and C run time environment

2011-10-20 Thread Uriel Carrasquilla
I found CEE.SCEELKED in SYS1.PARMLIB(LPALST00) and the library does contain a 
SETENV member.
So I don't know why the COBOL program is saying that it cannot find the module.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Scott 
Ford [scott_j_f...@yahoo.com]
Sent: Thursday, October 20, 2011 10:04 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

Uriel:

We use it all the time for LE Cobol, etc., and we had to do nothing to place it 
in the LINKLIST. IBM packaged it in the correct libraries and the LPA and/or 
Linklist members pointed to by IEASYSxx in SYS1.PARMLIB.  Could you have a 
situation where your IEASYSxx member is incorrect?  Just a thought.

Scott J Ford
Software Engineer
http://www.identityforge.com






Re: Cobol dynamic file allocation using SETENV and C run time environment

2011-10-20 Thread Uriel Carrasquilla
We are current with z/OS 11.


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Scott 
Ford [scott_j_f...@yahoo.com]
Sent: Thursday, October 20, 2011 1:16 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

Ed,

I haven't had that problem, and we have used LE Cobol a lot since 3.1.  There 
were compatibility issues on the older z/OS releases. I ran into one of those 
already; it happened to be on an older version that didn't support some of the 
LE calls.

Scott J Ford
Software Engineer
http://www.identityforge.com




From: Ed Gould ps2...@yahoo.com
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, October 20, 2011 12:36 PM
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

John,

That is one solution. However there is a potential to break a lot of working 
code.
Compatibility has never been one of LE's strong points.

Ed



Re: Cobol dynamic file allocation using SETENV and C run time environment

2011-10-20 Thread Uriel Carrasquilla
Do you still have to put a //SYSLMOD DD in the LKED (link-edit) step if you 
already have it in SYS1.PARMLIB(LPALST00)?


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Scott 
Ford [scott_j_f...@yahoo.com]
Sent: Thursday, October 20, 2011 10:08 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

Yep, me too, John. I had to add it to my SYSLMOD stmts in the LKED... Once I 
did that, no problem.


Scott J Ford
Software Engineer
http://www.identityforge.com




From: McKown, John john.mck...@healthmarkets.com
To: IBM-MAIN@bama.ua.edu
Sent: Thursday, October 20, 2011 9:29 AM
Subject: Re: Cobol dynamic file allocation using SETENV and C run time 
environment

If you're going to CALL it from COBOL, using the DYNAM compiler option, then 
the library needs to be available to the job during execution. That means in 
the LNKLST or STEPLIB/JOBLIB DD statement. Another possibility is to COPY the 
SETENV and PUTENV routines into a library which is already on the LNKLST. 
Personally, I'd put SCEELKED on the LNKLST in this case. Or compile the routine 
NODYNAM so that they are statically linked into the program object/load module.




Cobol dynamic file allocation using SETENV and C run time environment

2011-10-19 Thread Uriel Carrasquilla
I have a developer who is unable to make a call to SETENV from within 
Cobol (Cobol snippet: call 'setenv' using envname, ...).
I found that the C routines in the LE run-time environment must be available.
Does anybody know what I need to check in the linklist/lpa to make sure that 
Cobol finds the needed routines from the C run time libraries?
Thank you.



Re: REXX fails accessing SDSF console

2011-10-13 Thread Uriel Carrasquilla
 Going to SAF from ISFPARMS in  SDSF is not too bad.

 Go to the manual z/OS V1R10.0 SDSF Operation and Customization, SA22-7670-11.
 There is a section called Converting ISFPARMS to SAF security.

 This section should help you go through the process of
 1) Determining if you have SAF instead of ISFPARMS for security
 2) How to convert over.

 Hope this helps

Thank you.  This will help.



Re: REXX fails accessing SDSF console

2011-10-12 Thread Uriel Carrasquilla
 The ISF032I RC4 message indicates the MCS console is already active when 
 the REXX is run ...

This works.  I must have been in SDSF.  I ran the REXX interactively after 
making sure I had no SDSF session and it works.

Your REXX can also set the ISFCONS variable to specify a unique console name 
that the process uses.

I am going to try this.  Thank you.
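
A sketch of that idea, assigning a private console name before the first 
ADDRESS SDSF command; the name itself is arbitrary:

/* REXX - use a private EMCS console name so the exec cannot      */
/* collide with an interactive SDSF session using the userid name */
rc = isfcalls('ON')
isfcons = 'URIELBAT'               /* ISFCONS special variable    */
mycmd.0 = 1
mycmd.1 = 'd a,l'
Address SDSF "ISFSLASH ("mycmd.") (WAIT)"
rc = isfcalls('OFF')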



Re: REXX fails accessing SDSF console

2011-10-12 Thread Uriel Carrasquilla

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Mark 
Zelden [m...@mzelden.com]
Sent: Wednesday, October 12, 2011 9:03 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: REXX fails accessing SDSF console

On Wed, 12 Oct 2011 05:50:34 +, Uriel Carrasquilla 
uriel.carrasqui...@mail.mcgill.ca wrote:


 The biggest problem I have found with batch is that many shops have
SDSF security set up (from the default/sample parms) based on TSO
authorities (JCL, OPER, ACCT) and TSOAUTH is automatically set to JCL for
a batch SDSF job (regardless of what authorities the USERID actually has).
This is the documented behavior. 

Mark, let me thank you for your help.  My TSOBATCH REXX is now working as 
expected.
The problem was the CONSOLE being used with ISPF SDSF interactively while 
running the REXX at the same time.
I forced the REXX to use a different console (instead of the USERID default 
that gets assigned).

The PARMLIB(ISFPRMxx) member was set up by my predecessor and it did have 
AUTH(ALL) for my user group.
When reading your remarks, I don't understand how authorization for SDSF can be 
set up any other way.
Could you please elaborate? 



Re: REXX fails accessing SDSF console

2011-10-12 Thread Uriel Carrasquilla
 When you run in batch, you may not fall into the group you think you are.   
 If your
parms are not based on IUID (userid) and are based on TSO authorities like
JCL, OPER and ACCT (as the default / sample parms are), then when you run
in batch the only authority you get is JCL.   This puts you into a group that
is typically an end user / application programmer group that probably does
not have access to CONSOLE or jobs/output that aren't associated with the
userid of the submitter.

I became a victim of the situation you described.
I set up a job to be submitted by our scheduler (UC4) and I thought the userid 
would have the same access as myself to the console.  Well, that was not the 
case.  As you described, the TSO-Batch failed on insufficient authority.
I am probably using TSO authorities as you described.
How do you switch parameters so they are based on IUID (userid)?
I am trying to have my job vary unit,online, do some DF/DSS work, and then 
vary unit,offline.

In the past this work was done by OPS/MVS, by having the job create dummy 
messages (via IEFBR14), but the problem is that I have no way of knowing 
whether the devices did go online before DF/DSS is invoked. 
When I run the solution I put in place under my userid that has AUTH(ALL), 
everything works really well.
So I am kind of convinced that this is the route to follow.
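
As a sketch of that route, the same ADDRESS SDSF technique as the exec earlier 
in this thread could vary the unit and display its status before DF/DSS runs; 
0A30 is a hypothetical device address:

/* REXX sketch - vary a unit online and check it before DF/DSS */
rc = isfcalls('ON')
mycmd.0 = 2
mycmd.1 = 'V 0A30,ONLINE'
mycmd.2 = 'D U,,,0A30,1'
Address SDSF "ISFSLASH ("mycmd.") (WAIT)"
Do ix = 1 To isfulog.0
   Say isfulog.ix                /* scan ULOG output for ONLINE */
End
rc = isfcalls('OFF')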



Re: REXX fails accessing SDSF console

2011-10-12 Thread Uriel Carrasquilla
 Your shop is probably using SAF controls rather than SDSF controls.
I actually think it is under SDSF control, but how do I verify?
And how do you switch to SAF controls?

What happens if you try executing an MVS command under CONSOLE? 
It is actually working fine when I submit a TSO batch job from a TSO/ISPF 
session.
The problem I am having is when my scheduler submits the job with another 
USER-ID.  I will check tomorrow if the problem is that this other userid does 
not have AUTH(ALL).
But it seems to me from what Mark described that I should be switching to SAF 
controls (in my case I have ACF2).  But I don't know enough about ACF2 to know 
where I would need to go to re-implement what I currently have in my PARMLIB 
member.



REXX fails accessing SDSF console

2011-10-11 Thread Uriel Carrasquilla
/* REXX */
mycmd.0 = 1
mycmd.1 = 'd a,l'
ADDRESS SDSF "ISFSLASH ("mycmd.") (WAIT)"  /* quotes preserve the parens */
...
IF DATATYPE(ISFULOG.0) = 'NUM' THEN        /* ULOG lines came back?      */
DO IX=1 TO ISFULOG.0 
   SAY 'ISFULOG.'IX' IS: 'ISFULOG.IX 
END   

PRODUCES:
ISFULOG.1 IS: CPU1  2011284  15:38:31.00   ISF032I CONSOLE SSUJC ACTIVATE 
FAILED, RETURN CODE 0004, REASON CODE 


I checked PARMLIB(ISFPRM00) and it does have AUTH(ALL) for the TSO userid I am 
using to execute the REXX.

I suspect the problem is with ACF2, but my security team does not have a clue.
They gave me CONSOLE access, but that did not fix the problem.  (Except that 
now I can interactively issue CONSOLE commands from TSO.)

Anybody with ACF2 knowledge willing to share?
I did open a ticket with CA.

Thanks. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: REXX fails accessing SDSF console

2011-10-11 Thread Uriel Carrasquilla
Hi Mark.


Does your SDSF console work when you use it via TSO and / or ISPF SDSF?  
Yes, it does work.  Also, I can see that the MVS command entered via REXX is 
being executed when I look at the SDSF log.

Are you already using a console with that name via TSO/ISPF when you execute 
the exec?
I have tried to execute the REXX exec from ISPF option 6 and via TSO batch.
This is an excellent point.  The console name is the same as my userid, and it 
is possible I am using SDSF at the same time.  I'll check tomorrow when I am 
back at work.  Is it possible to change the console name in the REXX?  I tried 
changing it with 'activate console console(mytest)', but for some reason the 
REXX puts me into interactive CONSOLE mode (which works when I issue a 'd a,l').

 Are you running this exec in batch or interactively?
I have tried both ways, with the same message in both cases.

 IF batch, search the archives for past posts of mine regarding security and 
 SDSF in batch and consult the SDSF user guide.
I will search for your postings.  I have also checked the SDSF User Guide; the 
examples in the User Guide and mine are identical (that is where I picked up 
the original REXX).  I have also spent a considerable number of hours on Google.
In PARMLIB, is the AUTH parameter the only one I need to be concerned with?
It is AUTH(ALL) for my TSO$xx group right now, but I was wondering whether 
there is another parameter I should verify.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: REXX fails accessing SDSF console

2011-10-11 Thread Uriel Carrasquilla
 XX=ISFCALLS('ON')
Yes, this call is issued ahead of any other REXX statement, with the ('OFF') 
at the end.

If it helps, I can post the entire REXX.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Position Available Washington DC

2011-10-03 Thread Uriel Carrasquilla
Hi Jim.
I am more of the MVS person than the VTAM person.  
In fact, my VTAM abilities are very limited.
Sorry but I won't be able to help you.
For the last 10 years, I have also spent more time on the Capacity Planning 
side than the Systems Programming side.
Good luck!
Uriel


From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Jim 
Marshall [jim.marsh...@opm.gov]
Sent: Monday, October 03, 2011 2:53 PM
To: IBM-MAIN@bama.ua.edu
Subject: Position Available Washington DC

I currently have a Contractor position open on the staff here, here are the 
particulars

 1. Fifteen years of demonstrated experience as a VTAM systems programmer, 
at least ten years of which must be within an MVS/ESA operating system or 
higher, with subsystems very similar to those described in this solicitation.
2.  Three years of recent experience as a senior VTAM systems programmer 
providing expertise in z/OS or OS/390 Communications Server.
3.  Three years of recent demonstrated experience implementing zSeries 
technologies similar in scope to the one described in this solicitation.
4.  Three years of recent demonstrated experience in implementing 
Communications Server in a z/OS Parallel Sysplex environment.
5.  Demonstrated experience in configuring IBM Open Systems Adapter.
6.  Demonstrated experience in configuring IBM 8265 and CISCO Switches.
7.  Demonstrated experience in presenting technical communications 
information.
8.  Recent demonstrated experience performing z/OS software conversions, as 
well as release level upgrades.
9.  Recent experience in leading large-scale, technically complex systems 
software/integration projects involving systems similar to one cited in this 
solicitation.
10. Recent demonstrated experience in performing problem determination and 
resolution.
11. Recent demonstrated experience in performing problem determination 
using VTAM and IP Trace facilities.
12. Recent demonstrated experience in implementing and testing a disaster 
recovery plan.

The job is with Compuware Services.  I am the COTR of the contract and will be 
glad to forward your inquiry over to the company for review.   If you are 
hesitant about coming to the Washington DC area, give me a call and I can chat 
about the living expenses, entertainment, and general overall conditions.  I 
came to DC back in 1975 as an Air Force Staff Sergeant (single) and lived well. 
 I came back in the 1980s as an officer and lived quite well.  Am now a gov't 
civilian and making it fine.

I have enticed a number of folks over the years from out in the hinterland to 
come here, and they are still around.  The work is challenging and interesting, 
with a plan to go to GDPS Active-Active Sysplex at 69km.  IBM zEnterprise 
servers are coming next year, and we just upgraded DASD & tape.  We are just 
moving into DB2 for z/OS data sharing and converting SuSE to Red Hat Linux on 
z.  If someone wants to broaden their horizons and can handle what is 
considered your job here, you can cross over into other areas.

You will need the ability to get an SSBI (Single Scope Background 
Investigation), a fully adjudicated background investigation.

Let me know if you are interested. jim

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



MVS NFS Translation Tables

2011-09-16 Thread Uriel Carrasquilla
Has anybody experienced problems with ASCII-to-EBCDIC translation through MVS 
NFS mount points?
I am getting x'00' in files that were dropped by Unix clients into the 
MVS-managed mount point.  
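In case it matters, here is roughly how the client mounts the data set.  My 
understanding is that the z/OS NFS server's 'text' processing attribute 
controls the EBCDIC/ASCII translation, so that is the first thing I am 
checking (the server name, data set name, and mount point below are made up):

# Solaris client; mvshost, ARCHIVE.CUSTDATA and /u/mvsdata are placeholders
mount mvshost:"ARCHIVE.CUSTDATA,text,lf" /u/mvsdata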

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: z/OS Systems Programmer Jobs in Dubuque Iowa

2011-09-08 Thread Uriel Carrasquilla
I could be mistaken, but that is not how the H1B visa process normally works.
IBM must post the job, in media commonly used by other companies to find 
similar employees, to see what kind of response they get.  The salary must 
match the median for the position, as established by referencing similar jobs.
I have been through the process as an employer and it is not as trivial as it 
sounds.
Yes, the process can be gamed, and probably often is by some employers, but I 
would be surprised if IBM went down the wrong path.
Could it be that the position is a $40k job and not the equivalent of other 
systems programmer jobs?
What about the location, or the amount of experience required?
Or are we just venting because our skills have dropped in value in today's 
economy?
I don't know what it would be like to be out of work in today's economy, but I 
was in that boat in 1999 and again in 2001.  It was not fun, and I would have 
taken a $40k job if that had meant not walking away from my mortgage.
Would I take a $40k job now, in my current situation?  Of course not, but I 
would not blame another person for taking one to bridge into a better job 
later, when the job market improves.
It is supply and demand.

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Larry 
Macioce [mace1...@gmail.com]
Sent: Thursday, September 08, 2011 10:04 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: z/OS Systems Programmer Jobs in Dubuque Iowa

I am coming in late to this discussion but will include my $0.02, so remember 
you get what you pay for.

IBM has us over a barrel: if no one applies, then IBM (and others) can go to 
the government and say, look, we told you so, there is a shortage of qualified 
people in the US, so raise the H1B numbers.  Once that occurs we are toast, as 
the H1Bs will work for peanuts.
OR
Systems programmers go to work for IBM (and others) at a sub-par rate, and IBM 
(and others) get what they want: qualified help at a cheap price.

You could take the job and prolong it.

Mace

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS Systems Programmer Jobs in Dubuque Iowa

2011-09-08 Thread Uriel Carrasquilla
 Essentially the only thing to do there is to drink.

The good news is that with such low wages there is no risk of getting addicted 
to alcohol.  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: z/OS Systems Programmer Jobs in Dubuque Iowa

2011-09-07 Thread Uriel Carrasquilla
What kind of money are you talking about?
Is it full-time, on-site work?

From: IBM Mainframe Discussion List [IBM-MAIN@bama.ua.edu] on behalf of Bobbie 
Justice [golds...@yahoo.com]
Sent: Wednesday, September 07, 2011 11:22 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: z/OS Systems Programmer Jobs in Dubuque Iowa

Yes, a big no thank you to that one. I got a call from them when I was 
unemployed for several months, and after they told me the salary they were 
offering, I made a very quick decision to remain unemployed.

I'll starve to death before I do systems programming work for the money they 
were offering.

Bobbie Jo Justice

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: MVSNFS shows a different file under the Unix side compared to the MVS side

2011-08-31 Thread Uriel Carrasquilla

Have you tried using FBS vs. FB?  FBS is fixed block standard, and it 
recognizes that the last block can be shorter than a full block.
 FB likewise permits the last block to be shorter.

Surely FB permits *any* block to be short. It's FBS that requires that
all but the last be full size.

We are actually using variable-length (VB) records.
I think the caching argument is valid, but not after 24 hours.
Also, I noticed that when I look at the files in the NFS mount point under 
TSO/ISPF, they are quite often locked (file in use).  I end up having to put a 
'B' for browse next to the data set under ISPF 3.4 and keep hitting Enter 
until I can get into the file.  The file is not supposed to be in use on the 
Unix side, so I don't know why I get the locks.
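(Next time it happens I plan to check from the console who is holding the 
enqueue; the data set name below is a placeholder:)

D GRS,RES=(SYSDSN,NFS.MOUNT.DATASET)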

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



MVSNFS shows a different file under the Unix side compared to the MVS side

2011-08-30 Thread Uriel Carrasquilla
We have an NFS mount point on a Sun Solaris box that is serviced by the MVS 
(z/OS 1.11) NFS server.  When my customer dropped a file on the Sun-side NFS 
mount point, it caused an I/O error.  When I compare what I see on the Sun 
side under the mount point to what I see under TSO/ISPF, 1,131 records out of 
about 77k failed to get across.  Yet the original file and the copy in the NFS 
mount point are identical: when I do a diff on the Unix side between the 
original file and the one in the MVS mount point, there are no differences.
But when I download the MVS file and diff it against what is supposed to be an 
identical copy in the Unix mount point, they are different.
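For completeness, the comparison I am doing looks roughly like this (all paths 
and file names are placeholders):

# on the Solaris client
cksum /data/in/custfile.txt            # original file
cksum /mvsmnt/custfile.txt             # same file through the MVS NFS mount
diff  /data/in/custfile.txt /mvsmnt/custfile.txt    # no differences
# after downloading the MVS data set to /tmp (FTP, ascii mode):
diff  /data/in/custfile.txt /tmp/custfile.mvs.txt   # differences appear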
Maybe someone can enlighten me with an explanation as to why the Unix side 
shows an identical copy of the original that was dropped in the mount point, 
while on the MVS side I see records in error.
Regards,
Uriel

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html