Wonder why IBM code quality is suffering?

2006-03-26 Thread Thomas Conley
Take a look at this job posting.  $35/hr for PLX programmers (for the I/O 
subsystem, no less!).  You gotta be kidding me.



XXX is in need of a PLX Programmer for one of our top clients in 
Poughkeepsie, NY.

Skills required:
PLX Programming skills (Very important as that is what the code is written in)
OS/390 (MVS) experience (Equally important as this is the operating system that the code will run under).
S/390 eServer hardware and millicode knowledge (Important as this is what the code is manipulating).
Architecture Verification skills - Must be able to read and interpret architecture documents (technical specs, so to speak) and be able to write code (in PLX) to stress and test that architecture.
Working knowledge or experience with S/390 eServer Channel I/O architecture and I/O devices (All of this work deals with I/O, not CPU).


Duration: Until 12/31/05 w/ possibility of extension
Location: Poughkeepsie, NY
Shift: 1st shift, Monday-Friday
Compensation: $35/hr or $61,000 (depending on benefits needed)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ron Ferguson
Hi Mike,

 Mike Baker [EMAIL PROTECTED] wrote: If we have lots of redundant
 datasets on the machine, and many HLQs (high-level qualifiers) which
 could (also) be completely removed, but have not been removed / cleaned
 up, is this likely to have much of a performance degradation effect on
 the Catalog / CAS?

Mike, I'm not certain I fully understand the description of your
situation.  For example, how many is "lots", and what exactly is a
"redundant dataset"?

Nevertheless, however I interpret the situation, my opinion is that it
does not affect the performance of either the catalog or CAS.  

The catalog is physically a VSAM KSDS, accessed directly by key, and
even if it contains thousands (even tens of thousands) of useless
records, the speed of access to any record in the catalog will not be
affected by the redundant records.  Most (possibly all) of the
catalog's index will/should be in CAS buffers, and therefore, accessing
any record in the catalog's data component will require just a single
I/O (at most).
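Ron's point about keyed access can be illustrated with a toy model (plain Python with invented names; this is a sketch of the idea, not actual catalog internals): when the index is held in memory, locating any one record costs the same single data-component I/O no matter how many useless records the catalog also contains.

```python
# Toy model of keyed (KSDS-style) access with the index cached in
# memory: locating any one record costs a single simulated data-component
# I/O, regardless of how many useless records the data set also holds.
import bisect

class ToyKeyedDataset:
    def __init__(self, keys):
        self.keys = sorted(keys)   # stands in for the data component
        self.io_count = 0          # simulated I/Os to the data component

    def locate(self, key):
        # the "index search" happens in memory (cached), costing no I/O
        i = bisect.bisect_left(self.keys, key)
        if i == len(self.keys) or self.keys[i] != key:
            return None            # not cataloged: nothing was read
        self.io_count += 1         # one read of the CI holding the record
        return key

# a small catalog vs. one bloated with 1000x as many (mostly dead) entries
small = ToyKeyedDataset([f"DS{i:05}" for i in range(100)])
big = ToyKeyedDataset([f"DS{i:05}" for i in range(100_000)])

small.locate("DS00042")
big.locate("DS00042")
assert small.io_count == big.io_count == 1   # same cost either way
```

The dead entries cost disk space, not lookup time, because nothing ever walks past them during a keyed search.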

The same answer applies to CAS performance  --  there's no way that
redundant catalog records can affect CAS performance.  Assuming by
"redundant data set" you mean a cataloged data set that doesn't actually
exist, this redundant record would never be read in the first place, and
would never find its way into CAS.  Since it would never be read, it
wouldn't take up space in CAS buffers, as records are brought into CAS
only by specific request when a task attempts to locate a data set.  By
your question, I'm guessing that you possibly believe all of a catalog's
records are somehow buffered in CAS, regardless of being specifically
requested, and that's not true.  

In my opinion, the single biggest performance benefit you can give your
catalog(s) is to turn on VLF (the Virtual Lookaside Facility) within CAS
(which is specified in the COVLFxx member of SYS1.PARMLIB, and can be
checked by a MODIFY CATALOG,PREPORT,VLF operator command).  This topic
has been discussed many times on this Listserv, and can be found in the
archives.  There's also very good and extensive information on this in
the z/OS DFSMS: Managing Catalog manual, SC26-7409.

Having said that, failing to clean up redundant data set records  --  for
example, entire HLQ levels of useless data set entries  --  puts your
catalog at risk of significantly greater problems when/if you have a
catalog problem which requires diagnostic analysis.  Any attempt to run
diagnostics on this catalog will likely
identify lots (your word) of useless records that just clutter up the
true status of the catalog.  By not cleaning up these entries, you're
potentially creating a bigger problem for yourself at some later time
--  and if that time is when you have an outage on the catalog and it
results in longer recovery time, you may have critical applications
delayed while you struggle with a dirty catalog.

I hope I've shed some light on your question.  If I'm on a tangent and
don't understand what you're asking, drop me a note (on the Listserv, or
privately).

Ron Ferguson
President and CEO
Mainstar Software Corporation
www.mainstar.com
[EMAIL PROTECTED]



Re: VTAM and MS Host Integrated Server IP-DLC (Enterprise Extender)

2006-03-26 Thread Chris Mason
Tim,

Don't worry. You are among friends here who generally bridle at being forced
to call our beloved VTAM by some foreign name and perhaps there are some who
bridle even more at the forced cohabitation - but at least the bed (if you
regard the I/O logic as the bed) is made of robust materials and has a
dependable pedigree.

Come to think of it there may be some Micro$oft folk who can't get used to
their beloved SNA Server wearing different clothes.

I always feel that this name changing is to disguise the guilty and confuse
the innocent, perhaps not always without cause.

As Ed Rabera - almost - says you should subscribe to the IBMTCP-L list -
send
sub IBMTCP-L your name
to [EMAIL PROTECTED]

This is the group/list for the IP side of Communication Server and, hence,
Enterprise Extender. It can also, out of frustration over the unavailability
of a similar group/list, be the place for purely VTAM issues. However, as
you have already worked out, if it's anything to do with IBM 360 successor
systems, hardware or software, IBM-MAIN collects it - and many in attendance
are themselves IBM 360 successor people <g>.

Here's the documentation you should use:

A redbook, maybe the one you already know:

Migrating Subarea Networks to an IP Infrastructure Using Enterprise
Extender
http://www.redbooks.ibm.com/abstracts/sg245957.html

Three presentations:

z/OS CS Enterprise Extender Hints and Tips
http://www-1.ibm.com/support/docview.wss?uid=swg27006650&aid=1

Understanding Enterprise Extender, Part 1 - Concepts and Considerations
http://www-1.ibm.com/support/docview.wss?uid=swg27006667&aid=1

Understanding Enterprise Extender, Part 2 - Nuts and Bolts
http://www-1.ibm.com/support/docview.wss?uid=swg27006668&aid=1

There are a couple of relevant threads running currently on IBMTCP-L: "EE
Activation steps" and "Anyone convert their SNI Connection with
AGNS/Advantis over to EE yet?". You should check the IBMTCP-L archive,
http://vm.marist.edu/htbin/wlvindex?IBMTCP-L , or use, for example, Google
Groups, to review past post threads.

If the redbook to which you are referring is the one about Enterprise
Extender as referenced above, this is still the only one - as far as I know.
However, the way you described it put me in mind of a document - I'm not
sure it was a redbook - which covered every known way to implement APPC,
that is, it covered the intricacies of the definition process on every
known platform.

I'm sure Pat forgot to add a <g> or the smiley characters to his final
sentence. <g>

There's no reason why you shouldn't use the IBM newsgroups - Pat does - a
lot. The great advantage is that they have the developers in attendance -
well, one very conscientious one anyhow who can activate others as required.
The starting point is
http://www-1.ibm.com/support/docview.wss?rs=2192&context=SSRRLB&dc=D700&uid=swg27005100&loc=en_US&cs=utf-8&lang=en

Regarding terminology: VTAM developers are their own worst enemies when it
comes to misuse of terminology. Unfortunately, at an early stage in SNA
networking evolution when the PU statement was no longer directly associated
with the SNA PU entity but with the adjacent link station, the opportunity
was missed to change the initials from PU to, say, ALS. This should have
happened when support for multiple links between NCPs was introduced. Thus we now
have the ridiculous situation with Enterprise Extender where the term PU
keeps appearing in a variety of contexts when it usually represents the
end-point of a logical link of some sort. Whenever you see PU mentioned
think first that it represents an adjacent link station, then possibly a
whole node and lastly it might actually refer to the SNA PU entity.

Finally, I Googled "HIS Enterprise Extender" in order to see what help
might be available for HIS and IP-DLC. Hm, I had already downloaded
"Configuring IP-DLC Link Service for IBM Enterprise Extender" [1] a couple
of months ago. I notice the document contains a lot about VTAM and TCP/IP for
MVS (as Communication Server IP used to be known before the merger). You
should apply the usual rule that, if a document tries to tell you technical
things about someone else's product, you should treat it with extreme
scepticism. Paying attention to this rule avoids masses of wasted time.

It might be useful to ask questions with reference to this document when it
concerns HIS IP-DLC matters since it seems to cover each of the
configuration panels.

Chris Mason

[1]
http://download.microsoft.com/download/a/f/f/affdc2aa-63ca-48c7-9431-c50736f24236/configuring%20ip-dlc%20link%20service%20for%20ibm%20enterprise%20extender.doc

- Original Message - 
From: Tim Hare [EMAIL PROTECTED]
Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@BAMA.UA.EDU
Sent: Friday, 24 March, 2006 5:54 PM
Subject: VTAM and MS Host Integrated Server IP-DLC (Enterprise Extender)


 Is this the correct place to ask about VTAM issues? We're trying to move a
 Microsoft Host Integration Server from 802.2 connection to IP-DLC (AKA
 Enterprise Extender). 

Re: using 3390 mod-9s

2006-03-26 Thread Anne Lynn Wheeler
DASDBill2 wrote:
 I once wrote a deblocker program to read in 640K tape blocks and break
 them up into QSAM-friendly chunks of 32,760 bytes or less.  It was an
 interesting exercise, made even more so by having to run it on MVS under
 VM, which caused a lot of unrepeatable chaining check errors due to the
 very long channel program to read in 640K in one I/O request.

cp/67 on 360/67 had to translate ccws from the virtual machine ... and
use data-chaining where the virtual machine CCW virtual data address
was contiguous ... but the virtual pages from the virtual machine were
scattered around memory.

moving to 370, IDALs were provided in lieu of data-chaining to break up
virtual contiguous areas into non-contiguous pages. part of this is
that the standard channel architecture precluded pre-fetching CCWs
(they had to be fetched and executed synchronously). on 360, breaking a
single ccw into multiple (data-chaining) CCWs introduced additional
latencies that could result in timing errors. on 370, non-contiguous
areas could be handled with IDALs ... and channel architecture allowed
prefetching of IDALs ... supposedly eliminating the timing latencies
that could happen with the data-chaining approach.
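The distinction described above can be sketched in a toy model (Python, with invented structures and an invented virtual-to-real mapping; real CCW and IDAW formats are more involved than this): a virtually contiguous buffer whose pages are scattered in real storage becomes either one data-chained CCW per page-sized piece (360 style) or a single CCW pointing at a prefetchable list of indirect data addresses (370 IDAL style).

```python
# Sketch of the two ways to drive I/O into a virtually contiguous buffer
# whose pages are scattered in real storage (illustrative only).
PAGE = 4096

def page_fragments(virt_addr, length):
    """Split a virtual extent into per-page (addr, len) fragments."""
    frags, pos = [], virt_addr
    end = virt_addr + length
    while pos < end:
        nxt = min((pos // PAGE + 1) * PAGE, end)   # next page boundary or end
        frags.append((pos, nxt - pos))
        pos = nxt
    return frags

def data_chained_ccws(virt_addr, length, v2r):
    # 360 style: one data-chained CCW per non-contiguous piece
    return [("CCW", v2r(a), n) for a, n in page_fragments(virt_addr, length)]

def idal_ccw(virt_addr, length, v2r):
    # 370 style: a single CCW pointing at a prefetchable list of real
    # addresses (IDAWs), one per page-sized piece
    idaws = [v2r(a) for a, _ in page_fragments(virt_addr, length)]
    return ("CCW-IDA", idaws, length)

# fake virtual->real mapping: scatters each virtual page to some frame
v2r = lambda va: 0x80000 + (va // PAGE) * PAGE * 7 + va % PAGE

ccws = data_chained_ccws(0x1F00, 8192, v2r)   # 8K buffer, unaligned start
one = idal_ccw(0x1F00, 8192, v2r)
assert len(ccws) == 3        # three pieces, three chained CCWs
assert len(one[1]) == 3      # but a single CCW with three IDAWs
```

The timing point follows from the shape of the output: the chained variant makes the channel fetch a new CCW between pieces, while the IDAW list can be prefetched ahead of the data transfer.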

This issue of channels working with real addresses necessitating CCWs
built with virtual address ... to be translated to a shadow set of CCWs
with real addresses ... affects all systems operating with virtual
memory (which support applications building CCWs with virtual memory
addresses ... including the virtual address space area may appear
linear ... but the corresponding virtual pages are actually
non-contiguous in real memory).

The original implementation of os/vs2 was built using standard MVT with
virtual address space tables and page interrupt handler hacked into the
side (for os/vs2 svs ... precursor to os/vs2 mvs ... since shortened to
just mvs). It also borrowed CCWTRANS from cp/67 to translate the
application CCWs (that had been built with virtual addresses) into
real CCWs that were built with real addresses for real execution.

This version of CCWTRANS had support for IDALs for running on 370s.

Typically, once MVS had shadowed the application CCWs ... creating the
shadow CCWs with IDALs  giving the non-contiguous page addresses ...
then any VM translation of the MVS translated CCWs was strictly
one-for-one replacement ... an exact copy of the MVS set of translated
CCWs ... which only differed in the real addresses specified.

all the virtual machine stuff and cp67 had been developed by the
science center
http://www.garlic.com/~lynn/subtopic.html#545tech

there was a joint project between cambridge and endicott to simulate
virtual 370s under cp67 running on real 360/67 (for one thing the 370
virtual memory tables had somewhat different hardware definition, the
control register definitions were somewhat different, there were some
different instructions, etc).

the base production system at cambridge was referred to as cp67l. the
modifications to cp67 to provide 370 virtual machines (as an
alternative option to providing 360 virtual machines) was referred to
as cp67h. Then further modifications were made to cp67 for the kernel
to run on real 370 (using 370 hardware definitions instead of 360
hardware definitions).  This cp67 kernel that ran on 370 hardware was
referred to as cp67i. cp67i was running regularly in a production virtual
machine a year before the first engineering 370 model with virtual
memory hardware became available (in fact, cp67i was used as a
validation test for the machine when it first became operational).

cms multi-level source update management was developed in support of
cp67l/cp67h/cp67i set of updates.

also, the cp67h system ... which could run on a real 360/67, providing
virtual 370 machines ... was actually typically run in a virtual 360/67
virtual machine ... under cp67l on the cambridge 360/67. This was in
large part because of security concerns since the cambridge system
provided some amount of generalized time-sharing to various univ.
people in the cambridge area (mit, harvard, bu, etc). If cp67h was
hosted as the base timesharing service ... there were all sorts of
people that might trip over the unannounced 370 virtual memory
operation.

about the time some 370s (145) processors became available internally
(still long before announcement) a couple engineers came out from san
jose and added the device support for 3330s and 2305s to the cp67i
system (including multi-exposure support, set sector in support of
rps). also idal support was crafted into CCWTRANS. part of the issue
was that the channels on real 360/67s were a lot faster and had much
lower latency ... so there were far fewer instances where breaking a
single CCW into multiple data-chained CCWs resulted in overruns.
However, 145 channel processing was much slower and required
(prefetch'able) IDALs to avoid a lot of the overrun situations.

a few past posts mentioning cp67l, cp67h, and cp67i activity:

Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Stephen M. Wiegand

At 09:51 AM 03/26/2006, you wrote:
Take a look at this job posting.  $35/hr for PLX programmers (for 
the I/O subsystem, no less!).  You gotta be kidding me.




Based on the description, this sounds like requiring a brain surgeon 
and paying them LPN wages!  Anyway, what is PLX?  Never heard of it.






Steve Wiegand



Re: ESTAE-underTSO question

2006-03-26 Thread Walt Farrell

On 3/22/2006 2:26 PM, Paul Schuster wrote:

Hello:

I have a program with an ESTAE that gets control and issues a SDUMP.  This
works fine when submitted as a batch job, as the program is in an authorized
library.

When invoking the same program from a TSO REXX EXEC, the SDUMP in the
ESTAE fails with a S133 abend indicating 'An unauthorized program requested
an SVC dump.'



In order to run authorized under TSO/E your program needs to be listed 
in the appropriate IKJTSOxx member of your system parmlib concatenation.


There are several sections of IKJTSOxx that could be relevant, depending 
on how you invoke your program.  Assuming that you are invoking it via, 
e.g.,

  address TSO "CALL 'library(program)' 'parms'"
then you should add the program name to the AUTHPGMS section of IKJTSOxx.

Also note that the library must be in the APF list.

Finally, note that you must invoke it via some form of "address TSO".
REXX cannot invoke programs authorized if you use the REXX "call"
statement, or if you use "address linkmvs", "address linkpgm", etc.


Walt



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Charles Mills
PL/X is a vaguely PL/I-like (I'm SURE someone will have to correct me on
this) language that is used internally by IBM. Much of z/OS is written in
PL/X. PL/X has systems programming features such as being able to drop into
assembler. IBM has never released PL/X as a customer product but they have
shipped the compiler at various times under limited circumstances - it was
available to 3rd party developers for a while.

To see some PL/X, take a look at almost any of the OS macros. Roughly half
the code is familiar assembly/macro language - the other half, the code that
looks unfamiliar and PL/I-like, is PL/X.

Charles



-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Stephen M. Wiegand
Sent: Sunday, March 26, 2006 7:48 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Wonder why IBM code quality is suffering?


At 09:51 AM 03/26/2006, you wrote:
Take a look at this job posting.  $35/hr for PLX programmers (for 
the I/O subsystem, no less!).  You gotta be kidding me.


Based on the description, this sounds like requiring a brain surgeon 
and paying them LPN wages!  Anyway, what is PLX?  Never heard of it.



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Ed Finnell
 
In a message dated 3/26/2006 10:33:39 A.M. Central Standard Time,  
[EMAIL PROTECTED] writes:

To see  some PL/X, take a look at almost any of the OS macros. Roughly half
the  code is familiar assembly/macro language - the other half, the code  that
looks unfamiliar and PL/I-like, is  PL/X.





There was a big stinko at one of the SHAREs mid-eighties. Chevron
had written PL/S and was willing to part with it to 'broaden the base'.
Think it was RAND decided this was definite mainframe fertilizer and were
willing to act as distributor (GMC 626 manure spreader comes to mind).
Anyway they were passing out mini-reels and giving sessions left and
right on Monday and Tuesday and about Wednesday became deathly quiet.
Turns out couldn't give out any manuals to go with it 'cause they were
all copyrighted.
 
Even our backwoods state legislature divvies up the number of scholarships
based on projected needs, whether it's doctors, lawyers, scientists,
nurses or teachers.



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Ed Gould

On Mar 26, 2006, at 12:25 PM, Ed Finnell wrote:

-- 
SNIP-



There was big stinko at one of the SHAREs mid eighties. Chevron
had written PL/S and was willing to part with it to 'broaden the  
base'.
Think it was RAND decided this was definite mainframe fertilizer  
and were  willing

to act as distributer(GMC 626
manure spreader comes to mind). Anyway they were passing out  mini- 
reels and
giving sessions left and right on Monday and Tuesday and  about  
Wednesday
became deathly quiet. Turns out couldn't give out any manuals to   
go with it

'cause they were all copyrighted.

--SNIP--

At GUIDE it was offered on a 2400' reel. I didn't have one handy (like
who carries a reel of tape around with them when they travel?), so I got
the business card from the guy at Rand and promptly sent off a full tape
the week following GUIDE. I do remember there was a fuss about IBM and
the manual, but I thought the manual was on the tape, no?


I never heard back (I forgot about it). The next mini GUIDE (IIRC) is
when we heard about the ... hitting the fan. I am pretty sure this
was in Anaheim.


Ed



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Ed Finnell
 
In a message dated 3/26/2006 2:30:46 P.M. Central Standard Time,  
[EMAIL PROTECTED] writes:

full  tape the week following GUIDE. I do remember there was a fuss  
about  IBM and the manual, but I thought the manual was on the tape,  no?




I never got my hands on one. I was doing ISPF something or other and by
the time I got wind of it the sails were all furled...



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Mike Baker
Hi Ron,

Thanks for your excellent explanation. (PS: I attended one of your VSAM
courses in Wellington, New Zealand, back in the early 90s).

Just to elaborate on the "lots of redundant datasets and HLQs"... for
example, we have a HLQ called BUS, and approx 9000 BUS.* datasets of
which about 95% have been migrated to tape. The remaining 5% which
are still being used are because people have been too lazy(?) to change a
few remaining jobs to use a different HLQ. We could safely change these 5%
remaining jobs, and then delete all BUS.* datasets.

However, seeing as this has not been done, would this have much of an
overhead performance impact on the Catalog / CAS??

Could you please elaborate on this finer detail?

Thanks very much.



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ron Ferguson
Hi Mike,

 Mike Baker [EMAIL PROTECTED] wrote: Just to elaborate on the "lots
 of redundant datasets and HLQs"... for example, we have a HLQ called
 BUS, and approx 9000 BUS.* datasets of which about 95% of them have
 been migrated to tape. The remaining 5% which are still being used are
 because people have been too lazy(?) to change a few remaining jobs to
 use a different HLQ. We could safely change these 5% remaining jobs,
 and then delete all BUS.* datasets.

Your explanation of redundant datasets still leaves a significant
unanswered question  --  when they've been migrated to tape, are they
still valid data sets under their original name, and if access to them
was to be necessary, would they be located through the catalog entry?
Or, in fact, are they orphaned catalog entries, with no valid data set
anywhere behind them?

On the assumption that the latter is the case, and the catalog entries
really are orphaned, your elaboration does not change the answer  --
no, neither the performance of the catalog itself nor of CAS would be
adversely affected by having so many useless BUS.* data sets (95%) scattered
around amongst the valid BUS.* data sets (5%).  Granted, with a ratio
like that, whenever a valid data set is located, the CI that contains
the record will be brought into a buffer within CAS, and depending on
the CI size of the catalog, one or more of the useless data set records
will also be brought in, thereby wasting some very-hard-to-quantify
amount of CAS buffer area.  
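The CI effect mentioned above can be roughed out in a toy sketch (Python; the record-per-CI count and all other numbers are invented for illustration, not claims about actual CI sizes): locating one valid record reads its whole control interval, dragging its mostly dead neighbours into the buffer with it.

```python
# Toy sketch of CI buffering: reading one valid record pulls its whole
# control interval (CI) into the buffer, dead neighbours included.
# All numbers here are invented for illustration.
import random

random.seed(1)
per_ci = 10                          # records per CI (depends on CI size)
n = 9000                             # cf. the ~9000 BUS.* entries
valid = set(random.sample(range(n), n // 20))   # ~5% still in use

# CIs that must be read to locate every valid record once:
cis_read = {rec // per_ci for rec in valid}
buffered = len(cis_read) * per_ci    # records pulled into CAS buffers
print(f"{buffered} records buffered to reach {len(valid)} valid ones")
```

With a 95% dead ratio nearly every CI read hauls in several useless entries alongside the wanted one, which is exactly the hard-to-quantify buffer waste, even though no extra I/Os are spent on the dead records themselves.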

Nevertheless, cleaning this up would be a good idea, and you probably
should organize a project to whip these application people into shape.
My larger concern would still be the issue of too
much of this orphaned garbage in the catalog, and some day when you
least expect it, you'll have problems with the catalog for some totally
unrelated reason, and now you have 8,000+ data set entries that only
complicate the situation.

Take care,
Ron Ferguson
Mainstar Software Corporation
www.mainstar.com
[EMAIL PROTECTED]



Re: ICF Catalog with lots of redundant datasets

2006-03-26 Thread Ted MacNEIL
 called BUS, and approx 9000 BUS.* datasets of which about 95% of them
 have been migrated to tape. The remaining 5% which are still being used
 are because people have been too lazy(?) to change a few remaining jobs

I think you missed Ron's point.
At the risk of exaggerating, you can have a million datasets that (while
taking up catalogue space) have no impact on performance.
The catalogue is indexed.
It is cached.
But, if you don't touch a DSN (or an alias) it is not loaded, opened, or 
referenced!


-
-teD

I’m an enthusiastic proselytiser of the universal panacea I believe in!



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Richard Tsujimoto
Jeez, (IIRC) I still remember it being PL/C. 



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Graeme Gibson
Originally BSL (Basic Systems Language) which morphed into PL/S 
(Programming Language/Systems) and finally PL/X (... /X(cross) 
platform?).  The inline assembler (GENERATE) capability was prone 
to abuse in the early days, for quite a while we saw PL/S programs 
that consisted of 10,000 lines of assembler wrapped in GENERATE / ENDGEN.  :-)


At one time ISTR that Fujitsu(?) were distributing a PL/S.

PWD members were at one time able to get PL/S (or PL/X?) on an as-is
basis without support for, I think, USD 500 one-time.


google:  [ IBM PL/S generate assembler endgen ]
  threw up a hit titled "Mainframeforum - PL/X Anyone":
http://www.mainframeforum.com/showthread.php?s=d6262087ecc7af587452db75b9e2e12e&goto=lastpost&forumid=927

..but you'll have to use Google's cached copy as the original is gone.

Take care all,
Graeme.

At 10:33 AM 27/03/2006, you wrote:

Jeez, (IIRC) I still remember it being PL/C.




Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Tom Schmidt
On Sun, 26 Mar 2006 19:33:18 -0500, Richard Tsujimoto wrote:

Jeez, (IIRC) I still remember it being PL/C.


No, PL/C was Cornell University's student PL/I compiler.
(I remember it, too; Waterloo had it as one of their batch compilers, as
did ISU and many other colleges and universities around the world.)

The PL/X genealogy included PL/S and PL/AS, but not PL/C.

--
Tom Schmidt
Madison, WI



Field level security (was RE: Convert 3490E Tapes to CD or DVD)

2006-03-26 Thread Ray Mullins
ADABAS does field level security, and has for years.   But it is overhead.

Later,
Ray

-- 
M. Ray Mullins 
Roseville, CA, USA 
http://www.catherdersoftware.com/
http://www.mrmullins.big-bear-city.ca.us/ 
http://www.the-bus-stops-here.org/ 

German is essentially a form of assembly language consisting entirely of far
calls heavily accented with throaty guttural sounds. 

--ilvi 



 

 -Original Message-
 From: IBM Mainframe Discussion List 
 [mailto:[EMAIL PROTECTED] On Behalf Of Ed Gould
 Sent: Friday 24 March 2006 17:06
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: Convert 3490E Tapes to CD or DVD
 
 On Mar 24, 2006, at 4:26 PM, Eric Bielefeld wrote:
 
 I have not heard of any MF application that does field
 level security except for possibly DB2. This is not to say
 there isn't, just that it is unusual, IMO. There may be some
 user code in CICS that selectively displays a field (or not)
 but that seems to be a transaction-type security. There are
 just too many ways you can access a file; that is one of the
 reasons why people need RACF (or other security package).
 



PL/C (was: Wonder why IBM code quality is suffering?)

2006-03-26 Thread John P Baker
PL/C was an interesting compiler.

It provided the capability to correct many common coding errors.

Unfortunately, there was no guarantee that it would correct the coding error
in the way you might expect.

You could actually give PL/C no input.  It would then detect the lack of
a MAIN procedure and build a dummy MAIN procedure.

John P Baker
Software Engineer

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Tom Schmidt
Sent: Sunday, March 26, 2006 23:12
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Wonder why IBM code quality is suffering?

On Sun, 26 Mar 2006 19:33:18 -0500, Richard Tsujimoto wrote:

Jeez, (IIRC) I still remember it being PL/C.


No, PL/C was Cornell University's student PL/1 compiler.
(I remember it, too; Waterloo had it as one of their batch compilers, as
did ISU and many other colleges and universities around the world.)

The PL/X genealogy included PL/S and PL/AS, but not PL/C.

--
Tom Schmidt
Madison, WI



Good Intro to ZSeries for PFCSK ?

2006-03-26 Thread Ed Gould

Introduction to the New Mainframe: z/OS Basics
May 15, 2006 Workshop in San Francisco, USA
Contact: [EMAIL PROTECTED]
More details are available at http://www.redbooks.ibm.com/workshops/GR9534




Re: VTAM and MS Host Integrated Server IP-DLC (Enterprise Extender)

2006-03-26 Thread Chris Mason
Tim,

Looking for another redbook, I found the document I was puzzling about in my
last post. It is a redbook, the Multiplatform APPC Configuration Guide,
GG24-4485-00, December 1994. I think I must have seen an earlier version of
the document but some clever Raleigh ITSO assignee must have picked it up
and made a redbook out of it.

 Chris Mason

- Original Message - 
From: Tim Hare [EMAIL PROTECTED]
Newsgroups: bit.listserv.ibm-main
To: IBM-MAIN@BAMA.UA.EDU
Sent: Friday, 24 March, 2006 5:54 PM
Subject: VTAM and MS Host Integrated Server IP-DLC (Enterprise Extender)


 Is this the correct place to ask about VTAM issues? We're trying to move a
 Microsoft Host Integration Server from 802.2 connection to IP-DLC (AKA
 Enterprise Extender). The documentation we've found, however, doesn't
 really use the same names everywhere so we're having difficulty relating
it
 to what we want to do.  I remember (fondly) an old IBM Redbook which had
 examples of how to define things on both ends, and also documented what
 names and parameters had to match. Is there any such thing for HIS and
VTAM
 (sorry, Communications Server)?



Re: Wonder why IBM code quality is suffering?

2006-03-26 Thread Timothy Sipples
I didn't see the original Web address for this PLX job posting. Here's one 
instance:

http://www.net-temps.com/job/2ow6/PLX-POK/plx_programmer_ex_ibmers.html

With respect to the conclusions one could draw, here they are:

1. Somebody is (was?) looking for a PLX programmer.
2. Somebody would have made a nice profit at that asking price.

With respect to the educated guesses beyond that, here they are:

3. At that asking price, somebody probably didn't find a PLX programmer.

:-)

- - - - -
Timothy F. Sipples
Consulting Enterprise Software Architect, z9/zSeries
IBM Japan, Ltd.
E-Mail: [EMAIL PROTECTED]



Re: Convert 3490E Tapes to CD or DVD

2006-03-26 Thread Vernooy, C.P. - SPLXM
Eric,

If you go with CDs/DVDs, you had better prepare yourself for long-term 
preservation. I read a (German?) article last week or so about an investigation 
into the lifetime of CDs and DVDs. It seems to be even worse than rumour had it; 
some lasted only a few years. You should ask for guarantees from the 
company that produces your CDs/DVDs, and plan for regular copy operations for 
safety.

Kees.

Eric N. Bielefeld [EMAIL PROTECTED] wrote in message news:[EMAIL 
PROTECTED]...
 Don,
 
 I know that I have searched previously for companies to do this, but I 
 didn't find anywhere near as many hits as I got when I entered your search 
 arguments.  Thanks - those are a lot better than the search arguments I used 
 previously.
 
 I was hoping for someone who had already done this, and had companies that 
 they would recommend.  I'll definitely check out some of these companies 
 on Monday when I get back to work.
 
 Eric Bielefeld
 PH Mining Equipment
 
 - Original Message - 
 From: Imbriale, Don [EMAIL PROTECTED]
 Newsgroups: bit.listserv.ibm-main
 To: IBM-MAIN@BAMA.UA.EDU
 Sent: Friday, March 24, 2006 4:49 PM
 Subject: Re: Convert 3490E Tapes to CD or DVD
 
 
 A search via Google for "convert 3490e tape to cd" yields thousands of
  hits.  On the first page of 20 hits, there are at least half a dozen
  companies that provide the service you are looking for, even to the point
  of converting binary and packed decimal fields as needed.  Note, however,
  that these do not seem to be inexpensive services.
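
The packed-decimal conversion mentioned above is mechanical enough to sketch. Below is a minimal, hypothetical Python decoder for an IBM packed-decimal (COMP-3) field; it assumes the usual S/390 encoding of two digits per byte with the final nibble carrying the sign (0xD negative, anything else treated as positive). It is an illustration of the conversion the services advertise, not anyone's actual product code.

```python
def unpack_comp3(data: bytes) -> int:
    """Decode an IBM packed-decimal (COMP-3) field into a Python int.

    Each byte holds two decimal digits, except the last byte, whose
    low nibble is the sign: 0xD means negative; 0xC/0xF mean positive.
    """
    digits = []
    sign = 1
    for i, byte in enumerate(data):
        hi, lo = byte >> 4, byte & 0x0F
        if i == len(data) - 1:
            digits.append(hi)                 # last byte: one digit + sign
            sign = -1 if lo == 0x0D else 1
        else:
            digits.extend((hi, lo))           # two digits per byte
    return sign * int("".join(str(d) for d in digits))

# Example: bytes 0x12 0x34 0x5C encode +12345; 0x1D encodes -1.
```

Binary (COMP) fields are simpler still: they are plain big-endian twos-complement integers and can be decoded with `int.from_bytes(field, "big", signed=True)`.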
 
  Don Imbriale
 
  On Fri, 24 Mar 2006 14:16:16 -0600, Eric Bielefeld Eric-
  [EMAIL PROTECTED] wrote:
 
 Does anyone know of any companies that can convert mainframe tapes to
 DVDs or CDs?  As many of you know, our mainframe is going out the door
 at the end of April.  All of the historical data and otherwise tape
 data will then in essence be unreadable by us.  All of our tapes are
 currently 3490-E model tapes.  We are looking for a company who can
 read the data on a 3490E drive and convert it to ASCII and write it on
 a DVD or CD, or just FTP it to us.  This would be on an as needed
 basis, and may or may not be a lot of data.
 
 You can reply to me or the group, or call me.
 
 Thanks,
 
 Eric Bielefeld
 Sr. Systems Programmer
 PH Mining Equipment
 414-671-7849
 Milwaukee, Wisconsin 
 





Re: Convert 3490E Tapes to CD or DVD

2006-03-26 Thread Dave Cartwright
On Fri, 24 Mar 2006 14:16:16 -0600, Eric Bielefeld Eric-
[EMAIL PROTECTED] wrote:

Does anyone know of any companies that can convert mainframe tapes to
DVDs or CDs?  As many of you know, our mainframe is going out the door
at the end of April.  All of the historical data and otherwise tape
data will then in essence be unreadable by us.

Eric,
Check out the documentation (File 001) on the CBT tape. There are several
programs that will convert a tape to AWS format files. You yourself can
create these files and simply write them to CD or DVD or any other medium
after transferring them to a PC. AWS is a standard format that is handled
by Flex-ES, P/390 and Hercules.
You could legally have a Hercules system running MVS 3.8 that you can use
to access the data on these tapes, if only to read and print it.

DC
