RACF simple doubt ...

2006-09-30 Thread Jacky Bright

Hi,

I have defined a generic profile 'PROD.**' and USR1 has ALTER access to this
profile.

Later I defined a dataset profile named 'PROD.BACKUP' and gave OPR1 and
OPR2 users READ access.

Is there any necessity of providing ALTER access to USR1 for this newly
created profile PROD.BACKUP?

Jacky
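A sketch of the setup described, in standard RACF command syntax (dataset
and user names from the post):

```
ADDSD  'PROD.**' UACC(NONE)
PERMIT 'PROD.**' ID(USR1) ACCESS(ALTER)

ADDSD  'PROD.BACKUP' UACC(NONE)
PERMIT 'PROD.BACKUP' ID(OPR1 OPR2) ACCESS(READ)
```

Since RACF protects a dataset with the most specific matching profile,
PROD.BACKUP (not PROD.**) now governs access to that dataset - so USR1
would need an explicit PERMIT on PROD.BACKUP (or access via a group) to
keep ALTER there.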

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DYNALLOC

2006-09-30 Thread Ken Kornblum
It's in the Authorized Assembler Services Guide.  Chapters 25 through 27

Cheers,
Ken Kornblum
Neon Enterprise Software
Austin, TX 
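For reference alongside that pointer: SVC 99 requests are built from "text
units" whose layout the guide documents - a 2-byte key, a 2-byte parameter
count, then a 2-byte length plus data for each parameter. A structural
sketch in Python (the DALDSNAM key value is the documented one; the dataset
name is just an example, and on z/OS the name bytes would be EBCDIC):

```python
import struct

def text_unit(key, *parms):
    # 2-byte key, 2-byte parameter count, then a 2-byte length + data
    # for each parameter, all big-endian as on z/OS
    out = struct.pack(">HH", key, len(parms))
    for p in parms:
        out += struct.pack(">H", len(p)) + p
    return out

DALDSNAM = 0x0002  # dataset-name text-unit key

tu = text_unit(DALDSNAM, b"MY.DATA.SET")
print(tu.hex())
```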

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Gary Smith
Sent: Saturday, September 30, 2006 9:11 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: [IBM-MAIN] DYNALLOC

In what IBM manual can I find documentation on SVC 99?  It's not in any of the
places I thought to look.  It's been a long time since I wrestled with the
documentation.

Gary L. Smith
Columbus, Ohio



DYNALLOC

2006-09-30 Thread Gary Smith
In what IBM manual can I find documentation on SVC 99?  It's not in any of the
places I thought to look.  It's been a long time since I wrestled with the
documentation.

Gary L. Smith
Columbus, Ohio



Re: REAL memory column in SDSF

2006-09-30 Thread Anne & Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


Brian Inglis <[EMAIL PROTECTED]> writes:
> Had to be reduced to 9 pages (36KB) because the 3880/3380 would miss
> the start of the next track (RPS miss) on a chained multi-block big
> page transfer because of overhead. 

processing latency ... this was if you wanted to do multiple
consecutive full track transfers ... with head-switch to different
tracks (on the same cylinder; aka arm position) w/o losing
unnecessary revolutions ... aka being able to do multiple full track
transfers in the same number of disk rotations.

as already discussed (in some detail) ... the 3880 disk controller processed
control commands much slower than the previous 3830 disk controller
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

which meant that it was taking longer elapsed time between commands
... while the disks continued to rotate.

there had been earlier studies in detail regarding elapsed time to do
a head switch on 3330s ... in order to read/write "consecutive" blocks
on different tracks (on the same cylinder) w/o unproductive disk
rotation. for intra-track head switches, the official 3330 specs called
for a 110-byte dummy spacer record (between 4k page blocks) that allowed
time for processing the head switch command ... while the disk continued
to rotate. the rotation of the dummy spacer block overlapped with the
processing of the head switch command ... allowing the head switch
command processing to complete before the next 4k page block had rotated
past the r/w head.

the problem was that a 3330 track only had enuf room for three 4k page
blocks with 101-byte dummy spacer records (i.e. by the time the head
switch command had finished processing, the start of the next 4k record
had already rotated past the r/w head).
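The timing at issue can be roughed out from commonly cited 3330 figures
(3600 rpm, about 13,030 bytes per track - neither number is from the post
itself):

```python
REV_US = 60_000_000 / 3600      # one revolution in microseconds (~16,667)
TRACK_BYTES = 13_030            # approximate 3330 track capacity

def spacer_window_us(spacer_bytes):
    # time the r/w head spends passing over a dummy spacer record;
    # the head-switch command has to finish within this window, or the
    # next 4k block has already rotated past the head
    return REV_US * spacer_bytes / TRACK_BYTES

print(round(spacer_window_us(101)))   # window with a 101-byte spacer
print(round(spacer_window_us(110)))   # window with a 110-byte spacer
```

So the test described below comes down to whether a given channel/controller
pair can process the head-switch command in roughly 130 microseconds.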

it turns out that both channels and disk controllers introduced processing
delay/latency. so i put together a test program that would format
a 3330 track with different sized dummy spacer blocks and then test whether
a head switch was performed fast enuf before the target record had rotated
past the r/w head.

i tested the program with 3830 controllers on 4341, 158, 168, 3031,
3033, and 3081. it turns out that with a 3830 in combination with a 4341
or 370/168, the head switch command processed within the 101 byte
rotation latency.

the combination of 3830 and 158 didn't process the head switch command
within the 101 byte rotation (resulting in a missed revolution). the
158 had integrated channel microcode sharing the 158 processor engine
with the 370 microcode. all the 303x processors had an external
"channel director" box. each 303x channel director box was a
dedicated 158 processing engine with only the integrated channel
microcode (w/o the 370 microcode) ... and none of the 303x processors
could handle the head switch processing within the 101 byte dummy
block rotation latency. the 3081 channels appeared to have similar
processing latency as the 158 and 303x channel director (not able to
perform the head switch operation within the 101 byte dummy block rotation).

i also got a number of customer installations to run the test with a
wide variety of processors and both 3830 controllers and oem clone
disk controllers.

misc. past posts discussing the 3330 101/110 dummy block for
head switch latency: 
http://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2004d.html#64 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004d.html#66 System/360 40 years old today
http://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
http://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?



Re: DFSORT vs Syncsort

2006-09-30 Thread Ed Rabara
SyncSort for z/OS R1.2 can be installed using SMP/E or not. Your choice. 
SyncSort even provides an ISPF front-end to the installation process (which 
I do not use; I went straight to the manual SMP/E install). The SMP/E 
process, as provided, will allow you to build a new SMP CSI for your 
global, target, and dlib zones, if you want to.

SyncSort default options are configured using a usermod (a sample is 
provided). Replacing IEBGENER with SyncSort's GENER is also packaged as a 
usermod.

SyncSort fixes are electronically downloadable using FTP from their download 
site. Gotta have a registered login to do so.

My only objection is to the packaging of the fixes, what SyncSort calls 
TPFs. I wish they were closer to IBM's PTF packaging in that pre-reqs, 
co-reqs, and sups are spelled out ahead of time so that I can do pre-apply 
verification. (Oh well, that's what apply-checks are for.)


On Thu, 28 Sep 2006 23:06:15 -0500, Ed Gould <[EMAIL PROTECTED]> 
wrote:

>The difference that I personally like is that DFSORT is supported by
>SMP/e (read IBM real SMPe support NOT CA so called SMPe support). I
>don't know currently (this information is old) is that Syncsort
>distributed fixes by weekly (or seemed like) in the form of ZAPs on a
>printed mailed sheet.
>
>Ed



Re: REAL memory column in SDSF

2006-09-30 Thread Anne & Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.


[EMAIL PROTECTED] (Ed Gould) writes:
> It would be interesting, I would think to have the "old timers"
> compare the code that was used in the "old days" against what is used
> today.
>
> The code I think has been recoded many a time. Do you think the new
> people could show the old people new tricks or would it be the other
> way around?

some of this cropped up during the early days of os/vs2 svs
development.

at the time, cp67 was one of the few relatively successful operating
systems that supported virtual memory, paging, etc (at least in the
ibm camp). as a result some of the people working on os/vs2 svs were
looking at pieces of cp67 for example.

one of the big issues facing transition from real memory mvt to virtual
memory environment was what to do about channel programs.

in a virtual machine environment, the guest operating system invokes
channel programs ... that have virtual addresses. channel operation
runs asynchronously with real addresses. as a result, cp67 had a lot of
code (module CCWTRANS) to create an exact replica of the virtual
channel program ... but with real addresses (along with fixing the
associated virtual pages at real addresses for the duration of the i/o
operation). these were "shadow" channel programs.

svs had a comparable problem with channel programs generated in the
application space and passing the address to the kernel with
EXCP/SVC0. the svs kernel was now also faced with scanning the virtual
channel program and creating a replica/shadow version using real
addresses. the initial work involved taking CCWTRANS from cp67 and
adapting it for the SVS development effort.

one of the other issues was that the POK performance modeling group
got involved in doing low-level event modeling of os/vs2 paging
operations. one of their conclusions ... which I argued with them
about ... was that replacing non-changed pages was more efficient than
selecting a changed page for replacement. no matter how much arguing,
they were adamant that on a page fault ... for a missing page ... the
page replacement algorithm should look for a non-changed page to be
replaced (rather than a changed page). their reasoning was that
replacing a non-changed page took significantly less effort (there was
no write-out required for the current page).

the issue is that in an LRU (least recently used) page replacement
strategy ... you are looking to replace pages that have the least
likelihood of being used in the near future. the non-changed/changed
strategy resulted in less weight being placed on whether the page
would be needed in the near future. this strategy went into svs and
continued into the very late 70s (with mvs) before it was corrected.

finally it dawned on somebody that the non-changed/changed strategy
resulted in replacing relatively high-use, commonly used linkpack
executable (non-changed) pages before more lightly referenced, private
application data (changed) pages.
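The effect described here can be sketched with a toy model (purely
illustrative - the page names and recency values are invented):

```python
def evict_lru(frames):
    # true LRU: steal the least recently used frame
    return min(frames, key=lambda f: f["last_used"])

def evict_prefer_unchanged(frames):
    # the SVS-style policy: prefer a non-changed frame (cheaper to
    # steal, no page-out needed), falling back to LRU only if every
    # frame is changed
    unchanged = [f for f in frames if not f["changed"]]
    return evict_lru(unchanged) if unchanged else evict_lru(frames)

frames = [
    {"page": "linkpack code (shared)", "last_used": 99, "changed": False},
    {"page": "private data A",         "last_used": 10, "changed": True},
    {"page": "private data B",         "last_used": 5,  "changed": True},
]

print(evict_lru(frames)["page"])               # the coldest page
print(evict_prefer_unchanged(frames)["page"])  # the hot shared page!
```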

these days there is a lot of trade-off trying to move data between
memory and disk in really large block transfers ... and using excess
electronic memory to compensate for disk i/o bottlenecks. in the vs1
handshaking scenario ... vs1 letting vm do its paging in 4k blocks was
frequently significantly more efficient than paging in 2k blocks (it made
less efficient use of real storage, but it was a reasonable trade-off
since there were effectively more real storage resources than there
were disk i/o access resources).

later "big pages" went to 40k (ten 4k pages) 3380 track demand page
transfers. vm/hpo3.4 would typically do more total 4k transfers than
vm/hpo3.2 (for the same workload and thruput) ... however, it could do
the transfers with much fewer disk accesses; it made less efficient
use of real storage, but more efficient use of disk i/o accesses
(again trading off real storage resource efficiency for disk i/o
resource efficiency).
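The trade-off can be put in rough numbers. These are illustrative
3380-class figures (about 24 ms per access for seek plus rotation, about
3 MB/s transfer rate), not measurements from the thread:

```python
ACCESS_MS = 24.0                # assumed avg seek + rotational delay
XFER_MS_PER_KB = 1.0 / 3.0      # assumed ~3 MB/s transfer rate

def ms_per_4k_page(pages_per_access):
    # effective cost per 4k page when each disk access moves
    # pages_per_access pages in one transfer
    xfer_ms = 4 * pages_per_access * XFER_MS_PER_KB
    return (ACCESS_MS + xfer_ms) / pages_per_access

print(round(ms_per_4k_page(1), 1))    # demand paging one 4k page
print(round(ms_per_4k_page(10), 1))   # a ten-page "big page"
```

Even if some of the ten pages are never touched, the per-page cost of a
disk access drops by roughly a factor of seven - real storage efficiency
traded for disk access efficiency, as described.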

... or somewhat reminiscent of a line that I started using as an
undergraduate in conjunction with dynamic adaptive scheduling;
"schedule to the (system thruput) bottleneck". misc. past posts
mentioning past dynamic adaptive scheduling work and/or the resource
manager
http://www.garlic.com/~lynn/subtopic.html#fairshare

previous posts in this thread:
http://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF

misc past posts mentioning os/vs2 starting out using CCWTRANS from cp67
http://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
http://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
http://www.garlic.com/~lynn/2001l.html#36 History
http://www.garlic.com/~lynn/2002n.html#62 PLX
http://www.garlic.com/~ly

Getting away from the name IRXFLOC

2006-09-30 Thread Charles Mills
Cross-posted from the Rexx list, where the response was underwhelming!

 

I have a compiled Rexx program that runs in MVS batch. It must be able to
run either with the "Rexx library" (that is, truly compiled) or with the
alternate library. It uses many home-grown assembler functions. I provide
them to the program in two forms: linked in and callable via DLINK, and as a
function package in a load module named IRXFLOC.

 

For a variety of good reasons that are beyond the scope of this note I would
like to not use the name IRXFLOC (or any of the "standard" IRX names).
It looks to me like the only reasonable possibility is to modify EAGSTMVS
from the sample library to create my own EAGSTMVS that calls IRXINIT before
IRXEXEC, pre-creating the Rexx environment and passing a Function Package
Table that includes a different name.

 

This seems like an incredibly involved approach to what should be a simple
problem. Is there an easier way? I would, for example, love to be able to
front-end or modify EAGSTMVS and somehow simply say "here is the
in-load-module address of the function package directory that I want you
to use."

 

I cannot include the function package directory in the load module and ALIAS
or IDENTIFY it as IRXFLOC for good reasons that are beyond the scope of this
note.

 

Any suggestions would be appreciated. Thanks,

 

Charles




Re: Linkage Editor

2006-09-30 Thread Charles Mills
> And so the entry of the ALIAS may be different from the entry of the
> module you specify on the NAME statement.

You're asking: can I create a load module named XXX in which EXEC PGM=XXX
starts execution at displacement 1000, and which also has an alias named
YYY, and EXEC PGM=YYY starts execution at displacement 2000?

Yes, absolutely. I have done it. I suppose one could have two unrelated
"programs" that were linked into a single load module. Don't know why you'd
want to, and the tendency of the utilities to "drop" aliases and leave them
behind argues against it, but you could do it. More useful for a single
program with two (or more!) slightly different functions either of which
could be invoked through the use of different PGM= names.
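To illustrate, the link-edit control statements for such a module might
look like this (member and symbol names are hypothetical; this assumes an
object deck containing external symbols XXX and YYY, so the ALIAS picks up
YYY as its own entry point):

```
//SYSLIN   DD  *
  INCLUDE  OBJLIB(MYPROG)
  ENTRY    XXX
  ALIAS    YYY
  NAME     XXX(R)
/*
```

EXEC PGM=XXX then enters at XXX, while EXEC PGM=YYY enters at the external
symbol YYY.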

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Bernd Oppolzer
Sent: Saturday, September 30, 2006 4:52 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Linkage Editor

I'm not sure, but I believe: 
if you have no ENTRY control statement but you have an ALIAS control 
statement, and there is an ENTRY or CSECT in the object module with the same 
name, the ALIAS will get this entry. And so the entry of the ALIAS may be 
different from the entry of the module you specify on the NAME statement. 
Correct? 



Re: Linkage Editor

2006-09-30 Thread Paul Gilmartin
In a recent note, Charles Mills said:

> Date: Fri, 29 Sep 2006 16:18:06 -0700
> 
> I believe I recall doing that also, although the timeframe would have been
> ~1970, back when I found it more amusing than I do today to play "what if?"
> in MVS (then OS/360).
> 
Not so much bravado as naivete.  Modifying code generation in
a compiler, I needed to become familiar with the documented formats
of SYSLIN records.  On discovering the entry address in the END
record, I assumed with delight that I could save one instruction
per load module by omitting the BC 15 and letting ATTACH bypass the
eyecatchers.

I had never previously used a system on which executables were
re-editable, nor one with CKD DASD, so neither the possibility nor
the desirability of relinking to reblock occurred to me.

And, regardless, it would not have occurred to me that simple
relinking would fail to preserve such an important attribute
from the previous link editing.  An expert regular contributor
to ASSEMBLER-LIST has taken the position that preserving such
information is unnecessary and even undesirable -- the programmer
should either retain the original link editing commands (SMP/E
does so) or rely on AMBLIST to recreate them.  IBM development
has apparently judged otherwise and lately provided the "-attr"
option on the INCLUDE command.  I suspect this is to support
IEBCOPY's use of Binder to reblock and convert between load
modules and program objects.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
> Of Paul Gilmartin
> 
> > http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/iea2b160/5.1.1
> 
>  * Through an assembler- or compiler-produced END statement
>of an input object module if one is present.  ...

-- gil
-- 
StorageTek
INFORMATION made POWERFUL



Re: REAL memory column in SDSF

2006-09-30 Thread Anne & Lynn Wheeler
The following message is a courtesy copy of an article
that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> in the early 80s ... "big pages" were implemented for both VM and MVS.
> this didn't change the virtual page size ... but changed the unit of
> moving pages between memory and 3380s ... i.e. "big pages" were
> 10 4k pages (3380) that moved to disk and were fetched back in from
> disk. a page fault for any 4k page in a "big page" ... would result
> in the whole "big page" being fetched from disk.

re:
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF

"big pages" support shipped in VM HPO3.4 ... it was referred to as the
"swapper" ... however the traditional definition of swapping has been
to move all storage associated with a task in a single unit ... I've
used the term "big pages" ... since the implementation was more
akin to demand paging ... but in 3380 track sized units (10 4k pages).

from vmshare archive ... discussion of hpo3.4
http://vm.marist.edu/~vmshare/browse?fn=34PERF&ft=MEMO

and mention of hpo3.4 swapper from melinda's vm history
http://vm.marist.edu/~vmshare/browse?fn=VMHIST05&ft=NOTE&args=swapper#hit

vmshare was an online discussion forum provided by tymshare to the share
organization starting in the mid-70s on tymshare's vm370 based
commercial timesharing service ... misc. past posts referencing
various vm370 based commercial timesharing services
http://www.garlic.com/~lynn/subtopic.html#timeshare

in the original 370, there was support for both 2k and 4k pages
... and the page size unit of managing real storage with virtual
memory was also the unit of moving virtual memory between real storage
and disk. the smaller page sizes tended to better optimize constrained
real storage sizes (i.e. compared to 4k page sizes, an application
might actually only need the first half or the last half of a specific
4k page, 2k page sizes could mean that the application could
effectively execute in less total real storage).

the issue mentioned in this post
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
and 
http://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s

was that systems had shifted from having excess disk i/o resources to
disk i/o resources being a major system bottleneck ... issue also discussed
here about CKD DASD architecture
http://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
http://www.garlic.com/~lynn/2006r.html#33 50th Anniversary of invention of disk drives

with the increasing amounts of real storage ... there was more and
more of a tendency to leverage the additional real storage resources to
compensate for the declining relative system disk i/o efficiency.

this was seen in mid-70s with the vs1 "hand-shaking" that was somewhat
done in conjunction with the ECPS microcode enhancement for 370
138/148.
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

VS1 was effectively MFT laid out to run in single 4mbyte virtual
address space with 2k paging (somewhat akin to os/vs2 svs mapping MVT
to a single 16mbyte virtual address space). In vs1 hand-shaking, vs1
was run in a 4mbyte virtual machine with a one-to-one correspondence
between the vs1 4mbyte virtual address space 2k virtual pages and the
4mbyte virtual machine address space. 

VS1 hand-shaking effectively turned over paging to the vm virtual
machine handler (vm would present a special page fault interrupt to
the vs1 supervisor ... and then when vm had finished handling the page
fault, present a page complete interrupt to the vs1 supervisor). Part
of the increase in efficiency was eliminating duplicate paging when
VS1 was running under vm. However part of the efficiency improvement
was VM was doing demand paging using 4k transfers rather than VS1 2k
transfers. In fact, there were situations where VS1 running on a 1mbyte
370/148 under VM had better thruput than VS1 running stand-alone w/o VM
(the other part of this was my global LRU replacement algorithm and my
code pathlength from handling page fault, to doing the page i/o to
completion was much better than the equivalent VS1 code).

there were two issues with the 3380. over the years, disk i/o had become
increasingly a significant system bottleneck. more specifically,
latency per disk access (arm motion and avg. rotational delay) was
significantly lagging behind improvements in other system
components. so part of compensating for disk i/o access latency was to
significantly increase the amount transferred per operation. the other was
that the 3380 increased the transfer rate by a factor of ten while its
access time only improved by a factor of 3-4. significantly
increasing the amount transferred per access also better matched the
changes in disk technology over time (note later technologies
introduced raid that did large transfers across multiple disk arms in
parallel).

Re: Dataspace vs 64 bit performance

2006-09-30 Thread Rob Scott
If you can foresee a situation where you need more than 2GB for your
temporary storage - then I would suggest converting now.

Otherwise consider the following :

(a) Are you experiencing performance problems now? 
(b) Watch out for the frequency of the SAC instruction - this can slow
things down drastically if you are switching between AR and Primary mode
too often.
(c) Do you share the dataspace storage with other address spaces? - this
is also possible with 64-bit storage but is a little bit more effort.
(d) Using 64-bit may allow you to access/use services that do not
currently run in AR mode - this could save you some CPU time or at least
having to flip between AR mode and Primary to access them.
(e) Does/could your application run on hardware that might not support
64-bit?
(f) Could your application run on a system where its MEMLIMIT could be
knee-capped?
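To make the two routes concrete, a heavily hedged assembler sketch (the
macro names are the real z/OS services, but operands are abbreviated and
all the labels are placeholders; see the assembler services references for
the full parameter lists):

```
* dataspace route: create it, put it on the access list, then
* run in AR mode (SAC 512) using the returned ALET
         DSPSERV CREATE,NAME=DSPNAME,STOKEN=DSPSTOK,              X
               BLOCKS=DSPBLKS,ORIGIN=DSPORG
         ALESERV ADD,STOKEN=DSPSTOK,ALET=DSPALET
*
* 64-bit route: obtain storage above the bar (in 1MB segments)
* and address it directly in primary mode with 64-bit addressing
         IARV64 REQUEST=GETSTOR,SEGMENTS=SEGCNT,ORIGIN=OBJORG
```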



Rob Scott
Rocket Software, Inc
275 Grove Street
Newton, MA 02466
617-614-2305
[EMAIL PROTECTED]
http://www.rs.com/portfolio/mxi/
 

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Miklos Szigetvari
Sent: 30 September 2006 06:01
To: IBM-MAIN@BAMA.UA.EDU
Subject: Dataspace vs 64 bit performance

 Hi

(Maybe already discussed)
Dataspace vs 64 bit performance.

Currently our application is using dataspaces to store large temporary
information (store sequentially and retrieve later sequentially). Is it
worth changing to 64 bit?



Re: Linkage Editor

2006-09-30 Thread Bernd Oppolzer
I'm not sure, but I believe: 
if you have no ENTRY control statement but you have an ALIAS control 
statement, and there is an ENTRY or CSECT in the object module with the same 
name, the ALIAS will get this entry. And so the entry of the ALIAS may be 
different from the entry of the module you specify on the NAME statement. 
Correct? 
Kind regards
Bernd


Am Freitag, 29. September 2006 20:39 schrieben Sie:
> I believe the linkage editor/binder can get an entry specification form
> one of two possible places: the ENTRY control statement and the END
> "card" in an object module. If neither is present, the default entry
> point is the first CSECT in the load module.  This need not be the first
> one included if things are changed by ORDER statements.  I don't believe
> the NAME statement has any effect.
>
> Things may be different for program objects.
>
> The manual you want is Program Management in the SMS library.
>



Install new digital certificate

2006-09-30 Thread Jay Howard
I need to know how to get a new certificate into TCPIP for TN3270
connections. I created the new certificate and added it to the keyring.
However, it appears that it is not being used. I am using RACF to hold
the certificate. 
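A frequent cause is that the new certificate is not the DEFAULT
certificate on the ring the TN3270 profile points at, or the RACF profiles
haven't been refreshed. A hedged sketch (the userid, ring name, and label
are placeholders):

```
RACDCERT ID(TCPIP) CONNECT(LABEL('NEW TN3270 CERT') +
         RING(TN3270RING) DEFAULT USAGE(PERSONAL))
RACDCERT ID(TCPIP) LISTRING(TN3270RING)
SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH
```

After that, the TN3270 server generally has to re-read its profile (or be
restarted) before the new certificate is picked up.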

Thanks, 
Jay 




Dataspace vs 64 bit performance

2006-09-30 Thread Miklos Szigetvari
 Hi

(Maybe already discussed)
Dataspace vs 64 bit performance.

Currently our application is using dataspaces to store large temporary
information
(store sequentially and retrieve later sequentially)
Is it worth changing to 64 bit?



Re: Linkage Editor

2006-09-30 Thread Kenny Fogarty

* Gibney, Dave wrote, On 30/09/2006 08:07:
> I keep an "up to date" set via Softcopy Librarian at work and on my
> laptop. But, it's a little old, last new stuff is April 2006 :(

Gerhard Postpischil wrote:
> Mark Hammond wrote:
> > Thanks to all.  One of my biggest frustrations is finding the right
> > manual to get the information I need.
>
> When I'm not sure, I use the IBM documentation CDs on my PC,
> and do a global search. It may take several minutes, but that
> generally beats taking out several manuals and checking the contents.
>
> Gerhard Postpischil
> Bradford, VT

I keep links to various sites in my del.icio.us account, that way, I 
have the same links at home, in the office, on the laptop - everywhere 
really. As I find new sites of interest, or chapters of books that I 
need regularly, I just add them to del.icio.us with appropriate tags, 
and they're saved for the next time I need them. I've also got various 
add-ins on my Firefox browser which validate links from time to time 
(useful for when IBM decide to move links and I've not got the updated 
link in place) which help me keep everything relatively up to date.


Of course, this is no good for licensed materials, but it's good for 
everything else.


Cheers,
Kenny



Re: Linkage Editor

2006-09-30 Thread Gibney, Dave
I keep an "up to date" set via Softcopy Librarian at work and on my
laptop. But, it's a little old, last new stuff is April 2006 :( 

> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Gerhard Postpischil
> Sent: Friday, September 29, 2006 2:13 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: Linkage Editor
> 
> Mark Hammond wrote:
> > Thanks to all.  One of my biggest frustrations is finding the right 
> > manual to get the information I need.
> 
> When I'm not sure, I use the IBM documentation CDs on my PC, 
> and do a global search. It may take several minutes, but that 
> generally beats taking out several manuals and checking the contents.
> 
> Gerhard Postpischil
> Bradford, VT
> 
