Re: Real storage usage - a quick question
[EMAIL PROTECTED] (Knutson, Sam) writes:
> You should have the PTFs for z/OS APAR OA17114 installed if you are
> using page fixed buffers in DB2 V8. Not having it was one of the
> causes of a z/OS outage here when a DB2 DBA accidentally overcommitted
> storage to DB2.

aka application page fixed buffers ... allows applications to specify the "real addresses" in the channel program ... avoiding the dynamic channel program translation (creating a duplicate of the channel program passed by excp/svc0) and the dynamic page fixing that otherwise has to occur on every i/o operation (however, it can deplete the pageable storage available to the rest of the system).

recent post mentioning the difference between EXCP and EXCPVR (vis-a-vis channel program translation):
http://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage backing up

other recent posts discussing dynamic channel program translation (in the initial transition from MVT to OS/VS2 supporting virtual memory, there was extensive borrowing of technology from cp67 CCWTRANS, channel program translation):
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
http://www.garlic.com/~lynn/2007f.html#34 Historical curiosity question
http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
http://www.garlic.com/~lynn/2007n.html#35 IBM obsoleting mainframe hardware
http://www.garlic.com/~lynn/2007o.html#37 Each CPU usage
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#72 A question for the Wheelers - Diagnose instruction
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
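a minimal sketch (in c, purely illustrative ... not the actual mvs/excp code path) of what dynamic channel program translation has to do on every excp/svc0, and what excpvr avoids; pin_page() and virt_to_real() are hypothetical stand-ins for the kernel page-fix and address-translation services, the ccw struct is simplified (real s/360 ccws pack a 24-bit data address into 8 bytes), and data areas spanning page boundaries (which need IDALs) are glossed over:

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct ccw {                 /* simplified: real s/360 CCWs pack a    */
    uint8_t  cmd;            /* 24-bit data address into 8 bytes      */
    uint32_t data_addr;
    uint8_t  flags;
    uint16_t count;
};
#define CCW_CHAIN 0x40       /* command chaining: more CCWs follow    */

extern uint32_t virt_to_real(uint32_t vaddr);   /* assumed service */
extern void     pin_page(uint32_t vaddr);       /* assumed service */

/* build a translated copy of a virtual channel program; error
 * handling omitted for brevity */
struct ccw *translate_channel_program(const struct ccw *vprog)
{
    size_t n = 0;
    while (vprog[n++].flags & CCW_CHAIN)        /* count chained CCWs */
        ;
    struct ccw *rprog = malloc(n * sizeof *rprog);
    for (size_t i = 0; i < n; i++) {
        rprog[i] = vprog[i];
        pin_page(vprog[i].data_addr);           /* dynamic page fix   */
        rprog[i].data_addr = virt_to_real(vprog[i].data_addr);
    }
    return rprog;            /* the channel sees only real addresses */
}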
Re: Real storage usage - a quick question
[EMAIL PROTECTED] (Veilleux, Jon L) writes:
> In z/OS 1.8 the memory management is much more conducive to large
> memory. They no longer use the least recently used algorithm and no
> longer check every page. This has made a big difference for us. Under
> 1.7 we had issues with large real memory sizes due to the constant
> checking by RSM. This is no longer the case and we have increased our
> memory dramatically with no performance hit.

one of the things found in the "clock" LRU-approximation that i had originally done as an undergraduate in the 60s
http://www.garlic.com/~lynn/subtopic.html#wsclock
was that if the interval between page reference-bit resets started to exceed some limit, there was little differentiation benefit from the reset activity. least recently used tends to have some implicit dependency on the amount of "history" ... if the duration is too long, it loses much of its correlation and ability to differentiate between pages as to future page reference pattern. however, across a wide range of configurations and workloads in the 70s, "clock" LRU-approximation had the advantage of effectively being able to (usefully) dynamically adapt the interval.

however, with a lot of cp67 experimenting and also heavy use of storage reference traces and page replacement modeling ... it was possible to show that outside some useful operating range, the use of LRU algorithms for differentiating/predicting future page reference behavior became less and less accurate. it was also possible to show that for very large memories, the overhead of repeatedly resetting page reference bits provided less benefit than any possible improvement in page replacement strategy.

we did do some experimenting at the science center attempting to recognize the operating region/environment across which clock LRU-approximation was beneficial ... and to take some secondary measures/strategies when outside that operating region/environment. one of the scenarios was that most LRU-approximation algorithms are measured against how well they perform vis-a-vis a simulation that exactly implements least-recently-used page ordering (measured in terms of total page faults for a given workload and real storage size). "good" approximations tended to come within 5-15 percent (total page faults) of "true" least-recently-used page ordering. we were able to find some page replacement variations that, instead of being 5-15 percent worse (more total page faults than simulated "true" least-recently-used page ordering), showed 5-15 percent fewer total page faults.

the scenario was that in some configuration/workload regions, clock LRU-approximation could effectively cycle thru every page in real storage w/o finding a candidate ... and then take the first page it started with. besides having a lot of processing overhead, this characteristic effectively degrades to FIFO page replacement (there are operating regions for LRU where it can degenerate to FIFO page replacement while at the same time taking an extraordinary amount of processor overhead). our variation tended to recognize when it was operating in this configuration/workload region and effectively switched to RANDOM page replacement at very low processor overhead (and modeling showed that when not able to make any other differentiation between pages to be replaced ... RANDOM replacement makes a better choice than FIFO, independent of the overhead issue).

in fact, the original cp67 delivered at the univ. the last week of jan68 ... also referenced here
http://www.garlic.com/~lynn/2007r.html#74 System 360 EBCDIC vs. ASCII
... effectively implemented something that tended to operate as FIFO replacement, purely in software, and didn't make use of the hardware reference bits. as an undergraduate, i did the kernel algorithm and software changes to implement "clock" LRU-approximation page replacement ... taking advantage of the hardware page reference bits. in this scenario ... with only on the order of 120 real "pageable pages" ... this reduced the time spent in page replacement selection (under relatively heavy load) from approx. 10 percent of total processor to effectively unmeasurable (while at the same time drastically improving the quality of the replacement choice).
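for illustration, a minimal sketch (in c, not the original 360 assembler) of "clock" LRU-approximation page replacement as described above ... a referenced page has its reference bit reset and gets a second chance; the variation mentioned above falls back to RANDOM (rather than degenerating to FIFO) when a full sweep finds every page referenced:

#include <stdbool.h>
#include <stdlib.h>

#define NFRAMES 120              /* cp67 example: ~120 pageable pages */

struct frame {
    bool referenced;             /* hardware page reference bit */
};

static struct frame frames[NFRAMES];
static int hand;                 /* clock hand: next frame to examine */

int select_victim(void)
{
    int examined = 0;
    for (;;) {
        if (!frames[hand].referenced) {
            int victim = hand;   /* not referenced since last sweep */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        frames[hand].referenced = false;   /* reset bit, second chance */
        hand = (hand + 1) % NFRAMES;
        if (++examined == NFRAMES)         /* full cycle, all referenced: */
            return rand() % NFRAMES;       /* RANDOM fallback, not FIFO   */
    }
}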
Re: System 360 EBCDIC vs. ASCII
[EMAIL PROTECTED] (Timothy Sipples) writes:
> An awful lot of modems and serial connections had to handle 7-bit,
> too, complicating the user experience for dial-up access to host
> systems, BBSes, etc. Basically if you set your modem to 7 bits, you
> struggled to transfer binary files (see: Kermit), and PC extensions
> for things like line drawing characters looked like a jumbled mess.
> If you set your modem to 8 bits you usually lost the parity bit, so
> you lost what little error checking you had. And a lot of systems
> still tried to use that high order bit for parity, so you saw a
> jumbled mess on your PC again. Owners of modem dial-up pools
> installed workarounds to try to detect what the end user had set, but
> this was a mess, too. On some systems you wouldn't see anything, so
> you didn't know what to do. (The correct answer: hit Enter a few
> times, or maybe Escape, or) I'm sure AT&T enjoyed some extra
> earnings as dial-up modem users had to call over and over again,
> hoping to get the configuration settings right through trial and
> error, all because of the complications of 7 versus 8 bits. This
> affected all sorts of serial connections, including hardwired ones:
> plotters, ASCII terminals, etc.

when cp67 was installed at the univ the last week of jan68, it had terminal support for 1052s and 2741s ... but the univ. had some number of tty/ascii devices. so one of the modifications to cp67 was to add tty/ascii terminal support. the base cp67 code had some stuff for dynamically determining the terminal type and "switching" the 2702 line scanner using the SAD command. so, to remain consistent, i worked out a process to add tty/ascii terminal support ... preserving the base cp67 dynamic terminal type determination.

the univ. was also getting a dial-up interface ... with a base number that would roll over to the first unused line. the idea being that all terminals could dial in on the same phone number, regardless of type. this "almost" worked ... but it turned out that some short cuts had been taken with the 2702 implementation. the issue was that while the SAD command would switch the line scanner ... the short-cut was that the line-speed oscillator was hard-wired to each port. for hard-wired lines ... the appropriate terminal types were connected to the appropriate 2702 ports with the corresponding line speed wired (and then cp67 could dynamically determine the correct terminal type and switch the line scanner as needed with the SAD command). however, this wouldn't work for dial-up lines with a common dial-in pool ... where any terminal type might get connected to any 2702 port.

so, somewhat because of this, the univ. decided to build our own clone controller that would also be able to perform dynamic line-speed determination. this involved reverse engineering the 360/67 multiplexor channel interface and building a channel interface board for an Interdata/3 minicomputer (the platform for implementing the clone controller). misc. past posts about the clone controller project
http://www.garlic.com/~lynn/subtopic.html#360pcm

i remember two "bugs" from the project. one bug involved "red-lighting" the 360/67. the 360/67 had a high-resolution timer that tic'ed approx. every 13 microseconds. the timer had to update loc. 80 storage when it tic'ed. if the timer tic'ed a 2nd time before the previous tic had been updated in storage (say, because some channel/controller had obtained the storage bus for the period and failed to release it), the timer would force a red-light/machine check.

the other bug was initially getting ascii data into storage ... after running it thru the standard ascii->ebcdic translation table, it was all garbage. we eventually figured out every byte was "bit-reversed" ... i.e. the 2702 line scanner would take the leading bit off the line and store it in the low-order bit position (in a byte), reversing the order of the bits off the line. the interdata/3 started out doing standard ascii, taking the leading bit off the line and storing it in the high-order bit of a byte. so initially, the ascii bytes were getting to 360/67 main memory non-bit-reversed and then being run through the standard 2702 (bit-reversed) ascii->ebcdic translation table.

this project got written up as the four of us being instrumental in starting the clone controller business. of course, the clone controller business was the major motivation for the future system project ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#futuresys
including a few with this reference
http://www.ecole.org/Crisis_and_change_1995_1.htm

from above:

IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible ...

... snip ...
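a small sketch (in c, purely illustrative) of the bit-reversal bug described above ... the 2702 line scanner stored the leading bit off the line into the low-order bit position, so its ascii->ebcdic translation table was indexed by bit-reversed ascii; the interdata/3 initially stored the leading bit into the high-order bit, so its bytes had to be bit-reversed before that table produced anything but garbage:

#include <stdint.h>
#include <stdio.h>

static uint8_t bit_reverse(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        r = (uint8_t)((r << 1) | ((b >> i) & 1));
    return r;
}

int main(void)
{
    uint8_t ascii_A = 0x41;              /* 'A' as the interdata/3 saw it */
    printf("ASCII 'A'    = 0x%02X\n", ascii_A);
    printf("bit-reversed = 0x%02X\n", bit_reverse(ascii_A));  /* 0x82 */
    /* indexing the 2702's bit-reversed table with the non-reversed
     * 0x41 is what produced the "all garbage" translation */
    return 0;
}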
Re: CSA 'above the bar'
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> one of the other issues for TLB (hardware that translates virtual page
> addresses to real page addresses) ... all the entries were
> tagged/associated with specific virtual address spaces
> ... i.e. "STO-associative". This generalized mechanism resulted in a
> huge number of "duplicated" entries CSA/common-segment. So as a
> special case optimization for the whole MVS CSA/common-segment hack
> gorp ... a special option was provided that identified virtual
> addresses as something belonging to common-segment. These areas then
> became associated in the TLB with effectively a system-wide, unique,
> artificial "common-segment" virtual address space (effectively
> violating the whole generalized virtual address space architecture
> ... rather than associated with generalized virtual address space
> ... it became associated with a custom operating system specific
> construct that was known to have very specific characteristics).

re:
http://www.garlic.com/~lynn/2007r.html#67 CSA 'above the bar'

from z/architecture principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/CCONTENTS
segment-table entries
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/3.11.2.2?DT=20040504121320
defining the (MVS-specific) common-segment bit (in the 64bit address segment-table entry) ... aka

Common-Segment Bit (C): Bit 59 controls the use of the translation-lookaside-buffer (TLB) copies of the segment-table entry and of the page table which it designates. A zero identifies a private segment; in this case, the segment-table entry and the page table it designates may be used only in association with the segment-table origin that designates the segment table in which the segment-table entry resides. A one identifies a common segment; in this case, the segment-table entry and the page table it designates may continue to be used for translating addresses corresponding to the segment index, even though a different segment table is specified.

... snip ...

... aka the segment table (and the corresponding segment table origin address, or "STO") is effectively equivalent to a unique virtual address space. since MVS has the common segment(s) appearing in every virtual address space, rather than filling up the TLB with a large number of duplicated entries for the same information, a special class of virtual addresses was effectively created that applies across everything in the system. this ugly common segment gorp then creates all sorts of complications (that weren't part of the original virtual memory architecture) ... see the programming notes regarding common segment operation/problems at the above URL describing segment-table entries.
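a toy illustration (in c, invented field handling ... not real hardware) of the common-segment bit semantics quoted above: a TLB copy of a private segment-table entry may only be used under the segment-table origin (STO) it was formed under, while a "common" entry may be used under any STO:

#include <stdbool.h>
#include <stdint.h>

#define STE_COMMON (UINT64_C(1) << (63 - 59))   /* bit 59, IBM numbering */

struct tlb_entry {
    uint64_t sto;        /* segment-table origin the entry was formed under */
    uint64_t seg_index;  /* segment index of the translated address */
    uint64_t ste;        /* cached segment-table entry */
};

/* may this cached entry be used for a translation under current_sto? */
bool tlb_entry_usable(const struct tlb_entry *e, uint64_t current_sto,
                      uint64_t seg_index)
{
    if (e->seg_index != seg_index)
        return false;
    if (e->ste & STE_COMMON)        /* common segment: any address space */
        return true;
    return e->sto == current_sto;   /* private: only the owning STO */
}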
Re: High order bit in 31/24 bit address
[EMAIL PROTECTED] (Steve Samson) writes:
> As for 32-bit mode (TSS) I don't have a POPS for that architecture but
> I suspect the HO bit is treated as any other. TSS did not use the
> "sign bit" as a signal, just as an address bit.

lots of 360 documents at bitsavers:
http://bitsavers.org/pdf/ibm/360/
including various functional characteristics
http://bitsavers.org/pdf/ibm/360/funcChar/
specifically 360/67 functional characteristics, a27-2719-0
http://bitsavers.org/pdf/ibm/360/funcChar/A27-2719-0_360-67_funcChar.pdf
and ga27-2719-2
http://bitsavers.org/pdf/ibm/360/funcChar/GA27-2719-2_360-67_funcChar.pdf
which has a lot of the gory details.

as somewhat referenced here ... the 360/67 was originally intended for use by tss/360 ... but for a whole variety of reasons, most of them ran cp67 (or ran in straight 360/65 mode with mvt, w/o using the virtual memory hardware)
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
courtesy of the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

in any case, psw format, pg. 15:

bit    meaning
0-3    spare (must be 0)
4      24/32-bit address mode
5      translation control
6      i/o mask (summary)
7      external mask (summary)
8-11   protection key
12     ascii-8 mode
13     machine check mask
14     wait state
15     problem state
16-17  instruction length code
18-19  condition code
20-23  program mask
24-31  spare
32-63  instruction address

there were quite a few of these machines used internally. one of the projects was adding a 370 "virtual machine" option to cp67 ... this was having cp67 simulate the new instructions added to 370 (prior to the announcement of 370 virtual memory). one of the places that deployed a number of these machines was the field/data processing/sales division, for a project called HONE
http://www.garlic.com/~lynn/subtopic.html#hone
for hands-on network environment ... the idea was that, in the wake of the 23jun69 unbundling announcement
http://www.garlic.com/~lynn/subtopic.html#unbundle
SEs in the branch office could get operating system "hands-on" experience with (370) systems running in cp67 (370) virtual machines.

however, the science center had also ported apl\360 to cms for cms\apl and done a lot of work enhancing it to operate in a "large" virtual memory environment (most apl\360 was limited to 16k workspaces, hardly adequate for many real world problems). with cms\apl, there were lots of new (internal) apl-based applications developed (some number of them of the genre that today would be done with spreadsheets) ... including "configurators" ... which basically filled out mainframe system orders for the branch office personnel. as the use of these applications grew on HONE ... they eventually eclipsed the virtual guest "hands-on" training and would consume all available resources. at some point in the 70s, it was not even possible to submit a mainframe order that hadn't been run thru a HONE configurator.

the science center had also done quite a bit of work in the area of sophisticated system performance modeling ... including laying the groundwork for what would become capacity planning. some of this i've commented about with regard to calibrating and validating
http://www.garlic.com/~lynn/subtopic.html#benchmark
the release of my resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
in addition, a flavor of the performance modeling work was also deployed on HONE as the (apl based) "performance predictor". branch office people could submit customer configuration and workload details/characteristics and then ask "what-if" questions of the "performance predictor" ... as to what would happen with configuration and/or workload changes.

another project was doing the cp67 changes to support a full 370 virtual memory implementation. this had a version of cp67 running either in a 360/67 virtual machine (under cp67) or stand-alone on a real 360/67, simulating virtual machines with full 370 virtual memory operation. then there was a custom version of cp67 that believed it ran on 370 virtual memory "hardware" (rather than on 360/67 hardware). this was in regular production use a year before the first engineering 370 machine with virtual memory support was operational (and long before announcement).

past posts in the related thread:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#67 CSA 'above the bar'

misc. past posts mentioning the "performance predictor"
http://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
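for illustration, a small sketch (in c) extracting a few fields per the psw format listed above, treating the psw as a 64-bit doubleword with bit 0 as the most significant bit; the example psw value is made up:

#include <stdint.h>
#include <stdio.h>

#define PSW_BIT(psw, n)  (((psw) >> (63 - (n))) & 1)

int main(void)
{
    uint64_t psw = 0x0800000000A0B0C0ULL;            /* made-up example */

    unsigned mode32  = (unsigned)PSW_BIT(psw, 4);    /* 24/32-bit address mode */
    unsigned xlate   = (unsigned)PSW_BIT(psw, 5);    /* translation control */
    unsigned wait    = (unsigned)PSW_BIT(psw, 14);   /* wait state */
    unsigned problem = (unsigned)PSW_BIT(psw, 15);   /* problem state */
    unsigned key     = (unsigned)((psw >> (63 - 11)) & 0xF);  /* bits 8-11 */
    uint32_t iaddr   = (uint32_t)(psw & 0xFFFFFFFFULL);       /* bits 32-63 */

    printf("32-bit mode=%u translate=%u wait=%u problem=%u key=%u ia=%08X\n",
           mode32, xlate, wait, problem, key, iaddr);
    return 0;
}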
Re: CSA 'above the bar'
[EMAIL PROTECTED] (Van Dalsen, Herbie) writes:
> Someone wants to create a shared block of memory CSA/not and share it
> between programs. My understanding is that a 24-bit program can
> address 24-bit addresses, 31-bit, 64-bit... So in my inexperienced
> mind the 24bit program could never share in the happiness of this
> above the bar heaven of shared storage.

as i mentioned in this post
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
... the way that i originally did the sharing implementation and mmap support
http://www.garlic.com/~lynn/subtopic.html#mmap
was that the same shared object wasn't required to occupy the same virtual address in every virtual address space. however, it could represent a challenge when program images with "relocatable address constants" were involved
http://www.garlic.com/~lynn/subtopic.html#adcon
there would still be an issue of the amount of happiness (available in 24bit mode) ... as opposed to any happiness at all.

it also created a problem for processors that had virtual caches ... i.e. cache lines indexed by virtual address ... resulting in synonyms/duplicates in the cache when the same object was addressed by different virtual addresses. here is old email discussing the dual-index 3090 D-cache
http://www.garlic.com/~lynn/2003j.html#email831118
in this post
http://www.garlic.com/~lynn/2003j.html#42 Flash 10208
other posts about virtual caches
http://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#6 Reasons for the big paradigm switch
http://www.garlic.com/~lynn/2006w.html#17 Cache, TLB, and OS

one of the other issues was for the TLB (the hardware that translates virtual page addresses to real page addresses) ... all the entries were tagged/associated with specific virtual address spaces ... i.e. "STO-associative". this generalized mechanism resulted in a huge number of "duplicated" entries for CSA/common-segment. so, as a special case optimization for the whole MVS CSA/common-segment hack gorp ... a special option was provided that identified virtual addresses as belonging to the common segment. these areas then became associated in the TLB with an effectively system-wide, unique, artificial "common-segment" virtual address space (effectively violating the whole generalized virtual address space architecture ... rather than being associated with a generalized virtual address space ... they became associated with a custom operating system specific construct that was known to have very specific characteristics).

past post in this thread discussing the rise of the whole ugly common segment gorp
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
other posts in this thread
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#65 CSA 'above the bar'
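a small sketch (in c, with invented cache geometry) of the virtual cache synonym problem mentioned above ... if the cache index uses bits above the 4kbyte page offset, the same shared page mapped at two different virtual addresses can land in two different cache sets:

#include <stdint.h>
#include <stdio.h>

#define LINE_BITS 7                  /* 128-byte lines                  */
#define SET_BITS  7                  /* 128 sets -> 14 index bits, i.e. */
                                     /* two bits above the page offset  */
static unsigned cache_set(uint32_t vaddr)
{
    return (vaddr >> LINE_BITS) & ((1u << SET_BITS) - 1);
}

int main(void)
{
    /* the same shared page mapped at two different virtual addresses */
    uint32_t va1 = 0x00100000, va2 = 0x00203000, offset = 0x40;

    printf("set via va1 = %u\n", cache_set(va1 + offset));
    printf("set via va2 = %u\n", cache_set(va2 + offset));
    /* different sets for the same physical data: a synonym/duplicate */
    return 0;
}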
Re: CSA 'above the bar'
[EMAIL PROTECTED] (Ted MacNEIL) writes:
> That's why there can be a 'double paging' penalty for a LINUX (or
> z/OS, or z/VM, or...).
>
> z/VM, and its predecessors, has always had the capability to define
> more storage than is on the box.
>
> It even has swap files.

i had other problems with the os/vs2 group (initially svs, before it morphed into mvs). one was all the stuff about LRU replacement algorithms and what LRU meant. lots of posts on the subject
http://www.garlic.com/~lynn/subtopic.html#wsclock

early on, the pok performance modeling group had discovered that if, on a page fault, the replacement selected "non-changed" pages before "changed" pages ... there wouldn't be the overhead of doing a write before the read. i tried to convince them that it violated fundamental tenets of the LRU replacement paradigm. it wasn't until well into the MVS releases that somebody pointed out that they were selecting high-use, non-changed, system/shared executable pages for replacement before (lower use) private application data pages (which were changed/modified).

another issue isn't just the double paging overhead ... there is the possibility that a virtual guest is running an LRU-like replacement algorithm and selecting the real page holding a low-use virtual page for replacement (to be refreshed with the missing page). VM may also be running an LRU-like replacement algorithm and have noticed (also) that the guest's real page (a virtual machine virtual page) hadn't been recently used, and selected it for replacement. the pathological problem is that the guest may always be deciding it needs one of its real pages (because the corresponding virtual page wasn't being used) moments after VM has decided to remove the corresponding guest virtual machine page from real storage.

aka running a virtual guest's LRU-like replacement algorithm can violate the premise behind LRU replacement ... since the guest's real page that corresponds to the guest's least recently used virtual page has some probability of being the next page that the guest actually decides to use.

misc. past posts in thread:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#64 CSA 'above the bar'
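a toy model (in c; sizes and reference pattern invented) of the double-paging pathology described above ... the guest's LRU victim frame is, by construction, also at the bottom of the host's LRU ordering, so the host has likely just paged it out when the guest goes to reuse it:

#include <stdio.h>

#define GUEST_FRAMES 4            /* guest real storage, in frames */
#define REFS         12

int main(void)
{
    long guest_last[GUEST_FRAMES] = {0};      /* last-use time per frame */
    int  host_resident[GUEST_FRAMES] = {1, 1, 1, 0};  /* host backs 3 of 4 */
    long now = 0;
    int double_faults = 0;

    for (int r = 0; r < REFS; r++) {
        /* guest fault: victim is the guest's least recently used frame */
        int victim = 0;
        for (int f = 1; f < GUEST_FRAMES; f++)
            if (guest_last[f] < guest_last[victim])
                victim = f;

        if (!host_resident[victim]) {         /* host evicted it already */
            double_faults++;
            /* host pages it in; host's own LRU evicts *its* least
             * recently used resident frame ... the same ordering */
            int hv = -1;
            for (int f = 0; f < GUEST_FRAMES; f++)
                if (host_resident[f] &&
                    (hv < 0 || guest_last[f] < guest_last[hv]))
                    hv = f;
            host_resident[hv] = 0;
            host_resident[victim] = 1;
        }
        guest_last[victim] = ++now;           /* guest reuses the frame */
    }
    printf("%d of %d guest replacements also faulted in the host\n",
           double_faults, REFS);
    return 0;
}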
Re: CSA 'above the bar'
[EMAIL PROTECTED] (Binyamin Dissen) writes:
> Does z/VM use virtual storage?

comment in this thread asking how many times virtual memory has been reinvented
http://www.garlic.com/~lynn/2007r.html#51 Translation of IBM Basic Assembler to C?

some footnotes about the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

from Melinda's paper "VM and the VM Community: Past, Present, and Future"
http://www.princeton.edu/~melinda

What was most significant was that the commitment to virtual memory was backed with no successful experience. A system of that period that had implemented virtual memory was the Ferranti Atlas computer, and that was known not to be working well. What was frightening is that nobody who was setting this virtual memory direction at IBM knew why Atlas didn't work.

... snip ...

quoted from L.W. Comeau, "CP-40, the Origin of VM/370", Proceedings of SEAS AM82, September, 1982

and

Creasy had decided to build CP-40 while riding on the MTA. "I launched the effort between Xmas 1964 and year's end, after making the decision while on an MTA bus from Arlington to Cambridge. It was a Tuesday, I believe." (R.J. Creasy, private communication, 1989.)

... snip ...

cp40 was built on a specially modified 360/40 with virtual memory hardware ... implementing virtual machines. this morphed into cp67 when the 360/67, with standard virtual memory hardware, became available.

and as per previous posts in this thread
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'
http://www.garlic.com/~lynn/2007r.html#62 CSA 'above the bar'
the initial hack to mvt for os/vs2, in support of 370 virtual memory, involved borrowing a lot of code from cp67.

lots of the vm370 microcode assists developed during the 70s and early 80s eventually morphed into pr/sm and current day LPARs ... which are basically a stripped down version of the full VM virtual machine function.
Re: CSA 'above the bar'
[EMAIL PROTECTED] (McKown, John) writes:
> Just as a thought. Could somebody write a subsystem which starts at IPL
> time, does the shared GETMAIN, then (here's the rub) somehow have that
> memory automatically added to every address space which starts
> thereafter? I don't know enough about subsystems. I would guess that it
> would be easier for said subsystem to implement a PC so that a "client"
> could request access to the shared GCSA (to coin a phrase for it - G for
> Grande, like the HLASM instructions). The PC would set up all the
> "difficult" parts and return a 64-bit address to the shared memory
> space.

re:
http://www.garlic.com/~lynn/2007r.html#56 CSA 'above the bar'

i had done something similar, but different, in the waning days of cp67, and then ported it to vm370. it was a generalized memmap function that allowed different virtual address spaces to have the same shared memory object at different addresses. vm370 started out with a drastic subset of this function that was cribbed off the virtual "IPL" command.

however, it was dependent on providing r/o sharing of the same object via the "segment protection" feature that was part of the original, base 370 virtual memory architecture. this was one of the features that got dropped when the retrofit of virtual memory hardware to the 370/165 ran into scheduling problems ... six months could be regained in the schedule if several features were dropped (and the favorite son operating system in pok claimed that they didn't find the features really useful). as a result, all the other processors that had already implemented the full 370 virtual memory architecture had to go back and pull the dropped features. it also forced the vm370 group to significantly redo their implementation of how to protect shared segments across multiple different virtual address spaces (effectively falling back to a real kludge that had been used in cp67).

in any case, a drastic subset of my (generalized) memory mapping and sharing implementation was eventually released as something called discontiguous shared segments. lots of past posts mentioning the cms filesystem changes supporting memory mapping (and page mapped operation)
http://www.garlic.com/~lynn/subtopic.html#mmap
and numerous posts discussing the difficulty that the os/360 relocatable adcon convention represented for allowing sharing of the same object in different virtual address spaces at potentially different virtual addresses
http://www.garlic.com/~lynn/subtopic.html#adcon

while tss/360 had numerous other problems, it at least adopted a different convention to address the relocatable address constant issue for a shared, virtual memory environment.
Re: CSA 'above the bar'
Steve Samson <[EMAIL PROTECTED]> writes:
> The discussion suggests that the "dead zone" represented an arbitrary
> decision. However it is absolutely necessary to preserve compatibility
> with programs dating back to OS/360. If a 24-bit or 31-bit address is
> interpreted as or expanded to a 64-bit address and the high-order bit
> happens to be on, that would cast the virtual address into the 2-4
> gigabyte range and unpredictable effects could ensue.
>
> Use of the high-order bit in an address to signal the end of a
> parameter list is common, and no practical means of filtering or
> converting the programs is available.
>
> I think the dead zone is necessary in z/VSE for the same reason.
>
> Other operating systems did not use the high order bit in the same
> way, so there is no need for the dead zone in virtual addresses.
>
> Has this helped to achieve clarity?

the 360/67 had both 24-bit and 32-bit virtual addressing modes ... as well as some other things that didn't reappear until xa. there was some discussion for xa about returning to the 360/67 32-bit mode vis-a-vis using 31-bit ... which would have been in the architecture "redbook" (the discussion i remember was the difference in operation of things like the BXH and BXLE instructions between 31-bit and 32-bit modes). the principles of operation was one of the first major publications done with cms script, in large part because it supported conditionals ... so, depending on the command line, either the whole architecture "redbook" could be printed ... or just the principles of operation subset (w/o all the additional detail). it was called the "redbook" because it was distributed in a 3-ring red binder.

the common segment area started out as the MVS solution to moving subsystems into their own address spaces ... given the pervasive use of pointer-passing APIs (a small sketch of the related parameter list convention follows below). this pointer-passing paradigm was what initially led to the MVS kernel image occupying 8mbytes of every 16mbyte virtual address space (so for applications making kernel calls ... the kernel could directly access the parameter list). however, it created significant problems when subsystems were moved into their own address spaces (as part of morphing os/vs2 svs into os/vs2 mvs). the common segment started out as 1mbyte in every address space ... where applications could squirrel away parameter lists ... and then make the call to the subsystem (passing thru the kernel for the address space switch). the problem was that for the larger installations, the common segment could grow to 5-6 mbytes appearing in every application virtual address space ... which (with the 8mbytes taken by the kernel image) might leave only 2-3mbytes for applications (out of the 16mbytes).

the stop-gap solution in the 3033 time-frame was dual-address space mode (pending access registers, program call, etc.) ... there was still a pass thru the kernel to switch to a called subsystem ... but the called subsystem could reach back into the calling application's virtual address space (w/o being forced to resort to the common segment hack).

the 3033 also introduced a different "above the line" concept. the mismatch between processor thruput and disk thruput was becoming more and more exacerbated. i once put forward the statement that over a period of a decade or so, disk relative system thruput had declined by an order of magnitude (or more) ... aka disk thruput increased by 3-4 times while processor thruput increased by 40-50 times.
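as mentioned above, a small sketch (in c, purely illustrative) of the os/360 variable-length parameter list convention ... an array of 24-bit pointers with the high-order bit of the last word set to mark the end; conventions like this are why the high-order bit couldn't simply be reclaimed as an address bit (and why the 2-4 gigabyte "dead zone" exists):

#include <stdint.h>
#include <stdio.h>

#define VL_BIT 0x80000000u

/* walk a parameter list until the word with the high-order bit set */
void walk_plist(const uint32_t *plist)
{
    for (int i = 0; ; i++) {
        uint32_t addr24 = plist[i] & 0x00FFFFFFu;  /* 24-bit address */
        printf("parm %d -> address 0x%06X\n", i, addr24);
        if (plist[i] & VL_BIT)                     /* last entry     */
            break;
    }
}

int main(void)
{
    uint32_t plist[] = { 0x00001000, 0x00002000, VL_BIT | 0x00003000 };
    walk_plist(plist);
    return 0;
}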
as a result of the declining disk relative system thruput, real storage was more and more being used for caching and/or other mechanisms to compensate. we were starting to see clusters of 4341s decked out w/max storage and max channel and i/o capacity ... matching or beating 3033 thruput at a lower price. one of the 4341 cluster benefits was that there was more aggregate real storage than the 16mbyte limit of the 3033.

the hack was to redefine two (undefined/unused) bits in the page table entry. a standard page table entry had 16 bits, including a 12bit (4k) page number field (allowing addressing of up to 16mbytes of real storage). with the two additional bits, it was possible to address up to 16384 4kbyte pages (up to 64mbytes of real storage) ... but only 16mbytes at a time. in real addressing mode ... it was only possible to address the first 16mbytes, and in virtual addressing mode ... it was only possible to address a specific 16mbytes (but it was possible to have more than 4096 total 4kbyte pages, some of which could reside above the 16mbyte real line).

it was possible to use channel program IDALs to specify real addresses greater than 16mbytes (allowing data to be read/written above the 16mbyte line). however, the actual channel programs were still limited to residing below the 16mbyte line. some of this was masked by the whole channel program translation mechanism that was necessary as part of the move to a virtual memory environment. the original transition for mvt was hacking in a little bit of support for a single virtual address space ...
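the arithmetic of the hack, worked as a small sketch (in c; the exact pte bit positions here are illustrative): a 12-bit frame number covers 4096 x 4kbytes = 16mbytes, and pressing two unused pte bits into service as high-order frame bits covers 16384 x 4kbytes = 64mbytes, even though a 24-bit real address can still only name the first 16mbytes:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                       /* 4kbyte pages */

/* hypothetical layout: low 12 pte bits are the frame number, two
 * spare bits supply frame bits 12-13 */
static uint32_t real_address(uint16_t pte, uint16_t extra2, uint32_t offset)
{
    uint32_t frame = (uint32_t)(pte & 0x0FFF) | ((uint32_t)(extra2 & 3) << 12);
    return (frame << PAGE_SHIFT) | (offset & 0x0FFF);
}

int main(void)
{
    printf("max with 12 frame bits: %u MB\n",
           (4096u << PAGE_SHIFT) >> 20);                /* 16 MB */
    printf("max with 14 frame bits: %u MB\n",
           (16384u << PAGE_SHIFT) >> 20);               /* 64 MB */
    printf("example frame 0x3FFF -> real 0x%08X\n",
           real_address(0x0FFF, 3, 0x123));             /* above 16MB */
    return 0;
}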
Re: IBM System/3 & 3277-1
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> 3277 had quite a bit of local intelligence ... it was possible to do
> some custom stuff in the terminal that changed the repeat start-delay
> and repeat ... as well as adding fifo to handle keyboard locking up if
> you happen to be typing when the system went to (re)write something on
> the screen. the move to 3274 controller for 3278/3279/etc terminals ...
> moved all that intelligence back into the controller ... reducing amount
> of electronics and manufacturing costs. with electronics moved back into
> controller ... it also degraded performance and response.

re:
http://www.garlic.com/~lynn/2007r.html#7 IBM System/3 & 3277-1
http://www.garlic.com/~lynn/2007r.html#8 IBM System/3 & 3277-1

somebody picking around in some of the referenced old postings sent private email asking about the reference to ANR download being 2-3 times faster than DCA download ... and what ANR was ... other than APPN "Automatic Network Routing". ANR was 3272/3277 ... vis-a-vis DCA, 3274/3278-9. in addition to DCA having slower human (real terminal) response ... because so much of the electronics had been moved back into the controller, it also affected later terminal emulation download thruput. a quick search engine query for 3277 & anr turns up
http://www.classiccmp.org/pipermail/cctech/2007-September/084640.html

misc. past posts mentioning terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

as client/server started to proliferate ... the communication group made various attempts (like SAA) to protect their terminal emulation install base. when we came up with 3tier/multi-tier architecture ... we took lots of heat from the sna and saa forces. misc. posts mentioning coming up with multi-tier networking architecture
http://www.garlic.com/~lynn/subnetwork.html#3tier

for other drift ... APPN started out as AWP164. for a time, the person responsible and i reported to the same executive. i would periodically chide him that the communication group didn't appreciate what he was doing and that he should instead work on real networking (like tcp/ip). in fact, the communication group non-concurred with announcing APPN. after some delay and escalation, the announcement letter was carefully rewritten to not state any connection between APPN and SNA.

of course we were also running the hsdt project ... misc. posts
http://www.garlic.com/~lynn/subnetwork.html#hsdt
and a recent post illustrating the gap between what we were doing and what the communication group was doing
http://www.garlic.com/~lynn/2007p.html#64

part of the issue was that in the early days of SNA ... my wife had co-authored AWP39 ... peer-to-peer networking ... which the communication group possibly viewed as competitive with their communication activity. she was then con'ed into going to pok to be in charge of loosely-coupled architecture and was frequently battling with the SNA forces over SNA not being appropriate for loosely-coupled operation. she came up with the peer-coupled shared data architecture ... which didn't see a lot of uptake until sysplex ... except for IMS hot-standby ... misc. past references
http://www.garlic.com/~lynn/subtopic.html#shareddata

recent posts mentioning AWP39
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. "Server" (Was Just another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
http://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
http://www.garlic.com/~lynn/2007p.html#12 JES2 or JES3, Which one is older?
http://www.garlic.com/~lynn/2007p.html#23 Newsweek article--baby boomers and computers
http://www.garlic.com/~lynn/2007q.html#46 Are there tasks that don't play by WLM's rules
Re: IBM System/3 & 3277-1
"Rostyslaw J. Lewyckyj" <[EMAIL PROTECTED]> writes:
> If memory hasn't failed me, we read mark sense cards on something that
> was called a 1230. We didn't have one in the computing center. It was
> in a separate laboratory somewhere in the School of Education.
> We sent the decks over there. I don't remember what we got back.
> I think the 1230 may have punched the marked card.

re:
http://www.garlic.com/~lynn/2007q.html#71 IBM System/3 & 3277-1
http://www.garlic.com/~lynn/2007r.html#2 IBM System/3 & 3277-1

the wiki mark sense page
http://en.wikipedia.org/wiki/Mark_sense
mentions that the 513, 514, 557, and 519 could handle mark sense. it also has a pointer to the 805 test scoring machine.

the 513 & 514 reproducing punches could handle mark sense ... so it is possible that a 513/514 had preprocessed the mark sense student registration cards ... and the 2540 was only processing the reproduced punch cards (and i was just not paying that much attention). the wiki reference also has a url for the 513/514 (pdf) reference manual.
Re: IBM System/3 & 3277-1
bbreynolds <[EMAIL PROTECTED]> writes:
> This thread started about the 3277-001 used on a System/3 Model 15
> (would that be a 5415?): as 3277's relied on the 3271/3272/3275 for
> the major portion of their intelligence, I would assume that there
> would have had to been some pretty substantial hardware in the
> System/3 to make the 3277-001 believe it was attached to a
> controller. I can't think how the functions would be split out on a
> 3277 not on a controller; unless the 3277-001 was "gutted". Any hint
> if a cable other than a simple coax connected the 3277 to the CPU?

the 3277 had quite a bit of local intelligence ... it was possible to do some custom stuff in the terminal that changed the repeat start-delay and repeat rate ... as well as adding a fifo to handle the keyboard locking up if you happened to be typing when the system went to (re)write something on the screen. the move to the 3274 controller for the 3278/3279/etc terminals moved all that intelligence back into the controller ... reducing the amount of electronics and the manufacturing costs. with the electronics moved back into the controller ... it also degraded performance and response. several of us complained about it ... but were told that the 327x terminals were targeted at the data entry market and didn't have the requirements for interactive response and human factors that would be needed for something like interactive computing.

as seen in some of the referenced performance comparisons ... say
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
... it was much more difficult to achieve subsecond response with 3274/3278 vis-a-vis 3272/3277. however, for mvs/tso, with system response already on the order of a second (or much worse) ... it was a pretty negligible consideration. heavily loaded vm/cms systems, on the other hand, tended to be more on the order of a quarter second (or less; one system i had care&feeding of was on the order of .11 seconds 90th-percentile trivial interactive response under heavy load).

past posts mentioning some (hardware) fixes to the 3277 ... and not being able to do anything similar with the later 3278/3279, because even that bit of electronics had been moved back into the controller (and/or some other 3272/3277 issues vis-a-vis 3274/3278):
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
http://www.garlic.com/~lynn/99.html#28 IBM S/360
http://www.garlic.com/~lynn/99.html#69 System/1 ?
http://www.garlic.com/~lynn/99.html#193 Back to the original mainframe model?
http://www.garlic.com/~lynn/99.html#239 IBM UC info
http://www.garlic.com/~lynn/2000c.html#63 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#66 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#67 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000g.html#23 IBM's mess
http://www.garlic.com/~lynn/2001b.html#12 Now early Arpanet security
http://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
http://www.garlic.com/~lynn/2001i.html#51 DARPA was: Short Watson Biography
http://www.garlic.com/~lynn/2001k.html#30 3270 protocol
http://www.garlic.com/~lynn/2001k.html#33 3270 protocol
http://www.garlic.com/~lynn/2001k.html#44 3270 protocol
http://www.garlic.com/~lynn/2001k.html#46 3270 protocol
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2001m.html#17 3270 protocol
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
http://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
http://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#48 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#50 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002j.html#67 Total Computing Power
http://www.garlic.com/~lynn/2002j.html#74 Itanium2 power limited?
http://www.garlic.com/~lynn/2002j.html#77 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002k.html#2 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002m.html#24 Original K & R C Compilers
http://www.garlic.com/~lynn/2002p.html#29 Vector display systems
http://www.garlic.com/~lynn/2002q.html#51 windows office xp
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
Re: IBM System/3 & 3277-1
[EMAIL PROTECTED] writes:
> What I don't understand is pre sorting a deck that will be used as
> input to the computer--couldn't the computer sort it faster than a
> person could? The machine sorted strictly sequentially, while the
> computer had bubble or shell sorts that were more efficient. maybe
> tape sorting was slow, but disk sorting should've been fast. If the
> machine had some core ie 128 k, then plenty of work could be done
> within the CPU at very high speed.

a simple example would be student fortran jobs. the "master" of the program is the individual student's card deck. the student has access to only fortran compile & execute capability ... and compile was one pass of the input card deck. when i started, the univ. had a 1401 that was used as a unit-record front-end to a 709. the card decks (potentially multiple student jobs) would be collected in a card tray. when the tray approached full (or every couple hrs), the tray of cards would be read by the 1401 and transferred to tape. the tape would be carried to a 709 tape drive and processed (sequentially, each job compiled and executed) with output going to another tape. when processing finished, the output tape would be moved to the 1401 and the results printed. the operator would take the printed, fan-fold output, "burst" it ... i.e. tear it into individual jobs, match the bursted print output with the corresponding original card deck, wrap the bursted print output around the input card deck (with a rubber band) and place it in the output bin for student pickup.

there were some administrative jobs that used sort ... but those frequently had trays and trays of cards ... written to tape ... and then a multiple tape sort (with intermediate tape files) that ran for an extended period of time.

i did write part of an application that was used for class registration. the 2540 could not only read "holes" ... but also had the capability of reading "sense-marked" cards (i.e. no. 2 pencil marks in little boxes on cards). the 2540 had two feeds from the sides with five card stackers in the middle. one side read cards and could select two of the read-side stackers or the middle stacker; the other side punched cards and could select two of the punch-side stackers or the middle stacker. class registration had all these sense-marked cards ... which would be read and placed in the middle stacker. if the processing found some problem with a card ... a blank card from the punch side would be punched and placed behind the recently read sense-marked card (the one with the problem ... before the next card would be read/processed).

standard processing had an operator removing cards from the stacker and placing them in card trays. all of the class registration sense-marked cards were plain manilla. the "punch" side was loaded with cards that had a yellow (or sometimes red) band across the top. once all the class registration cards were processed ... there would be multiple trays ... sporadically sprinkled with yellow top-edge cards ... clearly identifying the registration cards with some kind of problem.

q&d conversion of gcard ios3270 to html
http://www.garlic.com/~lynn/gcard.html
reader/punch channel program command codes
http://www.garlic.com/~lynn/gcard.html#23

system/360 model 30 machine room; the 2540 is seen in the middle, in front of the tape drives and partly obscured by a 2311 disk drive. the "card reader" (feed) is on the right and the punch is on the left; the five output stackers are in the center
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2030.html
system/360 model 40 machine room; the 2540 is in the upper middle
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP2040.html
better picture of a 2540, on the right, with somebody loading a deck of cards to be read
http://www.cs.ncl.ac.uk/events/anniversaries/40th/images/ibm360_672/slide19.jpg
Re: IBM System/3 & 3277-1
Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> field/col definition for 12-2-9 TXT card:
>
> col
> 1      12-2-9 / x'02'
> 2-4    TXT
> 5      blank
> 6-8    relative address of first instruction on record
> 9-10   blank
> 11-12  byte count ... number of bytes in information field
> 15-16  ESDID
> 17-72  56-byte information field
> 73-80  deck id, sequence number, or both
>
> cols. 2-4 and 73-80 were character ... the other fields were hex.

re:
http://www.garlic.com/~lynn/2007q.html#69 IBM System/3 & 3277-1

"txt" card decks were the nearly-executable output from assemblers and compilers. more information about the format of the other cards in a txt card deck
http://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)

before i learned about "rep" cards, i would duplicate a "TXT" card, multi-punching the patch/fix into the duplicated card. keypunches just had keys for punching character information; if you were dealing with hex ... for which there was no equivalent character ... it was necessary to "multi-punch" to get the correct holes punched. for hex, it was necessary to read the holes ... since even if the card had been "interpreted" ... there were no corresponding character symbols for the majority of the hex codes.

my process was to fan the txt card deck ... reading the holes in cols 6-8 (the displacement address in the program of the data punched on the specific card) ... looking for the card corresponding to the data i needed to patch. i would then take that card and duplicate it out to the cols that needed to be "fixed" ... multi-punch the corrections (into the duplicate/new card) and then resume duplicating the remainder of the card.

misc past posts mentioning multi-punch
http://www.garlic.com/~lynn/93.html#17 unit record & other controllers
http://www.garlic.com/~lynn/2000f.html#75 Florida is in a 30 year flashback!
http://www.garlic.com/~lynn/2001b.html#26 HELP
http://www.garlic.com/~lynn/2001b.html#27 HELP
http://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#28 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
http://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2005c.html#54 12-2-9 REP & 47F0
http://www.garlic.com/~lynn/2006c.html#17 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
http://www.garlic.com/~lynn/2006g.html#58 REP cards
http://www.garlic.com/~lynn/2006l.html#64 Large Computer Rescue
http://www.garlic.com/~lynn/2007d.html#51 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#78 What happened to the Teletype Corporation?
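a minimal sketch (in c) of parsing the 12-2-9 TXT card layout listed above, given an 80-byte card image; note that on a real deck the character columns are ebcdic (TXT is x'E3E7E3') ... ascii is used here for readability, and this is an illustration, not a real object-deck loader:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct txt_card {
    uint32_t addr;          /* cols 6-8: relative load address         */
    uint16_t count;         /* cols 11-12: bytes in information field  */
    uint16_t esdid;         /* cols 15-16                              */
    const uint8_t *info;    /* cols 17-72: up to 56 data bytes         */
};

int parse_txt(const uint8_t card[80], struct txt_card *out)
{
    if (card[0] != 0x02 || memcmp(&card[1], "TXT", 3) != 0)
        return -1;                              /* not a TXT card */
    out->addr  = ((uint32_t)card[5] << 16) |    /* cols 6-8 -> bytes 5-7 */
                 ((uint32_t)card[6] << 8)  | card[7];
    out->count = (uint16_t)((card[10] << 8) | card[11]);  /* cols 11-12 */
    out->esdid = (uint16_t)((card[13] << 8) | card[14]);  /* cols 15-16 */
    out->info  = &card[16];                               /* cols 17-72 */
    return 0;
}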
Re: IBM System/3 & 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:
> I could read ASCII from a paper tape. Took me a while. :-)

previous post in this thread:
http://www.garlic.com/~lynn/2007q.html#48 IBM System/3 & 3277-1

i eventually learned to read 12-2-9 (i.e. the card punch holes for hex "02") "txt" text deck cards ... as part of multi-punching/duplicating cards and punching patches ... i had a 2000 card assembler program and it was frequently faster to multi-punch fixes (into a duplicate/new card) than to reassemble the program (which could take 30-60 minutes elapsed time ... this was on a 360/30 under os/360 release 6 ... i had the dedicated university machine room on weekends for 48hr stretches). basically i had to not only be able to read storage dumps and the equivalence between hexcode and things like instructions and/or addresses ... but also the same information on cards in "punch hole" representation.

field/col definition for 12-2-9 TXT card:

col
1      12-2-9 / x'02'
2-4    TXT
5      blank
6-8    relative address of first instruction on record
9-10   blank
11-12  byte count ... number of bytes in information field
15-16  ESDID
17-72  56-byte information field
73-80  deck id, sequence number, or both

cols. 2-4 and 73-80 were character ... the other fields were hex.

q&d conversion of gcard ios3270 to html
http://www.garlic.com/~lynn/gcard.html
but it lacks the card punch hole equivalence for hex (on the real green card). here is an actual scan of a 360 green card ... front & back (11mb)
http://weblog.ceicher.com/archives/IBM360greencard.pdf
from:
http://weblog.ceicher.com/archives/2006/12/ibm_system360_green_card.html

the following table is from
http://www.cs.uiowa.edu/~jones/cards/codes.html
giving the equivalence between card punch codes, hexadecimal values, and ebcdic:

[the 16x16 code chart did not survive reformatting intact; in the original, hex values run down the left and across the top, punch-hole combinations down the right and across the bottom ... e.g. "SP" at x'40', "&" at x'50', "-" at x'60', lowercase a-z in the x'81'-x'A9' range, uppercase A-Z in the x'C1'-x'E9' range, and the digits 0-9 at x'F0'-x'F9'.]

and card punch format ... card rows are numbered 12, 11, 0-9 from the top:

[the accompanying punch-hole chart (characters across the top, rows 12, 11, 0-9 down the side, an "O" marking each punched hole) was also garbled in transit and is omitted.]
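for the alphanumerics at least, the punch/character correspondence is regular enough to generate rather than memorize; a toy sketch under that assumption (the special characters, and card type codes like the 12-2-9, need the full multi-punch table from the URL above):

  # toy sketch of the standard punch code for alphanumerics: digits are a
  # single punch in their own row; A-I add a 12-zone punch, J-R an
  # 11-zone punch, S-Z a 0-zone punch. special characters (and card type
  # codes like the 12-2-9 that names these decks) use other multi-punch
  # combinations not covered here.

  def punch_rows(ch: str):
      """rows punched for one column; 12 and 11 are the zone rows."""
      if ch == ' ':
          return []                            # blank column, no punches
      if ch.isdigit():
          return [int(ch)]                     # single punch in digit row
      if 'A' <= ch <= 'I':
          return [12, ord(ch) - ord('A') + 1]  # A=12-1 ... I=12-9
      if 'J' <= ch <= 'R':
          return [11, ord(ch) - ord('J') + 1]  # J=11-1 ... R=11-9
      if 'S' <= ch <= 'Z':
          return [0, ord(ch) - ord('S') + 2]   # S=0-2 ... Z=0-9
      raise ValueError("needs the full multi-punch table")

  # e.g. the "TXT" in cols 2-4 reads as 0-3, 0-7, 0-3 ... which is what
  # you look for when fanning a deck for txt cards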
Re: IBM System/3 & 3277-1
The following message is a courtesy copy of an article that has been posted to comp.sys.ibm.sys3x.misc,alt.folklore.computers,bit.listserv.ibm-main as well.

[EMAIL PROTECTED] writes:
> (For the AS/400 I never could figure the internal code architecture,
> IBM used something called "LIC" that was rather vague. I once tried
> to get an optional machine language listing of my application program
> compilation but it was very confusing. I believe IBM used a multi-
> layered approach for AS/400 internals, remnants of its "Future System"
> effort. I was not a big AS/400 fan, except for a file-aid tool that
> was better than mainframe tools.)

one of the things that the as/400 layered approach bought was that it could move from a CISC chip to a (power/pc) RISC chip w/o a lot of trouble.

the future system project was going to replace 360/370 in the early-to-mid 70s ... when the project was eventually canceled, there was a big effort to make up for the lost time resulting from the future system distraction
http://www.garlic.com/~lynn/subtopic.html#futuresys
attempting to get stuff back into the 370 (hardware & software) product pipelines ... the crash program for 303x was part of that. part of the analysis "killing" the project was that if a "future system" machine was built from the fastest hardware then available (370/195), it would have the thruput of a 370/145. the folklore is that some of the future system participants regrouped in rochester, coming out with the s/38 (which didn't have nearly the same thruput requirements).

i've periodically commented that some characteristics of the 801 risc activities in the 70s went to the exact opposite extreme of what went on in future system. an early, big push for 801/risc was an effort to replace the multitude of corporate internal microprocessors with common risc architecture chips (every low-to-mid range 370 was implemented with microcode on its own unique microprocessor; likewise controllers and other kinds of microprocessors). one of these was going to be the s/38 followon, as/400. the common 801/risc microprocessor effort ran into all sorts of problems and eventually died off ... at which time, as/400 had a crash project to design a new CISC processor.

misc. past 801, romp, rios, fort knox, power, power/pc, somerset, etc postings
http://www.garlic.com/~lynn/subtopic.html#801
as well as some old email from the period
http://www.garlic.com/~lynn/lhwemail.html#801

effectively the effort was revisited when rochester began the move of as/400 from its CISC chip to its current use of an 801/RISC chip.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Are there tasks that don't play by WLM's rules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> I seem to remember this as TCP/IP version 3.2 with 3.3 having the fixes
> for optimization. Weren't there twin stacks being managed or some such
> thing. I'm not too TCP/IP literate. We had this original version
> implemented because I remember doing a pre/post resource impact analysis
> finding additional CPU, significant in relation to prior usage, in use by
> TCP/IP.

re:
http://www.garlic.com/~lynn/2007q.html#45 Are there tasks that don't play by WLM's rules

as per the previous post ... there was the vs/pascal implementation ported from vm ... with a "diagnose" instruction simulation done in os ... and then the vtam-based implementation (that started out only being "correct" if it had lower thruput than lu6.2).

part of the base code's poor thruput (and high processor consumption) was that (only) a channel-attached LAN "bridge" box was being supported ... rather than a native channel-attached tcp/ip "router" box. in the LAN "bridge" scenario ... the mainframe host code not only had to do the ip-header gorp ... but also had to do the lan/mac header overhead before passing the packet to the channel for processing by the "bridge" box. part of the rfc 1044 three orders of magnitude improvement
http://www.garlic.com/~lynn/subnetwork.html#1044
was having a real channel-attached tcp/ip router box ... eliminating the mainframe host code having to also provide the lan/mac header overhead processing (needed by a lan/mac "bridge" box).

part of this possibly was the whole focus on the sna communication paradigm (the old joke that it wasn't a system, wasn't a network, and wasn't an architecture) ... where vtam provided the communication addressing (and didn't have the concept of networking). in the early days of sna ... my wife had co-authored "AWP39" for peer-to-peer networking architecture ... which was possibly viewed as somewhat in competition with sna. part of the issue is that in most of the industry, networking is peer-to-peer ... it was only because sna had co-opted the term "networking" to apply to communication ... that it was necessary to qualify "networking" with "peer-to-peer".

this was possibly also why she got con'ed into going to pok to be in charge of loosely-coupled architecture. while there she also created "peer-to-peer shared data" architecture ... which, except for ims hot-standby, didn't see a lot of uptake until sysplex. misc past posts
http://www.garlic.com/~lynn/subtopic.html#shareddata

for other archeological trivia ... APPN was originally "AWP164". misc. past posts mentioning AWP39:
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006k.html#9 Arpa address
http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#28 Assembler question
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. "Server" (Was Just another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
http://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
http://www.garlic.com/~lynn/2007p.html#12 JES2 or JES3, Which one is older?
http://www.garlic.com/~lynn/2007p.html#23 Newsweek article--baby boomers and computers

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
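the extra per-packet work in the "bridge" scenario described above is just MAC-frame encapsulation done in host code; a hypothetical illustration (field layout only, not any of the actual products):

  import struct

  # hypothetical illustration of the bridge-vs-router point above: with a
  # channel-attached LAN "bridge", the host wraps every outbound IP
  # datagram in a LAN/MAC frame header itself; with a channel-attached
  # tcp/ip "router", it hands over the bare datagram and the box does
  # the mac-layer work.

  def frame_for_bridge(ip_datagram: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
      ETHERTYPE_IP = 0x0800
      mac_header = struct.pack('!6s6sH', dst_mac, src_mac, ETHERTYPE_IP)
      return mac_header + ip_datagram   # host pays this on every packet

  def frame_for_router(ip_datagram: bytes) -> bytes:
      return ip_datagram                # mac-layer overhead offloaded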
Re: Are there tasks that don't play by WLM's rules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Ted MacNEIL) writes:
> It's not just z/OS UNIX.
> The first implementation of TCP/IP on OS/390 was a port from VM.
> And, it was a pig until they decided to re-implement by starting from
> scratch using z/OS UNIX (circa 2.7).

there were two issues ... the base was implemented in vs/pascal; on a 3090 (under vm) it got about 44kbytes/sec thruput and consumed nearly a whole 3090 processor. i did the support for rfc 1044
http://www.garlic.com/~lynn/subnetwork.html#1044
and in some tuning tests at cray research ... got 1mbyte/sec (channel media) thruput between a 4341 clone and a cray machine ... using only a very modest amount of the 4341 ... about 25 times the bytes moved for maybe 1/30th the pathlength ... say nearly three orders of magnitude improvement in bytes/mip thruput.

the initial port to os ... kept the base vm tcp/ip code unchanged and implemented a cut-down vm emulation underneath (just enuf to run the tcp/ip code) ... which further aggravated the poor tcp/ip thruput.

there was then a tcp/ip implementation done "in vtam" that had been outsourced to a subcontractor. the folklore is that the initial version delivered had tcp with higher thruput than lu6.2 ... and the subcontractor was told that everybody knows lu6.2 has much higher thruput (than tcp/ip), therefore the tcp/ip implementation must be incorrect ... and only a "correct" implementation was going to be accepted.

misc. past references to the folklore about the vtam-based implementation of tcp/ip:
http://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
http://www.garlic.com/~lynn/2000c.html#58 Disincentives for MVS & future of MVS systems programmers
http://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002q.html#27 Beyond 8+3
http://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004e.html#35 The attack of the killer mainframes
http://www.garlic.com/~lynn/2005h.html#43 Systems Programming for 8 Year-olds
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005r.html#2 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006f.html#13 Barbaras (mini-)rant
http://www.garlic.com/~lynn/2006l.html#53 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006w.html#29 Descriptive term for reentrant program that nonetheless is
http://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator

i had a project i called hsdt (high-speed data transport)
http://www.garlic.com/~lynn/subnetwork.html#hsdt
that would periodically run into contention with the communication group. among other things, it had a deployed backbone connected to the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
that had T1 (and higher speed) terrestrial and satellite links. recent post
http://www.garlic.com/~lynn/2007p.html#64
mentioning a business trip to the far east to visit a company that we were buying some hardware from.
the friday before we left, somebody in raleigh had announced a new internal discussion group that was to use the following terminology references:

    low-speed           <9.6kbits
    medium-speed        19.2kbits
    high-speed          56kbits
    very high-speed     1.5mbits

on the wall of a conference room, the following monday, on the other side of the pacific:

    low-speed           <20mbits
    medium-speed        100mbits
    high-speed          200-300mbits
    very high-speed     >600mbits

we had also been doing some work with NSF and various universities leading up to what was to be the NSFNET backbone ... aka tcp/ip is the technology basis for the modern internet, the nsfnet backbone is the operational basis for the modern internet, and CIX is the business basis for the modern internet. some old email references from that period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
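the "three orders of magnitude" figure in the previous post follows directly from the numbers quoted there; a quick back-of-the-envelope check:

  # back-of-the-envelope check of the rfc 1044 comparison quoted in the
  # previous post: ~44kbytes/sec consuming nearly a whole 3090 processor,
  # vs ~1mbyte/sec on a very modest slice of a 4341
  base_rate  = 44 * 1024           # bytes/sec, base implementation
  tuned_rate = 1024 * 1024         # bytes/sec, rfc 1044 tuning tests

  bytes_ratio = tuned_rate / base_rate   # ~23x the bytes moved (the post
                                         # rounds this to 25x)
  cpu_ratio   = 30                       # ~1/30th the pathlength
  print(bytes_ratio * cpu_ratio)         # ~700x bytes moved per unit of
                                         # processor ... i.e. close to
                                         # three orders of magnitude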
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes:
> The first operational 370 hardware supporting virtual memory was a
> 370/145 engineering processor. However, cp67h with cp67i running in a
> 370 virtual machine was in regular operation a year before the 370/145
> engineering box was operational. In fact, cp67i system was used as
> initial software brought up on the 370/145 engineering box.

re:
http://www.garlic.com/~lynn/2007p.html#74 GETMAIN/FREEMAIN and virtual storage backing up

for additional topic drift, another internal project that drew on some of the cp67h activity was the inception of the internal HONE project. lots of past posts mentioning HONE (and/or APL)
http://www.garlic.com/~lynn/subtopic.html#hone

this was at least partially motivated by the 23jun69 unbundling announcement ... a little topic drift here
http://www.garlic.com/~lynn/2007q.html#13 Does software life begin at 40? IBM updates IMS database
http://www.garlic.com/~lynn/2007q.html#14 Does software life begin at 40? IBM updates IMS database
misc. other posts mentioning unbundling and starting to charge for application software
http://www.garlic.com/~lynn/subtopic.html#unbundle

the other aspect of unbundling was that it also started to charge for SE time/services. prior to that, (young/new) SEs picked up a lot of their experience via "on the job training" ... working with more experienced SEs on the customer machine. with unbundling and charging customers for SE services/time, this "hands-on" learning experience evaporated. somewhat as a substitute, HONE (Hands-On Network Experience) was created ... a number of 360/67s running a clone of the science center's
http://www.garlic.com/~lynn/subtopic.html#545tech
cp67 system were installed around the country. the idea was that SEs (at branch offices) could pick up ("hands-on") experience running/testing operating systems remotely in the HONE cp67 virtual machines. for slightly other topic drift ... this recent post
http://www.garlic.com/~lynn/2007q.html#22 Enterprise: Accelerating the Progress of Linux

When the initial 370 was announced, virtual memory still wasn't available ... but there were a few new instructions ... and the operating systems were updated to make use of the new instructions. that is somewhat where a subset of the "cp67h" enhancements came into play (at HONE) ... it was possible to run the latest (370) operating systems in cp67 virtual machines ... with the cp67 kernel simulating the latest, new 370 instructions.

Another activity by the science center effectively resulted in the direction of HONE completely changing. The science center had also done a port of apl\360 to cms as cms\apl. Among other things ... APL "work spaces" could now be 16mbytes ... instead of the 16kbytes-32kbytes typical of apl\360 ... and an API for operating system functions was added (things like being able to do file i/o). This allowed APL to start being used for real-world applications (instead of the toy demos that were frequently the result of the 16k limitation). In this period, APL was frequently used for lots of things that spreadsheets are used for today. Quite a few APL applications (like configurators) in support of sales and marketing were deployed on HONE ... and over time these started to consume all available HONE processing ... and the original use for SE "hands-on" withered and disappeared.
After vm370 became available, HONE upgraded from cp67 to vm370 (and HONE clones started to sprout up around the world). Also by the mid-70s, it was no longer possible for computing system orders to be submitted w/o first having been processed through some number of HONE APL applications (like configurators).

other posts in this thread:
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#73 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007q.html#8 GETMAIN/FREEMAIN and virtual storage backing up

In the 70s, the various HONE datacenters were consolidated in cal. with possibly the largest "single system image" operation. This involved quite a few operational and functional enhancements to vm370 supporting load-balancing and fall-over ... that allowed a large number of loosely-coupled (tightly-coupled multiprocessor) machines to effectively operate as a single large timesharing service (in part driven by the significant processing requirements because of using APL) ... somewhat reminiscent of some modern day advanced operations. Then because of business continuity considerations, the california datacenter was replicated first in Dallas, and then a 3rd in Boulder (supporting load-balancing and fall-over across the geographically distributed datacenters).
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Hunkeler Peter , KIUK 3) writes:
> Fixed storage is not only to support disabled users but much more often
> used in the ubiquitous I/O processing. The channel subsystem (the I/O
> part of System z hardware) does not use DAT. Channel commands transfer
> data blocks from and to real storage to and from I/O devices,
> resp. Before the I/O can be initiated, MVS's I/O supervisor code has
> to make sure the virtual storage allocated for the I/O buffers is not
> being paged out while the channel subsystem is working on the I/O
> request. Therefore, the pages will be fixed before the I/O supervisor
> passes the I/O request to the channel subsystem.

this was part of the technology that was borrowed from cp67 in the original os/vs2 work ... discussed earlier in this thread
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up

one of the uses for "fixed" storage was allowing applications to build channel programs with the (previously fixed) real storage addresses ... the application channel program could then be directly executed ... w/o requiring the supervisor to scan it, building a shadow/duplicate channel program with the "real" addresses.

for instance, look up the various discussions of EXCPVR compared to EXCP ... this redbook has some discussion of the differences between EXCPVR and EXCP (although most of the discussion is about support for using storage above the 2GB line)
http://www.redbooks.ibm.com/abstracts/SG245976.html

from 2.10.3 Using EXCP and EXCPVR:

Programs using EXCPVR have the responsibility to page fix all I/O areas and build real channel programs.

... snip ...

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
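a minimal sketch of the translate/fix step being described above (illustrative structures and helpers, not the actual IOS or CCWTRANS internals):

  # illustrative sketch (not actual IOS code): with EXCP, the channel
  # program arrives written in virtual addresses, so the supervisor must
  # pin each page and build a shadow copy whose CCWs carry real
  # addresses. EXCPVR callers have already page-fixed their buffers and
  # present real addresses, so this whole scan/copy is skipped.

  PAGE = 4096

  def translate_channel_program(ccws, pin_page, virt_to_real):
      """ccws: list of (opcode, virt_addr, count) tuples (schematic).
      pin_page(n) fixes virtual page n; virt_to_real maps an address."""
      shadow, pinned = [], []
      for op, vaddr, count in ccws:
          first, last = vaddr // PAGE, (vaddr + count - 1) // PAGE
          for vpage in range(first, last + 1):
              pin_page(vpage)           # must stay put while the channel runs
              pinned.append(vpage)
          shadow.append((op, virt_to_real(vaddr), count))
      return shadow, pinned             # pinned list drives the UNTRANS-style
                                        # unpin pass after i/o completion

  # a real translation also has to split any CCW whose buffer crosses a
  # page boundary into data-chained CCWs, since contiguous virtual pages
  # need not be contiguous in real storage; omitted here for brevity.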
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

"Bill Ogden" <[EMAIL PROTECTED]> writes:
> The statements about the 360/67 are correct. It was a little ahead of
> its time in several ways. The 67's DAT design was a bit different than
> the later S/370 DAT that was used by MVS, and is typically not
> considered in the history lines for MVS.

re: earlier posts in this thread:
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#70 GETMAIN/FREEMAIN and virtual storage backing up
http://www.garlic.com/~lynn/2007p.html#73 GETMAIN/FREEMAIN and virtual storage backing up

other than the original os/vs2 prototype implementation being done with an mvt kernel modified with a lot of code borrowed from cp67, running on 360/67 ...

i had done a lot of work with virtual memory as an undergraduate
http://www.garlic.com/~lynn/subtopic.html#wsclock
and then later after joining the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
and in the early 70s several of us would make frequent sojourns to pok (out the mass pike and down the taconic) for architecture meetings (virtual memory, multiprocessing, etc) ... including the architecture meetings where several features were pulled from the 370 virtual memory architecture in order to buy the 370/165 engineers six months of schedule in their hardware implementation.

there were other issues in the os/vs2 virtual memory implementation (spanning both svs and mvs) ... one had to do with the page replacement algorithm implementation ... the standard is LRU (least recently used) or various related approximations of LRU. The pok performance modeling group had discovered (at a micro-level) that if a non-changed page was selected for replacement ... the latency to service a page fault was much less than if a changed page was selected for replacement (non-changed pages could be immediately discarded, without needing a write, relying on the copy already out on disk). However, i repeatedly pointed out to them that weighting the replacement algorithm based on the changed bit, as opposed to the reference bit ... severely negated any recently-used strategy. They went ahead with it anyway (possibly they didn't have very good macro-level simulation capability and, stuck with just the micro-level simulation, couldn't make an informed judgement).

in any case, it was well into a number of MVS releases before somebody got an award for improving MVS performance by changing it to give more weight to reference use in replacement decisions (an example was that under the earlier strategy, the replacement algorithm was selecting high-use, shared, executable linklib virtual pages for replacement before private, lower-use application data virtual pages).

another influence of cp67 and the science center was a joint project between endicott and the science center to do custom modifications to cp67 to provide "370" (virtual memory architecture) virtual machines. For instance, this required cp67 simulating 370 architecture hardware format virtual memory tables ... rather than 360/67 architecture hardware format virtual memory tables ... internally, this was commonly referred to as the "cp67h" system. After that was done, there were modifications to cp67 to make it run on 370 hardware ... building 370 format tables ... rather than 360/67 format tables. Internally, this was commonly referred to as cp67i. The first operational 370 hardware supporting virtual memory was a 370/145 engineering processor.
However, cp67h with cp67i running in a 370 virtual machine was in regular operation a year before the 370/145 engineering box was operational. In fact, the cp67i system was used as the initial software brought up on the 370/145 engineering box.

One of the complexities in the cp67h & cp67i development was that it was all done on the science center cp67 timesharing service. Information about virtual memory for 370 was an extremely tightly held corporate secret ... and there were a variety of non-employees (from numerous educational institutions in the cambridge area) with regular access to the science center timesharing service. As a result ... nearly all of the cp67h work went on in a 360/67 virtual machine (not on the bare hardware) to isolate it from any non-employee prying eyes.

lots of past posts about the use of cp67 for timesharing service ... both internally and externally (including mentioning it being used to address various security issues)
http://www.garlic.com/~lynn/subtopic.html#timeshare

misc past posts mentioning cp67h and/or cp67i systems:
http://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2004b.html#31 determining memory size
http://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004p.html#50 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2005c.html#59 intel's Vanderpool and virtualization in general
http://www.garlic.c
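the changed-bit-vs-reference-bit mistake described above is easy to model; a toy sketch (page structure and both policies are hypothetical, just to show the effect):

  # toy model of the replacement-policy mistake described above (field
  # names hypothetical). replacement should prefer pages that are not
  # recently referenced; weighting "non-changed first" means a high-use
  # but read-only page (e.g. shared executable) loses to a low-use
  # changed page.

  class Page:
      def __init__(self, name, referenced, changed):
          self.name, self.referenced, self.changed = name, referenced, changed

  def pick_victim_changed_bias(pages):
      # early svs/mvs-style bias: any non-changed page (cheap to steal,
      # no page-out write needed) beats any changed page, regardless of use
      return min(pages, key=lambda p: (p.changed, p.referenced))

  def pick_victim_reference_first(pages):
      # the later fix: recency of use dominates; changed is a tie-breaker
      return min(pages, key=lambda p: (p.referenced, p.changed))

  pages = [Page("shared linklib code", referenced=True,  changed=False),
           Page("idle app data page",  referenced=False, changed=True)]
  print(pick_victim_changed_bias(pages).name)     # -> shared linklib code (!)
  print(pick_victim_reference_first(pages).name)  # -> idle app data page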
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> This has always intrigued me. What was done to eliminate the
> possibility that the channel had to access a virtual page that had
> been paged out? An enabled application or system code that is copying
> and translating virtual-to-real addresses can always suffer a page
> fault, wait for the page-in, and resume as if nothing had happened,
> but channels cannot wait for page-fault resolution. Or could they?

re:
http://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up

part of CCWTRANS's creation of "shadow" channel programs (with real addresses) included pinning/locking the associated virtual pages (to those real addresses). after the real i/o had completed (running with the "shadow" channel program), there was an UNTRANS process ... that included unpinning the associated virtual pages.

the original 370 virtual memory architecture included some number of features that didn't actually make it out. i've posted before about the problems the 165 hardware engineers ran into ... creating the full 370 virtual memory hardware retrofit for the 165 ... and the escalation where they claimed they could pick up six months on the delivery schedule if they could drop some features ... and the pok favorite son operating system people expressed that they could see no use for the features. dropping the features then meant that all the other processors had to undo their implementations ... and any software that had already been completed to use the additional features had to be reworked.

there had been channel operation with virtual addresses defined (including being able to suspend because of a page-fault and then be resumed), and there was folklore that there were even patents on such channel operation with virtual addresses. this never got very far into the 370 architecture.

for lots of topic drift ... past posts mentioning the issue with the 370/165 virtual memory hardware retrofit schedule and dropping a number of features to make up six months:
http://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc
http://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
http://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
http://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
http://www.garlic.com/~lynn/2000f.html#55 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
http://www.garlic.com/~lynn/2000f.html#63 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2000g.html#10 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#16 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001.html#63 Are the L1 and L2 caches flushed on a page fault ?
http://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
http://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
http://www.garlic.com/~lynn/2002.html#48 Microcode?
http://www.garlic.com/~lynn/2002.html#50 Microcode?
http://www.garlic.com/~lynn/2002.html#52 Microcode?
http://www.garlic.com/~lynn/2002g.html#47 Why are Mainframe Computers really still in use at all? http://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes? http://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes? http://www.garlic.com/~lynn/2002m.html#68 Tweaking old computers? http://www.garlic.com/~lynn/2002n.html#10 Coherent TLBs http://www.garlic.com/~lynn/2002n.html#15 Tweaking old computers? http://www.garlic.com/~lynn/2002n.html#23 Tweaking old computers? http://www.garlic.com/~lynn/2002n.html#32 why does wait state exist? http://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033 http://www.garlic.com/~lynn/2002p.html#44 Linux paging http://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text http://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions http://www.garlic.com/~lynn/2003g.html#19 Multiple layers of virtual address translation http://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box?? http://www.garlic.com/~lynn/2003h.html#37 Does PowerPC 970 has Tagged TLBs (Address Space Identifiers) http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems http://www.garlic.com/~lynn/2003m.html#37 S/360 undocumented instructions? http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack http://www.garlic.c
Re: GETMAIN/FREEMAIN and virtual storage backing up
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Hunkeler Peter , KIUK 3) writes:
> OS/360 was a real storage only operating system. DAT was introduced
> with S/370. OS/360 could run on that hardware but not use DAT (and
> other new hardware facilities).

DAT was introduced on the 360/67 ... basically a 360/65 with dynamic address translation ... at least in its single processor version (although the 360/67 offered both 24-bit as well as 32-bit virtual addressing modes). The 360/67 multiprocessor did offer some additional features vis-a-vis the 360/65 multiprocessor ... like all 360/67 processors being able to directly address all physical channels (the 360/65 multiprocessor was limited to sharing common real storage ... and didn't provide multiprocessor channel connectivity).

tss/360 was to be the official operating system supporting the 360/67 but ran into lots of problems and was decommitted. however, the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
did do a virtual machine monitor called cp40 for a 360/40 with custom hardware dynamic address translation modifications ... and then morphed it into cp67 when production 360/67 machines became available. cp67 was the precursor to vm370 when virtual memory support was announced for 370.

the initial prototype for os/vs2 svs ... precursor to os/vs2 mvs ... was a custom modified mvt system ... initially running on 360/67 machines. it had a hack on the side to create a single 16mbyte virtual address space and a simple interrupt handler for page faults. it also had CCWTRANS (and associated routines) from cp67 wired into the side to handle the translation of application channel programs (from excp/svc0) into "real" channel programs. This is an issue common to both virtual machine monitors and the os/vs genre of operating systems ... where the applications built channel programs that were then passed to be directly executed. The 360/370 genre of channels required "real" addresses for execution ... but the application (and/or virtual machine) built channel programs all had "virtual address" specifications. To handle the situation, a copy of the original channel program had to be created with the specified virtual addresses replaced by the corresponding real addresses.

for other topic drift ... charlie's work on fine-grain locking supporting cp67 multiprocessor operation resulted in his invention of the compare-and-swap instruction (the mnemonic chosen because CAS are charlie's initials). the initial foray with pok and the 370 architecture owners was met with brick wall resistance because the pok favorite son operating system people claimed that the test-and-set instruction (from 360 days) was more than sufficient for all multiprocessor support. The challenge was that in order to justify the compare-and-swap instruction, a non-multiprocessor use had to be defined/invented. The result was the multi-threaded use description (whether or not the environment was multiprocessor) that currently shows up in the appendix section of principles of operation. misc. posts mentioning multiprocessor and/or the compare-and-swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp

somewhat related to the original thread subject ... when i first got a copy of cp67 at the university as an undergraduate ... when a virtual machine logged on ... the virtual address space "backing store" (for the virtual machine) was all initialized to a single, special "zeros" page on the cp67 ipl/boot volume.
Each corresponding page table entry that pointed to the "zeros" page also had a flag indicating that if the virtual page was ever modified/changed (after being fetched into real storage), it was to have a new (disk paging) backing location dynamically allocated.

an early enhancement that i made to cp67 ... was to initialize freshly-created virtual storage with an indication that on the initial page fault, instead of fetching the virtual page from some disk location ... a real page was to be allocated and then simply cleared to zeros (i used a bxle loop with an stm of ten registers that had all been cleared to zeros). The "recompute" flag still remained the same ... i.e. if virtual execution subsequently "modified" a zeros page ... it would have a new backing disk page location dynamically allocated.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
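a toy sketch of that allocate-and-clear-on-first-fault logic (all structures hypothetical, just the shape of the idea):

  # toy sketch (hypothetical structures) of the zeros-page handling just
  # described: fresh virtual storage starts out flagged so the first
  # fault allocates a real page and clears it in place -- no paging disk
  # slot and no disk read until the page is actually changed.

  ZEROS = object()    # stand-in for the shared "zeros" backing location

  class PTE:
      def __init__(self):
          self.backing = ZEROS    # no private disk slot yet
          self.frame = None

  def page_fault(pte, alloc_frame, read_slot):
      pte.frame = alloc_frame()                 # bytearray of page size
      if pte.backing is ZEROS:
          pte.frame[:] = bytes(len(pte.frame))  # clear in place, no i/o
      else:
          read_slot(pte.backing, pte.frame)     # normal page-in

  def page_out(pte, alloc_slot, write_slot):
      if pte.backing is ZEROS:                  # page was changed: now it
          pte.backing = alloc_slot()            # gets a real backing slot
      write_slot(pte.backing, pte.frame)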
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Choice Overload In Parallel Programming http://developers.slashdot.org/developers/07/10/03/0021253.shtml from above: "And then we show them the parallel programming environments they can work with: MPI, OpenMP, Ct, HPF, TBB, Erlang, Shmemm, Portals, ZPL, BSP, CHARM++, Cilk, Co-array Fortran, PVM, Pthreads, windows threads, Tstreams, GA, Java, UPC, Titanium, Parlog, NESL,Split-C... and the list goes on and on. If we aren't careful, the result could very well be a 'choice overload' experience with software vendors running away in frustration." ... snip .. and ... Embedded software stuck at C http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=202102427 from above: "The inability of C/C++ code to parallelize coupled with its ubiquity throughout the embedded market is a major issue for multi-core going forward," Heikkila wrote in a follow up email to EE Times. "Any alternative parallel programming languages certainly won't materialize in the embedded market, but instead will more likely gain momentum in a more mainstream computing market before making its way into embedded applications," he added. ... snip ... past posts in thread: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#5 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#14 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#19 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#22 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#29 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#37 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#39 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#49 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#53 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#54 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#58 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#59 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#61 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007m.html#70 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#1 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#3 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#6 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#25 Is Parallel Programming Just Too Hard? 
http://www.garlic.com/~lynn/2007n.html#28 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#38 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007n.html#39 Is Parallel Programming Just Too Hard? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Industry Standard Time To Analyze A Line Of Code
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (John P Baker) writes:
> Back in the 80s, we operated under the premise that a seasoned
> programmer should be able to produce 20 lines of bug-free assembler
> code per day.

there have been periodic statements that code generation can be the simplest part of the problem. we've periodically commented that the effort to produce a service can be 4-10 times that of a straight-forward application (or that taking a well-tested and well-debugged application and turning it into a service can take 4-10 times the effort of the original application development). frequently this has only a little to do with lines-of-code.

we were called in to consult with a small client/server startup that wanted to do payment transactions on servers ... they had this technology called SSL ... and subsequently the activity has frequently been referred to as "electronic commerce". part of the infrastructure that the server payment application talked to was something called a "payment gateway" ... misc. past posts mentioning the payment gateway activity
http://www.garlic.com/~lynn/subnetwork.html#gateway

the initial take was to take transaction message formats from the existing circuit-based infrastructure and map them to packets in the internet infrastructure. this somewhat ignored the whole lot of telco provisioning that went into circuit-based operation (and provided a basis for business critical dataprocessing) ... which was all missing in the initial transition to internet-based operation. as part of supporting an operational environment (as opposed to a somewhat trivial technology demonstration) ... we had to invent a lot of compensating processes for the internet environment.

some other recent posts raising the issue of business critical dataprocessing:
http://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now?
http://www.garlic.com/~lynn/2007g.html#51 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
http://www.garlic.com/~lynn/2007h.html#78 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007n.html#10 The top 10 dead (or dying) computer skills
http://www.garlic.com/~lynn/2007n.html#76 PSI MIPS
http://www.garlic.com/~lynn/2007n.html#77 PSI MIPS
http://www.garlic.com/~lynn/2007o.html#23 Outsourcing loosing steam?

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: India is outsourcing jobs as well
re:
http://www.garlic.com/~lynn/2007p.html#39 India is outsourcing jobs as well

Why Is US Grad School Mainly Non-US Students?
http://ask.slashdot.org/askslashdot/07/09/29/2027210.shtml

from above:

I am a new graduate student in Computer Engineering. I would like to get my MS and possibly my Ph.D. I have learned that 90% of my department is from India and many others are from China.

... snip ...

somewhat related recent post
http://www.garlic.com/~lynn/2007o.html#76 Graduate Enrollment in 2005
giving stats showing the gap slightly closing between 2001 & 2005, i.e. foreign/US; 2001: 6500/2500 and 2005: 4500/3500.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: India is outsourcing jobs as well
Edward Jaffe wrote:
> I have family all over Virginia. "Less developed" is probably a good
> thing. It's a beautiful state. Lots of history. There's something very
> wrong with and/or not being stated in the premise here. They probably
> need people in the United States because things aren't working out so
> well with an all-Indian work force.

the other possibility is that they have some specific outsourcing that may include a requirement for some legacy skills ... it may turn out to be cheaper to hire people that already have such experience than to try to train a new generation ... especially if they are considered obsolete skills with limited future applicability.

i've frequently claimed that a big boost for outsourcing came as part of the y2k remediation efforts ... when it wasn't so much a question of pay scale ... but of getting anybody at all. this was significantly aggravated because it was happening during the big resource demand growth of the internet bubble. once business relations were established (during the y2k era), these business relations continued to exist after y2k remediation completed.

some of the recent statistics: well over half of cs advanced degrees from us institutions still go to people not born in the us ... while at the same time the number graduating from non-US institutions is dramatically increasing. this is coupled with things like test scores for US highschool graduates ranking near the bottom of all industrial nations.

misc. recent posts on the subject:
http://www.garlic.com/~lynn/2007g.html#6 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#35 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#68 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007i.html#13 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007l.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007o.html#20 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007o.html#21 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007o.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007p.html#15 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007p.html#18 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007p.html#22 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007p.html#32 U.S. Cedes Top Spot in Global IT Competitiveness

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Writing 23FDs
Matthew Stitt wrote:
> Because the FBA's and 8809's were boat anchors. And the 3350's and 3420
> gave interchangeability with MVS. With things connected to normal
> channels the sky was the limit with what could be done with the 4331.
> Using the ICA severely limited your devices. The 3350 and 3420 tapes
> could run circles around the standard stuff IBM wanted to sell with
> the 4331.

the 4331 had integrated channels (aka like the 370/158 and many other processors); i think you are referring to the integrated controller adapter (as opposed to integrated channels). part of the ICA case was that run-of-the-mill controllers were going to be physically on the order of the size of the 4331 (or larger), and comparable in cost (unless you could pick up old hardware at surplus prices). an example of the size ... in addition to the original effort to use it for the 3090 service processor
http://www.garlic.com/~lynn/2007p.html#36 Writing 23FDs
... research had a project that used a 4331 as a desk-side personal computer.

FBAs were mostly boat anchors because mvs wouldn't ship support for them. eventually all physical disks migrated to FBA ... and for mvs compatibility, there had to be CKD emulation (the first was the 3375). misc. past posts mentioning ckd issues
http://www.garlic.com/~lynn/subtopic.html#dasd

i was told that even if i provided fully tested and integrated mvs fba support, there would still be a bill of $26m for education, classes, documentation, etc. in order to justify mvs fba support, i had to show incremental disk sale ROI (incremental gross sales of at least 10-20 times the expense) attributed solely to the availability of the mvs fba support.

misc. past posts mentioning being quoted $26m as the bill for mvs fba education, classes and documentation:
http://www.garlic.com/~lynn/97.html#16 Why Mainframes?
http://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
http://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#18 OT?
http://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
http://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
http://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
http://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
http://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc
http://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
http://www.garlic.com/~lynn/2005c.html#64 Is the solution FBA was Re: FW: Looking for Disk Calc http://www.garlic.com/~lynn/2005m.html#40 capacity of largest drive http://www.garlic.com/~lynn/2005u.html#21 3390-81 http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s http://www.garlic.com/~lynn/2006f.html#4 using 3390 mod-9s -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Writing 23FDs
Raymond Noal wrote:
> Dear List: An IBM 4361 Model Group 5 had the ECPS feature - Extended
> Control Program Support (ECPS) -- offers VSE mode, VM/370 mode, and
> MVS/370 mode. These modes provide microcode assists that make the
> system control programs operate more efficiently.

ECPS was originally done for virgil/tully (370 138/148). basically, portions of kernel/nucleus pathlengths were implemented in microcode. a "new" instruction was defined for each of these (moved) pathlength snippets ... and placed "in front" of the corresponding kernel instructions. The parameter list for the "new" instruction included the address(es) of where the microcode was to resume in the standard code. here is an old post that details what portions of the vm370 kernel were identified for movement into microcode
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

the issue was that low-end and mid-range 370 machines used vertical microcode to implement the 370 instruction set ... and there was typically an avg ratio of 10:1 (microcode instructions to 370 instructions) ... this avg. ratio has also been found by some of the more recent 370 emulators on i86 platforms. For virgil/tully we were given that there was approx. 6k of microcode instruction space available ... and a typical kernel instruction would translate approx. 1:1 into microcode (6k bytes of kernel 370 instructions translates into approx. 6k bytes of microcode). So the identification activity was to identify the 6k bytes of vm370 kernel code in the highest-used pathlengths.

There was also an ipl/boot sequence that identified whether it was running on an ECPS machine ... and if not, it had a table of all the ECPS instructions in the kernel, which it would overlay with no-ops (allowing the same kernel to execute on both ECPS machines and non-ECPS machines) ... see the sketch at the end of this post.

Note, for vm370, the vm microcode assist (VMA) had previously been implemented on the 370/158. These were specific, high-use, supervisor state instructions that normally interrupted into the vm370 kernel for simulation. A new "mode" was defined for the machine, which was "virtual machine" supervisor state ... and the machine microcode was changed to directly execute the supervisor state instructions using "virtual machine" rules ... w/o having to interrupt into the vm370 kernel. As part of the virgil/tully ECPS effort, there was also an implementation of the VMA supervisor instructions, as well as additional supervisor state instructions not in the original VMA implementation.

Later there was an ECPS-like effort done for the 3033 for MVS. There were some differences between the 3033 MVS changes and the virgil/tully vm370 implementation:

* the new MVS would only run on machines with the MVS microcode enhancement and wouldn't run on machines w/o the feature

* the 3033 was a horizontal microcode machine where the ratio of microcode instructions to 370 instructions was nearly 1:1 ... aka there was little or no performance difference between the 370 instruction implementation and the microcode implementation (this characteristic continued on later high-end machines)

later, in the 4331/4341 time-frame ... there was some effort to retrofit the mvs ecps change to 4341s ... allowing the latest release of mvs to operate on 4341 machines. there was lots of contention over the value of doing this, since the 4341 was barely powerful enough to support any kind of mvs thruput and the 4331 was quite a bit below that threshold (so i can't be positive, but i'm pretty sure that the mvs ecps feature was never retrofitted to the 4331 ... although it was eventually made available on the 4341).

somewhat 4361 topic drift: the 3081 had a service processor which ran off a 3310 fba disk. part of the issue was that field service had a requirement that it could perform bootstrap field diagnostics starting with a scope. this was no longer possible for the 3081 ... so a service processor was added that had the capability of diagnosing the 3081 hardware ... and it was possible for field service to do the diagnostic field bootstrap starting with a scope on the service processor. the service processor function was getting more and more complex, and so it was decided that for the 3090, it would use a 4331 running a highly customized version of vm370 release 6 ... with all the service processor menu screens implemented in cms ios3270. before 3090 first customer ship, the service processor was upgraded to a pair of 4361s (running vm370 and cms with the menu screens implemented in cms ios3270). having a pair of redundant 4361s eliminated the requirement for field service to bootstrap diagnose the 4361s ... since they could just switch to the other 4361 machine for diagnosing the 3090 (if there was a 4361 failure).

misc. past posts mentioning service processor operation
http://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
http://www.garlic.com/~lynn/99.html#61 Living legends
http://www.garlic.com/~lynn/99.html#62 Living legends
http://www.garlic.com/~lynn/99.html#108 IBM 9020 computers
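the same-kernel-everywhere trick is simple to picture; a toy sketch (all structures hypothetical, including the assumption of 4-byte patch sites) of the ipl-time fixup described above:

  # toy sketch (hypothetical structures) of the ipl-time ECPS fixup: the
  # kernel carries a table of the addresses where ECPS instructions were
  # planted in front of the moved pathlength snippets. on a machine w/o
  # the assist, each one is overlaid with a no-op so the original 370
  # code executes; with the assist, the microcode runs the snippet and
  # resumes at the address carried in the instruction's parameter list.

  NOOP = b'\x47\x00\x00\x00'   # 370 BC 0 (branch never) as a 4-byte no-op

  def fixup_ecps(kernel: bytearray, ecps_sites, machine_has_ecps: bool):
      if machine_has_ecps:
          return                          # leave the ECPS instructions live
      for addr in ecps_sites:             # table built with the kernel
          kernel[addr:addr + 4] = NOOP    # fall through to the 370 code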
Re: zH/OS (z/OS on Hercules for personal use only)
Andreas F. Geissbuehler wrote:
> It's been done many times before, FREEWARE for STRICTLY PERSONAL USE.
> It is proven to sell more licences for commercial use. There is
> precedence, DB2, Lotus...

personal computing ... freeware or not ... has always been shown to contribute significantly to usage increase. CMS was the personal computing of the 60s and 70s (first as the cambridge monitor system on cp67 and then renamed conversational monitor system as part of the morph to vm370) ... and SHARE case studies in the 70s showed that vm370/cms environments had the largest usage growth (this was part of the many countermeasures to the periodic corporate statements that the vm370 product was being eliminated).

misc. past posts mentioning the cambridge science center ... which originated the cp40 and cp67 virtual machine systems (along with cms)
http://www.garlic.com/~lynn/subtopic.html#545tech
where gml was invented (precursor to sgml, html, xml, etc)
http://www.garlic.com/~lynn/subtopic.html#sgml
where the compare&swap multiprocessor instruction was invented
http://www.garlic.com/~lynn/subtopic.html#smp
and where the technology for the internal network originated
http://www.garlic.com/~lynn/subnetwork.html#internalnet
which was also the basis for bitnet (and the european earn):
http://www.garlic.com/~lynn/subnetwork.html#bitnet

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: IBM Releases Office Desktop Software at No Charge to Foster Collaboration and Innovation
Knutson, Sam wrote:
> We are a large IMS DC/DB shop and CICS DB2/IMS DBCTL. IMS is still an
> order of magnitude more efficient than DB2. That is not saying anything
> bad about DB2; it is designed for more flexible data manipulation and
> easier development by offloading more business and data handling logic
> into the DBMS. DB2 is a relational database and IMS a hierarchical one,
> though IMS appears to be geared up to take on some new abilities soon
> with V10. IMS is wickedly efficient; ask some of the large banking and
> delivery concerns that still use it to process large transaction
> volumes.

some of this was part of the discourse between the ims group and the system/r group in the 70s ... i.e. the original relational/sql implementation
http://www.garlic.com/~lynn/subtopic.html#systemr
there was then technology transfer from sjr to endicott for sql/ds ... and one of the people listed at this meeting
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
claimed to have handled a lot of the technology transfer from endicott to stl for DB2.

an old email with an ims & relational reference:
http://www.garlic.com/~lynn/2007.html#email801016
in this post
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"

in the discussion between the two groups ... the ims claim was that ims was significantly more efficient because it had direct pointers ... while relational abstracted away the pointers with an implicit index. The implicit index (under the covers) tended to double the amount of disk space required and significantly increased the number of disk accesses to reach the desired record. the relational counter argument was that it significantly reduced the people/manual effort required to manage the databases. a toy illustration of the access-pattern difference is sketched at the end of this post.

the transition in the 80s was that the economics of disk space changed significantly ... mitigating the disk space issue ... and the significant increase in system real storage sizes allowed much of the relational index infrastructure to be cached ... cutting down on the physical disk operations required. at the same time there were changes in people costs vis-a-vis hardware costs ... allowing some lower-value uses to become practical (hardware costs dropped below some threshold, along with the elimination of some amount of manual/people support costs).

other posts discussing the theme of changes in system configurations and relative costs between the 60s and the 80s and their effect on dbms implementation trade-offs:
http://www.garlic.com/~lynn/2005s.html#12 Flat Query
http://www.garlic.com/~lynn/2006l.html#0 history of computing
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object
http://www.garlic.com/~lynn/2007f.html#66 IBM System z9
http://www.garlic.com/~lynn/2007o.html#17 FORTRAN IV program illustrating assigned GO TO on web site

for other topic drift ... the university had gotten an ONR library automation grant and was selected as beta-test for the original CICS (adapting code that had been developed at a specific customer site and turning it into a product) ... and i got tasked to provide debugging and deployment support. misc.
for other topic drift ... the university had gotten an ONR library automation grant and was selected as beta-test for the original CICS (adapting code that had been developed at a specific customer site and turning it into a product) ... and i got tasked to provide debugging and deployment support. misc. past posts mentioning CICS and/or BDAM
http://www.garlic.com/~lynn/subtopic.html#bdam

recent post in another thread discussing relative system disk thruput
http://www.garlic.com/~lynn/2007o.html#69 ServerPac Installs and dataset allocations

for other drift ... part of what prompted the observation mentioned in the above post was that the dynamic adaptive resource management work
http://www.garlic.com/~lynn/subtopic.html#fairshare
i had done as an undergraduate in the 60s and at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
in the 70s ... included the general objective of being able to (dynamically) "schedule to the bottleneck" ... aka dynamically recognize the major system resource bottleneck(s) and adapt the resource scheduling policy to the bottleneck resource(s) (a toy sketch of the idea appears after the list of posts below). misc. past posts mentioning "schedule to the bottleneck"
http://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#1 Multitasking question
http://www.garlic.com/~lynn/94.html#50 Rethinking Virtual Memory
http://www.garlic.com/~lynn/98.html#6 OS with no distinction between RAM and HD ?
http://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology
http://www.garlic.com/~lynn/99.html#143 OS/360 (and descendents) VM system?
http://www.garlic.com/~lynn/2000.html#86 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2003f.
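a minimal sketch (mine, not from the posts above) of "schedule to the bottleneck": sample utilization of each resource, treat the most saturated one as the bottleneck, and order dispatching by each task's consumption of that resource. the resource names and numbers are made up for illustration.

    # toy "schedule to the bottleneck": charge tasks for their use of
    # whichever resource is currently the most saturated
    utilization = {"cpu": 0.55, "disk": 0.95, "storage": 0.70}  # sampled
    bottleneck = max(utilization, key=utilization.get)          # -> "disk"

    tasks = {  # per-task consumption rates of each resource (made up)
        "batch1": {"cpu": 0.30, "disk": 0.50, "storage": 0.20},
        "cms1":   {"cpu": 0.10, "disk": 0.05, "storage": 0.10},
    }

    def dispatch_order(tasks, bottleneck):
        # tasks using the least of the bottleneck resource run first,
        # so the saturated resource governs the scheduling policy
        return sorted(tasks, key=lambda t: tasks[t][bottleneck])

    print(dispatch_order(tasks, bottleneck))  # ['cms1', 'batch1']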
Re: JES2 or JES3, Which one is older?
[EMAIL PROTECTED] (Vijay Kumar) writes: I am a new comer in the mainframe field. I am learning this technology from an institute in Singapore. I have searched the net and not able to find which of the Jes version was introduced first JES2 or JES3. I know Jes2 was evolved after HASP and Jes3 was after ASP. Could anyone please let me know the specific dates or year in which these two job entry subsystems were introduced. Waiting for your response.

my wife did a stint in the g'burg jes group ... after hasp responsibility was moved to g'burg and renamed jes2 ... this was before she was con'ed into going to pok to be responsible for loosely-coupled architecture ... where she created the peer-coupled shared data architecture, which didn't see a lot of uptake (except for ims hot standby) until sysplex. misc. past posts
http://www.garlic.com/~lynn/subtopic.html#shareddata

one of her tasks in the jes group was "catcher" for asp ... as part of turning it into jes3 (aka the group for jes2 was already in existence before the work on asp->jes3). her work included (co-)writing a plm for jes3. she also did a design for a combined product with the best features of both products ... which didn't get very far because of strong opinions from the two (jes2 & jes3) camps.

for some topic drift ... recent post in another thread
http://www.garlic.com/~lynn/2007o.html#72 FICON tape drive?
mentioning she had earlier co-authored AWP39, peer-to-peer networking ... in the early days of SNA (only in ibm was it necessary to qualify networking as "peer-to-peer" ... since that is the standard definition; however, SNA had co-opted the word to apply to their, non-networking, communication infrastructure).

for other topic drift, misc past posts mentioning hasp, jes2, and/or jes2 networking
http://www.garlic.com/~lynn/subtopic.html#hasp
Re: CA to IBM product swap
[EMAIL PROTECTED] (Mark Zelden) writes: > Then you switch back. ;-) There are actually a lot of companies that > seem to work that way. That's what happens when bean counters make > the decisions and don't consider the human aspects (time, training etc.)

this is related to the original justification for the 360 product line with a common architecture across the product line ... recent post mentioning supposed testimony in the gov. anti-trust case by one of the bunch
http://www.garlic.com/~lynn/2007p.html#8 what does xp do when system is copying

i.e. a compatible product line minimized having to redo applications every time a customer upgraded/changed processors ... people resources and elapsed time for conversion were starting to dominate considerations.

this was also touched on in a talk amdahl gave at mit in the early 70s when asked what justification was used in getting funding for his clone processor company ... even if ibm were to completely walk away from 360, customers already had something like $200B invested in software applications, which would support a clone processor business through at least the end of the century. and the "walk away from 360" could possibly be considered a veiled reference to the future system project
http://www.garlic.com/~lynn/subtopic.html#futuresys
which would have been as different from 360 as 360 had been from earlier machines ... recent posts
http://www.garlic.com/~lynn/2007p.html#1 what does xp do when system is copying
http://www.garlic.com/~lynn/2007p.html#3 PL/S programming language
http://www.garlic.com/~lynn/2007p.html#5 PL/S programming language
Re: FICON tape drive?
George McAliley wrote: All IBM 3490's on mainframes were either Block MPX channel (bus/tag) or ESCON. The STK 9490's on mainframe were also ESCON though they did have a SCSI interface for distributed system attachment. The IBM Magstar (3590) series were natively FICON and ESCON capable depending on how you configure the drive/controller. The newer 3592's are also either ESCON or FICON though they are really too fast for ESCON. All the Magstar drives have standalone (non_ATL) configurations but are now usually installed in ATL's or VTL's in today's world.

a recent escon, sla, fcs, ficon and eckd x-over discussion from the comp.arch newsgroup
http://www.garlic.com/~lynn/2007o.html#54 mainframe performance, was Is a RISC chip more expensive?
and for additional drift, other posts in the thread:
http://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?
http://www.garlic.com/~lynn/2007o.html#55 mainframe performance, was Is a RISC chip more expensive?

... escon had been fiber technology that had been knocking around pok since the 70s. my wife had been con'ed into going to pok to be in charge of loosely-coupled architecture, where she created the peer-coupled shared data architecture
http://www.garlic.com/~lynn/subtopic.html#shareddata
which didn't see a whole lot of take-up until sysplex ... except for the ims hot-standby work. she also had significant battles with the communication group over not using sna for peer-coupled operation. eventually there supposedly was a (temporary) truce where sna had to be used for anything transiting the walls of the glasshouse, but non-sna could be used within the walls of the glasshouse.

this sort of came to a test with the ctca enhancement, trotter/3088, where she pushed hard for full-duplex operation ... as an improvement over the standard ctca/channel half-duplex operation (full-duplex didn't make it out the door). san jose research did do a vm/4341 cluster prototype using enhanced 3088 peer-coupled operation ... but when it came time to make it available to customers, they were required to use sna for the implementation. a trivial example of the difference was the cluster synchronization protocol ... which started out being done in subsecond elapsed time. it was severely crippled by being forced to regress to an sna implementation, which increased the cluster synchronization protocol elapsed time to nearly a minute. all of this contributed to her not lasting very long as pok's loosely-coupled architect.

of course, part of her problem was that she had earlier co-authored AWP39, peer-coupled networking architecture, in the early days of SNA ... which they possibly viewed as a threat. SNA architecture was VTAM ... not a networking architecture at all, but a (dumb) terminal communication control infrastructure that could handle massive numbers of terminals (or at least initially up to 64k). for other random trivia, appn was AWP164.

misc. past posts mentioning AWP39
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006j.html#31 virtual memory
http://www.garlic.com/~lynn/2006k.html#9 Arpa address
http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#28 Assembler question
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. "Server" (Was Just another example of mainframe
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications
Re: ServerPac Installs and dataset allocations
Ted MacNEIL wrote: I put the heavily hit loadlibs such as SYS1.LINKLIB on one side of the VTOC, and the ISPF libraries on the other side of the VTOC. With todays heavily cached dasd, that probably will buy you very little anymore. Very little. Especially, since it's been over 15 years since IBM stopped recommending placing the VTOC (VTOCIX, VVDS, & Catalogue [if there is one]) elsewhere than at the beginning of the pack.

for os/360 releases 11 & 14 system builds i had carefully reordered stage-2 sysgen to achieve optimal placement ... not only of datasets but also of members within pds. i had given presentations at share (on the results of both the customized release 11 and release 14 system builds) showing that, for the university workload, i could achieve nearly a three times increase in thruput (a toy seek-distance sketch appears after the list of posts below). i had also asked for the ability to specify vtoc location ... which showed up in release 15/16 (release 15 slipped and there was a combined release 15/16). one of the problems was applying normal system maintenance ... replacing members in pds libraries like sys1.linklib could detrimentally affect the careful ordering, and over a period of six months thruput could degrade by a third or more (and might require a new "build" of critical pds libraries).

reference to an old presentation that i made at the aug68 share meeting in boston (this particular presentation also included some measurements after i had rewritten several critical sections of the cp67 kernel):
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14

however, going into the mid-70s, it was becoming apparent that overall system thruput (processor and memory) was increasing much faster than disk thruput. as a result there was starting to be more and more reliance on mechanisms (like various kinds of caching technology) to compensate for the relative system degradation of disk thruput. at one point, i made the observation that relative system disk thruput had degraded by a factor of ten over a period of years. this upset some of the people in the disk division ... and the disk division performance group was assigned to refute the observation. after several weeks, they came back and effectively said that i had slightly understated the amount of relative system thruput degradation (i.e. disks were getting faster, but overall systems were also getting faster ... much faster than disks). in any case, the work by the disk division performance group eventually turned into a share presentation ... not on how slow disks are ... but on how to organize data on disk to improve overall system thruput.

as caching technologies became more and more widely used ... nearly all of the work on careful ordering of "highly used" disk records (that i had done as an undergraduate in the 60s) was obsoleted, since such highly used records would now be found in the electronic caches.

some number of old posts mentioning gpd finding that i had slightly understated the degree of relative system disk thruput degradation over a period of years
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's
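a toy sketch (mine, not from the posts above) of why the careful placement bought so much: average arm travel on a movable-arm disk is roughly proportional to the distance between consecutively referenced cylinders, so clustering the high-use datasets/members together cuts the expected seek distance. the cylinder count and reference mix are made up for illustration.

    import random

    CYLS = 400  # e.g. a 2314-class pack, 400 cylinders

    def avg_seek(positions, refs=100_000):
        # expected arm travel for randomly ordered references
        # drawn from the given set of cylinder positions
        arm, total = 0, 0
        for _ in range(refs):
            nxt = random.choice(positions)
            total += abs(nxt - arm)
            arm = nxt
        return total / refs

    scattered = [10, 150, 250, 390]   # high-use data spread across the pack
    clustered = [195, 198, 200, 203]  # same data packed together mid-pack
    print(avg_seek(scattered), avg_seek(clustered))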
Re: CA to IBM TCP Conversion
Chris Mason wrote: Robert I thought I'd dig further into this IUCV point and I found a reference in the IP Configuration Guide. It appears that IUCV, VMCF and TNF "stuff" is still available, you just don't necessarily need it. It would appear to have become an *optional* bit of preparation for the use of the Communications Server (CS) IP component from being *required* as it was when I used to teach TCP/IP for MVS. It is described in the CS IP Configuration Guide under "Chapter 2. Configuration overview", "Required steps before starting TCP/IP" as "Step 3: Configure VMCF and TNF" on page 111 of the z/OS 1.8 manual. It appears that the section headers are logically incorrect since, as far as I can tell, it really is an *optional* step and depends on whether or not the Pascal API is used or not. The clearest indication that this step really is optional is "... therefore, some installations will require setting up VMCF and TNF." at the end of the first paragraph. I then found Dana Mitchell's post where he/she said something of the same as above. Chris Mason

the original tcp/ip implementation was done in vs/pascal on vm370 (20 yrs ago) ... but there were some number of implementation bottlenecks ... such that it got about 44kbyte/sec aggregate thruput while consuming a 3090 processor. i then did the rfc1044 support for the product, and in some tuning tests at cray research (between a 4341 clone and a cray machine) was getting 4341 channel media speed thruput using only a modest amount of the 4341 clone.
http://www.garlic.com/~lynn/subnetwork.html#1044

for some topic drift, recent post mentioning vs/pascal
http://www.garlic.com/~lynn/2007o.html#61 (Newbie question)How does the modern high-end processor been designed?
which is slightly related to the topic in this newsgroup since the los gatos vlsi tools group was responsible for the 370 pascal implementation as well as the "LSM"
http://www.garlic.com/~lynn/2007o.html#67 1401 simulator for OS/360

somewhat drifting back to the topic, a port of the implementation was then done for mvs ... by doing a (vm370) vmcf/iucv emulator for mvs systems.

for other background ... internally there was something called spm, originally implemented on cp67 (precursor to vm370 that ran on 360/67s), which was a superset of the later vmcf and iucv implementations. there was some internal dissension leading up to the initial vmcf release ... since spm had been around for a much longer period and had so much more function. Later, iucv was released to cover some additional function (also covered by spm) that wasn't handled by vmcf.

some old email with spm references
http://www.garlic.com/~lynn/2006w.html#email750430
http://www.garlic.com/~lynn/2006k.html#email851017

misc. old posts mentioning spm:
http://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
http://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2005m.html#45 Digital ID
http://www.garlic.com/~lynn/2006k.html#51 other cp/cms history
http://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
http://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
Re: Virtual Storage implementation
Mark Post wrote: If you give a Linux guest a 4GB virtual machine, it will have very close to a 4GB working set. If you give that same Linux guest 64GB, it will have very close to a 64GB working set. The fact that you say "Linux will use what it needs" tells me that you have little or no experience running Linux, either on midrange systems or on the mainframe. Linux will _always_ use everything you give it, if for nothing else than buffers and cache. Hence the constant battle we have with midrange Linux sysadmins, DBAs, etc., regarding this topic. It's also a recurring topic with the midrange performance/capacity folks, since we keep getting concerned phone calls and emails about how we have to add more RAM to a Linux system because it's running out. It hasn't run out, it is just using all the otherwise unused storage for buffers and cache, but that's not the behavior they're used to seeing from AIX, Solaris, HPUX, etc.

past posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#47 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#48 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#52 Virtual Storage implementation

unix/linux will use its (virtual) machine storage for running applications and for file caching. if there is enough machine storage ... the application storage is whatever the page requirements are for the collection of running applications (linux kernel code, daemons, etc). if the machine storage is at least as large as the total program execution storage ... then the system may not have to do any paging operations. the remaining (potentially virtual) machine storage will be used for file record caching, which is analogous to the operation of dbms systems with database record caching. lots of dbms systems have configuration parameters for the total size of record caching ... and tuning them gets tricky when running in a virtual memory environment.

this is similar to the description of the (storage management) changes migrating apl\360 (which assumed the available storage was "real") to cms\apl (where there was an enormously larger amount of virtual, paged storage). The implicit assumption was that the available configured storage was "real" and could be used arbitrarily w/o regard to potential working set size implications and the effect on demand paging.
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation

one of the other projects at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
involved application instruction and storage access tracing. this was used in modeling possible page replacement algorithms and working set sizes. Eventually some of this was released as the VS/REPACK product, which would do semi-automated program reorganization to optimize virtual storage operation (a toy sketch of the idea appears after the list of posts below). An early version of what became VS/REPACK was used to assist in migrating apl\360 to the virtual memory environment as cms\apl. Various versions of VS/REPACK were also used by other corporate product groups to aid in the migration from the "real storage" paradigm to the "virtual storage" paradigm. One such early user of the package was the organization developing and supporting IMS. A side-effect of the package tracing/monitoring was that,
in addition to helping analyze execution characteristics in a virtual storage environment, it was also used for straight-forward execution "hot-spot" analysis.

random past posts mentioning vs/repack:
http://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
http://www.garlic.com/~lynn/97.html#20 Why Mainframes?
http://www.garlic.com/~lynn/99.html#7 IBM S/360
http://www.garlic.com/~lynn/99.html#61 Living legends
http://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
http://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
http://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/200
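a toy sketch (mine ... vs/repack itself worked from instruction/storage reference traces) of the program reorganization idea: pack routines that are referenced close together in time into the same virtual page, so the working set shrinks. the routine sizes and affinity counts are made up.

    # greedy packing of routines into 4k pages by reference affinity
    PAGE = 4096
    sizes = {"a": 1200, "b": 900, "c": 2800, "d": 600, "e": 1800}
    # affinity[(x, y)] = how often x and y were referenced close together
    affinity = {("a","b"): 90, ("a","c"): 5, ("b","d"): 70,
                ("c","e"): 80, ("d","e"): 10}

    def pack(sizes, affinity):
        # visit routine pairs from strongest affinity down, starting a
        # new page whenever the current one can't hold the next routine
        order, seen = [], set()
        for (x, y), _ in sorted(affinity.items(), key=lambda kv: -kv[1]):
            for r in (x, y):
                if r not in seen:
                    seen.add(r)
                    order.append(r)
        pages, cur, used = [], [], 0
        for r in order:
            if used + sizes[r] > PAGE:
                pages.append(cur)
                cur, used = [], 0
            cur.append(r)
            used += sizes[r]
        if cur:
            pages.append(cur)
        return pages

    print(pack(sizes, affinity))  # [['a', 'b'], ['c'], ['e', 'd']]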
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Mark Post) writes: That's still probably too much, if only by a little. The idea is to force Linux to use as little storage as possible for buffers and cache, and page out any programs, etc., that haven't been used very recently. Letting z/VM handle this via expanded storage, and paging some things out to real disk turns out to work very well in a shared environment. Other techniques, such as having the kernel in a Named Saved Segment, and executable userspace code in a DCSS using the eXecute In Place file system helps even more.

previous posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#47 Virtual Storage implementation

for more archeological topic drift with regard to DCSS: i had originally started what i called "virtual memory management" on the cp67 platform at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
this included mapping the cms filesystem to a paged mapped infrastructure
http://www.garlic.com/~lynn/subtopic.html#mmap
which included a lot of fancy options for moving pages to/from virtual address space and disk storage (a small present-day analogue appears at the end of this post). i then ported this to a vm370/cms environment with a lot of options for sharing of segments. a variety of some of this was used in some of the original relational/sql dbms work ... all done on the vm370 platform
http://www.garlic.com/~lynn/subtopic.html#systemr

in the early 70s, there was a project called future system
http://www.garlic.com/~lynn/subtopic.html#futuresys
which was going to replace 360/370 with a radically different machine architecture. this effort absorbed significant corporate resources, and when it was finally canceled (w/o even being announced) there was significant scrambling to get all sorts of items back into the 370 hardware and software product pipeline. The resulting mad scramble opened the opportunity to get a lot of work that had continued at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
on 370s into the vm370 product ... including my resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
which included a large amount of other work not strictly related to resource management, things like lots of kernel reorganization for multiprocessor support
http://www.garlic.com/~lynn/subtopic.html#smp
part of that opportunity resulted in releasing an extremely small subset of the "virtual memory management" work as DCSS (the generalized paged mapped infrastructure was not included).

for additional topic drift ... some discussions of various problems trying to reconcile generalized virtual memory management features with the os/360 address constant convention
http://www.garlic.com/~lynn/subtopic.html#adcon
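the cms paged-mapped filesystem itself isn't something that can be shown here, but a present-day analogue (my sketch, using python's mmap ... not the cms implementation) gives the flavor: file contents appear directly in the address space, and the pages move to/from disk via the paging machinery rather than explicit reads/writes.

    import mmap

    # create a small file and map it into the address space; references
    # then go through the paging machinery instead of explicit i/o calls
    with open("demo.dat", "wb") as f:
        f.write(b"\0" * 4096)

    f = open("demo.dat", "r+b")
    m = mmap.mmap(f.fileno(), 4096)
    m[0:5] = b"hello"   # touch the page; paging writes it back
    m.flush()           # force the dirty page out to disk
    m.close(); f.close()
    print(open("demo.dat", "rb").read(5))  # b'hello'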
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Thompson, Steve) writes: VSE, as I recall, was told that it had 32MB (or something similar) and VM then took care of the paging (because VSE didn't page in that case) -- must understand the memory system used by VSE (similar to VS1).

recent posts in this thread:
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation
http://www.garlic.com/~lynn/2007o.html#46 Virtual Storage implementation

the handshaking had earlier been implemented in cp67 with mvt by one of the university cp67 installations. a similar implementation was done for vm370, somewhat in conjunction with the ecps microcode assist ... for a little topic drift, a recent post in comp.arch mentioning ecps (as well as sie)
http://www.garlic.com/~lynn/2007o.html#42 mainframe performance, was Is a RISC chip more expensive?

vs1 typically ran with something like a 4mbyte virtual address space ... something akin to the initial move of mvt to os/vs2 svs, which ran with a 16mbyte virtual address space. for vs1 handshaking, vm370 gave the vs1 guest a 4mbyte virtual machine. vs1 then mapped its 4mbyte virtual address space one-for-one to the 4mbyte virtual machine address space (since vs1 was mapping a 4mbyte virtual address space onto a 4mbyte machine, at first glance it would never take any "guest" page faults of its own). all the page faults would be happening at the vm370 level, which would then schedule a pseudo page-fault interrupt for the vs1 guest while it performed the page replacement operation. This allowed vs1 to switch to a different task/application ... so that the whole virtual machine's execution wouldn't be blocked just waiting on page fault processing for a specific task/application. When the page fetch had completed, vm370 would post a pseudo page-fetch completion interrupt to the vs1 guest ... so that it might choose to re-enable the faulted task for execution. the assumption was that the virtual machine guest is multitasking lots of different work and is capable of doing a task switch and continuing execution when a specific task has a missing page.

I had highly optimized both the native vm370 page processing pathlength as well as the selection of code paths to be moved to microcode as part of the ECPS effort. As a result, it was actually possible for VS1 to have higher thruput under vm370 than running stand-alone on the same hardware (w/o vm370) ... my pathlength for doing page processing was significantly better than VS1's ... as was my page replacement implementation.
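a minimal sketch (mine, not the ibm implementation) of the pseudo page-fault handshake described above: instead of blocking the whole virtual machine on a hypervisor page fault, the hypervisor reflects a "page-fault" notification so the guest can dispatch another task, and a later "fetch-complete" notification lets the guest make the faulted task runnable again.

    # toy guest scheduler reacting to hypervisor pseudo page-fault events
    ready = ["taskA", "taskB"]          # guest's runnable tasks
    waiting = {}                        # task -> page it is waiting on

    def pseudo_page_fault(task, page):
        # hypervisor says: task's page is being fetched; don't block the vm
        waiting[task] = page
        ready.remove(task)
        return ready[0] if ready else None  # guest dispatches another task

    def pseudo_fetch_complete(page):
        # hypervisor says: page arrived; guest re-enables the faulted task
        for task, p in list(waiting.items()):
            if p == page:
                del waiting[task]
                ready.append(task)

    print(pseudo_page_fault("taskA", 0x42))  # guest switches to taskB
    pseudo_fetch_complete(0x42)
    print(ready)                             # ['taskB', 'taskA']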
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Mark Post) writes: Oh, and if you create a 64GB z/VM guest, shame on you. As someone who is very heavy into z/VM performance once told me, "z/VM is very good at managing large numbers of small things. It's not so good at managing a smaller number of very large things." I tend to agree. The z/VM scheduler isn't too happy about guests with large working sets.

the issue may not be so much a scheduling problem and/or specifically a "large" working set problem ... as somewhat mentioned in this post:
http://www.garlic.com/~lynn/2007o.html#45 Virtual Storage implementation

there is an implicit assumption in paged virtual memory and working sets with regard to least-recently-used page replacement algorithms ... which assume that the page/buffer that has been least recently used in the past is likely to be least recently used in the future. however, virtual guests and various subsystems (which manage storage with their own least-recently-used algorithms) are likely to exhibit just the opposite behavior ... the page/buffer that has been least recently used in the past is exactly the page/buffer that the virtual guest/subsystem is going to select for replacement and start using. a multi-level least-recently-used replacement strategy can exhibit pathological behavior where the next lower level has just removed the page/buffer that the higher level is most likely to select to start using (the "hypervisor" closest to the hardware being the lowest level).

lots of past posts mentioning page replacement algorithms and virtual storage management
http://www.garlic.com/~lynn/subtopic.html#wsclock
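a small simulation (mine) of the two-level pathology: a guest that replaces its own least-recently-used page immediately touches that page ... so a hypervisor that also runs lru has just evicted the very page about to be used. modeled with simple lru lists; the sizes are made up.

    from collections import OrderedDict

    GUEST_PAGES = 4          # pages the guest believes it has
    guest_lru = OrderedDict((p, None) for p in range(GUEST_PAGES))
    hyp_resident = OrderedDict(guest_lru)   # hypervisor tracks same pages

    hyp_faults = 0
    for _ in range(10):
        # guest takes a fault and reuses ITS least-recently-used page
        victim = next(iter(guest_lru))
        guest_lru.move_to_end(victim)       # guest now touches that page

        # hypervisor (under memory pressure) evicted ITS lru page --
        # which is the same page the guest just decided to touch
        evicted, _ = hyp_resident.popitem(last=False)
        if evicted == victim:
            hyp_faults += 1                 # pathological: always faults
        hyp_resident[victim] = None

    print(hyp_faults, "hypervisor faults out of 10 guest replacements")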
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Ted MacNEIL) writes: Also, sub-systems like DB2 are getting to the point where you should/could/would not like it to page. Sort of throws the concept out the window, doesn't it?

previous post in thread
http://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation

one of the things that can happen is if you run a subsystem doing LRU-like activity management in a virtual address space that is itself being managed with an LRU-like algorithm. I first noticed this in the 70s ... when some of the os/360 systems migrated to virtual storage support and were in turn run in virtual machines. vm370 was managing the virtual address space (of the virtual machine) with an LRU-like algorithm while the guest operating system was also managing (what it thought to be real storage) with an LRU-like algorithm. if the virtual guest took a page fault ... it would examine its available storage for the least recently used page to "replace". if the vm370 hypervisor was also paging ... it would have used the same criteria to remove the least recently used page from real storage ... however, that was likely also the next page that the virtual guest was going to start using (when it did its own paging).

dbms subsystems tend to have large buffer storage ... managed in a manner analogous to virtual storage ... i.e. the least recently used buffer is likely to be replaced with the latest requested record. A heavily used dbms subsystem is likely going to use the maximum storage available to it (because it is going to replace its least recently used buffers with the most recently requested records). one of my statements from the 70s was that running an LRU-like algorithm under an LRU-like algorithm can result in very pathological behavior; the virtualized guest/subsystem can exhibit exactly the opposite of the behavior assumed as the foundation of LRU implementations (LRU assumes the least recently used page is the least likely to be needed in the near future; a "virtualized" LRU-like algorithm is the most likely to next use its least recently used page).

lots of past posts mentioning virtual storage page replacement and/or page/buffer replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#clock
misc. past posts mentioning the original rdbms & sql implementation (originally all done on the vm370 platform)
http://www.garlic.com/~lynn/subtopic.html#systemr
including the tech. transfer from bldg. 28 to endicott for sql/ds. for other topic drift, one of the people in the meeting mentioned in the following posts claimed to have handled a large part of the technology transfer from endicott to bldg. 90 for DB2
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
the above meeting was related to turning out the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
and other old email related to working on ha/cmp scaleup
http://www.garlic.com/~lynn/lhwemail.html#medusa

another scenario of running a subsystem in a paged virtual address space ... which believed that the virtual address space was really memory ... was when the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
originally did the port of apl\360 to cms for cms\apl. The problem was that apl\360 believed its "workspace" was resident in real storage and had a storage allocation strategy that would assign a new storage location for every assignment statement ... until it had exhausted the available workspace storage ...
at which point it would do garbage collection and collapse all allocated locations into contiguous memory ... and then start all over again. It wasn't too bad to repeatedly use all of a (real storage) 16kbyte swapped workspace. However, in the cms virtual address space environment, the available workspace could easily be several mbytes (or even nearly all of 16mbytes) ... this would be under cp67 on a 360/67 with typically 512kbytes to 1mbyte of real storage. very quickly it was realized that the apl\360 storage management and garbage collection implementation had to be significantly reworked to move it to a virtual memory environment (a toy sketch of the allocation behavior appears at the end of this post).

it turns out one of the early major uses of cms\apl on the cambridge cp67 machine was the business planning people in armonk. prior to cms\apl, apl\360 with only 16kbyte-32kbyte workspace sizes didn't provide much room for working on any real world problems. significantly opening up the apl workspace size with cms\apl allowed work on some real world problems. the business planning people loaded the most sensitive corporate information ... detailed customer information ... on the cambridge machine and ran sophisticated business modeling applications implemented in apl. for other drift, this represented some interesting security issues ... since the cambridge system was also being used by numerous students and others from colleges and universities in the cambridge area. recent post on that particular topic drift:
http://www.garlic.com/~lyn
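a toy sketch (mine) of the apl\360 allocation behavior described above: every assignment takes fresh storage until the workspace is exhausted, then garbage collection compacts everything. in a 16kbyte real-storage workspace that's cheap; in a multi-mbyte demand-paged workspace the same strategy cycles through every page of the workspace before each collection ... guaranteeing a working set the size of the workspace.

    class Workspace:
        def __init__(self, size):
            self.size = size          # bytes in the workspace
            self.next = 0             # bump allocator: always fresh storage
            self.live = {}            # name -> (offset, length)

        def assign(self, name, length):
            if self.next + length > self.size:
                self.garbage_collect()  # sweeps the whole workspace
            self.live[name] = (self.next, length)
            self.next += length

        def garbage_collect(self):
            # compact all live values to the bottom of the workspace;
            # on a paged system this touches every allocated page
            offset = 0
            for name, (_, length) in self.live.items():
                self.live[name] = (offset, length)
                offset += length
            self.next = offset

    ws = Workspace(16 * 1024)         # fine when swapped as a unit
    for i in range(10_000):
        ws.assign("x", 64)            # same variable, fresh storage each time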
Re: Virtual Storage implementation
[EMAIL PROTECTED] (Rugen, Len) writes: Virtual storage isn't exclusive to MVS - z/OS. One of the best presentations I recall was in a VM Performance and Tuning class. Together with storage protection keys, page tables can be built to allow different "users" to have various parts of private, shared for read and shared for update storage. (At least update if you're friendly with the think king and he lets you in key 0).

the 360/67 was a machine that came with virtual memory as standard ... it could be viewed somewhat as a 360/65 with a DAT box bolted on the side ... although the 360/67 multiprocessor was significantly more sophisticated than the 360/65 multiprocessor (for instance, all 360/67 processors in a multiprocessor complex could address all channels ... which wasn't true of 360/65 multiprocessors) ... some multiprocessor digression from a recent thread
http://www.garlic.com/~lynn/2007o.html#37 Each CPU usage

some of the people at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
were worried about some of the virtual memory issues ... there were historical comments that atlas virtual memory had never worked well ... misc. past posts mentioning atlas
http://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
http://www.garlic.com/~lynn/2001h.html#26 TECO Critique
http://www.garlic.com/~lynn/2003b.html#1 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003m.html#34 SR 15,15 was: IEFBR14 Problems
http://www.garlic.com/~lynn/2005o.html#4 Robert Creasy, RIP
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
http://www.garlic.com/~lynn/2007g.html#36 Wylbur and Paging
also a reference to a cp67 and vm370 historical paper
http://www.princeton.edu/~melinda/

anyway, as a result of the concerns about virtual memory, cambridge modified a 360/40 with custom virtual memory hardware ... prior to 360/67 availability. the cp/40 virtual machine system was built for the custom 360/40 ... and was remapped to cp/67 when 360/67 machines became available.

virtual memory hardware support was eventually going to be made available on 370s ... although with only 24-bit virtual memory addressing ... the 360/67 had support for both 24-bit and 32-bit virtual memory addressing. originally 370 virtual memory was going to have a lot more features, and the translation of the cp67 virtual machine system to the vm370 virtual machine system was going to include use of some of these additional features. one specific feature that was going to be used was virtual memory shared segments, allowing the same shared segment to appear in multiple different virtual address spaces ... and be read/only protected.

retrofitting 370 virtual memory hardware support to the 370/165 ran into some schedule delays ... in order to make up those delays, lots of 370 virtual memory features were dropped ... and the other machines that had already implemented the full 370 virtual memory support had to remove the additional features. in the escalation meetings about the trade-off of delaying 370 virtual memory availability by six more months (because of hardware implementation issues for the 370/165) or shipping a subset six months earlier ... the favorite son operating system took the position that they didn't need any of the additional features.
this had a fairly big impact on the vm370 implementation, which included coming up with an emulation of shared segment protection using a kludge involving storage keys.

the initial translation of os/360 MVT to a virtual memory environment (for os/vs2 svs) involved creating a single 16mbyte virtual address space and hacking simple paging support into the side of MVT. Then CCWTRANS (from cp67) was integrated into the MVT kernel to provide for channel program translation (i.e. handle all the stuff of taking the application-space channel program that had been created with virtual addresses ... creating a copy substituting real addresses, pinning the associated pages, and all the rest of the gorp; a toy sketch of the translation appears at the end of this post).

misc. past posts mentioning the 370/165 virtual memory implementation schedule problems and the impact on the vm370 implementation
http://www.garlic.com/~lynn/2000.html#59 Multithreading underlies new development paradigm
http://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
http://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
http://www.garlic.com/~lynn/2004p.html#8 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2004p.html#9 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2004p.html#14 vm/370 smp support and shared segment protection hack
http://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005.
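a minimal sketch (mine, not the cp67 ccwtrans code) of what channel program translation has to do: copy the channel program built with virtual addresses, substitute real addresses, and pin the touched pages so they can't be paged out while the i/o is in flight. the page table and ccw layout are reduced to a toy form, and page-crossing data areas are ignored.

    PAGE = 4096
    page_table = {0: 7, 1: 3, 2: 9}    # virtual page -> real frame (toy)
    pinned = set()

    def translate_ccw(ccw):
        # ccw: (opcode, virtual_address, length); assume no page crossing
        op, vaddr, length = ccw
        vpage, offset = divmod(vaddr, PAGE)
        frame = page_table[vpage]       # would page-in/allocate on a miss
        pinned.add(frame)               # pin: frame can't be stolen mid-i/o
        return (op, frame * PAGE + offset, length)

    def translate_channel_program(ccws):
        # build the "shadow" channel program the real channel executes
        return [translate_ccw(ccw) for ccw in ccws]

    virt_prog = [(0x02, 0x0100, 80), (0x02, 0x1100, 80)]  # two reads
    print(translate_channel_program(virt_prog))
    pinned.clear()   # i/o complete -> unpin the frames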
Re: Each CPU usage
[EMAIL PROTECTED] (Thompson, Steve) writes: Imagine, you have a 3081 at 100% and you upgraded to a 3084 (basically you added the other 3081) and you are still at 100%. Or you have a 3033 and you went to a 470/V8. [I'm not saying these were the systems, just using them as examples.]

3081 was two processors ... and 3084 was a pair of 3081s (four processors). the basic 370/3033/308x multiprocessor cache coherency started out slowing the processor speed by 10% to allow for cross-cache chatter (i.e. raw two-processor thruput was 1.8 times raw single-processor thruput). this is independent of any actual cache invalidates that were occurring (i.e. just providing for basic cross-cache communication). 3084 was even worse, since each processor cache had to listen for x-cache chatter from three other processors (rather than just one other processor). 308x wasn't even going to have a single-processor version ... however, eventually a single-processor 3083 did ship. This was primarily motivated by ACP/TPF, which didn't have multiprocessor support at the time (the base 3083 processor was almost 15 percent faster than one of the 3081 processors, since the multiprocessor x-cache chatter slowdown was eliminated).

in the 3081 time-frame ... both vm and mvs had kernel storage re-orgs to carefully align storage on cache-line boundaries (and multiples of cache lines). this was to eliminate a lot of cache-line "thrashing" where two different storage locations overlapped in the same cache line (and different processors could be simultaneously operating on the two storage locations). This kernel storage re-org was claimed to improve system thruput by something over five percent.

the other example was a major restructuring of the vm multiprocessor support between r6 and sp1. the issue was that since acp/tpf didn't have multiprocessor support, there was a lot of acp/tpf running under vm370 on 3081s. for dedicated acp/tpf 3081 operations, that meant they either ran two copies of acp/tpf (in two different virtual machines) or one of the processors sat idle most of the time. for the latter case, the multiprocessor restructuring attempted to get (some amount of) virtual machine kernel processing running on the "idle" processor (overlapped with acp/tpf execution on the other processor). This involved introducing a lot of signal processor instructions to wake the possibly idle processor to get busy on some execution and then return to executing the (acp/tpf) virtual machine (the specific scenario was overlapping siof instruction emulation and channel program translation with the acp/tpf virtual machine execution).

the standard virtual machine multiprocessor support was designed for efficiently handling lots of totally independent operations. the sp1 reorganization (for acp/tpf overlapped execution) was generic for all possible execution environments ... and introduced quite a bit of overhead (in the acp/tpf scenario it was justified on the basis that it improved overall thruput ... since there was an otherwise idle processor). a lot of existing customers moving from r6 multiprocessor support to sp1 multiprocessor support found a significant increase in multiprocessor overhead ... a combination of the significant increase in signal processor instructions, the corresponding interrupts, and a lot of new "spin-lock" activity (just the "new" spin-locks measured as much as ten percent of each processor). "spin-locks" were typically used to provide exclusive execution for lots of kernel code.
global kernel "spin-locks" were typical of a lot of 60s, 70s and even 80s operating systems (i.e. a single kernel lock that the kernel would attempt to obtain at entry into kernel mode ... interrupt routines, etc ... and spin/loop until it obtained the lock). at the science center,
http://www.garlic.com/~lynn/subtopic.html#545tech
charlie was working on fine-grain multiprocessing kernel locks (lots of short execution paths rather than locking the whole kernel) for cp67 when he invented the compare-and-swap instruction (the CAS mnemonic was chosen because those are charlie's initials ... the compare-and-swap name had to be invented to have something that matched CAS).

the attempt to get CAS added to 370 architecture was initially rebuffed ... the favorite son operating system considered the test&set locking instruction (used for os/360 multiprocessor kernel spin-locks) more than sufficient for 370 multiprocessing support. the challenge was to come up with a non-multiprocessor use for the compare-and-swap instruction ... in order to get it included in 370 architecture. lots of past posts mentioning multiprocessors and/or the compare-and-swap instruction
http://www.garlic.com/~lynn/subtopic.html#smp
this is where the uses for a lot of multithreaded application software (regardless of whether running on multiprocessor hardware) were invented ... as well as the programming notes that now appear in the appendix of the principles of operation ... i.e. A.6 Multiprogramming and Multiprocessing Examples
http://publ
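a sketch (mine) in the spirit of those programming notes: the classic compare-and-swap update loop ... fetch the old value, compute the new one, store only if nothing changed in between, and retry on failure. on 370 the instruction itself is atomic; python has no cas instruction, so the sketch fakes the atomicity with a lock purely to be runnable.

    import threading

    _guard = threading.Lock()   # stands in for the hardware's atomicity
    counter = [0]

    def compare_and_swap(cell, old, new):
        # stores new and returns True only if cell still holds old
        with _guard:
            if cell[0] == old:
                cell[0] = new
                return True
            return False

    def add_one(cell):
        while True:                      # the A.6-style retry loop
            old = cell[0]                # fetch current value
            if compare_and_swap(cell, old, old + 1):
                return                   # update won; otherwise retry

    threads = [threading.Thread(
                   target=lambda: [add_one(counter) for _ in range(1000)])
               for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter[0])  # 4000: no lost updates, no lock held across the
                       # whole read-modify-write sequence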
Outsourcing loosing steam?
On Aug 15, 11:11 am, [EMAIL PROTECTED] (daver++) wrote: "Around 1:30 p.m., the CPB experienced problems accessing its database containing information on international travelers. Assuming this to be a wide-area network problem, CBP called Sprint, its carrier, to test the lines. After three fruitless hours of remote testing, Sprint finally sent technicians on-site. Another three hours passed before Sprint finally concluded that transmission lines were not the problem, meaning the problem was inside the CBP local network. After more hours of troubleshooting, the issue was finally resolved at 11:45 p.m. The real culprit: a failed router."
http://blogs.zdnet.com/projectfailures/?p=346
20,000 stranded because it took over ten hours to diagnose and replace a failed router. I used to be a mainframe guy that inherited the network side, so they cut me some slack. BUT- I can guarantee that there wasn't anywhere near enough slack for me to get off with taking that long to replace a router. I would have been tarred, feathered and run out of town. It seems like basic due diligence wasn't even followed. Yes, Sprint added to the problem, but Sprint never should have been called. Why call Sprint before determining that the problem isn't on _your_ end? It is all a bit silly, and it frightens me a bit that our airlines have this level of quality.

note that inadequate diagnostic processes in packet networks contribute significantly to the difficulty of diagnosing such problems. some of the older protocols were much more circuit oriented ... and could rely much more on telco circuit diagnostics to identify problems. we experienced this when we were building reliable network-based infrastructures in the 80s ... and attempting to do some work on the NSFNET infrastructure. misc. collected old emails
http://www.garlic.com/~lynn/lhwemail.html#nsfnet
the effort to some extent met with quite a bit of corporate resistance ... somewhat highlighted in this old email:
http://www.garlic.com/~lynn/2006w.html#email870109
in this post
http://www.garlic.com/~lynn/2006w.html#21

note that while tcp/ip is the technology basis for the modern internet, nsfnet was the operational basis (interconnections of networks, i.e. internetworking), and cix was the business basis. in the above reference there is somebody in the corporation proposing that sna could be the basis for nsfnet ... the main issue was the ability to provide internetworking ... the interconnection of large numbers of different networks.

we later investigated several of the issues in more detail when we were doing the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
which required a detailed threat and vulnerability study for high availability environments. we later got to use some of that experience when we were called in to consult with a small client/server company that wanted to do payments on their server
http://www.garlic.com/~lynn/subnetwork.html#gateway
they had this technology called SSL, and the effort is now frequently referred to as electronic commerce. the initial, simple, obvious solution was to move the payment transaction message formats from their existing circuit-based environment to a packet-based environment. however, that totally ignored much of the availability, diagnostic, and recovery processes that were available in the circuit-based environment. We eventually developed a set of compensating processes and procedures attempting to make the availability of the packet-based environment somewhat comparable to the existing circuit-based environment.

for a little topic drift ...
recent comment on availability, diagnosing and recovery in one of the ATC modernization efforts
http://www.garlic.com/~lynn/2007o.html#18

misc past posts on estimates of 4-10 times the effort to take a well written application and turn it into an industrial strength service (in the case of the payment gateway, it was closer to ten times, including inventing various diagnostic and recovery processes to compensate for moving the payment gateway to a packet-based environment)
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004m.html#51 stop worrying about it offshoring - it's doing fine
http://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definit
Re: some questions about System z PR/SM.
[EMAIL PROTECTED] (R.S.) writes: No. PR/SM is microcode - a code under OS. Sometimes called "firmware. z/OS runs in LPAR. Although it can obtaine i.e. LPAR name, it is still "unaware" from PR/SM and LPARs features. z/OS works in "virtual machine" (Logical PARtition) and does know that machine. However some z/OS application, called HCD allows you to define "hardware configuration" - a set of I/O definitions ("manual Plug and Play") as well as division of CPC into LPARs. However the resulting file is simply transmitted to Support Element (notebook inside CPC) and it is interpreted by PR/SM. Just to complement: Another part of the file is also read by z/OS during IPL process (however this file is read from regular DASD, not SE). The prepared LPAR can be further customized on SE and can be used for Linux.

originally pr/sm was done on the 3090 ... somewhat in response to amdahl's hypervisor. it basically is a subset of virtual machine capability moved into "microcode". in the amdahl scenario ... amdahl had added a variation called "macrocode" ... a 370 instruction variation that sat part way between the "real" microcode and standard 370 machine instructions. it significantly simplified migrating virtual machine 370 code into the native machine. by comparison, 3090 pr/sm was a much more difficult task, since it involved implementation directly in the 3090 microcode. however, much of pr/sm actually leveraged the SIE instruction, which was used by the virtual machine operating system to implement virtual machine mode. pr/sm evolved into supporting multiple concurrent partitions as "LPAR".

for some topic drift ... some old email discussing the amdahl hypervisor and macrocode
http://www.garlic.com/~lynn/2006b.html#email810318
in this post
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
some old email somewhat comparing 3081 sie and 3090 sie
http://www.garlic.com/~lynn/2006j.html#email810630
in this post
http://www.garlic.com/~lynn/2006j.html#27 virtual memory
the above posts also include numerous other references to sie, pr/sm, lpars, etc.
Re: PCI Compliance - Encryption of all non-console administrative access.
Lynn Wheeler <[EMAIL PROTECTED]> writes: for some topic drift ... part of the issue is that the majority of such compromises have involved data-at-rest ... not data-in-transit ... and lots of implementations don't provide the access control that may be found in mainframe installations ... so encrypting the data at risk might be viewed as a compensating process for inadequate access control. the other part of it is that studies show something like 70 percent of such compromises have involved insiders (who may already have some level of access).

re:
http://www.garlic.com/~lynn/2007n.html#85 PCI Compliance - Encryption of all non-console administrative access.
... the above post may have only made it to the newsgroup, not the mailing list.

for some additional drift, a recent post in an ongoing financial crypto blog thread on (effectively) the decline in security and assurance over the past several decades
http://www.garlic.com/~lynn/aadsm27.htm#53 Doom and Gloom spreads, security revisionism suggests "H6.5: Be an adept!"
Re: How old are you?
[EMAIL PROTECTED] (Robert Fake) writes: I'm 46 years old now. I started Assembler programming on a 360 when I was 18. Best move I ever made. I've always thought that the mainframes were here to stay and the move to "non-mainframe" platforms was driven significantly by the trade rags in the 90's. Our product and services business is dedicated to support of the mainframe, so I hope it will be around for many, many more years.

i was an undergraduate and had been invited to the spring '68 share meeting for the cp67 announcement ... 40 yrs next spring ... recent references:
http://www.garlic.com/~lynn/2007n.html#91 Combining VM list threads
http://www.garlic.com/~lynn/2007n.html#92 vm 35th b'day at share in san diego next week

i got to present some of the work i had been doing on a total rework of os/360 stage2 sysgen and some early rework that i had done on cp67. i also got to give a more detailed presentation at the aug68 share meeting in boston. misc. past references:
http://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
Re: IBM obsoleting mainframe hardware
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Chris Mason) writes:
> One of the presentations was someone from a big UK bank who defended
> IBM having made the 155 and 165 available and relatively shortly
> afterwards having announced the 158 and 168 - together with the
> relatively expensive DAT box extension to the 155 and 165. I hope I'm
> remembering the details about right.
>
> I heard about this only second-hand but I believe the argument was
> that IBM was right to offer the enhanced performance of the 155 and
> 165 as soon as it could in spite of the fact that it knew that the
> virtual storage models were well advanced in development. I guess
> there was a shadow of the "it's illegal to preannounce" principle
> hanging over this.

re:
http://www.garlic.com/~lynn/2007n.html#31 IBM obsoleting mainframe hardware
http://www.garlic.com/~lynn/2007n.html#34 IBM obsoleting mainframe hardware

370/165 ... announced jun70
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3165.html
370/168 ... announced aug72
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3168.html

for virtual memory ... hacking virtual memory support into MVT (for VS2/SVS) was needed in addition to retrofitting the virtual memory hardware to the 165s (there were significant software as well as hardware schedule issues). this is similar to previous comments about the *crash* program to get 370-xa out (after the FS project was killed) and POK, in 1976, convincing the corporation to shut down the vm370 product and transfer all the developers to POK as part of being able to make the mvs/xa (software) schedule (although Endicott was eventually able to save part of the vm370 product mission).

i've mentioned before the (370 virtual memory) prototype work that went on in pok, using 360/67s and hacking "single address space" virtual memory into the side of MVT ... as well as cobbling cp67's (ccw translation) CCWTRANS into MVT ... i.e. cp67 had started out having to build "shadow" channel programs with real addresses for the virtual machine's channel programs; all the (MVT) channel programs passed via EXCP would be equivalent "virtual address" channel programs ... requiring similar translation (and misc. other things like page locking/pinning). a rough sketch of this kind of channel program translation appears at the end of this post.

recent posts about using CP67's CCWTRANS as part of turning MVT into os/vs2 (svs)
http://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question

The other part was that there was a lot of work to retrofit virtual memory to the 165 ... so much so that they ran into schedule problems. In order to buy back six months in the 165 virtual memory schedule, there was an escalation dropping several features from the original 370 virtual memory architecture. Once the 165 engineers had won that battle, all the other processors (that had already completed their virtual memory implementations) had to go back and remove the dropped features.
recent posts mentioning the 165-ii schedule issues and the impact of dropping features from the original 370 virtual memory architecture
http://www.garlic.com/~lynn/2007f.html#7 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#16 more shared segment archeology
http://www.garlic.com/~lynn/2007j.html#43 z/VM usability
http://www.garlic.com/~lynn/2007k.html#28 IBM 360 Model 20 Questions
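as promised above, a rough sketch (in C, with purely illustrative structures and names) of the kind of "shadow" channel program translation being described ... the real cp67 CCWTRANS and the os/vs2 EXCP translation were 360/370 assembler, and also had to handle data chaining, TIC loops, IDAL lists, buffers spanning discontiguous page frames, etc:

    /* Much-simplified, hypothetical sketch of dynamic channel program
     * translation ("shadow" channel programs).  Illustrative only.   */
    #include <stddef.h>
    #include <stdint.h>

    struct ccw {                 /* one channel command word           */
        uint8_t  opcode;
        uint32_t data_addr;      /* data address: virtual on input     */
        uint8_t  flags;
        uint16_t count;
    };
    #define CCW_FLAG_CC 0x40     /* command chaining: more CCWs follow */

    /* assumed service from the paging supervisor: pin the page holding
     * vaddr in real storage and return the corresponding real address */
    extern uint32_t fix_and_translate(uint32_t vaddr);

    /* Build the shadow copy of a virtual-address channel program, with
     * every data address translated to a pinned real address.  Returns
     * the number of CCWs copied.                                       */
    size_t translate_channel_program(const struct ccw *virt_cp,
                                     struct ccw *shadow_cp, size_t max)
    {
        size_t i = 0;
        while (i < max) {
            shadow_cp[i] = virt_cp[i];                   /* copy CCW   */
            shadow_cp[i].data_addr =
                fix_and_translate(virt_cp[i].data_addr); /* pin+xlate  */
            /* NOTE: assumes each data area fits within one page; a
             * real translator splits transfers spanning discontiguous
             * real frames (e.g. via an IDAL)                          */
            if (!(virt_cp[i].flags & CCW_FLAG_CC))   /* end of chain?  */
                return i + 1;
            i++;
        }
        return i;
    }

this copying plus page pinning is the per-I/O overhead that EXCPVR (application-supplied real-address channel programs) avoids.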
Re: IBM obsoleting mainframe hardware
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Rick Fochtman) writes:
> If the business needs are being satisfied, with reasonable economy,
> who cares whether the box is "the lastest and greatest"? Future
> business needs may or may not dictate upgrades. YMMV

a little search engine mainframe surfing for vm/4341 turned up this story about a vm/4341 keeping the nyse running well thru the 80s ... apparently with an old mvt system that had been moved from 360/50s
http://www.raylsaunders.com/asmwork.html
that i mentioned in this recent post:
http://www.garlic.com/~lynn/2007n.html#26 VM system kept NYSE running

a quick check just this moment turns up some problem with the URL ... but (as always) the wayback machine knows
http://web.archive.org/web/20060220161415/http://raylsaunders.com/asmwork.html

for other topic drift ... we spent some amount of time in the early 90s talking to SIAC about using ha/cmp for much of the work that the tandems were doing (see mainframe MDS-II being replaced with tandem MDS-IIIs in the above reference) ... lots of ha/cmp references:
http://www.garlic.com/~lynn/subtopic.html#hacmp
this was in the period that we were also working on ha/cmp and trying to cram as much computing as possible into a dense footprint, old email references:
http://www.garlic.com/~lynn/lhwemail.html#medusa
I had actually attempted to do something similar nearly a decade earlier, trying to cram as many 370 chipsets (each with about 168-3 thruput) as possible into racks.

the old 8-10 yr cycle for mainframe generations (and obsolescence) really showed up when the early 70s FS project was killed
http://www.garlic.com/~lynn/subtopic.html#futuresys
since it was going to be something completely different, much of the work on 370 related stuff pretty much went away. after FS was killed, there was a scramble to get stuff back into the 370 product pipeline. 370-xa/3081 was going to take eight yrs (early 80s) ... so they had to find something else that could be done in possibly half that time. the resulting 303x was quite a bit of warmed-over 370. they took the integrated channel microcode from the 158 and made it a stand-alone box called the channel director. a 158 paired with a channel director became the 3031 (with the integrated channel microcode running on a different processor). the 168 became the 3032, repackaged to work with the channel director. the 3033 started out as the 168 wiring diagram implemented with faster chip technology. a straight-forward mapping would have been just 20 percent faster than the 168 ... other tweaks done during development got the 3033 up to 1.5 times the 168.

part of the issue was that up to the 80s, lots of technology was on a 7-10yr cycle ... where in the 80s, the rate of change started to accelerate, for a time leaving some mainframe technology in the dust. note that it wasn't just mainframes. circa 1990, the US automobile (C4) task force looked at being able to accelerate (cut in half) the US automobile product cycle from 7-8yrs (in an attempt to get on a level playing field with some of the imports). it was interesting to watch what the mainframe people were saying in the meetings (since they were effectively in the same boat). one of the things that the automobile industry had been doing was running parallel new-product projects offset by four yrs (so it appeared that something new was coming out every four yrs). the analogy for mainframes ...
was that as soon as the 3033 was out the door, they started on the 3090 (an 8yr cycle overlapping the 3081, with a 4yr offset). in a fairly stable industry this worked, since consumer tastes weren't significantly changing. However, the 8yr lag could become a real liability if there was any major shift in what the market place was looking for (giving vendors with much shorter product cycles a competitive edge).

some recent references to the C4 effort circa 1990 ... attempting to improve competitive footing vis-a-vis several imports:
http://www.garlic.com/~lynn/2007f.html#50 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007g.html#29 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007i.html#13 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007j.html#33 IBM Unionization
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Eugene Miya) writes:
> No, the most difficult competition was and is against the IBM PC.
>
> If it did so well, we'd see more evidence of it being around.
> They are not even museum pieces.

re:
http://www.garlic.com/~lynn/2007n.html#20 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM

you didn't read the zillion previous posts mentioning that the mid-range, departmental server market ... both vax/vms and 43xx volumes ... started to move to workstations and larger PCs in the mid-80s. the above referenced post mentions the previous post in the thread ... which made the same point one more time (and then later the workstations started to also lose out to PCs).
http://www.garlic.com/~lynn/2007n.html#18 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM

for instance, the 4361/4381 were expected to see large volume sales similar to those of the 4331/4341 ... which never happened. similar numbers can be seen for vax/vms ... where vax did do some volume in the mid-80s with micro-vax ... readily seen in the repeated references to a decade of vax/vms numbers, sliced & diced by model, yr, domestic, world-wide, etc
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

... the 4331s/4341s and the other mid-market players in departmental servers had very few PCs to compete with (late 70s and early 80s) ... it wasn't until you get to the follow-on machines, the 4361s/4381s (and the later vaxes), that you start to see the workstation/PC effect in the departmental server market.

one of the contributions to the PCs in the departmental server market was a project called DataHub, which was being done by the san jose disk division. Part of the software implementation was being done under work-for-hire subcontract by a group in Provo (one of the people from San Jose commuted to Provo nearly every week). At some point, the company decided to kill the DataHub project and allowed the Provo group to retain rights to everything that they had done under the work-for-hire contract. Not long after, there was a company out of Provo with a PC server offering.

misc. past posts mentioning the DataHub project:
http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
http://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
http://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
http://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
http://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness
http://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
http://www.garlic.com/~lynn/2004f.html#16 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005p.html#23 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#9 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#36 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2006l.html#39 Token-ring vs Ethernet - 10 years later
http://www.garlic.com/~lynn/2006y.html#31 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007f.html#17 Is computer history taught now?
http://www.garlic.com/~lynn/2007j.html#49 How difficult would it be for a SYSPROG ?
in the meantime, the communication division had seen a huge install base of communication controllers grow based on terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation
which was starting to break away into various kinds of client/server ... they came up with SAA ... somewhat positioned at helping preserve their communication controller market (and as a countermeasure to client/server). A problem we had in this period was that we were making some number of customer executive presentations on 3-tier (network) architecture ... and taking flames & barbs from the SAA factions
http://www.garlic.com/~lynn/subnetwork.html#3tier

other recent posts in this same thread:
http://www.garlic.com/~lynn/2007m.html#42 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#44 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#45 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#48 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#50 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#57 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
http://www.garlic.com/~lynn/2007m.html#63 The Development of the Vital IBM PC in Spite of
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

re:
http://www.garlic.com/~lynn/2007n.html#18 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM

the place where the 43xx had the most difficult competition against vax/vms was in single (one at a time) departmental server orders (as some of the SHARE studies highlighted). the cost of mid-range computers had dropped below a threshold that made them very cost-effective in departmental settings ... however, scarce people skills and costs then started to dominate as a market inhibitor. 43xx did do very well in large departmental server orders (especially with distributed, networked operation) ... where people support skills/costs could be amortized across a large number of machines. clusters of 43xx also started to impact the 3033. at one point (traditional internal politics), the head of pok manipulated east fishkill into cutting in half the allocation of a critical component needed for 43xx manufacturing. later the same person gave a talk to a large public audience and made a statement that something like 11,000 vax/vms orders should have been 43xx ... also referenced in this old post
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

and old email mentioning various 43xx issues ... including moving workload off 3033 boxes onto 4341 clusters ... and large distributed departmental server operations.
http://www.garlic.com/~lynn/lhwemail.html#43xx
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Phil Smith III) writes:
> Re RISC vs. 68K:
> Anyone who thinks the RISC chips killed the 68K is off base. They
> just need to check the dates. Intel killed the 68K. Motorola allied
> with IBM on RISC only after Intel had destroyed Motorola's market for
> the 68K.

801 was originally targeted (very) low-end ... the ROMP chip was targeted to be used in a displaywriter follow-on ... when that project was killed, the group looked around for something to save the effort ... and hit on the unix workstation market (with the displaywriter follow-on morphing into a unix workstation). much of the unix workstation market place is very numerically intensive and power hungry ... somewhat as a result, the follow-on to ROMP for that market was the large, power-hungry RIOS chipset (i.e. POWER, announced in the RS/6000). The paperweight on my desk (from the original) has six chips, and says 150 million OPS, 60 million FLOPS, and 7 million transistors.

somerset was a combined ibm, motorola, apple project to do a single-chip, PC-level 801 implementation ... the executive we reported to when we were doing ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp
went over to head up somerset. part of somerset included infusing power/pc with some of motorola's 88k (risc) technology. ROMP and RIOS were single-processor implementations with no provision for multiprocessor cache consistency. power/pc was going to be able to support cache consistency and multiprocessor operation. lots of past 801 posts
http://www.garlic.com/~lynn/subtopic.html#801

68k was still hanging in there in the 89/90 time-frame ... a couple posts with some old references from the period (raw chip volumes, business analysis, etc)
http://www.garlic.com/~lynn/2005q.html#35 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005q.html#44 Intel strikes back with a parallel x86 design
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Phil Smith III) writes:
> Re VAX vs. IBM:
> I was a central, low level member of the 4300 series. I also led the
> engineering side of the fight against the VAX. We never approached
> the installed base of the VAX machines. Never.

approach the size of the install base in number of customers? in number of machines? or competitive marketing approaching the customers that bought vaxes?

past post giving a decade of vax install numbers sliced and diced by model, yr, domestic, non-domestic, etc:
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

both 43xx and vaxes saw huge uptake in the early 80s with the growth of the departmental market ... which was starting to move to workstations and PCs by the mid-80s. as above, the big volumes for VAXes in the mid-80s were from micro-vax ... not the traditional 780 machines. lots of vaxes were customer orders for one or a very few. vaxes had an advantage here since their installation and support required a lot less effort (something that the 43xx was constantly fighting ... there were even some SHARE reports highlighting the resource requirement differences in a competitive environment). however, there were some number of large customers that ordered 43xx boxes in large lots (hundreds, even large hundreds). the resource support requirement competitive advantage (in small shops) was mitigated when amortized across a large number of boxes. old email about a specific customer ordering in the hundreds (the customer initially thought 20, but the order was finally for 210):
http://www.garlic.com/2001m.html#email790404
in this post also discussing other "departmental computing" issues from the period
http://www.garlic.com/~lynn/2001m.html#15 departmental server

lots of old email discussing various aspects of 43xx ... use for clustering and/or distributed, departmental computing
http://www.garlic.com/~lynn/lhwemail.html#43xx
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

another article on the same theme:

Leopard and Vista: Last Gasp of the Big OS?
http://news.yahoo.com/s/pcworld/133276

from above:

Twenty years from now a new generation of computer users will look back on the operating systems of today with the same bemused smile we look back at the cars of the late 1950s and early 60s. They had huge fins, were the size of a small yacht and burned up just about as much gas.

... snip ...

a few similar articles over the past yr:

Windows Vista: The last Of Microsoft's Supersized Operating Systems?
http://www.informationweek.com/blog/main/archives/2006/08/windows_vista_t.html
Windows Vista the last of its kind
http://www.techworld.com/news/index.cfm?NewsID=6718
Vista: The Last Microsoft Operating System that will Matter
http://www.realtime-websecurity.com/articles_and_analysis/2007/01/vista_the_last_microsoft_opera.html
Vista is the last of the dinosaurs
http://www.theinquirer.net/default.aspx?article=36155

other recent posts in this thread:
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All
http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#73 Operating systems are old and busted
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Morten Reistad <[EMAIL PROTECTED]> writes:
> Also, log structured file systems, the jfs and contributions to efs3,
> and huge improvements to the irq and dma routing; including some work
> in processor affinities.

metadata logging is slightly different from log structured file systems. one of the problems with log structured file systems is the periodic "garbage collection" done to consolidate files, making their records sequential and contiguous. for other drift ... during work on HA/CMP
http://www.garlic.com/~lynn/subtopic.html#hacmp
we hired one of the people responsible for the BSD log structured filesystem implementation to consult on doing a "geographically distributed filesystem".

JFS was originally done by people working on 801/AIXV3. 801 early on had a definition/implementation for "database memory" ... i.e. the hardware could keep track of fine-grain changes (size on the order of cache-lines). Just load up data into the memory mapped infrastructure ... provide the COMMIT boundaries ... and eliminate needing to sprinkle "log" calls thruout the code. At commit, just run thru the changed memory indications ... collecting the data lines needing logging. There had been various kinds of conflict between the unix development group in palo alto and the group in austin. The palo alto group took JFS and ported it to non-801 platforms ... having to retrofit the logging calls into the software (since those platforms lacked database memory hardware). It turned out that the version with explicit logging calls ran faster than the original implementation (even on the same 801 hardware platform) ... the commit-time scanning of memory for changes tended to be higher overhead than the explicit log calls (a small sketch contrasting the two approaches appears at the end of this post). The remaining justification for database memory is then the implementation simplification ... somewhat akin to some of the pushes for parallel programming (except parallel programming is frequently explicitly about performance; not trying to trade off performance against simplicity).

some of the database memory stuff can be found under the heading of transactional memory ... some posts mentioning transactional memory:
http://www.garlic.com/~lynn/2005r.html#27 transactional memory question
http://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D
http://www.garlic.com/~lynn/2007b.html#44 Why so little parallelism?

misc. past posts mentioning log structured filesystems
http://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/2000c.html#24 Hard disks, one year ago today
http://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed?
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
http://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
http://www.garlic.com/~lynn/2006j.html#3 virtual memory
http://www.garlic.com/~lynn/2006j.html#10 The Chant of the Trolloc Hordes
http://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy)
http://www.garlic.com/~lynn/2007i.html#27 John W. Backus, 82, Fortran developer, dies

some past posts mentioning database memory
http://www.garlic.com/~lynn/2002b.html#33 Does it support "Journaling"?
http://www.garlic.com/~lynn/2002b.html#34 Does it support "Journaling"?
http://www.garlic.com/~lynn/2003c.html#49 Filesystems
http://www.garlic.com/~lynn/2003d.html#54 Filesystems
http://www.garlic.com/~lynn/2005n.html#20 Why? (Was: US Military Dead during Iraq War
http://www.garlic.com/~lynn/2005n.html#32 Why? (Was: US Military Dead during Iraq War
http://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
http://www.garlic.com/~lynn/2006y.html#36 Multiple mappings
http://www.garlic.com/~lynn/2007i.html#27 John W. Backus, 82, Fortran developer, dies
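to make the trade-off concrete, here is a toy sketch (in C, with purely hypothetical names ... not the JFS source, which was 801-based): one path issues an explicit log call at each update; the other just marks regions dirty (the way "database memory" hardware tracked changed cache lines) and scans for them at COMMIT ... the scan being the overhead that made the explicit-call version faster in practice:

    #include <stdint.h>
    #include <string.h>

    #define NLINES 1024
    #define LINESZ 128              /* "cache line" granularity        */

    static uint8_t data[NLINES][LINESZ];
    static uint8_t dirty[NLINES];   /* database-memory style tracking  */

    extern void log_write(const void *buf, int len);  /* assumed logger */

    /* style 1: explicit log call at each update (retrofitted JFS)     */
    void update_logged(int line, const uint8_t *newval)
    {
        log_write(newval, LINESZ);  /* log the change as it happens    */
        memcpy(data[line], newval, LINESZ);
    }

    /* style 2: just mark dirty; defer all logging to commit           */
    void update_marked(int line, const uint8_t *newval)
    {
        memcpy(data[line], newval, LINESZ);
        dirty[line] = 1;            /* hardware would do this part     */
    }

    /* commit for style 2: scan everything for changed lines -- this
     * scan is the commit-time overhead described above                */
    void commit_marked(void)
    {
        for (int i = 0; i < NLINES; i++)
            if (dirty[i]) {
                log_write(data[i], LINESZ);
                dirty[i] = 0;
            }
    }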
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Phil Smith III) writes:
> Which is the end of the story, boys and girls. For, while so many
> people focus on how the PC has damaged the mainframe, the mainframe
> still stands tall. What the PC was meant to destroy, it did destroy -
> the minis and superminis. DEC went from top of the heap (Queen
> Elizabeth in Boston harbor for DECWorld) to non-existence in less than
> 10 years. DG is no more. Wang is no more. The PC destroyed them
> all.

we were spending some time in SCI (as well as FCS and HIPPI) meetings. both Sequent and DG built SCI machines using boards with four (intel) processors each ... for 256-processor numa machines (convex built an sci machine with two hp/risc processors per board ... for a 128-processor numa machine). both DG and sequent are gone ... sequent having been absorbed by ibm ... and some recent references suggest that the only surviving sequent technology may be found in some contributions to linux. HP's superdome may or may not be considered the exemplar follow-on. a couple recent posts on sci/numa machines:
http://www.garlic.com/~lynn/2007g.html#3 University rank of Computer Architecture
http://www.garlic.com/~lynn/2007m.html#13 Is Parallel Programming Just Too Hard?

wang signed a deal with austin (and some of the austin people actually left and went to work for wang) to use the rs/6000 as their hardware platform (getting out of the hardware business).

in some of the a.f.c. posts, i've frequently pointed out that the late 70s and early 80s saw a significant uptake of mid-range machines in the departmental server market segment ... both vm/43xx and vax/vms ... with vm/43xx actually having the larger install base (in part because there were numerous large customer orders for multiple hundreds of 43xx machines at a time). by the mid-80s that market segment was starting to be taken over by workstations and large PCs (with a corresponding drop-off in sales of 43xx and vax machines). Later the more powerful PCs would also take over much of the workstation market. misc. old email mentioning various happenings around 43xx
http://www.garlic.com/~lynn/lhwemail.html#43xx

there had been anticipation that the introduction of the 4361/4381 would see uptake comparable to the 4331/4341 ... but by then the market was already starting to move to workstations and larger PCs. a couple past posts giving domestic and world-wide vax numbers, sliced & diced by model and yr (post-85, the numbers are primarily micro-vax):
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
http://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2006k.html#31 PDP-1
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Chris Mason) writes:
> ... and thereby put the wait light out[1]. Having been brought up with
> DOS (the original DOS), and, generally, S/360 Model 30s, I was used to
> knowing how busy the machine was by observing the flickering of the
> wait light.

my first undergraduate programming job was to port MPIO from the 1401 to the 360/30. MPIO provided the tape<->unitrecord/printer/punch front-end for the university's 709 running ibsys. it was possible to operate the 360/30 in 1401 emulation mode ... so i conjecture that the exercise was purely to get familiarity with the new 360 ... which would eventually replace both the 709 and the front-end machine with a 360/67. i got to design and implement my own monitor, device drivers, interrupt handlers, storage management, console interface, etc ... and eventually had an assembler program of approx. 2000 cards.

running os/360 pcp (r6) ... the "stand-alone" version assembled in about 20-25 minutes elapsed time. I had conditional assembly that would also generate a program to run under PCP, using open/close and DCB macros. There were five DCB macros, and you could tell from the wait light pattern when the assembler was processing a DCB macro ... each one took 5-6 minutes elapsed time ... so the os/360 conditional assembly version took an extra 30 minutes (making the assembly nearly an hour total).
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Scott Lurndal) writes:
> Sure there are. Start with WindRiver. Then progress to MCP/AS, z/OS,
> Exec/1100 and so forth. The whole world isn't microsoft, you know.

the person at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
who did the technology that was used in the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
and part of the technology that was used by customers and in bitnet
http://www.garlic.com/~lynn/subnetwork.html#bitnet
... the base technology was extremely layered, with effectively something akin to gateway-like function ... it not only provided peer-to-peer networking ... but also easily provided emulators that could talk to HASP/JES2 ... lots of posts mentioning hasp/jes2
http://www.garlic.com/~lynn/subtopic.html#hasp
... a recent x-over reference:
http://www.garlic.com/~lynn/2007m.html#69 Operating systems are old and busted

by the bitnet time-frame ... internal corporate politics was such that shipped support was restricted to just the HASP/JES2 interfaces ... even tho the native peer-to-peer implementations were much more efficient (and continued to be used internally for some time). in any case, the implementation was one of those service virtual machines (virtual appliances) ... more x-over
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All
http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted
and included in the implementation was a very small & tightly coded multitasking monitor (for dispatch/scheduling). now, many yrs later, the same person had the opportunity to be involved in a project that involved one of the major RTOSes ... and he happened to be looking thru the C source, which seemed familiar. Eventually checking an old listing of the multitasking monitor ... it was apparent that they had done a nearly line-by-line translation of his 360 assembler code into C ... including preserving all the original comments.
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> This is fascinating history, Lynn. I remember using the Prepare command in
> channel programs for the 2701 that we used in the TUCC network ca. 1967 on.
>
> Speaking of old, busted systems and ones that were killed (like FS), does
> anybody know anything about the new operating system that Amdahl was trying
> to build? I had a phone interview with an Amdahl person in SEP 1987 who
> mentioned this OS and I started salivating at the prospect of working on
> that project. The next thing I knew the project had been killed.

re:
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All
http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#68 Operating systems are old and busted

Simpson (of HASP fame) had left the HASP group and started an internal operating system project called RASP. misc. old posts mentioning hasp
http://www.garlic.com/~lynn/subtopic.html#hasp
including the observation that much of the source for the HASP/JES2 internal networking support (before being released as a product) carried the letters "TUCC" in cols. 68-71. misc. past posts mentioning the internal network (which was mostly vm370 based ... with a few mvs/jes2 around the perimeter)
http://www.garlic.com/~lynn/subnetwork.html#internalnet

RASP had some of the characteristics of TSS/360, being an extremely page-mapped oriented operating system (sharing some characteristics with FS, s/38, as/400) ... but purely 370 based. Later, Simpson left and became an Amdahl fellow in Dallas ... starting a similar project. There was some litigation as a result, which included code reviews (to see if any RASP code had leaked out). Some of this overlapped with the development of Au/GOLD (aka UTS) ... and there appeared to be some amount of ambivalence between the two groups. Knowing some of the people in both organizations ... I even tried to do some mediation (ignore for the moment that i didn't work for them and knew about unannounced, internal projects). One of the examples I tried to use was the UNIX TSS370 (SSUP) effort that was being done for internal AT&T use. A lot of the 370 UNIX work being done in the 80s was done under VM ... not so much because of the point in the original subject of this thread ... but because VM370 would provide hardware EREP (if necessary) on behalf of the operating system in a virtual machine ... and an effort to fit UNIX out with 370 EREP was several times larger than any of the efforts just porting UNIX to 370. The TSS370/SSUP strategy being done for AT&T had all the low-level TSS/370 kernel hardware support ... but with UNIX layered on top (an alternative approach to giving the unix environment the full 370 EREP support). In any case, I suggested that the two groups might be able to form a marriage of convenience doing something similar. Didn't happen. misc.
past posts mentioning tss370/ssup, rasp, aspen, au/gold/uts, etc http://www.garlic.com/~lynn/95.html#1 pathlengths http://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party http://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology http://www.garlic.com/~lynn/99.html#2 IBM S/360 http://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art http://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again http://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again http://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort) http://www.garlic.com/~lynn/2000c.html#8 IBM Linux http://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs) http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc. http://www.garlic.com/~lynn/2001e.html#19 SIMTICS http://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator http://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370 http://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels http://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers? http://www.garlic.com/~lynn/2001l.html#7 mainframe question http://www.garlic.com/~lynn/2001l.html#8 mainframe question http://www.garlic.com/~lynn/2001l.html#9 mainframe question http://www.garlic.com/~lynn/2001l.html#11 mainframe question http://www.garlic.com/~lynn/2001l.html#17 mainframe question http://www.garlic.com/~lynn/2001l.html#18 mainframe question http://www.garlic.com/~lynn/2001l.html#20 mainframe question http://www.garlic.com/~lynn/2002d.htm
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to alt.folklore.computers,bit.listserv.ibm-main as well.

re:
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All
http://www.garlic.com/~lynn/2007m.html#67 Operating systems are old and busted

part of the timesharing issue was whether the off-shift usage charges (or just plain usage) could justify the off-shift operational costs ... since usage tended to decline significantly offshift and on weekends (although I finally got my home machine for dial-up access in mar70 ... it was a 2741 selectric, and I have effectively had online access at home ever since). lots of past posts about timesharing services ... including commercial (cp67 & vm370) timesharing service bureaus in the 60s & 70s.
http://www.garlic.com/~lynn/subtopic.html#timeshare

in the 60s & thru some of the 70s, machines tended to be leased ... and there was a system meter ... which would rack up charges as the machine was used ... even when the machine was in "wait" state but I/O was active. The 2702 "prepare" command was a mechanism to leave the terminal lines "prepared" for any terminal operation ... w/o an active I/O being apparent to the system meter (the metering condition is sketched below).

the incremental machine lease charges, plus the costs of having people/operators present, were among the inhibitors for justifying/providing around-the-clock, 7x24 timesharing operation (since offshift usage could be extremely spotty). eliminating system meter running when the system wasn't actually doing anything (just available for doing something) ... and being able to run with dark-room, unattended operation ... would significantly lower the off-shift usage threshold necessary to justify leaving the system up, available and operational (significantly helping the transition to production around-the-clock, 7x24 operation).
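a small illustrative sketch (in C, hypothetical structures ... not actual 360 hardware logic) of the metering condition just described: the meter accumulated whenever the CPU was out of wait state or any channel I/O was active, and a terminal line parked in a 2702 "prepare" did not count as active I/O:

    #include <stdbool.h>

    struct machine_state {
        bool cpu_in_wait;    /* PSW wait bit                           */
        int  active_io;      /* channel programs busy, NOT counting    */
                             /* lines parked in a "prepare" command    */
    };

    /* the system meter runs (and the lease charges rack up) whenever
     * the CPU is busy or any I/O is active -- so an idle system
     * waiting only on "prepared" terminal lines lets the meter stop  */
    bool meter_running(const struct machine_state *m)
    {
        return !m->cpu_in_wait || m->active_io > 0;
    }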
Re: Operating systems are old and busted
The following message is a courtesy copy of an article that has been posted to alt.folklore.computers,bit.listserv.ibm-main as well.

re:
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted
http://www.garlic.com/~lynn/2007m.html#66 Off Topic But Concept should be Known To All

part of the new, old things are called "virtual appliances" ... but in the good old 60s & 70s they were called "service virtual machines". some recent posts mentioning the virtual appliance genre
http://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
http://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ?
http://www.garlic.com/~lynn/2006x.html#8 vmshare
http://www.garlic.com/~lynn/2007i.html#36 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
http://www.garlic.com/~lynn/2007k.html#48 John W. Backus, 82, Fortran developer, dies

cp67 had implemented fast, automated dump & reboot fairly early. That, along with the change-over to using the 2702 "prepare" command, helped contribute to round-the-clock, 7x24 cp67/cms online timesharing services. The issue was that 360/67s were leased ... and charges were based on the system meter running ... and having the system up 3rd/4th shift might have the meter running while light-load charges might not cover the off-shift lease rate. The system "meter" would run even when the operating system was in wait state, if there was I/O "active". The use of the 2702 "prepare" command for terminal I/O would effectively suspend the "I/O", and the system "meter" would stop. The other part was that the fast, automated dump & reboot helped make it practical to run cp67 3rd & 4th shifts w/o any human (operator) present (aka "dark room") ... eliminating another expense that light-load offshift usage might not cover. The combination helped encourage internal 7x24, around-the-clock online cp67 timesharing operation ... as well as helping make various commercial cp67 timesharing offerings more viable
http://www.garlic.com/~lynn/subtopic.html#timeshare

however, one of the short-comings of unattended, offshift operation was that the "service virtual machines" still required human intervention. as part of lots of work on performance tuning, dynamic adaptive dispatch/scheduling, virtual memory optimization, workload profiling
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock
and other stuff I was doing at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
I was having to do a lot of benchmarking
http://www.garlic.com/~lynn/subtopic.html#benchmark
and as part of the benchmarking, I worked on being able to automate the whole process. One of the issues was being able to generate a new/different kernel, automatically reboot ... and start the next sequence of benchmarks. cp67 had morphed into vm370, which inherited the automatic reboot operation. The issue then was how to get all the benchmarks kicked off. I created an "autolog" command that emulated the manual login process ... and added one such command late in the system bringup/boot process. The resulting process that was automatically logged on could then execute scripts with autolog commands for a large number of other processes. I initially used it for implementing the benchmarking process. For instance, in the final sequence before release of the "resource manager" ...
there was a sequence of something like 2000 (automated) benchmarks that took 3 months elapsed time to run. However, it was quickly realized that the autolog process (for benchmarking) would also be extremely useful for automating the startup of "service virtual machines" as part of automated system reboot (a small sketch of the idea appears at the end of this post).

The burlington development group was one of the organizations that had been distracted by the future system project
http://www.garlic.com/~lynn/subtopic.html#futuresys
after FS was killed (and before burlington was put on notice that they were being shut down and everybody transferred to POK to support mvs/xa development) ... they had a crash program to turn out items for vm370 release 3 ... and picked up a lot of stuff from the science center (including the autolog command), where we had continued to work on (360/370) virtual machine activity.
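a minimal sketch of the idea (in C, with hypothetical names ... the real autolog was a CP command driven from startup profiles, not a C program): late in system bringup, one automatically logged-on virtual machine runs down a list, autologging the rest ... whether benchmark drivers or service virtual machines:

    #include <stdio.h>

    /* hypothetical stand-in for the autolog function: create a logged-
     * on virtual machine for `userid` without a human at a terminal   */
    extern int autolog(const char *userid);

    /* late in system boot, bring up the virtual machines listed in a
     * profile -- the same mechanism originally built to kick off
     * unattended benchmark sequences                                   */
    void autolog_from_profile(FILE *profile)
    {
        char userid[9];                  /* 8-char userid + NUL         */
        while (fscanf(profile, "%8s", userid) == 1) {
            if (autolog(userid) != 0)
                fprintf(stderr, "autolog of %s failed\n", userid);
        }
    }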
Re: Off Topic But Concept should be Known To All
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Ken Brick) writes:
> http://www.theregister.com/2007/06/20/usenix_07_opening_keynote/

the new, 40yr-old theme, courtesy of the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
related post here
http://www.garlic.com/~lynn/2007m.html#64 Operating systems are old and busted

a lot of technology evolved in that environment ... GML, precursor to SGML, HTML, XML, aka the markup stuff
http://www.garlic.com/~lynn/subtopic.html#sgml
the internal network ... which was larger than the arpanet/internet from just about the beginning until possibly sometime mid-85
http://www.garlic.com/~lynn/subnetwork.html#internalnet
and as mentioned in this recent thread:
http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database
http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database
http://www.garlic.com/~lynn/2007m.html#56 Capacity and Relational Database
relational/sql was first created in that environment
http://www.garlic.com/~lynn/subtopic.html#systemr
and a lot of virtual memory and dispatch/scheduling work
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

among other things, it provided a fantastic incubator for R&D and new technology ... which has been somewhat alluded to in parts of a recent thread:
http://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules
http://www.garlic.com/~lynn/2007m.html#20 Patents, Copyrights, Profits, Flex and Hercules
http://www.garlic.com/~lynn/2007m.html#32 Patents, Copyrights, Profits, Flex and Hercules

in fact, after the corporation had canceled the failed Future System project
http://www.garlic.com/~lynn/subtopic.html#futuresys
and realized that it had to throw resources back into the 370 product line ... POK was able to convince the corporation that the vm370 product had to be killed ... because they needed to transfer all the (relatively few) people in the burlington mall development group to POK to provide support for getting the mvs/xa development effort on schedule. Eventually, Endicott was able to salvage some of the product mission.
Re: Capacity and Relational Database
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

IBMsysProg wrote:
> From a software architecture standpoint, Multi Regions, Independent
> locking (IRLM), Automated Recovery (DBRC), and DASD Logging became the
> foundations of IBM's second relational database system and its first
> SQL-based system, called at its introduction DB2. IBM's first
> relational database system predated wide use of DASD and long
> histories could be written about it alone ... the "Bill of Material
> Program", called at various times BOMP, T-BOMP for TAPE BOMP, and
> D-BOMP for DISK BOMP. BOMP was probably used more for applications
> like payroll than manufacturing.

re:
http://www.garlic.com/~lynn/2007m.html#47 Capacity and Relational Database
http://www.garlic.com/~lynn/2007m.html#55 Capacity and Relational Database

lots of postings about the sql/relational database system/r done at sjr/bldg. 28
http://www.garlic.com/~lynn/subtopic.html#systemr
including mention of doing work on system/r and handling the technology transfer of system/r from sjr to endicott for sql/ds. another source of a lot of old archeological references:
http://www.mcjones.org/System_R

now system/r was all done in vm370 virtual machines ... technology out of the science center ... 4th flr, 545 tech sq
http://www.garlic.com/~lynn/subtopic.html#545tech
on the 5th flr, 545 tech sq was multics ... which had done an even earlier relational implementation. recent posting (in comp.databases.theory)
http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance?
with multics MRDS references:
http://www.multicians.org/mgm.html#MRDS
http://www.mcjones.org/System_R/mrds.html

now the seminal work on relational was done by Codd at SJR: "A Relational Model of Data for Large Shared Data Banks", CACM v13n6, june 1970
http://www.acm.org/classics/nov95/toc.html
wiki reference:
http://en.wikipedia.org/wiki/Edgar_F._Codd
minor point in the above reference ... sjr was in bldg. 28 on the san jose plant site; the almaden facility wasn't built until the mid-80s.

now one of the people in the meeting referenced here
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
mentioned that he had handled a lot of technology transfer from sql/ds & endicott back to STL for DB2 (even tho bldg. 28 and bldg. 90 are only about ten miles apart ... i would even periodically do the commute on my bike). for lots of topic drift ... two of the other people in that same meeting were later at a small client/server startup, responsible for something called the commerce server, and we were called in to consult on being able to do payment transactions on their server ... misc. collected postings mentioning putting together the payment transaction infrastructure for what is now frequently referred to as electronic commerce
http://www.garlic.com/~lynn/subnetwork.html#gateway
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Peter Flass <[EMAIL PROTECTED]> writes:
> Well, unix is unix (or Linux). The problems come from the basic
> design; if you changed the design, it wouldn't be unix. The best you
> can do is mitigate the problems. This is the case with every OS -
> some fundamental decisions made during the initial design can't be
> changed without modifying the OS out of existence.
>
> This is just like programming languages. You can add "improvements",
> but some initial design decisions are set in stone.

virtual machines have periodically been used over the past 40yrs to address various limitations in operating systems ... rather than trying to stress a particular operating system past its design point (attempting to consolidate more and more applications on a single operating system platform), go to a two (multi) level paradigm ... where you have a virtual machine environment timesharing multiple virtual machines concurrently on a common platform ... and then within each virtual machine, allow each operating system to do its own thing (i.e. a little peter principle ... not pushing an operating system to rise past its level of competence). this is somewhat an optimization at a more macro level ... while making some micro-level optimization sacrifices (i.e. the overhead of the virtual machine capability).

re:
http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard?
http://www.garlic.com/~lynn/2007m.html#52 Is Parallel Programming Just Too Hard?
Re: Capacity and Relational Database
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

re: http://www.garlic.com/~lynn/2007m.html##47 Capacity and Relational Database

for some additional past history ... the university i was at was selected to be a beta-test site for the original CICS ... it was an ONR-funded online library project. It also got a 2321 datacell as part of the project. One of my responsibilities got to be shooting bugs in this early CICS (before first official product ship). One specific bug I remember: the customer installation that CICS had grown out of had been using a specific set of BDAM options. For whatever reason, the university library chose to use some other combination of BDAM options ... resulting in CICS failures. misc. past posts mentioning cics &/or bdam (and having to shoot CICS and BDAM bugs)
http://www.garlic.com/~lynn/subtopic.html#bdam

one of the IMS things in the mid-70s was the transition to a virtual memory environment. The science center
http://www.garlic.com/~lynn/subtopic.html#545tech
had done much of the early stuff on virtual memory as part of both CP67 and VM370. Some of the work involved extensive performance monitoring, performance modeling, workload profiling and the early stuff leading to capacity planning.
http://www.garlic.com/~lynn/subtopic.html#benchmark
One of these efforts was instruction tracing and modeling of virtual memory usage. This was used extensively for many applications moving from a real storage environment to virtual memory operation. One of the earliest, where it was of significant benefit, was the rewrite of the whole APL storage management when the science center did the port of apl\360 to cms\apl (expanding APL workspaces from the typical 16k-32k real memory to allow maximum virtual memory sizes) ... various past posts mentioning APL and/or one of its heaviest users ... the HONE system
http://www.garlic.com/~lynn/subtopic.html#hone

In the mid-70s, one of the major internal users of this tracing and modeling application (from the science center) was the IMS group ... tracing and monitoring both general IMS performance operation ... as well as optimization for virtual memory operation. The science center also added semi-automated program re-organization to the application and announced it as the "VS/REPACK" product in 1976 (a toy sketch of the repacking idea appears at the end of this post). And here is an old email reference about getting pushed as a general consultant to the IMS development group in STL (mentions a luncheon with the IMS development people)
http://www.garlic.com/~lynn/2007.html#email801016
this was independent of the previous mention about working on some of system/r ... the original relational/sql implementation
http://www.garlic.com/~lynn/subtopic.html#systemr
for other drift ... lots of past posts about doing lots of stuff for virtual memory optimization and replacement algorithms
http://www.garlic.com/~lynn/subtopic.html#wsclock

Now, when my wife was conned into going to POK to be in charge of loosely-coupled architecture ... she originated the "peer-coupled shared data" architecture (and a lot of the mainframe distributed/global locking stuff)
http://www.garlic.com/~lynn/subtopic.html#shareddata
which saw very little uptake until sysplex ... except for IMS and especially the IMS hot-standby effort.

for somewhat other topic drift ... lots of past posts about being allowed to play disk engineer in bldgs.
14&15
http://www.garlic.com/~lynn/subtopic.html#disk
at one time there was a joke about working a four-shift week: 1st shift in bldg28/sjr, 2nd shift in bldgs. 14&15, 3rd shift in bldg90/stl, and 4th shift (aka weekends) at HONE.

later, when we were doing our HA/CMP product
http://www.garlic.com/~lynn/subtopic.html#hacmp
and scaleup for distributed database operation ... along with scaleup for the distributed lock manager (as well as massive distributed recovery) ... some email references here
http://www.garlic.com/~lynn/lhwemail.html#medusa
and minor references in these posts
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
the people in STL complained that if we were allowed to ship the support for the commercial DBMS stuff ... we would be at least five yrs ahead of where they were.
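as mentioned above, a toy sketch of the trace-driven repacking idea (in C, illustrative only ... VS/REPACK's actual analysis and reordering were considerably more sophisticated): pack program modules into virtual pages in the order they first appear in a storage reference trace, so routines used together tend to share pages and the working set shrinks:

    #define PAGESZ 4096
    #define NMOD   256

    /* module sizes in bytes, indexed by module id -- assumed to have
     * been filled in from the program's link map                      */
    static unsigned modsize[NMOD];

    /* Assign each module a new virtual address, packing modules into
     * pages in the order of their first appearance in the trace.      */
    void repack(const int *trace, int tracelen, unsigned *newaddr)
    {
        int seen[NMOD] = {0};
        unsigned next = 0;
        for (int t = 0; t < tracelen; t++) {
            int m = trace[t];
            if (seen[m]) continue;        /* only first reference counts */
            seen[m] = 1;
            /* don't split a module across a page boundary if it fits  */
            if (modsize[m] <= PAGESZ &&
                (next % PAGESZ) + modsize[m] > PAGESZ)
                next += PAGESZ - (next % PAGESZ);
            newaddr[m] = next;
            next += modsize[m];
        }
    }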
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes: > with the advent of PCs ... a lot of the cms personal computing migrated > to PCs ... although the (mainframe) virtual machine operating system > continues to survive ... and even had seen some resurgent in the early > part of this decade supporting large numbers of virtual machines running > linux ... somewhat in the "server consolidation" market segment > > recently, "server consolidation" has become something of a more widely > recognized buzzword ... pushing a combination of virtual machine > capability migrated to PC hardware platforms possibly in combination > with large BLADE form-factors farms ... where a business with hundreds, > thousands, or even tens of thousands of servers are consolidating into > much smaller space. re: http://www.garlic.com/~lynn/2007m.html#51 Is Parallel Programming Just Too Hard? note that in the 80s, there started to be the possibility of two-level "timesharing" dispatch/scheduling when some amount of the virtual machine capability migrated into the mainframe "hardware", ... commonly referred to now as LPARs (logical partitions). The hardware had to schedule/dispatch/timeshare the virtual machine LPARs ... and within an LPAR could be a virtual machine operating system, also having to schedule/dispatch/timeshare its virtual machines. something similar has to be going on in the emerging PC-based genre of virtual machine implementations. one of the interesting dispatch/schedule evolutions starts with single processor virtual machines running on single processor hardware ... then moving to single processor virtual machines running on multiple processor hardware ... things can get more complex when having to run multiple processor virtual machines on multiple processor hardware ... and it may not be possible to independently dispatch/schedule the different virtual processors of a virtual machine ... possibly needing to dispatch/schedule multiple virtual processors (of a virtual machine) concurrently on multiple real processors (a small sketch of that constraint follows this post). lots of past posts about multiprocessors, tightly-coupled, and/or compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
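to make that last constraint concrete, a minimal sketch of gang/co-scheduling (in C; the guest names, virtual cpu counts, and single-pass policy are made-up assumptions, not any particular hypervisor's dispatcher): a multiprocessor virtual machine is only dispatched when enough real processors are free to run all of its virtual processors at the same time.

    #include <stdio.h>

    #define REAL_CPUS 4

    struct vm {
        const char *name;
        int vcpus;      /* virtual processors that must run together */
    };

    int main(void)
    {
        /* hypothetical run queue of ready virtual machines */
        struct vm queue[] = { {"guestA", 2}, {"guestB", 4}, {"guestC", 1} };
        int nvm = sizeof queue / sizeof queue[0];

        int free_cpus = REAL_CPUS;
        for (int i = 0; i < nvm; i++) {
            if (queue[i].vcpus <= free_cpus) {
                /* all of this guest's virtual cpus fit: dispatch the gang */
                printf("dispatch %s on %d real cpus\n",
                       queue[i].name, queue[i].vcpus);
                free_cpus -= queue[i].vcpus;
            } else {
                /* can't run some of its virtual cpus without the rest */
                printf("defer %s (needs %d, only %d free)\n",
                       queue[i].name, queue[i].vcpus, free_cpus);
            }
        }
        return 0;
    }

note that guestB gets passed over even though the machine has four real cpus ... either the dispatcher holds cpus idle waiting for a full gang (lost thruput) or it skips ahead to smaller guests (fragmentation and possible starvation); that trade-off is part of what makes the two-level case more complex.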
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] writes: > When? I never considered IBM world and its batch environment > timesharing. Timesharing does not do large data processing tasks > well; and it's not supposed to. there were somewhat distinct, different environments ... one was commercial dataprocessing and the other was interactive computing and timesharing. the commercial, batch, production environment was oriented towards business dataprocessing ... it wasn't computing done on behalf of some specific person ... it was computing done on behalf of some business operation ... like the organization's payroll and printing checks. the requirement was that the business dataprocessing be done ... frequently on a very determined schedule ... independent of any specific person. over time, lots of batch technology evolved to guarantee that specific operations could be done reliably, predictably, and deterministically, independent of any human involvement. much of the interactive and virtual machine paradigm evolved totally independently at the science center ... first with cp40/cms, morphing into cp67/cms, followed by vm370/cms (even tho during the 70s, the batch infrastructure and the timesharing infrastructure shared a common 370 hardware platform): http://www.garlic.com/~lynn/subtopic.html#545tech both multics (on the 5th flr) and the science center (on the 4th flr) could trace common heritage back to ctss (and unix traces some heritage back to multics). even tho there was a relatively large timesharing install base (in most cases larger than any other vendor's timesharing install base that might be more commonly associated with timesharing) ... in the period, it was dwarfed by the commercial batch install base. I've joked before that at one period, the commercial customer install base was much larger than the timesharing customer install base, and the timesharing customer install base was much larger than the timesharing internal install base, and the timesharing internal install base was much larger than the internal installations that I directly supported (built, distributed, fixed bugs, on highly customized/modified kernel and services). However, at one point the number of internal installations that I directly supported was as large as the total number of Multics installations that ever existed. lots of past posts mentioning the timesharing environment http://www.garlic.com/~lynn/subtopic.html#timeshare much of that timesharing install base was cms personal computing ... while the rest was mixed-mode operation with cms personal computing and other kinds of operating systems in virtual machines ... aka the same timesharing infrastructure supporting both interactive cms personal computing as well as production (frequently batch) guest operating systems. this required a timesharing dispatching/scheduling policy infrastructure that could support a broad range of requirements. for a little topic drift, slightly related recent post: http://www.garlic.com/~lynn/2007m.html#46 Rate Monotonic scheduling (RMS) vs. OS Scheduling also coming out of the science center in the period (besides virtual machines, a lot of timesharing and interactive/personal computing) ... 
somewhat reflecting the timesharing and personal computing orientation was much of the internal networking technology http://www.garlic.com/~lynn/subnetwork.html#internalnet as well as things like the invention of GML, precursor to SGML, HTML, XML, etc http://www.garlic.com/~lynn/subtopic.html#sgml with the advent of PCs ... a lot of the cms personal computing migrated to PCs ... although the (mainframe) virtual machine operating system continues to survive ... and even had seen some resurgence in the early part of this decade supporting large numbers of virtual machines running linux ... somewhat in the "server consolidation" market segment. recently, "server consolidation" has become something of a more widely recognized buzzword ... pushing a combination of virtual machine capability migrated to PC hardware platforms possibly in combination with large BLADE form-factor farms ... where businesses with hundreds, thousands, or even tens of thousands of servers are consolidating into much smaller space. Microsoft Looks to Stop Internal Server Sprawl http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=296360 from above: The profile of Microsoft Corp.’s in-house server farm is similar to those of many other companies: one application per server, with less than 20% peak server utilization on average. But Devin Murray, Microsoft’s group manager of utility services, is working to change that. Murray’s team manages about 17,000 servers that support 40,000 of Microsoft’s end users worldwide. ... snip ... -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Capacity and Relational Database
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (IBMsysProg) writes: > Memory. > Over the years the first exploiters of architecture changes to allow more > address spaces, more real memory, and more virtual memory have always been > DBMS systems. At this time I would suggest allocating at least 2 gig of > additional real memory to your future DBMS. DBMS system address spaces in > general are intolerant of paging. This is because a page in results in a > wait for the entire address space and to make things worse DBMS address > spaces serve many concurrent users. i've related before the discussion between the IMS group and the original SQL/System/r group about pros & cons. http://www.garlic.com/~lynn/subtopic.html#systemr IMS has/had direct pointers ... which significantly cut down processing overhead ... but significantly increased development, maintenance, and administrative costs. System/r abstracted away the direct pointers ... at the cost of the implicit overhead of an automatically maintained index. The "argument" back then was that the (RDBMS) automatically maintained index doubled the physical disk space and significantly increased the number of disk i/os (as part of processing the index) ... offset by significantly reduced human resources & skills. going into the 80s ... disk price/bit came down significantly (muting the disk price/bit argument) and (relative) significant increases in system "real" memory allowed much of the indexes to be cached (eliminating lots of the increased index disk i/os). The index overhead then somewhat shifted from the amount of disk i/os ... to just CPU overhead. In any case, the changes in price and availability of system resources that went on in the 80s ... changed the trade-off between human skill/resources and system price/resources ... significantly enabling the wider use of RDBMS. Virtual memory and high-end DBMS don't mesh very well. High-end DBMS tend to have lots of their own managed cache ... typically with some sort of LRU type algorithm. I first noticed that running an LRU storage managed algorithm under an LRU storage managed algorithm could be a bad idea ... in the mid-70s with SVS/MVS running in a virtual machine (virtual memory). It was possible to get into an extremely pathological situation where MVS would select one of the pages (at a location it believed to be "its" real-memory) to be replaced ... at about the same time that the virtual machine hypervisor also decided that the corresponding virtual page should be replaced (since they were both looking at effectively the same usage patterns as the basis for replacement decisions). As a result, an LRU-based strategy ... running in a virtual memory, can start to look like an MRU strategy (the next most likely page to be used ... is the one that has been least recently used); a small simulation sketch appears at the end of this post. lots of past posts about page replacement algorithms ... including some difference of opinion about some of the "internally" implemented MVS strategies http://www.garlic.com/~lynn/subtopic.html#wsclock as well as some "old" email on various aspects of the subject http://www.garlic.com/~lynn/lhwemail.html#globallru In any case, when running high-end DBMS that have their own cache implementation ... in a virtual memory operating system environment ... there tend to be a lot of "tuning" options ... 
to minimize the conflict between the DBMS cache replacement strategy (typically some sort of LRU-based) and the operating system virtual memory replacement strategy (typically also some sort of LRU-based). There is also the possibility of things analogous to the old "VS1-handshaking", where VM370 would present a pseudo page fault interrupt to VS1 (running in a virtual machine) ... enabling VS1 to do a task switch (instead of blocking the whole VS1 whenever any page fault occurred for a virtual machine page). note that one progression with large real storage has been "in-memory" DBMS implementations ... rather than assuming that the DBMS natively resides on disk (with a lot of processor overhead related to that assumption). The assumption is that nearly everything is memory resident and managed with memory pointers ... with periodic snapshots to disk for commits/integrity. Given the same amount of large real storage ... there are claims that the switch to an RDBMS memory-based paradigm can run ten times faster than an RDBMS disk-based paradigm that was fully cached and otherwise doing little disk i/o (and both running nearly identical SQL-based applications). misc. recent posts mentioning old "interactions" between the IMS and System/r organizations regarding various pros and cons http://www.garlic.com/~lynn/2007e.html#1 Designing database tables for performance? http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction http://www.garlic.com/~lynn/2007e.html#31 Quote from comp.object http://www.garlic.com/~lynn/2007e.h
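a minimal simulation sketch of the LRU-under-LRU pathology mentioned above (in C; the page counts, the cyclic reference pattern, and the bookkeeping are all made-up assumptions to illustrate the effect, not how MVS or VM370 actually implemented replacement). the guest manages its buffer slots with LRU; the hypervisor backs them with fewer real frames, also managed LRU over the same reference history ... so the page the guest picks as its replacement victim (its least recently used) is exactly a page the hypervisor has already stolen, and the guest's reuse of that frame takes a host page fault:

    #include <stdio.h>

    #define NPAGES        12   /* guest virtual pages (buffer pool slots) */
    #define GUEST_FRAMES   8   /* frames the guest believes it owns */
    #define HOST_FRAMES    6   /* real frames the hypervisor gives it */

    static long now = 0;
    static long guest_last[NPAGES], host_last[NPAGES];
    static int  guest_res[NPAGES], host_res[NPAGES];
    static long host_faults = 0;

    /* least recently used resident page, per one level's bookkeeping */
    static int lru_victim(const long *last, const int *res)
    {
        int v = -1;
        for (int p = 0; p < NPAGES; p++)
            if (res[p] && (v < 0 || last[p] < last[v]))
                v = p;
        return v;
    }

    /* every guest reference also passes through the host's real memory */
    static void host_touch(int p)
    {
        now++;
        if (!host_res[p]) {
            host_faults++;
            int n = 0;
            for (int q = 0; q < NPAGES; q++) n += host_res[q];
            if (n >= HOST_FRAMES)
                host_res[lru_victim(host_last, host_res)] = 0;
            host_res[p] = 1;
        }
        host_last[p] = now;
        guest_last[p] = now;
    }

    int main(void)
    {
        /* guest cycles through more pages than it has frames; on each
           miss it reuses ITS least recently used page -- which, since
           the host saw the same reference history, is also a prime
           host-steal victim */
        for (int i = 0; i < 240; i++) {
            int p = i % NPAGES;
            if (!guest_res[p]) {
                int n = 0;
                for (int q = 0; q < NPAGES; q++) n += guest_res[q];
                if (n >= GUEST_FRAMES) {
                    int v = lru_victim(guest_last, guest_res);
                    guest_res[v] = 0;
                    host_touch(v);   /* guest rewrites its victim's frame */
                }
                guest_res[p] = 1;
            }
            host_touch(p);
        }
        printf("host page faults: %ld (out of 240 references)\n", host_faults);
        return 0;
    }

the fault count comes out near two host faults per guest replacement: one when the guest rewrites the victim frame the host already stole, one for the newly referenced page ... the MRU-like behavior described above (the guest deliberately goes back to the page both levels agreed was least recently used).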
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Frank McCoy <[EMAIL PROTECTED]> writes: > Yup ... As *would* have happened with the PC itself if they'd been > that tight-assed with it. They just didn't *get* the fact that the > open bus and configuration was what made the PC popular. IOW, it was > the *competition* that made it such a huge success. re: http://www.garlic.com/~lynn/2007m.html#42 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM http://www.garlic.com/~lynn/2007m.html#44 The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM as i've mentioned before ... the other market force was that the previous personal computers had been a do-it-yourself and hobbyist market. individuals had to justify the cost of the box for their own personal interest ... that included a lot of the software ... not a lot of off-the-shelf stuff ... so individuals had to do that themselves also. the big break-out for the ibm/pc was selling it into the terminal emulation market at businesses. businesses that had justified buying a couple thousand or tens of thousands of (3270) terminals could, for about the same amount of money, get both local computing and terminal emulation in a single desktop footprint. instead of selling one at a time to a very limited market ... orders were being taken for thousands at a time. this (business) install base motivated a lot of the business users and software entrepreneurs to write software applications for the install base. having a growing library of useful software tools for the market segment ... made it easier to justify spending the money to buy the machine. the combination of growing install base and growing available applications created a snowball effect (positive feedback). misc. past posts mentioning various aspects of the terminal emulation theme http://www.garlic.com/~lynn/subnetwork.html#emulation the business market potential significantly motivated the clone makers ... something that had been happening in the mainframe dataprocessing business market since at least the late 60s (and so wasn't that unique of a concept). misc. past posts mentioning (mainframe) plug compatible (clone) http://www.garlic.com/~lynn/subtopic.html#360pcm this was an enormous synergistic effect ... that wouldn't happen in the purely home/hobbyist market ... since the purchase price for strictly individuals was still fairly significant with not a large number of "solutions" to attract a big following. possibly one of the biggest drivers of personal computers into the home/personal market was the internet ... the volumes from the business world were driving down the price point and the combination of the price-point and the internet as a "personal" use (for the computers) ... then helped explode the sales into the home market (aka killer app/silver bullet for personal, personal computer use). recent references: http://www.garlic.com/~lynn/2007j.html#11 Newbie question on table design http://www.garlic.com/~lynn/2007j.html#71 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007k.html#68 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#37 Friday musings on the future of 3270 applications -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: The Development of the Vital IBM PC in Spite of the Corporate Culture of IBM
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] writes: > The public history of the PC began in August 1981, when IBM first > announced 'The IBM Personal Computer.' . This was The original > PC. The time period for the development of this landmark, legacy > product was approximately a year. It must be remembered that IBM was a > centralized committee paper top down organization at the > time. Everything went by snail mail and paper, communication was slow > and lines of communication as well as the necessary and ... > > Read full article at > http://www.knowledgefield.com/articles/the-development-of-the-vital-ibm-pc-in-spite-of-the-corporate-culture-of-ibm.shtml trolling? how 'bout the internal network ... world-wide http://www.garlic.com/~lynn/subnetwork.html#internalnet larger than the arpanet/internet from just about the beginning until possibly mid-85 http://www.garlic.com/~lynn/subnetwork.html#internet the great switch-over from arpanet (host-to-host with homogeneous IMP front-ends) to internetworking protocol was on 1jan83. the internet was somewhere between 100-250 nodes at the time (depending on how things were counted). the internal network was far past that ... passing 1000 nodes that summer http://www.garlic.com/~lynn/internet.htm#22 http://www.garlic.com/~lynn/99.html#112 http://www.garlic.com/~lynn/2006k.html#8 various old email on a variety of subjects from the 70s & 80s http://www.garlic.com/~lynn/lhwemail.html after the 23jun69 unbundling announcement, there was an effort to deploy (360/67) cp67 machines in various datacenters to give branch office technical people an opportunity to practice with operating systems running in (the remote) cp67 virtual machines (logon from terminals in the branch office to cp67 machines at remote datacenters). this was called "HONE" (aka hands-on network experience). however, it was soon taken over by applications (mostly written in APL) supporting the branch office sales/marketing people (and the use by SEs for operating system experience eventually was dropped). when EMEA hdqtrs moved from the US to Paris in the early 70s ... I was called in to help with their HONE installation. At that time, it still took a little ingenuity to read email back in the states. http://www.garlic.com/~lynn/subtopic.html#hone note that the "5150 computer" announced aug81 was predated by the "5100 computer" from the palo alto science center ... 5100 demo'ed in 1973 http://www-03.ibm.com/ibm/history/exhibits/pc/pc_1.html http://www-03.ibm.com/ibm/history/exhibits/pc/pc_2.html also, note that the boca group doing the development was designated an IBU ... independent business unit ... where some amount of corporate culture command&control was much more relaxed ... for instance, relaxing the standard A&R (announce and review) product process, which required sign-off from possibly nearly 500 executives from around the corporation. The birth of the IBM PC http://www-03.ibm.com/ibm/history/exhibits/pc25/pc25_birth.html misc. old posts: http://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ??? http://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ??? http://www.garlic.com/~lynn/2000d.html#15 APL version in IBM 5100 (Was: Resurrecting the IBM 1130) http://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?] 
http://www.garlic.com/~lynn/2002b.html#47 IBM 5100 [Was: First DESKTOP Unix Box?] http://www.garlic.com/~lynn/2003i.html#79 IBM 5100 http://www.garlic.com/~lynn/2003i.html#82 IBM 5100 http://www.garlic.com/~lynn/2003i.html#84 IBM 5100 http://www.garlic.com/~lynn/2003j.html#0 IBM 5100 http://www.garlic.com/~lynn/2003n.html#6 The IBM 5100 and John Titor http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor http://www.garlic.com/~lynn/2005m.html#2 IBM 5100 luggable computer with APL http://www.garlic.com/~lynn/2005m.html#3 IBM 5100 luggable computer with APL parts of thread from last yr that might have some interest: http://www.garlic.com/~lynn/2006o.html#43 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006o.html#45 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006o.html#46 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006o.html#65 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006o.html#66 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer" http://www.garlic.com/~lynn/2006p.html#36 "25th Anniversary of the Personal Computer" http://www.
Re: Patents, Copyrights, Profits, Flex and Hercules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Bruce Black) writes: > ASM2 was eventually acquired by CA and become CA-DISK, then > Brightstore CA-DISK, and now CA Disk Backup and Restore. I think > there was an intermediate acquisition that I have forgotten about. for other folklore ... a couple people that worked on AIX system management left and formed a company called tivoli. eventually tivoli was bought up ... and when adstar was sold off ... some of the adstar software packages (as well as other software products) were moved over to tivoli ... for instance ADSM (ADSTAR Distributed Storage Manager) became TSM (Tivoli Storage Manager). i had done the original backup/archive implementation in the late 70s, which was deployed at some number of internal datacenters ... and went thru a number of versions with various other people helping with the work. one of the people involved left ... and worked on a number of backup/archive implementations for other companies ... some of these other implementations may currently be sold by sterling(?). my original backup/archive internal implementation first saw product release as workstation datasave facility which then morphed into ADSM (before being renamed TSM). some old email on the subject http://www.garlic.com/~lynn/lhwemail.html#cmsback and numerous posts mentioning backup/archive http://www.garlic.com/~lynn/subtopic.html#backup -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Patents, Copyrights, Profits, Flex and Hercules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (R.S.) writes: > Key-based solutions exist on mianframe as well as on other systems. > > I think it is rather technical, not ethical or organisational issue: > It is *easy* to have illegal software on PC, sometimes you are even > unaware of it. I mean a lot of small but usefull tools like Windows > Commander, archivizers, DVD-burning software etc. etc. > > Even if you have some "tools" for z/OS it is simply not so easy to > install it on the host - usually several persons are involed, usually > someone could ask - "Did we buy it ? How did you get it ?". > > From the other hand, people are interested in having some bells & > whistles on *their* PC (even company owned), while mainframe is not > *their*. It is not *personal*. It's "common". re: http://www.garlic.com/~lynn/2007m.html#15 Patents, Copyrights, Profits, Flex and Hercules slightly related recent posts about looking at software piracy (DRM) in the mainframe and PC market space http://www.garlic.com/~lynn/2007b.html#59 Peter Gutmann Rips Windows Vista Content Protection http://www.garlic.com/~lynn/aadsm27.htm#9 Enterprise Right Management vs. Traditional Encryption Tools old email about "new" apple lisa announcement and conjecture about the processor serial number being used for software licensing (and piracy countermeasure). http://www.garlic.com/~lynn/2007b.html#email830213 http://www.garlic.com/~lynn/2007b.html#email830213b in this recent post http://www.garlic.com/~lynn/2007b.html#56 old lisa info part of the mainframe case was being able to show in court that something out of the ordinary had to have been done to subvert the licensing provisions (the value was worth taking to court). in the PC case, the value of an individual copy makes it difficult to justify investigating and bringing to court every individual case. TPM is one of the latest piracy countermeasures (as well as supposedly a countermeasure to software compromises). misc. past posts mentioning giving an assurance talk in the trusted computing track at the intel developers conference http://www.garlic.com/~lynn/aadsm5.htm#asrn1 Assurance, e-commerce, and some x9.59 http://www.garlic.com/~lynn/aadsm21.htm#3 Is there any future for smartcards? http://www.garlic.com/~lynn/aadsm23.htm#56 UK Detects Chip-And-PIN Security Flaw http://www.garlic.com/~lynn/aadsm24.htm#23 Use of TPM chip for RNG? http://www.garlic.com/~lynn/aadsm24.htm#52 Crypto to defend chip IP: snake oil or good idea? http://www.garlic.com/~lynn/2005g.html#36 Maximum RAM and ROM for smartcards http://www.garlic.com/~lynn/2005o.html#3 The Chinese MD5 attack http://www.garlic.com/~lynn/2006p.html#48 Device Authentication - The answer to attacks lauched using stolen passwords? http://www.garlic.com/~lynn/2006w.html#37 What does a patent do that copyright does not? http://www.garlic.com/~lynn/2007g.html#61 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007g.html#63 The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Patents, Copyrights, Profits, Flex and Hercules
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Clem Clarke) writes: > It's a shame, but unless IBM does do a big rethink on this, and allows > small developers some sort of inexpensive or free access to the > mainframes, they will die. Allowing a "hobbyist" license for Z/OS, > VM and VSE on Hercules would be one way, and what does IBM really have > to lose? And the gain would be that they could have many people > working at no cost on these systems developing tools and applications > to make them better and better. some related thread drift from another n.g. http://www.garlic.com/~lynn/2007m.html#3 nouns and adjectives. note that in the 60s and much of the 70s ... lots of the innovation came out of customer installations & datacenters ... since it was the customers that understood the need and requirement ... things like cics, ims, etc. later they were transferred to "development" organizations for product support. in many cases, this is a misnomer ... since those "development" organizations are responsible for product maintenance ... not the product's "development" (maybe doing plus/minus five percent changes per annum). I've periodically made facetious comments about the "term inflation" in using the word "development" for organizations that are primarily doing product "maintenance". something similar happened with the introduction of the ibm/pc ... a large proportion of the "products" originated from end-users (that were faced with the actual problems and understood what kind of solution was needed). vendor product operations tend to have people like software engineers that understand issues about software maintenance ... but rarely have people with the necessary experience to see what solution was originally needed. even before the ibm/pc came out ... there were some that had jumped ship from vm/cms (which had been providing a mainframe-based personal computing environment) and were implementing some number of CMS applications on other early personal computers. These weren't ports of CMS applications (because the implementation details tended to be totally different), but frequently the look&feel and the solution they provided were the same. the "OCO-wars" were especially hard on the vm/cms community ... because not only was full source available ... but even maintenance, fixes, etc for customers were shipped as source updates ... based on CMS multi-level source maintenance facilities. Some studies from that period even claimed the number of system (source) updates done at customer datacenters (aka aggregate lines-of-code) was actually larger than the source lines-of-code in the base system. the high-end of the market is where the (quarterly) revenue/profit is ... but all the innovation tends to originate at the low-end & mid-range ... in part because innovation requires quite a bit of experimentation, trial&error, etc ... and the high-end is rarely made available for such experimentation. As a result, some of the other vendors found a need that could be filled in the entry/low-end market segment (and long term ... it is frequently the entry/low-end that tends to feed the high-end with the applications that keep the high-end quarterly revenue sustained). the pre-occupation with quarterly results has been a sporadic topic for at least the last 40 yrs. during periods when there was significant general economic growth ... the generational issues appeared to almost take care of themselves ... 
allowing the perception that executives could solely concentrate on the quarterly issues. however, this approach somewhat came home to roost. i've mentioned before being at a talk at MIT in the early 70s where Amdahl was asked how he was able to convince the money people to support his new clone computer company. His reply was that there was already something like $200b that customers had invested in 360 applications ... and that even if IBM were to totally walk away from 360/370 ... which might be considered a veiled reference to the future system project http://www.garlic.com/~lynn/subtopic.html#futuresys ... (just) that (existing) software application base could keep him in business thru the end of the century. starting in the early 70s, i had been heavily involved with HONE deployment ... first with its original objective to provide "hands-on" experience to branch office SEs with operating systems running in virtual machines ... and then the transition to being primarily an online, interactive environment deploying applications (mostly implemented in cms\apl) supporting sales & marketing worldwide. http://www.garlic.com/~lynn/subtopic.html#hone in the mid-70s, I got con'ed into helping with the virgil/tully microcode assists ... including spending time off & on over a period of a year running around the world with the product managers, meeting with business planning & forecasting groups positioning the processors
Re: mainframe = superserver
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] writes: > I realize that this has probably been asked before, but google didn't > give me an answer. Before I asked my question let me state that I > know that windows server 2003 and longhorn won't run on an IBM > mainframe. There is the endian issue and the ascii vs ebcidic > issues. Is there a medium to large IBM box that can run a couple > hundred of virtual windows 2003 servers? And said box can scale up to > approximately 1000+ virtual windows servers? Given that all current > servers are dell and hp servers with 2 intel core 2 duo processors and > a total of 150TB of storage? > > I am doing research for the possible replacement of 200+ windows > server in our datacenter. We need to add servers, but there is > literally no more power. My thinking is if IBM can get windows > servers to run on their something like their mainframes it would save > electricity and space. Everyone would win. recent cross-over from another thread: http://www.garlic.com/~lynn/2007l.html#63 Is Parallel Programming Just Too Hard? mentioning that blades are also being sold into the commercial market, frequently in combination with virtualization, for server consolidation and another mention here ... in slightly older thread/post: http://www.garlic.com/~lynn/2007h.html#2 The Mainframe in 10 Years and mention in this thread http://www.garlic.com/~lynn/2007k.html#22 Another "migration" from the mainframe http://www.garlic.com/~lynn/2007k.html#23 Another "migration" from the mainframe some references from above thread CIO Challenge: Energy Efficiency http://www.wallstreetandtech.com/showArticle.jhtml?articleID=192202377 IBM Unveils New Energy-Efficient Blades http://www.hpcwire.com/hpc/1379801.html IBM to focus on energy efficiency http://www.bladewatch.com/2007/05/10/ibm-to-focus-on-energy-efficiency/ Blade innovations highlight energy efficiency opportunities http://www.it-director.com/business/content.php?cid=9135 IBM defends blades' energy efficiency http://green.itweek.co.uk/2006/10/ibm_defends_bla.html IBM Data Center and Facilities Strategy Services - high density computing data center readiness assessment http://www-935.ibm.com/services/us/index.wss/offering/its/a1025605 Lots of Blade Server articles http://www.eweek.com/category2/0,1874,1658862,00.asp IBM Grid Computing Solutions - financial industry http://www-03.ibm.com/grid/solutions/by_industry/financial.shtml Grid Computing for Financial Services 2007 http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=233&event=12603&; Grid computing: Accelerating the search for revenue and profit for financial markets http://www-03.ibm.com/industries/financialservices/doc/content/landing/973028103.html the previously mentioned scaleup activity was in large part about physical packaging and issues like power and cooling http://www.garlic.com/~lynn/lhwemail.html#medusa but the server consolidation is now frequently blades/grid technology in combination with virtualization, courtesy of the science center from the mid-60s, first with cp40 and then, when the 360/67 became available, morphing into cp67 (precursor to vm370) ... misc past posts mentioning the science center http://www.garlic.com/~lynn/subtopic.html#545tech besides virtualization and virtual machines being invented at the science center ... 
the compare&swap instruction for multi-thread/multi-processor operation was also invented at the science center http://www.garlic.com/~lynn/subtopic.html#smp and also GML (later morphed into sgml, html, xml, etc) http://www.garlic.com/~lynn/subtopic.html#sgml and most of the internal network http://www.garlic.com/~lynn/subnetwork.html#internalnet which was also seen outside in deployments like bitnet and earn http://www.garlic.com/~lynn/subnetwork.html#bitnet -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Shane) writes: > M - seems to be just a "warming over" of the multi-programming > versus parallel discussion. > With the exception of that last link, I'd need serious convincing any of > them are "parallel programming". > > I suspect in the not too distant future, "multi-threaded" will supplant > (in the common vernacular) all notions of "parallel". > > If not already ... re: http://www.garlic.com/~lynn/2007l.html#60 Is Parallel Programming Just Too Hard? multi-threaded tends to be used in conjunction with tightly-coupled, shared-memory multiprocessing (and the current buzzword "multi-core"). lots of past posts mentioning shared-memory multiprocessing and/or compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp the compare&swap instruction had been invented by Charlie (CAS are Charlie's initials) at the science center ... working on fine-grain multiprocessor locking for cp67 http://www.garlic.com/~lynn/subtopic.html#545tech in order to get the instruction justified for 370, we had to come up with a description of its use in multi-threaded/multi-programming operation, which was included (originally) in the (370) principles of operation ... a more recent version http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320 parallel has been used in reference to both (tightly-coupled) multi-threaded operation, as well as loosely-coupled &/or cluster multiprocessing operation. misc. past posts mentioning doing our high availability, cluster multiprocessing product http://www.garlic.com/~lynn/subtopic.html#hacmp some old email references about working on cluster scaleup http://www.garlic.com/~lynn/lhwemail.html#medusa a couple old posts specifically about working on applying distributed lock manager and cluster scaleup to parallel oracle http://www.garlic.com/~lynn/95.html#13 http://www.garlic.com/~lynn/96.html#15 recent post mentioning that my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture http://www.garlic.com/~lynn/2007l.html#62 Friday musings on the future of 3270 applications a lot of the "blades" stuff has been physical packaging originally done for (numerically intensive cluster) "GRIDs" (getting more & more computing into smaller and smaller footprint). some amount of GRID/blades is now being pitched into the commercial sector. some of it isn't strictly loosely-coupled/cluster operation ... but it is also being used (frequently in combination with virtualization) for server consolidation. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Friday musings on the future of 3270 applications
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computer as well. [EMAIL PROTECTED] (Patrick O'Keefe) writes: > At times like this I sorely miss my long lost APPN "Formats and Protocols" > bible. I believe official sources claim that an APPN CP and an LU were > different kinds of NAUs because they were different chunks of code. > (Or at least the same chunk of code implementing 2 different FSMs.) re: http://www.garlic.com/~lynn/2007l.html#37 Friday musings on the future of 3270 applications As an undergraduate at the univ, I had done TTY/ascii terminal support for cp67 ... and attempted to make the 2702 do something that it couldn't quite do. that somewhat prompted a univ. project to build our own clone controller (originally using an Interdata/3). There was subsequently a writeup ... blaming us (at least in part) for the clone controller business (interdata was subsequently bought by perkin-elmer and the box was sold under the PE logo well thru the 80s ... apparently with the same channel interface card that was designed at the univ. in the 60s) http://www.garlic.com/~lynn/subtopic.html#360pcm the clone controller business was supposedly a major motivation behind the future system project http://www.garlic.com/~lynn/subtopic.html#futuresys ... recent FS reference/post (with some quotation by one of the executives involved in FS) http://www.garlic.com/~lynn/2007l.html#10 John W. Backus, 82, Fortran developer, dies one might claim that when FS was killed, SNA attempted to still meet some of the FS objectives with the PU4/PU5 interface for advanced terminal control infrastructure. about the same time that SNA was starting, my wife co-authored peer-to-peer networking (AWP39) ... that defined real networking ... rather than complex terminal control. Possibly some amount of semantic confusion lingered on because the term "SNA" contained the word "network". Later, when my wife was con'ed into going to POK to be in charge of loosely-coupled architecture, she created peer-coupled shared data architecture (... and except for IMS hot-standby, didn't see a lot of uptake until sysplex) http://www.garlic.com/~lynn/subtopic.html#shareddata she had lots of battles with the SNA organization over peer-coupled shared data ... eventually there was a temporary truce with my wife being able to specify peer-coupled operation as long as it was within the walls of the same/single machine room (datacenter) ... but SNA was mandated if it "crossed" the walls of the machine room. much later, APPN was specified in AWP164 and when there was an attempt to announce/release APPN, the SNA organization non-concurred (at the time, the person responsible for APPN and I reported to the same executive). The APPN announcement was escalated and eventually the announcement letter was carefully rewritten to not imply that APPN had any relationship at all to SNA. misc. past posts mentioning AWP39 and/or AWP164: http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5 http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ? 
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe http://www.garlic.com/~lynn/2006j.html#31 virtual memory http://www.garlic.com/~lynn/2006k.html#9 Arpa address http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server http://www.garlic.com/~lynn/2006l.html#4 Google Architecture http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?) http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy? http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy? http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2006u.html#28 Assembler question http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe? http://www.garlic.com/~lynn/2007b.html#9 Mainframe vs. "Server" (Was Just another example of mainframe http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer http://www.garlic.com/~lynn/2007b.html#49 6400 impact printer http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now? http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits? http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits? -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. a couple recent items: The death of single threaded development http://blogs.zdnet.com/Ou/?p=519 Google Acquires Multicore Programming Startup PeakStream -- Multithreaded Programming http://www.informationweek.com/news/showArticle.jhtml?articleID=199901501 Intel updates compilers for multicore era http://arstechnica.com/news.ars/post/20070605-intel-updates-compilers-for-multicore-era.html Sun Updates Studio For Multi-core Development http://itmanagement.earthweb.com/entdev/article.php/3681151 Sun stresses multicore chips, Linux with dev tool http://news.yahoo.com/s/infoworld/20070604/tc_infoworld/89028 Scots firm demonstrates parallelizing compiler at MPF http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=199700792 recent posts on the subject: http://www.garlic.com/~lynn/2007.html#3 The Future of CPUs: What's After Multi-Core? http://www.garlic.com/~lynn/2007g.html#3 University rank of Computer Architecture http://www.garlic.com/~lynn/2007i.html#20 Does anyone know of a documented case of VM being penetrated by hackers? http://www.garlic.com/~lynn/2007i.html#78 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#38 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#42 My Dream PC -- Chip-Based -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. re: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#26 Is Parallel Programming Just Too Hard? http://www.garlic.com/~lynn/2007l.html#34 Is Parallel Programming Just Too Hard? recent news items Intel Pledges 80 Core Processor in 5 Years http://hardware.slashdot.org/hardware/06/09/26/1937237.shtml Intel shows off 80-core processor http://news.com.com/Intel+shows+off+80-core+processor/2100-1006_3-6158181.html Next Windows To Get Multicore Redesign http://developers.slashdot.org/article.pl?sid=07/05/31/1257231 part of the issue is that a lot of the parallel processing has been limited to the high-end market ... where highly skilled programming could be used to manage large amounts of shared resources ... effectively concurrently working on different activity from independent sources. as parallel hardware has started to move downstream into the standard consumer market ... an issue in the past couple yrs is how to change the (mostly) sequential programming paradigm to better utilize the independent/parallel hardware resources that are available. the hardware technology motivation is that as components are shrinking ... things like signal latency and synchronized, serial operation are starting to represent a significant limiting factor ... going to asynchronous operation ... even across the distances involved in a typical chip ... can contribute to significant thruput increases. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Friday musings on the future of 3270 applications
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. [EMAIL PROTECTED] (Rich Smrcina) writes: > If I understand what your asking there are products on the market that > can do this today. > > As long as there is a 3270 on the back end (specifically TN3270), a web > interface or a web service is presented on the front end. part of this is some of the whole history of terminal emulation. 3270 terminal emulation contributed significantly to early uptake of the ibm/pc ... i.e. for businesses that had already allocated money for a 3270 terminal ... it became nearly a no-brainer to switch to an ibm/pc ... the price was about the same ... and in a single desktop footprint the business got both a 3270 terminal and some possibly added-value local computing. http://www.garlic.com/~lynn/subnetwork.html#emulation this contributed to a significant install base of 3270 terminals and terminal emulation products. in the later part of the 80s ... we had come up with 3-tier architecture (as an enhancement to client/server) http://www.garlic.com/~lynn/subnetwork.html#3tier and were out doing some amount of customer executive presentations ... and taking a lot of heat from the T/R and SAA forces (to some extent SAA could be viewed as attempting to help preserve the terminal emulation paradigm and inhibit the spread of client/server ... and especially this new fangled 3-tier stuff). we also were taking some amount of heat working with organizations around the nsfnet backbone effort (i.e. tcp/ip is considered the technology basis for the modern internet, but the nsfnet backbone would be considered the operational basis for the modern internet). some old email from the period on the topic http://www.garlic.com/~lynn/lhwemail.html#nsfnet and after starting to cancel our meetings with outside entities ... there was then a suggestion that they should start proposing SNA/VTAM as the basis for the nsfnet backbone ... specific old email reference http://www.garlic.com/~lynn/2006w.html#email870109 in this post http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET one of the side happenings in all this was we did get an NSF audit of the high-speed backbone we had running internally http://www.garlic.com/~lynn/subnetwork.html#internalnet which concluded that what we had running was at least five yrs ahead of all NSFNET backbone bids (to build something new) and for some topic drift ... tangential reference here http://www.garlic.com/~lynn/2007l.html#14 Sueprconductors and computing -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Questions to the list
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] (Tom Schmidt) writes: > It seems to me that only a few years ago (and probably in many of the > hundreds of recycled "I remember when..." threads lately) we were, as > a group, lamenting that we were all getting older and there was little > new blood being introduced to the mainframe. We were also having > 'sour grapes' discussions about workload moving off the mainframe and > companies abandoning the mainframe altogether. one of the issues ... especially on the usenet side ... was that at the start of each new semester ... there would be some new flurry of homework questions ... from individuals taking a computer class and getting possibly their first exposure to terminals and online infrastructures. as online infrastructures started to permeate the whole culture ... it is now possible to find homework questions happening all thru the yr. there is some balance between answering questions where the asker has actually made some attempt to learn something ... or is using the list in lieu of having to learn anything. misc. past posts mentioning the homework issue: http://www.garlic.com/~lynn/2000.html#28 Homework: Negative side of MVS? http://www.garlic.com/~lynn/2000.html#32 Homework: Negative side of MVS? http://www.garlic.com/~lynn/2001.html#70 what is interrupt mask register? http://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore? http://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement http://www.garlic.com/~lynn/2001c.html#25 Use of ICM http://www.garlic.com/~lynn/2001k.html#75 Disappointed http://www.garlic.com/~lynn/2001l.html#0 Disappointed http://www.garlic.com/~lynn/2001m.html#0 7.2 Install "upgrade to ext3" LOSES DATA http://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? (or: Help, my brain hurts) http://www.garlic.com/~lynn/2002c.html#2 Need article on Cache schemes http://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders? http://www.garlic.com/~lynn/2002f.html#40 e-commerce future http://www.garlic.com/~lynn/2002g.html#83 Questions about computer security http://www.garlic.com/~lynn/2002l.html#58 Spin Loop? http://www.garlic.com/~lynn/2002l.html#59 Spin Loop? http://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card? http://www.garlic.com/~lynn/2002o.html#35 META: Newsgroup cliques? http://www.garlic.com/~lynn/2003d.html#27 [urgent] which OSI layer is SSL located? http://www.garlic.com/~lynn/2003j.html#34 Interrupt in an IBM mainframe http://www.garlic.com/~lynn/2003m.html#41 Issues in Using Virtual Address for addressing the Cache http://www.garlic.com/~lynn/2003m.html#46 OSI protocol header http://www.garlic.com/~lynn/2003n.html#4 Dual Signature http://www.garlic.com/~lynn/2004f.html#43 can a program be run withour main memory ? http://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in http://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2004h.html#47 very basic quextions: public key encryption http://www.garlic.com/~lynn/2004k.html#34 August 23, 1957 http://www.garlic.com/~lynn/2005h.html#1 Single System Image questions http://www.garlic.com/~lynn/2005m.html#50 Cluster computing drawbacks http://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs? 
http://www.garlic.com/~lynn/2006b.html#2 Mount a tape http://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries http://www.garlic.com/~lynn/2006l.html#54 Memory Mapped I/O Vs I/O Mapped I/O http://www.garlic.com/~lynn/2007f.html#16 more shared segment archeology -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well. Anne & Lynn Wheeler <[EMAIL PROTECTED]> writes: > long ago and far away, this was one of the battles getting the > compare&swap instruction into 370 architecture. test&set had been around > in the 60s and was used for 360/65 multiprocessor support with global > kernel spin-locks (set the lock and everybody else spins, until the > lock is cleared). > ... > somewhat implicit in a lot of compare&swap uses is that there can be > concurrent threads executing in the same instruction sequences > simultaneously. the initial foray into POK attempting to get compare&swap > justified was unsuccessful, in large part because the favorite son > operating system felt that test&set was just fine for multiprocessor > support (the 360/65 smp global spin lock paradigm). the challenge was to > create justification for the compare&swap instruction that was applicable to > single processor deployment. Thus were born the programming notes that > can be found in the principles of operation describing how the "atomic" > characteristics of compare&swap can be leveraged in a single processor > environment for multithreaded applications (like DBMS) ... these aren't > necessarily concurrent multithreads ... but multiple threads that might > be interrupted and so atomic operations can be applied to both > simultaneous concurrent multithread operation as well as possibly > non-simultaneous (but interruptible) multithreaded operation. re: http://www.garlic.com/~lynn/2007l.html#24 Is Parallel Programming Just Too Hard? misc. past posts mentioning smp and/or compare&swap instruction http://www.garlic.com/~lynn/subtopic.html#smp in the mid-70s i was working on a 5-way SMP implementation. it involved one of the lower-end 370 processor designs ... and was moving lots of features into microcode. for one reason or another that project got killed; misc. past posts discussing the effort http://www.garlic.com/~lynn/subtopic.html#bounce shortly after that got killed, there was another project started for a 16-way smp involving higher-end processors. we even co-opted the spare time of some of the processor engineers furiously attempting to complete the 3033. in general, most people that looked at it thought it was a really great idea ... until it came to the attention of the head of POK that it would possibly be decades before the POK favorite son operating system would be able to support the machine. At which time, the 3033 engineers were instructed to get their noses back to the grindstone and some people were invited to never show up in POK again. misc. past references: http://www.garlic.com/~lynn/95.html#5 Who started RISC? (was: 64 bit Linux?) http://www.garlic.com/~lynn/95.html#6 801 http://www.garlic.com/~lynn/95.html#11 801 & power/pc http://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP? http://www.garlic.com/~lynn/2000.html#86 Ux's good points. http://www.garlic.com/~lynn/2001e.html#5 SIMTICS http://www.garlic.com/~lynn/2001h.html#33 D http://www.garlic.com/~lynn/2002i.html#82 HONE http://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth http://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth http://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters http://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] 
http://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
http://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram
http://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
http://www.garlic.com/~lynn/2005k.html#45 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005m.html#48 Code density and performance?
http://www.garlic.com/~lynn/2005p.html#39 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006l.html#30 One or two CPUs - the pros & cons
http://www.garlic.com/~lynn/2006n.html#37 History: How did Forth get its stacks?
http://www.garlic.com/~lynn/2006r.html#22 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006t.html#7 32 or even 64 registers for x86-64?
http://www.garlic.com/~lynn/2006t.html#9 32 or even 64 registers for x86-64?
http://www.garlic.com/~lynn/2007g.html#17 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007g.html#44 1960s: IBM mgmt mistrust of SLT for ICs?
http://www.garlic.com/~lynn/2007g.html#57 IBM to the PCM market(the sky is falling!!!the sky is falling!!)

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
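as an aside ... a minimal C sketch (purely illustrative, using C11 atomics rather than 370 assembler) of the programming-notes pattern mentioned above: a fetch/modify/compare&swap retry loop that is safe whether the competing threads run simultaneously on multiple processors or are merely interleaved (interrupted) on a single processor:

#include <stdatomic.h>

/* illustrative only, not IBM code: atomically update a shared value
 * with a compare&swap retry loop.  the same loop works for truly
 * concurrent multiprocessor threads and for single-processor threads
 * that might be interrupted between the fetch and the store. */
static _Atomic long shared_counter;

void add_to_counter(long delta)
{
    long old = atomic_load(&shared_counter);
    /* if another thread changed the value between the load and the
     * compare&swap, the exchange fails, 'old' is refreshed with the
     * current value, and the update is retried */
    while (!atomic_compare_exchange_weak(&shared_counter, &old, old + delta))
        ;
}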
Re: Is Parallel Programming Just Too Hard?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Howard Brazee) writes:
> Depending on one's definition of parallel programming, we have been
> doing it to various degrees since before they started off-loading the
> paper-tape reading to the paper-tape reader. Video cards on PCs are
> powerful computers that work in parallel with the program's main
> logic. Our operating systems have allowed us to run payroll and
> accounts payable at the same time, and central databases have expanded
> on this ability.

lots of comments about this in the past couple yrs ... the gist being that technology in support of parallel programming has not really changed in at least the past 20yrs ... and as a result, actual use has been limited to very specialized implementations.

there has been lots of stuff in multiprogramming and multithreading in the same processor complex (single processor and/or shared-memory multiprocessor). multiprogramming was the system managing lots of independent & different tasks on the same processor complex; multithreading was a program itself managing different tasks. application implementation of multithreading isn't necessarily very pervasive. some number of DBMS implementations have used things like the transaction model ... to provide "independent" operations ... that they multithread. In this sense, DBMS kernels are somewhat more like operating system kernels ... highly specialized ... and there aren't a lot of end-users implementing their own DBMS kernels.

in a lot of multiprocessor kernel support ... a "global" kernel lock was used ... which only allowed a single thread to be executing in the kernel at a time. it was a somewhat painful experience for a lot of kernel implementations to make the transition from a single thread (at a time) executing in a multiprocessor kernel to multiple concurrent threads executing in the same parts of the kernel.

long ago and far away, this was one of the battles getting the compare&swap instruction into 370 architecture. test&set had been around in the 60s and was used for 360/65 multiprocessor support with global kernel spin-locks (set the lock and everybody else spins, until the lock is cleared). at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
Charlie had been doing a lot of work on fine-grain locking for the cp67 kernel and invented the compare&swap instruction (mnemonic chosen because "CAS" are charlie's initials). misc. past posts mentioning SMPs and/or compare&swap
http://www.garlic.com/~lynn/subtopic.html#smp

somewhat implicit in a lot of compare&swap uses is that there can be concurrent threads executing in the same instruction sequences simultaneously. the initial foray into POK attempting to get compare&swap justified was unsuccessful, in large part because the favorite son operating system felt that test&set was just fine for multiprocessor support (the 360/65 smp global spin lock paradigm). the challenge was to create a justification for the compare&swap instruction that was applicable to single processor deployment. Thus were born the programming notes that can be found in principles of operation describing how the "atomic" characteristics of compare&swap can be leveraged in a single processor environment for multithreaded applications (like DBMS) ... these aren't necessarily concurrent multithreads ...
but multiple threads that might be interrupted, and so atomic operations can be applied to both simultaneous concurrent multithread operation as well as possibly non-simultaneous (but interruptible) multithreaded operation.

the advance of concurrent, parallel technology into loosely-coupled/cluster deployments is even more limited than the proliferation in tightly-coupled environments. we had done a scalable distributed lock manager in support of our ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp
and the "medusa" cluster-in-a-rack activity ... old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
and somewhat referenced in these postings about an old meeting
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
... but again ... it tended to be directly used by a very limited amount of specialized code ... there wasn't a huge number of different applications directly implementing the semantics of highly parallel operation (for either tightly-coupled or loosely-coupled configurations).

a couple recent posts in another thread/fora on the subject
http://www.garlic.com/~lynn/2007l.html#15 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007l.html#19 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007l.html#23 John W. Backus, 82, Fortran developer, dies

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
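for contrast with the compare&swap sketch earlier ... a sketch of the 360/65-style test&set global kernel spin-lock (again illustrative C11 atomics, not any actual kernel's code):

#include <stdatomic.h>

/* illustrative only: a global kernel spin-lock built on test&set
 * semantics.  one lock covers the entire kernel -- set the lock and
 * everybody else spins until the lock is cleared -- i.e. the
 * single-thread-in-the-kernel bottleneck described above. */
static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void kernel_enter(void)
{
    while (atomic_flag_test_and_set(&kernel_lock))
        ;                       /* spin until the holder clears it */
}

void kernel_exit(void)
{
    atomic_flag_clear(&kernel_lock);
}

with one lock covering the whole kernel, adding processors mostly adds spinning ... which is what made the later transition to fine-grain locking both necessary and painful.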
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

Dan Espen <[EMAIL PROTECTED]> writes:
> No, in the sense that C was pretty close to the assembler for
> the machine UNIX was first developed on.
>
> It would have been interesting if K&R went on to implement UNIX
> on a 360 type machine. I assume they would have extended the
> C language and library functions to better exploit the hardware.

re: http://www.garlic.com/~lynn/2007l.html#18 Non-Standard Mainframe Language?

when i was an undergraduate, i had added tty/ascii terminal support to cp67 ... in the process of doing that ... ran into some deficiencies of the 2702 terminal controller ... which somewhat prompted a project to build our own clone controller out of an interdata/3 ... which had a somewhat 360-like instruction set. recent post making reference:
http://www.garlic.com/~lynn/2007l.html#11 John W. Backus, 82, Fortran developer, dies

there was some article blaming us (at least in part) for the clone controller business. lots of past posts mentioning clone controllers
http://www.garlic.com/~lynn/subtopic.html#360pcm

all the references I've seen regarding redoing C & UNIX for portability make mention of (later) interdata machines (again 360-like):

The First Unix Port
http://www.usenix.org/publications/library/proceedings/usenix98/invited_talks/miller.ps
Version 6 Unix - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Version_6_Unix
Interdata_v6
http://minnie.tuhs.org/UnixTree/Interdata_v6/
Anecdotes
http://doi.ieeecomputersociety.org/10.1109/MAHC.1989.10025
The Daemon, the GNU and the Penguin - Chapter 2 and 3
http://www.icims.csl.uiuc.edu/~lheal/doc/dgp/chapter02_03.html

of course these machines were quite a bit after the interdata/3

Interdata 7/32 and 8/32
http://en.wikipedia.org/wiki/Interdata_7/32

in the above ... references to perkin-elmer having bought interdata and quite a bit of success in "defense and aerospace" industries. people i've talked to since have said that a lot of the sales involved attaching to ibm mainframes ... and the channel attach board didn't appear to have been redesigned since our original (still wire-wrap).

Interdata Simulator Configuration
http://simh.trailing-edge.com/interdata.html

from above: Interdata was founded in the mid 1960's. It produced a family of 16b minicomputers loosely modeled on the IBM 360 architecture. Microprogramming allowed a steady increase in the functionality of successive models.
* Interdata 3
* Interdata 4 (autoload, floating point)
* Interdata 5 (list processing, microcoded automatic I/O channel)
* Interdata 70, 74, 80
* Interdata 6/16, 7/16
* Interdata 8/16, 8/16e (double precision floating point, extended memory)
In the early 1970's, Interdata was purchased by Perkin-Elmer. In 1974, it introduced one of the first 32b minicomputers, the 7/32. Several generations of 32b systems followed:
* Interdata 7/32
* Interdata 8/32
* Perkin-Elmer 3205, 3210, 3220
* Perkin-Elmer 3250
Interdata was spun out of Perkin-Elmer as Concurrent Computer Corporation.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Paul Gilmartin) writes:
> Much of this is due to the reliance on null-terminated strings, which
> are not peculiar to C, but are rooted in the UNIX continuum between
> applications programming and systems programming.

i've actually had this discussion with some of the people involved; the null terminator allowed one byte of overhead for arbitrary lengths ... somewhat the y2k phenomena of economizing on bytes ... as opposed to two bytes of explicit length overhead (for lengths up to 64k). x-over post on the subject from today in another fora
http://www.garlic.com/~lynn/2007l.html#11 John W. Backus, 82, Fortran developer, dies

lots of posts on the subject of exploits/failures related to the characteristic
http://www.garlic.com/~lynn/subintegrity.html#overflow

i had been monitoring some of the statistics thru the 90s ... but more recently there were much fewer ... so i had to do some analysis myself ... looking at some of the exploit databases. part of the problem (which i complained about a number of times) was that many of the descriptions were somewhat freeform and could be ambiguous. there were some more recent announcements that they would be attempting to better classify/categorize exploits.

old posts with some attempts at classification/categorization based on analysis of some of the exploit databases
http://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
http://www.garlic.com/~lynn/2004j.html#58 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2005c.html#32 [Lit.] Buffer overruns

and this one mentions an article in early 2005 quoting a NIST study that came up with statistics similar to those I had come up with nearly a year earlier:
http://www.garlic.com/~lynn/2005b.html#43 [Lit.] Buffer overruns

note part of the mentioned efforts was in support of my merged security taxonomy and glossary ... some notes here:
http://www.garlic.com/~lynn/index.html#glosnote

past posts in this thread:
http://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#73 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#74 Non-Standard Mainframe Language?

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
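a small C sketch of the tradeoff (purely illustrative; the struct layout and names are hypothetical): one byte of terminator versus two bytes of explicit length, and why the implicit-length form is the overflow exposure:

#include <stdint.h>
#include <string.h>

/* null-terminated: one byte of overhead for any length, but the
 * length is implicit -- a copy routine has no idea when the
 * destination is full, the classic overflow exposure. */
char dst[16];
void unchecked_copy(const char *src)
{
    strcpy(dst, src);           /* overruns dst if strlen(src) >= 16 */
}

/* explicit length (e.g. a two-byte prefix, lengths up to 64k): the
 * length travels with the data, so bounds can always be checked. */
struct lstring { uint16_t len; char data[64]; };

int checked_copy(struct lstring *d, const struct lstring *s)
{
    if (s->len > sizeof d->data)
        return -1;              /* refuse rather than overrun */
    memcpy(d->data, s->data, s->len);
    d->len = s->len;
    return 0;
}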
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Shmuel Metz , Seymour J.) writes:
> There were also assists for OS/VS1 and MVS/SE, to say nothing of the
> infamous ECPS:VSE.

re:
http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#70 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#73 Non-Standard Mainframe Language?

on the 145/148 ... for lots of typical kernel instruction paths ... there was approximately a one-for-one byte translation from 370 into microcode. the 145 allowed for scavenging part of processor memory for microcode. that was changed in the 148 ... and after the OS/VS1 microcode assist was done for the 148 ... there were only 6kbytes left in dedicated 148 microcode storage for VM370 ECPS. This somewhat contributed to us doing a significantly better job of choosing the highest-used vm370 instruction paths (vis-a-vis the vs1 effort) for dropping into microcode. basically all the instruction paths thru the vm370 kernel were carefully profiled and then ranked per use ... and then the top 6k bytes were chosen for migration to 148 m'code ... refs:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

an MVS/SE microcode assist would have been much more problematical since it applied to the high-end horizontal m'code machines ... where there was already nearly one-for-one between 370 execution and microcode execution; it wouldn't have been possible to pick up the 10:1 improvement found in the low & mid-range microcoded machines (and in some cases, trying to do a straight-forward one-for-one movement of blocks of 370 instructions to horizontal microcode would actually increase processing time).

The place where the vm370 virtual machine microcode assists worked across the whole machine line ... was being able to eliminate the priv. op interrupts into the vm370 kernel ... i.e. 370 supervisor state instructions, when running in a special "virtual machine" problem state, were executed directly. This wasn't a one-for-one movement of kernel instructions to microcode instructions ... this was the total elimination of the interrupt processing, context switch, and a bunch of other kernel overhead stuff. This was further demonstrated when Amdahl implemented hypervisor in their "macrocode" ... a sort of 370 instruction set running in a special hardware mode. The response was PR/SM on the 3090 (which was a much more difficult undertaking since it was native horizontal microcode programming).

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
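a sketch of the selection process described above (purely illustrative; the data structures and names are hypothetical, not the actual profiling tooling): rank the profiled kernel paths by use and migrate the highest-used ones until the byte budget is exhausted:

#include <stdio.h>
#include <stdlib.h>

/* illustrative only: profile-driven greedy selection -- sort kernel
 * instruction paths by execution count, then pick from the top until
 * the dedicated microcode storage budget runs out. */
struct kpath { const char *name; long exec_count; int bytes; };

static int by_use_desc(const void *a, const void *b)
{
    const struct kpath *x = a, *y = b;
    return (y->exec_count > x->exec_count) - (y->exec_count < x->exec_count);
}

void pick_for_microcode(struct kpath *paths, int n, int budget)
{
    qsort(paths, n, sizeof *paths, by_use_desc);
    for (int i = 0; i < n; i++) {
        if (paths[i].bytes > budget)
            break;              /* stop when the next-ranked path no longer fits */
        budget -= paths[i].bytes;
        printf("drop into m'code: %s (%d bytes)\n", paths[i].name, paths[i].bytes);
    }
}

e.g. for the 148 case above, this would be called with a budget of 6*1024 bytes.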
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Shmuel Metz , Seymour J.) writes:
> Dont forget APL Shared Variables.

re:
http://www.garlic.com/~lynn/2007k.html#67 Non-Standard Mainframe Language?
http://www.garlic.com/~lynn/2007k.html#70 Non-Standard Mainframe Language?

yes, i skipped over some of the intermediate folklore. there was a big uproar created with the phili science center apl\360 group when the cambridge science center did cms\apl and added "system services calls" ... the claim was that it totally violated the spirit of the apl language ... although as i've referenced before, removing the trivial workspace size limits of apl\360 and providing access to system services (like being able to do file read/writes) ... really opened up being able to use cms\apl for real world applications. Eventually, APL shared variables was the effective come-back from the APL language purists: a way to access system services ... w/o corrupting the purity of the APL language.

misc. past posts mentioning apl shared variables
http://www.garlic.com/~lynn/97.html#4 Mythical beasts (was IBM... mainframe)
http://www.garlic.com/~lynn/2002c.html#30 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
http://www.garlic.com/~lynn/2003n.html#8 The IBM 5100 and John Titor
http://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
http://www.garlic.com/~lynn/2004n.html#37 passing of iverson
http://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005n.html#50 APL, J or K?
http://www.garlic.com/~lynn/2006o.html#13 The SEL 840 computer

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Craddock, Chris) writes:
> I never actually met a processor with the (mythical?) APL assist
> feature. However, I did write mountains of APL throughout the 1980s. APL
> was always thought of as a resource hog. IMHO it could be very efficient
> or grotesque, depending on your data structures and algorithms. If you
> wrote programs in the style of 3GLs, it was typically a dog.

re: http://www.garlic.com/~lynn/2007k.html#65 Non-Standard Mainframe Language?

the APL microcode assist was done for the 370/145 by the Palo Alto Science Center ... sort of as part of their doing APL\CMS. It was also made available on the 370/148. As mentioned, the Cambridge Science Center had originally done the port of APL\360 to CMS for CMS\APL (i.e. back when CMS was the Cambridge Monitor System for CP67; as part of the morph of CP67 to VM370, CMS was "renamed" the Conversational Monitor System).

the APL microcode assist gave APL\CMS applications on a 370/145 about the same processor thruput as the same APL\CMS application running on a 370/168 (nearly a ten times thruput increase). When the HONE datacenters were consolidated in silicon valley (actually across the back parking lot from the Palo Alto Science Center) ... they looked at whether to take any of their APL-application-intensive workload and move it off 168s to 145s. The problem was that not only were their applications quite processor intensive (where the 145 microcode assist would have given about equivalence) but also real storage and I/O intensive (which would have severely degraded if they had moved from a 168 to a 145/148).

For nearly 15 yrs, i provided highly modified/customized versions of the cp67 kernel and later vm370 kernels for HONE (and a large number of other internal datacenters). I also periodically got involved in reviewing various APL applications from a performance tuning standpoint ... aka the majority of the applications that provided world-wide support for sales and marketing were implemented in APL running on CMS. Lots of past posts mentioning HONE and/or APL
http://www.garlic.com/~lynn/subtopic.html#hone

Eventually there was an APL language development group formed in STL which picked up APL\CMS responsibility as well as making it available on MVS ... renaming it VS\APL (and later APL2).

Trivia ... in the early to mid 80s, the manager of the APL group in STL transferred to Palo Alto to head up a new group doing a port of BSD Unix to 370. I got to attend some of the design sessions and also helped obtain a 370 C compiler for the effort. Before that specific implementation shipped, the group had their BSD porting efforts retargeted to the PC/RT ... eventually shipping "AOS" (the C compiler vendor being used had to retarget the backend from 370 to ROMP). misc. past posts mentioning 801/ROMP as well as risc, Iliad, RIOS, rs/6000, power/pc, etc
http://www.garlic.com/~lynn/subtopic.html#801

The APL microcode assist was not made available on other processors. The 145/148 microcode engine was a vertical microcode engine, executing approx. 10 microcode instructions per 370 instruction (some of the modern i86-based 370 simulators have similar ratio characteristics). The 370/165 had a horizontal microcode engine ... and achieved an avg. of 2.1 machine cycles per 370 instruction ... which was improved to 1.6 machine cycles per 370 instruction (and hit nearly 1:1 with the 3033).
Since 370 instructions were executing very close to hardware speed on the high-end processors ... there was frequently very little performance benefit in doing a 1-for-1 translation of 370 instructions into native hardware. The exception was the virtual machine microcode assists on the high-end processors ... however these weren't a 1:1 translation of 370 instructions to native instructions. In the virtual machine assists, the instruction emulation for privileged instructions was modified to directly perform the privileged operation while in problem state (but according to "virtual machine" execution rules ... sort of a "3rd" machine state). This avoided the interrupt into the kernel, having to save registers and other state change overhead, redecoding the operation in software and performing the necessary operation, and then switching back to virtual machine problem state execution.

In addition to stuff like the APL microcode assist done for the 145/148 ... there was the VM kernel assist "ECPS" done for both the 138 & 148. This took about 6k bytes of vm370 kernel code and moved it into the native microcode of the machines (again getting about a 10:1 thruput improvement). some old posts about how we went about selecting what parts of the kernel code were moved into microcode (some of the initial work involved help from some of the same people involved in doing the 145 APL microcode assist)
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
http://www.garlic.com/~lynn/94.
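a back-of-envelope sketch using only the ratios quoted above (purely illustrative), showing why a 1-for-1 drop of a code path into native microcode is worth roughly the native-to-370 ratio itself ... ~10x on the vertical-microcode low end, essentially nothing once the hardware is already near one cycle per 370 instruction:

#include <stdio.h>

/* illustrative only: if the engine spends R native cycles (or
 * microinstructions) per emulated 370 instruction, a 1-for-1
 * translation of a path into native code can gain at most ~R times. */
int main(void)
{
    struct { const char *engine; double ratio; } m[] = {
        { "145/148 (vertical microcode)", 10.0 },
        { "165 (horizontal microcode)",    2.1 },
        { "168 (improved)",                1.6 },
        { "3033",                          1.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-30s ~%.1fx max gain from 1-for-1 translation\n",
               m[i].engine, m[i].ratio);
    return 0;
}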
Re: Non-Standard Mainframe Language?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Gerhard Postpischil) writes:
> Back in the seventies I was in charge of the systems group at a
> service bureau. One of our customers was from a local university,
> running an APL application that tracked students vs. classes, and a
> few other things. It was a gold mine - whenever it ran, the CPU went
> 100% busy and stayed that way for a long time. The same thing written
> in another language might have cost one or two percent as much.

recent posts mentioning the world-wide hone system
http://www.garlic.com/~lynn/2007.html#30 V2X2 vs. Shark (SnapShot v. FlashCopy)
http://www.garlic.com/~lynn/2007.html#31 V2X2 vs. Shark (SnapShot v. FlashCopy)
http://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007b.html#51 Special characters in passwords was Re: RACF - Password rules
http://www.garlic.com/~lynn/2007d.html#39 old tapes
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#41 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#12 FBA rant
http://www.garlic.com/~lynn/2007f.html#20 Historical curiosity question
http://www.garlic.com/~lynn/2007g.html#31 Wylbur and Paging
http://www.garlic.com/~lynn/2007i.html#34 Internal DASD Pathing
http://www.garlic.com/~lynn/2007i.html#77 Sizing CPU
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate
http://www.garlic.com/~lynn/2007k.html#60 3350 failures

HONE ("hands-on") started out in the US with cp67 ... sort of to allow branch office SEs to have "hands-on" with various operating systems (running in virtual machines). prior to the 23jun69 unbundling announcement
http://www.garlic.com/~lynn/subtopic.html#unbundle
a lot of SEs got much of their "hands-on" experience in their customer accounts. after the unbundling announcement, SE time was being charged for ... and not a lot of customers were interested in paying to have SEs learn.

however, the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
in addition to doing virtual machines, cms, inventing GML (precursor to SGML, HTML, XML, etc)
http://www.garlic.com/~lynn/subtopic.html#sgml
and the internal networking technology
http://www.garlic.com/~lynn/subnetwork.html#internalnet
which was also used in bitnet and earn
http://www.garlic.com/~lynn/subnetwork.html#bitnet
also did a port of apl\360 to cms (cms\apl). apl\360 had its own monitor, scheduler, workspace swapping, terminal handler, etc ... all of which could be discarded in the port for cms\apl. also, in moving from the 16kbyte (sometimes 32kbyte) "real" workspace sizes to CMS ... where the workspace size could be all of virtual memory ... the whole way that APL managed storage had to be reworked (the real-storage strategy resulted in enormous page thrashing under demand paging). part of cms\apl was also the ability to access system services (things like read/write files) ... something that apl didn't previously have. the combination of really large workspace sizes and the access to system services ... opened up APL for a lot of real-world problems. A lot of modeling of all kinds was done ... as well as a lot of stuff that these days is implemented with spreadsheets. One of the early "big" APL uses (at cambridge) was by a number of business planners from corporate hdqtrs in armonk.
they forwarded a tape to cambridge with all of the most sensitive corporate customer business data ... and would do a significant amount of business modeling and planning. this created an interesting "security" scenario for the service at cambridge, since there were a lot of non-employees using the system from various educational institutions in the cambridge area. one instance is this slightly related DNS trivia topic drift ... more than a decade before DNS
http://www.garlic.com/~lynn/2007k.html#33 Even worse than Unix

Before long there were a significant number of CMS\APL applications written that supported sales & marketing and were deployed on the HONE system ... effectively taking over its whole use for sales & marketing (and eliminating the original "hands-on" use for SEs). Before long, sales couldn't even submit customer orders that hadn't been processed by some CMS\APL application.

HONE transitioned from cp67 to a vm370-based platform and from cms\apl to apl\cms (enhancements done by the palo alto science center ... including the 370/145 apl microcode assist) ... and clones of the (US) HONE system were sprouting up all over the world (some of the early ones i even got to handle ... like when EMEA hdqtrs moved from the US to Paris). lots of other posts mentioning HONE and/or APL
http://www.garlic.com/~lynn/subtopic.html#hone

in the mid-70s, the US HONE datacenters were consolidated in silicon valley. The large customer base (all US sales and marketing) drove the requirement for large disk farm
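a sketch of why the apl\360 real-storage strategy fell apart under demand paging (purely illustrative C; it assumes the allocate-fresh-until-exhausted behavior implied above, with names and sizes hypothetical):

#include <stddef.h>

/* illustrative only: every assignment allocates fresh storage, and
 * storage is reused only after a garbage collection when the
 * workspace is exhausted.  in a 16kbyte real workspace that is cheap;
 * with a multi-megabyte virtual-memory workspace the allocator
 * marches through -- and touches -- every page of the workspace
 * before any page is reused, which is the page-thrashing pattern. */
#define WS_SIZE (16u * 1024u * 1024u)   /* e.g. a 16MB virtual workspace */
static char workspace[WS_SIZE];
static size_t next_free = 0;

void *apl_assign(size_t nbytes)
{
    if (next_free + nbytes > WS_SIZE) {
        /* only now garbage collect/compact and begin reusing storage;
         * by this point every page in the workspace has been referenced */
        next_free = 0;          /* stand-in for the real GC/compaction */
    }
    void *cell = &workspace[next_free];
    next_free += nbytes;
    return cell;
}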
Re: [META] Is WaveMind spamming entire IBM-MAIN readership?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. me too -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> I don't remember any 3350 problems as this device type was my first
> performance charge with doing internal pathing/volume placement based on
> performance metrics at timeshare NVIP back in the early 80's. I do however
> remember the 3350 to 3380 migration project which turned ugly when we were
> informed, post migration, that we needed plenum replacements on our 3380
> E's/K's. IIRC the plenum connected to 2 different HDA's but I could be
> wrong on this point. Lots of long weekends with the media folks deciding
> how to play musical chairs with strings of DASD.

re:
http://www.garlic.com/~lynn/2007k.html#58 3350 failures
http://www.garlic.com/~lynn/2007k.html#60 3350 failures

old email
http://www.garlic.com/~lynn/2007b.html#email800402
talks about a problem executing HIO/HDV to a 3350 when the (3880) control unit was busy (which may have also existed in the 3830) ... and the software fix was to change the i/o supervisor to "not do that".

one of the 3350 to 3380 migration issues was that the 3380 had more data under each arm (an increase proportionally in excess of any improvement in 3380 thruput). internally we had some performance monitoring and modeling tools that would identify what 3350 data to move to what 3380, and some recommendations (in heavily loaded environments) to leave a 3380 10-20 percent empty/idle (in order to have the same thruput as the 3350 configuration).

there was a facetious proposal (even discussed at SHARE) for a special 3380 "feature" in the 3880 controller ... that would define extra-priced 3380 drives that were "faster" (by reducing the number of cylinders that could be accessed). This was for shops where the administrators couldn't resist completely filling a 3380 as a cost effective measure (however, they would feel comfortable paying extra for a feature that prevented them from completely filling a 3380).

misc. past posts about getting to play dasd engineer in the disk engineering lab (bldg. 14) and the disk product test lab (bldg. 15)
http://www.garlic.com/~lynn/subtopic.html#disk

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
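some back-of-envelope arithmetic on the data-under-each-arm point (purely illustrative; the capacities and service times are rough from-memory approximations, and the access rate is a crude avg-seek-plus-rotational-latency estimate ignoring transfer and queueing):

#include <stdio.h>

/* illustrative only: capacity per arm grew far faster than the
 * access rate, so the accesses/sec available per megabyte fell --
 * hence the advice above to leave heavily loaded 3380s partly empty. */
struct dasd { const char *name; double mb_per_arm, service_ms; };

int main(void)
{
    struct dasd d[] = {
        { "3350",   317.0, 25.0 + 8.4 },  /* approx avg seek + rotational latency */
        { "3380K", 1890.0, 16.0 + 8.3 },
    };
    for (int i = 0; i < 2; i++) {
        double ios = 1000.0 / d[i].service_ms;
        printf("%-6s %6.0f MB/arm  ~%4.1f IO/sec  %.3f IO/sec per MB\n",
               d[i].name, d[i].mb_per_arm, ios, ios / d[i].mb_per_arm);
    }
    return 0;
}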
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Ted MacNEIL) writes:
> I started with 3330's.
> And, I remember when STK (STC) showed us their first ICEBERG, and the
> size of the device was that of a standard conference table, weighed
> less and had the capacity of an order (or 2) of magnitude larger than
> the 3330 farm I first tended.
>
> That 3330 farm was less than 50 GB, and we were considered a medium to large
> site.
> (Running on a 3081-D)

re: http://www.garlic.com/~lynn/2007k.html#58 3350 failures

the silicon valley area had at least three fairly large vm370 datacenters with good sized disk farms ... there was SLAC (lots of data collection from the accelerator) and both Tymshare and the internal HONE operation ... the latter two being extensive online, timesharing services
http://www.garlic.com/~lynn/subtopic.html#timeshare

HONE had somewhat started out with a number of cp67 installations to provide "hands-on" virtual machine use for branch office SEs. recent reference:
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate

It then transitioned to vm370 and lots of online, interactive APL applications supporting sales & marketing ... i.e. at some point early in the 370 timeframe, there was a transition where machine orders couldn't even be submitted w/o first being processed by a HONE configuration. In the mid-70s, the various (US) HONE datacenters were consolidated in the silicon valley area ... with what was possibly the largest single-system configuration in the world at the time (a large datafarm with load balancing across a large number of processors in a loosely-coupled configuration).
http://www.garlic.com/~lynn/subtopic.html#hone

another large datacenter in silicon valley was Lockheed's DIALOG (online library titles and abstracts, which has gone thru a number of owners since that time) ... which had something like 300(?) 3330-clones in their data farm (the basic service was MVS ... but lots of it was run under VM ... on clone processors).

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: 3350 failures
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] writes: > IBM 3880 - 1 or 2 (IBM DASD and Control Units Facts Folder G520-3075-2) old email with reference to finding bug in the 3350 support in 3880 controller (and possibility of same bug having been in 3830 controller) http://www.garlic.com/~lynn/2007b.html#email800402 in this recent post http://www.garlic.com/~lynn/2007b.html#28 What is "command reject" trying to tell me? above post also references early 3880 MVS RAS testing in this post http://www.garlic.com/~lynn/2007.html#2 "The Elements of Programming Style" -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another "migration" from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

re:
http://www.garlic.com/~lynn/2007k.html#18 Another "migration" from the mainframe
http://www.garlic.com/~lynn/2007k.html#19 Another "migration" from the mainframe
http://www.garlic.com/~lynn/2007k.html#22 Another "migration" from the mainframe

lots of old posts mentioning working on our ha/cmp product ... and/or some loosely-coupled activity (dating back to at least when my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture)
http://www.garlic.com/~lynn/subtopic.html#hacmp

when she was in POK, in charge of loosely-coupled architecture, she developed the peer-coupled shared data architecture, which didn't see a lot of uptake (except for ims-hotstandby) until parallel sysplex
http://www.garlic.com/~lynn/subtopic.html#shareddata

and a little followup on the financial industry using blades/grids at the high-end ... including enabling them to do "real-time" trading algorithms ... something that they hadn't been able to do before

Lots of Blade Server articles
http://www.eweek.com/category2/0,1874,1658862,00.asp

IBM Grid Computing Solutions - financial industry
http://www-03.ibm.com/grid/solutions/by_industry/financial.shtml

from above:

Optimized Analytic Infrastructure

Drive higher margins and revenue growth by:
* Turning creative quantitative insight into tested, supported, tradable investment products
* Achieving near real-time and intraday decision making for on demand valuations and complex risk reporting in minutes
* Reducing costs and enhancing standardization of existing analytic infrastructures

... snip ...

Grid Computing for Financial Services 2007
http://www.iqpc.com/cgi-bin/templates/genevent.html?topic=233&event=12603&;

from above: "70% of firms now deploy enterprise grids in key business areas" to maximise CPU power and business capability – but are you really driving its development forward in your IT strategy? ... snip ...

Grid computing: Accelerating the search for revenue and profit for financial markets
http://www-03.ibm.com/industries/financialservices/doc/content/landing/973028103.html

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another "migration" from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

re:
http://www.garlic.com/~lynn/2007k.html#18 Another "migration" from the mainframe
http://www.garlic.com/~lynn/2007k.html#19 Another "migration" from the mainframe

as an aside ... all the vendors that support server farms, at least in the form of blade/GRID technology, have done significant amounts of work on energy and cooling efficiency. in fact, cooling was one of the major concerns working on ha/cmp scaleup, related to these old emails
http://www.garlic.com/~lynn/lhwemail.html#medusa

small sample re blade/grid energy efficiency

CIO Challenge: Energy Efficiency
http://www.wallstreetandtech.com/showArticle.jhtml?articleID=192202377

from above: Like Fidelity, Wachovia has been targeting energy efficiency initiatives for the last 12 to 18 months or so. The initial spur was a move by the firm's traders in January to a new building in New York. "The three trading floors have relatively low ceiling heights, where it was not possible to put in a lot of air distribution, which meant we had to think creatively to ensure we don't have an unhealthy environment for the traders," ... snip ...

and:

IBM Unveils New Energy-Efficient Blades
http://www.hpcwire.com/hpc/1379801.html
IBM to focus on energy efficiency
http://www.bladewatch.com/2007/05/10/ibm-to-focus-on-energy-efficiency/
Blade innovations highlight energy efficiency opportunities
http://www.it-director.com/business/content.php?cid=9135
IBM defends blades' energy efficiency
http://green.itweek.co.uk/2006/10/ibm_defends_bla.html
IBM Data Center and Facilities Strategy Services - high density computing data center readiness assessment
http://www-935.ibm.com/services/us/index.wss/offering/its/a1025605#spotligt-data-center

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another "migration" from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well.

[EMAIL PROTECTED] (Howard Brazee) writes:
> I'd also like to see that with politics, but politician's pay is
> power, and that cannot be deferred. But it is more important for a
> President's policy to work for the long term than for a CEO's policy.
> Neither should be measured by "not on my term", but both should be
> measured by leaving a legacy that lasts. Build for the future -
> when the other guys are running the company/country.

i think that the comptroller general has suggested something similar for legislation ... that metrics be defined for any claimed benefits justifying some legislation ... and if the results fail to meet the metrics ... poof, it's gone. however, in speeches that the comptroller general has given over the past yr or so on some aspects of medicare/medicaid legislation ... he has commented that he doesn't believe any congressman in the last fifty yrs has been capable of middle-school arithmetic.

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Another "migration" from the mainframe
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main as well. [EMAIL PROTECTED] (Richards.Bob) writes: > I wonder if they will reveal the costs of extra hw/sw for > high-availability and business continuance associated with this > migration. Probably not. when we were doing the ha/cmp product, they were one of the customers we called on http://www.garlic.com/~lynn/subtopic.html#hacmp also, I had been asked to write a section in the corporate continuous availability strategy document. most of my writing got pulled because both Rochester and POK complained (that at the time, they couldn't meet what we were doing in ha/cmp). it was also in this period that we coined the terms "disaster survivability" and "geographic survivability" (to differentiate from disaster/recovery) http://www.garlic.com/~lynn/subtopic.html#available for other drift, old email about what we had been doing about ha/cmp scaleup http://www.garlic.com/~lynn/lhwemail.html#medusa -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Data Areas Manuals to be dropped
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] writes:
> What is 'OCO' ?
> Thanks

there were several "OCO-wars" threads/discussions on vmshare. it was somewhat more of an issue in vm culture ... since source maintenance was standard and there was an extensive amount of customer source changes available from the waterloo/share library. tymshare had provided online computer conferencing for share, called vmshare, starting in the mid-70s; in part because tymshare offered a vm-based commercial timesharing service (later tymshare would also offer pcshare online computer conferencing) ... lots/misc posts about vm-based online commercial timesharing services
http://www.garlic.com/~lynn/subtopic.html#timeshare

vmshare archives here:
http://vm.marist.edu/~vmshare/

following is a sample from doing a search on "oco war" in browse mode against all memo, note, and prob files.

OCO's 10th b'day
http://vm.marist.edu/~vmshare/browse?fn=OCO:BDAY&ft=MEMO
OCO & source business
http://vm.marist.edu/~vmshare/browse?fn=OCOBUS&ft=MEMO

the issue sort of dates back to the 23jun69 unbundling announcement and the start of charging for application software. misc. past posts mentioning unbundling
http://www.garlic.com/~lynn/subtopic.html#unbundle

initially only application software was charged for ... using the excuse that kernel/system software was required for operation of the hardware. later, various circumstances precipitated the decision to start charging for system software. this was about the time that my resource manager was going to be released ... so it got selected to be the initial guinea pig for policy/practices related to kernel software charging.
http://www.garlic.com/~lynn/subtopic.html#fairshare

the change to charging for software eventually also evolved into Object-Code-Only (i.e. OCO, no source). recent post also mentioning that the 23jun69 unbundling announcement resulted in starting to charge for SE services:
http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Lean and Mean: 150,000 U.S. layoffs for IBM?
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Rick Fochtman) writes:
> I went "the other way" in the Army, finding myself in a tropical
> climate where the main diet was rice, with a few vegetables and maybe
> a water buffalo, when the gunner "forgot" to clear the M2-HB before
> attempting to "clean" it. In my (somewhat limited) experience, the
> officers were there to get some combat time, and pay, into their
> service records, as a stepping stone to further promotion. Net
> result: the sergeants ran the Army while the officers "fought the
> battles" and collected the medals. Needless to say, I have a very low
> opinion of high-flying "leaders" that don't share the hardships of
> those who are "led".

re: http://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?

for other boyd drift, he did a yr running the datacenter at "spook base" ... possibly the largest in the world, at least in the fareast at the time; the claim was that it represented a $2.5B windfall for IBM.
http://www.garlic.com/~lynn/2005t.html#1 Dangerous Hardware
http://www.garlic.com/~lynn/2005t.html#2 Dangerous Hardware
http://www.garlic.com/~lynn/2005t.html#5 Dangerous Hardware
http://www.garlic.com/~lynn/2006u.html#51 Where can you get a Minor in Mainframe?
http://www.garlic.com/~lynn/2007g.html#13 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007i.html#4 John W. Backus, 82, Fortran developer, dies

and for other drift ... boyd's briefing on "organic design for command and control" ... past posts mentioning the briefing
http://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
http://www.garlic.com/~lynn/2000e.html#34 War, Chaos, & Business (web site), or Col John Boyd
http://www.garlic.com/~lynn/2002q.html#33 Star Trek: TNG reference
http://www.garlic.com/~lynn/2002q.html#34 Star Trek: TNG reference
http://www.garlic.com/~lynn/2003h.html#46 employee motivation & executive compensation
http://www.garlic.com/~lynn/2004k.html#25 Timeless Classics of Software Engineering
http://www.garlic.com/~lynn/2004l.html#34 I am an ageing techy, expert on everything. Let me explain the
http://www.garlic.com/~lynn/2004q.html#69 Organizations with two or more Managers
http://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005e.html#2 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005e.html#3 Computerworld Article: Dress for Success?
http://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War
http://www.garlic.com/~lynn/2006q.html#41 was change headers: The Fate of VM - was: Re: Baby MVS???
http://www.garlic.com/~lynn/2007c.html#25 Special characters in passwords was Re: RACF - Password rules
http://www.garlic.com/~lynn/2007i.html#35 ANN: Microsoft goes Open Source

and as before ... lots of other past posts mentioning Boyd, as well as other URLs from around the web
http://www.garlic.com/~lynn/subboyd.html

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Help settle a job title/role debate
The following message is a courtesy copy of an article that has been posted to bit.listserv.ibm-main,alt.folklore.computers as well.

[EMAIL PROTECTED] (Mark Zelden) writes:
> Today, I see these two used interchangeably. I've even seen title changes
> from one to the other in the same shop when HR decided to review
> everyone's job titles and such.
>
> I still prefer plain ol' Systems Programmer over all the titles I've had.

re: http://www.garlic.com/~lynn/2007j.html#65 Help settle a job title/role debate

later, i would have battles to have no title at all ... and to have my business cards w/o any title (I would sometimes joke that if it was necessary to get things done based on a title ... then it was time to retire ... i should be able to convince people to do something based on its being the right thing to do). the other battle was being one of the first to have an email address on a business card.

... there is an old joke about a person that used to fly a kite from the roof of the 705 bldg. in pok on april 1st ... who had pencils made up with his name ... "Elect lab director, raises or promotions, but not both". old references:
http://www.garlic.com/~lynn/2000b.html#60 South San Jose (was Tysons Corner, Virginia)
http://www.garlic.com/~lynn/2000d.html#38 S/360 development burnout?
http://www.garlic.com/~lynn/2006m.html#22 Patent #6886160

this is slightly different than the Boyd line effectively about neither raises nor promotions ... recent ref:
http://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?

which is more along the lines of references to some number of locations (across a variety of large bureaucratic organizations) being primarily mushroom factories (i.e. most of the people are kept in the dark and fed ...)

-- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html