timothy.sipp...@us.ibm.com (Timothy Sipples) writes:
> I've known HP in its sales pitches to make a lot of fuss about
> endianness as reason why it would be oh-so-difficult for an HP-UX
> customer to move to Linux on X86, or for a Linux X86 customer to move
> to (or add) Linux on System z, depending on their sales
> situation. Then hundreds/thousands of HP customers moved without
> endianness difficulty, and many more will follow. The IT community
> figured out how to flip bit order a long time ago. Before System/360,
> even. That's not to say endianness isn't a problem...for HP. If they
> want to move HP-UX to a little endian CPU, they'll have a lot of
> investment to do (as Sun did for Solaris X86). For non-OS
> kernel/non-compiler programmers, which is the vast majority of us,
> it's not a real-world problem. In fact, endianness is one of the least
> interesting issues when porting from one CPU to another.
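as an aside, two different "flips" tend to get conflated in these discussions: endianness is byte order within a multi-byte word, while the 2702 line-scanner issue described below was bit order within a single byte. a minimal sketch of both transformations (my illustration, not anything from the original code; function names are made up):

```python
def swap32(x):
    """Endianness: reorder the 4 bytes of a 32-bit word
    (big-endian <-> little-endian)."""
    return int.from_bytes(x.to_bytes(4, "big"), "little")

def revbits8(b):
    """Bit order within one byte: the 2702-style mismatch, where the
    leading bit off the line lands in the low-order bit position
    instead of the high-order one (or vice versa)."""
    r = 0
    for _ in range(8):
        r = (r << 1) | (b & 1)   # shift out low bit, shift into result
        b >>= 1
    return r

# byte swap leaves bits within each byte untouched:
print(hex(swap32(0x12345678)))   # 0x78563412
# bit reversal operates inside a single byte:
print(hex(revbits8(0x01)))       # 0x80
```

note the two are independent: fixing one does nothing for the other, which is why a port can survive an endianness change untouched and still be bitten by a line-scanner bit-order convention.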
re:
http://www.garlic.com/~lynn/2011f.html#7 New job for mainframes: Cloud platform
http://www.garlic.com/~lynn/2011f.html#9 New job for mainframes: Cloud platform

when I was an undergraduate in the 60s, some people from the science center came out and installed (virtual machine) cp67 on the 360/67 (as an alternative to tss/360). cp67 had "automatic" terminal identification for 1052 & 2741 ... playing games switching the line-scanners with the 2702 SAD command. The univ. had a bunch of TTY/ascii terminals ... so I set out to add TTY/ascii support, also doing automatic terminal identification. It almost worked ... being able to dynamically identify 1052, 2741, & TTY for directly/fixed connected lines. I had wanted to have a single dial-up number for all terminals ... with "hunt group" ... allowing any terminal to come in on any port. The problem was that the 2702 took a short-cut and hardwired the line speed for each port.

This somewhat prompted the univ. to do a clone controller effort ... to dynamically do both automatic terminal & automatic speed determination (reverse engineer the channel interface, build a channel interface board, and program a minicomputer to emulate the 2702). Two early "bugs" stick in my mind:

1) the 360/67 had the high-speed location 80 timer ... and if the channel interface board held the memory bus for two consecutive timer tics (a timer tic to update location 80 was stalled because the memory bus was held ... and the next timer tic happened while the previous timer tic was still pending), the processor would stop & redlight

2) initial data into memory was all garbage. Turns out I had overlooked bit memory order. The minicomputer convention was that the leading (byte) bit off the line went into the high-order (byte) bit position ... while the 2702 line-scanner convention was to place the leading (byte) bit off the line in the low-order (byte) bit position. While the minicomputer was placing data into memory in line-order bit position ...
each byte had the bit order reversed compared to the 2702 convention (the standard 360 ascii translate tables that I had borrowed from BTAM handled the 2702 bit-reversed bytes).

... later, four of us got written up for being responsible for some portion of the mainframe clone controller business. A few years ago, in a large datacenter, I ran across a descendant of our original box, handling a major portion of the dial-up POS cardswipe terminals in the country (some claim that it still used the original channel interface board design).

I had posted the same cloud item in a number of linkedin mainframe groups
http://www.garlic.com/~lynn/2011f.html#6 New job for mainframes: Cloud platform
http://www.garlic.com/~lynn/2011f.html#8 New job for mainframes: Cloud platform
also
http://lnkd.in/F6X_3Y
which also refers to the internal (virtual machine) HONE system being the largest "cloud" operation in the 70s & 80s. In the mid-70s, the US HONE datacenters were consolidated in silicon valley ... where it created the largest single-system-image cluster operation. Then in the early 80s, because of earthquake concerns, it was replicated in Dallas ... with distributed, load-balancing and fall-over between Dallas & Palo Alto ... eventually growing to 28 3081s. misc.
past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

HONE is also discussed in this linkedin Greater IBM (current & former IBM employee) group thread about APL software preservation (a major portion of the HONE applications supporting worldwide sales & marketing had been implemented in APL; there were numerous HONE clones all around the world):
http://www.garlic.com/~lynn/2011e.html#83 History of APL -- Software Preservation Group
http://www.garlic.com/~lynn/2011f.html#3 History of APL -- Software Preservation Group
http://www.garlic.com/~lynn/2011f.html#10 History of APL -- Software Preservation Group
http://www.garlic.com/~lynn/2011f.html#11 History of APL -- Software Preservation Group

another cloud-related item:

Facebook Opens Up Its Hardware Secrets; The social network breaks an unwritten rule by giving away plans to its new data center--an action it hopes will make the Web more efficient
http://www.technologyreview.com/computing/37317/?p1=MstRcnt&a=f

for "HONE"-related trivia ... the silicon valley HONE datacenter: do an online satellite map search for Facebook's silicon valley address ... the bldg. next to it was the HONE datacenter (although the bldg. has a different occupant now).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html